---
license: apache-2.0
---
Posting these Qwen-14B-Chat models, quantized to GGUF format for use with llama.cpp, in response to a user request.

However, the importance matrix used during quantization was derived from English-only training data, so I have no idea how these models will perform in Chinese.
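If you want to sanity-check Chinese output yourself, below is a minimal sketch that loads one of these GGUF files with the llama-cpp-python bindings and sends it a Chinese prompt. The file name, context size, and generation settings are assumptions, not part of this repo; substitute whichever quantization you actually download.

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and that a quantized file such as
# "qwen-14b-chat-q4_k_m.gguf" (hypothetical name) has been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-14b-chat-q4_k_m.gguf",  # hypothetical file name
    n_ctx=4096,           # assumed context window
    n_gpu_layers=-1,      # offload all layers to GPU if one is available
    chat_format="chatml", # Qwen-Chat models use the ChatML template
)

# Spot-check Chinese output quality, since the imatrix calibration
# data was English-only.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        # "Please give a brief introduction to the giant panda, in Chinese."
        {"role": "user", "content": "请用中文简单介绍一下大熊猫。"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

If the Chinese output looks noticeably worse than what the original FP16 model produces, the English-only imatrix would be the likely cause; feedback either way is welcome.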