Part of the GGUFs collection. I take requests, feel free to drop me a line in the community posts.
Mistral 7B v0.2 iMat GGUF quantized from fp16 with love.
Legacy quants (e.g., Q8, Q5_K_M) in this repo have all been enhanced with importance matrix calculation. These quants show improved KL-divergence over their static counterparts.
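For intuition, KL-divergence here measures how closely a quantized model's next-token probabilities track the fp16 reference; lower is better. A minimal sketch of the metric with made-up toy distributions (not measurements from this repo):

```python
import numpy as np

# Hypothetical next-token probability distributions over a tiny vocabulary:
# p = fp16 reference model, q = quantized model.
p = np.array([0.70, 0.20, 0.05, 0.05])
q = np.array([0.65, 0.23, 0.07, 0.05])

# KL(p || q) = sum_i p_i * log(p_i / q_i); smaller values mean the quant
# stays closer to the reference distribution.
kl = np.sum(p * np.log(p / q))
print(f"KL(p || q) = {kl:.5f} nats")
```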
All files have been tested for your safety and convenience. No need to clone the entire repo; just pick the quant that's right for you.
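If you only need one file, the huggingface_hub client can fetch a single quant directly. A minimal sketch, assuming placeholder values for `repo_id` and `filename` (substitute the actual repository and the .gguf file you want):

```python
from huggingface_hub import hf_hub_download

# Download just one quant instead of cloning the whole repo.
path = hf_hub_download(
    repo_id="your-username/Mistral-7B-v0.2-iMat-GGUF",  # placeholder repo id
    filename="mistral-7b-v0.2-Q5_K_M.gguf",             # placeholder quant file
)
print(f"Downloaded to {path}")
```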
For more information on the latest iMatrix quants, see this PR: https://github.com/ggerganov/llama.cpp/pull/5747