|
--- |
|
license: gemma |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- google |
|
- gemma |
|
- gguf |
|
- imatrix |
|
base_model: google/gemma-2-9b-it |
|
--- |
|
|
|
# Quant Infos |
|
|
|
- quantized with an importance matrix to reduce quantization loss

- currently requantizing the GGUFs & imatrix from bf16 for "optimal" accuracy

  - the initial version was based on the f32 GGUF provided by Google

  - WIP: the new version should have better metadata, as it is quantized from scratch with llama.cpp
|
- Wide coverage of different gguf quant types from `Q8_0` down to `IQ1_S`
|
- experimental custom quant types (see the example quantize invocation below)

  - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski)

  - `_XL` with `--output-tensor-type bf16 --token-embedding-type bf16` (same size as `_L`, in theory even higher numerical accuracy)
|
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3259](https://github.com/ggerganov/llama.cpp/releases/tag/b3259) |
|
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski). |
|
```
./imatrix -c 512 -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
```
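
For reference, a minimal sketch of the conversion and quantization steps behind the custom `_L` / `_XL` variants. It assumes llama.cpp's `llama-quantize` tool (named `quantize` in older builds), the HF-to-GGUF conversion script (its exact name varies across llama.cpp versions), and `Q4_K_M` as an example base quant type; the actual base types and file names differ per quant in this repo:

```
# convert the HF checkpoint to a bf16 base gguf (script name varies by llama.cpp version)
python convert_hf_to_gguf.py ./gemma-2-9b-it --outtype bf16 --outfile $model_name-bf16.gguf

# standard imatrix quant (example: Q4_K_M)
./llama-quantize --imatrix $model_name.imatrix \
    $model_name-bf16.gguf $model_name-Q4_K_M.gguf Q4_K_M

# _L variant: keep output & token-embedding tensors in f16
./llama-quantize --imatrix $model_name.imatrix \
    --output-tensor-type f16 --token-embedding-type f16 \
    $model_name-bf16.gguf $model_name-Q4_K_M_L.gguf Q4_K_M

# _XL variant: keep output & token-embedding tensors in bf16
./llama-quantize --imatrix $model_name.imatrix \
    --output-tensor-type bf16 --token-embedding-type bf16 \
    $model_name-bf16.gguf $model_name-Q4_K_M_XL.gguf Q4_K_M
```

Keeping the token-embedding and output tensors in 16-bit adds a little size but leaves the most sensitive tensors unquantized; since the source weights are bf16, storing them as bf16 rather than f16 avoids any conversion loss, which is presumably the reasoning behind the `_XL` accuracy claim above.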
|
|
|
# Original Model Card |
|
|
|
TODO |