qwp4w3hyb committed
Commit 1efcf9c
1 Parent(s): c539e4e

Update README.md

Files changed (1)
  1. README.md +13 -4
README.md CHANGED
@@ -13,10 +13,19 @@ base_model: google/gemma-2-9b-it

  # Quant Infos

- - f32 gguf is from the official kaggle repo
- - imatrix quants are running and will be uploaded one by one
- - you will need the gemma2 llama.cpp [PR](https://github.com/ggerganov/llama.cpp/pull/8156) applied to your llama.cpp
- - current quants are based on the f32 gguf provided by google directly; I will reconvert from the huggingface repo once the dust settles to get better gguf metadata
+ - quants are done with an importance matrix for reduced quantization loss
+ - currently requantizing ggufs & imatrix from bf16 for "optimal" (i.e. minimal) accuracy loss
+ - the initial version was based on the f32 gguf provided by google
+ - WIP: the new version should have better metadata, as it's quantized from scratch with llama.cpp
+ - wide coverage of gguf quant types, from Q8\_0 down to IQ1\_S
+ - experimental custom quant types (see the example invocation after the diff)
+   - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski)
+   - `_XL` with `--output-tensor-type bf16 --token-embedding-type bf16` (same size as `_L`, in theory even higher numerical accuracy)
+ - quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3259](https://github.com/ggerganov/llama.cpp/releases/tag/b3259)
+ - imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski):
+ ```
+ ./imatrix -c 512 -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
+ ```

  # Original Model Card
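
The `_L`/`_XL` bullets in the diff above give the tensor-type overrides but not the full command. As a minimal sketch, this is roughly what producing an `_XL` quant could look like with llama.cpp's quantize tool, reusing `$model_name` and the imatrix from the command in the diff; the `llama-quantize` binary name, the `IQ4_XS` target type, and the output filename are illustrative assumptions, not taken from the commit.

```
# Hypothetical "_XL" invocation (not from the commit): IQ4_XS weights guided
# by the importance matrix, with the output and token-embedding tensors kept
# at bf16 instead of being quantized.
./llama-quantize \
    --imatrix $model_name.imatrix \
    --output-tensor-type bf16 \
    --token-embedding-type bf16 \
    $model_name-bf16.gguf $model_name-IQ4_XS_XL.gguf IQ4_XS

# An "_L" quant would pass f16 to both type flags instead of bf16.
```

Since f16 and bf16 are both 2-byte types, `_L` and `_XL` files come out the same size; bf16 keeps the wider exponent range of f32, which is the basis of the "in theory even higher numerical accuracy" claim in the list.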