# Llama.cpp imatrix quantization of google/gemma-2-2b
- Original Model: google/gemma-2-2b
- Original dtype: FP32 (float32)
- Quantized by: llama.cpp b3496
- IMatrix dataset: here
- Status: ✅ Available
- Link: here
### Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| gemma-2-2b.Q8_0.gguf | Q8_0 | 2.78GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q6_K.gguf | Q6_K | 2.15GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q4_K.gguf | Q4_K | 1.71GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q3_K.gguf | Q3_K | 1.46GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q2_K.gguf | Q2_K | 1.23GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| gemma-2-2b.F32.gguf | F32 | 10.46GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.BF16.gguf | BF16 | 5.24GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.FP16.gguf | F16 | 5.24GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q8_0.gguf | Q8_0 | 2.78GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q6_K.gguf | Q6_K | 2.15GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q5_K.gguf | Q5_K | 1.92GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q5_K_S.gguf | Q5_K_S | 1.88GB | ✅ Available | ⚪ Static | 📦 No |
| gemma-2-2b.Q4_K.gguf | Q4_K | 1.71GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q4_K_S.gguf | Q4_K_S | 1.64GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ4_NL.gguf | IQ4_NL | 1.63GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ4_XS.gguf | IQ4_XS | 1.57GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q3_K.gguf | Q3_K | 1.46GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q3_K_L.gguf | Q3_K_L | 1.55GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q3_K_S.gguf | Q3_K_S | 1.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ3_M.gguf | IQ3_M | 1.39GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ3_S.gguf | IQ3_S | 1.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ3_XS.gguf | IQ3_XS | 1.31GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ3_XXS.gguf | IQ3_XXS | 1.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q2_K.gguf | Q2_K | 1.23GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.Q2_K_S.gguf | Q2_K_S | 1.17GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ2_M.gguf | IQ2_M | 1.09GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ2_S.gguf | IQ2_S | 1.03GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ2_XS.gguf | IQ2_XS | 1.00GB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ2_XXS.gguf | IQ2_XXS | 943.19MB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ1_M.gguf | IQ1_M | 873.80MB | ✅ Available | 🟢 IMatrix | 📦 No |
| gemma-2-2b.IQ1_S.gguf | IQ1_S | 832.16MB | ✅ Available | 🟢 IMatrix | 📦 No |
## Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:

```
huggingface-cli download legraphista/gemma-2-2b-IMat-GGUF --include "gemma-2-2b.Q8_0.gguf" --local-dir ./
```
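The same download can also be done from Python via the huggingface_hub library that the CLI above wraps. A minimal sketch (the filename is just the Q8_0 example from the table; swap in whichever quant you want):

```python
# Minimal sketch: fetch one quant file with the huggingface_hub Python API.
# Assumes `pip install -U huggingface_hub` has been run.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/gemma-2-2b-IMat-GGUF",
    filename="gemma-2-2b.Q8_0.gguf",  # any filename from the tables above
    local_dir="./",
)
print(path)  # local path to the downloaded GGUF
```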
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download legraphista/gemma-2-2b-IMat-GGUF --include "gemma-2-2b.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
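From Python, the equivalent pattern-based download can be expressed with snapshot_download; a minimal sketch, assuming the split chunks live under a `gemma-2-2b.Q8_0/` folder as in the CLI example:

```python
# Minimal sketch: download every chunk of a split quant in one call.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/gemma-2-2b-IMat-GGUF",
    allow_patterns=["gemma-2-2b.Q8_0/*"],  # only the chunks of this quant
    local_dir="./",
)
```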
## Inference

```
llama.cpp/main -m gemma-2-2b.Q8_0.gguf --color -i -p "prompt here"
```
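If you would rather stay in Python, the same GGUF loads through the llama-cpp-python bindings; a minimal sketch, with illustrative (not tuned) parameters:

```python
# Minimal sketch: run the quantized model via llama-cpp-python
# (pip install llama-cpp-python). n_ctx and max_tokens are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-2b.Q8_0.gguf", n_ctx=4096)
out = llm("prompt here", max_tokens=128)
print(out["choices"][0]["text"])
```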
## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `gemma-2-2b.Q8_0`)
3. Run `gguf-split --merge gemma-2-2b.Q8_0/gemma-2-2b.Q8_0-00001-of-XXXXX.gguf gemma-2-2b.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.
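For scripted setups, the same merge can be driven from Python by shelling out to `gguf-split`; a minimal sketch, assuming `gguf-split` is on your PATH (`merge_gguf` is a hypothetical helper, not part of llama.cpp):

```python
# Minimal sketch: locate the first chunk and hand it to `gguf-split --merge`.
import glob
import os
import subprocess

def merge_gguf(chunks_dir: str, output_path: str) -> None:
    # gguf-split only needs the first chunk; it finds the rest on its own.
    chunks = sorted(glob.glob(os.path.join(chunks_dir, "*-00001-of-*.gguf")))
    if not chunks:
        raise FileNotFoundError(f"no first chunk found in {chunks_dir}")
    subprocess.run(["gguf-split", "--merge", chunks[0], output_path], check=True)

merge_gguf("gemma-2-2b.Q8_0", "gemma-2-2b.Q8_0.gguf")
```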
---

Got a suggestion? Ping me @legraphista!