legraphista committed on
Commit 2e0bb66
1 Parent(s): 75dda34

Upload README.md with huggingface_hub

Files changed (1): README.md (+152, -0)
---
base_model: google/gemma-2-9b-it
extra_gated_button_content: Acknowledge license
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review
  and agree to Google’s usage license. To do this, please ensure you’re logged in
  to Hugging Face and click below. Requests are processed immediately."
inference: false
library_name: gguf
license: gemma
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- conversational
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# gemma-2-9b-it-IMat-GGUF
_Llama.cpp imatrix quantization of google/gemma-2-9b-it_

Original Model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3248](https://github.com/ggerganov/llama.cpp/releases/tag/b3248)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| gemma-2-9b-it.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| gemma-2-9b-it.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| gemma-2-9b-it.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| gemma-2-9b-it.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0/*" --local-dir ./
# see the FAQ for merging GGUFs
```
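
If you prefer Python over the CLI, the same files can be fetched with the `huggingface_hub` library. A minimal sketch (the Q8_0 pattern below is an example; adjust it to the quant you want):
```
# Minimal sketch: fetch the Q8_0 quant (single file or split chunks) with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/gemma-2-9b-it-IMat-GGUF",
    allow_patterns=["gemma-2-9b-it.Q8_0*"],  # matches both the single file and the chunk folder
    local_dir=".",
)
```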

---

## Inference

### Simple chat template
```
<bos><start_of_turn>user
{user_prompt}<end_of_turn>
<start_of_turn>model
{assistant_response}<end_of_turn>
<start_of_turn>user
{next_user_prompt}<end_of_turn>

```
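
As an illustration, here is a small hypothetical Python helper (not part of this repo) that renders a message list into the turn format above:
```
# Hypothetical helper: build a Gemma-format prompt from a list of chat turns.
def format_gemma_chat(messages):
    # messages: list of {"role": "user" | "model", "content": str}
    prompt = "<bos>"
    for msg in messages:
        prompt += f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n"
    # A trailing model turn cues the model to generate its reply.
    prompt += "<start_of_turn>model\n"
    return prompt

print(format_gemma_chat([{"role": "user", "content": "Hello!"}]))
```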

### Llama.cpp
```
llama.cpp/main -m gemma-2-9b-it.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
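
If you would rather call the model from Python, the third-party `llama-cpp-python` bindings can load the same GGUF. A minimal sketch (assuming `pip install llama-cpp-python`):
```
# Minimal sketch: run the quantized GGUF through the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-9b-it.Q8_0.gguf", n_ctx=4096)

# create_chat_completion applies the chat template stored in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is an importance matrix?"}]
)
print(response["choices"][0]["message"]["content"])
```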

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per the hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `gemma-2-9b-it.Q8_0`)
3. Run `gguf-split --merge gemma-2-9b-it.Q8_0/gemma-2-9b-it.Q8_0-00001-of-XXXXX.gguf gemma-2-9b-it.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split (a scripted version is sketched below)
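
The same merge can also be scripted; a minimal sketch, assuming `gguf-split` is on your `PATH` and the chunks sit in `gemma-2-9b-it.Q8_0/`:
```
# Minimal sketch: find the first chunk and merge the split GGUF with gguf-split.
import glob
import subprocess

# Chunks are named ...-00001-of-XXXXX.gguf, so a lexicographic sort yields the first one.
first_chunk = sorted(glob.glob("gemma-2-9b-it.Q8_0/gemma-2-9b-it.Q8_0-*.gguf"))[0]
subprocess.run(
    ["gguf-split", "--merge", first_chunk, "gemma-2-9b-it.Q8_0.gguf"],
    check=True,
)
```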

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!