Update README.md
README.md
CHANGED
@@ -15,31 +15,31 @@ quantized_by: bartowski
 pipeline_tag: text-generation
 ---
 
-## Llamacpp Quantizations of Mistral-quiet-star
+## Llamacpp Quantizations of Mistral-quiet-star-demo
 
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2536">b2536</a> for quantization.
 
-Original model: https://huggingface.co/liminerity/Mistral-quiet-star
+Original model: https://huggingface.co/liminerity/Mistral-quiet-star-demo
 
 Download a file (not the whole branch) from below:
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Mistral-quiet-star-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+| [Mistral-quiet-star-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
 | [Mistral-quiet-star-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
-| [Mistral-quiet-star-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
-| [Mistral-quiet-star-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
-| [Mistral-quiet-star-Q5_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
-| [Mistral-quiet-star-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight. |
-| [Mistral-quiet-star-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
-| [Mistral-quiet-star-IQ4_NL.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, new method of quanting. |
-| [Mistral-quiet-star-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
-| [Mistral-quiet-star-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
-| [Mistral-quiet-star-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
-| [Mistral-quiet-star-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
-| [Mistral-quiet-star-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
-| [Mistral-quiet-star-IQ3_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
-| [Mistral-quiet-star-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
-| [Mistral-quiet-star-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-GGUF/blob/main/Mistral-quiet-star-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+| [Mistral-quiet-star-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+| [Mistral-quiet-star-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+| [Mistral-quiet-star-Q5_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+| [Mistral-quiet-star-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight. |
+| [Mistral-quiet-star-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+| [Mistral-quiet-star-IQ4_NL.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, new method of quanting. |
+| [Mistral-quiet-star-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
+| [Mistral-quiet-star-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+| [Mistral-quiet-star-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+| [Mistral-quiet-star-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+| [Mistral-quiet-star-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
+| [Mistral-quiet-star-IQ3_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
+| [Mistral-quiet-star-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+| [Mistral-quiet-star-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-quiet-star-demo-GGUF/blob/main/Mistral-quiet-star-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
 
 Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
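
The "Download a file (not the whole branch)" step can also be scripted. A minimal sketch using the huggingface_hub Python client; the Q4_K_M row is an arbitrary pick, and any repo_id/filename pair from the table above works the same way:

```python
from huggingface_hub import hf_hub_download

# Fetch one quant file instead of cloning the whole branch.
# repo_id and filename come straight from a row of the table above.
model_path = hf_hub_download(
    repo_id="bartowski/Mistral-quiet-star-demo-GGUF",
    filename="Mistral-quiet-star-Q4_K_M.gguf",
    local_dir=".",  # place the file in the current directory
)
print(model_path)  # local path to the downloaded .gguf
```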
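
As for the "Using llama.cpp release b2536 for quantization" line: the usual flow at that release is to convert the original model to an fp16 GGUF with the repo's convert script, then run the quantize tool once per target type. A hypothetical sketch only; the fp16 filename and binary path are assumptions, and "Q4_K_M" is one of the quant-type names from the table:

```python
import subprocess

# Assumed inputs: a quantize binary built from llama.cpp b2536, and an
# fp16 GGUF produced beforehand by the repo's convert script.
subprocess.run(
    [
        "./quantize",                      # llama.cpp quantization tool
        "Mistral-quiet-star-f16.gguf",     # input: full-precision GGUF (assumed name)
        "Mistral-quiet-star-Q4_K_M.gguf",  # output: quantized file
        "Q4_K_M",                          # target quant type
    ],
    check=True,
)
```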