Joseph717171 committed
Commit 9abcd1c • Parent: fc914fc
Update README.md

README.md CHANGED
@@ -1,4 +1,4 @@
 Custom GGUF quants of arcee-ai's [Llama-3.1-SuperNova-Lite-8B](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite), where the output tensors are quantized to Q8_0 while the embeddings are kept at F32. Enjoy!

-Update: For some reason, the model was initially smaller than Llama-3.1-8B-Instruct after quantizing.
+Update: For some reason, the model was initially smaller than Llama-3.1-8B-Instruct after quantizing. This has since been rectified: if you want the most intelligent and capable quantized GGUF version of Llama-3.1-SuperNova-Lite-8.0B, use the OF32.EF32.IQuants.
 The original OQ8_0.EF32.IQuants will remain in the repo for those who want to use them. Cheers!
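For context, quants with Q8_0 output tensors and F32 token embeddings can be produced with llama.cpp's `llama-quantize` tool, which supports per-tensor type overrides. This is a hedged sketch of the kind of invocation involved, not the exact command used for this repo; the input/output filenames and the IQ4_XS base type are placeholder assumptions.

```shell
# Sketch only: filenames and the base quant type (IQ4_XS) are assumptions.
# --token-embedding-type keeps the embedding tensor at F32,
# --output-tensor-type forces the output tensor to Q8_0,
# and the trailing argument sets the base quantization for all other tensors.
./llama-quantize \
  --token-embedding-type f32 \
  --output-tensor-type q8_0 \
  Llama-3.1-SuperNova-Lite-8B-F32.gguf \
  Llama-3.1-SuperNova-Lite-8B-OQ8_0.EF32.IQ4_XS.gguf \
  IQ4_XS
```

The repo's naming convention (e.g. `OQ8_0.EF32`) encodes exactly these overrides: O for the output tensor type, E for the embedding type.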