DeepMount00 committed • e015e6d • Parent(s): 8f9d32f
Update README.md

README.md CHANGED
@@ -19,9 +19,14 @@ The Mistral-7B-v0.1 model is a transformer-based model that can handle a variety
|:----------------------| :--------------- | :-------------------- | :------- |
| 0.6734 | 0.5466 | 0.5334 | 0.5844 |

-## Quantized Version
-
+**Quantized 4-Bit Version Available**
+
+A quantized 4-bit version of the model is available. Reducing the precision of the model's weights to 4 bits can speed up inference and substantially cut memory usage, which makes this version particularly useful for deployment on devices with limited compute or memory.
+
+For more details and to access the model, visit the following link: [Mistral-Ita-7b-GGUF 4-bit version](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF).
+
+---
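As a back-of-the-envelope illustration of the memory claim in the added paragraph, weight footprint scales with bits per parameter. This is a sketch only: the 7e9 parameter count is approximate, and it ignores the per-block scale/zero-point overhead that real GGUF files carry, as well as activations and the KV cache.

```python
# Rough weight-memory estimate: bytes = params * bits_per_param / 8.
# 7e9 is an approximate parameter count for a 7B model (assumption);
# actual GGUF files are slightly larger due to quantization metadata.

def weight_footprint_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GiB, ignoring quantization overhead."""
    return n_params * bits_per_param / 8 / 2**30

fp16_gib = weight_footprint_gib(7e9, 16)  # ~13.0 GiB
q4_gib = weight_footprint_gib(7e9, 4)     # ~3.3 GiB
print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {q4_gib:.1f} GiB")
```

The 4x reduction is why the 4-bit GGUF fits on machines where the full-precision checkpoint does not; files in this format are loaded by llama.cpp-compatible runtimes.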

## How to Use
How to utilize my Mistral for Italian text generation
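The hunk cuts off before the usage snippet itself. A minimal sketch of what generation with this model typically looks like follows; the repo id `DeepMount00/Mistral-Ita-7b` (inferred from the GGUF link, not confirmed by this commit) and the sampling settings are assumptions on my part.

```python
# Sketch only: assumes the checkpoint id DeepMount00/Mistral-Ita-7b
# and the standard Hugging Face transformers API.

def generate_italian(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate an Italian continuation of `prompt` with the full-precision model."""
    # Imports live inside the function so the sketch can be read or imported
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "DeepMount00/Mistral-Ita-7b"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate_italian("Scrivi una breve descrizione di Roma.")` would download the checkpoint on first use, so expect a sizable one-time fetch; for low-memory machines, the 4-bit GGUF linked above is the lighter option.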