DeepMount00 committed • Commit e615170 • Parent(s): fe4182a
Update README.md

README.md CHANGED
@@ -13,4 +13,26 @@ language:
| [mistal-Ita-7b-q4_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
| [mistal-Ita-7b-q5_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB | large, very low quality loss - recommended |

<!-- README_GGUF.md-provided-files end -->
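Any file in the table above can also be downloaded individually, for example to use with other GGUF-compatible runtimes. A minimal sketch using `huggingface_hub`; picking the Q4_K_M file here is just an example, choose whichever quantization suits your hardware:

```python
from huggingface_hub import hf_hub_download

# Download a single quantized GGUF file from this repo into the local
# Hugging Face cache; hf_hub_download returns the local file path.
model_path = hf_hub_download(
    repo_id="DeepMount00/Mistral-Ita-7b-GGUF",
    filename="mistal-Ita-7b-q4_k_m.gguf",
)
print(model_path)
```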
## How to Use

How to use my Mistral model for Italian text generation:

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU.
# Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Mistral-Ita-7b-GGUF",
    model_file="mistal-Ita-7b-q3_k_m.gguf",
    model_type="mistral",
    gpu_layers=0,
)

# Question: "Write a python function that computes the average of these values"
domanda = """Scrivi una funzione python che calcola la media tra questi valori"""
contesto = """
[-5, 10, 15, 20, 25, 30, 35]
"""

# Wrap the request in Mistral's [INST] ... [/INST] instruction format.
system_prompt = ''
prompt = domanda + "\n" + contesto
B_INST, E_INST = "[INST]", "[/INST]"
prompt = f"{system_prompt}{B_INST}{prompt}\n{E_INST}"

print(llm(prompt))
```
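Calling `llm(prompt)` with no extra arguments uses ctransformers' default sampling settings. A minimal follow-up sketch that streams the output with explicit generation parameters; the values are illustrative, not tuned for this model:

```python
# Stream the completion token by token instead of waiting for the full string.
# max_new_tokens, temperature and stream=True are standard ctransformers
# generation arguments; the values here are illustrative, not tuned.
for chunk in llm(prompt, max_new_tokens=256, temperature=0.7, stream=True):
    print(chunk, end="", flush=True)
```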