---
license: mit
language:
- it
---

# Mistral-Ita-7B GGUF

<!-- README_GGUF.md-provided-files start -->

## Provided files

| Name | Quant method | Bits | Size | Use case |
|------|--------------|------|---------|--------------------------------------------------|
| [mistal-Ita-7b-q3_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB | very small, high quality loss |
| [mistal-Ita-7b-q4_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
| [mistal-Ita-7b-q5_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB | large, very low quality loss - recommended |

<!-- README_GGUF.md-provided-files end -->
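The files can also be fetched individually, for example to run with llama.cpp or another GGUF runtime. Below is a minimal download sketch using the `huggingface_hub` library (an assumption on my part; any HTTP client pointed at the links above works just as well):

```python
from huggingface_hub import hf_hub_download

# A sketch, not part of this repo: download the recommended Q4_K_M file
# into the local Hugging Face cache and print its path.
# Assumes `pip install huggingface_hub`.
local_path = hf_hub_download(
    repo_id="DeepMount00/Mistral-Ita-7b-GGUF",
    filename="mistal-Ita-7b-q4_k_m.gguf",
)
print(local_path)
```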
## How to Use

How to use Mistral-Ita-7B for Italian text generation with the `ctransformers` library:
```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to the GPU.
# Set it to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Mistral-Ita-7b-GGUF",
    model_file="mistal-Ita-7b-q3_k_m.gguf",
    model_type="mistral",
    gpu_layers=0,
)

# Question (Italian): "Write a Python function that computes the mean of these values"
domanda = """Scrivi una funzione python che calcola la media tra questi valori"""
contesto = """
[-5, 10, 15, 20, 25, 30, 35]
"""

# Build the prompt in Mistral's [INST] ... [/INST] instruction format.
system_prompt = ''
prompt = domanda + "\n" + contesto
B_INST, E_INST = "[INST]", "[/INST]"
prompt = f"{system_prompt}{B_INST}{prompt}\n{E_INST}"

print(llm(prompt))
```
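Generation can be tuned per call. The sketch below assumes the `llm` and `prompt` objects from the snippet above; it streams tokens as they are produced, and the sampling values shown are illustrative defaults rather than settings recommended for this model:

```python
# A sketch, assuming `llm` and `prompt` from the snippet above.
# Stream the reply token by token instead of waiting for the full text.
for token in llm(
    prompt,
    max_new_tokens=512,      # cap on the number of generated tokens
    temperature=0.7,         # lower values make output more deterministic
    top_p=0.95,              # nucleus-sampling threshold
    repetition_penalty=1.1,  # discourage verbatim repetition
    stream=True,             # yield tokens as a generator
):
    print(token, end="", flush=True)
```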