---
language:
- eu
pipeline_tag: text-generation
tags:
- gguf
- latxa
- 7b
- hitz
- llama
- quant
model_name: latxa-7b-v1
base_model: xezpeleta/latxa-7b-instruct
---

# Latxa 7B Instruct GGUF

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [latxa-7b-instruct-q8.gguf](https://huggingface.co/oldbridge/latxa-7b-instruct-q8/blob/main/latxa-7b-instruct-q8.gguf) | Q8_0 | 8 | 7 GB | 8.2 GB | Fits on an RTX 3060 (12 GB) |
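
Below is a minimal usage sketch with llama-cpp-python. The local file path, context size, and GPU layer count are assumptions; adjust them to where you downloaded the file and to your hardware. The prompt is a plain Basque string, since this card does not document a specific chat template.

```python
# Minimal sketch: load the Q8_0 GGUF with llama-cpp-python and run a short completion.
# Assumptions: the file was downloaded locally as ./latxa-7b-instruct-q8.gguf and the
# GPU has enough free VRAM to offload all layers (n_gpu_layers=-1); use 0 for CPU-only.
from llama_cpp import Llama

llm = Llama(
    model_path="./latxa-7b-instruct-q8.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # context window; Llama 2 based models support up to 4096 tokens
    n_gpu_layers=-1,   # offload all layers to the GPU; set to 0 for CPU-only inference
)

# Simple Basque prompt: "Hello, how are you?"
output = llm(
    "Kaixo, nola zaude?",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

To fetch the file locally, something like `huggingface-cli download oldbridge/latxa-7b-instruct-q8 latxa-7b-instruct-q8.gguf --local-dir .` should work.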