## **UPDATE: Official version is out, use it instead: [https://huggingface.co/mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)**
---
---
---
---
# mistral-7B-v0.1-hf
Hugging Face-compatible version of Mistral's 7B model: https://twitter.com/MistralAI/status/1706877320844509405
## Usage
### Load in bfloat16 (16GB VRAM or higher)
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer

tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")

# load the full bf16 weights onto GPU 0
model = LlamaForCausalLM.from_pretrained(
    "kittn/mistral-7B-v0.1-hf",
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
### Load in bitsandbytes nf4 (6GB VRAM or higher, maybe less with double_quant)
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer, BitsAndBytesConfig

tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")

# quantize to 4-bit NF4 on the fly while loading onto GPU 0
model = LlamaForCausalLM.from_pretrained(
    "kittn/mistral-7B-v0.1-hf",
    device_map={"": 0},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=False,  # set to True to save more VRAM at the cost of some speed/accuracy
    ),
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
### Load in bitsandbytes int8 (8GB VRAM or higher). Quite slow; not recommended.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer, BitsAndBytesConfig

tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")

# quantize to 8-bit (LLM.int8) while loading onto GPU 0
model = LlamaForCausalLM.from_pretrained(
    "kittn/mistral-7B-v0.1-hf",
    device_map={"": 0},
    quantization_config=BitsAndBytesConfig(
        load_in_8bit=True,
    ),
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
## Notes
* The original Hugging Face conversion script casts the model from bf16 to fp16 before saving it; this script doesn't, so the checkpoint keeps the original bf16 weights
* The tokenizer is created with `legacy=False` ([more about this here](https://github.com/huggingface/transformers/pull/24565))
* Saved in safetensors format (a quick way to verify the stored dtype is sketched below)
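If you want to confirm that the checkpoint actually stores bf16 weights (rather than fp16), you can inspect a shard directly. A minimal sketch, assuming the weights are published as `.safetensors` shards (the shard names are looked up at runtime rather than hard-coded):
```python
from huggingface_hub import hf_hub_download, list_repo_files
from safetensors import safe_open

repo = "kittn/mistral-7B-v0.1-hf"

# grab the first safetensors shard, whatever it happens to be named
shard = next(f for f in list_repo_files(repo) if f.endswith(".safetensors"))
path = hf_hub_download(repo, shard)

# open the shard lazily and check the dtype of one tensor
with safe_open(path, framework="pt") as f:
    name = next(iter(f.keys()))
    print(name, f.get_tensor(name).dtype)  # expected: torch.bfloat16
```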
## Conversion script [[link]](https://gist.github.com/sekstini/151d6946df1f6aa997b7cb15ee6f3be1)
Unlike [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b), this model uses grouped-query attention (GQA). This breaks some assumptions in the original conversion script, requiring a few changes (sketched below).
Conversion script: [link](https://gist.github.com/sekstini/151d6946df1f6aa997b7cb15ee6f3be1)
Original conversion script: [link](https://github.com/huggingface/transformers/blob/946bac798caefada3f5f1c9fecdcfd587ed24ac7/src/transformers/models/llama/convert_llama_weights_to_hf.py)
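Roughly: Llama-2-7B uses full multi-head attention (32 query heads, 32 key/value heads), so the original script can apply the same rotary-embedding permutation to `q_proj` and `k_proj`. Mistral-7B uses GQA with only 8 key/value heads, so `k_proj` and `v_proj` are smaller and the permutation has to be parameterized by the number of KV heads. An illustrative sketch of the shape handling (not the conversion script itself):
```python
import torch

# Mistral-7B-v0.1 attention shapes: 32 query heads, 8 KV heads, hidden size 4096
n_heads, n_kv_heads, dim = 32, 8, 4096
head_dim = dim // n_heads  # 128

def permute(w, n_heads, dim1, dim2=dim):
    # rotary-embedding weight permutation, as in the HF Llama conversion script
    return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)

wq = torch.randn(n_heads * head_dim, dim)     # q_proj weight: (4096, 4096)
wk = torch.randn(n_kv_heads * head_dim, dim)  # k_proj weight: (1024, 4096) under GQA

q_proj = permute(wq, n_heads, n_heads * head_dim)        # same call as for Llama-2-7B
k_proj = permute(wk, n_kv_heads, n_kv_heads * head_dim)  # must use n_kv_heads, not n_heads
```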