---
license: apache-2.0
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
- qwen2
---

This is the Mistral version of the [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) model by Alibaba Cloud.

The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py.

I have modified it to make it compatible with Qwen2.

This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/mistral_qwen2.py.

## Special

1. Before using this model, you need to modify `modeling_mistral.py` in the transformers library.
2. Open the file, e.g. `vim /root/anaconda3/envs/train/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py`.
3. Find `MistralAttention`.
4. In the q, k, v, and o projections, change `bias=False` to `bias=config.attention_bias` (a sketch of the result follows the screenshots below).

Before:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d7f90b102d144db4b4245b/AKj_fwEoLUKWZ4mViYW-q.png)

After:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d7f90b102d144db4b4245b/A2gSwq9l6Zx8X1qegtgvE.png)
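
For reference, here is a minimal sketch of what the four projection lines inside `MistralAttention.__init__` should look like after the edit. This assumes `attention_bias` is present in this model's `config.json`; exact variable names and dimensions depend on your installed transformers version.

```python
# Inside MistralAttention.__init__ in modeling_mistral.py (sketch only).
# Changing bias=False to bias=config.attention_bias lets the Qwen2
# q/k/v biases be loaded; attention_bias is assumed to be set in config.json.
self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.attention_bias)
```

This mirrors how Llama's modeling code exposes `config.attention_bias` for the same projections.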

## Differences between qwen2 mistral and qwen2 llamafy

Compared to qwen2 llamafy, qwen2 mistral can use sliding-window attention, so it is faster and handles long contexts better.

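If you want to verify the sliding-window setting the converted checkpoint exposes, a small sketch (the printed values come from whatever `config.json` was uploaded with this repo):

```python
from transformers import AutoConfig

# Inspect the Mistral-style sliding-window attention setting of the converted model.
config = AutoConfig.from_pretrained("Minami-su/Qwen2-7B-Instruct-mistral")
print(config.sliding_window)            # window size for sliding-window attention
print(config.max_position_embeddings)   # maximum context length
```
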
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen2-7B-Instruct-mistral")
model = AutoModelForCausalLM.from_pretrained("Minami-su/Qwen2-7B-Instruct-mistral", torch_dtype="auto", device_map="auto")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the chat prompt with the model's chat template and generate with streamed output.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```