---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model:
- ssmits/Qwen2.5-122B-Instruct
tags:
- chat
---

# Qwen2.5-122B-Instruct

Qwen2.5-122B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).

It was inspired by large merges like:

- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0)
- [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct)

Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating the original model, to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit, and to [Maxime Labonne](https://huggingface.co/mlabonne) for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge.

## 🔍 Applications

This model is recommended for creative writing tasks. It uses the Qwen chat template with a default context window of 8K tokens, which can be extended via RoPE theta scaling (see the sketch below).

The model is generally quite creative and has a good writing style. It may occasionally output typos and show a preference for uppercase text.

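A minimal sketch of RoPE theta scaling; the model path and scaling factor below are illustrative, not tuned values:

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "path/to/Qwen2.5-122B-Instruct"  # replace with the actual model path

# Raise rope_theta to stretch the usable context window beyond the default 8K.
config = AutoConfig.from_pretrained(model_id)
config.rope_theta *= 4  # illustrative factor; output quality beyond the trained context is not guaranteed

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```
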
## ⚡ Quantized models

Thanks to [Bartowski](https://huggingface.co/bartowski), [elinas](https://huggingface.co/elinas), the [mlx-community](https://huggingface.co/mlx-community), and others for providing these models. A sketch of running a GGUF quant locally follows the links below.

* **GGUF**: [Link to GGUF model]
* **EXL2**: [Link to EXL2 model]
* **mlx**: [Link to mlx model]

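Assuming a GGUF quant has been published, it can be run locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the file name and settings below are hypothetical:

```python
from llama_cpp import Llama

# Hypothetical file name; use whichever quant file the GGUF repo actually provides.
llm = Llama(
    model_path="Qwen2.5-122B-Instruct.Q4_K_M.gguf",
    n_ctx=8192,       # matches the default 8K context noted above
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}]
)
print(out["choices"][0]["message"]["content"])
```
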
## 🏆 Evaluation

This model has yet to be thoroughly evaluated. It is expected to excel at creative writing but may have limitations in other tasks. Use it with caution, and don't expect it to outperform state-of-the-art models outside of specific creative use cases.

Once the model has been more thoroughly tested, this section will be updated with:

* Links to evaluation threads on social media platforms
* Examples of the model's performance in creative writing tasks
* Comparisons with other large language models in various applications
* Community feedback and use cases

We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations.

## 🧩 Configuration

The passthrough merge stacks seven overlapping 20-layer slices of the 80-layer base model, yielding a 140-layer model of roughly 122B parameters:

```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [10, 30]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [20, 40]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [30, 50]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [40, 60]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [50, 70]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [60, 80]
    model: Qwen/Qwen2.5-72B-Instruct
merge_method: passthrough
dtype: bfloat16
```

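To reproduce the merge, a sketch using MergeKit's CLI, assuming the YAML above is saved as `config.yaml` (the flags and output path are illustrative):

```python
# Notebook-style commands, matching the Usage section below.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Qwen2.5-122B-Instruct --cuda --lazy-unpickle
```
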
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "path/to/Qwen2.5-122B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the Qwen chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model across available devices in half precision.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```