Update README.md
README.md
<div align="center">
<h1>
Yuan2.0-M32: Mixture of Experts with Attention Router
</h1>
</div>

<p align="center">
👾 <a href="https://www.modelscope.cn/profile/YuanLLM" target="_blank">ModelScope</a> • 🤗 <a href="https://huggingface.co/IEITYuan" target="_blank">Hugging Face</a> • 💬 <a href="https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/images/%E6%BA%90%E5%85%AC%E4%BC%97%E5%8F%B7%E4%BA%8C%E7%BB%B4%E7%A0%81.png" target="_blank">WeChat</a> • 📎 <a href="https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/Paper.pdf" target="_blank">Yuan2.0-M32 Paper</a>
</p>

<div align="center">
</div>

-----

## 1. Introduction

**Yuan2.0-M32** is a Mixture-of-Experts (MoE) language model with 32 experts, of which 2 are active per token. A new router network, the Attention Router, is proposed and adopted for more efficient expert selection, boosting accuracy by 3.8% over a model using a classical router network. Yuan2.0-M32 is trained from scratch on 2000B tokens, and its training computation is only 9.25% of that required by a dense model of the same parameter scale. Demonstrating competitive capabilities in coding, math, and various specialized fields, Yuan2.0-M32 operates with only 3.7B active parameters out of a total of 40B, and a forward computation of 7.4 GFLOPs per token, just 1/19th of Llama3-70B's requirement. Yuan2.0-M32 surpasses Llama3-70B on the MATH and ARC-Challenge benchmarks, achieving accuracies of 55.9% and 95.8%, respectively. The basic information of the **Yuan2.0-M32** model is as follows:

+ **Total Parameters:** 40B <br>
+ **Experts:** 32 <br>
+ **Active Experts:** 2 <br>
+ **Active Parameters:** 3.7B <br>
+ **Training Tokens:** 2000B tokens <br>
+ **Sequence Length:** 16K <br>

The technical report for the Yuan2.0-M32 model has been released; you can find more detailed technical information and evaluation results in the <a href="https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/Paper.pdf" target="_blank">**paper**</a>.
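As a rough illustration of the expert configuration listed above (32 experts, 2 active per token), the sketch below implements a classical top-2 router in Python. This is **not** the Attention Router proposed in the Yuan2.0-M32 paper, only the conventional linear-gating baseline it improves upon; all names, shapes, and sizes here are illustrative assumptions.

```python
# Minimal sketch of classical top-2 expert routing in an MoE layer.
# Illustrative only -- not the Attention Router from the Yuan2.0-M32 paper.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def route_top2(hidden, router_weight, num_active=2):
    """Pick the top-`num_active` experts per token and renormalize their gates."""
    logits = hidden @ router_weight                      # (tokens, num_experts)
    probs = softmax(logits)
    top = np.argsort(probs, axis=-1)[:, -num_active:]    # indices of chosen experts
    gates = np.take_along_axis(probs, top, axis=-1)
    gates = gates / gates.sum(axis=-1, keepdims=True)    # gates of the 2 active experts sum to 1
    return top, gates

rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 64))           # 4 tokens, assumed hidden size 64
router_weight = rng.standard_normal((64, 32))   # one gate logit per expert (32 experts)
experts, gates = route_top2(hidden, router_weight)
print(experts.shape, gates.shape)  # (4, 2) (4, 2)
```

Each token's output is then the gate-weighted sum of its two selected experts' outputs, which is why only 3.7B of the 40B parameters participate in any forward pass.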

<div align=center> <img src=https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/Yuan2.0-M32-Architecture.jpg width=80% />

Fig.1: Yuan 2.0-M32 Architecture

</div>

## 2. Model Downloads

| Model | Sequence Length | Type | Download |
| :----------: | :------: | :-------: |:---------------------------: |
| Yuan2.0-M32 | 16K | Megatron | [ModelScope](https://modelscope.cn/models/YuanLLM/Yuan2-M32/) \| [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32) \| [Baidu Netdisk](https://pan.baidu.com/s/1K0LVU5NxeEujtYczF_T-Rg?pwd=cupw) |
| Yuan2.0-M32-HF | 16K | HuggingFace | [ModelScope](https://modelscope.cn/models/YuanLLM/Yuan2-M32-hf) \| [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-hf) \| [Baidu Netdisk](https://pan.baidu.com/s/1FrbVKji7IrhpwABYSIsV-A?pwd=q6uh) |
| Yuan2.0-M32-GGUF | 16K | GGUF | [ModelScope](https://modelscope.cn/models/YuanLLM/Yuan2-M32-gguf/) \| [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-gguf) \| [Baidu Netdisk](https://pan.baidu.com/s/1BWQaz-jeZ1Fe69CqYtjS9A?pwd=f4qc) |
| Yuan2.0-M32-GGUF-INT4 | 16K | GGUF | [ModelScope](https://modelscope.cn/models/YuanLLM/Yuan2-M32-gguf-int4/) \| [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-gguf-int4) \| [Baidu Netdisk](https://pan.baidu.com/s/1FM8xPpkhOrRcAfe7-zUgWQ?pwd=e6ag) |

## 3. Evaluation

**3.1 Benchmarks** 🏆

We conducted a thorough evaluation of the Yuan2.0-M32 model across a range of benchmarks, including HumanEval, GSM8K, MMLU, Math, and ARC-Challenge. These benchmarks test the model's proficiency in key areas such as natural language understanding, knowledge acquisition, mathematical computation and reasoning, and code generation. Yuan2.0-M32 shows a consistent and significant advantage over models such as Llama3-8B and Mistral-8×7B, excelling in all evaluated tasks, and its overall performance is on par with the much larger Llama3-70B model. The detailed evaluation results are outlined in the table below.

| Model | HumanEval | GSM8K | MMLU | Math | ARC-C |
| :----------: | :------: | :------: | :------: | :------: | :------: |
| Phi-3-mini | 58.5% | 82.5% | 68.8% | - | 84.9% |
| Mistral-8*22B | 45.1% | 78.6% | 77.8% | 41.8% | 91.3% |
| Mistral-8*7B | 40.2% | 58.4% | 70.86% | 28.4% | 85.9% |
| **Yuan2.0-M32** | 74.4% | 92.7% | 72.2% | **55.9%** | **95.8%** |

\* __*ARC-C*__: ARC-Challenge, the harder subset of the AI2 Reasoning Challenge (ARC) benchmark, which requires deeper reasoning and broader knowledge.

-----

**3.2 Computational Utilization**

| Model | Params (B) | Active Params (B) | GFLOPs/token (Inference) | GFLOPs/token (Fine-tune) | Mean Accuracy | Mean Accuracy per GFLOPs/token (Inference) |
| ------------------ | :---------------: | :------------: | :---------------: | :---------------: | :---------------:|:---------------:|
| Llama3-70B | 70 | 70 | 140 | 420 | 79.25 | 0.57 |
| Llama3-8B | 8 | 8 | 16 | 48 | 64.15 | 4.00 |
| Mistral-8*22B | 141 | 39 | 78 | 234 | 72.38 | 0.93 |
| Mistral-8*7B | 47 | 12.9 | 25.8 | 77.3 | 60.83 | 2.36 |
| **Yuan2.0-M32** | 40 | 3.7 | 7.4 | 22.2 | 79.15 | 10.69 |

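The efficiency figures can be sanity-checked with simple arithmetic: mean accuracy divided by inference GFLOPs/token reproduces the last column of the table above (up to rounding), and Yuan2.0-M32's active-to-total parameter ratio reproduces the 9.25% training-compute figure quoted in the introduction. A quick check in Python:

```python
# Sanity check of the efficiency numbers: accuracy points per GFLOP/token.
# Values are copied from the table above; results match the last column up to rounding.
models = {
    # name: (mean_accuracy, inference_gflops_per_token)
    "Llama3-70B":  (79.25, 140.0),
    "Mistral-8*22B": (72.38, 78.0),
    "Yuan2.0-M32": (79.15, 7.4),
}
for name, (acc, gflops) in models.items():
    print(f"{name}: {acc / gflops:.2f} accuracy points per GFLOP/token")

# Active params / total params for Yuan2.0-M32 gives the training-compute ratio
# relative to a dense model of the same scale, as stated in the introduction.
active_fraction = 3.7 / 40
print(f"{active_fraction:.2%}")  # -> 9.25%
```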
## 4. Quick Start

**4.1 Environment Config**

We strongly recommend using the latest release of the Yuan2.0-M32 Docker [image](https://hub.docker.com/r/yuanmodel/yuan2.0). You can launch an instance of the Yuan2.0-M32 container with the following Docker commands:

```bash
docker pull yuanmodel/yuan2.0:m32
docker run --gpus all --privileged --ulimit stack=68719476736 --shm-size=1000G -itd -v /path/to/yuan_2.0:/workspace/yuan_2.0 -v /path/to/dataset:/workspace/dataset -v /path/to/checkpoints:/workspace/checkpoints --name your_name yuanmodel/yuan2.0:m32
docker exec -it your_name bash
```

**4.2 Data Preprocess**

We have provided the data preprocessing script. See the documentation [here](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/data_process.md).

**4.3 Model Pretrain**

We've provided several scripts for pretraining in the [`examples`](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/examples) folder. The details can be found in the documentation [here](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/pretrain.md).

**4.4 Inference Service**

For a detailed deployment plan, please refer to [vllm](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/vllm/README_Yuan_vllm.md).

## 5. Statement of Agreement

The use of the source code in this repository requires compliance with the Apache 2.0 open source license. The Yuan2.0 model supports commercial use and does not require authorization; please understand and comply with the [《Yuan2.0 Model License Agreement》](./LICENSE-Yuan). Do not use the open source model, code, or derivatives of this open source project for any purposes that may cause harm to the country or society, or for any services that have not undergone security assessment and filing. Although we have taken measures to ensure the compliance and accuracy of the training data, the model has a huge number of parameters and is affected by probability and randomness, so we cannot guarantee the accuracy of its output, and the model can be misled by input instructions. This project assumes no risks or responsibilities arising from data security or public-opinion issues, or from the open-source model and code being misled, abused, disseminated, or improperly utilized. You will be solely responsible for the risks and consequences arising from the use, copying, distribution, and modification of the model in this open source project.

## 6. Contact Us

**If you have any questions, please raise an issue or contact us at** [email protected]