Losin94 committed
Commit 41750c8
1 Parent(s): d7002f1

Update README.md

Files changed (1)
  1. README.md +58 -0
README.md CHANGED
@@ -34,6 +34,64 @@ KeyError: 'qwen2_moe'

We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.

## Performance

The evaluation of base models mainly focuses on model performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.

The datasets for evaluation include the following; a short sketch of how an n-shot prompt is assembled appears after the list:

**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)

**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, Java, PHP, TypeScript, C#, Bash, JavaScript)

**Math Tasks**: GSM8K (4-shot), MATH (4-shot)

**Chinese Tasks**: C-Eval (5-shot), CMMLU (5-shot)

**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)

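Here, "n-shot" means that n solved exemplars are prepended to each test question before the model answers. The following is a minimal sketch of how such a prompt could be assembled for a multiple-choice benchmark like MMLU; the helper names and data layout are illustrative and are not part of the official evaluation harness.

```python
# Illustrative n-shot prompt construction for a multiple-choice benchmark.
# The exemplars would normally come from the benchmark's dev split; everything
# here is a placeholder rather than the actual evaluation pipeline.
from typing import Dict, List


def format_example(example: Dict, include_answer: bool = True) -> str:
    """Render one question as 'question / lettered choices / Answer: X'."""
    lines = [example["question"]]
    for label, choice in zip("ABCD", example["choices"]):
        lines.append(f"{label}. {choice}")
    lines.append("Answer:" + (f" {example['answer']}" if include_answer else ""))
    return "\n".join(lines)


def build_n_shot_prompt(dev_examples: List[Dict], test_example: Dict, n_shot: int = 5) -> str:
    """Concatenate n_shot solved exemplars followed by the unanswered test question."""
    blocks = [format_example(ex) for ex in dev_examples[:n_shot]]
    blocks.append(format_example(test_example, include_answer=False))
    return "\n\n".join(blocks)
```

The model is then typically scored on its continuation after the final "Answer:" line.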
#### Qwen2-57B-A14B performance

| Datasets | Jamba | Mixtral-8x7B | Yi-1.5-34B | Qwen1.5-32B | **Qwen2-57B-A14B** |
| :-------- | :---------: | :------------: | :------------: | :------------: | :------------: |
| Architecture | MoE | MoE | Dense | Dense | MoE |
| #Activated Params | 12B | 12B | 34B | 32B | 14B |
| #Params | 52B | 47B | 34B | 32B | 57B |
| ***English*** | | | | | |
| MMLU | 67.4 | 71.8 | **77.1** | 74.3 | 76.5 |
| MMLU-Pro | - | 41.0 | **48.3** | 44.0 | 43.0 |
| GPQA | - | 29.2 | - | 30.8 | **34.3** |
| Theorem QA | - | 23.2 | - | 28.8 | **33.5** |
| BBH | 45.4 | 50.3 | **76.4** | 66.8 | 67.0 |
| HellaSwag | **87.1** | 86.5 | 85.9 | 85.0 | 85.2 |
| Winogrande | 82.5 | 81.9 | **84.9** | 81.5 | 79.5 |
| ARC-C | 64.4 | **66.0** | 65.6 | 63.6 | 64.1 |
| TruthfulQA | 46.4 | 51.1 | 53.9 | 57.4 | **57.7** |
| ***Coding*** | | | | | |
| HumanEval | 29.3 | 37.2 | 46.3 | 43.3 | **53.0** |
| MBPP | - | 63.9 | 65.5 | 64.2 | **71.9** |
| EvalPlus | - | 46.4 | 51.9 | 50.4 | **57.2** |
| MultiPL-E | - | 39.0 | 39.5 | 38.5 | **49.8** |
| ***Mathematics*** | | | | | |
| GSM8K | 59.9 | 62.5 | **82.7** | 76.8 | 80.7 |
| MATH | - | 30.8 | 41.7 | 36.1 | **43.0** |
| ***Chinese*** | | | | | |
| C-Eval | - | - | - | 83.5 | **87.7** |
| CMMLU | - | - | 84.8 | 82.3 | **88.5** |
| ***Multilingual*** | | | | | |
| Multi-Exam | - | 56.1 | - | 61.6 | **65.5** |
| Multi-Understanding | - | 70.7 | - | 76.5 | **77.0** |
| Multi-Mathematics | - | 45.0 | - | 56.1 | **62.3** |
| Multi-Translation | - | 29.8 | - | 33.5 | **34.5** |

### Efficient MoE Models

Compared with training models smaller than 7 billion parameters, it is costly to train medium-size models such as a 32B model, while a 14B model cannot perform complex tasks as well as a 72B model does. Owing to the recent success of MoE models, we turn to the MoE architecture, following our previous work Qwen1.5-MoE-A2.7B, and extend it to a larger model size. Specifically, we apply the same architecture and training strategy, e.g., upcycling, to a model with a total of 57B parameters, of which only 14B are activated in each forward pass; a toy sketch of this routing pattern is shown below.

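To make the distinction between total and activated parameters concrete, here is a minimal sketch of a mixture-of-experts feed-forward layer with top-k routing. It is an illustration only, not the Qwen2-MoE implementation: the real model differs in expert count, routing details, and its use of shared experts, but the principle is the same: each token is processed by only a few experts, so the parameters activated per forward pass are a small fraction of the total.

```python
# Toy top-k expert routing (illustration only; not the Qwen2-MoE code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    def __init__(self, hidden: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden). Each token is sent to its top_k experts only.
        weights, indices = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

Scaled up, this is how a model with 57B total parameters can run with roughly 14B parameters' worth of computation per token.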
In the following, we list the inference performance of the two models when deployed with vLLM on 2 NVIDIA A100 GPUs:

| | Qwen2-57B-A14B | Qwen1.5-32B |
| :--- | :---------: | :------------: |
| QPS (queries per second) | 9.40 | 5.18 |
| TPS (tokens per second) | 10345.17 | 5698.37 |

In terms of efficiency, we observe clear advantages of Qwen2-57B-A14B over Qwen1.5-32B. Furthermore, as the benchmark results above show, Qwen2-57B-A14B also achieves higher model quality than Qwen1.5-32B, even though Qwen1.5-32B has more activated parameters.

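For reference, the deployment setup mentioned above (vLLM with tensor parallelism across two GPUs) could be launched roughly as follows. This is a sketch under assumptions, not the exact benchmark harness: the model id, sampling settings, and prompt are illustrative, measured QPS/TPS will vary with request mix and vLLM version, and, as noted earlier in this card, the base model is better suited to post-training than to direct text generation.

```python
# Rough sketch: serving the checkpoint with vLLM sharded across 2 GPUs.
# Requires a vLLM build with Qwen2-MoE support; the values below are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-57B-A14B",  # assumed Hugging Face model id for this card
    tensor_parallel_size=2,       # shard the 57B-total / 14B-activated model over 2 A100s
)

sampling = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=128)
outputs = llm.generate(["Briefly explain mixture-of-experts language models."], sampling)
print(outputs[0].outputs[0].text)
```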

## Citation