Update README.md
README.md CHANGED
@@ -33,7 +33,7 @@ base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
 - **Model Developers:** Neural Magic
 
 Quantized version of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
-It achieves an average score of
+It achieves an average score of 84.16 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 84.40.
 
 ### Model Optimizations
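The updated sentence reports 84.16 for this model against 84.40 for the unquantized baseline, i.e. roughly 99.7% recovery. A minimal sketch of that arithmetic (the `recovery` helper is illustrative, not part of the card):

```
# Recovery: quantized score as a percentage of the unquantized baseline.
def recovery(quantized: float, baseline: float) -> float:
    return 100.0 * quantized / baseline

print(f"{recovery(84.16, 84.40):.2f}%")  # 99.72%, matching the Average row below
```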
@@ -118,11 +118,11 @@ model_stub = "meta-llama/Meta-Llama-3.1-70B-Instruct"
 model_name = model_stub.split("/")[-1]
 
 device_map = calculate_offload_device_map(
-    model_stub, reserve_for_hessians=False, num_gpus=2, torch_dtype=
+    model_stub, reserve_for_hessians=False, num_gpus=2, torch_dtype="auto"
 )
 
 model = SparseAutoModelForCausalLM.from_pretrained(
-    model_stub, torch_dtype=
+    model_stub, torch_dtype="auto", device_map=device_map
 )
 
 output_dir = f"./{model_name}-FP8-dynamic"
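The changed lines above complete the card's model-creation example. A sketch of how the pieces fit together, assuming the llm-compressor package layout of the period; the import paths, the `QuantizationModifier` recipe, and the exact `oneshot` keyword arguments are assumptions not shown in this diff:

```
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot  # assumed imports
from llmcompressor.transformers.compression.helpers import calculate_offload_device_map
from llmcompressor.modifiers.quantization import QuantizationModifier

model_stub = "meta-llama/Meta-Llama-3.1-70B-Instruct"
model_name = model_stub.split("/")[-1]

# Spread the 70B checkpoint over 2 GPUs, offloading any remainder to CPU;
# reserve_for_hessians=False because FP8-dynamic needs no calibration Hessians.
device_map = calculate_offload_device_map(
    model_stub, reserve_for_hessians=False, num_gpus=2, torch_dtype="auto"
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_stub, torch_dtype="auto", device_map=device_map
)

# FP8 weights with dynamic per-token FP8 activations; lm_head stays in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

output_dir = f"./{model_name}-FP8-dynamic"
oneshot(model=model, recipe=recipe, output_dir=output_dir)
```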
@@ -140,7 +140,7 @@ oneshot(
 
 The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
 Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
-This version of the lm-evaluation-harness includes versions of ARC-Challenge
+This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
 
 ### Accuracy
 
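The restored sentence is about prompt formatting: these task variants render each example through the model's chat template, matching Meta's own instruct evals (hence `--apply_chat_template` and `add_bos_token=True` in the commands further down). A small illustration with the stock `transformers` API, independent of the harness fork:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")

# Render a question the way the instruct model was trained to see it,
# instead of passing raw benchmark text.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 6 * 7?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # header tokens and turn structure wrapped around the question
```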
@@ -151,7 +151,7 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td><strong>Meta-Llama-3.1-70B-Instruct </strong>
  </td>
- <td><strong>Meta-Llama-3.1-70B-Instruct-FP8
+ <td><strong>Meta-Llama-3.1-70B-Instruct-FP8 (this model)</strong>
  </td>
  <td><strong>Recovery</strong>
  </td>
@@ -159,71 +159,81 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
 <tr>
  <td>MMLU (5-shot)
  </td>
- <td>
+ <td>83.83
  </td>
- <td>
+ <td>83.75
  </td>
  <td>99.90%
  </td>
 </tr>
+<tr>
+ <td>MMLU-cot (0-shot)
+ </td>
+ <td>86.01
+ </td>
+ <td>85.48
+ </td>
+ <td>99.38%
+ </td>
+</tr>
 <tr>
  <td>ARC Challenge (0-shot)
  </td>
- <td>
+ <td>93.26
  </td>
- <td>
+ <td>93.52
  </td>
- <td>
+ <td>100.2%
  </td>
 </tr>
 <tr>
- <td>GSM-8K (
+ <td>GSM-8K-cot (8-shot, strict-match)
  </td>
- <td>
+ <td>94.92
  </td>
- <td>
+ <td>94.54
  </td>
- <td>
+ <td>99.60%
  </td>
 </tr>
 <tr>
  <td>Hellaswag (10-shot)
  </td>
- <td>86.
+ <td>86.75
  </td>
- <td>86.
+ <td>86.63
  </td>
- <td>99.
+ <td>99.86%
  </td>
 </tr>
 <tr>
  <td>Winogrande (5-shot)
  </td>
- <td>85.
+ <td>85.32
  </td>
- <td>
+ <td>84.61
  </td>
- <td>
+ <td>99.17%
  </td>
 </tr>
 <tr>
  <td>TruthfulQA (0-shot, mc2)
  </td>
- <td>
+ <td>60.68
  </td>
- <td>60.
+ <td>60.60
  </td>
- <td>
+ <td>99.87%
  </td>
 </tr>
 <tr>
  <td><strong>Average</strong>
  </td>
- <td><strong>
+ <td><strong>84.40</strong>
  </td>
- <td><strong>
+ <td><strong>84.16</strong>
  </td>
- <td><strong>99.
+ <td><strong>99.72%</strong>
  </td>
 </tr>
 </table>
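The filled-in table is internally consistent: each Recovery cell is the FP8 score divided by the baseline, and the Average row is the plain mean over the seven tasks. A quick check, with the values transcribed from the table above:

```
scores = {  # task: (baseline, FP8-dynamic)
    "MMLU (5-shot)": (83.83, 83.75),
    "MMLU-cot (0-shot)": (86.01, 85.48),
    "ARC Challenge (0-shot)": (93.26, 93.52),
    "GSM-8K-cot (8-shot, strict-match)": (94.92, 94.54),
    "Hellaswag (10-shot)": (86.75, 86.63),
    "Winogrande (5-shot)": (85.32, 84.61),
    "TruthfulQA (0-shot, mc2)": (60.68, 60.60),
}

for task, (base, fp8) in scores.items():
    print(f"{task}: {100 * fp8 / base:.2f}% recovery")

base_avg = sum(b for b, _ in scores.values()) / len(scores)
fp8_avg = sum(q for _, q in scores.values()) / len(scores)
print(f"Average: {base_avg:.2f} vs {fp8_avg:.2f} -> {100 * fp8_avg / base_avg:.2f}%")
# Average: 84.40 vs 84.16 -> 99.72%
```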
@@ -242,6 +252,17 @@ lm_eval \
   --batch_size auto
 ```
 
+#### MMLU-cot
+```
+lm_eval \
+  --model vllm \
+  --model_args pretrained="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=2 \
+  --tasks mmlu_cot_0shot_llama_3.1_instruct \
+  --apply_chat_template \
+  --num_fewshot 0 \
+  --batch_size auto
+```
+
 #### ARC-Challenge
 ```
 lm_eval \
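All of the reproduction commands in this card share one shape, differing only in task name and few-shot count. A small driver loop sketched from the MMLU-cot command above; only that task name appears verbatim in this diff, so the `TASKS` list must be completed from the card's other sections, and per-task flags (for example whether `--apply_chat_template` applies) may differ:

```
import subprocess

# Shell quoting from the README command is unnecessary here, so the
# pretrained= value is passed without the embedded double quotes.
MODEL_ARGS = (
    "pretrained=neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8-dynamic,"
    "dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=2"
)

# (task, num_fewshot): only the MMLU-cot entry is shown in this diff.
TASKS = [
    ("mmlu_cot_0shot_llama_3.1_instruct", 0),
]

for task, shots in TASKS:
    subprocess.run(
        ["lm_eval",
         "--model", "vllm",
         "--model_args", MODEL_ARGS,
         "--tasks", task,
         "--apply_chat_template",
         "--num_fewshot", str(shots),
         "--batch_size", "auto"],
        check=True,
    )
```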