---
language:
- en
pipeline_tag: text-generation
license: llama3.1
---

# Meta-Llama-3.1-8B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Meta-Llama-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** INT8
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/11/2024
- **Version:** 1.0
- **License(s):** [Llama3](https://llama.meta.com/llama3/license/)
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It achieves an average score of 69.27 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 69.33.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) to the INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, in which a fixed linear scaling factor between INT8 and floating-point representations is applied to each output channel dimension.
Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor between INT8 and floating-point representations at runtime for each token.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
GPTQ was run with a 1% damping factor and 256 calibration sequences of 8,192 random tokens each.
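
To make the two schemes concrete, here is a minimal sketch of the scale computations they imply (illustrative only, not the llm-compressor implementation; the function names are made up):

```python
import torch

def quantize_weight_per_channel(weight: torch.Tensor):
    # Symmetric static per-channel: one fixed scale per output channel (row),
    # chosen so each channel's largest magnitude maps to the INT8 extreme 127.
    scale = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale  # dequantize with q.float() * scale

def quantize_activation_per_token(x: torch.Tensor):
    # Symmetric dynamic per-token: one scale per token, recomputed at runtime
    # from the token's own value range.
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

weight = torch.randn(4096, 4096)
q, scale = quantize_weight_per_channel(weight)
print((weight - q.float() * scale).abs().max())  # reconstruction error stays small
```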

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
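
For instance, an OpenAI-compatible server for this model can be launched and queried roughly as follows (a sketch; the exact CLI flags depend on your vLLM version):

```
vllm serve neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8

# In another shell, query the server (it listens on port 8000 by default):
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8",
        "messages": [{"role": "user", "content": "Who are you?"}]
      }'
```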

### Use with transformers

The following example shows how the model can be deployed in Transformers using the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as presented in the code snippet below.

```python
from transformers import AutoTokenizer
from datasets import Dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a calibration set of random token sequences, as described above
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})

recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Meta-Llama-3.1-8B-Instruct-quantized.w8a8")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores
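
The Recovery column below reports the quantized model's score as a percentage of the unquantized baseline's score; a minimal example of the computation:

```python
def recovery(quantized_score: float, baseline_score: float) -> float:
    # Recovery: quantized score as a percentage of the unquantized baseline score.
    return 100.0 * quantized_score / baseline_score

print(f"{recovery(69.27, 69.33):.1f}%")  # Average row in the table -> 99.9%
```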

<table>
  <tr>
   <td><strong>Benchmark</strong></td>
   <td><strong>Meta-Llama-3.1-8B-Instruct</strong></td>
   <td><strong>Meta-Llama-3.1-8B-Instruct-quantized.w8a8 (this model)</strong></td>
   <td><strong>Recovery</strong></td>
  </tr>
  <tr>
   <td>MMLU (5-shot)</td>
   <td>67.94</td>
   <td>67.58</td>
   <td>99.4%</td>
  </tr>
  <tr>
   <td>ARC Challenge (25-shot)</td>
   <td>62.63</td>
   <td>62.20</td>
   <td>99.5%</td>
  </tr>
  <tr>
   <td>GSM-8K (5-shot, strict-match)</td>
   <td>75.66</td>
   <td>76.57</td>
   <td>101.2%</td>
  </tr>
  <tr>
   <td>Hellaswag (10-shot)</td>
   <td>80.01</td>
   <td>79.85</td>
   <td>99.8%</td>
  </tr>
  <tr>
   <td>Winogrande (5-shot)</td>
   <td>77.90</td>
   <td>77.11</td>
   <td>99.0%</td>
  </tr>
  <tr>
   <td>TruthfulQA (0-shot)</td>
   <td>54.04</td>
   <td>54.19</td>
   <td>100.3%</td>
  </tr>
  <tr>
   <td><strong>Average</strong></td>
   <td><strong>69.33</strong></td>
   <td><strong>69.27</strong></td>
   <td><strong>99.9%</strong></td>
  </tr>
</table>