---
datasets:
- NeelNanda/pile-10k
language:
- en
---


## Model Details

This model is an int4 quantized version of [facebook/opt-13b](https://huggingface.co/facebook/opt-13b) with group_size 128, generated by [intel/auto-round](https://github.com/intel/auto-round).
Inference of this model is compatible with the AutoGPTQ kernel.
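
A minimal inference sketch, not an official recipe: it assumes the quantized weights are published as `Intel/opt-13b-int4-inc` (the repo id used in the evaluation command below) and that `auto-gptq` and `optimum` are installed so `transformers` can load the GPTQ-packed int4 weights on the AutoGPTQ kernel.

```python
# Minimal inference sketch (assumes `transformers`, `optimum`, and `auto-gptq`
# are installed so the GPTQ-packed int4 weights run on the AutoGPTQ kernel).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/opt-13b-int4-inc"  # repo id taken from the evaluation command below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "There is a girl who likes adventure,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```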





### Reproduce the model

Here is a sample command to reproduce the model:

```bash
git clone https://github.com/intel/auto-round
cd auto-round/examples/language-modeling
pip install -r requirements.txt
python3 main.py \
  --model_name facebook/opt-13b \
  --device 0 \
  --group_size 128 \
  --bits 4 \
  --iters 1000 \
  --nsamples 512 \
  --deployment_device 'gpu' \
  --minmax_lr 2e-3 \
  --disable_quanted_input \
  --output_dir "./tmp_autoround"
```





### Evaluate the model 

Install [lm-eval-harness 0.4.2](https://github.com/EleutherAI/lm-evaluation-harness.git) from source.

```bash
lm_eval --model hf --model_args pretrained="Intel/opt-13b-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu --batch_size 32
```

| Metric         | FP16   | INT4   |
| -------------- | ------ | ------ |
| Avg.           | 0.4989 | 0.5021 |
| mmlu           | 0.2473 | 0.2456 |
| lambada_openai | 0.6858 | 0.6949 |
| hellaswag      | 0.5247 | 0.5177 |
| winogrande     | 0.6480 | 0.6448 |
| piqa           | 0.7590 | 0.7573 |
| truthfulqa_mc1 | 0.1971 | 0.2056 |
| openbookqa     | 0.2680 | 0.2780 |
| boolq          | 0.6584 | 0.6801 |
| arc_easy       | 0.6713 | 0.6717 |
| arc_challenge  | 0.3294 | 0.3251 |
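
For reference, the reported Avg. is consistent with the unweighted arithmetic mean of the ten task scores; a small check:

```python
# Sanity-check the "Avg." row: it matches the unweighted mean of the ten task scores above.
fp16 = [0.2473, 0.6858, 0.5247, 0.6480, 0.7590, 0.1971, 0.2680, 0.6584, 0.6713, 0.3294]
int4 = [0.2456, 0.6949, 0.5177, 0.6448, 0.7573, 0.2056, 0.2780, 0.6801, 0.6717, 0.3251]
print(round(sum(fp16) / len(fp16), 4))  # 0.4989
print(round(sum(int4) / len(int4), 4))  # 0.5021
```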






## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)



## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.



## Cite

    @article{cheng2023optimize,
      title={Optimize weight rounding via signed gradient descent for the quantization of llms},
      author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
      journal={arXiv preprint arXiv:2309.05516},
      year={2023}
    }

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)