---
license: other
license_name: qianwen
license_link: https://huggingface.co/Qwen/Qwen-72B-Chat/blob/main/LICENSE
---

This is a 2-bit quantization of [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) using [QuIP#](https://cornell-relaxml.github.io/quip-sharp/).

Random samples from C4 are used as calibration data. I'm not sure whether this has a negative effect on Chinese tasks.

## Model loading

Please follow the instructions in [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage; a rough loading sketch is shown below.
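
As a rough illustration only (the QuIP-for-all README is authoritative), loading a Hub checkpoint that ships its own modeling code commonly looks like the sketch below. The `model_id` placeholder and the `trust_remote_code=True` route are assumptions, not confirmed details of this repo.

```python
# Hypothetical loading sketch -- follow the QuIP-for-all README for the exact API.
# Assumes the checkpoint can be loaded through transformers with remote code;
# adjust if QuIP-for-all provides its own loader.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-2bit-checkpoint"  # placeholder: this repo's id or a local dir

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```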

As an alternative, you can use the [vLLM branch](https://github.com/chu-tianxiang/vllm-gptq/tree/quip_gemv) for faster inference. QuIP has to launch about five kernels for each linear layer, so vLLM's CUDA graph support is very helpful for reducing launch overhead. If you have trouble installing fast-hadamard-transform from pip, you can also install it from [source](https://github.com/Dao-AILab/fast-hadamard-transform). A usage sketch follows below.
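
Assuming the branch keeps vLLM's standard Python API (the model path below is a placeholder, and any extra quantization flag the fork may need is not shown), offline generation would look roughly like this:

```python
# Rough sketch of offline generation with the vLLM fork linked above.
# Assumes the standard LLM/SamplingParams API; check the branch README for
# any additional arguments it may require.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/this-2bit-checkpoint", trust_remote_code=True)  # placeholder path
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Write a short poem about the sea."], params)
for out in outputs:
    print(out.outputs[0].text)
```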

## Perplexity

Measured on Wikitext with a context length of 4096.

| fp16 | 2-bit |
| ------ | ------ |
| 5.8438 | 6.9492 |
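
The exact evaluation script is not specified here; a common way to measure perplexity at a fixed context length is to score non-overlapping 4096-token chunks and average the negative log-likelihood, sketched below. The Wikitext split and model path are assumptions.

```python
# Generic perplexity sketch (not necessarily the script used for the table above):
# split the tokenized Wikitext test set into non-overlapping 4096-token chunks
# and average the negative log-likelihood over all predicted tokens.
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/model"  # placeholder: fp16 or 2-bit checkpoint
ctx_len = 4096

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
).eval()

# Assumed split; the card only says "Wikitext".
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

nll_sum, n_tokens = 0.0, 0
for start in range(0, ids.shape[1] - ctx_len, ctx_len):
    chunk = ids[:, start : start + ctx_len].to(model.device)
    with torch.no_grad():
        # With labels=chunk, the model returns the mean NLL over ctx_len - 1 predictions.
        loss = model(chunk, labels=chunk).loss
    nll_sum += loss.item() * (ctx_len - 1)
    n_tokens += ctx_len - 1

print(f"perplexity: {math.exp(nll_sum / n_tokens):.4f}")
```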

## Speed

Measured with the `examples/benchmark_latency.py` script from the vLLM repo.
At batch size 1, it generates at 13.5 tokens/s on a single A100.
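
If you just want a rough sanity check without the benchmark script, something like the following (placeholder model path, assuming the fork's standard API) times a single batch-size-1 generation and reports tokens/s:

```python
# Quick-and-dirty throughput check (not the benchmark_latency.py script itself).
import time
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/this-2bit-checkpoint", trust_remote_code=True)  # placeholder path
params = SamplingParams(max_tokens=256, ignore_eos=True)

start = time.perf_counter()
out = llm.generate(["Benchmark prompt"], params)[0]
elapsed = time.perf_counter() - start
# Includes prefill time, so this slightly understates pure decode speed.
print(f"{len(out.outputs[0].token_ids) / elapsed:.1f} tokens/s")
```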
|