|
--- |
|
license: other |
|
license_name: qianwen |
|
license_link: https://huggingface.co/Qwen/Qwen-72B-Chat/blob/main/LICENSE |
|
--- |
|
|
|
This is a 2-bit quantization of [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) using [QuIP#](https://cornell-relaxml.github.io/quip-sharp/).
|
|
|
Random samples from C4 are used as calibration data. Since C4 is an English corpus, it is unclear whether this hurts performance on Chinese tasks.
|
|
|
## Model loading |
|
Please follow the instructions in [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage. A rough loading sketch is shown below.
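
This sketch assumes that, with QuIP-for-all installed, the checkpoint loads through the standard `transformers` API with `trust_remote_code=True`; the repo id is a placeholder, and the QuIP-for-all README remains the authoritative reference.

```python
# Minimal loading sketch, assuming QuIP-for-all is installed and the
# checkpoint loads through the standard transformers API. Follow the
# QuIP-for-all README if its recipe differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "path/to/this-2bit-repo"  # placeholder: use this model repo's id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # Qwen models ship custom modeling code
)

inputs = tokenizer("Give me a short introduction to large language models.",
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```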
|
|
|
Alternatively, you can use the [vLLM branch](https://github.com/chu-tianxiang/vllm-gptq/tree/quip_gemv) for faster inference. QuIP# launches roughly five kernels per linear layer, so vLLM's CUDA graph support is very helpful for reducing kernel-launch overhead. If you have trouble installing fast-hadamard-transform from pip, you can also build it from [source](https://github.com/Dao-AILab/fast-hadamard-transform).
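
For reference, here is a hedged sketch using the standard vLLM Python API. Whether the `quip_gemv` branch needs an explicit quantization argument is an assumption on my part, so check that branch's README.

```python
# Sketch of inference on the vllm-gptq quip_gemv branch using the
# standard vLLM Python API. The branch may require an explicit
# quantization flag (assumption); consult its README.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/this-2bit-repo",  # placeholder for this repo's id
    trust_remote_code=True,          # Qwen models ship custom code
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Write a haiku about quantization."], params)
print(outputs[0].outputs[0].text)
```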
|
|
|
## Perplexity |
|
Measured on WikiText with a context length of 4096.
|
| fp16 | 2-bit |
| ------- | ------- |
| 5.8438 | 6.9492 |
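
For context, below is an illustrative sketch of the common fixed-window perplexity protocol: concatenate the test split, score non-overlapping 4096-token windows, and exponentiate the mean negative log-likelihood. The exact script used for the numbers above is not specified, so treat this as an approximation of the procedure.

```python
# Illustrative perplexity sketch (not the exact evaluation script used
# for the table above): non-overlapping 4096-token windows over the
# concatenated WikiText test split.
import torch
from datasets import load_dataset

@torch.no_grad()
def perplexity(model, tokenizer, ctx_len=4096):
    text = "\n\n".join(
        load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"]
    )
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    nlls, n_tokens = [], 0
    for i in range(0, ids.size(1) - ctx_len, ctx_len):
        chunk = ids[:, i:i + ctx_len]
        loss = model(chunk, labels=chunk).loss  # mean NLL over this window
        nlls.append(loss * chunk.size(1))
        n_tokens += chunk.size(1)
    return torch.exp(torch.stack(nlls).sum() / n_tokens)
```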
|
|
|
## Speed |
|
|
|
Latency and throughput are measured with vLLM (`examples/benchmark_latency.py` and `examples/benchmark_throughput.py`, respectively) on a single A100-80G.
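
As a rough stand-in for `benchmark_latency.py`, the sketch below times a single batch-size-1 request with the vLLM Python API and reports tokens per second; the real script adds warmup iterations and more options.

```python
# Rough batch-size-1 generation-speed measurement in the spirit of
# vLLM's examples/benchmark_latency.py (no warmup; one request only).
import time
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/this-2bit-repo", trust_remote_code=True)
params = SamplingParams(max_tokens=256, ignore_eos=True)

start = time.perf_counter()
out = llm.generate(["The quick brown fox"], params)[0]
elapsed = time.perf_counter() - start
print(f"{len(out.outputs[0].token_ids) / elapsed:.1f} tokens/s")
```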
|
|
|
Latency at batch size 1: 13.5 tokens/s. |
|
|
|
Throughput: 0.77 requests/s.