---
license: other
license_name: qianwen
license_link: https://huggingface.co/Qwen/Qwen-72B-Chat/blob/main/LICENSE
---

This is a 2-bit quantization of Qwen/Qwen-72B-Chat using QuIP#.

Random samples from C4 are used as calibration data. I'm not sure whether this has a negative effect on Chinese tasks.

## Model loading

Please follow the instructions in QuIP-for-all for usage.
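
For orientation only, here is a minimal sketch of what loading typically looks like, assuming the checkpoint is wired up to load through transformers with `trust_remote_code=True` (the repo id, prompt, and generation settings below are illustrative; defer to the QuIP-for-all README if its steps differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "keyfan/Qwen-72B-Chat-2bit"  # assumed repo id, for illustration only

# Qwen checkpoints ship custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Give me a short introduction to large language models.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```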

As an alternative, you can use the vLLM branch for faster inference. QuIP has to launch around five kernels for each linear layer, so letting vLLM capture CUDA graphs is very helpful for reducing the launch overhead. If you have trouble installing fast-hadamard-transform from pip, you can also install it from source.
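
With the QuIP-enabled vLLM branch, generation goes through the standard vLLM API. The sketch below assumes the branch accepts the quantized checkpoint directly via `LLM(model=...)`; any branch-specific flags may differ, but the sampling calls themselves are stock vLLM:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="keyfan/Qwen-72B-Chat-2bit",  # assumed repo id, for illustration only
    trust_remote_code=True,
    enforce_eager=False,  # default; keeps CUDA graph capture on, which matters for QuIP's many small kernels
)

params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```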

## Perplexity

Measured on WikiText with a context length of 4096.

| fp16 | 2-bit |
| --- | --- |
| 5.8438 | 6.9492 |
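
For reference, a minimal sketch of how such a number is typically computed: the test split is concatenated, tokenized, and scored in non-overlapping 4096-token windows, averaging the next-token negative log-likelihood. The dataset config (`wikitext-2-raw-v1`) and repo id are assumptions for illustration, not necessarily the exact setup behind the table; it also assumes the checkpoint already loads through transformers as described under Model loading.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "keyfan/Qwen-72B-Chat-2bit"  # assumed repo id, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Concatenate the raw test split and tokenize it once.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

ctx = 4096
nlls, n_tokens = [], 0
for start in range(0, ids.size(1) - 1, ctx):
    chunk = ids[:, start : start + ctx].to(model.device)
    if chunk.size(1) < 2:  # a trailing 1-token window has no targets to score
        break
    with torch.no_grad():
        # With labels == inputs, loss is the mean NLL over chunk.size(1) - 1 targets.
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss.float() * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

print(f"perplexity: {torch.exp(torch.stack(nlls).sum() / n_tokens).item():.4f}")
```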

## Speed

Latency and throughput are measured with vLLM (`examples/benchmark_latency.py` and `examples/benchmark_throughput.py`, respectively) on a single A100-80G.

Latency at batch size 1: 13.5 tokens/s.

Throughput: 0.77 requests/s.