AWQ 4-bit 128g version of open-llama-7b-open-instruct!
by abhinavkulkarni
Hi,
I would like to draw everyone's attention to the AWQ-quantized version of the open-llama-7b-open-instruct model at https://huggingface.co/abhinavkulkarni/open-llama-7b-open-instruct-w4-g128-awq.
For more on AWQ, see the paper "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration" (https://arxiv.org/abs/2306.00978).
The quantized model is 3.89 GB on disk vs. 13.48 GB for the original model, with similar savings in VRAM usage, and perplexity is only about 5% worse.
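Here is a minimal loading-and-generation sketch for trying the checkpoint. It assumes the AutoAWQ package (`pip install autoawq`) can load this checkpoint and that a CUDA GPU is available; the prompt is only an illustration.

```python
# A minimal sketch, assuming AutoAWQ compatibility and a CUDA GPU.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "abhinavkulkarni/open-llama-7b-open-instruct-w4-g128-awq"

# Load the tokenizer and the 4-bit (w4, group size 128) AWQ weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)

# Run a short generation to sanity-check the quantized model
prompt = "Explain the benefits of 4-bit weight quantization."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```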
Please take a look and give it a try.
Thanks!
abhinavkulkarni changed discussion status to closed