AWQ 4-bit 128g version of open-llama-13b-open-instruct!
#3 by abhinavkulkarni · opened
Hi,
I would like to draw everyone's attention to the AWQ-quantized version of the open-llama-13b-open-instruct
model at https://huggingface.co/abhinavkulkarni/open-llama-13b-open-instruct-w4-g128-awq.
For more on AWQ (Activation-aware Weight Quantization), see the original paper by Lin et al.
The quantized model is 7.249 GB on disk vs. 26.03 GB for the original model, and similar savings apply to VRAM usage. Perplexity is only about 5% worse.
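As a rough sanity check on those numbers, here is a back-of-envelope sketch of the expected size reduction. It assumes a hypothetical packing scheme with an fp16 scale and fp16 zero-point per 128-weight group; the exact format of the released checkpoint may differ:

```python
def bits_per_weight(wbits=4, group_size=128, scale_bits=16, zero_bits=16):
    """Effective bits per weight, including per-group quantization metadata.

    Assumes (hypothetically) one fp16 scale and one fp16 zero-point
    stored per group of `group_size` weights.
    """
    return wbits + (scale_bits + zero_bits) / group_size

params = 13e9  # ~13B parameters

fp16_gb = params * 16 / 8 / 1e9            # 16 bits per weight
awq_gb = params * bits_per_weight() / 8 / 1e9

print(f"fp16: {fp16_gb:.2f} GB, AWQ 4-bit g128: {awq_gb:.2f} GB")
```

The estimate comes out slightly below the reported 7.249 GB, which is expected: embeddings and some layers typically remain unquantized in the released file.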
Please take a look and give it a try.
Thanks!
abhinavkulkarni changed discussion status to closed