HF update?

#6
by edmond - opened

Are you planning to update transformers so this error goes away?
ValueError: Unknown quantization type, got bitnet - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao']

Hugging Face 1Bit LLMs org

Yes, I am waiting for the pull request to be merged. In the meantime, you can install the corresponding transformers version from the pull request, as shown in this notebook: https://colab.research.google.com/drive/1ovmQUOtnYIdvcBkwEE4MzVL1HKfFHdNT?usp=sharing
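For reference, a common way to install transformers straight from an open pull request is pip's Git support, which can check out a PR's head ref. This is a sketch only: the `<PR_NUMBER>` below is a placeholder, not the actual PR linked in the notebook, so substitute the real number before running.

```shell
# Install transformers from a specific pull request branch.
# <PR_NUMBER> is a placeholder for the bitnet-support PR; replace it.
pip install "git+https://github.com/huggingface/transformers.git@refs/pull/<PR_NUMBER>/head"
```

Once the PR is merged, a plain `pip install -U transformers` (or an install from `main`) should pick up bitnet support instead.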

medmekk changed discussion status to closed
