# Quantization
Quantization techniques focus on representing data with fewer bits while trying not to lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
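As a minimal sketch of the idea in plain PyTorch (not a quantization API), casting a weight tensor from float32 to float16 halves its memory footprint:

```python
import torch

# A hypothetical 4096 x 4096 weight matrix stored in 32-bit floating point.
weights_fp32 = torch.randn(4096, 4096, dtype=torch.float32)

# Casting to 16-bit floating point halves the storage needed per element.
weights_fp16 = weights_fp32.to(torch.float16)

print(weights_fp32.element_size() * weights_fp32.nelement())  # 67108864 bytes (~67 MB)
print(weights_fp16.element_size() * weights_fp16.nelement())  # 33554432 bytes (~34 MB)
```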
Interested in adding a new quantization method to Transformers? Read the HfQuantizer guide to learn how!
If you are new to the quantization field, we recommend checking out these beginner-friendly courses about quantization, created in collaboration with DeepLearning.AI.
## When to use what?
The community has developed many quantization methods for various use cases. With Transformers, you can run any of these integrated methods depending on your use case because each method has its own pros and cons.
For example, some quantization methods require calibrating the model with a dataset for more accurate and "extreme" compression (up to 1-2 bits quantization), while other methods work out of the box with on-the-fly quantization.
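For instance, bitsandbytes quantizes on the fly while the checkpoint loads, so no calibration data is needed. A minimal sketch (the checkpoint name is only an example, and bitsandbytes and accelerate must be installed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# bitsandbytes quantizes the weights on the fly as the checkpoint is loaded;
# no calibration dataset is required.
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    device_map="auto",
    quantization_config=quantization_config,
)
```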
Another factor to consider is compatibility with your target device: do you want to quantize on a CPU, GPU, or Apple silicon?
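A quick way to check which backends your environment exposes (plain PyTorch, shown here only to help you read the compatibility columns in the table below):

```python
import torch

# Some methods need a CUDA/ROCm GPU (e.g. GPTQ, AWQ), while others also
# run on CPU or Apple silicon (e.g. Quanto, HQQ): check what you have.
if torch.cuda.is_available():
    device = "cuda"  # NVIDIA (or ROCm builds of PyTorch on AMD)
elif torch.backends.mps.is_available():
    device = "mps"  # Metal on Apple silicon
else:
    device = "cpu"
print(f"quantizing for: {device}")
```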
In short, supporting a wide range of quantization methods allows you to pick the best quantization method for your specific use case.
Use the table below to help you decide which quantization method to use.
Quantization method | On-the-fly quantization | CPU | CUDA GPU | ROCm GPU (AMD) | Metal (Apple Silicon) | torch.compile() support | Number of bits | Supports fine-tuning (through PEFT) | Serializable with 🤗 transformers | 🤗 transformers support | Link to library |
---|---|---|---|---|---|---|---|---|---|---|---|
AQLM | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 1 / 2 | 🟢 | 🟢 | 🟢 | https://github.com/Vahe1994/AQLM |
AWQ | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | ? | 4 | 🟢 | 🟢 | 🟢 | https://github.com/casper-hansen/AutoAWQ |
bitsandbytes | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 4 / 8 | 🟢 | 🟢 | 🟢 | https://github.com/TimDettmers/bitsandbytes |
EETQ | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
GGUF / GGML (llama.cpp) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 1 - 8 | 🔴 | See GGUF section | See GGUF section | https://github.com/ggerganov/llama.cpp |
GPTQ | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 2 / 3 / 4 / 8 | 🟢 | 🟢 | 🟢 | https://github.com/AutoGPTQ/AutoGPTQ |
HQQ | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 1 - 8 | 🟢 | 🔴 | 🟢 | https://github.com/mobiusml/hqq/ |
Quanto | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🟢 | 2 / 4 / 8 | 🔴 | 🔴 | 🟢 | https://github.com/huggingface/quanto |
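To illustrate the other end of the table, GPTQ is one of the calibration-based methods: it needs a dataset and a tokenizer to compute the quantized weights. A sketch, assuming the optional GPTQ dependencies (optimum and auto-gptq) are installed and using an example checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-350m"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ calibrates on a dataset ("c4" here) while quantizing to 4 bits.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
```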