BiLLM: Pushing the Limit of Post-Training Quantization for LLMs • arXiv:2402.04291 • Published Feb 6, 2024
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization • arXiv:2401.18079 • Published Jan 31, 2024
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers • arXiv:2402.08958 • Published Feb 14, 2024
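All three papers target post-training quantization (PTQ) for large language models, i.e. compressing weights or KV-cache activations to low bit-width after training, without any retraining. For context only, below is a minimal sketch of the symmetric round-to-nearest (RTN) baseline that PTQ work of this kind typically improves on. It is a generic illustration, not the algorithm of any paper listed above; the function names and the per-tensor scaling choice are assumptions made for the example.

```python
# Minimal sketch of symmetric round-to-nearest post-training quantization.
# This is a generic PTQ baseline, NOT the method of BiLLM, KVQuant, or the
# hyper-scale-transformer paper above; names here are illustrative only.
import numpy as np

def quantize_rtn(weights: np.ndarray, num_bits: int = 4):
    """Quantize a float tensor to signed ints with one per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = np.abs(weights).max() / qmax        # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map quantized ints back to approximate float values."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(128, 128).astype(np.float32)
    q, s = quantize_rtn(w, num_bits=4)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"mean abs reconstruction error at 4 bits: {err:.4f}")
```

A single per-tensor scale like this is what makes naive RTN lossy at very low bit-widths; the listed papers pursue finer-grained or outlier-aware schemes (e.g. binary weight residuals in BiLLM, KV-cache-specific quantization in KVQuant) to close that gap.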