The Minitron Models and Their Teachers, Quantized
Warning: This model performs poorly. I ran the quantization three times, but it never produced a good model. I recommend using the asymmetric quantization version (kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-asym-4bit) instead.
This is nvidia/Mistral-NeMo-Minitron-8B-Base quantized to 4-bit with AutoRound (symmetric quantization). The model was created, tested, and evaluated by The Kaitchup. It is compatible with the main inference frameworks, e.g., TGI and vLLM.
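For reference, a minimal sketch of serving the recommended asymmetric checkpoint with vLLM's OpenAI-compatible server. The flags shown are assumptions to adjust for your hardware; the command requires a CUDA GPU and downloads several GB of weights:

```shell
# Sketch only: serve the GPTQ-quantized checkpoint with vLLM.
# Assumes a CUDA GPU with enough VRAM for the 4-bit 8B model.
pip install vllm

vllm serve kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-asym-4bit \
    --quantization gptq \
    --max-model-len 8192   # assumed context limit; raise or lower to fit memory
```

The server then exposes an OpenAI-compatible endpoint on port 8000 by default, so existing OpenAI client code can point at it by changing the base URL.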
Details on the quantization process and evaluation are in the article: Mistral-NeMo: 4.1x Smaller with Quantized Minitron