
Model Details

This is nvidia/Llama-3.1-Minitron-4B-Width-Base quantized to 4-bit with AutoRound (asymmetric quantization). The model was created, tested, and evaluated by The Kaitchup. It is compatible with the main inference frameworks, such as TGI and vLLM.

Details on the quantization process and evaluation are given in the article "Mistral-NeMo: 4.1x Smaller with Quantized Minitron".
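Since the checkpoint is a standard GPTQ-format repository, it can be loaded like any other Hugging Face causal LM. Below is a minimal sketch using `transformers`; it assumes a CUDA GPU and a GPTQ backend (e.g., auto-gptq or gptqmodel) are installed, and the prompt and generation settings are illustrative only.

```python
MODEL_ID = "kaitchup/Llama-3.1-Minitron-4B-Width-Base-AutoRound-GPTQ-asym-4bit"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imported lazily so the sketch can be read without the GPU stack installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places the quantized weights on the available GPU(s);
    # the GPTQ config stored in the repo is picked up automatically.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The capital of France is"))
```

For serving with vLLM, passing the same model ID to `vllm serve` works in the same way, since vLLM reads the GPTQ quantization config from the repository.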

Model size: 1.29B params (Safetensors)
Tensor types: I32, FP16