
# Minitron-8B-Base-FP8

An FP8-quantized checkpoint of `nvidia/Minitron-8B-Base`, intended for inference with vLLM.
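A minimal usage sketch with vLLM (requires a GPU; vLLM loads the quantization settings from the checkpoint's config):

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint; vLLM picks up the fp8 quantization
# configuration stored in the model repo.
llm = LLM(model="mgoin/Minitron-8B-Base-FP8")

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```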

## Evaluations

GSM8K (5-shot) results for this quantized model, measured with lm-evaluation-harness on vLLM:

```shell
lm_eval --model vllm --model_args pretrained=Minitron-8B-Base-FP8 --tasks gsm8k --num_fewshot 5 --batch_size auto
```

vllm (pretrained=Minitron-8B-Base-FP8), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5019|±  |0.0138|
|     |       |strict-match    |     5|exact_match|↑  |0.4989|±  |0.0138|

Baseline (`nvidia/Minitron-8B-Base`):

```shell
lm_eval --model vllm --model_args pretrained=nvidia/Minitron-8B-Base --tasks gsm8k --num_fewshot 5 --batch_size auto
```

vllm (pretrained=nvidia/Minitron-8B-Base), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5080|±  |0.0138|
|     |       |strict-match    |     5|exact_match|↑  |0.5064|±  |0.0138|
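As a quick sanity check, the gap between the quantized and baseline strict-match scores above can be compared against the reported standard error, using only the numbers from the two tables:

```python
# Strict-match, 5-shot GSM8K scores from the tables above.
quantized = 0.4989
baseline = 0.5064
stderr = 0.0138

drop = baseline - quantized
recovery = quantized / baseline

print(f"Absolute drop: {drop:.4f}")
print(f"Accuracy recovery: {recovery:.2%}")
# The drop is well within one standard error, so the FP8 checkpoint is
# statistically indistinguishable from the BF16 baseline on this task.
print(f"Within 1 stderr: {drop < stderr}")
```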

Evaluations reported in the original Minitron paper (figure not reproduced here).

Model size: 8.27B params · Tensor types: BF16, F8_E4M3 (Safetensors)
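The F8_E4M3 tensor type listed above is the FP8 format used for the quantized weights; its maximum representable magnitude is 448, so each tensor is rescaled before casting. A simplified sketch of that per-tensor scaling step (illustrative only; a real quantizer also rounds mantissas to E4M3's 3 bits):

```python
# Per-tensor FP8 (E4M3) scaling sketch: rescale weights so the largest
# magnitude maps to E4M3's maximum representable value, then clamp.
E4M3_MAX = 448.0

def fp8_scale_and_clamp(weights):
    amax = max(abs(w) for w in weights)
    scale = amax / E4M3_MAX  # per-tensor scaling factor stored alongside weights
    scaled = [max(-E4M3_MAX, min(E4M3_MAX, w / scale)) for w in weights]
    return scale, scaled

scale, scaled = fp8_scale_and_clamp([0.02, -1.5, 0.75, 3.0])
print(scale)
print(max(abs(v) for v in scaled))
```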
