
Model Overview

Description:

The NVIDIA Llama 3.1 405B Instruct FP8 model is the quantized version of Meta's Llama 3.1 405B Instruct model, which is an auto-regressive language model that uses an optimized transformer architecture. For more information, please check here. The NVIDIA Llama 3.1 405B Instruct FP8 model is quantized with TensorRT Model Optimizer.

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. This model has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA (Meta-Llama-3.1-405B-Instruct) Model Card.

License/Terms of Use:

nvidia-open-model-license

llama3.1

Model Architecture:

Architecture Type: Transformer
Network Architecture: Llama3.1

Input:

Input Type(s): Text
Input Format(s): String
Input Parameters: Sequences
Other Properties Related to Input: Context length up to 128K

Output:

Output Type(s): Text
Output Format: String
Output Parameters: Sequences
Other Properties Related to Output: N/A

Software Integration:

Supported Runtime Engine(s):

  • TensorRT-LLM
  • vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell
  • NVIDIA Hopper
  • NVIDIA Lovelace

Preferred Operating System(s):

  • Linux

Model Version(s):

The model is quantized with nvidia-modelopt v0.15.1.

Datasets:

Inference:

Engine: TensorRT-LLM or vLLM
Test Hardware: H200

Post-Training Quantization

This model was obtained by quantizing the weights and activations of Meta-Llama-3.1-405B-Instruct to the FP8 data type, ready for inference with TensorRT-LLM. Only the weights and activations of the linear operators within the transformer blocks are quantized. This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 50%. On H200 GPUs, we achieved an approximately 1.7x inference speedup.
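For reference, a minimal sketch of what an FP8 PTQ pass with TensorRT Model Optimizer looks like. The calibration data and exact settings used for this checkpoint are not specified here; the prompts and model-loading path below are placeholders.

import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-405B-Instruct"  # placeholder loading path
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def forward_loop(model):
    # Run a small calibration set through the model so ModelOpt can
    # collect the activation ranges needed for the FP8 scaling factors.
    # These prompts are placeholders, not the actual calibration data.
    for text in ["Hello, my name is", "The capital of France is"]:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model(**inputs)

# FP8_DEFAULT_CFG quantizes the weights and activations of linear layers to FP8.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)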

Usage

Deploy with TensorRT-LLM

To deploy the quantized checkpoint with TensorRT-LLM, follow the sample commands below using the TensorRT-LLM GitHub repo:

  • Checkpoint conversion:
python examples/llama/convert_checkpoint.py --model_dir Llama-3.1-405B-Instruct-FP8 --output_dir /ckpt --use_fp8
  • Build engines:
trtllm-build --checkpoint_dir /ckpt --output_dir /engine
  • Accuracy evaluation:
  1. Prepare the MMLU dataset:
mkdir data; wget https://people.eecs.berkeley.edu/~hendrycks/data.tar -O data/mmlu.tar
tar -xf data/mmlu.tar -C data && mv data/data data/mmlu
  2. Measure MMLU:
python examples/mmlu.py --engine_dir ./engine --tokenizer_dir Llama-3.1-405B-Instruct-FP8/ --test_trt_llm --data_dir data/mmlu
  • Throughput evaluation:

Please refer to the TensorRT-LLM benchmarking documentation for details.

Evaluation

The accuracy (MMLU, 5-shot) and throughput (tokens per second, TPS) benchmark results are presented in the table below:

| Precision | MMLU | TPS    |
|-----------|------|--------|
| FP16      | 86.6 | 275.0  |
| FP8       | 86.2 | 469.78 |

We benchmarked with tensorrt-llm v0.13 on 8 H200 GPUs, using batch size 1024 for the throughput measurements with in-flight batching enabled. We achieved **~1.7x** speedup with FP8.
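As a quick sanity check, the quoted speedup follows directly from the TPS numbers in the table:

# Speedup implied by the throughput numbers above.
fp16_tps = 275.0
fp8_tps = 469.78
print(f"FP8 speedup over FP16: {fp8_tps / fp16_tps:.2f}x")  # ~1.71x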

Deploy with vLLM

To deploy the quantized checkpoint with vLLM, follow the instructions below:

  1. Install vLLM following the directions here.
  2. To use a Model Optimizer PTQ checkpoint with vLLM, the quantization="modelopt" flag must be passed when initializing the LLM engine.

Example:

from vllm import LLM, SamplingParams

model_id = "nvidia/Llama-3.1-405B-Instruct-FP8"
tp_size = 8  # Use the number of GPUs required by your available GPU memory.
sampling_params = SamplingParams(temperature=0.8, top_p=0.9)
max_model_len = 8192

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

llm = LLM(model=model_id, quantization='modelopt', tensor_parallel_size=tp_size, max_model_len=max_model_len)
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

This model can also be deployed with an OpenAI-compatible server via the vLLM backend; instructions are available here.
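As an illustration, a minimal client-side sketch against such a server, assuming a recent vLLM with the vllm serve entrypoint and the default local port 8000:

# Assumes an OpenAI-compatible vLLM server is already running, e.g. started with:
#   vllm serve nvidia/Llama-3.1-405B-Instruct-FP8 --quantization modelopt --tensor-parallel-size 8
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="nvidia/Llama-3.1-405B-Instruct-FP8",
    messages=[{"role": "user", "content": "What is FP8 quantization?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)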
