Quark Quantized OCP FP8 Models
export MODEL_DIR=[local model checkpoint folder]  # or meta-llama/Meta-Llama-3.1-70B-Instruct
# single GPU
python3 quantize_quark.py \
--model_dir $MODEL_DIR \
--output_dir Meta-Llama-3.1-70B-Instruct-FP8-KV \
--quant_scheme w_fp8_a_fp8 \
--kv_cache_dtype fp8 \
--num_calib_data 128 \
--model_export quark_safetensors \
--no_weight_matrix_merge
# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
--model_dir $MODEL_DIR \
--output_dir Meta-Llama-3.1-70B-Instruct-FP8-KV \
--quant_scheme w_fp8_a_fp8 \
--kv_cache_dtype fp8 \
--num_calib_data 128 \
--model_export quark_safetensors \
--no_weight_matrix_merge \
--multi_gpu
Quark has its own export format and allows FP8 quantized models to be deployed efficiently with the vLLM backend (vLLM-compatible).
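As a sketch, the exported checkpoint directory can typically be served directly with vLLM. The exact flags depend on the installed vLLM version; the flag names below are assumptions based on common vLLM usage, and the tensor-parallel size is an illustrative choice for a 70B model:

```shell
# Hypothetical deployment sketch -- verify flag names against your vLLM version.
vllm serve Meta-Llama-3.1-70B-Instruct-FP8-KV \
    --kv-cache-dtype fp8 \
    --tensor-parallel-size 4
```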
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py. The quantization evaluation results are obtained in pseudo-quantization mode, which may differ slightly from actual quantized inference accuracy; these results are provided for reference only.
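For reference, perplexity is the exponential of the average negative log-likelihood per token. A minimal sketch of the metric itself (the function name is illustrative, not Quark's implementation in quantize_quark.py):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood) over a token sequence."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 1/2 to every token has perplexity ~2.
print(perplexity([math.log(0.5)] * 10))
```

A lower perplexity means the model assigns higher probability to the evaluation text, so the small PPL gap in the table below indicates limited accuracy loss from FP8 quantization.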
| Benchmark | Meta-Llama-3.1-70B-Instruct | Meta-Llama-3.1-70B-Instruct-FP8-KV (this model) |
| --- | --- | --- |
| Perplexity-wikitext2 | 3.7797 | 3.8561 |
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
Base model: meta-llama/Llama-3.1-70B