---
language:
- en
---
# Input files for generating the Importance Matrix

## How to quantize with an imatrix in llama.cpp
- Get one of the input files collected here, or elsewhere.
- Convert or download the model you want to quantise, in fp16 GGUF format.
- Generate an imatrix file specific to the model you want to quantise:
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

# -ngl 144     : number of layers to offload to the GPU (ideally, offload all of the model's layers)
# -t 12        : number of threads (should match the number of physical CPU cores)
# -c 512       : context size; testing suggests 512 works well (default=512, 0=loaded from model)
# -b 512       : batch size (default=512)
# --chunks 100 : number of chunks of the input file to process (100 is recommended)
# --mlock      : keep the model in RAM (only use if you have enough RAM for the whole fp16 model)
```
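As a concrete illustration, a run might look like the sketch below. The llama.cpp checkout location, model path, calibration file, and output name are hypothetical placeholders, not files shipped with this collection:
```
# Illustrative only: substitute your own paths and calibration file.
cd ~/llama.cpp
./imatrix -m models/mistral-7b/ggml-model-f16.gguf \
          -f calibration/wiki.train.raw \
          -o mistral-7b.imatrix \
          -t 12 -ngl 144 --chunks 100 -b 512 -c 512
```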
- Use the generated binary imatrix file to quantise the model:
```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
```
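For example, reusing the same hypothetical file names as above:
```
./quantize --imatrix mistral-7b.imatrix \
           models/mistral-7b/ggml-model-f16.gguf \
           models/mistral-7b/ggml-model-IQ4_XS.gguf IQ4_XS
```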
Note: normal quantisation also benefits from using an imatrix file. It also seems that a larger input dataset gives better results at the higher quantisation levels.
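Since the same imatrix file can be reused for any quantisation type, one way to take advantage of this is to loop over several types from a single fp16 model. A minimal sketch, again assuming the hypothetical paths used above:
```
# Sketch: reuse one imatrix across several quantisation types (paths and type list are illustrative).
for q in IQ4_XS Q4_K_M Q5_K_M; do
  ./quantize --imatrix mistral-7b.imatrix \
             models/mistral-7b/ggml-model-f16.gguf \
             models/mistral-7b/ggml-model-${q}.gguf ${q}
done
```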