
Official AQLM quantization of Mistral-7B-v0.1.

For this quantization, we used 1 codebook of 16 bits.
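
For intuition on where the "2Bit" in the repo name comes from: assuming AQLM's common group size of 8 weights per code (an assumption, not stated in this card), one 16-bit codebook index per group works out to 2 bits per weight.

```python
# Back-of-the-envelope bit budget for the "1x16" scheme.
# The group size of 8 weights per code is an assumption (AQLM's common
# default), not something stated in this card.
num_codebooks = 1
bits_per_codebook = 16
weights_per_group = 8  # assumed
bits_per_weight = num_codebooks * bits_per_codebook / weights_per_group
print(bits_per_weight)  # 2.0 -> the "2Bit" in the repo name
```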

Results (0-shot accuracy):

| Model | Quantization | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, GB |
|---|---|---|---|---|---|---|---|
| Mistral-7B-v0.1 | None | 0.7364 | 0.8047 | 0.6115 | 0.7887 | 0.4923 | 14.5 |
| | 1x16 | 0.6914 | 0.7845 | 0.5745 | 0.7504 | 0.4420 | 2.51 |

To learn more about inference, as well as how to quantize models yourself, please refer to the official GitHub repo (https://github.com/Vahe1994/AQLM).
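
The checkpoint is in the standard `transformers` format, so a plain `AutoModelForCausalLM` load should work once the `aqlm` package and a recent `transformers` release with AQLM support are installed. The snippet below is a minimal sketch rather than the repo's official instructions; the prompt, `device_map="auto"` (which requires `accelerate`), and generation settings are illustrative.

```python
# Minimal inference sketch: assumes `aqlm`, `accelerate`, and a recent
# `transformers` with AQLM support are installed, and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ISTA-DASLab/Mistral-7B-v0.1-AQLM-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the dtypes stored in the checkpoint
    device_map="auto",    # place the ~2.5 GB model automatically
)

# Illustrative prompt and generation settings.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```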
