---
library_name: transformers
tags:
  - llama
  - facebook
  - meta
  - llama-3
  - conversational
  - text-generation-inference
---

Official AQLM quantization of meta-llama/Meta-Llama-3.1-8B finetuned with PV-Tuning.

For this quantization, we used 1 codebook of 16 bits and a group size of 8.
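As a rough sanity check (not part of the original card), the 1x16g8 scheme spends one 16-bit codebook index per group of 8 weights, i.e. 2 bits per weight. The end-to-end size reduction (16.1 GB → 4.1 GB, about 3.9x) is smaller than the naive 16-bit → 2-bit ratio would suggest, presumably because codebooks, scales, and layers kept in higher precision (such as embeddings and norms) add overhead:

```python
# Back-of-the-envelope check of the compression implied by 1x16g8
# (one 16-bit codebook index per group of 8 weights).

codebooks = 1        # number of codebooks per group
codebook_bits = 16   # bits per codebook index
group_size = 8       # weights encoded by each index

bits_per_weight = codebooks * codebook_bits / group_size
print(bits_per_weight)  # 2.0 bits per weight vs. 16 for fp16

# Model sizes in GB as reported in this card.
fp16_size_gb = 16.1
quant_size_gb = 4.1

compression = fp16_size_gb / quant_size_gb
print(f"{compression:.2f}x")  # ~3.93x end-to-end
```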

Results:

| Model                        | Quantization | MMLU (5-shot) | ArcC   | ArcE   | Hellaswag | PiQA   | Winogrande | Model size, Gb |
|------------------------------|--------------|---------------|--------|--------|-----------|--------|------------|----------------|
| meta-llama/Meta-Llama-3.1-8B | None         | 0.6521        | 0.5145 | 0.8144 | 0.5998    | 0.8014 | 0.7356     | 16.1           |
|                              | 1x16g8       | 0.5746        | 0.4454 | 0.7744 | 0.5629    | 0.7954 | 0.7024     | 4.1            |

**Note:** We used `lm-eval=0.4.0` for evaluation.

**UPD (09.08.2024):** Uploaded a new version of the model, finetuned on more data and for longer, with better quality.