---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
---
Fine-tune of the pre-DPO [Bagel model](https://huggingface.co/jondurbin/bagel-34b-v0.2) on the [MetaMathFewshot](https://huggingface.co/datasets/abacusai/MetaMathFewshot) dataset.
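
A minimal usage sketch with the standard `transformers` API is shown below. The repo id is a placeholder, not the model's confirmed path; substitute the actual Hugging Face path for this model.

```python
# Sketch only: loading and prompting the fine-tuned model via transformers.
# The repo id below is a placeholder (hypothetical) for this model's HF path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/metamath-bagel-34b"  # placeholder repo id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision to fit a 34B model in less memory
    device_map="auto",
)

prompt = "Natalia sold clips to 48 friends in April and half as many in May. How many clips did she sell in total?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```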
## Evaluation Results
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
For comparison, the original metamath/MetaMath-Mistral-7B scored 46.17 on GSM8K with an average score of 69.7.