This model was trained as part of a series of experiments comparing the performance of pure DPO, SFT, and ORPO fine-tuning, all run with Unsloth and Hugging Face TRL.
Note: Extremely buggy, not recommended for use.
Benchmarks
Average: 59.63
ARC: 59.47
HellaSwag: 82.47
MMLU: 62.31
TruthfulQA: 40.11
Winogrande: 78.3
GSM8K: 35.1
Training Details
Duration: ~10-12 hours on one Kaggle T4 with Unsloth
Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
LoRA rank: 8
LoRA alpha: 16
Learning rate: 5e-5
DPO beta: 0.1
Batch size: 8
Epochs: 1
Learning rate scheduler: Linear
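The hyperparameters above map onto Unsloth plus TRL's DPOTrainer roughly as in the sketch below. This is a reconstruction rather than the exact training script: the max sequence length, LoRA target modules, and the per-device/gradient-accumulation split of the batch size of 8 are assumptions, and argument names differ slightly between TRL versions.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Load the 4-bit base model and attach LoRA adapters (rank 8, alpha 16).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.2-bnb-4bit",
    max_seq_length=2048,  # assumption: not stated on the card
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
)

# Preference data; depending on the TRL version, the chosen/rejected message
# lists may need to be flattened into prompt/chosen/rejected strings first.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with LoRA, the frozen base weights serve as the reference
    args=DPOConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size 8 (split assumed)
        learning_rate=5e-5,
        lr_scheduler_type="linear",
        num_train_epochs=1,
        beta=0.1,
        output_dir="outputs",
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```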
Prompt Format: ChatML
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
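For inference, the prompt can be assembled directly as a string following the layout above (or via the tokenizer's chat template, if one is configured). A minimal sketch; the helper name is illustrative, not part of this card:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # Mirrors the ChatML layout shown above; the assistant turn is left
    # open so the model writes the reply, ending at <|im_end|>.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Why is the sky blue?")
```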
WandB Reports