
https://wandb.ai/open-assistant/supervised-finetuning/runs/gljcl75b

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 49.63 |
| ARC (25-shot) | 57.25 |
| HellaSwag (10-shot) | 79.99 |
| MMLU (5-shot) | 45.52 |
| TruthfulQA (0-shot) | 44.45 |
| Winogrande (5-shot) | 77.58 |
| GSM8K (5-shot) | 13.87 |
| DROP (3-shot) | 28.71 |