# gpt2-xl-alpaca

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---:|
| Avg. | 28.54 |
| ARC (25-shot) | 26.79 |
| HellaSwag (10-shot) | 43.85 |
| MMLU (5-shot) | 26.31 |
| TruthfulQA (0-shot) | 39.4 |
| Winogrande (5-shot) | 56.91 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 6.55 |
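The headline "Avg." appears to be the unweighted mean of the seven benchmark scores. A minimal sketch recomputing it (not part of the original card, just a sanity check):

```python
# Recompute the leaderboard average as the unweighted mean of the
# seven reported benchmark scores (assumption: no weighting is applied).
scores = {
    "ARC (25-shot)": 26.79,
    "HellaSwag (10-shot)": 43.85,
    "MMLU (5-shot)": 26.31,
    "TruthfulQA (0-shot)": 39.4,
    "Winogrande (5-shot)": 56.91,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 6.55,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 28.54, matching the reported value
```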