Adding Evaluation Results #2
opened by leaderboard-pr-bot
README.md CHANGED
@@ -64,4 +64,17 @@ output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
     year={2023},
     url={https://arxiv.org/abs/2311.07052}
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniChat-3B)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 42.94                     |
+| ARC (25-shot)         | 44.03                     |
+| HellaSwag (10-shot)   | 67.19                     |
+| MMLU (5-shot)         | 39.17                     |
+| TruthfulQA (0-shot)   | 45.67                     |
+| Winogrande (5-shot)   | 65.27                     |
+| GSM8K (5-shot)        | 10.54                     |
+| DROP (3-shot)         | 28.73                     |
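The linked details dataset holds the per-example outputs behind the averages in the table above. Below is a minimal sketch of loading it with the `datasets` library; the `harness_winogrande_5` config name and the `latest` split are assumptions based on the usual layout of the leaderboard's `details_*` repositories, not something stated in this PR.

```python
from datasets import load_dataset

# Pull the per-example records for one task of the evaluation.
# Config name and split are assumed from the typical details-dataset layout.
details = load_dataset(
    "open-llm-leaderboard/details_GeneZC__MiniChat-3B",
    "harness_winogrande_5",  # one config per task/shot setting (assumed naming)
    split="latest",
)
print(details[0])  # one row per evaluated example
```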