Adding Evaluation Results #44
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -296,4 +296,17 @@ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can
   journal={arXiv preprint arXiv:2301.03988},
   year={2023}
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_bigcode__santacoder)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 25.33 |
+| ARC (25-shot)        | 26.28 |
+| HellaSwag (10-shot)  | 25.6  |
+| MMLU (5-shot)        | 25.89 |
+| TruthfulQA (0-shot)  | 51.24 |
+| Winogrande (5-shot)  | 48.07 |
+| GSM8K (5-shot)       | 0.0   |
+| DROP (3-shot)        | 0.21  |
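The detailed results linked above live in a Hub dataset, which can be inspected with the `datasets` library. The snippet below is a minimal sketch, not part of this PR; it assumes the details repo follows the usual leaderboard layout of one configuration per benchmark run, which this change does not document.

```python
# Minimal sketch (not part of this PR): inspect the detailed leaderboard results
# referenced above. Assumes the `datasets` library is installed and that the
# details repo exposes one configuration per benchmark run (an assumption).
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_bigcode__santacoder"

# Discover the available configurations rather than hard-coding names.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one configuration; the returned DatasetDict holds the per-example records.
details = load_dataset(repo_id, configs[0])
print(details)
```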