Adding Evaluation Results
#2 opened by leaderboard-pr-bot
README.md CHANGED
@@ -173,3 +173,17 @@ slices:
 </body>
 </html>
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheSkullery__Aura-llama)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |63.13|
+|AI2 Reasoning Challenge (25-Shot)|58.02|
+|HellaSwag (10-Shot)              |77.82|
+|MMLU (5-Shot)                    |65.61|
+|TruthfulQA (0-shot)              |51.94|
+|Winogrande (5-shot)              |73.40|
+|GSM8k (5-shot)                   |52.01|
+
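For readers who want to inspect the linked details dataset programmatically rather than through the web UI, the sketch below shows one way to do so with the `huggingface_hub` and `datasets` libraries. The config name and split name passed to `load_dataset` are assumptions about how the leaderboard organizes its details repositories, not something stated in this PR; listing the repository files first shows which names actually exist.

```python
# Minimal sketch for exploring the details dataset linked above.
# The config ("results") and split ("latest") names below are assumptions;
# list the repo files first to see what is actually available.
from huggingface_hub import list_repo_files
from datasets import load_dataset

repo_id = "open-llm-leaderboard/details_TheSkullery__Aura-llama"

# Show the files stored in the dataset repository.
for path in list_repo_files(repo_id, repo_type="dataset"):
    print(path)

# Assumed config and split names; adjust to match the listing above.
results = load_dataset(repo_id, "results", split="latest")
print(results[0])
```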