leaderboard-pr-bot committed
Commit 869d293
Parent: fd0b4db

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1):
  1. README.md +14 -1

README.md CHANGED
@@ -93,4 +93,17 @@ Also thanks to Meta for LLaMA.
 
 Each model was hand picked and considered for what it could contribute to this ensemble.
 Thanks to each and every one of you for your incredible work developing some of the best things
-to come out of this community.
+to come out of this community.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CalderaAI__13B-Ouroboros)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 44.66 |
+| ARC (25-shot)         | 57.42 |
+| HellaSwag (10-shot)   | 82.11 |
+| MMLU (5-shot)         | 51.43 |
+| TruthfulQA (0-shot)   | 47.99 |
+| Winogrande (5-shot)   | 57.85 |
+| GSM8K (5-shot)        | 0.45  |
+| DROP (3-shot)         | 15.36 |
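As a sanity check on the table above: the leaderboard's "Avg." column is the unweighted mean of the seven benchmark scores. A minimal sketch (the `scores` dict below simply restates the table; it is not part of any leaderboard API):

```python
# Benchmark scores copied from the evaluation-results table above
# (Open LLM Leaderboard results for CalderaAI/13B-Ouroboros).
scores = {
    "ARC (25-shot)": 57.42,
    "HellaSwag (10-shot)": 82.11,
    "MMLU (5-shot)": 51.43,
    "TruthfulQA (0-shot)": 47.99,
    "Winogrande (5-shot)": 57.85,
    "GSM8K (5-shot)": 0.45,
    "DROP (3-shot)": 15.36,
}

# "Avg." is the plain arithmetic mean of all benchmarks, rounded to 2 places.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 44.66, matching the table's "Avg." row
```

This confirms the reported 44.66 average is internally consistent with the individual benchmark scores.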