Adding Evaluation Results #1
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -76,3 +76,17 @@ This way, the model can better understand the relationship between different par
 ## Evaluation
 
 <B>TODO</B>
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-0.7T-7B-open-instruct-v1.1)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 39.33 |
+| ARC (25-shot) | 46.67 |
+| HellaSwag (10-shot) | 67.67 |
+| MMLU (5-shot) | 28.55 |
+| TruthfulQA (0-shot) | 37.6 |
+| Winogrande (5-shot) | 65.43 |
+| GSM8K (5-shot) | 0.76 |
+| DROP (3-shot) | 28.61 |
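
As a quick sanity check, the reported Avg. of 39.33 is the unweighted mean of the seven per-benchmark scores in the table above. The short sketch below (plain Python, no dependencies) reproduces it from the values in the diff.

```python
# Sanity check: "Avg." in the table above is the unweighted mean of the
# seven benchmark scores reported in this PR.
scores = {
    "ARC (25-shot)": 46.67,
    "HellaSwag (10-shot)": 67.67,
    "MMLU (5-shot)": 28.55,
    "TruthfulQA (0-shot)": 37.6,
    "Winogrande (5-shot)": 65.43,
    "GSM8K (5-shot)": 0.76,
    "DROP (3-shot)": 28.61,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 39.33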
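
To inspect the per-example results behind these numbers, the details dataset linked in the diff can be loaded with the `datasets` library. This is a minimal sketch, assuming the usual layout of Open LLM Leaderboard details repos: one config per benchmark/shot setting and a `latest` split. Both the config name and the split name here are assumptions; check the dataset card for the exact names.

```python
from datasets import load_dataset

# Repo id taken from the link in the diff above; the config name
# ("harness_winogrande_5") and split ("latest") are assumed, not confirmed.
details = load_dataset(
    "open-llm-leaderboard/details_VMware__open-llama-0.7T-7B-open-instruct-v1.1",
    "harness_winogrande_5",
    split="latest",
)
print(details[0])  # one row per evaluated example
```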