Adding Evaluation Results
#5 by leaderboard-pr-bot - opened

README.md CHANGED
@@ -68,3 +68,17 @@ Despite its advanced capabilities, OpenChat is still bound by the limitations in
 
 **Hallucination of Non-existent Information**
 OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_v2_w)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 47.16                     |
+| ARC (25-shot)         | 57.34                     |
+| HellaSwag (10-shot)   | 81.23                     |
+| MMLU (5-shot)         | 50.17                     |
+| TruthfulQA (0-shot)   | 50.7                      |
+| Winogrande (5-shot)   | 75.93                     |
+| GSM8K (5-shot)        | 8.42                      |
+| DROP (3-shot)         | 6.35                      |
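For anyone sanity-checking the numbers: the Avg. row matches the unweighted mean of the seven benchmark scores. Note that this averaging rule is an assumption on my part rather than something stated in the PR; a minimal sketch in Python:

```python
# Assumption: "Avg." is the unweighted mean of the seven benchmark scores
# listed in the table above.
scores = {
    "ARC (25-shot)": 57.34,
    "HellaSwag (10-shot)": 81.23,
    "MMLU (5-shot)": 50.17,
    "TruthfulQA (0-shot)": 50.7,
    "Winogrande (5-shot)": 75.93,
    "GSM8K (5-shot)": 8.42,
    "DROP (3-shot)": 6.35,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 47.16, matching the Avg. row in the table
```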