leaderboard-pr-bot committed
Commit
2fb7f69
1 Parent(s): 23a3cf3

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -140,4 +140,17 @@ The following `bitsandbytes` quantization config was used during training:
 感谢:
 - LLaMA2
 - Firefly项目
-- shareGPT中文数据集的建设者们
+- shareGPT中文数据集的建设者们
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shareAI__llama2-13b-Chinese-chat)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 48.23 |
+| ARC (25-shot)         | 60.58 |
+| HellaSwag (10-shot)   | 82.19 |
+| MMLU (5-shot)         | 55.45 |
+| TruthfulQA (0-shot)   | 45.11 |
+| Winogrande (5-shot)   | 76.64 |
+| GSM8K (5-shot)        | 11.37 |
+| DROP (3-shot)         | 6.24  |
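As a quick sanity check of the table being added, the Avg. row matches the unweighted arithmetic mean of the seven benchmark scores. A minimal sketch of that check, assuming the leaderboard average is a plain mean of the listed metrics (the variable names are illustrative, not taken from the leaderboard code):

```python
# Sanity check: Avg. should equal the unweighted mean of the benchmark scores.
# Scores are copied from the table above; names are illustrative only.
scores = {
    "ARC (25-shot)": 60.58,
    "HellaSwag (10-shot)": 82.19,
    "MMLU (5-shot)": 55.45,
    "TruthfulQA (0-shot)": 45.11,
    "Winogrande (5-shot)": 76.64,
    "GSM8K (5-shot)": 11.37,
    "DROP (3-shot)": 6.24,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 48.23, matching the Avg. row in the diff
```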