leaderboard-pr-bot committed
Commit 21842e5
1 parent: 311c046

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
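Once a PR like this is merged, the `model-index` metadata it adds can be read back programmatically. The sketch below is illustrative only and not part of this PR; it assumes `huggingface_hub` is installed and uses its `ModelCard`/`EvalResult` parsing of the card metadata.

```python
# Minimal sketch: read the evaluation results this PR adds to the model card.
# Assumes `pip install huggingface_hub`; repo id taken from the diff below.
from huggingface_hub import ModelCard

card = ModelCard.load("cognitivecomputations/dolphin-2.9.3-mistral-7B-32k")

# model-index entries are parsed into EvalResult objects on card.data
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_value} ({result.metric_type})")
```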

Files changed (1)
  1. README.md +110 -1
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
 license: apache-2.0
-base_model: mistralai/Mistral-7B-v0.3
 tags:
 - generated_from_trainer
 - axolotl
+base_model: mistralai/Mistral-7B-v0.3
 datasets:
 - cognitivecomputations/Dolphin-2.9
 - teknium/OpenHermes-2.5
@@ -13,6 +13,101 @@ datasets:
 - microsoft/orca-math-word-problems-200k
 - Locutusque/function-calling-chatml
 - internlm/Agent-FLAN
+model-index:
+- name: dolphin-2.9.3-mistral-7B-32k
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 41.26
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 26.91
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 4.83
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.7
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 17.93
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 20.23
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
+      name: Open LLM Leaderboard
 ---
 
 # Dolphin 2.9.3 Mistral 7b v0.3 32k 🐬
@@ -177,3 +272,17 @@ tokens:
 - "<|im_start|>"
 
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__dolphin-2.9.3-mistral-7B-32k)
+
+|      Metric       |Value|
+|-------------------|----:|
+|Avg.               |19.31|
+|IFEval (0-Shot)    |41.26|
+|BBH (3-Shot)       |26.91|
+|MATH Lvl 5 (4-Shot)| 4.83|
+|GPQA (0-shot)      | 4.70|
+|MuSR (0-shot)      |17.93|
+|MMLU-PRO (5-shot)  |20.23|
+
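For reference, the `Avg.` row in the added table is consistent with the unweighted mean of the six benchmark scores. A quick check (the unweighted mean is an assumption about how the leaderboard aggregates, though it matches the numbers above):

```python
# Quick arithmetic check (assumption: "Avg." is the plain mean of the six scores).
scores = {
    "IFEval (0-Shot)": 41.26,
    "BBH (3-Shot)": 26.91,
    "MATH Lvl 5 (4-Shot)": 4.83,
    "GPQA (0-shot)": 4.70,
    "MuSR (0-shot)": 17.93,
    "MMLU-PRO (5-shot)": 20.23,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 19.31, matching the Avg. row in the table
```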