leaderboard-pr-bot committed on
Commit daf9d3d
1 Parent(s): 90a728a

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
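For reference, the `model-index` block added below is standard machine-readable card metadata. A minimal sketch of reading it back after the PR is merged (not part of this PR; assumes `huggingface_hub` and `pyyaml` are installed, and takes the repo id from the leaderboard query URLs in the diff):

```python
# Minimal sketch (not part of this PR): read the model-index metadata that
# this PR adds to the model card's YAML front matter.
import yaml
from huggingface_hub import hf_hub_download

# Repo id taken from the leaderboard query URLs in the diff below.
readme_path = hf_hub_download(
    repo_id="chargoddard/prometheus-2-llama-3-8b", filename="README.md"
)
with open(readme_path, encoding="utf-8") as f:
    text = f.read()

# The card's YAML front matter sits between the first two '---' markers.
card_data = yaml.safe_load(text.split("---")[1])

for result in card_data["model-index"][0]["results"]:
    metric = result["metrics"][0]
    print(f"{result['dataset']['name']}: {metric['value']} ({metric['type']})")
```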

Files changed (1)
  1. README.md +115 -7
README.md CHANGED
@@ -1,17 +1,112 @@
 ---
-base_model:
-- chargoddard/prometheus-llama-3-8b-preference
-- chargoddard/prometheus-llama-3-8b-absolute
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - mergekit
 - merge
-license: apache-2.0
+base_model:
+- chargoddard/prometheus-llama-3-8b-preference
+- chargoddard/prometheus-llama-3-8b-absolute
 datasets:
 - prometheus-eval/Preference-Collection
 - prometheus-eval/Feedback-Collection
-language:
-- en
+model-index:
+- name: prometheus-2-llama-3-8b
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 52.89
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 27.8
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 7.25
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.02
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 0.78
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 23.19
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=chargoddard/prometheus-2-llama-3-8b
+      name: Open LLM Leaderboard
 ---
 # prometheus-2-llama-3-8b
 
@@ -52,4 +147,17 @@ Uses Llama 3 Instruct prompt format and the same prompts as prometheus-7b-v2.0.
   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__prometheus-2-llama-3-8b)
+
+|      Metric       |Value|
+|-------------------|----:|
+|Avg.               |19.16|
+|IFEval (0-Shot)    |52.89|
+|BBH (3-Shot)       |27.80|
+|MATH Lvl 5 (4-Shot)| 7.25|
+|GPQA (0-shot)      | 3.02|
+|MuSR (0-shot)      | 0.78|
+|MMLU-PRO (5-shot)  |23.19|
+
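As a quick sanity check (not part of the diff), the `Avg.` row in the added table is just the arithmetic mean of the six benchmark scores:

```python
# Sanity check: the Avg. row is the mean of the six leaderboard scores.
scores = [52.89, 27.80, 7.25, 3.02, 0.78, 23.19]
print(sum(scores) / len(scores))  # ~19.155, reported as 19.16
```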