leaderboard-pt-pr-bot committed
Commit 2d5cd91 • 1 Parent(s): 9492b22

Adding the Open Portuguese LLM Leaderboard Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard/discussions

Files changed (1)
  1. README.md (+171 −6)
README.md CHANGED
@@ -1,13 +1,159 @@
 ---
-license: other
-license_name: tongyi-qianwen
-license_link: >-
-  https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
 language:
 - en
-pipeline_tag: text-generation
+license: other
 tags:
 - chat
+license_name: tongyi-qianwen
+license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
+pipeline_tag: text-generation
+model-index:
+- name: Qwen1.5-14B-Chat
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: ENEM Challenge (No Images)
+      type: eduagarcia/enem_challenge
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 69.84
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BLUEX (No Images)
+      type: eduagarcia-temp/BLUEX_without_images
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 60.78
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: OAB Exams
+      type: eduagarcia/oab_exams
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 48.43
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 RTE
+      type: assin2
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 93.22
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 STS
+      type: eduagarcia/portuguese_benchmark
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: pearson
+      value: 78.09
+      name: pearson
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: FaQuAD NLI
+      type: ruanchaves/faquad-nli
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 78.86
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HateBR Binary
+      type: ruanchaves/hatebr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 83.29
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: PT Hate Speech Binary
+      type: hate_speech_portuguese
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 71.29
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: tweetSentBR
+      type: eduagarcia/tweetsentbr_fewshot
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 69.5
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=Qwen/Qwen1.5-14B-Chat
+      name: Open Portuguese LLM Leaderboard
 ---
 
 # Qwen1.5-14B-Chat
@@ -95,4 +241,23 @@ If you find our work helpful, feel free to give us a cite.
   journal={arXiv preprint arXiv:2309.16609},
   year={2023}
 }
-```
+```
+
+
+# Open Portuguese LLM Leaderboard Evaluation Results
+
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/Qwen/Qwen1.5-14B-Chat) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+| Metric                   |  Value  |
+|--------------------------|---------|
+|Average                   |**72.59**|
+|ENEM Challenge (No Images)|    69.84|
+|BLUEX (No Images)         |    60.78|
+|OAB Exams                 |    48.43|
+|Assin2 RTE                |    93.22|
+|Assin2 STS                |    78.09|
+|FaQuAD NLI                |    78.86|
+|HateBR Binary             |    83.29|
+|PT Hate Speech Binary     |    71.29|
+|tweetSentBR               |    69.50|
+
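The **Average** row in the added results table is consistent with the nine per-benchmark scores in the `model-index` entries. A minimal sanity check (the score values are copied from this PR; the script itself is illustrative and not part of the change):

```python
# Per-benchmark scores added by this PR for Qwen/Qwen1.5-14B-Chat,
# copied from the model-index metrics above.
scores = {
    "ENEM Challenge (No Images)": 69.84,
    "BLUEX (No Images)": 60.78,
    "OAB Exams": 48.43,
    "Assin2 RTE": 93.22,
    "Assin2 STS": 78.09,
    "FaQuAD NLI": 78.86,
    "HateBR Binary": 83.29,
    "PT Hate Speech Binary": 71.29,
    "tweetSentBR": 69.50,
}

# The leaderboard's Average is the arithmetic mean of the nine scores,
# rounded to two decimals.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 72.59, matching the Average row in the table
```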