mihaimasala committed
Commit d1243da
1 parent: 6a77f83

Update README.md

Files changed (1):
  1. README.md +499 -6
README.md CHANGED
@@ -14,6 +14,481 @@ datasets:
  - OpenLLM-Ro/ro_sft_camel
  - OpenLLM-Ro/ro_sft_oasst
  - OpenLLM-Ro/ro_sft_ultrachat
+ model-index:
+ - name: OpenLLM-Ro/RoLlama2-7b-Instruct-v2
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       name: RoMT-Bench
+       type: RoMT-Bench
+     metrics:
+     - name: Score
+       type: Score
+       value: 4.43
+   - task:
+       type: text-generation
+     dataset:
+       name: RoCulturaBench
+       type: RoCulturaBench
+     metrics:
+     - name: Score
+       type: Score
+       value: 4.08
+   - task:
+       type: text-generation
+     dataset:
+       name: Romanian_Academic_Benchmarks
+       type: Romanian_Academic_Benchmarks
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 44.50
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_arc_challenge
+       type: OpenLLM-Ro/ro_arc_challenge
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 44.73
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_mmlu
+       type: OpenLLM-Ro/ro_mmlu
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 40.39
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_winogrande
+       type: OpenLLM-Ro/ro_winogrande
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 63.67
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_hellaswag
+       type: OpenLLM-Ro/ro_hellaswag
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 59.12
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_gsm8k
+       type: OpenLLM-Ro/ro_gsm8k
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 13.29
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_truthfulqa
+       type: OpenLLM-Ro/ro_truthfulqa
+     metrics:
+     - name: Average accuracy
+       type: accuracy
+       value: 45.78
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_binary
+       type: LaRoSeDa_binary
+     metrics:
+     - name: Average macro-f1
+       type: macro-f1
+       value: 97.66
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_multiclass
+       type: LaRoSeDa_multiclass
+     metrics:
+     - name: Average macro-f1
+       type: macro-f1
+       value: 62.41
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_binary_finetuned
+       type: LaRoSeDa_binary_finetuned
+     metrics:
+     - name: Average macro-f1
+       type: macro-f1
+       value: 97.97
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_multiclass_finetuned
+       type: LaRoSeDa_multiclass_finetuned
+     metrics:
+     - name: Average macro-f1
+       type: macro-f1
+       value: 60.89
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_EN-RO
+       type: WMT_EN-RO
+     metrics:
+     - name: Average bleu
+       type: bleu
+       value: 27.13
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_RO-EN
+       type: WMT_RO-EN
+     metrics:
+     - name: Average bleu
+       type: bleu
+       value: 19.39
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_EN-RO_finetuned
+       type: WMT_EN-RO_finetuned
+     metrics:
+     - name: Average bleu
+       type: bleu
+       value: 27.63
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_RO-EN_finetuned
+       type: WMT_RO-EN_finetuned
+     metrics:
+     - name: Average bleu
+       type: bleu
+       value: 39.75
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD
+       type: XQuAD
+     metrics:
+     - name: Average exact_match
+       type: exact_match
+       value: 45.71
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD
+       type: XQuAD
+     metrics:
+     - name: Average f1
+       type: f1
+       value: 65.08
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD_finetuned
+       type: XQuAD_finetuned
+     metrics:
+     - name: Average exact_match
+       type: exact_match
+       value: 59.24
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD_finetuned
+       type: XQuAD_finetuned
+     metrics:
+     - name: Average f1
+       type: f1
+       value: 74.25
+   - task:
+       type: text-generation
+     dataset:
+       name: STS
+       type: STS
+     metrics:
+     - name: Average spearman
+       type: spearman
+       value: 59.69
+   - task:
+       type: text-generation
+     dataset:
+       name: STS
+       type: STS
+     metrics:
+     - name: Average pearson
+       type: pearson
+       value: 57.16
+   - task:
+       type: text-generation
+     dataset:
+       name: STS_finetuned
+       type: STS_finetuned
+     metrics:
+     - name: Average spearman
+       type: spearman
+       value: 84.66
+   - task:
+       type: text-generation
+     dataset:
+       name: STS_finetuned
+       type: STS_finetuned
+     metrics:
+     - name: Average pearson
+       type: pearson
+       value: 85.07
+   - task:
+       type: text-generation
+     dataset:
+       name: RoMT-Bench
+       type: RoMT-Bench
+     metrics:
+     - name: First turn
+       type: Score
+       value: 4.92
+     - name: Second turn
+       type: Score
+       value: 3.94
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_arc_challenge
+       type: OpenLLM-Ro/ro_arc_challenge
+     metrics:
+     - name: 0-shot
+       type: accuracy
+       value: 42.67
+     - name: 1-shot
+       type: accuracy
+       value: 44.64
+     - name: 3-shot
+       type: accuracy
+       value: 44.90
+     - name: 5-shot
+       type: accuracy
+       value: 45.16
+     - name: 10-shot
+       type: accuracy
+       value: 45.67
+     - name: 25-shot
+       type: accuracy
+       value: 45.33
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_mmlu
+       type: OpenLLM-Ro/ro_mmlu
+     metrics:
+     - name: 0-shot
+       type: accuracy
+       value: 39.89
+     - name: 1-shot
+       type: accuracy
+       value: 40.08
+     - name: 3-shot
+       type: accuracy
+       value: 40.60
+     - name: 5-shot
+       type: accuracy
+       value: 40.99
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_winogrande
+       type: OpenLLM-Ro/ro_winogrande
+     metrics:
+     - name: 0-shot
+       type: accuracy
+       value: 63.06
+     - name: 1-shot
+       type: accuracy
+       value: 62.98
+     - name: 3-shot
+       type: accuracy
+       value: 65.19
+     - name: 5-shot
+       type: accuracy
+       value: 63.46
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_hellaswag
+       type: OpenLLM-Ro/ro_hellaswag
+     metrics:
+     - name: 0-shot
+       type: accuracy
+       value: 58.82
+     - name: 1-shot
+       type: accuracy
+       value: 58.44
+     - name: 3-shot
+       type: accuracy
+       value: 59.28
+     - name: 5-shot
+       type: accuracy
+       value: 59.29
+     - name: 10-shot
+       type: accuracy
+       value: 59.77
+   - task:
+       type: text-generation
+     dataset:
+       name: OpenLLM-Ro/ro_gsm8k
+       type: OpenLLM-Ro/ro_gsm8k
+     metrics:
+     - name: 0-shot
+       type: accuracy
+       value: 6.14
+     - name: 1-shot
+       type: accuracy
+       value: 15.01
+     - name: 3-shot
+       type: accuracy
+       value: 18.72
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_binary
+       type: LaRoSeDa_binary
+     metrics:
+     - name: 0-shot
+       type: macro-f1
+       value: 98.20
+     - name: 1-shot
+       type: macro-f1
+       value: 96.63
+     - name: 3-shot
+       type: macro-f1
+       value: 97.67
+     - name: 5-shot
+       type: macro-f1
+       value: 98.13
+   - task:
+       type: text-generation
+     dataset:
+       name: LaRoSeDa_multiclass
+       type: LaRoSeDa_multiclass
+     metrics:
+     - name: 0-shot
+       type: macro-f1
+       value: 63.43
+     - name: 1-shot
+       type: macro-f1
+       value: 53.58
+     - name: 3-shot
+       type: macro-f1
+       value: 63.78
+     - name: 5-shot
+       type: macro-f1
+       value: 68.85
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_EN-RO
+       type: WMT_EN-RO
+     metrics:
+     - name: 0-shot
+       type: bleu
+       value: 20.57
+     - name: 1-shot
+       type: bleu
+       value: 29.59
+     - name: 3-shot
+       type: bleu
+       value: 29.50
+     - name: 5-shot
+       type: bleu
+       value: 28.88
+   - task:
+       type: text-generation
+     dataset:
+       name: WMT_RO-EN
+       type: WMT_RO-EN
+     metrics:
+     - name: 0-shot
+       type: bleu
+       value: 2.19
+     - name: 1-shot
+       type: bleu
+       value: 9.97
+     - name: 3-shot
+       type: bleu
+       value: 31.19
+     - name: 5-shot
+       type: bleu
+       value: 34.23
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD_EM
+       type: XQuAD_EM
+     metrics:
+     - name: 0-shot
+       type: exact_match
+       value: 40.25
+     - name: 1-shot
+       type: exact_match
+       value: 46.47
+     - name: 3-shot
+       type: exact_match
+       value: 47.56
+     - name: 5-shot
+       type: exact_match
+       value: 48.57
+   - task:
+       type: text-generation
+     dataset:
+       name: XQuAD_F1
+       type: XQuAD_F1
+     metrics:
+     - name: 0-shot
+       type: f1
+       value: 62.24
+     - name: 1-shot
+       type: f1
+       value: 65.33
+     - name: 3-shot
+       type: f1
+       value: 65.89
+     - name: 5-shot
+       type: f1
+       value: 66.86
+   - task:
+       type: text-generation
+     dataset:
+       name: STS
+       type: STS
+     metrics:
+     - name: 0-shot
+       type: spearman
+       value: 55.44
+     - name: 1-shot
+       type: spearman
+       value: 61.98
+     - name: 3-shot
+       type: spearman
+       value: 61.65
+   - task:
+       type: text-generation
+     dataset:
+       name: STS
+       type: STS
+     metrics:
+     - name: 0-shot
+       type: pearson
+       value: 56.18
+     - name: 1-shot
+       type: pearson
+       value: 58.37
+     - name: 3-shot
+       type: pearson
+       value: 56.94
+
  ---
 
  # Model Card for Model ID
 
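A quick cross-check on the model-index added above: the aggregate entries are consistent with the finer-grained ones, i.e. each "Average" value matches the plain mean of the corresponding per-shot values, and the RoMT-Bench "Score" matches the mean of the two turn scores. This relationship is inferred from the numbers themselves, not stated in the card; a minimal sketch in plain Python, with values copied from the YAML:

```python
# Cross-check (inferred, not documented): aggregates = means of per-shot scores.
from statistics import mean

# Per-shot accuracies for OpenLLM-Ro/ro_arc_challenge, copied from the YAML.
arc_shots = [42.67, 44.64, 44.90, 45.16, 45.67, 45.33]
assert round(mean(arc_shots), 2) == 44.73        # matches "Average accuracy"

# Romanian_Academic_Benchmarks = mean of the six per-benchmark averages
# (ARC, MMLU, Winogrande, HellaSwag, GSM8k, TruthfulQA).
bench_avgs = [44.73, 40.39, 63.67, 59.12, 13.29, 45.78]
assert round(mean(bench_avgs), 2) == 44.50

# RoMT-Bench "Score" = mean of the first- and second-turn scores.
assert round(mean([4.92, 3.94]), 2) == 4.43
```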
@@ -37,7 +512,8 @@ OpenLLM represents the first open-source effort to build a LLM specialized for R
  - **Language(s):** Romanian
  - **License:** cc-by-nc-4.0
  - **Finetuned from model:** [RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)
- - **Trained using:** [RoAlpaca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca), [RoAlpacaGPT4](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca_gpt4), [RoDolly](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_dolly), [RoSelfInstruct](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_selfinstruct_gpt4), [RoNoRobots](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_norobots), [RoOrca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_orca), [RoCamel](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_camel)
+ - **Trained using:** [RoAlpaca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca), [RoAlpacaGPT4](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca_gpt4), [RoDolly](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_dolly), [RoSelfInstruct](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_selfinstruct_gpt4), [RoNoRobots](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_norobots), [RoOrca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_orca), [RoCamel](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_camel), [RoOpenAssistant](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_oasst), [RoUltraChat](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_ultrachat)
+
 
  ### Model Sources
 
 
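The nine SFT collections linked in the new "Trained using" line are public dataset repos under the OpenLLM-Ro organization. A minimal sketch of inspecting one of them with the Hugging Face `datasets` library; the `train` split name is an assumption, so check the printed output for the actual splits and columns:

```python
# Minimal look at one of the SFT sets listed under "Trained using";
# the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("OpenLLM-Ro/ro_sft_alpaca", split="train")
print(ds)      # row count and column names
print(ds[0])   # first instruction/response record
```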
@@ -101,12 +577,16 @@ print(tokenizer.decode(outputs[0]))
  <td>Llama-2-7b-chat</td><td><center>36.84</center></td><td><center>37.03</center></td><td><center>33.80</center></td><td><center>55.87</center></td><td><center>45.36</center></td><td><center>4.90</center></td><td><center>44.09</center></td>
  </tr>
  <tr>
- <td><em>RoLlama2-7b-Instruct</em></td><td><center><em><strong>45.71</strong></em></center></td><td><center><em><strong>43.66</strong></em></center></td><td><center><em><strong>39.70</strong></em></center></td><td><center><em><strong>70.34</strong></em></center></td><td><center><em><strong>57.36</strong></em></center></td><td><center><em><strong>18.78</strong></em></center></td><td><center><em><strong>44.44</strong></em></center></td>
+ <td>RoLlama2-7b-Instruct</td><td><center><strong>45.71</strong></center></td><td><center>43.66</center></td><td><center>39.70</center></td><td><center><strong>70.34</strong></center></td><td><center>57.36</center></td><td><center><strong>18.78</strong></center></td><td><center>44.44</center></td>
+ </tr>
+ <tr>
+ <td><em>RoLlama2-7b-Instruct-v2</em></td><td><center><em>44.50</em></center></td><td><center><em><strong>44.73</strong></em></center></td><td><center><em><strong>40.39</strong></em></center></td><td><center><em>63.67</em></center></td><td><center><em><strong>59.12</strong></em></center></td><td><center><em>13.29</em></center></td><td><center><em><strong>45.78</strong></em></center></td>
  </tr>
  </tbody>
  </table>
 
 
+
  ## Downstream tasks
 
 
 
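The hunk header above ends with `print(tokenizer.decode(outputs[0]))`, the tail of the card's usage snippet, which this diff does not otherwise show. For context, a minimal generation sketch along the same lines; the prompt below and the absence of a chat template are assumptions, not the card's actual snippet:

```python
# Hypothetical usage sketch; the card's real snippet is not part of this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLLM-Ro/RoLlama2-7b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Ce reprezinta OpenLLM-Ro?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```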
@@ -139,7 +619,10 @@ print(tokenizer.decode(outputs[0]))
  <td>Llama-2-7b-chat</td><td><center>87.78</center></td><td><center>52.81</center></td><td><center>97.27</center></td><td><center>82.02</center></td><td><center>15.55</center></td><td><center><strong>28.53</strong></center></td><td><center>19.99</center></td><td><center>31.48</center></td>
  </tr>
  <tr>
- <td><em>RoLlama2-7b-Instruct</em></td><td><center><em><strong>97.48</strong></em></center></td><td><center><em><strong>65.26</strong></em></center></td><td><center><em><strong>98.83</strong></em></center></td><td><center><em><strong>87.28</strong></em></center></td><td><center><em><strong>27.38</strong></em></center></td><td><center><em>10.32</em></center></td><td><center><em><strong>27.59</strong></em></center></td><td><center><em><strong>40.13</strong></em></center></td>
+ <td>RoLlama2-7b-Instruct</td><td><center>97.48</center></td><td><center><strong>65.26</strong></center></td><td><center><strong>98.83</strong></center></td><td><center><strong>87.28</strong></center></td><td><center><strong>27.38</strong></center></td><td><center>10.32</center></td><td><center>27.59</center></td><td><center><strong>40.13</strong></center></td>
+ </tr>
+ <tr>
+ <td><em>RoLlama2-7b-Instruct-v2</em></td><td><center><em><strong>97.66</strong></em></center></td><td><center><em>62.41</em></center></td><td><center><em>97.97</em></center></td><td><center><em>60.89</em></center></td><td><center><em>27.13</em></center></td><td><center><em>19.39</em></center></td><td><center><em><strong>27.63</strong></em></center></td><td><center><em>39.75</em></center></td>
  </tr>
  </tbody>
  </table>
 
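In the table above, the LaRoSeDa columns report macro-F1 and the WMT columns report BLEU (matching the model-index metric types). A toy illustration of how these two metrics are conventionally computed, using scikit-learn and sacrebleu; this is a sketch, not the card's evaluation pipeline:

```python
# Toy illustration of the table's two metrics; not the evaluation pipeline.
from sklearn.metrics import f1_score
import sacrebleu

# Macro-F1 (LaRoSeDa): unweighted mean of per-class F1 scores.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(f1_score(y_true, y_pred, average="macro"))

# BLEU (WMT EN-RO / RO-EN): corpus-level score over hypotheses and references.
hyps = ["pisica sta pe covor"]
refs = [["pisica sta pe covor"]]   # one reference stream
print(sacrebleu.corpus_bleu(hyps, refs).score)  # 100.0 for an exact match
```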
@@ -174,7 +657,10 @@ print(tokenizer.decode(outputs[0]))
  <td>Llama-2-7b-chat</td><td><center>32.35</center></td><td><center>54.00</center></td><td><center><strong>60.34</strong></center></td><td><center><strong>75.98</strong></center></td><td><center>32.56</center></td><td><center>31.99</center></td><td><center>74.08</center></td><td><center>72.64</center></td>
  </tr>
  <tr>
- <td><em>RoLlama2-7b-Instruct</em></td><td><center><em><strong>44.52</strong></em></center></td><td><center><em><strong>64.75</strong></em></center></td><td><center><em>54.96</em></center></td><td><center><em>70.20</em></center></td><td><center><em><strong>65.50</strong></em></center></td><td><center><em><strong>67.79</strong></em></center></td><td><center><em><strong>84.44</strong></em></center></td><td><center><em><strong>84.76</strong></em></center></td>
+ <td>RoLlama2-7b-Instruct</td><td><center>44.52</center></td><td><center>64.75</center></td><td><center>54.96</center></td><td><center>70.20</center></td><td><center><strong>65.50</strong></center></td><td><center><strong>67.79</strong></center></td><td><center>84.44</center></td><td><center>84.76</center></td>
+ </tr>
+ <tr>
+ <td><em>RoLlama2-7b-Instruct-v2</em></td><td><center><em><strong>45.71</strong></em></center></td><td><center><em><strong>65.08</strong></em></center></td><td><center><em>59.24</em></center></td><td><center><em>74.25</em></center></td><td><center><em>59.69</em></center></td><td><center><em>57.16</em></center></td><td><center><em><strong>84.66</strong></em></center></td><td><center><em><strong>85.07</strong></em></center></td>
  </tr>
  </tbody>
  </table>
 
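Likewise for this table: the XQuAD columns are exact match and F1, and the STS columns are Spearman and Pearson correlations, apparently scaled to 0-100. A toy sketch of EM and the correlations via scipy; illustrative values only, not the card's pipeline:

```python
# Toy illustration of the XQuAD/STS metrics; the values are made up.
from scipy.stats import pearsonr, spearmanr

# STS: rank (Spearman) and linear (Pearson) correlation with gold scores.
gold = [1.2, 3.4, 2.2, 4.8]
pred = [1.0, 3.0, 2.5, 4.6]
rho, _ = spearmanr(gold, pred)
r, _ = pearsonr(gold, pred)
print(rho, r)

# XQuAD exact match: share of predictions identical to the gold answer.
answers = ["București", "1918"]
preds = ["București", "1919"]
print(sum(a == p for a, p in zip(answers, preds)) / len(answers))  # 0.5
```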
@@ -194,12 +680,16 @@ print(tokenizer.decode(outputs[0]))
  <td>Llama-2-7b-chat</td><td><center>1.08</center></td><td><center>1.44</center></td><td><center>0.73</center></td><td><center>45/160</center></td>
  </tr>
  <tr>
- <td><em>RoLlama2-7b-Instruct</em></td><td><center><em><strong>3.86</strong></em></center></td><td><center><em><strong>4.67</strong></em></center></td><td><center><em><strong>3.04</strong></em></center></td><td><center><em><strong>160/160</strong></em></center></td>
+ <td>RoLlama2-7b-Instruct</td><td><center>3.86</center></td><td><center>4.67</center></td><td><center>3.04</center></td><td><center><strong>160/160</strong></center></td>
+ </tr>
+ <tr>
+ <td><em>RoLlama2-7b-Instruct-v2</em></td><td><center><em><strong>4.43</strong></em></center></td><td><center><em><strong>4.92</strong></em></center></td><td><center><em><strong>3.94</strong></em></center></td><td><center><em><strong>160/160</strong></em></center></td>
  </tr>
  </tbody>
  </table>
 
 
+
  ## RoCulturaBench
 
 
 
@@ -214,7 +704,10 @@ print(tokenizer.decode(outputs[0]))
  <td>Llama-2-7b-chat</td><td><center>1.21</center></td><td><center>33/100</center></td>
  </tr>
  <tr>
- <td><em>RoLlama2-7b-Instruct</em></td><td><center><em><strong>3.77</strong></em></center></td><td><center><em><strong>100/100</strong></em></center></td>
+ <td>RoLlama2-7b-Instruct</td><td><center>3.77</center></td><td><center><strong>100/100</strong></center></td>
+ </tr>
+ <tr>
+ <td><em>RoLlama2-7b-Instruct-v2</em></td><td><center><em><strong>4.08</strong></em></center></td><td><center><em><strong>100/100</strong></em></center></td>
  </tr>
  </tbody>
  </table>