AdrienB134 committed on
Commit 36d9180 (1 parent: 4bbe0dc)

Create README.md

Files changed (1): README.md (+863, -0)
README.md ADDED
@@ -0,0 +1,863 @@
1
+ ---
2
+ license: mit
3
+ base_model: croissantllm/CroissantCool-v0.2
4
+ datasets: asi/wikitext_fr
5
+ tags:
6
+ - generated_from_trainer
7
+ - mteb
8
+ metrics:
9
+ - accuracy
10
+ model-index:
11
+ - name: final
12
+ results:
13
+ - task:
14
+ type: Clustering
15
+ dataset:
16
+ type: lyon-nlp/alloprof
17
+ name: MTEB AlloProfClusteringP2P (fra-Latn)
18
+ config: fra-Latn
19
+ split: test
20
+ revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
21
+ metrics:
22
+ - type: v_measure
23
+ value: 62.345943052433995
24
+ - task:
25
+ type: Clustering
26
+ dataset:
27
+ type: lyon-nlp/alloprof
28
+ name: MTEB AlloProfClusteringS2S (fra-Latn)
29
+ config: fra-Latn
30
+ split: test
31
+ revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
32
+ metrics:
33
+ - type: v_measure
34
+ value: 25.729454984521148
35
+ - task:
36
+ type: Reranking
37
+ dataset:
38
+ type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
39
+ name: MTEB AlloprofReranking (fra-Latn)
40
+ config: fra-Latn
41
+ split: test
42
+ revision: 65393d0d7a08a10b4e348135e824f385d420b0fd
43
+ metrics:
44
+ - type: map
45
+ value: 26.596323297349183
46
+ - type: mrr
47
+ value: 26.091629657044162
48
+ - task:
49
+ type: Retrieval
50
+ dataset:
51
+ type: lyon-nlp/alloprof
52
+ name: MTEB AlloprofRetrieval (fra-Latn)
53
+ config: fra-Latn
54
+ split: test
55
+ revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
56
+ metrics:
57
+ - type: map_at_1
58
+ value: 0.345
59
+ - type: map_at_10
60
+ value: 0.9339999999999999
61
+ - type: map_at_100
62
+ value: 1.191
63
+ - type: map_at_1000
64
+ value: 1.3419999999999999
65
+ - type: map_at_20
66
+ value: 1.02
67
+ - type: map_at_3
68
+ value: 0.6689999999999999
69
+ - type: map_at_5
70
+ value: 0.753
71
+ - type: mrr_at_1
72
+ value: 0.345
73
+ - type: mrr_at_10
74
+ value: 0.9339999999999999
75
+ - type: mrr_at_100
76
+ value: 1.191
77
+ - type: mrr_at_1000
78
+ value: 1.3419999999999999
79
+ - type: mrr_at_20
80
+ value: 1.02
81
+ - type: mrr_at_3
82
+ value: 0.6689999999999999
83
+ - type: mrr_at_5
84
+ value: 0.753
85
+ - type: ndcg_at_1
86
+ value: 0.345
87
+ - type: ndcg_at_10
88
+ value: 1.384
89
+ - type: ndcg_at_100
90
+ value: 3.1510000000000002
91
+ - type: ndcg_at_1000
92
+ value: 9.014
93
+ - type: ndcg_at_20
94
+ value: 1.6920000000000002
95
+ - type: ndcg_at_3
96
+ value: 0.7849999999999999
97
+ - type: ndcg_at_5
98
+ value: 0.941
99
+ - type: precision_at_1
100
+ value: 0.345
101
+ - type: precision_at_10
102
+ value: 0.28900000000000003
103
+ - type: precision_at_100
104
+ value: 0.124
105
+ - type: precision_at_1000
106
+ value: 0.063
107
+ - type: precision_at_20
108
+ value: 0.20500000000000002
109
+ - type: precision_at_3
110
+ value: 0.374
111
+ - type: precision_at_5
112
+ value: 0.302
113
+ - type: recall_at_1
114
+ value: 0.345
115
+ - type: recall_at_10
116
+ value: 2.8930000000000002
117
+ - type: recall_at_100
118
+ value: 12.435
119
+ - type: recall_at_1000
120
+ value: 62.867
121
+ - type: recall_at_20
122
+ value: 4.102
123
+ - type: recall_at_3
124
+ value: 1.123
125
+ - type: recall_at_5
126
+ value: 1.5110000000000001
127
+ - task:
128
+ type: Classification
129
+ dataset:
130
+ type: mteb/amazon_reviews_multi
131
+ name: MTEB AmazonReviewsClassification (fra-Latn)
132
+ config: fra-Latn
133
+ split: test
134
+ revision: 1399c76144fd37290681b995c656ef9b2e06e26d
135
+ metrics:
136
+ - type: accuracy
137
+ value: 32.662
138
+ - type: f1
139
+ value: 32.443152253731846
140
+ - task:
141
+ type: Retrieval
142
+ dataset:
143
+ type: maastrichtlawtech/bsard
144
+ name: MTEB BSARDRetrieval (fra-Latn)
145
+ config: fra-Latn
146
+ split: test
147
+ revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
148
+ metrics:
149
+ - type: map_at_1
150
+ value: 0.0
151
+ - type: map_at_10
152
+ value: 0.0
153
+ - type: map_at_100
154
+ value: 0.062
155
+ - type: map_at_1000
156
+ value: 0.077
157
+ - type: map_at_20
158
+ value: 0.0
159
+ - type: map_at_3
160
+ value: 0.0
161
+ - type: map_at_5
162
+ value: 0.0
163
+ - type: mrr_at_1
164
+ value: 0.0
165
+ - type: mrr_at_10
166
+ value: 0.0
167
+ - type: mrr_at_100
168
+ value: 0.062
169
+ - type: mrr_at_1000
170
+ value: 0.077
171
+ - type: mrr_at_20
172
+ value: 0.0
173
+ - type: mrr_at_3
174
+ value: 0.0
175
+ - type: mrr_at_5
176
+ value: 0.0
177
+ - type: ndcg_at_1
178
+ value: 0.0
179
+ - type: ndcg_at_10
180
+ value: 0.0
181
+ - type: ndcg_at_100
182
+ value: 0.484
183
+ - type: ndcg_at_1000
184
+ value: 1.054
185
+ - type: ndcg_at_20
186
+ value: 0.0
187
+ - type: ndcg_at_3
188
+ value: 0.0
189
+ - type: ndcg_at_5
190
+ value: 0.0
191
+ - type: precision_at_1
192
+ value: 0.0
193
+ - type: precision_at_10
194
+ value: 0.0
195
+ - type: precision_at_100
196
+ value: 0.027
197
+ - type: precision_at_1000
198
+ value: 0.008
199
+ - type: precision_at_20
200
+ value: 0.0
201
+ - type: precision_at_3
202
+ value: 0.0
203
+ - type: precision_at_5
204
+ value: 0.0
205
+ - type: recall_at_1
206
+ value: 0.0
207
+ - type: recall_at_10
208
+ value: 0.0
209
+ - type: recall_at_100
210
+ value: 2.703
211
+ - type: recall_at_1000
212
+ value: 7.6579999999999995
213
+ - type: recall_at_20
214
+ value: 0.0
215
+ - type: recall_at_3
216
+ value: 0.0
217
+ - type: recall_at_5
218
+ value: 0.0
219
+ - task:
220
+ type: Clustering
221
+ dataset:
222
+ type: lyon-nlp/clustering-hal-s2s
223
+ name: MTEB HALClusteringS2S (fra-Latn)
224
+ config: fra-Latn
225
+ split: test
226
+ revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
227
+ metrics:
228
+ - type: v_measure
229
+ value: 13.77084465510841
230
+ - task:
231
+ type: Clustering
232
+ dataset:
233
+ type: mlsum
234
+ name: MTEB MLSUMClusteringP2P (fra-Latn)
235
+ config: fra-Latn
236
+ split: test
237
+ revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
238
+ metrics:
239
+ - type: v_measure
240
+ value: 45.43375637260015
241
+ - task:
242
+ type: Clustering
243
+ dataset:
244
+ type: mlsum
245
+ name: MTEB MLSUMClusteringS2S (fra-Latn)
246
+ config: fra-Latn
247
+ split: test
248
+ revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
249
+ metrics:
250
+ - type: v_measure
251
+ value: 45.20564648796975
252
+ - task:
253
+ type: Classification
254
+ dataset:
255
+ type: mteb/mtop_domain
256
+ name: MTEB MTOPDomainClassification (fra-Latn)
257
+ config: fra-Latn
258
+ split: test
259
+ revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
260
+ metrics:
261
+ - type: accuracy
262
+ value: 73.42937676166615
263
+ - type: f1
264
+ value: 72.65861284500563
265
+ - task:
266
+ type: Classification
267
+ dataset:
268
+ type: mteb/mtop_intent
269
+ name: MTEB MTOPIntentClassification (fra-Latn)
270
+ config: fra-Latn
271
+ split: test
272
+ revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
273
+ metrics:
274
+ - type: accuracy
275
+ value: 58.54368932038836
276
+ - type: f1
277
+ value: 37.51985447597095
278
+ - task:
279
+ type: Classification
280
+ dataset:
281
+ type: mteb/masakhanews
282
+ name: MTEB MasakhaNEWSClassification (fra-Latn)
283
+ config: fra-Latn
284
+ split: test
285
+ revision: 18193f187b92da67168c655c9973a165ed9593dd
286
+ metrics:
287
+ - type: accuracy
288
+ value: 75.56872037914692
289
+ - type: f1
290
+ value: 71.99185345982795
291
+ - task:
292
+ type: Clustering
293
+ dataset:
294
+ type: masakhane/masakhanews
295
+ name: MTEB MasakhaNEWSClusteringP2P (fra-Latn)
296
+ config: fra-Latn
297
+ split: test
298
+ revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
299
+ metrics:
300
+ - type: v_measure
301
+ value: 38.20382948117535
302
+ - task:
303
+ type: Clustering
304
+ dataset:
305
+ type: masakhane/masakhanews
306
+ name: MTEB MasakhaNEWSClusteringS2S (fra-Latn)
307
+ config: fra-Latn
308
+ split: test
309
+ revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
310
+ metrics:
311
+ - type: v_measure
312
+ value: 26.943825642352117
313
+ - task:
314
+ type: Classification
315
+ dataset:
316
+ type: mteb/amazon_massive_intent
317
+ name: MTEB MassiveIntentClassification (fra-Latn)
318
+ config: fra-Latn
319
+ split: test
320
+ revision: 4672e20407010da34463acc759c162ca9734bca6
321
+ metrics:
322
+ - type: accuracy
323
+ value: 50.20847343644924
324
+ - type: f1
325
+ value: 47.32281768380685
326
+ - task:
327
+ type: Classification
328
+ dataset:
329
+ type: mteb/amazon_massive_scenario
330
+ name: MTEB MassiveScenarioClassification (fra-Latn)
331
+ config: fra-Latn
332
+ split: test
333
+ revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
334
+ metrics:
335
+ - type: accuracy
336
+ value: 52.57565568258238
337
+ - type: f1
338
+ value: 50.95953249242336
339
+ - task:
340
+ type: Retrieval
341
+ dataset:
342
+ type: jinaai/mintakaqa
343
+ name: MTEB MintakaRetrieval (fra-Latn)
344
+ config: fra-Latn
345
+ split: test
346
+ revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
347
+ metrics:
348
+ - type: map_at_1
349
+ value: 0.164
350
+ - type: map_at_10
351
+ value: 0.584
352
+ - type: map_at_100
353
+ value: 0.8240000000000001
354
+ - type: map_at_1000
355
+ value: 0.9769999999999999
356
+ - type: map_at_20
357
+ value: 0.6669999999999999
358
+ - type: map_at_3
359
+ value: 0.40299999999999997
360
+ - type: map_at_5
361
+ value: 0.47600000000000003
362
+ - type: mrr_at_1
363
+ value: 0.164
364
+ - type: mrr_at_10
365
+ value: 0.584
366
+ - type: mrr_at_100
367
+ value: 0.8240000000000001
368
+ - type: mrr_at_1000
369
+ value: 0.9769999999999999
370
+ - type: mrr_at_20
371
+ value: 0.6669999999999999
372
+ - type: mrr_at_3
373
+ value: 0.40299999999999997
374
+ - type: mrr_at_5
375
+ value: 0.47600000000000003
376
+ - type: ndcg_at_1
377
+ value: 0.164
378
+ - type: ndcg_at_10
379
+ value: 0.8670000000000001
380
+ - type: ndcg_at_100
381
+ value: 2.443
382
+ - type: ndcg_at_1000
383
+ value: 8.671
384
+ - type: ndcg_at_20
385
+ value: 1.176
386
+ - type: ndcg_at_3
387
+ value: 0.47800000000000004
388
+ - type: ndcg_at_5
389
+ value: 0.612
390
+ - type: precision_at_1
391
+ value: 0.164
392
+ - type: precision_at_10
393
+ value: 0.18
394
+ - type: precision_at_100
395
+ value: 0.10200000000000001
396
+ - type: precision_at_1000
397
+ value: 0.064
398
+ - type: precision_at_20
399
+ value: 0.152
400
+ - type: precision_at_3
401
+ value: 0.232
402
+ - type: precision_at_5
403
+ value: 0.20500000000000002
404
+ - type: recall_at_1
405
+ value: 0.164
406
+ - type: recall_at_10
407
+ value: 1.802
408
+ - type: recall_at_100
409
+ value: 10.156
410
+ - type: recall_at_1000
411
+ value: 64.21
412
+ - type: recall_at_20
413
+ value: 3.0300000000000002
414
+ - type: recall_at_3
415
+ value: 0.696
416
+ - type: recall_at_5
417
+ value: 1.024
418
+ - task:
419
+ type: PairClassification
420
+ dataset:
421
+ type: GEM/opusparcus
422
+ name: MTEB OpusparcusPC (fra-Latn)
423
+ config: fra-Latn
424
+ split: test
425
+ revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
426
+ metrics:
427
+ - type: cos_sim_accuracy
428
+ value: 73.433242506812
429
+ - type: cos_sim_ap
430
+ value: 86.03577758642086
431
+ - type: cos_sim_f1
432
+ value: 82.1602478972997
433
+ - type: cos_sim_precision
434
+ value: 74.12140575079871
435
+ - type: cos_sim_recall
436
+ value: 92.15491559086395
437
+ - type: dot_accuracy
438
+ value: 68.8692098092643
439
+ - type: dot_ap
440
+ value: 75.51070462676913
441
+ - type: dot_f1
442
+ value: 81.47547628698824
443
+ - type: dot_precision
444
+ value: 68.83561643835617
445
+ - type: dot_recall
446
+ value: 99.80139026812313
447
+ - type: euclidean_accuracy
448
+ value: 73.84196185286103
449
+ - type: euclidean_ap
450
+ value: 86.27910998502644
451
+ - type: euclidean_f1
452
+ value: 82.5531914893617
453
+ - type: euclidean_precision
454
+ value: 72.22635889798957
455
+ - type: euclidean_recall
456
+ value: 96.32571996027805
457
+ - type: manhattan_accuracy
458
+ value: 73.9100817438692
459
+ - type: manhattan_ap
460
+ value: 86.43527306280204
461
+ - type: manhattan_f1
462
+ value: 82.57349808265872
463
+ - type: manhattan_precision
464
+ value: 72.31343283582089
465
+ - type: manhattan_recall
466
+ value: 96.22641509433963
467
+ - type: max_accuracy
468
+ value: 73.9100817438692
469
+ - type: max_ap
470
+ value: 86.43527306280204
471
+ - type: max_f1
472
+ value: 82.57349808265872
473
+ - task:
474
+ type: PairClassification
475
+ dataset:
476
+ type: paws-x
477
+ name: MTEB PawsX (fra-Latn)
478
+ config: fra-Latn
479
+ split: test
480
+ revision: 8a04d940a42cd40658986fdd8e3da561533a3646
481
+ metrics:
482
+ - type: cos_sim_accuracy
483
+ value: 61.550000000000004
484
+ - type: cos_sim_ap
485
+ value: 60.30864957174996
486
+ - type: cos_sim_f1
487
+ value: 62.891311994372145
488
+ - type: cos_sim_precision
489
+ value: 46.08247422680412
490
+ - type: cos_sim_recall
491
+ value: 99.00332225913621
492
+ - type: dot_accuracy
493
+ value: 55.35
494
+ - type: dot_ap
495
+ value: 47.540176633815165
496
+ - type: dot_f1
497
+ value: 62.20227821884707
498
+ - type: dot_precision
499
+ value: 45.18555667001003
500
+ - type: dot_recall
501
+ value: 99.77851605758582
502
+ - type: euclidean_accuracy
503
+ value: 61.95
504
+ - type: euclidean_ap
505
+ value: 60.44070441806914
506
+ - type: euclidean_f1
507
+ value: 62.89978678038379
508
+ - type: euclidean_precision
509
+ value: 46.31083202511774
510
+ - type: euclidean_recall
511
+ value: 98.00664451827242
512
+ - type: manhattan_accuracy
513
+ value: 61.9
514
+ - type: manhattan_ap
515
+ value: 60.52939878134297
516
+ - type: manhattan_f1
517
+ value: 63.034188034188034
518
+ - type: manhattan_precision
519
+ value: 46.45669291338583
520
+ - type: manhattan_recall
521
+ value: 98.00664451827242
522
+ - type: max_accuracy
523
+ value: 61.95
524
+ - type: max_ap
525
+ value: 60.52939878134297
526
+ - type: max_f1
527
+ value: 63.034188034188034
528
+ - task:
529
+ type: STS
530
+ dataset:
531
+ type: Lajavaness/SICK-fr
532
+ name: MTEB SICKFr (fra-Latn)
533
+ config: fra-Latn
534
+ split: test
535
+ revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
536
+ metrics:
537
+ - type: cos_sim_pearson
538
+ value: 55.697943925847646
539
+ - type: cos_sim_spearman
540
+ value: 53.33151992866752
541
+ - type: euclidean_pearson
542
+ value: 54.32882764397367
543
+ - type: euclidean_spearman
544
+ value: 53.54968438609837
545
+ - type: manhattan_pearson
546
+ value: 54.56634524641888
547
+ - type: manhattan_spearman
548
+ value: 53.81344727168701
549
+ - task:
550
+ type: STS
551
+ dataset:
552
+ type: mteb/sts22-crosslingual-sts
553
+ name: MTEB STS22 (fra-Latn)
554
+ config: fra-Latn
555
+ split: test
556
+ revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
557
+ metrics:
558
+ - type: cos_sim_pearson
559
+ value: 22.771197036286605
560
+ - type: cos_sim_spearman
561
+ value: 60.29016180301653
562
+ - type: euclidean_pearson
563
+ value: 35.31319988418939
564
+ - type: euclidean_spearman
565
+ value: 59.61398871828641
566
+ - type: manhattan_pearson
567
+ value: 36.10315029818106
568
+ - type: manhattan_spearman
569
+ value: 60.5122301133988
570
+ - task:
571
+ type: STS
572
+ dataset:
573
+ type: mteb/stsb_multi_mt
574
+ name: MTEB STSBenchmarkMultilingualSTS (fra-Latn)
575
+ config: fra-Latn
576
+ split: test
577
+ revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
578
+ metrics:
579
+ - type: cos_sim_pearson
580
+ value: 47.730796921644384
581
+ - type: cos_sim_spearman
582
+ value: 49.54059034135741
583
+ - type: euclidean_pearson
584
+ value: 49.48474815018905
585
+ - type: euclidean_spearman
586
+ value: 50.71533884079761
587
+ - type: manhattan_pearson
588
+ value: 50.10488858533032
589
+ - type: manhattan_spearman
590
+ value: 51.1375710610132
591
+ - task:
592
+ type: Summarization
593
+ dataset:
594
+ type: lyon-nlp/summarization-summeval-fr-p2p
595
+ name: MTEB SummEvalFr (fra-Latn)
596
+ config: fra-Latn
597
+ split: test
598
+ revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
599
+ metrics:
600
+ - type: cos_sim_pearson
601
+ value: 29.102661066592816
602
+ - type: cos_sim_spearman
603
+ value: 29.615000554218955
604
+ - type: dot_pearson
605
+ value: 19.77690299595119
606
+ - type: dot_spearman
607
+ value: 19.112834848310158
608
+ - task:
609
+ type: Reranking
610
+ dataset:
611
+ type: lyon-nlp/mteb-fr-reranking-syntec-s2p
612
+ name: MTEB SyntecReranking (fra-Latn)
613
+ config: fra-Latn
614
+ split: test
615
+ revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad
616
+ metrics:
617
+ - type: map
618
+ value: 37.372655122655125
619
+ - type: mrr
620
+ value: 37.28174603174604
621
+ - task:
622
+ type: Retrieval
623
+ dataset:
624
+ type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
625
+ name: MTEB SyntecRetrieval (fra-Latn)
626
+ config: fra-Latn
627
+ split: test
628
+ revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
629
+ metrics:
630
+ - type: map_at_1
631
+ value: 2.0
632
+ - type: map_at_10
633
+ value: 6.816999999999999
634
+ - type: map_at_100
635
+ value: 9.522
636
+ - type: map_at_1000
637
+ value: 9.522
638
+ - type: map_at_20
639
+ value: 8.402
640
+ - type: map_at_3
641
+ value: 4.167
642
+ - type: map_at_5
643
+ value: 4.867
644
+ - type: mrr_at_1
645
+ value: 2.0
646
+ - type: mrr_at_10
647
+ value: 6.816999999999999
648
+ - type: mrr_at_100
649
+ value: 9.522
650
+ - type: mrr_at_1000
651
+ value: 9.522
652
+ - type: mrr_at_20
653
+ value: 8.402
654
+ - type: mrr_at_3
655
+ value: 4.167
656
+ - type: mrr_at_5
657
+ value: 4.867
658
+ - type: ndcg_at_1
659
+ value: 2.0
660
+ - type: ndcg_at_10
661
+ value: 10.940999999999999
662
+ - type: ndcg_at_100
663
+ value: 25.96
664
+ - type: ndcg_at_1000
665
+ value: 25.96
666
+ - type: ndcg_at_20
667
+ value: 16.742
668
+ - type: ndcg_at_3
669
+ value: 4.893
670
+ - type: ndcg_at_5
671
+ value: 6.141
672
+ - type: precision_at_1
673
+ value: 2.0
674
+ - type: precision_at_10
675
+ value: 2.5
676
+ - type: precision_at_100
677
+ value: 1.0
678
+ - type: precision_at_1000
679
+ value: 0.1
680
+ - type: precision_at_20
681
+ value: 2.4
682
+ - type: precision_at_3
683
+ value: 2.333
684
+ - type: precision_at_5
685
+ value: 2.0
686
+ - type: recall_at_1
687
+ value: 2.0
688
+ - type: recall_at_10
689
+ value: 25.0
690
+ - type: recall_at_100
691
+ value: 100.0
692
+ - type: recall_at_1000
693
+ value: 100.0
694
+ - type: recall_at_20
695
+ value: 48.0
696
+ - type: recall_at_3
697
+ value: 7.000000000000001
698
+ - type: recall_at_5
699
+ value: 10.0
700
+ - task:
701
+ type: Retrieval
702
+ dataset:
703
+ type: jinaai/xpqa
704
+ name: MTEB XPQARetrieval (fra-Latn)
705
+ config: fra-Latn
706
+ split: test
707
+ revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
708
+ metrics:
709
+ - type: map_at_1
710
+ value: 9.437
711
+ - type: map_at_10
712
+ value: 13.574
713
+ - type: map_at_100
714
+ value: 14.265
715
+ - type: map_at_1000
716
+ value: 14.527999999999999
717
+ - type: map_at_20
718
+ value: 13.834
719
+ - type: map_at_3
720
+ value: 12.277000000000001
721
+ - type: map_at_5
722
+ value: 12.936
723
+ - type: mrr_at_1
724
+ value: 14.285999999999998
725
+ - type: mrr_at_10
726
+ value: 18.269
727
+ - type: mrr_at_100
728
+ value: 18.991
729
+ - type: mrr_at_1000
730
+ value: 19.15
731
+ - type: mrr_at_20
732
+ value: 18.598
733
+ - type: mrr_at_3
734
+ value: 17.0
735
+ - type: mrr_at_5
736
+ value: 17.681
737
+ - type: ndcg_at_1
738
+ value: 14.285999999999998
739
+ - type: ndcg_at_10
740
+ value: 16.447
741
+ - type: ndcg_at_100
742
+ value: 20.617
743
+ - type: ndcg_at_1000
744
+ value: 27.589000000000002
745
+ - type: ndcg_at_20
746
+ value: 17.455000000000002
747
+ - type: ndcg_at_3
748
+ value: 14.540000000000001
749
+ - type: ndcg_at_5
750
+ value: 15.084
751
+ - type: precision_at_1
752
+ value: 14.285999999999998
753
+ - type: precision_at_10
754
+ value: 3.698
755
+ - type: precision_at_100
756
+ value: 0.734
757
+ - type: precision_at_1000
758
+ value: 0.18
759
+ - type: precision_at_20
760
+ value: 2.163
761
+ - type: precision_at_3
762
+ value: 8.366999999999999
763
+ - type: precision_at_5
764
+ value: 5.928
765
+ - type: recall_at_1
766
+ value: 9.437
767
+ - type: recall_at_10
768
+ value: 20.16
769
+ - type: recall_at_100
770
+ value: 38.527
771
+ - type: recall_at_1000
772
+ value: 85.102
773
+ - type: recall_at_20
774
+ value: 23.632
775
+ - type: recall_at_3
776
+ value: 14.562
777
+ - type: recall_at_5
778
+ value: 16.8
779
+
780
+ language:
781
+ - fr
782
+ ---

# llm2vec-croissant-mntp

This model is a fine-tuned version of [croissantllm/CroissantCool-v0.2](https://huggingface.co/croissantllm/CroissantCool-v0.2) on the [asi/wikitext_fr](https://huggingface.co/datasets/asi/wikitext_fr) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8867
- Accuracy: 0.6078
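
For reference, assuming the reported loss is the mean token-level cross-entropy in nats (the Trainer default), it corresponds to a perplexity of exp(1.8867) ≈ 6.6 on the evaluation set.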

## Model description

As the model name indicates, this checkpoint corresponds to the masked next token prediction (MNTP) training step of the [LLM2Vec](https://github.com/McGill-NLP/llm2vec) recipe applied to CroissantCool-v0.2, trained on French Wikipedia text from asi/wikitext_fr. The French MTEB scores reported in the model-index metadata above were obtained with this checkpoint.

## Intended uses & limitations

More information needed
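
In the absence of a documented usage example, the sketch below shows one plausible way to use the checkpoint as a French sentence encoder with Hugging Face Transformers. The repo id is a placeholder and the mean-pooling strategy is an assumption, not the authors' documented setup.

```python
# Hedged sketch: using the checkpoint as a French sentence encoder.
# The repo id below is a placeholder, and mean pooling of the last hidden
# states is an assumption, not the authors' documented setup.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "AdrienB134/llm2vec-croissant-mntp"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers often lack a pad token
model = AutoModel.from_pretrained(repo_id).eval()

sentences = ["Le chat dort sur le canapé.", "Un félin se repose sur le sofa."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two (paraphrased) sentences.
print(float(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)))
```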

## Training and evaluation data

The model was trained and evaluated on [asi/wikitext_fr](https://huggingface.co/datasets/asi/wikitext_fr) (French Wikipedia text); the loss and accuracy above are measured on its held-out evaluation split. Downstream embedding quality was measured on the French MTEB tasks listed in the model-index metadata.
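
The French MTEB figures in the model-index were presumably produced with the [mteb](https://github.com/embeddings-benchmark/mteb) benchmark harness. Below is a minimal sketch of such a run; the task names come from the metadata above, while the encoder wrapper, repo id, and output folder are assumptions rather than the authors' exact setup, and the exact MTEB API varies by version.

```python
# Hedged sketch: reproducing French MTEB numbers with the `mteb` harness.
# Task names are taken from the model-index metadata above; the wrapper class,
# repo id, and output folder are assumptions. The exact MTEB API varies by version.
import numpy as np
import torch
from mteb import MTEB
from transformers import AutoModel, AutoTokenizer


class MeanPoolEncoder:
    """Minimal wrapper exposing the encode() interface that MTEB expects."""

    def __init__(self, repo_id: str):
        self.tokenizer = AutoTokenizer.from_pretrained(repo_id)
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModel.from_pretrained(repo_id).eval()

    @torch.no_grad()
    def encode(self, sentences, batch_size=32, **kwargs):
        vectors = []
        for i in range(0, len(sentences), batch_size):
            batch = self.tokenizer(
                sentences[i : i + batch_size],
                padding=True, truncation=True, max_length=512,
                return_tensors="pt",
            )
            hidden = self.model(**batch).last_hidden_state
            mask = batch["attention_mask"].unsqueeze(-1).float()
            vectors.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
        return np.concatenate(vectors)


model = MeanPoolEncoder("AdrienB134/llm2vec-croissant-mntp")  # hypothetical repo id
tasks = ["AlloprofRetrieval", "SyntecRetrieval", "SICKFr", "MassiveIntentClassification"]
MTEB(tasks=tasks, task_langs=["fr"]).run(model, output_folder="results/fr")
```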

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
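
These settings map onto Hugging Face `TrainingArguments` roughly as follows (a sketch, with assumed values noted in the comments):

```python
from transformers import TrainingArguments

# The reported hyperparameters expressed as TrainingArguments (a sketch).
# output_dir and the evaluation/logging cadence are assumptions; the cadence is
# read off the results table below (eval every 100 steps, training loss every 500).
training_args = TrainingArguments(
    output_dir="llm2vec-croissant-mntp",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=500,
)
```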

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.0884 | 100 | 4.7866 | 0.1990 |
| No log | 0.1768 | 200 | 4.0496 | 0.3309 |
| No log | 0.2653 | 300 | 3.6525 | 0.3779 |
| No log | 0.3537 | 400 | 3.2410 | 0.4258 |
| 3.9116 | 0.4421 | 500 | 3.6305 | 0.3912 |
| 3.9116 | 0.5305 | 600 | 3.1770 | 0.4406 |
| 3.9116 | 0.6189 | 700 | 2.4478 | 0.5199 |
| 3.9116 | 0.7073 | 800 | 2.2383 | 0.5508 |
| 3.9116 | 0.7958 | 900 | 2.1547 | 0.5635 |
| 2.4568 | 0.8842 | 1000 | 2.0868 | 0.5759 |
| 2.4568 | 0.9726 | 1100 | 2.0399 | 0.5820 |
| 2.4568 | 1.0610 | 1200 | 2.0102 | 0.5873 |
| 2.4568 | 1.1494 | 1300 | 1.9805 | 0.5897 |
| 2.4568 | 1.2378 | 1400 | 1.9590 | 0.5955 |
| 1.9305 | 1.3263 | 1500 | 1.9381 | 0.5982 |
| 1.9305 | 1.4147 | 1600 | 1.9249 | 0.5995 |
| 1.9305 | 1.5031 | 1700 | 1.9223 | 0.6017 |
| 1.9305 | 1.5915 | 1800 | 1.9091 | 0.6037 |
| 1.9305 | 1.6799 | 1900 | 1.9038 | 0.6042 |
| 1.8511 | 1.7683 | 2000 | 1.8982 | 0.6045 |
| 1.8511 | 1.8568 | 2100 | 1.8924 | 0.6060 |
| 1.8511 | 1.9452 | 2200 | 1.8844 | 0.6072 |
| 1.8511 | 2.0336 | 2300 | 1.8873 | 0.6087 |
| 1.8511 | 2.1220 | 2400 | 1.8889 | 0.6068 |
| 1.8197 | 2.2104 | 2500 | 1.8848 | 0.6080 |
| 1.8197 | 2.2989 | 2600 | 1.8736 | 0.6091 |
| 1.8197 | 2.3873 | 2700 | 1.8858 | 0.6072 |
| 1.8197 | 2.4757 | 2800 | 1.8814 | 0.6088 |
| 1.8197 | 2.5641 | 2900 | 1.8649 | 0.6103 |
| 1.8116 | 2.6525 | 3000 | 1.8647 | 0.6091 |
| 1.8116 | 2.7409 | 3100 | 1.8755 | 0.6101 |
| 1.8116 | 2.8294 | 3200 | 1.8755 | 0.6099 |
| 1.8116 | 2.9178 | 3300 | 1.8867 | 0.6078 |
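
Validation loss drops sharply during the first epoch (from 4.79 at step 100 to about 2.04 at step 1100) and then improves only marginally, levelling off around 1.86 to 1.89 with accuracy close to 0.61 over the final epoch.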

### Framework versions

- Transformers 4.40.2
- PyTorch 2.0.1+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
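
To reproduce this environment, pinning `transformers==4.40.2`, `datasets==2.19.1`, `tokenizers==0.19.1` and `torch==2.0.1` should be sufficient; the `+cu118` tag indicates the CUDA 11.8 build of PyTorch.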