lagoon999 committed
Commit
7a64cac
1 Parent(s): 55c5877

Upload README.md with huggingface_hub

---
tags:
- mteb
- llama-cpp
- gguf-my-repo
library_name: sentence-transformers
base_model: lier007/xiaobu-embedding-v2
model-index:
- name: piccolo-embedding_mixed2
  results:
  - task:
      type: STS
    dataset:
      name: MTEB AFQMC
      type: C-MTEB/AFQMC
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.918538280469875
    - type: cos_sim_spearman
      value: 60.95597435855258
    - type: euclidean_pearson
      value: 59.73821610051437
    - type: euclidean_spearman
      value: 60.956778530262454
    - type: manhattan_pearson
      value: 59.739675774225475
    - type: manhattan_spearman
      value: 60.95243600302903
  - task:
      type: STS
    dataset:
      name: MTEB ATEC
      type: C-MTEB/ATEC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.79417977023184
    - type: cos_sim_spearman
      value: 58.80984726256814
    - type: euclidean_pearson
      value: 63.42225182281334
    - type: euclidean_spearman
      value: 58.80957930593542
    - type: manhattan_pearson
      value: 63.41128425333986
    - type: manhattan_spearman
      value: 58.80784321716389
  - task:
      type: Classification
    dataset:
      name: MTEB AmazonReviewsClassification (zh)
      type: mteb/amazon_reviews_multi
      config: zh
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 50.074000000000005
    - type: f1
      value: 47.11468271375511
  - task:
      type: STS
    dataset:
      name: MTEB BQ
      type: C-MTEB/BQ
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 73.3412976021806
    - type: cos_sim_spearman
      value: 75.0799965464816
    - type: euclidean_pearson
      value: 73.7874729086686
    - type: euclidean_spearman
      value: 75.07910973646369
    - type: manhattan_pearson
      value: 73.7716616949607
    - type: manhattan_spearman
      value: 75.06089549008017
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringP2P
      type: C-MTEB/CLSClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 60.4206935177474
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringS2S
      type: C-MTEB/CLSClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 49.53654617222264
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv1
      type: C-MTEB/CMedQAv1-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 90.96386786978509
    - type: mrr
      value: 92.8897619047619
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv2
      type: C-MTEB/CMedQAv2-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 90.41014127763198
    - type: mrr
      value: 92.45039682539682
  - task:
      type: Retrieval
    dataset:
      name: MTEB CmedqaRetrieval
      type: C-MTEB/CmedqaRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.901999999999997
    - type: map_at_10
      value: 40.321
    - type: map_at_100
      value: 42.176
    - type: map_at_1000
      value: 42.282
    - type: map_at_3
      value: 35.882
    - type: map_at_5
      value: 38.433
    - type: mrr_at_1
      value: 40.910000000000004
    - type: mrr_at_10
      value: 49.309999999999995
    - type: mrr_at_100
      value: 50.239
    - type: mrr_at_1000
      value: 50.278
    - type: mrr_at_3
      value: 46.803
    - type: mrr_at_5
      value: 48.137
    - type: ndcg_at_1
      value: 40.785
    - type: ndcg_at_10
      value: 47.14
    - type: ndcg_at_100
      value: 54.156000000000006
    - type: ndcg_at_1000
      value: 55.913999999999994
    - type: ndcg_at_3
      value: 41.669
    - type: ndcg_at_5
      value: 43.99
    - type: precision_at_1
      value: 40.785
    - type: precision_at_10
      value: 10.493
    - type: precision_at_100
      value: 1.616
    - type: precision_at_1000
      value: 0.184
    - type: precision_at_3
      value: 23.723
    - type: precision_at_5
      value: 17.249
    - type: recall_at_1
      value: 26.901999999999997
    - type: recall_at_10
      value: 58.25
    - type: recall_at_100
      value: 87.10900000000001
    - type: recall_at_1000
      value: 98.804
    - type: recall_at_3
      value: 41.804
    - type: recall_at_5
      value: 48.884
  - task:
      type: PairClassification
    dataset:
      name: MTEB Cmnli
      type: C-MTEB/CMNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 86.42212868310283
    - type: cos_sim_ap
      value: 92.83788702972741
    - type: cos_sim_f1
      value: 87.08912233141307
    - type: cos_sim_precision
      value: 84.24388111888112
    - type: cos_sim_recall
      value: 90.13327098433481
    - type: dot_accuracy
      value: 86.44618159951895
    - type: dot_ap
      value: 92.81146275060858
    - type: dot_f1
      value: 87.06857911250562
    - type: dot_precision
      value: 83.60232408005164
    - type: dot_recall
      value: 90.83469721767594
    - type: euclidean_accuracy
      value: 86.42212868310283
    - type: euclidean_ap
      value: 92.83805700492603
    - type: euclidean_f1
      value: 87.08803611738148
    - type: euclidean_precision
      value: 84.18066768492254
    - type: euclidean_recall
      value: 90.20341360766892
    - type: manhattan_accuracy
      value: 86.28983764281419
    - type: manhattan_ap
      value: 92.82818970981005
    - type: manhattan_f1
      value: 87.12625521832335
    - type: manhattan_precision
      value: 84.19101613606628
    - type: manhattan_recall
      value: 90.27355623100304
    - type: max_accuracy
      value: 86.44618159951895
    - type: max_ap
      value: 92.83805700492603
    - type: max_f1
      value: 87.12625521832335
  - task:
      type: Retrieval
    dataset:
      name: MTEB CovidRetrieval
      type: C-MTEB/CovidRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 79.215
    - type: map_at_10
      value: 86.516
    - type: map_at_100
      value: 86.6
    - type: map_at_1000
      value: 86.602
    - type: map_at_3
      value: 85.52
    - type: map_at_5
      value: 86.136
    - type: mrr_at_1
      value: 79.663
    - type: mrr_at_10
      value: 86.541
    - type: mrr_at_100
      value: 86.625
    - type: mrr_at_1000
      value: 86.627
    - type: mrr_at_3
      value: 85.564
    - type: mrr_at_5
      value: 86.15899999999999
    - type: ndcg_at_1
      value: 79.663
    - type: ndcg_at_10
      value: 89.399
    - type: ndcg_at_100
      value: 89.727
    - type: ndcg_at_1000
      value: 89.781
    - type: ndcg_at_3
      value: 87.402
    - type: ndcg_at_5
      value: 88.479
    - type: precision_at_1
      value: 79.663
    - type: precision_at_10
      value: 9.926
    - type: precision_at_100
      value: 1.006
    - type: precision_at_1000
      value: 0.101
    - type: precision_at_3
      value: 31.226
    - type: precision_at_5
      value: 19.283
    - type: recall_at_1
      value: 79.215
    - type: recall_at_10
      value: 98.209
    - type: recall_at_100
      value: 99.579
    - type: recall_at_1000
      value: 100
    - type: recall_at_3
      value: 92.703
    - type: recall_at_5
      value: 95.364
  - task:
      type: Retrieval
    dataset:
      name: MTEB DuRetrieval
      type: C-MTEB/DuRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 27.391
    - type: map_at_10
      value: 82.82000000000001
    - type: map_at_100
      value: 85.5
    - type: map_at_1000
      value: 85.533
    - type: map_at_3
      value: 57.802
    - type: map_at_5
      value: 72.82600000000001
    - type: mrr_at_1
      value: 92.80000000000001
    - type: mrr_at_10
      value: 94.83500000000001
    - type: mrr_at_100
      value: 94.883
    - type: mrr_at_1000
      value: 94.884
    - type: mrr_at_3
      value: 94.542
    - type: mrr_at_5
      value: 94.729
    - type: ndcg_at_1
      value: 92.7
    - type: ndcg_at_10
      value: 89.435
    - type: ndcg_at_100
      value: 91.78699999999999
    - type: ndcg_at_1000
      value: 92.083
    - type: ndcg_at_3
      value: 88.595
    - type: ndcg_at_5
      value: 87.53
    - type: precision_at_1
      value: 92.7
    - type: precision_at_10
      value: 42.4
    - type: precision_at_100
      value: 4.823
    - type: precision_at_1000
      value: 0.48900000000000005
    - type: precision_at_3
      value: 79.133
    - type: precision_at_5
      value: 66.8
    - type: recall_at_1
      value: 27.391
    - type: recall_at_10
      value: 90.069
    - type: recall_at_100
      value: 97.875
    - type: recall_at_1000
      value: 99.436
    - type: recall_at_3
      value: 59.367999999999995
    - type: recall_at_5
      value: 76.537
  - task:
      type: Retrieval
    dataset:
      name: MTEB EcomRetrieval
      type: C-MTEB/EcomRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 54.800000000000004
    - type: map_at_10
      value: 65.289
    - type: map_at_100
      value: 65.845
    - type: map_at_1000
      value: 65.853
    - type: map_at_3
      value: 62.766999999999996
    - type: map_at_5
      value: 64.252
    - type: mrr_at_1
      value: 54.800000000000004
    - type: mrr_at_10
      value: 65.255
    - type: mrr_at_100
      value: 65.81700000000001
    - type: mrr_at_1000
      value: 65.824
    - type: mrr_at_3
      value: 62.683
    - type: mrr_at_5
      value: 64.248
    - type: ndcg_at_1
      value: 54.800000000000004
    - type: ndcg_at_10
      value: 70.498
    - type: ndcg_at_100
      value: 72.82300000000001
    - type: ndcg_at_1000
      value: 73.053
    - type: ndcg_at_3
      value: 65.321
    - type: ndcg_at_5
      value: 67.998
    - type: precision_at_1
      value: 54.800000000000004
    - type: precision_at_10
      value: 8.690000000000001
    - type: precision_at_100
      value: 0.97
    - type: precision_at_1000
      value: 0.099
    - type: precision_at_3
      value: 24.233
    - type: precision_at_5
      value: 15.840000000000002
    - type: recall_at_1
      value: 54.800000000000004
    - type: recall_at_10
      value: 86.9
    - type: recall_at_100
      value: 97
    - type: recall_at_1000
      value: 98.9
    - type: recall_at_3
      value: 72.7
    - type: recall_at_5
      value: 79.2
  - task:
      type: Classification
    dataset:
      name: MTEB IFlyTek
      type: C-MTEB/IFlyTek-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 51.758368603308966
    - type: f1
      value: 40.249503783871596
  - task:
      type: Classification
    dataset:
      name: MTEB JDReview
      type: C-MTEB/JDReview-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 89.08067542213884
    - type: ap
      value: 60.31281895139249
    - type: f1
      value: 84.20883153932607
  - task:
      type: STS
    dataset:
      name: MTEB LCQMC
      type: C-MTEB/LCQMC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 74.04193577551248
    - type: cos_sim_spearman
      value: 79.81875884845549
    - type: euclidean_pearson
      value: 80.02581187503708
    - type: euclidean_spearman
      value: 79.81877215060574
    - type: manhattan_pearson
      value: 80.01767830530258
    - type: manhattan_spearman
      value: 79.81178852172727
  - task:
      type: Reranking
    dataset:
      name: MTEB MMarcoReranking
      type: C-MTEB/Mmarco-reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 39.90939429947956
    - type: mrr
      value: 39.71071428571429
  - task:
      type: Retrieval
    dataset:
      name: MTEB MMarcoRetrieval
      type: C-MTEB/MMarcoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 68.485
    - type: map_at_10
      value: 78.27199999999999
    - type: map_at_100
      value: 78.54100000000001
    - type: map_at_1000
      value: 78.546
    - type: map_at_3
      value: 76.339
    - type: map_at_5
      value: 77.61099999999999
    - type: mrr_at_1
      value: 70.80199999999999
    - type: mrr_at_10
      value: 78.901
    - type: mrr_at_100
      value: 79.12400000000001
    - type: mrr_at_1000
      value: 79.128
    - type: mrr_at_3
      value: 77.237
    - type: mrr_at_5
      value: 78.323
    - type: ndcg_at_1
      value: 70.759
    - type: ndcg_at_10
      value: 82.191
    - type: ndcg_at_100
      value: 83.295
    - type: ndcg_at_1000
      value: 83.434
    - type: ndcg_at_3
      value: 78.57600000000001
    - type: ndcg_at_5
      value: 80.715
    - type: precision_at_1
      value: 70.759
    - type: precision_at_10
      value: 9.951
    - type: precision_at_100
      value: 1.049
    - type: precision_at_1000
      value: 0.106
    - type: precision_at_3
      value: 29.660999999999998
    - type: precision_at_5
      value: 18.94
    - type: recall_at_1
      value: 68.485
    - type: recall_at_10
      value: 93.65
    - type: recall_at_100
      value: 98.434
    - type: recall_at_1000
      value: 99.522
    - type: recall_at_3
      value: 84.20100000000001
    - type: recall_at_5
      value: 89.261
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (zh-CN)
      type: mteb/amazon_massive_intent
      config: zh-CN
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 77.45460659045055
    - type: f1
      value: 73.84987702455533
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (zh-CN)
      type: mteb/amazon_massive_scenario
      config: zh-CN
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 85.29926025554808
    - type: f1
      value: 84.40636286569843
  - task:
      type: Retrieval
    dataset:
      name: MTEB MedicalRetrieval
      type: C-MTEB/MedicalRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 57.599999999999994
    - type: map_at_10
      value: 64.691
    - type: map_at_100
      value: 65.237
    - type: map_at_1000
      value: 65.27
    - type: map_at_3
      value: 62.733000000000004
    - type: map_at_5
      value: 63.968
    - type: mrr_at_1
      value: 58.099999999999994
    - type: mrr_at_10
      value: 64.952
    - type: mrr_at_100
      value: 65.513
    - type: mrr_at_1000
      value: 65.548
    - type: mrr_at_3
      value: 63
    - type: mrr_at_5
      value: 64.235
    - type: ndcg_at_1
      value: 57.599999999999994
    - type: ndcg_at_10
      value: 68.19
    - type: ndcg_at_100
      value: 70.98400000000001
    - type: ndcg_at_1000
      value: 71.811
    - type: ndcg_at_3
      value: 64.276
    - type: ndcg_at_5
      value: 66.47999999999999
    - type: precision_at_1
      value: 57.599999999999994
    - type: precision_at_10
      value: 7.920000000000001
    - type: precision_at_100
      value: 0.9259999999999999
    - type: precision_at_1000
      value: 0.099
    - type: precision_at_3
      value: 22.900000000000002
    - type: precision_at_5
      value: 14.799999999999999
    - type: recall_at_1
      value: 57.599999999999994
    - type: recall_at_10
      value: 79.2
    - type: recall_at_100
      value: 92.60000000000001
    - type: recall_at_1000
      value: 99
    - type: recall_at_3
      value: 68.7
    - type: recall_at_5
      value: 74
  - task:
      type: Classification
    dataset:
      name: MTEB MultilingualSentiment
      type: C-MTEB/MultilingualSentiment-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 79.45
    - type: f1
      value: 79.25610578280538
  - task:
      type: PairClassification
    dataset:
      name: MTEB Ocnli
      type: C-MTEB/OCNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 85.43584190579317
    - type: cos_sim_ap
      value: 90.89979725191012
    - type: cos_sim_f1
      value: 86.48383937316358
    - type: cos_sim_precision
      value: 80.6392694063927
    - type: cos_sim_recall
      value: 93.24181626187962
    - type: dot_accuracy
      value: 85.38170005414185
    - type: dot_ap
      value: 90.87532457866699
    - type: dot_f1
      value: 86.48383937316358
    - type: dot_precision
      value: 80.6392694063927
    - type: dot_recall
      value: 93.24181626187962
    - type: euclidean_accuracy
      value: 85.43584190579317
    - type: euclidean_ap
      value: 90.90126652086121
    - type: euclidean_f1
      value: 86.48383937316358
    - type: euclidean_precision
      value: 80.6392694063927
    - type: euclidean_recall
      value: 93.24181626187962
    - type: manhattan_accuracy
      value: 85.43584190579317
    - type: manhattan_ap
      value: 90.87896997853466
    - type: manhattan_f1
      value: 86.47581441263573
    - type: manhattan_precision
      value: 81.18628359592215
    - type: manhattan_recall
      value: 92.5026399155227
    - type: max_accuracy
      value: 85.43584190579317
    - type: max_ap
      value: 90.90126652086121
    - type: max_f1
      value: 86.48383937316358
  - task:
      type: Classification
    dataset:
      name: MTEB OnlineShopping
      type: C-MTEB/OnlineShopping-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 94.9
    - type: ap
      value: 93.1468223150745
    - type: f1
      value: 94.88918689508299
  - task:
      type: STS
    dataset:
      name: MTEB PAWSX
      type: C-MTEB/PAWSX
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 40.4831743182905
    - type: cos_sim_spearman
      value: 47.4163675550491
    - type: euclidean_pearson
      value: 46.456319899274924
    - type: euclidean_spearman
      value: 47.41567079730661
    - type: manhattan_pearson
      value: 46.48561639930895
    - type: manhattan_spearman
      value: 47.447721653461215
  - task:
      type: STS
    dataset:
      name: MTEB QBQTC
      type: C-MTEB/QBQTC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 42.96423587663398
    - type: cos_sim_spearman
      value: 45.13742225167858
    - type: euclidean_pearson
      value: 39.275452114075435
    - type: euclidean_spearman
      value: 45.137763540967406
    - type: manhattan_pearson
      value: 39.24797626417764
    - type: manhattan_spearman
      value: 45.13817773119268
  - task:
      type: STS
    dataset:
      name: MTEB STS22 (zh)
      type: mteb/sts22-crosslingual-sts
      config: zh
      split: test
      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
    metrics:
    - type: cos_sim_pearson
      value: 66.26687809086202
    - type: cos_sim_spearman
      value: 66.9569145816897
    - type: euclidean_pearson
      value: 65.72390780809788
    - type: euclidean_spearman
      value: 66.95406938095539
    - type: manhattan_pearson
      value: 65.6220809000381
    - type: manhattan_spearman
      value: 66.88531036320953
  - task:
      type: STS
    dataset:
      name: MTEB STSB
      type: C-MTEB/STSB
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 80.30831700726195
    - type: cos_sim_spearman
      value: 82.05184068558792
    - type: euclidean_pearson
      value: 81.73198597791563
    - type: euclidean_spearman
      value: 82.05326103582206
    - type: manhattan_pearson
      value: 81.70886400949136
    - type: manhattan_spearman
      value: 82.03473274756037
  - task:
      type: Reranking
    dataset:
      name: MTEB T2Reranking
      type: C-MTEB/T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 69.03398835347575
    - type: mrr
      value: 79.9212528613341
  - task:
      type: Retrieval
    dataset:
      name: MTEB T2Retrieval
      type: C-MTEB/T2Retrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 27.515
    - type: map_at_10
      value: 77.40599999999999
    - type: map_at_100
      value: 81.087
    - type: map_at_1000
      value: 81.148
    - type: map_at_3
      value: 54.327000000000005
    - type: map_at_5
      value: 66.813
    - type: mrr_at_1
      value: 89.764
    - type: mrr_at_10
      value: 92.58
    - type: mrr_at_100
      value: 92.663
    - type: mrr_at_1000
      value: 92.666
    - type: mrr_at_3
      value: 92.15299999999999
    - type: mrr_at_5
      value: 92.431
    - type: ndcg_at_1
      value: 89.777
    - type: ndcg_at_10
      value: 85.013
    - type: ndcg_at_100
      value: 88.62100000000001
    - type: ndcg_at_1000
      value: 89.184
    - type: ndcg_at_3
      value: 86.19200000000001
    - type: ndcg_at_5
      value: 84.909
    - type: precision_at_1
      value: 89.777
    - type: precision_at_10
      value: 42.218
    - type: precision_at_100
      value: 5.032
    - type: precision_at_1000
      value: 0.517
    - type: precision_at_3
      value: 75.335
    - type: precision_at_5
      value: 63.199000000000005
    - type: recall_at_1
      value: 27.515
    - type: recall_at_10
      value: 84.258
    - type: recall_at_100
      value: 95.908
    - type: recall_at_1000
      value: 98.709
    - type: recall_at_3
      value: 56.189
    - type: recall_at_5
      value: 70.50800000000001
  - task:
      type: Classification
    dataset:
      name: MTEB TNews
      type: C-MTEB/TNews-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 54.635999999999996
    - type: f1
      value: 52.63073912739558
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringP2P
      type: C-MTEB/ThuNewsClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 78.75676284855221
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringS2S
      type: C-MTEB/ThuNewsClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 71.95583733802839
  - task:
      type: Retrieval
    dataset:
      name: MTEB VideoRetrieval
      type: C-MTEB/VideoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 64.9
    - type: map_at_10
      value: 75.622
    - type: map_at_100
      value: 75.93900000000001
    - type: map_at_1000
      value: 75.93900000000001
    - type: map_at_3
      value: 73.933
    - type: map_at_5
      value: 74.973
    - type: mrr_at_1
      value: 65
    - type: mrr_at_10
      value: 75.676
    - type: mrr_at_100
      value: 75.994
    - type: mrr_at_1000
      value: 75.994
    - type: mrr_at_3
      value: 74.05000000000001
    - type: mrr_at_5
      value: 75.03999999999999
    - type: ndcg_at_1
      value: 64.9
    - type: ndcg_at_10
      value: 80.08999999999999
    - type: ndcg_at_100
      value: 81.44500000000001
    - type: ndcg_at_1000
      value: 81.45599999999999
    - type: ndcg_at_3
      value: 76.688
    - type: ndcg_at_5
      value: 78.53
    - type: precision_at_1
      value: 64.9
    - type: precision_at_10
      value: 9.379999999999999
    - type: precision_at_100
      value: 0.997
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 28.199999999999996
    - type: precision_at_5
      value: 17.8
    - type: recall_at_1
      value: 64.9
    - type: recall_at_10
      value: 93.8
    - type: recall_at_100
      value: 99.7
    - type: recall_at_1000
      value: 99.8
    - type: recall_at_3
      value: 84.6
    - type: recall_at_5
      value: 89
  - task:
      type: Classification
    dataset:
      name: MTEB Waimai
      type: C-MTEB/waimai-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 89.34
    - type: ap
      value: 75.20638024616892
    - type: f1
      value: 87.88648489072128
---

# lagoon999/xiaobu-embedding-v2-Q8_0-GGUF
This model was converted to GGUF format from [`lier007/xiaobu-embedding-v2`](https://huggingface.co/lier007/xiaobu-embedding-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lier007/xiaobu-embedding-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo lagoon999/xiaobu-embedding-v2-Q8_0-GGUF --hf-file xiaobu-embedding-v2-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo lagoon999/xiaobu-embedding-v2-Q8_0-GGUF --hf-file xiaobu-embedding-v2-q8_0.gguf -c 2048
```
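
The commands above prompt the checkpoint like a text generator, which mostly serves as a smoke test; xiaobu-embedding-v2 is an embedding model, so downstream use usually means producing vectors (for example, llama-server accepts an `--embedding` flag to serve embeddings) and comparing them. The `cos_sim_*` numbers in the metadata above are cosine similarities; here is a minimal, dependency-free sketch of that comparison, with made-up toy vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings"; real ones are much wider.
query = [0.1, 0.3, -0.2, 0.9]
doc_a = [0.1, 0.3, -0.2, 0.9]   # same direction -> similarity near 1.0
doc_b = [-0.9, 0.2, 0.3, -0.1]  # mostly opposite direction -> negative

print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```

Ranking documents by this score against a query vector is the basic loop behind the Retrieval metrics listed in the metadata.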

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo lagoon999/xiaobu-embedding-v2-Q8_0-GGUF --hf-file xiaobu-embedding-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo lagoon999/xiaobu-embedding-v2-Q8_0-GGUF --hf-file xiaobu-embedding-v2-q8_0.gguf -c 2048
```
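
One detail worth noting in the metadata above: on the pair-classification tasks (Cmnli, Ocnli) the `dot_*` and `cos_sim_*` scores are nearly identical, which is what you would expect if the embeddings are L2-normalized, since for unit vectors the dot product equals cosine similarity. A small sketch of that normalization, with a made-up vector:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length, so dot product == cosine similarity."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

u = l2_normalize([3.0, 4.0])
print(u)                      # [0.6, 0.8]
print(sum(x * x for x in u))  # ~1.0, i.e. unit length
```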