---
tags:
- mteb
model-index:
- name: bge-small-en-v1.5-quant
  results:
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_counterfactual
      name: MTEB AmazonCounterfactualClassification (en)
      config: en
      split: test
      revision: e8379541af4e31359cca9fbcf4b00f2671dba205
    metrics:
    - type: accuracy
      value: 74.19402985074626
    - type: ap
      value: 37.562368912364036
    - type: f1
      value: 68.47046663470138
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_polarity
      name: MTEB AmazonPolarityClassification
      config: default
      split: test
      revision: e2d317d38cd51312af73b3d32a06d1a08b442046
    metrics:
    - type: accuracy
      value: 91.89432499999998
    - type: ap
      value: 88.64572979375352
    - type: f1
      value: 91.87171177424113
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_reviews_multi
      name: MTEB AmazonReviewsClassification (en)
      config: en
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 46.71799999999999
    - type: f1
      value: 46.25791412217894
  - task:
      type: Retrieval
    dataset:
      type: arguana
      name: MTEB ArguAna
      config: default
      split: test
      revision: None
    metrics:
    - type: map_at_1
      value: 34.424
    - type: map_at_10
      value: 49.63
    - type: map_at_100
      value: 50.477000000000004
    - type: map_at_1000
      value: 50.483
    - type: map_at_3
      value: 45.389
    - type: map_at_5
      value: 47.888999999999996
    - type: mrr_at_1
      value: 34.78
    - type: mrr_at_10
      value: 49.793
    - type: mrr_at_100
      value: 50.632999999999996
    - type: mrr_at_1000
      value: 50.638000000000005
    - type: mrr_at_3
      value: 45.531
    - type: mrr_at_5
      value: 48.010000000000005
    - type: ndcg_at_1
      value: 34.424
    - type: ndcg_at_10
      value: 57.774
    - type: ndcg_at_100
      value: 61.248000000000005
    - type: ndcg_at_1000
      value: 61.378
    - type: ndcg_at_3
      value: 49.067
    - type: ndcg_at_5
      value: 53.561
    - type: precision_at_1
      value: 34.424
    - type: precision_at_10
      value: 8.364
    - type: precision_at_100
      value: 0.985
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 19.915
    - type: precision_at_5
      value: 14.124999999999998
    - type: recall_at_1
      value: 34.424
    - type: recall_at_10
      value: 83.64200000000001
    - type: recall_at_100
      value: 98.506
    - type: recall_at_1000
      value: 99.502
    - type: recall_at_3
      value: 59.744
    - type: recall_at_5
      value: 70.626
  - task:
      type: Reranking
    dataset:
      type: mteb/askubuntudupquestions-reranking
      name: MTEB AskUbuntuDupQuestions
      config: default
      split: test
      revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
    metrics:
    - type: map
      value: 62.40334669601722
    - type: mrr
      value: 75.33175042870333
  - task:
      type: STS
    dataset:
      type: mteb/biosses-sts
      name: MTEB BIOSSES
      config: default
      split: test
      revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
    metrics:
    - type: cos_sim_pearson
      value: 88.00433892980047
    - type: cos_sim_spearman
      value: 86.65558896421105
    - type: euclidean_pearson
      value: 85.98927300398377
    - type: euclidean_spearman
      value: 86.0905158476729
    - type: manhattan_pearson
      value: 86.0272425017433
    - type: manhattan_spearman
      value: 85.8929209838941
  - task:
      type: Classification
    dataset:
      type: mteb/banking77
      name: MTEB Banking77Classification
      config: default
      split: test
      revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
    metrics:
    - type: accuracy
      value: 85.1038961038961
    - type: f1
      value: 85.06851570045757
  - task:
      type: Classification
    dataset:
      type: mteb/emotion
      name: MTEB EmotionClassification
      config: default
      split: test
      revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
    metrics:
    - type: accuracy
      value: 46.845
    - type: f1
      value: 41.70045120106269
  - task:
      type: Classification
    dataset:
      type: mteb/imdb
      name: MTEB ImdbClassification
      config: default
      split: test
      revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
    metrics:
    - type: accuracy
      value: 89.3476
    - type: ap
      value: 85.26891728027032
    - type: f1
      value: 89.33488973832894
  - task:
      type: Classification
    dataset:
      type: mteb/mtop_domain
      name: MTEB MTOPDomainClassification (en)
      config: en
      split: test
      revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
    metrics:
    - type: accuracy
      value: 92.67441860465115
    - type: f1
      value: 92.48821366022861
  - task:
      type: Classification
    dataset:
      type: mteb/mtop_intent
      name: MTEB MTOPIntentClassification (en)
      config: en
      split: test
      revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
    metrics:
    - type: accuracy
      value: 74.02872777017784
    - type: f1
      value: 57.28822860484337
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_massive_intent
      name: MTEB MassiveIntentClassification (en)
      config: en
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 74.01479488903833
    - type: f1
      value: 71.83716204573571
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_massive_scenario
      name: MTEB MassiveScenarioClassification (en)
      config: en
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 77.95897780766644
    - type: f1
      value: 77.80380046125542
  - task:
      type: STS
    dataset:
      type: mteb/sickr-sts
      name: MTEB SICK-R
      config: default
      split: test
      revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
    metrics:
    - type: cos_sim_pearson
      value: 83.86793477948164
    - type: cos_sim_spearman
      value: 79.43675709317894
    - type: euclidean_pearson
      value: 81.42564463337872
    - type: euclidean_spearman
      value: 79.39138648510273
    - type: manhattan_pearson
      value: 81.31167449689285
    - type: manhattan_spearman
      value: 79.28411420758785
  - task:
      type: STS
    dataset:
      type: mteb/sts12-sts
      name: MTEB STS12
      config: default
      split: test
      revision: a0d554a64d88156834ff5ae9920b964011b16384
    metrics:
    - type: cos_sim_pearson
      value: 84.43490408077298
    - type: cos_sim_spearman
      value: 76.16878340109265
    - type: euclidean_pearson
      value: 80.6016219080782
    - type: euclidean_spearman
      value: 75.67063072565917
    - type: manhattan_pearson
      value: 80.7238920179759
    - type: manhattan_spearman
      value: 75.85631683403953
  - task:
      type: STS
    dataset:
      type: mteb/sts13-sts
      name: MTEB STS13
      config: default
      split: test
      revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
    metrics:
    - type: cos_sim_pearson
      value: 83.03882477767792
    - type: cos_sim_spearman
      value: 84.15171505206217
    - type: euclidean_pearson
      value: 84.11692506470922
    - type: euclidean_spearman
      value: 84.78589046217311
    - type: manhattan_pearson
      value: 83.98651139454486
    - type: manhattan_spearman
      value: 84.64928563751276
  - task:
      type: STS
    dataset:
      type: mteb/sts14-sts
      name: MTEB STS14
      config: default
      split: test
      revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
    metrics:
    - type: cos_sim_pearson
      value: 83.11158600428418
    - type: cos_sim_spearman
      value: 81.48561519933875
    - type: euclidean_pearson
      value: 83.21025907155807
    - type: euclidean_spearman
      value: 81.68699235487654
    - type: manhattan_pearson
      value: 83.16704771658094
    - type: manhattan_spearman
      value: 81.7133110412898
  - task:
      type: STS
    dataset:
      type: mteb/sts15-sts
      name: MTEB STS15
      config: default
      split: test
      revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
    metrics:
    - type: cos_sim_pearson
      value: 87.1514510686502
    - type: cos_sim_spearman
      value: 88.11449450494452
    - type: euclidean_pearson
      value: 87.75854949349939
    - type: euclidean_spearman
      value: 88.4055148221637
    - type: manhattan_pearson
      value: 87.71487828059706
    - type: manhattan_spearman
      value: 88.35301381116254
  - task:
      type: STS
    dataset:
      type: mteb/sts16-sts
      name: MTEB STS16
      config: default
      split: test
      revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
    metrics:
    - type: cos_sim_pearson
      value: 83.36838640113687
    - type: cos_sim_spearman
      value: 84.98776974283366
    - type: euclidean_pearson
      value: 84.0617526427129
    - type: euclidean_spearman
      value: 85.04234805662242
    - type: manhattan_pearson
      value: 83.87433162971784
    - type: manhattan_spearman
      value: 84.87174280390242
  - task:
      type: STS
    dataset:
      type: mteb/sts17-crosslingual-sts
      name: MTEB STS17 (en-en)
      config: en-en
      split: test
      revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
    metrics:
    - type: cos_sim_pearson
      value: 87.72465270691285
    - type: cos_sim_spearman
      value: 87.97672332532184
    - type: euclidean_pearson
      value: 88.78764701492182
    - type: euclidean_spearman
      value: 88.3509718074474
    - type: manhattan_pearson
      value: 88.73024739256215
    - type: manhattan_spearman
      value: 88.24149566970154
  - task:
      type: STS
    dataset:
      type: mteb/sts22-crosslingual-sts
      name: MTEB STS22 (en)
      config: en
      split: test
      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
    metrics:
    - type: cos_sim_pearson
      value: 64.65195562203238
    - type: cos_sim_spearman
      value: 65.0726777678982
    - type: euclidean_pearson
      value: 65.84698245675273
    - type: euclidean_spearman
      value: 65.13121502162804
    - type: manhattan_pearson
      value: 65.96149904857049
    - type: manhattan_spearman
      value: 65.39983948112955
  - task:
      type: STS
    dataset:
      type: mteb/stsbenchmark-sts
      name: MTEB STSBenchmark
      config: default
      split: test
      revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
    metrics:
    - type: cos_sim_pearson
      value: 85.2642818050049
    - type: cos_sim_spearman
      value: 86.30633382439257
    - type: euclidean_pearson
      value: 86.46510435905633
    - type: euclidean_spearman
      value: 86.62650496446
    - type: manhattan_pearson
      value: 86.2546330637872
    - type: manhattan_spearman
      value: 86.46309860938591
  - task:
      type: PairClassification
    dataset:
      type: mteb/sprintduplicatequestions-pairclassification
      name: MTEB SprintDuplicateQuestions
      config: default
      split: test
      revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
    metrics:
    - type: cos_sim_accuracy
      value: 99.84257425742574
    - type: cos_sim_ap
      value: 96.25445889914926
    - type: cos_sim_f1
      value: 92.03805708562844
    - type: cos_sim_precision
      value: 92.1765295887663
    - type: cos_sim_recall
      value: 91.9
    - type: dot_accuracy
      value: 99.83069306930693
    - type: dot_ap
      value: 96.00517778550396
    - type: dot_f1
      value: 91.27995920448751
    - type: dot_precision
      value: 93.1321540062435
    - type: dot_recall
      value: 89.5
    - type: euclidean_accuracy
      value: 99.84455445544555
    - type: euclidean_ap
      value: 96.14761524546034
    - type: euclidean_f1
      value: 91.97751660705163
    - type: euclidean_precision
      value: 94.04388714733543
    - type: euclidean_recall
      value: 90
    - type: manhattan_accuracy
      value: 99.84158415841584
    - type: manhattan_ap
      value: 96.17014673429341
    - type: manhattan_f1
      value: 91.93790686029043
    - type: manhattan_precision
      value: 92.07622868605817
    - type: manhattan_recall
      value: 91.8
    - type: max_accuracy
      value: 99.84455445544555
    - type: max_ap
      value: 96.25445889914926
    - type: max_f1
      value: 92.03805708562844
  - task:
      type: Classification
    dataset:
      type: mteb/toxic_conversations_50k
      name: MTEB ToxicConversationsClassification
      config: default
      split: test
      revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
    metrics:
    - type: accuracy
      value: 69.5008
    - type: ap
      value: 13.64158304183089
    - type: f1
      value: 53.50073331072236
  - task:
      type: Classification
    dataset:
      type: mteb/tweet_sentiment_extraction
      name: MTEB TweetSentimentExtractionClassification
      config: default
      split: test
      revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
    metrics:
    - type: accuracy
      value: 60.01980758347483
    - type: f1
      value: 60.35679678249753
  - task:
      type: PairClassification
    dataset:
      type: mteb/twittersemeval2015-pairclassification
      name: MTEB TwitterSemEval2015
      config: default
      split: test
      revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
    metrics:
    - type: cos_sim_accuracy
      value: 85.68874053764081
    - type: cos_sim_ap
      value: 73.26334732095694
    - type: cos_sim_f1
      value: 68.01558376272465
    - type: cos_sim_precision
      value: 64.93880489560834
    - type: cos_sim_recall
      value: 71.39841688654354
    - type: dot_accuracy
      value: 84.71121177802945
    - type: dot_ap
      value: 70.33606362522605
    - type: dot_f1
      value: 65.0887573964497
    - type: dot_precision
      value: 63.50401606425703
    - type: dot_recall
      value: 66.75461741424802
    - type: euclidean_accuracy
      value: 85.80795136198367
    - type: euclidean_ap
      value: 73.43201285001163
    - type: euclidean_f1
      value: 68.33166833166834
    - type: euclidean_precision
      value: 64.86486486486487
    - type: euclidean_recall
      value: 72.18997361477572
    - type: manhattan_accuracy
      value: 85.62317458425225
    - type: manhattan_ap
      value: 73.21212085536185
    - type: manhattan_f1
      value: 68.01681314482232
    - type: manhattan_precision
      value: 65.74735286875153
    - type: manhattan_recall
      value: 70.44854881266491
    - type: max_accuracy
      value: 85.80795136198367
    - type: max_ap
      value: 73.43201285001163
    - type: max_f1
      value: 68.33166833166834
  - task:
      type: PairClassification
    dataset:
      type: mteb/twitterurlcorpus-pairclassification
      name: MTEB TwitterURLCorpus
      config: default
      split: test
      revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
    metrics:
    - type: cos_sim_accuracy
      value: 88.81709162882757
    - type: cos_sim_ap
      value: 85.63540257309367
    - type: cos_sim_f1
      value: 77.9091382258904
    - type: cos_sim_precision
      value: 75.32710280373833
    - type: cos_sim_recall
      value: 80.67446874037573
    - type: dot_accuracy
      value: 88.04478596654636
    - type: dot_ap
      value: 84.16371725220706
    - type: dot_f1
      value: 76.45949643213666
    - type: dot_precision
      value: 73.54719396827655
    - type: dot_recall
      value: 79.61194949183862
    - type: euclidean_accuracy
      value: 88.9296386851399
    - type: euclidean_ap
      value: 85.71894615274715
    - type: euclidean_f1
      value: 78.12952767313823
    - type: euclidean_precision
      value: 73.7688098495212
    - type: euclidean_recall
      value: 83.03818909762857
    - type: manhattan_accuracy
      value: 88.89276982186519
    - type: manhattan_ap
      value: 85.6838514059479
    - type: manhattan_f1
      value: 78.06861875184856
    - type: manhattan_precision
      value: 75.09246088193457
    - type: manhattan_recall
      value: 81.29042192793348
    - type: max_accuracy
      value: 88.9296386851399
    - type: max_ap
      value: 85.71894615274715
    - type: max_f1
      value: 78.12952767313823
license: mit
language:
- en
---

This is the quantized (INT8) ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embedding model. The ONNX export was produced with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse), and the one-shot INT8 quantization was applied with Neural Magic's [Sparsify](https://account.neuralmagic.com/signin?client_id=d04a5f0c-983d-11ed-88a6-971073f187d3&return_to=https%3A//accounts.neuralmagic.com/v1/connect/authorize%3Fscope%3Dsparsify%3Aread%2Bsparsify%3Awrite%2Buser%3Aapi-key%3Aread%2Buser%3Aprofile%3Awrite%2Buser%3Aprofile%3Aread%26response_type%3Dcode%26code_challenge_method%3DS256%26redirect_uri%3Dhttps%3A//apps.neuralmagic.com/sparsify/oidc/callback.html%26state%3Da9b466a6193c4a7b92cba469408d2495%26client_id%3Dd04a5f0c-983d-11ed-88a6-971073f187d3%26code_challenge%3DP0EkmKBpplTb7crJOGS8YLSwT8UH-BeuD0wuE4JTORQ%26response_mode%3Dquery).
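
Because the artifact is a standard ONNX graph, one lightweight way to produce embeddings is to run it directly with `onnxruntime` and the original BAAI tokenizer. The snippet below is a minimal sketch, not an official workflow: the repository id, the `model.onnx` file name, and the assumption that the first graph output is the token-level last hidden state are unverified and should be checked against the files in this repository (`session.get_inputs()` / `session.get_outputs()` will confirm the exact names).

```python
# Minimal sketch: sentence embeddings from the quantized ONNX export via onnxruntime.
# Assumptions (verify against this repo): the repo id, the "model.onnx" file name,
# and that the first graph output is the token-level last hidden state.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

onnx_path = hf_hub_download(
    repo_id="zeroshot/bge-small-en-v1.5-quant",  # assumed id of this repository
    filename="model.onnx",                       # assumed ONNX file name
)
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-small-en-v1.5")
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])

sentences = ["What is INT8 quantization?", "How are sentence embeddings computed?"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

# Feed only the inputs the exported graph declares (typically input_ids,
# attention_mask, token_type_ids), cast to int64 as BERT-style ONNX exports expect.
ort_inputs = {
    inp.name: np.asarray(encoded[inp.name], dtype=np.int64)
    for inp in session.get_inputs()
}
last_hidden_state = session.run(None, ort_inputs)[0]

# bge models use the [CLS] token vector, L2-normalized, as the sentence embedding.
embeddings = last_hidden_state[:, 0]
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
print(embeddings.shape)  # 384-dimensional vectors for the small model
```

The same ONNX file can also be served with Neural Magic's DeepSparse runtime for accelerated CPU inference.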