---
pipeline_tag: sentence-similarity
tags:
  - finetuner
  - mteb
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
datasets:
  - jinaai/negation-dataset
language: en
license: apache-2.0
model-index:
  - name: jina-embedding-l-en-v1
    results:
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_counterfactual
          name: MTEB AmazonCounterfactualClassification (en)
          config: en
          split: test
          revision: e8379541af4e31359cca9fbcf4b00f2671dba205
        metrics:
          - type: accuracy
            value: 61.64179104477612
          - type: ap
            value: 24.63675721041911
          - type: f1
            value: 55.10036810049116
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_polarity
          name: MTEB AmazonPolarityClassification
          config: default
          split: test
          revision: e2d317d38cd51312af73b3d32a06d1a08b442046
        metrics:
          - type: accuracy
            value: 60.708125
          - type: ap
            value: 57.491681452557344
          - type: f1
            value: 58.046023443205655
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_reviews_multi
          name: MTEB AmazonReviewsClassification (en)
          config: en
          split: test
          revision: 1399c76144fd37290681b995c656ef9b2e06e26d
        metrics:
          - type: accuracy
            value: 28.12
          - type: f1
            value: 26.904734434317966
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackAndroidRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 36.635
          - type: map_at_10
            value: 48.291000000000004
          - type: map_at_100
            value: 49.833
          - type: map_at_1000
            value: 49.944
          - type: map_at_3
            value: 44.362
          - type: map_at_5
            value: 46.678
          - type: mrr_at_1
            value: 44.349
          - type: mrr_at_10
            value: 54.35
          - type: mrr_at_100
            value: 54.995000000000005
          - type: mrr_at_1000
            value: 55.03
          - type: mrr_at_3
            value: 52.074
          - type: mrr_at_5
            value: 53.433
          - type: ndcg_at_1
            value: 44.349
          - type: ndcg_at_10
            value: 54.876999999999995
          - type: ndcg_at_100
            value: 59.663
          - type: ndcg_at_1000
            value: 61.23
          - type: ndcg_at_3
            value: 49.727
          - type: ndcg_at_5
            value: 52.271
          - type: precision_at_1
            value: 44.349
          - type: precision_at_10
            value: 10.485999999999999
          - type: precision_at_100
            value: 1.6209999999999998
          - type: precision_at_1000
            value: 0.208
          - type: precision_at_3
            value: 23.653
          - type: precision_at_5
            value: 17.282
          - type: recall_at_1
            value: 36.635
          - type: recall_at_10
            value: 66.878
          - type: recall_at_100
            value: 86.239
          - type: recall_at_1000
            value: 96.14200000000001
          - type: recall_at_3
            value: 51.793
          - type: recall_at_5
            value: 58.943999999999996
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackEnglishRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 31.323
          - type: map_at_10
            value: 42.39
          - type: map_at_100
            value: 43.741
          - type: map_at_1000
            value: 43.872
          - type: map_at_3
            value: 39.109
          - type: map_at_5
            value: 40.961999999999996
          - type: mrr_at_1
            value: 39.617999999999995
          - type: mrr_at_10
            value: 48.595
          - type: mrr_at_100
            value: 49.236000000000004
          - type: mrr_at_1000
            value: 49.278
          - type: mrr_at_3
            value: 46.274
          - type: mrr_at_5
            value: 47.72
          - type: ndcg_at_1
            value: 39.617999999999995
          - type: ndcg_at_10
            value: 48.455
          - type: ndcg_at_100
            value: 52.949999999999996
          - type: ndcg_at_1000
            value: 54.93599999999999
          - type: ndcg_at_3
            value: 44.038
          - type: ndcg_at_5
            value: 46.154
          - type: precision_at_1
            value: 39.617999999999995
          - type: precision_at_10
            value: 9.318
          - type: precision_at_100
            value: 1.4869999999999999
          - type: precision_at_1000
            value: 0.19499999999999998
          - type: precision_at_3
            value: 21.614
          - type: precision_at_5
            value: 15.376000000000001
          - type: recall_at_1
            value: 31.323
          - type: recall_at_10
            value: 59.114999999999995
          - type: recall_at_100
            value: 77.98
          - type: recall_at_1000
            value: 90.561
          - type: recall_at_3
            value: 45.713
          - type: recall_at_5
            value: 51.842999999999996
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackGamingRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 40.858
          - type: map_at_10
            value: 53.477
          - type: map_at_100
            value: 54.47
          - type: map_at_1000
            value: 54.522999999999996
          - type: map_at_3
            value: 50.407999999999994
          - type: map_at_5
            value: 52.114000000000004
          - type: mrr_at_1
            value: 46.708
          - type: mrr_at_10
            value: 56.855999999999995
          - type: mrr_at_100
            value: 57.472
          - type: mrr_at_1000
            value: 57.498000000000005
          - type: mrr_at_3
            value: 54.45100000000001
          - type: mrr_at_5
            value: 55.781000000000006
          - type: ndcg_at_1
            value: 46.708
          - type: ndcg_at_10
            value: 59.299
          - type: ndcg_at_100
            value: 63.138000000000005
          - type: ndcg_at_1000
            value: 64.189
          - type: ndcg_at_3
            value: 54.125
          - type: ndcg_at_5
            value: 56.57600000000001
          - type: precision_at_1
            value: 46.708
          - type: precision_at_10
            value: 9.48
          - type: precision_at_100
            value: 1.234
          - type: precision_at_1000
            value: 0.136
          - type: precision_at_3
            value: 24.221999999999998
          - type: precision_at_5
            value: 16.414
          - type: recall_at_1
            value: 40.858
          - type: recall_at_10
            value: 73.1
          - type: recall_at_100
            value: 89.447
          - type: recall_at_1000
            value: 97.00999999999999
          - type: recall_at_3
            value: 59.092999999999996
          - type: recall_at_5
            value: 65.275
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackGisRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 27.400000000000002
          - type: map_at_10
            value: 36.878
          - type: map_at_100
            value: 37.993
          - type: map_at_1000
            value: 38.074000000000005
          - type: map_at_3
            value: 34.147
          - type: map_at_5
            value: 35.703
          - type: mrr_at_1
            value: 29.378999999999998
          - type: mrr_at_10
            value: 38.921
          - type: mrr_at_100
            value: 39.865
          - type: mrr_at_1000
            value: 39.92
          - type: mrr_at_3
            value: 36.29
          - type: mrr_at_5
            value: 37.878
          - type: ndcg_at_1
            value: 29.378999999999998
          - type: ndcg_at_10
            value: 42.205
          - type: ndcg_at_100
            value: 47.333
          - type: ndcg_at_1000
            value: 49.258
          - type: ndcg_at_3
            value: 36.83
          - type: ndcg_at_5
            value: 39.525
          - type: precision_at_1
            value: 29.378999999999998
          - type: precision_at_10
            value: 6.4750000000000005
          - type: precision_at_100
            value: 0.947
          - type: precision_at_1000
            value: 0.11499999999999999
          - type: precision_at_3
            value: 15.631
          - type: precision_at_5
            value: 10.983
          - type: recall_at_1
            value: 27.400000000000002
          - type: recall_at_10
            value: 56.61000000000001
          - type: recall_at_100
            value: 79.475
          - type: recall_at_1000
            value: 93.714
          - type: recall_at_3
            value: 42.064
          - type: recall_at_5
            value: 48.526
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackMathematicaRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 16.184
          - type: map_at_10
            value: 24.157
          - type: map_at_100
            value: 25.339
          - type: map_at_1000
            value: 25.454
          - type: map_at_3
            value: 21.426000000000002
          - type: map_at_5
            value: 22.792
          - type: mrr_at_1
            value: 19.776
          - type: mrr_at_10
            value: 28.53
          - type: mrr_at_100
            value: 29.463
          - type: mrr_at_1000
            value: 29.532000000000004
          - type: mrr_at_3
            value: 26.016000000000002
          - type: mrr_at_5
            value: 27.359
          - type: ndcg_at_1
            value: 19.776
          - type: ndcg_at_10
            value: 29.482000000000003
          - type: ndcg_at_100
            value: 35.132999999999996
          - type: ndcg_at_1000
            value: 38.048
          - type: ndcg_at_3
            value: 24.519
          - type: ndcg_at_5
            value: 26.541999999999998
          - type: precision_at_1
            value: 19.776
          - type: precision_at_10
            value: 5.5969999999999995
          - type: precision_at_100
            value: 0.9780000000000001
          - type: precision_at_1000
            value: 0.136
          - type: precision_at_3
            value: 12.065
          - type: precision_at_5
            value: 8.756
          - type: recall_at_1
            value: 16.184
          - type: recall_at_10
            value: 41.506
          - type: recall_at_100
            value: 66.322
          - type: recall_at_1000
            value: 87.40299999999999
          - type: recall_at_3
            value: 27.618
          - type: recall_at_5
            value: 32.81
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackPhysicsRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 28.79
          - type: map_at_10
            value: 39.475
          - type: map_at_100
            value: 40.864
          - type: map_at_1000
            value: 40.967
          - type: map_at_3
            value: 36.394999999999996
          - type: map_at_5
            value: 38.101
          - type: mrr_at_1
            value: 35.611
          - type: mrr_at_10
            value: 45.32
          - type: mrr_at_100
            value: 46.160000000000004
          - type: mrr_at_1000
            value: 46.205
          - type: mrr_at_3
            value: 42.717
          - type: mrr_at_5
            value: 44.233
          - type: ndcg_at_1
            value: 35.611
          - type: ndcg_at_10
            value: 45.513999999999996
          - type: ndcg_at_100
            value: 51.163000000000004
          - type: ndcg_at_1000
            value: 53.099
          - type: ndcg_at_3
            value: 40.602
          - type: ndcg_at_5
            value: 42.933
          - type: precision_at_1
            value: 35.611
          - type: precision_at_10
            value: 8.219
          - type: precision_at_100
            value: 1.302
          - type: precision_at_1000
            value: 0.166
          - type: precision_at_3
            value: 19.281000000000002
          - type: precision_at_5
            value: 13.550999999999998
          - type: recall_at_1
            value: 28.79
          - type: recall_at_10
            value: 57.708000000000006
          - type: recall_at_100
            value: 80.965
          - type: recall_at_1000
            value: 93.60000000000001
          - type: recall_at_3
            value: 43.766
          - type: recall_at_5
            value: 50.003
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackProgrammersRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 27.392
          - type: map_at_10
            value: 37.213
          - type: map_at_100
            value: 38.513999999999996
          - type: map_at_1000
            value: 38.629999999999995
          - type: map_at_3
            value: 33.844
          - type: map_at_5
            value: 35.791000000000004
          - type: mrr_at_1
            value: 33.676
          - type: mrr_at_10
            value: 42.58
          - type: mrr_at_100
            value: 43.472
          - type: mrr_at_1000
            value: 43.519999999999996
          - type: mrr_at_3
            value: 40.011
          - type: mrr_at_5
            value: 41.575
          - type: ndcg_at_1
            value: 33.676
          - type: ndcg_at_10
            value: 42.949
          - type: ndcg_at_100
            value: 48.542
          - type: ndcg_at_1000
            value: 50.804
          - type: ndcg_at_3
            value: 37.631
          - type: ndcg_at_5
            value: 40.226
          - type: precision_at_1
            value: 33.676
          - type: precision_at_10
            value: 7.785
          - type: precision_at_100
            value: 1.229
          - type: precision_at_1000
            value: 0.16199999999999998
          - type: precision_at_3
            value: 17.694
          - type: precision_at_5
            value: 12.763
          - type: recall_at_1
            value: 27.392
          - type: recall_at_10
            value: 54.82599999999999
          - type: recall_at_100
            value: 78.61
          - type: recall_at_1000
            value: 93.78800000000001
          - type: recall_at_3
            value: 40.019
          - type: recall_at_5
            value: 46.866
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 27.550666666666668
          - type: map_at_10
            value: 37.07508333333333
          - type: map_at_100
            value: 38.31308333333333
          - type: map_at_1000
            value: 38.427166666666665
          - type: map_at_3
            value: 34.14741666666667
          - type: map_at_5
            value: 35.72416666666667
          - type: mrr_at_1
            value: 32.63183333333333
          - type: mrr_at_10
            value: 41.346999999999994
          - type: mrr_at_100
            value: 42.17225
          - type: mrr_at_1000
            value: 42.22475
          - type: mrr_at_3
            value: 38.903999999999996
          - type: mrr_at_5
            value: 40.27291666666667
          - type: ndcg_at_1
            value: 32.63183333333333
          - type: ndcg_at_10
            value: 42.61841666666667
          - type: ndcg_at_100
            value: 47.742
          - type: ndcg_at_1000
            value: 49.869416666666666
          - type: ndcg_at_3
            value: 37.73925
          - type: ndcg_at_5
            value: 39.925666666666665
          - type: precision_at_1
            value: 32.63183333333333
          - type: precision_at_10
            value: 7.504000000000001
          - type: precision_at_100
            value: 1.1986666666666668
          - type: precision_at_1000
            value: 0.15758333333333333
          - type: precision_at_3
            value: 17.415666666666667
          - type: precision_at_5
            value: 12.297749999999999
          - type: recall_at_1
            value: 27.550666666666668
          - type: recall_at_10
            value: 54.68383333333333
          - type: recall_at_100
            value: 77.01691666666667
          - type: recall_at_1000
            value: 91.71175000000001
          - type: recall_at_3
            value: 40.91866666666667
          - type: recall_at_5
            value: 46.669000000000004
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackStatsRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 24.91
          - type: map_at_10
            value: 32.053
          - type: map_at_100
            value: 33.086
          - type: map_at_1000
            value: 33.176
          - type: map_at_3
            value: 29.768
          - type: map_at_5
            value: 30.842000000000002
          - type: mrr_at_1
            value: 27.607
          - type: mrr_at_10
            value: 34.732
          - type: mrr_at_100
            value: 35.589
          - type: mrr_at_1000
            value: 35.65
          - type: mrr_at_3
            value: 32.566
          - type: mrr_at_5
            value: 33.556000000000004
          - type: ndcg_at_1
            value: 27.607
          - type: ndcg_at_10
            value: 36.579
          - type: ndcg_at_100
            value: 41.646
          - type: ndcg_at_1000
            value: 43.845
          - type: ndcg_at_3
            value: 32.132
          - type: ndcg_at_5
            value: 33.825
          - type: precision_at_1
            value: 27.607
          - type: precision_at_10
            value: 5.827999999999999
          - type: precision_at_100
            value: 0.928
          - type: precision_at_1000
            value: 0.12
          - type: precision_at_3
            value: 13.804
          - type: precision_at_5
            value: 9.447999999999999
          - type: recall_at_1
            value: 24.91
          - type: recall_at_10
            value: 47.924
          - type: recall_at_100
            value: 70.88799999999999
          - type: recall_at_1000
            value: 87.087
          - type: recall_at_3
            value: 35.169
          - type: recall_at_5
            value: 39.497
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackTexRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 18.19
          - type: map_at_10
            value: 25.765
          - type: map_at_100
            value: 26.882
          - type: map_at_1000
            value: 27.012999999999998
          - type: map_at_3
            value: 23.378
          - type: map_at_5
            value: 24.587
          - type: mrr_at_1
            value: 22.505
          - type: mrr_at_10
            value: 29.948999999999998
          - type: mrr_at_100
            value: 30.871
          - type: mrr_at_1000
            value: 30.947999999999997
          - type: mrr_at_3
            value: 27.764
          - type: mrr_at_5
            value: 28.951999999999998
          - type: ndcg_at_1
            value: 22.505
          - type: ndcg_at_10
            value: 30.593999999999998
          - type: ndcg_at_100
            value: 35.983
          - type: ndcg_at_1000
            value: 38.869
          - type: ndcg_at_3
            value: 26.369
          - type: ndcg_at_5
            value: 28.124
          - type: precision_at_1
            value: 22.505
          - type: precision_at_10
            value: 5.575
          - type: precision_at_100
            value: 0.9860000000000001
          - type: precision_at_1000
            value: 0.14200000000000002
          - type: precision_at_3
            value: 12.423
          - type: precision_at_5
            value: 8.878
          - type: recall_at_1
            value: 18.19
          - type: recall_at_10
            value: 41.032000000000004
          - type: recall_at_100
            value: 65.32900000000001
          - type: recall_at_1000
            value: 85.702
          - type: recall_at_3
            value: 29.136
          - type: recall_at_5
            value: 33.711
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackUnixRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 28.304000000000002
          - type: map_at_10
            value: 37.153000000000006
          - type: map_at_100
            value: 38.317
          - type: map_at_1000
            value: 38.422
          - type: map_at_3
            value: 34.317
          - type: map_at_5
            value: 35.801
          - type: mrr_at_1
            value: 33.675
          - type: mrr_at_10
            value: 41.302
          - type: mrr_at_100
            value: 42.202
          - type: mrr_at_1000
            value: 42.264
          - type: mrr_at_3
            value: 38.759
          - type: mrr_at_5
            value: 40.215
          - type: ndcg_at_1
            value: 33.675
          - type: ndcg_at_10
            value: 42.35
          - type: ndcg_at_100
            value: 47.653
          - type: ndcg_at_1000
            value: 49.964999999999996
          - type: ndcg_at_3
            value: 37.372
          - type: ndcg_at_5
            value: 39.544000000000004
          - type: precision_at_1
            value: 33.675
          - type: precision_at_10
            value: 7.136000000000001
          - type: precision_at_100
            value: 1.097
          - type: precision_at_1000
            value: 0.14100000000000001
          - type: precision_at_3
            value: 16.915
          - type: precision_at_5
            value: 11.884
          - type: recall_at_1
            value: 28.304000000000002
          - type: recall_at_10
            value: 54.083000000000006
          - type: recall_at_100
            value: 77.167
          - type: recall_at_1000
            value: 93.151
          - type: recall_at_3
            value: 40.441
          - type: recall_at_5
            value: 45.95
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackWebmastersRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 29.575000000000003
          - type: map_at_10
            value: 39.089
          - type: map_at_100
            value: 40.813
          - type: map_at_1000
            value: 41.032000000000004
          - type: map_at_3
            value: 36.153999999999996
          - type: map_at_5
            value: 37.518
          - type: mrr_at_1
            value: 35.573
          - type: mrr_at_10
            value: 43.891000000000005
          - type: mrr_at_100
            value: 44.777
          - type: mrr_at_1000
            value: 44.812999999999995
          - type: mrr_at_3
            value: 41.337
          - type: mrr_at_5
            value: 42.533
          - type: ndcg_at_1
            value: 35.573
          - type: ndcg_at_10
            value: 45.275999999999996
          - type: ndcg_at_100
            value: 50.94
          - type: ndcg_at_1000
            value: 52.893
          - type: ndcg_at_3
            value: 40.693
          - type: ndcg_at_5
            value: 42.198
          - type: precision_at_1
            value: 35.573
          - type: precision_at_10
            value: 8.715
          - type: precision_at_100
            value: 1.7209999999999999
          - type: precision_at_1000
            value: 0.252
          - type: precision_at_3
            value: 19.302
          - type: precision_at_5
            value: 13.439
          - type: recall_at_1
            value: 29.575000000000003
          - type: recall_at_10
            value: 56.65599999999999
          - type: recall_at_100
            value: 81.999
          - type: recall_at_1000
            value: 93.999
          - type: recall_at_3
            value: 42.768
          - type: recall_at_5
            value: 47.54
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackWordpressRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 21.047
          - type: map_at_10
            value: 28.96
          - type: map_at_100
            value: 29.904999999999998
          - type: map_at_1000
            value: 30.019000000000002
          - type: map_at_3
            value: 26.461000000000002
          - type: map_at_5
            value: 27.801
          - type: mrr_at_1
            value: 23.105
          - type: mrr_at_10
            value: 31.137999999999998
          - type: mrr_at_100
            value: 31.965
          - type: mrr_at_1000
            value: 32.039
          - type: mrr_at_3
            value: 28.589
          - type: mrr_at_5
            value: 30.04
          - type: ndcg_at_1
            value: 23.105
          - type: ndcg_at_10
            value: 33.841
          - type: ndcg_at_100
            value: 38.76
          - type: ndcg_at_1000
            value: 41.297
          - type: ndcg_at_3
            value: 28.833
          - type: ndcg_at_5
            value: 31.19
          - type: precision_at_1
            value: 23.105
          - type: precision_at_10
            value: 5.434
          - type: precision_at_100
            value: 0.8540000000000001
          - type: precision_at_1000
            value: 0.11800000000000001
          - type: precision_at_3
            value: 12.384
          - type: precision_at_5
            value: 8.799
          - type: recall_at_1
            value: 21.047
          - type: recall_at_10
            value: 46.768
          - type: recall_at_100
            value: 69.782
          - type: recall_at_1000
            value: 88.384
          - type: recall_at_3
            value: 33.444
          - type: recall_at_5
            value: 39.062999999999995
      - task:
          type: Retrieval
        dataset:
          type: arguana
          name: MTEB ArguAna
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 26.031
          - type: map_at_10
            value: 40.742
          - type: map_at_100
            value: 41.832
          - type: map_at_1000
            value: 41.844
          - type: map_at_3
            value: 35.526
          - type: map_at_5
            value: 38.567
          - type: mrr_at_1
            value: 26.316
          - type: mrr_at_10
            value: 40.855999999999995
          - type: mrr_at_100
            value: 41.946
          - type: mrr_at_1000
            value: 41.957
          - type: mrr_at_3
            value: 35.621
          - type: mrr_at_5
            value: 38.644
          - type: ndcg_at_1
            value: 26.031
          - type: ndcg_at_10
            value: 49.483
          - type: ndcg_at_100
            value: 54.074999999999996
          - type: ndcg_at_1000
            value: 54.344
          - type: ndcg_at_3
            value: 38.792
          - type: ndcg_at_5
            value: 44.24
          - type: precision_at_1
            value: 26.031
          - type: precision_at_10
            value: 7.76
          - type: precision_at_100
            value: 0.975
          - type: precision_at_1000
            value: 0.1
          - type: precision_at_3
            value: 16.098000000000003
          - type: precision_at_5
            value: 12.29
          - type: recall_at_1
            value: 26.031
          - type: recall_at_10
            value: 77.596
          - type: recall_at_100
            value: 97.51100000000001
          - type: recall_at_1000
            value: 99.57300000000001
          - type: recall_at_3
            value: 48.293
          - type: recall_at_5
            value: 61.451
      - task:
          type: Clustering
        dataset:
          type: mteb/arxiv-clustering-p2p
          name: MTEB ArxivClusteringP2P
          config: default
          split: test
          revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
        metrics:
          - type: v_measure
            value: 41.76036539849672
      - task:
          type: Clustering
        dataset:
          type: mteb/arxiv-clustering-s2s
          name: MTEB ArxivClusteringS2S
          config: default
          split: test
          revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
        metrics:
          - type: v_measure
            value: 34.27585676831497
      - task:
          type: Reranking
        dataset:
          type: mteb/askubuntudupquestions-reranking
          name: MTEB AskUbuntuDupQuestions
          config: default
          split: test
          revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
        metrics:
          - type: map
            value: 63.47328704612227
          - type: mrr
            value: 76.63182078002022
      - task:
          type: STS
        dataset:
          type: mteb/biosses-sts
          name: MTEB BIOSSES
          config: default
          split: test
          revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
        metrics:
          - type: cos_sim_pearson
            value: 87.42072640664271
          - type: cos_sim_spearman
            value: 84.31336692039407
          - type: euclidean_pearson
            value: 54.93250871487246
          - type: euclidean_spearman
            value: 55.91091252228738
          - type: manhattan_pearson
            value: 54.78812442894107
          - type: manhattan_spearman
            value: 55.35005636930548
      - task:
          type: Classification
        dataset:
          type: mteb/banking77
          name: MTEB Banking77Classification
          config: default
          split: test
          revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
        metrics:
          - type: accuracy
            value: 86.28896103896103
          - type: f1
            value: 86.23389676482913
      - task:
          type: Clustering
        dataset:
          type: mteb/biorxiv-clustering-p2p
          name: MTEB BiorxivClusteringP2P
          config: default
          split: test
          revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
        metrics:
          - type: v_measure
            value: 33.73729294301578
      - task:
          type: Clustering
        dataset:
          type: mteb/biorxiv-clustering-s2s
          name: MTEB BiorxivClusteringS2S
          config: default
          split: test
          revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
        metrics:
          - type: v_measure
            value: 30.641078215958288
      - task:
          type: Retrieval
        dataset:
          type: climate-fever
          name: MTEB ClimateFEVER
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 8.258000000000001
          - type: map_at_10
            value: 14.57
          - type: map_at_100
            value: 15.98
          - type: map_at_1000
            value: 16.149
          - type: map_at_3
            value: 11.993
          - type: map_at_5
            value: 13.383000000000001
          - type: mrr_at_1
            value: 18.176000000000002
          - type: mrr_at_10
            value: 28.560000000000002
          - type: mrr_at_100
            value: 29.656
          - type: mrr_at_1000
            value: 29.709999999999997
          - type: mrr_at_3
            value: 25.255
          - type: mrr_at_5
            value: 27.128000000000004
          - type: ndcg_at_1
            value: 18.176000000000002
          - type: ndcg_at_10
            value: 21.36
          - type: ndcg_at_100
            value: 27.619
          - type: ndcg_at_1000
            value: 31.086000000000002
          - type: ndcg_at_3
            value: 16.701
          - type: ndcg_at_5
            value: 18.559
          - type: precision_at_1
            value: 18.176000000000002
          - type: precision_at_10
            value: 6.683999999999999
          - type: precision_at_100
            value: 1.3339999999999999
          - type: precision_at_1000
            value: 0.197
          - type: precision_at_3
            value: 12.269
          - type: precision_at_5
            value: 9.798
          - type: recall_at_1
            value: 8.258000000000001
          - type: recall_at_10
            value: 27.060000000000002
          - type: recall_at_100
            value: 48.833
          - type: recall_at_1000
            value: 68.636
          - type: recall_at_3
            value: 15.895999999999999
          - type: recall_at_5
            value: 20.625
      - task:
          type: Retrieval
        dataset:
          type: dbpedia-entity
          name: MTEB DBPedia
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 8.241
          - type: map_at_10
            value: 17.141000000000002
          - type: map_at_100
            value: 22.805
          - type: map_at_1000
            value: 24.189
          - type: map_at_3
            value: 12.940999999999999
          - type: map_at_5
            value: 14.607000000000001
          - type: mrr_at_1
            value: 62.25000000000001
          - type: mrr_at_10
            value: 70.537
          - type: mrr_at_100
            value: 70.851
          - type: mrr_at_1000
            value: 70.875
          - type: mrr_at_3
            value: 68.75
          - type: mrr_at_5
            value: 69.77499999999999
          - type: ndcg_at_1
            value: 50.125
          - type: ndcg_at_10
            value: 36.032
          - type: ndcg_at_100
            value: 39.428999999999995
          - type: ndcg_at_1000
            value: 47.138999999999996
          - type: ndcg_at_3
            value: 40.99
          - type: ndcg_at_5
            value: 37.772
          - type: precision_at_1
            value: 62.25000000000001
          - type: precision_at_10
            value: 28.050000000000004
          - type: precision_at_100
            value: 8.527999999999999
          - type: precision_at_1000
            value: 1.82
          - type: precision_at_3
            value: 45
          - type: precision_at_5
            value: 36
          - type: recall_at_1
            value: 8.241
          - type: recall_at_10
            value: 22.583000000000002
          - type: recall_at_100
            value: 44.267
          - type: recall_at_1000
            value: 69.497
          - type: recall_at_3
            value: 14.326
          - type: recall_at_5
            value: 17.29
      - task:
          type: Classification
        dataset:
          type: mteb/emotion
          name: MTEB EmotionClassification
          config: default
          split: test
          revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
        metrics:
          - type: accuracy
            value: 42.295
          - type: f1
            value: 38.32403088027173
      - task:
          type: Retrieval
        dataset:
          type: fever
          name: MTEB FEVER
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 58.553
          - type: map_at_10
            value: 69.632
          - type: map_at_100
            value: 69.95400000000001
          - type: map_at_1000
            value: 69.968
          - type: map_at_3
            value: 67.656
          - type: map_at_5
            value: 68.86
          - type: mrr_at_1
            value: 63.156
          - type: mrr_at_10
            value: 74.37700000000001
          - type: mrr_at_100
            value: 74.629
          - type: mrr_at_1000
            value: 74.63300000000001
          - type: mrr_at_3
            value: 72.577
          - type: mrr_at_5
            value: 73.71
          - type: ndcg_at_1
            value: 63.156
          - type: ndcg_at_10
            value: 75.345
          - type: ndcg_at_100
            value: 76.728
          - type: ndcg_at_1000
            value: 77.006
          - type: ndcg_at_3
            value: 71.67099999999999
          - type: ndcg_at_5
            value: 73.656
          - type: precision_at_1
            value: 63.156
          - type: precision_at_10
            value: 9.673
          - type: precision_at_100
            value: 1.045
          - type: precision_at_1000
            value: 0.108
          - type: precision_at_3
            value: 28.393
          - type: precision_at_5
            value: 18.160999999999998
          - type: recall_at_1
            value: 58.553
          - type: recall_at_10
            value: 88.362
          - type: recall_at_100
            value: 94.401
          - type: recall_at_1000
            value: 96.256
          - type: recall_at_3
            value: 78.371
          - type: recall_at_5
            value: 83.32300000000001
      - task:
          type: Retrieval
        dataset:
          type: fiqa
          name: MTEB FiQA2018
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 19.302
          - type: map_at_10
            value: 31.887
          - type: map_at_100
            value: 33.727000000000004
          - type: map_at_1000
            value: 33.914
          - type: map_at_3
            value: 27.254
          - type: map_at_5
            value: 29.904999999999998
          - type: mrr_at_1
            value: 39.043
          - type: mrr_at_10
            value: 47.858000000000004
          - type: mrr_at_100
            value: 48.636
          - type: mrr_at_1000
            value: 48.677
          - type: mrr_at_3
            value: 45.062000000000005
          - type: mrr_at_5
            value: 46.775
          - type: ndcg_at_1
            value: 39.043
          - type: ndcg_at_10
            value: 39.899
          - type: ndcg_at_100
            value: 46.719
          - type: ndcg_at_1000
            value: 49.739
          - type: ndcg_at_3
            value: 35.666
          - type: ndcg_at_5
            value: 37.232
          - type: precision_at_1
            value: 39.043
          - type: precision_at_10
            value: 11.265
          - type: precision_at_100
            value: 1.864
          - type: precision_at_1000
            value: 0.23800000000000002
          - type: precision_at_3
            value: 24.227999999999998
          - type: precision_at_5
            value: 18.148
          - type: recall_at_1
            value: 19.302
          - type: recall_at_10
            value: 47.278
          - type: recall_at_100
            value: 72.648
          - type: recall_at_1000
            value: 90.793
          - type: recall_at_3
            value: 31.235000000000003
          - type: recall_at_5
            value: 38.603
      - task:
          type: Retrieval
        dataset:
          type: hotpotqa
          name: MTEB HotpotQA
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 31.398
          - type: map_at_10
            value: 44.635000000000005
          - type: map_at_100
            value: 45.513
          - type: map_at_1000
            value: 45.595
          - type: map_at_3
            value: 41.894
          - type: map_at_5
            value: 43.514
          - type: mrr_at_1
            value: 62.795
          - type: mrr_at_10
            value: 70.001
          - type: mrr_at_100
            value: 70.378
          - type: mrr_at_1000
            value: 70.399
          - type: mrr_at_3
            value: 68.542
          - type: mrr_at_5
            value: 69.394
          - type: ndcg_at_1
            value: 62.795
          - type: ndcg_at_10
            value: 53.635
          - type: ndcg_at_100
            value: 57.05
          - type: ndcg_at_1000
            value: 58.755
          - type: ndcg_at_3
            value: 49.267
          - type: ndcg_at_5
            value: 51.522
          - type: precision_at_1
            value: 62.795
          - type: precision_at_10
            value: 11.196
          - type: precision_at_100
            value: 1.389
          - type: precision_at_1000
            value: 0.16199999999999998
          - type: precision_at_3
            value: 30.804
          - type: precision_at_5
            value: 20.265
          - type: recall_at_1
            value: 31.398
          - type: recall_at_10
            value: 55.982
          - type: recall_at_100
            value: 69.453
          - type: recall_at_1000
            value: 80.756
          - type: recall_at_3
            value: 46.205
          - type: recall_at_5
            value: 50.662
      - task:
          type: Classification
        dataset:
          type: mteb/imdb
          name: MTEB ImdbClassification
          config: default
          split: test
          revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
        metrics:
          - type: accuracy
            value: 63.803200000000004
          - type: ap
            value: 59.04397034963468
          - type: f1
            value: 63.4675375611795
      - task:
          type: Retrieval
        dataset:
          type: msmarco
          name: MTEB MSMARCO
          config: default
          split: dev
          revision: None
        metrics:
          - type: map_at_1
            value: 17.671
          - type: map_at_10
            value: 29.152
          - type: map_at_100
            value: 30.422
          - type: map_at_1000
            value: 30.481
          - type: map_at_3
            value: 25.417
          - type: map_at_5
            value: 27.448
          - type: mrr_at_1
            value: 18.195
          - type: mrr_at_10
            value: 29.67
          - type: mrr_at_100
            value: 30.891999999999996
          - type: mrr_at_1000
            value: 30.944
          - type: mrr_at_3
            value: 25.974000000000004
          - type: mrr_at_5
            value: 27.996
          - type: ndcg_at_1
            value: 18.195
          - type: ndcg_at_10
            value: 35.795
          - type: ndcg_at_100
            value: 42.117
          - type: ndcg_at_1000
            value: 43.585
          - type: ndcg_at_3
            value: 28.122000000000003
          - type: ndcg_at_5
            value: 31.757
          - type: precision_at_1
            value: 18.195
          - type: precision_at_10
            value: 5.89
          - type: precision_at_100
            value: 0.9079999999999999
          - type: precision_at_1000
            value: 0.10300000000000001
          - type: precision_at_3
            value: 12.24
          - type: precision_at_5
            value: 9.178
          - type: recall_at_1
            value: 17.671
          - type: recall_at_10
            value: 56.373
          - type: recall_at_100
            value: 86.029
          - type: recall_at_1000
            value: 97.246
          - type: recall_at_3
            value: 35.414
          - type: recall_at_5
            value: 44.149
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_domain
          name: MTEB MTOPDomainClassification (en)
          config: en
          split: test
          revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
        metrics:
          - type: accuracy
            value: 90.80255357957135
          - type: f1
            value: 90.79256308087807
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_intent
          name: MTEB MTOPIntentClassification (en)
          config: en
          split: test
          revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
        metrics:
          - type: accuracy
            value: 71.20611035111719
          - type: f1
            value: 54.075483897190836
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_intent
          name: MTEB MassiveIntentClassification (en)
          config: en
          split: test
          revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
        metrics:
          - type: accuracy
            value: 70.79354404841965
          - type: f1
            value: 68.53816551555609
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_scenario
          name: MTEB MassiveScenarioClassification (en)
          config: en
          split: test
          revision: 7d571f92784cd94a019292a1f45445077d0ef634
        metrics:
          - type: accuracy
            value: 76.6072629455279
          - type: f1
            value: 77.04997715738867
      - task:
          type: Clustering
        dataset:
          type: mteb/medrxiv-clustering-p2p
          name: MTEB MedrxivClusteringP2P
          config: default
          split: test
          revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
        metrics:
          - type: v_measure
            value: 30.432745003633016
      - task:
          type: Clustering
        dataset:
          type: mteb/medrxiv-clustering-s2s
          name: MTEB MedrxivClusteringS2S
          config: default
          split: test
          revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
        metrics:
          - type: v_measure
            value: 28.95493811839366
      - task:
          type: Reranking
        dataset:
          type: mteb/mind_small
          name: MTEB MindSmallReranking
          config: default
          split: test
          revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
        metrics:
          - type: map
            value: 31.63516074152514
          - type: mrr
            value: 32.73091425241894
      - task:
          type: Retrieval
        dataset:
          type: nfcorpus
          name: MTEB NFCorpus
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 5.379
          - type: map_at_10
            value: 12.051
          - type: map_at_100
            value: 15.176
          - type: map_at_1000
            value: 16.662
          - type: map_at_3
            value: 8.588
          - type: map_at_5
            value: 10.274
          - type: mrr_at_1
            value: 44.891999999999996
          - type: mrr_at_10
            value: 53.06999999999999
          - type: mrr_at_100
            value: 53.675
          - type: mrr_at_1000
            value: 53.717999999999996
          - type: mrr_at_3
            value: 50.671
          - type: mrr_at_5
            value: 52.25
          - type: ndcg_at_1
            value: 42.879
          - type: ndcg_at_10
            value: 33.291
          - type: ndcg_at_100
            value: 30.567
          - type: ndcg_at_1000
            value: 39.598
          - type: ndcg_at_3
            value: 37.713
          - type: ndcg_at_5
            value: 36.185
          - type: precision_at_1
            value: 44.891999999999996
          - type: precision_at_10
            value: 24.923000000000002
          - type: precision_at_100
            value: 8.015
          - type: precision_at_1000
            value: 2.083
          - type: precision_at_3
            value: 35.088
          - type: precision_at_5
            value: 31.765
          - type: recall_at_1
            value: 5.379
          - type: recall_at_10
            value: 16.346
          - type: recall_at_100
            value: 31.887999999999998
          - type: recall_at_1000
            value: 64.90599999999999
          - type: recall_at_3
            value: 9.543
          - type: recall_at_5
            value: 12.369
      - task:
          type: Retrieval
        dataset:
          type: nq
          name: MTEB NQ
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 25.654
          - type: map_at_10
            value: 40.163
          - type: map_at_100
            value: 41.376000000000005
          - type: map_at_1000
            value: 41.411
          - type: map_at_3
            value: 35.677
          - type: map_at_5
            value: 38.238
          - type: mrr_at_1
            value: 29.055999999999997
          - type: mrr_at_10
            value: 42.571999999999996
          - type: mrr_at_100
            value: 43.501
          - type: mrr_at_1000
            value: 43.527
          - type: mrr_at_3
            value: 38.775
          - type: mrr_at_5
            value: 40.953
          - type: ndcg_at_1
            value: 29.026999999999997
          - type: ndcg_at_10
            value: 47.900999999999996
          - type: ndcg_at_100
            value: 52.941
          - type: ndcg_at_1000
            value: 53.786
          - type: ndcg_at_3
            value: 39.387
          - type: ndcg_at_5
            value: 43.65
          - type: precision_at_1
            value: 29.026999999999997
          - type: precision_at_10
            value: 8.247
          - type: precision_at_100
            value: 1.102
          - type: precision_at_1000
            value: 0.11800000000000001
          - type: precision_at_3
            value: 18.231
          - type: precision_at_5
            value: 13.378
          - type: recall_at_1
            value: 25.654
          - type: recall_at_10
            value: 69.175
          - type: recall_at_100
            value: 90.85600000000001
          - type: recall_at_1000
            value: 97.18
          - type: recall_at_3
            value: 47.043
          - type: recall_at_5
            value: 56.86600000000001
      - task:
          type: Retrieval
        dataset:
          type: quora
          name: MTEB QuoraRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 70.785
          - type: map_at_10
            value: 84.509
          - type: map_at_100
            value: 85.17
          - type: map_at_1000
            value: 85.187
          - type: map_at_3
            value: 81.628
          - type: map_at_5
            value: 83.422
          - type: mrr_at_1
            value: 81.43
          - type: mrr_at_10
            value: 87.506
          - type: mrr_at_100
            value: 87.616
          - type: mrr_at_1000
            value: 87.617
          - type: mrr_at_3
            value: 86.598
          - type: mrr_at_5
            value: 87.215
          - type: ndcg_at_1
            value: 81.44
          - type: ndcg_at_10
            value: 88.208
          - type: ndcg_at_100
            value: 89.49000000000001
          - type: ndcg_at_1000
            value: 89.59700000000001
          - type: ndcg_at_3
            value: 85.471
          - type: ndcg_at_5
            value: 86.955
          - type: precision_at_1
            value: 81.44
          - type: precision_at_10
            value: 13.347000000000001
          - type: precision_at_100
            value: 1.53
          - type: precision_at_1000
            value: 0.157
          - type: precision_at_3
            value: 37.330000000000005
          - type: precision_at_5
            value: 24.506
          - type: recall_at_1
            value: 70.785
          - type: recall_at_10
            value: 95.15
          - type: recall_at_100
            value: 99.502
          - type: recall_at_1000
            value: 99.993
          - type: recall_at_3
            value: 87.234
          - type: recall_at_5
            value: 91.467
      - task:
          type: Clustering
        dataset:
          type: mteb/reddit-clustering
          name: MTEB RedditClustering
          config: default
          split: test
          revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
        metrics:
          - type: v_measure
            value: 52.40682777853522
      - task:
          type: Clustering
        dataset:
          type: mteb/reddit-clustering-p2p
          name: MTEB RedditClusteringP2P
          config: default
          split: test
          revision: 282350215ef01743dc01b456c7f5241fa8937f16
        metrics:
          - type: v_measure
            value: 56.61834429208595
      - task:
          type: Retrieval
        dataset:
          type: scidocs
          name: MTEB SCIDOCS
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 4.918
          - type: map_at_10
            value: 11.562
          - type: map_at_100
            value: 13.636999999999999
          - type: map_at_1000
            value: 13.918
          - type: map_at_3
            value: 8.353
          - type: map_at_5
            value: 9.878
          - type: mrr_at_1
            value: 24.3
          - type: mrr_at_10
            value: 33.914
          - type: mrr_at_100
            value: 35.079
          - type: mrr_at_1000
            value: 35.134
          - type: mrr_at_3
            value: 30.833
          - type: mrr_at_5
            value: 32.528
          - type: ndcg_at_1
            value: 24.3
          - type: ndcg_at_10
            value: 19.393
          - type: ndcg_at_100
            value: 27.471
          - type: ndcg_at_1000
            value: 32.543
          - type: ndcg_at_3
            value: 18.648
          - type: ndcg_at_5
            value: 16.064999999999998
          - type: precision_at_1
            value: 24.3
          - type: precision_at_10
            value: 9.92
          - type: precision_at_100
            value: 2.152
          - type: precision_at_1000
            value: 0.338
          - type: precision_at_3
            value: 17.1
          - type: precision_at_5
            value: 13.819999999999999
          - type: recall_at_1
            value: 4.918
          - type: recall_at_10
            value: 20.102
          - type: recall_at_100
            value: 43.69
          - type: recall_at_1000
            value: 68.568
          - type: recall_at_3
            value: 10.383000000000001
          - type: recall_at_5
            value: 13.977999999999998
      - task:
          type: STS
        dataset:
          type: mteb/sickr-sts
          name: MTEB SICK-R
          config: default
          split: test
          revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
        metrics:
          - type: cos_sim_pearson
            value: 86.02374279770862
          - type: cos_sim_spearman
            value: 80.3123278821752
          - type: euclidean_pearson
            value: 78.150387301923
          - type: euclidean_spearman
            value: 74.27020095240543
          - type: manhattan_pearson
            value: 78.00212720962597
          - type: manhattan_spearman
            value: 74.27996355049189
      - task:
          type: STS
        dataset:
          type: mteb/sts12-sts
          name: MTEB STS12
          config: default
          split: test
          revision: a0d554a64d88156834ff5ae9920b964011b16384
        metrics:
          - type: cos_sim_pearson
            value: 83.56832604166104
          - type: cos_sim_spearman
            value: 73.85172437109456
          - type: euclidean_pearson
            value: 70.77037821156355
          - type: euclidean_spearman
            value: 58.32603602271459
          - type: manhattan_pearson
            value: 70.6019035905572
          - type: manhattan_spearman
            value: 58.18758998109944
      - task:
          type: STS
        dataset:
          type: mteb/sts13-sts
          name: MTEB STS13
          config: default
          split: test
          revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
        metrics:
          - type: cos_sim_pearson
            value: 83.97624603590171
          - type: cos_sim_spearman
            value: 84.3654403570941
          - type: euclidean_pearson
            value: 77.37734191552401
          - type: euclidean_spearman
            value: 77.83492278107906
          - type: manhattan_pearson
            value: 77.38406845115612
          - type: manhattan_spearman
            value: 77.80429501178632
      - task:
          type: STS
        dataset:
          type: mteb/sts14-sts
          name: MTEB STS14
          config: default
          split: test
          revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
        metrics:
          - type: cos_sim_pearson
            value: 82.5175806484823
          - type: cos_sim_spearman
            value: 77.84074419393815
          - type: euclidean_pearson
            value: 75.31514179994578
          - type: euclidean_spearman
            value: 71.06564963155697
          - type: manhattan_pearson
            value: 75.25016497298036
          - type: manhattan_spearman
            value: 71.0503867625097
      - task:
          type: STS
        dataset:
          type: mteb/sts15-sts
          name: MTEB STS15
          config: default
          split: test
          revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
        metrics:
          - type: cos_sim_pearson
            value: 85.15312065200007
          - type: cos_sim_spearman
            value: 86.28786282283781
          - type: euclidean_pearson
            value: 69.93961446583728
          - type: euclidean_spearman
            value: 70.99565144007187
          - type: manhattan_pearson
            value: 70.06338127800244
          - type: manhattan_spearman
            value: 71.15328825585216
      - task:
          type: STS
        dataset:
          type: mteb/sts16-sts
          name: MTEB STS16
          config: default
          split: test
          revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
        metrics:
          - type: cos_sim_pearson
            value: 80.48261723093232
          - type: cos_sim_spearman
            value: 82.13997187275378
          - type: euclidean_pearson
            value: 72.01034058956992
          - type: euclidean_spearman
            value: 72.90423890320797
          - type: manhattan_pearson
            value: 71.91819389305805
          - type: manhattan_spearman
            value: 72.804333901611
      - task:
          type: STS
        dataset:
          type: mteb/sts17-crosslingual-sts
          name: MTEB STS17 (en-en)
          config: en-en
          split: test
          revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
        metrics:
          - type: cos_sim_pearson
            value: 89.89094326696411
          - type: cos_sim_spearman
            value: 89.5679328484923
          - type: euclidean_pearson
            value: 77.27326226557433
          - type: euclidean_spearman
            value: 75.44670270858582
          - type: manhattan_pearson
            value: 77.49623029933024
          - type: manhattan_spearman
            value: 75.6317127686177
      - task:
          type: STS
        dataset:
          type: mteb/sts22-crosslingual-sts
          name: MTEB STS22 (en)
          config: en
          split: test
          revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
        metrics:
          - type: cos_sim_pearson
            value: 67.03259798800852
          - type: cos_sim_spearman
            value: 66.17683868865686
          - type: euclidean_pearson
            value: 49.154524473561416
          - type: euclidean_spearman
            value: 58.82796771905756
          - type: manhattan_pearson
            value: 48.97445679282608
          - type: manhattan_spearman
            value: 58.69653501728678
      - task:
          type: STS
        dataset:
          type: mteb/stsbenchmark-sts
          name: MTEB STSBenchmark
          config: default
          split: test
          revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
        metrics:
          - type: cos_sim_pearson
            value: 84.01368632144246
          - type: cos_sim_spearman
            value: 83.64169080274549
          - type: euclidean_pearson
            value: 75.84021692605727
          - type: euclidean_spearman
            value: 74.69132304226987
          - type: manhattan_pearson
            value: 75.9627059404693
          - type: manhattan_spearman
            value: 74.83616979158057
      - task:
          type: Reranking
        dataset:
          type: mteb/scidocs-reranking
          name: MTEB SciDocsRR
          config: default
          split: test
          revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
        metrics:
          - type: map
            value: 81.63017243645893
          - type: mrr
            value: 94.79274900843528
      - task:
          type: Retrieval
        dataset:
          type: scifact
          name: MTEB SciFact
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 47.094
          - type: map_at_10
            value: 56.047000000000004
          - type: map_at_100
            value: 56.701
          - type: map_at_1000
            value: 56.742000000000004
          - type: map_at_3
            value: 53.189
          - type: map_at_5
            value: 54.464
          - type: mrr_at_1
            value: 50
          - type: mrr_at_10
            value: 57.567
          - type: mrr_at_100
            value: 58.104
          - type: mrr_at_1000
            value: 58.142
          - type: mrr_at_3
            value: 55.222
          - type: mrr_at_5
            value: 56.355999999999995
          - type: ndcg_at_1
            value: 50
          - type: ndcg_at_10
            value: 60.84
          - type: ndcg_at_100
            value: 63.983999999999995
          - type: ndcg_at_1000
            value: 65.19500000000001
          - type: ndcg_at_3
            value: 55.491
          - type: ndcg_at_5
            value: 57.51500000000001
          - type: precision_at_1
            value: 50
          - type: precision_at_10
            value: 8.366999999999999
          - type: precision_at_100
            value: 1.013
          - type: precision_at_1000
            value: 0.11199999999999999
          - type: precision_at_3
            value: 21.556
          - type: precision_at_5
            value: 14.2
          - type: recall_at_1
            value: 47.094
          - type: recall_at_10
            value: 74.239
          - type: recall_at_100
            value: 89
          - type: recall_at_1000
            value: 98.667
          - type: recall_at_3
            value: 59.606
          - type: recall_at_5
            value: 64.756
      - task:
          type: PairClassification
        dataset:
          type: mteb/sprintduplicatequestions-pairclassification
          name: MTEB SprintDuplicateQuestions
          config: default
          split: test
          revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
        metrics:
          - type: cos_sim_accuracy
            value: 99.7128712871287
          - type: cos_sim_ap
            value: 91.8391173412632
          - type: cos_sim_f1
            value: 85.23421588594704
          - type: cos_sim_precision
            value: 86.82572614107885
          - type: cos_sim_recall
            value: 83.7
          - type: dot_accuracy
            value: 99.23960396039604
          - type: dot_ap
            value: 58.07268940033783
          - type: dot_f1
            value: 58.00486618004865
          - type: dot_precision
            value: 56.49289099526066
          - type: dot_recall
            value: 59.599999999999994
          - type: euclidean_accuracy
            value: 99.62574257425743
          - type: euclidean_ap
            value: 86.31145319031712
          - type: euclidean_f1
            value: 80.12486992715921
          - type: euclidean_precision
            value: 83.51409978308027
          - type: euclidean_recall
            value: 77
          - type: manhattan_accuracy
            value: 99.62178217821783
          - type: manhattan_ap
            value: 85.96697606381338
          - type: manhattan_f1
            value: 80.24193548387099
          - type: manhattan_precision
            value: 80.89430894308943
          - type: manhattan_recall
            value: 79.60000000000001
          - type: max_accuracy
            value: 99.7128712871287
          - type: max_ap
            value: 91.8391173412632
          - type: max_f1
            value: 85.23421588594704
      - task:
          type: Clustering
        dataset:
          type: mteb/stackexchange-clustering
          name: MTEB StackExchangeClustering
          config: default
          split: test
          revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
        metrics:
          - type: v_measure
            value: 54.98955943181893
      - task:
          type: Clustering
        dataset:
          type: mteb/stackexchange-clustering-p2p
          name: MTEB StackExchangeClusteringP2P
          config: default
          split: test
          revision: 815ca46b2622cec33ccafc3735d572c266efdb44
        metrics:
          - type: v_measure
            value: 32.72837687387049
      - task:
          type: Reranking
        dataset:
          type: mteb/stackoverflowdupquestions-reranking
          name: MTEB StackOverflowDupQuestions
          config: default
          split: test
          revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
        metrics:
          - type: map
            value: 51.02207528482775
          - type: mrr
            value: 51.8842044393515
      - task:
          type: Summarization
        dataset:
          type: mteb/summeval
          name: MTEB SummEval
          config: default
          split: test
          revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
        metrics:
          - type: cos_sim_pearson
            value: 30.250596893094876
          - type: cos_sim_spearman
            value: 30.609457706010158
          - type: dot_pearson
            value: 19.739579843052162
          - type: dot_spearman
            value: 20.27834051930579
      - task:
          type: Retrieval
        dataset:
          type: trec-covid
          name: MTEB TRECCOVID
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 0.187
          - type: map_at_10
            value: 1.239
          - type: map_at_100
            value: 6.388000000000001
          - type: map_at_1000
            value: 15.507000000000001
          - type: map_at_3
            value: 0.5
          - type: map_at_5
            value: 0.712
          - type: mrr_at_1
            value: 70
          - type: mrr_at_10
            value: 83
          - type: mrr_at_100
            value: 83
          - type: mrr_at_1000
            value: 83
          - type: mrr_at_3
            value: 81.667
          - type: mrr_at_5
            value: 82.667
          - type: ndcg_at_1
            value: 65
          - type: ndcg_at_10
            value: 56.57600000000001
          - type: ndcg_at_100
            value: 42.054
          - type: ndcg_at_1000
            value: 38.269999999999996
          - type: ndcg_at_3
            value: 63.134
          - type: ndcg_at_5
            value: 58.792
          - type: precision_at_1
            value: 70
          - type: precision_at_10
            value: 59.8
          - type: precision_at_100
            value: 42.5
          - type: precision_at_1000
            value: 17.304
          - type: precision_at_3
            value: 67.333
          - type: precision_at_5
            value: 62.4
          - type: recall_at_1
            value: 0.187
          - type: recall_at_10
            value: 1.529
          - type: recall_at_100
            value: 9.673
          - type: recall_at_1000
            value: 35.807
          - type: recall_at_3
            value: 0.5459999999999999
          - type: recall_at_5
            value: 0.8130000000000001
      - task:
          type: Retrieval
        dataset:
          type: webis-touche2020
          name: MTEB Touche2020
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 1.646
          - type: map_at_10
            value: 6.569999999999999
          - type: map_at_100
            value: 11.530999999999999
          - type: map_at_1000
            value: 13.009
          - type: map_at_3
            value: 3.234
          - type: map_at_5
            value: 4.956
          - type: mrr_at_1
            value: 18.367
          - type: mrr_at_10
            value: 35.121
          - type: mrr_at_100
            value: 36.142
          - type: mrr_at_1000
            value: 36.153
          - type: mrr_at_3
            value: 29.252
          - type: mrr_at_5
            value: 33.434999999999995
          - type: ndcg_at_1
            value: 16.326999999999998
          - type: ndcg_at_10
            value: 17.336
          - type: ndcg_at_100
            value: 28.925
          - type: ndcg_at_1000
            value: 41.346
          - type: ndcg_at_3
            value: 16.131999999999998
          - type: ndcg_at_5
            value: 18.107
          - type: precision_at_1
            value: 18.367
          - type: precision_at_10
            value: 16.531000000000002
          - type: precision_at_100
            value: 6.449000000000001
          - type: precision_at_1000
            value: 1.451
          - type: precision_at_3
            value: 17.687
          - type: precision_at_5
            value: 20
          - type: recall_at_1
            value: 1.646
          - type: recall_at_10
            value: 12.113
          - type: recall_at_100
            value: 40.261
          - type: recall_at_1000
            value: 77.878
          - type: recall_at_3
            value: 4.181
          - type: recall_at_5
            value: 7.744
      - task:
          type: Classification
        dataset:
          type: mteb/toxic_conversations_50k
          name: MTEB ToxicConversationsClassification
          config: default
          split: test
          revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
        metrics:
          - type: accuracy
            value: 66.61500000000001
          - type: ap
            value: 11.70707762285034
          - type: f1
            value: 50.53259935502312
      - task:
          type: Classification
        dataset:
          type: mteb/tweet_sentiment_extraction
          name: MTEB TweetSentimentExtractionClassification
          config: default
          split: test
          revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
        metrics:
          - type: accuracy
            value: 54.89247311827958
          - type: f1
            value: 55.044186334629586
      - task:
          type: Clustering
        dataset:
          type: mteb/twentynewsgroups-clustering
          name: MTEB TwentyNewsgroupsClustering
          config: default
          split: test
          revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
        metrics:
          - type: v_measure
            value: 46.95851882042766
      - task:
          type: PairClassification
        dataset:
          type: mteb/twittersemeval2015-pairclassification
          name: MTEB TwitterSemEval2015
          config: default
          split: test
          revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
        metrics:
          - type: cos_sim_accuracy
            value: 84.01978899684092
          - type: cos_sim_ap
            value: 68.10404793439619
          - type: cos_sim_f1
            value: 63.93145891154821
          - type: cos_sim_precision
            value: 58.905937291527685
          - type: cos_sim_recall
            value: 69.89445910290237
          - type: dot_accuracy
            value: 77.78506288370984
          - type: dot_ap
            value: 38.55636213255057
          - type: dot_f1
            value: 44.6866485013624
          - type: dot_precision
            value: 34.07202216066482
          - type: dot_recall
            value: 64.90765171503958
          - type: euclidean_accuracy
            value: 82.94093103653812
          - type: euclidean_ap
            value: 63.65596102723866
          - type: euclidean_f1
            value: 61.444903916322055
          - type: euclidean_precision
            value: 56.994584837545126
          - type: euclidean_recall
            value: 66.64907651715039
          - type: manhattan_accuracy
            value: 82.99457590749239
          - type: manhattan_ap
            value: 63.77653539498376
          - type: manhattan_f1
            value: 61.48299483235189
          - type: manhattan_precision
            value: 56.455528580887226
          - type: manhattan_recall
            value: 67.4934036939314
          - type: max_accuracy
            value: 84.01978899684092
          - type: max_ap
            value: 68.10404793439619
          - type: max_f1
            value: 63.93145891154821
      - task:
          type: PairClassification
        dataset:
          type: mteb/twitterurlcorpus-pairclassification
          name: MTEB TwitterURLCorpus
          config: default
          split: test
          revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
        metrics:
          - type: cos_sim_accuracy
            value: 87.75177552683665
          - type: cos_sim_ap
            value: 83.75899853399007
          - type: cos_sim_f1
            value: 76.25022931572188
          - type: cos_sim_precision
            value: 72.83241045769958
          - type: cos_sim_recall
            value: 80.00461964890668
          - type: dot_accuracy
            value: 81.8197694725812
          - type: dot_ap
            value: 67.6851675345571
          - type: dot_f1
            value: 64.04501820589209
          - type: dot_precision
            value: 56.17233770758332
          - type: dot_recall
            value: 74.48413920542039
          - type: euclidean_accuracy
            value: 83.3003454030349
          - type: euclidean_ap
            value: 72.80186670461116
          - type: euclidean_f1
            value: 65.38000218078727
          - type: euclidean_precision
            value: 61.92082616179002
          - type: euclidean_recall
            value: 69.24853711117956
          - type: manhattan_accuracy
            value: 83.32169053440447
          - type: manhattan_ap
            value: 72.8243559753097
          - type: manhattan_f1
            value: 65.45939901157966
          - type: manhattan_precision
            value: 61.58284124075205
          - type: manhattan_recall
            value: 69.85679088389283
          - type: max_accuracy
            value: 87.75177552683665
          - type: max_ap
            value: 83.75899853399007
          - type: max_f1
            value: 76.25022931572188



Finetuner helps you create experiments to improve embeddings on search tasks and accompanies you through the last mile of performance tuning for neural search applications.

The text embedding set trained by Jina AI, Finetuner team.

Intended Usage & Model Info

jina-embedding-l-en-v1 is a language model that has been trained using Jina AI's Linnaeus-Clean dataset. This dataset consists of 380 million sentence pairs, including query-document pairs, drawn from various domains and selected through a thorough cleaning process. The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs.

The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.

With 330 million parameters, the model supports single-GPU inference while delivering better performance than our small and base models. Additionally, we provide the following smaller sizes: jina-embedding-t-en-v1 (14m parameters), jina-embedding-s-en-v1 (35m), and jina-embedding-b-en-v1 (110m); see the parameter table in the Metrics section below.

Data & Parameters

Please check out our technical blog.

Metrics

We compared the model against all-minilm-l6-v2 and all-mpnet-base-v2 from SBERT and text-embedding-ada-002 from OpenAI:

| Name | param | dimension |
| --- | --- | --- |
| all-minilm-l6-v2 | 23m | 384 |
| all-mpnet-base-v2 | 110m | 768 |
| ada-embedding-002 | Unknown/OpenAI API | 1536 |
| jina-embedding-t-en-v1 | 14m | 312 |
| jina-embedding-s-en-v1 | 35m | 512 |
| jina-embedding-b-en-v1 | 110m | 768 |
| jina-embedding-l-en-v1 | 330m | 1024 |
| Name | STS12 | STS13 | STS14 | STS15 | STS16 | STS17 | TRECCOVID | Quora | SciFact |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| all-minilm-l6-v2 | 0.724 | 0.806 | 0.756 | 0.854 | 0.79 | 0.876 | 0.473 | 0.876 | 0.645 |
| all-mpnet-base-v2 | 0.726 | 0.835 | 0.78 | 0.857 | 0.8 | 0.906 | 0.513 | 0.875 | 0.656 |
| ada-embedding-002 | 0.698 | 0.833 | 0.761 | 0.861 | 0.86 | 0.903 | 0.685 | 0.876 | 0.726 |
| jina-embedding-t-en-v1 | 0.717 | 0.773 | 0.731 | 0.829 | 0.777 | 0.860 | 0.482 | 0.840 | 0.522 |
| jina-embedding-s-en-v1 | 0.743 | 0.786 | 0.738 | 0.837 | 0.80 | 0.875 | 0.523 | 0.857 | 0.524 |
| jina-embedding-b-en-v1 | 0.751 | 0.809 | 0.761 | 0.856 | 0.812 | 0.890 | 0.606 | 0.876 | 0.594 |
| jina-embedding-l-en-v1 | 0.739 | 0.844 | 0.778 | 0.863 | 0.821 | 0.896 | 0.566 | 0.882 | 0.608 |
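
The scores above follow the MTEB evaluation protocol. As a rough illustration of how such numbers can be reproduced, here is a minimal sketch that runs a single MTEB task with the mteb package; it assumes the mteb and sentence-transformers packages are installed, and the task list and output folder are placeholders rather than the exact setup used for this card.

```python
# Minimal sketch (not the exact script behind the table above), assuming the
# `mteb` and `sentence-transformers` packages are installed.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('jinaai/jina-embedding-l-en-v1')

# Run a single STS task; add more task names to cover the other columns.
evaluation = MTEB(tasks=['STS12'])
evaluation.run(model, output_folder='results/jina-embedding-l-en-v1')  # placeholder path
```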

Usage

Use with Jina AI Finetuner

```python
!pip install finetuner
import finetuner

model = finetuner.build_model('jinaai/jina-embedding-l-en-v1')
embeddings = finetuner.encode(
    model=model,
    data=['how is the weather today', 'What is the current weather like today?']
)
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```

Use with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['how is the weather today', 'What is the current weather like today?']

model = SentenceTransformer('jinaai/jina-embedding-l-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
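
Since information retrieval is one of the listed use cases, the following is a minimal sketch of semantic search with the same embeddings, using sentence-transformers utilities; the corpus and query below are made-up examples for illustration only.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jinaai/jina-embedding-l-en-v1')

# Made-up documents and query, purely for illustration.
corpus = [
    'The weather today is sunny with a light breeze.',
    'Embedding models map sentences to dense vectors.',
    'Jina AI builds tools for neural search applications.',
]
query = 'What is the current weather like today?'

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar documents by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```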

Fine-tuning

Please consider using Finetuner.
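
As a very rough sketch of what a Finetuner run might look like: the dataset name, loss, and epoch count below are placeholder assumptions, not an official recipe, and the exact arguments accepted depend on your Finetuner version and Jina AI account.

```python
# Hypothetical sketch of a Finetuner fine-tuning run; train_data, loss, and
# epochs are placeholder assumptions, not values taken from this model card.
import finetuner

finetuner.login()  # requires a Jina AI account

run = finetuner.fit(
    model='jinaai/jina-embedding-l-en-v1',  # assumption: accepted by your Finetuner version
    train_data='my-org/my-training-data',   # placeholder dataset name
    loss='TripletMarginLoss',               # placeholder loss
    epochs=3,
)
```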

Plans

  1. The development of jina-embedding-s-en-v2 is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
  2. We are currently working on a bilingual embedding model that combines English with a second language. The upcoming German-English models will be called jina-embedding-s/b/l-de-v1.

Contact

Join our Discord community and chat with other community members about ideas.

Citation

If you find Jina Embeddings useful in your research, please cite the following paper:

```bibtex
@misc{günther2023jina,
      title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models}, 
      author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
      year={2023},
      eprint={2307.11224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```