
SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
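
Because pooling uses the CLS token and the final Normalize() module L2-normalizes the output, cosine similarity and dot product produce identical scores for this model. A quick check (a minimal sketch; it assumes the library is installed as shown in the Usage section below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("magnifi/bge-small-en-v1.5-ft-orc-0806")
embeddings = model.encode(["what is my portfolio 3 year cagr?"])

# Each embedding has unit length, so cosine similarity equals the dot product.
print(np.linalg.norm(embeddings, axis=1))  # ~[1.0]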

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("magnifi/bge-small-en-v1.5-ft-orc-0806")
# Run inference
sentences = [
    'Which of my investments are projected to generate the most return?',
    '[{"get_portfolio(None)": "portfolio"}, {"get_expected_attribute(\'portfolio\',[\'returns\'])": "portfolio"}, {"sort(\'portfolio\',\'returns\',\'desc\')": "portfolio"}]',
    '[{"get_portfolio(None)": "portfolio"}, {"factor_contribution(\'portfolio\',\'<DATES>\',\'asset_class\',\'us equity\',\'returns\')": "portfolio"}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.6644
cosine_accuracy@3 0.8288
cosine_accuracy@5 0.863
cosine_accuracy@10 0.9178
cosine_precision@1 0.6644
cosine_precision@3 0.2763
cosine_precision@5 0.1726
cosine_precision@10 0.0918
cosine_recall@1 0.0185
cosine_recall@3 0.023
cosine_recall@5 0.024
cosine_recall@10 0.0255
cosine_ndcg@10 0.1737
cosine_mrr@10 0.748
cosine_map@100 0.0209
dot_accuracy@1 0.6644
dot_accuracy@3 0.8288
dot_accuracy@5 0.863
dot_accuracy@10 0.9178
dot_precision@1 0.6644
dot_precision@3 0.2763
dot_precision@5 0.1726
dot_precision@10 0.0918
dot_recall@1 0.0185
dot_recall@3 0.023
dot_recall@5 0.024
dot_recall@10 0.0255
dot_ndcg@10 0.1737
dot_mrr@10 0.748
dot_map@100 0.0209
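
The cosine and dot-product rows are identical because the embeddings are unit-normalized (see the Normalize() check above). Metrics of this kind can be reproduced with the library's InformationRetrievalEvaluator; a toy sketch with hypothetical query and document ids (the actual evaluation set is not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("magnifi/bge-small-en-v1.5-ft-orc-0806")

# Hypothetical evaluation data: query id -> text, document id -> text,
# and the set of relevant document ids for each query.
queries = {"q1": "what is my 1 year rate of return"}
corpus = {
    "d1": '[{"get_portfolio(None)": "portfolio"}, {"get_attribute(\'portfolio\',[\'gains\'],\'\')": "portfolio"}, {"sort(\'portfolio\',\'gains\',\'desc\')": "portfolio"}]',
    "d2": '[{"get_portfolio(None)": "portfolio"}, {"factor_contribution(\'portfolio\',\'<DATES>\',\'asset_class\',\'us equity\',\'returns\')": "portfolio"}]',
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)  # accuracy@k, precision@k, recall@k, NDCG@k, MRR@k, MAP@k
print(results)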

Training Details

Training Dataset

Unnamed Dataset

  • Size: 723 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    sentence_0 (string): min 5 tokens, mean 11.8 tokens, max 26 tokens
    sentence_1 (string): min 24 tokens, mean 84.41 tokens, max 194 tokens
  • Samples:
    sentence_0 sentence_1
    what is my portfolio 3 year cagr? [{"get_portfolio(None)": "portfolio"}, {"get_attribute('portfolio',['gains'],'')": "portfolio"}, {"sort('portfolio','gains','desc')": "portfolio"}]
    what is my 1 year rate of return [{"get_portfolio(None)": "portfolio"}, {"get_attribute('portfolio',['gains'],'')": "portfolio"}, {"sort('portfolio','gains','desc')": "portfolio"}]
    show backtest of my performance this year? [{"get_portfolio(None)": "portfolio"}, {"get_attribute('portfolio',['gains'],'')": "portfolio"}, {"sort('portfolio','gains','desc')": "portfolio"}]
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 6
  • multi_dataset_batch_sampler: round_robin
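
Putting the dataset, loss, and these hyperparameters together, training can be reproduced roughly as follows. MultipleNegativesRankingLoss treats each (question, plan) pair as a positive and every other plan in the batch as an in-batch negative. This is a hedged sketch: the single training row is illustrative, the output directory name is made up, and the card's step-wise evaluation (eval_strategy: steps with an IR evaluator) is omitted for brevity.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# (question, serialized tool-call plan) pairs; the real dataset has 723 rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["what is my portfolio 3 year cagr?"],
    "sentence_1": ['[{"get_portfolio(None)": "portfolio"}, {"get_attribute(\'portfolio\',[\'gains\'],\'\')": "portfolio"}, {"sort(\'portfolio\',\'gains\',\'desc\')": "portfolio"}]'],
})

# scale=20.0 and cosine similarity match the loss parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-en-v1.5-ft",  # hypothetical path
    num_train_epochs=6,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()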

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_map@100
0.0274 2 0.0136
0.0548 4 0.0137
0.0822 6 0.0139
0.1096 8 0.0142
0.1370 10 0.0145
0.1644 12 0.0144
0.1918 14 0.0147
0.2192 16 0.0151
0.2466 18 0.0153
0.2740 20 0.0158
0.3014 22 0.0165
0.3288 24 0.0163
0.3562 26 0.0167
0.3836 28 0.0171
0.4110 30 0.0175
0.4384 32 0.0177
0.4658 34 0.0180
0.4932 36 0.0183
0.5205 38 0.0185
0.5479 40 0.0186
0.5753 42 0.0186
0.6027 44 0.0186
0.6301 46 0.0186
0.6575 48 0.0187
0.6849 50 0.0189
0.7123 52 0.0190
0.7397 54 0.0189
0.7671 56 0.0188
0.7945 58 0.0189
0.8219 60 0.0192
0.8493 62 0.0193
0.8767 64 0.0194
0.9041 66 0.0194
0.9315 68 0.0197
0.9589 70 0.0200
0.9863 72 0.0201
1.0 73 0.0202
1.0137 74 0.0203
1.0411 76 0.0202
1.0685 78 0.0203
1.0959 80 0.0205
1.1233 82 0.0207
1.1507 84 0.0207
1.1781 86 0.0206
1.2055 88 0.0205
1.2329 90 0.0205
1.2603 92 0.0205
1.2877 94 0.0204
1.3151 96 0.0204
1.3425 98 0.0205
1.3699 100 0.0205
1.3973 102 0.0205
1.4247 104 0.0205
1.4521 106 0.0204
1.4795 108 0.0205
1.5068 110 0.0208
1.5342 112 0.0206
1.5616 114 0.0205
1.5890 116 0.0206
1.6164 118 0.0205
1.6438 120 0.0205
1.6712 122 0.0205
1.6986 124 0.0207
1.7260 126 0.0207
1.7534 128 0.0207
1.7808 130 0.0205
1.8082 132 0.0206
1.8356 134 0.0208
1.8630 136 0.0206
1.8904 138 0.0206
1.9178 140 0.0206
1.9452 142 0.0205
1.9726 144 0.0206
2.0 146 0.0207
2.0274 148 0.0209

Framework Versions

  • Python: 3.10.9
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
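
To reproduce this environment, the listed versions can be pinned (a sketch; pick the PyTorch build matching your CUDA or CPU setup):

pip install sentence-transformers==3.0.1 transformers==4.44.0 torch==2.4.0 accelerate==0.33.0 datasets==2.20.0 tokenizers==0.19.1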

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}