
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
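
For reference, this pipeline (a BERT encoder, CLS-token pooling, then L2 normalization) can be approximated with plain transformers as in the sketch below. This is illustrative only; in practice, load the model through SentenceTransformer as shown in the Usage section.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NickyNicky/bge-base-financial-matryoshka")
encoder = AutoModel.from_pretrained("NickyNicky/bge-base-financial-matryoshka")

inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # (batch, seq_len, 768)
cls_embedding = token_embeddings[:, 0]         # CLS-token pooling, as configured above
embedding = F.normalize(cls_embedding, dim=1)  # the Normalize() module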

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Non-GAAP earnings from operations and non-GAAP operating profit margin consist of earnings from operations or earnings from operations as a percentage of net revenue excluding the items mentioned above and charges relating to the amortization of intangible assets, goodwill impairment, transformation costs and acquisition, disposition and other related charges. Hewlett Packard Enterprise excludes these items because they are non-cash expenses, are significantly impacted by the timing and magnitude of acquisitions, and are inconsistent in amount and frequency.',
    "What specific charges are excluded from Hewlett Packard Enterprise's non-GAAP operating profit margin and why?",
    'How many shares were outstanding at the beginning of 2023 and what was their aggregate intrinsic value?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
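
Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to the smaller trained dimensions (512, 256, 128, or 64) with only a modest quality drop (see Evaluation). A minimal sketch using the truncate_dim argument available in sentence-transformers 2.7+; the choice of 256 here is just an example:

# Load the model so it returns truncated 256-dimensional embeddings
model_256 = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka", truncate_dim=256)
embeddings_256 = model_256.encode(sentences)
print(embeddings_256.shape)
# (3, 256)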

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8571
cosine_accuracy@5 0.8871
cosine_accuracy@10 0.9314
cosine_precision@1 0.7157
cosine_precision@3 0.2857
cosine_precision@5 0.1774
cosine_precision@10 0.0931
cosine_recall@1 0.7157
cosine_recall@3 0.8571
cosine_recall@5 0.8871
cosine_recall@10 0.9314
cosine_ndcg@10 0.8275
cosine_mrr@10 0.794
cosine_map@100 0.7969

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7143
cosine_accuracy@3 0.8571
cosine_accuracy@5 0.8871
cosine_accuracy@10 0.9314
cosine_precision@1 0.7143
cosine_precision@3 0.2857
cosine_precision@5 0.1774
cosine_precision@10 0.0931
cosine_recall@1 0.7143
cosine_recall@3 0.8571
cosine_recall@5 0.8871
cosine_recall@10 0.9314
cosine_ndcg@10 0.8268
cosine_mrr@10 0.793
cosine_map@100 0.7958

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8514
cosine_accuracy@5 0.8829
cosine_accuracy@10 0.93
cosine_precision@1 0.7157
cosine_precision@3 0.2838
cosine_precision@5 0.1766
cosine_precision@10 0.093
cosine_recall@1 0.7157
cosine_recall@3 0.8514
cosine_recall@5 0.8829
cosine_recall@10 0.93
cosine_ndcg@10 0.8255
cosine_mrr@10 0.7919
cosine_map@100 0.7946

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7143
cosine_accuracy@3 0.8429
cosine_accuracy@5 0.8743
cosine_accuracy@10 0.9214
cosine_precision@1 0.7143
cosine_precision@3 0.281
cosine_precision@5 0.1749
cosine_precision@10 0.0921
cosine_recall@1 0.7143
cosine_recall@3 0.8429
cosine_recall@5 0.8743
cosine_recall@10 0.9214
cosine_ndcg@10 0.8203
cosine_mrr@10 0.7879
cosine_map@100 0.7909

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6829
cosine_accuracy@3 0.81
cosine_accuracy@5 0.85
cosine_accuracy@10 0.9043
cosine_precision@1 0.6829
cosine_precision@3 0.27
cosine_precision@5 0.17
cosine_precision@10 0.0904
cosine_recall@1 0.6829
cosine_recall@3 0.81
cosine_recall@5 0.85
cosine_recall@10 0.9043
cosine_ndcg@10 0.7926
cosine_mrr@10 0.7571
cosine_map@100 0.7607
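
Each table above reports the same retrieval benchmark with embeddings truncated to the named Matryoshka dimension (the cosine_map@100 values match the final row of the Training Logs below). Metrics like these are typically computed with sentence-transformers' InformationRetrievalEvaluator; a minimal sketch with hypothetical toy data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data: query ids -> text, corpus ids -> text,
# and the set of relevant corpus ids for each query.
queries = {"q1": "What principles do HP's Consolidated Financial Statements adhere to?"}
corpus = {"d1": "HP's Consolidated Financial Statements are prepared in accordance with GAAP."}
relevant_docs = {"q1": {"d1"}}

# Truncate to 64 dims to reproduce the dim_64 table; use 768, 512, etc. for the others.
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka", truncate_dim=64)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_64")
metrics = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100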

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min: 6 tokens, mean: 46.8 tokens, max: 512 tokens
    • anchor: string, min: 8 tokens, mean: 20.89 tokens, max: 51 tokens
  • Samples:
    • positive: Retail sales mix by product type for company-operated stores shows beverages at 74%, food at 22%, and other items at 4%.
      anchor: What are the primary products sold in Starbucks company-operated stores?
    • positive: The pre-tax adjustment for transformation costs was $136 in 2021 and $111 in 2020. Transformation costs primarily include costs related to store and business closure costs and third party professional consulting fees associated with business transformation and cost saving initiatives.
      anchor: What was the purpose of pre-tax adjustments for transformation costs by The Kroger Co.?
    • positive: HP's Consolidated Financial Statements are prepared in accordance with United States generally accepted accounting principles (GAAP).
      anchor: What principles do HP's Consolidated Financial Statements adhere to?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
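
In code, this configuration corresponds roughly to the sketch below, where model is assumed to be the SentenceTransformer being fine-tuned (per-dimension weights of 1 are the defaults):

from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# In-batch-negatives ranking loss, applied at every Matryoshka dimension
# with equal weight, per the parameters listed above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])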
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 40
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
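
Taken together, these non-default settings correspond roughly to the following SentenceTransformerTrainingArguments sketch (the output_dir value is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=40,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)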

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 40
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.9114 9 - 0.7311 0.7527 0.7618 0.6911 0.7612
1.0127 10 1.9734 - - - - -
1.9241 19 - 0.7638 0.7748 0.7800 0.7412 0.7836
2.0253 20 0.8479 - - - - -
2.9367 29 - 0.7775 0.7842 0.7902 0.7473 0.7912
3.0380 30 0.524 - - - - -
3.9494 39 - 0.7831 0.7860 0.7915 0.7556 0.7939
4.0506 40 0.3826 - - - - -
4.9620 49 - 0.7896 0.7915 0.7927 0.7616 0.7983
5.0633 50 0.3165 - - - - -
5.9747 59 - 0.7925 0.7946 0.7943 0.7603 0.7978
6.0759 60 0.2599 - - - - -
6.9873 69 - 0.7918 0.7949 0.7951 0.7608 0.7976
7.0886 70 0.2424 - - - - -
8.0 79 - 0.7925 0.7956 0.7959 0.7612 0.7989
8.1013 80 0.2243 - - - - -
8.9114 88 - 0.7927 0.7956 0.7961 0.7610 0.7983
9.1139 90 0.2222 0.7909 0.7946 0.7958 0.7607 0.7969

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.2.0+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
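
To approximate this environment, the pinned versions above can be installed along these lines (a sketch; the matching PyTorch/CUDA build depends on your platform):

pip install sentence-transformers==3.0.1 transformers==4.41.2 accelerate==0.31.0 datasets==2.19.1 tokenizers==0.19.1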

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}