SentenceTransformer

This is a sentence-transformers model. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 33.4M parameters (F32, safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
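
The pooling module takes the CLS token embedding, and the final Normalize() module rescales every vector to unit length. One practical consequence: for unit-length vectors, a plain dot product already equals the cosine similarity. A minimal sketch illustrating this (the sentences are arbitrary examples):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v1")
emb = model.encode(["red onion and paprika", "spiced potatoes & fenugreek"])

# Normalize() guarantees unit-length embeddings ...
print(np.linalg.norm(emb, axis=1))  # ~[1.0, 1.0]
# ... so the dot product equals the cosine similarity
print(float(emb[0] @ emb[1]))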

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v1")
# Run inference
sentences = [
    'Recipes that can be made using red onion and paprika: Breakfast Potatoes with Sausage, Peri Peri Chicken Pasta, Scrambled Egg Curry, Chili Mac & Cheese, Tomato Chicken Curry',
    'What are some ways to use red onion and paprika in recipes?',
    'Are there dishes that closely resemble spiced potatoes & fenugreek (aloo methi)?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
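
Because the model was trained with MatryoshkaLoss over the dimensions 384, 256, 128, 64, and 32 (see Training Details below), embeddings can also be truncated to one of the smaller dimensions with only a modest quality drop. A minimal sketch using the library's truncate_dim option (128 is an arbitrary pick among the trained dimensions):

from sentence_transformers import SentenceTransformer

# Truncate every embedding to its first 128 dimensions at load time
model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v1", truncate_dim=128)

embeddings = model.encode([
    "What are some ways to use red onion and paprika in recipes?",
    "Are there dishes that closely resemble spiced potatoes & fenugreek (aloo methi)?",
])
print(embeddings.shape)
# (2, 128)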

Evaluation

Metrics

The five tables in this section report the same retrieval metrics at each Matryoshka embedding dimension, in the order 384, 256, 128, 64, 32; the labels below follow the dim_* columns in the training logs.

Information Retrieval (dim_384)

Metric Value
cosine_accuracy@1 0.9704
cosine_accuracy@3 0.9926
cosine_accuracy@5 0.9988
cosine_accuracy@10 0.9994
cosine_precision@1 0.9704
cosine_precision@3 0.3309
cosine_precision@5 0.1998
cosine_precision@10 0.0999
cosine_recall@1 0.9704
cosine_recall@3 0.9926
cosine_recall@5 0.9988
cosine_recall@10 0.9994
cosine_ndcg@10 0.9865
cosine_mrr@10 0.9822
cosine_map@100 0.9822

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.9729
cosine_accuracy@3 0.9932
cosine_accuracy@5 0.9988
cosine_accuracy@10 0.9994
cosine_precision@1 0.9729
cosine_precision@3 0.3311
cosine_precision@5 0.1998
cosine_precision@10 0.0999
cosine_recall@1 0.9729
cosine_recall@3 0.9932
cosine_recall@5 0.9988
cosine_recall@10 0.9994
cosine_ndcg@10 0.9876
cosine_mrr@10 0.9836
cosine_map@100 0.9836

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.9723
cosine_accuracy@3 0.9945
cosine_accuracy@5 0.9994
cosine_accuracy@10 0.9994
cosine_precision@1 0.9723
cosine_precision@3 0.3315
cosine_precision@5 0.1999
cosine_precision@10 0.0999
cosine_recall@1 0.9723
cosine_recall@3 0.9945
cosine_recall@5 0.9994
cosine_recall@10 0.9994
cosine_ndcg@10 0.9873
cosine_mrr@10 0.9833
cosine_map@100 0.9833

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.9704
cosine_accuracy@3 0.9945
cosine_accuracy@5 0.9994
cosine_accuracy@10 0.9994
cosine_precision@1 0.9704
cosine_precision@3 0.3315
cosine_precision@5 0.1999
cosine_precision@10 0.0999
cosine_recall@1 0.9704
cosine_recall@3 0.9945
cosine_recall@5 0.9994
cosine_recall@10 0.9994
cosine_ndcg@10 0.9867
cosine_mrr@10 0.9824
cosine_map@100 0.9824

Information Retrieval (dim_32)

Metric Value
cosine_accuracy@1 0.971
cosine_accuracy@3 0.9951
cosine_accuracy@5 0.9994
cosine_accuracy@10 0.9994
cosine_precision@1 0.971
cosine_precision@3 0.3317
cosine_precision@5 0.1999
cosine_precision@10 0.0999
cosine_recall@1 0.971
cosine_recall@3 0.9951
cosine_recall@5 0.9994
cosine_recall@10 0.9994
cosine_ndcg@10 0.9873
cosine_mrr@10 0.9832
cosine_map@100 0.9832
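
These tables are produced by the library's standard retrieval evaluator. The evaluation queries and corpus are not published with this card, so the following is only a sketch of how such metrics are computed, using toy stand-in data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v1")

# Toy stand-ins for the (unpublished) evaluation queries, corpus, and labels
queries = {"q1": "What are some ways to use red onion and paprika in recipes?"}
corpus = {
    "d1": "Recipes that can be made using red onion and paprika: Breakfast Potatoes with Sausage, Peri Peri Chicken Pasta, Scrambled Egg Curry, Chili Mac & Cheese, Tomato Chicken Curry",
    "d2": "Recipes that can be made using roasted semolina/bombay rava and saffron: Rashmi's Kesari Bath, Pineapple Kesari Bath",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
metrics = evaluator(model)
print(metrics)  # includes keys such as "demo_cosine_map@100"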

Training Details

Training Dataset

Unnamed Dataset

  • Size: 14,593 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 11 tokens, mean: 53.46 tokens, max: 512 tokens
    • anchor: string; min: 7 tokens, mean: 15.83 tokens, max: 32 tokens
  • Samples:
    • positive: Calories information of Hyderabadi Chicken Masala, based on different serving sizes: Serving 1 - 345 calories, Serving 2 - 580 calories, Serving 3 - 1220 calories, Serving 4 - 1450 calories
      anchor: What’s the calorie content of Hyderabadi Chicken Masala?
    • positive: Recipes that can be made using dried herb mix and onion powder: Chorizo Queso Soup, Cheesy Chicken & Broccoli
      anchor: What are some food items made using dried herb mix and onion powder?
    • positive: Recipes that can be made using roasted semolina/bombay rava and saffron: Rashmi's Kesari Bath, Pineapple Kesari Bath
      anchor: What recipes have roasted semolina/bombay rava and saffron in them?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64,
            32
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    
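In code, that configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch mirroring the parameters above (the model handle is the published checkpoint; for real training you would start from the base model instead):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v1")

# Apply the ranking loss at every trained dimension, equally weighted
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64, 32],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # -1 = train on all dimensions at every step
)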

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 16
  • learning_rate: 1e-05
  • num_train_epochs: 20
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
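
As a sketch, these map onto SentenceTransformerTrainingArguments as follows; output_dir is a hypothetical path, and save_strategy="epoch" is an assumption added so load_best_model_at_end can pair with the epoch-level evaluation:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/embedding-v1",  # hypothetical
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption; must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)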

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_32_cosine_map@100 dim_384_cosine_map@100 dim_64_cosine_map@100
0.3501 10 0.0066 - - - - -
0.7002 20 0.0056 - - - - -
0.9803 28 - 0.9746 0.9771 0.9776 0.9758 0.9763
1.0503 30 0.0057 - - - - -
1.4004 40 0.0048 - - - - -
1.7505 50 0.0039 - - - - -
1.9956 57 - 0.9783 0.9787 0.9815 0.9788 0.9793
2.1007 60 0.0046 - - - - -
2.4508 70 0.0035 - - - - -
2.8009 80 0.0028 - - - - -
2.9759 85 - 0.9818 0.9811 0.9836 0.9803 0.9823
3.1510 90 0.0036 - - - - -
3.5011 100 0.0033 - - - - -
3.8512 110 0.0026 - - - - -
3.9912 114 - 0.9814 0.9818 0.9844 0.9814 0.9821
4.2013 120 0.0025 - - - - -
4.5514 130 0.003 - - - - -
4.9015 140 0.0027 - - - - -
4.9716 142 - 0.9825 0.9819 0.9844 0.9823 0.9825
5.2516 150 0.0024 - - - - -
5.6018 160 0.0023 - - - - -
5.9519 170 0.0024 - - - - -
5.9869 171 - 0.9831 0.9826 0.9846 0.9818 0.9831
6.3020 180 0.0025 - - - - -
6.6521 190 0.0025 - - - - -
6.9672 199 - 0.9830 0.9825 0.9844 0.9823 0.9831
7.0022 200 0.0019 - - - - -
7.3523 210 0.0022 - - - - -
7.7024 220 0.0026 - - - - -
7.9825 228 - 0.9828 0.9825 0.9836 0.9821 0.9821
8.0525 230 0.0022 - - - - -
8.4026 240 0.0021 - - - - -
8.7527 250 0.0021 - - - - -
8.9978 257 - 0.9827 0.9826 0.9848 0.9827 0.9827
9.1028 260 0.0025 - - - - -
9.4530 270 0.0022 - - - - -
9.8031 280 0.0019 - - - - -
9.9781 285 - 0.9832 0.9833 0.9858 0.9825 0.9834
10.1532 290 0.0021 - - - - -
10.5033 300 0.0019 - - - - -
10.8534 310 0.0024 - - - - -
10.9934 314 - 0.9830 0.9827 0.9850 0.9825 0.9829
11.2035 320 0.0017 - - - - -
11.5536 330 0.0017 - - - - -
11.9037 340 0.0018 - - - - -
11.9737 342 - 0.9827 0.9835 0.9841 0.9826 0.9827
12.2538 350 0.0018 - - - - -
12.6039 360 0.0018 - - - - -
12.9540 370 0.0023 - - - - -
12.9891 371 - 0.9828 0.9834 0.9832 0.9826 0.9823
13.3042 380 0.0017 - - - - -
13.6543 390 0.0018 - - - - -
13.9694 399 - 0.9830 0.9831 0.9838 0.9820 0.9826
14.0044 400 0.0016 - - - - -
14.3545 410 0.0018 - - - - -
14.7046 420 0.0018 - - - - -
14.9847 428 - 0.9827 0.9825 0.9832 0.9816 0.9826
15.0547 430 0.0018 - - - - -
15.4048 440 0.0015 - - - - -
15.7549 450 0.0017 - - - - -
16.0 457 - 0.9833 0.9836 0.9832 0.9822 0.9824

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}