SentenceTransformer based on intfloat/multilingual-e5-small

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
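
The same three-stage pipeline can be reproduced by hand with the transformers library. The sketch below is illustrative only (it assumes the transformer weights are stored at the repository root, as is standard for Sentence Transformers checkpoints): it applies mean pooling over the token embeddings and L2-normalizes the result, mirroring the Pooling and Normalize modules above.

from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Illustrative re-implementation of Transformer -> mean Pooling -> Normalize
tokenizer = AutoTokenizer.from_pretrained("srikarvar/fine_tuned_model_14")
encoder = AutoModel.from_pretrained("srikarvar/fine_tuned_model_14")

batch = tokenizer(
    ["What is the speed of light?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state   # (batch, seq_len, 384)

mask = batch["attention_mask"].unsqueeze(-1).float()         # mask out padding tokens
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(mean_pooled, p=2, dim=1)            # unit-length vectors, as in Normalize()
print(embeddings.shape)  # torch.Size([1, 384])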

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_14")
# Run inference
sentences = [
    'The purpose of the training guide is to provide tutorials, how-to guides, and conceptual guides for working with AI models.',
    'The goal of the training guide is to offer tutorials, how-to instructions, and conceptual guidance for utilizing AI models.',
    'Steps to roast a turkey',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
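
The embeddings also plug directly into the library's retrieval utilities. The following is a small sketch of semantic search with sentence_transformers.util.semantic_search; the corpus and query are made up for illustration.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("srikarvar/fine_tuned_model_14")

# Hypothetical corpus and query, for illustration only
corpus = [
    "Steps to roast a turkey",
    "How to bake sourdough bread",
    "The training guide provides tutorials and how-to guides for AI models",
]
query = "Where can I find tutorials for working with AI models?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# For each query, returns the top_k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))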

Evaluation

Metrics

Binary Classification (pair-class-dev)

Metric Value
cosine_accuracy 0.8639
cosine_accuracy_threshold 0.8523
cosine_f1 0.8853
cosine_f1_threshold 0.8417
cosine_precision 0.9022
cosine_recall 0.8691
cosine_ap 0.9515
dot_accuracy 0.8639
dot_accuracy_threshold 0.8523
dot_f1 0.8853
dot_f1_threshold 0.8417
dot_precision 0.9022
dot_recall 0.8691
dot_ap 0.9515
manhattan_accuracy 0.8671
manhattan_accuracy_threshold 8.2279
manhattan_f1 0.8877
manhattan_f1_threshold 8.6464
manhattan_precision 0.9071
manhattan_recall 0.8691
manhattan_ap 0.952
euclidean_accuracy 0.8639
euclidean_accuracy_threshold 0.5435
euclidean_f1 0.8853
euclidean_f1_threshold 0.5626
euclidean_precision 0.9022
euclidean_recall 0.8691
euclidean_ap 0.9515
max_accuracy 0.8671
max_accuracy_threshold 8.2279
max_f1 0.8877
max_f1_threshold 8.6464
max_precision 0.9071
max_recall 0.8691
max_ap 0.952
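
Since the model is evaluated as a binary pair classifier, the reported thresholds can be reused at inference time. The sketch below treats a pair as a paraphrase when its cosine similarity exceeds the cosine_accuracy_threshold from the table above (0.8523); treat this threshold as a starting point and retune it on your own data.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/fine_tuned_model_14")

# Threshold taken from the cosine_accuracy_threshold row above
THRESHOLD = 0.8523

def is_paraphrase(text_a: str, text_b: str) -> bool:
    emb = model.encode([text_a, text_b])
    score = model.similarity(emb[0:1], emb[1:2]).item()  # cosine similarity (embeddings are normalized)
    return score >= THRESHOLD

print(is_paraphrase("What is the speed of light?", "At what speed does light travel?"))  # expected: True
print(is_paraphrase("What is the speed of light?", "Steps to roast a turkey"))           # expected: False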

Binary Classification (pair-class-test)

Metric Value
cosine_accuracy 0.8703
cosine_accuracy_threshold 0.8251
cosine_f1 0.8935
cosine_f1_threshold 0.8084
cosine_precision 0.8866
cosine_recall 0.9005
cosine_ap 0.9547
dot_accuracy 0.8703
dot_accuracy_threshold 0.8251
dot_f1 0.8935
dot_f1_threshold 0.8084
dot_precision 0.8866
dot_recall 0.9005
dot_ap 0.9547
manhattan_accuracy 0.8703
manhattan_accuracy_threshold 9.1812
manhattan_f1 0.8912
manhattan_f1_threshold 9.1812
manhattan_precision 0.9032
manhattan_recall 0.8796
manhattan_ap 0.9546
euclidean_accuracy 0.8703
euclidean_accuracy_threshold 0.5914
euclidean_f1 0.8935
euclidean_f1_threshold 0.619
euclidean_precision 0.8866
euclidean_recall 0.9005
euclidean_ap 0.9547
max_accuracy 0.8703
max_accuracy_threshold 9.1812
max_f1 0.8935
max_f1_threshold 9.1812
max_precision 0.9032
max_recall 0.9005
max_ap 0.9547
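
These metrics match what the library's BinaryClassificationEvaluator reports. Below is a minimal sketch of re-running such an evaluation on custom labeled pairs; the pairs, labels, and evaluator name are illustrative.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_14")

# Hypothetical labeled pairs: 1 = same meaning, 0 = different meaning
sentences1 = ["What are the symptoms of pneumonia?", "What is the speed of light?"]
sentences2 = ["What are the symptoms of bronchitis?", "At what speed does light travel?"]
labels = [0, 1]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="pair-class-custom")
results = evaluator(model)
print(results)  # accuracy, F1, precision, recall, and AP for several similarity functions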

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,836 training samples
  • Columns: sentence1, label, and sentence2
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 6 tokens, mean 15.88 tokens, max 66 tokens
    • label (int): 0: ~45.70%, 1: ~54.30%
    • sentence2 (string): min 5 tokens, mean 15.82 tokens, max 63 tokens
  • Samples (sentence1 | label | sentence2):
    • What are the symptoms of diabetes? | 1 | What are the indicators of diabetes?
    • What is the speed of light? | 1 | At what speed does light travel?
    • Eager inventory processing loads the entire inventory list immediately and returns it, while lazy inventory processing applies the processing steps on-the-fly when browsing through the list. | 1 | Inventory processing that is done eagerly loads the entire inventory right away and provides the result, whereas lazy inventory processing performs the operations as it goes through the list.
  • Loss: OnlineContrastiveLoss
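
The sentence1 / label / sentence2 layout above maps directly onto a Hugging Face datasets.Dataset. A minimal sketch of a compatible training set (the rows here are invented):

from datasets import Dataset

# Hypothetical rows following the sentence1 / label / sentence2 columns described above
train_dataset = Dataset.from_dict({
    "sentence1": [
        "What are the symptoms of diabetes?",
        "What is the boiling point of sulfur?",
    ],
    "sentence2": [
        "What are the indicators of diabetes?",
        "What is the melting point of sulfur?",
    ],
    "label": [1, 0],  # 1 = paraphrase / duplicate, 0 = different meaning
})
print(train_dataset)  # Dataset with features ['sentence1', 'sentence2', 'label'], num_rows: 2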

Evaluation Dataset

Unnamed Dataset

  • Size: 316 evaluation samples
  • Columns: sentence1, label, and sentence2
  • Approximate statistics based on the first 316 samples:
    • sentence1 (string): min 6 tokens, mean 16.37 tokens, max 98 tokens
    • label (int): 0: ~39.56%, 1: ~60.44%
    • sentence2 (string): min 4 tokens, mean 15.89 tokens, max 98 tokens
  • Samples (sentence1 | label | sentence2):
    • How many planets are in the solar system? | 1 | Number of planets in the solar system
    • What are the symptoms of pneumonia? | 0 | What are the symptoms of bronchitis?
    • What is the boiling point of sulfur? | 0 | What is the melting point of sulfur?
  • Loss: OnlineContrastiveLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 2
  • num_train_epochs: 6
  • warmup_ratio: 0.1
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
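
Putting the loss, the dataset layout, and the non-default hyperparameters above together, a comparable training run could be set up roughly as sketched below. This is an approximation for illustration, not the exact script used to train this model; the tiny inline dataset, output_dir, and save_strategy are placeholders or assumptions.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-small")
loss = OnlineContrastiveLoss(model)

# Tiny illustrative dataset with the sentence1 / sentence2 / label layout described earlier
train_dataset = Dataset.from_dict({
    "sentence1": ["What is the speed of light?", "What are the symptoms of pneumonia?"],
    "sentence2": ["At what speed does light travel?", "What are the symptoms of bronchitis?"],
    "label": [1, 0],
})
eval_dataset = train_dataset

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_model",              # placeholder output path
    num_train_epochs=6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed, so load_best_model_at_end can compare checkpoints
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()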

Training Logs

Epoch Step Training Loss Validation Loss pair-class-dev_max_ap pair-class-test_max_ap
0 0 - - 0.8066 -
0.2247 10 1.6271 - - -
0.4494 20 1.0316 - - -
0.6742 30 0.7502 - - -
0.8989 40 0.691 - - -
0.9888 44 - 0.7641 0.9368 -
1.1236 50 0.732 - - -
1.3483 60 0.532 - - -
1.5730 70 0.389 - - -
1.7978 80 0.2507 - - -
2.0 89 - 0.6496 0.9516 -
2.0225 90 0.4147 - - -
2.2472 100 0.2523 - - -
2.4719 110 0.1588 - - -
2.6966 120 0.1168 - - -
2.9213 130 0.1793 - - -
2.9888 133 - 0.6431 0.9547 -
3.1461 140 0.2062 - - -
3.3708 150 0.109 - - -
3.5955 160 0.0631 - - -
3.8202 170 0.0588 - - -
4.0 178 - 0.6676 0.9512 -
4.0449 180 0.1865 - - -
4.2697 190 0.0303 - - -
4.4944 200 0.0301 - - -
4.7191 210 0.0416 - - -
4.9438 220 0.028 - - -
4.9888 222 - 0.6770 0.9518 -
5.1685 230 0.0604 - - -
5.3933 240 0.0129 - - -
5.6180 250 0.0747 - - -
5.8427 260 0.0069 - - -
5.9326 264 - 0.6755 0.9520 0.9547
  • The saved checkpoint (selected via load_best_model_at_end) is the epoch 2.9888 / step 133 checkpoint, which has the lowest validation loss (0.6431) and the highest pair-class-dev max_ap (0.9547).

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}