
SentenceTransformer based on distilbert/distilroberta-base

This is a sentence-transformers model finetuned from distilbert/distilroberta-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: distilbert/distilroberta-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
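
The Pooling module above averages the Transformer's token embeddings into a single 768-dimensional sentence embedding (mean pooling over non-padding tokens). For intuition only, here is a minimal sketch that reproduces this pooling by hand with transformers, using the base checkpoint rather than this finetuned model:

import torch
from transformers import AutoModel, AutoTokenizer

# Sketch: attention-mask-aware mean pooling over token embeddings.
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilroberta-base")
backbone = AutoModel.from_pretrained("distilbert/distilroberta-base")

encoded = tokenizer(["An example sentence."], padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = backbone(**encoded).last_hidden_state  # [batch, seq_len, 768]

# Average only over real tokens: mask out padding before taking the mean.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])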

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jeevansai93/Jeevan_cv_run2_roberta_5_epoc")
# Run inference
sentences = [
    'Sour Cream Coconut Cake ["2 c. sugar", "2 (8 oz.) carton sour cream", "2 pkg. frozen coconut", "1 (3-layer) cake, baked"] ["Bake cake; split the 3 layers into 6 layers."]',
    'Milk Chocolate Bar Cake ["1 (18 oz.) pkg. Swiss chocolate cake mix", "1 (8 oz.) pkg. cream cheese, softened", "1 c. powdered sugar", "1/2 c. granulated sugar", "10 (15 oz.) milk chocolate candy bars with almonds, divided", "1 (12 oz.) carton thawed Cool Whip"] ["Prepare cake batter according to directions on box.", "Pour into 2 greased and floured 8-inch round cake pans.", "Bake at 325\\u00b0 for 20 to 25 minutes.", "Cool and divide to make 4 layers."]',
    'Chili Sauce ["12 ripe tomatoes", "4 onions", "2 green peppers", "1 red pepper", "4 Tbsp. sugar", "2 Tbsp. salt", "2 tsp. cinnamon", "2 tsp. cloves", "2 tsp. allspice", "1 tsp. ginger", "1 qt. vinegar"] ["Peel onions and tomatoes, seed peppers and chop all fine, add the spices and put over the fire. Boil steadily for two hours; cool, bottle and seal."]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
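
The snippet above scores a fixed set of texts against each other. For the semantic search use case mentioned in the introduction, util.semantic_search can rank a corpus against a query; a minimal sketch follows (the query and corpus strings are illustrative, not taken from the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jeevansai93/Jeevan_cv_run2_roberta_5_epoc")

# Illustrative recipe-style corpus, in the spirit of the training data.
corpus = [
    "Chocolate cake with cream cheese frosting",
    "Spicy tomato chili sauce with vinegar and cloves",
    "Buttermilk yeast biscuits",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("coconut layer cake", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # [{'corpus_id': ..., 'score': ...}, ...], best match first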

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,149 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 42 tokens, mean 136.35 tokens, max 326 tokens
    • sentence_1 (string): min 34 tokens, mean 137.99 tokens, max 358 tokens
    • label (float): min 0.0, mean 0.24, max 1.0
  • Samples:
    • sentence_0: Quick Barbecue Wings ["chicken wings (as many as you need for dinner)", "flour", "barbecue sauce (your choice)"] ["Clean wings.", "Flour and fry until done.", "Place fried chicken wings in microwave bowl.", "Stir in barbecue sauce.", "Microwave on High (stir once) for 4 minutes."]
      sentence_1: Spaghetti Sauce To Can ["1/2 bushel tomatoes", "1 c. oil", "1/4 c. minced garlic", "6 cans tomato paste", "3 peppers (2 sweet and 1 hot)", "1 1/2 c. sugar", "1/2 c. salt", "1 Tbsp. sweet basil", "2 Tbsp. oregano", "1 tsp. Italian seasoning"] ["Cook ground or chopped peppers and onions in oil for 1/2 hour. Cook tomatoes and garlic as for juice.", "Put through the mill.", "(I use a food processor and do my tomatoes uncooked.", "I then add the garlic right to the juice.)", "Add peppers and onions to juice and remainder of ingredients.", "Cook approximately 1 hour.", "Put in jars and seal.", "Yields 7 quarts."]
      label: 0.15000000000000002
    • sentence_0: Grandma Mary'S Butter Cookies ["1 c. sweet butter", "1 c. granulated sugar", "3 egg yolks", "2 1/2 c. sifted flour", "1 tsp. vanilla"] ["Cream butter.", "Beat into sugar.", "Add egg yolks and vanilla. Beat well after adding each yolk.", "Add flour and beat after each 1/2 cup is added.", "Chill about 1 hour."]
      sentence_1: Magic Cookie Bars ["1/2 c. butter", "1 1/2 c. graham cracker crumbs", "1 (14 oz.) can Eagle Brand milk", "6 oz. semi-sweet chocolate chips", "1 (3 1/2 oz.) can flaked coconut (1 1/2 c.)", "1 c. chopped nuts"] ["Preheat oven to 350\u00b0 (325\u00b0 for glass dish).", "In 13 x 9-inch pan, melt butter in oven.", "Sprinkle with crumbs.", "Top with Eagle Brand milk evenly.", "Top with remaining ingredients.", "Press down. Bake 25 to 30 minutes until lightly brown.", "Cool or chill.", "Cut into bars; store, loosely covered, at room temperature."]
      label: 0.65
    • sentence_0: Angel Biscuits ["5 c. flour", "3 Tbsp. sugar", "4 tsp. baking powder", "1 1/2 pkg. dry yeast", "2 c. buttermilk", "1 tsp. soda", "1 1/2 sticks margarine", "1/2 c. warm water"] ["Mix flour, sugar, baking powder, soda and salt together.", "Cut in margarine, dissolve yeast in warm water.", "Stir into buttermilk and add to dry mixture.", "Cover and chill."]
      sentence_1: Mexican Cookie Rings ["1 1/2 c. sifted flour", "1/2 tsp. baking powder", "1/2 tsp. salt", "1/2 c. butter", "2/3 c. sugar", "3 egg yolks", "1 tsp. vanilla", "multi-colored candies"] ["Sift flour, baking powder and salt together.", "Cream together butter and sugar.", "Add egg yolks and vanilla.", "Beat until light and fluffy.", "Mix in sifted dry ingredients.", "Shape into 1-inch balls.", "Push wooden spoon handle through center (twist).", "Shape into rings.", "Dip each cookie into candies.", "Place on lightly greased baking sheets.", "Bake in 375\u00b0 oven for 10 to 12 minutes or until golden brown.", "Cool on racks.", "Serves 2 dozen."]
      label: 0.1
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
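
The training script itself is not included in this card, but a pair dataset with float labels like the one above is typically fit with CosineSimilarityLoss along these lines. This is a sketch under that assumption; the texts and labels below are placeholders, not rows from the actual dataset:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilbert/distilroberta-base")

# Placeholder pairs: (sentence_0, sentence_1) with a similarity label in [0, 1].
train_examples = [
    InputExample(texts=["Recipe A ...", "Recipe B ..."], label=0.15),
    InputExample(texts=["Recipe C ...", "Recipe D ..."], label=0.65),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# CosineSimilarityLoss computes cos(u, v) for each pair and regresses it onto
# the label with MSELoss, matching the loss configuration shown above.
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)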
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
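
In Sentence Transformers 3.x these settings map onto SentenceTransformerTrainingArguments. A sketch of the equivalent configuration (output_dir is a placeholder, not the path used for this run):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/run",  # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler="round_robin",  # cycle datasets evenly when training on several
)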

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.1
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.30.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}