---
base_model: sentence-transformers/all-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100000
- loss:CosineSimilarityLoss
widget:
- source_sentence: Face off with a ref mid-hockey game in an arena.
  sentences:
  - Nobody is playing
  - A mustached man in a patterned shirt watches a boat painted blue and orange.
  - Two adults makes calls on there cell phones during there lunch breaks.
- source_sentence: A group of people, one holding a yellow and blue umbrella, are standing at the top of some stairs.
  sentences:
  - One person wields an umbrella.
  - A girl is on the beach.
  - A man is on his couch.
- source_sentence: A man waiting for the results of the machine after doing an experiment in his laboratory.
  sentences:
  - There is a man playing an instrument while running
  - A man in a lab waits to get more information about his experiment.
  - The graffiti artists admire their work.
- source_sentence: People in a tent shelter near the bottom of stairs.
  sentences:
  - A boy has fallen asleep during dinner.
  - Three men address a crowd.
  - People are in a makeshift shelter at the foot of a staircase.
- source_sentence: A female researcher looking through a microscope.
  sentences:
  - A man misses the rope and falls
  - A small girl is playing video games
  - A woman is researching with a microscope.
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: snli dev
      type: snli-dev
    metrics:
    - type: pearson_cosine
      value: 0.48994508338253345
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.4778683474663533
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.46917600703738915
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.47754796729416876
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.46924620767742137
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.4778683474663533
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.48994508631435785
      name: Pearson Dot
    - type: spearman_dot
      value: 0.4778683472855999
      name: Spearman Dot
    - type: pearson_max
      value: 0.48994508631435785
      name: Pearson Max
    - type: spearman_max
      value: 0.4778683474663533
      name: Spearman Max
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Nessrine9/finetuned2-snli-MiniLM-L12-v2")
# Run inference
sentences = [
    'A female researcher looking through a microscope.',
    'A woman is researching with a microscope.',
    'A small girl is playing video games',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `snli-dev`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| pearson_cosine     | 0.4899     |
| spearman_cosine    | 0.4779     |
| pearson_manhattan  | 0.4692     |
| spearman_manhattan | 0.4775     |
| pearson_euclidean  | 0.4692     |
| spearman_euclidean | 0.4779     |
| pearson_dot        | 0.4899     |
| spearman_dot       | 0.4779     |
| pearson_max        | 0.4899     |
| **spearman_max**   | **0.4779** |
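The card does not include an evaluation script, so the snippet below is only a minimal sketch of how a comparable `snli-dev` evaluation could be run with `EmbeddingSimilarityEvaluator`. Loading SNLI from the Hub (`stanfordnlp/snli`) and mapping gold labels to similarity scores as entailment → 1.0, neutral → 0.5, contradiction → 0.0 are assumptions consistent with the sample rows shown later in this card, not details stated by the card itself.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Nessrine9/finetuned2-snli-MiniLM-L12-v2")

# Assumed dev data: SNLI validation pairs with gold labels mapped to
# similarity scores (entailment -> 1.0, neutral -> 0.5, contradiction -> 0.0).
snli_dev = load_dataset("stanfordnlp/snli", split="validation")
snli_dev = snli_dev.filter(lambda ex: ex["label"] != -1)  # drop pairs without a gold label

dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=snli_dev["premise"],
    sentences2=snli_dev["hypothesis"],
    scores=[1.0 - label / 2 for label in snli_dev["label"]],
    main_similarity=SimilarityFunction.COSINE,
    name="snli-dev",
)
print(dev_evaluator(model))  # dict with keys such as "snli-dev_spearman_cosine"
```

Because the card does not document how its dev pairs were built, numbers from this sketch will not necessarily match the table above.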
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 100,000 training samples
* Columns: `sentence_0`, `sentence_1`, and `label`
* Approximate statistics based on the first 1000 samples:

  |      | sentence_0 | sentence_1 | label |
  |:-----|:-----------|:-----------|:------|
  | type | string     | string     | float |

* Samples:

  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | A man wearing jeans and a t-shirt plays guitar for a smiling woman and child as they sit on a staircase near red and orange balloons. | A man is in jail. | 1.0 |
  | A boy wearing blue short standing on the traffic signal pole. | The boy is carrying his school books. | 0.5 |
  | Several people on a busy street or perhaps at a fair. | They are walkng. | 0.5 |

* Loss: [CosineSimilarityLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
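The training data is listed only as an unnamed 100,000-pair dataset, so the following is a hedged sketch of how a comparable `(sentence_0, sentence_1, label)` dataset and the `CosineSimilarityLoss` could be prepared. The SNLI source (`stanfordnlp/snli`), the 0.0/0.5/1.0 label mapping, and taking the first 100,000 pairs are assumptions, not details taken from the card.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Assumed pair construction: premise/hypothesis pairs with a float label
# (entailment -> 1.0, neutral -> 0.5, contradiction -> 0.0), matching the sample rows above.
train_dataset = (
    load_dataset("stanfordnlp/snli", split="train")
    .filter(lambda ex: ex["label"] != -1)
    .map(
        lambda ex: {
            "sentence_0": ex["premise"],
            "sentence_1": ex["hypothesis"],
            "label": 1.0 - ex["label"] / 2,
        },
        remove_columns=["premise", "hypothesis", "label"],
    )
    .select(range(100_000))  # the card reports 100,000 training samples
)

# CosineSimilarityLoss regresses cosine(sentence_0, sentence_1) onto the float label
# using the MSELoss listed in the loss parameters above.
loss = CosineSimilarityLoss(model)
```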
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
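No training script is included in the card; the sketch below shows how a run with the non-default hyperparameters above could be wired up with `SentenceTransformerTrainer`. The `output_dir`, the 500-step evaluation cadence (inferred from the training-log table below), and the reuse of `model`, `train_dataset`, `loss`, and `dev_evaluator` from the earlier sketches are assumptions.

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# Mirrors the non-default hyperparameters listed above; everything else stays at its default.
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned2-snli-MiniLM-L12-v2",  # placeholder output directory
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,
    eval_strategy="steps",
    eval_steps=500,  # assumed from the 500-step evaluation cadence in the logs
    multi_dataset_batch_sampler="round_robin",  # only relevant when training on multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,              # base model from the dataset/loss sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=dev_evaluator,  # the snli-dev evaluator sketched earlier
)
trainer.train()
```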
### Training Logs

| Epoch  | Step  | Training Loss | snli-dev_spearman_max |
|:------:|:-----:|:-------------:|:---------------------:|
| 0.08   | 500   | 0.1832        | 0.3114                |
| 0.16   | 1000  | 0.1489        | 0.3518                |
| 0.24   | 1500  | 0.1468        | 0.3697                |
| 0.32   | 2000  | 0.1411        | 0.3723                |
| 0.4    | 2500  | 0.14          | 0.4062                |
| 0.48   | 3000  | 0.1366        | 0.3923                |
| 0.56   | 3500  | 0.1379        | 0.4143                |
| 0.64   | 4000  | 0.1357        | 0.3928                |
| 0.72   | 4500  | 0.1331        | 0.4067                |
| 0.8    | 5000  | 0.1338        | 0.4293                |
| 0.88   | 5500  | 0.1294        | 0.4183                |
| 0.96   | 6000  | 0.1305        | 0.4402                |
| 1.0    | 6250  | -             | 0.4454                |
| 1.04   | 6500  | 0.1303        | 0.4408                |
| 1.12   | 7000  | 0.1275        | 0.4416                |
| 1.2    | 7500  | 0.1285        | 0.4287                |
| 1.28   | 8000  | 0.125         | 0.4404                |
| 1.36   | 8500  | 0.1253        | 0.4408                |
| 1.44   | 9000  | 0.1246        | 0.4293                |
| 1.52   | 9500  | 0.126         | 0.4535                |
| 1.6    | 10000 | 0.1257        | 0.4455                |
| 1.68   | 10500 | 0.1264        | 0.4520                |
| 1.76   | 11000 | 0.1248        | 0.4526                |
| 1.84   | 11500 | 0.1208        | 0.4631                |
| 1.92   | 12000 | 0.1236        | 0.4635                |
| 2.0    | 12500 | 0.1239        | 0.4573                |
| 2.08   | 13000 | 0.1209        | 0.4569                |
| 2.16   | 13500 | 0.1194        | 0.4642                |
| 2.24   | 14000 | 0.1206        | 0.4539                |
| 2.32   | 14500 | 0.117         | 0.4633                |
| 2.4    | 15000 | 0.1171        | 0.4657                |
| 2.48   | 15500 | 0.1181        | 0.4633                |
| 2.56   | 16000 | 0.1197        | 0.4552                |
| 2.64   | 16500 | 0.1182        | 0.4670                |
| 2.72   | 17000 | 0.1155        | 0.4684                |
| 2.8    | 17500 | 0.1171        | 0.4640                |
| 2.88   | 18000 | 0.1139        | 0.4715                |
| 2.96   | 18500 | 0.1164        | 0.4769                |
| 3.0    | 18750 | -             | 0.4709                |
| 3.04   | 19000 | 0.1151        | 0.4704                |
| 3.12   | 19500 | 0.1144        | 0.4759                |
| 3.2    | 20000 | 0.1121        | 0.4795                |
| 3.28   | 20500 | 0.1104        | 0.4697                |
| 3.36   | 21000 | 0.1127        | 0.4763                |
| 3.44   | 21500 | 0.1115        | 0.4742                |
| 3.52   | 22000 | 0.1126        | 0.4697                |
| 3.6    | 22500 | 0.1123        | 0.4735                |
| 3.68   | 23000 | 0.1132        | 0.4750                |
| 3.76   | 23500 | 0.1127        | 0.4743                |
| 3.84   | 24000 | 0.1086        | 0.4752                |
| 3.92   | 24500 | 0.1107        | 0.4781                |
| 4.0    | 25000 | 0.1114        | 0.4779                |

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.2
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```