# SentenceTransformer based on srikarvar/fine_tuned_model_5
This is a sentence-transformers model finetuned from srikarvar/fine_tuned_model_5 on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** srikarvar/fine_tuned_model_5
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - json
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
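For intuition, the three modules above can be reproduced with plain `transformers`. A minimal sketch, assuming the underlying BertModel weights load via `AutoModel` (the standard layout for Sentence Transformers checkpoints); the input sentence is a made-up example:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("srikarvar/fine_tuned_model_10")
bert = AutoModel.from_pretrained("srikarvar/fine_tuned_model_10")

inputs = tokenizer(
    ["An example sentence."],  # hypothetical input
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (0) Transformer

# (1) Pooling: mean over non-padding tokens
mask = inputs["attention_mask"].unsqueeze(-1).float()
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length L2 normalization
embeddings = F.normalize(mean_pooled, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])
```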
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_10")

# Run inference
sentences = [
    'Once you have completed your library script, you can generate a library card and submit it to the server.',
    'Once your library script is ready, you can create a library card and upload it to the server.',
    "It replaces the document's header.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
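The same embeddings support simple semantic search. A minimal sketch reusing `model`, `sentences`, and `embeddings` from above; the query string is a made-up example:

```python
# Rank the three sentences above against a new query.
query_embedding = model.encode(["How do I publish my library script?"])  # hypothetical query
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best], scores[0, best].item())
```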
## Evaluation

### Metrics

#### Information Retrieval

- Dataset: `e5-cogcache-small-refined`
- Evaluated with `InformationRetrievalEvaluator`
| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.9821 |
| cosine_accuracy@3   | 0.9821 |
| cosine_accuracy@5   | 1.0    |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.9821 |
| cosine_precision@3  | 0.3274 |
| cosine_precision@5  | 0.2    |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.9821 |
| cosine_recall@3     | 0.9821 |
| cosine_recall@5     | 1.0    |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.9898 |
| cosine_mrr@10       | 0.9866 |
| cosine_map@100      | 0.9866 |
| dot_accuracy@1      | 0.9821 |
| dot_accuracy@3      | 0.9821 |
| dot_accuracy@5      | 1.0    |
| dot_accuracy@10     | 1.0    |
| dot_precision@1     | 0.9821 |
| dot_precision@3     | 0.3274 |
| dot_precision@5     | 0.2    |
| dot_precision@10    | 0.1    |
| dot_recall@1        | 0.9821 |
| dot_recall@3        | 0.9821 |
| dot_recall@5        | 1.0    |
| dot_recall@10       | 1.0    |
| dot_ndcg@10         | 0.9898 |
| dot_mrr@10          | 0.9866 |
| dot_map@100         | 0.9866 |
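An evaluation of this form can be run with the library's `InformationRetrievalEvaluator`. A minimal sketch with made-up toy data, since the actual `e5-cogcache-small-refined` split is not included in this card:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data; replace with the real evaluation split.
queries = {"q1": "How do I publish a library script?"}
corpus = {
    "d1": "Once your library script is ready, upload it to the server.",
    "d2": "It replaces the document's header.",
}
relevant_docs = {"q1": {"d1"}}  # query_id -> set of relevant doc_ids

ir_evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="e5-cogcache-small-refined",
)
results = ir_evaluator(model)  # metrics dict: accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```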
## Training Details

### Training Dataset

#### json

- Dataset: json
- Size: 560 training samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 560 samples:

  |         | anchor                                            | positive                                          |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|
  | type    | string                                             | string                                             |
  | details | min: 9 tokens, mean: 30.23 tokens, max: 98 tokens  | min: 8 tokens, mean: 30.06 tokens, max: 98 tokens  |

- Samples:

  | anchor | positive |
  |:-------|:---------|
  | It retrieves items from a list. | It selects items from a list. |
  | The goal of seasoning a cast iron pan is to create a non-stick surface and protect it from rust. | The purpose of seasoning a cast iron pan is to create a non-stick surface and prevent rust. |
  | The Spark manual covers topics like data analysis, machine learning, graph processing, and stream processing. | The Spark documentation covers topics such as data analysis, machine learning, graph processing, and stream processing. |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
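A minimal sketch of instantiating this loss with the parameters above, assuming a `model` loaded as in the Usage section. The loss treats every other positive in a batch as an in-batch negative, which is why duplicate anchors must be kept out of each batch:

```python
from sentence_transformers import losses, util

loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                  # matches the parameters above
    similarity_fct=util.cos_sim,
)
```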
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
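Put together, a training run with these non-default values might look like the following sketch. The data file name is hypothetical (the actual json training files are not part of this card), and the train/test split exists only so per-epoch evaluation has something to run on:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

# Start from the base checkpoint named at the top of this card.
model = SentenceTransformer("srikarvar/fine_tuned_model_5")

# Hypothetical file with "anchor" and "positive" columns.
dataset = load_dataset("json", data_files="pairs.json", split="train")
dataset = dataset.train_test_split(test_size=0.1)

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_model_10",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    eval_strategy="epoch",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # in-batch negatives must not repeat anchors
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```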
#### All Hyperparameters

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
### Training Logs

| Epoch  | Step | Training Loss | e5-cogcache-small-refined_cosine_map@100 |
|:------:|:----:|:-------------:|:----------------------------------------:|
| 0      | 0    | -             | 0.9777                                   |
| 0.3125 | 10   | 0.0118        | -                                        |
| 0.625  | 20   | 0.0025        | -                                        |
| 0.9375 | 30   | 0.006         | -                                        |
| 1.0    | 32   | -             | 0.9866                                   |
| 1.25   | 40   | 0.0008        | -                                        |
| 1.5625 | 50   | 0.0005        | -                                        |
| 1.875  | 60   | 0.0011        | -                                        |
| 2.0    | 64   | -             | 0.9866                                   |
| 2.1875 | 70   | 0.0006        | -                                        |
| 2.5    | 80   | 0.0003        | -                                        |
| 2.8125 | 90   | 0.001         | -                                        |
| 3.0    | 96   | -             | 0.9866                                   |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.0
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.34.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
## Model Tree

- Base model: intfloat/multilingual-e5-small
- Finetuned from: srikarvar/fine_tuned_model_5
- This model: srikarvar/fine_tuned_model_10