---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:1M<n<10M
- loss:TripletLoss
base_model: sentence-transformers/paraphrase-MiniLM-L12-v2
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
widget:
- source_sentence: 'method: Making reflective work practices visible'
  sentences:
  - >-
    method: Job quality takes into account both wage and non-wage attributes
    of a job.
  - >-
    purpose: There could therefore be rank differences in the leadership
    behavioural patterns of managers.
  - >-
    negative: SN has a positive effect on the user's intention to use toward
    the SNS.
- source_sentence: 'findings: Proposed logistics framework'
  sentences:
  - >-
    purpose: However these may not be the only reasons for undertaking
    collection evaluation.
  - >-
    purpose: Clearly, there is variation in the definition and understanding
    of the term sustainability.
  - >-
    purpose: The study is based on a panel data regression analysis of 234
    SMEs over a 10-year period (2004-2013).
- source_sentence: 'method: Electoral campaigns and party websites'
  sentences:
  - 'method: Track, leadership style, and team outcomes'
  - >-
    purpose: , three CKM strategies that organizations use to manage
    customer knowledge are considered.
  - 'findings: Motherhood directly affects career progression.'
- source_sentence: 'negative: Entrepreneurship education in Iran'
  sentences:
  - 'negative: Sensemaking as local weather'
  - >-
    findings: In the next section, we will develop hypotheses to explain
    retail banner divestment timing.
  - >-
    negative: Thus, the purpose of this paper is to review AR in retailing
    within business-oriented research.
- source_sentence: 'purpose: 2.2 Decentralization and participation'
  sentences:
  - 'purpose: Social norm approach and feedback'
  - >-
    findings: The upper path of the model represents how counter-knowledge
    directly affects ACAP, reducing HC.
  - 'purpose: Online strategy building requires a series of steps.'
pipeline_tag: sentence-similarity
model-index:
- name: >-
    SentenceTransformer based on
    sentence-transformers/paraphrase-MiniLM-L12-v2
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: triplet
      type: triplet
    metrics:
    - type: cosine_accuracy
      value: 0.6998206089274619
      name: Cosine Accuracy
    - type: dot_accuracy
      value: 0.39671483834759774
      name: Dot Accuracy
    - type: manhattan_accuracy
      value: 0.6998506744703453
      name: Manhattan Accuracy
    - type: euclidean_accuracy
      value: 0.7153344290553406
      name: Euclidean Accuracy
    - type: max_accuracy
      value: 0.7153344290553406
      name: Max Accuracy
---
SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-MiniLM-L12-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
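These three properties can also be read directly off the loaded model. A quick sketch using the standard Sentence Transformers API:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gubartz/facet_retriever")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # "cosine"
```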
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
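The `Pooling` module mean-pools the token embeddings produced by the underlying `BertModel` (only `pooling_mode_mean_tokens` is enabled). As a rough sketch of what that amounts to, the same embedding can be approximated directly with 🤗 Transformers; this assumes the transformer weights sit at the repository root, as is usual for Sentence Transformers checkpoints:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the BERT weights and tokenizer are stored at the repository root.
tokenizer = AutoTokenizer.from_pretrained("gubartz/facet_retriever")
bert = AutoModel.from_pretrained("gubartz/facet_retriever")

sentences = ["purpose: 2.2 Decentralization and participation"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling over non-padding tokens, mirroring pooling_mode_mean_tokens=True.
mask = batch["attention_mask"].unsqueeze(-1).float()    # (batch, seq_len, 1)
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([1, 384])
```

For normal use, the `SentenceTransformer` API shown below handles tokenization, truncation to 256 tokens, and pooling for you.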
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("gubartz/facet_retriever")
# Run inference
sentences = [
    'purpose: 2.2 Decentralization and participation',
    'purpose: Social norm approach and feedback',
    'findings: The upper path of the model represents how counter-knowledge directly affects ACAP, reducing HC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
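Because the training sentences carry a facet prefix (`purpose:`, `method:`, `findings:` or `negative:`), queries are typically written with the same prefix. The following is a small illustrative sketch (the query and corpus sentences are reused from the widget examples above, not a recommended benchmark) that ranks candidate sentences against one prefixed query:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gubartz/facet_retriever")

# Illustrative query and corpus; the facet prefixes mirror the training data format.
query = "method: Electoral campaigns and party websites"
corpus = [
    "method: Track, leadership style, and team outcomes",
    "findings: Motherhood directly affects career progression.",
    "purpose: Online strategy building requires a series of steps.",
]

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Cosine similarity (the model's configured similarity function) between query and corpus.
scores = model.similarity(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {sentence}")
```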
Evaluation
Metrics
Triplet
- Dataset: `triplet`
- Evaluated with `TripletEvaluator`
Metric | Value |
---|---|
cosine_accuracy | 0.6998 |
dot_accuracy | 0.3967 |
manhattan_accuracy | 0.6999 |
euclidean_accuracy | 0.7153 |
max_accuracy | 0.7153 |
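These values are produced by `TripletEvaluator`, which reports the fraction of triplets where the anchor embedding lies closer to the positive than to the negative under each distance function (`max_accuracy` is simply the best of the four). A minimal sketch of re-running such an evaluation, using made-up triplets rather than the actual 199,564-sample evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("gubartz/facet_retriever")

# Made-up triplets for illustration only.
evaluator = TripletEvaluator(
    anchors=["purpose: 2.2 Decentralization and participation"],
    positives=["purpose: Social norm approach and feedback"],
    negatives=["findings: Proposed logistics framework"],
    name="triplet",
)
print(evaluator(model))  # e.g. {'triplet_cosine_accuracy': ..., 'triplet_max_accuracy': ...}
```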
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,541,116 training samples
- Columns: `anchor`, `positive`, and `negative`
- Approximate statistics based on the first 1000 samples:
  |             | anchor       | positive     | negative     |
  |:------------|:-------------|:-------------|:-------------|
  | type        | string       | string       | string       |
  | min tokens  | 9            | 10           | 8            |
  | mean tokens | 42.16        | 42.77        | 38.65        |
  | max tokens  | 187          | 183          | 227          |
- Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | purpose: study attempts to fill this gap by examining firm-specific capabilities of Turkish outward FDI firms. | purpose: In short, the above-mentioned percentages show the lack of usage of knowledge sharing and collaborative technologies in some research institutions in Malaysia due to perceived causes such as non-availability of technology, lack of support, absent of teamwork culture, and lack of knowledge and training. | purpose: While SMA alone must not be used to gather and analyze these voices, these tools can guide organizations in relating to their publics, increasing the way groups identify with them and motivating these groups to enter into relationships with them. |
  | purpose: In this section of the paper, we try to explain citizen attitudes towards sustainable procurement. | purpose: Different from previous studies to concern key factors for motivating consumers' online buying behavior and behavioral intention (Liang and Lim, 2011; Zhang et al., 2013), such finding add knowledge in the filed by finding the meaningful affective mechanism of consumers in OFGB. | purpose: Task significance is not significantly different among generational cohorts of knowledge workers. |
  | purpose: However, the extensive use of information technology (IT) also comes with related security problems caused by the abstract nature of interacting systems - technical and organizational - and the seemingly lack of or inferior control of data or information. | purpose: No previous research using cluster analysis in nursing homes was found, but clusters identified in this study are lower than in previous hospital-based research into patients experiences and satisfaction used as cluster variables (Grondahl et al., 2011). | purpose: Yet, this engagement has tended to only involve a small section of the overall medical workforce in practice, raising questions about the nature of medical engagement more broadly and the mechanisms needed to enhance these processes. |
- Loss: `TripletLoss` with these parameters (see the sketch below):
  `{ "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 }`
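A brief sketch of how a `TripletLoss` with exactly these parameters (Euclidean distance, margin 5) can be constructed over a dataset with `anchor`/`positive`/`negative` columns; the one-row dataset here is purely illustrative:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L12-v2")

# One illustrative row with the same three columns as the training data.
train_dataset = Dataset.from_dict({
    "anchor": ["purpose: 2.2 Decentralization and participation"],
    "positive": ["purpose: Social norm approach and feedback"],
    "negative": ["findings: Proposed logistics framework"],
})

# TripletLoss with the parameters listed above: Euclidean distance and a margin of 5.
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```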
Evaluation Dataset
Unnamed Dataset
- Size: 199,564 evaluation samples
- Columns: `anchor`, `positive`, and `negative`
- Approximate statistics based on the first 1000 samples:
  |             | anchor       | positive     | negative     |
  |:------------|:-------------|:-------------|:-------------|
  | type        | string       | string       | string       |
  | min tokens  | 9            | 9            | 6            |
  | mean tokens | 42.64        | 42.42        | 38.23        |
  | max tokens  | 165          | 197          | 193          |
- Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | purpose: However, it seems obvious that, in the long run, Green OA can be seen as leading progressively to the disappearance of the "traditional" publication model and, possibly, of scientific publishers altogether unless they reconsider their business model and adapt to the new situation. | purpose: Considering the transcendence of the sustainable development agenda in the UDRD, it was decided to search for explicit references to the issue of risk in the proposed indicators, finding a correspondence between four indicators of the development agenda and indicators proposed for the implementation of the Sendai Framework (Maskrey, 2016). | purpose: Finally, the terms of the permanent multinomial corresponding to the particular manufacturing system may be listed and the resulting graphs may be obtained and used for structurally analyzing the capabilities of the manufacturing system in different areas. |
  | purpose: To what extent do information science and the other disciplines demonstrate interest in social network theory and social network analysis?RQ2. | purpose: This study explores relationships between relationship commitment, cooperative behavior and alliance performance from the perspectives of both companies and contract farmers. | purpose: 4.1 The respondents' health literacy skills |
  | purpose: The evidence discussed above shows the nature of forecasting connections in the income growth across the globe. | purpose: Namely, the paper confirms that there is vast deviation between the European countries when it comes to consumer trust in banking in general but also related to each studied banking service. | purpose: Healthcare is one of the major sectors in which Lean production is being considered and adopted as an improvement program (Poksinska, 2010). |
- Loss: `TripletLoss` with these parameters:
  `{ "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 }`
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `auto_find_batch_size`: True
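For reference, a sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` in Sentence Transformers 3.x; the output directory is a placeholder, and `save_strategy="epoch"` is an assumption added so that `load_best_model_at_end` has matching evaluation and save strategies:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/facet_retriever",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",                # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    auto_find_batch_size=True,
)
```

These arguments would then be passed to a `SentenceTransformerTrainer` together with the datasets and the `TripletLoss` shown earlier.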
All Hyperparameters
Click to expand
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: True
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
Training Logs
Epoch | Step | Training Loss | loss | triplet_cosine_accuracy |
---|---|---|---|---|
0.3322 | 500 | 4.2859 | - | - |
0.6645 | 1000 | 3.693 | - | - |
0.9967 | 1500 | 3.5602 | - | - |
1.0 | 1505 | - | 3.4908 | 0.6914 |
1.3289 | 2000 | 3.427 | - | - |
1.6611 | 2500 | 3.3854 | - | - |
1.9934 | 3000 | 3.3551 | - | - |
2.0 | 3010 | - | 3.3604 | 0.7000 |
2.3256 | 3500 | 3.2353 | - | - |
2.6578 | 4000 | 3.221 | - | - |
2.9900 | 4500 | 3.2038 | - | - |
3.0 | 4515 | - | 3.3203 | 0.7026 |
3.3223 | 5000 | 3.1019 | - | - |
3.6545 | 5500 | 3.0942 | - | - |
3.9867 | 6000 | 3.085 | - | - |
4.0 | 6020 | - | 3.3177 | 0.7014 |
4.3189 | 6500 | 3.0129 | - | - |
4.6512 | 7000 | 3.0083 | - | - |
4.9834 | 7500 | 2.9971 | - | - |
5.0 | 7525 | - | 3.3264 | 0.6998 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.0
- Transformers: 4.41.0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
TripletLoss
```bibtex
@misc{hermans2017defense,
    title = {In Defense of the Triplet Loss for Person Re-Identification},
    author = {Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year = {2017},
    eprint = {1703.07737},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
```