# SentenceTransformer based on indobenchmark/indobert-base-p1
This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: indobenchmark/indobert-base-p1
- Maximum Sequence Length: 32 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
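The `Pooling` module above mean-pools the token embeddings (masked by the attention mask) into a single 768-dimensional vector. For readers not using the Sentence Transformers library, here is a minimal sketch of the equivalent computation with plain `transformers`; it assumes the underlying BERT weights load via `AutoModel`, as is typical for Sentence Transformers repositories:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the repository's transformer weights are loadable with AutoModel.
tokenizer = AutoTokenizer.from_pretrained("damand2061/negasibert-ct")
model = AutoModel.from_pretrained("damand2061/negasibert-ct")

encoded = tokenizer(
    ["Ini adalah contoh kalimat."],
    padding=True, truncation=True, max_length=32, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # [batch, seq_len, 768]

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()     # [batch, seq_len, 1]
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```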
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("damand2061/negasibert-ct")

# Run inference
sentences = [
    'Dengan demikian, seorang model penutur harus mengolah representasi warna dalam konteks dan menghasilkan ujaran yang dapat membedakan warna sasaran dengan ujaran lainnya.',
    'Pada tahun 1975 VTL dibeli oleh Greyhound Lines, menjadi anak perusahaan.',
    'Pada tanggal 24 April 2009, Forum Terbuka IBIS menyetujui versi 2.0.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
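The same embeddings also support semantic search: encode a corpus once, then rank its entries by cosine similarity to a query embedding. A minimal sketch, with a made-up corpus and query for illustration:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("damand2061/negasibert-ct")

# Hypothetical corpus and query, for illustration only.
corpus = [
    "Pada tahun 1975 VTL dibeli oleh Greyhound Lines, menjadi anak perusahaan.",
    "Pada tanggal 24 April 2009, Forum Terbuka IBIS menyetujui versi 2.0.",
]
query = "Kapan VTL diakuisisi oleh Greyhound Lines?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Rank corpus entries by cosine similarity to the query.
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())
```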
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `str-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.4767 |
| spearman_cosine    | 0.485  |
| pearson_manhattan  | 0.5041 |
| spearman_manhattan | 0.4927 |
| pearson_euclidean  | 0.5059 |
| spearman_euclidean | 0.4916 |
| pearson_dot        | 0.2992 |
| spearman_dot       | 0.263  |
| pearson_max        | 0.5059 |
| spearman_max       | 0.4927 |
#### Semantic Similarity

- Dataset: `str-test`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.4737 |
| spearman_cosine    | 0.5083 |
| pearson_manhattan  | 0.4983 |
| spearman_manhattan | 0.4962 |
| pearson_euclidean  | 0.5006 |
| spearman_euclidean | 0.497  |
| pearson_dot        | 0.2573 |
| spearman_dot       | 0.2435 |
| pearson_max        | 0.5006 |
| spearman_max       | 0.5083 |
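The `str-dev` and `str-test` data are not bundled with this card, so the following is only a sketch of how `EmbeddingSimilarityEvaluator` computes these metrics; the sentence pairs and gold scores below are placeholders borrowed from the training samples shown later:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("damand2061/negasibert-ct")

# Placeholder pairs with gold similarity scores in [0, 1]; substitute the
# real str-dev / str-test pairs to reproduce the numbers reported above.
sentences1 = [
    "Warnanya tercermin pada corak dan lambang universitas kota tersebut.",
    "Pada awal tahun 2008, Ikerbasque menolak menugaskan Enrique Zuazua.",
    "Pada tahun 2006, sebuah studi diselesaikan tentang prospek jalur Scarborough.",
]
sentences2 = [
    "Warnanya tercermin pada corak dan lambang universitas kota tersebut.",
    "Oh, ayolah, itu adil.",
    "Jurnal Pendidikan Modern didirikan olehnya.",
]
gold_scores = [1.0, 0.0, 0.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="str-dev")
results = evaluator(model)  # dict of pearson/spearman metrics per similarity function
print(results)
```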
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 12,800 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:
|         | sentence_0                                        | sentence_1                                        | label                  |
|:--------|:--------------------------------------------------|:--------------------------------------------------|:-----------------------|
| type    | string                                            | string                                            | int                    |
| details | min: 5 tokens, mean: 14.81 tokens, max: 32 tokens | min: 5 tokens, mean: 14.92 tokens, max: 32 tokens | 0: ~87.50%, 1: ~12.50% |
- Samples:

| sentence_0 | sentence_1 | label |
|:-----------|:-----------|:------|
| Warnanya tercermin pada corak dan lambang universitas kota tersebut. | Warnanya tercermin pada corak dan lambang universitas kota tersebut. | 1 |
| Pada awal tahun 2008, Ikerbasque menolak menugaskan Enrique Zuazua. | Oh, ayolah, itu adil. | 0 |
| Pada tahun 2006, sebuah studi diselesaikan tentang prospek jalur Scarborough. | Jurnal Pendidikan Modern didirikan olehnya. | 0 |
- Loss: `ContrastiveTensionLoss`
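Contrastive Tension training pairs each sentence with itself as a positive (label 1) and with other sentences as negatives (label 0), which explains the ~87.50%/~12.50% label split above. Below is a sketch of how such a dataset can be built; this is illustrative, not the author's exact pipeline, and the corpus is hypothetical:

```python
import random
from datasets import Dataset

# Hypothetical corpus; the actual 12,800-sample corpus is not published here.
sentences = [
    "Kalimat pertama.",
    "Kalimat kedua.",
    "Kalimat ketiga.",
]

# Pair each sentence with itself (label 1) and with 7 different sentences
# (label 0), matching the ~12.50% / ~87.50% label split reported above.
rows = {"sentence_0": [], "sentence_1": [], "label": []}
for s in sentences:
    rows["sentence_0"].append(s)
    rows["sentence_1"].append(s)
    rows["label"].append(1)
    others = [t for t in sentences if t != s]
    for _ in range(7):
        rows["sentence_0"].append(s)
        rows["sentence_1"].append(random.choice(others))
        rows["label"].append(0)

train_dataset = Dataset.from_dict(rows)
```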
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
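Combining the dataset format and the hyperparameters above, a training run could look like the following sketch; `train_dataset` is assumed to have the `sentence_0`/`sentence_1`/`label` columns described earlier, and the output path is hypothetical:

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import ContrastiveTensionLoss

# Wrapping a plain BERT checkpoint adds a mean-pooling module automatically.
model = SentenceTransformer("indobenchmark/indobert-base-p1")
model.max_seq_length = 32  # matches the card's maximum sequence length

args = SentenceTransformerTrainingArguments(
    output_dir="negasibert-ct",  # hypothetical output path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # sentence_0 / sentence_1 / label, as above
    loss=ContrastiveTensionLoss(model),
)
trainer.train()
```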
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch | Step | Training Loss | str-dev_spearman_max | str-test_spearman_max |
|:-----:|:----:|:-------------:|:--------------------:|:---------------------:|
| 1.0   | 200  | -             | 0.5009               | 0.5084                |
| 2.0   | 400  | -             | 0.4926               | 0.5025                |
| 2.5   | 500  | 2328.8573     | -                    | -                     |
| 3.0   | 600  | -             | 0.4909               | 0.5058                |
| 4.0   | 800  | -             | 0.4909               | 0.5064                |
| 5.0   | 1000 | 0.5625        | 0.4927               | 0.5083                |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveTensionLoss
```bibtex
@inproceedings{carlsson2021semantic,
    title={Semantic Re-tuning with Contrastive Tension},
    author={Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylip{\"a}{\"a} Hellqvist and Magnus Sahlgren},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=Ov_sMNau-PF}
}
```