---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
---

# dfe-large-en

This is a sentence-transformers model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model has an asymmetric head with separate `dialog` and `fact` branches (see the architecture below), so inputs are passed as `{branch: text}` dicts.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

# Use the full Hub id (e.g. '<owner>/dfe-large-en') when loading from the Hub.
model = SentenceTransformer('dfe-large-en')

# The Asym head has separate 'dialog' and 'fact' branches, so each input
# is passed as a {branch: text} dict rather than a plain string.
sentences = [{'dialog': 'This is an example sentence'},
             {'dialog': 'Each sentence is converted'}]
embeddings = model.encode(sentences)
print(embeddings)
```
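
The intended use suggested by the branch names is matching dialog turns against stored facts. Below is a minimal sketch of that, assuming the `dialog`/`fact` pairing as the use case; the example texts are placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dfe-large-en')

# Encode a dialog turn and candidate facts through their respective branches.
dialog_emb = model.encode([{'dialog': 'I just adopted a puppy last week!'}])
fact_embs = model.encode([{'fact': 'The user recently got a dog.'},
                          {'fact': 'The user lives in Berlin.'}])

# The final Normalize() layer makes embeddings unit-length, so cosine
# similarity ranks the facts directly.
print(util.cos_sim(dialog_emb, fact_embs))
```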

## Evaluation Results

For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 3633 with parameters:

```python
{'batch_size': 1024, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
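
In sentence-transformers code, the DataLoader and loss above might be set up as follows. This is a sketch only: the training pairs are placeholders, and only the batch size and loss class come from this card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('dfe-large-en')

# CosineSimilarityLoss trains on (text_a, text_b, score) pairs; for an Asym
# model each text is a {branch: text} dict. These pairs are placeholders.
train_examples = [
    InputExample(texts=[{'dialog': 'I love hiking.'},
                        {'fact': 'The user enjoys outdoor activities.'}],
                 label=0.9),
]

# shuffle=True gives the RandomSampler/BatchSampler listed above by default.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1024)
train_loss = losses.CosineSimilarityLoss(model)
```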

Parameters of the `fit()` method:

```json
{
    "epochs": 4,
    "evaluation_steps": 2000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'lion_pytorch.lion_pytorch.Lion'>",
    "optimizer_params": {
        "lr": 0.0001,
        "weight_decay": 0.01
    },
    "scheduler": "WarmupCosine",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```
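
Continuing the sketch above, these parameters correspond to a `fit()` call along these lines (`Lion` comes from the `lion-pytorch` package; the `EmbeddingSimilarityEvaluator` would be built from held-out pairs, omitted here):

```python
from lion_pytorch import Lion

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    evaluation_steps=2000,
    warmup_steps=100,
    scheduler='WarmupCosine',
    optimizer_class=Lion,
    optimizer_params={'lr': 1e-4, 'weight_decay': 0.01},
    weight_decay=0.01,
    max_grad_norm=1,
)
```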

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Asym(
    (dialog-0): Dense({'in_features': 1024, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (dialog-1): Dense({'in_features': 2048, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (dialog-2): Dropout(
      (dropout_layer): Dropout(p=0.1, inplace=False)
    )
    (dialog-3): Dense({'in_features': 2048, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (dialog-4): Dense({'in_features': 2048, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (dialog-5): Normalize()
    (fact-0): Dense({'in_features': 1024, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (fact-1): Dense({'in_features': 2048, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (fact-2): Dropout(
      (dropout_layer): Dropout(p=0.1, inplace=False)
    )
    (fact-3): Dense({'in_features': 2048, 'out_features': 2048, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (fact-4): Dense({'in_features': 2048, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (fact-5): Normalize()
  )
)
```
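
The `Asym` block routes each input through one of the two head stacks according to its key, so the same text produces different 1024-dimensional unit vectors under `dialog` and `fact`. A quick sketch to illustrate:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('dfe-large-en')

text = 'Paris is the capital of France.'
as_dialog = model.encode([{'dialog': text}])[0]
as_fact = model.encode([{'fact': text}])[0]

print(as_dialog.shape)                     # (1024,) per the final Dense layers
print(float(np.dot(as_dialog, as_fact)))   # cosine similarity; both vectors are normalized
```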

## Citing & Authors

Diwank Singh