# gte-large-en-v1.5 finetuned for TeleQnA response scoring

**Base model:** Alibaba-NLP/gte-large-en-v1.5
We finetuned the model to compute approximate soft scores for textual entailment: given a model's free-form response, generated without conditioning on the answer options, the scores identify the option that the response entails.
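As an illustration of how embedding distances can be turned into soft entailment scores, here is a minimal sketch with toy vectors. The function name and the softmax-over-negative-distance rule are our assumptions for illustration, not the exact scoring code used in the pipeline:

```python
import numpy as np

def soft_scores(response_emb: np.ndarray, option_embs: np.ndarray) -> np.ndarray:
    """Turn Euclidean distances between a response embedding and each option
    embedding into a softmax distribution over the options.
    Smaller distance -> higher score (the closer option is more likely entailed)."""
    dists = np.linalg.norm(option_embs - response_emb, axis=1)
    logits = -dists
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Toy 4-dimensional embeddings stand in for the model's 1024-d vectors.
response = np.array([1.0, 0.0, 0.0, 0.0])
options = np.array([
    [0.9, 0.1, 0.0, 0.0],   # close to the response
    [0.0, 1.0, 0.0, 0.0],   # far from the response
    [0.0, 0.0, 1.0, 0.0],   # also far from the response
])
scores = soft_scores(response, options)
print(scores.argmax())  # -> 0: the first option entails the response most strongly
```

In practice the embeddings would come from `model.encode(...)` as shown in the usage section below.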
## Usage (Sentence-Transformers)
This is a sentence-transformers model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed:

```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# Replace {MODEL_NAME} with this repository's model id.
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
**Loss:** `sentence_transformers.losses.TripletLoss` with parameters:

```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
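The training objective in plain numbers: a minimal NumPy sketch of the Euclidean triplet loss with the margin above. In actual training, `sentence_transformers.losses.TripletLoss` computes this over batches of model embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    """Euclidean triplet loss: pull the anchor toward the positive example and
    push it at least `margin` farther from the negative example.
    loss = max(0, d(anchor, positive) - d(anchor, negative) + margin)"""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.zeros(4)
p = np.zeros(4)       # positive coincides with the anchor: d_pos = 0
n = np.full(4, 5.0)   # d_neg = 10 > margin, so the loss is zero
print(triplet_loss(a, p, n))  # -> 0.0
```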
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
We simply finetune the GTE large v1.5 model found at Alibaba-NLP/gte-large-en-v1.5. Link to the paper: https://huggingface.co/papers/2308.03281