---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
language: en
license: apache-2.0
datasets:
  - s2orc
  - flax-sentence-embeddings/stackexchange_xml
  - MS_Marco
  - gooaq
  - yahoo_answers_topics
  - code_search_net
  - search_qa
  - eli5
  - snli
  - multi_nli
  - wikihow
  - natural_questions
  - trivia_qa
  - embedding-data/sentence-compression
  - embedding-data/flickr30k-captions
  - embedding-data/altlex
  - embedding-data/simple-wiki
  - embedding-data/QQP
  - embedding-data/SPECTER
  - embedding-data/PAQ_pairs
  - embedding-data/WikiAnswers
---

This is an ONNX export of [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1).

The export was done with Hugging Face Optimum:

```python
from optimum.exporters.onnx import main_export

# Export the model to ONNX with O1 graph optimizations applied.
main_export(
    "sentence-transformers/all-distilroberta-v1",
    "./output",
    cache_dir="./cache",
    optimize="O1",
)
```
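The same export can also be run from the command line, e.g. `optimum-cli export onnx --model sentence-transformers/all-distilroberta-v1 --optimize O1 ./output` (flags may differ between Optimum releases).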

Please note that this ONNX model does not include the mean pooling layer; pooling must be applied in code afterwards, or the embeddings will not be correct.

For example:

```python
import torch

# Mean pooling: average the token embeddings, taking the attention mask into
# account so that padding tokens do not contribute to the sentence embedding.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
```

See the "Usage (HuggingFace Transformers)" section of the original model card for a complete example.
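
For reference, here is a minimal end-to-end sketch using onnxruntime together with the `mean_pooling` function above. It assumes the export wrote `model.onnx` to `./output`; the paths are illustrative, and the normalization step mirrors the original sentence-transformers model:

```python
import torch
import onnxruntime as ort
from transformers import AutoTokenizer

# Tokenizer comes from the original model; the ONNX graph from the export above.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-distilroberta-v1")
session = ort.InferenceSession("./output/model.onnx", providers=["CPUExecutionProvider"])

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

# Run the ONNX graph; the first output holds the token embeddings.
outputs = session.run(None, dict(encoded))

# Apply mean pooling in code, as noted above, then L2-normalize
# (the original sentence-transformers model normalizes its embeddings).
embeddings = mean_pooling([torch.from_numpy(outputs[0])], torch.from_numpy(encoded["attention_mask"]))
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # e.g. torch.Size([2, 768])
```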