---
license: mit
language:
- en
- de
- multilingual
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
- paraphrase
datasets:
- stsb_multi_mt
metrics:
- Spearman’s rank correlation
- cosine similarity
---
|
|
|
# Cross English & German RoBERTa for Sentence Embeddings
|
This model is a copy of [`T-Systems-onsite/cross-en-de-roberta-sentence-transformer`](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) with slight changes to the tokenizer and the pooling layer.
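
## Usage

Below is a minimal usage sketch with the [sentence-transformers](https://www.sbert.net) library, showing how a cross-lingual English/German sentence pair can be embedded and compared with cosine similarity. The model id shown is a placeholder and should be replaced with this repository's actual Hugging Face id.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder model id -- replace with this repository's actual Hugging Face id.
model = SentenceTransformer("your-username/cross-en-de-roberta-sentence-transformer")

# Cross-lingual sentence pair (English / German).
sentences = [
    "This is an example sentence.",
    "Dies ist ein Beispielsatz.",
]

# Encode both sentences into dense vectors (mean pooling over token embeddings).
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the English and German embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```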