nreimers committed
Commit e427b28
Parent(s): 604ca80

Add new SentenceTransformer model.

Files changed:
- 1_Pooling/config.json +7 -0
- README.md +58 -52
- config.json +2 -2
- config_sentence_transformers.json +7 -0
- modules.json +14 -0
- pytorch_model.bin +2 -2
- sentence_bert_config.json +2 -1
- tokenizer.json +0 -0
- tokenizer_config.json +1 -1
1_Pooling/config.json
ADDED

```diff
@@ -0,0 +1,7 @@
+{
+  "word_embedding_dimension": 768,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": true,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false
+}
```
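This config enables only mean pooling over the 768-dimensional token embeddings. As a sketch (not part of this commit), the JSON keys map one-to-one onto the keyword arguments of `sentence_transformers.models.Pooling`:

```python
from sentence_transformers import models

# Mean pooling over DistilBERT's 768-dimensional token embeddings,
# mirroring the flags in 1_Pooling/config.json.
pooling = models.Pooling(
    word_embedding_dimension=768,
    pooling_mode_cls_token=False,
    pooling_mode_mean_tokens=True,
    pooling_mode_max_tokens=False,
    pooling_mode_mean_sqrt_len_tokens=False,
)
```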
README.md
CHANGED

````diff
@@ -1,22 +1,42 @@
+---
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+- transformers
+---
 
-This a
+# sentence-transformers/msmarco-distilbert-base-v3
 
-You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
+## Usage (Sentence-Transformers)
+
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v3')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+## Usage (HuggingFace Transformers)
+Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
-You can use the model directly from the model repository to compute sentence embeddings:
+
 ```python
 from transformers import AutoTokenizer, AutoModel
 import torch
@@ -26,68 +46,54 @@
 def mean_pooling(model_output, attention_mask):
     token_embeddings = model_output[0] #First element of model_output contains all token embeddings
     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-    return sum_embeddings / sum_mask
+    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
 
-tokenizer =
-model = AutoModel.from_pretrained("model_name")
-model_output = model(**encoded_input)
-query_embeddings = compute_embeddings(queries)
-passage_embeddings = compute_embeddings(passages)
+# Sentences we want sentence embeddings for
+sentences = ['This is an example sentence', 'Each sentence is converted']
+
+# Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3')
+model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3')
+
+# Tokenize sentences
+encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+# Compute token embeddings
+with torch.no_grad():
+    model_output = model(**encoded_input)
+
+# Perform pooling. In this case, mean pooling.
+sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+print("Sentence embeddings:")
+print(sentence_embeddings)
 ```
 
-## Usage (Sentence-Transformers)
-Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
-```
-pip install -U sentence-transformers
-```
-Then you can use the model like this:
-```python
-from sentence_transformers import SentenceTransformer
-model = SentenceTransformer('model_name')
-
-queries = ['What is the capital of France?', 'How many people live in New York City?']
-
-# Passages that provide answers
-passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
-
-query_embeddings = model.encode(queries)
-passage_embeddings = model.encode(passages)
-```
-
-If they received a low score by the cross-encoder, we saved them as hard negatives: They got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
+## Evaluation Results
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-v3)
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: DistilBertModel
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+)
+```
 
 ## Citing & Authors
+
+This model was trained by [sentence-transformers](https://www.sbert.net/).
+
 If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
-```
+```bibtex
 @inproceedings{reimers-2019-sentence-bert,
     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
     author = "Reimers, Nils and Gurevych, Iryna",
````
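This commit drops the old README's query/passage semantic-search example. Since this is an MS MARCO retrieval model, a minimal sketch of that workflow under the new model name (using `util.cos_sim`, available in recent sentence-transformers releases; older releases expose it as `util.pytorch_cos_sim`):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v3')

queries = ['What is the capital of France?']
passages = ['Paris is the capital of France',
            'New York City is the most populous city in the United States']

# Encode both sides into the same 768-dimensional space ...
query_embeddings = model.encode(queries, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# ... then rank passages by cosine similarity per query.
scores = util.cos_sim(query_embeddings, passage_embeddings)
print(scores)
```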
config.json
CHANGED

```diff
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/
+  "_name_or_path": "old_models/msmarco-distilbert-base-v3/0_Transformer",
   "activation": "gelu",
   "architectures": [
     "DistilBertModel"
@@ -18,6 +18,6 @@
   "seq_classif_dropout": 0.2,
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
-  "transformers_version": "4.
+  "transformers_version": "4.7.0",
   "vocab_size": 30522
 }
```
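As a hypothetical sanity check (not part of the commit), the transformer's hidden size in config.json should line up with the `word_embedding_dimension` declared in 1_Pooling/config.json; DistilBERT exposes it as `dim`:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3')
# DistilBertConfig stores the hidden size as `dim`; it must match the 768
# expected by the pooling module for the output shapes to agree.
assert cfg.dim == 768
```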
config_sentence_transformers.json
ADDED

```diff
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.0.0",
+    "transformers": "4.7.0",
+    "pytorch": "1.9.0+cu102"
+  }
+}
```
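This file records the library versions used when the model was exported. A hypothetical snippet (file name and keys taken from the diff above) that flags a mismatch against the installed sentence-transformers:

```python
import json
import sentence_transformers

with open('config_sentence_transformers.json') as f:
    saved = json.load(f)['__version__']

# The model was exported with sentence-transformers 2.0.0; warn if the
# installed version differs, since loading behavior can change across releases.
if sentence_transformers.__version__ != saved['sentence_transformers']:
    print(f"saved with {saved['sentence_transformers']}, "
          f"running {sentence_transformers.__version__}")
```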
modules.json
ADDED

```diff
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
```
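modules.json is the ordered module pipeline that sentence-transformers rebuilds at load time: module 0 is the transformer at the repository root (empty `path`), module 1 the pooling layer in 1_Pooling/. A sketch of the equivalent manual assembly (`modules=` is the real constructor argument; the values shown are taken from this repo's configs):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: DistilBERT, loaded from the repository root ("path": "").
word_embedding_model = models.Transformer(
    'sentence-transformers/msmarco-distilbert-base-v3', max_seq_length=510)

# Module 1: mean pooling, as configured in 1_Pooling/config.json.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True)

# Chaining the modules in idx order reproduces the saved SentenceTransformer.
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```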
pytorch_model.bin
CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7bd45d9ca79b39816515bc315181aee89a158244523459f48413fcc7f33bc7ff
+size 265486777
```
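Per the git-lfs spec, the `oid sha256:` value is the SHA-256 of the weight file itself, so a local download can be verified against it; a minimal sketch (the local path is an assumption):

```python
import hashlib

EXPECTED = '7bd45d9ca79b39816515bc315181aee89a158244523459f48413fcc7f33bc7ff'

# Hash the file in 1 MB chunks so the ~265 MB checkpoint never sits in memory at once.
h = hashlib.sha256()
with open('pytorch_model.bin', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        h.update(chunk)
assert h.hexdigest() == EXPECTED, 'pytorch_model.bin does not match the LFS oid'
```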
sentence_bert_config.json
CHANGED

```diff
@@ -1,3 +1,4 @@
 {
-  "max_seq_length":
+  "max_seq_length": 510,
+  "do_lower_case": false
 }
```
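`max_seq_length` caps how many wordpiece tokens the transformer sees per input: 510 here, leaving room for [CLS] and [SEP] within DistilBERT's 512-position limit. A small sketch of inspecting or overriding it at runtime (`model.max_seq_length` is a real SentenceTransformer attribute):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v3')
print(model.max_seq_length)  # 510, from sentence_bert_config.json

# Longer inputs are truncated; a shorter window trades context for speed.
model.max_seq_length = 256
```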
tokenizer.json
ADDED

The diff for this file is too large to render. See raw diff.
tokenizer_config.json
CHANGED

```diff
@@ -1 +1 @@
-{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "/
+{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "old_models/msmarco-distilbert-base-v3/0_Transformer", "special_tokens_map_file": "/home/ukp-reimers/.cache/torch/sentence_transformers/sbert.net_models_msmarco-distilbert-base-v2/0_Transformer/special_tokens_map.json", "do_basic_tokenize": true, "never_split": null}
```
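With `"do_lower_case": true`, the tokenizer folds input to lowercase before wordpiece splitting, matching the uncased DistilBERT vocabulary. A quick hypothetical check:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3')
# The uncased vocab means "Paris" and "paris" map to the same wordpieces.
assert tokenizer.tokenize('Paris') == tokenizer.tokenize('paris')
```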