---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT large model (cased) whole word masking

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference
between english and English.

Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.

The training is identical -- each masked WordPiece token is predicted independently.
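
To illustrate the difference from token-level masking, here is a minimal sketch (not the original training code) of how whole word masking groups WordPiece sub-tokens, using the `##` continuation prefix to decide which tokens belong to the same word:

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
tokens = tokenizer.tokenize("The weather forecast was unpredictable.")

# Group sub-token indices into whole words: a token starting with "##"
# continues the word started by the previous token.
words = []
for i, token in enumerate(tokens):
    if token.startswith("##") and words:
        words[-1].append(i)
    else:
        words.append([i])

# Pick ~15% of the *words* and mask every sub-token of each picked word,
# so partial words never survive masking.
random.seed(0)
num_to_mask = max(1, round(0.15 * len(words)))
masked = list(tokens)
for word in random.sample(words, num_to_mask):
    for i in word:
        masked[i] = "[MASK]"

print(tokens)
print(masked)
```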

Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
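
For example, a sentence classifier can be put on top of the pretrained weights with `BertForSequenceClassification`. The following is only a sketch of a single training step on a made-up two-example batch, not a full fine-tuning recipe:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertForSequenceClassification.from_pretrained(
    'bert-large-cased-whole-word-masking', num_labels=2
)

# Toy labeled batch, for illustration only.
texts = ["I loved this movie.", "This was a waste of time."]
labels = torch.tensor([1, 0])

encoded = tokenizer(texts, padding=True, return_tensors='pt')
outputs = model(**encoded, labels=labels)

# outputs.loss is the classification loss; backpropagating it fine-tunes
# both the new classification head and the pretrained encoder.
outputs.loss.backward()
```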

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
    {
        'sequence': "[CLS] hello i'm a fashion model. [SEP]",
        'score': 0.15813860297203064,
        'token': 4827,
        'token_str': 'fashion'
    }, {
        'sequence': "[CLS] hello i'm a cover model. [SEP]",
        'score': 0.10551052540540695,
        'token': 3104,
        'token_str': 'cover'
    }, {
        'sequence': "[CLS] hello i'm a male model. [SEP]",
        'score': 0.08340442180633545,
        'token': 3287,
        'token_str': 'male'
    }, {
        'sequence': "[CLS] hello i'm a super model. [SEP]",
        'score': 0.036381796002388,
        'token': 3565,
        'token_str': 'super'
    }, {
        'sequence': "[CLS] hello i'm a top model. [SEP]",
        'score': 0.03609578311443329,
        'token': 2327,
        'token_str': 'top'
    }
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
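
The checkpoint also includes the next sentence prediction head mentioned above. A minimal sketch of scoring whether one sentence plausibly follows another (the two sentences are made up for illustration):

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertForNextSentencePrediction.from_pretrained('bert-large-cased-whole-word-masking')

prompt = "The storm knocked out power across the city."
next_sentence = "Crews worked through the night to restore it."

encoded = tokenizer(prompt, next_sentence, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoded).logits

# Index 0 is the "sentence B follows sentence A" class, index 1 is "random sentence".
print(torch.softmax(logits, dim=-1))
```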

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
    {
        "sequence":"[CLS] the man worked as a waiter. [SEP]",
        "score":0.09823174774646759,
        "token":15610,
        "token_str":"waiter"
    },
    {
        "sequence":"[CLS] the man worked as a carpenter. [SEP]",
        "score":0.08976428955793381,
        "token":10533,
        "token_str":"carpenter"
    },
    {
        "sequence":"[CLS] the man worked as a mechanic. [SEP]",
        "score":0.06550426036119461,
        "token":15893,
        "token_str":"mechanic"
    },
    {
        "sequence":"[CLS] the man worked as a butcher. [SEP]",
        "score":0.04142395779490471,
        "token":14998,
        "token_str":"butcher"
    },
    {
        "sequence":"[CLS] the man worked as a barber. [SEP]",
        "score":0.03680137172341347,
        "token":13362,
        "token_str":"barber"
    }
]

>>> unmasker("The woman worked as a [MASK].")
[
    {
        "sequence":"[CLS] the woman worked as a waitress. [SEP]",
        "score":0.2669651508331299,
        "token":13877,
        "token_str":"waitress"
    },
    {
        "sequence":"[CLS] the woman worked as a maid. [SEP]",
        "score":0.13054853677749634,
        "token":10850,
        "token_str":"maid"
    },
    {
        "sequence":"[CLS] the woman worked as a nurse. [SEP]",
        "score":0.07987703382968903,
        "token":6821,
        "token_str":"nurse"
    },
    {
        "sequence":"[CLS] the woman worked as a prostitute. [SEP]",
        "score":0.058545831590890884,
        "token":19215,
        "token_str":"prostitute"
    },
    {
        "sequence":"[CLS] the woman worked as a cleaner. [SEP]",
        "score":0.03834161534905434,
        "token":20133,
        "token_str":"cleaner"
    }
]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
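
With the `transformers` tokenizer, passing the two segments together produces exactly this layout; a small sketch for illustration:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
encoded = tokenizer("Sentence A", "Sentence B")

# The special tokens frame the pair as [CLS] A [SEP] B [SEP], and
# token_type_ids marks segment A with 0 and segment B with 1.
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
print(encoded['token_type_ids'])
```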

The details of the masking procedure for each sentence are the following (a simplified sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
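
This sketch of the 15% / 80-10-10 rule is simplified (it samples per token rather than per whole word, and does not exclude special tokens), but it shows the mechanics:

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
mask_id = tokenizer.mask_token_id

def mask_tokens(input_ids, mlm_probability=0.15):
    """Apply the 15% / 80-10-10 masking rule to a list of token ids."""
    labels = [-100] * len(input_ids)  # -100 marks positions the model is not asked to predict
    corrupted = list(input_ids)
    for i, token_id in enumerate(input_ids):
        if random.random() < mlm_probability:
            labels[i] = token_id  # the model must recover the original token here
            roll = random.random()
            if roll < 0.8:
                corrupted[i] = mask_id  # 80%: replace with [MASK]
            elif roll < 0.9:
                corrupted[i] = random.randrange(tokenizer.vocab_size)  # 10%: random token
            # remaining 10%: keep the token unchanged
    return corrupted, labels

ids = tokenizer("The quick brown fox jumps over the lazy dog.")['input_ids']
print(mask_tokens(ids))
```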

### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
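
As a rough present-day equivalent of that optimizer configuration (not the original TPU training code; `AdamW` stands in for Adam with decoupled weight decay, and the schedule helper comes from the `transformers` library):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('bert-large-cased-whole-word-masking')

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)

# 10,000 warmup steps followed by a linear decay over one million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)

# In a training loop, call optimizer.step() and then scheduler.step() after each batch.
```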

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```