Updated README.md.
README.md CHANGED

````diff
@@ -20,8 +20,8 @@ This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the
 You can use this model for masked language modeling as follows:
 ```python
 from transformers import AutoTokenizer, AutoModelForMaskedLM
-tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-
-model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-
+tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
+model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese")
 
 sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
````
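The snippet in the diff stops after building `encoding`. A minimal sketch of how it might be continued to actually decode the `[MASK]` prediction, assuming the `nlp-waseda/roberta-large-japanese` checkpoint is available locally or downloadable (the mask lookup and decoding steps are illustrative, not part of the README):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese")

# Input must be pre-segmented into words by Juman++ (spaces between words),
# as noted in the README snippet.
sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。'
encoding = tokenizer(sentence, return_tensors='pt')

# Run the masked-LM head without tracking gradients.
with torch.no_grad():
    logits = model(**encoding).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
mask_index = (encoding.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
predicted_word = tokenizer.decode(predicted_id)
print(predicted_word)
```

The `torch.no_grad()` context and the `mask_token_id` lookup are standard `transformers` usage for fill-mask inference; the model's actual top prediction depends on the checkpoint, so no specific output is shown here.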