KoichiYasuoka committed
Commit 66e1d9c (1 parent: b10e7f6)
base_model
README.md CHANGED
@@ -5,6 +5,7 @@ tags:
 - "japanese"
 - "masked-lm"
 - "wikipedia"
+base_model: tohoku-nlp/bert-large-japanese-char
 license: "cc-by-sa-4.0"
 pipeline_tag: "fill-mask"
 mask_token: "[MASK]"
@@ -16,7 +17,7 @@ widget:
 
 ## Model Description
 
-This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/
+This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/tohoku-nlp/bert-large-japanese-char). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-large-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head), and so on.
 
 ## How to Use
 
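The "How to Use" section itself lies outside this hunk, but a minimal usage sketch consistent with the metadata in the diff (`pipeline_tag: "fill-mask"`, `mask_token: "[MASK]"`) might look as follows. The model ID is taken from the description above; the input sentence is purely illustrative.

```python
# Minimal sketch, assuming the Hugging Face transformers library is installed.
# The model ID comes from the card's description; the sentence is an
# illustrative placeholder, not the card's actual widget example.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="KoichiYasuoka/bert-large-japanese-char-extended",
)

# "[MASK]" matches the mask_token declared in the card metadata.
for candidate in unmasker("酸[MASK]と水素から水を作る。"):
    print(candidate["token_str"], candidate["score"])
```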