---
language:
  - ja
tags:
  - japanese
  - masked-lm
license: cc-by-sa-4.0
pipeline_tag: fill-mask
mask_token: '[MASK]'
widget:
  - text: 日本に着いたら[MASK]を訪ねなさい。
---

# roberta-base-japanese-aozora

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with the Japanese-LUW-Tokenizer. You can fine-tune roberta-base-japanese-aozora for downstream tasks such as POS-tagging and dependency-parsing.
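For instance, a POS-tagging fine-tune starts from a token-classification head on top of this checkpoint. The sketch below is illustrative only: the label set shown is hypothetical, and the actual labels and training loop depend on the downstream corpus.

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set for illustration only
labels = ["NOUN", "VERB", "ADP", "PUNCT"]

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-japanese-aozora",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The randomly initialized classification head can now be fine-tuned,
# e.g. with transformers.Trainer on a LUW-segmented POS-tagging corpus.
```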

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-LM model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
```
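
As a quick sanity check of the masked-LM head, the widget sentence above can be run through the standard `fill-mask` pipeline (a minimal sketch; the top predictions are printed):

```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-japanese-aozora")
# Predict the token hidden behind [MASK] in the widget example sentence
for candidate in fill_mask("日本に着いたら[MASK]を訪ねなさい。"):
    print(candidate["token_str"], candidate["score"])
```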

## Reference

Koichi Yasuoka: 「Transformersと国語研長単位による日本語係り受け解析モデルの製作」 [Building Japanese dependency-parsing models with Transformers and NINJAL Long Unit Words], IPSJ SIG Technical Reports, Vol.2022-CH-128, No.7 (February 2022), pp.1-8 (in Japanese).