|
--- |
|
language: ja |
|
license: cc-by-nc-sa-4.0 |
|
tags: |
|
- roberta |
|
- medical |
|
mask_token: "[MASK]" |
|
widget: |
|
- text: "この患者は[MASK]と診断された。" |
|
--- |
|
|
|
# alabnii/jmedroberta-base-sentencepiece |
|
|
|
## Model description |
|
|
|
This is a Japanese RoBERTa base model pre-trained on academic articles in the medical sciences collected by the Japan Science and Technology Agency (JST).
|
|
|
This model is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/deed) (CC BY-NC-SA 4.0).
|
|
|
## Datasets used for pre-training |
|
|
|
- abstracts (train: 1.6GB (10M sentences), validation: 0.2GB (1.3M sentences)) |
|
- abstracts & body texts (train: 0.2GB (1.4M sentences)) |
|
|
|
## How to use |
|
|
|
**Input text must be converted to full-width characters (全角) in advance.**
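The card does not prescribe a particular conversion tool. As one option, here is a minimal sketch using only the Python standard library (the input string is a hypothetical example):

```python
# Minimal full-width conversion sketch: map printable ASCII (U+0021–U+007E)
# to the corresponding full-width code points (U+FF01–U+FF5E) and the ASCII
# space to the ideographic space. Assumes only ASCII characters need converting.
HAN_TO_ZEN = str.maketrans(
    {chr(c): chr(c + 0xFEE0) for c in range(0x21, 0x7F)} | {" ": "\u3000"}
)

text = "WBC 9800, CRP 3.5"          # hypothetical half-width input
print(text.translate(HAN_TO_ZEN))   # ＷＢＣ　９８００，　ＣＲＰ　３．５
```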
|
|
|
You can use this model for masked language modeling as follows: |
|
```python |
|
from transformers import AutoModelForMaskedLM, AutoTokenizer |
|
|
|
model = AutoModelForMaskedLM.from_pretrained("alabnii/jmedroberta-base-sentencepiece") |
|
model.eval() |
|
tokenizer = AutoTokenizer.from_pretrained("alabnii/jmedroberta-base-sentencepiece") |
|
|
|
texts = ['この患者は[MASK]と診断された。'] |
|
inputs = tokenizer.batch_encode_plus(texts, return_tensors='pt') |
|
outputs = model(**inputs) |
|
tokenizer.convert_ids_to_tokens(outputs.logits[0][1:-1].argmax(axis=-1)) |
|
# ['▁この', '患者は', 'AML', '▁', 'と診断された', '。'] |
|
``` |
|
|
|
Alternatively, you can use the [fill-mask pipeline](https://huggingface.co/tasks/fill-mask).
|
|
|
```python |
|
from transformers import pipeline |
|
fill = pipeline("fill-mask", model="alabnii/jmedroberta-base-sentencepiece", top_k=10) |
|
fill("この患者は[MASK]と診断された。") |
|
#[{'score': 0.04239409416913986, |
|
# 'token': 7698, |
|
# 'token_str': 'AML', |
|
# 'sequence': 'この患者はAML と診断された。'}, |
|
# {'score': 0.03562006726861, |
|
# 'token': 3298, |
|
# 'token_str': 'SLE', |
|
# 'sequence': 'この患者はSLE と診断された。'}, |
|
# {'score': 0.025064188987016678, |
|
# 'token': 10303, |
|
# 'token_str': 'MDS', |
|
# 'sequence': 'この患者はMDS と診断された。'}, |
|
# ... |
|
``` |
|
|
|
You can fine-tune this model on downstream tasks. |
|
|
|
**See also the sample Colab notebook:** https://colab.research.google.com/drive/1BUD3DKOUMqcwIO3X5bYUOsR_wDzgOJcd?usp=sharing
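For illustration only, a minimal fine-tuning sketch for binary sequence classification might look as follows; the example texts, labels, and hyperparameters are hypothetical placeholders and not part of the original training setup:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "alabnii/jmedroberta-base-sentencepiece"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical data: full-width Japanese sentences with binary labels.
train_texts = ["この患者は糖尿病と診断された。", "検査値に異常は認められなかった。"]
train_labels = [1, 0]
encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=512)

class SimpleDataset(torch.utils.data.Dataset):
    """Wraps tokenized encodings and labels as a PyTorch dataset."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="outputs", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=SimpleDataset(encodings, train_labels),
)
trainer.train()
```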
|
|
|
## Tokenization |
|
|
|
Each sentence is segmented into tokens with [SentencePiece (Unigram)](https://huggingface.co/course/chapter6/7).
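For example, you can inspect how a sentence is segmented (the sentence is arbitrary, and the exact tokens depend on the learned vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alabnii/jmedroberta-base-sentencepiece")

# SentencePiece marks the beginning of a word with "▁", as in '▁この' above.
print(tokenizer.tokenize("この患者は敗血症と診断された。"))
```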
|
|
|
## Vocabulary |
|
|
|
The vocabulary consists of 30,000 tokens induced by [SentencePiece (Unigram)](https://huggingface.co/course/chapter6/7).
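You can check the size reported by the tokenizer (expected to be roughly 30,000; the exact figure may differ slightly depending on how special tokens are counted):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alabnii/jmedroberta-base-sentencepiece")
print(tokenizer.vocab_size)
```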
|
|
|
## Training procedure |
|
|
|
The following hyperparameters were used during pre-training: |
|
|
|
- learning_rate: 0.0001 |
|
- train_batch_size: 32 |
|
- eval_batch_size: 32 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 8 |
|
- total_train_batch_size: 256 |
|
- total_eval_batch_size: 256 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_steps: 20000 |
|
- training_steps: 2000000 |
|
- mixed_precision_training: Native AMP |
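For reference, the settings above roughly correspond to the following Hugging Face `TrainingArguments`. This is an illustrative reconstruction, not the actual pre-training script; the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
# Per-device batch size 32 × 8 GPUs gives the total batch size of 256.
args = TrainingArguments(
    output_dir="jmedroberta-pretraining",  # hypothetical placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=20_000,
    max_steps=2_000_000,
    fp16=True,  # native AMP mixed-precision training
)
```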
|
|
|
## Note: Why do we call our model RoBERTa, not BERT? |
|
|
|
As the config file suggests, our model is based on Hugging Face's `BertForMaskedLM` class. However, we consider our model to be **RoBERTa** for the following reasons:
|
|
|
- We trained only with sequences of the maximum length (512 tokens).
|
- We removed the next sentence prediction (NSP) training objective. |
|
- We introduced dynamic masking (changing the masking pattern in each training iteration). |
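A common way to implement dynamic masking with Hugging Face Transformers is `DataCollatorForLanguageModeling`, which re-samples the masked positions every time a batch is assembled. The snippet below only sketches this general technique and is not necessarily the exact pre-training setup; the masking probability of 0.15 is the conventional BERT/RoBERTa default, assumed here for illustration:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("alabnii/jmedroberta-base-sentencepiece")

# mlm_probability=0.15 is the conventional default, assumed for this sketch.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Each call masks a fresh random subset of tokens, so the masking pattern
# changes from one training iteration to the next.
batch = collator([tokenizer("この患者は貧血と診断された。")])
print(batch["input_ids"])
print(batch["labels"])  # -100 everywhere except the masked positions
```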
|
|
|
## Acknowledgements |
|
|
|
This work was supported by the Japan Science and Technology Agency (JST) AIP Trilateral AI Research (Grant Number: JPMJCR20G9) and the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) (Project ID: jh221004), in Japan.
|
In this research work, we used the "[mdx: a platform for the data-driven future](https://mdx.jp/)". |