---
license: mit
language:
- ja
tags:
- generated_from_trainer
- ja_qu_ad
- bert
datasets:
- SkelterLabsInc/JaQuAD
widget:
- text: どこへ出かけた?
  context: 2015年9月1日、私は横浜へ車で出かけました。映画を観た後に中華街まで電車で行き、昼ご飯は重慶飯店で中華フルコースを食べました。
model-index:
- name: xlm-roberta-base-finetuned-JaQuAD
  results: []
---
|
|
|
|
|
|
# xlm-roberta-base-finetuned-JaQuAD |
|
|
|
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD) dataset. |
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.7495 |
|
|
|
## Model description |
|
|
|
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned for extractive question answering on Japanese text: given a question and a context passage, it predicts the start and end of the answer span within the context.
|
|
|
## Intended uses

Use the model for extractive question answering on Japanese text, as in the example below:
|
|
|
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "thkkvui/xlm-roberta-base-finetuned-JaQuAD"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Context: "On September 1, 2015 I went to Yokohama by car. After watching a movie I took
# the train to Chinatown and had a full-course Chinese lunch at Jukei Hanten."
text = "2015年9月1日、私は横浜へ車で出かけました。映画を観た後に中華街まで電車で行き、昼ご飯は重慶飯店で中華フルコースを食べました。"

# Questions: "Where did I go?", "What was I doing before taking the train?",
# "What did I eat at Jukei Hanten?", "When did I go to Yokohama?"
questions = ["どこへ出かけた?", "電車に乗る前は何をしていた?", "重慶飯店で何を食べた?", "いつ横浜に出かけた?"]

for question in questions:
    # Encode the question/context pair (special tokens are added by default)
    inputs = tokenizer(question, text, return_tensors="pt")

    with torch.no_grad():
        output = model(**inputs)

    # Most likely start and end token positions of the answer span
    answer_start = torch.argmax(output.start_logits)
    answer_end = torch.argmax(output.end_logits)

    answer_tokens = inputs.input_ids[0, answer_start : answer_end + 1]
    answer = tokenizer.decode(answer_tokens)

    print(f"質問: {question} -> 回答: {answer}")  # 質問 = question, 回答 = answer
```
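
Alternatively, the checkpoint can be run through the `question-answering` pipeline, which handles tokenization and span decoding internally. A minimal sketch reusing the context above:

```python
from transformers import pipeline

# Build a QA pipeline from the fine-tuned checkpoint
qa = pipeline("question-answering", model="thkkvui/xlm-roberta-base-finetuned-JaQuAD")

result = qa(
    question="どこへ出かけた?",  # "Where did I go?"
    context="2015年9月1日、私は横浜へ車で出かけました。映画を観た後に中華街まで電車で行き、昼ご飯は重慶飯店で中華フルコースを食べました。",
)
print(result["answer"], result["score"])
```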
|
|
|
## Training and evaluation data |
|
|
|
The model was fine-tuned and evaluated on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD), a human-annotated, SQuAD-style extractive question answering dataset for Japanese published by Skelter Labs.
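
For reference, the dataset can be loaded with the 🤗 Datasets library. This is a minimal sketch; the split names are those exposed by the dataset itself and should be checked against its dataset card:

```python
from datasets import load_dataset

# Download JaQuAD from the Hugging Face Hub
dataset = load_dataset("SkelterLabsInc/JaQuAD")

print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # one question/context/answer example ("train" split assumed)
```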
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
|
- learning_rate: 6e-05 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 16 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_steps: 50 |
|
- num_epochs: 2 |
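
For illustration only, these values map onto `TrainingArguments` roughly as follows; the output directory and the evaluation strategy are assumptions, not taken from the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-JaQuAD",  # placeholder path (assumption)
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption: validation loss is reported per epoch below
)
```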
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8661        | 1.0   | 1985 | 0.8036          |
| 0.5348        | 2.0   | 3970 | 0.7495          |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.30.2 |
|
- PyTorch 2.0.1
|
- Datasets 2.13.1 |
|
- Tokenizers 0.13.3 |
|
|