# fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of xlm-roberta-large on the TYDI-QA-ID dataset (per the model name, the Indonesian split of TyDi QA; the auto-generated card did not record the dataset reference).
It achieves the following results on the evaluation set:
- Loss: 0.9154
- Exact Match: 67.4296
- F1: 80.7483
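
Since the card does not include a usage snippet, here is a minimal inference sketch with the Transformers `pipeline` API. The repository id is an assumption taken from the model name above; substitute the actual Hub path if it differs.

```python
from transformers import pipeline

# Assumption: the checkpoint is published under the model name in the title;
# replace with the actual Hub repository id if it differs.
model_id = "fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-with-freeze-LR-1e-05"

qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

result = qa(
    question="Siapa presiden pertama Indonesia?",  # "Who was Indonesia's first president?"
    context=(
        "Soekarno adalah presiden pertama Republik Indonesia, "
        "menjabat dari tahun 1945 hingga 1967."
    ),
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Soekarno'}
```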
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
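
The model name points at the Indonesian portion of TyDi QA. The following is a hedged sketch of how that slice could be loaded with the Datasets library; both the `secondary_task` (GoldP, SQuAD-style) config choice and the language filter are assumptions not confirmed by this card.

```python
from datasets import load_dataset

# Assumption: the data is the SQuAD-style "secondary_task" (GoldP) config of
# TyDi QA, restricted to Indonesian; this card does not confirm either detail.
tydiqa = load_dataset("tydiqa", "secondary_task")

# In the GoldP config, example ids are prefixed with the language name.
indonesian = tydiqa.filter(lambda ex: ex["id"].startswith("indonesian"))
print(indonesian)
```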
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
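
For reproduction, these hyperparameters map onto Transformers `TrainingArguments` roughly as follows. This is a minimal sketch: the output directory is a placeholder and the model/data wiring is omitted.

```python
from transformers import TrainingArguments

# Sketch mapping the listed hyperparameters onto TrainingArguments.
# Effective batch size: 8 (per device) * 16 (accumulation) = 128,
# matching "total_train_batch_size" above.
args = TrainingArguments(
    output_dir="outputs",              # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```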
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.331         | 0.5   | 19   | 3.7275          | 5.2817      | 16.4975 |
| 6.331         | 0.99  | 38   | 2.5293          | 22.3592     | 33.2570 |
| 3.6805        | 1.5   | 57   | 1.5504          | 45.4225     | 61.5302 |
| 3.6805        | 1.99  | 76   | 1.2025          | 57.2183     | 72.1651 |
| 3.6805        | 2.5   | 95   | 1.0664          | 61.0915     | 75.6496 |
| 1.3982        | 2.99  | 114  | 0.9926          | 63.2042     | 77.6464 |
| 1.3982        | 3.5   | 133  | 0.9823          | 64.6127     | 78.3848 |
| 0.9533        | 3.99  | 152  | 0.9596          | 66.1972     | 79.5651 |
| 0.9533        | 4.5   | 171  | 0.9578          | 67.4296     | 80.6710 |
| 0.9533        | 4.99  | 190  | 0.9376          | 68.3099     | 80.8025 |
| 0.7418        | 5.5   | 209  | 0.9393          | 67.4296     | 79.8821 |
| 0.7418        | 5.99  | 228  | 0.9242          | 67.4296     | 79.9318 |
| 0.7418        | 6.5   | 247  | 0.9154          | 67.4296     | 80.7483 |
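
Logging stops at epoch 6.5 (step 247), whose metrics match the evaluation results at the top of the card, even though `num_epochs` was 10; the card does not say whether early stopping was used. Exact Match and F1 here are the standard SQuAD-style span metrics; a minimal sketch of computing them via `datasets.load_metric` (available in the Datasets 2.2.0 listed below) follows. The prediction/reference pair is a toy placeholder, not this model's output.

```python
from datasets import load_metric

# SQuAD-style Exact Match / F1, as reported in the table above.
squad_metric = load_metric("squad")

# Toy placeholders; a real evaluation uses the model's predicted answer spans.
predictions = [{"id": "0", "prediction_text": "Soekarno"}]
references = [{"id": "0", "answers": {"text": ["Soekarno"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```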
### Framework versions
- Transformers 4.26.1
- PyTorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2