# t5-abs-1709-1203-lr-0.001-bs-5-maxep-20

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.7077
- Rouge/rouge1: 0.4712
- Rouge/rouge2: 0.2444
- Rouge/rougel: 0.4317
- Rouge/rougelsum: 0.4296
- Bertscore/bertscore-precision: 0.8996
- Bertscore/bertscore-recall: 0.8878
- Bertscore/bertscore-f1: 0.8935
- Meteor: 0.4182
- Gen Len: 34.4
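
The checkpoint can be loaded like any T5 summarization model. A minimal usage sketch (the repo id comes from this model's Hub path; the input text and generation length are placeholders, not settings documented in this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="roequitz/t5-abs-1709-1203-lr-0.001-bs-5-maxep-20",
)

text = "Replace this with the document you want to summarize."
# max_length is an assumption; evaluation summaries averaged ~34 tokens (Gen Len above).
print(summarizer(text, max_length=64)[0]["summary_text"])
```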
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
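
For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction from the list above, not the actual training script; `output_dir` and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-abs-1709-1203-lr-0.001-bs-5-maxep-20",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    gradient_accumulation_steps=2,   # total train batch size: 10
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    predict_with_generate=True,      # assumed; needed for generation metrics
)
```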
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Meteor | Gen Len |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6149 | 1.0 | 5 | 2.1360 | 0.4014 | 0.1897 | 0.3599 | 0.3612 | 0.9022 | 0.8759 | 0.8887 | 0.301 | 30.1 |
| 1.2655 | 2.0 | 10 | 2.2921 | 0.3685 | 0.1304 | 0.287 | 0.287 | 0.8906 | 0.8698 | 0.88 | 0.2945 | 33.6 |
| 0.9389 | 3.0 | 15 | 2.6407 | 0.3589 | 0.1386 | 0.2913 | 0.2897 | 0.9015 | 0.8678 | 0.8842 | 0.2428 | 28.4 |
| 0.682 | 4.0 | 20 | 2.6170 | 0.3937 | 0.1635 | 0.3304 | 0.3297 | 0.8883 | 0.8755 | 0.8817 | 0.3252 | 35.6 |
| 0.4262 | 5.0 | 25 | 2.8276 | 0.4044 | 0.1703 | 0.3351 | 0.336 | 0.8977 | 0.8695 | 0.8833 | 0.3152 | 30.1 |
| 0.3026 | 6.0 | 30 | 2.9646 | 0.4095 | 0.1441 | 0.3307 | 0.3303 | 0.8901 | 0.8751 | 0.8825 | 0.3465 | 37.2 |
| 0.2242 | 7.0 | 35 | 3.1059 | 0.315 | 0.1002 | 0.2401 | 0.241 | 0.8863 | 0.8581 | 0.8718 | 0.1934 | 27.8 |
| 0.1555 | 8.0 | 40 | 3.3335 | 0.3943 | 0.1309 | 0.3202 | 0.3204 | 0.8861 | 0.8734 | 0.8795 | 0.3233 | 37.2 |
| 0.1315 | 9.0 | 45 | 3.5025 | 0.3043 | 0.093 | 0.252 | 0.2516 | 0.8898 | 0.858 | 0.8735 | 0.2149 | 29.6 |
| 0.0833 | 10.0 | 50 | 3.5725 | 0.4178 | 0.1881 | 0.3651 | 0.3639 | 0.8995 | 0.8769 | 0.8879 | 0.343 | 32.0 |
| 0.0655 | 11.0 | 55 | 3.6433 | 0.402 | 0.1496 | 0.3186 | 0.3174 | 0.8916 | 0.8757 | 0.8834 | 0.3291 | 35.1 |
| 0.0606 | 12.0 | 60 | 3.6917 | 0.4295 | 0.1713 | 0.3477 | 0.3481 | 0.8921 | 0.8811 | 0.8864 | 0.3642 | 37.8 |
| 0.0506 | 13.0 | 65 | 3.6367 | 0.4689 | 0.1982 | 0.395 | 0.3939 | 0.8987 | 0.887 | 0.8927 | 0.4002 | 35.5 |
| 0.0353 | 14.0 | 70 | 3.6397 | 0.464 | 0.2221 | 0.4119 | 0.4124 | 0.9003 | 0.8843 | 0.8921 | 0.396 | 34.3 |
| 0.0278 | 15.0 | 75 | 3.6796 | 0.4493 | 0.2165 | 0.4081 | 0.4071 | 0.9034 | 0.888 | 0.8955 | 0.3911 | 34.1 |
| 0.0207 | 16.0 | 80 | 3.7035 | 0.4458 | 0.2146 | 0.3997 | 0.3996 | 0.8991 | 0.8835 | 0.891 | 0.3931 | 33.4 |
| 0.0251 | 17.0 | 85 | 3.7067 | 0.4716 | 0.2554 | 0.4354 | 0.434 | 0.9015 | 0.8875 | 0.8942 | 0.4291 | 33.6 |
| 0.0136 | 18.0 | 90 | 3.7040 | 0.4749 | 0.2615 | 0.437 | 0.4362 | 0.9029 | 0.8887 | 0.8956 | 0.4299 | 34.1 |
| 0.0189 | 19.0 | 95 | 3.7059 | 0.4712 | 0.2444 | 0.4317 | 0.4296 | 0.8996 | 0.8878 | 0.8935 | 0.4182 | 34.4 |
| 0.0133 | 20.0 | 100 | 3.7077 | 0.4712 | 0.2444 | 0.4317 | 0.4296 | 0.8996 | 0.8878 | 0.8935 | 0.4182 | 34.4 |
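
The ROUGE, BERTScore, and METEOR columns are standard summarization metrics available through the `evaluate` library. A sketch of how such scores are typically computed (the exact arguments used for this card are not documented, so treat these as assumptions):

```python
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
meteor = evaluate.load("meteor")

predictions = ["a generated summary"]    # placeholder
references = ["the reference summary"]   # placeholder

print(rouge.compute(predictions=predictions, references=references))
# lang="en" is an assumption about the evaluation data.
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
print(meteor.compute(predictions=predictions, references=references))
```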
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
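
To reproduce the reported numbers, it helps to match these versions; a quick sanity check:

```python
import datasets
import tokenizers
import torch
import transformers

# Expected: transformers 4.44.0, torch 2.4.0, datasets 2.21.0, tokenizers 0.19.1
for mod in (transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```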