---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-abs-2209-2245-lr-0.001-bs-10-maxep-20
  results: []
---
# t5-abs-2209-2245-lr-0.001-bs-10-maxep-20
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.6565
- Rouge/rouge1: 0.3034
- Rouge/rouge2: 0.116
- Rouge/rougel: 0.2608
- Rouge/rougelsum: 0.2609
- Bertscore/bertscore-precision: 0.789
- Bertscore/bertscore-recall: 0.8697
- Bertscore/bertscore-f1: 0.8251
- Meteor: 0.2559
- Gen Len: 60.4545
## Model description
More information needed
## Intended uses & limitations
More information needed
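
In lieu of documented usage guidance, here is a minimal inference sketch using the 🤗 Transformers `pipeline` API. The Hub repo id, input text, and generation cap are assumptions for illustration, not values recorded in this card (aside from the ~60-token average generation length reported above).

```python
from transformers import pipeline

# Assumed repo id: adjust to include the owning namespace on the Hub.
summarizer = pipeline(
    "summarization",
    model="t5-abs-2209-2245-lr-0.001-bs-10-maxep-20",
)

article = "Replace this placeholder with the document you want to summarize."

# Gen Len above averages ~60 tokens, so a cap around that length is reasonable.
summary = summarizer(article, max_new_tokens=64)[0]["summary_text"]
print(summary)
```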
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
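
As a sketch, the hyperparameters above roughly correspond to the following `Seq2SeqTrainingArguments` configuration (argument names as of Transformers 4.44). The `output_dir` is a placeholder, and this is a reconstruction from the list above, not the actual training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-abs-2209-2245-lr-0.001-bs-10-maxep-20",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 20
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    fp16=True,                      # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults.
)
```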
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.5977 | 0.9885 | 43 | 1.8385 | 0.4519 | 0.2025 | 0.3799 | 0.3812 | 0.8943 | 0.8922 | 0.8931 | 0.4053 | 37.9273 |
| 1.1494 | 2.0 | 87 | 1.7744 | 0.4543 | 0.2046 | 0.3787 | 0.3803 | 0.8964 | 0.8918 | 0.8939 | 0.3972 | 36.2 |
| 1.7877 | 2.9885 | 130 | 2.8310 | 0.3344 | 0.1165 | 0.2735 | 0.2749 | 0.8579 | 0.8794 | 0.8676 | 0.3013 | 46.5273 |
| 2.83 | 4.0 | 174 | 2.7267 | 0.3339 | 0.1241 | 0.2813 | 0.2823 | 0.8636 | 0.8795 | 0.8709 | 0.3166 | 47.9182 |
| 2.6683 | 4.9885 | 217 | 2.6435 | 0.3296 | 0.1231 | 0.2791 | 0.2795 | 0.8478 | 0.8781 | 0.8615 | 0.3045 | 50.5455 |
| 2.5709 | 6.0 | 261 | 2.6193 | 0.3149 | 0.1164 | 0.2677 | 0.2679 | 0.828 | 0.8742 | 0.8488 | 0.2887 | 53.7818 |
| 2.6182 | 6.9885 | 304 | 2.6497 | 0.3028 | 0.1144 | 0.259 | 0.2598 | 0.7963 | 0.8706 | 0.8297 | 0.2619 | 59.4727 |
| 2.5553 | 8.0 | 348 | 2.6565 | 0.2994 | 0.1138 | 0.2581 | 0.2585 | 0.7895 | 0.8685 | 0.8248 | 0.2509 | 59.8909 |
| 2.6243 | 8.9885 | 391 | 2.6565 | 0.3001 | 0.1148 | 0.2576 | 0.2581 | 0.7911 | 0.8691 | 0.826 | 0.2547 | 60.1182 |
| 2.6506 | 10.0 | 435 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.6527 | 10.9885 | 478 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.6511 | 12.0 | 522 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.612 | 12.9885 | 565 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.6429 | 14.0 | 609 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.6948 | 14.9885 | 652 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.5908 | 16.0 | 696 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.5995 | 16.9885 | 739 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.5835 | 18.0 | 783 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.6417 | 18.9885 | 826 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
| 2.5769 | 19.7701 | 860 | 2.6565 | 0.3034 | 0.116 | 0.2608 | 0.2609 | 0.789 | 0.8697 | 0.8251 | 0.2559 | 60.4545 |
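
The exact evaluation script is not part of this card, but the metric names above match the 🤗 `evaluate` library, so the numbers could plausibly be reproduced along these lines. Treat this as an assumption about the setup, and the prediction/reference strings as hypothetical placeholders.

```python
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
meteor = evaluate.load("meteor")

predictions = ["a model-generated summary goes here"]  # hypothetical
references = ["the corresponding reference summary"]   # hypothetical

print(rouge.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
print(meteor.compute(predictions=predictions, references=references))
```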
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1