
t5-small-finetuned-epoch15-finetuned-epoch30

This model is a fine-tuned version of Seungjun/t5-small-finetuned-epoch15 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4083
  • Rouge1: 31.0064
  • Rouge2: 19.0446
  • RougeL: 27.7086
  • RougeLsum: 29.5158
  • Gen Len: 18.9941

Model description

More information needed

Intended uses & limitations

More information needed
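Pending fuller documentation, the ROUGE metrics above suggest a summarization use case. The sketch below shows one plausible way to run inference; the hub repo id, the `summarize:` task prefix, and the generation length are assumptions inferred from the base model and the reported Gen Len, not facts confirmed by this card.

```python
# Minimal inference sketch. ASSUMPTIONS: the hub repo id mirrors the card
# title, and the model follows the T5 summarization convention.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Seungjun/t5-small-finetuned-epoch15-finetuned-epoch30"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5-family models are conventionally prompted with a task prefix.
text = "summarize: " + "Your long input document goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=20)  # Gen Len above is ~19 tokens
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```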

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
  • mixed_precision_training: Native AMP
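As referenced above, these values map onto Hugging Face Seq2SeqTrainingArguments roughly as follows. This is a reconstruction, not the author's script: the dataset variables are hypothetical placeholders, since the training data is undocumented here, and the Adam betas/epsilon listed above are the Trainer defaults.

```python
# Reproduction sketch from the listed hyperparameters (Transformers 4.26.1).
# `tokenized_train` / `tokenized_eval` are hypothetical preprocessed datasets.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_id = "Seungjun/t5-small-finetuned-epoch15"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-epoch15-finetuned-epoch30",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```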

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6224        | 1.0   | 765   | 1.4499          | 30.3772 | 18.075  | 26.941  | 28.8424   | 18.9915 |
| 1.586         | 2.0   | 1530  | 1.4403          | 30.4972 | 18.3407 | 27.1242 | 29.0417   | 18.9908 |
| 1.5684        | 3.0   | 2295  | 1.4323          | 30.6617 | 18.4827 | 27.2642 | 29.2175   | 18.9921 |
| 1.5622        | 4.0   | 3060  | 1.4300          | 30.7155 | 18.5604 | 27.3201 | 29.2191   | 18.9941 |
| 1.5447        | 5.0   | 3825  | 1.4229          | 30.7883 | 18.7051 | 27.379  | 29.2824   | 18.9941 |
| 1.5382        | 6.0   | 4590  | 1.4199          | 30.7555 | 18.7235 | 27.4249 | 29.2612   | 18.9941 |
| 1.5303        | 7.0   | 5355  | 1.4187          | 30.7818 | 18.773  | 27.4232 | 29.2896   | 18.9941 |
| 1.5225        | 8.0   | 6120  | 1.4149          | 30.8854 | 18.8302 | 27.5499 | 29.3993   | 18.9941 |
| 1.5197        | 9.0   | 6885  | 1.4143          | 30.9201 | 18.863  | 27.5918 | 29.4395   | 18.9941 |
| 1.5123        | 10.0  | 7650  | 1.4119          | 30.9469 | 18.9403 | 27.6186 | 29.4314   | 18.9941 |
| 1.5209        | 11.0  | 8415  | 1.4107          | 30.9685 | 18.9431 | 27.6189 | 29.4673   | 18.9941 |
| 1.5091        | 12.0  | 9180  | 1.4095          | 30.9249 | 18.9679 | 27.6257 | 29.4341   | 18.9941 |
| 1.4998        | 13.0  | 9945  | 1.4091          | 30.9911 | 19.0416 | 27.695  | 29.4991   | 18.9941 |
| 1.505         | 14.0  | 10710 | 1.4085          | 30.9942 | 19.0321 | 27.6999 | 29.5025   | 18.9941 |
| 1.4965        | 15.0  | 11475 | 1.4083          | 31.0064 | 19.0446 | 27.7086 | 29.5158   | 18.9941 |
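For reference, per-epoch ROUGE scores like those in the table are typically produced with the `evaluate` library's rouge metric. This is a generic sketch of that computation, not the card author's actual evaluation code; the example strings are placeholders.

```python
# Generic ROUGE computation sketch using the `evaluate` library.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]       # model-generated summaries
references = ["a cat was sitting on the mat"]  # reference summaries
scores = rouge.compute(predictions=predictions, references=references)
# Returned keys rouge1, rouge2, rougeL, rougeLsum are F-scores in [0, 1];
# the table above reports them scaled to 0-100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```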

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Tokenizers 0.13.2