
t5-small-train

This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2669
  • Rouge1: 43.2372
  • Rouge2: 21.6755
  • RougeL: 38.1637
  • RougeLsum: 38.5444

Model description

More information needed

Intended uses & limitations

More information needed
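
The ROUGE metrics above suggest this checkpoint was fine-tuned for summarization. A minimal inference sketch, assuming the checkpoint is available under the (hypothetical) identifier `t5-small-train`; substitute the actual Hub repository ID or local path:

```python
from transformers import pipeline

# Hypothetical model identifier; replace with the actual Hub repo ID or local directory.
summarizer = pipeline("summarization", model="t5-small-train")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris. Its base is square, measuring 125 metres on each side."
)

# T5 checkpoints often rely on a "summarize: " prefix; the pipeline applies the prefix from the
# model config's task-specific params when present, so add it manually if output looks off.
print(summarizer(article, max_length=64, min_length=8, do_sample=False)[0]["summary_text"])
```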

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
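
A sketch of how these values map onto `Seq2SeqTrainingArguments` in Transformers 4.18; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions, and the t5-small model, tokenizer, data collator, and the (unknown) tokenized datasets would still need to be supplied to `Seq2SeqTrainer`:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir and evaluation_strategy are assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-train",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",  # assumption: the per-epoch results below imply epoch-level eval
    predict_with_generate=True,   # assumption: needed to generate summaries for ROUGE scoring
)

# These arguments would be passed to Seq2SeqTrainer together with the t5-small model,
# its tokenizer, a DataCollatorForSeq2Seq, and the tokenized train/eval datasets.
```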

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2032        | 1.0   | 45   | 2.6305          | 34.393  | 15.4821 | 30.3601 | 30.5865   |
| 2.6291        | 2.0   | 90   | 2.4169          | 38.2327 | 18.4622 | 34.2887 | 34.3385   |
| 2.4294        | 3.0   | 135  | 2.3395          | 40.4405 | 19.927  | 36.559  | 36.8095   |
| 2.3191        | 4.0   | 180  | 2.3059          | 41.4214 | 20.4534 | 36.6399 | 36.9088   |
| 2.2949        | 5.0   | 225  | 2.2857          | 42.6906 | 21.1492 | 37.5557 | 37.8722   |
| 2.2591        | 6.0   | 270  | 2.2762          | 43.1598 | 21.6179 | 38.1235 | 38.5053   |
| 2.1722        | 7.0   | 315  | 2.2680          | 43.4447 | 21.8048 | 38.4077 | 38.7384   |
| 2.1993        | 8.0   | 360  | 2.2669          | 43.2372 | 21.6755 | 38.1637 | 38.5444   |
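
The ROUGE values appear to be mid F-measures scaled to 0–100, which matches what the standard `rouge` metric in Datasets 2.1.0 reports. A minimal sketch of that computation on toy strings (the actual predictions and references come from the unknown evaluation set):

```python
from datasets import load_metric

rouge = load_metric("rouge")  # requires the rouge_score package

# Toy prediction/reference pair for illustration only.
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

scores = rouge.compute(predictions=predictions, references=references)
# Each entry is an AggregateScore; the numbers in the table correspond to mid F1 * 100.
print({name: round(agg.mid.fmeasure * 100, 4) for name, agg in scores.items()})
```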

Framework versions

  • Transformers 4.18.0
  • PyTorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1