
summeraiztion_t5base_en_to_kjven

This model is a fine-tuned version of t5-base; the training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.7702
  • Bleu: 23.612
  • Gen Len: 18.1576
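The BLEU figure above summarizes n-gram overlap between generated and reference text. As a rough illustration only, here is a minimal single-reference BLEU-4 sketch in plain Python; the actual evaluation very likely used a library implementation (e.g. sacrebleu via the trainer's metric hook), which additionally handles tokenization, smoothing, and corpus-level aggregation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Simplified single-sentence, single-reference BLEU-4 (0-100 scale)."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped n-gram matches
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 0)
        if total == 0 or overlap == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        precisions.append(overlap / total)
    # brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 100, and any missing n-gram order drops the score sharply, which is why real evaluations apply smoothing for short sentences.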

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
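The settings above imply roughly 28,600 optimizer steps (10 epochs x 2,860 steps per epoch, per the results table below). As a sketch of what the linear scheduler does, here is a minimal pure-Python version; `warmup_steps` is an assumption, since the card does not report a warmup value:

```python
def linear_schedule_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linear LR schedule: optional warmup to base_lr, then linear decay to 0.

    warmup_steps defaults to 0 here because the card does not state one.
    """
    if step < warmup_steps:
        # linear warmup from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr down to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# 10 epochs x 2,860 optimizer steps per epoch, taken from the results table
TOTAL_STEPS = 10 * 2860
```

At step 0 this returns the full `learning_rate` of 2e-05 and it reaches 0 at the final step, matching the `linear` scheduler type listed above.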

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.0705        | 1.0   | 2860  | 0.9540          | 21.4263 | 18.131  |
| 0.9753        | 2.0   | 5720  | 0.8850          | 22.278  | 18.1371 |
| 0.9191        | 3.0   | 8580  | 0.8482          | 22.6985 | 18.1433 |
| 0.8845        | 4.0   | 11440 | 0.8207          | 23.0513 | 18.146  |
| 0.8654        | 5.0   | 14300 | 0.8015          | 23.2476 | 18.1499 |
| 0.8443        | 6.0   | 17160 | 0.7891          | 23.4193 | 18.1525 |
| 0.8175        | 7.0   | 20020 | 0.7820          | 23.5084 | 18.1548 |
| 0.8192        | 8.0   | 22880 | 0.7741          | 23.538  | 18.1576 |
| 0.8077        | 9.0   | 25740 | 0.7712          | 23.5967 | 18.1572 |
| 0.8096        | 10.0  | 28600 | 0.7702          | 23.612  | 18.1576 |

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3