
flan-t5-small-samsum

This model is a fine-tuned version of google/flan-t5-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6754
  • Rouge1: 42.6693
  • Rouge2: 18.3378
  • RougeL: 35.2729
  • RougeLsum: 38.9033
  • Gen Len: 16.8474
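
To try the checkpoint on a dialogue, it can be loaded through the Transformers summarization pipeline. The snippet below is a minimal sketch; the example dialogue is invented for illustration and is not taken from the samsum dataset:

```python
from transformers import pipeline

# Load this checkpoint via the summarization pipeline
summarizer = pipeline("summarization", model="mrupar/flan-t5-small-samsum")

# Invented SAMSum-style dialogue, for illustration only
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```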

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 52
  • eval_batch_size: 52
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
  • mixed_precision_training: Native AMP
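
The training script itself is not part of this card. As a rough sketch, the hyperparameters above correspond to a Seq2SeqTrainingArguments configuration along the following lines (argument names are from Transformers 4.36; output_dir and everything not listed above are hypothetical):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the arguments implied by the list above;
# the actual training script is not included in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-samsum",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=52,
    per_device_eval_batch_size=52,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                   # "Native AMP" mixed-precision training
    predict_with_generate=True,  # required to compute ROUGE during evaluation
)
```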

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824        | 0.35  | 100  | 1.7015          | 42.4703 | 18.3068 | 35.1199 | 38.8083   | 16.6532 |
| 1.8578        | 0.7   | 200  | 1.6878          | 42.0064 | 18.2236 | 34.9497 | 38.4611   | 16.7216 |
| 1.835         | 1.06  | 300  | 1.6823          | 42.7407 | 18.5955 | 35.4344 | 38.9663   | 16.9048 |
| 1.8144        | 1.41  | 400  | 1.6786          | 42.6272 | 18.3894 | 35.34   | 38.8868   | 16.6618 |
| 1.8094        | 1.76  | 500  | 1.6754          | 42.6693 | 18.3378 | 35.2729 | 38.9033   | 16.8474 |
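
The ROUGE scores in the table can be computed for any prediction/reference pair with the Hugging Face evaluate library; note that evaluate is not listed under the framework versions below, so its use here is an assumption. A minimal sketch:

```python
import evaluate

# Assumes `evaluate` and `rouge_score` are installed
# (pip install evaluate rouge_score); neither is listed in this card.
rouge = evaluate.load("rouge")

# Invented example pair, for illustration only
result = rouge.compute(
    predictions=["Anna and Ben will meet for lunch at 12:30 tomorrow."],
    references=["Anna and Ben confirm they are meeting for lunch at 12:30."],
)
print(result)  # rouge1 / rouge2 / rougeL / rougeLsum scores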

Framework versions

  • Transformers 4.36.0
  • Pytorch 2.0.0
  • Datasets 2.15.0
  • Tokenizers 0.15.0