
flan-t5-small-samsum

This model is a fine-tuned version of google/flan-t5-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6335
  • Rouge1: 43.8171
  • Rouge2: 19.6313
  • Rougel: 36.3793
  • Rougelsum: 39.8169
  • Gen Len: 16.7924

Model description

This model is google/flan-t5-small (a ~77M-parameter encoder-decoder model from the FLAN-T5 family) fine-tuned for abstractive dialogue summarization: given a short, messenger-style conversation, it generates a concise third-person summary.
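
As a quick illustration, the model can be loaded through the transformers summarization pipeline. This is a minimal sketch, and the example dialogue is invented for illustration rather than taken from SAMSum:

```python
# Minimal usage sketch; assumes `transformers` (and a backend such as PyTorch)
# is installed. The dialogue below is invented for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="jruranski/flan-t5-small-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)

print(summarizer(dialogue, max_new_tokens=60)[0]["summary_text"])
```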

Intended uses & limitations

The model is intended for summarizing short, English, chat-style dialogues similar to those in SAMSum. It has not been evaluated on other domains or languages, and as a small generative model it can omit or misstate details, so outputs should be checked before downstream use.

Training and evaluation data

The model was fine-tuned and evaluated on the samsum dataset, a corpus of messenger-like English conversations paired with human-written summaries. The metrics above were computed on its evaluation split.
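
A minimal sketch of loading the dataset with the datasets library (SAMSum is distributed as a 7z archive, so py7zr must also be installed):

```python
# Minimal sketch; `pip install datasets py7zr` is assumed.
from datasets import load_dataset

dataset = load_dataset("samsum")

example = dataset["train"][0]
print(example["dialogue"])  # messenger-style conversation
print(example["summary"])   # human-written reference summary
```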

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
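
The original training script is not included in this card; the following is a hedged reconstruction of the configuration above using Seq2SeqTrainingArguments. The output_dir and the evaluation/generation settings are assumptions, not taken from the card:

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the listed hyperparameters. The Adam betas and
# epsilon are the Transformers defaults, matching the values above.
# `output_dir`, `evaluation_strategy`, and `predict_with_generate` are
# assumptions: the per-epoch results table below suggests evaluation ran once
# per epoch, and generation is needed to report ROUGE and Gen Len.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-samsum",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
```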

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8193        | 1.0   | 1842 | 1.6613          | 42.6528 | 18.8812 | 35.4634 | 38.9086   | 16.8669 |
| 1.7355        | 2.0   | 3684 | 1.6374          | 43.2587 | 18.9454 | 35.8785 | 39.2731   | 16.7937 |
| 1.6946        | 3.0   | 5526 | 1.6364          | 43.3101 | 18.9886 | 35.9659 | 39.2743   | 16.7973 |
| 1.6654        | 4.0   | 7368 | 1.6341          | 43.7224 | 19.3408 | 36.1299 | 39.703    | 16.8376 |
| 1.6372        | 5.0   | 9210 | 1.6335          | 43.8171 | 19.6313 | 36.3793 | 39.8169   | 16.7924 |
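
Scores like those above can be computed with the evaluate library's ROUGE metric. A minimal sketch with invented texts (note that the card reports the scores multiplied by 100):

```python
# Minimal sketch; `pip install evaluate rouge_score` is assumed.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Anna and Ben will meet for lunch at 12:30."],
    references=["Anna and Ben are meeting for lunch tomorrow at 12:30."],
)
# Keys: rouge1, rouge2, rougeL, rougeLsum, as fractions in [0, 1];
# multiply by 100 to match the table above.
print(scores)
```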

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0