
pegasus-large-finetuned-rahulver-summarization-pegasus-model

This model is a fine-tuned version of google/pegasus-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0906
  • Rouge1: 61.2393
  • Rouge2: 43.8277
  • RougeL: 50.0054
  • RougeLsum: 57.4674
  • Gen Len: 114.6
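
The checkpoint can be loaded like any PEGASUS summarization model. Below is a minimal inference sketch; note the assumptions: the repo id is taken from this card's title without a Hub namespace, and the generation settings are illustrative, not necessarily those used for the evaluation above.

```python
from transformers import pipeline

# Assumption: repo id taken from this card's title; prepend the owner's
# namespace ("<user>/...") when loading from the Hugging Face Hub.
summarizer = pipeline(
    "summarization",
    model="pegasus-large-finetuned-rahulver-summarization-pegasus-model",
)

article = (
    "PEGASUS is a Transformer encoder-decoder pre-trained with a "
    "gap-sentence generation objective, which makes it well suited to "
    "abstractive summarization after fine-tuning."
)

# max_length is illustrative; evaluation above averaged ~115 generated tokens.
result = summarizer(article, max_length=128, min_length=16, do_sample=False)
print(result[0]["summary_text"])
```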

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
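
A sketch of how a run with these hyperparameters could be reproduced with Seq2SeqTrainer. Assumptions are flagged in the comments: the actual training data, column names, sequence lengths, and evaluation schedule are not documented in this card, so the toy dataset and the epoch-level evaluation below are stand-ins.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large")

# Stand-in data: the real training/evaluation sets are not documented here.
raw = Dataset.from_dict({
    "document": ["A long source text that should be summarized ..."],
    "summary": ["A short reference summary."],
})

def preprocess(batch):
    # max_length values are illustrative, not taken from this card.
    inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-large-finetuned-summarization",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    predict_with_generate=True,   # required so ROUGE can score generated summaries
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # stand-in; use a held-out split in practice
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```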

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.3648        | 1.0   | 140  | 0.7201          | 50.0081 | 32.6454 | 39.3021 | 45.1602   | 125.7333 |
| 0.8502        | 2.0   | 280  | 0.6067          | 57.8678 | 41.5251 | 46.0694 | 54.1055   | 128.3333 |
| 0.5053        | 3.0   | 420  | 0.6642          | 58.3644 | 41.8619 | 47.6199 | 54.1639   | 108.9667 |
| 0.3469        | 4.0   | 560  | 0.7318          | 61.8988 | 45.7303 | 51.1928 | 57.9306   | 123.1667 |
| 0.2779        | 5.0   | 700  | 0.7274          | 62.9354 | 46.5    | 51.6431 | 59.2443   | 99.6333  |
| 0.2124        | 6.0   | 840  | 0.8618          | 63.8552 | 48.3846 | 53.3804 | 60.2718   | 111.2333 |
| 0.1864        | 7.0   | 980  | 1.0058          | 59.5675 | 42.4324 | 48.462  | 55.3498   | 108.4667 |
| 0.1691        | 8.0   | 1120 | 0.9984          | 60.1063 | 43.6022 | 49.7163 | 56.9865   | 130.2    |
| 0.1603        | 9.0   | 1260 | 1.0062          | 61.398  | 44.4507 | 50.2044 | 57.4447   | 99.0333  |
| 0.1674        | 10.0  | 1400 | 1.0906          | 61.2393 | 43.8277 | 50.0054 | 57.4674   | 114.6    |
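
The ROUGE columns are F-measures on a 0-100 scale. A hedged sketch of how such scores are commonly computed with the evaluate library; this is the usual recipe for cards like this one, not necessarily the exact script used for these numbers.

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# use_stemmer=True matches the common summarization setup; whether this
# run used it is not documented in the card.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # F-measures in [0, 1]; the table above reports values scaled by 100
```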

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.13.0+cu116
  • Datasets 2.7.1
  • Tokenizers 0.13.2