ssr-base-finetuned-samsum-en

This model is a fine-tuned version of microsoft/ssr-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6231
  • Rouge1: 46.7505
  • Rouge2: 22.3968
  • RougeL: 37.1784
  • RougeLsum: 42.891
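
Since the card describes a dialogue-summarization model fine-tuned on samsum, here is a minimal usage sketch with the transformers pipeline API. It assumes the checkpoint loads as a standard seq2seq summarization model; the example dialogue is invented for illustration.

```python
# Minimal usage sketch (assumes the checkpoint loads as a standard
# seq2seq summarization model; the dialogue below is invented).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="santiviquez/ssr-base-finetuned-samsum-en",
)

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)

print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```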

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
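
The card does not detail the training and evaluation data beyond naming the samsum dialogue-summarization dataset above. As an illustrative sketch, the dataset can be loaded with the datasets library; the split and field names follow the standard samsum configuration and are an assumption of this example.

```python
# Illustrative sketch: load the samsum dataset with the datasets library.
# Split names (train/validation/test) and field names (dialogue/summary)
# follow the standard samsum configuration and are assumptions here.
# The samsum loader may additionally require the py7zr package.
from datasets import load_dataset

samsum = load_dataset("samsum")
print(samsum)  # DatasetDict with train/validation/test splits
print(samsum["train"][0]["dialogue"])
print(samsum["train"][0]["summary"])
```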

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 10
  • eval_batch_size: 10
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
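
For reference, here is a minimal sketch of how these hyperparameters might map onto transformers Seq2SeqTrainingArguments. The output directory and any arguments not listed above are assumptions, not part of the original training setup.

```python
# Sketch only: maps the hyperparameters listed above onto
# Seq2SeqTrainingArguments. output_dir and any argument not listed
# in the card are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ssr-base-finetuned-samsum-en",  # assumed
    learning_rate=5.6e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumed; matches the per-epoch results below
    predict_with_generate=True,   # assumed; needed to compute ROUGE during eval
)
```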

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.9682        | 1.0   | 300  | 1.6432          | 44.2182 | 20.8486 | 35.0914 | 40.9852   |
| 1.6475        | 2.0   | 600  | 1.5946          | 45.3919 | 21.6955 | 36.2411 | 41.8532   |
| 1.5121        | 3.0   | 900  | 1.5737          | 46.1769 | 22.4178 | 36.9762 | 42.6614   |
| 1.4112        | 4.0   | 1200 | 1.5774          | 46.6047 | 22.8227 | 37.2457 | 43.1935   |
| 1.323         | 5.0   | 1500 | 1.5825          | 46.6162 | 22.485  | 37.2846 | 42.9834   |
| 1.2613        | 6.0   | 1800 | 1.5883          | 46.4253 | 22.1199 | 37.0491 | 42.5189   |
| 1.2077        | 7.0   | 2100 | 1.5965          | 46.485  | 22.3636 | 37.2677 | 42.7499   |
| 1.1697        | 8.0   | 2400 | 1.6174          | 46.8654 | 22.6291 | 37.4201 | 43.0875   |
| 1.1367        | 9.0   | 2700 | 1.6188          | 46.707  | 22.305  | 37.156  | 42.9087   |
| 1.118         | 10.0  | 3000 | 1.6231          | 46.7505 | 22.3968 | 37.1784 | 42.891    |
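
The card does not state how the ROUGE scores above were computed. As a hedged sketch, scores of this kind can be produced with the Hugging Face evaluate library; the prediction and reference texts below are invented for illustration.

```python
# Sketch only: compute ROUGE scores like those in the results table.
# The card does not state the exact metric implementation; this example
# uses the Hugging Face `evaluate` library, and the texts are invented.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["Anna and Ben will meet for lunch at 12:30."]
references = ["Anna and Ben confirm lunch tomorrow at 12:30."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```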

Framework versions

  • Transformers 4.19.2
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1