
pegasus-x-large-book_synthsumm

Fine-tuned on a synthetic dataset of curated long-context text paired with GPT-3.5-turbo-1106 summaries, spanning multiple domains, plus "random" long-context examples drawn from pretraining datasets.

Try it: Gradio demo | example outputs .md (gauntlet) | code for the free HF Inference API
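
For reference, a serverless Inference API call generally looks like the sketch below, assuming the model is currently deployed there; the endpoint pattern and response shape follow the standard summarization pipeline, and the token is a placeholder:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/pszemraj/pegasus-x-large-book_synthsumm"
headers = {"Authorization": "Bearer hf_..."}  # placeholder: your HF access token

def summarize(text: str) -> str:
    """Send text to the serverless Inference API and return the summary."""
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    response.raise_for_status()
    # summarization pipelines return a list like [{"summary_text": "..."}]
    return response.json()[0]["summary_text"]

print(summarize("put the text you don't want to read here"))
```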

Usage

Beam search decoding is recommended with this model. If interested, you can also use the textsum utility package, which abstracts most of this away for you:

```sh
pip install -U textsum
```

```python
from textsum.summarize import Summarizer

model_name = "pszemraj/pegasus-x-large-book_synthsumm"
summarizer = Summarizer(model_name)  # GPU is auto-detected if available
text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```
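
Alternatively, you can call transformers directly and enable beam search yourself. A minimal sketch, assuming the standard AutoTokenizer/AutoModelForSeq2SeqLM API; the generation settings (num_beams=4, max_new_tokens=512, no_repeat_ngram_size=3) are illustrative assumptions, not tuned values:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "pszemraj/pegasus-x-large-book_synthsumm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)

text = "put the text you don't want to read here"
# truncate at the tokenizer's model_max_length to stay within the context window
inputs = tokenizer(text, return_tensors="pt", truncation=True).to(device)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        num_beams=4,             # beam search, as recommended above
        max_new_tokens=512,      # illustrative cap on summary length
        no_repeat_ngram_size=3,  # illustrative anti-repetition setting
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```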

Details

This model is a fine-tuned version of pszemraj/pegasus-x-large-book-summary on the synthetic dataset described above. It achieves the following results on the evaluation set:

  • Loss: 1.5481
  • Rouge1: 48.141
  • Rouge2: 19.1137
  • Rougel: 33.647
  • Rougelsum: 42.1211
  • Gen Len: 73.9846
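
These metric names follow the Hugging Face evaluate library's ROUGE implementation. If you want to score your own outputs the same way, a minimal sketch (the prediction/reference strings are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a generated summary"],   # placeholder model output
    references=["the reference summary"],  # placeholder gold summary
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```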

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto transformers' Seq2SeqTrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 5309
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: inverse_sqrt
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 2.0
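
A minimal sketch of these settings expressed as Seq2SeqTrainingArguments; the output_dir is a placeholder, and the Adam betas/epsilon listed above match the transformers defaults, so they are not set explicitly:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./pegasus-x-synthsumm",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=5309,
    gradient_accumulation_steps=8,   # effective train batch size: 1 * 8 = 8
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.03,
    num_train_epochs=2.0,
    predict_with_generate=True,      # assumption: needed for ROUGE during eval
)
```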

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| 1.7369 | 0.38 | 125 | 1.7140 | 43.0265 | 15.8613 | 30.5774 | 38.2507 | 77.0462 |
| 1.7736 | 0.77 | 250 | 1.6361 | 43.0209 | 15.2384 | 29.7678 | 37.4955 | 67.6 |
| 1.4251 | 1.15 | 375 | 1.5931 | 46.2138 | 17.5559 | 33.0091 | 41.0385 | 74.1077 |
| 1.2706 | 1.54 | 500 | 1.5635 | 44.6382 | 16.5917 | 30.7551 | 39.8466 | 71.7231 |
| 1.4844 | 1.92 | 625 | 1.5481 | 48.141 | 19.1137 | 33.647 | 42.1211 | 73.9846 |

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.1.0
  • Datasets 2.15.0
  • Tokenizers 0.15.0