metadata
language:
- English
tags:
- summarization
datasets:
- XSUM
- Gigaword
metrics:
- Rouge
Pegasus XSUM Gigaword
Model description
The Pegasus XSUM model fine-tuned on the Gigaword summarization task.
Intended uses & limitations
Produces short summaries while retaining the coherence of the XSUM model.
How to use
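A minimal usage sketch with the transformers library is shown below. The repository ID is a placeholder, since the card does not state the published model name; substitute the actual Hub ID, and the input text is illustrative only.

```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

# Placeholder Hub ID -- replace with the actual repository name of this model.
model_name = "<your-username>/pegasus-xsum-gigaword"

tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and is the tallest structure in Paris."
)

# Tokenize, generate a short summary, and decode it back to text.
inputs = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**inputs)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```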
Limitations and bias
Retains the biases common to abstractive summarization models, but appears somewhat less prone to hallucination.
Training data
Initialized from the pegasus-xsum checkpoint and fine-tuned on the Gigaword corpus.
Training procedure
Trained for 11,500 iterations on the Gigaword corpus using the out-of-the-box Hugging Face seq2seq fine-tuning setup with default parameters.
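The exact script is not specified on this card; assuming the stock Hugging Face Seq2SeqTrainer setup with default hyperparameters, the fine-tuning might look roughly like the sketch below (the dataset column names "document"/"summary" and the output directory are assumptions).

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Start from the pegasus-xsum checkpoint, as described under "Training data".
model_name = "google/pegasus-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Gigaword as hosted on the Hugging Face Hub ("document" -> "summary" pairs).
raw = load_dataset("gigaword")

def preprocess(batch):
    inputs = tokenizer(batch["document"], truncation=True)
    labels = tokenizer(text_target=batch["summary"], truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-xsum-gigaword",  # assumed output directory
    max_steps=11500,                     # matches the iteration count reported above
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)

trainer.train()
```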
Eval results
Evaluated on the Gigaword evaluation set using the Hugging Face evaluation setup with default parameters:
- eval_rouge1 = 47.8218
- eval_rouge2 = 23.1533
- eval_rougeL = 44.341
- eval_rougeLsum = 44.3198
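For reference, ROUGE scores of this kind can be reproduced with the evaluate library; a minimal sketch follows (the prediction and reference strings are illustrative only, not taken from the evaluation set).

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative strings only -- substitute generated summaries and gold headlines.
predictions = ["nec and csc agree to join forces in supercomputer sales"]
references = ["nec csc join forces in supercomputer sales"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```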
BibTeX entry and citation info
@inproceedings{...,
  year={2020}
}