---
datasets:
  - roemmele/ablit
language:
  - en
pipeline_tag: summarization
license: mit
---

# roemmele/ablit-bart-base

This model is initialized from facebook/bart-base and fine-tuned on the AbLit dataset, which consists of abridged versions of books aligned with their original versions at the passage level. Given a text, the model generates an abridged version of it, informed by the abridgements observed in AbLit. See the paper cited below for more details.

## Model Details

### Model Description

- **Developed by:** Language Weaver (Melissa Roemmele, Kyle Shaffer, Katrina Olsen, Yiyi Wang, and Steve DeNeefe)
- **Model type:** Seq2SeqLM
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** facebook/bart-base

### Model Sources

See the paper cited in the Citation section below.

## Uses

This model generates abridged versions of texts, informed by the AbLit dataset.

## Bias, Risks, and Limitations

This model comes from research on abridgement as an NLP task, but the dataset the model is trained on (AbLit) is derived from a small set of texts associated with a specific domain and author. In particular, AbLit consists of British English literature from the 18th and 19th centuries, abridged by a single author. Some of the linguistic properties of these original books do not generalize to other domains of English text, and therefore the model might not produce desirable abridgements for other texts.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("roemmele/ablit-bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("roemmele/ablit-bart-base")

passage = "The letter was not unproductive. It re-established peace and kindness."

input_ids = tokenizer(
    passage,
    padding="max_length",
    return_tensors="pt",
).input_ids

output_ids = model.generate(
    input_ids,
    max_length=1024,
    num_beams=5,
    no_repeat_ngram_size=3,
)[0]

abridgement = tokenizer.decode(output_ids, skip_special_tokens=True)
print(abridgement)
# The letter re-established peace and kindness.
```
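
The snippet above pads to the tokenizer's default `model_max_length`. For passages longer than the model's 1024-token input limit, truncation can be requested explicitly; a minimal sketch (not part of the original example), mirroring the truncation used during training:

```python
# Sketch: explicitly cap the input at the model's 1024-token limit.
input_ids = tokenizer(
    passage,
    max_length=1024,
    truncation=True,
    padding="max_length",
    return_tensors="pt",
).input_ids
```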

## Training Details

### Training Data

roemmele/ablit, specifically the train split of the "chunks-10-sentences" subset, i.e.:

```python
from datasets import load_dataset

data = load_dataset("roemmele/ablit", "chunks-10-sentences")
```
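
A quick way to inspect what the subset contains (a sketch; the actual column names come from the dataset itself):

```python
# Inspect the loaded subset: available splits, sizes, and fields.
print(data)                        # DatasetDict with split names and sizes
print(data["train"].column_names)  # fields of each aligned passage pair
print(data["train"][0])            # one original/abridged example
```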

### Training Procedure

We used the training script here.

#### Training Hyperparameters

We specified a maximum length of 1024 tokens for both the source (original passage) and the target (abridged passage), truncating all tokens beyond this limit. We evaluated each model on the AbLit development set after each epoch and concluded training when the cross-entropy loss stopped decreasing. We used a batch size of 4. For all other hyperparameters we used the default values set by this script.
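
The paragraph above maps onto a standard `transformers` fine-tuning setup. The following is a minimal sketch under stated assumptions, not the authors' actual script: the column names `original` and `abridged` and the split name `validation` are hypothetical, and the linked training script remains the authoritative reference.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    EarlyStoppingCallback,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

MAX_LENGTH = 1024  # maximum length for both source and target

def preprocess(batch):
    # "original" and "abridged" are assumed field names; check
    # data["train"].column_names for the real ones.
    model_inputs = tokenizer(
        batch["original"],
        max_length=MAX_LENGTH,
        truncation=True,  # drop all tokens beyond the limit
    )
    labels = tokenizer(
        text_target=batch["abridged"],
        max_length=MAX_LENGTH,
        truncation=True,
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="ablit-bart-base",
    per_device_train_batch_size=4,  # batch size of 4, as above
    evaluation_strategy="epoch",    # evaluate on the dev set each epoch
    save_strategy="epoch",
    num_train_epochs=20,            # upper bound; early stopping ends sooner
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # assumed split name
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    # Stop when cross-entropy loss on the dev set stops decreasing.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
```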

#### Speeds, Sizes, Times

It took ≈3 hours to train each model on a g4dn.4xlarge AWS instance.

## Evaluation

### Testing Data

The test split of the "chunks-10-sentences" subset of roemmele/ablit.
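
This split can be loaded directly via the `split` argument of `load_dataset`:

```python
from datasets import load_dataset

# Load only the test split used for evaluation.
test_data = load_dataset("roemmele/ablit", "chunks-10-sentences", split="test")
```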

### Results

The model obtained a ROUGE-L score of 0.78 on the AbLit test set. See the paper for the results of other metrics.
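
To compute a comparable score, generated abridgements can be compared against the reference abridgements with a standard ROUGE implementation. A minimal sketch using the Hugging Face `evaluate` package (not necessarily the paper's exact tooling):

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["The letter re-established peace and kindness."],  # model output
    references=["The letter re-established peace and kindness."],   # human abridgement
)
print(scores["rougeL"])  # 1.0 for an exact match
```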

### Conclusion

Our analysis shows that, in comparison with human-authored abridgements, the model-generated abridgements tend to preserve more of the original text. This suggests it is challenging to learn which text can be removed while maintaining loyalty to the important parts of the original.

## Citation

**BibTeX:**

```bibtex
@inproceedings{roemmele2023ablit,
  title     = {AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature},
  author    = {Roemmele, Melissa and Shaffer, Kyle and Olsen, Katrina and Wang, Yiyi and DeNeefe, Steve},
  booktitle = {Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume},
  publisher = {Association for Computational Linguistics},
  year      = {2023}
}
```

**APA:**

Roemmele, M., Shaffer, K., Olsen, K., Wang, Y., and DeNeefe, S. (2023). AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature. 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023).

## Model Card Authors

Melissa Roemmele

## Model Card Contact

[email protected]