---
datasets:
- roemmele/ablit
language:
- en
pipeline_tag: summarization
---

# Model Card for roemmele/ablit-bart-base

This model is initialized from facebook/bart-base. It has been fine-tuned on the AbLit dataset, which consists of abridged versions of books aligned with their original versions at the passage level. Given a text, the model generates an abridgement of that text informed by what it has observed in AbLit. See the cited paper for more details.

## Model Details

### Model Description

- **Developed by:** Language Weaver (Melissa Roemmele, Kyle Shaffer, Katrina Olsen, Yiyi Wang, and Steve DeNeefe)
- **Model type:** Seq2SeqLM
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** facebook/bart-base

### Model Sources [optional]

- **Repository:** [github.com/roemmele/AbLit](https://github.com/roemmele/AbLit)
- **Paper [optional]:** [AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature](https://arxiv.org/pdf/2302.06579.pdf)

## Uses

This model generates abridged versions of texts informed by the AbLit dataset.

## Bias, Risks, and Limitations

This model comes from research on abridgement as an NLP task, but the dataset the model is trained on (AbLit) is derived from a small set of texts associated with a specific domain and author. There are significant practical reasons for this limited scope. In particular, in contrast to the books in AbLit, most recently published books are not included in publicly accessible datasets due to copyright restrictions, and the same restrictions typically apply to any abridgements of these books. For this reason, AbLit consists of British English literature from the 18th and 19th centuries. Some of the linguistic properties of these original books do not generalize to other types of English texts, and therefore the model might not produce desirable abridgements for these other texts.

## How to Get Started with the Model

```
In [1]: from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
   ...: tokenizer = AutoTokenizer.from_pretrained("roemmele/ablit-bart-base")
   ...: model = AutoModelForSeq2SeqLM.from_pretrained("roemmele/ablit-bart-base")
   ...:
   ...: passage = "The letter was not unproductive. It re-established peace and kindness."
   ...: input_ids = tokenizer(
   ...:     passage,
   ...:     padding='max_length',
   ...:     return_tensors="pt").input_ids
   ...: output_ids = model.generate(
   ...:     input_ids,
   ...:     max_length=1024,
   ...:     num_beams=5,
   ...:     no_repeat_ngram_size=3
   ...: )[0]
   ...: abridgement = tokenizer.decode(
   ...:     output_ids,
   ...:     skip_special_tokens=True)

In [2]: print(abridgement)
The letter re-established peace and kindness.
```

## Training Details

### Training Data

[roemmele/AbLit](https://huggingface.co/datasets/roemmele/ablit), specifically the train split of the "chunks-10-sentences" subset, i.e.:

```
from datasets import load_dataset
data = load_dataset("roemmele/ablit", "chunks-10-sentences")
```

### Training Procedure

We used the training script [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py).

Hyperparameter settings: We specified a maximum length of 1024 for both the source (original passage) and target (abridged passage), and truncated all tokens beyond this limit. We evaluated each model on the AbLit development set after each epoch and concluded training when cross-entropy loss stopped decreasing. We used a batch size of 4. For all other hyperparameters we used the default values set by this script.
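For illustration only, the setup described above could be reproduced roughly as follows with the `Seq2SeqTrainer` API rather than the command-line script. This is a minimal sketch, not the authors' exact configuration: the AbLit column names (`original`, `abridged`), the development split name (`validation`), and the early-stopping patience are assumptions.

```
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    EarlyStoppingCallback,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
data = load_dataset("roemmele/ablit", "chunks-10-sentences")

def preprocess(examples):
    # Truncate both source (original passage) and target (abridgement) to 1024 tokens.
    # Column names "original"/"abridged" are assumptions; check the dataset card.
    model_inputs = tokenizer(examples["original"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=examples["abridged"], max_length=1024, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=data["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="ablit-bart-base",
    per_device_train_batch_size=4,
    evaluation_strategy="epoch",        # evaluate on the development set after each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",  # keep the checkpoint where dev loss stopped decreasing
    greater_is_better=False,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # split name assumed; may differ in the dataset
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
```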
#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times

It took ≈3 hours to train each model on a g4dn.4xlarge AWS instance.

## Evaluation

### Testing Data

Test split of the "chunks-10-sentences" subset of [roemmele/AbLit](https://huggingface.co/datasets/roemmele/ablit).

### Results

The model obtained a ROUGE-L score of 0.78 on the AbLit test set. See the paper for the results of other metrics.

### Conclusion

Our analysis shows that, in comparison with human-authored abridgements, the model-generated abridgements tend to preserve more of the original text, suggesting it is challenging to learn what text can be removed while remaining loyal to the important parts of the original.

## Citation [optional]

**BibTeX:**

    @inproceedings{roemmele2023ablit,
      title={AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature},
      author={Roemmele, Melissa and Shaffer, Kyle and Olsen, Katrina and Wang, Yiyi and DeNeefe, Steve},
      booktitle={Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume},
      publisher={Association for Computational Linguistics},
      year={2023}
    }

**APA:**

Roemmele, M., Shaffer, K., Olsen, K., Wang, Y., and DeNeefe, S. (2023). AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature. 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023).

## Model Card Authors

Melissa Roemmele

## Model Card Contact

melissa@roemmele.io
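As a supplement to the Evaluation section above, the reported ROUGE-L figure could be approximated with the `evaluate` library roughly as sketched below. This is not the authors' evaluation code; the column names `original` and `abridged` and the exact ROUGE settings are assumptions.

```
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("roemmele/ablit-bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("roemmele/ablit-bart-base")
test = load_dataset("roemmele/ablit", "chunks-10-sentences", split="test")

rouge = evaluate.load("rouge")
predictions = []
for passage in test["original"]:  # column name assumed; check the dataset card
    input_ids = tokenizer(
        passage, truncation=True, max_length=1024, return_tensors="pt").input_ids
    output_ids = model.generate(
        input_ids, max_length=1024, num_beams=5, no_repeat_ngram_size=3)[0]
    predictions.append(tokenizer.decode(output_ids, skip_special_tokens=True))

# Score generated abridgements against the human-authored reference abridgements.
scores = rouge.compute(predictions=predictions, references=test["abridged"])
print(scores["rougeL"])
```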