---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).

This model was fine-tuned from a BART-base model using the retrieval-augmented training strategy described in Section 3.2 of the paper. It was fine-tuned on the BookSum dataset in the full-book setting.
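
As a minimal usage sketch, the checkpoint can be loaded with the standard `transformers` seq2seq classes; the model ID below is an assumption based on this repository's naming, so replace it with the actual repo path if it differs. Note that loading the weights this way gives a plain BART model (capped at 1024 input tokens); unlimited-length inference requires the Unlimiformer wrappers from the [Unlimiformer repository](https://github.com/abertsch72/unlimiformer).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed model ID for this repository; adjust if the actual path differs.
checkpoint = "abertsch/unlimiformer-bart-booksum-retrieval"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Placeholder input; in practice this would be (part of) a full book.
book_text = "Chapter 1. It was the best of times, it was the worst of times..."

# Without the Unlimiformer wrappers, the input is truncated to BART's
# 1024-token context window.
inputs = tokenizer(book_text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```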