---
datasets:
- yuvalkirstain/summ_screen_fd_t5_lm
- urialon/summ_screen_validation
- urialon/summ_screen_test
inference: false
pipeline_tag: text2text-generation
---

This model is from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).

This model was finetuned from a BART-base model using Unlimiformer-aware early stopping, described in Section 3.1 of the paper. It was finetuned on the SummScreen dataset using the data preprocessing pipeline from SLED; to load the validation or test set for use with this model, please use the datasets [urialon/summ_screen_validation](https://huggingface.co/datasets/urialon/summ_screen_validation) and [urialon/summ_screen_test](https://huggingface.co/datasets/urialon/summ_screen_test).
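
For convenience, here is a minimal sketch of loading those splits with the Hugging Face `datasets` library, using the dataset IDs linked above:

```python
from datasets import load_dataset

# SLED-preprocessed SummScreen validation and test splits
validation = load_dataset("urialon/summ_screen_validation")
test = load_dataset("urialon/summ_screen_test")

print(validation)
```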

This model is generally weaker than the [retrieval-trained model](https://huggingface.co/abertsch/unlimiformer-bart-summscreen-retrieval) and stronger than the [baseline](https://huggingface.co/abertsch/bart-base-summscreen).

*The inference demo is disabled because you must add the Unlimiformer files to your repository before this model can handle unlimited-length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
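
As a rough sketch (not the repository's official usage), the checkpoint can still be loaded as a standard BART model with `transformers`; without the Unlimiformer wrapper from the GitHub repo, inputs are limited to BART's usual 1024-token context. The repo ID below is a placeholder, not confirmed by this card:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder repo ID -- substitute this model card's actual Hugging Face ID.
MODEL_ID = "abertsch/unlimiformer-bart-summscreen-earlyk"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = BartForConditionalGeneration.from_pretrained(MODEL_ID)

# Without the Unlimiformer attention wrapper, the model behaves as plain
# BART-base, so long transcripts must be truncated to 1024 tokens.
inputs = tokenizer(
    "EPISODE TRANSCRIPT GOES HERE ...",
    return_tensors="pt",
    truncation=True,
    max_length=1024,
)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```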