Update README.md
README.md CHANGED
@@ -206,6 +206,8 @@ Exploring how well long-document models trained on "lay summaries" of scientific
This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `pszemraj/scientific_lay_summarisation-elife-norm` dataset.

- The variant trained on the PLOS subset can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-base-sci-simplify)

## Usage
It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If interested, you can also use the `textsum` util repo to have most of this abstracted out for you:
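Below is a minimal sketch of both options. The repo id `pszemraj/long-t5-tglobal-base-sci-simplify-elife` is an assumption (this excerpt never states the checkpoint's own name), so substitute the actual model id:

```python
# Plain transformers with beam search decoding (a sketch, not the card's official snippet).
from transformers import pipeline

MODEL_ID = "pszemraj/long-t5-tglobal-base-sci-simplify-elife"  # assumed repo id

summarizer = pipeline("summarization", model=MODEL_ID)

long_document = "..."  # full article text goes here

result = summarizer(
    long_document,
    max_length=512,          # cap on generated summary length (tokens)
    num_beams=4,             # beam search, as recommended above
    early_stopping=True,     # stop once all beams have finished
    no_repeat_ngram_size=3,  # reduce repetitive phrasing
)
print(result[0]["summary_text"])
```

And, assuming `textsum`'s `Summarizer` API matches its README (`pip install textsum`), roughly:

```python
# The same summarization via the textsum helper; names follow textsum's README,
# so treat them as assumptions if the package has since changed.
from textsum.summarize import Summarizer

summarizer = Summarizer(
    model_name_or_path="pszemraj/long-t5-tglobal-base-sci-simplify-elife",  # assumed repo id
)
summary = summarizer.summarize_string(long_document)
print(summary)
```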