## Model description

This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `pszemraj/scientific_lay_summarisation-plos-norm` dataset for two epochs.
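
The training data can be inspected directly with the `datasets` library. A minimal sketch; the `train` split name is an assumption, and `print(dataset)` will show what is actually available:

```python
from datasets import load_dataset

# dataset id taken from this card; a "train" split is assumed to exist
dataset = load_dataset("pszemraj/scientific_lay_summarisation-plos-norm")
print(dataset)               # available splits and columns
print(dataset["train"][0])   # first training example
```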

## Usage

The example below uses the [textsum](https://github.com/pszemraj/textsum) package (`pip install textsum`), whose `Summarizer` wraps the model; the repo id shown is an assumption, so substitute this model's actual Hugging Face id:

```python
from textsum.summarize import Summarizer

# repo id assumed; substitute this model's actual Hugging Face id
summarizer = Summarizer(
    model_name_or_path="pszemraj/long-t5-tglobal-base-sci-simplify"
)

text = "long scientific text to summarize goes here..."
summary = summarizer.summarize_string(text)
print(summary)
```
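
If you prefer not to install `textsum`, the checkpoint should also load through the standard Hugging Face `transformers` summarization pipeline. A sketch under the same assumed repo id; the generation settings are illustrative, not values from this card:

```python
from transformers import pipeline

# repo id assumed; substitute this model's actual Hugging Face id
summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-sci-simplify",
)

text = "long scientific text to summarize goes here..."
# generation settings are illustrative assumptions, not tuned values
result = summarizer(text, max_length=512, no_repeat_ngram_size=3)
print(result[0]["summary_text"])
```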

## Intended uses & limitations

- Ability to generalize outside of the dataset domain (PubMed/bioscience-style papers) has yet to be evaluated.

## Training procedure

### Eval results

It achieves the following results on the evaluation set:

- Loss: 1.6778
- Rouge1: 49.1475
- Rouge2: 18.9281
- Rougel: 26.9893
- Rougelsum: 45.0973
- Gen Len: 399.4125
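
For reference, ROUGE scores like those above can be computed with the `evaluate` library; the prediction and reference strings below are placeholders, not data from this card:

```python
import evaluate

rouge = evaluate.load("rouge")

# placeholder texts; in practice these come from the evaluation split
predictions = ["the model's generated lay summary"]
references = ["the reference lay summary from the dataset"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```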

### Training hyperparameters

The following hyperparameters were used during training: