Update README.md

README.md CHANGED

@@ -1,5 +1,7 @@
 ---
-license:
+license:
+- bsd-3-clause
+- apache-2.0
 tags:
 - generated_from_trainer
 datasets:
@@ -41,7 +43,8 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-An experiment of further fine-tuning a booksum model on a different dataset.
+An experiment of further fine-tuning a booksum model on a different dataset. Compare to either the starting checkpoint (_linked above_) or to the [variant only fine-tuned on the scientific lay summaries](https://huggingface.co/pszemraj/long-t5-tglobal-xl-sci-simplify-elife).
+
 
 ## Intended uses & limitations
 
@@ -49,7 +52,7 @@ More information needed
 
 ## Training and evaluation data
 
-
+the pszemraj/scientific_lay_summarisation-elife-norm dataset, input 16384 tokens then truncate, output 1024 tokens then truncate.
 
 ## Training procedure
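The added "Training and evaluation data" line describes a fixed-length truncation scheme: inputs clipped to 16384 tokens, targets to 1024. A minimal sketch of that preprocessing step follows; the `tokenize` function here is a placeholder stand-in (the commit does not show the actual training script or tokenizer), so only the clipping logic is illustrative.

```python
# Sketch of the truncation described above: inputs are clipped to 16384
# tokens and summaries to 1024 tokens before training. `tokenize` is a
# hypothetical whitespace stand-in for the model's real tokenizer.
MAX_INPUT_TOKENS = 16384
MAX_TARGET_TOKENS = 1024


def tokenize(text: str) -> list[str]:
    # Placeholder "tokenizer" for illustration only.
    return text.split()


def preprocess(article: str, summary: str) -> tuple[list[str], list[str]]:
    """Truncate article/summary token sequences to the stated limits."""
    input_tokens = tokenize(article)[:MAX_INPUT_TOKENS]
    target_tokens = tokenize(summary)[:MAX_TARGET_TOKENS]
    return input_tokens, target_tokens


inputs, targets = preprocess("word " * 20000, "word " * 2000)
print(len(inputs), len(targets))  # 16384 1024
```

In a real pipeline the same clipping is usually done by passing `max_length` and `truncation=True` to the tokenizer rather than slicing token lists by hand.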