Update README.md

README.md
This repository contains the PLOS and eLife datasets, introduced in the EMNLP 2022 paper "[Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)".

Each dataset contains full biomedical research articles paired with expert-written lay summaries (i.e., non-technical summaries). PLOS articles are derived from various journals published by [the Public Library of Science (PLOS)](https://plos.org/), whereas eLife articles are derived from the [eLife](https://elifesciences.org/) journal. More details/analyses on the content of each dataset are provided in the paper.
Both "elife" and "plos" have 6 features:

- "article": the body of the document (including the abstract), with sections separated by "\n".
- "section_headings": the title of each section, separated by "\n".
- "keywords": keywords describing the topic of the article, separated by "\n".
- "title": the title of the article.
- "year": the year the article was published.
- "summary": the lay summary of the document.
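As a sketch of how these features fit together, consider a toy record (all field values are invented for illustration; real articles are far longer). Because "article" and "section_headings" are "\n"-separated lists in the same order, they can be zipped back into heading/section pairs:

```python
# Hypothetical record mirroring the feature layout described above.
record = {
    "article": "Abstract text ...\nMethods text ...\nResults text ...",
    "section_headings": "Abstract\nMethods\nResults",
    "keywords": "biology\nmedicine",
    "title": "An example article",
    "year": "2022",
    "summary": "A short, non-technical summary.",
}

# Headings and sections are aligned, so splitting both on "\n"
# and zipping recovers a heading -> section-text mapping.
sections = dict(zip(record["section_headings"].split("\n"),
                    record["article"].split("\n")))
print(sections["Methods"])  # -> "Methods text ..."
```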
**Note:** The format of both datasets differs from that used in the original repository (given above) in order to make them compatible with the `run_summarization.py` script of Transformers. Specifically, sentence tokenization is removed via `" ".join(text)`, and the abstract and article sections, previously lists of sentences, are combined into a single `string` feature ("article"), with each section separated by "\n". For the sentence-tokenized version of the dataset, please use the original git repository.
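The conversion described in the note can be sketched as follows. This is an illustrative assumption about the original layout, not the repository's actual code: the field names and sentence data here are invented, and only the two joins from the note (`" ".join(...)` per section, `"\n".join(...)` across sections) are taken from the text above.

```python
# Assumed original layout: abstract and each article section stored as
# lists of sentences (field names are hypothetical).
original = {
    "abstract": ["Sentence one.", "Sentence two."],
    "sections": [["Body sentence A.", "Body sentence B."],
                 ["Body sentence C."]],
}

# 1. Undo sentence tokenization: join each section's sentences with " ".
# 2. Prepend the abstract and join all sections into one string,
#    separated by "\n", giving the flat "article" feature.
all_sections = [original["abstract"]] + original["sections"]
article = "\n".join(" ".join(sentences) for sentences in all_sections)

print(article)
# Sentence one. Sentence two.
# Body sentence A. Body sentence B.
# Body sentence C.
```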
### Supported Tasks and Leaderboards