---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
dataset_info:
- config_name: document
  features:
  - name: article
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 2236406736
    num_examples: 119924
  - name: validation
    num_bytes: 126510743
    num_examples: 6633
  - name: test
    num_bytes: 126296182
    num_examples: 6658
  download_size: 1154975484
  dataset_size: 2489213661
- config_name: section
  features:
  - name: article
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 2257744955
    num_examples: 119924
  - name: validation
    num_bytes: 127711559
    num_examples: 6633
  - name: test
    num_bytes: 127486937
    num_examples: 6658
  download_size: 1163165290
  dataset_size: 2512943451
configs:
- config_name: document
  data_files:
  - split: train
    path: document/train-*
  - split: validation
    path: document/validation-*
  - split: test
    path: document/test-*
- config_name: section
  data_files:
  - split: train
    path: section/train-*
  - split: validation
    path: section/validation-*
  - split: test
    path: section/test-*
  default: true
---

# PubMed dataset for summarization

A dataset for summarization of long documents.\
Adapted from this [repo](https://github.com/armancohan/long-summarization).\
Note that the original data are pre-tokenized, so this dataset returns `" ".join(text)` and adds `"\n"` between paragraphs.\
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/pubmed-summarization": ("article", "abstract")
```
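
Alternatively, here is a minimal sketch of loading the dataset directly with the `datasets` library; the `document` and `section` config names come from the YAML header above:

```python
from datasets import load_dataset

# "section" is the default config; "document" is the other available variant.
dataset = load_dataset("ccdv/pubmed-summarization", "section")

print(dataset)  # DatasetDict with train / validation / test splits
```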

### Data Fields

- `article`: a string containing the body of the paper
- `abstract`: a string containing the abstract of the paper
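
For instance, an individual record can be inspected as follows (assuming `dataset` was loaded as in the sketch above):

```python
# Each example is a dict with the fields listed above.
example = dataset["train"][0]
print(example["article"][:200])   # start of the paper body
print(example["abstract"][:200])  # start of the abstract
```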

### Data Splits

This dataset has 3 splits: _train_, _validation_, and _test_.\
Token counts are whitespace-based.

| Dataset Split | Number of Instances | Avg. tokens (article / abstract) |
| ------------- | ------------------- | :------------------------------- |
| Train         | 119,924             | 3043 / 215                       |
| Validation    | 6,633               | 3111 / 216                       |
| Test          | 6,658               | 3092 / 219                       |
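
Here is a sketch of how these whitespace-based averages could be reproduced; the exact counting procedure used for the table is an assumption:

```python
def avg_whitespace_tokens(split, column):
    # Count tokens by splitting each text on whitespace.
    total = sum(len(text.split()) for text in split[column])
    return total / len(split)

for name in ("train", "validation", "test"):
    split = dataset[name]
    print(name,
          round(avg_whitespace_tokens(split, "article")), "/",
          round(avg_whitespace_tokens(split, "abstract")))
```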


# Cite original article
```
@inproceedings{cohan-etal-2018-discourse,
  title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents",
  author = "Cohan, Arman  and
    Dernoncourt, Franck  and
    Kim, Doo Soon  and
    Bui, Trung  and
    Kim, Seokhwan  and
    Chang, Walter  and
    Goharian, Nazli",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/N18-2097",
  doi = "10.18653/v1/N18-2097",
  pages = "615--621",
  abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
}
```