---
annotations_creators:
- found
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- xsum
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
  task: summarization
  task_id: summarization
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    document: text
    summary: target
  metrics:
  - type: rouge
    name: Rouge
dataset_info:
  features:
  - name: document
    dtype: string
  - name: summary
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 180554048
    num_examples: 204045
  - name: validation
    num_bytes: 183554048
    num_examples: 11332
  - name: test
    num_bytes: 183554048
    num_examples: 11334
  download_size: 183554048
  dataset_size: 183554048
---

# Dataset Card for "xsum"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:[email protected])
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB

### Dataset Summary

Extreme Summarization (XSum) Dataset.

There are three features:
- document: Input news article.
- summary: One-sentence summary of the article.
- id: BBC ID of the article.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB

An example from the 'validation' split looks as follows.
```
{
    "document": "some-body",
    "id": "29750031",
    "summary": "some-sentence"
}
```
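
For evaluation, the `train-eval-index` block in the YAML header renames these fields to generic names (`document` → `text`, `summary` → `target`). A minimal sketch of that mapping, applied to the record above:

```python
# Field renaming declared in the card's train-eval-index block:
# document -> text, summary -> target; other fields pass through.
COL_MAPPING = {"document": "text", "summary": "target"}

def apply_col_mapping(example: dict) -> dict:
    """Return a copy of the record with evaluation column names."""
    return {COL_MAPPING.get(key, key): value for key, value in example.items()}

record = {"document": "some-body", "id": "29750031", "summary": "some-sentence"}
mapped = apply_col_mapping(record)
print(mapped)  # {'text': 'some-body', 'id': '29750031', 'target': 'some-sentence'}
```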

### Data Fields

The data fields are the same among all splits.

#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.

### Data Splits

| name  |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045|     11332|11334|
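
As a quick sanity check, the split sizes above can be totalled and turned into proportions (pure arithmetic, no download needed):

```python
# Split sizes as listed in the table above.
SPLIT_SIZES = {"train": 204045, "validation": 11332, "test": 11334}

total = sum(SPLIT_SIZES.values())
fractions = {name: n / total for name, n in SPLIT_SIZES.items()}

print(total)                         # 226711 examples overall
print(round(fractions["train"], 2))  # train is ~90% of the data
```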

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{Narayan2018DontGM,
  title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
  author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.08745}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.