---
annotations_creators:
- none
language_creators:
- unknown
languages:
- unknown
licenses:
- cc-by-sa-4.0
multilinguality:
- unknown
pretty_name: xsum
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids:
- unknown
---

# Dataset Card for GEM/xsum

## Dataset Description

- **Homepage:** n/a
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** https://www.aclweb.org/anthology/D18-1206
- **Leaderboard:** N/A
- **Point of Contact:** Shashi Narayan

### Link to Main Data Card

You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xsum).

### Dataset Summary

XSum is an English news summarization dataset where the task is to predict the first sentence of an article from the rest of it.

You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xsum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xsum).
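
A quick way to sanity-check the loaded data is to look at the splits and a single example. The snippet below is a minimal sketch; the field names `document` and `target` follow the example instance shown later in this card.

```
import datasets

data = datasets.load_dataset('GEM/xsum')

# Split names follow the standard `datasets` convention;
# the sizes are listed under Data Splits below.
for split in data:
    print(split, len(data[split]))

# Field names ('document', 'target') follow the example instance in this card.
example = data['train'][0]
print(example['document'][:200])  # beginning of the input article
print(example['target'])          # the one-sentence reference summary
```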

#### website
n/a

#### paper
[ACL Anthology](https://www.aclweb.org/anthology/D18-1206)

#### authors
Shashi Narayan, Shay B. Cohen, Mirella Lapata (all affiliated with University of Edinburgh at the time of dataset creation)

## Dataset Overview

### Where to find the Data and its Documentation

#### Download

<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/EdinburghNLP/XSum)

#### Paper

<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://www.aclweb.org/anthology/D18-1206)

#### BibTex

<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{xsum-emnlp,
  author    = "Shashi Narayan and Shay B. Cohen and Mirella Lapata",
  title     = "Don't Give Me the Details, Just the Summary! {T}opic-Aware Convolutional Neural Networks for Extreme Summarization",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  year      = "2018",
  address   = "Brussels, Belgium",
}
```

#### Contact Name

<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Shashi Narayan

#### Contact Email

<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->

#### Has a Leaderboard?

<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no


### Languages and Intended Use

#### Multilingual?

<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no

#### Covered Dialects

<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Since the dataset is sourced from BBC articles, the language is British English of the variety written by journalists.

#### Covered Languages

<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`

#### Whose Language?

<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
Professional journalists

#### License

<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International

#### Intended Use

<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset is for the task of abstractive summarization in its extreme form: summarizing a document in a single sentence. The idea is to create a short, one-sentence news summary answering the question "What is the article about?".


#### Primary Task

<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization

#### Communicative Goal

<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Given a news article, produce a single-sentence summary of the content of the article.


### Credit

#### Curation Organization Type(s)

<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`

#### Curation Organization(s)

<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of Edinburgh

#### Dataset Creators

<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Shashi Narayan, Shay B. Cohen, Mirella Lapata (all affiliated with University of Edinburgh at the time of dataset creation)

#### Funding

<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
European Research Council (Lapata; award number 681760), the European Union under the Horizon 2020 SUMMA project (Narayan, Cohen; grant agreement 688139), and Huawei Technologies (Cohen).

#### Who added the Dataset to GEM?

<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
The original data card was written by Laura Perez-Beltrachini and the data loader by Yacine Jernite. Sebastian Gehrmann migrated the data card to the new format and extended it. The v2 data loader was migrated by Abinaya Mahendiran.

### Dataset Structure

#### Data Fields

<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `Document`: Input news article.
- `Summary`: One sentence summary of the article.
- `Id`: BBC ID of the article.

#### Reason for Structure

<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The Document/Summary format is standard for summarization datasets.

#### How were labels chosen?

<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The labels are the first sentence of the source article.
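
To make this concrete, here is a minimal sketch of deriving such a pair from an article; the helper and the toy sentences are illustrative assumptions, not the curators' actual pipeline.

```
def make_pair(sentences):
    """Use the first sentence as the abstractive target and the
    remainder as the input document (illustrative only)."""
    return {"document": " ".join(sentences[1:]), "target": sentences[0]}

article = [
    "A team of UK scientists hopes to shed light on bleeding canker.",
    "The researchers have sequenced the genome of a strain of bacterium.",
    "A survey in 2007 showed that the disease had spread rapidly.",
]
print(make_pair(article))
```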

#### Example Instance

<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
  'document': 'The researchers have sequenced the genome of a strain of bacterium that causes the virulent infection.\nA survey in 2007 showed that bleeding canker had spread rapidly, with almost half of the two million horse chestnuts displaying symptoms of the disease.\nThe findings have been published in the journal PLoS One.\nA visible symptom of the disease is a lesion on the bark, which oozes a resin on to the trunk or sometimes the branches.\nThe bark underneath the canker is killed, and if cankers manage to go all the way around the trunk then the horse chestnut (Aesculus hippocastanum) will die because it cuts off the food supply. [...]',
  'target': "A team of UK scientists hopes to shed light on the mysteries of bleeding canker, a disease that is threatening the nation's horse chestnut trees.",
}
```

#### Data Splits

<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
| Section | Number of Documents |
| ------------- |:-------------:|
| Training | 204,045 |
| Validation | 11,332 |
| Testing | 11,334 |
| Total | 226,711 |

| Section | Avg. number of words | Avg. number of sentences |
| ------------- |:-------------:| :-------------:|
| Documents | 431.07 | 19.77 |
| Summary | 23.26 | 1.00 |
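
These length statistics can be recomputed from the loaded dataset along the following lines; simple whitespace tokenization is a simplifying assumption, so the numbers may differ slightly from the table above.

```
import datasets

# Use the (smaller) validation split to keep the computation light.
data = datasets.load_dataset('GEM/xsum', split='validation')

avg_doc = sum(len(ex['document'].split()) for ex in data) / len(data)
avg_sum = sum(len(ex['target'].split()) for ex in data) / len(data)
print(f"avg document words: {avg_doc:.2f}")
print(f"avg summary words: {avg_sum:.2f}")
```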


#### Splitting Criteria

<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The identifiers in the URLs were used to randomly split the dataset into training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) sets.
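
A deterministic, identifier-based split of this kind can be reproduced along the following lines; the hashing scheme and the sample ID below are illustrative assumptions, not the curators' exact procedure.

```
import hashlib

def assign_split(article_id: str) -> str:
    """Map an article ID to a 90/5/5 split deterministically (illustrative)."""
    bucket = int(hashlib.md5(article_id.encode()).hexdigest(), 16) % 100
    if bucket < 90:
        return "train"
    if bucket < 95:
        return "validation"
    return "test"

print(assign_split("34227252"))  # hypothetical BBC article ID
```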



## Dataset Curation

### Original Curation

#### Original Curation Rationale

<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Comparable datasets are often highly extractive, which is not a strategy that works for one-sentence summaries. The dataset curators thus created this dataset as a way to evaluate truly abstractive models.

#### Communicative Goal

<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Same as the communicative goal in GEM: a model should summarize a news article in a single sentence.

#### Sourced from Different Sources

<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no


### Language Data

#### How was Language Data Obtained?

<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`

#### Where was it found?

<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`

#### Language Producers

<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The data was collected from BBC articles published between 2010 and 2017. No other information about the language producers is available.

#### Topics Covered

<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The collected articles included the following topics: News, Politics, Sports, Weather, Business, Technology, Science, Health, Family, Education, Entertainment and Arts.

The dataset curators also used LDA to gain insight into this question and found the following top keywords associated with each topic (a sketch of this kind of analysis follows the list):

- **T1**: charge, court, murder, police, arrest, guilty, sentence, boy, bail, space, crown, trial
- **T2**: church, abuse, bishop, child, catholic, gay, pope, school, christian, priest, cardinal
- **T3**: council, people, government, local, housing, home, house, property, city, plan, authority
- **T4**: clinton, party, trump, climate, poll, vote, plaid, election, debate, change, candidate, campaign
- **T5**: country, growth, report, business, export, fall, bank, security, economy, rise, global, inflation
- **T6**: hospital, patient, trust, nhs, people, care, health, service, staff, report, review, system, child
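
A sketch of this kind of topic analysis, using scikit-learn's LDA implementation; the toy corpus and all parameters below are illustrative assumptions, since the curators' exact setup is not documented in this card.

```
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Replace with the article texts; these toy documents only show the mechanics.
documents = [
    "the court heard the murder charge after the police arrest",
    "the council approved the housing plan for the city",
    "the hospital trust reviewed patient care and nhs staff levels",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Print the top keywords per topic, analogous to the T1-T6 lists above.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:8]]
    print(f"T{i + 1}:", ", ".join(top))
```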

#### Data Validation

<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated

#### Data Preprocessing

<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
The text was extracted from the HTML of the webpage. No further processing was done.
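
As an illustration, extraction of this kind can be done along the following lines; BeautifulSoup and the paragraph-based selection are illustrative assumptions, not the curators' actual tooling.

```
from bs4 import BeautifulSoup

def extract_text(html: str) -> str:
    """Pull the visible paragraph text out of a news page (illustrative)."""
    soup = BeautifulSoup(html, "html.parser")
    return "\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

html = "<html><body><p>First sentence.</p><p>Rest of the article.</p></body></html>"
print(extract_text(html))
```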

#### Was Data Filtered?

<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered


### Structured Annotations

#### Additional Annotations?

<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none

#### Annotation Service?

<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no


### Consent

#### Any Consent Policy?

<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no

#### Justification for Using the Data

<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The copyright license of the data allows reusing it for this purpose.


### Private Identifying Information (PII)

#### Contains PII?

<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely

#### Categories of PII

<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`

#### Any PII Identification?

<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification


### Maintenance

#### Any Maintenance Plan?

<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no



## Broader Social Context

### Previous Work on the Social Impact of the Dataset

#### Usage of Models based on the Data

<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no


### Impact on Under-Served Communities

#### Addresses needs of underserved Communities?

<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no


### Discussion of Biases

#### Any Documented Social Biases?

<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure

#### Are the Language Producers Representative of the Language?

<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The language and content of the data are focused on UK news and, as such, are not representative of English speakers worldwide. The BBC's existing selection biases carry over into this dataset.