---
language:
- en
task_categories:
- question-answering
- summarization
- text-generation
task_ids:
- multiple-choice-qa
- natural-language-inference
configs:
- gov_report
- summ_screen_fd
- qmsum
- squality
- qasper
- narrative_qa
- quality
- musique
- space_digest
- book_sum_sort
tags:
- query-based-summarization
- long-texts
---

## Dataset Description

- **Homepage:** [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/)
- **Leaderboard:** [Leaderboard](https://www.zero.scrolls-benchmark.com/leaderboard)
- **Point of Contact:** [[email protected]]([email protected])

# Dataset Card for ZeroSCROLLS

## Overview
ZeroSCROLLS is a zero-shot benchmark for natural language understanding over long texts.

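Each task ships as its own config (see the `configs` list in the header above). A minimal loading sketch with the Hugging Face `datasets` library; the `tau/zero_scrolls` repository id and the `test` split are assumptions, so adjust them to wherever the benchmark is actually hosted:

```python
from datasets import load_dataset

# Each ZeroSCROLLS task is a separate config; "tau/zero_scrolls" and
# the "test" split are assumed here, not guaranteed by this card.
gov_report = load_dataset("tau/zero_scrolls", "gov_report", split="test")

example = gov_report[0]
print(example["input"][:500])  # start of the (possibly trimmed) input document
```
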
## Leaderboard
The ZeroSCROLLS benchmark leaderboard can be found [here](https://www.zero.scrolls-benchmark.com/leaderboard).

## Tasks
ZeroSCROLLS comprises the following tasks:

#### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf))
GovReport is a summarization dataset of reports addressing various national policy issues published by the
Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.
The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets;
for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in arXiv and PubMed, respectively.

#### SummScreenFD ([Chen et al., 2022](https://arxiv.org/pdf/2104.07091.pdf))
SummScreenFD is a summarization dataset in the domain of TV shows (e.g., Friends, Game of Thrones).
Given a transcript of a specific episode, the goal is to produce the episode's recap.
The original dataset is divided into two complementary subsets, based on the source of its community-contributed transcripts.
For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows,
making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows.
Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.

#### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf))
QMSum is a query-based summarization dataset, consisting of 232 meeting transcripts from multiple domains.
The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control,
and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.
Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions,
while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.

#### SQuALITY ([Wang et al., 2022](https://arxiv.org/pdf/2205.11465.pdf))
SQuALITY is a question-focused summarization dataset, where given a story from Project Gutenberg,
the task is to produce a summary of the story or aspects of it based on a guiding question.
The questions and summaries are original and crowdsourced; experienced writers were guided to design questions that require reading significant parts of the story to answer correctly.

#### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf))
Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).
Questions were written by NLP practitioners after reading only the title and abstract of the papers,
while another set of NLP practitioners annotated the answers given the entire document.
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.

#### NarrativeQA ([Kočiský et al., 2018](https://arxiv.org/pdf/1712.07040.pdf))
NarrativeQA is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.
Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs,
resulting in about 30 questions and answers for each of the 1,567 books and scripts.
They were encouraged to use their own words rather than copying, and to avoid asking yes/no questions or ones about the cast.
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).

#### QuALITY ([Pang et al., 2022](https://arxiv.org/pdf/2112.08608.pdf))
QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg,
the Open American National Corpus, and more.
Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that, in order to answer them correctly,
human annotators must read large portions of the given document.
Reference answers were then determined by majority vote over the annotators' and the writer's answers.
To measure the difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a short period of time to skim through the document.
As a result, 50% of the questions in QuALITY are labeled as hard, i.e., the majority of the annotators in the speed validation setting chose the wrong answer.

#### MuSiQue ([Trivedi et al., 2022](https://arxiv.org/pdf/2108.00573.pdf))
MuSiQue is a multi-hop question answering dataset, where the inputs are 20 Wikipedia paragraphs and a question that requires multiple hops between different paragraphs.
In the original dataset, each question also has an unanswerable twin question, where the correct answer is not present in the paragraphs.

#### SpaceDigest (New)
SpaceDigest is a new sentiment aggregation task. Given 50 hotel reviews (without their ratings) from the Space dataset (Angelidis et al., 2021), the task is to determine the percentage of positive reviews.

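To make the aggregation concrete, here is a toy sketch of the objective with made-up reviews; `is_positive` is a hypothetical placeholder for a real sentiment classifier, not part of the dataset:

```python
# Toy illustration of the SpaceDigest objective: classify each review,
# then report the percentage that are positive.
def is_positive(review: str) -> bool:
    # Hypothetical placeholder heuristic, for illustration only.
    return "great" in review.lower()

reviews = ["Great location and friendly staff.", "The room was dirty and noisy."]
pct_positive = 100 * sum(is_positive(r) for r in reviews) / len(reviews)
print(f"{pct_positive:.0f}%")  # the task's answer is a percentage
```
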
#### BookSumSort (New)
BookSumSort is a new task based on the BookSum dataset (Kryściński et al., 2022), which contains summaries of chapters (or parts) of novels, plays, and long poems from various sources.
Given a shuffled list of chapter summaries, the task is to reorder them according to the original order of summaries in BookSum.

## Data Fields

Most datasets in the benchmark share the same input-output format (a parsing sketch follows the list):

- `input`: a `string` feature. The input document.
- `output`: this feature is always None, as ZeroSCROLLS contains only test sets.
- `id`: a `string` feature. Unique per input.
- `pid`: a `string` feature, identical to `id`. Facilitates evaluating tasks with multiple references per input.
- `document_start_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `document_end_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `query_start_index`: an `int32` feature. Character index that enables easy parsing of the query, if one exists.
- `query_end_index`: an `int32` feature. Character index that enables easy parsing of the query, if one exists.
- `truncation_seperator`: a `string` feature. The string appended to a trimmed context document, indicating that the context was trimmed.

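The index fields above make it easy to slice the query and the context document out of the flat `input` string. A minimal parsing sketch, under the same hosting assumptions as the loading example above:

```python
from datasets import load_dataset

qmsum = load_dataset("tau/zero_scrolls", "qmsum", split="test")  # assumed id/split

example = qmsum[0]
text = example["input"]

# Slice the context document and the query out of the input string.
document = text[example["document_start_index"]:example["document_end_index"]]
query = text[example["query_start_index"]:example["query_end_index"]]
print(query)
print(document[:300])
```
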
Datasets containing multiple documents inside the `input` feature are MuSiQue, SpaceDigest, and BookSumSort. They also have the following feature:

- `inner_docs_start_indices`: a sequence of `int32` features. Character indices that enable easy parsing of the inner documents, e.g., reviews or summaries.
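A sketch of splitting the inner documents apart with these indices; it assumes each inner document ends where the next begins and the last one ends at `document_end_index`:

```python
from datasets import load_dataset

# SpaceDigest packs 50 reviews into a single `input` string (assumed id/split).
space = load_dataset("tau/zero_scrolls", "space_digest", split="test")

example = space[0]
text = example["input"]

starts = list(example["inner_docs_start_indices"])
ends = starts[1:] + [example["document_end_index"]]  # assumption: see lead-in
reviews = [text[s:e].strip() for s, e in zip(starts, ends)]
print(len(reviews), reviews[0][:100])
```
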

## Citation
If you use the ZeroSCROLLS data, **please make sure to cite all of the original dataset papers.** [[bibtex](https://zero-scrolls-tau.s3.us-east-2.amazonaws.com/zero_scrolls_datasets.bib)]
```
@inproceedings{}
```