JeremyAlain committed on
Commit
d6a2b4d
1 Parent(s): f1c9d6f

create dataset card

README.md ADDED
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Fewshot Table Dataset
size_categories:
- 100K<n<1M
source_datasets:
- WDC Web Table Corpora (http://webdatacommons.org/webtables/)
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- closed-domain-qa
- open-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---

# Dataset Card for Fewshot Table Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/JunShern/few-shot-pretraining
- **Paper:** Paper-Title
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The Fewshot Table dataset consists of tables that naturally occur on the web and are formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. The dataset consists of approximately 413K tables extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/) 2015, which is released under the Apache-2.0 license. The WDC Web Table Corpora "contains vast amounts of HTML tables. [...] The Web Data Commons project extracts relational Web tables from the [Common Crawl](https://commoncrawl.org/), the largest and most up-to-date Web corpus that is currently available to the public."

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide: it contains thousands of tasks, each with only a few examples, whereas most current NLP datasets are very deep, i.e. tens of tasks with many examples each. This means our dataset covers a broad range of potential tasks, e.g. multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pretraining on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each table, i.e. each task, is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of that row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
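
Concretely, a task file can be parsed and its examples concatenated into a few-shot prompt along these lines (a minimal sketch; the records and the prompt template are hypothetical, not taken from the dataset):

```python
import json

# Two lines of a hypothetical JSON Lines task file, in the format described above.
jsonl = (
    '{"task": "t1", "input": "Name: Alice | Age: 30", "options": ["adult", "minor"], "output": "adult"}\n'
    '{"task": "t1", "input": "Name: Bob | Age: 12", "options": ["adult", "minor"], "output": "minor"}\n'
)
examples = [json.loads(line) for line in jsonl.splitlines()]

def build_fewshot_prompt(demos, query_input):
    """Concatenate (input, output) demonstrations, then append the query input."""
    shots = "\n".join(f"Input: {d['input']}\nOutput: {d['output']}" for d in demos)
    return f"{shots}\nInput: {query_input}\nOutput:"

# Use the first example as the single demonstration, the second as the query.
prompt = build_fewshot_prompt(examples[:1], examples[1]["input"])
```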

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple-choice classification, the options to choose from.

'output': target column element of the same row as the input.

'pageTitle': the title of the page containing the table.

'outputColName': ?? (potentially remove this from data)

'url': URL of the website containing the table.

'wdcFile': ? (potentially remove this from data)

### Data Splits

[Needs More Information]

## Dataset Creation

### Curation Rationale

How do we convert tables to few-shot tasks?
Unlike unstructured text, structured data in the form of tables lends itself easily to the few-shot task format. Given a table where each row is an instance of a similar class and the columns describe the attributes of each instance, we can turn each row into a task example to predict one attribute given the others. When the table has more than one row, we instantly have multiple examples of this task by using each row as a single example, and thus each table becomes a few-shot dataset for a particular task.
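
As an illustration, the row-to-example conversion described above might look like this (a toy table; the column names and values are made up for the sketch):

```python
# A toy relational table: a header row plus data rows.
header = ["country", "capital", "continent"]
rows = [
    ["France", "Paris", "Europe"],
    ["Japan", "Tokyo", "Asia"],
    ["Kenya", "Nairobi", "Africa"],
]

def table_to_task(header, rows, output_col):
    """Turn each row into one example: predict `output_col` from the other columns."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        inp = " | ".join(f"{h}: {v}" for h, v in zip(header, row) if h != output_col)
        examples.append({"input": inp, "output": row[out_idx]})
    return examples

# One of the three tasks this table can spawn: predict the capital.
task = table_to_task(header, rows, "capital")
```

Choosing a different output column yields a different task from the same table, which is how one table can spawn several tasks.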

The few-shot setting here is significant: tables often do not come with clear instructions for each field, so tasks may be underspecified if prompted in a zero-shot manner, but the intended task becomes clearer when examples are provided. This makes a good two-way match: the few-shot format is a perfect setup for table learning, and tables provide a natural dataset for few-shot training.

### Source Data

#### Initial Data Collection and Normalization

We downloaded the [WDC Web Table Corpora](http://webdatacommons.org/webtables/) 2015 dataset and focus on relational tables. In the following, we describe the steps we executed to filter the WDC Web Table Corpora and create our task dataset. Given a set of relational tables, we apply defined preprocessing steps to ensure all the tables can be handled consistently. Each table can then spawn one or more tasks using a simple predict-one-column approach. Finally, all tasks produced in this manner undergo simple rule-based checks, i.e. any candidates that do not meet the defined minimum requirements for a well-formed task are rejected. Following this approach, we start with 50 million tables in the initial corpus and produce a longlist of 400K tasks.

1. We select only relational tables.
2. We make sure all tables are vertical (horizontal tables are simply transposed) and remove duplicate rows.
3. To create tasks, we use what the literature refers to as verbalizers. For example, a table with 3 columns may be cast as three different tasks: predict column A given B and C, predict column B given A and C, and predict column C given A and B.
4. Rule-based checks to reject tables:
   a) We reject 25M tables that have fewer than 6 rows (so we can do at least k=5-shot learning).
   b) We reject tables with > 20% non-English text as measured by [SpaCy](https://spacy.io/).
   c) Given the 2 million passing tables, we consider each table column as a potential output column and concatenate all other columns to form the input (which produces 5.6M candidate tasks).
5. Rule-based checks to reject tasks:
   a) We reject a task if it has fewer than 6 rows. Note that tasks may have fewer rows than their origin tables, since we remove rows where the output column is empty.
   b) We reject a task if any input maps to multiple outputs.
   c) We reject a task if it has fewer than 2 output classes.
   d) We reject a task if the output column alone has > 20% non-English text.
   e) We reject a task if the classes are heavily imbalanced.

6. Lastly, we apply domain-level filtering. Initial iterations of our dataset found a significant imbalance in terms of the website of origin for our generated tasks. In particular, we found that the most-frequent domain in the WDC corpus, Cappex.com, was emphasized by our export criteria such that this website alone represented 41% of our total tasks. Since we want our dataset to represent the diversity of all the tables available on the web, we apply a hard fix for this imbalance by limiting the number of tasks per domain. Starting from the initial corpus of 50M tables from 323160 web domains, our resulting longlist of tasks comprises more than X for a total of 413350 tasks.
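
The task-level checks in step 5 can be sketched as follows (a simplified illustration: the non-English check (5d) is omitted, and the imbalance cutoff `max_class_share` is a hypothetical placeholder, since the exact criterion is not specified here):

```python
from collections import Counter

def passes_task_checks(examples, min_rows=6, max_class_share=0.9):
    """Task-level rule-based checks; `max_class_share` is an illustrative threshold."""
    # 5a) at least `min_rows` examples (rows with empty outputs are assumed removed)
    if len(examples) < min_rows:
        return False
    # 5b) reject if any input maps to multiple outputs
    mapping = {}
    for ex in examples:
        if mapping.setdefault(ex["input"], ex["output"]) != ex["output"]:
            return False
    # 5c) at least 2 output classes
    counts = Counter(ex["output"] for ex in examples)
    if len(counts) < 2:
        return False
    # 5e) reject heavily imbalanced classes
    if max(counts.values()) / len(examples) > max_class_share:
        return False
    return True
```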

#### Who are the source language producers?

The dataset is extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

No annotation process.

#### Who are the annotators?

-

### Personal and Sensitive Information

The data was extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g. data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop models that are better at few-shot learning and have higher few-shot performance, by fine-tuning on few-shot tasks extracted from tables.

While tables have a similar structure to few-shot tasks, and we do see improved performance on few-shot tasks in our paper, we want to make clear that fine-tuning on tables also has its risks. First of all, since the tables are extracted from the web, they may contain user identities or otherwise sensitive information which a model might reveal at inference, or which could influence the learning process of a model in a negative way. Second, since tables are very diverse in nature, the model also trains on low-quality data or data with an unusual structure. While it is interesting that training on such data improves few-shot performance on downstream tasks, this could also imply that the model learns concepts that are very dissimilar to the human concepts that would be useful for a certain downstream task. In other words, it is possible that the model learns patterns that happen to help on the evaluated downstream tasks but lead to bad out-of-distribution behavior.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our dataset, nor have we explicitly filtered the content for toxicity.
This implies that a model trained on our dataset will reinforce harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

[Needs More Information]