parquet-converter committed on
Commit e6de5f7
1 Parent(s): 05954c1

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,316 +0,0 @@
1
- ---
2
- language:
3
- - en
4
- paperswithcode_id: scitail
5
- pretty_name: SciTail
6
- dataset_info:
7
- - config_name: snli_format
8
- features:
9
- - name: sentence1_binary_parse
10
- dtype: string
11
- - name: sentence1_parse
12
- dtype: string
13
- - name: sentence1
14
- dtype: string
15
- - name: sentence2_parse
16
- dtype: string
17
- - name: sentence2
18
- dtype: string
19
- - name: annotator_labels
20
- sequence: string
21
- - name: gold_label
22
- dtype: string
23
- splits:
24
- - name: train
25
- num_bytes: 22495833
26
- num_examples: 23596
27
- - name: test
28
- num_bytes: 2008631
29
- num_examples: 2126
30
- - name: validation
31
- num_bytes: 1266529
32
- num_examples: 1304
33
- download_size: 14174621
34
- dataset_size: 25770993
35
- - config_name: tsv_format
36
- features:
37
- - name: premise
38
- dtype: string
39
- - name: hypothesis
40
- dtype: string
41
- - name: label
42
- dtype: string
43
- splits:
44
- - name: train
45
- num_bytes: 4618115
46
- num_examples: 23097
47
- - name: test
48
- num_bytes: 411343
49
- num_examples: 2126
50
- - name: validation
51
- num_bytes: 261086
52
- num_examples: 1304
53
- download_size: 14174621
54
- dataset_size: 5290544
55
- - config_name: dgem_format
56
- features:
57
- - name: premise
58
- dtype: string
59
- - name: hypothesis
60
- dtype: string
61
- - name: label
62
- dtype: string
63
- - name: hypothesis_graph_structure
64
- dtype: string
65
- splits:
66
- - name: train
67
- num_bytes: 6832104
68
- num_examples: 23088
69
- - name: test
70
- num_bytes: 608213
71
- num_examples: 2126
72
- - name: validation
73
- num_bytes: 394040
74
- num_examples: 1304
75
- download_size: 14174621
76
- dataset_size: 7834357
77
- - config_name: predictor_format
78
- features:
79
- - name: answer
80
- dtype: string
81
- - name: sentence2_structure
82
- dtype: string
83
- - name: sentence1
84
- dtype: string
85
- - name: sentence2
86
- dtype: string
87
- - name: gold_label
88
- dtype: string
89
- - name: question
90
- dtype: string
91
- splits:
92
- - name: train
93
- num_bytes: 8884823
94
- num_examples: 23587
95
- - name: test
96
- num_bytes: 797161
97
- num_examples: 2126
98
- - name: validation
99
- num_bytes: 511305
100
- num_examples: 1304
101
- download_size: 14174621
102
- dataset_size: 10193289
103
- ---
104
-
105
- # Dataset Card for "scitail"
106
-
107
- ## Table of Contents
108
- - [Dataset Description](#dataset-description)
109
- - [Dataset Summary](#dataset-summary)
110
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
111
- - [Languages](#languages)
112
- - [Dataset Structure](#dataset-structure)
113
- - [Data Instances](#data-instances)
114
- - [Data Fields](#data-fields)
115
- - [Data Splits](#data-splits)
116
- - [Dataset Creation](#dataset-creation)
117
- - [Curation Rationale](#curation-rationale)
118
- - [Source Data](#source-data)
119
- - [Annotations](#annotations)
120
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
121
- - [Considerations for Using the Data](#considerations-for-using-the-data)
122
- - [Social Impact of Dataset](#social-impact-of-dataset)
123
- - [Discussion of Biases](#discussion-of-biases)
124
- - [Other Known Limitations](#other-known-limitations)
125
- - [Additional Information](#additional-information)
126
- - [Dataset Curators](#dataset-curators)
127
- - [Licensing Information](#licensing-information)
128
- - [Citation Information](#citation-information)
129
- - [Contributions](#contributions)
130
-
131
- ## Dataset Description
132
-
133
- - **Homepage:** [https://allenai.org/data/scitail](https://allenai.org/data/scitail)
134
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
135
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
136
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
137
- - **Size of downloaded dataset files:** 54.07 MB
138
- - **Size of the generated dataset:** 46.82 MB
139
- - **Total amount of disk used:** 100.89 MB
140
-
141
- ### Dataset Summary
142
-
143
- The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question
144
- and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information
145
- retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We
146
- crowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create
147
- the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples
148
- with neutral label
149
-
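The configurations described in this card can be loaded with the 🤗 `datasets` library; a minimal sketch (config names taken from the card, everything else illustrative):

```python
from datasets import load_dataset

# Any of the four configurations named in this card works here:
# "snli_format", "tsv_format", "dgem_format" or "predictor_format".
scitail = load_dataset("scitail", "snli_format")

print(scitail)                # DatasetDict with train / validation / test splits
example = scitail["train"][0]
print(example["sentence1"])   # premise sentence
print(example["sentence2"])   # hypothesis sentence
print(example["gold_label"])  # "entails" or "neutral"
```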
150
- ### Supported Tasks and Leaderboards
151
-
152
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
153
-
154
- ### Languages
155
-
156
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
157
-
158
- ## Dataset Structure
159
-
160
- ### Data Instances
161
-
162
- #### dgem_format
163
-
164
- - **Size of downloaded dataset files:** 13.52 MB
165
- - **Size of the generated dataset:** 7.47 MB
166
- - **Total amount of disk used:** 20.99 MB
167
-
168
- An example of 'train' looks as follows.
169
- ```
170
-
171
- ```
172
-
173
- #### predictor_format
174
-
175
- - **Size of downloaded dataset files:** 13.52 MB
176
- - **Size of the generated dataset:** 9.72 MB
177
- - **Total amount of disk used:** 23.24 MB
178
-
179
- An example of 'validation' looks as follows.
180
- ```
181
-
182
- ```
183
-
184
- #### snli_format
185
-
186
- - **Size of downloaded dataset files:** 13.52 MB
187
- - **Size of the generated dataset:** 24.58 MB
188
- - **Total amount of disk used:** 38.10 MB
189
-
190
- An example of 'validation' looks as follows.
191
- ```
192
-
193
- ```
194
-
195
- #### tsv_format
196
-
197
- - **Size of downloaded dataset files:** 13.52 MB
198
- - **Size of the generated dataset:** 5.05 MB
199
- - **Total amount of disk used:** 18.56 MB
200
-
201
- An example of 'validation' looks as follows.
202
- ```
203
-
204
- ```
205
-
206
- ### Data Fields
207
-
208
- The data fields are the same among all splits.
209
-
210
- #### dgem_format
211
- - `premise`: a `string` feature.
212
- - `hypothesis`: a `string` feature.
213
- - `label`: a `string` feature.
214
- - `hypothesis_graph_structure`: a `string` feature.
215
-
216
- #### predictor_format
217
- - `answer`: a `string` feature.
218
- - `sentence2_structure`: a `string` feature.
219
- - `sentence1`: a `string` feature.
220
- - `sentence2`: a `string` feature.
221
- - `gold_label`: a `string` feature.
222
- - `question`: a `string` feature.
223
-
224
- #### snli_format
225
- - `sentence1_binary_parse`: a `string` feature.
226
- - `sentence1_parse`: a `string` feature.
227
- - `sentence1`: a `string` feature.
228
- - `sentence2_parse`: a `string` feature.
229
- - `sentence2`: a `string` feature.
230
- - `annotator_labels`: a `list` of `string` features.
231
- - `gold_label`: a `string` feature.
232
-
233
- #### tsv_format
234
- - `premise`: a `string` feature.
235
- - `hypothesis`: a `string` feature.
236
- - `label`: a `string` feature.
237
-
238
- ### Data Splits
239
-
240
- | name |train|validation|test|
241
- |----------------|----:|---------:|---:|
242
- |dgem_format |23088| 1304|2126|
243
- |predictor_format|23587| 1304|2126|
244
- |snli_format |23596| 1304|2126|
245
- |tsv_format |23097| 1304|2126|
246
-
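A short sketch for reproducing the split counts above with the same library (illustrative, not part of the original card):

```python
from datasets import load_dataset

# Print the number of rows per split for every configuration in the table.
for config in ["dgem_format", "predictor_format", "snli_format", "tsv_format"]:
    ds = load_dataset("scitail", config)
    print(config, {split: ds[split].num_rows for split in ds})
    # e.g. tsv_format {'train': 23097, 'validation': 1304, 'test': 2126}
```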
247
- ## Dataset Creation
248
-
249
- ### Curation Rationale
250
-
251
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
252
-
253
- ### Source Data
254
-
255
- #### Initial Data Collection and Normalization
256
-
257
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
258
-
259
- #### Who are the source language producers?
260
-
261
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
262
-
263
- ### Annotations
264
-
265
- #### Annotation process
266
-
267
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
268
-
269
- #### Who are the annotators?
270
-
271
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
272
-
273
- ### Personal and Sensitive Information
274
-
275
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
276
-
277
- ## Considerations for Using the Data
278
-
279
- ### Social Impact of Dataset
280
-
281
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
282
-
283
- ### Discussion of Biases
284
-
285
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
286
-
287
- ### Other Known Limitations
288
-
289
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
290
-
291
- ## Additional Information
292
-
293
- ### Dataset Curators
294
-
295
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
296
-
297
- ### Licensing Information
298
-
299
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
300
-
301
- ### Citation Information
302
-
303
- ```
304
- inproceedings{scitail,
305
- Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},
306
- Booktitle = {AAAI},
307
- Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},
308
- Year = {2018}
309
- }
310
-
311
- ```
312
-
313
-
314
- ### Contributions
315
-
316
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"snli_format": {"description": "The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question \nand the correct answer choice are converted into an assertive statement to form the hypothesis. We use information \nretrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We \ncrowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create \nthe SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples \nwith neutral label\n", "citation": "inproceedings{scitail,\n Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},\n Booktitle = {AAAI},\n Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},\n Year = {2018}\n}\n", "homepage": "https://allenai.org/data/scitail", "license": "", "features": {"sentence1_binary_parse": {"dtype": "string", "id": null, "_type": "Value"}, "sentence1_parse": {"dtype": "string", "id": null, "_type": "Value"}, "sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2_parse": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "annotator_labels": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "gold_label": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "scitail", "config_name": "snli_format", "version": {"version_str": "1.1.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 22495833, "num_examples": 23596, "dataset_name": "scitail"}, "test": {"name": "test", "num_bytes": 2008631, "num_examples": 2126, "dataset_name": "scitail"}, "validation": {"name": "validation", "num_bytes": 1266529, "num_examples": 1304, "dataset_name": "scitail"}}, "download_checksums": {"http://data.allenai.org.s3.amazonaws.com/downloads/SciTailV1.1.zip": {"num_bytes": 14174621, "checksum": "3fccd37350a94ca280b75998568df85fc2fc62843a3198d644fcbf858e6943d5"}}, "download_size": 14174621, "dataset_size": 25770993, "size_in_bytes": 39945614}, "tsv_format": {"description": "The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question \nand the correct answer choice are converted into an assertive statement to form the hypothesis. We use information \nretrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We \ncrowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create \nthe SciTail dataset. 
The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples \nwith neutral label\n", "citation": "inproceedings{scitail,\n Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},\n Booktitle = {AAAI},\n Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},\n Year = {2018}\n}\n", "homepage": "https://allenai.org/data/scitail", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "scitail", "config_name": "tsv_format", "version": {"version_str": "1.1.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4618115, "num_examples": 23097, "dataset_name": "scitail"}, "test": {"name": "test", "num_bytes": 411343, "num_examples": 2126, "dataset_name": "scitail"}, "validation": {"name": "validation", "num_bytes": 261086, "num_examples": 1304, "dataset_name": "scitail"}}, "download_checksums": {"http://data.allenai.org.s3.amazonaws.com/downloads/SciTailV1.1.zip": {"num_bytes": 14174621, "checksum": "3fccd37350a94ca280b75998568df85fc2fc62843a3198d644fcbf858e6943d5"}}, "download_size": 14174621, "dataset_size": 5290544, "size_in_bytes": 19465165}, "dgem_format": {"description": "The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question \nand the correct answer choice are converted into an assertive statement to form the hypothesis. We use information \nretrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We \ncrowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create \nthe SciTail dataset. 
The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples \nwith neutral label\n", "citation": "inproceedings{scitail,\n Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},\n Booktitle = {AAAI},\n Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},\n Year = {2018}\n}\n", "homepage": "https://allenai.org/data/scitail", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis_graph_structure": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "scitail", "config_name": "dgem_format", "version": {"version_str": "1.1.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 6832104, "num_examples": 23088, "dataset_name": "scitail"}, "test": {"name": "test", "num_bytes": 608213, "num_examples": 2126, "dataset_name": "scitail"}, "validation": {"name": "validation", "num_bytes": 394040, "num_examples": 1304, "dataset_name": "scitail"}}, "download_checksums": {"http://data.allenai.org.s3.amazonaws.com/downloads/SciTailV1.1.zip": {"num_bytes": 14174621, "checksum": "3fccd37350a94ca280b75998568df85fc2fc62843a3198d644fcbf858e6943d5"}}, "download_size": 14174621, "dataset_size": 7834357, "size_in_bytes": 22008978}, "predictor_format": {"description": "The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question \nand the correct answer choice are converted into an assertive statement to form the hypothesis. We use information \nretrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We \ncrowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create \nthe SciTail dataset. 
The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples \nwith neutral label\n", "citation": "inproceedings{scitail,\n Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},\n Booktitle = {AAAI},\n Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},\n Year = {2018}\n}\n", "homepage": "https://allenai.org/data/scitail", "license": "", "features": {"answer": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2_structure": {"dtype": "string", "id": null, "_type": "Value"}, "sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "gold_label": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "scitail", "config_name": "predictor_format", "version": {"version_str": "1.1.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8884823, "num_examples": 23587, "dataset_name": "scitail"}, "test": {"name": "test", "num_bytes": 797161, "num_examples": 2126, "dataset_name": "scitail"}, "validation": {"name": "validation", "num_bytes": 511305, "num_examples": 1304, "dataset_name": "scitail"}}, "download_checksums": {"http://data.allenai.org.s3.amazonaws.com/downloads/SciTailV1.1.zip": {"num_bytes": 14174621, "checksum": "3fccd37350a94ca280b75998568df85fc2fc62843a3198d644fcbf858e6943d5"}}, "download_size": 14174621, "dataset_size": 10193289, "size_in_bytes": 24367910}}
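The metadata above is a single JSON object keyed by configuration name; a minimal sketch for summarising it with the standard library (assuming a local copy of `dataset_infos.json`):

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# Report the number of examples per split for every configuration.
for config, info in infos.items():
    splits = {name: split["num_examples"] for name, split in info["splits"].items()}
    print(config, splits)
```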
 
 
dgem_format/scitail-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb40add3d19967e2b095fa34fe01ea2f9237e088eaa7ef350d9a004968c7dd6c
+ size 185038
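The three `+` lines above are the Git LFS pointer that stands in for the actual Parquet shard; a small parsing sketch (purely illustrative, assuming the pointer file has not yet been replaced by the real data via `git lfs`):

```python
def read_lfs_pointer(path):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    with open(path, encoding="utf-8") as f:
        return dict(line.strip().split(" ", 1) for line in f if line.strip())

pointer = read_lfs_pointer("dgem_format/scitail-test.parquet")
print(pointer["oid"])   # sha256:bb40add3...
print(pointer["size"])  # 185038 bytes for the real file
```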
dgem_format/scitail-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51cac6e69cf16946656a83fe2a2ba4ea802234beaeaf114aa520f28000453619
+ size 1709685
dgem_format/scitail-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eab76d65ad4aa8babb17491c799247ddd69fa653f778997c3bbcd92fc12fba37
+ size 112292
predictor_format/scitail-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64a4be96180e2c48f7d17844eeea07868b29afd6144ba5f73cbfde87b0596c89
+ size 210213
predictor_format/scitail-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:354ef1123768935553de3219cacff148806218b949602e828b248945805687ef
+ size 1833841
predictor_format/scitail-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d555a289a300fe77e24968b029752b162ad1e67afc97a2b20ca9323f23f86d6
+ size 125181
scitail.py DELETED
@@ -1,298 +0,0 @@
1
- """TODO(sciTail): Add a description here."""
2
-
3
-
4
- import csv
5
- import json
6
- import os
7
- import textwrap
8
-
9
- import datasets
10
-
11
-
12
- # TODO(sciTail): BibTeX citation
13
- _CITATION = """\
14
- inproceedings{scitail,
15
- Author = {Tushar Khot and Ashish Sabharwal and Peter Clark},
16
- Booktitle = {AAAI},
17
- Title = {{SciTail}: A Textual Entailment Dataset from Science Question Answering},
18
- Year = {2018}
19
- }
20
- """
21
-
22
- # TODO(sciTail):
23
- _DESCRIPTION = """\
24
- The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question
25
- and the correct answer choice are converted into an assertive statement to form the hypothesis. We use information
26
- retrieval to obtain relevant text from a large text corpus of web sentences, and use these sentences as a premise P. We
27
- crowdsource the annotation of such premise-hypothesis pair as supports (entails) or not (neutral), in order to create
28
- the SciTail dataset. The dataset contains 27,026 examples with 10,101 examples with entails label and 16,925 examples
29
- with neutral label
30
- """
31
-
32
- _URL = "http://data.allenai.org.s3.amazonaws.com/downloads/SciTailV1.1.zip"
33
-
34
-
35
- class ScitailConfig(datasets.BuilderConfig):
36
-
37
- """BuilderConfig for Xquad"""
38
-
39
- def __init__(self, **kwargs):
40
- """
41
-
42
- Args:
43
- **kwargs: keyword arguments forwarded to super.
44
- """
45
- super(ScitailConfig, self).__init__(version=datasets.Version("1.1.0", ""), **kwargs)
46
-
47
-
48
- class Scitail(datasets.GeneratorBasedBuilder):
49
- """TODO(sciTail): Short description of my dataset."""
50
-
51
- # TODO(sciTail): Set up version.
52
- VERSION = datasets.Version("1.1.0")
53
- BUILDER_CONFIGS = [
54
- ScitailConfig(
55
- name="snli_format",
56
- description="JSONL format used by SNLI with a JSON object corresponding to each entailment example in each line.",
57
- ),
58
- ScitailConfig(
59
- name="tsv_format", description="Tab-separated format with three columns: premise hypothesis label"
60
- ),
61
- ScitailConfig(
62
- name="dgem_format",
63
- description="Tab-separated format used by the DGEM model: premise hypothesis label hypothesis graph structure",
64
- ),
65
- ScitailConfig(
66
- name="predictor_format",
67
- description=textwrap.dedent(
68
- """\
69
- AllenNLP predictors work only with JSONL format. This folder contains the SciTail train/dev/test in JSONL format
70
- so that it can be loaded into the predictors. Each line is a JSON object with the following keys:
71
- gold_label : the example label from {entails, neutral}
72
- sentence1: the premise
73
- sentence2: the hypothesis
74
- sentence2_structure: structure from the hypothesis """
75
- ),
76
- ),
77
- ]
78
-
79
- def _info(self):
80
- # TODO(sciTail): Specifies the datasets.DatasetInfo object
81
- if self.config.name == "snli_format":
82
- return datasets.DatasetInfo(
83
- # This is the description that will appear on the datasets page.
84
- description=_DESCRIPTION,
85
- # datasets.features.FeatureConnectors
86
- features=datasets.Features(
87
- {
88
- "sentence1_binary_parse": datasets.Value("string"),
89
- "sentence1_parse": datasets.Value("string"),
90
- "sentence1": datasets.Value("string"),
91
- "sentence2_parse": datasets.Value("string"),
92
- "sentence2": datasets.Value("string"),
93
- "annotator_labels": datasets.features.Sequence(datasets.Value("string")),
94
- "gold_label": datasets.Value("string")
95
- # These are the features of your dataset like images, labels ...
96
- }
97
- ),
98
- # If there's a common (input, target) tuple from the features,
99
- # specify them here. They'll be used if as_supervised=True in
100
- # builder.as_dataset.
101
- supervised_keys=None,
102
- # Homepage of the dataset for documentation
103
- homepage="https://allenai.org/data/scitail",
104
- citation=_CITATION,
105
- )
106
- elif self.config.name == "tsv_format":
107
- return datasets.DatasetInfo(
108
- # This is the description that will appear on the datasets page.
109
- description=_DESCRIPTION,
110
- # datasets.features.FeatureConnectors
111
- features=datasets.Features(
112
- {
113
- "premise": datasets.Value("string"),
114
- "hypothesis": datasets.Value("string"),
115
- "label": datasets.Value("string")
116
- # These are the features of your dataset like images, labels ...
117
- }
118
- ),
119
- # If there's a common (input, target) tuple from the features,
120
- # specify them here. They'll be used if as_supervised=True in
121
- # builder.as_dataset.
122
- supervised_keys=None,
123
- # Homepage of the dataset for documentation
124
- homepage="https://allenai.org/data/scitail",
125
- citation=_CITATION,
126
- )
127
- elif self.config.name == "predictor_format":
128
- return datasets.DatasetInfo(
129
- # This is the description that will appear on the datasets page.
130
- description=_DESCRIPTION,
131
- # datasets.features.FeatureConnectors
132
- features=datasets.Features(
133
- {
134
- "answer": datasets.Value("string"),
135
- "sentence2_structure": datasets.Value("string"),
136
- "sentence1": datasets.Value("string"),
137
- "sentence2": datasets.Value("string"),
138
- "gold_label": datasets.Value("string"),
139
- "question": datasets.Value("string")
140
- # These are the features of your dataset like images, labels ...
141
- }
142
- ),
143
- # If there's a common (input, target) tuple from the features,
144
- # specify them here. They'll be used if as_supervised=True in
145
- # builder.as_dataset.
146
- supervised_keys=None,
147
- # Homepage of the dataset for documentation
148
- homepage="https://allenai.org/data/scitail",
149
- citation=_CITATION,
150
- )
151
- elif self.config.name == "dgem_format":
152
- return datasets.DatasetInfo(
153
- # This is the description that will appear on the datasets page.
154
- description=_DESCRIPTION,
155
- # datasets.features.FeatureConnectors
156
- features=datasets.Features(
157
- {
158
- "premise": datasets.Value("string"),
159
- "hypothesis": datasets.Value("string"),
160
- "label": datasets.Value("string"),
161
- "hypothesis_graph_structure": datasets.Value("string")
162
- # These are the features of your dataset like images, labels ...
163
- }
164
- ),
165
- # If there's a common (input, target) tuple from the features,
166
- # specify them here. They'll be used if as_supervised=True in
167
- # builder.as_dataset.
168
- supervised_keys=None,
169
- # Homepage of the dataset for documentation
170
- homepage="https://allenai.org/data/scitail",
171
- citation=_CITATION,
172
- )
173
-
174
- def _split_generators(self, dl_manager):
175
- """Returns SplitGenerators."""
176
- # TODO(sciTail): Downloads the data and defines the splits
177
- # dl_manager is a datasets.download.DownloadManager that can be used to
178
- # download and extract URLs
179
- dl_dir = dl_manager.download_and_extract(_URL)
180
- data_dir = os.path.join(dl_dir, "SciTailV1.1")
181
- snli = os.path.join(data_dir, "snli_format")
182
- dgem = os.path.join(data_dir, "dgem_format")
183
- tsv = os.path.join(data_dir, "tsv_format")
184
- predictor = os.path.join(data_dir, "predictor_format")
185
- if self.config.name == "snli_format":
186
- return [
187
- datasets.SplitGenerator(
188
- name=datasets.Split.TRAIN,
189
- # These kwargs will be passed to _generate_examples
190
- gen_kwargs={"filepath": os.path.join(snli, "scitail_1.0_train.txt")},
191
- ),
192
- datasets.SplitGenerator(
193
- name=datasets.Split.TEST,
194
- # These kwargs will be passed to _generate_examples
195
- gen_kwargs={"filepath": os.path.join(snli, "scitail_1.0_test.txt")},
196
- ),
197
- datasets.SplitGenerator(
198
- name=datasets.Split.VALIDATION,
199
- # These kwargs will be passed to _generate_examples
200
- gen_kwargs={"filepath": os.path.join(snli, "scitail_1.0_dev.txt")},
201
- ),
202
- ]
203
- elif self.config.name == "tsv_format":
204
- return [
205
- datasets.SplitGenerator(
206
- name=datasets.Split.TRAIN,
207
- # These kwargs will be passed to _generate_examples
208
- gen_kwargs={"filepath": os.path.join(tsv, "scitail_1.0_train.tsv")},
209
- ),
210
- datasets.SplitGenerator(
211
- name=datasets.Split.TEST,
212
- # These kwargs will be passed to _generate_examples
213
- gen_kwargs={"filepath": os.path.join(tsv, "scitail_1.0_test.tsv")},
214
- ),
215
- datasets.SplitGenerator(
216
- name=datasets.Split.VALIDATION,
217
- # These kwargs will be passed to _generate_examples
218
- gen_kwargs={"filepath": os.path.join(tsv, "scitail_1.0_dev.tsv")},
219
- ),
220
- ]
221
- elif self.config.name == "predictor_format":
222
- return [
223
- datasets.SplitGenerator(
224
- name=datasets.Split.TRAIN,
225
- # These kwargs will be passed to _generate_examples
226
- gen_kwargs={"filepath": os.path.join(predictor, "scitail_1.0_structure_train.jsonl")},
227
- ),
228
- datasets.SplitGenerator(
229
- name=datasets.Split.TEST,
230
- # These kwargs will be passed to _generate_examples
231
- gen_kwargs={"filepath": os.path.join(predictor, "scitail_1.0_structure_test.jsonl")},
232
- ),
233
- datasets.SplitGenerator(
234
- name=datasets.Split.VALIDATION,
235
- # These kwargs will be passed to _generate_examples
236
- gen_kwargs={"filepath": os.path.join(predictor, "scitail_1.0_structure_dev.jsonl")},
237
- ),
238
- ]
239
- elif self.config.name == "dgem_format":
240
- return [
241
- datasets.SplitGenerator(
242
- name=datasets.Split.TRAIN,
243
- # These kwargs will be passed to _generate_examples
244
- gen_kwargs={"filepath": os.path.join(dgem, "scitail_1.0_structure_train.tsv")},
245
- ),
246
- datasets.SplitGenerator(
247
- name=datasets.Split.TEST,
248
- # These kwargs will be passed to _generate_examples
249
- gen_kwargs={"filepath": os.path.join(dgem, "scitail_1.0_structure_test.tsv")},
250
- ),
251
- datasets.SplitGenerator(
252
- name=datasets.Split.VALIDATION,
253
- # These kwargs will be passed to _generate_examples
254
- gen_kwargs={"filepath": os.path.join(dgem, "scitail_1.0_structure_dev.tsv")},
255
- ),
256
- ]
257
-
258
- def _generate_examples(self, filepath):
259
- """Yields examples."""
260
- # TODO(sciTail): Yields (key, example) tuples from the dataset
261
- with open(filepath, encoding="utf-8") as f:
262
- if self.config.name == "snli_format":
263
- for id_, row in enumerate(f):
264
- data = json.loads(row)
265
-
266
- yield id_, {
267
- "sentence1_binary_parse": data["sentence1_binary_parse"],
268
- "sentence1_parse": data["sentence1_parse"],
269
- "sentence1": data["sentence1"],
270
- "sentence2_parse": data["sentence2_parse"],
271
- "sentence2": data["sentence2"],
272
- "annotator_labels": data["annotator_labels"],
273
- "gold_label": data["gold_label"],
274
- }
275
- elif self.config.name == "tsv_format":
276
- data = csv.reader(f, delimiter="\t")
277
- for id_, row in enumerate(data):
278
- yield id_, {"premise": row[0], "hypothesis": row[1], "label": row[2]}
279
- elif self.config.name == "dgem_format":
280
- data = csv.reader(f, delimiter="\t")
281
- for id_, row in enumerate(data):
282
- yield id_, {
283
- "premise": row[0],
284
- "hypothesis": row[1],
285
- "label": row[2],
286
- "hypothesis_graph_structure": row[3],
287
- }
288
- elif self.config.name == "predictor_format":
289
- for id_, row in enumerate(f):
290
- data = json.loads(row)
291
- yield id_, {
292
- "answer": data["answer"],
293
- "sentence2_structure": data["sentence2_structure"],
294
- "sentence1": data["sentence1"],
295
- "sentence2": data["sentence2"],
296
- "gold_label": data["gold_label"],
297
- "question": data["question"],
298
- }
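The `predictor_format` files handled by the deleted script above are plain JSONL; a minimal sketch of reading one by hand (file name taken from the script, everything else illustrative):

```python
import json

# Each line is a JSON object with the keys used in _generate_examples:
# answer, sentence2_structure, sentence1, sentence2, gold_label, question.
with open("scitail_1.0_structure_dev.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        print(example["gold_label"], "|", example["sentence1"][:60])
        break  # only show the first example
```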
 
snli_format/scitail-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df259a293384bcff6b9258d30b204994d89530f3d09281b1376ddc1c90114be3
+ size 653111
snli_format/scitail-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a022f8c64d08d3a6c3d5703be944d267441f90e27fd86711d2d330539bbe1022
+ size 6423088
snli_format/scitail-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a80952692cc87a2afbd4a403d48a09b08a21868cd45744eb03d7a14abf15067c
+ size 400281
tsv_format/scitail-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72801ad602378953ec301ade78b8fba265f42fe996c7bb6ca5f161d07c8c0f4f
+ size 162165
tsv_format/scitail-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0584a95cb429a963606df8b1f1e9407b33f87dfea8137fed6154a48857de2b82
+ size 1574549
tsv_format/scitail-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fba715debcc2a433e01f73eaccc361cf930b931b14c57faa29b431fcd024a2d2
+ size 99829
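Once the repository is checked out and the LFS objects are fetched, the converted shards listed in this commit can be read directly; a minimal sketch with pandas (paths taken from this commit, expected columns from the deleted card):

```python
import pandas as pd

# Read one of the Parquet shards added by this commit.
train = pd.read_parquet("tsv_format/scitail-train.parquet")

print(train.columns.tolist())         # ['premise', 'hypothesis', 'label']
print(len(train))                     # 23097 training examples
print(train["label"].value_counts())  # "entails" vs "neutral" counts
```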