Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas

parquet-converter committed on
Commit 750dfc3
1 Parent(s): c383ade

Update parquet files

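This commit replaces the loading script below with pre-converted parquet files, so the dataset can be read without executing repository code. A minimal sketch with the Hugging Face Datasets library, assuming the standard `load_dataset` API (split names follow the dataset card below):

```
from datasets import load_dataset

# Loads all nine splits (train/dev/test for rounds 1-3)
# from the converted parquet files.
anli = load_dataset("anli")
print(anli["train_r1"][0])  # a uid/premise/hypothesis/label/reason record
```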
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,241 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- - machine-generated
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-nc-4.0
- multilinguality:
- - monolingual
- pretty_name: Adversarial NLI
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- - extended|hotpot_qa
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
- - multi-input-text-classification
- paperswithcode_id: anli
- dataset_info:
-   features:
-   - name: uid
-     dtype: string
-   - name: premise
-     dtype: string
-   - name: hypothesis
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: entailment
-           1: neutral
-           2: contradiction
-   - name: reason
-     dtype: string
-   config_name: plain_text
-   splits:
-   - name: train_r1
-     num_bytes: 8006920
-     num_examples: 16946
-   - name: dev_r1
-     num_bytes: 573444
-     num_examples: 1000
-   - name: test_r1
-     num_bytes: 574933
-     num_examples: 1000
-   - name: train_r2
-     num_bytes: 20801661
-     num_examples: 45460
-   - name: dev_r2
-     num_bytes: 556082
-     num_examples: 1000
-   - name: test_r2
-     num_bytes: 572655
-     num_examples: 1000
-   - name: train_r3
-     num_bytes: 44720895
-     num_examples: 100459
-   - name: dev_r3
-     num_bytes: 663164
-     num_examples: 1200
-   - name: test_r3
-     num_bytes: 657602
-     num_examples: 1200
-   download_size: 18621352
-   dataset_size: 77127356
- ---
-
- # Dataset Card for "anli"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:** [https://github.com/facebookresearch/anli/](https://github.com/facebookresearch/anli/)
- - **Paper:** [Adversarial NLI: A New Benchmark for Natural Language Understanding](https://arxiv.org/abs/1910.14599)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 17.76 MB
- - **Size of the generated dataset:** 73.55 MB
- - **Total amount of disk used:** 91.31 MB
-
- ### Dataset Summary
-
- Adversarial Natural Language Inference (ANLI) is a large-scale NLI benchmark dataset,
- collected via an iterative, adversarial human-and-model-in-the-loop procedure.
- ANLI is much more difficult than its predecessors, including SNLI and MNLI.
- It contains three rounds, each with train/dev/test splits.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### plain_text
-
- - **Size of downloaded dataset files:** 17.76 MB
- - **Size of the generated dataset:** 73.55 MB
- - **Total amount of disk used:** 91.31 MB
-
- An example of 'train_r2' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "hypothesis": "Idris Sultan was born in the first month of the year preceding 1994.",
-     "label": 0,
-     "premise": "\"Idris Sultan (born January 1993) is a Tanzanian Actor and comedian, actor and radio host who won the Big Brother Africa-Hotshot...",
-     "reason": "",
-     "uid": "ed5c37ab-77c5-4dbc-ba75-8fd617b19712"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### plain_text
- - `uid`: a `string` feature.
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- - `reason`: a `string` feature.
-
- ### Data Splits
-
- | name |train_r1|dev_r1|train_r2|dev_r2|train_r3|dev_r3|test_r1|test_r2|test_r3|
- |----------|-------:|-----:|-------:|-----:|-------:|-----:|------:|------:|------:|
- |plain_text| 16946| 1000| 45460| 1000| 100459| 1200| 1000| 1000| 1200|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://github.com/facebookresearch/anli/blob/main/LICENSE)
-
- ### Citation Information
-
- ```
- @InProceedings{nie2019adversarial,
-     title={Adversarial NLI: A New Benchmark for Natural Language Understanding},
-     author={Nie, Yixin
-         and Williams, Adina
-         and Dinan, Emily
-         and Bansal, Mohit
-         and Weston, Jason
-         and Kiela, Douwe},
-     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
-     year = "2020",
-     publisher = "Association for Computational Linguistics",
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@easonnie](https://github.com/easonnie), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
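The `label` field described in the card above is a `ClassLabel`; a hedged sketch of mapping its stored integers back to the class names, assuming the `plain_text` config loads as documented:

```
from datasets import load_dataset

anli = load_dataset("anli")
label_feature = anli["dev_r1"].features["label"]
print(label_feature.names)  # ['entailment', 'neutral', 'contradiction']

example = anli["dev_r1"][0]
# int2str converts the stored integer back to its class name.
print(label_feature.int2str(example["label"]))
```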
 
anli.py DELETED
@@ -1,152 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """The Adversarial NLI Corpus."""
-
-
- import json
- import os
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{nie2019adversarial,
-     title={Adversarial NLI: A New Benchmark for Natural Language Understanding},
-     author={Nie, Yixin
-         and Williams, Adina
-         and Dinan, Emily
-         and Bansal, Mohit
-         and Weston, Jason
-         and Kiela, Douwe},
-     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
-     year = "2020",
-     publisher = "Association for Computational Linguistics",
- }
- """
-
- _DESCRIPTION = """\
- The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset,
- The dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.
- ANLI is much more difficult than its predecessors including SNLI and MNLI.
- It contains three rounds. Each round has train/dev/test splits.
- """
-
- stdnli_label = {
-     "e": "entailment",
-     "n": "neutral",
-     "c": "contradiction",
- }
-
-
- class ANLIConfig(datasets.BuilderConfig):
-     """BuilderConfig for ANLI."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for ANLI.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(ANLIConfig, self).__init__(version=datasets.Version("0.1.0", ""), **kwargs)
-
-
- class ANLI(datasets.GeneratorBasedBuilder):
-     """ANLI: The ANLI Dataset."""
-
-     BUILDER_CONFIGS = [
-         ANLIConfig(
-             name="plain_text",
-             description="Plain text",
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "uid": datasets.Value("string"),
-                     "premise": datasets.Value("string"),
-                     "hypothesis": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
-                     "reason": datasets.Value("string"),
-                 }
-             ),
-             # No default supervised_keys (as we have to pass both premise
-             # and hypothesis as input).
-             supervised_keys=None,
-             homepage="https://github.com/facebookresearch/anli/",
-             citation=_CITATION,
-         )
-
-     def _vocab_text_gen(self, filepath):
-         for _, ex in self._generate_examples(filepath):
-             yield " ".join([ex["premise"], ex["hypothesis"]])
-
-     def _split_generators(self, dl_manager):
-         downloaded_dir = dl_manager.download_and_extract("https://dl.fbaipublicfiles.com/anli/anli_v0.1.zip")
-
-         anli_path = os.path.join(downloaded_dir, "anli_v0.1")
-
-         path_dict = dict()
-         for round_tag in ["R1", "R2", "R3"]:
-             path_dict[round_tag] = dict()
-             for split_name in ["train", "dev", "test"]:
-                 path_dict[round_tag][split_name] = os.path.join(anli_path, round_tag, f"{split_name}.jsonl")
-
-         return [
-             # Round 1
-             datasets.SplitGenerator(name="train_r1", gen_kwargs={"filepath": path_dict["R1"]["train"]}),
-             datasets.SplitGenerator(name="dev_r1", gen_kwargs={"filepath": path_dict["R1"]["dev"]}),
-             datasets.SplitGenerator(name="test_r1", gen_kwargs={"filepath": path_dict["R1"]["test"]}),
-             # Round 2
-             datasets.SplitGenerator(name="train_r2", gen_kwargs={"filepath": path_dict["R2"]["train"]}),
-             datasets.SplitGenerator(name="dev_r2", gen_kwargs={"filepath": path_dict["R2"]["dev"]}),
-             datasets.SplitGenerator(name="test_r2", gen_kwargs={"filepath": path_dict["R2"]["test"]}),
-             # Round 3
-             datasets.SplitGenerator(name="train_r3", gen_kwargs={"filepath": path_dict["R3"]["train"]}),
-             datasets.SplitGenerator(name="dev_r3", gen_kwargs={"filepath": path_dict["R3"]["dev"]}),
-             datasets.SplitGenerator(name="test_r3", gen_kwargs={"filepath": path_dict["R3"]["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Generate anli examples.
-
-         Args:
-             filepath: a string
-
-         Yields:
-             dictionaries containing "premise", "hypothesis" and "label" strings
-         """
-         for idx, line in enumerate(open(filepath, "rb")):
-             if line is not None:
-                 line = line.strip().decode("utf-8")
-                 item = json.loads(line)
-
-                 reason_text = ""
-                 if "reason" in item:
-                     reason_text = item["reason"]
-
-                 yield item["uid"], {
-                     "uid": item["uid"],
-                     "premise": item["context"],
-                     "hypothesis": item["hypothesis"],
-                     "label": stdnli_label[item["label"]],
-                     "reason": reason_text,
-                 }
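For reference, the parsing logic of `_generate_examples` as a standalone sketch, assuming a locally extracted `anli_v0.1` directory (the path below is illustrative):

```
import json

# Same mapping as stdnli_label in the deleted script.
stdnli_label = {"e": "entailment", "n": "neutral", "c": "contradiction"}

with open("anli_v0.1/R1/dev.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        example = {
            "uid": item["uid"],
            "premise": item["context"],  # the raw JSONL stores the premise as "context"
            "hypothesis": item["hypothesis"],
            "label": stdnli_label[item["label"]],  # "e"/"n"/"c" -> full label names
            "reason": item.get("reason", ""),
        }
        print(example["uid"], example["label"])
```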
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"plain_text": {"description": "The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset, \nThe dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.\nANLI is much more difficult than its predecessors including SNLI and MNLI.\nIt contains three rounds. Each round has train/dev/test splits.\n", "citation": "@InProceedings{nie2019adversarial,\n title={Adversarial NLI: A New Benchmark for Natural Language Understanding},\n author={Nie, Yixin \n and Williams, Adina \n and Dinan, Emily \n and Bansal, Mohit \n and Weston, Jason \n and Kiela, Douwe},\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n}\n", "homepage": "https://github.com/facebookresearch/anli/", "license": "", "features": {"uid": {"dtype": "string", "id": null, "_type": "Value"}, "premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}, "reason": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "anli", "config_name": "plain_text", "version": {"version_str": "0.1.0", "description": "", "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train_r1": {"name": "train_r1", "num_bytes": 8006920, "num_examples": 16946, "dataset_name": "anli"}, "dev_r1": {"name": "dev_r1", "num_bytes": 573444, "num_examples": 1000, "dataset_name": "anli"}, "test_r1": {"name": "test_r1", "num_bytes": 574933, "num_examples": 1000, "dataset_name": "anli"}, "train_r2": {"name": "train_r2", "num_bytes": 20801661, "num_examples": 45460, "dataset_name": "anli"}, "dev_r2": {"name": "dev_r2", "num_bytes": 556082, "num_examples": 1000, "dataset_name": "anli"}, "test_r2": {"name": "test_r2", "num_bytes": 572655, "num_examples": 1000, "dataset_name": "anli"}, "train_r3": {"name": "train_r3", "num_bytes": 44720895, "num_examples": 100459, "dataset_name": "anli"}, "dev_r3": {"name": "dev_r3", "num_bytes": 663164, "num_examples": 1200, "dataset_name": "anli"}, "test_r3": {"name": "test_r3", "num_bytes": 657602, "num_examples": 1200, "dataset_name": "anli"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/anli/anli_v0.1.zip": {"num_bytes": 18621352, "checksum": "16ac929a7e90ecf9093deaec89cc81fe86a379265a5320a150028efe50c5cde8"}}, "download_size": 18621352, "dataset_size": 77127356, "size_in_bytes": 95748708}}
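The deleted metadata file is plain JSON; a small sketch of reading the per-split sizes from a local copy:

```
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# Print example and byte counts for every split of the plain_text config.
for split, meta in infos["plain_text"]["splits"].items():
    print(split, meta["num_examples"], meta["num_bytes"])
```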
 
plain_text/anli-dev_r1.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7baeb38b345a33beeecb7f7aec9a949f9c60682a9790ade9add6afbf56bd9a8d
+ size 351478
plain_text/anli-dev_r2.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:575c5e7974e35a9b229dd77d3b14df7b23064ad1b9cecd87cdd7815fa23e9b23
+ size 350605
plain_text/anli-dev_r3.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9db58b6e09ad0fa7a3756cf51e7b29e8083ee32b3609a86528c9ec38a7740cf
+ size 434043
plain_text/anli-test_r1.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb35e57df24ad4a42e0994587c4ce0c45f3ae2cb45c6fbde12012cd70ac94839
+ size 353375
plain_text/anli-test_r2.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab0b9c153693d5d204eec5e207f7bc7eed5547ce16c91e0c659f6d8513fc74c3
+ size 361548
plain_text/anli-test_r3.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59c841387cdb39d2a123618321db2ec8a74df50925812b6a26da2c45d20ec527
+ size 434549
plain_text/anli-train_r1.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80fc7ab26927e7028a8684182fbeeade0a265ae442f21b933b2cbda7f290db2b
+ size 3140119
plain_text/anli-train_r2.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09a2091ea5570d1c5297f2723cc0c8ca32d47c3670731106c2693576323f9a67
+ size 6527556
plain_text/anli-train_r3.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eda56ee1ac5fc5bc83e167de9d76f3b0f0ba6b31f27af5148f88be75f86433e0
+ size 14333466
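The added files are Git LFS pointers (version line, sha256 oid, byte size), not the parquet payloads themselves. After fetching the real files in a local checkout (e.g. `git lfs pull`), any parquet reader works; a minimal pandas sketch, assuming pyarrow or fastparquet is installed:

```
import pandas as pd

# Path is relative to the repository root; one parquet file per split.
df = pd.read_parquet("plain_text/anli-dev_r1.parquet")
print(df.shape)             # expect (1000, 5) per the dataset card
print(df.columns.tolist())  # ['uid', 'premise', 'hypothesis', 'label', 'reason']
```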