system HF staff committed on
Commit
7cefb13
1 Parent(s): 26f6155

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

README.md CHANGED
@@ -1,4 +1,26 @@
  ---
  ---
 
  # Dataset Card for "multi_nli"
@@ -27,7 +49,7 @@
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
 
- ## [Dataset Description](#dataset-description)
 
  - **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +59,7 @@
  - **Size of the generated dataset:** 73.39 MB
  - **Total amount of disk used:** 289.74 MB
 
- ### [Dataset Summary](#dataset-summary)
 
  The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
  crowd-sourced collection of 433k sentence pairs annotated with textual
@@ -46,93 +68,100 @@ that covers a range of genres of spoken and written text, and supports a
  distinctive cross-genre generalization evaluation. The corpus served as the
  basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
 
- ### [Supported Tasks](#supported-tasks)
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Languages](#languages)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## [Dataset Structure](#dataset-structure)
 
- We show detailed information for up to 5 configurations of the dataset.
 
- ### [Data Instances](#data-instances)
-
- #### plain_text
 
  - **Size of downloaded dataset files:** 216.34 MB
  - **Size of the generated dataset:** 73.39 MB
  - **Total amount of disk used:** 289.74 MB
 
- An example of 'validation_matched' looks as follows.
  ```
  {
- "hypothesis": "flammable",
- "label": 0,
- "premise": "inflammable"
  }
  ```
 
- ### [Data Fields](#data-fields)
 
  The data fields are the same among all splits.
 
- #### plain_text
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
 
- ### [Data Splits Sample Size](#data-splits-sample-size)
 
- | name |train |validation_matched|validation_mismatched|
- |----------|-----:|-----------------:|--------------------:|
- |plain_text|392702| 9815| 9832|
 
- ## [Dataset Creation](#dataset-creation)
 
- ### [Curation Rationale](#curation-rationale)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Source Data](#source-data)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Annotations](#annotations)
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Personal and Sensitive Information](#personal-and-sensitive-information)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ## [Considerations for Using the Data](#considerations-for-using-the-data)
 
- ### [Social Impact of Dataset](#social-impact-of-dataset)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Discussion of Biases](#discussion-of-biases)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Other Known Limitations](#other-known-limitations)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ## [Additional Information](#additional-information)
 
- ### [Dataset Curators](#dataset-curators)
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Licensing Information](#licensing-information)
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
- ### [Citation Information](#citation-information)
 
  ```
  @InProceedings{N18-1101,
@@ -152,10 +181,8 @@ The data fields are the same among all splits.
  location = "New Orleans, Louisiana",
  url = "http://aclweb.org/anthology/N18-1101"
  }
-
  ```
 
-
  ### Contributions
 
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
 
  ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-3-0
+ - cc-by-sa-3-0-at
+ - mit
+ - other-Open Portion of the American National Corpus
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - text-scoring
+ task_ids:
+ - semantic-similarity-scoring
  ---
 
  # Dataset Card for "multi_nli"
 
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
 
+ ## Dataset Description
 
  - **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
  - **Size of the generated dataset:** 73.39 MB
  - **Total amount of disk used:** 289.74 MB
 
+ ### Dataset Summary
 
  The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
  crowd-sourced collection of 433k sentence pairs annotated with textual
 
  distinctive cross-genre generalization evaluation. The corpus served as the
  basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
 
+ ### Supported Tasks
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
+ ### Languages
 
+ The dataset contains samples in English only.
 
+ ## Dataset Structure
 
+ ### Data Instances
 
  - **Size of downloaded dataset files:** 216.34 MB
  - **Size of the generated dataset:** 73.39 MB
  - **Total amount of disk used:** 289.74 MB
 
+ Example of a data instance:
+
  ```
  {
+ "promptID": 31193,
+ "pairID": "31193n",
+ "premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
+ "premise_binary_parse": "( ( Conceptually ( cream skimming ) ) ( ( has ( ( ( two ( basic dimensions ) ) - ) ( ( product and ) geography ) ) ) . ) )",
+ "premise_parse": "(ROOT (S (NP (JJ Conceptually) (NN cream) (NN skimming)) (VP (VBZ has) (NP (NP (CD two) (JJ basic) (NNS dimensions)) (: -) (NP (NN product) (CC and) (NN geography)))) (. .)))",
+ "hypothesis": "Product and geography are what make cream skimming work. ",
+ "hypothesis_binary_parse": "( ( ( Product and ) geography ) ( ( are ( what ( make ( cream ( skimming work ) ) ) ) ) . ) )",
+ "hypothesis_parse": "(ROOT (S (NP (NN Product) (CC and) (NN geography)) (VP (VBP are) (SBAR (WHNP (WP what)) (S (VP (VBP make) (NP (NP (NN cream)) (VP (VBG skimming) (NP (NN work)))))))) (. .)))",
+ "genre": "government",
+ "label": 1
  }
  ```
 
+ ### Data Fields
 
  The data fields are the same among all splits.
 
+ - `promptID`: unique identifier for the prompt
+ - `pairID`: unique identifier for the sentence pair
+ - `premise`, `hypothesis`: the sentence pair, as `string` features
+ - `premise_parse`, `hypothesis_parse`: each sentence as parsed by the Stanford PCFG Parser 3.5.2
+ - `premise_binary_parse`, `hypothesis_binary_parse`: the same parses in unlabeled binary-branching format
+ - `genre`: a `string` feature.
+ - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2)
 
+ ### Data Splits Sample Size
 
+ |train |validation_matched|validation_mismatched|
+ |-----:|-----------------:|--------------------:|
+ |392702| 9815| 9832|
 
+ ## Dataset Creation
 
+ ### Curation Rationale
 
+ The authors constructed MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.
 
+ ### Source Data
 
+ Each sentence pair was created by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis.
 
+ ### Annotations
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
+ ### Personal and Sensitive Information
 
+ [More Information Needed]
 
+ ## Considerations for Using the Data
 
+ ### Social Impact of Dataset
 
+ [More Information Needed]
 
+ ### Discussion of Biases
 
+ [More Information Needed]
 
+ ### Other Known Limitations
 
+ [More Information Needed]
 
+ ## Additional Information
 
+ ### Dataset Curators
 
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
+ ### Licensing Information
 
+ The majority of the corpus is released under the OANC's license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
 
+ ### Citation Information
 
  ```
  @InProceedings{N18-1101,
 
  location = "New Orleans, Louisiana",
  url = "http://aclweb.org/anthology/N18-1101"
  }
  ```
 
  ### Contributions
 
+ Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
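The updated card documents the label encoding (`entailment` = 0, `neutral` = 1, `contradiction` = 2). A minimal stdlib-only sketch that decodes the example instance shown in the card (field values copied from the card; the long parse fields are left out for brevity):

```python
import json

# Label ids as documented in the card: entailment (0), neutral (1), contradiction (2)
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

# The example instance from the card, trimmed of its parse fields
row = json.loads(
    '{"promptID": 31193, "pairID": "31193n", '
    '"premise": "Conceptually cream skimming has two basic dimensions - product and geography.", '
    '"hypothesis": "Product and geography are what make cream skimming work. ", '
    '"genre": "government", "label": 1}'
)

# Map the integer class id back to its name
print(LABEL_NAMES[row["label"]])  # neutral
```

The same mapping is what the `ClassLabel` feature in the loading script encodes.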
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"plain_text": {"description": "The Multi-Genre Natural Language Inference (MultiNLI) corpus is a\ncrowd-sourced collection of 433k sentence pairs annotated with textual\nentailment information. The corpus is modeled on the SNLI corpus, but differs in\nthat covers a range of genres of spoken and written text, and supports a\ndistinctive cross-genre generalization evaluation. The corpus served as the\nbasis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.\n", "citation": "@InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"http://aclweb.org/anthology/N18-1101\"\n}\n", "homepage": "https://www.nyu.edu/projects/bowman/multinli/", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "supervised_keys": null, "builder_name": "multi_nli", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 73245222, "num_examples": 392702, "dataset_name": "multi_nli"}, "validation_matched": {"name": "validation_matched", "num_bytes": 1799439, "num_examples": 9815, "dataset_name": "multi_nli"}, "validation_mismatched": {"name": "validation_mismatched", "num_bytes": 1914827, "num_examples": 
9832, "dataset_name": "multi_nli"}}, "download_checksums": {"http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip": {"num_bytes": 226850426, "checksum": "049f507b9e36b1fcb756cfd5aeb3b7a0cfcb84bf023793652987f7e7e0957822"}}, "download_size": 226850426, "dataset_size": 76959488, "size_in_bytes": 303809914}}
 
+ {"default": {"description": "The Multi-Genre Natural Language Inference (MultiNLI) corpus is a\ncrowd-sourced collection of 433k sentence pairs annotated with textual\nentailment information. The corpus is modeled on the SNLI corpus, but differs in\nthat covers a range of genres of spoken and written text, and supports a\ndistinctive cross-genre generalization evaluation. The corpus served as the\nbasis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.\n", "citation": "@InProceedings{N18-1101,\n author = {Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel},\n title = {A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference},\n booktitle = {Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)},\n year = {2018},\n publisher = {Association for Computational Linguistics},\n pages = {1112--1122},\n location = {New Orleans, Louisiana},\n url = {http://aclweb.org/anthology/N18-1101}\n}\n", "homepage": "https://www.nyu.edu/projects/bowman/multinli/", "license": "", "features": {"promptID": {"dtype": "int32", "id": null, "_type": "Value"}, "pairID": {"dtype": "string", "id": null, "_type": "Value"}, "premise": {"dtype": "string", "id": null, "_type": "Value"}, "premise_binary_parse": {"dtype": "string", "id": null, "_type": "Value"}, "premise_parse": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis_binary_parse": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis_parse": {"dtype": "string", "id": null, "_type": "Value"}, "genre": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "multi_nli", "config_name": 
"default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 410211586, "num_examples": 392702, "dataset_name": "multi_nli"}, "validation_matched": {"name": "validation_matched", "num_bytes": 10063939, "num_examples": 9815, "dataset_name": "multi_nli"}, "validation_mismatched": {"name": "validation_mismatched", "num_bytes": 10610221, "num_examples": 9832, "dataset_name": "multi_nli"}}, "download_checksums": {"https://cims.nyu.edu/~sbowman/multinli/multinli_1.0.zip": {"num_bytes": 226850426, "checksum": "049f507b9e36b1fcb756cfd5aeb3b7a0cfcb84bf023793652987f7e7e0957822"}}, "download_size": 226850426, "post_processing_size": null, "dataset_size": 430885746, "size_in_bytes": 657736172}}
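The split sizes recorded in the new `dataset_infos.json` can be cross-checked against the card's split table. A small stdlib sketch over a fragment of the JSON above, trimmed to just the split sizes:

```python
import json

# A fragment in the shape of dataset_infos.json above, keeping only num_examples
infos_json = (
    '{"default": {"splits": {'
    '"train": {"num_examples": 392702}, '
    '"validation_matched": {"num_examples": 9815}, '
    '"validation_mismatched": {"num_examples": 9832}}}}'
)

splits = json.loads(infos_json)["default"]["splits"]
total = sum(split["num_examples"] for split in splits.values())
print(total)  # 412349
```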
dummy/{plain_text/1.0.0 → 0.0.0}/dummy_data.zip RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1fa7aca81ea7db5408b84d967c291a89f721e82e0b4eef3563e48ec8edf347e8
- size 1276
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:befcc541017d4438bd7105595291e924e6933f3e330c2a41e0053caea457436d
+ size 13205
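The `dummy_data.zip` diff above is a Git LFS pointer file, not the archive itself: each line is a space-separated key and value. A sketch parsing the new pointer (values copied from the diff):

```python
# Git LFS pointer files are plain text: "key value" per line
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:befcc541017d4438bd7105595291e924e6933f3e330c2a41e0053caea457436d\n"
    "size 13205\n"
)

# Split each line once on the first space to get a key -> value mapping
pointer = dict(line.split(" ", 1) for line in pointer_text.splitlines())
print(pointer["size"])  # 13205
```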
multi_nli.py CHANGED
@@ -18,6 +18,7 @@
 
  from __future__ import absolute_import, division, print_function
 
  import os
 
  import datasets
@@ -53,36 +54,25 @@ basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
  """
 
 
- class MultiNLIConfig(datasets.BuilderConfig):
-     """BuilderConfig for MultiNLI."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for MultiNLI.
-
-         Args:
-           .
-           **kwargs: keyword arguments forwarded to super.
-         """
-         super(MultiNLIConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-
-
  class MultiNli(datasets.GeneratorBasedBuilder):
      """MultiNLI: The Stanford Question Answering Dataset. Version 1.1."""
 
-     BUILDER_CONFIGS = [
-         MultiNLIConfig(
-             name="plain_text",
-             description="Plain text",
-         ),
-     ]
-
      def _info(self):
          return datasets.DatasetInfo(
              description=_DESCRIPTION,
              features=datasets.Features(
                  {
                      "premise": datasets.Value("string"),
                      "hypothesis": datasets.Value("string"),
                      "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
                  }
              ),
@@ -93,19 +83,13 @@ class MultiNli(datasets.GeneratorBasedBuilder):
              citation=_CITATION,
          )
 
-     def _vocab_text_gen(self, filepath):
-         for _, ex in self._generate_examples(filepath):
-             yield " ".join([ex["premise"], ex["hypothesis"]])
-
      def _split_generators(self, dl_manager):
 
-         downloaded_dir = dl_manager.download_and_extract(
-             "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip"
-         )
          mnli_path = os.path.join(downloaded_dir, "multinli_1.0")
-         train_path = os.path.join(mnli_path, "multinli_1.0_train.txt")
-         matched_validation_path = os.path.join(mnli_path, "multinli_1.0_dev_matched.txt")
-         mismatched_validation_path = os.path.join(mnli_path, "multinli_1.0_dev_mismatched.txt")
 
          return [
              datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
@@ -114,22 +98,22 @@ class MultiNli(datasets.GeneratorBasedBuilder):
          ]
 
      def _generate_examples(self, filepath):
-         """Generate mnli examples.
-
-         Args:
-           filepath: a string
-
-         Yields:
-           dictionaries containing "premise", "hypothesis" and "label" strings
-         """
-         for idx, line in enumerate(open(filepath, "rb")):
-             if idx == 0:
-                 continue  # skip header
-             line = line.strip().decode("utf-8")
-             split_line = line.split("\t")
-             # Examples not marked with a three out of five consensus are marked with
-             # "-" and should not be used in standard evaluations.
-             if split_line[0] == "-":
-                 continue
-             # Works for both splits even though dev has some extra human labels.
-             yield idx, {"premise": split_line[5], "hypothesis": split_line[6], "label": split_line[0]}
 
 
  from __future__ import absolute_import, division, print_function
 
+ import json
  import os
 
  import datasets
 
  """
 
 
  class MultiNli(datasets.GeneratorBasedBuilder):
      """MultiNLI: The Stanford Question Answering Dataset. Version 1.1."""
 
      def _info(self):
          return datasets.DatasetInfo(
              description=_DESCRIPTION,
              features=datasets.Features(
                  {
+                     "promptID": datasets.Value("int32"),
+                     "pairID": datasets.Value("string"),
                      "premise": datasets.Value("string"),
+                     "premise_binary_parse": datasets.Value("string"),  # parses in unlabeled binary-branching format
+                     "premise_parse": datasets.Value("string"),  # sentence as parsed by the Stanford PCFG Parser 3.5.2
                      "hypothesis": datasets.Value("string"),
+                     "hypothesis_binary_parse": datasets.Value("string"),  # parses in unlabeled binary-branching format
+                     "hypothesis_parse": datasets.Value(
+                         "string"
+                     ),  # sentence as parsed by the Stanford PCFG Parser 3.5.2
+                     "genre": datasets.Value("string"),
                      "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
                  }
              ),
 
              citation=_CITATION,
          )
 
      def _split_generators(self, dl_manager):
 
+         downloaded_dir = dl_manager.download_and_extract("https://cims.nyu.edu/~sbowman/multinli/multinli_1.0.zip")
          mnli_path = os.path.join(downloaded_dir, "multinli_1.0")
+         train_path = os.path.join(mnli_path, "multinli_1.0_train.jsonl")
+         matched_validation_path = os.path.join(mnli_path, "multinli_1.0_dev_matched.jsonl")
+         mismatched_validation_path = os.path.join(mnli_path, "multinli_1.0_dev_mismatched.jsonl")
 
          return [
              datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
 
          ]
 
      def _generate_examples(self, filepath):
+         """Generate mnli examples"""
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 if data["gold_label"] == "-":
+                     continue
+                 yield id_, {
+                     "promptID": data["promptID"],
+                     "pairID": data["pairID"],
+                     "premise": data["sentence1"],
+                     "premise_binary_parse": data["sentence1_binary_parse"],
+                     "premise_parse": data["sentence1_parse"],
+                     "hypothesis": data["sentence2"],
+                     "hypothesis_binary_parse": data["sentence2_binary_parse"],
+                     "hypothesis_parse": data["sentence2_parse"],
+                     "genre": data["genre"],
+                     "label": data["gold_label"],
+                 }
+ }