Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas

albertvillanova committed

Commit 944ee2a · 1 parent: 660623b

Update metadata (#5)

- Update metadata (97917f3b68883df0963ffba2db53c51898f34638)

Files changed (2):

1. README.md (+63 -18)
2. snli.py (+17 -6)

README.md CHANGED
@@ -6,7 +6,7 @@ language_creators:
  language:
  - en
  license:
- - cc-by-4.0
+ - cc-by-sa-4.0
  multilinguality:
  - monolingual
  size_categories:
@@ -76,11 +76,14 @@ dataset_info:

  ## Dataset Description

- - **Homepage:** [SNLI homepage](https://nlp.stanford.edu/projects/snli/)
- - **Repository:**
- - **Paper:** [A large annotated corpus for learning natural langauge inference](https://nlp.stanford.edu/pubs/snli_paper.pdf)
- - **Leaderboard:** [SNLI leaderboard](https://nlp.stanford.edu/projects/snli/) (located on the homepage)
- - **Point of Contact:** [Samuel Bowman](mailto:bowman@nyu.edu) and [Gabor Angeli](mailto:angeli@stanford.edu)
+ - **Homepage:** https://nlp.stanford.edu/projects/snli/
+ - **Repository:** [More Information Needed]
+ - **Paper:** https://aclanthology.org/D15-1075/
+ - **Paper:** https://arxiv.org/abs/1508.05326
+ - **Leaderboard:** https://nlp.stanford.edu/projects/snli/
+ - **Point of Contact:** [Samuel Bowman](mailto:[email protected])
+ - **Point of Contact:** [Gabor Angeli](mailto:[email protected])
+ - **Point of Contact:** [Chris Manning]([email protected])

  ### Dataset Summary

@@ -88,7 +91,9 @@ The SNLI corpus (version 1.0) is a collection of 570k human-written English sent

  ### Supported Tasks and Leaderboards

- [SemBERT](https://arxiv.org/pdf/1909.02209.pdf) (Zhousheng Zhang et al, 2019b) is currently listed as SOTA, achieving 91.9% accuracy on the test set. See the [corpus webpage](https://nlp.stanford.edu/projects/snli/) for a list of published results.
+ Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral ([MacCartney and Manning 2008](https://aclanthology.org/C08-1066/)).
+
+ See the [corpus webpage](https://nlp.stanford.edu/projects/snli/) for a list of published results.

  ### Languages

@@ -144,7 +149,7 @@ The hypotheses were elicited by presenting crowdworkers with captions from preex

  Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).

- The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
+ The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://aclanthology.org/Q14-1006/), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).

  The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.

@@ -154,7 +159,7 @@ A large portion of the premises (160k) were produced in the [Flickr 30k corpus](

  The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50, with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.

- An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://visualgenome.org/static/paper/Visual_Genome.pdf). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
+ An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the course of the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.

  ### Annotations

@@ -187,11 +192,11 @@ This dataset was developed as a benchmark for evaluating representational system

  ### Discussion of Biases

- The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://www.aclweb.org/anthology/W17-1609.pdf) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
+ The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://aclanthology.org/W17-1609/) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.

  ### Other Known Limitations

- [Gururangan et al (2018)](https://www.aclweb.org/anthology/N18-2017.pdf), [Poliak et al (2018)](https://www.aclweb.org/anthology/S18-2023.pdf), and [Tsuchiya (2018)](https://www.aclweb.org/anthology/L18-1239.pdf) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
+ [Gururangan et al (2018)](https://aclanthology.org/N18-2017/), [Poliak et al (2018)](https://aclanthology.org/S18-2023/), and [Tsuchiya (2018)](https://aclanthology.org/L18-1239/) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.

  ## Additional Information

@@ -203,20 +208,60 @@ It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P.,

  ### Licensing Information

- The Stanford Natural Language Inference Corpus is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
+ The Stanford Natural Language Inference Corpus by The Stanford NLP Group is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
+
+ The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), also released under an Attribution-ShareAlike licence.

  ### Citation Information

- ```
- @inproceedings{snli:emnlp2015,
- Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher, and Manning, Christopher D.},
- Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
- Publisher = {Association for Computational Linguistics},
- Title = {A large annotated corpus for learning natural language inference},
- Year = {2015}
- }
- ```
+ The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it:
+ ```bibtex
+ @inproceedings{bowman-etal-2015-large,
+     title = "A large annotated corpus for learning natural language inference",
+     author = "Bowman, Samuel R. and
+       Angeli, Gabor and
+       Potts, Christopher and
+       Manning, Christopher D.",
+     editor = "M{\`a}rquez, Llu{\'\i}s and
+       Callison-Burch, Chris and
+       Su, Jian",
+     booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
+     month = sep,
+     year = "2015",
+     address = "Lisbon, Portugal",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/D15-1075",
+     doi = "10.18653/v1/D15-1075",
+     pages = "632--642",
+ }
+ ```
+
+ The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), which can be cited by way of this paper:
+ ```bibtex
+ @article{young-etal-2014-image,
+     title = "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
+     author = "Young, Peter and
+       Lai, Alice and
+       Hodosh, Micah and
+       Hockenmaier, Julia",
+     editor = "Lin, Dekang and
+       Collins, Michael and
+       Lee, Lillian",
+     journal = "Transactions of the Association for Computational Linguistics",
+     volume = "2",
+     year = "2014",
+     address = "Cambridge, MA",
+     publisher = "MIT Press",
+     url = "https://aclanthology.org/Q14-1006",
+     doi = "10.1162/tacl_a_00166",
+     pages = "67--78",
+ }
+ ```
+
+ ### Contact Information
+
+ For any comments or questions, please email [Samuel Bowman](mailto:[email protected]), [Gabor Angeli](mailto:[email protected]) and [Chris Manning]([email protected]).
+
  ### Contributions

  Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
 
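The task description added above frames each premise and hypothesis pair as a three-way classification. As a minimal sketch of working with that label scheme, assuming the `datasets` library and the Hub dataset id `snli` (the -1 marker for pairs without a gold consensus label is this Hub version's convention):

```python
# Minimal sketch: load SNLI from the Hugging Face Hub and inspect the
# three-way label scheme (entailment / neutral / contradiction).
from datasets import load_dataset

snli = load_dataset("snli")  # splits: train / validation / test

# The label feature is a ClassLabel; -1 marks pairs whose annotators
# reached no gold consensus.
print(snli["train"].features["label"].names)
# ['entailment', 'neutral', 'contradiction']

example = snli["validation"][0]
print(example["premise"])
print(example["hypothesis"], "->", example["label"])

# Drop the unlabeled (-1) pairs before training, a common preprocessing step.
train = snli["train"].filter(lambda ex: ex["label"] != -1)
print(f"{len(train)} labeled training pairs")
```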
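
The association analysis by Rudinger et al. cited under Discussion of Biases is built on pointwise mutual information, PMI(t, w) = log(p(t, w) / (p(t) p(w))). A rough sketch of that style of measurement over SNLI hypotheses, using sentence-level co-occurrence counts; the two-term identity list and the frequency cutoff below are stand-in assumptions, not the paper's setup:

```python
# PMI sketch in the spirit of the association analysis described under
# Discussion of Biases: score how strongly words co-occur with identity
# terms in SNLI hypotheses. Illustrative only; the identity list and
# cutoff are hypothetical choices, not Rudinger et al.'s.
import math
from collections import Counter

from datasets import load_dataset

sentences = [s.lower().split() for s in load_dataset("snli")["train"]["hypothesis"]]
total = len(sentences)

identity_terms = {"man", "woman"}  # stand-in identity term list
word_counts = Counter(w for sent in sentences for w in set(sent))
pair_counts = Counter(
    (t, w)
    for sent in sentences
    for t in identity_terms & set(sent)
    for w in set(sent) - identity_terms
)

def pmi(term: str, word: str) -> float:
    # PMI(t, w) = log p(t, w) / (p(t) p(w)), with probabilities
    # estimated from per-sentence occurrence counts.
    p_joint = pair_counts[(term, word)] / total
    return math.log(p_joint / ((word_counts[term] / total) * (word_counts[word] / total)))

# Top associations per identity term, with a minimum-count cutoff so
# rare words do not dominate the ranking.
for t in sorted(identity_terms):
    scored = [(pmi(t, w), w) for (tt, w), c in pair_counts.items() if tt == t and c >= 50]
    print(t, [w for _, w in sorted(scored, reverse=True)[:10]])
```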
 
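The hypothesis-only results cited under Other Known Limitations come from classifiers that never see the premise. A sketch of the same probing setup with a simple bag-of-words model, assuming scikit-learn is installed; this illustrates the idea rather than reimplementing the cited classifiers:

```python
# Hypothesis-only baseline sketch: predict the NLI label from the
# hypothesis alone. Accuracy well above the ~33% chance level signals
# annotation artifacts; the cited papers report roughly 63-69% with
# their own (different) classifiers.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

snli = load_dataset("snli")
train = snli["train"].filter(lambda ex: ex["label"] != -1)
test = snli["test"].filter(lambda ex: ex["label"] != -1)

# Bag-of-words features over hypotheses only; the premise is never used.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train["hypothesis"])
X_test = vectorizer.transform(test["hypothesis"])

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, train["label"])
print("hypothesis-only accuracy:", classifier.score(X_test, test["label"]))
```
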
snli.py CHANGED
@@ -24,12 +24,23 @@ import datasets


  _CITATION = """\
- @inproceedings{snli:emnlp2015,
- Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher, and Manning, Christopher D.},
- Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
- Publisher = {Association for Computational Linguistics},
- Title = {A large annotated corpus for learning natural language inference},
- Year = {2015}
+ @inproceedings{bowman-etal-2015-large,
+     title = "A large annotated corpus for learning natural language inference",
+     author = "Bowman, Samuel R. and
+       Angeli, Gabor and
+       Potts, Christopher and
+       Manning, Christopher D.",
+     editor = "M{\\`a}rquez, Llu{\\'\\i}s and
+       Callison-Burch, Chris and
+       Su, Jian",
+     booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
+     month = sep,
+     year = "2015",
+     address = "Lisbon, Portugal",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/D15-1075",
+     doi = "10.18653/v1/D15-1075",
+     pages = "632--642",
  }
  """
