lhoestq (HF staff) committed on
Commit
8686edf
1 Parent(s): 26c33ec

add dataset_info in dataset metadata

Files changed (1):
  1. README.md +27 -1
README.md CHANGED
@@ -21,6 +21,32 @@ task_ids:
 - multi-input-text-classification
 paperswithcode_id: snli
 pretty_name: Stanford Natural Language Inference
+dataset_info:
+  features:
+  - name: premise
+    dtype: string
+  - name: hypothesis
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          0: entailment
+          1: neutral
+          2: contradiction
+  config_name: plain_text
+  splits:
+  - name: test
+    num_bytes: 1263912
+    num_examples: 10000
+  - name: train
+    num_bytes: 66159510
+    num_examples: 550152
+  - name: validation
+    num_bytes: 1268044
+    num_examples: 10000
+  download_size: 94550081
+  dataset_size: 68691466
 ---
 # Dataset Card for SNLI
 
@@ -193,4 +219,4 @@ The Stanford Natural Language Inference Corpus is licensed under a [Creative Com
 
 ### Contributions
 
-Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
+Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
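As a minimal sketch of what the `class_label` names added in this commit encode, the mapping below converts integer label ids to strings for the `label` feature, mirroring what `datasets.ClassLabel.int2str` does at load time (the helper name here is illustrative, not part of the library):

```python
# Label names declared under dataset_info.features for the "label" feature.
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def int2str(label_id: int) -> str:
    """Return the human-readable SNLI label name for an integer id."""
    return LABEL_NAMES[label_id]

print(int2str(0))  # entailment
```

With this `dataset_info` block in the card metadata, the Hub can display the feature schema and split sizes without downloading the data, and `datasets.load_dataset("snli")` exposes the same schema on the loaded dataset.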