---
pretty_name: IMDB
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- nl
- en
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: imdb-movie-reviews
train-eval-index:
- config: plain_text
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
dataset_info:
  features:
  - name: text
    dtype: string
  - name: text_en
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          0: neg
          1: pos
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 33432835
    num_examples: 25000
  - name: test
    num_bytes: 32650697
    num_examples: 25000
  - name: unsupervised
    num_bytes: 67106814
    num_examples: 50000
---

# Dataset Card for "imdb"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Learning Word Vectors for Sentiment Analysis](http://www.aclweb.org/anthology/P11-1015)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

The Large Movie Review Dataset, translated to Dutch.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. It provides 25,000 highly polar movie reviews for training and 25,000 for testing, along with additional unlabeled data. Each example carries both the Dutch translation (`text`) and the original English review (`text_en`).

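A minimal loading sketch with the Hugging Face `datasets` library follows; the Hub repository id below is an assumption based on the author credit at the end of this card, not something this card states, so substitute the dataset's actual id:

```python
# A minimal loading sketch. The repository id is hypothetical -- replace it
# with the dataset's actual Hub id.
from datasets import load_dataset

dataset = load_dataset("yhavinga/imdb_dutch")  # hypothetical Hub id

print(dataset)                    # DatasetDict with train, test, unsupervised splits
example = dataset["train"][0]
print(example["text"])            # Dutch translation of the review
print(example["text_en"])         # original English review
print(example["label"])           # 0 = neg, 1 = pos
```
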
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

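Although the task description above is still to be filled in, the `train-eval-index` metadata in this card's header already specifies the evaluation setup: binary text classification scored with accuracy and with macro-, micro-, and weighted-averaged F1, precision, and recall. A minimal scoring sketch with scikit-learn (the label lists are hypothetical placeholders):

```python
# A sketch of computing the metrics listed in train-eval-index.
# y_true / y_pred are hypothetical integer label lists, 0 = neg, 1 = pos.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0]  # hypothetical references
y_pred = [0, 1, 0, 0]  # hypothetical predictions

print("accuracy:", accuracy_score(y_true, y_pred))
for average in ("macro", "micro", "weighted"):
    print(f"F1 {average}:", f1_score(y_true, y_pred, average=average))
    print(f"precision {average}:", precision_score(y_true, y_pred, average=average))
    print(f"recall {average}:", recall_score(y_true, y_pred, average=average))
```
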
### Languages

The reviews are provided in Dutch (`nl`) in the `text` field, alongside the original English (`en`) text in the `text_en` field.

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 80.23 MB
- **Size of the generated dataset:** 127.06 MB
- **Total amount of disk used:** 207.28 MB

An example of 'train' looks as follows.
```
{
  "label": 0,
  "text": "Holy shit. Dit was de slechtste film die ik in lange tijd heb gezien."
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text
- `text`: a `string` feature containing the Dutch translation of the review.
- `text_en`: a `string` feature containing the original English review.
- `label`: a classification label, with possible values including `neg` (0) and `pos` (1).

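The integer-to-name mapping of `label` can be recovered from the split's `ClassLabel` feature; a short sketch, assuming `dataset` from the loading example above:

```python
# Map integer labels back to their names via the ClassLabel feature.
label_feature = dataset["train"].features["label"]
print(label_feature.names)       # ['neg', 'pos']
print(label_feature.int2str(0))  # 'neg'
print(label_feature.int2str(1))  # 'pos'
```
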
### Data Splits

| name       | train | unsupervised | test  |
|------------|------:|-------------:|------:|
| plain_text | 25000 |        50000 | 25000 |

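A quick sanity check of these counts against the loaded dataset (again assuming `dataset` from the loading sketch above):

```python
# Verify split sizes against the table above.
for split, expected in [("train", 25000), ("test", 25000), ("unsupervised", 50000)]:
    assert dataset[split].num_rows == expected, f"unexpected size for {split}"
```
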
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
  title     = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
  month     = {June},
  year      = {2011},
  address   = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {142--150},
  url       = {http://www.aclweb.org/anthology/P11-1015}
}
```

### Contributions

Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), and [@thomwolf](https://github.com/thomwolf) for adding the English `imdb` dataset.
This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/).

Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)