Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: sentiment-classification
Languages: English
Size: 100K - 1M
Convert dataset sizes from base 2 to base 10 in the dataset card
#4 opened by albertvillanova

README.md CHANGED
@@ -97,9 +97,9 @@ train-eval-index:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 166.38 MB
+- **Size of the generated dataset:** 441.74 MB
+- **Total amount of disk used:** 608.12 MB
 
 ### Dataset Summary
 
@@ -147,9 +147,9 @@ that is "
 
 #### plain_text
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 166.38 MB
+- **Size of the generated dataset:** 441.74 MB
+- **Total amount of disk used:** 608.12 MB
 
 An example of 'train' looks as follows.
 ```
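The distinction behind the PR title is between base-10 (SI) megabytes, where 1 MB = 1,000,000 bytes, and base-2 (IEC) mebibytes, where 1 MiB = 1,048,576 bytes. A minimal sketch of the two conventions follows; the helper names and the byte count are illustrative assumptions, not taken from the PR itself.

```python
def to_base10_mb(num_bytes: int) -> str:
    # Base-10 (SI) convention: 1 MB = 1000**2 = 1,000,000 bytes
    return f"{num_bytes / 1000**2:.2f} MB"


def to_base2_mib(num_bytes: int) -> str:
    # Base-2 (IEC) convention: 1 MiB = 1024**2 = 1,048,576 bytes
    return f"{num_bytes / 1024**2:.2f} MiB"


# Hypothetical byte count matching the card's reported 166.38 MB
size = 166_380_000
print(to_base10_mb(size))  # 166.38 MB
print(to_base2_mib(size))  # 158.67 MiB
```

The same byte count reads roughly 5% smaller under the base-2 convention, which is why switching conventions changes every size listed in the card without the underlying files changing.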