brando committed on
Commit
69b18f9
1 Parent(s): 94472d9

Update README.md

Files changed (1): README.md +66 −1
README.md CHANGED
@@ -26,6 +26,71 @@ configs:
      path: data/test-*
  ---
 
+ # Dataset Card for Small C4 Dataset (10k Train, 10k Validation, 10k Test)
+
+ ## Dataset Summary
+
+ The **Small C4 Dataset** is a reduced version of the original [C4 dataset](https://huggingface.co/datasets/allenai/c4) (Colossal Clean Crawled Corpus), designed to facilitate lightweight experimentation and model training without the need to process the full C4 dataset. This dataset includes:
+ - **10,000 examples** for training,
+ - **10,000 examples** for validation, and
+ - **10,000 examples** for testing.
+
+ Each example consists of a single text passage sourced from the English subset of the original C4 corpus.
+
+ ## Dataset Details
+
+ - **Source**: [allenai/c4](https://huggingface.co/datasets/allenai/c4)
+ - **Subset Language**: English
+ - **Streaming Enabled**: Yes (`streaming=True` is used to sample without downloading the entire dataset)
+ - **Sampling Method**:
+   - **Training Set**: First 10,000 examples from the `train` split of C4.
+   - **Validation Set**: First 10,000 examples from the `validation` split of C4.
+   - **Test Set**: The next 10,000 examples from the `validation` split (after the validation set).
+ - **Dataset Size**: 30,000 examples in total.
+
+ ## Dataset Creation
+
+ The dataset was created using Hugging Face's `datasets` library with streaming enabled to handle the large size of the original C4 dataset efficiently. A subset of examples was sampled in parallel for each of the train, validation, and test splits.
+
+ ## Usage
+
+ This dataset is suitable for lightweight model training, testing, and experimentation, and is particularly useful when:
+ - **computational resources** are limited,
+ - **prototyping** models before scaling to the full C4 dataset, or
+ - **evaluating** model performance on a smaller, representative sample of the full corpus.
+
+ ## Example Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the small C4 dataset
+ dataset = load_dataset("brando/small-c4-dataset")
+
+ # Access the train, validation, and test splits
+ train_data = dataset["train"]
+ validation_data = dataset["validation"]
+ test_data = dataset["test"]
+
+ # Example: display the first training example
+ print(train_data[0])
+ ```
+
+ ## License
+
+ This dataset inherits the licensing of the original C4 dataset, which follows the Apache License 2.0.
+
+ ## Citation
+
+ If you use this dataset in your work, please cite the original C4 dataset or the Ultimate Utils repository:
+
+ ```
+ @misc{miranda2021ultimateutils,
+   title={Ultimate Utils - the Ultimate Utils Library for Machine Learning and Artificial Intelligence},
+   author={Brando Miranda},
+   year={2021},
+   url={https://github.com/brando90/ultimate-utils},
+   note={Available at: \url{https://www.ideals.illinois.edu/handle/2142/112797}},
+   abstract={Ultimate Utils is a comprehensive library providing utility functions and tools to facilitate efficient machine learning and AI research, including efficient tensor manipulations and gradient handling with methods such as `detach()` for creating gradient-free tensors.}
+ }
+ ```
+
  Script that created it
 
  ```python
@@ -93,4 +158,4 @@ def main() -> None:
  if __name__ == "__main__":
      main()
 
- ```
+ ```
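
The body of the creation script is collapsed in the diff above. As a non-authoritative sketch only, the following reconstructs what such a script could look like under the card's stated sampling method (streaming C4, first 10k of `train`, first 10k of `validation`, next 10k of `validation`); the helper name `take` and the output path `small-c4-dataset` are assumptions, not taken from the original script.

```python
# Hypothetical reconstruction of the (elided) creation script, based on the
# dataset card: stream allenai/c4 "en" and slice out 10k/10k/10k examples.
from itertools import islice


def take(stream, n):
    """Materialize the first n examples of an iterable stream into a list."""
    return list(islice(stream, n))


def main() -> None:
    from datasets import Dataset, DatasetDict, load_dataset

    # streaming=True avoids downloading the full C4 corpus up front.
    train_stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
    val_stream = load_dataset("allenai/c4", "en", split="validation", streaming=True)

    train = take(iter(train_stream), 10_000)      # first 10k of C4 train
    val_iter = iter(val_stream)
    validation = take(val_iter, 10_000)           # first 10k of C4 validation
    test = take(val_iter, 10_000)                 # next 10k of C4 validation

    ds = DatasetDict({
        "train": Dataset.from_list(train),
        "validation": Dataset.from_list(validation),
        "test": Dataset.from_list(test),
    })
    ds.save_to_disk("small-c4-dataset")  # assumed output location


if __name__ == "__main__":
    main()
```

Note the design detail the card describes: the test split is not a separate C4 split but a continuation of the same `validation` iterator, so the validation and test sets are disjoint by construction.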