Dataset Preprocessing for 10M and 100M Text-Only Tracks
Overview
This document describes the preprocessing steps applied to the datasets used for the 10M and 100M text-only tracks. The datasets are a mixture of 10 different corpora, as shown in Table 1 below.
| Dataset | Domain | # Words (STRICT-SMALL, 10M) | # Words (STRICT, 100M) | Proportion |
| --- | --- | --- | --- | --- |
| CHILDES (MacWhinney, 2000) | Child-directed speech | 0.44M | 4.21M | 5% |
| British National Corpus (BNC), dialogue portion | Dialogue | 0.86M | 8.16M | 8% |
| Children's Book Test (Hill et al., 2016) | Children's books | 0.57M | 5.55M | 6% |
| Children's Stories Text Corpus | Children's books | 0.34M | 3.22M | 3% |
| Standardized Project Gutenberg Corpus (Gerlach and Font-Clos, 2018) | Written English | 0.99M | 9.46M | 10% |
| OpenSubtitles (Lison and Tiedemann, 2016) | Movie subtitles | 3.09M | 31.28M | 31% |
| QCRI Educational Domain Corpus (QED; Abdelali et al., 2014) | Educational video subtitles | 1.04M | 10.24M | 11% |
| Wikipedia | Wikipedia (English) | 0.99M | 10.08M | 10% |
| Simple Wikipedia | Wikipedia (Simple English) | 1.52M | 14.66M | 15% |
| Switchboard Dialog Act Corpus (Stolcke et al., 2000) | Dialogue | 0.12M | 1.18M | 1% |
| Total | - | 9.96M | 98.04M | 100% |
Table 1: The contents of the datasets for the 10M and 100M tracks; the table is taken from Warstadt et al. (2023).
Preprocessing Steps
The preprocessing steps are the same as those of Samuel (2023). Light preprocessing and normalization were applied to these corpora to cast them into a unified format. The following modifications were made:
CHILDES:
- Capitalized the first letter of each line
- Normalized whitespace around punctuation (detokenization)
- Wrapped every line in double quotes to mark it as direct speech (see the sketch below)
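For illustration, a minimal Python sketch of these three steps; the exact rules used to produce the released data are not documented here, so the regexes below are assumptions:

```python
import re

def normalize_childes_line(line: str) -> str:
    """Approximate the CHILDES normalization described above (illustrative only)."""
    line = re.sub(r"\s+", " ", line).strip()        # whitespace normalization
    line = re.sub(r"\s+([.,!?;:'])", r"\1", line)   # detokenize: no space before punctuation
    if line:
        line = line[0].upper() + line[1:]           # capitalize the first letter
    return f'"{line}"'                              # wrap the utterance as direct speech

print(normalize_childes_line("what do you want to do today ?"))
# -> "What do you want to do today?"
```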
British National Corpus:
- Applied capitalization, normalization, and double quotes
Children's Book Test:
- Normalized all unnatural symbols and whitespaces
- Replaced Penn Treebank escape tokens (e.g., -LRB-, -RRB-) with their corresponding symbols ('(', ')'); see the sketch below
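A small sketch of the token replacement; the mapping below covers the common Penn Treebank escapes, and the exact set handled for CBT is an assumption:

```python
# Common Penn Treebank escape tokens and their surface symbols (assumed list).
PTB_TOKENS = {
    "-LRB-": "(", "-RRB-": ")",
    "-LSB-": "[", "-RSB-": "]",
    "-LCB-": "{", "-RCB-": "}",
    "``": '"', "''": '"',
}

def restore_ptb_symbols(text: str) -> str:
    for token, symbol in PTB_TOKENS.items():
        text = text.replace(token, symbol)
    return text

print(restore_ptb_symbols("He said -LRB- quietly -RRB- that it was fine"))
# -> He said ( quietly ) that it was fine
```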
Children's Stories Text Corpus:
- Preserved the original formatting with a special [TAB] symbol
- Applied whitespace normalization (a sketch of both steps follows below)
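A sketch of both steps; the `[TAB]` placeholder is taken from the description above, while the surrounding logic is an assumption:

```python
import re

def preserve_formatting(line: str) -> str:
    line = line.replace("\t", " [TAB] ")       # keep tab-based formatting as a token
    return re.sub(r"[ ]+", " ", line).strip()  # whitespace normalization

print(preserve_formatting("Chapter One\t\tThe Beginning"))
# -> Chapter One [TAB] [TAB] The Beginning
```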
Standardized Project Gutenberg Corpus:
- Restored the original paragraphs by removing extra newline characters
- Applied whitespace normalization (a sketch of both steps follows below)
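An illustrative sketch: blank lines are treated as paragraph boundaries and the remaining hard-wrapped newlines inside a paragraph are removed. The exact heuristic used for the released data is an assumption:

```python
import re

def restore_paragraphs(text: str) -> str:
    paragraphs = re.split(r"\n\s*\n", text)                 # blank line = paragraph break
    unwrapped = [" ".join(p.split()) for p in paragraphs]   # join wrapped lines, normalize spaces
    return "\n\n".join(p for p in unwrapped if p)

sample = "It was a dark and\nstormy night.\n\nThe rain fell\nin torrents."
print(restore_paragraphs(sample))
```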
OpenSubtitles:
- Removed leading dash symbols
- Applied whitespace normalization
- Wrapped every line in double quotes as direct speech (see the sketch below)
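A minimal sketch of the subtitle clean-up; the dash pattern is an assumption:

```python
import re

def clean_subtitle_line(line: str) -> str:
    line = re.sub(r"^\s*-+\s*", "", line)      # strip leading dialogue dashes
    line = re.sub(r"\s+", " ", line).strip()   # whitespace normalization
    return f'"{line}"'                         # wrap as direct speech

print(clean_subtitle_line("- Where are   you going?"))
# -> "Where are you going?"
```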
QED:
- Cleaned up incorrectly parsed HTML symbols using simple heuristics (see the sketch below)
- Applied whitespace normalization
- Wrapped every line in double quotes as direct speech
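A sketch of one possible heuristic clean-up: unescape HTML entities and drop leftover tags. The actual heuristics used for QED are not specified here, so this is an assumption:

```python
import html
import re

def clean_html_line(line: str) -> str:
    line = html.unescape(line)                      # &amp; -> &, &#39; -> ', ...
    line = re.sub(r"</?[A-Za-z][^>]*>", " ", line)  # drop stray tags such as <i>...</i>
    line = re.sub(r"\s+", " ", line).strip()        # whitespace normalization
    return f'"{line}"'                              # wrap as direct speech

print(clean_html_line("<i>We&#39;ll talk about &quot;energy&quot; today</i>"))
```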
Wikipedia:
- Cleaned up incorrectly parsed Wikipedia tags and hyperlinks (see the sketch below)
- Applied whitespace normalization
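A sketch of stripping leftover wiki markup; the patterns below are illustrative assumptions, not the exact rules used to build the dataset:

```python
import re

def clean_wiki_line(line: str) -> str:
    line = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", line)  # [[target|text]] -> text
    line = re.sub(r"\[https?://\S+\s*([^\]]*)\]", r"\1", line)     # [url anchor] -> anchor
    line = re.sub(r"'{2,}", "", line)                              # drop bold/italic quote markup
    return re.sub(r"\s+", " ", line).strip()                       # whitespace normalization

print(clean_wiki_line("The '''[[house mouse]]''' is a small [http://example.org mammal]."))
# -> The house mouse is a small mammal.
```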
Simple Wikipedia:
- Applied heuristic HTML clean-up
- Applied whitespace normalization
Switchboard:
- Removed leading dashes
- Applied whitespace normalization
- Added double quotes
References
David Samuel. 2023. Mean BERTs make erratic language teachers: the effectiveness of latent bootstrapping in low-resource settings. arXiv preprint arXiv:2310.19420.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Gotlieb Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Adina Williams, Bhargavi Paranjape, Tal Linzen, and Ryan Cotterell. 2023. Findings of the 2023 BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora. In Proceedings of the 2023 BabyLM Challenge. Association for Computational Linguistics.