---
license: mit
---

# Dataset Preprocessing for 10M and 100M Text-Only Tracks

## Overview

This document describes the preprocessing steps applied to the datasets used for the 10M and 100M text-only tracks. The datasets are a mixture of 10 different corpora, as shown in Table 1 below.

| Dataset | Domain | # Words (STRICT-SMALL, 10M) | # Words (STRICT, 100M) | Proportion |
|---|---|---|---|---|
| CHILDES (MacWhinney, 2000) | Child-directed speech | 0.44M | 4.21M | 5% |
| British National Corpus (BNC), dialogue portion | Dialogue | 0.86M | 8.16M | 8% |
| Children's Book Test (Hill et al., 2016) | Children's books | 0.57M | 5.55M | 6% |
| Children's Stories Text Corpus | Children's books | 0.34M | 3.22M | 3% |
| Standardized Project Gutenberg Corpus (Gerlach and Font-Clos, 2018) | Written English | 0.99M | 9.46M | 10% |
| OpenSubtitles (Lison and Tiedemann, 2016) | Movie subtitles | 3.09M | 31.28M | 31% |
| QCRI Educational Domain Corpus (QED; Abdelali et al., 2014) | Educational video subtitles | 1.04M | 10.24M | 11% |
| Wikipedia | Wikipedia (English) | 0.99M | 10.08M | 10% |
| Simple Wikipedia | Wikipedia (Simple English) | 1.52M | 14.66M | 15% |
| Switchboard Dialog Act Corpus (Stolcke et al., 2000) | Dialogue | 0.12M | 1.18M | 1% |
| **Total** | - | 9.96M | 98.04M | 100% |

**Table 1:** The contents of the datasets for the 10M and 100M tracks; table taken from Warstadt et al. (2023).

## Preprocessing Steps

The same preprocessing steps were applied as in Samuel (2023): light preprocessing and normalization were applied to cast these corpora into a unified format. The following modifications were made (a minimal Python sketch of several of these steps appears after the list):

1. **CHILDES:**
   - Capitalized the first letter of each line
   - Normalized whitespace around punctuation (detokenization)
   - Enclosed every line in double quotes (as direct speech)
2. **British National Corpus:**
   - Applied the same capitalization, normalization, and double-quoting
3. **Children's Book Test:**
   - Normalized all unnatural symbols and whitespace
   - Replaced Penn Treebank bracket tokens (e.g., `-LRB-`, `-RRB-`) with their corresponding symbols (`(`, `)`)
4. **Children's Stories Text Corpus:**
   - Preserved the original formatting with a special `[TAB]` symbol
   - Applied whitespace normalization
5. **Standardized Project Gutenberg Corpus:**
   - Restored the original paragraphs by removing extra newline symbols
   - Applied whitespace normalization
6. **OpenSubtitles:**
   - Removed leading dash symbols
   - Applied whitespace normalization
   - Enclosed every line in double quotes (as direct speech)
7. **QED:**
   - Cleaned up incorrectly parsed HTML symbols using simple heuristics
   - Applied whitespace normalization
   - Enclosed every line in double quotes (as direct speech)
8. **Wikipedia:**
   - Cleaned up incorrectly parsed Wikipedia tags and hyperlinks
   - Applied whitespace normalization
9. **Simple Wikipedia:**
   - Applied heuristic HTML clean-up
   - Applied whitespace normalization
10. **Switchboard:**
    - Removed leading dashes
    - Applied whitespace normalization
    - Enclosed every line in double quotes
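
The sketch below illustrates how several of the steps above could be implemented as small, composable line-level helpers. It is a minimal illustration only: the helper names, regular expressions, and the per-corpus compositions (`preprocess_childes`, `preprocess_opensubtitles`, `preprocess_cbt`) are assumptions for exposition, not the exact scripts used to produce the released datasets.

```python
import re

# Illustrative helpers for the normalization steps described above.
# Names and regexes are assumptions, not the released preprocessing code.

# Penn Treebank bracket tokens and their literal symbols (used for CBT).
PTB_TOKENS = {
    "-LRB-": "(", "-RRB-": ")",
    "-LSB-": "[", "-RSB-": "]",
    "-LCB-": "{", "-RCB-": "}",
}


def normalize_whitespace(line: str) -> str:
    """Collapse runs of spaces/tabs into a single space and trim the line."""
    return re.sub(r"\s+", " ", line).strip()


def detokenize_punctuation(line: str) -> str:
    """Remove spaces before closing punctuation and after opening brackets."""
    line = re.sub(r"\s+([.,!?;:%)\]])", r"\1", line)
    return re.sub(r"([(\[])\s+", r"\1", line)


def replace_ptb_tokens(line: str) -> str:
    """Map Penn Treebank bracket tokens back to their literal symbols."""
    for token, symbol in PTB_TOKENS.items():
        line = line.replace(token, symbol)
    return line


def strip_leading_dash(line: str) -> str:
    """Drop the leading '-' that marks speaker turns in subtitle corpora."""
    return re.sub(r"^-\s*", "", line)


def as_direct_speech(line: str) -> str:
    """Capitalize the first character and wrap the line in double quotes."""
    return '"' + line[:1].upper() + line[1:] + '"'


def preprocess_childes(line: str) -> str:
    """CHILDES: whitespace normalization, detokenization, double quotes."""
    return as_direct_speech(detokenize_punctuation(normalize_whitespace(line)))


def preprocess_opensubtitles(line: str) -> str:
    """OpenSubtitles: leading-dash removal, whitespace normalization, quotes."""
    return as_direct_speech(normalize_whitespace(strip_leading_dash(line)))


def preprocess_cbt(line: str) -> str:
    """Children's Book Test: PTB token replacement, whitespace normalization."""
    return normalize_whitespace(replace_ptb_tokens(line))


if __name__ == "__main__":
    print(preprocess_childes("do you want the ball ?"))        # "Do you want the ball?"
    print(preprocess_opensubtitles("- where are   you going")) # "Where are you going"
    print(preprocess_cbt("He said -LRB- quietly -RRB- ."))     # He said ( quietly ) .
```

In an actual pipeline, helpers like these would be applied line by line to each corpus file; the point of the sketch is that every corpus receives its own small composition of a shared set of normalization steps.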

## References

Samuel, David. 2023. "Mean BERTs make erratic language teachers: The effectiveness of latent bootstrapping in low-resource settings." arXiv preprint arXiv:2310.19420.

Warstadt, Alex, Aaron Mueller, Leshem Choshen, Ethan Gotlieb Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Adina Williams, Bhargavi Paranjape, Tal Linzen, and Ryan Cotterell. 2023. "Findings of the BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora." In Proceedings of the 2023 BabyLM Challenge. Association for Computational Linguistics.