Common Crawl Dataset Partitioning method?

#37 opened by AlexFanWei

In the paper "Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research", the authors present a figure showing the "Pearson Correlation of filters on the Head, Middle, and Tail parts of our Common Crawl data," but they do not mention how the Common Crawl dataset was split into head, middle, and tail parts.
Can you give a brief explanation? Thank you~

Not positive, and it'd still be great to have the authors confirm, but judging from the absence of the "Perplexity Filter" in the correlation analysis, and from the mention of "The Gopher filtering rules correlate negatively with our deduplication, especially for the high-perplexity tail part of our data," I believe this is data placed into buckets based on several KenLM perplexity thresholds.
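
If that guess is right, the split would look roughly like the sketch below. This is only an illustration, not the authors' confirmed pipeline: the KenLM model file (`lm.arpa`) and the two perplexity cutoffs are made-up placeholders, since the paper does not state the actual values.

```python
# A rough sketch of perplexity-based bucketing, NOT the authors' confirmed
# pipeline. The model file and the two thresholds below are hypothetical
# placeholders; the paper does not state the actual values.
import kenlm

model = kenlm.Model("lm.arpa")  # hypothetical pre-trained KenLM n-gram model

HEAD_MAX_PPL = 200.0     # hypothetical cutoff: low perplexity -> "head"
MIDDLE_MAX_PPL = 1000.0  # hypothetical cutoff: above this -> "tail"

def bucket(document: str) -> str:
    """Assign a document to head/middle/tail by its KenLM perplexity."""
    ppl = model.perplexity(document)
    if ppl <= HEAD_MAX_PPL:
        return "head"
    if ppl <= MIDDLE_MAX_PPL:
        return "middle"
    return "tail"

print(bucket("The quick brown fox jumps over the lazy dog."))
```

Cutoffs like these are often chosen as percentiles of the corpus-wide perplexity distribution rather than fixed constants, so that each bucket holds a predictable share of the data.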

See also this passage from earlier in the work, in which I believe High, Medium, and Low map to Head, Middle, and Tail in the file releases.
[Screenshot of the referenced passage from the paper]
