Common Crawl Dataset Partitioning method?
#37
opened by AlexFanWei
In the paper "Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research", the authors give a figure showing the "Pearson Correlation of filters on the Head, Middle, and Tail parts of our Common Crawl data.", but do not mention how the Common Crawl dataset was split into head, middle, and tail parts.
Can you give a brief explanation? Thank you~
I'm not positive, and it would still be great to have the authors confirm, but judging from the absence of the "Perplexity Filter" in the correlation analysis, and the mention that "The Gopher filtering rules correlate negatively with our deduplication, especially for the high-perplexity tail part of our data," it looks like the data was placed into buckets based on several KenLM perplexity thresholds.
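For what it's worth, CCNet (the preprocessing pipeline many Common Crawl corpora build on) does exactly this kind of split: documents are scored by the perplexity of a Wikipedia-trained KenLM model and cut into head/middle/tail at percentile thresholds, with low perplexity ("clean"-looking text) going to the head. A minimal sketch of that idea, assuming precomputed perplexity scores; the function name and the one-third cutoffs are illustrative assumptions, not Dolma's actual code:

```python
# Hypothetical sketch of perplexity-based head/middle/tail bucketing,
# in the style of CCNet. Cutoffs and names are illustrative, not Dolma's.

def bucket_by_perplexity(docs, low_cut=1 / 3, high_cut=2 / 3):
    """Split (doc_id, perplexity) pairs into head/middle/tail buckets.

    Lower perplexity under a Wikipedia-style LM suggests cleaner text,
    so the lowest-perplexity slice is the head, the highest the tail.
    """
    ranked = sorted(docs, key=lambda pair: pair[1])  # ascending perplexity
    n = len(ranked)
    lo, hi = int(n * low_cut), int(n * high_cut)
    return {
        "head": ranked[:lo],
        "middle": ranked[lo:hi],
        "tail": ranked[hi:],
    }

docs = [
    ("a", 120.5), ("b", 15.2), ("c", 430.0),
    ("d", 88.1), ("e", 990.3), ("f", 45.9),
]
buckets = bucket_by_perplexity(docs)
# head gets the lowest-perplexity third: ("b", 15.2) and ("f", 45.9)
```

In practice CCNet computes the percentile thresholds per language over the whole crawl rather than per batch as above, but the bucketing step itself is this simple.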