---
license: mit
language:
- en
size_categories:
- 10M<n<100M
---
# Dataset Card for heuristic_classification-filtered-pile-50M
## Dataset Description

- **Repository:** https://github.com/p-lambda/dsir
- **Paper:** https://arxiv.org/abs/2302.03169
- **Point of Contact:** Sang Michael Xie [email protected]
### Dataset Summary

This dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification comprises the Wikipedia and BookCorpus2 subsets of The Pile.
### Languages

English (EN)
## Dataset Structure

A train split is provided (51.2M examples) in jsonl format.
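As a quick way to inspect the data, here is a minimal loading sketch using the Hugging Face `datasets` library; the `train.jsonl` path is a placeholder for wherever the provided shard(s) live, and streaming avoids materializing all 51.2M examples in memory:

```python
from itertools import islice

from datasets import load_dataset

# Placeholder path: point data_files at the provided jsonl shard(s).
ds = load_dataset("json", data_files="train.jsonl", split="train", streaming=True)

# Peek at the first two examples without loading the full split.
for example in islice(ds, 2):
    print(example["id"], example["metadata"]["pile_set_name"])
    print(example["contents"][:100])
```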
### Data Instances

```json
{"contents": "Members join for free and will have access to all of our earning verticals, including, but not limited to, watching videos, shopping for cash back, taking surveys, and redeeming special offers. Swagbucks is the web's leading rewards platform, dedicated to providing FREE gift cards to its 12+ million members. Choose from top retailers like Amazon, Target, Walmart, Starbucks, PayPal, and tons more.dead full espanol tle work is running out. You\u2019re given a descargar land of the dead full espanol but that respect it\u2019s tons of one another. When the screen. With the pluses gained from a ledge, your arms or abandons your name suggests, Inferno has locked on a dash for a poozer, it\u2019s placed in their shadowing skills. These controls forward, backward, and frankly, the straights. You can also have expected, but that\u2019s unlike anything particularly adept pacing. Each win by so rough idea that\u2019s worth it up. There are a neat sensation to play of a fresh\n\nthe voice actors give up with content and the same innovative control scheme that pulls you invested. From the movement. The unique art style and is still remarkably tough. You\u2019re not", "metadata": {"pile_set_name": ["Pile-CC", "Pile-CC"]}, "id": 303}
```
### Data Fields

- `contents`: the text of the example.
- `metadata`: information about the source(s) of the text; `pile_set_name` lists the Pile subset each source chunk came from, and multiple entries mean the example was concatenated from two source chunks.
- `id`: a non-unique identifier; can be ignored.
## Dataset Creation

We first select 102.4M examples, then concatenate every two examples to create 51.2M examples. This ensures that the examples are long enough for a maximum token length of 512 without much padding. We train the fastText binary classifier for heuristic classification on The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data comes from the rest of the data sources in The Pile. Concretely, we select 98.4M examples from non-Wikipedia, non-book data, then randomly select 2M examples from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. After this, we concatenate every two examples.
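For concreteness, here is a minimal sketch of such a selection pipeline, not the exact code used to build this dataset: the file path, label names, and placeholder chunk pool are hypothetical, and the keep rule uses the GPT-3-style Pareto-noised threshold that heuristic classification is commonly implemented with (the shape value of 9 is an assumption here).

```python
import fasttext
import numpy as np

# Hypothetical training file: one example per line, prefixed with
# "__label__target" (Wikipedia/BookCorpus2/Gutenberg/Books3 text) or
# "__label__raw" (text from the other Pile sources).
model = fasttext.train_supervised(input="heuristic_train.txt")

def keep_example(text: str, pareto_shape: float = 9.0) -> bool:
    """Noisy-threshold selection: keep the example when its target-class
    score exceeds 1 minus a Pareto-distributed sample."""
    labels, probs = model.predict(text.replace("\n", " "))
    score = probs[0] if labels[0] == "__label__target" else 1.0 - probs[0]
    return score > 1.0 - np.random.pareto(pareto_shape)

# Hypothetical pool of candidate chunks; in practice this is the 1.7B
# 128-word chunks derived from The Pile.
raw_examples = ["example chunk one ...", "example chunk two ...", "example chunk three ..."]
selected = [ex for ex in raw_examples if keep_example(ex)]

# Concatenate every two selected examples into one final example.
paired = [a + " " + b for a, b in zip(selected[0::2], selected[1::2])]
```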
### Source Data

The Pile
#### Initial Data Collection and Normalization

We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks. We first divide the documents in The Pile into chunks of 128 words, according to whitespace tokenization. These chunks define the examples that we do data selection on, totaling 1.7B examples. Before heuristic classification, we apply a manual quality filter (see the paper for details) and only consider the examples that pass the filter.
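A short sketch of the chunking step follows; how a trailing chunk shorter than 128 words is handled is an assumption:

```python
def chunk_document(text: str, chunk_size: int = 128) -> list[str]:
    """Split a document into consecutive chunks of `chunk_size` words,
    using whitespace tokenization; these chunks are the units that
    data selection operates over."""
    words = text.split()
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

# A 300-word document yields chunks of 128, 128, and 44 words.
chunks = chunk_document("word " * 300)
```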
## Considerations for Using the Data

The dataset is biased towards selecting data from non-Wikipedia and non-book sources. A balanced approach would be to mix in more data from Wikipedia and books.
## Dataset Curators

Sang Michael Xie, Shibani Santurkar
## Citation Information

Paper: https://arxiv.org/abs/2302.03169

```bibtex
@article{xie2023data,
  author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
  journal = {arXiv preprint arXiv:2302.03169},
  title = {Data Selection for Language Models via Importance Resampling},
  year = {2023},
}
```