---
language:
  - en
dataset_info:
  features:
    - name: text
      dtype: string
    - name: metadata
      struct:
        - name: pile_set_name
          sequence: string
    - name: id
      dtype: int64
  splits:
    - name: train
      num_bytes: 64095383
      num_examples: 40338
  download_size: 39795200
  dataset_size: 64095383
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

## Description

This dataset is a sampled subset of the [Pile](https://pile.eleuther.ai) dataset. We used [DSIR](https://github.com/p-lambda/dsir), a data selection tool based on importance resampling, to subsample the Pile.
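
A rough sketch of how such a subset can be produced with the DSIR reference implementation (the `data-selection` package name and the API calls such as `HashedNgramDSIR` follow the DSIR repository's README and may differ between versions; all file paths are placeholders):

```python
# pip install data-selection  (DSIR reference implementation; assumed package name)
from data_selection import HashedNgramDSIR

# Placeholder paths: raw Pile shards to select from, and a target corpus
# whose distribution the selection should match.
raw_datasets = ["pile/shard_00.jsonl", "pile/shard_01.jsonl"]
target_datasets = ["target/corpus.jsonl"]

dsir = HashedNgramDSIR(raw_datasets, target_datasets, cache_dir=".cache/dsir")
dsir.fit_importance_estimator(num_tokens_to_fit="auto")
dsir.compute_importance_weights()
# 40338 documents is the number of examples in this dataset.
dsir.resample(out_dir="resampled", num_to_sample=40338, cache_dir=".cache/resample")
```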

The number of sampled documents per Pile subset is:

```python
{
   'Pile-CC': 19767,
   'OpenWebText2': 12424,
   'FreeLaw': 3752,
   'USPTO Backgrounds': 1055,
   'Wikipedia (en)': 813,
   'PubMed Central': 576,
   'PubMed Abstracts': 499,
   'BookCorpus2': 285,
   'Books3': 266,
   'Gutenberg (PG-19)': 228,
   'StackExchange': 184,
   'PhilPapers': 112,
   'YoutubeSubtitles': 91,
   'OpenSubtitles': 75,
   'ArXiv': 56,
   'NIH ExPorter': 47,
   'Enron Emails': 39,
   'HackerNews': 29,
   'Github': 28,
   'EuroParl': 12
}
```
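
These counts can be recomputed from the `metadata.pile_set_name` field. A short sketch (assuming each document's `pile_set_name` sequence holds exactly one entry, per the schema above):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("PatrickHaller/dsir-pile-10M-words")

# Tally documents per Pile subset; pile_set_name is a sequence feature,
# so take its first (assumed only) entry for each row.
counts = Counter(row["metadata"]["pile_set_name"][0] for row in ds["train"])
print(dict(counts.most_common()))
```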

The dataset contains ~10M words of text. This can be verified with:

```python
from datasets import load_dataset

ds = load_dataset("PatrickHaller/dsir-pile-10M-words")

# Count space-separated tokens across all training documents
count = 0
for row in ds["train"]:
    count += len(row["text"].split(" "))

print(count)

# Out: 9999894
```