---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 17380043798
  dataset_size: 26076989556
---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"


```
num_examples: 33.5 million
download_size: 17.4 GB
dataset_size: 26.1 GB
```
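To load the dataset with 🤗 Datasets (the `<owner>` namespace below is a placeholder; substitute the actual repository path):

```python
from datasets import load_dataset

# "<owner>" is a placeholder for the repository namespace.
ds = load_dataset("<owner>/wikipedia20220301en-bookcorpusopen-chunked-shuffled", split="train")
print(ds[0]["text"])  # each item is a single lower-cased chunk of ~200-1000 chars
```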

This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks of ~820 chars each
(so that each item is at least ~128 tokens for a typical tokenizer).
The order of the items has been shuffled.
The splitting logic only breaks on spaces, so chunks are usually slightly longer than 820 chars.
The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 chars or more than 1000 chars have been removed.
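
The preprocessing described above is roughly equivalent to the following sketch. This is an illustrative reconstruction, not the exact script used to build the dataset; the helper names, the 820-char target, and the shuffle seed are assumptions.

```python
import unicodedata

from datasets import concatenate_datasets, load_dataset


def normalize(text: str) -> str:
    """Lower-case, strip accents, and drop non-English (non-ASCII) characters."""
    text = unicodedata.normalize("NFKD", text.lower())
    return text.encode("ascii", "ignore").decode("ascii")


def chunk(text: str, target_chars: int = 820) -> list:
    """Split on spaces only, emitting chunks of at least ~target_chars characters."""
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1
        if length >= target_chars:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks


def preprocess(batch):
    # Chunk each source document and keep only pieces in the 200-1000 char range.
    out = []
    for text in batch["text"]:
        out.extend(p for p in chunk(normalize(text)) if 200 <= len(p) <= 1000)
    return {"text": out}


wiki = load_dataset("wikipedia", "20220301.en", split="train")
books = load_dataset("bookcorpusopen", split="train")

# Keep only the "text" column so the two datasets can be concatenated.
wiki = wiki.remove_columns([c for c in wiki.column_names if c != "text"])
books = books.remove_columns([c for c in books.column_names if c != "text"])

combined = concatenate_datasets([wiki, books])
chunked = combined.map(preprocess, batched=True, remove_columns=combined.column_names)
shuffled = chunked.shuffle(seed=42)  # the actual seed is not documented
```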