[Dataset viewer residue removed: a `tokens` column (int64) listing per-document token counts, ranging from 100 to 30.5k.]

fw-bert-tokenized-flattened

A tokenized and flattened version of the 10 billion token sample of https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu, tokenized with the bert-base-uncased tokenizer. Practically, it is one huge array of token IDs, with each document separated by a [SEP] token.
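The flattening step described above can be sketched as follows. This is a minimal illustration, not the actual build script: a toy list of per-document token IDs stands in for the output of the bert-base-uncased tokenizer, and `flatten_docs` is a hypothetical helper name. The [SEP] ID of 102 matches the bert-base-uncased vocabulary.

```python
import numpy as np

SEP_ID = 102  # [SEP] token id in the bert-base-uncased vocabulary


def flatten_docs(docs_token_ids):
    """Concatenate per-document token id lists into one int64 array,
    appending [SEP] after each document."""
    out = []
    for ids in docs_token_ids:
        out.extend(ids)
        out.append(SEP_ID)
    return np.array(out, dtype=np.int64)


# Toy stand-in for two tokenized documents.
docs = [[2023, 2003, 1037], [2178, 6254]]
flat = flatten_docs(docs)
# flat -> [2023, 2003, 1037, 102, 2178, 6254, 102]
```

Consumers can then slice fixed-length training chunks directly out of the flat array without re-tokenizing.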
