How big is the en data?

#6
by newbietuan - opened

hello, when I use the code below, downloads are frequently interrupted. I have now downloaded about 622G of data into 2023-06/downloads. How big is the English data for a single snapshot? About 1T?

from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="default",
    partition="head_middle",
    snapshots=["2023-06"],
    languages=["en"],
)

I'm also interested in what the total size is in TB.

The 2023-06 snapshot downloads 1.2T of data in total (en, head_middle).

so for en, head_middle, that's most of the 20.5T tokens for EN?

newbietuan changed discussion status to closed
Together org

just to confirm here -- when you download the data via the huggingface dataloader, you're downloading the raw (i.e., not deduplicated) dataset together with the quality signals. This comes to roughly 1TB per snapshot, though it varies a bit across snapshots (the 2015 snapshots in particular are smaller, at ~700GB).
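
For sizing things up without committing to the full ~1TB per-snapshot download, streaming mode fetches rows lazily. A minimal sketch, assuming the dataset's loading script supports the datasets streaming mode and that the split is named "train":

from datasets import load_dataset

# Stream rows lazily instead of materializing the whole snapshot on disk.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="default",
    partition="head_middle",
    snapshots=["2023-06"],
    languages=["en"],
    streaming=True,
)

# Peek at a single record (document text plus quality signals).
first = next(iter(ds["train"]))
print(first.keys())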

so for en, head_middle, that's most of the 20.5T tokens for EN?

@sirus this only corresponds to one snapshot, and to the raw data (not the deduplicated set). There are 84 snapshots in total, and ~37T en tokens in the head_middle partition. I added an additional table to the README which shows the raw token counts.
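
Putting the thread's numbers together gives a rough upper bound on the raw download size across all snapshots. A back-of-envelope sketch: the ~1TB-per-snapshot figure comes from the reply above, and the early snapshots are smaller, so the true total is somewhat lower:

# Rough estimate from the figures in this thread:
# ~1TB raw per snapshot (en, head_middle, incl. quality signals), 84 snapshots.
per_snapshot_tb = 1.0   # varies; 2015 snapshots are closer to 0.7
num_snapshots = 84
print(f"~{per_snapshot_tb * num_snapshots:.0f} TB raw, en head_middle, all snapshots")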
