Were the documents shuffled before the dataset was split into shards?

#5
by yury-zyphra - opened

So, we have about 3B documents split into 10 shards. Were documents shuffled before the split?

ML Foundations org

Hi @yury-zyphra ,

The files in each shard were shuffled before the dataset was split into shards. The documents within each file were not further shuffled; the global shuffle occurs later in our pipeline, after filtering and tokenization of the dataset. If your processing scheme requires a global shuffle across all documents before tokenization, make sure to take this into account.
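To make the distinction concrete, here is a minimal sketch of the extra step a downstream user would need if they want a document-level global shuffle before tokenization (the function name and data layout are illustrative, not part of the actual pipeline):

```python
import random

def global_document_shuffle(docs_by_file, seed=0):
    """Flatten documents from all files and shuffle at the document level.

    The pipeline described above only shuffles at the file level before
    sharding; this document-level shuffle is the step it defers until
    after filtering and tokenization. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    all_docs = [doc for docs in docs_by_file for doc in docs]
    rng.shuffle(all_docs)
    return all_docs
```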

Thanks for the reply.

Sorry, I am even more confused now.

From what you're saying, it seems like you first split the documents into a lot of files, and the documents were not shuffled at that stage. But what did you do next? Did you split the files into 10 shards and then shuffle each shard individually? Or did you shuffle all the files first and then split them into shards?

I am asking because I am curious whether an individual shard is a representative sample of the whole dataset. If it is, then an individual shard should be roughly 400B tokens with statistics similar to those of the other shards and of the whole dataset.

ML Foundations org

Hi @yury-zyphra !

The documents were initially written into files as they were being read and processed from the CommonCrawl WARC files, so there was indeed no shuffling at this initial stage. After the files were created, we shuffled them (at the file level) and then split them into shards. However, because no document-level shuffling happened at this stage, picking e.g. 300M documents at random from the entire dataset is not exactly the same as picking one shard.
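The procedure described above (shuffle the file list, then split it into shards, leaving document order within each file untouched) can be sketched as follows; the file names, seed, and round-robin split are illustrative assumptions, not the actual pipeline code:

```python
import random

def split_into_shards(files, num_shards, seed=0):
    """Shuffle at the file level only, then split the file list into shards.

    Documents inside each file keep their original order, which is why a
    shard is not exactly equivalent to a uniform random sample of documents.
    (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    shuffled = files[:]       # copy so the caller's list is not mutated
    rng.shuffle(shuffled)     # file-level shuffle, not document-level
    # Round-robin assignment yields shards of near-equal size.
    return [shuffled[i::num_shards] for i in range(num_shards)]

shards = split_into_shards([f"part-{i:05d}.jsonl" for i in range(100)], 10)
```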

Let us know if this clarifies things!

gsmyrnis changed discussion status to closed
