Datasets:
FineWeb and RedPajamaV2 deduplication
Hi there! Thank you for another great resource from the HF team :)
I have a question about RedPajamaV2. The dataset card mentions that a deduplicated version of it is used. The original RP2 data includes two types of deduplication: exact deduplication using Bloom filters, and fuzzy deduplication using MinHashes with LSH.
In RP2 they provide the duplicated documents found by the exact deduplication, but for the fuzzy path they only provide the MinHash signatures, and the LSH step needs to be done by the user. Does the deduplicated flag in this dataset card indicate that you ran LSH using their hashes across the whole dataset, i.e. across different CC dumps? Or does it indicate only the exact deduplication they provide? Or perhaps something in between, such as LSH dedup within each CC dump (as you did within FineWeb, if I understood correctly)?
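For concreteness, here is a rough sketch of what that user-side LSH banding step over precomputed MinHash signatures could look like. The signature layout, band/row split, and document IDs are illustrative assumptions on my part, not RP2's actual schema or thresholds:

```python
# Illustrative sketch only: banding LSH over precomputed MinHash signatures.
# The signature layout (128 values per doc), the 16x8 band/row split, and the
# doc IDs are assumptions for the example, not RedPajamaV2's actual schema.
from collections import defaultdict

def lsh_duplicate_candidates(signatures, bands=16, rows=8):
    """signatures: dict of doc_id -> list of bands * rows MinHash values."""
    buckets = defaultdict(list)
    for doc_id, sig in signatures.items():
        for b in range(bands):
            band = tuple(sig[b * rows:(b + 1) * rows])
            # Docs whose signatures agree on any full band become candidate duplicates.
            buckets[(b, band)].append(doc_id)
    candidates = set()
    for ids in buckets.values():
        for other in ids[1:]:
            candidates.add((ids[0], other))  # keep the first doc, flag the rest
    return candidates

# Toy example: identical signatures collide in every band, the third doc in none.
sigs = {
    "doc_a": list(range(128)),
    "doc_b": list(range(128)),
    "doc_c": [v + 1 for v in range(128)],
}
print(lsh_duplicate_candidates(sigs))  # {('doc_a', 'doc_b')}
```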
Any insights on these points would be greatly appreciated.
Cheers.
Hi! For RedPajamaV2 we only used the "built-in", already-applied processing, without making any additional filtering decisions.
For deduplication, this means we only used the Bloom filter full-document (exact) dedup version.
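For readers less familiar with the distinction, exact full-document Bloom filter dedup amounts to something like the minimal sketch below. The filter size, number of hash functions, and hashing scheme here are illustrative assumptions, not the actual RP2 pipeline:

```python
# Minimal sketch of exact full-document dedup with a Bloom filter.
# Filter size, number of hash functions, and the hashing scheme are
# illustrative assumptions, not the actual RedPajamaV2 implementation.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 24, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, text):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{text}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, text):
        for pos in self._positions(text):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, text):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(text))

def exact_dedup(documents):
    """Keep the first occurrence of each exactly identical document."""
    seen, kept = BloomFilter(), []
    for doc in documents:
        if doc in seen:  # a Bloom filter can false-positive, never false-negative
            continue
        seen.add(doc)
        kept.append(doc)
    return kept

print(exact_dedup(["same text", "same text", "other text"]))  # ['same text', 'other text']
```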
We hesitated to add an RP2 comparison, as we do not feel it is entirely fair to RP2: it is a "do it your way" kind of dataset, and whatever additional processing we applied (including, for example, setting LSH thresholds for the MinHash dedup) would effectively produce a new dataset, while we did not apply any additional processing to the other datasets we compared against. But since quite a lot of people asked for a comparison, we added one with the "vanilla" version.
Thank you for such a quick reply! That actually makes sense, and it can also give some insight into the effect of exact dedup alone vs. the steps taken in FineWeb. I was also intrigued by this statement:
While we originally intended to deduplicate the dataset as a whole, our ablations showed that training on a sampling of individually deduplicated dumps/crawls outperformed training on a sampling of all the dumps/crawls deduplicated together. We will discuss this further in our technical report.
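Just to make sure I read that correctly, this is how I understand the two setups being contrasted (my own rough illustration with a placeholder dedup step, not FineWeb's actual code):

```python
# My own illustration of the two setups being contrasted; `fuzzy_dedup` is a
# placeholder for a MinHash/LSH dedup step, not FineWeb's actual pipeline.
def fuzzy_dedup(docs):
    # stand-in: in practice this would be MinHash + LSH, not exact matching
    return list(dict.fromkeys(docs))

def dedup_per_dump(dumps):
    """Deduplicate each CC dump independently (the setup that performed better)."""
    return {name: fuzzy_dedup(docs) for name, docs in dumps.items()}

def dedup_globally(dumps):
    """Pool all dumps and deduplicate them together."""
    return fuzzy_dedup([doc for docs in dumps.values() for doc in docs])
```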
So I am eagerly waiting for the report, as I wonder whether a similar approach should be taken when using RedPajamaV2 in other languages.