Confusion and Discrepancy Regarding Deduplication Versions and Dataset Sizes
#26 by OrionZheng
Hello, I have two questions about the dataset:
- Confusion about the deduplicated version: I saw that both 'the-stack' and 'the-stack-dedup' mention near-deduplication in the dataset card. What is the difference between these two repositories?
- Discrepancy in dataset sizes: the description of 'the-stack-dedup' says 'This is the near-deduplicated version with 3TB data', but after cloning the main branches of both repositories, I noticed that neither the deduplicated nor the non-deduplicated version matched the claimed sizes. Could you please explain why? Could you also provide more detail about the processing steps involved?
1- the-stack-dedup is the near-deduplicated version of the-stack. We did not run near-deduplication on the-stack itself, only exact deduplication to remove identical files.
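To make the distinction concrete, exact deduplication keeps one copy of each byte-identical file, typically by hashing file contents. The sketch below is a minimal stdlib-only illustration of that idea (the paths and contents are made up; this is not the actual pipeline used for The Stack):

```python
import hashlib

def exact_dedup(files):
    """Keep only the first file seen for each unique content hash.

    `files` is a list of (path, content) pairs. Two files count as
    duplicates only if their contents are byte-identical -- near-duplicates
    (e.g. same file with a changed comment) are NOT removed here.
    """
    seen = set()
    kept = []
    for path, content in files:
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append((path, content))
    return kept

# Hypothetical example: two identical copies of the same file, one unique file.
files = [
    ("repo_a/utils.py", "def add(a, b):\n    return a + b\n"),
    ("repo_b/utils.py", "def add(a, b):\n    return a + b\n"),  # identical copy
    ("repo_c/main.py", "print('hello')\n"),
]
deduped = exact_dedup(files)
print(len(deduped))  # 2 unique files remain
```

Near-deduplication, by contrast, also removes files that are merely *similar* (e.g. via MinHash signatures), which is why the-stack-dedup is substantially smaller than the-stack.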
2- the datasets are compressed as Parquet files; the sizes we mention refer to the actual amount of text in the dataset when decompressed, so the on-disk size of a clone will be smaller.
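The gap between on-disk and quoted sizes can be illustrated with a small stdlib-only sketch. It uses gzip instead of Parquet's columnar compression (purely for illustration, since Parquet needs a third-party library), but the effect is the same: repetitive source-code-like text compresses well, so the stored size understates the decompressed text volume:

```python
import gzip

# Hypothetical sample: highly repetitive code-like text, as found in
# large code corpora, compresses to a fraction of its decompressed size.
text = "def add(a, b):\n    return a + b\n" * 10_000
raw_bytes = len(text.encode("utf-8"))
compressed_bytes = len(gzip.compress(text.encode("utf-8")))

print(f"decompressed: {raw_bytes} bytes, compressed: {compressed_bytes} bytes")
```

So a clone of the repository measures the compressed Parquet files, while the "3TB" figure counts the decompressed text.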