hunterhector committed
Commit
892332e
1 Parent(s): c63fe7b

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -34,11 +34,13 @@ To evaluate the training efficiency of our dataset, we sampled 1.5T tokens from
 
 
 ## Initial Data Representation
- To produce TxT360, a comprehensive and transparent data processing pipeline was designed to account for the nuances of both web and curated datasets. The pipeline presents a unified framework for processing both data types, making it convenient and easily adaptive for users to revise and fine-tune the pipeline for their own use cases.
+ To produce TxT360, a comprehensive data processing pipeline was designed to account for the nuances of both web and curated datasets. The pipeline presents a unified framework for processing both data types, making it convenient for users to revise and fine-tune the pipeline for their own use cases.
 
 Web datasets are inherently noisy and varied. The TxT360 pipeline implements sophisticated filtering and deduplication techniques to clean and remove redundancies while preserving data integrity.
 
- Curated datasets are typically structured and consistently formatted. TxT360 filters these sources with selective steps to maintain their integrity while providing seamless integration into the larger dataset. Both data source types are globally deduplicated together resulting in 5.7T tokens of high-quality data. The table below shows the source distribution of TxT360 tokens.
+ Curated datasets are typically structured and consistently formatted, but they can also cause trouble with their own special formatting preferences. TxT360 filters these sources with selective steps to maintain their integrity while providing seamless integration into the larger dataset. Both data source types are globally deduplicated together, resulting in ~5T tokens of high-quality data. The table below shows the source distribution of TxT360 tokens.
+
+ We further highlight the importance of mixing the datasets together with the right blend. The raw distribution of the deduplicated dataset is actually suboptimal; a simple working recipe is provided in the studies section.
 
 | Data Source     | Raw Data Size | Token Count | Information Cut-Off Date |
 |-----------------|---------------|-------------|--------------------------|
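
The global deduplication mentioned in the diff is the step that reduces the combined web and curated sources to a single ~5T-token corpus. The commit does not show the implementation, so the sketch below illustrates one standard technique for fuzzy deduplication at this scale, MinHash signatures over word shingles. It is a minimal Python illustration under that assumption, not TxT360's actual pipeline code, and every name in it is hypothetical.

```python
import hashlib

NUM_PERM = 64  # number of salted hash functions; more slots -> tighter similarity estimate

def shingles(text: str, n: int = 5):
    """Yield word n-grams ("shingles") of a document."""
    words = text.split()
    for i in range(max(len(words) - n + 1, 1)):
        yield " ".join(words[i:i + n])

def minhash_signature(text: str) -> tuple:
    """For each of NUM_PERM salted hash functions, keep the minimum
    hash value seen over all shingles of the document."""
    sig = [float("inf")] * NUM_PERM
    for sh in shingles(text):
        for seed in range(NUM_PERM):
            digest = hashlib.blake2b(
                sh.encode(), digest_size=8, salt=seed.to_bytes(8, "big")
            ).digest()
            sig[seed] = min(sig[seed], int.from_bytes(digest, "big"))
    return tuple(sig)

def estimated_jaccard(sig_a: tuple, sig_b: tuple) -> float:
    """Fraction of matching slots approximates Jaccard similarity of the shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

# Two near-duplicate documents and one unrelated document.
doc_a = "the quick brown fox jumps over the lazy dog near the river bank"
doc_b = "the quick brown fox jumps over the lazy dog near the river"
doc_c = "curated corpora follow consistent formatting conventions by design"

sig_a, sig_b, sig_c = map(minhash_signature, (doc_a, doc_b, doc_c))
print(estimated_jaccard(sig_a, sig_b))  # high -> drop one copy
print(estimated_jaccard(sig_a, sig_c))  # low  -> keep both
```

In a production pipeline the signatures would additionally be bucketed with locality-sensitive hashing, so candidate duplicate pairs are found without comparing every pair of documents.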
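The mixing note in the diff also deserves a concrete illustration of what "the right blend" means mechanically. The actual recipe lives in the studies section of the README and is not reproduced here; the source names and weights below are hypothetical placeholders that only demonstrate reweighting sources away from their raw post-deduplication shares.

```python
import random

# Hypothetical per-source upsampling weights (placeholders, NOT the
# recipe from the TxT360 studies section). In this simplified sampler
# a weight-2.0 source is drawn twice as often as a weight-1.0 source;
# a real pipeline would also fold in each source's raw token share.
MIX_WEIGHTS = {
    "web": 1.0,
    "papers": 2.0,
    "wikipedia": 3.0,
    "code": 1.5,
}

def sample_source(rng: random.Random) -> str:
    """Draw a source name in proportion to its mixing weight."""
    sources, weights = zip(*MIX_WEIGHTS.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_source(rng) for _ in range(10_000)]
for src in MIX_WEIGHTS:
    print(f"{src}: {draws.count(src) / len(draws):.3f}")  # empirical mix fractions
```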