## Upsampling Experiment with Comparison to FineWeb

To evaluate the training efficiency of our dataset, we sampled 1.5T tokens from both FineWeb and TxT360 (using the aforementioned weighting) and conducted a training ablation on an 8x8B Mixture-of-Experts architecture, similar to Mixtral. We compared the learning curves by tracking training loss, validation scores, and performance across a wide array of diverse evaluation benchmarks. The validation set was sampled independently from SlimPajama. Note that this experiment was run on a slightly earlier version of the dataset.
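The source weighting mentioned above amounts to drawing training documents from each data source in proportion to an assigned weight. A minimal sketch of that idea is below; the source names and weight values are hypothetical placeholders for illustration, not TxT360's actual weighting.

```python
import random

# Hypothetical per-source upsampling weights (illustrative only; these are
# NOT the actual TxT360 weights referenced in the text above).
SOURCE_WEIGHTS = {
    "common_crawl": 0.60,
    "wikipedia": 0.15,
    "papers": 0.15,
    "code": 0.10,
}

def sample_source_sequence(n_draws, weights, seed=0):
    """Draw a sequence of data-source names proportional to their weights."""
    rng = random.Random(seed)
    sources = list(weights)
    probs = [weights[s] for s in sources]
    return rng.choices(sources, weights=probs, k=n_draws)

# With enough draws, the empirical mix approaches the target weights.
draws = sample_source_sequence(10_000, SOURCE_WEIGHTS)
frac_cc = draws.count("common_crawl") / len(draws)
print(f"common_crawl fraction: {frac_cc:.2f}")
```

In a real data pipeline the draw would select a document (or shard) from the chosen source rather than just its name, but the proportional-sampling logic is the same.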
<center><img src="txttofineweb.png" alt="comparison" /></center>
## Initial Data Representation