Update README.md
README.md (CHANGED)
@@ -11,14 +11,37 @@ pretty_name: OLMoE Mix (August 2024)

-| Subset | Docs | Bytes | Words | Tokens |
-|------------------------------------|:----------:|:----------:|:-----------:|:----------:|
-| DCLM | 2.95 B | 16.7 T | 3.38 T | 3.86 T |
-| Starcoder | 78.7 M | 325 B | 63.9 B | 101 B |
-| peS2o<br>(Dolma) | 38.8 M | 268 B | 51.3 B | 57.2 B |
-| Algebraic Stack<br>(Proof Pile II) | 2.83 M | 39.3 B | 9.6 B | 12.6 B |
-| Arxiv<br>(Proof Pile II) | 1.55 M | 88.8 B | 23.5 B | 21.1 B |
-| Wikipedia<br>(Dolma) | 6.17 M | 16.2 B | 3.16 B | 3.69 B |
-| **Total** | **3.08 B** | **17.4 T** | **3.53 T** | **4.06 T** |

# OLMoE Mix (August 2024)

<img alt="OLMoE Mix Logo." src="https://huggingface.co/datasets/allenai/OLMoE-mix-0824/resolve/main/olmoe-mix.png?download=true" width="250px">

The following data mix was used to train OLMoE-1B-7B, a Mixture-of-Experts LLM with 1B active and 7B total parameters, released in August 2024.

The base version of OLMoE-1B-7B can be found at [this page](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824), the SFT version is available [here](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-SFT), and a version combining SFT and DPO is available [here](https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-Instruct).

## Statistics
| Subset | Docs | Bytes | Words | Tokens |
|--------------------------------------------------------------|:----------:|:----------:|:----------:|:----------:|
| [DCLM Baseline 1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) | 2.95 B | 16.7 T | 3.38 T | 3.86 T |
| [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 78.7 M | 325 B | 63.9 B | 101 B |
| [peS2o](https://huggingface.co/datasets/allenai/peS2o)<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 38.8 M | 268 B | 51.3 B | 57.2 B |
| Algebraic Stack<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.83 M | 39.3 B | 9.6 B | 12.6 B |
| Arxiv<br>([RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)<br>via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 1.55 M | 88.8 B | 23.5 B | 21.1 B |
| OpenWebMath<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 2.91 M | 42.4 B | 10.2 B | 12.7 B |
| En Wikipedia +<br>Wikibooks<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 6.17 M | 16.2 B | 3.16 B | 3.69 B |
| **Total** | **3.08 B** | **17.4 T** | **3.53 T** | **4.07 T** |

## Preprocessing
All subsets were pre-processed to remove documents with a *sequence* of 32 or more repeated *ngrams* (sketched below), where:

- an *ngram* is a span of 1 to 13 tokens, inclusive;
- *tokens* are obtained using the model tokenizer;
- a *sequence* is a contiguous span of repeated ngrams.
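
A minimal Python sketch of that repetition check follows. This is not the actual preprocessing code used for this mix; the function names, the brute-force matching, and the `tokenize` callable standing in for the model tokenizer are assumptions for illustration.

```python
from typing import Callable, List


def has_repeated_ngram_sequence(
    tokens: List[int],
    max_ngram: int = 13,        # ngrams span 1 to 13 tokens, inclusive
    min_repetitions: int = 32,  # a "sequence" is 32+ back-to-back repeats
) -> bool:
    """Return True if any ngram of length 1..max_ngram repeats
    contiguously at least min_repetitions times in tokens."""
    for n in range(1, max_ngram + 1):
        # Try every start position and check whether the ngram there is
        # repeated back-to-back min_repetitions times.
        for i in range(len(tokens) - n * min_repetitions + 1):
            block = tokens[i:i + n]
            if all(
                tokens[i + r * n:i + (r + 1) * n] == block
                for r in range(1, min_repetitions)
            ):
                return True
    return False


def keep_document(text: str, tokenize: Callable[[str], List[int]]) -> bool:
    # tokenize is assumed to wrap the model tokenizer (e.g. tokenizer.encode).
    return not has_repeated_ngram_sequence(tokenize(text))
```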
In addition to the above, the Starcoder dataset was further processed by removing any document meeting any of the following rules (see the sketch after this list):

- the document is from a repository with fewer than 2 stars on GitHub;
- the most frequent word in the document constitutes over 30% of the document;
- the two most frequent words in the document constitute over 50% of the document.
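
A rough Python sketch of these Starcoder heuristics is given below. It assumes that a word's share is computed over whitespace-split word occurrences and that the star count is read from repository metadata; the actual filtering code may define these differently.

```python
from collections import Counter


def keep_starcoder_document(text: str, github_stars: int) -> bool:
    """Apply the three removal rules above; returns True if the document is kept."""
    # Rule 1: drop documents from repositories with fewer than 2 stars.
    if github_stars < 2:
        return False

    words = text.split()  # naive whitespace word splitting (an assumption)
    if not words:
        return False  # treat empty documents as removable (edge-case choice)

    top_counts = [count for _, count in Counter(words).most_common(2)]
    top1 = top_counts[0]
    top2 = sum(top_counts)  # one or two entries, depending on vocabulary size

    # Rule 2: the most frequent word makes up over 30% of the document.
    if top1 / len(words) > 0.30:
        return False
    # Rule 3: the two most frequent words make up over 50% of the document.
    if top2 / len(words) > 0.50:
        return False
    return True
```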
## Licensing Information

This mix is licensed under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the licenses and Terms of Service of the underlying datasets, which you can access by clicking on the links in the table above.