---
task_categories:
- text-generation
language:
- en
size_categories:
- 1B<n<10B
license: odc-by
pretty_name: OLMoE Mix (September 2024)
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: added
    dtype: string
  - name: attributes
    dtype: string
  - name: created
    dtype: string
  - name: doc
    dtype: string
  - name: metadata
    dtype: string
---

# OLMoE Mix (September 2024)

## Dataset Description

- **Repository:** https://github.com/allenai/OLMoE
- **Paper:** [OLMoE: Open Mixture-of-Experts Language Models](https://arxiv.org/abs/2409.02060)


<img alt="OLMoE Mix Logo." src="olmoe-mix.png" width="250px">

The following data mix was used to train OLMoE-1B-7B, a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024. 

The base version of OLMoE-1B-7B is available [here](https://huggingface.co/allenai/OLMoE-1B-7B-0924), the SFT version is available [here](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), and the version combining SFT and DPO (Instruct) is available [here](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct).
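The mix can be inspected with the `datasets` library. The snippet below is a minimal sketch, not an official loading recipe: the repository id `allenai/OLMoE-mix-0924` and the single `train` split are assumptions based on this card, and streaming is used so the full multi-terabyte mix is not downloaded locally.

```python
from datasets import load_dataset

# Minimal sketch: repository id and split name are assumptions based on this
# card; streaming avoids materializing the full mix on disk.
mix = load_dataset("allenai/OLMoE-mix-0924", split="train", streaming=True)

# Each record carries the fields listed in the card metadata
# (id, text, added, attributes, created, doc, metadata).
for example in mix:
    print(example["id"], example["text"][:200])
    break
```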

## Statistics

| Subset                                                       | Tokens     | Words      | Bytes      | Docs       |
|--------------------------------------------------------------|:----------:|:----------:|:----------:|:----------:|
| [DCLM Baseline 1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) | 3.86 T     | 3.38 T     | 16.7 T     | 2.95 B     |
| [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 101 B      | 63.9 B     | 325 B      | 78.7 M     |
| [peS2o](https://huggingface.co/datasets/allenai/peS2o)<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 57.2 B     | 51.3 B     | 268 B      | 38.8 M     |
| Arxiv<br>([RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) <br>via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 21.1 B     | 23.5 B     | 88.8 B     | 1.55 M     |
| OpenWebMath<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 12.7 B     | 10.2 B     | 42.4 B     | 2.91 M     |
| Algebraic Stack<br>([Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2)) | 12.6 B     | 9.6 B      | 39.3 B     | 2.83 M     |
| En Wikipedia + <br>Wikibooks<br>([Dolma](https://huggingface.co/datasets/allenai/dolma)) | 3.69 B     | 3.16 B     | 16.2 B     | 6.17 M     |
| **Total**                                                    | **4.07 T** | **3.53 T** | **17.4 T** | **3.08 B** |

## Preprocessing

All subsets were pre-processed to remove documents containing a *sequence* of 32 or more repeated *n-grams* (a sketch of this filter follows the list below):
- an *n-gram* is a span of 1 to 13 tokens, inclusive;
- *tokens* are obtained with the model tokenizer;
- a *sequence* is a contiguous span of repeated n-grams.
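The snippet below is a minimal sketch of one possible reading of this rule, not the exact implementation used in pre-processing: it flags a document when, for some n between 1 and 13, the same n-gram repeats back-to-back at least 32 times.

```python
from typing import Sequence

def has_repeated_ngram_run(tokens: Sequence[int],
                           max_n: int = 13,
                           min_repeats: int = 32) -> bool:
    """Sketch of the repetition filter: True if `tokens` contains a
    contiguous run of the same n-gram (1 <= n <= max_n) repeated at
    least `min_repeats` times in a row."""
    for n in range(1, max_n + 1):
        for offset in range(n):  # a run may start at any alignment
            run = 1
            for start in range(offset + n, len(tokens) - n + 1, n):
                if tokens[start:start + n] == tokens[start - n:start]:
                    run += 1
                    if run >= min_repeats:
                        return True
                else:
                    run = 1
    return False
```

Documents for which this check fires would be dropped from the subset.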

In addition to the above, the Starcoder subset was further filtered by removing any document that meets any of the following rules (the word-frequency checks are sketched below):
- the document comes from a repository with fewer than 2 stars on GitHub;
- the most frequent word in the document constitutes over 30% of the document;
- the two most frequent words in the document constitute over 50% of the document.
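A minimal sketch of the two word-frequency rules, assuming whitespace-separated words; the GitHub star rule relies on repository metadata and is omitted here.

```python
from collections import Counter

def passes_word_frequency_rules(text: str,
                                top1_max: float = 0.30,
                                top2_max: float = 0.50) -> bool:
    """Keep a document only if no single word exceeds 30% of all words
    and the two most frequent words together do not exceed 50%."""
    words = text.split()  # assumption: whitespace-separated "words"
    if not words:
        return False
    top = Counter(words).most_common(2)
    top1_share = top[0][1] / len(words)
    top2_share = sum(count for _, count in top) / len(words)
    return top1_share <= top1_max and top2_share <= top2_max
```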

## Licensing Information

This mix is licensed under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the licenses and Terms of Service of the underlying datasets, which you can access by clicking the links in the table above.

## Citation

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models}, 
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060}, 
}
```