---
license: mit
datasets:
- sail/regmix-data
- sail/regmix-data-sample
language:
- en
tags:
- regmix
---

# Models Trained with DoReMi

This is a collection of language models, each with approximately 1B parameters, trained with the DoReMi data mixture under different random seeds. These models serve as a strong baseline for our RegMix approach (https://huggingface.co/papers/2407.01492).

- **Model Size**: 5 separate models trained with different seeds, each with ~1B parameters
- **Training Data**: The data mixture derived by the DoReMi 280M proxy model (Xie et al., 2023), applied to the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset
- **Purpose**: DoReMi is a flagship method for automatic data mixture optimization, serving as a strong baseline for RegMix

## Dataset

The models were trained on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which is split into different domains from The Pile dataset.

## Training Hyperparameters

| Hyperparameter | Value |
|:---------------|:------|
| Batch Size | 1M tokens |
| Learning Rate | 4e-4 |
| Minimum Learning Rate | 1e-5 |
| Learning Rate Schedule | Cosine |
| Warmup Ratio | 4% |
| Total Tokens | 25B |

## How to Load a Model

You can load any model using the corresponding branch with the Hugging Face Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")
```

## Data Mixture

The specific data mixture used for training this 1B model is shown below; it can also be found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/doremi.yaml):

```yaml
train:
  train_the_pile_arxiv: 0.0036
  train_the_pile_freelaw: 0.0043
  train_the_pile_nih_exporter: 0.0063
  train_the_pile_pubmed_central: 0.0046
  train_the_pile_wikipedia_en: 0.0699
  train_the_pile_dm_mathematics: 0.0018
  train_the_pile_github: 0.0179
  train_the_pile_philpapers: 0.0274
  train_the_pile_stackexchange: 0.0153
  train_the_pile_enron_emails: 0.0070
  train_the_pile_gutenberg_pg_19: 0.0072
  train_the_pile_pile_cc: 0.6057
  train_the_pile_ubuntu_irc: 0.0093
  train_the_pile_europarl: 0.0062
  train_the_pile_hackernews: 0.0134
  train_the_pile_pubmed_abstracts: 0.0113
  train_the_pile_uspto_backgrounds: 0.0036
valid:
  valid_the_pile_pile_cc: 1.0
model_name: tinyllama_1_1b
```

> The domain weights are renormalized in the code so that they sum to 1.0.

## Model Variants

To access different model variants, simply change the `revision` parameter in the `from_pretrained` method to the desired seed (e.g., "seed-2", "seed-3"); the maximum seed is 5.

## Model Performance

We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of the 0-shot to 5-shot `acc_norm` (normalized accuracy, if available) or `acc` (accuracy) scores.
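As a reference point, individual cells of the table below can be recomputed with the harness's Python API. The snippet is a minimal sketch under assumptions (a recent lm-evaluation-harness release exposing `lm_eval.simple_evaluate`, illustrative task names that may differ by version), not the exact evaluation script used to produce these numbers.

```python
# Minimal sketch: score one seed variant on a subset of the reported tasks.
# Assumes lm-evaluation-harness >= 0.4; task names may vary across versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sail/data-mixture-doremi-1b,revision=seed-1",
    tasks=["piqa", "hellaswag", "arc_easy", "sciq"],
    num_fewshot=0,  # the reported numbers average 0-shot through 5-shot runs
)

# Print per-task metrics (acc / acc_norm) for this seed.
for task, metrics in results["results"].items():
    print(task, metrics)
```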
| Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average |
|------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------|
| 1 | 68.27 | 32.08 | 53.82 | 26.42 | 33.35 | 52.17 | 31.31 | 30.33 | 68.50 | 43.41 | 81.63 | 50.60 | 56.57 | 48.34 |
| 2 | 68.07 | 32.93 | 51.34 | 26.02 | 33.12 | 52.58 | 31.23 | 30.16 | 70.60 | 43.73 | 84.30 | 52.69 | 59.68 | 48.96 |
| 3 | 68.79 | 33.26 | 52.03 | 24.70 | 33.18 | 52.04 | 30.87 | 29.72 | 65.80 | 43.09 | 84.56 | 53.53 | 56.67 | 48.33 |
| 4 | 68.80 | 31.45 | 54.03 | 25.16 | 33.14 | 51.63 | 31.06 | 29.68 | 72.80 | 43.19 | 85.20 | 52.68 | 56.24 | 48.85 |
| 5 | 68.88 | 32.51 | 53.17 | 25.22 | 33.58 | 52.15 | 31.27 | 30.08 | 71.00 | 43.15 | 81.02 | 51.96 | 57.57 | 48.58 |

## Usage Notes

- These models are primarily intended for research purposes.
- Performance may vary depending on the specific task and domain.

## Citation

If you use these models in your research, please cite the RegMix paper:

```
@misc{liu2024regmix,
  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
  author={Qian Liu and Xiaosen Zheng and Niklas Muennighoff and Guangtao Zeng and Longxu Dou and Tianyu Pang and Jing Jiang and Min Lin},
  year={2024},
  eprint={2407.01492},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.01492},
}
```

For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).