---
license: mit
datasets:
- sail/regmix-data
- sail/regmix-data-sample
language:
- en
tags:
- regmix
---

# Models Trained with RegMix Data Mixture

This is a collection of language models trained using the RegMix data mixture, each with approximately 1B parameters and trained with a different random seed. These models are intended to verify the effectiveness of our RegMix approach (https://huggingface.co/papers/2407.01492).

- **Model Size**: 5 separate models trained with different seeds, each with ~1B parameters
- **Training Data**: the data mixture automatically derived by our method on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset
- **Purpose**: To verify the effectiveness of our proposed method

## Dataset

The models were trained using the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which is split into the different domains of The Pile dataset.

## Training Hyperparameters

| Hyperparameter | Value |
|:---------------|:------|
| Batch Size | 1M tokens |
| Learning Rate | 4e-4 |
| Minimum Learning Rate | 1e-5 |
| Learning Rate Schedule | Cosine |
| Warmup Ratio | 4% |
| Total Tokens | 25B |

## How to Load a Model

You can load any model using the corresponding branch with the Hugging Face Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("sail/data-mixture-regmix-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-regmix-1b", revision="seed-1")
```

## Data Mixture

The specific data mixture used for training this 1B model is given below; it can also be found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/regmix.yaml):

```yaml
train:
  train_the_pile_arxiv: 0.0012046169821426883
  train_the_pile_freelaw: 0.001454510048554701
  train_the_pile_nih_exporter: 0.001231640306882902
  train_the_pile_pubmed_central: 0.003108561825532002
  train_the_pile_wikipedia_en: 0.01593264140324679
  train_the_pile_dm_mathematics: 0.00031106907908634156
  train_the_pile_github: 0.00022861228152440253
  train_the_pile_philpapers: 1.329107360676338e-05
  train_the_pile_stackexchange: 0.00029547405933203174
  train_the_pile_enron_emails: 0.0016691646199353991
  train_the_pile_gutenberg_pg_19: 0.001612531300038395
  train_the_pile_pile_cc: 0.8701291419934237
  train_the_pile_ubuntu_irc: 0.06417728505869834
  train_the_pile_europarl: 2.9166170357771267e-06
  train_the_pile_hackernews: 0.011925517591888925
  train_the_pile_pubmed_abstracts: 0.02424425081714838
  train_the_pile_uspto_backgrounds: 0.0024587749419225434
valid:
  valid_the_pile_pile_cc: 1.0
model_name: tinyllama_1_1b
```

## Model Variants

To access the different model variants, simply change the `revision` parameter in the `from_pretrained` method to the desired seed (e.g., "seed-2", "seed-3"); the maximum seed is 5. A short sketch that loads all five variants is shown below.
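For convenience, the following is a minimal sketch that iterates over the five seed branches of `sail/data-mixture-regmix-1b` described above. The loop structure and the `models` dictionary are illustrative choices, not part of the released code.

```python
from transformers import AutoModel, AutoTokenizer

REPO = "sail/data-mixture-regmix-1b"

# Load each of the five seed variants from its corresponding branch.
# Note: keeping all five ~1B-parameter models in memory is expensive;
# in practice you may want to load and evaluate them one at a time.
models = {}
for seed in range(1, 6):
    revision = f"seed-{seed}"
    tokenizer = AutoTokenizer.from_pretrained(REPO, revision=revision)
    model = AutoModel.from_pretrained(REPO, revision=revision)
    models[revision] = (tokenizer, model)
    print(f"Loaded {REPO} @ {revision}")
```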
## Model Performance

We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of the 0-shot to 5-shot `acc_norm` (normalized accuracy, if available) or `acc` (accuracy) scores.

| Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average |
|------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------|
| 1 | 69.33 | 34.20 | 51.70 | 25.76 | 33.77 | 53.08 | 31.34 | 30.30 | 70.17 | 44.19 | 82.75 | 51.68 | 58.34 | 48.97 |
| 2 | 69.47 | 34.02 | 50.71 | 26.97 | 33.45 | 52.06 | 30.99 | 29.64 | 70.40 | 44.17 | 82.90 | 51.50 | 54.94 | 48.56 |
| 3 | 69.24 | 31.99 | 54.07 | 23.66 | 33.38 | 51.16 | 30.70 | 30.32 | 69.40 | 43.74 | 82.60 | 52.95 | 53.43 | 48.20 |
| 4 | 69.18 | 33.29 | 54.21 | 25.35 | 33.34 | 52.27 | 31.67 | 29.28 | 69.20 | 44.00 | 82.34 | 53.32 | 55.07 | 48.65 |
| 5 | 68.39 | 31.01 | 53.43 | 25.38 | 33.57 | 51.87 | 31.44 | 29.40 | 70.40 | 43.74 | 83.46 | 51.28 | 56.49 | 48.45 |

## Usage Notes

- These models are primarily intended for research purposes.
- Performance may vary depending on the specific task and domain.

## Citation

If you use these models in your research, please cite the RegMix paper:

```
@article{liu2024regmix,
  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
  author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min},
  journal={arXiv preprint arXiv:2407.01492},
  year={2024}
}
```

For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).
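As a closing practical note, the sketch below shows how a single configuration (one seed, a subset of tasks, 0-shot) could be scored with the lm-evaluation-harness Python API. This is a rough starting point under assumptions, not the authors' exact evaluation script: it assumes lm-eval >= 0.4, the task names and batch size are illustrative, and the averaging over 0-shot to 5-shot described above would require repeating the call with `num_fewshot` from 0 to 5.

```python
# Minimal sketch (not the authors' exact setup): evaluate one seed variant
# on an illustrative subset of the tasks reported above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sail/data-mixture-regmix-1b,revision=seed-1",
    tasks=["piqa", "hellaswag"],  # illustrative subset of the tasks in the table
    num_fewshot=0,                # repeat with num_fewshot=1..5 and average the scores
    batch_size=8,
)
print(results["results"])
```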