SivilTaram committed on
Commit
5c7c749
1 Parent(s): 64c8f6b

Update README.md

Files changed (1)
  1. README.md +114 -3
README.md CHANGED
---
license: mit
datasets:
- sail/regmix-data
- sail/regmix-data-sample
language:
- en
tags:
- regmix
---

# Models Trained with the DoReMi Data Mixture

This is a collection of language models trained with the DoReMi data mixture, each with approximately 1B parameters and trained with a different random seed. These models serve as a strong baseline for our RegMix approach (https://huggingface.co/papers/2407.01492).

- **Model Size**: 5 separate models trained with different seeds, each with ~1B parameters
- **Training Data**: The data mixture produced by the DoReMi 280M proxy model (Xie et al., 2023), applied to the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset
- **Purpose**: DoReMi is a flagship method for automatic data mixture optimization, making these models a strong baseline for RegMix
## Dataset

The models were trained on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which is split into the different domains of The Pile dataset.

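If you want to inspect the training corpus itself, a minimal sketch along the following lines should work with the `datasets` library; the exact configuration and split names for `sail/regmix-data` and `sail/regmix-data-sample` are assumptions here, so check the dataset cards for the actual ones:

```python
# Hypothetical usage sketch: inspect the RegMix-Data corpus with Hugging Face `datasets`.
# The split/config names below are assumptions; see the sail/regmix-data dataset card.
from datasets import load_dataset

dataset = load_dataset("sail/regmix-data-sample", split="train")  # smaller sample variant
print(dataset)      # number of rows and column names
print(dataset[0])   # a single training example
```
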
## Training Hyperparameters

| Hyperparameter | Value |
|:---------------|:------|
| Batch Size | 1M tokens |
| Learning Rate | 4e-4 |
| Minimum Learning Rate | 1e-5 |
| Learning Rate Schedule | Cosine |
| Warmup Ratio | 4% |
| Total Tokens | 25B |

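For intuition, the table above corresponds to roughly the following step-wise learning rate. This is an illustrative sketch (assuming the 4% warmup is linear and measured in training steps), not the exact training code:

```python
import math

# Illustrative cosine schedule with linear warmup, using the hyperparameters above.
# Assumption: warmup ratio is a fraction of total steps and warmup starts from 0.
MAX_LR, MIN_LR = 4e-4, 1e-5
TOTAL_STEPS = 25_000                      # ~25B tokens / 1M tokens per batch
WARMUP_STEPS = int(0.04 * TOTAL_STEPS)    # 4% warmup ratio

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:               # linear warmup
        return MAX_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(WARMUP_STEPS), learning_rate(TOTAL_STEPS))
```
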
## How to Load a Model

You can load any model using the corresponding branch with the Hugging Face Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")
```

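If you want to run text generation rather than only extract hidden states, a sketch like the following should work, assuming the checkpoints include a causal language modeling head (the config names a TinyLlama-style model, so `AutoModelForCausalLM` is the natural choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: greedy generation with the seed-1 checkpoint (assumes a causal LM head is present).
model = AutoModelForCausalLM.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-doremi-1b", revision="seed-1")

inputs = tokenizer("The Pile is a dataset that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
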
## Data Mixture

The specific data mixture used for training this 1B model is as follows; it can also be found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/doremi.yaml):

```yaml
train:
  train_the_pile_arxiv: 0.0036
  train_the_pile_freelaw: 0.0043
  train_the_pile_nih_exporter: 0.0063
  train_the_pile_pubmed_central: 0.0046
  train_the_pile_wikipedia_en: 0.0699
  train_the_pile_dm_mathematics: 0.0018
  train_the_pile_github: 0.0179
  train_the_pile_philpapers: 0.0274
  train_the_pile_stackexchange: 0.0153
  train_the_pile_enron_emails: 0.0070
  train_the_pile_gutenberg_pg_19: 0.0072
  train_the_pile_pile_cc: 0.6057
  train_the_pile_ubuntu_irc: 0.0093
  train_the_pile_europarl: 0.0062
  train_the_pile_hackernews: 0.0134
  train_the_pile_pubmed_abstracts: 0.0113
  train_the_pile_uspto_backgrounds: 0.0036
valid:
  valid_the_pile_pile_cc: 1.0
model_name: tinyllama_1_1b
```

> The domain weights are renormalized in the code to ensure that they sum to 1.0.

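Concretely, the renormalization simply rescales each weight by the sum of all weights; a minimal sketch (not the actual training code):

```python
# Minimal sketch of the renormalization step: rescale domain weights so they sum to 1.0.
weights = {
    "train_the_pile_pile_cc": 0.6057,
    "train_the_pile_wikipedia_en": 0.0699,
    "train_the_pile_philpapers": 0.0274,
    # ... remaining domains from the YAML above
}
total = sum(weights.values())
normalized = {domain: w / total for domain, w in weights.items()}
assert abs(sum(normalized.values()) - 1.0) < 1e-9
```
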
## Model Variants

To access a different model variant, simply change the `revision` parameter in the `from_pretrained` method to the desired seed (e.g., "seed-2", "seed-3"); seeds range from 1 to 5.

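For example, you could load all five variants in one go; this is a usage sketch, assuming you have enough memory to hold every checkpoint (otherwise run your per-seed processing inside the loop instead):

```python
from transformers import AutoModelForCausalLM

# Usage sketch: iterate over the five seed branches of this repository.
models = {}
for seed in range(1, 6):
    revision = f"seed-{seed}"
    models[revision] = AutoModelForCausalLM.from_pretrained(
        "sail/data-mixture-doremi-1b", revision=revision
    )
print(list(models))  # ['seed-1', 'seed-2', 'seed-3', 'seed-4', 'seed-5']
```
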
## Model Performance

We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of the 0-shot to 5-shot `acc_norm` (normalized accuracy, if available) or `acc` (accuracy) scores.

| Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average |
|------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------|
| 1 | 68.27 | 32.08 | 53.82 | 26.42 | 33.35 | 52.17 | 31.31 | 30.33 | 68.50 | 43.41 | 81.63 | 50.60 | 56.57 | 48.34 |
| 2 | 68.07 | 32.93 | 51.34 | 26.02 | 33.12 | 52.58 | 31.23 | 30.16 | 70.60 | 43.73 | 84.30 | 52.69 | 59.68 | 48.96 |
| 3 | 68.79 | 33.26 | 52.03 | 24.70 | 33.18 | 52.04 | 30.87 | 29.72 | 65.80 | 43.09 | 84.56 | 53.53 | 56.67 | 48.33 |
| 4 | 68.80 | 31.45 | 54.03 | 25.16 | 33.14 | 51.63 | 31.06 | 29.68 | 72.80 | 43.19 | 85.20 | 52.68 | 56.24 | 48.85 |
| 5 | 68.88 | 32.51 | 53.17 | 25.22 | 33.58 | 52.15 | 31.27 | 30.08 | 71.00 | 43.15 | 81.02 | 51.96 | 57.57 | 48.58 |

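As a reference for how each table entry is aggregated, the per-task score is a plain average over the six few-shot settings; a small sketch with placeholder numbers, not the actual results:

```python
# Sketch of the aggregation described above: average acc_norm (or acc) over 0- to 5-shot runs.
# The numbers here are placeholders, not actual evaluation results.
shot_scores = {0: 67.1, 1: 68.0, 2: 68.4, 3: 68.6, 4: 68.9, 5: 69.2}  # e.g. PIQA acc_norm per shot count
task_score = sum(shot_scores.values()) / len(shot_scores)
print(round(task_score, 2))
```
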
## Usage Notes

- These models are primarily intended for research purposes.
- Performance may vary depending on the specific task and domain.

## Citation

If you use these models in your research, please cite the RegMix paper:

```
@misc{liu2024regmix,
      title={RegMix: Data Mixture as Regression for Language Model Pre-training},
      author={Qian Liu and Xiaosen Zheng and Niklas Muennighoff and Guangtao Zeng and Longxu Dou and Tianyu Pang and Jing Jiang and Min Lin},
      year={2024},
      eprint={2407.01492},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.01492},
}
```

For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).