|
--- |
|
license: mit |
|
datasets: |
|
- sail/regmix-data |
|
- sail/regmix-data-sample |
|
language: |
|
- en |
|
--- |
|
|
|
|
|
# Models Trained with Human Selection |
|
|
|
This is a collection of language models trained with the Human selection data mixture, each with approximately 1B parameters and each trained with a different random seed. This project aims to validate the generalization capabilities of the RegMix approach (https://huggingface.co/papers/2407.01492) from small-scale (e.g., 1M parameters) to large-scale (e.g., 1B parameters) models.
|
|
|
## Key Features |
|
|
|
- **Models**: 5 separate models, each with ~1B parameters, trained with different random seeds
|
- **Training Data**: Human selection (from The Pile paper) data mixtures on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset |
|
- **Purpose**: The human-selected mixture serves as a strong baseline for our method, RegMix
|
|
## Dataset |
|
|
|
The models were trained on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which splits The Pile into separate domains.
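
If you want to inspect the data, it can be streamed with the standard `datasets` API. Below is a minimal sketch using the smaller [RegMix-Data-Sample](https://huggingface.co/datasets/sail/regmix-data-sample) companion dataset; the `train` split name is an assumption, so check the dataset card for the exact layout:

```python
from datasets import load_dataset

# Stream a few examples without downloading the full dataset.
# The split name is an assumption; see the dataset card for the exact layout.
ds = load_dataset("sail/regmix-data-sample", split="train", streaming=True)
print(next(iter(ds)))
```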
|
|
|
## Training Hyperparameters |
|
|
|
| Hyperparameter | Value | |
|
|:---------------|:------| |
|
| Batch Size | 1M tokens | |
|
| Learning Rate | 4e-4 | |
|
| Minimum Learning Rate | 1e-5 | |
|
| Learning Rate Schedule | Cosine | |
|
| Warmup Ratio | 4% | |
|
| Total Tokens | 25B | |
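
To make the schedule concrete, below is a minimal sketch of the learning-rate curve these values imply. The step counts are derived from 25B total tokens at 1M tokens per batch; the function is illustrative rather than the actual training code:

```python
import math

TOTAL_STEPS = 25_000                     # 25B tokens / 1M tokens per batch
WARMUP_STEPS = int(0.04 * TOTAL_STEPS)   # 4% warmup ratio
PEAK_LR, MIN_LR = 4e-4, 1e-5

def lr_at(step: int) -> float:
    """Cosine decay from PEAK_LR to MIN_LR with a linear warmup."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(WARMUP_STEPS), lr_at(TOTAL_STEPS))  # 0.0 4e-4 1e-5
```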
|
|
|
## How to Load a Model |
|
|
|
You can load any model variant from its corresponding branch using the Hugging Face Transformers library:
|
|
|
```python |
|
from transformers import AutoModel, AutoTokenizer |
|
|
|
model = AutoModel.from_pretrained("sail/data-mixture-human-1b", revision="seed-1") |
|
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-human-1b", revision="seed-1") |
|
``` |
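
Because these are causal language models (TinyLlama architecture), they can also be loaded with `AutoModelForCausalLM` for text generation. A quick sketch; the prompt is just a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")

# Generate a short continuation from a placeholder prompt.
inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```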
|
|
|
## Data Mixture |
|
|
|
The specific data mixture used for training this 1B model is as follows; it can also be found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/human.yaml):
|
|
|
```yaml |
|
train: |
|
train_the_pile_arxiv: 0.1052 |
|
train_the_pile_freelaw: 0.0386 |
|
train_the_pile_nih_exporter: 0.0052 |
|
train_the_pile_pubmed_central: 0.1071 |
|
train_the_pile_wikipedia_en: 0.0919 |
|
train_the_pile_dm_mathematics: 0.0198 |
|
train_the_pile_github: 0.0427 |
|
train_the_pile_philpapers: 0.0027 |
|
train_the_pile_stackexchange: 0.0929 |
|
train_the_pile_enron_emails: 0.0030 |
|
train_the_pile_gutenberg_pg_19: 0.0199 |
|
train_the_pile_pile_cc: 0.1121 |
|
train_the_pile_ubuntu_irc: 0.0074 |
|
train_the_pile_europarl: 0.0043 |
|
train_the_pile_hackernews: 0.0075 |
|
train_the_pile_pubmed_abstracts: 0.0845 |
|
train_the_pile_uspto_backgrounds: 0.0420 |
|
valid: |
|
valid_the_pile_pile_cc: 1.0 |
|
model_name: tinyllama_1_1b |
|
``` |
|
> In our code, the domain weights of the train sets are normalized so that they sum to 1.0.
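
As a small illustration of that normalization (a sketch with two example domains, not the repository's code):

```python
# Rescale raw domain weights so they sum to 1.0.
raw = {"train_the_pile_arxiv": 0.1052, "train_the_pile_freelaw": 0.0386}
total = sum(raw.values())
normalized = {domain: weight / total for domain, weight in raw.items()}
assert abs(sum(normalized.values()) - 1.0) < 1e-9
```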
|
|
|
## Model Variants |
|
|
|
To access a different model variant, simply change the `revision` parameter in the `from_pretrained` call to the desired seed (e.g., "seed-2", "seed-3"); the maximum seed is 5, as shown in the sketch below.
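
For example, a sketch that iterates over all five variants (each load fetches ~1B parameters, so in practice you may want to process them one at a time):

```python
from transformers import AutoModel

for seed in range(1, 6):
    # Each seed corresponds to a branch of the repository.
    model = AutoModel.from_pretrained(
        "sail/data-mixture-human-1b", revision=f"seed-{seed}"
    )
```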
|
|
|
## Model Performance |
|
|
|
We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of the 0-shot through 5-shot `acc_norm` (normalized accuracy, where available) or `acc` (accuracy) scores.
|
|
|
| Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average | |
|
|------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------| |
|
| 1 | 65.00 | 29.83 | 54.28 | 25.47 | 33.61 | 53.06 | 28.98 | 28.17 | 66.67 | 37.43 | 80.13 | 49.40 | 52.42 | 46.50 | |
|
| 2 | 65.03 | 26.69 | 53.24 | 25.31 | 33.69 | 52.52 | 29.42 | 28.76 | 63.00 | 37.68 | 82.58 | 51.36 | 58.46 | 46.75 | |
|
| 3 | 65.57 | 28.47 | 54.18 | 25.68 | 34.24 | 52.31 | 30.12 | 28.00 | 65.80 | 37.90 | 82.48 | 49.34 | 56.53 | 46.97 | |
|
| 4 | 65.45 | 26.88 | 51.42 | 24.92 | 34.16 | 50.50 | 29.93 | 28.92 | 62.40 | 37.70 | 80.66 | 49.27 | 58.06 | 46.17 | |
|
| 5 | 66.67 | 29.56 | 51.58 | 26.94 | 33.22 | 51.78 | 29.03 | 28.56 | 65.00 | 37.69 | 81.78 | 50.38 | 52.60 | 46.52 | |
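
For reference, here is a sketch of how one model/task entry could be reproduced with the harness's Python API. It assumes a recent `lm-eval` release; the `simple_evaluate` entry point and the `acc_norm,none`/`acc,none` result keys should be checked against your installed version:

```python
import lm_eval

# Average the 0-shot through 5-shot scores for one task and one seed,
# mirroring the metric described above.
scores = []
for shots in range(6):
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=sail/data-mixture-human-1b,revision=seed-1",
        tasks=["piqa"],
        num_fewshot=shots,
    )
    metrics = results["results"]["piqa"]
    scores.append(metrics.get("acc_norm,none", metrics.get("acc,none")))

print(sum(scores) / len(scores))
```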
|
|
|
## Usage Notes |
|
|
|
- These models are primarily intended for research purposes. |
|
- Performance may vary depending on the specific task and domain. |
|
|
|
## Citation |
|
|
|
If you use these models in your research, please cite the RegMix paper: |
|
|
|
``` |
|
@article{liu2024regmix, |
|
title={RegMix: Data Mixture as Regression for Language Model Pre-training}, |
|
author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min}, |
|
journal={arXiv preprint arXiv:2407.01492}, |
|
year={2024} |
|
} |
|
``` |
|
|
|
For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492). |