---
model-index:
- name: tulu-v2.5-ppo-13b-stackexchange-60k
results: []
datasets:
- allenai/tulu-2.5-preference-data
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: allenai/tulu-2-13b
license: apache-2.0
---
<center>
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/>
</center>
# Model Card for Tulu V2.5 PPO 13B - StackExchange 60K
Tulu is a series of language models that are trained to act as helpful assistants.
Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
This model is trained on a 60k random subsample of the StackExchange dataset using PPO.
We used a 13B RM trained on the 60k StackExchange dataset, and then re-used the same prompts during PPO training.
For more details, read the paper:
[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).
## Model description
- **Model type:** One model belonging to a suite of RLHF tuned chat models on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
### Model Sources
- **Repository:** https://github.com/allenai/open-instruct
- **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `stack_exchange_60k` split.
- **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
- **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-13b-stackexchange-60k-rm).
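For reference, here is a minimal sketch of loading the relevant preference data with the `datasets` library. It assumes the `stack_exchange_60k` subset is exposed directly as a split name, which may not match the actual dataset layout; check the dataset card for the exact split/config names.
```python
from datasets import load_dataset

# Hypothetical usage: the split/config name below is an assumption,
# see the dataset card for the actual layout.
ds = load_dataset("allenai/tulu-2.5-preference-data", split="stack_exchange_60k")

# Preference datasets typically pair a prompt with chosen/rejected responses.
print(ds[0])
```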
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; it can affect generation quality quite a bit.**
We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer implementing this template.
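As a quick illustration, here is a minimal generation sketch that relies on the tokenizer's chat template rather than hand-formatting the markers. The generation settings (dtype, device map, sampling parameters) are placeholder assumptions; adjust them for your hardware and use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/tulu-v2.5-ppo-13b-stackexchange-60k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Your message here!"}]

# The chat template inserts the <|user|>/<|assistant|> markers and the
# trailing newline for you.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```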
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
We then further aligned the model with a [Jax PPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_ppo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the dataset mentioned above.
## Bias, Risks, and Limitations
The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models is also unknown, though it likely included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
### Training hyperparameters
The following hyperparameters were used during PPO training:
- learning_rate: 1e-06
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- KL penalty coefficient: 0.05
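As a rough illustration of how the KL penalty coefficient enters training (this is a generic RLHF-style PPO reward-shaping sketch, not the authors' EasyLM implementation), the reward-model score for a sampled response is combined with a per-token penalty on divergence from the reference (SFT) policy:
```python
import numpy as np

def ppo_rewards(rm_score, policy_logprobs, ref_logprobs, kl_coef=0.05):
    """Sketch of KL-shaped per-token rewards in RLHF-style PPO.

    rm_score: scalar reward-model score for the full response.
    policy_logprobs / ref_logprobs: per-token log-probabilities of the sampled
    response under the current policy and the frozen reference model.
    """
    kl = policy_logprobs - ref_logprobs   # per-token KL estimate
    rewards = -kl_coef * kl               # penalize drift from the reference policy
    rewards[-1] += rm_score               # RM score credited at the final token
    return rewards

# Example with made-up numbers:
r = ppo_rewards(
    rm_score=1.2,
    policy_logprobs=np.array([-0.5, -1.0, -0.3]),
    ref_logprobs=np.array([-0.6, -0.9, -0.4]),
)
print(r)
```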
## Citation
If you find Tulu 2.5 useful in your work, please cite it with:
```
@misc{ivison2024unpacking,
      title={Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```