Abstract
As development of large language models (LLMs) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach divides the available preference datasets and uses them in a stepwise manner, rather than employing them all at once. We demonstrate that this method facilitates the use of more precisely aligned reference models within the DPO training framework. Furthermore, sDPO trains the final model to be more performant, even outperforming other popular LLMs with more parameters.
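The abstract's core idea, using the preference data in chunks and promoting each step's aligned model to be the next step's reference, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `ToyPolicy`, `dpo_loss`, the chunk split, and all hyperparameters are assumptions made so the script runs end to end with plain PyTorch; only the stepwise reference-model update reflects what the abstract describes.

```python
# Minimal sketch of stepwise DPO (sDPO): split the preference data into chunks
# and, after training on each chunk, copy the aligned policy into the reference
# model for the next step. A toy linear "policy" stands in for an LLM; in
# practice the log-probs would be summed token log-probs of chosen/rejected
# responses under the actual model.
import copy
import torch
import torch.nn.functional as F

torch.manual_seed(0)

class ToyPolicy(torch.nn.Module):
    """Stand-in for an LLM: maps a feature vector to a pseudo response log-prob."""
    def __init__(self, dim=16):
        super().__init__()
        self.score = torch.nn.Linear(dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

def dpo_loss(policy, ref, chosen, rejected, beta=0.1):
    # Standard DPO objective:
    # -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))
    with torch.no_grad():
        ref_c, ref_r = ref(chosen), ref(rejected)
    pi_c, pi_r = policy(chosen), policy(rejected)
    logits = (pi_c - ref_c) - (pi_r - ref_r)
    return -F.logsigmoid(beta * logits).mean()

# Toy preference data: each "chunk" is one slice of the full preference set.
chunks = [(torch.randn(64, 16), torch.randn(64, 16)) for _ in range(3)]

policy = ToyPolicy()
reference = copy.deepcopy(policy)  # step 1 reference = the initial (SFT) model

for step, (chosen, rejected) in enumerate(chunks, start=1):
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(100):  # a few passes of DPO on this chunk only
        opt.zero_grad()
        loss = dpo_loss(policy, reference, chosen, rejected)
        loss.backward()
        opt.step()
    print(f"step {step}: final DPO loss {loss.item():.4f}")
    # Key sDPO idea: the freshly aligned model becomes the next reference.
    reference = copy.deepcopy(policy)
    for p in reference.parameters():
        p.requires_grad_(False)
```

In vanilla DPO the reference stays fixed at the SFT model for all of the data; here it is refreshed between chunks, which is the "more precisely aligned reference model" the abstract refers to.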
Community
Why does it work?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ORPO: Monolithic Preference Optimization without Reference Model (2024)
- Aligning Large Language Models by On-Policy Self-Judgment (2024)
- A Critical Evaluation of AI Feedback for Aligning Large Language Models (2024)
- CLHA: A Simple yet Effective Contrastive Learning Framework for Human Alignment (2024)
- RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models (2024)
It's not clear to me from the ablation whether this is just a function of learning-rate cycling, as has been well explored in the CV literature.