Exploratory Preference Optimization (XPO) is a simple online preference tuning method based on the DPO loss together with a reward model (RM). It was introduced by Tengyang Xie, Dylan J. Foster, Akshay Krishnamurthy, Corby Rosset, Ahmed Awadallah, and Alexander Rakhlin (Xie et al., 2024).
The abstract from the paper is the following:
Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses. By allowing RLHF to confidently stray from the pre-trained model, online exploration offers the possibility of novel, potentially super-human capabilities, but its full potential as a paradigm for language model training has yet to be realized, owing to computational and statistical bottlenecks in directly adapting existing reinforcement learning techniques. We propose a new algorithm for online exploration in RLHF, Exploratory Preference Optimization (XPO), which is simple and practical — a one-line change to (online) Direct Preference Optimization (DPO; Rafailov et al., 2023) — yet enjoys the strongest known provable guarantees and promising empirical performance. XPO augments the DPO objective with a novel and principled exploration bonus, empowering the algorithm to explore outside the support of the initial model and human feedback data. In theory, we show that XPO is provably sample-efficient and converges to a near-optimal language model policy under natural exploration conditions, irrespective of whether the initial model has good coverage. Our analysis, which builds on the observation that DPO implicitly performs a form of Q*-approximation (or, Bellman error minimization), combines previously disparate techniques from language modeling and theoretical reinforcement learning in a serendipitous fashion through the perspective of KL-regularized Markov decision processes. Empirically, we find that XPO is more sample-efficient than non-exploratory DPO variants in a preliminary evaluation.
XPO augments the DPO objective with an exploration bonus, allowing the method to explore outside the support of the initial model and the human feedback data.
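As a rough sketch (notation simplified here; the exact formulation, sign conventions, and the distribution that the sampled completion is drawn from are given in the paper and the implementation), for a prompt $x$ with preferred/rejected completions $y^{+}/y^{-}$ and a freshly sampled completion $\tilde{y}$, XPO minimizes the DPO loss plus an $\alpha$-weighted optimism term:

$$
\mathcal{L}_{\mathrm{XPO}}(\theta)
\;=\;
\underbrace{-\log \sigma\!\left(
  \beta \log \frac{\pi_\theta(y^{+}\mid x)}{\pi_{\mathrm{ref}}(y^{+}\mid x)}
  \;-\;
  \beta \log \frac{\pi_\theta(y^{-}\mid x)}{\pi_{\mathrm{ref}}(y^{-}\mid x)}
\right)}_{\text{DPO loss}}
\;+\;
\underbrace{\alpha \,\log \pi_\theta(\tilde{y}\mid x)}_{\text{exploration bonus}}
$$

Here $\beta$ is the usual DPO temperature and $\alpha$ controls the strength of exploration; the bonus term is the "one-line change" to online DPO that the abstract refers to.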
This post-training method was contributed by Kashif Rasul, Quentin Gallouédec and Lewis Tunstall.
To quickly verify that the trainer runs, you can launch the example XPO script with a dummy reward model using the following command:
```bash
python examples/scripts/xpo.py \
    --model_name_or_path EleutherAI/pythia-14m \
    --reward_model_path EleutherAI/pythia-14m \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --output_dir pythia-1b-tldr-xpo \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 32 \
    --num_train_epochs 3 \
    --max_new_tokens 53 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0
```
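The trainer can also be used directly from Python. The snippet below is a minimal sketch assuming TRL's `XPOTrainer`/`XPOConfig` API with the same dummy reward model as the command above; keyword names such as `processing_class` and `reward_model`, the `num_labels=1` reward head, and the pad-token handling follow recent TRL versions and should be checked against the version you have installed.

```python
# Minimal sketch, assuming TRL's XPOTrainer / XPOConfig API; argument names
# may differ slightly across TRL versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer
from trl import XPOConfig, XPOTrainer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-14m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")
tokenizer.pad_token = tokenizer.eos_token  # pythia tokenizers have no pad token by default

# Dummy reward model (a tiny LM with a scalar head), matching the command above.
reward_model = AutoModelForSequenceClassification.from_pretrained("EleutherAI/pythia-14m", num_labels=1)

train_dataset = load_dataset("trl-lib/tldr", split="train")

training_args = XPOConfig(
    output_dir="pythia-1b-tldr-xpo",
    learning_rate=5.0e-7,
    missing_eos_penalty=1.0,
)
trainer = XPOTrainer(
    model=model,
    reward_model=reward_model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```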
The logged metrics are as follows:
- `loss/xpo`: The mean XPO part of the full loss.
- `loss/dpo`: The mean DPO part of the full loss.
- `objective/model_scores`: The mean scores (according to the reward model) of the model completions.
- `objective/ref_scores`: The mean scores (according to the reward model) of the reference completions.
- `objective/scores_margin`: The mean score margin (according to the external reward model) between the chosen and rejected completions.
- `objective/kl`: The mean KL divergence between the model and reference data.
- `objective/entropy`: The mean entropy of the model and reference data.
- `rewards/accuracies`: The accuracies of XPO's implicit reward model.
- `rewards/chosen`: The mean reward (according to XPO's DPO implicit reward model) of the chosen completions (see the sketch after this list).
- `rewards/rejected`: The mean reward (according to XPO's DPO implicit reward model) of the rejected completions.
- `rewards/margins`: The mean reward margin (according to XPO's implicit reward model) between the chosen and rejected completions.
- `logps/chosen`: The mean log probabilities of the chosen completions.
- `logps/rejected`: The mean log probabilities of the rejected completions.
- `val/model_contain_eos_token`: The number of times the model's output contains the EOS token.
- `val/ref_contain_eos_token`: The number of times the reference's output contains the EOS token.
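For reference, the "implicit reward" behind the `rewards/*` metrics is the standard DPO quantity $\beta \left(\log \pi_\theta(y\mid x) - \log \pi_{\mathrm{ref}}(y\mid x)\right)$. The snippet below is an illustrative sketch (not the trainer's internal code; the tensor names, shapes, and the value of `beta` are assumptions) of how such rewards, margins, and accuracies could be computed from summed per-completion log probabilities.

```python
import torch

beta = 0.1  # assumed value of the DPO/XPO temperature


def implicit_rewards(policy_logps: torch.Tensor, ref_logps: torch.Tensor, beta: float) -> torch.Tensor:
    """DPO-style implicit reward: beta * (log pi_theta(y|x) - log pi_ref(y|x))."""
    return beta * (policy_logps - ref_logps)


# Toy numbers standing in for summed log probabilities of each completion, shape (batch_size,).
chosen_policy_logps = torch.tensor([-12.3, -15.1])
chosen_ref_logps = torch.tensor([-13.0, -15.8])
rejected_policy_logps = torch.tensor([-14.2, -16.9])
rejected_ref_logps = torch.tensor([-13.9, -16.0])

rewards_chosen = implicit_rewards(chosen_policy_logps, chosen_ref_logps, beta)      # -> rewards/chosen
rewards_rejected = implicit_rewards(rejected_policy_logps, rejected_ref_logps, beta)  # -> rewards/rejected

margins = rewards_chosen - rewards_rejected   # -> rewards/margins
accuracy = (margins > 0).float().mean()       # -> rewards/accuracies
print(rewards_chosen.mean(), rewards_rejected.mean(), margins.mean(), accuracy)
```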