XPO Trainer

Exploratory Preference Optimization (XPO) was proposed by Tengyang Xie, Dylan J. Foster, Akshay Krishnamurthy, Corby Rosset, Ahmed Awadallah, and Alexander Rakhlin (Xie et al., 2024). It is a simple online preference tuning method that combines the DPO loss with a reward model (RM).

The abstract from the paper is the following:

Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses. By allowing RLHF to confidently stray from the pre-trained model, online exploration offers the possibility of novel, potentially super-human capabilities, but its full potential as a paradigm for language model training has yet to be realized, owing to computational and statistical bottlenecks in directly adapting existing reinforcement learning techniques. We propose a new algorithm for online exploration in RLHF, Exploratory Preference Optimization (XPO), which is simple and practical — a one-line change to (online) Direct Preference Optimization (DPO; Rafailov et al., 2023) — yet enjoys the strongest known provable guarantees and promising empirical performance. XPO augments the DPO objective with a novel and principled exploration bonus, empowering the algorithm to explore outside the support of the initial model and human feedback data. In theory, we show that XPO is provably sample-efficient and converges to a near-optimal language model policy under natural exploration conditions, irrespective of whether the initial model has good coverage. Our analysis, which builds on the observation that DPO implicitly performs a form of Q*-approximation (or, Bellman error minimization), combines previously disparate techniques from language modeling and theoretical reinforcement learning in a serendipitous fashion through the perspective of KL-regularized Markov decision processes. Empirically, we find that XPO is more sample-efficient than non-exploratory DPO variants in a preliminary evaluation.

XPO augments the DPO objective with an exploration bonus, allowing the method to explore outside the support of the initial model and human feedback data.
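As a rough sketch of this idea (treat the line below as schematic rather than a verbatim reproduction of the paper's equations; the exact sampling scheme and weighting follow the paper), the training loss combines the online DPO loss on a sampled preference pair with an α-weighted log-probability term on a completion y' drawn from the reference policy:

    L_XPO(θ) = L_DPO(θ) + α · log π_θ(y' | x),   with y' ~ π_ref(· | x)

Minimizing the second term discourages the policy from concentrating probability mass on completions the reference model already produces, which is what pushes exploration beyond that support.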

This post-training method was contributed by Kashif Rasul, Quentin Gallouédec and Lewis Tunstall.

Get started

To check that the trainer runs end to end, you can train an XPO model with a dummy reward model using the following command.

python examples/scripts/xpo.py \
    --model_name_or_path EleutherAI/pythia-14m  \
    --reward_model_path EleutherAI/pythia-14m \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --output_dir pythia-14m-tldr-xpo \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 32 \
    --num_train_epochs 3 \
    --max_new_tokens 53 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0
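
If you prefer to set things up programmatically, the snippet below is a rough Python equivalent of the command above. It is a minimal sketch that mirrors the same model, dummy reward model, dataset, and hyperparameters; examples/scripts/xpo.py may handle additional details, and the tokenizer argument of XPOTrainer is named tokenizer rather than processing_class in older TRL releases.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from trl import XPOConfig, XPOTrainer

# Policy model and a dummy reward model (sequence classifier with a single score head)
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-14m")
reward_model = AutoModelForSequenceClassification.from_pretrained("EleutherAI/pythia-14m", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")
tokenizer.pad_token = tokenizer.eos_token  # pythia tokenizers define no pad token by default

# Prompt dataset used for online generation
train_dataset = load_dataset("trl-lib/tldr", split="train")

training_args = XPOConfig(
    output_dir="pythia-14m-tldr-xpo",
    learning_rate=5.0e-7,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=3,
    max_new_tokens=53,
    warmup_ratio=0.1,
    missing_eos_penalty=1.0,
)

trainer = XPOTrainer(
    model=model,
    reward_model=reward_model,
    args=training_args,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
    train_dataset=train_dataset,
)
trainer.train()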

Explanation of the logged metrics

The logged metrics are as follows:
