A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement
Abstract
Reinforcement Learning from Human Feedback (RLHF) has become the predominant approach for language model (LM) alignment. At its core, RLHF uses a margin-based loss for preference optimization, which specifies ideal LM behavior only through the margin between preferred and dispreferred responses. In this paper, we identify a common pitfall of margin-based methods -- the under-specification of ideal LM behavior on preferred and dispreferred responses individually, which leads to two unintended consequences as the margin increases: (1) the probability of dispreferred (e.g., unsafe) responses may increase, resulting in potential safety alignment failures; (2) the probability of preferred responses may decrease, even when those responses are ideal. We demystify the reasons behind these problematic behaviors: margin-based losses couple the change in the preferred probability to the gradient of the dispreferred one, and vice versa, often preventing the preferred probability from increasing while the dispreferred one decreases, and thus causing a synchronized increase or decrease in both probabilities. We term this effect, inherent in margin-based objectives, gradient entanglement. Formally, we derive conditions for general margin-based alignment objectives under which gradient entanglement becomes concerning: when the inner product of the gradients of the preferred and dispreferred log-probabilities is large relative to the individual gradient norms. We theoretically investigate why such inner products can be large when aligning language models and empirically validate our findings. The empirical implications of our framework extend to explaining important differences in the training dynamics of various preference optimization algorithms and to suggesting algorithm designs that mitigate the under-specification issue of margin-based methods, thereby improving language model alignment.
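To make the entanglement condition concrete, here is a minimal, self-contained PyTorch sketch. A toy linear model stands in for the language model, and all names and values are our own illustration, not the paper's code. It computes the gradients of the chosen and rejected sequence log-probabilities and compares their inner product against the squared gradient norms, the quantity the abstract identifies as deciding when gradient entanglement becomes concerning.

```python
import torch

torch.manual_seed(0)
vocab, dim = 16, 8
W = torch.randn(vocab, dim, requires_grad=True)   # toy "LM" parameters
ctx = torch.randn(dim)                            # fixed context representation

def seq_logprob(tokens):
    """Sum of token log-probabilities under the toy single-context 'LM'."""
    logits = W @ ctx
    return torch.log_softmax(logits, dim=-1)[tokens].sum()

# Chosen and rejected responses that share a prefix and differ at the end.
chosen, rejected = [3, 5, 7], [3, 5, 9]

g_w = torch.autograd.grad(seq_logprob(chosen), W)[0].flatten()
g_l = torch.autograd.grad(seq_logprob(rejected), W)[0].flatten()

inner = torch.dot(g_w, g_l).item()
print(f"<g_w, g_l>  = {inner:.3f}")
print(f"||g_w||^2   = {g_w.dot(g_w).item():.3f}")
print(f"||g_l||^2   = {g_l.dot(g_l).item():.3f}")
# Entanglement is concerning when the inner product is comparable to (or larger
# than) the squared norms: a margin step along (g_w - g_l) then moves the
# chosen and rejected log-probabilities in the same direction.
```

Intuitively, when the chosen and rejected responses share most of their tokens (as in this toy pair), their gradients overlap heavily and the inner product tends to be large.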
Community
The Core Issue: Under-Specification in Margin-Based Loss
RLHF's margin-based contrastive loss does not specify ideal behavior for the individual log-probabilities of the chosen and rejected responses, leading to:
- A potential increase in the log-probability of dispreferred (e.g., unsafe) responses
- An unintended decrease in the log-probability of preferred responses
Root Cause
๐ "Gradient entanglement": Changes in preferred probabilities are coupled with gradients of dispreferred ones vice versa.
Impact
- Compromises safety in alignment tasks
- Hinders model distillation and retention of human demonstrations
Our Contributions
- Identified under-specification in margin-based preference optimization
- Uncovered gradient entanglement as the root cause of RLHF pitfalls
- Investigated conditions for synchronized log-probability movements
- Proposed solutions (see the sketch after this list):
  - Normalized-gradients approach
  - Token-level information leveraging
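Below is a hedged sketch of what a normalized-gradients pairwise update could look like, continuing the toy setup from the earlier snippet. It is one plausible instantiation of the idea, not the paper's exact algorithm, and every name in it is our own.

```python
import torch

torch.manual_seed(0)
vocab, dim, lr = 16, 8, 0.1
W = torch.randn(vocab, dim, requires_grad=True)   # toy "LM" parameters
ctx = torch.randn(dim)                            # fixed context representation

def seq_logprob(tokens):
    """Sum of token log-probabilities under the toy single-context 'LM'."""
    logits = W @ ctx
    return torch.log_softmax(logits, dim=-1)[tokens].sum()

chosen, rejected = [3, 5, 7], [3, 5, 9]           # shared prefix, different last token

for step in range(5):
    g_w = torch.autograd.grad(seq_logprob(chosen), W)[0]
    g_l = torch.autograd.grad(seq_logprob(rejected), W)[0]
    # Rescale each gradient to unit norm, then ascend on chosen and descend on rejected.
    direction = g_w / g_w.norm() - g_l / g_l.norm()
    with torch.no_grad():
        W += lr * direction
    print(f"step {step}: "
          f"logp_chosen={seq_logprob(chosen).item():+.3f}  "
          f"logp_rejected={seq_logprob(rejected).item():+.3f}")
```

With both gradients rescaled to unit norm, the first-order change of the chosen log-probability is proportional to 1 minus the cosine between the two gradients, so (to first order) the chosen log-probability cannot decrease and the rejected one cannot increase unless the gradients are exactly parallel.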
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization (2024)
- ASFT: Aligned Supervised Fine-Tuning through Absolute Likelihood (2024)
- Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback (2024)
- SeRA: Self-Reviewing and Alignment of Large Language Models using Implicit Reward Margins (2024)
- Negative-Prompt-driven Alignment for Generative Language Model (2024)