Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
Abstract
Reinforcement learning from human feedback (RLHF) has shown significant promise for fine-tuning diffusion models. Previous methods first train a reward model that aligns with human preferences and then use RL techniques to fine-tune the underlying models. However, crafting an effective reward model demands a large preference dataset, careful architecture selection, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective for fine-tuning large language models, eliminates the need for a reward model. However, the large GPU memory footprint of the diffusion model's multi-step denoising process prevents direct application of DPO. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to fine-tune diffusion models directly. Our theoretical analysis shows that although D3PO omits training a reward model, it effectively acts as the optimal reward model trained on human feedback data when guiding the learning process. Because no reward model is trained, the approach is more direct, more cost-effective, and incurs lower computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering results comparable to methods that use ground-truth rewards. Moreover, D3PO reduces image distortion rates and generates safer images, addressing settings where robust reward models are unavailable.
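As a rough illustration of the idea described above (not the authors' released implementation), the sketch below applies a DPO-style preference loss to per-step denoising log-probabilities from a fine-tuned policy and a frozen reference policy. The function name, the `beta` value, and the random stand-in inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style preference loss on per-step denoising log-probabilities.

    logp_w / logp_l: log pi_theta(x_{t-1} | x_t, c) for the preferred (w) and
    dispreferred (l) samples at the same denoising step, shape [batch].
    ref_logp_w / ref_logp_l: the same quantities under the frozen reference model.
    """
    # Log-ratios of the fine-tuned policy to the reference policy.
    ratio_w = logp_w - ref_logp_w
    ratio_l = logp_l - ref_logp_l
    # Sigmoid (Bradley-Terry) preference model: push probability mass toward
    # the preferred denoising step relative to the dispreferred one.
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

# Toy usage with random log-probabilities standing in for a real diffusion model.
if __name__ == "__main__":
    torch.manual_seed(0)
    b = 4
    logp_w, logp_l = torch.randn(b), torch.randn(b)
    ref_logp_w, ref_logp_l = torch.randn(b), torch.randn(b)
    loss = d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l)
    print(f"D3PO-style step loss: {loss.item():.4f}")
```

In practice the per-step log-probabilities would come from evaluating the diffusion model's Gaussian transition densities along sampled denoising trajectories for the human-preferred and dispreferred images, which avoids holding the entire denoising chain in memory at once.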
Community
You had me at D3PO
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Diffusion Model Alignment Using Direct Preference Optimization (2023)
- Aligning Text-to-Image Diffusion Models with Reward Backpropagation (2023)
- Enhancing Diffusion Models with Text-Encoder Reinforcement Learning (2023)
- Safe RLHF: Safe Reinforcement Learning from Human Feedback (2023)
- COPF: Continual Learning Human Preference through Optimal Policy Fitting (2023)
- SuperHF: Supervised Iterative Learning from Human Feedback (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face checkout this Space
Revolutionizing Diffusion Models: Human Feedback without Reward Models
Links:
Subscribe: https://www.youtube.com/@Arxflix
Twitter: https://x.com/arxflix
LMNT (Partner): https://lmnt.com/