NitroFusion: High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training
Abstract
We introduce NitroFusion, a fundamentally different approach to single-step diffusion that achieves high-quality generation through a dynamic adversarial framework. While one-step methods offer dramatic speed advantages, they typically suffer from quality degradation compared to their multi-step counterparts. Just as a panel of art critics provides comprehensive feedback by specializing in different aspects such as composition, color, and technique, our approach maintains a large pool of specialized discriminator heads that collectively guide the generation process. Each discriminator group develops expertise in specific quality aspects at different noise levels, providing diverse feedback that enables high-fidelity one-step generation. Our framework combines: (i) a dynamic discriminator pool with specialized discriminator groups to improve generation quality, (ii) strategic refresh mechanisms to prevent discriminator overfitting, (iii) global-local discriminator heads for multi-scale quality assessment, and (iv) unconditional/conditional training for balanced generation. Additionally, our framework uniquely supports flexible deployment through bottom-up refinement, allowing users to dynamically choose from 1-4 denoising steps with the same model for direct quality-speed trade-offs. Through comprehensive experiments, we demonstrate that NitroFusion significantly outperforms existing single-step methods across multiple evaluation metrics, particularly excelling in preserving fine details and global consistency.
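As a rough illustration of the dynamic discriminator pool and strategic refresh described above, here is a minimal PyTorch-style sketch. The head architecture, pool size, number of active heads, and refresh rule are our own assumptions for clarity, not the authors' implementation.

```python
# Minimal sketch of a dynamic discriminator head pool with periodic refresh.
# All names, sizes, and the refresh rule are illustrative assumptions,
# not the paper's actual implementation.
import random
import torch
import torch.nn as nn


def make_head(in_channels: int = 64) -> nn.Module:
    """A tiny convolutional discriminator head (hypothetical architecture)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 3, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 3, padding=1),  # per-patch real/fake logits
    )


class DynamicDiscriminatorPool(nn.Module):
    """Keeps a large pool of heads; each step samples an active subset and
    periodically re-initializes a fraction of the pool to curb overfitting."""

    def __init__(self, pool_size: int = 16, active: int = 4, refresh_frac: float = 0.25):
        super().__init__()
        self.heads = nn.ModuleList(make_head() for _ in range(pool_size))
        self.active = active
        self.refresh_frac = refresh_frac

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Aggregate logits from a randomly sampled subset of heads.
        chosen = random.sample(list(self.heads), self.active)
        logits = torch.stack([h(features) for h in chosen], dim=0)
        return logits.mean(dim=0)

    def refresh(self) -> None:
        # Re-initialize a random fraction of heads (the "strategic refresh").
        # A real trainer would also move new heads to the right device and
        # reset their optimizer state.
        k = max(1, int(len(self.heads) * self.refresh_frac))
        for idx in random.sample(range(len(self.heads)), k):
            self.heads[idx] = make_head()


if __name__ == "__main__":
    pool = DynamicDiscriminatorPool()
    fake_features = torch.randn(2, 64, 32, 32)   # stand-in for generator features
    print(pool(fake_features).shape)             # -> torch.Size([2, 1, 16, 16])
    pool.refresh()                               # call periodically during training
```

In this sketch, sampling a different subset of heads per step plays the role of the specialized discriminator groups, while `refresh()` stands in for the overfitting-prevention mechanism.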
Community
Our one-step diffusion pipeline generates vibrant and photorealistic images with exceptional detail in a single inference step, broadening the potential for text-to-image synthesis in applications like real-time interactive systems.
- Project page: https://chendaryen.github.io/NitroFusion.github.io/
- arXiv paper: https://arxiv.org/abs/2412.03552
- Model: https://huggingface.co/ChenDY/NitroFusion
- Online Demo: https://huggingface.co/spaces/ChenDY/NitroFusion_1step_T2I
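For a sense of how single-step and few-step generation look in code, here is a hedged diffusers sketch. The base checkpoint below is a placeholder; to run NitroFusion itself, load the distilled UNet from ChenDY/NitroFusion exactly as described on the model card (the checkpoint filename and scheduler setup are not reproduced here). The prompt and call arguments are illustrative assumptions.

```python
# Sketch of single-step vs. few-step text-to-image inference with diffusers.
# Placeholder base model: swap in the NitroFusion UNet per the model card to
# get the distilled one-step quality described in the paper.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photorealistic portrait of an astronaut in a sunlit greenhouse"

# Single-step generation: fastest setting; guidance is typically disabled
# for distilled models.
image_1step = pipe(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]

# The same model can trade speed for quality by taking a few more steps (1-4),
# mirroring the bottom-up refinement deployment described in the abstract.
image_4step = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]

image_1step.save("nitro_1step.png")
image_4step.save("nitro_4step.png")
```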
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- HiFiVFS: High Fidelity Video Face Swapping (2024)
- Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step (2024)
- HyperGAN-CLIP: A Unified Framework for Domain Adaptation, Image Synthesis and Manipulation (2024)
- Multi-student Diffusion Distillation for Better One-step Generators (2024)
- NODE-AdvGAN: Improving the transferability and perceptual similarity of adversarial examples by dynamic-system-driven adversarial generative model (2024)
- SeriesGAN: Time Series Generation via Adversarial and Autoregressive Learning (2024)
- Adversarial Diffusion Compression for Real-World Image Super-Resolution (2024)