DragNUWA
DragNUWA enables users to manipulate backgrounds or objects within images directly, and the model seamlessly translates these actions into camera movements or object motions, generating the corresponding video.
See our paper: DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
DragNUWA 1.5 (Updated on Jan 8, 2024)
DragNUWA 1.5 enables Stable Video Diffusion to animate an image according to a user-specified path.
DragNUWA 1.0 (Original Paper)
DragNUWA 1.0 utilizes text, images, and trajectory as three essential control factors to facilitate highly controllable video generation from semantic, spatial, and temporal aspects.
Getting Started
Setting Up the Environment
git clone -b svd https://github.com/ProjectNUWA/DragNUWA.git
cd DragNUWA
conda create -n DragNUWA python=3.8
conda activate DragNUWA
pip install -r environment.txt
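Before moving on, it can help to confirm that the key dependencies from environment.txt actually resolved in the new conda environment. The helper below is a hypothetical convenience, not part of the DragNUWA repository, and the package names listed are assumptions:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported
    in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Assumed core packages installed by environment.txt; adjust the
# list to match the actual requirements file in the repo.
print(missing_packages(["torch", "gradio"]))
```

An empty list means the listed packages are importable; anything printed needs a re-install before the demo will run.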
Download Pretrained Weights
Download the pretrained weights to the models/ directory, or simply run bash models/Download.sh.
Drag and Animate!
python DragNUWA_demo.py
This launches a Gradio demo in your browser, where you can drag a path on an image to animate it.
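Conceptually, the demo turns a user-drawn drag path into per-frame motion conditioning for the video model. As a rough illustration only (not DragNUWA's actual code), a path of (x, y) points could be linearly resampled to one position per generated frame:

```python
def resample_trajectory(points, num_frames):
    """Linearly resample a user-drawn drag path, given as a list of
    (x, y) tuples, to exactly one point per output video frame.
    Illustrative sketch; DragNUWA's real trajectory encoding differs."""
    if len(points) == 1 or num_frames == 1:
        # Degenerate path or single frame: repeat the last point.
        return [points[-1]] * num_frames
    out = []
    n = len(points) - 1  # number of segments in the drawn path
    for i in range(num_frames):
        t = i * n / (num_frames - 1)   # position along the path in [0, n]
        j = min(int(t), n - 1)         # segment index
        frac = t - j                   # fraction within that segment
        x0, y0 = points[j]
        x1, y1 = points[j + 1]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out
```

For example, resample_trajectory([(0, 0), (10, 10)], 3) yields [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)], i.e. the drag is spread evenly across three frames.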
Acknowledgement
We appreciate the open-source contributions of the following projects: Stable Video Diffusion, Hugging Face, and UniMatch.
Citation
@article{yin2023dragnuwa,
title={Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory},
author={Yin, Shengming and Wu, Chenfei and Liang, Jian and Shi, Jie and Li, Houqiang and Ming, Gong and Duan, Nan},
journal={arXiv preprint arXiv:2308.08089},
year={2023}
}