---
license: apache-2.0
language:
- en
pipeline_tag: image-to-image
---
# Diffree
<p align="center">
<a href="https://opengvlab.github.io/Diffree/"><u>[🌐Project Page]</u></a>
<a href="https://drive.google.com/file/d/1AdIPA5TK5LB1tnqqZuZ9GsJ6Zzqo2ua6/view"><u>[🎥 Video]</u></a>
<a href="https://github.com/OpenGVLab/Diffree"><u>[🔍 Code]</u></a>
<a href="https://arxiv.org/pdf/2407.16982"><u>[📜 Arxiv]</u></a>
<a href="https://huggingface.co/spaces/LiruiZhao/Diffree"><u>[🤗 Hugging Face Demo]</u></a>
</p>
[Diffree](https://arxiv.org/pdf/2407.16982) is a diffusion model that adds new objects to images guided only by text descriptions, integrating them seamlessly with a consistent background and spatial context.

This repo provides the Diffree checkpoint. You can also explore and try the model via the [🤗 Hugging Face demo](https://huggingface.co/spaces/LiruiZhao/Diffree).
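As a minimal sketch (not part of the official instructions), the checkpoint files can be fetched with the `huggingface_hub` client. The repo id below comes from this card; the local directory name is an arbitrary choice:

```python
from huggingface_hub import snapshot_download


def fetch_diffree_checkpoint(local_dir: str = "diffree-checkpoint") -> str:
    """Download every file in the Diffree model repo and return the local path.

    `local_dir` is an arbitrary destination; the repo id is taken from this card.
    """
    return snapshot_download(repo_id="LiruiZhao/Diffree", local_dir=local_dir)
```

See the [code repository](https://github.com/OpenGVLab/Diffree) for how to load the downloaded weights into the model.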
## Citation
If you find this work useful, please consider citing:
```bibtex
@article{zhao2024diffree,
title={Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model},
author={Zhao, Lirui and Yang, Tianshuo and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Ji, Rongrong},
journal={arXiv preprint arXiv:2407.16982},
year={2024}
}
```