VictorFS82 committed on
Commit
969535f
•
1 Parent(s): e2b5e38

Delete README.md

Files changed (1)
  1. README.md +0 -71
README.md DELETED
# OOTDiffusion

This repository is the official implementation of OOTDiffusion.

🤗 [Try out OOTDiffusion](https://huggingface.co/spaces/levihsu/OOTDiffusion) (Thanks to [ZeroGPU](https://huggingface.co/zero-gpu-explorers) for providing A100 GPUs)

Or [try our own demo](https://ootd.ibot.cn/) on RTX 4090 GPUs

> **OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on** [[arXiv paper](https://arxiv.org/abs/2403.01779)]<br>
> [Yuhao Xu](http://levihsu.github.io/), [Tao Gu](https://github.com/T-Gu), [Weifeng Chen](https://github.com/ShineChen1024), [Chengcai Chen](https://www.researchgate.net/profile/Chengcai-Chen)<br>
> Xiao-i Research

Our model checkpoints trained on [VITON-HD](https://github.com/shadow2496/VITON-HD) (half-body) and [Dress Code](https://github.com/aimagelab/dress-code) (full-body) have been released.

* 🤗 [Hugging Face link](https://huggingface.co/levihsu/OOTDiffusion)
* 📢📢 We now support ONNX for [human parsing](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing). Most environment-related issues should have been resolved.
* Please download [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) into the ***checkpoints*** folder.
* We have only tested our code and models on Linux (Ubuntu 22.04).
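
One way to fetch the CLIP weights into place (our suggestion, not from the repo; requires `git` with `git-lfs` installed) is `git clone https://huggingface.co/openai/clip-vit-large-patch14 checkpoints/clip-vit-large-patch14`, run from the repository root, which yields a layout like:

```
checkpoints/
└── clip-vit-large-patch14/
```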

![demo](images/demo.png)
![workflow](images/workflow.png)

## Installation

1. Clone the repository

```sh
git clone https://github.com/levihsu/OOTDiffusion
```

2. Create a conda environment and install the required packages

```sh
conda create -n ootd python==3.10
conda activate ootd
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
pip install -r requirements.txt
```

## Inference

1. Half-body model

```sh
cd OOTDiffusion/run
python run_ootd.py --model_path <model-image-path> --cloth_path <cloth-image-path> --scale 2.0 --sample 4
```
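
When scripting many try-on runs, the half-body command above can be assembled programmatically. A minimal Python sketch (the helper name is ours, not part of the repo; it only mirrors the flags documented above):

```python
def ootd_half_body_cmd(model_path, cloth_path, scale=2.0, samples=4):
    """Build the run_ootd.py argument list for the half-body model,
    mirroring the flags shown above."""
    return [
        "python", "run_ootd.py",
        "--model_path", model_path,
        "--cloth_path", cloth_path,
        "--scale", str(scale),
        "--sample", str(samples),
    ]

# e.g. subprocess.run(ootd_half_body_cmd("model.png", "cloth.png"), cwd="OOTDiffusion/run")
```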

2. Full-body model

> The garment category must be paired with the input cloth image: 0 = upperbody; 1 = lowerbody; 2 = dress

```sh
cd OOTDiffusion/run
python run_ootd.py --model_path <model-image-path> --cloth_path <cloth-image-path> --model_type dc --category 2 --scale 2.0 --sample 4
```
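
The `--category` ids are easy to mix up in scripts; a tiny lookup helper (our own illustration, not part of the codebase) keeps calls self-documenting:

```python
# Map garment names to the --category ids listed above.
CATEGORY_IDS = {"upperbody": 0, "lowerbody": 1, "dress": 2}

def category_id(name: str) -> int:
    """Return the --category id for a garment name; raises KeyError if unknown."""
    return CATEGORY_IDS[name]

# e.g. f"--category {category_id('dress')}" gives "--category 2"
```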

## Citation
```
@article{xu2024ootdiffusion,
  title={OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on},
  author={Xu, Yuhao and Gu, Tao and Chen, Weifeng and Chen, Chengcai},
  journal={arXiv preprint arXiv:2403.01779},
  year={2024}
}
```

## TODO List
- [x] Paper
- [x] Gradio demo
- [x] Inference code
- [x] Model weights
- [ ] Training code