Spaces: Runtime error

Justin John committed • Commit 858167a
Parent(s): c1ffcb6

added windows installation doc

Changed files:

- README.md (+20 −33)
- docs/{installation.md → installation-ubuntu.md} (+0 −0)
- docs/installation-windows.md (+108 −0)
- docs/tricks.md (+29 −0)
- environment-windows.yaml (+16 −0)
- requirements-win.txt (+19 −0)
README.md
CHANGED
````diff
@@ -45,7 +45,9 @@ ECON is designed for "Human digitization from a color image", which combines the
 
 ## News :triangular_flag_on_post:
 
-- [
+- [2023/01/06] [Justin John](https://github.com/justinjohn0306) and [Carlos Barreto](https://github.com/carlosedubarreto) created [install-on-windows](docs/installation-windows.md) for ECON.
+- [2022/12/22] <a href='https://colab.research.google.com/drive/1YRgwoRCZIrSB2e7auEWFyG10Xzjbrbno?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> is now available, created by [Aron Arzoomand](https://github.com/AroArz).
 - [2022/12/15] Both <a href="#demo">demo</a> and <a href="https://arxiv.org/abs/2212.07422">arXiv</a> are available.
 
 ## TODO
@@ -68,9 +70,6 @@ ECON is designed for "Human digitization from a color image", which combines the
       <li>
         <a href="#applications">Applications</a>
       </li>
-      <li>
-        <a href="#tricks">Tricks</a>
-      </li>
       <li>
         <a href="#citation">Citation</a>
       </li>
@@ -81,49 +80,26 @@ ECON is designed for "Human digitization from a color image", which combines the
 
 ## Instructions
 
-- See [
+- See [installation doc for Windows](docs/installation-windows.md) to install all the required packages and set up the models on _Windows_
+- See [installation doc for Ubuntu](docs/installation-ubuntu.md) to install all the required packages and set up the models on _Ubuntu_
+- See [magic tricks](docs/tricks.md) for a few technical tricks to further improve and accelerate ECON
 
 ## Demo
 
 ```bash
-# For single-person image-based reconstruction (w/
+# For single-person image-based reconstruction (w/ all visualization steps, 1.8min)
 python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results
 
-# For single-person image-based reconstruction (w/o any visualization steps, 1.5min)
-python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -novis
-
 # For multi-person image-based reconstruction (see config/econ.yaml)
 python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi
 
 # To generate the demo video of reconstruction results
-python -m apps.multi_render -n
+python -m apps.multi_render -n <filename>
 
 # To animate the reconstruction with SMPL-X pose parameters
-python -m apps.avatarizer -n
+python -m apps.avatarizer -n <filename>
 ```
 
-## Tricks
-
-### Some adjustable parameters in _config/econ.yaml_
-
-- `use_ifnet: False`
-  - True: use IF-Nets+ for mesh completion ( $\text{ECON}_\text{IF}$ - Better quality, **~2min / img**)
-  - False: use SMPL-X for mesh completion ( $\text{ECON}_\text{EX}$ - Faster speed, **~1.8min / img**)
-- `use_smpl: ["hand", "face"]`
-  - [ ]: don't use either hands or face parts from SMPL-X
-  - ["hand"]: only use the **visible** hands from SMPL-X
-  - ["hand", "face"]: use both **visible** hands and face from SMPL-X
-- `thickness: 2cm`
-  - could be increased accordingly in case final reconstruction **xx_full.obj** looks flat
-- `k: 4`
-  - could be reduced accordingly in case the surface of **xx_full.obj** has discontinous artifacts
-- `hps_type: PIXIE`
-  - "pixie": more accurate for face and hands
-  - "pymafx": more robust for challenging poses
-- `texture_src: image`
-  - "image": direct mapping the aligned pixels to final mesh
-  - "SD": use Stable Diffusion to generate full texture (TODO)
-
 <br/>
 
 ## More Qualitative Results
@@ -176,6 +152,17 @@ Some images used in the qualitative examples come from [pinterest.com](https://w
 
 This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).
 
+## Contributors
+
+Kudos to all of our amazing contributors! ECON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
+
+<a href="https://github.com/yuliangxiu/ECON/graphs/contributors">
+  <img src="https://contrib.rocks/image?repo=yuliangxiu/ECON" />
+</a>
+
+_Contributor avatars are randomly shuffled._
+
 ---
 
 <br>
````
docs/{installation.md → installation-ubuntu.md}
RENAMED
File without changes
docs/installation-windows.md
ADDED
@@ -0,0 +1,108 @@
# Windows installation tutorial

See [issue#16](https://github.com/YuliangXiu/ECON/issues/16) for a walkthrough of the whole process of deploying ECON on *Windows*.

## Dependencies and Installation

- Use [Anaconda](https://www.anaconda.com/products/distribution)
- NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
- [Wget for Windows](https://eternallybored.org/misc/wget/1.21.3/64/wget.exe)
  - Create a new folder named "wget" on your C drive and move the downloaded "wget.exe" into it.
  - Add the path to your wget folder to your system environment variables at `Environment Variables > System Variables Path > Edit environment variable`

  ![image](https://user-images.githubusercontent.com/34035011/210986038-39dbb7a1-12ef-4be9-9af4-5f658c6beb65.png)

- Install [Git for Windows 64-bit](https://git-scm.com/download/win)
- [Visual Studio Community 2022](https://visualstudio.microsoft.com/) (make sure to check all the boxes shown in the image below)

  ![image](https://user-images.githubusercontent.com/34035011/210983023-4e5a0024-68f0-4adb-8089-6ff598aec220.PNG)
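The PATH lookup that the steps above enable can be illustrated with Python's standard library: `shutil.which` scans the directories on PATH exactly as the shell does. A self-contained sketch (it simulates the newly added directory with a temp folder and a dummy executable, since the real check depends on your machine):

```python
import os
import shutil
import stat
import tempfile

# shutil.which performs the same PATH directory scan the shell uses, so it
# mirrors what happens after adding a folder such as C:\wget to PATH.
with tempfile.TemporaryDirectory() as d:
    exe = os.path.join(d, "wget")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR)
    # Prepend the new directory to the search path, as the PATH edit does:
    search = d + os.pathsep + os.environ.get("PATH", "")
    found = shutil.which("wget", path=search)
    print(found == exe)  # True on a POSIX system: the new directory is searched first
```

On Windows the same mechanism applies, except a new terminal must be opened before the edited PATH takes effect.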
## Getting started

Start by cloning the repo:

```bash
git clone https://github.com/yuliangxiu/ECON.git
cd ECON
```

## Environment

- Windows 10 / 11
- **CUDA=11.4**
- Python = 3.8
- PyTorch >= 1.12.1 (official [Get Started](https://pytorch.org/get-started/locally/))
- CuPy >= 11.3.0 (official [Installation](https://docs.cupy.dev/en/stable/install.html#installing-cupy-from-pypi))
- PyTorch3D (official [INSTALL.md](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md), recommend [install-from-local-clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone))

```bash
# install required packages
cd ECON
conda env create -f environment-windows.yaml
conda activate econ

# install pytorch and cupy
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements-win.txt
pip install cupy-cuda11x

# If you have an RTX 30 series GPU, install neural_voxelization_layer directly:
pip install git+https://github.com/YuliangXiu/neural_voxelization_layer.git

# If your GPU predates the RTX 30 series, build neural_voxelization_layer from source:
git clone https://github.com/justinjohn0306/neural_voxelization_layer.git
cd neural_voxelization_layer
python setup.py install
cd ..

# install libmesh & libvoxelize
cd lib/common/libmesh
python setup.py build_ext --inplace
cd ../libvoxelize
python setup.py build_ext --inplace
```
## Register at [ICON's website](https://icon.is.tue.mpg.de/)

![Register](../assets/register.png)

Required:

- [SMPL](http://smpl.is.tue.mpg.de/): SMPL Model (Male, Female)
- [SMPL-X](http://smpl-x.is.tue.mpg.de/): SMPL-X Model, used for training
- [SMPLIFY](http://smplify.is.tue.mpg.de/): SMPL Model (Neutral)
- [PIXIE](https://icon.is.tue.mpg.de/user.php): PIXIE SMPL-X estimator

:warning: Click **Register now** on all dependencies, then you can download them all with **ONE** account.

## Downloading required models and extra data (make sure Git and Wget for Windows are installed for this to work)

```bash
cd ECON
bash fetch_data.sh # requires username and password
```

## Citation

:+1: Please consider citing these awesome HPS approaches: PyMAF-X, PIXIE

```
@article{pymafx2022,
  title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
  author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
  journal={arXiv preprint arXiv:2207.06400},
  year={2022}
}

@inproceedings{PIXIE:2021,
  title={Collaborative Regression of Expressive Bodies using Moderation},
  author={Yao Feng and Vasileios Choutas and Timo Bolkart and Dimitrios Tzionas and Michael J. Black},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2021}
}
```
docs/tricks.md
ADDED
@@ -0,0 +1,29 @@
## Technical tricks to improve or accelerate ECON

### If the reconstructed geometry is not satisfying, play with the adjustable parameters in _config/econ.yaml_

- `use_smpl: ["hand", "face"]`
  - `[ ]`: don't use either the hands or the face parts from SMPL-X
  - `["hand"]`: only use the **visible** hands from SMPL-X
  - `["hand", "face"]`: use both the **visible** hands and face from SMPL-X
- `thickness: 2cm`
  - could be increased accordingly in case the final reconstruction **xx_full.obj** looks flat
- `k: 4`
  - could be reduced accordingly in case the surface of **xx_full.obj** has discontinuous artifacts
- `hps_type: PIXIE`
  - `"pixie"`: more accurate for face and hands
  - `"pymafx"`: more robust for challenging poses
- `texture_src: image`
  - `"image"`: directly map the aligned pixels to the final mesh
  - `"SD"`: use Stable Diffusion to generate the full texture (TODO)
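Collected in one place, the keys above might look like this inside _config/econ.yaml_ (a sketch only: the real file contains many more entries, and the exact nesting is assumed here):

```yaml
# Sketch of the adjustable keys discussed above; values are the defaults
# listed in this doc, and all surrounding keys are omitted.
use_smpl: ["hand", "face"]   # [], ["hand"], or ["hand", "face"]
thickness: 2cm               # increase if xx_full.obj looks flat
k: 4                         # reduce if xx_full.obj shows discontinuous artifacts
hps_type: PIXIE              # "pixie" (face/hands) or "pymafx" (hard poses)
texture_src: image           # "image" or "SD" (TODO)
```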

### To accelerate the inference, you could

- `use_ifnet: False`
  - `True`: use IF-Nets+ for mesh completion ( $\text{ECON}_\text{IF}$ - better quality, **~2min / img**)
  - `False`: use SMPL-X for mesh completion ( $\text{ECON}_\text{EX}$ - faster speed, **~1.8min / img**)

```bash
# For single-person image-based reconstruction (w/o any visualization steps, 1.5min)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -novis
```
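To make the `use_smpl` semantics above concrete, here is a toy sketch in Python (the function name and structure are hypothetical, not ECON's actual code): it validates the list and returns the set of SMPL-X parts that would overwrite the reconstructed surface.

```python
# Toy illustration of the `use_smpl` list semantics from config/econ.yaml.
# Hypothetical helper, not ECON's real implementation.
def smpl_parts_to_use(use_smpl):
    """Return the set of SMPL-X parts that replace the reconstructed surface."""
    allowed = {"hand", "face"}
    unknown = set(use_smpl) - allowed
    if unknown:
        raise ValueError(f"unsupported SMPL-X parts: {sorted(unknown)}")
    # []               -> keep the clothed reconstruction everywhere
    # ["hand"]         -> swap in only the visible SMPL-X hands
    # ["hand", "face"] -> swap in the visible hands and face
    return set(use_smpl)

print(sorted(smpl_parts_to_use(["hand", "face"])))  # ['face', 'hand']
```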
environment-windows.yaml
ADDED
@@ -0,0 +1,16 @@
```yaml
name: econ
channels:
  - nvidia
  - conda-forge
  - fvcore
  - iopath
  - bottler
  - defaults
dependencies:
  - python=3.8
  - fvcore
  - iopath
  - cupy
  - cython
  - pip
```
requirements-win.txt
ADDED
@@ -0,0 +1,19 @@
```text
matplotlib
scikit-image
trimesh
rtree
pytorch_lightning
kornia>0.4.0
chumpy
opencv-python
opencv_contrib_python
scikit-learn
protobuf
dataclasses
mediapipe
einops
boto3
open3d
tinyobjloader==2.0.0rc7
git+https://github.com/facebookresearch/pytorch3d.git
git+https://github.com
```