Yuliang committed on
Commit
3577d3c
1 Parent(s): c3d3e4a

SMPL-X based Animatable Avatar

.gitignore CHANGED
@@ -16,3 +16,4 @@ build
16
  dist
17
  *egg-info
18
  *.so
 
 
16
  dist
17
  *egg-info
18
  *.so
19
+ run.sh
README.md CHANGED
@@ -23,6 +23,8 @@
23
  <br>
24
  <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
25
  <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
 
 
26
  <br></br>
27
  <a href=''>
28
  <img src='https://img.shields.io/badge/Paper-PDF (coming soon)-green?style=for-the-badge&logo=arXiv&logoColor=green' alt='Paper PDF'>
@@ -36,7 +38,7 @@
36
 
37
  <br/>
38
 
39
- ECON is designed for **"Human digitization from a color image"**, which combines the best properties of implicit and explicit representations, to infer high-fidelity 3D clothed humans from in-the-wild images, even with **loose clothing** or in **challenging poses**. ECON also supports batch reconstruction from **multi-person** photos.
40
  <br/>
41
  <br/>
42
 
@@ -61,6 +63,9 @@ ECON is designed for **"Human digitization from a color image"**, which combines
61
  <li>
62
  <a href="#demo">Demo</a>
63
  </li>
 
 
 
64
  <li>
65
  <a href="#tricks">Tricks</a>
66
  </li>
@@ -87,6 +92,9 @@ python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./resu
87
 
88
  # To generate the demo video of reconstruction results
89
  python -m apps.multi_render -n {filename}
 
 
 
90
  ```
91
 
92
  ## Tricks
@@ -101,24 +109,28 @@ python -m apps.multi_render -n {filename}
101
  - ["hand"]: only use the **visible** hands from SMPL-X
102
  - ["hand", "face"]: use both **visible** hands and face from SMPL-X
103
  - `thickness: 2cm`
104
- - could be increased accordingly in case **xx_full.obj** looks flat
105
- - `hps_type: pixie`
106
  - "pixie": more accurate for face and hands
107
  - "pymafx": more robust for challenging poses
 
 
108
 
109
  <br/>
110
 
111
  ## More Qualitative Results
112
 
113
- | ![OOD Poses](assets/OOD-poses.jpg) |
114
- | :--------------------------------------------------------------------------------: |
115
- | _Challenging Poses_ |
116
- | ![OOD Clothes](assets/OOD-outfits.jpg) |
117
- | _Loose Clothes_ |
118
- | ![SHHQ](assets/SHHQ.gif) |
119
- | _ECON Results on [SHHQ Dataset](https://github.com/stylegan-human/StyleGAN-Human)_ |
120
- | ![crowd](assets/crowd.gif) |
121
- | _ECON Results on Multi-Person Image_ |
 
 
122
 
123
  <br/>
124
  <br/>
@@ -127,7 +139,7 @@ python -m apps.multi_render -n {filename}
127
 
128
  ```bibtex
129
  @misc{xiu2022econ,
130
- title={ECON: Explicit Clothed humans Obtained from Normals},
131
  author={Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
132
  year={2022},
133
  publisher={arXiv},
@@ -146,6 +158,7 @@ Here are some great resources we benefit from:
146
  - [ICON](https://github.com/YuliangXiu/ICON) for Body Fitting
147
  - [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing
148
  - [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
 
149
  - [smplx](https://github.com/vchoutas/smplx), [PyMAF-X](https://www.liuyebin.com/pymaf-x/), [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
150
  - [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
151
  - [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering
@@ -171,4 +184,3 @@ MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, a
171
  For technical questions, please contact [email protected]
172
 
173
  For commercial licensing, please contact [email protected]
174
-
 
23
  <br>
24
  <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
25
  <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
26
+ <a href="https://cupy.dev/"><img alt="cupy" src="https://img.shields.io/badge/-Cupy-46C02B?logo=numpy&logoColor=white"></a>
27
+ <a href="https://twitter.com/yuliangxiu"><img alt='Twitter' src="https://img.shields.io/twitter/follow/yuliangxiu?label=%40yuliangxiu"></a>
28
  <br></br>
29
  <a href=''>
30
  <img src='https://img.shields.io/badge/Paper-PDF (coming soon)-green?style=for-the-badge&logo=arXiv&logoColor=green' alt='Paper PDF'>
 
38
 
39
  <br/>
40
 
41
+ ECON is designed for "Human digitization from a color image". It combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with **loose clothing** or in **challenging poses**. ECON also supports **multi-person reconstruction** and **SMPL-X based animation**.
42
  <br/>
43
  <br/>
44
 
 
63
  <li>
64
  <a href="#demo">Demo</a>
65
  </li>
66
+ <li>
67
+ <a href="#applications">Applications</a>
68
+ </li>
69
  <li>
70
  <a href="#tricks">Tricks</a>
71
  </li>
 
92
 
93
  # To generate the demo video of reconstruction results
94
  python -m apps.multi_render -n {filename}
95
+
96
+ # To animate the reconstruction with SMPL-X pose parameters
97
+ python -m apps.avatarizer -n {filename}
98
  ```
99
 
100
  ## Tricks
 
109
  - ["hand"]: only use the **visible** hands from SMPL-X
110
  - ["hand", "face"]: use both **visible** hands and face from SMPL-X
111
  - `thickness: 2cm`
112
+ - increase it if the final reconstruction **xx_full.obj** looks flat
113
+ - `hps_type: PIXIE`
114
  - "pixie": more accurate for face and hands
115
  - "pymafx": more robust for challenging poses
116
+ - `k: 4`
117
+ - reduce it if the surface of **xx_full.obj** shows discontinuous artifacts (see the config sketch below)
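+
+ A minimal sketch of how these knobs could appear in `./configs/econ.yaml` (hypothetical layout and illustrative values; the shipped config file is the reference for exact key names and defaults):
+
+ ```yaml
+ # illustrative excerpt -- keys follow the list above, defaults may differ
+ thickness: 2cm     # increase if the final *_full.obj looks flat
+ hps_type: "pixie"  # "pixie": better face/hands; "pymafx": more robust to hard poses
+ k: 4               # reduce if *_full.obj shows discontinuous surface artifacts
+ ```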
118
 
119
  <br/>
120
 
121
  ## More Qualitative Results
122
 
123
+ | ![OOD Poses](assets/OOD-poses.jpg) |
124
+ | :------------------------------------: |
125
+ | _Challenging Poses_ |
126
+ | ![OOD Clothes](assets/OOD-outfits.jpg) |
127
+ | _Loose Clothes_ |
128
+
129
+ ## Applications
130
+
131
+ | ![SHHQ](assets/SHHQ.gif) | ![crowd](assets/crowd.gif) |
132
+ | :----------------------------------------------------------------------------------------------------: | :-----------------------------------------: |
133
+ | _ECON can provide pseudo 3D GT for [SHHQ Dataset](https://github.com/stylegan-human/StyleGAN-Human)_ | _ECON supports multi-person reconstruction_ |
134
 
135
  <br/>
136
  <br/>
 
139
 
140
  ```bibtex
141
  @misc{xiu2022econ,
142
+ title={{ECON: Explicit Clothed humans Obtained from Normals}},
143
  author={Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
144
  year={2022},
145
  publisher={arXiv},
 
158
  - [ICON](https://github.com/YuliangXiu/ICON) for Body Fitting
159
  - [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing
160
  - [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
161
+ - [PyTorch-NICP](https://github.com/wuhaozhe/pytorch-nicp) for Non-rigid Registration
162
  - [smplx](https://github.com/vchoutas/smplx), [PyMAF-X](https://www.liuyebin.com/pymaf-x/), [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
163
  - [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
164
  - [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering
 
184
  For technical questions, please contact [email protected]
185
 
186
  For commercial licensing, please contact [email protected]
 
apps/avatarizer.py CHANGED
@@ -1,6 +1,7 @@
1
  import numpy as np
2
  import trimesh
3
  import torch
 
4
  import os.path as osp
5
  import lib.smplx as smplx
6
  from pytorch3d.ops import SubdivideMeshes
@@ -12,10 +13,16 @@ from scipy.spatial import cKDTree
12
  from lib.dataset.mesh_util import SMPLX
13
  from lib.common.local_affine import register
14
 
 
15
  smplx_container = SMPLX()
16
- device = torch.device("cuda:0")
17
 
18
- prefix = "./results/github/econ/obj/304e9c4798a8c3967de7c74c24ef2e38"
19
  smpl_path = f"{prefix}_smpl_00.npy"
20
  econ_path = f"{prefix}_0_full.obj"
21
 
@@ -27,7 +34,6 @@ econ_obj.vertices -= smplx_param["transl"].cpu().numpy()
27
 
28
  for key in smplx_param.keys():
29
  smplx_param[key] = smplx_param[key].cpu().view(1, -1)
30
- # print(key, smplx_param[key].device, smplx_param[key].shape)
31
 
32
  smpl_model = smplx.create(
33
  smplx_container.model_dir,
@@ -40,109 +46,135 @@ smpl_model = smplx.create(
40
  num_expression_coeffs=50,
41
  ext='pkl')
42
 
43
- smpl_out = smpl_model(
44
- body_pose=smplx_param["body_pose"],
45
- global_orient=smplx_param["global_orient"],
46
- betas=smplx_param["betas"],
47
- expression=smplx_param["expression"],
48
- jaw_pose=smplx_param["jaw_pose"],
49
- left_hand_pose=smplx_param["left_hand_pose"],
50
- right_hand_pose=smplx_param["right_hand_pose"],
51
- return_verts=True,
52
- return_full_pose=True,
53
- return_joint_transformation=True,
54
- return_vertex_transformation=True)
55
-
56
- smpl_verts = smpl_out.vertices.detach()[0]
 
 
57
  smpl_tree = cKDTree(smpl_verts.cpu().numpy())
58
  dist, idx = smpl_tree.query(econ_obj.vertices, k=5)
59
 
60
- if not osp.exists(f"{prefix}_econ_cano.obj") or not osp.exists(f"{prefix}_smpl_cano.obj"):
61
 
62
- # canonicalize for ECON
63
  econ_verts = torch.tensor(econ_obj.vertices).float()
64
- inv_mat = torch.inverse(smpl_out.vertex_transformation.detach()[0][idx[:, 0]])
65
  homo_coord = torch.ones_like(econ_verts)[..., :1]
66
- econ_cano_verts = inv_mat @ torch.cat([econ_verts, homo_coord], dim=1).unsqueeze(-1)
67
  econ_cano_verts = econ_cano_verts[:, :3, 0].cpu()
68
  econ_cano = trimesh.Trimesh(econ_cano_verts, econ_obj.faces)
69
 
70
- # canonicalize for SMPL-X
71
- inv_mat = torch.inverse(smpl_out.vertex_transformation.detach()[0])
72
- homo_coord = torch.ones_like(smpl_verts)[..., :1]
73
- smpl_cano_verts = inv_mat @ torch.cat([smpl_verts, homo_coord], dim=1).unsqueeze(-1)
74
- smpl_cano_verts = smpl_cano_verts[:, :3, 0].cpu()
75
- smpl_cano = trimesh.Trimesh(smpl_cano_verts, smpl_model.faces, maintain_orders=True, process=False)
76
- smpl_cano.export(f"{prefix}_smpl_cano.obj")
 
77
 
78
  # remove hands from ECON for the next registration
79
- econ_cano_body = econ_cano.copy()
80
  mano_mask = ~np.isin(idx[:, 0], smplx_container.smplx_mano_vid)
81
- econ_cano_body.update_faces(mano_mask[econ_cano.faces].all(axis=1))
82
- econ_cano_body.remove_unreferenced_vertices()
83
- econ_cano_body = keep_largest(econ_cano_body)
84
 
85
  # remove SMPL-X hand and face
86
  register_mask = ~np.isin(
87
- np.arange(smpl_cano_verts.shape[0]),
88
  np.concatenate([smplx_container.smplx_mano_vid, smplx_container.smplx_front_flame_vid]))
89
  register_mask *= ~smplx_container.eyeball_vertex_mask.bool().numpy()
90
- smpl_cano_body = smpl_cano.copy()
91
- smpl_cano_body.update_faces(register_mask[smpl_cano.faces].all(axis=1))
92
- smpl_cano_body.remove_unreferenced_vertices()
93
- smpl_cano_body = keep_largest(smpl_cano_body)
94
-
95
- # upsample the smpl_cano_body and do registeration
96
- smpl_cano_body = Meshes(
97
- verts=[torch.tensor(smpl_cano_body.vertices).float()],
98
- faces=[torch.tensor(smpl_cano_body.faces).long()],
99
  ).to(device)
100
- sm = SubdivideMeshes(smpl_cano_body)
101
- smpl_cano_body = register(econ_cano_body, sm(smpl_cano_body), device)
102
 
103
  # remove over-stretched and hand faces from ECON
104
- econ_cano_body = econ_cano.copy()
105
  edge_before = np.sqrt(
106
  ((econ_obj.vertices[econ_cano.edges[:, 0]] - econ_obj.vertices[econ_cano.edges[:, 1]])**2).sum(axis=1))
107
- edge_after = np.sqrt(
108
- ((econ_cano.vertices[econ_cano.edges[:, 0]] - econ_cano.vertices[econ_cano.edges[:, 1]])**2).sum(axis=1))
109
  edge_diff = edge_after / edge_before.clip(1e-2)
110
  streched_mask = np.unique(econ_cano.edges[edge_diff > 6])
111
  mano_mask = ~np.isin(idx[:, 0], smplx_container.smplx_mano_vid)
112
  mano_mask[streched_mask] = False
113
- econ_cano_body.update_faces(mano_mask[econ_cano.faces].all(axis=1))
114
- econ_cano_body.remove_unreferenced_vertices()
115
 
116
  # stitch the registered SMPL-X body and floating hands to ECON
117
- econ_cano_tree = cKDTree(econ_cano.vertices)
118
- dist, idx = econ_cano_tree.query(smpl_cano_body.vertices, k=1)
119
- smpl_cano_body.update_faces((dist > 0.02)[smpl_cano_body.faces].all(axis=1))
120
- smpl_cano_body.remove_unreferenced_vertices()
121
 
122
- smpl_hand = smpl_cano.copy()
123
  smpl_hand.update_faces(smplx_container.mano_vertex_mask.numpy()[smpl_hand.faces].all(axis=1))
124
  smpl_hand.remove_unreferenced_vertices()
125
- econ_cano = sum([smpl_hand, smpl_cano_body, econ_cano_body])
126
- econ_cano = poisson(econ_cano, f"{prefix}_econ_cano.obj")
127
  else:
128
- econ_cano = trimesh.load(f"{prefix}_econ_cano.obj")
129
- smpl_cano = trimesh.load(f"{prefix}_smpl_cano.obj", maintain_orders=True, process=False)
130
 
131
- smpl_tree = cKDTree(smpl_cano.vertices)
132
- dist, idx = smpl_tree.query(econ_cano.vertices, k=2)
133
  knn_weights = np.exp(-dist**2)
134
  knn_weights /= knn_weights.sum(axis=1, keepdims=True)
 
135
  econ_J_regressor = (smpl_model.J_regressor[:, idx] * knn_weights[None]).sum(axis=-1)
136
  econ_lbs_weights = (smpl_model.lbs_weights.T[:, idx] * knn_weights[None]).sum(axis=-1).T
 
 
137
  econ_J_regressor /= econ_J_regressor.sum(axis=1, keepdims=True)
138
  econ_lbs_weights /= econ_lbs_weights.sum(axis=1, keepdims=True)
139
 
 
 
140
  posed_econ_verts, _ = general_lbs(
141
- pose=smpl_out.full_pose,
142
- v_template=torch.tensor(econ_cano.vertices).unsqueeze(0),
 
143
  J_regressor=econ_J_regressor,
144
  parents=smpl_model.parents,
145
  lbs_weights=econ_lbs_weights)
146
 
147
- econ_pose = trimesh.Trimesh(posed_econ_verts[0].detach(), econ_cano.faces)
148
  econ_pose.export(f"{prefix}_econ_pose.obj")
 
1
  import numpy as np
2
  import trimesh
3
  import torch
4
+ import argparse
5
  import os.path as osp
6
  import lib.smplx as smplx
7
  from pytorch3d.ops import SubdivideMeshes
 
13
  from lib.dataset.mesh_util import SMPLX
14
  from lib.common.local_affine import register
15
 
16
+ # parse command-line arguments (subject name and GPU id)
17
+ parser = argparse.ArgumentParser()
18
+ parser.add_argument("-n", "--name", type=str, default="")
19
+ parser.add_argument("-g", "--gpu", type=int, default=0)
20
+ args = parser.parse_args()
21
+
22
  smplx_container = SMPLX()
23
+ device = torch.device(f"cuda:{args.gpu}")
24
 
25
+ prefix = f"./results/econ/obj/{args.name}"
26
  smpl_path = f"{prefix}_smpl_00.npy"
27
  econ_path = f"{prefix}_0_full.obj"
28
 
 
34
 
35
  for key in smplx_param.keys():
36
  smplx_param[key] = smplx_param[key].cpu().view(1, -1)
 
37
 
38
  smpl_model = smplx.create(
39
  smplx_container.model_dir,
 
46
  num_expression_coeffs=50,
47
  ext='pkl')
48
 
49
+ smpl_out_lst = []
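+ # run SMPL-X three times; smpl_out_lst holds the [T-pose, DA-pose, image-pose] outputs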
50
+
51
+ for pose_type in ["t-pose", "da-pose", "pose"]:
52
+ smpl_out_lst.append(
53
+ smpl_model(
54
+ body_pose=smplx_param["body_pose"],
55
+ global_orient=smplx_param["global_orient"],
56
+ betas=smplx_param["betas"],
57
+ expression=smplx_param["expression"],
58
+ jaw_pose=smplx_param["jaw_pose"],
59
+ left_hand_pose=smplx_param["left_hand_pose"],
60
+ right_hand_pose=smplx_param["right_hand_pose"],
61
+ return_verts=True,
62
+ return_full_pose=True,
63
+ return_joint_transformation=True,
64
+ return_vertex_transformation=True,
65
+ pose_type=pose_type))
66
+
67
+ smpl_verts = smpl_out_lst[2].vertices.detach()[0]
68
  smpl_tree = cKDTree(smpl_verts.cpu().numpy())
69
  dist, idx = smpl_tree.query(econ_obj.vertices, k=5)
70
 
71
+ if not osp.exists(f"{prefix}_econ_da.obj") or not osp.exists(f"{prefix}_smpl_da.obj"):
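+ # the heavy canonicalization + registration below runs only when the cached *_da.obj meshes are missing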
72
 
73
+ # T-pose for ECON: invert each vertex's posed transformation (taken from its nearest SMPL-X vertex)
74
  econ_verts = torch.tensor(econ_obj.vertices).float()
75
+ rot_mat_t = smpl_out_lst[2].vertex_transformation.detach()[0][idx[:, 0]]
76
  homo_coord = torch.ones_like(econ_verts)[..., :1]
77
+ econ_cano_verts = torch.inverse(rot_mat_t) @ torch.cat([econ_verts, homo_coord], dim=1).unsqueeze(-1)
78
  econ_cano_verts = econ_cano_verts[:, :3, 0].cpu()
79
  econ_cano = trimesh.Trimesh(econ_cano_verts, econ_obj.faces)
80
 
81
+ # da-pose for ECON
82
+ rot_mat_da = smpl_out_lst[1].vertex_transformation.detach()[0][idx[:, 0]]
83
+ econ_da_verts = rot_mat_da @ torch.cat([econ_cano_verts, homo_coord], dim=1).unsqueeze(-1)
84
+ econ_da = trimesh.Trimesh(econ_da_verts[:, :3, 0].cpu(), econ_obj.faces)
85
+
86
+ # da-pose for SMPL-X
87
+ smpl_da = trimesh.Trimesh(smpl_out_lst[1].vertices.detach()[0], smpl_model.faces, maintain_orders=True, process=False)
88
+ smpl_da.export(f"{prefix}_smpl_da.obj")
89
 
90
  # remove hands from ECON for the next registration
91
+ econ_da_body = econ_da.copy()
92
  mano_mask = ~np.isin(idx[:, 0], smplx_container.smplx_mano_vid)
93
+ econ_da_body.update_faces(mano_mask[econ_da.faces].all(axis=1))
94
+ econ_da_body.remove_unreferenced_vertices()
95
+ econ_da_body = keep_largest(econ_da_body)
96
 
97
  # remove SMPL-X hand and face
98
  register_mask = ~np.isin(
99
+ np.arange(smpl_da.vertices.shape[0]),
100
  np.concatenate([smplx_container.smplx_mano_vid, smplx_container.smplx_front_flame_vid]))
101
  register_mask *= ~smplx_container.eyeball_vertex_mask.bool().numpy()
102
+ smpl_da_body = smpl_da.copy()
103
+ smpl_da_body.update_faces(register_mask[smpl_da.faces].all(axis=1))
104
+ smpl_da_body.remove_unreferenced_vertices()
105
+ smpl_da_body = keep_largest(smpl_da_body)
106
+
107
+ # upsample the smpl_da_body and register it to the ECON body
108
+ smpl_da_body = Meshes(
109
+ verts=[torch.tensor(smpl_da_body.vertices).float()],
110
+ faces=[torch.tensor(smpl_da_body.faces).long()],
111
  ).to(device)
112
+ sm = SubdivideMeshes(smpl_da_body)
113
+ smpl_da_body = register(econ_da_body, sm(smpl_da_body), device)
114
 
115
  # remove over-stretched and hand faces from ECON
116
+ econ_da_body = econ_da.copy()
117
  edge_before = np.sqrt(
118
  ((econ_obj.vertices[econ_cano.edges[:, 0]] - econ_obj.vertices[econ_cano.edges[:, 1]])**2).sum(axis=1))
119
+ edge_after = np.sqrt(((econ_da.vertices[econ_cano.edges[:, 0]] - econ_da.vertices[econ_cano.edges[:, 1]])**2).sum(axis=1))
 
120
  edge_diff = edge_after / edge_before.clip(1e-2)
121
  streched_mask = np.unique(econ_cano.edges[edge_diff > 6])
122
  mano_mask = ~np.isin(idx[:, 0], smplx_container.smplx_mano_vid)
123
  mano_mask[streched_mask] = False
124
+ econ_da_body.update_faces(mano_mask[econ_cano.faces].all(axis=1))
125
+ econ_da_body.remove_unreferenced_vertices()
126
 
127
  # stitch the registered SMPL-X body and floating hands to ECON
128
+ econ_da_tree = cKDTree(econ_da.vertices)
129
+ dist, idx = econ_da_tree.query(smpl_da_body.vertices, k=1)
130
+ smpl_da_body.update_faces((dist > 0.02)[smpl_da_body.faces].all(axis=1))
131
+ smpl_da_body.remove_unreferenced_vertices()
132
 
133
+ smpl_hand = smpl_da.copy()
134
  smpl_hand.update_faces(smplx_container.mano_vertex_mask.numpy()[smpl_hand.faces].all(axis=1))
135
  smpl_hand.remove_unreferenced_vertices()
136
+ econ_da = sum([smpl_hand, smpl_da_body, econ_da_body])
137
+ econ_da = poisson(econ_da, f"{prefix}_econ_da.obj")
138
  else:
139
+ econ_da = trimesh.load(f"{prefix}_econ_da.obj")
140
+ smpl_da = trimesh.load(f"{prefix}_smpl_da.obj", maintain_orders=True, process=False)
141
 
142
+ smpl_tree = cKDTree(smpl_da.vertices)
143
+ dist, idx = smpl_tree.query(econ_da.vertices, k=5)
144
  knn_weights = np.exp(-dist**2)
145
  knn_weights /= knn_weights.sum(axis=1, keepdims=True)
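+ # transfer the SMPL-X joint regressor and LBS skinning weights to ECON vertices via Gaussian-weighted 5-NN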
146
+
147
  econ_J_regressor = (smpl_model.J_regressor[:, idx] * knn_weights[None]).sum(axis=-1)
148
  econ_lbs_weights = (smpl_model.lbs_weights.T[:, idx] * knn_weights[None]).sum(axis=-1).T
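+ # resample the pose blend shapes (posedirs) onto ECON vertices with the same weights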
149
+
150
+ num_posedirs = smpl_model.posedirs.shape[0]
151
+ econ_posedirs = (smpl_model.posedirs.view(num_posedirs, -1, 3)[:, idx, :] *
152
+ knn_weights[None, ..., None]).sum(axis=-2).view(num_posedirs, -1).float()
153
+
154
  econ_J_regressor /= econ_J_regressor.sum(axis=1, keepdims=True)
155
  econ_lbs_weights /= econ_lbs_weights.sum(axis=1, keepdims=True)
156
 
157
+ # re-compute da-pose rot_mat for ECON
158
+ rot_mat_da = smpl_out_lst[1].vertex_transformation.detach()[0][idx[:, 0]]
159
+ econ_da_verts = torch.tensor(econ_da.vertices).float()
160
+ econ_cano_verts = torch.inverse(rot_mat_da) @ torch.cat([econ_da_verts, torch.ones_like(econ_da_verts)[..., :1]],
161
+ dim=1).unsqueeze(-1)
162
+ econ_cano_verts = econ_cano_verts[:, :3, 0].double()
163
+
164
+ # ----------------------------------------------------
165
+ # use any SMPL-X pose to animate ECON reconstruction
166
+ # ----------------------------------------------------
167
+
168
+ new_pose = smpl_out_lst[2].full_pose
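+ # zero the global orientation (first 3 axis-angle values) so the avatar is reposed in SMPL-X's canonical frame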
169
+ new_pose[:, :3] = 0.
170
+
171
  posed_econ_verts, _ = general_lbs(
172
+ pose=new_pose,
173
+ v_template=econ_cano_verts.unsqueeze(0),
174
+ posedirs=econ_posedirs,
175
  J_regressor=econ_J_regressor,
176
  parents=smpl_model.parents,
177
  lbs_weights=econ_lbs_weights)
178
 
179
+ econ_pose = trimesh.Trimesh(posed_econ_verts[0].detach(), econ_da.faces)
180
  econ_pose.export(f"{prefix}_econ_pose.obj")
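+ # outputs: {prefix}_smpl_da.obj and {prefix}_econ_da.obj (DA-pose meshes), plus {prefix}_econ_pose.obj (ECON reposed with the SMPL-X pose)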
apps/infer.py CHANGED
@@ -100,7 +100,7 @@ if __name__ == "__main__":
100
  print(colored("Use SMPL-X (Explicit) for completion", "green"))
101
 
102
  dataset = TestDataset(dataset_param, device)
103
-
104
  print(colored(f"Dataset Size: {len(dataset)}", "green"))
105
 
106
  pbar = tqdm(dataset)
 
100
  print(colored("Use SMPL-X (Explicit) for completion", "green"))
101
 
102
  dataset = TestDataset(dataset_param, device)
103
+
104
  print(colored(f"Dataset Size: {len(dataset)}", "green"))
105
 
106
  pbar = tqdm(dataset)
docs/installation.md CHANGED
@@ -27,7 +27,7 @@ conda activate econ
27
  pip install -r requirements.txt
28
 
29
  # install libmesh & libvoxelize
30
- cd lib/commmon/libmesh
31
  python setup.py build_ext --inplace
32
  cd ../libvoxelize
33
  python setup.py build_ext --inplace
 
27
  pip install -r requirements.txt
28
 
29
  # install libmesh & libvoxelize
30
+ cd lib/common/libmesh
31
  python setup.py build_ext --inplace
32
  cd ../libvoxelize
33
  python setup.py build_ext --inplace
lib/smplx/body_models.py CHANGED
@@ -1151,6 +1151,7 @@ class SMPLX(SMPLH):
1151
  pose2rot: bool = True,
1152
  return_joint_transformation: bool = False,
1153
  return_vertex_transformation: bool = False,
 
1154
  **kwargs,
1155
  ) -> SMPLXOutput:
1156
  """
@@ -1240,9 +1241,30 @@ class SMPLX(SMPLH):
1240
  dim=1,
1241
  )
1242
 
 
 
1243
  # Add the mean pose of the model. Does not affect the body, only the
1244
  # hands when flat_hand_mean == False
1245
- full_pose += self.pose_mean
1246
 
1247
  batch_size = max(betas.shape[0], global_orient.shape[0], body_pose.shape[0])
1248
  # Concatenate the shape and expression coefficients
 
1151
  pose2rot: bool = True,
1152
  return_joint_transformation: bool = False,
1153
  return_vertex_transformation: bool = False,
1154
+ pose_type: str = 'posed',
1155
  **kwargs,
1156
  ) -> SMPLXOutput:
1157
  """
 
1241
  dim=1,
1242
  )
1243
 
1244
+ if pose_type == "t-pose":
1245
+ full_pose *= 0.0
1246
+ elif pose_type == "da-pose":
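+ # "da-pose": zero all body joints, then rotate the two hip joints by +/-30 deg around z to spread the legs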
1247
+ body_pose = torch.zeros_like(body_pose).view(body_pose.shape[0], -1, 3)
1248
+ body_pose[:, 0] = torch.tensor([0., 0., 30 * np.pi / 180.])
1249
+ body_pose[:, 1] = torch.tensor([0., 0., -30 * np.pi / 180.])
1250
+ body_pose = body_pose.view(body_pose.shape[0], -1)
1251
+
1252
+ full_pose = torch.cat(
1253
+ [
1254
+ global_orient * 0.,
1255
+ body_pose,
1256
+ jaw_pose * 0.,
1257
+ leye_pose * 0.,
1258
+ reye_pose * 0.,
1259
+ left_hand_pose * 0.,
1260
+ right_hand_pose * 0.,
1261
+ ],
1262
+ dim=1,
1263
+ )
1264
+
1265
  # Add the mean pose of the model. Does not affect the body, only the
1266
  # hands when flat_hand_mean == False
1267
+ # full_pose += self.pose_mean
1268
 
1269
  batch_size = max(betas.shape[0], global_orient.shape[0], body_pose.shape[0])
1270
  # Concatenate the shape and expression coefficients
lib/smplx/lbs.py CHANGED
@@ -233,6 +233,7 @@ def lbs(
233
  def general_lbs(
234
  pose: Tensor,
235
  v_template: Tensor,
 
236
  J_regressor: Tensor,
237
  parents: Tensor,
238
  lbs_weights: Tensor,
@@ -246,6 +247,8 @@ def general_lbs(
246
  The pose parameters in axis-angle format
247
  v_template torch.tensor BxVx3
248
  The template mesh that will be deformed
 
 
249
  J_regressor : torch.tensor JxV
250
  The regressor array that is used to calculate the joints from
251
  the position of the vertices
@@ -277,10 +280,21 @@ def general_lbs(
277
  # NxJx3 array
278
  J = vertices2joints(J_regressor, v_template)
279
 
 
 
 
 
280
  if pose2rot:
281
  rot_mats = batch_rodrigues(pose.view(-1, 3)).view([batch_size, -1, 3, 3])
 
 
 
282
  else:
283
  rot_mats = pose.view(batch_size, -1, 3, 3)
 
 
 
 
284
 
285
  # 4. Get the global joint location
286
  J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)
@@ -292,13 +306,13 @@ def general_lbs(
292
  num_joints = J_regressor.shape[0]
293
  T = torch.matmul(W, A.view(batch_size, num_joints, 16)).view(batch_size, -1, 4, 4)
294
 
295
- homogen_coord = torch.ones([batch_size, v_template.shape[1], 1], dtype=dtype, device=device)
296
- v_posed_homo = torch.cat([v_template, homogen_coord], dim=2)
297
  v_homo = torch.matmul(T, torch.unsqueeze(v_posed_homo, dim=-1))
298
 
299
  verts = v_homo[:, :, :3, 0]
300
 
301
- return verts, J
302
 
303
 
304
  def vertices2joints(J_regressor: Tensor, vertices: Tensor) -> Tensor:
 
233
  def general_lbs(
234
  pose: Tensor,
235
  v_template: Tensor,
236
+ posedirs: Tensor,
237
  J_regressor: Tensor,
238
  parents: Tensor,
239
  lbs_weights: Tensor,
 
247
  The pose parameters in axis-angle format
248
  v_template torch.tensor BxVx3
249
  The template mesh that will be deformed
250
+ posedirs : torch.tensor Px(V * 3)
251
+ The pose PCA coefficients
252
  J_regressor : torch.tensor JxV
253
  The regressor array that is used to calculate the joints from
254
  the position of the vertices
 
280
  # NxJx3 array
281
  J = vertices2joints(J_regressor, v_template)
282
 
283
+ # Add pose blend shapes
284
+ # N x J x 3 x 3
285
+ ident = torch.eye(3, dtype=dtype, device=device)
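+ # pose blend shapes are driven by (R - I) of every joint except the root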
286
+
287
  if pose2rot:
288
  rot_mats = batch_rodrigues(pose.view(-1, 3)).view([batch_size, -1, 3, 3])
289
+ pose_feature = (rot_mats[:, 1:, :, :] - ident).view([batch_size, -1])
290
+ # (N x P) x (P, V * 3) -> N x V x 3
291
+ pose_offsets = torch.matmul(pose_feature, posedirs).view(batch_size, -1, 3)
292
  else:
293
  rot_mats = pose.view(batch_size, -1, 3, 3)
294
+ pose_feature = pose[:, 1:].view(batch_size, -1, 3, 3) - ident
295
+ pose_offsets = torch.matmul(pose_feature.view(batch_size, -1), posedirs).view(batch_size, -1, 3)
296
+
297
+ v_posed = pose_offsets + v_template
298
 
299
  # 4. Get the global joint location
300
  J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)
 
306
  num_joints = J_regressor.shape[0]
307
  T = torch.matmul(W, A.view(batch_size, num_joints, 16)).view(batch_size, -1, 4, 4)
308
 
309
+ homogen_coord = torch.ones([batch_size, v_posed.shape[1], 1], dtype=dtype, device=device)
310
+ v_posed_homo = torch.cat([v_posed, homogen_coord], dim=2)
311
  v_homo = torch.matmul(T, torch.unsqueeze(v_posed_homo, dim=-1))
312
 
313
  verts = v_homo[:, :, :3, 0]
314
 
315
+ return verts, J_transformed
316
 
317
 
318
  def vertices2joints(J_regressor: Tensor, vertices: Tensor) -> Tensor: