---
language:
- en
license: other
license_name: autodesk-non-commercial-3d-generative-v1.0
tags:
- wala
- depth-map-to-3d
---

# Model Card for WaLa-DM4-1B

This model was introduced in the Wavelet Latent Diffusion (WaLa) paper. It generates high-quality 3D shapes with detailed geometry and complex structures from single-view depth-map input.

## Model Details

### Model Description

WaLa-DM4-1B is a large-scale 3D generative model trained on a dataset of over 10 million publicly available 3D shapes. It can generate a wide range of high-quality 3D shapes from a single-view depth-map input in just 4 seconds. The model combines a wavelet-based compact latent encoding with a billion-parameter architecture to achieve superior geometric detail and structural plausibility.

- **Developed by:** Aditya Sanghi, Aliasghar Khani, Chinthala Pradyumna Reddy, Arianna Rampini, Derek Cheung, Kamal Rahimi Malekshan, Kanika Madan, Hooman Shayani
- **Model type:** 3D Generative Model
- **License:** Autodesk Non-Commercial (3D Generative) v1.0

For more information, please see the [Project Page](https://autodeskailab.github.io/WaLaProject) and [the paper](TBD).

### Model Sources

- **Project Page:** [WaLa](https://autodeskailab.github.io/WaLaProject)
- **Repository:** [GitHub](https://github.com/AutodeskAILab/WaLa)
- **Paper:** [ArXiv:TBD](TBD)
- **Demo:** [Colab](https://colab.research.google.com/drive/1W5zPXw9xWNpLTlU5rnq7g3jtIA2BX6aC?usp=sharing)

## Uses

### Direct Use

This model is released by Autodesk and is intended for academic and research purposes only, namely the theoretical exploration and demonstration of the WaLa 3D generative framework. Please see [here](https://github.com/AutodeskAILab/WaLa?tab=readme-ov-file#depth-map-to-3d) for inference instructions.

### Out-of-Scope Use

The model should not be used for:

- Commercial purposes

- Creation of load-bearing physical objects whose failure could cause property damage or personal injury

- Any usage not in compliance with the [license](https://huggingface.co/ADSKAILab/WaLa-DM4-1B/blob/main/LICENSE.md), in particular the "Acceptable Use" section

## Bias, Risks, and Limitations

### Bias

- The model may inherit biases present in the publicly available training datasets, which could lead to uneven representation of certain object types or styles.

- The model's performance may degrade for object categories or styles that are underrepresented in the training data.

### Risks and Limitations

- The quality of the generated 3D output may be impacted by the quality and accuracy of the input depth maps.
- The model may occasionally generate implausible shapes, especially when the input depth maps are ambiguous or of low quality. Even plausible-looking shapes should not be relied upon for real-world structural soundness.

## How to Get Started with the Model

Please refer to the instructions [here](https://github.com/AutodeskAILab/WaLa?tab=readme-ov-file#getting-started).

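As a starting point, the checkpoint files can be pulled locally with the `huggingface_hub` client. This is a minimal sketch that only downloads the repository contents; the repo id `ADSKAILab/WaLa-DM4-1B` matches the license link above, and running generation itself is documented in the WaLa GitHub repository, not shown here.

```python
# Minimal sketch: download this model's files from the Hugging Face Hub.
# Running inference is documented in the WaLa GitHub repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ADSKAILab/WaLa-DM4-1B")
print(f"Model files downloaded to: {local_dir}")
```
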
## Training Details

### Training Data

The model was trained on a dataset of over 10 million 3D shapes aggregated from 19 publicly available sub-datasets, including ModelNet, ShapeNet, SMPL, Thingi10K, SMAL, COMA, House3D, ABC, Fusion 360, 3D-FUTURE, BuildingNet, DeformingThings4D, FG3D, Toys4K, ABO, Infinigen, Objaverse, and two subsets of ObjaverseXL (Thingiverse and GitHub).

### Training Procedure

#### Preprocessing

Each 3D shape in the dataset was converted into a truncated signed distance function (TSDF) with a resolution of 256³. The TSDF was then decomposed using a discrete wavelet transform to create the wavelet-tree representation used by the model. For depth map conditioning, any single view can be selected.

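To make the decomposition step concrete, here is a minimal sketch of applying a multi-level 3D discrete wavelet transform to a TSDF grid using PyWavelets. The random grid (downsized from 256³ to keep the example light), the wavelet family (`bior2.2`), and the number of levels are illustrative assumptions; the exact filters and decomposition depth used for WaLa are described in the paper.

```python
# Minimal sketch: multi-level 3D discrete wavelet transform of a TSDF grid.
# The random grid, wavelet family, and level count are illustrative assumptions;
# WaLa's actual preprocessing is defined in the paper and repository.
import numpy as np
import pywt

# Stand-in for a TSDF volume (downsized from the 256^3 grid used in WaLa),
# with values truncated to a small band around the surface.
tsdf = np.random.uniform(-0.1, 0.1, size=(64, 64, 64)).astype(np.float32)

# Separable 3D DWT applied along all three axes, three levels deep.
coeffs = pywt.wavedecn(tsdf, wavelet="bior2.2", level=3, axes=(0, 1, 2))

# coeffs[0] is the coarse approximation; coeffs[1:] hold the detail
# coefficients per level, keyed by subband name ("aad", "ada", ..., "ddd").
coarse = coeffs[0]
print("coarse approximation shape:", coarse.shape)
print("coarsest-level detail subbands:", sorted(coeffs[1].keys()))
```
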
#### Training Hyperparameters

- **Training regime:** Please refer to the paper.

#### Speeds, Sizes, Times

- The model contains approximately 956 million parameters.
- The model can generate shapes within 4 seconds.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on the Google Scanned Objects (GSO) dataset and on a validation set held out from the training data (the MAS validation set).

#### Factors

The evaluation considered the quality of the generated shapes, the ability to capture fine details and complex structures, and the model's performance across different object categories.

#### Metrics

The model was evaluated using the following metrics:
- Intersection over Union (IoU)
- Light Field Distance (LFD)
- Chamfer Distance (CD)

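For reference, below is a minimal sketch of how IoU and Chamfer Distance can be computed between a generated and a ground-truth shape (LFD needs a dedicated rendering pipeline and is omitted). The voxel grids and point clouds are random placeholders, and details such as grid resolution, point count, squared-versus-unsquared distances, and normalization may differ from the evaluation protocol in the paper.

```python
# Minimal sketch of two of the reported metrics; inputs are random placeholders
# and the exact evaluation protocol is defined in the WaLa paper, not here.
import numpy as np
from scipy.spatial import cKDTree

def voxel_iou(occ_a: np.ndarray, occ_b: np.ndarray) -> float:
    """Intersection over Union of two boolean occupancy grids."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def chamfer_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two (N, 3) point clouds."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # nearest point in B for each point in A
    d_ba, _ = cKDTree(pts_a).query(pts_b)  # nearest point in A for each point in B
    return float(np.mean(d_ab ** 2) + np.mean(d_ba ** 2))

# Placeholder data standing in for ground-truth and generated shapes.
rng = np.random.default_rng(0)
occ_gt, occ_gen = rng.random((64, 64, 64)) > 0.5, rng.random((64, 64, 64)) > 0.5
pts_gt, pts_gen = rng.random((2048, 3)), rng.random((2048, 3))

print("IoU:", voxel_iou(occ_gt, occ_gen))
print("CD :", chamfer_distance(pts_gt, pts_gen))
```
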
### Results

The single-view depth-map-to-3D model achieved the following results:

| Dataset        | LFD | IoU    | CD      |
|----------------|-----|--------|---------|
| GSO            | TBD | 0.6927 | 0.01301 |
| MAS validation | TBD | 0.6358 | 0.01213 |

## Technical Specifications

### Model Architecture and Objective

The model uses a modified U-ViT architecture. It employs a wavelet-based compact latent encoding to capture both coarse and fine details of 3D shapes from a single-view depth input. Each selected depth-map view is processed individually by a DINOv2 encoder, producing a sequence of latent vectors per view; the latent vectors from all views are then concatenated to form the final conditional latent vectors.

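As an illustration of this conditioning path, here is a minimal sketch of encoding depth-map views with a DINOv2 backbone from the `transformers` library and concatenating the per-view token sequences. The specific checkpoint (`facebook/dinov2-base`), the replication of the single-channel depth map to three channels, and how the resulting latents are consumed by the diffusion network are assumptions for illustration; the actual conditioning implementation lives in the WaLa repository.

```python
# Minimal sketch: encode depth-map views with a DINOv2 backbone and concatenate
# the per-view token sequences into one conditional latent sequence.
# The checkpoint name and depth-to-3-channel replication are illustrative
# assumptions; WaLa's actual conditioning code is in the official repository.
import numpy as np
import torch
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
encoder = AutoModel.from_pretrained("facebook/dinov2-base").eval()

# Placeholder 8-bit depth map, one per conditioning view (a single view here,
# as described in this card; extra views would be concatenated the same way).
depth_views = [np.random.randint(0, 256, size=(224, 224), dtype=np.uint8)]

latents = []
with torch.no_grad():
    for depth in depth_views:
        rgb_like = np.stack([depth, depth, depth], axis=-1)  # (H, W, 3) image
        inputs = processor(images=rgb_like, return_tensors="pt")
        tokens = encoder(**inputs).last_hidden_state  # (1, num_tokens, hidden_dim)
        latents.append(tokens)

# Concatenate the token sequences from all views along the sequence dimension.
cond_latents = torch.cat(latents, dim=1)
print("conditional latent shape:", cond_latents.shape)
```
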
### Compute Infrastructure

#### Hardware

The model was trained on NVIDIA H100 GPUs.

## Citation

[Citation information to be added after paper publication]