Image-to-3D · English · make-a-shape · sv-to-3d

Hooman committed
Commit 23f4a74
1 Parent(s): c3dd60b

Update README.md

Files changed (1):
  1. README.md +33 -61

README.md CHANGED
@@ -28,58 +28,47 @@ For more information please look at the [Project](https://www.research.autodesk.
 
 ### Model Sources
 
-<!-- Provide the basic links for the model. -->
-
 - **Repository:** [https://github.com/AutodeskAILab/Make-a-Shape](https://github.com/AutodeskAILab/Make-a-Shape)
-- **Paper:** [ArXiv:2401.11067](https://arxiv.org/abs/2401.11067) [ICML - Make-A-Shape: a Ten-Million-scale 3D Shape Model](https://proceedings.mlr.press/v235/hui24a.html)
-- **Demo:** [in progress...]
+- **Paper:** [ArXiv:2401.11067](https://arxiv.org/abs/2401.11067), [ICML - Make-A-Shape: a Ten-Million-scale 3D Shape Model](https://proceedings.mlr.press/v235/hui24a.html)
+- **Demo:** [Google Colab](https://colab.research.google.com/drive/1XIoeanLjXIDdLow6qxY7cAZ6YZpqY40d?usp=sharing)
 
-## Uses
+## Uses
 
-### Direct Use
+### Direct Use
 
-Please look at the instructions [here](https://github.com/AutodeskAILab/Make-a-Shape?tab=readme-ov-file#single-view-to-3d) to test this model for research and academic purposes.
+This model is released by Autodesk and intended for academic and research purposes only for the theoretical exploration and demonstration of the Make-a-Shape 3D generative framework. Please see [here](https://github.com/AutodeskAILab/Make-a-Shape?tab=readme-ov-file#single-view-to-3d) for inferencing instructions.
 
-### Downstream Use
-
-This model could potentially be used in various applications such as:
-- 3D content creation for gaming and virtual environments
-- Augmented reality applications
-- Computer-aided design and prototyping
-- Architectural visualization
-
-### Out-of-Scope Use
+### Out-of-Scope Use
 
-The model should not be used for:
-- Commercial use
-- Generating 3D shapes of sensitive or copyrighted content without proper authorization
-- Creating 3D models intended for harmful or malicious purposes
+The model should not be used for:
+
+- Commercial purposes
+
+- Creation of load-bearing physical objects the failure of which could cause property damage or personal injury
+
+- Any usage not in compliance with the [link to license], in particular, the "Acceptable Use" section.
 
-## Bias, Risks, and Limitations
+## Bias, Risks, and Limitations
 
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+### Bias
 
-- The model may inherit biases present in the training dataset, which could lead to uneven representation of certain object types or styles.
-- The quality of the generated 3D shape depends on the quality and clarity of the input image.
-- The model may occasionally generate implausible shapes, especially when the input image is ambiguous or of low quality.
-- The model's performance may degrade for object categories or styles that are underrepresented in the training data.
+- The model may inherit biases present in the publicly-available training datasets, which could lead to uneven representation of certain object types or styles.
+
+- The model's performance may degrade for object categories or styles that are underrepresented in the training data.
+
+### Risks and Limitations
 
-### Recommendations
+- The quality of the generated 3D output may be impacted by the quality and clarity of the input image.
 
-Users should be aware of the potential biases and limitations of the model. It's recommended to:
-- Use high-quality, clear input images for best results
-- Verify and potentially post-process the generated 3D shapes for critical applications
-- Be cautious when using the model for object categories that may be underrepresented in the training data
-- Consider ethical implications and potential biases
-- DO NOT USE for commercial or public-facing applications
+- The model may occasionally generate implausible shapes, especially when the input image is ambiguous or of low quality. Even theoretically plausible shapes should not be relied upon for real-world structural soundness.
 
-## How to Get Started with the Model
+## How to Get Started with the Model
 
-Please look at the instructions [here](https://github.com/AutodeskAILab/Make-a-Shape?tab=readme-ov-file#single-view-to-3d).
+Please refer to the instructions [here](https://github.com/AutodeskAILab/Make-a-Shape?tab=readme-ov-file#single-view-to-3d).
 
-## Training Details
+## Training Details
 
-### Training Data
+### Training Data
 
 The model was trained on a dataset of over 10 million 3D shapes aggregated from 18 different publicly-available sub-datasets, including ModelNet, ShapeNet, SMPL, Thingi10K, SMAL, COMA, House3D, ABC, Fusion 360, 3D-FUTURE, BuildingNet, DeformingThings4D, FG3D, Toys4K, ABO, Infinigen, Objaverse, and two subsets of ObjaverseXL (Thingiverse and GitHub).
 
@@ -91,7 +80,7 @@ Each 3D shape in the dataset was converted into a truncated signed distance func
 
 #### Training Hyperparameters
 
-- **Training regime:** Please look at the paper.
+- **Training regime:** Please refer to the paper.
 
 #### Speeds, Sizes, Times
 
@@ -132,9 +121,9 @@ On the GSO dataset:
 
 ## Technical Specifications
 
-### Model Architecture and Objective
+### Model Architecture and Objective
 
-The model uses a U-ViT architecture with learnable skip-connections between the convolution and deconvolution blocks. It employs a wavelet-tree representation and a subband adaptive training strategy to effectively capture both coarse and fine details of 3D shapes.
+The model uses a U-ViT architecture with learnable skip-connections between the convolution and deconvolution blocks. It employs a wavelet-tree representation and a subband adaptive training strategy to effectively capture both coarse and fine details of 3D shapes.
 
 ### Compute Infrastructure
 
@@ -145,27 +134,10 @@ The model was trained on 48 × A10G GPUs.
 ## Citation
 
 **BibTeX:**
-@InProceedings{pmlr-v235-hui24a,
-  title = {Make-A-Shape: a Ten-Million-scale 3{D} Shape Model},
-  author = {Hui, Ka-Hei and Sanghi, Aditya and Rampini, Arianna and Rahimi Malekshan, Kamal and Liu, Zhengzhe and Shayani, Hooman and Fu, Chi-Wing},
-  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
-  pages = {20660--20681},
-  year = {2024},
-  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
-  volume = {235},
-  series = {Proceedings of Machine Learning Research},
-  month = {21--27 Jul},
-  publisher = {PMLR},
-  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/hui24a/hui24a.pdf},
-  url = {https://proceedings.mlr.press/v235/hui24a.html},
-  abstract = {The progression in large-scale 3D generative models has been impeded by significant resource requirements for training and challenges like inefficient representations. This paper introduces Make-A-Shape, a novel 3D generative model trained on a vast scale, using 10 million publicly-available shapes. We first innovate the wavelet-tree representation to encode high-resolution SDF shapes with minimal loss, leveraging our newly-proposed subband coefficient filtering scheme. We then design a subband coefficient packing scheme to facilitate diffusion-based generation and a subband adaptive training strategy for effective training on the large-scale dataset. Our generative framework is versatile, capable of conditioning on various input modalities such as images, point clouds, and voxels, enabling a variety of downstream applications, e.g., unconditional generation, completion, and conditional generation. Our approach clearly surpasses the existing baselines in delivering high-quality results and can efficiently generate shapes within two seconds for most conditions.}
+```latex
+@inproceedings{hui2024make,
+  title={Make-a-shape: a ten-million-scale 3d shape model},
+  author={Hui, Ka-Hei and Sanghi, Aditya and Rampini, Arianna and Malekshan, Kamal Rahimi and Liu, Zhengzhe and Shayani, Hooman and Fu, Chi-Wing},
+  booktitle={Forty-first International Conference on Machine Learning}
 }
-
-
-**APA:**
-
-Hui, K. H., Sanghi, A., Rampini, A., Malekshan, K. R., Liu, Z., Shayani, H., & Fu, C. W. (2024). Make-A-Shape: a Ten-Million-scale 3D Shape Model. arXiv preprint arXiv:2401.08504.
-
-## Model Card Contact
-
+```
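The training-data context above notes that each 3D shape was converted into a truncated signed distance function (TSDF). As a rough illustration of the general idea only (this is not the authors' pipeline; the resolution, truncation band, and function name below are invented), a TSDF of an analytic sphere sampled on a voxel grid might look like:

```python
# Illustrative sketch, NOT the dataset's actual conversion pipeline.
import numpy as np

def sphere_tsdf(resolution=32, radius=0.5, truncation=0.1):
    # Sample grid points in [-1, 1]^3.
    coords = np.linspace(-1.0, 1.0, resolution)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    # Signed distance to the sphere surface: negative inside, positive outside.
    sdf = np.sqrt(x**2 + y**2 + z**2) - radius
    # Truncate so only a narrow band around the surface carries detail.
    return np.clip(sdf, -truncation, truncation)

tsdf = sphere_tsdf()
print(tsdf.shape)  # (32, 32, 32)
# All values now lie in the narrow band [-0.1, 0.1] around the surface.
```

Truncation is what keeps the representation compact: far from the surface the field is constant, so only voxels near the surface carry information.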
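The architecture section describes a U-ViT with learnable skip-connections between convolution and deconvolution blocks. A toy sketch of the learnable-skip idea, assuming PyTorch is available (the layer sizes and all names below are invented and far smaller than the real model):

```python
# Toy illustration of a learnable skip connection; NOT the Make-A-Shape network.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Minimal conv/deconv pair with a learnable gate on the skip connection,
    mimicking the idea of learnable (rather than fixed) skips."""
    def __init__(self, channels=8):
        super().__init__()
        self.down = nn.Conv3d(1, channels, 3, stride=2, padding=1)
        self.mid = nn.Conv3d(channels, channels, 3, padding=1)
        self.up = nn.ConvTranspose3d(channels, 1, 4, stride=2, padding=1)
        # Learnable scalar weight on the skip, trained with the rest of the model.
        self.skip_gate = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        h = self.down(x)
        h = self.mid(h) + self.skip_gate * h  # gated skip around the middle block
        return self.up(h)

model = TinyUNet3D()
out = model(torch.randn(1, 1, 16, 16, 16))
print(out.shape)  # torch.Size([1, 1, 16, 16, 16])
```

A fixed skip simply adds the features through; making the gate learnable lets training decide how much low-level detail each resolution passes across the network.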
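The wavelet-tree representation mentioned in the architecture section builds on standard wavelet decompositions, which split a signal into a coarse subband plus detail subbands. A minimal one-level 1D Haar transform conveys the idea (illustrative only; the paper's subband filtering and packing schemes are considerably more involved):

```python
# One level of the 1D Haar wavelet transform and its inverse; illustrative only.
import numpy as np

def haar_step(x):
    """Split a signal of even length into (coarse, detail) subbands."""
    pairs = x.reshape(-1, 2)
    coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # local averages
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # local differences
    return coarse, detail

def haar_inverse(coarse, detail):
    """Perfectly reconstruct the signal from its two subbands."""
    out = np.empty(coarse.size * 2)
    out[0::2] = (coarse + detail) / np.sqrt(2)
    out[1::2] = (coarse - detail) / np.sqrt(2)
    return out

signal = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 0.0])
coarse, detail = haar_step(signal)
reconstructed = haar_inverse(coarse, detail)
print(np.allclose(reconstructed, signal))  # True
```

Recursing on the coarse subband yields a tree of subbands, which is why such representations can capture both coarse structure and fine detail of a shape, the property the model card attributes to the wavelet-tree encoding.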