Let’s take a step back and look at the generative 3D pipeline as a whole.
In Step 2, the object exists in some non-mesh representation, labeled “ML-friendly 3D”, which is then converted to a mesh in Step 3 using Marching Cubes.
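To make the Step 2 → Step 3 conversion concrete, here is a minimal sketch of Marching Cubes using scikit-image’s `measure.marching_cubes`. The spherical signed-distance field and the grid resolution are illustrative assumptions, not part of the course pipeline; real pipelines would run this on the density or SDF grid produced by the ML model.

```python
import numpy as np
from skimage import measure  # assumes scikit-image is installed

# Illustrative "ML-friendly 3D" representation: a signed-distance field
# for a sphere of radius 0.5, sampled on a 32^3 grid over [-1, 1]^3.
# Values are negative inside the surface and positive outside.
n = 32
xs = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Marching Cubes extracts the level-0 isosurface as a triangle mesh:
# vertex positions (in voxel coordinates) plus triangle indices.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)

print(f"{len(verts)} vertices, {len(faces)} triangles")
```

The same call works on any scalar 3D grid, which is why Marching Cubes is such a common bridge between learned volumetric representations and the triangle meshes that renderers and game engines expect.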
Before the ML-friendly 3D representation, there is often a step called “multi-view diffusion”. This is where a diffusion model, like Stable Diffusion, is used to generate novel views of an object, either from source images or from a text prompt.
This part of the pipeline is highly technical and evolving rapidly, and it relates more to diffusion than to 3D. Therefore, in this course, we’ll treat it as a building block and focus on how you can harness it using the Hugging Face ecosystem.
If you want to learn more about the specifics of diffusion models, check out the Diffusion Course.
In this course, each core unit will go over these three building blocks:

- multi-view diffusion, which generates novel views of an object;
- the ML-friendly 3D representation those views are lifted into;
- mesh conversion, which turns that representation into a mesh with Marching Cubes.
Each of these units will also include a hands-on exercise, where you’ll get to apply what you’ve learned in a real-world scenario.