No texture included in the .obj + feedback
This looks super promising for adding assets to a 3D scene based just on image references, but I'm having an issue: the generated texture isn't included in the .obj file when I import it. Or is this a demo limitation?
So far the best results come from "realistic" objects that have depth and reflections to them. Makes sense, I guess. 2D or animated characters from, say, a TV show will often produce a flat(ish) version of the character, as described in the other discussion. Here I would love to be able to upload my own multiview images to the demo site (I would basically just download the generated one, edit it in Photoshop, and then reupload) as a quick fix until the model is better trained. Is this possible?
I'm no good at compiling/code or anything, so I'm depending on your online demo or an A1111 extension at some point :)
I discovered that the color information is contained in the vertex color data.
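For anyone else who wants to check this, here's a quick way to inspect the per-vertex colors in the exported mesh, assuming the trimesh Python package is installed (the file name below is just a placeholder):

```python
# Inspect per-vertex colors in the exported .obj
# (assumes trimesh is installed: pip install trimesh)
import trimesh

mesh = trimesh.load("output.obj", process=False)  # "output.obj" is a placeholder path
print(mesh.vertices.shape)              # (num_vertices, 3)
print(mesh.visual.vertex_colors.shape)  # (num_vertices, 4) RGBA, if vertex colors were parsed
```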
@fuppading
@Vanrack
Hi, our demo only supports exporting a .obj mesh with vertex colors for now; we plan to add support for exporting the .glb format soon. However, our GitHub code supports exporting a mesh with a texture map, by calling run.py and specifying the --export_texmap flag. Please refer to the README.md for more details.
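For reference, an invocation might look roughly like the sketch below. Only run.py and --export_texmap come from the reply above; the config file name and image path are assumptions, so check README.md for the exact arguments your checkout expects. Command-line users would simply run the equivalent command directly.

```python
# A minimal sketch of invoking run.py with texture-map export from Python.
import subprocess

subprocess.run(
    [
        "python", "run.py",
        "configs/instant-mesh-large.yaml",  # assumed config path
        "examples/input.png",               # assumed input image path
        "--export_texmap",                  # export a UV texture map instead of vertex colors
    ],
    check=True,
)
```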
@fuppading
Our image-to-3D generation results depend highly on the quality of the generated multiview images. Zero123++ is one of the best multiview diffusion models available according to our experiments, but it still has limitations: for example, it struggles to generalize to 2D-style images, which leads to flat multiview images, and it is also not good at generating human faces or bodies, which often leads to distorted faces. We need a more powerful multiview diffusion model to improve these results.
It is possible to use your own multiview images to perform multiview reconstruction without using Zero123++. In this scenario, both the multiview images and their corresponding camera poses are required. We plan to offer a Python script showing how to use our sparse-view reconstruction model in this case.
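Until that script is released, a rough sketch of the expected inputs might look like the following. The model-loading and reconstruction calls are hypothetical placeholders (the official script will define the real interface); only the requirement of multiview images plus one camera pose per view comes from the reply above.

```python
# Rough sketch: preparing custom multiview images and camera poses for
# sparse-view reconstruction. Function names in the commented-out section
# are hypothetical placeholders, not the actual API.
import numpy as np
from PIL import Image

# Load the (possibly hand-edited) multiview images.
view_paths = ["view_0.png", "view_1.png", "view_2.png",
              "view_3.png", "view_4.png", "view_5.png"]  # assumed 6 views
images = np.stack([np.asarray(Image.open(p).convert("RGB")) for p in view_paths])
images = images.astype(np.float32) / 255.0               # (N, H, W, 3) in [0, 1]

# One camera pose per view; these must match how the images were generated.
poses = np.stack([np.eye(4, dtype=np.float32) for _ in view_paths])  # (N, 4, 4) placeholder

# Hypothetical calls -- replace with the official sparse-view script once released.
# model = load_reconstruction_model("checkpoint.ckpt")
# mesh = model.reconstruct(images, poses)
# mesh.export("custom_multiview.obj")
```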