arxiv:2308.16512

MVDream: Multi-view Diffusion for 3D Generation

Published on Aug 31, 2023
· Submitted by akhaliq on Sep 1, 2023
#1 Paper of the day

Abstract

We propose MVDream, a multi-view diffusion model that is able to generate geometrically consistent multi-view images from a given text prompt. By leveraging image diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets, the resulting multi-view diffusion model can achieve both the generalizability of 2D diffusion and the consistency of 3D data. Such a model can thus be applied as a multi-view prior for 3D generation via Score Distillation Sampling, where it greatly improves the stability of existing 2D-lifting methods by solving the 3D consistency problem. Finally, we show that the multi-view diffusion model can also be fine-tuned in a few-shot setting for personalized 3D generation, i.e., the DreamBooth3D application, where consistency is maintained after learning the subject identity.
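The abstract mentions using the diffusion model as a prior via Score Distillation Sampling (SDS): a rendered view is noised, the model predicts the noise, and the weighted residual is backpropagated to the 3D parameters. A minimal sketch of one SDS gradient step, assuming a hypothetical `denoiser` function and a toy cosine noise schedule (both placeholders, not MVDream's actual implementation):

```python
import numpy as np

def sds_gradient(x, t, denoiser, w=lambda t: 1.0, rng=np.random.default_rng(0)):
    """One Score Distillation Sampling gradient step (sketch).

    x: rendered view from a differentiable 3D renderer, as a flat array.
    t: diffusion timestep in (0, 1].
    denoiser: hypothetical model, (x_noisy, t) -> predicted noise.
    w: timestep-dependent weighting (identity here for simplicity).
    """
    eps = rng.standard_normal(x.shape)      # sample Gaussian noise
    alpha = np.cos(0.5 * np.pi * t)         # toy cosine schedule (assumption)
    sigma = np.sin(0.5 * np.pi * t)
    x_noisy = alpha * x + sigma * eps       # forward diffusion of the render
    eps_pred = denoiser(x_noisy, t)         # model's noise prediction
    # SDS: the gradient w.r.t. the rendered pixels is the weighted noise
    # residual; it is then chained through the renderer to the 3D parameters.
    return w(t) * (eps_pred - eps)
```

In a multi-view setting like the one the paper describes, `x` would be a batch of views rendered from several cameras and the denoiser would be conditioned on the camera poses, which is what enforces cross-view consistency.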

Community


Could a similar technique be used to make animations consistent? Temporal rather than spatial.

Such an awesome result to witness.

So when will an app using the already-trained model be launched?

Ditto 👆



Models citing this paper 5


Datasets citing this paper 0


Spaces citing this paper 17

Collections including this paper 18