arxiv:2411.16856

SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE

Published on Nov 25 · Submitted by cyw-3d on Nov 27
Abstract

Autoregressive models have demonstrated remarkable success across various fields, from large language models (LLMs) to large multimodal models (LMMs) and 2D content generation, moving closer to artificial general intelligence (AGI). Despite these advances, applying autoregressive approaches to 3D object generation and understanding remains largely unexplored. This paper introduces Scale AutoRegressive 3D (SAR3D), a novel framework that leverages a multi-scale 3D vector-quantized variational autoencoder (VQVAE) to tokenize 3D objects for efficient autoregressive generation and detailed understanding. By predicting the next scale in a multi-scale latent representation instead of the next single token, SAR3D reduces generation time significantly, achieving fast 3D object generation in just 0.82 seconds on an A6000 GPU. Additionally, given the tokens enriched with hierarchical 3D-aware information, we finetune a pretrained LLM on them, enabling multimodal comprehension of 3D content. Our experiments show that SAR3D surpasses current 3D generation methods in both speed and quality and allows LLMs to interpret and caption 3D models comprehensively.
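The key mechanism is next-scale (rather than next-token) prediction: the multi-scale VQVAE encodes an object as a pyramid of token maps, and the transformer emits all tokens of the next, finer scale in a single forward pass. Below is a minimal sketch of that decoding loop, assuming a VAR-style interface; `transformer`, `vqvae.embed_scales`, and `vqvae.decode` are hypothetical stand-ins, not the authors' API, and the 2D token grids stand in for whatever latent layout the 3D VQVAE actually uses.

```python
import torch

def generate(transformer, vqvae, condition, scales=(1, 2, 4, 8, 16)):
    """Next-scale autoregressive generation (illustrative sketch)."""
    tokens_per_scale = []
    for s in scales:
        # Embed the condition plus all coarser scales generated so far
        # as the transformer's prefix context (hypothetical helper).
        context = vqvae.embed_scales(tokens_per_scale, condition)
        # One forward pass predicts every token of the current scale at
        # once, so decoding costs len(scales) transformer calls instead
        # of one call per token.
        logits = transformer(context)                           # (s*s, vocab)
        probs = torch.softmax(logits, dim=-1)
        next_tokens = torch.multinomial(probs, 1).squeeze(-1)   # (s*s,)
        tokens_per_scale.append(next_tokens.view(s, s))
    # Decode the full token pyramid back into the 3D representation.
    return vqvae.decode(tokens_per_scale)
```

Because each scale is emitted in one shot, the number of sequential transformer calls grows with the number of scales rather than the number of tokens, which is what makes sub-second generation plausible.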

Community


SAR3D is a framework for fast 3D generation (<1s) and detailed understanding via autoregressive modeling.

Project Page: https://cyw-3d.github.io/projects/SAR3D/
ArXiv: https://arxiv.org/abs/2411.16856
GitHub: https://github.com/cyw-3d/SAR3D
