Abstract
We present Turbo3D, an ultra-fast text-to-3D system capable of generating high-quality Gaussian splatting assets in under one second. Turbo3D employs a rapid 4-step, 4-view diffusion generator and an efficient feed-forward Gaussian reconstructor, both operating in latent space. The 4-step, 4-view generator is a student model distilled through a novel Dual-Teacher approach, which encourages the student to learn view consistency from a multi-view teacher and photo-realism from a single-view teacher. By shifting the Gaussian reconstructor's inputs from pixel space to latent space, we eliminate the extra image decoding time and halve the transformer sequence length for maximum efficiency. Our method demonstrates superior 3D generation results compared to previous baselines, while operating in a fraction of their runtime.
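The dual-teacher distillation described above combines two supervision signals for the student: agreement with a multi-view teacher (view consistency) and agreement with a single-view teacher (photo-realism). A minimal sketch of such a combined objective is below; the function name, the per-view MSE choice, and the weighting scheme `w_mv` are illustrative assumptions, not Turbo3D's actual loss.

```python
def dual_teacher_distill_loss(student_views, mv_teacher_views, sv_teacher_views, w_mv=0.5):
    """Hedged sketch of a dual-teacher distillation objective (assumed form).

    Each argument is a list of per-view feature vectors (lists of floats).
    The multi-view teacher term encourages cross-view consistency; the
    single-view teacher term encourages per-view photo-realism.
    """
    def mse(a, b):
        # mean squared error between two equal-length vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    n = len(student_views)
    # average per-view distance to each teacher's output
    loss_mv = sum(mse(s, t) for s, t in zip(student_views, mv_teacher_views)) / n
    loss_sv = sum(mse(s, t) for s, t in zip(student_views, sv_teacher_views)) / n
    # weighted combination of the two teacher signals (weight is an assumption)
    return w_mv * loss_mv + (1 - w_mv) * loss_sv
```

With `w_mv=0.5` the student is pulled equally toward both teachers; the paper's actual balancing of the two signals may differ.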
Community
project page: https://turbo-3d.github.io/
arXiv: https://arxiv.org/abs/2412.04470
code: https://github.com/hzhupku/Turbo3D
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ModeDreamer: Mode Guiding Score Distillation for Text-to-3D Generation using Reference Image Prompts (2024)
- DreamCraft3D++: Efficient Hierarchical 3D Generation with Multi-Plane Reconstruction Model (2024)
- MVLight: Relightable Text-to-3D Generation via Light-conditioned Multi-View Diffusion (2024)
- SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting Synthesis (2024)
- Sharp-It: A Multi-view to Multi-view Diffusion Model for 3D Synthesis and Manipulation (2024)
- Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation (2024)
- Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation (2024)