Inishds's Collections
ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models
Paper • 2403.01807 • Published • 7
TripoSR: Fast 3D Object Reconstruction from a Single Image
Paper • 2403.02151 • Published • 12
OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
Paper • 2403.01779 • Published • 28
MagicClay: Sculpting Meshes With Generative Neural Fields
Paper • 2403.02460 • Published • 6
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
Paper • 2403.12008 • Published • 19
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation
Paper • 2403.12015 • Published • 64
DATENeRF: Depth-Aware Text-based Editing of NeRFs
Paper • 2404.04526 • Published • 9
SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing
Paper • 2404.05717 • Published • 24
Drivable 3D Gaussian Avatars
Paper • 2311.08581 • Published • 46
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization
Paper • 2305.03043 • Published • 5
One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion
Paper • 2311.07885 • Published • 39
Dynamic Mesh-Aware Radiance Fields
Paper • 2309.04581 • Published • 6
LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
Paper • 2311.13384 • Published • 50
Adaptive Shells for Efficient Neural Radiance Field Rendering
Paper • 2311.10091 • Published • 18
GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning
Paper • 2311.12631 • Published • 13
DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting
Paper • 2404.06903 • Published • 18
Interactive3D: Create What You Want by Interactive 3D Generation
Paper • 2404.16510 • Published • 18
MaPa: Text-driven Photorealistic Material Painting for 3D Shapes
Paper • 2404.17569 • Published • 12
4Diffusion: Multi-view Video Diffusion Model for 4D Generation
Paper • 2405.20674 • Published • 12
MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers
Paper • 2406.10163 • Published • 32
WildGaussians: 3D Gaussian Splatting in the Wild
Paper • 2407.08447 • Published • 8
RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models
Paper • 2407.06938 • Published • 21
GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality
Paper • 2406.18462 • Published • 11
Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials
Paper • 2407.02445 • Published • 4
Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering
Paper • 2405.15305 • Published • 1
Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
Paper • 2309.13006 • Published • 1
GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement
Paper • 2406.05649 • Published • 8
DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models
Paper • 2310.00434 • Published • 1
Tailor3D: Customized 3D Assets Editing and Generation with Dual-Side Images
Paper • 2407.06191 • Published • 10
YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals
Paper • 2406.16273 • Published • 40
Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model
Paper • 2310.15110 • Published • 2
Grounded 3D-LLM with Referent Tokens
Paper • 2405.10370 • Published • 10
Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion
Paper • 2406.04338 • Published • 34
Adversarial Generation of Hierarchical Gaussians for 3D Generative Model
Paper • 2406.02968 • Published
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
Paper • 2406.06216 • Published • 19
GaussianCube: Structuring Gaussian Splatting using Optimal Transport for 3D Generative Modeling
Paper • 2403.19655 • Published • 18
2D Gaussian Splatting for Geometrically Accurate Radiance Fields
Paper • 2403.17888 • Published • 27
NPGA: Neural Parametric Gaussian Avatars
Paper • 2405.19331 • Published • 10
Human4DiT: Free-view Human Video Generation with 4D Diffusion Transformer
Paper • 2405.17405 • Published • 14
CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
Paper • 2405.14979 • Published • 15
PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting
Paper • 2405.19957 • Published • 9
GECO: Generative Image-to-3D within a SECOnd
Paper • 2405.20327 • Published • 9
Scaling Up Dynamic Human-Scene Interaction Modeling
Paper • 2403.08629 • Published • 14
RoHM: Robust Human Motion Reconstruction via Diffusion
Paper • 2401.08570 • Published • 1
VR-NeRF: High-Fidelity Virtualized Walkable Spaces
Paper • 2311.02542 • Published • 14
FDGaussian: Fast Gaussian Splatting from Single Image via Geometric-aware Diffusion Model
Paper • 2403.10242 • Published • 10
pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction
Paper • 2312.12337 • Published • 2
InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds
Paper • 2403.20309 • Published • 18
CAT3D: Create Anything in 3D with Multi-View Diffusion Models
Paper • 2405.10314 • Published • 44
Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models
Paper • 2303.08440 • Published
GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting
Paper • 2404.19702 • Published • 18
MeshLRM: Large Reconstruction Model for High-Quality Mesh
Paper • 2404.12385 • Published • 26
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
Paper • 2404.07191 • Published • 2
SAGS: Structure-Aware 3D Gaussian Splatting
Paper • 2404.19149 • Published • 13
BlenderAlchemy: Editing 3D Graphics with Vision-Language Models
Paper • 2404.17672 • Published • 18
Garment3DGen: 3D Garment Stylization and Texture Generation
Paper • 2403.18816 • Published • 21
EgoLifter: Open-world 3D Segmentation for Egocentric Perception
Paper • 2403.18118 • Published • 10
Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces
Paper • 2403.20275 • Published • 8
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes
Paper • 2404.00987 • Published • 21
VisionGPT-3D: A Generalized Multimodal Agent for Enhanced 3D Vision Understanding
Paper • 2403.09530 • Published • 8
LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation
Paper • 2403.12019 • Published • 9
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing
Paper • 2403.12032 • Published • 14
GVGEN: Text-to-3D Generation with Volumetric Representation
Paper • 2403.12957 • Published • 5
ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance
Paper • 2403.12409 • Published • 9
TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation
Paper • 2403.12906 • Published • 5
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
Paper • 2403.14621 • Published • 14
LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis
Paper • 2403.15385 • Published • 6
Towards 3D Molecule-Text Interpretation in Language Models
Paper • 2401.13923 • Published • 9
DreamReward: Text-to-3D Generation with Human Preference
Paper • 2403.14613 • Published • 35
PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation
Paper • 2404.13026 • Published • 23
MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance
Paper • 2404.08252 • Published • 5
CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting
Paper • 2404.09458 • Published • 6
DressCode: Autoregressively Sewing and Generating Garments from Text Guidance
Paper • 2401.16465 • Published • 11
Deep LOGISMOS: Deep Learning Graph-based 3D Segmentation of Pancreatic Tumors on CT scans
Paper • 1801.08599 • Published • 2
HoloDreamer: Holistic 3D Panoramic World Generation from Text Descriptions
Paper • 2407.15187 • Published • 10
ThemeStation: Generating Theme-Aware 3D Assets from Few Exemplars
Paper • 2403.15383 • Published • 13
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
Paper • 2402.15627 • Published • 34
GenesisTex: Adapting Image Denoising Diffusion to Texture Space
Paper • 2403.17782 • Published
Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text
Paper • 2403.16897 • Published
InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting
Paper • 2403.11878 • Published
Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction
Paper • 2403.18795 • Published • 18
Robust Gaussian Splatting
Paper • 2404.04211 • Published • 8
Does Gaussian Splatting need SFM Initialization?
Paper • 2404.12547 • Published • 8
Spectrally Pruned Gaussian Fields with Neural Compensation
Paper • 2405.00676 • Published • 8
PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
Paper • 2404.16994 • Published • 35
PlacidDreamer: Advancing Harmony in Text-to-3D Generation
Paper • 2407.13976 • Published • 5
SparseCraft: Few-Shot Neural Reconstruction through Stereopsis Guided Geometric Linearization
Paper • 2407.14257 • Published • 5
Shape of Motion: 4D Reconstruction from a Single Video
Paper • 2407.13764 • Published • 19
4K4DGen: Panoramic 4D Generation at 4K Resolution
Paper • 2406.13527 • Published • 8
Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images
Paper • 2406.13393 • Published • 5
EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
Paper • 2406.20076 • Published • 8
SVG: 3D Stereoscopic Video Generation via Denoising Frame Matrix
Paper • 2407.00367 • Published • 9
RealTalk: Real-time and Realistic Audio-driven Face Generation with 3D Facial Prior-guided Identity Alignment Network
Paper • 2406.18284 • Published • 19
Magic Insert: Style-Aware Drag-and-Drop
Paper • 2407.02489 • Published • 20
CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images
Paper • 2407.03923 • Published • 7
UltraEdit: Instruction-based Fine-Grained Image Editing at Scale
Paper • 2407.05282 • Published • 12
Vision language models are blind
Paper • 2407.06581 • Published • 82
CrowdMoGen: Zero-Shot Text-Driven Collective Motion Generation
Paper • 2407.06188 • Published • 1
Controlling Space and Time with Diffusion Models
Paper • 2407.07860 • Published • 16
LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models
Paper • 2407.07895 • Published • 40
StyleSplat: 3D Object Style Transfer with Gaussian Splatting
Paper • 2407.09473 • Published • 10
GRUtopia: Dream General Robots in a City at Scale
Paper • 2407.10943 • Published • 23
Click-Gaussian: Interactive Segmentation to Any 3D Gaussians
Paper • 2407.11793 • Published • 3
Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
Paper • 2407.11398 • Published • 8