CCMat's Collections
Video
Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
Paper • 2401.15977 • Published • 37
Lumiere: A Space-Time Diffusion Model for Video Generation
Paper • 2401.12945 • Published • 86
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Paper • 2307.04725 • Published • 64
Boximator: Generating Rich and Controllable Motions for Video Synthesis
Paper • 2402.01566 • Published • 26
ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
Paper • 2402.04324 • Published • 23
MagicDance: Realistic Human Dance Video Generation with Motions & Facial Expressions Transfer
Paper • 2311.12052 • Published • 32
Magic-Me: Identity-Specific Video Customized Diffusion
Paper • 2402.09368 • Published • 27
Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
Paper • 2402.03162 • Published • 17
AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning
Paper • 2402.00769 • Published • 20
I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models
Paper • 2312.16693 • Published • 13
DreamVideo: Composing Your Dream Videos with Customized Subject and Motion
Paper • 2312.04433 • Published • 9
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
Paper • 2312.03641 • Published • 20
AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
Paper • 2311.12886 • Published
VideoComposer: Compositional Video Synthesis with Motion Controllability
Paper • 2306.02018 • Published • 3
Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance
Paper • 2401.15687 • Published • 22
Customizing Motion in Text-to-Video Diffusion Models
Paper • 2312.04966 • Published • 10
World Model on Million-Length Video And Language With RingAttention
Paper • 2402.08268 • Published • 37
Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
Paper • 2311.17117 • Published • 5
Genie: Generative Interactive Environments
Paper • 2402.15391 • Published • 71
EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
Paper • 2402.17485 • Published • 189
Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models
Paper • 2402.17177 • Published • 88
AnimateDiff-Lightning: Cross-Model Diffusion Distillation
Paper • 2403.12706 • Published • 17
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
Paper • 2403.12008 • Published • 19
V3D: Video Diffusion Models are Effective 3D Generators
Paper • 2403.06738 • Published • 28
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Paper • 2310.12190 • Published • 10
Mora: Enabling Generalist Video Generation via A Multi-Agent Framework
Paper • 2403.13248 • Published • 77
CameraCtrl: Enabling Camera Control for Text-to-Video Generation
Paper • 2404.02101 • Published • 22
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Paper • 2404.09967 • Published • 20
PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation
Paper • 2404.13026 • Published • 23
StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
Paper • 2405.01434 • Published • 52
MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model
Paper • 2404.19759 • Published • 24
MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation
Paper • 2401.04468 • Published • 48
Make Pixels Dance: High-Dynamic Video Generation
Paper • 2311.10982 • Published • 68
Emu3: Next-Token Prediction is All You Need
Paper • 2409.18869 • Published • 91