We are excited to announce our latest research on video editing - StableV2V!
StableV2V performs video editing with shape consistency aligned to the user prompt, even when the edit causes significant shape differences.
Besides, we curate a testing benchmark for video editing, namely DAVIS-Edit, comprising both text-based and image-based applications.
We have open-sourced our paper, code, model weights, and DAVIS-Edit; you can find more details about StableV2V at the following links:
- arXiv paper: https://arxiv.org/abs/2411.11045
- Project page: https://alonzoleeeooo.github.io/StableV2V/
- GitHub: https://github.com/AlonzoLeeeooo/StableV2V
- HuggingFace model repo: AlonzoLeeeooo/StableV2V
- HuggingFace dataset repo: AlonzoLeeeooo/DAVIS-Edit