---
title: LaVie
emoji: 😊
colorFrom: pink
colorTo: pink
sdk: gradio
sdk_version: 4.3.0
app_file: base/app.py
pinned: false
python_version: 3.11.5
---

# LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models

This repository is the official PyTorch implementation of LaVie.

LaVie is a Text-to-Video (T2V) generation framework and the main part of the video generation system Vchitect.

[arXiv](https://arxiv.org/abs/2309.15103) | Project Page

## Installation

```bash
conda env create -f environment.yml
conda activate lavie
```
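
As a quick sanity check after activating the environment, you can verify that PyTorch and a CUDA device are visible (a minimal sketch, assuming PyTorch is installed by `environment.yml` and you intend to run on GPU):

```python
# Quick environment check: confirm PyTorch is importable and a CUDA GPU is visible.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```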

## Download Pre-Trained Models

Download the pre-trained LaVie models, Stable Diffusion 1.4, and stable-diffusion-x4-upscaler to `./pretrained_models`. You should see the following directory layout:

```
├── pretrained_models
│   ├── lavie_base.pt
│   ├── lavie_interpolation.pt
│   ├── lavie_vsr.pt
│   ├── stable-diffusion-v1-4
│   │   ├── ...
│   └── stable-diffusion-x4-upscaler
│       ├── ...
```
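
If you prefer fetching the two diffusion backbones programmatically, a sketch like the following could be used; the repo ids are the public Hugging Face repositories, and the LaVie `.pt` checkpoints still need to be downloaded separately as described above:

```python
# Sketch: download the Stable Diffusion backbones into ./pretrained_models
# via huggingface_hub. The LaVie checkpoints are obtained separately.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="CompVis/stable-diffusion-v1-4",
    local_dir="./pretrained_models/stable-diffusion-v1-4",
)
snapshot_download(
    repo_id="stabilityai/stable-diffusion-x4-upscaler",
    local_dir="./pretrained_models/stable-diffusion-x4-upscaler",
)
```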

## Inference

Inference consists of three steps: Base T2V, Video Interpolation, and Video Super-Resolution. We provide several options to generate videos:

- Step1: 320 x 512 resolution, 16 frames
- Step1+Step2: 320 x 512 resolution, 61 frames
- Step1+Step3: 1280 x 2048 resolution, 16 frames
- Step1+Step2+Step3: 1280 x 2048 resolution, 61 frames

Feel free to try different options:)
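
As a rough illustration of how the full Step1+Step2+Step3 path chains together, here is a hypothetical driver script. Directory and config names follow the per-step defaults described below; note that for the full pipeline the VSR step's input would need to point at `./res/interpolation` rather than its default `./res/base`.

```python
# Hypothetical driver chaining all three stages. Each stage is run from its own
# subdirectory with its own configs/sample.yaml, as in the per-step commands below.
import subprocess

stages = [
    ("base", "pipelines/sample.py"),   # Step1: base T2V          -> ./res/base
    ("interpolation", "sample.py"),    # Step2: interpolation     -> ./res/interpolation
    ("vsr", "sample.py"),              # Step3: super-resolution  -> ./res/vsr
]

for workdir, script in stages:
    subprocess.run(
        ["python", script, "--config", "configs/sample.yaml"],
        cwd=workdir,
        check=True,  # abort the pipeline if a stage fails
    )
```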

### Step1. Base T2V

Run the following command to generate videos from the base T2V model.

```bash
cd base
python pipelines/sample.py --config configs/sample.yaml
```

Edit `text_prompt` in `configs/sample.yaml` to change the prompt. Results will be saved under `./res/base`.
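
For scripted runs you could rewrite the prompt before sampling. The snippet below is only a sketch; it assumes `text_prompt` is a top-level list-of-strings key in `configs/sample.yaml`, which you should check against the actual config schema:

```python
# Sketch: overwrite text_prompt in base/configs/sample.yaml before sampling.
# The exact schema of sample.yaml is an assumption here.
import yaml  # PyYAML

cfg_path = "configs/sample.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["text_prompt"] = ["a corgi running on the beach, 4k, high quality"]  # assumed format
with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```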

### Step2 (optional). Video Interpolation

Run the following command to conduct video interpolation.

```bash
cd interpolation
python sample.py --config configs/sample.yaml
```

The default input video path is `./res/base`, and results will be saved under `./res/interpolation`. You can change the default `input_folder` to `YOUR_INPUT_FOLDER` in `configs/sample.yaml`. Input videos should be named `prompt1.mp4`, `prompt2.mp4`, ... and placed under `YOUR_INPUT_FOLDER`. Launching the code will process all input videos in `input_folder`.
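
If you want to interpolate your own clips, a small helper like the following can copy them into the expected `prompt1.mp4`, `prompt2.mp4`, ... naming. This is purely illustrative; `my_clips` and `YOUR_INPUT_FOLDER` are placeholder paths:

```python
# Sketch: copy existing .mp4 clips into the input folder with the expected names.
import shutil
from pathlib import Path

src_dir = Path("./my_clips")           # placeholder: wherever your clips live
dst_dir = Path("./YOUR_INPUT_FOLDER")  # must match input_folder in configs/sample.yaml
dst_dir.mkdir(parents=True, exist_ok=True)

for i, clip in enumerate(sorted(src_dir.glob("*.mp4")), start=1):
    shutil.copy(clip, dst_dir / f"prompt{i}.mp4")
```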

### Step3 (optional). Video Super-Resolution

Run the following command to conduct video super-resolution.

```bash
cd vsr
python sample.py --config configs/sample.yaml
```

The default input video path is `./res/base`, and results will be saved under `./res/vsr`. You can change the default `input_path` to `YOUR_INPUT_FOLDER` in `configs/sample.yaml`. Similar to Step2, input videos should be named `prompt1.mp4`, `prompt2.mp4`, ... and placed under `YOUR_INPUT_FOLDER`. Launching the code will process all input videos in `input_path`.
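
To confirm that the outputs match the resolution/frame-count combinations listed above, a quick check with torchvision (assuming it is available in the environment) might look like this:

```python
# Sketch: print resolution and frame count for every generated video in ./res/vsr.
from pathlib import Path

from torchvision.io import read_video

for video_path in sorted(Path("./res/vsr").glob("*.mp4")):
    frames, _, _ = read_video(str(video_path), pts_unit="sec")  # (T, H, W, C)
    t, h, w, _ = frames.shape
    print(f"{video_path.name}: {w}x{h}, {t} frames")
```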

## BibTeX

```bibtex
@article{wang2023lavie,
  title={LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models},
  author={Wang, Yaohui and Chen, Xinyuan and Ma, Xin and Zhou, Shangchen and Huang, Ziqi and Wang, Yi and Yang, Ceyuan and He, Yinan and Yu, Jiashuo and Yang, Peiqing and others},
  journal={arXiv preprint arXiv:2309.15103},
  year={2023}
}
```

## Acknowledgements

The code is built upon diffusers and Stable Diffusion; we thank all the contributors for open-sourcing.

## License

The code is licensed under Apache-2.0. The model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form.