Full Setup to Clone and Run MuseTalk

Build environment
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:

pip install -r requirements.txt
mmlab packages
pip install --no-cache-dir -U openmim 
mim install mmengine 
mim install "mmcv>=2.0.1" 
mim install "mmdet>=3.1.0" 
mim install "mmpose>=1.1.0" 
Download ffmpeg-static
Download the ffmpeg-static build and set the path to it:

export FFMPEG_PATH=/path/to/ffmpeg
for example:

export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
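To confirm the variable points at the right place (assuming, as in the example above, that FFMPEG_PATH is the directory containing the ffmpeg binary), a quick sanity check is:

# should print the ffmpeg version banner
"$FFMPEG_PATH"/ffmpeg -version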
Download weights
You can download weights manually as follows:

Download our trained weights.

Download the weights of other components:

sd-vae-ft-mse
whisper
dwpose
face-parse-bisent
resnet18
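Part of this can be scripted; the sketch below is illustrative only, and the repo IDs, file names and tools used (huggingface-cli, wget) are our assumptions, so verify them against the official download links:

# create the expected folder layout
mkdir -p models/musetalk models/dwpose models/face-parse-bisent models/sd-vae-ft-mse models/whisper
# assumption: huggingface-cli is available (pip install "huggingface_hub[cli]")
huggingface-cli download stabilityai/sd-vae-ft-mse config.json diffusion_pytorch_model.bin --local-dir models/sd-vae-ft-mse
# resnet18 backbone from the standard torchvision release URL
wget -O models/face-parse-bisent/resnet18-5c106cde.pth https://download.pytorch.org/models/resnet18-5c106cde.pth
# download the musetalk, dwpose, face-parse-bisent and whisper files manually from the links above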
Finally, these weights should be organized in the models directory as follows:

./models/
β”œβ”€β”€ musetalk
β”‚   β”œβ”€β”€ musetalk.json
β”‚   └── pytorch_model.bin
β”œβ”€β”€ dwpose
β”‚   └── dw-ll_ucoco_384.pth
β”œβ”€β”€ face-parse-bisent
β”‚   β”œβ”€β”€ 79999_iter.pth
β”‚   └── resnet18-5c106cde.pth
β”œβ”€β”€ sd-vae-ft-mse
β”‚   β”œβ”€β”€ config.json
β”‚   └── diffusion_pytorch_model.bin
└── whisper
    └── tiny.pt
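A quick convenience check (not part of the official setup) that the layout matches the tree above:

find ./models -type f
test -f ./models/musetalk/pytorch_model.bin || echo "missing: models/musetalk/pytorch_model.bin"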
Quickstart
Inference
Here, we provide the inference script.

python -m scripts.inference --inference_config configs/inference/test.yaml 
configs/inference/test.yaml is the path to the inference configuration file, including video_path and audio_path. The video_path should be either a video file, an image file or a directory of images.
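As an illustration of those two fields, a custom config could be written as follows; the task-keyed layout is an assumption on our part, so compare it with the shipped configs/inference/test.yaml before relying on it:

# hypothetical config; field names follow the description above
cat > configs/inference/my_test.yaml <<'EOF'
task_0:
  video_path: "data/video/my_input.mp4"   # a video file, an image file, or a directory of images
  audio_path: "data/audio/my_audio.wav"
EOF
python -m scripts.inference --inference_config configs/inference/my_test.yaml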

We recommend input video at 25 fps, the same frame rate used when training the model. If your video has a much lower frame rate, apply frame interpolation or convert it directly to 25 fps using ffmpeg (see the example below).
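For example (paths are placeholders; the minterpolate variant is one possible way to interpolate frames rather than simply duplicate them):

# straight resampling to 25 fps, keeping the original audio track
"$FFMPEG_PATH"/ffmpeg -i input.mp4 -vf "fps=25" -c:a copy input_25fps.mp4
# motion-interpolate up to 25 fps for sources with a much lower frame rate
"$FFMPEG_PATH"/ffmpeg -i input.mp4 -vf "minterpolate=fps=25" -c:a copy input_25fps_interp.mp4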

Use bbox_shift to adjust the results
πŸ”Ž We have found that the upper bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the bbox_shift parameter. Positive values (moving the bound towards the lower half of the face) increase mouth openness, while negative values (moving it towards the upper half) decrease it.

You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.

For example, in the case of Xinying Sun, running the default configuration shows that the adjustable value range is [-9, 9]. Then, to decrease the mouth openness, we set the value to -7.

python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7 
πŸ“Œ More technical details can be found in bbox_shift.

Combining MuseV and MuseTalk
As a complete solution to virtual human generation, we suggest first applying MuseV to generate a video (text-to-video, image-to-video or pose-to-video) by referring to this. Frame interpolation is suggested to increase the frame rate. Then, you can use MuseTalk to generate a lip-sync video by referring to this.
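A rough sketch of that pipeline from the shell, assuming MuseV has already produced musev_output.mp4 (all file and config names here are placeholders):

# 1) interpolate the MuseV output up to 25 fps (see the ffmpeg example above)
"$FFMPEG_PATH"/ffmpeg -i musev_output.mp4 -vf "minterpolate=fps=25" musev_output_25fps.mp4
# 2) point a MuseTalk config at the interpolated video plus the target speech audio,
#    then run inference as in the Quickstart section
python -m scripts.inference --inference_config configs/inference/my_musev_task.yaml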

πŸ†• Real-time inference
Here, we provide the real-time inference script. It first applies the necessary pre-processing, such as face detection, face parsing and VAE encoding, in advance. During inference, only the UNet and the VAE decoder are involved, which makes MuseTalk real-time.

python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4
configs/inference/realtime.yaml is the path to the real-time inference configuration file, including preparation, video_path, bbox_shift and audio_clips.
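For orientation, a single-avatar config might be shaped like this; the exact key names and nesting are assumptions on our part, so check the shipped configs/inference/realtime.yaml for the authoritative schema:

cat > configs/inference/my_realtime.yaml <<'EOF'
# hypothetical single-avatar entry; key names follow the description above
avatar_1:
  preparation: True          # run pre-processing for this avatar on the first run
  video_path: "data/video/yongen.mp4"
  bbox_shift: 5              # re-prepare the materials if this value changes
  audio_clips:
    audio_0: "data/audio/yongen.wav"
EOF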

Set preparation to True in realtime.yaml to prepare the materials for a new avatar. (If the bbox_shift has changed, you also need to re-prepare the materials.)
After that, the avatar will use an audio clip selected from audio_clips to generate video, logging output such as:
Inferring using: data/audio/yongen.wav
While MuseTalk is inferring, sub-threads can simultaneously stream the results to the users. The generation process can achieve 30fps+ on an NVIDIA Tesla V100.
Set preparation to False and run this script if you want to generate more videos using the same avatar (see the sketch below).
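A sketch of that prepare-once, reuse-many flow (the sed edit is just one way to flip the flag, assuming it is written exactly as preparation: True; editing the YAML by hand works equally well):

# first run with preparation: True builds the avatar materials, then generates video
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4
# later runs with the same avatar: flip preparation to False and reuse the cached materials
sed -i 's/preparation: True/preparation: False/' configs/inference/realtime.yaml
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4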
Note for Real-time inference
If you want to generate multiple videos using the same avatar/video, you can also use this script to SIGNIFICANTLY expedite the generation process.
In the previous script, the generation time is also limited by I/O (e.g. saving images). If you just want to test the generation speed without saving the images, you can run
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --skip_save_images