cleardusk committed
Commit a531730
Parent: 58710f2

doc: update arxiv

Files changed (1)
  1. readme.md +7 -138
readme.md CHANGED
@@ -1,143 +1,12 @@
- <h1 align="center">LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control</h1>

- <div align='center'>
- <a href='https://github.com/cleardusk' target='_blank'><strong>Jianzhu Guo</strong></a><sup> 1†</sup>&emsp;
- <a href='https://github.com/KwaiVGI' target='_blank'><strong>Dingyun Zhang</strong></a><sup> 1,2</sup>&emsp;
- <a href='https://github.com/KwaiVGI' target='_blank'><strong>Xiaoqiang Liu</strong></a><sup> 1</sup>&emsp;
- <a href='https://github.com/KwaiVGI' target='_blank'><strong>Zhizhou Zhong</strong></a><sup> 1,3</sup>&emsp;
- <a href='https://scholar.google.com.hk/citations?user=_8k1ubAAAAAJ' target='_blank'><strong>Yuan Zhang</strong></a><sup> 1</sup>&emsp;
- </div>
-
- <div align='center'>
- <a href='https://scholar.google.com/citations?user=P6MraaYAAAAJ' target='_blank'><strong>Pengfei Wan</strong></a><sup> 1</sup>&emsp;
- <a href='https://openreview.net/profile?id=~Di_ZHANG3' target='_blank'><strong>Di Zhang</strong></a><sup> 1</sup>&emsp;
- </div>
-
- <div align='center'>
- <sup>1 </sup>Kuaishou Technology&emsp; <sup>2 </sup>University of Science and Technology of China&emsp; <sup>3 </sup>Fudan University&emsp;
- </div>
-
- <br>
- <div align="center">
- <!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
- <a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-Homepage-green'></a>
- <a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a>
- </div>
- <br>
-
- <p align="center">
- <img src="./assets/docs/showcase2.gif" alt="showcase">
- <br>
- 🔥 For more results, visit our <a href="https://liveportrait.github.io/"><strong>homepage</strong></a> 🔥
- </p>
-
-
-
- ## 🔥 Updates
- **`2024/07/04`**: 🔥 We released the initial version of the inference code and models. Continuous updates, stay tuned!
- **`2024/07/04`**: 😊 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
-
- ## Introduction
- This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
- We are actively updating and improving this repository. If you find any bugs or have suggestions, welcome to raise issues or submit pull requests (PR) 💖.
-
- ## 🔥 Getting Started
- ### 1. Clone the code and prepare the environment
- ```bash
- git clone https://github.com/KwaiVGI/LivePortrait
- cd LivePortrait
-
- # create env using conda
- conda create -n LivePortrait python==3.9.18
- conda activate LivePortrait
- # install dependencies with pip
- pip install -r requirements.txt
- ```
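As a quick sanity check (ours, not part of the repo), you can confirm from inside the activated environment that the interpreter matches the `python==3.9.18` pin in the conda command above:

```python
import sys

# The conda command in the README pins python==3.9.18; this check is
# illustrative and only compares the major.minor version.
pinned = (3, 9)
actual = sys.version_info[:2]
if actual == pinned:
    print(f"OK: Python {sys.version.split()[0]} matches the pinned 3.9.x")
else:
    print(f"Warning: Python {sys.version.split()[0]} differs from the pinned 3.9.x")
```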
-
- ### 2. Download pretrained weights
- Download our pretrained LivePortrait weights and face detection models of InsightFace from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). We have packed all weights in one directory 😊. Unzip and place them in `./pretrained_weights` ensuring the directory structure is as follows:
- ```text
- pretrained_weights
- ├── insightface
- │   └── models
- │       └── buffalo_l
- │           ├── 2d106det.onnx
- │           └── det_10g.onnx
- └── liveportrait
-     ├── base_models
-     │   ├── appearance_feature_extractor.pth
-     │   ├── motion_extractor.pth
-     │   ├── spade_generator.pth
-     │   └── warping_module.pth
-     ├── landmark.onnx
-     └── retargeting_models
-         └── stitching_retargeting_module.pth
- ```
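The tree above is easy to get wrong when unpacking by hand. A minimal sketch (ours, not part of the repo) that lists which of the expected weight files are missing under `./pretrained_weights`:

```python
from pathlib import Path

# Expected layout taken from the directory tree in the README above.
EXPECTED = [
    "insightface/models/buffalo_l/2d106det.onnx",
    "insightface/models/buffalo_l/det_10g.onnx",
    "liveportrait/base_models/appearance_feature_extractor.pth",
    "liveportrait/base_models/motion_extractor.pth",
    "liveportrait/base_models/spade_generator.pth",
    "liveportrait/base_models/warping_module.pth",
    "liveportrait/landmark.onnx",
    "liveportrait/retargeting_models/stitching_retargeting_module.pth",
]

def missing_weights(root="pretrained_weights"):
    """Return the expected weight files not present under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).exists()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing files:", *missing, sep="\n  ")
    else:
        print("All pretrained weights found.")
```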
-
- ### 3. Inference 🚀
-
- ```bash
- python inference.py
- ```
-
- If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image, and generated result.
-
- <p align="center">
- <img src="./assets/docs/inference.gif" alt="image">
- </p>
-
- Or, you can change the input by specifying the `-s` and `-d` arguments:
-
- ```bash
- python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
-
- # or disable pasting back
- python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
-
- # more options to see
- python inference.py -h
- ```
-
- **More interesting results can be found in our [Homepage](https://liveportrait.github.io)** 😊
-
- ### 4. Gradio interface
-
- We also provide a Gradio interface for a better experience, just run by:
-
- ```bash
- python app.py
- ```
-
- ### 5. Inference speed evaluation 🚀🚀🚀
- We have also provided a script to evaluate the inference speed of each module:
-
- ```bash
- python speed.py
- ```
-
- Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:
-
- | Model | Parameters(M) | Model Size(MB) | Inference(ms) |
- |-----------------------------------|:-------------:|:--------------:|:-------------:|
- | Appearance Feature Extractor | 0.84 | 3.3 | 0.82 |
- | Motion Extractor | 28.12 | 108 | 0.84 |
- | Spade Generator | 55.37 | 212 | 7.59 |
- | Warping Module | 45.53 | 174 | 5.21 |
- | Stitching and Retargeting Modules| 0.23 | 2.3 | 0.31 |
-
- *Note: the listed values of Stitching and Retargeting Modules represent the combined parameter counts and the total sequential inference time of three MLP networks.*
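The per-module numbers above come from the repo's `speed.py`. As a hedged illustration of the measurement pattern only (warm up first, then take the median over many runs; `benchmark` and the toy workload are ours, and real GPU timing would additionally need `torch.cuda.synchronize()` around each call), a stdlib sketch:

```python
import statistics
import time

def benchmark(fn, *args, warmup=10, iters=100):
    """Median wall-clock time per call of fn(*args), in milliseconds."""
    for _ in range(warmup):
        fn(*args)  # warm caches / lazy initialization before measuring
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Toy workload standing in for one module's forward pass.
ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{ms:.3f} ms per call")
```

The median is used rather than the mean so that one-off stalls (scheduling, first-touch allocation) do not skew the reported per-frame time.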
-
-
- ## Acknowledgements
- We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories, for their open research and contributions.
-
- ## Citation 💖
- If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
  ```bibtex
- @article{guo2024live,
  title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
- author = {Jianzhu Guo and Dingyun Zhang and Xiaoqiang Liu and Zhizhou Zhong and Yuan Zhang and Pengfei Wan and Di Zhang},
- year = {2024},
- journal = {arXiv preprint:2407.03168},
  }
  ```
 
 
+ This is the official Space of the paper: [**LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control**](https://arxiv.org/abs/2407.03168)

+ If you find LivePortrait useful for your research, welcome to cite our work using the following BibTeX:
  ```bibtex
+ @article{guo2024liveportrait,
  title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
+ author = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
+ journal = {arXiv preprint arXiv:2407.03168},
+ year = {2024}
  }
  ```
+