Commit c8a2b4e (parent: eafe5e1), committed by zR
Message: test

Files changed:
- LICENSE +71 -0
- README.md +249 -0
- README_zh.md +224 -0

LICENSE
ADDED
@@ -0,0 +1,71 @@
The CogVideoX License

1. Definitions

"Licensor" means the CogVideoX Model Team that distributes its Software.

"Software" means the CogVideoX model parameters made available under this license.

2. License Grant

Under the terms and conditions of this license, the Licensor hereby grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license. The intellectual property rights of the generated content belong to the user to the extent permitted by applicable local laws.

This license allows you to use all open-source models in this repository freely for academic research. Users who wish to use the models for commercial purposes must register and obtain a basic commercial license at https://open.bigmodel.cn/mla/form.

Users who have registered and obtained the basic commercial license may use the models for commercial activities free of charge, but must comply with all terms and conditions of this license. Additionally, the number of service users (visits) of your commercial activities must not exceed 1 million visits per month.

If the number of service users (visits) of your commercial activities exceeds 1 million visits per month, you must contact our business team to obtain additional commercial licenses.

The above copyright notice and this license notice shall be included in all copies or substantial portions of the Software.

3. Restrictions

You will not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of the Software, in whole or in part, for any military or illegal purposes.

You will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings.

4. Disclaimer

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

5. Limitation of Liability

EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER BASED IN TORT, NEGLIGENCE, CONTRACT, LIABILITY, OR OTHERWISE, WILL ANY LICENSOR BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES, OR ANY OTHER COMMERCIAL LOSSES, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

6. Dispute Resolution

This license shall be governed by and construed in accordance with the laws of the People's Republic of China. Any dispute arising from or in connection with this license shall be submitted to the Haidian District People's Court in Beijing.

Note that this license may be updated to a more comprehensive version. For any questions related to the license and copyright, please contact us at [email protected].

1. Definitions

"Licensor" means the CogVideoX Model Team that distributes its Software.

"Software" means the CogVideoX model parameters made available under this license.

2. License Grant

Under the terms and conditions of this license, the Licensor hereby grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license. To the extent permitted by applicable local laws, the intellectual property rights in the generated content belong to the user.

This license allows you to use all open-source models in this repository freely for academic research. Users who wish to use the models for commercial purposes must register at https://open.bigmodel.cn/mla/form and obtain a basic commercial license.

Users who have registered and obtained the basic commercial license may use the models for commercial activities free of charge, but must comply with all terms and conditions of this license.

Under this license, the number of service users (visits) of your commercial activities must not exceed 1 million visits per month. If it does, you must contact our business team to obtain additional commercial licenses.

The above copyright notice and this license notice shall be included in all copies or substantial portions of the Software.

3. Restrictions

You will not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of the Software, in whole or in part, for any military or illegal purposes.

You will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon personal rights and interests.

4. Disclaimer

The Software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement.

In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the Software or the use of or other dealings in the Software.

5. Limitation of Liability

Except to the extent prohibited by applicable law, in no event and under no legal theory, whether based on tort, negligence, contract, liability, or otherwise, will any Licensor be liable to you for any direct, indirect, special, incidental, exemplary, or consequential damages, or any other commercial losses, even if the Licensor has been advised of the possibility of such damages.

6. Dispute Resolution

This license shall be governed by and construed in accordance with the laws of the People's Republic of China. Any dispute arising from or in connection with this license shall be submitted to the Haidian District People's Court in Beijing.

Note that this license may be updated to a more comprehensive version. For any questions related to the license and copyright, please contact us at [email protected].

README.md
ADDED
@@ -0,0 +1,249 @@
---
license: other
license_link: https://huggingface.co/THUDM/CogVideoX-5b-I2V/blob/main/LICENSE
language:
- en
tags:
- video-generation
- thudm
- image-to-video
inference: false
---

# CogVideoX1.5-5B-I2V

<p style="text-align: center;">
<div align="center">
  <img src="https://github.com/THUDM/CogVideo/raw/main/resources/logo.svg" width="50%"/>
</div>
<p align="center">
  <a href="https://huggingface.co/THUDM/CogVideoX-5b-I2V/blob/main/README.md">📄 Read in English</a> |
  <a href="https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space">🤗 Huggingface Space</a> |
  <a href="https://github.com/THUDM/CogVideo">🌐 Github</a> |
  <a href="https://arxiv.org/pdf/2408.06072">📜 arxiv</a>
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?fr=osm_cogvideox">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience the commercial video generation model.
</p>

## Model Introduction

CogVideoX is an open-source video generation model with the same origins as [QingYing](https://chatglm.cn/video?fr=osm_cogvideo). The table below lists the video generation models available in this generation:

<table style="border-collapse: collapse; width: 100%;">
  <tr>
    <th style="text-align: center;">Model Name</th>
    <th style="text-align: center;">CogVideoX1.5-5B</th>
    <th style="text-align: center;">CogVideoX1.5-5B-I2V (Current Repository)</th>
  </tr>
  <tr>
    <td style="text-align: center;">Video Resolution</td>
    <td colspan="1" style="text-align: center;">1360 * 768</td>
    <td colspan="1" style="text-align: center;">256 <= W <= 1360<br>256 <= H <= 768<br>W, H % 16 == 0</td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Precision</td>
    <td colspan="2" style="text-align: center;"><b>BF16 (recommended)</b>, FP16, FP32, FP8*, INT8; INT4 not supported</td>
  </tr>
  <tr>
    <td style="text-align: center;">Single GPU Inference VRAM Consumption</td>
    <td style="text-align: center;"><b>BF16: 5GB minimum*</b><br><b>diffusers INT8 (torchao): 4.4GB minimum*</b></td>
    <td style="text-align: center;"><b>BF16: 5GB minimum*</b><br><b>diffusers INT8 (torchao): 4.4GB minimum*</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Multi-GPU Inference VRAM Consumption</td>
    <td colspan="2" style="text-align: center;"><b>BF16: 15GB* using diffusers</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Speed<br>(Step = 50, FP/BF16)</td>
    <td colspan="2" style="text-align: center;">Single A100: ~1000 seconds (5-second video)<br>Single H100: ~550 seconds (5-second video)</td>
  </tr>
  <tr>
    <td style="text-align: center;">Prompt Language</td>
    <td colspan="2" style="text-align: center;">English*</td>
  </tr>
  <tr>
    <td style="text-align: center;">Max Prompt Length</td>
    <td colspan="2" style="text-align: center;">224 Tokens</td>
  </tr>
  <tr>
    <td style="text-align: center;">Video Length</td>
    <td colspan="2" style="text-align: center;">5 or 10 seconds</td>
  </tr>
  <tr>
    <td style="text-align: center;">Frame Rate</td>
    <td colspan="2" style="text-align: center;">16 frames/second</td>
  </tr>
</table>
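
The I2V resolution constraint in the table can be checked programmatically. As a small illustrative sketch (the `snap_resolution` helper is hypothetical, not part of this repository or of `diffusers`), a target size can be clamped to the supported range and rounded down to a multiple of 16:

```python
# Hypothetical helper: clamp a requested size to the supported I2V range
# (256 <= W <= 1360, 256 <= H <= 768) and round down to a multiple of 16.
def snap_resolution(width: int, height: int) -> tuple[int, int]:
    def snap(v: int, lo: int, hi: int) -> int:
        v = max(lo, min(hi, v))
        return v - (v % 16)  # lo and hi are multiples of 16, so the result stays in range
    return snap(width, 256, 1360), snap(height, 256, 768)

print(snap_resolution(1920, 1080))  # (1360, 768)
```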

**Data Explanation**

+ Testing with the `diffusers` library enabled all optimizations included in the library. This scheme has not been tested on devices other than NVIDIA A100 / H100, but it should generally work on all devices with NVIDIA Ampere architecture or newer. Disabling the optimizations roughly triples peak VRAM usage but increases speed 3-4x. You can selectively disable certain optimizations, including:

```
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```

+ For multi-GPU inference, the `enable_sequential_cpu_offload()` optimization needs to be disabled.
+ Using an INT8 model lowers inference speed. This lets GPUs with less VRAM run inference with minimal loss in video quality, at the cost of a significant drop in speed.
+ [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, transformer, and VAE modules, reducing CogVideoX's memory requirements and making it feasible to run the model on GPUs with smaller VRAM. TorchAO quantization is fully compatible with `torch.compile`, which can significantly improve inference speed. `FP8` precision must be used on NVIDIA H100 and above, which requires installing `torch`, `torchao`, `diffusers`, and `accelerate` from source. `CUDA 12.4` is recommended.
+ Inference speed testing also used the above VRAM optimizations; without them, speed increases by about 10%. Only the `diffusers` versions of the models support quantization.
+ The models support English input only; other languages should be translated into English with a larger model during prompt crafting, as sketched below.
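
A minimal sketch of that translation step, assuming a `transformers` translation pipeline (the model name `Helsinki-NLP/opus-mt-zh-en` is an illustrative choice, not something this repository ships; any capable translation or chat model can fill this role):

```python
# Illustrative pre-processing: translate a non-English prompt to English
# before passing it to the CogVideoX pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
prompt_zh = "一个小女孩骑着自行车飞快地前进。"
prompt = translator(prompt_zh)[0]["translation_text"]  # English prompt for pipe(...)
```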

**Note**

+ Use [SAT](https://github.com/THUDM/SwissArmyTransformer) for inference and fine-tuning of SAT-version models. Check our GitHub for more details.

## Getting Started Quickly 🤗

This model supports deployment using the Hugging Face diffusers library. You can follow the steps below to get started.

**We recommend that you visit our [GitHub](https://github.com/THUDM/CogVideo) to check out prompt optimization and conversion for a better experience.**

1. Install the required dependencies

```shell
# diffusers>=0.32.0
# transformers>=4.46.2
# accelerate>=1.0.1
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```

2. Run the code

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",
    torch_dtype=torch.bfloat16
)

# VRAM optimizations (see "Data Explanation" above); disable selectively if speed matters more.
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=81,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

# The model generates 16 frames per second (see the table above), so 81 frames is about 5 seconds.
export_to_video(video, "output.mp4", fps=16)
```

## Quantized Inference

[PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab or on GPUs with less VRAM! Also note that TorchAO quantization is fully compatible with `torch.compile`, which can significantly accelerate inference.

```python
# To get started, PytorchAO needs to be installed from the GitHub source along with PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only

quantization = int8_weight_only

# Quantize each module in place before assembling the pipeline.
text_encoder = T5EncoderModel.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="text_encoder", torch_dtype=torch.bfloat16
)
quantize_(text_encoder, quantization())

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize_(transformer, quantization())

vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="vae", torch_dtype=torch.bfloat16
)
quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)

pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=81,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=16)
```

Additionally, these models can be serialized and stored with PytorchAO in quantized data types to save disk space. You can find examples and benchmarks at the following links:

- [torchao](https://gist.github.com/a-r-r-o-w/4d9732d17412888c885480c6521a9897)
- [quanto](https://gist.github.com/a-r-r-o-w/31be62828b00a9292821b85c1017effa)
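
As a rough sketch of that serialization workflow (the save/load pattern below is an assumption based on standard `torch`/`torchao` usage; the linked gists are the tested references), the int8-quantized transformer from the example above could be persisted and reloaded without re-quantizing:

```python
# Assumed pattern: save the quantized state dict, then reload it into a freshly
# constructed bf16 module with assign=True so the quantized tensors are kept as-is.
import torch
from diffusers import CogVideoXTransformer3DModel

torch.save(transformer.state_dict(), "transformer-int8.pt")  # quantized weights

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="transformer", torch_dtype=torch.bfloat16
)
state_dict = torch.load("transformer-int8.pt", weights_only=False)
transformer.load_state_dict(state_dict, assign=True)
```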

## Further Exploration

Feel free to visit our [GitHub](https://github.com/THUDM/CogVideo), where you'll find:

1. More detailed technical explanations and code.
2. Optimized prompt examples and conversions.
3. Detailed code for model inference and fine-tuning.
4. Project update logs and more interactive opportunities.
5. The CogVideoX toolchain to help you make better use of the model.
6. INT8 model inference code.

## Model License

This model is released under the [CogVideoX LICENSE](LICENSE).

## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```

README_zh.md
ADDED
@@ -0,0 +1,224 @@
# CogVideoX1.5-5B-I2V

<p style="text-align: center;">
<div align="center">
  <img src="https://github.com/THUDM/CogVideo/raw/main/resources/logo.svg" width="50%"/>
</div>
<p align="center">
  <a href="https://huggingface.co/THUDM/CogVideoX-5b-I2V/blob/main/README.md">📄 Read in English</a> |
  <a href="https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space">🤗 Huggingface Space</a> |
  <a href="https://github.com/THUDM/CogVideo">🌐 Github</a> |
  <a href="https://arxiv.org/pdf/2408.06072">📜 arxiv</a>
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?fr=osm_cogvideox">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience the commercial video generation model.
</p>

## Model Introduction

CogVideoX is the open-source version of the video generation model that shares its origins with [QingYing](https://chatglm.cn/video?fr=osm_cogvideo). The table below lists the video generation models available in this generation:

<table style="border-collapse: collapse; width: 100%;">
  <tr>
    <th style="text-align: center;">Model Name</th>
    <th style="text-align: center;">CogVideoX1.5-5B</th>
    <th style="text-align: center;">CogVideoX1.5-5B-I2V (Current Repository)</th>
  </tr>
  <tr>
    <td style="text-align: center;">Video Resolution</td>
    <td colspan="1" style="text-align: center;">1360 * 768</td>
    <td colspan="1" style="text-align: center;">256 <= W <= 1360<br>256 <= H <= 768<br>W, H % 16 == 0</td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Precision</td>
    <td colspan="2" style="text-align: center;"><b>BF16 (recommended)</b>, FP16, FP32, FP8*, INT8; INT4 not supported</td>
  </tr>
  <tr>
    <td style="text-align: center;">Single GPU Inference VRAM Consumption</td>
    <td style="text-align: center;"><b>BF16: 5GB minimum*</b><br><b>diffusers INT8 (torchao): 4.4GB minimum*</b></td>
    <td style="text-align: center;"><b>BF16: 5GB minimum*</b><br><b>diffusers INT8 (torchao): 4.4GB minimum*</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Multi-GPU Inference VRAM Consumption</td>
    <td colspan="2" style="text-align: center;"><b>BF16: 15GB* using diffusers</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Speed<br>(Step = 50, FP/BF16)</td>
    <td colspan="2" style="text-align: center;">Single A100: ~1000 seconds (5-second video)<br>Single H100: ~550 seconds (5-second video)</td>
  </tr>
  <tr>
    <td style="text-align: center;">Prompt Language</td>
    <td colspan="2" style="text-align: center;">English*</td>
  </tr>
  <tr>
    <td style="text-align: center;">Max Prompt Length</td>
    <td colspan="2" style="text-align: center;">224 Tokens</td>
  </tr>
  <tr>
    <td style="text-align: center;">Video Length</td>
    <td colspan="2" style="text-align: center;">5 or 10 seconds</td>
  </tr>
  <tr>
    <td style="text-align: center;">Frame Rate</td>
    <td colspan="2" style="text-align: center;">16 frames/second</td>
  </tr>
</table>

**Data Explanation**

+ Testing with the `diffusers` library enabled all of the library's built-in optimizations. This scheme has not been tested for actual VRAM/memory usage on devices other than **NVIDIA A100 / H100**. In general, it works on all devices with the **NVIDIA Ampere architecture** or newer. If the optimizations are disabled, VRAM usage multiplies, with peak VRAM about 3x the value in the table, but speed improves by roughly 3-4x. You can selectively disable some of these optimizations:

```
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```

+ For multi-GPU inference, the `enable_sequential_cpu_offload()` optimization needs to be disabled.
+ Using an INT8 model lowers inference speed. This is done so that GPUs with less VRAM can still run inference with little loss in video quality, at the cost of a large drop in speed.
+ [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer, and VAE modules to lower CogVideoX's memory requirements. This makes it possible to run the model on GPUs with smaller VRAM! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which can significantly improve inference speed. `FP8` precision must be used on `NVIDIA H100` and above, which requires installing the `torch`, `torchao`, `diffusers`, and `accelerate` Python packages from source. `CUDA 12.4` is recommended.
+ Inference speed tests also used the above VRAM optimizations; without them, speed increases by about 10%. Only the `diffusers` versions of the models support quantization.
+ The models only support English input; other languages can be translated into English while polishing prompts with a large model.

**Note**

+ Use [SAT](https://github.com/THUDM/SwissArmyTransformer) for inference and fine-tuning of SAT-version models. Feel free to visit our GitHub for more details.

## Getting Started Quickly 🤗

This model supports deployment using the Hugging Face diffusers library. You can follow the steps below to get started.

**We recommend that you visit our [GitHub](https://github.com/THUDM/CogVideo) and check out the prompt optimization and conversion guides for a better experience.**

1. Install the required dependencies

```shell
# diffusers>=0.32.0
# transformers>=4.46.2
# accelerate>=1.0.1
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```

2. Run the code

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",
    torch_dtype=torch.bfloat16
)

pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=81,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

# 16 fps matches the frame rate in the table above; 81 frames is about 5 seconds.
export_to_video(video, "output.mp4", fps=16)
```

## Quantized Inference

[PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer, and VAE modules to lower CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab or on GPUs with smaller VRAM! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which can significantly speed up inference.

```python
# To get started, PytorchAO needs to be installed from the GitHub source along with PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only

quantization = int8_weight_only

text_encoder = T5EncoderModel.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="text_encoder", torch_dtype=torch.bfloat16
)
quantize_(text_encoder, quantization())

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize_(transformer, quantization())

vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", subfolder="vae", torch_dtype=torch.bfloat16
)
quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)

pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

prompt = "A little girl is riding a bicycle at high speed. Focused, detailed, realistic."
image = load_image(image="input.jpg")
video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=81,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=16)
```

In addition, these models can be serialized and stored with PytorchAO in quantized data types to save disk space. You can find examples and benchmarks at the following links:

- [torchao](https://gist.github.com/a-r-r-o-w/4d9732d17412888c885480c6521a9897)
- [quanto](https://gist.github.com/a-r-r-o-w/31be62828b00a9292821b85c1017effa)

## Further Exploration

Feel free to visit our [GitHub](https://github.com/THUDM/CogVideo), where you'll find:

1. More detailed introductions to the technical details and code explanations.
2. Prompt optimization and conversion.
3. Detailed code for model inference and fine-tuning.
4. Project update logs and more interactive opportunities.
5. The CogVideoX toolchain to help you make better use of the model.
6. INT8 model inference code.

## Model License

This model is released under the [CogVideoX LICENSE](LICENSE).

## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```