bubbliiiing committed on
Commit 9e7a511
•
1 Parent(s): b2809a3

Update Readme

Files changed (3)
  1. LICENSE +201 -0
  2. README.md +438 -3
  3. README_en.md +409 -0
LICENSE ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md CHANGED
@@ -1,3 +1,438 @@
- ---
- license: apache-2.0
- ---
---
frameworks:
- Pytorch
license: other
tasks:
- text-to-video-synthesis

#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt

#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp

#language:
## language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn

#metrics:
## e.g. CIDEr, Blue, ROUGE, etc.
#- CIDEr

#tags:
## custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, etc.
#- pretrained

#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
---

# EasyAnimate | An End-to-End Solution for High-Resolution and Long Video Generation
😊 EasyAnimate is an end-to-end solution for generating high-resolution, long videos. It covers training transformer-based diffusion generators, training VAEs for processing long videos, and preprocessing metadata.

😊 Based on DiT, we use a transformer as the diffuser to generate videos and images.

😊 Welcome!

[![Arxiv Page](https://img.shields.io/badge/Arxiv-Page-red)](https://arxiv.org/abs/2405.18991)
[![Project Page](https://img.shields.io/badge/Project-Website-green)](https://easyanimate.github.io/)
[![Modelscope Studio](https://img.shields.io/badge/Modelscope-Studio-blue)](https://modelscope.cn/studios/PAI/EasyAnimate/summary)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/alibaba-pai/EasyAnimate)
[![Discord Page](https://img.shields.io/badge/Discord-Page-blue)](https://discord.gg/UzkpB4Bn)

[English](./README.md) | 简体中文

# Table of Contents
- [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Quick Start](#quick-start)
- [Video Results](#video-results)
- [How to Use](#how-to-use)
- [Model Zoo](#model-zoo)
- [Future Plan](#future-plan)
- [Contact Us](#contact-us)
- [References](#references)
- [License](#license)

# Introduction
EasyAnimate is a transformer-based pipeline for generating AI images and videos and for training baseline and LoRA models for Diffusion Transformers. It supports prediction directly from pretrained EasyAnimate models, generating videos at various resolutions, about 6 seconds long at 8 fps (EasyAnimateV5, 1 to 49 frames). Users can also train their own baseline and LoRA models for specific style transformations.

We will gradually support quick starts from different platforms; see [Quick Start](#quick-start).

What's new:
- Updated to v5: supports up to 1024x1024, 49 frames, 6 s, 8 fps video generation; scales the model to 12B parameters with the MMDiT structure; adds control models with different control inputs; supports bilingual Chinese and English prediction. [ 2024.11.04 ]
- Updated to v4: supports up to 1024x1024, 144 frames, 6 s, 24 fps video generation; supports text-, image-, and video-conditioned video generation; a single model supports arbitrary resolutions from 512 to 1280; supports bilingual Chinese and English prediction. [ 2024.08.15 ]
- Updated to v3: supports up to 960x960, 144 frames, 6 s, 24 fps video generation; supports text- and image-conditioned video generation. [ 2024.07.01 ]
- The ModelScope-Sora "Data Director" creative challenge — the third Data-Juicer large-model data challenge — has officially launched! It uses EasyAnimate as the base model to explore the effect of data processing on model training. Visit the [competition website](https://tianchi.aliyun.com/competition/entrance/532219) for details. [ 2024.06.17 ]
- Updated to v2: supports up to 768x768, 144 frames, 6 s, 24 fps video generation. [ 2024.05.26 ]
- Code released! Now supports Windows and Linux. [ 2024.04.12 ]

Function overview:
- [Data Preprocessing](#data-preprocess)
- [Train VAE](#vae-train)
- [Train DiT](#dit-train)
- [Video Generation](#video-gen)

Our UI looks like this:
![ui](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/ui_v3.jpg)

# Quick Start
### 1. Cloud usage: AliyunDSW/Docker
#### a. Via Aliyun DSW
DSW offers free GPU hours; a user can apply once, and the quota is valid for 3 months after application.

Aliyun provides free GPU hours through [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1); claim them and use them in Aliyun PAI-DSW to start EasyAnimate within 5 minutes.

[![DSW Notebook](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/dsw.png)](https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate)

#### b. Via ComfyUI
Our ComfyUI interface is shown below; see the [ComfyUI README](comfyui/README.md) for details.
![workflow graph](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v3/comfyui_i2v.jpg)

#### c. Via Docker
When using Docker, make sure the GPU driver and the CUDA environment are correctly installed on the machine, then run the following commands:
```
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git

# enter EasyAnimate's dir
cd EasyAnimate

# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model

# Please use the huggingface link or modelscope link to download the EasyAnimateV5 model.
# I2V models
# https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-InP
# https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-InP
# T2V models
# https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh
# https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh
```

### 2. Local install: environment check/download/installation
#### a. Environment check
We have verified that EasyAnimate runs in the following environments:

Windows details:
- OS: Windows 10
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-3060 12G

Linux details:
- OS: Ubuntu 20.04, CentOS
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G

About 60 GB of free disk space is required; please check!

#### b. Weight placement
It is best to place the [weights](#model-zoo) along the following paths:

EasyAnimateV5:
```
📦 models/
├── 📂 Diffusion_Transformer/
│   ├── 📂 EasyAnimateV5-12b-zh-InP/
│   └── 📂 EasyAnimateV5-12b-zh/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained lora model (for UI load)
```

# Video Results
All results shown were obtained with image-to-video generation.

### EasyAnimateV5-12b-zh-InP

Resolution-1024

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
  <tr>
      <td>
          <video src="https://github.com/user-attachments/assets/bb393b7c-ba33-494c-ab06-b314adea9fc1" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/cb0d0253-919d-4dd6-9dc1-5cd94443c7f1" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/09ed361f-c0c5-4025-aad7-71fe1a1a52b1" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/9f42848d-34eb-473f-97ea-a5ebd0268106" width="100%" controls autoplay loop></video>
      </td>
  </tr>
</table>


Resolution-768

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
  <tr>
      <td>
          <video src="https://github.com/user-attachments/assets/903fda91-a0bd-48ee-bf64-fff4e4d96f17" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/407c6628-9688-44b6-b12d-77de10fbbe95" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/ccf30ec1-91d2-4d82-9ce0-fcc585fc2f21" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/5dfe0f92-7d0d-43e0-b7df-0ff7b325663c" width="100%" controls autoplay loop></video>
      </td>
  </tr>
</table>

Resolution-512

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
  <tr>
      <td>
          <video src="https://github.com/user-attachments/assets/2b542b85-be19-4537-9607-9d28ea7e932e" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/c1662745-752d-4ad2-92bc-fe53734347b2" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/8bec3d66-50a3-4af5-a381-be2c865825a0" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/bcec22f4-732c-446f-958c-2ebbfd8f94be" width="100%" controls autoplay loop></video>
      </td>
  </tr>
</table>

### EasyAnimateV5-12b-zh-Control

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
  <tr>
      <td>
          <video src="https://github.com/user-attachments/assets/53002ce2-dd18-4d4f-8135-b6f68364cabd" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/fce43c0b-81fa-4ab2-9ca7-78d786f520e6" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/b208b92c-5add-4ece-a200-3dbbe47b93c3" width="100%" controls autoplay loop></video>
      </td>
  </tr>
  <tr>
      <td>
          <video src="https://github.com/user-attachments/assets/3aec95d5-d240-49fb-a9e9-914446c7a4cf" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/60fa063b-5c1f-485f-b663-09bd6669de3f" width="100%" controls autoplay loop></video>
      </td>
      <td>
          <video src="https://github.com/user-attachments/assets/4adde728-8397-42f3-8a2a-23f7b39e9a1e" width="100%" controls autoplay loop></video>
      </td>
  </tr>
</table>

# How to Use

<h3 id="video-gen">1. Generation</h3>

#### a. Run the Python file
- Step 1: Download the corresponding [weights](#model-zoo) and put them in the models folder.
- Step 2: Modify prompt, neg_prompt, guidance_scale, and seed in predict_t2v.py.
- Step 3: Run predict_t2v.py and wait for the result; it is saved in the samples/easyanimate-videos folder.
- Step 4: If you want to combine other backbones and LoRAs you trained yourself, modify the model path and lora_path in predict_t2v.py as needed.

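The settings edited in step 2 can be sketched as a small config helper. This is a hypothetical illustration, not code from predict_t2v.py: the variable names prompt, neg_prompt, guidance_scale, and seed come from the steps above, while the default values and the video_length/fps bounds are assumptions based on the EasyAnimateV5 description (1 to 49 frames at 8 fps).

```python
# Hypothetical sketch of the generation knobs edited in predict_t2v.py.
# Names mirror the steps above; defaults and bounds are assumptions.

def build_generation_config(
    prompt: str,
    neg_prompt: str = "blurry, low quality",
    guidance_scale: float = 6.0,
    seed: int = 43,
    video_length: int = 49,  # EasyAnimateV5 supports 1 to 49 frames
    fps: int = 8,            # trained at 8 fps (~6 s at 49 frames)
) -> dict:
    """Collect the generation settings into one dictionary."""
    if not (1 <= video_length <= 49):
        raise ValueError("EasyAnimateV5 supports 1 to 49 frames")
    return {
        "prompt": prompt,
        "neg_prompt": neg_prompt,
        "guidance_scale": guidance_scale,
        "seed": seed,
        "video_length": video_length,
        "fps": fps,
    }

config = build_generation_config("A dog running on the grass")
print(config["video_length"], config["fps"])  # 49 8
```

Keeping these knobs in one place makes it easier to sweep seeds or guidance scales across runs.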
#### b. Via the UI
- Step 1: Download the corresponding [weights](#model-zoo) and put them in the models folder.
- Step 2: Run app.py to open the Gradio page.
- Step 3: Choose the generation model on the page, fill in prompt, neg_prompt, guidance_scale, seed, etc., click Generate, and wait for the result; it is saved in the sample folder.

#### c. Via ComfyUI
See the [ComfyUI README](comfyui/README.md) for details.

#### d. GPU memory-saving options
Because EasyAnimateV5 has a very large number of parameters, we need to consider memory-saving options so that the model fits on consumer GPUs. Each prediction file provides a GPU_memory_mode that can be set to model_cpu_offload, model_cpu_offload_and_qfloat8, or sequential_cpu_offload.

- model_cpu_offload: the whole model is moved to the CPU after use, saving some GPU memory.
- model_cpu_offload_and_qfloat8: the whole model is moved to the CPU after use, and the transformer is quantized to float8, saving more GPU memory.
- sequential_cpu_offload: each layer of the model is moved to the CPU after use; it is slower, but saves a large amount of GPU memory.

qfloat8 reduces model quality but saves more GPU memory. If GPU memory is sufficient, model_cpu_offload is recommended.

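As a rough illustration of the trade-off above, a helper could choose a GPU_memory_mode from the available VRAM. The three mode strings are the ones documented above; the GB thresholds are purely illustrative assumptions, not measured requirements for the 12B model.

```python
# Illustrative sketch: pick one of the three documented GPU_memory_mode
# values based on free VRAM. The GB thresholds below are assumptions,
# not measured numbers for EasyAnimateV5.

MODES = (
    "model_cpu_offload",
    "model_cpu_offload_and_qfloat8",
    "sequential_cpu_offload",
)

def choose_gpu_memory_mode(free_vram_gb: float) -> str:
    """Trade speed/quality for memory as VRAM shrinks."""
    if free_vram_gb >= 40:
        # Plenty of memory: best quality and speed.
        return "model_cpu_offload"
    if free_vram_gb >= 24:
        # The float8-quantized transformer may fit.
        return "model_cpu_offload_and_qfloat8"
    # Slowest, but smallest GPU-memory footprint.
    return "sequential_cpu_offload"

print(choose_gpu_memory_mode(48))  # model_cpu_offload
print(choose_gpu_memory_mode(12))  # sequential_cpu_offload
```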
### 2. Model training
A complete EasyAnimate training pipeline should include data preprocessing, Video VAE training, and Video DiT training. Video VAE training is optional, because we already provide a trained Video VAE.

<h4 id="data-preprocess">a. Data preprocessing</h4>

We provide a simple demo of training a LoRA model on image data; see the [wiki](https://github.com/aigc-apps/EasyAnimate/wiki/Training-Lora) for details.

A complete data preprocessing pipeline for long-video slicing, cleaning, and captioning is described in the [README](easyanimate/video_caption/README.md) of the video caption section.

If you want to train a text-to-image-and-video generation model, you need to arrange the dataset in the following format:
```
📦 project/
├── 📂 datasets/
│   ├── 📂 internal_datasets/
│       ├── 📂 train/
│       │   ├── 📄 00000001.mp4
│       │   ├── 📄 00000002.jpg
│       │   └── 📄 .....
│       └── 📄 json_of_internal_datasets.json
```

json_of_internal_datasets.json is a standard JSON file. The file_path in the json can be set as a relative path, as shown below:
```json
[
    {
      "file_path": "train/00000001.mp4",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "video"
    },
    {
      "file_path": "train/00000002.jpg",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "image"
    },
    .....
]
```

You can also set the path as an absolute path:
```json
[
    {
      "file_path": "/mnt/data/videos/00000001.mp4",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "video"
    },
    {
      "file_path": "/mnt/data/train/00000001.jpg",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "image"
    },
    .....
]
```
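A quick sanity check of a metadata file in either of the formats above could look like the sketch below. It is a minimal, hypothetical helper that only checks the keys and type values shown in the examples; the training scripts may enforce additional constraints.

```python
import json

# Keys and type values taken from the metadata examples above.
REQUIRED_KEYS = {"file_path", "text", "type"}
ALLOWED_TYPES = {"video", "image"}

def validate_entries(entries: list) -> int:
    """Check every metadata entry; return how many were validated."""
    for i, entry in enumerate(entries):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
        if entry["type"] not in ALLOWED_TYPES:
            raise ValueError(f"entry {i} has unknown type: {entry['type']!r}")
    return len(entries)

def validate_metadata_file(path: str) -> int:
    """Load a metadata JSON file and validate all of its entries."""
    with open(path, "r", encoding="utf-8") as f:
        return validate_entries(json.load(f))

print(validate_entries([
    {"file_path": "train/00000001.mp4",
     "text": "A group of young men in suits and sunglasses are walking down a city street.",
     "type": "video"},
]))  # 1
```

Running such a check before launching training catches malformed entries early, instead of failing mid-epoch.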
329
+ <h4 id="vae-train">b. Video VAE่ฎญ็ปƒ ๏ผˆๅฏ้€‰๏ผ‰</h4>
330
+ Video VAE่ฎญ็ปƒๆ˜ฏไธ€ไธชๅฏ้€‰้กน๏ผŒๅ› ไธบๆˆ‘ไปฌๅทฒ็ปๆไพ›ไบ†่ฎญ็ปƒๅฅฝ็š„Video VAEใ€‚
331
+
332
+ ๅฆ‚ๆžœๆƒณ่ฆ่ฟ›่กŒ่ฎญ็ปƒ๏ผŒๅฏไปฅๅ‚่€ƒvideo vae้ƒจๅˆ†็š„[README](easyanimate/vae/README.md)่ฟ›่กŒใ€‚
333
+
334
+ <h4 id="dit-train">c. Video DiT่ฎญ็ปƒ </h4>
335
+
336
+ ๅฆ‚ๆžœๆ•ฐๆฎ้ข„ๅค„็†ๆ—ถ๏ผŒๆ•ฐๆฎ็š„ๆ ผๅผไธบ็›ธๅฏน่ทฏๅพ„๏ผŒๅˆ™่ฟ›ๅ…ฅscripts/train.sh่ฟ›่กŒๅฆ‚ไธ‹่ฎพ็ฝฎใ€‚
337
+ ```
338
+ export DATASET_NAME="datasets/internal_datasets/"
339
+ export DATASET_META_NAME="datasets/internal_datasets/json_of_internal_datasets.json"
340
+
341
+ ...
342
+
343
+ train_data_format="normal"
344
+ ```
345
+
346
+ ๅฆ‚ๆžœๆ•ฐๆฎ็š„ๆ ผๅผไธบ็ปๅฏน่ทฏๅพ„๏ผŒๅˆ™่ฟ›ๅ…ฅscripts/train.sh่ฟ›่กŒๅฆ‚ไธ‹่ฎพ็ฝฎใ€‚
347
+ ```
348
+ export DATASET_NAME=""
349
+ export DATASET_META_NAME="/mnt/data/json_of_internal_datasets.json"
350
+ ```
351
+
352
+ ๆœ€ๅŽ่ฟ่กŒscripts/train.shใ€‚
353
+ ```sh
354
+ sh scripts/train.sh
355
+ ```
356
+
357
+ ๅ…ณไบŽไธ€ไบ›ๅ‚ๆ•ฐ็š„่ฎพ็ฝฎ็ป†่Š‚๏ผŒๅฏไปฅๆŸฅ็œ‹[Readme Train](scripts/README_TRAIN.md)ไธŽ[Readme Lora](scripts/README_TRAIN_LORA.md)
358
+
359
+ <details>
360
+ <summary>(Obsolete) EasyAnimateV1:</summary>
361
+ ๅฆ‚ๆžœไฝ ๆƒณ่ฎญ็ปƒEasyAnimateV1ใ€‚่ฏทๅˆ‡ๆขๅˆฐgitๅˆ†ๆ”ฏv1ใ€‚
362
+ </details>
363
+
364
+ # ๆจกๅž‹ๅœฐๅ€
365
+ EasyAnimateV5:
366
+
367
+ | ๅ็งฐ | ็ง็ฑป | ๅญ˜ๅ‚จ๏ฟฝ๏ฟฝ๏ฟฝ้—ด | Hugging Face | Model Scope | ๆ่ฟฐ |
368
+ |--|--|--|--|--|--|
369
+ | EasyAnimateV5-12b-zh-InP | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-InP) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-InP)| ๅฎ˜ๆ–น็š„ๅ›พ็”Ÿ่ง†้ข‘ๆƒ้‡ใ€‚ๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒไปฅ49ๅธงใ€ๆฏ็ง’8ๅธง่ฟ›่กŒ่ฎญ็ปƒ๏ผŒๆ”ฏๆŒไธญๆ–‡ไธŽ่‹ฑๆ–‡ๅŒ่ฏญ้ข„ๆต‹ |
370
+ | EasyAnimateV5-12b-zh-Control | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-Control) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-Control)| ๅฎ˜ๆ–น็š„่ง†้ข‘ๆŽงๅˆถๆƒ้‡๏ผŒๆ”ฏๆŒไธๅŒ็š„ๆŽงๅˆถๆกไปถ๏ผŒๅฆ‚Cannyใ€Depthใ€Poseใ€MLSD็ญ‰ใ€‚ๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒไปฅ49ๅธงใ€ๆฏ็ง’8ๅธง่ฟ›่กŒ่ฎญ็ปƒ๏ผŒๆ”ฏๆŒไธญๆ–‡ไธŽ่‹ฑๆ–‡ๅŒ่ฏญ้ข„ๆต‹ |
371
+ | EasyAnimateV5-12b-zh | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh)| ๅฎ˜ๆ–น็š„ๆ–‡็”Ÿ่ง†้ข‘ๆƒ้‡ใ€‚ๅฏ็”จไบŽ่ฟ›่กŒไธ‹ๆธธไปปๅŠก็š„fientuneใ€‚ๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒไปฅ49ๅธงใ€ๆฏ็ง’8ๅธง่ฟ›่กŒ่ฎญ็ปƒ๏ผŒๆ”ฏๆŒไธญๆ–‡ไธŽ่‹ฑๆ–‡ๅŒ่ฏญ้ข„ๆต‹ |
372
+
373
+ <details>
374
+ <summary>(Obsolete) EasyAnimateV4:</summary>
375
+
376
+ | ๅ็งฐ | ็ง็ฑป | ๅญ˜ๅ‚จ็ฉบ้—ด | ไธ‹่ฝฝๅœฐๅ€ | Hugging Face | ๆ่ฟฐ |
377
+ |--|--|--|--|--|--|
378
+ | EasyAnimateV4-XL-2-InP.tar.gz | EasyAnimateV4 | ่งฃๅŽ‹ๅ‰ 8.9 GB / ่งฃๅŽ‹ๅŽ 14.0 GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV4-XL-2-InP.tar.gz) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV4-XL-2-InP)| ๅฎ˜ๆ–น็š„ๅ›พ็”Ÿ่ง†้ข‘ๆƒ้‡ใ€‚ๆ”ฏๆŒๅคšๅˆ†่พจ็Ž‡๏ผˆ512๏ผŒ768๏ผŒ1024๏ผŒ1280๏ผ‰็š„่ง†้ข‘้ข„ๆต‹๏ผŒไปฅ144ๅธงใ€ๆฏ็ง’24ๅธง่ฟ›่กŒ่ฎญ็ปƒ |
379
+ </details>
+
+ <details>
+ <summary>(Obsolete) EasyAnimateV3:</summary>
+
+ | Name | Type | Storage Space | Url | Hugging Face | Description |
+ |--|--|--|--|--|--|
+ | EasyAnimateV3-XL-2-InP-512x512.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-512x512)| Official 512x512 image-to-video weights. Trained with 144 frames at 24 frames per second. |
+ | EasyAnimateV3-XL-2-InP-768x768.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-768x768.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-768x768) | Official 768x768 image-to-video weights. Trained with 144 frames at 24 frames per second. |
+ | EasyAnimateV3-XL-2-InP-960x960.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-960x960.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-960x960) | Official 960x960 (720P) image-to-video weights. Trained with 144 frames at 24 frames per second. |
+ </details>
+
+ <details>
+ <summary>(Obsolete) EasyAnimateV2:</summary>
+
+ | Name | Type | Storage Space | Url | Hugging Face | Description |
+ |--|--|--|--|--|--|
+ | EasyAnimateV2-XL-2-512x512.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-512x512.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-512x512)| Official 512x512 weights. Trained with 144 frames at 24 frames per second. |
+ | EasyAnimateV2-XL-2-768x768.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-768x768.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-768x768) | Official 768x768 weights. Trained with 144 frames at 24 frames per second. |
+ | easyanimatev2_minimalism_lora.safetensors | Lora of Pixart | 485.1MB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimatev2_minimalism_lora.safetensors)| - | A LoRA trained on a specific type of images. The images can be [downloaded here](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/Minimalism.zip). |
+ </details>
+
+ <details>
+ <summary>(Obsolete) EasyAnimateV1:</summary>
+
+ ### 1. Motion Weights
+ | Name | Type | Storage Space | Url | Description |
+ |--|--|--|--|--|
+ | easyanimate_v1_mm.safetensors | Motion Module | 4.1GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Motion_Module/easyanimate_v1_mm.safetensors) | Training with 80 frames and fps 12 |
+
+ ### 2. Other Weights
+ | Name | Type | Storage Space | Url | Description |
+ |--|--|--|--|--|
+ | PixArt-XL-2-512x512.tar | Pixart | 11.4GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/PixArt-XL-2-512x512.tar)| Pixart-Alpha official weights |
+ | easyanimate_portrait.safetensors | Checkpoint of Pixart | 2.3GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait.safetensors) | Training with internal portrait datasets |
+ | easyanimate_portrait_lora.safetensors | Lora of Pixart | 654.0MB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait_lora.safetensors)| Training with internal portrait datasets |
+ </details>
+
+ # Future Plans
+ - Support text-to-video generation models with larger parameter scales.
+
+ # Contact Us
+ 1. Scan the QR code below, or search group number 77450006752 on DingTalk, to join the DingTalk group.
+ 2. Scan the QR code below to join the WeChat group (if the QR code has expired, scan the rightmost personal QR code and that member will invite you).
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/>
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/>
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>
+
+ # Reference
+ - CogVideo: https://github.com/THUDM/CogVideo/
+ - magvit: https://github.com/google-research/magvit
+ - PixArt: https://github.com/PixArt-alpha/PixArt-alpha
+ - Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
+ - Open-Sora: https://github.com/hpcaitech/Open-Sora
+ - Animatediff: https://github.com/guoyww/AnimateDiff
+ - ComfyUI-EasyAnimateWrapper: https://github.com/kijai/ComfyUI-EasyAnimateWrapper
+ - HunYuan DiT: https://github.com/tencent/HunyuanDiT
+
+ # License
+ This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
README_en.md ADDED
@@ -0,0 +1,409 @@
1
+ # ๐Ÿ“ท EasyAnimate | An End-to-End Solution for High-Resolution and Long Video Generation
2
+ 😊 EasyAnimate is an end-to-end solution for generating high-resolution, long videos. It can train transformer-based diffusion generators, train VAEs for processing long videos, and preprocess metadata.
3
+
4
+ 😊 We use DiT (Diffusion Transformer) as the diffusion model for video and image generation.
5
+
6
+ ๐Ÿ˜Š Welcome!
7
+
8
+ [![Arxiv Page](https://img.shields.io/badge/Arxiv-Page-red)](https://arxiv.org/abs/2405.18991)
9
+ [![Project Page](https://img.shields.io/badge/Project-Website-green)](https://easyanimate.github.io/)
10
+ [![Modelscope Studio](https://img.shields.io/badge/Modelscope-Studio-blue)](https://modelscope.cn/studios/PAI/EasyAnimate/summary)
11
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/alibaba-pai/EasyAnimate)
12
+ [![Discord Page](https://img.shields.io/badge/Discord-Page-blue)](https://discord.gg/UzkpB4Bn)
13
+
14
+ English | [็ฎ€ไฝ“ไธญๆ–‡](./README_zh-CN.md)
15
+
16
+ # Table of Contents
17
+ - [Table of Contents](#table-of-contents)
18
+ - [Introduction](#introduction)
19
+ - [Quick Start](#quick-start)
20
+ - [Video Result](#video-result)
21
+ - [How to use](#how-to-use)
22
+ - [Model zoo](#model-zoo)
23
+ - [TODO List](#todo-list)
24
+ - [Contact Us](#contact-us)
25
+ - [Reference](#reference)
26
+ - [License](#license)
27
+
28
+ # Introduction
29
+ EasyAnimate is a pipeline based on the transformer architecture, designed for generating AI images and videos, and for training baseline models and Lora models for Diffusion Transformer. We support direct prediction from pre-trained EasyAnimate models, allowing for the generation of videos with various resolutions, approximately 6 seconds in length, at 8fps (EasyAnimateV5, 1 to 49 frames). Additionally, users can train their own baseline and Lora models for specific style transformations.
30
+
31
+ We support quick launch from different platforms; refer to [Quick Start](#quick-start).
32
+
33
+ **New Features:**
34
+ - **Updated to v5**, supporting video generation up to 1024x1024, 49 frames, 6s, 8fps, with expanded model scale to 12B, incorporating the MMDIT structure, and enabling control models with diverse inputs; supports bilingual predictions in Chinese and English. [2024.11.04]
35
+ - **Updated to v4**, allowing for video generation up to 1024x1024, 144 frames, 6s, 24fps; supports video generation from text, image, and video, with a single model handling resolutions from 512 to 1280; bilingual predictions in Chinese and English enabled. [2024.08.15]
36
+ - **Updated to v3**, supporting video generation up to 960x960, 144 frames, 6s, 24fps, from text and image. [2024.07.01]
37
+ - **ModelScope-Sora โ€œData Directorโ€ Creative Race** โ€” The third Data-Juicer Big Model Data Challenge is now officially launched! Utilizing EasyAnimate as the base model, it explores the impact of data processing on model training. Visit the [competition website](https://tianchi.aliyun.com/competition/entrance/532219) for details. [2024.06.17]
38
+ - **Updated to v2**, supporting video generation up to 768x768, 144 frames, 6s, 24fps. [2024.05.26]
39
+ - **Code Created!** Now supporting Windows and Linux. [2024.04.12]
40
+
41
+ Functions:
42
+ - [Data Preprocessing](#data-preprocess)
43
+ - [Train VAE](#vae-train)
44
+ - [Train DiT](#dit-train)
45
+ - [Video Generation](#video-gen)
46
+
47
+ Our UI interface is as follows:
48
+ ![ui](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/ui_v3.jpg)
49
+
50
+ # Quick Start
51
+ ### 1. Cloud usage: AliyunDSW/Docker
52
+ #### a. From AliyunDSW
53
+ DSW offers free GPU time, which each user can apply for once; it is valid for 3 months after applying.
54
+
55
+ Aliyun provides free GPU time via [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1); claim it and use it in Aliyun PAI-DSW to start EasyAnimate within 5 minutes!
56
+
57
+ [![DSW Notebook](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/dsw.png)](https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate)
58
+
59
+ #### b. From ComfyUI
60
+ Our ComfyUI is as follows, please refer to [ComfyUI README](comfyui/README.md) for details.
61
+ ![workflow graph](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v3/comfyui_i2v.jpg)
62
+
63
+ #### c. From docker
64
+ If you are using docker, please make sure that the graphics card driver and CUDA environment have been installed correctly in your machine.
65
+
66
+ Then execute the following commands in this way:
67
+ ```
68
+ # pull image
69
+ docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
70
+
71
+ # enter image
72
+ docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
73
+
74
+ # clone code
75
+ git clone https://github.com/aigc-apps/EasyAnimate.git
76
+
77
+ # enter EasyAnimate's dir
78
+ cd EasyAnimate
79
+
80
+ # download weights
81
+ mkdir models/Diffusion_Transformer
82
+ mkdir models/Motion_Module
83
+ mkdir models/Personalized_Model
84
+
85
+ # Please use the Hugging Face link or ModelScope link to download the EasyAnimateV5 model.
86
+ # I2V models
87
+ # https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-InP
88
+ # https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-InP
89
+ # T2V models
90
+ # https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh
91
+ # https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh
92
+ ```
93
+
94
+ ### 2. Local install: Environment Check/Downloading/Installation
95
+ #### a. Environment Check
96
+ We have verified EasyAnimate execution on the following environment:
97
+
98
+ Details for Windows:
99
+ - OS: Windows 10
100
+ - python: python3.10 & python3.11
101
+ - pytorch: torch2.2.0
102
+ - CUDA: 11.8 & 12.1
103
+ - CUDNN: 8+
104
+ - GPU๏ผš Nvidia-3060 12G
105
+
106
+ Details for Linux:
107
+ - OS: Ubuntu 20.04, CentOS
108
+ - python: python3.10 & python3.11
109
+ - pytorch: torch2.2.0
110
+ - CUDA: 11.8 & 12.1
111
+ - CUDNN: 8+
112
+ - GPU๏ผšNvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G
113
+
114
+ We need about 60GB of free disk space (for saving weights); please check!
115
+
116
+ #### b. Weights
117
+ Place the [weights](#model-zoo) according to the following directory structure:
118
+
119
+ EasyAnimateV5:
120
+ ```
+ 📦 models/
+ ├── 📂 Diffusion_Transformer/
+ │   ├── 📂 EasyAnimateV5-12b-zh-InP/
+ │   └── 📂 EasyAnimateV5-12b-zh/
+ ├── 📂 Personalized_Model/
+ │   └── your trained transformer model / your trained lora model (for UI load)
+ ```
128
+
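A quick way to confirm the layout before launching the UI is a small path check like the sketch below. The directory names come from the tree above; the helper itself is illustrative and not part of the repo.

```python
from pathlib import Path

# Sub-directories expected under models/ for EasyAnimateV5, taken from the
# tree above (trim the list if you only downloaded one of the checkpoints).
REQUIRED_WEIGHT_DIRS = [
    "Diffusion_Transformer/EasyAnimateV5-12b-zh-InP",
    "Diffusion_Transformer/EasyAnimateV5-12b-zh",
    "Personalized_Model",
]

def missing_weight_dirs(models_root: str) -> list:
    """Return the expected sub-directories that are absent under models_root."""
    root = Path(models_root)
    return [d for d in REQUIRED_WEIGHT_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    import tempfile
    # Demo against a scratch directory that only contains Personalized_Model.
    with tempfile.TemporaryDirectory() as tmp:
        (Path(tmp) / "Personalized_Model").mkdir()
        print(missing_weight_dirs(tmp))
```

Running this against your actual `models/` folder and getting an empty list means the weights are where the UI expects them.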
129
+ # Video Result
130
+ The results displayed below are all generated from images (image-to-video).
131
+
132
+ ### EasyAnimateV5-12b-zh-InP
133
+
134
+ Resolution-1024
135
+
136
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
137
+ <tr>
138
+ <td>
139
+ <video src="https://github.com/user-attachments/assets/bb393b7c-ba33-494c-ab06-b314adea9fc1" width="100%" controls autoplay loop></video>
140
+ </td>
141
+ <td>
142
+ <video src="https://github.com/user-attachments/assets/cb0d0253-919d-4dd6-9dc1-5cd94443c7f1" width="100%" controls autoplay loop></video>
143
+ </td>
144
+ <td>
145
+ <video src="https://github.com/user-attachments/assets/09ed361f-c0c5-4025-aad7-71fe1a1a52b1" width="100%" controls autoplay loop></video>
146
+ </td>
147
+ <td>
148
+ <video src="https://github.com/user-attachments/assets/9f42848d-34eb-473f-97ea-a5ebd0268106" width="100%" controls autoplay loop></video>
149
+ </td>
150
+ </tr>
151
+ </table>
152
+
153
+
154
+ Resolution-768
155
+
156
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
157
+ <tr>
158
+ <td>
159
+ <video src="https://github.com/user-attachments/assets/903fda91-a0bd-48ee-bf64-fff4e4d96f17" width="100%" controls autoplay loop></video>
160
+ </td>
161
+ <td>
162
+ <video src="https://github.com/user-attachments/assets/407c6628-9688-44b6-b12d-77de10fbbe95" width="100%" controls autoplay loop></video>
163
+ </td>
164
+ <td>
165
+ <video src="https://github.com/user-attachments/assets/ccf30ec1-91d2-4d82-9ce0-fcc585fc2f21" width="100%" controls autoplay loop></video>
166
+ </td>
167
+ <td>
168
+ <video src="https://github.com/user-attachments/assets/5dfe0f92-7d0d-43e0-b7df-0ff7b325663c" width="100%" controls autoplay loop></video>
169
+ </td>
170
+ </tr>
171
+ </table>
172
+
173
+ Resolution-512
174
+
175
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
176
+ <tr>
177
+ <td>
178
+ <video src="https://github.com/user-attachments/assets/2b542b85-be19-4537-9607-9d28ea7e932e" width="100%" controls autoplay loop></video>
179
+ </td>
180
+ <td>
181
+ <video src="https://github.com/user-attachments/assets/c1662745-752d-4ad2-92bc-fe53734347b2" width="100%" controls autoplay loop></video>
182
+ </td>
183
+ <td>
184
+ <video src="https://github.com/user-attachments/assets/8bec3d66-50a3-4af5-a381-be2c865825a0" width="100%" controls autoplay loop></video>
185
+ </td>
186
+ <td>
187
+ <video src="https://github.com/user-attachments/assets/bcec22f4-732c-446f-958c-2ebbfd8f94be" width="100%" controls autoplay loop></video>
188
+ </td>
189
+ </tr>
190
+ </table>
191
+
192
+ ### EasyAnimateV5-12b-zh-Control
193
+
194
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
195
+ <tr>
196
+ <td>
197
+ <video src="https://github.com/user-attachments/assets/53002ce2-dd18-4d4f-8135-b6f68364cabd" width="100%" controls autoplay loop></video>
198
+ </td>
199
+ <td>
200
+ <video src="https://github.com/user-attachments/assets/fce43c0b-81fa-4ab2-9ca7-78d786f520e6" width="100%" controls autoplay loop></video>
201
+ </td>
202
+ <td>
203
+ <video src="https://github.com/user-attachments/assets/b208b92c-5add-4ece-a200-3dbbe47b93c3" width="100%" controls autoplay loop></video>
204
+ </td>
205
+ <tr>
206
+ <td>
207
+ <video src="https://github.com/user-attachments/assets/3aec95d5-d240-49fb-a9e9-914446c7a4cf" width="100%" controls autoplay loop></video>
208
+ </td>
209
+ <td>
210
+ <video src="https://github.com/user-attachments/assets/60fa063b-5c1f-485f-b663-09bd6669de3f" width="100%" controls autoplay loop></video>
211
+ </td>
212
+ <td>
213
+ <video src="https://github.com/user-attachments/assets/4adde728-8397-42f3-8a2a-23f7b39e9a1e" width="100%" controls autoplay loop></video>
214
+ </td>
215
+ </tr>
216
+ </table>
217
+
218
+
219
+ # How to use
220
+
221
+ <h3 id="video-gen">1. Inference </h3>
222
+
223
+ #### a. Using Python Code
224
+ - Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
225
+ - Step 2: Modify prompt, neg_prompt, guidance_scale, and seed in the predict_t2v.py file.
226
+ - Step 3: Run the predict_t2v.py file, wait for the generated results, and save the results in the samples/easyanimate-videos folder.
227
+ - Step 4: If you want to combine other backbones you have trained, or Lora models, modify predict_t2v.py and the lora_path in predict_t2v.py as needed.
228
+
229
+ #### b. Using webui
230
+ - Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
231
+ - Step 2: Run the app.py file to enter the graph page.
232
+ - Step 3: Select the generated model based on the page, fill in prompt, neg_prompt, guidance_scale, and seed, click on generate, wait for the generated result, and save the result in the samples folder.
233
+
234
+ #### c. From ComfyUI
235
+ Please refer to [ComfyUI README](comfyui/README.md) for details.
236
+
237
+ #### d. GPU Memory Saving Schemes
238
+
239
+ Due to the large parameters of EasyAnimateV5, we need to consider GPU memory saving schemes to conserve memory. We provide a `GPU_memory_mode` option for each prediction file, which can be selected from `model_cpu_offload`, `model_cpu_offload_and_qfloat8`, and `sequential_cpu_offload`.
240
+
241
+ - `model_cpu_offload` indicates that the entire model will be offloaded to the CPU after use, saving some GPU memory.
242
+ - `model_cpu_offload_and_qfloat8` indicates that the entire model will be offloaded to the CPU after use, and the transformer model is quantized to float8, saving even more GPU memory.
243
+ - `sequential_cpu_offload` means that each layer of the model will be offloaded to the CPU after use, which is slower but saves a substantial amount of GPU memory.
244
+
245
+
246
+ ### 2. Model Training
247
+ A complete EasyAnimate training pipeline should include data preprocessing, Video VAE training, and Video DiT training. Among these, Video VAE training is optional because we have already provided a pre-trained Video VAE.
248
+
249
+ <h4 id="data-preprocess">a. data preprocessing</h4>
250
+
251
+ We provide a simple demo of training a Lora model with image data; see the [wiki](https://github.com/aigc-apps/EasyAnimate/wiki/Training-Lora) for details.
252
+
253
+ For the complete data preprocessing pipeline of long-video segmentation, cleaning, and captioning, refer to the [README](./easyanimate/video_caption/README.md) in the video captions section.
254
+
255
+ If you want to train a text-to-image/video generation model, you need to arrange the dataset in the following format.
256
+
257
+ ```
+ 📦 project/
+ ├── 📂 datasets/
+ │   ├── 📂 internal_datasets/
+ │       ├── 📂 train/
+ │       │   ├── 📄 00000001.mp4
+ │       │   ├── 📄 00000002.jpg
+ │       │   └── 📄 .....
+ │       └── 📄 json_of_internal_datasets.json
+ ```
267
+
268
+ The json_of_internal_datasets.json is a standard JSON file. The file_path in the json can be set as a relative path, as shown below:
269
+ ```json
270
+ [
271
+ {
272
+ "file_path": "train/00000001.mp4",
273
+ "text": "A group of young men in suits and sunglasses are walking down a city street.",
274
+ "type": "video"
275
+ },
276
+ {
277
+ "file_path": "train/00000002.jpg",
278
+ "text": "A group of young men in suits and sunglasses are walking down a city street.",
279
+ "type": "image"
280
+ },
281
+ .....
282
+ ]
283
+ ```
284
+
285
+ You can also set the path as an absolute path as follows:
286
+ ```json
287
+ [
288
+ {
289
+ "file_path": "/mnt/data/videos/00000001.mp4",
290
+ "text": "A group of young men in suits and sunglasses are walking down a city street.",
291
+ "type": "video"
292
+ },
293
+ {
294
+ "file_path": "/mnt/data/train/00000001.jpg",
295
+ "text": "A group of young men in suits and sunglasses are walking down a city street.",
296
+ "type": "image"
297
+ },
298
+ .....
299
+ ]
300
+ ```
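Either way, each entry must provide `file_path`, `text`, and `type` (`"video"` or `"image"`). A small helper to sanity-check and write such a metadata file might look like the sketch below (illustrative only, not part of the repo):

```python
import json
import tempfile
from pathlib import Path

VALID_TYPES = {"video", "image"}
REQUIRED_KEYS = ("file_path", "text", "type")

def validate_metadata(entries):
    """Raise ValueError if any entry deviates from the expected schema."""
    for i, entry in enumerate(entries):
        for key in REQUIRED_KEYS:
            if key not in entry:
                raise ValueError(f"entry {i} is missing required key {key!r}")
        if entry["type"] not in VALID_TYPES:
            raise ValueError(f"entry {i} has invalid type {entry['type']!r}")

def write_metadata(entries, path):
    """Validate the entries, then write them as a JSON metadata file."""
    validate_metadata(entries)
    Path(path).write_text(
        json.dumps(entries, ensure_ascii=False, indent=2), encoding="utf-8"
    )

if __name__ == "__main__":
    entries = [
        {"file_path": "train/00000001.mp4",
         "text": "A group of young men in suits and sunglasses are walking down a city street.",
         "type": "video"},
    ]
    with tempfile.TemporaryDirectory() as tmp:
        write_metadata(entries, Path(tmp) / "json_of_internal_datasets.json")
```

Validating up front catches malformed entries before a long training run starts.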
301
+
302
+ <h4 id="vae-train">b. Video VAE training (optional)</h4>
303
+
304
+ Video VAE training is optional, as we have already provided a pre-trained Video VAE.
305
+ If you want to train video vae, you can refer to [README](easyanimate/vae/README.md) in the video vae section.
306
+
307
+ <h4 id="dit-train">c. Video DiT training </h4>
308
+
309
+ If the data format was a relative path during data preprocessing, set ```scripts/train.sh``` as follows:
310
+ ```
311
+ export DATASET_NAME="datasets/internal_datasets/"
312
+ export DATASET_META_NAME="datasets/internal_datasets/json_of_internal_datasets.json"
313
+ ```
314
+
315
+ If the data format was an absolute path during data preprocessing, set ```scripts/train.sh``` as follows:
316
+ ```
317
+ export DATASET_NAME=""
318
+ export DATASET_META_NAME="/mnt/data/json_of_internal_datasets.json"
319
+ ```
320
+
321
+ Then, we run scripts/train.sh.
322
+ ```sh
323
+ sh scripts/train.sh
324
+ ```
325
+
326
+ For details on setting some parameters, please refer to [Readme Train](scripts/README_TRAIN.md) and [Readme Lora](scripts/README_TRAIN_LORA.md).
327
+
328
+ <details>
329
+ <summary>(Obsolete) EasyAnimateV1:</summary>
330
+ If you want to train EasyAnimateV1, please switch to the git branch v1.
331
+ </details>
332
+
333
+
334
+ # Model zoo
335
+
336
+ EasyAnimateV5:
337
+
338
+ | Name | Type | Storage Space | Hugging Face | Model Scope | Description |
339
+ |--|--|--|--|--|--|
340
+ | EasyAnimateV5-12b-zh-InP | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-InP) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-InP) | Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
341
+ | EasyAnimateV5-12b-zh-Control | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh-Control) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh-Control) | Official video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc. Supports video prediction at multiple resolutions (512, 768, 1024) and is trained with 49 frames at 8 frames per second. Bilingual prediction in Chinese and English is supported. |
342
+ | EasyAnimateV5-12b-zh | EasyAnimateV5 | 34 GB | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV5-12b-zh) | [๐Ÿ˜„Link](https://modelscope.cn/models/PAI/EasyAnimateV5-12b-zh) | Official text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
343
+
344
+ <details>
345
+ <summary>(Obsolete) EasyAnimateV4:</summary>
346
+
347
+ | Name | Type | Storage Space | Url | Hugging Face | Description |
348
+ |--|--|--|--|--|--|
349
+ | EasyAnimateV4-XL-2-InP.tar.gz | EasyAnimateV4 | Before extraction: 8.9 GB / After extraction: 14.0 GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV4-XL-2-InP.tar.gz) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV4-XL-2-InP)| Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024, 1280), trained with 144 frames at 24 frames per second. |
350
+ </details>
351
+
352
+ <details>
353
+ <summary>(Obsolete) EasyAnimateV3:</summary>
354
+
355
+ | Name | Type | Storage Space | Url | Hugging Face | Description |
356
+ |--|--|--|--|--|--|
357
+ | EasyAnimateV3-XL-2-InP-512x512.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512.tar) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-512x512) | EasyAnimateV3 official weights for 512x512 text and image to video resolution. Training with 144 frames and fps 24 |
358
+ | EasyAnimateV3-XL-2-InP-768x768.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-768x768.tar) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-768x768) | EasyAnimateV3 official weights for 768x768 text and image to video resolution. Training with 144 frames and fps 24 |
359
+ | EasyAnimateV3-XL-2-InP-960x960.tar | EasyAnimateV3 | 18.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-960x960.tar) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-960x960) | EasyAnimateV3 official weights for 960x960 text and image to video resolution. Training with 144 frames and fps 24 |
360
+ </details>
361
+
362
+ <details>
363
+ <summary>(Obsolete) EasyAnimateV2:</summary>
+
364
+ | Name | Type | Storage Space | Url | Hugging Face | Description |
365
+ |--|--|--|--|--|--|
366
+ | EasyAnimateV2-XL-2-512x512.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-512x512.tar) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-512x512) | EasyAnimateV2 official weights for 512x512 resolution. Training with 144 frames and fps 24 |
367
+ | EasyAnimateV2-XL-2-768x768.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-768x768.tar) | [๐Ÿค—Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-768x768) | EasyAnimateV2 official weights for 768x768 resolution. Training with 144 frames and fps 24 |
368
+ | easyanimatev2_minimalism_lora.safetensors | Lora of Pixart | 485.1MB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimatev2_minimalism_lora.safetensors) | - | A LoRA trained on a specific type of images. The images can be downloaded from [Url](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v2/Minimalism.zip). |
369
+ </details>
370
+
371
+ <details>
372
+ <summary>(Obsolete) EasyAnimateV1:</summary>
373
+
374
+ ### 1ใ€Motion Weights
375
+ | Name | Type | Storage Space | Url | Description |
376
+ |--|--|--|--|--|
377
+ | easyanimate_v1_mm.safetensors | Motion Module | 4.1GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Motion_Module/easyanimate_v1_mm.safetensors) | Training with 80 frames and fps 12 |
378
+
379
+ ### 2ใ€Other Weights
380
+ | Name | Type | Storage Space | Url | Description |
381
+ |--|--|--|--|--|
382
+ | PixArt-XL-2-512x512.tar | Pixart | 11.4GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/PixArt-XL-2-512x512.tar)| Pixart-Alpha official weights |
383
+ | easyanimate_portrait.safetensors | Checkpoint of Pixart | 2.3GB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait.safetensors) | Training with internal portrait datasets |
384
+ | easyanimate_portrait_lora.safetensors | Lora of Pixart | 654.0MB | [download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait_lora.safetensors)| Training with internal portrait datasets |
385
+ </details>
386
+
387
+ # TODO List
388
+ - Support text-to-video generation models with larger parameter scales.
389
+
390
+ # Contact Us
391
+ 1. Scan the QR code below, or search group number 77450006752 on DingTalk, to join the DingTalk group.
392
+ 2. Scan the QR code below to join the WeChat group (if it has expired, add the person in the rightmost QR code as a friend to be invited).
393
+
394
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/>
395
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/>
396
+ <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>
397
+
398
+
399
+ # Reference
400
+ - magvit: https://github.com/google-research/magvit
401
+ - PixArt: https://github.com/PixArt-alpha/PixArt-alpha
402
+ - Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
403
+ - Open-Sora: https://github.com/hpcaitech/Open-Sora
404
+ - Animatediff: https://github.com/guoyww/AnimateDiff
405
+ - ComfyUI-EasyAnimateWrapper: https://github.com/kijai/ComfyUI-EasyAnimateWrapper
406
+ - HunYuan DiT: https://github.com/tencent/HunyuanDiT
407
+
408
+ # License
409
+ This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).