hbyang committed
Commit 5ba7367 • 1 Parent(s): 37d5426

Upload log.txt

Files changed (1)
  1. log.txt +311 -0
log.txt ADDED
@@ -0,0 +1,311 @@
+ python train_ddp_spawn.py \
+ > --base configs/train-v01.yaml \
+ > --no-test True \
+ > --train True \
+ > --logdir outputs/logs/train-v01
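The five lines above are a single shell command (the `>` marks are the shell's continuation prompt). As a rough sketch, an argparse front end compatible with these flags could look like the following; the parser details here are assumptions, since train_ddp_spawn.py itself is not included in this log.

# Hypothetical sketch only: a front end accepting the flags used above.
# The real train_ddp_spawn.py may parse them differently.
import argparse

def str2bool(v):
    # Lets "--train True" / "--no-test True" work as booleans.
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {v!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--base", nargs="*", default=[],
                    help="base config YAMLs, e.g. configs/train-v01.yaml")
parser.add_argument("--train", type=str2bool, default=False)
parser.add_argument("--no-test", type=str2bool, default=False)
parser.add_argument("--logdir", type=str, default="outputs/logs")
opt = parser.parse_args()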
+ [2024-09-29 13:09:24,993] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ [WARNING] async_io requires the dev libaio .so object and headers but these were not found.
+ [WARNING] async_io: please install the libaio-dev package with apt
+ [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
+ [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
+ [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
+ [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
+ 2024-09-29 13:09:34.448070: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+ 2024-09-29 13:09:34.678153: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
+ 2024-09-29 13:09:35.323099: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+ To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
+ 2024-09-29 13:09:38.173856: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+ Global seed set to 2300
+ [09/29 13:09:46 VTDM]: Running on GPUs 7,
+ [09/29 13:09:46 VTDM]: Use the strategy of deepspeed_stage_2
+ [09/29 13:09:46 VTDM]: Pytorch lightning trainer config:
+ {'gpus': '7,', 'logger_refresh_rate': 50, 'check_val_every_n_epoch': 1, 'max_epochs': 50, 'accelerator': 'cuda', 'strategy': 'deepspeed_stage_2', 'precision': 16}
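A minimal sketch of how a dict like the one just logged can be turned into a pytorch_lightning.Trainer, under the assumption (not shown in this log) that project-specific keys such as logger_refresh_rate are stripped before the remaining keys are forwarded:

# Sketch under the assumption that only logger_refresh_rate is custom;
# every remaining key must match the installed Trainer's signature.
import pytorch_lightning as pl

trainer_cfg = {
    "gpus": "7,", "logger_refresh_rate": 50, "check_val_every_n_epoch": 1,
    "max_epochs": 50, "accelerator": "cuda",
    "strategy": "deepspeed_stage_2", "precision": 16,
}
custom_keys = {"logger_refresh_rate"}  # consumed by project code, not Trainer
trainer = pl.Trainer(**{k: v for k, v in trainer_cfg.items()
                        if k not in custom_keys})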
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ VideoTransformerBlock is using checkpointing
+ Initialized embedder #0: FrozenOpenCLIPImagePredictionEmbedder with 683800065 params. Trainable: False
+ Initialized embedder #1: AesEmbedder with 343490018 params. Trainable: False
+ Initialized embedder #2: ConcatTimestepEmbedderND with 0 params. Trainable: False
+ Initialized embedder #3: VideoPredictionEmbedderWithEncoder with 83653863 params. Trainable: False
+ Initialized embedder #4: ConcatTimestepEmbedderND with 0 params. Trainable: False
+ Restored from /mnt/afs_intern/yanghaibo/datas/download_checkpoints/svd_checkpoints/stable-video-diffusion-img2vid-xt/svd_xt_image_decoder.safetensors with 312 missing and 0 unexpected keys
+ Missing Keys: ['conditioner.embedders.1.aesthetic_model.positional_embedding', 'conditioner.embedders.1.aesthetic_model.text_projection', 'conditioner.embedders.1.aesthetic_model.logit_scale', 'conditioner.embedders.1.aesthetic_model.visual.class_embedding', 'conditioner.embedders.1.aesthetic_model.visual.positional_embedding', 'conditioner.embedders.1.aesthetic_model.visual.proj', 'conditioner.embedders.1.aesthetic_model.visual.conv1.weight', 'conditioner.embedders.1.aesthetic_model.visual.ln_pre.weight', 'conditioner.embedders.1.aesthetic_model.visual.ln_pre.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.0.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.1.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.mlp.c_proj.weight', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.2.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.3.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.4.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.5.ln_2.bias', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.6.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.7.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.8.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.attn.out_proj.weight', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.9.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.10.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.11.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.ln_1.bias', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.12.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.13.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.14.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.mlp.c_proj.weight', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.15.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.16.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.17.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.18.ln_2.bias', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.19.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.20.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.21.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.attn.out_proj.weight', 
'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.22.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.attn.in_proj_weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.attn.in_proj_bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.attn.out_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.attn.out_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.ln_1.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.ln_1.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.mlp.c_fc.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.mlp.c_fc.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.mlp.c_proj.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.mlp.c_proj.bias', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.ln_2.weight', 'conditioner.embedders.1.aesthetic_model.visual.transformer.resblocks.23.ln_2.bias', 'conditioner.embedders.1.aesthetic_model.visual.ln_post.weight', 'conditioner.embedders.1.aesthetic_model.visual.ln_post.bias', 'conditioner.embedders.1.aesthetic_model.token_embedding.weight', 'conditioner.embedders.1.aesthetic_model.ln_final.weight', 'conditioner.embedders.1.aesthetic_model.ln_final.bias', 'conditioner.embedders.1.aesthetic_mlp.layers.0.weight', 'conditioner.embedders.1.aesthetic_mlp.layers.0.bias', 'conditioner.embedders.1.aesthetic_mlp.layers.2.weight', 'conditioner.embedders.1.aesthetic_mlp.layers.2.bias', 'conditioner.embedders.1.aesthetic_mlp.layers.4.weight', 'conditioner.embedders.1.aesthetic_mlp.layers.4.bias', 'conditioner.embedders.1.aesthetic_mlp.layers.6.weight', 'conditioner.embedders.1.aesthetic_mlp.layers.6.bias', 'conditioner.embedders.1.aesthetic_mlp.layers.7.weight', 'conditioner.embedders.1.aesthetic_mlp.layers.7.bias']
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/loggers/test_tube.py:104: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the `pytorch_lightning.loggers.TensorBoardLogger` as an alternative.
+ rank_zero_deprecation(
+ [09/29 13:10:47 VTDM]: Merged modelckpt-cfg:
+ {'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'outputs/logs/train-v01/2024-09-29T13-09-44_train-v01_00/checkpoints', 'filename': '{epoch:06}', 'verbose': True, 'save_weights_only': True}}
+ [09/29 13:10:47 VTDM]: Caution: Saving checkpoints every n train steps without deleting. This might require some free space.
+ [09/29 13:10:47 VTDM]: Merged trainsteps-cfg:
+ {'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'outputs/logs/train-v01/2024-09-29T13-09-44_train-v01_00/checkpoints/trainstep_checkpoints', 'filename': '{epoch:06}-{step:09}', 'verbose': True, 'save_top_k': -1, 'every_n_train_steps': 3000, 'save_weights_only': False}}
+ [09/29 13:10:47 VTDM]: Done in building trainer kwargs.
+ GPU available: True, used: True
+ TPU available: False, using: 0 TPU cores
+ IPU available: False, using: 0 IPUs
+ ============= length of dataset 1 =============
+ [09/29 13:10:48 VTDM]: Set up dataset.
+ [09/29 13:10:48 VTDM]: accumulate_grad_batches = 1
+ [09/29 13:10:48 VTDM]: Setting learning rate to 3.00e-05 = 1 (accumulate_grad_batches) * 1 (num_gpus) * 3 (batchsize) * 1.00e-05 (base_lr)
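The scaled value above follows the usual linear learning-rate scaling rule and can be checked directly:

# Reproducing the arithmetic in the log line above.
accumulate_grad_batches = 1
num_gpus = 1        # only GPU 7 is used, per "Running on GPUs 7,"
batch_size = 3      # data.params.batch_size from the config below
base_lr = 1.0e-05   # model.base_learning_rate
lr = accumulate_grad_batches * num_gpus * batch_size * base_lr
assert abs(lr - 3.0e-05) < 1e-12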
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py:116: UserWarning: You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.
+ rank_zero_warn("You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.")
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py:271: LightningDeprecationWarning: The `on_keyboard_interrupt` callback hook was deprecated in v1.5 and will be removed in v1.7. Please use the `on_exception` callback hook instead.
+ rank_zero_deprecation(
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py:287: LightningDeprecationWarning: Base `Callback.on_train_batch_end` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
+ rank_zero_deprecation(
+ Global seed set to 2300
+ initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/1
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py:625: UserWarning: Inferring the batch size for internal deepspeed logging from the `train_dataloader()`. If you require skipping this, please pass `Trainer(strategy=DeepSpeedPlugin(logging_batch_size_per_gpu=batch_size))`
+ rank_zero_warn(
+ Enabling DeepSpeed FP16.
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/core/datamodule.py:469: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
+ rank_zero_deprecation(
+ LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
+ You have not specified an optimizer or scheduler within the DeepSpeed config. Using `configure_optimizers` to define optimizer and scheduler.
+ Project config
+ data:
+   target: sgm.data.video_dataset.VideoDataset
+   params:
+     base_folder: datas/OBJAVERSE-LVIS-example/images
+     eval_folder: validation_set_example
+     width: 512
+     height: 512
+     sample_frames: 16
+     batch_size: 3
+     num_workers: 1
+ model:
+   target: vtdm.vtdm_gen_v01.VideoLDM
+   base_learning_rate: 1.0e-05
+   params:
+     input_key: video
+     scale_factor: 0.18215
+     log_keys: caption
+     num_samples: 16
+     trained_param_keys:
+     - all
+     en_and_decode_n_samples_a_time: 16
+     disable_first_stage_autocast: true
+     ckpt_path: /mnt/afs_intern/yanghaibo/datas/download_checkpoints/svd_checkpoints/stable-video-diffusion-img2vid-xt/svd_xt_image_decoder.safetensors
+     denoiser_config:
+       target: sgm.modules.diffusionmodules.denoiser.Denoiser
+       params:
+         scaling_config:
+           target: sgm.modules.diffusionmodules.denoiser_scaling.VScalingWithEDMcNoise
+     network_config:
+       target: sgm.modules.diffusionmodules.video_model.VideoUNet
+       params:
+         adm_in_channels: 768
+         num_classes: sequential
+         use_checkpoint: true
+         in_channels: 8
+         out_channels: 4
+         model_channels: 320
+         attention_resolutions:
+         - 4
+         - 2
+         - 1
+         num_res_blocks: 2
+         channel_mult:
+         - 1
+         - 2
+         - 4
+         - 4
+         num_head_channels: 64
+         use_linear_in_transformer: true
+         transformer_depth: 1
+         context_dim: 1024
+         spatial_transformer_attn_type: softmax-xformers
+         extra_ff_mix_layer: true
+         use_spatial_context: true
+         merge_strategy: learned_with_images
+         video_kernel_size:
+         - 3
+         - 1
+         - 1
+     conditioner_config:
+       target: sgm.modules.GeneralConditioner
+       params:
+         emb_models:
+         - is_trainable: false
+           input_key: cond_frames_without_noise
+           ucg_rate: 0.1
+           target: sgm.modules.encoders.modules.FrozenOpenCLIPImagePredictionEmbedder
+           params:
+             n_cond_frames: 1
+             n_copies: 1
+             open_clip_embedding_config:
+               target: sgm.modules.encoders.modules.FrozenOpenCLIPImageEmbedder
+               params:
+                 version: ckpts/open_clip_pytorch_model.bin
+                 freeze: true
+         - is_trainable: false
+           input_key: video
+           ucg_rate: 0.0
+           target: vtdm.encoders.AesEmbedder
+         - is_trainable: false
+           input_key: elevation
+           target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+           params:
+             outdim: 256
+         - input_key: cond_frames
+           is_trainable: false
+           ucg_rate: 0.1
+           target: sgm.modules.encoders.modules.VideoPredictionEmbedderWithEncoder
+           params:
+             disable_encoder_autocast: true
+             n_cond_frames: 1
+             n_copies: 16
+             is_ae: true
+             encoder_config:
+               target: sgm.models.autoencoder.AutoencoderKLModeOnly
+               params:
+                 embed_dim: 4
+                 monitor: val/rec_loss
+                 ddconfig:
+                   attn_type: vanilla-xformers
+                   double_z: true
+                   z_channels: 4
+                   resolution: 256
+                   in_channels: 3
+                   out_ch: 3
+                   ch: 128
+                   ch_mult:
+                   - 1
+                   - 2
+                   - 4
+                   - 4
+                   num_res_blocks: 2
+                   attn_resolutions: []
+                   dropout: 0.0
+                 lossconfig:
+                   target: torch.nn.Identity
+         - input_key: cond_aug
+           is_trainable: false
+           target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+           params:
+             outdim: 256
+     first_stage_config:
+       target: sgm.models.autoencoder.AutoencoderKL
+       params:
+         embed_dim: 4
+         monitor: val/rec_loss
+         ddconfig:
+           attn_type: vanilla-xformers
+           double_z: true
+           z_channels: 4
+           resolution: 256
+           in_channels: 3
+           out_ch: 3
+           ch: 128
+           ch_mult:
+           - 1
+           - 2
+           - 4
+           - 4
+           num_res_blocks: 2
+           attn_resolutions: []
+           dropout: 0.0
+         lossconfig:
+           target: torch.nn.Identity
+     loss_fn_config:
+       target: sgm.modules.diffusionmodules.loss.StandardDiffusionLoss
+       params:
+         num_frames: 16
+         batch2model_keys:
+         - num_video_frames
+         - image_only_indicator
+         sigma_sampler_config:
+           target: sgm.modules.diffusionmodules.sigma_sampling.EDMSampling
+           params:
+             p_mean: 1.0
+             p_std: 1.6
+         loss_weighting_config:
+           target: sgm.modules.diffusionmodules.loss_weighting.VWeighting
+     sampler_config:
+       target: sgm.modules.diffusionmodules.sampling.EulerEDMSampler
+       params:
+         num_steps: 25
+         verbose: true
+         discretization_config:
+           target: sgm.modules.diffusionmodules.discretizer.EDMDiscretization
+           params:
+             sigma_max: 700.0
+         guider_config:
+           target: sgm.modules.diffusionmodules.guiders.LinearPredictionGuider
+           params:
+             num_frames: 16
+             max_scale: 2.5
+             min_scale: 1.0
+ 
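Every target/params pair in the dump above is resolved via the instantiate-from-config convention used by sgm-style codebases (the project's own helper lives in sgm.util). A minimal sketch of that mechanism:

# Minimal sketch of the `target`/`params` convention used in the config.
import importlib

def get_obj_from_str(name):
    # "sgm.data.video_dataset.VideoDataset" -> the VideoDataset class
    module, cls = name.rsplit(".", 1)
    return getattr(importlib.import_module(module), cls)

def instantiate_from_config(config):
    # e.g. config = {"target": "sgm.data.video_dataset.VideoDataset",
    #                "params": {"width": 512, "height": 512, ...}}
    return get_obj_from_str(config["target"])(**config.get("params", {}))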
+ Lightning config
+ trainer:
+   gpus: 7,
+   logger_refresh_rate: 50
+   check_val_every_n_epoch: 1
+   max_epochs: 50
+   accelerator: cuda
+   strategy: deepspeed_stage_2
+   precision: 16
+ callbacks:
+   image_logger:
+     target: vtdm.callbacks.ImageLogger
+     params:
+       log_on_batch_idx: true
+       increase_log_steps: false
+       log_first_step: true
+       batch_frequency: 200
+       max_images: 8
+       clamp: true
+       log_images_kwargs:
+         'N': 8
+         sample: true
+         ucg_keys:
+         - cond_frames
+         - cond_frames_without_noise
+   metrics_over_trainsteps_checkpoint:
+     target: pytorch_lightning.callbacks.ModelCheckpoint
+     params:
+       every_n_train_steps: 3000
+       save_weights_only: false
+ 
+ 
+   | Name              | Type                  | Params
+ ------------------------------------------------------------
+ 0 | model             | OpenAIWrapper         | 1.5 B
+ 1 | denoiser          | Denoiser              | 0
+ 2 | conditioner       | GeneralConditioner    | 1.1 B
+ 3 | first_stage_model | AutoencoderKL         | 83.7 M
+ 4 | loss_fn           | StandardDiffusionLoss | 0
+ ------------------------------------------------------------
+ 1.5 B     Trainable params
+ 1.2 B     Non-trainable params
+ 2.7 B     Total params
+ 5,438.442 Total estimated model params size (MB)
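The reported size is consistent with the parameter count at fp16: Lightning estimates model size as parameters times bytes per parameter, reported in units of 10^6 bytes. A quick cross-check:

# Cross-check of the summary above: ~2.7 B parameters at fp16 (2 bytes each).
total_params = 2.719e9             # ~1.5 B trainable + ~1.2 B frozen
size_mb = total_params * 2 / 1e6   # precision: 16 -> 2 bytes per parameter
print(f"{size_mb:,.1f} MB")        # ~5,438.0 MB, matching the log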
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:617: UserWarning: Checkpoint directory outputs/logs/train-v01/2024-09-29T13-09-44_train-v01_00/checkpoints exists and is not empty.
+ rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
+ [09/29 13:10:54 VTDM]: Epoch: 0, batch_num: inf
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 3. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
+ warning_cache.warn(
+ ############################## Sampling setting ##############################
+ Sampler: EulerEDMSampler
+ Discretization: EDMDiscretization
+ Guider: LinearPredictionGuider
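For reference, the guider named above ramps the classifier-free guidance scale linearly across the video frames, from min_scale on the first frame to max_scale on the last, per the config values logged earlier. A sketch of that schedule (based on the config values, not on the guider's source):

# Linear per-frame guidance schedule matching the config:
# num_frames: 16, min_scale: 1.0, max_scale: 2.5.
import torch

num_frames, min_scale, max_scale = 16, 1.0, 2.5
scale = torch.linspace(min_scale, max_scale, num_frames)  # shape [16]
# Per-frame CFG combine: out = uncond + scale * (cond - uncond)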
+ Sampling with EulerEDMSampler for 26 steps: 0%| | 0/26 [00:00<?, ?it/s]/mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
+ warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
+ Sampling with EulerEDMSampler for 26 steps: 96%|████████████████████████████████████████████████████████████████████████ | 25/26 [01:23<00:03, 3.34s/it]
+ /mnt/afs_intern/yanghaibo/installed/anaconda3/envs/general/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
+ warning_cache.warn(
+ Average Epoch time: 169.25 seconds
+ Average Peak memory 47670.27 MiB
+ [09/29 13:14:30 VTDM]: Epoch: 1, batch_num: inf