I made a colab that worked perfectly 3 days ago, but now I get this bug while installing. What should I do?
The colab is here: https://colab.research.google.com/gist/Quick-Eyed-Sky/a0eb96de22ec38aa62b319794942cc31/qes-v1-stable-cascade.ipynb
Loading pipeline components...: 100% 6/6 [00:01<00:00, 3.78it/s]
The config attributes {'c_in': 16} were passed to StableCascadeUnet, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading pipeline components...: 60% 3/5 [00:02<00:00, 18.39it/s]
The config attributes {'c_in': 4} were passed to StableCascadeUnet, but are not expected and will be ignored. Please verify your config.json configuration file.
ValueError Traceback (most recent call last)
in <cell line: 16>()
14 decoder_model_id = "stabilityai/stable-cascade"
15 prior = StableCascadePriorPipeline.from_pretrained(prior_model_id, torch_dtype=torch.bfloat16).to(device)
---> 16 decoder = StableCascadeDecoderPipeline.from_pretrained(decoder_model_id, torch_dtype=torch.float16).to(device)
17
18 # Helper function definitions
5 frames
/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py in load_model_dict_into_meta(model, state_dict, device, dtype, model_name_or_path)
152 if empty_state_dict[param_name].shape != param.shape:
153 model_name_or_path_str = f"{model_name_or_path} " if model_name_or_path is not None else ""
--> 154 raise ValueError(
155 f"Cannot load {model_name_or_path_str}because {param_name} expected shape {empty_state_dict[param_name]}, but got {param.shape}. If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example."
156 )
ValueError: Cannot load /root/.cache/huggingface/hub/models--stabilityai--stable-cascade/snapshots/f2a84281d6f8db3c757195dd0c9a38dbdea90bb4/decoder because embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
This happens because in the custom diffusers branch, which is currently needed to run Stable Cascade, c_in was replaced with in_channels. You can change it back manually in the config.json files.

Alternatively, you could use my fork of the custom diffusers branch, where I changed all in_channels back to c_in:
https://github.com/EtienneDosSantos/diffusers/tree/wuerstchen-v3
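If you'd rather stay on the unmodified custom branch, a sketch of the config.json fix that works from a Colab cell: download the model snapshot to a local folder (e.g. with huggingface_hub.snapshot_download), rename the key back in every config.json, and then point from_pretrained at the patched folder. The exact key names here are assumptions based on the error above; the demo below only shows the patching step on a dummy config:

```python
import json
from pathlib import Path

def patch_configs(model_dir: str) -> None:
    """Rename 'in_channels' back to 'c_in' in every config.json under model_dir."""
    for cfg_path in Path(model_dir).rglob("config.json"):
        cfg = json.loads(cfg_path.read_text())
        if "in_channels" in cfg:
            cfg["c_in"] = cfg.pop("in_channels")
            cfg_path.write_text(json.dumps(cfg, indent=2))

# Demo on a dummy config; in Colab you would first fetch the real snapshot,
# e.g. snapshot_download("stabilityai/stable-cascade", local_dir="demo_model"),
# then run patch_configs on that folder.
demo = Path("demo_model/decoder")
demo.mkdir(parents=True, exist_ok=True)
(demo / "config.json").write_text(json.dumps({"in_channels": 16, "other": 1}))
patch_configs("demo_model")
print((demo / "config.json").read_text())
```

After patching, load the pipeline from the local folder instead of the hub id, e.g. StableCascadeDecoderPipeline.from_pretrained("demo_model", ...), so the edited configs are the ones actually used.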
Thanks! I had just found it in the discussions.
As it's a colab, I can't change the config.json directly; I can only download it.
I tried to change
decoder = StableCascadeDecoderPipeline.from_pretrained(decoder_model_id, torch_dtype=torch.float16).to(device)
to
decoder = StableCascadeDecoderPipeline.from_pretrained(decoder_model_id, torch_dtype=torch.float16, in_channels=4).to(device)
in my colab, but the error keeps appearing.
My question becomes: how can I fix it from a colab?
Well, for now I'll use your fork :-)