Error loading model
Is anyone else getting a runtime error when trying to load sd_xl_base_0.9.safetensors?
changing setting sd_model_checkpoint to sd_xl_base_0.9.safetensors [1f69731261]: RuntimeError
Traceback (most recent call last):
File "D:\Stable Diffusion\stable-diffusion-webui\modules\shared.py", line 605, in set
self.data_labels[key].onchange()
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\webui.py", line 226, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_models.py", line 556, in reload_model_weights
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_models.py", line 286, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
Automatic1111, which it looks like you are using, does not support SD XL models (yet).
ComfyUI supports the new SD XL models because it uses the standard Hugging Face diffusers. I don't know about EasyDiffusion, sorry.
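The size mismatches in your traceback are the giveaway: SD XL conditions its cross-attention on 2048-dim text embeddings while SD 1.x uses 768, so the current A1111 model class simply has the wrong shapes. If you want to rule out a corrupted download, here is a minimal sketch for inspecting the checkpoint yourself (assuming the safetensors package from the webui venv; the path and key below are taken from your traceback, so adjust them to wherever you saved the file):

from safetensors import safe_open

# Path and tensor name are assumptions based on the traceback above; change them to match your install.
path = r"D:\Stable Diffusion\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_0.9.safetensors"
key = "model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight"

with safe_open(path, framework="pt", device="cpu") as f:
    shape = f.get_slice(key).get_shape()  # reads header metadata only, no need to load the whole multi-GB file

print(shape)  # [640, 2048] -> genuine SD XL checkpoint; [640, 768] -> SD 1.x

If it prints [640, 2048], the file itself is fine and it's just the webui that can't load it yet.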
Also, Vlad's A1111 fork now supports diffusers: https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md#update-for-07082023
Vlad's A1111 is supposed to work, but it keeps giving this error.
Same thing here.
Had the same issue; I pulled the latest Automatic1111 repo and it worked.
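In case it helps anyone landing here later: SD XL support was added to Automatic1111 in the 1.5.x releases (if I remember right), so "pulling the repo" just means updating your install folder and relaunching, roughly:

cd "D:\Stable Diffusion\stable-diffusion-webui"
git pull
webui-user.bat

(Path taken from the traceback above; adjust to your own setup.)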