Spaces: Running on A10G
Error on inference #11
by umair-imran - opened
Hi, I am receiving the error below during ControlNet inpainting inference. It looks like a model-dimension mismatch; the failure occurs in a downsample_block of the ControlNet.
RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)
Any suggestion on how to resolve it?
Thanks
@IamUmairImran ,
You have likely chosen incompatible models. Can you post a screenshot of which models you selected?
@kadirnar
Here is the screenshot.
I tried canny and depth but got the same error for both.
And here is the traceback:
Traceback (most recent call last):
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_canny.py", line 99, in generate_image
output = pipe(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py", line 521, in __call__
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/controlnet.py", line 483, in forward
sample, res_samples = downsample_block(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 837, in forward
hidden_states = attn(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
hidden_states = block(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/attention.py", line 307, in forward
attn_output = self.attn2(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/cross_attention.py", line 205, in forward
return self.processor(
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/diffusers/models/cross_attention.py", line 449, in __call__
key = attn.to_k(encoder_hidden_states)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/umair/Stable-Diffusion-ControlNet-WebUI/.env/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)
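The shapes in the final RuntimeError hint at the likely cause: 768 is the text-embedding width of Stable Diffusion 1.x's CLIP text encoder, while 1024 is that of SD 2.x's OpenCLIP encoder, so the traceback suggests an SD 2.x base model was paired with an SD 1.5 ControlNet. A minimal sketch of the matrix-shape rule behind the error (the helper `can_matmul` is hypothetical, for illustration only):

```python
def can_matmul(mat1_shape, mat2_shape):
    """mat1 of shape (m, k) can be multiplied by mat2 of shape (k2, n)
    only when the inner dimensions agree: k == k2."""
    return mat1_shape[1] == mat2_shape[0]

# Shapes from the traceback: SD 2.x text embeddings (154, 1024)
# against the SD 1.5 ControlNet's to_k projection (768, 320).
print(can_matmul((154, 1024), (768, 320)))  # inner dims 1024 vs 768: fails
print(can_matmul((154, 768), (768, 320)))   # SD 1.5 embeddings would fit
```

So the fix is to keep the base model and the ControlNet in the same Stable Diffusion family (both 1.x or both 2.x).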
It worked. Thanks
kadirnar changed discussion status to closed