Correct ComfyUI workflow
Please share usable workflows for ComfyUI, for the 1-step model and the others.
Thanks for your attention. We have uploaded ComfyUI workflows for the Hyper-SD LoRAs, and we are still working on the workflow for the 1-step UNet, which will be uploaded as soon as possible.
I got the following error using the provided ComfyUI workflow:
Error occurred when executing CheckpointLoaderSimple:

'model.diffusion_model.input_blocks.0.0.weight'

  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\comfy\sd.py", line 513, in load_checkpoint_guess_config
    model_config = model_detection.model_config_from_unet(sd, "model.diffusion_model.")
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\comfy\model_detection.py", line 194, in model_config_from_unet
    unet_config = detect_unet_config(state_dict, unet_key_prefix)
  File "D:\BaiduSyncdisk\ComfyUI-aki-v1.1\comfy\model_detection.py", line 78, in detect_unet_config
    model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
@Stone2013
Hi~ It seems that the SD15/SDXL pipeline checkpoint you are using is invalid.
Could you tell us which checkpoint and workflow you are using so we can check it for you?
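The KeyError on 'model.diffusion_model.input_blocks.0.0.weight' means the file does not contain the key prefix that CheckpointLoaderSimple expects. One way to see what you actually downloaded is to read the tensor names from the safetensors header, which is plain JSON. This is a diagnostic sketch using only the standard library; the helper names and the prefix heuristic are my own, not part of ComfyUI:

```python
import json
import struct

def safetensors_keys(path):
    """Read tensor names from a .safetensors file without loading any weights.
    Format: 8 bytes little-endian header length, then a JSON header."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

def classify_checkpoint(path):
    """Rough heuristic: full pipeline checkpoints prefix UNet keys with
    'model.diffusion_model.'; raw LDM/diffusers UNet dumps do not."""
    keys = safetensors_keys(path)
    if any(k.startswith("model.diffusion_model.") for k in keys):
        return "full checkpoint (UNet + VAE + text encoder)"
    if any(k.startswith(("input_blocks.", "down_blocks.")) for k in keys):
        return "raw UNet"
    return "unknown"
```

If this reports "raw UNet", CheckpointLoaderSimple will fail exactly as in the traceback above; a truncated download, by contrast, would typically fail while parsing the header itself.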
The fp16 version of the model works fine. Perhaps the original model is too large and my download was corrupted.
@Stone2013
Hi~
The checkpoint Hyper-SDXL-1step-Unet.safetensors is a raw UNet, not a ComfyUI checkpoint that bundles the UNet, VAE, and text encoder together.
You would need to load it with diffusers for inference.
In any case, we have provided the 1-step SDXL UNet as a ComfyUI checkpoint, Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors, which is the fp16 one you found works.
Good luck 👏~
Is it possible to release the full fp32 checkpoint for ComfyUI as well?
Thanks!
Hi,
@Vigilence
Could you describe what use cases in ComfyUI the fp32 weights would enable?
Because the file is so large, we are worried that few users would download it.
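The size concern is easy to quantify. Taking the SDXL UNet at roughly 2.6 billion parameters (an approximate figure, not from this thread), fp32 doubles the on-disk footprint of fp16:

```python
def checkpoint_size_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate on-disk size of a raw weight dump, ignoring header overhead."""
    return n_params * bytes_per_param / 1024**3

SDXL_UNET_PARAMS = 2.6e9  # rough estimate, not an exact parameter count

fp16_gib = checkpoint_size_gib(SDXL_UNET_PARAMS, 2)  # ~4.8 GiB
fp32_gib = checkpoint_size_gib(SDXL_UNET_PARAMS, 4)  # ~9.7 GiB
```

A ComfyUI checkpoint bundles the VAE and text encoders on top of the UNet, so the real fp32 file would be larger still.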
It is working very well; I get results in around 1.5 seconds on an RX 6600 (at 512x512; yes, even though it is SDXL, it works well at 512x512 and 512x768).
It is working very well; I get results in around 1.5 seconds on an RX 6600. If only we could do this with all the other checkpoints we have. Is that not possible at this point?
We are working on it, and looking forward to releasing the LoRA version soon!