Is it possible to get UNet only?

#1
by duuuuuuuden - opened

For some reason, the nf4 checkpoint with baked text encoders and VAE takes more VRAM than a separately loaded fp8 UNet, T5-XXL in fp8e4m3fn, and fp16 CLIP.

My GPU is an RTX 2060 Super (8GB VRAM); I'm not sure bnb nf4 is even supported on it.
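For rough intuition about where the VRAM goes, here is a back-of-envelope comparison of weight storage at different precisions. This is a sketch only: the parameter counts and the assumption that the baked checkpoint keeps its text encoders at fp16 are mine, not confirmed figures for these checkpoints, and it ignores activations, attention buffers, and quantization metadata.

```python
# Rough weight-memory comparison (ignores activations and quantization
# metadata). Parameter counts below are ASSUMPTIONS, not exact figures.

def weight_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB for a given precision."""
    return n_params * bits_per_param / 8 / 2**30

FLUX_UNET = 12e9    # assumed ~12B transformer parameters
T5XXL     = 4.7e9   # assumed ~4.7B encoder parameters
CLIP_L    = 0.12e9  # assumed ~123M parameters

# Separately loaded components, as described in the question
separate = (weight_gib(FLUX_UNET, 8)     # fp8 UNet
            + weight_gib(T5XXL, 8)       # fp8e4m3fn text encoder
            + weight_gib(CLIP_L, 16))    # fp16 CLIP

# Baked checkpoint: nf4 UNet, but encoders assumed to stay fp16
baked_nf4 = (weight_gib(FLUX_UNET, 4)
             + weight_gib(T5XXL, 16)
             + weight_gib(CLIP_L, 16))

print(f"separate fp8 mix: ~{separate:.1f} GiB")
print(f"baked nf4 mix:    ~{baked_nf4:.1f} GiB")
```

If the baked checkpoint quantizes only the UNet, most of the savings can be eaten back by heavier text encoders, which would be consistent with the behavior described above.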

Thanks in advance!

You should probably upgrade your GPU if you plan on running inference with models this large. Alternatively, stick to hosted services for those models until you can afford a new card. You can load components separately by simply NOT dragging the CLIP and VAE links from the checkpoint loader. Also, did you check that you have the correct launch arguments for ComfyUI?

Yeah, I know my GPU should be replaced, but still:

With the fp8 (e4m3fn) dev model I'm able to generate native 1920x1080 images in ~7 minutes (20 steps), like this one:

[attached image: 1920x1080 generation]

Using basic workflow

[attached image: basic workflow]

But with nf4 I'm getting an OOM at the sampler stage with zero progress.

If I load CLIP and VAE separately, it processes the first 2/20 steps and then gives me an OOM again.

I know I shouldn't use such a big resolution, but it's a PoC.
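An OOM at the sampler stage at 1920x1080 is plausible regardless of weight precision, because in DiT-style models like FLUX self-attention memory grows roughly quadratically with the number of image tokens. The sketch below assumes an 8x VAE downscale plus 2x2 patchify (one token per 16x16 pixel block); the exact factors are an assumption on my part.

```python
# Why the sampler OOMs at high resolution: self-attention memory scales
# roughly with tokens^2. ASSUMES one token per 16x16 pixel block
# (8x VAE downscale, 2x2 patchify); exact factors may differ.

def image_tokens(width: int, height: int, patch_px: int = 16) -> int:
    """Approximate number of image tokens the transformer attends over."""
    return (width // patch_px) * (height // patch_px)

base = image_tokens(1024, 1024)  # a common "native" resolution
big  = image_tokens(1920, 1080)

print(f"1024x1024 tokens: {base}")
print(f"1920x1080 tokens: {big}")
print(f"approx attention memory ratio: ~{(big / base) ** 2:.1f}x")
```

So even if nf4 halves the weight footprint versus fp8, the per-step activation and attention buffers at this resolution can still push an 8GB card over the edge.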

Have you updated your ComfyUI? Have you added these arguments when launching? `--fp8_e4m3fn-text-enc --fp8_e4m3fn-unet`
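For concreteness, a launch line with those flags might look like the following. The ComfyUI path and the extra `--lowvram` flag are assumptions for an 8GB card; check `python main.py --help` in your install for the options your build actually supports.

```shell
# Hypothetical launch line; adjust the path to your ComfyUI checkout.
cd ~/ComfyUI
python main.py --fp8_e4m3fn-text-enc --fp8_e4m3fn-unet --lowvram
```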

Also, this is not the place to request this: this is not the repo for the custom node linked in my other NF4 repo. There is NO UNet-loading implementation for NF4 right now. Understand? This is not on me, so go post requests in the right place.

silveroxides changed discussion status to closed
