Baked VAE? Cannot load <class 'diffusers.models.autoencoder_kl.AutoencoderKL'>

#3
by samiam - opened

I've gotten numerous other models to work just fine, but for whatever reason when I try to initialize this model I get:

ValueError: Cannot load <class 'diffusers.models.autoencoder_kl.AutoencoderKL'> from digiplay/Juggernaut_final because the following keys are missing:
decoder.mid_block.attentions.0.value.bias, decoder.mid_block.attentions.0.proj_attn.bias, decoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.query.weight, decoder.mid_block.attentions.0.query.bias, encoder.mid_block.attentions.0.value.bias, encoder.mid_block.attentions.0.key.weight, decoder.mid_block.attentions.0.key.weight, decoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.query.weight, decoder.mid_block.attentions.0.key.bias, encoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.proj_attn.bias, encoder.mid_block.attentions.0.key.bias, encoder.mid_block.attentions.0.query.bias, encoder.mid_block.attentions.0.proj_attn.weight.

Note that I am using the Hugging Face diffusers DreamBooth training script to fine-tune this model.
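
For reference, the failure can be reproduced outside the training script with just the VAE load. This is only a sketch, assuming the standard diffusers repo layout where the VAE weights live in a "vae" subfolder:

# Minimal repro sketch (assumes the standard "vae" subfolder layout)
from diffusers.models import AutoencoderKL

vae = AutoencoderKL.from_pretrained("digiplay/Juggernaut_final", subfolder="vae")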

Hi,
I remember that after May 25th my Google Colab + diffusers code also suddenly started showing many error messages, but it still worked πŸ˜‚ I guessed there was some problem in the pipeline code. Testing again now, I suspect the issue is related to torch.float16. If you use the code below, it should be okay.

(If it works, please reply and let me know.)

!pip install diffusers==0.11.1
!pip install transformers scipy ftfy accelerate


import torch
from diffusers import DiffusionPipeline
from diffusers.models import AutoencoderKL

modelid = "digiplay/Juggernaut_final"
#pipe = DiffusionPipeline.from_pretrained(modelid, torch_dtype=torch.float16)

# Choose the external VAE you prefer (comment/uncomment one):
#560001
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema")
#840001
#vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Load the pipeline with the external VAE instead of the baked-in one
pipe = DiffusionPipeline.from_pretrained(modelid, vae=vae)
pipe = pipe.to("cuda")

# Disable the safety checker
pipe.safety_checker = lambda images, clip_input: (images, False)

neg = "(worst quality, low quality:1.4)"  # or your own negative prompt
pro = "your_prompt"

editstep = 19
image = pipe(pro,
             negative_prompt=neg,
             num_inference_steps=editstep,
             guidance_scale=7,
             height=512,
             width=512).images[0]
image
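
If a newer diffusers release loads this model correctly, the float16 path mentioned above can be tried roughly like this (a sketch continuing the cell above, not verified against this model; the fp16 variable names are just illustrative):

# Sketch: same pipeline in half precision, casting the external VAE as well
vae_fp16 = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema", torch_dtype=torch.float16)
pipe_fp16 = DiffusionPipeline.from_pretrained(modelid, vae=vae_fp16, torch_dtype=torch.float16)
pipe_fp16 = pipe_fp16.to("cuda")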


Note: you can choose whichever VAE you like by commenting/uncommenting the corresponding line in the code section:

#560001
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema")
#840001
#vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
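
An alternative (a sketch, assuming the pipeline from the code above is already loaded) is to swap the VAE on the existing pipeline instead of passing it to from_pretrained:

# Sketch: replace the VAE on an already-loaded pipeline,
# keeping it on the same device and dtype as the rest of the model
pipe.vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(
    device=pipe.device, dtype=pipe.unet.dtype
)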


Hi~ is it okay now? πŸ€—

Hi! Yes, thanks, it is working now. For whatever reason only Juggernaut didn't work with my version of diffusers, so I updated to the latest version (which actually gave me a different error with something else later on LOL).
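
For anyone hitting the same error, upgrading in Colab looks roughly like this (restart the runtime afterwards before re-importing diffusers):

!pip install -U diffusers transformers accelerate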
