
UNet2DConditionModel.from_pretrained(model_key, subfolder="unet").to(self.device) CUDA OOM

#24 opened by komenge

Hi!
When I load a pretrained model, I get a CUDA out-of-memory error on this line:
UNet2DConditionModel.from_pretrained(model_key, subfolder="unet").to(self.device)
model_key is stabilityai/stable-diffusion-2-base.
Is there any way to adjust the from_pretrained call to reduce the CUDA memory it requires?
I see the max_memory parameter in from_pretrained, but I have no idea how to use it.
Many thanks!
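One common way to cut the memory footprint roughly in half is to load the weights in float16 instead of float32, since each half-precision parameter takes 2 bytes instead of 4. The sketch below demonstrates that 2x ratio with a plain tensor comparison, and then shows (as a commented, hedged example, since it downloads several GB of weights) how the same idea applies to the from_pretrained call via the torch_dtype and low_cpu_mem_usage keyword arguments that diffusers accepts:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A float16 parameter takes 2 bytes vs. 4 bytes for float32,
# so loading the UNet in half precision halves its weight memory.
n_params = 1_000_000  # illustrative parameter count
fp32_bytes = n_params * torch.tensor([], dtype=torch.float32).element_size()
fp16_bytes = n_params * torch.tensor([], dtype=torch.float16).element_size()
print(fp32_bytes // fp16_bytes)  # -> 2

# Hedged sketch of the actual load (requires diffusers and downloads
# the stable-diffusion-2-base weights, so it is left commented out):
#
# from diffusers import UNet2DConditionModel
# unet = UNet2DConditionModel.from_pretrained(
#     "stabilityai/stable-diffusion-2-base",
#     subfolder="unet",
#     torch_dtype=torch.float16,   # load weights in half precision
#     low_cpu_mem_usage=True,      # avoid materializing a full fp32 copy first
# ).to(device)
```

Note that max_memory is mainly useful together with device_map for sharding a model across several devices; for a single GPU that is simply too small, switching to float16 is usually the first thing to try.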
