Load pipelines, models, and schedulers

Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system.

Everything you need for inference or training is accessible with the from_pretrained() method.

This guide will show you how to load:

- pipelines from the Hub and locally
- different components into a pipeline
- checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights
- models and schedulers

DiffusionPipeline

💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works.

The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference.

from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)

You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class:

from diffusers import StableDiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)

A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class:

from diffusers import StableDiffusionImg2ImgPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id)
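
For example, the image-to-image pipeline expects an initial image in addition to a prompt. A minimal usage sketch, where the image path is a placeholder you should replace with your own file or URL:

from diffusers.utils import load_image

# replace with your own starting image (local path or URL)
init_image = load_image("path/to/initial_image.png")

prompt = "a fantasy landscape, detailed, vivid colors"
image = pipe(prompt=prompt, image=init_image, strength=0.75).images[0]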


Local pipeline

To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk:

git-lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5

Then pass the local path to from_pretrained():

from diffusers import DiffusionPipeline

repo_id = "./stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)

The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint.
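
If your local copy falls behind the checkpoint on the Hub, one way to refresh it is to re-download the repository into the same folder, for example with the huggingface_hub library (a sketch; snapshot_download is part of huggingface_hub, not 🧨 Diffusers):

from huggingface_hub import snapshot_download

# re-download the repository files into the local folder to pick up upstream changes
snapshot_download("runwayml/stable-diffusion-v1-5", local_dir="./stable-diffusion-v1-5")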

Swap components in a pipeline

You can customize the default components of any pipeline with another compatible component. Customization is important because:

- changing the scheduler lets you explore the trade-off between generation speed and quality
- different components of a model are typically trained independently, so you can swap in a component that performs better

To find out which schedulers are compatible for customization, you can use the compatibles property:

from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
stable_diffusion.scheduler.compatibles

Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository.

Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline:

from diffusers import DiffusionPipeline, EulerDiscreteScheduler

repo_id = "runwayml/stable-diffusion-v1-5"
scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True)

Safety checker

Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument:

from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True)
"""
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
"""

Reuse components across pipelines

You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components property to save the components:

from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)

components = stable_diffusion_txt2img.components

Then you can pass the components to another pipeline without reloading the weights into RAM:

stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components)

You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the components from the text-to-image pipeline in the image-to-image pipeline, except for the safety checker and feature extractor:

from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(
    vae=stable_diffusion_txt2img.vae,
    text_encoder=stable_diffusion_txt2img.text_encoder,
    tokenizer=stable_diffusion_txt2img.tokenizer,
    unet=stable_diffusion_txt2img.unet,
    scheduler=stable_diffusion_txt2img.scheduler,
    safety_checker=None,
    feature_extractor=None,
    requires_safety_checker=False,
)

Switch loaded pipelines

Many Diffusers pipelines use the same pretrained models as StableDiffusionPipeline and StableDiffusionXLPipeline but implement specific features to help you achieve better generation results. This section shows you how to use the from_pipe API to create multiple pipelines without increasing memory usage, so you can easily switch between pipelines to use different features.

Let’s take an example where we first create a StableDiffusionPipeline and then reuse the already loaded model components to create a StableDiffusionSAGPipeline to enhance generation quality.

We will generate an image of a bear eating pizza using Stable Diffusion with IP-Adapter:

from diffusers import DiffusionPipeline, StableDiffusionSAGPipeline
import torch
import gc
from diffusers.utils import load_image
from accelerate.utils import compute_module_sizes

base_repo = "SG161222/Realistic_Vision_V6.0_B1_noVAE"
num_inference_steps = 50
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png")
prompt = "bear eats pizza"
negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality"

pipe_sd = DiffusionPipeline.from_pretrained(base_repo, torch_dtype=torch.float16)
pipe_sd.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe_sd.set_ip_adapter_scale(0.6)
pipe_sd.to("cuda")

generator = torch.Generator(device="cpu").manual_seed(33)
out_sd = pipe_sd(
    prompt=prompt,
    negative_prompt=negative_prompt, 
    ip_adapter_image=image,
    num_inference_steps=num_inference_steps,
    generator=generator,
).images[0]

Let’s take a look at the image and print out the memory used:

def bytes_to_giga_bytes(bytes):
    return bytes / 1024 / 1024 / 1024
print(
    f"Max memory allocated: {bytes_to_giga_bytes(torch.cuda.max_memory_allocated())} GB"
)
Max memory allocated: 4.406213283538818 GB

Now, we can use from_pipe to switch to the SAG pipeline.

pipe_sag = StableDiffusionSAGPipeline.from_pipe(
    pipe_sd,
)

It already has the IP-Adapter loaded, so you can pass the same bear image as ip_adapter_image:

generator = torch.Generator(device="cpu").manual_seed(33)
out_sag = pipe_sag(
    prompt=prompt,
    negative_prompt=negative_prompt,
    ip_adapter_image=image,
    num_inference_steps=num_inference_steps,
    generator=generator,
    guidance_scale=1.0,
    sag_scale=0.75,
).images[0]

You can see a pretty nice improvement in the output.

Now StableDiffusionPipeline and StableDiffusionSAGPipeline co-exist with the same loaded model components, so you can use them interchangeably without additional memory.

print(
    f"Max memory allocated: {bytes_to_giga_bytes(torch.cuda.max_memory_allocated())} GB"
)
Max memory allocated: 4.406213283538818 GB

Let’s unload the IP-Adapter from the SAG pipeline. It’s important to note that methods like load_ip_adapter and unload_ip_adapter modify the state of the model components. Therefore, when you use these methods on one pipeline, it affects all other pipelines that share the same model components.

pipe_sag.unload_ip_adapter()

If you try to use the Stable Diffusion pipeline with the IP-Adapter again, it will fail:

generator = torch.Generator(device="cpu").manual_seed(33)
out_sd = pipe_sd(
    prompt=prompt,
    negative_prompt=negative_prompt, 
    ip_adapter_image=image,
    num_inference_steps=num_inference_steps,
    generator=generator,
).images[0]
AttributeError: 'NoneType' object has no attribute 'image_projection_layers'
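
Because the components are shared, the IP-Adapter has to be loaded again before either pipeline can use it. A minimal sketch, mirroring the earlier loading code:

# reloading the IP-Adapter restores it for every pipeline sharing these components
pipe_sd.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe_sd.set_ip_adapter_scale(0.6)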

Please note that pipeline methods may not function properly on a new pipeline created using the from_pipe method. For instance, the enable_model_cpu_offload method installs hooks on the model components based on a unique offloading sequence for each pipeline. Therefore, if the models are executed in a different order in the new pipeline, the CPU offloading may not work correctly.

To ensure proper functionality, we recommend re-applying the pipeline methods on the new pipeline created using the from_pipe method.
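
For example, if you want CPU offloading, enable it on the pipeline returned by from_pipe instead of relying on hooks installed for the original pipeline. A minimal sketch:

pipe_sag = StableDiffusionSAGPipeline.from_pipe(pipe_sd)
# re-apply the method so the hooks match the new pipeline's execution order
pipe_sag.enable_model_cpu_offload()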

You can also add or remove model components when you create a new pipeline. Let’s create an AnimateDiff pipeline with an additional MotionAdapter module:

from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)

pipe_animate = AnimateDiffPipeline.from_pipe(pipe_sd, motion_adapter=adapter)
pipe_animate.scheduler = DDIMScheduler.from_config(pipe_animate.scheduler.config, beta_schedule="linear")
# load ip_adapter again and load lora weights
pipe_animate.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe_animate.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out")
pipe_animate.to("cuda")

generator = torch.Generator(device="cpu").manual_seed(33)
pipe_animate.set_adapters("zoom-out", adapter_weights=0.75)
out = pipe_animate(
    prompt=prompt,
    num_frames=16,
    num_inference_steps=num_inference_steps,
    ip_adapter_image=image,
    generator=generator,
).frames[0]
export_to_gif(out, "out_animate.gif")

When creating multiple pipelines using the from_pipe method, it is important to note that the memory requirement will be determined by the pipeline with the highest memory usage. This means that regardless of the number of pipelines you create, the total memory requirement will always be the same as the highest memory requirement among the pipelines.

For example, if we create three pipelines - StableDiffusionPipeline, StableDiffusionSAGPipeline, and AnimateDiffPipeline - and AnimateDiffPipeline has the highest memory requirement, then the total memory usage is based on the memory requirement of the AnimateDiffPipeline.

Therefore, creating additional pipelines will not add up to the total memory requirement. Each pipeline can be used interchangeably without any additional memory overhead.

Did you know that you can use from_pipe with a community pipeline? Let me show you an example of using a long negative prompt and prompt weighting!

pipe_lpw = DiffusionPipeline.from_pipe(
    pipe_sd,
    custom_pipeline="lpw_stable_diffusion",
).to("cuda")

prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
generator = torch.Generator(device="cpu").manual_seed(33)
out_lpw = pipe_lpw.text2img(
    prompt,
    negative_prompt=neg_prompt,
    width=512,
    height=512,
    max_embeddings_multiples=3,
    num_inference_steps=num_inference_steps,
    generator=generator,
).images[0]

Let’s run StableDiffusionPipeline with the same inputs to compare: the result from the long prompt weighting pipeline is more aligned with the text prompt.

generator = torch.Generator(device="cpu").manual_seed(33)
out_sd = pipe_sd(
    prompt=prompt,
    negative_prompt=neg_prompt,
    generator=generator,
    num_inference_steps=num_inference_steps,
).images[0]
out_sd

You can easily switch between different pipelines using the from_pipe method, similar to turning a feature of your pipeline on and off. To switch between tasks, use the from_pipe method with AutoPipeline, which automatically identifies the pipeline class based on the task. You can find more information about this feature in the AutoPipeline guide.
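
For example, a sketch of moving the same loaded components to an image-to-image task with an AutoPipeline class (assuming the components are compatible with that task):

from diffusers import AutoPipelineForImage2Image

pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_sd)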

Checkpoint variants

A checkpoint variant is usually a checkpoint whose weights are:

- stored in a different floating point type, such as torch.float16, for lower precision and lower storage because it only requires half the bandwidth and storage to download
- non-exponential mean averaged (non-EMA) weights, which shouldn’t be used for inference but are useful for continuing to finetune a model

💡 When the checkpoints have identical model structures but were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of as variants (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5).

Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes.

| checkpoint type | weight name | argument for loading weights |
|---|---|---|
| original | diffusion_pytorch_model.bin | |
| floating point | diffusion_pytorch_model.fp16.bin | variant, torch_dtype |
| non-EMA | diffusion_pytorch_model.non_ema.bin | variant |

There are two important arguments to know for loading variants:

- torch_dtype defines the floating point precision of the loaded checkpoint; for example, pass torch_dtype=torch.float16 to load the fp16 variant in half-precision
- variant defines which files should be loaded from the repository; for example, pass variant="non_ema" to load the non-EMA files

from diffusers import DiffusionPipeline
import torch

# load fp16 variant
stable_diffusion = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
# load non_ema variant
stable_diffusion = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
)

To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder:

from diffusers import DiffusionPipeline

# save as fp16 variant
stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16")
# save as non-ema variant
stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema")

If you don’t save the variant to an existing folder, you must specify the variant argument; otherwise, it’ll throw an Exception because it can’t find the original checkpoint:

# 👎 this won't work
stable_diffusion = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
# πŸ‘ this works
stable_diffusion = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)

Models

Models are loaded with the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them.

Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder:

from diffusers import UNet2DConditionModel

repo_id = "runwayml/stable-diffusion-v1-5"
model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True)

Or directly from a repository’s directory:

from diffusers import UNet2DModel

repo_id = "google/ddpm-cifar10-32"
model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True)

You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained():

from diffusers import UNet2DConditionModel

model = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True
)
model.save_pretrained("./local-unet", variant="non_ema")
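
You can then load the saved variant back from the local folder by passing the same variant argument, a minimal sketch reusing the path from the example above:

model = UNet2DConditionModel.from_pretrained("./local-unet", variant="non_ema")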

Schedulers

Schedulers are loaded with the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file.

Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes:

from diffusers import StableDiffusionPipeline
from diffusers import (
    DDPMScheduler,
    DDIMScheduler,
    PNDMScheduler,
    LMSDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

repo_id = "runwayml/stable-diffusion-v1-5"

ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler")
lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")

# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True)
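
Because all of these schedulers share the same configuration, you can also swap the scheduler on an already loaded pipeline by rebuilding it from the current scheduler's config, a minimal sketch:

from diffusers import EulerDiscreteScheduler

pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)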

DiffusionPipeline explained

As a class method, DiffusionPipeline.from_pretrained() is responsible for two things:

- downloading the latest version of the folder structure required for inference and caching it; if the latest folder structure is available in the local cache, from_pretrained() reuses the cache and won’t re-download the files
- loading the cached weights into the correct pipeline class, retrieved from the model_index.json file, and returning an instance of it

The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5.

from diffusers import DiffusionPipeline

repo_id = "runwayml/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
print(pipeline)

You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components:

StableDiffusionPipeline {
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository:

.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors

You can access each of the components of the pipeline as an attribute to view its configuration:

pipeline.tokenizer
CLIPTokenizer(
    name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
    vocab_size=49408,
    model_max_length=77,
    is_fast=False,
    padding_side="right",
    truncation_side="right",
    special_tokens={
        "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "pad_token": "<|endoftext|>",
    },
    clean_up_tokenization_spaces=True
)

Every pipeline expects a model_index.json file that tells the DiffusionPipeline:

- which pipeline class to load from _class_name
- which version of 🧨 Diffusers was used to create the model in _diffusers_version
- which components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the library to load the class from, and class corresponds to the class name)

{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
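
The same information is available on a loaded pipeline through its config attribute, which mirrors model_index.json. A minimal sketch, reusing the pipeline loaded above:

pipeline.config
# FrozenDict with entries such as '_class_name', '_diffusers_version', and a
# (library, class) pair like ('unet', ('diffusers', 'UNet2DConditionModel')) for each component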