The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
By default, ControlNetModel should be loaded with from_pretrained(), but it can also be loaded from the original ControlNet checkpoint format using FromOriginalModelMixin.from_single_file as follows:
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"  # can also be a local path
controlnet = ControlNetModel.from_single_file(url)

url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors"  # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
```
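With the pipeline assembled, generation follows the usual ControlNet workflow: pass a conditioning image alongside the text prompt. A minimal sketch, assuming a pre-computed canny edge map saved locally (the file path and prompt are placeholders):

```py
from diffusers.utils import load_image

# Hypothetical pre-computed canny edge map used as the spatial condition.
canny_image = load_image("path/to/canny_edge_map.png")

pipe = pipe.to("cuda")
image = pipe(
    "a photo of a futuristic city",  # text prompt
    image=canny_image,               # spatial conditioning input
    num_inference_steps=20,
).images[0]
image.save("output.png")
```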
( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 )
Parameters

- `in_channels` (`int`, defaults to 4) — The number of channels in the input sample.
- `flip_sin_to_cos` (`bool`, defaults to `True`) — Whether to flip the sin to cos in the time embedding.
- `freq_shift` (`int`, defaults to 0) — The frequency shift to apply to the time embedding.
- `down_block_types` (`tuple[str]`, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) — The tuple of downsample blocks to use.
- `only_cross_attention` (`Union[bool, Tuple[bool]]`, defaults to `False`) —
- `block_out_channels` (`tuple[int]`, defaults to `(320, 640, 1280, 1280)`) — The tuple of output channels for each block.
- `layers_per_block` (`int`, defaults to 2) — The number of layers per block.
- `downsample_padding` (`int`, defaults to 1) — The padding to use for the downsampling convolution.
- `mid_block_scale_factor` (`float`, defaults to 1) — The scale factor to use for the mid block.
- `act_fn` (`str`, defaults to `"silu"`) — The activation function to use.
- `norm_num_groups` (`int`, optional, defaults to 32) — The number of groups to use for the normalization. If `None`, normalization and activation layers are skipped in post-processing.
- `norm_eps` (`float`, defaults to 1e-5) — The epsilon to use for the normalization.
- `cross_attention_dim` (`int`, defaults to 1280) — The dimension of the cross attention features.
- `transformer_layers_per_block` (`int` or `Tuple[int]`, optional, defaults to 1) — The number of transformer blocks of type BasicTransformerBlock. Only relevant for ~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, ~models.unet_2d_blocks.UNetMidBlock2DCrossAttn.
- `encoder_hid_dim` (`int`, optional, defaults to `None`) — If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim` dimension to `cross_attention_dim`.
- `encoder_hid_dim_type` (`str`, optional, defaults to `None`) — If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`.
- `attention_head_dim` (`Union[int, Tuple[int]]`, defaults to 8) — The dimension of the attention heads.
- `use_linear_projection` (`bool`, defaults to `False`) —
- `class_embed_type` (`str`, optional, defaults to `None`) — The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`, `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- `addition_embed_type` (`str`, optional, defaults to `None`) — Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or `"text"`. `"text"` will use the TextTimeEmbedding layer.
- `num_class_embeds` (`int`, optional, defaults to 0) — Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing class conditioning with `class_embed_type` equal to `None`.
- `upcast_attention` (`bool`, defaults to `False`) —
- `resnet_time_scale_shift` (`str`, defaults to `"default"`) — Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from `default` or `scale_shift`.
- `projection_class_embeddings_input_dim` (`int`, optional, defaults to `None`) — The dimension of the `class_labels` input when `class_embed_type="projection"`. Required when `class_embed_type="projection"`.
- `controlnet_conditioning_channel_order` (`str`, defaults to `"rgb"`) — The channel order of the conditioning image. Will convert to `rgb` if it's `bgr`.
- `conditioning_embedding_out_channels` (`tuple[int]`, optional, defaults to `(16, 32, 96, 256)`) — The tuple of output channels for each block in the `conditioning_embedding` layer.
- `global_pool_conditions` (`bool`, defaults to `False`) — TODO(Patrick) - unused parameter.
- `addition_embed_type_num_heads` (`int`, defaults to 64) — The number of heads to use for the TextTimeEmbedding layer.

A ControlNet model.
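Checkpoints trained for a specific condition can be loaded directly from the Hub. A minimal sketch using one of the original canny checkpoints published by lllyasviel:

```py
import torch
from diffusers import ControlNetModel

# Load a ControlNet trained on canny edge conditioning for Stable Diffusion v1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
```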
( sample: Tensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: Tensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple
Parameters

- `sample` (`torch.Tensor`) — The noisy input tensor.
- `timestep` (`Union[torch.Tensor, float, int]`) — The number of timesteps to denoise an input.
- `encoder_hidden_states` (`torch.Tensor`) — The encoder hidden states.
- `controlnet_cond` (`torch.Tensor`) — The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`.
- `conditioning_scale` (`float`, defaults to 1.0) — The scale factor for ControlNet outputs.
- `class_labels` (`torch.Tensor`, optional, defaults to `None`) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- `timestep_cond` (`torch.Tensor`, optional, defaults to `None`) — Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the timestep_embedding passed through the `self.time_embedding` layer to obtain the final timestep embeddings.
- `attention_mask` (`torch.Tensor`, optional, defaults to `None`) — An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask is kept, otherwise if `0` it is discarded. The mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.
- `added_cond_kwargs` (`dict`) — Additional conditions for the Stable Diffusion XL UNet.
- `cross_attention_kwargs` (`dict[str]`, optional, defaults to `None`) — A kwargs dictionary that if specified is passed along to the AttnProcessor.
- `guess_mode` (`bool`, defaults to `False`) — In this mode, the ControlNet encoder tries its best to recognize the content of the input even if you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- `return_dict` (`bool`, defaults to `True`) — Whether or not to return a ControlNetOutput instead of a plain tuple.

Returns

ControlNetOutput or `tuple`

If `return_dict` is `True`, a ControlNetOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

The ControlNetModel forward method.
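As a rough illustration of the expected inputs, a standalone forward pass might look like the following sketch. The tensor shapes are assumptions for a 512x512 Stable Diffusion v1.5 setup, and `controlnet` is assumed to be a ControlNetModel loaded in float32:

```py
import torch

sample = torch.randn(1, 4, 64, 64)               # noisy latents
timestep = torch.tensor([10])                    # current denoising step
encoder_hidden_states = torch.randn(1, 77, 768)  # text encoder hidden states
controlnet_cond = torch.rand(1, 3, 512, 512)     # conditioning image in [0, 1]

output = controlnet(
    sample,
    timestep,
    encoder_hidden_states=encoder_hidden_states,
    controlnet_cond=controlnet_cond,
    conditioning_scale=1.0,
)
down_res, mid_res = output.down_block_res_samples, output.mid_block_res_sample
```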
( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 )
Parameters

- `unet` (`UNet2DConditionModel`) — The UNet model weights to copy to the ControlNetModel. All configuration options are also copied where applicable.

Instantiate a ControlNetModel from UNet2DConditionModel.
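For example, a new ControlNet can be initialized from the UNet of an existing Stable Diffusion checkpoint; a minimal sketch:

```py
from diffusers import ControlNetModel, UNet2DConditionModel

# Copy the architecture and (by default) the encoder weights of the UNet.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
controlnet = ControlNetModel.from_unet(unet)
```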
( slice_size: Union )
Parameters

- `slice_size` (`str` or `int` or `list(int)`, optional, defaults to `"auto"`) — When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` must be a multiple of `slice_size`.

Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. This is useful for saving some memory in exchange for a small decrease in speed.
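For example, assuming `controlnet` is an instantiated ControlNetModel:

```py
# Halve the attention computation to reduce peak memory.
controlnet.set_attention_slice("auto")
```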
( processor: Union )
Parameters

- `processor` (`dict` of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
Disables custom attention processors and sets the default attention implementation.
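A minimal sketch of both methods, assuming `controlnet` is an instantiated ControlNetModel: swap in the PyTorch 2.0 scaled-dot-product processor for every attention layer, then restore the default implementation:

```py
from diffusers.models.attention_processor import AttnProcessor2_0

# Use PyTorch 2.0 scaled-dot-product attention in all Attention layers.
controlnet.set_attn_processor(AttnProcessor2_0())

# Revert to the library's default attention processor.
controlnet.set_default_attn_processor()
```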
( down_block_res_samples: Tuple mid_block_res_sample: Tensor )
Parameters

- `down_block_res_samples` (`tuple[torch.Tensor]`) — A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should be of shape `(batch_size, channel * resolution, height // resolution, width // resolution)`. Output can be used to condition the original UNet's downsampling activations.
- `mid_block_res_sample` (`torch.Tensor`) — The activation of the middle block (the lowest sample resolution). Each tensor should be of shape `(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution)`. Output can be used to condition the original UNet's middle block activation.

The output of ControlNetModel.
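These residuals are consumed by the base UNet during denoising. A sketch of how a pipeline wires them in, assuming `output` is a ControlNetOutput and the remaining variables come from the surrounding denoising loop:

```py
# Inject the ControlNet residuals into the base UNet's forward pass.
noise_pred = unet(
    sample,
    timestep,
    encoder_hidden_states=encoder_hidden_states,
    down_block_additional_residuals=output.down_block_res_samples,
    mid_block_additional_residual=output.mid_block_res_sample,
).sample
```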
( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = <class 'jax.numpy.float32'> flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = <flax.linen.module._Sentinel object> name: Optional = None )
Parameters

- `sample_size` (`int`, optional) — The size of the input sample.
- `in_channels` (`int`, optional, defaults to 4) — The number of channels in the input sample.
- `down_block_types` (`Tuple[str]`, optional, defaults to `("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")`) — The tuple of downsample blocks to use.
- `block_out_channels` (`Tuple[int]`, optional, defaults to `(320, 640, 1280, 1280)`) — The tuple of output channels for each block.
- `layers_per_block` (`int`, optional, defaults to 2) — The number of layers per block.
- `attention_head_dim` (`int` or `Tuple[int]`, optional, defaults to 8) — The dimension of the attention heads.
- `num_attention_heads` (`int` or `Tuple[int]`, optional) — The number of attention heads.
- `cross_attention_dim` (`int`, optional, defaults to 768) — The dimension of the cross attention features.
- `dropout` (`float`, optional, defaults to 0) — Dropout probability for down, up and bottleneck blocks.
- `flip_sin_to_cos` (`bool`, optional, defaults to `True`) — Whether to flip the sin to cos in the time embedding.
- `freq_shift` (`int`, optional, defaults to 0) — The frequency shift to apply to the time embedding.
- `controlnet_conditioning_channel_order` (`str`, optional, defaults to `rgb`) — The channel order of the conditioning image. Will convert to `rgb` if it's `bgr`.
- `conditioning_embedding_out_channels` (`tuple`, optional, defaults to `(16, 32, 96, 256)`) — The tuple of output channels for each block in the `conditioning_embedding` layer.

A ControlNet model.
This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its general usage and behavior.

Inherent JAX features such as the following are supported:

- Just-In-Time (JIT) compilation
- Automatic Differentiation
- Vectorization
- Parallelization
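A minimal sketch of loading the Flax variant; `from_pt=True` (conversion from PyTorch weights) is an assumption for checkpoints that ship without Flax weights:

```py
import jax.numpy as jnp
from diffusers import FlaxControlNetModel

# from_pretrained returns the module definition and its parameters separately.
controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny",
    from_pt=True,
    dtype=jnp.float32,
)
```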
( down_block_res_samples: Array mid_block_res_sample: Array )
The output of FlaxControlNetModel.
Returns a new object replacing the specified fields with new values.