An attention processor is a class for applying different types of attention mechanisms.
Default processor for performing attention-related computations.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.
Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.
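As a minimal sketch of how a processor is swapped onto a model, the snippet below loads a UNet and assigns one of the default processors with set_attn_processor (the checkpoint name is only a placeholder):

```python
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

# Placeholder checkpoint; any UNet2DConditionModel checkpoint works here.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Use scaled dot-product attention (requires PyTorch 2.0, where it is the default) ...
unet.set_attn_processor(AttnProcessor2_0())

# ... or fall back to the basic default processor.
unet.set_attn_processor(AttnProcessor())
```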
( batch_size = 2 )
Cross-frame attention processor. Each frame attends to the first frame.
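A sketch of attaching this processor explicitly, assuming the Text2Video-Zero import path in diffusers (the pipeline already installs a cross-frame processor on its own; this only illustrates the call):

```python
from diffusers import TextToVideoZeroPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import (
    CrossFrameAttnProcessor,
)

# Placeholder checkpoint; any Stable Diffusion 1.x checkpoint works with this pipeline.
pipe = TextToVideoZeroPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Every frame in the batch attends to the keys/values of the first frame.
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
```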
( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method.
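A sketch of how these arguments are typically derived per attention layer when wiring Custom Diffusion processors onto a UNet; the layer-size logic mirrors the common UNet block layout and is an assumption for illustration, not part of this API:

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # placeholder checkpoint
)

custom_diffusion_procs = {}
for name in unet.attn_processors.keys():
    # attn2 layers are cross-attention: they consume encoder_hidden_states.
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]

    custom_diffusion_procs[name] = CustomDiffusionAttnProcessor(
        train_kv=cross_attention_dim is not None,  # only train K/V for cross-attention
        train_q_out=False,
        hidden_size=hidden_size,
        cross_attention_dim=cross_attention_dim,
    )

unet.set_attn_processor(custom_diffusion_procs)
```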
( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled dot-product attention.
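On PyTorch 2.0 and newer, this class is a drop-in replacement for CustomDiffusionAttnProcessor in a mapping like the custom_diffusion_procs sketch above. The values below are illustrative only (1280 matches a Stable Diffusion 1.x mid-block, 768 its text-encoder width):

```python
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor2_0

processor = CustomDiffusionAttnProcessor2_0(
    train_kv=True,
    train_q_out=False,
    hidden_size=1280,         # example value for a mid-block attention layer
    cross_attention_dim=768,  # example value for SD 1.x text embeddings
)
```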
( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to False) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.
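A minimal sketch of constructing this processor, assuming the xformers package is installed; the size values are illustrative placeholders:

```python
from diffusers.models.attention_processor import CustomDiffusionXFormersAttnProcessor

# Leaving attention_op=None lets xFormers pick the best operator for the hardware.
processor = CustomDiffusionXFormersAttnProcessor(
    train_kv=True,
    train_q_out=False,
    hidden_size=1280,          # example value
    cross_attention_dim=768,   # example value
    attention_op=None,
)
```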
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is currently 🧪 experimental in nature and can change in the future.
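A sketch of one way this processor is typically enabled, assuming a diffusers release that provides the fuse_qkv_projections helper (the checkpoint name is a placeholder):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder checkpoint; fused projections mainly help on PyTorch >= 2.0.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Fuses q/k/v projections for self-attention and k/v projections for
# cross-attention, switching the attention processors to the fused variant.
pipe.fuse_qkv_projections()
```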
( slice_size: int )
Processor for implementing sliced attention.
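A minimal sketch of assigning the sliced processor directly to a UNet (the checkpoint name is a placeholder):

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import SlicedAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # placeholder checkpoint
)

# Compute attention in smaller slices to reduce peak memory at some speed cost.
unet.set_attn_processor(SlicedAttnProcessor(slice_size=2))
```

At the pipeline level, calling pipe.enable_attention_slicing() offers the same memory/speed trade-off without constructing the processor by hand.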
( slice_size )
Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
( attention_op: Optional = None )
Parameters
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
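In practice this processor is usually enabled at the pipeline level; a sketch, assuming the xformers package is installed and the checkpoint name is a placeholder:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Switches the attention processors to XFormersAttnProcessor; passing
# attention_op=None lets xFormers choose the operator.
pipe.enable_xformers_memory_efficient_attention(attention_op=None)
```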
Processor for implementing flash attention using torch_npu. torch_npu supports only the fp16 and bf16 data types. If fp32 is used, F.scaled_dot_product_attention is used for the computation instead, but the acceleration effect on the NPU is not significant.
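A minimal sketch, assuming a diffusers build that ships AttnProcessorNPU and a machine with torch_npu installed (the checkpoint name is a placeholder):

```python
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessorNPU

# fp16 or bf16 is required to benefit from NPU flash attention.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
unet.set_attn_processor(AttnProcessorNPU())
```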