A Transformer model for image-like data from PixArt-Alpha and PixArt-Sigma.
class diffusers.PixArtTransformer2DModel
( num_attention_heads: int = 16, attention_head_dim: int = 72, in_channels: int = 4, out_channels: Optional = 8, num_layers: int = 28, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: Optional = 1152, attention_bias: bool = True, sample_size: int = 128, patch_size: int = 2, activation_fn: str = 'gelu-approximate', num_embeds_ada_norm: Optional = 1000, upcast_attention: bool = False, norm_type: str = 'ada_norm_single', norm_elementwise_affine: bool = False, norm_eps: float = 1e-06, interpolation_scale: Optional = None, use_additional_conditions: Optional = None, caption_channels: Optional = None, attention_type: Optional = 'default' )
A 2D Transformer model as introduced in the PixArt family of models (PixArt-Alpha: https://arxiv.org/abs/2310.00426, PixArt-Sigma: https://arxiv.org/abs/2403.04692).
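For orientation, here is a minimal sketch of loading the transformer weights of a published PixArt checkpoint. The repository id and subfolder below are illustrative; substitute whichever checkpoint you actually use.

```python
import torch

from diffusers import PixArtTransformer2DModel

# Example checkpoint id; any PixArt-Alpha / PixArt-Sigma repository that ships a
# "transformer" subfolder is loaded the same way.
transformer = PixArtTransformer2DModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
```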
forward
( hidden_states: Tensor, encoder_hidden_states: Optional = None, timestep: Optional = None, added_cond_kwargs: Dict = None, cross_attention_kwargs: Dict = None, attention_mask: Optional = None, encoder_attention_mask: Optional = None, return_dict: bool = True )
Parameters
hidden_states (torch.FloatTensor of shape (batch size, channel, height, width)) —
Input hidden_states.
encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) —
Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
timestep (torch.LongTensor, optional) —
Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
added_cond_kwargs (Dict[str, Any], optional) —
Additional conditions to be used as inputs.
cross_attention_kwargs (Dict[str, Any], optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
attention_mask (torch.Tensor, optional) —
An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. If 1 the mask is kept, otherwise if 0 it is discarded. The mask will be converted into a bias, which adds large negative values to the attention scores corresponding to “discard” tokens.
encoder_attention_mask (torch.Tensor, optional) —
Cross-attention mask applied to encoder_hidden_states. Two formats are supported: a mask of shape (batch, sequence_length) with True = keep and False = discard, or a bias of shape (batch, 1, sequence_length) with 0 = keep and -10000 = discard. If ndim == 2, the input is interpreted as a mask and then converted into a bias consistent with the format above. This bias is added to the cross-attention scores.
return_dict (bool, optional, defaults to True) —
Whether or not to return a UNet2DConditionOutput instead of a plain tuple.
The PixArtTransformer2DModel forward method.
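To make the argument shapes concrete, the sketch below runs the forward method on a deliberately tiny, randomly initialized configuration; the sizes are illustrative only and much smaller than any real checkpoint.

```python
import torch

from diffusers import PixArtTransformer2DModel

# Tiny illustrative configuration (not a real checkpoint):
# inner dim = num_attention_heads * attention_head_dim = 16.
model = PixArtTransformer2DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    out_channels=8,
    num_layers=2,
    cross_attention_dim=16,
    sample_size=8,
    patch_size=2,
    use_additional_conditions=False,
)

batch = 1
latents = torch.randn(batch, 4, 8, 8)        # (batch, in_channels, height, width)
text_embeds = torch.randn(batch, 12, 16)     # (batch, sequence len, embed dims)
timestep = torch.tensor([999])               # current denoising step
# 2-D mask: 1 = keep the token, 0 = discard; converted to an additive bias internally.
encoder_attention_mask = torch.ones(batch, 12)

with torch.no_grad():
    output = model(
        hidden_states=latents,
        encoder_hidden_states=text_embeds,
        timestep=timestep,
        encoder_attention_mask=encoder_attention_mask,
        return_dict=True,
    )

print(output.sample.shape)  # torch.Size([1, 8, 8, 8]) -> (batch, out_channels, height, width)
```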
fuse_qkv_projections
( )
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is 🧪 experimental.
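A short sketch of how this toggle is typically used around inference, assuming transformer is a loaded PixArtTransformer2DModel as in the earlier example.

```python
# Fuse the QKV projections once before running inference.
transformer.fuse_qkv_projections()

# ... run the denoising loop / pipeline here ...

# Revert to the original, unfused projection layers when done
# (see unfuse_qkv_projections below).
transformer.unfuse_qkv_projections()
```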
set_attn_processor
( processor: Union )
Parameters
processor (dict of AttentionProcessor or only AttentionProcessor) —
The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.
If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Sets the attention processor to use to compute attention.
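As an illustration, the sketch below assigns the standard AttnProcessor2_0 (PyTorch scaled-dot-product attention) to every Attention layer; the per-layer dictionary form uses the keys exposed by the attn_processors property.

```python
from diffusers.models.attention_processor import AttnProcessor2_0

# Option 1: one processor instance shared by all Attention layers.
transformer.set_attn_processor(AttnProcessor2_0())

# Option 2: a dict keyed by processor path, to configure layers individually.
transformer.set_attn_processor(
    {name: AttnProcessor2_0() for name in transformer.attn_processors.keys()}
)
```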
unfuse_qkv_projections
( )
Disables the fused QKV projection if enabled.
This API is 🧪 experimental.