A Diffusion Transformer model for 2D data from Hunyuan-DiT.
class diffusers.HunyuanDiT2DModel

( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: typing.Optional[int] = None patch_size: typing.Optional[int] = None activation_fn: str = 'gelu-approximate' sample_size = 32 hidden_size = 1152 num_layers: int = 28 mlp_ratio: float = 4.0 learn_sigma: bool = True cross_attention_dim: int = 1024 norm_type: str = 'layer_norm' cross_attention_dim_t5: int = 2048 pooled_projection_dim: int = 1024 text_len: int = 77 text_len_t5: int = 256 use_style_cond_and_image_meta_size: bool = True )
Parameters

num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
attention_head_dim (int, optional, defaults to 88) — The number of channels in each head.
in_channels (int, optional) — The number of channels in the input and output (specify if the input is continuous).
patch_size (int, optional) — The size of the patch to use for the input.
activation_fn (str, optional, defaults to "gelu-approximate") — Activation function to use in the feed-forward layers.
sample_size (int, optional, defaults to 32) — The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
cross_attention_dim (int, optional, defaults to 1024) — The number of dimensions in the CLIP text embedding.
hidden_size (int, optional, defaults to 1152) — The size of the hidden layer in the conditioning embedding layers.
num_layers (int, optional, defaults to 28) — The number of Transformer blocks to use.
mlp_ratio (float, optional, defaults to 4.0) — The ratio of the hidden layer size to the input size.
learn_sigma (bool, optional, defaults to True) — Whether to predict the variance.
cross_attention_dim_t5 (int, optional, defaults to 2048) — The number of dimensions in the T5 text embedding.
pooled_projection_dim (int, optional, defaults to 1024) — The size of the pooled projection.
text_len (int, optional, defaults to 77) — The length of the CLIP text embedding.
text_len_t5 (int, optional, defaults to 256) — The length of the T5 text embedding.
use_style_cond_and_image_meta_size (bool, optional, defaults to True) — Whether to use the style condition and image meta size. True for version <= 1.1, False for version >= 1.2.

HunYuanDiT: Diffusion model with a Transformer backbone.

Inherits from ModelMixin and ConfigMixin, so it is compatible with the diffusers samplers such as StableDiffusionPipeline.
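A minimal loading sketch; the checkpoint id Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers and the "transformer" subfolder layout are assumptions based on the published Diffusers checkpoints:

```python
import torch
from diffusers import HunyuanDiT2DModel

# Load only the transformer from a HunyuanDiT checkpoint; the repo id and
# subfolder name are assumed from the published Diffusers checkpoint layout.
transformer = HunyuanDiT2DModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
print(transformer.config.num_layers, transformer.config.hidden_size)
```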
enable_forward_chunking

( chunk_size: typing.Optional[int] = None dim: int = 0 )

Parameters

chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, will run the feed-forward layer individually over each tensor of dim=dim.
dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Enables feed-forward chunking, splitting the feed-forward computation into smaller chunks to reduce peak memory usage.
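A usage sketch under the same checkpoint assumption as above; the chunk size value is illustrative:

```python
from diffusers import HunyuanDiT2DModel

model = HunyuanDiT2DModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers", subfolder="transformer"
)

# Chunk the feed-forward layers over the sequence dimension; chunk_size=1
# is the most memory-frugal (and slowest) choice.
model.enable_forward_chunking(chunk_size=1, dim=1)
```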
forward

( hidden_states timestep encoder_hidden_states = None text_embedding_mask = None encoder_hidden_states_t5 = None text_embedding_mask_t5 = None image_meta_size = None style = None image_rotary_emb = None controlnet_block_samples = None return_dict = True )

Parameters

hidden_states (torch.Tensor of shape (batch size, dim, height, width)) — The input tensor.
timestep (torch.LongTensor, optional) — Used to indicate the denoising step.
encoder_hidden_states (torch.Tensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. This is the output of BertModel.
text_embedding_mask (torch.Tensor, optional) — An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. This is the output of BertModel.
encoder_hidden_states_t5 (torch.Tensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. This is the output of the T5 text encoder.
text_embedding_mask_t5 (torch.Tensor, optional) — An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. This is the output of the T5 text encoder.
image_rotary_emb (torch.Tensor) — The image rotary embeddings to apply on query and key tensors during attention calculation.

The HunyuanDiT2DModel forward method.
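To illustrate the expected input shapes, here is a minimal sketch that runs a randomly initialized, deliberately tiny configuration end to end. Every size below is illustrative, and hidden_size is chosen equal to num_attention_heads * attention_head_dim so the conditioning embedding width matches the transformer width:

```python
import torch
from diffusers import HunyuanDiT2DModel

# Tiny, randomly initialized configuration; all sizes are illustrative.
model = HunyuanDiT2DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    patch_size=2,
    sample_size=8,
    hidden_size=16,  # = num_attention_heads * attention_head_dim
    num_layers=2,
    cross_attention_dim=16,
    cross_attention_dim_t5=32,
    pooled_projection_dim=16,
    text_len=8,
    text_len_t5=8,
    use_style_cond_and_image_meta_size=False,  # v1.2 behavior: no style/meta-size inputs
)

batch = 2
latents = torch.randn(batch, 4, 8, 8)            # (batch, in_channels, height, width)
timestep = torch.tensor([999.0, 999.0])          # one denoising step per sample
clip_states = torch.randn(batch, 8, 16)          # (batch, text_len, cross_attention_dim)
clip_mask = torch.ones(batch, 8, dtype=torch.long)
t5_states = torch.randn(batch, 8, 32)            # (batch, text_len_t5, cross_attention_dim_t5)
t5_mask = torch.ones(batch, 8, dtype=torch.long)

out = model(
    latents,
    timestep,
    encoder_hidden_states=clip_states,
    text_embedding_mask=clip_mask,
    encoder_hidden_states_t5=t5_states,
    text_embedding_mask_t5=t5_mask,
)
# learn_sigma=True doubles the output channels: (2, 8, 8, 8)
print(out.sample.shape)
```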
fuse_qkv_projections

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

> This API is 🧪 experimental.
set_default_attn_processor

Disables custom attention processors and sets the default attention implementation.
unfuse_qkv_projections

Disables the fused QKV projection if enabled.

> This API is 🧪 experimental.
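A sketch of the fuse/unfuse round trip around inference (same checkpoint assumption as above):

```python
import torch
from diffusers import HunyuanDiT2DModel

model = HunyuanDiT2DModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

# Fuse query/key/value projections into single matmuls for inference...
model.fuse_qkv_projections()
# ... run inference here ...

# ...then restore the original unfused projections.
model.unfuse_qkv_projections()
```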