A Diffusion Transformer model for 3D video-like data used in HunyuanVideo 1.5.
The model can be loaded with the following code snippet.
import torch

from diffusers import HunyuanVideo15Transformer3DModel

transformer = HunyuanVideo15Transformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v", subfolder="transformer", torch_dtype=torch.bfloat16
)

( in_channels: int = 65 out_channels: int = 32 num_attention_heads: int = 16 attention_head_dim: int = 128 num_layers: int = 54 num_refiner_layers: int = 2 mlp_ratio: float = 4.0 patch_size: int = 1 patch_size_t: int = 1 qk_norm: str = 'rms_norm' text_embed_dim: int = 3584 text_embed_2_dim: int = 1472 image_embed_dim: int = 1152 rope_theta: float = 256.0 rope_axes_dim: typing.Tuple[int, ...] = (16, 56, 56) target_size: int = 640 task_type: str = 'i2v' )
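The loaded transformer can then be handed to a full pipeline. The sketch below assumes the repository hosts the remaining pipeline components (the subfolder layout above suggests it does) and relies on DiffusionPipeline.from_pretrained resolving the concrete pipeline class; the component-override pattern (transformer=...) is standard Diffusers usage.

import torch

from diffusers import DiffusionPipeline, HunyuanVideo15Transformer3DModel

# Load the transformer in bfloat16, then pass it to the pipeline so the
# remaining components (VAE, text encoders, scheduler) come from the same repo.
transformer = HunyuanVideo15Transformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = DiffusionPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v", transformer=transformer, torch_dtype=torch.bfloat16
)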
Parameters
in_channels (int, defaults to 65) — The number of channels in the input.
out_channels (int, defaults to 32) — The number of channels in the output.
num_attention_heads (int, defaults to 16) — The number of heads to use for multi-head attention.
attention_head_dim (int, defaults to 128) — The number of channels in each head.
num_layers (int, defaults to 54) — The number of layers of dual-stream blocks to use.
num_refiner_layers (int, defaults to 2) — The number of layers of refiner blocks to use.
mlp_ratio (float, defaults to 4.0) — The ratio of the hidden layer size to the input size in the feedforward network.
patch_size (int, defaults to 1) — The size of the spatial patches to use in the patch embedding layer.
patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
qk_norm (str, defaults to 'rms_norm') — The normalization to use for the query and key projections in the attention layers.
text_embed_dim (int, defaults to 3584) — Input dimension of text embeddings from the text encoder.
text_embed_2_dim (int, defaults to 1472) — Input dimension of text embeddings from the secondary text encoder.
image_embed_dim (int, defaults to 1152) — Input dimension of the image embeddings used for image-to-video conditioning.
rope_theta (float, defaults to 256.0) — The value of theta to use in the RoPE layer.
rope_axes_dim (Tuple[int, ...], defaults to (16, 56, 56)) — The dimensions of the axes to use in the RoPE layer; with the defaults these correspond to the temporal, height, and width axes and sum to attention_head_dim (16 + 56 + 56 = 128).
target_size (int, defaults to 640) — The target spatial resolution, in pixels, that the checkpoint is configured for.
task_type (str, defaults to 'i2v') — The task the checkpoint is configured for, either 't2v' (text-to-video) or 'i2v' (image-to-video).
A Transformer model for video-like data used in HunyuanVideo 1.5.
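For smoke tests, the model can also be instantiated directly from the constructor arguments documented above. The configuration below is illustrative, not a released checkpoint config; the channel and RoPE settings keep the documented defaults so the head-dimension constraint noted above still holds, and only the depth and head count are reduced.

import torch

from diffusers import HunyuanVideo15Transformer3DModel

# Illustrative small configuration, not a released checkpoint config.
# attention_head_dim (128) stays equal to sum(rope_axes_dim) = 16 + 56 + 56,
# matching the documented defaults; only depth and head count are reduced.
tiny = HunyuanVideo15Transformer3DModel(
    in_channels=65,
    out_channels=32,
    num_attention_heads=4,
    attention_head_dim=128,
    num_layers=2,
    num_refiner_layers=1,
    rope_axes_dim=(16, 56, 56),
    task_type="t2v",
)
print(sum(p.numel() for p in tiny.parameters()))  # parameter count of the toy model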
Transformer2DModelOutput

( sample: torch.Tensor )
Parameters
sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.
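As a concrete illustration of this container, the snippet below constructs it directly; in practice the transformer's forward returns it by default and a plain tuple when called with return_dict=False, a convention shared across Diffusers models. The 5D shape is illustrative of video latents (batch, channels, frames, height, width), not a checkpoint-specific shape.

import torch

from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Illustrative 5D video-latent shape; the actual shape depends on the
# inputs and the checkpoint configuration.
out = Transformer2DModelOutput(sample=torch.randn(1, 32, 16, 60, 104))
print(out.sample.shape)      # torch.Size([1, 32, 16, 60, 104])
print(out[0] is out.sample)  # BaseOutput also supports tuple-style indexing: True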