---
license: other
tags:
  - text-to-video
---

HunyuanVideo 1.5 uses attention masks with variable-length sequences. For best performance, we recommend using an attention backend that handles padding efficiently.

We recommend installing the `kernels` package to access prebuilt attention kernels:
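
```bash
pip install kernels
```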

See our documentation to learn more about the attention backends we support.

```python
import torch

from diffusers import HunyuanVideo15Pipeline, attention_backend
from diffusers.utils import export_to_video

dtype = torch.bfloat16
device = "cuda:0"
seed = 42  # any integer; fixed here for reproducibility
prompt = "A cat walks on the grass, realistic style."  # replace with your own prompt

pipe = HunyuanVideo15Pipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v_distilled",
    torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce VRAM usage
pipe.vae.enable_tiling()  # decode the video in tiles to lower peak memory

generator = torch.Generator(device=device).manual_seed(seed)
with attention_backend("_flash_3_hub"):  # or "flash_hub" if you are not using H100/H800
    video = pipe(
        prompt=prompt,
        generator=generator,
        num_frames=121,
        num_inference_steps=50,
    ).frames[0]
export_to_video(video, "output.mp4", fps=24)
```
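
If you prefer not to use a context manager, recent versions of diffusers also let you select a backend once on the model itself. A minimal sketch, assuming your installed diffusers version exposes `set_attention_backend` on the transformer:

```python
# Assumption: `set_attention_backend` is available on the transformer in your
# diffusers version. It selects the backend for all subsequent forward passes,
# so no `with` block is needed around the pipeline call.
pipe.transformer.set_attention_backend("flash_hub")

video = pipe(
    prompt=prompt,
    generator=generator,
    num_frames=121,
    num_inference_steps=50,
).frames[0]
```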

To use the default attention backend:

```python
import torch

from diffusers import HunyuanVideo15Pipeline
from diffusers.utils import export_to_video

dtype = torch.bfloat16
device = "cuda:0"
seed = 42  # any integer; fixed here for reproducibility
prompt = "A cat walks on the grass, realistic style."  # replace with your own prompt

pipe = HunyuanVideo15Pipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v_distilled",
    torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce VRAM usage
pipe.vae.enable_tiling()  # decode the video in tiles to lower peak memory

generator = torch.Generator(device=device).manual_seed(seed)

video = pipe(
    prompt=prompt,
    generator=generator,
    num_frames=121,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```