---
license: apache-2.0
language:
  - en
library_name: diffusers
---

# Allegro

Gallery · GitHub · Blog · Paper · Discord

## Gallery

For more demos and corresponding prompts, see the Allegro Gallery.

## Key Features

- **High-Quality Output**: Generates detailed 6-second videos at 15 FPS with 720x1280 resolution, which can be interpolated to 30 FPS with EMA-VFI.
- **Small and Efficient**: Features a 175M parameter VAE and a 2.8B parameter DiT model. Supports multiple precisions (FP32, BF16, FP16) and uses 9.3 GB of GPU memory in BF16 mode with CPU offloading.
- **Extensive Context Length**: Handles up to 79.2k tokens, providing rich and comprehensive text-to-video generation capabilities.
- **Versatile Content Creation**: Capable of generating a wide range of content, from close-ups of humans and animals to diverse dynamic scenes.

## Model info

|                         | Allegro                                                                   |
|-------------------------|---------------------------------------------------------------------------|
| Description             | Text-to-Video Generation Model                                            |
| Download                | <HF link - TBD>                                                           |
| Parameters              | VAE: 175M, DiT: 2.8B                                                      |
| Inference Precision     | VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32); DiT/T5: BF16/FP32/TF32      |
| Context Length          | 79.2k                                                                     |
| Resolution              | 720 x 1280                                                                |
| Frames                  | 88                                                                        |
| Video Length            | 6 seconds @ 15 FPS                                                        |
| Single GPU Memory Usage | 9.3 GB (BF16, with cpu_offload)                                           |
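
Note that TF32 in the table is not a `torch_dtype` you pass at load time; it is toggled through PyTorch's global backend flags. A minimal sketch (standard PyTorch settings, not Allegro-specific code):

```python
import torch

# TF32 is enabled globally in PyTorch rather than selected per model;
# these flags affect all matmuls and cuDNN convolutions in the process.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```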

## Quick start

You can quickly get started with Allegro using the Hugging Face Diffusers library. For more tutorials, see Allegro GitHub (link-tbd).

  1. Install necessary requirements. Please refer to requirements.txt on Allegro GitHub.
  2. Perform inference on a single GPU.
```python
import torch
import imageio
from diffusers import DiffusionPipeline

# Load the pipeline in BF16 and keep the VAE in FP32 (the VAE is most accurate in FP32/TF32).
allegro_pipeline = DiffusionPipeline.from_pretrained(
    "rhymes-ai/Allegro", trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")
allegro_pipeline.vae = allegro_pipeline.vae.to(torch.float32)

prompt = "a video of an astronaut riding a horse on mars"

# The user prompt is wrapped in a quality-boosting template before inference.
positive_prompt = """
(masterpiece), (best quality), (ultra-detailed), (unwatermarked),
{}
emotional, harmonious, vignette, 4k epic detailed, shot on kodak, 35mm photo,
sharp focus, high budget, cinemascope, moody, epic, gorgeous
"""

negative_prompt = """
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry.
"""

num_sampling_steps, guidance_scale, seed = 100, 7.5, 42

user_prompt = positive_prompt.format(prompt.lower().strip())
out_video = allegro_pipeline(
    user_prompt,
    negative_prompt=negative_prompt,
    num_frames=88,
    height=720,
    width=1280,
    num_inference_steps=num_sampling_steps,
    guidance_scale=guidance_scale,
    max_sequence_length=512,
    generator=torch.Generator(device="cuda:0").manual_seed(seed),
).video[0]

# Write the 88 generated frames to an MP4 at 15 FPS.
imageio.mimwrite("test_video.mp4", out_video, fps=15, quality=8)
```
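
The memory figure in the table above (9.3 GB in BF16 with cpu_offload) refers to running with CPU offloading instead of keeping the whole pipeline on the GPU. Below is a minimal sketch using the stock Diffusers offload hook; whether the remote-code Allegro pipeline exposes exactly `enable_sequential_cpu_offload()` is an assumption here, so check the Allegro GitHub for the supported option:

```python
import torch
from diffusers import DiffusionPipeline

# Sketch of the low-memory configuration from the table above (assumption:
# the remote-code pipeline supports the standard Diffusers offload hook).
allegro_pipeline = DiffusionPipeline.from_pretrained(
    "rhymes-ai/Allegro", trust_remote_code=True, torch_dtype=torch.bfloat16
)
allegro_pipeline.vae = allegro_pipeline.vae.to(torch.float32)

# Modules are moved to the GPU one at a time during inference; do not call
# .to("cuda") on the pipeline when offloading is enabled.
allegro_pipeline.enable_sequential_cpu_offload()
```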

Tips:

- It is highly recommended to use a video frame interpolation model (such as EMA-VFI) to interpolate the output from 15 FPS to 30 FPS.
- For more tutorials, see the Allegro GitHub.

## License

This repo is released under the Apache 2.0 License.