
This is an nf4 quantization of mochi-1-preview. It excludes specific layers from quantization, which allows the model to stay coherent. Three variants are compared:

- bf16
- nf4mix-small (this one)
- nf4
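
To check which layers the mix keeps in higher precision, you can inspect the parameter dtypes of the loaded transformer: bitsandbytes packs nf4 weights into uint8 tensors, while skipped modules keep their bfloat16 weights. A minimal sketch, assuming the diffusers branch below is installed:

```python
import torch
from diffusers import MochiTransformer3DModel

transformer = MochiTransformer3DModel.from_pretrained("imnotednamode/mochi-1-preview-mix-nf4-small", torch_dtype=torch.bfloat16)
# Packed nf4 weights show up as torch.uint8; anything else was skipped
for name, param in transformer.named_parameters():
    if param.dtype != torch.uint8:
        print(name, param.dtype)
```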

The Diffusers `mochi` PR branch is required:

```
pip install git+https://github.com/huggingface/diffusers@mochi
```

To use:

```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

# Load the pre-quantized transformer from this repo
transformer = MochiTransformer3DModel.from_pretrained("imnotednamode/mochi-1-preview-mix-nf4-small", torch_dtype=torch.bfloat16)
# "mochi-1-diffusers" is a local diffusers-format conversion of mochi-1-preview
pipe = MochiPipeline.from_pretrained("mochi-1-diffusers", torch_dtype=torch.bfloat16, transformer=transformer)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to save VRAM
pipe.enable_vae_tiling()         # decode the video in tiles to save VRAM
frames = pipe("A camera follows a squirrel running around on a tree branch", num_inference_steps=100, guidance_scale=4.5, height=480, width=848, num_frames=161).frames[0]
export_to_video(frames, "mochi.mp4", fps=15)
```
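
If you want a rough idea of the memory footprint, one option is to read CUDA's peak-memory counter around the generation call. A minimal sketch, assuming a CUDA device and the `pipe` from above:

```python
import torch

torch.cuda.reset_peak_memory_stats()
frames = pipe("A camera follows a squirrel running around on a tree branch", num_inference_steps=100, guidance_scale=4.5, height=480, width=848, num_frames=161).frames[0]
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```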

To reproduce:

```python
import torch
from diffusers import MochiTransformer3DModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    llm_int8_skip_modules=["final_layer", "x_embedder.proj", "t_embedder", "pos_frequencies", "t5"],
)
# Please convert mochi to diffusers first
transformer = MochiTransformer3DModel.from_pretrained("mochi-1-diffusers", subfolder="transformer", quantization_config=quantization_config, torch_dtype=torch.bfloat16)
transformer.save_pretrained("mochi-1-preview-nf4")
```
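
The skip list covers the input, timestep, and positional embeddings plus the final output layer, which are small relative to the transformer but presumably the layers most sensitive to 4-bit precision. The quantization config is serialized with the checkpoint, so the saved model can be reloaded directly; a minimal sketch:

```python
import torch
from diffusers import MochiTransformer3DModel

# No quantization_config needed on reload; it is read from the checkpoint
transformer = MochiTransformer3DModel.from_pretrained("mochi-1-preview-nf4", torch_dtype=torch.bfloat16)
```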