---
base_model: genmo/mochi-1-preview
library_name: diffusers
license: apache-2.0
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- mochi-1-preview
- mochi-1-preview-diffusers
- template:sd-lora
widget: []
---

# Mochi-1 Preview LoRA Finetune

## Model description

This is a LoRA finetune of the Mochi-1 preview model `genmo/mochi-1-preview`.

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory), a repository of memory-optimized training scripts for the CogVideoX and Mochi families of models built on [TorchAO](https://github.com/pytorch/ao) and [DeepSpeed](https://github.com/microsoft/DeepSpeed). The scripts were adapted from the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).

## Download model

[Download LoRA](sayakpaul/mochi-lora/tree/main) in the Files & Versions tab.

## Usage

Requires the [🧨 Diffusers library](https://github.com/huggingface/diffusers) installed.

The snippet below is a minimal sketch. It assumes the LoRA weights are hosted in this repository (`sayakpaul/mochi-lora`) and uses a placeholder prompt — substitute the prompt or trigger phrase this LoRA was trained with.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("sayakpaul/mochi-lora")  # assumption: LoRA weights live in this repo
pipe.enable_model_cpu_offload()  # reduce peak VRAM usage

# Placeholder prompt: replace with a prompt matching the LoRA's training data.
frames = pipe(
    "A placeholder prompt describing the target scene",
    num_frames=84,
    guidance_scale=4.5,
).frames[0]
export_to_video(frames, "output.mp4", fps=30)
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.

## Intended uses & limitations

#### How to use

See the example snippet in the Usage section above.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]