---
license: apache-2.0
---
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6310d1226f21f539e52b9d77/P69nZEiM_Z9OPuXuYMr3m.mp4"></video>
_Prompt: "A young man walks alone by the seaside."_
__Text2Bricks__ is a fine-tuned [Open Sora](https://github.com/hpcaitech/Open-Sora) model that generates short, toy brick-style stop-motion animations.
`text2bricks-360p-32f` is fine-tuned to generate outputs of up to 360p resolution and 32 frames.
__You can play with the videos created by the model in this [game](https://albrick-hitchblock.s3.amazonaws.com/index.html).__
It was trained on Lambda's [1-Click Clusters](https://lambdalabs.com/service/gpu-cloud/1-click-clusters) in 169.6 H100 GPU hours. See this [Weights & Biases report](https://api.wandb.ai/links/lambdalabs/2cbrtx45) for details.
Additional code and data processing steps can be found in this [tutorial](https://github.com/LambdaLabsML/Open-Sora/blob/lambda_bricks/README.md).
# Usage
Use [Lambda's fork](https://github.com/LambdaLabsML/Open-Sora/tree/lambda_bricks) of Open-Sora.
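If you are setting up from scratch, the sketch below shows one way to get the fork installed. The branch name and the pip-from-source install are assumptions; refer to the fork's README for the authoritative setup steps.
```bash
# Minimal setup sketch (assumes a CUDA-enabled PyTorch environment).
# Clone the lambda_bricks branch of Lambda's Open-Sora fork and install from source.
git clone -b lambda_bricks https://github.com/LambdaLabsML/Open-Sora.git
cd Open-Sora
pip install -v .
```
Then run inference with the `text2bricks-360p-32f` config: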
```bash
python scripts/inference.py \
configs/opensora-v1-1/inference/text2bricks-360p-32f.py \
--prompt "A young man walks alone by the seaside." \
--num-frames 32
```