---
title: Real-Time Latent Consistency Model Image-to-Image ControlNet
emoji: 🖼️🖼️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
---
# Real-Time Latent Consistency Model
This demo showcases [Latent Consistency Model (LCM)](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) using [Diffusers](https://github.com/huggingface/diffusers/tree/main/examples/community#latent-consistency-pipeline) with an MJPEG stream server.
You need a webcam to run this demo. 🤗
See a collection of live demos [here](https://huggingface.co/collections/latent-consistency/latent-consistency-model-demos-654e90c52adb0688a0acbe6f).
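Under the hood the demo runs the LCM pipeline through Diffusers. A minimal sketch of that pipeline, following the `SimianLuo/LCM_Dreamshaper_v7` model card (illustrative only, not the app's actual server code):

```python
import torch
from diffusers import DiffusionPipeline

# Loads the Latent Consistency Model pipeline registered for this checkpoint.
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7").to("cuda")

# LCM distillation makes a handful of steps enough for a usable image.
images = pipe(
    prompt="a photo of a cat, studio lighting",
    num_inference_steps=4,
    guidance_scale=8.0,
).images
images[0].save("lcm_sample.png")
```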
## Running Locally
You need CUDA and Python 3.10, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.

The server is configured through these environment variables (a parsing sketch follows the list):

- `TIMEOUT`: limit the user session timeout
- `SAFETY_CHECKER`: disable if you want the NSFW filter off
- `MAX_QUEUE_SIZE`: limit the number of users on the current app instance
- `TORCH_COMPILE`: enable to use `torch.compile` for faster inference; works well on A100 GPUs
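A hypothetical sketch of how these variables could be parsed; the names match the list above, but the defaults and parsing logic here are assumptions, not the app's real code:

```python
import os

# Assumed defaults, for illustration only.
TIMEOUT = float(os.environ.get("TIMEOUT", 0))                # 0 disables the session timeout
SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER") == "True"  # NSFW filter on/off
MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0))    # 0 means no user limit
TORCH_COMPILE = os.environ.get("TORCH_COMPILE") == "True"    # opt in to torch.compile
```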
## Install
```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```
# LCM
### Image to Image
```bash
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
### Image to Image ControlNet Canny
Based on the pipeline from [taabata](https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy)
```bash
uvicorn "app-controlnet:app" --host 0.0.0.0 --port 7860 --reload
```
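The ControlNet app conditions generation on Canny edges extracted from each webcam frame. A minimal sketch of that preprocessing step, following the standard Diffusers ControlNet recipe (the function name and thresholds are illustrative; the app may use different values):

```python
import cv2
import numpy as np
from PIL import Image

def canny_control_image(frame: Image.Image, low: int = 100, high: int = 200) -> Image.Image:
    """Turn an RGB frame into the 3-channel edge map a Canny ControlNet expects."""
    edges = cv2.Canny(np.array(frame), low, high)
    edges = edges[:, :, None]                    # HxW -> HxWx1
    edges = np.concatenate([edges] * 3, axis=2)  # replicate to 3 channels
    return Image.fromarray(edges)
```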
### Text to Image
```bash
uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
```
# LCM + LoRA
Using LCM-LoRA gives the pipeline the superpower of running inference in as few as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or read the [technical report](https://huggingface.co/papers/2311.05556).
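A minimal sketch of the LCM-LoRA recipe from the blog post linked above; the base model ID is a public example, not necessarily the checkpoint this app ships with:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and attach the LCM-LoRA adapter.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# With LCM-LoRA, 4 steps and low guidance are enough.
image = pipe("a photo of a cat", num_inference_steps=4, guidance_scale=1.0).images[0]
```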
### Image to Image ControlNet Canny LoRA
```bash
uvicorn "app-controlnetlora:app" --host 0.0.0.0 --port 7860 --reload
```
### Text to Image
```bash
uvicorn "app-txt2imglora:app" --host 0.0.0.0 --port 7860 --reload
```
### Setting environment variables
```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
If you're running locally and want to test it on Mobile Safari, the web server must be served over HTTPS:
```bash
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload --log-level info --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
```
## Docker
You need the NVIDIA Container Toolkit for Docker.
```bash
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
```
or with environment variables
```bash
docker run -ti -e TIMEOUT=0 -e SAFETY_CHECKER=False -p 7860:7860 --gpus all lcm-live
```
# Demo on Hugging Face
https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model
https://github.com/radames/Real-Time-Latent-Consistency-Model/assets/102277/c4003ac5-e7ff-44c0-97d3-464bb659de70