---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
pipeline_tag: text-to-image
tags:
- Stable Diffusion
- image-generation
- Flux
- diffusers
---
![Controlnet collections for Flux](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/flux-controlnet-collections.png?raw=true)
[Join our Discord](https://discord.gg/FHY2guThfy)
This repository provides a collection of ControlNet checkpoints for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.
![Example Picture 1](./assets/depth_v2_res1.png?raw=true)
[See our GitHub repo](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.
![Example Picture 1](https://github.com/XLabs-AI/x-flux-comfyui/blob/main/assets/image1.png?raw=true)
[See our GitHub repo](https://github.com/XLabs-AI/x-flux) for the training script, training configs, and a demo inference script.
# Models
The collection includes three models:
- Canny
- HED
- Depth (Midas)

Each ControlNet is trained at 1024x1024 resolution and works best at that resolution.

We also release **v2 versions**: improved, more realistic models that can be used directly in ComfyUI.
Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).
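For context, a Canny-style control image is simply an edge map of the conditioning photo: white edges on a black background. As a minimal illustration only (a plain Sobel gradient sketch, not the actual Canny preprocessor used in training), an edge map can be derived from a grayscale image like this:

```python
import numpy as np

def sobel_edge_map(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Rough Sobel-gradient edge map: white edges on black, like a Canny control image."""
    g = gray.astype(np.float32) / 255.0
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(g, 1, mode="edge")
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # Correlate each kernel tap against the padded image (no deps beyond numpy)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + g.shape[0], j:j + g.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    # Binarize: strong gradients become white edge pixels
    return (mag > threshold * mag.max()).astype(np.uint8) * 255

# Synthetic image with a vertical step edge down the middle
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 255
edges = sobel_edge_map(img)
```

In practice you would run a real Canny detector (e.g. OpenCV's) over your conditioning photo and pass the result as the control image; the sketch above just shows what kind of input the Canny ControlNet expects.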
# Examples
Example results from our models are shown below.
Generation results together with their input images are also available under "Files and versions".
# Inference
To try our models, you have two options:
1. Use `main.py` from our [official repo](https://github.com/XLabs-AI/x-flux)
2. Use our custom nodes for ComfyUI and test them with the provided workflows (see the `workflows` folder)

The examples below show how to launch each model.
## Canny ControlNet (version 2)
1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our canny_workflow.json
![Example Picture 1](./assets/canny_v2_res1.png?raw=true)
![Example Picture 1](./assets/canny_v2_res2.png?raw=true)
![Example Picture 1](./assets/canny_v2_res3.png?raw=true)
## Canny ControlNet (version 1)
1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install its requirements
2. Run `main.py` from the command line with the following parameters:
```bash
python3 main.py \
--prompt "a viking man with white hair looking, cinematic, MM full HD" \
--image input_image_canny.jpg \
--control_type canny \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-canny-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 1](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/canny_example_1.png?raw=true)
## Depth ControlNet (version 2)
1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our depth_workflow.json
![Example Picture 1](./assets/depth_v2_res1.png?raw=true)
![Example Picture 1](./assets/depth_v2_res2.png?raw=true)
## Depth ControlNet (version 1)
1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install its requirements
2. Run `main.py` from the command line with the following parameters:
```bash
python3 main.py \
--prompt "Photo of the bold man with beard and laptop, full hd, cinematic photo" \
--image input_image_depth1.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "photo of handsome fluffy black dog standing on a forest path, full hd, cinematic photo" \
--image input_image_depth2.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_2.png?raw=true)
```bash
python3 main.py \
--prompt "Photo of japanese village with houses and sakura, full hd, cinematic photo" \
--image input_image_depth3.webp \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_3.png?raw=true)
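Depth conditioning works from a grayscale depth map (the models above use Midas-style maps). As a rough sketch only, assuming a raw metric depth array where smaller values are nearer (the actual Midas preprocessor outputs are produced by the pipeline itself), normalizing depth into an 8-bit control image looks like:

```python
import numpy as np

def depth_to_control_image(depth: np.ndarray) -> np.ndarray:
    """Normalize a raw depth array into a 0-255 grayscale depth control image.

    Convention assumed here: input is metric depth (near = small values),
    and the output follows the Midas-style convention of near = bright.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)  # rescale to [0, 1]
    return ((1.0 - d) * 255.0).astype(np.uint8)       # invert so near = bright

# Synthetic depth ramp from 1m (near) to 10m (far)
depth = np.linspace(1.0, 10.0, 64 * 64).reshape(64, 64)
ctrl = depth_to_control_image(depth)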
## HED ControlNet (version 1)
1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install its requirements
2. Run `main.py` from the command line with the following parameters:
```bash
python3 main.py \
--prompt "2d art of a sitting african rich woman, full hd, cinematic photo" \
--image input_image_hed1.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "anime ghibli style art of a running happy white dog, full hd" \
--image input_image_hed2.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_2.png?raw=true)
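Unlike Canny, HED (holistically-nested edge detection) is a learned detector that produces soft, grayscale edge maps rather than binary ones. As a loose numpy illustration of that difference only (a normalized gradient magnitude, not the actual HED network):

```python
import numpy as np

def soft_edge_map(gray: np.ndarray) -> np.ndarray:
    """Normalized gradient magnitude: soft grayscale edges, loosely HED-like (vs. binary Canny)."""
    g = gray.astype(np.float32) / 255.0
    gy, gx = np.gradient(g)          # per-axis central differences
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()             # rescale to [0, 1]
    return (mag * 255.0).astype(np.uint8)

# Synthetic smooth blob: its soft edge map spans many gray levels, not just {0, 255}
x = np.linspace(-1.0, 1.0, 64)
xx, yy = np.meshgrid(x, x)
blob = (np.exp(-4.0 * (xx**2 + yy**2)) * 255.0).astype(np.uint8)
soft = soft_edge_map(blob)
```

For real use, run an actual HED model over your conditioning photo; the point of the sketch is only that the HED ControlNet consumes soft edge maps with a full range of gray values.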
## License
Our weights are released under the [FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).