---
base_model:
- black-forest-labs/FLUX.1-dev
library_name: diffusers
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
pipeline_tag: image-to-image
tags:
- ControlNet
---
# ⚡ Flux.1-dev: Depth ControlNet ⚡

This is a depth-map ControlNet for [Flux.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), developed by the Jasper research team.

<p align="center">
   <img style="width:700px;" src="examples/showcase.jpg">
</p>

# How to use
This model can be used directly with the `diffusers` library:

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline

# Load pipeline
controlnet = FluxControlNetModel.from_pretrained(
  "jasperai/Flux.1-dev-Controlnet-Depth",
  torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
  "black-forest-labs/FLUX.1-dev",
  controlnet=controlnet,
  torch_dtype=torch.bfloat16
)
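pipe.to("cuda")  # move the pipeline to GPU (assumes a CUDA device is available)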
# Load a control image
control_image = load_image(
  "https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Depth/resolve/main/examples/depth.jpg"
)

prompt = "a statue of a gnome in a field of purple tulips"

image = pipe(
    prompt, 
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28, 
    guidance_scale=3.5,
    height=control_image.size[1],
    width=control_image.size[0]
).images[0]
image
```
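
The `controlnet_conditioning_scale` parameter trades depth fidelity against prompt freedom: higher values follow the depth map more strictly, while lower values give the text prompt more influence. If the pipeline does not fit in GPU memory, CPU offloading can be used instead of `pipe.to("cuda")`; a minimal sketch:

```python
# Alternative to pipe.to("cuda"): keep submodules on CPU and move them to the
# GPU only when needed (requires the `accelerate` package; slower but lighter on VRAM)
pipe.enable_model_cpu_offload()
```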

<p align="center">
   <img style="width:500px;" src="examples/output.jpg">
</p>

💡 Note: You can compute the conditioning depth map with, for instance, the `MidasDetector` from the `controlnet_aux` library:

```python
from controlnet_aux import MidasDetector
from diffusers.utils import load_image 

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

# Load an image
im = load_image(
  "https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Depth/resolve/main/examples/output.jpg"
)

# Compute the depth map
depth_map = midas(im)
```
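
The resulting `depth_map` can then be passed to the pipeline as the control image. A minimal sketch, assuming the `pipe` object from the first snippet is already loaded:

```python
# Feed the freshly computed depth map back into the pipeline
image = pipe(
    "a statue of a gnome in a field of purple tulips",
    control_image=depth_map,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28,
    guidance_scale=3.5,
    height=depth_map.size[1],
    width=depth_map.size[0]
).images[0]
```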

# Training
This model was trained on depth maps computed with [Clipdrop's depth estimator model](https://clipdrop.co/apis/docs/portrait-depth-estimation) as well as open-source depth estimation models such as MiDaS and LeReS.

# License
The [FLUX.1-dev non-commercial license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) applies to this model.