Update README.md
T2I-Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.

This checkpoint provides conditioning on lineart for the StableDiffusionXL checkpoint.

## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Cite as:**

      @misc{mou2023t2iadapter,
        title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
        author={Chong Mou and Xintao Wang and Liangbin Xie and Yanze Wu and Jian Zhang and Zhongang Qi and Ying Shan and Xiaohu Qie},
        year={2023},
        eprint={2302.08453},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }

### Checkpoints

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
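
Every checkpoint above is loaded the same way; only the repository id and the matching control-image detector change. A minimal sketch of swapping in a different adapter from the table (assuming an fp16 weights variant is published for it):

```py
import torch
from diffusers import T2IAdapter

# Load the Midas depth adapter from the table instead of the lineart one;
# variant="fp16" assumes fp16 weights are available in the repository.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-depth-midas-sdxl-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
```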
## Example
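To run the example below, first install the required packages; `diffusers` and `controlnet_aux` are assumed to be installed in addition to the packages listed here:

```bash
pip install transformers accelerate safetensors
```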
1. Images are first downloaded into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).

Let's have a look at a simple example using the [Lineart Adapter](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0).

- Dependency

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from controlnet_aux.lineart import LineartDetector
import torch

# load adapter
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load euler_a scheduler and the fp16-fixed SDXL VAE
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# load the lineart detector used to prepare the control image
line_detector = LineartDetector.from_pretrained("lllyasviel/Annotators").to("cuda")
```
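
`enable_xformers_memory_efficient_attention()` requires the `xformers` package. Where it is unavailable, one commonly used alternative is CPU offloading via `accelerate`; a minimal sketch:

```py
# Offload pipeline submodules to CPU between forward passes to cut VRAM use;
# call this instead of moving the whole pipeline to "cuda".
pipe.enable_model_cpu_offload()
```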

- Condition Image

```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_lin.jpg"
image = load_image(url)
# extract the lineart map that conditions generation
image = line_detector(
    image, detect_resolution=384, image_resolution=1024
)
```
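
The detector returns a PIL image, so the control map can be checked before generation; a trivial sketch (the filename is illustrative):

```py
# Save the extracted lineart control image for a quick visual check.
image.save("cond_lin_preview.png")
```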
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>

- Generation

```py
prompt = "Ice dragon roar, 4k photo"
negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"

gen_images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=image,
    num_inference_steps=30,
    adapter_conditioning_scale=0.8,
    guidance_scale=7.5,
).images[0]
gen_images.save('out_lin.png')
```

<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>
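
The `make_image_grid` helper imported in the dependency block is not used above; it is handy for placing the control image next to the result. A minimal sketch (the output filename is illustrative):

```py
# Compare the lineart control image and the generated image side by side.
grid = make_image_grid([image, gen_images], rows=1, cols=2)
grid.save("comparison_lin.png")
```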