hollowstrawberry
committed
Commit 168b185 • 1 Parent(s): 1c6dc59
Update README.md
README.md
CHANGED
@@ -231,7 +231,7 @@ If you're on collab, you should enable the `all_control_models` option. On Windo

I will demonstrate how ControlNet may be used. For this I chose a popular image online as our "sample image". It's not necessary for you to follow along, but you can download the images and put them in the **PNG Info** tab to view their generation data.

- First, you must scroll down in the txt2img page and click on ControlNet to open the menu. Then,
+ First, you must scroll down in the txt2img page and click on ControlNet to open the menu. Then, click *Enable*, and pick a matching *preprocessor* and *model*. To start with, I chose Canny for both. Finally I upload my sample image. Make sure not to click over the uploaded image or it will start drawing. We can ignore the other settings.

<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/controlnet.png"/>
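If you prefer to script this step rather than click through the UI, here is a minimal sketch of the same Canny setup through the WebUI API. It assumes the WebUI was launched with the `--api` flag and that the sd-webui-controlnet extension is installed; the unit fields follow that extension's documented payload format as I understand it (check your installed version's API docs), and the model name is a placeholder for whichever Canny ControlNet model you have downloaded.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local WebUI address

# The API expects images as base64 strings
with open("sample.png", "rb") as f:
    sample_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "your prompt here",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": sample_b64,
                "module": "canny",  # the preprocessor
                # Placeholder name; use a Canny model you actually have installed
                "model": "control_v11p_sd15_canny [d14c016b]",
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
# Generated images come back as base64 strings
with open("result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```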
@@ -268,10 +268,13 @@

<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/openpose2.png"/>
</details>

- You
+ You will notice that there are 2 results for each method. The first is an intermediate step called the *preprocessed image*, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to *None*. This is extremely powerful with external tools such as Blender.

In the Settings tab there is a ControlNet section where you can enable *multiple controlnets at once*. One particularly good example is depth+openpose, to get a specific character pose in a specific environment, or even a specific pose with specific hand gestures.

+ You can also use ControlNet in img2img, in which the input image and sample image both will have a certain effect on the result. I do not have much experience with this method.

I would also recommend the Scribble model, which lets you draw a crude sketch and turn it into a finished piece with the help of your prompt.
There are also alternative **diff** versions of each ControlNet model, which produce slightly different results. You can [try them](https://civitai.com/models/9868/controlnet-pre-trained-difference-models) if you want, but I personally haven't.
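To illustrate supplying the preprocessed image yourself: any external tool that produces the right kind of control map works. A minimal sketch using OpenCV's Canny edge detector (assuming the `opencv-python` package is installed); you would upload its output in the ControlNet panel with the preprocessor set to *None*:

```python
import cv2

# Produce the "preprocessed image" yourself instead of letting the WebUI do it
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # low/high thresholds; tune per image
# Canny outputs white edges on a black background, which is what the Canny model expects
cv2.imwrite("sample_canny.png", edges)
```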
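For the depth+openpose combination, the API route is the same idea: the ControlNet `args` list simply takes one entry per unit, assuming multiple units are enabled in the Settings tab as described above. Preprocessor and model names vary between extension versions, so treat the ones below as placeholders:

```python
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "your prompt here",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                # Unit 1: a depth map pins down the environment
                {"input_image": b64("environment.png"), "module": "depth",
                 "model": "control_v11f1p_sd15_depth [cfd03158]"},   # placeholder name
                # Unit 2: an openpose skeleton pins down the character's pose
                {"input_image": b64("pose.png"), "module": "openpose",
                 "model": "control_v11p_sd15_openpose [cab727d4]"},  # placeholder name
            ]
        }
    },
}

requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).raise_for_status()
```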