hollowstrawberry committed 0b26eff (parent: 164bf0e): Update README.md

README.md (changed):
|
|
# Getting Started <a name="start"></a>[▲](#index)

Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.

If you followed the instructions above, the top of your page should look similar to this:

![Top](images/top.png)

Here you can select your model and VAE. We will go over what these are and how you can get more of them. The Colab has additional settings here too; you should ignore them for now.
1. **Models** <a name="model"></a>[▲](#index)

The **model**, also called a **checkpoint**, is the brain of your AI, designed to produce certain types of images. There are many options, most of which are on [civitai](https://civitai.com). But which to choose? These are my recommendations:

* For general art go with [DreamShaper](https://civitai.com/models/4384/dreamshaper); there are few options quite like it in terms of raw creativity. An honorable mention goes to [Pastel Mix](https://civitai.com/models/5414/pastel-mix-stylized-anime-model), which has a beautiful and unique aesthetic with the addition of anime.
* For photorealism go with [Deliberate](https://civitai.com/models/4823/deliberate). It can do almost anything, but especially photographs. Very intricate results.
* The [Uber Realistic Porn Merge](https://civitai.com/models/2661/uber-realistic-porn-merge-urpm) is self-explanatory.
If you're using the Colab in this guide, copy the **direct download link to the file** and paste it in the text box labeled `custom_urls`. Multiple links are separated by commas.

If you're using the launcher, it will let you choose the path to your models folder. Otherwise the models normally go into `stable-diffusion-webui/models/Stable-diffusion`.

Please note that checkpoints in the `.safetensors` format are safe to use, while `.ckpt` files **may** contain viruses, so be careful. Additionally, when choosing models you may have a choice between fp32, fp16, and pruned. They all produce the same images within a tiny margin of error, so just go with the smallest file (fp16-pruned). If you want to use them for training or merging, go with the biggest one instead.

**Tip:** Whenever you place a new file manually you can either restart the UI at the bottom of the page or press the small 🔃 button next to a dropdown.
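The advice above about formats and variants can be sketched as a tiny selection helper. The filenames and sizes below are hypothetical, and `pick_generation_file` is only an illustration of "take the smallest safe file":

```python
def is_safe_checkpoint(filename: str) -> bool:
    # .safetensors files cannot contain executable code; .ckpt is a
    # Python pickle and may run arbitrary code when loaded.
    return filename.lower().endswith(".safetensors")

def pick_generation_file(variants: dict) -> str:
    # For image generation, take the smallest safe variant (typically
    # fp16-pruned); for training or merging you'd take the largest instead.
    safe = {name: size for name, size in variants.items() if is_safe_checkpoint(name)}
    return min(safe, key=safe.get)

# Hypothetical download choices as they might appear on a model page:
variants = {
    "model-fp32.safetensors": 7_700_000_000,
    "model-fp16-pruned.safetensors": 2_100_000_000,
    "model.ckpt": 7_700_000_000,
}
print(pick_generation_file(variants))  # model-fp16-pruned.safetensors
```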
1. **VAEs** <a name="vae"></a>[▲](#index)

Most models don't come with a VAE built in. The VAE is a small separate model that "converts your image from AI format into human format". Without it you'll get faded colors and ugly eyes, among other things.

* [vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors), the latest from Stable Diffusion itself. Used by photorealism models and such.
* [kl-f8-anime2](https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt), also known as the Waifu Diffusion VAE; it is older and produces more saturated results. Used by Pastel Mix.

If you're using the launcher, it lets you choose the default VAE; otherwise put them in the `stable-diffusion-webui/models/VAE` folder.

If you did not follow this guide up to this point, you will have to go into the **Settings** tab, then the **Stable Diffusion** section, to select your VAE.

**Tip:** Whenever you place a new file manually you can either restart the UI at the bottom of the page or press the small 🔃 button next to a dropdown.
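Picking a VAE in the Settings tab ultimately stores a value in the WebUI's `config.json`. The sketch below does the same by hand; the `sd_vae` key name is an assumption, so check your own `config.json` before relying on it:

```python
import json
from pathlib import Path

def set_default_vae(config_path: Path, vae_filename: str) -> dict:
    # Read the settings file (created once you save settings in the UI),
    # point the VAE option at the given file, and write it back.
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config["sd_vae"] = vae_filename  # assumed settings key, see above
    config_path.write_text(json.dumps(config, indent=4))
    return config

cfg = set_default_vae(Path("config.json"), "vae-ft-mse-840000-ema-pruned.safetensors")
print(cfg["sd_vae"])
```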
1. **Prompts** <a name="prompt"></a>[▲](#index)

* `EasyNegative, worst quality, low quality, normal quality, child, painting, drawing, sketch, cartoon, anime, render, 3d, blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art`

* **EasyNegative:** The negative prompts above use EasyNegative, which is a *textual inversion embedding* or "magic word" that codifies many bad things to make your images better. Typically one would write a very long, very specific, very redundant, and sometimes silly negative prompt. As of March 2023, EasyNegative is the best choice if you want to avoid that.

* [Get EasyNegative here](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.safetensors). For the Colab in this guide, paste the link into the `custom_urls` text box. Otherwise put it in your `stable-diffusion-webui/embeddings` folder. Then go to the bottom of your WebUI page and click *Reload UI*. It will now work when you type the word.

A comparison with and without these negative prompts can be seen in [Prompt Matrix ▼](#matrixneg).
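As a sketch of the point above: the trigger word only does something once the embedding file is actually in place, so it makes sense to check before using it. The folder path assumes a default install, and `build_negative_prompt` is a hypothetical helper:

```python
from pathlib import Path

BASE_NEGATIVE = ("worst quality, low quality, normal quality, child, painting, "
                 "drawing, sketch, cartoon, anime, render, 3d, blurry, deformed, "
                 "disfigured, morbid, mutated, bad anatomy, bad art")

def build_negative_prompt(embeddings_dir: Path) -> str:
    # Only prepend the EasyNegative trigger word when the embedding
    # file exists; without the file, the word is just ignored text.
    if (embeddings_dir / "EasyNegative.safetensors").exists():
        return "EasyNegative, " + BASE_NEGATIVE
    return BASE_NEGATIVE

print(build_negative_prompt(Path("stable-diffusion-webui/embeddings")))
```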
# Extensions <a name="extensions"></a>[▲](#index)

*Stable Diffusion WebUI* supports extensions that add extra functionality and quality of life. To add them, go into the **Extensions** tab, then **Install from URL**, and paste the links found here or elsewhere. Click *Install* and wait for it to finish, then go to **Installed** and click *Apply and restart UI*.

![Extensions](images/extensions.png)

Here are some useful extensions. Most of these come installed in the Colab in this guide; otherwise, I highly recommend you add the first two manually:
* [Image Browser (fixed fork)](https://github.com/aka7774/sd_images_browser) - This will let you browse your past generated images very efficiently, as well as directly send their prompts and parameters back to txt2img, img2img, etc.
* [TagComplete](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete) - Absolutely essential for anime art. It will show you the matching booru tags as you type. Anime models work via booru tags and rarely work at all if you go outside them, so knowing them is godmode. Not all tags will work well in all models though, especially if they're rare.
* [ControlNet](https://github.com/Mikubill/sd-webui-controlnet) - A huge extension deserving of [its own guide ▼](#controlnet). It lets you take AI data from any image and use it as an input for your image. Practically speaking, it can create any pose or environment you want. Very powerful if used with external tools such as Blender.
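The install steps above amount to cloning each extension's repository into the `extensions` folder. Here is a minimal sketch of that, assuming a default `stable-diffusion-webui` layout; `clone_target` and `install_extensions` are hypothetical helper names:

```python
import subprocess
from pathlib import Path

# The two recommended extensions from the list above.
EXTENSIONS = [
    "https://github.com/aka7774/sd_images_browser",
    "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete",
]

def clone_target(webui_root: Path, url: str) -> Path:
    # Each extension lives in a folder named after its repository.
    return webui_root / "extensions" / url.rstrip("/").split("/")[-1]

def install_extensions(webui_root: Path, urls=EXTENSIONS) -> None:
    for url in urls:
        dest = clone_target(webui_root, url)
        if not dest.exists():  # skip extensions that are already installed
            subprocess.run(["git", "clone", url, str(dest)], check=True)
```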
Loras can represent a character, an art style, poses, clothes, or even a human face (though I do not endorse this). Checkpoints are usually capable enough for general work, but when it comes to specific details with few existing examples, they fall short. That's where Loras come in. They can be downloaded from [civitai](https://civitai.com) or [elsewhere (NSFW)](https://gitgud.io/gayshit/makesomefuckingporn#lora-list) and are 144 MB by default, but they can go as low as 1 MB. Bigger Loras are not always better. They come in `.safetensors` format, same as most checkpoints.

Place your Lora files in the `stable-diffusion-webui/models/Lora` folder, or if you're using the Colab in this guide, paste the direct download link into the `custom_urls` text box. Then look for the 🎴 *Show extra networks* button below the big orange Generate button. It will open a new section. Click on the Lora tab and press the **Refresh** button, and your Loras should appear. When you click a Lora in that menu it gets added to your prompt, looking like this: `<lora:filename:1>`. The start is always the same. The filename will be the exact filename in your system without the `.safetensors` extension. Finally, the number is the weight, like we saw in [Prompts ▲](#prompt). Most Loras work between 0.5 and 1 weight; values that are too high might "fry" your image, especially when using multiple Loras at the same time.

![Extra Networks](images/extranetworks.png)
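The `<lora:filename:weight>` syntax described above is plain text, so it is easy to inspect programmatically. A small sketch (the Lora names in the example prompt are hypothetical):

```python
import re

# Matches the <lora:filename:weight> tags described above.
LORA_TAG = re.compile(r"<lora:(?P<name>[^:>]+):(?P<weight>[0-9]*\.?[0-9]+)>")

def extract_loras(prompt: str) -> list:
    # Return every (filename, weight) pair found in the prompt.
    return [(m["name"], float(m["weight"])) for m in LORA_TAG.finditer(prompt)]

prompt = "masterpiece, 1girl, <lora:myCharacter:0.8> <lora:sketchStyle:0.6>"
print(extract_loras(prompt))  # [('myCharacter', 0.8), ('sketchStyle', 0.6)]
```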
You can download additional upscalers and put them in your `stable-diffusion-webui/models/ESRGAN` folder. They will then be available in Hires fix, Ultimate Upscaler, and Extras.

The Colab in this guide comes with several of them, including **Remacri**, which is one of the best for all sorts of images.

* A few notable ones can be [found here](https://huggingface.co/hollowstrawberry/upscalers-backup/tree/main/ESRGAN).
* LDSR is an advanced yet slow upscaler; its model and config can be [found here](https://huggingface.co/hollowstrawberry/upscalers-backup/tree/main/LDSR), and both must be placed in `stable-diffusion-webui/models/LDSR`.
* The [Upscale Wiki](https://upscale.wiki/wiki/Model_Database) contains dozens of historical choices.
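As a sketch of how the WebUI picks these files up, assuming ESRGAN-family upscalers ship as `.pth` files under the default folder layout (`available_upscalers` is a hypothetical helper):

```python
from pathlib import Path

def available_upscalers(webui_root: Path) -> list:
    # The WebUI lists upscalers by filename (without extension)
    # in its Hires fix / Ultimate Upscaler / Extras dropdowns.
    folder = webui_root / "models" / "ESRGAN"
    return sorted(p.stem for p in folder.glob("*.pth")) if folder.exists() else []
```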