adamelliotfields committed
Commit 301d341
1 Parent(s): 48c31e7

Update docs

Files changed (2):
  1. README.md +29 -6
  2. info.md +34 -21
README.md CHANGED

@@ -22,24 +22,47 @@ models:
 preload_from_hub:
   - >-
     fluently/Fluently-v4
-    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors
+    text_encoder/model.fp16.safetensors,unet/diffusion_pytorch_model.fp16.safetensors,vae/diffusion_pytorch_model.fp16.safetensors
   - >-
     Linaqruf/anything-v3-1
     text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors,vae/diffusion_pytorch_model.safetensors
   - >-
     Lykon/dreamshaper-8
-    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors
+    text_encoder/model.fp16.safetensors,unet/diffusion_pytorch_model.fp16.safetensors,vae/diffusion_pytorch_model.fp16.safetensors
   - >-
     prompthero/openjourney-v4
-    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors
+    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors,vae/diffusion_pytorch_model.safetensors
   - >-
     runwayml/stable-diffusion-v1-5
-    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors
+    text_encoder/model.fp16.safetensors,unet/diffusion_pytorch_model.fp16.safetensors,vae/diffusion_pytorch_model.fp16.safetensors
   - >-
     SG161222/Realistic_Vision_V5.1_noVAE
-    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors
+    text_encoder/model.safetensors,unet/diffusion_pytorch_model.safetensors,vae/diffusion_pytorch_model.safetensors
 ---
 
 # diffusion
 
-See [`info.md`](https://huggingface.co/spaces/adamelliotfields/diffusion/blob/main/info.md).
+Gradio-based Stable Diffusion 1.5 app on ZeroGPU.
+
+## Usage
+
+See [`info.md`](https://huggingface.co/spaces/adamelliotfields/diffusion/blob/main/info.md).
+
+## Installation
+
+```bash
+python -m venv .venv
+source .venv/bin/activate
+pip install -r requirements.txt torch==2.4.0 torchvision==0.19.0 gradio==4.39.0
+
+# http://localhost:7860
+python app.py
+```
+
+## TODO
+
+- [ ] Support LoRA
+- [ ] Add styles
+- [ ] Hires fix
+- [ ] Latent preview
+- [ ] Metadata embed and display
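
For reference, a minimal sketch of how the fp16 variants preloaded above can be loaded with diffusers on a ZeroGPU Space. The `StableDiffusionPipeline` class, the `variant="fp16"` argument, and the `@spaces.GPU` decorator are standard Hugging Face APIs, but the `generate` function and its settings are illustrative assumptions rather than the Space's actual `app.py`:

```python
import spaces  # ZeroGPU helper available on Hugging Face Spaces
import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 weight files listed under preload_from_hub for fluently/Fluently-v4
pipe = StableDiffusionPipeline.from_pretrained(
    "fluently/Fluently-v4",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")  # on ZeroGPU the device is attached when a @spaces.GPU call runs

@spaces.GPU  # hypothetical entry point; requests a GPU only for this call
def generate(prompt: str, steps: int = 25):
    return pipe(prompt, num_inference_steps=steps).images[0]
```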
info.md CHANGED

@@ -1,35 +1,41 @@
 ## Usage
 
-Enter a prompt and click **Generate**. [Civitai](https://civitai.com) has an excellent guide on [prompting](https://education.civitai.com/civitais-prompt-crafting-guide-part-1-basics/).
+Enter a prompt and click `Generate`. Read [Civitai](https://civitai.com)'s guide on [prompting](https://education.civitai.com/civitais-prompt-crafting-guide-part-1-basics/) to learn more.
 
 ### Compel
 
 Positive and negative prompts are embedded by [Compel](https://github.com/damian0815/compel), enabling weighting and blending. See [syntax features](https://github.com/damian0815/compel/blob/main/doc/syntax.md).
 
+### Embeddings
+
+Textual inversion embeddings are installed for use in the `Negative` prompt:
+
+* [Bad Prompt](https://civitai.com/models/55700/badprompt-negative-embedding): `<bad_prompt>`
+* [Negative Hand](https://civitai.com/models/56519/negativehand-negative-embedding): `<negative_hand>`
+* [Fast Negative](https://civitai.com/models/71961/fast-negative-embedding-fastnegativev2): `<fast_negative>`
+  - includes Negative Hand
+* [Bad Dream](https://civitai.com/models/72437?modelVersionId=77169): `<bad_dream>`
+* [Unrealistic Dream](https://civitai.com/models/72437?modelVersionId=77173): `<unrealistic_dream>`
+  - pair with Fast Negative and the Realistic Vision model
+
 ### Arrays
 
-Arrays allow you to generate different images from a single prompt. For example, `a cute [[cat,corgi,koala]]` will expand into 3 prompts. Note that it only works for the positive prompt. You also have to increase `Images` to generate more than 1 image at a time. Inspired by [Fooocus](https://github.com/lllyasviel/Fooocus/pull/1503).
+Arrays allow you to generate different images from a single prompt. For example, `a cute [[cat,corgi,koala]]` will expand into 3 prompts. For this to work, you first have to increase `Images`; note that arrays only apply to the positive prompt. Inspired by [Fooocus](https://github.com/lllyasviel/Fooocus/pull/1503).
 
 ### Autoincrement
 
-If `Autoincrement` is checked, the seed will be incremented for each image. When using arrays, you might want to uncheck this so the same seed is used for each prompt variation.
+If `Autoincrement` is checked, the seed will be incremented for each image in the `Images` range. When using arrays, you might want this disabled so the same seed is used for each prompt variation.
 
 ## Models
 
-All use `float16` (or `bfloat16` if supported). Recommended settings are shown below:
+All use `float16` (or `bfloat16` if supported).
 
 * [fluently/fluently-v4](https://huggingface.co/fluently/Fluently-v4)
-  - scheduler: DPM++ 2M, guidance: 5-7, steps: 20-30
 * [linaqruf/anything-v3-1](https://huggingface.co/linaqruf/anything-v3-1)
-  - scheduler: DPM++ 2M, guidance: 12, steps: 50, vae: default
 * [lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8)
-  - scheduler: DEIS 2M
 * [prompthero/openjourney-v4](https://huggingface.co/prompthero/openjourney-v4)
-  - scheduler: PNDM
 * [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
-  - scheduler: PNDM
 * [sg161222/realistic_vision_v5.1](https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE)
-  - scheduler: DPM++ 2M, guidance: 4-7
 
 ### Schedulers
@@ -43,17 +49,24 @@ All are based on [k_diffusion](https://github.com/crowsonkb/k-diffusion) except
 * [LMS](https://huggingface.co/docs/diffusers/api/schedulers/lms_discrete)
 * [PNDM](https://huggingface.co/docs/diffusers/api/schedulers/pndm)
 
-### VAE
-
-All models use [madebyollin/taesd](https://huggingface.co/madebyollin/taesd) for speed.
-
-## TODO
-
-- [ ] Support LoRA
-- [ ] Support embeddings
-- [ ] Add VAE radio
-- [ ] Add styles
-- [ ] Clip skip
-- [ ] DeepCache with T-GATE
-- [ ] Hires fix
-- [ ] Latent preview
+## Advanced
+
+### DeepCache
+
+[DeepCache](https://github.com/horseee/DeepCache) (Ma et al. 2023) caches UNet layers determined by `Branch` and reuses them every `Interval` steps. Leaving `Branch` at **0** caches the lower layers, which provides a greater speedup. An `Interval` of **3** is the best balance between speed and quality; **1** means no caching.
+
+### T-GATE
+
+[T-GATE](https://github.com/HaozheLiu-ST/T-GATE) (Zhang et al. 2024) caches self- and cross-attention computations up to `Step`. Afterwards, attention is no longer computed and the cache is reused, resulting in a noticeable speedup. Works well with DeepCache.
+
+### Tiny VAE
+
+Enable [madebyollin/taesd](https://github.com/madebyollin/taesd) for almost instant latent decoding with a minor loss in detail. Useful for development and ideation.
+
+### Clip Skip
+
+When enabled, the last CLIP layer is skipped. This can improve image quality and is commonly used with anime models.
+
+### Prompt Truncation
+
+When enabled, prompts will be truncated to CLIP's limit of 77 tokens. By default this is disabled, so Compel chunks long prompts into segments rather than cutting them off.
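
For the DeepCache settings described above, a minimal sketch using the `DeepCacheSDHelper` documented in the DeepCache repository; mapping `Interval` to `cache_interval` and `Branch` to `cache_branch_id` is an assumption about how the app wires its controls:

```python
import torch
from DeepCache import DeepCacheSDHelper
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(
    cache_interval=3,   # reuse cached features every 3 steps ("Interval")
    cache_branch_id=0,  # cache the shallowest skip branch ("Branch")
)
helper.enable()

image = pipe("a watercolor lighthouse at dusk", num_inference_steps=25).images[0]
helper.disable()
```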
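
A sketch of the Tiny VAE and Clip Skip options in plain diffusers terms. `AutoencoderTiny` and the `clip_skip` call argument are standard diffusers features; how the app maps its checkboxes to them is assumed:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")

# Swap in the tiny VAE for near-instant latent decoding (minor loss in detail).
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")

# clip_skip=1 uses the penultimate CLIP layer, i.e. skips the last one.
image = pipe("a cozy cabin in the woods", clip_skip=1).images[0]
```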