---
license: other
---
This is a repository for storing as many LECOs as I can think of, emphasizing quantity over quality.
Files will continue to be added as needed.
Because the guidance_scale parameter is somewhat high, these LECOs tend to be very sensitive and overly strong; a weight between -0.1 and -1 is appropriate in most cases.
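For example, in the AUTOMATIC1111 web UI's extra-networks prompt syntax, a negative weight is applied like this (`1girl` here is a hypothetical file name, not an actual file in this repository):

```
<lora:1girl:-0.5>
```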
All LECOs are trained with the target equal to the positive prompt, using the erase setting.
The target is one of danbooru's GENERAL tags, worked through in order from most to least frequently used; sometimes I also add phrases I have come up with myself.
```yaml
# prompts.yaml
- target: "$query"
  positive: "$query"
  unconditional: ""
  neutral: ""
  action: "erase"
  guidance_scale: 1.0
  resolution: 512
  batch_size: 4
```
```yaml
# config.yaml
prompts_file: prompts.yaml

pretrained_model:
  name_or_path: "/storage/model-1892-0000-0000.safetensors"
  v2: false
  v_pred: false

network:
  type: "lierla"
  rank: 4
  alpha: 1.0
  training_method: "full"

train:
  precision: "bfloat16"
  noise_scheduler: "ddim"
  iterations: 50
  lr: 1
  optimizer: "Prodigy"
  lr_scheduler: "cosine"
  max_denoising_steps: 50

save:
  name: "$query"
  path: "/stable-diffusion-webui/models/Lora/LECO/"
  per_steps: 50
  precision: "float16"

logging:
  use_wandb: false
  verbose: false

other:
  use_xformers: true
```
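The `$query` placeholder in both files stands for the concrete tag being trained. A minimal sketch of filling it in with Python's `string.Template` (the substitution step is my own helper, not part of the LECO trainer itself):

```python
from string import Template

# Template mirroring the prompts.yaml entry above; $query is the placeholder.
PROMPT_TEMPLATE = Template(
    '- target: "$query"\n'
    '  positive: "$query"\n'
    '  unconditional: ""\n'
    '  neutral: ""\n'
    '  action: "erase"\n'
    '  guidance_scale: 1.0\n'
    '  resolution: 512\n'
    '  batch_size: 4\n'
)


def render_prompts(tag: str) -> str:
    """Fill the $query placeholder with a concrete danbooru tag."""
    return PROMPT_TEMPLATE.substitute(query=tag)


if __name__ == "__main__":
    # Hypothetical tag chosen for illustration.
    print(render_prompts("1girl"))
```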