Update README.md

For a thorough comparison please refer to the `generated_samples` folder.

Dataset, in general, is the most important factor of all.
The common wisdom that we should prune the tags for anything we want attached to the trigger word is exactly the way to go.
No tags at all (top three rows) is terrible, especially for style training.
Having all the tags (bottom three rows) removes the traits from subjects if these tags are not used during sampling (not completely true, but more or less the case).

![00066-20230326090858](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00066-20230326090858.png)
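
To make the pruning step concrete, here is a minimal sketch, assuming a kohya-style dataset with one comma-separated `.txt` caption per image; the trigger word and trait tags are hypothetical examples.

```python
from pathlib import Path

# Hypothetical trigger word and the trait tags we want absorbed into it.
TRIGGER = "tilty"
PRUNE = {"blonde hair", "blue eyes"}

# Assumed layout: each training image has a sibling .txt caption file
# containing comma-separated tags.
for caption in Path("train_data").glob("*.txt"):
    tags = [t.strip() for t in caption.read_text().split(",")]
    kept = [t for t in tags if t not in PRUNE]
    if TRIGGER not in kept:
        kept.insert(0, TRIGGER)  # prepend the trigger word
    caption.write_text(", ".join(kept))
```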

#### Training Resolution

The most prominent benefit of training at higher resolution is that it helps generate more complex and detailed backgrounds.
Chances are that you also get more details on outfits, pupils, etc.
However, training at higher resolution is quite time-consuming, and most of the time it is probably not worth it.
For example, if you want better backgrounds it can be simpler to switch the base model (unless, say, you are actually training a background lora).

![00045-20230326045748](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00045-20230326045748.png)
![00044-20230326044731](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00044-20230326044731.png)
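
For a rough sense of the cost, per-step compute grows at least linearly with pixel count (self-attention grows faster still), so a back-of-the-envelope estimate already shows why this adds up:

```python
# Back-of-the-envelope lower bound: cost scales at least with pixel count.
for res in (512, 640, 768, 1024):
    rel = (res / 512) ** 2
    print(f"{res}x{res}: at least {rel:.2f}x the per-step cost of 512x512")
```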

#### Text encoder learning rate

It is often suggested to set the text encoder learning rate to be smaller than that of the unet.
This of course makes training slower, while the benefit is hard to evaluate.
In one experiment I halve the text encoder learning rate and train the model two times longer.
After spending some time, here are two situations that reveal the potential benefit of this practice (a sketch of how the two rates are wired up follows the examples).

- In my training set I have anime screenshots, tagged with `aniscreen`, and fanarts, tagged with `fanart`.
Although they are balanced to have the same weight, the consistency of anime screenshots seems to drive the characters toward this style by default.
When I put `aniscreen` in the negative prompt, this causes bad results in general, but the model trained with the lower text encoder learning rate seems to survive the best.
Note that Tilty (second image) is only trained with anime screenshots.

![00165-20230327023658](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00165-20230327023658.png)
![00085-20230327030828](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00085-20230327030828.png)

- Training at a lower text encoder learning rate should better preserve the model's ability to understand the prompt.
This aspect is difficult to test, but it seems to be confirmed by this "umbrella" experiment (though some other setups, such as lora and higher dimension, seem to give even better results).

![00083-20230327015201](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00083-20230327015201.png)
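
For reference, here is a minimal sketch of how separate unet and text encoder rates are typically wired up, via optimizer parameter groups (the modules below are hypothetical placeholders; kohya-style trainers expose the same idea through their unet/text-encoder learning-rate options):

```python
import torch
from torch import nn

# Hypothetical stand-ins for the real unet / text encoder modules.
unet = nn.Linear(8, 8)
text_encoder = nn.Linear(8, 8)

# One parameter group per module, each with its own learning rate.
optimizer = torch.optim.AdamW([
    {"params": unet.parameters(), "lr": 1e-4},
    {"params": text_encoder.parameters(), "lr": 5e-5},  # half the unet rate
])
```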

In any case, I still believe that if we want the best results we should avoid text encoder training completely and do [pivotal tuning](https://github.com/cloneofsimo/lora/discussions/121) instead.

#### Others

1. I barely see any difference between training at clip skip 1 and 2 (a sketch of what clip skip does follows this list).
2. The differences between lora, locon, and loha are very subtle.
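
Clip skip n means taking the text encoder's hidden states from the n-th layer counting from the end, instead of the final output. A minimal sketch with the Hugging Face `transformers` CLIP classes (using the SD1.x text encoder as an example):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

name = "openai/clip-vit-large-patch14"  # the SD1.x text encoder
tokenizer = CLIPTokenizer.from_pretrained(name)
text_encoder = CLIPTextModel.from_pretrained(name)

tokens = tokenizer("1girl, holding umbrella", return_tensors="pt")
with torch.no_grad():
    out = text_encoder(**tokens, output_hidden_states=True)

clip_skip = 2
# clip skip 1 takes the last hidden layer, clip skip 2 the second-to-last.
hidden = out.hidden_states[-clip_skip]
# kohya-style scripts then re-apply the final layer norm:
hidden = text_encoder.text_model.final_layer_norm(hidden)
print(hidden.shape)  # (1, sequence_length, 768)
```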

### Datasets