alea31415 committed
Commit bafcc48
Parent: f480710

Update README.md

Files changed (1):
  README.md +12 -4
README.md CHANGED
@@ -2,7 +2,7 @@
 license: creativeml-openrail-m
 ---
 
-Tags to test
+Trigger words
 
 ```
 Anisphia, Euphyllia, Tilty, OyamaMahiro, OyamaMihari
@@ -28,6 +28,16 @@ The configuration json files can otherwise be found in the `config` subdirectori
 However, some experiments concern the effect of tags; for these I regenerate the txt files, so the difference cannot be seen from the configuration files.
 For now this concerns `05tag`, for which each tag is kept only with probability 0.5.
 
+#### Some observations
+
+For a thorough comparison, please refer to the `generated_samples` folder.
+
+1. I barely see any difference between training at clip skip 1 and clip skip 2.
+2. Setting the text encoder learning rate to half that of the unet makes training twice as slow, and I cannot see how it helps.
+3. The differences between lora, locon, and loha are very subtle.
+4. Training at higher resolution helps generate more complex backgrounds etc., but it is very time-consuming and most of the time not worth it (it is simpler to just switch the base model), unless this is exactly the goal of the lora you are training.
+5. What makes the biggest difference is captioning. The common wisdom that we should prune any tag describing a trait we want attached to the trigger word is exactly the way to go. No tags at all is terrible, especially for style training. Keeping all the tags strips traits from subjects when those tags are not used during sampling.
+
 #### Datasets
 
 Here is the composition of the datasets
@@ -51,6 +61,4 @@ Here is the composition of the datasets
 5_artists~onono_imoko: 373
 7_characters~screenshots~Tilty: 95
 9_characters~fanart~Anisphia+Euphyllia: 97
-```
-
-Tks
+```
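
To make the captioning setup above concrete, here is a minimal sketch of how the txt caption files could be regenerated. It combines the `05tag` idea (keep each tag with probability 0.5) with the pruning strategy from observation 5 (drop tags describing traits that should bind to the trigger word). The trigger word, trait tags, and `dataset` path are illustrative assumptions, not the actual scripts behind these experiments.

```python
import random
from pathlib import Path

# Illustrative assumptions: one trigger word and the trait tags to prune.
# In practice these come from your own tagger output and character design.
TRIGGER = "Anisphia"
TRAIT_TAGS = {"blonde hair", "green eyes"}

def rewrite_caption(line: str, tag_probability: float = 0.5) -> str:
    """Rebuild one comma-separated caption line.

    - The trigger word is always kept.
    - Tags in TRAIT_TAGS are pruned outright, so those traits bind to the trigger.
    - Every other tag is kept with probability `tag_probability` (the `05tag` run).
    """
    tags = [t.strip() for t in line.split(",") if t.strip()]
    kept = [TRIGGER]
    for tag in tags:
        if tag == TRIGGER or tag in TRAIT_TAGS:
            continue  # trigger already added; trait tags are pruned
        if random.random() < tag_probability:
            kept.append(tag)
    return ", ".join(kept)

if __name__ == "__main__":
    # Assumed layout: one txt caption file next to each training image.
    for txt in Path("dataset").rglob("*.txt"):
        txt.write_text(rewrite_caption(txt.read_text()))
```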