|
# Revisions |
|
|
|
I wouldn't regard any of the notes below as especially useful. I was mostly throwing shit at the wall and then recording what happened while likely drawing incorrect conclusions. |
|
|
|
### v6 |
|
- 1343 steps total |
|
- 90 images * 20 repeats, 38 images * 26 repeats |
|
- 5 batch size |
|
- 3 epochs |
|
- 4e-4 adamw lr / unet lr (8e-5 * batch size) |
|
- 7.5e-5 text encoder lr (1.5e-5 * batch size)
|
- 768x768 training resolution |
|
- network rank 64, alpha 64 |
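The batch-size LR scaling in the parentheticals above is just a linear multiplication; a minimal sketch (the function name is mine):

```python
def scale_lr(per_sample_lr: float, batch_size: int) -> float:
    """Linear LR scaling: a larger batch averages more gradients per step,
    so the learning rate is scaled up proportionally."""
    return per_sample_lr * batch_size

# v6's unet LR: 8e-5 per sample at batch size 5 -> ~4e-4
print(scale_lr(8e-5, 5))
```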
|
Result: 22m13s, 0.0976 loss. |
|
Pretty good coherency, adapts well to prompt. |
|
Very slight color fringing and noise at high cfg. Epoch 2 eliminates this, but loses a little bit of the character's face and causes occasional body horror as coherency is lost. |
|
Optimal result would probably be around epoch 2.5. |
|
|
|
### v6b |
|
- Exactly the same as v6, but at 512x512 resolution instead of 768x768. |
|
Result: 8m49s, 0.115 loss. |
|
Substantially improved image quality -- noise and color fringing are almost completely gone. However, coherency seems noticeably worse; certain fine character details are lost (perhaps because they become too small to represent at 512x512) and body horror/extra limbs are more common. It almost seems undertrained, but I would have expected the opposite given the step count is the same as v6 at a lower resolution.
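As a sanity check on the speedup, training time should scale roughly with pixel count; a quick back-of-envelope using the two timings above:

```python
# Did the 512px run speed up roughly in proportion to pixel count?
# (v6: 768x768 in 22m13s, v6b: 512x512 in 8m49s)
v6_seconds  = 22 * 60 + 13   # 1333
v6b_seconds = 8 * 60 + 49    # 529

time_ratio  = v6b_seconds / v6_seconds   # ~0.40
pixel_ratio = (512 / 768) ** 2           # ~0.44

print(f"time ratio  {time_ratio:.2f}")
print(f"pixel ratio {pixel_ratio:.2f}")
```

Close enough that the remaining gap is plausibly fixed per-step overhead.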
|
Not sure how to maintain the improved image quality from v6b while keeping the coherency of v6. |
|
|
|
### v7 |
|
Went back to old Koharu hyperparameters to reestablish a baseline since we're changing too many things at once and it's hard to tell what's causing what. Koharu params are slow to train and result in large models, but otherwise seem to work well. |
|
- 1138 steps total |
|
- 90 images * 8 repeats, 38 images * 11 repeats |
|
- 3 batch size |
|
- 3 epochs |
|
- 2e-4 adamw lr / unet lr |
|
- 5e-5 text encoder lr |
|
- 832x832 training resolution |
|
- network rank 128, alpha 128 |
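For reference, a sketch of the step-count arithmetic, assuming steps = (images × repeats) × epochs / batch size with no gradient accumulation. It reproduces v7's 1138 exactly, though aspect-ratio bucketing can make real counts drift from this in other runs:

```python
def total_steps(image_groups, batch_size, epochs):
    """Rough step count: (sum of images * repeats) * epochs // batch size.
    image_groups is a list of (image_count, repeats) pairs."""
    per_epoch = sum(n * repeats for n, repeats in image_groups)
    return per_epoch * epochs // batch_size

# v7: 90 images * 8 repeats + 38 images * 11 repeats, batch 3, 3 epochs
print(total_steps([(90, 8), (38, 11)], 3, 3))   # 1138
```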
|
Result: 18m46s, 0.102 loss. |
|
Still very slight fringing and noise but not as bad as v6. Coherency is good, but marginally worse than v6. Second-to-last epoch is totally undertrained and misses most character details, so while the last epoch is better I suspect it is on the verge of underfitting; some of the character's more subtle details and expressions don't quite make it through. |
|
I think increasing unet LR to improve fit while slightly reducing step count may be the best option here to reduce color fringing and training time.
|
|
|
Omitting several not-very-good revisions here. |
|
|
|
### v8b |
|
- 1138 steps total |
|
- 90 images * 8 repeats, 38 images * 11 repeats |
|
- 3 batch size |
|
- 3 epochs |
|
- 1.5e-4 adamw lr / unet lr |
|
- 1.5e-5 text encoder lr |
|
- 768x768 training resolution |
|
- network rank 128, alpha 128 |
|
- Modified image captions to always include the character's name as the first word |
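The caption change in the last bullet could be scripted along these lines; the tag format and example trigger word are assumptions, not taken from the actual dataset:

```python
def put_name_first(caption: str, name: str) -> str:
    """Move (or insert) the character's name as the first comma-separated tag."""
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    tags = [t for t in tags if t != name]   # drop any existing occurrence
    return ", ".join([name] + tags)

# Hypothetical caption; the actual trigger word is not in these notes.
print(put_name_first("1girl, smile, character_name", "character_name"))
# character_name, 1girl, smile
```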
|
Result: 15m40s, 0.0952 loss. |
|
Image quality is good. Coherency is good. Probably the most usable result so far, but it is slightly undertrained and needs its weight boosted during inference to get the best results. Close to what I would consider a final result, but I think I can do better. |
|
|
|
### v8c |
|
- 1236 steps total |
|
- 88 images * 9 repeats, 38 images * 12 repeats |
|
- 3 batch size |
|
- 3 epochs |
|
- Removed some images which were too similar to each other |
|
- Added three new images which demonstrate a wider range of expressions |
|
- Removed series name from image captions since it probably just adds noise |
|
- 3e-4 adamw lr / unet lr |
|
- 2e-5 text encoder lr |
|
- Boosted learning rates in an attempt to improve fit |
|
- 768x768 training resolution |
|
- network rank 128, alpha 128 |
|
- keep_tokens revised to 2; it was incorrectly set to 3 in v8b, though it probably makes no difference
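For clarity, what keep_tokens=2 is meant to do: the first two comma-separated tags stay fixed while the rest are shuffled each step. A rough sketch of that behavior (not the trainer's actual implementation):

```python
import random

def shuffle_caption(caption: str, keep_tokens: int, rng: random.Random) -> str:
    """Shuffle comma-separated tags, keeping the first `keep_tokens` fixed."""
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_tokens], tags[keep_tokens:]
    rng.shuffle(tail)
    return ", ".join(head + tail)

# "name" and "series" survive at the front; the rest is reordered per step.
rng = random.Random(0)
print(shuffle_caption("name, series, 1girl, smile, red_hair", 2, rng))
```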
|
Result: 18m02s, 0.0934 loss.
|
Immediately it's clear that the combination of boosted learning rates and the 8% increase in step count is too much. The model is overfitting and coherency is lost; extra limbs and weird deformations are too common. However, it doesn't exhibit the color fringing and noise issues from the earlier overtrained models, so I think the issue is that the learning rates are too high, not the step count.
|
Setting the model's weight to 0.8 during inference gives quite good results, nailing the character's face and expressions while still having a good amount of coherency. Ideally it would work this well at 1.0 weight, but if I can't find the right parameters to get that I'll just release this one with a note that the weight should be set to 0.8. |
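On the 0.8 weight: at inference the LoRA weight just scales the learned low-rank delta added to each base matrix, W' = W + weight · (alpha/rank) · (B·A). A toy pure-Python sketch under that assumption, with made-up matrix values:

```python
def apply_lora(W, A, B, weight, alpha, rank):
    """W' = W + weight * (alpha / rank) * (B @ A), in plain Python lists.
    weight=1.0 applies the full learned delta; 0.8 blends it in at 80%."""
    scale = weight * alpha / rank
    rows, cols = len(W), len(W[0])
    out = [row[:] for row in W]
    for i in range(rows):
        for j in range(cols):
            delta = sum(B[i][k] * A[k][j] for k in range(rank))
            out[i][j] += scale * delta
    return out

# Toy 2x2 base weight with a rank-1 LoRA: B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(apply_lora(W, A, B, weight=0.8, alpha=128, rank=1))
```

With alpha = rank (as in these runs), the alpha/rank factor is 1 and the inference weight is the only scaling knob.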
|
|
|
### v8d |
|
Same params as v8c, except the text encoder LR is reduced from 2e-5 to 1e-5.
|
Result: didn't work as well as I'd hoped. Just seems messier. Not sure if that's due to the 50% reduction in text encoder LR or just random variance. The RNG seed was the same, but I believe the token shuffler doesn't use the seed, so it introduces additional randomness.
|
|
|
### v8e |
|
Same params as v8c, except the text encoder LR is reduced from 2e-5 to 1.6e-5 (80% of v8c's rate, to try to reproduce the effect of running v8c at 0.8 weight).
|
Result: shit's fucked, going back to v8b and just increasing the step count. |
|
|
|
### v9 |
|
v8b with an extra epoch and the slightly tweaked dataset from v8c. |
|
Result: looks pretty good, going with this. |
|
|