Explanation of training process

#2
by nominalgeek - opened

Could you provide some details as to how you trained this model?

Hi! For training, I used LoRAs trained on CivitAI. I used 100 images per LoRA to train multiple LoRA models, which were then combined using the Concat method (you can find this in the Kohya-SS branch sd3-flux). The training settings were:

- Epochs: 6, with 16 repeats
- Training resolution: 1024
- Scheduler: cosine_with_restarts, with lrSchedulerNumCycles: 3
- LR: 0.0004
- minSnrGamma: 5
- noiseOffset: 0.1
- optimizerType: Adafactor
- trainBatchSize: 4
- networkDim and networkAlpha set equal to each other

These might not be the best parameters, but they showed better results than my previous settings.
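For reference, here is a rough sketch of how those settings could map onto a kohya-ss sd-scripts (sd3-flux branch) command line. The file paths, the concrete dim/alpha value (32), and the dataset folder layout are assumptions for illustration, not details from the post above:

```shell
# Hedged sketch only: paths, dim/alpha value, and dataset layout are assumed.
# In sd-scripts, "16 repeats" is typically encoded in the dataset folder
# name, e.g. img/16_subject/
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path /models/flux1-dev.safetensors \
  --clip_l /models/clip_l.safetensors \
  --t5xxl /models/t5xxl_fp16.safetensors \
  --ae /models/ae.safetensors \
  --network_module networks.lora_flux \
  --network_dim 32 --network_alpha 32 \
  --resolution 1024,1024 \
  --train_batch_size 4 \
  --max_train_epochs 6 \
  --learning_rate 0.0004 \
  --optimizer_type Adafactor \
  --lr_scheduler cosine_with_restarts \
  --lr_scheduler_num_cycles 3 \
  --min_snr_gamma 5 \
  --noise_offset 0.1 \
  --train_data_dir img \
  --output_dir output --output_name my_lora
```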

The LoRAs were trained on the Flux Dev distilled model and then merged afterward? I can't see a way to train a LoRA on the distilled ckpt via CivitAI.

The LoRAs were trained on Flux Dev (distilled) and then applied to Flux Dev (de-distilled).
