alvdansen 
posted an update Jun 21
Hey all!

I'm taking a somewhat strong stance here and petitioning to revisit the default training parameters on the Diffusers LoRA page.

After observing and testing many training pipelines shared by startups and other resources, I have found that many of them exhibit the same types of issues. In discussing this with some of the founders and creators involved, the common theme has been working backwards from the Diffusers LoRA page.

In this article, I explain why the defaults in the Diffusers LoRA code produce some positive results that can be initially misleading, and I suggest how that could be improved.

https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params

Hey @alvdansen, thanks for sharing! What about the LoRA rank and the train_text_encoder_frac params, any reco here please? Thanks!


I need to double-check train_text_encoder_frac, as I typically don't mess with that. For rank, I'm usually at 32.
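
For context, here's a minimal sketch of what a rank-32 UNet LoRA setup looks like with Diffusers and peft. This is illustrative only; the model ID and target modules follow the pattern used in the Diffusers DreamBooth LoRA training scripts, not anything specified in this thread:

```python
# Illustrative sketch (not from the post or the linked article):
# attaching a rank-32 LoRA adapter to an SDXL UNet via peft,
# mirroring the rank=32 recommendation above.
import torch
from diffusers import StableDiffusionXLPipeline
from peft import LoraConfig

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

unet_lora_config = LoraConfig(
    r=32,            # LoRA rank: controls adapter capacity
    lora_alpha=32,   # commonly set equal to the rank
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # attention projections
)
pipe.unet.add_adapter(unet_lora_config)
```

Setting lora_alpha equal to the rank keeps the effective scaling (alpha / r) at 1.0, which is the convention the Diffusers training scripts follow.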
