Hey all!
Here I take a somewhat strong stance: I'm petitioning to revisit the default training parameters on the Diffusers LoRA page.
In my opinion, after observing and testing many training pipelines shared by startups and other resources, I have found that many of them exhibit the same kinds of issues. In discussing this with some of these founders and creators, the common theme has been working backwards from the Diffusers LoRA page.
In this article, I explain why the defaults in the Diffusers LoRA code produce some positive results that can be initially misleading, and I suggest how they could be improved.
https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params
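For context, here's a minimal sketch of the alternative I'm arguing for in spirit: choosing LoRA hyperparameters deliberately instead of inheriting whatever the example script ships with. This assumes the PEFT-based setup that recent Diffusers training scripts use; the model name and the specific values below are illustrative placeholders, not the article's recommendations.

```python
# Attach a LoRA adapter with explicitly chosen hyperparameters,
# rather than relying on the training script's defaults.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # placeholder base model
)

# Freeze the base weights so only the adapter trains,
# mirroring what the Diffusers example scripts do.
unet.requires_grad_(False)

lora_config = LoraConfig(
    r=16,                      # adapter rank (placeholder value)
    lora_alpha=16,             # scaling factor (placeholder value)
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)

# Sanity-check that the adapter attached and only LoRA params are trainable.
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable LoRA parameters: {trainable:,}")
```

The point isn't these particular numbers; it's that rank, alpha, and learning rate interact with your dataset, so treating the page's defaults as a starting point to question, rather than a recipe to copy, tends to give better results.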