araminta_k
alvdansen's activity
I've been working on this one for a few days, but really I've had this dataset for a few years! I collected a bunch of open-access photos online back in late 2022, but I was never happy with how they worked with the base model!
I am so thrilled that they look so nice with Flux!
This is version one of the model for me - I still see room for improvement, and possibly for expanding its 40-image dataset. For those who are curious:
40 images
3200 steps
Network dim 32
Learning rate 3e-4
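Under the common assumption of batch size 1, these numbers can be sanity-checked: 3200 steps over 40 images works out to 80 passes through the data. A quick sketch (the helper name is mine, not from any trainer):

```python
def epochs_from_steps(total_steps: int, dataset_size: int, batch_size: int = 1) -> float:
    """Epochs implied by a fixed step budget (hypothetical helper)."""
    steps_per_epoch = dataset_size / batch_size
    return total_steps / steps_per_epoch

# 3200 steps over a 40-image dataset at batch size 1 -> 80 epochs
print(epochs_from_steps(3200, 40))  # 80.0
```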
Enjoy! Create! Big thank you to Glif for sponsoring the model creation! :D
alvdansen/flux_film_foto
I know it's a Saturday, but I decided to release my first Flux Dev Lora.
It's a retrain of my "Frosting Lane" model, and I am sure the styles will just keep improving.
Have fun! Link Below - Thanks again to @ostris for the trainer and Black Forest Labs for the awesome model!
alvdansen/frosting_lane_flux
FROSTING LANE REDUX
The v1 of this model was released during a big model push, so I think it got lost in the shuffle. I revisited it for a project and realized it wasn't inventive enough around certain concepts, so I decided to retrain.
alvdansen/frosting_lane_redux
I think the original model was really strong on its own, but because it was trained on fewer images, it produced a very lackluster range of facial expressions, and I wanted to improve that.
The hardest part of creating models like this, I find, is maintaining the detailed linework without overfitting. It takes a really balanced dataset; I repeat the data 12 times during the process and stop within the last 10-20 epochs.
It is very difficult to predict the exact amount of time needed, so for me it is crucial to do epoch stops. Every model has a different threshold for ideal success.
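To make the epoch arithmetic concrete: with kohya-style repeats, one epoch means every image is seen once per repeat, so a 40-image dataset repeated 12 times gives 480 steps per epoch at batch size 1. A small sketch (helper name is illustrative, not from any tool):

```python
def steps_per_epoch(num_images: int, repeats: int, batch_size: int = 1) -> int:
    """Steps in one epoch when each image is repeated `repeats` times (kohya-style)."""
    return num_images * repeats // batch_size

# A 40-image dataset with 12 repeats: each epoch is 480 steps, so saving
# a checkpoint every epoch makes it easy to stop at the right threshold.
print(steps_per_epoch(40, 12))  # 480
```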
Yeah that would be really helpful, I haven't had the time to try and do something like that.
Thanks!
I put together my own page of models using their code and LoRA. Enjoy!
alvdansen/flash-lora-araminta-k-styles
No problem! Hope it helps!
I've just written an article that takes a step-by-step approach to outlining the method I used to train the 'm3lt' LoRA, a blended style model.
I've used the LoRA Ease trainer by @multimodalart :D
https://huggingface.co/blog/alvdansen/training-lora-m3lt
multimodalart/lora-ease
I responded on X with the best way to contact me.
I don’t know who you are
I trained this model on a new spot I'm really excited to share (soon!)
This Monday I will be posting my first beginning to end blog showing the tool I've used, dataset, captioning techniques, and parameters to finetune this LoRA.
For now, check out the model in the link below.
alvdansen/m3lt
Congrats :D
It will focus on dataset curation through training on a pre-determined style, to give better insight into my process.
Curious: what questions do you have that I can try to answer in it?
Midsommar Cartoon
A playful cartoon style featuring bold colors and a retro aesthetic. Personal favorite at the moment.
alvdansen/midsommarcartoon
---
Wood Block XL
I've started training on public domain styles to create some interesting datasets. In this case I found a group of images taken from really beautiful and colorful Japanese woodblock prints.
alvdansen/wood-block-xl
---
Dimension W
For this model I did actually end up working on an SD 1.5 model as well as an SDXL. I prefer the SDXL version, and I am still looking for parameters I am really happy with for SD 1.5. That said, both have their merits. I trained this with the short film I am working on in mind.
alvdansen/dimension-w
alvdansen/dimension-w-sd15
I typically use Kohya, but I also test a lot of platform services, looking for the right one, because I am a creature of comfort :)
I need to double check the train_text_encoder_frac as I typically don't mess with that. For rank I'm usually at 32.
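For context on what rank 32 means: a LoRA adapter replaces a full-rank weight update with two low-rank matrices of shapes (in, r) and (r, out), so the added parameter count per linear layer scales linearly with rank. A quick sketch with an illustrative 2048-dimensional projection (the helper and the dimensions are mine, not from any trainer):

```python
def lora_params(in_features: int, out_features: int, rank: int) -> int:
    """Extra parameters a LoRA adapter adds to one linear layer:
    a down-projection (in, r) plus an up-projection (r, out)."""
    return rank * (in_features + out_features)

# For a hypothetical 2048 -> 2048 projection at rank 32:
# 32 * (2048 + 2048) = 131072 added parameters for that layer.
print(lora_params(2048, 2048, 32))  # 131072
```

Doubling the rank doubles the adapter size, which is one reason rank is a capacity-vs-overfitting dial rather than a "more is better" knob.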
Here I take a somewhat strong stance and am petitioning to revisit the default training parameters on the Diffusers LoRA page.
In my opinion, and after observing and testing many training pipelines shared by startups and other resources, I have found that many of them exhibit the same types of issues. In discussions with some of these founders and creators, the common theme has been working backwards from the Diffusers LoRA page.
In this article, I explain why the defaults in the Diffusers LoRA code produce some positive results that can be initially misleading, and I suggest how they could be improved.
https://huggingface.co/blog/alvdansen/revisit-diffusers-default-params
No - I change them, but it's very case-by-case. I am trying to emphasize elements other than hyperparameters, because in my experience these concepts apply across a variety of hyperparameter choices.
Here is fine also, I will check later
Are you on X? You can contact me there @araminta_k
I have added further observations here:
https://huggingface.co/blog/alvdansen/enhancing-lora-training-through-effective-captions
I will need to take a look at what the exact backend of face-to-all is. What is the result you're getting?
I've been asked a lot to share more on how I train LoRAs. The truth is, I don't think my advice is very helpful without also including more contextual, theoretical commentary on how I **think** about training LoRAs for SDXL and other models.
I wrote a first article here about it - let me know what you think.
https://huggingface.co/blog/alvdansen/thoughts-on-lora-training-1
Edit: Also people kept asking where to start so I made a list of possible resources:
https://huggingface.co/blog/alvdansen/thoughts-on-lora-training-pt-2-training-services
Thank you!
I intend to start writing more fully on the thought process behind how I curate data for and train style and subject finetunes, beginning this next week.
Thank you for reading this post! You can find the models on my page and I'll drop a few previews here.
thank you <3
🙌 Thank you so much for sharing! I will be sharing a training workflow in the coming week :D