Thank you for your dedication and work!
#4 opened by Baekdoosan
It's really interesting, as my first tries are running very well.
Of course, I really hope the "Hyper" LoRA dev team is working on speeding up the generation time.
But the results and the strength of the prompt adherence are remarkable.
Many thanks!
I think this model is the most important thing that has happened in the Flux community since we've been able to train LoRAs.
Thank you so much
@nyanko7