issue and requests

#1
by john71 - opened

Get and Set always sets the 1st LoRA slot. Would it be possible to make them set the desired slot? Also, unselecting sometimes doesn't work; it still generates the LoRA result.

john71 changed discussion title from get and set lora to issue
john71 changed discussion title from issue to issue and requests
john71 changed discussion status to closed
john71 changed discussion status to open

Also, it doesn't seem to work with Civitai LoRA imports.

Now LoRA isn't working even with a Hugging Face repo; an error comes up.

Thanks for the bug report.😀

There was a time when it was working fine, so I must have introduced a bug somewhere.
It is possible to set a LoRA in an arbitrary slot, but I'll prioritize fixing the bug for now.

The reason the LoRA results aren't unloaded may be that I fused them into the model weights.
The most reliable fix is to reload the model whenever the LoRA changes, but that is heavy. I'll see if there's a more efficient way.
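The fused-weights problem can be sketched with a toy example (a simplified model of the idea, not diffusers' actual implementation): fusing bakes the LoRA delta into the base weights in place, so "deselecting" the LoRA in the UI changes nothing unless the delta is subtracted back out first.

```python
# Toy model (NOT diffusers) of why a fused LoRA lingers after deselection:
# fusing adds the LoRA delta into the base weight in place.
class ToyLayer:
    def __init__(self, weight: float):
        self.weight = weight        # base weight
        self.fused_delta = 0.0      # LoRA delta currently baked in

    def fuse_lora(self, delta: float):
        self.weight += delta
        self.fused_delta += delta

    def unfuse_lora(self):
        self.weight -= self.fused_delta
        self.fused_delta = 0.0

layer = ToyLayer(1.0)
layer.fuse_lora(0.25)    # LoRA selected and fused
# User "deselects" the LoRA in the UI, but the delta is still baked in:
assert layer.weight == 1.25
layer.unfuse_lora()      # the actual fix: unfuse before the next generation
assert layer.weight == 1.0
```

Reloading the model is the brute-force equivalent of `unfuse_lora`: it restores the base weights, which is why it works but is so heavy for a 30GB model.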

Got it, thanks. I'll wait for the update. :)

I always reload to get around it in my original SDXL space, but FLUX.1 is just too big...
SDXL is about 6.5GB and FLUX.1 is about 30GB.
HF is faster, but it's still a bit much.

I found something called `unfuse_lora` and am trying to use it; I hope it works.
Using multiple LoRAs is a bit tricky.
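The weighting behind multiple LoRAs is itself simple. A toy sketch of the idea (illustrative only, not the library's code): each LoRA contributes a delta scaled by its per-adapter weight.

```python
# Toy sketch of per-adapter weighting (mirrors the idea behind diffusers'
# adapter mixing, not its implementation): the effective weight is the base
# weight plus each LoRA's delta scaled by that LoRA's strength.
def combine(base, deltas, weights):
    return base + sum(w * d for d, w in zip(deltas, weights))

# Two LoRAs, the second at half strength:
assert combine(1.0, [0.25, 0.5], [1.0, 0.5]) == 1.5
# No LoRAs selected -> base weight unchanged:
assert combine(2.0, [], []) == 2.0
```

In diffusers this corresponds (as I understand it) to loading each LoRA via `load_lora_weights(..., adapter_name=...)` and mixing them with `set_adapters`, which sidesteps fusing entirely.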

Specifying a slot number is usually easy, but it seems tricky here because of a subtle incompatibility with the Gradio library. I've been stuck there for a few minutes.
I'll try it later, just for study.
The other issues are probably fixed now. I hope so...

If you have any requests, please feel free to ask, not only in this space.
If you have a request that I can't handle, I'll probably be able to refer you to someone in HF.🙄

I know it's ugly, but I added one button per LoRA slot. Reluctantly, this was the most reliable way to do it.

BTW, I tried it, and I don't think it fixed the problem of the LoRAs staying fused!🙀
I'll see if I can find something better.

everything seems good now, the only problem is a useless overflow scrollbar appearing on each input box,
thanks <3

I'm glad it's working!

That one...looks like a new feature of Gradio.
But frankly speaking, it's a bug no matter how you look at it.
Okay, I'll roll back the Gradio version. That should fix it!

I have a new request, if you don't mind.
I want to generate two different people from two different LoRAs, say, but to no avail: it messes up the faces and such.
I found that by using ControlNet and masking we can have better control over the generation?

will it be possible to add that too?
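The masking idea reduces to region-wise blending: each region of the image is driven by only one LoRA's output, so the two faces don't bleed into each other. A 1-D toy sketch of that blend (illustrative only; real regional control operates on latents during denoising, and the names here are made up):

```python
# Toy 1-D sketch of masked blending (NOT a real pipeline): where the mask
# is 1, take the pixel generated with LoRA A; where it is 0, take LoRA B's.
def masked_blend(img_a, img_b, mask):
    return [a if m else b for a, b, m in zip(img_a, img_b, mask)]

# Left half comes from LoRA A's result, right half from LoRA B's:
assert masked_blend([1, 1, 1, 1], [2, 2, 2, 2], [1, 1, 0, 0]) == [1, 1, 2, 2]
```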

If it is possible with Diffusers it would be possible, but I have never dealt with ControlNet. (I have via libraries though...)
If you have a reference space or a page with the code, I'd appreciate it if you could let me know.

Be that as it may, the time limit for Zero GPU is pretty tight; by the time you build a complex flow using ControlNet, you'll probably run into the quota limits and it won't be usable.
I'm more concerned about that.

there is this space which uses controlnet
https://huggingface.co/spaces/DamarJati/FLUX.1-DEV-Canny

I see, makes sense. I guess that's why ComfyUI won't work on Hugging Face. If it did, it would solve most of these issues.

https://huggingface.co/spaces/John6666/DiffuseCraftMod

Here, for SDXL, we have ControlNet, IP-Adapter, and other cool things. Does it work on Hugging Face because SDXL is lighter?

Thank you. Perfect sample. It's not hard to copy with this.
I think it would be better if I could use it in a variety of ways anyway, so give me a minute. The original space is not designed for ControlNet, so the UI and the generator need to be modified slightly.

If I wanted to use ComfyUI or WebUI, I'd have to go with a pay-as-you-go plan.
If the charges had a cap I could use them, but they don't, and that's too risky for an individual.

Does it work on Hugging Face because SDXL is lighter?

There's that, too, and the library at the heart of DiffuseCraft, stablepy, takes care of most of the processing, so there's no hassle and no waste.
https://github.com/R3gm/stablepy

The ControlNet (Union) implementation has reached the stage where it should work.
However, torch crashes with an error during inference 😱, so it's better not to run it yet.
It's folded into the accordion at the bottom.

Torch-related problems are known to be troublesome, so I'll look into it slowly tomorrow.

There was an error in the model selection section, so I fixed it. Now you can try various styles by switching models.

LoRA is not working; I'm getting:
Model load Error: too many values to unpack (expected 2)

john71 changed discussion status to closed

Thanks for the report!
I thought I killed that bug just a few minutes ago...it came back as a zombie.😱
I killed it again. If it's still out there, something else is contagious.
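For context, "too many values to unpack (expected 2)" typically comes from a two-variable unpack being fed more than two items, e.g. splitting a repo path on every `/`. A generic illustration with the usual fix (hypothetical code, not the Space's actual source):

```python
# Hypothetical illustration of this error class (not the Space's real code):
# `repo, fname = path.split("/")` raises ValueError("too many values to
# unpack (expected 2)") for "user/repo/file.safetensors", because split()
# returns three parts. Splitting from the right, at most once, is robust.
def split_repo_and_file(path: str):
    repo, fname = path.rsplit("/", 1)
    return repo, fname

assert split_repo_and_file("john/my-lora/lora.safetensors") == ("john/my-lora", "lora.safetensors")
assert split_repo_and_file("user/repo") == ("user", "repo")
```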

All right! It's fixed.
