IndexError: list index out of range

#2
by xi0v - opened

Hello, I've encountered this when trying to convert one of my models. I'm not sure whether it's a code issue or a model issue.
If you have time, could you please look through the code for this?
I used the default preset with the blessed VAE and no LoRAs.

Error: Failed to upload to xi0v/test. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1897, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1483, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 816, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 816, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 333, in convert_url_to_diffusers_repo
    md += f"[{str(u).split('/')[-2]}/{str(u).split('/')[-1]}]({str(u)})<br>"
IndexError: list index out of range

Also, before the error started appearing, I managed to convert one of my models, but its output was only fried, distorted images.

Owner

Definitely a code issue. It's not related to the model content, but to the name-handling code. I'll take a look.
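For reference, the traceback points at the line that builds a markdown link from the upload URL: `str(u).split('/')[-2]` raises `IndexError` whenever the string has fewer than two `/`-separated parts. A minimal defensive sketch is below; the helper name and the fallback behavior are my assumptions, not the repo's actual fix.

```python
# A defensive sketch of the failing line in convert_url_to_diffusers_repo
# (helper name and fallback behavior are assumptions, not the actual fix).
def link_md(u) -> str:
    """Build a '[user/repo](url)' markdown link from a hub URL, falling back
    to the raw string when there are fewer than two '/'-separated parts
    (the case that raised the IndexError)."""
    parts = str(u).strip("/").split("/")
    label = "/".join(parts[-2:]) if len(parts) >= 2 else str(u)
    return f"[{label}]({u})<br>"

print(link_md("https://huggingface.co/xi0v/test"))
# -> [xi0v/test](https://huggingface.co/xi0v/test)<br>
print(link_md("not-a-url"))
# -> [not-a-url](not-a-url)<br>
```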

Owner

As for the noise, it often happens when the VAE is not baked in. Also, the scheduler (sampler) and the model need to be compatible, so changing the scheduler may improve things.

Appreciate it!
I think I may have chosen DPM++ 2M Karras instead of the regular 2M; I'll see if it reoccurs. Also, none of the models I've merged mention a baked VAE or recommend one, so should I still select a VAE when converting?

Owner

In WebUI and ComfyUI, the VAE is usually provided separately, and hardly anyone uses the one included in the checkpoint...
A lot of people used to bake it in, but now it's less than 50-50.
Diffusers, on the other hand, assumes the VAE is included.

Owner

That's bizarre...🙀
I don't know what's going on here...
I've hardly modified anything (I just added some debugging code to look for the error, but it shouldn't affect the operation itself), and yet it works now.
I'm pretty sure there was an error here earlier too.

Maybe it was an HF issue.

https://huggingface.co/John6666/Hestia-v2

I'll try to convert another model now.
If you don't mind, could you please take down that converted version? 😅
But also, did you bake in a VAE while converting it?
And I think HF had an issue, since it errored most of my Spaces for no apparent reason (other than irrelevant alerts in the logs).

Owner

By the way, as for VAE recommendations (I've only been doing this for a few months, so I don't know much about it), the standard VAE or E7 seems good for flat, animation-style painting.
F1 is good but a little too bright.

XL_VAE_C - E7
https://civitai.com/models/152040?modelVersionId=202582

I'll check them out. Thanks for the recommendations!

Owner

Oops. Deleted.
I baked in the standard VAE and set the scheduler to Euler a. The girl came out correctly.

No worries!
Did you try running inference with it?

Owner

Serverless is halfway broken right now, so I ran inference in my Space instead.
https://huggingface.co/spaces/John6666/votepurchase-multiple-model

Yeah, serverless hasn't been good recently. I'm running inference on your Space right now; hopefully I don't get those fried images 😅

It works!
I really don't know how to thank you man!

Owner

Maybe no one (among us users) knows the cause of the serverless malfunction yet.
I wouldn't be surprised if it's simply a matter of money or machine power running out. 😅
We're currently doing it DIY, since there's been no explanation for it.
https://discuss.huggingface.co/t/inference-api-turned-off-why/97126/21

Owner

Good!

I'll be closing this discussion now. Really, thank you!

xi0v changed discussion status to closed
