SD 1.5 version of this?

#1
by Yntec - opened

Hey, great space! Would it be possible to create a version that converts a SD1.5 version of a repo into a single safetensors file? I think it would need code like this one:

https://gist.github.com/jachiam/8a5c0b607e38fcc585168b90c686eb05
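For context, what such a converter does, roughly, is fold each diffusers component's state dict into one LDM-style single-file checkpoint under fixed key prefixes. A minimal sketch of the top-level idea (the prefixes below are the standard SD1.5 layout; `merge_components` is a hypothetical name, and the real script also renames many individual keys inside each component):

```python
# Rough sketch only: fold per-component diffusers state dicts into one
# LDM-style checkpoint under fixed key prefixes. The real conversion
# script also remaps individual attention/resnet keys, which is omitted.
PREFIXES = {
    "unet": "model.diffusion_model.",
    "vae": "first_stage_model.",
    "text_encoder": "cond_stage_model.transformer.",
}

def merge_components(components: dict) -> dict:
    """components: component name -> its state dict (key -> tensor)."""
    merged = {}
    for name, state_dict in components.items():
        for key, tensor in state_dict.items():
            merged[PREFIXES[name] + key] = tensor
    return merged
```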

Thanks for the info!
The code supported .bin but not .safetensors, so I had to rewrite it a bit, but at least I now have something that works locally.
I'll work on the GUI now.

SD1.5 version
https://huggingface.co/John6666/safetensors_converting_test/blob/main/convert_repo_to_safetensors_sd.py
SDXL version
https://huggingface.co/John6666/safetensors_converting_test/blob/main/convert_repo_to_safetensors.py
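The .bin/.safetensors rewrite mentioned above presumably comes down to picking the loader by file extension; a hedged sketch (the function names here are mine, not from the actual scripts):

```python
# Hypothetical loader switch: diffusers repos ship weights either as
# PyTorch .bin files or as .safetensors, so dispatch on the extension.
def weights_format(path: str) -> str:
    return "safetensors" if path.endswith(".safetensors") else "bin"

def load_state_dict(path: str) -> dict:
    if weights_format(path) == "safetensors":
        from safetensors.torch import load_file  # pip install safetensors
        return load_file(path, device="cpu")
    import torch
    # torch.load covers the older .bin format
    return torch.load(path, map_location="cpu")
```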

Completed. (Almost copy and paste.)
When you duplicate the space, change os.environ['HF_OUTPUT_REPO'] = 'John6666/safetensors_converting_test' in app.py to your own model repo, and set the HF_TOKEN secret to your HF write token.
Even without those, it works just fine; you simply can't upload to a repo.
https://huggingface.co/spaces/John6666/convert_repo_to_safetensors_sd
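The setup described above could look roughly like this in app.py. `HF_OUTPUT_REPO` and `HF_TOKEN` are the names from the post; `maybe_upload` is a hypothetical helper, though `upload_file` is the standard huggingface_hub call:

```python
import os

# Change this (or the HF_OUTPUT_REPO secret) to your own model repo
# when duplicating the space.
OUTPUT_REPO = os.environ.get("HF_OUTPUT_REPO", "John6666/safetensors_converting_test")

def maybe_upload(local_path: str):
    token = os.environ.get("HF_TOKEN")
    if not token:
        # No write token configured: keep the converted file local only.
        print("HF_TOKEN not set; skipping upload.")
        return None
    from huggingface_hub import HfApi  # only needed when uploading
    HfApi(token=token).upload_file(
        path_or_fileobj=local_path,
        path_in_repo=os.path.basename(local_path),
        repo_id=OUTPUT_REPO,
        repo_type="model",
    )
```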

Thanks so much! That's something I always wanted to have!

I'm new to generative AI and don't know much other than SDXL, so let me know if there are any bugs.

I tried converting this repo into safetensors:

https://huggingface.co/Yntec/Playground

Which produced this file (renamed from Yntec_Playground to Playground):

https://huggingface.co/Yntec/Playground/resolve/main/Playground.safetensors

However, when trying to load it into Automatic1111's UI this error appears:

```
Loading weights [1a9adc090a] from /home/user/stable-diffusion-webui/models/Stable-diffusion/Playground.safetensors
changing setting sd_model_checkpoint to Playground.safetensors [1a9adc090a]: SafetensorError
Traceback (most recent call last):
  File "/home/user/stable-diffusion-webui/modules/shared.py", line 516, in set
    self.data_labels[key].onchange()
  File "/home/user/stable-diffusion-webui/modules/call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "/home/user/stable-diffusion-webui/webui.py", line 199, in
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 533, in reload_model_weights
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 273, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 252, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "/usr/local/lib/python3.10/site-packages/safetensors/torch.py", line 259, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
```

HeaderTooLarge is an error that appears when the file isn't actually a safetensors file. Very weird!
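That matches the on-disk format: a safetensors file begins with an 8-byte little-endian length followed by a JSON header, whereas a zip-based torch checkpoint begins with b'PK', so those first bytes decode to a huge bogus length. A quick hypothetical diagnostic:

```python
import json
import struct

def diagnose_safetensors(path: str) -> str:
    """Hypothetical helper: inspect the first bytes of a .safetensors file.

    The format starts with a little-endian uint64 giving the length of a
    JSON header. If the file is really a zip-based torch checkpoint, it
    starts with b'PK' and the length field is nonsense, which is exactly
    what HeaderTooLarge complains about.
    """
    with open(path, "rb") as f:
        head = f.read(8)
        if head[:2] == b"PK":
            return "zip archive (a torch .bin/.ckpt?), not safetensors"
        header_len = struct.unpack("<Q", head)[0]
        if header_len > 100_000_000:
            return f"implausible header length {header_len}; not safetensors"
        header = json.loads(f.read(header_len))
        return f"valid safetensors header with {len(header)} entries"
```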

Welcome to generative AI! It's great to see someone creating these spaces. There are several models that were deleted and lost, but we still have their diffusers versions, so if this works we'll be able to rescue those models.

I've switched to the safetensors library's torch API for saving as well. I also added metadata (not sure if it's correct).
Hope this works...
https://huggingface.co/spaces/John6666/convert_repo_to_safetensors_sd

That still produces a non-working file: https://huggingface.co/Yntec/Playground/resolve/main/Yntec_Playground.safetensors - this error appears before the other one:

```
reading checkpoint metadata: /home/user/stable-diffusion-webui/models/Stable-diffusion/Yntec_Playground.safetensors: AssertionError
Traceback (most recent call last):
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 62, in __init__
    self.metadata = read_metadata_from_safetensors(filename)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 232, in read_metadata_from_safetensors
    assert metadata_len > 2 and json_start in (b'{"', b"{'"), f"{filename} is not a safetensors file"
AssertionError: /home/user/stable-diffusion-webui/models/Stable-diffusion/Yntec_Playground.safetensors is not a safetensors file
```

I have asked for help in the GitHub thread; let's see if this can be fixed. I appreciate all your effort.

Thanks for filing the rescue request on GitHub.
I looked at the headers of other normal safetensors files, and it looks like the additional metadata is not needed, so I turned it off.
https://huggingface.co/docs/safetensors/metadata_parsing
I think it is now almost exactly the same as the SDXL version of the Diffusers code (except for the SD1.5/XL-specific conversion part).
The file below is what I generated with this version.
If this doesn't work, there may be some damage in the core of the conversion logic.
https://huggingface.co/John6666/safetensors_converting_test/resolve/main/Yntec_Playground.safetensors

That didn't load either, but here's a new idea! What if, instead of converting to safetensors, it converted to the ckpt file format? In that case it would output a .ckpt file that could later be loaded and converted to safetensors like any other ckpt file.
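The fallback idea above is cheap to try, assuming (as A1111 does) that a legacy .ckpt is just a serialized dict with the weights stored under the "state_dict" key:

```python
def save_as_ckpt(state_dict: dict, path: str):
    import torch
    # A legacy .ckpt is a torch-serialized dict with the weights under
    # the "state_dict" key, which A1111 can load and re-save as safetensors.
    torch.save({"state_dict": state_dict}, path)
```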

Here's the original discussion:

https://github.com/huggingface/diffusers/issues/672

I think it was created before the safetensors format existed, and that's the request jachiam was fulfilling (diffusers to a single ckpt file).

I'll try later.
Maybe I'll compare the two scripts and see what I can find out.

https://github.com/huggingface/diffusers/blob/main/scripts/convert_diffusers_to_original_stable_diffusion.py
We may be idiots.
I was looking at the Diffusers repository to review the example SDXL script and found the SD1.5 code, last modified 5 months ago. LOL!
I guess all I have to do now is work on it.

Oh, indeed, my idiocy is legendary; I just wasted all that time looking at the wrong code!

Not only does it work now, it's also perfect!

[Two images side by side: one generated with the original model, one with the model converted from diffusers; they are identical.]

Stupendous work! You may not be the hero we deserve, but you are the hero we needed! And now I'm going to rescue so many models from oblivion and merge them with so many others that today will be remembered as the before-and-after of whatever I meant to say!

We did it!
Anyway, as long as it works in the end, everything is fine!
