|
# Summary of Stable Diffusion embedding format |
|
|
|
This file is a quick reference for Stable Diffusion embedding file formats.
|
|
|
Note: several files here have "embedding" in their names. However, they cannot be used as Stable Diffusion embeddings.
|
|
|
I do include some tools, such as *generate-embedding.py* and *generate-embeddingXL.py*, that are intended

to explore the embedding file formats used by actual inference tools. Therefore, I'm taking some time to document

the little I know about the format of those files.
|
|
|
## Stable Diffusion v1.5 |
|
Note that SD 1.5 has a different format for embeddings than SDXL. And within SD 1.5, there are two different formats.
|
|
|
### SD 1.5 pickletensor embed format |
|
|
|
I have observed that .pt embeddings use a dict-of-dicts layout. It looks something like this:
|
|
|
{

"string_to_token": {'doesntmatter': 265}, # I don't know why 265, but it usually is

"string_to_param": {'doesntmatter': tensor([][768])},

"name": *string*,

"step": *string*,

"sd_checkpoint": *string*,

"sd_checkpoint_name": *string*

}
|
|
|
(Note that any of the *string* values can be None)
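A minimal Python sketch of that layout, assuming PyTorch: build a matching dict, save it with torch.save, and read it back. The file name, key name, token id, and vector count here are placeholders, not part of any spec.

```python
import torch

# Placeholder dict following the observed .pt embedding layout.
embedding = {
    "string_to_token": {"doesntmatter": 265},
    "string_to_param": {"doesntmatter": torch.zeros(1, 768)},  # (num_vectors, 768)
    "name": "my-embedding",
    "step": None,
    "sd_checkpoint": None,
    "sd_checkpoint_name": None,
}
torch.save(embedding, "example.pt")

# Round-trip: all values above are plain tensors/primitives,
# so this loads under either torch.load weights_only default.
loaded = torch.load("example.pt")
vectors = loaded["string_to_param"]["doesntmatter"]
print(vectors.shape)  # torch.Size([1, 768])
```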
|
|
|
|
|
### SD 1.5 safetensor embed format |
|
|
|
The ones I have seen have a much simpler format, trivial compared to the .pt format:
|
|
|
{ "emb_params": Tensor([][768])} |
|
|
|
According to https://github.com/Stability-AI/ModelSpec?tab=readme-ov-file

there is supposed to be metadata embedded in the safetensors header, but I haven't found a clean way to read it yet.

Expected standard slots for metadata info are:
|
|
|
"modelspec.title": "(name for this embedding)", |
|
"modelspec.architecture": "stable-diffusion-v1/textual-inversion", |
|
"modelspec.thumbnail": "(data:image/jpeg;base64,/9jxxxxxxxxx)" |
|
|
|
## SDXL embed format (safetensor) |
|
|
|
This has an actual spec at: |
|
https://huggingface.co/docs/diffusers/using-diffusers/textual_inversion_inference |
|
But it's pretty simple. |
|
|
|
Summary:
|
|
|
{ |
|
"clip_l": Tensor([][768]), |
|
"clip_g": Tensor([][1280]) |
|
} |
|
|