Update README.sd-embeddings.md

Note that SD 1.5 has a different format for embeddings than SDXL. And within SD …

### SD 1.5 pickletensor embed format

I have observed that .pt embeddings have a dict-of-dicts type format. It looks something like this:

    {
        "string_to_token": {'doesntmatter': 265},          # I don't know why 265, but it usually is
        "string_to_param": {'doesntmatter': tensor([][768])},
        "name": *string*,
        "step": *string*,
        "sd_checkpoint": *string*,
        "sd_checkpoint_name": *string*
    }

(Note that *string* can be None)
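
A minimal sketch of opening and inspecting one of these .pt embeds with plain PyTorch; the file name is a placeholder, and the key handling just follows the layout observed above:

```python
import torch

# .pt embeds are ordinary pickles, so only load files you trust.
# Newer PyTorch versions may need weights_only=False here, since this is
# a full pickle rather than a plain tensor-only state dict.
data = torch.load("some-sd15-embedding.pt", map_location="cpu")

print(list(data.keys()))        # string_to_token, string_to_param, name, step, ...

# The learned vectors live under string_to_param. The inner key is arbitrary
# ('doesntmatter' above), so just take whatever single key is there.
params = data["string_to_param"]
inner_key = next(iter(params))
emb = params[inner_key]
print(inner_key, emb.shape)     # typically [num_vectors, 768] for SD 1.5
```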

### SD 1.5 safetensor embed format

The ones I have seen have a much simpler format; it is trivial compared to the SD 1.5 .pt format above:

    { "emb_params": Tensor([][768]) }
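
A similarly small sketch of reading one of these with the safetensors library, assuming the emb_params key shown above (the file name is a placeholder):

```python
from safetensors.torch import load_file

# Placeholder file name for an SD 1.5 safetensors embed.
tensors = load_file("some-sd15-embedding.safetensors")

emb = tensors["emb_params"]
print(emb.shape)                # typically [num_vectors, 768]
```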

## SDXL embed format (safetensor)

This has an actual spec at:
https://huggingface.co/docs/diffusers/using-diffusers/textual_inversion_inference
But it's pretty simple. In summary:

    {
        "clip_l": Tensor([][768]),
        "clip_g": Tensor([][1280])
    }
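
And a sketch of reading an SDXL embed the same way (placeholder file name; the diffusers page linked above covers how the two tensors actually get loaded into a pipeline):

```python
from safetensors.torch import load_file

# Placeholder file name for an SDXL safetensors embed.
tensors = load_file("some-sdxl-embedding.safetensors")

clip_l = tensors["clip_l"]      # for the first text encoder (CLIP-L),  typically [num_vectors, 768]
clip_g = tensors["clip_g"]      # for the second text encoder (CLIP-G), typically [num_vectors, 1280]
print(clip_l.shape, clip_g.shape)
```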