# These are a set of VAEEncoder.mlmodelc bundles that enable the image2image feature in [MOCHI DIFFUSION](https://github.com/godly-devotion/MochiDiffusion) 3.2, 4.0, and later, when using incompatible "older" CoreML-converted models.<br>

## They are provided in a single zip file containing five VAEEncoder.mlmodelc files, identified by their file names for use as follows:
**- for split_einsum 512x512 models**<br>
**- for original 512x512 models**<br>
**- for original 512x768 models**<br>
**- for original 768x512 models**<br>
**- for original 768x768 models**

### They should enable image2image for ANY model trained or merged from the Stable Diffusion v1.5 base model. They will also work with models derived from the Stable Diffusion v2.0 or v2.1 base models, but Mochi Diffusion has limited support for SD-2.x models.<br>
These VAEEncoders were built from vae-ft-mse-840000-ema-pruned.ckpt, the VAE that was distributed with the original Stable Diffusion 1.5 model. It is the VAE used in the vast majority of trained and merged 1.5-type models. There is an alternate VAE, kl-f8-anime.ckpt, that is sometimes used instead in anime-focused models. I believe the differences in that VAE are only relevant to the VAEDecoder. These are replacement VAEEncoders, not VAEDecoders. If your model has the kl-f8-anime VAE baked in, it will still do its job through the VAEDecoder.<br>

### INSTRUCTIONS

- If the model folder that you are upgrading already has a VAEEncoder.mlmodelc file inside, rename that file to VAEEncoder.mlmodelc.bak first, so you can return to it later if you want. It is fine to leave it in the folder; it only needs to be renamed.
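The rename-then-copy step can be sketched as a shell session. This is only an illustration: the folder names and download location below are made-up examples, and it uses temporary stand-in folders so the commands run anywhere. Substitute the real path of your model folder and of the unzipped replacement encoder.

```shell
# Illustrative sketch only: MyModel_original_512x512 and the download path are
# hypothetical examples, created here as empty stand-ins so the commands run.
TMP=$(mktemp -d)
MODEL_DIR="$TMP/MyModel_original_512x512"
mkdir -p "$MODEL_DIR/VAEEncoder.mlmodelc"                        # existing encoder (stand-in)
NEW_ENC="$TMP/downloads/VAEEncoder_original_512x512.mlmodelc"
mkdir -p "$NEW_ENC"                                              # unzipped replacement (stand-in)

cd "$MODEL_DIR"
# Keep the old encoder so you can return to it later; renaming is enough
[ -e VAEEncoder.mlmodelc ] && mv VAEEncoder.mlmodelc VAEEncoder.mlmodelc.bak
# Copy in the replacement, named exactly VAEEncoder.mlmodelc
# (-R because an .mlmodelc bundle is a folder, not a single file)
cp -R "$NEW_ENC" ./VAEEncoder.mlmodelc
```

Pick the replacement encoder whose file name matches your model's type (original or split_einsum) and resolution, then rename your copy of it to exactly VAEEncoder.mlmodelc inside the model folder.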