---
title: Core ML Models
emoji: 📱
pinned: false
tags:
- coreml
- stable-diffusion
---
# Core ML Models Repository
Thanks to Apple engineers, we can now run Stable Diffusion on Apple Silicon using Core ML!
However, it is hard to find compatible models, and converting models isn't the easiest thing to do.
Organizing Core ML models in one place makes them easier to find, so everyone can benefit.
## Model Types
- Model files with `split_einsum` in the file name are compatible with all compute units.
- Model files with `original` in the file name are only compatible with CPU & GPU.
- Model files with a `no-i2i` suffix in the file name only work for Text2Image.
- Models with a `cn` suffix in the file name (or in their repo name) will work for Text2Image, Image2Image and ControlNet.
- Model files with neither a `no-i2i` nor a `cn` suffix in the file name will work for Text2Image and Image2Image.

If you are using Mochi Diffusion v3.2, v4.0 or later, some model files with neither a `no-i2i` nor a `cn` suffix in the file name might need a simple modification for Image2Image to work correctly. Please go HERE for more information.
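As an illustration only (not part of any official tooling), the suffix rules above can be expressed as a small shell check; the file names below are hypothetical examples:

```shell
# Illustrative sketch: map a model file name to the generation modes it
# supports, following the suffix conventions described above.
capabilities() {
  case "$1" in
    *no-i2i*) echo "Text2Image" ;;
    *_cn*)    echo "Text2Image, Image2Image, ControlNet" ;;
    *)        echo "Text2Image, Image2Image" ;;
  esac
}

capabilities "stable-diffusion-2-1_no-i2i_original.zip"   # Text2Image
capabilities "stable-diffusion-2-1_original.zip"          # Text2Image, Image2Image
```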
## Usage
Once the chosen model has been downloaded, simply unzip it and put it in your model folder to use it.
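For example, assuming a model folder at `~/MochiDiffusion/models` (both the archive name and the folder are placeholders; use whatever folder your app is actually configured with):

```shell
# Placeholder paths -- substitute the archive you downloaded and the
# model folder your app is configured to use.
mkdir -p ~/MochiDiffusion/models
unzip stable-diffusion-2-1_original.zip -d ~/MochiDiffusion/models/
```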
## Conversion Flags
The models were converted using the following flags:

```
--convert-vae-decoder --convert-vae-encoder --convert-unet \
--unet-support-controlnet --convert-text-encoder \
--bundle-resources-for-swift-cli \
--attention-implementation {SPLIT_EINSUM or ORIGINAL}
```
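For reference, a complete invocation of Apple's `torch2coreml` converter from the ml-stable-diffusion repo using these flags might look like the following; the Hugging Face model id and output directory are placeholders:

```shell
# Placeholder model id and output directory -- substitute your own.
# Requires the apple/ml-stable-diffusion package to be installed.
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-vae-decoder --convert-vae-encoder --convert-unet \
  --unet-support-controlnet --convert-text-encoder \
  --bundle-resources-for-swift-cli \
  --attention-implementation SPLIT_EINSUM \
  --model-version stabilityai/stable-diffusion-2-1 \
  -o ./coreml-output
```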
## Contributing
Before joining, we encourage you to have at least one converted model under your account (one this community doesn't already have) that you can contribute. This helps us see who can actually contribute back to the community.
We also ask that you follow the model and repo naming schemes described below.
Attention: Apple introduced Image2Image capabilities in the ml-stable-diffusion 0.2.0 release. All models that lack a VAE encoder (and therefore cannot use Image2Image) carry a `no-i2i` suffix right after the model name.

For example: `stable-diffusion-2-1_no-i2i_original`.

From now on, only models with a VAE encoder will be accepted.
Contact us on Discord if you are interested in helping out.
## Model Name
Models have the following naming scheme:

- Original model name
- Model version (`split-einsum` or `original`)
- Model size (only if different from `512x512`)
- VAE name (only if different from the original VAE)

Each label is separated by an underscore (`_`), and all capitalization from the original name is preserved.

For example: `stable-diffusion-2-1_original_512x768_ema-vae`.
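Since the labels are simply joined with underscores, a name can be assembled mechanically. This is just an illustrative sketch, not a tool this repo ships:

```shell
# Illustrative: join the naming-scheme labels with underscores.
model_name() {
  local IFS=_
  echo "$*"
}

model_name stable-diffusion-2-1 original 512x768 ema-vae
# -> stable-diffusion-2-1_original_512x768_ema-vae
```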
## Repo Name
Repos are named with the original diffusers Hugging Face / Civitai repo name, prefixed by `coreml-`.

For example: `coreml-stable-diffusion-2-1`.
## Repo README Contents
Copy this template and paste it as a header:
```
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---

# Core ML converted model

This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).\
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.

`split_einsum` versions are compatible with all compute units.\
`original` versions are only compatible with `CPU & GPU`.

# <MODEL-NAME-HERE>

Sources: [Hugging Face]() - [CivitAI]()
```
Then copy the original model's README (without the tag section) as the body.
## Repo Directory Structure
```
coreml-stable-diffusion-2-1
├── README.md
├── original
│   ├── 512x768
│   │   ├── stable-diffusion-2-1_original_512x768.zip
│   │   └── ...
│   ├── 768x512
│   │   ├── stable-diffusion-2-1_original_768x512.zip
│   │   └── ...
│   ├── stable-diffusion-2-1_original.zip
│   └── ...
└── split_einsum
    ├── stable-diffusion-2-1_split-einsum.zip
    └── ...
```
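When starting a new model repo, the skeleton above can be created in one step; the repo name here is just the example from this section:

```shell
# Create the directory skeleton shown above for a new model repo.
repo="coreml-stable-diffusion-2-1"
mkdir -p "$repo/original/512x768" "$repo/original/768x512" "$repo/split_einsum"
touch "$repo/README.md"
```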