
Core ML Converted Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Provide the model to an app such as Mochi Diffusion (GitHub, Discord) to generate images.
  • The split_einsum version is compatible with all compute-unit options, including the Neural Engine.
  • The original version is only compatible with the CPU & GPU option.
  • Custom resolution versions are tagged accordingly.
  • The vae-ft-mse-840000-ema-pruned.ckpt VAE is embedded in the model.
  • This model was converted with a VAE encoder for image-to-image (i2i) use.
  • This model is fp16.
  • Descriptions are posted as-is from original model source.
  • Not all features and/or results may be available in CoreML format.
  • This model does not have the unet split into chunks.
  • This model does not include a safety checker (for NSFW content).
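For reference, a conversion like the one described above is typically produced with Apple's ml-stable-diffusion tools. This is a hedged sketch, not the exact command used for this model: the model id and output directory are placeholders, and it assumes the `python_coreml_stable_diffusion` package from Apple's repository is installed.

```shell
# Sketch of a Core ML conversion using Apple's ml-stable-diffusion tools.
# <hugging-face-model-id> and <output-directory> are placeholders.
python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version <hugging-face-model-id> \
    --convert-unet --convert-text-encoder \
    --convert-vae-decoder --convert-vae-encoder \
    --attention-implementation SPLIT_EINSUM \
    -o <output-directory>
# ORIGINAL instead of SPLIT_EINSUM yields the CPU & GPU-only variant;
# --chunk-unet would split the unet into chunks (not done for this model).
```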

vanGoghDiffusion_v1:

Source(s): Hugging Face - CivitAI

This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the film Loving Vincent. Use the token lvngvncnt at the BEGINNING of your prompts to use the style (e.g., "lvngvncnt, beautiful woman at sunset"). This model works best with the Euler sampler (NOT Euler_a).

If you get too many yellow faces, or you don't like the strong blue bias, simply add them to the negative prompt (e.g., "Yellow face, blue").
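The prompting advice above can be sketched with the diffusers library (which uses the original PyTorch weights rather than this Core ML conversion). This is a minimal sketch, assuming a hypothetical Hugging Face repo id for the model; the prompt, negative prompt, sampler, and settings follow the card's recommendations.

```python
# Sketch: generating an image with vanGoghDiffusion_v1 via diffusers.
# The repo id below is a placeholder, not the model's actual location.
PROMPT = "lvngvncnt, beautiful woman at sunset, highly detailed"
NEGATIVE_PROMPT = "Yellow face, blue"  # counters yellow faces / blue bias

def main():
    import torch
    from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "<hugging-face-model-id>", torch_dtype=torch.float16
    )
    # Use the Euler sampler (NOT Euler a), per the card's recommendation.
    pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
    pipe = pipe.to("mps")  # Apple Silicon; use "cuda" on NVIDIA GPUs

    image = pipe(
        PROMPT,
        negative_prompt=NEGATIVE_PROMPT,
        num_inference_steps=25,  # settings from the sample grids
        guidance_scale=6,
    ).images[0]
    image.save("vangogh.png")

if __name__ == "__main__":
    main()
```

Note that the token `lvngvncnt` leads the prompt, as the card requires.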


--

Characters rendered with this model: Character Samples. Prompt and settings used: lvngvncnt, [person], highly detailed | Steps: 25, Sampler: Euler, CFG scale: 6


--

Landscapes/miscellaneous rendered with this model: Landscape Samples. Prompt and settings used: lvngvncnt, [subject/setting], highly detailed | Steps: 25, Sampler: Euler, CFG scale: 6


-- This model was trained with DreamBooth, using TheLastBen's Colab notebook.
