---
tags:
  - text-to-image
  - flux
  - lora
  - diffusers
  - template:sd-lora
  - ai-toolkit
widget:
  - text: A church in a field on a sunny day, [trigger] style.
    output:
      url: samples/1727828552988__000002000_0.jpg
  - text: A seal plays with a ball on the beach, [trigger] style.
    output:
      url: samples/1727828571395__000002000_1.jpg
  - text: A clown at the circus rides on a zebra, [trigger] style.
    output:
      url: samples/1727828589795__000002000_2.jpg
  - text: '[trigger]'
    output:
      url: samples/1727828608192__000002000_3.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: WATERCOLOUR_ILLUSTRATION
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# WATERCOLOUR_ILLUSTRATION

Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit).

<Gallery />

## Trigger words

You should include `WATERCOLOUR_ILLUSTRATION` in your prompt to trigger the style, as in the sample prompts sketched below.
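For illustration, these are the sample prompts from this card with the `[trigger]` placeholder replaced by the trigger word (the `TRIGGER` constant and the exact phrasing are just an example, not a requirement):

```py
# Sample prompts from this card, with the [trigger] placeholder
# replaced by the trigger word.
TRIGGER = "WATERCOLOUR_ILLUSTRATION"

prompts = [
    f"A church in a field on a sunny day, {TRIGGER} style.",
    f"A seal plays with a ball on the beach, {TRIGGER} style.",
    f"A clown at the circus rides on a zebra, {TRIGGER} style.",
]
```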

## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
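If you prefer to fetch the weights from a script, here is a minimal sketch using `huggingface_hub`, assuming the LoRA file is named `WATERCOLOUR_ILLUSTRATION.safetensors` (the same file name used in the diffusers example below):

```py
# Minimal sketch: download the LoRA weights programmatically.
# Assumes the file WATERCOLOUR_ILLUSTRATION.safetensors exists in this repo.
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="uwcc/WATERCOLOUR_ILLUSTRATION",
    filename="WATERCOLOUR_ILLUSTRATION.safetensors",
)
print(lora_path)  # local cache path; point ComfyUI / AUTOMATIC1111 at this file
```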

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('uwcc/WATERCOLOUR_ILLUSTRATION', weight_name='WATERCOLOUR_ILLUSTRATION.safetensors')

# Replace the [trigger] placeholder with the trigger word when prompting.
image = pipeline('A church in a field on a sunny day, WATERCOLOUR_ILLUSTRATION style.').images[0]
image.save("my_image.png")
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
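As a rough sketch of the weighting and fusing options mentioned above (the adapter name `watercolour` and the scale values are illustrative choices, not part of this repo):

```py
# Illustrative sketch: control the LoRA's influence after loading it.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
pipeline.load_lora_weights(
    'uwcc/WATERCOLOUR_ILLUSTRATION',
    weight_name='WATERCOLOUR_ILLUSTRATION.safetensors',
    adapter_name='watercolour',
)

# Dial the style strength up or down ...
pipeline.set_adapters(['watercolour'], adapter_weights=[0.8])

# ... or, alternatively, bake the LoRA into the base weights at a chosen scale.
# pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('A church in a field on a sunny day, WATERCOLOUR_ILLUSTRATION style.').images[0]
image.save('my_image_weighted.png')
```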