---
license: other
base_model: stabilityai/stable-diffusion-3.5-large
tags:
  - sd3
  - sd3-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - safe-for-work
  - lora
  - template:sd-lora
  - standard
inference: true
widget:
  - text: grey shirt with a small logo of a bunny painted in the alebrijeros style
    output:
      url: images/example_wr1vsmbx0.png
  - text: hoodie with flowers painted in the alebrijeros style
    output:
      url: images/example_8srfnn37z.png
---

# sd3-lora-alebrijeros-final

This is a LoRA derived from the most recent Stable Diffusion model, stabilityai/stable-diffusion-3.5-large.

The prompt used during training:

```
sweatshirt painted in the alebrijeros style
```

You can find some example images in the following gallery:

<Gallery />

## Training settings

- Training epochs: 4
- Training steps: 2600
- Learning rate: 5e-05
- Max grad norm: 0.01
- Effective batch size: 1
  - Micro-batch size: 1
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes (int8-quanto)
- Xformers: Not used
- LoRA Rank: 64 (see the `peft` sketch after this list)
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
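
For context on what the LoRA hyperparameters above amount to, here is a minimal sketch of an equivalent rank-64 adapter expressed with the `peft` library. This is not the SimpleTuner training configuration itself; the alpha value and the `target_modules` list are assumptions included only for illustration.

```python
# Minimal sketch, NOT the SimpleTuner config: a rank-64 LoRA with dropout 0.1
# expressed via the `peft` library for illustration.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                    # LoRA Rank: 64 (from the settings above)
    lora_alpha=64,           # Alpha was reported as None; 64 (== rank) is an assumed stand-in
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    # target_modules is an assumption for illustration; SimpleTuner selects the
    # SD3 transformer's attention projections on its own.
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```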

## Datasets

Assorted images sourced from Google, ranging from a Mercedes painted by alebrijeros to the Miss Universe contestant who wore an alebrije-style dress.

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'stabilityai/stable-diffusion-3.5-large'
adapter_id = 'CarlosRiverMe/sd3-lora-alebrijeros-final'

# Pick the best available device once instead of repeating the check inline.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'

# Load the base model, attach the LoRA adapter, and move everything to the device.
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
pipeline.to(device)

prompt = "sweatshirt painted in the alebrijeros style"
negative_prompt = 'blurry, cropped, ugly'

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=512,
    height=512,
    guidance_scale=5.0,
).images[0]
image.save("output.png", format="PNG")
```
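
The snippet above loads the base model at full precision, which may not fit on GPUs with limited VRAM. The variation below is a sketch of common `diffusers` memory-saving options (bfloat16 weights and model CPU offload); these settings are suggestions and were not part of the original training or inference setup.

```python
import torch
from diffusers import DiffusionPipeline

# Sketch of a lower-memory load: bfloat16 weights plus model CPU offload.
# These options are suggestions, not part of the original setup above.
pipeline = DiffusionPipeline.from_pretrained(
    'stabilityai/stable-diffusion-3.5-large',
    torch_dtype=torch.bfloat16,
)
pipeline.load_lora_weights('CarlosRiverMe/sd3-lora-alebrijeros-final')
# Requires the `accelerate` package; submodules move to the GPU only while they run.
pipeline.enable_model_cpu_offload()

image = pipeline(
    "sweatshirt painted in the alebrijeros style",
    negative_prompt='blurry, cropped, ugly',
    num_inference_steps=20,
    guidance_scale=5.0,
).images[0]
image.save("output_bf16.png", format="PNG")
```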