
LCM LoRA SDXL Rank 1

LCM LoRA SDXL Rank 1 is a resized version of LCM LoRA SDXL, reduced to rank 1 with a LoRA resize script; a sketch of the idea is shown below. The resized LoRA can still run inference with LCMScheduler, keeping the fast low-step, low-guidance-scale behavior while producing improved output.
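The resize script itself is not reproduced here; conceptually, rank reduction amounts to a truncated SVD of each LoRA weight delta, keeping only the top singular component. The following is a minimal sketch of that idea (the function name resize_lora_pair and the tensor shapes are illustrative assumptions, not the script's actual API):

import torch

def resize_lora_pair(down: torch.Tensor, up: torch.Tensor, new_rank: int = 1):
    # down: (r, in_features), up: (out_features, r); the LoRA delta is up @ down
    delta_w = up.float() @ down.float()                      # (out, in) full delta
    u, s, vh = torch.linalg.svd(delta_w, full_matrices=False)
    u, s, vh = u[:, :new_rank], s[:new_rank], vh[:new_rank, :]
    sqrt_s = torch.diag(s.sqrt())
    new_up = u @ sqrt_s                                      # (out, new_rank)
    new_down = sqrt_s @ vh                                   # (new_rank, in)
    return new_down, new_up

# example with hypothetical shapes: reduce a rank-64 pair to rank 1
down = torch.randn(64, 768)
up = torch.randn(768, 64)
new_down, new_up = resize_lora_pair(down, up, new_rank=1)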

Sample images: steps 4 / scale 1, steps 6 / scale 2, steps 8 / scale 2.


Usage

LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of Diffusers along with peft, accelerate, and transformers:

pip install --upgrade diffusers transformers accelerate peft
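If needed, you can confirm that the installed Diffusers version meets the v0.23.0 requirement:

python -c "import diffusers; print(diffusers.__version__)"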

Text-to-Image

The adapter can be loaded with its base model stabilityai/stable-diffusion-xl-base-1.0. Next, switch the scheduler to LCMScheduler and reduce the number of inference steps to just 2 to 8. Make sure to either disable guidance_scale (by setting it to 0) or use values between 1.0 and 2.0.

import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter_id = "Linaqruf/lcm-lora-sdxl-rank1"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load and fuse lcm lora
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# disable guidance_scale by passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
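Alternatively, as noted above, a low guidance scale between 1.0 and 2.0 can be used instead of disabling it; the values below are illustrative:

# use a low guidance scale (1.0-2.0) with a few more steps
image = pipe(prompt=prompt, num_inference_steps=8, guidance_scale=1.5).images[0]
image.save("lcm_lora_rank1.png")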

Acknowledgement
