license: openrail++
Contents
This repository contains:
- Half-Precision LoRA versions of https://huggingface.co/mhdang/dpo-sdxl-text2image-v1 and https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1.
- Full-Precision offset versions of https://huggingface.co/mhdang/dpo-sdxl-text2image-v1 and https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1.
Creation
LoRA
The LoRAs were created using Kohya SS.
- 1.5: https://civitai.com/models/240850/sd15-direct-preference-optimization-dpo, extracted from https://huggingface.co/fp16-guy/Stable-Diffusion-v1-5_fp16_cleaned/blob/main/sd_1.5.safetensors
- XL: https://civitai.com/models/238319/sd-xl-dpo-finetune-direct-preference-optimization, extracted from https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors
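For reference, the half-precision LoRAs can be loaded in diffusers like any other LoRA file. A minimal sketch, assuming the 1.5 LoRA lives in this repository; the `weight_name` below is a placeholder, so substitute the actual LoRA filename:

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")

# weight_name is a placeholder; substitute the actual LoRA filename
# from this repository.
pipe.load_lora_weights(
    "benjamin-paine/sd-dpo-offsets",
    weight_name="sd_v15_dpo_lora.safetensors",
)

image = pipe("Two cats playing chess on a tree branch").images[0]
```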
Offsets
The offsets were calculated in PyTorch using the following formula:
- 1.5: https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1/blob/main/unet/diffusion_pytorch_model.safetensors - https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/unet/diffusion_pytorch_model.bin
- XL: https://huggingface.co/mhdang/dpo-sdxl-text2image-v1/blob/main/unet/diffusion_pytorch_model.safetensors - https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/unet/diffusion_pytorch_model.safetensors
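Put differently, each offset file stores the element-wise difference between the DPO-finetuned UNet weights and the corresponding base UNet weights. A minimal sketch of that calculation, with placeholder local paths standing in for the files linked above:

```python
from safetensors.torch import load_file, save_file

# Placeholder paths standing in for the UNet weight files linked above.
# (The SD 1.5 base UNet ships as a .bin file, so torch.load would be used
# for it instead of safetensors.)
dpo_state = load_file("dpo_unet/diffusion_pytorch_model.safetensors")
base_state = load_file("base_unet/diffusion_pytorch_model.safetensors")

# offset = DPO-finetuned weight - base weight, for every parameter
offsets = {key: dpo_state[key] - base_state[key] for key in dpo_state}
save_file(offsets, "unet_dpo_offset.safetensors")
```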
These offsets can be added directly to the weights of any initialized UNet of the matching architecture to inject the DPO training into it. See the code below for usage (diffusers only).
License
These models are derived from OpenRail++ models, and are themselves licensed under OpenRail++.
Usage
Offsets
```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from diffusers.models import UNet2DConditionModel

def inject_dpo(unet: UNet2DConditionModel, dpo_offset_path: str, device: str, strict: bool = False) -> None:
    """
    Injects DPO weight offsets directly into your UNet.

    Args:
        unet (`UNet2DConditionModel`):
            The initialized UNet from your pipeline.
        dpo_offset_path (`str`):
            The path to the `.safetensors` file downloaded from
            https://huggingface.co/benjamin-paine/sd-dpo-offsets/.
            Make sure you're using the right file for the right base model.
        strict (`bool`, *optional*):
            Whether or not to raise errors when a weight cannot be applied. Defaults to False.
    """
    from safetensors import safe_open

    with safe_open(dpo_offset_path, framework="pt", device=device) as f:
        for key in f.keys():
            # Walk the module tree to find the layer this key belongs to,
            # e.g. "down_blocks.0.attentions.0.proj_in.weight".
            key_parts = key.split(".")
            current_layer = unet
            for key_part in key_parts[:-1]:
                current_layer = getattr(current_layer, key_part, None)
                if current_layer is None:
                    break
            if current_layer is None:
                if strict:
                    raise IOError(f"Couldn't find a layer to inject key {key} in.")
                continue
            layer_param = getattr(current_layer, key_parts[-1], None)
            if layer_param is None:
                if strict:
                    raise IOError(f"Couldn't get weight parameter for key {key}")
                continue
            # Add the offset in-place, casting to the parameter's dtype so this
            # also works for half-precision pipelines.
            layer_param.data += f.get_tensor(key).to(layer_param.data.dtype)
```
Now you can use this function like so:
```python
from diffusers import StableDiffusionPipeline
import huggingface_hub
import torch

# load sdv15 pipeline
device = "cuda"
model_id = "Lykon/dreamshaper-8"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.to(device)

# make image
prompt = "Two cats playing chess on a tree branch"
generator = torch.Generator(device=device)
generator.manual_seed(123456789)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
image.save("cats_playing_chess.png")

# download DPO offsets
dpo_offset_path = huggingface_hub.hf_hub_download("benjamin-paine/sd-dpo-offsets", "sd_v15_unet_dpo_offset.safetensors")

# inject
inject_dpo(pipe.unet, dpo_offset_path, device)

# make image again
generator.manual_seed(123456789)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
image.save("cats_playing_chess_dpo.png")
```
Or for XL:
```python
from diffusers import StableDiffusionXLPipeline
import huggingface_hub
import torch

# load sdxl pipeline
device = "cuda"
model_id = "Lykon/dreamshaper-xl-1-0"
pipe = StableDiffusionXLPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.to(device)

# make image
prompt = "Two cats playing chess on a tree branch"
generator = torch.Generator(device=device)
generator.manual_seed(123456789)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
image.save("cats_playing_chess_xl.png")

# download DPO offsets
dpo_offset_path = huggingface_hub.hf_hub_download("benjamin-paine/sd-dpo-offsets", "sd_xl_unet_dpo_offset.safetensors")

# inject
inject_dpo(pipe.unet, dpo_offset_path, device)

# make image again
generator.manual_seed(123456789)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
image.save("cats_playing_chess_xl_dpo.png")
```