---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: openrail++
tags:
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
  - text-to-image
  - diffusers
  - lora
  - template:sd-lora
  - remyx
widget:
  - text: a high resolution aerial photograph of <s0><s1>
    output:
      url: image_0.png
  - text: a high resolution aerial photograph of <s0><s1>
    output:
      url: image_1.png
  - text: a high resolution aerial photograph of <s0><s1>
    output:
      url: image_2.png
  - text: a high resolution aerial photograph of <s0><s1>
    output:
      url: image_3.png
instance_prompt: an aerial photograph <s0><s1>
---

# SDXL LoRA DreamBooth - salma-remyx/aerial-view-field-sdxl-lora

<Gallery />

## Model description

These are salma-remyx/aerial-view-field-sdxl-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, and Invoke.
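
If you prefer one of these UIs, here is a minimal sketch for fetching the repository's LoRA weights and embeddings with `huggingface_hub` (the filenames match those used in the diffusers snippet below; where the files land on disk depends on your cache settings):

```py
from huggingface_hub import hf_hub_download

# LoRA weights, loadable as a LoRA in AUTOMATIC1111, ComfyUI, etc.
lora_path = hf_hub_download(
    repo_id="salma-remyx/aerial-view-field-sdxl-lora",
    filename="pytorch_lora_weights.safetensors",
)
# Pivotal-tuning embeddings for the <s0><s1> trigger tokens
emb_path = hf_hub_download(
    repo_id="salma-remyx/aerial-view-field-sdxl-lora",
    filename="aerial-view-field-sdxl-lora_emb.safetensors",
)
print(lora_path, emb_path)
```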

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Load the SDXL base pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
# Attach the LoRA adaptation weights
pipeline.load_lora_weights('salma-remyx/aerial-view-field-sdxl-lora', weight_name='pytorch_lora_weights.safetensors')
# Download the pivotal-tuning embeddings and load <s0><s1> into both text encoders
embedding_path = hf_hub_download(repo_id='salma-remyx/aerial-view-field-sdxl-lora', filename='aerial-view-field-sdxl-lora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('a high resolution aerial photograph of <s0><s1>').images[0]
```

For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
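
As a rough sketch of what weighting and fusing look like with this pipeline (the `0.8` scale is an illustrative value, not a recommendation, and exact LoRA-scaling APIs vary across diffusers versions):

```py
# Weighting: scale the LoRA's influence at inference time (1.0 = full strength)
image = pipeline(
    'a high resolution aerial photograph of <s0><s1>',
    cross_attention_kwargs={"scale": 0.8},
).images[0]

# Fusing: bake the LoRA into the base weights for faster repeated inference
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('a high resolution aerial photograph of <s0><s1>').images[0]
pipeline.unfuse_lora()  # restore the original base weights
```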

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:

to trigger concept `TOK` → use `<s0><s1>` in your prompt
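
As a quick sanity check that the trigger tokens were registered by `load_textual_inversion` (this assumes the pipeline from the snippet above; the "at golden hour" variation is just an example prompt):

```py
# Both SDXL tokenizers should resolve the trigger tokens to real ids
for tok in ["<s0>", "<s1>"]:
    print(tok, pipeline.tokenizer.convert_tokens_to_ids(tok),
          pipeline.tokenizer_2.convert_tokens_to_ids(tok))

# The tokens then stand in for the trained concept inside any prompt
image = pipeline('a high resolution aerial photograph of <s0><s1> at golden hour').images[0]
```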

## Details

All [Files & versions](https://huggingface.co/salma-remyx/aerial-view-field-sdxl-lora/tree/main).

The weights were trained using the 🧨 diffusers [Advanced DreamBooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.
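
Pivotal tuning means the trigger-token embeddings were trained jointly with the LoRA, which is why the diffusers snippet loads `clip_l` and `clip_g` into the two text encoders. A small sketch for inspecting the embeddings file (the key names come from the snippet above; the printed shapes are whatever was saved, not guaranteed):

```py
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(
    repo_id="salma-remyx/aerial-view-field-sdxl-lora",
    filename="aerial-view-field-sdxl-lora_emb.safetensors",
)
state_dict = load_file(path)
for key, tensor in state_dict.items():
    # Expect one embedding row per trigger token under "clip_l" and "clip_g"
    print(key, tuple(tensor.shape))
```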

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
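
To mirror the training setup at inference time, you can swap the same fixed VAE into the pipeline; a sketch (this pairing at inference is an assumption for illustration, not something the card requires):

```py
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

# madebyollin/sdxl-vae-fp16-fix avoids the NaN artifacts the stock SDXL VAE can produce in fp16
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipeline.load_lora_weights("salma-remyx/aerial-view-field-sdxl-lora", weight_name="pytorch_lora_weights.safetensors")
```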