armhebb committed on
Commit
c0f4366
1 Parent(s): c0c2b9e

End of training

README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ tags:
+ - stable-diffusion-xl
+ - stable-diffusion-xl-diffusers
+ - text-to-image
+ - diffusers
+ - lora
+ - template:sd-lora
+ widget:
+ - text: 'a phot in the fashion style of <s0>'
+ instance_prompt: a phot in the fashion style of <s0>
+ license: openrail++
+ ---
+
+ # SDXL LoRA DreamBooth - armhebb/65995e622d50edfb3ead
+
+ <Gallery />
+
+ ## Model description
+
+ ### These are armhebb/65995e622d50edfb3ead LoRA adaptation weights.
+
+ ## Download model
+
+ ### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke
+
+ - **LoRA**: download **[`/korean_sample_checkpoint.safetensors` here 💾](/armhebb/65995e622d50edfb3ead/blob/main//korean_sample_checkpoint.safetensors)**.
+     - Place it in your `models/Lora` folder.
+     - On AUTOMATIC1111, load the LoRA by adding `<lora:/korean_sample_checkpoint:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
+ - *Embeddings*: download **[`/korean_sample_checkpoint_emb.safetensors` here 💾](/armhebb/65995e622d50edfb3ead/blob/main//korean_sample_checkpoint_emb.safetensors)**.
+     - Place it in your `embeddings` folder.
+     - Use it by adding `/korean_sample_checkpoint_emb` to your prompt. For example, `a phot in the fashion style of /korean_sample_checkpoint_emb`.
+ (You need both the LoRA and the embeddings, as they were trained together for this LoRA; a combined prompt example follows.)
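+
+ For instance, assuming AUTOMATIC1111 defaults, a full prompt that combines both pieces might look like `a phot in the fashion style of /korean_sample_checkpoint_emb <lora:/korean_sample_checkpoint:1>`: the embedding token supplies the learned concept, while the `<lora:...>` tag (here at weight 1) applies the adaptation weights.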
+
+ ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+ ```py
+ from diffusers import AutoPipelineForText2Image
+ import torch
+ from huggingface_hub import hf_hub_download
+ from safetensors.torch import load_file
+
+ # Load the SDXL base pipeline and apply the LoRA adaptation weights from this repo.
+ pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
+ pipeline.load_lora_weights('armhebb/65995e622d50edfb3ead', weight_name='pytorch_lora_weights.safetensors')
+
+ # Download the trained embeddings and register the <s0> token with both SDXL text encoders.
+ embedding_path = hf_hub_download(repo_id='armhebb/65995e622d50edfb3ead', filename='korean_sample_checkpoint_emb.safetensors', repo_type="model")
+ state_dict = load_file(embedding_path)
+ pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
+ pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
+
+ # Generate an image with the instance prompt used during training.
+ image = pipeline('a phot in the fashion style of <s0>').images[0]
+ ```
+
+ For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
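+
+ As a minimal sketch of the weighting mentioned above (the exact API depends on your diffusers version; `fuse_lora` is one option, and the 0.7 scale below is only an illustrative value), you could tone the style down after running the loading snippet:
+
+ ```py
+ # Assumes `pipeline` from the snippet above, with the LoRA and both
+ # textual-inversion embeddings already loaded.
+
+ # Fuse the LoRA into the base weights at a reduced strength.
+ pipeline.fuse_lora(lora_scale=0.7)
+ image = pipeline('a phot in the fashion style of <s0>').images[0]
+
+ # Undo the fusion to experiment with a different scale.
+ pipeline.unfuse_lora()
+ ```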
+
+ ## Trigger words
+
+ To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
+
+ To trigger the concept `<GFJ>` → use `<s0>` in your prompt
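+
+ For example, where a caption would name the concept as `<GFJ>`, write `a phot in the fashion style of <s0>` instead; `<s0>` is the token inserted into the tokenizers by the embeddings loaded above.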
+
+ ## Details
+ All [Files & versions](/armhebb/65995e622d50edfb3ead/tree/main).
+
+ The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
+
+ LoRA for the text encoder was enabled: False.
+
+ Pivotal tuning was enabled: True.
+
+ Special VAE used for training: None.
+
korean_sample_checkpoint.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea0a2f78840c65ec96e6212ff6e531fc5945ace88e9f5e264bf780f8e7d94f3b
+ size 186046568
korean_sample_checkpoint_emb.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f323306f9e3e1af74f180c92b59de077760a009ced3a5e118fabfeac34926ba
+ size 4240
pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:deb86255735e5a92357c6e7e5bba300ea815f50689bbc9e174fdf25f045131c5
+ size 185963768