Commit 958fab0 (parent cf6235a) by calm-and-collected: Update README.md
- vintage
- postcard
---

# Wish You Were Here - a Stable Diffusion 1.5 LoRA for vintage postcard replication

<!-- Provide a quick summary of what the model is/does. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6537927953b7eb25ce03c962/d97rlp7IYnBcKPYQuCpBi.png)

Wish You Were Here is a LoRA model developed to create vintage postcard images. The model was trained on Stable Diffusion 1.5.

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

To use the WYWH model, pair the LoRA with your favorite Stable Diffusion checkpoint (a realistic model is recommended) and use the following triggers:

- WYWH (the base trigger)
- Photograph (for photography postcards)
- Drawing (for drawn postcards)
- Damage (to add scratch and water damage to the generation)
- Monochrome (for black and white images)

For negatives, you can use the following:

- White border (if you do not want a white border)

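The trigger list above maps naturally onto a small prompt builder. This is only a sketch: the subject text, the LoRA filename, and the a1111-style `<lora:name:weight>` tag syntax are illustrative assumptions, not part of this card.

```python
# Sketch: assemble an a1111-style prompt from the trigger words above.
# The LoRA filename and the <lora:...> tag syntax are assumptions.
BASE_TRIGGER = "WYWH"
STYLE_TRIGGERS = {"photo": "Photograph", "drawing": "Drawing"}

def build_prompt(subject, style="photo", extras=(), lora_weight=0.8):
    """Return a positive prompt using this card's trigger words."""
    parts = [BASE_TRIGGER, STYLE_TRIGGERS[style], *extras, subject,
             f"<lora:wish_you_were_here:{lora_weight}>"]
    return ", ".join(parts)

print(build_prompt("a seaside boardwalk, 1950s", extras=["Damage"]))
```

Put `White border` in the negative prompt when you do not want the border.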
## How to Get Started with the Model

You can use this model with [automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui), [comfyui](https://github.com/comfyanonymous/ComfyUI), and [sdnext](https://github.com/vladmandic/automatic).

[More Information Needed]

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The Wish You Were Here dataset consists of ~650 images of postcards from 1900-1970.

Dataset: [original dataset](https://huggingface.co/datasets/calm-and-collected/wish_you_were_here "The wish you were here dataset").

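As a rough sanity check, the dataset size combines with the batch size and epoch count from the training configuration (`train_batch_size` 3, `max_train_epochs` 100) to give an approximate optimizer step count; the ~650-image figure is taken from this card. Kohya's folder repeat counts and bucketing can change the real number, so treat this as an estimate.

```python
import math

num_images = 650   # approximate dataset size stated above
batch_size = 3     # "train_batch_size" from the Kohya_SS configuration
epochs = 100       # "max_train_epochs"

steps_per_epoch = math.ceil(num_images / batch_size)  # partial final batch rounds up
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)
```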
### Training Hyperparameters

<details>
<summary>Kohya_SS parameters</summary>

```json
{
  "LoRA_type": "Standard",
  "adaptive_noise_scale": 0,
  "additional_parameters": "",
  "block_alphas": "",
  "block_dims": "",
  "block_lr_zero_threshold": "",
  "bucket_no_upscale": true,
  "bucket_reso_steps": 64,
  "cache_latents": true,
  "cache_latents_to_disk": true,
  "caption_dropout_every_n_epochs": 0.0,
  "caption_dropout_rate": 0,
  "caption_extension": ".txt",
  "clip_skip": 2,
  "color_aug": false,
  "conv_alpha": 1,
  "conv_block_alphas": "",
  "conv_block_dims": "",
  "conv_dim": 1,
  "decompose_both": false,
  "dim_from_weights": false,
  "down_lr_weight": "",
  "enable_bucket": true,
  "epoch": 1,
  "factor": -1,
  "flip_aug": false,
  "full_bf16": false,
  "full_fp16": false,
  "gradient_accumulation_steps": 1,
  "gradient_checkpointing": false,
  "keep_tokens": "0",
  "learning_rate": 0.0001,
  "logging_dir": "/home/glow/Desktop/ml/whyw_logs",
  "lora_network_weights": "",
  "lr_scheduler": "constant",
  "lr_scheduler_args": "",
  "lr_scheduler_num_cycles": "",
  "lr_scheduler_power": "",
  "lr_warmup": 0,
  "max_bucket_reso": 2048,
  "max_data_loader_n_workers": "1",
  "max_resolution": "512,650",
  "max_timestep": 1000,
  "max_token_length": "75",
  "max_train_epochs": "100",
  "max_train_steps": "",
  "mem_eff_attn": true,
  "mid_lr_weight": "",
  "min_bucket_reso": 256,
  "min_snr_gamma": 0,
  "min_timestep": 0,
  "mixed_precision": "bf16",
  "model_list": "custom",
  "module_dropout": 0.2,
  "multires_noise_discount": 0.2,
  "multires_noise_iterations": 8,
  "network_alpha": 128,
  "network_dim": 256,
  "network_dropout": 0.3,
  "no_token_padding": false,
  "noise_offset": "0.05",
  "noise_offset_type": "Multires",
  "num_cpu_threads_per_process": 2,
  "optimizer": "AdamW8bit",
  "optimizer_args": "",
  "output_dir": "/home/glow/Desktop/ml/whyw_logs/model_v2",
  "output_name": "final_model",
  "persistent_data_loader_workers": false,
  "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
  "prior_loss_weight": 1.0,
  "random_crop": false,
  "rank_dropout": 0.2,
  "reg_data_dir": "",
  "resume": "",
  "sample_every_n_epochs": 0,
  "sample_every_n_steps": 0,
  "sample_prompts": "",
  "sample_sampler": "euler_a",
  "save_every_n_epochs": 1,
  "save_every_n_steps": 0,
  "save_last_n_steps": 0,
  "save_last_n_steps_state": 0,
  "save_model_as": "safetensors",
  "save_precision": "bf16",
  "save_state": false,
  "scale_v_pred_loss_like_noise_pred": false,
  "scale_weight_norms": 1,
  "sdxl": false,
  "sdxl_cache_text_encoder_outputs": false,
  "sdxl_no_half_vae": true,
  "seed": "1234",
  "shuffle_caption": false,
  "stop_text_encoder_training": 1,
  "text_encoder_lr": 5e-05,
  "train_batch_size": 3,
  "train_data_dir": "/home/glow/Desktop/wyhw",
  "train_on_input": true,
  "training_comment": "",
  "unet_lr": 0.0001,
  "unit": 1,
  "up_lr_weight": "",
  "use_cp": true,
  "use_wandb": false,
  "v2": false,
  "v_parameterization": false,
  "v_pred_like_loss": 0,
  "vae_batch_size": 0,
  "wandb_api_key": "",
  "weighted_captions": false,
  "xformers": "xformers"
}
```

</details>

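Two of the settings above interact: LoRA implementations scale the learned update by `network_alpha / network_dim`, so alpha 128 with dim 256 applies the update at half strength. A quick check over the relevant excerpt of the configuration:

```python
import json

# Excerpt of the Kohya_SS configuration above; only the keys needed here.
excerpt = json.loads('{"network_alpha": 128, "network_dim": 256}')

# Effective LoRA scaling factor applied to the learned update.
scale = excerpt["network_alpha"] / excerpt["network_dim"]
print(scale)
```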
#### Hardware

The model was trained on two GTX 4090 cards for a duration of 2 days to extract 100 epochs.

The model was trained via the Kohya_SS GUI.

## Model Card Contact

Use the community section of this repository to contact me.