AlekseyCalvin committed
Commit 9a3ec25
1 Parent(s): b1b6907

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -53,12 +53,11 @@ Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
 
 **HST style autochrome photo**
 
-## Config Parameters
+## Parameters/Settings/Options Info
 *Dim: 256, Alpha: 256, Optimizer: AdamW8bit, LR: 4e-5* <br>
 I SET THE CONFIG TO ONLY TRAIN A SINGLE BLOCK: <br>
 Namely, MMDiT block 12. I used the same config syntax I've repeatedly used for training Flux. <br>
 But I'm not sure single-block training worked here, judging by the results and the super hefty checkpoint sizes. <br>
-**More info/config below!** <br>
 Fine-tuned using the **Google Colab Notebook** of **ai-toolkit**.<br>
 I used an A100 via Colab Pro.
 However, training SD3.5 may work on free-tier Colab or with lower VRAM in general:<br>
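For reference, single-block training in ai-toolkit is usually requested through the network section's `network_kwargs`. A minimal sketch in the notebook's OrderedDict style, matching the Dim/Alpha reported above; the `only_if_contains` filter is the layer-targeting mechanism known from ai-toolkit's Flux LoRA configs, and the SD3.5 module prefix `transformer.transformer_blocks.12.` is an assumption carried over from diffusers naming, which is exactly the uncertainty the note above raises:

```python
from collections import OrderedDict

# Sketch of a 'network' section limiting LoRA training to one MMDiT block.
# Assumes ai-toolkit's 'only_if_contains' layer filter (known from Flux LoRA
# configs) also applies to SD3.5; the hefty checkpoints reported above suggest
# it may have been silently ignored here.
network = OrderedDict([
    ('type', 'lora'),
    ('linear', 256),        # Dim: 256
    ('linear_alpha', 256),  # Alpha: 256
    ('network_kwargs', OrderedDict([
        # Assumed diffusers module prefix for SD3.5's MMDiT block 12;
        # Flux configs target e.g. 'transformer.single_transformer_blocks.12.'
        ('only_if_contains', ['transformer.transformer_blocks.12.'])
    ]))
])
```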
@@ -66,6 +65,7 @@ Especially if one were to use:<br> ...Say, *lower rank (try 4 or 8), dataset siz
 Generally, VRAM expenditure for fine-tuning SD3.5 tends to be lower than for Flux.<br>
 So, try it!<br>
 
+## Colab Config
 
 **To use on Colab**, modify a Flux template Notebook from [here](https://github.com/ostris/ai-toolkit/tree/main/notebooks) with parameters from Ostris' example config for SD3.5 [here](https://github.com/ostris/ai-toolkit/blob/main/config/examples/train_lora_sd35_large_24gb.yaml)! <br>
 **My Colab config report/example below!** <br> *(Including the version of block-specification network-arguments syntax that works on ai-toolkit via Colab, at least for Flux...)* <br>
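Concretely, "modify a Flux template Notebook" mostly means repointing the model section inside the notebook's `job_to_run` OrderedDict at SD3.5. A hedged sketch; the architecture flag differs between ai-toolkit versions and the exact key names should be verified against the linked example YAML:

```python
from collections import OrderedDict

# Model section repointing the Flux notebook at SD3.5 Large.
# The Flux template sets an 'is_flux' flag here; the SD3.5 example config
# uses an SD3-specific flag instead -- confirm the exact key in the linked YAML.
model = OrderedDict([
    ('name_or_path', 'stabilityai/stable-diffusion-3.5-large'),
    ('quantize', True)  # 8-bit quantization of the base model to save VRAM
])

# Train section reusing the optimizer and learning rate reported above.
train = OrderedDict([
    ('optimizer', 'adamw8bit'),
    ('lr', 4e-5)
])
```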
@@ -179,7 +179,7 @@ job_to_run = OrderedDict([
         ('neg', 'wrong, broken, warped, unrealistic, untextured, misspelling, messy, bad quality'), # not used on flux
         ('seed', 42),
         ('walk_seed', True),
-        ('guidance_scale', 4), # schnell does not do guidance
+        ('guidance_scale', 4),
         ('sample_steps', 25) # 1 - 4 works well
     ]))
 ])
 
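For orientation, the `guidance_scale` and `sample_steps` entries in the hunk above sit inside the config's sample section, which controls the preview images generated during training. A reconstruction of the surrounding block under stated assumptions: the sampler name and preview cadence are guesses based on ai-toolkit's Flux examples, and the prompt is a hypothetical one using the trigger phrase:

```python
from collections import OrderedDict

# Sketch of the sample section the hunk above is drawn from.
sample = OrderedDict([
    ('sampler', 'flowmatch'),   # assumed: the flow-matching sampler from Flux examples
    ('sample_every', 250),      # assumed preview cadence, in training steps
    ('width', 1024),
    ('height', 1024),
    ('prompts', ['HST style autochrome photo of a woman in a garden']),  # hypothetical
    ('neg', 'wrong, broken, warped, unrealistic, untextured, misspelling, messy, bad quality'),
    ('seed', 42),
    ('walk_seed', True),        # increment the seed for each preview prompt
    ('guidance_scale', 4),      # SD3.5 uses real CFG, unlike Flux schnell
    ('sample_steps', 25)        # 25 suits SD3.5; '1 - 4 works well' is a schnell-era leftover
])
```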