AlekseyCalvin committed • 68fd13e
Parent(s): c246c7a
Update README.md

README.md CHANGED
@@ -149,7 +149,7 @@ Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
  149   Fine-tuned using the **Google Colab Notebook** version of **ai-toolkit**.<br>
  150   I've used an A100 via Colab Pro.
  151   However, training SD3.5 may potentially work with free Colab, or with lower VRAM in general:<br>
- 152   Especially if one were to use lower rank (try 4 or 8), dataset size (in terms of caching/bucketing/pre-loading impacts), 1 batch size, Adamw8bit optimizer, 512 resolution, maybe adding the
+ 152   Especially if one were to use, say, a *lower rank (try 4 or 8), a smaller dataset (to reduce caching/bucketing/pre-loading overhead), batch size 1, the AdamW8bit optimizer, 512 resolution, perhaps the `low_vram: true` argument, and possibly alternate quantization variants.*<br>
  153   Generally, VRAM expenditure tends to be lower than for Flux during training. So, try it! I certainly will.<br>
  154   **To use on Colab**, modify a Flux template Notebook from [here](https://github.com/ostris/ai-toolkit/tree/main/notebooks) with parameters from Ostris' example config for SD3.5 [here](https://github.com/ostris/ai-toolkit/blob/main/config/examples/train_lora_sd35_large_24gb.yaml)!
  155   ```
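
The low-VRAM suggestions above map onto a handful of keys in an ai-toolkit config file. Below is a hedged sketch of just those overrides; the key names follow the linked `train_lora_sd35_large_24gb.yaml` example, but exact names can differ between ai-toolkit versions, and the dataset path is a placeholder — verify against the linked config before training.

```yaml
# Sketch of low-VRAM overrides for an ai-toolkit SD3.5 LoRA run.
# Key names assumed from Ostris' example configs; check them against
# config/examples/train_lora_sd35_large_24gb.yaml in the ai-toolkit repo.
network:
  type: "lora"
  linear: 8              # lower LoRA rank: try 4 or 8 instead of 16+
  linear_alpha: 8
train:
  batch_size: 1          # smallest batch size
  optimizer: "adamw8bit" # 8-bit AdamW shrinks optimizer-state memory
model:
  quantize: true         # quantize transformer weights
  low_vram: true         # offload idle modules to CPU/system RAM
datasets:
  - folder_path: "/path/to/dataset"  # placeholder path
    resolution: [512]    # train at 512 only, cutting activation memory
```

These settings trade training speed and some fidelity for a smaller memory footprint, which is the usual bargain when fitting a large model onto free-tier Colab GPUs.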