ambientocclusion committed
Commit 01687db • 1 Parent(s): f750f3b
Model card auto-generated by SimpleTuner
README.md CHANGED

@@ -60,7 +60,7 @@ You may reuse the base model text encoder for inference.
 
 - Training epochs: 3999
 - Training steps: 4000
-- Learning rate: 0.
+- Learning rate: 0.001
 - Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
@@ -71,8 +71,8 @@ You may reuse the base model text encoder for inference.
 - Precision: bf16
 - Quantised: Yes: int2-quanto
 - Xformers: Not used
-- LoRA Rank:
-- LoRA Alpha:
+- LoRA Rank: 32
+- LoRA Alpha: 32.0
 - LoRA Dropout: 0.1
 - LoRA initialisation style: default
 
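The changed values above are the LoRA and optimiser hyperparameters recorded for this training run. As a rough, hypothetical mapping only (SimpleTuner's own configuration format is not part of this commit), they would correspond to something like the PEFT `LoraConfig` and torch optimiser sketched below; `target_modules` and the choice of `AdamW` are assumptions, not taken from the model card.

import torch
from peft import LoraConfig

# Illustrative mapping of the model card values to a PEFT adapter config.
lora_config = LoraConfig(
    r=32,                    # LoRA Rank: 32
    lora_alpha=32.0,         # LoRA Alpha: 32.0
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # hypothetical placeholder
)

# Learning rate 0.001 with effective batch size 1 (micro-batch 1, no gradient
# accumulation), trained in bf16 for 4000 steps; the optimiser class is assumed.
def make_optimizer(params):
    return torch.optim.AdamW(params, lr=1e-3)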