This repo is my fine-tuned LoRA of LLaMA, trained on the first 4 volumes of *The Eminence in Shadow* and *KonoSuba* to test its ability to retain new information.

The training used alpaca-lora on an RTX 3090 for 10 hours with:

- micro batch size 2
- batch size 64
- 35 epochs
- learning rate 3e-4
- LoRA rank 256
- LoRA alpha 512
- LoRA dropout 0.05
- cutoff length 352
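For reference, the hyperparameters above map onto the flags of alpaca-lora's `finetune.py`. This is only a sketch of what the invocation may have looked like: the base model, dataset path, and output directory are placeholders I've assumed, not the ones actually used here.

```shell
# Hypothetical reconstruction of the training run with tloen/alpaca-lora.
# Base model, data path, and output dir below are assumed placeholders.
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'novels_data.json' \
    --output_dir './lora-out' \
    --micro_batch_size 2 \
    --batch_size 64 \
    --num_epochs 35 \
    --learning_rate 3e-4 \
    --lora_r 256 \
    --lora_alpha 512 \
    --lora_dropout 0.05 \
    --cutoff_len 352
```

With micro batch size 2 and batch size 64, gradients are accumulated over 32 micro-batches per optimizer step; the cutoff length of 352 caps each training example at 352 tokens.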