Snowad committed 4a12161 (parent: 3037b2a): Update README.md

Files changed (1): README.md (+8 -8)
This repo contains my fine-tuned LoRA of LLaMA, trained on the first 4 volumes of *The Eminence in Shadow* and *KonoSuba* to test its ability to retain new information.
Training used alpaca-lora on an RTX 3090 for 10 hours with:
- micro batch size: 2
- batch size: 64
- epochs: 35
- learning rate: 3e-4
- LoRA rank: 256
- LoRA alpha: 512
- LoRA dropout: 0.05
- cutoff length: 352
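As a minimal sketch (the key names below mirror alpaca-lora's `finetune.py` flags, which is an assumption about how this run was launched), the hyperparameters above imply a gradient-accumulation step count and a LoRA scaling factor like so:

```python
# Hyperparameters from the run above, keyed the way alpaca-lora's
# finetune.py names its CLI flags (assumed, not confirmed by this repo).
config = {
    "micro_batch_size": 2,
    "batch_size": 64,
    "num_epochs": 35,
    "learning_rate": 3e-4,
    "lora_r": 256,
    "lora_alpha": 512,
    "lora_dropout": 0.05,
    "cutoff_len": 352,
}

# alpaca-lora derives accumulation steps from the two batch sizes:
# each optimizer step sees batch_size examples, fed micro_batch_size at a time.
gradient_accumulation_steps = config["batch_size"] // config["micro_batch_size"]

# Standard LoRA scaling applied to the adapter output: alpha / r.
lora_scaling = config["lora_alpha"] / config["lora_r"]

print(gradient_accumulation_steps)  # 32
print(lora_scaling)                 # 2.0
```

An alpha of twice the rank gives a scaling factor of 2.0, so the adapter's contribution is amplified relative to the common alpha == rank setup.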