ArkaAbacus committed on
Commit
3f60081
1 Parent(s): f23891b

Update README.md

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
```diff
@@ -49,4 +49,8 @@ Instruction tuned with the following parameters:
 - LORA, Rank 8, Alpha 16, Dropout 0.05, all modules (QKV and MLP)
 - 3 epochs
 - Micro Batch Size 32 over 4xH100, gradient accumulation steps = 1
-- AdamW with learning rate 5e-5
+- AdamW with learning rate 5e-5
+
+# Bias, Risks, and Limitations
+
+The model has not been evaluated for safety and is only intended for research and experiments.
```
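The hyperparameters in this diff together determine the effective global batch size per optimizer step. A minimal sketch of that arithmetic, using the values stated in the README (the variable names here are illustrative, not from the repository):

```python
# Hyperparameters as listed in the README diff.
micro_batch_size = 32   # "Micro Batch Size 32"
num_devices = 4         # "over 4xH100"
grad_accum_steps = 1    # "gradient accumulation steps = 1"

# Effective global batch size per optimizer step:
# per-device micro batch x number of devices x accumulation steps.
effective_batch_size = micro_batch_size * num_devices * grad_accum_steps
print(effective_batch_size)  # 128
```

With gradient accumulation left at 1, each AdamW update is computed over 128 examples; raising `grad_accum_steps` would scale this up without increasing per-device memory.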