Update README.md
README.md (changed)
@@ -52,8 +52,8 @@ The model is trained to use the following format:
 
 The following hyperparameters were used during DPO/SamPO training:
 - DPO beta: 0.1
-- learning_rate: 4e-7
-- total_train_batch_size: 128
+- learning_rate: 4e-7
+- total_train_batch_size: 128
 - optimizer: AdamW with beta1 0.9, beta2 0.999 and epsilon 1e-8
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
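
For readers who want to set up a comparable run, the sketch below shows one way the listed hyperparameters could be wired into Hugging Face TRL's `DPOConfig`. The card does not say which training stack was actually used, nor the per-device batch size or GPU count, so the framework choice, the split of the total batch size of 128, and the `output_dir` are illustrative assumptions rather than the authors' setup (TRL also does not implement the SamPO variant).

```python
# Minimal sketch, NOT the authors' training code: maps the hyperparameters
# listed above onto TRL's DPOConfig. The framework choice, the batch-size
# split, and output_dir are assumptions for illustration only.
from trl import DPOConfig

config = DPOConfig(
    output_dir="dpo-sampo-run",       # hypothetical output path
    beta=0.1,                         # DPO beta
    learning_rate=4e-7,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,                   # AdamW beta1/beta2/epsilon
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    # total_train_batch_size = per_device_batch_size * num_gpus * grad_accum,
    # e.g. 16 * 8 * 1 = 128; the actual split is not stated in the card.
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
)
```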