yakazimir committed on
Commit
2650e0e
1 Parent(s): 37f2d17

Model save

Files changed (5)
  1. README.md +75 -0
  2. all_results.json +9 -0
  3. generation_config.json +12 -0
  4. train_results.json +9 -0
  5. trainer_state.json +1515 -0
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ library_name: transformers
+ license: llama3
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ tags:
+ - trl
+ - simpo
+ - generated_from_trainer
+ model-index:
+ - name: llama3_l5_best_entropy_1
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama3_l5_best_entropy_1
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.9563
+ - Rewards/chosen: -8.6021
+ - Rewards/rejected: -13.9123
+ - Rewards/accuracies: 0.8434
+ - Rewards/margins: 5.3102
+ - Logps/rejected: -1.3912
+ - Logps/chosen: -0.8602
+ - Logits/rejected: -1.3261
+ - Logits/chosen: -1.3707
+ - Semantic Entropy: 0.9091
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-06
+ - train_batch_size: 2
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Semantic Entropy |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:----------------:|
+ | 1.9268 | 0.8743 | 400 | 1.9563 | -8.6021 | -13.9123 | 0.8434 | 5.3102 | -1.3912 | -0.8602 | -1.3261 | -1.3707 | 0.9091 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
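The effective batch size listed in the hyperparameters follows from the per-device settings; a quick sketch of the arithmetic (variable names are illustrative, not taken from the training code):

```python
# Effective batch size from the hyperparameters in the card:
# per-device batch size x gradient accumulation steps x number of devices.
train_batch_size = 2
gradient_accumulation_steps = 16
num_devices = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching the reported total_train_batch_size
```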
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9989071038251366,
+ "total_flos": 0.0,
+ "train_loss": 2.3507381581112385,
+ "train_runtime": 7991.7933,
+ "train_samples": 58558,
+ "train_samples_per_second": 7.327,
+ "train_steps_per_second": 0.057
+ }
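The throughput fields above are derivable from the sample count, runtime, and step count (457 steps per trainer_state.json); a quick consistency check, assuming simple division and rounding to three decimals:

```python
# Recompute the throughput fields in all_results.json from the raw figures.
train_samples = 58558
train_runtime = 7991.7933  # seconds
global_step = 457          # from trainer_state.json

samples_per_second = round(train_samples / train_runtime, 3)
steps_per_second = round(global_step / train_runtime, 3)
print(samples_per_second, steps_per_second)  # 7.327 0.057
```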
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "bos_token_id": 128000,
+ "do_sample": true,
+ "eos_token_id": [
+ 128001,
+ 128009
+ ],
+ "max_length": 4096,
+ "temperature": 0.6,
+ "top_p": 0.9,
+ "transformers_version": "4.44.2"
+ }
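The config above enables sampling with temperature 0.6 and top-p 0.9. A minimal, self-contained sketch of temperature-scaled nucleus sampling is shown below; it is an illustration of what these fields mean, not the transformers implementation:

```python
import math
import random

def sample_token(logits, temperature=0.6, top_p=0.9, rng=random):
    # Temperature scaling followed by nucleus (top-p) filtering,
    # mirroring the do_sample/temperature/top_p fields above.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Sample from the renormalized nucleus.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a strongly peaked distribution the nucleus collapses to the top token, so `sample_token([10.0, 0.0, 0.0])` deterministically returns index 0.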
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9989071038251366,
+ "total_flos": 0.0,
+ "train_loss": 2.3507381581112385,
+ "train_runtime": 7991.7933,
+ "train_samples": 58558,
+ "train_samples_per_second": 7.327,
+ "train_steps_per_second": 0.057
+ }
trainer_state.json ADDED
@@ -0,0 +1,1515 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9989071038251366,
+ "eval_steps": 400,
+ "global_step": 457,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.01092896174863388,
+ "grad_norm": 48.58164102385424,
+ "learning_rate": 1.0869565217391303e-07,
+ "logits/chosen": -1.0108345746994019,
+ "logits/rejected": -1.005958914756775,
+ "logps/chosen": -0.2804548144340515,
+ "logps/rejected": -0.2860378921031952,
+ "loss": 3.1505,
+ "rewards/accuracies": 0.53125,
+ "rewards/chosen": -2.8045480251312256,
+ "rewards/margins": 0.055831074714660645,
+ "rewards/rejected": -2.8603789806365967,
+ "semantic_entropy": 0.7518873810768127,
+ "step": 5
+ },
+ {
+ "epoch": 0.02185792349726776,
+ "grad_norm": 60.09792641017642,
+ "learning_rate": 2.1739130434782607e-07,
+ "logits/chosen": -1.0501697063446045,
+ "logits/rejected": -0.9994386434555054,
+ "logps/chosen": -0.25685304403305054,
+ "logps/rejected": -0.27076202630996704,
+ "loss": 3.1218,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": -2.568530559539795,
+ "rewards/margins": 0.1390899419784546,
+ "rewards/rejected": -2.707620143890381,
+ "semantic_entropy": 0.7095814347267151,
+ "step": 10
+ },
+ {
+ "epoch": 0.03278688524590164,
+ "grad_norm": 53.218153749761065,
+ "learning_rate": 3.260869565217391e-07,
+ "logits/chosen": -1.010613203048706,
+ "logits/rejected": -0.964097797870636,
+ "logps/chosen": -0.2674953043460846,
+ "logps/rejected": -0.2733650207519531,
+ "loss": 3.1131,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -2.674952983856201,
+ "rewards/margins": 0.05869739130139351,
+ "rewards/rejected": -2.7336502075195312,
+ "semantic_entropy": 0.7275451421737671,
+ "step": 15
+ },
+ {
+ "epoch": 0.04371584699453552,
+ "grad_norm": 67.45687349058657,
+ "learning_rate": 4.3478260869565214e-07,
+ "logits/chosen": -0.9450489282608032,
+ "logits/rejected": -0.8954045176506042,
+ "logps/chosen": -0.27203652262687683,
+ "logps/rejected": -0.2849101722240448,
+ "loss": 3.1483,
+ "rewards/accuracies": 0.5687500238418579,
+ "rewards/chosen": -2.720365047454834,
+ "rewards/margins": 0.1287367343902588,
+ "rewards/rejected": -2.8491017818450928,
+ "semantic_entropy": 0.7445512413978577,
+ "step": 20
+ },
+ {
+ "epoch": 0.0546448087431694,
+ "grad_norm": 31.667955833823346,
+ "learning_rate": 5.434782608695652e-07,
+ "logits/chosen": -0.9527080655097961,
+ "logits/rejected": -0.8783634901046753,
+ "logps/chosen": -0.27410227060317993,
+ "logps/rejected": -0.29486891627311707,
+ "loss": 3.1117,
+ "rewards/accuracies": 0.59375,
+ "rewards/chosen": -2.741022825241089,
+ "rewards/margins": 0.2076665163040161,
+ "rewards/rejected": -2.9486892223358154,
+ "semantic_entropy": 0.754062831401825,
+ "step": 25
+ },
+ {
+ "epoch": 0.06557377049180328,
+ "grad_norm": 49.885531132124974,
+ "learning_rate": 6.521739130434782e-07,
+ "logits/chosen": -1.05032479763031,
+ "logits/rejected": -0.9858748316764832,
+ "logps/chosen": -0.2675837576389313,
+ "logps/rejected": -0.28654658794403076,
+ "loss": 3.1024,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": -2.675837278366089,
+ "rewards/margins": 0.1896287053823471,
+ "rewards/rejected": -2.8654661178588867,
+ "semantic_entropy": 0.726620078086853,
+ "step": 30
+ },
+ {
+ "epoch": 0.07650273224043716,
+ "grad_norm": 55.4615534844222,
+ "learning_rate": 7.608695652173913e-07,
+ "logits/chosen": -1.0076689720153809,
+ "logits/rejected": -0.9416168332099915,
+ "logps/chosen": -0.25796476006507874,
+ "logps/rejected": -0.2931690514087677,
+ "loss": 3.058,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -2.5796475410461426,
+ "rewards/margins": 0.3520428538322449,
+ "rewards/rejected": -2.931690216064453,
+ "semantic_entropy": 0.7216796875,
+ "step": 35
+ },
+ {
+ "epoch": 0.08743169398907104,
+ "grad_norm": 61.12682886329354,
+ "learning_rate": 8.695652173913043e-07,
+ "logits/chosen": -0.9594043493270874,
+ "logits/rejected": -0.8999654650688171,
+ "logps/chosen": -0.28498369455337524,
+ "logps/rejected": -0.31468039751052856,
+ "loss": 3.1476,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -2.849836826324463,
+ "rewards/margins": 0.296967089176178,
+ "rewards/rejected": -3.146804094314575,
+ "semantic_entropy": 0.7657366394996643,
+ "step": 40
+ },
+ {
+ "epoch": 0.09836065573770492,
+ "grad_norm": 30.63183736560673,
+ "learning_rate": 9.782608695652173e-07,
+ "logits/chosen": -1.0224157571792603,
+ "logits/rejected": -0.9429336786270142,
+ "logps/chosen": -0.29369351267814636,
+ "logps/rejected": -0.3366171717643738,
+ "loss": 3.0689,
+ "rewards/accuracies": 0.518750011920929,
+ "rewards/chosen": -2.9369354248046875,
+ "rewards/margins": 0.42923641204833984,
+ "rewards/rejected": -3.3661720752716064,
+ "semantic_entropy": 0.7680700421333313,
+ "step": 45
+ },
+ {
+ "epoch": 0.1092896174863388,
+ "grad_norm": 66.51868162378878,
+ "learning_rate": 9.997663088532014e-07,
+ "logits/chosen": -0.9610111117362976,
+ "logits/rejected": -0.8836190104484558,
+ "logps/chosen": -0.3035297095775604,
+ "logps/rejected": -0.3295886814594269,
+ "loss": 3.0733,
+ "rewards/accuracies": 0.4937500059604645,
+ "rewards/chosen": -3.03529691696167,
+ "rewards/margins": 0.26059025526046753,
+ "rewards/rejected": -3.295886993408203,
+ "semantic_entropy": 0.7657495141029358,
+ "step": 50
+ },
+ {
+ "epoch": 0.12021857923497267,
+ "grad_norm": 36.53428450530809,
+ "learning_rate": 9.98817312944725e-07,
+ "logits/chosen": -1.002439260482788,
+ "logits/rejected": -0.8919259905815125,
+ "logps/chosen": -0.2904648780822754,
+ "logps/rejected": -0.3420776426792145,
+ "loss": 2.9863,
+ "rewards/accuracies": 0.581250011920929,
+ "rewards/chosen": -2.904649019241333,
+ "rewards/margins": 0.5161274671554565,
+ "rewards/rejected": -3.4207763671875,
+ "semantic_entropy": 0.7696514129638672,
+ "step": 55
+ },
+ {
+ "epoch": 0.13114754098360656,
+ "grad_norm": 31.199928628063066,
+ "learning_rate": 9.971397915250336e-07,
+ "logits/chosen": -1.0388410091400146,
+ "logits/rejected": -0.9951351284980774,
+ "logps/chosen": -0.27618950605392456,
+ "logps/rejected": -0.3456563353538513,
+ "loss": 2.9429,
+ "rewards/accuracies": 0.6187499761581421,
+ "rewards/chosen": -2.761894941329956,
+ "rewards/margins": 0.694668173789978,
+ "rewards/rejected": -3.4565632343292236,
+ "semantic_entropy": 0.7672961354255676,
+ "step": 60
+ },
+ {
+ "epoch": 0.14207650273224043,
+ "grad_norm": 43.04608890956751,
+ "learning_rate": 9.94736194623663e-07,
+ "logits/chosen": -1.0304734706878662,
+ "logits/rejected": -0.9656420946121216,
+ "logps/chosen": -0.3235929608345032,
+ "logps/rejected": -0.3579636216163635,
+ "loss": 3.063,
+ "rewards/accuracies": 0.53125,
+ "rewards/chosen": -3.2359299659729004,
+ "rewards/margins": 0.34370657801628113,
+ "rewards/rejected": -3.579636335372925,
+ "semantic_entropy": 0.8141781091690063,
+ "step": 65
+ },
+ {
+ "epoch": 0.15300546448087432,
+ "grad_norm": 210.46235590585465,
+ "learning_rate": 9.916100327075037e-07,
+ "logits/chosen": -0.9860013723373413,
+ "logits/rejected": -0.9686506986618042,
+ "logps/chosen": -0.3108309209346771,
+ "logps/rejected": -0.36498162150382996,
+ "loss": 2.9623,
+ "rewards/accuracies": 0.581250011920929,
+ "rewards/chosen": -3.108308792114258,
+ "rewards/margins": 0.5415072441101074,
+ "rewards/rejected": -3.6498160362243652,
+ "semantic_entropy": 0.7746008038520813,
+ "step": 70
+ },
+ {
+ "epoch": 0.16393442622950818,
+ "grad_norm": 35.07537533113234,
+ "learning_rate": 9.877658715537428e-07,
+ "logits/chosen": -0.97590172290802,
+ "logits/rejected": -0.9608650207519531,
+ "logps/chosen": -0.32075995206832886,
+ "logps/rejected": -0.39185312390327454,
+ "loss": 3.0014,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -3.2076001167297363,
+ "rewards/margins": 0.7109313011169434,
+ "rewards/rejected": -3.9185314178466797,
+ "semantic_entropy": 0.806628406047821,
+ "step": 75
+ },
+ {
+ "epoch": 0.17486338797814208,
+ "grad_norm": 52.85437823609445,
+ "learning_rate": 9.832093255815216e-07,
+ "logits/chosen": -0.9724711179733276,
+ "logits/rejected": -0.9133187532424927,
+ "logps/chosen": -0.33075493574142456,
+ "logps/rejected": -0.3698170483112335,
+ "loss": 2.9391,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -3.307549238204956,
+ "rewards/margins": 0.39062100648880005,
+ "rewards/rejected": -3.6981704235076904,
+ "semantic_entropy": 0.7876138091087341,
+ "step": 80
+ },
+ {
+ "epoch": 0.18579234972677597,
+ "grad_norm": 68.01590434786175,
+ "learning_rate": 9.779470496520441e-07,
+ "logits/chosen": -0.973779022693634,
+ "logits/rejected": -0.9283053278923035,
+ "logps/chosen": -0.3297511339187622,
+ "logps/rejected": -0.41490381956100464,
+ "loss": 2.9392,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -3.297511339187622,
+ "rewards/margins": 0.8515273928642273,
+ "rewards/rejected": -4.149038791656494,
+ "semantic_entropy": 0.8133514523506165,
+ "step": 85
+ },
+ {
+ "epoch": 0.19672131147540983,
+ "grad_norm": 43.8207236035398,
+ "learning_rate": 9.719867293491144e-07,
+ "logits/chosen": -1.0416749715805054,
+ "logits/rejected": -0.9649559259414673,
+ "logps/chosen": -0.3701372444629669,
+ "logps/rejected": -0.42724722623825073,
+ "loss": 2.9596,
+ "rewards/accuracies": 0.59375,
+ "rewards/chosen": -3.7013721466064453,
+ "rewards/margins": 0.5711001753807068,
+ "rewards/rejected": -4.272472381591797,
+ "semantic_entropy": 0.8610862493515015,
+ "step": 90
+ },
+ {
+ "epoch": 0.20765027322404372,
+ "grad_norm": 37.490640629213566,
+ "learning_rate": 9.653370697542987e-07,
+ "logits/chosen": -0.9690491557121277,
+ "logits/rejected": -0.9741093516349792,
+ "logps/chosen": -0.358903706073761,
+ "logps/rejected": -0.41135644912719727,
+ "loss": 2.8808,
+ "rewards/accuracies": 0.6000000238418579,
+ "rewards/chosen": -3.5890374183654785,
+ "rewards/margins": 0.5245276689529419,
+ "rewards/rejected": -4.113564968109131,
+ "semantic_entropy": 0.8484892845153809,
+ "step": 95
+ },
+ {
+ "epoch": 0.2185792349726776,
+ "grad_norm": 72.4012883538114,
+ "learning_rate": 9.580077827331037e-07,
+ "logits/chosen": -0.9927606582641602,
+ "logits/rejected": -0.9488978385925293,
+ "logps/chosen": -0.4221861958503723,
+ "logps/rejected": -0.5099042654037476,
+ "loss": 2.8815,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -4.221861839294434,
+ "rewards/margins": 0.8771804571151733,
+ "rewards/rejected": -5.0990424156188965,
+ "semantic_entropy": 0.891185462474823,
+ "step": 100
+ },
+ {
+ "epoch": 0.22950819672131148,
+ "grad_norm": 53.762702275255066,
+ "learning_rate": 9.500095727507419e-07,
+ "logits/chosen": -1.0526697635650635,
+ "logits/rejected": -1.025564193725586,
+ "logps/chosen": -0.37911754846572876,
+ "logps/rejected": -0.46409544348716736,
+ "loss": 2.8288,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -3.7911758422851562,
+ "rewards/margins": 0.8497791290283203,
+ "rewards/rejected": -4.640954971313477,
+ "semantic_entropy": 0.8762981295585632,
+ "step": 105
+ },
+ {
+ "epoch": 0.24043715846994534,
+ "grad_norm": 45.04185647458636,
+ "learning_rate": 9.413541212382004e-07,
+ "logits/chosen": -1.0478137731552124,
+ "logits/rejected": -1.028585433959961,
+ "logps/chosen": -0.4319530427455902,
+ "logps/rejected": -0.5620400905609131,
+ "loss": 2.8058,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -4.319530487060547,
+ "rewards/margins": 1.3008701801300049,
+ "rewards/rejected": -5.620400428771973,
+ "semantic_entropy": 0.922514796257019,
+ "step": 110
+ },
+ {
+ "epoch": 0.25136612021857924,
+ "grad_norm": 56.505413378989054,
+ "learning_rate": 9.320540695314438e-07,
+ "logits/chosen": -1.06332266330719,
+ "logits/rejected": -1.0288236141204834,
+ "logps/chosen": -0.43162402510643005,
+ "logps/rejected": -0.5995403528213501,
+ "loss": 2.7772,
+ "rewards/accuracies": 0.706250011920929,
+ "rewards/chosen": -4.3162407875061035,
+ "rewards/margins": 1.6791629791259766,
+ "rewards/rejected": -5.995403289794922,
+ "semantic_entropy": 0.913605809211731,
+ "step": 115
+ },
+ {
+ "epoch": 0.26229508196721313,
+ "grad_norm": 44.06382434081467,
+ "learning_rate": 9.221230004086721e-07,
+ "logits/chosen": -1.0795866250991821,
+ "logits/rejected": -1.0165742635726929,
+ "logps/chosen": -0.441800594329834,
+ "logps/rejected": -0.5526998043060303,
+ "loss": 2.736,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -4.41800594329834,
+ "rewards/margins": 1.1089919805526733,
+ "rewards/rejected": -5.526998519897461,
+ "semantic_entropy": 0.9433540105819702,
+ "step": 120
+ },
+ {
+ "epoch": 0.273224043715847,
+ "grad_norm": 115.05665224956637,
+ "learning_rate": 9.11575418252596e-07,
+ "logits/chosen": -0.9929405450820923,
+ "logits/rejected": -0.9629238247871399,
+ "logps/chosen": -0.4976869225502014,
+ "logps/rejected": -0.6475359201431274,
+ "loss": 2.7093,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -4.976869106292725,
+ "rewards/margins": 1.4984896183013916,
+ "rewards/rejected": -6.4753594398498535,
+ "semantic_entropy": 0.9429339170455933,
+ "step": 125
+ },
+ {
+ "epoch": 0.28415300546448086,
+ "grad_norm": 45.61755920747003,
+ "learning_rate": 9.004267278667031e-07,
+ "logits/chosen": -1.0040867328643799,
+ "logits/rejected": -1.0052454471588135,
+ "logps/chosen": -0.5251004099845886,
+ "logps/rejected": -0.7621506452560425,
+ "loss": 2.7046,
+ "rewards/accuracies": 0.7437499761581421,
+ "rewards/chosen": -5.251004219055176,
+ "rewards/margins": 2.3705029487609863,
+ "rewards/rejected": -7.621507167816162,
+ "semantic_entropy": 0.9312794804573059,
+ "step": 130
+ },
+ {
+ "epoch": 0.29508196721311475,
+ "grad_norm": 42.07329597975219,
+ "learning_rate": 8.886932119764565e-07,
+ "logits/chosen": -1.0416096448898315,
+ "logits/rejected": -0.9662833213806152,
+ "logps/chosen": -0.5473520755767822,
+ "logps/rejected": -0.7608135938644409,
+ "loss": 2.5919,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -5.473520755767822,
+ "rewards/margins": 2.134615898132324,
+ "rewards/rejected": -7.608136177062988,
+ "semantic_entropy": 0.9771392941474915,
+ "step": 135
+ },
+ {
+ "epoch": 0.30601092896174864,
+ "grad_norm": 45.36734996455639,
+ "learning_rate": 8.763920074482809e-07,
+ "logits/chosen": -1.0434763431549072,
+ "logits/rejected": -0.9957412481307983,
+ "logps/chosen": -0.594224750995636,
+ "logps/rejected": -0.8605974316596985,
+ "loss": 2.3803,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": -5.94224739074707,
+ "rewards/margins": 2.663726806640625,
+ "rewards/rejected": -8.605974197387695,
+ "semantic_entropy": 0.9883469343185425,
+ "step": 140
+ },
+ {
+ "epoch": 0.31693989071038253,
+ "grad_norm": 57.35860265502167,
+ "learning_rate": 8.635410802610723e-07,
+ "logits/chosen": -1.0125925540924072,
+ "logits/rejected": -1.0003665685653687,
+ "logps/chosen": -0.6211115121841431,
+ "logps/rejected": -0.7806649804115295,
+ "loss": 2.5474,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -6.211114406585693,
+ "rewards/margins": 1.5955346822738647,
+ "rewards/rejected": -7.806649208068848,
+ "semantic_entropy": 0.9819900393486023,
+ "step": 145
+ },
+ {
+ "epoch": 0.32786885245901637,
+ "grad_norm": 57.36843422854158,
+ "learning_rate": 8.501591992667849e-07,
+ "logits/chosen": -1.0663528442382812,
+ "logits/rejected": -1.0401932001113892,
+ "logps/chosen": -0.705702543258667,
+ "logps/rejected": -1.0317673683166504,
+ "loss": 2.4773,
+ "rewards/accuracies": 0.7562500238418579,
+ "rewards/chosen": -7.057024955749512,
+ "rewards/margins": 3.2606492042541504,
+ "rewards/rejected": -10.317673683166504,
+ "semantic_entropy": 0.9794198870658875,
+ "step": 150
+ },
+ {
+ "epoch": 0.33879781420765026,
+ "grad_norm": 55.65247575131584,
+ "learning_rate": 8.362659087784152e-07,
+ "logits/chosen": -1.0362324714660645,
+ "logits/rejected": -0.9899821281433105,
+ "logps/chosen": -0.7101436257362366,
+ "logps/rejected": -0.9879515767097473,
+ "loss": 2.4284,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -7.101436614990234,
+ "rewards/margins": 2.7780795097351074,
+ "rewards/rejected": -9.879515647888184,
+ "semantic_entropy": 0.9602919816970825,
+ "step": 155
+ },
+ {
+ "epoch": 0.34972677595628415,
+ "grad_norm": 57.58698714096881,
+ "learning_rate": 8.218815000254231e-07,
+ "logits/chosen": -1.0922863483428955,
+ "logits/rejected": -1.0440222024917603,
+ "logps/chosen": -0.8645855188369751,
+ "logps/rejected": -1.1068294048309326,
+ "loss": 2.4302,
+ "rewards/accuracies": 0.737500011920929,
+ "rewards/chosen": -8.645854949951172,
+ "rewards/margins": 2.4224390983581543,
+ "rewards/rejected": -11.068293571472168,
+ "semantic_entropy": 0.9356050491333008,
+ "step": 160
+ },
+ {
+ "epoch": 0.36065573770491804,
+ "grad_norm": 98.49758980833779,
+ "learning_rate": 8.07026981518276e-07,
+ "logits/chosen": -1.0802171230316162,
+ "logits/rejected": -1.0290331840515137,
+ "logps/chosen": -0.8299033045768738,
+ "logps/rejected": -1.0872141122817993,
+ "loss": 2.4163,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": -8.299032211303711,
+ "rewards/margins": 2.573108196258545,
+ "rewards/rejected": -10.872140884399414,
+ "semantic_entropy": 0.9756767153739929,
+ "step": 165
+ },
+ {
+ "epoch": 0.37158469945355194,
+ "grad_norm": 51.034857184899195,
+ "learning_rate": 7.917240483654e-07,
+ "logits/chosen": -1.0868977308273315,
+ "logits/rejected": -1.0327848196029663,
+ "logps/chosen": -0.8292908668518066,
+ "logps/rejected": -1.0890681743621826,
+ "loss": 2.4543,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -8.29290771484375,
+ "rewards/margins": 2.597774028778076,
+ "rewards/rejected": -10.890682220458984,
+ "semantic_entropy": 0.949859619140625,
+ "step": 170
+ },
+ {
+ "epoch": 0.3825136612021858,
+ "grad_norm": 70.18286425215469,
+ "learning_rate": 7.759950505873521e-07,
+ "logits/chosen": -1.1532905101776123,
+ "logits/rejected": -1.1286249160766602,
+ "logps/chosen": -0.8882781863212585,
+ "logps/rejected": -1.0833543539047241,
+ "loss": 2.363,
+ "rewards/accuracies": 0.675000011920929,
+ "rewards/chosen": -8.882781982421875,
+ "rewards/margins": 1.9507627487182617,
+ "rewards/rejected": -10.833544731140137,
+ "semantic_entropy": 0.9320682287216187,
+ "step": 175
+ },
+ {
+ "epoch": 0.39344262295081966,
+ "grad_norm": 57.51037018197599,
+ "learning_rate": 7.598629604744872e-07,
+ "logits/chosen": -1.1318219900131226,
+ "logits/rejected": -1.1271886825561523,
+ "logps/chosen": -0.8509464263916016,
+ "logps/rejected": -1.2260388135910034,
+ "loss": 2.2789,
+ "rewards/accuracies": 0.768750011920929,
+ "rewards/chosen": -8.509464263916016,
+ "rewards/margins": 3.7509243488311768,
+ "rewards/rejected": -12.260388374328613,
+ "semantic_entropy": 0.9382311105728149,
+ "step": 180
+ },
+ {
+ "epoch": 0.40437158469945356,
+ "grad_norm": 52.68844433987736,
+ "learning_rate": 7.433513390357989e-07,
+ "logits/chosen": -1.147756814956665,
+ "logits/rejected": -1.167338490486145,
+ "logps/chosen": -0.8701988458633423,
+ "logps/rejected": -1.2299163341522217,
+ "loss": 2.2886,
+ "rewards/accuracies": 0.831250011920929,
+ "rewards/chosen": -8.701990127563477,
+ "rewards/margins": 3.5971744060516357,
+ "rewards/rejected": -12.299162864685059,
+ "semantic_entropy": 0.9483189582824707,
+ "step": 185
+ },
+ {
+ "epoch": 0.41530054644808745,
+ "grad_norm": 67.57716327384249,
+ "learning_rate": 7.264843015879321e-07,
+ "logits/chosen": -1.156224012374878,
+ "logits/rejected": -1.115622878074646,
+ "logps/chosen": -0.8300007581710815,
+ "logps/rejected": -1.1879067420959473,
+ "loss": 2.3281,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -8.300007820129395,
+ "rewards/margins": 3.579059600830078,
+ "rewards/rejected": -11.879068374633789,
+ "semantic_entropy": 0.9479382634162903,
+ "step": 190
+ },
+ {
+ "epoch": 0.4262295081967213,
+ "grad_norm": 49.46269476072645,
+ "learning_rate": 7.092864825346266e-07,
+ "logits/chosen": -1.1936676502227783,
+ "logits/rejected": -1.173877477645874,
+ "logps/chosen": -0.9850988388061523,
+ "logps/rejected": -1.3840601444244385,
+ "loss": 2.3467,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": -9.850988388061523,
+ "rewards/margins": 3.9896130561828613,
+ "rewards/rejected": -13.840600967407227,
+ "semantic_entropy": 0.9018501043319702,
+ "step": 195
+ },
+ {
+ "epoch": 0.4371584699453552,
+ "grad_norm": 52.77170794873292,
+ "learning_rate": 6.917829993880302e-07,
+ "logits/chosen": -1.1570117473602295,
+ "logits/rejected": -1.0735323429107666,
+ "logps/chosen": -0.884504497051239,
+ "logps/rejected": -1.2432372570037842,
+ "loss": 2.2946,
+ "rewards/accuracies": 0.78125,
+ "rewards/chosen": -8.84504508972168,
+ "rewards/margins": 3.5873265266418457,
+ "rewards/rejected": -12.432373046875,
+ "semantic_entropy": 0.9433660507202148,
+ "step": 200
+ },
+ {
+ "epoch": 0.44808743169398907,
+ "grad_norm": 44.7279934671613,
+ "learning_rate": 6.739994160844309e-07,
+ "logits/chosen": -1.11527419090271,
+ "logits/rejected": -1.1322429180145264,
+ "logps/chosen": -0.8004055023193359,
+ "logps/rejected": -1.1100412607192993,
+ "loss": 2.2536,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -8.004055976867676,
+ "rewards/margins": 3.0963568687438965,
+ "rewards/rejected": -11.100412368774414,
+ "semantic_entropy": 0.9715192914009094,
+ "step": 205
+ },
+ {
+ "epoch": 0.45901639344262296,
+ "grad_norm": 74.86910632781094,
+ "learning_rate": 6.559617056479827e-07,
+ "logits/chosen": -1.1598584651947021,
+ "logits/rejected": -1.1630977392196655,
+ "logps/chosen": -0.8840648531913757,
+ "logps/rejected": -1.3052994012832642,
+ "loss": 2.1909,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -8.840649604797363,
+ "rewards/margins": 4.212344169616699,
+ "rewards/rejected": -13.052993774414062,
+ "semantic_entropy": 0.9134693145751953,
+ "step": 210
+ },
+ {
+ "epoch": 0.46994535519125685,
+ "grad_norm": 75.49182823367542,
+ "learning_rate": 6.376962122569567e-07,
+ "logits/chosen": -1.1508004665374756,
+ "logits/rejected": -1.0994528532028198,
+ "logps/chosen": -0.8758481740951538,
+ "logps/rejected": -1.3154475688934326,
+ "loss": 2.2578,
+ "rewards/accuracies": 0.831250011920929,
+ "rewards/chosen": -8.758482933044434,
+ "rewards/margins": 4.395995140075684,
+ "rewards/rejected": -13.154478073120117,
+ "semantic_entropy": 0.9499451518058777,
+ "step": 215
+ },
+ {
+ "epoch": 0.4808743169398907,
+ "grad_norm": 48.75460798689588,
+ "learning_rate": 6.192296127679192e-07,
+ "logits/chosen": -1.216799020767212,
+ "logits/rejected": -1.1946138143539429,
+ "logps/chosen": -0.9245316386222839,
+ "logps/rejected": -1.2553398609161377,
+ "loss": 2.1888,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -9.245316505432129,
+ "rewards/margins": 3.3080811500549316,
+ "rewards/rejected": -12.553396224975586,
+ "semantic_entropy": 0.920149028301239,
+ "step": 220
+ },
+ {
+ "epoch": 0.4918032786885246,
+ "grad_norm": 51.690790536277476,
+ "learning_rate": 6.005888777540319e-07,
+ "logits/chosen": -1.2422759532928467,
+ "logits/rejected": -1.2019939422607422,
+ "logps/chosen": -0.8988568186759949,
+ "logps/rejected": -1.281859278678894,
+ "loss": 2.1988,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -8.988569259643555,
+ "rewards/margins": 3.8300251960754395,
+ "rewards/rejected": -12.81859302520752,
+ "semantic_entropy": 0.9307478070259094,
+ "step": 225
+ },
+ {
+ "epoch": 0.5027322404371585,
+ "grad_norm": 54.336949187520304,
+ "learning_rate": 5.818012321143773e-07,
+ "logits/chosen": -1.1455602645874023,
+ "logits/rejected": -1.1488093137741089,
+ "logps/chosen": -0.8657233119010925,
+ "logps/rejected": -1.2924095392227173,
+ "loss": 2.2216,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -8.657234191894531,
+ "rewards/margins": 4.266862392425537,
+ "rewards/rejected": -12.924097061157227,
+ "semantic_entropy": 0.9354921579360962,
+ "step": 230
+ },
+ {
+ "epoch": 0.5136612021857924,
+ "grad_norm": 65.64434177548276,
+ "learning_rate": 5.628941153118388e-07,
+ "logits/chosen": -1.158027172088623,
+ "logits/rejected": -1.1202760934829712,
753
+ "logps/chosen": -0.8916556239128113,
754
+ "logps/rejected": -1.255821704864502,
755
+ "loss": 2.147,
756
+ "rewards/accuracies": 0.831250011920929,
757
+ "rewards/chosen": -8.916555404663086,
758
+ "rewards/margins": 3.641660690307617,
759
+ "rewards/rejected": -12.55821704864502,
760
+ "semantic_entropy": 0.9296435117721558,
761
+ "step": 235
762
+ },
763
+ {
764
+ "epoch": 0.5245901639344263,
765
+ "grad_norm": 50.4169580491656,
766
+ "learning_rate": 5.438951412976098e-07,
767
+ "logits/chosen": -1.230419397354126,
768
+ "logits/rejected": -1.24024498462677,
769
+ "logps/chosen": -0.8406194448471069,
770
+ "logps/rejected": -1.2802150249481201,
771
+ "loss": 2.048,
772
+ "rewards/accuracies": 0.831250011920929,
773
+ "rewards/chosen": -8.406194686889648,
774
+ "rewards/margins": 4.395954608917236,
775
+ "rewards/rejected": -12.802148818969727,
776
+ "semantic_entropy": 0.9510146379470825,
777
+ "step": 240
778
+ },
779
+ {
780
+ "epoch": 0.5355191256830601,
781
+ "grad_norm": 49.4313854721959,
782
+ "learning_rate": 5.248320581808619e-07,
783
+ "logits/chosen": -1.1573108434677124,
784
+ "logits/rejected": -1.1132813692092896,
785
+ "logps/chosen": -0.8796091079711914,
786
+ "logps/rejected": -1.3321001529693604,
787
+ "loss": 2.1276,
788
+ "rewards/accuracies": 0.8125,
789
+ "rewards/chosen": -8.796091079711914,
790
+ "rewards/margins": 4.524909973144531,
791
+ "rewards/rejected": -13.321001052856445,
792
+ "semantic_entropy": 0.9308155179023743,
793
+ "step": 245
794
+ },
795
+ {
796
+ "epoch": 0.546448087431694,
797
+ "grad_norm": 48.57429116569769,
798
+ "learning_rate": 5.057327077024744e-07,
799
+ "logits/chosen": -1.2085868120193481,
800
+ "logits/rejected": -1.1734802722930908,
801
+ "logps/chosen": -0.8411601185798645,
802
+ "logps/rejected": -1.2031333446502686,
803
+ "loss": 2.1896,
804
+ "rewards/accuracies": 0.793749988079071,
805
+ "rewards/chosen": -8.411601066589355,
806
+ "rewards/margins": 3.619731903076172,
807
+ "rewards/rejected": -12.031333923339844,
808
+ "semantic_entropy": 0.9265835881233215,
809
+ "step": 250
810
+ },
+ {
+ "epoch": 0.5573770491803278,
+ "grad_norm": 66.94044733232491,
+ "learning_rate": 4.866249845720132e-07,
+ "logits/chosen": -1.1839255094528198,
+ "logits/rejected": -1.1519018411636353,
+ "logps/chosen": -0.9079425930976868,
+ "logps/rejected": -1.3595575094223022,
+ "loss": 2.1027,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -9.079425811767578,
+ "rewards/margins": 4.516149044036865,
+ "rewards/rejected": -13.595575332641602,
+ "semantic_entropy": 0.9337202310562134,
+ "step": 255
+ },
+ {
+ "epoch": 0.5683060109289617,
+ "grad_norm": 52.69841931787451,
+ "learning_rate": 4.675367957273505e-07,
+ "logits/chosen": -1.1522729396820068,
+ "logits/rejected": -1.1428637504577637,
+ "logps/chosen": -0.8813174366950989,
+ "logps/rejected": -1.3057996034622192,
+ "loss": 2.093,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": -8.813173294067383,
+ "rewards/margins": 4.2448225021362305,
+ "rewards/rejected": -13.057995796203613,
+ "semantic_entropy": 0.9142274856567383,
+ "step": 260
+ },
+ {
+ "epoch": 0.5792349726775956,
+ "grad_norm": 43.72494095619232,
+ "learning_rate": 4.4849601957642285e-07,
+ "logits/chosen": -1.1581261157989502,
+ "logits/rejected": -1.1219886541366577,
+ "logps/chosen": -0.851395308971405,
+ "logps/rejected": -1.285305380821228,
+ "loss": 2.0682,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": -8.51395320892334,
+ "rewards/margins": 4.339099407196045,
+ "rewards/rejected": -12.853053092956543,
+ "semantic_entropy": 0.9402335286140442,
+ "step": 265
+ },
+ {
+ "epoch": 0.5901639344262295,
+ "grad_norm": 74.3004033684429,
+ "learning_rate": 4.295304652806592e-07,
+ "logits/chosen": -1.1579556465148926,
+ "logits/rejected": -1.1343352794647217,
+ "logps/chosen": -0.8544819951057434,
+ "logps/rejected": -1.3365771770477295,
+ "loss": 2.03,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -8.544819831848145,
+ "rewards/margins": 4.820953369140625,
+ "rewards/rejected": -13.36577320098877,
+ "semantic_entropy": 0.9244714975357056,
+ "step": 270
+ },
+ {
+ "epoch": 0.6010928961748634,
+ "grad_norm": 47.25525711691707,
+ "learning_rate": 4.106678321395433e-07,
+ "logits/chosen": -1.1198965311050415,
+ "logits/rejected": -1.0580785274505615,
+ "logps/chosen": -0.8487468957901001,
+ "logps/rejected": -1.1780356168746948,
+ "loss": 2.0436,
+ "rewards/accuracies": 0.768750011920929,
+ "rewards/chosen": -8.487468719482422,
+ "rewards/margins": 3.2928879261016846,
+ "rewards/rejected": -11.780357360839844,
+ "semantic_entropy": 0.9419177174568176,
+ "step": 275
+ },
+ {
+ "epoch": 0.6120218579234973,
+ "grad_norm": 50.516107327592174,
+ "learning_rate": 3.9193566913562915e-07,
+ "logits/chosen": -1.0839662551879883,
+ "logits/rejected": -1.088769555091858,
+ "logps/chosen": -0.8575767278671265,
+ "logps/rejected": -1.3727636337280273,
+ "loss": 2.0447,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": -8.57576847076416,
+ "rewards/margins": 5.151867866516113,
+ "rewards/rejected": -13.727636337280273,
+ "semantic_entropy": 0.9023324251174927,
+ "step": 280
+ },
+ {
+ "epoch": 0.6229508196721312,
+ "grad_norm": 53.0473998547049,
+ "learning_rate": 3.7336133469909623e-07,
+ "logits/chosen": -1.2294294834136963,
+ "logits/rejected": -1.1984007358551025,
+ "logps/chosen": -0.8300548791885376,
+ "logps/rejected": -1.328940987586975,
+ "loss": 1.9766,
+ "rewards/accuracies": 0.8812500238418579,
+ "rewards/chosen": -8.300548553466797,
+ "rewards/margins": 4.988862037658691,
+ "rewards/rejected": -13.289410591125488,
+ "semantic_entropy": 0.929417610168457,
+ "step": 285
+ },
+ {
+ "epoch": 0.6338797814207651,
+ "grad_norm": 90.39553328719175,
+ "learning_rate": 3.549719567506076e-07,
+ "logits/chosen": -1.153247594833374,
+ "logits/rejected": -1.1141259670257568,
+ "logps/chosen": -0.865193247795105,
+ "logps/rejected": -1.3289250135421753,
+ "loss": 2.0316,
+ "rewards/accuracies": 0.8187500238418579,
+ "rewards/chosen": -8.651933670043945,
+ "rewards/margins": 4.6373186111450195,
+ "rewards/rejected": -13.289251327514648,
+ "semantic_entropy": 0.9263349771499634,
+ "step": 290
+ },
+ {
+ "epoch": 0.644808743169399,
+ "grad_norm": 55.4386947782287,
+ "learning_rate": 3.3679439308082774e-07,
+ "logits/chosen": -1.1450589895248413,
+ "logits/rejected": -1.1411672830581665,
+ "logps/chosen": -0.8747571706771851,
+ "logps/rejected": -1.3667054176330566,
+ "loss": 1.9546,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -8.747570991516113,
+ "rewards/margins": 4.919483661651611,
+ "rewards/rejected": -13.66705322265625,
+ "semantic_entropy": 0.9194013476371765,
+ "step": 295
+ },
+ {
+ "epoch": 0.6557377049180327,
+ "grad_norm": 45.252881799709705,
+ "learning_rate": 3.1885519212446716e-07,
+ "logits/chosen": -1.1887695789337158,
+ "logits/rejected": -1.1733474731445312,
+ "logps/chosen": -0.9111166000366211,
+ "logps/rejected": -1.3687089681625366,
+ "loss": 1.9751,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -9.111166000366211,
+ "rewards/margins": 4.575922966003418,
+ "rewards/rejected": -13.687089920043945,
+ "semantic_entropy": 0.8905431032180786,
+ "step": 300
+ },
+ {
+ "epoch": 0.6666666666666666,
+ "grad_norm": 63.443681351494455,
+ "learning_rate": 3.0118055418614295e-07,
+ "logits/chosen": -1.1734164953231812,
+ "logits/rejected": -1.1228505373001099,
+ "logps/chosen": -0.8473879098892212,
+ "logps/rejected": -1.3491770029067993,
+ "loss": 2.0108,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -8.473878860473633,
+ "rewards/margins": 5.017890453338623,
+ "rewards/rejected": -13.491769790649414,
+ "semantic_entropy": 0.9206064343452454,
+ "step": 305
+ },
+ {
+ "epoch": 0.6775956284153005,
+ "grad_norm": 48.352495630764054,
+ "learning_rate": 2.83796293174686e-07,
+ "logits/chosen": -1.117762565612793,
+ "logits/rejected": -1.1260699033737183,
+ "logps/chosen": -0.901767909526825,
+ "logps/rejected": -1.443933129310608,
+ "loss": 2.0817,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -9.017679214477539,
+ "rewards/margins": 5.421651840209961,
+ "rewards/rejected": -14.4393310546875,
+ "semantic_entropy": 0.9092252850532532,
+ "step": 310
+ },
+ {
+ "epoch": 0.6885245901639344,
+ "grad_norm": 48.159593233413936,
+ "learning_rate": 2.6672779890178046e-07,
+ "logits/chosen": -1.1639653444290161,
+ "logits/rejected": -1.1631653308868408,
+ "logps/chosen": -0.9492608904838562,
+ "logps/rejected": -1.3062164783477783,
+ "loss": 2.0243,
+ "rewards/accuracies": 0.7562500238418579,
+ "rewards/chosen": -9.492609977722168,
+ "rewards/margins": 3.5695548057556152,
+ "rewards/rejected": -13.062166213989258,
+ "semantic_entropy": 0.9068315625190735,
+ "step": 315
+ },
+ {
+ "epoch": 0.6994535519125683,
+ "grad_norm": 44.49675726254248,
+ "learning_rate": 2.500000000000001e-07,
+ "logits/chosen": -1.2249935865402222,
+ "logits/rejected": -1.1779038906097412,
+ "logps/chosen": -0.9137300252914429,
+ "logps/rejected": -1.3746672868728638,
+ "loss": 1.9926,
+ "rewards/accuracies": 0.84375,
+ "rewards/chosen": -9.137300491333008,
+ "rewards/margins": 4.609372138977051,
+ "rewards/rejected": -13.746671676635742,
+ "semantic_entropy": 0.9218708872795105,
+ "step": 320
+ },
+ {
+ "epoch": 0.7103825136612022,
+ "grad_norm": 62.06584454249009,
+ "learning_rate": 2.3363732751439923e-07,
+ "logits/chosen": -1.1833363771438599,
+ "logits/rejected": -1.1670992374420166,
+ "logps/chosen": -0.8321934938430786,
+ "logps/rejected": -1.2726361751556396,
+ "loss": 2.0087,
+ "rewards/accuracies": 0.84375,
+ "rewards/chosen": -8.32193374633789,
+ "rewards/margins": 4.404426097869873,
+ "rewards/rejected": -12.726360321044922,
+ "semantic_entropy": 0.9296501278877258,
+ "step": 325
+ },
+ {
+ "epoch": 0.7213114754098361,
+ "grad_norm": 47.582405569826236,
+ "learning_rate": 2.1766367922083283e-07,
+ "logits/chosen": -1.1103484630584717,
+ "logits/rejected": -1.0834376811981201,
+ "logps/chosen": -0.7902881503105164,
+ "logps/rejected": -1.379204511642456,
+ "loss": 1.9477,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -7.9028825759887695,
+ "rewards/margins": 5.889164447784424,
+ "rewards/rejected": -13.792045593261719,
+ "semantic_entropy": 0.9163787961006165,
+ "step": 330
+ },
+ {
+ "epoch": 0.73224043715847,
+ "grad_norm": 51.055602602949044,
+ "learning_rate": 2.021023847231202e-07,
+ "logits/chosen": -1.0874385833740234,
+ "logits/rejected": -1.0472285747528076,
+ "logps/chosen": -0.9132916331291199,
+ "logps/rejected": -1.3749035596847534,
+ "loss": 1.9593,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -9.132916450500488,
+ "rewards/margins": 4.616118907928467,
+ "rewards/rejected": -13.749035835266113,
+ "semantic_entropy": 0.8924514651298523,
+ "step": 335
+ },
+ {
+ "epoch": 0.7431693989071039,
+ "grad_norm": 53.38188724969566,
+ "learning_rate": 1.869761713800254e-07,
+ "logits/chosen": -1.1097468137741089,
+ "logits/rejected": -1.064247965812683,
+ "logps/chosen": -0.8977781534194946,
+ "logps/rejected": -1.4049198627471924,
+ "loss": 2.0313,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": -8.977781295776367,
+ "rewards/margins": 5.071417808532715,
+ "rewards/rejected": -14.049200057983398,
+ "semantic_entropy": 0.896017849445343,
+ "step": 340
+ },
+ {
+ "epoch": 0.7540983606557377,
+ "grad_norm": 57.62309009254558,
+ "learning_rate": 1.7230713111182164e-07,
+ "logits/chosen": -1.1679933071136475,
+ "logits/rejected": -1.167457103729248,
+ "logps/chosen": -0.919908344745636,
+ "logps/rejected": -1.4295835494995117,
+ "loss": 2.0598,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -9.19908332824707,
+ "rewards/margins": 5.096750736236572,
+ "rewards/rejected": -14.2958345413208,
+ "semantic_entropy": 0.8877577781677246,
+ "step": 345
+ },
+ {
+ "epoch": 0.7650273224043715,
+ "grad_norm": 56.46416368551912,
+ "learning_rate": 1.5811668813491696e-07,
+ "logits/chosen": -1.164217233657837,
+ "logits/rejected": -1.140291452407837,
+ "logps/chosen": -0.8424995541572571,
+ "logps/rejected": -1.2386906147003174,
+ "loss": 1.9725,
+ "rewards/accuracies": 0.831250011920929,
+ "rewards/chosen": -8.424995422363281,
+ "rewards/margins": 3.96191143989563,
+ "rewards/rejected": -12.386906623840332,
+ "semantic_entropy": 0.9234525561332703,
+ "step": 350
+ },
+ {
+ "epoch": 0.7759562841530054,
+ "grad_norm": 47.45687983835232,
+ "learning_rate": 1.4442556767166369e-07,
+ "logits/chosen": -1.1149598360061646,
+ "logits/rejected": -1.0830504894256592,
+ "logps/chosen": -0.8345939517021179,
+ "logps/rejected": -1.2633658647537231,
+ "loss": 1.9792,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -8.345940589904785,
+ "rewards/margins": 4.287718296051025,
+ "rewards/rejected": -12.633659362792969,
+ "semantic_entropy": 0.9200286865234375,
+ "step": 355
+ },
+ {
+ "epoch": 0.7868852459016393,
+ "grad_norm": 69.3977290180621,
+ "learning_rate": 1.312537656810549e-07,
+ "logits/chosen": -1.0737590789794922,
+ "logits/rejected": -1.0747511386871338,
+ "logps/chosen": -0.9100669622421265,
+ "logps/rejected": -1.410347580909729,
+ "loss": 1.9621,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -9.100671768188477,
+ "rewards/margins": 5.002806186676025,
+ "rewards/rejected": -14.103477478027344,
+ "semantic_entropy": 0.8941181898117065,
+ "step": 360
+ },
+ {
+ "epoch": 0.7978142076502732,
+ "grad_norm": 62.629527673002094,
+ "learning_rate": 1.1862051965451214e-07,
+ "logits/chosen": -1.1590697765350342,
+ "logits/rejected": -1.1595137119293213,
+ "logps/chosen": -0.8836385011672974,
+ "logps/rejected": -1.3890182971954346,
+ "loss": 1.9788,
+ "rewards/accuracies": 0.793749988079071,
+ "rewards/chosen": -8.836385726928711,
+ "rewards/margins": 5.053797245025635,
+ "rewards/rejected": -13.890182495117188,
+ "semantic_entropy": 0.9085249900817871,
+ "step": 365
+ },
+ {
+ "epoch": 0.8087431693989071,
+ "grad_norm": 65.41767650747809,
+ "learning_rate": 1.0654428051942138e-07,
+ "logits/chosen": -1.1633613109588623,
+ "logits/rejected": -1.1319725513458252,
+ "logps/chosen": -0.9073840975761414,
+ "logps/rejected": -1.4780160188674927,
+ "loss": 2.0047,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": -9.07384204864502,
+ "rewards/margins": 5.7063188552856445,
+ "rewards/rejected": -14.780160903930664,
+ "semantic_entropy": 0.8961697816848755,
+ "step": 370
+ },
+ {
+ "epoch": 0.819672131147541,
+ "grad_norm": 45.906819053741515,
+ "learning_rate": 9.504268569144763e-08,
+ "logits/chosen": -1.1833055019378662,
+ "logits/rejected": -1.1207568645477295,
+ "logps/chosen": -0.8244643211364746,
+ "logps/rejected": -1.3258384466171265,
+ "loss": 1.9881,
+ "rewards/accuracies": 0.84375,
+ "rewards/chosen": -8.244643211364746,
+ "rewards/margins": 5.013741970062256,
+ "rewards/rejected": -13.258384704589844,
+ "semantic_entropy": 0.9239455461502075,
+ "step": 375
+ },
+ {
+ "epoch": 0.8306010928961749,
+ "grad_norm": 41.61552479544314,
+ "learning_rate": 8.413253331499049e-08,
+ "logits/chosen": -1.067228078842163,
+ "logits/rejected": -1.083414077758789,
+ "logps/chosen": -0.8691232800483704,
+ "logps/rejected": -1.315148949623108,
+ "loss": 1.9039,
+ "rewards/accuracies": 0.8187500238418579,
+ "rewards/chosen": -8.69123363494873,
+ "rewards/margins": 4.4602556228637695,
+ "rewards/rejected": -13.151487350463867,
+ "semantic_entropy": 0.9291045069694519,
+ "step": 380
+ },
+ {
+ "epoch": 0.8415300546448088,
+ "grad_norm": 53.25699744382843,
+ "learning_rate": 7.382975772939865e-08,
+ "logits/chosen": -1.1655666828155518,
+ "logits/rejected": -1.1497318744659424,
+ "logps/chosen": -0.9494159817695618,
+ "logps/rejected": -1.448509931564331,
+ "loss": 2.061,
+ "rewards/accuracies": 0.8374999761581421,
+ "rewards/chosen": -9.494159698486328,
+ "rewards/margins": 4.990939617156982,
+ "rewards/rejected": -14.485099792480469,
+ "semantic_entropy": 0.9152476191520691,
+ "step": 385
+ },
+ {
+ "epoch": 0.8524590163934426,
+ "grad_norm": 70.12618215621923,
+ "learning_rate": 6.414940619677734e-08,
+ "logits/chosen": -1.157553791999817,
+ "logits/rejected": -1.1328723430633545,
+ "logps/chosen": -0.8517138361930847,
+ "logps/rejected": -1.4659974575042725,
+ "loss": 1.9241,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -8.51713752746582,
+ "rewards/margins": 6.14283561706543,
+ "rewards/rejected": -14.659975051879883,
+ "semantic_entropy": 0.9240752458572388,
+ "step": 390
+ },
+ {
+ "epoch": 0.8633879781420765,
+ "grad_norm": 55.71292222513987,
+ "learning_rate": 5.5105616925376296e-08,
+ "logits/chosen": -1.134119987487793,
+ "logits/rejected": -1.1163135766983032,
+ "logps/chosen": -0.9063531160354614,
+ "logps/rejected": -1.3211853504180908,
+ "loss": 1.9119,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -9.063530921936035,
+ "rewards/margins": 4.148322105407715,
+ "rewards/rejected": -13.21185302734375,
+ "semantic_entropy": 0.9249491691589355,
+ "step": 395
+ },
+ {
+ "epoch": 0.8743169398907104,
+ "grad_norm": 55.00548916953222,
+ "learning_rate": 4.6711598420656976e-08,
+ "logits/chosen": -1.0777003765106201,
+ "logits/rejected": -1.0427886247634888,
+ "logps/chosen": -0.8970823287963867,
+ "logps/rejected": -1.4476853609085083,
+ "loss": 1.9268,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -8.970823287963867,
+ "rewards/margins": 5.506030082702637,
+ "rewards/rejected": -14.476852416992188,
+ "semantic_entropy": 0.8907070159912109,
+ "step": 400
+ },
+ {
+ "epoch": 0.8743169398907104,
+ "eval_logits/chosen": -1.3707057237625122,
+ "eval_logits/rejected": -1.3261345624923706,
+ "eval_logps/chosen": -0.8602119088172913,
+ "eval_logps/rejected": -1.391231656074524,
+ "eval_loss": 1.9562536478042603,
+ "eval_rewards/accuracies": 0.8433734774589539,
+ "eval_rewards/chosen": -8.602119445800781,
+ "eval_rewards/margins": 5.31019926071167,
+ "eval_rewards/rejected": -13.912318229675293,
+ "eval_runtime": 46.5338,
+ "eval_samples_per_second": 28.323,
+ "eval_semantic_entropy": 0.9091164469718933,
+ "eval_steps_per_second": 1.784,
+ "step": 400
+ },
+ {
+ "epoch": 0.8852459016393442,
+ "grad_norm": 56.49464800000407,
+ "learning_rate": 3.897961019419516e-08,
+ "logits/chosen": -1.1149052381515503,
+ "logits/rejected": -1.0388867855072021,
+ "logps/chosen": -0.8156052827835083,
+ "logps/rejected": -1.2854751348495483,
+ "loss": 1.9256,
+ "rewards/accuracies": 0.824999988079071,
+ "rewards/chosen": -8.156051635742188,
+ "rewards/margins": 4.698698997497559,
+ "rewards/rejected": -12.854751586914062,
+ "semantic_entropy": 0.9291986227035522,
+ "step": 405
+ },
+ {
+ "epoch": 0.8961748633879781,
+ "grad_norm": 50.962161952757214,
+ "learning_rate": 3.192094485859526e-08,
+ "logits/chosen": -1.117832064628601,
+ "logits/rejected": -1.1457985639572144,
+ "logps/chosen": -0.8877307772636414,
+ "logps/rejected": -1.4623702764511108,
+ "loss": 1.9062,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -8.877307891845703,
+ "rewards/margins": 5.746395587921143,
+ "rewards/rejected": -14.62370491027832,
+ "semantic_entropy": 0.8888769149780273,
+ "step": 410
+ },
+ {
+ "epoch": 0.907103825136612,
+ "grad_norm": 54.776769647326134,
+ "learning_rate": 2.5545911634565265e-08,
+ "logits/chosen": -1.1549928188323975,
+ "logits/rejected": -1.1519380807876587,
+ "logps/chosen": -0.8584114909172058,
+ "logps/rejected": -1.409808874130249,
+ "loss": 1.9933,
+ "rewards/accuracies": 0.856249988079071,
+ "rewards/chosen": -8.584114074707031,
+ "rewards/margins": 5.513972282409668,
+ "rewards/rejected": -14.0980863571167,
+ "semantic_entropy": 0.9221046566963196,
+ "step": 415
+ },
+ {
+ "epoch": 0.9180327868852459,
+ "grad_norm": 57.98778672833022,
+ "learning_rate": 1.9863821294241522e-08,
+ "logits/chosen": -1.149745225906372,
+ "logits/rejected": -1.125451683998108,
+ "logps/chosen": -0.8198378682136536,
+ "logps/rejected": -1.4263782501220703,
+ "loss": 1.8964,
+ "rewards/accuracies": 0.8500000238418579,
+ "rewards/chosen": -8.198378562927246,
+ "rewards/margins": 6.065403938293457,
+ "rewards/rejected": -14.26378345489502,
+ "semantic_entropy": 0.9195442199707031,
+ "step": 420
+ },
+ {
+ "epoch": 0.9289617486338798,
+ "grad_norm": 52.73787429786629,
+ "learning_rate": 1.4882972562753615e-08,
+ "logits/chosen": -1.1456595659255981,
+ "logits/rejected": -1.1313962936401367,
+ "logps/chosen": -0.9214321374893188,
+ "logps/rejected": -1.5123398303985596,
+ "loss": 1.99,
+ "rewards/accuracies": 0.8187500238418579,
+ "rewards/chosen": -9.21432113647461,
+ "rewards/margins": 5.90907621383667,
+ "rewards/rejected": -15.123395919799805,
+ "semantic_entropy": 0.8927605748176575,
+ "step": 425
+ },
+ {
+ "epoch": 0.9398907103825137,
+ "grad_norm": 44.1912015437529,
+ "learning_rate": 1.0610639997888915e-08,
+ "logits/chosen": -1.081146001815796,
+ "logits/rejected": -1.075714111328125,
+ "logps/chosen": -0.7971862554550171,
+ "logps/rejected": -1.3901129961013794,
+ "loss": 1.7868,
+ "rewards/accuracies": 0.9125000238418579,
+ "rewards/chosen": -7.971863746643066,
+ "rewards/margins": 5.92926549911499,
+ "rewards/rejected": -13.901128768920898,
+ "semantic_entropy": 0.9434274435043335,
+ "step": 430
+ },
+ {
+ "epoch": 0.9508196721311475,
+ "grad_norm": 61.224577711409715,
+ "learning_rate": 7.053063365559997e-09,
+ "logits/chosen": -1.1544030904769897,
+ "logits/rejected": -1.1826975345611572,
+ "logps/chosen": -0.8376800417900085,
+ "logps/rejected": -1.3729816675186157,
+ "loss": 1.9375,
+ "rewards/accuracies": 0.862500011920929,
+ "rewards/chosen": -8.376801490783691,
+ "rewards/margins": 5.353014945983887,
+ "rewards/rejected": -13.729815483093262,
+ "semantic_entropy": 0.9147855639457703,
+ "step": 435
+ },
+ {
+ "epoch": 0.9617486338797814,
+ "grad_norm": 53.9815806254924,
+ "learning_rate": 4.215438526591064e-09,
+ "logits/chosen": -1.110844612121582,
+ "logits/rejected": -1.0655571222305298,
+ "logps/chosen": -0.9438391923904419,
+ "logps/rejected": -1.4074232578277588,
+ "loss": 1.8879,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -9.438390731811523,
+ "rewards/margins": 4.635839939117432,
+ "rewards/rejected": -14.074231147766113,
+ "semantic_entropy": 0.879165530204773,
+ "step": 440
+ },
+ {
+ "epoch": 0.9726775956284153,
+ "grad_norm": 57.824670175057065,
+ "learning_rate": 2.1019098481337426e-09,
+ "logits/chosen": -1.1616451740264893,
+ "logits/rejected": -1.1365407705307007,
+ "logps/chosen": -0.8202412724494934,
+ "logps/rejected": -1.3376966714859009,
+ "loss": 1.9265,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -8.202413558959961,
+ "rewards/margins": 5.174552917480469,
+ "rewards/rejected": -13.376965522766113,
+ "semantic_entropy": 0.9334556460380554,
+ "step": 445
+ },
+ {
+ "epoch": 0.9836065573770492,
+ "grad_norm": 55.05572574163813,
+ "learning_rate": 7.155641507955445e-10,
+ "logits/chosen": -1.076629877090454,
+ "logits/rejected": -1.0707811117172241,
+ "logps/chosen": -0.882774829864502,
+ "logps/rejected": -1.3867313861846924,
+ "loss": 1.9798,
+ "rewards/accuracies": 0.8187500238418579,
+ "rewards/chosen": -8.82774829864502,
+ "rewards/margins": 5.039565563201904,
+ "rewards/rejected": -13.86731243133545,
+ "semantic_entropy": 0.910732626914978,
+ "step": 450
+ },
+ {
+ "epoch": 0.994535519125683,
+ "grad_norm": 64.69250426850014,
+ "learning_rate": 5.842620032053824e-11,
+ "logits/chosen": -1.0871715545654297,
+ "logits/rejected": -1.0820033550262451,
+ "logps/chosen": -0.9286049604415894,
+ "logps/rejected": -1.3075941801071167,
+ "loss": 2.053,
+ "rewards/accuracies": 0.78125,
+ "rewards/chosen": -9.286049842834473,
+ "rewards/margins": 3.7898917198181152,
+ "rewards/rejected": -13.07594108581543,
+ "semantic_entropy": 0.9091150164604187,
+ "step": 455
+ },
+ {
+ "epoch": 0.9989071038251366,
+ "step": 457,
+ "total_flos": 0.0,
+ "train_loss": 2.3507381581112385,
+ "train_runtime": 7991.7933,
+ "train_samples_per_second": 7.327,
+ "train_steps_per_second": 0.057
+ }
+ ],
+ "logging_steps": 5,
+ "max_steps": 457,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 1000000,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 2,
+ "trial_name": null,
+ "trial_params": null
+ }