ambor1011 committed on
Commit
1034916
1 Parent(s): 2058026

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ library_name: peft
3
+ base_model: EleutherAI/gpt-neo-1.3B
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.11.1
adapter_config.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "EleutherAI/gpt-neo-1.3B",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layer_replication": null,
10
+ "layers_pattern": null,
11
+ "layers_to_transform": null,
12
+ "loftq_config": {},
13
+ "lora_alpha": 16,
14
+ "lora_dropout": 0.05,
15
+ "megatron_config": null,
16
+ "megatron_core": "megatron.core",
17
+ "modules_to_save": null,
18
+ "peft_type": "LORA",
19
+ "r": 8,
20
+ "rank_pattern": {},
21
+ "revision": null,
22
+ "target_modules": [
23
+ "v_proj",
24
+ "q_proj",
25
+ "k_proj"
26
+ ],
27
+ "task_type": "CAUSAL_LM",
28
+ "use_dora": false,
29
+ "use_rslora": false
30
+ }
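
The config above defines a LoRA adapter (r=8, lora_alpha=16, dropout 0.05) over the q_proj, k_proj and v_proj attention projections of EleutherAI/gpt-neo-1.3B. A minimal loading sketch with transformers and peft follows; the local adapter path is a placeholder, not a value recorded in this commit.

```python
# Minimal sketch: apply the LoRA adapter from this commit to its base model.
# The adapter_path below is a hypothetical local checkout of this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_path = "./gpt-neo-1.3B-lora-adapter"  # placeholder path
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = PeftModel.from_pretrained(base, adapter_path)  # loads q/k/v LoRA weights (r=8, alpha=16)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("Hello, world!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```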
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:feebdf15b898053d608d81daa4cd6dd099ba6cf7071641ae91ca882a1baebaf4
3
+ size 9457000
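
The three lines above are a Git LFS pointer: the 9,457,000-byte safetensors file itself lives in LFS storage and is addressed by its SHA-256. A small integrity-check sketch against the pointer's oid (the local file path is an assumption):

```python
# Sketch: verify a downloaded adapter_model.safetensors against the LFS pointer oid above.
import hashlib

EXPECTED_OID = "feebdf15b898053d608d81daa4cd6dd099ba6cf7071641ae91ca882a1baebaf4"

with open("adapter_model.safetensors", "rb") as f:  # assumed local download
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED_OID else f"checksum mismatch: {digest}")
```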
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2641f2cb2b4fdeee3f9a4ee5a13215510784a49436a654bc8c3ccb112504636d
3
+ size 18959674
rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:901454eae3a785b11565176eda263a4901a5e801f61aaac1a63fa07ac7277b3e
3
+ size 14180
scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2754bf07d01336164e4bfa949e826e310b35cd5348cd170cf25a5bcbcf51c8d0
3
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|endoftext|>",
4
+ "lstrip": false,
5
+ "normalized": true,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": true,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "<|endoftext|>",
17
+ "unk_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": true,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,22 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "50256": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": true,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ }
13
+ },
14
+ "bos_token": "<|endoftext|>",
15
+ "clean_up_tokenization_spaces": true,
16
+ "eos_token": "<|endoftext|>",
17
+ "errors": "replace",
18
+ "model_max_length": 2048,
19
+ "pad_token": "<|endoftext|>",
20
+ "tokenizer_class": "GPT2Tokenizer",
21
+ "unk_token": "<|endoftext|>"
22
+ }
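
tokenizer_config.json keeps the stock GPT-2 BPE tokenizer (model_max_length 2048) but reuses `<|endoftext|>` (id 50256) as bos, eos, unk and pad, consistent with special_tokens_map.json. A short sketch of what that means in practice when batching prompts (loading from the base model hub id rather than this repo is an assumption):

```python
# Sketch: with pad_token == eos_token, batches are padded with <|endoftext|> (id 50256).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")  # same tokenizer files as this commit
tok.pad_token = tok.eos_token                                   # mirrors tokenizer_config.json

batch = tok(["short", "a somewhat longer prompt"], padding=True, return_tensors="pt")
print(tok.pad_token_id)            # 50256
print(batch["input_ids"].shape)    # both sequences padded to the same length
```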
trainer_state.json ADDED
@@ -0,0 +1,1355 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.260327357755261,
5
+ "eval_steps": 200,
6
+ "global_step": 5800,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.03897116134060795,
13
+ "grad_norm": 0.21226342022418976,
14
+ "learning_rate": 0.00025,
15
+ "logits/chosen": -19.687898635864258,
16
+ "logits/rejected": -18.633106231689453,
17
+ "logps/chosen": -343.7872619628906,
18
+ "logps/rejected": -250.41775512695312,
19
+ "loss": 0.4838,
20
+ "rewards/accuracies": 0.793749988079071,
21
+ "rewards/chosen": 0.6970043182373047,
22
+ "rewards/margins": 0.9049937725067139,
23
+ "rewards/rejected": -0.2079893946647644,
24
+ "step": 100
25
+ },
26
+ {
27
+ "epoch": 0.0779423226812159,
28
+ "grad_norm": 1.92564857006073,
29
+ "learning_rate": 0.0005,
30
+ "logits/chosen": -19.59960174560547,
31
+ "logits/rejected": -18.517173767089844,
32
+ "logps/chosen": -349.5354309082031,
33
+ "logps/rejected": -277.8677062988281,
34
+ "loss": 0.2445,
35
+ "rewards/accuracies": 0.862500011920929,
36
+ "rewards/chosen": 1.4490649700164795,
37
+ "rewards/margins": 3.1850626468658447,
38
+ "rewards/rejected": -1.7359976768493652,
39
+ "step": 200
40
+ },
41
+ {
42
+ "epoch": 0.0779423226812159,
43
+ "eval_logits/chosen": -16.551204681396484,
44
+ "eval_logits/rejected": -16.51001739501953,
45
+ "eval_logps/chosen": -340.37274169921875,
46
+ "eval_logps/rejected": -285.42059326171875,
47
+ "eval_loss": 0.19902318716049194,
48
+ "eval_rewards/accuracies": 0.9711538553237915,
49
+ "eval_rewards/chosen": 0.12859317660331726,
50
+ "eval_rewards/margins": 1.9796644449234009,
51
+ "eval_rewards/rejected": -1.8510712385177612,
52
+ "eval_runtime": 183.9267,
53
+ "eval_samples_per_second": 1.131,
54
+ "eval_steps_per_second": 0.565,
55
+ "step": 200
56
+ },
57
+ {
58
+ "epoch": 0.11691348402182385,
59
+ "grad_norm": 1.4603872299194336,
60
+ "learning_rate": 0.0004997805905390462,
61
+ "logits/chosen": -19.685632705688477,
62
+ "logits/rejected": -18.630176544189453,
63
+ "logps/chosen": -347.72479248046875,
64
+ "logps/rejected": -297.4257507324219,
65
+ "loss": 0.195,
66
+ "rewards/accuracies": 0.8799999952316284,
67
+ "rewards/chosen": 0.857117235660553,
68
+ "rewards/margins": 4.945696830749512,
69
+ "rewards/rejected": -4.088580131530762,
70
+ "step": 300
71
+ },
72
+ {
73
+ "epoch": 0.1558846453624318,
74
+ "grad_norm": 2.9201486110687256,
75
+ "learning_rate": 0.0004991227472802768,
76
+ "logits/chosen": -19.715879440307617,
77
+ "logits/rejected": -18.654006958007812,
78
+ "logps/chosen": -347.0750427246094,
79
+ "logps/rejected": -297.565185546875,
80
+ "loss": 0.1965,
81
+ "rewards/accuracies": 0.8949999809265137,
82
+ "rewards/chosen": 0.27759745717048645,
83
+ "rewards/margins": 5.794647693634033,
84
+ "rewards/rejected": -5.517050266265869,
85
+ "step": 400
86
+ },
87
+ {
88
+ "epoch": 0.1558846453624318,
89
+ "eval_logits/chosen": -16.683916091918945,
90
+ "eval_logits/rejected": -16.602561950683594,
91
+ "eval_logps/chosen": -343.2335510253906,
92
+ "eval_logps/rejected": -309.0007629394531,
93
+ "eval_loss": 0.04625505581498146,
94
+ "eval_rewards/accuracies": 1.0,
95
+ "eval_rewards/chosen": -0.15748834609985352,
96
+ "eval_rewards/margins": 4.051597595214844,
97
+ "eval_rewards/rejected": -4.2090864181518555,
98
+ "eval_runtime": 183.7705,
99
+ "eval_samples_per_second": 1.132,
100
+ "eval_steps_per_second": 0.566,
101
+ "step": 400
102
+ },
103
+ {
104
+ "epoch": 0.19485580670303976,
105
+ "grad_norm": 0.7862759232521057,
106
+ "learning_rate": 0.0004980276249199705,
107
+ "logits/chosen": -19.461082458496094,
108
+ "logits/rejected": -18.434755325317383,
109
+ "logps/chosen": -370.48394775390625,
110
+ "logps/rejected": -314.01043701171875,
111
+ "loss": 0.1873,
112
+ "rewards/accuracies": 0.8974999785423279,
113
+ "rewards/chosen": 0.1783198118209839,
114
+ "rewards/margins": 5.953300952911377,
115
+ "rewards/rejected": -5.774980545043945,
116
+ "step": 500
117
+ },
118
+ {
119
+ "epoch": 0.2338269680436477,
120
+ "grad_norm": 0.5820030570030212,
121
+ "learning_rate": 0.0004964971456997812,
122
+ "logits/chosen": -19.28722381591797,
123
+ "logits/rejected": -18.402751922607422,
124
+ "logps/chosen": -369.10980224609375,
125
+ "logps/rejected": -328.44403076171875,
126
+ "loss": 0.19,
127
+ "rewards/accuracies": 0.9024999737739563,
128
+ "rewards/chosen": -0.12349352240562439,
129
+ "rewards/margins": 6.251636505126953,
130
+ "rewards/rejected": -6.375130653381348,
131
+ "step": 600
132
+ },
133
+ {
134
+ "epoch": 0.2338269680436477,
135
+ "eval_logits/chosen": -16.357208251953125,
136
+ "eval_logits/rejected": -16.30510139465332,
137
+ "eval_logps/chosen": -336.88409423828125,
138
+ "eval_logps/rejected": -304.0970764160156,
139
+ "eval_loss": 0.036708563566207886,
140
+ "eval_rewards/accuracies": 1.0,
141
+ "eval_rewards/chosen": 0.47745734453201294,
142
+ "eval_rewards/margins": 4.196179389953613,
143
+ "eval_rewards/rejected": -3.718721866607666,
144
+ "eval_runtime": 183.7701,
145
+ "eval_samples_per_second": 1.132,
146
+ "eval_steps_per_second": 0.566,
147
+ "step": 600
148
+ },
149
+ {
150
+ "epoch": 0.2727981293842556,
151
+ "grad_norm": 0.3981655538082123,
152
+ "learning_rate": 0.0004945339960326746,
153
+ "logits/chosen": -19.472816467285156,
154
+ "logits/rejected": -18.500709533691406,
155
+ "logps/chosen": -368.1885681152344,
156
+ "logps/rejected": -322.74298095703125,
157
+ "loss": 0.171,
158
+ "rewards/accuracies": 0.9012500047683716,
159
+ "rewards/chosen": -0.00018103599722962826,
160
+ "rewards/margins": 6.297857761383057,
161
+ "rewards/rejected": -6.298038959503174,
162
+ "step": 700
163
+ },
164
+ {
165
+ "epoch": 0.3117692907248636,
166
+ "grad_norm": 0.429470956325531,
167
+ "learning_rate": 0.0004921416217875326,
168
+ "logits/chosen": -19.675304412841797,
169
+ "logits/rejected": -18.701597213745117,
170
+ "logps/chosen": -354.1678771972656,
171
+ "logps/rejected": -312.2345886230469,
172
+ "loss": 0.1742,
173
+ "rewards/accuracies": 0.8912500143051147,
174
+ "rewards/chosen": 0.06210971996188164,
175
+ "rewards/margins": 6.645627498626709,
176
+ "rewards/rejected": -6.583518028259277,
177
+ "step": 800
178
+ },
179
+ {
180
+ "epoch": 0.3117692907248636,
181
+ "eval_logits/chosen": -16.51410484313965,
182
+ "eval_logits/rejected": -16.480632781982422,
183
+ "eval_logps/chosen": -338.15936279296875,
184
+ "eval_logps/rejected": -315.7767333984375,
185
+ "eval_loss": 0.023502787575125694,
186
+ "eval_rewards/accuracies": 0.995192289352417,
187
+ "eval_rewards/chosen": 0.34993284940719604,
188
+ "eval_rewards/margins": 5.236622333526611,
189
+ "eval_rewards/rejected": -4.886689186096191,
190
+ "eval_runtime": 183.2137,
191
+ "eval_samples_per_second": 1.135,
192
+ "eval_steps_per_second": 0.568,
193
+ "step": 800
194
+ },
195
+ {
196
+ "epoch": 0.35074045206547155,
197
+ "grad_norm": 0.3401203453540802,
198
+ "learning_rate": 0.0004893242222407031,
199
+ "logits/chosen": -19.789016723632812,
200
+ "logits/rejected": -18.94336700439453,
201
+ "logps/chosen": -343.6241760253906,
202
+ "logps/rejected": -323.7110595703125,
203
+ "loss": 0.1681,
204
+ "rewards/accuracies": 0.9087499976158142,
205
+ "rewards/chosen": 0.2733341455459595,
206
+ "rewards/margins": 6.843422889709473,
207
+ "rewards/rejected": -6.5700883865356445,
208
+ "step": 900
209
+ },
210
+ {
211
+ "epoch": 0.3897116134060795,
212
+ "grad_norm": 1.2149152755737305,
213
+ "learning_rate": 0.00048608674270511344,
214
+ "logits/chosen": -19.847705841064453,
215
+ "logits/rejected": -18.942218780517578,
216
+ "logps/chosen": -359.46551513671875,
217
+ "logps/rejected": -334.84136962890625,
218
+ "loss": 0.1605,
219
+ "rewards/accuracies": 0.90625,
220
+ "rewards/chosen": 0.1338280439376831,
221
+ "rewards/margins": 7.973731517791748,
222
+ "rewards/rejected": -7.839903354644775,
223
+ "step": 1000
224
+ },
225
+ {
226
+ "epoch": 0.3897116134060795,
227
+ "eval_logits/chosen": -16.609466552734375,
228
+ "eval_logits/rejected": -16.567523956298828,
229
+ "eval_logps/chosen": -334.4723205566406,
230
+ "eval_logps/rejected": -314.89141845703125,
231
+ "eval_loss": 0.010429470799863338,
232
+ "eval_rewards/accuracies": 1.0,
233
+ "eval_rewards/chosen": 0.7186338901519775,
234
+ "eval_rewards/margins": 5.516786575317383,
235
+ "eval_rewards/rejected": -4.798152446746826,
236
+ "eval_runtime": 183.9441,
237
+ "eval_samples_per_second": 1.131,
238
+ "eval_steps_per_second": 0.565,
239
+ "step": 1000
240
+ },
241
+ {
242
+ "epoch": 0.4286827747466875,
243
+ "grad_norm": 15.479838371276855,
244
+ "learning_rate": 0.00048243486584988133,
245
+ "logits/chosen": -19.61993408203125,
246
+ "logits/rejected": -18.709421157836914,
247
+ "logps/chosen": -361.1015625,
248
+ "logps/rejected": -337.32342529296875,
249
+ "loss": 0.173,
250
+ "rewards/accuracies": 0.9037500023841858,
251
+ "rewards/chosen": -0.6482878923416138,
252
+ "rewards/margins": 7.439374923706055,
253
+ "rewards/rejected": -8.087662696838379,
254
+ "step": 1100
255
+ },
256
+ {
257
+ "epoch": 0.4676539360872954,
258
+ "grad_norm": 2.251624822616577,
259
+ "learning_rate": 0.00047837500172566503,
260
+ "logits/chosen": -19.768835067749023,
261
+ "logits/rejected": -18.762409210205078,
262
+ "logps/chosen": -365.1903076171875,
263
+ "logps/rejected": -328.1022644042969,
264
+ "loss": 0.19,
265
+ "rewards/accuracies": 0.9200000166893005,
266
+ "rewards/chosen": -0.6630336046218872,
267
+ "rewards/margins": 6.923609733581543,
268
+ "rewards/rejected": -7.586642265319824,
269
+ "step": 1200
270
+ },
271
+ {
272
+ "epoch": 0.4676539360872954,
273
+ "eval_logits/chosen": -16.668439865112305,
274
+ "eval_logits/rejected": -16.63960075378418,
275
+ "eval_logps/chosen": -348.08587646484375,
276
+ "eval_logps/rejected": -331.30828857421875,
277
+ "eval_loss": 0.013935078866779804,
278
+ "eval_rewards/accuracies": 1.0,
279
+ "eval_rewards/chosen": -0.6427211761474609,
280
+ "eval_rewards/margins": 5.797122478485107,
281
+ "eval_rewards/rejected": -6.43984317779541,
282
+ "eval_runtime": 183.93,
283
+ "eval_samples_per_second": 1.131,
284
+ "eval_steps_per_second": 0.565,
285
+ "step": 1200
286
+ },
287
+ {
288
+ "epoch": 0.5066250974279034,
289
+ "grad_norm": 2.569606304168701,
290
+ "learning_rate": 0.00047391427651325704,
291
+ "logits/chosen": -20.112619400024414,
292
+ "logits/rejected": -19.097084045410156,
293
+ "logps/chosen": -361.98895263671875,
294
+ "logps/rejected": -335.0581970214844,
295
+ "loss": 0.1348,
296
+ "rewards/accuracies": 0.9312499761581421,
297
+ "rewards/chosen": -0.5764763355255127,
298
+ "rewards/margins": 7.914137363433838,
299
+ "rewards/rejected": -8.49061393737793,
300
+ "step": 1300
301
+ },
302
+ {
303
+ "epoch": 0.5455962587685113,
304
+ "grad_norm": 1.0773099660873413,
305
+ "learning_rate": 0.00046906052001517164,
306
+ "logits/chosen": -20.002775192260742,
307
+ "logits/rejected": -19.024526596069336,
308
+ "logps/chosen": -369.9784851074219,
309
+ "logps/rejected": -346.8582763671875,
310
+ "loss": 0.1693,
311
+ "rewards/accuracies": 0.9237499833106995,
312
+ "rewards/chosen": -1.2993569374084473,
313
+ "rewards/margins": 8.027504920959473,
314
+ "rewards/rejected": -9.326862335205078,
315
+ "step": 1400
316
+ },
317
+ {
318
+ "epoch": 0.5455962587685113,
319
+ "eval_logits/chosen": -16.792936325073242,
320
+ "eval_logits/rejected": -16.73811912536621,
321
+ "eval_logps/chosen": -335.2118835449219,
322
+ "eval_logps/rejected": -317.7545471191406,
323
+ "eval_loss": 0.019140072166919708,
324
+ "eval_rewards/accuracies": 1.0,
325
+ "eval_rewards/chosen": 0.6446801424026489,
326
+ "eval_rewards/margins": 5.729149341583252,
327
+ "eval_rewards/rejected": -5.084469318389893,
328
+ "eval_runtime": 183.9981,
329
+ "eval_samples_per_second": 1.13,
330
+ "eval_steps_per_second": 0.565,
331
+ "step": 1400
332
+ },
333
+ {
334
+ "epoch": 0.5845674201091192,
335
+ "grad_norm": 3.4432373046875,
336
+ "learning_rate": 0.00046382225191218373,
337
+ "logits/chosen": -20.071659088134766,
338
+ "logits/rejected": -19.052783966064453,
339
+ "logps/chosen": -363.5613708496094,
340
+ "logps/rejected": -345.07879638671875,
341
+ "loss": 0.1425,
342
+ "rewards/accuracies": 0.9312499761581421,
343
+ "rewards/chosen": -0.769163966178894,
344
+ "rewards/margins": 8.191140174865723,
345
+ "rewards/rejected": -8.96030330657959,
346
+ "step": 1500
347
+ },
348
+ {
349
+ "epoch": 0.6235385814497272,
350
+ "grad_norm": 1.9013152122497559,
351
+ "learning_rate": 0.0004582086668089399,
352
+ "logits/chosen": -19.665212631225586,
353
+ "logits/rejected": -18.75629234313965,
354
+ "logps/chosen": -367.5525817871094,
355
+ "logps/rejected": -342.85516357421875,
356
+ "loss": 0.1263,
357
+ "rewards/accuracies": 0.9325000047683716,
358
+ "rewards/chosen": -0.8581268191337585,
359
+ "rewards/margins": 8.124137878417969,
360
+ "rewards/rejected": -8.982264518737793,
361
+ "step": 1600
362
+ },
363
+ {
364
+ "epoch": 0.6235385814497272,
365
+ "eval_logits/chosen": -16.445354461669922,
366
+ "eval_logits/rejected": -16.387128829956055,
367
+ "eval_logps/chosen": -327.33563232421875,
368
+ "eval_logps/rejected": -312.2492980957031,
369
+ "eval_loss": 0.0060209427028894424,
370
+ "eval_rewards/accuracies": 1.0,
371
+ "eval_rewards/chosen": 1.4323018789291382,
372
+ "eval_rewards/margins": 5.966245174407959,
373
+ "eval_rewards/rejected": -4.5339436531066895,
374
+ "eval_runtime": 184.0733,
375
+ "eval_samples_per_second": 1.13,
376
+ "eval_steps_per_second": 0.565,
377
+ "step": 1600
378
+ },
379
+ {
380
+ "epoch": 0.6625097427903351,
381
+ "grad_norm": 0.8941299915313721,
382
+ "learning_rate": 0.0004522296180948922,
383
+ "logits/chosen": -19.424589157104492,
384
+ "logits/rejected": -18.495784759521484,
385
+ "logps/chosen": -374.550537109375,
386
+ "logps/rejected": -353.1033630371094,
387
+ "loss": 0.125,
388
+ "rewards/accuracies": 0.9387500286102295,
389
+ "rewards/chosen": -0.5644223690032959,
390
+ "rewards/margins": 8.560413360595703,
391
+ "rewards/rejected": -9.124835014343262,
392
+ "step": 1700
393
+ },
394
+ {
395
+ "epoch": 0.7014809041309431,
396
+ "grad_norm": 1.1238608360290527,
397
+ "learning_rate": 0.00044589560064888347,
398
+ "logits/chosen": -19.239290237426758,
399
+ "logits/rejected": -18.375873565673828,
400
+ "logps/chosen": -369.72027587890625,
401
+ "logps/rejected": -349.2558898925781,
402
+ "loss": 0.1232,
403
+ "rewards/accuracies": 0.9375,
404
+ "rewards/chosen": -0.9773778319358826,
405
+ "rewards/margins": 8.248481750488281,
406
+ "rewards/rejected": -9.225860595703125,
407
+ "step": 1800
408
+ },
409
+ {
410
+ "epoch": 0.7014809041309431,
411
+ "eval_logits/chosen": -16.073932647705078,
412
+ "eval_logits/rejected": -16.034406661987305,
413
+ "eval_logps/chosen": -343.9805908203125,
414
+ "eval_logps/rejected": -342.4121398925781,
415
+ "eval_loss": 0.005078889429569244,
416
+ "eval_rewards/accuracies": 1.0,
417
+ "eval_rewards/chosen": -0.23218610882759094,
418
+ "eval_rewards/margins": 7.318041801452637,
419
+ "eval_rewards/rejected": -7.550227642059326,
420
+ "eval_runtime": 183.2955,
421
+ "eval_samples_per_second": 1.135,
422
+ "eval_steps_per_second": 0.567,
423
+ "step": 1800
424
+ },
425
+ {
426
+ "epoch": 0.7404520654715511,
427
+ "grad_norm": 3.126462697982788,
428
+ "learning_rate": 0.00043921773241774185,
429
+ "logits/chosen": -19.31768226623535,
430
+ "logits/rejected": -18.37215232849121,
431
+ "logps/chosen": -348.50390625,
432
+ "logps/rejected": -336.8278503417969,
433
+ "loss": 0.0923,
434
+ "rewards/accuracies": 0.9587500095367432,
435
+ "rewards/chosen": -0.21741238236427307,
436
+ "rewards/margins": 8.6088228225708,
437
+ "rewards/rejected": -8.826234817504883,
438
+ "step": 1900
439
+ },
440
+ {
441
+ "epoch": 0.779423226812159,
442
+ "grad_norm": 0.6362993121147156,
443
+ "learning_rate": 0.0004322077349012186,
444
+ "logits/chosen": -19.6059513092041,
445
+ "logits/rejected": -18.7484073638916,
446
+ "logps/chosen": -359.93206787109375,
447
+ "logps/rejected": -353.2133483886719,
448
+ "loss": 0.0983,
449
+ "rewards/accuracies": 0.949999988079071,
450
+ "rewards/chosen": -1.2052465677261353,
451
+ "rewards/margins": 8.998382568359375,
452
+ "rewards/rejected": -10.203628540039062,
453
+ "step": 2000
454
+ },
455
+ {
456
+ "epoch": 0.779423226812159,
457
+ "eval_logits/chosen": -16.276912689208984,
458
+ "eval_logits/rejected": -16.23944854736328,
459
+ "eval_logps/chosen": -340.11883544921875,
460
+ "eval_logps/rejected": -336.9291687011719,
461
+ "eval_loss": 0.003970090765506029,
462
+ "eval_rewards/accuracies": 1.0,
463
+ "eval_rewards/chosen": 0.15398484468460083,
464
+ "eval_rewards/margins": 7.155914783477783,
465
+ "eval_rewards/rejected": -7.001929759979248,
466
+ "eval_runtime": 184.065,
467
+ "eval_samples_per_second": 1.13,
468
+ "eval_steps_per_second": 0.565,
469
+ "step": 2000
470
+ },
471
+ {
472
+ "epoch": 0.818394388152767,
473
+ "grad_norm": 3.1369752883911133,
474
+ "learning_rate": 0.00042487791257752493,
475
+ "logits/chosen": -19.724271774291992,
476
+ "logits/rejected": -18.804405212402344,
477
+ "logps/chosen": -371.5325012207031,
478
+ "logps/rejected": -357.1531677246094,
479
+ "loss": 0.1049,
480
+ "rewards/accuracies": 0.9512500166893005,
481
+ "rewards/chosen": -1.3020814657211304,
482
+ "rewards/margins": 8.643760681152344,
483
+ "rewards/rejected": -9.945842742919922,
484
+ "step": 2100
485
+ },
486
+ {
487
+ "epoch": 0.857365549493375,
488
+ "grad_norm": 1.0814727544784546,
489
+ "learning_rate": 0.0004172411313055802,
490
+ "logits/chosen": -19.90325164794922,
491
+ "logits/rejected": -19.001785278320312,
492
+ "logps/chosen": -376.0804748535156,
493
+ "logps/rejected": -359.4775390625,
494
+ "loss": 0.0924,
495
+ "rewards/accuracies": 0.9624999761581421,
496
+ "rewards/chosen": -0.4394536316394806,
497
+ "rewards/margins": 8.659097671508789,
498
+ "rewards/rejected": -9.098551750183105,
499
+ "step": 2200
500
+ },
501
+ {
502
+ "epoch": 0.857365549493375,
503
+ "eval_logits/chosen": -16.71300506591797,
504
+ "eval_logits/rejected": -16.685504913330078,
505
+ "eval_logps/chosen": -334.0523986816406,
506
+ "eval_logps/rejected": -324.8212890625,
507
+ "eval_loss": 0.00510050356388092,
508
+ "eval_rewards/accuracies": 1.0,
509
+ "eval_rewards/chosen": 0.760625958442688,
510
+ "eval_rewards/margins": 6.551769733428955,
511
+ "eval_rewards/rejected": -5.791143417358398,
512
+ "eval_runtime": 183.3621,
513
+ "eval_samples_per_second": 1.134,
514
+ "eval_steps_per_second": 0.567,
515
+ "step": 2200
516
+ },
517
+ {
518
+ "epoch": 0.8963367108339828,
519
+ "grad_norm": 2.7771525382995605,
520
+ "learning_rate": 0.0004093107957418828,
521
+ "logits/chosen": -19.735408782958984,
522
+ "logits/rejected": -18.8337459564209,
523
+ "logps/chosen": -367.6742858886719,
524
+ "logps/rejected": -356.5105895996094,
525
+ "loss": 0.0789,
526
+ "rewards/accuracies": 0.9612500071525574,
527
+ "rewards/chosen": -1.1817975044250488,
528
+ "rewards/margins": 9.191315650939941,
529
+ "rewards/rejected": -10.373113632202148,
530
+ "step": 2300
531
+ },
532
+ {
533
+ "epoch": 0.9353078721745908,
534
+ "grad_norm": 0.0016880702460184693,
535
+ "learning_rate": 0.0004011008258116425,
536
+ "logits/chosen": -19.32187271118164,
537
+ "logits/rejected": -18.563722610473633,
538
+ "logps/chosen": -373.0425109863281,
539
+ "logps/rejected": -362.72589111328125,
540
+ "loss": 0.0848,
541
+ "rewards/accuracies": 0.9649999737739563,
542
+ "rewards/chosen": -2.054009437561035,
543
+ "rewards/margins": 9.20674991607666,
544
+ "rewards/rejected": -11.260760307312012,
545
+ "step": 2400
546
+ },
547
+ {
548
+ "epoch": 0.9353078721745908,
549
+ "eval_logits/chosen": -16.159223556518555,
550
+ "eval_logits/rejected": -16.161327362060547,
551
+ "eval_logps/chosen": -337.83880615234375,
552
+ "eval_logps/rejected": -339.8623046875,
553
+ "eval_loss": 0.0020727256778627634,
554
+ "eval_rewards/accuracies": 1.0,
555
+ "eval_rewards/chosen": 0.38198867440223694,
556
+ "eval_rewards/margins": 7.677234172821045,
557
+ "eval_rewards/rejected": -7.295245170593262,
558
+ "eval_runtime": 183.8954,
559
+ "eval_samples_per_second": 1.131,
560
+ "eval_steps_per_second": 0.566,
561
+ "step": 2400
562
+ },
563
+ {
564
+ "epoch": 0.9742790335151987,
565
+ "grad_norm": 1.488077998161316,
566
+ "learning_rate": 0.00039262563227547396,
567
+ "logits/chosen": -19.280136108398438,
568
+ "logits/rejected": -18.451894760131836,
569
+ "logps/chosen": -376.1581115722656,
570
+ "logps/rejected": -366.6534118652344,
571
+ "loss": 0.0662,
572
+ "rewards/accuracies": 0.9725000262260437,
573
+ "rewards/chosen": -1.176061987876892,
574
+ "rewards/margins": 8.959639549255371,
575
+ "rewards/rejected": -10.135702133178711,
576
+ "step": 2500
577
+ },
578
+ {
579
+ "epoch": 1.0132501948558068,
580
+ "grad_norm": 2.0352509021759033,
581
+ "learning_rate": 0.0003839000914345393,
582
+ "logits/chosen": -19.330242156982422,
583
+ "logits/rejected": -18.416221618652344,
584
+ "logps/chosen": -373.55523681640625,
585
+ "logps/rejected": -357.8737487792969,
586
+ "loss": 0.0373,
587
+ "rewards/accuracies": 0.9837499856948853,
588
+ "rewards/chosen": -0.7969868183135986,
589
+ "rewards/margins": 9.221331596374512,
590
+ "rewards/rejected": -10.018318176269531,
591
+ "step": 2600
592
+ },
593
+ {
594
+ "epoch": 1.0132501948558068,
595
+ "eval_logits/chosen": -15.664155006408691,
596
+ "eval_logits/rejected": -15.684382438659668,
597
+ "eval_logps/chosen": -348.93939208984375,
598
+ "eval_logps/rejected": -357.6224365234375,
599
+ "eval_loss": 0.001914023538120091,
600
+ "eval_rewards/accuracies": 1.0,
601
+ "eval_rewards/chosen": -0.7280683517456055,
602
+ "eval_rewards/margins": 8.34318733215332,
603
+ "eval_rewards/rejected": -9.071255683898926,
604
+ "eval_runtime": 183.3393,
605
+ "eval_samples_per_second": 1.135,
606
+ "eval_steps_per_second": 0.567,
607
+ "step": 2600
608
+ },
609
+ {
610
+ "epoch": 1.0522213561964147,
611
+ "grad_norm": 1.1277427673339844,
612
+ "learning_rate": 0.00037493951901853797,
613
+ "logits/chosen": -19.345949172973633,
614
+ "logits/rejected": -18.49242401123047,
615
+ "logps/chosen": -365.28350830078125,
616
+ "logps/rejected": -361.580810546875,
617
+ "loss": 0.021,
618
+ "rewards/accuracies": 0.9937499761581421,
619
+ "rewards/chosen": -0.9318392872810364,
620
+ "rewards/margins": 9.834463119506836,
621
+ "rewards/rejected": -10.766302108764648,
622
+ "step": 2700
623
+ },
624
+ {
625
+ "epoch": 1.0911925175370225,
626
+ "grad_norm": 0.07178483158349991,
627
+ "learning_rate": 0.00036575964330237904,
628
+ "logits/chosen": -19.510316848754883,
629
+ "logits/rejected": -18.55198860168457,
630
+ "logps/chosen": -367.44598388671875,
631
+ "logps/rejected": -363.74078369140625,
632
+ "loss": 0.0151,
633
+ "rewards/accuracies": 0.9937499761581421,
634
+ "rewards/chosen": -1.267324447631836,
635
+ "rewards/margins": 10.227116584777832,
636
+ "rewards/rejected": -11.494441032409668,
637
+ "step": 2800
638
+ },
639
+ {
640
+ "epoch": 1.0911925175370225,
641
+ "eval_logits/chosen": -15.850824356079102,
642
+ "eval_logits/rejected": -15.864118576049805,
643
+ "eval_logps/chosen": -359.7252502441406,
644
+ "eval_logps/rejected": -381.7374572753906,
645
+ "eval_loss": 0.0014377026818692684,
646
+ "eval_rewards/accuracies": 1.0,
647
+ "eval_rewards/chosen": -1.8066591024398804,
648
+ "eval_rewards/margins": 9.676095962524414,
649
+ "eval_rewards/rejected": -11.482755661010742,
650
+ "eval_runtime": 183.4769,
651
+ "eval_samples_per_second": 1.134,
652
+ "eval_steps_per_second": 0.567,
653
+ "step": 2800
654
+ },
655
+ {
656
+ "epoch": 1.1301636788776306,
657
+ "grad_norm": 1.4340250492095947,
658
+ "learning_rate": 0.00035637657749872255,
659
+ "logits/chosen": -19.317781448364258,
660
+ "logits/rejected": -18.438331604003906,
661
+ "logps/chosen": -375.9151306152344,
662
+ "logps/rejected": -366.2434387207031,
663
+ "loss": 0.0137,
664
+ "rewards/accuracies": 0.9962499737739563,
665
+ "rewards/chosen": -2.468919277191162,
666
+ "rewards/margins": 10.26855754852295,
667
+ "rewards/rejected": -12.737476348876953,
668
+ "step": 2900
669
+ },
670
+ {
671
+ "epoch": 1.1691348402182384,
672
+ "grad_norm": 0.2114144116640091,
673
+ "learning_rate": 0.00034680679147484916,
674
+ "logits/chosen": -19.166513442993164,
675
+ "logits/rejected": -18.25996208190918,
676
+ "logps/chosen": -371.7158508300781,
677
+ "logps/rejected": -375.24481201171875,
678
+ "loss": 0.0177,
679
+ "rewards/accuracies": 0.9925000071525574,
680
+ "rewards/chosen": -1.933343768119812,
681
+ "rewards/margins": 10.125692367553711,
682
+ "rewards/rejected": -12.059035301208496,
683
+ "step": 3000
684
+ },
685
+ {
686
+ "epoch": 1.1691348402182384,
687
+ "eval_logits/chosen": -16.009708404541016,
688
+ "eval_logits/rejected": -15.956798553466797,
689
+ "eval_logps/chosen": -332.91302490234375,
690
+ "eval_logps/rejected": -345.1927490234375,
691
+ "eval_loss": 0.0008918559760786593,
692
+ "eval_rewards/accuracies": 1.0,
693
+ "eval_rewards/chosen": 0.874567449092865,
694
+ "eval_rewards/margins": 8.702858924865723,
695
+ "eval_rewards/rejected": -7.8282904624938965,
696
+ "eval_runtime": 184.0136,
697
+ "eval_samples_per_second": 1.13,
698
+ "eval_steps_per_second": 0.565,
699
+ "step": 3000
700
+ },
701
+ {
702
+ "epoch": 1.2081060015588465,
703
+ "grad_norm": 1.6861902475357056,
704
+ "learning_rate": 0.00033706708284350227,
705
+ "logits/chosen": -19.25012969970703,
706
+ "logits/rejected": -18.424623489379883,
707
+ "logps/chosen": -372.6192932128906,
708
+ "logps/rejected": -379.19549560546875,
709
+ "loss": 0.0209,
710
+ "rewards/accuracies": 0.9950000047683716,
711
+ "rewards/chosen": -1.3639909029006958,
712
+ "rewards/margins": 10.262293815612793,
713
+ "rewards/rejected": -11.626285552978516,
714
+ "step": 3100
715
+ },
716
+ {
717
+ "epoch": 1.2470771628994544,
718
+ "grad_norm": 0.16628094017505646,
719
+ "learning_rate": 0.00032717454747844735,
720
+ "logits/chosen": -19.219282150268555,
721
+ "logits/rejected": -18.238243103027344,
722
+ "logps/chosen": -365.1196594238281,
723
+ "logps/rejected": -364.3663330078125,
724
+ "loss": 0.0168,
725
+ "rewards/accuracies": 0.9912499785423279,
726
+ "rewards/chosen": -1.1836423873901367,
727
+ "rewards/margins": 10.353758811950684,
728
+ "rewards/rejected": -11.53740119934082,
729
+ "step": 3200
730
+ },
731
+ {
732
+ "epoch": 1.2470771628994544,
733
+ "eval_logits/chosen": -15.836195945739746,
734
+ "eval_logits/rejected": -15.786764144897461,
735
+ "eval_logps/chosen": -340.96075439453125,
736
+ "eval_logps/rejected": -357.6408996582031,
737
+ "eval_loss": 0.0008726614178158343,
738
+ "eval_rewards/accuracies": 1.0,
739
+ "eval_rewards/chosen": 0.06979300826787949,
740
+ "eval_rewards/margins": 9.142892837524414,
741
+ "eval_rewards/rejected": -9.073100090026855,
742
+ "eval_runtime": 183.8457,
743
+ "eval_samples_per_second": 1.131,
744
+ "eval_steps_per_second": 0.566,
745
+ "step": 3200
746
+ },
747
+ {
748
+ "epoch": 1.2860483242400624,
749
+ "grad_norm": 0.0003638103371486068,
750
+ "learning_rate": 0.00031714654950649947,
751
+ "logits/chosen": -19.43454933166504,
752
+ "logits/rejected": -18.49201011657715,
753
+ "logps/chosen": -371.1318664550781,
754
+ "logps/rejected": -372.17938232421875,
755
+ "loss": 0.0163,
756
+ "rewards/accuracies": 0.9912499785423279,
757
+ "rewards/chosen": -1.4684104919433594,
758
+ "rewards/margins": 10.204544067382812,
759
+ "rewards/rejected": -11.672954559326172,
760
+ "step": 3300
761
+ },
762
+ {
763
+ "epoch": 1.3250194855806703,
764
+ "grad_norm": 0.0004742901655845344,
765
+ "learning_rate": 0.0003070006908286945,
766
+ "logits/chosen": -19.53050994873047,
767
+ "logits/rejected": -18.605304718017578,
768
+ "logps/chosen": -383.9638671875,
769
+ "logps/rejected": -382.642333984375,
770
+ "loss": 0.0159,
771
+ "rewards/accuracies": 0.9925000071525574,
772
+ "rewards/chosen": -1.5249592065811157,
773
+ "rewards/margins": 10.782021522521973,
774
+ "rewards/rejected": -12.306981086730957,
775
+ "step": 3400
776
+ },
777
+ {
778
+ "epoch": 1.3250194855806703,
779
+ "eval_logits/chosen": -15.98876953125,
780
+ "eval_logits/rejected": -15.942709922790527,
781
+ "eval_logps/chosen": -337.0641174316406,
782
+ "eval_logps/rejected": -357.69342041015625,
783
+ "eval_loss": 0.0007918892079032958,
784
+ "eval_rewards/accuracies": 1.0,
785
+ "eval_rewards/chosen": 0.4594591557979584,
786
+ "eval_rewards/margins": 9.53781795501709,
787
+ "eval_rewards/rejected": -9.078359603881836,
788
+ "eval_runtime": 183.8285,
789
+ "eval_samples_per_second": 1.131,
790
+ "eval_steps_per_second": 0.566,
791
+ "step": 3400
792
+ },
793
+ {
794
+ "epoch": 1.3639906469212781,
795
+ "grad_norm": 3.363085985183716,
796
+ "learning_rate": 0.0002967547802240997,
797
+ "logits/chosen": -19.533775329589844,
798
+ "logits/rejected": -18.48381233215332,
799
+ "logps/chosen": -369.41827392578125,
800
+ "logps/rejected": -367.9228820800781,
801
+ "loss": 0.0152,
802
+ "rewards/accuracies": 0.9950000047683716,
803
+ "rewards/chosen": -0.9302732944488525,
804
+ "rewards/margins": 10.337385177612305,
805
+ "rewards/rejected": -11.267658233642578,
806
+ "step": 3500
807
+ },
808
+ {
809
+ "epoch": 1.4029618082618862,
810
+ "grad_norm": 0.014138066209852695,
811
+ "learning_rate": 0.00028642680209049715,
812
+ "logits/chosen": -19.569242477416992,
813
+ "logits/rejected": -18.72142791748047,
814
+ "logps/chosen": -355.16339111328125,
815
+ "logps/rejected": -369.4004211425781,
816
+ "loss": 0.0106,
817
+ "rewards/accuracies": 0.9987499713897705,
818
+ "rewards/chosen": -0.6573285460472107,
819
+ "rewards/margins": 10.500960350036621,
820
+ "rewards/rejected": -11.15829086303711,
821
+ "step": 3600
822
+ },
823
+ {
824
+ "epoch": 1.4029618082618862,
825
+ "eval_logits/chosen": -16.05227279663086,
826
+ "eval_logits/rejected": -16.000469207763672,
827
+ "eval_logps/chosen": -334.4986572265625,
828
+ "eval_logps/rejected": -352.8683166503906,
829
+ "eval_loss": 0.0009424146264791489,
830
+ "eval_rewards/accuracies": 1.0,
831
+ "eval_rewards/chosen": 0.7160030603408813,
832
+ "eval_rewards/margins": 9.311846733093262,
833
+ "eval_rewards/rejected": -8.595843315124512,
834
+ "eval_runtime": 183.6848,
835
+ "eval_samples_per_second": 1.132,
836
+ "eval_steps_per_second": 0.566,
837
+ "step": 3600
838
+ },
839
+ {
840
+ "epoch": 1.4419329696024943,
841
+ "grad_norm": 0.009088713675737381,
842
+ "learning_rate": 0.00027603488487680684,
843
+ "logits/chosen": -19.91989517211914,
844
+ "logits/rejected": -18.872535705566406,
845
+ "logps/chosen": -372.661865234375,
846
+ "logps/rejected": -367.7705078125,
847
+ "loss": 0.0184,
848
+ "rewards/accuracies": 0.9937499761581421,
849
+ "rewards/chosen": -1.1601972579956055,
850
+ "rewards/margins": 10.311330795288086,
851
+ "rewards/rejected": -11.471526145935059,
852
+ "step": 3700
853
+ },
854
+ {
855
+ "epoch": 1.4809041309431021,
856
+ "grad_norm": 0.010365006513893604,
857
+ "learning_rate": 0.00026559726926266204,
858
+ "logits/chosen": -19.595075607299805,
859
+ "logits/rejected": -18.658893585205078,
860
+ "logps/chosen": -390.93206787109375,
861
+ "logps/rejected": -393.7774963378906,
862
+ "loss": 0.0095,
863
+ "rewards/accuracies": 0.9950000047683716,
864
+ "rewards/chosen": -2.7839019298553467,
865
+ "rewards/margins": 10.808964729309082,
866
+ "rewards/rejected": -13.592867851257324,
867
+ "step": 3800
868
+ },
869
+ {
870
+ "epoch": 1.4809041309431021,
871
+ "eval_logits/chosen": -15.648859977722168,
872
+ "eval_logits/rejected": -15.611715316772461,
873
+ "eval_logps/chosen": -346.69561767578125,
874
+ "eval_logps/rejected": -376.822265625,
875
+ "eval_loss": 0.0005593308596871793,
876
+ "eval_rewards/accuracies": 1.0,
877
+ "eval_rewards/chosen": -0.5036950707435608,
878
+ "eval_rewards/margins": 10.487546920776367,
879
+ "eval_rewards/rejected": -10.991241455078125,
880
+ "eval_runtime": 184.1015,
881
+ "eval_samples_per_second": 1.13,
882
+ "eval_steps_per_second": 0.565,
883
+ "step": 3800
884
+ },
885
+ {
886
+ "epoch": 1.51987529228371,
887
+ "grad_norm": 0.12639912962913513,
888
+ "learning_rate": 0.00025513227614098707,
889
+ "logits/chosen": -19.499780654907227,
890
+ "logits/rejected": -18.518644332885742,
891
+ "logps/chosen": -366.645263671875,
892
+ "logps/rejected": -380.7431640625,
893
+ "loss": 0.0067,
894
+ "rewards/accuracies": 0.9975000023841858,
895
+ "rewards/chosen": -1.6605476140975952,
896
+ "rewards/margins": 10.881020545959473,
897
+ "rewards/rejected": -12.541565895080566,
898
+ "step": 3900
899
+ },
900
+ {
901
+ "epoch": 1.558846453624318,
902
+ "grad_norm": 0.32782620191574097,
903
+ "learning_rate": 0.00024465827445977964,
904
+ "logits/chosen": -19.350522994995117,
905
+ "logits/rejected": -18.31049346923828,
906
+ "logps/chosen": -380.0142822265625,
907
+ "logps/rejected": -377.0948791503906,
908
+ "loss": 0.0107,
909
+ "rewards/accuracies": 0.9962499737739563,
910
+ "rewards/chosen": -1.4461276531219482,
911
+ "rewards/margins": 10.850979804992676,
912
+ "rewards/rejected": -12.297107696533203,
913
+ "step": 4000
914
+ },
915
+ {
916
+ "epoch": 1.558846453624318,
917
+ "eval_logits/chosen": -15.838950157165527,
918
+ "eval_logits/rejected": -15.806854248046875,
919
+ "eval_logps/chosen": -336.4289245605469,
920
+ "eval_logps/rejected": -364.8307189941406,
921
+ "eval_loss": 0.00040981665370054543,
922
+ "eval_rewards/accuracies": 1.0,
923
+ "eval_rewards/chosen": 0.5229762196540833,
924
+ "eval_rewards/margins": 10.315059661865234,
925
+ "eval_rewards/rejected": -9.792083740234375,
926
+ "eval_runtime": 184.0487,
927
+ "eval_samples_per_second": 1.13,
928
+ "eval_steps_per_second": 0.565,
929
+ "step": 4000
930
+ },
931
+ {
932
+ "epoch": 1.597817614964926,
933
+ "grad_norm": 1.2895886898040771,
934
+ "learning_rate": 0.00023419364897954285,
935
+ "logits/chosen": -19.494075775146484,
936
+ "logits/rejected": -18.560813903808594,
937
+ "logps/chosen": -361.8075256347656,
938
+ "logps/rejected": -380.0975646972656,
939
+ "loss": 0.0108,
940
+ "rewards/accuracies": 0.9950000047683716,
941
+ "rewards/chosen": -1.1338391304016113,
942
+ "rewards/margins": 11.00627326965332,
943
+ "rewards/rejected": -12.140111923217773,
944
+ "step": 4100
945
+ },
946
+ {
947
+ "epoch": 1.6367887763055338,
948
+ "grad_norm": 0.00033555322443135083,
949
+ "learning_rate": 0.00022375676800296247,
950
+ "logits/chosen": -19.587282180786133,
951
+ "logits/rejected": -18.464643478393555,
952
+ "logps/chosen": -379.6680603027344,
953
+ "logps/rejected": -372.1777648925781,
954
+ "loss": 0.0124,
955
+ "rewards/accuracies": 0.9937499761581421,
956
+ "rewards/chosen": -1.0544594526290894,
957
+ "rewards/margins": 11.037619590759277,
958
+ "rewards/rejected": -12.092079162597656,
959
+ "step": 4200
960
+ },
961
+ {
962
+ "epoch": 1.6367887763055338,
963
+ "eval_logits/chosen": -15.83126163482666,
964
+ "eval_logits/rejected": -15.794379234313965,
965
+ "eval_logps/chosen": -330.6876525878906,
966
+ "eval_logps/rejected": -356.1661682128906,
967
+ "eval_loss": 0.0004220473056193441,
968
+ "eval_rewards/accuracies": 1.0,
969
+ "eval_rewards/chosen": 1.0971003770828247,
970
+ "eval_rewards/margins": 10.022727012634277,
971
+ "eval_rewards/rejected": -8.925626754760742,
972
+ "eval_runtime": 183.9983,
973
+ "eval_samples_per_second": 1.13,
974
+ "eval_steps_per_second": 0.565,
975
+ "step": 4200
976
+ },
977
+ {
978
+ "epoch": 1.6757599376461418,
979
+ "grad_norm": 1.2421733140945435,
980
+ "learning_rate": 0.00021336595113347144,
981
+ "logits/chosen": -19.402978897094727,
982
+ "logits/rejected": -18.351083755493164,
983
+ "logps/chosen": -370.1186828613281,
984
+ "logps/rejected": -374.5020446777344,
985
+ "loss": 0.009,
986
+ "rewards/accuracies": 0.9937499761581421,
987
+ "rewards/chosen": -1.1320860385894775,
988
+ "rewards/margins": 10.839487075805664,
989
+ "rewards/rejected": -11.971571922302246,
990
+ "step": 4300
991
+ },
992
+ {
993
+ "epoch": 1.71473109898675,
994
+ "grad_norm": 0.16441740095615387,
995
+ "learning_rate": 0.00020303943711929526,
996
+ "logits/chosen": -19.135257720947266,
997
+ "logits/rejected": -18.16560173034668,
998
+ "logps/chosen": -375.6964111328125,
999
+ "logps/rejected": -390.9460144042969,
1000
+ "loss": 0.021,
1001
+ "rewards/accuracies": 0.987500011920929,
1002
+ "rewards/chosen": -2.0195655822753906,
1003
+ "rewards/margins": 11.327723503112793,
1004
+ "rewards/rejected": -13.3472900390625,
1005
+ "step": 4400
1006
+ },
1007
+ {
1008
+ "epoch": 1.71473109898675,
1009
+ "eval_logits/chosen": -15.488088607788086,
1010
+ "eval_logits/rejected": -15.465119361877441,
1011
+ "eval_logps/chosen": -344.2342529296875,
1012
+ "eval_logps/rejected": -376.2817687988281,
1013
+ "eval_loss": 0.0005891394102945924,
1014
+ "eval_rewards/accuracies": 1.0,
1015
+ "eval_rewards/chosen": -0.25755709409713745,
1016
+ "eval_rewards/margins": 10.679630279541016,
1017
+ "eval_rewards/rejected": -10.937186241149902,
1018
+ "eval_runtime": 184.1206,
1019
+ "eval_samples_per_second": 1.13,
1020
+ "eval_steps_per_second": 0.565,
1021
+ "step": 4400
1022
+ },
1023
+ {
1024
+ "epoch": 1.7537022603273578,
1025
+ "grad_norm": 0.02925615757703781,
1026
+ "learning_rate": 0.00019279535183942101,
1027
+ "logits/chosen": -19.17304039001465,
1028
+ "logits/rejected": -18.184049606323242,
1029
+ "logps/chosen": -385.8622131347656,
1030
+ "logps/rejected": -391.7120361328125,
1031
+ "loss": 0.0083,
1032
+ "rewards/accuracies": 0.9937499761581421,
1033
+ "rewards/chosen": -1.9831608533859253,
1034
+ "rewards/margins": 11.25387954711914,
1035
+ "rewards/rejected": -13.237038612365723,
1036
+ "step": 4500
1037
+ },
1038
+ {
1039
+ "epoch": 1.7926734216679656,
1040
+ "grad_norm": 0.05078015476465225,
1041
+ "learning_rate": 0.00018265167648768259,
1042
+ "logits/chosen": -19.064685821533203,
1043
+ "logits/rejected": -18.237638473510742,
1044
+ "logps/chosen": -376.9311218261719,
1045
+ "logps/rejected": -387.96905517578125,
1046
+ "loss": 0.0061,
1047
+ "rewards/accuracies": 0.9950000047683716,
1048
+ "rewards/chosen": -2.660050392150879,
1049
+ "rewards/margins": 11.354269027709961,
1050
+ "rewards/rejected": -14.014320373535156,
1051
+ "step": 4600
1052
+ },
1053
+ {
1054
+ "epoch": 1.7926734216679656,
1055
+ "eval_logits/chosen": -15.26915454864502,
1056
+ "eval_logits/rejected": -15.260515213012695,
1057
+ "eval_logps/chosen": -355.8375549316406,
1058
+ "eval_logps/rejected": -390.41119384765625,
1059
+ "eval_loss": 0.0009181927889585495,
1060
+ "eval_rewards/accuracies": 1.0,
1061
+ "eval_rewards/chosen": -1.417887568473816,
1062
+ "eval_rewards/margins": 10.932249069213867,
1063
+ "eval_rewards/rejected": -12.350136756896973,
1064
+ "eval_runtime": 183.689,
1065
+ "eval_samples_per_second": 1.132,
1066
+ "eval_steps_per_second": 0.566,
1067
+ "step": 4600
1068
+ },
1069
+ {
1070
+ "epoch": 1.8316445830085737,
1071
+ "grad_norm": 0.2935536205768585,
1072
+ "learning_rate": 0.00017262621601080811,
1073
+ "logits/chosen": -19.163440704345703,
1074
+ "logits/rejected": -18.167402267456055,
1075
+ "logps/chosen": -379.1352233886719,
1076
+ "logps/rejected": -383.6081237792969,
1077
+ "loss": 0.0126,
1078
+ "rewards/accuracies": 0.9950000047683716,
1079
+ "rewards/chosen": -2.3724796772003174,
1080
+ "rewards/margins": 11.351487159729004,
1081
+ "rewards/rejected": -13.723965644836426,
1082
+ "step": 4700
1083
+ },
1084
+ {
1085
+ "epoch": 1.8706157443491818,
1086
+ "grad_norm": 0.02069302648305893,
1087
+ "learning_rate": 0.00016273656785582984,
1088
+ "logits/chosen": -18.975814819335938,
1089
+ "logits/rejected": -18.142276763916016,
1090
+ "logps/chosen": -383.4511413574219,
1091
+ "logps/rejected": -393.5997009277344,
1092
+ "loss": 0.0102,
1093
+ "rewards/accuracies": 0.9950000047683716,
1094
+ "rewards/chosen": -2.359323024749756,
1095
+ "rewards/margins": 11.0841064453125,
1096
+ "rewards/rejected": -13.443428993225098,
1097
+ "step": 4800
1098
+ },
1099
+ {
1100
+ "epoch": 1.8706157443491818,
1101
+ "eval_logits/chosen": -15.47636604309082,
1102
+ "eval_logits/rejected": -15.429464340209961,
1103
+ "eval_logps/chosen": -341.96356201171875,
1104
+ "eval_logps/rejected": -369.5177307128906,
1105
+ "eval_loss": 0.0004053489537909627,
1106
+ "eval_rewards/accuracies": 1.0,
1107
+ "eval_rewards/chosen": -0.030484315007925034,
1108
+ "eval_rewards/margins": 10.230300903320312,
1109
+ "eval_rewards/rejected": -10.260786056518555,
1110
+ "eval_runtime": 183.2032,
1111
+ "eval_samples_per_second": 1.135,
1112
+ "eval_steps_per_second": 0.568,
1113
+ "step": 4800
1114
+ },
1115
+ {
1116
+ "epoch": 1.9095869056897894,
1117
+ "grad_norm": 0.06283093243837357,
1118
+ "learning_rate": 0.00015300009108171347,
1119
+ "logits/chosen": -19.286584854125977,
1120
+ "logits/rejected": -18.341442108154297,
1121
+ "logps/chosen": -376.3005065917969,
1122
+ "logps/rejected": -383.4756774902344,
1123
+ "loss": 0.0044,
1124
+ "rewards/accuracies": 0.9987499713897705,
1125
+ "rewards/chosen": -1.3465908765792847,
1126
+ "rewards/margins": 11.345978736877441,
1127
+ "rewards/rejected": -12.692569732666016,
1128
+ "step": 4900
1129
+ },
1130
+ {
1131
+ "epoch": 1.9485580670303975,
1132
+ "grad_norm": 0.04220004007220268,
1133
+ "learning_rate": 0.00014343387588942392,
1134
+ "logits/chosen": -19.211626052856445,
1135
+ "logits/rejected": -18.220531463623047,
1136
+ "logps/chosen": -389.9902648925781,
1137
+ "logps/rejected": -397.5294494628906,
1138
+ "loss": 0.0057,
1139
+ "rewards/accuracies": 0.9962499737739563,
1140
+ "rewards/chosen": -1.984826922416687,
1141
+ "rewards/margins": 11.562482833862305,
1142
+ "rewards/rejected": -13.547309875488281,
1143
+ "step": 5000
1144
+ },
1145
+ {
1146
+ "epoch": 1.9485580670303975,
1147
+ "eval_logits/chosen": -15.311283111572266,
1148
+ "eval_logits/rejected": -15.284743309020996,
1149
+ "eval_logps/chosen": -350.1474304199219,
1150
+ "eval_logps/rejected": -386.7135314941406,
1151
+ "eval_loss": 0.00032912034657783806,
1152
+ "eval_rewards/accuracies": 1.0,
1153
+ "eval_rewards/chosen": -0.8488726019859314,
1154
+ "eval_rewards/margins": 11.131490707397461,
1155
+ "eval_rewards/rejected": -11.980363845825195,
1156
+ "eval_runtime": 183.6747,
1157
+ "eval_samples_per_second": 1.132,
1158
+ "eval_steps_per_second": 0.566,
1159
+ "step": 5000
1160
+ },
1161
+ {
1162
+ "epoch": 1.9875292283710055,
1163
+ "grad_norm": 0.05133282020688057,
1164
+ "learning_rate": 0.00013405471362391068,
1165
+ "logits/chosen": -19.146968841552734,
1166
+ "logits/rejected": -18.36100196838379,
1167
+ "logps/chosen": -375.5859680175781,
1168
+ "logps/rejected": -405.4987487792969,
1169
+ "loss": 0.0055,
1170
+ "rewards/accuracies": 0.9962499737739563,
1171
+ "rewards/chosen": -2.0062613487243652,
1172
+ "rewards/margins": 11.725434303283691,
1173
+ "rewards/rejected": -13.731695175170898,
1174
+ "step": 5100
1175
+ },
1176
+ {
1177
+ "epoch": 2.0265003897116136,
1178
+ "grad_norm": 0.003434022655710578,
1179
+ "learning_rate": 0.00012487906730066888,
1180
+ "logits/chosen": -19.30167579650879,
1181
+ "logits/rejected": -18.223770141601562,
1182
+ "logps/chosen": -376.9130859375,
1183
+ "logps/rejected": -380.2550354003906,
1184
+ "loss": 0.0025,
1185
+ "rewards/accuracies": 0.9975000023841858,
1186
+ "rewards/chosen": -1.750290870666504,
1187
+ "rewards/margins": 11.900991439819336,
1188
+ "rewards/rejected": -13.651283264160156,
1189
+ "step": 5200
1190
+ },
1191
+ {
1192
+ "epoch": 2.0265003897116136,
1193
+ "eval_logits/chosen": -15.498884201049805,
1194
+ "eval_logits/rejected": -15.481465339660645,
1195
+ "eval_logps/chosen": -349.6870422363281,
1196
+ "eval_logps/rejected": -385.3147888183594,
1197
+ "eval_loss": 0.0003257237549405545,
1198
+ "eval_rewards/accuracies": 1.0,
1199
+ "eval_rewards/chosen": -0.8028372526168823,
1200
+ "eval_rewards/margins": 11.037657737731934,
1201
+ "eval_rewards/rejected": -11.840494155883789,
1202
+ "eval_runtime": 183.7823,
1203
+ "eval_samples_per_second": 1.132,
1204
+ "eval_steps_per_second": 0.566,
1205
+ "step": 5200
1206
+ },
1207
+ {
1208
+ "epoch": 2.0654715510522212,
1209
+ "grad_norm": 0.026280131191015244,
1210
+ "learning_rate": 0.00011592304270860795,
1211
+ "logits/chosen": -19.42230796813965,
1212
+ "logits/rejected": -18.394390106201172,
1213
+ "logps/chosen": -400.98980712890625,
1214
+ "logps/rejected": -409.95098876953125,
1215
+ "loss": 0.005,
1216
+ "rewards/accuracies": 0.9962499737739563,
1217
+ "rewards/chosen": -2.090318202972412,
1218
+ "rewards/margins": 11.665027618408203,
1219
+ "rewards/rejected": -13.755346298217773,
1220
+ "step": 5300
1221
+ },
1222
+ {
1223
+ "epoch": 2.1044427123928293,
1224
+ "grad_norm": 0.00010202277189819142,
1225
+ "learning_rate": 0.00010720236013995221,
1226
+ "logits/chosen": -19.477603912353516,
1227
+ "logits/rejected": -18.485740661621094,
1228
+ "logps/chosen": -365.8956298828125,
1229
+ "logps/rejected": -397.5658264160156,
1230
+ "loss": 0.0015,
1231
+ "rewards/accuracies": 0.9987499713897705,
1232
+ "rewards/chosen": -1.8521977663040161,
1233
+ "rewards/margins": 12.37298583984375,
1234
+ "rewards/rejected": -14.225183486938477,
1235
+ "step": 5400
1236
+ },
1237
+ {
1238
+ "epoch": 2.1044427123928293,
1239
+ "eval_logits/chosen": -15.460063934326172,
1240
+ "eval_logits/rejected": -15.440217018127441,
1241
+ "eval_logps/chosen": -351.2416687011719,
1242
+ "eval_logps/rejected": -389.2104797363281,
1243
+ "eval_loss": 0.00034331483766436577,
1244
+ "eval_rewards/accuracies": 1.0,
1245
+ "eval_rewards/chosen": -0.958295464515686,
1246
+ "eval_rewards/margins": 11.27176570892334,
1247
+ "eval_rewards/rejected": -12.230061531066895,
1248
+ "eval_runtime": 183.6747,
1249
+ "eval_samples_per_second": 1.132,
1250
+ "eval_steps_per_second": 0.566,
1251
+ "step": 5400
1252
+ },
1253
+ {
1254
+ "epoch": 2.1434138737334374,
1255
+ "grad_norm": 0.005236598663032055,
1256
+ "learning_rate": 9.873232679679392e-05,
1257
+ "logits/chosen": -19.37502670288086,
1258
+ "logits/rejected": -18.3719482421875,
1259
+ "logps/chosen": -381.2720947265625,
1260
+ "logps/rejected": -403.01593017578125,
1261
+ "loss": 0.0024,
1262
+ "rewards/accuracies": 0.9975000023841858,
1263
+ "rewards/chosen": -2.047292709350586,
1264
+ "rewards/margins": 12.199724197387695,
1265
+ "rewards/rejected": -14.247018814086914,
1266
+ "step": 5500
1267
+ },
1268
+ {
1269
+ "epoch": 2.182385035074045,
1270
+ "grad_norm": 0.029128100723028183,
1271
+ "learning_rate": 9.052780992273379e-05,
1272
+ "logits/chosen": -19.223974227905273,
1273
+ "logits/rejected": -18.315275192260742,
1274
+ "logps/chosen": -378.5389404296875,
1275
+ "logps/rejected": -399.8840637207031,
1276
+ "loss": 0.0021,
1277
+ "rewards/accuracies": 0.9975000023841858,
1278
+ "rewards/chosen": -1.8436551094055176,
1279
+ "rewards/margins": 12.235860824584961,
1280
+ "rewards/rejected": -14.07951545715332,
1281
+ "step": 5600
1282
+ },
1283
+ {
1284
+ "epoch": 2.182385035074045,
1285
+ "eval_logits/chosen": -15.46290111541748,
1286
+ "eval_logits/rejected": -15.436098098754883,
1287
+ "eval_logps/chosen": -349.15423583984375,
1288
+ "eval_logps/rejected": -387.6983947753906,
1289
+ "eval_loss": 0.00030186420190148056,
1290
+ "eval_rewards/accuracies": 1.0,
1291
+ "eval_rewards/chosen": -0.7495561242103577,
1292
+ "eval_rewards/margins": 11.329290390014648,
1293
+ "eval_rewards/rejected": -12.078847885131836,
1294
+ "eval_runtime": 183.2993,
1295
+ "eval_samples_per_second": 1.135,
1296
+ "eval_steps_per_second": 0.567,
1297
+ "step": 5600
1298
+ },
1299
+ {
1300
+ "epoch": 2.221356196414653,
1301
+ "grad_norm": 0.011133521795272827,
1302
+ "learning_rate": 8.26032107067696e-05,
1303
+ "logits/chosen": -19.451305389404297,
1304
+ "logits/rejected": -18.38578224182129,
1305
+ "logps/chosen": -374.8850402832031,
1306
+ "logps/rejected": -388.83575439453125,
1307
+ "loss": 0.0014,
1308
+ "rewards/accuracies": 0.9987499713897705,
1309
+ "rewards/chosen": -1.683659315109253,
1310
+ "rewards/margins": 12.163559913635254,
1311
+ "rewards/rejected": -13.847216606140137,
1312
+ "step": 5700
1313
+ },
1314
+ {
1315
+ "epoch": 2.260327357755261,
1316
+ "grad_norm": 7.387495134025812e-05,
1317
+ "learning_rate": 7.497243900523937e-05,
1318
+ "logits/chosen": -19.474388122558594,
1319
+ "logits/rejected": -18.39251708984375,
1320
+ "logps/chosen": -382.3981628417969,
1321
+ "logps/rejected": -390.441162109375,
1322
+ "loss": 0.0017,
1323
+ "rewards/accuracies": 0.9987499713897705,
1324
+ "rewards/chosen": -1.4145100116729736,
1325
+ "rewards/margins": 11.843048095703125,
1326
+ "rewards/rejected": -13.257555961608887,
1327
+ "step": 5800
1328
+ },
1329
+ {
1330
+ "epoch": 2.260327357755261,
1331
+ "eval_logits/chosen": -15.475095748901367,
1332
+ "eval_logits/rejected": -15.438820838928223,
1333
+ "eval_logps/chosen": -347.4687805175781,
1334
+ "eval_logps/rejected": -385.78497314453125,
1335
+ "eval_loss": 0.0002713102730922401,
1336
+ "eval_rewards/accuracies": 1.0,
1337
+ "eval_rewards/chosen": -0.5810099244117737,
1338
+ "eval_rewards/margins": 11.306499481201172,
1339
+ "eval_rewards/rejected": -11.887508392333984,
1340
+ "eval_runtime": 183.4323,
1341
+ "eval_samples_per_second": 1.134,
1342
+ "eval_steps_per_second": 0.567,
1343
+ "step": 5800
1344
+ }
1345
+ ],
1346
+ "logging_steps": 100,
1347
+ "max_steps": 7698,
1348
+ "num_input_tokens_seen": 0,
1349
+ "num_train_epochs": 3,
1350
+ "save_steps": 200,
1351
+ "total_flos": 0.0,
1352
+ "train_batch_size": 2,
1353
+ "trial_name": null,
1354
+ "trial_params": null
1355
+ }
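
trainer_state.json records a preference-optimization (DPO-style) run: 5,800 of 7,698 planned steps over roughly 2.26 epochs, logging every 100 steps and evaluating every 200, with rewards/chosen, rewards/rejected and rewards/margins tracked alongside the loss. A small sketch for pulling the eval curve out of the file (local path assumed):

```python
# Sketch: print the eval loss and reward margin recorded every 200 steps in trainer_state.json.
import json

with open("trainer_state.json") as f:   # assumed local copy of the file above
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_loss" in entry:            # keep evaluation records only
        print(f'step {entry["step"]:5d}  '
              f'eval_loss {entry["eval_loss"]:.5f}  '
              f'reward margin {entry["eval_rewards/margins"]:.3f}')
```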
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff9a69f1507360b7e3abf8d20008aa5952976ad08fbbe6661f13498456902a1b
3
+ size 5560
vocab.json ADDED
The diff for this file is too large to render. See raw diff