paulrouge committed on
Commit
f48a1cb
1 Parent(s): 62b9e16

Upload 13 files

README.md CHANGED
@@ -1,3 +1,226 @@
1
  ---
2
- license: apache-2.0
3
  ---
1
  ---
2
+ library_name: peft
3
+ base_model: TheBloke/MythoMax-L2-13B-GPTQ
4
  ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Shared by [optional]:** [More Information Needed]
22
+ - **Model type:** [More Information Needed]
23
+ - **Language(s) (NLP):** [More Information Needed]
24
+ - **License:** [More Information Needed]
25
+ - **Finetuned from model [optional]:** [More Information Needed]
26
+
27
+ ### Model Sources [optional]
28
+
29
+ <!-- Provide the basic links for the model. -->
30
+
31
+ - **Repository:** [More Information Needed]
32
+ - **Paper [optional]:** [More Information Needed]
33
+ - **Demo [optional]:** [More Information Needed]
34
+
35
+ ## Uses
36
+
37
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
38
+
39
+ ### Direct Use
40
+
41
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
42
+
43
+ [More Information Needed]
44
+
45
+ ### Downstream Use [optional]
46
+
47
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
48
+
49
+ [More Information Needed]
50
+
51
+ ### Out-of-Scope Use
52
+
53
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
54
+
55
+ [More Information Needed]
56
+
57
+ ## Bias, Risks, and Limitations
58
+
59
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
60
+
61
+ [More Information Needed]
62
+
63
+ ### Recommendations
64
+
65
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
66
+
67
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
68
+
69
+ ## How to Get Started with the Model
70
+
71
+ Use the code below to get started with the model.
72
+
73
+ [More Information Needed]
74
+
75
+ ## Training Details
76
+
77
+ ### Training Data
78
+
79
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
80
+
81
+ [More Information Needed]
82
+
83
+ ### Training Procedure
84
+
85
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
86
+
87
+ #### Preprocessing [optional]
88
+
89
+ [More Information Needed]
90
+
91
+
92
+ #### Training Hyperparameters
93
+
94
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
95
+
96
+ #### Speeds, Sizes, Times [optional]
97
+
98
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
99
+
100
+ [More Information Needed]
101
+
102
+ ## Evaluation
103
+
104
+ <!-- This section describes the evaluation protocols and provides the results. -->
105
+
106
+ ### Testing Data, Factors & Metrics
107
+
108
+ #### Testing Data
109
+
110
+ <!-- This should link to a Data Card if possible. -->
111
+
112
+ [More Information Needed]
113
+
114
+ #### Factors
115
+
116
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
117
+
118
+ [More Information Needed]
119
+
120
+ #### Metrics
121
+
122
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
123
+
124
+ [More Information Needed]
125
+
126
+ ### Results
127
+
128
+ [More Information Needed]
129
+
130
+ #### Summary
131
+
132
+
133
+
134
+ ## Model Examination [optional]
135
+
136
+ <!-- Relevant interpretability work for the model goes here -->
137
+
138
+ [More Information Needed]
139
+
140
+ ## Environmental Impact
141
+
142
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
143
+
144
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
145
+
146
+ - **Hardware Type:** [More Information Needed]
147
+ - **Hours used:** [More Information Needed]
148
+ - **Cloud Provider:** [More Information Needed]
149
+ - **Compute Region:** [More Information Needed]
150
+ - **Carbon Emitted:** [More Information Needed]
151
+
152
+ ## Technical Specifications [optional]
153
+
154
+ ### Model Architecture and Objective
155
+
156
+ [More Information Needed]
157
+
158
+ ### Compute Infrastructure
159
+
160
+ [More Information Needed]
161
+
162
+ #### Hardware
163
+
164
+ [More Information Needed]
165
+
166
+ #### Software
167
+
168
+ [More Information Needed]
169
+
170
+ ## Citation [optional]
171
+
172
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
173
+
174
+ **BibTeX:**
175
+
176
+ [More Information Needed]
177
+
178
+ **APA:**
179
+
180
+ [More Information Needed]
181
+
182
+ ## Glossary [optional]
183
+
184
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
185
+
186
+ [More Information Needed]
187
+
188
+ ## More Information [optional]
189
+
190
+ [More Information Needed]
191
+
192
+ ## Model Card Authors [optional]
193
+
194
+ [More Information Needed]
195
+
196
+ ## Model Card Contact
197
+
198
+ [More Information Needed]
199
+
200
+
201
+ ## Training procedure
202
+
203
+
204
+ The following GPTQ quantization config was used during training:
205
+ - quant_method: gptq
206
+ - bits: 4
207
+ - tokenizer: None
208
+ - dataset: None
209
+ - group_size: 128
210
+ - damp_percent: 0.1
211
+ - desc_act: False
212
+ - sym: True
213
+ - true_sequential: True
214
+ - use_cuda_fp16: False
215
+ - model_seqlen: None
216
+ - block_name_to_quantize: None
217
+ - module_name_preceding_first_block: None
218
+ - batch_size: 1
219
+ - pad_token_id: None
220
+ - disable_exllama: True
221
+ - max_input_length: None
222
+
223
+ ### Framework versions
224
+
225
+
226
+ - PEFT 0.6.0.dev0
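The card's "How to Get Started with the Model" section above is still a placeholder, so the following is only a minimal, untested sketch of how this LoRA adapter could be loaded on top of the GPTQ base model. It assumes `transformers` (with GPTQ support via `optimum`/`auto-gptq`), `accelerate`, and `peft` are installed; the adapter repo path is a placeholder rather than the actual repository name, and the prompt format is illustrative only.

```python
# Sketch only: load TheBloke/MythoMax-L2-13B-GPTQ and apply this LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
from peft import PeftModel

base_id = "TheBloke/MythoMax-L2-13B-GPTQ"
adapter_id = "path/to/this-adapter"  # placeholder for this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Mirror the settings listed above: 4-bit GPTQ with the exllama kernels disabled.
gptq_config = GPTQConfig(bits=4, disable_exllama=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Attach the LoRA weights from adapter_model.bin / adapter_config.json.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```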
adapter_config.json ADDED
@@ -0,0 +1,23 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "TheBloke/MythoMax-L2-13B-GPTQ",
5
+ "bias": "none",
6
+ "fan_in_fan_out": false,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layers_pattern": null,
10
+ "layers_to_transform": null,
11
+ "lora_alpha": 16,
12
+ "lora_dropout": 0.1,
13
+ "modules_to_save": null,
14
+ "peft_type": "LORA",
15
+ "r": 64,
16
+ "rank_pattern": {},
17
+ "revision": null,
18
+ "target_modules": [
19
+ "v_proj",
20
+ "q_proj"
21
+ ],
22
+ "task_type": "CAUSAL_LM"
23
+ }
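For readability, the adapter configuration above maps one-to-one onto PEFT's `LoraConfig`; a minimal sketch (PEFT ~0.6 API) of the equivalent object:

```python
# Sketch of the LoraConfig corresponding to adapter_config.json above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                 # LoRA rank
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.1,
    bias="none",
    target_modules=["v_proj", "q_proj"],  # attention projections being adapted
    task_type="CAUSAL_LM",
    base_model_name_or_path="TheBloke/MythoMax-L2-13B-GPTQ",
)
```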
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:310efd9c9b9166838bb7914221deb4aac8abeb875ba70e5fef4e8f21f29d8458
3
+ size 209773322
added_tokens.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "<pad>": 32000
3
+ }
optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f5cedf636a1ac4b897d751596b289fb93c52b4e4af08be7b798dcf5a80d0d45
3
+ size 419525818
rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:875302996f023146ca785dfba348c403e239007176a9be08a72b77056d6c34f3
3
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19bcbcfd505eb4fc1a12c7d832a77268d82b4a9b82157e01d48b5fb1cf5fbcc1
3
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "</s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "</s>",
17
+ "unk_token": {
18
+ "content": "<unk>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
3
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "<unk>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "1": {
12
+ "content": "<s>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "2": {
20
+ "content": "</s>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "32000": {
28
+ "content": "<pad>",
29
+ "lstrip": false,
30
+ "normalized": true,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": false
34
+ }
35
+ },
36
+ "bos_token": "<s>",
37
+ "clean_up_tokenization_spaces": false,
38
+ "eos_token": "</s>",
39
+ "legacy": false,
40
+ "model_max_length": 4096,
41
+ "pad_token": "</s>",
42
+ "padding_side": "right",
43
+ "sp_model_kwargs": {},
44
+ "tokenizer_class": "LlamaTokenizer",
45
+ "unk_token": "<unk>",
46
+ "use_default_system_prompt": false
47
+ }
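The tokenizer files above are a Llama tokenizer configuration plus a `<pad>` entry; a small sketch (the local path is a placeholder) of loading them and checking the padding setup they declare:

```python
# Sketch: load the committed tokenizer files and inspect the padding configuration.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./adapter")  # placeholder local path holding the files above
print(tok.pad_token, tok.padding_side, tok.model_max_length)  # expected: </s> right 4096
```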
trainer_state.json ADDED
@@ -0,0 +1,579 @@
1
+ {
2
+ "best_metric": 0.15806905925273895,
3
+ "best_model_checkpoint": "./results/checkpoint-975",
4
+ "epoch": 14.285714285714286,
5
+ "eval_steps": 25,
6
+ "global_step": 1000,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.36,
13
+ "learning_rate": 0.0002,
14
+ "loss": 2.1731,
15
+ "step": 25
16
+ },
17
+ {
18
+ "epoch": 0.36,
19
+ "eval_loss": 1.4641685485839844,
20
+ "eval_runtime": 2.4526,
21
+ "eval_samples_per_second": 22.833,
22
+ "eval_steps_per_second": 2.854,
23
+ "step": 25
24
+ },
25
+ {
26
+ "epoch": 0.71,
27
+ "learning_rate": 0.0002,
28
+ "loss": 1.3444,
29
+ "step": 50
30
+ },
31
+ {
32
+ "epoch": 0.71,
33
+ "eval_loss": 1.161795735359192,
34
+ "eval_runtime": 2.4547,
35
+ "eval_samples_per_second": 22.813,
36
+ "eval_steps_per_second": 2.852,
37
+ "step": 50
38
+ },
39
+ {
40
+ "epoch": 1.07,
41
+ "learning_rate": 0.0002,
42
+ "loss": 1.0438,
43
+ "step": 75
44
+ },
45
+ {
46
+ "epoch": 1.07,
47
+ "eval_loss": 0.938343346118927,
48
+ "eval_runtime": 2.4539,
49
+ "eval_samples_per_second": 22.821,
50
+ "eval_steps_per_second": 2.853,
51
+ "step": 75
52
+ },
53
+ {
54
+ "epoch": 1.43,
55
+ "learning_rate": 0.0002,
56
+ "loss": 0.9378,
57
+ "step": 100
58
+ },
59
+ {
60
+ "epoch": 1.43,
61
+ "eval_loss": 0.8406057357788086,
62
+ "eval_runtime": 2.4541,
63
+ "eval_samples_per_second": 22.819,
64
+ "eval_steps_per_second": 2.852,
65
+ "step": 100
66
+ },
67
+ {
68
+ "epoch": 1.79,
69
+ "learning_rate": 0.0002,
70
+ "loss": 0.8852,
71
+ "step": 125
72
+ },
73
+ {
74
+ "epoch": 1.79,
75
+ "eval_loss": 0.7779091000556946,
76
+ "eval_runtime": 2.4548,
77
+ "eval_samples_per_second": 22.812,
78
+ "eval_steps_per_second": 2.852,
79
+ "step": 125
80
+ },
81
+ {
82
+ "epoch": 2.14,
83
+ "learning_rate": 0.0002,
84
+ "loss": 0.8243,
85
+ "step": 150
86
+ },
87
+ {
88
+ "epoch": 2.14,
89
+ "eval_loss": 0.7215237021446228,
90
+ "eval_runtime": 2.4543,
91
+ "eval_samples_per_second": 22.817,
92
+ "eval_steps_per_second": 2.852,
93
+ "step": 150
94
+ },
95
+ {
96
+ "epoch": 2.5,
97
+ "learning_rate": 0.0002,
98
+ "loss": 0.7581,
99
+ "step": 175
100
+ },
101
+ {
102
+ "epoch": 2.5,
103
+ "eval_loss": 0.6108285188674927,
104
+ "eval_runtime": 2.4539,
105
+ "eval_samples_per_second": 22.821,
106
+ "eval_steps_per_second": 2.853,
107
+ "step": 175
108
+ },
109
+ {
110
+ "epoch": 2.86,
111
+ "learning_rate": 0.0002,
112
+ "loss": 0.6965,
113
+ "step": 200
114
+ },
115
+ {
116
+ "epoch": 2.86,
117
+ "eval_loss": 0.5861143469810486,
118
+ "eval_runtime": 2.4542,
119
+ "eval_samples_per_second": 22.818,
120
+ "eval_steps_per_second": 2.852,
121
+ "step": 200
122
+ },
123
+ {
124
+ "epoch": 3.21,
125
+ "learning_rate": 0.0002,
126
+ "loss": 0.6161,
127
+ "step": 225
128
+ },
129
+ {
130
+ "epoch": 3.21,
131
+ "eval_loss": 0.5066039562225342,
132
+ "eval_runtime": 2.4563,
133
+ "eval_samples_per_second": 22.798,
134
+ "eval_steps_per_second": 2.85,
135
+ "step": 225
136
+ },
137
+ {
138
+ "epoch": 3.57,
139
+ "learning_rate": 0.0002,
140
+ "loss": 0.5444,
141
+ "step": 250
142
+ },
143
+ {
144
+ "epoch": 3.57,
145
+ "eval_loss": 0.45327526330947876,
146
+ "eval_runtime": 2.4538,
147
+ "eval_samples_per_second": 22.822,
148
+ "eval_steps_per_second": 2.853,
149
+ "step": 250
150
+ },
151
+ {
152
+ "epoch": 3.93,
153
+ "learning_rate": 0.0002,
154
+ "loss": 0.5739,
155
+ "step": 275
156
+ },
157
+ {
158
+ "epoch": 3.93,
159
+ "eval_loss": 0.4038705825805664,
160
+ "eval_runtime": 2.4543,
161
+ "eval_samples_per_second": 22.817,
162
+ "eval_steps_per_second": 2.852,
163
+ "step": 275
164
+ },
165
+ {
166
+ "epoch": 4.29,
167
+ "learning_rate": 0.0002,
168
+ "loss": 0.4352,
169
+ "step": 300
170
+ },
171
+ {
172
+ "epoch": 4.29,
173
+ "eval_loss": 0.3711726665496826,
174
+ "eval_runtime": 2.4536,
175
+ "eval_samples_per_second": 22.824,
176
+ "eval_steps_per_second": 2.853,
177
+ "step": 300
178
+ },
179
+ {
180
+ "epoch": 4.64,
181
+ "learning_rate": 0.0002,
182
+ "loss": 0.4281,
183
+ "step": 325
184
+ },
185
+ {
186
+ "epoch": 4.64,
187
+ "eval_loss": 0.3348071277141571,
188
+ "eval_runtime": 2.4574,
189
+ "eval_samples_per_second": 22.789,
190
+ "eval_steps_per_second": 2.849,
191
+ "step": 325
192
+ },
193
+ {
194
+ "epoch": 5.0,
195
+ "learning_rate": 0.0002,
196
+ "loss": 0.4371,
197
+ "step": 350
198
+ },
199
+ {
200
+ "epoch": 5.0,
201
+ "eval_loss": 0.2986474633216858,
202
+ "eval_runtime": 2.4531,
203
+ "eval_samples_per_second": 22.829,
204
+ "eval_steps_per_second": 2.854,
205
+ "step": 350
206
+ },
207
+ {
208
+ "epoch": 5.36,
209
+ "learning_rate": 0.0002,
210
+ "loss": 0.3143,
211
+ "step": 375
212
+ },
213
+ {
214
+ "epoch": 5.36,
215
+ "eval_loss": 0.29203733801841736,
216
+ "eval_runtime": 2.454,
217
+ "eval_samples_per_second": 22.82,
218
+ "eval_steps_per_second": 2.853,
219
+ "step": 375
220
+ },
221
+ {
222
+ "epoch": 5.71,
223
+ "learning_rate": 0.0002,
224
+ "loss": 0.3315,
225
+ "step": 400
226
+ },
227
+ {
228
+ "epoch": 5.71,
229
+ "eval_loss": 0.26739758253097534,
230
+ "eval_runtime": 2.4554,
231
+ "eval_samples_per_second": 22.807,
232
+ "eval_steps_per_second": 2.851,
233
+ "step": 400
234
+ },
235
+ {
236
+ "epoch": 6.07,
237
+ "learning_rate": 0.0002,
238
+ "loss": 0.3224,
239
+ "step": 425
240
+ },
241
+ {
242
+ "epoch": 6.07,
243
+ "eval_loss": 0.2381574958562851,
244
+ "eval_runtime": 2.4593,
245
+ "eval_samples_per_second": 22.771,
246
+ "eval_steps_per_second": 2.846,
247
+ "step": 425
248
+ },
249
+ {
250
+ "epoch": 6.43,
251
+ "learning_rate": 0.0002,
252
+ "loss": 0.2582,
253
+ "step": 450
254
+ },
255
+ {
256
+ "epoch": 6.43,
257
+ "eval_loss": 0.2326308786869049,
258
+ "eval_runtime": 2.4554,
259
+ "eval_samples_per_second": 22.807,
260
+ "eval_steps_per_second": 2.851,
261
+ "step": 450
262
+ },
263
+ {
264
+ "epoch": 6.79,
265
+ "learning_rate": 0.0002,
266
+ "loss": 0.2889,
267
+ "step": 475
268
+ },
269
+ {
270
+ "epoch": 6.79,
271
+ "eval_loss": 0.22920013964176178,
272
+ "eval_runtime": 2.4566,
273
+ "eval_samples_per_second": 22.796,
274
+ "eval_steps_per_second": 2.85,
275
+ "step": 475
276
+ },
277
+ {
278
+ "epoch": 7.14,
279
+ "learning_rate": 0.0002,
280
+ "loss": 0.2766,
281
+ "step": 500
282
+ },
283
+ {
284
+ "epoch": 7.14,
285
+ "eval_loss": 0.22648800909519196,
286
+ "eval_runtime": 2.4544,
287
+ "eval_samples_per_second": 22.816,
288
+ "eval_steps_per_second": 2.852,
289
+ "step": 500
290
+ },
291
+ {
292
+ "epoch": 7.5,
293
+ "learning_rate": 0.0002,
294
+ "loss": 0.2476,
295
+ "step": 525
296
+ },
297
+ {
298
+ "epoch": 7.5,
299
+ "eval_loss": 0.19777119159698486,
300
+ "eval_runtime": 2.4535,
301
+ "eval_samples_per_second": 22.824,
302
+ "eval_steps_per_second": 2.853,
303
+ "step": 525
304
+ },
305
+ {
306
+ "epoch": 7.86,
307
+ "learning_rate": 0.0002,
308
+ "loss": 0.2383,
309
+ "step": 550
310
+ },
311
+ {
312
+ "epoch": 7.86,
313
+ "eval_loss": 0.1977979987859726,
314
+ "eval_runtime": 2.4538,
315
+ "eval_samples_per_second": 22.821,
316
+ "eval_steps_per_second": 2.853,
317
+ "step": 550
318
+ },
319
+ {
320
+ "epoch": 8.21,
321
+ "learning_rate": 0.0002,
322
+ "loss": 0.2318,
323
+ "step": 575
324
+ },
325
+ {
326
+ "epoch": 8.21,
327
+ "eval_loss": 0.19671748578548431,
328
+ "eval_runtime": 2.4545,
329
+ "eval_samples_per_second": 22.815,
330
+ "eval_steps_per_second": 2.852,
331
+ "step": 575
332
+ },
333
+ {
334
+ "epoch": 8.57,
335
+ "learning_rate": 0.0002,
336
+ "loss": 0.2159,
337
+ "step": 600
338
+ },
339
+ {
340
+ "epoch": 8.57,
341
+ "eval_loss": 0.1927504688501358,
342
+ "eval_runtime": 2.4546,
343
+ "eval_samples_per_second": 22.815,
344
+ "eval_steps_per_second": 2.852,
345
+ "step": 600
346
+ },
347
+ {
348
+ "epoch": 8.93,
349
+ "learning_rate": 0.0002,
350
+ "loss": 0.2332,
351
+ "step": 625
352
+ },
353
+ {
354
+ "epoch": 8.93,
355
+ "eval_loss": 0.18296389281749725,
356
+ "eval_runtime": 2.4543,
357
+ "eval_samples_per_second": 22.817,
358
+ "eval_steps_per_second": 2.852,
359
+ "step": 625
360
+ },
361
+ {
362
+ "epoch": 9.29,
363
+ "learning_rate": 0.0002,
364
+ "loss": 0.2088,
365
+ "step": 650
366
+ },
367
+ {
368
+ "epoch": 9.29,
369
+ "eval_loss": 0.19335438311100006,
370
+ "eval_runtime": 2.4553,
371
+ "eval_samples_per_second": 22.808,
372
+ "eval_steps_per_second": 2.851,
373
+ "step": 650
374
+ },
375
+ {
376
+ "epoch": 9.64,
377
+ "learning_rate": 0.0002,
378
+ "loss": 0.2072,
379
+ "step": 675
380
+ },
381
+ {
382
+ "epoch": 9.64,
383
+ "eval_loss": 0.18041561543941498,
384
+ "eval_runtime": 2.4547,
385
+ "eval_samples_per_second": 22.813,
386
+ "eval_steps_per_second": 2.852,
387
+ "step": 675
388
+ },
389
+ {
390
+ "epoch": 10.0,
391
+ "learning_rate": 0.0002,
392
+ "loss": 0.2233,
393
+ "step": 700
394
+ },
395
+ {
396
+ "epoch": 10.0,
397
+ "eval_loss": 0.18061913549900055,
398
+ "eval_runtime": 2.454,
399
+ "eval_samples_per_second": 22.82,
400
+ "eval_steps_per_second": 2.853,
401
+ "step": 700
402
+ },
403
+ {
404
+ "epoch": 10.36,
405
+ "learning_rate": 0.0002,
406
+ "loss": 0.1796,
407
+ "step": 725
408
+ },
409
+ {
410
+ "epoch": 10.36,
411
+ "eval_loss": 0.17700645327568054,
412
+ "eval_runtime": 2.4539,
413
+ "eval_samples_per_second": 22.821,
414
+ "eval_steps_per_second": 2.853,
415
+ "step": 725
416
+ },
417
+ {
418
+ "epoch": 10.71,
419
+ "learning_rate": 0.0002,
420
+ "loss": 0.215,
421
+ "step": 750
422
+ },
423
+ {
424
+ "epoch": 10.71,
425
+ "eval_loss": 0.18451173603534698,
426
+ "eval_runtime": 2.4555,
427
+ "eval_samples_per_second": 22.806,
428
+ "eval_steps_per_second": 2.851,
429
+ "step": 750
430
+ },
431
+ {
432
+ "epoch": 11.07,
433
+ "learning_rate": 0.0002,
434
+ "loss": 0.1894,
435
+ "step": 775
436
+ },
437
+ {
438
+ "epoch": 11.07,
439
+ "eval_loss": 0.17726834118366241,
440
+ "eval_runtime": 2.4547,
441
+ "eval_samples_per_second": 22.813,
442
+ "eval_steps_per_second": 2.852,
443
+ "step": 775
444
+ },
445
+ {
446
+ "epoch": 11.43,
447
+ "learning_rate": 0.0002,
448
+ "loss": 0.1899,
449
+ "step": 800
450
+ },
451
+ {
452
+ "epoch": 11.43,
453
+ "eval_loss": 0.17982231080532074,
454
+ "eval_runtime": 2.4541,
455
+ "eval_samples_per_second": 22.819,
456
+ "eval_steps_per_second": 2.852,
457
+ "step": 800
458
+ },
459
+ {
460
+ "epoch": 11.79,
461
+ "learning_rate": 0.0002,
462
+ "loss": 0.2009,
463
+ "step": 825
464
+ },
465
+ {
466
+ "epoch": 11.79,
467
+ "eval_loss": 0.1710078865289688,
468
+ "eval_runtime": 2.4543,
469
+ "eval_samples_per_second": 22.817,
470
+ "eval_steps_per_second": 2.852,
471
+ "step": 825
472
+ },
473
+ {
474
+ "epoch": 12.14,
475
+ "learning_rate": 0.0002,
476
+ "loss": 0.1859,
477
+ "step": 850
478
+ },
479
+ {
480
+ "epoch": 12.14,
481
+ "eval_loss": 0.1884012222290039,
482
+ "eval_runtime": 2.4546,
483
+ "eval_samples_per_second": 22.814,
484
+ "eval_steps_per_second": 2.852,
485
+ "step": 850
486
+ },
487
+ {
488
+ "epoch": 12.5,
489
+ "learning_rate": 0.0002,
490
+ "loss": 0.1854,
491
+ "step": 875
492
+ },
493
+ {
494
+ "epoch": 12.5,
495
+ "eval_loss": 0.16743424534797668,
496
+ "eval_runtime": 2.4539,
497
+ "eval_samples_per_second": 22.821,
498
+ "eval_steps_per_second": 2.853,
499
+ "step": 875
500
+ },
501
+ {
502
+ "epoch": 12.86,
503
+ "learning_rate": 0.0002,
504
+ "loss": 0.191,
505
+ "step": 900
506
+ },
507
+ {
508
+ "epoch": 12.86,
509
+ "eval_loss": 0.16949959099292755,
510
+ "eval_runtime": 2.4543,
511
+ "eval_samples_per_second": 22.817,
512
+ "eval_steps_per_second": 2.852,
513
+ "step": 900
514
+ },
515
+ {
516
+ "epoch": 13.21,
517
+ "learning_rate": 0.0002,
518
+ "loss": 0.1912,
519
+ "step": 925
520
+ },
521
+ {
522
+ "epoch": 13.21,
523
+ "eval_loss": 0.15851029753684998,
524
+ "eval_runtime": 2.4557,
525
+ "eval_samples_per_second": 22.804,
526
+ "eval_steps_per_second": 2.851,
527
+ "step": 925
528
+ },
529
+ {
530
+ "epoch": 13.57,
531
+ "learning_rate": 0.0002,
532
+ "loss": 0.1763,
533
+ "step": 950
534
+ },
535
+ {
536
+ "epoch": 13.57,
537
+ "eval_loss": 0.17656584084033966,
538
+ "eval_runtime": 2.4541,
539
+ "eval_samples_per_second": 22.819,
540
+ "eval_steps_per_second": 2.852,
541
+ "step": 950
542
+ },
543
+ {
544
+ "epoch": 13.93,
545
+ "learning_rate": 0.0002,
546
+ "loss": 0.1953,
547
+ "step": 975
548
+ },
549
+ {
550
+ "epoch": 13.93,
551
+ "eval_loss": 0.15806905925273895,
552
+ "eval_runtime": 2.455,
553
+ "eval_samples_per_second": 22.811,
554
+ "eval_steps_per_second": 2.851,
555
+ "step": 975
556
+ },
557
+ {
558
+ "epoch": 14.29,
559
+ "learning_rate": 0.0002,
560
+ "loss": 0.1732,
561
+ "step": 1000
562
+ },
563
+ {
564
+ "epoch": 14.29,
565
+ "eval_loss": 0.18219564855098724,
566
+ "eval_runtime": 2.4533,
567
+ "eval_samples_per_second": 22.826,
568
+ "eval_steps_per_second": 2.853,
569
+ "step": 1000
570
+ }
571
+ ],
572
+ "logging_steps": 25,
573
+ "max_steps": 1000,
574
+ "num_train_epochs": 15,
575
+ "save_steps": 25,
576
+ "total_flos": 239052193136640.0,
577
+ "trial_name": null,
578
+ "trial_params": null
579
+ }
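Most of `trainer_state.json` is the logged loss curve (training and eval loss every 25 steps, with the best eval loss of about 0.158 at checkpoint-975). A small sketch of pulling those numbers back out of the file, using the field names shown above:

```python
# Sketch: extract the eval-loss curve and best checkpoint from trainer_state.json.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

evals = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]
for step, loss in evals:
    print(f"step {step:4d}  eval_loss {loss:.4f}")

print("best:", state["best_metric"], "at", state["best_model_checkpoint"])
```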
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbc32753e27b71918fa5f3c9f72da69f9f1dc11b4ef45f2025e974308e33f68c
3
+ size 4472