Surabhi-K committed on
Commit 41a8a0d
1 Parent(s): d682b64

Surabhi-K/code_llama_library

README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: llama2
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: codellama/CodeLlama-7b-hf
+ model-index:
+ - name: working
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # working
+
+ This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1536
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 3
+ - eval_batch_size: 3
+ - seed: 42
+ - gradient_accumulation_steps: 5
+ - total_train_batch_size: 15
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 7
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 2.0255        | 1.0   | 63   | 0.5661          |
+ | 0.3616        | 2.0   | 126  | 0.3047          |
+ | 0.1979        | 3.0   | 189  | 0.2129          |
+ | 0.1565        | 4.0   | 252  | 0.1817          |
+ | 0.1409        | 5.0   | 315  | 0.1644          |
+ | 0.1319        | 6.0   | 378  | 0.1561          |
+ | 0.1277        | 7.0   | 441  | 0.1536          |
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2
+ - Datasets 2.15.0
+ - Tokenizers 0.15.2
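The card does not show how to use the adapter, so here is a minimal inference sketch. It assumes the adapter is published under the repo id shown in the commit header (`Surabhi-K/code_llama_library`) and uses an illustrative prompt; the exact prompt template used during fine-tuning is not recorded in this repo, so adjust both as needed.

```python
# Minimal loading/inference sketch (assumptions: adapter repo id and prompt format).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "Surabhi-K/code_llama_library"  # assumed from the commit header; change if different

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

# Illustrative prompt only; the training template is not documented here.
prompt = "Endpoint: /config/rest/version/\nDescription: fetch version and build number\nTest code:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```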
adapter_config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "codellama/CodeLlama-7b-hf",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj",
+     "dense",
+     "k_proj",
+     "fc1",
+     "fc2"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
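For reference, the adapter_config.json above corresponds roughly to the following PEFT configuration. This is a reconstruction sketch, not the original training script.

```python
from peft import LoraConfig

# Mirrors the fields recorded in adapter_config.json (sketch only).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj", "dense", "k_proj", "fc1", "fc2"],
)
```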
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37c011b0cac744d8275e249f6410638acb2cdea1e5e799f63da8ead4cbb2eb5a
+ size 50357464
test.csv ADDED
@@ -0,0 +1,574 @@
1
+ Endpoint,Description,Inputs,Output,Test_Code
2
+ /config/rest/delete/,requesting to delete the config values when provided valid config name but with invalid token,"config_value = { ""name"": ""primary_server"" }","{
3
+ ""status"": 401,
4
+ ""message"": ""Invalid token""
5
+ }
6
+ ","def test_config_delete_with_invalid_token(invalid_exec_api):
7
+ """"""
8
+ deleting the non deletable config values with invalid token
9
+ """"""
10
+ config_value = {
11
+ ""name"": ""primary_server"",
12
+ }
13
+ r = invalid_exec_api.config_delete(config_value)
14
+ test_assert.status(r, 401)
15
+ result = r.json()
16
+ assert result['detail'] == ""Invalid token.""
17
+ "
18
+ /config/rest/version/,Fetching the information when invalid token is provided,,"{
19
+ ""status"":401,
20
+ ""message"":""invalid token""
21
+ }","def test_version_config_with_invalid_token(invalid_exec_api):
22
+ """"""
23
+ Fetching the information of Version and Build Number with invalid token
24
+ """"""
25
+ r = invalid_exec_api.config_version()
26
+ result = r.json()
27
+ test_assert.status(r, 401)
28
+ assert result['detail'] == ""Invalid token."""
29
+ "/deploy/rest/delete/{UUID}/
30
+
31
+ ","the manager deletes the image when the manager has rights over the user and the server
32
+ ",,"{
33
+ ""status"":204
34
+ }","endpoint = ""deploy_delete""
35
+
36
+ PARAMETERS = [{""dest_obj"": OBJ_DEPL}]
37
+ PARAMETERS_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""deploy_with"": SRV_MANAGER_RIGHTS}]
38
+ PARAMETERS_NO_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""deploy_with"": SRV_NO_MANAGER_RIGHTS}]
39
+
40
+ @pytest.mark.parametrize(""custom_lib_non_admin_operations"", PARAMETERS_SRV_RIGHT, indirect=True)
41
+ @pytest.mark.parametrize(""custom_lib_admin_operations"", PARAMETERS_SRV_RIGHT, indirect=True)
42
+
43
+ def test_deploy_delete_manager_server_right(skip_if_not_manager, custom_lib_admin_operations, custom_lib_non_admin_operations, run_api):
44
+ """"""
45
+ Deleting the VM by Manager
46
+ """"""
47
+ # When the user is not part of the group that the manager manages and deployment is on manager rights to server
48
+ deploy_id = custom_lib_admin_operations
49
+ r = run_api.deploy_image_delete(deploy_id, {})
50
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=False, manages_server=True))
51
+
52
+ # When the user is part of the group that the manager manages and deployment is on manager rights to server
53
+ deploy_id = custom_lib_non_admin_operations
54
+ r = run_api.deploy_image_delete(deploy_id, {})
55
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=True, manages_server=True))"
56
+ /deploy/rest/shutdown/{{UUID}}/,shutting down the deployment of machine by an admin using valid UUID and machine is in running state,,"{
57
+ ""status"" : 201,
58
+ ""response"" : Machine shutdown
59
+ }","PARAMETERS = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""]}]
60
+
61
+ @pytest.mark.parametrize(""custom_lib_non_admin_operations"", PARAMETERS, indirect=True)
62
+ def test_deploy_shutdown_admin(skip_if_not_admin, custom_lib_non_admin_operations, run_api):
63
+ """"""
64
+ shutdown the VM by Admin
65
+ """"""
66
+ deploy_id = custom_lib_non_admin_operations
67
+ r = run_api.deploy_shutdown(deploy_id)
68
+ test_assert.status(r, 201)
69
+
70
+ "
71
+ /deploy/rest/stop/{{UUID}}/,stopping a machine by an admin when valid UUID is provided and machine is in running state,,"{
72
+ ""status"" : 201,
73
+ ""response"" : stopping deployment
74
+ }","endpoint = ""deploy_stop""
75
+
76
+ PARAMETERS = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""]}]
77
+ PARAMETERS_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_MANAGER_RIGHTS}]
78
+ PARAMETERS_NO_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_NO_MANAGER_RIGHTS}]
79
+
80
+ @pytest.mark.parametrize(""custom_lib_non_admin_operations"", PARAMETERS, indirect=True)
81
+ def test_deploy_stop_admin(skip_if_not_admin, custom_lib_non_admin_operations, run_api):
82
+ """"""
83
+ Rebooting the VM by Admin
84
+ """"""
85
+ # Admin check of Starting a deployment created by different user
86
+ deploy_id = custom_lib_non_admin_operations
87
+ r = run_api.deploy_stop(deploy_id)
88
+ test_assert.status(r, 201)
89
+ "
90
+ /deploy/rest/stop/{{UUID}}/,stopping a machine by non-admin user when valid UUID is provided and machine is in running state,,"{
91
+ ""status"" : 403
92
+ }","endpoint = ""deploy_stop""
93
+
94
+ PARAMETERS = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""]}]
95
+ PARAMETERS_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_MANAGER_RIGHTS}]
96
+ PARAMETERS_NO_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_NO_MANAGER_RIGHTS}]
97
+
98
+ @pytest.mark.parametrize(""custom_lib_admin_operations"", PARAMETERS, indirect=True)
99
+ def test_deploy_stop_non_admin(skip_if_not_non_admin, custom_lib_admin_operations, run_api):
100
+ """"""
101
+ stopping the VM by non-admin
102
+ """"""
103
+ deploy_id = custom_lib_admin_operations
104
+ r = run_api.deploy_stop(deploy_id)
105
+ test_assert.status(r, 403)
106
+ "
107
+ /deploy/rest/stop/{{UUID}}/,"stopping a machine deployment by a manager when valid UUID is provided and machine is in running state , and manager has rights over servers",,,"endpoint = ""deploy_stop""
108
+
109
+ PARAMETERS = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""]}]
110
+ PARAMETERS_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_MANAGER_RIGHTS}]
111
+ PARAMETERS_NO_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_NO_MANAGER_RIGHTS}]
112
+
113
+ @pytest.mark.parametrize(""custom_lib_non_admin_operations"", PARAMETERS_SRV_RIGHT, indirect=True)
114
+ @pytest.mark.parametrize(""custom_lib_admin_operations"", PARAMETERS_SRV_RIGHT, indirect=True)
115
+ def test_deploy_stop_manager_server_right(skip_if_not_manager, custom_lib_admin_operations, custom_lib_non_admin_operations, run_api):
116
+ """"""
117
+ stopping the VM by manager when have right on server
118
+ """"""
119
+ # When the user is not part of the group that the manager manages
120
+ deploy_id = custom_lib_admin_operations
121
+ r = run_api.deploy_stop(deploy_id)
122
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=False, manages_server=True))
123
+
124
+ # When the user is part of the group that the manager manages and deployment is on manager rights to server
125
+ deploy_id = custom_lib_non_admin_operations
126
+ r = run_api.deploy_stop(deploy_id)
127
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=True, manages_server=True))
128
+ run_api.deploy_stop(deploy_id)
129
+
130
+ "
131
+ /deploy/rest/stop/{{UUID}}/,"stopping a machine deployment by a manager when valid UUID is provided and machine is in running state , but manager do not have rights over servers",,,"endpoint = ""deploy_stop""
132
+
133
+ PARAMETERS = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""]}]
134
+ PARAMETERS_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_MANAGER_RIGHTS}]
135
+ PARAMETERS_NO_SRV_RIGHT = [{""dest_obj"": OBJ_DEPL, ""final_state"": DEPL_STATE[""running""], ""deploy_with"": SRV_NO_MANAGER_RIGHTS}]
136
+
137
+ @pytest.mark.parametrize(""custom_lib_non_admin_operations"", PARAMETERS_NO_SRV_RIGHT, indirect=True)
138
+ @pytest.mark.parametrize(""custom_lib_admin_operations"", PARAMETERS_NO_SRV_RIGHT, indirect=True)
139
+ def test_deploy_stop_manager_no_server_right(skip_if_not_manager, custom_lib_admin_operations, custom_lib_non_admin_operations, run_api):
140
+ """"""
141
+ stopping the VM by manager when have no right on server
142
+ """"""
143
+ # When the user is not part of the group that the manager manages and the deployment is not on manager rightful server
144
+ deploy_id = custom_lib_admin_operations
145
+ r = run_api.deploy_stop(deploy_id)
146
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=False, manages_server=False))
147
+
148
+ # When the user is part of the group that the manager manages but the deployment is not on manager rightful server
149
+ deploy_id = custom_lib_non_admin_operations
150
+ r = run_api.deploy_stop(deploy_id)
151
+ test_assert.status(r, manager_rights_response(endpoint, manages_user=True, manages_server=False))
152
+
153
+ "
154
+ /ilibrary/rest/add/,creating an island library and adding it using invalid IPs,"{
155
+ ""name"": ""test_ilibrary_add_invalid_ips"",
156
+ ""is_public"": True,
157
+ ""network_segments"": {
158
+ ""add"": [
159
+ {
160
+ ""name"": ""network_segment"",
161
+ ""bridge_ip"": ""1921681000"",
162
+ ""start_ip"": ""1921681001"",
163
+ ""end_ip"": ""192168100150""
164
+ }
165
+ ]
166
+ }
167
+ }","{
168
+ ""status"" : 400,
169
+ ""message"" : ""Enter valid IPv4 addresses""
170
+ }","
171
+ def test_ilibrary_add_invalid_ips(run_api):
172
+ """"""
173
+ Creating an Island Library with invalid bridge ip, start ip, end ip
174
+ """"""
175
+ params = {
176
+ ""name"": ""test_ilibrary_add_invalid_ips"",
177
+ ""is_public"": True,
178
+ ""network_segments"": {
179
+ ""add"": [
180
+ {
181
+ ""name"": ""network_segment"",
182
+ ""bridge_ip"": ""1921681000"",
183
+ ""start_ip"": ""1921681001"",
184
+ ""end_ip"": ""192168100150""
185
+ }
186
+ ]
187
+ }
188
+ }
189
+ params, r = run_api.ilibrary_add_new_island(params=params)
190
+ test_assert.status(r, 400)
191
+ rjson = r.json()
192
+ errors = rjson['network_segments']['add'][0]
193
+ assert errors['start_ip'] == ['Enter a valid IPv4 address.']
194
+ assert errors['end_ip'] == ['Enter a valid IPv4 address.']
195
+ assert errors['bridge_ip'] == ['Enter a valid IPv4 address.']
196
+ "
197
+ /ilibrary/rest/details/{UUID}/,fetching details of public island library from private island,"machine1 = {
198
+ ""uuid"": r1.json()[""uuid""],
199
+ ""nics"": {
200
+ ""add"": [
201
+ {
202
+ ""mac"": ""auto"",
203
+ ""type"": ""bridge"",
204
+ ""model"": networks[0].get(""model"", ""virtio""),
205
+ ""segment"": ""Default Public Segment""
206
+ },
207
+
208
+ ],
209
+ }
210
+
211
+ }
212
+ params = {
213
+ ""name"": ""Machine1"",
214
+ ""is_public"": False,
215
+ ""machines"": {
216
+ ""add"": [machine1],
217
+ },
218
+ }","{
219
+ ""response"" : success
220
+ }","def test_ilibrary_details_with_edit_private_island_to_public_island(skip_if_not_admin, run_api):
221
+ """"""
222
+ To check machine type with public island
223
+ """"""
224
+ params1, r1 = run_api.library_add_new_vm(networks=networks)
225
+ machine1 = {
226
+ ""uuid"": r1.json()[""uuid""],
227
+ ""nics"": {
228
+ ""add"": [
229
+ {
230
+ ""mac"": ""auto"",
231
+ ""type"": ""bridge"",
232
+ ""model"": networks[0].get(""model"", ""virtio""),
233
+ ""segment"": ""Default Public Segment""
234
+ },
235
+
236
+ ],
237
+ }
238
+
239
+ }
240
+ params = {
241
+ ""name"": ""Machine1"",
242
+ ""is_public"": False,
243
+ ""machines"": {
244
+ ""add"": [machine1],
245
+ },
246
+ }
247
+ params, r = run_api.ilibrary_add_new_island(params=params)
248
+ island_id = r.json()[""uuid""]
249
+ params, r = run_api.ilibrary_edit_island(uuid=island_id, params={""is_public"": True})
250
+ res = r.json()[""machines""]
251
+ run_api.ilibrary_delete(uuid=island_id)
252
+ run_api.library_delete(r1.json()[""uuid""])
253
+ for machine in res:
254
+ if not machine[""is_public""]:
255
+ assert False, ""The json is %s"" % r.json()
256
+
257
+ "
258
+ /library/rest/add,adding vm to library when multiple boot disks and same order is passed,"disks = [{ ""size"": 20, ""port"": ""sdb"", ""type"": ""sata"", ""format"": ""qcow2"", ""is_boot"": True, ""boot_order"": 1 }, { ""size"": 20, ""port"": ""sda"", ""type"": ""sata"", ""format"": ""qcow2"", ""is_boot"": True, ""boot_order"": 1 }]","{
259
+ ""status"" : 400,
260
+ ""response"" : Bad request
261
+ }","def test_add_vm_to_library_multiple_bootable_disk_with_same_boot_order(run_api):
262
+ """"""
263
+ If multiple bootable cds with same boot order is passed
264
+ """"""
265
+
266
+ disks = [{
267
+ ""size"": 20,
268
+ ""port"": ""sdb"",
269
+ ""type"": ""sata"",
270
+ ""format"": ""qcow2"",
271
+ ""is_boot"": True,
272
+ ""boot_order"": 1
273
+ },
274
+ {
275
+ ""size"": 20,
276
+ ""port"": ""sda"",
277
+ ""type"": ""sata"",
278
+ ""format"": ""qcow2"",
279
+ ""is_boot"": True,
280
+ ""boot_order"": 1
281
+ }]
282
+
283
+ params, response = run_api.library_add_new_vm(disks=disks, noraise=True)
284
+ test_assert.status(response, 400)
285
+ "
286
+ /library/rest/adddisk/{{UUID}}/ ,adding disk to library when provided lib_UUID that does not exist,"lib_UUID = ""doesnotexist""","{
287
+ ""status"" : 404
288
+ }","def test_lib_add_disk_with_invalid_UUID(run_api):
289
+ lib_UUID = ""doesnotexist""
290
+ r = run_api.library_add_disk(lib_UUID)
291
+ test_assert.status(r, 404)
292
+ "
293
+ /library/rest/ctypes/,getting the console type when requested,,"{
294
+ ""status"" : 200,
295
+ ""response"" : console type details displayed
296
+ }","def test_library_ctypes(run_api):
297
+ """"""
298
+ Getting the list of console type
299
+ """"""
300
+ r = run_api.library_console_types()
301
+ result = r.json()
302
+ test_assert.status(result, LIBRARY_CONSOLE_TYPE, ""library_ctypes"")
303
+ test_assert.status(r, 200)
304
+ "
305
+ /library/rest/delete/{UUID}/,deleting a library by non-admin when provided with valid UUID,,"{
306
+ ""status"" : 403
307
+ }","PARAMETERS = [{""dest_obj"": OBJ_LIB}]
308
+
309
+ @pytest.mark.parametrize(""custom_lib_admin_operations"", PARAMETERS, indirect=True)
310
+ def test_lib_delete_non_admin(skip_if_not_non_admin, custom_lib_admin_operations, run_api):
311
+ """"""
312
+ Deleting the Library by non-Admin
313
+ """"""
314
+ # Non-admin check for deleting the Library created by different user.
315
+ lib_id = custom_lib_admin_operations
316
+ r = run_api.library_delete(lib_id, {})
317
+ test_assert.status(r, 403)
318
+
319
+ "
320
+ /library/rest/edit/{UUID}/,deletion of disk when invalid UUID provided,"{""delete"": [
321
+ {
322
+ ""UUID"": disk_UUID,
323
+ ""port"": ""sdz"",
324
+ ""type"": r['hw']['disks'][0]['type']
325
+ }
326
+ ]
327
+ }","{
328
+ ""status"" : 404,
329
+ ""message"" : ""Disk with UUID does not exist""
330
+ }","def test_library_edit_delete_invalid_disk_UUID(library_add_new_vm, run_api):
331
+ """"""
332
+ delete disk with invalid UUID
333
+ """"""
334
+ p, r = library_add_new_vm
335
+ lib_id = r['UUID']
336
+ disk_UUID = str(UUID.UUID4())
337
+ disk_UUID = 'invalid'
338
+ disks = {""delete"": [
339
+ {
340
+ ""UUID"": disk_UUID,
341
+ ""port"": ""sdz"",
342
+ ""type"": r['hw']['disks'][0]['type']
343
+ }
344
+ ]
345
+ }
346
+ params = {""hw"": {""disks"": disks}}
347
+ res = run_api.library_edit(lib_id, params)
348
+ test_assert.status(res, 404)
349
+ rjson = res.json()
350
+ assert rjson['error'] == f""Disk with UUID {disk_UUID} does not exist"", ""|> json %s"" % rjson
351
+ "
352
+ /library/rest/edit/{UUID}/,updation of library when is_public flag set to True,"params = { ""is_public"": True, ""hw"": {} }","{
353
+ ""status"" : 400,
354
+ ""message"" : ""Failed to create task to sync public layers on primary""
355
+ }","def test_library_edit_update_is_public_flag(skip_if_not_non_admin, library_add_new_vm, run_api):
356
+ """"""
357
+ Update is_public flag
358
+ """"""
359
+ p, res = library_add_new_vm
360
+ UUID = res['UUID']
361
+ params = {
362
+ ""is_public"": True,
363
+ ""hw"": {}
364
+ }
365
+ r = run_api.library_edit(UUID, params)
366
+ test_assert.status(r, 400)
367
+ rjson = r.json()
368
+ assert rjson['error'] == ""Failed to create task to sync public layers on primary"", ""Json |> %s"" % rjson
369
+ "
370
+ /library/rest/edit/{UUID}/,updation of network in a library with valid data,"networks = [{
371
+ ""type"": ""bridge"",
372
+ ""model"": ""virtio"",
373
+ ""segment"": ""Default Public Segment"",
374
+ }
375
+ ]
376
+
377
+ update_network = [{
378
+ ""type"": ""host"",
379
+ ""model"": ""virtio"",
380
+ ""segment"": ""HostOnly Segment"",
381
+ }]
382
+
383
+ ","{
384
+ ""status"" : 201
385
+ }","def test_library_edit_network_with_valid_data(run_api):
386
+ """"""
387
+ edit network with valid data
388
+ """"""
389
+ networks = [{
390
+ ""type"": ""bridge"",
391
+ ""model"": ""virtio"",
392
+ ""segment"": ""Default Public Segment"",
393
+ }
394
+ ]
395
+ params, r = run_api.library_add_new_vm(networks=networks)
396
+ update_netork = [{
397
+ ""type"": ""host"",
398
+ ""model"": ""virtio"",
399
+ ""segment"": ""HostOnly Segment"",
400
+ }]
401
+ params = {'hw': {'networks': update_netork}}
402
+ lib_id = r.json()[""UUID""]
403
+ res = run_api.library_edit(lib_id, params)
404
+ test_assert.status(res, 201)
405
+ rjson = res.json()
406
+ new_network = rjson['hw']['networks'][0]
407
+ assert new_network['type'] == 'host', ""|> Error %s"" % rjson
408
+ assert new_network['segment'] == 'HostOnly Segment', ""|> Error %s"" % rjson
409
+ run_api.library_delete(lib_id, {})
410
+ "
411
+ ​/library​/rest​/hvmtypes​/,fetching the hypervisor type when requested,,"{
412
+ ""status"" : 200,
413
+ ""response"" : list of hypervisor type
414
+ }","def test_library_hvmtypes(run_api):
415
+ """"""
416
+ Getting the list of Hypervisor type
417
+ """"""
418
+ r = run_api.library_hvmtypes()
419
+ result = r.json()
420
+ test_assert.status(result, LIBRARY_HVM_TYPE, ""library_hvmtypes"")
421
+ test_assert.status(r, 200)
422
+ "
423
+ /library/rest/revisions/,requesting the revision list of library when machine_UUID is empty,"{
424
+ machine_UUID : ''
425
+ }","{
426
+ ""status"" : 400,
427
+ ""message"" : ""Machine UUID should be provided""}","def test_library_revisions_without_UUID(run_api):
428
+ """"""
429
+ Without UUID
430
+ """"""
431
+ res = run_api.library_revisions("""")
432
+ test_assert.status(res, 400)
433
+ rjson = res.json()
434
+ assert rjson['detail'] == ""Machine UUID should be provided"", ""|> The error %s"" % rjson
435
+ "
436
+ ​/rtask​/rest​/rlist​/,fetching the list of remote tasks without authorization,,"{
437
+ ""status"" : 401,
438
+ ""message"" : ""Authentication credentials were not provided""
439
+ }","def test_rtask_rlist_without_authorization(anonymous_exec_api):
440
+ """"""
441
+ Fetching the List of Jobs without authorization
442
+ """"""
443
+ r = anonymous_exec_api.rtask_rlist()
444
+ res = r.json()
445
+ test_assert.status(r, 401)
446
+ assert res['detail'] == ""Authentication credentials were not provided.""
447
+ "
448
+ ideploy/rest/change_ownership,Successful change of ownership from one user to another where both users exist and the requester has the necessary permissions.,"{
449
+ ""deployment_UUIDs"": [UUID],
450
+ ""owner"": ,
451
+ ""dest_user"": ""manager""
452
+ }","{""status"": 200, ""message"": ""Operation performed successfully without any error""}","def test_deploy_change_ownership(skip_if_non_admin, non_admin_exec_api, run_api):
453
+ """"""
454
+ To change ownership of deployed machine from non-admin user to manager by admin
455
+ """"""
456
+ params, r = non_admin_exec_api.library_add_new_vm()
457
+ lib_id = r.json()[""UUID""]
458
+ r = non_admin_exec_api.deploy_image(lib_id=lib_id, deploy_on=list(run_api.clm_my_servers.keys()))
459
+ UUID = r.json()['UUID']
460
+ params = {
461
+ ""deployment_UUIDs"": [UUID],
462
+ ""owner"": non_admin_exec_api.user,
463
+ ""dest_user"": ""manager""
464
+ }
465
+ res = run_api.deploy_change_ownership(params=params)
466
+ test_assert.status(res, 200)
467
+ new_owner = run_api.deploy_details(deploy_id=UUID).json()['owner']
468
+ assert new_owner == ""manager""
469
+ run_api.deploy_image_delete(deploy_id=UUID)
470
+ run_api.library_delete(UUID=lib_id)
471
+ "
472
+ "license/rest/licenses_check
473
+ "," when day params is negative
474
+ ","{
475
+ ""days"" : -1
476
+ }","{
477
+ ""status:200,
478
+ ""message"": ""Value of `days` cannot be negative""
479
+ }","def test_license_check_when_day_is_negative(run_api):
480
+ """"""
481
+ license check when day is negative
482
+ """"""
483
+ r = run_api.license_check(days=-1)
484
+ rjson = r.json()
485
+ test_assert.status(r, 400)
486
+ assert rjson['error'] == ""Value of `days` cannot be negative"", ""The error %s"" % rjson"
487
+ /deploy/rest/deploylist,"getting list of image of deployed machine by setting scope to ""my"". Check the user type before performing the operation, only admin user type have the permission to perform such operations.","{
488
+ scope : ""my""
489
+ }","{
490
+ ""response"" : success
491
+ }","def test_deploy_list_filter_with_scope_my(run_api, admin_exec_api, library_add_new_vm):
492
+ """"""
493
+ filter deploy list using scope = my
494
+ """"""
495
+ params, r = library_add_new_vm
496
+ lib_id = r[""uuid""]
497
+ r = admin_exec_api.deploy_image(lib_id)
498
+ deploy_id = r.json()[""uuid""]
499
+ count = check_count_deploylist(run_api, deploy_id, params={'scope': 'my', 'uuid': deploy_id})
500
+
501
+ if run_api.user_type == USER_TYPE['non_admin'] or run_api.user_type == USER_TYPE['manager']:
502
+ assert count == 0
503
+ elif run_api.user_type == USER_TYPE['admin']:
504
+ assert count == 1
505
+
506
+ r = admin_exec_api.deploy_image_delete(deploy_id)
507
+ "
508
+ /config/rest/set/,"setting the None value to secret config. Check the user type before performing the operation, only admin user type have the permission to perform such operations.
509
+ ","{
510
+ ""name"": ""secret"",
511
+ ""value"": None
512
+ }","{
513
+ ""status"" : 400,
514
+ ""response"" : 'Invalid secret_key Value'
515
+ }","def test_config_None_set_secret(run_api):
516
+ """"""
517
+ Set the secret-key config value as None
518
+ """"""
519
+ config_value = {
520
+ ""name"": ""secret"",
521
+ ""value"": None
522
+ }
523
+ r = run_api.config_set(config_value)
524
+ res = r.json()
525
+ if run_api.user_type in [USER_TYPE[""manager""], USER_TYPE[""non_admin""]]:
526
+ test_assert.status(r, 403)
527
+ elif run_api.user_type == USER_TYPE['admin']:
528
+ test_assert.status(r, 400)
529
+ assert res[""result""] == 'FAILURE'
530
+ assert 'Invalid secret_key Value' in res[""error""], res
531
+
532
+
533
+ "
534
+ /group/rest/add/,"adding new group when group name field is missing. Check the user type before performing the operation, only admin user type have the permission to perform such operations.
535
+ ",,"{
536
+ ""status"" : 400,
537
+ ""message"" : ""Group Name is required and it can not be blank""
538
+ }","def test_add_group_with_group_name_field_missing(run_api):
539
+ """"""
540
+ Adding new Group with group name field missing
541
+ """"""
542
+ params, r = run_api.group_add(template={})
543
+ if run_api.user_type in [USER_TYPE[""non_admin""], USER_TYPE[""manager""]]:
544
+ test_assert.status(r, 403)
545
+ elif run_api.user_type == USER_TYPE[""admin""]:
546
+ result = r.json()
547
+ test_assert.status(r, 400)
548
+ assert result['error'] == ""Group Name is required and it can not be blank""
549
+
550
+ "
551
+ /group/rest/update/,"updating the deployment strategy of a group using an invalid value. Check the user type before performing the operation, only admin user type have the permission to perform such operations.
552
+ ","{
553
+ ""name"",
554
+ ""deployment_strategy"": 'invalid'
555
+ }","{
556
+ ""status"" : 400,
557
+ ""message"" : ""Invalid deployment_strategy""
558
+ }","def test_group_update_invalid_deployment_strategy(skip_if_not_admin, group_add, run_api):
559
+ """"""
560
+ group update invalid deployment_strategy
561
+ """"""
562
+ params, r = group_add
563
+ rjson = r.json()
564
+ group_id = rjson['id']
565
+ group_param = {
566
+ ""name"": rjson['name'],
567
+ ""deployment_strategy"": 'invalid'
568
+ }
569
+ updated_param, r = run_api.group_update(group_id, group_param)
570
+ run_api.user_type == USER_TYPE[""admin""]
571
+ result = r.json()
572
+ test_assert.status(r, 400)
573
+ assert result['error'] == ""Invalid deployment_strategy"", ""|> Json %s"" % result
574
+ "
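The CSV above pairs REST endpoints and expected responses with pytest test code (columns: Endpoint, Description, Inputs, Output, Test_Code). Here is a quick loading sketch; the prompt construction is illustrative only, since the exact template used for fine-tuning is not recorded in this repo, and train.csv is assumed to follow the same schema.

```python
import pandas as pd

# Load the evaluation split shipped with this repo (train.csv presumably has the same columns).
df = pd.read_csv("test.csv")
print(df.columns.tolist())  # ['Endpoint', 'Description', 'Inputs', 'Output', 'Test_Code']

# Illustrative prompt construction for one row (not the documented training format).
row = df.iloc[0]
prompt = (
    f"Endpoint: {row['Endpoint']}\n"
    f"Description: {row['Description']}\n"
    f"Inputs: {row['Inputs']}\n"
    f"Expected output: {row['Output']}\n"
    "Write the pytest test:\n"
)
print(prompt)
print(str(row["Test_Code"])[:200])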
train.csv ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f8bed004a75884b381dfeb5fecaa918ddd59fd1c78c47e14f81963e6c9ff58d
+ size 4728
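training_args.bin is a serialized TrainingArguments object. Based on the hyperparameters in the model card and the W&B run config recorded further down, it corresponds roughly to the sketch below; the binary file remains the authoritative record.

```python
from transformers import TrainingArguments

# Approximate reconstruction from the model card and the W&B config (sketch only).
training_args = TrainingArguments(
    output_dir="/kaggle/working/",
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=5,
    learning_rate=5e-5,
    weight_decay=0.01,
    num_train_epochs=7,
    lr_scheduler_type="linear",
    warmup_steps=20,
    optim="paged_adamw_8bit",
    fp16=True,
    gradient_checkpointing=True,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=5,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    seed=42,
    report_to=["wandb"],
)
```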
wandb/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/debug.log ADDED
@@ -0,0 +1,41 @@
1
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.16.5
2
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
3
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
4
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
5
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
6
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
7
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
8
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
9
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
10
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_log_setup():527] Logging user logs to /kaggle/working/wandb/run-20240416_061855-mrfkrks2/logs/debug.log
11
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_log_setup():528] Logging internal logs to /kaggle/working/wandb/run-20240416_061855-mrfkrks2/logs/debug-internal.log
12
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_jupyter_setup():473] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7a5364210f10>
13
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():567] calling init triggers
14
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():574] wandb.init called with sweep_config: {}
15
+ config: {}
16
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():617] starting backend
17
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():621] setting up manager
18
+ 2024-04-16 06:18:55,481 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
19
+ 2024-04-16 06:18:55,483 INFO MainThread:34 [wandb_init.py:init():629] backend started and connected
20
+ 2024-04-16 06:18:55,496 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1299] probe notebook
21
+ 2024-04-16 06:18:55,848 INFO MainThread:34 [wandb_init.py:init():721] updated telemetry
22
+ 2024-04-16 06:18:55,852 INFO MainThread:34 [wandb_init.py:init():754] communicating run to backend with 90.0 second timeout
23
+ 2024-04-16 06:18:56,112 INFO MainThread:34 [wandb_run.py:_on_init():2344] communicating current version
24
+ 2024-04-16 06:18:56,208 INFO MainThread:34 [wandb_run.py:_on_init():2353] got version response upgrade_message: "wandb version 0.16.6 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
25
+
26
+ 2024-04-16 06:18:56,208 INFO MainThread:34 [wandb_init.py:init():805] starting run threads in backend
27
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_console_start():2323] atexit reg
28
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_redirect():2178] redirect: wrap_raw
29
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_redirect():2243] Wrapping output streams.
30
+ 2024-04-16 06:19:12,560 INFO MainThread:34 [wandb_run.py:_redirect():2268] Redirects installed.
31
+ 2024-04-16 06:19:12,561 INFO MainThread:34 [wandb_init.py:init():848] run started, returning control to user process
32
+ 2024-04-16 06:19:12,566 INFO MainThread:34 [wandb_run.py:_config_callback():1347] config_cb None None {'vocab_size': 32016, 'max_position_embeddings': 16384, 'hidden_size': 4096, 'intermediate_size': 11008, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'num_key_value_heads': 32, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'pretraining_tp': 1, 'use_cache': False, 'rope_theta': 1000000, 'rope_scaling': None, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlamaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'codellama/CodeLlama-7b-hf', 'transformers_version': '4.36.2', 'model_type': 'llama', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', 'load_in_8bit': False, 'load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': True, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16'}, 'output_dir': '/kaggle/working/', 'overwrite_output_dir': True, 'do_train': False, 'do_eval': True, 'do_predict': False, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 3, 'per_device_eval_batch_size': 3, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 5, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 5e-05, 'weight_decay': 0.01, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 7, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 20, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working//logs', 'logging_strategy': 'epoch', 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 5, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 
'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'past_index': -1, 'run_name': 'fine-tuning-Phi2-with-webglm-qa-with-lora', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': True, 'metric_for_best_model': 'loss', 'greater_is_better': False, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_8bit', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': {'use_reentrant': False}, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': False, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None}
33
+ 2024-04-16 11:52:21,876 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
34
+ 2024-04-16 11:52:21,876 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
35
+ 2024-04-16 11:52:21,883 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
36
+ 2024-04-16 11:52:49,863 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
37
+ 2024-04-16 11:52:49,863 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
38
+ 2024-04-16 11:52:49,869 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
39
+ 2024-04-16 11:52:52,066 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
40
+ 2024-04-16 11:52:52,066 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
41
+ 2024-04-16 11:55:22,701 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
wandb/run-20240416_061855-mrfkrks2/files/conda-environment.yaml ADDED
File without changes
wandb/run-20240416_061855-mrfkrks2/files/config.yaml ADDED
@@ -0,0 +1,700 @@
1
+ wandb_version: 1
2
+
3
+ _wandb:
4
+ desc: null
5
+ value:
6
+ python_version: 3.10.13
7
+ cli_version: 0.16.5
8
+ framework: huggingface
9
+ huggingface_version: 4.36.2
10
+ is_jupyter_run: true
11
+ is_kaggle_kernel: true
12
+ start_time: 1713248335.0
13
+ t:
14
+ 1:
15
+ - 1
16
+ - 2
17
+ - 3
18
+ - 5
19
+ - 11
20
+ - 12
21
+ - 49
22
+ - 51
23
+ - 53
24
+ - 55
25
+ - 71
26
+ - 98
27
+ - 105
28
+ 2:
29
+ - 1
30
+ - 2
31
+ - 3
32
+ - 5
33
+ - 11
34
+ - 12
35
+ - 49
36
+ - 51
37
+ - 53
38
+ - 55
39
+ - 71
40
+ - 98
41
+ - 105
42
+ 3:
43
+ - 7
44
+ - 13
45
+ - 23
46
+ 4: 3.10.13
47
+ 5: 0.16.5
48
+ 6: 4.36.2
49
+ 8:
50
+ - 1
51
+ - 2
52
+ - 5
53
+ 9:
54
+ 1: transformers_trainer
55
+ 13: linux-x86_64
56
+ m:
57
+ - 1: train/global_step
58
+ 6:
59
+ - 3
60
+ - 1: train/loss
61
+ 5: 1
62
+ 6:
63
+ - 1
64
+ - 1: train/learning_rate
65
+ 5: 1
66
+ 6:
67
+ - 1
68
+ - 1: train/epoch
69
+ 5: 1
70
+ 6:
71
+ - 1
72
+ - 1: eval/loss
73
+ 5: 1
74
+ 6:
75
+ - 1
76
+ - 1: eval/runtime
77
+ 5: 1
78
+ 6:
79
+ - 1
80
+ - 1: eval/samples_per_second
81
+ 5: 1
82
+ 6:
83
+ - 1
84
+ - 1: eval/steps_per_second
85
+ 5: 1
86
+ 6:
87
+ - 1
88
+ - 1: train/train_runtime
89
+ 5: 1
90
+ 6:
91
+ - 1
92
+ - 1: train/train_samples_per_second
93
+ 5: 1
94
+ 6:
95
+ - 1
96
+ - 1: train/train_steps_per_second
97
+ 5: 1
98
+ 6:
99
+ - 1
100
+ - 1: train/total_flos
101
+ 5: 1
102
+ 6:
103
+ - 1
104
+ - 1: train/train_loss
105
+ 5: 1
106
+ 6:
107
+ - 1
108
+ vocab_size:
109
+ desc: null
110
+ value: 32016
111
+ max_position_embeddings:
112
+ desc: null
113
+ value: 16384
114
+ hidden_size:
115
+ desc: null
116
+ value: 4096
117
+ intermediate_size:
118
+ desc: null
119
+ value: 11008
120
+ num_hidden_layers:
121
+ desc: null
122
+ value: 32
123
+ num_attention_heads:
124
+ desc: null
125
+ value: 32
126
+ num_key_value_heads:
127
+ desc: null
128
+ value: 32
129
+ hidden_act:
130
+ desc: null
131
+ value: silu
132
+ initializer_range:
133
+ desc: null
134
+ value: 0.02
135
+ rms_norm_eps:
136
+ desc: null
137
+ value: 1.0e-05
138
+ pretraining_tp:
139
+ desc: null
140
+ value: 1
141
+ use_cache:
142
+ desc: null
143
+ value: false
144
+ rope_theta:
145
+ desc: null
146
+ value: 1000000
147
+ rope_scaling:
148
+ desc: null
149
+ value: null
150
+ attention_bias:
151
+ desc: null
152
+ value: false
153
+ attention_dropout:
154
+ desc: null
155
+ value: 0.0
156
+ return_dict:
157
+ desc: null
158
+ value: true
159
+ output_hidden_states:
160
+ desc: null
161
+ value: false
162
+ output_attentions:
163
+ desc: null
164
+ value: false
165
+ torchscript:
166
+ desc: null
167
+ value: false
168
+ torch_dtype:
169
+ desc: null
170
+ value: bfloat16
171
+ use_bfloat16:
172
+ desc: null
173
+ value: false
174
+ tf_legacy_loss:
175
+ desc: null
176
+ value: false
177
+ pruned_heads:
178
+ desc: null
179
+ value: {}
180
+ tie_word_embeddings:
181
+ desc: null
182
+ value: false
183
+ is_encoder_decoder:
184
+ desc: null
185
+ value: false
186
+ is_decoder:
187
+ desc: null
188
+ value: false
189
+ cross_attention_hidden_size:
190
+ desc: null
191
+ value: null
192
+ add_cross_attention:
193
+ desc: null
194
+ value: false
195
+ tie_encoder_decoder:
196
+ desc: null
197
+ value: false
198
+ max_length:
199
+ desc: null
200
+ value: 20
201
+ min_length:
202
+ desc: null
203
+ value: 0
204
+ do_sample:
205
+ desc: null
206
+ value: false
207
+ early_stopping:
208
+ desc: null
209
+ value: false
210
+ num_beams:
211
+ desc: null
212
+ value: 1
213
+ num_beam_groups:
214
+ desc: null
215
+ value: 1
216
+ diversity_penalty:
217
+ desc: null
218
+ value: 0.0
219
+ temperature:
220
+ desc: null
221
+ value: 1.0
222
+ top_k:
223
+ desc: null
224
+ value: 50
225
+ top_p:
226
+ desc: null
227
+ value: 1.0
228
+ typical_p:
229
+ desc: null
230
+ value: 1.0
231
+ repetition_penalty:
232
+ desc: null
233
+ value: 1.0
234
+ length_penalty:
235
+ desc: null
236
+ value: 1.0
237
+ no_repeat_ngram_size:
238
+ desc: null
239
+ value: 0
240
+ encoder_no_repeat_ngram_size:
241
+ desc: null
242
+ value: 0
243
+ bad_words_ids:
244
+ desc: null
245
+ value: null
246
+ num_return_sequences:
247
+ desc: null
248
+ value: 1
249
+ chunk_size_feed_forward:
250
+ desc: null
251
+ value: 0
252
+ output_scores:
253
+ desc: null
254
+ value: false
255
+ return_dict_in_generate:
256
+ desc: null
257
+ value: false
258
+ forced_bos_token_id:
259
+ desc: null
260
+ value: null
261
+ forced_eos_token_id:
262
+ desc: null
263
+ value: null
264
+ remove_invalid_values:
265
+ desc: null
266
+ value: false
267
+ exponential_decay_length_penalty:
268
+ desc: null
269
+ value: null
270
+ suppress_tokens:
271
+ desc: null
272
+ value: null
273
+ begin_suppress_tokens:
274
+ desc: null
275
+ value: null
276
+ architectures:
277
+ desc: null
278
+ value:
279
+ - LlamaForCausalLM
280
+ finetuning_task:
281
+ desc: null
282
+ value: null
283
+ id2label:
284
+ desc: null
285
+ value:
286
+ '0': LABEL_0
287
+ '1': LABEL_1
288
+ label2id:
289
+ desc: null
290
+ value:
291
+ LABEL_0: 0
292
+ LABEL_1: 1
293
+ tokenizer_class:
294
+ desc: null
295
+ value: null
296
+ prefix:
297
+ desc: null
298
+ value: null
299
+ bos_token_id:
300
+ desc: null
301
+ value: 1
302
+ pad_token_id:
303
+ desc: null
304
+ value: null
305
+ eos_token_id:
306
+ desc: null
307
+ value: 2
308
+ sep_token_id:
309
+ desc: null
310
+ value: null
311
+ decoder_start_token_id:
312
+ desc: null
313
+ value: null
314
+ task_specific_params:
315
+ desc: null
316
+ value: null
317
+ problem_type:
318
+ desc: null
319
+ value: null
320
+ _name_or_path:
321
+ desc: null
322
+ value: codellama/CodeLlama-7b-hf
323
+ transformers_version:
324
+ desc: null
325
+ value: 4.36.2
326
+ model_type:
327
+ desc: null
328
+ value: llama
329
+ quantization_config:
330
+ desc: null
331
+ value:
332
+ quant_method: QuantizationMethod.BITS_AND_BYTES
333
+ load_in_8bit: false
334
+ load_in_4bit: true
335
+ llm_int8_threshold: 6.0
336
+ llm_int8_skip_modules: null
337
+ llm_int8_enable_fp32_cpu_offload: true
338
+ llm_int8_has_fp16_weight: false
339
+ bnb_4bit_quant_type: nf4
340
+ bnb_4bit_use_double_quant: true
341
+ bnb_4bit_compute_dtype: bfloat16
342
+ output_dir:
343
+ desc: null
344
+ value: /kaggle/working/
345
+ overwrite_output_dir:
346
+ desc: null
347
+ value: true
348
+ do_train:
349
+ desc: null
350
+ value: false
351
+ do_eval:
352
+ desc: null
353
+ value: true
354
+ do_predict:
355
+ desc: null
356
+ value: false
357
+ evaluation_strategy:
358
+ desc: null
359
+ value: epoch
360
+ prediction_loss_only:
361
+ desc: null
362
+ value: false
363
+ per_device_train_batch_size:
364
+ desc: null
365
+ value: 3
366
+ per_device_eval_batch_size:
367
+ desc: null
368
+ value: 3
369
+ per_gpu_train_batch_size:
370
+ desc: null
371
+ value: null
372
+ per_gpu_eval_batch_size:
373
+ desc: null
374
+ value: null
375
+ gradient_accumulation_steps:
376
+ desc: null
377
+ value: 5
378
+ eval_accumulation_steps:
379
+ desc: null
380
+ value: null
381
+ eval_delay:
382
+ desc: null
383
+ value: 0
384
+ learning_rate:
385
+ desc: null
386
+ value: 5.0e-05
387
+ weight_decay:
388
+ desc: null
389
+ value: 0.01
390
+ adam_beta1:
391
+ desc: null
392
+ value: 0.9
393
+ adam_beta2:
394
+ desc: null
395
+ value: 0.999
396
+ adam_epsilon:
397
+ desc: null
398
+ value: 1.0e-08
399
+ max_grad_norm:
400
+ desc: null
401
+ value: 1.0
402
+ num_train_epochs:
403
+ desc: null
404
+ value: 7
405
+ max_steps:
406
+ desc: null
407
+ value: -1
408
+ lr_scheduler_type:
409
+ desc: null
410
+ value: linear
411
+ lr_scheduler_kwargs:
412
+ desc: null
413
+ value: {}
414
+ warmup_ratio:
415
+ desc: null
416
+ value: 0.0
417
+ warmup_steps:
418
+ desc: null
419
+ value: 20
420
+ log_level:
421
+ desc: null
422
+ value: passive
423
+ log_level_replica:
424
+ desc: null
425
+ value: warning
426
+ log_on_each_node:
427
+ desc: null
428
+ value: true
429
+ logging_dir:
430
+ desc: null
431
+ value: /kaggle/working//logs
432
+ logging_strategy:
433
+ desc: null
434
+ value: epoch
435
+ logging_first_step:
436
+ desc: null
437
+ value: false
438
+ logging_steps:
439
+ desc: null
440
+ value: 500
441
+ logging_nan_inf_filter:
442
+ desc: null
443
+ value: true
444
+ save_strategy:
445
+ desc: null
446
+ value: epoch
447
+ save_steps:
448
+ desc: null
449
+ value: 500
450
+ save_total_limit:
451
+ desc: null
452
+ value: 5
453
+ save_safetensors:
454
+ desc: null
455
+ value: true
456
+ save_on_each_node:
457
+ desc: null
458
+ value: false
459
+ save_only_model:
460
+ desc: null
461
+ value: false
462
+ no_cuda:
463
+ desc: null
464
+ value: false
465
+ use_cpu:
466
+ desc: null
467
+ value: false
468
+ use_mps_device:
469
+ desc: null
470
+ value: false
471
+ seed:
472
+ desc: null
473
+ value: 42
474
+ data_seed:
475
+ desc: null
476
+ value: null
477
+ jit_mode_eval:
478
+ desc: null
479
+ value: false
480
+ use_ipex:
481
+ desc: null
482
+ value: false
483
+ bf16:
484
+ desc: null
485
+ value: false
486
+ fp16:
487
+ desc: null
488
+ value: true
489
+ fp16_opt_level:
490
+ desc: null
491
+ value: O1
492
+ half_precision_backend:
493
+ desc: null
494
+ value: auto
495
+ bf16_full_eval:
496
+ desc: null
497
+ value: false
498
+ fp16_full_eval:
499
+ desc: null
500
+ value: false
501
+ tf32:
502
+ desc: null
503
+ value: null
504
+ local_rank:
505
+ desc: null
506
+ value: 0
507
+ ddp_backend:
508
+ desc: null
509
+ value: null
510
+ tpu_num_cores:
511
+ desc: null
512
+ value: null
513
+ tpu_metrics_debug:
514
+ desc: null
515
+ value: false
516
+ debug:
517
+ desc: null
518
+ value: []
519
+ dataloader_drop_last:
520
+ desc: null
521
+ value: false
522
+ eval_steps:
523
+ desc: null
524
+ value: null
525
+ dataloader_num_workers:
526
+ desc: null
527
+ value: 0
528
+ past_index:
529
+ desc: null
530
+ value: -1
531
+ run_name:
532
+ desc: null
533
+ value: fine-tuning-Phi2-with-webglm-qa-with-lora
534
+ disable_tqdm:
535
+ desc: null
536
+ value: false
537
+ remove_unused_columns:
538
+ desc: null
539
+ value: true
540
+ label_names:
541
+ desc: null
542
+ value: null
543
+ load_best_model_at_end:
544
+ desc: null
545
+ value: true
546
+ metric_for_best_model:
547
+ desc: null
548
+ value: loss
549
+ greater_is_better:
550
+ desc: null
551
+ value: false
552
+ ignore_data_skip:
553
+ desc: null
554
+ value: false
555
+ fsdp:
556
+ desc: null
557
+ value: []
558
+ fsdp_min_num_params:
559
+ desc: null
560
+ value: 0
561
+ fsdp_config:
562
+ desc: null
563
+ value:
564
+ min_num_params: 0
565
+ xla: false
566
+ xla_fsdp_grad_ckpt: false
567
+ fsdp_transformer_layer_cls_to_wrap:
568
+ desc: null
569
+ value: null
570
+ deepspeed:
571
+ desc: null
572
+ value: null
573
+ label_smoothing_factor:
574
+ desc: null
575
+ value: 0.0
576
+ optim:
577
+ desc: null
578
+ value: paged_adamw_8bit
579
+ optim_args:
580
+ desc: null
581
+ value: null
582
+ adafactor:
583
+ desc: null
584
+ value: false
585
+ group_by_length:
586
+ desc: null
587
+ value: false
588
+ length_column_name:
589
+ desc: null
590
+ value: length
591
+ report_to:
592
+ desc: null
593
+ value:
594
+ - wandb
595
+ ddp_find_unused_parameters:
596
+ desc: null
597
+ value: null
598
+ ddp_bucket_cap_mb:
599
+ desc: null
600
+ value: null
601
+ ddp_broadcast_buffers:
602
+ desc: null
603
+ value: null
604
+ dataloader_pin_memory:
605
+ desc: null
606
+ value: true
607
+ dataloader_persistent_workers:
608
+ desc: null
609
+ value: false
610
+ skip_memory_metrics:
611
+ desc: null
612
+ value: true
613
+ use_legacy_prediction_loop:
614
+ desc: null
615
+ value: false
616
+ push_to_hub:
617
+ desc: null
618
+ value: false
619
+ resume_from_checkpoint:
620
+ desc: null
621
+ value: null
622
+ hub_model_id:
623
+ desc: null
624
+ value: null
625
+ hub_strategy:
626
+ desc: null
627
+ value: every_save
628
+ hub_token:
629
+ desc: null
630
+ value: <HUB_TOKEN>
631
+ hub_private_repo:
632
+ desc: null
633
+ value: false
634
+ hub_always_push:
635
+ desc: null
636
+ value: false
637
+ gradient_checkpointing:
638
+ desc: null
639
+ value: true
640
+ gradient_checkpointing_kwargs:
641
+ desc: null
642
+ value:
643
+ use_reentrant: false
644
+ include_inputs_for_metrics:
645
+ desc: null
646
+ value: false
647
+ fp16_backend:
648
+ desc: null
649
+ value: auto
650
+ push_to_hub_model_id:
651
+ desc: null
652
+ value: null
653
+ push_to_hub_organization:
654
+ desc: null
655
+ value: null
656
+ push_to_hub_token:
657
+ desc: null
658
+ value: <PUSH_TO_HUB_TOKEN>
659
+ mp_parameters:
660
+ desc: null
661
+ value: ''
662
+ auto_find_batch_size:
663
+ desc: null
664
+ value: false
665
+ full_determinism:
666
+ desc: null
667
+ value: false
668
+ torchdynamo:
669
+ desc: null
670
+ value: null
671
+ ray_scope:
672
+ desc: null
673
+ value: last
674
+ ddp_timeout:
675
+ desc: null
676
+ value: 1800
677
+ torch_compile:
678
+ desc: null
679
+ value: false
680
+ torch_compile_backend:
681
+ desc: null
682
+ value: null
683
+ torch_compile_mode:
684
+ desc: null
685
+ value: null
686
+ dispatch_batches:
687
+ desc: null
688
+ value: null
689
+ split_batches:
690
+ desc: null
691
+ value: false
692
+ include_tokens_per_second:
693
+ desc: null
694
+ value: false
695
+ include_num_input_tokens_seen:
696
+ desc: null
697
+ value: false
698
+ neftune_noise_alpha:
699
+ desc: null
700
+ value: null
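The quantization_config block recorded in the run config above maps roughly to the following bitsandbytes setup for loading the base model in 4-bit. This is a reconstruction sketch of how the base model appears to have been quantized during training, not the original notebook.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization_config values captured in the W&B run config (sketch only).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_enable_fp32_cpu_offload=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```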
wandb/run-20240416_061855-mrfkrks2/files/output.log ADDED
@@ -0,0 +1 @@
+
wandb/run-20240416_061855-mrfkrks2/files/requirements.txt ADDED
@@ -0,0 +1,864 @@
1
+ Babel==2.14.0
2
+ Boruta==0.3
3
+ Brotli==1.0.9
4
+ CVXcanon==0.1.2
5
+ Cartopy==0.22.0
6
+ Cython==3.0.8
7
+ Deprecated==1.2.14
8
+ Farama-Notifications==0.0.4
9
+ Flask==3.0.2
10
+ Geohash==1.0
11
+ GitPython==3.1.41
12
+ ImageHash==4.3.1
13
+ Janome==0.5.0
14
+ Jinja2==3.1.2
15
+ LunarCalendar==0.0.9
16
+ Mako==1.3.2
17
+ Markdown==3.5.2
18
+ MarkupSafe==2.1.3
19
+ MarkupSafe==2.1.5
20
+ Pillow==9.5.0
21
+ PuLP==2.8.0
22
+ PyArabic==0.6.15
23
+ PyJWT==2.8.0
24
+ PyMeeus==0.5.12
25
+ PySocks==1.7.1
26
+ PyUpSet==0.1.1.post7
27
+ PyWavelets==1.5.0
28
+ PyYAML==6.0.1
29
+ Pygments==2.17.2
30
+ Pympler==1.0.1
31
+ QtPy==2.4.1
32
+ Rtree==1.2.0
33
+ SQLAlchemy==2.0.25
34
+ SecretStorage==3.3.3
35
+ Send2Trash==1.8.2
36
+ Shapely==1.8.5.post1
37
+ Shimmy==1.3.0
38
+ SimpleITK==2.3.1
39
+ TPOT==0.12.1
40
+ Theano-PyMC==1.1.2
41
+ Theano==1.0.5
42
+ Wand==0.6.13
43
+ Werkzeug==3.0.2
44
+ absl-py==1.4.0
45
+ accelerate==0.25.0
46
+ access==1.1.9
47
+ affine==2.4.0
48
+ aiobotocore==2.12.2
49
+ aiofiles==22.1.0
50
+ aiohttp-cors==0.7.0
51
+ aiohttp==3.9.1
52
+ aioitertools==0.11.0
53
+ aiorwlock==1.3.0
54
+ aiosignal==1.3.1
55
+ aiosqlite==0.19.0
56
+ albumentations==1.4.0
57
+ alembic==1.13.1
58
+ altair==5.3.0
59
+ annotated-types==0.6.0
60
+ annoy==1.17.3
61
+ anyio==4.2.0
62
+ apache-beam==2.46.0
63
+ aplus==0.11.0
64
+ appdirs==1.4.4
65
+ archspec==0.2.3
66
+ argon2-cffi-bindings==21.2.0
67
+ argon2-cffi==23.1.0
68
+ array-record==0.5.0
69
+ arrow==1.3.0
70
+ arviz==0.17.1
71
+ astroid==3.1.0
72
+ astropy-iers-data==0.2024.4.1.0.33.14
73
+ astropy==6.0.1
74
+ asttokens==2.4.1
75
+ astunparse==1.6.3
76
+ async-lru==2.0.4
77
+ async-timeout==4.0.3
78
+ attrs==23.2.0
79
+ audioread==3.0.1
80
+ autopep8==2.0.4
81
+ backoff==2.2.1
82
+ bayesian-optimization==1.4.3
83
+ beatrix_jupyterlab==2023.128.151533
84
+ beautifulsoup4==4.12.2
85
+ bitsandbytes==0.41.3
86
+ blake3==0.2.1
87
+ bleach==6.1.0
88
+ blessed==1.20.0
89
+ blinker==1.7.0
90
+ blis==0.7.10
91
+ blosc2==2.6.0
92
+ bokeh==3.3.4
93
+ boltons==23.1.1
94
+ boto3==1.26.100
95
+ botocore==1.34.51
96
+ bq_helper==0.4.1
97
+ bqplot==0.12.43
98
+ branca==0.7.1
99
+ brewer2mpl==1.4.1
100
+ brotlipy==0.7.0
101
+ cached-property==1.5.2
102
+ cachetools==4.2.4
103
+ cachetools==5.3.2
104
+ catalogue==2.0.10
105
+ catalyst==22.4
106
+ catboost==1.2.3
107
+ category-encoders==2.6.3
108
+ certifi==2024.2.2
109
+ cesium==0.12.1
110
+ cffi==1.16.0
111
+ charset-normalizer==3.3.2
112
+ chex==0.1.86
113
+ cleverhans==4.0.0
114
+ click-plugins==1.1.1
115
+ click==8.1.7
116
+ cligj==0.7.2
117
+ cloud-tpu-client==0.10
118
+ cloud-tpu-profiler==2.4.0
119
+ cloudpathlib==0.16.0
120
+ cloudpickle==2.2.1
121
+ cloudpickle==3.0.0
122
+ cmdstanpy==1.2.2
123
+ colorama==0.4.6
124
+ colorcet==3.1.0
125
+ colorful==0.5.6
126
+ colorlog==6.8.2
127
+ colorlover==0.3.0
128
+ comm==0.2.1
129
+ conda-libmamba-solver==23.7.0
130
+ conda-package-handling==2.2.0
131
+ conda==23.7.4
132
+ conda_package_streaming==0.9.0
133
+ confection==0.1.4
134
+ contextily==1.6.0
135
+ contourpy==1.2.0
136
+ convertdate==2.4.0
137
+ crcmod==1.7
138
+ cryptography==41.0.7
139
+ cuda-python==12.4.0
140
+ cudf==23.8.0
141
+ cufflinks==0.17.3
142
+ cuml==23.8.0
143
+ cupy==13.0.0
144
+ cycler==0.12.1
145
+ cymem==2.0.8
146
+ cytoolz==0.12.3
147
+ daal4py==2024.2.0
148
+ daal==2024.2.0
149
+ dacite==1.8.1
150
+ dask-cuda==23.8.0
151
+ dask-cudf==23.8.0
152
+ dask-expr==1.0.9
153
+ dask==2024.4.0
154
+ dataclasses-json==0.6.4
155
+ dataproc_jupyter_plugin==0.1.66
156
+ datasets==2.15.0
157
+ datashader==0.16.0
158
+ datatile==1.0.3
159
+ db-dtypes==1.2.0
160
+ deap==1.4.1
161
+ debugpy==1.8.0
162
+ decorator==5.1.1
163
+ deepdiff==6.7.1
164
+ defusedxml==0.7.1
165
+ deprecation==2.1.0
166
+ descartes==1.1.0
167
+ dill==0.3.7
168
+ dipy==1.9.0
169
+ distlib==0.3.8
170
+ distributed==2023.7.1
171
+ distro==1.9.0
172
+ dm-tree==0.1.8
173
+ docker-pycreds==0.4.0
174
+ docker==7.0.0
175
+ docopt==0.6.2
176
+ docstring-parser==0.15
177
+ docstring-to-markdown==0.15
178
+ docutils==0.20.1
179
+ earthengine-api==0.1.395
180
+ easydict==1.13
181
+ easyocr==1.7.1
182
+ ecos==2.0.13
183
+ eli5==0.13.0
184
+ emoji==2.11.0
185
+ en-core-web-lg==3.7.1
186
+ en-core-web-sm==3.7.1
187
+ entrypoints==0.4
188
+ ephem==4.1.5
189
+ esda==2.5.1
190
+ essentia==2.1b6.dev1110
191
+ et-xmlfile==1.1.0
192
+ etils==1.6.0
193
+ exceptiongroup==1.2.0
194
+ executing==2.0.1
195
+ explainable-ai-sdk==1.3.3
196
+ fastai==2.7.14
197
+ fastapi==0.108.0
198
+ fastavro==1.9.3
199
+ fastcore==1.5.29
200
+ fastdownload==0.0.7
201
+ fasteners==0.19
202
+ fastjsonschema==2.19.1
203
+ fastprogress==1.0.3
204
+ fastrlock==0.8.2
205
+ fasttext==0.9.2
206
+ feather-format==0.4.1
207
+ featuretools==1.30.0
208
+ filelock==3.13.1
209
+ fiona==1.9.6
210
+ fitter==1.7.0
211
+ flake8==7.0.0
212
+ flashtext==2.7
213
+ flatbuffers==23.5.26
214
+ flax==0.8.2
215
+ folium==0.16.0
216
+ fonttools==4.47.0
217
+ fonttools==4.50.0
218
+ fqdn==1.5.1
219
+ frozendict==2.4.1
220
+ frozenlist==1.4.1
221
+ fsspec==2023.10.0
222
+ fsspec==2024.3.1
223
+ funcy==2.0
224
+ fury==0.10.0
225
+ future==1.0.0
226
+ fuzzywuzzy==0.18.0
227
+ gast==0.5.4
228
+ gatspy==0.3
229
+ gcsfs==2024.2.0
230
+ gensim==4.3.2
231
+ geographiclib==2.0
232
+ geojson==3.1.0
233
+ geopandas==0.14.3
234
+ geoplot==0.5.1
235
+ geopy==2.4.1
236
+ geoviews==1.11.1
237
+ ggplot==0.11.5
238
+ giddy==2.3.5
239
+ gitdb==4.0.11
240
+ google-ai-generativelanguage==0.4.0
241
+ google-api-core==2.11.1
242
+ google-api-core==2.18.0
243
+ google-api-python-client==2.125.0
244
+ google-apitools==0.5.31
245
+ google-auth-httplib2==0.2.0
246
+ google-auth-oauthlib==1.2.0
247
+ google-auth==2.26.1
248
+ google-cloud-aiplatform==0.6.0a1
249
+ google-cloud-artifact-registry==1.10.0
250
+ google-cloud-automl==1.0.1
251
+ google-cloud-bigquery==2.34.4
252
+ google-cloud-bigtable==1.7.3
253
+ google-cloud-core==2.4.1
254
+ google-cloud-datastore==2.19.0
255
+ google-cloud-dlp==3.14.0
256
+ google-cloud-jupyter-config==0.0.5
257
+ google-cloud-language==2.13.3
258
+ google-cloud-monitoring==2.18.0
259
+ google-cloud-pubsub==2.19.0
260
+ google-cloud-pubsublite==1.9.0
261
+ google-cloud-recommendations-ai==0.7.1
262
+ google-cloud-resource-manager==1.11.0
263
+ google-cloud-spanner==3.40.1
264
+ google-cloud-storage==1.44.0
265
+ google-cloud-translate==3.12.1
266
+ google-cloud-videointelligence==2.13.3
267
+ google-cloud-vision==2.8.0
268
+ google-crc32c==1.5.0
269
+ google-generativeai==0.4.1
270
+ google-pasta==0.2.0
271
+ google-resumable-media==2.7.0
272
+ googleapis-common-protos==1.62.0
273
+ gplearn==0.4.2
274
+ gpustat==1.0.0
275
+ gpxpy==1.6.2
276
+ graphviz==0.20.3
277
+ greenlet==3.0.3
278
+ grpc-google-iam-v1==0.12.7
279
+ grpcio-status==1.48.1
280
+ grpcio-status==1.48.2
281
+ grpcio==1.51.1
282
+ grpcio==1.60.0
283
+ gviz-api==1.10.0
284
+ gym-notices==0.0.8
285
+ gym==0.26.2
286
+ gymnasium==0.29.0
287
+ h11==0.14.0
288
+ h2o==3.46.0.1
289
+ h5netcdf==1.3.0
290
+ h5py==3.10.0
291
+ haversine==2.8.1
292
+ hdfs==2.7.3
293
+ hep-ml==0.7.2
294
+ hijri-converter==2.3.1
295
+ hmmlearn==0.3.2
296
+ holidays==0.24
297
+ holoviews==1.18.3
298
+ hpsklearn==0.1.0
299
+ html5lib==1.1
300
+ htmlmin==0.1.12
301
+ httpcore==1.0.5
302
+ httplib2==0.21.0
303
+ httptools==0.6.1
304
+ httpx==0.27.0
305
+ huggingface-hub==0.22.2
306
+ hunspell==0.5.5
307
+ hydra-slayer==0.5.0
308
+ hyperopt==0.2.7
309
+ hypertools==0.8.0
310
+ idna==3.6
311
+ igraph==0.11.4
312
+ imagecodecs==2024.1.1
313
+ imageio==2.33.1
314
+ imbalanced-learn==0.12.2
315
+ imgaug==0.4.0
316
+ importlib-metadata==6.11.0
317
+ importlib-metadata==7.0.1
318
+ importlib-resources==6.1.1
319
+ inequality==1.0.1
320
+ iniconfig==2.0.0
321
+ ipydatawidgets==4.3.5
322
+ ipykernel==6.28.0
323
+ ipyleaflet==0.18.2
324
+ ipympl==0.7.0
325
+ ipython-genutils==0.2.0
326
+ ipython-genutils==0.2.0
327
+ ipython-sql==0.5.0
328
+ ipython==8.20.0
329
+ ipyvolume==0.6.3
330
+ ipyvue==1.10.2
331
+ ipyvuetify==1.9.3
332
+ ipywebrtc==0.6.0
333
+ ipywidgets==7.7.1
334
+ isoduration==20.11.0
335
+ isort==5.13.2
336
+ isoweek==1.3.3
337
+ itsdangerous==2.1.2
338
+ jaraco.classes==3.3.0
339
+ jax-jumpy==1.0.0
340
+ jax==0.4.23
341
+ jaxlib==0.4.23.dev20240116
342
+ jedi==0.19.1
343
+ jeepney==0.8.0
344
+ jieba==0.42.1
345
+ jmespath==1.0.1
346
+ joblib==1.3.2
347
+ json5==0.9.14
348
+ jsonpatch==1.33
349
+ jsonpointer==2.4
350
+ jsonschema-specifications==2023.12.1
351
+ jsonschema==4.20.0
352
+ jupyter-console==6.6.3
353
+ jupyter-events==0.9.0
354
+ jupyter-http-over-ws==0.0.8
355
+ jupyter-lsp==1.5.1
356
+ jupyter-server-mathjax==0.2.6
357
+ jupyter-ydoc==0.2.5
358
+ jupyter_client==7.4.9
359
+ jupyter_client==8.6.0
360
+ jupyter_core==5.7.1
361
+ jupyter_server==2.13.0
362
+ jupyter_server_fileid==0.9.1
363
+ jupyter_server_proxy==4.1.0
364
+ jupyter_server_terminals==0.5.1
365
+ jupyter_server_ydoc==0.8.0
366
+ jupyterlab-lsp==5.1.0
367
+ jupyterlab-widgets==3.0.9
368
+ jupyterlab==4.1.5
369
+ jupyterlab_git==0.44.0
370
+ jupyterlab_pygments==0.3.0
371
+ jupyterlab_server==2.25.2
372
+ jupytext==1.16.0
373
+ kaggle-environments==1.14.3
374
+ kaggle==1.6.8
375
+ kagglehub==0.2.2
376
+ keras-cv==0.8.2
377
+ keras-nlp==0.8.2
378
+ keras-tuner==1.4.6
379
+ keras==3.1.1
380
+ kernels-mixer==0.0.7
381
+ keyring==24.3.0
382
+ keyrings.google-artifactregistry-auth==1.1.2
383
+ kfp-pipeline-spec==0.2.2
384
+ kfp-server-api==2.0.5
385
+ kfp==2.5.0
386
+ kiwisolver==1.4.5
387
+ kmapper==2.0.1
388
+ kmodes==0.12.2
389
+ korean-lunar-calendar==0.3.1
390
+ kornia==0.7.2
391
+ kornia_rs==0.1.3
392
+ kt-legacy==1.0.5
393
+ kubernetes==26.1.0
394
+ langcodes==3.3.0
395
+ langid==1.1.6
396
+ lazy_loader==0.3
397
+ learntools==0.3.4
398
+ leven==1.0.4
399
+ libclang==16.0.6
400
+ libmambapy==1.5.0
401
+ libpysal==4.9.2
402
+ librosa==0.10.1
403
+ lightgbm==4.2.0
404
+ lightning-utilities==0.11.2
405
+ lime==0.2.0.1
406
+ line-profiler==4.1.2
407
+ linkify-it-py==2.0.3
408
+ llvmlite==0.41.1
409
+ llvmlite==0.42.0
410
+ lml==0.1.0
411
+ locket==1.0.0
412
+ loguru==0.7.2
413
+ lxml==5.2.1
414
+ lz4==4.3.3
415
+ mamba==1.5.0
416
+ mapclassify==2.6.1
417
+ markdown-it-py==3.0.0
418
+ marshmallow==3.21.1
419
+ matplotlib-inline==0.1.6
420
+ matplotlib-venn==0.11.10
421
+ matplotlib==3.7.5
422
+ matplotlib==3.8.3
423
+ mccabe==0.7.0
424
+ mdit-py-plugins==0.4.0
425
+ mdurl==0.1.2
426
+ memory-profiler==0.61.0
427
+ menuinst==2.0.1
428
+ mercantile==1.2.1
429
+ mgwr==2.2.1
430
+ missingno==0.5.2
431
+ mistune==0.8.4
432
+ mizani==0.11.1
433
+ ml-dtypes==0.2.0
434
+ mlcrate==0.2.0
435
+ mlens==0.2.3
436
+ mlxtend==0.23.1
437
+ mne==1.6.1
438
+ mnist==0.2.2
439
+ momepy==0.7.0
440
+ more-itertools==10.2.0
441
+ mpld3==0.5.10
442
+ mpmath==1.3.0
443
+ msgpack==1.0.7
444
+ multidict==6.0.4
445
+ multimethod==1.10
446
+ multipledispatch==1.0.0
447
+ multiprocess==0.70.15
448
+ munkres==1.1.4
449
+ murmurhash==1.0.10
450
+ mypy-extensions==1.0.0
451
+ namex==0.0.7
452
+ nb-conda-kernels==2.3.1
453
+ nb_conda==2.2.1
454
+ nbclassic==1.0.0
455
+ nbclient==0.5.13
456
+ nbconvert==6.4.5
457
+ nbdime==3.2.0
458
+ nbformat==5.9.2
459
+ ndindex==1.8
460
+ nest-asyncio==1.5.8
461
+ networkx==3.2.1
462
+ nibabel==5.2.1
463
+ nilearn==0.10.3
464
+ ninja==1.11.1.1
465
+ nltk==3.2.4
466
+ nose==1.3.7
467
+ notebook==6.5.4
468
+ notebook==6.5.6
469
+ notebook_executor==0.2
470
+ notebook_shim==0.2.3
471
+ numba==0.58.1
472
+ numba==0.59.1
473
+ numexpr==2.10.0
474
+ numpy==1.26.4
475
+ nvidia-ml-py==11.495.46
476
+ nvtx==0.2.10
477
+ oauth2client==4.1.3
478
+ oauthlib==3.2.2
479
+ objsize==0.6.1
480
+ odfpy==1.4.1
481
+ olefile==0.47
482
+ onnx==1.16.0
483
+ opencensus-context==0.1.3
484
+ opencensus==0.11.4
485
+ opencv-contrib-python==4.9.0.80
486
+ opencv-python-headless==4.9.0.80
487
+ opencv-python==4.9.0.80
488
+ openpyxl==3.1.2
489
+ openslide-python==1.3.1
490
+ opentelemetry-api==1.22.0
491
+ opentelemetry-exporter-otlp-proto-common==1.22.0
492
+ opentelemetry-exporter-otlp-proto-grpc==1.22.0
493
+ opentelemetry-exporter-otlp-proto-http==1.22.0
494
+ opentelemetry-exporter-otlp==1.22.0
495
+ opentelemetry-proto==1.22.0
496
+ opentelemetry-sdk==1.22.0
497
+ opentelemetry-semantic-conventions==0.43b0
498
+ opt-einsum==3.3.0
499
+ optax==0.2.2
500
+ optree==0.11.0
501
+ optuna==3.6.1
502
+ orbax-checkpoint==0.5.7
503
+ ordered-set==4.1.0
504
+ orjson==3.9.10
505
+ ortools==9.4.1874
506
+ osmnx==1.9.2
507
+ overrides==7.4.0
508
+ packaging==21.3
509
+ pandas-datareader==0.10.0
510
+ pandas-profiling==3.6.6
511
+ pandas-summary==0.2.0
512
+ pandas==2.1.4
513
+ pandas==2.2.1
514
+ pandasql==0.7.3
515
+ pandocfilters==1.5.0
516
+ panel==1.3.8
517
+ papermill==2.5.0
518
+ param==2.1.0
519
+ parso==0.8.3
520
+ partd==1.4.1
521
+ path.py==12.5.0
522
+ path==16.10.0
523
+ pathos==0.3.2
524
+ pathy==0.10.3
525
+ patsy==0.5.6
526
+ pdf2image==1.17.0
527
+ peft==0.7.1
528
+ pettingzoo==1.24.0
529
+ pexpect==4.8.0
530
+ pexpect==4.9.0
531
+ phik==0.12.4
532
+ pickleshare==0.7.5
533
+ pillow==10.3.0
534
+ pip==23.3.2
535
+ pkgutil_resolve_name==1.3.10
536
+ platformdirs==4.2.0
537
+ plotly-express==0.4.1
538
+ plotly==5.18.0
539
+ plotnine==0.13.4
540
+ pluggy==1.4.0
541
+ pointpats==2.4.0
542
+ polars==0.20.18
543
+ polyglot==16.7.4
544
+ pooch==1.8.1
545
+ pox==0.3.4
546
+ ppca==0.0.4
547
+ ppft==1.7.6.8
548
+ preprocessing==0.1.13
549
+ preshed==3.0.9
550
+ prettytable==3.9.0
551
+ progressbar2==4.4.2
552
+ prometheus-client==0.19.0
553
+ promise==2.3
554
+ prompt-toolkit==3.0.42
555
+ prompt-toolkit==3.0.43
556
+ prophet==1.1.1
557
+ proto-plus==1.23.0
558
+ protobuf==3.20.3
559
+ protobuf==4.21.12
560
+ psutil==5.9.3
561
+ psutil==5.9.7
562
+ ptyprocess==0.7.0
563
+ pudb==2024.1
564
+ pure-eval==0.2.2
565
+ py-cpuinfo==9.0.0
566
+ py-spy==0.3.14
567
+ py4j==0.10.9.7
568
+ pyLDAvis==3.4.1
569
+ pyOpenSSL==23.3.0
570
+ pyaml==23.12.0
571
+ pyarrow-hotfix==0.6
572
+ pyarrow==15.0.2
573
+ pyasn1-modules==0.3.0
574
+ pyasn1==0.5.1
575
+ pybind11==2.12.0
576
+ pyclipper==1.3.0.post5
577
+ pycodestyle==2.11.1
578
+ pycosat==0.6.6
579
+ pycparser==2.21
580
+ pycryptodome==3.20.0
581
+ pyct==0.5.0
582
+ pycuda==2024.1
583
+ pydantic==2.5.3
584
+ pydantic==2.6.4
585
+ pydantic_core==2.14.6
586
+ pydantic_core==2.16.3
587
+ pydegensac==0.1.2
588
+ pydicom==2.4.4
589
+ pydocstyle==6.3.0
590
+ pydot==1.4.2
591
+ pydub==0.25.1
592
+ pyemd==1.0.0
593
+ pyerfa==2.0.1.1
594
+ pyexcel-io==0.6.6
595
+ pyexcel-ods==0.6.0
596
+ pyflakes==3.2.0
597
+ pygltflib==1.16.2
598
+ pykalman==0.9.7
599
+ pylibraft==23.8.0
600
+ pylint==3.1.0
601
+ pymc3==3.11.4
602
+ pymongo==3.13.0
603
+ pynndescent==0.5.12
604
+ pynvml==11.4.1
605
+ pynvrtc==9.2
606
+ pyparsing==3.1.1
607
+ pyparsing==3.1.2
608
+ pypdf==4.1.0
609
+ pyproj==3.6.1
610
+ pysal==24.1
611
+ pyshp==2.3.1
612
+ pytesseract==0.3.10
613
+ pytest==8.1.1
614
+ python-bidi==0.4.2
615
+ python-dateutil==2.9.0.post0
616
+ python-dotenv==1.0.0
617
+ python-json-logger==2.0.7
618
+ python-louvain==0.16
619
+ python-lsp-jsonrpc==1.1.2
620
+ python-lsp-server==1.11.0
621
+ python-slugify==8.0.4
622
+ python-utils==3.8.2
623
+ pythreejs==2.4.2
624
+ pytoolconfig==1.3.1
625
+ pytools==2024.1.1
626
+ pytorch-ignite==0.5.0.post2
627
+ pytorch-lightning==2.2.1
628
+ pytz==2023.3.post1
629
+ pytz==2024.1
630
+ pyu2f==0.1.5
631
+ pyviz_comms==3.0.2
632
+ pyzmq==24.0.1
633
+ pyzmq==25.1.2
634
+ qgrid==1.3.1
635
+ qtconsole==5.5.1
636
+ quantecon==0.7.2
637
+ qudida==0.0.4
638
+ raft-dask==23.8.0
639
+ rasterio==1.3.9
640
+ rasterstats==0.19.0
641
+ ray-cpp==2.9.0
642
+ ray==2.9.0
643
+ referencing==0.32.1
644
+ regex==2023.12.25
645
+ requests-oauthlib==1.3.1
646
+ requests-toolbelt==0.10.1
647
+ requests==2.31.0
648
+ retrying==1.3.3
649
+ retrying==1.3.4
650
+ rfc3339-validator==0.1.4
651
+ rfc3986-validator==0.1.1
652
+ rgf-python==3.12.0
653
+ rich-click==1.7.4
654
+ rich==13.7.0
655
+ rich==13.7.1
656
+ rmm==23.8.0
657
+ rope==1.13.0
658
+ rpds-py==0.16.2
659
+ rsa==4.9
660
+ ruamel-yaml-conda==0.15.100
661
+ ruamel.yaml.clib==0.2.7
662
+ ruamel.yaml==0.17.40
663
+ s2sphere==0.2.5
664
+ s3fs==2024.2.0
665
+ s3transfer==0.6.2
666
+ safetensors==0.4.2
667
+ scattertext==0.1.19
668
+ scikit-image==0.22.0
669
+ scikit-learn-intelex==2024.2.0
670
+ scikit-learn==1.2.2
671
+ scikit-multilearn==0.2.0
672
+ scikit-optimize==0.10.1
673
+ scikit-plot==0.3.7
674
+ scikit-surprise==1.1.3
675
+ scipy==1.11.4
676
+ scipy==1.12.0
677
+ seaborn==0.12.2
678
+ segment_anything==1.0
679
+ segregation==2.5
680
+ semver==3.0.2
681
+ sentencepiece==0.2.0
682
+ sentry-sdk==1.44.1
683
+ setproctitle==1.3.3
684
+ setuptools-git==1.2
685
+ setuptools-scm==8.0.4
686
+ setuptools==69.0.3
687
+ shap==0.44.1
688
+ shapely==2.0.3
689
+ shellingham==1.5.4
690
+ shtab==1.7.1
691
+ simpervisor==1.0.0
692
+ simplejson==3.19.2
693
+ six==1.16.0
694
+ sklearn-pandas==2.2.0
695
+ slicer==0.0.7
696
+ smart-open==6.4.0
697
+ smmap==5.0.1
698
+ sniffio==1.3.0
699
+ snowballstemmer==2.2.0
700
+ snuggs==1.4.7
701
+ sortedcontainers==2.4.0
702
+ soundfile==0.12.1
703
+ soupsieve==2.5
704
+ soxr==0.3.7
705
+ spacy-legacy==3.0.12
706
+ spacy-loggers==1.0.5
707
+ spacy==3.7.2
708
+ spaghetti==1.7.5.post1
709
+ spectral==0.23.1
710
+ spglm==1.1.0
711
+ sphinx-rtd-theme==0.2.4
712
+ spint==1.0.7
713
+ splot==1.1.5.post1
714
+ spopt==0.6.0
715
+ spreg==1.4.2
716
+ spvcm==0.3.0
717
+ sqlparse==0.4.4
718
+ squarify==0.4.3
719
+ srsly==2.4.8
720
+ stable-baselines3==2.1.0
721
+ stack-data==0.6.2
722
+ stack-data==0.6.3
723
+ stanio==0.5.0
724
+ starlette==0.32.0.post1
725
+ statsmodels==0.14.1
726
+ stemming==1.0.1
727
+ stop-words==2018.7.23
728
+ stopit==1.1.2
729
+ stumpy==1.12.0
730
+ sympy==1.12
731
+ tables==3.9.2
732
+ tabulate==0.9.0
733
+ tangled-up-in-unicode==0.2.0
734
+ tbb==2021.12.0
735
+ tblib==3.0.0
736
+ tenacity==8.2.3
737
+ tensorboard-data-server==0.7.2
738
+ tensorboard-plugin-profile==2.15.0
739
+ tensorboard==2.15.1
740
+ tensorboardX==2.6.2.2
741
+ tensorflow-cloud==0.1.16
742
+ tensorflow-datasets==4.9.4
743
+ tensorflow-decision-forests==1.8.1
744
+ tensorflow-estimator==2.15.0
745
+ tensorflow-hub==0.16.1
746
+ tensorflow-io-gcs-filesystem==0.35.0
747
+ tensorflow-io==0.35.0
748
+ tensorflow-metadata==0.14.0
749
+ tensorflow-probability==0.23.0
750
+ tensorflow-serving-api==2.14.1
751
+ tensorflow-text==2.15.0
752
+ tensorflow-transform==0.14.0
753
+ tensorflow==2.15.0
754
+ tensorstore==0.1.56
755
+ termcolor==2.4.0
756
+ terminado==0.18.0
757
+ testpath==0.6.0
758
+ text-unidecode==1.3
759
+ textblob==0.18.0.post0
760
+ texttable==1.7.0
761
+ tf_keras==2.15.1
762
+ tfp-nightly==0.24.0.dev0
763
+ thinc==8.2.2
764
+ threadpoolctl==3.2.0
765
+ tifffile==2023.12.9
766
+ timm==0.9.16
767
+ tinycss2==1.2.1
768
+ tobler==0.11.2
769
+ tokenizers==0.15.2
770
+ toml==0.10.2
771
+ tomli==2.0.1
772
+ tomlkit==0.12.4
773
+ toolz==0.12.1
774
+ torch==2.1.2
775
+ torchaudio==2.1.2
776
+ torchdata==0.7.1
777
+ torchinfo==1.8.0
778
+ torchmetrics==1.3.2
779
+ torchtext==0.16.2
780
+ torchvision==0.16.2
781
+ tornado==6.3.3
782
+ tqdm==4.66.1
783
+ traceml==1.0.8
784
+ traitlets==5.9.0
785
+ traittypes==0.2.1
786
+ transformers==4.36.2
787
+ treelite-runtime==3.2.0
788
+ treelite==3.2.0
789
+ trl==0.7.7
790
+ truststore==0.8.0
791
+ trx-python==0.2.9
792
+ tsfresh==0.20.2
793
+ typeguard==4.1.5
794
+ typer==0.9.0
795
+ typer==0.9.4
796
+ types-python-dateutil==2.8.19.20240106
797
+ typing-inspect==0.9.0
798
+ typing-utils==0.1.0
799
+ typing_extensions==4.9.0
800
+ tyro==0.8.3
801
+ tzdata==2023.4
802
+ uc-micro-py==1.0.3
803
+ ucx-py==0.33.0
804
+ ujson==5.9.0
805
+ umap-learn==0.5.5
806
+ unicodedata2==15.1.0
807
+ update-checker==0.18.0
808
+ uri-template==1.3.0
809
+ uritemplate==3.0.1
810
+ urllib3==1.26.18
811
+ urllib3==2.1.0
812
+ urwid==2.6.10
813
+ urwid_readline==0.14
814
+ uvicorn==0.25.0
815
+ uvloop==0.19.0
816
+ vaex-astro==0.9.3
817
+ vaex-core==4.17.1
818
+ vaex-hdf5==0.14.1
819
+ vaex-jupyter==0.8.2
820
+ vaex-ml==0.18.3
821
+ vaex-server==0.9.0
822
+ vaex-viz==0.5.4
823
+ vaex==4.17.0
824
+ vec_noise==1.1.4
825
+ vecstack==0.4.0
826
+ virtualenv==20.21.0
827
+ visions==0.7.5
828
+ vowpalwabbit==9.9.0
829
+ vtk==9.3.0
830
+ wandb==0.16.5
831
+ wasabi==1.1.2
832
+ watchfiles==0.21.0
833
+ wavio==0.0.8
834
+ wcwidth==0.2.13
835
+ weasel==0.3.4
836
+ webcolors==1.13
837
+ webencodings==0.5.1
838
+ websocket-client==1.7.0
839
+ websockets==12.0
840
+ wfdb==4.1.2
841
+ whatthepatch==1.0.5
842
+ wheel==0.42.0
843
+ widgetsnbextension==3.6.6
844
+ witwidget==1.8.1
845
+ woodwork==0.29.0
846
+ wordcloud==1.9.3
847
+ wordsegment==1.3.1
848
+ wrapt==1.14.1
849
+ xarray-einstats==0.7.0
850
+ xarray==2024.3.0
851
+ xgboost==2.0.3
852
+ xvfbwrapper==0.2.9
853
+ xxhash==3.4.1
854
+ xyzservices==2023.10.1
855
+ y-py==0.6.2
856
+ yapf==0.40.2
857
+ yarl==1.9.3
858
+ yarl==1.9.4
859
+ ydata-profiling==4.6.4
860
+ yellowbrick==1.5
861
+ ypy-websocket==0.8.4
862
+ zict==3.0.0
863
+ zipp==3.17.0
864
+ zstandard==0.22.0
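
The requirements.txt above snapshots the entire Kaggle image (864 pins). Only a handful of them are directly involved in this fine-tune, for example peft==0.7.1, transformers==4.36.2, torch==2.1.2, datasets==2.15.0, tokenizers==0.15.2, accelerate==0.25.0, bitsandbytes==0.41.3 and trl==0.7.7, all listed above. The sketch below is an editorial addition (the choice of "core" packages is our assumption, not something the run enforces) that checks an environment against exactly those pins:

```python
# Sketch only: verify that the core fine-tuning stack matches the versions
# pinned in the requirements.txt above. The selection of "core" packages is
# an editorial assumption.
from importlib.metadata import version, PackageNotFoundError

CORE_PINS = {
    "peft": "0.7.1",
    "transformers": "4.36.2",
    "torch": "2.1.2",
    "datasets": "2.15.0",
    "tokenizers": "0.15.2",
    "accelerate": "0.25.0",
    "bitsandbytes": "0.41.3",
    "trl": "0.7.7",
}

for name, pinned in CORE_PINS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = None
    status = "OK" if installed == pinned else "MISMATCH"
    print(f"{name:15s} pinned={pinned:10s} installed={installed} {status}")
```
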
wandb/run-20240416_061855-mrfkrks2/files/wandb-metadata.json ADDED
@@ -0,0 +1,66 @@
1
+ {
2
+ "os": "Linux-5.15.133+-x86_64-with-glibc2.31",
3
+ "python": "3.10.13",
4
+ "heartbeatAt": "2024-04-16T06:18:56.240194",
5
+ "startedAt": "2024-04-16T06:18:55.474625",
6
+ "docker": null,
7
+ "cuda": null,
8
+ "args": [],
9
+ "state": "running",
10
+ "program": "kaggle.ipynb",
11
+ "codePathLocal": null,
12
+ "root": "/kaggle/working",
13
+ "host": "7447fe43e87a",
14
+ "username": "root",
15
+ "executable": "/opt/conda/bin/python3.10",
16
+ "cpu_count": 2,
17
+ "cpu_count_logical": 4,
18
+ "cpu_freq": {
19
+ "current": 2000.138,
20
+ "min": 0.0,
21
+ "max": 0.0
22
+ },
23
+ "cpu_freq_per_core": [
24
+ {
25
+ "current": 2000.138,
26
+ "min": 0.0,
27
+ "max": 0.0
28
+ },
29
+ {
30
+ "current": 2000.138,
31
+ "min": 0.0,
32
+ "max": 0.0
33
+ },
34
+ {
35
+ "current": 2000.138,
36
+ "min": 0.0,
37
+ "max": 0.0
38
+ },
39
+ {
40
+ "current": 2000.138,
41
+ "min": 0.0,
42
+ "max": 0.0
43
+ }
44
+ ],
45
+ "disk": {
46
+ "/": {
47
+ "total": 8062.387607574463,
48
+ "used": 5576.1118812561035
49
+ }
50
+ },
51
+ "gpu": "Tesla T4",
52
+ "gpu_count": 2,
53
+ "gpu_devices": [
54
+ {
55
+ "name": "Tesla T4",
56
+ "memory_total": 16106127360
57
+ },
58
+ {
59
+ "name": "Tesla T4",
60
+ "memory_total": 16106127360
61
+ }
62
+ ],
63
+ "memory": {
64
+ "total": 31.357559204101562
65
+ }
66
+ }
wandb/run-20240416_061855-mrfkrks2/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
1
+ {"train/loss": 0.1277, "train/learning_rate": 0.0, "train/epoch": 7.0, "train/global_step": 441, "_timestamp": 1713268369.8602688, "_runtime": 20034.376485824585, "_step": 15, "eval/loss": 0.1535859853029251, "eval/runtime": 27.971, "eval/samples_per_second": 0.93, "eval/steps_per_second": 0.322, "train/train_runtime": 20006.4283, "train/train_samples_per_second": 0.33, "train/train_steps_per_second": 0.022, "train/total_flos": 2.684839994232668e+17, "train/train_loss": 0.44885902664288374}
wandb/run-20240416_061855-mrfkrks2/logs/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240416_061855-mrfkrks2/logs/debug.log ADDED
@@ -0,0 +1,41 @@
1
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.16.5
2
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
3
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
4
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
5
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
6
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
7
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
8
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
9
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
10
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_log_setup():527] Logging user logs to /kaggle/working/wandb/run-20240416_061855-mrfkrks2/logs/debug.log
11
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_log_setup():528] Logging internal logs to /kaggle/working/wandb/run-20240416_061855-mrfkrks2/logs/debug-internal.log
12
+ 2024-04-16 06:18:55,476 INFO MainThread:34 [wandb_init.py:_jupyter_setup():473] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7a5364210f10>
13
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():567] calling init triggers
14
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():574] wandb.init called with sweep_config: {}
15
+ config: {}
16
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():617] starting backend
17
+ 2024-04-16 06:18:55,477 INFO MainThread:34 [wandb_init.py:init():621] setting up manager
18
+ 2024-04-16 06:18:55,481 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
19
+ 2024-04-16 06:18:55,483 INFO MainThread:34 [wandb_init.py:init():629] backend started and connected
20
+ 2024-04-16 06:18:55,496 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1299] probe notebook
21
+ 2024-04-16 06:18:55,848 INFO MainThread:34 [wandb_init.py:init():721] updated telemetry
22
+ 2024-04-16 06:18:55,852 INFO MainThread:34 [wandb_init.py:init():754] communicating run to backend with 90.0 second timeout
23
+ 2024-04-16 06:18:56,112 INFO MainThread:34 [wandb_run.py:_on_init():2344] communicating current version
24
+ 2024-04-16 06:18:56,208 INFO MainThread:34 [wandb_run.py:_on_init():2353] got version response upgrade_message: "wandb version 0.16.6 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
25
+
26
+ 2024-04-16 06:18:56,208 INFO MainThread:34 [wandb_init.py:init():805] starting run threads in backend
27
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_console_start():2323] atexit reg
28
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_redirect():2178] redirect: wrap_raw
29
+ 2024-04-16 06:19:12,559 INFO MainThread:34 [wandb_run.py:_redirect():2243] Wrapping output streams.
30
+ 2024-04-16 06:19:12,560 INFO MainThread:34 [wandb_run.py:_redirect():2268] Redirects installed.
31
+ 2024-04-16 06:19:12,561 INFO MainThread:34 [wandb_init.py:init():848] run started, returning control to user process
32
+ 2024-04-16 06:19:12,566 INFO MainThread:34 [wandb_run.py:_config_callback():1347] config_cb None None {'vocab_size': 32016, 'max_position_embeddings': 16384, 'hidden_size': 4096, 'intermediate_size': 11008, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'num_key_value_heads': 32, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'pretraining_tp': 1, 'use_cache': False, 'rope_theta': 1000000, 'rope_scaling': None, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlamaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'codellama/CodeLlama-7b-hf', 'transformers_version': '4.36.2', 'model_type': 'llama', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', 'load_in_8bit': False, 'load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': True, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16'}, 'output_dir': '/kaggle/working/', 'overwrite_output_dir': True, 'do_train': False, 'do_eval': True, 'do_predict': False, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 3, 'per_device_eval_batch_size': 3, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 5, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 5e-05, 'weight_decay': 0.01, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 7, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 20, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working//logs', 'logging_strategy': 'epoch', 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 5, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 
'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'past_index': -1, 'run_name': 'fine-tuning-Phi2-with-webglm-qa-with-lora', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': True, 'metric_for_best_model': 'loss', 'greater_is_better': False, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_8bit', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': {'use_reentrant': False}, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': False, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None}
33
+ 2024-04-16 11:52:21,876 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
34
+ 2024-04-16 11:52:21,876 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
35
+ 2024-04-16 11:52:21,883 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
36
+ 2024-04-16 11:52:49,863 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
37
+ 2024-04-16 11:52:49,863 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
38
+ 2024-04-16 11:52:49,869 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
39
+ 2024-04-16 11:52:52,066 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
40
+ 2024-04-16 11:52:52,066 INFO MainThread:34 [wandb_init.py:_pause_backend():438] pausing backend
41
+ 2024-04-16 11:55:22,701 INFO MainThread:34 [wandb_init.py:_resume_backend():443] resuming backend
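
The config_cb entry in debug.log above records the base checkpoint (codellama/CodeLlama-7b-hf) and the BitsAndBytes 4-bit settings used during training (nf4, double quantization, bfloat16 compute). The following is a hedged sketch, not the author's script, of reloading the saved LoRA adapter on top of the same quantized base for inference; `path/to/adapter` is a placeholder for wherever adapter_config.json and adapter_model.safetensors live, and the prompt is illustrative only:

```python
# Sketch only: reload the LoRA adapter from this commit on top of the 4-bit
# base model recorded in debug.log. "path/to/adapter" is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # matches load_in_4bit in the logged config
    bnb_4bit_quant_type="nf4",              # matches bnb_4bit_quant_type
    bnb_4bit_use_double_quant=True,         # matches bnb_4bit_use_double_quant
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bnb_4bit_compute_dtype
)

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

model = PeftModel.from_pretrained(base, "path/to/adapter")  # placeholder path
model.eval()

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
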
wandb/run-20240416_061855-mrfkrks2/run-mrfkrks2.wandb ADDED
Binary file (578 kB). View file