ZiHDeng committed on
Commit 6aac8e1
1 Parent(s): 8dd3f05

End of training

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.2545
+ - Loss: 0.1670
 
 ## Model description
 
@@ -44,33 +44,14 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 30
- - training_steps: 2000
+ - training_steps: 150
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
- | 0.1617 | 0.05 | 100 | 0.1605 |
- | 0.1147 | 0.1 | 200 | 0.1518 |
- | 0.087 | 0.15 | 300 | 0.1739 |
- | 0.0738 | 0.2 | 400 | 0.2029 |
- | 0.0707 | 0.25 | 500 | 0.2067 |
- | 0.0655 | 0.3 | 600 | 0.2156 |
- | 0.0632 | 0.35 | 700 | 0.2138 |
- | 0.0613 | 0.4 | 800 | 0.2285 |
- | 0.058 | 0.45 | 900 | 0.2292 |
- | 0.0582 | 0.5 | 1000 | 0.2417 |
- | 0.0545 | 0.55 | 1100 | 0.2513 |
- | 0.0531 | 0.6 | 1200 | 0.2393 |
- | 0.0527 | 0.65 | 1300 | 0.2526 |
- | 0.0518 | 0.7 | 1400 | 0.2541 |
- | 0.0511 | 0.75 | 1500 | 0.2407 |
- | 0.0501 | 0.8 | 1600 | 0.2527 |
- | 0.0498 | 0.85 | 1700 | 0.2511 |
- | 0.0499 | 0.9 | 1800 | 0.2549 |
- | 0.05 | 0.95 | 1900 | 0.2557 |
- | 0.0492 | 1.0 | 2000 | 0.2545 |
+ | 0.1662 | 0.67 | 100 | 0.1670 |
 
 
 ### Framework versions
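For context, the hyperparameters listed in the README diff above map naturally onto `transformers.TrainingArguments`. The sketch below is an illustration only, not the author's training script; values not shown in the diff (output directory, learning rate, batch size) are hypothetical placeholders.

```python
# Hedged sketch: expressing the README's listed hyperparameters with
# TrainingArguments. Only the values visible in the diff come from the model
# card; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="starcoderbase-1b-finetuned",  # placeholder name
    max_steps=150,                # training_steps after this commit (was 2000)
    lr_scheduler_type="cosine",   # as listed in the README
    warmup_steps=30,              # lr_scheduler_warmup_steps
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # the results table reports eval every 100 steps
    eval_steps=100,
)
```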
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:de6ce0a7fc08a4d8bfb903207840df689b68e60954d2521f06c5c4e13784b1db
+ oid sha256:d6385fd2bbda25ab5eedde86c08d4b0098a5abd609a90bbf443d31fe634d5d22
 size 88891680
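The updated `adapter_model.safetensors` holds PEFT adapter weights rather than a full checkpoint (the accompanying `adapter_config.json` and the ~89 MB size point to a LoRA-style adapter). A hedged sketch of attaching it to the base model with the `peft` library; the adapter repo id below is a placeholder for this repository:

```python
# Sketch only: load bigcode/starcoderbase-1b and attach the adapter weights.
# "your-username/your-adapter-repo" is a placeholder, not the actual repo id.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
model = PeftModel.from_pretrained(base_model, "your-username/your-adapter-repo")
model.eval()  # ready for inference; merge_and_unload() would fold the adapter in
```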
final_checkpoint/adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "revision": null,
   "target_modules": [
     "c_fc",
-     "c_attn",
     "c_proj",
+     "c_attn",
     "q_attn"
   ],
   "task_type": "CAUSAL_LM"
final_checkpoint/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:6cd04770c38477828c7612bee4ff6d4038cd4b74b2deda77faa63f6d355c9f95
+ oid sha256:d6385fd2bbda25ab5eedde86c08d4b0098a5abd609a90bbf443d31fe634d5d22
 size 88891680
runs/Jan30_10-32-47_hgx049.scc.idea/events.out.tfevents.1706581988.hgx049.scc.idea.1712895.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:159d188379da99dc4f5b4258ab7c26fdb2f8bc56a59c94b43f91a1465d865dd7
- size 6052
+ oid sha256:c4af7a561b5ad03bc9cd2a7ca2ff82b963da194cdf0c57ecf0a4d82a6ea89d8a
+ size 6785
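The updated TensorBoard event file can be inspected without launching a server; a sketch using TensorBoard's `EventAccumulator`, where the `"eval/loss"` tag name is an assumption based on the Trainer's usual logging keys:

```python
# Sketch: read logged scalars from the run directory referenced above.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jan30_10-32-47_hgx049.scc.idea")
acc.Reload()                   # parse the tfevents file(s) in the run directory
print(acc.Tags()["scalars"])   # list the scalar tags that were actually logged
for event in acc.Scalars("eval/loss"):  # assumed tag name; pick one from the list above
    print(event.step, event.value)
```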