Professor committed on
Commit 6d98b03
1 Parent(s): 5139a49

Professor/phiner2
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.3835
+- Loss: 2.1637
 
 ## Model description
 
@@ -46,17 +46,14 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 4
+- num_epochs: 1
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 1    | 3.8530          |
-| No log        | 2.0   | 2    | 3.6124          |
-| No log        | 3.0   | 3    | 3.4412          |
-| No log        | 4.0   | 4    | 3.3835          |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 1.9881        | 0.9993 | 1184 | 2.1637          |
 
 
 ### Framework versions
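The hyperparameters listed in the card diff above can be expressed roughly as a `transformers.TrainingArguments` object. This is a minimal sketch, assuming the Trainer API was used; the `output_dir`, learning rate, and batch size are placeholders, since the excerpt does not show them.

```python
# Sketch of the training setup from the updated model card.
# Assumes transformers is installed; values not shown in the diff are guesses.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phiner2",        # hypothetical output directory
    num_train_epochs=1,          # changed from 4 to 1 in this commit
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    # Trainer's default AdamW already uses betas=(0.9, 0.999), eps=1e-08,
    # matching the card's "Adam with betas=(0.9,0.999) and epsilon=1e-08".
    fp16=True,                   # "Native AMP" mixed-precision training
)
```

Note that with one epoch and 1184 optimization steps, the final logged epoch of 0.9993 in the results table is consistent with training stopping just short of a full pass over the data.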
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
-  "o_proj",
-  "down_proj",
+  "gate_up_proj",
   "qkv_proj",
-  "gate_up_proj"
+  "down_proj",
+  "o_proj"
 ],
 "task_type": "CAUSAL_LM",
 "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c24a3e1169ec25022a33d6737f0667d30d88df991499c133b655928bf186f15e
+oid sha256:31e7ea754a538d5a76a8a53da74a043ce7a36b553d7ea56b6c264e234d59054f
 size 100697728
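The pointer files in this commit store only an `oid sha256:` digest and a byte size; the actual weights live in LFS storage. A downloaded artifact can be checked against its pointer with a short stdlib sketch (the function name is ours, not part of any library):

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file in chunks and return the hex SHA-256 digest,
    the same value recorded after 'oid sha256:' in a Git LFS pointer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

For example, after downloading `adapter_model.safetensors`, its digest should equal the new oid `31e7ea75…` shown above, and its size should be 100697728 bytes.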
runs/Sep11_16-19-48_608bdcea72f3/events.out.tfevents.1726071592.608bdcea72f3.22.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11f56f845f10b1bc2ba66eec2400745abb822a8d508c347896cdf90bdb30d917
+size 16373
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:37aeb61e58ad55f458165ebc62de2efa3259c31b5cfdece809151365c91e1f9d
+oid sha256:9d1349990085dfab414f99edd341f6df818dc87a55908c553fc70e5904011d2c
 size 5240