vanherzog/Mixtral_Alpace_v2_NIKI

Commit 279eb54 (1 parent: 6ca5ca6), committed by vanherzog
README.md CHANGED
@@ -19,6 +19,8 @@ should probably proofread and complete it, then remove this comment. -->
  # Mixtral_Alpace_v2_NIKI
 
  This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1688
 
  ## Model description
 
@@ -38,7 +40,7 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2.5e-05
- - train_batch_size: 16
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -46,6 +48,42 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_steps: 0.03
  - training_steps: 300
 
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.3725        | 0.0606 | 10   | 1.3384          |
+ | 1.339         | 0.1212 | 20   | 1.3260          |
+ | 1.3448        | 0.1818 | 30   | 1.3121          |
+ | 1.2777        | 0.2424 | 40   | 1.2984          |
+ | 1.3067        | 0.3030 | 50   | 1.2853          |
+ | 1.2674        | 0.3636 | 60   | 1.2723          |
+ | 1.2842        | 0.4242 | 70   | 1.2610          |
+ | 1.2835        | 0.4848 | 80   | 1.2505          |
+ | 1.2688        | 0.5455 | 90   | 1.2406          |
+ | 1.2892        | 0.6061 | 100  | 1.2315          |
+ | 1.2565        | 0.6667 | 110  | 1.2236          |
+ | 1.2145        | 0.7273 | 120  | 1.2163          |
+ | 1.2297        | 0.7879 | 130  | 1.2101          |
+ | 1.2406        | 0.8485 | 140  | 1.2042          |
+ | 1.2146        | 0.9091 | 150  | 1.1986          |
+ | 1.2386        | 0.9697 | 160  | 1.1940          |
+ | 1.1929        | 1.0303 | 170  | 1.1899          |
+ | 1.2036        | 1.0909 | 180  | 1.1869          |
+ | 1.181         | 1.1515 | 190  | 1.1837          |
+ | 1.201         | 1.2121 | 200  | 1.1812          |
+ | 1.1965        | 1.2727 | 210  | 1.1786          |
+ | 1.2084        | 1.3333 | 220  | 1.1765          |
+ | 1.2097        | 1.3939 | 230  | 1.1746          |
+ | 1.176         | 1.4545 | 240  | 1.1727          |
+ | 1.1757        | 1.5152 | 250  | 1.1715          |
+ | 1.1977        | 1.5758 | 260  | 1.1705          |
+ | 1.1686        | 1.6364 | 270  | 1.1701          |
+ | 1.1679        | 1.6970 | 280  | 1.1694          |
+ | 1.1779        | 1.7576 | 290  | 1.1690          |
+ | 1.179         | 1.8182 | 300  | 1.1688          |
+
+
  ### Framework versions
 
  - PEFT 0.10.0
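A side note on the updated training results: the Epoch and Step columns are consistent with a fixed number of optimizer steps per epoch. A minimal sketch of that relation — the 165 steps-per-epoch figure is inferred from the table, not stated anywhere in the commit:

```python
# Inferred from the README's training-results table: epoch = step / steps_per_epoch.
# Step 10 -> epoch 0.0606 and step 300 -> epoch 1.8182 both imply ~165 steps per
# epoch (an assumption derived from the table values; the commit does not state it).
steps_per_epoch = 165

def epoch_at(step):
    """Epoch value the table would report at a given global step."""
    return round(step / steps_per_epoch, 4)

print(epoch_at(10))   # 0.0606
print(epoch_at(300))  # 1.8182
```

Under this reading, the 300 `training_steps` correspond to roughly 1.8 passes over the training split, matching the final table row.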
adapter_config.json CHANGED
@@ -20,14 +20,14 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "v_proj",
- "k_proj",
- "gate_proj",
  "down_proj",
- "q_proj",
  "o_proj",
- "lm_head",
- "up_proj"
+ "gate_proj",
+ "v_proj",
+ "q_proj",
+ "k_proj",
+ "up_proj",
+ "lm_head"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
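The `target_modules` change above is a pure reordering of the JSON array; a quick check, with both lists copied from the diff:

```python
# target_modules before and after this commit, copied verbatim from the diff above.
old = ["v_proj", "k_proj", "gate_proj", "down_proj", "q_proj", "o_proj", "lm_head", "up_proj"]
new = ["down_proj", "o_proj", "gate_proj", "v_proj", "q_proj", "k_proj", "up_proj", "lm_head"]

# The set of modules the LoRA adapter targets is unchanged; only the
# serialization order differs, so adapter behavior should be identical.
print(set(old) == set(new))  # True
```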
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e5ec2902eea04260b46da459025278063264220d823f5ed4100733bc416b3809
+ oid sha256:baa0c4b7004e4c44e16cad91595aff6e6560fb92f8a8f56862d1388b0493f9a1
  size 751667752
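The safetensors, tfevents, and training_args entries in this commit are Git LFS pointer files: three `key value` lines identifying the real blob by sha256 and size. A small parsing sketch, using the new adapter_model.safetensors pointer from this commit:

```python
# Parse a Git LFS pointer file (one "key value" pair per line, split on the
# first space). Pointer text copied from this commit's adapter_model.safetensors.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:baa0c4b7004e4c44e16cad91595aff6e6560fb92f8a8f56862d1388b0493f9a1
size 751667752"""

fields = dict(line.split(" ", 1) for line in pointer.splitlines())
print(fields["oid"])        # the sha256 identifier of the actual weight blob
print(int(fields["size"]))  # 751667752
```

The unchanged `size` alongside a changed `oid` is what you would expect here: the adapter weights were retrained (new contents) but the tensor shapes, and hence the file size, stayed the same.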
runs/May03_07-59-08_ea25dfb4a21c/events.out.tfevents.1714723151.ea25dfb4a21c.1159.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cca5955e504251248e31eb9a5abc49ef3528dcd0a28b706f9401d6e4c6ceb9f0
+ size 10624
runs/May03_08-00-26_ea25dfb4a21c/events.out.tfevents.1714723236.ea25dfb4a21c.1159.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4762097ed3af8fbf1f3c617e15311d4f5199ce1e2cad7ab12031b3058b036d5
+ size 20061
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dbe1f4e76d9f3e41b2c1dc12d129abb190c4437549bfb30b5e3b82304279edab
+ oid sha256:4659e7eb693732069b1cbe5d5c4c2a0d561b49ba9eec1792a87211ab8eee27fb
  size 5048