---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: mixtral_mix_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mixtral_mix_2

This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2832

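Since this repository holds a PEFT adapter rather than full model weights, the snippet below is a minimal, untested sketch of one way to load it for inference. The adapter repo id `Krisbiantoro/mixtral_mix_2` and the quantization-free loading settings are assumptions, not part of this card; adjust them to your environment and access to the `mistralai/Mixtral-8x7B-v0.1` base weights.

```python
# Minimal sketch: load the Mixtral-8x7B base model and apply this PEFT adapter.
# Assumptions: the adapter lives at "Krisbiantoro/mixtral_mix_2" (illustrative id)
# and enough GPU memory (or a quantization config) is available for the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "Krisbiantoro/mixtral_mix_2"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
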
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200

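The training script itself is not published here; the sketch below only shows how the hyperparameters listed above might be expressed as `transformers.TrainingArguments`. The output path, precision setting, and optimizer choice marked as assumptions are illustrative, not the author's actual configuration.

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments.
# This is not the original training script; LoRA and dataset settings are unknown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mixtral_mix_2",      # assumed output path
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    seed=42,                         # seed: 42
    gradient_accumulation_steps=32,  # 2 x 32 = total_train_batch_size 64 (single device)
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), epsilon=1e-08 (defaults)
    lr_scheduler_type="cosine",      # lr_scheduler_type: cosine
    warmup_ratio=0.03,               # lr_scheduler_warmup_ratio: 0.03
    max_steps=200,                    # training_steps: 200
    bf16=True,                        # assumption: mixed precision for Mixtral
)
```

Given the `trl`, `sft`, and `peft` tags above, these arguments were presumably passed to a TRL `SFTTrainer` together with a LoRA configuration, but that code is not included in this repository.
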
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.686         | 0.36  | 10   | 1.1952          |
| 1.0961        | 0.71  | 20   | 0.9786          |
| 0.9359        | 1.07  | 30   | 0.8352          |
| 0.7603        | 1.42  | 40   | 0.6454          |
| 0.5832        | 1.78  | 50   | 0.4951          |
| 0.4764        | 2.14  | 60   | 0.4361          |
| 0.4172        | 2.49  | 70   | 0.3998          |
| 0.3911        | 2.85  | 80   | 0.3735          |
| 0.37          | 3.2   | 90   | 0.3564          |
| 0.3408        | 3.56  | 100  | 0.3407          |
| 0.3454        | 3.92  | 110  | 0.3270          |
| 0.3153        | 4.27  | 120  | 0.3142          |
| 0.3025        | 4.63  | 130  | 0.3046          |
| 0.3076        | 4.98  | 140  | 0.2963          |
| 0.2791        | 5.34  | 150  | 0.2903          |
| 0.2907        | 5.7   | 160  | 0.2863          |
| 0.2787        | 6.05  | 170  | 0.2840          |
| 0.284         | 6.41  | 180  | 0.2832          |
| 0.264         | 6.76  | 190  | 0.2832          |
| 0.2734        | 7.12  | 200  | 0.2832          |

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.38.1
- PyTorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0