Commit ee11e38 by Salmamoori (1 parent: e8e7bed)

End of training

Files changed (1):
  1. README.md (+135 -0)
README.md ADDED
---
license: mit
base_model: microsoft/MiniLM-L12-H384-uncased
tags:
- Language
- image-Emotion
- miniLM
- PyTorch
- Trainer
- SequenceClassification
- WeightedLoss
- CrossEntropyLoss
- F1Score
- HuggingFaceHub
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: miniLM_finetuned_Emotion_2024_06_17
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: F1
      type: f1
      value: 0.9349971922956838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# miniLM_finetuned_Emotion_2024_06_17

This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4059
- F1: 0.9350

## Model description

More information needed

## Intended uses & limitations

More information needed

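As an illustration of intended use, the checkpoint loads like any other `text-classification` model from the Hub. A minimal sketch, assuming the repository id matches the author and model name shown in this card:

```python
from transformers import pipeline

# Repository id is assumed from the author and model name in this card.
classifier = pipeline(
    "text-classification",
    model="Salmamoori/miniLM_finetuned_Emotion_2024_06_17",
)

# Returns the top-scoring emotion label and its score for each input string.
print(classifier("I can't believe how well this turned out!"))
```
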
## Training and evaluation data

More information needed

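The metadata above points at the `emotion` dataset (config `split`), with the F1 value reported on its validation split. A minimal loading sketch, assuming the standard Hub copy of the dataset:

```python
from datasets import load_dataset

# The "split" config provides the usual train / validation / test partitions,
# matching the dataset/config/split fields in the metadata above.
emotions = load_dataset("emotion", "split")

print(emotions)              # DatasetDict with train/validation/test splits
print(emotions["train"][0])  # {"text": ..., "label": ...}
```
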
## Training procedure

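The `WeightedLoss` and `CrossEntropyLoss` tags in the metadata suggest that training used a class-weighted cross-entropy loss, but the card does not record the weights. The snippet below is only a sketch of the usual pattern with the `Trainer`; the subclass name and the source of the weights are assumptions:

```python
import torch
from torch import nn
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    """Hypothetical Trainer subclass applying class-weighted cross-entropy."""

    def __init__(self, class_weights: torch.Tensor, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # e.g. inverse label frequencies computed from the training split
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = nn.CrossEntropyLoss(weight=self.class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss
```
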
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

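Those settings correspond roughly to a `TrainingArguments` configuration like the sketch below; the output directory and the per-epoch evaluation and logging cadence are assumptions rather than values recorded in this card:

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="miniLM_finetuned_Emotion_2024_06_17",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                  # "Native AMP" mixed precision
    eval_strategy="epoch",      # one evaluation per epoch, as in the table below
    logging_strategy="epoch",
)
```
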
### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3684        | 1.0   | 250   | 1.0416          | 0.5803 |
| 0.8635        | 2.0   | 500   | 0.6225          | 0.8729 |
| 0.5165        | 3.0   | 750   | 0.3755          | 0.9130 |
| 0.3319        | 4.0   | 1000  | 0.2792          | 0.9256 |
| 0.2494        | 5.0   | 1250  | 0.2474          | 0.9252 |
| 0.1914        | 6.0   | 1500  | 0.2182          | 0.9290 |
| 0.156         | 7.0   | 1750  | 0.2140          | 0.9307 |
| 0.1435        | 8.0   | 2000  | 0.1807          | 0.9351 |
| 0.1258        | 9.0   | 2250  | 0.1830          | 0.9353 |
| 0.1128        | 10.0  | 2500  | 0.1655          | 0.9404 |
| 0.1023        | 11.0  | 2750  | 0.1968          | 0.9339 |
| 0.0967        | 12.0  | 3000  | 0.1816          | 0.9333 |
| 0.0914        | 13.0  | 3250  | 0.1840          | 0.9338 |
| 0.0818        | 14.0  | 3500  | 0.2094          | 0.9316 |
| 0.0755        | 15.0  | 3750  | 0.1945          | 0.9345 |
| 0.0718        | 16.0  | 4000  | 0.2040          | 0.9325 |
| 0.0641        | 17.0  | 4250  | 0.2230          | 0.9369 |
| 0.0613        | 18.0  | 4500  | 0.2349          | 0.9332 |
| 0.0556        | 19.0  | 4750  | 0.2530          | 0.9249 |
| 0.0521        | 20.0  | 5000  | 0.2334          | 0.9376 |
| 0.0526        | 21.0  | 5250  | 0.2531          | 0.9306 |
| 0.0423        | 22.0  | 5500  | 0.2336          | 0.9383 |
| 0.039         | 23.0  | 5750  | 0.2848          | 0.9352 |
| 0.0435        | 24.0  | 6000  | 0.2955          | 0.9363 |
| 0.0371        | 25.0  | 6250  | 0.3075          | 0.9362 |
| 0.0338        | 26.0  | 6500  | 0.2910          | 0.9339 |
| 0.0319        | 27.0  | 6750  | 0.3133          | 0.9343 |
| 0.0305        | 28.0  | 7000  | 0.3106          | 0.9344 |
| 0.0254        | 29.0  | 7250  | 0.3155          | 0.9370 |
| 0.0288        | 30.0  | 7500  | 0.3310          | 0.9339 |
| 0.0228        | 31.0  | 7750  | 0.3463          | 0.9364 |
| 0.0224        | 32.0  | 8000  | 0.3618          | 0.9353 |
| 0.0207        | 33.0  | 8250  | 0.3720          | 0.9347 |
| 0.022         | 34.0  | 8500  | 0.3672          | 0.9374 |
| 0.0222        | 35.0  | 8750  | 0.3525          | 0.9388 |
| 0.0197        | 36.0  | 9000  | 0.3848          | 0.9384 |
| 0.0196        | 37.0  | 9250  | 0.3722          | 0.9369 |
| 0.0175        | 38.0  | 9500  | 0.3490          | 0.9350 |
| 0.0168        | 39.0  | 9750  | 0.3539          | 0.9365 |
| 0.0167        | 40.0  | 10000 | 0.3590          | 0.9391 |
| 0.0144        | 41.0  | 10250 | 0.3824          | 0.9382 |
| 0.0164        | 42.0  | 10500 | 0.3973          | 0.9322 |
| 0.0124        | 43.0  | 10750 | 0.3892          | 0.9372 |
| 0.012         | 44.0  | 11000 | 0.4102          | 0.9333 |
| 0.0142        | 45.0  | 11250 | 0.3921          | 0.9366 |
| 0.012         | 46.0  | 11500 | 0.3925          | 0.9361 |
| 0.0097        | 47.0  | 11750 | 0.3924          | 0.9360 |
| 0.0107        | 48.0  | 12000 | 0.3952          | 0.9330 |
| 0.0093        | 49.0  | 12250 | 0.4067          | 0.9360 |
| 0.0104        | 50.0  | 12500 | 0.4059          | 0.9350 |

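The F1 column is the kind of value a `compute_metrics` callback passed to the `Trainer` would produce; a minimal sketch, with the averaging mode being an assumption since the card does not state it:

```python
import numpy as np
from sklearn.metrics import f1_score


def compute_metrics(eval_pred):
    # eval_pred bundles the model's logits and the reference labels.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Averaging mode is assumed; the card reports only a single F1 value.
    return {"f1": f1_score(labels, predictions, average="weighted")}
```
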
### Framework versions

- Transformers 4.41.2
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1