qfrodicio committed
Commit fe1eb96 (1 parent: b787e4d)

Training complete
README.md CHANGED
@@ -3,10 +3,10 @@ base_model: MMG/mlm-spanish-roberta-base
  tags:
  - generated_from_trainer
  metrics:
+ - accuracy
  - precision
  - recall
  - f1
- - accuracy
  model-index:
  - name: roberta-finetuned-intention-prediction-es
    results: []
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [MMG/mlm-spanish-roberta-base](https://huggingface.co/MMG/mlm-spanish-roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.7531
- - Precision: 0.7331
- - Recall: 0.7331
- - F1: 0.7331
- - Accuracy: 0.7232
+ - Loss: 1.9097
+ - Accuracy: 0.6918
+ - Precision: 0.6953
+ - Recall: 0.6918
+ - F1: 0.6848

  ## Model description

@@ -52,28 +52,28 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 1.878 | 1.0 | 102 | 1.2561 | 0.6630 | 0.6630 | 0.6630 | 0.6464 |
- | 1.0292 | 2.0 | 204 | 1.1108 | 0.7005 | 0.7005 | 0.7005 | 0.6855 |
- | 0.7035 | 3.0 | 306 | 1.0948 | 0.7215 | 0.7215 | 0.7215 | 0.7067 |
- | 0.4985 | 4.0 | 408 | 1.0831 | 0.7177 | 0.7177 | 0.7177 | 0.7045 |
- | 0.3357 | 5.0 | 510 | 1.1830 | 0.7288 | 0.7288 | 0.7288 | 0.7145 |
- | 0.2414 | 6.0 | 612 | 1.2706 | 0.7194 | 0.7194 | 0.7194 | 0.7060 |
- | 0.1712 | 7.0 | 714 | 1.3205 | 0.7328 | 0.7328 | 0.7328 | 0.7223 |
- | 0.1238 | 8.0 | 816 | 1.4237 | 0.7290 | 0.7290 | 0.7290 | 0.7177 |
- | 0.0845 | 9.0 | 918 | 1.4820 | 0.7271 | 0.7271 | 0.7271 | 0.7168 |
- | 0.0627 | 10.0 | 1020 | 1.5436 | 0.7204 | 0.7204 | 0.7204 | 0.7099 |
- | 0.0509 | 11.0 | 1122 | 1.5653 | 0.7311 | 0.7311 | 0.7311 | 0.7212 |
- | 0.0366 | 12.0 | 1224 | 1.5724 | 0.7268 | 0.7268 | 0.7268 | 0.7175 |
- | 0.023 | 13.0 | 1326 | 1.6088 | 0.7273 | 0.7273 | 0.7273 | 0.7170 |
- | 0.02 | 14.0 | 1428 | 1.6797 | 0.7346 | 0.7346 | 0.7346 | 0.7239 |
- | 0.0144 | 15.0 | 1530 | 1.7203 | 0.7369 | 0.7369 | 0.7369 | 0.7266 |
- | 0.0103 | 16.0 | 1632 | 1.7330 | 0.7302 | 0.7302 | 0.7302 | 0.7208 |
- | 0.0087 | 17.0 | 1734 | 1.7284 | 0.7293 | 0.7293 | 0.7293 | 0.7194 |
- | 0.008 | 18.0 | 1836 | 1.7434 | 0.7380 | 0.7380 | 0.7380 | 0.7280 |
- | 0.0059 | 19.0 | 1938 | 1.7642 | 0.7355 | 0.7355 | 0.7355 | 0.7261 |
- | 0.0055 | 20.0 | 2040 | 1.7531 | 0.7331 | 0.7331 | 0.7331 | 0.7232 |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 2.2985 | 1.0 | 102 | 1.7435 | 0.4970 | 0.4378 | 0.4970 | 0.4215 |
+ | 1.3399 | 2.0 | 204 | 1.4205 | 0.5828 | 0.5872 | 0.5828 | 0.5624 |
+ | 0.8893 | 3.0 | 306 | 1.2699 | 0.6393 | 0.6276 | 0.6393 | 0.6192 |
+ | 0.5691 | 4.0 | 408 | 1.3327 | 0.6515 | 0.6604 | 0.6515 | 0.6417 |
+ | 0.3837 | 5.0 | 510 | 1.3836 | 0.6592 | 0.6710 | 0.6592 | 0.6528 |
+ | 0.2543 | 6.0 | 612 | 1.4253 | 0.6641 | 0.6703 | 0.6641 | 0.6528 |
+ | 0.1669 | 7.0 | 714 | 1.5317 | 0.6650 | 0.6795 | 0.6650 | 0.6546 |
+ | 0.1139 | 8.0 | 816 | 1.5939 | 0.6725 | 0.6754 | 0.6725 | 0.6615 |
+ | 0.0805 | 9.0 | 918 | 1.6987 | 0.6594 | 0.6696 | 0.6594 | 0.6518 |
+ | 0.0578 | 10.0 | 1020 | 1.6960 | 0.6793 | 0.6782 | 0.6793 | 0.6690 |
+ | 0.0374 | 11.0 | 1122 | 1.7590 | 0.6824 | 0.6877 | 0.6824 | 0.6729 |
+ | 0.03 | 12.0 | 1224 | 1.7425 | 0.6842 | 0.6859 | 0.6842 | 0.6785 |
+ | 0.0183 | 13.0 | 1326 | 1.8165 | 0.6830 | 0.6846 | 0.6830 | 0.6774 |
+ | 0.0152 | 14.0 | 1428 | 1.8348 | 0.6866 | 0.6927 | 0.6866 | 0.6799 |
+ | 0.0109 | 15.0 | 1530 | 1.8562 | 0.6940 | 0.6967 | 0.6940 | 0.6855 |
+ | 0.0097 | 16.0 | 1632 | 1.8766 | 0.6889 | 0.6947 | 0.6889 | 0.6833 |
+ | 0.0073 | 17.0 | 1734 | 1.8745 | 0.6920 | 0.6948 | 0.6920 | 0.6851 |
+ | 0.0062 | 18.0 | 1836 | 1.8944 | 0.6895 | 0.6919 | 0.6895 | 0.6825 |
+ | 0.0057 | 19.0 | 1938 | 1.9103 | 0.6936 | 0.6984 | 0.6936 | 0.6867 |
+ | 0.0052 | 20.0 | 2040 | 1.9097 | 0.6918 | 0.6953 | 0.6918 | 0.6848 |


  ### Framework versions
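
The updated card describes a fine-tune of MMG/mlm-spanish-roberta-base for Spanish intention prediction. Below is a minimal inference sketch, not part of the card itself; the repository ID `qfrodicio/roberta-finetuned-intention-prediction-es` and the sequence-classification head are assumptions inferred from the committer and the `model-index` name, not stated in this diff.

```python
# Minimal inference sketch (assumptions: the checkpoint is published as
# "qfrodicio/roberta-finetuned-intention-prediction-es" and carries a
# sequence-classification head over Spanish intention labels).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="qfrodicio/roberta-finetuned-intention-prediction-es",  # hypothetical repo id
)

# Example Spanish utterance; the label set depends on the (unknown) training dataset.
print(classifier("¿Me puedes ayudar a reservar una mesa para esta noche?"))
```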
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:288bbffb20e2fc677f587c92a69ef0cce6e98d50a329321c1cee7f5486925e79
+ oid sha256:fccbbea43119ae47040715f1f51e498dc25a8fe3b55fe1520221757e7048ae17
  size 501743200
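
The `model.safetensors` entry is a Git LFS pointer: the weights are stored out of band and identified by a SHA-256 digest and a byte size. A small sketch of checking a downloaded copy against the new pointer values follows; the local file path is an assumption, the digest and size come from the diff above.

```python
# Verify a locally downloaded model.safetensors against the LFS pointer above.
import hashlib

EXPECTED_OID = "fccbbea43119ae47040715f1f51e498dc25a8fe3b55fe1520221757e7048ae17"
EXPECTED_SIZE = 501743200

sha = hashlib.sha256()
size = 0
with open("model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
        size += len(chunk)

print("oid matches: ", sha.hexdigest() == EXPECTED_OID)
print("size matches:", size == EXPECTED_SIZE)
```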
runs/Jan18_23-03-40_dda14c81a969/events.out.tfevents.1705619023.dda14c81a969.473.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0ae44fe6f08c71845d4f8dd6e97ca73dadaec5ad6cc339b812743e3c92dc6d49
- size 20195
+ oid sha256:c50b8eca30555091988a8a2adcea8976424b7cba6017dc7057eb13569f1e4774
+ size 20549
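
The file under `runs/` is a TensorBoard event log written during training. A sketch of inspecting its scalars locally follows; tag names such as `eval/accuracy` are typical of the transformers Trainer but are an assumption, not something this diff confirms.

```python
# Inspect the TensorBoard event log committed under runs/ (requires tensorboard).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Jan18_23-03-40_dda14c81a969")  # directory from the diff
ea.Reload()

print(ea.Tags()["scalars"])  # list the scalar tags actually present in the log
# "eval/accuracy" is an assumed Trainer tag; adjust to one of the printed tags.
for event in ea.Scalars("eval/accuracy"):
    print(event.step, event.value)
```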