Leotrim committed
Commit ae092c5
1 Parent(s): 43f69d2

End of training

README.md CHANGED
@@ -28,29 +28,28 @@ model-index:
  value: 0.87
  - name: Precision
  type: precision
- value: 0.8753213453213452
+ value: 0.8802816627816629
  - name: Recall
  type: recall
  value: 0.87
  - name: F1
  type: f1
- value: 0.8641214483158217
- pipeline_tag: audio-classification
+ value: 0.8627110595989314
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/raspuntinov_ai/huggingface/runs/cefsu57q)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/raspuntinov_ai/huggingface/runs/8epo656a)
  # distilhubert-finetuned-gtzan
 
  This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5488
+ - Loss: 0.6501
  - Accuracy: 0.87
- - Precision: 0.8753
+ - Precision: 0.8803
  - Recall: 0.87
- - F1: 0.8641
+ - F1: 0.8627
 
  ## Model description
 
@@ -83,18 +82,21 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | 2.1729 | 1.0 | 113 | 2.0581 | 0.63 | 0.6670 | 0.63 | 0.5957 |
- | 1.6552 | 2.0 | 226 | 1.3957 | 0.7 | 0.6894 | 0.7 | 0.6857 |
- | 1.0753 | 3.0 | 339 | 0.9783 | 0.75 | 0.8154 | 0.75 | 0.7277 |
- | 0.8519 | 4.0 | 452 | 0.8087 | 0.75 | 0.8120 | 0.75 | 0.7380 |
- | 0.8623 | 5.0 | 565 | 0.7393 | 0.75 | 0.7622 | 0.75 | 0.7373 |
- | 0.506 | 6.0 | 678 | 0.6861 | 0.81 | 0.8449 | 0.81 | 0.7997 |
- | 0.2052 | 7.0 | 791 | 0.6505 | 0.81 | 0.8254 | 0.81 | 0.8024 |
- | 0.1583 | 8.0 | 904 | 0.5365 | 0.86 | 0.8770 | 0.86 | 0.8545 |
- | 0.0699 | 9.0 | 1017 | 0.5488 | 0.87 | 0.8753 | 0.87 | 0.8641 |
- | 0.0177 | 10.0 | 1130 | 0.6330 | 0.83 | 0.8312 | 0.83 | 0.8245 |
- | 0.0071 | 11.0 | 1243 | 0.6268 | 0.84 | 0.8410 | 0.84 | 0.8348 |
- | 0.0746 | 12.0 | 1356 | 0.6051 | 0.87 | 0.8732 | 0.87 | 0.8675 |
+ | 2.1743 | 1.0 | 113 | 2.0604 | 0.38 | 0.5273 | 0.38 | 0.3101 |
+ | 1.6179 | 2.0 | 226 | 1.4299 | 0.62 | 0.6136 | 0.62 | 0.5877 |
+ | 1.0981 | 3.0 | 339 | 1.0223 | 0.79 | 0.8516 | 0.79 | 0.7669 |
+ | 0.9785 | 4.0 | 452 | 0.8722 | 0.71 | 0.7748 | 0.71 | 0.6733 |
+ | 0.8834 | 5.0 | 565 | 0.8363 | 0.76 | 0.7691 | 0.76 | 0.7449 |
+ | 0.4936 | 6.0 | 678 | 0.6241 | 0.82 | 0.8313 | 0.82 | 0.8193 |
+ | 0.2772 | 7.0 | 791 | 0.5648 | 0.85 | 0.8623 | 0.85 | 0.8459 |
+ | 0.1213 | 8.0 | 904 | 0.6919 | 0.81 | 0.8429 | 0.81 | 0.7997 |
+ | 0.0958 | 9.0 | 1017 | 0.5527 | 0.86 | 0.8682 | 0.86 | 0.8541 |
+ | 0.0194 | 10.0 | 1130 | 0.6840 | 0.85 | 0.8645 | 0.85 | 0.8420 |
+ | 0.0151 | 11.0 | 1243 | 0.6214 | 0.86 | 0.8642 | 0.86 | 0.8542 |
+ | 0.1239 | 12.0 | 1356 | 0.6501 | 0.87 | 0.8803 | 0.87 | 0.8627 |
+ | 0.0049 | 13.0 | 1469 | 0.6651 | 0.87 | 0.8803 | 0.87 | 0.8627 |
+ | 0.0043 | 14.0 | 1582 | 0.7188 | 0.87 | 0.8803 | 0.87 | 0.8627 |
+ | 0.0035 | 15.0 | 1695 | 0.6808 | 0.87 | 0.8803 | 0.87 | 0.8627 |
 
 
  ### Framework versions
@@ -102,4 +104,4 @@ The following hyperparameters were used during training:
  - Transformers 4.42.3
  - Pytorch 2.1.2
  - Datasets 2.20.0
- - Tokenizers 0.19.1
+ - Tokenizers 0.19.1
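
The updated card tags this checkpoint for audio classification. Below is a minimal usage sketch with the Transformers pipeline; the hub repo id `Leotrim/distilhubert-finetuned-gtzan` is an assumption inferred from the committer and model name (it is not stated in this diff), and decoding audio from a file path requires ffmpeg.

```python
# Minimal usage sketch for the fine-tuned checkpoint described in the card above.
# Assumptions: the model is published as "Leotrim/distilhubert-finetuned-gtzan"
# (repo id inferred, not confirmed by this commit) and ffmpeg is available so the
# pipeline can decode the audio file.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Leotrim/distilhubert-finetuned-gtzan",  # assumed repo id
)

# GTZAN uses 30-second music excerpts; any comparable clip works here.
predictions = classifier("path/to/clip.wav", top_k=3)
print(predictions)  # [{"label": "...", "score": ...}, ...]
```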
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:40c82a6f3ea1944ca6f86f1d720936373e73714e753ad3a6c3b43bbf55e4a53f
+ oid sha256:bfa940d409c2af0d42fa2c88ebbceddf68a847a2bba820fa72f1547c7d103b62
  size 94771728
runs/Aug14_08-19-17_26abd6e5c87d/events.out.tfevents.1723634970.26abd6e5c87d.34.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ed7301b952115009c07a5b124c096b3eee4fd0df8eb44c51d0de90d8bfde84a
+ size 560
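
On the metrics in the README table above: recall equals accuracy in every row, which is what weighted averaging produces, so the precision/recall/F1 columns are presumably weighted averages. A `compute_metrics` callback of that shape is sketched below; this is an assumption about how such numbers are typically produced with a Trainer, not the actual training script behind this commit.

```python
# Sketch of a compute_metrics callback yielding accuracy plus weighted
# precision/recall/F1, matching the shape of the columns in the card's table.
# Illustrative only; the script used for this run is not part of the commit.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```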