spellingdragon committed
Commit 677cc94
Parent: 34b8007

End of training

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model: lighteternal/wav2vec2-large-xlsr-53-greek
+base_model: ntu-spml/distilhubert
 tags:
 - generated_from_trainer
 datasets:
@@ -30,9 +30,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-xlsr-finetuned-gtzan
 
-This model is a fine-tuned version of [lighteternal/wav2vec2-large-xlsr-53-greek](https://huggingface.co/lighteternal/wav2vec2-large-xlsr-53-greek) on the GTZAN dataset.
+This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6511
+- Loss: 0.4910
 - Accuracy: 0.88
 
 ## Model description
@@ -61,23 +61,21 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 8
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.1788        | 1.0   | 112  | 1.9921          | 0.29     |
-| 1.4318        | 2.0   | 225  | 1.3264          | 0.49     |
-| 1.422         | 3.0   | 337  | 0.9604          | 0.65     |
-| 0.7665        | 4.0   | 450  | 0.7403          | 0.76     |
-| 0.6839        | 5.0   | 562  | 0.5957          | 0.83     |
-| 0.459         | 6.0   | 675  | 0.5525          | 0.83     |
-| 0.139         | 7.0   | 787  | 0.6561          | 0.86     |
-| 0.4184        | 8.0   | 900  | 0.6185          | 0.85     |
-| 0.1972        | 9.0   | 1012 | 0.5794          | 0.89     |
-| 0.2296        | 9.96  | 1120 | 0.6511          | 0.88     |
+| 2.1164        | 1.0   | 112  | 1.8109          | 0.41     |
+| 1.4074        | 2.0   | 225  | 1.3693          | 0.52     |
+| 1.3318        | 3.0   | 337  | 1.0215          | 0.67     |
+| 0.753         | 4.0   | 450  | 0.7795          | 0.76     |
+| 0.6883        | 5.0   | 562  | 0.7275          | 0.8      |
+| 0.3683        | 6.0   | 675  | 0.5863          | 0.82     |
+| 0.2497        | 7.0   | 787  | 0.4621          | 0.89     |
+| 0.4483        | 7.96  | 896  | 0.4910          | 0.88     |
 
 
 ### Framework versions
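The updated card now reports a distilhubert checkpoint fine-tuned for GTZAN genre classification. A minimal inference sketch with the `transformers` audio-classification pipeline follows; the repo id `spellingdragon/wav2vec2-xlsr-finetuned-gtzan` is an assumption inferred from the committer name and model name, and the WAV path is illustrative.

```python
# Hedged sketch: running the fine-tuned genre classifier on one audio clip.
# Repo id and file path are assumptions, not confirmed by this commit.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="spellingdragon/wav2vec2-xlsr-finetuned-gtzan",
)

# GTZAN clips are 30 s mono recordings; any local WAV path works here.
predictions = classifier("blues.00000.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```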
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aedbaa4a2130e5ba9d636ee678fcf7036d5676fb46062d3e81a1faee0ed9092b
+oid sha256:a46f6d95f564694fa4f42abc4c0137251a67ae87d1df24749a11be48dc9be744
 size 1266047104
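The pointer above follows the git-lfs spec: `oid` is the SHA-256 of the actual file and `size` its byte length. A quick sketch for checking that a downloaded weights file matches the new pointer (the local filename is assumed to match the repo path):

```python
# Hedged sketch: verify a git-lfs-tracked file against its pointer's oid.
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a file, streamed in chunks, as git-lfs records it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "a46f6d95f564694fa4f42abc4c0137251a67ae87d1df24749a11be48dc9be744"
assert lfs_oid("model.safetensors") == expected, "checksum mismatch"
```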
runs/Jan14_17-22-04_f65dd54c3f1e/events.out.tfevents.1705252925.f65dd54c3f1e.526.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0199f1e13a6599333ab582d432485207b831a5ea87fd72ee1ea5b7dc4328578
+size 37467
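The added file is a TensorBoard event log for the Jan 14 training run. Once the repo is cloned with LFS, the logged curves can be inspected with the TensorBoard Python API; a sketch (the scalar tag names in the comment are typical Trainer output, not confirmed by this commit):

```python
# Hedged sketch: listing what the added event file contains.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator(
    "runs/Jan14_17-22-04_f65dd54c3f1e/"
    "events.out.tfevents.1705252925.f65dd54c3f1e.526.0"
)
ea.Reload()
print(ea.Tags())  # e.g. scalar tags such as train/loss, eval/accuracy
```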
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ba3892dee01b2842d29aba802c4ca5747f7e63a7b668f10038490b68215e88b2
+oid sha256:2c6d0640066272dd87649e2d3f420351373fdc4e01d98f0fa539add5dd2d3cc1
 size 4728
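`training_args.bin` serializes the run's `TrainingArguments`. A sketch of arguments consistent with the hyperparameters the updated README lists; only values visible in this diff are set, and everything else (learning rate, batch sizes, `output_dir` name) is assumed or left at its default:

```python
# Hedged sketch: TrainingArguments matching the values the README reports.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-xlsr-finetuned-gtzan",  # assumed from the model name
    num_train_epochs=8,            # changed from 10 in this commit
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,                # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumption: the table shows per-epoch eval
)
```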