jboat committed
Commit c9ec10d
1 Parent(s): 8126b75

End of training
README.md CHANGED
@@ -7,24 +7,9 @@ tags:
 - generated_from_trainer
 datasets:
 - google/fleurs
-metrics:
-- wer
 model-index:
 - name: Whisper Small Igbo
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: google/fleurs-jboat
-      type: google/fleurs
-      config: ig_ng
-      split: test
-      args: ig_ng
-    metrics:
-    - name: Wer
-      type: wer
-      value: 44.01272438082254
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,9 +19,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs-jboat dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0619
-- Wer Ortho: 47.8937
-- Wer: 44.0127
+- eval_loss: 1.1403
+- eval_wer_ortho: 47.1588
+- eval_wer: 42.9448
+- eval_runtime: 397.589
+- eval_samples_per_second: 2.437
+- eval_steps_per_second: 0.153
+- epoch: 37.0370
+- step: 7000
 
 ## Model description
 
@@ -62,23 +52,9 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- training_steps: 4000
+- training_steps: 8000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch   | Step | Validation Loss | Wer Ortho | Wer     |
-|:-------------:|:-------:|:----:|:---------------:|:---------:|:-------:|
-| 0.3161        | 2.6455  | 500  | 0.7413          | 50.2340   | 46.1448 |
-| 0.0421        | 5.2910  | 1000 | 0.8582          | 49.0269   | 44.8004 |
-| 0.0168        | 7.9365  | 1500 | 0.9246          | 47.6351   | 43.5204 |
-| 0.0075        | 10.5820 | 2000 | 0.9912          | 47.7541   | 43.3121 |
-| 0.0051        | 13.2275 | 2500 | 1.0277          | 47.7377   | 43.3954 |
-| 0.0067        | 15.8730 | 3000 | 1.0354          | 47.6638   | 43.1644 |
-| 0.0041        | 18.5185 | 3500 | 1.0722          | 48.3864   | 44.1112 |
-| 0.0028        | 21.1640 | 4000 | 1.0619          | 47.8937   | 44.0127 |
-
-
 ### Framework versions
 
 - Transformers 4.42.4
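The headline numbers in the card diff (`Wer: 44.0127` before, `eval_wer: 42.9448` after) are word error rates in percent: word-level edit distance between hypothesis and reference, divided by the reference word count. As a reminder of what that metric computes, here is a minimal pure-Python sketch; the card's actual values come from the Hugging Face `evaluate`/`jiwer` stack, which also handles normalization (the `wer_ortho` variant skips text normalization), not from this code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate in percent: word-level edit distance / reference length.

    Minimal illustrative implementation, not the one used by the Trainer.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` counts one substitution and one deletion against four reference words, giving 50.0.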
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c2a1da508cc153f2740148a17e5db38e4b468f449746b8c30c51ced1e5e72bec
+oid sha256:1de54084c3d74cada120df9a7c05f12b81fb05907ea38ee0274c2455851094c7
 size 966995080
runs/Jul24_17-24-06_0f8a7b59e9d5/events.out.tfevents.1721841855.0f8a7b59e9d5.233.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:718dae3108d846a8c1088e390ab4bf71a6b2a60e0d778a814a6ab964246dc720
-size 34167
+oid sha256:ab53d8d50a7fb1524516e41449de9cbaa5ec42dcb17a1098cf13296abff4b034
+size 37332
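The `model.safetensors` and tfevents entries above are not the binaries themselves but Git LFS pointer files (spec v1): three `key value` lines giving the spec version, the SHA-256 object ID, and the byte size, which is why the diff only swaps the `oid` line when the 967 MB weights change. A small illustrative parser (hypothetical helper, not part of this repo) shows the format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (spec v1) into its key/value fields.

    Each non-empty line is 'key value'; e.g. 'oid sha256:<hex>' and 'size <bytes>'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new model.safetensors pointer from the diff above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1de54084c3d74cada120df9a7c05f12b81fb05907ea38ee0274c2455851094c7
size 966995080"""
info = parse_lfs_pointer(pointer)
```

Here `info["size"]` is `"966995080"`, matching the unchanged size line in the diff.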