lord-reso committed on
Commit
50ed4c9
1 Parent(s): 79e1dfa

End of training

Files changed (1)
  1. README.md +35 -11
README.md CHANGED
@@ -8,9 +8,22 @@ tags:
  - generated_from_trainer
  datasets:
  - lord-reso/inbrowser-proctor-dataset
+ metrics:
+ - wer
  model-index:
  - name: Whisper-Small-Inbrowser-Proctor
-   results: []
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: Inbrowser Proctor Dataset
+       type: lord-reso/inbrowser-proctor-dataset
+       args: 'config: en, split: test'
+     metrics:
+     - name: Wer
+       type: wer
+       value: 17.075501752150366
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,13 +33,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Inbrowser Proctor dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 0.3646
- - eval_wer: 18.0153
- - eval_runtime: 53.0859
- - eval_samples_per_second: 1.319
- - eval_steps_per_second: 0.17
- - epoch: 8.9286
- - step: 250
+ - Loss: 0.3099
+ - Wer: 17.0755

  ## Model description

@@ -46,15 +54,31 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 5e-06
- - train_batch_size: 16
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 50
- - training_steps: 500
+ - lr_scheduler_warmup_steps: 25
+ - training_steps: 250
  - mixed_precision_training: Native AMP

+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Wer     |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.3461        | 0.4545 | 25   | 0.4545          | 26.0433 |
+ | 0.1902        | 0.9091 | 50   | 0.3309          | 17.4419 |
+ | 0.1184        | 1.3636 | 75   | 0.3120          | 14.6543 |
+ | 0.0944        | 1.8182 | 100  | 0.3066          | 16.7251 |
+ | 0.0632        | 2.2727 | 125  | 0.3046          | 14.8455 |
+ | 0.0688        | 2.7273 | 150  | 0.3060          | 14.8933 |
+ | 0.0479        | 3.1818 | 175  | 0.3063          | 17.1074 |
+ | 0.0515        | 3.6364 | 200  | 0.3081          | 15.4986 |
+ | 0.0296        | 4.0909 | 225  | 0.3096          | 17.2507 |
+ | 0.0348        | 4.5455 | 250  | 0.3099          | 17.0755 |
+
+
  ### Framework versions

  - Transformers 4.44.2
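
For readers reproducing the run, the hyperparameter list above maps roughly onto a `Seq2SeqTrainingArguments` configuration. This is a minimal sketch, not the author's training script: `output_dir`, the 25-step evaluation/logging cadence (inferred from the results table), and `report_to` are assumptions, and the Adam betas/epsilon shown in the card are simply the optimizer defaults.

```python
# Sketch of the training configuration implied by the model card.
# output_dir, eval/save/logging cadence, and report_to are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Whisper-Small-Inbrowser-Proctor",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=25,
    max_steps=250,
    fp16=True,                   # mixed_precision_training: Native AMP
    eval_strategy="steps",       # every 25 steps, inferred from the results table
    eval_steps=25,
    save_steps=25,
    logging_steps=25,
    predict_with_generate=True,  # required so evaluation can decode text for WER
)
```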
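The Wer figures in the card (e.g. 17.0755) are percentages. With the `evaluate` library, which returns word error rate as a fraction, the equivalent computation looks like the following; the transcript strings are placeholders, not samples from the dataset.

```python
# Computing WER the way the card reports it (as a percentage).
import evaluate

wer_metric = evaluate.load("wer")  # backed by jiwer

predictions = ["the candidate shared their screen"]  # placeholder hypothesis
references = ["the candidate shared her screen"]     # placeholder reference

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```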
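To try the checkpoint, the standard `automatic-speech-recognition` pipeline should work. A minimal sketch, assuming the repo id `lord-reso/Whisper-Small-Inbrowser-Proctor` (inferred from the committer and model name) and a local `sample.wav`:

```python
# Transcribing audio with the fine-tuned checkpoint via the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lord-reso/Whisper-Small-Inbrowser-Proctor",  # assumed repo id
)

# Accepts a path/URL to an audio file or a raw array of samples.
result = asr("sample.wav")
print(result["text"])
```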