---
language:
  - de
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-small
    results: []
---

# openai/whisper-small

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hanhpt23/GermanMed-full dataset. It achieves the following results on the evaluation set:

- Loss: 0.6821
- Wer: 26.4630
- Cer: 15.4774
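
A minimal inference sketch with the `transformers` pipeline. The Hub repo id below is a placeholder (this card does not state the repository name), and forcing the decoding language is optional:

```python
# Sketch only: replace the model id with the actual fine-tuned repository.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-small-germanmed",  # hypothetical repo id
)

# Whisper expects 16 kHz mono audio; the pipeline resamples file inputs for you.
result = asr("sample_de.wav", generate_kwargs={"language": "german", "task": "transcribe"})
print(result["text"])
```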

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
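
A minimal sketch of these settings expressed as `Seq2SeqTrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon listed above match the `transformers` defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-germanmed",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # so the listed optimizer settings need no explicit arguments.
)
```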

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.5026        | 1.0   | 194  | 0.5290          | 29.2811 | 19.4792 |
| 0.2354        | 2.0   | 388  | 0.5282          | 34.3515 | 22.4451 |
| 0.1144        | 3.0   | 582  | 0.5396          | 32.5825 | 21.4992 |
| 0.073         | 4.0   | 776  | 0.5676          | 32.2226 | 22.5456 |
| 0.0465        | 5.0   | 970  | 0.6049          | 26.3499 | 16.4094 |
| 0.0375        | 6.0   | 1164 | 0.6197          | 33.1791 | 21.1216 |
| 0.0213        | 7.0   | 1358 | 0.6250          | 30.1759 | 20.1462 |
| 0.0229        | 8.0   | 1552 | 0.6453          | 31.4718 | 19.4914 |
| 0.0118        | 9.0   | 1746 | 0.6510          | 23.1924 | 14.5627 |
| 0.0138        | 10.0  | 1940 | 0.6604          | 27.9235 | 17.4974 |
| 0.0081        | 11.0  | 2134 | 0.6546          | 26.3705 | 16.3176 |
| 0.0029        | 12.0  | 2328 | 0.6527          | 25.3625 | 15.1725 |
| 0.0028        | 13.0  | 2522 | 0.6712          | 22.6473 | 14.5384 |
| 0.0003        | 14.0  | 2716 | 0.6743          | 30.7004 | 18.0015 |
| 0.0002        | 15.0  | 2910 | 0.6752          | 27.2035 | 16.0248 |
| 0.0001        | 16.0  | 3104 | 0.6787          | 24.8277 | 15.1292 |
| 0.0001        | 17.0  | 3298 | 0.6803          | 26.6893 | 15.6541 |
| 0.0001        | 18.0  | 3492 | 0.6813          | 26.5864 | 15.6021 |
| 0.0001        | 19.0  | 3686 | 0.6819          | 26.4836 | 15.4964 |
| 0.0001        | 20.0  | 3880 | 0.6821          | 26.4630 | 15.4774 |
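
For reference, a sketch of how percent-scale WER/CER figures like those above are commonly computed with the `evaluate` library; the tooling is an assumption (the card does not name it) and the sentences are illustrative, not from the dataset:

```python
import evaluate

wer_metric = evaluate.load("wer")  # word error rate
cer_metric = evaluate.load("cer")  # character error rate

predictions = ["der patient klagt über starke kopfschmerzen"]
references = ["Der Patient klagt über starke Kopfschmerzen."]

# Both metrics return a fraction; multiply by 100 to match the table's scale.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```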

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
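
To reproduce this environment, the matching PyPI pins would be `pip install transformers==4.41.1 torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1` (note that PyTorch installs as `torch`).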