---
library_name: transformers
language:
  - nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper Large V2
    results: []
---

# Whisper Large V2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.4356
- WER: 17.9459
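
Since the card does not document usage, here is a minimal transcription sketch using the Transformers `pipeline` API. The repo id `golesheed/whisper-large-v2` and the audio filename are placeholders for wherever this checkpoint is actually published:

```python
# Minimal inference sketch. The repo id below is a placeholder -- substitute
# the actual model id or a local checkpoint directory.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-large-v2",  # placeholder repo id
    chunk_length_s=30,  # Whisper operates on 30-second windows
)

# Force Dutch transcription, matching the card's `language: nl` metadata.
result = asr(
    "sample.wav",  # placeholder audio file
    generate_kwargs={"language": "dutch", "task": "transcribe"},
)
print(result["text"])
```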

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):

- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
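
As a reproducibility aid, the sketch below shows how these settings map onto `Seq2SeqTrainingArguments`. The output directory is a placeholder, and dataset loading, feature extraction, and the data collator are omitted because the card does not document them; the Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit argument:

```python
# Sketch of the training configuration implied by the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-nl",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=5,
)
```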

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.8564        | 0.0676 | 15   | 0.6152          | 41.0055 |
| 0.5012        | 0.1351 | 30   | 0.4878          | 48.5720 |
| 0.4816        | 0.2027 | 45   | 0.4620          | 32.3907 |
| 0.415         | 0.2703 | 60   | 0.4401          | 44.3115 |
| 0.3973        | 0.3378 | 75   | 0.4212          | 36.4375 |
| 0.3987        | 0.4054 | 90   | 0.4094          | 29.3866 |
| 0.4059        | 0.4730 | 105  | 0.3966          | 26.1762 |
| 0.4126        | 0.5405 | 120  | 0.3882          | 28.4659 |
| 0.3785        | 0.6081 | 135  | 0.3864          | 22.7352 |
| 0.3652        | 0.6757 | 150  | 0.3845          | 35.2448 |
| 0.4099        | 0.7432 | 165  | 0.3776          | 29.4185 |
| 0.4101        | 0.8108 | 180  | 0.3709          | 31.4269 |
| 0.352         | 0.8784 | 195  | 0.3687          | 24.2766 |
| 0.3604        | 0.9459 | 210  | 0.3648          | 21.7113 |
| 0.3642        | 1.0135 | 225  | 0.3622          | 21.3025 |
| 0.1693        | 1.0811 | 240  | 0.3698          | 27.0501 |
| 0.1622        | 1.1486 | 255  | 0.3728          | 27.2526 |
| 0.1862        | 1.2162 | 270  | 0.3562          | 21.5538 |
| 0.1815        | 1.2838 | 285  | 0.3647          | 23.6784 |
| 0.2084        | 1.3514 | 300  | 0.3611          | 20.9556 |
| 0.1777        | 1.4189 | 315  | 0.3610          | 22.6921 |
| 0.1842        | 1.4865 | 330  | 0.3591          | 22.3939 |
| 0.178         | 1.5541 | 345  | 0.3545          | 20.8168 |
| 0.1965        | 1.6216 | 360  | 0.3489          | 23.0390 |
| 0.1931        | 1.6892 | 375  | 0.3485          | 22.2176 |
| 0.1886        | 1.7568 | 390  | 0.3452          | 20.1268 |
| 0.1936        | 1.8243 | 405  | 0.3417          | 20.4849 |
| 0.19          | 1.8919 | 420  | 0.3474          | 22.2889 |
| 0.1818        | 1.9595 | 435  | 0.3449          | 22.0545 |
| 0.1445        | 2.0270 | 450  | 0.3605          | 19.4892 |
| 0.0879        | 2.0946 | 465  | 0.3753          | 19.5136 |
| 0.0921        | 2.1622 | 480  | 0.3722          | 19.8455 |
| 0.091         | 2.2297 | 495  | 0.3705          | 20.2955 |
| 0.0936        | 2.2973 | 510  | 0.3670          | 22.8215 |
| 0.0854        | 2.3649 | 525  | 0.3629          | 22.5327 |
| 0.0938        | 2.4324 | 540  | 0.3550          | 19.5061 |
| 0.0843        | 2.5    | 555  | 0.3674          | 21.2219 |
| 0.0879        | 2.5676 | 570  | 0.3599          | 18.8966 |
| 0.0802        | 2.6351 | 585  | 0.3668          | 18.1109 |
| 0.0868        | 2.7027 | 600  | 0.3563          | 18.5197 |
| 0.09          | 2.7703 | 615  | 0.3601          | 19.7236 |
| 0.0844        | 2.8378 | 630  | 0.3583          | 18.9829 |
| 0.0814        | 2.9054 | 645  | 0.3647          | 19.2135 |
| 0.0834        | 2.9730 | 660  | 0.3555          | 19.6298 |
| 0.0642        | 3.0405 | 675  | 0.3672          | 18.4484 |
| 0.0403        | 3.1081 | 690  | 0.4052          | 18.9698 |
| 0.0397        | 3.1757 | 705  | 0.3852          | 18.4747 |
| 0.0363        | 3.2432 | 720  | 0.3983          | 17.8334 |
| 0.0361        | 3.3108 | 735  | 0.3859          | 18.2965 |
| 0.0365        | 3.3784 | 750  | 0.3986          | 19.9486 |
| 0.032         | 3.4459 | 765  | 0.4001          | 18.6753 |
| 0.0374        | 3.5135 | 780  | 0.3902          | 18.3528 |
| 0.0337        | 3.5811 | 795  | 0.3980          | 18.4016 |
| 0.0327        | 3.6486 | 810  | 0.3962          | 17.6252 |
| 0.0357        | 3.7162 | 825  | 0.3935          | 18.5197 |
| 0.0359        | 3.7838 | 840  | 0.3877          | 18.3978 |
| 0.0342        | 3.8514 | 855  | 0.3870          | 18.9848 |
| 0.0368        | 3.9189 | 870  | 0.3939          | 17.9721 |
| 0.0341        | 3.9865 | 885  | 0.3928          | 18.1409 |
| 0.0164        | 4.0541 | 900  | 0.4037          | 17.4077 |
| 0.0141        | 4.1216 | 915  | 0.4316          | 18.3509 |
| 0.0113        | 4.1892 | 930  | 0.4305          | 16.9895 |
| 0.014         | 4.2568 | 945  | 0.4285          | 17.5071 |
| 0.0131        | 4.3243 | 960  | 0.4271          | 17.7621 |
| 0.0156        | 4.3919 | 975  | 0.4292          | 19.2998 |
| 0.0118        | 4.4595 | 990  | 0.4334          | 18.8704 |
| 0.0104        | 4.5270 | 1005 | 0.4332          | 17.7827 |
| 0.0107        | 4.5946 | 1020 | 0.4327          | 19.0148 |
| 0.009         | 4.6622 | 1035 | 0.4346          | 17.9084 |
| 0.0091        | 4.7297 | 1050 | 0.4384          | 17.7827 |
| 0.0107        | 4.7973 | 1065 | 0.4359          | 18.4203 |
| 0.0114        | 4.8649 | 1080 | 0.4348          | 17.8465 |
| 0.0092        | 4.9324 | 1095 | 0.4354          | 18.0415 |
| 0.0095        | 5.0    | 1110 | 0.4356          | 17.9459 |
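
Note that the headline results above come from the final checkpoint (step 1110); the lowest validation WER in the run, 16.9895, was reached earlier at step 930 (epoch ≈ 4.19).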

### Framework versions

- Transformers 4.45.0.dev0
- PyTorch 2.1.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1