---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-small-clean-hi
    results: []
---

# whisper-small-clean-hi

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.5136
- Wer: 28.2379
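
A minimal inference sketch using the transformers `pipeline` is shown below. The repo id `Chenxi-Chelsea-Liu/whisper-small-clean-hi` is inferred from this page and is an assumption; adjust it to the actual checkpoint path.

```python
from transformers import pipeline

# Hypothetical repo id inferred from the model name above; adjust as needed.
asr = pipeline(
    "automatic-speech-recognition",
    model="Chenxi-Chelsea-Liu/whisper-small-clean-hi",
)

# Transcribe a local audio file (Whisper expects 16 kHz audio; the pipeline
# handles decoding and resampling via ffmpeg when given a file path).
result = asr("sample.wav")
print(result["text"])
```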

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
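
These settings map onto transformers' `Seq2SeqTrainingArguments` roughly as sketched below. This is not the exact training script; `output_dir` and the eval/save cadence are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-clean-hi",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumed from the 50-step cadence below
    eval_steps=50,                # assumed from the 50-step cadence below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
```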

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.5251 | 0.46 | 50 | 1.2276 | 88.8034 |
| 0.7311 | 0.92 | 100 | 0.6706 | 50.3372 |
| 0.5582 | 1.38 | 150 | 0.5367 | 43.6798 |
| 0.4555 | 1.83 | 200 | 0.4448 | 43.1783 |
| 0.3326 | 2.29 | 250 | 0.3594 | 36.2182 |
| 0.2394 | 2.75 | 300 | 0.2507 | 33.5380 |
| 0.1449 | 3.21 | 350 | 0.2294 | 32.7252 |
| 0.1407 | 3.67 | 400 | 0.2144 | 30.6070 |
| 0.1048 | 4.13 | 450 | 0.2125 | 29.6299 |
| 0.0854 | 4.59 | 500 | 0.2085 | 29.1371 |
| 0.0762 | 5.05 | 550 | 0.2125 | 28.4109 |
| 0.0445 | 5.5 | 600 | 0.2168 | 28.4973 |
| 0.0474 | 5.96 | 650 | 0.2197 | 28.2725 |
| 0.0249 | 6.42 | 700 | 0.2324 | 28.2898 |
| 0.0267 | 6.88 | 750 | 0.2287 | 27.2696 |
| 0.0144 | 7.34 | 800 | 0.2440 | 27.2869 |
| 0.0154 | 7.8 | 850 | 0.2524 | 27.3733 |
| 0.008 | 8.26 | 900 | 0.2648 | 27.1312 |
| 0.0103 | 8.72 | 950 | 0.2602 | 27.9353 |
| 0.0066 | 9.17 | 1000 | 0.2718 | 28.3330 |
| 0.0073 | 9.63 | 1050 | 0.2705 | 27.4771 |
| 0.0053 | 10.09 | 1100 | 0.2828 | 27.5030 |
| 0.0044 | 10.55 | 1150 | 0.2882 | 27.2004 |
| 0.0045 | 11.01 | 1200 | 0.2892 | 27.5117 |
| 0.0037 | 11.47 | 1250 | 0.2961 | 27.3215 |
| 0.0031 | 11.93 | 1300 | 0.2934 | 27.0534 |
| 0.0022 | 12.39 | 1350 | 0.3014 | 27.1053 |
| 0.003 | 12.84 | 1400 | 0.3077 | 26.5779 |
| 0.0022 | 13.3 | 1450 | 0.3096 | 26.8373 |
| 0.002 | 13.76 | 1500 | 0.3123 | 26.5347 |
| 0.0017 | 14.22 | 1550 | 0.3186 | 26.8632 |
| 0.0016 | 14.68 | 1600 | 0.3255 | 26.6903 |
| 0.0012 | 15.14 | 1650 | 0.3329 | 26.4396 |
| 0.0015 | 15.6 | 1700 | 0.3336 | 27.0188 |
| 0.0009 | 16.06 | 1750 | 0.3361 | 26.4569 |
| 0.001 | 16.51 | 1800 | 0.3483 | 26.4655 |
| 0.0014 | 16.97 | 1850 | 0.3533 | 26.2666 |
| 0.0004 | 17.43 | 1900 | 0.3581 | 26.0678 |
| 0.0004 | 17.89 | 1950 | 0.3688 | 26.5087 |
| 0.0003 | 18.35 | 2000 | 0.3738 | 26.2148 |
| 0.0004 | 18.81 | 2050 | 0.3729 | 26.1197 |
| 0.0005 | 19.27 | 2100 | 0.3850 | 25.8776 |
| 0.0002 | 19.72 | 2150 | 0.3874 | 25.9900 |
| 0.0004 | 20.18 | 2200 | 0.3927 | 25.9727 |
| 0.0 | 20.64 | 2250 | 0.4037 | 25.9381 |
| 0.0 | 21.1 | 2300 | 0.4133 | 25.9208 |
| 0.0001 | 21.56 | 2350 | 0.4188 | 25.5836 |
| 0.0 | 22.02 | 2400 | 0.4266 | 25.8776 |
| 0.0 | 22.48 | 2450 | 0.4380 | 26.1715 |
| 0.0 | 22.94 | 2500 | 0.4473 | 25.6268 |
| 0.0 | 23.39 | 2550 | 0.4604 | 26.0418 |
| 0.0 | 23.85 | 2600 | 0.4681 | 26.1802 |
| 0.0 | 24.31 | 2650 | 0.4833 | 26.1197 |
| 0.0 | 24.77 | 2700 | 0.4883 | 26.2234 |
| 0.0 | 25.23 | 2750 | 0.4993 | 26.4914 |
| 0.0 | 25.69 | 2800 | 0.5031 | 26.7768 |
| 0.0 | 26.15 | 2850 | 0.5077 | 26.6211 |
| 0.0 | 26.61 | 2900 | 0.5102 | 27.1658 |
| 0.0 | 27.06 | 2950 | 0.5123 | 28.1688 |
| 0.0 | 27.52 | 3000 | 0.5136 | 28.2379 |
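
The Wer column is the word error rate as a percentage. A sketch of how such a score can be computed with the `evaluate` library (the actual metric code is not part of this card):

```python
import evaluate

# Illustrative only; the evaluation dataset and decoding code are not included here.
wer_metric = evaluate.load("wer")

predictions = ["transcribed hypothesis text"]  # model outputs (placeholder)
references = ["ground truth reference text"]   # gold transcripts (placeholder)

# `evaluate` returns WER as a fraction; scale by 100 to match the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```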

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 1.12.1
- Datasets 2.16.1
- Tokenizers 0.15.0