---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-small-noisy-hindi
    results: []
---

whisper-small-noisy-hindi

This model is a fine-tuned version of openai/whisper-small on a noisy Hindi speech dataset (not named in the card metadata). It achieves the following results on the evaluation set:

  • Loss: 1.6842
  • Wer: 76.4655
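The Wer figure above is the word error rate, reported here as a percentage: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal, self-contained sketch of the computation (a standard word-level Levenshtein distance; the sample sentences are illustrative, not from the evaluation data):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") plus one deletion ("the"): 2/6 ≈ 0.3333
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

A WER of 76.47 therefore means roughly three word-level errors for every four reference words, which is plausible for noisy-audio Hindi ASR.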

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 3000
  • mixed_precision_training: Native AMP
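The hyperparameters above correspond roughly to the following `Seq2SeqTrainingArguments` configuration. This is an illustrative sketch, not the original training script: `output_dir` is a hypothetical path, and `optim="adamw_torch"` is an assumption (the Trainer's default optimizer, which uses exactly these betas and epsilon).

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-noisy-hindi",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,
    fp16=True,                    # native automatic mixed precision (AMP)
    predict_with_generate=True,   # needed so eval decodes text for WER
)
```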

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5492        | 0.61  | 50   | 2.2588          | 122.9379 |
| 1.6655        | 1.22  | 100  | 1.6099          | 92.4434  |
| 1.4463        | 1.83  | 150  | 1.4333          | 89.4086  |
| 1.2723        | 2.44  | 200  | 1.3012          | 90.1954  |
| 1.1433        | 3.05  | 250  | 1.1803          | 84.7052  |
| 0.9489        | 3.66  | 300  | 1.0427          | 82.0768  |
| 0.8082        | 4.27  | 350  | 0.9796          | 81.3246  |
| 0.7696        | 4.88  | 400  | 0.9372          | 78.6962  |
| 0.631         | 5.49  | 450  | 0.9229          | 78.2898  |
| 0.5784        | 6.1   | 500  | 0.9306          | 76.3790  |
| 0.5112        | 6.71  | 550  | 0.9158          | 76.3963  |
| 0.3738        | 7.32  | 600  | 0.9585          | 75.7652  |
| 0.3762        | 7.93  | 650  | 0.9530          | 75.5490  |
| 0.2647        | 8.54  | 700  | 1.0094          | 76.1975  |
| 0.2282        | 9.15  | 750  | 1.0548          | 76.8632  |
| 0.1865        | 9.76  | 800  | 1.0789          | 76.4482  |
| 0.1172        | 10.37 | 850  | 1.1491          | 78.1688  |
| 0.1181        | 10.98 | 900  | 1.1769          | 76.5520  |
| 0.0778        | 11.59 | 950  | 1.2255          | 77.5117  |
| 0.0579        | 12.2  | 1000 | 1.3021          | 76.0246  |
| 0.0515        | 12.8  | 1050 | 1.3064          | 76.8546  |
| 0.0324        | 13.41 | 1100 | 1.3766          | 77.1140  |
| 0.0317        | 14.02 | 1150 | 1.4044          | 78.2206  |
| 0.0227        | 14.63 | 1200 | 1.4420          | 77.9353  |
| 0.0174        | 15.24 | 1250 | 1.4780          | 76.4482  |
| 0.0164        | 15.85 | 1300 | 1.5044          | 76.2494  |
| 0.0121        | 16.46 | 1350 | 1.5338          | 76.7595  |
| 0.0128        | 17.07 | 1400 | 1.5588          | 76.9410  |
| 0.0108        | 17.68 | 1450 | 1.5688          | 76.4482  |
| 0.0085        | 18.29 | 1500 | 1.6060          | 76.6903  |
| 0.0082        | 18.9  | 1550 | 1.6368          | 76.5606  |
| 0.0065        | 19.51 | 1600 | 1.6483          | 76.5520  |
| 0.0062        | 20.12 | 1650 | 1.6842          | 76.4655  |

Framework versions

  • Transformers 4.37.0.dev0
  • Pytorch 1.12.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0