
whisper4

This model is a fine-tuned version of openai/whisper-tiny.en on the tiny dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

  • Loss: 0.5409
  • Wer: 28.2719 (word error rate, in percent)
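
A minimal transcription sketch with the Transformers pipeline; the repo id khaingsmon/whisper4 is taken from the model tree at the end of this card, and the audio path is a placeholder you supply yourself:

```python
# Minimal inference sketch, assuming the checkpoint is published on the Hub
# as "khaingsmon/whisper4" and that "sample.wav" is an English audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="khaingsmon/whisper4")

print(asr("sample.wav")["text"])
```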

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 300
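
For orientation, these settings map roughly onto Seq2SeqTrainingArguments as sketched below. The output directory is an assumption, the batch size assumes a single device, and the 10-step evaluation cadence is read off the results table; the Adam settings listed above match the Trainer defaults.

```python
# Sketch of the training arguments implied by the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper4",          # assumed, not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=128,  # assuming a single device
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=300,
    evaluation_strategy="steps",      # evaluated every 10 steps per the results table
    eval_steps=10,
)
```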

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.8231        | 0.2778 | 10   | 3.7088          | 76.9377 |
| 3.1925        | 0.5556 | 20   | 2.9439          | 65.5654 |
| 2.1383        | 0.8333 | 30   | 1.7221          | 61.5311 |
| 1.0671        | 1.1111 | 40   | 0.8320          | 50.6989 |
| 0.6947        | 1.3889 | 50   | 0.6587          | 41.0102 |
| 0.6263        | 1.6667 | 60   | 0.5874          | 29.7967 |
| 0.5827        | 1.9444 | 70   | 0.5402          | 27.3825 |
| 0.4222        | 2.2222 | 80   | 0.5154          | 32.0521 |
| 0.4065        | 2.5    | 90   | 0.4997          | 25.6989 |
| 0.3959        | 2.7778 | 100  | 0.4804          | 23.8247 |
| 0.3081        | 3.0556 | 110  | 0.4670          | 24.8412 |
| 0.2497        | 3.3333 | 120  | 0.4687          | 23.2846 |
| 0.2535        | 3.6111 | 130  | 0.4594          | 23.0940 |
| 0.2428        | 3.8889 | 140  | 0.4545          | 23.5070 |
| 0.1627        | 4.1667 | 150  | 0.4651          | 24.4917 |
| 0.1224        | 4.4444 | 160  | 0.4686          | 23.6976 |
| 0.1326        | 4.7222 | 170  | 0.4653          | 23.6976 |
| 0.1334        | 5.0    | 180  | 0.4741          | 24.7459 |
| 0.0659        | 5.2778 | 190  | 0.4792          | 24.6823 |
| 0.0639        | 5.5556 | 200  | 0.4760          | 33.3863 |
| 0.0667        | 5.8333 | 210  | 0.4820          | 25.4765 |
| 0.042         | 6.1111 | 220  | 0.4933          | 29.4155 |
| 0.0325        | 6.3889 | 230  | 0.5066          | 29.9873 |
| 0.0333        | 6.6667 | 240  | 0.5126          | 26.0801 |
| 0.0333        | 6.9444 | 250  | 0.5073          | 24.6188 |
| 0.0187        | 7.2222 | 260  | 0.5129          | 27.3507 |
| 0.0214        | 7.5    | 270  | 0.5209          | 28.2084 |
| 0.0187        | 7.7778 | 280  | 0.5213          | 29.3202 |
| 0.0312        | 8.0556 | 290  | 0.5274          | 34.6569 |
| 0.0172        | 8.3333 | 300  | 0.5409          | 28.2719 |
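
The Wer column above is reported as a percentage. A minimal sketch of how such values are typically computed with the evaluate library (the example strings are placeholders, not data from this run):

```python
# Sketch of a WER computation with the evaluate library; strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox jumped over the dog"]
references = ["the quick brown fox jumped over the lazy dog"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # reported as a percentage, as in the table above
```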

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1.dev0
  • Tokenizers 0.19.1

Model tree for khaingsmon/whisper4

  • Finetuned from: openai/whisper-tiny.en