Whisper Tiny Fa - Javad Razavian

This model is a fine-tuned version of openai/whisper-tiny on the Persian (fa) subset of the Common Voice 16.0 dataset. It achieves the following results on the evaluation set (a short usage sketch follows the metrics):

  • Loss: 0.9459
  • WER: 94.2810
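
As a minimal usage sketch, the checkpoint can be loaded with the standard Transformers ASR pipeline; the audio path and the decoding hints below are illustrative assumptions, not part of this card:

```python
# Minimal sketch: transcribe Persian speech with this checkpoint.
# Assumes transformers and an audio backend (e.g. ffmpeg) are installed;
# "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="javadr/whisper-tiny-fa",
)

# Whisper accepts decoding hints through generate_kwargs; forcing
# Persian transcription avoids accidental language detection errors.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "persian", "task": "transcribe"},
)
print(result["text"])
```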

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction as Seq2SeqTrainingArguments follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 16
  • eval_batch_size: 256
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
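
These settings map roughly onto Transformers' Seq2SeqTrainingArguments as sketched below. This is a reconstruction for illustration, not the original training script; output_dir and the evaluation cadence are assumptions (the results table suggests evaluation every 100 steps):

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir and the
# evaluation settings are assumptions, everything else mirrors the list.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-fa",      # assumed
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=256,
    gradient_accumulation_steps=2,       # effective train batch size: 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=100,                      # assumed from the results table
)
```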

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.6309 | 0.08 | 100 | 4.1290 | 140.4220 |
| 2.5371 | 0.16 | 200 | 2.5264 | 128.3176 |
| 1.5224 | 0.24 | 300 | 1.7147 | 120.6830 |
| 1.2351 | 0.33 | 400 | 1.4970 | 112.3542 |
| 1.073 | 0.41 | 500 | 1.3917 | 103.7479 |
| 1.0077 | 0.49 | 600 | 1.3232 | 104.2199 |
| 0.9541 | 0.57 | 700 | 1.2781 | 99.6669 |
| 0.8933 | 0.65 | 800 | 1.2369 | 99.8612 |
| 0.8746 | 0.73 | 900 | 1.2076 | 99.5003 |
| 0.8306 | 0.81 | 1000 | 1.1809 | 99.8890 |
| 0.8309 | 0.89 | 1100 | 1.1583 | 96.5297 |
| 0.7982 | 0.98 | 1200 | 1.1370 | 94.2254 |
| 0.7719 | 1.06 | 1300 | 1.1243 | 96.8351 |
| 0.7799 | 1.14 | 1400 | 1.1065 | 92.6707 |
| 0.7512 | 1.22 | 1500 | 1.0941 | 93.1427 |
| 0.7212 | 1.3 | 1600 | 1.0838 | 94.6696 |
| 0.7315 | 1.38 | 1700 | 1.0709 | 96.0855 |
| 0.7002 | 1.46 | 1800 | 1.0595 | 96.0022 |
| 0.719 | 1.54 | 1900 | 1.0517 | 94.7807 |
| 0.7157 | 1.63 | 2000 | 1.0420 | 95.5303 |
| 0.7004 | 1.71 | 2100 | 1.0337 | 94.2810 |
| 0.6792 | 1.79 | 2200 | 1.0278 | 96.7518 |
| 0.6933 | 1.87 | 2300 | 1.0196 | 95.7801 |
| 0.669 | 1.95 | 2400 | 1.0113 | 98.0566 |
| 0.6627 | 2.03 | 2500 | 1.0063 | 96.8351 |
| 0.655 | 2.11 | 2600 | 1.0006 | 96.0577 |
| 0.6511 | 2.2 | 2700 | 0.9939 | 97.0572 |
| 0.6352 | 2.28 | 2800 | 0.9899 | 95.4470 |
| 0.6339 | 2.36 | 2900 | 0.9874 | 97.2238 |
| 0.6354 | 2.44 | 3000 | 0.9820 | 96.8351 |
| 0.611 | 2.52 | 3100 | 0.9777 | 94.5308 |
| 0.6143 | 2.6 | 3200 | 0.9752 | 99.0006 |
| 0.6242 | 2.68 | 3300 | 0.9729 | 98.7229 |
| 0.6324 | 2.76 | 3400 | 0.9681 | 99.1394 |
| 0.6237 | 2.85 | 3500 | 0.9646 | 96.8906 |
| 0.6285 | 2.93 | 3600 | 0.9621 | 96.1410 |
| 0.5934 | 3.01 | 3700 | 0.9601 | 97.4736 |
| 0.6129 | 3.09 | 3800 | 0.9575 | 92.9761 |
| 0.6154 | 3.17 | 3900 | 0.9575 | 97.5847 |
| 0.6334 | 3.25 | 4000 | 0.9555 | 101.0827 |
| 0.5956 | 3.33 | 4100 | 0.9536 | 94.7529 |
| 0.5956 | 3.41 | 4200 | 0.9507 | 100.3054 |
| 0.6053 | 3.5 | 4300 | 0.9504 | 94.5308 |
| 0.6199 | 3.58 | 4400 | 0.9491 | 95.0861 |
| 0.6064 | 3.66 | 4500 | 0.9482 | 91.8656 |
| 0.6154 | 3.74 | 4600 | 0.9478 | 94.1144 |
| 0.5909 | 3.82 | 4700 | 0.9466 | 91.5047 |
| 0.584 | 3.9 | 4800 | 0.9459 | 94.1144 |
| 0.5935 | 3.98 | 4900 | 0.9459 | 94.0589 |
| 0.5939 | 4.07 | 5000 | 0.9459 | 94.2810 |
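
The WER column is a percentage (the values above 100 in the early steps confirm the scale). For reference, a minimal sketch of how such scores are typically computed with the evaluate library; the example strings are placeholders, not dataset rows:

```python
import evaluate

# WER reported as a percentage: 100 * word-level edit distance / reference words.
wer_metric = evaluate.load("wer")
predictions = ["this is a placeholder hypothesis"]  # placeholder model output
references = ["this is a placeholder reference"]    # placeholder ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}%")
```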

Framework versions

  • Transformers 4.37.0.dev0
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0