whisper-tiny-en-blink

This model, nurzhanit/whisper-tiny-en-blink (37.8M parameters, float32), is a fine-tuned version of nurzhanit/whisper-enhanced-ml on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0001
  • WER: 35.6229
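
Below is a minimal usage sketch with the transformers ASR pipeline. This is an assumption based on how Whisper fine-tunes are typically loaded, not code published with this model, and "sample.wav" is a placeholder path.

```python
from transformers import pipeline

# Minimal sketch: load the checkpoint with the generic ASR pipeline.
# Assumes the repo is pipeline-compatible; "sample.wav" is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="nurzhanit/whisper-tiny-en-blink",
)
print(asr("sample.wav")["text"])
```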

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
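
As a reading aid, the sketch below shows how these values map onto transformers' Seq2SeqTrainingArguments. It is a reconstruction from the list above, not the authors' actual training script; output_dir is an assumed placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-en-blink",  # assumed placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    adam_beta1=0.9,     # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```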

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.2948        | 0.2688 | 50   | 2.0785          | 12.9293 |
| 1.4104        | 0.5376 | 100  | 1.1845          | 2.6263  |
| 0.5806        | 0.8065 | 150  | 0.3972          | 24.0404 |
| 0.0701        | 1.0753 | 200  | 0.0263          | 48.0471 |
| 0.0023        | 1.3441 | 250  | 0.0012          | 39.2593 |
| 0.0006        | 1.6129 | 300  | 0.0005          | 39.8653 |
| 0.0004        | 1.8817 | 350  | 0.0004          | 31.7508 |
| 0.0003        | 2.1505 | 400  | 0.0003          | 32.7609 |
| 0.0002        | 2.4194 | 450  | 0.0002          | 34.6801 |
| 0.0002        | 2.6882 | 500  | 0.0002          | 31.4141 |
| 0.0002        | 2.9570 | 550  | 0.0002          | 38.2155 |
| 0.0001        | 3.2258 | 600  | 0.0001          | 33.6364 |
| 0.0001        | 3.4946 | 650  | 0.0001          | 36.2290 |
| 0.0001        | 3.7634 | 700  | 0.0001          | 35.7239 |
| 0.0001        | 4.0323 | 750  | 0.0001          | 34.9158 |
| 0.0001        | 4.3011 | 800  | 0.0001          | 37.2727 |
| 0.0001        | 4.5699 | 850  | 0.0001          | 35.2862 |
| 0.0001        | 4.8387 | 900  | 0.0001          | 35.5892 |
| 0.0001        | 5.1075 | 950  | 0.0001          | 34.9158 |
| 0.0001        | 5.3763 | 1000 | 0.0001          | 35.6229 |
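
The WER column follows the usual convention of word error rate expressed as a percentage. For orientation, here is a minimal sketch of how such a score is typically computed with the evaluate library; the package is not listed under framework versions, and the example strings are made up, so this is illustrative rather than the authors' evaluation code.

```python
import evaluate

# Word error rate; compute() returns a fraction, e.g. 0.1667.
wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]  # hypothetical model outputs
references = ["the cat sat on a mat"]     # hypothetical transcripts
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}")  # scale by 100 to match the table
```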

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.2
  • Tokenizers 0.19.1