
# whisper-ai-nomi

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) (242M parameters) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0000
- Wer: 84.4743
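
The WER reported above is word-level edit distance divided by the number of reference words, expressed as a percentage. A minimal pure-Python sketch of the metric (for illustration only; it is an assumption that training used an equivalent implementation such as the `evaluate` library's `wer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count * 100."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which explains the values above 100 in the training-results table.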

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 132
- num_epochs: 30
- mixed_precision_training: Native AMP
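
The effective batch size and the linear schedule follow directly from these values; a small sketch (the total step count of 3600 is taken from the logged steps in the results table and is an approximation, since 30 epochs at ~122.5 steps per epoch would end slightly later):

```python
BASE_LR = 4e-4        # learning_rate
WARMUP_STEPS = 132    # lr_scheduler_warmup_steps

def effective_batch_size(per_device: int, accum_steps: int, num_devices: int = 1) -> int:
    # total_train_batch_size = train_batch_size * gradient_accumulation_steps * devices
    return per_device * accum_steps * num_devices

def linear_lr(step: int, total_steps: int) -> float:
    """Linear warmup to BASE_LR over WARMUP_STEPS, then linear decay to 0
    (the behavior of the `linear` scheduler type)."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (total_steps - step) / (total_steps - WARMUP_STEPS))

print(effective_batch_size(8, 2))  # 16, matching total_train_batch_size
```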

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.8208        | 0.8163  | 100  | 0.1289          | 13.0807  |
| 0.2627        | 1.6327  | 200  | 0.3265          | 94.8655  |
| 1.5786        | 2.4490  | 300  | 3.1093          | 100.0    |
| 0.4107        | 3.2653  | 400  | 0.2735          | 93.5208  |
| 0.2687        | 4.0816  | 500  | 0.2438          | 92.6650  |
| 0.2194        | 4.8980  | 600  | 0.2357          | 94.0098  |
| 0.1686        | 5.7143  | 700  | 0.2356          | 723.7164 |
| 0.1302        | 6.5306  | 800  | 0.0559          | 194.6210 |
| 0.0847        | 7.3469  | 900  | 0.0397          | 34.9633  |
| 0.0595        | 8.1633  | 1000 | 0.0123          | 17.8484  |
| 0.0622        | 8.9796  | 1100 | 0.2014          | 54.1565  |
| 0.0518        | 9.7959  | 1200 | 0.0192          | 27.9951  |
| 0.0285        | 10.6122 | 1300 | 0.0271          | 24.5721  |
| 0.0307        | 11.4286 | 1400 | 0.0565          | 243.0318 |
| 0.0207        | 12.2449 | 1500 | 0.0021          | 20.0489  |
| 0.0156        | 13.0612 | 1600 | 0.0091          | 21.1491  |
| 0.0157        | 13.8776 | 1700 | 0.0147          | 71.0269  |
| 0.0127        | 14.6939 | 1800 | 0.0019          | 20.2934  |
| 0.0084        | 15.5102 | 1900 | 0.0007          | 14.0587  |
| 0.008         | 16.3265 | 2000 | 0.0052          | 19.9267  |
| 0.0071        | 17.1429 | 2100 | 0.0009          | 98.5330  |
| 0.0023        | 17.9592 | 2200 | 0.0001          | 58.8020  |
| 0.002         | 18.7755 | 2300 | 0.0012          | 33.4963  |
| 0.002         | 19.5918 | 2400 | 0.0018          | 176.6504 |
| 0.0033        | 20.4082 | 2500 | 0.0001          | 148.5330 |
| 0.0017        | 21.2245 | 2600 | 0.0000          | 122.8606 |
| 0.0004        | 22.0408 | 2700 | 0.0000          | 125.3056 |
| 0.0001        | 22.8571 | 2800 | 0.0000          | 110.8802 |
| 0.0           | 23.6735 | 2900 | 0.0000          | 110.8802 |
| 0.0           | 24.4898 | 3000 | 0.0000          | 97.9218  |
| 0.0           | 25.3061 | 3100 | 0.0000          | 97.9218  |
| 0.0           | 26.1224 | 3200 | 0.0000          | 84.5966  |
| 0.0           | 26.9388 | 3300 | 0.0000          | 84.4743  |
| 0.0           | 27.7551 | 3400 | 0.0000          | 84.4743  |
| 0.0           | 28.5714 | 3500 | 0.0000          | 84.4743  |
| 0.0           | 29.3878 | 3600 | 0.0000          | 84.4743  |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
