
whisper-ai-nomo

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0000
  • Wer: 12.2249
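
The WER (word error rate) figures in this card are percentages: the number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference word count. The card was presumably evaluated with a library such as `evaluate` or `jiwer`; the following standalone sketch is only for illustration of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: 100 * (S + D + I) / N, via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of 6 words
```

Because insertions count against the hypothesis, WER can exceed 100 — as in the step-900 row of the training log below.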

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 132
  • num_epochs: 30
  • mixed_precision_training: Native AMP
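
The total train batch size follows from the per-device batch size and gradient accumulation. A minimal sketch mirroring the values above (the dict is illustrative, not the original training script):

```python
# Hyperparameters as listed above (illustrative; not the original training script)
config = {
    "learning_rate": 4e-4,
    "train_batch_size": 8,
    "eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "lr_scheduler_warmup_steps": 132,
    "num_epochs": 30,
}

# Effective batch size per optimizer step = per-device batch size * accumulation steps
total_train_batch_size = config["train_batch_size"] * config["gradient_accumulation_steps"]
print(total_train_batch_size)  # 16, matching the value reported above
```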

Training results

Training Loss   Epoch     Step   Validation Loss   Wer
0.8516          0.8584    100    0.0650            14.5477
0.1705          1.7167    200    0.0484            18.7042
0.0597          2.5751    300    0.0123            14.1809
0.0515          3.4335    400    0.1206            59.9022
0.0337          4.2918    500    0.0030            6.3570
0.0232          5.1502    600    0.0023            6.3570
0.0356          6.0086    700    0.0155            12.9584
0.0124          6.8670    800    0.0014            4.7677
0.0152          7.7253    900    0.0030            168.7042
0.0089          8.5837    1000   0.0130            8.1907
0.0098          9.4421    1100   0.0023            12.7139
0.0095          10.3004   1200   0.0094            13.3252
0.0121          11.1588   1300   0.0008            6.9682
0.0082          12.0172   1400   0.0032            11.6137
0.0042          12.8755   1500   0.0126            12.8362
0.0091          13.7339   1600   0.0041            13.0807
0.0047          14.5923   1700   0.0005            9.5355
0.0082          15.4506   1800   0.0000            8.8020
0.0030          16.3090   1900   0.0000            9.2910
0.0064          17.1674   2000   0.0002            56.9682
0.0017          18.0258   2100   0.0000            13.4474
0.0004          18.8841   2200   0.0000            16.2592
0.0014          19.7425   2300   0.0003            12.1027
0.0009          20.6009   2400   0.0002            12.1027
0.0001          21.4592   2500   0.0000            12.8362
0.0000          22.3176   2600   0.0000            12.4694
0.0000          23.1760   2700   0.0000            12.3472
0.0000          24.0343   2800   0.0000            12.2249
0.0000          24.8927   2900   0.0000            12.2249
0.0000          25.7511   3000   0.0000            12.2249
0.0000          26.6094   3100   0.0000            12.2249
0.0000          27.4678   3200   0.0000            12.2249
0.0000          28.3262   3300   0.0000            12.2249
0.0000          29.1845   3400   0.0000            12.2249
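
Note that the final checkpoint's WER (12.2249) is not the best observed during training: the step-800 evaluation reached 4.7677. If intermediate checkpoints were saved, picking the lowest-WER one may be preferable. A small sketch selecting that row from the log above (data abbreviated to a few representative rows):

```python
# (step, validation WER) pairs taken from the training log above (subset)
log = [
    (500, 6.3570),
    (800, 4.7677),
    (900, 168.7042),
    (1800, 8.8020),
    (3400, 12.2249),
]

# Pick the evaluation step with the lowest WER
best_step, best_wer = min(log, key=lambda row: row[1])
print(best_step, best_wer)  # 800 4.7677
```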

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Model size

  • 242M params (Safetensors, F32)
