---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper Large SSD superU
    results: []
---

# Whisper Large SSD superU

This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 4.2685
- Wer: 166.6349
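A WER above 100 may look surprising, but it is possible: WER counts substitutions, deletions, and insertions relative to the number of reference words, so a hypothesis with many spurious insertions can accumulate more errors than the reference has words. A minimal pure-Python sketch (not the metric code used during training, which typically relies on the `evaluate`/`jiwer` libraries) illustrates this:

```python
# Minimal WER sketch: WER = (substitutions + deletions + insertions) / N,
# where N is the reference word count. Values above 100% arise when the
# hypothesis contains many extra (inserted) words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return 100.0 * prev[-1] / len(ref)

print(wer("the cat sat", "the cat sat"))      # 0.0
print(wer("the cat", "a big black cat ran"))  # 200.0 — insertions push WER past 100
```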

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
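The linear scheduler with 500 warmup steps over 2000 training steps implies the learning rate ramps from 0 to 1e-05 during warmup, then decays linearly back to 0. A pure-Python sketch of that multiplier (a hedged approximation of transformers' `get_linear_schedule_with_warmup`, not the trainer's actual code):

```python
# Linear warmup + linear decay, as implied by the hyperparameters above.
# The multiplier scales the base learning_rate of 1e-05.

def lr_multiplier(step: int, warmup_steps: int = 500, total_steps: int = 2000) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # linear ramp from 0 to 1
    # Linear decay from 1 at the end of warmup down to 0 at total_steps.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr = 1e-05
print(base_lr * lr_multiplier(250))   # halfway through warmup -> 5e-06
print(base_lr * lr_multiplier(500))   # peak -> 1e-05
print(base_lr * lr_multiplier(2000))  # end of training -> 0.0
```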

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 4.1121        | 3.125  | 100  | 3.5671          | 154.6120 |
| 2.6613        | 6.25   | 200  | 2.8860          | 158.7150 |
| 1.8679        | 9.375  | 300  | 2.8342          | 143.7977 |
| 1.1096        | 12.5   | 400  | 3.0283          | 167.7163 |
| 0.563         | 15.625 | 500  | 3.2773          | 167.3982 |
| 0.2032        | 18.75  | 600  | 3.4815          | 167.4618 |
| 0.0899        | 21.875 | 700  | 3.6164          | 151.9720 |
| 0.0431        | 25.0   | 800  | 3.7659          | 154.4211 |
| 0.0262        | 28.125 | 900  | 3.8327          | 188.4860 |
| 0.0264        | 31.25  | 1000 | 3.8547          | 173.1234 |
| 0.0118        | 34.375 | 1100 | 3.9458          | 184.9237 |
| 0.0076        | 37.5   | 1200 | 4.0480          | 178.3079 |
| 0.0036        | 40.625 | 1300 | 4.1518          | 159.7964 |
| 0.0014        | 43.75  | 1400 | 4.1739          | 164.6310 |
| 0.0011        | 46.875 | 1500 | 4.2014          | 173.6641 |
| 0.001         | 50.0   | 1600 | 4.2262          | 147.2646 |
| 0.001         | 53.125 | 1700 | 4.2510          | 159.1921 |
| 0.0009        | 56.25  | 1800 | 4.2570          | 168.0025 |
| 0.0009        | 59.375 | 1900 | 4.2650          | 166.7621 |
| 0.0008        | 62.5   | 2000 | 4.2685          | 166.6349 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1