---
library_name: transformers
language:
  - hi
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_11_0
metrics:
  - wer
model-index:
  - name: Whisper Small Ori vi
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 11.0
          type: mozilla-foundation/common_voice_11_0
          args: 'config: hi, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 15.251862231728003
---

# Whisper Small Ori vi

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

- Loss: 0.4021
- Wer: 15.2519
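
A minimal inference sketch with the `transformers` ASR pipeline is shown below; the repo id is a placeholder assumption and should be replaced with the actual Hub id of this checkpoint.

```python
# Minimal inference sketch. The repo id is a placeholder assumption;
# substitute the actual Hub repo id of this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-ori-vi",  # hypothetical repo id
)

# Whisper models expect 16 kHz audio; file inputs are decoded via ffmpeg.
result = asr("sample.wav")
print(result["text"])
```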

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows this list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
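
As a rough illustration, the sketch below shows how the hyperparameters above would typically map onto `Seq2SeqTrainingArguments`; the output directory, evaluation/save cadence, and reporting backend are assumptions, not values taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the listed hyperparameters.
# betas=(0.9, 0.999) and epsilon=1e-08 are the adamw_torch defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ori-vi",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="steps",  # assumed: evaluate every 100 steps, as in the results table
    eval_steps=100,
    save_steps=100,
    predict_with_generate=True,
    report_to=["tensorboard"],  # assumed
)
```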

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.458         | 0.2222 | 100  | 0.4649          | 16.8154 |
| 0.4314        | 0.4444 | 200  | 0.4266          | 16.4319 |
| 0.4275        | 0.6667 | 300  | 0.4166          | 15.5542 |
| 0.3946        | 0.8889 | 400  | 0.4107          | 15.5764 |
| 0.2151        | 1.1111 | 500  | 0.4051          | 15.5616 |
| 0.2383        | 1.3333 | 600  | 0.4014          | 15.3551 |
| 0.2176        | 1.5556 | 700  | 0.3979          | 15.5395 |
| 0.2271        | 1.7778 | 800  | 0.3996          | 15.2371 |
| 0.222         | 2.0    | 900  | 0.3966          | 15.4141 |
| 0.1469        | 2.2222 | 1000 | 0.4021          | 15.2519 |
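
The WER values above are percentages. A small sketch of how such a score can be computed with the `evaluate` library (the strings are illustrative placeholders, not data from the evaluation set):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative placeholder texts; a real evaluation would compare model
# transcriptions against the Common Voice 11.0 test references.
predictions = ["this is a sample transcription", "another sample"]
references = ["this is a sample transcription", "another example"]

# evaluate returns WER as a fraction; multiply by 100 to match the
# percentage values reported in the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```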

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0