
Whisper Small Lingala - KasuleTrevor

This model is a fine-tuned version of openai/whisper-small on the AfriVoice dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6526
  • WER: 53.5292

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
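The effective batch size and the learning-rate schedule implied by the settings above can be sketched in plain Python. This mirrors the behavior of the linear-with-warmup schedule used by the Transformers `Trainer`; all values are taken from the list above, and the helper function is illustrative, not the card author's code:

```python
# Hyperparameters as listed on this card.
learning_rate = 1e-05
train_batch_size = 16
gradient_accumulation_steps = 2
warmup_steps = 500
training_steps = 5000

# Effective (total) train batch size: per-device batch size times
# the number of gradient-accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32

def linear_schedule_lr(step):
    """Linear warmup to the peak LR, then linear decay to zero."""
    if step < warmup_steps:
        return learning_rate * step / warmup_steps
    remaining = training_steps - step
    return learning_rate * max(0.0, remaining / (training_steps - warmup_steps))

print(linear_schedule_lr(250))   # halfway through warmup: 5e-06
print(linear_schedule_lr(500))   # peak learning rate: 1e-05
print(linear_schedule_lr(5000))  # end of training: 0.0
```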

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER     |
|--------------:|---------:|-----:|----------------:|--------:|
| 0.0002        | 153.8462 | 1000 | 1.4799          | 52.3072 |
| 0.0001        | 307.6923 | 2000 | 1.5528          | 52.1298 |
| 0.0           | 461.5385 | 3000 | 1.6058          | 52.3033 |
| 0.0           | 615.3846 | 4000 | 1.6294          | 52.8546 |
| 0.0           | 769.2308 | 5000 | 1.6526          | 53.5292 |
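For reference, WER (word error rate) is the word-level edit distance between the model transcript and the reference transcript, divided by the number of reference words, so the final WER of 53.5292 means that on average roughly half the reference words require an edit. A minimal sketch of the metric in pure Python (the sample sentences are made-up illustrations, not AfriVoice data):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("mbote na yo", "mbote na ye"))  # 1 substitution / 3 words
```

In practice, evaluation scripts typically use a library such as `jiwer` (or the Hugging Face `evaluate` WER metric) rather than a hand-rolled implementation.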

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1

Model tree for KasuleTrevor/whisper-lingala-small-test

  • Finetuned from: openai/whisper-small

Dataset used to train KasuleTrevor/whisper-lingala-small-test

  • AfriVoice