whisper-base-oshiwambo-speech

This model is a fine-tuned version of openai/whisper-base on the meyabase/crowd-oshiwambo-speech-greetings dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0834
  • WER: 80.9524%
  • CER: 58.9623%
  • Word accuracy: 82.2917%
  • Sentence accuracy: 54.2857%
  • Precision: 0.5097
  • Recall: 0.7524
  • F1 score: 0.6077
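
As a quick usage sketch, the checkpoint can be loaded through the transformers ASR pipeline. The Hub repo id and the audio filename below are assumptions for illustration, not taken from this card.

```python
import torch
from transformers import pipeline

# Hypothetical Hub repo id; replace with the actual path of this checkpoint.
MODEL_ID = "meyabase/whisper-base-oshiwambo-speech"

asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local recording (Whisper expects 16 kHz mono audio).
result = asr("greeting.wav")  # hypothetical filename
print(result["text"])
```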

Model description

This checkpoint is openai/whisper-base, OpenAI's transformer encoder-decoder speech recognition model, fine-tuned to transcribe spoken Oshiwambo greetings.

Intended uses & limitations

This checkpoint is intended for transcribing short spoken Oshiwambo greetings. It is not a general-purpose Oshiwambo ASR model: it was fine-tuned on a small, greetings-only dataset, and evaluation WER swings widely between checkpoints (roughly 25% to 92% in the results table below), so outputs should be reviewed before downstream use.

Training and evaluation data

The model was fine-tuned and evaluated on the meyabase/crowd-oshiwambo-speech-greetings dataset, a crowd-sourced collection of recorded Oshiwambo greetings. The epoch counts in the results table (over 1,100 epochs across 10,000 steps at an effective batch size of 16) imply the training set is very small.
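
For reference, a minimal sketch of loading the dataset with the datasets library; the split name and the audio column name are assumptions:

```python
from datasets import load_dataset, Audio

# Load the corpus from the Hub; the "train" split is an assumption,
# since the card does not document the dataset's split layout.
ds = load_dataset("meyabase/crowd-oshiwambo-speech-greetings", split="train")

# Whisper models expect 16 kHz input; resample the audio column
# (assumed to be named "audio") accordingly.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["array"].shape)
```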

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 10000
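
A sketch of how these hyperparameters map onto transformers' Seq2SeqTrainingArguments. The output directory and the evaluation/save cadence (every 1000 steps, inferred from the results table below) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-oshiwambo-speech",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=10000,
    evaluation_strategy="steps",  # assumed: the table reports metrics every 1000 steps
    eval_steps=1000,
    save_steps=1000,
    predict_with_generate=True,   # needed to compute WER/CER on generated text
)
```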

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER (%) | CER (%) | Word Acc (%) | Sent Acc (%) | Precision | Recall | F1 Score |
|---------------|---------|-------|-----------------|---------|---------|--------------|--------------|-----------|--------|----------|
| 0.0099        | 117.65  | 1000  | 0.0777          | 46.6667 | 31.6038 | 69.1358      | 11.4286      | 0.6914    | 0.5333 | 0.6022   |
| 0.0105        | 235.29  | 2000  | 0.0806          | 47.6190 | 33.2547 | 71.4286      | 11.4286      | 0.7143    | 0.5238 | 0.6044   |
| 0.0106        | 352.94  | 3000  | 0.0795          | 44.7619 | 34.6698 | 76.3158      | 25.7143      | 0.7632    | 0.5524 | 0.6409   |
| 0.0092        | 470.59  | 4000  | 0.0793          | 42.8571 | 35.8491 | 81.0811      | 31.4286      | 0.8108    | 0.5714 | 0.6704   |
| 0.0099        | 588.24  | 5000  | 0.0806          | 92.3810 | 69.8113 | 81.7073      | 42.8571      | 0.4752    | 0.6381 | 0.5447   |
| 0.0094        | 705.88  | 6000  | 0.0800          | 28.5714 | 22.1698 | 83.3333      | 48.5714      | 0.8333    | 0.7143 | 0.7692   |
| 0.0093        | 823.53  | 7000  | 0.0796          | 24.7619 | 16.2736 | 82.2917      | 54.2857      | 0.8229    | 0.7524 | 0.7861   |
| 0.0095        | 941.18  | 8000  | 0.0815          | 82.8571 | 59.1981 | 80.2083      | 51.4286      | 0.4968    | 0.7333 | 0.5923   |
| 0.0100        | 1058.82 | 9000  | 0.0815          | 24.7619 | 16.5094 | 82.2917      | 54.2857      | 0.8229    | 0.7524 | 0.7861   |
| 0.0088        | 1176.47 | 10000 | 0.0834          | 80.9524 | 58.9623 | 82.2917      | 54.2857      | 0.5097    | 0.7524 | 0.6077   |
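
The WER and CER columns could be computed with the evaluate library, as in this sketch; the example strings are hypothetical:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["wa lala po nawa"]  # hypothetical model output
references = ["wa lala po"]        # hypothetical reference transcript

# Both metrics return a fraction; multiply by 100 to match the table.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```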

Framework versions

  • Transformers 4.30.0.dev0
  • PyTorch 2.0.0
  • Datasets 2.12.0
  • Tokenizers 0.13.3