Whisper Medium Korean

This model is a fine-tuned version of openai/whisper-medium on the Korean subset of the Common Voice 11.0 dataset. It achieves the following results on the evaluation set (a usage sketch is shown after the results):

  • Loss: 0.4319
  • Wer: 26.9672
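
The checkpoint can be used for Korean speech-to-text with the standard transformers pipeline. The snippet below is a minimal inference sketch, not part of the original card; it assumes the checkpoint is published under the hub id mvbnh/whisper-medium-ko-new and that a local 16 kHz audio file is available.

```python
# Minimal inference sketch. Assumptions: the hub id "mvbnh/whisper-medium-ko-new"
# and the audio file "sample.wav" are placeholders for illustration.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mvbnh/whisper-medium-ko-new",
)

# Whisper auto-detects the language by default; pin it to Korean transcription here.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```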

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent configuration sketch is shown after the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 1500
  • mixed_precision_training: Native AMP
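
The listing below maps these values onto transformers Seq2SeqTrainingArguments. It is a reconstruction for illustration only, not the original training script; the output_dir is a placeholder, and fp16=True stands in for "Native AMP".

```python
# Reconstructed configuration sketch; argument values mirror the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ko-new",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1500,
    fp16=True,  # approximates "Native AMP" mixed precision
)
```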

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:--------------|:--------|:-----|:----------------|:--------|
| 0.0026        | 1.1111  | 50   | 0.3466          | 26.0886 |
| 0.0013        | 2.2222  | 100  | 0.3569          | 25.8212 |
| 0.0025        | 3.3333  | 150  | 0.3672          | 25.2865 |
| 0.003         | 4.4444  | 200  | 0.3578          | 25.5157 |
| 0.0091        | 5.5556  | 250  | 0.3599          | 25.3629 |
| 0.0072        | 6.6667  | 300  | 0.3682          | 25.9740 |
| 0.0054        | 7.7778  | 350  | 0.3785          | 26.8526 |
| 0.0093        | 8.8889  | 400  | 0.3764          | 27.1581 |
| 0.013         | 10.0    | 450  | 0.3886          | 28.5332 |
| 0.0146        | 11.1111 | 500  | 0.3900          | 27.5401 |
| 0.0128        | 12.2222 | 550  | 0.3917          | 27.3491 |
| 0.0054        | 13.3333 | 600  | 0.3926          | 26.5852 |
| 0.0029        | 14.4444 | 650  | 0.4281          | 28.4186 |
| 0.0062        | 15.5556 | 700  | 0.3957          | 27.5401 |
| 0.0062        | 16.6667 | 750  | 0.4080          | 27.7693 |
| 0.0023        | 17.7778 | 800  | 0.4151          | 27.4637 |
| 0.0034        | 18.8889 | 850  | 0.4153          | 28.2659 |
| 0.0009        | 20.0    | 900  | 0.4133          | 27.0053 |
| 0.0003        | 21.1111 | 950  | 0.4192          | 26.9672 |
| 0.0003        | 22.2222 | 1000 | 0.4223          | 26.9290 |
| 0.0002        | 23.3333 | 1050 | 0.4247          | 27.0053 |
| 0.0002        | 24.4444 | 1100 | 0.4266          | 26.9672 |
| 0.0002        | 25.5556 | 1150 | 0.4279          | 27.0817 |
| 0.0002        | 26.6667 | 1200 | 0.4290          | 27.0053 |
| 0.0002        | 27.7778 | 1250 | 0.4299          | 26.9672 |
| 0.0002        | 28.8889 | 1300 | 0.4306          | 26.9672 |
| 0.0002        | 30.0    | 1350 | 0.4312          | 27.0053 |
| 0.0002        | 31.1111 | 1400 | 0.4316          | 26.9672 |
| 0.0002        | 32.2222 | 1450 | 0.4318          | 26.9672 |
| 0.0002        | 33.3333 | 1500 | 0.4319          | 26.9672 |
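
The Wer column appears to be reported as a percentage. As a rough illustration of how such a score can be computed (this is not the original evaluation code), the `evaluate` library's `wer` metric compares predicted and reference transcripts; the strings below are made-up placeholders.

```python
# Illustrative WER computation; the prediction/reference pair is a placeholder,
# not an output of this model or a sample from Common Voice 11.0.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["안녕하세요 여러분"]
references = ["안녕하세요 여러분 반갑습니다"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```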

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1