Hanhpt23/whisper-tiny-Encod-frenchmed

This model is a fine-tuned version of openai/whisper-tiny on the pphuc25/FrenchMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9773
  • WER: 57.9179
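A minimal usage sketch is shown below, assuming the checkpoint is published under the repository id Hanhpt23/whisper-tiny-Encod-frenchmed and that a local audio file is available; the file name is a placeholder, not part of this card.

```python
# Minimal inference sketch (assumption: the checkpoint id below matches this
# repository; "example_recording.wav" is a hypothetical local audio file).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-tiny-Encod-frenchmed",
)

# Transcribe one recording and print the hypothesis text.
result = asr("example_recording.wav")
print(result["text"])
```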

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
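
For reference, the hyperparameters above correspond roughly to the Seq2SeqTrainingArguments sketched below. This is a hedged reconstruction, not the authors' training script; the output directory and any argument not listed in the card are placeholders.

```python
# Hedged reconstruction of the training configuration from the listed
# hyperparameters; output_dir is a placeholder, other defaults are assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-Encod-frenchmed",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer noted above.
)
```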

Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4539        | 1.0   | 215  | 1.4469          | 192.2287 |
| 0.9269        | 2.0   | 430  | 1.4258          | 127.1994 |
| 0.5316        | 3.0   | 645  | 1.5228          | 66.6422  |
| 0.3295        | 4.0   | 860  | 1.6796          | 60.4839  |
| 0.212         | 5.0   | 1075 | 1.7495          | 73.7537  |
| 0.1228        | 6.0   | 1290 | 1.8167          | 78.9589  |
| 0.0766        | 7.0   | 1505 | 1.8370          | 80.0587  |
| 0.0617        | 8.0   | 1720 | 1.8817          | 61.1437  |
| 0.0575        | 9.0   | 1935 | 1.9629          | 88.0499  |
| 0.0319        | 10.0  | 2150 | 1.9228          | 58.3578  |
| 0.0266        | 11.0  | 2365 | 1.9362          | 57.1848  |
| 0.0143        | 12.0  | 2580 | 1.9740          | 57.1848  |
| 0.0124        | 13.0  | 2795 | 1.9917          | 86.9501  |
| 0.0109        | 14.0  | 3010 | 1.9632          | 56.5982  |
| 0.0087        | 15.0  | 3225 | 1.9501          | 60.8504  |
| 0.0048        | 16.0  | 3440 | 1.9785          | 55.7918  |
| 0.0034        | 17.0  | 3655 | 1.9765          | 58.6510  |
| 0.0021        | 18.0  | 3870 | 1.9765          | 56.8915  |
| 0.0007        | 19.0  | 4085 | 1.9737          | 58.3578  |
| 0.0007        | 20.0  | 4300 | 1.9773          | 57.9179  |
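
The WER values in the table are percentages. A short sketch of how WER is typically computed for Whisper fine-tuning with the `evaluate` library follows; the prediction and reference strings are illustrative examples, not outputs of this model.

```python
# Sketch of the WER metric as commonly computed with the `evaluate` library;
# the strings below are made-up examples, not data from this model card.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["le patient presente une fievre"]
references = ["le patient presente une fievre legere"]

# evaluate's "wer" returns a fraction; multiply by 100 to match the table scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```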

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1