Transcriber-small

This model is a fine-tuned version of openai/whisper-small on the dataset_whisper dataset. It achieves the following results on the evaluation set:

  • Loss: 3.0153
  • WER: 97.2358
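
As a quick sanity check, the checkpoint can be loaded with the 🤗 Transformers ASR pipeline. This is a minimal sketch, assuming the repository id mediaProcessing/Transcriber-small from this card and a placeholder audio file:

```python
# Minimal inference sketch for this checkpoint.
# "audio.wav" is a placeholder path; any audio file readable by ffmpeg works.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mediaProcessing/Transcriber-small",
)
result = asr("audio.wav")
print(result["text"])
```

Given the evaluation WER reported above, transcriptions from this checkpoint should be spot-checked rather than used as-is.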

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
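
As a rough reproduction aid, the list above maps onto the standard Seq2SeqTrainingArguments fields sketched below; the output directory is a placeholder, and the Adam betas/epsilon listed above are the Transformers defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters above; "output_dir" is a placeholder.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
# so they need no explicit arguments here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./transcriber-small",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,      # effective train batch size: 2 * 8 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
)
```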

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%)  |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.6006        | 4.02   | 100  | 2.6681          | 99.9350  |
| 1.6004        | 8.04   | 200  | 2.1138          | 107.2846 |
| 1.0072        | 12.06  | 300  | 1.9609          | 129.9187 |
| 0.5229        | 16.08  | 400  | 2.0901          | 119.0894 |
| 0.2155        | 20.10  | 500  | 2.2948          | 105.9187 |
| 0.0743        | 24.12  | 600  | 2.3731          | 100.6829 |
| 0.0292        | 28.14  | 700  | 2.5375          | 118.0813 |
| 0.0169        | 32.16  | 800  | 2.5601          | 108.0650 |
| 0.0121        | 36.18  | 900  | 2.6491          | 102.7642 |
| 0.0080        | 40.20  | 1000 | 2.6436          | 94.3415  |
| 0.0046        | 44.22  | 1100 | 2.7131          | 89.8211  |
| 0.0021        | 48.24  | 1200 | 2.7516          | 96.9106  |
| 0.0012        | 52.26  | 1300 | 2.7878          | 95.3496  |
| 0.0009        | 56.28  | 1400 | 2.8137          | 97.6260  |
| 0.0008        | 60.30  | 1500 | 2.8333          | 94.2439  |
| 0.0007        | 64.32  | 1600 | 2.8514          | 90.1463  |
| 0.0006        | 68.34  | 1700 | 2.8667          | 95.3821  |
| 0.0006        | 72.36  | 1800 | 2.8813          | 98.0488  |
| 0.0005        | 76.38  | 1900 | 2.8932          | 98.8618  |
| 0.0005        | 80.40  | 2000 | 2.9056          | 98.9268  |
| 0.0004        | 84.42  | 2100 | 2.9156          | 96.7805  |
| 0.0004        | 88.44  | 2200 | 2.9251          | 96.7805  |
| 0.0004        | 92.46  | 2300 | 2.9343          | 97.8211  |
| 0.0003        | 96.48  | 2400 | 2.9439          | 97.8537  |
| 0.0003        | 100.50 | 2500 | 2.9516          | 97.1057  |
| 0.0003        | 104.52 | 2600 | 2.9597          | 98.1138  |
| 0.0003        | 108.54 | 2700 | 2.9671          | 96.4228  |
| 0.0003        | 112.56 | 2800 | 2.9733          | 99.1870  |
| 0.0003        | 116.58 | 2900 | 2.9791          | 102.2764 |
| 0.0003        | 120.60 | 3000 | 2.9860          | 101.2033 |
| 0.0002        | 124.62 | 3100 | 2.9903          | 98.9919  |
| 0.0002        | 128.64 | 3200 | 2.9953          | 98.3415  |
| 0.0002        | 132.66 | 3300 | 2.9996          | 99.8699  |
| 0.0002        | 136.68 | 3400 | 3.0034          | 100.1301 |
| 0.0002        | 140.70 | 3500 | 3.0070          | 98.7317  |
| 0.0002        | 144.72 | 3600 | 3.0093          | 97.1382  |
| 0.0002        | 148.74 | 3700 | 3.0118          | 98.3740  |
| 0.0002        | 152.76 | 3800 | 3.0136          | 96.8130  |
| 0.0002        | 156.78 | 3900 | 3.0153          | 96.8780  |
| 0.0002        | 160.80 | 4000 | 3.0153          | 97.2358  |
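
The WER column is a percentage; values above 100 are possible because WER counts insertions as errors, so a hypothesis can accrue more errors than the reference has words. For reference, here is a minimal sketch of how such a score is computed with the 🤗 evaluate library (the strings below are illustrative only, not from the evaluation set):

```python
import evaluate

# Word error rate, scaled to percent as in the table above.
wer = evaluate.load("wer")
predictions = ["the cat sat on mat please"]   # illustrative hypothesis
references = ["the cat sat on the mat"]       # illustrative reference
print(100 * wer.compute(predictions=predictions, references=references))
```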

Framework versions

  • Transformers 4.32.0.dev0
  • PyTorch 1.12.1+cu113
  • Datasets 2.14.1
  • Tokenizers 0.13.3