
torgo_tiny_finetune_F01_frozen_encoder

This model is a fine-tuned version of openai/whisper-tiny on the TORGO dataset (speaker F01), with the encoder frozen during fine-tuning. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.2915
  • WER: 73.9389
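
A minimal transcription sketch, assuming the standard Transformers ASR pipeline; the audio filename is hypothetical and the input is expected to be 16 kHz mono audio:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="jindaznb/torgo_tiny_finetune_F01_frozen_encoder",
)

# "sample.wav" is a hypothetical 16 kHz mono recording.
result = asr("sample.wav")
print(result["text"])
```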

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them appears after the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 20
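
A minimal training-setup sketch under these hyperparameters, using Seq2SeqTrainingArguments from Transformers; the dataset loading, data collator, and Trainer wiring are omitted, and the output directory name is illustrative. The Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit setting:

```python
from transformers import (
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# "frozen_encoder": keep the encoder weights fixed and train only the decoder.
for param in model.model.encoder.parameters():
    param.requires_grad = False

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_F01_frozen_encoder",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    predict_with_generate=True,  # decode with generate() so WER can be computed
)
```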

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|--------------:|------:|------:|----------------:|--------:|
| 0.7815        |  0.83 |   500 | 0.2625          | 39.3888 |
| 0.0936        |  1.66 |  1000 | 0.2655          | 29.4567 |
| 0.0711        |  2.49 |  1500 | 0.2517          | 25.4669 |
| 0.0456        |  3.32 |  2000 | 0.2738          | 28.6927 |
| 0.0327        |  4.15 |  2500 | 0.2770          | 34.8896 |
| 0.0258        |  4.98 |  3000 | 0.2653          | 20.0340 |
| 0.0181        |  5.80 |  3500 | 0.2902          | 27.0798 |
| 0.0145        |  6.63 |  4000 | 0.2801          | 22.3260 |
| 0.0114        |  7.46 |  4500 | 0.3174          | 27.0798 |
| 0.0094        |  8.29 |  5000 | 0.2789          | 47.8778 |
| 0.0072        |  9.12 |  5500 | 0.2827          | 20.7980 |
| 0.0058        |  9.95 |  6000 | 0.3011          | 23.8540 |
| 0.0046        | 10.78 |  6500 | 0.2892          | 23.0051 |
| 0.0035        | 11.61 |  7000 | 0.2858          | 20.5433 |
| 0.0034        | 12.44 |  7500 | 0.2876          | 25.2122 |
| 0.0021        | 13.27 |  8000 | 0.2876          | 23.1749 |
| 0.0020        | 14.10 |  8500 | 0.3039          | 41.9355 |
| 0.0019        | 14.93 |  9000 | 0.3060          | 24.7029 |
| 0.0010        | 15.75 |  9500 | 0.2938          | 30.4754 |
| 0.0009        | 16.58 | 10000 | 0.2998          | 31.3243 |
| 0.0007        | 17.41 | 10500 | 0.2933          | 37.0968 |
| 0.0005        | 18.24 | 11000 | 0.2937          | 39.7284 |
| 0.0004        | 19.07 | 11500 | 0.2921          | 69.8642 |
| 0.0002        | 19.90 | 12000 | 0.2915          | 73.9389 |
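
WER in the table is presumably the standard word error rate expressed as a percentage; a sketch of how it can be computed with the `evaluate` library (the transcripts below are illustrative):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative model outputs and ground-truth transcripts.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # word error rate in percent
```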

Framework versions

  • Transformers 4.32.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.14.7
  • Tokenizers 0.13.3
