
scream_tertius_dropout_replicate_test7b

This model is a fine-tuned version of openai/whisper-small on the NbAiLab/NCC_speech_all_v5 dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • step: 19999
  • eval_loss: 0.6607
  • train_loss: 0.3094
  • eval_wer: 10.8709
  • eval_cer: 5.1449
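
Since the card itself does not include usage code, here is a minimal transcription sketch with the transformers pipeline. The repository id NbAiLab/scream_tertius_dropout_replicate_test7b and the sample path audio.wav are assumptions, not confirmed by this card.

```python
# Minimal usage sketch; the repo id below is an assumption based on the
# model name and the NbAiLab organization, not confirmed by this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/scream_tertius_dropout_replicate_test7b",
    chunk_length_s=30,  # split long recordings into 30 s windows
)

# "audio.wav" is a placeholder path to a local speech recording.
result = asr("audio.wav")
print(result["text"])
```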

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 2e-05
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 32
  • total_train_batch_size_per_node: 128
  • total_train_batch_size: 1024
  • total_optimization_steps: 20,000
  • starting_optimization_step: None
  • finishing_optimization_step: 20,000
  • num_train_dataset_workers: 32
  • num_hosts: 8
  • total_num_training_examples: 20,480,000
  • steps_per_epoch: 1314
  • num_beams: 5
  • dropout: True
  • dropout_probability: 0.1
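
The dropout settings above can be reproduced when loading the base checkpoint. Below is a minimal sketch, assuming the standard transformers API; the actual multi-host training loop is not part of this card.

```python
# Sketch of enabling the listed dropout on the base checkpoint; the
# distributed training loop itself is not published in this card.
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    dropout=0.1,  # matches dropout_probability above
)
model.train()  # dropout layers are only active in training mode
```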

Training results

step    eval_loss  train_loss  eval_wer   eval_cer
0       1.3578     8.1186      156.3946   118.6999
1000    0.7538     0.9632      23.5688    9.1509
2000    0.7164     0.6653      18.2704    7.4628
3000    0.7374     0.5403      15.1340    6.4853
4000    0.7819     0.4543      13.5810    6.0368
5000    0.8360     0.4266      12.2716    5.4775
6000    0.9197     0.3941      11.6017    5.2104
7000    0.9399     0.3705      11.8149    5.3515
8000    0.7468     0.3806      11.6017    5.2104
9000    0.7944     0.3562      11.5713    5.3212
10000   0.6599     0.3563      11.1145    5.1046
11000   0.6534     0.3394      11.2972    5.3313
12000   0.5689     0.3427      11.0536    5.2708
13000   0.5633     0.3313      11.1754    5.2557
14000   0.7331     0.3278      11.4495    5.4623
15000   0.6593     0.3011      11.1754    5.1902
16000   0.6180     0.3044      11.1449    5.2356
17000   0.6761     0.3058      10.9318    5.2053
18000   0.6697     0.3154      10.8709    5.1499
19000   0.6730     0.2888      11.0231    5.2658
19999   0.6607     0.3094      10.8709    5.1449
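
The eval_wer and eval_cer columns are word and character error rates in percent. The card does not include its evaluation script, but scores of this form are commonly computed with the evaluate library; a hedged sketch with placeholder strings:

```python
# Illustrative WER/CER computation with the evaluate library; the card's
# own evaluation code is not published, so treat this as an assumption.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["hallo verden"]   # placeholder model transcripts
references = ["hallo verden i"]  # placeholder reference transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"eval_wer={wer:.4f}  eval_cer={cer:.4f}")
```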

Framework versions

  • Transformers 4.30.0.dev0
  • Datasets 2.12.0
  • Tokenizers 0.13.3