---
language:
- 'no'
license: apache-2.0
base_model: NbAiLab/nb-whisper-tiny-v0.7
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: nb-whisper-tiny-v0.7-semantic
  results: []
---
# nb-whisper-tiny-v0.7-semantic
This model is a fine-tuned version of NbAiLab/nb-whisper-tiny-v0.7 on the NbAiLab/ncc_speech_styling_v4 dataset. It achieves the following results on the evaluation set:
- step: 249
- validation_nst_loss: 0.6579
- train_loss: 1.2508
- validation_nst_wer: 8.8029
- validation_nst_cer: 2.9662
- validation_nst_exact_wer: 9.6358
- validation_nst_exact_cer: 3.0880
- validation_clean_stortinget_no_loss: 0.7202
- validation_clean_stortinget_no_wer: 16.5324
- validation_clean_stortinget_no_cer: 9.1926
- validation_clean_stortinget_no_exact_wer: 20.6476
- validation_clean_stortinget_no_exact_cer: 9.9178
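
This card does not yet include a usage snippet, so the following is a minimal sketch of transcribing Norwegian audio with the `transformers` pipeline API. The repo id `NbAiLab/nb-whisper-tiny-v0.7-semantic` is assumed from the model name above, and the audio file name is a placeholder.

```python
# Minimal usage sketch (assumed repo id and placeholder audio path).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-whisper-tiny-v0.7-semantic",  # assumed from the model name in this card
    chunk_length_s=30,                              # Whisper's 30-second context window
)

# "sample.mp3" is a placeholder; the pipeline resamples the audio as needed.
result = asr("sample.mp3", generate_kwargs={"task": "transcribe", "language": "no"})
print(result["text"])
```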
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch mapping a subset of them to a Hugging Face training configuration follows the list):
- learning_rate: 2.5e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 32
- total_train_batch_size_per_node: 128
- total_train_batch_size: 1024
- total_optimization_steps: 250
- starting_optimization_step: None
- finishing_optimization_step: 250
- num_train_dataset_workers: 32
- num_hosts: 8
- total_num_training_examples: 256,000
- steps_per_epoch: To be computed after first epoch
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 0.00015
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
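
For orientation, here is a hedged sketch of how a subset of these values could be expressed as Hugging Face `Seq2SeqTrainingArguments`. The actual run used a custom multi-host setup (8 hosts, total batch size 1024), and fields such as `bpe_dropout_probability` and `activation_dropout_probability` have no direct equivalent in these arguments.

```python
# Hedged sketch only: a partial mapping of the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./nb-whisper-tiny-v0.7-semantic",  # assumed output path
    per_device_train_batch_size=32,
    learning_rate=2.5e-5,
    lr_scheduler_type="linear",
    max_steps=250,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=0.00015,
)
```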
### Training results
step | validation_nst_loss | train_loss | validation_nst_wer | validation_nst_cer | validation_nst_exact_wer | validation_nst_exact_cer | validation_clean_stortinget_no_loss | validation_clean_stortinget_no_wer | validation_clean_stortinget_no_cer | validation_clean_stortinget_no_exact_wer | validation_clean_stortinget_no_exact_cer |
---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.5155 | 1.4782 | 7.9155 | 2.5430 | 8.7103 | 2.6640 | 0.7153 | 15.8148 | 8.6090 | 19.7201 | 9.3061 |
40 | 0.5328 | 1.3581 | 8.6232 | 2.8926 | 9.6031 | 3.0477 | 0.7058 | 17.0843 | 9.6747 | 21.3213 | 10.4075 |
80 | 0.6321 | 1.1988 | 8.8464 | 3.1266 | 9.7447 | 3.2629 | 0.7131 | 16.8712 | 9.3548 | 20.9062 | 10.0570 |
120 | 0.6456 | 1.1956 | 8.7757 | 3.0306 | 9.6630 | 3.1585 | 0.7165 | 16.7243 | 9.2521 | 20.8113 | 9.9704 |
160 | 0.6455 | 1.2386 | 8.7648 | 2.9784 | 9.6195 | 3.1081 | 0.7188 | 16.7030 | 9.2259 | 20.8967 | 9.9658 |
200 | 0.6576 | 1.1753 | 8.6777 | 2.9336 | 9.4888 | 3.0559 | 0.7202 | 16.5917 | 9.1859 | 20.7188 | 9.9205 |
240 | 0.6577 | 1.1923 | 8.9226 | 2.9923 | 9.7338 | 3.1100 | 0.7214 | 16.6272 | 9.2176 | 20.7211 | 9.9396 |
249 | 0.6579 | 1.2508 | 8.8029 | 2.9662 | 9.6358 | 3.0880 | 0.7202 | 16.5324 | 9.1926 | 20.6476 | 9.9178 |
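
The WER and CER columns above are percentages. As a hedged illustration (the exact scoring script and text normalization used for this card are not specified), word and character error rates of this kind can be computed with the `evaluate` library:

```python
# Illustrative only: the card's own evaluation pipeline may normalize text differently.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["dette er en test"]
references = ["dette er en test."]

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```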
### Framework versions
- Transformers 4.34.1
- Datasets 2.15.0
- Tokenizers 0.14.1