---
language:
- eu
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_1
metrics:
- wer
model-index:
- name: Whisper Large-V2 Basque
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_16_1 eu
      type: mozilla-foundation/common_voice_16_1
      config: eu
      split: test
      args: eu
    metrics:
    - name: Wer
      type: wer
      value: 7.720415819915585
---

# Whisper Large-V2 Basque

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Basque (`eu`) subset of the mozilla-foundation/common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4206
- WER: 7.7204

## Model description

This is a Whisper Large-V2 checkpoint fine-tuned for automatic speech recognition (speech-to-text) of Basque. A usage sketch is provided at the end of this card.

## Intended uses & limitations

The model is intended for transcribing Basque speech. Since it was fine-tuned on Common Voice read speech only, accuracy on other domains (spontaneous conversation, noisy or far-field audio, code-switched speech) may be lower than the reported test-set WER.

## Training and evaluation data

The model was fine-tuned on the Basque (`eu`) configuration of [mozilla-foundation/common_voice_16_1](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1); the reported WER of 7.7204 was measured on its `test` split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch mirroring these values appears at the end of this card):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | WER     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0112        | 10.04  | 1000  | 0.2182          | 10.1571 |
| 0.0052        | 20.08  | 2000  | 0.2372          | 9.6276  |
| 0.0017        | 30.11  | 3000  | 0.2417          | 9.0150  |
| 0.0022        | 40.15  | 4000  | 0.2341          | 8.8938  |
| 0.0023        | 50.19  | 5000  | 0.2451          | 8.9388  |
| 0.0006        | 60.23  | 6000  | 0.2517          | 8.4161  |
| 0.0006        | 70.26  | 7000  | 0.2499          | 8.0985  |
| 0.0008        | 80.3   | 8000  | 0.2548          | 8.3467  |
| 0.0004        | 90.34  | 9000  | 0.2498          | 7.9559  |
| 0.0003        | 100.38 | 10000 | 0.2489          | 7.6940  |
| 0.0           | 110.41 | 11000 | 0.2906          | 7.5455  |
| 0.0           | 120.45 | 12000 | 0.3027          | 7.4596  |
| 0.0           | 130.49 | 13000 | 0.3137          | 7.4517  |
| 0.0           | 140.53 | 14000 | 0.3243          | 7.4644  |
| 0.0           | 150.56 | 15000 | 0.3351          | 7.4762  |
| 0.0           | 160.6  | 16000 | 0.3459          | 7.4556  |
| 0.0           | 170.64 | 17000 | 0.3565          | 7.4605  |
| 0.0           | 180.68 | 18000 | 0.3689          | 7.4996  |
| 0.0           | 190.72 | 19000 | 0.3806          | 7.5934  |
| 0.0           | 200.75 | 20000 | 0.3912          | 7.6344  |
| 0.0           | 210.79 | 21000 | 0.4005          | 7.5485  |
| 0.0           | 220.83 | 22000 | 0.4102          | 7.6266  |
| 0.0079        | 230.87 | 23000 | 0.2467          | 9.1654  |
| 0.0           | 240.9  | 24000 | 0.3097          | 7.7615  |
| 0.0           | 250.94 | 25000 | 0.3311          | 7.7243  |
| 0.0           | 260.98 | 26000 | 0.3446          | 7.7028  |
| 0.0           | 271.02 | 27000 | 0.3551          | 7.7546  |
| 0.0           | 281.05 | 28000 | 0.3646          | 7.7986  |
| 0.0           | 291.09 | 29000 | 0.3729          | 7.7781  |
| 0.0           | 301.13 | 30000 | 0.3811          | 7.7634  |
| 0.0           | 311.17 | 31000 | 0.3878          | 7.7702  |
| 0.0           | 321.2  | 32000 | 0.3948          | 7.7722  |
| 0.0           | 331.24 | 33000 | 0.4003          | 7.7302  |
| 0.0           | 341.28 | 34000 | 0.4058          | 7.7312  |
| 0.0           | 351.32 | 35000 | 0.4108          | 7.7292  |
| 0.0           | 361.36 | 36000 | 0.4142          | 7.7321  |
| 0.0           | 371.39 | 37000 | 0.4170          | 7.7204  |
| 0.0           | 381.43 | 38000 | 0.4189          | 7.7253  |
| 0.0           | 391.47 | 39000 | 0.4202          | 7.7263  |
| 0.0           | 401.51 | 40000 | 0.4206          | 7.7204  |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
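
## How to use

A minimal inference sketch using the `transformers` pipeline API. The model identifier below is a placeholder for this repository's Hub id (or a local checkpoint path), and the `language`/`task` generation arguments assume a `transformers` version that forwards them to Whisper's `generate` (4.37, listed above, does):

```python
import torch
from transformers import pipeline

# Placeholder: replace with this repository's Hub id or a local checkpoint path.
MODEL_ID = "your-username/whisper-large-v2-eu"

asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    torch_dtype=torch.float16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# chunk_length_s splits long audio into 30 s windows (Whisper's input length);
# the generate kwargs pin decoding to Basque transcription rather than translation.
result = asr(
    "audio.mp3",
    chunk_length_s=30,
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```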
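
## Reproducing the training configuration

The exact training script is not included in this card; the sketch below only mirrors the hyperparameters listed under "Training hyperparameters" using `Seq2SeqTrainingArguments`. The output directory and evaluation cadence are assumptions, the latter inferred from the 1000-step rhythm of the results table:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-eu",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=32,      # 32 x 8 accumulation steps = 256 effective
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    warmup_steps=500,
    max_steps=40000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=1000,                     # assumed from the results table
    predict_with_generate=True,          # decode with generate() so WER can be computed
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the optimizer defaults, so no extra optimizer arguments are needed.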
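
## Computing WER

The reported metric is word error rate, scaled to a percentage. A typical way to compute it uses the `evaluate` library; this is a minimal sketch (whether the original run used this exact library is an assumption, and the strings are stand-ins for real model transcripts and Common Voice 16.1 `eu` test-split references):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Stand-in examples; real inputs are model outputs vs. reference transcripts.
predictions = ["kaixo mundua", "eskerrik asko"]
references = ["kaixo mundua", "eskerrik asko denoi"]

# WER = (substitutions + insertions + deletions) / reference word count.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```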