---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xlsr-nomimo-nmcpc
  results: []
---

# xlsr-nomimo-nmcpc

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Wer: 0.2681

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 132
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch    | Step | Validation Loss | Wer    |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 5.0143        | 2.7778   | 200  | 3.0948          | 1.0    |
| 3.0375        | 5.5556   | 400  | 2.8972          | 1.0    |
| 2.7913        | 8.3333   | 600  | 2.3748          | 1.0    |
| 2.1375        | 11.1111  | 800  | 1.0610          | 0.9128 |
| 1.1118        | 13.8889  | 1000 | 0.3242          | 0.4894 |
| 0.5767        | 16.6667  | 1200 | 0.1737          | 0.4128 |
| 0.3823        | 19.4444  | 1400 | 0.0890          | 0.3681 |
| 0.2494        | 22.2222  | 1600 | 0.0470          | 0.3553 |
| 0.2165        | 25.0     | 1800 | 0.0585          | 0.3213 |
| 0.1548        | 27.7778  | 2000 | 0.0266          | 0.3106 |
| 0.1225        | 30.5556  | 2200 | 0.0248          | 0.3043 |
| 0.1104        | 33.3333  | 2400 | 0.0148          | 0.2830 |
| 0.1041        | 36.1111  | 2600 | 0.0130          | 0.2809 |
| 0.0872        | 38.8889  | 2800 | 0.0058          | 0.2745 |
| 0.0722        | 41.6667  | 3000 | 0.0045          | 0.2617 |
| 0.0721        | 44.4444  | 3200 | 0.0053          | 0.2723 |
| 0.0593        | 47.2222  | 3400 | 0.0059          | 0.2723 |
| 0.0625        | 50.0     | 3600 | 0.0042          | 0.2638 |
| 0.0555        | 52.7778  | 3800 | 0.0021          | 0.2638 |
| 0.0462        | 55.5556  | 4000 | 0.0043          | 0.2702 |
| 0.0381        | 58.3333  | 4200 | 0.0012          | 0.2638 |
| 0.0364        | 61.1111  | 4400 | 0.0022          | 0.2660 |
| 0.0351        | 63.8889  | 4600 | 0.0012          | 0.2681 |
| 0.0308        | 66.6667  | 4800 | 0.0024          | 0.2681 |
| 0.0255        | 69.4444  | 5000 | 0.0011          | 0.2638 |
| 0.0234        | 72.2222  | 5200 | 0.0006          | 0.2702 |
| 0.0269        | 75.0     | 5400 | 0.0003          | 0.2617 |
| 0.0186        | 77.7778  | 5600 | 0.0006          | 0.2638 |
| 0.0184        | 80.5556  | 5800 | 0.0007          | 0.2638 |
| 0.017         | 83.3333  | 6000 | 0.0002          | 0.2638 |
| 0.0124        | 86.1111  | 6200 | 0.0003          | 0.2702 |
| 0.0153        | 88.8889  | 6400 | 0.0002          | 0.2660 |
| 0.0151        | 91.6667  | 6600 | 0.0001          | 0.2681 |
| 0.0116        | 94.4444  | 6800 | 0.0001          | 0.2702 |
| 0.0089        | 97.2222  | 7000 | 0.0002          | 0.2702 |
| 0.0079        | 100.0    | 7200 | 0.0002          | 0.2681 |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
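
## How to use

The card gives no usage details, so the following is only a minimal inference sketch. It assumes this checkpoint is a CTC fine-tune of XLSR-53 for speech recognition (the WER metric suggests as much), that a processor/tokenizer was saved with the model, and that audio is resampled to 16 kHz, the rate XLSR-53 was pretrained on. The repository id and audio file name below are placeholders.

```python
# Minimal inference sketch (assumptions noted above).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "your-username/xlsr-nomimo-nmcpc"  # placeholder: replace with the actual repo id or local path
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load and resample the audio to 16 kHz.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Under the same assumptions, `pipeline("automatic-speech-recognition", model=model_id)` from `transformers` provides the equivalent load-and-transcribe flow with less boilerplate.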