# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.4121
- Wer: 0.3112 (word error rate; see the metric sketch after this list)
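The WER reported above is the word error rate on the held-out Common Voice evaluation split. As a minimal, illustrative sketch of how this metric is computed, the `evaluate` library can be used as below; the transcripts are placeholders, not output from this model.

```python
import evaluate

# Word error rate: (insertions + deletions + substitutions) / number of reference words.
wer_metric = evaluate.load("wer")

# Placeholder Turkish transcripts purely to demonstrate the call.
predictions = ["merhaba dünya", "bugün hava güzel"]
references = ["merhaba dünya", "bugün hava çok güzel"]

print(wer_metric.compute(predictions=predictions, references=references))
```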
## Model description
More information needed
## Intended uses & limitations
More information needed
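No usage notes were provided with this card. As an illustrative sketch only, the checkpoint can be loaded for Turkish speech-to-text with `transformers` as below; the hub id and audio path are placeholders, not part of this card.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder hub id; substitute the actual path of this checkpoint.
MODEL_ID = "<username>/wav2vec2-large-xls-r-300m-tr-colab"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# XLS-R models expect 16 kHz mono audio.
speech, _ = librosa.load("example_tr.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```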
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
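A hedged sketch of how the values above might map onto `transformers.TrainingArguments`. The output directory is a placeholder, the evaluation/save cadence of 400 steps is inferred from the results table below, and settings not listed above are left at their defaults rather than assumed.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder name.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-tr-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="steps",  # evaluation every 400 steps, inferred from the results table
    eval_steps=400,
    save_steps=400,
    logging_steps=400,
)
```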
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1868        | 1.83  | 400  | 0.9812          | 0.8398 |
| 0.691         | 3.67  | 800  | 0.5571          | 0.6298 |
| 0.3555        | 5.5   | 1200 | 0.4676          | 0.4779 |
| 0.2451        | 7.34  | 1600 | 0.4572          | 0.4541 |
| 0.1844        | 9.17  | 2000 | 0.4743          | 0.4389 |
| 0.1541        | 11.01 | 2400 | 0.4583          | 0.4300 |
| 0.1277        | 12.84 | 2800 | 0.4565          | 0.3950 |
| 0.1122        | 14.68 | 3200 | 0.4761          | 0.4087 |
| 0.0975        | 16.51 | 3600 | 0.4654          | 0.3786 |
| 0.0861        | 18.35 | 4000 | 0.4503          | 0.3667 |
| 0.0775        | 20.18 | 4400 | 0.4600          | 0.3581 |
| 0.0666        | 22.02 | 4800 | 0.4350          | 0.3504 |
| 0.0627        | 23.85 | 5200 | 0.4211          | 0.3349 |
| 0.0558        | 25.69 | 5600 | 0.4390          | 0.3333 |
| 0.0459        | 27.52 | 6000 | 0.4218          | 0.3185 |
| 0.0439        | 29.36 | 6400 | 0.4121          | 0.3112 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3