wav2vec2-xlsr-et-lm-1B
This model was fine-tuned on mozilla-foundation/common_voice_8_0 (Estonian, `et`) using the train+other+validation splits. It achieves the following results on the test set, with loss reported at the last evaluation step (step 2000 of 2040) during training; a minimal usage sketch follows the metrics list:
- Loss: 0.2150
- Wer: 0.2012
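The card does not include a usage example, so the snippet below is only a minimal inference sketch using the Transformers automatic-speech-recognition pipeline. The repo id is taken from the dataset note further down in this card (RASMUS/wav2vec2-xlsr-1b-et); the audio file name and the 16 kHz mono recording are placeholder assumptions.

```python
# Minimal inference sketch (not the authors' script).
# Assumptions: repo id as noted in this card, a local WAV file, and ffmpeg
# available so the pipeline can decode and resample audio to 16 kHz.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RASMUS/wav2vec2-xlsr-1b-et",
)

result = asr("example_et.wav")  # placeholder path to an Estonian speech clip
print(result["text"])
```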
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a TrainingArguments sketch mirroring them follows the list):
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
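The training script itself is not part of this card; the block below is only a hedged sketch of how the listed hyperparameters map onto Transformers `TrainingArguments`. The `output_dir` value is a placeholder, and the Adam betas/epsilon match the library defaults.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# This is not the authors' actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-et",   # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=1,
    adam_beta1=0.9,                    # Adam betas/epsilon as listed (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```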
Training results
Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
Dataset used to train RASMUS/wav2vec2-xlsr-1b-et: mozilla-foundation/common_voice_8_0 (Estonian)
Evaluation results
- Test WER on Common Voice 8 (self-reported): 20.12
- Test CER on Common Voice 8 (self-reported): 3.82
- Test WER on Robust Speech Event - Dev Data (self-reported): 40.77
- Test CER on Robust Speech Event - Dev Data (self-reported): 12.32
- Test WER on Robust Speech Event - Test Data (self-reported): 41.97
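The values above are self-reported and appear to be percentages (the 20.12 test WER corresponds to the 0.2012 WER quoted at the top of the card). The exact evaluation script is not included here; the snippet below is only a sketch of how such scores are typically computed with the `datasets` metrics matching the framework versions listed above, using illustrative reference/prediction strings.

```python
# Hedged sketch of WER/CER computation; not the authors' evaluation script.
# The metrics return fractions (e.g. 0.2012), while the card reports percentages.
# (The "wer" and "cer" metrics require the jiwer package.)
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

references = ["tere tulemast koju"]    # illustrative ground-truth transcription
predictions = ["tere tulemast kodu"]   # illustrative model output

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```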