
wav2vec2-Y_speed_freq2

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 113.5747
  • Cer: 139.4267

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
  • mixed_precision_training: Native AMP
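The learning-rate schedule above (linear with 50 warmup steps) can be sketched in plain Python. This is a minimal illustration assuming the semantics of Hugging Face's linear warmup/decay schedule; the total step count (~4600) is read off the training-results table below and is an assumption, not part of the card.

```python
# Hypothetical sketch of a linear LR schedule with warmup, matching the
# hyperparameters listed above. Assumes HF-style semantics: linear ramp
# from 0 to the peak LR over the warmup steps, then linear decay to 0.
LEARNING_RATE = 1e-4   # learning_rate
WARMUP_STEPS = 50      # lr_scheduler_warmup_steps
TOTAL_STEPS = 4600     # assumed from the last step in the results table

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # Warmup: ramp linearly from 0 up to the peak learning rate.
        return LEARNING_RATE * step / WARMUP_STEPS
    # Decay: ramp linearly from the peak down to 0 at TOTAL_STEPS.
    remaining = max(0, TOTAL_STEPS - step)
    return LEARNING_RATE * remaining / (TOTAL_STEPS - WARMUP_STEPS)
```

For example, `lr_at(25)` gives half the peak rate, `lr_at(50)` gives the full 1e-4, and the rate reaches 0 at the final step.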

Training results

| Training Loss | Epoch  | Step | Validation Loss | Cer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 137.9274      | 0.1289 | 200  | 113.5748        | 139.3973 |
| 141.0753      | 0.2579 | 400  | 113.5747        | 139.0977 |
| 135.8834      | 0.3868 | 600  | 113.5748        | 139.2211 |
| 144.9369      | 0.5158 | 800  | 113.5747        | 139.7086 |
| 138.9605      | 0.6447 | 1000 | 113.5746        | 139.3503 |
| 133.949       | 0.7737 | 1200 | 113.5748        | 139.3386 |
| 140.5077      | 0.9026 | 1400 | 113.5747        | 139.5794 |
| 136.6605      | 1.0316 | 1600 | 113.5746        | 139.2211 |
| 139.7982      | 1.1605 | 1800 | 113.5748        | 139.4091 |
| 132.729       | 1.2895 | 2000 | 113.5747        | 139.5089 |
| 143.8527      | 1.4184 | 2200 | 113.5747        | 139.2857 |
| 139.8027      | 1.5474 | 2400 | 113.5747        | 139.3562 |
| 134.4609      | 1.6763 | 2600 | 113.5747        | 139.3797 |
| 134.4437      | 1.8053 | 2800 | 113.5748        | 139.3562 |
| 142.3518      | 1.9342 | 3000 | 113.5747        | 139.2681 |
| 138.2023      | 2.0632 | 3200 | 113.5746        | 139.3268 |
| 136.824       | 2.1921 | 3400 | 113.5747        | 139.3738 |
| 136.6418      | 2.3211 | 3600 | 113.5746        | 139.5031 |
| 135.6343      | 2.4500 | 3800 | 113.5748        | 139.5089 |
| 140.8444      | 2.5790 | 4000 | 113.5746        | 139.1917 |
| 140.4969      | 2.7079 | 4200 | 113.5748        | 139.0566 |
| 135.8527      | 2.8369 | 4400 | 113.5747        | 139.4502 |
| 144.0101      | 2.9658 | 4600 | 113.5747        | 139.4267 |
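The card does not say how the Cer column is computed, but character error rate is conventionally the Levenshtein edit distance between hypothesis and reference, divided by the reference length (as in `jiwer` or `evaluate`). A minimal sketch under that assumption, which also shows why values above 100 are possible (the hypothesis can contain more edits than the reference has characters):

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Minimum number of character edits (insert/delete/substitute)
    turning `ref` into `hyp`, via the standard two-row DP."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate as a percentage of the reference length."""
    return 100.0 * levenshtein(ref, hyp) / len(ref)
```

For example, a hypothesis needing three edits against a one-character reference scores a CER of 300%, so a CER near 139 like the one reported above indicates the model's transcriptions differ from the references by more characters than the references contain.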

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
Model size: 317M params (Safetensors, F32)

Model tree for Gummybear05/wav2vec2-Y_speed_freq2

  • Fine-tuned from facebook/wav2vec2-xls-r-300m