
wav2vec2-1b-Y

This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9820
  • CER: 24.1248
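
Since the base checkpoint is facebook/wav2vec2-xls-r-1b with a CTC head, the model can be loaded for transcription with the standard transformers classes. A minimal sketch, assuming the repository bundles a matching processor and that inputs are 16 kHz mono audio (sample.wav is a placeholder path):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Repo id taken from this card; the processor config is assumed to be bundled.
repo_id = "Gummybear05/wav2vec2-1b-Y"
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

# Load a 16 kHz mono waveform ("sample.wav" is a placeholder).
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```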

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 5
  • mixed_precision_training: Native AMP
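
For reference, these settings map directly onto transformers TrainingArguments. A minimal sketch under that assumption (output_dir is a placeholder, and fp16=True is assumed to correspond to "Native AMP"):

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-1b-Y",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # effective total train batch size: 16
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_steps=50,
    seed=42,
    fp16=True,  # assumed equivalent of "Native AMP" mixed precision
)
# The Adam betas (0.9, 0.999) and epsilon 1e-08 above match the
# transformers optimizer defaults, so no extra arguments are needed.
```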

Training results

Training Loss   Epoch    Step   Validation Loss   CER
9.8741          0.2580    200   2.9382            58.9579
1.6995          0.5161    400   2.0333            45.6238
1.1510          0.7741    600   1.7032            39.6734
0.9663          1.0322    800   1.2993            31.3675
0.8027          1.2902   1000   1.2846            33.0768
0.7227          1.5483   1200   1.1823            28.6419
0.6516          1.8063   1400   1.2823            32.2838
0.6087          2.0643   1600   1.2643            31.0209
0.5242          2.3224   1800   1.2452            30.3865
0.4763          2.5804   2000   1.1365            28.0839
0.4611          2.8385   2200   1.0796            26.5742
0.4103          3.0965   2400   1.1832            29.3527
0.3289          3.3546   2600   1.0230            25.0705
0.3100          3.6126   2800   0.9800            24.8708
0.2995          3.8707   3000   0.9924            25.1880
0.2516          4.1287   3200   1.0370            25.2173
0.2270          4.3867   3400   1.0256            24.8531
0.2251          4.6448   3600   0.9982            24.1424
0.2251          4.9028   3800   0.9820            24.1248
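
The CER column tracks character error rate on the validation set and appears to be reported as a percentage. A minimal sketch of computing it with the evaluate library (the strings below are illustrative, not taken from the training data):

```python
import evaluate

cer = evaluate.load("cer")  # character error rate metric, backed by jiwer
score = cer.compute(
    predictions=["charactr error rate"],   # illustrative hypothesis
    references=["character error rate"],   # illustrative reference
)
print(f"CER: {100 * score:.4f}")  # scaled by 100 to match the table above
```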

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.3.1.post100
  • Datasets 2.19.1
  • Tokenizers 0.20.1