wav2vec

This model is a fine-tuned version of vitouphy/wav2vec2-xls-r-300m-english on an unknown dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 417.9874
  • Pcc Accuracy: 0.2482
  • Pcc Fluency: 0.2791
  • Pcc Total Score: 0.3110
  • Pcc Content: 0.3780
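
The pipeline type for this repository is not specified, so the snippet below is only a minimal loading sketch: it assumes the arslanarjumand/wav2vec checkpoint exposes a wav2vec2-style encoder that consumes 16 kHz audio, and it does not reproduce the scoring head behind the four Pcc-evaluated outputs, which is not documented in this card.

```python
# Minimal loading sketch (assumptions: wav2vec2-style encoder, 16 kHz input audio;
# the regression/scoring head is not documented here, so only encoder features are shown).
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "arslanarjumand/wav2vec"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

waveform = torch.zeros(16000)  # placeholder: 1 second of silence at 16 kHz
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```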

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 6
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.4
  • num_epochs: 25
  • mixed_precision_training: Native AMP
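
For reference, a TrainingArguments configuration mirroring the list above might look as follows. This is a sketch only: the model, dataset, and data collator used for fine-tuning are not documented in this card, and output_dir is a placeholder. The Adam betas and epsilon listed above are the Transformers defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec-finetune",       # placeholder output path (assumption)
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=2,       # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.4,
    num_train_epochs=25,
    fp16=True,                           # "Native AMP" mixed precision
)
```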

Training results

| Training Loss | Epoch | Step | Validation Loss | Pcc Accuracy | Pcc Fluency | Pcc Total Score | Pcc Content |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 3176.8777 | 1.01 | 100 | 2967.8770 | 0.2384 | 0.1513 | -0.2037 | -0.0386 |
| 3000.9279 | 2.02 | 200 | 2907.2825 | 0.2949 | 0.1813 | -0.1250 | 0.0827 |
| 2716.2498 | 3.03 | 300 | 2805.8979 | 0.3034 | 0.2123 | 0.0290 | 0.2344 |
| 2600.8768 | 4.04 | 400 | 2666.0171 | 0.2859 | 0.2345 | 0.1642 | 0.3183 |
| 2222.3631 | 5.05 | 500 | 2490.9263 | 0.2698 | 0.2475 | 0.2305 | 0.3488 |
| 1940.7414 | 6.06 | 600 | 2284.4573 | 0.2573 | 0.2552 | 0.2588 | 0.3591 |
| 1962.4018 | 7.07 | 700 | 2051.6846 | 0.2504 | 0.2603 | 0.2738 | 0.3635 |
| 1506.0297 | 8.08 | 800 | 1798.2383 | 0.2444 | 0.2633 | 0.2813 | 0.3653 |
| 1448.3059 | 9.09 | 900 | 1534.5461 | 0.2396 | 0.2662 | 0.2845 | 0.3650 |
| 1202.578 | 10.1 | 1000 | 1265.2390 | 0.2376 | 0.2678 | 0.2873 | 0.3647 |
| 917.5093 | 11.11 | 1100 | 1021.2091 | 0.2356 | 0.2697 | 0.2896 | 0.3651 |
| 781.4407 | 12.12 | 1200 | 825.2852 | 0.2340 | 0.2710 | 0.2901 | 0.3647 |
| 633.8744 | 13.13 | 1300 | 674.1681 | 0.2337 | 0.2724 | 0.2918 | 0.3652 |
| 554.5075 | 14.14 | 1400 | 573.6318 | 0.2354 | 0.2737 | 0.2954 | 0.3677 |
| 500.6607 | 15.15 | 1500 | 510.6489 | 0.2378 | 0.2740 | 0.2978 | 0.3700 |
| 472.1874 | 16.16 | 1600 | 468.3256 | 0.2394 | 0.2751 | 0.3012 | 0.3720 |
| 406.9743 | 17.17 | 1700 | 444.8770 | 0.2421 | 0.2763 | 0.3041 | 0.3739 |
| 373.2401 | 18.18 | 1800 | 432.6308 | 0.2438 | 0.2771 | 0.3068 | 0.3751 |
| 447.599 | 19.19 | 1900 | 425.9487 | 0.2457 | 0.2778 | 0.3081 | 0.3762 |
| 360.8572 | 20.2 | 2000 | 421.8146 | 0.2466 | 0.2786 | 0.3093 | 0.3772 |
| 409.8801 | 21.21 | 2100 | 420.0713 | 0.2473 | 0.2786 | 0.3100 | 0.3777 |
| 419.8665 | 22.22 | 2200 | 418.7286 | 0.2478 | 0.2791 | 0.3107 | 0.3778 |
| 369.3772 | 23.23 | 2300 | 418.1939 | 0.2477 | 0.2791 | 0.3105 | 0.3776 |
| 449.1843 | 24.24 | 2400 | 417.9874 | 0.2482 | 0.2791 | 0.3110 | 0.3780 |
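
The Pcc columns in the table above are most likely Pearson correlation coefficients between predicted and reference scores. The evaluation code is not included in this card; below is a minimal sketch of how such metrics could be computed, assuming the model regresses four scores per utterance (accuracy, fluency, total score, content) and that predictions and labels are NumPy arrays of shape (num_examples, 4). The column ordering and metric names are assumptions for illustration.

```python
from scipy.stats import pearsonr

SCORE_NAMES = ["accuracy", "fluency", "total_score", "content"]  # assumed ordering

def pcc_metrics(predictions, labels):
    """Pearson correlation per score dimension.

    predictions, labels: NumPy arrays of shape (num_examples, 4),
    columns ordered as in SCORE_NAMES (an assumption, not documented here).
    """
    return {
        f"pcc_{name}": pearsonr(predictions[:, i], labels[:, i])[0]
        for i, name in enumerate(SCORE_NAMES)
    }
```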

Framework versions

  • Transformers 4.37.0
  • Pytorch 2.1.2
  • Datasets 2.17.0
  • Tokenizers 0.15.1