
ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53

This model is a fine-tuned version of gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53 on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8797
  • Wer: 0.5513
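The WER (word error rate) above is the word-level edit distance between the model's transcription and the reference lyrics, divided by the number of reference words. A minimal sketch of the metric (the `wer` helper here is illustrative; the `evaluate`/`jiwer` packages provide the standard implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.5513 means that, on average, about 55 word edits are needed per 100 reference words, so roughly half of the sung words are transcribed incorrectly.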

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 30.0
  • mixed_precision_training: Native AMP
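With `lr_scheduler_type: linear` and `lr_scheduler_warmup_steps: 100`, the learning rate ramps linearly from 0 to 5e-06 over the first 100 steps, then decays linearly back to 0 by the final step (69270, per the results table below). A pure-Python sketch of that schedule (the function name is illustrative; `transformers.get_linear_schedule_with_warmup` is the canonical implementation):

```python
def linear_schedule_lr(step, total_steps=69270, base_lr=5e-6, warmup_steps=100):
    """Learning rate at a given step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Warmup: ramp from 0 up to base_lr
        return base_lr * step / warmup_steps
    # Decay: ramp from base_lr at the end of warmup down to 0 at total_steps
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))
```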

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|---------------|-------|-------|-----------------|--------|
| 0.9613        | 1.0   | 2309  | 1.0171          | 0.7271 |
| 0.8254        | 2.0   | 4618  | 0.9771          | 0.6650 |
| 0.7406        | 3.0   | 6927  | 0.9174          | 0.6420 |
| 0.74          | 4.0   | 9236  | 0.9551          | 0.6371 |
| 0.5855        | 5.0   | 11545 | 0.9262          | 0.6453 |
| 0.5536        | 6.0   | 13854 | 0.9056          | 0.5894 |
| 0.505         | 7.0   | 16163 | 0.9166          | 0.6029 |
| 0.449         | 8.0   | 18472 | 0.8816          | 0.5873 |
| 0.4219        | 9.0   | 20781 | 0.8970          | 0.5589 |
| 0.5764        | 10.0  | 23090 | 0.9189          | 0.5649 |
| 0.5075        | 11.0  | 25399 | 0.8797          | 0.5513 |
| 0.4366        | 12.0  | 27708 | 0.9011          | 0.5567 |
| 0.4915        | 13.0  | 30017 | 0.9248          | 0.5455 |
| 0.3554        | 14.0  | 32326 | 0.9309          | 0.5374 |
| 0.3975        | 15.0  | 34635 | 0.9103          | 0.5259 |
| 0.4119        | 16.0  | 36944 | 0.9402          | 0.5290 |
| 0.267         | 17.0  | 39253 | 0.9479          | 0.5115 |
| 0.3107        | 18.0  | 41562 | 0.9428          | 0.5099 |
| 0.2684        | 19.0  | 43871 | 0.9508          | 0.5133 |
| 0.2125        | 20.0  | 46180 | 0.9737          | 0.5097 |
| 0.3149        | 21.0  | 48489 | 0.9992          | 0.5095 |
| 0.2313        | 22.0  | 50798 | 1.0037          | 0.5059 |
| 0.2674        | 23.0  | 53107 | 1.0091          | 0.5040 |
| 0.2056        | 24.0  | 55416 | 1.0082          | 0.5076 |
| 0.2781        | 25.0  | 57725 | 1.0160          | 0.5015 |
| 0.2005        | 26.0  | 60034 | 1.0390          | 0.5131 |
| 0.2221        | 27.0  | 62343 | 1.0401          | 0.5074 |
| 0.1857        | 28.0  | 64652 | 1.0484          | 0.5096 |
| 0.1562        | 29.0  | 66961 | 1.0516          | 0.5064 |
| 0.3027        | 30.0  | 69270 | 1.0543          | 0.5049 |

Framework versions

  • Transformers 4.21.0.dev0
  • Pytorch 1.9.1+cu102
  • Datasets 2.3.3.dev0
  • Tokenizers 0.12.1