wav2vec2-mms-1b-CV17.0-training_set_variations

This model is a fine-tuned version of facebook/mms-1b-all on the Tamil ("ta") subset of common_voice_17_0. Several adapters were trained with different training set sizes in order to measure how performance improves as the quantity of training data increases. This model should not be used to perform STT tasks.
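For quick testing, the checkpoint can be loaded with the standard Wav2Vec2 CTC classes. The following is a minimal sketch, assuming the repo's default adapter loads directly (the individual adapters for each training set size are not documented here) and using a placeholder audio path:

```python
# Minimal inference sketch for testing only (not an STT solution).
# Assumes the repo's default adapter loads with the standard MMS/Wav2Vec2
# CTC classes; "sample.wav" is a placeholder 16 kHz mono clip.
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "ndeclarke/wav2vec2-mms-1b-CV17.0-training_set_variations"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```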

Intended uses & limitations

Testing purposes only. This is not intended as an STT solution.

Training and evaluation data

common_voice_17_0 "ta"
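As a sketch, the training data can be loaded with the datasets library, assuming the mozilla-foundation/common_voice_17_0 Hub id and the "ta" configuration (the dataset is gated, so a logged-in account that has accepted its terms is assumed):

```python
from datasets import Audio, load_dataset

# Tamil ("ta") configuration of Common Voice 17.0; gated dataset, so a
# logged-in Hub account that has accepted the terms is assumed.
cv_ta = load_dataset("mozilla-foundation/common_voice_17_0", "ta", split="train")
cv_ta = cv_ta.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_ta)
```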

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.15
  • training_steps: 2000
  • mixed_precision_training: Native AMP
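Expressed through the standard transformers Trainer API, these settings roughly correspond to the sketch below (the output directory is a placeholder; this is not the original training script):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as TrainingArguments (placeholder
# output_dir; not the original training script).
training_args = TrainingArguments(
    output_dir="wav2vec2-mms-1b-CV17.0-training_set_variations",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    max_steps=2000,
    fp16=True,                       # native AMP mixed-precision training
)
```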

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1