
Whisper Small Few Audios - vfranchis

This model is a fine-tuned version of openai/whisper-small on the Few audios 1.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6364
  • Wer: 66.6667
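
A minimal usage sketch for transcription with the Hugging Face Transformers pipeline, assuming the checkpoint is published as breco/whisper-small-few-audios (the repository id listed for this model); the audio file path is a placeholder.

```python
# Minimal transcription sketch. The repo id and the audio file path are
# assumptions: adjust both to your own setup.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="breco/whisper-small-few-audios",
)

result = asr("sample.wav")  # "sample.wav" is a placeholder audio file
print(result["text"])
```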

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10
  • training_steps: 100
  • mixed_precision_training: Native AMP
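
A hedged sketch of how these values map onto Seq2SeqTrainingArguments from Transformers; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults, so they are not set explicitly.

```python
# Configuration sketch reproducing the listed hyperparameters.
# output_dir is a placeholder; everything else mirrors the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-few-audios",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=10,
    max_steps=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```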

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.6824        | 2.8571  | 10   | 1.6364          | 66.6667 |
| 4.2687        | 5.7143  | 20   | 1.6364          | 66.6667 |
| 2.6441        | 8.5714  | 30   | 1.6364          | 66.6667 |
| 1.8789        | 11.4286 | 40   | 1.6364          | 66.6667 |
| 1.3406        | 14.2857 | 50   | 1.6364          | 66.6667 |
| 0.8864        | 17.1429 | 60   | 1.6364          | 66.6667 |
| 1.0665        | 20.0    | 70   | 1.6364          | 66.6667 |
| 0.5324        | 22.8571 | 80   | 1.6364          | 66.6667 |
| 4.0741        | 25.7143 | 90   | 1.6364          | 66.6667 |
| 2.8755        | 28.5714 | 100  | 1.6364          | 66.6667 |
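
The Wer column is a word error rate on a 0-100 scale. A small sketch of how such a score can be computed with the evaluate library (the strings below are invented examples, and the metric needs the jiwer backend installed):

```python
# WER sketch on the same 0-100 scale as the table above.
# The prediction/reference strings are made-up examples.
import evaluate

wer_metric = evaluate.load("wer")  # requires: pip install evaluate jiwer

predictions = ["hello world this a test"]
references = ["hello world this is a test"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one missing word out of six reference words -> 16.6667
```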

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.3.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
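
A quick way to check that a local environment matches these versions (assuming the four packages are installed):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card lists 4.44.2
print("PyTorch:", torch.__version__)              # card lists 2.3.1+cu121
print("Datasets:", datasets.__version__)          # card lists 2.21.0
print("Tokenizers:", tokenizers.__version__)      # card lists 0.19.1
```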

Model size

  • 242M params (Safetensors, F32 tensors)

Model tree for breco/whisper-small-few-audios

  • Fine-tuned from openai/whisper-small