
whispherMusic

This model is a fine-tuned version of openai/whisper-base.en on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0963
  • Rouge1: 90.9672
  • Rouge2: 87.2313
  • RougeL: 89.5998
  • RougeLsum: 89.7339
  • Gen Len: 61.55
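
For reference, ROUGE scores like these are conventionally computed with the Hugging Face `evaluate` library; the snippet below is an illustrative sketch with placeholder strings, not code taken from this model's evaluation pipeline.

```python
# Illustrative only: how ROUGE-style metrics are typically computed with `evaluate`.
# The prediction/reference strings are placeholders, not data from this model's
# evaluation set. Scores are returned as fractions; the card reports them x100.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the quick brown fox jumps over the lazy dog"]
references = ["the quick brown fox jumped over the lazy dog"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```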

Model description

More information needed

Intended uses & limitations

More information needed
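
The card does not include a usage snippet. Because the model is a fine-tune of openai/whisper-base.en, it should load with the standard Whisper classes; the sketch below is an assumption, and if the repository lacks processor files the processor can be loaded from the base checkpoint instead.

```python
# Minimal loading sketch (not from the model card); class choices are assumed from
# the openai/whisper-base.en base model.
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("nacielo/whispherMusic")
model = WhisperForConditionalGeneration.from_pretrained("nacielo/whispherMusic")

# Placeholder input: one second of silence at 16 kHz; substitute a real waveform.
audio_array = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```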

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
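
Expressed as Seq2SeqTrainingArguments, these settings would look roughly like the sketch below; only the values listed above are set, everything else is left at library defaults, and the output directory name is a placeholder.

```python
# Hedged reconstruction of the listed hyperparameters; not the author's actual script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whispherMusic",        # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,  # assumed, since ROUGE / Gen Len are generation metrics
)
```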

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1983        | 1.0   | 1361  | 0.8877          | 47.1749 | 24.9442 | 38.1501 | 38.1934   | 56.0    |
| 0.8787        | 2.0   | 2722  | 0.6673          | 53.2613 | 32.6244 | 45.2405 | 45.3614   | 58.53   |
| 0.6983        | 3.0   | 4083  | 0.4976          | 57.5956 | 38.8147 | 51.0468 | 51.1725   | 59.97   |
| 0.5077        | 4.0   | 5444  | 0.3677          | 65.1283 | 49.4333 | 59.061  | 59.107    | 58.67   |
| 0.3955        | 5.0   | 6805  | 0.2650          | 70.453  | 58.4358 | 66.2936 | 66.5477   | 59.05   |
| 0.2846        | 6.0   | 8166  | 0.1987          | 77.3147 | 67.0836 | 73.4161 | 73.6763   | 59.26   |
| 0.21          | 7.0   | 9527  | 0.1489          | 84.5594 | 78.1538 | 82.1324 | 82.1614   | 60.14   |
| 0.1598        | 8.0   | 10888 | 0.1196          | 88.5138 | 83.886  | 86.8481 | 86.9753   | 61.14   |
| 0.1213        | 9.0   | 12249 | 0.1023          | 91.0285 | 87.2329 | 89.6909 | 89.7877   | 61.19   |
| 0.1051        | 10.0  | 13610 | 0.0963          | 90.9672 | 87.2313 | 89.5998 | 89.7339   | 61.55   |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.2
  • Tokenizers 0.13.3