---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
  - generated_from_trainer
datasets:
  - audiofolder
metrics:
  - accuracy
model-index:
  - name: distilhubert-finetuned-donateacry
    results:
      - task:
          name: Audio Classification
          type: audio-classification
        dataset:
          name: audiofolder
          type: audiofolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8932584269662921
---

# distilhubert-finetuned-donateacry

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.5034
- Accuracy: 0.8933
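
You can try the model directly with the `transformers` audio-classification pipeline. The snippet below is a minimal sketch: the repository id is inferred from the author and model name on this card, and the audio filename is a placeholder.

```python
from transformers import pipeline

# Repository id inferred from the card's author and model name (an assumption).
classifier = pipeline(
    "audio-classification",
    model="Marcos12886/distilhubert-finetuned-donateacry",
)

# DistilHuBERT expects 16 kHz mono audio; the pipeline resamples input files.
# "cry_sample.wav" is a placeholder path.
for prediction in classifier("cry_sample.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```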

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
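
The dataset is only identified as an `audiofolder` dataset. For reference, such datasets are loaded from a directory of audio files with one subfolder per class; the `data_dir` below is a placeholder, since the actual path and folder layout are not documented here.

```python
from datasets import load_dataset

# Generic audiofolder loading sketch; "path/to/donateacry" is a placeholder.
# Class labels are derived from the subfolder names.
dataset = load_dataset("audiofolder", data_dir="path/to/donateacry")
print(dataset["train"][0])  # {'audio': {...}, 'label': ...}
```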

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
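
These settings map roughly onto the `Trainer` API as sketched below. This is a reconstruction, not the author's training script: `output_dir` and `eval_strategy` are assumptions, and the reported Adam betas and epsilon are the optimizer defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-donateacry",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=123,
    gradient_accumulation_steps=8,  # effective train batch size: 8 * 8 = 64
    lr_scheduler_type="linear",
    num_train_epochs=25,
    eval_strategy="epoch",  # assumed; the card logs one evaluation per epoch
)
```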

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9888  | 11   | 0.9525          | 0.7303   |
| No log        | 1.9775  | 22   | 1.2765          | 0.5393   |
| No log        | 2.9663  | 33   | 0.6634          | 0.7978   |
| No log        | 3.9551  | 44   | 0.6369          | 0.8202   |
| No log        | 4.9438  | 55   | 0.5328          | 0.8596   |
| No log        | 5.9326  | 66   | 0.5146          | 0.8652   |
| No log        | 6.9213  | 77   | 0.5200          | 0.8764   |
| No log        | 8.0     | 89   | 0.5213          | 0.8708   |
| No log        | 8.9888  | 100  | 0.6062          | 0.8596   |
| No log        | 9.9775  | 111  | 0.5938          | 0.8652   |
| No log        | 10.9663 | 122  | 0.5247          | 0.8652   |
| No log        | 11.9551 | 133  | 0.7004          | 0.8483   |
| No log        | 12.9438 | 144  | 0.5388          | 0.8876   |
| No log        | 13.9326 | 155  | 0.4856          | 0.8876   |
| No log        | 14.9213 | 166  | 0.5380          | 0.8764   |
| No log        | 16.0    | 178  | 0.5055          | 0.8876   |
| No log        | 16.9888 | 189  | 0.5217          | 0.8876   |
| No log        | 17.9775 | 200  | 0.5034          | 0.8933   |
| No log        | 18.9663 | 211  | 0.4745          | 0.8876   |
| No log        | 19.9551 | 222  | 0.4812          | 0.8876   |
| No log        | 20.9438 | 233  | 0.4709          | 0.8820   |
| No log        | 21.9326 | 244  | 0.4824          | 0.8876   |
| No log        | 22.9213 | 255  | 0.4819          | 0.8876   |
| No log        | 24.0    | 267  | 0.4877          | 0.8933   |
| No log        | 24.7191 | 275  | 0.4866          | 0.8933   |

### Framework versions

- Transformers 4.42.3
- PyTorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1