# videomae-base-finetuned-IEMOCAP_1
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base), presumably on the IEMOCAP emotion-recognition dataset (per the model name). It achieves the following results on the evaluation set:
- Loss: 1.3409
- Accuracy: 0.3480
## Model description

More information needed
## Intended uses & limitations

More information needed
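As a rough sketch of intended use, the checkpoint can be loaded for video classification with the standard VideoMAE classes. The repo id below is a placeholder for wherever this checkpoint is hosted, and the dummy clip of random frames only illustrates the expected input shape (VideoMAE's default is 16 RGB frames at 224×224).

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Placeholder repo id; substitute the actual checkpoint path.
ckpt = "videomae-base-finetuned-IEMOCAP_1"

processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Dummy clip: 16 random RGB frames of 224x224, standing in for a real video.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its emotion label.
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```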
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4440
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.35          | 0.1   | 445  | 1.4144          | 0.2478   |
| 1.3944        | 1.1   | 890  | 1.3677          | 0.3340   |
| 1.2676        | 2.1   | 1335 | 1.3613          | 0.3434   |
| 1.2343        | 3.1   | 1780 | 1.3674          | 0.3289   |
| 1.222         | 4.1   | 2225 | 1.3379          | 0.3522   |
| 1.3494        | 5.1   | 2670 | 1.3466          | 0.3421   |
| 1.2836        | 6.1   | 3115 | 1.3277          | 0.3591   |
| 1.226         | 7.1   | 3560 | 1.3132          | 0.3704   |
| 1.3174        | 8.1   | 4005 | 1.3001          | 0.3604   |
| 1.2933        | 9.1   | 4440 | 1.2912          | 0.3629   |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3