bert-base-uncased-finetuned-iemocap8

This model is a fine-tuned version of bert-base-uncased. The fine-tuning dataset is not documented in this card, although the model name suggests an 8-class IEMOCAP emotion-recognition setup. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 1.8968
  • Accuracy: 0.6654
  • F1: 0.6723
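
The snippet below is a minimal inference sketch. The Hub repository id is a placeholder (the card does not state where the checkpoint is published), and the label names returned depend on the fine-tuning label set, which is also undocumented here.

```python
# Minimal inference sketch for a fine-tuned BERT sequence classifier.
# NOTE: the repository id below is a placeholder, not a confirmed Hub path.
from transformers import pipeline

model_id = "<user>/bert-base-uncased-finetuned-iemocap8"  # placeholder repo id

classifier = pipeline("text-classification", model=model_id)

print(classifier("I can't believe it, this is the best news I've heard all week!"))
# e.g. [{'label': 'LABEL_3', 'score': 0.91}] -- label ids map to the (undocumented) emotion classes
```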

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 4.319412088241492e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
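
As a rough guide, here is how these values map onto transformers TrainingArguments (Transformers 4.26.x). The output directory name is an assumption, and dataset loading plus the Trainer call are omitted because the training data is not documented in this card; the Adam betas and epsilon listed above are the TrainingArguments defaults and are spelled out only for clarity.

```python
# Sketch of the listed hyperparameters as TrainingArguments; not the author's
# original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-iemocap8",  # assumed name
    learning_rate=4.319412088241492e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,        # Adam betas=(0.9, 0.999) and epsilon=1e-08
    adam_beta2=0.999,      # are the library defaults
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # per-epoch evaluation, matching the results table
)
```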

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 51   | 1.0531          | 0.5597   | 0.5655 |
| 1.0284        | 2.0   | 102  | 0.9370          | 0.6227   | 0.6304 |
| 1.0284        | 3.0   | 153  | 0.8796          | 0.6722   | 0.6765 |
| 0.4432        | 4.0   | 204  | 0.9785          | 0.6654   | 0.6727 |
| 0.4432        | 5.0   | 255  | 1.0664          | 0.6586   | 0.6634 |
| 0.2492        | 6.0   | 306  | 1.1291          | 0.6499   | 0.6606 |
| 0.2492        | 7.0   | 357  | 1.1847          | 0.6702   | 0.6777 |
| 0.1707        | 8.0   | 408  | 1.4084          | 0.6508   | 0.6534 |
| 0.1707        | 9.0   | 459  | 1.3468          | 0.6702   | 0.6762 |
| 0.1461        | 10.0  | 510  | 1.4245          | 0.6634   | 0.6710 |
| 0.1461        | 11.0  | 561  | 1.4865          | 0.6499   | 0.6600 |
| 0.1262        | 12.0  | 612  | 1.4616          | 0.6576   | 0.6656 |
| 0.1262        | 13.0  | 663  | 1.5335          | 0.6663   | 0.6711 |
| 0.1203        | 14.0  | 714  | 1.4855          | 0.6731   | 0.6806 |
| 0.1203        | 15.0  | 765  | 1.5825          | 0.6712   | 0.6792 |
| 0.1023        | 16.0  | 816  | 1.7145          | 0.6731   | 0.6794 |
| 0.1023        | 17.0  | 867  | 1.6676          | 0.6751   | 0.6823 |
| 0.0976        | 18.0  | 918  | 1.8013          | 0.6693   | 0.6719 |
| 0.0976        | 19.0  | 969  | 1.7192          | 0.6673   | 0.6755 |
| 0.0937        | 20.0  | 1020 | 1.7837          | 0.6654   | 0.6731 |
| 0.0937        | 21.0  | 1071 | 1.7779          | 0.6760   | 0.6831 |
| 0.0901        | 22.0  | 1122 | 1.8352          | 0.6615   | 0.6687 |
| 0.0901        | 23.0  | 1173 | 1.8601          | 0.6596   | 0.6656 |
| 0.0844        | 24.0  | 1224 | 1.9129          | 0.6625   | 0.6719 |
| 0.0844        | 25.0  | 1275 | 1.8507          | 0.6731   | 0.6784 |
| 0.0829        | 26.0  | 1326 | 1.8582          | 0.6673   | 0.6735 |
| 0.0829        | 27.0  | 1377 | 1.8670          | 0.6770   | 0.6825 |
| 0.0839        | 28.0  | 1428 | 1.8763          | 0.6741   | 0.6800 |
| 0.0839        | 29.0  | 1479 | 1.8925          | 0.6702   | 0.6769 |
| 0.0802        | 30.0  | 1530 | 1.8968          | 0.6654   | 0.6723 |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.0
  • Tokenizers 0.13.2
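
If you want to match this environment before trying to reproduce the results, a quick version check looks like the following (expected values taken from the list above):

```python
# Environment check against the framework versions listed in this card.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expect 4.26.1
print("PyTorch:", torch.__version__)              # expect 1.13.1+cu116
print("Datasets:", datasets.__version__)          # expect 2.10.0
print("Tokenizers:", tokenizers.__version__)      # expect 0.13.2
```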