---
base_model: vinai/phobert-base-v2
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - recall
  - precision
model-index:
  - name: cls-comment-phobert-base-v2-v3.2.1
    results: []
---

cls-comment-phobert-base-v2-v3.2.1

This model is a fine-tuned version of vinai/phobert-base-v2 on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the results list):

  • Loss: 0.4657
  • Accuracy: 0.9407
  • F1 Score: 0.9319
  • Recall: 0.9326
  • Precision: 0.9319
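
The card does not include a usage snippet, so below is a minimal inference sketch, assuming the checkpoint is hosted at tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2.1 (inferred from the model name) and that the label names were saved in the model config. PhoBERT was pre-trained on word-segmented Vietnamese text, so segmenting comments (e.g. with VnCoreNLP) before classification is recommended.

```python
from transformers import pipeline

# Minimal inference sketch; the repository id below is an assumption
# based on the model name and may need to be adjusted.
classifier = pipeline(
    "text-classification",
    model="tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2.1",
)

# PhoBERT expects word-segmented Vietnamese input (underscores join
# multi-syllable words), e.g. "This is an example comment."
print(classifier("Đây là một bình_luận ví_dụ ."))
```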

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstructed TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 4000
  • label_smoothing_factor: 0.05
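
For reference, these settings map roughly onto the TrainingArguments sketch below. This is a reconstruction under stated assumptions, not the original training script; values not listed on the card (such as output_dir and the evaluation schedule) are placeholders or inferred from the results table.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# output_dir and the evaluation/logging schedule are assumptions;
# the 100-step eval interval is inferred from the results table.
training_args = TrainingArguments(
    output_dir="cls-comment-phobert-base-v2-v3.2.1",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,   # effective train batch size: 64 * 2 = 128
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=4000,                  # "training_steps" above
    label_smoothing_factor=0.05,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)
```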

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.8571 | 0.8696 | 100 | 1.6996 | 0.3986 | 0.0814 | 0.1429 | 0.0569 |
| 1.552 | 1.7391 | 200 | 1.2878 | 0.6150 | 0.2552 | 0.2860 | 0.2738 |
| 1.1701 | 2.6087 | 300 | 0.9309 | 0.7746 | 0.5380 | 0.5249 | 0.5797 |
| 0.8958 | 3.4783 | 400 | 0.7468 | 0.8371 | 0.6099 | 0.6121 | 0.6113 |
| 0.7463 | 4.3478 | 500 | 0.6540 | 0.8641 | 0.6758 | 0.6741 | 0.7556 |
| 0.6489 | 5.2174 | 600 | 0.5884 | 0.8866 | 0.7502 | 0.7443 | 0.7611 |
| 0.5604 | 6.0870 | 700 | 0.5297 | 0.9010 | 0.8350 | 0.8196 | 0.9060 |
| 0.4907 | 6.9565 | 800 | 0.4928 | 0.9171 | 0.8962 | 0.8769 | 0.9190 |
| 0.4428 | 7.8261 | 900 | 0.4692 | 0.9220 | 0.9048 | 0.8958 | 0.9170 |
| 0.4086 | 8.6957 | 1000 | 0.4600 | 0.9236 | 0.9073 | 0.9183 | 0.8978 |
| 0.3892 | 9.5652 | 1100 | 0.4530 | 0.9293 | 0.9156 | 0.9159 | 0.9156 |
| 0.3659 | 10.4348 | 1200 | 0.4574 | 0.9258 | 0.9154 | 0.9258 | 0.9071 |
| 0.3577 | 11.3043 | 1300 | 0.4533 | 0.9288 | 0.9159 | 0.9177 | 0.9152 |
| 0.338 | 12.1739 | 1400 | 0.4454 | 0.9339 | 0.9203 | 0.9285 | 0.9128 |
| 0.3302 | 13.0435 | 1500 | 0.4539 | 0.9312 | 0.9179 | 0.9172 | 0.9196 |
| 0.3186 | 13.9130 | 1600 | 0.4533 | 0.9320 | 0.9220 | 0.9146 | 0.9298 |
| 0.3146 | 14.7826 | 1700 | 0.4485 | 0.9356 | 0.9246 | 0.9224 | 0.9281 |
| 0.3093 | 15.6522 | 1800 | 0.4557 | 0.9326 | 0.9194 | 0.9125 | 0.9291 |
| 0.3019 | 16.5217 | 1900 | 0.4684 | 0.9290 | 0.9169 | 0.9234 | 0.9128 |
| 0.2985 | 17.3913 | 2000 | 0.4545 | 0.9347 | 0.9248 | 0.9238 | 0.9259 |
| 0.2959 | 18.2609 | 2100 | 0.4689 | 0.9334 | 0.9220 | 0.9208 | 0.9249 |
| 0.2891 | 19.1304 | 2200 | 0.4558 | 0.9386 | 0.9262 | 0.9180 | 0.9360 |
| 0.2905 | 20.0 | 2300 | 0.4590 | 0.9358 | 0.9227 | 0.9163 | 0.9308 |
| 0.2875 | 20.8696 | 2400 | 0.4797 | 0.9307 | 0.9193 | 0.9146 | 0.9268 |
| 0.2812 | 21.7391 | 2500 | 0.4697 | 0.9356 | 0.9247 | 0.9257 | 0.9242 |
| 0.2789 | 22.6087 | 2600 | 0.4668 | 0.9380 | 0.9255 | 0.9250 | 0.9271 |
| 0.2785 | 23.4783 | 2700 | 0.4671 | 0.9383 | 0.9293 | 0.9301 | 0.9289 |
| 0.2773 | 24.3478 | 2800 | 0.4657 | 0.9391 | 0.9293 | 0.9274 | 0.9328 |
| 0.2814 | 25.2174 | 2900 | 0.4702 | 0.9361 | 0.9259 | 0.9285 | 0.9244 |
| 0.2744 | 26.0870 | 3000 | 0.4732 | 0.9353 | 0.9274 | 0.9290 | 0.9273 |
| 0.2772 | 26.9565 | 3100 | 0.4676 | 0.9388 | 0.9281 | 0.9301 | 0.9264 |
| 0.2736 | 27.8261 | 3200 | 0.4661 | 0.9394 | 0.9281 | 0.9242 | 0.9325 |
| 0.2754 | 28.6957 | 3300 | 0.4746 | 0.9367 | 0.9257 | 0.9233 | 0.9288 |
| 0.2717 | 29.5652 | 3400 | 0.4688 | 0.9380 | 0.9283 | 0.9255 | 0.9315 |
| 0.27 | 30.4348 | 3500 | 0.4697 | 0.9388 | 0.9304 | 0.9307 | 0.9308 |
| 0.2674 | 31.3043 | 3600 | 0.4668 | 0.9391 | 0.9274 | 0.9311 | 0.9240 |
| 0.2693 | 32.1739 | 3700 | 0.4657 | 0.9407 | 0.9319 | 0.9326 | 0.9319 |
| 0.2685 | 33.0435 | 3800 | 0.4672 | 0.9402 | 0.9298 | 0.9304 | 0.9297 |
| 0.268 | 33.9130 | 3900 | 0.4668 | 0.9410 | 0.9317 | 0.9311 | 0.9328 |
| 0.272 | 34.7826 | 4000 | 0.4654 | 0.9402 | 0.9310 | 0.9300 | 0.9325 |
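
The card does not state how the F1 Score, Recall, and Precision columns are aggregated across classes. A Trainer-style compute_metrics function along the lines of the sketch below would reproduce these columns; macro averaging is an assumption here, not something documented on the card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Sketch of a metrics function matching the columns in the table above.
    # Macro averaging is an assumption; the card does not document it.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```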

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1