---
base_model: vinai/phobert-base-v2
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - recall
  - precision
model-index:
  - name: cls-comment-phobert-base-v2-v3.2.1
    results: []
---

# cls-comment-phobert-base-v2-v3.2.1

This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2873
- Accuracy: 0.9323
- F1 Score: 0.9262
- Recall: 0.9217
- Precision: 0.9320
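
The sketch below shows one way to load the checkpoint for inference. It assumes the model is published under the repo id `tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2.1` (assembled from this page, not stated in the card) and that the head is a standard `transformers` sequence-classification head; treat it as illustrative rather than official usage.

```python
# Minimal inference sketch. The repo id below is an assumption inferred
# from this page; the label names returned depend on the id2label mapping
# in the checkpoint's config, which the card does not document.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2.1",
)

# Example Vietnamese comment ("This product is very good!").
print(classifier("Sản phẩm này rất tốt!"))
```

Note that PhoBERT models generally expect word-segmented Vietnamese input (e.g. pre-processed with VnCoreNLP's RDRSegmenter), so raw text may need segmentation before classification.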

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reproduced as a `TrainingArguments` sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
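
As a rough guide, the values above map onto `transformers` `TrainingArguments` as in the following minimal reconstruction. It assumes the standard `Trainer` API from Transformers 4.40.x; `output_dir` and the 100-step evaluation cadence are assumptions, the latter inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cls-comment-phobert-base-v2-v3.2.1",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,  # 64 x 2 = total train batch size 128
    max_steps=4000,                 # "training_steps" above
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas/epsilon are the library defaults, matching the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",    # assumed: the table logs eval every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```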

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 1.8947        | 0.8696  | 100  | 1.6875          | 0.4001   | 0.0832   | 0.1437 | 0.1464    |
| 1.5395        | 1.7391  | 200  | 1.2897          | 0.5849   | 0.2356   | 0.2632 | 0.2752    |
| 1.1205        | 2.6087  | 300  | 0.8468          | 0.7999   | 0.5833   | 0.5810 | 0.5890    |
| 0.82          | 3.4783  | 400  | 0.6537          | 0.8369   | 0.6179   | 0.6355 | 0.6062    |
| 0.6232        | 4.3478  | 500  | 0.5371          | 0.8538   | 0.6337   | 0.6518 | 0.7525    |
| 0.5148        | 5.2174  | 600  | 0.4651          | 0.8728   | 0.7299   | 0.7211 | 0.7549    |
| 0.4204        | 6.0870  | 700  | 0.4010          | 0.8869   | 0.7654   | 0.7712 | 0.8914    |
| 0.3421        | 6.9565  | 800  | 0.3648          | 0.9051   | 0.8714   | 0.8588 | 0.8941    |
| 0.2841        | 7.8261  | 900  | 0.3240          | 0.9182   | 0.9007   | 0.9038 | 0.8978    |
| 0.2319        | 8.6957  | 1000 | 0.3025          | 0.9204   | 0.9061   | 0.8976 | 0.9175    |
| 0.205         | 9.5652  | 1100 | 0.2986          | 0.9209   | 0.9099   | 0.9086 | 0.9123    |
| 0.1783        | 10.4348 | 1200 | 0.3047          | 0.9206   | 0.9104   | 0.9207 | 0.9025    |
| 0.1587        | 11.3043 | 1300 | 0.2758          | 0.9296   | 0.9203   | 0.9177 | 0.9233    |
| 0.1286        | 12.1739 | 1400 | 0.2927          | 0.9266   | 0.9144   | 0.9199 | 0.9101    |
| 0.1221        | 13.0435 | 1500 | 0.2821          | 0.9318   | 0.9245   | 0.9194 | 0.9309    |
| 0.1087        | 13.9130 | 1600 | 0.2789          | 0.9293   | 0.9160   | 0.9237 | 0.9090    |
| 0.0982        | 14.7826 | 1700 | 0.2834          | 0.9291   | 0.9196   | 0.9213 | 0.9188    |
| 0.089         | 15.6522 | 1800 | 0.2828          | 0.9299   | 0.9202   | 0.9261 | 0.9152    |
| 0.0795        | 16.5217 | 1900 | 0.2737          | 0.9331   | 0.9244   | 0.9239 | 0.9253    |
| 0.0684        | 17.3913 | 2000 | 0.2873          | 0.9323   | 0.9262   | 0.9217 | 0.9320    |
| 0.0673        | 18.2609 | 2100 | 0.2904          | 0.9320   | 0.9252   | 0.9184 | 0.9333    |
| 0.0571        | 19.1304 | 2200 | 0.3166          | 0.9293   | 0.9222   | 0.9210 | 0.9251    |
| 0.0561        | 20.0    | 2300 | 0.2922          | 0.9318   | 0.9221   | 0.9298 | 0.9150    |
| 0.0511        | 20.8696 | 2400 | 0.2993          | 0.9315   | 0.9191   | 0.9303 | 0.9088    |
| 0.0442        | 21.7391 | 2500 | 0.3201          | 0.9266   | 0.9162   | 0.9280 | 0.9060    |
| 0.0447        | 22.6087 | 2600 | 0.3155          | 0.9282   | 0.9137   | 0.9282 | 0.9010    |
| 0.0415        | 23.4783 | 2700 | 0.3018          | 0.9334   | 0.9226   | 0.9270 | 0.9185    |
| 0.0359        | 24.3478 | 2800 | 0.3192          | 0.9299   | 0.9177   | 0.9308 | 0.9063    |
| 0.0369        | 25.2174 | 2900 | 0.3064          | 0.9337   | 0.9211   | 0.9286 | 0.9141    |
| 0.0296        | 26.0870 | 3000 | 0.3110          | 0.9329   | 0.9237   | 0.9279 | 0.9198    |
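
For reference, per-checkpoint metrics like these could be produced by a `compute_metrics` callback along the following lines. This is a hedged sketch: the card does not show the actual code, the `evaluate` library is an assumed dependency (it is not listed under framework versions), and the averaging mode for F1/recall/precision (macro here) is an assumption.

```python
import numpy as np
import evaluate  # assumed dependency, not confirmed by the card

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
recall = evaluate.load("recall")
precision = evaluate.load("precision")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
        "recall": recall.compute(predictions=preds, references=labels, average="macro")["recall"],
        "precision": precision.compute(predictions=preds, references=labels, average="macro")["precision"],
    }
```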

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1