videberta-sentiment-analysis

This model is a fine-tuned version of Fsoft-AIC/videberta-xsmall on the vietnamese_students_feedback dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2787
  • Accuracy: 0.9470
  • Precision: 0.9481
  • Recall: 0.9528
  • F1: 0.9504

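Below is a minimal inference sketch using the transformers text-classification pipeline. The repository id is the one this card is published under; the label names returned depend on the model's config and may appear as generic LABEL_0/LABEL_1/LABEL_2 rather than sentiment names, and the example sentence is purely illustrative.

```python
# Minimal inference sketch (assumes transformers is installed and the
# repository id below is reachable on the Hugging Face Hub).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="shayonhuggingface/videberta-sentiment-analysis",
)

# Illustrative Vietnamese student-feedback sentence ("The lecturer teaches
# very enthusiastically and is easy to understand.").
print(classifier("Giảng viên dạy rất nhiệt tình và dễ hiểu."))
```
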
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100

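A hedged sketch of how these hyperparameters map onto the transformers Trainer API is shown below. The training script is not published, so model and dataset setup, the number of labels, and the evaluation schedule (every 100 steps, inferred from the results table that follows) are assumptions; the listed Adam settings match the Trainer defaults and are therefore not spelled out.

```python
# Configuration sketch only; dataset preparation and tokenization are omitted.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "Fsoft-AIC/videberta-xsmall"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=3 is an assumption based on the three sentiment classes
# (negative / neutral / positive) in vietnamese_students_feedback.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

training_args = TrainingArguments(
    output_dir="videberta-sentiment-analysis",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="steps",  # assumption: results below are logged every 100 steps
    eval_steps=100,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...,
#                   tokenizer=tokenizer, compute_metrics=...)
# trainer.train()
```
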
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--:|
| 0.6152 | 0.58 | 100 | 0.4777 | 0.8007 | 0.8580 | 0.7503 | 0.8005 |
| 0.408 | 1.16 | 200 | 0.3241 | 0.8669 | 0.8943 | 0.8509 | 0.8721 |
| 0.3268 | 1.74 | 300 | 0.2726 | 0.8954 | 0.8837 | 0.9255 | 0.9041 |
| 0.2654 | 2.33 | 400 | 0.2296 | 0.9199 | 0.9212 | 0.9292 | 0.9252 |
| 0.253 | 2.91 | 500 | 0.2088 | 0.9159 | 0.9206 | 0.9217 | 0.9212 |
| 0.2014 | 3.49 | 600 | 0.2318 | 0.9172 | 0.9028 | 0.9466 | 0.9242 |
| 0.1939 | 4.07 | 700 | 0.2131 | 0.9212 | 0.9224 | 0.9304 | 0.9264 |
| 0.1698 | 4.65 | 800 | 0.2005 | 0.9311 | 0.9499 | 0.9193 | 0.9343 |
| 0.1822 | 5.23 | 900 | 0.2249 | 0.9245 | 0.9089 | 0.9540 | 0.9309 |
| 0.1441 | 5.81 | 1000 | 0.2038 | 0.9311 | 0.9311 | 0.9404 | 0.9357 |
| 0.1403 | 6.4 | 1100 | 0.2044 | 0.9338 | 0.9315 | 0.9453 | 0.9383 |
| 0.1377 | 6.98 | 1200 | 0.1991 | 0.9417 | 0.9567 | 0.9329 | 0.9447 |
| 0.1191 | 7.56 | 1300 | 0.2955 | 0.9119 | 0.8792 | 0.9677 | 0.9213 |
| 0.1227 | 8.14 | 1400 | 0.2362 | 0.9318 | 0.9199 | 0.9553 | 0.9372 |
| 0.1023 | 8.72 | 1500 | 0.2221 | 0.9358 | 0.9286 | 0.9528 | 0.9405 |
| 0.1049 | 9.3 | 1600 | 0.1940 | 0.9424 | 0.9454 | 0.9466 | 0.9460 |
| 0.1002 | 9.88 | 1700 | 0.1949 | 0.9404 | 0.9649 | 0.9217 | 0.9428 |
| 0.0946 | 10.47 | 1800 | 0.2232 | 0.9404 | 0.9625 | 0.9242 | 0.9430 |
| 0.0911 | 11.05 | 1900 | 0.2016 | 0.9457 | 0.9641 | 0.9329 | 0.9482 |
| 0.0818 | 11.63 | 2000 | 0.2636 | 0.9311 | 0.9128 | 0.9627 | 0.9371 |
| 0.0889 | 12.21 | 2100 | 0.2279 | 0.9450 | 0.9524 | 0.9441 | 0.9482 |
| 0.0668 | 12.79 | 2200 | 0.2460 | 0.9411 | 0.9409 | 0.9491 | 0.9450 |
| 0.0635 | 13.37 | 2300 | 0.2764 | 0.9424 | 0.9465 | 0.9453 | 0.9459 |
| 0.072 | 13.95 | 2400 | 0.2519 | 0.9437 | 0.9390 | 0.9565 | 0.9477 |
| 0.0697 | 14.53 | 2500 | 0.2705 | 0.9404 | 0.9408 | 0.9478 | 0.9443 |
| 0.0602 | 15.12 | 2600 | 0.2686 | 0.9450 | 0.9513 | 0.9453 | 0.9483 |
| 0.065 | 15.7 | 2700 | 0.2629 | 0.9450 | 0.9501 | 0.9466 | 0.9484 |
| 0.0628 | 16.28 | 2800 | 0.2644 | 0.9450 | 0.9547 | 0.9416 | 0.9481 |
| 0.0505 | 16.86 | 2900 | 0.2704 | 0.9424 | 0.9400 | 0.9528 | 0.9463 |
| 0.0471 | 17.44 | 3000 | 0.2787 | 0.9470 | 0.9481 | 0.9528 | 0.9504 |
| 0.0568 | 18.02 | 3100 | 0.2766 | 0.9450 | 0.9424 | 0.9553 | 0.9488 |
| 0.0523 | 18.6 | 3200 | 0.2659 | 0.9424 | 0.9421 | 0.9503 | 0.9462 |
| 0.0487 | 19.19 | 3300 | 0.3091 | 0.9338 | 0.9222 | 0.9565 | 0.9390 |
| 0.0529 | 19.77 | 3400 | 0.3575 | 0.9272 | 0.9045 | 0.9652 | 0.9339 |
| 0.0484 | 20.35 | 3500 | 0.3228 | 0.9358 | 0.9214 | 0.9615 | 0.9410 |
| 0.0456 | 20.93 | 3600 | 0.2694 | 0.9437 | 0.9412 | 0.9540 | 0.9476 |
| 0.0424 | 21.51 | 3700 | 0.2793 | 0.9404 | 0.9376 | 0.9516 | 0.9445 |
| 0.045 | 22.09 | 3800 | 0.2953 | 0.9417 | 0.9356 | 0.9565 | 0.9459 |
| 0.0395 | 22.67 | 3900 | 0.2840 | 0.9417 | 0.9377 | 0.9540 | 0.9458 |
| 0.0418 | 23.26 | 4000 | 0.3527 | 0.9305 | 0.9108 | 0.9640 | 0.9366 |

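The table reports accuracy, precision, recall, and F1 at each evaluation step. A minimal sketch of a compute_metrics callback that would produce these columns when passed to the Trainer is shown below; the card does not state which averaging strategy was used for precision, recall, and F1, so macro averaging here is an assumption.

```python
# Sketch of a metrics callback; the "macro" average is an assumption,
# not the documented training setup.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```
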
Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3