---
language:
  - id
license: mit
base_model: indolem/indobert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: nerugm-lora-r8-2
    results: []
---

# nerugm-lora-r8-2

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1743
- Precision: 0.6820
- Recall: 0.8289
- F1: 0.7483
- Accuracy: 0.9445
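
For reference, below is a minimal inference sketch. It assumes the LoRA adapter is hosted under the repo id `apwic/nerugm-lora-r8-2`, that `peft` is installed, and that the label count is 7; none of these are confirmed by the card, so adjust them to the actual adapter config.

```python
# Minimal inference sketch (assumptions: repo id "apwic/nerugm-lora-r8-2",
# num_labels=7 is a guess -- read the real value from the adapter config).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

base = AutoModelForTokenClassification.from_pretrained(
    "indolem/indobert-base-uncased", num_labels=7  # assumed label count
)
model = PeftModel.from_pretrained(base, "apwic/nerugm-lora-r8-2")
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")

text = "Joko Widodo lahir di Surakarta."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
# Map prediction ids to tag names via model.config.id2label.
print(list(zip(tokens, pred_ids)))
```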

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
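
As a rough reconstruction, the sketch below shows how these hyperparameters map onto `TrainingArguments`, together with a LoRA setup whose rank `r=8` is inferred from the model name. The `target_modules`, dropout, alpha, label count, and per-epoch evaluation are assumptions, and dataset loading is omitted.

```python
# Hedged reconstruction of the training setup; not the author's actual script.
from transformers import (
    AutoModelForTokenClassification,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForTokenClassification.from_pretrained(
    "indolem/indobert-base-uncased", num_labels=7  # assumed label count
)
lora = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,                                # rank inferred from "lora-r8" in the name
    lora_alpha=16,                      # assumption
    lora_dropout=0.1,                   # assumption
    target_modules=["query", "value"],  # assumption for BERT-style attention
)
model = get_peft_model(base, lora)

args = TrainingArguments(
    output_dir="nerugm-lora-r8-2",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="epoch",  # assumption, consistent with per-epoch results
)
# Default Adam betas=(0.9, 0.999) and epsilon=1e-08 match the card.
trainer = Trainer(model=model, args=args)  # train/eval datasets omitted here
# trainer.train()
```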

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2665        | 1.0   | 106  | 0.7137          | 0.0       | 0.0    | 0.0    | 0.8449   |
| 0.713         | 2.0   | 212  | 0.6075          | 0.0       | 0.0    | 0.0    | 0.8451   |
| 0.6346        | 3.0   | 318  | 0.5231          | 0.1905    | 0.0118 | 0.0222 | 0.8494   |
| 0.5555        | 4.0   | 424  | 0.4458          | 0.275     | 0.0649 | 0.1050 | 0.8656   |
| 0.4696        | 5.0   | 530  | 0.3715          | 0.4802    | 0.2861 | 0.3586 | 0.8949   |
| 0.3932        | 6.0   | 636  | 0.3134          | 0.5563    | 0.5251 | 0.5402 | 0.9194   |
| 0.3299        | 7.0   | 742  | 0.2706          | 0.5968    | 0.6637 | 0.6285 | 0.9277   |
| 0.2896        | 8.0   | 848  | 0.2433          | 0.62      | 0.7316 | 0.6712 | 0.9340   |
| 0.2656        | 9.0   | 954  | 0.2277          | 0.6289    | 0.7699 | 0.6923 | 0.9355   |
| 0.2442        | 10.0  | 1060 | 0.2082          | 0.6526    | 0.7758 | 0.7089 | 0.9387   |
| 0.23          | 11.0  | 1166 | 0.2020          | 0.6390    | 0.7935 | 0.7079 | 0.9382   |
| 0.2229        | 12.0  | 1272 | 0.1977          | 0.6524    | 0.8083 | 0.7220 | 0.9385   |
| 0.2132        | 13.0  | 1378 | 0.1886          | 0.6602    | 0.8083 | 0.7268 | 0.9402   |
| 0.2055        | 14.0  | 1484 | 0.1810          | 0.6708    | 0.7994 | 0.7295 | 0.9415   |
| 0.2038        | 15.0  | 1590 | 0.1822          | 0.6595    | 0.8112 | 0.7275 | 0.9405   |
| 0.2004        | 16.0  | 1696 | 0.1788          | 0.6731    | 0.8201 | 0.7394 | 0.9430   |
| 0.1966        | 17.0  | 1802 | 0.1775          | 0.6731    | 0.8260 | 0.7417 | 0.9432   |
| 0.1931        | 18.0  | 1908 | 0.1765          | 0.6683    | 0.8260 | 0.7388 | 0.9435   |
| 0.1937        | 19.0  | 2014 | 0.1749          | 0.6747    | 0.8260 | 0.7427 | 0.9437   |
| 0.1888        | 20.0  | 2120 | 0.1743          | 0.6820    | 0.8289 | 0.7483 | 0.9445   |
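
The precision/recall/F1/accuracy columns above are the entity-level metrics conventionally produced with `seqeval`. The sketch below shows how such numbers are typically computed; the label names are illustrative, and the `evaluate` and `seqeval` packages are assumed to be installed (neither is listed under framework versions).

```python
# Illustrative entity-level metric computation; labels are hypothetical,
# not taken from this model's actual config. Requires: pip install evaluate seqeval
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["B-PER", "I-PER", "O", "B-LOC"]]
references  = [["B-PER", "I-PER", "O", "B-ORG"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```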

### Framework versions

- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2