---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-pt-pl5-1
  results: []
---

# sentiment-pt-pl5-1

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2868
- Accuracy: 0.8847
- Precision: 0.8599
- Recall: 0.8634
- F1: 0.8616

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5526        | 1.0   | 122  | 0.5135          | 0.7118   | 0.6438    | 0.6286 | 0.6339 |
| 0.4696        | 2.0   | 244  | 0.4522          | 0.7569   | 0.7328    | 0.7755 | 0.7369 |
| 0.3867        | 3.0   | 366  | 0.3466          | 0.8371   | 0.8297    | 0.7597 | 0.7824 |
| 0.3349        | 4.0   | 488  | 0.3128          | 0.8546   | 0.8395    | 0.7971 | 0.8137 |
| 0.2998        | 5.0   | 610  | 0.2932          | 0.8596   | 0.8293    | 0.8357 | 0.8324 |
| 0.2787        | 6.0   | 732  | 0.2855          | 0.8697   | 0.8419    | 0.8453 | 0.8436 |
| 0.2551        | 7.0   | 854  | 0.2898          | 0.8747   | 0.8438    | 0.8713 | 0.8550 |
| 0.2496        | 8.0   | 976  | 0.2936          | 0.8697   | 0.8653    | 0.8103 | 0.8309 |
| 0.2347        | 9.0   | 1098 | 0.2755          | 0.8847   | 0.8599    | 0.8634 | 0.8616 |
| 0.2199        | 10.0  | 1220 | 0.3038          | 0.8722   | 0.8675    | 0.8146 | 0.8347 |
| 0.2089        | 11.0  | 1342 | 0.2695          | 0.8822   | 0.8574    | 0.8592 | 0.8583 |
| 0.1992        | 12.0  | 1464 | 0.2710          | 0.8747   | 0.8488    | 0.8488 | 0.8488 |
| 0.1841        | 13.0  | 1586 | 0.2807          | 0.8722   | 0.8512    | 0.8346 | 0.8422 |
| 0.1808        | 14.0  | 1708 | 0.2822          | 0.8822   | 0.8548    | 0.8667 | 0.8603 |
| 0.1677        | 15.0  | 1830 | 0.2841          | 0.8747   | 0.8479    | 0.8513 | 0.8496 |
| 0.1683        | 16.0  | 1952 | 0.2821          | 0.8772   | 0.8496    | 0.8581 | 0.8537 |
| 0.1748        | 17.0  | 2074 | 0.2824          | 0.8797   | 0.8572    | 0.8499 | 0.8534 |
| 0.1566        | 18.0  | 2196 | 0.2847          | 0.8872   | 0.8606    | 0.8727 | 0.8662 |
| 0.1522        | 19.0  | 2318 | 0.2880          | 0.8822   | 0.8574    | 0.8592 | 0.8583 |
| 0.1566        | 20.0  | 2440 | 0.2868          | 0.8847   | 0.8599    | 0.8634 | 0.8616 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
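
## Example usage (sketch)

The card does not include an inference example, so the snippet below is a minimal sketch. It assumes the fine-tuned weights are available under the id `sentiment-pt-pl5-1` (the training output directory) and that the model uses a standard sequence-classification head; the number of classes and label names are not documented here and come from the `id2label` mapping saved in the checkpoint's config.

```python
# Hedged usage sketch: the model id and label mapping are assumptions, not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sentiment-pt-pl5-1"  # assumed local output dir or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Pelayanannya cepat dan ramah, saya sangat puas."  # sample Indonesian sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
# Label names come from the checkpoint's id2label mapping (not specified in this card).
print(model.config.id2label.get(pred_id, pred_id), probs.tolist())
```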
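
## Reproducing the training setup (sketch)

The hyperparameters listed under "Training procedure" map directly onto `TrainingArguments`. The sketch below mirrors them for Transformers 4.40.2; the dataset, tokenization step, `compute_metrics` function, and number of labels are not described in this card, so they are left as labeled placeholders and assumptions.

```python
# Sketch only: num_labels, the datasets, and compute_metrics are assumptions/placeholders.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "indolem/indobert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # num_labels assumed

args = TrainingArguments(
    output_dir="sentiment-pt-pl5-1",
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    num_train_epochs=20.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="epoch",  # assumed: the results table reports metrics once per epoch
    logging_strategy="epoch",
)

# train_ds / eval_ds / compute_metrics are placeholders for the undocumented data pipeline:
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```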