
Labira/LabiraEdu-v1.0x

This model is a fine-tuned version of indolem/indobert-base-uncased on an unknown dataset. It achieves the following results at the final training epoch (train loss on the training set, validation loss on the evaluation set):

  • Train Loss: 0.0206
  • Validation Loss: 4.5266
  • Epoch: 98

Model description

More information needed

Intended uses & limitations

More information needed
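Pending documentation from the authors, here is a minimal loading sketch. It is an assumption, not the authors' documented usage: since this card does not state the task head, the checkpoint is loaded with the generic TFAutoModel class, and any task-specific class would be a guess.

```python
from transformers import AutoTokenizer, TFAutoModel

# Generic load: the card does not document the task head, so this pulls
# only the shared encoder. Swap TFAutoModel for a task-specific class
# (e.g. TFAutoModelForQuestionAnswering) if that matches the training
# objective.
tokenizer = AutoTokenizer.from_pretrained("Labira/LabiraEdu-v1.0x")
model = TFAutoModel.from_pretrained("Labira/LabiraEdu-v1.0x")

inputs = tokenizer("Contoh kalimat bahasa Indonesia.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```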

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam with beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False, weight_decay=None, jit_compile=True; no gradient clipping (clipnorm, global_clipnorm, and clipvalue all None); EMA disabled (use_ema=False, so ema_momentum=0.99 and ema_overwrite_frequency are unused); is_legacy_optimizer=False
  • learning_rate: PolynomialDecay schedule with initial_learning_rate=2e-05, decay_steps=1100, end_learning_rate=0.0, power=1.0, cycle=False (i.e., a linear decay from 2e-05 to 0 over 1,100 steps); see the sketch after this list
  • training_precision: float32
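
For readers who want to reproduce the setup, a minimal sketch that reconstructs this optimizer in Keras (assuming TensorFlow 2.15, as listed under framework versions below; the data pipeline and model head are not documented and are omitted):

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0 over 1,100 optimizer steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1100,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam exactly as configured above; EMA and gradient clipping are off.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    jit_compile=True,
)
```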

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.0565 | 3.9761 | 0 |
| 3.6621 | 3.2932 | 1 |
| 3.0961 | 3.2587 | 2 |
| 2.7357 | 3.2031 | 3 |
| 2.3059 | 3.2519 | 4 |
| 1.8933 | 3.4772 | 5 |
| 1.9076 | 3.1664 | 6 |
| 1.5492 | 3.4201 | 7 |
| 1.2578 | 3.5190 | 8 |
| 1.0478 | 3.4076 | 9 |
| 1.0130 | 3.5961 | 10 |
| 0.9073 | 3.4919 | 11 |
| 0.7071 | 3.5013 | 12 |
| 0.5616 | 4.0259 | 13 |
| 0.4798 | 3.9766 | 14 |
| 0.5938 | 3.8146 | 15 |
| 0.6476 | 3.7065 | 16 |
| 0.4264 | 4.1631 | 17 |
| 0.5290 | 3.7455 | 18 |
| 0.4637 | 3.6362 | 19 |
| 0.3826 | 3.8389 | 20 |
| 0.2876 | 3.7611 | 21 |
| 0.2221 | 4.0540 | 22 |
| 0.1752 | 4.0683 | 23 |
| 0.1544 | 4.0452 | 24 |
| 0.1600 | 4.0417 | 25 |
| 0.1390 | 4.0668 | 26 |
| 0.1134 | 4.0659 | 27 |
| 0.0965 | 4.0700 | 28 |
| 0.0820 | 4.2026 | 29 |
| 0.0810 | 4.3008 | 30 |
| 0.1166 | 4.0835 | 31 |
| 0.0776 | 4.0886 | 32 |
| 0.1033 | 4.1303 | 33 |
| 0.0512 | 4.1014 | 34 |
| 0.0484 | 4.1462 | 35 |
| 0.0565 | 4.2404 | 36 |
| 0.0652 | 4.2064 | 37 |
| 0.0538 | 4.1032 | 38 |
| 0.0516 | 4.0948 | 39 |
| 0.0611 | 4.2563 | 40 |
| 0.0523 | 4.3629 | 41 |
| 0.0571 | 4.3032 | 42 |
| 0.0479 | 4.3147 | 43 |
| 0.0308 | 4.3639 | 44 |
| 0.0370 | 4.3490 | 45 |
| 0.0406 | 4.3471 | 46 |
| 0.0300 | 4.4078 | 47 |
| 0.0270 | 4.4253 | 48 |
| 0.0283 | 4.4177 | 49 |
| 0.0228 | 4.4394 | 50 |
| 0.0538 | 4.4019 | 51 |
| 0.0342 | 4.3553 | 52 |
| 0.0249 | 4.3161 | 53 |
| 0.0657 | 4.4426 | 54 |
| 0.0309 | 4.5678 | 55 |
| 0.0467 | 4.4247 | 56 |
| 0.0356 | 4.5058 | 57 |
| 0.0431 | 4.4563 | 58 |
| 0.0366 | 4.5242 | 59 |
| 0.0624 | 4.3149 | 60 |
| 0.0471 | 4.3177 | 61 |
| 0.0248 | 4.3159 | 62 |
| 0.0388 | 4.3554 | 63 |
| 0.0262 | 4.3888 | 64 |
| 0.0360 | 4.4544 | 65 |
| 0.0319 | 4.4608 | 66 |
| 0.0269 | 4.4676 | 67 |
| 0.0373 | 4.3847 | 68 |
| 0.0205 | 4.3560 | 69 |
| 0.0223 | 4.3715 | 70 |
| 0.0306 | 4.3894 | 71 |
| 0.0235 | 4.4409 | 72 |
| 0.0189 | 4.4767 | 73 |
| 0.0280 | 4.5137 | 74 |
| 0.0165 | 4.5471 | 75 |
| 0.0098 | 4.5553 | 76 |
| 0.0173 | 4.5465 | 77 |
| 0.0234 | 4.5461 | 78 |
| 0.0231 | 4.5485 | 79 |
| 0.0237 | 4.5326 | 80 |
| 0.0158 | 4.5293 | 81 |
| 0.0178 | 4.5309 | 82 |
| 0.0225 | 4.5306 | 83 |
| 0.0191 | 4.5213 | 84 |
| 0.0213 | 4.5231 | 85 |
| 0.0144 | 4.5332 | 86 |
| 0.0191 | 4.5365 | 87 |
| 0.0188 | 4.5487 | 88 |
| 0.0272 | 4.5426 | 89 |
| 0.0126 | 4.5390 | 90 |
| 0.0224 | 4.5384 | 91 |
| 0.0218 | 4.5389 | 92 |
| 0.0083 | 4.5394 | 93 |
| 0.0246 | 4.5326 | 94 |
| 0.0199 | 4.5284 | 95 |
| 0.0174 | 4.5264 | 96 |
| 0.0130 | 4.5259 | 97 |
| 0.0206 | 4.5266 | 98 |
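
The validation loss reaches its minimum of 3.1664 at epoch 6 and trends upward thereafter while the train loss keeps shrinking, which suggests the model overfits well before epoch 98. A hedged sketch of how a Keras early-stopping callback could keep the best checkpoint instead (this callback was not part of the documented run, and train_ds / val_ds are placeholders, since the dataset is not documented):

```python
import tensorflow as tf

# Stop once validation loss has not improved for `patience` epochs and
# roll the weights back to the best epoch seen (epoch 6 in the table above).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)

# train_ds / val_ds are hypothetical; this card does not document the data.
# model.fit(train_ds, validation_data=val_ds, epochs=99, callbacks=[early_stop])
```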

Framework versions

  • Transformers 4.41.2
  • TensorFlow 2.15.0
  • Datasets 2.19.2
  • Tokenizers 0.19.1