
# Labira/LabiraPJOK_3x_50

This model is a fine-tuned version of [Labira/LabiraPJOK_2x_50](https://huggingface.co/Labira/LabiraPJOK_2x_50) on an unknown dataset.
It achieves the following results on the evaluation set:

- Train Loss: 0.0125
- Validation Loss: 1.5431
- Epoch: 49
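The card does not document the intended task, but the checkpoint was trained with TensorFlow and Transformers, so it can be loaded generically. A minimal loading sketch, assuming the repository id above; the bare `TFAutoModel` class is used because the task head is not documented here, and the Indonesian sample sentence is purely illustrative:

```python
# Minimal sketch for loading this checkpoint with Transformers (TensorFlow).
# The task head is an assumption: the card does not state the intended task,
# so we load the bare encoder; swap in a task-specific Auto class
# (e.g. TFAutoModelForQuestionAnswering) if that matches your use case.
from transformers import AutoTokenizer, TFAutoModel

model_id = "Labira/LabiraPJOK_3x_50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

# Encode an illustrative sentence and run a forward pass.
inputs = tokenizer("Contoh kalimat untuk inferensi.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```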

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 450, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
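For readers who want to reproduce the schedule, the serialized optimizer above corresponds to the following Keras objects. This is a reconstruction from the config dict, not code from the original training script:

```python
import tensorflow as tf

# Reconstruction of the serialized optimizer config above: Adam with a
# linear (power=1.0) PolynomialDecay from 2e-05 to 0.0 over 450 steps.
# Mirrors the config dict; it is not the original training script.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=450,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```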

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7214     | 1.2242          | 0     |
| 1.5828     | 1.1158          | 1     |
| 0.9946     | 1.0677          | 2     |
| 0.7404     | 1.2115          | 3     |
| 0.5481     | 1.0920          | 4     |
| 0.3599     | 1.1031          | 5     |
| 0.2659     | 1.1035          | 6     |
| 0.2725     | 1.1251          | 7     |
| 0.2207     | 1.1364          | 8     |
| 0.1379     | 1.2039          | 9     |
| 0.1687     | 1.2331          | 10    |
| 0.1154     | 1.1677          | 11    |
| 0.1126     | 1.2093          | 12    |
| 0.0953     | 1.2532          | 13    |
| 0.0753     | 1.2455          | 14    |
| 0.0519     | 1.2544          | 15    |
| 0.0603     | 1.2511          | 16    |
| 0.0609     | 1.2736          | 17    |
| 0.0530     | 1.2692          | 18    |
| 0.0384     | 1.2869          | 19    |
| 0.0337     | 1.3048          | 20    |
| 0.0304     | 1.3314          | 21    |
| 0.0565     | 1.3378          | 22    |
| 0.0351     | 1.3842          | 23    |
| 0.0480     | 1.4148          | 24    |
| 0.0308     | 1.3959          | 25    |
| 0.0454     | 1.3768          | 26    |
| 0.0557     | 1.4469          | 27    |
| 0.0397     | 1.4431          | 28    |
| 0.0212     | 1.4441          | 29    |
| 0.0251     | 1.4262          | 30    |
| 0.0291     | 1.4412          | 31    |
| 0.0194     | 1.5155          | 32    |
| 0.0238     | 1.5136          | 33    |
| 0.0209     | 1.5002          | 34    |
| 0.0183     | 1.4976          | 35    |
| 0.0204     | 1.5533          | 36    |
| 0.0183     | 1.6057          | 37    |
| 0.0147     | 1.6047          | 38    |
| 0.0137     | 1.6029          | 39    |
| 0.0090     | 1.5879          | 40    |
| 0.0323     | 1.5802          | 41    |
| 0.0181     | 1.5748          | 42    |
| 0.0144     | 1.5629          | 43    |
| 0.0215     | 1.5534          | 44    |
| 0.0058     | 1.5442          | 45    |
| 0.0144     | 1.5485          | 46    |
| 0.0122     | 1.5449          | 47    |
| 0.0139     | 1.5428          | 48    |
| 0.0125     | 1.5431          | 49    |
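The run shows the train loss falling steadily while the validation loss bottoms out around epoch 2 (1.0677) and climbs thereafter, a typical overfitting pattern. Below is a minimal sketch of how a Keras early-stopping callback could capture the best checkpoint in a run like this one; the callback is illustrative and was not part of the documented training procedure:

```python
import tensorflow as tf

# Hypothetical early-stopping callback; NOT part of the documented run.
# It halts training once validation loss stops improving and restores
# the weights from the best epoch (here, around epoch 2).
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,                 # allow a few stagnant epochs before stopping
    restore_best_weights=True,  # roll back to the lowest-val-loss weights
)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```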

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1