
# Labira/LabiraPJOK_2x_50

This model is a fine-tuned version of [Labira/LabiraPJOK_1_50](https://huggingface.co/Labira/LabiraPJOK_1_50) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.0131
- Validation Loss: 4.2318
- Epoch: 44

## Model description

More information needed

## Intended uses & limitations

More information needed
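
Pending documentation from the authors, the sketch below shows one plausible way to query the model. It is an assumption, not a documented usage: the card does not state the task, so the question-answering pipeline, the Indonesian placeholder inputs, and the TensorFlow backend (inferred from the framework versions below) are all unverified guesses.

```python
from transformers import pipeline

# Assumption: an extractive question-answering head on TensorFlow weights.
# The task is not stated anywhere in this card.
qa = pipeline(
    "question-answering",
    model="Labira/LabiraPJOK_2x_50",
    framework="tf",
)

# Placeholder inputs, not taken from the (unknown) training data.
# Question: "What is physical fitness?"
result = qa(
    question="Apa yang dimaksud dengan kebugaran jasmani?",
    context=(
        "Kebugaran jasmani adalah kemampuan tubuh untuk melakukan "
        "aktivitas sehari-hari tanpa mengalami kelelahan yang berarti."
    ),
)
print(result["answer"], result["score"])
```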

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
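
For readability, the serialized optimizer above can be rebuilt in Keras as follows. This is a minimal sketch of the same configuration: `PolynomialDecay` with `power=1.0` is a linear ramp of the learning rate from 2e-5 to 0 over 250 steps.

```python
import tensorflow as tf

# Linear decay (power=1.0, cycle=False) from 2e-5 to 0.0 over 250 steps,
# matching the 'learning_rate' entry in the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=250,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the remaining hyperparameters from the config above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```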

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4264     | 3.9800          | 0     |
| 3.0757     | 3.5083          | 1     |
| 2.2362     | 3.1869          | 2     |
| 1.5246     | 2.7953          | 3     |
| 0.9065     | 2.8214          | 4     |
| 0.7330     | 3.3041          | 5     |
| 0.6050     | 3.4187          | 6     |
| 0.5238     | 3.4963          | 7     |
| 0.3471     | 3.4544          | 8     |
| 0.2836     | 3.1970          | 9     |
| 0.4074     | 3.1324          | 10    |
| 0.1832     | 3.2997          | 11    |
| 0.1899     | 3.5169          | 12    |
| 0.0939     | 3.5228          | 13    |
| 0.1638     | 3.3909          | 14    |
| 0.1055     | 3.4798          | 15    |
| 0.0827     | 3.6602          | 16    |
| 0.1070     | 3.7096          | 17    |
| 0.0751     | 3.7451          | 18    |
| 0.0449     | 3.7821          | 19    |
| 0.0299     | 3.8203          | 20    |
| 0.0505     | 3.8744          | 21    |
| 0.0247     | 3.9163          | 22    |
| 0.0534     | 3.9760          | 23    |
| 0.0442     | 4.0388          | 24    |
| 0.0211     | 4.0753          | 25    |
| 0.0216     | 4.0966          | 26    |
| 0.0219     | 4.1131          | 27    |
| 0.0234     | 4.1117          | 28    |
| 0.0255     | 4.1391          | 29    |
| 0.0199     | 4.1682          | 30    |
| 0.0196     | 4.1973          | 31    |
| 0.0317     | 4.2302          | 32    |
| 0.0263     | 4.2538          | 33    |
| 0.0322     | 4.2648          | 34    |
| 0.0171     | 4.2541          | 35    |
| 0.0200     | 4.2429          | 36    |
| 0.0201     | 4.2240          | 37    |
| 0.0331     | 4.1675          | 38    |
| 0.0220     | 4.1519          | 39    |
| 0.0158     | 4.1661          | 40    |
| 0.0131     | 4.1824          | 41    |
| 0.0174     | 4.2002          | 42    |
| 0.0170     | 4.2208          | 43    |
| 0.0131     | 4.2318          | 44    |
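
Note that validation loss reaches its minimum of 2.7953 at epoch 3 and climbs fairly steadily afterwards while train loss keeps falling, a pattern consistent with overfitting; an earlier checkpoint may generalize better than the final one reported above.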

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
