# Labira/LabiraPJOK_2_100_Full
This model is a fine-tuned version of Labira/LabiraPJOK_1_100_Full on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0293
- Validation Loss: 0.1300
- Epoch: 99
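A minimal loading sketch with 🤗 Transformers follows. The card does not document the task head, so the checkpoint is loaded generically with `TFAutoModel`; substitute a task-specific auto class (for example `TFAutoModelForQuestionAnswering`) if you know which head the weights carry.

```python
# Minimal loading sketch. The task head is not documented in this card,
# so the checkpoint is loaded generically; substitute a task-specific
# auto class if you know the intended head.
from transformers import AutoTokenizer, TFAutoModel

model_id = "Labira/LabiraPJOK_2_100_Full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

# "contoh teks" is a placeholder input; the expected input format
# is not documented in this card.
inputs = tokenizer("contoh teks", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)
```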
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
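The optimizer config above is Adam with a polynomial learning-rate decay; since `power` is 1.0 and `cycle` is False, this is a plain linear warm-down from 2e-05 to 0.0 over 600 steps. Below is a sketch reconstructing it in Keras, using only the values from the config dict (the model and data pipeline are out of scope here):

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 and cycle=False is a linear schedule:
# the learning rate falls from 2e-05 to 0.0 over 600 optimizer steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=600,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Remaining Adam settings taken verbatim from the config above.
# (The config also records jit_compile=True and float32 training precision.)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```

Note that 600 decay steps spread over the 100 epochs below works out to six optimizer steps per epoch, which suggests a very small training set.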
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5209 | 3.3699 | 0 |
| 3.0253 | 2.2638 | 1 |
| 2.4401 | 1.5791 | 2 |
| 1.7668 | 1.1757 | 3 |
| 1.1708 | 0.9297 | 4 |
| 0.8116 | 0.9080 | 5 |
| 0.7249 | 0.8814 | 6 |
| 0.5783 | 0.8619 | 7 |
| 0.6304 | 0.8486 | 8 |
| 0.4778 | 0.7509 | 9 |
| 0.2941 | 0.7865 | 10 |
| 0.3170 | 0.7460 | 11 |
| 0.4126 | 0.6175 | 12 |
| 0.3620 | 0.6029 | 13 |
| 0.1818 | 0.6585 | 14 |
| 0.2768 | 0.6480 | 15 |
| 0.2536 | 0.5260 | 16 |
| 0.2123 | 0.4587 | 17 |
| 0.2634 | 0.4301 | 18 |
| 0.1602 | 0.4109 | 19 |
| 0.0932 | 0.4353 | 20 |
| 0.1643 | 0.4815 | 21 |
| 0.1566 | 0.4562 | 22 |
| 0.1502 | 0.4109 | 23 |
| 0.1226 | 0.3920 | 24 |
| 0.1434 | 0.3484 | 25 |
| 0.1087 | 0.3325 | 26 |
| 0.1222 | 0.3458 | 27 |
| 0.1464 | 0.3116 | 28 |
| 0.0992 | 0.3066 | 29 |
| 0.1061 | 0.2891 | 30 |
| 0.1433 | 0.2752 | 31 |
| 0.0631 | 0.2797 | 32 |
| 0.0411 | 0.3270 | 33 |
| 0.1420 | 0.3368 | 34 |
| 0.1089 | 0.3010 | 35 |
| 0.1200 | 0.2545 | 36 |
| 0.0783 | 0.2148 | 37 |
| 0.1737 | 0.2061 | 38 |
| 0.1382 | 0.2004 | 39 |
| 0.0655 | 0.2980 | 40 |
| 0.0930 | 0.2433 | 41 |
| 0.0628 | 0.2099 | 42 |
| 0.0819 | 0.1863 | 43 |
| 0.0670 | 0.2036 | 44 |
| 0.0692 | 0.2208 | 45 |
| 0.0712 | 0.1989 | 46 |
| 0.0552 | 0.1790 | 47 |
| 0.0593 | 0.1699 | 48 |
| 0.1086 | 0.1732 | 49 |
| 0.0655 | 0.1703 | 50 |
| 0.0448 | 0.2029 | 51 |
| 0.0449 | 0.2357 | 52 |
| 0.0486 | 0.2362 | 53 |
| 0.0432 | 0.1734 | 54 |
| 0.0471 | 0.1580 | 55 |
| 0.1355 | 0.1838 | 56 |
| 0.0690 | 0.3843 | 57 |
| 0.1021 | 0.3450 | 58 |
| 0.0422 | 0.1757 | 59 |
| 0.0434 | 0.1444 | 60 |
| 0.0612 | 0.1391 | 61 |
| 0.1042 | 0.1467 | 62 |
| 0.0445 | 0.1664 | 63 |
| 0.0454 | 0.1636 | 64 |
| 0.0485 | 0.1568 | 65 |
| 0.0361 | 0.1518 | 66 |
| 0.0365 | 0.1477 | 67 |
| 0.0444 | 0.1452 | 68 |
| 0.0399 | 0.1430 | 69 |
| 0.0396 | 0.1401 | 70 |
| 0.0133 | 0.1408 | 71 |
| 0.0388 | 0.1452 | 72 |
| 0.0442 | 0.1505 | 73 |
| 0.0394 | 0.1497 | 74 |
| 0.0370 | 0.1474 | 75 |
| 0.0428 | 0.1436 | 76 |
| 0.0378 | 0.1408 | 77 |
| 0.0363 | 0.1413 | 78 |
| 0.0390 | 0.1391 | 79 |
| 0.0456 | 0.1396 | 80 |
| 0.0405 | 0.1390 | 81 |
| 0.0316 | 0.1379 | 82 |
| 0.0366 | 0.1389 | 83 |
| 0.0339 | 0.1389 | 84 |
| 0.0380 | 0.1374 | 85 |
| 0.0406 | 0.1369 | 86 |
| 0.0306 | 0.1352 | 87 |
| 0.0319 | 0.1332 | 88 |
| 0.0383 | 0.1322 | 89 |
| 0.0356 | 0.1301 | 90 |
| 0.0376 | 0.1295 | 91 |
| 0.0450 | 0.1300 | 92 |
| 0.0336 | 0.1304 | 93 |
| 0.0286 | 0.1306 | 94 |
| 0.0277 | 0.1304 | 95 |
| 0.0290 | 0.1302 | 96 |
| 0.0142 | 0.1299 | 97 |
| 0.0334 | 0.1300 | 98 |
| 0.0293 | 0.1300 | 99 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.19.1