---
library_name: transformers
license: mit
base_model: Labira/LabiraPJOK_2_100_Full
tags:
- generated_from_keras_callback
model-index:
- name: Labira/LabiraPJOK_3_100_Full
results: []
---
# Labira/LabiraPJOK_3_100_Full
This model is a fine-tuned version of [Labira/LabiraPJOK_2_100_Full](https://huggingface.co/Labira/LabiraPJOK_2_100_Full) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.0019
- Validation Loss: 0.0005
- Epoch: 98
## Model description
More information needed
## Intended uses & limitations
More information needed
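
Pending fuller documentation, below is a minimal loading sketch using the Transformers library. The task head is not stated in this card, so the generic `TFAutoModel` class is used as an assumption; if the checkpoint carries a task-specific head (e.g. question answering), the matching Auto class such as `TFAutoModelForQuestionAnswering` should be used instead, based on the repository's `config.json`.

```python
from transformers import AutoTokenizer, TFAutoModel

# Assumption: generic loading only; swap TFAutoModel for the task-specific
# Auto class matching the head recorded in this repository's config.json.
tokenizer = AutoTokenizer.from_pretrained("Labira/LabiraPJOK_3_100_Full")
model = TFAutoModel.from_pretrained("Labira/LabiraPJOK_3_100_Full")

# Placeholder input text; not taken from the model card.
inputs = tokenizer("contoh teks masukan", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```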
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False; no weight decay, gradient clipping, or EMA; jit_compile: True)
- learning rate schedule: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 1100, end_learning_rate: 0.0, power: 1.0, cycle: False)
- training_precision: float32
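
For reference, this corresponds roughly to the following Keras setup (a sketch reconstructed from the logged configuration above, not the original training script):

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-05 to 0.0 over 1,100 steps, per the logged config.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1100,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the logged betas/epsilon. jit_compile=True was also logged, but
# where that flag is set (optimizer vs. model.compile) depends on the Keras version.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```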
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7614 | 1.1522 | 0 |
| 1.5531 | 0.5524 | 1 |
| 1.0482 | 0.2232 | 2 |
| 0.5443 | 0.0847 | 3 |
| 0.5227 | 0.0529 | 4 |
| 0.2873 | 0.0412 | 5 |
| 0.2568 | 0.0330 | 6 |
| 0.1310 | 0.0190 | 7 |
| 0.1108 | 0.0067 | 8 |
| 0.1252 | 0.0117 | 9 |
| 0.0740 | 0.0071 | 10 |
| 0.0507 | 0.0059 | 11 |
| 0.0790 | 0.0058 | 12 |
| 0.0282 | 0.0036 | 13 |
| 0.0562 | 0.0070 | 14 |
| 0.0850 | 0.0047 | 15 |
| 0.0715 | 0.0176 | 16 |
| 0.0724 | 0.0077 | 17 |
| 0.0361 | 0.0024 | 18 |
| 0.0266 | 0.0029 | 19 |
| 0.0207 | 0.0026 | 20 |
| 0.0158 | 0.0023 | 21 |
| 0.0086 | 0.0016 | 22 |
| 0.0214 | 0.0093 | 23 |
| 0.0327 | 0.0063 | 24 |
| 0.0102 | 0.0016 | 25 |
| 0.0072 | 0.0012 | 26 |
| 0.0273 | 0.0024 | 27 |
| 0.0185 | 0.0034 | 28 |
| 0.0091 | 0.0018 | 29 |
| 0.0144 | 0.0021 | 30 |
| 0.0107 | 0.0032 | 31 |
| 0.0632 | 0.0037 | 32 |
| 0.0149 | 0.0034 | 33 |
| 0.0151 | 0.0103 | 34 |
| 0.0195 | 0.0081 | 35 |
| 0.0145 | 0.0023 | 36 |
| 0.0150 | 0.0012 | 37 |
| 0.0126 | 0.0018 | 38 |
| 0.0068 | 0.0017 | 39 |
| 0.0057 | 0.0014 | 40 |
| 0.0075 | 0.0015 | 41 |
| 0.0035 | 0.0015 | 42 |
| 0.0059 | 0.0013 | 43 |
| 0.0040 | 0.0010 | 44 |
| 0.0036 | 0.0009 | 45 |
| 0.0040 | 0.0011 | 46 |
| 0.0058 | 0.0020 | 47 |
| 0.0801 | 0.0013 | 48 |
| 0.0062 | 0.0014 | 49 |
| 0.0049 | 0.0011 | 50 |
| 0.0057 | 0.0012 | 51 |
| 0.0023 | 0.0011 | 52 |
| 0.0047 | 0.0007 | 53 |
| 0.0041 | 0.0006 | 54 |
| 0.0056 | 0.0012 | 55 |
| 0.0035 | 0.0016 | 56 |
| 0.0042 | 0.0011 | 57 |
| 0.0029 | 0.0006 | 58 |
| 0.0025 | 0.0004 | 59 |
| 0.0229 | 0.0085 | 60 |
| 0.0057 | 0.0075 | 61 |
| 0.0038 | 0.0050 | 62 |
| 0.0047 | 0.0014 | 63 |
| 0.0024 | 0.0006 | 64 |
| 0.0021 | 0.0005 | 65 |
| 0.0480 | 0.0008 | 66 |
| 0.0041 | 0.0010 | 67 |
| 0.0038 | 0.0010 | 68 |
| 0.0032 | 0.0010 | 69 |
| 0.0037 | 0.0009 | 70 |
| 0.0027 | 0.0007 | 71 |
| 0.0041 | 0.0007 | 72 |
| 0.0039 | 0.0006 | 73 |
| 0.0024 | 0.0007 | 74 |
| 0.0020 | 0.0007 | 75 |
| 0.0026 | 0.0007 | 76 |
| 0.0058 | 0.0008 | 77 |
| 0.0025 | 0.0007 | 78 |
| 0.0021 | 0.0006 | 79 |
| 0.0028 | 0.0006 | 80 |
| 0.0024 | 0.0005 | 81 |
| 0.0015 | 0.0005 | 82 |
| 0.0100 | 0.0005 | 83 |
| 0.0018 | 0.0006 | 84 |
| 0.0039 | 0.0007 | 85 |
| 0.0019 | 0.0007 | 86 |
| 0.0022 | 0.0007 | 87 |
| 0.0021 | 0.0007 | 88 |
| 0.0025 | 0.0007 | 89 |
| 0.0014 | 0.0006 | 90 |
| 0.0014 | 0.0006 | 91 |
| 0.0038 | 0.0006 | 92 |
| 0.0024 | 0.0006 | 93 |
| 0.0017 | 0.0005 | 94 |
| 0.0020 | 0.0005 | 95 |
| 0.0030 | 0.0005 | 96 |
| 0.0032 | 0.0005 | 97 |
| 0.0019 | 0.0005 | 98 |
### Framework versions
- Transformers 4.46.2
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.20.3
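
A matching environment can presumably be reproduced by pinning the versions above in a requirements file, e.g.:

```
transformers==4.46.2
tensorflow==2.17.0
datasets==3.1.0
tokenizers==0.20.3
```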