---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold5
  results: []
---

# mdeberta-semeval25_narratives09_fold5

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0227
- Precision Samples: 0.3630
- Recall Samples: 0.7663
- F1 Samples: 0.4583
- Precision Macro: 0.6929
- Recall Macro: 0.5586
- F1 Macro: 0.3787
- Precision Micro: 0.3170
- Recall Micro: 0.7293
- F1 Micro: 0.4419
- Precision Weighted: 0.4618
- Recall Weighted: 0.7293
- F1 Weighted: 0.4006

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.5606        | 1.0   | 19   | 5.1743          | 1.0               | 0.0            | 0.0        | 1.0             | 0.1429       | 0.1429   | 1.0             | 0.0          | 0.0      | 1.0                | 0.0             | 0.0         |
| 4.8513        | 2.0   | 38   | 4.9270          | 0.2759            | 0.2532         | 0.2276     | 0.9372          | 0.2238       | 0.1869   | 0.2865          | 0.2068       | 0.2402   | 0.8398             | 0.2068          | 0.1101      |
| 5.1086        | 3.0   | 57   | 4.6316          | 0.3810            | 0.4853         | 0.3601     | 0.8763          | 0.3242       | 0.2396   | 0.3420          | 0.4474       | 0.3876   | 0.6961             | 0.4474          | 0.2403      |
| 4.5134        | 4.0   | 76   | 4.4138          | 0.3413            | 0.6266         | 0.4146     | 0.7828          | 0.4166       | 0.2917   | 0.3196          | 0.5827       | 0.4128   | 0.5521             | 0.5827          | 0.3108      |
| 4.3876        | 5.0   | 95   | 4.2907          | 0.3599            | 0.6644         | 0.4357     | 0.7174          | 0.4444       | 0.3230   | 0.3259          | 0.6015       | 0.4227   | 0.4753             | 0.6015          | 0.3464      |
| 4.084         | 6.0   | 114  | 4.1465          | 0.3372            | 0.7364         | 0.4312     | 0.7116          | 0.5145       | 0.3409   | 0.2987          | 0.7030       | 0.4193   | 0.4704             | 0.7030          | 0.3684      |
| 3.9969        | 7.0   | 133  | 4.0975          | 0.3583            | 0.7479         | 0.4546     | 0.7007          | 0.5368       | 0.3753   | 0.3198          | 0.7105       | 0.4411   | 0.4677             | 0.7105          | 0.3978      |
| 3.9677        | 8.0   | 152  | 4.0623          | 0.3605            | 0.7543         | 0.4564     | 0.6912          | 0.5472       | 0.3758   | 0.3220          | 0.7105       | 0.4431   | 0.4631             | 0.7105          | 0.3995      |
| 4.0107        | 9.0   | 171  | 4.0401          | 0.3565            | 0.7571         | 0.4538     | 0.6965          | 0.5523       | 0.3805   | 0.3188          | 0.7143       | 0.4408   | 0.4649             | 0.7143          | 0.4006      |
| 3.9591        | 10.0  | 190  | 4.0227          | 0.3630            | 0.7663         | 0.4583     | 0.6929          | 0.5586       | 0.3787   | 0.3170          | 0.7293       | 0.4419   | 0.4618             | 0.7293          | 0.4006      |

### Framework versions

- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
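
## How to use

Since the "Intended uses & limitations" section is still empty, here is a minimal inference sketch. It assumes the checkpoint is a multi-label sequence-classification head (the samples- and micro-averaged metrics above point to a multi-label setup); the repository id and the 0.5 decision threshold are assumptions, not values documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repository id -- replace with the actual checkpoint location.
model_id = "your-org/mdeberta-semeval25_narratives09_fold5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example passage to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: an independent sigmoid per label,
# with an assumed 0.5 threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [
    model.config.id2label[i]
    for i, p in enumerate(probs.tolist())
    if p > 0.5
]
print(predicted)
```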
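
## Reproducing the training configuration

The hyperparameters listed above map directly onto `TrainingArguments`. The sketch below is a reconstruction under stated assumptions, not the exact training script: `output_dir` is taken from the model name, the epoch-level evaluation schedule is inferred from the per-epoch validation rows, and the label count, dataset, and metric computation (which this card does not specify) would still need to be supplied to a `Trainer`.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed under "Training hyperparameters".
args = TrainingArguments(
    output_dir="mdeberta-semeval25_narratives09_fold5",  # assumed from the model name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    optim="adamw_torch",    # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",  # assumed: the table reports validation metrics each epoch
)
```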