---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_CC_fold2
  results: []
---

# deberta-semeval25_EN08_CC_fold2

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8873
- Precision Samples: 0.1892
- Recall Samples: 0.5700
- F1 Samples: 0.2529
- Precision Macro: 0.8502
- Recall Macro: 0.5481
- F1 Macro: 0.4558
- Precision Micro: 0.2086
- Recall Micro: 0.4203
- F1 Micro: 0.2788
- Precision Weighted: 0.6329
- Recall Weighted: 0.4203
- F1 Weighted: 0.2177

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the reproduction sketch after the framework versions below):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 9.3158        | 1.0   | 15   | 8.1700          | 0.8333            | 0.1278         | 0.1389     | 0.9878          | 0.4024       | 0.4024   | 0.5             | 0.0725       | 0.1266   | 0.9275             | 0.0725          | 0.0725      |
| 9.1274        | 2.0   | 30   | 7.7022          | 0.2500            | 0.4444         | 0.3033     | 0.9462          | 0.4530       | 0.4198   | 0.2727          | 0.3043       | 0.2877   | 0.7473             | 0.3043          | 0.1448      |
| 8.3785        | 3.0   | 45   | 7.4694          | 0.2078            | 0.4772         | 0.2741     | 0.9072          | 0.4890       | 0.4315   | 0.2232          | 0.3623       | 0.2762   | 0.6906             | 0.3623          | 0.1636      |
| 8.317         | 4.0   | 60   | 7.2468          | 0.4567            | 0.4444         | 0.2226     | 0.9037          | 0.4617       | 0.4269   | 0.2333          | 0.3043       | 0.2642   | 0.7035             | 0.3043          | 0.1741      |
| 6.5284        | 5.0   | 75   | 7.2364          | 0.4400            | 0.4333         | 0.2026     | 0.9047          | 0.4592       | 0.4274   | 0.2105          | 0.2899       | 0.2439   | 0.7075             | 0.2899          | 0.1751      |
| 7.3718        | 6.0   | 90   | 7.0546          | 0.2717            | 0.4756         | 0.2479     | 0.8863          | 0.5156       | 0.4387   | 0.2252          | 0.3623       | 0.2778   | 0.6995             | 0.3623          | 0.1893      |
| 7.9602        | 7.0   | 105  | 6.9761          | 0.2278            | 0.5172         | 0.2535     | 0.8724          | 0.5272       | 0.4464   | 0.2109          | 0.3913       | 0.2741   | 0.6745             | 0.3913          | 0.2039      |
| 7.905         | 8.0   | 120  | 6.9036          | 0.1914            | 0.5700         | 0.2561     | 0.8506          | 0.5481       | 0.4564   | 0.2117          | 0.4203       | 0.2816   | 0.6331             | 0.4203          | 0.2181      |
| 7.4649        | 9.0   | 135  | 6.8797          | 0.1892            | 0.5811         | 0.2598     | 0.8503          | 0.5505       | 0.4564   | 0.2128          | 0.4348       | 0.2857   | 0.6328             | 0.4348          | 0.2208      |
| 7.3108        | 10.0  | 150  | 6.8873          | 0.1892            | 0.5700         | 0.2529     | 0.8502          | 0.5481       | 0.4558   | 0.2086          | 0.4203       | 0.2788   | 0.6329             | 0.4203          | 0.2177      |

### Framework versions

- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
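
### Reproduction sketch

The sample-, macro-, micro-, and weighted-averaged precision/recall/F1 metrics reported above are typical of a multi-label classification setup. Below is a minimal, unofficial sketch of how the listed hyperparameters map onto the Transformers `Trainer` API; the label count, the `problem_type`, and the dataset variables are assumptions, since the card does not document the training data.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholders: the actual dataset and label set are not documented in this card.
NUM_LABELS = 8       # assumption; the real label count is not stated
train_dataset = ...  # tokenized training split (not released)
eval_dataset = ...   # tokenized evaluation split (not released)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # assumption inferred from the metric names
)

# Hyperparameters copied from the card. AdamW (torch) with betas=(0.9, 0.999)
# and epsilon=1e-08 is the Trainer default optimizer, so no extra arguments are needed.
training_args = TrainingArguments(
    output_dir="deberta-semeval25_EN08_CC_fold2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # the card reports validation metrics once per epoch
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Note that the results table shows 15 optimizer steps per epoch over 10 epochs (150 steps total), which is consistent with per-epoch evaluation as configured above.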