---
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
- f1
model-index:
- name: scenario-MDBT-TCR_data-AmazonScience_massive_all_1_1
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: massive
      type: massive
      config: all_1.1
      split: validation
      args: all_1.1
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8643440917174317
    - name: F1
      type: f1
      value: 0.8368032657773605
---

# scenario-MDBT-TCR_data-AmazonScience_massive_all_1_1

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0026
- Accuracy: 0.8643
- F1: 0.8368

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 66
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5131        | 0.27  | 5000  | 0.6674          | 0.8368   | 0.7780 |
| 0.3715        | 0.53  | 10000 | 0.6554          | 0.8527   | 0.8145 |
| 0.3066        | 0.8   | 15000 | 0.6924          | 0.8471   | 0.8103 |
| 0.2194        | 1.07  | 20000 | 0.7348          | 0.8548   | 0.8238 |
| 0.2112        | 1.34  | 25000 | 0.7297          | 0.8581   | 0.8288 |
| 0.1907        | 1.6   | 30000 | 0.7308          | 0.8558   | 0.8288 |
| 0.1816        | 1.87  | 35000 | 0.7785          | 0.8565   | 0.8281 |
| 0.1297        | 2.14  | 40000 | 0.8493          | 0.8567   | 0.8278 |
| 0.127         | 2.41  | 45000 | 0.8757          | 0.8576   | 0.8310 |
| 0.1148        | 2.67  | 50000 | 0.8581          | 0.8577   | 0.8300 |
| 0.1287        | 2.94  | 55000 | 0.8479          | 0.8597   | 0.8341 |
| 0.0875        | 3.21  | 60000 | 0.8763          | 0.8656   | 0.8392 |
| 0.0832        | 3.47  | 65000 | 0.9379          | 0.8620   | 0.8341 |
| 0.0837        | 3.74  | 70000 | 0.9044          | 0.8625   | 0.8339 |
| 0.0617        | 4.01  | 75000 | 0.9840          | 0.8618   | 0.8352 |
| 0.0524        | 4.28  | 80000 | 0.9955          | 0.8639   | 0.8385 |
| 0.0496        | 4.54  | 85000 | 1.0026          | 0.8643   | 0.8368 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
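
### Training configuration (sketch)

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a minimal sketch, not the original training script: the `output_dir` is hypothetical, the evaluation cadence (every 5000 steps) is read off the results table, and dataset loading, tokenization, and the `Trainer` call itself are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-MDBT-TCR_data-AmazonScience_massive_all_1_1",  # hypothetical
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=66,
    # Adam betas/epsilon from the hyperparameter list (also the library defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Inferred from the 5000-step evaluation interval in the results table
    evaluation_strategy="steps",
    eval_steps=5000,
)
```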
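
## How to use (sketch)

The checkpoint can be loaded with the standard `transformers` sequence-classification API. A minimal sketch, assuming the model is published on the Hub under this card's name; adjust the repo id to the actual namespace, and note that the mDeBERTa tokenizer requires `sentencepiece` to be installed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id taken from this card's title; replace with the
# real "<namespace>/<name>" path of the published checkpoint.
repo_id = "scenario-MDBT-TCR_data-AmazonScience_massive_all_1_1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# The base model and the all_1.1 dataset config are multilingual,
# so non-English inputs are valid as well.
inputs = tokenizer("wake me up at nine am on friday", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```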