---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base_finetuned_bluegennx_run2.19_5e
  results: []
---
# deberta-v3-base_finetuned_bluegennx_run2.19_5e

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metric list):
- Loss: 0.0196
- Overall Precision: 0.9773
- Overall Recall: 0.9870
- Overall F1: 0.9822
- Overall Accuracy: 0.9957
- Aadhar Card F1: 0.9908
- Age F1: 0.9708
- City F1: 0.9879
- Country F1: 0.9825
- Creditcardcvv F1: 0.9915
- Creditcardnumber F1: 0.9428
- Date F1: 0.9626
- Dateofbirth F1: 0.9056
- Email F1: 0.9928
- Expirydate F1: 0.9898
- Organization F1: 0.9925
- Pan Card F1: 0.9866
- Person F1: 0.9887
- Phonenumber F1: 0.9880
- Pincode F1: 0.9897
- Secondaryaddress F1: 0.9891
- State F1: 0.9912
- Time F1: 0.9831
- Url F1: 0.9955
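
Since the model is a token classifier for PII-style entities, it can be loaded with the Transformers `pipeline` API. The snippet below is a minimal sketch rather than part of the original code: the model id is a placeholder for wherever this checkpoint is hosted, and the entity labels mentioned in the comments are inferred from the metric names above.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a token-classification (NER) pipeline.
# Replace the model id with the actual Hub repo or local path of this checkpoint.
pii_detector = pipeline(
    "token-classification",
    model="deberta-v3-base_finetuned_bluegennx_run2.19_5e",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "John Doe's email is john.doe@example.com and his phone number is 9876543210."
for entity in pii_detector(text):
    # Each prediction carries an entity group (e.g. PERSON, EMAIL, PHONENUMBER),
    # a confidence score, and the matched text span.
    print(entity["entity_group"], round(entity["score"], 3), entity["word"])
```
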
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
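
As a rough reconstruction (not the original training script), these values map onto `transformers.TrainingArguments` as sketched below; the output directory and the per-epoch evaluation cadence are assumptions.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; paths and evaluation cadence are assumed.
training_args = TrainingArguments(
    output_dir="deberta-v3-base_finetuned_bluegennx_run2.19_5e",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.2,
    adam_beta1=0.9,      # Adam settings are the defaults,
    adam_beta2=0.999,    # matching the optimizer line above
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table shows one evaluation per epoch
)
```
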
### Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Aadhar Card F1 | Age F1 | City F1 | Country F1 | Creditcardcvv F1 | Creditcardnumber F1 | Date F1 | Dateofbirth F1 | Email F1 | Expirydate F1 | Organization F1 | Pan Card F1 | Person F1 | Phonenumber F1 | Pincode F1 | Secondaryaddress F1 | State F1 | Time F1 | Url F1 |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 0.0356 | 1.0 | 15321 | 0.0383 | 0.9535 | 0.9675 | 0.9604 | 0.9915 | 0.9542 | 0.9221 | 0.9617 | 0.9816 | 0.9243 | 0.9195 | 0.9235 | 0.8262 | 0.9826 | 0.9477 | 0.9882 | 0.9529 | 0.9785 | 0.9684 | 0.9187 | 0.9734 | 0.9665 | 0.9723 | 0.9888 |
| 0.0231 | 2.0 | 30642 | 0.0265 | 0.9607 | 0.9814 | 0.9709 | 0.9937 | 0.9586 | 0.9437 | 0.9808 | 0.9821 | 0.9799 | 0.9006 | 0.9488 | 0.8788 | 0.9864 | 0.9768 | 0.9843 | 0.9837 | 0.9824 | 0.9809 | 0.9840 | 0.9820 | 0.9906 | 0.9749 | 0.9784 |
| 0.0182 | 3.0 | 45963 | 0.0219 | 0.9726 | 0.9854 | 0.9789 | 0.9951 | 0.9842 | 0.9631 | 0.9856 | 0.9843 | 0.9854 | 0.9424 | 0.9553 | 0.8962 | 0.9890 | 0.9878 | 0.9921 | 0.9869 | 0.9859 | 0.9815 | 0.9867 | 0.9884 | 0.9917 | 0.9767 | 0.9962 |
| 0.0106 | 4.0 | 61284 | 0.0196 | 0.9773 | 0.9870 | 0.9822 | 0.9957 | 0.9908 | 0.9708 | 0.9879 | 0.9825 | 0.9915 | 0.9428 | 0.9626 | 0.9056 | 0.9928 | 0.9898 | 0.9925 | 0.9866 | 0.9887 | 0.9880 | 0.9897 | 0.9891 | 0.9912 | 0.9831 | 0.9955 |
| 0.0044 | 5.0 | 76605 | 0.0214 | 0.9787 | 0.9876 | 0.9831 | 0.9959 | 0.9934 | 0.9710 | 0.9885 | 0.9846 | 0.9915 | 0.9453 | 0.9646 | 0.9125 | 0.9931 | 0.9898 | 0.9937 | 0.9875 | 0.9886 | 0.9893 | 0.9907 | 0.9903 | 0.9924 | 0.9837 | 0.9958 |
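
The overall and per-entity scores above are the shape of output produced by seqeval-style evaluation of BIO-tagged predictions. Below is a minimal sketch of a `compute_metrics` function that would yield these fields, assuming `-100` marks ignored positions and a hypothetical `label_list` (the real tag set comes from the unspecified training dataset).

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Hypothetical label list; the real one comes from the training dataset.
label_list = ["O", "B-PERSON", "I-PERSON", "B-EMAIL", "I-EMAIL"]  # ... plus the other PII tags

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop positions labelled -100 (special tokens / sub-word continuations).
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    # seqeval returns overall_precision/recall/f1/accuracy plus one dict per entity type,
    # which is where columns such as "Email F1" in the table above come from.
    return {
        "overall_precision": results["overall_precision"],
        "overall_recall": results["overall_recall"],
        "overall_f1": results["overall_f1"],
        "overall_accuracy": results["overall_accuracy"],
        **{f"{k}_f1": v["f1"] for k, v in results.items() if isinstance(v, dict)},
    }
```
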
### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2