# distilbert-base-uncased_legal_ner_finetuned
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2765
- Law Precision: 0.7983
- Law Recall: 0.8716
- Law F1: 0.8333
- Law Number: 109
- Violated by Precision: 0.7937
- Violated by Recall: 0.7042
- Violated by F1: 0.7463
- Violated by Number: 71
- Violated on Precision: 0.3934
- Violated on Recall: 0.3429
- Violated on F1: 0.3664
- Violated on Number: 70
- Violation Precision: 0.5657
- Violation Recall: 0.6588
- Violation F1: 0.6087
- Violation Number: 425
- Overall Precision: 0.6084
- Overall Recall: 0.6652
- Overall F1: 0.6355
- Overall Accuracy: 0.9409
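The per-entity figures above (Law, Violated by, Violated on, Violation) are entity-level precision/recall/F1 with support counts ("Number"), plus token-level overall accuracy, in the style produced by `seqeval` in the Hugging Face token-classification examples. A minimal sketch of computing such metrics, assuming IOB2-tagged sequences (the label names and example sequences below are illustrative, not taken from this model's dataset):

```python
# Minimal seqeval sketch for entity-level NER metrics (illustrative labels).
from seqeval.metrics import classification_report, accuracy_score

references = [
    ["O", "B-LAW", "I-LAW", "O", "B-VIOLATION", "I-VIOLATION", "O"],
]
predictions = [
    ["O", "B-LAW", "I-LAW", "O", "B-VIOLATION", "O", "O"],
]

# Per-entity precision, recall, F1, and support (the "Number" columns above).
print(classification_report(references, predictions))
# Token-level accuracy, comparable to "Overall Accuracy".
print(accuracy_score(references, predictions))
```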
## Model description
More information needed
## Intended uses & limitations
More information needed
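Usage notes are not provided, but the checkpoint can presumably be loaded through the standard `transformers` token-classification pipeline. A minimal inference sketch (the entity labels returned depend on the model's `id2label` mapping, which is not documented in this card, and the example sentence is made up):

```python
# Minimal inference sketch using the transformers token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="khalidrajan/distilbert-base-uncased_legal_ner_finetuned",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)

text = "The company was fined for violating Article 5 of the GDPR."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```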
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
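These values map one-to-one onto the standard `transformers` `TrainingArguments` fields. A hedged sketch of an equivalent configuration (the `output_dir` and anything not listed above are assumptions, not taken from this card):

```python
# TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased_legal_ner_finetuned",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
)
```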
### Training results
Training Loss | Epoch | Step | Validation Loss | Law Precision | Law Recall | Law F1 | Law Number | Violated by Precision | Violated by Recall | Violated by F1 | Violated by Number | Violated on Precision | Violated on Recall | Violated on F1 | Violated on Number | Violation Precision | Violation Recall | Violation F1 | Violation Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 85 | 1.1323 | 0.0 | 0.0 | 0.0 | 109 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 70 | 0.0 | 0.0 | 0.0 | 425 | 0.0 | 0.0 | 0.0 | 0.7656 |
No log | 2.0 | 170 | 0.4593 | 0.0 | 0.0 | 0.0 | 109 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 70 | 0.1391 | 0.1741 | 0.1546 | 425 | 0.1391 | 0.1096 | 0.1226 | 0.8706 |
No log | 3.0 | 255 | 0.3529 | 0.1923 | 0.0459 | 0.0741 | 109 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 70 | 0.2088 | 0.2 | 0.2043 | 425 | 0.2079 | 0.1333 | 0.1625 | 0.8943 |
No log | 4.0 | 340 | 0.2708 | 0.1176 | 0.0734 | 0.0904 | 109 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 70 | 0.4321 | 0.4941 | 0.4610 | 425 | 0.3928 | 0.3230 | 0.3545 | 0.9134 |
No log | 5.0 | 425 | 0.2579 | 0.8295 | 0.6697 | 0.7411 | 109 | 0.6667 | 0.3099 | 0.4231 | 71 | 0.3095 | 0.1857 | 0.2321 | 70 | 0.4197 | 0.4612 | 0.4395 | 425 | 0.4825 | 0.4504 | 0.4659 | 0.9153 |
0.5875 | 6.0 | 510 | 0.2516 | 0.8091 | 0.8165 | 0.8128 | 109 | 0.6 | 0.5070 | 0.5496 | 71 | 0.3542 | 0.2429 | 0.2881 | 70 | 0.5458 | 0.6588 | 0.5970 | 425 | 0.5773 | 0.6252 | 0.6003 | 0.9342 |
0.5875 | 7.0 | 595 | 0.2355 | 0.7946 | 0.8165 | 0.8054 | 109 | 0.7167 | 0.6056 | 0.6565 | 71 | 0.3438 | 0.3143 | 0.3284 | 70 | 0.5455 | 0.6353 | 0.5870 | 425 | 0.5800 | 0.6281 | 0.6031 | 0.9382 |
0.5875 | 8.0 | 680 | 0.2659 | 0.8246 | 0.8624 | 0.8430 | 109 | 0.7286 | 0.7183 | 0.7234 | 71 | 0.3243 | 0.3429 | 0.3333 | 70 | 0.5491 | 0.6706 | 0.6038 | 425 | 0.5843 | 0.6726 | 0.6253 | 0.9398 |
0.5875 | 9.0 | 765 | 0.2839 | 0.752 | 0.8624 | 0.8034 | 109 | 0.7391 | 0.7183 | 0.7286 | 71 | 0.3421 | 0.3714 | 0.3562 | 70 | 0.5524 | 0.6824 | 0.6105 | 425 | 0.5799 | 0.6830 | 0.6272 | 0.9394 |
0.5875 | 10.0 | 850 | 0.2765 | 0.7983 | 0.8716 | 0.8333 | 109 | 0.7937 | 0.7042 | 0.7463 | 71 | 0.3934 | 0.3429 | 0.3664 | 70 | 0.5657 | 0.6588 | 0.6087 | 425 | 0.6084 | 0.6652 | 0.6355 | 0.9409 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1