---
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd
    results: []
language:
  - en
metrics:
  - seqeval
pipeline_tag: token-classification
---

# bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [DFKI-SLT/few-nerd](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset.

It achieves the following results on the evaluation set (entity-level metrics computed with seqeval; a short metric sketch follows the list):

- Loss: 0.1312
- Person
  - Precision: 0.8860048426150121
  - Recall: 0.9401849948612538
  - F1: 0.912291199202194
  - Number: 29190
- Location
  - Precision: 0.8686381704207632
  - Recall: 0.8152889539136796
  - F1: 0.841118472477534
  - Number: 95690
- Organization
  - Precision: 0.7919078915181266
  - Recall: 0.7449641777764141
  - F1: 0.7677190874452579
  - Number: 65183
- Product
  - Precision: 0.7065968977761166
  - Recall: 0.8295304958315051
  - F1: 0.7631446160056513
  - Number: 9116
- Art
  - Precision: 0.8407258064516129
  - Recall: 0.8614333386302241
  - F1: 0.8509536143159878
  - Number: 6293
- Other
  - Precision: 0.7303024586555996
  - Recall: 0.8314124132006586
  - F1: 0.7775843599357258
  - Number: 13969
- Building
  - Precision: 0.5162234691388143
  - Recall: 0.3648904983617865
  - F1: 0.4275611234592847
  - Number: 5799
- Event
  - Precision: 0.605920892987139
  - Recall: 0.35144264602392683
  - F1: 0.44486014608943525
  - Number: 7105
- Overall
  - Precision: 0.8203
  - Recall: 0.7886
  - F1: 0.8041
  - Accuracy: 0.9498
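
The per-type scores are entity-level precision/recall/F1 as computed by seqeval, and "Number" is the count of gold entities of that type in the evaluation data. A minimal sketch of how the metric behaves, with illustrative tags rather than the model's exact label set:

```python
# pip install seqeval
from seqeval.metrics import classification_report

# Illustrative IOB2-tagged sequences; the real scores above come from the
# model's predictions on the few-nerd evaluation split.
y_true = [["B-person", "I-person", "O", "B-location"]]
y_pred = [["B-person", "I-person", "O", "O"]]

# seqeval scores whole entity spans rather than single tokens: a predicted
# span counts as correct only if its type and its boundaries both match.
print(classification_report(y_true, y_pred))
```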

## Model description

For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/tree/main/Token%20Classification/Monolingual/DFKI%20SLT%20few%20NERd
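
Since the pipeline tag is token-classification, the model can be tried with the transformers pipeline API. A minimal inference sketch; the model id is assumed from this repository's path on the Hub, and the example sentence is illustrative:

```python
from transformers import pipeline

# Model id assumed from this repo; aggregation_strategy="simple" merges
# word pieces back into whole entity spans with a single label each.
ner = pipeline(
    "token-classification",
    model="DunnBC22/bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd",
    aggregation_strategy="simple",
)

print(ner("Ludwig van Beethoven was born in Bonn."))
```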

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

## Training and evaluation data

Dataset Source: https://huggingface.co/datasets/DFKI-SLT/few-nerd
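
The coarse few-nerd annotation uses the eight entity types reported above. A minimal loading sketch with the datasets library; the "supervised" configuration is an assumption (few-nerd also ships "inter" and "intra" few-shot splits):

```python
from datasets import load_dataset

# "supervised" is few-nerd's fully supervised configuration; whether this
# exact config was used for training here is an assumption.
dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")

# Each example carries whitespace-split tokens plus integer NER tag ids.
print(dataset["train"][0]["tokens"])
print(dataset["train"][0]["ner_tags"])
```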

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch mapping these values onto code follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
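
A minimal sketch of how these values map onto transformers' TrainingArguments. Note that total_train_batch_size is derived rather than set directly: 8 per-device × 4 accumulation steps = 32. The output_dir and evaluation strategy below are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-ner-DFKI-SLT_few-NERd",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=8,   # train_batch_size above
    per_device_eval_batch_size=8,    # eval_batch_size above
    seed=42,
    gradient_accumulation_steps=4,   # effective batch: 8 x 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",     # assumed from the per-epoch results table
)
```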

### Training results

| Training Loss | Epoch | Step | Validation Loss | Person Precision | Person Recall | Person F1 | Person Number | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Product Precision | Product Recall | Product F1 | Product Number | Art Precision | Art Recall | Art F1 | Art Number | Other Precision | Other Recall | Other F1 | Other Number | Building Precision | Building Recall | Building F1 | Building Number | Event Precision | Event Recall | Event F1 | Event Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0.1796 | 1.0 | 11293 | 0.1427 | 0.8741 | 0.9272 | 0.8999 | 29190 | 0.8576 | 0.8072 | 0.8316 | 95690 | 0.7699 | 0.7688 | 0.7694 | 65183 | 0.6711 | 0.75 | 0.7084 | 9116 | 0.8347 | 0.8154 | 0.8249 | 6293 | 0.6743 | 0.8195 | 0.7398 | 13969 | 0.4812 | 0.3951 | 0.4339 | 5799 | 0.5998 | 0.3253 | 0.4218 | 7105 | 0.8000 | 0.7852 | 0.7925 | 0.9483 |
| 0.1542 | 2.0 | 22586 | 0.1312 | 0.8860 | 0.9402 | 0.9123 | 29190 | 0.8686 | 0.8153 | 0.8411 | 95690 | 0.7919 | 0.7450 | 0.7677 | 65183 | 0.7066 | 0.8295 | 0.7631 | 9116 | 0.8407 | 0.8614 | 0.8510 | 6293 | 0.7303 | 0.8314 | 0.7776 | 13969 | 0.5162 | 0.3649 | 0.4276 | 5799 | 0.6059 | 0.3514 | 0.4449 | 7105 | 0.8203 | 0.7886 | 0.8041 | 0.9498 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3