---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - conll2003
model-index:
  - name: bert-finetuned-ner
    results: []
---

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [conll2003](https://huggingface.co/datasets/conll2003) dataset. It achieves the following results on the evaluation set:

- Loss: 0.0814

## Model description

This model is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves strong performance on the NER task. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PER), and miscellaneous (MISC).

Specifically, this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
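
Because fine-tuned checkpoints can order their labels differently, the exact tag set this model emits is best read from its config rather than hard-coded. A minimal sketch; the tag listing in the comment is the standard CoNLL-2003 IOB2 scheme, not taken from this card:

```python
from transformers import AutoModelForTokenClassification

# Inspect the label mapping stored in the checkpoint's config
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")
print(model.config.id2label)
# Expected to contain the CoNLL-2003 IOB2 tags, i.e. O plus
# B-/I- variants of PER, ORG, LOC, and MISC
```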

If you'd like a larger model fine-tuned on the same dataset, a bert-large-NER version is also available.

## How to Use

You can use this model with the Transformers `pipeline` for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Hatman/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")

# Token-classification pipeline; returns one prediction per token
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```
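
The raw pipeline output is one prediction per token, so multi-token entities come back split into their B-/I- pieces. If you want whole entity spans instead, `pipeline` accepts an `aggregation_strategy` argument; a sketch:

```python
from transformers import pipeline

# "simple" merges consecutive B-/I- tokens into single entity spans
nlp_grouped = pipeline(
    "ner",
    model="Hatman/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(nlp_grouped("My name is Wolfgang and I live in Berlin"))
# e.g. one PER span for "Wolfgang" and one LOC span for "Berlin"
```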

## Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
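
For reference, a minimal sketch of how these values translate to the `transformers` `Trainer` API; the `output_dir` below is an illustrative assumption, not taken from this card:

```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above;
# output_dir is a hypothetical name for illustration
args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```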

## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0181        | 1.0   | 1756 | 0.1301          |
| 0.0166        | 2.0   | 3512 | 0.0762          |
| 0.0064        | 3.0   | 5268 | 0.0814          |

## Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
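
To approximate this environment, one option is to pin the same versions with pip; a sketch assuming the standard PyPI distributions of the libraries above:

```bash
# Pin the versions listed above; the CUDA-specific torch build
# (1.13.1+cu116) may require the PyTorch extra index URL instead.
pip install transformers==4.26.0 torch==1.13.1 datasets==2.9.0 tokenizers==0.13.2
```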