Fine-tuning the model loses the start and end positions of the words in the predicted output
I have fine-tuned a BERT NER model on my dataset. The base model I am fine-tuning is "dslim/bert-base-NER". I was able to train the model successfully using the following script as a reference: https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb#scrollTo=zPDla1mmZiax
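For context, my preprocessing follows the notebook's approach of labeling only the first wordpiece of each word. Roughly like this (the sentence, words, and word_labels below are illustrative, not my actual data):
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("dslim/bert-base-NER")
# One example sentence, pre-split into words, with one label per word
words = ["this", "is", "a", "Abc", "Corp", "Ltd"]
word_labels = ["O", "O", "O", "B-ORG", "I-ORG", "I-ORG"]
label2id = {"O": 0, "B-ORG": 1, "I-ORG": 2}
encoding = tokenizer(words, is_split_into_words=True, truncation=True)
# Label only the first wordpiece of each word; everything else gets -100
labels = []
previous_word_idx = None
for word_idx in encoding.word_ids():
    if word_idx is None or word_idx == previous_word_idx:
        labels.append(-100)
    else:
        labels.append(label2id[word_labels[word_idx]])
    previous_word_idx = word_idx
encoding["labels"] = labels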
The code that does the prediction:
from transformers import pipeline, BertTokenizer, BertForTokenClassification
tokenizer = BertTokenizer.from_pretrained('dslim/bert-base-NER', return_offsets_mapping=True, is_split_into_words=True)
model = BertForTokenClassification.from_pretrained('dslim/bert-base-NER')
pipe = pipeline(task="ner", model=model.to("cpu"), tokenizer=tokenizer, grouped_entities=True)
pipe("this is a Abc Corp. Ltd")
The prediction from the base model contains the start and end positions of the words in the original text, like:
{'entity_group': 'ORG', 'score': 0.9992545247077942, 'word': 'A', 'start': 10, 'end': 11}
{'entity_group': 'ORG', 'score': 0.998507097363472, 'word': '##bc Corp Ltd', 'start': 11, 'end': 22}
While the prediction from the re-trained model is:
{'entity_group': 'ORG', 'score': 0.747031033039093, 'word': '##7', 'start': None, 'end': None},
{'entity_group': 'ORG', 'score': 0.9055356582005819, 'word': 'Games , Inc', 'start': None, 'end': None}
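My guess, which I have not been able to confirm, is that the pipeline can only fill in start and end when the tokenizer is a fast tokenizer that exposes character offsets; the slow BertTokenizer does not support return_offsets_mapping at all. A quick check of which kind I am loading:
from transformers import AutoTokenizer, BertTokenizer
slow_tok = BertTokenizer.from_pretrained("dslim/bert-base-NER")
fast_tok = AutoTokenizer.from_pretrained("dslim/bert-base-NER", use_fast=True)
print(slow_tok.is_fast, fast_tok.is_fast)  # False True
# If I rebuild the pipeline with fast_tok instead, will start/end come back?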
I am passing the position IDs to the model during training. I looked through the training parameters but could not find a way to pass the start and end positions of the words into the training process, even though I do have the start and end positions of the tokenized words.
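For completeness, this is roughly how I obtain those character positions during preprocessing (a sketch using a fast tokenizer; the sentence is just an example):
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("dslim/bert-base-NER")
enc = tokenizer("this is a Abc Corp. Ltd", return_offsets_mapping=True)
# Each token comes with its (start, end) character span in the input string
for token, (start, end) in zip(enc.tokens(), enc["offset_mapping"]):
    print(token, start, end)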