---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tner/ontonotes5
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: 'Hi! I am Jack. I live in California and I work for Apple.'
example_title: Example 1
- text: 'This book is amazing! I bought it on Amazon for $4.'
example_title: Example 2
base_model: bert-base-cased
model-index:
- name: bert-finetuned-ner-ontonotes
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: ontonotes5
type: ontonotes5
config: ontonotes5
split: train
args: ontonotes5
metrics:
- type: precision
value: 0.8567258883248731
name: Precision
- type: recall
value: 0.8841595180407308
name: Recall
- type: f1
value: 0.8702265476459025
name: F1
- type: accuracy
value: 0.9754933764288157
name: Accuracy
---

# bert-finetuned-ner-ontonotes
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset. It achieves the following results on the evaluation set:
- Loss: 0.1503
- Precision: 0.8567
- Recall: 0.8842
- F1: 0.8702
- Accuracy: 0.9755
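
Precision, recall, and F1 are entity-level (span) metrics, while accuracy is token-level. A minimal sketch of how such span metrics behave, using the `seqeval` library (an assumption; the exact evaluation code is not part of this card):

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Toy reference and prediction in IOB2 format: the reference contains two
# entities (PERSON, GPE); the prediction recovers only one of them.
y_true = [["B-PERSON", "O", "B-GPE"]]
y_pred = [["B-PERSON", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0  (1 correct of 1 predicted)
print(recall_score(y_true, y_pred))     # 0.5  (1 correct of 2 in reference)
print(f1_score(y_true, y_pred))         # 0.667
```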
## Model description
A token classification (NER) experiment on business-related text.
## Intended uses & limitations
The model can be used for token classification, in particular NER. It is fine-tuned on business-related text.
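
A minimal inference sketch with the `transformers` pipeline API; the repo id below is hypothetical, so substitute the actual Hub path of this checkpoint:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bert-finetuned-ner-ontonotes",  # hypothetical repo id
    aggregation_strategy="simple",         # merge word pieces into entity spans
)

print(ner("Hi! I am Jack. I live in California and I work for Apple."))
# e.g. [{'entity_group': 'PERSON', 'word': 'Jack', ...},
#       {'entity_group': 'GPE', 'word': 'California', ...},
#       {'entity_group': 'ORG', 'word': 'Apple', ...}]
```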
## Training and evaluation data
The dataset used is [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5).
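
For example, the splits can be inspected with the `datasets` library (a short sketch, assuming the public Hub dataset):

```python
from datasets import load_dataset

dataset = load_dataset("tner/ontonotes5")
print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # {'tokens': [...], 'tags': [...]}
```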
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
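
These map onto `transformers.TrainingArguments` roughly as follows (a sketch; the exact training script is not part of this card, and `output_dir` is a placeholder). The Adam betas and epsilon above match the Trainer defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner-ontonotes",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    evaluation_strategy="epoch",  # assumption: validation metrics below are per epoch
)
```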
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842        | 1.0   | 7491  | 0.0950          | 0.8524    | 0.8715 | 0.8618 | 0.9745   |
| 0.0523        | 2.0   | 14982 | 0.1044          | 0.8449    | 0.8827 | 0.8634 | 0.9744   |
| 0.036         | 3.0   | 22473 | 0.1118          | 0.8529    | 0.8843 | 0.8683 | 0.9760   |
| 0.0231        | 4.0   | 29964 | 0.1240          | 0.8589    | 0.8805 | 0.8696 | 0.9752   |
| 0.0118        | 5.0   | 37455 | 0.1416          | 0.8570    | 0.8804 | 0.8685 | 0.9753   |
| 0.0077        | 6.0   | 44946 | 0.1503          | 0.8567    | 0.8842 | 0.8702 | 0.9755   |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1