Update README.md

README.md (changed)
---
license: apache-2.0
tags:
- generated_from_keras_callback
- named entity recognition
- bert-base finetuned
- umair akram
model-index:
- name: MUmairAB/bert-ner
  results: []
datasets:
- conll2003
language:
- en
metrics:
- seqeval
library_name: keras
pipeline_tag: token-classification
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# MUmairAB/bert-ner

The model training notebook is available in my [GitHub Repo](https://github.com/MUmairAB/BERT-based-NER-using-HuggingFace-Transformers/tree/main).

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Validation Loss: 0.0880
- Epoch: 19
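
For context, the training data can be pulled straight from the Hub. A minimal sketch, assuming the `datasets` library listed under the framework versions below:

```python
from datasets import load_dataset

# CoNLL-2003 ships with train/validation/test splits; each example
# carries `tokens` plus integer `ner_tags` in the 9-label IOB2 scheme.
raw_datasets = load_dataset("conll2003")

print(raw_datasets["train"][0]["tokens"])
print(raw_datasets["train"][0]["ner_tags"])
```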

## Model description

```
Model: "tf_bert_for_token_classification"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 bert (TFBertMainLayer)      multiple                  107719680

 dropout_37 (Dropout)        multiple                  0

 classifier (Dense)          multiple                  6921

=================================================================
Total params: 107,726,601
Trainable params: 107,726,601
Non-trainable params: 0
_________________________________________________________________
```
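As an illustration of where this summary comes from, here is a minimal sketch that rebuilds the same head with the `transformers` TensorFlow API; the exact training code lives in the notebook linked above, so treat this as a reconstruction under assumptions:

```python
from transformers import TFAutoModelForTokenClassification

# 9 labels = O plus B-/I- tags for PER, ORG, LOC and MISC, so the
# classifier has 768 * 9 weights + 9 biases = 6,921 parameters,
# matching the summary above.
model = TFAutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=9,
)
model.summary()
```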

## Intended uses & limitations

This model can be used for named entity recognition (NER) tasks. It is trained on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset and can classify four types of named entities (a usage sketch follows the list):
1. persons,
2. locations,
3. organizations, and
4. names of miscellaneous entities that do not belong to the previous three groups.
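
A minimal inference sketch via the `transformers` pipeline; the sentence is an invented example, and `aggregation_strategy="simple"` is one reasonable choice for merging word pieces into whole entities:

```python
from transformers import pipeline

# Load this checkpoint into a token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="MUmairAB/bert-ner",
    aggregation_strategy="simple",  # group sub-word tokens per entity
)

# Invented example sentence.
print(ner("Umair moved from Lahore to London to join Google."))
# Each item has entity_group, score, word, start and end.
```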

## Training and evaluation data

The model was evaluated with the [seqeval](https://github.com/chakki-works/seqeval) metric, and the results are as follows:

```
{'LOC': {'precision': 0.9655361050328227,
  'recall': 0.9608056614044638,
  'f1': 0.9631650750341064,
  'number': 1837},
 'MISC': {'precision': 0.8789144050104384,
  'recall': 0.913232104121475,
  'f1': 0.8957446808510638,
  'number': 922},
 'ORG': {'precision': 0.9075144508670521,
  'recall': 0.9366144668158091,
  'f1': 0.9218348623853211,
  'number': 1341},
 'PER': {'precision': 0.962011771000535,
  'recall': 0.9761129207383279,
  'f1': 0.9690110482349771,
  'number': 1842},
 'overall_precision': 0.9374068554396423,
 'overall_recall': 0.9527095254123191,
 'overall_f1': 0.944996244053084,
 'overall_accuracy': 0.9864013657502796}
```
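For reference, a sketch of how numbers in this shape can be computed with the `evaluate` wrapper around seqeval; the two toy sequences below are placeholders, not the actual CoNLL-2003 validation split:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Placeholder IOB2-tagged sequences standing in for real
# model predictions and gold labels.
references = [["B-PER", "I-PER", "O", "B-LOC"]]
predictions = [["B-PER", "I-PER", "O", "B-ORG"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_f1"])
```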

## Training procedure

The following hyperparameters were used during training:

[…]

### Framework versions

- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3