---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: 'Get the all-analog Classic Vinyl Edition of `Takin'' Off` Album from {@herbiehancock@}
    via {@bluenoterecords@} link below: {{URL}}'
  example_title: NER Example 1
base_model: roberta-large
model-index:
- name: tner/roberta-large-tweetner7-selflabel2020
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: tner/tweetner7
      type: tner/tweetner7
      args: tner/tweetner7
    metrics:
    - type: f1
      value: 0.6455908683974932
      name: F1 (test_2021)
    - type: precision
      value: 0.6254336513443192
      name: Precision (test_2021)
    - type: recall
      value: 0.6670906567992599
      name: Recall (test_2021)
    - type: f1_macro
      value: 0.5962839441412403
      name: Macro F1 (test_2021)
    - type: precision_macro
      value: 0.5727192958380657
      name: Macro Precision (test_2021)
    - type: recall_macro
      value: 0.6267698180905158
      name: Macro Recall (test_2021)
    - type: f1_entity_span
      value: 0.7846231324492194
      name: Entity Span F1 (test_2021)
    - type: precision_entity_span
      value: 0.7600823937554206
      name: Entity Span Precision (test_2021)
    - type: recall_entity_span
      value: 0.8108014340233607
      name: Entity Span Recall (test_2021)
    - type: f1
      value: 0.6589874095901421
      name: F1 (test_2020)
    - type: precision
      value: 0.6810631229235881
      name: Precision (test_2020)
    - type: recall
      value: 0.6382978723404256
      name: Recall (test_2020)
    - type: f1_macro
      value: 0.6185133813760935
      name: Macro F1 (test_2020)
    - type: precision_macro
      value: 0.6351153721439261
      name: Macro Precision (test_2020)
    - type: recall_macro
      value: 0.6085669577041991
      name: Macro Recall (test_2020)
    - type: f1_entity_span
      value: 0.7670865719646207
      name: Entity Span F1 (test_2020)
    - type: precision_entity_span
      value: 0.7932372505543237
      name: Entity Span Precision (test_2020)
    - type: recall_entity_span
      value: 0.7426050856253243
      name: Entity Span Recall (test_2020)
---
# tner/roberta-large-tweetner7-selflabel2020

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the 
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). More precisely, it is fine-tuned on a self-labeled dataset, namely the `extra_2020` split of [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) annotated by [tner/roberta-large-tweetner7-2020](https://huggingface.co/tner/roberta-large-tweetner7-2020). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail on reproducing the model. 
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the 2021 test set:
- F1 (micro): 0.6455908683974932
- Precision (micro): 0.6254336513443192
- Recall (micro): 0.6670906567992599
- F1 (macro): 0.5962839441412403
- Precision (macro): 0.5727192958380657
- Recall (macro): 0.6267698180905158



The per-entity breakdown of the F1 score on the test set is below (a consistency check follows the list):
- corporation: 0.522762148337596
- creative_work: 0.468235294117647
- event: 0.4446564885496183
- group: 0.6155398587285571
- location: 0.6423718344657197
- person: 0.840225906358171
- product: 0.6401960784313725 
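
As a quick consistency check (not part of the original evaluation), the macro F1 reported above is the unweighted mean of these per-entity scores:

```python
# Sanity check: macro F1 is the unweighted mean of the per-entity F1 scores.
scores = [
    0.522762148337596,   # corporation
    0.468235294117647,   # creative_work
    0.4446564885496183,  # event
    0.6155398587285571,  # group
    0.6423718344657197,  # location
    0.840225906358171,   # person
    0.6401960784313725,  # product
]
print(sum(scores) / len(scores))  # 0.5962839441412403, the macro F1 above
```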

For the F1 scores, confidence intervals are obtained by bootstrap as below (a sketch of the procedure follows the list):
- F1 (micro): 
    - 90%: [0.6371204057050158, 0.6550747724054871]
    - 95%: [0.6350657043101348, 0.6568098006368783] 
- F1 (macro): 
    - 90%: [0.6371204057050158, 0.6550747724054871]
    - 95%: [0.6350657043101348, 0.6568098006368783] 
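
The intervals above can be reproduced in outline by resampling tweets with replacement and recomputing F1 on each resample. Below is a minimal sketch assuming `gold` and `pred` hold per-tweet IOB2 tag sequences (hypothetical names; the exact evaluation code lives in the T-NER repository):

```python
# Percentile-bootstrap confidence interval for F1 (a sketch, not the exact
# procedure used for the numbers above).
import random
from seqeval.metrics import f1_score  # assumes seqeval is installed

def bootstrap_f1_ci(gold, pred, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(gold)
    scores = []
    for _ in range(n_resamples):
        # resample tweet indices with replacement
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1_score([gold[i] for i in idx], [pred[i] for i in idx]))
    scores.sort()
    lower = scores[int(n_resamples * alpha / 2)]
    upper = scores[min(int(n_resamples * (1 - alpha / 2)), n_resamples - 1)]
    return lower, upper  # alpha=0.1 gives the 90% interval, alpha=0.05 the 95%
```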

Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020/raw/main/eval/metric.json) 
and [metric file of entity span](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020/raw/main/eval/metric_span.json).

### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip.   
```shell
pip install tner
```
[TweetNER7](https://huggingface.co/datasets/tner/tweetner7) pre-processed tweets so that account names and URLs are 
converted into special formats (see the dataset page for more detail), so we format tweets accordingly and then run the model prediction as below.  

```python
import re
from urlextract import URLExtract
from tner import TransformersNER

extractor = URLExtract()

def format_tweet(tweet):
    # mask web URLs with the {{URL}} special token
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # convert @account mentions into the {@account@} format
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet


text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"
text_format = format_tweet(text)
model = TransformersNER("tner/roberta-large-tweetner7-selflabel2020")
model.predict([text_format])
```
The model can also be used via the transformers library, but this is not recommended since the CRF layer is not supported at the moment.
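
For reference, here is a minimal sketch using the transformers pipeline (the checkpoint is assumed to load as a plain token-classification model; the CRF transition constraints are ignored, so predictions may differ from the tner output):

```python
from transformers import pipeline

# Plain token-classification pipeline; the CRF layer is not applied here.
ner = pipeline(
    "token-classification",
    model="tner/roberta-large-tweetner7-selflabel2020",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner(text_format))  # `text_format` from the snippet above
```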

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding T-NER call follows the list):
 - dataset: ['tner/tweetner7']
 - dataset_split: train
 - dataset_name: None
 - local_dataset: {'train': 'tweet_ner/2020.extra.tner/roberta-large-2020.txt', 'validation': 'tweet_ner/2020.dev.txt'}
 - model: roberta-large
 - crf: True
 - max_length: 128
 - epoch: 30
 - batch_size: 32
 - lr: 1e-05
 - random_seed: 0
 - gradient_accumulation_steps: 1
 - weight_decay: 1e-07
 - lr_warmup_step_ratio: 0.15
 - max_grad_norm: 1
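
As a sketch only, the hyperparameters above map onto T-NER's `GridSearcher` roughly as follows. Argument names follow the T-NER README; whether `GridSearcher` accepts `local_dataset` directly is an assumption here, and the exact invocation is recorded in the linked trainer config.

```python
from tner import GridSearcher

searcher = GridSearcher(
    checkpoint_dir='./ckpt_tweetner7_selflabel2020',  # hypothetical output dir
    # self-labeled train split plus the 2020 dev split, per the config above
    local_dataset={'train': 'tweet_ner/2020.extra.tner/roberta-large-2020.txt',
                   'validation': 'tweet_ner/2020.dev.txt'},
    model='roberta-large',
    epoch=30,
    batch_size=32,
    max_length=128,
    crf=[True],
    lr=[1e-5],
    random_seed=[0],
    gradient_accumulation_steps=[1],
    weight_decay=[1e-7],
    lr_warmup_step_ratio=[0.15],
    max_grad_norm=[1],
)
searcher.train()
```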

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020/raw/main/trainer_config.json).

### Reference
If you use this model, please cite the T-NER paper and the TweetNER7 paper.
- T-NER
```

@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}

```
- TweetNER7
```

@inproceedings{ushio-etal-2022-tweet,
    title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
    author = "Ushio, Asahi  and
        Neves, Leonardo  and
        Silva, Vitor  and
        Barbieri, Francesco  and
        Camacho-Collados, Jose",
    booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
    month = nov,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}

```