---
annotations_creators:
- expert-generated
language:
- tl
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TLUnified-NER
tags:
- low-resource
- named-entity-recognition
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  splits:
  - name: train
    num_bytes: 3380392
    num_examples: 6252
  - name: validation
    num_bytes: 427069
    num_examples: 782
  - name: test
    num_bytes: 426247
    num_examples: 782
  download_size: 971039
  dataset_size: 4233708
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
train-eval-index:
- config: default
  task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---

<!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->

# 🪐 spaCy Project: TLUnified-NER Corpus


- **Homepage:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
- **Repository:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset contains the annotated TLUnified corpora from Cruz and Cheng
(2021). It is a curated sample of around 7,000 documents for the named entity
recognition (NER) task. The corpus consists mostly of news reports in Tagalog,
resembling the domain of the original CoNLL 2003 dataset. There are three
entity types: Person (PER), Organization (ORG), and Location (LOC).

| Dataset     | Examples | PER  | ORG  | LOC  |
|-------------|----------|------|------|------|
| Train       | 6252     | 6418 | 3121 | 3296 |
| Development | 782      | 793  | 392  | 409  |
| Test        | 782      | 818  | 423  | 438  |

### Data Fields

The data fields are the same among all splits:
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values: `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
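
The integer `ner_tags` can be mapped back to their IOB label strings using the class names above. A minimal sketch in plain Python (the label list is copied from the schema; decoding via the 🤗 `datasets` `ClassLabel` feature works the same way):

```python
# Label list copied from the dataset schema above.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their IOB label strings."""
    return [NER_LABELS[i] for i in tag_ids]

# Example: a PER entity followed by an unrelated LOC token.
print(decode_tags([1, 2, 0, 5]))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```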

### Annotation process

The author, together with two more annotators, labeled curated portions of
TLUnified over the course of four months. All annotators are native speakers of
Tagalog. After each annotation round, the annotators resolved disagreements,
updated the annotation guidelines, and corrected past annotations. They
followed the process prescribed by [Reiter
(2017)](https://nilsreiter.de/blog/2017/howto-annotation).

They also measured the inter-annotator agreement (IAA) by computing pairwise
comparisons and averaging the results:
- Cohen's Kappa (all tokens): 0.81
- Cohen's Kappa (annotated tokens only): 0.65
- F1-score: 0.91
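
As an illustration, pairwise Cohen's Kappa over aligned token labels can be computed and averaged like this (a self-contained sketch, not the authors' actual evaluation code; the `ann` toy data is hypothetical):

```python
from collections import Counter
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's Kappa between two aligned label sequences."""
    assert len(a) == len(b)
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    p_expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

def average_pairwise_kappa(annotations):
    """Average Cohen's Kappa over all annotator pairs."""
    scores = [cohens_kappa(a, b) for a, b in combinations(annotations, 2)]
    return sum(scores) / len(scores)

# Toy example: three annotators labeling the same five tokens.
ann = [
    ["O", "B-PER", "I-PER", "O", "B-LOC"],
    ["O", "B-PER", "I-PER", "O", "O"],
    ["O", "B-PER", "O", "O", "B-LOC"],
]
print(round(average_pairwise_kappa(ann), 2))  # 0.58
```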

### About this repository

This repository is a [spaCy project](https://spacy.io/usage/projects) for
converting the annotated spaCy files into IOB format. The process goes like
this: we download the raw corpus from Google Cloud Storage (GCS), convert the
spaCy files into a readable IOB format, and parse that using our loading
script (i.e., `tlunified-ner.py`). We also ship the IOB file so that it's
easier to access.
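
To illustrate the conversion step, here is a minimal sketch of how token-level entity spans map to IOB tags (plain Python, not the repo's actual converter, which reads spaCy `DocBin` files; the example tokens and spans are made up):

```python
def spans_to_iob(tokens, spans):
    """Convert (start, end, label) token-index spans to IOB tags.

    `spans` are half-open token ranges, e.g. (1, 4, "PER") covers
    tokens 1 through 3. Spans are assumed to be non-overlapping.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # entity start
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # entity continuation
    return tags

tokens = ["Si", "Juan", "dela", "Cruz", "ay", "taga-Maynila"]
print(spans_to_iob(tokens, [(1, 4, "PER")]))
# ['O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O']
```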


## 📋 project.yml

The [`project.yml`](project.yml) defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
[spaCy projects documentation](https://spacy.io/usage/projects).

### ⏯ Commands

The following commands are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run).
Commands are only re-run if their inputs have changed.

| Command | Description |
| --- | --- |
| `setup-data` | Prepare the Tagalog corpora used for training various spaCy components |
| `upload-to-hf` | Upload dataset to HuggingFace Hub |

### ⏭ Workflows

The following workflows are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run)
and will run the specified commands in order. Commands are only re-run if their
inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all` | `setup-data` &rarr; `upload-to-hf` |

### 🗂 Assets

The following assets are defined by the project. They can
be fetched by running [`spacy project assets`](https://spacy.io/api/cli#project-assets)
in the project directory.

| File | Source | Description |
| --- | --- | --- |
| `assets/corpus.tar.gz` | URL | Annotated TLUnified corpora in spaCy format with train, dev, and test splits. |

<!-- SPACY PROJECT: AUTO-GENERATED DOCS END (do not remove) -->

### Citation

You can cite this dataset as:

```
@misc{miranda2023developing,
  title={Developing a Named Entity Recognition Dataset for Tagalog}, 
  author={Lester James V. Miranda},
  year={2023},
  eprint={2311.07161},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```