---
annotations_creators:
- expert-generated
language:
- tl
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TLUnified-NER
tags:
- low-resource
- named-entity-recognition
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
  splits:
  - name: train
    num_bytes: 3380392
    num_examples: 6252
  - name: validation
    num_bytes: 427069
    num_examples: 782
  - name: test
    num_bytes: 426247
    num_examples: 782
  download_size: 971039
  dataset_size: 4233708
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
train-eval-index:
- config: default
  task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---
<!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->

# 🪐 spaCy Project: TLUnified-NER Corpus

- **Homepage:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
- **Repository:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
- **Point of Contact:** [email protected]
### Dataset Summary

This dataset contains the annotated TLUnified corpora from Cruz and Cheng
(2021). It is a curated sample of roughly 7,800 documents for the named entity
recognition (NER) task. The majority of the corpus consists of news reports in
Tagalog, resembling the domain of the original CoNLL 2003 dataset. There are
three entity types: Person (PER), Organization (ORG), and Location (LOC).

| Dataset     | Examples | PER  | ORG  | LOC  |
|-------------|----------|------|------|------|
| Train       | 6252     | 6418 | 3121 | 3296 |
| Development | 782      | 793  | 392  | 409  |
| Test        | 782      | 818  | 423  | 438  |
### Data Fields

The data fields are the same across all splits:

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), and `I-LOC` (6).
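
As a quick illustration of the `ner_tags` encoding, the integer ids can be decoded back into string labels using the mapping above (a minimal sketch; the example record is invented, not drawn from the corpus):

```python
# Label list mirroring the ClassLabel mapping declared in this card.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def decode_tags(tag_ids):
    """Map integer class ids back to their IOB string labels."""
    return [NER_LABELS[i] for i in tag_ids]

# Invented example record following the schema of this dataset.
example = {
    "id": "0",
    "tokens": ["Si", "Juan", "ay", "pumunta", "sa", "Maynila"],
    "ner_tags": [0, 1, 0, 0, 0, 5],
}
print(decode_tags(example["ner_tags"]))
# ['O', 'B-PER', 'O', 'O', 'O', 'B-LOC']
```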
### Annotation process

The author, together with two other annotators, labeled curated portions of
TLUnified over the course of four months. All annotators are native speakers
of Tagalog. For each annotation round, the annotators resolved disagreements,
updated the annotation guidelines, and corrected past annotations. They
followed the process prescribed by [Reiter
(2017)](https://nilsreiter.de/blog/2017/howto-annotation).

They also measured inter-annotator agreement (IAA) by computing pairwise
comparisons and averaging the results:

- Cohen's Kappa (all tokens): 0.81
- Cohen's Kappa (annotated tokens only): 0.65
- F1-score: 0.91
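
The pairwise-averaging scheme can be sketched as follows (a toy illustration, not the project's actual evaluation code; the annotator labels are invented):

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences:
    observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement under independence: product of marginal label frequencies.
    labels = set(a) | set(b)
    expected = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (observed - expected) / (1 - expected)

def mean_pairwise_kappa(annotations):
    """Average kappa over all annotator pairs, as in the IAA figures above."""
    pairs = list(combinations(annotations, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

# Invented toy labels from three annotators over the same four tokens.
ann_a = ["O", "B-PER", "O", "O"]
ann_b = ["O", "B-PER", "O", "B-LOC"]
ann_c = ["O", "B-PER", "O", "O"]
print(round(mean_pairwise_kappa([ann_a, ann_b, ann_c]), 2))  # 0.7
```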
### About this repository

This repository is a [spaCy project](https://spacy.io/usage/projects) for
converting the annotated spaCy files into IOB. The process goes like this: we
download the raw corpus from Google Cloud Storage (GCS), convert the spaCy
files into a readable IOB format, and parse that using our loading script
(i.e., `tlunified-ner.py`). We're also shipping the IOB file so that it's
easier to access.
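
The IOB serialization step can be sketched like this (a simplified stand-in for the project's actual conversion scripts; the sample document is invented, not taken from the corpus):

```python
def to_iob_lines(examples):
    """Render (tokens, iob_tags) pairs as CoNLL-style IOB text:
    one `token<TAB>tag` pair per line, with a blank line between documents."""
    blocks = []
    for tokens, tags in examples:
        blocks.append("\n".join(f"{tok}\t{tag}" for tok, tag in zip(tokens, tags)))
    return "\n\n".join(blocks) + "\n"

# Invented sample document (not from the corpus).
docs = [
    (["Si", "Juan", "ay", "pumunta", "sa", "Maynila"],
     ["O", "B-PER", "O", "O", "O", "B-LOC"]),
]
print(to_iob_lines(docs))
```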
## 📋 project.yml

The [`project.yml`](project.yml) defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
[spaCy projects documentation](https://spacy.io/usage/projects).

### ⏯ Commands

The following commands are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run).
Commands are only re-run if their inputs have changed.

| Command | Description |
| --- | --- |
| `setup-data` | Prepare the Tagalog corpora used for training various spaCy components |
| `upload-to-hf` | Upload dataset to HuggingFace Hub |
### ⏭ Workflows

The following workflows are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run)
and will run the specified commands in order. Commands are only re-run if
their inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all` | `setup-data` → `upload-to-hf` |

### 🗂 Assets

The following assets are defined by the project. They can
be fetched by running [`spacy project assets`](https://spacy.io/api/cli#project-assets)
in the project directory.

| File | Source | Description |
| --- | --- | --- |
| `assets/corpus.tar.gz` | URL | Annotated TLUnified corpora in spaCy format with train, dev, and test splits. |

<!-- SPACY PROJECT: AUTO-GENERATED DOCS END (do not remove) -->
### Citation

You can cite this dataset as:

```
@misc{miranda2023developing,
  title={Developing a Named Entity Recognition Dataset for Tagalog},
  author={Lester James V. Miranda},
  year={2023},
  eprint={2311.07161},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```