ljvmiranda921 committed on
Commit
8d31d97
1 Parent(s): d96e585

Delete data file

Files changed (1)
  1. project.yml +0 -87
project.yml DELETED
@@ -1,87 +0,0 @@
- title: "TLUnified-NER Corpus"
- description: |
-
-   - **Homepage:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
-   - **Repository:** [GitHub](https://github.com/ljvmiranda921/calamanCy)
-   - **Point of Contact:** [email protected]
-
-   ### Dataset Summary
-
-   This dataset contains the annotated TLUnified corpora from Cruz and Cheng
-   (2021). It is a curated sample of around 7,000 documents for the named
-   entity recognition (NER) task. The majority of the corpus consists of news
-   reports in Tagalog, resembling the domain of the original CoNLL 2003
-   shared task. There are three entity types: Person (PER), Organization
-   (ORG), and Location (LOC).
-
-   | Dataset     | Examples | PER  | ORG  | LOC  |
-   |-------------|----------|------|------|------|
-   | Train       | 6252     | 6418 | 3121 | 3296 |
-   | Development | 782      | 793  | 392  | 409  |
-   | Test        | 782      | 818  | 423  | 438  |
-
-   ### Data Fields
-
-   The data fields are the same among all splits:
-   - `id`: a `string` feature.
-   - `tokens`: a `list` of `string` features.
-   - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
-
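To make the schema concrete, here is a minimal sketch of inspecting those fields with the `datasets` library. The dataset ID is an assumption based on this repository's name, not something the file above specifies:

```python
from datasets import load_dataset

# NOTE: hypothetical dataset ID, inferred from the repository name;
# point it at wherever the TLUnified-NER data actually lives.
ds = load_dataset("ljvmiranda921/tlunified-ner")

example = ds["train"][0]
print(example["id"])        # a string identifier
print(example["tokens"])    # list of token strings
print(example["ner_tags"])  # list of label indices in [0, 6]

# Recover the tag names (O, B-PER, ...) from the feature metadata.
label_names = ds["train"].features["ner_tags"].feature.names
print([label_names[i] for i in example["ner_tags"]])
```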
-   ### Annotation process
-
-   The author, together with two more annotators, labeled curated portions of
-   TLUnified over the course of four months. All annotators are native
-   speakers of Tagalog. For each annotation round, the annotators resolved
-   disagreements, updated the annotation guidelines, and corrected past
-   annotations. They followed the process prescribed by
-   [Reiter (2017)](https://nilsreiter.de/blog/2017/howto-annotation).
-
-   They also measured the inter-annotator agreement (IAA) by computing
-   pairwise comparisons and averaging the results:
-   - Cohen's Kappa (all tokens): 0.81
-   - Cohen's Kappa (annotated tokens only): 0.65
-   - F1-score: 0.91
-
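As a sketch of that averaging scheme (not the authors' actual evaluation code), mean pairwise Cohen's Kappa can be computed with scikit-learn; the helper name and the toy annotations below are hypothetical:

```python
from itertools import combinations

from sklearn.metrics import cohen_kappa_score


def average_pairwise_kappa(annotations: dict[str, list[str]]) -> float:
    """Average Cohen's Kappa over all annotator pairs.

    `annotations` maps an annotator name to their token-level tags,
    aligned so that index i refers to the same token for everyone.
    """
    scores = [
        cohen_kappa_score(annotations[a], annotations[b])
        for a, b in combinations(annotations, 2)
    ]
    return sum(scores) / len(scores)


# Toy example: three annotators labeling the same five tokens.
print(
    average_pairwise_kappa({
        "a1": ["O", "B-PER", "I-PER", "O", "B-LOC"],
        "a2": ["O", "B-PER", "I-PER", "O", "O"],
        "a3": ["O", "B-PER", "O", "O", "B-LOC"],
    })
)
```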
-   ### About this repository
-
-   This repository is a [spaCy project](https://spacy.io/usage/projects) for
-   converting the annotated spaCy files into IOB. The workflow downloads the
-   raw corpus from Google Cloud Storage (GCS), converts the spaCy files into
-   a readable IOB format, and parses that format with our loading script
-   (i.e., `tlunified-ner.py`). We also ship the IOB files so that they are
-   easier to access.
-
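For readers curious about the conversion step, below is a minimal sketch of what a spaCy-to-IOB script in the spirit of `spacy_to_iob` could look like, assuming the `.spacy` files hold gold entity spans; the function name, tab-separated layout, and file naming are assumptions, not the script's actual contents:

```python
from pathlib import Path

import spacy
from spacy.tokens import DocBin


def docbin_to_iob(spacy_file: Path, iob_file: Path) -> None:
    """Hypothetical sketch: dump a .spacy file as token-per-line IOB,
    with a blank line separating documents."""
    nlp = spacy.blank("tl")  # only the vocab is needed, no pipeline
    doc_bin = DocBin().from_disk(spacy_file)
    with iob_file.open("w", encoding="utf-8") as f:
        for doc in doc_bin.get_docs(nlp.vocab):
            for token in doc:
                iob = token.ent_iob_
                tag = f"{iob}-{token.ent_type_}" if iob in ("B", "I") else "O"
                f.write(f"{token.text}\t{tag}\n")
            f.write("\n")


for split in ("train", "dev", "test"):
    docbin_to_iob(
        Path(f"corpus/spacy/{split}.spacy"),
        Path(f"corpus/iob/{split}.iob"),
    )
```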
- directories: ["assets", "corpus/spacy", "corpus/iob"]
-
- vars:
-   version: 1.0
-
- assets:
-   - dest: assets/corpus.tar.gz
-     description: "Annotated TLUnified corpora in spaCy format with train, dev, and test splits."
-     url: "https://storage.googleapis.com/ljvmiranda/calamanCy/tl_tlunified_gold/v${vars.version}/corpus.tar.gz"
-
- workflows:
-   all:
-     - "setup-data"
-     - "upload-to-hf"
-
- commands:
-   - name: "setup-data"
-     help: "Prepare the Tagalog corpora used for training various spaCy components"
-     script:
-       - mkdir -p corpus/spacy
-       - tar -xzvf assets/corpus.tar.gz -C corpus/spacy
-       - python -m spacy_to_iob corpus/spacy/ corpus/iob/
-     outputs:
-       - corpus/iob/train.iob
-       - corpus/iob/dev.iob
-       - corpus/iob/test.iob
-
-   - name: "upload-to-hf"
-     help: "Upload dataset to HuggingFace Hub"
-     script:
-       - git push
-     deps:
-       - corpus/iob/train.iob
-       - corpus/iob/dev.iob
-       - corpus/iob/test.iob
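Before its deletion, a config like this would typically be driven with `python -m spacy project assets` (to fetch `corpus.tar.gz`) followed by `python -m spacy project run all` (to execute `setup-data` and then `upload-to-hf` in order); on newer spaCy versions the project commands live in the standalone `weasel` package, e.g. `python -m weasel run all`.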