Tasks: Token Classification
Sub-tasks: named-entity-recognition
Formats: csv
Languages: English
Size: 100K - 1M

parquet-converter committed • Commit 425cf0d • Parent(s): 18aed21
Update parquet files
Browse files
- README.md +0 -177
- gtfintechlab--finer-ord/csv-test.parquet +3 -0
- gtfintechlab--finer-ord/csv-train.parquet +3 -0
- gtfintechlab--finer-ord/csv-validation.parquet +3 -0
- test.csv +0 -0
- train.csv +0 -0
- val.csv +0 -0
README.md
DELETED
@@ -1,177 +0,0 @@
---
license: cc-by-nc-4.0
task_categories:
- token-classification
language:
- en
pretty_name: FiNER
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
task_ids:
- named-entity-recognition
---

# Dataset Card for "FiNER-ORD"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation and Annotation](#dataset-creation-and-annotation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contact Information](#contact-information)

## Dataset Description

- **Homepage:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER)
- **Repository:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER)
- **Paper:** [Arxiv Link]()
- **Point of Contact:** [Agam A. Shah](https://shahagam4.github.io/)
- **Size of train dataset file:** 1.34 MB
- **Size of validation dataset file:** 160 KB
- **Size of test dataset file:** 412 KB

### Dataset Summary

The FiNER-Open Research Dataset (FiNER-ORD) consists of manually annotated financial news articles (in English) collected from [webz.io](https://webz.io/free-datasets/financial-news-articles/). In total, 47,851 news articles were available in this source at the time of writing the paper. Each news article is provided as a JSON document with metadata such as the source of the article, the publication date, the author, and the title.

For the manual annotation of named entities in financial news, we randomly sampled 220 documents from the entire set of news articles. Some articles in our sample were empty, so after filtering out the empty documents we were left with a total of 201 articles. We used [Doccano](https://github.com/doccano/doccano), an open-source annotation tool, to ingest the raw dataset and manually label person (PER), location (LOC), and organization (ORG) entities. For our experiments, we use the manually labeled FiNER-ORD to benchmark model performance, so we split FiNER-ORD into train, validation, and test sets. To avoid biased results, manual annotation was performed by annotators who had no knowledge of the labeling functions used in the weak-supervision framework. The train and validation sets were annotated by two separate annotators and validated by a third annotator; the test set was annotated by another annotator. A manual annotation guide detailing the procedures used to create the manually annotated FiNER-ORD is presented in the Appendix of the paper.

After manual annotation, the news articles are split into sentences. We then tokenize each sentence, employing a script to split multi-token entities into separate tokens (e.g., PER_B denotes the beginning token of a person (PER) entity and PER_I denotes an intermediate PER token). White space is excluded when tokenizing multi-token entities. Descriptive statistics for the resulting FiNER-ORD are available in the table in the [Data Splits](#data-splits) section.

For more details, check the [information in the paper]().

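The labeling scheme described above can be sketched in a few lines of Python. This is a minimal illustration of the B/I token-labeling convention, not the authors' actual annotation script; the `label_tokens` helper and the sample sentence are hypothetical.

```python
# Minimal sketch of the multi-token entity labeling scheme: the first token of
# an entity of type X gets X_B, subsequent tokens get X_I, all others get O.
# The function and sample data are illustrative, not the authors' tooling.
def label_tokens(tokens, spans):
    """spans: list of (start, end, entity_type) over token indices, end exclusive."""
    labels = ['O'] * len(tokens)
    for start, end, ent_type in spans:
        labels[start] = f'{ent_type}_B'
        for i in range(start + 1, end):
            labels[i] = f'{ent_type}_I'
    return labels

tokens = ['Agam', 'Shah', 'joined', 'Georgia', 'Tech', '.']
spans = [(0, 2, 'PER'), (3, 5, 'ORG')]
print(label_tokens(tokens, spans))
# → ['PER_B', 'PER_I', 'O', 'ORG_B', 'ORG_I', 'O']
```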
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

- This is a monolingual English dataset.

## Dataset Structure

### Data Instances

#### FiNER-ORD

- **Size of train dataset file:** 1.34 MB
- **Size of validation dataset file:** 160 KB
- **Size of test dataset file:** 412 KB

An example from 'train' looks as follows.

```
{
  "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here: `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.

### Data Fields

The data fields are the same among all splits.

#### FiNER-ORD

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
 'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'PER_B': 1, 'PER_I': 2, 'LOC_B': 3, 'LOC_I': 4, 'ORG_B': 5, 'ORG_I': 6}
```
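The integer `ner_tags` can be mapped back to entity spans with the tagset above. A minimal sketch follows; the `decode_entities` helper is illustrative, not part of the dataset tooling. Note that the example instance shown earlier contains a tag id (7) outside this tagset, which the sketch treats as `O`.

```python
# Sketch: decode integer `ner_tags` into (entity_type, surface_text) spans
# using the FiNER-ORD tagset. Unknown tag ids are treated as 'O', since the
# example instance above includes id 7, which is not in the listed tagset.
TAGSET = {'O': 0, 'PER_B': 1, 'PER_I': 2, 'LOC_B': 3, 'LOC_I': 4, 'ORG_B': 5, 'ORG_I': 6}
ID2TAG = {v: k for k, v in TAGSET.items()}

def decode_entities(tokens, ner_tags):
    entities, ent_type, ent_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tags):
        tag = ID2TAG.get(tag_id, 'O')
        if tag.endswith('_B'):                        # first token of a new entity
            if ent_type:
                entities.append((ent_type, ' '.join(ent_tokens)))
            ent_type, ent_tokens = tag[:-2], [token]
        elif tag.endswith('_I') and ent_type == tag[:-2]:  # entity continues
            ent_tokens.append(token)
        else:                                         # 'O' closes any open entity
            if ent_type:
                entities.append((ent_type, ' '.join(ent_tokens)))
            ent_type, ent_tokens = None, []
    if ent_type:
        entities.append((ent_type, ' '.join(ent_tokens)))
    return entities

tokens = ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed",
          "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb",
          "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can",
          "be", "transmitted", "to", "sheep", "."]
ner_tags = [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(decode_entities(tokens, ner_tags))  # → [('LOC', 'European Commission')]
```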

### Data Splits

| **FiNER-ORD**  | **Train** | **Validation** | **Test** |
|----------------|-----------|----------------|----------|
| # Articles     | 135       | 24             | 42       |
| # Tokens       | 80,531    | 10,233         | 25,957   |
| # LOC entities | 1,255     | 267            | 428      |
| # ORG entities | 3,440     | 524            | 933      |
| # PER entities | 1,374     | 222            | 466      |

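As a quick sanity check, the per-split entity counts in the table above can be totaled programmatically (numbers transcribed from the table):

```python
# Entity counts per split, transcribed from the Data Splits table above.
counts = {
    'train':      {'LOC': 1255, 'ORG': 3440, 'PER': 1374},
    'validation': {'LOC': 267,  'ORG': 524,  'PER': 222},
    'test':       {'LOC': 428,  'ORG': 933,  'PER': 466},
}
totals = {split: sum(c.values()) for split, c in counts.items()}
print(totals)  # → {'train': 6069, 'validation': 1013, 'test': 1827}
```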
## Dataset Creation and Annotation

### Paper

[Information in paper]()

## Additional Information

### Licensing Information

[Information in paper]()

### Citation Information

```
@article{shah2023finer,
  title={FiNER: Financial Named Entity Recognition Dataset and Weak-supervision Model},
  author={Agam Shah and Ruchit Vithani and Abhinav Gullapalli and Sudheer Chava},
  journal={Under Review at SIGIR},
  year={2023}
}
```

### Contact Information

Please contact Agam Shah (ashah482[at]gatech[dot]edu) about any FiNER-related issues and questions.
[@shahagam4](https://github.com/shahagam4)
[Website](https://shahagam4.github.io/)
gtfintechlab--finer-ord/csv-test.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c98236014a97624202a554858cf7006aa5b607eb7e5f478ae1438313ef9e96ca
size 289147
gtfintechlab--finer-ord/csv-train.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e4f8c124ce9b50eb6a7c5dcdc4bfbd4f95ebdd6e93d01ad6102def7080a6f22
size 906363
gtfintechlab--finer-ord/csv-validation.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f77f68f4d69be18d9b4e0bbea5367a49687570ecaa29e6c09c01c672e876bc91
size 118643
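The three added parquet files are stored as Git LFS pointer files (spec v1): three `key value` lines giving the spec URL, the sha256 of the real content, and its size in bytes. A small sketch of parsing one, using the csv-test pointer above:

```python
# Parse a Git LFS pointer file (spec v1) into its version, content hash, and size.
def parse_lfs_pointer(text):
    fields = dict(line.split(' ', 1) for line in text.strip().splitlines())
    return {
        'version': fields['version'],
        'sha256': fields['oid'].removeprefix('sha256:'),
        'size': int(fields['size']),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c98236014a97624202a554858cf7006aa5b607eb7e5f478ae1438313ef9e96ca
size 289147
"""
info = parse_lfs_pointer(pointer)
print(info['size'])  # → 289147
```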
test.csv
DELETED
The diff for this file is too large to render. See raw diff

train.csv
DELETED
The diff for this file is too large to render. See raw diff

val.csv
DELETED
The diff for this file is too large to render. See raw diff