parquet-converter committed
Commit 0ea11d5
1 Parent(s): facdfd1

Update parquet files
.gitattributes DELETED
@@ -1,60 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- dataset/it.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/pt.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/es.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/fr.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/nl.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/pl.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/ru.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/de.jsonl filter=lfs diff=lfs merge=lfs -text
- dataset/en.jsonl filter=lfs diff=lfs merge=lfs -text
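Each removed line above is a git-lfs attribute rule; in a real repository such lines are appended by `git lfs track '<pattern>'`. A minimal sketch reproducing two of the rules by hand (the file name `gitattributes.demo` is illustrative, not part of this repo):

```python
# Recreate two of the deleted LFS rules; in a real repo these lines
# are appended by `git lfs track '<pattern>'` rather than written by hand.
rules = [
    "*.parquet filter=lfs diff=lfs merge=lfs -text",
    "dataset/en.jsonl filter=lfs diff=lfs merge=lfs -text",
]
with open("gitattributes.demo", "w") as f:
    f.write("\n".join(rules) + "\n")

# Count how many patterns route through the LFS filter.
lfs_rules = [line for line in open("gitattributes.demo") if "filter=lfs" in line]
print(len(lfs_rules))  # → 2
```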
README.md DELETED
@@ -1,123 +0,0 @@
- ---
- language:
- - de
- - en
- - es
- - fr
- - it
- - nl
- - pl
- - pt
- - ru
- multilinguality:
- - multilingual
- size_categories:
- - <10K
- task_categories:
- - token-classification
- task_ids:
- - named-entity-recognition
- pretty_name: MultiNERD
- ---
-
- # Dataset Card for "tner/multinerd"
-
- ## Dataset Description
-
- - **Repository:** [T-NER](https://github.com/asahi417/tner)
- - **Paper:** [https://aclanthology.org/2022.findings-naacl.60/](https://aclanthology.org/2022.findings-naacl.60/)
- - **Dataset:** MultiNERD
- - **Domain:** Wikipedia, WikiNews
- - **Number of Entity Types:** 18
-
-
- ### Dataset Summary
- The MultiNERD NER benchmark dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- - Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`, `SUPER`, `PHY`
-
- ## Dataset Structure
-
- ### Data Instances
- An example from the `test` split of `de` looks as follows.
-
- ```
- {
-     'tokens': [ "Die", "Blätter", "des", "Huflattichs", "sind", "leicht", "mit", "den", "sehr", "ähnlichen", "Blättern", "der", "Weißen", "Pestwurz", "(", "\"", "Petasites", "albus", "\"", ")", "zu", "verwechseln", "." ],
-     'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ]
- }
- ```
-
- ### Label ID
- The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json).
- ```python
- {
-     "O": 0,
-     "B-PER": 1,
-     "I-PER": 2,
-     "B-LOC": 3,
-     "I-LOC": 4,
-     "B-ORG": 5,
-     "I-ORG": 6,
-     "B-ANIM": 7,
-     "I-ANIM": 8,
-     "B-BIO": 9,
-     "I-BIO": 10,
-     "B-CEL": 11,
-     "I-CEL": 12,
-     "B-DIS": 13,
-     "I-DIS": 14,
-     "B-EVE": 15,
-     "I-EVE": 16,
-     "B-FOOD": 17,
-     "I-FOOD": 18,
-     "B-INST": 19,
-     "I-INST": 20,
-     "B-MEDIA": 21,
-     "I-MEDIA": 22,
-     "B-PLANT": 23,
-     "I-PLANT": 24,
-     "B-MYTH": 25,
-     "I-MYTH": 26,
-     "B-TIME": 27,
-     "I-TIME": 28,
-     "B-VEHI": 29,
-     "I-VEHI": 30,
-     "B-SUPER": 31,
-     "I-SUPER": 32,
-     "B-PHY": 33,
-     "I-PHY": 34
- }
- ```
-
- ### Data Splits
-
- | language |   test |
- |:---------|-------:|
- | de       | 156792 |
- | en       | 164144 |
- | es       | 173189 |
- | fr       | 176185 |
- | it       | 181927 |
- | nl       | 171711 |
- | pl       | 194965 |
- | pt       | 177565 |
- | ru       |  82858 |
-
- ### Citation Information
-
- ```
- @inproceedings{tedeschi-navigli-2022-multinerd,
-     title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
-     author = "Tedeschi, Simone and
-       Navigli, Roberto",
-     booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
-     month = jul,
-     year = "2022",
-     address = "Seattle, United States",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2022.findings-naacl.60",
-     doi = "10.18653/v1/2022.findings-naacl.60",
-     pages = "801--812",
-     abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
- }
- ```
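The label mapping documented in the deleted README still applies to the converted parquet files; inverting it decodes integer tags back into IOB2 labels. A minimal sketch with a truncated mapping (the full 35-entry dictionary appears above and in dataset/label.json; the sentence is made up for illustration):

```python
# Subset of the label2id mapping from the deleted README (truncated here).
label2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4}
# Invert it to map tag ids back to IOB2 label strings.
id2label = {v: k for k, v in label2id.items()}

tokens = ["Simone", "Tedeschi", "visited", "Seattle"]  # hypothetical example
tags = [1, 2, 0, 3]
labels = [id2label[t] for t in tags]
print(labels)  # → ['B-PER', 'I-PER', 'O', 'B-LOC']
```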
dataset/it.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:08ab92f2ff04710d3d06ed14e48144ce9e06b6005f60f7464c60e0e246d0538e
- size 59584208

dataset/label.json DELETED
@@ -1 +0,0 @@
- {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4, "B-ORG": 5, "I-ORG": 6, "B-ANIM": 7, "I-ANIM": 8, "B-BIO": 9, "I-BIO": 10, "B-CEL": 11, "I-CEL": 12, "B-DIS": 13, "I-DIS": 14, "B-EVE": 15, "I-EVE": 16, "B-FOOD": 17, "I-FOOD": 18, "B-INST": 19, "I-INST": 20, "B-MEDIA": 21, "I-MEDIA": 22, "B-PLANT": 23, "I-PLANT": 24, "B-MYTH": 25, "I-MYTH": 26, "B-TIME": 27, "I-TIME": 28, "B-VEHI": 29, "I-VEHI": 30, "B-SUPER": 31, "I-SUPER": 32, "B-PHY": 33, "I-PHY": 34}

dataset/nl.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:81a42e492e3b22e0cd38be687976cde0425ab2825979fff676b6c7ef6f7e414f
- size 39621455

dataset/pl.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4ac94a2574933491091af59a848e5376a699de7d7b8241fce9d1b1210edaf855
- size 44953474

dataset/pt.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d0266b95f3cb9867ad5a83e947ce7b7f9bd3a9edc605b5d3f09b3d5d341286b6
- size 51433608

dataset/ru.jsonl DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1621b8600571cbe9180c3c25becc01c41776192ff4a8691a961eb1bc49de1358
- size 51908152

dataset/es.jsonl → de/multinerd-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f69709cabe8dc592434c1c998fd2a6e8c4dc77ba8a82c5db35e5403aa2eca7a0
- size 54805232
+ oid sha256:ec93fe570ee8ac075bceb20912b8ac98075099838b839fbf20813824c7ea70b5
+ size 14028603
dataset/fr.jsonl → en/multinerd-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d0c6514e478d65eb6b851c551bf5429b41858afa9e30c01bd21472e579aa1f17
- size 55584951
+ oid sha256:a41f72ae2091d3cc6c52cb070416657244bd4e2584ed09f9843798983984c88b
+ size 14899109
dataset/de.jsonl → es/multinerd-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6131a64f666884691990333ecfc983940fbaf25eb584940c4c9640318bcef873
- size 38217905
+ oid sha256:7a01e3d368ba26fb718575438b40bcfc3610381c9d510a017c9334c1e0e83a39
+ size 17484674
dataset/en.jsonl → fr/multinerd-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5040b1e7a1dea31eeb315a46b7f7cfc4cb3ddceae489f495901392e2f1b0aad1
- size 44663615
+ oid sha256:385232b6658a5928b4c0eca87dabcd4fc98255193cdb22843999e1a6b0323a4e
+ size 16928254
it/multinerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de190e81c629da43971a8741cc7371a1a4cddd1d7c3e2de8587eb23468e178ad
+ size 19299780
multinerd.py DELETED
@@ -1,84 +0,0 @@
- """ NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
- import json
- from itertools import chain
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
- _DESCRIPTION = """[MultiNERD](https://aclanthology.org/2022.findings-naacl.60/)"""
- _NAME = "multinerd"
- _VERSION = "1.0.0"
- _CITATION = """
- @inproceedings{tedeschi-navigli-2022-multinerd,
-     title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
-     author = "Tedeschi, Simone and
-       Navigli, Roberto",
-     booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
-     month = jul,
-     year = "2022",
-     address = "Seattle, United States",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2022.findings-naacl.60",
-     doi = "10.18653/v1/2022.findings-naacl.60",
-     pages = "801--812",
-     abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
- }
- """
-
- _HOME_PAGE = "https://github.com/asahi417/tner"
- _URL = f'https://huggingface.co/datasets/tner/{_NAME}/resolve/main/dataset'
- _LANGUAGE = ['de', 'en', 'es', 'fr', 'it', 'nl', 'pl', 'pt', 'ru']
- _URLS = {
-     l: {
-         str(datasets.Split.TEST): [f'{_URL}/{l}.jsonl'],
-     } for l in _LANGUAGE
- }
-
-
- class MultiNERDConfig(datasets.BuilderConfig):
-     """BuilderConfig"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(MultiNERDConfig, self).__init__(**kwargs)
-
-
- class MultiNERD(datasets.GeneratorBasedBuilder):
-     """Dataset."""
-
-     BUILDER_CONFIGS = [
-         MultiNERDConfig(name=l, version=datasets.Version(_VERSION), description=f"{_DESCRIPTION} (language: {l})") for l in _LANGUAGE
-     ]
-
-     def _split_generators(self, dl_manager):
-         downloaded_file = dl_manager.download_and_extract(_URLS[self.config.name])
-         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
-                 for i in [datasets.Split.TEST]]
-
-     def _generate_examples(self, filepaths):
-         _key = 0
-         for filepath in filepaths:
-             logger.info(f"generating examples from = {filepath}")
-             with open(filepath, encoding="utf-8") as f:
-                 _list = [i for i in f.read().split('\n') if len(i) > 0]
-                 for i in _list:
-                     data = json.loads(i)
-                     yield _key, data
-                     _key += 1
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "tags": datasets.Sequence(datasets.Value("int32")),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOME_PAGE,
-             citation=_CITATION,
-         )
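The deleted loading script streamed JSON Lines files: one `{"tokens": [...], "tags": [...]}` object per non-empty line. A minimal standalone re-creation of that parsing step on an in-memory sample (the sentences are made up for illustration):

```python
import json

# Two JSON Lines records in the same shape the deleted _generate_examples
# consumed: one object per line, with parallel "tokens" and "tags" lists.
sample = "\n".join([
    json.dumps({"tokens": ["Die", "Blätter"], "tags": [0, 0]}),
    json.dumps({"tokens": ["Petasites", "albus"], "tags": [23, 24]}),
])

# Same filtering the script used: skip empty lines, parse the rest.
examples = [json.loads(line) for line in sample.split("\n") if len(line) > 0]
print(len(examples))  # → 2
```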
nl/multinerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5e7a4934351133e0f2fc08f0343a0f740d7ed3b38bc0111f74d382516661d9c
+ size 12864577
pl/multinerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4725832eaf92fd9a1f93261afc51c4964ef7fd92a526de1f60a1e683d4e60184
+ size 16777030
pt/multinerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ea1f2891475c34c1607ee736de555c689c598bace9ae6e3a76e0f0f73f035a6
+ size 16279885
ru/multinerd-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1a24a48a914b5bcb40657a15d50db011089e47e5ff69b99f042d9ac14a48353
+ size 9225203
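The nine new LFS pointers record the byte size of each converted per-language test file; summing them gives the footprint of the converted dataset (values copied from this commit's pointers):

```python
# Sizes in bytes from the new LFS pointer files in this commit.
parquet_sizes = {
    "de": 14028603, "en": 14899109, "es": 17484674,
    "fr": 16928254, "it": 19299780, "nl": 12864577,
    "pl": 16777030, "pt": 16279885, "ru": 9225203,
}
total_bytes = sum(parquet_sizes.values())
print(f"{total_bytes / 2**20:.1f} MiB")  # → 131.4 MiB
```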