Datasets: cyanic-selkie
cyanic-selkie committed • Commit 557787a • 0 parent(s)

Initial commit.

Files changed:
- .gitattributes +57 -0
- README.md +146 -0
- test.parquet +3 -0
- train.parquet +3 -0
- validation.parquet +3 -0
.gitattributes
ADDED
@@ -0,0 +1,57 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
test.parquet filter=lfs diff=lfs merge=lfs -text
train.parquet filter=lfs diff=lfs merge=lfs -text
validation.parquet filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,146 @@
---
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
tags:
- wikidata
- wikipedia
- wikification
pretty_name: WikiAnc EN
size_categories:
- 10M<n<100M
---
# Dataset Card for WikiAnc EN

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)

## Dataset Description

- **Repository:** [WikiAnc repository](https://github.com/cyanic-selkie/wikianc)

### Dataset Summary

The WikiAnc EN dataset is automatically generated from the Wikipedia (en) and Wikidata dumps (March 1, 2023).

The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/wikianc).

### Supported Tasks

- `wikification`: The dataset can be used to train a model for wikification.

### Languages

The text in the dataset is in English. The associated BCP-47 code is `en`.

You can find the Croatian version [here](https://huggingface.co/datasets/cyanic-selkie/wikianc-hr).
## Dataset Structure

### Data Instances

A typical data point represents a paragraph in a Wikipedia article.

The `paragraph_text` field contains the original text as an NFC normalized, UTF-8 encoded string.

The `paragraph_anchors` field contains a list of anchors, each represented by a struct with an inclusive starting UTF-8 code point `start` field, an exclusive ending UTF-8 code point `end` field, a nullable `qid` field, a nullable `pageid` field, and an NFC normalized, UTF-8 encoded Wikipedia `title` field.

Additionally, each paragraph has `article_title`, `article_pageid`, and (nullable) `article_qid` fields referring to the article the paragraph came from.

There is also a nullable, NFC normalized, UTF-8 encoded `section_heading` field and an integer `section_level` field, referring to the heading (if one exists) of the section the paragraph came from and to that section's level in the section hierarchy.

The `qid` fields refer to Wikidata's QID identifiers, while the `pageid` and `title` fields refer to Wikipedia's pageID and title identifiers (there is a one-to-one mapping between pageIDs and titles).

**NOTE:** An anchor will always have a `title`, but it doesn't have to have a `pageid`, because Wikipedia allows defining anchors to nonexistent articles.

An example from the WikiAnc EN test set looks as follows:

```
{
  "uuid": "5f74e678-944f-4761-a5e0-b6426f6f61b8",
  "article_title": "Climatius",
  "article_pageid": 5394373,
  "article_qid": 867987,
  "section_heading": null,
  "section_level": 0,
  "paragraph_text": "It was a small fish, at 7.5 cm, and to discourage predators, Climatius sported fifteen sharp spines. There was one spine each on the paired pelvic and pectoral fins, and on the aingle anal and two dorsal fins, and a four pairs without fins on the fish's underside.",
  "paragraph_anchors": [
    {
      "start": 140,
      "end": 146,
      "qid": 3335089,
      "pageid": 56849833,
      "title": "Pelvic_fin"
    },
    {
      "start": 151,
      "end": 159,
      "qid": 4162555,
      "pageid": 331956,
      "title": "Pectoral_fin"
    },
    {
      "start": 184,
      "end": 188,
      "qid": 4162555,
      "pageid": 331958,
      "title": "Anal_fin"
    },
    {
      "start": 197,
      "end": 208,
      "qid": 1568355,
      "pageid": 294244,
      "title": "Dorsal_fin"
    }
  ]
}
```
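Because `start` and `end` are code point offsets into `paragraph_text`, and Python strings index by code points, anchor surface forms can be recovered by plain slicing. Below is a minimal sketch (not part of the original card); the Hub repository id `cyanic-selkie/wikianc-en` is inferred from the Croatian sibling dataset's name and should be treated as an assumption:

```python
# Minimal sketch: stream the test split and recover each anchor's surface
# form by slicing the paragraph text with its code point offsets.
# The repo id "cyanic-selkie/wikianc-en" is an assumption.
from datasets import load_dataset

dataset = load_dataset("cyanic-selkie/wikianc-en", split="test", streaming=True)

for example in dataset:
    text = example["paragraph_text"]
    for anchor in example["paragraph_anchors"]:
        mention = text[anchor["start"]:anchor["end"]]  # inclusive start, exclusive end
        # `qid` and `pageid` may be None; `title` is always present
        print(f"{mention!r} -> {anchor['title']} (QID: {anchor['qid']})")
    break  # one paragraph is enough for illustration
```

With `streaming=True`, iteration starts immediately instead of first downloading roughly 9 GB of parquet.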
### Data Fields

- `uuid`: a UTF-8 encoded string representing a v4 UUID that uniquely identifies the example
- `article_title`: an NFC normalized, UTF-8 encoded Wikipedia title of the article; spaces are replaced with underscores
- `article_pageid`: an integer representing the Wikipedia pageID of the article
- `article_qid`: a nullable integer representing the Wikidata QID this article refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `section_heading`: a nullable, NFC normalized, UTF-8 encoded string representing the section heading
- `section_level`: an integer representing the level of the section in the section hierarchy
- `paragraph_text`: an NFC normalized, UTF-8 encoded string representing the paragraph
- `paragraph_anchors`: a list of structs representing anchors (see the tagging sketch after this list); each anchor has:
  - `start`: an integer representing the inclusive starting UTF-8 code point of the anchor
  - `end`: an integer representing the exclusive ending UTF-8 code point of the anchor
  - `qid`: a nullable integer representing the Wikidata QID this anchor refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
  - `pageid`: a nullable integer representing the Wikipedia pageID of the anchor; it can be null if the article didn't exist in Wikipedia at the time of the creation of the original dataset
  - `title`: an NFC normalized, UTF-8 encoded string representing the Wikipedia title of the anchor; spaces are replaced with underscores; it can refer to a nonexistent Wikipedia article
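For the token-classification framing named under [Supported Tasks](#supported-tasks), the character-level anchor spans can be projected onto tokens. The sketch below is illustrative only, not the reference preprocessing: it uses naive whitespace tokenization and tags only anchors that carry a QID.

```python
# Illustrative sketch: project character-level anchor spans onto whitespace
# tokens as BIO tags, one way to cast wikification as token classification.
import re

def paragraph_to_bio(text, anchors):
    # token = (start offset, end offset, surface form)
    tokens = [(m.start(), m.end(), m.group()) for m in re.finditer(r"\S+", text)]
    tags = ["O"] * len(tokens)
    for anchor in anchors:
        if anchor["qid"] is None:
            continue  # no Wikidata entity to link to
        # a token belongs to the anchor if its span overlaps [start, end)
        overlapping = [i for i, (s, e, _) in enumerate(tokens)
                       if s < anchor["end"] and e > anchor["start"]]
        for rank, i in enumerate(overlapping):
            tags[i] = "B-ANCHOR" if rank == 0 else "I-ANCHOR"
    return [(tok, tag) for (_, _, tok), tag in zip(tokens, tags)]
```

A real pipeline would typically tokenize with the downstream model's tokenizer and carry the QID along as the label; the overlap logic stays the same.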
### Data Splits

The data is split into training, validation, and test sets; paragraphs belonging to the same article aren't necessarily in the same split. The final split sizes are as follows:

|                                   | Train      | Validation | Test       |
| :-------------------------------- | ---------: | ---------: | ---------: |
| WikiAnc EN - articles             | 5,883,342  | 2,374,055  | 2,375,830  |
| WikiAnc EN - paragraphs           | 34,555,183 | 4,317,326  | 4,321,613  |
| WikiAnc EN - anchors              | 87,060,158 | 10,876,572 | 10,883,232 |
| WikiAnc EN - anchors with QIDs    | 85,414,610 | 10,671,262 | 10,677,412 |
| WikiAnc EN - anchors with pageIDs | 85,421,513 | 10,672,138 | 10,678,262 |

**NOTE:** The article counts above are the numbers of articles that have at least one paragraph in the given split.
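Each split is stored as a single parquet file (`train.parquet`, `validation.parquet`, `test.parquet`, added in this commit), so the data can also be read without the `datasets` library. A sketch using `pyarrow` and `huggingface_hub`'s `HfFileSystem`; the repository id is again an assumption:

```python
# Sketch: read one split's parquet file directly, selecting only two columns
# to keep the transfer small. Assumes the repo id "cyanic-selkie/wikianc-en".
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
with fs.open("datasets/cyanic-selkie/wikianc-en/validation.parquet", "rb") as f:
    table = pq.read_table(f, columns=["uuid", "paragraph_text"])

print(table.num_rows)  # per the table above: 4,317,326 validation paragraphs
```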
## Additional Information

### Licensing Information

The WikiAnc EN dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
test.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d07e4660df13ba790953a0e513d80d0f35dcb5738e0844bc4fda03d939095ac
size 985623909
train.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5485d2e6603871cc1f163f90dae983cf3dcf932a14e6c2f9e3f3ddee1537f10f
size 7240382192
validation.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0e85ff09cbfabbcd4e86d051fe20a4125c61565aaca74397c6b73d9d055187e
size 985515258