---
annotations_creators:
- other
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: "Wikipedia pages chunked for fill-mask"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---

# Preprocessed version of rcds/wikipedia-persons-masked

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Contains ~70k Wikipedia pages, each describing a person. For each page, the person described in the text
is masked with a `<mask>` token, and the ground truth for every mask is provided.
Each row contains a chunk of a wiki page; the size parameter limits the maximum number of tokens per text chunk,
and for each chunk the expected name for every mask is given.

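A single row therefore pairs a masked text chunk with the names belonging to its masks. The row below is purely illustrative (not an actual entry from the dataset) and assumes the `texts`/`masks` field names listed under Data Fields:

```python
# Hypothetical example row (illustrative only, not taken from the dataset).
example = {
    "texts": "<mask> was a German-born theoretical physicist. In 1921, "
             "<mask> received the Nobel Prize in Physics.",
    "masks": ["Albert Einstein", "Albert Einstein"],
}
```
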
### Supported Tasks and Leaderboards

The dataset supports the fill-mask task, but it can also be used for other tasks such as question answering,
e.g. "Who is `<mask>`?"

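As a rough sketch of the fill-mask use case (not an official usage example from this card), any masked language model whose mask token is `<mask>`, such as `xlm-roberta-base`, can be queried on a masked sentence; the sentence here is illustrative, not a dataset row:

```python
# Minimal fill-mask sketch; the input sentence is made up for illustration.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")  # uses <mask> as its mask token

predictions = fill_mask("<mask> was the first person to walk on the Moon.", top_k=5)
for p in predictions:
    print(p["token_str"], p["score"])
```
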
### Languages

*English only*

## Dataset Structure

In `/data` you find different versions of the full dataset: the original and paraphrased texts, each chunked to 4096 and 512 tokens.

Use the dataset like this:

```python
from datasets import load_dataset

dataset = load_dataset('rcds/wikipedia-persons-masked', split='train', type='original', size='512')
```

### Data Fields

Columns are:

- `texts`: the text chunks
- `masks`: the names for each of the masks in the chunk

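As a quick sanity-check sketch (assuming the loading call from the usage example above works in your environment, and that `masks` is a list of names), the number of `<mask>` tokens in `texts` should line up with the length of `masks`:

```python
# Sketch: compare the number of <mask> tokens in a chunk with the number of
# ground-truth names. Assumes the load_dataset call from the usage example above
# and that `masks` is a list of names, as described in Data Fields.
from datasets import load_dataset

dataset = load_dataset('rcds/wikipedia-persons-masked', split='train', type='original', size='512')

row = dataset[0]
n_masks_in_text = row['texts'].count('<mask>')
print(n_masks_in_text, len(row['masks']))
```
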
### Data Splits

There are no predefined splits; the dataset ships with a single default `train` split.

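If you need a held-out set, you can carve one out of the single train split yourself, for example with the built-in `train_test_split` from `datasets` (shown here as a sketch with an arbitrary 10% test fraction):

```python
# Sketch: derive your own evaluation split from the single "train" split.
from datasets import load_dataset

dataset = load_dataset('rcds/wikipedia-persons-masked', split='train', type='original', size='512')
splits = dataset.train_test_split(test_size=0.1, seed=42)

train_ds, eval_ds = splits['train'], splits['test']
print(len(train_ds), len(eval_ds))
```
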
## Dataset Creation

Created by using the tokenizer from allenai/longformer-base-4096 for the 4096-token-per-chunk version,
and the xlm-roberta-large tokenizer for the 512-token version. Chunks are split to fit those token budgets,
and the splitting ensures no words are cut in half.
Possible improvement: the last chunk of a page can be much shorter than the limit; it could be merged with part of the previous chunk so that the final chunk contains more tokens.

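A minimal sketch of this kind of chunking, assuming whitespace word splitting and the xlm-roberta-large tokenizer mentioned above (the dataset's actual preprocessing script may differ):

```python
# Sketch: greedily pack whole words into chunks whose tokenized length stays
# below a budget, so no word is ever split across two chunks.
# Illustration of the idea only, not the dataset's actual preprocessing code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    chunks, current_words, current_len = [], [], 0
    for word in text.split():
        # Token count of the word on its own; per-word tokenization is an
        # approximation of the full-text token count.
        word_len = len(tokenizer(word, add_special_tokens=False)["input_ids"])
        if current_words and current_len + word_len > max_tokens:
            chunks.append(" ".join(current_words))
            current_words, current_len = [], 0
        current_words.append(word)
        current_len += word_len
    if current_words:
        chunks.append(" ".join(current_words))
    return chunks
```
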
### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the CC BY 4.0 license, as stated in the metadata above.

### Citation Information

```
TODO add citation
```

### Contributions

Thanks to [@skatinger](https://github.com/skatinger) for adding this dataset.