---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NLPre-PL_dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- National Corpus of Polish
- Narodowy Korpus Języka Polskiego
task_categories:
- token-classification
task_ids:
- part-of-speech
- lemmatization
- parsing
dataset_info:
- config_name: nlprepl_by_name
  features:
  - name: idx
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: lemmas
    sequence: string
  - name: upos
    sequence:
      class_label:
        names:
          '0': NOUN
          '1': PUNCT
          '2': ADP
          '3': NUM
          '4': SYM
          '5': SCONJ
          '6': ADJ
          '7': PART
          '8': DET
          '9': CCONJ
          '10': PROPN
          '11': PRON
          '12': X
          '13': _
          '14': ADV
          '15': INTJ
          '16': VERB
          '17': AUX
  - name: xpos
    sequence: string
  - name: feats
    sequence: string
  - name: head
    sequence: string
  - name: deprel
    sequence: string
  - name: deps
    sequence: string
  - name: misc
    sequence: string
  splits:
  - name: train
    num_bytes: 3523113
    num_examples: 1315
  - name: validation
    num_bytes: 547285
    num_examples: 194
  - name: test
    num_bytes: 1050299
    num_examples: 425
  download_size: 3088237
  dataset_size: 5120697
- config_name: nlprepl_by_type
  features:
  - name: idx
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: lemmas
    sequence: string
  - name: upos
    sequence:
      class_label:
        names:
          '0': NOUN
          '1': PUNCT
          '2': ADP
          '3': NUM
          '4': SYM
          '5': SCONJ
          '6': ADJ
          '7': PART
          '8': DET
          '9': CCONJ
          '10': PROPN
          '11': PRON
          '12': X
          '13': _
          '14': ADV
          '15': INTJ
          '16': VERB
          '17': AUX
  - name: xpos
    sequence: string
  - name: feats
    sequence: string
  - name: head
    sequence: string
  - name: deprel
    sequence: string
  - name: deps
    sequence: string
  - name: misc
    sequence: string
  splits:
  - name: train
    num_bytes: 3523113
    num_examples: 1315
  - name: validation
    num_bytes: 547285
    num_examples: 194
  - name: test
    num_bytes: 1050299
    num_examples: 425
  download_size: 3088237
  dataset_size: 5120697
---
# Dataset Card for NLPre-PL – fairly divided version of NKJP1M

### Dataset Summary

This is the official NLPre-PL dataset – a uniformly divided, paragraph-level version of the NKJP1M corpus, i.e. the 1-million-token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego).

The NLPre dataset aims at dividing the paragraphs fairly, length-wise and topic-wise, into train, development, and test sets. Thus, we ensure a similar distribution of the number of segments per paragraph in each subset and avoid the situation where paragraphs with a small (or large) number of segments are available only, e.g., at test time.

We treat paragraphs as indivisible units (to ensure there is no data leakage between the subsets). Each paragraph inherits the corresponding document's ID and type (a book, an article, etc.).

We provide two variants of the dataset, based on the fair division of paragraphs (see the loading sketch below):
- fair by document's ID
- fair by document's type
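
Both variants can be loaded by configuration name with the Hugging Face `datasets` library. A minimal sketch – the repository path is a placeholder to be replaced with this dataset's actual id; the configuration names `nlprepl_by_name` and `nlprepl_by_type` come from the YAML header above:

```python
from datasets import load_dataset

# "<namespace>/nlprepl" is a placeholder: substitute the actual dataset id of this repository.
by_name = load_dataset("<namespace>/nlprepl", name="nlprepl_by_name")
by_type = load_dataset("<namespace>/nlprepl", name="nlprepl_by_type")

print(by_name)                        # DatasetDict with train/validation/test splits
print(by_name["train"][0]["tokens"])  # tokens of the first training example
```
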
### Creation of the dataset

We investigate the distribution of the number of segments per paragraph. Since it is Gaussian-like, we divide the paragraphs into 10 buckets of roughly similar size and then sample from each bucket with ratios of 0.8 : 0.1 : 0.1 (corresponding to the training, development, and test subsets). This data selection technique ensures a similar distribution of segment counts per paragraph across the three subsets. We call this split **fair_by_name** (shortly: **by_name**), since the data is divided equitably with respect to the unique IDs of the documents.

For our second split, we also consider the type of document a paragraph belongs to. We first group paragraphs into categories corresponding to the document types, and then repeat the above procedure per category. This yields the second split: **fair_by_type** (shortly: **by_type**).
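
The bucketing idea can be illustrated with a short sketch. This is not the original split-generation code, only an illustration under assumed inputs (a list of `(paragraph_id, n_segments)` pairs):

```python
import random
from collections import defaultdict

def fair_split(paragraphs, n_buckets=10, ratios=(0.8, 0.1, 0.1), seed=0):
    """Illustrative re-creation of the 'fair_by_name' idea: bucket paragraphs
    by segment count, then sample train/dev/test from every bucket."""
    rng = random.Random(seed)
    ordered = sorted(paragraphs, key=lambda p: p[1])  # sort by segment count
    bucket_size = max(1, len(ordered) // n_buckets)
    buckets = [ordered[i:i + bucket_size] for i in range(0, len(ordered), bucket_size)]

    splits = defaultdict(list)
    for bucket in buckets:
        rng.shuffle(bucket)
        n_train = int(len(bucket) * ratios[0])
        n_dev = int(len(bucket) * ratios[1])
        splits["train"].extend(bucket[:n_train])
        splits["validation"].extend(bucket[n_train:n_train + n_dev])
        splits["test"].extend(bucket[n_train + n_dev:])
    return splits

# For 'fair_by_type', the same procedure would be repeated within each document-type group.
```
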
### Supported Tasks and Leaderboards

This resource is mainly intended for training morphosyntactic analysis models for Polish. It supports tasks such as lemmatization, part-of-speech tagging, and dependency parsing.

### Languages

Polish (monolingual)

## Dataset Structure

### Data Instances

```
{'nkjp_text': 'NKJP_1M_1102000002',
 'nkjp_par': 'morph_1-p',
 'nkjp_sent': 'morph_1.18-s',
 'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
 'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
 'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
 'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
 'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
 'nps': [False, False, False, False, True, False, False, False, False, True],
 'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}
```
### Data Fields

- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the present text (document), paragraph and sentence in NKJP. (These allow mapping a data point back to the source corpus and identifying paragraphs/samples.)
- `tokens` (sequence of strings): tokens of the text, defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morphosyntactic tags according to the Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes): all verbal and deverbal flexemic classes are mapped to `V`, nominal ones to `N`, adjectival ones to `A`, "strange" ones (abbreviations, alien elements, symbols, emojis…) to `X`, and the rest as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text (see the sketch after this list).
- `nkjp_ids` (sequence of strings): XML identifiers of the particular tokens in NKJP (probably overkill).
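
Because `nps` marks tokens that are not preceded by a space, the surface text can be reconstructed from `tokens` and `nps`. A minimal sketch, using the field semantics described above and the values from the Data Instances example:

```python
def detokenize(tokens, nps):
    """Rebuild the surface string: insert a space before every token
    unless its `nps` flag says it was not preceded by a space."""
    parts = []
    for i, (token, no_space) in enumerate(zip(tokens, nps)):
        if i > 0 and not no_space:
            parts.append(" ")
        parts.append(token)
    return "".join(parts)

tokens = ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?']
nps = [False, False, False, False, True, False, False, False, False, True]
print(detokenize(tokens, nps))  # - Nie mam pieniędzy, da mi pani wywiad?
```
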
### Data Splits

|           | Train  | Validation | Test   |
| --------- | ------ | ---------- | ------ |
| sentences | 68943  | 7755       | 8964   |
| tokens    | 978368 | 112454     | 125059 |
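
The table can be compared against a loaded copy of the dataset. A sketch, assuming the dataset was loaded as `by_name` in the earlier example and that each example carries a `tokens` sequence as described above:

```python
# Count examples and tokens per split of the loaded DatasetDict.
for split_name, split in by_name.items():
    n_tokens = sum(len(example["tokens"]) for example in split)
    print(f"{split_name}: {len(split)} examples, {n_tokens} tokens")
```
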
## Licensing Information

![Creative Commons License](https://i.creativecommons.org/l/by/4.0/80x15.png) This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).

<!--
### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
-->