---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NLPre-PL_dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- National Corpus of Polish
- Narodowy Korpus Języka Polskiego
- Universal Dependencies
task_categories:
- token-classification
task_ids:
- part-of-speech
- lemmatization
- parsing
dataset_info:
- config_name: nlprepl_by_name
  features:
  - name: idx
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: lemmas
    sequence: string
  - name: upos
    sequence:
      class_label:
        names:
          '0': NOUN
          '1': PUNCT
          '2': ADP
          '3': NUM
          '4': SYM
          '5': SCONJ
          '6': ADJ
          '7': PART
          '8': DET
          '9': CCONJ
          '10': PROPN
          '11': PRON
          '12': X
          '13': _
          '14': ADV
          '15': INTJ
          '16': VERB
          '17': AUX
  - name: xpos
    sequence: string
  - name: feats
    sequence: string
  - name: head
    sequence: string
  - name: deprel
    sequence: string
  - name: deps
    sequence: string
  - name: misc
    sequence: string
  splits:
  - name: train
    num_bytes: 0
    num_examples: 69360
  - name: dev
    num_bytes: 0
    num_examples: 7669
  - name: test
    num_bytes: 0
    num_examples: 8633
  download_size: 3088237
  dataset_size: 5120697
- config_name: nlprepl_by_type
  features:
  - name: idx
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: lemmas
    sequence: string
  - name: upos
    sequence:
      class_label:
        names:
          '0': NOUN
          '1': PUNCT
          '2': ADP
          '3': NUM
          '4': SYM
          '5': SCONJ
          '6': ADJ
          '7': PART
          '8': DET
          '9': CCONJ
          '10': PROPN
          '11': PRON
          '12': X
          '13': _
          '14': ADV
          '15': INTJ
          '16': VERB
          '17': AUX
  - name: xpos
    sequence: string
  - name: feats
    sequence: string
  - name: head
    sequence: string
  - name: deprel
    sequence: string
  - name: deps
    sequence: string
  - name: misc
    sequence: string
  splits:
  - name: train
    num_bytes: 0
    num_examples: 68943
  - name: dev
    num_bytes: 0
    num_examples: 7755
  - name: test
    num_bytes: 0
    num_examples: 8964
  download_size: 3088237
  dataset_size: 5120697
---
# Dataset Card for NLPre-PL – fairly divided version of NKJP1M

### Dataset Summary

This is the official NLPre-PL dataset: a uniformly, paragraph-level divided version of the NKJP1M corpus, the 1-million-token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego).

The NLPre dataset aims at fairly dividing the paragraphs, length-wise and topic-wise, into train, development, and test sets. This ensures a similar distribution of segment counts per paragraph across the subsets and avoids situations where paragraphs with a small (or large) number of segments appear only in, e.g., the test set.

We treat paragraphs as indivisible units (to ensure there is no data leakage between the splits). Each paragraph inherits the ID and type (a book, an article, etc.) of the document it comes from.

We provide two variations of the dataset, based on the fair division of paragraphs:
- fair by document's ID
- fair by document's type

### Creation of the dataset

We investigate the distribution of the number of segments per paragraph. Since it is Gaussian-like, we divide the paragraphs into 10 buckets of roughly similar size and then sample from them with ratios of 0.8 : 0.1 : 0.1
(corresponding to the training, development, and test subsets).
This data selection technique ensures a similar distribution of segment counts per paragraph in all three subsets. We call this split **fair_by_name** (shortly: **by_name**)
since it is divided equitably with regard to the unique IDs of the documents.

For creating our second split, we also consider the type of document a paragraph belongs to. We first group paragraphs into categories equal to the document types,
and then we repeat the above-mentioned procedure per category. This provides us with a second split: **fair_by_type** (shortly: **by_type**).
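
The bucket-and-sample procedure described above can be sketched roughly as follows (a simplified illustration, not the authors' actual script; the helper name `fair_split` and the toy paragraphs are invented for the example):

```python
import random

def fair_split(paragraphs, n_buckets=10, ratios=(0.8, 0.1, 0.1), seed=0):
    """Divide paragraphs into train/dev/test so that the distribution of
    paragraph lengths (segment counts) is similar in all three subsets."""
    rng = random.Random(seed)
    # Order paragraphs by segment count and cut into buckets of similar size.
    ordered = sorted(paragraphs, key=len)
    bucket_size = max(1, len(ordered) // n_buckets)
    buckets = [ordered[i:i + bucket_size]
               for i in range(0, len(ordered), bucket_size)]
    train, dev, test = [], [], []
    for bucket in buckets:
        rng.shuffle(bucket)
        n_train = round(ratios[0] * len(bucket))
        n_dev = round(ratios[1] * len(bucket))
        train += bucket[:n_train]
        dev += bucket[n_train:n_train + n_dev]
        test += bucket[n_train + n_dev:]
    return train, dev, test

# Toy data: 100 "paragraphs" with between 1 and 20 segments each.
paras = [["seg"] * (i % 20 + 1) for i in range(100)]
train, dev, test = fair_split(paras)
```

Sampling per bucket (rather than globally) is what keeps short and long paragraphs represented in all three subsets in similar proportions.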

### Supported Tasks and Leaderboards

This resource can mainly be used for training morphosyntactic analyzer models for Polish. It supports tasks such as lemmatization, part-of-speech tagging, and dependency parsing.

### Supported versions

This dataset is available in two tagsets and three file formats.

Tagsets:
- UD
- NKJP

File formats:
- conllu
- conll
- conll with SpaceAfter token

All the available combinations can be found below:

- fair_by_name + nkjp tagset + conllu format

```
load_dataset("nlprepl", name="by_name-nkjp-conllu")
```

- fair_by_name + nkjp tagset + conll format

```
load_dataset("nlprepl", name="by_name-nkjp-conll")
```

- fair_by_name + nkjp tagset + conll-SpaceAfter format

```
load_dataset("nlprepl", name="by_name-nkjp-conll_space_after")
```

- fair_by_name + UD tagset + conllu format

```
load_dataset("nlprepl", name="by_name-ud-conllu")
```

- fair_by_type + nkjp tagset + conllu format

```
load_dataset("nlprepl", name="by_type-nkjp-conllu")
```

- fair_by_type + nkjp tagset + conll format

```
load_dataset("nlprepl", name="by_type-nkjp-conll")
```

- fair_by_type + nkjp tagset + conll-SpaceAfter format

```
load_dataset("nlprepl", name="by_type-nkjp-conll_space_after")
```

- fair_by_type + UD tagset + conllu format

```
load_dataset("nlprepl", name="by_type-ud-conllu")
```
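
In the configurations described in the YAML header above, `upos` is stored as a sequence of class-label integers. With the `datasets` library these can be decoded via the `int2str` method of the `upos` feature; a dependency-free sketch of the same mapping (the example label indices are made up):

```python
# The upos class labels, in the order declared in the dataset header above.
UPOS_NAMES = ["NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART",
              "DET", "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ",
              "VERB", "AUX"]

def decode_upos(label_ids):
    """Map class-label integers back to their UPOS tag strings."""
    return [UPOS_NAMES[i] for i in label_ids]

print(decode_upos([9, 16, 2, 0, 1]))  # → ['CCONJ', 'VERB', 'ADP', 'NOUN', 'PUNCT']
```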

### Languages

Polish (monolingual)

## Dataset Structure

### Data Instances


Each instance follows this feature schema:

```
{
    "sent_id": datasets.Value("string"),
    "text": datasets.Value("string"),
    "id": datasets.Sequence(datasets.Value("string")),
    "tokens": datasets.Sequence(datasets.Value("string")),
    "lemmas": datasets.Sequence(datasets.Value("string")),
    "upos": datasets.Sequence(datasets.Value("string")),
    "xpos": datasets.Sequence(datasets.Value("string")),
    "feats": datasets.Sequence(datasets.Value("string")),
    "head": datasets.Sequence(datasets.Value("string")),
    "deprel": datasets.Sequence(datasets.Value("string")),
    "deps": datasets.Sequence(datasets.Value("string")),
    "misc": datasets.Sequence(datasets.Value("string")),
}
```

An example instance:

```
{
 'sent_id': '3',
 'text': 'I zawrócił na rzekę.',
 'orig_file_sentence': '030-2-000000002#2-3',
 'id': ['1', '2', '3', '4', '5'],
 'tokens': ['I', 'zawrócił', 'na', 'rzekę', '.'],
 'lemmas': ['i', 'zawrócić', 'na', 'rzeka', '.'],
 'upos': ['conj', 'praet', 'prep', 'subst', 'interp'],
 'xpos': ['con', 'praet:sg:m1:perf', 'prep:acc', 'subst:sg:acc:f', 'interp'],
 'feats': ['', 'sg|m1|perf', 'acc', 'sg|acc|f', ''],
 'head': ['0', '1', '2', '3', '1'],
 'deprel': ['root', 'conjunct', 'adjunct', 'comp', 'punct'],
 'deps': ['', '', '', '', ''],
 'misc': ['', '', '', '', '']
}
```

### Data Fields

- `sent_id`, `text`, `orig_file_sentence` (strings): the sentence text and the XML identifiers of the source document, paragraph, and sentence in NKJP. (These allow mapping a data point back to the source corpus and identifying paragraphs/samples.)
- `id` (sequence of strings): IDs of the corresponding tokens.
- `tokens` (sequence of strings): tokens of the text, segmented as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `upos` (sequence of strings): universal part-of-speech tags corresponding to the tokens.
- `xpos` (sequence of strings): optional language-specific (or treebank-specific) part-of-speech / morphological tags; underscore if not available.
- `feats` (sequence of strings): morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
- `head` (sequence of strings): head of the current word, which is either a value of `id` or zero (0).
- `deprel` (sequence of strings): universal dependency relation to the head of the token.
- `deps` (sequence of strings): enhanced dependency graph in the form of a list of head-deprel pairs.
- `misc` (sequence of strings): any other annotation (most commonly the SpaceAfter tag).
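
Since `head` values are 1-based token IDs (with `0` marking the root), the dependency tree of a sentence can be reconstructed directly from `tokens`, `head`, and `deprel`. A small sketch using the example instance from above:

```python
tokens = ["I", "zawrócił", "na", "rzekę", "."]
heads = ["0", "1", "2", "3", "1"]
deprels = ["root", "conjunct", "adjunct", "comp", "punct"]

# Pair every token with its dependency relation and its head token
# ("ROOT" when the head index is 0).
edges = [
    (tok, rel, "ROOT" if head == "0" else tokens[int(head) - 1])
    for tok, head, rel in zip(tokens, heads, deprels)
]
print(edges)  # [('I', 'root', 'ROOT'), ('zawrócił', 'conjunct', 'I'), ...]
```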


### Data Splits

#### Fair_by_name

|           | Train  | Validation | Test   |
| -----     | ------ | -----      | ----   |
| sentences | 69360  | 7669       |  8633  |
| tokens    | 984077 | 109900     | 121907 |

#### Fair_by_type

|           | Train  | Validation | Test   |
| -----     | ------ | -----      | ----   |
| sentences | 68943  | 7755       |  8964  |
| tokens    | 978371 | 112454     | 125059 |
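
A quick sanity check confirms that the sentence counts follow the intended 0.8 : 0.1 : 0.1 ratio (numbers taken from the fair_by_name table):

```python
train, dev, test = 69_360, 7_669, 8_633
total = train + dev + test
ratios = [round(n / total, 2) for n in (train, dev, test)]
print(total, ratios)  # 85662 [0.81, 0.09, 0.1]
```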


## Licensing Information

![Creative Commons License](https://i.creativecommons.org/l/by/4.0/80x15.png) This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).

