|
--- |
|
annotations_creators: |
|
- expert-generated |
|
language_creators: |
|
- expert-generated |
|
language: |
|
- pl |
|
license: |
|
- cc-by-4.0 |
|
multilinguality: |
|
- monolingual |
|
pretty_name: NLPre-PL
|
size_categories: |
|
- 10K<n<100K |
|
source_datasets: |
|
- original |
|
tags: |
|
- National Corpus of Polish |
|
- Narodowy Korpus Języka Polskiego |
|
- Universal Dependencies |
|
task_categories: |
|
- token-classification |
|
task_ids: |
|
- part-of-speech |
|
- lemmatization |
|
- parsing |
|
dataset_info: |
|
- config_name: nlprepl_by_name |
|
features: |
|
- name: idx |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: tokens |
|
sequence: string |
|
- name: lemmas |
|
sequence: string |
|
- name: upos |
|
sequence: |
|
class_label: |
|
names: |
|
'0': NOUN |
|
'1': PUNCT |
|
'2': ADP |
|
'3': NUM |
|
'4': SYM |
|
'5': SCONJ |
|
'6': ADJ |
|
'7': PART |
|
'8': DET |
|
'9': CCONJ |
|
'10': PROPN |
|
'11': PRON |
|
'12': X |
|
'13': _ |
|
'14': ADV |
|
'15': INTJ |
|
'16': VERB |
|
'17': AUX |
|
- name: xpos |
|
sequence: string |
|
- name: feats |
|
sequence: string |
|
- name: head |
|
sequence: string |
|
- name: deprel |
|
sequence: string |
|
- name: deps |
|
sequence: string |
|
- name: misc |
|
sequence: string |
|
splits: |
|
- name: train |
|
num_bytes: 0 |
|
num_examples: 69360 |
|
- name: dev |
|
num_bytes: 0 |
|
num_examples: 7669 |
|
- name: test |
|
num_bytes: 0 |
|
num_examples: 8633 |
|
download_size: 3088237 |
|
dataset_size: 5120697 |
|
- config_name: nlprepl_by_type |
|
features: |
|
- name: idx |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: tokens |
|
sequence: string |
|
- name: lemmas |
|
sequence: string |
|
- name: upos |
|
sequence: |
|
class_label: |
|
names: |
|
'0': NOUN |
|
'1': PUNCT |
|
'2': ADP |
|
'3': NUM |
|
'4': SYM |
|
'5': SCONJ |
|
'6': ADJ |
|
'7': PART |
|
'8': DET |
|
'9': CCONJ |
|
'10': PROPN |
|
'11': PRON |
|
'12': X |
|
'13': _ |
|
'14': ADV |
|
'15': INTJ |
|
'16': VERB |
|
'17': AUX |
|
- name: xpos |
|
sequence: string |
|
- name: feats |
|
sequence: string |
|
- name: head |
|
sequence: string |
|
- name: deprel |
|
sequence: string |
|
- name: deps |
|
sequence: string |
|
- name: misc |
|
sequence: string |
|
splits: |
|
- name: train |
|
num_bytes: 0 |
|
num_examples: 68943 |
|
- name: dev |
|
num_bytes: 0 |
|
num_examples: 7755 |
|
- name: test |
|
num_bytes: 0 |
|
num_examples: 8964 |
|
download_size: 3088237 |
|
dataset_size: 5120697 |
|
--- |
|
# Dataset Card for NLPre-PL – a fairly divided version of NKJP1M
|
|
|
### Dataset Summary |
|
|
|
This is the official NLPre-PL dataset – a version of the NKJP1M corpus (the 1-million-token balanced subcorpus of the National Corpus of Polish, Narodowy Korpus Języka Polskiego) divided uniformly at the paragraph level.
|
|
|
The NLPre-PL dataset aims at dividing the paragraphs fairly, length-wise and topic-wise, into train, development, and test sets. This ensures a similar distribution of segment counts per paragraph across the subsets and avoids a situation in which paragraphs with a small (or large) number of segments appear only in, e.g., the test set.
|
|
|
We treat paragraphs as indivisible units (to ensure there is no data leakage between the splits). Each paragraph inherits the corresponding document's ID and type (a book, an article, etc.).
|
|
|
We provide two variants of the dataset, based on how the paragraphs are fairly divided:

- fair by document ID

- fair by document type
|
|
|
### Creation of the dataset |
|
|
|
We investigate the distribution of the number of segments per paragraph. Since it is Gaussian-like, we divide the paragraphs into 10 buckets of roughly similar size and then sample from each bucket with ratios of 0.8 : 0.1 : 0.1 (corresponding to the training, development, and test subsets). This selection technique ensures a similar distribution of segment counts per paragraph across the three subsets. We call the resulting split **fair_by_name** (shortly: **by_name**), since it divides the data equitably with respect to the unique IDs of the documents.
|
|
|
For the second split, we additionally consider the type of document a paragraph belongs to. We first group paragraphs into categories corresponding to the document types, and then repeat the above procedure per category. This yields our second split: **fair_by_type** (shortly: **by_type**).
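A minimal sketch of this bucketing procedure, under the assumption that each paragraph is given as a list of segments (names such as `fair_split` and `n_buckets` are illustrative, not from the released tooling):

```
import random

def fair_split(paragraphs, n_buckets=10, ratios=(0.8, 0.1, 0.1), seed=0):
    """Bucket paragraphs by segment count, then sample each bucket 8:1:1."""
    rng = random.Random(seed)
    # Sort by segment count so each bucket holds paragraphs of similar length.
    ordered = sorted(paragraphs, key=len)
    bucket_size = max(1, len(ordered) // n_buckets)
    train, dev, test = [], [], []
    for start in range(0, len(ordered), bucket_size):
        bucket = ordered[start:start + bucket_size]
        rng.shuffle(bucket)
        n_train = round(ratios[0] * len(bucket))
        n_dev = round(ratios[1] * len(bucket))
        train += bucket[:n_train]
        dev += bucket[n_train:n_train + n_dev]
        test += bucket[n_train + n_dev:]
    return train, dev, test
```

For **fair_by_type**, the same function would simply be applied separately to each group of paragraphs sharing a document type, and the per-group splits concatenated.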
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
This resource is mainly intended for training morphosyntactic analysis models for Polish. It supports tasks such as lemmatization, part-of-speech tagging, and dependency parsing.
|
|
|
### Supported versions |
|
|
|
This dataset is available in two tagsets and three file formats.
|
|
|
Tagsets: |
|
- UD |
|
- NKJP |
|
|
|
File formats: |
|
- conllu |
|
- conll |
|
- conll with SpaceAfter annotation
|
|
|
All the available combinations are listed below; each snippet assumes `from datasets import load_dataset`:
|
|
|
- fair_by_name + nkjp tagset + conllu format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_name-nkjp-conllu") |
|
``` |
|
|
|
- fair_by_name + nkjp tagset + conll format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_name-nkjp-conll") |
|
``` |
|
|
|
- fair_by_name + nkjp tagset + conll-SpaceAfter format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_name-nkjp-conll_space_after") |
|
``` |
|
|
|
- fair_by_name + UD tagset + conllu format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_name-nkjp-conllu") |
|
``` |
|
|
|
- fair_by_type + nkjp tagset + conllu format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_type-nkjp-conllu") |
|
``` |
|
|
|
- fair_by_type + nkjp tagset + conll format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_type-nkjp-conll") |
|
``` |
|
|
|
- fair_by_type + nkjp tagset + conll-SpaceAfter format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_type-nkjp-conll_space_after") |
|
``` |
|
|
|
- fair_by_type + UD tagset + conllu format |
|
|
|
``` |
|
load_dataset("nlprepl", name="by_type-nkjp-conllu") |
|
``` |
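Once loaded, each configuration behaves like any other `datasets` dataset. A minimal usage sketch (the configuration name is one of those listed above):

```
from datasets import load_dataset

dataset = load_dataset("nlprepl", name="by_name-nkjp-conllu")
example = dataset["train"][0]
print(example["text"])

# If `upos` is stored as class labels (as in the metadata above),
# map the label ids back to tag names; otherwise use the values directly.
upos = dataset["train"].features["upos"].feature
for token, lemma, tag in zip(example["tokens"], example["lemmas"], example["upos"]):
    print(token, lemma, upos.int2str(tag) if hasattr(upos, "int2str") else tag)
```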
|
|
|
### Languages |
|
|
|
Polish (monolingual) |
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
|
|
"sent_id": datasets.Value("string"), |
|
"text": datasets.Value("string"), |
|
"id": datasets.Value("string"), |
|
"tokens": datasets.Sequence(datasets.Value("string")), |
|
"lemmas": datasets.Sequence(datasets.Value("string")), |
|
"upos": datasets.Sequence(datasets.Value("string")), |
|
"xpos": datasets.Sequence(datasets.Value("string")), |
|
"feats": datasets.Sequence(datasets.Value("string")), |
|
"head": datasets.Sequence(datasets.Value("string")), |
|
"deprel": datasets.Sequence(datasets.Value("string")), |
|
"deps": datasets.Sequence(datasets.Value("string")), |
|
"misc" |
|
``` |
|
{ |
|
'sent_id': '3', |
|
'text': 'I zawrócił na rzekę.', |
|
'orig_file_sentence': '030-2-000000002#2-3', |
|
'id': ['1', '2', '3', '4', '5'],
|
'tokens': ['I', 'zawrócił', 'na', 'rzekę', '.'], |
|
'lemmas': ['i', 'zawrócić', 'na', 'rzeka', '.'], |
|
'upos': ['conj', 'praet', 'prep', 'subst', 'interp'], |
|
'xpos': ['con', 'praet:sg:m1:perf', 'prep:acc', 'subst:sg:acc:f', 'interp'], |
|
'feats': ['', 'sg|m1|perf', 'acc', 'sg|acc|f', ''], |
|
'head': ['0', '1', '2', '3', '1'], |
|
'deprel': ['root', 'conjunct', 'adjunct', 'comp', 'punct'], |
|
'deps': ['', '', '', '', ''],
|
'misc': ['', '', '', '', ''] |
|
} |
|
``` |
|
|
|
### Data Fields |
|
|
|
- `sent_id`, `text`, `orig_file_sentence` (strings): the sentence identifier, the raw sentence text, and the XML identifier of the corresponding text (document), paragraph, and sentence in NKJP. (These make it possible to map a data point back to the source corpus and to identify paragraphs/samples.)
|
- `id` (sequence of strings): IDs of the corresponding tokens.

- `tokens` (sequence of strings): tokens of the text, segmented as in NKJP.

- `lemmas` (sequence of strings): lemmas corresponding to the tokens.

- `upos` (sequence of strings): universal part-of-speech tags corresponding to the tokens.
|
- `xpos` (sequence of strings): optional language-specific (or treebank-specific) part-of-speech / morphological tags; underscore if not available.

- `feats` (sequence of strings): morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.

- `head` (sequence of strings): head of each token, which is either a value of `id` or zero (0).

- `deprel` (sequence of strings): universal dependency relation of each token to its HEAD.

- `deps` (sequence of strings): enhanced dependency graph in the form of a list of head-deprel pairs.

- `misc` (sequence of strings): any other annotation (most commonly the SpaceAfter tag).
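Since these fields correspond one-to-one to CoNLL-U columns, an instance can be serialized back into CoNLL-U form. A rough sketch (the `to_conllu` helper is illustrative, not shipped with the dataset; it assumes string-valued fields, so map `upos` class labels to names first if needed):

```
def to_conllu(example):
    """Render one example as CoNLL-U text; empty fields become underscores."""
    lines = [f"# sent_id = {example['sent_id']}", f"# text = {example['text']}"]
    columns = ("id", "tokens", "lemmas", "upos", "xpos",
               "feats", "head", "deprel", "deps", "misc")
    for row in zip(*(example[col] for col in columns)):
        lines.append("\t".join(str(value) or "_" for value in row))
    return "\n".join(lines)
```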
|
|
|
|
|
### Data Splits |
|
|
|
#### Fair_by_name |
|
|
|
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 69360 | 7669 | 8633 |
| tokens | 984077 | 109900 | 121907 |
|
|
|
#### Fair_by_type |
|
|
|
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 68943 | 7755 | 8964 |
| tokens | 978371 | 112454 | 125059 |
|
|
|
|
|
## Licensing Information |
|
|
|
![Creative Commons License](https://i.creativecommons.org/l/by/4.0/80x15.png) This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). |
|
|
|
|
|