---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - pl
license:
  - cc-by-4.0
multilinguality:
  - monolingual
pretty_name: NLPre-PL_dataset
size_categories:
  - 10K<n<100K
source_datasets:
  - original
tags:
  - National Corpus of Polish
  - Narodowy Korpus Języka Polskiego
task_categories:
  - token-classification
task_ids:
  - part-of-speech
  - lemmatization
  - parsing
dataset_info:
  - config_name: nlprepl_by_name
    features:
      - name: idx
        dtype: string
      - name: text
        dtype: string
      - name: tokens
        sequence: string
      - name: lemmas
        sequence: string
      - name: upos
        sequence:
          class_label:
            names:
              '0': NOUN
              '1': PUNCT
              '2': ADP
              '3': NUM
              '4': SYM
              '5': SCONJ
              '6': ADJ
              '7': PART
              '8': DET
              '9': CCONJ
              '10': PROPN
              '11': PRON
              '12': X
              '13': _
              '14': ADV
              '15': INTJ
              '16': VERB
              '17': AUX
      - name: xpos
        sequence: string
      - name: feats
        sequence: string
      - name: head
        sequence: string
      - name: deprel
        sequence: string
      - name: deps
        sequence: string
      - name: misc
        sequence: string
    splits:
      - name: train
        num_bytes: 3523113
        num_examples: 1315
      - name: validation
        num_bytes: 547285
        num_examples: 194
      - name: test
        num_bytes: 1050299
        num_examples: 425
    download_size: 3088237
    dataset_size: 5120697
  - config_name: nlprepl_by_type
    features:
      - name: idx
        dtype: string
      - name: text
        dtype: string
      - name: tokens
        sequence: string
      - name: lemmas
        sequence: string
      - name: upos
        sequence:
          class_label:
            names:
              '0': NOUN
              '1': PUNCT
              '2': ADP
              '3': NUM
              '4': SYM
              '5': SCONJ
              '6': ADJ
              '7': PART
              '8': DET
              '9': CCONJ
              '10': PROPN
              '11': PRON
              '12': X
              '13': _
              '14': ADV
              '15': INTJ
              '16': VERB
              '17': AUX
      - name: xpos
        sequence: string
      - name: feats
        sequence: string
      - name: head
        sequence: string
      - name: deprel
        sequence: string
      - name: deps
        sequence: string
      - name: misc
        sequence: string
    splits:
      - name: train
        num_bytes: 3523113
        num_examples: 1315
      - name: validation
        num_bytes: 547285
        num_examples: 194
      - name: test
        num_bytes: 1050299
        num_examples: 425
    download_size: 3088237
    dataset_size: 5120697
---

# Dataset Card for NLPre-PL – a fairly divided version of NKJP1M

## Dataset Summary

This is the official NLPre-PL dataset – a version of the NKJP1M corpus, the 1-million-token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego), divided uniformly at the paragraph level.

The NLPre-PL dataset divides paragraphs fairly, length-wise and topic-wise, into training, development, and test sets. This ensures a similar distribution of segment counts per paragraph across the splits and avoids the situation where paragraphs with a small (or large) number of segments appear only in, e.g., the test set.

We treat paragraphs as indivisible units, which guarantees there is no data leakage between the splits. Each paragraph inherits its source document's ID and type (a book, an article, etc.).

We provide two variants of the dataset, based on this fair division of paragraphs:

- fair by document's ID
- fair by document's type
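Both variants can be fetched with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is published on the Hub – the repository ID below is a placeholder, and the helper function is ours, not part of the dataset:

```python
# Configuration names as declared in this card's metadata.
CONFIGS = ("nlprepl_by_name", "nlprepl_by_type")

# Placeholder -- replace with the actual Hub repository ID hosting this dataset.
REPO_ID = "nlprepl"


def load_nlprepl(config: str = "nlprepl_by_name", split: str = "train"):
    """Fetch one configuration/split of NLPre-PL from the Hugging Face Hub."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config {config!r}, expected one of {CONFIGS}")
    # Imported lazily so the module loads even without `datasets` installed.
    from datasets import load_dataset  # pip install datasets
    return load_dataset(REPO_ID, name=config, split=split)
```

Each returned split exposes the features listed in the metadata above (`tokens`, `lemmas`, `upos`, etc.).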

## Creation of the dataset

We investigate the distribution of the number of segments per paragraph. Since this distribution is roughly Gaussian, we divide the paragraphs into 10 buckets of roughly equal size and then sample from each bucket with ratios of 0.8 : 0.1 : 0.1 (corresponding to the training, development, and test subsets). This selection technique ensures a similar distribution of segment counts per paragraph across the three subsets. We call the result `fair_by_name` (shortly: `by_name`), since the division is made equitably with respect to the unique IDs of the documents.

For the second split, we also consider the type of document a paragraph belongs to: we first group paragraphs into categories corresponding to the document types and then repeat the procedure above within each category. This yields the second split: `fair_by_type` (shortly: `by_type`).
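The bucketing-and-sampling procedure can be sketched as follows. This is an illustration of the technique, not the original NLPre-PL code; the function name and the (paragraph ID, segment count) input format are our own:

```python
import random


def fair_split(paragraphs, n_buckets=10, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split paragraphs into train/dev/test with similar length distributions.

    `paragraphs` is a list of (paragraph_id, n_segments) pairs. Paragraphs
    are sorted by segment count, cut into `n_buckets` quantile buckets, and
    each bucket is sampled with the given train/dev/test ratios, so every
    split sees a similar range of paragraph lengths.
    """
    rng = random.Random(seed)
    ordered = sorted(paragraphs, key=lambda p: p[1])
    bucket_size = max(1, len(ordered) // n_buckets)
    splits = {"train": [], "dev": [], "test": []}
    for i in range(0, len(ordered), bucket_size):
        bucket = ordered[i:i + bucket_size]
        rng.shuffle(bucket)
        n_train = round(ratios[0] * len(bucket))
        n_dev = round(ratios[1] * len(bucket))
        splits["train"] += bucket[:n_train]
        splits["dev"] += bucket[n_train:n_train + n_dev]
        splits["test"] += bucket[n_train + n_dev:]
    return splits
```

For the `by_type` variant, the same function would simply be applied once per document-type category and the results concatenated.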

## Supported Tasks and Leaderboards

This resource is mainly intended for training morphosyntactic analysis models for Polish. It supports tasks such as lemmatization, part-of-speech tagging, and dependency parsing.

## Languages

Polish (monolingual)

## Dataset Structure

### Data Instances

{'nkjp_text': 'NKJP_1M_1102000002',
 'nkjp_par': 'morph_1-p',
 'nkjp_sent': 'morph_1.18-s',
 'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
 'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
 'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
 'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
 'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
 'nps': [False, False, False, False, True, False, False, False, False, True],
 'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}

### Data Fields

- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the current text (document), paragraph, and sentence in NKJP. These allow mapping a data point back to the source corpus and identifying paragraphs/samples.
- `tokens` (sequence of strings): tokens of the text, defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morphosyntactic tags according to the Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes). All verbal and deverbal flexemic classes are mapped to V, nominal to N, adjectival to A, and "strange" ones (abbreviations, alien elements, symbols, emoji, …) to X; the rest are as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text.
- `nkjp_ids` (sequence of strings): XML identifiers of the individual tokens in NKJP (probably overkill).
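The `nps` flags make it easy to reconstruct the original surface text. A minimal sketch using the instance shown above (the helper name is ours, not part of the dataset):

```python
def detokenize(tokens, nps):
    """Rebuild the source text from tokens and their no-preceding-space flags.

    A space is inserted before every token except the first one and those
    whose `nps` flag is True (i.e. not preceded by a space in the source).
    """
    parts = []
    for i, (token, no_space) in enumerate(zip(tokens, nps)):
        if i > 0 and not no_space:
            parts.append(" ")
        parts.append(token)
    return "".join(parts)


tokens = ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?']
nps = [False, False, False, False, True, False, False, False, False, True]
print(detokenize(tokens, nps))  # - Nie mam pieniędzy, da mi pani wywiad?
```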

### Data Splits

|           | Train  | Validation | Test   |
|-----------|--------|------------|--------|
| sentences | 68943  | 7755       | 8964   |
| tokens    | 978368 | 112454     | 125059 |

## Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).