---
annotations_creators:
  - machine-generated
  - expert-generated
language_creators:
  - found
languages:
  - en
  - it
licenses:
  - private
multilinguality:
  - translation
pretty_name: htstyle-iknlp2022
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - translation
---

# Dataset Card for IK-NLP-22 Translator Stylometry

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Projects](#projects)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

### Dataset Summary

This dataset contains a sample of sentences taken from the FLORES-101 dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and the translators' behavioral data (keystrokes, pauses, editing times) were collected using the PET platform.

This dataset is made available for the final projects of the 2022 edition of the Natural Language Processing course of the Information Science Master's Degree at the University of Groningen, taught by Arianna Bisazza with the assistance of Gabriele Sarti.

**Disclaimer:** This repository does not provide direct data access, since the associated results are currently unpublished. For this reason, it is for now strictly forbidden to share or publish any of the data associated with this repository. Students will be provided with a compressed folder containing the data upon choosing a project based on this dataset. To load the dataset with 🤗 Datasets, download and unzip the provided folder and pass its path to `load_dataset`, as shown below.
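
A runnable version of the loading call (the `data_dir` value is a placeholder for the local path of the unzipped folder):

```python
from datasets import load_dataset

# Point data_dir to the unzipped folder distributed for the course projects.
dataset = load_dataset(
    "GroNLP/ik-nlp-22_htstyle",
    "main",
    data_dir="path/to/unzipped/folder",
)
print(dataset)  # DatasetDict with "train" and "test" splits
```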

### Projects

To be provided.

### Languages

The language data is in English (BCP-47 `en`) and Italian (BCP-47 `it`).

## Dataset Structure

### Data Instances

The dataset contains a single configuration, `main`, with two data splits: `train` and `test`.

### Data Fields

The following fields are contained in the dataset:

- `item`: The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit represents the sentence's position inside the document. Each document contains from 3 to 5 semantically-related sentences.
- `subject`: The identifier of the translator performing the translation from scratch or the post-editing task. Values: `t1`, `t2` or `t3`.
- `tasktype`: The setting of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate), `pe2` (post-editing mBART).
- `sl_text`: The original source text, extracted from Wikinews, Wikibooks or Wikivoyage.
- `mt_text`: Missing if `tasktype` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing.
- `tl_text`: The final sentence produced by the translator (either by translating `sl_text` from scratch or by post-editing `mt_text`).
- `len_sl_chr`: Length of the original source text, in characters.
- `len_tl_chr`: Length of the final translated text, in characters.
- `len_sl_wrd`: Length of the original source text, in words.
- `len_tl_wrd`: Length of the final translated text, in words.
- `edit_time`: Total editing time for the translation, in seconds.
- `k_total`: Total number of keystrokes for the translation.
- `k_letter`: Total number of letter keystrokes for the translation.
- `k_digit`: Total number of digit keystrokes for the translation.
- `k_white`: Total number of whitespace keystrokes for the translation.
- `k_symbol`: Total number of symbol (punctuation, etc.) keystrokes for the translation.
- `k_nav`: Total number of navigation keystrokes (left-right arrows, mouse clicks) for the translation.
- `k_erase`: Total number of erase keystrokes (backspace, delete) for the translation.
- `k_copy`: Total number of copy actions (Ctrl + C) during the translation.
- `k_cut`: Total number of cut actions (Ctrl + X) during the translation.
- `k_paste`: Total number of paste actions (Ctrl + V) during the translation.
- `np_300`: Number of pauses of 300 ms or more during the translation.
- `lp_300`: Total duration of pauses of 300 ms or more, in milliseconds.
- `np_1000`: Number of pauses of 1000 ms or more during the translation.
- `lp_1000`: Total duration of pauses of 1000 ms or more, in milliseconds.
- `mt_tl_bleu`: Sentence-level BLEU score between `mt_text` and `tl_text`, computed using the SacreBLEU library with default parameters.
- `mt_tl_chrf`: Sentence-level chrF score between `mt_text` and `tl_text`, computed using the SacreBLEU library with default parameters.
- `mt_tl_ter`: Sentence-level TER score between `mt_text` and `tl_text`, computed using the SacreBLEU library with default parameters (see the sketch after this list).
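
A minimal sketch of how these scores can be recomputed with SacreBLEU. Note that the hypothesis/reference direction (`mt_text` scored against `tl_text`) is our assumption based on the field naming; the example strings come from the train record shown further below:

```python
from sacrebleu.metrics import BLEU, CHRF, TER

mt_text = "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est."
tl_text = "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale."

# effective_order=True is SacreBLEU's recommendation for sentence-level BLEU;
# each call returns a score object that prints as "<METRIC> = <value> ...".
for metric in (BLEU(effective_order=True), CHRF(), TER()):
    print(metric.sentence_score(mt_text, [tl_text]))
```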

### Data Splits

| config | train | test |
|--------|------:|-----:|
| `main` |  1159 |  107 |

#### Train Split

The train split contains a total of 1159 triplets (or pairs, when the translation is performed from scratch) annotated with the behavioral data produced during translation. The following example from the train split shows subject `t3` post-editing a machine translation produced by mBART (`tasktype`: `pe2`):

```json
{
    "item": 1072,
    "subject": "t3",
    "tasktype": "pe2",
    "sl_text": "At the beginning dress was heavily influenced by the Byzantine culture in the east.",
    "mt_text": "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.",
    "tl_text": "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.",
    "len_sl_chr": 83,
    "len_tl_chr": 91,
    "len_sl_wrd": 14,
    "len_tl_wrd": 9,
    "edit_time": 45.687,
    "k_total": 51,
    "k_letter": 31,
    "k_digit": 0,
    "k_white": 2,
    "k_symbol": 3,
    "k_nav": 7,
    "k_erase": 3,
    "k_copy": 0,
    "k_cut": 0,
    "k_paste": 0,
    "np_300": 9,
    "lp_300": 40032,
    "np_1000": 5,
    "lp_1000": 38392,
    "mt_tl_bleu": 47.99,
    "mt_tl_chrf": 62.05,
    "mt_tl_ter": 44.44
}
```

The text is provided as-is, without further preprocessing or tokenization.
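
The behavioral fields lend themselves to simple derived measures that may be useful for the projects. A small illustrative sketch (the feature names and ratios below are our own, not dataset fields):

```python
def behavioral_features(record: dict) -> dict:
    """Derive illustrative per-sentence features; keys follow the dataset schema."""
    return {
        # Typing speed: characters of the final translation per second of editing
        "chars_per_second": record["len_tl_chr"] / record["edit_time"],
        # Share of keystrokes spent erasing text
        "erase_ratio": record["k_erase"] / record["k_total"],
        # Fraction of the editing time spent in pauses of 300 ms or more
        "pause_ratio": (record["lp_300"] / 1000) / record["edit_time"],
    }
```

Applied to the record above, this yields roughly 2 characters per second and a pause ratio of about 0.88.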

#### Test Split

The test split contains 107 entries following the same structure as `train`, with a few omissions:

- the `subject` field was omitted for the translator stylometry task (an illustrative baseline for this task is sketched after this list);
- the `tasktype`, `mt_text` and `mt_tl_*` evaluation metrics fields were omitted for the translation setting prediction task;
- the `edit_time`, `lp_300` and `lp_1000` fields were omitted for the translation time prediction task.
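
A minimal baseline sketch for the translator stylometry task, predicting `subject` from behavioral fields available in both splits (the feature selection and model choice are illustrative only; scikit-learn and pandas are assumed to be installed):

```python
from datasets import load_dataset
from sklearn.ensemble import RandomForestClassifier

# Fields present in both splits (edit_time, lp_300 and lp_1000 are omitted
# from the test split, so they are excluded here).
FEATURES = [
    "len_sl_chr", "len_tl_chr", "len_sl_wrd", "len_tl_wrd",
    "k_total", "k_letter", "k_nav", "k_erase", "np_300", "np_1000",
]

data = load_dataset("GroNLP/ik-nlp-22_htstyle", "main", data_dir="path/to/unzipped/folder")
train = data["train"].to_pandas()

clf = RandomForestClassifier(random_state=0)
clf.fit(train[FEATURES].fillna(0), train["subject"])

# The test split omits the subject field, so predictions cannot be scored locally.
preds = clf.predict(data["test"].to_pandas()[FEATURES].fillna(0))
```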

## Dataset Creation

The dataset was parsed from the PET XML files into CSV format using the scripts by Antonio Toral available at the following link: https://github.com/antot/postediting_novel_frontiers

## Additional Information

### Dataset Curators

For problems with this 🤗 Datasets version, please contact us at [email protected].

### Licensing Information

It is forbidden to share or publish the data associated with this 🤗 Dataset version.

### Citation Information

No citation information is provided for this dataset.