---
license: cc-by-nc-nd-4.0
task_categories:
  - translation
language:
  - en
  - ca
tags:
  - bias
  - gender bias
  - evaluation
---

# Dataset Card for MuST-SHE_en-ca

## Dataset Description

### Dataset Summary

MuST-SHE_en-ca is an English-Catalan evaluation dataset of 1,046 examples, created to support the evaluation of Catalan NLP tasks, specifically gender bias evaluation in Machine Translation. It was derived from the English-Spanish MuST-SHE by translating the Spanish portion into Catalan.

For more information about the original MuST-SHE dataset, see https://mt.fbk.eu/must-she/.

### Supported Tasks and Leaderboards

The dataset has been designed for the evaluation of gender bias in Machine Translation from English to Catalan. Because it is a corpus of natural language rather than templates, it offers insights complementary to popular template-based gender bias evaluation sets.

### Languages

The languages included in the dataset are English (EN) and Catalan (CA).

## Dataset Structure

### Data Instances

The dataset is composed of a single TSV file containing 1,046 rows.

- MuST-SHE_en-ca.tsv

The dataset follows the structure of the original MuST-SHE dataset. Most data fields are unchanged; the only fields that have been modified are those affected by the Spanish-to-Catalan translation, namely:

- LANG (es -> ca)
- REF (es -> ca)
- WRONG-REF (es -> ca)
- GENDERTERMS (es -> ca)

The original dataset contains the data field CATEGORY, which divides the segments into four categories based on the presence or absence of gender information. However, because the original dataset was designed for the evaluation of speech data, segments whose gender information is only present in the audio can be considered to contain no gender information in the context of text-based Machine Translation. We therefore added an extra column, "TEXT-CATEGORY", specifically meant for textual Machine Translation tasks. In "TEXT-CATEGORY", instances are classified into two distinct categories:

- sentences in which the text contains sufficient information to disambiguate gender.
- sentences in which the text does not contain sufficient information to disambiguate gender.
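As an illustration, the TEXT-CATEGORY column can be used to select only the segments where the text alone gives an MT system enough information to choose the correct gender. A minimal sketch in Python using only the standard library; the rows and the TEXT-CATEGORY label strings below are placeholders and should be checked against the released MuST-SHE_en-ca.tsv:

```python
import csv
import io

# Synthetic rows mimicking the card's column layout; in practice, read
# MuST-SHE_en-ca.tsv instead. The TEXT-CATEGORY label strings used here
# are placeholders -- verify the exact values in the released file.
tsv_data = """ID\tLANG\tSRC\tREF\tWRONG-REF\tTEXT-CATEGORY
1\tca\tShe is tired.\tElla està cansada.\tElla està cansat.\tdisambiguated
2\tca\tI am tired.\tEstic cansada.\tEstic cansat.\tnot-disambiguated
"""

rows = list(csv.DictReader(io.StringIO(tsv_data), delimiter="\t"))

# Keep only segments where the text itself disambiguates gender.
disambiguated = [r for r in rows if r["TEXT-CATEGORY"] == "disambiguated"]
print(len(disambiguated))  # 1
```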

### Data Fields

The data fields follow the structure of the original MuST-SHE dataset.

- ID (segment identifier)
- LANG (language)
- TALK (TED Talk identifier)
- SRC (source sentence)
- REF (correct gender translation)
- WRONG-REF (incorrect gender translation)
- SPEAKER (speaker name)
- GENDER (speaker gender)
- CATEGORY (gender information status)
- TEXT-CATEGORY (gender information status for text-only tasks)
- GENDERTERMS (gender terms extracted from REF and WRONG-REF sentences)
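The REF, WRONG-REF, and GENDERTERMS fields support a simple term-coverage style check: count how often a system hypothesis contains the correctly gendered term rather than the wrongly gendered one. The sketch below is illustrative only; in particular, the GENDERTERMS format assumed here (semicolon-separated "correct wrong" pairs) is an assumption that should be verified against the released TSV:

```python
# Toy term-coverage check: for each segment, compare a system hypothesis
# against the correct and wrong gender terms. The GENDERTERMS format
# assumed here (semicolon-separated "correct wrong" pairs) is an
# assumption to be verified against the released file.
def score_segment(hypothesis: str, genderterms: str) -> tuple[int, int]:
    correct_hits, wrong_hits = 0, 0
    tokens = hypothesis.lower().split()
    for pair in genderterms.split(";"):
        correct, wrong = pair.strip().split(" ", 1)
        if correct.lower() in tokens:
            correct_hits += 1
        elif wrong.lower() in tokens:
            wrong_hits += 1
    return correct_hits, wrong_hits

# Hypothetical segment: the hypothesis uses the feminine form.
print(score_segment("Ella està cansada .", "cansada cansat"))  # (1, 0)
```

Aggregating the two counts over all segments (optionally split by TEXT-CATEGORY) gives a rough picture of how often a system produces the correct gender.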

More information about the creation of the original English-Spanish MuST-SHE can be found in the MuST-SHE data statement.

### Data Splits

The dataset contains a single split for evaluation.

## Curation Rationale

This dataset is aimed at evaluating Gender Bias in Machine Translation from English to Catalan, in order to promote fairer MT outputs when translating from a gender-neutral language, such as English, into a grammatically gendered language, such as Catalan.

## Source Data

### Initial Data Collection and Normalization

The original MuST-SHE dataset is a subset of the TED-based MuST-C corpus. MuST-SHE_en-ca was created by automatically translating the Spanish components of the English-Spanish MuST-SHE using the PlanTL Project's Spanish-Catalan machine translation model. Gender terms were extracted automatically and both the gender terms and the automatically translated sentences were then extensively reviewed by a native Catalan speaker to ensure accuracy.

### Who are the source data producers?

Machine Translation group at Fondazione Bruno Kessler

## Annotations

### Annotation process

For each segment, we added an extra column, "TEXT-CATEGORY", specifically meant for textual Machine Translation tasks. In "TEXT-CATEGORY", segments are classified into two distinct categories:

- sentences in which the text contains sufficient information to disambiguate gender.
- sentences in which the text does not contain sufficient information to disambiguate gender.

All translations from Spanish were automatically generated using the PlanTL es->ca model and manually revised by a native Catalan speaker.

Information about the original annotation process of MuST-SHE can be found in the MuST-SHE dataset card.

### Who are the annotators?

The annotation was done internally by BSC LangTech collaborators.

## Personal and Sensitive Information

No anonymisation process was performed.

## Considerations for Using the Data

### Social Impact of Dataset

The specific purpose of this dataset is to help evaluate the gender bias of Machine Translation engines when translating from a gender-neutral language, such as English, into a grammatically gendered language, such as Catalan. Such evaluation may contribute to promoting fairer MT outputs in Catalan when translating from English. More broadly, by providing this resource we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.

### Discussion of Biases

This dataset has been specifically designed to assess Gender Bias in Machine Translation. Inherent biases of other types (such as racial, ethnic, socio-economic bias, etc.) may be present in the data. No specific mitigation strategies for these other types of bias have been applied to this dataset.

### Other Known Limitations

The dataset contains general-domain data, so it would be of limited use for more specific domains such as the biomedical or legal domains.

## Additional Information

### Dataset Curators

Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]).

This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.

### Licensing Information

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

### Citation Information

```bibtex
@inproceedings{bentivogli-etal-2020-gender,
    title = "Gender in Danger? Evaluating Speech Translation Technology on the {M}u{ST}-{SHE} Corpus",
    author = "Bentivogli, Luisa  and
      Savoldi, Beatrice  and
      Negri, Matteo  and
      Di Gangi, Mattia A.  and
      Cattoni, Roldano  and
      Turchi, Marco",
    editor = "Jurafsky, Dan  and
      Chai, Joyce  and
      Schluter, Natalie  and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.619",
    doi = "10.18653/v1/2020.acl-main.619",
    pages = "6923--6933",
    abstract = "Translating from languages without productive grammatical gender like English into gender-marked languages is a well-known difficulty for machines. This difficulty is also due to the fact that the training data on which models are built typically reflect the asymmetries of natural languages, gender bias included. Exclusively fed with textual data, machine translation is intrinsically constrained by the fact that the input sentence does not always contain clues about the gender identity of the referred human entities. But what happens with speech translation, where the input is an audio signal? Can audio provide additional information to reduce gender bias? We present the first thorough investigation of gender bias in speech translation, contributing with: i) the release of a benchmark useful for future studies, and ii) the comparison of different technologies (cascade and end-to-end) on two language directions (English-Italian/French).",
}

@article{cattoni-etal-2021-mustc,
    author = {Cattoni, Roldano and Di Gangi, Mattia and Bentivogli, Luisa and Negri, Matteo and Turchi, Marco},
    title = {{MuST-C}: A multilingual corpus for end-to-end speech translation},
    journal = {Computer Speech \& Language},
    volume = {66},
    pages = {101155},
    year = {2021},
    month = {03},
    doi = {10.1016/j.csl.2020.101155}
}
```

### Contributions

[N/A]