---
license: apache-2.0
language:
- en
- de
- ru
- zh
tags:
- mt-evaluation
- WMT
- MQM
size_categories:
- 100K<n<1M
---
# Dataset Summary
This dataset contains all MQM human annotations from previous WMT Metrics shared tasks, along with the MQM annotations from Experts, Errors, and Context, in the form of error spans.
The data is organised into the following main columns:
- `src`: input text
- `mt`: translation
- `ref`: reference translation
- `annotations`: list of error spans (dictionaries with `start`, `end`, `severity`, `text`); see the usage example below
- `lp`: language pair
Note that this is not an official release of the data; the original data can be found here. Also, while en-ru was annotated by Unbabel, en-de and zh-en were annotated by Google. This means that for en-de and zh-en you will only find minor and major errors, while for en-ru you can also find a few critical errors.
Python usage:

```python
from datasets import load_dataset

dataset = load_dataset("RicardoRei/wmt-mqm-error-spans", split="train")
```
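As a quick sanity check, here is a minimal sketch for inspecting the error spans of a single row. It assumes each `annotations` entry is a list of dictionaries as described above, and that the `start`/`end` offsets index into the translation (`mt`):

```python
# Print the spans annotated on the first example.
example = dataset[0]
print(example["lp"], "|", example["mt"])
for span in example["annotations"]:
    # "text" carries the annotated span itself; slicing "mt" with the
    # offsets should recover the same substring (assumption, see above).
    print(span["severity"], "->", span["text"], "|",
          example["mt"][span["start"]:span["end"]])
```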
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair, or domain, e.g.:
```python
# split by language pair
data = dataset.filter(lambda example: example["lp"] == "en-de")
```
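Along the same lines, a small sketch for checking the severity distribution of a filtered split, again assuming `annotations` is a list of dictionaries; for the Google-annotated pairs this should surface only minor and major errors:

```python
from collections import Counter

# Count error severities across the en-de split filtered above.
severity_counts = Counter(
    span["severity"]
    for example in data
    for span in example["annotations"]
)
print(severity_counts)
```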
# Citation Information
If you use this data, please cite the following works:
- Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation
- Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain
- Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust