{ "paper_id": "W09-0402", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:42:11.343763Z" }, "title": "Syntax-oriented evaluation measures for machine translation output", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "settlement": "Aachen", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We explored novel automatic evaluation measures for machine translation output oriented to the syntactic structure of the sentence: the BLEU score on the detailed Part-of-Speech (POS) tags as well as the precision, recall and F-measure obtained on POS n-grams. We also introduced Fmeasure based on both word and POS ngrams. Correlations between the new metrics and human judgments were calculated on the data of the first, second and third shared task of the Statistical Machine Translation Workshop. Machine translation outputs in four different European languages were taken into account: English, Spanish, French and German. The results show that the new measures correlate very well with the human judgements and that they are competitive with the widely used BLEU, METEOR and TER metrics.", "pdf_parse": { "paper_id": "W09-0402", "_pdf_hash": "", "abstract": [ { "text": "We explored novel automatic evaluation measures for machine translation output oriented to the syntactic structure of the sentence: the BLEU score on the detailed Part-of-Speech (POS) tags as well as the precision, recall and F-measure obtained on POS n-grams. We also introduced Fmeasure based on both word and POS ngrams. Correlations between the new metrics and human judgments were calculated on the data of the first, second and third shared task of the Statistical Machine Translation Workshop. Machine translation outputs in four different European languages were taken into account: English, Spanish, French and German. The results show that the new measures correlate very well with the human judgements and that they are competitive with the widely used BLEU, METEOR and TER metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We proposed several syntax-oriented automatic evaluation measures based on sequences of POS tags and investigated how they correlate with human judgments. The new measures are the POS-BLEU score, i.e. the BLEU score calculated on POS tags instead of words, as well as the POSP, the POSR and the POSF score: precision, recall and Fmeasure calculated on POS n-grams. In addition to the metrics based only on POS tags, we investigated a WPF score, i.e. an F-measure which takes into account both word and POS n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The correlations on the document level were computed on the English, French, Spanish and German texts generated by various translation systems in the framework of the first (Koehn and Monz, 2006) , second (Callison-Burch et al., 2007) and third shared translation task (Callison-Burch et al., 2008) . 
Preliminary experiments were carried out on the data from the first (2006) and second (2007) tasks: Spearman's rank correlation coefficients between the adequacy and fluency scores and the POSBLEU, POSP, POSR and POSF scores were calculated. The POSBLEU and the POSF score were shown to be the most promising, so these two metrics were submitted to the official 2008 shared evaluation task. The results of this evaluation showed that the metrics also correlate well on the document level with another human score, the sentence ranking. However, on the sentence level the results were less promising. The likely reason for this is the main drawback of metrics based on POS tags alone, namely that they neglect the lexical aspect. Therefore we also introduced the WPF score, which takes into account both word n-grams and POS n-grams.", "cite_spans": [ { "start": 173, "end": 195, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF4" }, { "start": 205, "end": 234, "text": "(Callison-Burch et al., 2007)", "ref_id": "BIBREF1" }, { "start": 269, "end": 298, "text": "(Callison-Burch et al., 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigated the following metrics oriented to the syntactic structure of a translation output:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "\u2022 POSBLEU", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "The standard BLEU score (Papineni et al., 2002) calculated on POS tags instead of words;", "cite_spans": [ { "start": 24, "end": 47, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "\u2022 POSP POS n-gram precision: percentage of POS n-grams in the hypothesis which have a counterpart in the reference;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "\u2022 POSR Recall measure based on POS n-grams: percentage of POS n-grams in the reference which are also present in the hypothesis;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "\u2022 POSF POS n-gram based F-measure: takes into account all POS n-grams which have a counterpart, both in the reference and in the hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "\u2022 WPF F-measure based on both word and POS n-grams: takes into account all word n-grams and all POS n-grams which have a counterpart in the corresponding reference and hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "The prerequisite for all metrics is the availability of an appropriate POS tagger for the target language. It should be noted that the POS tags cannot be only basic categories but must carry all morphological details (e.g. verb tense, case, number, gender).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "The n-gram scores as well as the POSBLEU score are based on four-grams (i.e. the maximal value of n is 4).
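As an illustration of how the POSP, POSR, POSF and WPF scores can be computed from tagged sentences, the sketch below counts clipped n-gram matches for n = 1..4. The function names, the way the per-order scores are combined, and the definition of WPF as the average of the word-level and POS-level F-measures are illustrative assumptions, not the authors' exact formulation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_matches(hyp, ref, n):
    """Hypothesis n-grams that also occur in the reference (clipped counts)."""
    hyp_counts, ref_counts = Counter(ngrams(hyp, n)), Counter(ngrams(ref, n))
    return sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())

def combine(values, mean="geometric"):
    """Average the per-order scores; the geometric mean collapses to zero
    as soon as a single n-gram order has no match at all."""
    if mean == "geometric":
        if not all(values):
            return 0.0
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return sum(values) / len(values)  # arithmetic mean

def ngram_prf(hyp, ref, max_n=4, mean="geometric"):
    """Precision, recall and F-measure over n-grams up to max_n (here 4).
    Applied to POS sequences this gives POSP, POSR and POSF."""
    precisions, recalls, fscores = [], [], []
    for n in range(1, max_n + 1):
        m = clipped_matches(hyp, ref, n)
        p = m / max(len(hyp) - n + 1, 1)
        r = m / max(len(ref) - n + 1, 1)
        precisions.append(p)
        recalls.append(r)
        fscores.append(2 * p * r / (p + r) if p + r else 0.0)
    return combine(precisions, mean), combine(recalls, mean), combine(fscores, mean)

def wpf_score(hyp_words, ref_words, hyp_pos, ref_pos, max_n=4, mean="geometric"):
    """Illustrative WPF: average of the word-level and the POS-level F-measure."""
    _, _, f_word = ngram_prf(hyp_words, ref_words, max_n, mean)
    _, _, f_pos = ngram_prf(hyp_pos, ref_pos, max_n, mean)
    return 0.5 * (f_word + f_pos)
```

Applying the same clipped counting to POS tags inside the standard BLEU formula (geometric mean of the n-gram precisions plus a brevity penalty) yields the POSBLEU score; the mean argument corresponds to the two averaging variants discussed next.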
For the n-gram-based measures, two types of n-gram averaging were investigated: the geometric mean and the arithmetic mean. The geometric mean is already widely used in the BLEU score, but it is also argued not to be optimal because the score becomes equal to zero as soon as a single n-gram count is zero. However, this problem is probably less critical for POS-based metrics because tag set sizes are much smaller than vocabulary sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic-oriented evaluation metrics", "sec_num": "2" }, { "text": "The syntax-oriented evaluation metrics were compared with human judgments by means of Spearman correlation coefficients \u03c1. Spearman's rank correlation coefficient is equivalent to the Pearson correlation computed on ranks (see the short sketch below), and its advantage is that it makes fewer assumptions about the data. The possible values of \u03c1 range between 1 (if all systems are ranked in the same order) and -1 (if all systems are ranked in the reverse order). Thus, the higher the value of \u03c1 for an automatic metric, the more similar the metric is to the human judgment. Correlation coefficients between the human scores and the three well-known automatic measures BLEU, METEOR and TER were calculated as well, in order to see how the new metrics perform in comparison with widely used metrics. The scores were calculated for outputs of translation from Spanish, French and German into English and vice versa. English and German POS tags were produced using the TnT tagger (Brants, 2000), Spanish texts were annotated using the FreeLing analyser (Carreras et al., 2004), and French texts using the TreeTagger 1 . In this way, all references and hypotheses were provided with detailed POS tags.", "cite_spans": [ { "start": 913, "end": 927, "text": "(Brants, 2000)", "ref_id": "BIBREF0" }, { "start": 987, "end": 1010, "text": "(Carreras et al., 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Correlations between the new metrics and human judgments", "sec_num": "3" }, { "text": "The preliminary experiments with the new evaluation metrics were performed on the data from the first two shared tasks in order to investigate Spearman correlation coefficients \u03c1 between the POS-based evaluation measures and the human scores adequacy and fluency. The metrics described in Section 2 (except the WPF score) were calculated for all translation outputs. For each new metric, the \u03c1 coefficients with the adequacy and with the fluency score on the document level were calculated. Then the results were summarised by averaging the obtained coefficients over all translation outputs; the average correlations are presented in Table 1, which shows that the new measures have high \u03c1 coefficients both with respect to the adequacy and to the fluency score. The POSBLEU score has the highest correlations, followed by the POSF score. Furthermore, the POSBLEU score has higher correlations than each of the three widely used metrics, and all the new metrics except the POSP have higher correlations than the TER. The POSF correlations with fluency are higher than those of the standard metrics, and its correlations with adequacy are comparable to those of METEOR and BLEU. Table 2 presents the percentage of documents for which a particular new metric has higher correlation than BLEU, METEOR or TER. It can be seen that on the majority of documents the POSBLEU metric outperforms all three standard measures, especially in correlation with the fluency score.
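Since Spearman's \u03c1 is, as noted above, simply the Pearson correlation computed on rank positions, the document-level correlations reported in this section can be reproduced in a few lines. The sketch below is minimal: the helper names are arbitrary, ties are ignored, and the two score lists at the bottom are invented for illustration.

```python
def ranks(values):
    """Rank positions (1 = smallest value); ties are not handled for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, index in enumerate(order, start=1):
        result[index] = rank
    return result

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    norm_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    norm_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (norm_x * norm_y)

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    return pearson(ranks(x), ranks(y))

# Invented example: automatic scores and human scores for four systems.
# Both lists rank the systems identically, so rho = 1.0.
automatic = [0.64, 0.59, 0.55, 0.50]
human = [0.80, 0.74, 0.69, 0.62]
print(spearman(automatic, human))  # 1.0
```

Applied per translation output to the system-level metric scores and the corresponding human scores, this is the computation behind the \u03c1 values averaged in the tables.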
The geometric mean POSF shows similar behaviour, having higher correlation than the standard measures in the majority of cases but slightly less often than the POSBLEU. The POSR has higher correlation than the standard measures in 50-70% of cases, and the POSP score has the lowest percentage, 30-60%. It can also be seen that the geometric mean averaging of the n-gram scores correlates better with the human judgments more often than the arithmetic mean.", "cite_spans": [], "ref_spans": [ { "start": 629, "end": 636, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1172, "end": 1179, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments on 2006 and 2007 test data", "sec_num": null }, { "text": "For the official shared evaluation task in 2008, the human evaluation scores were different. The adequacy and fluency scores were abandoned as being rather time-consuming and often inconsistent, and the sentence ranking was proposed as one of the human evaluation scores: the manual evaluators were asked to rank translated sentences relative to each other. RWTH participated in this shared task with the two most promising metrics according to the previous experiments, i.e. POSBLEU and POSF, and the detailed results can be found in (Callison-Burch et al., 2008). It was shown that these metrics also correlate very well with the sentence ranking on the document level. However, on the sentence level the performance was much weaker: the percentage of sentence pairs for which the human comparison yields the same result as the comparison using a particular automatic metric was not very high. We believe that the main reason is that metrics based only on POS tags can assign high scores to translations without a correct semantic meaning, because they consider only the syntactic structure and ignore the actual words. For example, if the reference translation is "This sentence is correct", a translation output "This tree is high" would have a POS-based matching score of 100% (a small numeric check of this effect is given below). Therefore we introduced the WPF score, an F-measure metric which counts both matching POS n-grams and matching word n-grams.", "cite_spans": [ { "start": 532, "end": 561, "text": "(Callison-Burch et al., 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on 2008 test data", "sec_num": null }, { "text": "The \u03c1 coefficients for the POSBLEU, POSF and WPF with the sentence ranking, averaged over all translation outputs, are shown in Table 3. The correlations for several known metrics are shown as well, i.e. for BLEU, METEOR and TER along with their variants: METEOR-r denotes the variant optimised for ranking, whereas MBLEU and MTER are BLEU and TER computed using the flexible matching as used in METEOR. It can be seen that the correlation coefficients for all three syntactic metrics are high. The POSBLEU score has the highest correlation with the sentence ranking, followed by POSF and WPF. All three measures have higher average correlation than MTER, MBLEU and BLEU. The purely syntactic metrics also outperform the METEOR scores, whereas the WPF correlations are comparable to those of the METEOR scores. Table 4 presents the percentage of documents where a particular syntactic metric has higher correlation with the sentence ranking than a particular standard metric. All syntactic metrics have higher correlation than MTER on almost all documents, and higher than MBLEU on a large number of documents.
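The degenerate behaviour just described is easy to verify with a standalone check on the example pair above. The POS tags below are assigned by hand purely for illustration (they are not the output of the taggers from Section 3), and the unigram-precision helper is a simplification of the full n-gram scores.

```python
# Example pair from the text above; POS tags written by hand for illustration.
ref_words = "This sentence is correct".split()
hyp_words = "This tree is high".split()
ref_pos = ["DT", "NN", "VBZ", "JJ"]
hyp_pos = ["DT", "NN", "VBZ", "JJ"]

def unigram_precision(hyp, ref):
    """Fraction of hypothesis tokens that can be matched to a reference token."""
    remaining = list(ref)
    hits = 0
    for token in hyp:
        if token in remaining:
            remaining.remove(token)
            hits += 1
    return hits / len(hyp)

print(unigram_precision(hyp_pos, ref_pos))      # 1.0 -> POS-only matching is perfect
print(unigram_precision(hyp_words, ref_words))  # 0.5 -> word overlap exposes the error
```

Because the WPF score also counts word n-grams, it cannot assign a perfect score to such a hypothesis.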
The correlations for the syntactic measures are better than those for the BLEU score for more than 60% of the documents. As for the METEOR scores, the syntactic metrics are comparable (about 50%).", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 133, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 817, "end": 824, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments on 2008 test data", "sec_num": null }, { "text": "The results presented in this article suggest that syntactic information has the potential to strengthen automatic evaluation metrics, and there are many possible directions for future work. We proposed several syntax-oriented evaluation metrics based on the detailed POS tags: the POSBLEU score and the POS n-gram precision, recall and F-measure, i.e. the POSP, POSR and POSF scores. [Table 4: Percentage of documents from the 2008 shared task where the new metric has better correlation with the human sentence ranking than the standard metric.]", "cite_spans": [], "ref_spans": [ { "start": 338, "end": 345, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "In addition, we introduced a measure which takes into account both POS tags and words: the WPF score. We carried out an extensive analysis of Spearman's rank correlation coefficients between the syntactic evaluation metrics and the human judgments. The obtained results showed that the new metrics correlate well with the human judgments, namely with the adequacy and fluency scores as well as with the sentence ranking. The results also showed that the syntax-oriented metrics are competitive with the widely used evaluation measures BLEU, METEOR and TER. Especially promising are the POSBLEU and the POSF score. The correlations of the WPF score are slightly lower than those of the purely POS-based metrics; however, this metric has the advantage of taking both the syntactic and the lexical aspect into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4" }, { "text": "http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was realised as part of the Quaero Programme, funded by OSEO, French State agency for innovation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TnT - a statistical part-of-speech tagger", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 6th Applied Natural Language Processing Conference (ANLP)", "volume": "", "issue": "", "pages": "224--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants. 2000. TnT - a statistical part-of-speech tagger.
In Proceedings of the 6th Applied Natural Language Processing Conference (ANLP), pages 224-231, Seattle, WA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "(Meta-)Evaluation of Machine Translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "136--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-)Evaluation of Machine Translation. In Proceedings of the ACL Workshop on Statistical Machine Translation, pages 136-158, Prague, Czech Republic, June.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Further Meta-Evaluation of Machine Translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further Meta-Evaluation of Machine Translation. In Proceedings of the Third Workshop on Statistical Machine Translation, Columbus, Ohio, June.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "FreeLing: An Open-Source Suite of Language Analyzers", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "Padr\u00f3", "suffix": "" }, { "first": "Muntsa", "middle": [], "last": "Padr\u00f3", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "239--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Carreras, Isaac Chao, Llu\u00eds Padr\u00f3, and Muntsa Padr\u00f3. 2004. FreeLing: An Open-Source Suite of Language Analyzers. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), pages 239-242, Lisbon, Portugal, May.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Manual and automatic evaluation of machine translation between European languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages.
In Proceedings on the Workshop on Statistical Machine Translation, New York City, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318, Philadelphia, PA, July.", "links": null } }, "ref_entries": { "TABREF0": { "text": "", "content": "
2006+2007      adequacy   fluency
BLEU             0.590     0.544
METEOR           0.598     0.538
TER              0.496     0.479
POSBLEU          0.642     0.626
POSF (gm)        0.586     0.551
POSF (am)        0.584     0.570
POSR (gm)        0.572     0.576
POSR (am)        0.542     0.544
POSP (gm)        0.551     0.481
POSP (am)        0.531     0.461

Table 1: Average system-level correlations between automatic evaluation measures and adequacy/fluency scores for 2006 and 2007 test data (gm = geometric mean for n-gram averaging, am = arithmetic mean).
", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "text": "", "content": "
Table 3: Average system-level correlations between automatic evaluation measures and human ranking for 2008 test data.
", "html": null, "num": null, "type_str": "table" } } } }