{ "paper_id": "W09-0406", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:36:52.182882Z" }, "title": "CMU System Combination for WMT'09", "authors": [ { "first": "Almut", "middle": [ "Silja" ], "last": "Hildebrand", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "country": "USA" } }, "email": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "country": "USA" } }, "email": "vogel@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the CMU entry for the system combination shared task at WMT'09. Our combination method is hypothesis selection, which uses information from n-best lists from several MT systems. The sentence level features are independent from the MT systems involved. To compensate for various n-best list sizes in the workshop shared task including firstbest-only entries, we normalize one of our high-impact features for varying sub-list size. We combined restricted data track entries in French-English, German-English and Hungarian-English using provided data only.", "pdf_parse": { "paper_id": "W09-0406", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the CMU entry for the system combination shared task at WMT'09. Our combination method is hypothesis selection, which uses information from n-best lists from several MT systems. The sentence level features are independent from the MT systems involved. To compensate for various n-best list sizes in the workshop shared task including firstbest-only entries, we normalize one of our high-impact features for varying sub-list size. We combined restricted data track entries in French-English, German-English and Hungarian-English using provided data only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For the combination of machine translation systems there have been two main approaches described in recent publications. One uses confusion network decoding to combine translation systems as described in (Rosti et al., 2008) and (Karakos et al., 2008) . The other approach selects whole hypotheses from a combined n-best list (Hildebrand and Vogel, 2008) .", "cite_spans": [ { "start": 204, "end": 224, "text": "(Rosti et al., 2008)", "ref_id": "BIBREF2" }, { "start": 229, "end": 251, "text": "(Karakos et al., 2008)", "ref_id": "BIBREF1" }, { "start": 326, "end": 354, "text": "(Hildebrand and Vogel, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our setup follows the approach described in (Hildebrand and Vogel, 2008) . We combine the output from the available translation systems into one joint n-best list, then calculate a set of features consistently for all hypotheses. 
We use MER training on a development set to determine feature weights and re-rank the joint n-best list.", "cite_spans": [ { "start": 44, "end": 72, "text": "(Hildebrand and Vogel, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For our entries to the WMT'09 we used the following feature groups:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2" }, { "text": "\u2022 Language model score \u2022 Word lexicon scores \u2022 Sentence length features \u2022 Rank feature \u2022 Normalized n-gram agreement", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2" }, { "text": "The details on the language model and word lexicon scores can be found in (Hildebrand and Vogel, 2008). We use two sentence length features: the ratio of the hypothesis length to the length of the source sentence, and the difference between the hypothesis length and the average length of the hypotheses in the n-best list for the respective source sentence. We also use the rank of the hypothesis in the original system's n-best list as a feature.", "cite_spans": [ { "start": 93, "end": 121, "text": "(Hildebrand and Vogel, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "2" }, { "text": "The participants of the WMT'09 shared translation task provided output from their translation systems in various sizes. Most submissions were 1st-best translations only; some systems submitted n-best lists ranging from 10-best up to 300-best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "In preliminary experiments we saw that adding a high-scoring 1st-best translation to a joint n-best list composed of several larger n-best lists does not yield the desired improvement. This might be due to the fact that hypotheses within an n-best list originating from one single system (sub-list) tend to be much more similar to each other than to hypotheses from another system. This leads to hypotheses from larger sub-lists scoring higher in the n-best list based features, e.g. because they collect more n-gram matches within their own sub-list, which \"supports\" them more strongly the larger it is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "Previous experiments on Chinese-English showed that the two feature groups with the highest impact on the combination result are the language model and the n-best list based n-gram agreement. Therefore we decided to focus on the n-best list n-gram agreement when exploring sub-list size normalization to adapt to the data situation with various n-best list sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "The n-gram agreement score of an n-gram e in the target sentence is the relative frequency of target sentences in the n-best list for one source sentence that contain the n-gram e, independent of the position of the n-gram in the sentence. This feature represents the percentage of the translation hypotheses which contain the respective n-gram. If a hypothesis contains an n-gram more than once, it is only counted once; hence the maximum for the agreement score a(e) is 1.0 (100%). 
The agreement score a(e) for each n-gram e is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "a(e) = \\frac{C}{L} \\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "where C is the count of the hypotheses containing the n-gram and L is the size of the n-best list for this source sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "To compensate for the various n-best list sizes provided to us, we modified the n-best list n-gram agreement by normalizing the count of hypotheses that contain the n-gram by the size of the sub-list it came from. This can be viewed either as collecting fractional counts for each n-gram match, or as calculating the n-gram agreement percentage for each sub-list and then interpolating them. The normalized n-gram agreement score a_norm(e) for each n-gram e is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "a_{norm}(e) = \\frac{1}{P} \\sum_{j=1}^{P} \\frac{C_j}{L_j} \\qquad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "where P is the number of systems, C_j is the count of the hypotheses containing the n-gram e in the sub-list p_j, and L_j is the size of the sub-list p_j. For the extreme case of a sub-list of size one, whether or not the n-gram is found in that single hypothesis has a rather strong impact on the normalized agreement score. Therefore we introduce a smoothing factor \u03bb in such a way that its influence increases the smaller the sub-list is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a_{smooth}(e) = \\frac{1}{P} \\sum_{j=1}^{P} \\left[ \\frac{C_j}{L_j} \\left( 1 - \\frac{\\lambda}{L_j} \\right) + \\frac{L_j - C_j}{L_j} \\cdot \\frac{\\lambda}{L_j} \\right]", "eq_num": "(3)" } ], "section": "Normalized N-gram Agreement", "sec_num": "2.1" }, { "text": "where P is the number of systems, C_j is the count of the hypotheses containing the n-gram in the sub-list p_j, and L_j is the size of the sub-list p_j. We used an initial value of \u03bb = 0.1 for our experiments. In all three cases the score for the whole hypothesis is the sum of the word scores, normalized by the sentence length. We use n-gram lengths n = 1..6 as six separate features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized N-gram Agreement", "sec_num": "2.1" },
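To make the three feature variants concrete, the following is a minimal sketch of equations (1)-(3), not the authors' code. It assumes pre-tokenized hypotheses and a joint n-best list given as one sub-list per system; all function names and the data layout are illustrative.

```python
# Sketch of the plain (eq. 1), normalized (eq. 2) and smoothed (eq. 3)
# n-gram agreement scores. Assumed data layout (not from the paper):
# sublists = one list of hypotheses per system; a hypothesis is a token list.
from collections import Counter
from typing import Dict, List, Tuple

def ngram_set(tokens: List[str], n: int) -> set:
    """Unique n-grams of a hypothesis; repeats count only once (Section 2.1)."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def agreement(sublists: List[List[List[str]]], n: int, lam: float = 0.1):
    """Return the three agreement scores a(e) for every n-gram e."""
    joint = [hyp for sub in sublists for hyp in sub]
    L, P = len(joint), len(sublists)
    # C: in how many hypotheses of the joint list does e occur?
    counts = Counter(e for hyp in joint for e in ngram_set(hyp, n))
    plain = {e: c / L for e, c in counts.items()}                 # eq. (1)
    norm: Dict[Tuple, float] = {}
    smooth: Dict[Tuple, float] = {}
    for e in counts:
        a_n = a_s = 0.0
        for sub in sublists:
            Lj = len(sub)
            Cj = sum(1 for hyp in sub if e in ngram_set(hyp, n))
            a_n += Cj / Lj                                        # eq. (2)
            # eq. (3): the smoothing term grows as the sub-list shrinks
            a_s += (Cj / Lj) * (1 - lam / Lj) + ((Lj - Cj) / Lj) * (lam / Lj)
        norm[e], smooth[e] = a_n / P, a_s / P
    return plain, norm, smooth

def sentence_feature(hyp: List[str], a: Dict[Tuple, float], n: int) -> float:
    """Hypothesis-level feature: sum of per-position scores / sentence length."""
    total = sum(a.get(tuple(hyp[i:i + n]), 0.0) for i in range(len(hyp) - n + 1))
    return total / max(len(hyp), 1)
```

Computed for n = 1..6, such a sentence_feature would yield the six separate agreement features mentioned above; one reading of "sum over the word scores" is taken here as one n-gram score per word position.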
{ "text": "Arabic-English", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "3" }, { "text": "For the development of the modification of the n-best list n-gram agreement feature we used n-best lists from three large-scale Arabic-to-English translation systems. We evaluate using the case-insensitive BLEU score on the MT08 test set with four references, which was unseen data for the individual systems as well as for the system combination. To compare the behavior of the combination result for different n-best list sizes, we combined the 100-best lists from systems A and C and then added three n-best list sizes from the middle system B into the combination: 1-best, 10-best and the full 100-best. For each of these four combination options we ran the hypothesis selection using the plain version of the n-gram agreement feature a as well as the normalized version, without smoothing (a_norm) and with smoothing (a_smooth). As expected, the modified feature has no impact on the combination of n-best lists of the same size (see Table 2 ); however, it shows an improvement of +0.5 BLEU for the combination with the 1st-best from system B. The smoothing seems to have no significant impact on this dataset, but different smoothing factors will be investigated in the future.", "cite_spans": [], "ref_spans": [ { "start": 918, "end": 925, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "3" }, { "text": "To train our language models and word lexica we used provided data only. Therefore we excluded systems from the combination which, to our knowledge, used unrestricted training data (Google). We did not include any contrastive systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workshop Results", "sec_num": "4" }, { "text": "We trained the statistical word lexica on the parallel data provided for each language pair 1 . For each combination we used two language models: a 1.2-giga-word 3-gram language model trained on the provided monolingual English data, and a 4-gram language model trained on the English part of the parallel training data of the respective language pair. We used the SRILM toolkit (Stolcke, 2002) for training.", "cite_spans": [ { "start": 374, "end": 389, "text": "(Stolcke, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Workshop Results", "sec_num": "4" }, { "text": "For each of the three language pairs we submitted one combination that used the plain version of the n-gram agreement feature as well as one that used the normalized, smoothed version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workshop Results", "sec_num": "4" }, { "text": "The provided system combination development set, which we used for tuning our feature weights, was the same for all language pairs: 502 sentences with only one reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workshop Results", "sec_num": "4" }, { "text": "For combination we tokenized and lowercased all data, because the n-best lists were submitted in various formats; therefore we report case-insensitive scores here. The combination was optimized toward the BLEU metric; therefore, results for TER and METEOR are not very meaningful here and are only reported for completeness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Workshop Results", "sec_num": "4" },
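Before the per-language results, a brief sketch of the selection step itself: the joint n-best list is re-ranked by a weighted sum of the feature values, with weights obtained from MER training (the tuning loop is not shown; all names and feature values below are hypothetical, not from the paper).

```python
# Sketch of hypothesis selection: re-rank the joint n-best list for one
# source sentence by a weighted feature sum and keep the best hypothesis.
# The weights are assumed to come from MER training on the development set.
from typing import Dict, List

Hypothesis = Dict[str, object]  # {"text": str, "features": Dict[str, float]}

def select_best(joint_nbest: List[Hypothesis], weights: Dict[str, float]) -> Hypothesis:
    def score(hyp: Hypothesis) -> float:
        # Linear model: dot product of feature values and tuned weights.
        return sum(weights.get(name, 0.0) * value
                   for name, value in hyp["features"].items())
    return max(joint_nbest, key=score)

# Toy usage with two hypothetical features:
weights = {"lm": 0.6, "agreement_1gram": 0.4}
joint = [
    {"text": "the house is small", "features": {"lm": -2.1, "agreement_1gram": 0.8}},
    {"text": "house small is", "features": {"lm": -4.7, "agreement_1gram": 0.3}},
]
print(select_best(joint, weights)["text"])  # -> "the house is small"
```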
{ "text": "14 systems were submitted to the restricted data track for the French-English translation task. The scores on the combination development set range from BLEU 27.56 to 15.09 (case-insensitive evaluation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "French-English", "sec_num": "4.1" }, { "text": "We received n-best lists from five systems: a 300-best, a 200-best, two 100-best and one 10-best list. We included up to 100 hypotheses per system in our joint n-best list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "French-English", "sec_num": "4.1" }, { "text": "For our workshop submission we combined the top nine systems, the lowest of which scored BLEU 24.23, as well as all 14 systems. Comparing the results for the two combinations of all 14 systems (see Table 3 ), the one with the sub-list normalization for the n-gram agreement feature gains +0.8 BLEU on unseen data compared to the one without normalization. Figure 1 shows how many hypotheses were contributed by the individual systems to the final translation (unseen data). The systems A to N are ordered by their BLEU score on the development set. The systems which provided n-best lists, marked with a star in the diagram, clearly dominate the selection. As expected, the low-scoring systems contribute very little. Our system combination via hypothesis selection could improve translation quality by +1.6 BLEU on the unseen test set compared to the best single system.", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 200, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 351, "end": 359, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "French-English", "sec_num": "4.1" }, { "text": "14 systems were submitted to the restricted data track for the German-English translation task. The scores on the combination development set range from BLEU 27.56 to 7 (case-insensitive evaluation). The two lowest-scoring systems, at BLEU 11 and 7, were so far behind the rest of the systems that we decided to exclude them, assuming an error had occurred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German-English", "sec_num": "4.2" }, { "text": "Among the remaining 12 submissions were four n-best lists: three 100-best and one 10-best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German-English", "sec_num": "4.2" }, { "text": "For our submissions we combined the top seven systems, ranging from BLEU 22.91 to 20.24, as well as the top 12 systems, the last of which scored BLEU 16.00 on the development set. For this language pair the combination with the normalized n-gram agreement also outperforms the one without by +0.8 BLEU (see Table 4 ).", "cite_spans": [], "ref_spans": [ { "start": 315, "end": 322, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "German-English", "sec_num": "4.2" }, { "text": "Our system combination via hypothesis selection could improve translation quality by +1.95 BLEU on the unseen test set over the best single system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German-English", "sec_num": "4.2" }, { "text": "Only three systems were submitted for the Hungarian-English translation task. Scores on the combination development set ranged from BLEU 13.63 to 10.04 (case-insensitive evaluation). Only the top system provided an n-best list, of which we used the 100-best hypotheses. We submitted combinations of the three systems using the modified, smoothed n-gram agreement feature as well as the plain version of the n-gram agreement feature. 
Here, too, the normalized version of the feature gives an improvement of +0.56 BLEU, for an overall improvement of +1.0 BLEU over the best single system (see Table 5 ).", "cite_spans": [], "ref_spans": [ { "start": 572, "end": 579, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Hungarian-English", "sec_num": "4.3" }, { "text": "As the comparison to the combinations with fewer systems shows, it is beneficial to include more systems, even if they are more than 7 BLEU points behind the best system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "5" }, { "text": "In the mixed-size data situation of the workshop, the modified feature shows a clear improvement for all three language pairs. Different smoothing factors should be investigated for these data sets in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "5" }, { "text": "http://www.statmt.org/wmt09/translation-task.html#training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the participants in the WMT'09 workshop shared translation task for providing their data, especially n-best lists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Combination of machine translation systems via hypothesis selection from combined n-best lists", "authors": [ { "first": "Almut", "middle": [], "last": "Silja Hildebrand", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "MT at work: Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "254--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Almut Silja Hildebrand and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. In MT at work: Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, pages 254-261, Waikiki, Hawaii, October. Association for Machine Translation in the Americas.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Machine translation system combination using ITG-based alignments", "authors": [ { "first": "Damianos", "middle": [], "last": "Karakos", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT, Short Papers", "volume": "", "issue": "", "pages": "81--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damianos Karakos, Jason Eisner, Sanjeev Khudanpur, and Markus Dreyer. 2008. Machine translation system combination using ITG-based alignments. In Proceedings of ACL-08: HLT, Short Papers, pages 81-84, Columbus, Ohio, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Incremental hypothesis alignment for building confusion networks with application to machine translation system combination", "authors": [ { "first": "Antti-Veikko", "middle": [], "last": "Rosti", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Spyros", "middle": [], "last": "Matsoukas", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "183--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antti-Veikko Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz. 2008. Incremental hypothesis alignment for building confusion networks with application to machine translation system combination. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 183-186, Columbus, Ohio, June. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SRILM - an extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing, Denver, Colorado, September.", "links": null } }, "ref_entries": { "TABREF2": { "content": "", "num": null, "text": "Combination results: BLEU on MT08", "html": null, "type_str": "table" }, "TABREF4": { "content": "
A | B* | C
177 | 434 | 104
", "num": null, "text": "", "html": null, "type_str": "table" }, "TABREF7": { "content": "", "num": null, "text": "Hungarian-English Results: BLEU", "html": null, "type_str": "table" } } } }