{ "paper_id": "W09-0412", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:41:19.550505Z" }, "title": "NUS at WMT09: Domain Adaptation Experiments for English-Spanish Machine Translation of News Commentary Text", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "addrLine": "13 Computing Drive", "postCode": "117417", "country": "Singapore" } }, "email": "nakov@comp.nus.edu.sg" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "addrLine": "13 Computing Drive", "postCode": "117417", "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe the system developed by the team of the National University of Singapore for English to Spanish machine translation of News Commentary text for the WMT09 Shared Translation Task. Our approach is based on domain adaptation, combining a small in-domain News Commentary bi-text and a large out-of-domain one from the Europarl corpus, from which we built and combined two separate phrase tables. We further combined two language models (in-domain and out-of-domain), and we experimented with cognates, improved tokenization and recasing, achieving the highest lowercased NIST score of 6.963 and the second best lowercased Bleu score of 24.91% for training without using additional external data for English-to-Spanish translation at the shared task.", "pdf_parse": { "paper_id": "W09-0412", "_pdf_hash": "", "abstract": [ { "text": "We describe the system developed by the team of the National University of Singapore for English to Spanish machine translation of News Commentary text for the WMT09 Shared Translation Task. 
Our approach is based on domain adaptation, combining a small in-domain News Commentary bi-text and a large out-of-domain one from the Europarl corpus, from which we built and combined two separate phrase tables. We further combined two language models (in-domain and out-of-domain), and we experimented with cognates, improved tokenization and recasing, achieving the highest lowercased NIST score of 6.963 and the second best lowercased Bleu score of 24.91% for training without using additional external data for English-to-Spanish translation at the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Modern Statistical Machine Translation (SMT) systems are typically trained on sentence-aligned parallel texts (bi-texts) from a particular domain. When tested on text from that domain, they demonstrate state-of-the-art performance, but on out-of-domain test data the results can deteriorate significantly. For example, on the WMT06 Shared Translation Task, the scores for French-to-English translation dropped from about 30 to about 20 Bleu points for nearly all systems when tested on News Commentary instead of the Europarl 1 text, which was used for training (Koehn and Monz, 2006) .", "cite_spans": [ { "start": 562, "end": 584, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 See (Koehn, 2005) for details about the Europarl corpus. Subsequently, in 2007 and 2008 , the WMT Shared Translation Task organizers provided a limited amount of bilingual News Commentary training data (1-1.3M words) in addition to the large amount of Europarl data (30-32M words), and set up separate evaluations on News Commentary and on Europarl data, thus inviting interest in domain adaptation experiments for the News domain (Callison-Burch et al., 2008) . 
This year, the evaluation is on News Commentary only, which makes domain adaptation the central focus of the Shared Translation Task.", "cite_spans": [ { "start": 6, "end": 19, "text": "(Koehn, 2005)", "ref_id": "BIBREF10" }, { "start": 59, "end": 84, "text": "Subsequently, in 2007 and", "ref_id": null }, { "start": 85, "end": 89, "text": "2008", "ref_id": "BIBREF18" }, { "start": 433, "end": 461, "text": "Callison-Burch et al., 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The team of the National University of Singapore (NUS) participated in the WMT09 Shared Translation Task with an English-to-Spanish system. 2 Our approach is based on domain adaptation, combining the small in-domain News Commentary bi-text (1.8M words) and the large out-of-domain one from the Europarl corpus (40M words), from which we built and combined two separate phrase tables. We further used two language models (in-domain and out-of-domain), cognates, improved tokenization, and additional smart recasing as a post-processing step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Below we describe separately the standard and the nonstandard settings of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The NUS System", "sec_num": "2" }, { "text": "In our baseline experiments, we used the following general setup: First, we tokenized the parallel bi-text, converted it to lowercase, and filtered out the overly-long training sentences, which complicate word alignments (we tried maximum length limits of 40 and 100). We then built separate English-to-Spanish and Spanish-to-English directed word alignments using IBM model 4 (Brown et al., 1993) , combined them using the intersect+grow heuristic (Och and Ney, 2003) , and extracted phrase-level translation pairs of maximum length 7 using the alignment template approach (Och and Ney, 2004) . 
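The symmetrization step can be illustrated with a rough sketch: start from the intersection of the two directed alignments, then repeatedly add links from their union that neighbor an already-accepted link. This is only an illustration of the idea; the exact intersect+grow heuristic of Och and Ney (2003) applies additional conditions, and the function name and toy alignments below are ours.

```python
# Rough sketch of intersect+grow symmetrization: seed with the
# intersection of the two directed alignments, then grow by adding
# union links adjacent (incl. diagonally) to an existing link.
# Illustration only, not the exact Och and Ney (2003) heuristic.

def intersect_grow(e2f, f2e):
    """e2f, f2e: sets of (src_pos, tgt_pos) links from the two directions."""
    union = e2f | f2e
    alignment = set(e2f & f2e)
    added = True
    while added:
        added = False
        for (i, j) in sorted(union - alignment):
            neighbors = {(i + di, j + dj)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0)}
            if neighbors & alignment:
                alignment.add((i, j))
                added = True
    return alignment

# Toy example: two directed alignments over a 3-word sentence pair.
e2f = {(0, 0), (1, 1), (2, 2)}
f2e = {(0, 0), (1, 2), (2, 2)}
print(sorted(intersect_grow(e2f, f2e)))  # prints [(0, 0), (1, 1), (1, 2), (2, 2)]
```

The disputed links (1, 1) and (1, 2) are both accepted here because each touches a link from the intersection.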
We thus obtained a phrase table where each phrase translation pair is associated with the following five standard parameters: forward and reverse phrase translation probabilities, forward and reverse lexical translation probabilities, and phrase penalty.", "cite_spans": [ { "start": 378, "end": 398, "text": "(Brown et al., 1993)", "ref_id": "BIBREF4" }, { "start": 451, "end": 470, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF19" }, { "start": 576, "end": 595, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Standard Settings", "sec_num": "2.1" }, { "text": "We then trained a log-linear model using the standard feature functions: language model probability, word penalty, distortion costs (we tried distance-based and lexicalized reordering models), and the parameters from the phrase table. We set all feature weights by optimizing Bleu (Papineni et al., 2002) directly using minimum error rate training (MERT) (Och, 2003) on the tuning part of the development set (dev-test2009a). We used these weights in a beam search decoder to translate the test sentences (the English part of dev-test2009b, tokenized and lowercased). We then recased the output using a monotone model that translates from lowercase to uppercase Spanish; we post-cased it using a simple heuristic, de-tokenized the result, and compared it to the gold standard (the Spanish part of dev-test2009b) using Bleu and NIST.", "cite_spans": [ { "start": 281, "end": 304, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF22" }, { "start": 355, "end": 366, "text": "(Och, 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Standard Settings", "sec_num": "2.1" }, { "text": "The nonstandard features of our system can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Two Language Models. 
Following Nakov and Hearst (2007), we used two language models (LMs) -an in-domain one (trained on a concatenation of the provided monolingual Spanish News Commentary data and the Spanish side of the training News Commentary bi-text) and an out-of-domain one (trained on the provided monolingual Spanish Europarl data). For both LMs, we used 5-gram models with Kneser-Ney smoothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Merging Two Phrase Tables. Following Nakov (2008), we trained and merged two phrase-based SMT systems: a small in-domain one using the News Commentary bi-text, and a large out-of-domain one using the Europarl bi-text. As a result, we obtained two phrase tables, T news and T euro , and two lexicalized reordering models, R news and R euro . We merged the phrase tables as follows. First, we kept all phrase pairs from T news . Then we added those phrase pairs from T euro which were not present in T news . For each phrase pair added, we retained its associated features: forward and reverse phrase translation probabilities, forward and reverse lexical translation probabilities, and phrase penalty. We further added two new features, F news and F euro , which show the source of each phrase. Their values are 1 and 0.5 when the phrase was extracted from the News Commentary bi-text, 0.5 and 1 when it was extracted from the Europarl bi-text, and 1 and 1 when it was extracted from both. As a result, we ended up with seven parameters for each entry in the merged phrase table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Merging Two Lexicalized Reordering Tables. 
When building the two phrase tables, we also built two lexicalized reordering tables (Koehn et al., 2005) for them, R news and R euro , which we merged as follows: We first kept all phrases from R news , then we added those from R euro which were not present in R news . The resulting lexicalized reordering table was used together with the above-described merged phrase table.", "cite_spans": [ { "start": 128, "end": 148, "text": "(Koehn et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Cognates. Previous research has shown that using cognates can yield better word alignments (Al-Onaizan et al., 1999; Kondrak et al., 2003) , which in turn often means higher-quality phrase pairs and better SMT systems. Linguists define cognates as words derived from a common root (Bickford and Tuggy, 2002) . Following previous researchers in computational linguistics (Bergsma and Kondrak, 2007; Mann and Yarowsky, 2001; Melamed, 1999) , however, we adopted a simplified definition which ignores origin, defining cognates as words in different languages that are mutual translations and have a similar orthography. We extracted and used such potential cognates in order to bias the training of the IBM word alignment models. 
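The table merging described above can be sketched as follows; the dictionary format (phrase pair mapped to a list of feature values) is a toy simplification of a real Moses phrase table, and the function name is ours.

```python
# Sketch of merging two phrase tables: keep all News Commentary entries,
# add Europarl entries not already present, and append the two provenance
# features F_news and F_euro (1/0.5, 0.5/1, or 1/1 as described above).
# Toy format: (src_phrase, tgt_phrase) -> list of feature values.

def merge_tables(t_news, t_euro):
    merged = {}
    for pair, feats in t_news.items():
        # News Commentary entry; check whether Europarl has it too.
        extra = [1.0, 1.0] if pair in t_euro else [1.0, 0.5]
        merged[pair] = feats + extra
    for pair, feats in t_euro.items():
        if pair not in merged:
            merged[pair] = feats + [0.5, 1.0]
    return merged

t_news = {('human rights', 'derechos humanos'): [0.8, 0.7, 0.9, 0.6, 2.718]}
t_euro = {('human rights', 'derechos humanos'): [0.5, 0.4, 0.7, 0.5, 2.718],
          ('the session', 'la sesion'): [0.9, 0.8, 0.9, 0.7, 2.718]}
merged = merge_tables(t_news, t_euro)
# The shared pair keeps the News Commentary features plus F_news=1, F_euro=1;
# the Europarl-only pair gets F_news=0.5, F_euro=1.
```

The same keep-then-add union is applied to the two reordering tables, minus the extra provenance features.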
Following Melamed (1995) , we measured the orthographic similarity using the longest common subsequence ratio (LCSR), which is defined as follows:", "cite_spans": [ { "start": 91, "end": 116, "text": "(Al-Onaizan et al., 1999;", "ref_id": "BIBREF0" }, { "start": 117, "end": 138, "text": "Kondrak et al., 2003)", "ref_id": "BIBREF11" }, { "start": 281, "end": 307, "text": "(Bickford and Tuggy, 2002)", "ref_id": "BIBREF3" }, { "start": 370, "end": 397, "text": "(Bergsma and Kondrak, 2007;", "ref_id": null }, { "start": 398, "end": 422, "text": "Mann and Yarowsky, 2001;", "ref_id": "BIBREF12" }, { "start": 423, "end": 437, "text": "Melamed, 1999)", "ref_id": "BIBREF14" }, { "start": 737, "end": 751, "text": "Melamed (1995)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "LCSR(s_1, s_2) = |LCS(s_1, s_2)| / max(|s_1|, |s_2|)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "where LCS(s_1, s_2) is the longest common subsequence of s_1 and s_2, and |s| is the length of s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Following prior work, we combined the LCSR similarity measure with competitive linking (Melamed, 2000) in order to extract potential cognates from the training bi-text. Competitive linking assumes that, given a source English sentence and its Spanish translation, a source word is either translated with a single target word or is not translated at all. Given an English-Spanish sentence pair, we calculated LCSR for all cross-lingual word pairs (excluding stopwords and words of length 3 or less), which induced a fully-connected weighted bipartite graph. 
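LCSR itself is straightforward to compute with the classic dynamic program for the longest common subsequence; a minimal sketch (helper names are ours):

```python
# Minimal sketch of LCSR as defined above:
# LCSR(s_1, s_2) = |LCS(s_1, s_2)| / max(|s_1|, |s_2|).

def lcs_length(s1, s2):
    # Classic O(|s1| * |s2|) dynamic program, keeping one row at a time.
    prev = [0] * (len(s2) + 1)
    for c1 in s1:
        curr = [0]
        for j, c2 in enumerate(s2, start=1):
            if c1 == c2:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def lcsr(s1, s2):
    if not s1 and not s2:
        return 1.0
    return lcs_length(s1, s2) / max(len(s1), len(s2))

# A typical English-Spanish cognate pair scores high:
print(round(lcsr('evaluation', 'evaluacion'), 2))  # prints 0.9
```

Pairs scoring at or above the threshold are kept as cognate candidates; the greedy matching over these scores is described next.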
Then, we performed a greedy approximation to the maximum weighted bipartite matching in that graph (competitive linking) as follows: First, we aligned the most similar pair of unaligned words and we discarded these words from further consideration. Then, we aligned the next most similar pair of unaligned words, and so forth. The process was repeated until there were no words left or the maximal word pair similarity fell below a pre-specified threshold \u03b8 (0 \u2264 \u03b8 \u2264 1), which typically left some words unaligned. 3 As a result, we ended up with a list C of potential cognate pairs. Following (Al-Onaizan et al., 1999; Kondrak et al., 2003), we filtered out the duplicates in C, and we added the remaining cognate pairs as additional \"sentence\" pairs to the bi-text in order to bias the subsequent training of the IBM word alignment models.", "cite_spans": [ { "start": 77, "end": 92, "text": "(Melamed, 2000)", "ref_id": "BIBREF15" }, { "start": 1140, "end": 1165, "text": "(Al-Onaizan et al., 1999;", "ref_id": "BIBREF0" }, { "start": 1166, "end": 1187, "text": "Kondrak et al., 2003;", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Improved (De-)tokenization. The default tokenizer does not split on hyphenated compound words like nation-building, well-rehearsed, self-assured, Arab-Israeli, domestically-oriented, etc. While linguistically correct, this can be problematic for machine translation since it can cause data sparsity issues. For example, the system might know how to translate into Spanish both well and rehearsed, but not well-rehearsed, and thus at translation time it would be forced to handle it as an unknown word, i.e., copy it to the output untranslated. 
A similar problem is related to double dashes, as illustrated by the following training sentence: \"So the question now is what can China do to freeze--and, if possible, to reverse--North Korea's nuclear program.\" We changed the tokenizer so that it splits on '-' and '--'; we altered the detokenizer accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Improved Recaser. The default recaser suggested by the WMT09 organizers was based on a monotone translation model. We trained such a recaser, which translates from lowercase to uppercase Spanish, on the Spanish side of the News Commentary bi-text. While good overall, it had a problem with unknown words, leaving them in lowercase. In a News Commentary text, however, most unknown words are named entities -persons, organizations, locations -which are spelled with a capitalized initial in Spanish. Therefore, we used an additional recasing script, which runs over the output of the default recaser and sets the casing of the unknown words to the original casing they had in the English input. It also makes sure all sentences start with a capitalized initial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "Rule-based Post-editing. We did a quick study of the system errors on the development set, and we designed some heuristic post-editing rules, e.g.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "\u2022 ? or ! 
without \u00bf or \u00a1 to the left: we insert \u00bf/\u00a1 at the sentence beginning;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "\u2022 numbers: we change English numbers like 1,185.32 to Spanish-style 1.185,32;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "\u2022 duplicate punctuation: we remove duplicate sentence end markers, quotes, commas, parentheses, etc. Table 1 shows the performance of a simple baseline system and the impact of different cumulative modifications to that system when tuning on dev-test2009a and testing on dev-test2009b. The table reports the Bleu and NIST scores measured on the detokenized output under three conditions: (1) without recasing ('Lowercased'), (2) using the default recaser ('Recased (default)'), and (3) using an improved recaser and post-editing rules ('Post-cased & Post-edited'). In the following discussion, we report the Bleu results under condition (3). System 1 uses sentences of length up to 40 tokens from the News Commentary bi-text, the default (de-)tokenizer, distance-based reordering, and a 3-gram language model trained on the Spanish side of the bi-text. Its performance is quite modest: 15.32% of Bleu with the default recaser, and 16.92% when the improved recaser and the post-editing rules are used.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Nonstandard Settings", "sec_num": "2.2" }, { "text": "System 2 increases to 100 the maximum length of the sentences in the bi-text, which yields 0.55% absolute improvement in Bleu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "3" }, { "text": "System 3 uses the new (de-)tokenizer, but this turns out to make almost no difference. 
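The number-restyling post-editing rule listed above can be sketched as a regular-expression substitution; this is a simplified illustration under our own assumptions, not the actual script used.

```python
import re

# Sketch of the number-restyling post-editing rule: rewrite English-style
# numbers like 1,185.32 into Spanish-style 1.185,32 by swapping the
# thousands and decimal separators.

def restyle_numbers(text):
    def swap(match):
        # Swap ',' and '.' within the matched number.
        return match.group(0).translate(str.maketrans(',.', '.,'))
    # English-style numbers: comma-grouped thousands with optional
    # decimals, or a plain decimal number.
    pattern = r'\d{1,3}(?:,\d{3})+(?:\.\d+)?|\d+\.\d+'
    return re.sub(pattern, swap, text)

print(restyle_numbers('GDP grew by 1,185.32 million, i.e., 3.5 percent.'))
# prints: GDP grew by 1.185,32 million, i.e., 3,5 percent.
```

Restricting the pattern to well-formed numbers keeps ordinary commas and sentence-final periods untouched.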
Table 1 : Impact of the combined modifications for English-to-Spanish machine translation on dev-test2009b. We report the Bleu and NIST scores measured on the detokenized output under three conditions: (1) without recasing ('Lowercased'), (2) using the default recaser ('Recased (default)'), and (3) using an improved recaser and post-editing rules ('Post-cased & Post-edited') . The News Commentary baseline system uses sentences of length up to 40 tokens from the News Commentary bi-text, the default tokenizer and de-tokenizer, a distance-based reordering model, and a trigram language model trained on the Spanish side of the bi-text. The Europarl system is the same as system 6, except that it uses the Europarl bi-text instead of the News Commentary bi-text.", "cite_spans": [ { "start": 436, "end": 464, "text": "('Post-cased & Post-edited')", "ref_id": null } ], "ref_spans": [ { "start": 87, "end": 94, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "3" }, { "text": "System 4 adds a lexicalized re-ordering model, which yields 0.8% absolute improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "System 5 improves the language model. It adds the additional monolingual Spanish News Commentary data provided by the organizers to the Spanish side of the bi-text, and uses a 5-gram language model instead of the 3-gram LM used by Systems 1-4. This yields a sizable absolute gain in Bleu: 2.27%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "System 6 adds a second 5-gram LM trained on the monolingual Europarl data, gaining 0.2%. 
System 7 augments the training bi-text with cognate pairs, gaining another 0.57%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "System 8 is the same as System 6, except that it is trained on the out-of-domain Europarl bi-text instead of the in-domain News Commentary bi-text. Surprisingly, this turns out to work better than the in-domain System 6 by 1.14% of Bleu. This is unexpected, since in both WMT07 and WMT08, for which comparable kinds and sizes of training data were provided, training on the out-of-domain Europarl was always worse than training on the in-domain News Commentary. We are not sure why it is different this year, but it could be due to the way the dev-train and dev-test sets were created for the 2009 data, by extracting alternating sentences from the original development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "System 9 augments the Europarl bi-text with cognate pairs, gaining another 0.21%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "System 10 merges the phrase tables of Systems 7 and 9, and is otherwise the same as them. This adds another 0.27%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "Our official submission to WMT09 is the post-edited System 10, re-tuned on the full development set: dev-test2009a + dev-test2009b (in order to produce more stable results with MERT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recased", "sec_num": null }, { "text": "As we can see in Table 1 , we have achieved not only a huge 'vertical' absolute improvement of 5.5-6% in Bleu from System 1 to System 10, but also a significant 'horizontal' one: our recased and post-edited result for System 10 is better than that of the default recaser by 1.63% in Bleu (22.37% vs. 20.74%). 
Still, the lowercased Bleu of 24.40% suggests that there may be a lot of room for further improvement in recasing -we are still about 2% below it. While this is probably due primarily to the system choosing a different sentence-initial word, it certainly deserves further investigation in future work.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4" }, { "text": "The task organizers invited submissions translating forward and/or backward between English and five other European languages (French, Spanish, German, Czech and Hungarian), but we only participated in English\u2192Spanish, due to time limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For News Commentary, we used \u03b8 = 0.4, which was found by optimizing on the development set; for Europarl, we set \u03b8 = 0.58 as suggested byKondrak et al. (2003).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by research grant POD0713875.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical machine translation", "authors": [ { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Curin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jahr", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "Franz", "middle": [ "Joseph" ], "last": "Och", "suffix": "" }, { "first": "David", "middle": [], "last": "Purdy", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": 
"" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz Joseph Och, David Purdy, Noah Smith, and David Yarowsky. 1999. Statistical machine translation. Technical report, CLSP, Johns Hopkins University, Baltimore, MD.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Alignment-based discriminative string similarity", "authors": [], "year": null, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'07)", "volume": "", "issue": "", "pages": "656--663", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alignment-based discriminative string similarity. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics (ACL'07), pages 656-663, Prague, Czech Republic.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Electronic glossary of linguistic terms", "authors": [ { "first": "Albert", "middle": [], "last": "Bickford", "suffix": "" }, { "first": "David", "middle": [], "last": "Tuggy", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Bickford and David Tuggy. 2002. 
Electronic glossary of linguistic terms.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "Peter", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Brown, Vincent Della Pietra, Stephen Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: parameter estima- tion. Computational Linguistics, 19(2):263-311.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Meta-) evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "136--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. 
In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 136-158, Prague, Czech Republic.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Further meta-evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "70--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. In Proceedings of the Third Workshop on Statisti- cal Machine Translation, pages 70-106, Columbus, OH, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Manual and automatic evaluation of machine translation between European languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the First Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "102--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages. 
In Proceedings of the First Workshop on Statistical Machine Translation, pages 102-121, New York, NY, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Edinburgh system description for the 2005 IWSLT speech translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Alexandra", "middle": [ "Birch" ], "last": "Mayne", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Workshop on Spoken Language Translation (IWSLT'05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system descrip- tion for the 2005 IWSLT speech translation evalu- ation. 
In Proceedings of the International Workshop on Spoken Language Translation (IWSLT'05), Pitts- burgh, PA, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'07). Demonstration session", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine trans- lation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'07). 
Demonstration session, pages 177-180, Prague, Czech Republic.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Europarl: A parallel corpus for evaluation of machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the X MT Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2005. Europarl: A parallel corpus for eval- uation of machine translation. In Proceedings of the X MT Summit, pages 79-86, Phuket, Thailand.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Cognates can improve statistical translation models", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Annual Meeting of the North American Association for Computational Linguistics (NAACL'03)", "volume": "", "issue": "", "pages": "46--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Kondrak, Daniel Marcu, and Kevin Knight. 2003. Cognates can improve statistical translation models. In Proceedings of the Annual Meeting of the North American Association for Computational Lin- guistics (NAACL'03), pages 46-48, Sapporo, Japan.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multipath translation lexicon induction via bridge languages", "authors": [ { "first": "Gideon", "middle": [], "last": "Mann", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Annual Meeting of the North American Association for Computational Linguistics (NAACL'01)", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon Mann and David Yarowsky. 2001. 
Multipath translation lexicon induction via bridge languages. In Proceedings of the Annual Meeting of the North American Association for Computational Linguistics (NAACL'01), pages 1-8, Pittsburgh, PA, USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "184--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed. 1995. Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons. In Proceedings of the Third Workshop on Very Large Corpora, pages 184-198, Cambridge, MA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bitext maps and alignment via pattern recognition", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "107--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguistics, 25(1):107-130.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Models of translational equivalence among words", "authors": [ { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Melamed. 2000. Models of translational equivalence among words.
Computational Linguistics, 26(2):221-249.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "UCB system description for the WMT 2007 shared task", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "212--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov and Marti Hearst. 2007. UCB system description for the WMT 2007 shared task. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 212-215, Prague, Czech Republic.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improved word alignments using the Web as a corpus", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Svetlin", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Paskaleva", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP'07)", "volume": "", "issue": "", "pages": "400--405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, Svetlin Nakov, and Elena Paskaleva. 2007. Improved word alignments using the Web as a corpus. In Proceedings of Recent Advances in Natural Language Processing (RANLP'07), pages 400-405, Borovets, Bulgaria.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving English-Spanish statistical machine translation: Experiments in domain adaptation, sentence paraphrasing, tokenization, and recasing", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "147--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov. 2008.
Improving English-Spanish statistical machine translation: Experiments in domain adaptation, sentence paraphrasing, tokenization, and recasing. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 147-150, Columbus, OH, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'03)", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003.
Minimum error rate training in statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'03), pages 160-167, Sapporo, Japan.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'02)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'02), pages 311-318, Philadelphia, PA, USA.", "links": null } }, "ref_entries": {} } }