{ "paper_id": "W07-0403", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:39:58.203423Z" }, "title": "Inversion Transduction Grammar for Joint Phrasal Translation Modeling", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta Edmonton", "location": { "postCode": "T6G 2E8", "region": "AB", "country": "Canada" } }, "email": "colinc@cs.ualberta.ca" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc", "location": { "addrLine": "1600 Amphitheatre Parkway Mountain View", "region": "CA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a phrasal inversion transduction grammar as an alternative to joint phrasal translation models. This syntactic model is similar to its flatstring phrasal predecessors, but admits polynomial-time algorithms for Viterbi alignment and EM training. We demonstrate that the consistency constraints that allow flat phrasal models to scale also help ITG algorithms, producing an 80-times faster inside-outside algorithm. We also show that the phrasal translation tables produced by the ITG are superior to those of the flat joint phrasal model, producing up to a 2.5 point improvement in BLEU score. Finally, we explore, for the first time, the utility of a joint phrasal translation model as a word alignment method.", "pdf_parse": { "paper_id": "W07-0403", "_pdf_hash": "", "abstract": [ { "text": "We present a phrasal inversion transduction grammar as an alternative to joint phrasal translation models. This syntactic model is similar to its flatstring phrasal predecessors, but admits polynomial-time algorithms for Viterbi alignment and EM training. We demonstrate that the consistency constraints that allow flat phrasal models to scale also help ITG algorithms, producing an 80-times faster inside-outside algorithm. We also show that the phrasal translation tables produced by the ITG are superior to those of the flat joint phrasal model, producing up to a 2.5 point improvement in BLEU score. Finally, we explore, for the first time, the utility of a joint phrasal translation model as a word alignment method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical machine translation benefits greatly from considering more than one word at a time. One can put forward any number of non-compositional translations to support this point, such as the colloquial Canadian French-English pair, (Wo les moteurs, Hold your horses), where no clear word-toword connection can be drawn. Nearly all current decoding methods have shifted to phrasal representations, gaining the ability to handle noncompositional translations, but also allowing the decoder to memorize phenomena such as monolingual agreement and short-range movement, taking pressure off of language and distortion models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite the success of phrasal decoders, knowledge acquisition for translation generally begins with a word-level analysis of the training text, taking the form of a word alignment. Attempts to apply the same statistical analysis used at the word level in a phrasal setting have met with limited success, held back by the sheer size of phrasal alignment space. 
Hybrid methods that combine well-founded statistical analysis with high-confidence word-level alignments have made some headway (Birch et al., 2006) , but suffer from the daunting task of heuristically exploring a still very large alignment space. In the meantime, synchronous parsing methods efficiently process the same bitext phrases while building their bilingual constituents, but continue to be employed primarily for word-to-word analysis (Wu, 1997) . In this paper we unify the probability models for phrasal translation with the algorithms for synchronous parsing, harnessing the benefits of both to create a statistically and algorithmically wellfounded method for phrasal analysis of bitext.", "cite_spans": [ { "start": 489, "end": 509, "text": "(Birch et al., 2006)", "ref_id": "BIBREF0" }, { "start": 807, "end": 817, "text": "(Wu, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 2 begins by outlining the phrase extraction system we intend to replace and the two methods we combine to do so: the joint phrasal translation model (JPTM) and inversion transduction grammar (ITG). Section 3 describes our proposed solution, a phrasal ITG. Section 4 describes how to apply our phrasal ITG, both as a translation model and as a phrasal word-aligner. Section 5 tests our system in both these capacities, while Section 6 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Phrasal decoders require a phrase table (Koehn et al., 2003) , which contains bilingual phrase pairs and scores indicating their utility. The surface heuristic is the most popular method for phrase-table construction. It extracts all consistent phrase pairs from word-aligned bitext (Koehn et al., 2003) . The word alignment provides bilingual links, indicating translation relationships between words. Consistency is defined so that alignment links are never broken by phrase boundaries. For each token w in a consistent phrase pairp, all tokens linked to w by the alignment must also be included inp. Each consistent phrase pair is counted as occurring once per sentence pair. The scores for the extracted phrase pairs are provided by normalizing these flat counts according to common English or Foreign components, producing the conditional distributions p(f |\u0113) and p(\u0113|f ).", "cite_spans": [ { "start": 40, "end": 60, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" }, { "start": 283, "end": 303, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Phrase Table Extraction", "sec_num": "2" }, { "text": "The surface heuristic can define consistency according to any word alignment; but most often, the alignment is provided by GIZA++ (Och and Ney, 2003) . This alignment system is powered by the IBM translation models (Brown et al., 1993) , in which one sentence generates the other. These models produce only one-to-many alignments: each generated token can participate in at most one link. Many-to-many alignments can be created by combining two GIZA++ alignments, one where English generates Foreign and another with those roles reversed (Och and Ney, 2003) . Combination approaches begin with the intersection of the two alignments, and add links from the union heuristically. 
The grow-diag-final (GDF) combination heuristic (Koehn et al., 2003) adds links so that each new link connects a previously unlinked token.", "cite_spans": [ { "start": 130, "end": 149, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF10" }, { "start": 215, "end": 235, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" }, { "start": 538, "end": 557, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF10" }, { "start": 726, "end": 746, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Phrase Table Extraction", "sec_num": "2" }, { "text": "The IBM models that power GIZA++ are trained with Expectation Maximization (Dempster et al., 1977) , or EM, on sentence-aligned bitext. A translation model assigns probabilities to alignments; these alignment distributions are used to count translation events, which are then used to estimate new parameters for the translation model. Sampling is employed when the alignment distributions cannot be calculated efficiently. This statistically-motivated process is much more appealing than the flat counting described in Section 2.1, but it does not directly include phrases.", "cite_spans": [ { "start": 75, "end": 98, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Joint phrasal translation model", "sec_num": "2.2" }, { "text": "The joint phrasal translation model (Marcu and Wong, 2002) , or JPTM, applies the same statistical techniques from the IBM models in a phrasal setting. The JPTM is designed according to a generative process where both languages are generated simultaneously. First, a bag of concepts, or cepts, C is generated. Each c i \u2208 C corresponds to a bilingual phrase pair, c i = (\u0113 i ,f i ). These contiguous phrases are permuted in each language to create two sequences of phrases. Initially, Marcu and Wong assume that the number of cepts, as well as the phrase orderings, are drawn from uniform distributions. That leaves a joint translation distribution p(\u0113 i ,f i ) to determine which phrase pairs are selected. Given a lexicon of possible cepts and a predicate L(E, F, C) that determines if a bag of cepts C can be bilingually permuted to create the sentence pair (E, F ), the probability of a sentence pair is:", "cite_spans": [ { "start": 36, "end": 58, "text": "(Marcu and Wong, 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Joint phrasal translation model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(E, F ) \u221d {C|L(E,F,C)} \uf8ee \uf8f0 c i \u2208C p(\u0113 i ,f i ) \uf8f9 \uf8fb", "eq_num": "(1)" } ], "section": "Joint phrasal translation model", "sec_num": "2.2" }, { "text": "If left unconstrained, (1) will consider every phrasal segmentation of E and F , and every alignment between those phrases. Later, a distortion model based on absolute token positions is added to (1). The JPTM faces several problems when scaling up to large training sets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint phrasal translation model", "sec_num": "2.2" }, { "text": "1. The alignment space enumerated by the sum in (1) is huge, far larger than the one-to-many space explored by GIZA++. 2. The translation distribution p(\u0113,f ) will cover all co-occurring phrases observed in the bitext. 
This is far too large to fit in main memory, and can be unwieldly for storage on disk. 3. Given a non-uniform p(\u0113,f ), there is no efficient algorithm to compute the expectation of phrase pair counts required for EM, or to find the most likely phrasal alignment. Marcu and Wong (2002) address point 2 with a lexicon constraint; monolingual phrases that are above a length threshold or below a frequency threshold are excluded from the lexicon. Point 3 is handled by hill-climbing to a likely phrasal alignment and sampling around it. However, point 1 remains unaddressed, which prevents the model from scaling to large data sets. Birch et al. (2006) handle point 1 directly by reducing the size of the alignment space. This is accomplished by constraining the JPTM to only use phrase pairs that are consistent with a highconfidence word alignment, which is provided by GIZA++ intersection. We refer to this constrained JPTM as a C-JPTM. This strikes an interesting middle ground between the surface heuristic described in Section 2.1 and the JPTM. Like the surface heuristic, a word alignment is used to limit the phrase pairs considered, but the C-JPTM reasons about distributions over phrasal alignments, instead of taking flat counts. The consistency constraint allows them to scale their C-JPTM up to 700,000 sentence pairs. With this constraint in place, the use of hill-climbing and sampling during EM training becomes one of the largest remaining weaknesses of the C-JPTM.", "cite_spans": [ { "start": 482, "end": 503, "text": "Marcu and Wong (2002)", "ref_id": "BIBREF7" }, { "start": 849, "end": 868, "text": "Birch et al. (2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Joint phrasal translation model", "sec_num": "2.2" }, { "text": "Like the JPTM, stochastic synchronous grammars provide a generative process to produce a sentence and its translation simultaneously. Inversion transduction grammar (Wu, 1997) , or ITG, is a wellstudied synchronous grammar formalism. Terminal productions of the form A \u2192 e/f produce a token in each stream, or a token in one stream with the null symbol \u2205 in the other. To allow for movement during translation, non-terminal productions can be either straight or inverted. Straight productions, with their non-terminals inside square brackets [. . .], produce their symbols in the given order in both streams. Inverted productions, indicated by angled brackets . . . 
, are output in reverse order in the Foreign stream only.", "cite_spans": [ { "start": 165, "end": 175, "text": "(Wu, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Inversion Transduction Grammar", "sec_num": "2.3" }, { "text": "The work described here uses the binary bracketing ITG, which has a single non-terminal:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inversion Transduction Grammar", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A \u2192 [AA] | AA | e/f", "eq_num": "(2)" } ], "section": "Inversion Transduction Grammar", "sec_num": "2.3" }, { "text": "This grammar admits an efficient bitext parsing algorithm, and holds no language-specific biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inversion Transduction Grammar", "sec_num": "2.3" }, { "text": "(2) cannot represent all possible permutations of concepts that may occur during translation, because some permutations will require discontinuous constituents (Melamed, 2003) . This ITG constraint is characterized by the two forbidden structures shown in Figure 1 (Wu, 1997) . Empirical studies suggest that only a small percentage of human translations violate these constraints (Cherry and Lin, 2006) . Stochastic ITGs are parameterized like their PCFG counterparts (Wu, 1997) ; productions A \u2192 X are assigned probability Pr(X |A). These parameters can be learned from sentence-aligned bitext using the EM algorithm. The expectation task of counting productions weighted by their probability is handled with dynamic programming, using the inside-outside algorithm extended to bitext (Zhang and Gildea, 2004) .", "cite_spans": [ { "start": 160, "end": 175, "text": "(Melamed, 2003)", "ref_id": "BIBREF9" }, { "start": 265, "end": 275, "text": "(Wu, 1997)", "ref_id": "BIBREF16" }, { "start": 381, "end": 403, "text": "(Cherry and Lin, 2006)", "ref_id": "BIBREF2" }, { "start": 469, "end": 479, "text": "(Wu, 1997)", "ref_id": "BIBREF16" }, { "start": 786, "end": 810, "text": "(Zhang and Gildea, 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Inversion Transduction Grammar", "sec_num": "2.3" }, { "text": "This paper introduces a phrasal ITG; in doing so, we combine ITG with the JPTM. ITG parsing algorithms consider every possible two-dimensional span of bitext, each corresponding to a bilingual phrase pair. Each multi-token span is analyzed in terms of how it could be built from smaller spans using a straight or inverted production, as is illustrated in Figures 2 (a) and (b). To extend ITG to a phrasal setting, we add a third option for span analysis: that the span under consideration might have been drawn directly from the lexicon. This option can be added to our grammar by altering the definition of a terminal production to include phrases: A \u2192\u0113/f . This third option is shown in Figure 2 (c). The model implied by this extended grammar is trained using inside-outside and EM.", "cite_spans": [], "ref_spans": [ { "start": 355, "end": 368, "text": "Figures 2 (a)", "ref_id": "FIGREF0" }, { "start": 689, "end": 697, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "Our approach differs from previous attempts to use ITGs for phrasal bitext analysis. 
Wu (1997) used a binary bracketing ITG to segment a sen-tence while simultaneously word-aligning it to its translation, but the model was trained heuristically with a fixed segmentation. Vilar and Vidal (2005) used ITG-like dynamic programming to drive both training and alignment for their recursive translation model, but they employed a conditional model that did not maintain a phrasal lexicon. Instead, they scored phrase pairs using IBM Model 1.", "cite_spans": [ { "start": 85, "end": 94, "text": "Wu (1997)", "ref_id": "BIBREF16" }, { "start": 272, "end": 294, "text": "Vilar and Vidal (2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "Our phrasal ITG is quite similar to the JPTM. Both models are trained with EM, and both employ generative stories that create a sentence and its translation simultaneously. The similarities become more apparent when we consider the canonical-form binary-bracketing ITG (Wu, 1997) shown here:", "cite_spans": [ { "start": 269, "end": 279, "text": "(Wu, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "S \u2192 A | B | C A \u2192 [AB] | [BB] | [CB] | [AC] | [BC] | [CC] B \u2192 AA | BA | CA | AC | BC | CC C \u2192\u0113/f (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "(3) is employed in place of (2) to reduce redundant alignments and clean up EM expectations. 1 More importantly for our purposes, it introduces a preterminal C, which generates all phrase pairs or cepts. When (3) is parameterized as a stochastic ITG, the conditional distribution p(\u0113/f |C) is equivalent to the JPTM's p(\u0113,f ); both are joint distributions over all possible phrase pairs. The distributions conditioned on the remaining three non-terminals assign probability to concept movement by tracking inversions. Like the JPTM's distortion model, these parameters grade each movement decision independently. With terminal productions producing cepts, and inversions measuring distortion, our phrasal ITG is essentially a variation on the JPTM with an alternate distortion model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "Our phrasal ITG has two main advantages over the JPTM. Most significantly, we gain polynomialtime algorithms for both Viterbi alignment and EM expectation, through the use of ITG parsing and inside-outside algorithms. These phrasal ITG algorithms are no more expensive asymptotically than their word-to-word counterparts, since each potential phrase needs to be analyzed anyway during constituent construction. We hypothesize that using these methods in place of heuristic search and sampling will improve the phrasal translation model learned by EM. Also, we can easily incorporate links to \u2205 by including the symbol among our terminals. To minimize redundancy, we allow only single tokens, not phrases, to align to \u2205. The JPTM does not allow links to \u2205.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "The phrasal ITG also introduces two new complications. ITG Viterbi and inside-outside algorithms have polynomial complexity, but that polynomial is O(n 6 ), where n is the length of the longer sentence in the pair. 
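To make this cost concrete, the following sketch computes the inside score for the phrasal bracketing ITG; it is an illustration only, not the implementation used in these experiments, and the placeholder rule probabilities, the absence of null links, and the absence of pruning are simplifying assumptions. The three analyses of Figure 2 appear as a lexicon lookup plus straight and inverted combinations: there are O(n^4) bitext spans and O(n^2) split-point pairs per span, which is where the O(n^6) total comes from.

```python
from collections import defaultdict

def inside(E, F, lex_prob, p_straight=0.4, p_inverted=0.2, p_lex=0.4):
    """Inside pass for a phrasal bracketing ITG (sketch).

    E and F are token lists; lex_prob maps (english_phrase, foreign_phrase)
    tuples to joint phrase-pair probabilities.  The rule probabilities are
    placeholders that sum to one; null links and span pruning are omitted.
    """
    n, m = len(E), len(F)
    beta = defaultdict(float)  # beta[(s, t, u, v)] = inside score of E[s:t] / F[u:v]
    for e_len in range(1, n + 1):          # English span length
        for f_len in range(1, m + 1):      # Foreign span length
            for s in range(n - e_len + 1):
                for u in range(m - f_len + 1):
                    t, v = s + e_len, u + f_len
                    # Analysis (c): the span is a phrase pair drawn from the lexicon.
                    total = p_lex * lex_prob.get((tuple(E[s:t]), tuple(F[u:v])), 0.0)
                    # Analyses (a)/(b): the span is built from two smaller spans,
                    # straight or inverted -- O(n^2) split-point pairs.
                    for S in range(s + 1, t):
                        for U in range(u + 1, v):
                            total += p_straight * beta[(s, S, u, U)] * beta[(S, t, U, v)]
                            total += p_inverted * beta[(s, S, U, v)] * beta[(S, t, u, U)]
                    beta[(s, t, u, v)] = total
    return beta[(0, n, 0, m)]

# Toy usage with the fragment pair discussed in Section 4.2:
E, F = "order of business".split(), "ordre des travaux".split()
lex = {(("order",), ("ordre",)): 0.2, (("of",), ("des",)): 0.2,
       (("business",), ("travaux",)): 0.2,
       (("order", "of", "business"), ("ordre", "des", "travaux")): 0.1}
print(inside(E, F, lex))
```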
This is too slow to train on large data sets without massive parallelization. Also, ITG algorithms explore their alignment space perfectly, but that space has been reduced by the ITG constraint described in Section 2.3. We will address each of these issues in the following two subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ITG as a Phrasal Translation Model", "sec_num": "3" }, { "text": "First, we address the problem of scaling ITG to large data. ITG dynamic programming algorithms work by analyzing each bitext span only once, storing its value in a table for future use. There are O(n 4 ) of these spans, and each analysis takes O(n 2 ) time. An effective approach to speeding up ITG algorithms is to eliminate unlikely spans as a preprocessing step, assigning them 0 probability and saving the time spent processing them. Past approaches have pruned spans using IBM Model 1 probability estimates (Zhang and Gildea, 2005) or using agreement with an existing parse tree (Cherry and Lin, 2006) . The former is referred to as tic-tac-toe pruning because it uses both inside and outside estimates.", "cite_spans": [ { "start": 512, "end": 536, "text": "(Zhang and Gildea, 2005)", "ref_id": "BIBREF18" }, { "start": 584, "end": 606, "text": "(Cherry and Lin, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning Spans", "sec_num": "3.1" }, { "text": "We propose a new ITG pruning method that leverages high-confidence links by pruning all spans that are inconsistent with a provided alignment. This is similar to the constraint used in the C-JPTM, but we do not just eliminate those spans as potential phrase-to-phrase links: we never consider any ITG parse that builds a non-terminal over a pruned span. 2 This fixed-link pruning will speed up both Viterbi alignment and EM training by reducing the number of analyzed spans, and so long as we trust our high-confidence links, it will do so harmlessly. We demonstrate the effectiveness of this pruning method experimentally in Section 5.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning Spans", "sec_num": "3.1" }, { "text": "Our remaining concern is the ITG constraint. There are some alignments that we just cannot build, and sentence pairs requiring those alignments will occur. These could potentially pollute our training data; if the system is unable to build the right alignment, the counts it will collect from that pair must be wrong. Furthermore, if our high-confidence links are not ITG-compatible, our fixed-link pruning will prevent the aligner from forming any alignments at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Handling the ITG Constraint", "sec_num": "3.2" }, { "text": "However, these two potential problems cancel each other out. Sentence pairs containing non-ITG translations will tend to have high-confidence links that are also not ITG-compatible. Our EM learner will simply skip these sentence pairs during training, avoiding pollution of our training data. We can use a linear-time algorithm (Zhang et al., 2006) to detect non-ITG movement in our high-confidence links, and remove the offending sentence pairs from our training corpus. This results in only a minor reduction in training data; in our French-English training set, we lose less than 1%. 
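The span test behind the fixed-link pruning of Section 3.1 can be stated in one line; the sketch below is an illustration with placeholder names, not the code used here. A span is kept only if every high-confidence link lies entirely inside or entirely outside it.

```python
def span_is_consistent(s, t, u, v, links):
    """True iff the bitext span pairing E[s:t] with F[u:v] breaks no
    high-confidence link (i, j).  Inconsistent spans receive zero
    probability and are never built by the parser."""
    return all((s <= i < t) == (u <= j < v) for i, j in links)

# Example: with links {(0, 1), (2, 0), (3, 3)}, the span E[0:2] / F[0:2]
# is pruned because link (2, 0) has its Foreign side inside the span but
# its English side outside it; widening the English side repairs this.
print(span_is_consistent(0, 2, 0, 2, [(0, 1), (2, 0), (3, 3)]))  # False
print(span_is_consistent(0, 3, 0, 2, [(0, 1), (2, 0), (3, 3)]))  # True
```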
In the experiments described in Section 5, all systems that do not use ITG will take advantage of the complete training set.", "cite_spans": [ { "start": 328, "end": 348, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Handling the ITG Constraint", "sec_num": "3.2" }, { "text": "Any phrasal translation model can be used for two tasks: translation modeling and phrasal word alignment. Previous work on JPTM has focused on only the first task. We are interested in phrasal alignment because it may be better suited to heuristic phraseextraction than word-based models. This section describes how to use our phrasal ITG first as a translation model, and then as a phrasal aligner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying the model", "sec_num": "4" }, { "text": "We can test our model's utility for translation by transforming its parameters into a phrase table for the phrasal decoder Pharaoh (Koehn et al., 2003) . Any joint model can produce the necessary conditional probabilities by conditionalizing the joint table in both directions. We use our p(\u0113/f |C) distribution from our stochastic grammar to produce p(\u0113|f ) and p(f |\u0113) values for its phrasal lexicon.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Modeling", "sec_num": "4.1" }, { "text": "Pharaoh also includes lexical weighting parameters that are derived from the alignments used to induce its phrase pairs (Koehn et al., 2003) . Using the phrasal ITG as a direct translation model, we do not produce alignments for individual sentence pairs. Instead, we provide a lexical preference with an IBM Model 1 feature p M1 that penalizes unmatched words (Vogel et al., 2003) . We include both p M1 (\u0113|f ) and p M1 (f |\u0113).", "cite_spans": [ { "start": 120, "end": 140, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" }, { "start": 361, "end": 381, "text": "(Vogel et al., 2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Modeling", "sec_num": "4.1" }, { "text": "We can produce a translation model using insideoutside, without ever creating a Viterbi parse. However, we can also examine the maximum likelihood phrasal alignments predicted by the trained model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrasal Word Alignment", "sec_num": "4.2" }, { "text": "Despite its strengths derived from using phrases throughout training, the alignments predicted by our phrasal ITG are usually unsatisfying. For example, the fragment pair (order of business, ordre des travaux) is aligned as a phrase pair by our system, linking every English word to every French word. This is frustrating, since there is a clear compositional relationship between the fragment's component words. This happens because the system seeks only to maximize the likelihood of its training corpus, and phrases are far more efficient than word-to-word connections. When aligning text, annotators are told to resort to many-to-many links only when no clear compositional relationship exists (Melamed, 1998 ). If we could tell our phrasal aligner the same thing, we could greatly improve the intuitive appeal of our alignments. 
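As an aside on the table construction of Section 4.1, the conditionalization of the joint distribution can be sketched as follows; this is an illustration with placeholder names, not the implementation used in these experiments, and it accepts either probabilities or expected counts.

```python
from collections import defaultdict

def conditionalize(joint):
    """Turn a joint phrasal table over (e, f) pairs into the conditional
    tables p(f | e) and p(e | f) needed by a Pharaoh-style phrase table.
    `joint` maps (english_phrase, foreign_phrase) tuples to probabilities
    or expected counts; normalizing over each margin handles both."""
    e_marg, f_marg = defaultdict(float), defaultdict(float)
    for (e, f), p in joint.items():
        e_marg[e] += p
        f_marg[f] += p
    p_f_given_e = {(e, f): p / e_marg[e] for (e, f), p in joint.items()}
    p_e_given_f = {(e, f): p / f_marg[f] for (e, f), p in joint.items()}
    return p_f_given_e, p_e_given_f
```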
Again, we can leverage high-confidence links for help.", "cite_spans": [ { "start": 698, "end": 712, "text": "(Melamed, 1998", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Phrasal Word Alignment", "sec_num": "4.2" }, { "text": "In the high-confidence alignments provided by GIZA++ intersection, each token participates in at most one link. Links only appear when two wordbased IBM translation models can agree. Therefore, they occur at points of high compositionality: the two words clearly account for one another. We adopt an alignment-driven definition of compositionality: any phrase pair containing two or more highconfidence links is compositional, and can be separated into at least two non-compositional phrases. By removing any phrase pairs that are compositional by this definition from our terminal productions, we can ensure that our aligner never creates such phrases during training or alignment. Doing so produces far more intuitive alignments. Aligned with a model trained using this non-compositional constraint (NCC), our example now forms three wordto-word connections, rather than a single phrasal one. The phrases produced with this constraint are very small, and include only non-compositional context. Therefore, we use the constraint only to train models intended for Viterbi alignment, and not when generating phrase tables directly as in Section 4.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrasal Word Alignment", "sec_num": "4.2" }, { "text": "In this section, we first verify the effectiveness of fixed-link pruning, and then test our phrasal ITG, both as an aligner and as a translation model. We train all translation models with a French-English Europarl corpus obtained by applying a 25 token sentence-length limit to the training set provided for the HLT-NAACL SMT Workshop Shared Task (Koehn and Monz, 2006) . The resulting corpus has 393,132 sentence pairs. 3,376 of these are omitted for ITG methods because their highconfidence alignments have ITG-incompatible constructions. Like our predecessors (Marcu and Wong, 2002; Birch et al., 2006) , we apply a lexicon constraint: no monolingual phrase can be used by any phrasal model unless it occurs at least five times. High-confidence alignments are provided by intersecting GIZA++ alignments trained in each direction with 5 iterations each of Model 1, HMM, and Model 4. All GIZA++ alignments are trained with no sentence-length limit, using the full 688K corpus.", "cite_spans": [ { "start": 348, "end": 370, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF5" }, { "start": 564, "end": 586, "text": "(Marcu and Wong, 2002;", "ref_id": "BIBREF7" }, { "start": 587, "end": 606, "text": "Birch et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "To measure the speed-up provided by fixed-link pruning, we timed our phrasal inside-outside algorithm on the first 100 sentence pairs in our training set, with and without pruning. The results are shown in Table 1 . Tic-tac-toe pruning is included for comparison. With fixed-link pruning, on average 95% of the possible spans are pruned, reducing running time by two orders of magnitude. 
This improvement makes ITG training feasible, even with large bitexts.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Pruning Speed Experiments", "sec_num": "5.1" }, { "text": "The goal of this experiment is to compare the Viterbi alignments from the phrasal ITG to gold standard human alignments. We do this to validate our noncompositional constraint and to select good alignments for use with the surface heuristic. Following the lead of (Fraser and Marcu, 2006) , we hand-aligned the first 100 sentence pairs of our training set according to the Blinker annotation guidelines (Melamed, 1998) . We did not differentiate between sure and possible links. We report precision, recall and balanced F-measure (Och and Ney, 2003) . For comparison purposes, we include the results of three types of GIZA++ combination, including the grow-diag-final heuristic (GDF). We tested our phrasal ITG with fixed link pruning, and then added the non-compositional constraint (NCC). During development we determined that performance levels off for both of the ITG models after 3 EM iterations. The results are shown in Table 2 .", "cite_spans": [ { "start": 264, "end": 288, "text": "(Fraser and Marcu, 2006)", "ref_id": "BIBREF4" }, { "start": 403, "end": 418, "text": "(Melamed, 1998)", "ref_id": "BIBREF8" }, { "start": 530, "end": 549, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 927, "end": 934, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.2" }, { "text": "The first thing to note is that GIZA++ Intersection is indeed very high precision. Our confidence in it as a constraint is not misplaced. We also see that both phrasal models have significantly higher recall than any of the GIZA++ alignments, even higher than the permissive GIZA++ union. One factor contributing to this is the phrasal model's use of cepts: it completely interconnects any phrase pair, while GIZA++ union and GDF may not. Its global view of phrases also helps in this regard: evidence for a phrase can be built up over multiple sentences. Finally, we note that in terms of alignment quality, the non-compositional constraint is an unqualified success for the phrasal ITG. It produces a 25 point improvement in precision, at the cost of 2 points of recall. This produces the highest balanced Fmeasure observed on our test set, but the utility of its alignments will depend largely on one's desired precision-recall trade-off.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Experiments", "sec_num": "5.2" }, { "text": "In this section, we compare a number of different methods for phrase table generation in a French to English translation task. We are interested in answering three questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "1. Does the phrasal ITG improve on the C-JPTM? 2. Can phrasal translation models outperform the surface heuristic? 3. Do Viterbi phrasal alignments provide better input for the surface heuristic?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "With this in mind, we test five phrase tables. 
Two are conditionalized phrasal models, each EM trained until performance degrades:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "\u2022 C-JPTM 3 as described in (Birch et al., 2006) \u2022 Phrasal ITG as described in Section 4.1", "cite_spans": [ { "start": 27, "end": 47, "text": "(Birch et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "Three provide alignments for the surface heuristic:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "\u2022 GIZA++ with grow-diag-final (GDF)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "\u2022 Viterbi Phrasal ITG with and without the noncompositional constraint", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "We use the Pharaoh decoder (Koehn et al., 2003) with the SMT Shared Task baseline system (Koehn and Monz, 2006) . Weights for the log-linear model are set using the 500-sentence tuning set provided for the shared task with minimum error rate training (Och, 2003) as implemented by Venugopal and Vogel (2005) . Results on the provided 2000sentence development set are reported using the BLEU metric (Papineni et al., 2002) . For all methods, we report performance with and without IBM Model 1 features (M1), along with the size of the resulting tables in millions of phrase pairs. The results of all experiments are shown in Table 3 . We see that the Phrasal ITG surpasses the C-JPTM by more than 2.5 BLEU points. A large component of this improvement is due to the ITG's use of inside-outside for expectation calculation, though 3 Supplied by personal communication. Run with default parameters, but with maximum phrase length increased to 5. there are other differences between the two systems. 4 This improvement over search and sampling is demonstrated by the ITG's larger table size; by exploring more thoroughly, it is extracting more phrase pairs from the same amount of data. Both systems improve drastically with the addition of IBM Model 1 features for lexical preference. These features also narrow the gap between the two systems. To help calibrate the contribution of these features, we parameterized the ITG's phrase table using only Model 1 features, which scores 27.17.", "cite_spans": [ { "start": 27, "end": 47, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" }, { "start": 89, "end": 111, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF5" }, { "start": 251, "end": 262, "text": "(Och, 2003)", "ref_id": "BIBREF11" }, { "start": 281, "end": 307, "text": "Venugopal and Vogel (2005)", "ref_id": "BIBREF13" }, { "start": 398, "end": 421, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" }, { "start": 829, "end": 830, "text": "3", "ref_id": null }, { "start": 996, "end": 997, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 624, "end": 631, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "Although ITG+M1 comes close, neither phrasal model matches the performance of the surface heuristic. Whatever the surface heuristic lacks in sophistication, it makes up for in sheer coverage, as demonstrated by its huge table sizes. 
Even the Phrasal ITG Viterbi alignments, which over-commit wildly and have horrible precision, score slightly higher than the best phrasal model. The surface heuristic benefits from capturing as much context as possible, while still covering smaller translation events with its flat counts. It is not held back by any lexicon constraints. When GIZA++ GDF+M1 is forced to conform to a lexicon constraint by dropping any phrase with a frequency lower than 5 from its table, it scores only 29.26, for a reduction of 1.35 BLEU points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "Phrases extracted from our non-compositional Viterbi alignments receive the highest BLEU score, but they are not significantly better than GIZA++ GDF. The two methods also produce similarly-sized tables, despite the ITG's higher recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Experiments", "sec_num": "5.3" }, { "text": "We have presented a phrasal ITG as an alternative to the joint phrasal translation model. This syntactic solution to phrase modeling admits polynomial-time training and alignment algorithms. We demonstrate that the same consistency constraints that allow joint phrasal models to scale also dramatically speed up ITGs, producing an 80-times faster inside-outside algorithm. We show that when used to learn phrase tables for the Pharaoh decoder, the phrasal ITG is superior to the constrained joint phrasal model, producing tables that result in a 2.5 point improvement in BLEU when used alone, and a 1 point improvement when used with IBM Model 1 features. This suggests that ITG's perfect expectation counting does matter; other phrasal models could benefit from either adopting the ITG formalism, or improving their sampling heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We have explored, for the first time, the utility of a joint phrasal model as a word alignment method. We present a non-compositional constraint that turns the phrasal ITG into a high-recall phrasal aligner with an F-measure that is comparable to GIZA++.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "With search and sampling no longer a concern, the remaining weaknesses of the system seem to lie with the model itself. Phrases are just too efficient probabilistically: were we to remove all lexicon constraints, EM would always align entire sentences to entire sentences. This pressure to always build the longest phrase possible may be overwhelming otherwise strong correlations in our training data. A promising next step would be to develop a prior over lexicon size or phrase size, allowing EM to introduce large phrases at a penalty, and removing the need for artificial constraints on the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "If the null symbol \u2205 is included among the terminals, then redundant parses will still occur, but far less frequently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Birch et al. (2006) re-introduce inconsistent phrase-pairs in cases where the sentence pair could not be aligned otherwise. 
We allow links to \u2205 to handle these situations, completely eliminating the pruned spans from our alignment space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Unlike our system, the Birch implementation does table smoothing and internal lexical weighting, both of which should help improve their results. The systems also differ in distortion modeling and \u2205 handling, as described in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments Special thanks to Alexandra Birch for the use of her code, and to our reviewers for their comments. The first author is funded by Alberta Ingenuity and iCORE studentships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Constraining the phrase-based, joint probability statistical translation model", "authors": [ { "first": "A", "middle": [], "last": "Birch", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "154--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Birch, C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Constraining the phrase-based, joint probability statistical translation model. In HLT-NAACL Workshop on Statistical Machine Translation, pages 154-157.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine trans- lation: Parameter estimation. Computational Linguistics, 19(2):263-312.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A comparison of syntactically motivated word alignment spaces", "authors": [ { "first": "C", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "EACL", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cherry and D. Lin. 2006. A comparison of syntactically motivated word alignment spaces. In EACL, pages 145-152.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. 
Maxi- mum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semi-supervised training for statistical word alignment", "authors": [ { "first": "A", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "ACL", "volume": "", "issue": "", "pages": "769--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Fraser and D. Marcu. 2006. Semi-supervised training for statistical word alignment. In ACL, pages 769-776.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Manual and automatic evaluation of machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "C", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "HLT-NACCL Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "102--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn and C. Monz. 2006. Manual and automatic evalu- ation of machine translation. In HLT-NACCL Workshop on Statistical Machine Translation, pages 102-121.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical phrasebased translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase- based translation. In HLT-NAACL, pages 127-133.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A phrase-based, joint probability model for statistic machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "EMNLP", "volume": "", "issue": "", "pages": "133--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Marcu and W. Wong. 2002. A phrase-based, joint probabil- ity model for statistic machine translation. In EMNLP, pages 133-139.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Manual annotation of translational equivalence: The blinker project", "authors": [ { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. D. Melamed. 1998. Manual annotation of translational equivalence: The blinker project. Technical Report 98-07, Institute for Research in Cognitive Science.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multitext grammars and synchronous parsers", "authors": [ { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "158--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. D. Melamed. 2003. Multitext grammars and synchronous parsers. 
In HLT-NAACL, pages 158-165.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney. 2003. A systematic comparison of vari- ous statistical alignment models. Computational Linguistics, 29(1):19-52.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Minimum error rate training for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och. 2003. Minimum error rate training for statistical machine translation. In ACL, pages 160-167.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311-318.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Considerations in maximum mutual information and minimum classification error training for statistical machine translation", "authors": [ { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2005, "venue": "EAMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Venugopal and S. Vogel. 2005. Considerations in maximum mutual information and minimum classification error train- ing for statistical machine translation. In EAMT.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A recursive statistical translation model", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Vilar", "suffix": "" }, { "first": "E", "middle": [], "last": "Vidal", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "199--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. M. Vilar and E. Vidal. 2005. A recursive statistical transla- tion model. 
In Proceedings of the ACL Workshop on Build- ing and Using Parallel Texts, pages 199-207.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The CMU statistical machine translation system", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Tribble", "suffix": "" }, { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "MT Summmit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, Y. Zhang, F. Huang, A. Tribble, A. Venugopal, B. Zhang, and A. Waibel. 2003. The CMU statistical ma- chine translation system. In MT Summmit.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Lin- guistics, 23(3):377-403.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Syntax-based alignment: Supervised or unsupervised", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2004, "venue": "COLING", "volume": "", "issue": "", "pages": "418--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Zhang and D. Gildea. 2004. Syntax-based alignment: Su- pervised or unsupervised? In COLING, pages 418-424.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Stochastic lexicalized inversion transduction grammar for alignment", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Zhang and D. Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In ACL, pages 475- 482.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Zhang, L. Huang, D. Gildea, and K. Knight. 2006. Syn- chronous binarization for machine translation. In HLT- NAACL, pages 256-263.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Three ways in which a phrasal ITG can analyze a multi-word span or phrase.", "type_str": "figure" }, "TABREF0": { "html": null, "text": "Inside-outside run-time comparison.", "content": "
Method | Seconds | Avg. Spans Pruned
No Prune | 415 | -
Tic-tac-toe | 37 | 68%
Fixed link | 5 | 95%
", "type_str": "table", "num": null }, "TABREF1": { "html": null, "text": "Alignment Comparison.", "content": "
Method | Prec | Rec | F-measure
GIZA++ Intersect | 96.7 | 53.0 | 68.5
GIZA++ Union | 82.5 | 69.0 | 75.1
GIZA++ GDF | 84.0 | 68.2 | 75.2
Phrasal ITG | 50.7 | 80.3 | 62.2
Phrasal ITG + NCC | 75.4 | 78.0 | 76.7
", "type_str": "table", "num": null }, "TABREF2": { "html": null, "text": "Translation Comparison.", "content": "
Method | BLEU | +M1 | Size
Conditionalized Phrasal Model
C-JPTM | 26.27 | 28.98 | 1.3M
Phrasal ITG | 28.85 | 30.24 | 2.2M
Alignment with Surface Heuristic
GIZA++ GDF | 30.46 | 30.61 | 9.8M
Phrasal ITG | 30.31 | 30.39 | 5.8M
Phrasal ITG + NCC | 30.66 | 30.80 | 9.0M
", "type_str": "table", "num": null } } } }