{ "paper_id": "W07-0405", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:38:32.474366Z" }, "title": "Binarization, Synchronous Binarization, and Target-side Binarization *", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "3330 Walnut Street, Levine Hall Philadelphia", "postCode": "19104", "region": "PA" } }, "email": "lhuang3@cis.upenn.edu" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "3330 Walnut Street, Levine Hall Philadelphia", "postCode": "19104", "region": "PA" } }, "email": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "3330 Walnut Street, Levine Hall Philadelphia", "postCode": "19104", "region": "PA" } }, "email": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "3330 Walnut Street, Levine Hall Philadelphia", "postCode": "19104", "region": "PA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Binarization is essential for achieving polynomial time complexities in parsing and syntax-based machine translation. This paper presents a new binarization scheme, target-side binarization, and compares it with source-side and synchronous binarizations on both stringbased and tree-based systems using synchronous grammars. In particular, we demonstrate the effectiveness of targetside binarization on a large-scale tree-tostring translation system.", "pdf_parse": { "paper_id": "W07-0405", "_pdf_hash": "", "abstract": [ { "text": "Binarization is essential for achieving polynomial time complexities in parsing and syntax-based machine translation. This paper presents a new binarization scheme, target-side binarization, and compares it with source-side and synchronous binarizations on both stringbased and tree-based systems using synchronous grammars. In particular, we demonstrate the effectiveness of targetside binarization on a large-scale tree-tostring translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Several recent syntax-based models for machine translation (Chiang, 2005; Galley et al., 2006) can be seen as instances of the general framework of synchronous grammars and tree transducers. In this framework, decoding can be thought of as parsing problems, whose complexity is in general exponential in the number of nonterminals on the right hand side of a grammar rule. To alleviate this problem, one can borrow from parsing the technique of binarizing context-free grammars (into Chomsky Normal Form) to reduce the complexity. 
With synchronous context-free grammars (SCFG), however, this problem becomes more complicated with the additional dimension of target-side permutation.", "cite_spans": [ { "start": 59, "end": 73, "text": "(Chiang, 2005;", "ref_id": "BIBREF0" }, { "start": 74, "end": 94, "text": "Galley et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The simplest method of binarizing an SCFG is to binarize (left-to-right) on the source side as if treating it as a monolingual CFG for the source language. However, this approach does not guarantee contiguous spans on the target-side, due to the arbitrary re-ordering of nonterminals between the two languages. As a result, decoding with an integrated language model still has an exponential complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Synchronous binarization (Zhang et al., 2006) solves this problem by simultaneously binarizing both the source and target sides of a synchronous rule, ensuring contiguous spans on both sides whenever possible. Neglecting the small number of non-binarizable rules, the decoding complexity with an integrated language model becomes polynomial, and translation quality is significantly improved thanks to the better search. However, this method is more complicated to implement than the previous one, and the binarizability ratio decreases on freer word-order languages (Wellington et al., 2006).", "cite_spans": [ { "start": 25, "end": 44, "text": "(Zhang et al., 2006", "ref_id": "BIBREF14" }, { "start": 569, "end": 594, "text": "(Wellington et al., 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents a third alternative, target-side binarization, which is the symmetric version of the simple source-side variant mentioned above. We compare it with the other two schemes in two popular instantiations of MT systems based on SCFGs: the string-based systems (Chiang, 2005; Galley et al., 2006) where the input is a string to be parsed using the source-side of the SCFG; and the tree-based systems (Liu et al., 2006; Huang et al., 2006) where the input is a parse tree and is recursively converted into a target string using the SCFG as a tree-transducer. While synchronous binarization is the best strategy for string-based systems, we show that target-side binarization can achieve the same performance as synchronous binarization for tree-based systems, with a much simpler implementation and 100% binarizability.", "cite_spans": [ { "start": 274, "end": 288, "text": "(Chiang, 2005;", "ref_id": "BIBREF0" }, { "start": 289, "end": 309, "text": "Galley et al., 2006)", "ref_id": "BIBREF3" }, { "start": 412, "end": 430, "text": "(Liu et al., 2006;", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we define synchronous context-free grammars and present the three binarization schemes through a motivating example. A synchronous CFG (SCFG) is a context-free rewriting system for generating string pairs. Each rule (synchronous production) rewrites a nonterminal in two dimensions, subject to the constraint that the sequence of nonterminal children on one side is a permutation of the nonterminal sequence on the other side. Each co-indexed child nonterminal pair will be further rewritten as a unit.
The rank of a rule is defined as the number of its synchronous nonterminals. We also define the source and target projections of an SCFG to be the CFGs for the source and target languages, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous Grammars and Binarization Schemes", "sec_num": "2" }, { "text": "For example, the following SCFG 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous Grammars and Binarization Schemes", "sec_num": "2" }, { "text": "(1) S \u2192 NP 1 PP 2 VP 3 , NP 1 VP 3 PP 2 NP \u2192 Baoweier, Powell VP \u2192 juxing le huitan, held a meeting PP \u2192 yu Shalong,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous Grammars and Binarization Schemes", "sec_num": "2" }, { "text": "with Sharon captures the re-ordering of PP and VP between Chinese (source) and English (target). The sourceprojection of the first rule, for example, is S \u2192 NP PP VP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous Grammars and Binarization Schemes", "sec_num": "2" }, { "text": "Decoding with an SCFG (e.g., translating from Chinese to English using the above grammar) can be cast as a parsing problem (see Section 3 for details), in which case we need to binarize a synchronous rule with more than two nonterminals to achieve polynomial time algorithms (Zhang et al., 2006) . We will next present the three different binarization schemes using Example 1.", "cite_spans": [ { "start": 275, "end": 295, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Synchronous Grammars and Binarization Schemes", "sec_num": "2" }, { "text": "The first and simplest scheme, source-side binarization, works left-to-right on the source projection of the SCFG without respecting the re-orderings on the target-side. So it will binarize the first rule as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side Binarization", "sec_num": "2.1" }, { "text": "(2) S \u2192 NP-PP VP NP-PP \u2192 NP PP which corresponds to Figure 1 (b) . Notice that the virtual nonterminal NP-PP representing the intermediate symbol is discontinuous with two spans on the target (English) side, because this binarization scheme completely ignores the reorderings of nonterminals. 
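To make this left-to-right procedure concrete, the following is a small illustrative sketch (added here, not part of the original paper; the rule representation and helper names are assumptions of this sketch) that binarizes a rule on the source side and reports whether each virtual nonterminal remains contiguous on the target side:

```python
# Hypothetical sketch of left-to-right source-side binarization of one
# SCFG rule; the rule is assumed to be given as its source nonterminal
# sequence plus a permutation mapping source positions to target positions.

def source_side_binarize(lhs, src, perm):
    '''Return binary rules (parent, left_child, right_child).

    Virtual nonterminals are named by joining the source symbols they
    cover; their target-side spans may be discontinuous, so the result
    is in general an MTG rather than an SCFG.
    '''
    rules = []
    left = src[0]
    for i in range(1, len(src)):
        parent = lhs if i == len(src) - 1 else '-'.join(src[:i + 1])
        rules.append((parent, left, src[i]))
        if parent != lhs:
            # do the covered target positions form a single span?
            tgt = sorted(perm[:i + 1])
            contiguous = tgt == list(range(tgt[0], tgt[-1] + 1))
            print(parent, 'contiguous on the target side:', contiguous)
        left = parent
    return rules

# rule (1): source NP PP VP, target NP VP PP, i.e. perm = [0, 2, 1]
print(source_side_binarize('S', ['NP', 'PP', 'VP'], [0, 2, 1]))
# NP-PP covers target positions {0, 2}: discontinuous, as in Figure 1(b)
```

Run on rule (1), this reproduces the NP-PP virtual nonterminal of binarization (2) and flags its target-side gap.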
As a result, the binarized grammar, with a gap on the target-side, is no longer an SCFG, but can be represented in the more general formalism of Multitext Grammars (MTG) (Melamed, 2003):", "cite_spans": [ { "start": 464, "end": 479, "text": "(Melamed, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 52, "end": 64, "text": "Figure 1 (b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Source-side Binarization", "sec_num": "2.1" }, { "text": "(3) (S, S) \u2192\u22b2\u22b3 ( [1, 2] NP-PP VP ; [1, 2, 1] NP-PP (2) VP )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side Binarization", "sec_num": "2.1" }, { "text": "Here [1, 2, 1] denotes that on the target-side, the first nonterminal NP-PP has two discontinuous spans, with the second nonterminal VP in the gap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-side Binarization", "sec_num": "2.1" }, { "text": "Intuitively speaking, the gaps on the target-side will lead to exponential complexity in decoding with integrated language models (see Section 3), as well as in synchronous parsing (Zhang et al., 2006).", "cite_spans": [ { "start": 178, "end": 198, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Source-side Binarization", "sec_num": "2.1" }, { "text": "A more principled method is synchronous binarization, which simultaneously binarizes both source and target sides, with the constraint that virtual nonterminals always have contiguous spans on both sides. The resulting grammar is thus another SCFG, the binary-branching equivalent of the original grammar, which can be thought of as an extension of the Chomsky Normal Form to synchronous grammars. [Figure 2: an example of a non-binarizable rule from the hand-aligned Chinese-English data in Liu et al. (2005); the SCFG rule is VP \u2192 ADVP 1 PP 2 VB 3 NN 4 , VP \u2192 VB 3 JJ 1 NNS 4 PP 2 in the notation of Satta and Peserico (2005).]", "cite_spans": [ { "start": 445, "end": 462, "text": "Liu et al. (2005)", "ref_id": "BIBREF7" }, { "start": 556, "end": 581, "text": "Satta and Peserico (2005)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 353, "end": 361, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Synchronous Binarization", "sec_num": "2.2" }, { "text": "The example rule is now binarized into:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous Binarization", "sec_num": "2.2" }, { "text": "(4) S \u2192 NP 1 PP-VP 2 , NP 1 PP-VP 2 and PP-VP \u2192 PP 1 VP 2 , VP 2 PP 1 , which corresponds to Figure 1 (c). This representation, being contiguous on both sides, successfully reduces the decoding complexity to a low polynomial and significantly improves the search quality (Zhang et al., 2006). However, this scheme has the following drawbacks. First, synchronous binarization is not always possible with an arbitrary SCFG. Some reorderings, for example the permutation (2, 4, 1, 3), are non-binarizable. Although according to Zhang et al. (2006), the vast majority (99.7%) of rules in their Chinese-English dataset are binarizable, there do exist some interesting cases that are not (see Figure 2 for a real-data example). More importantly, the ratio of binarizability, as expected, decreases on freer word-order languages (Wellington et al., 2006).
Second, synchronous binarization is significantly more complicated to implement than the straightforward source-side binarization.", "cite_spans": [ { "start": 265, "end": 285, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF14" }, { "start": 520, "end": 539, "text": "Zhang et al. (2006)", "ref_id": "BIBREF14" }, { "start": 818, "end": 843, "text": "(Wellington et al., 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 683, "end": 691, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Synchronous Binarization", "sec_num": "2.2" }, { "text": "We now introduce a novel scheme, target-side binarization, which is the symmetric version of the source-side variant. Under this method, the targetside is always contiguous, while leaving some gaps on the source-side. The example rule is binarized into the following MTG form: Table 1 : Source and target arities of the three binarization schemes of an SCFG rule of rank n.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 284, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Target-side Binarization", "sec_num": "2.3" }, { "text": "(5) S S \u2192\u22b2\u22b3 [1, 2, 1] [1, 2] NP-VP (2) PP NP-VP PP which corresponds to Figure 1 (d). scheme s(b) t(b) source-side 1 \u2264 n/2 synchronous 1 1 target-side \u2264 n/2 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side Binarization", "sec_num": "2.3" }, { "text": "Although the discontinuity on the source-side in this new scheme causes exponential complexity in string-based systems (Section 3.1), the continuous spans on the target-side will ensure polynomial complexity in tree-based systems (Section 3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side Binarization", "sec_num": "2.3" }, { "text": "Before we move on to study the effects of various binarization schemes in decoding, we need some formal machineries of discontinuities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target-side Binarization", "sec_num": "2.3" }, { "text": "We define the source and target arities of a virtual nonterminal V , denoted s(V ) and t(V ), to be the number of (consecutive) spans of V on the source and target sides, respectively. This definition extends to a binarization b of an SCFG rule of rank n, where arities s(b) and t(b) are defined as the maximum source and target arities over all virtual nonterminals in b, respectively. For example, the source and target arities of the three binarizations in Figure 1 are 1 and 2 for (b), 1 and 1 for (c), and 2 and 1 for (d). In general, the arities for the three binarization schemes are summarized in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 460, "end": 468, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 605, "end": 612, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Target-side Binarization", "sec_num": "2.3" }, { "text": "We now compare the algorithmic complexities of the three binarization schemes in a central problem of machine translation: decoding with an integrated ngram language model. 
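The arities of Table 1 drive this comparison; as a concrete illustration (a sketch added here, not from the paper; the data representation is an assumption), the source and target arities of a binarization can be computed from the rule positions covered by its virtual nonterminals:

```python
# Hypothetical sketch: computing s(b) and t(b) for a binarization.
# Each virtual nonterminal is given as the set of source rule positions
# it covers; perm maps source positions to target positions.

def num_spans(positions):
    # number of maximal runs of consecutive integers
    pos = sorted(positions)
    return sum(1 for i, p in enumerate(pos) if i == 0 or p != pos[i - 1] + 1)

def arities(virtuals, perm):
    s = max(num_spans(v) for v in virtuals)
    t = max(num_spans({perm[i] for i in v}) for v in virtuals)
    return s, t

perm = [0, 2, 1]                    # rule (1): NP PP VP vs. NP VP PP
print(arities([{0, 1}], perm))      # source-side  NP-PP -> (1, 2)
print(arities([{1, 2}], perm))      # synchronous  PP-VP -> (1, 1)
print(arities([{0, 2}], perm))      # target-side  NP-VP -> (2, 1)
```

The three calls reproduce the arities of Figure 1 (b), (c), and (d), respectively.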
Depending on the input being a string or a parse-tree, we divide MT systems based on synchronous grammars into two broad categories: string-based and tree-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Analysis", "sec_num": "3" }, { "text": "String-based approaches include both string-tostring (Chiang, 2005) and string-to-tree systems (Galley et al., 2006) . 2 To simplify the presentation we will just focus on the former but the analysis also applies to the latter. We will first discuss decoding with a pure SCFG as the translation model (henceforth \u2212LM decoding), and then extend it to include an n-gram model (+LM decoding).", "cite_spans": [ { "start": 53, "end": 67, "text": "(Chiang, 2005)", "ref_id": "BIBREF0" }, { "start": 95, "end": 116, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" }, { "start": 119, "end": 120, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "String-based Approaches", "sec_num": "3.1" }, { "text": "The \u2212LM decoder can be cast as a (monolingual) parser on the source language: it takes the source-language string as input and parses it using the source-projection of the SCFG while building the corresponding target-language sub-translations in parallel. For source-side and synchronous binarizations, since the resulting grammar has contiguous source spans, we can apply the CKY algorithm which guarantees cubic time complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "For example, a deduction along the virtual rule in the synchronously binarized grammar (4) is notated", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(PP j,k ) : (w 1 , t 1 ) (VP k,l ) : (w 2 , t 2 ) (PP-VP j,l ) : (w 1 + w 2 , t 2 t 1 )", "eq_num": "(6)" } ], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "where i, j, k are free indices in the source string, w 1 , w 2 are the scores of the two antecedent items, and t 1 , t 2 are the corresponding sub-translations. 3 The resulting translation t 2 t 1 is the inverted concatenation as specified by the target-side of the SCFG rule.", "cite_spans": [ { "start": 161, "end": 162, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "The case for a source-side binarized grammar (3) is slightly more complicated than the above, because we have to keep track of gaps on the target side. For example, we first combine NP with PP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(NP i,j ) : (w 1 , t 1 ) (PP j,k ) : (w 2 , t 2 ) (NP-PP i,k ) : (w 1 + w 2 , t 1 \u2294 t 2 )", "eq_num": "(7)" } ], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "2 Our notation of X-to-Y systems is defined as follows: X denotes the input, either a string or a tree; while Y represents the RHS structure of an individual rule: Y is string if the RHS is a flat one-level tree (as in SCFGs), and Y is tree if the RHS is multi-level as in (Galley et al., 2006) . 
This convention also applies to tree-based approaches.", "cite_spans": [ { "start": 273, "end": 294, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "3 The actual system does not need to store the translations since they can be recovered from backpointers and they are not considered part of the state. We keep them here only for presentation reasons. leaving a gap (\u2294) on the target-side resulting item, because NP and PP are not contiguous in the English ordering. This gap is later filled in by the subtranslation t 3 of VP (see also Figure 3 (a)):", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 395, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Translation as Parsing", "sec_num": "3.1.1" }, { "text": "(NP-PP i,k ) : (w 1 , t 1 \u2294 t 2 ) (VP k,l ) : (w 2 , t 3 ) (S i,l ) : (w 1 + w 2 , t 1 t 3 t 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "(8) In both cases, there are still only three free indices on the source-side, so the complexity remains cubic. The gaps on the target-side do not require any extra computation in the current \u2212LM setting, but as we shall see shortly below, will lead to exponential complexity when integrating a language model. For a target-side binarized grammar as in (5), however, the source-side spans are discontinuous where CKY can not apply, and we have to enumerate more free indices on the source side. For example, the first deduction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(NP i,j ) : (w 1 , t 1 ) (VP k,l ) : (w 2 , t 2 ) (NP-VP i,j\u2294k,l ) : (w 1 + w 2 , t 1 t 2 )", "eq_num": "(9)" } ], "section": "NP-", "sec_num": null }, { "text": "leaves a gap in the source-side span of the resulting item, which is later filled in when the item is combined with a PP (see also Figure 3 (b)):", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 139, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "(NP-VP i,j\u2294k,l ) : (w 1 , t 1 ) (PP j,k ) : (w 2 , t 2 ) (S i,l ) : (w 1 + w 2 , t 1 t 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "(10) Both of the above deductions have four free indices, and thus of complexity O(|w| 4 ) instead of cubic in the length of the input string w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "More generally, the complexity of a binarization scheme depends on its source arity. In the worstcase, a binarized grammar with a source arity of s will require at most (2s + 1) free indices in a deduction, because otherwise if one rule needs (2s + 2) indices, then there are s+1 spans, which contradicts the definition of arity (Huang et al., 2005) . 4 These deductive systems represent the search space of decoding without a language model. When one is instantiated for a particular input string, it defines a set of derivations, called a forest, represented in a compact structure that has a structure of a hypergraph. 
Accordingly we call items like (PP 1,3 ) nodes in the forest, and an instantiated deduction like", "cite_spans": [ { "start": 329, "end": 349, "text": "(Huang et al., 2005)", "ref_id": "BIBREF4" }, { "start": 352, "end": 353, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 653, "end": 660, "text": "(PP 1,3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "(PP-VP 1,6 ) \u2192 (PP 1,3 )(VP 3,6 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "we call a hyperedge that connects one or more antecedent nodes to a consequent node. In this representation, the time complexity of \u2212LM decoding, which we refer to as source-side complexity, is proportional to the size of the forest F , i.e., the number of hyperedges (instantiated deductions) in F . To summarize, the source-side complexity for a binarized grammar of source arity s is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "|F | = O(|w| 2s+1 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP-", "sec_num": null }, { "text": "To integrate with a bigram language model, we can use the dynamic-programming algorithm of Wu (1996) , which we may think of as proceeding in two passes. The first pass is as above, and the second pass traverses the first-pass forest, assigning to each node v a set of augmented items, which we call +LM items, of the form (v a\u22c6b ), where a and b are target words and \u22c6 is a placeholder symbol for an elided part of a target-language string. This item indicates that a possible translation of the part of the input spanned by v is a target string that starts with a and ends with b.", "cite_spans": [ { "start": 91, "end": 100, "text": "Wu (1996)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "Here is an example deduction in the synchronously binarized grammar (4), for a +LM item for the node (PP-VP 1,6 ) based on the \u2212LM Deduction (6):", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 111, "text": "(PP-VP 1,6", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(PP with \u22c6 Sharon 1,3 ): (w 1 , t 1 ) (VP held \u22c6 talk 3,6 ): (w 2 , t 2 ) (PP-VP held \u22c6 Sharon 1,6 ): (w \u2032 , t 2 t 1 )", "eq_num": "(11)" } ], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "where w \u2032 = w 1 + w 2 \u2212 log P lm (with | talk) is the score of the resulting +LM item: the sum of the scores of the antecedent items, plus a combination cost which is the negative log probability of the bigrams formed in combining adjacent boundary words of antecedents. Now that we keep track of target-side boundary words, an additional complexity, called target-side complexity, is introduced. In Deduction (11), four target words are enumerated, and each +LM item stores two boundary words; this is also true in general for synchronous and target-side binarized grammars where we always combine two consecutive target strings in a deduction. More generally, this scheme can be easily extended to work with an mgram model (Chiang, 2007) where m is usually \u2265 3 (trigram or higher) in practice. 
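To make the boundary-word bookkeeping of Deduction (11) concrete, here is a small sketch for the bigram case (added for illustration, not from the paper; the item layout and the language-model interface are assumptions of this sketch). The cost of exactly this bookkeeping is what the target-side complexity below counts:

```python
import math

# Hypothetical sketch of the bigram (+LM) combination in Deduction (11):
# a +LM item is (i, j, first_word, last_word, score), scores are negative
# log-probabilities, and p_bigram(w, prev) is an assumed LM interface.

def combine_plm(pp_item, vp_item, p_bigram):
    # combine (PP a*b)_{j,k} and (VP c*d)_{k,l} under target order VP PP
    j, k1, a, b, w1 = pp_item
    k2, l, c, d, w2 = vp_item
    assert k1 == k2, 'items must be adjacent on the source side'
    cost = -math.log(p_bigram(a, prev=d))   # bigram formed at the seam: (d, a)
    # the new item starts with the VP's first word, ends with the PP's last
    return (j, l, c, b, w1 + w2 + cost)

lm = lambda w, prev: 1e-3                   # toy uniform bigram model
pp = (1, 3, 'with', 'Sharon', 2.0)          # (PP  with * Sharon)_{1,3}
vp = (3, 6, 'held', 'talk', 3.5)            # (VP  held * talk)_{3,6}
print(combine_plm(pp, vp, lm))              # (1, 6, 'held', 'Sharon', ...)
```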
The target-side complexity for this case is thus", "cite_spans": [ { "start": 725, "end": 739, "text": "(Chiang, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "O(|V | 4(m\u22121) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "where V is the target language vocabulary. This is because each constituent must store its initial and final (m \u2212 1)-grams, which yields four (m \u2212 1)grams in a binary combination. In practice, it is often assumed that there are only a constant number of translations for each input word, which reduces this complexity into O(|w| 4(m\u22121) ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "However, for source-side binarization which leaves gaps on the target-side, the situation becomes more complicated. Consider Deduction (8), where the sub-translation for the virtual node NP-PP is gapped (t 1 \u2294t 2 ). Now if we integrate a bigram model based on that deduction, we have to maintain the boundary words of both t 1 and t 2 in the +LM node of NP-PP. Together with the boundary words in node VP, there are a total of six target words to enumerate for this +LM deduction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "(NP-PP a\u22c6b\u2294e\u22c6f i,k ) : (w 1 , t 1 \u2294 t 2 ) (VP c\u22c6d k,l ) : (w 2 , t 3 ) (S a\u22c6f i,l ) : (w \u2032 , t 1 t 3 t 2 ) (12) where w \u2032 = w 1 + w 2 \u2212 log P lm (c | b)P lm (e | d).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "With an analysis similar to that of the source-side, we state that, for a binarized grammar with target arity t, the target-side complexity, denoted T , is T = O(|w| 2(t+1)(m\u22121) ) scheme string-based tree-based source-side |w| 3+2(t+1)(m\u22121) |w| 1+2(t+1)(m\u22121) synchronous |w| 3+4(m\u22121) |w| 1+4(m\u22121) target-side |w| (2s+1)+4(m\u22121) |w| 1+4(m\u22121) Table 2 : Worst-case decoding complexities of the three binarization schemes in the two approaches (excluding the O(|w| 3 ) time for source-side parsing in tree-based approaches).", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "because in the worst-case, there are t + 1 spans involved in a +LM deduction (t of them from one virtual antecedent and the other one non-virtual), and for each span, there are m \u2212 1 target words to enumerate at both left and right boundaries, giving a total of 2(t + 1)(m \u2212 1) words in this deduction. We now conclude that, in a string-based system, the combined complexities for a binarized grammar with source arity s and target arity t is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "O(|F |T ) = O(|w| (2s+1)+2(t+1)(m\u22121) ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "The results for the three specific binarization schemes are summarized in Table 2 . 
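The exponents in Table 2 follow mechanically from this combined formula and the arities of Table 1; the small sketch below (an illustration added here, not from the paper) computes them for a given rule rank n and language-model order m:

```python
# Hypothetical sketch: worst-case exponents of |w| in Table 2, from
#   string-based: (2s + 1) + 2(t + 1)(m - 1)
#   tree-based:        1   + 2(t + 1)(m - 1)
# with the arities (s, t) of Table 1.

def exponents(s, t, m):
    lm = 2 * (t + 1) * (m - 1)          # target-side (boundary-word) factor
    return (2 * s + 1) + lm, 1 + lm     # (string-based, tree-based)

m, n = 3, 4                             # trigram LM, rank-4 rule (worst case)
for scheme, (s, t) in [('source-side', (1, n // 2)),
                       ('synchronous', (1, 1)),
                       ('target-side', (n // 2, 1))]:
    print(scheme, exponents(s, t, m))
# source-side (15, 13), synchronous (11, 9), target-side (13, 9)
```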
Although both source-side and target-side binarizations lead to exponential complexities, it is likely that language model combinations (target-side complexity) dominate the computation, since m is larger than 2 in practice. In this sense, target-side binarization is still preferable to source-side binarization.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "It is also worth noting that with the hook trick of Huang et al. (2005) , the target-side complexity can be reduced to O(|w| (2t+1)(m\u22121) ), making it more analogous to its source-side counterpart: if we consider the decoding problem as intersecting the SCFG with a source-side DFA which has |S| = |w|+1 states, and a target-side DFA which has |T | = O(|w| m\u22121 ) states, then the intersected grammar has a parsing complexity of O(|S| 2s+1 |T | 2t+1 ), which is symmetric from both sides.", "cite_spans": [ { "start": 52, "end": 71, "text": "Huang et al. (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "3.1.2" }, { "text": "The tree-based approaches include the tree-to-string (also called syntax-directed) systems (Liu et al., 2006; . This approach takes a source-language parse tree, instead of the plain string, as input, and tries to find the best derivation that recursively rewrites the input tree into a target ... string, using the SCFG as a tree-transducer. In this setting, the \u2212LM decoding phase is a tree-parsing problem (Eisner, 2003) which aims to cover the entire tree by a set of rules. For example, a deduction of the first rule in Example 1 would be:", "cite_spans": [ { "start": 91, "end": 109, "text": "(Liu et al., 2006;", "ref_id": "BIBREF8" }, { "start": 409, "end": 423, "text": "(Eisner, 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "S \u03b7 : t 1 t 3 t 2 NP \u03b7\u20221 : t 1 ... PP \u03b7\u20222 : t 2 ... VP \u03b7\u20223 : t 3 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(NP \u03b7\u20221 ) : (w 1 , t 1 ) (PP \u03b7\u20222 ) : (w 2 , t 2 ) (VP \u03b7\u20223 ) : (w 3 , t 3 ) (S \u03b7 ) : (w 1 + w 2 + w 3 , t 1 t 3 t 2 )", "eq_num": "(13)" } ], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "where \u03b7 and \u03b7 \u2022 i(i = 1, 2, 3) are tree addresses (Shieber et al., 1995) , with \u03b7 \u2022 i being the i th child of \u03b7 (the address of the root node is \u01eb). The nonterminal labels at these tree nodes must match those in the SCFG rule, e.g., the input tree must have a PP at node \u03b7 \u2022 2.", "cite_spans": [ { "start": 50, "end": 72, "text": "(Shieber et al., 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "The semantics of this deduction is the following: if the label of the current node in the input tree is S, and its three children are labeled NP, PP, and VP, with corresponding sub-translations t 1 , t 2 , and t 3 , then a possible translation for the current node S is t 1 t 3 t 2 (see Figure 4 ). 
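As a concrete illustration of this bottom-up step (a sketch added here, not code from the paper; the tree and rule representations are assumptions of this sketch), a rule that pattern-matches the children of the current node assembles the node's translation in the order given by its target side:

```python
# Hypothetical sketch of the bottom-up tree-to-string step in Deduction (13).

def apply_rule(node_label, child_items, rule):
    '''child_items: [(label, score, translation), ...] in source order.
    rule: (lhs, src_labels, tgt_order), where tgt_order lists source
    positions in target order, e.g. [0, 2, 1] for rule (1).'''
    lhs, src_labels, tgt_order = rule
    if node_label != lhs or [c[0] for c in child_items] != src_labels:
        return None                           # rule does not match this node
    score = sum(c[1] for c in child_items)    # w1 + w2 + w3
    translation = ' '.join(child_items[i][2] for i in tgt_order)  # t1 t3 t2
    return (lhs, score, translation)

rule = ('S', ['NP', 'PP', 'VP'], [0, 2, 1])   # rule (1) of the paper
children = [('NP', 0.5, 'Powell'),
            ('PP', 1.2, 'with Sharon'),
            ('VP', 0.9, 'held a meeting')]
print(apply_rule('S', children, rule))
# ('S', 2.6, 'Powell held a meeting with Sharon')
```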
An alternative, top-down version of this bottom-up deductive system is, at each node, to try all SCFG rules that pattern-match the current subtree, and to recursively solve the sub-problems indicated by the variables, i.e., the synchronous nonterminals, of the matching rule.", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 295, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "With the input tree completely given, this setting has some fundamental differences from its string-based counterpart. First, we do not need to binarize the SCFG grammar before \u2212LM decoding. In fact, it would be much harder to do the tree-parsing (pattern-matching) with a binarized grammar. Second, regardless of the number of nonterminals in a rule, building the \u2212LM forest always costs time linear in the size of the input tree (times a grammar constant, see (Huang et al., 2006, Sec. 5.1) for details), which is in turn linear in the length of the input string. So we have:", "cite_spans": [ { "start": 460, "end": 487, "text": "(Huang et al., 2006, Sec. 5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "O(|F|) = O(|w|).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "This fast \u2212LM decoding is a major advantage of tree-based approaches. Now in +LM decoding, we still need binarization of the hyperedges, as opposed to the rules, in the forest, but the analysis is almost identical to that of the string-based approach. For example, the tree-based version of Deduction (12) for source-side binarization is now notated", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(NP \u03b7\u20221 -PP \u03b7\u20222 a\u22c6b\u2294e\u22c6f ) : (w 1 , t 1 \u2294 t 2 ) (VP \u03b7\u20223 c\u22c6d ) : (w 2 , t 3 ) (S \u03b7 a\u22c6f ) : (w \u2032 , t 1 t 3 t 2 )", "eq_num": "(14)" } ], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "In general, the target-side complexity of a binarized grammar with target arity t is still T = O(|w|^{2(t+1)(m\u22121)}) and the combined decoding complexity of the tree-based approach is O(|F|T) = O(|w|^{1+2(t+1)(m\u22121)}). Table 2 shows that in this tree-based setting, target-side binarization has exactly the same performance as synchronous binarization, while being much simpler to implement and free of the non-binarizability problem. The fact that simple binarization works (at least) equally well, which is not possible in string-based systems, is another advantage of the tree-based approaches.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Tree-based Approaches", "sec_num": "3.2" }, { "text": "Section 3 shows that target-side binarization achieves the same polynomial decoding complexity as the more sophisticated synchronous binarization in tree-based systems.
We now empirically compare target-side binarization with an even simpler variant, on-the-fly generation, where the only difference is that the latter does target-side left-to-right binarization during +LM decoding on a hyperedge-per-hyperedge basis, without sharing common virtual nonterminals across hyperedges, while the former binarizes the whole \u2212LM forest before the +LM decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Our experiments are on English-to-Chinese translation in the tree-to-string system of Huang et al. (2006), which takes a source-language parse tree as input and tries to recursively convert it to a target-language string according to transfer rules in a synchronous grammar (Galley et al., 2006). For instance, the following rule translates an English passive construction into Chinese. Although the rules are actually in a synchronous tree-substitution grammar (STSG) instead of an SCFG, its derivation structure is still a hypergraph and all the analysis in Section 3.2 still applies. This system performs slightly better than the state-of-the-art phrase-based system Pharaoh (Koehn, 2004) on English-to-Chinese translation. A very similar system for the reverse direction is described in (Liu et al., 2006). Our data preparation follows Huang et al. (2006): the training data is a parallel corpus of 28.3M words on the English side, from which we extracted 24.7M tree-to-string rules using the algorithm of (Galley et al., 2006), and we trained a Chinese trigram model on the Chinese side. We test our methods on the same test set as in Huang et al. (2006), which is a 140-sentence subset of the NIST 2003 MT evaluation with 9-36 words on the English side. The weights of the log-linear model are tuned on a separate development set. Figure 5 compares the number of nodes in the binarized forests against the original forest. On-the-fly generation essentially works on a larger forest with duplicate nodes due to the lack of sharing, which is on average 1.85 times bigger than the target-side binarized forest. This difference is also reflected in the decoding speed, which is illustrated in Figure 6 under various beam settings, where the amount of computation is measured by the number of +LM items generated. At each individual beam setting, the two methods produce exactly the same set of translations (i.e., there is no relative search error), but target-side binarization is consistently 1.3 times faster thanks to the sharing. In terms of translation quality, the final BLEU score at the largest beam setting is 0.2614, significantly higher than Pharaoh's 0.2354 as reported in Huang et al. (2006).", "cite_spans": [ { "start": 254, "end": 275, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" }, { "start": 659, "end": 672, "text": "(Koehn, 2004)", "ref_id": "BIBREF6" }, { "start": 772, "end": 790, "text": "(Liu et al., 2006)", "ref_id": "BIBREF8" }, { "start": 973, "end": 994, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1272, "end": 1280, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 1630, "end": 1638, "text": "Figure 6", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "This paper introduces a simple binarization scheme, target-side binarization, and presents a systematic study of the theoretical properties of the three binarization schemes in both string-based and tree-based systems using synchronous grammars.
In particular, we show that target-side binarization achieves the same polynomial complexity as synchronous binarization while being much simpler to implement and universally applicable to arbitrary SCFGs. We also demonstrate the empirical effectiveness of this new scheme on a large-scale tree-to-string system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "An alternative notation, used bySatta and Peserico (2005), allows co-indexed nonterminals to take different symbols across languages, which is convenient in describing syntactic divergences (seeFigure 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Actually this is true only if in any binarization scheme, a non-contiguous item is always combined with a contiguous item. We define both source and target binarizations to be incremental (i.e., left-to-right or right-to-left), so this assumption trivially holds. More general binarization schemes are possible to have even higher complexities, but also possible to achieve better complexities. Full discussion is left for a separate paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. In Computational Linguistics, volume 33. To appear.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning non-isomorphic tree mappings for machine translation", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL (poster)", "volume": "", "issue": "", "pages": "205--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2003. Learning non-isomorphic tree map- pings for machine translation. 
In Proceedings of ACL (poster), pages 205-208.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Scalable inference and training of context-rich syntactic translation models", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceed- ings of COLING-ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Machine translation as lexicalized parsing with hooks", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT-2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Hao Zhang, and Daniel Gildea. 2005. Ma- chine translation as lexicalized parsing with hooks. In Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT-2005).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical syntax-directed translation with extended domain of locality", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2006, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. of AMTA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA, pages 115-124.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Log-linear models for word alignment", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. 
In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Tree-tostring alignment template for statistical machine translation", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In Proceedings of COLING-ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multitext grammars and synchronous parsers", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed. 2003. Multitext grammars and syn- chronous parsers. In Proceedings of NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Some computational complexity results for synchronous context-free grammars", "authors": [ { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Enoch", "middle": [], "last": "Peserico", "suffix": "" } ], "year": 2005, "venue": "Proc. of HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giorgio Satta and Enoch Peserico. 2005. Some computa- tional complexity results for synchronous context-free grammars. In Proc. of HLT-EMNLP 2005.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Principles and implementation of deductive parsing", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 1995, "venue": "Journal of Logic Programming", "volume": "24", "issue": "", "pages": "3--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3-36.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Empirical lower bounds on the complexity of translational equivalence", "authors": [ { "first": "Sonjia", "middle": [], "last": "Benjamin Wellington", "suffix": "" }, { "first": "I", "middle": [ "Dan" ], "last": "Waxmonsky", "suffix": "" }, { "first": "", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Wellington, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical lower bounds on the com- plexity of translational equivalence. In Proceedings of COLING-ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A polynomial-time algorithm for statistical machine translation", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1996. A polynomial-time algorithm for sta- tistical machine translation. 
In Proceedings of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proc. of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proc. of HLT-NAACL.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Illustration of the three binarization schemes, with virtual nonterminals in gray.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Illustrations of two deductions with gaps.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Illustration of tree-to-string deduction.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Number of nodes in the forests. Input sentences are grouped into bins according to their lengths(5-9, 10-14, 15-20, etc.). bei x 2 x 1", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "Decoding speed and BLEU scores under beam search.", "uris": null, "num": null } } } }