{ "paper_id": "W07-0404", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:39:43.676829Z" }, "title": "Factorization of Synchronous Context-Free Grammars in Linear Time", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Rochester Rochester", "location": { "postCode": "14627", "region": "NY" } }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Rochester Rochester", "location": { "postCode": "14627", "region": "NY" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Factoring a Synchronous Context-Free Grammar into an equivalent grammar with a smaller number of nonterminals in each rule enables synchronous parsing algorithms of lower complexity. The problem can be formalized as searching for the tree-decomposition of a given permutation with the minimal branching factor. In this paper, by modifying the algorithm of Uno and Yagiura (2000) for the closely related problem of finding all common intervals of two permutations, we achieve a linear time algorithm for the permutation factorization problem. We also use the algorithm to analyze the maximum SCFG rule length needed to cover hand-aligned data from various language pairs.", "pdf_parse": { "paper_id": "W07-0404", "_pdf_hash": "", "abstract": [ { "text": "Factoring a Synchronous Context-Free Grammar into an equivalent grammar with a smaller number of nonterminals in each rule enables synchronous parsing algorithms of lower complexity. The problem can be formalized as searching for the tree-decomposition of a given permutation with the minimal branching factor. In this paper, by modifying the algorithm of Uno and Yagiura (2000) for the closely related problem of finding all common intervals of two permutations, we achieve a linear time algorithm for the permutation factorization problem. We also use the algorithm to analyze the maximum SCFG rule length needed to cover hand-aligned data from various language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A number of recent syntax-based approaches to statistical machine translation make use of Synchronous Context Free Grammar (SCFG) as the underlying model of translational equivalence. Wu (1997) 's Inversion Transduction Grammar, as well as tree-transformation models of translation such as Yamada and Knight (2001) , Galley et al. (2004) , and Chiang (2005) all fall into this category.", "cite_spans": [ { "start": 184, "end": 193, "text": "Wu (1997)", "ref_id": "BIBREF9" }, { "start": 290, "end": 314, "text": "Yamada and Knight (2001)", "ref_id": "BIBREF10" }, { "start": 317, "end": 337, "text": "Galley et al. (2004)", "ref_id": "BIBREF4" }, { "start": 344, "end": 357, "text": "Chiang (2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A crucial question for efficient computation in approaches based on SCFG is the length of the grammar rules. Grammars with longer rules can represent a larger set of reorderings between languages (Aho and Ullman, 1972) , but also require greater computational complexity for word alignment algorithms based on synchronous parsing (Satta and Peserico, 2005) . Grammar rules extracted from large parallel corpora by systems such as Galley et al. 
(2004) can be quite large, and Wellington et al. (2006) argue that complex rules are necessary by analyzing the coverage of gold-standard word alignments from different language pairs by various grammars.", "cite_spans": [ { "start": 196, "end": 218, "text": "(Aho and Ullman, 1972)", "ref_id": "BIBREF0" }, { "start": 330, "end": 356, "text": "(Satta and Peserico, 2005)", "ref_id": "BIBREF6" }, { "start": 430, "end": 450, "text": "Galley et al. (2004)", "ref_id": "BIBREF4" }, { "start": 475, "end": 499, "text": "Wellington et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, parsing complexity depends not only on rule length, but also on the specific permutations represented by the individual rules. It may be possible to factor an SCFG with maximum rule length n into a simpler grammar with a maximum of k nonterminals in any one rule, if not all n! permutations appear in the rules. Zhang et al. (2006) discuss methods for binarizing SCFGs, ignoring the nonbinarizable grammars; in Section 2 we discuss the generalized problem of factoring to k-ary grammars for any k and formalize the problem as permutation factorization in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 4, we describe an O(k \u2022 n) left-to-right shift-reduce algorithm for analyzing permutations that can be k-arized. Its time complexity becomes O(n 2 ) when k is not specified beforehand and the minimal k is to be discovered. Instead of linearly shifting in one number at a time, Gildea et al. (2006) employ a balanced binary tree as the control structure, producing an algorithm similar in spirit to merge-sort with a reduced time complexity of O(n log n). However, both algorithms rely on reduction tests on emerging spans which involve redundancies with the spans that have already been tested. Uno and Yagiura (2000) describe a clever algorithm for the problem of finding all common intervals of two permutations in time O(n + K), where K is the number of common intervals, which can itself be \u2126(n 2 ). In Section 5, we adapt their approach to the problem of factoring SCFGs, and show that, given this problem definition, running time can be improved to O(n), the optimum given the time needed to read the input permutation.", "cite_spans": [ { "start": 606, "end": 628, "text": "Uno and Yagiura (2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The methodology in Wellington et al. (2006) measures the complexity of word alignment using the number of gaps that are necessary for their synchronous parser, which allows discontinuous spans, to succeed in parsing. In Section 6, we provide a more direct measurement using the minimal branching factor yielded by the permutation factorization algorithm.", "cite_spans": [ { "start": 19, "end": 43, "text": "Wellington et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin by describing the synchronous CFG formalism, which is more rigorously defined by Aho and Ullman (1972) and Satta and Peserico (2005) . We adopt the SCFG notation of Satta and Peserico (2005) . 
Superscript indices in the right-hand side of grammar rules:", "cite_spans": [ { "start": 90, "end": 111, "text": "Aho and Ullman (1972)", "ref_id": "BIBREF0" }, { "start": 116, "end": 141, "text": "Satta and Peserico (2005)", "ref_id": "BIBREF6" }, { "start": 174, "end": 199, "text": "Satta and Peserico (2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "X \u2192 X^{(1)}_1 ... X^{(n)}_n , X^{(\u03c0(1))}_{\u03c0(1)} ... X^{(\u03c0(n))}_{\u03c0(n)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "indicate that the nonterminals with the same index are linked across the two languages, and will eventually be rewritten by the same rule application. Each X i is a variable which can take the value of any nonterminal in the grammar. We say an SCFG is n-ary if and only if the maximum number of co-indexed nonterminals, i.e. the longest permutation contained in the set of rules, is of size n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "Given a synchronous CFG and a pair of input strings, we can apply a generalized CYK-style bottom-up chart parser to build synchronous parse trees over the string pair. Wu (1997) demonstrates the case of binary SCFG parsing, where six string boundary variables, three for each language as in monolingual CFG parsing, interact with each other, yielding an O(N 6 ) dynamic programming algorithm, where N is the string length, assuming the two paired strings are comparable in length. For an n-ary SCFG, the parsing complexity can be as high as O(N n+4 ). The reason is that even if we binarize on one side to maintain 3 indices, for many unfriendly permutations, as many as n + 1 boundary variables in the other language are necessary.", "cite_spans": [ { "start": 168, "end": 177, "text": "Wu (1997)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "The fact that this bound is exponential in the rule length n suggests that it is advantageous to reduce the length of grammar rules as much as possible. This paper focuses on converting an SCFG to the equivalent grammar with smallest possible maximum rule size. 
The algorithm processes each rule in the input grammar independently, and determines whether the rule can be factored into smaller SCFG rules by analyzing the rule's permutation \u03c0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "As an example, given the input rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "[ X \u2192 A (1) B (2) C (3) D (4) E (5) F (6) G (7) , X \u2192 E (5) G (7) D (4) F (6) C (3) A (1) B (2) ] (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "we consider the associated permutation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "(5, 7, 4, 6, 3, 1, 2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "We determine that this permutation can be factored into the following permutation tree:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "(2,1) (2,1) (2,4,1,3) 5 7 4 6 3 (1,2) 1 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "We define permutation trees formally in the next section, but note here that nodes in the tree correspond to subsets of nonterminals that form a single continuous span in both languages, as shown by the shaded regions in the permutation matrix above. This tree can be converted into a set of output rules that are generatively equivalent to the original rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "[ X \u2192 X (1) 1 X (2) 2 , X \u2192 X (2) 2 X (1) 1 ] [ X 1 \u2192 A (1) B (2) , X 1 \u2192 A (1) B (2) ] [ X 2 \u2192 C (1) X (2) 3 , X 2 \u2192 X (2) 3 C (1) ] [ X 3 \u2192 D (1) E (2) F (3) G (4) , X 3 \u2192 E (2) G (4) D (1) F (3) ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "where X 1 , X 2 and X 3 are new nonterminals used to represent the intermediate states in which the synchronous nodes are combined. The factorized grammar is only larger than the original grammar by a constant factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synchronous CFG and Synchronous Parsing", "sec_num": "2" }, { "text": "We define the notion of permutation structure in this section. We define a permuted sequence as a permutation of n (n \u2265 1) consecutive natural numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "A permuted sequence is said to be k-ary parsable if either of the following conditions holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "1. The permuted sequence only has one number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "2. 
It has more than one number and can be segmented into k \u2032 (k \u2265 k \u2032 \u2265 2) permuted sequences each of which is k-ary parsable, and the k \u2032 subsequences are arranged in an order identified by one of the k \u2032 ! permutations of k \u2032 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "This is a recursive definition, and we call the corresponding recursive structure over the entire sequence a k-ary permutation tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "Our goal is to find the k-ary permutation tree for a given permutation, where k is minimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Permutation Trees", "sec_num": "3" }, { "text": "In this section, we present an O(n \u2022 k) algorithm which can be viewed as an unoptimized version of the linear time algorithm to be presented in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "The algorithm is based on a shift-reduce parser, which maintains a stack for subsequences that have been discovered so far and loops over shift and reduce steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "1. Shift the next number in the input permutation onto the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "2. Go down the stack from the top to the bottom. Whenever the top m subsequences satisfy the partition property, which says the total length of the m (k \u2265 m \u2265 2) subsequences minus 1 is equal to the difference between the smallest number and the largest number contained in the m segments, make a reduction by gluing the m segments into one subsequence and restart reducing from the top of the new stack. Stop when no reduction is possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "3. If there are remaining numbers in the input permutation, go to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "When we exit from the loop, if the height of the stack is 1, the input permutation of n has been reduced to a linear sequence of 1 to n, and parsing is successful. Otherwise, the input permutation of n cannot be parsed into a k-ary permutation tree. (Table 1: The execution trace of the shift-reduce parser on the input permutation 5, 7, 4, 6, 3, 1, 2.)", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "An example execution trace of the algorithm is shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "The partition property is a sufficient and necessary condition for the top m subsequences to be reducible. 
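The following Python sketch (ours, not part of the original paper) implements this loop; the (lo, hi, length, subtree) stack summaries and the nested-list tree encoding are illustrative choices, not prescribed here.

def shift_reduce(perm, k):
    # Each stack item summarizes a reduced subsequence as
    # (min value, max value, total length, subtree).
    stack = []
    for number in perm:
        stack.append((number, number, 1, number))
        reduced = True
        while reduced:
            reduced = False
            lo = hi = None
            total = 0
            # Go down the stack from the top, trying m = 2, ..., k.
            for m in range(1, min(k, len(stack)) + 1):
                item = stack[-m]
                lo = item[0] if lo is None else min(lo, item[0])
                hi = item[1] if hi is None else max(hi, item[1])
                total += item[2]
                # Partition property: total length minus 1 equals the
                # difference between the largest and smallest number.
                if m >= 2 and hi - lo == total - 1:
                    children = [stack.pop()[3] for _ in range(m)][::-1]
                    stack.append((lo, hi, total, children))
                    reduced = True
                    break
    # Success iff the whole input has been reduced to a single tree.
    return stack[0][3] if len(stack) == 1 else None

assert shift_reduce([5, 7, 4, 6, 3, 1, 2], 4) == [[[5, 7, 4, 6], 3], [1, 2]]
assert shift_reduce([2, 4, 1, 3], 2) is None  # not binarizable

The nested lists returned for the example correspond node for node to the permutation tree and the factored rules of Section 2; setting k = n gives the exhaustive variant discussed next.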
In order to check if the property holds, we need to compute the sum of the lengths of subsequences under consideration and the difference between the largest and smallest number in the covered region. We can incrementally compute both along with each step going down the stack. If m is bounded by k, we need O(k) operations for each item shifted onto the stack. So, the algorithm runs in O(n \u2022 k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "We might also wish to compute the minimum k for which k-arization can be successful on an input permutation of n. We can simply keep doing reduction tests for every possible top region of the stack while going deeper in the stack to find the minimal reduction. In the worst case, each time we go down to the bottom of the increasingly higher stack without a successful reduction. Thus, in O(n 2 ), we can find the minimum k-arization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shift-reduce on Permutations", "sec_num": "4" }, { "text": "In this section, we show a linear time algorithm which shares the left-to-right and bottom-up control structure but uses more book-keeping operations to reduce unnecessary reduction attempts. The reason that our previous algorithm is asymptotically O(n 2 ) is that whenever a new number is shifted in, we have to try out every possible new span ending at the new number. Do we need to try every possible span? Let us start with a motivating example. The permuted sequence (5, 7, 4, 6) in Table 1 can only be reduced as a whole block. However, in the last algorithm, when 4 is shifted in, we make an unsuccessful attempt for the span on (7, 4), knowing we are missing 5, which will not appear when we expand the span no matter how much further to the right. Yet we repeat the same mistake to try on 7 when 6 is scanned in by attempting on (7, 4, 6). Such wasteful checks result in the quadratic behavior of the algorithm. The way the following algorithm differs from and outperforms the previous algorithm is exactly that it crosses out impossible candidates for reductions such as 7 in the example as early as possible.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 495, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "Now we state our problem mathematically. We define a function whose value indicates the reducibility of each pair of positions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "(x, y) (1 \u2264 x \u2264 y \u2264 n): f (x, y) = u(x, y) \u2212 l(x, y) \u2212 (y \u2212 x) where l(x, y) = min i\u2208[x,y] \u03c0(i) u(x, y) = max i\u2208[x,y] \u03c0(i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "l records the minimum of the numbers that are permuted to from the positions in the region [x, y] . u records the maximum. Figure 1 provides the visualization of u, l, and f for the example permutation (5, 7, 4, 6, 3, 1, 2). u and l can be visualized as stairs. u goes up from the right end to the left. l goes down. f is non-negative, but not monotonic in general. We can make a reduction on (x, y) if and only if f (x, y) = 0. This is the mathematical statement of the partition property in step 2 of the shift-reduce algorithm. 
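As a concrete check of this criterion (our throwaway illustration, with 1-based positions as in the paper):

perm = [5, 7, 4, 6, 3, 1, 2]

def f(x, y):
    block = perm[x - 1:y]
    return max(block) - min(block) - (y - x)

assert f(1, 4) == 0  # (5, 7, 4, 6) covers the value range 4..7: reducible
assert f(6, 7) == 0  # (1, 2) is reducible
assert all(f(2, y) > 0 for y in range(3, 8))  # no span starting at the 7 reduces

The last assertion restates the motivating example: position 2 can be ruled out as a left boundary once and for all.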
u and l can be computed incrementally from smaller spans to larger spans to guarantee O(1) operations for computing f on each new span of [x, y] as long as we go bottom up. In the new algorithm, we will reduce the size of the search space of candidate position pairs (x, y) to be linear in n so that the whole algorithm is O(n).", "cite_spans": [ { "start": 91, "end": 97, "text": "[x, y]", "ref_id": null } ], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "The algorithm has two main ideas:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "\u2022 We filter x's to maintain the invariant that f (x, y) (x \u2264 y) is monotonically decreasing with respect to x, over iterations on y (from 1 to n), so that any remaining values of x corresponding to valid reductions are clustered at the point where f tails off to zero. To put it another way, we never have to test invalid reductions, because the valid reductions have been sorted together for us.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "\u2022 We make greedy reductions as in the shift-reduce algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "In the new algorithm, we use a doubly linked list, instead of a stack, as the data structure that stores the candidate x's to allow for more flexible maintaining operations. The steps of the algorithm are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "1. Increase the left-to-right index y by one and append it to the right end of the list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "2. Find the pivot x * in the list which is minimum (leftmost) among x satisfying either u(x, y \u2212 1) < u(x, y) (exclusively) or l(x, y \u2212 1) > l(x, y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "3. Remove those x's that yield even smaller u(x, y \u2212 1) than u(x * , y \u2212 1) or even larger l(x, y \u2212 1) than l(x * , y \u2212 1). Those x's must be on the right of x * if they exist. They must form a sub-list extending to the right end of the original x list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "4. Denote the x which is immediately to the left of x * as x \u2032 . Repeatedly remove all x's such that f (x, y) > f (x \u2032 , y), where x is at the left end of the sub-list of x's starting from x * extending to the right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "5. Go down the pruned list from the right end, output (x, y) until f (x, y) > 0. Remove x's such that f (x, y) = 0, sparing the smallest x which is the leftmost among all such x's on the list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "6. If there are remaining numbers in the input permutation, go to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "The tricks lie in step 3 and step 4, where bad candidate x's are filtered out. We use the following diagram to help readers understand the parts of the x-list that the two steps are filtering on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "x 1 , ..., x \u2032 , [ x * , ..., x i ] (removed by step 4), ..., [ x j , ..., x k ] (removed by step 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Time Factorization", "sec_num": "5" }, { "text": "The steps from 2 to 4 are the operations that maintain the monotonic invariant which makes the reductions in step 5 as trivial as performing output. 
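The effect of steps 2 to 4 can be spot-checked numerically; in the sketch below (ours), the surviving candidate lists for the running example were obtained by hand-simulating steps 1 to 4, matching the trace discussed in the next subsection, and f is the brute-force function from above.

perm = [5, 7, 4, 6, 3, 1, 2]

def f(x, y):
    block = perm[x - 1:y]
    return max(block) - min(block) - (y - x)

# Candidate x's remaining after steps 1-4, before the reductions of step 5.
candidates = {2: [1, 2], 3: [1, 3], 4: [1, 4],
              5: [1, 5], 6: [1, 6], 7: [1, 6, 7]}
for y, xs in candidates.items():
    values = [f(x, y) for x in xs]
    assert values == sorted(values, reverse=True)  # non-increasing in x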
The stack-based shift-reduce algorithm has the same toplevel structure, but lacks steps 2 to 4 so that in step 5 we have to winnow the entire list. Both algorithms scan left to right and examine potential reduction spans by extending the left endpoint from right to left given a right endpoint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ", y", "sec_num": null }, { "text": "An example of the algorithm's execution is shown in Figure 1 . The evolution of u(x, y), l(x, y), and f (x, y) is displayed for increasing y's (from 2 to 7). To identify reducible spans, we can check the plot of f (x, y) to locate the (x, y) pairs that yield zero. The pivots found by step 2 of the algorithm are marked with * 's on the x-axis in the plot for u and l. The x's that are filtered out by step 3 or 4 are marked with horizontal bars across. We want to point out the interesting steps. When y = 3, x * = 1, x = 2 needs to be crossed out by step 3 in the algorithm. When y = 4, x * = 3, x = 3 itself is to be deleted by step 4 in the algorithm. x = 4 is removed at step 5 because it is the right end in the first reduction. On the other hand, x = 4 is also a bad starting point for future reductions. Notice that we also remove x = 5 at step 6, which can be a good starting point for reductions. But we exclude it from further considerations, because we want left-most reductions.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Example Execution Trace", "sec_num": "5.1" }, { "text": "Now we explain why the algorithm works. Both algorithms are greedy in the sense that at each scan point we exhaustively reduce all candidate spans to the leftmost possible point. It can be shown that greediness is safe for parsing permutations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "What we need to show is how the monotonic invariant holds and is valid. Now we sketch the proof. We want to show for all x i remaining on the list, f (x i , y) \u2265 f (x i+1 , y). When y = 1, it is trivially true. Now we do the induction on y step by case analysis:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "Case 1: If x i < x i+1 < x * , then f (x i , y) \u2212 f (x i , y \u2212 1) = \u22121.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "The reason is if x i is on the left of x * , both u(x i , y) and l(x i , y) are not changed from the y \u2212 1-th step, so the only difference is that y\u2212x i has increased by one. Graphically, the f curve extending to the left of x * shifts down a unit of 1. So, the monotonic property still holds to the left of x * .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "Case 2: If x * \u2264 x i < x i+1 , then f (x i , y) \u2212 f (x i , y \u2212 1) = c (c \u2265 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "The reason is that after executing step 3 in the algorithm, the remaining x i 's have either their u(x i , y) shifted up uniformly with l(x i , y) being unchanged, or the symmetric case that l(x i , y) is shifted down uniformly without changing u(x i , y). In both cases, the difference between u and l increases by at least one unit to offset the one unit increase of y \u2212 x i . 
The result is that the f curve extending from x * to the right shifts up or remains the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "Case 3: The half curve of f on the left of x * is shifting down and the half curve on the right is shifting up, making it necessary to consider the case that x i is on the left and x i+1 on the right. Fortunately, step 4 in the algorithm deals with this case explicitly by cutting down the head of the right half curve to smooth the whole curve into a monotonically decreasing one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "We still need one last piece for the proof, i.e., the validity of pruning. Is it possible that we winnow off good x's that will become useful in later stages of y? The answer is no. The values we remove in step 3 and 4 are similar to the points indexing into the second and third numbers in the permuted sequence (5, 7, 4, 6). Any span starting from these two points will not be reducible because the element 5 is missing. 1 To summarize, we remove impossible left boundaries and keep good ones, resulting in the monotonicity of the f function which in turn makes safe greedy reductions fast.", "cite_spans": [ { "start": 423, "end": 424, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Correctness", "sec_num": "5.2" }, { "text": "We use a doubly linked list to implement both the u and l functions, where each list element covers a span of x values (shaded rectangles in Figure 1 ). Both lists can be doubly linked with the list of x's so that we can access the u function and l function at O(1) time for each x. At the same time, if we search for x based on u or l, we can follow the stair functions, skipping many intermediate x's.", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 146, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Implementation and Time Analysis", "sec_num": "5.3" }, { "text": "The total number of operations that occur at step 4 and step 5 is O(n) since these steps just involve removing nodes on the x list, and only n nodes are created in total over the entire algorithm. To find x * , we scan back from the right end of the u list or the l list. Due to step 3, each u (and l) element that we scan over is removed at this iteration. So the total number of operations accountable to step 2 and step 3 is bounded by the maximum number of nodes ever created on the u and l lists, which is also n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Time Analysis", "sec_num": "5.3" }, { "text": "Our algorithm is based on an algorithm for finding all common intervals of two permutations (Uno and Yagiura, 2000) . The difference 2 is in step 5, where we remove the embedded reducible x's and keep only the leftmost one; their algorithm will keep all of the reducible x's for future considerations so that in the example the number 3 will be able to participate in both the reduction ([4 \u2212 7] , 3) and (3, [1 \u2212 2]). In the worst case, their algorithm will output a quadratic number of reducible spans, making the whole algorithm O(n 2 ). Our algorithm is O(n) in the worst case. 
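An extreme case (our illustration) makes the gap concrete: for the identity permutation, every nontrivial span is a common interval, so K grows quadratically, while the permutation tree needs only n - 1 binary nodes.

n = 8
ident = list(range(1, n + 1))
# Brute-force enumeration of the nontrivial reducible spans.
spans = [(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)
         if max(ident[x - 1:y]) - min(ident[x - 1:y]) == y - x]
assert len(spans) == n * (n - 1) // 2  # 28 spans for n = 8: quadratic in n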
We can also generate all common intervals by transforming the permutation tree output by our algorithm.", "cite_spans": [ { "start": 92, "end": 115, "text": "(Uno and Yagiura, 2000)", "ref_id": "BIBREF7" }, { "start": 383, "end": 391, "text": "([4 \u2212 7]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5.4" }, { "text": "However, we are not the first to specialize the Uno and Yagiura algorithm to produce tree structures for permutations. Bui-Xuan et al. (2005) reached a linear time algorithm in the definition framework of PQ trees. PQ trees represent families of permutations that can be created by composing operations of scrambling subsequences according to any permutation (P nodes) and concatenating subsequences in order (Q nodes). Our definition of permutation tree can be thought of as a more specific version of a PQ tree, where the nodes are all labeled with a specific permutation which is not decomposable.", "cite_spans": [ { "start": 119, "end": 141, "text": "Bui-Xuan et al. (2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5.4" }, { "text": "We apply the factorization algorithm to analyzing word alignments in this section. Wellington et al. (2006) indicate the necessity of introducing discontinuous spans for synchronous parsing to match up with human-annotated word alignment data. The number of discontinuous spans reflects the structural complexity of the synchronous rules that are involved in building the synchronous trees for the given alignments. However, the more direct and detailed analysis would be on the branching factors of the synchronous trees for the aligned data.", "cite_spans": [ { "start": 83, "end": 107, "text": "Wellington et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on Analyzing Word Alignments", "sec_num": "6" }, { "text": "Since human-aligned data has many-to-one word links, it is necessary to modify the alignments into one-to-one. Wellington et al. (2006) treat many-toone word links disjunctively in their synchronous parser. We also commit to one of the many-one links by extracting a maximum match (Cormen et al., 1990 ) from the bipartite graph of the alignment. In other words, we abstract away the alternative links in the given alignment while capturing the backbone using the maximum number of word links.", "cite_spans": [ { "start": 111, "end": 135, "text": "Wellington et al. (2006)", "ref_id": "BIBREF8" }, { "start": 281, "end": 301, "text": "(Cormen et al., 1990", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on Analyzing Word Alignments", "sec_num": "6" }, { "text": "We use the same alignment data for the five language pairs Chinese/English, Romanian/English, Hindi/English, Spanish/English, and French/English (Wellington et al., 2006) . In Table 2 , we report the number of sentences that are k-ary parsable but not k \u2212 1-ary parsable for increasing k's. 
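Concretely, the minimal branching factor of an alignment is the largest node arity in its permutation tree; a small helper of ours, reusing the nested-list trees produced by the shift-reduce sketch in Section 4:

def branching_factor(tree):
    # A leaf contributes 1; an internal node contributes its arity.
    if not isinstance(tree, list):
        return 1
    return max(len(tree), *(branching_factor(child) for child in tree))

assert branching_factor([[[5, 7, 4, 6], 3], [1, 2]]) == 4
assert branching_factor([[1, 2], 3]) == 2      # ITG-parsable
assert branching_factor([3, 1, 5, 2, 4]) == 5  # non-binarizable, non-ITG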
Our analysis reveals that the permutations that are responsible for non-ITG alignments include higher order permutations such as (3, 1, 5, 2, 4), albeit sparsely seen.", "cite_spans": [ { "start": 145, "end": 170, "text": "(Wellington et al., 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments on Analyzing Word Alignments", "sec_num": "6" }, { "text": "We also look at the number of terminals the nonbinary synchronous nodes can cover. We are interested in doing so, because this can tell us how general these unfriendly rules are. Wellington et al. (2006) did a similar analysis on the English-English bitext. They found that the majority of non-ITG parsable cases are not local in the sense that phrases of length up to 10 are not helpful in covering the gaps. We analyzed the translation data for the five language pairs instead. Our result differs. The rightmost column in Table 2 shows that only a tiny percent of the non-ITG cases are significant in the sense that we cannot deal with them through phrases or tree-flattening within windows of size 10.", "cite_spans": [], "ref_spans": [ { "start": 523, "end": 530, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments on Analyzing Word Alignments", "sec_num": "6" }, { "text": "Branching Factor 1 2 4 5 6 7 10 \u2265 4 (and covering > 10 words) Chinese/English 451 30 4 5 1 7(1.4%) Romanian/English 195 4 0 Hindi/English 3 85 1 1 0 Spanish/English 195 4 1(0.5%) French/English 425 9 9 3 1 6(1.3%)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Analyzing Word Alignments", "sec_num": "6" }, { "text": "We present a linear time algorithm for factorizing any n-ary SCFG rule into a set of k-ary rules where k is minimized. The algorithm speeds up an easy-to-understand shift-reduce algorithm by avoiding unnecessary reduction attempts while maintaining the left-to-right bottom-up control structure. 
Empirically, we provide a complexity analysis of word alignments based on the concept of minimal branching factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Uno and Yagiura (2000) prove the validity of step 3 and step 4 rigorously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The original Uno and Yagiura algorithm also has the minor difference that the scan point goes from right to left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Theory of Parsing, Translation, and Compiling", "authors": [ { "first": "Albert", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "Jeffery", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1972, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert V. Aho and Jeffery D. Ullman. 1972. The The- ory of Parsing, Translation, and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Revisiting T. Uno and M. Yagiura's algorithm", "authors": [ { "first": "Minh", "middle": [], "last": "Binh", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Bui-Xuan", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Habib", "suffix": "" }, { "first": "", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2005, "venue": "The 16th Annual International Symposium on Algorithms and Computation (ISAAC'05)", "volume": "", "issue": "", "pages": "146--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Binh Minh Bui-Xuan, Michel Habib, and Christophe Paul. 2005. Revisiting T. Uno and M. Yagiura's algo- rithm. In The 16th Annual International Symposium on Algorithms and Computation (ISAAC'05), pages 146-155.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL-05)", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL-05), pages 263-270, Ann Arbor, Michigan.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Introduction to algorithms", "authors": [ { "first": "H", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Charles", "middle": [ "E" ], "last": "Cormen", "suffix": "" }, { "first": "Ronald", "middle": [ "L" ], "last": "Leiserson", "suffix": "" }, { "first": "", "middle": [], "last": "Rivest", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to algorithms. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What's in a translation rule", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics (HLT/NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Pro- ceedings of the Human Language Technology Confer- ence/North American Chapter of the Association for Computational Linguistics (HLT/NAACL).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Factoring synchronous grammars by sorting", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06) Poster Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea, Giorgio Satta, and Hao Zhang. 2006. Fac- toring synchronous grammars by sorting. In Proceed- ings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06) Poster Session, Sydney.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Some computational complexity results for synchronous contextfree grammars", "authors": [ { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Enoch", "middle": [], "last": "Peserico", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP)", "volume": "", "issue": "", "pages": "803--810", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giorgio Satta and Enoch Peserico. 2005. Some com- putational complexity results for synchronous context- free grammars. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 803-810, Vancouver, Canada, October.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fast algorithms to enumerate all common intervals of two permutations", "authors": [ { "first": "Takeaki", "middle": [], "last": "Uno", "suffix": "" }, { "first": "Mutsunori", "middle": [], "last": "Yagiura", "suffix": "" } ], "year": 2000, "venue": "Algorithmica", "volume": "26", "issue": "2", "pages": "290--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeaki Uno and Mutsunori Yagiura. 2000. Fast algo- rithms to enumerate all common intervals of two per- mutations. 
Algorithmica, 26(2):290-309.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Empirical lower bounds on the complexity of translational equivalence", "authors": [ { "first": "Benjamin", "middle": [], "last": "Wellington", "suffix": "" }, { "first": "Sonjia", "middle": [], "last": "Waxmonsky", "suffix": "" }, { "first": "I.", "middle": [ "Dan" ], "last": "Melamed", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Wellington, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical lower bounds on the complexity of translational equivalence. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A syntax-based statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Conference of the Association for Computational Linguistics (ACL-01)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Conference of the Association for Computational Linguistics (ACL-01), Toulouse, France.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics (HLT/NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics (HLT/NAACL).", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "text": "Evolution of u(x, y), l(x, y), and f (x, y) as y goes from 2 to 7 for the permutation (5, 7, 4, 6, 3, 1, 2). We use * under the x-axis to indicate the x * 's that are pivots in the algorithm. Useless x's are crossed out. x's that contribute to reductions are marked with either ( on its left or ) on its right. For the f function, we use solid boxes to plot the values of remaining x's on the list but also show the other f values for completeness.", "type_str": "figure", "uris": null }, "TABREF1": { "content": "", "text": "Distribution of branching factors for synchronous trees on various language pairs.", "num": null, "html": null, "type_str": "table" } } } }