{
"paper_id": "W07-0408",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:39:06.839760Z"
},
"title": "Generation in Machine Translation from Deep Syntactic Trees",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD"
}
},
"email": "[email protected]"
},
{
"first": "Petr",
"middle": [],
"last": "N\u011bmec",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {
"settlement": "Prague",
"country": "Czech Republic"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we explore a generative model for recovering surface syntax and strings from deep-syntactic tree structures. Deep analysis has been proposed for a number of language and speech processing tasks, such as machine translation and paraphrasing of speech transcripts. In an effort to validate one such formalism of deep syntax, the Praguian Tectogrammatical Representation (TR), we present a model of synthesis for English which generates surface-syntactic trees as well as strings. We propose a generative model for function word insertion (prepositions, definite/indefinite articles, etc.) and subphrase reordering. We show by way of empirical results that this model is effective in constructing acceptable English sentences given impoverished trees.",
"pdf_parse": {
"paper_id": "W07-0408",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we explore a generative model for recovering surface syntax and strings from deep-syntactic tree structures. Deep analysis has been proposed for a number of language and speech processing tasks, such as machine translation and paraphrasing of speech transcripts. In an effort to validate one such formalism of deep syntax, the Praguian Tectogrammatical Representation (TR), we present a model of synthesis for English which generates surface-syntactic trees as well as strings. We propose a generative model for function word insertion (prepositions, definite/indefinite articles, etc.) and subphrase reordering. We show by way of empirical results that this model is effective in constructing acceptable English sentences given impoverished trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Syntactic models for language are being reintroduced into language and speech processing systems thanks to the success of sophisticated statistical models of parsing (Charniak and Johnson, 2005; Collins, 2003) . Representing deep syntactic relationships is an open area of research; examples of such models are exhibited in a variety of grammatical formalisms, such as Lexical Functional Grammars (Bresnan and Kaplan, 1982) , Head-driven Phrase Structure Grammars (Pollard and Sag, 1994) and the Tectogrammatical Representation (TR) of the Functional Generative Description (Sgall et al., 1986) . In this paper we do not attempt to analyze the differences of these formalisms; instead, we show how one particular formalism is sufficient for automatic analysis and synthesis. Specifically, in this paper we provide evidence that TR is sufficient for synthesis in English.",
"cite_spans": [
{
"start": 166,
"end": 194,
"text": "(Charniak and Johnson, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 195,
"end": 209,
"text": "Collins, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 397,
"end": 423,
"text": "(Bresnan and Kaplan, 1982)",
"ref_id": "BIBREF2"
},
{
"start": 464,
"end": 487,
"text": "(Pollard and Sag, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 574,
"end": 594,
"text": "(Sgall et al., 1986)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Augmenting models of machine translation (MT) with syntactic features is one of the main fronts of the MT research community. The Hiero model has been the most successful to date by incorporating syntactic structure amounting to simple tree structures (Chiang, 2005) . Synchronous parsing models have been explored with moderate success (Wu, 1997; Quirk et al., 2005) . An extension to this work is the exploration of deeper syntactic models, such as TR. However, a better understanding of the synthesis of surface structure from the deep syntax is necessary.",
"cite_spans": [
{
"start": 252,
"end": 266,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 337,
"end": 347,
"text": "(Wu, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 348,
"end": 367,
"text": "Quirk et al., 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a generative model for surface syntax and strings of English given tectogrammatical trees. Sentence generation begins by inserting auxiliary words associated with autosemantic nodes; these include prepositions, subordinating conjunctions, modal verbs, and articles. Following this, the linear order of nodes is modeled by a similar generative process. These two models are combined in order to synthesize a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Amalgam system provides a similar model for generation from a logical form (Corston-Oliver et al., 2002) . The primary difference between our approach and that of the Amalgam system is that we focus on an impoverished deep structure (akin to logical form); we restrict the deep analysis to contain only the features which transfer directly across languages; specifically, those that transfer directly in our Czech-English machine translation system. Amalgam targets different issues. For example, Amalgam's generation of prepositions and subordinating conjunctions is severely restricted as most of these are considered part of the logical form.",
"cite_spans": [
{
"start": 79,
"end": 108,
"text": "(Corston-Oliver et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The work of Langkilde-Geary (2002) on the Halogen system is similar to the work we present here. The differences that distinguish their work from ours stem from the type of deep representation from which strings are generated. Although their syntactic and semantic representations appear similar to the Tectogrammatical Representation, more explicit information is preserved in their representation. For example, the Halogen representation includes markings for determiners, voice, subject position, and dative position which simplifies the generation process. We believe their minimally specified results are based on input which most closely resembles the input from which we generate in our experiments.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "Langkilde-Geary (2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Amalgam's reordering model is similar to the one presented here; their model reorders constituents in a similar way that we reorder subtrees. Both the model of Amalgam and that presented here differ considerably from the n-gram models of Langkilde and Knight (1998) , the TAG models of Bangalore and Rambow (2000) , and the stochastic generation from semantic representation approach of Soricut and Marcu (2006) . In our work, we order the localsubtrees 1 of an augmented deep-structure tree based on the syntactic features of the nodes in the tree. By factoring these decisions to be independent for each local-subtree, the set of strings we consider is only constrained by the projective strucutre of the input tree and the local permutation limit described below.",
"cite_spans": [
{
"start": 238,
"end": 265,
"text": "Langkilde and Knight (1998)",
"ref_id": "BIBREF8"
},
{
"start": 286,
"end": 313,
"text": "Bangalore and Rambow (2000)",
"ref_id": "BIBREF0"
},
{
"start": 387,
"end": 411,
"text": "Soricut and Marcu (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections we first provide a brief description of the Tectogrammatical Representation as used in our work. Both manually annotated and synthetic TR trees are utilized in our experiments; we present a description of each type of tree as well as the motivation for using it. We then describe the generative statistical process used to model the synthesis of analytical (surface-syntactic) trees based 1 A local subtree consists of a parent node (governor) and it's immediate children. Reference: Now the network has opened a news bureau in the Hungarian capital Each sentence has an artificial root node labeled #. Verbs contain their tense and mood (labeled T M).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "on the TR trees. Details of the model's features are presented in the following section. Finally we present empirical results for experiments using both the manually annotated and automatically generated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Tectogrammatical Representation (TR) comes out of the Praguian linguistic theory known as the Functional Generative Description of language (Sgall et al., 1986) . TR attempts to capture deep syntactic relationships based on the valency of predicates (i.e., function-argument structure) and modification of participants (i.e., nouns used as actors, patients, etc.). A key feature of TR is that dependency relationships are represented only for autosemantic words (content words), meaning that synsemantic words (syntactic function words) are encoded as features of the grammatical relationships rather than the actual words. Abstracting away from specific syntactic lexical items allows for the representation to be less language-specific making the representation attractive as a medium for machine translation and summarization. Figure 1 shows an example TR tree, the nodes of which represent the autosemantic words of the sentence. Each node is labeled with a morphologically reduced word-form called the lemma and a functor that describes the deep syntactic relationship to its governor (function-argument form). Additionally, the nodes are labeled with grammatemes that capture morphological and semantic information associated with the autosemantic words. For example, English verb forms are represented by the infinitive form as the lemma and the grammatemes encode the tense, aspect, and mood of the verb. For a detailed description of the TR annotation scheme see B\u00f6hmov\u00e1 et al. (2002) . In Figure 1 we show only those features that are present in the TR structures used throughout this paper. Both the synsemantic nodes and the left-to-right surface order 2 in the TR trees is under-specified. In the context of machine translation, we assume the TR word order carries no information with the exception of a single situation: the order of coordinated phrases is preserved in one of our models.",
"cite_spans": [
{
"start": 144,
"end": 164,
"text": "(Sgall et al., 1986)",
"ref_id": "BIBREF12"
},
{
"start": 1476,
"end": 1497,
"text": "B\u00f6hmov\u00e1 et al. (2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 834,
"end": 842,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1503,
"end": 1511,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tectogrammatical (Deep) Syntax",
"sec_num": "2"
},
{
"text": "While it is not part of the formal TR description, the authors of the TR annotation scheme have found it useful to define an intermediate representation between the sentence and the TR tree (B\u00f6hmov\u00e1 et al., 2002) . The analytical representation (AR) is a surface-syntactic dependency tree that encodes syntactic relationships between words (i.e., object, subject, attribute, etc.). Unlike the TR layer, the analytical layer contains all words of the sentence and their relative ordering is identical to the surface order.",
"cite_spans": [
{
"start": 190,
"end": 212,
"text": "(B\u00f6hmov\u00e1 et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analytic Representation",
"sec_num": "2.1"
},
{
"text": "In order to evaluate the efficacy of the generation model, we construct a dataset from both manually annotated data and automatically generated data. The information contained in the originally manually annotated TR all but specifies the surface form. We have modified the annotated data by removing all features except those that could be directly transfered across languages. Specifically, we preserve the following features: lemma, functor, verbal gram-matemes, and part-of-speech tags. The lemma is the morphologically reduced form of the word; for verbs this is the infinitive form and for nouns this is the singular form. The functor is the deep-syntactic function of the node; for example, the deep functor indicates whether a node is a predicate, an actor, or a patient. Modifiers can be labeled as locative, temporal, benefactive, etc. Additionally we include a verbal grammateme which encodes tense and mood as well as a Penn Treebank style part-of-speech tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manually Annotated TR",
"sec_num": "2.2"
},
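{
"text": "To make the impoverished input representation concrete, the following minimal Python sketch shows one possible encoding of a TR node restricted to the four preserved features (lemma, functor, verbal grammateme, and POS tag). The class name, field names, and example values are illustrative assumptions, not the PCEDT encoding.\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass TRNode:\n    # Only the features assumed to transfer directly across languages are kept.\n    lemma: str       # morphologically reduced form, e.g. 'open' for 'opened'\n    functor: str     # deep-syntactic function, e.g. 'PRED', 'ACT', 'PAT', 'LOC'\n    grammateme: str  # verbal tense/mood bundle, e.g. 'ind.past'; empty for non-verbs\n    pos: str         # Penn Treebank-style tag, e.g. 'VBD'\n\n# Hypothetical node for the main verb of the reference sentence in Figure 1:\nverb = TRNode(lemma='open', functor='PRED', grammateme='ind.perf', pos='VBN')\nprint(verb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manually Annotated TR",
"sec_num": "2.2"
},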
{
"text": "In this section we describe the generative process that inserts the synsemantic auxiliary words, reorders the trees, and produces a sentence. Our evaluation will be on English data, so we describe the models and the model features in the context of English. While the model is language independent, the specific features and the size of the necessary conditioning contexts is a function of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "3"
},
{
"text": "Given a TR tree T , we wish to predict the correct auxiliary nodes A and an ordering of the words associated with {T \u222a A}, defined by the function f ({T \u222a A}). The functions f determine the surface word order of the words associated with nodes of the auxiliary-inserted TR tree: N = {T \u222a A}. The node features that we use from the nodes in the TR and AR trees are: the word lemma, the part-of-speech (POS) tag, and the functor. 3 The objective of our model is:",
"cite_spans": [
{
"start": 428,
"end": 429,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max A,f P (A, f |T ) = arg max A,f P (f |A, T )P (A|T ) (1) \u2248 arg max f P (f |T, arg max A P (A|T ))",
"eq_num": "(2)"
}
],
"section": "Generative Process",
"sec_num": "3"
},
{
"text": "In Equation 2 we approximate the full model with a greedy procedure. First, we predict the most likely A according to the model P (A|T ). Given A, we compute the best ordering of the nodes of the tree, including those introduced in A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "3"
},
{
"text": "There is an efficient dynamic-programming solution to the objective function in Equation 1; how-ever, in this work we experiment with the greedy approximation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process",
"sec_num": "3"
},
{
"text": "The specific English auxiliary nodes which are not present in TR include articles, prepositions, subordinating conjunctions, and modal verbs. 4 For each node in the TR tree, the generative process predicts which synsemantic word, if any, should be inserted as a dependent of the current node. We make the assumption that these decisions are determined independently.",
"cite_spans": [
{
"start": 142,
"end": 143,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Model",
"sec_num": "3.1"
},
{
"text": "Let T = {w 1 , . . . , w i , . . . , w k } be the nodes of the TR tree. For each node w i , we define the associated node a i to be the auxiliary node that should be inserted as a dependent of w i . Given a tree T , we wish to find the set of auxiliary nodes A = {a 1 , . . . , a k } that should be inserted 5 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (A|T ) = i P (a i |a 1 , . . . , a i\u22121 , T ) (3) \u2248 i P (a i |T ) (4) \u2248 i P (a i |w i , w g(i) )",
"eq_num": "(5)"
}
],
"section": "Insertion Model",
"sec_num": "3.1"
},
{
"text": "Equation 3 is simply a factorization of the original model, Equation 4 shows the independence assumption, and in Equation 5 we make an additional conditional independence assumption that in order to predict auxiliary a i , we need only know the associated node w i and its governor w g(i) . 6 We further divide the model into three components: one that models articles, such as the English articles the and a; one that models prepositions and subordinating conjunctions; and one that models modal verbs. The first two models are of the form described by Equation 5. The modal verb insertion model is a deterministic mapping based on grammatemes expressing the verb modality of the main verb. Additionally, each model is independent of the other and therefore up to two insertions per TR node are possible (an article and another syntactic modifier). In a variant of our model, we perform a small set of deterministic transformations in cases where the classifier is relatively uncertain about the predicted insertion node (i.e., the entropy of the conditional distribution is high).",
"cite_spans": [
{
"start": 291,
"end": 292,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Model",
"sec_num": "3.1"
},
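{
"text": "As a rough illustration of Equation 5, the count-based Python sketch below predicts the most likely auxiliary for a node from its own features and its governor's features, with the special label NOAUX meaning that nothing is inserted. The function names and the toy (lemma, POS, functor) triples are assumptions made for illustration; the actual system uses the smoothed estimator described in Section 4.\n\nfrom collections import defaultdict\n\ndef train_insertion_model(examples):\n    # examples: iterable of ((node_feats, governor_feats), aux_label) pairs, where\n    # aux_label is a function word such as 'the' or 'of', or the dummy 'NOAUX'.\n    counts = defaultdict(lambda: defaultdict(int))\n    for context, aux in examples:\n        counts[context][aux] += 1\n    return counts\n\ndef predict_aux(counts, context):\n    # argmax_a P(a | w_i, w_g(i)); unseen contexts default to NOAUX.\n    dist = counts.get(context)\n    if not dist:\n        return 'NOAUX'\n    return max(dist, key=dist.get)\n\n# Toy usage with (lemma, POS, functor) feature triples:\ndata = [((('bureau', 'NN', 'PAT'), ('open', 'VBN', 'PRED')), 'a'),\n        ((('capital', 'NN', 'LOC'), ('open', 'VBN', 'PRED')), 'in')]\nmodel = train_insertion_model(data)\nprint(predict_aux(model, (('bureau', 'NN', 'PAT'), ('open', 'VBN', 'PRED'))))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Model",
"sec_num": "3.1"
},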
{
"text": "We note here that unlike the Amalgam system (Corston-Oliver et al., 2002) , we do not address features which are determined (or almost completely determined) by the underlying deep-structure. For example, the task of inserting prepositions is nontrivial given we only know a node's functor (e.g., the node's valency role).",
"cite_spans": [
{
"start": 44,
"end": 73,
"text": "(Corston-Oliver et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Model",
"sec_num": "3.1"
},
{
"text": "We have experimented with two paradigms for synthesizing sentences from TR trees. The first technique involves first generating AR trees (surface syntax). In this model, we predict the node insertions, transform the functors from TR to AR functions (deep valency relationship to surface-syntactic relationships), and then reorder the nodes. In the second framework, we reorder the nodes directly in the TR trees with inserted auxiliary nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analytical Representation Tree Generation",
"sec_num": "3.2"
},
{
"text": "The node ordering model is used to determine a projection of the tree to a string. We assume the ordering of the nodes in the input TR trees is arbitrary, the reordering model proposed here is based only on the dependency structure and the node's attributes (words, POS tags, etc.). In a variant of the reordering model, we assume the deep order of coordinating conjunctions to be the surface order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 presents the bottom-up node reordering algorithm. In the first part of the algorithm, we determine the relative ordering of child nodes. We maximize the likelihood of a particular order via the precedence operator \u227a. If node c i \u227a c i+1 , then the subtree of the word associated with c i immediately precedes the subtree of the word associated with c i+1 in the projected sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},
{
"text": "In the second half of the algorithm (starting at line 13), we predict the position of the governor within the previously ordered child nodes. Recall Algorithm 1 Subtree Reordering Algorithm that this is a dependency structure; knowing the governor does not tell us where it lies on the surface with respect to its children. The model is similar to the general reordering model, except we consider an absolute ordering of three nodes (left child, governor, right child). Finally, we can reconstruct the total ordering from the subtree ordering defined in O = {o 1 , . . . , o n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},
{
"text": "procedure REORDER(T, A, O) Result in O N \u2190bottomUp(T \u222a A); O \u2190 {} for g \u2208 N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},
{
"text": "The procedure described here is greedy; first we choose the best child ordering and then we choose the location of the governor. We do this to minimize the computational complexity of the algorithm. The current algorithm's runtime complexity is O(n!), but the complexity of the alternative algorithm for which we consider triples of child nodes is O(n!(n \u2212 1)!). The actual complexity is determined by the maximum number of child nodes k = |C| and is O( n k k!).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},
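{
"text": "The Python sketch below illustrates the per-subtree step of Algorithm 1 under our reading of it: score every permutation of the children with pairwise precedence probabilities, then greedily place the governor among the ordered children. The dict-based node encoding and the toy scoring functions are assumptions; they stand in for the distributions of Equations 7 and 8, and the governor step is simplified to scoring each insertion gap.\n\nimport itertools\n\ndef order_subtree(governor, children, p_prec, p_gov_at):\n    # Pick the child permutation maximizing the product of adjacent\n    # precedence probabilities P(c_i precedes c_{i+1} | c_i, c_{i+1}, g).\n    def score(perm):\n        s = 1.0\n        for a, b in zip(perm, perm[1:]):\n            s *= p_prec(a, b, governor)\n        return s\n    best = max(itertools.permutations(children), key=score)\n    # Greedily choose the gap where the governor falls among its ordered children.\n    gap = max(range(len(best) + 1), key=lambda i: p_gov_at(best, i, governor))\n    return list(best[:gap]) + [governor] + list(best[gap:])\n\n# Toy scorers conditioned on POS tags only (the real model also uses functors):\np_prec = lambda a, b, g: 0.9 if (a['pos'], b['pos']) == ('NN', 'IN') else 0.5\np_gov_at = lambda kids, i, g: 0.8 if i == 1 else 0.1\nsubtree = order_subtree({'pos': 'VBN', 'lemma': 'open'},\n                        [{'pos': 'NN', 'lemma': 'network'},\n                         {'pos': 'IN', 'lemma': 'in'}], p_prec, p_gov_at)\nprint([n['lemma'] for n in subtree])  # ['network', 'open', 'in']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface-order Model",
"sec_num": "3.3"
},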
{
"text": "In order to produce true English sentences, we convert the lemma and POS tag to a word form. We use John Carroll's morphg tool 7 to generate English word forms given lemma/POS tag pairs. This is not perfect, but it performs an adequate job at recovering English inflected forms. In the completesystem evaluation, we report scores based on gener- 7 Available on the web at:",
"cite_spans": [
{
"start": 346,
"end": 347,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Generation",
"sec_num": "3.4"
},
{
"text": "Features for the insertion model come from the current node being examined and the node's governor. When the governor is a coordinating conjunction, we use features from the governor of the conjunction node. The features used are the lemma, POS tag, and functor for the current node, and the lemma, POS tag, and functor of the governor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Features",
"sec_num": "3.5"
},
{
"text": "i P (a i |w i , w g ) (6) = i P (a i |l i , t i , f i , l g , t g , f g )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Features",
"sec_num": "3.5"
},
{
"text": "The left-hand side of Equation 6 is repeated from Equation 5 above. Equation 6 shows the expanded model for auxiliary insertion where l i is the lemma , t i is the POS tag, and f i is the functor of node w i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Features",
"sec_num": "3.5"
},
{
"text": "Our reordering model for English is based primarily on non-lexical features. We use the POS tag and functor from each node as features. The two distributions in our reordering model (used in Algorithm 1) are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Features",
"sec_num": "3.6"
},
{
"text": "P (c i \u227a c i+1 |c i , c i+1 , g) (7) = (c i \u227a c i+1 |f i , t i , f i+1 , t i+1 , f g , t g ) P (c i \u227a g \u227a c i+1 |c i , c i+1 , g) (8) = P (c i \u227a g \u227a c i+1 |f i , t i , f i+1 , t i+1 , t g , f g )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Features",
"sec_num": "3.6"
},
{
"text": "In both Equation 7 and Equation 8, only the functor and POS tag of each node is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Features",
"sec_num": "3.6"
},
{
"text": "We have experimented with the above models on both manually annotated TR trees and synthetic trees (i.e., automatically generated trees). The data comes from the PCEDT 1.0 corpus 8 , a version of the Penn WSJ Treebank that has been been translated to Czech and automatically transformed to TR in both English and Czech. The English TR was automatically generated from the Penn Treebank's manually annotated surface syntax trees (English phrasestructure trees). Additionally, a small set of 497 sentences were manually annotated at the TR level: 248 Table 1 : Classification accuracy for insertion models on development data from PCEDT 1.0. Article accuracy is computed over the set of nouns. Preposition and subordinating conjunction accuracy (P & SC) is computed over the set of nodes that appear on the surface (excluding hidden nodes in the TR -these will not exist in automatically generated data). Models are shown for all features minus the specified feature. Features with the prefix \"g.\" indicate governor features, otherwise the features are from the node's attributes. The Baseline model is one which never inserts any nodes (i.e., the model which inserts the most probable value -NOAUX).",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "4"
},
{
"text": "for development and 249 for evaluation; results are presented for these two datasets. All models were trained on the PCEDT 1.0 data set, approximately 49,000 sentences, of which 4,200 were randomly selected as held-out training data, the remainder was used for training. We estimate the model distributions with a smoothed maximum likelihood estimator, using Jelinek-Mercer EM smoothing (i.e., linearly interpolated backoff distributions). Lower order distributions used for smoothing are estimated by deleting the rightmost conditioning variable (as presented in the above models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "4"
},
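{
"text": "The sketch below shows linearly interpolated (Jelinek-Mercer style) smoothing in the spirit described above: maximum-likelihood distributions conditioned on progressively shorter contexts, obtained by deleting the rightmost conditioning variable, are mixed together. It is a simplification; the interpolation weights here are hard-coded, whereas the paper estimates them with EM on held-out data, and the variable names are illustrative.\n\nfrom collections import defaultdict\n\ndef train_counts(examples, order):\n    # counts[k][ctx[:k]][outcome] = frequency, for k = 0 .. order\n    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(order + 1)]\n    for ctx, outcome in examples:\n        for k in range(order + 1):\n            counts[k][tuple(ctx[:k])][outcome] += 1\n    return counts\n\ndef interpolated_prob(outcome, ctx, counts, lambdas):\n    # P(outcome | ctx) = sum_k lambda_k * MLE(outcome | ctx[:k])\n    p = 0.0\n    for k, lam in enumerate(lambdas):\n        dist = counts[k].get(tuple(ctx[:k]), {})\n        total = sum(dist.values())\n        if total:\n            p += lam * dist.get(outcome, 0) / total\n    return p\n\n# Toy usage: the context is (lemma, POS, functor); weights would be fit by EM.\nexamples = [(('bureau', 'NN', 'PAT'), 'a'), (('bureau', 'NN', 'PAT'), 'the'),\n            (('capital', 'NN', 'LOC'), 'the')]\ncounts = train_counts(examples, order=3)\nprint(interpolated_prob('the', ('bureau', 'NN', 'PAT'), counts, [0.1, 0.2, 0.3, 0.4]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "4"
},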
{
"text": "Similar experiments were performed at the 2002 Johns Hopkins summer workshop. The results reported here are substantially better than those reported in the workshop report ; however, the details of the workshop experiments are not clear enough to ensure the experimental conditions are identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "4"
},
{
"text": "For each of the two insertion models (the article model and the preposition and subordinating conjunction model), there is a finite set of values for the dependent variable a i . For example, the articles are the complete set of English articles as collected from the Penn Treebank training data (these have manual POS tag annotations). We add a dummy value to this set which indicates no article should be inserted. 9 The preposition and auxiliary model assumes the set of possible modifiers to be all those seen in the training data that were removed when modifying the manual TR trees.",
"cite_spans": [
{
"start": 417,
"end": 418,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Results",
"sec_num": "4.1"
},
{
"text": "The classification accuracy is the percentage of nodes for which we predicted the correct auxiliary from the set of candidate nodes for the auxiliary type. Articles are only predicted and evaluated for nouns (determined by the POS tag). Prepositions and subordinating conjunctions are predicted and evaluated for all nodes that appear on the surface. We do not report results for the modal verb insertion as it is primarily determined by the features of the verb being modified (accuracy is approximately 100%). We have experimented with different features sets and found that the model described in Equation 6 performs best when all features are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Results",
"sec_num": "4.1"
},
{
"text": "In a variant of the insertion model, when the classifier prediction is of low certainty (probability less than .5) we defer to a small set of deterministic rules. For infinitives, we insert \"to\"; for origin nouns, we insert \"from\", for actors we insert \"of\", and we attach \"by\" to actors of passive verbs. In the article insertion model, we do not insert anything if there is another determiner (e.g., \"none\" or \"any\") or personal pronoun; we insert \"the\" if the word appeared within the previous four sentences or if there is a suggestive adjective attached to the noun. 10 Table 1 shows that the classifiers perform better on automatically generated data (Synthetic Data), but also perform well on the manually annotated Table 3 presents a confusion set from the best article classifier on the development data. Our model is relatively conservative, incurring 60% of the error by choosing to insert nothing when it should have inserted an article. The model requires more informed features as we are currently being overly conservative.",
"cite_spans": [
{
"start": 572,
"end": 574,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 723,
"end": 730,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Insertion Results",
"sec_num": "4.1"
},
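{
"text": "The rule-based fallback described above can be pictured as in the Python sketch below: if the classifier's best prediction falls under the .5 threshold, a handful of deterministic rules fire instead. The node attribute names and the abbreviated rule set are illustrative assumptions rather than the exact implementation.\n\ndef insert_with_fallback(distribution, node, threshold=0.5):\n    # distribution: dict mapping candidate auxiliaries (including 'NOAUX') to probabilities.\n    best, prob = max(distribution.items(), key=lambda kv: kv[1])\n    if prob >= threshold:\n        return best\n    # Low-certainty fallback rules (abbreviated): infinitives take 'to', origin\n    # nouns take 'from', actors take 'of' (or 'by' under a passive governor).\n    if node.get('pos') == 'VB' and node.get('is_infinitive'):\n        return 'to'\n    if node.get('functor') == 'ORIG':\n        return 'from'\n    if node.get('functor') == 'ACT':\n        return 'by' if node.get('governor_voice') == 'passive' else 'of'\n    return best\n\n# An uncertain prediction for an origin noun falls back to 'from':\nprint(insert_with_fallback({'NOAUX': 0.40, 'of': 0.35, 'from': 0.25},\n                           {'pos': 'NN', 'functor': 'ORIG'}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion Results",
"sec_num": "4.1"
},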
{
"text": "In Table 4 we report the overall accuracy on evaluation data using the model that performed best on the development data. The results are consistent with the results for the development data; however, the article model performs slightly worse on the evaluation set.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Insertion Results",
"sec_num": "4.1"
},
{
"text": "Evaluation of the final sentence ordering was based on predicting the correct words in the correct po-sitions. We use the reordering metric described in which computes the percentage of nodes for which all children are correctly ordered (i.e., no credit for partially correct orderings). Table 2 shows the reordering accuracy for the full model and variants where a particular feature type is removed. These results are for ordering the correct auxiliary-inserted TR trees (using deepsyntactic functors and the correctly inserted auxiliaries). In the model variant that preserves the deep order of coordinating conjunctions, we see a significant increase in performance. The child node tags are critical for the reordering model, followed by the child functors.",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Reordering Results",
"sec_num": "4.2"
},
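{
"text": "Under our reading of the metric, reordering accuracy is the fraction of nodes whose entire child sequence matches the reference exactly, with no partial credit. A small sketch follows; the dict-of-lists tree encoding is an assumption made for illustration.\n\ndef reordering_accuracy(gold, predicted):\n    # gold / predicted: dict mapping a node id to the ordered list of its children.\n    # A node counts as correct only if its whole child ordering matches.\n    if not gold:\n        return 0.0\n    correct = sum(1 for node in gold if predicted.get(node) == gold[node])\n    return correct / len(gold)\n\ngold = {'open': ['network', 'bureau', 'in'], 'bureau': ['a', 'news']}\npred = {'open': ['network', 'bureau', 'in'], 'bureau': ['news', 'a']}\nprint(reordering_accuracy(gold, pred))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering Results",
"sec_num": "4.2"
},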
{
"text": "Manual Synthetic TR w/ Rules .4614 .4777 TR w/o Rules .4532 .4657 AR .2337 .2451 Table 5 : BLEU scores for complete generation system for TR trees (with and without rules applied) and the AR trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In order to evaluate the combined system, we used the multiple-translation dataset in the PCEDT corpus. This data contains four retranslations from Czech to English of each of the original English sentences in the development and evaluation datasets. In Table 5 we report the BLEU scores on development data for our TR generation model (including the morphological generation module) and the AR generation model. Results for the system that uses AR trees as an intermediate stage are very poor; this is likely due to the noise introduced when generating AR trees. Additionally, the results for the TR model with the additional rules are consistent with the pre-vious results; the rules provide only a marginal improvement. Finally, we have run the complete system on the evaluation data and achieved a BLEU score of .4633 on the manual data and .4750 on the synthetic data. These can be interpreted as the upper-bound for Czech-English translation systems based on TR tree transduction.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
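{
"text": "For readers who want to reproduce this style of evaluation, the snippet below computes a corpus-level BLEU score with NLTK against multiple references per sentence (here a single toy reference), mirroring the multi-reference setup described above. NLTK is our choice for illustration only; the paper does not state which BLEU implementation was used, and the tokenized toy sentences are invented.\n\nfrom nltk.translate.bleu_score import corpus_bleu\n\n# One hypothesis paired with its list of reference translations (tokenized).\nreferences = [[['now', 'the', 'network', 'has', 'opened', 'a', 'news', 'bureau',\n                'in', 'the', 'hungarian', 'capital']]]\nhypotheses = [['the', 'network', 'has', 'opened', 'a', 'news', 'bureau',\n               'in', 'the', 'hungarian', 'capital']]\nprint(corpus_bleu(references, hypotheses))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},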
{
"text": "We have provided a model for sentence synthesis from Tectogrammatical Representation trees. We provide a number of models based on relatively simple, local features that can be extracted from impoverished TR trees. We believe that further improvements will be made by allowing for more flexible use of the features. The current model uses simple linear interpolation smoothing which limits the types of model features used (forcing an explicit factorization). The advantage of simple models of the type presented in this paper is that they are robust to errors in the TR trees -which are expected when the TR trees are generated automatically (e.g., in a machine translation system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In a TR tree, a subtree is always between the nodes to the left and right of its governor. More specifically, all TR trees are projective. For this reason, the relative ordering of subtrees imposes an absolute ordering for the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The type of functor used (deep syntactic or surfacesyntactic) depends on the tree to which we are applying the model. One form of the reordering model operates on AR trees and therefore uses surface syntactic functors. The other model is based on TR trees and uses deep-syntactic functors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The function of synsemantic nodes are encoded by functors. For example, the prepositions to, at, in, by, and on may be used to indicate time or location. An autosemantic modifier will be labeled as temporal or locative, but the particular preposition is not specified.5 Note that we include the auxiliary node labeled NOAUX to be inserted, which in fact means a node is not inserted.6 In the case of nodes whose governor is a coordinating conjunction, the governor information comes from the governor of the coordination node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "LDC catalog number: LDC2004T25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the classifier evaluation we consider the article a and an to be equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Any adjective that is always followed by the definite article in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploiting a probabilistic hierarchical model for generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Owen Rambow. 2000. Exploiting a probabilistic hierarchical model for generation. In Proceed- ings of the 18th International Conference on Computational Linguistics (COLING 2000), Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The prague dependency treebank: Threelevel annotation scenario",
"authors": [
{
"first": "Alena",
"middle": [],
"last": "B\u00f6hmov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Haji\u010dov\u00e1",
"suffix": ""
},
{
"first": "Barbora Vidov\u00e1",
"middle": [],
"last": "Hladk\u00e1",
"suffix": ""
}
],
"year": 2002,
"venue": "Treebanks: Building and Using Syntactically Annotated Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alena B\u00f6hmov\u00e1, Jan Haji\u010d, Eva Haji\u010dov\u00e1, and Barbora Vidov\u00e1 Hladk\u00e1. 2002. The prague dependency treebank: Three- level annotation scenario. In Anne Abeille, editor, In Tree- banks: Building and Using Syntactically Annotated Cor- pora. Dordrecht, Kluwer Academic Publishers, The Neter- lands.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lexical-functional grammar: A formal system for grammatical representation",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1982,
"venue": "The Mental Representation of Grammatical Relations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bresnan and Ronald M. Kaplan. 1982. Lexical-functional grammar: A formal system for grammatical representation. In The Mental Representation of Grammatical Relations. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coarse-to-fine nbest parsing and MaxEnt discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n- best parsing and MaxEnt discriminative reranking. In Pro- ceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Lin- guistics, pages 263-270, Ann Arbor, MI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-driven statistical models for natural language processing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language processing. Computational Linguistics, 29(4):589-637.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An overview of Amalgam: A machine-learned generation module",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Ringger",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Corston-Oliver, Michael Gamon, Eric Ringger, and Robert Moore. 2002. An overview of Amalgam: A machine-learned generation module. In Proceedings of the International Natural Language Generation Conference, pages 33-40, New York, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural language generation in the context of machine translation",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "\u010cmejrek",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Kristen",
"middle": [],
"last": "Parton",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Martin\u010cmejrek, Bonnie Dorr, Yuan Ding, Jason Eisner, Dan Gildea, Terry Koo, Kristen Parton, Dragomir Radev, and Owen Rambow. 2002. Natural language genera- tion in the context of machine translation. Technical report, Center for Language and Speech Processing, Johns Hopkins University, Balitmore. Summer Workshop Final Report.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The practical value of n-grams in generation",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the International Natural Language Generation Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde and Kevin Knight. 1998. The practical value of n-grams in generation. In Proceedings of the International Natural Language Generation Workshop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An empirical verification of coverage and correctness for a general-purpose sentence generator",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde-Geary",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde-Geary. 2002. An empirical verification of cov- erage and correctness for a general-purpose sentence gener- ator. In Proceedings of the International Natural Language Generation Conference.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dependency treelet translation: Syntactically informed phrasal SMT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "271--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. De- pendency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 271-279, Ann Arbor, Michigan, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Meaning of the Sentence in Its Semantic and Pragmatic Aspects",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Sgall",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Haji\u010dov\u00e1",
"suffix": ""
},
{
"first": "Jarmila",
"middle": [],
"last": "Panevov\u00e1",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petr Sgall, Eva Haji\u010dov\u00e1, and Jarmila Panevov\u00e1. 1986. The Meaning of the Sentence in Its Semantic and Pragmatic As- pects. Kluwer Academic, Boston.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stochastic language generation using WIDL-expressions and its application in machine translation and summarization",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Daniel Marcu. 2006. Stochastic language generation using WIDL-expressions and its application in machine translation and summarization. In Proceedings of the 44th Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Example of a manually annotated, Synthetic TR tree (see Section 2.2)."
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">data. Prediction of articles is primarily dependent on</td></tr><tr><td colspan=\"3\">the lemma and the tag of the node. The lemma and</td></tr><tr><td colspan=\"3\">tag of the governing node and the node's functor is</td></tr><tr><td colspan=\"3\">important to a lesser degree. In predicting the prepo-</td></tr><tr><td colspan=\"3\">sitions and subordinating conjunctions, the node's</td></tr><tr><td colspan=\"2\">functor is the most critical factor.</td><td/></tr><tr><td colspan=\"3\">% Errors Reference\u2192Hypothesis</td></tr><tr><td>41</td><td colspan=\"2\">the \u2192 NULL</td></tr><tr><td>19</td><td colspan=\"2\">a/an \u2192 NULL</td></tr><tr><td>16</td><td>NULL \u2192</td><td>the</td></tr><tr><td>11</td><td>a/an \u2192</td><td>the</td></tr><tr><td>11</td><td colspan=\"2\">the \u2192 a/an</td></tr><tr><td>2</td><td colspan=\"2\">NULL \u2192 a/an</td></tr></table>",
"html": null,
"text": "Reordering accuracy for TR trees on development data from PCEDT 1.0. We include performance on the interior nodes (excluding leaf nodes) for the Manual data to show a more detailed analysis of the performance. \"g.\" are the governor features and \"c.\" are the child features. The baseline model sorts subtrees of each node randomly."
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Manual</td><td colspan=\"2\">Synthetic</td></tr><tr><td>Det.</td><td>P &amp; SC</td><td>Det.</td><td>P &amp; SC</td></tr><tr><td>85.53</td><td>89.18</td><td>85.31</td><td>91.54</td></tr></table>",
"html": null,
"text": "Article classifier errors on development data."
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Accuracy of best models on the evaluation data."
}
}
}
}