{ "paper_id": "W89-0204", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:45:46.667617Z" }, "title": "A Uniform Formal Framework for Parsing", "authors": [ { "first": "Bernard", "middle": [], "last": "Lang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "W89-0204", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Many of the formalisms used to define the syntax of natural (and programming) languages may be located in a continuum that ranges from propositional Horn logic to full first order Horn logic, possibly with non-Herbrand interpretations. This structural parenthood has been previously remarked: it led to the development of Prolog and is analyzed in some detail in . A notable outcome is the parsing technique known as Earley deduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These formalisms play (at least) three roles:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "descriptive: they give a finite and organized description of the syntactic structure of the language; analytic: they can be used to analyze sentences so as to retrieve a syntactic structure (i.e. a representation) from which the meaning can be extracted; generative: they can also be used as the specification of the concrete representation of sentences from a more structured abstract syntactic representation (e.g. 
a parse tree).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The choice of a formalism is essential with respect to the descriptive role, since it controls the perspicuity with which linguistic phenomena may be understood and expressed in actual language descriptions, and hence the tractability of these descriptions for the human mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, computational tractability is required by the other two roles if we intend to use these descriptions for mechanical processing of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of our work, which is partially reported here, is to obtain a uniform understanding of the computational aspects of syntactic phenomena within the continuum of Horn-like formalisms considered above, and to devise general-purpose algorithmic techniques to deal with these formalisms in practical applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To attain this goal, we follow a three-sided strategy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Systematic study of the lower end of the continuum, represented by context-free (CF) grammars (simpler formalisms, such as propositional Horn logic, do not seem relevant for our purpose).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Systematic study of the higher end of the continuum, i.e. 
first order Horn clauses,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Analysis of the relations between intermediate formalisms and Horn clauses, so as to reuse for intermediate formalisms the understanding and algorithmic solutions developed for the more powerful Horn clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This strategy is motivated by two facts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 the computational properties of both CF grammars and Horn clauses may be expressed with the same computational model: the non-deterministic pushdown automaton,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 the two formalisms have a compatible concept of syntactic structure: the parse-tree in the CF case, and the proof-tree in the Horn clause case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The greater simplicity of the CF formalism helps us understand more easily most of the computational phenomena. We then generalize this knowledge to the more powerful Horn clauses, and finally we specialize it from Horn clauses to the possibly less powerful but linguistically more perspicuous intermediate formalisms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the rest of this paper we present two aspects of our work:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. a new understanding of shared parse forests and their relation to CF grammars, and 2. 
a generalization to full Horn clauses, also called Definite Clause (DC) programs, of the pushdown stack computational model developed for CF parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Though much research has been devoted to this subject in the past, most of the practically usable work has concentrated on deterministic push-down parsing, which is clearly inadequate for natural language applications and does not generalize to more complex formalisms. On the other hand there has been little formal investigation of general CF parsing, though many practical systems have been implemented based on some variant of Earley's algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Parsing", "sec_num": "2" }, { "text": "Our contribution has been to develop a formal model which can describe these variants in a uniform way, and encompasses the construction of parse-trees, and more generally of parse-forests. This model is based on the compilation paradigm common in programming languages and deterministic parsing: we use the non-deterministic Pushdown Automaton (PDA) as a virtual parsing machine which we can simulate with an Earley-like construction; variations on Earley's algorithm are then expressed as variations in the compilation schema used to produce the PDA code from the original CF grammar. This uniform framework has been used to compare experimentally parsing schemata w.r.t. parser size, parsing speed and size of shared forest, and to reuse the wealth of PDA construction techniques to be found in the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Parsing", "sec_num": "2" }, { "text": "This work has been reported elsewhere . An essential outcome, which is the object of this section, is a new understanding of the relation between CF grammars, parse-trees and parse-forests, and the parsing process itself. 
The presentation is informal since our purpose is to give an intuitive understanding of the concepts, which is our interpretation of the earlier theoretical results. Essentially, we shall first show that both CF grammars and shared parse forests may be represented by AND-OR graphs, with specific interpretations. We shall then argue that this representational similarity is not accidental, and that there is no difference between a shared forest and a grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Parsing", "sec_num": "2" }, { "text": "Our running example for a CF grammar is the pico-grammar of English, taken from , which is given in figure 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free Grammars", "sec_num": "2.1" }, { "text": "In figure 2 we give a graphical representation of this grammar as an AND-OR graph. The notation for this AND-OR graph is unusual and emphasizes the difference between AND and OR nodes. OR-nodes are represented by the non-terminal categories of the grammar, and AND-nodes are represented by the rules (numbers) of the grammar. There are also leaf-nodes corresponding to the terminal categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free Grammars", "sec_num": "2.1" }, { "text": "The OR-node corresponding to a non-terminal X has exiting arcs leading to each AND-node n representing a rule that defines X. These arcs are not explicitly represented in the graphical formalism chosen. If there is only one such arc, then it is represented by placing n immediately under X. This is the case for the OR-node representing the non-terminal PP. If there are several such arcs, they are implicitly represented by enclosing in an ellipse the OR-node X above all its son nodes n, n', ... 
This is the case for the OR-node representing the non-terminal NP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free Grammars", "sec_num": "2.1" }, { "text": "The sons of an AND-node (i.e. a rule) are the grammatical categories found in the right-hand side of the rule, in that order. The arcs leading from an AND-node to its sons are represented explicitly. The convention for orienting the arcs is that they leave a node from below and reach a node from above. This graph accurately represents the grammar, and is very similar to the graphs used in some parsers. For example, LR(0) parsing uses a graph representation of the grammar that is very similar, the main difference being that the sons of AND-nodes are linked together from left to right, rather than being attached separately to the AND-node [AhoU-72, DeR-71]. More simply, this graph representation is very close to the data structures often used to represent conveniently a grammar in a computer memory. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-free Grammars", "sec_num": "2.1" }, { "text": "Given a sentence in the language defined by a CF grammar, the parsing process consists in building a tree structure, the parse tree, that shows how this sentence can be constructed according to the grammatical rules of the language. It is however frequent that the CF syntax of a sentence is ambiguous, i.e. that several distinct parse-trees may be constructed for it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "Let us consider the grammar of figure 1. If we take as example the sentence \"I see a man with a mirror\", which translates into the terminal sequence \"n v det n prep det n\", we can build the two parse trees given in figures 3 and 4. 
Note that we label a parse tree node with its non-terminal category and with the rule used to decompose it into constituents. Hence such a parse tree could be seen as an AND-OR tree similar to the AND-OR grammar graph of figure 2. However, since all OR-nodes are degenerate (i.e. have a unique son), a parse tree is just an AND-tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "The number of possible parse trees may become very large when the size of sentences increases: it may grow exponentially with that size, and may even be infinite for cyclic grammars (which seem of little linguistic usefulness ). Since it is often desirable to consider all possible parse trees (e.g. for semantic processing), it is convenient to merge as much as possible these parse trees into a single structure that allows them to share common parts. This sharing saves on the space needed to represent the trees, and also on the later processing of these trees, since it may allow the processing of some common parts to be shared between two trees. The shared representation of all parse trees is called a shared parse forest, or just parse forest. [Figure 6: A shared parse forest]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "To analyze how two trees can share a (connected) part, we first notice that such a part may be isolated by cutting the tree along an edge (or arc) as in figure 5. This actually gives us two parts: a subtree and a context (cf. figure 5). Either of these two parts may be shared in forests representing two trees. When a subtree is the same for two trees, it may be shared as shown in figure 7. 
When contexts are equal and may thus be shared, we get the structure depicted in figure 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "The sharing of context actually corresponds to ambiguities in the analyzed sentence: the ellipse in figure 8 contains the head nodes for two distinct parses of the same subsentence v, that both recognize v in the same non-terminal category NT. Each head node is labelled with (the number of) the rule used to decompose v into constituents in that parse, and the common syntactical category labels the top of the ellipse. Not accidentally, this structure is precisely the structure of the OR-nodes we used to represent CF grammars. Indeed, an ambiguity is nothing but a choice between two possible parses of the same sentence fragment v as the same syntactic category NT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "Using a combination of these two forms of sharing, the two parse trees of figures 3 and 4 may be merged into the shared parse forest of figure 6. In this representation we keep our double labelling of parse tree nodes with both the non-terminal category and the rule used to decompose it into its constituents. As indicated above, ambiguities are represented with context sharing, i.e. by OR-nodes that are the exact equivalent of those of figure 2. Hence a shared parse forest is an AND-OR graph. Note however that the same rule (resp. non-terminal) may now label several AND-nodes (resp. 
OR-nodes) of the shared parse forest graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "If we make the labels distinct, for example by indexing them so as not to lose their original information, we can then read the shared forest graph of a sentence s as a grammar Gs. The language of this grammar contains only the sentence s, and it gives s the same syntactic structure(s) -i.e. the same parse tree(s) and the same ambiguities -as the original grammar, up to the above renaming of labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse trees and parse forests", "sec_num": "2.2" }, { "text": "Our view of parsing may be extended to the parsing of incomplete sentences . An example of incomplete sentence is \"... see ... mirror\". Assuming that we know that the first hole stands for a single missing word, and that the second one stands for an arbitrary number of words, we can represent this sentence by the sequence \"? v * n\". The convention is that \"?\" stands for one unknown word, and \"*\" for any number of them. Such an incomplete sentence s may be understood as defining a sublanguage Ls which contains all the correct sentences matching s. Any parse tree for a sentence in that sublanguage may then be considered a possible parse tree for the incomplete sentence s. For example, the sentences \"I see a man with a mirror\" and \"You see a mirror\" are both in the sublanguage of the incomplete sentence above. Consequently, the two parse trees of figures 3 and 4 are possible parse trees for this sentence, along with many others. All parse trees for the sentence s = \"? 
v * n\" may be merged into a shared parse forest that is represented in figure 9.", "cite_spans": [], "ref_spans": [ { "start": 1054, "end": 1062, "text": "figure 9", "ref_id": null } ], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "The graph of this forest has been divided into two parts by a horizontal grey line. The terminal labels underscored in grey represent any word in the corresponding terminal category. This is also true for all the terminal labels in the bottom part of the graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "The forest fragment below the horizontal line is a (closed) subgraph of the original grammar of figure 2 (which we have completed in grey to emphasize the fact). It corresponds to parse trees of constituents that are completely undefined, within their syntactical categories, and may thus be any tree in that category that the grammar can generate. This occurs once in the forest for non-terminal PP at the arc marked \u03b1 and twice for NP at the arcs marked \u03b2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "This bottom part of the graph brings no new information (it is just the part of the original grammar reachable from nodes PP and NP). Hence the forest could be simplified by eliminating this bottom subgraph, and labelling the end node of the \u03b1 (resp. \u03b2) arc with PP* (resp. NP*), meaning an arbitrary PP (resp. NP) constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "The complete shared forest of figure 9 may be interpreted as a CF grammar Gs. 
This grammar is precisely a grammar of the sublanguage Ls of all sentences that match the incomplete sentence s. Again, up to renaming of nonterminals, this grammar Gs gives the sentences in Ls the same syntactic structure as the original grammar of the full language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "If the sentence parsed is the completely unknown sentence u = \"*\", then the corresponding sublanguage Lu is the complete language considered, and the parse forest for u is quite naturally the original grammar of the full language: The grammar of a CF language is the parse-forest of the completely unknown sentence, i.e. the syntactic structure of all sentences in the language, in a non-trivial sense. In other words, all one can say about a fully unknown sentence assumed to be correct is that it satisfies the syntax of the language. This statement takes on a stronger significance when shared parse forests are actually built by parsers, and when such a parser does return the original grammar for the fully unknown sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "Parsing a sentence according to a CF grammar is just extracting a parse tree fitting that sentence from the CF grammar considered as a parse forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "Looking at these issues from another angle, we have the following consequence of the above discussion: given a set of parse trees (i.e. 
appropriately decorated trees), they form the set of parses of a CF language iff they can be merged into a shared forest that is finite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "In [BilL-88, Lan-88a] Billot and the author have proposed parsers that actually build shared forests formalized as CF grammars. This view of shared forests originally seemed to be an artifact of the formalization chosen in the design of these algorithms, and appeared possibly more obfuscatory than illuminating. It has been our purpose here to show that it really has a fundamental character, independently of any parsing algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "This close relation between sharing structures and context-freeness actually hints at limitations of the effectiveness of sharing in parse forests defined by non-CF formalisms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "From an algorithmic point of view, the construction of a shared forest for a (possibly incomplete) sentence may be seen as a specialization of the original grammar to the sublanguage defined by that sentence. This shows interesting connections with the general theory of partial evaluation of programs , which deals with the specialization of programs by propagation of known properties of their input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "In practice, the published parsing algorithms do not always give shared forests with maximum sharing. 
This may result in forests that are larger or more complex, but does not invalidate our presentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse forests for incomplete sentences", "sec_num": "2.3" }, { "text": "The PDA-based compilation approach proved itself a fruitful theoretical and experimental support for the analysis and understanding of general CF parsing \u00e0 la Earley. In accordance with our strategy of uniform study of the \"Horn continuum\", we extended this approach to general Horn clauses, i.e. DC programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Horn Clauses", "sec_num": "3" }, { "text": "which is an operational engine intended to play for Horn clauses the same role as the usual PDA for CF languages. Space limitations prevent giving here a detailed presentation of LPDAs, and we only sketch the underlying ideas. More details may be found in .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "As in the CF case, the evaluation of a DC program may be decomposed into two phases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "\u2022 a compilation phase that translates the DC program into an LPDA. Independently of the later execution strategy, the compilation may be done according to a variety of evaluation schemata: top-down, bottom-up, predictive bottom-up, ... 
Specific optimization techniques may also be developed for each of these compilation schemata.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "\u2022 an execution phase that can interpret the LPDA according to some execution technique: backtrack (depth-first), breadth-first, dynamic programming, or some combination .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "This separation of concerns leads to a better understanding of issues, and should allow a more systematic comparison of the possible alternatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "In the case of dynamic programming execution, the LPDA formalism leads to very simple structures that we believe are easier to analyze, prove, and optimize than the corresponding direct constructions on DC programs , while remaining independent of the computation schema, unlike the direct constructions. 
Note that predictive bottom-up compilation followed by dynamic programming execution is essentially equivalent to Earley deduction as presented in .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "The next sections include a presentation of LPDAs and their dynamic programming interpretation, a compilation schema for building an LPDA from a DC program, and an example applying this top-down construction to a very simple DC program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "This led to the definition of the Logical Push-Down Automaton (LPDA)", "sec_num": null }, { "text": "An LPDA is essentially a PDA that stores logical atoms (i.e. predicates applied to arguments) and substitutions on its stack, instead of simple symbols. The symbols of the standard CF PDA stack may be seen as predicates with no arguments (or more accurately with two arguments, similar to those used to translate CF grammars into DC in ). 
A technical point is that we consider PDAs without \"finite state\" control: this is possible without loss of generality by having pop transitions that replace the top two atoms by only one (this is standard in LR(k) PDA parsers ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "Formally an LPDA A is a 6-tuple:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "A = (X, F, \u0394, $, $f, \u0398)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "where X is a set of variables, F is a set of function and constant symbols, \u0394 is a set of stack predicate symbols, $ and $f are respectively the initial and final stack predicates, and \u0398 is a finite set of transitions having one of the following three forms: B -> C (horizontal transitions), B -> C B (push transitions), and B D -> C (pop transitions). Intuitively (and approximately) a pop transition B D -> C is applicable to a stack configuration with atoms A and A' on top, iff there is a substitution s such that Bs = As and Ds = A's. Then A and A' are removed from the stack and replaced by Cs, i.e. the atom C to which s has been applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "Things are similar for the other kinds of transitions. Of course an LPDA is usually non-deterministic w.r.t. 
the choice of the applicable transition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "In the case of dynamic programming interpretations, all possible computation paths are explored, with as much sub-computation sharing as possible. The algorithm proceeds by building a collection of items (analogous to those of Earley's algorithm) which are pairs of atoms. An item <A A'> represents a stack fragment of two consecutive atoms A and A'. If another item <A' A\"> was also created, this means that the sequence of atoms A A' A\" is to be found in some possible stack configuration, and so on (up to the use of substitutions, not discussed here). The computation is initialized with an initial item <$ $>. New items are produced by applying the LPDA transitions to existing items, until no new application is possible (an application may often produce an already existing item). The computation terminates under similar conditions as specialized algorithms . If successful, the computation produces one or several final items of the form <$f $>, where the arguments of $f are an answer substitution of the initial DC program. In a parsing context, one is usually interested in obtaining parse-trees rather than \"answer substitutions\". A parse tree is here a proof tree corresponding to the original DC program. 
Such proof trees may be obtained by the same techniques that are used in the case of CF parsing , and that actually interpret the items and their relations as a shared parse forest structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "Substitutions are applied to items as follows (we give as example the most complex case): a pop transition B D -> C is applicable to a pair of items <A A'> and <E E'>, iff there is a unifier s of <B D> and <A A'>, and a unifier s' of A's and E. This produces the item <Css' E's'>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical PDAs and their dynamic programming interpretation", "sec_num": "3.1" }, { "text": "Given a DC program, many different compilation schemata may be used to build a corresponding LPDA . We give here a very simple and unoptimized top-down construction. The DC program to be compiled is composed of a set of clauses \u03b3k : Ak,0 <- Ak,1, ..., Ak,nk, where each Ak,i is a logical literal. The query is assumed to be the head literal A0,0 of the first clause \u03b30.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "The construction of the top-down LPDA is based on the introduction of new predicate symbols Vk,i, corresponding to positions between the body literals of each clause \u03b3k. The predicate Vk,0 corresponds to the position before the leftmost literal, and so on. Literals in clause bodies are refuted from left to right. The presence of an instance of a position literal Vk,i(tk) in the stack indicates that the first i subgoals corresponding to the body of some instance of clause \u03b3k have already been refuted. 
The argument bindings of that position literal are the partial answer substitution computed by this partial refutation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "For every clause \u03b3k : Ak,0 <- Ak,1, ..., Ak,nk, we denote by tk the vector of variables occurring in the clause. Recall that Ak,i is a literal using some of the variables in \u03b3k, while Vk,i is only a predicate which needs to be given the argument vector tk to become the literal Vk,i(tk).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "Then we can define the top-down LPDA by the following transitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "1. $ -> V0,0(t0) $ 2. Vk,i(tk) -> Ak,i+1 Vk,i(tk) - for every clause \u03b3k and for every position i in its body: 0 <= i < nk", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "3. Ak,0 -> Vk,0(tk) - for every clause \u03b3k 4. Vk,nk(tk) Vk',i(tk') -> Vk',i+1(tk') - for every pair of clauses \u03b3k and \u03b3k' and for every position i in the body of \u03b3k': 0 <= i < nk'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "The final predicate of the LPDA is the stack predicate V0,n0 which corresponds to the end of the body of the first \"query clause\" of the DC program. 
The rest of the LPDA is defined accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "The following is an informal explanation of the above transitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "1. Initialization: We require the refutation of the body of clause γ0, i.e. of the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "2. Selection of the leftmost remaining subgoal: When the first i literals of clause γk have been refuted, as indicated by the position literal ∇k,i(tk), then select the i+1st literal Ak,i+1 to be now refuted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "3. 
Selection of clause γk: Having to satisfy a subgoal that is an instance of Ak,0, eliminate it by resolution with the clause γk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "The body of γk is now considered as a sequence of new subgoals, as indicated by the position literal ∇k,0(tk).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-down compilation of DC programs into LPDAs", "sec_num": "3.2" }, { "text": "Return to calling clause γk': Having successfully refuted the head of clause γk by refuting successively all literals in its body as indicated by position literal ∇k,nk(tk), we return to the calling clause γk' and \"increment\" its position literal from ∇k',i(tk') to ∇k',i+1(tk'), since the body literal Ak',i+1 has been refuted as an instance of the head of γk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Backtrack interpretation of an LPDA thus constructed essentially mimics the Prolog interpretation of the original DC program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "The following example has been produced with a prototype implementation realized by Eric Villemonte de la Clergerie and Alain Zanchetta. The definite clause program to be executed is given in figure 11. 
Note that a search for all solutions in a backtrack evaluator would not terminate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "The solutions found by the computer are: X2 = f(f(a))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "X2 = f(a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "X2 = a 5 If k = k' then we rename the variables in tk' since the transition corresponds to the use of two distinct variants of the clause γk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "Note also that we need not define such a transition for all triples of integers k, k' and i, but only for those triples such that the head of γk unifies with the literal Ak',i+1. These solutions were obtained by first compiling the DC program into an LPDA according to the schema defined in section 3.2, and then interpreting this LPDA with the general dynamic programming algorithm defined in section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "The LPDA transitions produced by the compilation are in figure 10. The collection of items produced by the dynamic programming computation is given in figure 12.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 65, "text": "figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "In the transitions printout of figure 10, each predicate name nabla.i.j stands for our ∇i,j. According to the construction of section 3.2, the final predicate should be nabla.0.1. 
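Why the item-based dynamic programming interpretation terminates where backtracking loops can be seen on a toy example (our own illustration; figure 11's program is not reproduced here): tabulating already-derived facts saturates a cyclic search space in finitely many steps. A minimal bottom-up sketch, with a hypothetical fixpoint helper:

```python
# Illustration of tabulated all-solutions evaluation (illustrative, not
# the LPDA algorithm itself): for the program
#   path(X,Y) :- edge(X,Y).    path(X,Y) :- path(X,Z), edge(Z,Y).
# a Prolog backtracking search for all paths loops on a cyclic edge
# relation, while saturating a table of derived facts terminates.

def fixpoint(edges):
    path = set(edges)          # base case: every edge is a path
    changed = True
    while changed:             # saturate: stop when no new fact appears
        changed = False
        for (x, z) in list(path):
            for (z2, y) in edges:
                if z == z2 and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return path
```

On the two-node cycle {("a","b"), ("b","a")} this halts after deriving the four reachable pairs, whereas depth-first backtracking would enumerate ever-longer paths around the cycle.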
For better readability we have added a horizontal transition to a final predicate noted answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A very simple example", "sec_num": "3.3" }, { "text": "Pereira and Warren have shown in their classical paper the link between CF grammars and DC programs. A similar approach may be applied to more complex formalisms than CF grammars, and we have done so for Tree Adjoining Grammars (TAG).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other linguistic formalisms", "sec_num": "4" }, { "text": "By encoding TAGs into DC programs, we can specialize to TAGs the above results, and easily build TAG parsers (using at least the general optimization techniques valid for all DC programs). Furthermore, control mechanisms akin to the agenda of chart parsers, together with some finer properties of LPDA interpretation, allow one to control precisely the parsing process and produce Earley-like left-to-right parsers, with a complexity O(n^6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other linguistic formalisms", "sec_num": "4" }, { "text": "We expect that this approach can be extended to a variety of other linguistic formalisms, with or without unification of feature structures, such as head grammars, linear indexed grammars, combinatory categorial grammars. This is indeed suggested by the results of Joshi, Vijay-Shanker and Weir that relate these formalisms and propose CKY or Earley parsers for some of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other linguistic formalisms", "sec_num": "4" }, { "text": "The parse forests built in the CF case correspond to proof forests in the Horn case. Such proof forests may be obtained by the same techniques that we used for CF parsing. 
However, it is not yet fully clear how parse trees or derivation trees may be extracted from the proof forest when DC programs are used to encode non-CF syntactic formalisms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other linguistic formalisms", "sec_num": "4" }, { "text": "Our understanding of syntactic structures and parsing may be considerably enhanced by comparing the various approaches in similar formal terms. Hence we attempt to formally unify the problems in two ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "-by considering all formalisms as special cases of Horn clauses -by expressing all parsing strategies with a unique operational device: the pushdown automaton.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Systematic formalization of problems often considered to be pragmatic issues (e.g. parse forests) has considerably improved our understanding and has been an important success factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The links established with problems in other areas of computer science (e.g. 
partial evaluation, database recursive queries) could be the source of interesting new approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this paper, the abbreviation PDA always implies the possibility of non-determinism", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "International Parsing Workshop '89", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Theory of Parsing, Translation and Compiling", "authors": [ { "first": "A", "middle": [ "V" ], "last": "Aho", "suffix": "" } ], "year": null, "venue": "J.D", "volume": "19", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, A.V., and Ullman, J.D. 1972 The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, New Jersey. [Bil-88]", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyseurs Syntaxiques et Non-Determinisme", "authors": [ { "first": "S", "middle": [], "last": "Billot", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Billot, S. 1988 Analyseurs Syntaxiques et Non-Determinisme. These de Doctorat. Universite d'Orleans la Source, Orleans (France).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The structure of Shared Forests in Ambiguous Parsing", "authors": [ { "first": "S", "middle": [], "last": "Billot", "suffix": "" }, { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1989, "venue": "Proc. of the 27th Annual Meeting of the Association for Computational Linguistics", "volume": "1038", "issue": "", "pages": "143--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Billot, S.; and Lang, B. 1989 The structure of Shared Forests in Ambiguous Parsing. 
Proc. of the 27th Annual Meeting of the Association for Computational Linguistics, Vancouver (British Columbia), 143-151. Also INRIA Research Report 1038. [Coh-88]", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A View of the Origins and Development of Prolog", "authors": [ { "first": "J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1988, "venue": "Communications of the ACM", "volume": "31", "issue": "1", "pages": "26--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, J. 1988 A View of the Origins and Development of Prolog. Communications of the ACM 31(1): 26-36.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Metamorphosis Grammars", "authors": [ { "first": "A", "middle": [], "last": "Colmerauer", "suffix": "" } ], "year": 1975, "venue": "Springer LNCS 63. First appeared as Les Grammaires de Metamorphose, Groupe d'Intelligence Artificielle", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colmerauer, A. 1978 Metamorphosis Grammars, in Natural Language Communication with Computers, L. Bolc ed., Springer LNCS 63. First appeared as Les Grammaires de Metamorphose, Groupe d'Intelligence Artificielle, Universite de Marseille II, 1975. [DeR-71]", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simple LR(k) Grammars", "authors": [ { "first": "F", "middle": [ "L" ], "last": "Deremer", "suffix": "" } ], "year": 1971, "venue": "Communications ACM", "volume": "14", "issue": "7", "pages": "453--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "DeRemer, F.L. 1971 Simple LR(k) Grammars. 
Communications ACM 14(7): 453-460.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Proceedings of the Workshop on Partial Evaluation and Mixed Computation", "authors": [ { "first": "Y", "middle": [], "last": "Futamura", "suffix": "" } ], "year": 1988, "venue": "", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Futamura, Y. (ed.) 1988 Proceedings of the Workshop on Partial Evaluation and Mixed Computation. New Generation Computing 6(2,3).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deterministic Techniques for Efficient Non-deterministic Parsers", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1974, "venue": "Proc. of the 2nd Colloquium on Automata, Languages and Programming", "volume": "14", "issue": "", "pages": "255--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1974 Deterministic Techniques for Efficient Non-deterministic Parsers. Proc. of the 2nd Colloquium on Automata, Languages and Programming, J. Loeckx (ed.), Saarbrücken, Springer Lecture Notes in Computer Science 14: 255-269. Also: Rapport de Recherche 72, IRIA-Laboria, Rocquencourt (France).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Parsing Incomplete Sentences", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "Proc. of the 12th Internat. Conf. on Computational Linguistics (COLING 88)", "volume": "1", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Parsing Incomplete Sentences. Proc. of the 12th Internat. Conf. on Computational Linguistics (COLING 88) Vol. 1: 365-371, D. Vargha (ed.), Budapest (Hungary).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Datalog Automata. Proc. of the 3rd Internat. Conf. 
on Data and Knowledge Bases", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "389--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Datalog Automata. Proc. of the 3rd Internat. Conf. on Data and Knowledge Bases, C. Beeri, J.W. Schmidt, U. Dayal (eds.), Morgan Kaufmann Pub., pp. 389-404, Jerusalem (Israel).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Complete Evaluation of Horn Clauses: an Automata Theoretic Approach", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Complete Evaluation of Horn Clauses: an Automata Theoretic Approach. INRIA Research Report 913. [Lan-88c]", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Systematic Construction of Earley Parsers: Application to the Production of O(n^6) Earley Parsers for Tree Adjoining Grammars", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 The Systematic Construction of Earley Parsers: Application to the Production of O(n^6) Earley Parsers for Tree Adjoining Grammars. In preparation. [PerW-80]", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Definite Clause Grammars for Language Analysis -A Survey of the Formalism and a Comparison with Augmented Transition Networks", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" }, { "first": "D", "middle": [ "H D" ], "last": "Warren", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "13", "issue": "", "pages": "231--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F.C.N.; and Warren, D.H.D. 
1980 Definite Clause Grammars for Language Analysis -A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13: 231-278.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Parsing as Deduction", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" }, { "first": "D", "middle": [ "H D" ], "last": "Warren", "suffix": "" } ], "year": 1983, "venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "137--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F.C.N.; and Warren, D.H.D. 1983 Parsing as Deduction. Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics: 137-144, Cambridge (Massachusetts).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Earley Deduction", "authors": [ { "first": "H", "middle": [ "H" ], "last": "Porter", "suffix": "" } ], "year": 1986, "venue": "Oregon Graduate Center", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Porter, H.H. 3rd 1986 Earley Deduction. Tech. Report CS/E-86-002, Oregon Graduate Center, Beaverton (Oregon).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "OLD Resolution with Tabulation", "authors": [ { "first": "H", "middle": [], "last": "Tamaki", "suffix": "" }, { "first": "T", "middle": [], "last": "Sato", "suffix": "" } ], "year": 1986, "venue": "Proc. of 3rd Internat. Conf. on Logic Programming", "volume": "225", "issue": "", "pages": "84--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamaki, H.; and Sato, T. 1986 OLD Resolution with Tabulation. Proc. of 3rd Internat. Conf. 
on Logic Programming, London (UK), Springer LNCS 225: 84-98.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An Efficient Context-free Parsing Algorithm for Natural Languages and Its Applications", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1985 An Efficient Context-free Parsing Algorithm for Natural Languages and Its Applications. Ph.D. thesis, Carnegie-Mellon University, Pittsburgh, Pennsylvania.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An Efficient Augmented-Context-Free Parsing Algorithm", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1987, "venue": "Computational Linguistics", "volume": "13", "issue": "1-2", "pages": "31--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1987 An Efficient Augmented-Context-Free Parsing Algorithm. Computational Linguistics 13(1-2): 31-46.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recursive Query Processing: The power of Logic", "authors": [ { "first": "L", "middle": [], "last": "Vieille", "suffix": "" } ], "year": 1987, "venue": "European Computer Industry Research Center (ECRC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vieille, L. 1987 Recursive Query Processing: The power of Logic. Tech. 
Report TR-KB-17, European Computer Industry Research Center (ECRC), Munich (West Germany).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Characterizing Structural Descriptions Produced by Various Grammatical Formalisms", "authors": [ { "first": "K", "middle": [], "last": "Vijay-Shankar", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Weir", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 25th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay-Shankar, K.; Weir, D.J.; and Joshi, A.K. 1987 Characterizing Structural Descriptions Produced by Various Grammatical Formalisms. Proceedings of the 25th", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics: 104-111, Stanford (California).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Recognition of Combinatory Categorial Grammars and Linear Indexed Grammars", "authors": [ { "first": "K", "middle": [], "last": "Vijay-Shankar", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Weir", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay-Shankar, K.; and Weir, D.J. 1989 Recognition of Combinatory Categorial Grammars and Linear Indexed Grammars. 
These proceedings.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Evaluateur de Clauses de Horn", "authors": [ { "first": "E", "middle": [], "last": "Villemonte De La Clergerie", "suffix": "" }, { "first": "A", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Villemonte de la Clergerie, E.; and Zanchetta, A. 1988 Evaluateur de Clauses de Horn. Rapport de Stage d'Option, Ecole Polytechnique, Palaiseau (France).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": ": A Context-Free Grammar Figure 2: Graph of the Grammar" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "characteristic of the AND/OR graph representing a grammar is that all nodes have different labels. Conversely, any labelled AND/OR graph such that all node labels are different may be read as - translated into - a CF grammar such that AND-node labels are rule names, OR-node labels represent non-terminal categories, and leaf-node labels represent terminal categories." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 5: Context and Subtree" }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "2The direct production of such shared representation by parsing algorithms also corresponds to sharing in the parsing computation. 3This graphical representation of shared forests is not original: to our knowledge it was first used by . However, we believe that its comparative understanding as context sharing, as AND-OR tree Two parses sharing a subtree Figure 8: Two parses sharing a context. The context being shared is the empty outer context of the two possible parse trees, that still states that a proper parse tree must belong to the syntactic category S.
}, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "or as grammar has never been presented. Context sharing is called local ambiguity packing by Tomita. 4This graph may have cycles for infinitely ambiguous sentences when the grammar of the language is itself cyclic." }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "by C on top of stack where B, C and D are A-atoms, i.e. atoms built with A, F and X." }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "Transitions of the LPDA." } } } }