{ "paper_id": "W89-0206", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:44:56.689194Z" }, "title": "Head-Driven Parsing", "authors": [ { "first": "M", "middle": [], "last": "Artin Kay", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "There are clear signs of a \"Back to Basics\" movement in parsing and syntactic generation. Our Latin teachers were apparently right. You should start with the main verb. This w ill tell you what kinds of subjects and objects to look for and what cases they will be in. When you com e to look for these, you should also stan by trying to find the main word, because this will tell you most about what else to look for. In the early days o f research on machine translation, Paul Garvin advocated the applicadon o f what he called the \"Fulcrum\" method to the analysis of sentences. If he was the last to heed the injunctions o f his Latin teacher, it is doubtless because America follow ed the tradition o f rewriting systems exemplified by context-free grammar and this provided no immediate motivation for the notion o f the head of a construction. The European tradition, and particularly the tradition o f Eastern Europe, where Garvin had his roots, tend more towards dependency grammar, but away from that of mathematical formalization which has been the underpinning o f computational linguistics. But the move now is towards linguistic descriptions that put more information in the lexicon so that grammar rules take on a more schematic quality. Little by little, we moved from rules like (1) VPl-> VP2 NP CaseO f(VP2) = D a t iv e CaseOf(NP) = D a t i v e to rules that attain greater abstraction through the use of logical variables (or the equivalent), like (2) VPl-> VP2 NP ObjCase(VP2) = \u25a0 Case CaseOf(NP)-Case Where the underlined Case is to be taken as the name o f a variable. From there, it was a short step to (3) VPl-> VP2 X C om p lem en tO f(VP2)-X-52-International Parsing Workshop '89", "pdf_parse": { "paper_id": "W89-0206", "_pdf_hash": "", "abstract": [ { "text": "There are clear signs of a \"Back to Basics\" movement in parsing and syntactic generation. Our Latin teachers were apparently right. You should start with the main verb. This w ill tell you what kinds of subjects and objects to look for and what cases they will be in. When you com e to look for these, you should also stan by trying to find the main word, because this will tell you most about what else to look for. In the early days o f research on machine translation, Paul Garvin advocated the applicadon o f what he called the \"Fulcrum\" method to the analysis of sentences. If he was the last to heed the injunctions o f his Latin teacher, it is doubtless because America follow ed the tradition o f rewriting systems exemplified by context-free grammar and this provided no immediate motivation for the notion o f the head of a construction. The European tradition, and particularly the tradition o f Eastern Europe, where Garvin had his roots, tend more towards dependency grammar, but away from that of mathematical formalization which has been the underpinning o f computational linguistics. But the move now is towards linguistic descriptions that put more information in the lexicon so that grammar rules take on a more schematic quality. 
Little by little, we moved from rules like (1) VP1 -> VP2 NP, CaseOf(VP2) = Dative, CaseOf(NP) = Dative, to rules that attain greater abstraction through the use of logical variables (or the equivalent), like (2) VP1 -> VP2 NP, ObjCase(VP2) = Case, CaseOf(NP) = Case, where the underlined Case is to be taken as the name of a variable. From there, it was a short step to (3) VP1 -> VP2 X, ComplementOf(VP2) = X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "or even (4) VP1 -> VP2 X, ComplementStringOf(VP2) = X. Given rule (2), the parser knows what case the noun must have only after it has encountered the verb. Rules (3) and (4) do not even tell it that the complement must be a noun phrase. In (4) we cannot even tell how many complements there will be. For most parsers, the problem is masked in these examples by the fact that they apply rules from left to right, so that the value of the variable X is known by the time it is needed. In rule (4a), the matter is different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(4a) VP1 -> X VP2, ComplementStringOf(VP2) = X. Needless to say, these things have not gone unnoticed, least of all by the participants in this conference. It has been noted, for example, that definite-clause grammars can be adjusted so as to look for heads before complements and adjuncts. If the head of a sentence is a verb phrase, then it is sufficient to write (6) instead of (5). A rule that expands the verb phrase would be something like (7). This time, the order is the usual one because the head is on the left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Of course, all this works if Left, Middle, and Right are something like word numbers that provide random access to the parts of the sentence. To make the system work with difference lists, we need something more, for example, as in (8). We have now moved to the Prolog convention of using capitalized names for variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The reason for the addition is that the parser, embodied here in the set of rules themselves, has no way to tell where the verb phrase will begin. It must therefore consider all possible positions in the string, an end which, against all expectation, is accomplished by the append predicate. If append is not needed when something like word numbers are used, it is because the inevitable search of the string is being quietly conducted by the Prolog system as it searches its database, rather than being programmed explicitly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The old-fashioned parser had no trouble finding the beginnings of things because they were always immediately adjacent, either to the boundaries of the sentence, or to another phrase whose position was already known. Given the sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "and given appropriate rules, the head-driven parser will correctly identify \"my car\" as the direct object of \"sold\".
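The rules numbered (5) through (8) are not reproduced in this transcription, so the following is only a rough sketch, with invented predicate names and a toy lexicon, of the kind of clause that (8) calls for: the head verb phrase is looked for first, and append/3 supplies the blind search over its possible starting points.

% Illustrative sketch only; predicate names, argument order and lexicon
% are assumptions made here, not taken from the paper.
% s(S0, S): the difference list S0-S spans a sentence. The head VP may
% begin anywhere, so append/3 enumerates candidate starting points for it.
s(S0, S) :-
    append(Front, VP0, S0),   % material to the left of the VP
    np(Front, []),            % which must exhaust a subject NP
    vp(VP0, S).               % the VP spans the remainder

% Within the VP the head verb is leftmost, so the usual left-to-right
% threading of difference lists suffices.
vp(V0, V) :-
    v(V0, V1),
    np(V1, V).

% Toy lexicon, again purely for illustration.
v([sold|X], X).
np([i|X], X).
np([my, car|X], X).

A query such as s([i, sold, my, car], []) succeeds, but only after append/3 has proposed every split of the string as a possible beginning for the verb phrase.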
But the head-driven parser will also consider for this role at least the following:
African languages
African languages whom I met
African languages whom I met at a party
languages
languages whom I met
languages whom I met at a party
a party
It will reject them only when it fails to extend them far enough to the left to meet the right-hand edge of the word \"sold\". Likewise, the last four entries on the list will be constructed again as possible objects for the preposition \"of\". As we shall see, this problem is not easy to set aside. Of course, definite-clause grammars have other problems when interpreted directly by a standard Prolog processor. The most notorious of these is that, in their classical form, they cycle indefinitely when provided with a grammar that involves left recursion. However, this can be overcome by using a more appropriate interpreter, such as the one given in Appendix A of this paper. It", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I sold my car to a student of African languages whom I met at a party", "sec_num": null }, { "text": "does not touch the question of the additional work that has to be done in parsing a sentence. Two solutions to the problem suggest themselves immediately. One is to use an undirected bottom-up parsing strategy, and the other is to seek an appropriate adaptation of chart parsing to a directed, head-driven strategy. The first solution works for the simple reason that the problem we are facing simply does not arise in undirected bottom-up processing. There is no question of finding phrases that are adjacent to, or otherwise positioned relative to, other phrases. The strategy is a purely opportunistic one which finds phrases wherever, and whenever, its control strategy dictates. A simple chart parser with these properties is given in Appendix B. It accepts only unary and binary rules, but this is not a real restriction, because these binary rules can function as meta-rules that interpret the more general rules of the actual grammar according to something like the following scheme. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "is a partially formed phrase belonging to the given Category, which can be completed by adding the items specified by the Left list on the left, and the Right list on the right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The scheme is reminiscent of categorial grammar. p(Category, Left, Right)", "sec_num": null }, { "text": "This scheme has a certain elegance in that the parser itself is simple and does not reflect any peculiarities of head-driven grammar. Only the simple meta-rules given above are in any way special. Furthermore, the performance properties of the chart parser are not compromised. On the other hand, this inactive chart parser cannot be extended into an active chart parser in a straightforward manner, as our second solution requires. This is the crux of the matter that this paper addresses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The scheme is reminiscent of categorial grammar. p(Category, Left, Right)", "sec_num": null }, { "text": "Suppose that the verb that will be the head of a verb phrase has been located, but that it remains to identify one or two objects for it on the right.
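In terms of the scheme introduced above, such a verb corresponds to a partial phrase that still lacks material on its right. The binary meta-rules themselves are not reproduced in this transcription, so the following is only an assumed illustration of how they might be encoded; the predicate names, the order in which right-hand and left-hand sisters are collected, and the sample grammar rule are all choices made here rather than the paper's.

% p(Cat, Left, Right): a partial phrase of category Cat still needing the
% categories in Left on its left (nearest first) and Right on its right.
% rule(Mother, Head, Left, Right): a hypothetical grammar rule, e.g. a VP
% headed by a verb with an NP and a PP to its right.
rule(vp, v, [], [np, pp]).

% A head introduces one partial phrase for each rule that accepts it as head.
project(Head, p(Mother, Left, Right)) :-
    rule(Mother, Head, Left, Right).

% combine(LeftItem, RightItem, Result): two adjacent items form a larger one.
% A partial phrase absorbs an adjacent completed phrase, here taking its
% right-hand sisters before its left-hand ones.
combine(p(Cat, Left, [Need|Right]), Need, p(Cat, Left, Right)).
combine(Need, p(Cat, [Need|Left], []), p(Cat, Left, [])).

% A partial phrase with nothing left to find is a completed phrase.
completed(p(Cat, [], []), Cat).

On this reading, projecting the verb through the rule above yields p(vp, [], [np, pp]), and the task is to find the phrases named in the right-hand list, in order, to the right of the verb.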
A standard active chart parser finds such objects by introducing active edges at the vertex to the right of the verb, which will build the first object if the material necessary for its construction is available, or comes to be available. As the construction proceeds, active edges stretch further and further to the right until the construction is complete and the corresponding inactive edge is introduced. This works only because the phrase can be built incrementally starting from the left, that is, starting next to the phrase to which it must be adjacent. But this strategy is not open to the head-driven parser, which must begin by locating, or constructing, the head of the new phrase. The rest of the phrase must then be constructed outwards from the head. We are therefore forced to modify the standard approach. Suppose once again that a verb has been identified and that we are now concerned to find its sisters to the right. The verb can have been found only because there was a seek in existence for verbs covering the region in which it was found, and this, in its turn, will have come about because seeks were extant in that region for higher-level phrases, notably verb phrases. The objects we are now interested to locate must lie entirely in a region bounded on the left by the verb itself and, on the right, by the furthest right-hand end of a VP seek that includes the verb. The Prolog code that implements this strategy is considerably more complicated than that for the techniques discussed earlier, and I have therefore not included it. I believe that the strategy I have outlined is the natural one for anyone to adopt who is determined to work with a head-driven active chart parser. However, it is entirely unclear that the advantages that it offers over the simple undirected chart parser are worth its considerable added expense in complexity. Notice that, if one of the other nouns in the sentence just considered also had a verbal interpretation, the search for noun phrases would have been active everywhere. The longer the sentence, and therefore the more pressing the need for high performance, the more active regions there would be in the string and the more nearly the process as a whole would approximate that of the undirected technique. This should not, of course, be taken as an indictment of head-driven parsing, which is interesting for reasons having nothing to do with performance. It does, however, suggest that the temptation to claim that it is also a natural source of efficiency should be resisted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The scheme is reminiscent of categorial grammar. p(Category, Left, Right)", "sec_num": null }, { "text": "This is a simple head-driven recursive-descent parser. There is a distinction between the top-level parse predicate and the syntax predicate, to eliminate inessential arguments to the top-level call, and also because the program can, with only minor modifications in syntax, be used as a generator. The predicate head is assumed to be defined as part of the grammar.
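For a toy grammar it might consist of clauses like the following; the particular categories, and the inclusion of the reflexive clause, are assumptions made here for illustration and are not taken from the appendix.

% Assumed toy facts: the category of the immediate head daughter of each phrase.
immediate_head(s, vp).
immediate_head(vp, v).
immediate_head(np, n).

% head(Phrase, Head): Head can be the head (of the head, of the head, ...)
% of Phrase; the first clause lets a label count as its own head.
head(X, X).
head(Phrase, Head) :-
    immediate_head(Phrase, Daughter),
    head(Daughter, Head).

Thus head(s, v) holds, and a parser whose goal is an s can begin by looking for a v somewhere in the string.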
In general, it is true of a pair of grammatical labels if the second can be the head (of the head, of the head ...) of the first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A - A PARSER-GENERATOR FOR HEAD-DRIVEN GRAMMAR.", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "FIGREF4": { "text": "Real rules have a similar format to that used in the program in Appendix A. L1 ... Ln are the non-head (complement) daughters of 'Mother' to the left of the head, and R1 ... Rm are those to the right. For convenience, we give the ones on the left in the reverse of the order in which they actually appear, so that the one nearest to the head is written first. We define the binary rule predicate referred to in the algorithm somewhat as follows:", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "We propose to enrich the notion of a chart so that, instead of simply active and inactive edges, it contains five different types of object. Edges can be active or inactive, but they can also be pending or current. This gives four of the five kinds. The fifth we shall refer to simply as a seek. It is a record of the fact that phrases with a given label are being sought in a given region of the chart. A seek contains a label and also identifies a pair of vertices in the chart. It is irrelevant at the level of generality of this discussion whether we think of the seek as actually being located in, or on, one of the vertices, or as being representable as a transition between them. A condition that the chart is required to maintain is that edges with the same label as that of a seek, both of whose end points lie within the region of the seek, must be current. Edges which are not so situated must be pending. The standard chart regime never calls for information in a chart to change, but that is not the case here. When a new seek is introduced, pending edges are modified to become current as necessary to maintain the above invariant. The fundamental rule (Henry Thompson's term) of chart parsing is that an action is taken, possibly resulting in the introduction of new edges, whenever the introduction of a particular new edge brings the operative end of an active edge together, at the same vertex, with an end of an inactive edge. If the label on the inactive edge is of the kind that the active edge can consume, a new edge is introduced, possibly provoking new applications of the fundamental rule. The fundamental rule also applies in our enriched charts, but only to current edges; pending edges are ignored by it.", "num": null, "type_str": "figure", "uris": null }, "FIGREF6": { "text": "Accordingly, a new seek is established for NPs in this region. The immediate effect of this will be to make current any pending edges in that region that are inactive and labeled NP, or active and labeled with a rule that forms NPs. It remains to discuss how active edges, whether current or pending, are introduced in the first place. The simplest solution seems to be to do this just as it would be in an undirected, bottom-up parser. Whenever a current inactive edge is introduced, or a pending one becomes current, active edges are introduced, one for each rule that could accept the new item as head.
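In outline, and with a representation assumed here rather than taken from the appendices (an edge is written edge(From, To, Contents, Status), where Contents is either a completed category or a partial phrase p(Category, Left, Right) in the sense used earlier, and Status is pending or current), that step might be written as follows.

% Assumed sketch: when an inactive edge for Cat becomes available as current,
% introduce one active edge for every rule that could accept Cat as its head.
% The new active edges initially span just the head and start out pending.
introduce_active(edge(From, To, Cat, current), Chart0, Chart) :-
    findall(edge(From, To, p(Mother, Left, Right), pending),
            rule(Mother, Cat, Left, Right),
            New),
    append(Chart0, New, Chart).

Whether a new active edge should span just the head, as it does here, is itself an assumption rather than something the text settles.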
However, these do not become current until a need for them emerges higher in the structure, and this is signaled by the introduction of a seek. Consider, for example, the sentence the dog saw the cat and assume that dog, saw, and cat are nouns, saw is also a transitive verb, and that the", "num": null, "type_str": "figure", "uris": null }, "FIGREF8": { "text": "edge ... when the edge being added is current. Notice that the edge for the word saw, construed as a verb, is initially introduced as current, because the goal is to find a sentence and a seek is therefore extant for S, VP, and V, covering the whole string. The N edge for saw, however, is pending. In step 4, the active edge is introduced that will consume the object of saw when it is found. This introduces a seek for NP and N between vertex 3 and the end of the sentence. For this reason, when cat is introduced in step 8, it is as a current edge. Notice, however, that the word the, in step 7, is introduced as pending, because it is not the head of an NP. However, the introduction of the active NP edge in step 9 causes the edge for the to be made current, and this is what happens in step 10. The active S edge in step 13 activates the search for an NP before the verb, so that all the remaining edges are introduced as current. At the end of the process all pending edges have been made current except the one corresponding to the nominal interpretation of saw.", "num": null, "type_str": "figure", "uris": null }, "FIGREF9": { "text": "Having hypothesized the label of the eventual lexical head of a phrase that will satisfy the current goal, syntax calls range to find a word in the string with that label. If such a word is found, its position in the string will be given by the HRange (head range) difference list and it must, in any case, lie within the range of the string given by Maxl and Maxr. The build predicate constructs phrases with the given putative head so long as their labels stand in the head relation to the goal. ... is a DL giving the bounds of the phrase satisfying the goal; Maxl/Maxr gives the string bounds for the current search.", "num": null, "type_str": "figure", "uris": null } } } }