{ "paper_id": "W89-0202", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:45:12.384836Z" }, "title": "Using Restriction to Optimize Unification Parsing", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "D ale G erdem ann * * D ep artm en t of L inguistics C ognitive Science Group B eckm an In stitu te for A dvanced Science and Technology U niversity of Illinois ' C ogn itive Scien ce G roup, B eckm an In stitu te , 405 N. M athew s, Urbana, 111 61801; d aleQ tan k i.cogvci.u iu c.ed u *1 would like to thank A lan Frisch, Erhard H inrichs, Lucja Ivariska, Jerry M organ, Mike M en delson, and T su neko N ak ataw a for their useful com m ents. A ny deficiencies m ust rest w ith me. T hanks also to the UIUC C ogn itive S cien ce/A rtificial Intelligence fellow ship com m ittee for th e su p p ort th at m ade this research possible. *By un ification parsing I m ean p a n in g of un ification gram mars. See Seifert (1988) for a precise d efinition of a un ification gram m ar. 3I will assum e fam iliarity w ith the basic step s of E arley's algorithm as presented in Earley (1970). For an in trod u ction to E a r le y 's algorithm and its relationship to chart parsing in general see W inograd (1983).", "pdf_parse": { "paper_id": "W89-0202", "_pdf_hash": "", "abstract": [ { "text": "D ale G erdem ann * * D ep artm en t of L inguistics C ognitive Science Group B eckm an In stitu te for A dvanced Science and Technology U niversity of Illinois ' C ogn itive Scien ce G roup, B eckm an In stitu te , 405 N. M athew s, Urbana, 111 61801; d aleQ tan k i.cogvci.u iu c.ed u *1 would like to thank A lan Frisch, Erhard H inrichs, Lucja Ivariska, Jerry M organ, Mike M en delson, and T su neko N ak ataw a for their useful com m ents. A ny deficiencies m ust rest w ith me. T hanks also to the UIUC C ogn itive S cien ce/A rtificial Intelligence fellow ship com m ittee for th e su p p ort th at m ade this research possible. *By un ification parsing I m ean p a n in g of un ification gram mars. See Seifert (1988) for a precise d efinition of a un ification gram m ar. 3I will assum e fam iliarity w ith the basic step s of E arley's algorithm as presented in Earley (1970). For an in trod u ction to E a r le y 's algorithm and its relationship to chart parsing in general see W inograd (1983).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since Shieber (1985) , restriction has been recognized as an important operation in unification parsing. 1 As Shieber points out, the most straightforward adap tation of Earley's algorithm 2 for use with unification grammars fails because the infinite number of categories in these grammars can cause the predictor step in the algorithm to go into an infinite loop, creating ever more and more new predictions (i.e. the problem is that new predictions are not subsumed by pre vious predictions). The basic idea of restriction is to avoid making predictions on the basis of all of the information in a DAG, but rather to take some subset of that information (i.e. a restricted DAG-henceforth RD) and use just that information to make new predictions. Since there are only a finite number of possible RDs the predictor step will no longer go into the infinite loop described above. 
The price you pay for this move is that some spurious predictions will be made, but as Shieber points out, the algorithm is still correct since any spurious predictions will be weeded out by the completer step.", "cite_spans": [ { "start": 6, "end": 20, "text": "Shieber (1985)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Shieber's use of restriction in the predictor step is by now well established. On the other hand, there has been little discussion of the uses of restriction in other stages of parsing. In this paper, I will argue that restriction can be used to advantage in at least three additional ways. First, restriction can be used to significantly speed up the subsumption check on new predictions. Second, it can be used in the completer step in order to speed up the process of finding the correct states in the state sets to be completed. And third, it can be used to add a lookahead component to the unification parser. I will begin this paper by briefly reviewing Shieber's use of restriction and then I will discuss the three additional uses for restriction mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The original motivation for restriction was to avoid infinite cycles in the predictor step of Earley's algorithm. Shieber illustrates this problem with a \"counting grammar\", but the same point can be made using a type of grammar that is somewhat more familiar in recent linguistic theory. Specifically, infinite cycles can arise in grammars that handle subcategorization with list-valued features, such as Head Driven Phrase Structure Grammar (Pollard and Sag, 1987) or PATR style grammars (Shieber, 1986). To illustrate the problem, suppose that we are parsing a sentence using a grammar with the PATR style rules in (1) and (2). The problem of non-termination can arise with this grammar since rule (2) allows for lexical items with indefinitely long subcategorization lists.", "cite_spans": [ { "start": 443, "end": 466, "text": "(Pollard and Sag, 1987)", "ref_id": "BIBREF2" }, { "start": 490, "end": 505, "text": "(Shieber, 1986)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Predictor Step", "sec_num": "2" }, { "text": "(1) x0 → x1 x2
⟨x0 cat⟩ = s
⟨x1 cat⟩ = np
⟨x2 cat⟩ = vp
⟨x2 subcat first⟩ = ⟨x1⟩
⟨x2 subcat rest⟩ = end

(2) x0 → x1 x2
⟨x0 cat⟩ = vp
⟨x0 subcat⟩ = ⟨x1 subcat rest⟩
⟨x1 subcat first⟩ = ⟨x2⟩", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(1) x0 → x1 x2", "sec_num": null }, { "text": "The first step in parsing a sentence with this grammar is to find a rule whose left hand side unifies with the DAG described by the path equation ⟨cat⟩ = s (i.e. the start DAG). Since the rule in (1) satisfies this requirement, the next step is to make a prediction for the x1 daughter. In Earley's algorithm as it was originally formulated (Earley 1970), the prediction for x1 would simply be its category label (i.e. np). In this unification style grammar, however, category labels are just features like any other feature. Since the DAGs associated with each of the non-terminals (x0, x1, ..., xn) in a rule may express just partial information about that non-terminal, it is possible that some non-terminals (such as x2 in the second rule) will not be associated with any category label at all.
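To make the rule format concrete, the following sketch (mine, not part of the paper or of any published implementation) encodes the two rules reconstructed above as Python data, with DAGs as nested dictionaries and reentrancy modeled by object sharing:

```python
# Hypothetical encoding of rules (1) and (2); a rule is the list
# [x0, x1, x2] of the DAGs associated with its non-terminals.

def rule1():
    # (1): the mother is an s; the vp daughter subcategorizes for the np.
    np = {'cat': 'np'}                                 # shared: tag [1]
    return [{'cat': 's'},                              # x0
            np,                                        # x1
            {'cat': 'vp',
             'subcat': {'first': np, 'rest': 'end'}}]  # x2

def rule2():
    # (2): the head x1 combines with the first element of its subcat
    # list; note that x2 carries no 'cat' feature of its own.
    comp = {}                                          # shared: tag [2]
    rest = {}                                          # shared: tag [1]
    return [{'cat': 'vp', 'subcat': rest},             # x0
            {'subcat': {'first': comp, 'rest': rest}}, # x1
            comp]                                      # x2
```

Because nothing bounds the depth of the rest chain in a lexical item's subcat value, rule (2) can be instantiated with arbitrarily deep lists, which is exactly what lets the predictor loop.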
The natural solution, then, would be to make a prediction using the entire DAG associated with a given non-terminal. Suppose, now, that we have parsed the np in rule (1); the predictor must then make a prediction for x2, and unifying the DAG associated with x2 with the left hand side of rule (2) yields the predicted rule shown in (4):

(4) x0 → x1 x2
⟨x0 cat⟩ = vp
⟨x0 subcat first cat⟩ = np
⟨x0 subcat rest⟩ = end
⟨x0 subcat⟩ = ⟨x1 subcat rest⟩
⟨x1 subcat first⟩ = ⟨x2⟩

Now, following the same procedure, the predictor would next make a prediction for the non-terminal x1 in (4). It can easily be seen that when the DAG associated with x1 unifies with the left hand side of rule (2), the predicted rule is almost the same as (4) except that the value for ⟨subcat rest⟩ in (4) becomes the value for ⟨subcat rest rest⟩ in the new prediction. In fact, the predictor step can continue making such predictions ad infinitum and, crucially, the new predictions will not be subsumed by previous predictions.", "cite_spans": [ { "start": 341, "end": 354, "text": "(Earley 1970)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "(1) x0 → x1 x2", "sec_num": null }, { "text": "To solve this problem Shieber proposes that the predictor step should not use all of the information in the DAG associated with a non-terminal, but rather it should use some limited subset of that information. Of course, when some nodes of the DAG are eliminated the predictor step can overpredict, but this does not affect the correctness of the algorithm since these spurious predictions will not be completable. Shieber's proposal is basically that before the predictor step is applied, a RD should be created which contains just the information associated with a finite set of paths (i.e. a restrictor).3 In this way, Shieber's algorithm allows an infinite number of categories to be divided into a finite number of equivalence classes. Since the number of possible RDs is finite, it becomes impossible to make the kind of infinite cycle of predictions illustrated above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(1) x0 → x1 x2", "sec_num": null }, { "text": "Primarily for notational reasons, I will define restriction in a slightly different manner from Shieber (1985). For our purposes here we can define the RD D′ of DAG D to be the least specific DAG D′ ⊑ D such that, for every path P in the restrictor: if the value of P in D is atomic, then the value of P in D′ is the same as the value of P in D; and if the value of P in D is complex, then the value of P in D′ is a variable. This differs from Shieber's definition in that reentrancies are eliminated in the RD. Thus the RD is not really a DAG but rather a tree, and hence it can be represented more easily by a simple list structure.", "cite_spans": [ { "start": 96, "end": 110, "text": "Shieber (1985)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "(1) x0 → x1 x2", "sec_num": null }, { "text": "(5) [feature structure from Shieber (1985); the display is not recoverable from this copy]

(6) [[a, [[b, c]]],
     [d, [[e, [[f, []]]],
          [i, [[j, [[f, []]]]]]]]]
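As a concrete sketch of this operation (the code and representation are mine; DAGs are assumed to be nested dictionaries whose atomic values are strings):

```python
def restrict(dag, restrictor):
    # Build the tree-shaped RD of `dag` as a nested list of
    # [feature, value] pairs: atomic values are copied, complex values
    # become the variable [], and reentrancies are simply not copied.
    rd = []
    for path in restrictor:
        node, tree = dag, rd
        for feat in path:
            if not isinstance(node, dict) or feat not in node:
                break                  # path absent: contributes nothing
            node = node[feat]
            entry = next((e for e in tree if e[0] == feat), None)
            if entry is None:
                entry = [feat, []]
                tree.append(entry)
            if isinstance(node, dict):
                tree = entry[1]        # complex value: descend; if the
            else:                      # path ends here, it stays []
                entry[1] = node        # atomic value: copy it
    return rd

# Reproducing (6) for a DAG consistent with the restricted paths; the
# inner values below the restricted paths are arbitrary stand-ins.
dag = {'a': {'b': 'c'},
       'd': {'e': {'f': {'g': 'h'}}, 'i': {'j': {'f': {'k': 'l'}}}}}
print(restrict(dag, [('a', 'b'), ('d', 'e', 'f'), ('d', 'i', 'j', 'f')]))
# [['a', [['b', 'c']]], ['d', [['e', [['f', []]]], ['i', [['j', [['f', []]]]]]]]]
```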
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(1) x0 → x1 x2", "sec_num": null }, { "text": "The first use of restriction I will discuss involves the subsumption check on new predictions. In the original Earley's algorithm (Earley 1970), a new prediction is added to a state set only if an identical state is not already present. In the adaptation for unification grammars this identity check becomes a subsumption check: if a new DAG is subsumed by a DAG already in the state set, the new DAG is not retained, since any DAGs that could be predicted on the basis of the new DAG could already have been predicted on the basis of the more general DAG. Clearly, the move from an identity check to a subsumption check is the right sort of move to make, but a subsumption check on arbitrarily large DAGs can be an expensive operation. This seems to be an ideal area in which restriction could be used to optimize the algorithm.", "cite_spans": [ { "start": 130, "end": 142, "text": "(Earley 1970)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Subsumption Test", "sec_num": "3" }, { "text": "The move I propose is the following. Initially, new predictions are made in the manner suggested by Shieber; i.e. make a RD for the category \"to the right of the Dot\" and then collect all the rules from the grammar whose left hand side category unifies with this RD; these rules then constitute the new predictions. At this point I suggest that the RD used to find these predictions should be retained along with the new predictions; that is, a list of RDs that have been used to make predictions should be kept for each state set. I will call this list the RD_List. Then, the next time the parser enters the predictor step and creates a new RD from which to make new predictions, a subsumption check can be made directly between this RD and the RD_List. If the new RD is subsumed by any member of the RD_List, then we can immediately give up trying to make any new predictions from this RD. Any predictions made from the RD would necessarily already have been made when the predictor encountered the more general RD in the RD_List. Thus we avoid both the expense of making new predictions and the expense of applying the subsumption test to weed these new predictions out. Moreover, since RDs are typically very small (at least given the sample restrictors given in Shieber (1985, 1986)), the subsumption test that is performed on them can be applied very quickly.", "cite_spans": [ { "start": 1264, "end": 1277, "text": "Shieber (1985", "ref_id": "BIBREF6" }, { "start": 1278, "end": 1293, "text": ", 1986)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Subsumption Test", "sec_num": "3" }, { "text": "As an example, suppose that some set of predictions has already been made using the RD [[cat, np]]; then there is no point in making predictions using [[cat, np], [num, sing]], since any such predictions would necessarily fail the subsumption check; i.e., rules expanding singular noun phrases are more specific than (or subsumed by) rules expanding noun phrases unspecified for number. This particular case probably does not arise often in actual parsing, but cases of left recursion do arise for which this optimization can make a very significant difference in processing speed. In fact, our experience with the UNICORN natural language processing system (Gerdemann and Hinrichs 1988) has shown that for grammars with a large amount of left recursion, this simple optimization can make the difference between several minutes and several seconds of processing time.
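To make the proposal concrete, here is a sketch of the RD-level subsumption test under the list representation introduced above (again, illustrative code rather than the UNICORN implementation):

```python
def subsumes(general, specific):
    # True if RD `general` subsumes RD `specific`; [] is a variable and
    # subsumes anything, while atomic values must match exactly.
    if general == []:
        return True
    if not isinstance(general, list) or not isinstance(specific, list):
        return general == specific
    for feat, gval in general:
        sval = next((v for f, v in specific if f == feat), None)
        if sval is None or not subsumes(gval, sval):
            return False
    return True

# The example above: with [[cat, np]] already in the RD_List, the more
# specific RD is rejected before any predictions are made or tested.
rd_list = [[['cat', 'np']]]
new_rd = [['cat', 'np'], ['num', 'sing']]
skip = any(subsumes(old, new_rd) for old in rd_list)   # True
```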
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Subsumption Test", "sec_num": "3" }, { "text": "The next use of restriction I propose involves the completer step. The completer applies, in Earley's algorithm, at the point where all of the right hand side of a rule in some state has been consumed, i.e., the point at which the \"Dot\" has been moved all the way to the right in some rule. At this point the completer goes back to the state set in which the state to be completed was originally predicted and searches for a prediction in this state set which has a category \"to the right of the Dot\" which can unify with the mother node of the rule in the state to be completed. This search can be quite time consuming since the completer must attempt to perform a unification for each state in this state set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Completer Step", "sec_num": "4" }, { "text": "In each state, there is a variable F which indicates in which state set that state was predicted, so the completer can immediately go back to the Fth state set in order to make the completion. But there is no variable which indicates which state in the Fth state set could have been responsible for making that prediction. And, in fact, it would be quite difficult to implement such a direct backpointer, since in many cases a particular state is really only indirectly responsible for some prediction, in the sense that it would have been responsible for the prediction if it had not been for the subsumption check. For example, suppose we try to implement a system of backpointers as follows. Each state will be a quintuple (Lab, BP, Dot, F, Dag) where Lab is an arbitrary label, BP is a kind of backpointer which takes as its value the label of the state that was responsible for predicting the current state, and Dot, F, and Dag are as in Shieber's adaptation of Earley's algorithm; i.e., Dot is a pointer to the current position in the rule represented by Dag, and F is the more general kind of backpointer which only indicates in which state set the original prediction was made. To illustrate the problem with this scheme, consider the partial state set in (7), in which the subscripted i indicates that this is the ith state set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restriction in the Completer Step", "sec_num": "4" }, { "text": "(7) i[..., [Lab1, BP1, Dot1, F1, Dag1], [Lab2, BP2, Dot2, F2, Dag2], ...]

It is at this point that RDs can again help us out. The idea is that when the predictor attempts to make predictions on the basis of some state, it adds a RD to that state and to all predictions made from that state as a kind of marker (or coindexing between a state and the predictions resulting from that state). The RD used for this coindexing will be either (1) the RD used to make the predictions, or (2), if no predictions were made because a more general RD had already been used to make predictions, this more general RD. Now the completion step is greatly simplified. The completer can go back to the Fth state set and attempt unification only on states that have identical RD-markers. Clearly this move eliminates many attempted unifications that would be doomed to failure. To implement this idea, states will be defined as quintuples (BP, FP, Dot, F, Dag) where BP is a RD acting as a backpointer, FP is a RD acting as a forward pointer, and F, Dot, and Dag are as before. Now the analog of (7) will be (9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(7)", "sec_num": null }, { "text": "(9) i[..., [BP1, FP1, Dot1, F1, Dag1], [BP2, FP2, Dot2, F2, Dag2], ...]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(7)", "sec_num": null }, { "text": "In (9), BP1 and BP2 will each be instantiated to the value of the RD responsible for the prediction which created their respective state. FP1 and FP2, however, will be uninstantiated variables, since these two states have not yet been responsible for creating any new predictions. Now, assuming that the RDs for Dag1 and Dag2 are as in (7), when the predictor applies to the first state shown in (9), the result will be the state set shown in (10). Then, when the predictor looks at the second state in (10), no predictions will be made, as before; however, the predictor will register the attempt to make a prediction by instantiating the variable FP2, as in (11). Now whenever the descendants of states 3 and 4 are ready to be completed, it will be easy to go back to this state set and find the states whose forward pointers are identical to the backpointers of the states to be completed. Thus many candidates for completion are immediately ruled out.
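In outline, the resulting completer filter might look as follows (a sketch with illustrative field names; the RDs are the list structures used throughout):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class State:
    bp: Any             # RD backpointer: set when this state is predicted
    fp: Optional[Any]   # RD forward pointer: set when it first predicts
    dot: int            # position of the Dot in the rule
    f: int              # index of the state set of the original prediction
    dag: Any            # the rule DAG

def completion_candidates(completed, state_sets):
    # Attempt unification only on states in state set F whose forward
    # pointer is identical to the completed state's backpointer.
    return [s for s in state_sets[completed.f] if s.fp == completed.bp]
```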
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(7)", "sec_num": null }, { "text": "The final use for restriction that I propose involves lookahead. Lookahead is one aspect of Earley's algorithm which clearly needs modification in order to be used efficiently with unification grammars, or natural language grammars in general. In the original algorithm, a calculation of lookahead was performed as part of the prediction step. A simple example can show the problem with Earley's version of this procedure. In the S → NP VP rule, when the predictor makes a prediction for NP, it is required to add a state for each possible lookahead string that can be derived from the VP. But given the large number of verbs or adverbs that can start a VP in a natural language, this would require adding a huge number of states to the state set. Clearly we don't want to simply list all the possible lookahead strings; rather, the correct approach would be to find what features these strings have in common and then add a smaller number of states with feature-based lookaheads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restriction Used in Lookahead", "sec_num": "5" }, { "text": "Aside from the question of what kind of lookahead to calculate, there are two other questions that need to be considered: first, when to calculate lookahead, and second, how to calculate it. Beginning with the when question, it is clear that unification grammars require lookahead to be calculated at a later point than it is in Earley's approach. The reason for this is illustrated by rules like (2), repeated here as (12):

(12) x0 → x1 x2
⟨x0 cat⟩ = vp
⟨x0 subcat⟩ = ⟨x1 subcat rest⟩
⟨x1 subcat first⟩ = ⟨x2⟩

According to Earley's approach, when a prediction is made for x1, the lookahead for x2 should be calculated. But in this case, no features for x2 will be specified until after x1 is parsed. This is an extreme situation, but it illustrates a general problem. It is the normal case in a unification grammar for the result of parsing one category to affect the feature instantiations on its sister. Clearly, what needs to be done in this case is to parse x1 and then perform a lookahead on x2. Thus, lookahead should be calculated for a category immediately before the predictor applies to that category; i.e., lookahead can be considered a quick check to be made immediately before applying prediction.
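Putting the pieces together, the predictor might then be organized as in the following sketch, which reuses restrict, subsumes, and State from the earlier sketches; category_after_dot and lookahead_ok (the table-based check sketched below) are assumed helpers, and grammar rules are taken to be (lhs_dag, rule_dag) pairs:

```python
def predictor(state, i, words, grammar, restrictor, rd_list, state_sets,
              la_table, lexicon):
    cat = category_after_dot(state)                # assumed helper
    rd = restrict(cat, restrictor)
    # Lookahead as a quick pre-prediction check: if the next word cannot
    # begin a phrase of this (restricted) category, abandon the prediction.
    if not lookahead_ok(rd, words[i], la_table, restrictor, lexicon):
        return
    # Subsumption check directly on RDs: if a more general RD has already
    # been used, only record it as this state's forward-pointer marker.
    general = next((old for old in rd_list if subsumes(old, rd)), None)
    if general is not None:
        state.fp = general
        return
    rd_list.append(rd)
    state.fp = rd
    for lhs_dag, rule_dag in grammar:
        # Predict from restricted information only; spurious predictions
        # are weeded out later by the completer.
        if unifiable_rd(restrict(lhs_dag, restrictor), rd):
            state_sets[i].append(State(bp=rd, fp=None, dot=0, f=i,
                                       dag=rule_dag))

def unifiable_rd(rd1, rd2):
    # Two tree-shaped RDs are unifiable iff no atomic values clash;
    # [] is a variable and is compatible with anything.
    if rd1 == [] or rd2 == []:
        return True
    if not isinstance(rd1, list) or not isinstance(rd2, list):
        return rd1 == rd2
    for feat, v1 in rd1:
        v2 = next((v for f, v in rd2 if f == feat), None)
        if v2 is not None and not unifiable_rd(v1, v2):
            return False
    return True
```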
Unlike Earley's original algorithm, then, it is not necessary to put a lookahead string into a state to be checked at a later point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restriction Used in Lookahead", "sec_num": "5" }, { "text": "The question, then, is how to calculate lookahead. In Earley's version of the algorithm, there is a function Hk which, when applied to a category C, returns a set of k-symbol strings of terminals which could begin a phrase of category C. When applied to unification grammars, however, the problem of having an infinite number of categories again appears. We certainly cannot list the possible strings of preterminals that can begin each category. It is clear, then, that some form of restriction is again going to be necessary in order to implement any kind of lookahead. One relatively simple way of implementing this idea is as follows. When the predictor applies to a category C, the first thing it does is make a RD for C. Then a table lookup is performed to determine what preterminal categories could begin C. Since there are potentially infinite preterminal categories, restriction must be applied here too. So, more precisely, the table lookup finds a set of RDs that could unify with whatever actual preterminal could begin a phrase of category C. Let us call these RDs the preterminal RDs. Then, before the predictor can actually make a prediction, a check must be performed to verify that the next item in the input is an instance of a category that can unify with one of the preterminal RDs. If the check fails, then the prediction is abandoned. All that remains is to specify how the lookup table is constructed. One way such a table might be constructed would be to run the parser in reverse for generation, as in Shieber (1988). Thus, for each possible RD (given a particular restrictor), the generator is used to determine what preterminal RDs can begin a phrase of this category.
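One way to realize this check, as a sketch along the same lines (the table representation is mine, and the table itself is assumed to have been precomputed offline, e.g. by running the parser in reverse as a generator):

```python
def freeze(rd):
    # Make a tree-shaped RD hashable so that it can index the table.
    return tuple(freeze(x) if isinstance(x, list) else x for x in rd)

def lookahead_ok(rd, next_word, la_table, restrictor, lexicon):
    # la_table maps a frozen RD to the preterminal RDs that can begin a
    # phrase of that category; lexicon maps words to lexical DAGs.
    preterminal_rds = la_table.get(freeze(rd), ())
    for lexical_dag in lexicon.get(next_word, []):
        word_rd = restrict(lexical_dag, restrictor)
        if any(unifiable_rd(p, word_rd) for p in preterminal_rds):
            return True
    return False
```

If no preterminal RD for the category is compatible with the next word, the prediction is abandoned exactly as described above.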
", "cite_spans": [ { "start": 1524, "end": 1538, "text": "Shieber (1988)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Restriction Used in Lookahead", "sec_num": "5" }, { "text": "I have argued here that restriction can be used in unification parsing to effect three optimizations. First, it can be used to greatly speed up the subsumption test for adding new predictions to the state set; second, it can be used to speed up the search in the completer step; and finally, it can be used to implement a form of lookahead. The first two of these uses have been fully implemented within the UNICORN natural language processing system (Gerdemann and Hinrichs 1988). The use of restriction with lookahead is still under development.", "cite_spans": [ { "start": 457, "end": 486, "text": "(Gerdemann and Hinrichs 1988)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In general, the fact that unification grammars may have categories of indefinite complexity necessitates some way of focusing on limited portions of the information contained in these categories. It seems quite likely, then, that restriction would be useful even in other parsing algorithms for unification grammars. The primary question that remains is what portion of the information in complex DAGs should be used in these algorithms; that is, the question is how to choose a restrictor. Up to now, no general principles have been given for choosing a restrictor for greatest efficiency. Given the proposals in this paper, it becomes even more critical to find such general principles, since restriction can affect the efficiency of several steps in the parsing algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An efficient context-free parsing algorithm", "authors": [ { "first": "Jay", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Communications of the ACM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay Earley. An efficient context-free parsing algorithm. Communications of the ACM, 1970.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "UNICORN: a unification parser for attribute-value grammars", "authors": [ { "first": "Dale", "middle": [], "last": "Gerdemann", "suffix": "" }, { "first": "Erhard", "middle": [], "last": "Hinrichs", "suffix": "" } ], "year": 1988, "venue": "Studies in the Linguistic Sciences", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dale Gerdemann and Erhard Hinrichs. UNICORN: a unification parser for attribute-value grammars. Studies in the Linguistic Sciences, 1988.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Information-Based Approach to Syntax and Semantics: Volume 1 Fundamentals", "authors": [ { "first": "Carl", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1987, "venue": "CSLI Lecture Notes No. 13", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Pollard and Ivan Sag. An Information-Based Approach to Syntax and Semantics: Volume 1 Fundamentals. CSLI Lecture Notes No. 13, University of Chicago Press, Chicago, 1987.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chart-parsing of unification-based grammars with ID\\LP-rules", "authors": [ { "first": "Roland", "middle": [], "last": "Seiffert", "suffix": "" } ], "year": 1987, "venue": "Categories, Polymorphism and Unification", "volume": "", "issue": "", "pages": "335--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roland Seiffert. Chart-parsing of unification-based grammars with ID\\LP-rules. In Ewan Klein and Johan van Benthem, editors, Categories, Polymorphism and Unification, pages 335-354, CCS/ILLI, Edinburgh/Amsterdam, 1987.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Introduction to Unification-Based Approaches to Grammar", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1986, "venue": "CSLI Lecture Notes", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Shieber. An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes No.
4, University of Chicago Press, Chicago, 1986.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A uniform architecture for parsing and generation", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1988, "venue": "COLING-88", "volume": "", "issue": "", "pages": "614--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Shieber. A uniform architecture for parsing and generation. In COLING-88, pages 614-619, 1988.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using restriction to extend parsing algorithms for complex-feature-based formalisms", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1985, "venue": "ACL Proceedings, 23rd Annual Meeting", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Shieber. Using restriction to extend parsing algorithms for complex-feature-based formalisms. In ACL Proceedings, 23rd Annual Meeting, pages 145-152, 1985.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language as a Cognitive Process: Syntax. Ablex, Norwood", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Winograd. Language as a Cognitive Process: Syntax. Ablex, Norwood, 1983.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "For example, given the restrictor [⟨a b⟩, ⟨d e f⟩, ⟨d i j f⟩], the RD for the DAG in (5) (from Shieber 1985) will be represented by the indented list shown in (6), in which variables are indicated by [].4" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "3 The question of how to select an appropriate restrictor for greatest efficiency must remain a question for further research. See the conclusion of this paper for further discussion. 4 Eliminating reentrancies from RDs may also be a reasonable thing to do from a computational point of view. Judging from the particular restrictors used in Shieber (1985, 1986) it would appear that reentrancies rarely occur in RDs. However, for some purposes it may be desirable to include more information in RDs. A possible example would be the use of parsing algorithms for generation, in which it would be desirable to use as much top-down information as possible." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "repeated here as (12)" }, "TABREF3": { "num": null, "html": null, "text": "Now suppose the RD for Dag1 is [[cat, np]] and that the RD for Dag2 is [[cat, np], [num, sing]]. When the predictor looks at state Lab1 it will make some number of predictions with backpointers to Lab1, as in (8) (for example, [Lab3, Lab1, 0, i, Dag3] is a new state with an arbitrary label Lab3, a backpointer to state Lab1, the Dot set at 0 indicating the beginning of the left hand side, F set to i indicating that the prediction was made in state set i, and Dag3 representing the new rule). But when the predictor looks at Lab2, no predictions will be made, since its RD is subsumed by the RD of Lab1. Thus, even though (without the subsumption check) Lab2 could have been responsible for the predictions Lab3 and Lab4, no backpointers are created for Lab2.", "type_str": "table", "content": "
(8) i[..., [Lab1, BP1, Dot1, F1, Dag1], [Lab2, BP2, Dot2, F2, Dag2],
[Lab3, Lab1, Dot3, F3, Dag3], [Lab4, Lab1, Dot4, F4, Dag4], ...]