{ "paper_id": "W98-0131", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:05:49.711304Z" }, "title": "Automatie Extraction of Stochastic Lexicalized Tree Grammars from Treebanks", "authors": [ { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "", "affiliation": {}, "email": "neumann@dfki.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt. its size, its complexity, and its predictive power on unseen data. Processing of different S-LTG is performed by a stochastic version of the two-step Early-based parsing strategy introduced in (Schabes and Joshi, 1991).", "pdf_parse": { "paper_id": "W98-0131", "_pdf_hash": "", "abstract": [ { "text": "We present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt. its size, its complexity, and its predictive power on unseen data. Processing of different S-LTG is performed by a stochastic version of the two-step Early-based parsing strategy introduced in (Schabes and Joshi, 1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt . its size, its complexity, and its predictive power on unseen data. The use of S-LTGs is motivated for two reasons. First, it is assumed that S-LTG better capture distributional and hierarchical information than stochastic CFG (cf. (Schabes, 1992; Schabes and \\Vaters, 1996) ), and second, they allow the factorization of recursion of different kinds, viz. extraction of left, right, and wrapping auxiliary trees and possible combinations. Existing treebanks are used because they allow a corpus-based analysis of grammars of realistic size. Processing of different S-LTG is performed by a stochastic version of the two-phase Early-based parsing strategy introduced in (Schabes and Joshi, 1991) .", "cite_spans": [ { "start": 484, "end": 499, "text": "(Schabes, 1992;", "ref_id": "BIBREF9" }, { "start": 500, "end": 526, "text": "Schabes and \\Vaters, 1996)", "ref_id": null }, { "start": 921, "end": 946, "text": "(Schabes and Joshi, 1991)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This abstract describes work in progress. So far, we have concentrated on the automatic extraction of S-LTGs of different kinds (actually S-LTSG, S-LTIG, and S-LTAG). This phase is completed and we will report on first experiments using the Penn-Treebank (Marcus et al., 1993) and Negra, a treebank for German (Skut et al., 1997) . 
A first version of the two-phase parser is implemented, and we have started first tests of its performance.", "cite_spans": [ { "start": 255, "end": 276, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF3" }, { "start": 310, "end": 329, "text": "(Skut et al., 1997)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a treebank, grammar extraction is the process of decomposing each parse tree into smaller units called subtrees. In our approach, the underlying decomposition operation (1) should yield lexically anchored subtrees, and (2) should be guided by linguistic principles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar extraction", "sec_num": "2" }, { "text": "The motivation behind (1) is the observation that in practice stochastic CFGs perform worse than non-hierarchical approaches, and that lexicalized tree grammars may be able to capture both distributional and hierarchical information (Schabes and Waters, 1996). Concerning (2), we want to take advantage of the linguistic principles explicitly or implicitly used to define a treebank. This is motivated by the hypothesis that it will better support the development of on-line or incremental learning strategies (the cutting criteria are less dependent on the quantity and quality of the existing treebank than purely statistically based approaches, see also sec. 5) and that it makes possible a comparison of an induced grammar with a linguistically based competence grammar. Both aspects (but especially the latter one) are of importance because it is possible to apply the same learning strategy to a treebank computed by some competence grammar, and to investigate methods for combining treebanks and competence grammars (see sec. 6).", "cite_spans": [ { "start": 231, "end": 257, "text": "(Schabes and Waters, 1996)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar extraction", "sec_num": "2" }, { "text": "However, in this paper we will focus on the use of existing treebanks, namely the Penn treebank (Marcus et al., 1993) and Negra, a treebank for German (Skut et al., 1997). First, it is assumed that the treebank comes with a notion of lexical and phrasal head, i.e., with a kind of head principle (see also (Charniak, 1997)). In the Negra treebank, head elements are explicitly tagged. For the Penn treebank, the head relation has been determined manually. In case it is not possible to uniquely identify one head element, a parameter called DIRECTION specifies whether the left or right candidate should be selected. Note that by means of this parameter we can also specify whether the resulting grammar should prefer left or right branching.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF3" }, { "start": 149, "end": 168, "text": "(Skut et al., 1997)", "ref_id": "BIBREF11" }, { "start": 304, "end": 320, "text": "(Charniak, 1997)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar extraction", "sec_num": "2" }, { "text": "Using the head information, each tree from the treebank is decomposed from the top downwards into a set of subtrees, such that each non-terminal non-headed subtree is cut off and the cutting point is marked for substitution. The same process is then recursively applied to each extracted subtree. 
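To make the cutting operation concrete, here is a minimal sketch of such a head-driven decomposition. The Node structure and its fields are illustrative assumptions, not the paper's actual implementation; terminals are modeled as nodes without children.

```python
# Minimal sketch of head-driven tree decomposition (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    is_head: bool = False                         # head marking (explicit in Negra)
    children: list = field(default_factory=list)
    subst: bool = False                           # cutting point, marked for substitution

def decompose(root):
    """Cut off every non-terminal, non-headed subtree; each cut child is
    replaced by a substitution node, and the cut-off subtree is decomposed
    recursively in the same way."""
    extracted = [root]

    def walk(node):
        for i, child in enumerate(node.children):
            if child.children and not child.is_head:
                extracted.extend(decompose(child))             # recurse on the cut
                node.children[i] = Node(child.label, subst=True)
            else:
                walk(child)                                    # follow the head-chain
    walk(root)
    return extracted
```

Note that the recursion descends only through headed (or terminal) children, so the complete head-chain is preserved in each extracted tree.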
Due to the assumed head notion, each extracted tree will automatically be lexically anchored (and the path from the lexical anchor to the root can be seen as a head-chain). Furthermore, every terminal element which is a sister of a node of the head-chain will also remain in the extracted tree. Thus, the yield of the extracted tree might contain several terminal substrings, which gives rise to interesting patterns of word or POS sequences. For each extracted tree, a frequency counter is used to compute the probability p(t) of a tree t after the whole treebank has been processed, such that $\\sum_{t: root(t) = \\alpha} p(t) = 1$, where $\\alpha$ denotes the root label of a tree t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar extraction", "sec_num": "2" }, { "text": "After a tree has been decomposed completely, we obtain a set of lexicalized elementary trees where each non-terminal of the yield is marked for substitution. In a next step, the set of elementary trees is divided into a set of initial and auxiliary trees. The set of auxiliary trees is further subdivided into sets of left, right, and wrapping auxiliary trees following (Schabes and Waters, 1995) (using special foot node labels, like :lfoot, :rfoot, and :wfoot). Note that the identification of possible auxiliary trees is strongly corpus-driven. Using special foot node labels allows us to carefully trigger the corresponding inference rules. For example, it might be possible to treat the :wfoot label as the substitution label, which means that we consider the extracted grammar as an S-LTIG, or only highly frequent wrapping auxiliary trees will be considered. It is also possible to treat every foot label as the substitution label, which means that the extracted grammar only allows substitution.", "cite_spans": [ { "start": 368, "end": 393, "text": "(Schabes and Waters, 1995", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar extraction", "sec_num": "2" }, { "text": "The resulting S-LTG will be processed by a two-phase stochastic parser along the lines of (Schabes and Joshi, 1991). In the first step, the input string is used to retrieve the relevant subset of elementary trees. Note that the yield of an elementary tree might consist of a sequence of lexical elements. Thus, in order to support efficient access, the deepest leftmost chain of lexical elements is used as the index to an elementary tree. Each such index is stored in a decision tree. The first step is then realized by means of a recursive tree traversal which identifies all (longest) matching substrings of the input string (see also sec. 4). Parsing of the lexically triggered trees is performed in the second step using an Earley-based strategy. In order to ease the implementation of different strategies, the parsing operations are expressed as inference rules and controlled by a chart-based agenda strategy along the lines of (Shieber et al., 1995). So far, we have implemented a version for running S-LTIG which is based on (Schabes and Waters, 1995). 
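As an illustration of the first phase, the following is a minimal sketch in which a token trie plays the role of the decision tree: elementary trees are indexed by their deepest leftmost chain of lexical elements, and retrieval collects the trees of the longest matching substring at each input position. All names and data structures here are assumptions for exposition, not the actual implementation.

```python
# Minimal sketch of the first parsing phase (illustrative only):
# a trie over lexical tokens stands in for the decision tree.

class TrieNode:
    def __init__(self):
        self.children = {}   # token -> TrieNode
        self.trees = []      # elementary trees indexed by the path so far

def index_tree(trie, lex_chain, tree):
    """Store an elementary tree under its deepest leftmost lexical chain."""
    node = trie
    for tok in lex_chain:
        node = node.children.setdefault(tok, TrieNode())
    node.trees.append(tree)

def retrieve(trie, tokens):
    """From each start position, follow the trie as far as possible and
    keep the trees found at the longest match (cf. sec. 4)."""
    selected = []
    for i in range(len(tokens)):
        node, longest = trie, None
        for tok in tokens[i:]:
            if tok not in node.children:
                break
            node = node.children[tok]
            if node.trees:
                longest = node.trees      # deeper node = longer match
        if longest:
            selected.extend(longest)
    return selected
```

The trees returned by `retrieve` are then handed to the second, Earley-based phase.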
The inference rules can be toggled through boolean parameters, which allows flexible hiding of auxiliary trees of different kinds.", "cite_spans": [ { "start": 88, "end": 113, "text": "(Schabes and Joshi, 1991)", "ref_id": "BIBREF6" }, { "start": 931, "end": 953, "text": "(Shieber et al., 1995)", "ref_id": "BIBREF10" }, { "start": 1031, "end": 1057, "text": "(Schabes and Waters, 1995)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Two-phase parsing of S-LTG", "sec_num": "3" }, { "text": "We will briefly report on first results of our method using the Negra treebank (4270 sentences) and sections 02, 03, and 04 of the Penn treebank (the first 4270 sentences). In both cases we extracted three different versions of S-LTG (note that no normalization of the treebanks has been performed): (a) lexical anchors are words, (b) lexical anchors are part-of-speech tags, and (c) all terminal elements are substituted by the constant :term, which means that lexical information is ignored. For each grammar we report the number of elementary trees and of left, right, and wrapping auxiliary trees. In a second experiment we evaluated the performance of the implemented S-LTIG parser using the grammar extracted from the Penn treebank with words as lexical anchors. We applied all sentences to the extracted grammar and computed the following average values for the first phase: sentence length: 27.54; number of matching substrings: 15.93; number of elementary trees: 492.77; number of different root labels: 33.16. The average run-time for each sentence (measured on a Sun Ultra 2, 200 MHz) was 0.0231 sec. In a next step we tested the run-time behaviour of the whole parser on the same input, however ignoring every parse which took longer than 30 sec. (about 20%). The average run-time for each sentence (exhaustive mode) was 6.18 sec. This is promising, since the parser is not yet optimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First experiments", "sec_num": "4" }, { "text": "We also tried first blind tests, but it turned out that the currently considered size of the treebanks is too small to get reliable results on unseen data (randomly selecting 10% of a treebank for testing and 90% for training). The reason is that if we consider only words as anchors, then we rarely get a complete parse result (around 10%). If we consider only POS, then the number of elementary trees retrieved through the first phase increases, causing the current parser prototype to be slow (due to the restricted annotation schema). 1 A better strategy seems to be the use of words only for lexical anchors and POS tags for all other terminal nodes, or to use only closed-class words as lexical anchors (assuming a head principle based on functional categories). In that case it would also be possible to adapt the strategies described in (Srinivas, 1997) wrt. 
supertagging in order to reduce the set of retrieved trees before the second phase is called.", "cite_spans": [ { "start": 835, "end": 851, "text": "(Srinivas, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "First experiments", "sec_num": "4" }, { "text": "Here we will discuss alternative approaches for converting treebanks into lexicalized tree grammars, namely the Data-Oriented Parsing (DOP) framework (Bod, 1995) and approaches based on applying Explanation-Based Learning (EBL) to NL parsing (e.g., (Samuelsson, 1994; Srinivas, 1997)).", "cite_spans": [ { "start": 150, "end": 161, "text": "(Bod, 1995)", "ref_id": null }, { "start": 248, "end": 266, "text": "(Samuelsson, 1994;", "ref_id": "BIBREF5" }, { "start": 267, "end": 282, "text": "Srinivas, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "The general strategy of our approach is similar to DOP, with the notable distinction that in our framework all trees must be lexically anchored and that, in addition to substitution, we also consider adjunction and restricted versions of it. In the EBL approach to NL parsing, the core idea is to use a competence grammar and a training corpus to construct a treebank. The treebank is then used to obtain a specialized grammar which can be processed much faster than the original one, at the price of a small loss in coverage. Samuelsson (1994) presents a method in which tree decomposition is completely automated using the information-theoretic concept of entropy, after the whole treebank has been indexed in an and-or tree. This implies that a new grammar has to be computed if the treebank changes (i.e., reduced incrementality) and that the generality of the induced subtrees depends much more on the size and variation of the treebank than in our approach. On the other hand, this approach seems to be more sensitive to the distribution of sequences of lexical anchors than our approach, so that we will explore its integration.", "cite_spans": [ { "start": 524, "end": 541, "text": "Samuelsson (1994)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "In (Srinivas, 1997) the application of EBL to parsing of LTAG is presented. The core idea is to generalize the derivation trees generated by an LTAG and to allow for a finite-state transducer representation of the set of generalized parses. The POS sequence of a training instance is used as the index to a generalized parse. Generalization wrt. recursion is achieved by introducing the Kleene star into the yield of an auxiliary tree that was part of the training example, which allows generalization over the length of the training sentences (see the illustration below). 
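As a rough, hypothetical illustration of this kind of generalization (our reading of (Srinivas, 1997), not its implementation), a POS index in which the material contributed by an auxiliary tree is starred matches training-like sentences of any length at that position:

```python
# Illustrative only: a POS-sequence index generalized with a Kleene star
# over the (hypothetical) recursive material "JJ" of an auxiliary tree.
import re

generalized_index = re.compile(r"DT (JJ )*NN VBZ")

print(bool(generalized_index.match("DT NN VBZ")))           # True
print(bool(generalized_index.match("DT JJ JJ JJ NN VBZ")))  # True
```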
This approach is an important candidate for improving our two-phase parser once we have acquired an S-LTAG.", "cite_spans": [ { "start": 3, "end": 19, "text": "(Srinivas, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "6 Future steps", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "The work described here is certainly in its early phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "The next steps (partly already started) will be: (1) measuring the coverage of an extracted S-LTG, (2) incremental grammar induction, and (3) the combination of a competence grammar and a treebank. I have already applied the same learning strategy to derivation trees obtained from a large HPSG-based English grammar in order to speed up parsing of HPSG (extending the work described in (Neumann, 1994)). Now I am exploring methods for merging such an \"HPSG-based\" S-LTG with one extracted from a treebank. The same will also be explored wrt. a competence-based LTAG, like the one which comes with the XTAG system (Doran et al., 1994).", "cite_spans": [ { "start": 381, "end": 396, "text": "(Neumann, 1994)", "ref_id": "BIBREF4" }, { "start": 609, "end": 629, "text": "(Doran et al., 1994)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "Acknowledgment", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "Applying the same test as described above on POS, the average number of elementary trees retrieved is 2292.86, i.e., the number seems to increase by a factor of 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research underlying this paper was supported by a research grant from the German Bundesministerium f\u00fcr Bildung, Wissenschaft, Forschung und Technologie (BMBF) to the DFKI