{
"paper_id": "W89-0211",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:45:10.889689Z"
},
"title": "Pro b a b il is t ic LR Parsing for Speech Recognition",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "E",
"middle": [
"N"
],
"last": "Wrigley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An LR parser for probabilistic context-free grammars is described. Each of the standard versions of parser generator (SLR, canonical and LA.LR) may be applied. A graph-structured stack permits action conflicts and allows the parser to be used with uncertain input, typical of speech recognition applications. The sentence uncertainty is measured using entropy and is significantly lower for the grammar than for a first-order Markov model. * 1 I n t e r n a t i o n a l Parsing Workshop '89",
"pdf_parse": {
"paper_id": "W89-0211",
"_pdf_hash": "",
"abstract": [
{
"text": "An LR parser for probabilistic context-free grammars is described. Each of the standard versions of parser generator (SLR, canonical and LA.LR) may be applied. A graph-structured stack permits action conflicts and allows the parser to be used with uncertain input, typical of speech recognition applications. The sentence uncertainty is measured using entropy and is significantly lower for the grammar than for a first-order Markov model. * 1 I n t e r n a t i o n a l Parsing Workshop '89",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The automatic recognition of continuous speech requires more than signal processing and pattern matching: a model of the language is needed to give structure to the utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "At sub-word level, hidden Markov models [1] have proved of great value in pattern matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The focus of this paper is modelling at the linguistic level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Markov models are adaptable and can handle potentially any sequence of words [2] .",
"cite_spans": [
{
"start": 77,
"end": 80,
"text": "[2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Being probabilistic they fit naturally into the context of uncertainty created by pattern matching. However, they do not capture the larger-scale structure of language and they do not provide an interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Grammar models capture more of the structure of language, but it can be difficult to recover from an early error in syntactic analysis and there is no watertight grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "A systematic treatment of uncertainty is needed in this context, for the following reasons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(1 ) some words and grammar rules are used more often than others;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(2) pattern matching (whether by dynamic time warping, hidden Markov modelling or multi-layer perceptron [3] ) returns a degree of fit for each word tested, rather than an absolute discrimination; a number of possible sentences therefore arise;",
"cite_spans": [
{
"start": 105,
"end": 108,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(3 ) at the end of an utterance it is desirable that each of these sentences receive an overall measure of support, given all the data so that the information is used efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The type of language model which is the focus of this paper is the probabilistic context-free grammar (PCFG). This is an obvious enhancement of an ordinary CFG, the probability information initially intended to capture (1 ) above, but as will be seen this opens the way to satisfying (2 ) and (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "An LR parser [4, 5] is used with an adaptation [6 ] which enlarges the scope to include almost any practical CFG. This adaptation also allows the LR approach to be used with uncertain input [7] , and this approach enables a grammar model to interface with the speech recognition front end as naturally as does a Markov model",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "[4,",
"ref_id": null
},
{
"start": 17,
"end": 19,
"text": "5]",
"ref_id": "BIBREF4"
},
{
"start": 47,
"end": 51,
"text": "[6 ]",
"ref_id": null
},
{
"start": 190,
"end": 193,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "A \"probabilistic context-free grammar (PCFG)\" [8-10] is a 4-tuple <N,T,R,S> where N is a nonterminal vocabulary including the start symbol S, T is a terminal vocabulary, and R is a set of production-rules each of which is a pair of form <A a , p>, with AeN, a\u20ac(NuT)*, and p a probability. The probabilities associated with all the rules having a particular nonterminal on the LHS must sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
{
"text": "A probability is associated with each derivation by multiplying the probabilities of those rules used, in keeping with the context-freeness of the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
{
"text": "A very simple PCFG can be seen in figure 1: the symbols in uppercase are the nonterminals, those in lowercase are the terminals (actually preterminals) and A denotes the null string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
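As an illustration of the definition above (not the grammar of figure 1, which is not reproduced here), the following sketch represents a PCFG as a list of weighted rules, checks that the probabilities sharing an LHS sum to one, and computes a derivation probability as the product of the probabilities of the rules used. The grammar fragment, names and numbers are invented for the example.

```python
from collections import defaultdict
from math import prod

# Hypothetical PCFG fragment: rules as (lhs, rhs_tuple, probability).
# Nonterminals are uppercase, preterminals lowercase.
RULES = [
    ("S",  ("NP", "VP"), 1.0),
    ("NP", ("pn",),      0.4),
    ("NP", ("det", "n"), 0.6),
    ("VP", ("tv", "NP"), 0.7),
    ("VP", ("iv",),      0.3),
]

def check_normalisation(rules, tol=1e-9):
    """Probabilities of rules sharing an LHS must sum to one."""
    totals = defaultdict(float)
    for lhs, _, p in rules:
        totals[lhs] += p
    return all(abs(t - 1.0) < tol for t in totals.values())

def derivation_probability(rule_sequence):
    """Probability of a derivation = product of the probabilities of the rules used."""
    return prod(p for _, _, p in rule_sequence)

if __name__ == "__main__":
    assert check_normalisation(RULES)
    # Derivation of "pn tv det n": S -> NP VP, NP -> pn, VP -> tv NP, NP -> det n
    deriv = [RULES[0], RULES[1], RULES[3], RULES[2]]
    print(derivation_probability(deriv))   # 1.0 * 0.4 * 0.7 * 0.6 = 0.168
```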
{
"text": "The LR parsing strategy can be applied to a PCFG if the rule-probabilities are driven down into the parsing action table by the parser generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "In addition, one of the objectives of using the parser in speech recognition is for providing a set of prior probabilities for possible next words at successive stages in the recognition of a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "The use of these prior probabilities will be described in section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "In what follows it will be assumed that the grammars are non-left-recursive, although null rules are allowed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "The first aspect of parser construction is the closure function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". 1 SLR Parser",
"sec_num": "2"
},
{
"text": "The item probability p can be thought of as a posterior probability of the item given the terminal string up to that point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-\u00a3, p>",
"sec_num": null
},
{
"text": "The computation of closure(I) requires that items <B -> \u25a0 7r\u00bb PbPt> be added to the set for each rule <B -\u00bb 7 r, pr> with B on the LHS, provided pBpr exceeds some small probability threshold e, where pB is the total probability of items with B appearing after the dot (in the closed set). is another set which has the same number of elements, an exact counterpart for each dotted item, and a probability for each item that differs from that for its counterpart in the new set by at most e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-\u00a3, p>",
"sec_num": null
},
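A minimal sketch of this thresholded closure computation, under assumed representations: items are (lhs, rhs, dot, prob) tuples, rules maps each nonterminal to (rhs, rule probability) pairs, and EPSILON stands for the threshold ε. As in the paper, the grammar is assumed non-left-recursive, so the fixpoint iteration terminates.

```python
EPSILON = 1e-4

def closure(kernel, rules, nonterminals, epsilon=EPSILON):
    """Close a kernel item set, adding <B -> .gamma, pB*pr> for each rule of B,
    where pB is the total probability of items with B after the dot."""
    items = {(lhs, rhs, dot): p for lhs, rhs, dot, p in kernel}
    changed = True
    while changed:
        changed = False
        # Total probability of items with each nonterminal B after the dot.
        p_after_dot = {}
        for (lhs, rhs, dot), p in items.items():
            if dot < len(rhs) and rhs[dot] in nonterminals:
                b = rhs[dot]
                p_after_dot[b] = p_after_dot.get(b, 0.0) + p
        for b, p_b in p_after_dot.items():
            for gamma, p_r in rules.get(b, []):
                p_new = p_b * p_r
                key = (b, gamma, 0)
                if p_new > epsilon and abs(items.get(key, 0.0) - p_new) > 1e-12:
                    items[key] = p_new
                    changed = True
    return [(lhs, rhs, dot, p) for (lhs, rhs, dot), p in items.items()]

# Toy usage: close the initial kernel {<S' -> .S, 1.0>} with an invented grammar.
rules = {"S": [(("NP", "VP"), 1.0)],
         "NP": [(("pn",), 0.4), (("det", "n"), 0.6)],
         "VP": [(("tv", "NP"), 0.7), (("iv",), 0.3)]}
print(closure([("S'", ("S",), 0, 1.0)], rules, {"S", "NP", "VP"}))
```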
{
"text": "where S' is an auxiliary start symbol, this process continues until no further sets are created. They can then be listed as I0 ,Ii,....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Starting from an initial state I0 consisting of the closure of {<S' -> -S, 1>>",
"sec_num": null
},
{
"text": "Ir a generates state m and a row in the parsing tables \"action\" and \"goto\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each state set",
"sec_num": null
},
{
"text": "The goto table simply contains the numbers of the destination states, as for the deterministic LR algorithm, but the action table also inherits probabilistic information from the grammar. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each state set",
"sec_num": null
},
{
"text": "For a probabilistic grammar, the probability p attached to the reduce item cannot be distributed over those entries because when the tables are compiled it is not determined which of those terminals can actually occur next in that context, so the probability p is attached to the whole range of entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The range of terminal symbols which can follow a B-reduction is given by the set FOLLOW(B) which can be obtained from the grammar by a standard algorithm [4],",
"sec_num": null
},
{
"text": "Completing the set of prior probabilities involves following up each reduce action using local copies of the stack until shift actions block all further progress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
{
"text": "The reduce action probability must be distributed over the shift terminals which emerge. This is done by allocating this probability to the entries in the action table row for the state reached after the reduction, in proportion to the probability of each entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
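The redistribution step can be sketched abstractly as follows. The row representation and the helper row_after_reduce (which would consult a local copy of the stack to find the row reached after a reduction) are invented placeholders; the point is the proportional reallocation of reduce probabilities until only shift entries remain.

```python
def prior_word_probabilities(row, row_after_reduce):
    """Turn an action-table row into prior probabilities for the next terminal.

    `row` is a list of entries, each ("shift", terminal, prob) or
    ("reduce", rule, prob).  `row_after_reduce(rule)` returns the row reached
    after performing that reduction on a local copy of the stack.  Reduce
    probabilities are distributed over the destination row in proportion to
    the probabilities of its entries, repeated until shifts block progress.
    """
    priors = {}
    pending = list(row)
    while pending:
        kind, payload, prob = pending.pop()
        if kind == "shift":
            priors[payload] = priors.get(payload, 0.0) + prob
        else:  # reduce: reallocate prob over the next row, proportionally
            next_row = row_after_reduce(payload)
            total = sum(p for _, _, p in next_row)
            for k, pl, p in next_row:
                pending.append((k, pl, prob * p / total))
    return priors

# Toy illustration with hypothetical numbers: a row with one shift and one
# reduce whose destination row contains two shift entries (0.5 and 0.25).
dest = [("shift", "n", 0.5), ("shift", "pn", 0.25)]
row = [("shift", "det", 0.4), ("reduce", "NP -> pn", 0.6)]
print(prior_word_probabilities(row, lambda rule: dest))
# det: 0.4, n: 0.4, pn: 0.2
```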
{
"text": "Some of these entries may be further reduce actions in which case a similar procedure must be followed, and so on. The closure operation is more complex than for the SLR parser, because of the propagation of lookaheads through the non-kernel items. The items to be added to a kernel set to close it take the form ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
{
"text": "Merging the states of the canonical parser which differ only in lookaheads for each item causes the probability distribution of lookaheads to be lost, so for the LALR parser the LR(1) items take the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LALR Parser",
"sec_num": "2.3"
},
{
"text": "The preferred method for generating the states as described in [4] can be adapted to the probabilistic case.",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[4]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-(3, p, L> where LCT.",
"sec_num": null
},
{
"text": "Reduce entries in the parsing tables are then controlled by the lookahead sets, with the prior probabilities found as for the SLR parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-(3, p, L> where LCT.",
"sec_num": null
},
{
"text": "An action conflict arises whenever the parser generator attempts to put two (or more) different entries into the same place in the action table, and there are two ways to deal with them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflicts and Interprecat Lon",
"sec_num": "2.4"
},
{
"text": "The first approach is to resolve each conflict [11] .",
"cite_spans": [
{
"start": 47,
"end": 51,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conflicts and Interprecat Lon",
"sec_num": "2.4"
},
{
"text": "The second approach is to split the stack and pursue all options, conceptually in parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "Toraita [6 ] has devised an efficient enhancement of the LR parser which operates in this way.",
"cite_spans": [
{
"start": 8,
"end": 12,
"text": "[6 ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "A graphstructured stack avoids duplication of effort and maintains (so far as possible) the speed and compactness of Che parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "With this approach the LR algorithm can handle almost any practical CFG, and is highly suited to probabilistic grammars, the main distinction being that a probability becomes attached to each branch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
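A toy sketch of the data-structure idea only (not Tomita's algorithm): stack vertices can share parents, so analyses that agree on a prefix reuse the same structure, and in the probabilistic case each live branch carries its own probability. The class and field names are invented.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class StackNode:
    """One vertex of a graph-structured stack: an LR state plus links to
    parent vertices.  Branches that agree on a prefix share those vertices."""
    state: int
    parents: Tuple["StackNode", ...] = ()

@dataclass
class Branch:
    """One live analysis: a stack top plus the probability attached to it."""
    top: StackNode
    prob: float

# Two competing analyses that share the common prefix (states 0 -> 3):
root = StackNode(0)
shared = StackNode(3, (root,))
branch_a = Branch(StackNode(7, (shared,)), prob=0.6)   # e.g. one conflict option
branch_b = Branch(StackNode(9, (shared,)), prob=0.4)   # e.g. the other option
assert branch_a.top.parents == branch_b.top.parents    # structure is shared
```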
{
"text": "This is in keeping with the further adaptation of the algorithm to deal with uncertain input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The generation and action of the probabilistic LR parser can be supported by a Bayesian interpretation.",
"sec_num": null
},
{
"text": "The situation envisaged for applications of the probabilistic LR parser in speech recognition is depicted in figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "The parser forms part of a linguistic analyser whose purpose is to maintain and extend those partial sentences which are compatible with the input so far.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "With each partial sentence there is associated an overall probability and partial sentences with very low probability are suspended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "It is assumed that the pattern matcher returns likelihoods of words, which is true if hidden Markov models are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "Other ---------------------------------------P(a\" I (D) ",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 55,
"text": "---------------------------------------P(a\" I (D)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ")",
"eq_num": "(1)"
}
],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "This shows that the posterior probability of a\u2122 is distributed over the extended partial sentences in proportion to their root sentences s ^ contribution to the total prior probability of that word. Each path through the stack graph corresponds to one or more partial sentences and the probability P(r^|{D)m} has to be associated with each partial sentence r^.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P (a j | (D ) )",
"sec_num": null
},
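A minimal sketch of one stage of this prediction-and-updating cycle, with invented names and toy numbers: priors for each candidate word are accumulated from the branches predicting it, reweighted by the word likelihoods from the pattern matcher, and the resulting posterior is shared out among the extended partial sentences in proportion to each root sentence's contribution.

```python
def update_partial_sentences(branches, predictions, likelihoods, epsilon=1e-4):
    """One recognition stage.

    branches     : dict  partial_sentence -> probability given the data so far
    predictions  : dict  partial_sentence -> {word: P(word | sentence)}
    likelihoods  : dict  word -> likelihood from the pattern matcher
    Returns a dict extended_sentence -> updated probability, pruned below epsilon.
    """
    # Total prior probability of each candidate next word.
    prior = {}
    for s, p_s in branches.items():
        for w, p_w in predictions[s].items():
            prior[w] = prior.get(w, 0.0) + p_w * p_s

    # Posterior over words: proportional to likelihood times prior.
    post = {w: likelihoods.get(w, 0.0) * p for w, p in prior.items()}
    total = sum(post.values())

    # Distribute each word's posterior over the extended partial sentences in
    # proportion to each root sentence's contribution to that word's prior.
    extended = {}
    for s, p_s in branches.items():
        for w, p_w in predictions[s].items():
            p = (post[w] / total) * (p_w * p_s / prior[w]) if total else 0.0
            if p > epsilon:
                extended[s + (w,)] = p
    return extended

# Example with two branches predicting overlapping words:
branches = {("pn",): 0.7, ("det", "n"): 0.3}
predictions = {("pn",): {"tv": 0.8, "iv": 0.2}, ("det", "n"): {"tv": 1.0}}
likelihoods = {"tv": 0.9, "iv": 0.1}
print(update_partial_sentences(branches, predictions, likelihoods))
```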
{
"text": "Despite the pruning the number of partial sentences maintained by the parser tends to grow with the length of input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "It seems sensible to base the measure of complexity upon the probabilities of the sentences rather than their number, and the obvious measure is the entropy of the distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The discussion here will assume that the proliferation of sentences is caused by input uncertainty rather than by action conflicts. This is likely to be the dominant factor in speech applications. The upper bound is very pessimistic because it ignores the discriminative power of the pattern matcher. This could be measured in various ways but it is convenient to define a \"likelihood entropy\" as and the \"likelihood perplexity\" is _ jn P\u2122 \" exp(K^).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
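For concreteness, a small helper (illustrative only) for the two quantities used in this section: the entropy of a distribution and the corresponding perplexity exp(entropy), the equivalent number of equally-likely outcomes.

```python
from math import log, exp

def entropy(probs):
    """Entropy of a distribution (normalised internally), in nats."""
    total = sum(probs)
    return -sum((p / total) * log(p / total) for p in probs if p > 0.0)

def perplexity(probs):
    """exp(entropy): the equivalent number of equally-likely outcomes."""
    return exp(entropy(probs))

# Four equally-likely partial sentences have perplexity 4;
# a skewed distribution has a smaller effective number.
print(perplexity([0.25, 0.25, 0.25, 0.25]))   # 4.0
print(perplexity([0.9, 0.05, 0.03, 0.02]))    # about 1.5
```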
{
"text": "The maximum sentence entropy subject to a fixed likelihood entropy can be found by simulation. Sets of random likelihoods with a given entropy can be generated from sets of independent uniform random numbers by raising these to an appropriate power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "Permuting these so as to maximise the sentence entropy greatly reduces the number of sample runs needed to get a good result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
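One way to realise the sampling step just described (a sketch; the bisection on the exponent and all constants are invented, and the entropy-maximising permutation is omitted): raise independent uniform random numbers to a power and search for the power that gives the target likelihood entropy.

```python
import random
from math import log, exp

def entropy(values):
    total = sum(values)
    return -sum((v / total) * log(v / total) for v in values if v > 0.0)

def likelihoods_with_entropy(n, target_entropy, trials=60, seed=None):
    """Generate n random likelihoods whose normalised entropy is close to the
    target, by raising uniforms to a power and bisecting on that power."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    lo, hi = 0.0, 50.0                      # power 0 gives maximum entropy log(n)
    for _ in range(trials):
        k = 0.5 * (lo + hi)
        cand = [x ** k for x in u]
        if entropy(cand) > target_entropy:  # still too flat: raise the power
            lo = k
        else:
            hi = k
    return [x ** (0.5 * (lo + hi)) for x in u]

# Target a likelihood perplexity of 6.6 over a 20-word vocabulary:
sample = likelihoods_with_entropy(20, target_entropy=log(6.6), seed=1)
print(exp(entropy(sample)))   # close to 6.6
```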
{
"text": "These likelihoods are then fed into the parser and the procedure repeated to simulate the recognition process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The sentence entropy is maximised over a number of such runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The likelihoods which produce the upper bound line shown in figure 5(a) have a perplexity which is approximately constant at 6 .6 . This line is reproduced almost exactly by the above simulation procedure, using a fixed J3L \u00b0f 6 .6 with 30 sample runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 71,
"text": "figure 5(a)",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "to compute the average sentence entropy over the sample runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
{
"text": "For this it is preferable to average the entropy and then convert to a perplexity rather than average the measured perplexity values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
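The distinction matters because perplexity is the exponential of entropy: averaging entropies and then exponentiating gives a geometric mean of the per-run perplexities, which is not the same as their arithmetic mean. A two-run toy example (invented numbers):

```python
from math import exp, log

run_perplexities = [2.0, 8.0]
arithmetic_mean = sum(run_perplexities) / len(run_perplexities)                  # 5.0
average_entropy = sum(log(p) for p in run_perplexities) / len(run_perplexities)  # average the entropies
print(arithmetic_mean, exp(average_entropy))   # 5.0 versus 4.0
```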
{
"text": "This process provides an indication of how the parser will perform in a typical case, assuming a fixed likelihood perplexity as a parameter (although this could be varied from stage to stage if required). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
{
"text": "Markov models have some advantages over grammar models for speech recognition in flexibility and ease of use but a major disadvantage is their limited memory of past events. For an extended utterance the number of possible sentences compatible with a Markov model may be much greater than for a grammar model, for the same data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
{
"text": "Demonstrating this in the present context requires the derivation of a first-order Markov model from a probabilistic grammar [13] .",
"cite_spans": [
{
"start": 125,
"end": 129,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
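The derivation used in [13] is not reproduced here. Purely as an illustrative stand-in, the sketch below estimates first-order (bigram) transition probabilities by sampling sentences from a toy PCFG, which yields a Markov model matching the grammar's surface statistics; the grammar fragment and all names are invented.

```python
import random
from collections import defaultdict

# Tiny invented PCFG, in the style of figure 1; LHS -> [(rhs, probability)].
RULES = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("pn",), 0.4), (("det", "n"), 0.6)],
    "VP": [(("tv", "NP"), 0.7), (("iv",), 0.3)],
}

def sample_sentence(symbol="S", rng=random):
    if symbol not in RULES:                       # terminal / preterminal
        return [symbol]
    rhss, probs = zip(*RULES[symbol])
    rhs = rng.choices(rhss, weights=probs)[0]
    return [w for s in rhs for w in sample_sentence(s, rng)]

def inferred_bigram_model(n_samples=10000, seed=0):
    """Estimate first-order transition probabilities P(next | current) from samples."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        words = ["<s>"] + sample_sentence(rng=rng) + ["</s>"]
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return {w: {v: c / sum(nxt.values()) for v, c in nxt.items()}
            for w, nxt in counts.items()}

print(inferred_bigram_model()["tv"])   # distribution over the words that can follow "tv"
```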
{
"text": "The uncertainty algorithm of section 3.1 will operate largely unchanged with the prior probabilities obtained from the transition probabilities rather than from the LR parser. The upper bound reaches 409 after 10 words, for a likelihood perplexity of approximately 6.3, reducing to 37 for the average (after 30 sample runs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
{
"text": "This falls with the likelihood perplexity but is higher than for the grammar model. The sentence perplexity for the grammar is twice that for the inferred Markov model after from six to nine words depending on This comparison is reproduced for other grammars considered. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "S E Levinson",
"suffix": ""
},
{
"first": "M M",
"middle": [],
"last": "R Rabiner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sondhi",
"suffix": ""
}
],
"year": 1983,
"venue": "BSTJ",
"volume": "62",
"issue": "",
"pages": "35--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S E Levinson, L R Rabiner and M M Sondhi, \"An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition\", BSTJ vol 62, ppl035-1074, 1983.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Computational Analysis of English, a Corpus-Based Approach",
"authors": [
{
"first": "R",
"middle": [],
"last": "Garside",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sampson",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Garside, G Leech and G Sampson (eds), \"The Computational Analysis of English, a Corpus-Based Approach\", Longman, 1987.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech Pattern Discrimination and Multilayer Perceptrons",
"authors": [
{
"first": "H",
"middle": [],
"last": "Bourland",
"suffix": ""
},
{
"first": "C J",
"middle": [],
"last": "Wellekens",
"suffix": ""
}
],
"year": 1989,
"venue": "Computer Speech and Language",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H Bourland and C J Wellekens, \"Speech Pattern Discrimination and Multilayer Perceptrons\", Computer Speech and Language, vol 3, ppl-19, 1989.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Compilers: Principles, Techniques and Tools",
"authors": [
{
"first": "R",
"middle": [],
"last": "A V Aho",
"suffix": ""
},
{
"first": "J D",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ullman",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A V Aho, R Sethi and J D Ullman, \"Compilers: Principles, Techniques and Tools\", Addison-Wesley, 1985.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LR Parsing, Theory and Practice",
"authors": [
{
"first": "",
"middle": [],
"last": "N P Chapman",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N P Chapman, \"LR Parsing, Theory and Practice\", Cambridge University Press, 1987.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient Parsing for Natural Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Tomita, \"Efficient Parsing for Natural Language\", Kluwer Academic Publishers, 1986.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Linguistic Control in Speech Recognition",
"authors": [
{
"first": "J H",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "E N",
"middle": [],
"last": "Wrigley",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 7th FASE Symposium",
"volume": "",
"issue": "",
"pages": "545--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J H Wright and E N Wrigley, \"Linguistic Control in Speech Recognition\", Proceedings of the 7th FASE Symposium, pp545-552, 1988.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Probabilistic Grammars for Natural Languages",
"authors": [
{
"first": "P",
"middle": [],
"last": "Suppes",
"suffix": ""
}
],
"year": 1968,
"venue": "Synthese",
"volume": "22",
"issue": "",
"pages": "95--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Suppes, \"Probabilistic Grammars for Natural Languages\", Synthese, vol 22, pp95-116, 1968.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Formal Grammars in Linguistics and Psycholinguistics",
"authors": [
{
"first": "W J M",
"middle": [],
"last": "Levelt",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W J M Levelt, \"Formal Grammars in Linguistics and Psycholinguistics, volume 1\", Mouton, 1974.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Probabilistic Languages: A Review and Some Open Questions",
"authors": [
{
"first": "C S",
"middle": [],
"last": "Wetherall",
"suffix": ""
}
],
"year": 1980,
"venue": "Computing Surveys",
"volume": "12",
"issue": "",
"pages": "361--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C S Wetherall, \"Probabilistic Languages: A Review and Some Open Questions\", Computing Surveys vol 12, pp361-379, 1980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence Disambiguation by a Shift-Reduce Parsing Technique",
"authors": [
{
"first": "S M",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 1983,
"venue": "Proc. 21st Annual Meeting of Assoc, for Comp. Linguistics",
"volume": "",
"issue": "",
"pages": "3--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S M Shieber, \"Sentence Disambiguation by a Shift-Reduce Parsing Technique\", Proc. 21st Annual Meeting of Assoc, for Comp. Linguistics, ppll3-118, 1983.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Maximum Likelihood Approach to Continuous Speech Recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "L R Bahl",
"suffix": ""
},
{
"first": "R L",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1983,
"venue": "IEEE Trans, on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "5",
"pages": "79--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L R Bahl, J Jelinek and R L Mercer, \"A Maximum Likelihood Approach to Continuous Speech Recognition\", IEEE Trans, on Pattern Analysis and Machine Intelligence, vol PAMI-5, ppl79-190, 1983.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistic Modelling for Application in Speech Recognition",
"authors": [
{
"first": "J H",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 7th FASE Symposium",
"volume": "",
"issue": "",
"pages": "391--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J H Wright, \"Linguistic Modelling for Application in Speech Recognition\", Proceedings of the 7th FASE Symposium, pp391-398, 1988.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "New kernel sets are generated from a closed set of items by the goto function. If all the items with symbol Xe(NuT) after the dot in a set I are <Ak ak-X/9k , pk> for k-l,...,nx, with px -\u00a3 pk k-1 then the new kernel set corresponding to X is (<Ak -> akX-\u00a3k, pk/px> for k-1, . . . , nx} and goto(I,X) is the closure of this set. The set already exists if there",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "For each terminal symbol b, if there are items in Im such that the total Pb>f, and the shift state n is given by goto(Im,b) -In, then action[m,b] -<shift-to-n, pb> For each nonterminal symbol B, if Pb>\u00ab and goto(Im ,B)-In then goto[m,B] -n If <S' -> S \u2022 , p> G Im then action[m,$] -<accept, p> If <B -> 7 * , p> E Ir a where BhS' then action[m , FOLLOW(B) ] -<reduce-by B -\u00bb 7 , p> with shift-reduce optimisation [4,5] applied. The probability of each entry is underneath.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Canonical LR Parser For the canonical LR parser each item possesses a lookahead distribution: <A -> a* / ? , p, {P(at)",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "PbPt i (PB(aj))j = l.... i t i ) so that all the items with B after the dot are then <Ak -> a j j \u2022 , pk, { Pk(ai ) } 1=1, ..Pb iwhere P (^ka1,aJ) is the probability of aj occurring first in a string derived from \u00a3kai, which is easily evaluated. A justification of this will be published elsewhere. The lookahead distribution is copied to the new kernel set by the goto function. The first three steps of parsing table construction are essentially the same as for the SLR parser. In step (4), the item in Im takes the form <B -\u00bb 7 \u2022 , p, (P(a1) ) 1=1., T|> where B*S ' The total probability p has to be distributed over the possible next input symbols at, using the lookahead distribution: actionfm.ai] -<reduce-by B -\u00bb 7 , pP(at)> for all i such that pP(ai)>c. The prior probabilities during parsing action can now be read directly from the action table.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "methods of pattern matching return measures which it is assumed can be interpreted as likelihoods, perhaps via a transformation. let (s-1 ,2 ,...) represent partial sentences up to stage m (the stage denoted by a superscript). let D represent the data at stage m, and (D) represent all the data up to stage m.Each branch 1^ predicts words a\u2122 (perhaps via the LR parser) with probability P(aj|r^ ), so the total prior probability for each word aj",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "If P(rsj| (D) )<e then the branch is suspended. The next set of prior probabilities can now be derived and the cycle continues. These results are derived using the following independence assumptions: P(a?|a*,D\") -P(a^ | a\") and P(D\"|a\" ,Dk) -P(D' |a\") which decouple the data at different stages.",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "shows successive likelihoods, entered by hand for a (rather contrived) illustration using the grammar in figure 1.At the end the two viable sentences (with probabilities) are \"pn tv det n pron tv pn\" (0.897) \"det n pron tv pn tv pn\" (0.103) Notice that the string which maximises the likelihood at each stage, \"pn tv pron tv pron tv pn\" might correspond to a line of poetry but is not a sentence in the language.The graph-structured stack approach of Tomita[6 ] is used for nondeterministic input.",
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"text": "(in entropy) number of equally-likely sentences. Substituting for P( j | {D }\u2122) from equation (1contributed by the sentences at stage m -1 predicting word aj. The quantities / i j can be evaluated with the prior probabilities. It can be shown that the sentence entropy has an upper bound as a function of the likelihoods: w s < log Ijexp(*j) . \" e x p (A * ) withequality when P(D | a % ) < x ----------------------.upper bound for the grammar in figure 1, and it can be seen chat che perplexity is equivalent to 35 equally-1 ikely sentences after 10 words",
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"text": "a) shows how the average compares with the maximum for a fixed T L of 6 .6 , and how the sentence perplexity is reduced when the likelihoods are progressively more constrained -5.0, 3.0 and 2.0).",
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"num": null,
"text": "b) contains results corresponding to those in (a), for the Markov model inferred from the grammar in figure 1.",
"uris": null,
"type_str": "figure"
}
}
}
}