{
"paper_id": "W89-0211",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:45:10.889689Z"
},
"title": "Pro b a b il is t ic LR Parsing for Speech Recognition",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "E",
"middle": [
"N"
],
"last": "Wrigley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An LR parser for probabilistic context-free grammars is described. Each of the standard versions of parser generator (SLR, canonical and LA.LR) may be applied. A graph-structured stack permits action conflicts and allows the parser to be used with uncertain input, typical of speech recognition applications. The sentence uncertainty is measured using entropy and is significantly lower for the grammar than for a first-order Markov model. * 1 I n t e r n a t i o n a l Parsing Workshop '89",
"pdf_parse": {
"paper_id": "W89-0211",
"_pdf_hash": "",
"abstract": [
{
"text": "An LR parser for probabilistic context-free grammars is described. Each of the standard versions of parser generator (SLR, canonical and LA.LR) may be applied. A graph-structured stack permits action conflicts and allows the parser to be used with uncertain input, typical of speech recognition applications. The sentence uncertainty is measured using entropy and is significantly lower for the grammar than for a first-order Markov model. * 1 I n t e r n a t i o n a l Parsing Workshop '89",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The automatic recognition of continuous speech requires more than signal processing and pattern matching: a model of the language is needed to give structure to the utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "At sub-word level, hidden Markov models [1] have proved of great value in pattern matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The focus of this paper is modelling at the linguistic level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Markov models are adaptable and can handle potentially any sequence of words [2] .",
"cite_spans": [
{
"start": 77,
"end": 80,
"text": "[2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Being probabilistic they fit naturally into the context of uncertainty created by pattern matching. However, they do not capture the larger-scale structure of language and they do not provide an interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Grammar models capture more of the structure of language, but it can be difficult to recover from an early error in syntactic analysis and there is no watertight grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "A systematic treatment of uncertainty is needed in this context, for the following reasons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(1 ) some words and grammar rules are used more often than others;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(2) pattern matching (whether by dynamic time warping, hidden Markov modelling or multi-layer perceptron [3] ) returns a degree of fit for each word tested, rather than an absolute discrimination; a number of possible sentences therefore arise;",
"cite_spans": [
{
"start": 105,
"end": 108,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "(3 ) at the end of an utterance it is desirable that each of these sentences receive an overall measure of support, given all the data so that the information is used efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The type of language model which is the focus of this paper is the probabilistic context-free grammar (PCFG). This is an obvious enhancement of an ordinary CFG, the probability information initially intended to capture (1 ) above, but as will be seen this opens the way to satisfying (2 ) and (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "An LR parser [4, 5] is used with an adaptation [6 ] which enlarges the scope to include almost any practical CFG. This adaptation also allows the LR approach to be used with uncertain input [7] , and this approach enables a grammar model to interface with the speech recognition front end as naturally as does a Markov model",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "[4,",
"ref_id": null
},
{
"start": 17,
"end": 19,
"text": "5]",
"ref_id": "BIBREF4"
},
{
"start": 47,
"end": 51,
"text": "[6 ]",
"ref_id": null
},
{
"start": 190,
"end": 193,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "A \"probabilistic context-free grammar (PCFG)\" [8-10] is a 4-tuple <N,T,R,S> where N is a nonterminal vocabulary including the start symbol S, T is a terminal vocabulary, and R is a set of production-rules each of which is a pair of form <A a , p>, with AeN, a\u20ac(NuT)*, and p a probability. The probabilities associated with all the rules having a particular nonterminal on the LHS must sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
{
"text": "A probability is associated with each derivation by multiplying the probabilities of those rules used, in keeping with the context-freeness of the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
{
"text": "A very simple PCFG can be seen in figure 1: the symbols in uppercase are the nonterminals, those in lowercase are the terminals (actually preterminals) and A denotes the null string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-Free Grammars",
"sec_num": "1.2"
},
{
"text": "The LR parsing strategy can be applied to a PCFG if the rule-probabilities are driven down into the parsing action table by the parser generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "In addition, one of the objectives of using the parser in speech recognition is for providing a set of prior probabilities for possible next words at successive stages in the recognition of a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "The use of these prior probabilities will be described in section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "In what follows it will be assumed that the grammars are non-left-recursive, although null rules are allowed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR PARSING FOR PROBABILISTIC CFGs",
"sec_num": "2."
},
{
"text": "The first aspect of parser construction is the closure function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". 1 SLR Parser",
"sec_num": "2"
},
{
"text": "The item probability p can be thought of as a posterior probability of the item given the terminal string up to that point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-\u00a3, p>",
"sec_num": null
},
{
"text": "The computation of closure(I) requires that items <B -> \u25a0 7r\u00bb PbPt> be added to the set for each rule <B -\u00bb 7 r, pr> with B on the LHS, provided pBpr exceeds some small probability threshold e, where pB is the total probability of items with B appearing after the dot (in the closed set). is another set which has the same number of elements, an exact counterpart for each dotted item, and a probability for each item that differs from that for its counterpart in the new set by at most e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-\u00a3, p>",
"sec_num": null
},
{
"text": "where S' is an auxiliary start symbol, this process continues until no further sets are created. They can then be listed as I0 ,Ii,....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Starting from an initial state I0 consisting of the closure of {<S' -> -S, 1>>",
"sec_num": null
},
{
"text": "Ir a generates state m and a row in the parsing tables \"action\" and \"goto\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each state set",
"sec_num": null
},
{
"text": "The goto table simply contains the numbers of the destination states, as for the deterministic LR algorithm, but the action table also inherits probabilistic information from the grammar. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each state set",
"sec_num": null
},
{
"text": "For a probabilistic grammar, the probability p attached to the reduce item cannot be distributed over those entries because when the tables are compiled it is not determined which of those terminals can actually occur next in that context, so the probability p is attached to the whole range of entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The range of terminal symbols which can follow a B-reduction is given by the set FOLLOW(B) which can be obtained from the grammar by a standard algorithm [4],",
"sec_num": null
},
{
"text": "Completing the set of prior probabilities involves following up each reduce action using local copies of the stack until shift actions block all further progress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
{
"text": "The reduce action probability must be distributed over the shift terminals which emerge. This is done by allocating this probability to the entries in the action table row for the state reached after the reduction, in proportion to the probability of each entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
{
"text": "Some of these entries may be further reduce actions in which case a similar procedure must be followed, and so on. The closure operation is more complex than for the SLR parser, because of the propagation of lookaheads through the non-kernel items. The items to be added to a kernel set to close it take the form ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The probability associated with a shift action is the prior probability of that terminal occurring next at that point in the input string (assuming no conflicts).",
"sec_num": null
},
{
"text": "Merging the states of the canonical parser which differ only in lookaheads for each item causes the probability distribution of lookaheads to be lost, so for the LALR parser the LR(1) items take the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LALR Parser",
"sec_num": "2.3"
},
{
"text": "The preferred method for generating the states as described in [4] can be adapted to the probabilistic case.",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[4]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-(3, p, L> where LCT.",
"sec_num": null
},
{
"text": "Reduce entries in the parsing tables are then controlled by the lookahead sets, with the prior probabilities found as for the SLR parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<A -\u00bb a-(3, p, L> where LCT.",
"sec_num": null
},
{
"text": "An action conflict arises whenever the parser generator attempts to put two (or more) different entries into the same place in the action table, and there are two ways to deal with them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflicts and Interprecat Lon",
"sec_num": "2.4"
},
{
"text": "The first approach is to resolve each conflict [11] .",
"cite_spans": [
{
"start": 47,
"end": 51,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conflicts and Interprecat Lon",
"sec_num": "2.4"
},
{
"text": "The second approach is to split the stack and pursue all options, conceptually in parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "Toraita [6 ] has devised an efficient enhancement of the LR parser which operates in this way.",
"cite_spans": [
{
"start": 8,
"end": 12,
"text": "[6 ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "A graphstructured stack avoids duplication of effort and maintains (so far as possible) the speed and compactness of Che parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "With this approach the LR algorithm can handle almost any practical CFG, and is highly suited to probabilistic grammars, the main distinction being that a probability becomes attached to each branch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This is a dubious practice in the probabilistic case because there is no clear basis for resolving the probabilities of the actions in conflict.",
"sec_num": null
},
{
"text": "This is in keeping with the further adaptation of the algorithm to deal with uncertain input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The generation and action of the probabilistic LR parser can be supported by a Bayesian interpretation.",
"sec_num": null
},
{
"text": "The situation envisaged for applications of the probabilistic LR parser in speech recognition is depicted in figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "The parser forms part of a linguistic analyser whose purpose is to maintain and extend those partial sentences which are compatible with the input so far.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "With each partial sentence there is associated an overall probability and partial sentences with very low probability are suspended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "It is assumed that the pattern matcher returns likelihoods of words, which is true if hidden Markov models are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "Other ---------------------------------------P(a\" I (D) ",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 55,
"text": "---------------------------------------P(a\" I (D)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ")",
"eq_num": "(1)"
}
],
"section": "Prediction and Updating Algorithm",
"sec_num": "3.1"
},
{
"text": "This shows that the posterior probability of a\u2122 is distributed over the extended partial sentences in proportion to their root sentences s ^ contribution to the total prior probability of that word. Each path through the stack graph corresponds to one or more partial sentences and the probability P(r^|{D)m} has to be associated with each partial sentence r^.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P (a j | (D ) )",
"sec_num": null
},
{
"text": "Despite the pruning the number of partial sentences maintained by the parser tends to grow with the length of input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "It seems sensible to base the measure of complexity upon the probabilities of the sentences rather than their number, and the obvious measure is the entropy of the distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The discussion here will assume that the proliferation of sentences is caused by input uncertainty rather than by action conflicts. This is likely to be the dominant factor in speech applications. The upper bound is very pessimistic because it ignores the discriminative power of the pattern matcher. This could be measured in various ways but it is convenient to define a \"likelihood entropy\" as and the \"likelihood perplexity\" is _ jn P\u2122 \" exp(K^).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The maximum sentence entropy subject to a fixed likelihood entropy can be found by simulation. Sets of random likelihoods with a given entropy can be generated from sets of independent uniform random numbers by raising these to an appropriate power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "Permuting these so as to maximise the sentence entropy greatly reduces the number of sample runs needed to get a good result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "These likelihoods are then fed into the parser and the procedure repeated to simulate the recognition process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The sentence entropy is maximised over a number of such runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "The likelihoods which produce the upper bound line shown in figure 5(a) have a perplexity which is approximately constant at 6 .6 . This line is reproduced almost exactly by the above simulation procedure, using a fixed J3L \u00b0f 6 .6 with 30 sample runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 71,
"text": "figure 5(a)",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Entropy of the Partial Sentences",
"sec_num": "3.2"
},
{
"text": "to compute the average sentence entropy over the sample runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
{
"text": "For this it is preferable to average the entropy and then convert to a perplexity rather than average the measured perplexity values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
{
"text": "This process provides an indication of how the parser will perform in a typical case, assuming a fixed likelihood perplexity as a parameter (although this could be varied from stage to stage if required). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The simulation method is easily adapted",
"sec_num": null
},
{
"text": "Markov models have some advantages over grammar models for speech recognition in flexibility and ease of use but a major disadvantage is their limited memory of past events. For an extended utterance the number of possible sentences compatible with a Markov model may be much greater than for a grammar model, for the same data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
{
"text": "Demonstrating this in the present context requires the derivation of a first-order Markov model from a probabilistic grammar [13] .",
"cite_spans": [
{
"start": 125,
"end": 129,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
{
"text": "The uncertainty algorithm of section 3.1 will operate largely unchanged with the prior probabilities obtained from the transition probabilities rather than from the LR parser. The upper bound reaches 409 after 10 words, for a likelihood perplexity of approximately 6.3, reducing to 37 for the average (after 30 sample runs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
},
{
"text": "This falls with the likelihood perplexity but is higher than for the grammar model. The sentence perplexity for the grammar is twice that for the inferred Markov model after from six to nine words depending on This comparison is reproduced for other grammars considered. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Inferred Markov Model",
"sec_num": "3.3"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "S E Levinson",
"suffix": ""
},
{
"first": "M M",
"middle": [],
"last": "R Rabiner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sondhi",
"suffix": ""
}
],
"year": 1983,
"venue": "BSTJ",
"volume": "62",
"issue": "",
"pages": "35--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S E Levinson, L R Rabiner and M M Sondhi, \"An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition\", BSTJ vol 62, ppl035-1074, 1983.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Computational Analysis of English, a Corpus-Based Approach",
"authors": [
{
"first": "R",
"middle": [],
"last": "Garside",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sampson",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Garside, G Leech and G Sampson (eds), \"The Computational Analysis of English, a Corpus-Based Approach\", Longman, 1987.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech Pattern Discrimination and Multilayer Perceptrons",
"authors": [
{
"first": "H",
"middle": [],
"last": "Bourland",
"suffix": ""
},
{
"first": "C J",
"middle": [],
"last": "Wellekens",
"suffix": ""
}
],
"year": 1989,
"venue": "Computer Speech and Language",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H Bourland and C J Wellekens, \"Speech Pattern Discrimination and Multilayer Perceptrons\", Computer Speech and Language, vol 3, ppl-19, 1989.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Compilers: Principles, Techniques and Tools",
"authors": [
{
"first": "R",
"middle": [],
"last": "A V Aho",
"suffix": ""
},
{
"first": "J D",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ullman",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A V Aho, R Sethi and J D Ullman, \"Compilers: Principles, Techniques and Tools\", Addison-Wesley, 1985.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LR Parsing, Theory and Practice",
"authors": [
{
"first": "",
"middle": [],
"last": "N P Chapman",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N P Chapman, \"LR Parsing, Theory and Practice\", Cambridge University Press, 1987.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient Parsing for Natural Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Tomita, \"Efficient Parsing for Natural Language\", Kluwer Academic Publishers, 1986.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Linguistic Control in Speech Recognition",
"authors": [
{
"first": "J H",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "E N",
"middle": [],
"last": "Wrigley",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 7th FASE Symposium",
"volume": "",
"issue": "",
"pages": "545--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J H Wright and E N Wrigley, \"Linguistic Control in Speech Recognition\", Proceedings of the 7th FASE Symposium, pp545-552, 1988.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Probabilistic Grammars for Natural Languages",
"authors": [
{
"first": "P",
"middle": [],
"last": "Suppes",
"suffix": ""
}
],
"year": 1968,
"venue": "Synthese",
"volume": "22",
"issue": "",
"pages": "95--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Suppes, \"Probabilistic Grammars for Natural Languages\", Synthese, vol 22, pp95-116, 1968.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Formal Grammars in Linguistics and Psycholinguistics",
"authors": [
{
"first": "W J M",
"middle": [],
"last": "Levelt",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W J M Levelt, \"Formal Grammars in Linguistics and Psycholinguistics, volume 1\", Mouton, 1974.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Probabilistic Languages: A Review and Some Open Questions",
"authors": [
{
"first": "C S",
"middle": [],
"last": "Wetherall",
"suffix": ""
}
],
"year": 1980,
"venue": "Computing Surveys",
"volume": "12",
"issue": "",
"pages": "361--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C S Wetherall, \"Probabilistic Languages: A Review and Some Open Questions\", Computing Surveys vol 12, pp361-379, 1980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence Disambiguation by a Shift-Reduce Parsing Technique",
"authors": [
{
"first": "S M",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 1983,
"venue": "Proc. 21st Annual Meeting of Assoc, for Comp. Linguistics",
"volume": "",
"issue": "",
"pages": "3--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S M Shieber, \"Sentence Disambiguation by a Shift-Reduce Parsing Technique\", Proc. 21st Annual Meeting of Assoc, for Comp. Linguistics, ppll3-118, 1983.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Maximum Likelihood Approach to Continuous Speech Recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "L R Bahl",
"suffix": ""
},
{
"first": "R L",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1983,
"venue": "IEEE Trans, on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "5",
"pages": "79--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L R Bahl, J Jelinek and R L Mercer, \"A Maximum Likelihood Approach to Continuous Speech Recognition\", IEEE Trans, on Pattern Analysis and Machine Intelligence, vol PAMI-5, ppl79-190, 1983.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistic Modelling for Application in Speech Recognition",
"authors": [
{
"first": "J H",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 7th FASE Symposium",
"volume": "",
"issue": "",
"pages": "391--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J H Wright, \"Linguistic Modelling for Application in Speech Recognition\", Proceedings of the 7th FASE Symposium, pp391-398, 1988.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "New kernel sets are generated from a closed set of items by the goto function. If all the items with symbol Xe(NuT) after the dot in a set I are <Ak ak-X/9k , pk> for k-l,...,nx, with px -\u00a3 pk k-1 then the new kernel set corresponding to X is (<Ak -> akX-\u00a3k, pk/px> for k-1, . . . , nx} and goto(I,X) is the closure of this set. The set already exists if there",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "For each terminal symbol b, if there are items in Im such that the total Pb>f, and the shift state n is given by goto(Im,b) -In, then action[m,b] -<shift-to-n, pb> For each nonterminal symbol B, if Pb>\u00ab and goto(Im ,B)-In then goto[m,B] -n If <S' -> S \u2022 , p> G Im then action[m,$] -<accept, p> If <B -> 7 * , p> E Ir a where BhS' then action[m , FOLLOW(B) ] -<reduce-by B -\u00bb 7 , p> with shift-reduce optimisation [4,5] applied. The probability of each entry is underneath.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Canonical LR Parser For the canonical LR parser each item possesses a lookahead distribution: <A -> a* / ? , p, {P(at)",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "PbPt i (PB(aj))j = l.... i t i ) so that all the items with B after the dot are then <Ak -> a j j \u2022 , pk, { Pk(ai ) } 1=1, ..Pb iwhere P (^ka1,aJ) is the probability of aj occurring first in a string derived from \u00a3kai, which is easily evaluated. A justification of this will be published elsewhere. The lookahead distribution is copied to the new kernel set by the goto function. The first three steps of parsing table construction are essentially the same as for the SLR parser. In step (4), the item in Im takes the form <B -\u00bb 7 \u2022 , p, (P(a1) ) 1=1., T|> where B*S ' The total probability p has to be distributed over the possible next input symbols at, using the lookahead distribution: actionfm.ai] -<reduce-by B -\u00bb 7 , pP(at)> for all i such that pP(ai)>c. The prior probabilities during parsing action can now be read directly from the action table.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "methods of pattern matching return measures which it is assumed can be interpreted as likelihoods, perhaps via a transformation. let (s-1 ,2 ,...) represent partial sentences up to stage m (the stage denoted by a superscript). let D represent the data at stage m, and (D) represent all the data up to stage m.Each branch 1^ predicts words a\u2122 (perhaps via the LR parser) with probability P(aj|r^ ), so the total prior probability for each word aj",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "If P(rsj| (D) )<e then the branch is suspended. The next set of prior probabilities can now be derived and the cycle continues. These results are derived using the following independence assumptions: P(a?|a*,D\") -P(a^ | a\") and P(D\"|a\" ,Dk) -P(D' |a\") which decouple the data at different stages.",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "shows successive likelihoods, entered by hand for a (rather contrived) illustration using the grammar in figure 1.At the end the two viable sentences (with probabilities) are \"pn tv det n pron tv pn\" (0.897) \"det n pron tv pn tv pn\" (0.103) Notice that the string which maximises the likelihood at each stage, \"pn tv pron tv pron tv pn\" might correspond to a line of poetry but is not a sentence in the language.The graph-structured stack approach of Tomita[6 ] is used for nondeterministic input.",
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"text": "(in entropy) number of equally-likely sentences. Substituting for P( j | {D }\u2122) from equation (1contributed by the sentences at stage m -1 predicting word aj. The quantities / i j can be evaluated with the prior probabilities. It can be shown that the sentence entropy has an upper bound as a function of the likelihoods: w s < log Ijexp(*j) . \" e x p (A * ) withequality when P(D | a % ) < x ----------------------.upper bound for the grammar in figure 1, and it can be seen chat che perplexity is equivalent to 35 equally-1 ikely sentences after 10 words",
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"text": "a) shows how the average compares with the maximum for a fixed T L of 6 .6 , and how the sentence perplexity is reduced when the likelihoods are progressively more constrained -5.0, 3.0 and 2.0).",
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"num": null,
"text": "b) contains results corresponding to those in (a), for the Markov model inferred from the grammar in figure 1.",
"uris": null,
"type_str": "figure"
}
}
}
} |