{ "paper_id": "W02-0102", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:13:32.393577Z" }, "title": "An Interactive Spreadsheet for Teaching the Forward-Backward Algorithm", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218-2691", "region": "MD", "country": "USA" } }, "email": "jason@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper offers a detailed lesson plan on the forwardbackward algorithm. The lesson is taught from a live, commented spreadsheet that implements the algorithm and graphs its behavior on a whimsical toy example. By experimenting with different inputs, one can help students develop intuitions about HMMs in particular and Expectation Maximization in general. The spreadsheet and a coordinated follow-up assignment are available.", "pdf_parse": { "paper_id": "W02-0102", "_pdf_hash": "", "abstract": [ { "text": "This paper offers a detailed lesson plan on the forwardbackward algorithm. The lesson is taught from a live, commented spreadsheet that implements the algorithm and graphs its behavior on a whimsical toy example. By experimenting with different inputs, one can help students develop intuitions about HMMs in particular and Expectation Maximization in general. The spreadsheet and a coordinated follow-up assignment are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Algorithm animations are a wonderful teaching tool. They are concrete, visual, playful, sometimes interactive, and remain available to students after the lecture ends. Unfortunately, they have mainly been limited to algorithms that manipulate easy-to-draw data structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "Numerical algorithms can be \"animated\" by spreadsheets. Although current spreadsheets do not provide video, they can show \"all at once\" how a computation unfolds over time, displaying intermediate results in successive rows of a table and on graphs. Like the best algorithm animations, they let the user manipulate the input data to see what changes. The user can instantly and graphically see the effect on the whole course of the computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "Spreadsheets are also transparent. In Figure 1 , the user has double-clicked on a cell to reveal its underlying formula. The other cells that it depends on are automatically highlighted, with colors keyed to the references in the formula. There is no programming language to learn: spreadsheet programs are aimed at the mass market, with an intuitive design and plenty of online help, and today's undergraduates already understand their basic operation. An adventurous student can even experiment with modifying the formulas, or can instrument the spreadsheet with additional graphs.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 46, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "Finally, modern spreadsheet programs such as Microsoft Excel support visually attractive layouts with integrated comments, color, fonts, shading, and drawings. 
This makes them effective for both classroom presentation and individual study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "This paper describes a lesson plan that was centered around a live spreadsheet, as well as a subsequent programming assignment in which the spreadsheet served as a debugging aid. The materials are available for use by others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "Students were especially engaged in class, apparently for the following reasons:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "\u2022 Striking results (\"It learned it!\") that could be immediately apprehended from the graphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "\u2022 Live interactive demo. The students were eager to guess what the algorithm would do on particular inputs and test their guesses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "\u2022 A whimsical toy example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "\u2022 The departure from the usual class routine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "\u2022 Novel use of spreadsheets. Several students who thought of them as mere bookkeeping tools were awed by this, with one calling it \"the coolest-ass spreadsheet ever.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "2 How to Teach from a Spreadsheet?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "It is possible to teach from a live spreadsheet by using an RGB projector. The spreadsheet's zoom feature can compensate for small type, although undergraduate eyes prove sharp enough that it may be unnecessary. (Invite the students to sit near the front.) Of course, interesting spreadsheets are much too big to fit on the screen, even with a \"View / Full Screen\" command. But scrolling is easy to follow if it is not too fast and if the class has previously been given a tour of the overall spreadsheet layout (by scrolling and/or zooming out). Split-screen features such as hide rows/columns, split panes, and freeze panes can be moderately helpful; so can commands to jump around the spreadsheet, or switch between two windows that display different areas. It is a good idea to memorize key sequences for such commands rather than struggle with mouse menus or dialog boxes during class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Teach from a Spreadsheet?", "sec_num": "1" }, { "text": "Among topics in natural language processing, the forward-backward or Baum-Welch algorithm (Baum, 1972) is particularly difficult to teach. 
The algorithm estimates the parameters of a Hidden Markov Model (HMM) by Expectation-Maximization (EM), using dynamic programming to carry out the expectation steps efficiently.", "cite_spans": [ { "start": 90, "end": 102, "text": "(Baum, 1972)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "HMMs have long been central in speech recognition (Rabiner, 1989) . Their application to partof-speech tagging (Church, 1988; DeRose, 1988) kicked off the era of statistical NLP, and they have found additional NLP applications to phrase chunking, text segmentation, word-sense disambiguation, and information extraction.", "cite_spans": [ { "start": 50, "end": 65, "text": "(Rabiner, 1989)", "ref_id": "BIBREF7" }, { "start": 111, "end": 125, "text": "(Church, 1988;", "ref_id": "BIBREF1" }, { "start": 126, "end": 139, "text": "DeRose, 1988)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "The algorithm is also important to teach for pedagogical reasons, as the entry point to a family of EM algorithms for unsupervised parameter estimation. Indeed, it is an instructive special case of (1) the inside-outside algorithm for estimation of probabilistic context-free grammars; (2) belief propagation for training singly-connected Bayesian networks and junction trees (Pearl, 1988; Lauritzen, 1995) ; (3) algorithms for learning alignment models such as weighted edit distance; (4) general finitestate parameter estimation (Eisner, 2002) .", "cite_spans": [ { "start": 376, "end": 389, "text": "(Pearl, 1988;", "ref_id": "BIBREF6" }, { "start": 390, "end": 406, "text": "Lauritzen, 1995)", "ref_id": "BIBREF4" }, { "start": 531, "end": 545, "text": "(Eisner, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "Before studying the algorithm, students should first have worked with some if not all of the key ideas in simpler settings. Markov models can be introduced through n-gram models or probabilistic finite-state automata. EM can be introduced through simpler tasks such as soft clustering. Global optimization through dynamic programming can be introduced in other contexts such as probabilistic CKY parsing or edit distance. Finally, the students should understand supervised training and Viterbi decoding of HMMs, for example in the context of part-of-speech tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "Even with such preparation, however, the forward-backward algorithm can be difficult for beginning students to apprehend. 
It requires them to think about all of the above ideas at once, in combination, and to relate them to the nitty-gritty of the algorithm, namely", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 the two-pass computation of mysterious \u03b1 and \u03b2 probabilities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 the conversion of these prior path probabilities to posterior expectations of transition and emission counts Just as important, students must develop an understanding of the algorithm's qualitative properties, which it shares with other EM algorithms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 performs unsupervised learning (what is this and why is it possible?)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 alternates expectation and maximization steps", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 maximizes p(observed training data) (i.e., total probability of all hidden paths that generate those data)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 finds only a local maximum, so is sensitive to initial conditions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 cannot escape zeroes or symmetries, so they should be avoided in initial conditions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 uses the states as it sees fit, ignoring the suggestive names that we may give them (e.g., part of speech tags)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "\u2022 may overfit the training data unless smoothing is used", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "The spreadsheet lesson was deployed in two 50minute lectures at Johns Hopkins University, in an introductory NLP course aimed at upper-level undergraduates and first-year graduate students. A single lecture might have sufficed for a less interactive presentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "The lesson appeared in week 10 of 13, by which time the students had already been exposed to most of the preparatory topics mentioned above, including Viterbi decoding of a part-of-speech trigram tagging model. However, the present lesson was their first exposure to EM or indeed to any kind of unsupervised learning. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Subject Matter", "sec_num": "3" }, { "text": "[While the spreadsheet could be used in many ways, the next several sections offer one detailed lesson plan. Questions for the class are included; subsequent points often depend on the answers, which are concealed here in footnotes. Some fragments of the full spreadsheet are shown in the figures.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Ice Cream Climatology Data", "sec_num": "4" }, { "text": "The situation: You are climatologists in the year 2799, studying the history of global warming. 
You can't find any records of Baltimore weather, but you do find my diary, in which I assiduously recorded how much ice cream I ate each day (see Figure 3) .", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 251, "text": "Figure 3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The Ice Cream Climatology Data", "sec_num": "4" }, { "text": "Let's simplify and suppose there are only two kinds of days: C (cold) and H (hot). And let's suppose you have guessed some probabilities as shown on the spreadsheet (Figure 2 ).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 174, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "What can you figure out from this about the weather that summer?", "sec_num": null }, { "text": "Thus, you guess that on cold days, I usually ate only 1 ice cream cone: my probabilities of 1, 2, or 3 cones were 70%, 20% and 10%. That adds up to 100%. On hot days, the probabilities were reversed-I usually ate 3 ice creams. So other things equal, if you know I ate 3 ice creams, the odds are 7 to 1 that it was a hot day, but if I ate 2 ice creams, the odds are 1 to 1 (no information).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What can you figure out from this about the weather that summer?", "sec_num": null }, { "text": "You also guess (still Figure 2 ) that if today is cold, tomorrow is probably cold, and if today is hot, tomorrow is probably hot. (Q: How does this setup resemble part-of-speech tagging? 1 )", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "What can you figure out from this about the weather that summer?", "sec_num": null }, { "text": "We also have some boundary conditions. I only kept this diary for a while. If I was more likely to start or stop the diary on a hot day, then that is useful information and it should go in the table. (Q: Is there an analogy in part-of-speech tagging? 2 ) For simplicity, let's guess that I was equally likely to start or stop on a hot or cold day. So the first day I started writing was equally likely (50%) to be hot or cold, and any given day had the same chance (10%) of being the last recorded day, e.g., because on any day I wrote (regardless of temperature), I had a 10% chance of losing my diary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What can you figure out from this about the weather that summer?", "sec_num": null }, { "text": "[The notation p i (H) in this paper stands for the probability of H on day i, given all the observed ice cream data. On the spreadsheet itself the subscript i is clear from context and is dropped; thus in Figure 3 , p(H) denotes the conditional probability p i (H), not a prior. The spreadsheet likewise omits subscripts on \u03b1 i (H) and \u03b2 i (H).]", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 213, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "Scroll down the spreadsheet and look at the lower line of Figure 3 , which shows a weather reconstruction under the above assumptions. It estimates the relative hot and cold probabilities for each day. 
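[A minimal sketch, for readers who prefer to follow along in code rather than in Excel, of the guessed parameter tables of Figure 2; the Python names here are mine, not the spreadsheet's, and only the first three diary entries are shown.]

# Guessed parameters from Figure 2 (a sketch; the variable names are illustrative only).
# Each day's arc will carry a transition probability times an emission probability.
start = {'C': 0.5, 'H': 0.5}                 # p(first recorded day is C or H)
trans = {'C': {'C': 0.8, 'H': 0.1},          # p(tomorrow | today); the missing
         'H': {'C': 0.1, 'H': 0.8}}          #   0.1 of mass goes to Stop, below
stop  = {'C': 0.1, 'H': 0.1}                 # p(diary ends after a C or H day)
emit  = {'C': {1: 0.7, 2: 0.2, 3: 0.1},      # p(cones | cold day)
         'H': {1: 0.1, 2: 0.2, 3: 0.7}}      # p(cones | hot day)
ice_cream = [2, 3, 3]   # the diary begins 2, 3, 3; the full 33-day record is on the spreadsheet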
Apparently, the summer was mostly hot with a cold spell in the middle; we are unsure about the weather on a few transitional days.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "We will now see how the reconstruction was accomplished. Look at the trellis diagram on the spreadsheet (Figure 4 ). Consistent with visual intuition, arcs (lines) represent days and states (points) represent the intervening midnights. A cold day is represented by an arc that ends in a C state. 3 (So 1 A: This is a bigram tag generation model with tags C and H. Each tag independently generates a word (1, 2, or 3); the word choice is conditioned on the tag.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 113, "text": "(Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "2 A: A tagger should know that sentences tend to start with determiners and end with periods. A tagging that ends with a determiner should be penalized because p(Stop | Det) \u2248 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "3 These conventions are a compromise between a traditional view of HMMs and a finite-state view used elsewhere in the course. (The two views correspond to Moore vs. Mealy machines.) In the traditional view, states would represent days and each arc effectively inherits the C or H label of its terminal state.) Q: According to the trellis, what is the a priori probability that the first three days of summer are H,H,C and I eat 2,3,3 cones respectively (as I did)? 4 Q: Of the 8 ways to account for the 2,3,3 cones, which is most probable? 5 Q: Why do all 8 paths have low probabilities? 6 Recall that the Viterbi algorithm computes, at each state of the trellis, the maximum probability of any path from Start. Similarly, define \u03b1 at a state to be the total probability of all paths to that state from Start. Q: How would you compute it by dynamic programming? 7 Q: Symmetrically, how would you compute \u03b2 at a state, which is defined to be the total probability of all paths to Stop?", "cite_spans": [ { "start": 588, "end": 589, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "The \u03b1 and \u03b2 values are computed on the spreadsheet ( Figure 1 ). Q: Are there any patterns in the values? 8 Now for some important questions. Q: What is the total probability of all paths from Start to would bear emission probabilities such as p(3 | H). In Figure 4 , as in finite-state machines, this role is played by the arcs (which also carry transition probabilities such as p(H | C)); this allows \u03b1 and \u03b2 to be described more simply as sums of path probabilities. 
But we persist in a traditional labeling of the states as H or C so that the \u03b1\u03b2 notation can refer to them.", "cite_spans": [ { "start": 106, "end": 107, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 53, "end": 61, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 257, "end": 265, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "4 A: Consult the path Start \u2192 H \u2192 H \u2192 C, which has probability (0.5\u20220.2)\u2022(0.8\u20220.7)\u2022(0.1\u20220.1) = 0.1\u20220.56\u20220.01 = 0.00056. Note that the trellis is specialized to these data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "5 A: H,H,H gives probability 0.1 \u2022 0.56 \u2022 0.56 = 0.03136. (Starting with C would be as cheap as starting with H, but then getting from C to H would be expensive.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "6 A: It was a priori unlikely that I'd eat exactly this sequence of ice creams. (A priori there were many more than 8 possible paths, but this trellis only shows the paths generating the actual data 2,3,3.) We'll be interested in the relative probabilities of these 8 paths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "7 A: In terms of \u03b1 at the predecessor states: just replace \"max\" with \"+\" in the Viterbi algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "8 A: \u03b1 probabilities decrease going down the column, and \u03b2 probabilities decrease going up, as they become responsible for generating more and more ice cream data. Stop in which day 3 is hot? 9 It is shown in column H of Figure 1 . Q: Why is column I of Figure 1 constant at 9.13e-19 across rows? 10 Q: What does that column tell us about ice cream or weather? 11 Now the class may be able to see how to complete the reconstruction:", "cite_spans": [ { "start": 361, "end": 363, "text": "11", "ref_id": null } ], "ref_spans": [ { "start": 221, "end": 229, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 254, "end": 262, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "p(day 3 hot | 2, 3, 3, . . .) = p(day 3 hot,2,3,3,...) p(2,3,3,...) = \u03b1 3 (H)\u2022\u03b2 3 (H) \u03b1 3 (C)\u2022\u03b2 3 (C)+\u03b1 3 (H)\u2022\u03b2 3 (H) = 9.0e-19 9.8e-21+9.0e-19", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "which is 0.989, as shown in cell K29 of Figure 5 . ", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "The Trellis and \u03b1\u03b2 Decoding", "sec_num": "5" }, { "text": "Notice that the lower line in Figure 3 has the same general shape as the upper line (the original data), but is smoother. For example, some 2-ice-cream days were tagged as probably cold and some as probably hot. Q: Why? 12 Q: Since the first day has 2 ice creams and doesn't follow a hot day, why was it tagged as hot? 13 Q: Why was day 11, which has only 1 ice cream, tagged as hot? 
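[As an aside for readers following along in code: the α, β, and p(H) columns of Figures 1 and 5 might be computed as below. This is only a sketch under the arc convention of footnote 3 (each day's arc carries a transition probability times an emission probability), reusing the hypothetical tables defined earlier; it is not a transcription of the spreadsheet's formulas.]

def forward_backward(obs, start, trans, stop, emit):
    """Posterior decoding sketch: alpha[i][s] is the total probability of all
    paths that generate obs[0..i] and then sit in state s; beta[i][s] is the
    total probability of generating obs[i+1..] and then Stop, starting from s."""
    n, states = len(obs), ('C', 'H')
    alpha = [{s: 0.0 for s in states} for _ in range(n)]
    beta = [{s: 0.0 for s in states} for _ in range(n)]
    for s in states:                                   # day 1
        alpha[0][s] = start[s] * emit[s][obs[0]]
    for i in range(1, n):                              # forward pass
        for s in states:
            alpha[i][s] = sum(alpha[i-1][r] * trans[r][s] for r in states) * emit[s][obs[i]]
    for s in states:                                   # last recorded day
        beta[n-1][s] = stop[s]
    for i in range(n - 2, -1, -1):                     # backward pass
        for s in states:
            beta[i][s] = sum(trans[s][r] * emit[r][obs[i+1]] * beta[i+1][r] for r in states)
    Z = sum(alpha[0][s] * beta[0][s] for s in states)  # p(obs); the same for every i
    posterior = [{s: alpha[i][s] * beta[i][s] / Z for s in states} for i in range(n)]
    return alpha, beta, posterior

[If these conventions do match the spreadsheet's, then posterior[2]['H'] on the full diary should reproduce the 0.989 of cell K29.]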
14 We can experiment with the spreadsheet (using the Undo command after each experiment). Q: What do you predict will happen to Figure 3 if we weaken 9 A: By the distributive law, \u03b13(H) \u2022 \u03b23(H). 10 A: It is the total probability of all paths that go through either C or H on a given day. But all paths do that, so this is simply the total probability of all paths! The choice of day doesn't matter.", "cite_spans": [ { "start": 384, "end": 386, "text": "14", "ref_id": null } ], "ref_spans": [ { "start": 30, "end": 38, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 512, "end": 520, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Understanding the Reconstruction", "sec_num": "6" }, { "text": "11 A: It is the probability of my actual ice cream consumption: p(2, 3, 3, . . .) = w p( w, 2, 3, 3, . . .) = 9.13e-19, where w ranges over all 2 33 possible weather state sequences such as H,H,C,. . . . Each summand is the probability of a trellis path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Understanding the Reconstruction", "sec_num": "6" }, { "text": "12 A: Figure 2 assumed a kind of \"weather inertia\" in which a hot day tends to be followed by another hot day, and likewise for cold days.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 14, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Understanding the Reconstruction", "sec_num": "6" }, { "text": "13 Because an apparently hot day follows it. (See footnote 5.) It is the \u03b2 factors that consider this information from the future, and make \u03b11(H) \u2022 \u03b21(H) \u03b11(C) \u2022 \u03b21(C). 14 A: Switching from hot to cold and back (HCH) has probability 0.01, whereas staying hot (HHH) has probability 0.64. So although the fact that I ate only one ice cream on day 11 favors C by 7 to 1, the inferred \"fact\" that days 10 and 12 are hot favors H by 64 to 1. or remove the \"weather inertia\" in Figure 2? 15 Q: What happens if we try \"anti-inertia\"? 16 Even though the number of ice creams is not decisive (consider day 11), it is influential. Q: What do you predict will happen if the distribution of ice creams is the same on hot and cold days? 17 Q: What if we also change p(H | Start) from 0.5 to 0? 18", "cite_spans": [ { "start": 527, "end": 529, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 472, "end": 481, "text": "Figure 2?", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Understanding the Reconstruction", "sec_num": "6" }, { "text": "We originally assumed (Figure 2 ) that I had a 20% chance of eating 2 cones on either a hot or a cold day. But if our reconstruction is right (Figure 3 ), I actually ate 2 cones on 20% of cold days but 40+% of hot days. (Figure 6a ). The lower line now tracks the upper line exactly.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 31, "text": "(Figure 2", "ref_id": "FIGREF1" }, { "start": 142, "end": 151, "text": "(Figure 3", "ref_id": "FIGREF2" }, { "start": 220, "end": 230, "text": "(Figure 6a", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Reestimating Emission Probabilities", "sec_num": "7" }, { "text": "16 A: Setting p(C | H) = p(H | C) = 0.8 and p(C | C) = p(H | H) = 0.1, rather than vice-versa, yields Figure 6b. 
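[In code, each of these experiments amounts to editing a table and re-running the decoder; for instance, the anti-inertia setting of footnote 16 might be written as follows, reusing the hypothetical names introduced earlier. On the spreadsheet one would Undo afterward; here one simply rebinds the table.]

# Anti-inertia experiment (footnote 16): the weather tends to switch every day.
trans = {'C': {'C': 0.1, 'H': 0.8},
         'H': {'C': 0.8, 'H': 0.1}}
_, _, posterior = forward_backward(ice_cream, start, trans, stop, emit)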
17 A: The ice cream data now gives us no information about the weather, so pi(H) = pi(C) = 0.5 on every day i.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 112, "text": "Figure 6b.", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Reestimating Emission Probabilities", "sec_num": "7" }, { "text": "18 A: p1(H) = 0, but pi(H) increases toward an asymptote of 0.5 (the \"limit distribution\"). The weather is more likely to switch to hot than to cold if it was more likely cold to begin with; so pi(H) increases if it is < 0.5. So now that we \"know\" which days are hot and which days are cold, we should really update our probabilities to 0.2 and 0.4, not 0.2 and 0.2. After all, our initial probabilities were just guesses. Q: Where does the learning come from-isn't this circular? Since our reconstruction was based on the guessed probabilities 0.2 and 0.2, why didn't the reconstruction perfectly reflect those guesses? 19 Scrolling rightward on the spreadsheet, we find a table giving the updated probabilities (Figure 7) . This table feeds into a second copy of the forwardbackward calculation and graph. Q: The second graph of p i (H) (not shown here) closely resembles the first; why is it different on days 11 and 27? 20 The updated probability table was computed by the spreadsheet. Q: When it calculated how often I ate 2 cones on a reconstructed hot day, do you think it counted day 27 as a hot day or a cold day? 21", "cite_spans": [ { "start": 621, "end": 623, "text": "19", "ref_id": null }, { "start": 924, "end": 926, "text": "20", "ref_id": null } ], "ref_spans": [ { "start": 713, "end": 723, "text": "(Figure 7)", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Reestimating Emission Probabilities", "sec_num": "7" }, { "text": "Notice that Figure 7 also updated the transition probabilities. This involved counting the 4 kinds of days distinguished by Figure 8 : 22 e.g., what fraction of H 19 A: The reconstruction of the weather underlying the observed data was a compromise between the guessed probabilities ( Figure 2 ) and the demands of the actual data. The model in Figure 2 disagreed with the data: it would not have predicted that 2-cone days actually accounted for more than 20% of all days, or that they were disproportionately likely to fall between 3-cone days.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 7", "ref_id": "FIGREF9" }, { "start": 124, "end": 132, "text": "Figure 8", "ref_id": "FIGREF10" }, { "start": 285, "end": 293, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 345, "end": 353, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "20 A: These days fall between hot and cold days, so smoothing has little effect: their temperature is mainly reconstructed from the number of ice creams. 1 ice cream is now better evidence of a cold day, and 2 ice creams of a hot day. Interestingly, days 11 and 14 can now conspire to \"cool down\" the intervening 3-icecream days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "21 A: Half of each, since p27(H) \u2248 0.5! 
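[A sketch of this fractional counting in code, reusing the hypothetical posterior computed earlier: each day contributes p(H) of a count to the hot row and p(C) to the cold row, so an uncertain day like day 27 is split half and half.]

def reestimate_emissions(obs, posterior, states=('C', 'H')):
    """M-step sketch for emissions: expected count of (state, cones) pairs,
    normalized per state. A day that is 50% hot adds half a count to H."""
    counts = {s: {1: 0.0, 2: 0.0, 3: 0.0} for s in states}
    for i, cones in enumerate(obs):
        for s in states:
            counts[s][cones] += posterior[i][s]
    return {s: {c: counts[s][c] / sum(counts[s].values()) for c in counts[s]}
            for s in states}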
The actual computation is performed in Figure 5 and should be discussed at this point.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "22 Notice how p(H \u2192 C) and p(C \u2192 H) spike when the weather changes, on day 14 and either day 27 or 28. days were followed by H? Again, fractional counts were used to handle uncertainty. Q: Does Figure 3 (first-order reconstruction) contain enough information to construct Figure 8 (second-order reconstruction)? 23 Continuing with the probabilities from the end of footnote 23, suppose we increase p(H | Start) to 0.7. Q: What will happen to the first-order graph? 24 Q: What if we switch from anti-inertia back to inertia ( Figure 9 )? 25 Q: In this last case, what do you predict will happen when we reestimate the probabilities? 26 This reestimation ( Figure 10 ) slightly improved the reconstruction. [Defer discussion of what \"improved\" means: the class still assumes that good reconstructions look like Figure 3 .] Q: Now what? A: Perhaps we should do it again. And again, and again. . . Scrolling rightward past 10 successive reestimations, we see that this arrives at the intuitively 23 A: No. A dramatic way to see this is to make the distribution of ice cream distribution the same on hot and cold days. This makes the first-order graph constant at 0.5 as in footnote 17. But we can still get a range of behaviors in the secondorder graph; e.g., if we switch from inertia to anti-inertia as in footnote 16, then we switch from thinking the weather is unknown but constant to thinking it is unknown but oscillating.", "cite_spans": [ { "start": 312, "end": 314, "text": "23", "ref_id": null }, { "start": 632, "end": 634, "text": "26", "ref_id": null } ], "ref_spans": [ { "start": 194, "end": 202, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 272, "end": 280, "text": "Figure 8", "ref_id": "FIGREF10" }, { "start": 525, "end": 533, "text": "Figure 9", "ref_id": "FIGREF11" }, { "start": 655, "end": 664, "text": "Figure 10", "ref_id": "FIGREF0" }, { "start": 809, "end": 817, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "24 A: pi(H) alternates and converges to 0.5 from both sides. 25 A: pi(H) converges to 0.5 from above (cf. footnote 18), as shown in Figure 9. 26 A: The first-order graph suggests that the early days of summer were slightly more likely to be hot than cold. Since we ate more ice cream on those days, the reestimated probabilities (unlike the initial ones) slightly favor eating more ice cream on hot days. So the new reconstruction based on these probabilities has a very shallow \"U\" shape (bottom of Figure 10) , in which the low-ice-cream middle of the summer is slightly less likely to be hot. correct answer (Figure 11 )! 
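[The ten successive reestimations just mentioned amount to the following loop, sketched with the same hypothetical routines as before; the expected transition counts use the arc posteriors discussed just below (footnote 27), and no smoothing is applied.]

def reestimate(obs, start, trans, stop, emit, iterations=10):
    """Sketch of repeated forward-backward reestimation (EM).
    E step: forward_backward above.  M step: renormalize expected counts."""
    states = ('C', 'H')
    for _ in range(iterations):
        alpha, beta, post = forward_backward(obs, start, trans, stop, emit)
        Z = sum(alpha[0][s] * beta[0][s] for s in states)        # p(obs)
        # Expected count of each kind of arc (footnote 27: alpha * arc prob * beta).
        arc = {q: {r: 0.0 for r in states} for q in states}
        for i in range(len(obs) - 1):
            for q in states:
                for r in states:
                    arc[q][r] += alpha[i][q] * trans[q][r] * emit[r][obs[i+1]] * beta[i+1][r] / Z
        # M step: new parameters from fractional counts.
        # No smoothing here, so zeroes and symmetries persist, as the lesson warns.
        start = {s: post[0][s] for s in states}
        stop = {s: post[-1][s] / (post[-1][s] + sum(arc[s].values())) for s in states}
        trans = {q: {r: arc[q][r] / (post[-1][q] + sum(arc[q].values())) for r in states}
                 for q in states}
        emit = reestimate_emissions(obs, post)
    return start, trans, stop, emit

[Ten passes of this loop correspond to scrolling rightward across the ten copies of the calculation that the spreadsheet holds.]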
Thus, starting from an uninformed probability table, the spreadsheet learned sensible probabilities ( Figure 11 ) that enabled it to reconstruct the weather.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 141, "text": "Figure 9.", "ref_id": "FIGREF11" }, { "start": 500, "end": 510, "text": "Figure 10)", "ref_id": "FIGREF0" }, { "start": 611, "end": 621, "text": "(Figure 11", "ref_id": "FIGREF0" }, { "start": 727, "end": 736, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "The 3-D graph shows how the reconstruction improved over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "The only remaining detail is how the transition probabilities in Figure 8 were computed. Recall that to get Figure 3 , we asked what fraction of paths passed through each state. This time we must ask what fraction of paths traversed each arc. (Q: How to compute this? 27 ) Just as there were two possible states each day, there are four possible arcs each day, and the graph reflects their relative probabilities.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 8", "ref_id": "FIGREF10" }, { "start": 108, "end": 116, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Reestimating Transition Probabilities", "sec_num": "8" }, { "text": "We can check whether the algorithm still learns from other initial guesses. The following examples appear on the spreadsheet and can be copied over the table of initial probabilities. (Except for the pathologically symmetric third case, they all learn the same structure.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimation Experiments", "sec_num": "9" }, { "text": "1. No weather inertia, but more ice cream on hot days. The model initially behaves as in foot- note 15, but over time it learns that weather does have inertia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimation Experiments", "sec_num": "9" }, { "text": "2. Inertia, but only a very slight preference for more ice cream on hot days. The p i (H) graph is initially almost as flat as in footnote 17. But over several passes the model learns that I eat a lot more ice cream on hot days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimation Experiments", "sec_num": "9" }, { "text": "3. A completely symmetric initial state: no inertia, and no preference at all for more ice cream on hot days. Q: What do you expect to happen under reestimation? 28 4. Like the previous case, but break the symmetry by giving cold days a slight preference to eat more ice cream (Figure 12 ). This initial state is almost perfectly symmetric. Q: Why doesn't this case appear to learn the same structure as the previous ones? 29", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 287, "text": "(Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Reestimation Experiments", "sec_num": "9" }, { "text": "The final case does not converge to quite the same result as the others: C and H are reversed. (It is If T is the sequence of 34 training observations (33 days plus Stop), then p(T ) increases rapidly during reestimation. To compress the range of the graph, we don't plot p(T ) but rather perplexity per observation = 1/ 34 p(T ) = 2 \u2212(log 2 p(T ))/34 . now H that is used for the low-ice-cream midsummer days.) 
Should you care about this difference? As climatologists, you might very well be upset that the spreadsheet reversed cold and hot days. But since C and H are ultimately just arbitrary labels, then perhaps the outcome is equally good in some sense. What does it mean for the outcome of this unsupervised learning procedure to be \"good\"? The dataset is just the ice cream diary, which makes no reference to weather. Without knowing the true weather, how can we tell whether we did a good job learning it?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reestimation Experiments", "sec_num": "9" }, { "text": "The answer: A good model is one that predicts the dataset as accurately as possible. The dataset actually has temporal structure, since I tended to have long periods of high and low ice cream consumption. That structure is what the algorithm discovered, regardless of whether weather was the cause. The state C or H distinguishes between the two kinds of periods and tends to persist over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local Maximization of Likelihood", "sec_num": "10" }, { "text": "So did this learned model predict the dataset well? It was not always sure about the state sequence, but Figure 13 shows that the likelihood of the observed dataset (summed over all possible state sequences) increased on every iteration. (Q: How is this found? 30 )", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 114, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Local Maximization of Likelihood", "sec_num": "10" }, { "text": "That behavior is actually guaranteed: repeated 30 It is the total probability of paths that explain the data, i.e., all paths in Figure 4 , as given by column I of Figure 1 ; see footnote 10.", "cite_spans": [ { "start": 47, "end": 49, "text": "30", "ref_id": null } ], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 164, "end": 172, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Local Maximization of Likelihood", "sec_num": "10" }, { "text": "forward-backward reestimation converges to a local maximum of likelihood. We have already discovered two symmetric local maxima, both with perplexity of 2.827 per day: the model might use C to represent cold and H to represent hot, or vice versa. Q: How much better is 2.827 than a model with no temporal structure? 31 Remember that maximizing the likelihood of the training data can lead to overfitting. Q: Do you see any evidence of this in the final probability table? 32 Q: Is there a remedy? 33", "cite_spans": [ { "start": 316, "end": 318, "text": "31", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Local Maximization of Likelihood", "sec_num": "10" }, { "text": "We get very different results if we slightly modify Figure 12 by putting p(1 | H) = 0.3 with p(2 | H) = 0.4. The structure of the solution is very different (Figure 14) . In fact, the final parameters now show anti-inertia, giving a reconstruction similar to Figure 6b . Q: What went wrong? 34 In the two previous local maxima, H meant \"low ice-cream day\" or \"high ice-cream day.\" Q: According to Figure 14 , what does H mean here? 35 Q: What does the low value of p(H | H) mean? 36 So we see that there are actually two kinds of structure coexisting in this dataset: days with a lot (little) ice cream tend to repeat, and days with 2 ice creams tend not to repeat. 
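[The perplexities quoted in this discussion and in Figure 13 can be recovered from the total path probability p(T); a sketch, again with the hypothetical names used earlier:]

def perplexity_per_observation(obs, start, trans, stop, emit):
    """Sketch: perplexity per observation, as defined in Figure 13, is
    p(T) ** (-1/34) for the 34 training observations (33 days plus Stop)."""
    alpha, beta, _ = forward_backward(obs, start, trans, stop, emit)
    Z = sum(alpha[0][s] * beta[0][s] for s in ('C', 'H'))   # p(T), likelihood of the diary
    return Z ** (-1.0 / (len(obs) + 1))

[On the full diary this should come out near the 2.827 quoted above for the two symmetric maxima.]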
The first kind of structure did a better job of lowering the perplexity, but both 31 A: A model with no temporal structure is a unigram model. A good guess is that it will have perplexity 3, since it will be completely undecided between the 3 kinds of observations. (It so happens that they were equally frequent in the dataset.) However, if we prevent the learning of temporal structure (by setting the initial conditions so that the model is always in state C, or is always equally likely to be in states C and H), we find that the perplexity is 3.314, reflecting the four-way unigram distribution p(1) = p(2) = p(3) = 11/34, p(Stop)=1/34. 32 A: p(H | Start) \u2192 1 because we become increasingly sure that the training diary started on a hot day. But this single training observation, no matter how justifiably certain we are of it, might not generalize to next summer's diary.", "cite_spans": [ { "start": 291, "end": 293, "text": "34", "ref_id": null }, { "start": 480, "end": 482, "text": "36", "ref_id": null } ], "ref_spans": [ { "start": 52, "end": 61, "text": "Figure 12", "ref_id": "FIGREF0" }, { "start": 157, "end": 168, "text": "(Figure 14)", "ref_id": "FIGREF0" }, { "start": 259, "end": 268, "text": "Figure 6b", "ref_id": "FIGREF7" }, { "start": 397, "end": 406, "text": "Figure 14", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "33 A: Smoothing the fractional counts. Note: If a prior is used for smoothing, the algorithm is guaranteed to locally maximize the posterior (in place of the likelihood).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "34 A: This is a third local maximum of likelihood, unrelated to the others, with worse perplexity (3.059). Getting stuck in poor local maxima is an occupational hazard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "35 A: H usually emits 2 ice creams, whereas C never does. So H stands for a 2-ice-cream day.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "36 A: That 2 ice creams are rarely followed by 2 ice creams. Looking at the dataset, this is true. So even this local maximum successfully discovered some structure: it discovered (to my surprise) that when I make up data, I tend not to repeat 2's! are useful. Q: How could we get our model to discover both kinds of structure (thereby lowering the perplexity further)? 37 Q:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "We have now seen three locally optimal models in which the H state was used for 3 different things-even though we named it H for \"Hot.\" What does this mean for the application of this algorithm to part-of-speech tagging? 38", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Trick Ending", "sec_num": "11" }, { "text": "In a follow-up assignment, students applied Viterbi decoding and forward-backward reestimation to part-of-speech tagging. 39 In the assignment, students were asked to test their code first on the ice cream data (provided as a small tagged corpus) before switching to real data. This cemented the analogy between the ice cream and tagging tasks, helping students connect the class to the assignment. 37 A: Use more states. 
Four states would suffice to distinguish hot/2, cold/2, hot/not2, and cold/not2 days.", "cite_spans": [ { "start": 122, "end": 124, "text": "39", "ref_id": null }, { "start": 399, "end": 401, "text": "37", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Follow-Up Assignment", "sec_num": "12" }, { "text": "38 A: There is no guarantee that N and V will continue to distinguish nouns and verbs after reestimation. They will evolve to make whatever distinctions help to predict the word sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-Up Assignment", "sec_num": "12" }, { "text": "39 Advanced students might also want to read about a modern supervised trigram tagger (Brants, 2000) , or the mixed results when one actually trains trigram taggers by EM (Merialdo, 1994) .", "cite_spans": [ { "start": 86, "end": 100, "text": "(Brants, 2000)", "ref_id": null }, { "start": 171, "end": 187, "text": "(Merialdo, 1994)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Follow-Up Assignment", "sec_num": "12" }, { "text": "Furthermore, students could check their ice cream output against the spreadsheet, and track down basic bugs by comparing their intermediate results to the spreadsheet's. They reported this to be very useful. Presumably it helps learning when students actually find their bugs before handing in the assignment, and when they are able to isolate their misconceptions on their own. It also made office hours and grading much easier for the teaching assistant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Follow-Up Assignment", "sec_num": "12" }, { "text": "The spreadsheet (in Microsoft Excel) and assignment are available at http://www.cs.jhu. edu/\u02dcjason/papers/#tnlp02.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability", "sec_num": "13" }, { "text": "Also available is a second version of the spreadsheet, which uses the Viterbi approximation for decoding and reestimation. The Viterbi algorithm is implemented in an unconventional way so that the two spreadsheets are almost identical; there is no need to follow backpointers. The probability of the best path through state H on day 3 is \u00b5 3 (H) \u2022 \u03bd 3 (H), where \u00b5 and \u03bd are computed like \u03b1 and \u03b2 but maximizing rather than summing path probabilities. The Viterbi approximation treats p 3 (H) as 1 or 0 according to whether \u00b5 3 (H) \u2022 \u03bd 3 (H) equals max(\u00b5 3 (C) \u2022 \u03bd 3 (C), \u00b5 3 (H) \u2022 \u03bd 3 (H)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability", "sec_num": "13" }, { "text": "Have fun! Comments are most welcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability", "sec_num": "13" }, { "text": "A: The total probability of all paths traversing q \u2192 r is \u03b1(q) * p(q \u2192 r) * \u03b2(r).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A: Nothing changes, since the situation is too symmetric. As H and C behave identically, there is nothing to differentiate them and allow them to specialize.29 A: Actually it does; it merely requires more iterations to converge. (The spreadsheet is only wide enough to hold 10 iterations; to run for 10 more, just copy the final probabilities back over the initial ones. Repeat as necessary.) 
It learns both inertia and a preference for more ice cream on cold days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process. Inequalities, 3. Thorsten Brants", "authors": [ { "first": "L", "middle": [ "E" ], "last": "Baum", "suffix": "" } ], "year": 1972, "venue": "Proc. of ANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. E. Baum. 1972. An inequality and associated max- imization technique in statistical estimation of proba- bilistic functions of a Markov process. Inequalities, 3. Thorsten Brants. 2000. TnT: A statistical part-of-speech tagger. In Proc. of ANLP, Seattle.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A stochastic parts program and noun phrase parser for unrestricted text", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Proc. of ANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. Proc. of ANLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Grammatical category disambiguation by statistical optimization", "authors": [ { "first": "J", "middle": [], "last": "Steven", "suffix": "" }, { "first": "", "middle": [], "last": "Derose", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "14", "issue": "1", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven J. DeRose. 1988. Grammatical category disam- biguation by statistical optimization. Computational Linguistics, 14(1):31-39.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parameter estimation for probabilistic finite-state transducers", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2002. Parameter estimation for probabilis- tic finite-state transducers. In Proc. of ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The EM algorithm for graphical association models with missing data", "authors": [ { "first": "L", "middle": [], "last": "Steffen", "suffix": "" }, { "first": "", "middle": [], "last": "Lauritzen", "suffix": "" } ], "year": 1995, "venue": "Computational Statistics and Data Analysis", "volume": "19", "issue": "", "pages": "191--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen L. Lauritzen. 1995. The EM algorithm for graph- ical association models with missing data. Computa- tional Statistics and Data Analysis, 19:191-201.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Tagging English text with a probabilistic model", "authors": [ { "first": "Bernard", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1994, "venue": "Comp. Ling", "volume": "20", "issue": "2", "pages": "155--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Comp. 
Ling., 20(2):155-172.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in Intelli- gent Systems: Networks of Plausible Inference. Mor- gan Kaufmann, San Mateo, California.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A tutorial on hidden Markov models and selected applications in speech recognition", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of IEEE", "volume": "77", "issue": "2", "pages": "257--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. R. Rabiner. 1989. A tutorial on hidden Markov mod- els and selected applications in speech recognition. Proceedings of IEEE, 77(2):257-285, February.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "User has double-clicked on cell D29." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Initial guesses of parameters." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Diary data and reconstructed weather." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "The \u03b1-\u03b2 trellis." }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "Computing expected counts and their totals." }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "Figure 3simply graphs column K." }, "FIGREF7": { "num": null, "uris": null, "type_str": "figure", "text": "With (a) no inertia, and (b) anti-inertia." }, "FIGREF8": { "num": null, "uris": null, "type_str": "figure", "text": "Changing p(C | C) = p(H | C) = p(C | H) = p(H | H) = 0.45 cancels the smoothing effect" }, "FIGREF9": { "num": null, "uris": null, "type_str": "figure", "text": "Parameters of Figure 2 updated by reestimation." }, "FIGREF10": { "num": null, "uris": null, "type_str": "figure", "text": "Second-order weather reconstruction." }, "FIGREF11": { "num": null, "uris": null, "type_str": "figure", "text": "An initial poor reconstruction that will be improved by reestimation." }, "FIGREF12": { "num": null, "uris": null, "type_str": "figure", "text": "The effect of reestimation onFigure 9." }, "FIGREF13": { "num": null, "uris": null, "type_str": "figure", "text": "Nine more passes of forward-backward reestimation onFigures 9-10. Note that the final graph is even smoother thanFigure 3." }, "FIGREF14": { "num": null, "uris": null, "type_str": "figure", "text": "Breaking symmetry toward the opposite solution." }, "FIGREF15": { "num": null, "uris": null, "type_str": "figure", "text": "Figure 13: If T is the sequence of 34 training observations (33 days plus Stop), then p(T ) increases rapidly during reestimation. To compress the range of the graph, we don't plot p(T ) but rather perplexity per observation = 1/ 34 p(T ) = 2 \u2212(log 2 p(T ))/34 ." }, "FIGREF16": { "num": null, "uris": null, "type_str": "figure", "text": "A suboptimal local maximum." } } } }