8 Sequence Labeling for Parts of Speech and Named Entities
Such tasks, in which we assign to each word x_i in an input word sequence a label y_i so that the output sequence Y has the same length as the input sequence X, are called sequence labeling tasks. We'll introduce classic sequence labeling algorithms, one generative (the Hidden Markov Model, or HMM) and one discriminative (the Conditional Random Field, or CRF). In following chapters we'll introduce modern sequence labelers based on RNNs and Transformers.
8.1 (Mostly) English Word Classes

Until now we have been using part-of-speech terms like noun and verb rather freely. In this section we give more complete definitions. While word classes do have semantic tendencies (adjectives, for example, often describe properties, and nouns often describe people), parts of speech are defined instead based on their grammatical relationship with neighboring words or the morphological properties of their affixes.
Open Class. Nouns are words for people, places, or things, but include others as well. The following example sentence contains several kinds of adverbs, which we turn to next: Actually, I ran home extremely quickly yesterday.
Adverbs generally modify something (often verbs, hence the name "adverb", but also other adverbs and entire verb phrases). Directional adverbs or locative adverbs (home, here, downhill) specify the direction or location of some action; degree adverbs (extremely, very, somewhat) specify the extent of some action, process, or property; manner adverbs (slowly, slinkily, delicately) describe the manner of some action or process; and temporal adverbs describe the time that some action or event took place (yesterday, Monday).
Interjections (oh, hey, alas, uh, um) are a smaller open class that also includes greetings (hello, goodbye) and question responses (yes, no, uh-huh).
English adpositions occur before nouns, and hence are called prepositions. They can indicate spatial or temporal relations, whether literal (on it, before then, by the house) or metaphorical (on time, with gusto, beside herself), and relations like marking the agent in Hamlet was written by Shakespeare.
A particle resembles a preposition or an adverb and is used in combination with a verb. Particles often have extended meanings that aren't quite the same as the prepositions they resemble, as in the particle over in she turned the paper over. A verb and a particle acting as a single unit is called a phrasal verb. The meaning of phrasal verbs is often non-compositional, that is, not predictable from the individual meanings of the verb and the particle. Thus, turn down means 'reject', rule out 'eliminate', and go on 'continue'.

Determiners like this and that (this chapter, that page) can mark the start of an English noun phrase. Articles like a, an, and the are a type of determiner that marks discourse properties of the noun and are quite frequent; the is the most common word in written English, with a and an right behind.
Conjunctions join two phrases, clauses, or sentences. Coordinating conjunctions like and, or, and but join two elements of equal status. Subordinating conjunctions are used when one of the elements has some embedded status. For example, the subordinating conjunction that in "I thought that you might like some milk" links the main clause I thought with the subordinate clause you might like some milk. This clause is called subordinate because the entire clause is the "content" of the main verb thought. Subordinating conjunctions like that which link a verb to its argument in this way are also called complementizers.
Pronouns act as a shorthand for referring to an entity or event. Personal pronouns refer to persons or entities (you, she, I, it, me, etc.). Possessive pronouns are forms of personal pronouns that indicate either actual possession or, more often, just an abstract relation between the person and some object (my, your, his, her, its, one's, our, their). Wh-pronouns (what, who, whom, whoever) are used in certain question forms, or act as complementizers (Frida, who married Diego. . . ).

Auxiliary verbs mark semantic features of a main verb such as its tense, whether it is completed (aspect), whether it is negated (polarity), and whether an action is necessary, possible, suggested, or desired (mood). English auxiliaries include the copula verb be, the two verbs do and have along with their inflected forms, as well as modal verbs used to mark the mood associated with the event depicted by the main verb: can indicates ability or possibility, may permission or possibility, must necessity.

An English-specific tagset, the 45-tag Penn Treebank tagset (Marcus et al., 1993), shown in Fig. 8.2, has been used to label many syntactically annotated corpora like the Penn Treebank corpora, so it is worth knowing about. Below we show some examples with each word tagged according to both the UD and Penn tagsets. Notice that the Penn tagset distinguishes tense and participles on verbs, and has a special tag for the existential there construction in English. Note that since New England Journal of Medicine is a proper noun, both tagsets mark its component nouns as NNP, including journal and medicine, which might otherwise be labeled as common nouns (NOUN/NN).
8.2 Part-of-Speech Tagging

Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x_1, x_2, ..., x_n of (tokenized) words and a tagset, and the output is a sequence y_1, y_2, ..., y_n of tags, each output y_i corresponding exactly to one input x_i, as shown in the intuition in Fig. 8.3.

Tagging is a disambiguation task; words are ambiguous (have more than one possible part-of-speech) and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (as in I thought that you might like some milk, above).

We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back:

    earnings growth took a back/JJ seat
    a small building in the back/NN
    a clear majority of senators back/VBP the bill
    Dave began to back/VB toward the door
    enable the country to buy back/RP debt
    I was twenty-one back/RB then

Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely.
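To make the type/token ambiguity figures above concrete, here is a minimal Python sketch (not from the text) that measures them on any list of (word, tag) pairs; the corpus format and the toy example data are assumptions made only for illustration.

    from collections import defaultdict

    def ambiguity_stats(tagged_tokens):
        """Given a list of (word, tag) pairs, report what fraction of word *types*
        and word *tokens* are ambiguous (appear with more than one tag)."""
        tags_per_type = defaultdict(set)
        for word, tag in tagged_tokens:
            tags_per_type[word.lower()].add(tag)
        ambiguous_types = {w for w, tags in tags_per_type.items() if len(tags) > 1}
        n_amb_tokens = sum(1 for w, _ in tagged_tokens if w.lower() in ambiguous_types)
        return (len(ambiguous_types) / len(tags_per_type),   # fraction of ambiguous types
                n_amb_tokens / len(tagged_tokens))           # fraction of ambiguous tokens

    # Toy data, just to show the interface:
    corpus = [("book", "VB"), ("that", "DT"), ("flight", "NN"),
              ("hand", "VB"), ("me", "PRP"), ("that", "IN"), ("book", "NN")]
    print(ambiguity_stats(corpus))   # (0.4, 0.571...) on this toy data

Run on a large tagged corpus, the two numbers would correspond to the 14-15% (types) and 55-67% (tokens) figures cited above.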
This idea suggests a useful baseline: given an ambiguous word, choose the tag which is most frequent in the training corpus. This is a key concept:
Most Frequent Class Baseline: Always compare a classifier against a baseline at least as good as the most frequent class baseline (assigning each token to the class it occurred in most often in the training set).
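As a concrete illustration, here is a minimal sketch of the most-frequent-class baseline for tagging. The training-data format and the choice to back off to the overall most common tag for unknown words are assumptions of this sketch, not part of the definition above.

    from collections import Counter, defaultdict

    def train_baseline(tagged_sentences):
        """tagged_sentences: list of lists of (word, tag) pairs."""
        tag_counts = defaultdict(Counter)
        all_tags = Counter()
        for sent in tagged_sentences:
            for word, tag in sent:
                tag_counts[word][tag] += 1
                all_tags[tag] += 1
        most_frequent = {w: c.most_common(1)[0][0] for w, c in tag_counts.items()}
        default_tag = all_tags.most_common(1)[0][0]   # back-off for unseen words (an assumption)
        return lambda words: [most_frequent.get(w, default_tag) for w in words]

    # Usage on a toy training set:
    tagger = train_baseline([[("a", "DT"), ("back", "NN")], [("back", "NN")], [("back", "VB")]])
    print(tagger(["a", "back", "mystery"]))   # ['DT', 'NN', 'NN']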
The most-frequent-tag baseline has an accuracy of about 92%. The baseline thus differs from the state of the art and the human ceiling (97%) by only 5%.
8.3 Named Entities and Named Entity Tagging

Part of speech tagging can tell us that words like Janet, Stanford University, and Colorado are all proper nouns; being a proper noun is a grammatical property of these words. But viewed from a semantic perspective, these proper nouns refer to different kinds of entities: Janet is a person, Stanford University is an organization, and Colorado is a location.
A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization. A short sample news text, for example, might contain 13 mentions of named entities, including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8.5 shows typical generic named entity types. Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art.
Named entity tagging is a useful first step in lots of natural language processing tasks. In sentiment analysis we might want to know a consumer's sentiment toward a particular entity. Entities are a useful first stage in question answering, or for linking text to information in structured knowledge sources like Wikipedia. And named entity tagging is also central to tasks involving building semantic representations, like extracting events and the relationships between participants.
Unlike part-of-speech tagging, where there is no segmentation problem since each word gets one tag, the task of named entity recognition is to find and label spans of text. It is difficult partly because of the ambiguity of segmentation.
We need to decide what's an entity and what isn't, and where the boundaries are. Indeed, most words in a text will not be named entities. Another difficulty is caused by type ambiguity. The mention JFK can refer to a person, the airport in New York, or any number of schools, bridges, and streets around the United States. Some examples of this kind of cross-type confusion are given in Figure 8.6.

The standard approach to sequence labeling for a span-recognition problem like NER is BIO tagging (Ramshaw and Marcus, 1995), along with variants called IO tagging and BIOES tagging. BIO tagging allows us to treat NER like a word-by-word sequence labeling task, via tags that capture both the boundary and the named entity type. Consider the beginning of the following sentence, shown in bracketed notation:

    [PER Jane Villanueva] of [ORG United], ...

In BIO tagging we label any token that begins a span of interest with the label B, tokens that occur inside a span are tagged with an I, and any tokens outside of any span of interest are labeled O. While there is only one O tag, we'll have distinct B and I tags for each named entity class. The number of tags is thus 2n + 1, where n is the number of entity types. BIO tagging can represent exactly the same information as the bracketed notation, but has the advantage that we can represent the task in the same simple sequence modeling way as part-of-speech tagging: assigning a single label y_i to each input word x_i. The two variant tagging schemes differ slightly: IO tagging loses some information by eliminating the B tag, and BIOES tagging adds an end tag E for the end of a span, and a span tag S for a span consisting of only one word. A sequence labeler (HMM, CRF, RNN, Transformer, etc.) is trained to label each token in a text with tags that indicate the presence (or absence) of particular kinds of named entities. (The original figure, not reproduced here, lists each word of the example alongside its IO, BIO, and BIOES tags.)
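To make the scheme concrete, here is a small sketch (not from the text) that converts bracketed entity spans into per-token BIO tags; the span representation (start index, end index, type) is an assumption made for illustration.

    def to_bio(tokens, spans):
        """tokens: list of words; spans: list of (start, end_exclusive, entity_type).
        Returns one BIO tag per token."""
        tags = ["O"] * len(tokens)
        for start, end, etype in spans:
            tags[start] = "B-" + etype
            for i in range(start + 1, end):
                tags[i] = "I-" + etype
        return tags

    tokens = ["Jane", "Villanueva", "of", "United", ","]
    spans = [(0, 2, "PER"), (3, 4, "ORG")]
    print(list(zip(tokens, to_bio(tokens, spans))))
    # [('Jane', 'B-PER'), ('Villanueva', 'I-PER'), ('of', 'O'), ('United', 'B-ORG'), (',', 'O')]

With n entity types this produces exactly the 2n + 1 tag inventory described above: a B and an I tag per type, plus a single O.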
8.4 HMM Part-of-Speech Tagging
In this section we introduce our first sequence labeling algorithm, the Hidden Markov Model, and show how to apply it to part-of-speech tagging. Recall that a sequence labeler is a model whose job is to assign a label to each unit in a sequence, thus mapping a sequence of observations to a sequence of labels of the same length. The HMM is a classic model that introduces many of the key concepts of sequence modeling that we will see again in more modern models.
An HMM is a probabilistic sequence model: given a sequence of units (words, letters, morphemes, sentences, whatever), it computes a probability distribution over possible sequences of labels and chooses the best label sequence.
8.4.1 Markov Chains

The HMM is based on augmenting the Markov chain. A Markov chain is a model that tells us something about the probabilities of sequences of random variables, called states, each of which can take on values from some set. These sets can be words, or tags, or symbols representing anything, for example the weather. A Markov chain makes a very strong assumption that if we want to predict the future in the sequence, all that matters is the current state. All the states before the current state have no impact on the future except via the current state. It's as if to predict tomorrow's weather you could examine today's weather but you weren't allowed to look at yesterday's weather.

More formally, consider a sequence of state variables q_1, q_2, ..., q_i. A Markov model embodies the Markov assumption on the probabilities of this sequence: that
when predicting the future, the past doesn't matter, only the present.
Markov Assumption:

    P(q_i = a | q_1 ... q_{i-1}) = P(q_i = a | q_{i-1})        (8.3)

Figure 8.8a shows a Markov chain for assigning a probability to a sequence of weather events, for which the vocabulary consists of HOT, COLD, and WARM. The states are represented as nodes in the graph, and the transitions, with their probabilities, as edges. The transitions are probabilities: the values of arcs leaving a given state must sum to 1. Figure 8.8b shows a Markov chain for assigning a probability to a sequence of words w_1 ... w_t. This Markov chain should be familiar; in fact, it represents a bigram language model, with each edge expressing the probability P(w_i | w_j)! Given the two models in Fig. 8.8, we can assign a probability to any sequence from our vocabulary.
Formally, a Markov chain is specified by the following components:
Q = q_1 q_2 ... q_N
    a set of N states

A = a_{11} a_{12} ... a_{N1} ... a_{NN}
    a transition probability matrix A, each a_{ij} representing the probability of moving from state i to state j, s.t. \sum_{j=1}^{N} a_{ij} = 1 for all i

π = π_1, π_2, ..., π_N
    an initial probability distribution over states. π_i is the probability that the Markov chain will start in state i. Some states j may have π_j = 0, meaning that they cannot be initial states. Also, \sum_{i=1}^{N} π_i = 1

Before you go on, use the sample probabilities in Fig. 8.8a (with π = [0.1, 0.7, 0.2]) to compute the probability of each of the following sequences:
    (8.4) hot hot hot hot
    (8.5) cold hot cold hot

What does the difference in these probabilities tell you about a real-world weather fact encoded in Fig. 8.8a?
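The computation the exercise asks for is just a product of an initial probability and transition probabilities. Here is a minimal sketch; the π vector is the one given above, but the transition matrix values are placeholders standing in for Fig. 8.8a, which is not reproduced here.

    import numpy as np

    states = ["HOT", "COLD", "WARM"]
    pi = np.array([0.1, 0.7, 0.2])          # initial distribution, as given in the text
    # A[i, j] = P(next = states[j] | current = states[i]); placeholder values, NOT Fig. 8.8a
    A = np.array([[0.6, 0.1, 0.3],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.1, 0.6]])

    def sequence_probability(seq):
        idx = [states.index(s.upper()) for s in seq]
        p = pi[idx[0]]
        for prev, cur in zip(idx, idx[1:]):
            p *= A[prev, cur]
        return p

    print(sequence_probability(["hot", "hot", "hot", "hot"]))    # pi(HOT) * P(HOT|HOT)^3
    print(sequence_probability(["cold", "hot", "cold", "hot"]))  # pi(COLD) * P(HOT|COLD) * P(COLD|HOT) * P(HOT|COLD)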
8.4.2 The Hidden Markov Model

A Markov chain is useful when we need to compute a probability for a sequence of observable events. In many cases, however, the events we are interested in are hidden: we don't observe them directly. For example, we don't normally observe part-of-speech tags in a text. Rather, we see words, and must infer the tags from the word sequence. We call the tags hidden because they are not observed.
A hidden Markov model (HMM) allows us to talk about both observed events (like words that we see in the input) and hidden events (like part-of-speech tags) that we think of as causal factors in our probabilistic model. An HMM is specified by the following components:
Q = q_1 q_2 ... q_N
    a set of N states

A = a_{11} ... a_{ij} ... a_{NN}
    a transition probability matrix A, each a_{ij} representing the probability of moving from state i to state j, s.t. \sum_{j=1}^{N} a_{ij} = 1 for all i

O = o_1 o_2 ... o_T
    a sequence of T observations, each one drawn from a vocabulary V = v_1, v_2, ..., v_V

B = b_i(o_t)
    a sequence of observation likelihoods, also called emission probabilities, each expressing the probability of an observation o_t being generated from a state q_i

π = π_1, π_2, ..., π_N
    an initial probability distribution over states. π_i is the probability that the Markov chain will start in state i. Some states j may have π_j = 0, meaning that they cannot be initial states. Also, \sum_{i=1}^{N} π_i = 1

A first-order hidden Markov model instantiates two simplifying assumptions. First, as with a first-order Markov chain, the probability of a particular state depends only on the previous state:
Markov Assumption:

    P(q_i | q_1, ..., q_{i-1}) = P(q_i | q_{i-1})        (8.6)
Second, the probability of an output observation o_i depends only on the state that produced the observation, q_i, and not on any other states or any other observations:
Output Independence:

    P(o_i | q_1, ..., q_i, ..., q_T, o_1, ..., o_i, ..., o_T) = P(o_i | q_i)        (8.7)
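Pulling the five components together, here is a minimal sketch of an HMM as a data structure with sanity checks. The class name, the NumPy representation, and the validate method are illustrative assumptions, not anything defined in the text.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class HMM:
        states: list      # Q: the N states (e.g., POS tags)
        vocab: list       # V: the possible observations (e.g., words)
        A: np.ndarray     # N x N transition matrix, A[i, j] = P(state j | state i)
        B: np.ndarray     # N x |V| emission matrix, B[i, k] = P(vocab[k] | state i)
        pi: np.ndarray    # length-N initial distribution

        def validate(self):
            N, V = len(self.states), len(self.vocab)
            assert self.A.shape == (N, N) and self.B.shape == (N, V) and self.pi.shape == (N,)
            assert np.allclose(self.A.sum(axis=1), 1.0)   # each row of A is a distribution
            assert np.allclose(self.B.sum(axis=1), 1.0)   # each row of B is a distribution
            assert np.isclose(self.pi.sum(), 1.0)         # pi is a distribution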
8.4.3 The components of an HMM tagger

Let's start by looking at the pieces of an HMM tagger, and then we'll see how to use it to tag. An HMM has two components, the A and B probabilities.
The A matrix contains the tag transition probabilities P(t_i | t_{i-1}), which represent the probability of a tag occurring given the previous tag. For example, modal verbs like will are very likely to be followed by a verb in the base form, a VB, like race, so we expect this probability to be high. We compute the maximum likelihood estimate of this transition probability by counting, out of the times we see the first tag in a labeled corpus, how often the first tag is followed by the second:
    P(t_i | t_{i-1}) = \frac{C(t_{i-1}, t_i)}{C(t_{i-1})}        (8.8)
In the WSJ corpus, for example, MD occurs 13124 times, of which it is followed by VB 10471 times, for an MLE estimate of
    P(VB | MD) = \frac{C(MD, VB)}{C(MD)} = \frac{10471}{13124} = .80        (8.9)
Let's walk through an example, seeing how these probabilities are estimated and used in a sample tagging task, before we return to the algorithm for decoding.
In HMM tagging, the probabilities are estimated by counting on a tagged training corpus. For this example we'll use the tagged WSJ corpus.
The B emission probabilities, P(w_i | t_i), represent the probability, given a tag (say MD), that it will be associated with a given word (say will). The MLE of the emission probability is
    P(w_i | t_i) = \frac{C(t_i, w_i)}{C(t_i)}        (8.10)
Of the 13124 occurrences of MD in the WSJ corpus, it is associated with will 4046 times:
    P(will | MD) = \frac{C(MD, will)}{C(MD)} = \frac{4046}{13124} = .31        (8.11)
We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood term is not asking "which is the most likely tag for the word will?" That would be the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive question "If we were going to generate an MD, how likely is it that this modal would be will?"
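Both the transition estimate in Eq. 8.9 and the emission estimate in Eq. 8.11 are plain relative frequencies over a tagged corpus. The sketch below is an illustration, not code from the text; the toy corpus, the <s> start pseudo-tag, and the function names are assumptions.

    from collections import Counter

    def estimate_hmm_probs(tagged_sentences):
        """MLE transition P(t_i | t_{i-1}) and emission P(w_i | t_i) from (word, tag) sentences."""
        trans, emit = Counter(), Counter()
        tag_count, prev_count = Counter(), Counter()
        for sent in tagged_sentences:
            prev = "<s>"                         # start-of-sentence pseudo-tag (a modeling assumption)
            for word, tag in sent:
                trans[(prev, tag)] += 1
                prev_count[prev] += 1
                emit[(tag, word)] += 1
                tag_count[tag] += 1
                prev = tag
        p_trans = lambda t, prev: trans[(prev, t)] / prev_count[prev]      # Eq. 8.8
        p_emit = lambda w, t: emit[(t, w)] / tag_count[t]                  # Eq. 8.10
        return p_trans, p_emit

    # On the full tagged WSJ corpus these counts would reproduce
    # P(VB|MD) = 10471/13124 = .80 and P(will|MD) = 4046/13124 = .31.
    p_trans, p_emit = estimate_hmm_probs([[("will", "MD"), ("race", "VB")]])
    print(p_trans("VB", "MD"), p_emit("will", "MD"))   # 1.0 1.0 on this one-sentence toy corpus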
The A transition probabilities and B observation likelihoods of the HMM are illustrated in Fig. 8.9 for three states in an HMM part-of-speech tagger; the full tagger would have one state for each tag.
Figure 8.9: An illustration of the two parts of an HMM representation: the A transition probabilities used to compute the prior probability, and the B observation likelihoods that are associated with each state, one likelihood for each possible observation word.

8.4.4 HMM tagging as decoding

For any model, such as an HMM, that contains hidden variables, the task of determining the sequence of hidden variables corresponding to the sequence of observations is called decoding. More formally:
For part-of-speech tagging, the goal of HMM decoding is to choose the tag sequence t_1 ... t_n that is most probable given the observation sequence of n words w_1 ... w_n:
    \hat{t}_{1:n} = \argmax_{t_1 ... t_n} P(t_1 ... t_n | w_1 ... w_n)        (8.12)
The way we'll do this in the HMM is to use Bayes' rule to instead compute:
    \hat{t}_{1:n} = \argmax_{t_1 ... t_n} \frac{P(w_1 ... w_n | t_1 ... t_n) P(t_1 ... t_n)}{P(w_1 ... w_n)}        (8.13)
Furthermore, we simplify Eq. 8.13 by dropping the denominator P(w_1 ... w_n):
    \hat{t}_{1:n} = \argmax_{t_1 ... t_n} P(w_1 ... w_n | t_1 ... t_n) P(t_1 ... t_n)        (8.14)
HMM taggers make two further simplifying assumptions. The first is that the probability of a word appearing depends only on its own tag and is independent of neighboring words and tags:
    P(w_1 ... w_n | t_1 ... t_n) \approx \prod_{i=1}^{n} P(w_i | t_i)        (8.15)
The second assumption, the bigram assumption, is that the probability of a tag is dependent only on the previous tag, rather than the entire tag sequence:
    P(t_1 ... t_n) \approx \prod_{i=1}^{n} P(t_i | t_{i-1})        (8.16)
Plugging the simplifying assumptions from Eq. 8.15 and Eq. 8.16 into Eq. 8.14 results in the following equation for the most probable tag sequence from a bigram tagger:
    \hat{t}_{1:n} = \argmax_{t_1 ... t_n} P(t_1 ... t_n | w_1 ... w_n) \approx \argmax_{t_1 ... t_n} \prod_{i=1}^{n} \overbrace{P(w_i | t_i)}^{\text{emission}} \; \overbrace{P(t_i | t_{i-1})}^{\text{transition}}        (8.17)
The two parts of Eq. 8.17 correspond neatly to the B emission probability and A transition probability that we just defined above!
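Eq. 8.17 says that a bigram HMM scores a candidate tag sequence by multiplying one emission and one transition probability per word. A minimal sketch of that scoring function is below; the p_trans and p_emit callables and the <s> start tag are assumed to behave like the count-based estimates sketched in Section 8.4.3.

    def hmm_score(words, tags, p_trans, p_emit, start_tag="<s>"):
        """Right-hand side of Eq. 8.17: product over i of P(w_i|t_i) * P(t_i|t_{i-1})."""
        score, prev = 1.0, start_tag
        for w, t in zip(words, tags):
            score *= p_emit(w, t) * p_trans(t, prev)
            prev = t
        return score

    # Decoding would compare this score across all candidate tag sequences;
    # the Viterbi algorithm in the next section performs that search efficiently.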
8.4.5 The Viterbi Algorithm

The decoding algorithm for HMMs is the Viterbi algorithm shown in Fig. 8.10.
As an instance of dynamic programming, Viterbi resembles the dynamic programming minimum edit distance algorithm of Chapter 2.
function VITERBI(observations of len T, state-graph of len N) returns best-path, path-prob

    create a path probability matrix viterbi[N, T]
    for each state s from 1 to N do                              ; initialization step
        viterbi[s, 1] ← π_s * b_s(o_1)
        backpointer[s, 1] ← 0
    for each time step t from 2 to T do                          ; recursion step
        for each state s from 1 to N do
            viterbi[s, t] ← max_{s'=1..N} viterbi[s', t-1] * a_{s',s} * b_s(o_t)
            backpointer[s, t] ← argmax_{s'=1..N} viterbi[s', t-1] * a_{s',s} * b_s(o_t)
    bestpathprob ← max_{s=1..N} viterbi[s, T]                    ; termination step
    bestpathpointer ← argmax_{s=1..N} viterbi[s, T]              ; termination step
    bestpath ← the path starting at state bestpathpointer, that follows backpointer[] to states back in time
    return bestpath, bestpathprob

Figure 8.10: The Viterbi algorithm for finding the optimal sequence of tags. Given an observation sequence and an HMM λ = (A, B), the algorithm returns the state path through the HMM that assigns maximum likelihood to the observation sequence.
The Viterbi algorithm first sets up a probability matrix or lattice, with one column for each observation o_t and one row for each state in the state graph. Each column thus has a cell for each state q_i in the single combined automaton. Figure 8.11 shows an intuition of this lattice for the sentence Janet will back the bill.
Each cell of the lattice, v_t(j), represents the probability that the HMM is in state j after seeing the first t observations and passing through the most probable state sequence q_1, ..., q_{t-1}, given the HMM λ. The value of each cell v_t(j) is computed by recursively taking the most probable path that could lead us to this cell. Formally, each cell expresses the probability
    v_t(j) = \max_{q_1, ..., q_{t-1}} P(q_1 ... q_{t-1}, o_1, o_2 ... o_t, q_t = j | λ)        (8.18)
We represent the most probable path by taking the maximum over all possible previous state sequences, \max_{q_1, ..., q_{t-1}}.
Like other dynamic programming algorithms, Viterbi fills each cell recursively. Given that we had already computed the probability of being in every state at time t-1, we compute the Viterbi probability by taking the most probable of the extensions of the paths that lead to the current cell. For a given state q_j at time t, the value
v_t(j) is computed as

    v_t(j) = \max_{i=1}^{N} v_{t-1}(i) \, a_{ij} \, b_j(o_t)        (8.19)
Figure 8.11: A sketch of the lattice for Janet will back the bill, showing the possible tags (q_i) for each word and highlighting the path corresponding to the correct tag sequence through the hidden states. States (parts of speech) which have a zero probability of generating a particular word according to the B matrix (such as the probability that a determiner DT will be realized as Janet) are greyed out.

The three factors that are multiplied in Eq. 8.19 for extending the previous paths to compute the Viterbi probability at time t are:
v_{t-1}(i)    the previous Viterbi path probability from the previous time step
a_{ij}        the transition probability from previous state q_i to current state q_j
b_j(o_t)      the state observation likelihood of the observation symbol o_t given the current state j
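For readers who prefer running code to pseudocode, here is a compact Python rendering of Fig. 8.10. The use of NumPy arrays and of log probabilities (to avoid underflow) are implementation choices of this sketch, not part of the algorithm's definition.

    import numpy as np

    def viterbi(obs, pi, A, B):
        """obs: observation indices (length T); pi: (N,) initial probs;
        A: (N, N) transition probs; B: (N, V) emission probs.
        Returns the most probable state sequence and its log probability."""
        N, T = A.shape[0], len(obs)
        with np.errstate(divide="ignore"):                 # log(0) = -inf is fine here
            log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
        v = np.full((N, T), -np.inf)
        backpointer = np.zeros((N, T), dtype=int)
        v[:, 0] = log_pi + log_B[:, obs[0]]                                # initialization step
        for t in range(1, T):                                              # recursion step
            scores = v[:, t-1, None] + log_A + log_B[None, :, obs[t]]      # scores[i, j]: from state i to j
            v[:, t] = scores.max(axis=0)
            backpointer[:, t] = scores.argmax(axis=0)
        best_last = int(v[:, T-1].argmax())                                # termination step
        path = [best_last]
        for t in range(T-1, 0, -1):                                        # follow backpointers
            path.append(int(backpointer[path[-1], t]))
        return path[::-1], v[best_last, T-1]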
8.4.6 Working through an example

Let's tag the sentence Janet will back the bill; the goal is the correct series of tags (see also Fig. 8.11):

    Janet/NNP will/MD back/VB the/DT bill/NN

Let the HMM be defined by the two tables in Fig. 8.12 and Fig. 8.13. Figure 8.12 lists the a_{ij} probabilities for transitioning between the hidden states (part-of-speech tags). Figure 8.13 expresses the b_i(o_t) probabilities, the observation likelihoods of words given tags. This table is (slightly simplified) from counts in the WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts of speech, and the word the can appear as a determiner or as an NNP (in titles like "Somewhere Over the Rainbow" all words are tagged as NNP).

Figure 8.14 shows a fleshed-out version of the sketch we saw in Fig. 8.11, the Viterbi lattice for computing the best hidden state sequence for the observation sequence Janet will back the bill.

Figure 8.14: The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the reader. After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct state sequence NNP MD VB DT NN.
There are N = 5 state columns. We begin in column 1 (for the word Janet) by setting the Viterbi value in each cell to the product of the π transition probability (the start probability for that state i, which we get from the <s> entry of Fig. 8.12) and the observation likelihood of the word Janet given the tag for that cell. Most of the cells in the column are zero since the word Janet cannot be any of those tags. The reader should find this in Fig. 8.14. Next, each cell in the will column gets updated. For each state, we compute the value viterbi[s,t] by taking the maximum over the extensions of all the paths from the previous column that lead to the current cell according to Eq. 8.19. We have shown the values for the MD, VB, and NN cells. Each cell gets the max of the 7 values from the previous column, multiplied by the appropriate transition probability; as it happens in this case, most of them are zero from the previous column. The remaining value is multiplied by the relevant observation probability, and the (trivial) max is taken. In this case the final value, 2.772e-8, comes from the NNP state at the previous column. The reader should fill in the rest of the lattice in Fig. 8.14 and backtrace to see whether or not the Viterbi algorithm returns the gold state sequence NNP MD VB DT NN.
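Continuing the code sketch above (which defined viterbi), this is roughly how the example could be run. The π, A, and B values below are placeholders, since Figs. 8.12-8.14 with the real WSJ-derived numbers are not reproduced here.

    import numpy as np

    tags = ["NNP", "MD", "VB", "JJ", "NN", "RB", "DT"]
    words = ["Janet", "will", "back", "the", "bill"]
    vocab = {w: i for i, w in enumerate(words)}

    # pi, A (7x7), and B (7x5) would be filled in from Fig. 8.12 and Fig. 8.13;
    # random distributions are used here only so the call runs end to end.
    rng = np.random.default_rng(0)
    pi = rng.dirichlet(np.ones(7))
    A = rng.dirichlet(np.ones(7), size=7)
    B = rng.dirichlet(np.ones(5), size=7)

    path, logprob = viterbi([vocab[w] for w in words], pi, A, B)
    print([tags[s] for s in path])   # with the real tables this should come out NNP MD VB DT NN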
8.5 Conditional Random Fields (CRFs)

While the HMM is a useful and powerful model, it turns out that HMMs need a number of augmentations to achieve high accuracy. For example, in POS tagging as in other tasks, we often run into unknown words: proper names and acronyms are created very often, and even new common nouns and verbs enter the language at a surprising rate. It would be great to have ways to add arbitrary features to help with this, perhaps based on capitalization or morphology (words starting with capital letters are likely to be proper nouns, words ending with -ed tend to be past tense (VBD or VBN), etc.). Or knowing the previous or following words might be a useful feature (if the previous word is the, the current tag is unlikely to be a verb). Although we could try to hack the HMM to find ways to incorporate some of these, in general it's hard for generative models like HMMs to add arbitrary features directly into the model in a clean way. We've already seen a model for combining arbitrary features in a principled way: log-linear models like the logistic regression model of Chapter 5! But logistic regression isn't a sequence model; it assigns a class to a single observation.
Luckily, there is a discriminative sequence model based on log-linear models: the conditional random field (CRF). We'll describe here the linear chain CRF, the version of the CRF most commonly used for language processing, and the one whose conditioning closely matches the HMM.
Assume we have a sequence of input words X = x_1 ... x_n and want to compute a sequence of output tags Y = y_1 ... y_n. In an HMM, to compute the best tag sequence that maximizes P(Y|X) we rely on Bayes' rule and the likelihood P(X|Y):
    \hat{Y} = \argmax_Y p(Y|X) = \argmax_Y p(X|Y) p(Y) = \argmax_Y \prod_i p(x_i | y_i) \prod_i p(y_i | y_{i-1})        (8.21)
In a CRF, by contrast, we compute the posterior p(Y|X) directly, training the CRF to discriminate among the possible tag sequences:
    \hat{Y} = \argmax_{Y \in \mathcal{Y}} P(Y|X)        (8.22)
However, the CRF does not compute a probability for each tag at each time step. Instead, at each time step the CRF computes log-linear functions over a set of relevant features, and these local features are aggregated and normalized to produce a global probability for the whole sequence.

Let's introduce the CRF more formally, again using X and Y as the input and output sequences. A CRF is a log-linear model that assigns a probability to an entire output (tag) sequence Y, out of all possible sequences \mathcal{Y}, given the entire input (word) sequence X. We can think of a CRF as like a giant version of what multinomial logistic regression does for a single token. Recall that the feature function f in regular multinomial logistic regression can be viewed as a function of a tuple: a token x and a label y (page 92). In a CRF, the function F maps an entire input sequence X and an entire output sequence Y to a feature vector. Let's assume we have K features, with a weight w_k for each feature F_k:
    p(Y|X) = \frac{\exp\left(\sum_{k=1}^{K} w_k F_k(X, Y)\right)}{\sum_{Y' \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k F_k(X, Y')\right)}        (8.23)
It's common to also describe the same equation by pulling out the denominator into a function Z(X):
    p(Y|X) = \frac{1}{Z(X)} \exp\left(\sum_{k=1}^{K} w_k F_k(X, Y)\right)        (8.24)

    Z(X) = \sum_{Y' \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k F_k(X, Y')\right)        (8.25)
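To make Eqs. 8.23-8.25 concrete, here is a tiny brute-force sketch (illustration only: real CRF implementations compute Z(X) with dynamic programming, not enumeration). The global feature function shown, which simply sums per-position local features, and the toy features themselves are assumptions of this sketch.

    import math
    from itertools import product

    def global_features(X, Y, local_feats, K):
        """F_k(X, Y) computed as a sum of local features over positions (a linear-chain assumption)."""
        F = [0.0] * K
        for i in range(len(X)):
            prev = Y[i-1] if i > 0 else "<s>"
            for k, value in enumerate(local_feats(X, i, Y[i], prev)):
                F[k] += value
        return F

    def crf_probability(X, Y, w, tagset, local_feats):
        """p(Y|X) from Eq. 8.24-8.25, normalizing by brute force over all tag sequences."""
        K = len(w)
        score = lambda Yp: math.exp(sum(wk * Fk for wk, Fk in zip(w, global_features(X, Yp, local_feats, K))))
        Z = sum(score(Yp) for Yp in product(tagset, repeat=len(X)))   # Eq. 8.25
        return score(Y) / Z                                           # Eq. 8.24

    # Two toy binary features: "word is capitalized and tag is NNP", "previous tag equals current tag"
    local_feats = lambda X, i, y, y_prev: [float(X[i][0].isupper() and y == "NNP"), float(y == y_prev)]
    print(crf_probability(["Janet", "will"], ["NNP", "MD"],
                          w=[2.0, 0.5], tagset=["NNP", "MD"], local_feats=local_feats))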