4 Naive Bayes and Sentiment Classification

4.1 Naive Bayes Classifiers
Figure 4.1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the words is ignored (the bag of words assumption) and we make use of the frequency of each word.
Naive Bayes is a probabilistic classifier, meaning that for a document d, out of all classes c ∈ C the classifier returns the class ĉ which has the maximum posterior probability given the document:

    ĉ = argmax_{c ∈ C} P(c|d)    (4.1)

In Eq. 4.1 we use the hat notation ˆ to mean "our estimate of the correct class".
This idea of Bayesian inference has been known since the work of Bayes (1763), and was first applied to text classification by Mosteller and Wallace (1964). The intuition of Bayesian classification is to use Bayes' rule to transform Eq. 4.1 into other probabilities that have some useful properties. Bayes' rule is presented in Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into three other probabilities:
    P(x|y) = P(y|x) P(x) / P(y)    (4.2)
We can then substitute Eq. 4.2 into Eq. 4.1 to get Eq. 4.3:
    ĉ = argmax_{c ∈ C} P(c|d) = argmax_{c ∈ C} P(d|c) P(c) / P(d)    (4.3)
We can conveniently simplify Eq. 4.3 by dropping the denominator P(d). This is possible because we will be computing P(d|c) P(c) / P(d) for each possible class. But P(d) doesn't change for each class; we are always asking about the most likely class for the same document d, which must have the same probability P(d). Thus, we can choose the class that maximizes this simpler formula:
    ĉ = argmax_{c ∈ C} P(c|d) = argmax_{c ∈ C} P(d|c) P(c)    (4.4)
We call Naive Bayes a generative model because we can read Eq. 4.4 as stating a kind of implicit assumption about how a document is generated: first a class is sampled from P(c), and then the words are generated by sampling from P(d|c). (In fact we could imagine generating artificial documents, or at least their word counts, by following this process). We'll say more about this intuition of generative models in Chapter 5.
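To make this generative story concrete, here is a minimal sketch in Python; the class prior and the per-class word distributions are made-up toy values, not estimates from any corpus:

    import random

    # Toy parameters: P(c) and P(w|c) for two classes (illustrative values only).
    prior = {"+": 0.4, "-": 0.6}
    word_probs = {
        "+": {"fun": 0.5, "great": 0.4, "boring": 0.1},
        "-": {"fun": 0.1, "great": 0.2, "boring": 0.7},
    }

    def generate_document(length=5):
        """Sample a class from P(c), then sample each word from P(w|c)."""
        c = random.choices(list(prior), weights=list(prior.values()))[0]
        words = random.choices(list(word_probs[c]),
                               weights=list(word_probs[c].values()), k=length)
        return c, words

    print(generate_document())  # e.g. ('-', ['boring', 'fun', 'boring', 'boring', 'great'])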
To return to classification: we compute the most probable class ĉ given some document d by choosing the class which has the highest product of two probabilities: the prior probability of the class P(c) and the likelihood of the document P(d|c):
    ĉ = argmax_{c ∈ C} P(d|c) P(c)    (4.5)
Without loss of generality, we can represent a document d as a set of features f_1, f_2, ..., f_n:

    ĉ = argmax_{c ∈ C} P(f_1, f_2, ..., f_n | c) P(c)    (4.6)
Unfortunately, Eq. 4.6 is still too hard to compute directly: without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets. Naive Bayes classifiers therefore make two simplifying assumptions.
The first is the bag of words assumption discussed intuitively above: we assume position doesn't matter, and that the word "love" has the same effect on classification whether it occurs as the 1st, 20th, or last word in the document. Thus we assume that the features f_1, f_2, ..., f_n only encode word identity and not position.
The second is commonly called the naive Bayes assumption: this is the conditional independence assumption that the probabilities P(f_i|c) are independent given the class c and hence can be 'naively' multiplied as follows:
    P(f_1, f_2, ..., f_n | c) = P(f_1|c) · P(f_2|c) · ... · P(f_n|c)    (4.7)
The final equation for the class chosen by a naive Bayes classifier is thus:
    c_NB = argmax_{c ∈ C} P(c) ∏_{f ∈ F} P(f|c)    (4.8)
To apply the naive Bayes classifier to text, we need to consider word positions, by simply walking an index through every word position in the document:
positions ← all word positions in test document

    c_NB = argmax_{c ∈ C} P(c) ∏_{i ∈ positions} P(w_i|c)    (4.9)
Naive Bayes calculations, like calculations for language modeling, are done in log space, to avoid underflow and increase speed. Thus Eq. 4.9 is generally instead expressed as:

    c_NB = argmax_{c ∈ C} [ log P(c) + Σ_{i ∈ positions} log P(w_i|c) ]    (4.10)
By considering features in log space, Eq. 4.10 computes the predicted class as a linear function of input features. Classifiers that use a linear combination of the inputs to make a classification decision, like naive Bayes and also logistic regression, are called linear classifiers.
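To see the linearity concretely, the log-space score for each class is a bias term (the log prior) plus a dot product between the document's word counts and a vector of log likelihood weights. A minimal sketch with made-up toy parameters, not values from the text:

    import math
    from collections import Counter

    # Toy log-space parameters (illustrative values only).
    log_prior = {"+": math.log(0.4), "-": math.log(0.6)}
    log_likelihood = {
        "+": {"fun": math.log(0.05), "boring": math.log(0.01)},
        "-": {"fun": math.log(0.005), "boring": math.log(0.05)},
    }

    def score(doc_words, c):
        """log P(c) plus the dot product of word counts with log P(w|c): linear in the counts."""
        counts = Counter(w for w in doc_words if w in log_likelihood[c])
        return log_prior[c] + sum(n * log_likelihood[c][w] for w, n in counts.items())

    doc = ["fun", "fun", "boring"]
    print(max(log_prior, key=lambda c: score(doc, c)))  # '+' wins for this toy document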
4.2 Training the Naive Bayes Classifier
How can we learn the probabilities P(c) and P(f_i|c)? Let's first consider the maximum likelihood estimate. We'll simply use the frequencies in the data. For the class prior P(c) we ask what percentage of the documents in our training set are in each class c. Let N_c be the number of documents in our training data with class c and N_doc be the total number of documents. Then:
    P̂(c) = N_c / N_doc    (4.11)
To learn the probability P(f_i|c), we'll assume a feature is just the existence of a word in the document's bag of words, and so we'll want P(w_i|c), which we compute as the fraction of times the word w_i appears among all words in all documents of topic c. We first concatenate all documents with category c into one big "category c" text. Then we use the frequency of w_i in this concatenated document to give a maximum likelihood estimate of the probability:
    P̂(w_i|c) = count(w_i, c) / Σ_{w ∈ V} count(w, c)    (4.12)
Here the vocabulary V consists of the union of all the word types in all classes, not just the words in one class c.
There is a problem, however, with maximum likelihood training. Imagine we are trying to estimate the likelihood of the word "fantastic" given class positive, but suppose there are no training documents that both contain the word "fantastic" and are classified as positive. Perhaps the word "fantastic" happens to occur (sarcastically?) in the class negative. In such a case the probability for this feature will be zero:

    P̂("fantastic"|positive) = count("fantastic", positive) / Σ_{w ∈ V} count(w, positive) = 0    (4.13)
But since naive Bayes naively multiplies all the feature likelihoods together, zero probabilities in the likelihood term for any class will cause the probability of the class to be zero, no matter the other evidence! The simplest solution is the add-one (Laplace) smoothing introduced in Chapter 3. While Laplace smoothing is usually replaced by more sophisticated smoothing algorithms in language modeling, it is commonly used in naive Bayes text categorization:

    P̂(w_i|c) = (count(w_i, c) + 1) / Σ_{w ∈ V} (count(w, c) + 1) = (count(w_i, c) + 1) / (Σ_{w ∈ V} count(w, c) + |V|)    (4.14)
Note once again that it is crucial that the vocabulary V consists of the union of all the word types in all classes, not just the words in one class c (try to convince yourself why this must be true; see the exercise at the end of the chapter). What do we do about words that occur in our test data but are not in our vocabulary at all because they did not occur in any training document in any class? The solution for such unknown words is to ignore them: remove them from the test document and not include any probability for them at all.
Finally, some systems choose to completely ignore another class of words: stop words, very frequent words like the and a. This can be done by sorting the vocabulary by frequency in the training set, and defining the top 10-100 vocabulary entries as stop words, or alternatively by using one of the many predefined stop word lists available online. Then each instance of these stop words is simply removed from both training and test documents as if it had never occurred. In most text classification applications, however, using a stop word list doesn't improve performance, and so it is more common to make use of the entire vocabulary and not use a stop word list. Fig. 4.2 shows the final algorithm.
function TRAIN NAIVE BAYES(D, C) returns log P(c) and log P(w|c)
  for each class c ∈ C                                # Calculate P(c) terms
    N_doc = number of documents in D
    N_c = number of documents from D in class c
    logprior[c] ← log (N_c / N_doc)
    V ← vocabulary of D
    bigdoc[c] ← append(d) for d ∈ D with class c
    for each word w in V                              # Calculate P(w|c) terms
      count(w,c) ← # of occurrences of w in bigdoc[c]
      loglikelihood[w,c] ← log ( (count(w,c) + 1) / Σ_{w' in V} (count(w',c) + 1) )
  return logprior, loglikelihood, V

function TEST NAIVE BAYES(testdoc, logprior, loglikelihood, C, V) returns best c
  for each class c ∈ C
    sum[c] ← logprior[c]
    for each position i in testdoc
      word ← testdoc[i]
      if word ∈ V
        sum[c] ← sum[c] + loglikelihood[word,c]
  return argmax_c sum[c]
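For readers who prefer running code, here is a sketch of the same algorithm in Python. This is not a reference implementation from the text; the function and variable names are my own, but the logic follows the pseudocode (add-1 smoothing, unknown test words skipped):

    import math
    from collections import Counter

    def train_naive_bayes(documents, labels):
        """documents: list of token lists; labels: parallel list of class labels."""
        vocab = {w for doc in documents for w in doc}
        classes = set(labels)
        logprior, loglikelihood = {}, {}
        for c in classes:
            docs_c = [d for d, y in zip(documents, labels) if y == c]
            logprior[c] = math.log(len(docs_c) / len(documents))
            counts = Counter(w for d in docs_c for w in d)      # word counts in bigdoc[c]
            denom = sum(counts.values()) + len(vocab)           # add-1 smoothing denominator
            for w in vocab:
                loglikelihood[(w, c)] = math.log((counts[w] + 1) / denom)
        return logprior, loglikelihood, vocab

    def test_naive_bayes(testdoc, logprior, loglikelihood, vocab):
        """Return the class maximizing log P(c) + sum of log P(w|c); out-of-vocabulary words are ignored."""
        scores = {c: lp + sum(loglikelihood[(w, c)] for w in testdoc if w in vocab)
                  for c, lp in logprior.items()}
        return max(scores, key=scores.get)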
4.3 Worked Example
Let's walk through an example of training and testing naive Bayes with add-one smoothing. We'll use a sentiment analysis domain with the two classes positive (+) and negative (-), and take the following miniature training and test documents simplified from actual movie reviews:

    Training  -  just plain boring
              -  entirely predictable and lacks energy
              -  no surprises and very few laughs
              +  very powerful
              +  the most fun film of the summer
    Test      ?  predictable with no fun

The prior P(c) for the two classes is computed via Eq. 4.11 as N_c / N_doc:

    P(−) = 3/5    P(+) = 2/5

The word with doesn't occur in the training set, so we drop it completely (as mentioned above, we don't use unknown word models for naive Bayes). The likelihoods from the training set for the remaining three words "predictable", "no", and "fun", are as follows, from Eq. 4.14 (computing the probabilities for the remainder of the words in the training set is left as an exercise for the reader):

    P("predictable"|−) = (1 + 1) / (14 + 20)    P("predictable"|+) = (0 + 1) / (9 + 20)
    P("no"|−) = (1 + 1) / (14 + 20)             P("no"|+) = (0 + 1) / (9 + 20)
    P("fun"|−) = (0 + 1) / (14 + 20)            P("fun"|+) = (1 + 1) / (9 + 20)
For the test sentence S = "predictable with no fun", after removing the word 'with', the chosen class, via Eq. 4.9, is therefore computed as follows:
    P(−) P(S|−) = 3/5 × (2 × 2 × 1) / 34³ = 6.1 × 10⁻⁵
    P(+) P(S|+) = 2/5 × (1 × 1 × 2) / 29³ = 3.2 × 10⁻⁵
The model thus predicts the class negative for the test sentence.
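This arithmetic is easy to check in a few lines of Python, a small sketch that just multiplies out the smoothed probabilities from Eq. 4.14:

    neg_tokens, pos_tokens, vocab_size = 14, 9, 20   # token and vocabulary counts from the training set

    def lik(count, class_tokens):
        # Add-one smoothed likelihood, Eq. 4.14.
        return (count + 1) / (class_tokens + vocab_size)

    # Counts of "predictable", "no", "fun" are (1, 1, 0) in the negative class and (0, 0, 1) in the positive class.
    p_neg = 3/5 * lik(1, neg_tokens) * lik(1, neg_tokens) * lik(0, neg_tokens)
    p_pos = 2/5 * lik(0, pos_tokens) * lik(0, pos_tokens) * lik(1, pos_tokens)
    print(p_neg, p_pos)   # about 6.1e-05 and 3.3e-05 (the text rounds the second to 3.2e-05); negative wins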
4.4 Optimizing for Sentiment Analysis
While standard naive Bayes text classification can work well for sentiment analysis, some small changes are generally employed that improve performance. First, for sentiment classification and a number of other text classification tasks, whether a word occurs or not seems to matter more than its frequency. Thus it often improves performance to clip the word counts in each document at 1 (see the end of the chapter for pointers to these results). This variant is called binary multinomial naive Bayes or binary NB. The variant uses the same Eq. 4.10 except that for each document we remove all duplicate words before concatenating them into the single big document. Fig. 4.3 shows an example in which a set of four documents (shortened and text-normalized for this example) are remapped to binary, with the modified counts shown in the table on the right. The example is worked without add-1 smoothing to make the differences clearer. Note that the resulting counts need not be 1; the word great has a count of 2 even for binary NB, because it appears in multiple documents.
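The clipping itself is a one-line change: deduplicate each document's tokens before pooling counts. A tiny sketch with made-up example documents:

    from collections import Counter

    docs = [["great", "movie", "great", "fun"],
            ["great", "plot", "no", "fun"]]          # made-up example documents

    multinomial = Counter(w for d in docs for w in d)        # raw counts: great -> 3
    binary = Counter(w for d in docs for w in set(d))        # clipped at 1 per document: great -> 2
    print(multinomial["great"], binary["great"])             # 3 2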
Figure 4.3 The four example documents, before per-document binarization:

    −  it was pathetic the worst part was the boxing scenes
    −  no plot twists or great scenes
    +  and satire and great plot twists
    +  great scenes great film

A second important addition commonly made when doing text classification for sentiment is to deal with negation. Consider the difference between I really like this movie (positive) and I didn't like this movie (negative). The negation expressed by didn't completely alters the inferences we draw from the predicate like. Similarly, negation can modify a negative word to produce a positive review (don't dismiss this film, doesn't let us get bored).
A very simple baseline that is commonly used in sentiment analysis to deal with negation is the following: during text normalization, prepend the prefix NOT_ to every word after a token of logical negation (n't, not, no, never) until the next punctuation mark. Thus the phrase

    didn't like this movie , but I

becomes

    didn't NOT_like NOT_this NOT_movie , but I

Newly formed 'words' like NOT_like, NOT_recommend will thus occur more often in negative documents and act as cues for negative sentiment, while words like NOT_bored, NOT_dismiss will acquire positive associations. We will return in Chapter 16 to the use of parsing to deal more accurately with the scope relationship between these negation words and the predicates they modify, but this simple baseline works quite well in practice.
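A rough sketch of that baseline (the negation-token set and the punctuation test are simplified assumptions):

    NEGATION = {"n't", "not", "no", "never"}
    PUNCTUATION = {".", ",", ";", ":", "!", "?"}

    def mark_negation(tokens):
        """Prepend NOT_ to every token after a negation word, until the next punctuation mark."""
        out, negating = [], False
        for tok in tokens:
            if tok in PUNCTUATION:
                negating = False
                out.append(tok)
            elif negating:
                out.append("NOT_" + tok)
            else:
                out.append(tok)
                if tok.lower() in NEGATION:
                    negating = True
        return out

    print(mark_negation("did n't like this movie , but I".split()))
    # ['did', "n't", 'NOT_like', 'NOT_this', 'NOT_movie', ',', 'but', 'I']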
Finally, in some situations we might have insufficient labeled training data to train accurate naive Bayes classifiers using all words in the training set to estimate positive and negative sentiment. In such cases we can instead derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon of Hu and Liu (2004a) and the MPQA Subjectivity Lexicon (Wilson et al., 2005). For example the MPQA subjectivity lexicon has 6885 words, 2718 positive and 4912 negative, each marked for whether it is strongly or weakly biased. Some samples of positive and negative words from the MPQA lexicon include:

    + : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great
    − : awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate

A common way to use lexicons in a naive Bayes classifier is to add a feature that is counted whenever a word from that lexicon occurs. Thus we might add a feature called 'this word occurs in the positive lexicon', and treat all instances of words in the positive lexicon as counts for that one feature, instead of counting each word separately. Similarly, we might add a second feature, 'this word occurs in the negative lexicon', counted for all instances of words in the negative lexicon. If we have lots of training data, and if the test data matches the training data, using just two features won't work as well as using all the words. But when training data is sparse or not representative of the test set, using dense lexicon features instead of sparse individual-word features may generalize better.
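A sketch of these two dense features; the tiny word lists below are placeholders, not the actual MPQA entries:

    POSITIVE_LEXICON = {"admirable", "beautiful", "great", "glee"}        # stand-in for a real lexicon
    NEGATIVE_LEXICON = {"awful", "bad", "catastrophe", "harsh", "hate"}   # stand-in for a real lexicon

    def lexicon_features(tokens):
        """Collapse individual words into two counts: positive-lexicon hits and negative-lexicon hits."""
        return {"pos_lexicon_count": sum(t in POSITIVE_LEXICON for t in tokens),
                "neg_lexicon_count": sum(t in NEGATIVE_LEXICON for t in tokens)}

    print(lexicon_features("a great film with an awful harsh ending".split()))
    # {'pos_lexicon_count': 1, 'neg_lexicon_count': 2}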
We'll return to this use of lexicons in Chapter 20, showing how these lexicons can be learned automatically, and how they can be applied to many other tasks beyond sentiment classification.
4.5 Naive Bayes for Other Text Classification Tasks
In the previous section we pointed out that naive Bayes doesn't require that our classifier use all the words in the training data as features. In fact features in naive Bayes can express any property of the input text we want.
Consider the task of spam detection, deciding if a particular piece of email is an example of spam (unsolicited bulk email), which was one of the first applications of naive Bayes to text classification (Sahami et al., 1998). A common solution here, rather than using all the words as individual features, is to predefine likely sets of words or phrases as features, combined with features that are not purely linguistic. For example the open-source SpamAssassin tool predefines features like the phrase "one hundred percent guaranteed", or the feature mentions millions of dollars, which is a regular expression that matches suspiciously large sums of money. But it also includes features like HTML has a low ratio of text to image area, that aren't purely linguistic and might require some sophisticated computation, or totally non-linguistic features about, say, the path that the email took to arrive. More sample SpamAssassin features:
• Email subject line is all capital letters
• Contains phrases of urgency like "urgent reply"
• Email subject line contains "online pharmaceutical"
• HTML has unbalanced "head" tags
• Claims you can be removed from the list

For other tasks, like language id, determining what language a given piece of text is written in, the most effective naive Bayes features are not words at all, but character n-grams, 2-grams ('zw'), 3-grams ('nya', ' Vo'), or 4-grams ('ie z', 'thei'), or, even simpler, byte n-grams, where instead of using the multibyte Unicode character representations called codepoints, we just pretend everything is a string of raw bytes. Because spaces count as a byte, byte n-grams can model statistics about the beginning or ending of words. A widely used naive Bayes system, langid.py (Lui and Baldwin, 2012) begins with all possible n-grams of lengths 1-4, using feature selection to winnow down to the most informative 7000 final features. Language ID systems are trained on multilingual text, such as Wikipedia (Wikipedia text in 68 different languages was used in (Lui and Baldwin, 2011)), or newswire. To make sure that this multilingual text correctly reflects different regions, dialects, and socioeconomic classes, systems also add Twitter text in many languages geotagged to many regions (important for getting world English dialects from countries with large Anglophone populations like Nigeria or India), Bible and Quran translations, slang websites like Urban Dictionary, corpora of African American Vernacular English (Blodgett et al., 2016), and so on (Jurgens et al., 2017).
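Here is a sketch of the character and byte n-gram representations themselves (an illustration, not langid.py's actual feature-extraction code):

    def char_ngrams(text, n):
        """All overlapping character n-grams in the text."""
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def byte_ngrams(text, n):
        """The same, but over the raw UTF-8 bytes, so multibyte characters and spaces count as bytes."""
        raw = text.encode("utf-8")
        return [raw[i:i + n] for i in range(len(raw) - n + 1)]

    print(char_ngrams("zwie", 3))        # ['zwi', 'wie']
    print(byte_ngrams("¿qué?", 2)[:3])   # [b'\xc2\xbf', b'\xbfq', b'qu']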
4.6 Naive Bayes as a Language Model
As we saw in the previous section, naive Bayes classifiers can use any sort of feature: dictionaries, URLs, email addresses, network features, phrases, and so on. But if, as in the previous section, we use only individual word features, and we use all of the words in the text (not a subset), then naive Bayes has an important similarity to language modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific unigram language models, in which the model for each class instantiates a unigram language model.
Since the likelihood features from the naive Bayes model assign a probability to each word P(word|c), the model also assigns a probability to each sentence:
    P(s|c) = ∏_{i ∈ positions} P(w_i|c)    (4.15)
Thus consider a naive Bayes model with the classes positive (+) and negative (-) and the following model parameters:
    w      P(w|+)   P(w|-)
    I      0.1      0.2
    love   0.1      0.001
    this   0.01     0.01
    fun    0.05     0.005
    film   0.1      0.1
    ...    ...      ...
Each of the two columns above instantiates a language model that can assign a probability to the sentence "I love this fun film":

    P("I love this fun film"|+) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 0.0000005
    P("I love this fun film"|−) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = 0.000000001
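A short sketch reproducing this computation from the table above:

    probs = {"+": {"I": 0.1, "love": 0.1, "this": 0.01, "fun": 0.05, "film": 0.1},
             "-": {"I": 0.2, "love": 0.001, "this": 0.01, "fun": 0.005, "film": 0.1}}

    def sentence_likelihood(words, c):
        """P(s|c) as the product of per-word unigram probabilities (Eq. 4.15)."""
        p = 1.0
        for w in words:
            p *= probs[c][w]
        return p

    sentence = "I love this fun film".split()
    print(sentence_likelihood(sentence, "+"), sentence_likelihood(sentence, "-"))  # about 5e-07 and 1e-09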
As it happens, the positive model assigns a higher probability to the sentence: P(s|pos) > P(s|neg). Note that this is just the likelihood part of the naive Bayes model; once we multiply in the prior a full naive Bayes model might well make a different classification decision.
4.7 Evaluation: Precision, Recall, F-measure
To introduce the methods for evaluating text classification, let's first consider some simple binary detection tasks. For example, in spam detection, our goal is to label every text as being in the spam category ("positive") or not in the spam category ("negative"). For each item (email document) we therefore need to know whether our system called it spam or not. We also need to know whether the email is actually spam or not, i.e. the human-defined labels for each document that we are trying to match. We will refer to these human labels as the gold labels.
Or imagine you're the CEO of the Delicious Pie Company and you need to know what people are saying about your pies on social media, so you build a system that detects tweets concerning Delicious Pie. Here the positive class is tweets about Delicious Pie and the negative class is all other tweets.
In both cases, we need a metric for knowing how well our spam detector (or pie-tweet-detector) is doing. To evaluate any system for detecting things, we start by building a confusion matrix like the one shown in Fig. 4.4. A confusion matrix is a table for visualizing how an algorithm performs with respect to the human gold labels, using two dimensions (system output and gold labels), and each cell labeling a set of possible outcomes. In the spam detection case, for example, true positives are documents that are indeed spam (indicated by human-created gold labels) that our system correctly said were spam. False negatives are documents that are indeed spam but our system incorrectly labeled as non-spam.
To the bottom right of the table is the equation for accuracy, which asks what percentage of all the observations (for the spam or pie examples that means all emails or tweets) our system labeled correctly. Although accuracy might seem a natural metric, we generally don't use it for text classification tasks. That's because accuracy doesn't work well when the classes are unbalanced (as indeed they are with spam, which is a large majority of email, or with tweets, which are mainly not about pie). To make this more explicit, imagine that we looked at a million tweets, and let's say that only 100 of them are discussing their love (or hatred) for our pie, while the other 999,900 are tweets about something completely unrelated. Imagine a simple classifier that stupidly classified every tweet as "not about pie". This classifier would have 999,900 true negatives and only 100 false negatives for an accuracy of 999,900/1,000,000 or 99.99%! What an amazing accuracy level! Surely we should be happy with this classifier? But of course this fabulous 'no pie' classifier would be completely useless, since it wouldn't find a single one of the customer comments we are looking for. In other words, accuracy is not a good metric when the goal is to discover something that is rare, or at least not completely balanced in frequency, which is a very common situation in the world.
That's why instead of accuracy we generally turn to two other metrics shown in Fig. 4.4: precision and recall. Precision measures the percentage of the items that the system detected (i.e., labeled as positive) which are in fact positive according to the human gold labels: precision = true positives / (true positives + false positives). Recall measures the percentage of items actually present in the input that were correctly identified by the system: recall = true positives / (true positives + false negatives). Precision and recall will help solve the problem with the useless "nothing is pie" classifier. This classifier, despite having a fabulous accuracy of 99.99%, has a terrible recall of 0 (since there are no true positives, and 100 false negatives, the recall is 0/100). You should convince yourself that the precision at finding relevant tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize true positives: finding the things that we are supposed to be looking for.
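The pie example is easy to reproduce from the confusion-matrix counts (a small sketch):

    def accuracy(tp, fp, fn, tn):
        return (tp + tn) / (tp + fp + fn + tn)

    def precision(tp, fp):
        return tp / (tp + fp) if tp + fp else 0.0   # undefined when nothing is predicted positive

    def recall(tp, fn):
        return tp / (tp + fn) if tp + fn else 0.0

    # The "nothing is pie" classifier on 1,000,000 tweets, 100 of which are about pie:
    tp, fp, fn, tn = 0, 0, 100, 999_900
    print(accuracy(tp, fp, fn, tn))   # 0.9999
    print(recall(tp, fn))             # 0.0
    print(precision(tp, fp))          # 0.0 (it never predicts the positive class)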
There are many ways to define a single metric that incorporates aspects of both precision and recall. The simplest of these combinations is the F-measure (van Rijsbergen, 1975), defined as:
    F_β = (β² + 1) P R / (β² P + R)

The β parameter differentially weights the importance of recall and precision, based perhaps on the needs of an application. Values of β > 1 favor recall, while values of β < 1 favor precision. When β = 1, precision and recall are equally balanced; this is the most frequently used metric, and is called F_{β=1} or just F_1:
    F_1 = 2 P R / (P + R)    (4.16)
F-measure comes from a weighted harmonic mean of precision and recall. The harmonic mean of a set of numbers is the reciprocal of the arithmetic mean of reciprocals:
    HarmonicMean(a_1, a_2, a_3, a_4, ..., a_n) = n / (1/a_1 + 1/a_2 + 1/a_3 + ... + 1/a_n)    (4.17)
and hence F-measure is
    F = 1 / ( α (1/P) + (1 − α) (1/R) )    or, with β² = (1 − α)/α:    F = (β² + 1) P R / (β² P + R)    (4.18)
Harmonic mean is used because it is a conservative metric; the harmonic mean of two values is closer to the minimum of the two values than the arithmetic mean is. Thus it weighs the lower of the two numbers more heavily.
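In code, Eq. 4.18 is a one-liner (a sketch):

    def f_measure(precision, recall, beta=1.0):
        """Weighted harmonic mean of precision and recall; beta=1 gives F1."""
        return (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)

    print(f_measure(0.8, 0.4))           # 0.533..., closer to the lower value (recall) than the arithmetic mean 0.6
    print(f_measure(0.8, 0.4, beta=2))   # 0.444..., weighting recall even more heavily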
4.7.1 Evaluating with more than two classes
Up to now we have been describing text classification tasks with only two classes. But lots of classification tasks in language processing have more than two classes. For sentiment analysis we generally have 3 classes (positive, negative, neutral) and even more classes are common for tasks like part-of-speech tagging, word sense disambiguation, semantic role labeling, emotion detection, and so on. Luckily the naive Bayes algorithm is already a multi-class classification algorithm. But we'll need to slightly modify our definitions of precision and recall. Consider the sample confusion matrix for a hypothetical 3-way one-of email categorization decision (urgent, normal, spam) shown in Fig. 4.5. The matrix shows, for example, that the system mistakenly labeled one spam document as urgent, and we have shown how to compute a distinct precision and recall value for each class. In order to derive a single metric that tells us how well the system is doing, we can combine these values in two ways. In macroaveraging, we compute the performance for each class, and then average over classes. In microaveraging, we collect the decisions for all classes into a single confusion matrix, and then compute precision and recall from that table. Fig. 4.6 shows the confusion matrix for each class separately, and shows the computation of microaveraged and macroaveraged precision.
As the figure shows, a microaverage is dominated by the more frequent class (in this case spam), since the counts are pooled. The macroaverage better reflects the statistics of the smaller classes, and so is more appropriate when performance on all the classes is equally important.
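A sketch of the two averaging schemes, given per-class true-positive and false-positive counts (the numbers below are illustrative, not those of Fig. 4.6):

    # Per-class (true positives, false positives) for a 3-way classifier; illustrative counts.
    counts = {"urgent": (8, 10), "normal": (60, 55), "spam": (200, 33)}

    # Macroaverage: compute precision per class, then average the per-class values.
    macro_precision = sum(tp / (tp + fp) for tp, fp in counts.values()) / len(counts)

    # Microaverage: pool all counts into a single table, then compute precision once.
    total_tp = sum(tp for tp, _ in counts.values())
    total_fp = sum(fp for _, fp in counts.values())
    micro_precision = total_tp / (total_tp + total_fp)

    print(round(macro_precision, 2), round(micro_precision, 2))
    # 0.61 0.73 -- the microaverage is pulled toward the large, high-precision spam class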
4.8 Test sets and Cross-validation
The training and testing procedure for text classification follows what we saw with language modeling (Section 3.2): we use the training set to train the model, then use the development test set (also called a devset) to perhaps tune some parameters, and in general decide what the best model is. Once we come up with what we think is the best model, we run it on the (hitherto unseen) test set to report its performance.
While the use of a devset avoids overfitting the test set, having a fixed training set, devset, and test set creates another problem: in order to save lots of data for training, the test set (or devset) might not be large enough to be representative. Wouldn't it be better if we could somehow use all our data for training and still use all our data for test? We can do this by cross-validation.
In cross-validation, we choose a number k, and partition our data into k disjoint subsets called folds. Now we choose one of those k folds as a test set, train our classifier on the remaining k − 1 folds, and then compute the error rate on the test set. Then we repeat with another fold as the test set, again training on the other k − 1 folds. We do this sampling process k times and average the test set error rate from these k runs to get an average error rate. If we choose k = 10, we would train 10 different models (each on 90% of our data), test the model 10 times, and average these 10 values. This is called 10-fold cross-validation.
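A sketch of the k-fold loop; train and error_rate below are placeholders for whatever training and evaluation functions you are using:

    import random

    def cross_validation_error(data, train, error_rate, k=10, seed=0):
        """Split data into k folds; train on k-1 folds, test on the held-out fold, average the errors."""
        data = list(data)
        random.Random(seed).shuffle(data)
        folds = [data[i::k] for i in range(k)]   # k roughly equal, disjoint subsets
        errors = []
        for i in range(k):
            held_out = folds[i]
            training = [x for j, fold in enumerate(folds) if j != i for x in fold]
            model = train(training)              # placeholder: any training procedure
            errors.append(error_rate(model, held_out))
        return sum(errors) / k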
The only problem with cross-validation is that because all the data is used for testing, we need the whole corpus to be blind; we can't examine any of the data to suggest possible features and in general see what's going on, because we'd be peeking at the test set, and such cheating would cause us to overestimate the performance of our system. However, looking at the corpus to understand what's going on is important in designing NLP systems! What to do? For this reason, it is common to create a fixed training set and test set, then do 10-fold cross-validation inside the training set, but compute error rate the normal way in the test set, as shown in Fig. 4.7.
4.9 Statistical Significance Testing
In building systems we often need to compare the performance of two systems. How can we know if the new system we just built is better than our old one? Or better than some other system described in the literature? This is the domain of statistical hypothesis testing, and in this section we introduce tests for statistical significance for NLP classifiers, drawing especially on the work of Dror et al. (2020). Suppose we are comparing two classifiers on some metric M, such as F_1 or accuracy. Perhaps we want to know if our logistic regression sentiment classifier A (Chapter 5) gets a higher F_1 score than our naive Bayes sentiment classifier B on a particular test set x. Let's call M(A, x) the score that system A gets on test set x, and δ(x) the performance difference between A and B on x:
    δ(x) = M(A, x) − M(B, x)    (4.19)
We would like to know if δ(x) > 0, meaning that our logistic regression classifier has a higher F_1 than our naive Bayes classifier on x. δ(x) is called the effect size; a bigger δ means that A seems to be way better than B; a small δ means A seems to be only a little better. Why don't we just check if δ(x) is positive? Suppose we do, and we find that the F_1 score of A is higher than B's by .04. Can we be certain that A is better? We cannot! That's because A might just be accidentally better than B on this particular x. We need something more: we want to know if A's superiority over B is likely to hold again if we checked another test set x′, or under some other set of circumstances.
In the paradigm of statistical hypothesis testing, we test this by formalizing two hypotheses.
    H_0: δ(x) ≤ 0
    H_1: δ(x) > 0    (4.20)
The hypothesis H_0, called the null hypothesis, supposes that δ(x) is actually negative or zero, meaning that A is not better than B. We would like to know if we can confidently rule out this hypothesis, and instead support H_1, that A is better.
We do this by creating a random variable X ranging over all test sets. Now we ask how likely is it, if the null hypothesis H_0 was correct, that among these test sets we would encounter the value of δ(x) that we found. We formalize this likelihood as the p-value: the probability, assuming the null hypothesis H_0 is true, of seeing the δ(x) that we saw or one even greater:
    P(δ(X) ≥ δ(x) | H_0 is true)    (4.21)
So in our example, this p-value is the probability that we would see δ(x) assuming A is not better than B. If δ(x) is huge (let's say A has a very respectable F_1 of .9 and B has a terrible F_1 of only .2 on x), we might be surprised, since that would be extremely unlikely to occur if H_0 were in fact true, and so the p-value would be low (unlikely to have such a large δ if A is in fact not better than B). But if δ(x) is very small, it might be less surprising to us even if H_0 were true and A is not really better than B, and so the p-value would be higher.
A very small p-value means that the difference we observed is very unlikely under the null hypothesis, and we can reject the null hypothesis. What counts as very small? It is common to use values like .05 or .01 as the thresholds. A value of .01 means that if the p-value (the probability of observing the δ we saw assuming H_0 is true) is less than .01, we reject the null hypothesis and assume that A is indeed better than B. We say that a result (e.g., "A is better than B") is statistically significant if the δ we saw has a probability that is below the threshold and we therefore reject this null hypothesis. How do we compute this probability we need for the p-value? In NLP we generally don't use simple parametric tests like t-tests or ANOVAs that you might be familiar with. Parametric tests make assumptions about the distributions of the test statistic (such as normality) that don't generally hold in our cases. So in NLP we usually use non-parametric tests based on sampling: we artificially create many versions of the experimental setup. For example, if we had lots of different test sets x′ we could just measure all the δ(x′) for all the x′. That gives us a distribution. Now we set a threshold (like .01) and if we see in this distribution that 99% or more of those deltas are smaller than the delta we observed, i.e., that p-value(x), the probability of seeing a δ(x) as big as the one we saw, is less than .01, then we can reject the null hypothesis and agree that δ(x) was a sufficiently surprising difference and A is really a better algorithm than B.
There are two common non-parametric tests used in NLP: approximate randomization (Noreen, 1989) and the bootstrap test. We will describe the bootstrap test below, showing the paired version of the test, which again is most common in NLP. Paired tests are those in which we compare two sets of observations that are aligned: each observation in one set can be paired with an observation in another. This happens naturally when we are comparing the performance of two systems on the same test set; we can pair the performance of system A on an individual observation x_i with the performance of system B on the same x_i.
4.9.1 The Paired Bootstrap Test
The bootstrap test (Efron and Tibshirani, 1993) can apply to any metric, from precision, recall, or F1 to the BLEU metric used in machine translation. The word bootstrapping refers to repeatedly drawing large numbers of smaller samples with replacement (called bootstrap samples) from an original larger sample. The intuition of the bootstrap test is that we can create many virtual test sets from an observed test set by repeatedly sampling from it. The method only makes the assumption that the sample is representative of the population.
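Here is a sketch of the paired bootstrap test for a difference in accuracy between two systems on the same test set. Scoring each item as simply right or wrong is a simplifying assumption; any metric recomputed on each bootstrap sample works the same way:

    import random

    def paired_bootstrap_pvalue(correct_a, correct_b, num_samples=10_000, seed=0):
        """correct_a, correct_b: aligned 0/1 lists saying whether systems A and B got each test item right.
        Returns a one-sided p-value for H0: A is not better than B."""
        rng = random.Random(seed)
        n = len(correct_a)
        delta = (sum(correct_a) - sum(correct_b)) / n          # observed effect size delta(x)
        exceed = 0
        for _ in range(num_samples):
            idx = [rng.randrange(n) for _ in range(n)]         # bootstrap sample: n items drawn with replacement
            d = sum(correct_a[i] - correct_b[i] for i in idx) / n
            # The bootstrap samples come from a world whose expected delta is roughly delta(x), not 0,
            # so we count samples that beat the observed delta by at least delta(x) more
            # (i.e., d >= 2 * delta) to simulate how surprising the observed difference is under H0.
            if d >= 2 * delta:
                exceed += 1
        return exceed / num_samples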