4 Naive Bayes and Sentiment Classification
4.9 Statistical Significance Testing
4.9.1 The Paired Bootstrap Test
Consider a tiny text classification example with a test set x of 10 documents. The first row of Fig. 4.8 shows the results of two classifiers (A and B) on this test set, with each document labeled by one of the four possibilities: A and B both right, both wrong, A right and B wrong, A wrong and B right; in the figure, a slash through a letter means that that classifier got the answer wrong. On the first document both A and B get the correct class, while on the second document A got it right but B got it wrong. If we assume for simplicity that our metric is accuracy, A has an accuracy of .70 and B of .50, so δ(x) = .20. Now we create a large number b (perhaps 10^5) of virtual test sets x^(i), each of size n = 10; Fig. 4.8 shows a couple of examples. To create each virtual test set x^(i), we repeatedly (n = 10 times) select a cell from row x with replacement. For example, to create the first cell of the first virtual test set x^(1), if we happened to randomly select the second cell of the x row, we would copy that cell's value (A right, B wrong) into our new cell. We then move on to create the second cell of x^(1), and so on, each time sampling (randomly choosing) from the original x with replacement. Now that we have the b test sets, providing a sampling distribution, we can do statistics on how often A has an accidental advantage. There are various ways to compute this advantage; here we follow the version laid out in Berg-Kirkpatrick et al. (2012). Assuming H0 (A isn't better than B), we would expect that δ(X), estimated over many test sets, would be zero; a much higher value would be surprising, since H0 specifically assumes A isn't better than B. To measure exactly how surprising our observed δ(x) is, we would in other circumstances compute the p-value by counting over many test sets how often δ(x^(i)) exceeds the expected zero value by δ(x) or more:
[Figure 4.8: The paired bootstrap test. The first row shows the original test set x, with one cell per document (1-10) recording whether A and B classified it correctly, followed by A's accuracy (.70), B's accuracy (.50), and δ(x) (.20). The following rows show virtual test sets x^(1) (A% = .60, B% = .60, δ = .00), x^(2) (A% = .60, B% = .70, δ = -.10), and so on up to x^(b), each created by sampling 10 cells from x with replacement.]
\text{p-value}(x) = \frac{1}{b}\sum_{i=1}^{b} \mathbb{1}\left(\delta(x^{(i)}) - \delta(x) \ge 0\right)
(We use the notation 1(x) to mean "1 if x is true, and 0 otherwise".) However, although it's generally true that the expected value of δ(X) over many test sets (again assuming A isn't better than B) is 0, this isn't true for the bootstrapped test sets we created. That's because we didn't draw these samples from a distribution with 0 mean; we happened to create them from the original test set x, which happens to be biased (by .20) in favor of A. So to measure how surprising our observed δ(x) is, we actually compute the p-value by counting over many test sets how often δ(x^(i)) exceeds the expected value of δ(x) by δ(x) or more:
\text{p-value}(x) = \frac{1}{b}\sum_{i=1}^{b} \mathbb{1}\left(\delta(x^{(i)}) - \delta(x) \ge \delta(x)\right) = \frac{1}{b}\sum_{i=1}^{b} \mathbb{1}\left(\delta(x^{(i)}) \ge 2\delta(x)\right) \qquad (4.22)
So if, for example, we have 10,000 test sets x^(i) and a threshold of .01, and in only 47 of the test sets do we find that δ(x^(i)) ≥ 2δ(x), the resulting p-value of .0047 is smaller than .01, indicating δ(x) is indeed sufficiently surprising, and we can reject the null hypothesis and conclude A is better than B.
function BOOTSTRAP(test set x, num of samples b) returns p-value(x)
    Calculate δ(x)    # how much better does algorithm A do than B on x
    s = 0
    for i = 1 to b do
        for j = 1 to n do    # Draw a bootstrap sample x^(i) of size n
            Select a member of x at random and add it to x^(i)
        Calculate δ(x^(i))    # how much better does algorithm A do than B on x^(i)
        s ← s + 1 if δ(x^(i)) ≥ 2δ(x)
    p-value(x) ≈ s/b    # on what % of the b samples did algorithm A beat expectations?
    return p-value(x)   # if very few did, our observed δ is probably not accidental

Figure 4.9 A version of the paired bootstrap algorithm after Berg-Kirkpatrick et al. (2012).

The full algorithm for the bootstrap is shown in Fig. 4.9. It is given a test set x and a number of samples b, and counts the percentage of the b bootstrap test sets in which δ(x^(i)) ≥ 2δ(x). This percentage then acts as a one-sided empirical p-value.
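As a concrete illustration, the following is a minimal Python sketch of the paired bootstrap, under the assumption that each classifier's per-document correctness is available as a list of booleans and that accuracy is the metric; the function names and the toy data are our own, not from the text.

    import random

    def delta(a_correct, b_correct):
        """Difference in accuracy between classifiers A and B, given parallel
        lists of booleans (True = document classified correctly)."""
        n = len(a_correct)
        return sum(a_correct) / n - sum(b_correct) / n

    def paired_bootstrap(a_correct, b_correct, b_samples=100_000, seed=0):
        """One-sided empirical p-value for H0: A is not better than B."""
        rng = random.Random(seed)
        n = len(a_correct)
        observed = delta(a_correct, b_correct)   # delta(x) on the real test set
        s = 0
        for _ in range(b_samples):
            # Draw a virtual test set x^(i) of size n by sampling cells with replacement
            idx = [rng.randrange(n) for _ in range(n)]
            d_i = delta([a_correct[j] for j in idx], [b_correct[j] for j in idx])
            if d_i >= 2 * observed:              # A beats expectations on this sample
                s += 1
        return s / b_samples

    # Toy data in the spirit of Fig. 4.8: 10 documents, A accuracy .70, B accuracy .50
    a = [True, True, True, True, True, True, True, False, False, False]
    b = [True, False, True, False, True, False, True, True, False, False]
    print(delta(a, b))                     # 0.20
    print(paired_bootstrap(a, b, 10_000))  # with only 10 documents, don't expect significance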
4.10 Avoiding Harms in Classification
It is important to avoid harms that may result from classifiers, harms that exist both for naive Bayes classifiers and for the other classification algorithms we introduce in later chapters.
One class of harms is representational harms (Crawford 2017, Blodgett et al. 2020): harms caused by a system that demeans a social group, for example by perpetuating negative stereotypes about them. For example, Kiritchenko and Mohammad (2018) examined the performance of 200 sentiment analysis systems on pairs of sentences that were identical except for containing either a common African American first name (like Shaniqua) or a common European American first name (like Stephanie), chosen from the Caliskan et al. (2017) study discussed in Chapter 6. They found that most systems assigned lower sentiment and more negative emotion to sentences with African American names, reflecting and perpetuating stereotypes that associate African Americans with negative emotions (Popp et al., 2003). In other tasks classifiers may lead to both representational harms and other harms, such as censorship. For example, the important text classification task of toxicity detection is the task of detecting hate speech, abuse, harassment, or other kinds of toxic language. While the goal of such classifiers is to help reduce societal harm, toxicity classifiers can themselves cause harms. For example, researchers have shown that some widely used toxicity classifiers incorrectly flag as toxic sentences that are non-toxic but simply mention minority identities like women (Park et al., 2018), blind people (Hutchinson et al., 2020), or gay people (Dixon et al., 2018), or simply use linguistic features characteristic of varieties like African-American Vernacular English (Sap et al. 2019, Davidson et al. 2019). Such false positive errors, if employed by toxicity detection systems without human oversight, could lead to the censoring of discourse by or about these groups.
These model problems can be caused by biases or other problems in the training data; in general, machine learning systems replicate and even amplify the biases in their training data. But these problems can also be caused by the labels (for example due to biases in the human labelers), by the resources used (like lexicons, or model components like pretrained embeddings), or even by model architecture (like what the model is trained to optimize). While the mitigation of these biases (for example by carefully considering the training data sources) is an important area of research, we currently don't have general solutions. For this reason it's important, when introducing any NLP model, to study these kinds of factors and make them clear. One way to do this is by releasing a model card (Mitchell et al., 2019) for each version of a model. A model card documents a machine learning model with information like:
• training algorithms and parameters
• training data sources, motivation, and preprocessing
• evaluation data sources, motivation, and preprocessing
• intended use and users
• model performance across different demographic or other groups and environmental situations
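For concreteness, a model card might be kept alongside the released model as a simple structured record; the following minimal Python sketch is purely illustrative (the fields follow the list above, but the model name, values, and structure are hypothetical, not a standard format).

    # A minimal, hypothetical model card for an imaginary sentiment classifier.
    model_card = {
        "model": "naive-bayes-sentiment-v1",   # hypothetical model name and version
        "training_algorithm": "multinomial naive Bayes, add-1 smoothing, binarized counts",
        "training_data": "movie review corpus (source, motivation, preprocessing documented here)",
        "evaluation_data": "held-out review test set (source, motivation, preprocessing)",
        "intended_use": "research on sentiment classification; not for automated moderation",
        "performance_by_group": {
            # report metrics broken down by demographic or other groups and environments
            "overall_accuracy": None,
            "by_language_variety": None,
        },
    }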
4.11 Summary
This chapter introduced the naive Bayes model for classification and applied it to the text categorization task of sentiment analysis.
• Many language processing tasks can be viewed as tasks of classification.
• Text categorization, in which an entire text is assigned a class from a finite set, includes such tasks as sentiment analysis, spam detection, language identification, and authorship attribution.
• Sentiment analysis classifies a text as reflecting the positive or negative orientation (sentiment) that a writer expresses toward some object.
• Naive Bayes is a generative model that makes the bag-of-words assumption (position doesn't matter) and the conditional independence assumption (words are conditionally independent of each other given the class).
• Naive Bayes with binarized features seems to work better for many text classification tasks.
• Classifiers are evaluated based on precision and recall.
• Classifiers are trained using distinct training, dev, and test sets, including the use of cross-validation in the training set.
• Statistical significance tests should be used to determine whether we can be confident that one version of a classifier is better than another.
• Designers of classifiers should carefully consider harms that may be caused by the model, including its training data and other components, and report model characteristics in a model card.
4.12 Bibliographical and Historical Notes
Multinomial naive Bayes text classification was proposed by Maron (1961) at the RAND Corporation for the task of assigning subject categories to journal abstracts. His model introduced most of the features of the modern form presented here, approximating the classification task with one-of categorization, and implementing add-δ smoothing and information-based feature selection.
The conditional independence assumptions of naive Bayes and the idea of Bayesian analysis of text seem to have arisen multiple times. The same year as Maron's paper, Minsky (1961) proposed a naive Bayes classifier for vision and other artificial intelligence problems, and Bayesian techniques were also applied to the text classification task of authorship attribution by Mosteller and Wallace (1963). It had long been known that Alexander Hamilton, John Jay, and James Madison wrote the anonymously-published Federalist papers in 1787-1788 to persuade New York to ratify the United States Constitution. Yet although some of the 85 essays were clearly attributable to one author or another, the authorship of 12 was in dispute between Hamilton and Madison. Mosteller and Wallace (1963) trained a Bayesian probabilistic model of the writing of Hamilton and another model on the writings of Madison, then computed the maximum-likelihood author for each of the disputed essays. Naive Bayes was first applied to spam detection in Heckerman et al. (1998).
Metsis et al. (2006), Pang et al. (2002), and Wang and Manning (2012) show that using boolean attributes with multinomial naive Bayes works better than full counts. Binary multinomial naive Bayes is sometimes confused with another variant of naive Bayes that also uses a binary representation of whether a term occurs in a document: multivariate Bernoulli naive Bayes. The Bernoulli variant instead estimates P(w|c) as the fraction of documents that contain a term, and includes a probability for whether a term is not in a document. McCallum and Nigam (1998) and Wang and Manning (2012) show that the multivariate Bernoulli variant of naive Bayes doesn't work as well as the multinomial algorithm for sentiment or other text tasks.
There are a variety of sources covering the many kinds of text classification tasks. For sentiment analysis see Pang and Lee (2008) and Liu and Zhang (2012). Stamatatos (2009) surveys authorship attribution algorithms. On language identification see Jauhiainen et al. (2018); Jaech et al. (2016) is an important early neural system. The task of newswire indexing was often used as a test case for text classification algorithms, based on the Reuters-21578 collection of newswire articles.
See Manning et al. (2008) and Aggarwal and Zhai (2012) on text classification; classification in general is covered in machine learning textbooks (Hastie et al. 2001, Witten and Frank 2005, Bishop 2006, Murphy 2012).
Non-parametric methods for computing statistical significance were used first in NLP in the MUC competition (Chinchor et al., 1993), and even earlier in speech recognition (Gillick and Cox 1989, Bisani and Ney 2004). Our description of the bootstrap draws on the description in Berg-Kirkpatrick et al. (2012). Recent work has focused on issues including multiple test sets and multiple metrics (Søgaard et al. 2014, Dror et al. 2017).
Feature selection is a method of removing features that are unlikely to generalize well. Features are generally ranked by how informative they are about the classification decision. A very common metric, information gain, tells us how many bits of information the presence of the word gives us for guessing the class. Other feature selection metrics include χ2, pointwise mutual information, and GINI index; see Yang and Pedersen (1997) for a comparison and Guyon and Elisseeff (2003) for an introduction to feature selection.
5 Logistic Regression
Detective stories are as littered with clues as texts are with words. Yet for the poor reader it can be challenging to know how to weigh the author's clues in order to perform the crucial classification task: deciding whodunnit.
In this chapter we introduce an algorithm that is admirably suited for discovering the link between features or cues and some particular outcome: logistic regression.
Indeed, logistic regression is one of the most important analytic tools in the social and natural sciences. In natural language processing, logistic regression is the baseline supervised machine learning algorithm for classification, and also has a very close relationship with neural networks. As we will see in Chapter 7, a neural network can be viewed as a series of logistic regression classifiers stacked on top of each other. Thus the classification and machine learning techniques introduced here will play an important role throughout the book.
Logistic regression can be used to classify an observation into one of two classes (like 'positive sentiment' and 'negative sentiment'), or into one of many classes. Because the mathematics for the two-class case is simpler, we'll describe this special case of logistic regression first in the next few sections, and then briefly summarize the use of multinomial logistic regression for more than two classes in Section 5.6.
We'll introduce the mathematics of logistic regression in the next few sections. But let's begin with some high-level issues.
Generative and Discriminative Classifiers: The most important difference between naive Bayes and logistic regression is that logistic regression is a discriminative classifier while naive Bayes is a generative classifier.
These are two very different frameworks for how to build a machine learning model. Consider a visual metaphor: imagine we're trying to distinguish dog images from cat images. A generative model would have the goal of understanding what dogs look like and what cats look like. You might literally ask such a model to 'generate', i.e., draw, a dog. Given a test image, the system then asks whether it's the cat model or the dog model that better fits (is less surprised by) the image, and chooses that as its label.
A discriminative model, by contrast, is only trying to learn to distinguish the classes (perhaps without learning much about them). So maybe all the dogs in the training data are wearing collars and the cats aren't. If that one feature neatly separates the classes, the model is satisfied. If you ask such a model what it knows about cats all it can say is that they don't wear collars.
More formally, recall that naive Bayes assigns a class c to a document d not by directly computing P(c|d) but by computing a likelihood and a prior:

\hat{c} = \operatorname*{argmax}_{c \in C} P(d|c)\,P(c) \qquad (5.1)

A generative model like naive Bayes makes use of this likelihood term, which expresses how to generate the features of a document if we knew it was of class c. By contrast a discriminative model in this text categorization scenario attempts to directly compute P(c|d). Perhaps it will learn to assign a high weight to document features that directly improve its ability to discriminate between possible classes, even if it couldn't generate an example of one of the classes.
Components of a probabilistic machine learning classifier: Like naive Bayes, logistic regression is a probabilistic classifier that makes use of supervised machine learning. Machine learning classifiers require a training corpus of m input/output pairs (x^(i), y^(i)). (We'll use superscripts in parentheses to refer to individual instances in the training set; for sentiment classification each instance might be an individual document to be classified.) A machine learning system for classification then has four components:
1. A feature representation of the input. For each input observation x^(i), this will be a vector of features [x_1, x_2, ..., x_n]. We will generally refer to feature i for input x^(j) as x_i^(j), sometimes simplified as x_i, but we will also see the notation f_i, f_i(x), or, for multiclass classification, f_i(c, x).
2. A classification function that computes ŷ, the estimated class, via p(y|x). In the next section we will introduce the sigmoid and softmax tools for classification.
3. An objective function for learning, usually involving minimizing error on training examples. We will introduce the cross-entropy loss function.
4. An algorithm for optimizing the objective function. We introduce the stochastic gradient descent algorithm.
Logistic regression has two phases:
training: we train the system (specifically the weights w and b) using stochastic gradient descent and the cross-entropy loss.
test: Given a test example x we compute p(y|x) and return whichever label (y = 1 or y = 0) has the higher probability.
5.1 Classification: the Sigmoid
The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x_1, x_2, ..., x_n] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class). We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is "positive sentiment" versus "negative sentiment", the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w_i is a real number, and is associated with one of the input features x_i. The weight w_i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is another real number that's added to the weighted inputs.
To make a decision on a test instance (after we've learned the weights in training), the classifier first multiplies each x_i by its weight w_i, sums up the weighted features, and adds the bias term b. The resulting single number z expresses the weighted sum of the evidence for the class.
z = \sum_{i=1}^{n} w_i x_i + b \qquad (5.2)
In the rest of the book we'll represent such sums using the dot product notation from linear algebra. The dot product of two vectors a and b, written as a · b, is the sum of the products of the corresponding elements of each vector. Thus the following is an equivalent formulation of Eq. 5.2:
z = w • x + b (5.3)
But note that nothing in Eq. 5.3 forces z to be a legal probability, that is, to lie between 0 and 1. In fact, since weights are real-valued, the output might even be negative; z ranges from −∞ to ∞. To create a probability, we'll pass z through the sigmoid function, σ(z). The sigmoid function (named because it looks like an s) is also called the logistic function, and gives logistic regression its name. The sigmoid has the following equation,
\sigma(z) = \frac{1}{1 + e^{-z}} = \frac{1}{1 + \exp(-z)} \qquad (5.4)
(For the rest of the book, we'll use the notation exp(x) to mean e^x.) The sigmoid has a number of advantages; it takes a real-valued number and maps it into the range [0, 1], which is just what we want for a probability. Because it is nearly linear around 0 but flattens toward the ends, it tends to squash outlier values toward 0 or 1. And it's differentiable, which as we'll see in Section 5.8 will be handy for learning.
We're almost there. If we apply the sigmoid to the sum of the weighted features, we get a number between 0 and 1. To make it a probability, we just need to make sure that the two cases, p(y = 1) and p(y = 0), sum to 1. We can do this as follows:
P(y = 1) = \sigma(w \cdot x + b) = \frac{1}{1 + \exp(-(w \cdot x + b))}

P(y = 0) = 1 - \sigma(w \cdot x + b) = 1 - \frac{1}{1 + \exp(-(w \cdot x + b))} = \frac{\exp(-(w \cdot x + b))}{1 + \exp(-(w \cdot x + b))} \qquad (5.5)
The sigmoid function has the property
1 - \sigma(x) = \sigma(-x) \qquad (5.6)
so we could also have expressed P(y = 0) as σ(−(w · x + b)).
Now we have an algorithm that, given an instance x, computes the probability P(y = 1|x). How do we make a decision? For a test instance x, we say yes if the probability P(y = 1|x) is more than .5, and no otherwise. We call .5 the decision boundary:
\text{decision}(x) = \begin{cases} 1 & \text{if } P(y = 1|x) > 0.5 \\ 0 & \text{otherwise} \end{cases}
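To make this concrete, here is a minimal Python sketch of the sigmoid classifier's decision rule; the function names are our own, and in practice the weights w and bias b would come from training (Section 5.2).

    import numpy as np

    def sigmoid(z):
        """Map a real-valued score z into the range (0, 1)."""
        return 1.0 / (1.0 + np.exp(-z))

    def predict_prob(w, x, b):
        """P(y = 1 | x) for weight vector w, feature vector x, and bias b (Eq. 5.5)."""
        return sigmoid(np.dot(w, x) + b)

    def decide(w, x, b, threshold=0.5):
        """Return 1 if P(y = 1 | x) exceeds the decision boundary, else 0."""
        return 1 if predict_prob(w, x, b) > threshold else 0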
5.1.1 Example: Sentiment Classification
Let's have an example. Suppose we are doing binary sentiment classification on movie review text, and we would like to know whether to assign the sentiment class + or − to a review document doc. We'll represent each input observation by the 6 features x_1 ... x_6 of the input, shown in the following table along with their values for the sample mini test document of Fig. 5.2:

x_1  count(positive lexicon words ∈ doc)          3
x_2  count(negative lexicon words ∈ doc)          2
x_3  1 if "no" ∈ doc, 0 otherwise                 1
x_4  count(1st and 2nd person pronouns ∈ doc)     3
x_5  1 if "!" ∈ doc, 0 otherwise                  0
x_6  log(word count of doc)                       ln(66) = 4.19
Let's assume for the moment that we've already learned a real-valued weight for each of these features, and that the 6 weights corresponding to the 6 features are [2.5, −5.0, −1.2, 0.5, 2.0, 0.7], while b = 0.1. (We'll discuss in the next section how the weights are learned.) The weight w_1, for example, indicates how important a feature the number of positive lexicon words (great, nice, enjoyable, etc.) is to a positive sentiment decision, while w_2 tells us the importance of negative lexicon words. Note that w_1 = 2.5 is positive, while w_2 = −5.0, meaning that negative words are negatively associated with a positive sentiment decision, and are about twice as important as positive words.
Given these 6 features and the input review x, P(+|x) and P(−|x) can be computed using Eq. 5.5:
p(+|x) = P(y = 1|x) = \sigma(w \cdot x + b)
       = \sigma([2.5, -5.0, -1.2, 0.5, 2.0, 0.7] \cdot [3, 2, 1, 3, 0, 4.19] + 0.1)
       = \sigma(0.833)
       = 0.70 \qquad (5.7)

p(-|x) = P(y = 0|x) = 1 - \sigma(w \cdot x + b) = 0.30
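As a quick check of the arithmetic in Eq. 5.7, the following NumPy snippet reproduces these numbers:

    import numpy as np

    w = np.array([2.5, -5.0, -1.2, 0.5, 2.0, 0.7])
    x = np.array([3, 2, 1, 3, 0, 4.19])
    b = 0.1

    z = np.dot(w, x) + b           # 0.833
    p_pos = 1 / (1 + np.exp(-z))   # sigma(z), about 0.70 = P(+|x)
    p_neg = 1 - p_pos              # about 0.30 = P(-|x)
    print(round(float(z), 3), round(float(p_pos), 2), round(float(p_neg), 2))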
Logistic regression is commonly applied to all sorts of NLP tasks, and any property of the input can be a feature. Consider the task of period disambiguation: deciding if a period is the end of a sentence or part of a word, by classifying each period into one of two classes, EOS (end-of-sentence) and not-EOS. We might use features like x_1 below expressing that the current word is lower case (perhaps with a positive weight), or that the current word is in our abbreviations dictionary ("Prof.") (perhaps with a negative weight). A feature can also express a quite complex combination of properties. For example a period following an upper case word is likely to be an EOS, but if the word itself is St. and the previous word is capitalized, then the period is likely part of a shortening of the word street.
x 1 = 1 if "Case(w i ) = Lower" 0 otherwise
5
Logistic Regression
5.1
Classification: the Sigmoid
5.1.1
Example: Sentiment Classification
x 2 = 1 if "w i ∈ AcronymDict" 0 otherwise x 3 = 1 if "w i = St. & Case(w i−1 ) = Cap" 0 otherwise
Designing features: Features are generally designed by examining the training set with an eye to linguistic intuitions and the linguistic literature on the domain. A careful error analysis on the training set or devset of an early version of a system often provides insights into features.
For some tasks it is especially helpful to build complex features that are combinations of more primitive features. We saw such a feature for period disambiguation above, where a period on the word St. was less likely to be the end of the sentence if the previous word was capitalized. For logistic regression and naive Bayes these combination features or feature interactions have to be designed by hand.
For many tasks (especially when feature values can reference specific words) we'll need large numbers of features. Often these are created automatically via feature templates, abstract specifications of features. For example a bigram template for period disambiguation might create a feature for every pair of words that occurs before a period in the training set. Thus the feature space is sparse, since we only have to create a feature if that n-gram exists in that position in the training set. The feature is generally created as a hash from the string description. A user description of a feature, like "bigram(American breakfast)", is hashed into a unique integer i that becomes the feature number f_i.
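For illustration, here is one simple way such string feature descriptions could be mapped to feature numbers; the table size and the MD5-based hash are our own assumptions for the sketch (collisions are possible but rare for a large table, and real systems often use a faster dedicated hash function).

    import hashlib

    NUM_FEATURES = 2 ** 20   # assumed size of the sparse feature space

    def feature_index(description):
        """Hash a string feature description, e.g. 'bigram(American breakfast)',
        into a feature number in [0, NUM_FEATURES)."""
        digest = hashlib.md5(description.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_FEATURES

    i = feature_index("bigram(American breakfast)")   # a stable integer feature id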
In order to avoid the extensive human effort of feature design, recent research in NLP has focused on representation learning: ways to learn features automatically in an unsupervised way from the input. We'll introduce methods for representation learning in Chapter 6 and Chapter 7.
Choosing a classifier: Logistic regression has a number of advantages over naive Bayes. Naive Bayes has overly strong conditional independence assumptions. Consider two features which are strongly correlated; in fact, imagine that we just add the same feature f_1 twice. Naive Bayes will treat both copies of f_1 as if they were separate, multiplying them both in, overestimating the evidence. By contrast, logistic regression is much more robust to correlated features; if two features f_1 and f_2 are perfectly correlated, regression will simply assign part of the weight to w_1 and part to w_2. Thus when there are many correlated features, logistic regression will assign a more accurate probability than naive Bayes. So logistic regression generally works better on larger documents or datasets and is a common default.
Despite the less accurate probabilities, naive Bayes still often makes the correct classification decision. Furthermore, naive Bayes can work extremely well (sometimes even better than logistic regression) on very small datasets (Ng and Jordan, 2002) or short documents (Wang and Manning, 2012). In addition, naive Bayes is easy to implement and very fast to train (there's no optimization step). So it's still a reasonable approach to use in some situations.
5.2 Learning in Logistic Regression
How are the parameters of the model, the weights w and bias b, learned? Logistic regression is an instance of supervised classification in which we know the correct label y (either 0 or 1) for each observation x. What the system produces via Eq. 5.5 is ŷ, the system's estimate of the true y. We want to learn parameters (meaning w and b) that make ŷ for each training observation as close as possible to the true y.
This requires two components that we foreshadowed in the introduction to the chapter. The first is a metric for how close the current label (ŷ) is to the true gold label y. Rather than measure similarity, we usually talk about the opposite of this: the distance between the system output and the gold output, and we call this distance the loss function or the cost function. In the next section we'll introduce the loss function that is commonly used for logistic regression and also for neural networks, the cross-entropy loss.
The second thing we need is an optimization algorithm for iteratively updating the weights so as to minimize this loss function. The standard algorithm for this is gradient descent; we'll introduce the stochastic gradient descent algorithm in the following section.
5.3 The Cross-Entropy Loss Function
We need a loss function that expresses, for an observation x, how close the classifier output (ŷ = σ(w · x + b)) is to the correct output (y, which is 0 or 1). We'll call this:

L(ŷ, y) = how much ŷ differs from the true y \qquad (5.8)
We do this via a loss function that prefers the correct class labels of the training examples to be more likely. This is called conditional maximum likelihood estimation: we choose the parameters w, b that maximize the log probability of the true y labels in the training data given the observations x. The resulting loss function is the negative log likelihood loss, generally called the cross-entropy loss.
Let's derive this loss function, applied to a single observation x. We'd like to learn weights that maximize the probability of the correct label p(y|x). Since there are only two discrete outcomes (1 or 0), this is a Bernoulli distribution, and we can express the probability p(y|x) that our classifier produces for one observation as the following (keeping in mind that if y = 1, Eq. 5.9 simplifies to ŷ; if y = 0, Eq. 5.9 simplifies to 1 − ŷ):
p(y|x) = \hat{y}^{\,y}\,(1 - \hat{y})^{1-y} \qquad (5.9)
Now we take the log of both sides. This will turn out to be handy mathematically, and doesn't hurt us; whatever values maximize a probability will also maximize the log of the probability:
\log p(y|x) = \log\left[\hat{y}^{\,y}\,(1 - \hat{y})^{1-y}\right] = y \log \hat{y} + (1 - y)\log(1 - \hat{y}) \qquad (5.10)
Eq. 5.10 describes a log likelihood that should be maximized. In order to turn this into a loss function (something that we need to minimize), we'll just flip the sign on Eq. 5.10. The result is the cross-entropy loss L_CE:
L_{CE}(\hat{y}, y) = -\log p(y|x) = -\left[y \log \hat{y} + (1 - y)\log(1 - \hat{y})\right] \qquad (5.11)
Finally, we can plug in the definition of ŷ = σ(w · x + b):
L_{CE}(\hat{y}, y) = -\left[y \log \sigma(w \cdot x + b) + (1 - y)\log\bigl(1 - \sigma(w \cdot x + b)\bigr)\right] \qquad (5.12)
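A minimal Python sketch of this loss, assuming ŷ has already been computed with the sigmoid; the small epsilon clip is our own addition to avoid taking log(0).

    import numpy as np

    def cross_entropy_loss(y_hat, y, eps=1e-12):
        """L_CE(y_hat, y) = -[y log y_hat + (1 - y) log(1 - y_hat)]  (Eq. 5.11)."""
        y_hat = np.clip(y_hat, eps, 1 - eps)   # guard against log(0)
        return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))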
Let's see if this loss function does the right thing for our example from Fig. 5.2. We want the loss to be smaller if the model's estimate is close to correct, and bigger if the model is confused. So first let's suppose the correct gold label for the sentiment example in Fig. 5.2 is positive, i.e., y = 1. In this case our model is doing well, since from Eq. 5.7 it indeed gave the example a higher probability of being positive (.70) than negative (.30). If we plug σ(w · x + b) = .70 and y = 1 into Eq. 5.12, the right side of the equation drops out, leading to the following loss (we'll use log to mean natural log when the base is not specified):
L_{CE}(\hat{y}, y) = -\left[y \log \sigma(w \cdot x + b) + (1 - y)\log\bigl(1 - \sigma(w \cdot x + b)\bigr)\right]
                  = -\log \sigma(w \cdot x + b)
                  = -\log(.70)
                  = .36
By contrast, let's pretend instead that the example in Fig. 5.2 was actually negative, i.e., y = 0 (perhaps the reviewer went on to say "But bottom line, the movie is terrible! I beg you not to see it!"). In this case our model is confused and we'd want the loss to be higher. Now if we plug y = 0 and 1 − σ(w · x + b) = .30 from Eq. 5.7 into Eq. 5.12, the left side of the equation drops out:
L_{CE}(\hat{y}, y) = -\left[y \log \sigma(w \cdot x + b) + (1 - y)\log\bigl(1 - \sigma(w \cdot x + b)\bigr)\right]
                  = -\log\bigl(1 - \sigma(w \cdot x + b)\bigr)
                  = -\log(.30)
                  = 1.2
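As a quick numerical check of these two cases:

    import math

    loss_correct = -math.log(0.70)   # y = 1, model gives P(y = 1|x) = .70
    loss_wrong = -math.log(0.30)     # y = 0, model gives P(y = 0|x) = .30
    print(round(loss_correct, 2), round(loss_wrong, 2))   # 0.36 1.2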
Sure enough, the loss for the first classifier (.36) is less than the loss for the second classifier (1.2). Why does minimizing this negative log probability do what we want? A perfect classifier would assign probability 1 to the correct outcome (y = 1 or y = 0) and probability 0 to the incorrect outcome. That means the higher ŷ (the closer it is to 1), the better the classifier; the lower ŷ is (the closer it is to 0), the worse the classifier. The negative log of this probability is a convenient loss metric since it goes from 0 (negative log of 1, no loss) to infinity (negative log of 0, infinite loss). This loss function also ensures that as the probability of the correct answer is maximized, the probability of the incorrect answer is minimized; since the two sum to one, any increase in the probability of the correct answer is coming at the expense of the incorrect answer. It's called the cross-entropy loss, because Eq. 5.10 is also the formula for the cross-entropy between the true probability distribution y and our estimated distribution ŷ.
Now we know what we want to minimize; in the next section, we'll see how to find the minimum.
5.4 Gradient Descent
Our goal with gradient descent is to find the optimal weights: minimize the loss function we've defined for the model. In Eq. 5.13 below, we'll explicitly represent the fact that the loss function L is parameterized by the weights, which we'll refer to in machine learning in general as θ (in the case of logistic regression θ = w, b). So the goal is to find the set of weights which minimizes the loss function, averaged over all examples:
\hat{\theta} = \operatorname*{argmin}_{\theta} \frac{1}{m}\sum_{i=1}^{m} L_{CE}\bigl(f(x^{(i)}; \theta), y^{(i)}\bigr) \qquad (5.13)
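For concreteness, here is a minimal vectorized sketch of this objective in Python; the function name and array layout are our own choices, and how to actually minimize it with gradient descent is the subject of the rest of this section.

    import numpy as np

    def average_loss(w, b, X, y):
        """Mean cross-entropy loss over m examples (Eq. 5.13), with theta = (w, b).
        X is an (m, n) array of feature vectors; y is a length-m array of 0/1 labels."""
        z = X @ w + b                             # score for every example
        y_hat = 1 / (1 + np.exp(-z))              # sigma(z) = P(y = 1 | x) per example
        y_hat = np.clip(y_hat, 1e-12, 1 - 1e-12)  # guard against log(0)
        return float(np.mean(-(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))))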