5.4 Gradient Descent
How shall we find the minimum of this (or any) loss function? Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ) the function's slope is rising the most steeply, and moving in the opposite direction. The intuition is that if you are hiking in a canyon and trying to descend most quickly down to the river at the bottom, you might look around yourself 360 degrees, find the direction where the ground is sloping the steepest, and walk downhill in that direction.
For logistic regression, this loss function is conveniently convex. A convex function has just one minimum; there are no local minima to get stuck in, so gradient descent starting from any point is guaranteed to find the minimum. (By contrast, the loss for multi-layer neural networks is non-convex, and gradient descent may get stuck in local minima for neural network training and never find the global optimum.) Although the algorithm (and the concept of gradient) are designed for direction vectors, let's first consider a visualization of the case where the parameter of our system is just a single scalar w, shown in Fig. 5.3.
Given a random initialization of w at some value w^1, and assuming the loss function L happened to have the shape in Fig. 5.3, we need the algorithm to tell us whether at the next iteration we should move left (making w^2 smaller than w^1) or right (making w^2 bigger than w^1) to reach the minimum.

Figure 5.3: The first step in iteratively finding the minimum of this loss function, by moving w in the reverse direction from the slope of the function. Since the slope is negative, we need to move w in a positive direction, to the right. Here superscripts are used for learning steps, so w^1 means the initial value of w (which is 0), w^2 the value at the second step, and so on.
The gradient descent algorithm answers this question by finding the gradient of the loss function at the current point and moving in the opposite direction. The gradient of a function of many variables is a vector pointing in the direction of the greatest increase in a function. The gradient is a multi-variable generalization of the slope, so for a function of one variable like the one in Fig. 5.3, we can informally think of the gradient as the slope. The dotted line in Fig. 5.3 shows the slope of this hypothetical loss function at point w = w^1. You can see that the slope of this dotted line is negative. Thus to find the minimum, gradient descent tells us to go in the opposite direction: moving w in a positive direction. The magnitude of the amount to move in gradient descent is the value of the slope d/dw L(f(x; w), y) weighted by a learning rate η. A higher (faster) learning rate means that we should move w more on each step. The change we make in our parameter is the learning rate times the gradient (or the slope, in our single-variable example):
$$w^{t+1} = w^{t} - \eta \frac{d}{dw} L(f(x; w), y) \tag{5.14}$$
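To make the update concrete, here is a minimal Python sketch of the Eq. 5.14 update on a toy convex loss; the loss L(w) = (w − 2)² and the learning rate are illustrative choices of ours, not from the text:

```python
# Minimal sketch of the one-variable update w <- w - eta * dL/dw (Eq. 5.14).
# The toy loss L(w) = (w - 2)**2 is a hypothetical stand-in; its derivative
# is dL/dw = 2 * (w - 2), and the minimum is at w = 2.

def dL_dw(w):
    return 2 * (w - 2)

eta = 0.1    # learning rate
w = 0.0      # initial value w^1
for step in range(100):
    w = w - eta * dL_dw(w)   # move opposite the slope

print(w)     # approaches the minimum at w = 2
```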
Now let's extend the intuition from a function of one scalar variable w to many variables, because we don't just want to move left or right, we want to know where in the N-dimensional space (of the N parameters that make up θ) we should move. The gradient is just such a vector; it expresses the directional components of the sharpest slope along each of those N dimensions. If we're just imagining two weight dimensions (say for one weight w and one bias b), the gradient might be a vector with two orthogonal components, each of which tells us how much the ground slopes in the w dimension and in the b dimension. In an actual logistic regression, the parameter vector w is much longer than 1 or 2, since the input feature vector x can be quite long, and we need a weight w_i for each x_i. For each dimension/variable w_i in w (plus the bias b), the gradient will have a component that tells us the slope with respect to that variable. Essentially we're asking: "How much would a small change in that variable w_i influence the total loss function L?"
In each dimension w_i, we express the slope as a partial derivative ∂/∂w_i of the loss function. The gradient is then defined as a vector of these partials. We'll represent ŷ as f(x; θ) to make the dependence on θ more obvious:
$$\nabla_{\theta} L(f(x; \theta), y) = \begin{bmatrix} \frac{\partial}{\partial w_1} L(f(x; \theta), y) \\[4pt] \frac{\partial}{\partial w_2} L(f(x; \theta), y) \\ \vdots \\ \frac{\partial}{\partial w_n} L(f(x; \theta), y) \\[4pt] \frac{\partial}{\partial b} L(f(x; \theta), y) \end{bmatrix} \tag{5.15}$$
The final equation for updating θ based on the gradient is thus
$$\theta^{t+1} = \theta^{t} - \eta \nabla L(f(x; \theta), y) \tag{5.16}$$
5.4.1 The Gradient for Logistic Regression
In order to update θ, we need a definition for the gradient ∇L(f(x; θ), y). Recall that for logistic regression, the cross-entropy loss function is:
$$L_{CE}(\hat{y}, y) = -[y \log \sigma(w \cdot x + b) + (1 - y) \log(1 - \sigma(w \cdot x + b))] \tag{5.17}$$
It turns out that the derivative of this function for one observation vector x is Eq. 5.18 (the interested reader can see Section 5.8 for the derivation of this equation):
$$\frac{\partial L_{CE}(\hat{y}, y)}{\partial w_j} = [\sigma(w \cdot x + b) - y]\, x_j \tag{5.18}$$
Note in Eq. 5.18 that the gradient with respect to a single weight w_j represents a very intuitive value: the difference between the true y and our estimated ŷ = σ(w · x + b) for that observation, multiplied by the corresponding input value x_j.
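As a sketch, Eq. 5.18 (together with the corresponding derivative for the bias, whose "input" is implicitly 1) can be computed in a few lines of Python; the function names are our own:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_one_example(w, b, x, y):
    """Gradient of the cross-entropy loss for one observation (Eq. 5.18)."""
    error = sigmoid(np.dot(w, x) + b) - y   # sigma(w . x + b) - y
    return error * x, error                 # dL/dw_j = error * x_j; dL/db = error
```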
5.4.2 The Stochastic Gradient Descent Algorithm
Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (An "online algorithm" is one that processes its input example by example, rather than waiting until it sees the entire input.) Figure 5.5 shows the algorithm.

    function STOCHASTIC GRADIENT DESCENT(L(), f(), x, y) returns θ
        # where: L is the loss function
        #        f is a function parameterized by θ
        #        x is the set of training inputs x^(1), x^(2), ..., x^(m)
        #        y is the set of training outputs (labels) y^(1), y^(2), ..., y^(m)
        θ ← 0
        repeat til done                              # see caption
            For each training tuple (x^(i), y^(i)) (in random order)
                1. Optional (for reporting):         # How are we doing on this tuple?
                   Compute ŷ^(i) = f(x^(i); θ)       # What is our estimated output ŷ?
                   Compute the loss L(ŷ^(i), y^(i))  # How far off is ŷ^(i) from the true output y^(i)?
                2. g ← ∇_θ L(f(x^(i); θ), y^(i))     # How should we move θ to maximize loss?
                3. θ ← θ − η g                       # Go the other way instead
        return θ

Figure 5.5: The stochastic gradient descent algorithm.
Step 1 (computing the loss) is used to report how well we are doing on the current tuple. The algorithm can terminate when it converges (or when the gradient norm < ε), or when progress halts (for example when the loss starts going up on a held-out set).
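Here is a minimal Python sketch of the algorithm in Figure 5.5 for binary logistic regression; a fixed number of epochs stands in for the stopping criteria just discussed, and the function name is our own:

```python
import numpy as np

def sgd(X, Y, eta=0.1, epochs=10, seed=0):
    """Stochastic gradient descent for binary logistic regression.

    X: (m, n) array of training inputs; Y: length-m array of labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w, b = np.zeros(n), 0.0                                 # theta <- 0
    for _ in range(epochs):                                 # "repeat til done"
        for i in rng.permutation(m):                        # each tuple, in random order
            y_hat = 1 / (1 + np.exp(-(X[i] @ w + b)))       # step 1: estimated output
            g_w, g_b = (y_hat - Y[i]) * X[i], y_hat - Y[i]  # step 2: gradient (Eq. 5.18)
            w, b = w - eta * g_w, b - eta * g_b             # step 3: go the other way
    return w, b
```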
The learning rate η is a hyperparameter that must be adjusted. If it's too high, the learner will take steps that are too large, overshooting the minimum of the loss function. If it's too low, the learner will take steps that are too small, and take too long to get to the minimum. It is common to start with a higher learning rate and then slowly decrease it, so that it is a function of the iteration k of training; the notation η_k can be used to mean the value of the learning rate at iteration k.
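For instance, one common schedule (an illustrative choice of ours, not prescribed by the text) decays η hyperbolically with the iteration number:

```python
# A common decay schedule (hypothetical constants): eta_k shrinks as k grows.
def eta_k(k, eta0=0.5, decay=0.01):
    return eta0 / (1 + decay * k)
```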
We'll discuss hyperparameters in more detail in Chapter 7, but briefly they are a special kind of parameter for any machine learning model. Unlike regular parameters of a model (weights like w and b), which are learned by the algorithm from the training set, hyperparameters are special parameters chosen by the algorithm designer that affect how the algorithm works.
5.4.3 Working through an Example
Let's walk through a single step of the gradient descent algorithm. We'll use a simplified version of the example in Fig. 5.2 as it sees a single observation x, whose correct value is y = 1 (this is a positive review), and with only two features:
x_1 = 3 (count of positive lexicon words)
x_2 = 2 (count of negative lexicon words)

Let's assume the initial weights and bias in θ^0 are all set to 0, and the initial learning rate η is 0.1:
w_1 = w_2 = b = 0
η = 0.1
The single update step requires that we compute the gradient, multiplied by the learning rate:

$$\theta^{t+1} = \theta^{t} - \eta \nabla_{\theta} L(f(x^{(i)}; \theta), y^{(i)})$$
In our mini example there are three parameters, so the gradient vector has 3 dimensions, for w_1, w_2, and b. We can compute the first gradient as follows:
$$\nabla_{w,b} L = \begin{bmatrix} \frac{\partial L_{CE}(\hat{y},y)}{\partial w_1} \\[4pt] \frac{\partial L_{CE}(\hat{y},y)}{\partial w_2} \\[4pt] \frac{\partial L_{CE}(\hat{y},y)}{\partial b} \end{bmatrix} = \begin{bmatrix} (\sigma(w \cdot x + b) - y)x_1 \\ (\sigma(w \cdot x + b) - y)x_2 \\ \sigma(w \cdot x + b) - y \end{bmatrix} = \begin{bmatrix} (\sigma(0) - 1)x_1 \\ (\sigma(0) - 1)x_2 \\ \sigma(0) - 1 \end{bmatrix} = \begin{bmatrix} -0.5x_1 \\ -0.5x_2 \\ -0.5 \end{bmatrix} = \begin{bmatrix} -1.5 \\ -1.0 \\ -0.5 \end{bmatrix}$$
Now that we have a gradient, we compute the new parameter vector θ^1 by moving θ^0 in the opposite direction from the gradient:
$$\theta^{1} = \begin{bmatrix} w_1 \\ w_2 \\ b \end{bmatrix} - \eta \begin{bmatrix} -1.5 \\ -1.0 \\ -0.5 \end{bmatrix} = \begin{bmatrix} .15 \\ .1 \\ .05 \end{bmatrix}$$
So after one step of gradient descent, the weights have shifted to be: w_1 = .15, w_2 = .1, and b = .05.
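This single step can be checked numerically; a quick Python sketch that reproduces the computation above:

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = np.array([3.0, 2.0])            # x_1 = 3, x_2 = 2
y, eta = 1.0, 0.1                   # true label and learning rate
w, b = np.zeros(2), 0.0             # theta^0: all parameters start at 0

error = sigmoid(w @ x + b) - y      # sigma(0) - 1 = -0.5
grad = np.append(error * x, error)  # [-1.5, -1.0, -0.5]
theta1 = np.append(w, b) - eta * grad
print(theta1)                       # [0.15 0.1  0.05]
```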
Note that this observation x happened to be a positive example. We would expect that after seeing more negative examples with high counts of negative words, the weight w_2 would shift to have a negative value.
5.4.4 Mini-batch Training
Stochastic gradient descent is called stochastic because it chooses a single random example at a time, moving the weights so as to improve performance on that single example. That can result in very choppy movements, so it's common to compute the gradient over batches of training instances rather than a single instance.
For example, in batch training we compute the gradient over the entire dataset. By seeing so many examples, batch training offers a superb estimate of which direction to move the weights, at the cost of spending a lot of time processing every single example in the training set to compute this perfect direction.
A compromise is mini-batch training: we train on a group of m examples (perhaps 512, or 1024) that is less than the whole dataset. (If m is the size of the dataset, then we are doing batch gradient descent; if m = 1, we are back to doing stochastic gradient descent.) Mini-batch training also has the advantage of computational efficiency. The mini-batches can easily be vectorized, choosing the size of the mini-batch based on the computational resources. This allows us to process all the examples in one mini-batch in parallel and then accumulate the loss, something that's not possible with individual or batch training.

We just need to define mini-batch versions of the cross-entropy loss function we defined in Section 5.3 and the gradient in Section 5.4.1. Let's extend the cross-entropy loss for one example from Eq. 5.11 to mini-batches of size m. We'll continue to use the notation that x^(i) and y^(i) mean the ith training features and training label, respectively. We make the assumption that the training examples are independent:
$$\log p(\text{training labels}) = \log \prod_{i=1}^{m} p(y^{(i)}|x^{(i)}) = \sum_{i=1}^{m} \log p(y^{(i)}|x^{(i)}) = -\sum_{i=1}^{m} L_{CE}(\hat{y}^{(i)}, y^{(i)}) \tag{5.19}$$
Now the cost function for the mini-batch of m examples is the average loss for each example:
$$\text{Cost}(\hat{y}, y) = \frac{1}{m} \sum_{i=1}^{m} L_{CE}(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log \sigma(w \cdot x^{(i)} + b) + (1 - y^{(i)}) \log\left(1 - \sigma(w \cdot x^{(i)} + b)\right) \right] \tag{5.20}$$
The mini-batch gradient is the average of the individual gradients from Eq. 5.18:
$$\frac{\partial\, \text{Cost}(\hat{y}, y)}{\partial w_j} = \frac{1}{m} \sum_{i=1}^{m} \left[ \sigma(w \cdot x^{(i)} + b) - y^{(i)} \right] x_j^{(i)} \tag{5.21}$$
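Eq. 5.20 and Eq. 5.21 vectorize naturally; here is a sketch with numpy (the function name is ours):

```python
import numpy as np

def minibatch_cost_and_grad(w, b, X, Y):
    """Average cross-entropy cost (Eq. 5.20) and its gradient (Eq. 5.21)
    over a mini-batch X of shape (m, n) with labels Y of shape (m,)."""
    m = X.shape[0]
    y_hat = 1 / (1 + np.exp(-(X @ w + b)))     # all m predictions in parallel
    cost = -np.mean(Y * np.log(y_hat) + (1 - Y) * np.log(1 - y_hat))
    grad_w = X.T @ (y_hat - Y) / m             # average of the per-example gradients
    grad_b = np.mean(y_hat - Y)
    return cost, grad_w, grad_b
```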
5.5 Regularization
Numquam ponenda est pluralitas sine necessitate 'Plurality should never be proposed unless needed'
There is a problem with learning weights that make the model perfectly match the training data. If a feature is perfectly predictive of the outcome because it happens to only occur in one class, it will be assigned a very high weight. The weights for features will attempt to perfectly fit details of the training set, in fact too perfectly, modeling noisy factors that just accidentally correlate with the class. This problem is called overfitting. A good model should be able to generalize well from the training data to the unseen test set, but a model that overfits will have poor generalization.
To avoid overfitting, a new regularization term R(θ) is added to the objective function in Eq. 5.13, resulting in the following objective for a batch of m examples (slightly rewritten from Eq. 5.13 to be maximizing log probability rather than minimizing loss, and removing the 1/m term, which doesn't affect the argmax):
$$\hat{\theta} = \operatorname*{argmax}_{\theta} \sum_{i=1}^{m} \log P(y^{(i)}|x^{(i)}) - \alpha R(\theta) \tag{5.22}$$
The new regularization term R(θ) is used to penalize large weights. Thus a setting of the weights that matches the training data perfectly, but uses many weights with high values to do so, will be penalized more than a setting that matches the data a little less well, but does so using smaller weights. There are two common ways to compute this regularization term R(θ). L2 regularization is a quadratic function of the weight values, named because it uses the (square of the) L2 norm of the weight values. The L2 norm, ||θ||_2, is the same as the Euclidean distance of the vector θ from the origin. If θ consists of n weights, then:
$$R(\theta) = ||\theta||_2^2 = \sum_{j=1}^{n} \theta_j^2 \tag{5.23}$$
The L2 regularized objective function becomes:
$$\hat{\theta} = \operatorname*{argmax}_{\theta} \left[ \sum_{i=1}^{m} \log P(y^{(i)}|x^{(i)}) \right] - \alpha \sum_{j=1}^{n} \theta_j^2 \tag{5.24}$$
L1 regularization is a linear function of the weight values, named after the L1 norm ||θ||_1, the sum of the absolute values of the weights, or Manhattan distance (the Manhattan distance is the distance you'd have to walk between two points in a city with a street grid like New York):
$$R(\theta) = ||\theta||_1 = \sum_{i=1}^{n} |\theta_i| \tag{5.25}$$
The L1 regularized objective function becomes:
$$\hat{\theta} = \operatorname*{argmax}_{\theta} \left[ \sum_{i=1}^{m} \log P(y^{(i)}|x^{(i)}) \right] - \alpha \sum_{j=1}^{n} |\theta_j| \tag{5.26}$$
These kinds of regularization come from statistics, where L1 regularization is called lasso regression (Tibshirani, 1996) and L2 regularization is called ridge regression, and both are commonly used in language processing. L2 regularization is easier to optimize because of its simple derivative (the derivative of θ² is just 2θ), while L1 regularization is more complex (the derivative of |θ| is not continuous at zero).
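In a gradient-based learner the two penalties therefore contribute different terms to the weight update; a sketch of those per-weight contributions (taking sign(0) = 0, one common subgradient choice at the point where |θ| is not differentiable):

```python
import numpy as np

def l2_penalty_grad(theta, alpha):
    # d/d theta_j of alpha * theta_j**2 is 2 * alpha * theta_j
    return 2 * alpha * theta

def l1_penalty_grad(theta, alpha):
    # |theta_j| is not differentiable at 0; np.sign(0) = 0 is one common
    # subgradient choice
    return alpha * np.sign(theta)
```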
But where L2 prefers weight vectors with many small weights, L1 prefers sparse solutions with some larger weights but many more weights set to zero. Thus L1 regularization leads to much sparser weight vectors, that is, far fewer features.
Both L1 and L2 regularization have Bayesian interpretations as constraints on the prior of how weights should look. L1 regularization can be viewed as a Laplace prior on the weights. L2 regularization corresponds to assuming that weights are distributed according to a Gaussian distribution with mean µ = 0. In a Gaussian or normal distribution, the further away a value is from the mean, the lower its probability (scaled by the variance σ²). By using a Gaussian prior on the weights, we are saying that weights prefer to have the value 0. A Gaussian for a weight θ_j is:

$$\frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\left(-\frac{(\theta_j - \mu_j)^2}{2\sigma_j^2}\right) \tag{5.27}$$
If we multiply each weight by a Gaussian prior on the weight, we are thus maximizing the following constraint:
$$\hat{\theta} = \operatorname*{argmax}_{\theta} \prod_{i=1}^{M} P(y^{(i)}|x^{(i)}) \times \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\left(-\frac{(\theta_j - \mu_j)^2}{2\sigma_j^2}\right) \tag{5.28}$$
which in log space, with µ = 0, and assuming 2σ² = 1, corresponds to:
$$\hat{\theta} = \operatorname*{argmax}_{\theta} \sum_{i=1}^{m} \log P(y^{(i)}|x^{(i)}) - \alpha \sum_{j=1}^{n} \theta_j^2 \tag{5.29}$$
which is in the same form as Eq. 5.24.
5.6 Multinomial Logistic Regression
Sometimes we need more than two classes. Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax regression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x).
The multinomial logistic classifier uses a generalization of the sigmoid, called the softmax function, to compute the probability p(y = c|x). The softmax function takes a vector z = [z_1, z_2, ..., z_k] of k arbitrary values and maps them to a probability distribution, with each value in the range (0,1), and all the values summing to 1. Like the sigmoid, it is an exponential function.
For a vector z of dimensionality k, the softmax is defined as:
$$\text{softmax}(z_i) = \frac{\exp(z_i)}{\sum_{j=1}^{k} \exp(z_j)} \quad 1 \le i \le k \tag{5.30}$$
The softmax of an input vector z = [z_1, z_2, ..., z_k] is thus a vector itself:
$$\text{softmax}(z) = \left[ \frac{\exp(z_1)}{\sum_{i=1}^{k} \exp(z_i)}, \frac{\exp(z_2)}{\sum_{i=1}^{k} \exp(z_i)}, \ldots, \frac{\exp(z_k)}{\sum_{i=1}^{k} \exp(z_i)} \right] \tag{5.31}$$
The denominator $\sum_{i=1}^{k} \exp(z_i)$ is used to normalize all the values into probabilities. Thus for example given a vector:
z = [0.6, 1.1, −1.5, 1.2, 3.2, −1.1]
the resulting (rounded) softmax(z) is [0.055, 0.090, 0.006, 0.099, 0.74, 0.010].
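A sketch of Eq. 5.30 in Python reproduces these values (we subtract the max before exponentiating, a standard trick for numerical stability):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # stabilize: softmax is unchanged by a constant shift
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

z = np.array([0.6, 1.1, -1.5, 1.2, 3.2, -1.1])
print(np.round(softmax(z), 3))
# ~[0.055 0.09  0.007 0.1   0.738 0.01 ], matching the text's values up to rounding
```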
Again like the sigmoid, the input to the softmax will be the dot product between a weight vector w and an input vector x (plus a bias). But now we'll need separate weight vectors (and bias) for each of the K classes:

$$p(y = c|x) = \frac{\exp(w_c \cdot x + b_c)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \tag{5.32}$$
Like the sigmoid, the softmax has the property of squashing values toward 0 or 1. Thus if one of the inputs is larger than the others, it will tend to push its probability toward 1, and suppress the probabilities of the smaller inputs.
5.6.1 Features in Multinomial Logistic Regression
Features in multinomial logistic regression function similarly to binary logistic regression, with the one difference that we'll need separate weight vectors (and biases) for each of the K classes. Recall our binary exclamation point feature x_5 from page 80:
$$x_5 = \begin{cases} 1 & \text{if ``!''} \in \text{doc} \\ 0 & \text{otherwise} \end{cases}$$
In binary classification a positive weight w_5 on a feature influences the classifier toward y = 1 (positive sentiment) and a negative weight influences it toward y = 0 (negative sentiment), with the absolute value indicating how important the feature is. For multinomial logistic regression, by contrast, with separate weights for each class, a feature can be evidence for or against each individual class.
In 3-way multiclass sentiment classification, for example, we must assign each document one of the 3 classes +, −, or 0 (neutral). Now a feature related to exclamation marks might have a negative weight for 0 documents, and a positive weight for + or − documents:
Feature   Definition                        w_{5,+}   w_{5,−}   w_{5,0}
f_5(x)    1 if "!" ∈ doc, 0 otherwise       3.5       3.1       −5.3
Because these feature weights are dependent both on the input text and the output class, we sometimes make this dependence explicit and represent the features themselves as f(x, y): a function of both the input and the class. Using such a notation, f_5(x) above could be represented as three features f_5(x, +), f_5(x, −), and f_5(x, 0), each of which has a single weight.
5.6.2 Learning in Multinomial Logistic Regression
The loss function for multinomial logistic regression generalizes the loss function for binary logistic regression from 2 to K classes. Recall that the cross-entropy loss for binary logistic regression (repeated from Eq. 5.11) is:
$$L_{CE}(\hat{y}, y) = -\log p(y|x) = -[y \log \hat{y} + (1 - y)\log(1 - \hat{y})] \tag{5.33}$$
The loss function for multinomial logistic regression generalizes the two terms in Eq. 5.33 (one that is non-zero when y = 1 and one that is non-zero when y = 0) to K terms. The loss function for a single example x is thus the sum of the logs of the K output classes, each weighted by y_k, the probability of the true class:
$$L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} y_k \log \hat{y}_k = -\sum_{k=1}^{K} y_k \log p(y = k|x) \tag{5.34}$$
Because only one class (let's call it i) is the correct one, the vector y takes the value 1 only for this value of k, i.e., has y_i = 1 and y_j = 0 ∀ j ≠ i. A vector like this, with one value = 1 and the rest 0, is called a one-hot vector. The terms in the sum in Eq. 5.34 will thus be 0 except for the term corresponding to the true class, i.e.:
$$L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} \mathbb{1}\{y = k\} \log p(y = k|x) = -\sum_{k=1}^{K} \mathbb{1}\{y = k\} \log \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \tag{5.35}$$
Here we'll use the notation w_k to mean the vector of weights from each input x_i to the output node k, and the indicator function 1{}, which evaluates to 1 if the condition in the brackets is true and to 0 otherwise. Hence the cross-entropy loss is simply the negative log of the output probability corresponding to the correct class, and we therefore also call this the negative log likelihood loss:
$$L_{CE}(\hat{y}, y) = -\log \hat{y}_k = -\log \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \quad \text{(where } k \text{ is the correct class)} \tag{5.36}$$
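A sketch of Eq. 5.36 in Python: given the vector of per-class scores z with z_c = w_c · x + b_c, the loss is the negative log softmax probability of the correct class (the function name is ours):

```python
import numpy as np

def nll_loss(logits, k):
    """Negative log likelihood loss (Eq. 5.36) for true class index k,
    where logits[c] = w_c . x + b_c."""
    logits = logits - np.max(logits)                      # numerical stability
    log_probs = logits - np.log(np.sum(np.exp(logits)))   # log softmax
    return -log_probs[k]
```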
The gradient for a single example turns out to be very similar to the gradient for binary logistic regression, although we don't show the derivation here. It is the difference between the value for the true class k (which is 1) and the probability the classifier outputs for class k, weighted by the value of the input x_i corresponding to the ith element of the weight vector for class k, w_{k,i}:
$$\frac{\partial L_{CE}}{\partial w_{k,i}} = -(\mathbb{1}\{y = k\} - p(y = k|x))\, x_i = -\left(\mathbb{1}\{y = k\} - \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)}\right) x_i \tag{5.37}$$
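A sketch of Eq. 5.37 for all classes and inputs at once, computed as an outer product (the variable names are ours):

```python
import numpy as np

def multinomial_gradient(W, b, x, k):
    """Gradient of the multinomial cross-entropy loss (Eq. 5.37).

    W: (K, n) weight matrix, b: (K,) biases, x: (n,) input, k: true class index.
    Entry [c, i] is -(1{c == k} - p(y = c|x)) * x_i."""
    z = W @ x + b
    probs = np.exp(z - np.max(z))
    probs /= probs.sum()                    # softmax probabilities p(y = c|x)
    one_hot = np.zeros_like(probs)
    one_hot[k] = 1.0
    return -np.outer(one_hot - probs, x)    # shape (K, n)
```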
5.7 Interpreting Models
Often we want to know more than just the correct classification of an observation. We want to know why the classifier made the decision it did. That is, we want our decision to be interpretable. Interpretability can be hard to define strictly, but the core idea is that as humans we should know why our algorithms reach the conclusions they do. Because the features to logistic regression are often human-designed, one way to understand a classifier's decision is to understand the role each feature plays in the decision. Logistic regression can be combined with statistical tests (the likelihood ratio test, or the Wald test); investigating whether a particular feature is significant by one of these tests, or inspecting its magnitude (how large is the weight w associated with the feature?), can help us interpret why the classifier made the decision it did. This is enormously important for building transparent models.

Furthermore, in addition to its use as a classifier, logistic regression in NLP and many other fields is widely used as an analytic tool for testing hypotheses about the effect of various explanatory variables (features). In text classification, perhaps we want to know if logically negative words (no, not, never) are more likely to be associated with negative sentiment, or if negative reviews of movies are more likely to discuss the cinematography. However, in doing so it's necessary to control for potential confounds: other factors that might influence sentiment (the movie genre, the year it was made, perhaps the length of the review in words). Or we might be studying the relationship between NLP-extracted linguistic features and non-linguistic outcomes (hospital readmissions, political outcomes, or product sales), but need to control for confounds (the age of the patient, the county of voting, the brand of the product). In such cases, logistic regression allows us to test whether some feature is associated with some outcome above and beyond the effect of other features.
5.8 Advanced: Deriving the Gradient Equation
In this section we give the derivation of the gradient of the cross-entropy loss function L_CE for logistic regression. Let's start with some quick calculus refreshers. First, the derivative of ln(x):
$$\frac{d}{dx} \ln(x) = \frac{1}{x} \tag{5.38}$$
Second, the (very elegant) derivative of the sigmoid:
$$\frac{d\sigma(z)}{dz} = \sigma(z)(1 - \sigma(z)) \tag{5.39}$$
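As a quick check of this identity (a short derivation we sketch here), write σ(z) = (1 + e^{−z})^{−1} and differentiate:

$$\frac{d\sigma(z)}{dz} = \frac{d}{dz}\left(1 + e^{-z}\right)^{-1} = \frac{e^{-z}}{(1 + e^{-z})^{2}} = \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}} = \sigma(z)(1 - \sigma(z))$$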
Finally, the chain rule of derivatives: if a function f(x) is a composite u(v(x)), then the derivative of f(x) with respect to x is the derivative of u with respect to v times the derivative of v with respect to x:

$$\frac{df}{dx} = \frac{du}{dv} \cdot \frac{dv}{dx} \tag{5.40}$$