7 Neural Networks and Neural Language Models

7.5 Feedforward Neural Language Modeling
P(w_t | w_1, ..., w_{t−1}) ≈ P(w_t | w_{t−N+1}, ..., w_{t−1})    (7.21)
In the following examples we'll use a 4-gram language model, so we'll show a net to estimate the probability P(w_t = i | w_{t−3}, w_{t−2}, w_{t−1}).
Neural language models represent words in this prior context by their embeddings, rather than just by their word identity as used in n-gram language models. Using embeddings allows neural language models to generalize better to unseen data. For example, suppose we've seen this sentence in training:
I have to make sure that the cat gets fed.

but we have never seen the words "gets fed" after the word "dog". Our test set has the prefix "I forgot to make sure that the dog gets". What's the next word? An n-gram language model will predict "fed" after "that the cat gets", but not after "that the dog gets". But a neural LM, knowing that "cat" and "dog" have similar embeddings, will be able to generalize from the "cat" context to assign a high enough probability to "fed" even after seeing "dog".
7.5.1 Forward inference in the neural language model
Let's walk through forward inference or decoding for neural language models.
Forward inference is the task, given an input, of running a forward pass on the network to produce a probability distribution over possible outputs, in this case next words.
We first represent each of the N previous words as a one-hot vector of length |V|, i.e., with one dimension for each word in the vocabulary. A one-hot vector is a vector that has one element equal to 1 (in the dimension corresponding to that word's index in the vocabulary) while all the other elements are set to zero. Thus in a one-hot representation for the word "toothpaste", supposing it is V_5, i.e., index 5 in the vocabulary, x_5 = 1, and x_i = 0 for all i ≠ 5, as shown here:
[0 0 0 0 1 0 0 ... 0 0 0 0]
 1 2 3 4 5 6 7 ...      |V|

The feedforward neural language model (sketched in Fig. 7.13) has a moving window that can see N words into the past. We'll let N = 3, so the 3 words w_{t−1}, w_{t−2}, and w_{t−3} are each represented as a one-hot vector. We then multiply these one-hot vectors by the embedding matrix E. The embedding weight matrix E has a column for each word, each a column vector of d dimensions, and hence has dimensionality d × |V|. Multiplying by a one-hot vector that has only one non-zero element x_i = 1 simply selects out the relevant column vector for word i, resulting in the embedding for word i, as shown in Fig. 7.12.
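To make the column-selection concrete, here is a minimal NumPy sketch (the vocabulary size, embedding dimension, and word index are made-up toy values, not from the text):

```python
import numpy as np

V, d = 10, 4                      # toy vocabulary size and embedding dimension (assumed)
rng = np.random.default_rng(0)
E = rng.standard_normal((d, V))   # embedding matrix: one d-dimensional column per word

i = 5                             # index of some word, e.g. "toothpaste"
x = np.zeros(V)
x[i] = 1.0                        # one-hot vector for word i

e_i = E @ x                       # multiplying E by the one-hot vector...
assert np.allclose(e_i, E[:, i])  # ...just selects column i of E, the embedding for word i
```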
The 3 resulting embedding vectors are concatenated to produce e, the embedding layer. This is followed by a hidden layer and an output layer whose softmax produces a probability distribution over words. For example, y_42, the value of output node 42, is the probability of the next word w_t being V_42, the vocabulary word with index 42 (which is the word 'fish' in our example).
[Figure 7.13: Forward inference in a feedforward neural language model. At each timestep t the network computes a d-dimensional embedding for each context word (by multiplying a one-hot vector by the embedding matrix E), and concatenates the 3 resulting embeddings to get the embedding layer e. The embedding vector e is multiplied by a weight matrix W and then an activation function is applied element-wise to produce the hidden layer h, which is then multiplied by another weight matrix U. Finally, a softmax output layer predicts at each node i the probability that the next word w_t will be vocabulary word V_i.]

Here's the algorithm in detail for our mini example:

1. Select three embeddings from E: Given the three previous words, we look up their indices, create 3 one-hot vectors, and then multiply each by the embedding matrix E. Consider w_{t−3}. The one-hot vector for 'for' (index 35) is
multiplied by the embedding matrix E, to give the first part of the first hidden layer, the embedding layer. Since each column of the input matrix E is an embedding for a word, and the input is a one-hot column vector x_i for word V_i, the embedding layer for input w will be Ex_i = e_i, the embedding for word i. We now concatenate the three embeddings for the three context words to produce the embedding layer e.
2. Multiply by W: We multiply e by W (and add b) and pass the result through the ReLU (or other) activation function to get the hidden layer h.

3. Multiply by U: h is now multiplied by U.

4. Apply softmax: After the softmax, each node i in the output layer estimates the probability P(w_t = i | w_{t−1}, w_{t−2}, w_{t−3}).
In summary, the equations for a neural language model with a window size of 3, given one-hot input vectors for each input context word, are:

e = [Ex_{t−3}; Ex_{t−2}; Ex_{t−1}]
h = σ(We + b)
z = Uh
ŷ = softmax(z)    (7.22)

Note that we formed the embedding layer e by concatenating the 3 embeddings for the three context words; we'll often use semicolons to mean concatenation of vectors.
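Putting the steps together, here is a minimal NumPy sketch of forward inference for this window-size-3 model. The dimensions and the randomly initialized parameters E, W, b, U are illustrative assumptions, not values from the text:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # shift for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Toy dimensions (assumed): vocabulary size, embedding size, hidden size
V, d, d_h = 10, 4, 8
rng = np.random.default_rng(1)
E = rng.standard_normal((d, V))         # embedding matrix, d x |V|
W = rng.standard_normal((d_h, 3 * d))   # embedding layer -> hidden layer
b = np.zeros(d_h)
U = rng.standard_normal((V, d_h))       # hidden layer -> output logits

def forward(context_ids):
    """Forward inference given the indices of w_{t-3}, w_{t-2}, w_{t-1}."""
    e = np.concatenate([E[:, i] for i in context_ids])  # e = [Ex_{t-3}; Ex_{t-2}; Ex_{t-1}]
    h = np.maximum(0, W @ e + b)                         # hidden layer with ReLU activation
    z = U @ h                                            # output logits
    return softmax(z)                                    # P(w_t = i | context) for each i

y_hat = forward([3, 5, 7])    # probability distribution over the next word
print(y_hat.sum())            # sums to 1 (up to floating point)
```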
In the next section we'll introduce a general algorithm for training neural networks, and then return to how to specifically train the neural language model in Section 7.7.
7.6 Training Neural Nets
A feedforward neural net is an instance of supervised machine learning in which we know the correct output y for each observation x. What the system produces, via Eq. 7.13, is ŷ, the system's estimate of the true y. The goal of the training procedure is to learn parameters W^[i] and b^[i] for each layer i that make ŷ for each training observation as close as possible to the true y.
In general, we do all this by drawing on the methods we introduced in Chapter 5 for logistic regression, so the reader should be comfortable with that chapter before proceeding.
First, we'll need a loss function that models the distance between the system output and the gold output, and it's common to use the loss function used for logistic regression, the cross-entropy loss.
Second, to find the parameters that minimize this loss function, we'll use the gradient descent optimization algorithm introduced in Chapter 5.
Third, gradient descent requires knowing the gradient of the loss function, the vector that contains the partial derivative of the loss function with respect to each of the parameters. In logistic regression, for each observation we could directly compute the derivative of the loss function with respect to an individual w or b. But for neural networks, with millions of parameters in many layers, it's much harder to see how to compute the partial derivative of some weight in layer 1 when the loss is attached to some much later layer. How do we partial out the loss over all those intermediate layers? The answer is the algorithm called error backpropagation or backward differentiation.
7.6.1 Loss function
The cross-entropy loss that is used in neural networks is the same one we saw for logistic regression. In fact, if the neural network is being used as a binary classifier, with the sigmoid at the final layer, the loss function is exactly the same as we saw with logistic regression in Eq. 5.11:
L_CE(ŷ, y) = −log p(y|x) = −[y log ŷ + (1 − y) log(1 − ŷ)]    (7.23)
What about if the neural network is being used as a multinomial classifier? Let y be a vector over the C classes representing the true output probability distribution. The cross-entropy loss here is
L_CE(ŷ, y) = −∑_{i=1}^{C} y_i log ŷ_i    (7.24)
We can simplify this equation further. Assume this is a hard classification task, meaning that only one class is the correct one, and that there is one output unit in y for each class. If the true class is i, then y is a vector where y_i = 1 and y_j = 0 for all j ≠ i. A vector like this, with one value equal to 1 and the rest 0, is called a one-hot vector. The terms in the sum in Eq. 7.24 will be 0 except for the term corresponding to the true class, i.e.:
L_CE(ŷ, y) = −∑_{k=1}^{K} 1{y = k} log ŷ_k
           = −∑_{k=1}^{K} 1{y = k} log p(y = k|x)
           = −∑_{k=1}^{K} 1{y = k} log ( exp(z_k) / ∑_{j=1}^{K} exp(z_j) )    (7.25)
Hence the cross-entropy loss is simply the negative log of the output probability corresponding to the correct class, and we therefore also call this the negative log likelihood loss:
L_CE(ŷ, y) = −log ŷ_i    (where i is the correct class)    (7.26)

Plugging in the softmax formula from Eq. 7.9, and with K the number of classes:
L_CE(ŷ, y) = −log ( exp(z_i) / ∑_{j=1}^{K} exp(z_j) )    (where i is the correct class)    (7.27)
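For concreteness, here is a small NumPy sketch that computes Eq. 7.27 directly from a vector of logits z (the logits and class indices are made-up test values):

```python
import numpy as np

def cross_entropy_from_logits(z, true_class):
    """Negative log likelihood loss: -log softmax(z)[true_class] (Eq. 7.27)."""
    z = z - z.max()                          # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())  # log softmax
    return -log_probs[true_class]

z = np.array([2.0, -1.0, 0.5])               # assumed logits for K = 3 classes
print(cross_entropy_from_logits(z, 0))       # small loss: class 0 has the largest logit
print(cross_entropy_from_logits(z, 1))       # larger loss: class 1 has a low logit
```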
7.6.2 Computing the Gradient
How do we compute the gradient of this loss function? Computing the gradient requires the partial derivative of the loss function with respect to each parameter. For a network with one weight layer and sigmoid output (which is what logistic regression is), we could simply use the derivative of the loss that we used for logistic regression in Eq. 7.28 (and derived in Section 5.8):
∂L_CE(w, b)/∂w_j = (ŷ − y) x_j = (σ(w·x + b) − y) x_j    (7.28)
Or for a network with one weight layer and softmax output, we could use the derivative of the softmax loss from Eq. 5.37:
∂L_CE/∂w_k = −(1{y = k} − p(y = k|x)) x_k
           = −( 1{y = k} − exp(w_k·x + b_k) / ∑_{j=1}^{K} exp(w_j·x + b_j) ) x_k    (7.29)
But these derivatives only give correct updates for one weight layer: the last one! For deep networks, computing the gradients for each weight is much more complex, since we are computing the derivative with respect to weight parameters that appear all the way back in the very early layers of the network, even though the loss is computed only at the very end of the network. The solution to computing this gradient is an algorithm called error backpropagation or backprop (Rumelhart et al., 1986). While backprop was invented specially for neural networks, it turns out to be the same as a more general procedure called backward differentiation, which depends on the notion of computation graphs. Let's see how that works in the next subsection.
7.6.3 Computation Graphs
A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph.
Consider computing the function L(a, b, c) = c(a + 2b). If we make each of the component addition and multiplication operations explicit, and add names (d and e) for the intermediate outputs, the resulting series of computations is:
d = 2 * b
e = a + d
L = c * e
We can now represent this as a graph, with nodes for each operation, and directed edges showing the outputs from each operation as the inputs to the next, as in Fig. 7.14. The simplest use of computation graphs is to compute the value of the function with some given inputs. In the figure, we've assumed the inputs a = 3, b = 1, c = −2, and we've shown the result of the forward pass to compute the result L(3, 1, −2) = −10. In the forward pass of a computation graph, we apply each operation left to right, passing the outputs of each computation as the input to the next node.
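A minimal Python sketch of this forward pass, using the same input values as the figure:

```python
def forward(a, b, c):
    # Forward pass of the computation graph for L(a, b, c) = c(a + 2b)
    d = 2 * b      # first operation node
    e = a + d      # second operation node
    L = c * e      # output node
    return d, e, L

print(forward(a=3, b=1, c=-2))   # (2, 5, -10): matches L(3, 1, -2) = -10
```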
7.6.4 Backward differentiation on computation graphs
The importance of the computation graph comes from the backward pass, which is used to compute the derivatives that we'll need for the weight update. In this example our goal is to compute the derivative of the output function L with respect to each of the input variables, i.e., ∂L/∂a, ∂L/∂b, and ∂L/∂c. The derivative ∂L/∂a tells us how much a small change in a affects L.
Backward differentiation makes use of the chain rule in calculus, so let's remind ourselves of that. Suppose we are computing the derivative of a composite function f(x) = u(v(x)). The derivative of f(x) is the derivative of u(x) with respect to v(x) times the derivative of v(x) with respect to x:
df/dx = (du/dv) · (dv/dx)    (7.30)
The chain rule extends to more than two functions. If computing the derivative of a composite function f(x) = u(v(w(x))), the derivative of f(x) is:
df/dx = (du/dv) · (dv/dw) · (dw/dx)    (7.31)
The intuition of backward differentiation is to pass gradients back from the final node to all the nodes in the graph. Fig. 7.15 shows part of the backward computation at one node e. Each node takes an upstream gradient that is passed in from its parent node to the right, and for each of its inputs computes a local gradient (the gradient of its output with respect to its input), and uses the chain rule to multiply these two to compute a downstream gradient to be passed on to the next earlier node.

[Figure 7.15: Each node (like e here) takes an upstream gradient, multiplies it by the local gradient (the gradient of its output with respect to its input), and uses the chain rule to compute a downstream gradient to be passed on to a prior node. A node may have multiple local gradients if it has multiple inputs.]
Let's now compute the 3 derivatives we need. Since in the computation graph L = ce, we can directly compute the derivative ∂L/∂c:
∂L/∂c = e    (7.32)
For the other two, we'll need to use the chain rule:
∂L/∂a = (∂L/∂e)(∂e/∂a)
∂L/∂b = (∂L/∂e)(∂e/∂d)(∂d/∂b)    (7.33)
Eq. 7.33 and Eq. 7.32 thus require five intermediate derivatives:
∂L/∂e, ∂L/∂c, ∂e/∂a, ∂e/∂d, and ∂d/∂b, which we can compute from the three component operations:

L = ce :     ∂L/∂e = c,   ∂L/∂c = e
e = a + d :  ∂e/∂a = 1,   ∂e/∂d = 1
d = 2b :     ∂d/∂b = 2
In the backward pass, we compute each of these partials along each edge of the graph from right to left, using the chain rule just as we did above. Thus we begin by computing the downstream gradients from node L, which are ∂L/∂e and ∂L/∂c. For node e, we then multiply this upstream gradient ∂L/∂e by the local gradient (the gradient of the output with respect to the input) ∂e/∂d to get the output we send back to node d:
∂L/∂d = (∂L/∂e)(∂e/∂d)

And so on, until we have annotated the graph all the way back to all the input variables. The forward pass will conveniently already have computed the values of the intermediate variables we need (like d and e) to compute these derivatives. Fig. 7.16 shows the backward pass.
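Here is a minimal sketch of the forward and backward passes for this toy graph, with the local gradients written out by hand and combined by the chain rule; the printed values use the a = 3, b = 1, c = −2 example:

```python
def forward_backward(a, b, c):
    # Forward pass: L(a, b, c) = c(a + 2b)
    d = 2 * b
    e = a + d
    L = c * e

    # Local gradients of each operation
    dL_de = c      # from L = c * e
    dL_dc = e      # from L = c * e
    de_da = 1.0    # from e = a + d
    de_dd = 1.0    # from e = a + d
    dd_db = 2.0    # from d = 2 * b

    # Backward pass: chain rule (Eq. 7.33 and Eq. 7.32)
    dL_da = dL_de * de_da
    dL_db = dL_de * de_dd * dd_db
    return L, dL_da, dL_db, dL_dc

print(forward_backward(3, 1, -2))   # (-10, -2, -4, 5)
```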
Of course computation graphs for real neural networks are much more complex. Fig. 7.17 shows a sample computation graph for a 2-layer neural network with n_0 = 2, n_1 = 2, and n_2 = 1, assuming binary classification and hence using a sigmoid output unit for simplicity. The function that the computation graph is computing is:
z^[1] = W^[1] x + b^[1]
a^[1] = ReLU(z^[1])
z^[2] = W^[2] a^[1] + b^[2]
a^[2] = σ(z^[2])
ŷ = a^[2]    (7.34)
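A minimal NumPy sketch of Eq. 7.34 for a network of this shape (n_0 = 2 inputs, n_1 = 2 hidden units, n_2 = 1 output); the particular weight values are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed parameter values for a 2-input, 2-hidden-unit, 1-output network
W1 = np.array([[0.5, -0.3],
               [0.8,  0.1]])      # shape (n_1, n_0)
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -1.5]])      # shape (n_2, n_1)
b2 = np.array([0.05])

def forward(x):
    z1 = W1 @ x + b1              # z[1] = W[1] x + b[1]
    a1 = np.maximum(0, z1)        # a[1] = ReLU(z[1])
    z2 = W2 @ a1 + b2             # z[2] = W[2] a[1] + b[2]
    a2 = sigmoid(z2)              # a[2] = sigma(z[2])
    return a2                     # y_hat = a[2]

print(forward(np.array([1.0, 2.0])))   # a probability between 0 and 1
```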
For the backward pass we'll also need to compute the loss L. The loss function for binary sigmoid output from Eq. 7.23 is
L_CE(ŷ, y) = −[y log ŷ + (1 − y) log(1 − ŷ)]    (7.35)
Our output ŷ = a^[2], so we can rephrase this as

L_CE(a^[2], y) = −[ y log a^[2] + (1 − y) log(1 − a^[2]) ]    (7.36)

[Figure 7.17: Sample computation graph for a simple 2-layer neural net (= 1 hidden layer) with two input dimensions and 2 hidden dimensions.]
The weights that need updating (those for which we need to know the partial derivative of the loss function) are shown in teal. In order to do the backward pass, we'll need to know the derivatives of all the functions in the graph. We already saw in Section 5.8 the derivative of the sigmoid σ:
dσ(z)/dz = σ(z)(1 − σ(z))    (7.37)
We'll also need the derivatives of each of the other activation functions. The derivative of tanh is:
d tanh(z)/dz = 1 − tanh^2(z)    (7.38)
The derivative of the ReLU is
d ReLU(z)/dz = 0  for z < 0
             = 1  for z ≥ 0    (7.39)
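A small vectorized NumPy sketch of these three derivatives (Eqs. 7.37–7.39):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)             # Eq. 7.37

def dtanh(z):
    return 1.0 - np.tanh(z) ** 2     # Eq. 7.38

def drelu(z):
    return (z >= 0).astype(float)    # Eq. 7.39: 0 for z < 0, 1 for z >= 0

z = np.array([-2.0, 0.0, 3.0])
print(dsigmoid(z), dtanh(z), drelu(z))
```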
We'll give the start of the computation, computing the derivative of the loss function L with respect to z, or ∂L/∂z (and leaving the rest of the computation as an exercise for the reader). By the chain rule:
∂L/∂z = (∂L/∂a^[2]) (∂a^[2]/∂z)    (7.40)
So let's first compute ∂L/∂a^[2], taking the derivative of Eq. 7.36, repeated here:
L_CE(a^[2], y) = −[ y log a^[2] + (1 − y) log(1 − a^[2]) ]    (7.41)

∂L/∂a^[2] = −[ y (∂ log(a^[2]) / ∂a^[2]) + (1 − y) (∂ log(1 − a^[2]) / ∂a^[2]) ]
          = −[ y (1/a^[2]) + (1 − y) (1/(1 − a^[2])) (−1) ]
          = −y/a^[2] + (1 − y)/(1 − a^[2])
Next, by the derivative of the sigmoid:

∂a^[2]/∂z = a^[2] (1 − a^[2])

Finally, we can use the chain rule:

∂L/∂z = (∂L/∂a^[2]) (∂a^[2]/∂z)
      = [ −y/a^[2] + (1 − y)/(1 − a^[2]) ] a^[2] (1 − a^[2])
      = a^[2] − y    (7.42)
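As a quick numerical sanity check of Eq. 7.42 (a sketch with arbitrary test values), we can compare a^[2] − y against a finite-difference estimate of ∂L/∂z:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(z, y):
    a = sigmoid(z)                           # a[2] = sigma(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

z, y, eps = 0.7, 1.0, 1e-6                   # arbitrary test point
analytic = sigmoid(z) - y                    # Eq. 7.42: dL/dz = a[2] - y
numeric = (loss(z + eps, y) - loss(z - eps, y)) / (2 * eps)
print(analytic, numeric)                     # the two estimates agree closely
```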
Continuing the backward computation of the gradients (next by passing the gradients over b^[2]_1 and the two product nodes, and so on, back to all the orange nodes) is left as an exercise for the reader.
7.6.5 More details on learning
Optimization in neural networks is a non-convex optimization problem, more complex than for logistic regression, and for that and other reasons there are many best practices for successful learning.
For logistic regression we can initialize gradient descent with all the weights and biases having the value 0. In neural networks, by contrast, we need to initialize the weights with small random numbers. It's also helpful to normalize the input values to have 0 mean and unit variance.
Various forms of regularization are used to prevent overfitting. One of the most important is dropout: randomly dropping some units and their connections from the network during training (Hinton et al. 2012, Srivastava et al. 2014). Tuning of hyperparameters is also important. The parameters of a neural network are the weights W and biases b; those are learned by gradient descent. The hyperparameters are things that are chosen by the algorithm designer; optimal values are tuned on a devset rather than by gradient descent learning on the training set. Hyperparameters include the learning rate η, the mini-batch size, the model architecture (the number of layers, the number of hidden nodes per layer, the choice of activation functions), how to regularize, and so on. Gradient descent itself also has many variants such as Adam (Kingma and Ba, 2015).
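As an illustration of the idea behind dropout, here is a sketch of the common 'inverted dropout' formulation (an implementation convention, not necessarily the exact recipe of the cited papers): during training each unit is zeroed with some probability and the survivors are rescaled so the expected activation is unchanged.

```python
import numpy as np

def dropout(h, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop during training."""
    if not training or p_drop == 0.0:
        return h                           # no dropout at test time
    keep = rng.random(h.shape) >= p_drop   # random mask of units to keep
    return h * keep / (1.0 - p_drop)       # rescale so the expected value is unchanged

rng = np.random.default_rng(0)
h = np.ones(10)
print(dropout(h, p_drop=0.5, rng=rng))     # roughly half the units zeroed, the rest scaled to 2.0
```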
Finally, most modern neural networks are built using computation graph formalisms that make it easy and natural to do gradient computation and parallelization onto vector-based GPUs (Graphics Processing Units). PyTorch (Paszke et al., 2017) and TensorFlow (Abadi et al., 2015) are two of the most popular. The interested reader should consult a neural network textbook for further details; some suggestions are at the end of the chapter.
7.7 Training the neural language model
Now that we've seen how to train a generic neural net, let's talk about the architecture for training a neural language model, setting the parameters θ = E, W, U, b.
For some tasks, it's ok to freeze the embedding layer E with initial word2vec values. Freezing means we use word2vec or some other pretraining algorithm to compute the initial embedding matrix E, and then hold it constant while we only modify W, U, and b, i.e., we don't update E during language model training. However, often we'd like to learn the embeddings simultaneously with training the network. This is useful when the task the network is designed for (sentiment classification, or translation, or parsing) places strong constraints on what makes a good representation for words. Let's see how to train the entire model including E, i.e., to set all the parameters θ = E, W, U, b. We'll do this via gradient descent (Fig. 5.5), using error backpropagation on the computation graph to compute the gradient. Training thus not only sets the weights W and U of the network; because we're predicting upcoming words, it also learns the embeddings E for each word that best predict upcoming words.

[Figure 7.18: Learning all the way back to embeddings. Again, the embedding matrix E is shared among the 3 context words.]

Fig. 7.18 shows the setup for a window size of N = 3 context words. The input x consists of 3 one-hot vectors, fully connected to the embedding layer via 3 instantiations of the embedding matrix E. We don't want to learn separate weight matrices for mapping each of the 3 previous words to the projection layer. We want one single embedding dictionary E that's shared among these three. That's because over time, many different words will appear as w_{t−2} or w_{t−1}, and we'd like to just represent each word with one vector, whichever context position it appears in. Recall that the embedding weight matrix E has a column for each word, each a column vector of d dimensions, and hence has dimensionality d × |V|.
Generally training proceeds by taking as input a very long text, concatenating all the sentences, starting with random weights, and then iteratively moving through the text predicting each word w_t. At each word w_t, we use the cross-entropy (negative log likelihood) loss. Recall that the general form for this (repeated from Eq. 7.26) is:
L_CE(ŷ, y) = −log ŷ_i    (where i is the correct class)    (7.43)
For language modeling, the classes are the words in the vocabulary, so ŷ_i here means the probability that the model assigns to the correct next word w_t:
L_CE = −log p(w_t | w_{t−1}, ..., w_{t−n+1})    (7.44)
The parameter update for stochastic gradient descent for this loss from step s to s + 1 is then:
θ^{s+1} = θ^s − η ∂[−log p(w_t | w_{t−1}, ..., w_{t−n+1})] / ∂θ    (7.45)
This gradient can be computed in any standard neural network framework, which will then backpropagate through θ = E, W, U, b.
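The parameter update in Eq. 7.45 itself is just an elementwise subtraction once the framework has produced the gradients. A minimal sketch, with made-up arrays standing in for θ = E, W, U, b and their gradients:

```python
import numpy as np

def sgd_step(params, grads, eta):
    """Eq. 7.45: theta_{s+1} = theta_s - eta * (gradient of the loss w.r.t. theta)."""
    return {name: value - eta * grads[name] for name, value in params.items()}

rng = np.random.default_rng(0)
params = {"E": rng.standard_normal((4, 10)), "W": rng.standard_normal((8, 12)),
          "U": rng.standard_normal((10, 8)), "b": np.zeros(8)}
grads = {name: rng.standard_normal(value.shape) for name, value in params.items()}

params = sgd_step(params, grads, eta=0.1)   # one stochastic gradient descent update
```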
Training the parameters to minimize loss will result not only in an algorithm for language modeling (a word predictor) but also in a new set of embeddings E that can be used as word representations for other tasks.
7.8 Summary
• Neural networks are built out of neural units, originally inspired by human neurons but now simply an abstract computational device.
• Each neural unit multiplies input values by a weight vector, adds a bias, and then applies a non-linear activation function like sigmoid, tanh, or rectified linear unit.
• In a fully-connected, feedforward network, each unit in layer i is connected to each unit in layer i + 1, and there are no cycles.
• The power of neural networks comes from the ability of early layers to learn representations that can be utilized by later layers in the network.
• Neural networks are trained by optimization algorithms like gradient descent.
• Error backpropagation, backward differentiation on a computation graph, is used to compute the gradients of the loss function for a network.
• Neural language models use a neural network as a probabilistic classifier, to compute the probability of the next word given the previous n words.
• Neural language models can use pretrained embeddings, or can learn embeddings from scratch in the process of language modeling.
7.9 Bibliographical and Historical Notes
The origins of neural networks lie in the 1940s McCulloch-Pitts neuron (McCulloch and Pitts, 1943), a simplified model of the human neuron as a kind of computing element that could be described in terms of propositional logic. By the late 1950s and early 1960s, a number of labs (including Frank Rosenblatt at Cornell and Bernard Widrow at Stanford) developed research into neural networks; this phase saw the development of the perceptron (Rosenblatt, 1958), and the transformation of the threshold into a bias, a notation we still use (Widrow and Hoff, 1960). The field of neural networks declined after it was shown that a single perceptron unit was unable to model functions as simple as XOR (Minsky and Papert, 1969). While some small amount of work continued during the next two decades, a major revival for the field didn't come until the 1980s, when practical tools for building deeper networks like error backpropagation became widespread (Rumelhart et al., 1986). During the 1980s a wide variety of neural network and related architectures were developed, particularly for applications in psychology and cognitive science (Rumelhart and McClelland 1986b, McClelland and Elman 1986, Rumelhart and McClelland 1986a, Elman 1990), for which the term connectionist or parallel distributed processing was often used (Feldman and Ballard 1982, Smolensky 1988). Many of the principles and techniques developed in this period are foundational to modern work, including the ideas of distributed representations (Hinton, 1986), recurrent networks (Elman, 1990), and the use of tensors for compositionality (Smolensky, 1990).
By the 1990s larger neural networks began to be applied to many practical language processing tasks as well, like handwriting recognition (LeCun et al. 1989) and speech recognition (Morgan and Bourlard 1990). By the early 2000s, improvements in computer hardware and advances in optimization and training techniques made it possible to train even larger and deeper networks, leading to the modern term deep learning (Hinton et al. 2006, Bengio et al. 2007). We cover more related history in Chapter 9 and Chapter 26.
There are a number of excellent books on the subject. Goldberg (2017) has superb coverage of neural networks for natural language processing. For neural networks in general see Goodfellow et al. (2016) and Nielsen (2015).
8 Sequence Labeling for Parts of Speech and Named Entities
Dionysius Thrax of Alexandria (c. 100 B.C.), or perhaps someone else (it was a long time ago), wrote a grammatical sketch of Greek (a "technē") that summarized the linguistic knowledge of his day. This work is the source of an astonishing proportion of modern linguistic vocabulary, including the words syntax, diphthong, clitic, and analogy. Also included is a description of eight parts of speech: noun, verb, pronoun, preposition, adverb, conjunction, participle, and article. Although earlier scholars (including Aristotle as well as the Stoics) had their own lists of parts of speech, it was Thrax's set of eight that became the basis for descriptions of European languages for the next 2000 years. (All the way to the Schoolhouse Rock educational television shows of our childhood, which had songs about 8 parts of speech, like the late great Bob Dorough's Conjunction Junction.) The durability of parts of speech through two millennia speaks to their centrality in models of human language.
Proper names are another important and anciently studied linguistic category. While parts of speech are generally assigned to individual words or morphemes, a proper name is often an entire multiword phrase, like the name "Marie Curie", the location "New York City", or the organization "Stanford University". We'll use the term named entity for, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization, although as we'll see the term is commonly extended to include things that aren't entities per se.
Parts of speech (also known as POS) and named entities are useful clues to sentence structure and meaning. Knowing whether a word is a noun or a verb tells us about likely neighboring words (nouns in English are preceded by determiners and adjectives, verbs by nouns) and syntactic structure (verbs have dependency links to nouns), making part-of-speech tagging a key aspect of parsing. Knowing if a named entity like Washington is a name of a person, a place, or a university is important to many natural language processing tasks like question answering, stance detection, or information extraction. In this chapter we'll introduce the task of part-of-speech tagging, taking a sequence of words and assigning each word a part of speech like NOUN or VERB, and the task of named entity recognition (NER), assigning words or phrases tags like PERSON, LOCATION, or ORGANIZATION.