9 Deep Learning Architectures for Sequence Processing

9.3 RNNs as Language Models
P(w_{1:n}) = \prod_{i=1}^{n} P(w_i \mid w_{1:i-1})   (9.10)
           = \prod_{i=1}^{n} \hat{y}_i[w_i]   (9.11)
To train an RNN as a language model, we use a corpus of text as training material, having the model predict the next word at each time step t. We train the model to minimize the error in predicting the true next word in the training sequence, using cross-entropy as the loss function. Recall that the cross-entropy loss measures the difference between a predicted probability distribution and the correct distribution.
L_{CE} = -\sum_{w \in V} y_t[w] \log \hat{y}_t[w]   (9.12)
In the case of language modeling, the correct distribution y_t comes from knowing the next word. This is represented as a one-hot vector corresponding to the vocabulary where the entry for the actual next word is 1, and all the other entries are 0. Thus, the cross-entropy loss for language modeling is determined by the probability the model assigns to the correct next word. So at time t the CE loss is the negative log probability the model assigns to the next word in the training sequence.
L_{CE}(\hat{y}_t, y_t) = -\log \hat{y}_t[w_{t+1}]   (9.13)
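To make this concrete, here is a minimal numpy sketch of how Eq. 9.12 collapses to Eq. 9.13 when the target distribution is one-hot; the function and variable names are illustrative, not from the chapter.

```python
import numpy as np

def lm_cross_entropy(probs, next_word_id):
    """Cross-entropy loss for one time step of a language model.

    probs: length-|V| array of predicted next-word probabilities (the softmax output).
    next_word_id: vocabulary index of the true next word w_{t+1}.
    Because the target distribution y_t is one-hot, the sum in Eq. 9.12 reduces to
    Eq. 9.13: the negative log probability assigned to the correct next word.
    """
    return -np.log(probs[next_word_id])

# Toy example with a 5-word vocabulary:
probs = np.array([0.1, 0.2, 0.5, 0.1, 0.1])
print(lm_cross_entropy(probs, next_word_id=2))  # -log(0.5) ~= 0.693
```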
Thus at each word position t of the input, the model takes as input the correct sequence of tokens w_{1:t}, and uses them to compute a probability distribution over possible next words so as to compute the model's loss for the next token w_{t+1}. Then we move on to the next word: we ignore what the model predicted for the next word and instead use the correct sequence of tokens w_{1:t+1} to estimate the probability of token w_{t+2}. This idea that we always give the model the correct history sequence to predict the next word (rather than feeding the model its own prediction from the previous time step) is called teacher forcing.
The weights in the network are adjusted to minimize the average CE loss over the training sequence via gradient descent. Fig. 9.6 illustrates this training regimen.
Careful readers may have noticed that the input embedding matrix E and the final layer matrix V, which feeds the output softmax, are quite similar. The rows of E represent the word embeddings for each word in the vocabulary learned during the training process, with the goal that words that have similar meaning and function will have similar embeddings. And, since the length of these embeddings corresponds to the size of the hidden layer d_h, the shape of the embedding matrix E is |V| × d_h.
The final layer matrix V provides a way to score the likelihood of each word in the vocabulary given the evidence present in the final hidden layer of the network through the calculation of Vh. This entails that it also has the dimensionality |V| × d_h. That is, the rows of V provide a second set of learned word embeddings that capture relevant aspects of word meaning and function. This leads to an obvious question: is it even necessary to have both? Weight tying is a method that dispenses with this redundancy and uses a single set of embeddings at the input and softmax layers, that is, E = V. To do this, we set the dimensionality of the final hidden layer to be the same d_h (or add an additional projection layer to do the same thing), and simply use the same matrix for both layers. In addition to providing improved perplexity results, this approach significantly reduces the number of parameters required for the model.
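As a concrete illustration, here is a hedged PyTorch sketch of an RNN language model with tied input and output embeddings; the class name and sizes are assumptions for the example, not code from the chapter.

```python
import torch
import torch.nn as nn

class TiedRNNLM(nn.Module):
    """A small RNN language model whose input embedding matrix E and
    output (softmax) matrix V share the same parameters (E = V)."""
    def __init__(self, vocab_size, d_h):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_h)          # E: |V| x d_h
        self.rnn = nn.RNN(d_h, d_h, batch_first=True)
        self.out = nn.Linear(d_h, vocab_size, bias=False)   # V: |V| x d_h
        self.out.weight = self.embed.weight                 # weight tying: E = V

    def forward(self, tokens):                # tokens: (batch, seq_len) of word ids
        h, _ = self.rnn(self.embed(tokens))   # (batch, seq_len, d_h)
        return self.out(h)                    # logits over the vocabulary at each step
```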
9.4 RNNs for other NLP tasks
Now that we've seen the basic RNN architecture, let's consider how to apply it to three types of NLP tasks: sequence classification tasks like sentiment analysis and topic classification, sequence labeling tasks like part-of-speech tagging, and text generation tasks. And we'll see in Chapter 10 how to use them for encoder-decoder approaches to summarization, machine translation, and question answering.
9.4.1 Sequence labeling
In sequence labeling, the network's task is to assign a label chosen from a small fixed set of labels to each element of a sequence, like the part-of-speech tagging and named entity recognition tasks from Chapter 8. In an RNN approach to sequence labeling, inputs are word embeddings and the outputs are tag probabilities generated by a softmax layer over the given tagset, as illustrated in Fig. 9.7.

Figure 9.7 Part-of-speech tagging as sequence labeling with a simple RNN. Pre-trained word embeddings serve as inputs and a softmax layer provides a probability distribution over the part-of-speech tags as output at each time step.
In this figure, the inputs at each time step are pre-trained word embeddings corresponding to the input tokens. The RNN block is an abstraction that represents an unrolled simple recurrent network consisting of an input layer, hidden layer, and output layer at each time step, as well as the shared U, V and W weight matrices that comprise the network. The outputs of the network at each time step represent the distribution over the POS tagset generated by a softmax layer.
To generate a sequence of tags for a given input, we run forward inference over the input sequence and select the most likely tag from the softmax at each step. Since we're using a softmax layer to generate the probability distribution over the output tagset at each time step, we will again employ the cross-entropy loss during training.
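A minimal PyTorch sketch of this inference procedure, assuming a simple RNN tagger along the lines of Fig. 9.7; the module and function names are illustrative.

```python
import torch
import torch.nn as nn

class RNNTagger(nn.Module):
    """Assign one tag per token: embeddings -> RNN -> scores over the tagset."""
    def __init__(self, vocab_size, tagset_size, d_h):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_h)
        self.rnn = nn.RNN(d_h, d_h, batch_first=True)
        self.out = nn.Linear(d_h, tagset_size)

    def forward(self, tokens):                  # (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                      # (batch, seq_len, tagset_size)

def predict_tags(model, tokens):
    """Greedy decoding: pick the most likely tag from the softmax at each step."""
    with torch.no_grad():
        logits = model(tokens)
        return logits.argmax(dim=-1)            # (batch, seq_len) tag ids
```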
9.4.2 RNNs for Sequence Classification
Another use of RNNs is to classify entire sequences rather than the tokens within them. We've already encountered sentiment analysis in Chapter 4, in which we classify a text as positive or negative. Other sequence classification tasks that map sequences of text to one of a small set of categories include document-level topic classification, spam detection, and message routing for customer service applications.
To apply RNNs in this setting, we pass the text to be classified through the RNN a word at a time, generating a new hidden layer at each time step. We can then take the hidden layer for the last token of the text, h_n, to constitute a compressed representation of the entire sequence. We can pass this representation h_n to a feedforward network that chooses a class via a softmax over the possible classes. Fig. 9.8 illustrates this approach.
Note that in this approach we don't need intermediate outputs for the words in the sequence preceding the last element, so there are no loss terms associated with those elements. Instead, the loss function used to train the weights in the network is based entirely on the final text classification task: the output of the feedforward classifier's softmax, together with a cross-entropy loss, drives the training. The error signal from the classification is backpropagated through the weights of the feedforward classifier to its input, and then through the three sets of weights in the RNN as described earlier in Section 9.2.2. This training regimen, which uses the loss from a downstream application to adjust the weights all the way through the network, is referred to as end-to-end training.
Another option, instead of using just the last token's hidden state h_n to represent the whole sequence, is to use some sort of pooling function over all the hidden states h_i for each word i in the sequence. For example, we can create a representation that pools all n hidden states by taking their element-wise mean:
h_{mean} = \frac{1}{n} \sum_{i=1}^{n} h_i   (9.14)
Or we can take the element-wise max; the element-wise max of a set of n vectors is a new vector whose kth element is the max of the kth elements of all the n vectors.
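A short sketch of these options (final state, element-wise mean as in Eq. 9.14, and element-wise max), assuming the per-word hidden states have been stacked into one tensor:

```python
import torch

# hidden_states: (seq_len, d_h), one hidden vector h_i per word in the sequence
hidden_states = torch.randn(6, 4)

h_last = hidden_states[-1]              # use the final state h_n alone
h_mean = hidden_states.mean(dim=0)      # Eq. 9.14: element-wise mean of all h_i
h_max, _ = hidden_states.max(dim=0)     # element-wise max: kth element is the max of the kth elements

# Any of these d_h-dimensional vectors can serve as input to a feedforward classifier.
```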
9.4.3 Generation with RNN-Based Language Models
RNN-based language models can also be used to generate text. Text generation is of enormous practical importance, part of tasks like question answering, machine translation, text summarization, and conversational dialogue; any task where a system needs to produce text, conditioned on some other text. Recall back in Chapter 3 we saw how to generate text from an n-gram language model by adapting a technique suggested contemporaneously by Claude Shannon (Shannon, 1951) and the psychologists George Miller and Selfridge (Miller and Selfridge, 1950). We first randomly sample a word to begin a sequence based on its suitability as the start of a sequence. We then continue to sample words conditioned on our previous choices until we reach a pre-determined length, or an end of sequence token is generated.
Today, this approach of using a language model to incrementally generate words by repeatedly sampling the next word conditioned on our previous choices is called autoregressive generation. The procedure is essentially the same as the one described in Chapter 3, now in a neural context:
• Sample a word in the output from the softmax distribution that results from using the beginning of sentence marker, <s>, as the first input.
• Use the word embedding for that first word as the input to the network at the next time step, and then sample the next word in the same fashion.
• Continue generating until the end of sentence marker, </s>, is sampled or a fixed length limit is reached.
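A hedged sketch of this sampling loop, assuming a language model (like the tied RNN sketch above) that maps a token-id prefix to logits over the vocabulary; the <s> and </s> token ids are placeholders.

```python
import torch

def generate(model, bos_id, eos_id, max_len=50):
    """Autoregressive generation: repeatedly sample the next word from the
    softmax distribution, conditioned on the words chosen so far."""
    tokens = [bos_id]
    for _ in range(max_len):
        logits = model(torch.tensor([tokens]))          # (1, t, |V|) for the prefix so far
        probs = torch.softmax(logits[0, -1], dim=-1)    # distribution over the next word
        next_id = torch.multinomial(probs, num_samples=1).item()
        if next_id == eos_id:                           # stop at the </s> marker
            break
        tokens.append(next_id)
    return tokens[1:]                                   # drop the <s> marker
```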
Technically an autoregressive model is a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on. Although language models are not linear (since they have many layers of non-linearities), we loosely refer to this generation technique as autoregressive generation since the word generated at each time step is conditioned on the word selected by the network from the previous step. Fig. 9.9 illustrates this approach. In this figure, the details of the RNN's hidden layers and recurrent connections are hidden within the blue block. This simple architecture underlies state-of-the-art approaches to applications such as machine translation, summarization, and question answering. The key to these approaches is to prime the generation component with an appropriate context. That is, instead of simply using <s> to get things started we can provide a richer task-appropriate context; for translation the context is the sentence in the source language; for summarization it's the long text we want to summarize. We'll discuss the application of contextual generation to the problem of summarization in Section 9.9 in the context of transformer-based language models, and then again in Chapter 10 when we introduce encoder-decoder models.
9.5 Stacked and Bidirectional RNN Architectures
Recurrent networks are quite flexible. By combining the feedforward nature of unrolled computational graphs with vectors as common inputs and outputs, complex networks can be treated as modules that can be combined in creative ways. This section introduces two of the more common network architectures used in language processing with RNNs.
9.5.1 Stacked RNNs
In our examples thus far, the inputs to our RNNs have consisted of sequences of word or character embeddings (vectors) and the outputs have been vectors useful for predicting words, tags or sequence labels. However, nothing prevents us from using the entire sequence of outputs from one RNN as an input sequence to another one. Stacked RNNs generally outperform single-layer networks. One reason for this success seems to be that the network induces representations at differing levels of abstraction across layers. Just as the early stages of the human visual system detect edges that are then used for finding larger regions and shapes, the initial layers of stacked networks can induce representations that serve as useful abstractions for further layers -representations that might prove difficult to induce in a single RNN. The optimal number of stacked RNNs is specific to each application and to each training set. However, as the number of stacks is increased the training costs rise quickly.
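In frameworks such as PyTorch, stacking is typically a one-argument change; the sketch below builds a three-layer stacked RNN in which the full output sequence of each layer serves as the input sequence of the next (the sizes are arbitrary examples).

```python
import torch.nn as nn

# Three stacked RNN layers: the output sequence of layer 1 feeds layer 2, and so on.
# The depth (num_layers) is a tunable hyperparameter, specific to each application.
stacked_rnn = nn.RNN(input_size=100, hidden_size=100, num_layers=3, batch_first=True)
```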
9.5.2 Bidirectional RNNs
The RNN uses information from the left (prior) context to make its predictions at time t. But in many applications we have access to the entire input sequence; in those cases we would like to use words from the context to the right of t. One way to do this is to run two separate RNNs, one left-to-right, and one right-to-left, and concatenate their representations.
In the left-to-right RNNs we've discussed so far, the hidden state at a given time t represents everything the network knows about the sequence up to that point. The state is a function of the inputs x_1, ..., x_t and represents the context of the network to the left of the current time.
h_t^{f} = \text{RNN}_{\text{forward}}(x_1, \ldots, x_t)   (9.15)
Fig. 9.11 illustrates such a bidirectional network that concatenates the outputs of the forward and backward pass. Other simple ways to combine the forward and backward contexts include element-wise addition or multiplication. The output at each step in time thus captures information to the left and to the right of the current input. In sequence labeling applications, these concatenated outputs can serve as the basis for a local labeling decision.
Bidirectional RNNs have also proven to be quite effective for sequence classification. Recall from Fig. 9.8 that for sequence classification we used the final hidden state of the RNN as the input to a subsequent feedforward classifier. A difficulty with this approach is that the final state naturally reflects more information about the end of the sentence than its beginning. Bidirectional RNNs provide a simple solution to this problem; as shown in Fig. 9.12, we simply combine the final hidden states from the forward and backward passes (for example by concatenation) and use that as input for follow-on processing.
Figure 9.12 A bidirectional RNN for sequence classification. The final hidden units from the forward and backward passes are combined to represent the entire sequence. This combined representation serves as input to the subsequent classifier.
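A sketch of the classifier in Fig. 9.12, concatenating the final forward and backward hidden states before the feedforward layer; the module name and dimensions are assumptions for the example.

```python
import torch
import torch.nn as nn

class BiRNNClassifier(nn.Module):
    def __init__(self, vocab_size, d_h, n_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_h)
        self.rnn = nn.RNN(d_h, d_h, batch_first=True, bidirectional=True)
        self.ffn = nn.Linear(2 * d_h, n_classes)    # forward + backward states concatenated

    def forward(self, tokens):                       # (batch, seq_len)
        _, h_n = self.rnn(self.embed(tokens))        # h_n: (2, batch, d_h)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)      # final forward state ; final backward state
        return self.ffn(h)                           # class scores for the whole sequence
```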
9.6 The LSTM
In practice, it is quite difficult to train RNNs for tasks that require a network to make use of information distant from the current point of processing. Despite having access to the entire preceding sequence, the information encoded in hidden states tends to be fairly local, more relevant to the most recent parts of the input sequence and recent decisions. Yet distant information is critical to many language applications. Consider the following example in the context of language modeling.
(9.18) The flights the airline was cancelling were full.
Assigning a high probability to was following airline is straightforward since airline provides a strong local context for the singular agreement. However, assigning an appropriate probability to were is quite difficult, not only because the plural flights is quite distant, but also because the intervening context involves singular constituents. Ideally, a network should be able to retain the distant information about plural flights until it is needed, while still processing the intermediate parts of the sequence correctly. One reason for the inability of RNNs to carry forward critical information is that the hidden layers, and, by extension, the weights that determine the values in the hidden layer, are being asked to perform two tasks simultaneously: providing information useful for the current decision, and updating and carrying forward information required for future decisions.
A second difficulty with training RNNs arises from the need to backpropagate the error signal back through time. Recall from Section 9.2.2 that the hidden layer at time t contributes to the loss at the next time step since it takes part in that calculation. As a result, during the backward pass of training, the hidden layers are subject to repeated multiplications, as determined by the length of the sequence. A frequent result of this process is that the gradients are eventually driven to zero, a situation called the vanishing gradients problem.
To address these issues, more complex network architectures have been designed to explicitly manage the task of maintaining relevant context over time, by enabling the network to learn to forget information that is no longer needed and to remember information required for decisions still to come.
The most commonly used such extension to RNNs is the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997). LSTMs divide the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, the previous hidden layer, and the previous context layer.

The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask. Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased.
The first gate we'll consider is the forget gate. The purpose of this gate is to delete information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from the context that is no longer required. Element-wise multiplication of two vectors (represented by the operator ⊙, and sometimes called the Hadamard product) produces a vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:
f_t = \sigma(U_f h_{t-1} + W_f x_t)   (9.19)
k_t = c_{t-1} \odot f_t   (9.20)
The next task is to compute the actual information we need to extract from the previous hidden state and current inputs: the same basic computation we've been using for all our recurrent networks.
g_t = \tanh(U_g h_{t-1} + W_g x_t)   (9.21)
Next, we generate the mask for the add gate to select the information to add to the current context. Then we add this to the modified context vector to get our new context vector.
i_t = \sigma(U_i h_{t-1} + W_i x_t)   (9.22)
j_t = g_t \odot i_t   (9.23)
c_t = j_t + k_t   (9.24)
The final gate we'll use is the output gate, which is used to decide what information is required for the current hidden state (as opposed to what information needs to be preserved for future decisions).
o_t = \sigma(U_o h_{t-1} + W_o x_t)   (9.25)
h_t = o_t \odot \tanh(c_t)   (9.26)
Fig. 9.13 illustrates the complete computation for a single LSTM unit. Given the appropriate weights for the various gates, an LSTM accepts as input the context layer and hidden layer from the previous time step, along with the current input vector. It then generates updated context and hidden vectors as output. The hidden layer, h_t, can be used as input to subsequent layers in a stacked RNN, or to generate an output for the final layer of a network.
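The complete per-time-step computation (Eqs. 9.19–9.26) as a minimal numpy sketch; bias terms are omitted to match the equations, and the weight-matrix names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U):
    """One LSTM time step. W and U are dicts of weight matrices for the
    forget (f), candidate (g), add/input (i), and output (o) computations."""
    f_t = sigmoid(U["f"] @ h_prev + W["f"] @ x_t)   # forget gate (9.19)
    k_t = c_prev * f_t                               # erase stale context (9.20)
    g_t = np.tanh(U["g"] @ h_prev + W["g"] @ x_t)    # candidate information (9.21)
    i_t = sigmoid(U["i"] @ h_prev + W["i"] @ x_t)    # add gate (9.22)
    j_t = g_t * i_t                                  # selected new information (9.23)
    c_t = j_t + k_t                                  # new context vector (9.24)
    o_t = sigmoid(U["o"] @ h_prev + W["o"] @ x_t)    # output gate (9.25)
    h_t = o_t * np.tanh(c_t)                         # new hidden state (9.26)
    return h_t, c_t
```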
9.6.1 Gated Units, Layers and Networks
The neural units used in LSTMs are obviously much more complex than those used in basic feedforward networks. Fortunately, this complexity is encapsulated within the basic processing units, allowing us to maintain modularity and to easily experiment with different architectures. To see this, consider Fig. 9.14 which illustrates the inputs and outputs associated with each kind of unit. At the far left, (a) is the basic feedforward unit where a single set of weights and a single activation function determine its output, and when arranged in a layer there are no connections among the units in the layer. Next, (b) represents the unit in a simple recurrent network. Now there are two inputs and an additional set of weights to go with it. However, there is still a single activation function and output.
The increased complexity of the LSTM units is encapsulated within the unit itself. The only additional external complexity for the LSTM over the basic recurrent unit (b) is the presence of the additional context vector as an input and output.
This modularity is key to the power and widespread applicability of LSTM units. LSTM units (or other varieties, like GRUs) can be substituted into any of the network architectures described in Section 9.5. And, as with simple RNNs, multi-layered networks making use of gated units can be unrolled into deep feedforward networks and trained in the usual fashion with backpropagation.
9.7 Self-Attention Networks: Transformers
While the addition of gates allows LSTMs to handle more distant information than RNNs, they don't completely solve the underlying problem: passing information through an extended series of recurrent connections leads to information loss and difficulties in training. Moreover, the inherently sequential nature of recurrent networks makes it hard to do computation in parallel. These considerations led to the development of transformers, an approach to sequence processing that eliminates recurrent connections and returns to architectures reminiscent of the fully connected networks described earlier in Chapter 7. Transformers map sequences of input vectors (x_1, ..., x_n) to sequences of output vectors (y_1, ..., y_n) of the same length. Transformers are made up of stacks of transformer blocks, which are multilayer networks made by combining simple linear layers, feedforward networks, and self-attention layers, the key innovation of transformers. Self-attention allows a network to directly extract and use information from arbitrarily large contexts without the need to pass it through intermediate recurrent connections as in RNNs. We'll start by describing how self-attention works and then return to how it fits into larger transformer blocks.

Fig. 9.15 illustrates the flow of information in a single causal, or backward looking, self-attention layer. As with the overall transformer, a self-attention layer maps input sequences (x_1, ..., x_n) to output sequences of the same length (y_1, ..., y_n). When processing each item in the input, the model has access to all of the inputs up to and including the one under consideration, but no access to information about inputs beyond the current one. In addition, the computation performed for each item is independent of all the other computations. The first point ensures that we can use this approach to create language models and use them for autoregressive generation, and the second point means that we can easily parallelize both forward inference and training of such models.
Figure 9.15 Information flow in a causal (or masked) self-attention model. In processing each element of the sequence, the model attends to all the inputs up to, and including, the current one. Unlike RNNs, the computations at each time step are independent of all the other steps and therefore can be performed in parallel.
At the core of an attention-based approach is the ability to compare an item of interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9.15, the computation of y_3 is based on a set of comparisons between the input x_3 and its preceding elements x_1 and x_2, and to x_3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score):
\text{score}(x_i, x_j) = x_i \cdot x_j   (9.27)
The result of a dot product is a scalar value ranging from −∞ to ∞; the larger the value, the more similar the vectors being compared. Continuing with our example, the first step in computing y_3 would be to compute three scores:
x_3 \cdot x_1, x_3 \cdot x_2, and x_3 \cdot x_3.
Then to make effective use of these scores, we'll normalize them with a softmax to create a vector of weights, α_{ij}, that indicates the proportional relevance of each input to the input element i that is the current focus of attention.
\alpha_{ij} = \text{softmax}(\text{score}(x_i, x_j)) \;\; \forall j \leq i   (9.28)
            = \frac{\exp(\text{score}(x_i, x_j))}{\sum_{k=1}^{i} \exp(\text{score}(x_i, x_k))} \;\; \forall j \leq i   (9.29)
Given the proportional scores in α, we then generate an output value y i by taking the sum of the inputs seen so far, weighted by their respective α value.
y_i = \sum_{j \leq i} \alpha_{ij} x_j   (9.30)
The steps embodied in Equations 9.27 through 9.30 represent the core of an attention-based approach: a set of comparisons to relevant items in some context, a normalization of those scores to provide a probability distribution, followed by a weighted sum using this distribution. The output y is the result of this straightforward computation over the inputs.
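These steps, using the raw dot-product score of Eq. 9.27 (no query/key/value projections yet), can be written as a short numpy sketch:

```python
import numpy as np

def simple_causal_self_attention(X):
    """X: (n, d) array of input vectors x_1..x_n; returns (n, d) outputs y_1..y_n.
    Each y_i is a weighted sum of x_1..x_i (Eq. 9.30), with weights given by a
    softmax over dot-product scores (Eqs. 9.27-9.29)."""
    n, d = X.shape
    Y = np.zeros_like(X)
    for i in range(n):
        scores = X[:i + 1] @ X[i]                  # score(x_i, x_j) for all j <= i
        weights = np.exp(scores - scores.max())    # softmax (shifted for numerical stability)
        weights /= weights.sum()
        Y[i] = weights @ X[:i + 1]                 # weighted sum of the inputs seen so far
    return Y
```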
This kind of simple attention can be useful, and indeed we'll see in Chapter 10 how to use this simple idea of attention for LSTM-based encoder-decoder models for machine translation.
But transformers allow us to create a more sophisticated way of representing how words can contribute to the representation of longer inputs. Consider the three different roles that each input embedding plays during the course of the attention process.
• As the current focus of attention when being compared to all of the other preceding inputs. We'll refer to this role as a query.
• In its role as a preceding input being compared to the current focus of attention. We'll refer to this role as a key.
• And finally, as a value used to compute the output for the current focus of attention.
To capture these three different roles, transformers introduce weight matrices W^Q, W^K, and W^V. These weights will be used to project each input vector x_i into a representation of its role as a key, query, or value.
q_i = W^Q x_i;  k_i = W^K x_i;  v_i = W^V x_i   (9.31)
The inputs x and outputs y of transformers, as well as the intermediate vectors after the various layers, all have the same dimensionality 1 × d. For now let's assume the dimensionalities of the transform matrices are W^Q ∈ R^{d×d}, W^K ∈ R^{d×d}, and W^V ∈ R^{d×d}. Later we'll need separate dimensions for these matrices when we introduce multi-headed attention, so let's just make a note that we'll have a dimension d_k for the key and query vectors, and a dimension d_v for the value vectors, both of which for now we'll set to d. In the original transformer work (Vaswani et al., 2017), d was 1024.
Given these projections, the score between a current focus of attention, x_i, and an element in the preceding context, x_j, consists of a dot product between its query vector q_i and the preceding element's key vector k_j. This dot product has the right shape since both the query and the key are of dimensionality 1 × d. Let's update our previous comparison calculation to reflect this, replacing Eq. 9.27 with Eq. 9.32:
\text{score}(x_i, x_j) = q_i \cdot k_j   (9.32)
The ensuing softmax calculation resulting in α_{ij} remains the same, but the output calculation for y_i is now based on a weighted sum over the value vectors v. Fig. 9.16 illustrates this calculation in the case of computing the third output y_3 in a sequence.
y_i = \sum_{j \leq i} \alpha_{ij} v_j   (9.33)
The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training. To avoid this, the dot product needs to be scaled in a suitable fashion. A scaled dot-product approach divides the result of the dot product by a factor related to the size of the embeddings before passing them through the softmax. A typical approach is to divide the dot product by the square root of the dimensionality of the query and key vectors (d_k), leading us to update our scoring function one more time, replacing Eq. 9.27 and Eq. 9.32 with Eq. 9.34:
\text{score}(x_i, x_j) = \frac{q_i \cdot k_j}{\sqrt{d_k}}   (9.34)
This description of the self-attention process has been from the perspective of computing a single output at a single time step i. However, since each output, y_i, is computed independently, this entire process can be parallelized by taking advantage of efficient matrix multiplication routines by packing the input embeddings of the N tokens of the input sequence into a single matrix X ∈ R^{N×d}. That is, each row of X is the embedding of one token of the input. We then multiply X by the key, query, and value matrices (all of dimensionality d × d) to produce matrices Q ∈ R^{N×d}, K ∈ R^{N×d}, and V ∈ R^{N×d}, containing all the key, query, and value vectors:
Q = XW^Q;  K = XW^K;  V = XW^V   (9.35)
Given these matrices we can compute all the requisite query-key comparisons simultaneously by multiplying Q and K^T in a single matrix multiplication (the product is of shape N × N; Fig. 9.17 shows a visualization). Taking this one step further, we can scale these scores, take the softmax, and then multiply the result by V, resulting in a matrix of shape N × d: a vector embedding representation for each token in the input. We've reduced the entire self-attention step for an entire sequence of N tokens to the following computation:
\text{SelfAttention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V   (9.36)
Unfortunately, this process goes a bit too far since the calculation of the comparisons in QK^T results in a score for each query value to every key value, including those that follow the query. This is inappropriate in the setting of language modeling since guessing the next word is pretty simple if you already know it. To fix this, the elements in the upper-triangular portion of the matrix are zeroed out (set to −∞), thus eliminating any knowledge of words that follow in the sequence. Fig. 9.17 depicts the QK^T matrix. (We'll see in Chapter 11 how to make use of words in the future for tasks that need it.)
Figure 9.17 The N × N QK^T matrix showing the q_i · k_j values, with the upper-triangle portion of the comparisons matrix zeroed out (set to −∞, which the softmax will turn to zero).
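Putting Eq. 9.35, the scaled score of Eq. 9.34, the causal mask, and Eq. 9.36 together, here is a numpy sketch of the parallel computation for a whole sequence; the random weights are placeholders.

```python
import numpy as np

def causal_self_attention(X, W_Q, W_K, W_V):
    """X: (N, d) token embeddings; returns one output vector per token."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V             # Eq. 9.35
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (N, N) scaled query-key comparisons
    mask = np.triu(np.ones_like(scores), k=1)       # upper triangle = positions in the future
    scores = np.where(mask == 1, -np.inf, scores)   # zero out (set to -inf) before the softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # Eq. 9.36

N, d = 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(N, d))
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
Y = causal_self_attention(X, W_Q, W_K, W_V)         # (5, 8) output embeddings
```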
Fig. 9.17 also makes it clear that attention is quadratic in the length of the input, since at each layer we need to compute dot products between each pair of tokens in the input. This makes it extremely expensive for the input to a transformer to consist of long documents (like entire Wikipedia pages, or novels), and so most applications have to limit the input length, for example to at most a page or a paragraph of text at a time. Finding more efficient attention mechanisms is an ongoing research direction.
9.7.1 Transformer Blocks
The self-attention calculation lies at the core of what's called a transformer block, which, in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers. The input and output dimensions of these blocks are matched so they can be stacked just as was the case for stacked RNNs. Fig. 9.18 illustrates a standard transformer block consisting of a single attention layer followed by a fully-connected feedforward layer, with residual connections and layer normalizations following each. We've already seen feedforward layers in Chapter 7, but what are residual connections and layer norm? In deep networks, residual connections are connections that pass information from a lower layer to a higher layer without going through the intermediate layer. Allowing information from the activation going forward and the gradient going backwards to skip a layer improves learning and gives higher level layers direct access to information from lower layers (He et al., 2016). Residual connections in transformers are implemented by adding a layer's input vector to its output vector before passing it forward. In the transformer block shown in Fig. 9.18, residual connections are used with both the attention and feedforward sublayers. These summed vectors are then normalized using layer normalization (Ba et al., 2016). If we think of a layer as one long vector of units, the resulting function computed in a transformer block can be expressed as:
z = \text{LayerNorm}(x + \text{SelfAttn}(x))   (9.37)
y = \text{LayerNorm}(z + \text{FFNN}(z))   (9.38)
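In code, a transformer block is just this pair of residual-plus-normalization steps wrapped around the two sublayers; self_attn, ffnn, and layer_norm below are stand-ins for the sublayers described in this section (in a real implementation each layer norm has its own parameters).

```python
def transformer_block(x, self_attn, ffnn, layer_norm):
    """One transformer block, following Eq. 9.37 and Eq. 9.38."""
    z = layer_norm(x + self_attn(x))   # residual connection around self-attention
    y = layer_norm(z + ffnn(z))        # residual connection around the feedforward layer
    return y
```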
Layer normalization (or layer norm) is one of many forms of normalization that can be used to improve training performance in deep neural networks by keeping the values of a hidden layer in a range that facilitates gradient-based training. Layer norm is a variation of the standard score, or z-score, from statistics applied to a single hidden layer. The first step in layer normalization is to calculate the mean, µ, and standard deviation, σ, over the elements of the vector to be normalized. Given a hidden layer with dimensionality d_h, these values are calculated as follows.
\mu = \frac{1}{d_h} \sum_{i=1}^{d_h} x_i   (9.39)
\sigma = \sqrt{\frac{1}{d_h} \sum_{i=1}^{d_h} (x_i - \mu)^2}   (9.40)
Given these values, the vector components are normalized by subtracting the mean from each and dividing by the standard deviation. The result of this computation is a new vector with zero mean and a standard deviation of one.
\hat{x} = \frac{x - \mu}{\sigma}   (9.41)
Finally, in the standard implementation of layer normalization, two learnable parameters, γ and β, representing gain and offset values, are introduced.
\text{LayerNorm} = \gamma \hat{x} + \beta   (9.42)
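Eqs. 9.39–9.42 as a minimal numpy sketch over a single hidden vector; the small eps added to the denominator is a standard numerical-stability detail not shown in the equations.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize a vector x to zero mean and unit standard deviation,
    then apply the learnable gain (gamma) and offset (beta)."""
    mu = x.mean()                              # Eq. 9.39
    sigma = np.sqrt(((x - mu) ** 2).mean())    # Eq. 9.40
    x_hat = (x - mu) / (sigma + eps)           # Eq. 9.41
    return gamma * x_hat + beta                # Eq. 9.42
```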
9.7.2 Multihead Attention
The different words in a sentence can relate to each other in many different ways simultaneously. For example, distinct syntactic, semantic, and discourse relationships can hold between verbs and their arguments in a sentence. It would be difficult for a single transformer block to learn to capture all of the different kinds of parallel relations among its inputs. Transformers address this issue with multihead self-attention layers. These are sets of self-attention layers, called heads, that reside in parallel layers at the same depth in a model, each with its own set of parameters. Given these distinct sets of parameters, each head can learn different aspects of the relationships that exist among inputs at the same level of abstraction.
To implement this notion, each head, i, in a self-attention layer is provided with its own set of key, query, and value matrices: W^K_i, W^Q_i, and W^V_i. These are used to project the inputs into separate key, value, and query embeddings for each head, with the rest of the self-attention computation remaining unchanged. In multi-head attention, instead of using the model dimension d that's used for the input and output of the model, the key and query embeddings have dimensionality d_k, and the value embeddings have dimensionality d_v (in the original transformer paper d_k = d_v = 64). Thus for each head i, we have weight matrices W^Q_i ∈ R^{d×d_k}, W^K_i ∈ R^{d×d_k}, and W^V_i ∈ R^{d×d_v}.
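A sketch of how the per-head projections combine, reusing the causal_self_attention function from the earlier sketch; the concatenation of the head outputs and the final output projection W_O follow the standard transformer design and are assumptions here, since they are described after this excerpt.

```python
import numpy as np

def multihead_attention(X, heads, W_O):
    """X: (N, d) inputs. heads: list of (W_Q_i, W_K_i, W_V_i) triples of shapes
    (d, d_k), (d, d_k), (d, d_v). Each head runs the causal self-attention sketch
    from above with its own projections; the per-head outputs are concatenated and
    projected back to d dimensions by W_O of shape (h * d_v, d).
    (The concatenation and W_O are standard practice, not taken from this excerpt.)"""
    head_outputs = [causal_self_attention(X, W_Q, W_K, W_V)
                    for (W_Q, W_K, W_V) in heads]
    return np.concatenate(head_outputs, axis=-1) @ W_O
```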