understanding deep learning
simon j.d. prince
december 24, 2023

the most recent version of this document can be found at http://udlbook.com. copyright in this work has been licensed exclusively to the mit press, https://mitpress.mit.edu, which will be releasing the final version to the public in 2024. all inquiries regarding rights should be addressed to the mit press, rights and permissions department. this work is subject to a creative commons cc-by-nc-nd license.

i would really appreciate help improving this document. no detail too small! please mail suggestions, factual inaccuracies, ambiguities, questions, and errata to [email protected].

calvert, coppola, ellison, faulkner, kerpatenko, morris, robinson, sträussler, wallace, waymon, wojnarowicz, and all the others whose work is even more important and interesting than deep learning.

contents

preface
acknowledgements

1 introduction
  1.1 supervised learning
  1.2 unsupervised learning
  1.3 reinforcement learning
  1.4 ethics
  1.5 structure of book
  1.6 other books
  1.7 how to read this book

2 supervised learning
  2.1 supervised learning overview
  2.2 linear regression example
  2.3 summary

3 shallow neural networks
  3.1 neural network example
  3.2 universal approximation theorem
  3.3 multivariate inputs and outputs
  3.4 shallow neural networks: general case
  3.5 terminology
  3.6 summary

4 deep neural networks
  4.1 composing neural networks
  4.2 from composing networks to deep networks
  4.3 deep neural networks
  4.4 matrix notation
  4.5 shallow vs. deep neural networks
  4.6 summary

5 loss functions
  5.1 maximum likelihood
  5.2 recipe for constructing loss functions
  5.3 example 1: univariate regression
  5.4 example 2: binary classification
  5.5 example 3: multiclass classification
  5.6 multiple outputs
  5.7 cross-entropy loss
  5.8 summary

6 fitting models
  6.1 gradient descent
  6.2 stochastic gradient descent
  6.3 momentum
  6.4 adam
  6.5 training algorithm hyperparameters
  6.6 summary

7 gradients and initialization
  7.1 problem definitions
  7.2 computing derivatives
  7.3 toy example
  7.4 backpropagation algorithm
  7.5 parameter initialization
  7.6 example training code
  7.7 summary

8 measuring performance
  8.1 training a simple model
  8.2 sources of error
  8.3 reducing error
  8.4 double descent
  8.5 choosing
search through the family of possible equations (possible cyan curves) relating input to output to find the one that describes the training data most accurately. itfollowsthatthemodelsinfigure1.2requirelabeledinput/outputpairsfortraining. for example, the music classification model would require a large number of audio clips where a human expert had identified the genre of each. these input/output pairs take theroleofateacherorsupervisorforthetrainingprocess,andthisgivesrisetotheterm supervised learning. 1.1.4 deep neural networks thisbookconcernsdeepneuralnetworks,whichareaparticularlyusefultypeofmachine learning model. they are equations that can represent an extremely broad family of relationships between input and output, and where it is particularly easy to search through this family to find the relationship that describes the training data. deep neural networks can process inputs that are very large, of variable length, and contain various kinds of internal structures. they can output single real numbers (regression),multiplenumbers(multivariateregression),orprobabilitiesovertwoormore classes (binary and multiclass classification, respectively). as we shall see in the next section, their outputs may also be very large, of variable length, and contain internal structure. itisprobablyhardtoimagineequationswiththeseproperties,andthereader should endeavor to suspend disbelief for now. 1.1.5 structured outputs figure1.4adepictsamultivariatebinaryclassificationmodelforsemanticsegmentation. here, every pixel of an input image is assigned a binary label that indicates whether it belongs to a cow or the background. figure 1.4b shows a multivariate regression model where the input is an image of a street scene and the output is the depth at each pixel. in both cases, the output is high-dimensional and structured. however, this structure is closely tied to the input, and this can be exploited; if a pixel is labeled as “cow,” then a neighbor with a similar rgb value probably has the same label. figures 1.4c–e depict three models where the output has a complex structure that is not so closely tied to the input. figure 1.4c shows a model where the input is an audio file and the output is the transcribed words from that file. figure 1.4d is a translation draft: please send errata to [email protected] 1 introduction figure 1.4 supervised learning tasks with structured outputs. a) this semantic segmentation model maps an rgb image to a binary image indicating whether each pixel belongs to the background or a cow (adapted from noh et al., 2015). b) this monocular depth estimation model maps an rgb image to an output image where each pixel represents the depth (adapted from cordts et al., 2016). c) this audio transcription model maps an audio sample to a transcription of the spoken words in the audio. d) this translation model maps an english text stringtoitsfrenchtranslation. e)thisimagesynthesismodelmapsacaptionto animage(examplefromhttps://openai.com/dall-e-2/). ineachcase,theoutput has a complex internal structure or grammar. in some cases, many outputs are compatible with the input. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.2 unsupervised learning 7 modelinwhichtheinputisabodyoftextinenglish,andtheoutputcontainsthefrench translation. figure 1.4e depicts a very challenging task in which the input is descriptive text, and the model must produce an image that matches this description. 
in principle, the latter three tasks can be tackled in the standard supervised learning framework, but they are more difficult for two reasons. first, the output may genuinely be ambiguous; there are multiple valid translations from an english sentence to a french one and multiple images that are compatible with any caption. second, the output contains considerable structure; not all strings of words make valid english and french sentences, and not all collections of rgb values make plausible images. in addition to learning the mapping, we also have to respect the "grammar" of the output. fortunately, this "grammar" can be learned without the need for output labels. for example, we can learn how to form valid english sentences by learning the statistics of a large corpus of text data. this provides a connection with the next section of the book, which
. in practice, this takes the form of one sgd-like update within another. keskar this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 159 et al. (2017) showed that sgd finds wider minima as the batch size is reduced. this may be because of the batch variance term that results from implicit regularization by sgd. ishida et al. (2020) use a technique named flooding, in which they intentionally prevent the traininglossfrombecomingzero. thisencouragesthesolutiontoperformarandomwalkover the loss landscape and drift into a flatter area with better generalization. bayesian approaches: for some models, including the simplified neural network model in figure 9.11, the bayesian predictive distribution can be computed in closed form (see bishop, 2006; prince, 2012). for neural networks, the posterior distribution over the parameters can- not be represented in closed form and must be approximated. the two main approaches are variationalbayes(hinton&vancamp,1993;mackay,1995;barber&bishop,1997;blundell et al., 2015), in which the posterior is approximated by a simpler tractable distribution, and markovchainmontecarlo(mcmc)methods,whichapproximatethedistributionbydrawing a set of samples (neal, 1995; welling & teh, 2011; chen et al., 2014; ma et al., 2015; li et al., 2016a). the generation of samples can be integrated into sgd, and this is known as stochas- tic gradient mcmc (see ma et al., 2015). it has recently been discovered that “cooling” the posteriordistributionovertheparameters(makingitsharper)improvespredictionsfromthese models(wenzeletal.,2020a),butthisisnotcurrentlyfullyunderstood(seenocietal.,2021). transfer learning: transfer learning for visual tasks works extremely well (sharif razavian etal.,2014)andhassupportedrapidprogressincomputervision,includingtheoriginalalexnet results(krizhevskyetal.,2012). transferlearninghasalsoimpactednaturallanguageprocess- ing(nlp),wheremanymodelsarebasedonpre-trainedfeaturesfromthebertmodel(devlin et al., 2019). more information can be found in zhuang et al. (2020) and yang et al. (2020b). self-supervised learning: self-supervised learning techniques for images have included in- paintingmaskedimageregions(pathaketal.,2016),predictingtherelativepositionofpatches in an image (doersch et al., 2015), re-arranging permuted image tiles back into their original configuration (noroozi & favaro, 2016), colorizing grayscale images (zhang et al., 2016b), and transforming rotated images back to their original orientation (gidaris et al., 2018). in sim- clr (chen et al., 2020c), a network is learned that maps versions of the same image that have been photometrically and geometrically transformed to the same representation while re- pelling versions of different images, with the goal of becoming indifferent to irrelevant image transformations. jing & tian (2020) present a survey of self-supervised learning in images. self-supervised learning in nlp can be based on predicting masked words(devlin et al., 2019), predicting the next word in a sentence (radford et al., 2019; brown et al., 2020), or predicting whethertwosentencesfollowoneanother(devlinetal.,2019). inautomaticspeechrecognition, the wav2vec model (schneider et al., 2019) aims to distinguish an original audio sample from one where 10ms of audio has been swapped out from elsewhere in the clip. self-supervision has also been applied to graph neural networks (chapter 13). 
tasks include recovering masked features (you et al., 2020) and recovering the adjacency structure of the graph (kipf & welling, 2016). liu et al. (2023a) review self-supervised learning for graph models. data augmentation: data augmentation for images dates back to at least lecun et al. (1998) and contributed to the success of alexnet (krizhevsky et al., 2012), in which the dataset was increased by a factor of 2048. image augmentation approaches include geometric transformations,
changing or manipulating the color space, noise injection, and applying spatial filters. more elaborate techniques include randomly mixing images (inoue, 2018; summers & dinneen, 2019), randomly erasing parts of the image (zhong et al., 2020), style transfer (jackson et al., 2019), and randomly swapping image patches (kang et al., 2017). in addition, many studies have used generative adversarial networks or gans (see chapter 15) to produce novel but plausible data examples (e.g., calimeri et al., 2017). in other cases, the data have been augmented with adversarial examples (goodfellow et al., 2015a), which are minor perturbations of the training data that cause the example to be misclassified. a review of data augmentation for images can be found in shorten & khoshgoftaar (2019).

augmentation methods for acoustic data include pitch shifting, time stretching, dynamic range compression, and adding random noise (e.g., abeßer et al., 2017; salamon & bello, 2017; xu et al., 2015; lasseck, 2018), as well as mixing data pairs (zhang et al., 2017c; yun et al., 2019), masking features (park et al., 2019), and using gans to generate new data (mun et al., 2017). augmentation for speech data includes vocal tract length perturbation (jaitly & hinton, 2013; kanda et al., 2013), style transfer (gales, 1998; ye & young, 2004), adding noise (hannun et al., 2014), and synthesizing speech (gales et al., 2009).

augmentation methods for text include adding noise at a character level by switching, deleting, and inserting letters (belinkov & bisk, 2018; feng et al., 2020), or by generating adversarial examples (ebrahimi et al., 2018), using common spelling mistakes (coulombe, 2018), randomly swapping or deleting words (wei & zou, 2019), using synonyms (kolomiyets et al., 2011), altering adjectives (li et al., 2017c), passivization (min et al., 2020), using generative models to create new data (qiu et al., 2020), and round-trip translation to another language and back (aiken & park, 2010). augmentation methods for text are reviewed by bayer et al. (2022).

problems

problem 9.1 consider a model where the prior distribution over the parameters is a normal distribution with mean zero and variance σ²_ϕ so that

pr(ϕ) = ∏_{j=1}^{J} norm_{ϕ_j}[0, σ²_ϕ],    (9.21)

where j indexes the model parameters. we now maximize ∏_{i=1}^{I} pr(y_i | x_i, ϕ) pr(ϕ). show that the associated loss function of this model is equivalent to l2 regularization.

problem 9.2 how do the gradients of the loss function change when l2 regularization (equation 9.5) is added?

problem 9.3* consider a linear regression model y = ϕ₀ + ϕ₁x with input x, output y, and parameters ϕ₀ and ϕ₁. assume we have I training examples {x_i, y_i} and use a least squares loss. consider adding gaussian noise with mean zero and variance σ²_x to the inputs x_i at each training iteration. what is the expected gradient update?

problem 9.4* derive the loss function for multiclass classification when we use label smoothing so that the target probability distribution has 0.9 at the correct class and the remaining probability mass of 0.1 is divided between the remaining d_o − 1 classes.

problem 9.5 show that the weight decay parameter update with decay rate λ:

ϕ ← (1 − λ)ϕ − α ∂l/∂ϕ,    (9.22)

on the original loss function l[ϕ] is equivalent to a standard gradient update using l2 regularization so that the modified loss function l̃[ϕ] is:

l̃[ϕ] = l[ϕ] + (λ/2α) Σ_k ϕ_k²,    (9.23)

where ϕ_k are the parameters, and α is the learning rate.

problem 9.6 consider a model with parameters ϕ = [ϕ₀, ϕ₁]ᵀ. draw the l0, l1/2, and l1 regularization terms in a similar form to figure 9.1b. the lp regularization term is Σ_{d=1}^{D} |ϕ_d|^p.

chapter 10
convolutional networks

chapters 2–9 introduced the supervised learning pipeline for deep neural networks. however, these chapters only considered fully connected networks with a single path from input to output. chapters 10–13 introduce more specialized network components with sparser connections, shared weights, and parallel processing paths. this chapter describes convolutional layers, which are mainly used for processing image data.

images have three properties that suggest the need for specialized model architecture. first, they are high-dimensional. a typical image for a classification task contains 224×224 rgb values (i.e., 150,528 input dimensions). hidden layers in fully connected networks are generally larger than the input size, so even for a shallow network, the number of weights would exceed 150,528², or 22 billion. this poses obvious practical problems in terms of the required training data, memory, and computation.

second, nearby image pixels are statistically related. however, fully connected networks have no notion of "nearby" and treat the relationship between every input equally. if the pixels of the training and test images were randomly permuted in the same way, the network could still be trained with no practical difference.

third, the interpretation of an image is stable under geometric transformations. an image of a tree is still an image of a tree if we shift it leftwards by a few pixels. however, this shift changes every input to the network. hence, a fully connected model must learn the patterns of pixels that signify a tree separately at every position, which is clearly inefficient.

convolutional layers process each local image region independently, using parameters shared across the whole image. they use fewer parameters than fully connected layers, exploit the spatial relationships between nearby pixels, and don't have to re-learn the interpretation of the pixels at every position. a network predominantly consisting of convolutional layers is known as a convolutional neural network or cnn.

10.1 invariance and equivariance

we argued above that some properties of images (e.g., tree texture) are stable under transformations. in this section, we make this idea more mathematically precise.

figure 10.1 invariance and equivariance for translation. a–b) in image classification, the goal is to categorize both images as "mountain" regardless of the horizontal shift that has occurred. in other words, we require the network prediction to be invariant to translation. c,e) the goal of semantic segmentation is to associate a label with each pixel. d,f) when the input image is translated, we want the output (colored overlay) to translate in the same way. in other words, we require the output to be equivariant with respect to translation. panels c–f) adapted from bousselham et al. (2021).

a function f[x] of an image x is invariant to a transformation t[x] if:

f[t[x]] = f[x].    (10.1)

in other words, the output of the function f[x] is the same regardless of the transformation t[x].
networks for image classification should be invariant to geometric transformations of the image (figure 10.1a–b). the network f[x] should identify an image as containing the same object, even if it has been translated, rotated, flipped, or warped.

a function f[x] of an image x is equivariant or covariant to a transformation t[x] if:

f[t[x]] = t[f[x]].    (10.2)

in other words, f[x] is equivariant to the transformation t[x] if its output changes in the same way under the transformation as the input.
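as a quick numerical check of these two definitions, the sketch below (in numpy, with a circular shift standing in for the translation t[x]) verifies that a global average is invariant to the shift, while a convolution with circular padding is equivariant to it. the functions and sizes are illustrative choices, not part of the book's code.

```python
import numpy as np

def translate(x, shift=2):
    # a circular translation of a 1d signal: a stand-in for t[x]
    return np.roll(x, shift)

def pool(x):
    # global average pooling discards position, so it is invariant
    return x.mean()

def conv(x, w):
    # cross-correlation with kernel size three and circular padding,
    # so every position is treated in the same way
    return w[0] * np.roll(x, 1) + w[1] * x + w[2] * np.roll(x, -1)

x = np.random.randn(8)
w = np.array([0.5, -1.0, 0.25])

# invariance: f[t[x]] = f[x]
print(np.isclose(pool(translate(x)), pool(x)))                    # True

# equivariance: f[t[x]] = t[f[x]]
print(np.allclose(conv(translate(x), w), translate(conv(x, w))))  # True
```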
networks for per-pixel image segmentation should be equivariant to transformations (figure 10.1c–f); if the image is translated, rotated, or flipped, the network f[x] should return a segmentation that has been transformed in the same way.

figure 10.2 1d convolution with kernel size three. each output z_i is a weighted sum of the nearest three inputs x_{i−1}, x_i, and x_{i+1}, where the weights are ω = [ω₁, ω₂, ω₃]. a) output z₂ is computed as z₂ = ω₁x₁ + ω₂x₂ + ω₃x₃. b) output z₃ is computed as z₃ = ω₁x₂ + ω₂x₃ + ω₃x₄. c) at position z₁, the kernel extends beyond the first input x₁. this can be handled by zero padding, in which we assume values outside the input are zero. the final output is treated similarly. d) alternatively, we could only compute outputs where the kernel fits within the input range ("valid" convolution); now, the output will be smaller than the input.

10.2 convolutional networks for 1d inputs

convolutional networks consist of a series of convolutional layers, each of which is equivariant to translation. they also typically include pooling mechanisms that induce partial invariance to translation. for clarity of exposition, we first consider convolutional networks for 1d data, which are easier to visualize. in section 10.3, we progress to 2d convolution, which can be applied to image data.

10.2.1 1d convolution operation

convolutional layers are network layers based on the convolution operation. in 1d, a convolution transforms an input vector x into an output vector z so that each output z_i is a weighted sum of nearby inputs. the same weights are used at every position and are collectively called the convolution kernel or filter. the size of the region over which inputs are combined is termed the kernel size. for a kernel size of three, we have:

z_i = ω₁x_{i−1} + ω₂x_i + ω₃x_{i+1},    (10.3)

where ω = [ω₁, ω₂, ω₃]ᵀ is the kernel (figure 10.2).¹ notice that the convolution operation is equivariant with respect to translation. if we translate the input x, then the corresponding output z is translated in the same way.

¹ strictly speaking, this is a cross-correlation and not a convolution, in which the weights would be flipped relative to the input (so we would switch x_{i−1} with x_{i+1}). regardless, this (incorrect) definition is the usual convention in machine learning.

figure 10.3 stride, kernel size, and dilation. a) with a stride of two, we evaluate the kernel at every other position, so the first output z₁ is computed from a weighted sum centered at x₁, and b) the second output z₂ is computed from a weighted sum centered at x₃ and so on. c) the kernel size can also be changed. with a kernel size of five, we take a weighted sum of the nearest five inputs. d) in dilated or atrous convolution, we intersperse zeros in the weight vector to allow us to combine information over a large area using fewer weights.

10.2.2 padding

equation 10.3 shows that each output is computed by taking a weighted sum of the previous, current, and subsequent positions in the input. this begs the question of how to deal with the first output (where there is no previous input) and the final output (where there is no subsequent input). there are two common approaches.
the first is to pad the edges of the inputs with new values and proceed as usual. zero padding assumes the input is zero outside its |
valid range (figure 10.2c). other possibilities include treating the input as circular or reflecting it at the boundaries. the second approach is to discard the output positions wherethe kernelexceeds the range of input positions. these valid convolutionshavethe advantage of introducing no extra information at the edges of the input. however, they have the disadvantage that the representation decreases in size. 10.2.3 stride, kernel size, and dilation in the example above, each output was a sum of the nearest three inputs. however, this is just one of a larger family of convolution operations, the members of which are distinguishedbytheirstride,kernelsize,anddilationrate. whenweevaluatetheoutput this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 165 at every position, we term this a stride of one. however, it is also possible to shift the kernel by a stride greater than one. if we have a stride of two, we create roughly half the number of outputs (figure 10.3a–b). the kernel size can be increased to integrate over a larger area (figure 10.3c). how- ever, it typically remains an odd number so that it can be centered around the current position. increasingthekernelsizehasthedisadvantageofrequiringmoreweights. this leads to the idea of dilated or atrous convolutions, in which the kernel values are inter- spersedwithzeros. forexample, wecanturnakernelofsizefiveintoadilatedkernelof size three by setting the second and fourth elements to zero. we still integrate informa- problems10.2–10.4 tion from a larger input region but only require three weights to do this (figure 10.3d). the number of zeros we intersperse between the weights determines the dilation rate. 10.2.4 convolutional layers aconvolutionallayercomputesitsoutputbyconvolvingtheinput, addingabiasβ, and passing each result through an activation function a[•]. with kernel size three, stride one, and dilation rate one, the ith hidden unit h would be computed as: i hi = a[2β+ω1xi−1+ω2xi3+ω3xi+1] x3 4 5 = a β+ ωjxi+j−2 , (10.4) j=1 where the bias β and kernel weights ω ,ω ,ω are trainable parameters, and (with zero 1 2 3 padding) we treat the input x as zero when it is out of the valid range. this is a special case of a fully connected layer that computes the ith hidden unit as: 2 3 xd 4 5 h = a β + ω x . (10.5) i i ij j j=1 ifthereared inputsx• andd hiddenunitsh•,thisfullyconnectedlayerwouldhaved2 weights ω•• and d biases β•. the convolutional layer only uses three weights and one bias. a fully connected layer can reproduce this exactly if most weights are set to zero problem10.5 and others are constrained to be identical (figure 10.4). 10.2.5 channels ifweonlyapplyasingleconvolution,informationwillinevitablybelost;weareaveraging nearby inputs, and the relu activation function clips results that are less than zero. hence,itisusualtocomputeseveralconvolutionsinparallel. eachconvolutionproduces a new set of hidden variables, termed a feature map or channel. draft: please send errata to [email protected] 10 convolutional networks figure 10.4 fully connected vs. convolutional layers. a) a fully connected layer has a weight connecting each input x to each hidden unit h (colored arrows) and a bias for each hidden unit (not shown). b) hence, the associated weight matrixωcontains36weightsrelatingthesixinputstothesixhiddenunits. c)a convolutionallayerwithkernelsizethreecomputeseachhiddenunitasthesame weighted sum of the three neighboring inputs (arrows) plus a bias (not shown). 
d) the weight matrix is a special case of the fully connected matrix where many weights are zero and others are repeated (same colors indicate same value, white indicates zero weight). e)
a convolutional layer with kernel size three and stride two computes a weighted sum at every other position. f) this is also a special case of a fully connected network with a different sparse weight structure. figure10.5channels. typically,multipleconvolutionsareappliedtotheinputx and stored in channels. a) a convolution is applied to create hidden units h 1 toh ,whichformthefirstchannel. b)asecondconvolutionoperationisapplied 6 to create hidden units h to h , which form the second channel. the channels 7 12 arestoredina2darrayh thatcontainsallthehiddenunitsinthefirsthidden 1 layer. c) if we add a further convolutional layer, there are now two channels at each input position. here, the 1d convolution defines a weighted sum over both input channels at the three closest positions to create each new output channel. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 167 figure 10.5a–b illustrates this with two convolution kernels of size three and with zeropadding. thefirstkernelcomputesaweightedsumofthenearestthreepixels,adds abias,andpassestheresultsthroughtheactivationfunctiontoproducehiddenunitsh 1 toh . thesecomprisethefirstchannel. thesecondkernelcomputesadifferentweighted 6 sum of the nearest three pixels, adds a different bias, and passes the results through the activationfunctiontocreatehiddenunitsh toh . thesecomprisethesecondchannel. 7 12 in general, the input and the hidden layers all have multiple channels (figure 10.5c). iftheincominglayerhasc channelsandkernelsize k, thehidden unitsineachoutput i problems10.6–10.8 channel are computed as a weighted sum over all c channels and k kernel positions i using a weight matrix ω ∈ rci×k and one bias. hence, if there are co channels in the notebook10.1 next layer, then we need ω∈rci×co×k weights and β ∈rco biases. 1dconvolution 10.2.6 convolutional networks and receptive fields chapter 4 described deep networks, which consisted of a sequence of fully connected layers. similarly, convolutional networks comprise a sequence of convolutional layers. thereceptive fieldofahiddenunitinthenetworkistheregionoftheoriginalinputthat feedsintoit. consideraconvolutionalnetworkwhereeachconvolutionallayerhaskernel size three. the hidden units in the first layer take a weighted sum of the three closest inputs,sohavereceptivefieldsofsizethree. theunitsinthesecondlayertakeaweighted sum of the three closest positions in the first layer, which are themselves weighted sums of three inputs. hence, the hidden units in the second layer have a receptive field of size five. inthisway,thereceptivefieldofunitsinsuccessivelayersincreases,andinformation from across the input is gradually integrated (figure 10.6). problems10.9–10.11 10.2.7 example: mnist-1d we now apply a convolutional network to the mnist-1d data (see figure 8.1). the input x is a 40d vector, and the output f is a 10d vector that is passed through a softmax layer to produce class probabilities. we use a network with three hidden layers (figure 10.7). the fifteen channels of the first hidden layer h are each computed using 1 a kernel size of three and a stride of two with “valid” padding, giving nineteen spatial positions. the second hidden layer h is also computed using a kernel size of three, a 2 strideoftwo,and“valid”padding. thethirdhiddenlayeriscomputedsimilarly. atthis stage, the representation has four spatial positions and fifteen channels. 
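putting this description together, a possible pytorch version of the network in figure 10.7 is sketched below. this is a reconstruction from the text rather than the book's reference code: the relu activations are assumed, and the final reshape and fully connected layer described next are included so that the output shape and parameter count can be checked.

```python
import torch
import torch.nn as nn

# sketch of the mnist-1d network of figure 10.7: three "valid" convolutions
# with kernel size three and stride two, then a fully connected layer
model = nn.Sequential(
    nn.Conv1d(1, 15, kernel_size=3, stride=2),   # 40 -> 19 positions
    nn.ReLU(),
    nn.Conv1d(15, 15, kernel_size=3, stride=2),  # 19 -> 9 positions
    nn.ReLU(),
    nn.Conv1d(15, 15, kernel_size=3, stride=2),  # 9 -> 4 positions
    nn.ReLU(),
    nn.Flatten(),                                # 15 channels x 4 positions = 60
    nn.Linear(60, 10),                           # ten activations (softmax is applied in the loss)
)

x = torch.randn(100, 1, 40)                        # a batch of 100 mnist-1d inputs
print(model(x).shape)                              # torch.Size([100, 10])
print(sum(p.numel() for p in model.parameters()))  # 2050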
these values are reshaped into a vector of size sixty, which is mapped by a fully connected layer to the ten output activations. this network was trained for 100,000 steps using sgd without momentum, a learning rate of 0.01, and a batch size of 100 on a dataset of 4,000 examples. we compare this to a fully connected network with the same number of layers and hidden units (i.e., three hidden layers with 285, 135, and 60 hidden units, respectively). the
convolutional net- workhas2,050parameters,andthefullyconnectednetworkhas150,185parameters. by the logic of figure 10.4, the convolutional network is a special case of the fully connected draft: please send errata to [email protected] 10 convolutional networks figure 10.6 receptive fields for network with kernel width of three. a) an input with eleven dimensions feeds into a hidden layer with three channels and convo- lution kernel of size three. the pre-activations of the three highlighted hidden unitsinthefirsthiddenlayerh aredifferentweightedsumsofthenearestthree 1 inputs, so the receptive field in h has size three. b) the pre-activations of the 1 four highlighted hidden units in layer h each take a weighted sum of the three 2 channels in layer h at each of the three nearest positions. each hidden unit in 1 layer h weights the nearest three input positions. hence, hidden units in h 1 2 have a receptive field size of five. c) the hidden units in the third layer (kernel size three, stride two) increases the receptive field size to seven. d) by the time we add a fourth layer, the receptive field of the hidden units at position three have a receptive field that covers the entire input. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 169 figure10.7convolutionalnetworkforclassifyingmnist-1ddata(seefigure8.1). the mnist-1d input has dimension d =40. the first convolutional layer has i fifteen channels, kernel size three, stride two, and only retains “valid” positions to make a representation with nineteen positions and fifteen channels. the fol- lowing two convolutional layers have the same settings, gradually reducing the representation size. finally, a fully connected layer takes all sixty hidden units from the third hidden layer. it outputs ten activations that are subsequently passed through a softmax layer to produce the ten class probabilities. figure 10.8 mnist-1d results. a) the convolutional network from figure 10.7 eventually fits the training data perfectly and has ∼17% test error. b) a fully connected network with the same number of hidden layers and the number of hiddenunitsineachlearnsthetrainingdatafasterbutfailstogeneralizewellwith ∼40% test error. the latter model can reproduce the convolutional model but failstodoso. theconvolutionalstructurerestrictsthepossiblemappingstothose thatprocesseverypositionsimilarly,andthisrestrictionimprovesperformance. draft: please send errata to [email protected] 10 convolutional networks one. the latter has enough flexibility to replicate the former exactly. figure 10.8 shows notebook10.2 bothmodelsfitthetrainingdataperfectly. however, thetesterrorfortheconvolutional convolution formnist-1d network is much less than for the fully connected network. this discrepancy is probably not due to the difference in the number of parameters; we know overparameterization usually improves performance (section 8.4.1). the likely explanation is that the convolutional architecture has a superior inductive bias (i.e., interpolates between the training data better) because we have embodied some prior knowledge in the architecture; we have forced the network to process each position in the input in the same way. we know that the data were created by starting with a template that is (among other operations) randomly translated, so this is sensible. thefullyconnectednetworkhastolearnwhateachdigittemplatelookslikeatevery position. 
in contrast, the convolutional network shares information across positions and hence learns to identify each category more accurately. another way of thinking about this is that when we train the convolutional network, we search through a smaller family of input/output mappings, all of which are plausible. alternatively, the convolutional structure can be considered a regularizer that applies an infinite penalty to most of the solutions that a fully connected network can describe.

10.3 convolutional networks for 2d inputs

the previous section described convolutional networks for processing 1d data. such networks can be applied to financial time series, audio, and text. however, convolutional networks are more usually applied to 2d image data
. the convolutional kernel is now a 2d object. a 3×3 kernel ω ∈ r3×3 applied to a 2d input comprising of elements x ij computes a single layer of hidden units h as: ij " # x3 x3 hij = a β+ ωmnxi+m−2,j+n−2 , (10.6) m=1n=1 where ω are the entries of the convolutional kernel. this is simply a weighted sum mn overasquare3×3inputregion. thekernelistranslatedbothhorizontallyandvertically problem10.13 across the 2d input (figure 10.9) to create an output at each position. oftentheinputisanrgbimage,whichistreatedasa2dsignalwiththreechannels (figure 10.10). here, a 3×3 kernel would have 3×3×3 weights and be applied to the notebook10.3 threeinputchannelsateachofthe3×3positionstocreatea2doutputthatisthesame 2dconvolution height and width as the input image (assuming zero padding). to generate multiple problem10.14 output channels, we repeat this process with different kernel weights and append the resultstoforma3dtensor. ifthekernelissizek×k, andtherearec inputchannels, i appendixb.3 eachoutputchannelisaweightedsumofc ×k×k quantitiesplusonebias. itfollows i tensors that to compute c output channels, we need c ×c ×k×k weights and c biases. o i o o this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.4 downsampling and upsampling 171 figure10.92dconvolutionallayer. eachoutputh computesaweightedsumof ij the 3×3 nearest inputs, adds a bias, and passes the result through an activation function. a) here, the output h (shaded output) is a weighted sum of the nine 23 positionsfromx tox (shadedinputs). b)differentoutputsarecomputedby 12 34 translating the kernel across the image grid in two dimensions. c–d) with zero padding, positions beyond the image’s edge are considered to be zero. 10.4 downsampling and upsampling the network in figure 10.7 increased receptive field size by scaling down the representa- tion at each layer using stride two convolutions. we now consider methods for scaling down or downsampling 2d input representations. we also describe methods for scaling them back up (upsampling), which is useful when the output is also an image. finally, we consider methods to change the number of channels between layers. this is helpful when recombining representations from two branches of a network (chapter 11). 10.4.1 downsampling therearethreemainapproachestoscalingdowna2drepresentation. here,weconsider the most common case of scaling down both dimensions by a factor of two. first, we draft: please send errata to [email protected] 10 convolutional networks figure 10.10 2d convolution applied to an image. the image is treated as a 2d inputwiththreechannelscorrespondingtothered,green,andbluecomponents. with a 3×3 kernel, each pre-activation in the first hidden layer is computed by pointwisemultiplyingthe3×3×3kernelweightswiththe3×3rgbimagepatch centered at the same position, summing, and adding the bias. to calculate all the pre-activations in the hidden layer, we “slide” the kernel over the image in bothhorizontalandverticaldirections. theoutputisa2dlayerofhiddenunits. to create multiple output channels, we would repeat this process with multiple kernels, resulting in a 3d tensor of hidden units at hidden layer h . 1 can sample every other position. when we use a stride of two, we effectively apply this problem10.15 method simultaneously with the convolution operation (figure 10.11a). second, max pooling retains the maximum of the 2×2 input values (figure 10.11b). 
this induces some invariance to translation; if the input is shifted by one pixel, many of these maximum values remain the same. finally, mean pooling or average pooling averages the inputs. for all approaches |
, we apply downsampling separately to each channel, so the output has half the width and height but the same number of channels. 10.4.2 upsampling the simplest way to scale up a network layer to double the resolution is to duplicate all the channels at each spatial position four times (figure 10.12a). a second method is max unpooling; this is used where we have previously used a max pooling operation for downsampling, and we distribute the values to the positions they originated from (figure 10.12b). a third approach uses bilinear interpolation to fill in the missing values between the points where we have samples. (figure 10.12c). a fourth approach is roughly analogous to downsampling using a stride of two. in notebook10.4 that method, there were half as many outputs as inputs, and for kernel size three, each downsampling &upsampling output was a weighted sum of the three closest inputs (figure 10.13a). in transposed convolution, this picture is reversed (figure 10.13c). there are twice as many outputs this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.4 downsampling and upsampling 173 figure 10.11 methods for scaling down representation size (downsampling). a) sub-sampling. theoriginal4×4representation(left)isreducedtosize2×2(right) byretainingeveryotherinput. colorsontheleftindicatewhichinputscontribute totheoutputsontheright. thisiseffectivelywhathappenswithakernelofstride two, except that the intermediate values are never computed. b) max pooling. each output comprises the maximum value of the corresponding 2×2 block. c) mean pooling. each output is the mean of the values in the 2×2 block. figure 10.12 methods for scaling up representation size (upsampling). a) the simplest way to double the size of a 2d layer is to duplicate each input four times. b) in networks where we have previously used a max pooling operation (figure10.11b),wecanredistributethevaluestothesamepositionstheyoriginally camefrom(i.e.,wherethemaximawere). thisisknownasmaxunpooling. c)a third option is bilinear interpolation between the input values. figure 10.13 transposed convolution in 1d. a) downsampling with kernel size three, stride two, and zero padding. each output is a weighted sum of three inputs (arrows indicate weights). b) this can be expressed by a weight matrix (same color indicates shared weight). c) in transposed convolution, each input contributesthreevaluestotheoutputlayer,whichhastwiceasmanyoutputsas inputs. d) the associated weight matrix is the transpose of that in panel (b). draft: please send errata to [email protected] 10 convolutional networks as inputs, and each input contributes to three of the outputs. when we consider the associated weight matrix of this upsampling mechanism (figure 10.13d), we see that it is the transpose of the matrix for the downsampling mechanism (figure 10.13b). 10.4.3 changing the number of channels sometimes we want to change the number of channels between one hidden layer and the nextwithoutfurtherspatialpooling. thisisusuallysowecancombinetherepresentation with another parallel computation (see chapter 11). to accomplish this, we apply a convolution with kernel size one. each element of the output layer is computed by taking a weighted sum of all the channels at the same position (figure 10.14). we can repeatthismultipletimeswithdifferentweightstogenerateasmanyoutputchannelsas we need. the associated convolution weights have size 1×1×c ×c . hence, this is i o knownas1×1convolution. 
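a short pytorch sketch of this operation (the channel counts 256 and 64 are arbitrary illustrative choices): the 1×1 convolution changes the number of channels, and copying its weights into a linear layer applied independently at every spatial position reproduces it exactly.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 14, 14)                # batch, c_i = 256 channels, 14x14 positions
conv1x1 = nn.Conv2d(256, 64, kernel_size=1)    # 256*64 weights plus 64 biases
print(conv1x1(x).shape)                        # torch.Size([1, 64, 14, 14])

# the same computation as a fully connected layer run independently on the
# 256 channels at each of the 14x14 positions
linear = nn.Linear(256, 64)
with torch.no_grad():
    linear.weight.copy_(conv1x1.weight.view(64, 256))
    linear.bias.copy_(conv1x1.bias)
y = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)    # channels last, then back
print(torch.allclose(conv1x1(x), y, atol=1e-5))          # True
```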
combined with a bias and activation function, it is equivalent to running the same fully connected network on the channels at every position.

10.5 applications

we conclude by describing three computer vision applications. we describe convolutional networks for image classification where the goal is to assign the image to one of a predetermined set of categories. then we consider object detection, where the goal is to identify multiple objects in an image and find the bounding box around each. finally, we describe an early system for semantic segment
ationwherethegoalistoassignalabel to each pixel according to which object is present. 10.5.1 image classification much of the pioneering work on deep learning in computer vision focused on image classificationusingtheimagenetdataset(figure10.15). thiscontains1,281,167training images, 50,000validationimages, and100,000testimages, andeveryimageislabeledas belonging to one of 1000 possible categories. most methods reshape the input images to a standard size; in a typical system, the input x to the network is a 224×224 rgb image, and the output is a probability distribution over the 1000 classes. the task is challenging; there are a large number of classes, and they exhibit considerable variation (figure 10.15). in 2011, before deep networkswereapplied,thestate-of-the-artmethodclassifiedthetestimageswith∼25% errors for the correct class being in the top five suggestions. five years later, the best deep learning models eclipsed human performance. in 2012, alexnet was the first convolutional network to perform well on this task. it consists of eight hidden layers with relu activation functions, of which the first five are convolutional and the rest fully connected (figure 10.16). the network starts by this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.5 applications 175 figure10.141×1convolution. tochangethenumberofchannelswithoutspatial pooling, we apply a 1×1 kernel. each output channel is computed by taking a weighted sum of all of the channels at the same position, adding a bias, and passing through an activation function. multiple output channels are created by repeating this operation with different weights and biases. figure 10.15exampleimagenetclassificationimages. themodelaimstoassign an input image to one of 1000 classes. this task is challenging because the images vary widely along different attributes (columns). these include rigidity (monkey<canoe), number of instances in image (lizard<strawberry), clutter (compass<steeldrum),size(candle<spiderweb),texture(screwdriver<leopard), distinctiveness of color (mug<red wine), and distinctiveness of shape (headland <bell). adapted from russakovsky et al. (2015). draft: please send errata to [email protected] 10 convolutional networks figure 10.16alexnet(krizhevskyetal., 2012). the network maps a 224×224 color image to a 1000-dimensional vec- tor representing class probabilities. the network first convolves with 11×11 ker- nels and stride 4 to create 96 channels. it decreases the resolution again using a max pool operation and applies a 5×5 convolutional layer. another max pool- ing layer follows, and three 3×3 convo- lutional layers are applied. after a fi- nal max pooling operation, the result is vectorized and passed through three fully connected (fc) layers and finally the softmax layer. downsamplingtheinputusingan11×11kernelwithastrideoffourtocreate96channels. it then downsamples again using a max pooling layer before applying a 5×5 kernel to create 256 channels. there are three more convolutional layers with kernel size 3×3, problems10.16–10.17 eventually resulting in a 13×13 representation with 256 channels. this is resized into a single vector of length 43,264 and then passed through three fully connected layers containing 4096, 4096, and 1000 hidden units, respectively. the last layer is passed through the softmax function to output a probability distribution over the 1000 classes. the complete network contains ∼60 million parameters. 
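a rough check of where these parameters sit (a sketch; the layer sizes are taken from the description above): the first convolutional layer is tiny compared with even the smallest of the fully connected layers.

```python
import torch.nn as nn

count = lambda layer: sum(p.numel() for p in layer.parameters())

# first alexnet convolution: 96 kernels of size 11x11 over 3 input channels
print(count(nn.Conv2d(3, 96, kernel_size=11, stride=4)))   # 34944

# final fully connected layer alone: 4096 -> 1000 classes
print(count(nn.Linear(4096, 1000)))                        # 4097000
```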
most of these are in the fully connected layers at the end of the network. the dataset size was augmented by a factor of 2048 using (i) spatial transformations and (ii) modifications of the input intensities. at test time, five different cropped and mirrored versions of the image were run through the network, and their predictions averaged. the system was learned using sgd with a momentum coefficient of 0.9 and a batch size of 128. dropout was applied in the fully connected layers, and an l2 (weight decay) regularizer was used. this system achieved a 16.4% top-5 error rate and a 38.1
considers unsupervised learning models. 1.2 unsupervised learning constructing a model from input data without corresponding output labels is termed unsupervised learning; theabsenceofoutputlabelsmeanstherecanbeno“supervision.” rather than learning a mapping from input to output, the goal is to describe or under- stand the structure of the data. as was the case for supervised learning, the data may have very different characteristics; it may be discrete or continuous, low-dimensional or high-dimensional, and of constant or variable length. 1.2.1 generative models this book focuses on generative unsupervised models, which learn to synthesize new data examples that are statistically indistinguishable from the training data. some generativemodelsexplicitlydescribetheprobabilitydistributionovertheinputdataand herenewexamplesaregeneratedbysamplingfromthisdistribution. othersmerelylearn a mechanism to generate new examples without explicitly describing their distribution. state-of-the-art generative models can synthesize examples that are extremely plau- sible but distinct from the training examples. they have been particularly successful at generating images (figure 1.5) and text (figure 1.6). they can also synthesize data under the constraint that some outputs are predetermined (termed conditional genera- tion). examples include image inpainting (figure 1.7) and text completion (figure 1.8). indeed, modern generative models for text are so powerful that they can appear intel- ligent. given a body of text followed by a question, the model can often “fill in” the missing answer by generating the most likely completion of the document. however, in reality, the model only knows about the statistics of language and does not understand the significance of its answers. draft: please send errata to [email protected] 1 introduction figure 1.5 generative models for images. left: two images were generated from a model trained on pictures of cats. these are not real cats, but samples from a probabilitymodel. right: twoimagesgeneratedfromamodeltrainedonimages of buildings. adapted from karras et al. (2020b). themoonhadrisenbythetimeireachedtheedgeoftheforest,andthelightthatfilteredthroughthe treeswassilverandcold. ishivered, thoughiwasnotcold, andquickenedmypace. ihadneverbeen so far from the village before, and i was not sure what to expect. i had been walking for hours, and i was tired and hungry. i had left in such a hurry that i had not thought to pack any food, and i had notthoughttobringaweapon. iwasunarmedandaloneinastrangeplace,andididnotknowwhat iwasdoing. ihadbeenwalkingforsolongthatihadlostallsenseoftime,andihadnoideahowfarihadcome. i only knew that i had to keep going. i had to find her. i was getting close. i could feel it. she was nearby,andshewasintrouble. ihadtofindherandhelpher,beforeitwastoolate. figure 1.6 short story synthesized from a generative model of text data. the model describes a probability distribution that assigns a probability to every output string. sampling from the model creates strings that follow the statistics of the training data (here, short stories) but have never been seen before. figure 1.7 inpainting. in the original image (left), the boy is obscured by metal cables. theseundesirableregions(center)areremovedandthegenerativemodel synthesizes a new image (right) under the constraint that the remaining pixels must stay the same. adapted from saharia et al. (2022a). this work is subject to a creative commons cc-by-nc-nd license. 
i was a little nervous before my first lecture at the university of bath. it seemed like there were hundreds of students and they looked intimidating. i stepped up to the lectern and was about to speak when something bizarre happened. suddenly, the room was filled with a deafening noise, like a giant roar. it was so loud that i couldn't hear anything else and i had to cover my ears. i could see the students looking around, confused and frightened. then, as quickly as it had started, the noise stopped and the
% top-1errorrate. atthetime,thiswasanenormousleapforwardinperformanceatatask considered far beyond the capabilities of contemporary methods. this result revealed the potential of deep learning and kick-started the modern era of ai research. the vgg network was also targeted at classification in the imagenet task and achieved a considerably better performance of 6.8% top-5 error rate and a 23.7% top-1 error rate. this network is similarly composed of a series of interspersed convolutional and max pooling layers, where the spatial size of the representation gradually decreases, but the number of channels increase. these are followed by three fully connected layers (figure 10.17). the vgg network was also trained using data augmentation, weight decay, and dropout. althoughtherewerevariousminordifferencesinthetrainingregime,themostimpor- tant change between alexnet and vgg was the depth of the network. the latter used problem10.18 19 hidden layers and 144 million parameters. the networks in figures 10.16 and 10.17 are depicted at the same scale for comparison. there was a general trend for several years for performance on this task to improve as the depth of the networks increased, and this is evidence that depth is important in neural networks. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.5 applications 177 figure 10.17 vggnetwork(simonyan& zisserman,2014)depicted atthe same scale as alexnet (see figure 10.16). this network consists of a series of convolu- tional layers and max pooling operations, in which the spatial scale of the rep- resentation gradually decreases, but the number of channels gradually increases. the hidden layer after the last convolutional operation is resized to a 1d vector and three fully connected layers follow. the network outputs 1000 activations corresponding to the class labels that are passed through a softmax function to create class probabilities. 10.5.2 object detection in object detection, thegoal is to identifyand localize multipleobjects within the image. an early method based on convolutional networks was you only look once, or yolo for short. the input to the yolo network is a 448×448 rgb image. this is passed through 24 convolutional layers that gradually decrease the representation size using max pooling operations while concurrently increasing the number of channels, similarly tothevggnetwork. thefinalconvolutionallayerisofsize7×7andhas1024channels. this is reshaped to a vector, and a fully connected layer maps it to 4096 values. one further fully connected layer maps this representation to the output. the output values encode which class is present at each of a 7×7 grid of locations (figure 10.18a–b). for each location, the output values also encode a fixed number of bounding boxes. five parameters define each box: the x- and y-positions of the center, the height and width of the box, and the confidence of the prediction (figure 10.18c). the confidence estimates the overlap between the predicted and ground truth bound- ing boxes. the system is trained using momentum, weight decay, dropout, and data augmentation. transfer learning is employed; the network is initially trained on the imagenet classification task and is then fine-tuned for object detection. after the network is run, a heuristic process is used to remove rectangles with low confidenceandtosuppresspredictedboundingboxesthatcorrespondtothesameobject so only the most confident one is retained. 
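the book does not spell out this heuristic; a typical version of it (often called non-maximum suppression) is sketched below, where the confidence and overlap thresholds are illustrative choices rather than the values used by yolo.

```python
import numpy as np

def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, score_thresh=0.25, iou_thresh=0.5):
    # discard low-confidence boxes, then greedily keep the most confident box
    # and remove any remaining box that overlaps it too much
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thresh]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
scores = np.array([0.9, 0.75, 0.8])
print(non_max_suppression(boxes, scores))   # [0, 2] -- the duplicate box is suppressed
```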
draft: please send errata to [email protected] 10 convolutional networks figure10.18yoloobjectdetection. a)theinputimageisreshapedto448×448 anddividedintoaregular7×7grid. b)thesystempredictsthemostlikelyclass ateachgridcell. c)italsopredictstwoboundingboxespercell,andaconfidence value (represented by thickness of line). d) during inference, the most likely boundingboxesareretained,andboxeswithlowerconfidencevaluesthatbelong to the same object are suppressed. adapted from redmon et al. (2016). 10.5.3 semantic segmentation thegoalofsemanticsegmentationistoassignalabeltoeachpixelaccordingtotheobject thatitbelongstoornolabelifthatpixeldoesnotcorrespondtoanythinginthetraining database |
. an early network for semantic segmentation is depicted in figure 10.19. the input is a 224×224 rgb image, and the output is a 224×224×21 array that contains the probability of each of 21 possible classes at each position. thefirstpartofthenetworkisasmallerversionofvgg(figure10.17)thatcontains thirteenratherthansixteenconvolutionallayersanddownsizestherepresentationtosize 14×14. there is then one more max pooling operation, followed by two fully connected layers that map to two 1d representations of size 4096. these layers do not represent spatial position but instead, combine information from across the whole image. here, the architecture diverges from vgg. another fully connected layer reconsti- tutes the representation into 7×7 spatial positions and 512 channels. this is followed this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.6 summary 179 figure10.19semanticsegmentationnetworkofnohetal.(2015). theinputisa 224×224image,whichispassedthroughaversionofthevggnetworkandeven- tuallytransformedintoarepresentationofsize4096usingafullyconnectedlayer. this contains information about the entire image. this is then reformed into a representation of size 7×7 using another fully connected layer, and the image is upsampled and deconvolved (transposed convolutions without upsampling) in a mirror image of the vgg network. the output is a 224×224×21 representation that gives the output probabilities for the 21 classes at each position. by a series of max unpooling layers (see figure 10.12b) and deconvolution layers. these are transposed convolutions (see figure 10.13) but in 2d and without the upsampling. finally,thereisa1×1convolutiontocreate21channelsrepresentingthepossibleclasses and a softmax operation at each spatial position to map the activations to class proba- bilities. the downsampling side of the network is sometimes referred to as an encoder, and the upsampling side as a decoder, so networks of this type are sometimes called encoder-decoder networks or hourglass networks due to their shape. the final segmentation is generated using a heuristic method that greedily searches for the class that is most represented and infers its region, taking into account the probabilities but also encouraging connectedness. then the next most-represented class isaddedwhereitdominatesattheremainingunlabeledpixels. thiscontinuesuntilthere is insufficient evidence to add more (figure 10.20). 10.6 summary in convolutional layers, each hidden unit is computed by taking a weighted sum of the nearby inputs, adding a bias, and applying an activation function. the weights and the bias are the same at every spatial position, so there are far fewer parameters than in a fully connected network, and the number of parameters doesn’t increase with the input image size. to ensure that information is not lost, this operation is repeated with draft: please send errata to [email protected] 10 convolutional networks figure 10.20 semantic segmentation results. the final result is created from the 21 probability maps by greedily selecting the best class and using a heuristic methodtofindasensiblebinarymapbasedontheprobabilitiesandtheirspatial proximity. if there is enough evidence, subsequent classes are added, and their segmentation maps are combined. adapted from noh et al. (2015). different weights and biases to create multiple channels at each spatial position. typicalconvolutionalnetworksconsistofconvolutionallayersinterspersedwithlayers that downsample by a factor of two. 
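a minimal sketch of this pattern in pytorch (channel counts are illustrative): the 3×3 convolutions keep the spatial size, and each pooling step halves it while the number of channels doubles.

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                   # 224 -> 112
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                   # 112 -> 56
)
print(block(torch.randn(1, 3, 224, 224)).shape)        # torch.Size([1, 128, 56, 56])
```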
as the network progresses, the spatial dimensions usually decrease by factors of two, and the number of channels increases by factors of two. at the end of the network, there are typically one or more fully connected layers that integrate information from across the entire input and create the desired output. if the output is an image, a mirrored “decoder” upsamples back to the original size. the translational equivariance of convolutional layers imposes a useful inductive bias that increases performance for image-based tasks relative to fully connected networks. we described image classification, object detection, and semantic segmentation networks. image classification performance was shown to improve as the network became deeper. however, subsequent experiments showed that increasing the
network depth indefinitely doesn’t continue to help; after a certain depth, the system becomes difficult to train. this is the motivation for residual connections, which are the topic of the next chapter. notes dumoulin&visin(2016)presentanoverviewofthemathematicsofconvolutionsthatexpands on the brief treatment in this chapter. convolutional networks: early convolutional networks were developed by fukushima & miyake (1982), lecun et al. (1989a), and lecun et al. (1989b). initial applications included this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 181 handwritingrecognition(lecunetal.,1989a;martin,1993),facerecognition(lawrenceetal., 1997),phonemerecognition(waibeletal.,1989),spokenwordrecognition(bottouetal.,1990), and signature verification (bromley et al., 1993). however, convolutional networks were popu- larizedbylecunetal.(1998),whobuiltasystemcalledlenetforclassifying28×28grayscale images of handwritten digits. this is immediately recognizable as a precursor of modern net- works;itusesaseriesofconvolutionallayers,followedbyfullyconnectedlayers,sigmoidactiva- tions rather than relus, and average pooling rather than max pooling. alexnet (krizhevsky et al., 2012) is widely considered the starting point for modern deep convolutional networks. imagenet challenge: dengetal.(2009)collatedtheimagenetdatabaseandtheassociated classificationchallengedroveprogressindeeplearningforseveralyearsafteralexnet. notable subsequent winners of this challenge include the network-in-network architecture (lin et al., 2014), which alternated convolutions with fully connected layers that operated independently on all of the channels at each position (i.e., 1×1 convolutions). zeiler & fergus (2014) and simonyan&zisserman(2014)trainedlargeranddeeperarchitecturesthatwerefundamentally similar to alexnet. szegedy et al. (2017) developed an architecture called googlenet, which introduced inception blocks. these use several parallel paths with different filter sizes, which are then recombined. this effectively allowed the system to learn the filter size. thetrendwasforperformancetoimprovewithincreasingdepth. however,itultimatelybecame difficult to train deeper networks without modifications; these include residual connections and normalization layers, both of which are described in the next chapter. progress in the imagenet challenges is summarized in russakovsky et al. (2015). a more general survey of image classification using convolutional networks can be found in rawat & wang (2017). the improvement of image classification networks over time is visualized in figure 10.21. types of convolutional layers: atrous or dilated convolutions were introduced by chen etal.(2018c)andyu&koltun(2015). transposedconvolutionswereintroducedbylongetal. (2015). odenaetal.(2016)pointedoutthattheycanleadtocheckerboardartifactsandshould be used with caution. lin et al. (2014) is an early example of convolution with 1×1 filters. many variants of the standard convolutional layer aim to reduce the number of parameters. theseincludedepthwiseorchannel-separateconvolution(howardetal.,2017;tranetal.,2018), inwhichadifferentfilterconvolveseachchannelseparatelytocreateanewsetofchannels. for akernelsizeofk×k withc inputchannelsandc outputchannels,thisrequiresk×k×c parameters rather than the k ×k ×c ×c parameters in a regular convolutional layer. 
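this parameter count is easy to verify; the short sketch below (channel count chosen arbitrarily) compares a regular convolution with a channel-separate convolution implemented via the groups argument of pytorch's conv2d:

```python
import torch.nn as nn

C, K = 64, 3                          # channels and kernel size (arbitrary choices)

regular   = nn.Conv2d(C, C, K, padding=1, bias=False)           # K*K*C*C weights
depthwise = nn.Conv2d(C, C, K, padding=1, bias=False, groups=C) # K*K*C weights

print(sum(p.numel() for p in regular.parameters()))    # 3*3*64*64 = 36864
print(sum(p.numel() for p in depthwise.parameters()))  # 3*3*64    = 576
```

setting groups to a value between one and c gives the grouped convolution discussed next, with a commensurate intermediate parameter count.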
a related approach is grouped convolutions (xie et al., 2017), where each convolution kernel is only applied to a subset of the channels with a commensurate reduction in the parameters. in fact, groupedconvolutionswereusedinalexnetforcomputationalreasons; thewholenetwork could not run on a single gpu, so some channels were processed on one gpu and some on another, with limited interaction points. separable convolutions treat each kernel as an outer product of 1d vectors; they use c +k +k parameters for each of the c channels. partial convolutions (liu et al., 2018a) are |
used when inpainting missing pixels and account for the partial masking of the input. gated convolutions learn the mask from the previous layer (yu et al., 2019; chang et al., 2019b). hu et al. (2018b) propose squeeze-and-excitation networks which re-weight the channels using information pooled across all spatial positions. downsamplingandupsampling: averagepoolingdatesbacktoatleastlecunetal.(1989a) and max pooling to zhou & chellappa (1988). scherer et al. (2010) compared these methods and concluded that max pooling was superior. the max unpooling method was introduced by zeiler et al. (2011) and zeiler & fergus (2014). max pooling can be thought of as applying draft: please send errata to [email protected] 10 convolutional networks figure 10.21imagenetperformance. eachcirclerepresentsadifferentpublished model. blue circles represent models that were state-of-the-art. models dis- cussed in this book are also highlighted. the alexnet and vgg networks were remarkable for their time but are now far from state of the art. resnet-200 and densenet are discussed in chapter 11. imagegpt, vit, swin, and davit are discussedinchapter12. adaptedfromhttps://paperswithcode.com/sota/image- classification-on-imagenet. an l∞ norm to the hidden units that are to be pooled. this led to applying other lk norms appendixb.3.2 (springenberg et al., 2015; sainath et al., 2013), although these require more computation and vectornorms are not widely used. zhang (2019) introduced max-blur-pooling, in which a low-pass filter is appliedbeforedownsamplingtopreventaliasing,andshowedthatthisimprovesgeneralization over translation of the inputs and protects against adversarial attacks (see section 20.4.6). shi et al. (2016) introduced pixelshuffle, which used convolutional filters with a stride of 1/s to scale up 1d signals by a factor of s. only the weights that lie exactly on positions are used to create the outputs, and the ones that fall between positions are discarded. this can be implemented by multiplying the number of channels in the kernel by a factor of s, where the sth output position is computed from just the sth subset of channels. this can be trivially extended to 2d convolution, which requires s2 channels. convolution in 1d and 3d: convolutionalnetworksareusuallyappliedtoimagesbuthave also been applied to 1d data in applications that include speech recognition (abdel-hamid etal.,2012),sentenceclassification(zhangetal.,2015;conneauetal.,2017),electrocardiogram classification (kiranyaz et al., 2015), and bearing fault diagnosis (eren et al., 2019). a survey of 1d convolutional networks can be found in kiranyaz et al. (2021). convolutional networks havealsobeenappliedto3ddata,includingvideo(jietal.,2012;sahaetal.,2016;tranetal., 2015) and volumetric measurements (wu et al., 2015b; maturana & scherer, 2015). invariance and equivariance: part of the motivation for convolutional layers is that they are approximately equivariant with respect to translation, and part of the motivation for max this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 183 pooling is to induce invariance to small translations. zhang (2019) considers the degree to which convolutional networks really have these properties and proposes the max-blur-pooling modification that demonstrably improves them. there is considerable interest in making net- works equivariant or invariant to other types of transformations, such as reflections, rotations, and scaling. 
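the approximate translational equivariance of a convolutional layer can also be checked directly. the toy sketch below (not from any of the references above) uses circular padding so that the convolution commutes exactly with a circular shift; with zero padding, the equality only holds away from the image boundary:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(1, 3, 32, 32)

shift = dict(shifts=(5, -3), dims=(2, 3))       # a circular translation of the image
a = conv(torch.roll(x, **shift))                # translate, then convolve
b = torch.roll(conv(x), **shift)                # convolve, then translate

print(torch.allclose(a, b, atol=1e-5))          # True: the two orders agree
```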
sifre & mallat (2013) constructed a system based on wavelets that induced both translational and rotational invariance in image patches and applied this to texture classifica- tion. kanazawa et al. (2014) developed locally scale-invariant convolutional neural networks. cohen&welling(2016)exploitedgrouptheorytoconstructgroupcnns,whichareequivariant to larger families of transformations, including reflections and rotations. |
esteves et al. (2018) introduced polar transformer networks, which are invariant to translations and equivariant to rotation and scale. worrall et al. (2017) developed harmonic networks, the first example of a group cnn that was equivariant to continuous rotations. initialization and regularization: convolutional networks are typically initialized using xavierinitialization(glorot&bengio,2010)orheinitialization(heetal.,2015),asdescribed insection7.5. however,theconvolutionorthogonalinitializer(xiaoetal.,2018a)isspecialized problem10.19 for convolutionalnetworks (xiao et al., 2018a). networks of up to 10,000 layerscan be trained using this initialization without the need for residual connections. dropout is effective for fully connected networks but less so for convolutional layers (park & kwak,2016). thismaybebecauseneighboringimagepixelsarehighlycorrelated,soifahidden unitdropsout,thesameinformationispassedonviaadjacentpositions. thisisthemotivation for spatial dropout and cutout. in spatial dropout (tompson et al., 2015), entire feature maps are discarded instead of individual pixels. this circumvents the problem of neighboring pixels carryingthesameinformation. similarly, devries&taylor(2017b)propose cutout, inwhicha square patch of each input image is masked at training time. wu & gu (2015) modified max poolingfordropoutlayersusingamethodthatinvolvessamplingfromaprobabilitydistribution over the constituent elements rather than always taking the maximum. adaptive kernels: the inception block (szegedy et al., 2017) applies convolutional filters of different sizes in parallel and, as such, provides a crude mechanism by which the network can learn the appropriate filter size. other work has investigated learning the scale of convolutions as part of the training process (e.g., pintea et al., 2021; romero et al., 2021) or the stride of downsampling layers (riad et al., 2022). insomesystems,thekernelsizeischangedadaptivelybasedonthedata. thisissometimesin thecontextofguidedconvolution,whereoneinputisusedtohelpguidethecomputationfrom another input. for example, an rgb image might be used to help upsample a low-resolution depth map. jia et al. (2016) directly predicted the filter weights themselves using a different network branch. xiong et al. (2020b) change the kernel size adaptively. su et al. (2019a) moderate weights of fixed kernels by a function learned from another modality. dai et al. (2017) learn offsets of weights so that they do not have to be applied in a regular grid. object detection and semantic segmentation: objectdetectionmethodscanbedivided into proposal-based and proposal-free schemes. in the former case, processing occurs in two stages. a convolutional network ingests the whole image and proposes regions that might contain objects. these proposal regions are then resized, and a second network analyzes them toestablishwhetherthereisanobjectthereandwhatitis. anearlyexampleofthisapproach wasr-cnn(girshicketal.,2014). thiswassubsequentlyextendedtoallowend-to-endtraining (girshick, 2015) and to reduce the cost of the region proposals (ren et al., 2015). subsequent workonfeaturepyramidnetworksimprovedbothperformanceandspeedbycombiningfeatures draft: please send errata to [email protected] 10 convolutional networks across multiple scales lin et al. (2017b). in contrast, proposal-free schemes perform all the processinginasinglepass. yoloredmonetal.(2016),whichwasdescribedinsection10.5.2, is the most celebrated example of a proposal-free scheme. 
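the suppression of overlapping boxes mentioned in figure 10.18 is typically implemented by non-maximum suppression. the following simplified sketch (not the exact procedure used by yolo or any particular detector) keeps the highest-scoring box and greedily removes boxes that overlap it by more than a threshold:

```python
import torch

def iou(box, boxes):
    """intersection over union between one box and a set of boxes (x1, y1, x2, y2)."""
    x1 = torch.maximum(box[0], boxes[:, 0])
    y1 = torch.maximum(box[1], boxes[:, 1])
    x2 = torch.minimum(box[2], boxes[:, 2])
    y2 = torch.minimum(box[3], boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, threshold=0.5):
    """greedily keep the highest-scoring box; discard overlapping lower-scoring boxes."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        best = order[0]
        keep.append(best.item())
        if order.numel() == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= threshold]
    return keep

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [20., 20., 30., 30.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))   # [0, 2]: the second box is suppressed
```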
the most recent iteration of this framework at the time of writing is yolov7 (wang et al., 2022a). a recent review of object detection can be found in zou et al. (2023). the semantic segmentation network described in section 10.5.3 was developed by noh et al. (2015). manysubsequentapproacheshavebeenvariationsofu-net(ronnebergeretal.,2015), which is described in section 11.5.3. recent surveys of semantic segmentation can be found in minaee |
et al. (2021) and ulku & akagündüz (2022). visualizing convolutional networks: the dramatic success of convolutional networks led toaseriesofeffortstovisualizetheinformationtheyextractfromtheimage(seeqinetal.,2018, for a review). erhan et al. (2009) visualized the optimal stimulus that activated a hidden unit by starting with an image containing noise and then optimizing the input to make the hidden unitmostactiveusinggradientascent. zeiler&fergus(2014)trainedanetworktoreconstruct the input and then set all the hidden units to zero except the one they were interested in; the reconstruction then provides information about what drives the hidden unit. mahendran & vedaldi (2015) visualized an entire layer of a network. their network inversion technique aimedtofindanimagethatresultedintheactivationsatthatlayerbutalsoincorporatesprior knowledge that encourages this image to have similar statistics to natural images. finally, bau et al. (2017) introduced network dissection. here, a series of images with known pixel labels capturing color, texture, and object type are passed through the network, and the correlation of a hidden unit with each property is measured. this method has the advantage that it only uses the forward pass of the network and does not require optimization. these methodsdidprovidesomepartialinsightintohowthenetworkprocessesimages. forexample, bau et al. (2017) showed that earlier layers correlate more with texture and color and later layers with the object type. however, it is fair to say that fully understanding the processing of networks containing millions of parameters is currently not possible. problems problem 10.1∗ showthattheoperationinequation10.4isequivariantwithrespecttotransla- tion. problem 10.2 equation 10.3 defines 1d convolution with a kernel size of three, stride of one, and dilation one. write out the equivalent equation for the 1d convolution with a kernel size of three and a stride of two as pictured in figure 10.3a–b. problem 10.3 writeouttheequationforthe1ddilatedconvolutionwithakernelsizeofthree and a dilation rate of two, as pictured in figure 10.3d. problem 10.4 write out the equation for a 1d convolution with kernel size of seven, a dilation rate of three, and a stride of three. problem 10.5 draw weight matrices in the style of figure 10.4d for (i) the strided convolution in figure 10.3a–b, (ii) the convolution with kernel size 5 in figure 10.3c, and (iii) the dilated convolution in figure 10.3d. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 185 problem10.6∗drawa6×12weightmatrixinthestyleoffigure10.4drelatingtheinputsx ,...,x 1 6 to the outputs h ,...,h in the multi-channel convolution as depicted in figures 10.5a–b. 1 12 problem10.7∗drawa12×6weightmatrixinthestyleoffigure10.4drelatingtheinputsh ,...,h 1 12 to the outputs h′,...,h′ in the multi-channel convolution in figure 10.5c. 1 6 problem 10.8 consider a 1d convolutional network where the input has three channels. the first hidden layer is computed using a kernel size of three and has four channels. the second hiddenlayeriscomputedusingakernelsizeoffiveandhastenchannels. howmanybiasesand how many weights are needed for each of these two convolutional layers? problem10.9anetworkconsistsofthree1dconvolutionallayers. ateachlayer,azero-padded convolution with kernel size three, stride one, and dilation one is applied. what size is the receptive field of the hidden units in the third layer? problem 10.10 a network consists of three 1d convolutional layers. 
at each layer, a zero-padded convolution with kernel size seven, stride one, and dilation one is applied. what size is the receptive field of hidden units in the third layer? problem 10.11 consider a convolutional network with 1d input x. the first hidden layer h_1 is computed using a
convolution with kernel size five, stride two, and a dilation rate of one. the second hidden layer h is computed using a convolution with kernelsize three, stride one, and 2 a dilation rate of one. the third hidden layer h is computed using a convolution with kernel 3 sizefive,strideone,andadilationrateoftwo. whatarethereceptivefieldsizesateachhidden layer? problem10.12the1dconvolutionalnetworkinfigure10.7wastrainedusingstochasticgradient descentwithalearningrateof0.01andabatchsizeof100onatrainingdatasetof4,000examples for 100,000 steps. how many epochs was the network trained for? problem 10.13 draw a weight matrix in the style of figure 10.4d that shows the relationship between the 24 inputs and the 24 outputs in figure 10.9. problem 10.14 consider a 2d convolutional layer with kernel size 5×5 that takes 3 input channels and returns 10 output channels. how many convolutional weights are there? how many biases? problem 10.15 draw a weight matrix in the style of figure 10.4d that samples every other variable in a 1d input (i.e., the 1d analog of figure 10.11a). show that the weight matrix for 1d convolution with kernel size and stride two is equivalent to composing the matrices for 1d convolution with kernel size one and this sampling matrix. problem 10.16∗ consider the alexnet network (figure 10.16). how many parameters are used in each convolutional and fully connected layer? what is the total number of parameters? problem 10.17 what is the receptive field size at each of the first three layers of alexnet (figure 10.16)? problem 10.18 how many weights and biases are there at each convolutional layer and fully connected layer in the vgg architecture (figure 10.17)? problem 10.19∗ consider two hidden layers of size 224×224 with c and c channels, respec- 1 2 tively, connected by a 3×3 convolutional layer. describe how to initialize the weights using he initialization. draft: please send errata to [email protected] 11 residual networks the previous chapter described how image classification performance improved as the depth of convolutional networks was extended from eight layers (alexnet) to eighteen layers (vgg). this led to experimentation with even deeper networks. however, per- formance decreased again when many more layers were added. this chapter introduces residual blocks. here, each network layer computes an addi- tive change to the current representation instead of transforming it directly. this allows deeper networks to be trained but causes an exponential increase in the activation mag- nitudes at initialization. residual blocks employ batch normalization to compensate for this, which re-centers and rescales the activations at each layer. residual blocks with batch normalization allow much deeper networks to be trained, and these networks improve performance across a variety of tasks. architectures that combine residual blocks to tackle image classification, medical image segmentation, and human pose estimation are described. 11.1 sequential processing every network we have seen so far processes the data sequentially; each layer receives the previous layer’s output and passes the result to the next (figure 11.1). for example, a three-layer network is defined by: h = f [x,ϕ ] 1 1 1 h = f [h ,ϕ ] 2 2 1 2 h = f [h ,ϕ ] 3 3 2 3 y = f [h ,ϕ ], (11.1) 4 3 4 where h , h , and h denote the intermediate hidden layers, x is the network input, y 1 2 3 is the output, and the functions f [•,ϕ ] perform the processing. 
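as a concrete (and entirely hypothetical) instance of this sequential composition, each function f_k below is a small fully connected layer with a relu activation, and the network output is just the nested application of the four functions:

```python
import torch
import torch.nn as nn

# hypothetical sizes: input dimension 4, three hidden layers of dimension 8, scalar output
f1 = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
f2 = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
f3 = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
f4 = nn.Linear(8, 1)

x = torch.randn(16, 4)        # a batch of sixteen inputs

# sequential processing, one layer after another (equation 11.1) ...
h1 = f1(x)
h2 = f2(h1)
h3 = f3(h2)
y = f4(h3)

# ... which is the same as the nested composition of equation 11.2
y_nested = f4(f3(f2(f1(x))))
print(torch.allclose(y, y_nested))   # True
```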
in a standard neural network, each layer consists of a linear transformation followed by an activation function, and the parameters ϕ_k comprise the weights and biases of the
11.1 sequential processing 187 figure 11.1 sequential processing. standard neural networks pass the output of each layer directly into the next layer. lineartransformation. inaconvolutionalnetwork,eachlayerconsistsofasetofconvolu- tions followed by an activation function, and the parameters comprise the convolutional kernels and biases. since the processing is sequential, we can equivalently think of this network as a series of nested functions: (cid:20) h (cid:2) (cid:3) i (cid:21) y=f f f f [x,ϕ ],ϕ ,ϕ ,ϕ . (11.2) 4 3 2 1 1 2 3 4 11.1.1 limitations of sequential processing inprinciple, wecan add asmanylayersas wewant, andin the previous chapter, wesaw thataddingmorelayerstoaconvolutionalnetworkdoesimproveperformance;thevgg network (figure 10.17), which has eighteen layers, outperforms alexnet (figure 10.16), which has eight layers. however, image classification performance decreases again as further layers are added (figure 11.2). this is surprising since models generally perform betterasmorecapacityisadded(figure8.10). indeed,thedecreaseispresentforboththe training set and the test set, which implies that the problem is training deeper networks rather than the inability of deeper networks to generalize. this phenomenon is not completely understood. one conjecture is that at initial- ization, the loss gradients change unpredictably when we modify parameters in early network layers. with appropriate initialization of the weights (see section 7.5), the gra- dient of the loss with respect to these parameters will be reasonable (i.e., no exploding or vanishing gradients). however, the derivative assumes an infinitesimal change in the parameter,whereasoptimizationalgorithmsuseafinitestepsize. anyreasonablechoice notebook11.1 of step size may move to a place with a completely different and unrelated gradient; the shattered loss surface looks like an enormous range of tiny mountains rather than a single smooth gradients structure that is easy to descend. consequently, the algorithm doesn’t make progress in the way that it does when the loss function gradient changes more slowly. this conjecture is supported by empirical observations of gradients in networks with a single input and output. for a shallow network, the gradient of the output with re- spect to the input changes slowly as we change the input (figure 11.3a). however, for a appendixb.2.1 deep network, a tiny change in the input results in a completely different gradient (fig- autocorrelation ure11.3b). thisiscapturedbytheautocorrelationfunctionofthegradient(figure11.3c). function nearby gradients are correlated for shallow networks, but this correlation quickly drops to zero for deep networks. this is termed the shattered gradients phenomenon. draft: please send errata to [email protected] 11 residual networks figure11.2decreaseinperformancewhenaddingmoreconvolutionallayers. a)a 20-layer convolutional network outperforms a 56-layer neural network for image classification on the test set of the cifar-10 dataset (krizhevsky & hinton, 2009). b) this is also true for the training set, which suggests that the problem relatestotrainingtheoriginalnetworkratherthanafailuretogeneralizetonew data. adapted from he et al. (2016a). figure 11.3 shattered gradients. a) consider a shallownetwork with 200 hidden units and glorot initialization (he initialization without the factor of two) for both the weights and biases. 
the gradient ∂y/∂x of the scalar network output y with respect to the scalar input x changes relatively slowly as we change the input x. b) for a deep network with 24 layers and 200 hidden units per layer, this gradient changes very quickly and unpredictably. c) the autocorrelation function of the gradient shows that nearby gradients become unrelated (have autocorrelation close to zero) for deep networks. this shattered gradients phenomenon may explain why it is hard to train deep networks. gradient descent algorithms rely on the loss surface being relatively smooth, so the gradients should be related before and after each update step. adapted from balduzzi et al. (2017).

shattered gradients presumably arise because changes in early network layers modify the output in an increasingly complex way as the network becomes deeper. the derivative of the output y with respect to the first layer f_1 of the network in equation 11.1 is:

\[
\frac{\partial y}{\partial f_1} = \frac{\partial f_4}{\partial f_3}\,\frac{\partial f_3}{\partial f_2}\,\frac{\partial f_2}{\partial f_1}. \qquad (11.3)
\]

when we change the parameters that determine f_1, all of the derivatives in this sequence can change since layers f_2, f_3, and f_4 are themselves computed from f_1. consequently, the updated gradient at each training example may be completely different, and the loss function becomes badly behaved.¹

11.2 residual connections and residual blocks

residual or skip connections are branches in the computational path, whereby the input to each network layer f[•] is added back to the output (figure 11.4a). by analogy to equation 11.1, the residual network is defined as:

\[
\begin{aligned}
h_1 &= x + f_1[x, \phi_1] \\
h_2 &= h_1 + f_2[h_1, \phi_2] \\
h_3 &= h_2 + f_3[h_2, \phi_3] \\
y   &= h_3 + f_4[h_3, \phi_4], \qquad (11.4)
\end{aligned}
\]

where the first term on the right-hand side of each line is the residual connection. each function f_k learns an additive change to the current representation. it follows that their outputs must be the same size as their inputs. each additive combination of the input and the processed output is known as a residual block or residual layer.

once more, we can write this as a single function by substituting in the expressions for the intermediate quantities h_k (problem 11.1):

\[
\begin{aligned}
y = x &+ f_1[x] \\
      &+ f_2\bigl[x + f_1[x]\bigr] \\
      &+ f_3\Bigl[x + f_1[x] + f_2\bigl[x + f_1[x]\bigr]\Bigr] \\
      &+ f_4\Bigl[x + f_1[x] + f_2\bigl[x + f_1[x]\bigr] + f_3\bigl[x + f_1[x] + f_2[x + f_1[x]]\bigr]\Bigr], \qquad (11.5)
\end{aligned}
\]

where we have omitted the parameters ϕ_• for clarity. we can think of this equation as “unraveling” the network (figure 11.4b). we see that the final network output is a sum of the input and four smaller networks, corresponding to each line of the equation; one interpretation is that residual connections turn the original network into an ensemble of these smaller networks whose outputs are summed to compute the result.

¹in equations 11.3 and 11.6, we overload notation to define f_k as the output of the function f_k[•].

figure 11.4 residual connections. a) the output of each function f_k[x, ϕ_k] is added back to its input, which is passed via a parallel computational path called a residual or skip connection. hence, the function computes an additive change to the representation. b) upon expanding (unraveling) the network equations, we find that the output is the sum of the input plus four smaller networks (depicted in white, orange, gray, and cyan, respectively, and corresponding to terms in equation 11.5); we can think of this as an ensemble of networks. moreover, the output from the cyan network is itself a transformation f_4[•, ϕ_4] of another ensemble, and so on. alternatively, we can consider the network as a combination of 16 different paths through the computational graph. one example is the dashed path from input x to output y, which is the same in panels (a) and (b).

figure 11.5 order of operations in residual blocks. a) the usual order of linear transformation or convolution followed by a relu nonlinearity means that each residual block can only add non-negative quantities. b) with the reverse order, both positive and negative quantities can be added. however, we must add a linear transformation at the start of the network in case the input is all negative. c) in practice, it’s common for a residual block to contain several network layers.

a complementary way of thinking about this residual network is that it creates sixteen paths of different lengths from input to output. for example, the first function f_1[x] occurs in eight of these sixteen paths, including as a direct additive term (i.e., a path length of one), and the analogous derivative to equation 11.3 is (problems 11.2–11.3):

\[
\frac{\partial y}{\partial f_1} = \mathrm{I} + \frac{\partial f_2}{\partial f_1} + \left(\frac{\partial f_3}{\partial f_1} + \frac{\partial f_3}{\partial f_2}\frac{\partial f_2}{\partial f_1}\right) + \left(\frac{\partial f_4}{\partial f_1} + \frac{\partial f_4}{\partial f_2}\frac{\partial f_2}{\partial f_1} + \frac{\partial f_4}{\partial f_3}\frac{\partial f_3}{\partial f_1} + \frac{\partial f_4}{\partial f_3}\frac{\partial f_3}{\partial f_2}\frac{\partial f_2}{\partial f_1}\right), \qquad (11.6)
\]

where there is one term for each of the eight paths. the identity term on the right-hand side shows that changes in the parameters ϕ_1 in the first layer f_1[x, ϕ_1] contribute directly to changes in the network output y. they also contribute indirectly through the other chains of derivatives of varying lengths. in general, gradients through shorter paths will be better behaved. since both the identity term and various short chains of derivatives will contribute to the derivative for each layer, networks with residual links suffer less from shattered gradients (notebook 11.2 residual networks).

11.2.1 order of operations in residual blocks

until now, we have implied that the additive functions f[x] could be any valid network layer (e.g., fully connected or convolutional). this is technically true, but the order of operations in these functions is important. they must contain a nonlinear activation function like a relu, or the entire network will be linear. however, in a typical network layer (figure 11.5a), the relu function is at the end, so the output is non-negative. if we adopt this convention, then each residual block can only increase the input values.

hence, it is typical to change the order of operations so that the activation function is applied first, followed by the linear transformation (figure 11.5b). sometimes there may be several layers of processing within the residual block (figure 11.5c), but these usually terminate with a linear transformation. finally, we note that when we start these blocks with a relu operation, they will do nothing if the initial network input is negative since the relu will clip the entire signal to zero. hence, it’s typical to start the network with a linear transformation rather than a residual block, as in figure 11.5b.

11.2.2 deeper networks with residual connections

adding residual connections roughly doubles the depth of a network that can be practically trained before performance degrades. however, we would like to increase the depth further. to understand why residual connections do not allow us to increase the depth arbitrarily, we must consider how the variance of the activations changes during the forward pass and how the gradient magnitudes change during the backward pass.

11.3 exploding gradients in residual networks

in section 7.5, we saw that initializing the network parameters is critical.
without careful initialization, the magnitudes of the intermediate values during the forward pass of backpropagation can increase or decrease exponentially. similarly, the gradients during the backward pass can explode or vanish as we move backward through the network. hence
, we initialize the network parameters so that the expected variance of the activations(intheforwardpass)andgradients(inthebackwardpass)remainsthesame between layers. he initialization (section 7.5) achieves this for relu activations by initializing the biases β to zero and choosing normally distributed weights ω with mean zero and variance 2/d where d is the number of hidden units in the previous layer. h h now consider a residual network. we do not have to worry about the intermediate values or gradients vanishing with network depth since there exists a path whereby each layer directly contributes to the network output (equation 11.5 and figure 11.4b). however, even if we use he initialization within the residual block, the values in the forward pass increase exponentially as we move through the network. toseewhy,considerthatweaddtheresultoftheprocessingintheresidualblockback totheinput. eachbranchhassome(uncorrelated)variability. hence,theoverallvariance problem11.4 increases when we recombine them. with relu activations and he initialization, the expected variance is unchanged by the processing in each block. consequently, when we recombine with the input, the variance doubles (figure 11.6a), growing exponentially withthenumberofresidualblocks. thislimitsthepossiblenetworkdepthbeforefloating point precision is exceeded in the forward pass. a similar argument applies to the gradients in the backward pass of the backpropagation algorithm. hence,residualnetworksstillsufferfromunstableforwardpropagationandexploding gradientsevenwithheinitialization. oneapproachthatwouldstabilizetheforwardand backwardpasseswouldbeto√useheinitializationandthenmultiplythecombinedoutput of each residual block by 1/ 2 to compensate for the doubling (figure 11.6b). however, it is more usual to use batch normalization. 11.4 batch normalization batch normalizationorbatchnormshiftsandrescaleseachactivationhsothatitsmean and variance across the batch b become values that are learned during training. first, the empirical mean m and standard deviation s are computed: h h this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.11.4 batch normalization 193 figure 11.6 variance in residual networks. a) he initialization ensures that the expectedvarianceremainsunchangedafteralinearplusrelulayerf . unfortu- k nately,inresidualnetworks,theinputofeachblockisaddedbacktotheoutput, sothevariancedoublesateachlayer(graynumbersindicatevariance√)andgrows exponentially. b) one approach would be to rescale the signal by 1/ 2 between each residual block. c) a second method uses batch normalization (bn) as the first step in the residual block and initializes the associated offset δ to zero and scaleγ toone. thistransformstheinputtoeachlayertohaveunitvariance,and with he initialization, the output variance will also be one. now the variance increases linearly with the number of residual blocks. a side-effect is that, at initialization, later network layers are dominated by the residual connection and are hence close to computing the identity. x 1 m = h h |b| i s i∈b x 1 s = (h −m )2, (11.7) h |b| i h i∈b where all quantities are scalars. then we use these statistics to standardize the batch appendixc.2.4 activations to have mean zero and unit variance: standardization h −m h ← i h ∀i∈b, (11.8) i s +ϵ h where ϵ is a small number that prevents division by zero if h is the same for every i member of the batch and s =0. h finally, the normalized variable is scaled by γ and shifted by δ: h ←γh +δ ∀i∈b. 
(11.9)

after this operation, the activations have mean δ and standard deviation γ across all members of the batch (problem 11.5). both of these quantities are learned during training.

batch normalization is applied independently to each hidden unit. in a standard neural network with k layers, each containing d hidden units, there would be kd learned offsets δ and kd learned scales γ (problem 11.6). in a convolutional network, the normalizing statistics are computed over both the batch and the spatial position. if there were k layers, each containing c channels, there would be kc offsets and kc scales (notebook 11.3 batchnorm). at test time, we do not have a batch from which we can gather statistics. to resolve this, the statistics m_h and s_h are calculated across the whole training dataset (rather than just a batch) and frozen in the final network.

11.4.1 costs and benefits of batch normalization

batch normalization makes the network invariant to rescaling the weights and biases that contribute to each activation; if these are doubled, then the activations also double, the estimated standard deviation s_h doubles, and the normalization in equation 11.8 compensates for these changes. this happens separately for each hidden unit. consequently, there will be a large family of weights and biases that all produce the same effect. batch normalization also adds two parameters, γ and δ, at every hidden unit, which makes the model somewhat larger. hence, it both creates redundancy in the weight parameters and adds extra parameters to compensate for that redundancy. this is obviously inefficient, but batch normalization also provides several benefits.

stable forward propagation: if we initialize the offsets δ to zero and the scales γ to one, then each output activation will have unit variance. in a regular network, this ensures the variance is stable during forward propagation at initialization. in a residual network, the variance must still increase as we add a new source of variation to the input at each layer. however, it will increase linearly with each residual block; the kth layer adds one unit of variance to the existing variance of k (figure 11.6c).

at initialization, this has the side-effect that later layers make a smaller change to the overall variation than earlier ones. the network is effectively less deep at the start of training since later layers are close to computing the identity. as training proceeds, the network can increase the scales γ in later layers and can control its own effective depth.

higher learning rates: empirical studies and theory both show that batch normalization makes the loss surface and its gradient change more smoothly (i.e., reduces shattered gradients). this means we can use higher learning rates as the surface is more predictable. we saw in section 9.2 that higher learning rates improve test performance.

regularization: we also saw in chapter 9 that adding noise to the training process can improve generalization. batch normalization injects noise because the normalization depends on the batch statistics. the activations for a given training example are normalized by an amount that depends on the other members of the batch and will be slightly different at each training iteration.

11.5 common residual architectures

residual connections are now a standard part of deep learning pipelines. this section reviews some well-known architectures that incorporate them.

11.5.1 resnet

residual blocks were first used in convolutional networks for image classification. the resulting networks are known as residual networks, or resnets for short.
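the residual block used in resnets — batch normalization, relu activation, and convolution, applied twice and then added back to the block input, as described in the next paragraph and figure 11.7a — can be sketched as follows. this is an illustration with an arbitrary channel count rather than the reference implementation:

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """a sketch of the pre-activation residual block of figure 11.7a:
    (batchnorm -> relu -> conv) twice, added back to the block input."""

    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        f = self.conv1(torch.relu(self.bn1(x)))
        f = self.conv2(torch.relu(self.bn2(f)))
        return x + f            # residual connection: an additive change to the input

block = PreActResidualBlock(64)
x = torch.randn(2, 64, 56, 56)
print(block(x).shape)           # torch.Size([2, 64, 56, 56]); same size as the input
```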
inresnets,each residualblockcontainsabatchnormalizationoperation,areluactivationfunction,and a convolutional layer. this is followed by the same sequence again before being added problem11.7 backtotheinput(figure11.7a). trialanderrorhaveshownthatthisorderofoperations works well for image classification. for very deep networks, the number of parameters may become undesirably large. bottleneckresidualblocksmakemoreefficientuseofparametersusingthreeconvolutions. the first has a 1×1 kernel and reduces the number of channels. the second is a regular 3×3kernel, andthethirdisanother1×1kerneltoincreasethenumberofchannelsback to the original amount (figure 11.7b). in this way, we can integrate |
information over a 3×3 pixel area using fewer parameters. problem11.8 the resnet-200 model (figure 11.8) contains 200 layers and was used for image clas- sification on the imagenet database (figure 10.15). the architecture resembles alexnet and vgg but uses bottleneck residual blocks instead of vanilla convolutional layers. as with alexnet and vgg, these are periodically interspersed with decreases in spatial resolution and simultaneous increases in the number of channels. here, the resolution is decreasedbydownsamplingusingconvolutionswithstridetwo. thenumberofchannels is increased either by appending zeros to the representation or by using an extra 1×1 convolution. at the start of the network is a 7×7 convolutional layer, followed by a downsampling operation. at the end, a fully connected layer maps the block to a vector of length 1000. this is passed through a softmax layer to generate class probabilities. the resnet-200 model achieved a remarkable 4.8% error rate for the correct class beinginthetopfiveand20.1%foridentifyingthecorrectclasscorrectly. thiscompared favorably with alexnet (16.4%, 38.1%) and vgg (6.8%, 23.7%) and was one of the first networks to exceed human performance (5.1% for being in the top five guesses). however, this model was conceived in 2016 and is far from state-of-the-art. at the time of writing, the best-performing model on this task has a 9.0% error for identifying the class correctly (see figure 10.21). this and all the other current top-performing models for image classification are now based on transformers (see chapter 12). 11.5.2 densenet residual blocks receive the output from the previous layer, modify it by passing it through some network layers, and add it back to the original input. an alternative is to concatenate the modified and original signals. this increases the representation size draft: please send errata to [email protected] 11 residual networks figure 11.7 resnet blocks. a) a standard block in the resnet architecture con- tains a batch normalization operation, followed by an activation function, and a 3×3 convolutional layer. then, this sequence is repeated. b). a bottleneck resnet block still integrates information over a 3×3 region but uses fewer pa- rameters. it contains three convolutions. the first 1×1 convolution reduces the number of channels. the second 3×3 convolution is applied to the smaller rep- resentation. a final 1×1 convolution increases the number of channels again so that it can be added back to the input. figure 11.8 resnet-200 model. a standard 7×7 convolutional layer with stride two is applied, followed by a maxpool operation. a series of bottleneck residual blocks follow (number in brackets is channels after first 1×1 convolution), with periodic downsampling and accompanying increases in the number of channels. the network concludes with average pooling across all spatial positions and a fully connected layer that maps to pre-softmax activations. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.11.5 common residual architectures 197 figure11.9densenet. thisarchitectureusesresidualconnectionstoconcatenate theoutputsofearlierlayerstolaterones. here,thethree-channelinputimageis processed to form a 32-channel representation. the input image is concatenated to this to give a total of 35 channels. this combined representation is processed tocreateanother32-channelrepresentation, andbothearlierrepresentationsare concatenated to this to create a total of 67 channels and so on. 
(in terms of channels for a convolutional network), but an optional subsequent linear transformation can map back to the original size (a 1×1 convolution for a convolutional network). this allows the model to add the representations together, take a weighted sum, or combine them in a more complex way. the densenet architecture uses concatenation so that the input to a layer comprises the concatenated outputs from all previous layers (figure 11.9). these are processed to create a new representation that is itself concatenated with the previous representation and passed to the next layer. this concatenation means there is a direct contribution from earlier layers to the output, so the loss surface behaves reasonably. inpractice |
,thiscanonlybesustainedforafewlayersbecausethenumberofchannels (and hence the number of parameters required to process them) becomes increasingly large. this problem can be alleviated by applying a 1×1 convolution to reduce the number of channels before the next 3×3 convolution is applied. in a convolutional network,theinputisperiodicallydownsampled. concatenationacrossthedownsampling makes no sense since the representations have different sizes. consequently, the chain of concatenation is broken at this point, and a smaller representation starts a new chain. inaddition, anotherbottleneck1×1convolutioncanbeappliedwhenthedownsampling occurs to control the representation size further. thisnetworkperformscompetitivelywithresnetmodelsonimageclassification(see figure 10.21); indeed, it can perform better for a comparable parameter count. this is presumably because it can reuse processing from earlier layers more flexibly. 11.5.3 u-nets and hourglass networks section10.5.3describedasemanticsegmentationnetworkthathadanencoder-decoderor hourglass structure. the encoder repeatedly downsamples the image until the receptive fields are large and information is integrated from across the image. then the decoder draft: please send errata to [email protected] 11 residual networks figure11.10u-netforsegmentinghelacells. theu-nethasanencoder-decoder structure, in which the representation is downsampled (orange blocks) and then re-upsampled (blue blocks). the encoder uses regular convolutions, and the de- coder uses transposed convolutions. residual connections append the last repre- sentationateachscaleintheencodertothefirstrepresentationatthesamescale inthedecoder(orangearrows). theoriginalu-netused“valid”convolutions,so the size decreased slightly with each layer, even without downsampling. hence, the representations from the encoder were cropped (dashed squares) before ap- pending to the decoder. adapted from ronneberger et al. (2015). upsamples it back to the size of the original image. the final output is a probability over possible object classes at each pixel. one drawback of this architecture is that the low-resolution representation in the middle of the network must “remember” the high-resolution details to make the final result accurate. this is unnecessary if residual connectionstransfertherepresentationsfromtheencodertotheirpartnerinthedecoder. the u-net (figure 11.10) is an encoder-decoder architecture where the earlier repre- sentations are concatenated to the later ones. the original implementation used “valid” convolutions, so the spatial size decreases by two pixels each time a 3×3 convolutional layer is applied. this means that the upsampled version is smaller than its counterpart in the encoder, which must be cropped before concatenation. subsequent implementa- tions have used zero padding, where this cropping is unnecessary. note that the u-net is completely convolutional, so after training, it can be run on an image of any size. problem11.9 the u-net was intended for segmenting medical images (figure 11.11) but has found many other uses in computer graphics and vision. hourglass networks are similar but applyfurtherconvolutionallayersintheskipconnectionsandaddtheresultbacktothe decoder rather than concatenating it. a series of these models form a stacked hourglass network that alternates between considering the image at local and global levels. such networksareusedforposeestimation(figure11.12). 
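a drastically reduced sketch of the u-net pattern — an encoder and a decoder whose representations at the same scale are joined by a concatenating skip connection — is shown below. the layer sizes are arbitrary, and zero padding is used so that no cropping is required:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """a two-scale u-net sketch: downsample once, upsample once, and concatenate
    the encoder representation to the decoder representation at the same scale."""

    def __init__(self, in_channels=3, out_channels=21):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, out_channels, kernel_size=1)   # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)                         # encoder representation, full resolution
        m = self.mid(self.down(e))              # processing at half resolution
        u = self.up(m)                          # back to full resolution
        d = self.dec(torch.cat([u, e], dim=1))  # skip connection by concatenation
        return self.head(d)

net = TinyUNet()
x = torch.randn(1, 3, 64, 64)
print(net(x).shape)          # torch.Size([1, 21, 64, 64]): one score per class per pixel
```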
the system is trained to predict one “heatmap” for each joint, and the estimated position is the maximum of each heatmap.

figure 11.11 segmentation using u-net in 3d. a) three slices through a 3d volume of mouse cortex taken by scanning electron microscope. b) a single u-net is used to classify voxels as being inside or outside neurites. connected regions are identified with different colors. c) for a better result, an ensemble of five u-nets is trained, and a voxel is only classified as
belonging to the cell if all five networks agree. adapted from falk et al. (2019). 11.6 why do nets with residual connections perform so well? residual networks allow much deeper networks to be trained; it’s possible to extend the resnet architecture to 1000 layers and still train effectively. the improvement in image classification performance was initially attributed to the additional network depth, but two pieces of evidence contradict this viewpoint. first,shallower,widerresidualnetworkssometimesoutperformdeeper,narrowerones with a comparable parameter count. in other words, better performance can sometimes beachievedwithanetworkwithfewerlayersbutmorechannelsperlayer. second,there is evidence that the gradients during training do not propagate effectively through very long paths in the unraveled network (figure 11.4b). in effect, a very deep network may act more like a combination of shallower networks. the current view is that residual connections add some value of their own, as well as allowing deeper networks to be trained. this perspective is supported by the fact that the loss surfaces of residual networks around a minimum tend to be smoother and morepredictablethanthoseforthesamenetworkwhentheskipconnectionsareremoved (figure 11.13). this may make it easier to learn a good solution that generalizes well. 11.7 summary increasingnetworkdepthindefinitelycausesbothtrainingandtestperformanceforimage classification to decrease. this may be because the gradient of the loss with respect to draft: please send errata to [email protected] 11 residual networks figure 11.12 stacked hourglass networks for pose estimation. a) the network inputisanimagecontainingaperson,andtheoutputisasetofheatmaps,with oneheatmapforeachjoint. thisisformulatedasaregressionproblemwherethe targets are heatmap images with small, highlighted regions at the ground-truth jointpositions. thepeakoftheestimatedheatmapisusedtoestablisheachfinal joint position. b) the architecture consists of initial convolutional and residual layers followed by a series of hourglass blocks. c) each hourglass block consists ofanencoder-decodernetworksimilartotheu-netexceptthattheconvolutions usezeropadding,somefurtherprocessingisdoneintheresiduallinks,andthese links add this processed representation rather than concatenate it. each blue cuboid is itself a bottleneck residual block (figure 11.7b). adapted from newell et al. (2016). this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 201 figure 11.13 visualizing neural network loss surfaces. each plot shows the loss surfaceintworandomdirectionsinparameterspacearoundtheminimumfound by sgd for an image classification task on the cifar-10 dataset. these direc- tionsarenormalizedtofacilitateside-by-sidecomparison. a)residualnetwith56 layers. b) results from the same network without skip connections. the surface is smoother with the skip connections. this facilitates learning and makes the final network performance more robust to minor errors in the parameters, so it will likely generalize better. adapted from li et al. (2018b). parametersearlyinthenetworkchangesquicklyandunpredictablyrelativetotheupdate stepsize. residualconnectionsaddtheprocessedrepresentationbacktotheirowninput. now each layer contributes directly to the output as well as indirectly, so propagating gradients through many layers is not mandatory, and the loss surface is smoother. 
residualnetworksdon’tsufferfromvanishinggradientsbutintroduceanexponential increaseinthevarianceoftheactivationsduringforwardpropagationandcorresponding problems with exploding gradients. this is usually handled by adding batch normaliza- tion, which compensates for the empirical mean and variance of the batch and then shifts and rescales using learned parameters. if these parameters are initialized judi- ciously, very deep networks can be trained. there is evidence that both residual links and batch normalization make the loss surface smoother, which permits larger learning rates. moreover, the variability in the batch statistics adds a source of regularization. residual blocks have been incorporated into convolutional networks. they allow deeper networks to be trained with commensurate increases in image classification per- formance. variations of residual networks include the dense |
net architecture, which concatenatesoutputsofallprior layerstofeedintothecurrentlayer, andu-nets, which incorporate residual connections into encoder-decoder models. notes residual connections: residualconnectionswereintroducedbyheetal.(2016a),whobuilt anetworkwith152layers,whichwaseighttimeslargerthanvgg(figure10.17),andachieved state-of-the-artperformanceontheimagenetclassificationtask. eachresidualblockconsisted draft: please send errata to [email protected] 11 residual networks of a convolutional layer followed by batch normalization, a relu activation, a second convolu- tional layer, and second batch normalization. a second relu function was applied after this block was added back to the main representation. this architecture was termed resnet v1. he et al. (2016b) investigated different variations of residual architectures, in which either (i) processing could also be applied along the skip connection or (ii) after the two branches had recombined. they concluded neither was necessary, leading to the architecture in figure 11.7, which is sometimes termed a pre-activation residual block and is the backbone of resnet v2. they trained a network with 200 layers that improved further on the imagenet classification task (see figure 11.8). since this time, new methods for regularization, optimization, and data augmentationhavebeendeveloped,andwightmanetal.(2021)exploitthesetopresentamore modern training pipeline for the resnet architecture. why residual connections help: residual networks certainly allow deeper networks to be trained. presumably, this is related to reducing shattered gradients (balduzzi et al., 2017) at the start of training and the smoother loss surface near the minima as depicted in figure 11.13 (li et al., 2018b). residual connections alone (i.e., without batch normalization) increase the trainabledepthofanetworkbyroughlyafactoroftwo(sankararamanetal.,2020). withbatch normalization, very deep networks can be trained, but it is unclear that depth is critical for performance. zagoruyko&komodakis(2016)showedthatwideresidualnetworkswithonly16 layers outperformed all residual networks of the time for image classification. orhan & pitkow (2017) propose a different explanation for why residual connections improve learning in terms of eliminating singularities (places on the loss surface where the hessian is degenerate). related architectures: residualconnectionsareaspecialcaseofhighway networks(srivas- tavaetal.,2015)whichalsosplitthecomputationintotwobranchesandadditivelyrecombine. highway networks use a gating function that weights the inputs to the two branches in a way thatdependsonthedataitself,whereasresidualnetworkssendthedatadownbothbranchesin astraightforwardmanner. xieetal.(2017)introducedtheresnextarchitecture,whichplaces a residual connection around multiple parallel convolutional branches. residual networks as ensembles: veit et al. (2016) characterized residual networks as en- semblesofshorternetworksanddepictedthe“unravelednetwork”interpretation(figure11.4b). they provide evidence that this interpretation is valid by showing that deleting layers in a trained network (and hence a subset of paths) only has a modest effect on performance. con- versely, removing a layer in a purely sequential network like vgg is catastrophic. they also lookedatthegradientmagnitudesalongpathsofdifferentlengthsandshowedthatthegradient vanishesinlongerpaths. 
in a residual network consisting of 54 blocks, almost all of the gradient updates during training were from paths of length 5 to 17 blocks long, even though these only constitute 0.45% of the total paths. it seems that adding more blocks effectively adds more parallel shorter paths rather than creating a network that is truly deeper.

regularization for residual networks: l2 regularization of the weights has a fundamentally different effect in vanilla networks and residual networks without batchnorm. in the former, it encourages the output of the layer to be a constant function determined by the biases. in the latter, it encourages the residual block to compute the identity plus a constant determined
by the biases.

several regularization methods have been developed that are targeted specifically at residual architectures. resdrop (yamada et al., 2016), stochastic depth (huang et al., 2016), and randomdrop (yamada et al., 2019) all regularize residual networks by randomly dropping residual blocks during the training process. in the latter case, the propensity for dropping a block is determined by a bernoulli variable, whose parameter is linearly decreased during training. at test time, the residual blocks are added back in with their expected probability. these methods are effectively versions of dropout, in which all the hidden units in a block are simultaneously dropped in concert. in the multiple paths view of residual networks (figure 11.4b), they simply remove some of the paths at each training step. wu et al. (2018b) developed blockdrop, which analyzes an existing network and decides which residual blocks to use at runtime with the goal of improving the efficiency of inference.

other regularization methods have been developed for networks with multiple paths inside the residual block. shake-shake (gastaldi, 2017a,b) randomly re-weights the paths during the forward and backward passes. in the forward pass, this can be viewed as synthesizing random data, and in the backward pass, as injecting another form of noise into the training method. shakedrop (yamada et al., 2019) draws a bernoulli variable that decides whether each block will be subject to shake-shake or behave like a standard residual unit on this training step.

batch normalization: batch normalization was introduced by ioffe & szegedy (2015) outside of the context of residual networks. they showed empirically that it allowed higher learning rates, increased convergence speed, and made sigmoid activation functions more practical (since the distribution of outputs is controlled, so examples are less likely to fall in the saturated extremes of the sigmoid). balduzzi et al. (2017) investigated the activation of hidden units in later layers of deep networks with relu functions at initialization. they showed that many such hidden units were always active or always inactive regardless of the input but that batchnorm reduced this tendency.

although batch normalization helps stabilize the forward propagation of signals through a network, yang et al. (2019) showed that it causes gradient explosion in relu networks without skip connections, with each layer increasing the magnitude of the gradients by √(π/(π−1)) ≈ 1.21. this argument is summarized by luther (2020). since a residual network can be seen as a combination of paths of different lengths (figure 11.4), this effect must also be present in residual networks. presumably, however, the benefit of removing the 2^k increase in magnitude in the forward pass of a network with k layers outweighs the harm done by increasing the gradients by 1.21^k in the backward pass, so overall batchnorm makes training more stable.

variations of batch normalization: several variants of batchnorm have been proposed (figure 11.14). batchnorm normalizes each channel separately based on statistics gathered across the batch. ghost batch normalization or ghostnorm (hoffer et al., 2017) uses only part of the batch to compute the normalization statistics, which makes them noisier and increases the amount of regularization when the batch size is very large (figure 11.14b). when the batch size is very small or the fluctuations within a batch are very large (as is often the case in natural language processing), the statistics in batchnorm may become unreliable.
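for concreteness, the batchnorm operation under discussion can be sketched in a few lines of numpy; this is an illustration of the training-time computation only (per-channel statistics from the current batch, then a learned scale and offset), not the implementation of any particular framework.

```python
import numpy as np

def batchnorm_forward(x, gamma, delta, eps=1e-5):
    """training-time batch normalization for x of shape [batch, channels, height, width]."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # empirical mean across batch and position
    var = x.var(axis=(0, 2, 3), keepdims=True)    # empirical variance across batch and position
    x_hat = (x - mean) / np.sqrt(var + eps)       # standardize each channel
    return gamma[None, :, None, None] * x_hat + delta[None, :, None, None]

# with a batch of only two examples, the per-channel statistics are noisy estimates;
# this is the unreliability that batch renormalization (discussed next) addresses.
x = np.random.randn(2, 8, 16, 16)
y = batchnorm_forward(x, gamma=np.ones(8), delta=np.zeros(8))
```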
ioffe (2017) proposed batch renormalization, which keeps a running average of the batch statistics and modifies the normalization of any batch to ensure that it is more representative. another problem is that batch normalization is unsuitable for use in recurrent neural networks (networks for processing sequences, in which the previous output is fed back as an additional input as we move through the sequence; see figure 12.19). here, the statistics must be stored at each step in the sequence, and it's unclear what to do if a test sequence is longer than the training sequences. a third problem is that
batch normalization needs access to the whole batch. however, this may not be easily available when training is distributed across several machines. layernormalizationorlayernorm(baetal.,2016)avoidsusingbatchstatisticsbynormalizing eachdataexampleseparately,usingstatisticsgatheredacrossthechannelsandspatialposition (figure 11.14c). however, there is still a separate learned scale γ and offset δ per channel. group normalization or groupnorm (wu & he, 2018) is similar to layernorm but divides the channels into groups and computes the statistics for each group separately across the within- groupchannelsandthespatialpositions(figure11.14d). again,therearestillseparatescaleand offset parameters per channel. instance normalization or instancenorm (ulyanov et al., 2016) takes this to the extreme where the number of groups is the same as the number of channels, soeachchannelisnormalizedseparately(figure11.14e),usingstatisticsgatheredacrossspatial draft: please send errata to [email protected] 11 residual networks figure 11.14 normalization schemes. batchnorm modifies each channel sepa- rately but adjusts each batch member in the same way based on statistics gath- ered across the batch and spatial position. ghost batchnorm computes these statistics from only part of the batch to make them more variable. layernorm computes statistics for each batch member separately, based on statistics gath- eredacrossthechannelsandspatialposition. itretainsaseparatelearnedscaling factor for each channel. groupnorm normalizes within each group of channels and also retains a separate scale and offset parameter for each channel. instan- cenormnormalizeswithineachchannelseparately,computingthestatisticsonly across spatial position. adapted from wu & he (2018). positionalone. salimans&kingma(2016)investigatednormalizingthenetworkweightsrather thantheactivations,butthishasbeenlessempiricallysuccessful. teyeetal.(2018)introduced montecarlobatchnormalization,whichcanprovidemeaningfulestimatesofuncertaintyinthe predictionsofneuralnetworks. arecentcomparisonofthepropertiesofdifferentnormalization schemes can be found in lubana et al. (2021). why batchnorm helps: batchnormhelpscontroltheinitialgradientsinaresidualnetwork (figure 11.6c). however, the mechanism by which batchnorm improves performance is not well understood. the stated goal of ioffe & szegedy (2015) was to reduce problems caused by internal covariate shift, which is the change in the distribution of inputs to a layer caused by updating preceding layers during the backpropagation update. however, santurkar et al. (2018) provided evidence against this view by artificially inducing covariate shift and showing that networks with and without batchnorm performed equally well. motivated by this, they searched for another explanation for why batchnorm should improve performance. they showed empirically for the vgg network that adding batch normalization decreases the variation in both the loss and its gradient as we move in the gradient direction. inotherwords,thelosssurfaceisbothsmootherandchangesmoreslowly,whichiswhylarger learning rates are possible. they also provide theoretical proofs for both these phenomena and show that for any parameter initialization, the distance to the nearest optimum is less for networks with batch normalization. bjorck et al. (2018) also argue that batchnorm improves the properties of the loss landscape and allows larger learning rates. 
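returning to the variants in figure 11.14, the practical difference between them is just which axes the statistics are gathered over; a rough numpy sketch for activations of shape [batch, channels, height, width] (the learned per-channel scale and offset are omitted here) is:

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """standardize x using the mean and variance computed over the given axes."""
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 6, 8, 8)                 # [batch, channels, height, width]

batchnorm    = normalize(x, axes=(0, 2, 3))     # per channel, across batch + spatial position
layernorm    = normalize(x, axes=(1, 2, 3))     # per batch member, across channels + position
instancenorm = normalize(x, axes=(2, 3))        # per batch member and channel, across position

groups = 2                                      # groupnorm: normalize within groups of channels
xg = x.reshape(4, groups, 6 // groups, 8, 8)
groupnorm = normalize(xg, axes=(2, 3, 4)).reshape(x.shape)
```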
other explanations of why batchnorm improves performance include decreasing the importance of tuning the learning rate (ioffe & szegedy, 2015; arora et al., 2018). indeed, li & arora (2019) show that using an exponentially increasing learning rate schedule is possible with batch normalization. ultimately, this is because batch normalization makes the network invariant to the scales of the weight matrices (see huszár, 2019, for an intuitive visualization).

hoffer et al. (2017) identified that batchnorm has a regularizing effect due to statistical fluctuations from the random composition of the batch. they proposed using a ghost batch size, in which the mean and standard
has three advantages. first, we may need fewer text/image pairs to learn this mapping now that the inputs and outputs are lower dimensional. second, we are more likely to generate a plausible-looking image; any sensible values of the latent variables should produce something that looks like a plausible example. third, if we introduce randomnesstoeitherthemappingbetweenthetwosetsoflatentvariablesorthemapping fromthelatentvariablestotheimage,thenwecangeneratemultipleimagesthatareall described well by the caption (figure 1.12). 1.3 reinforcement learning the final area of machine learning is reinforcement learning. this paradigm introduces the idea of an agent which lives in a world and can perform certain actions at each time step. the actions change the state of the system but not necessarily in a deterministic way. taking an action can also produce rewards, and the goal of reinforcement learning draft: please send errata to [email protected] 1 introduction is for the agent to learn to choose actions that lead to high rewards on average. one complication is that the reward may occur some time after the action is taken, so associating a reward with an action is not straightforward. this is known as the temporal credit assignment problem. as the agent learns, it must trade off exploration andexploitationofwhatitalreadyknows; perhapstheagenthasalreadylearnedhowto receive modest rewards; should it follow this strategy (exploit what it knows), or should it try different actions to see if it can improve (explore other opportunities)? 1.3.1 two examples consider teaching a humanoid robot to locomote. the robot can perform a limited number of actions at a given time (moving various joints), and these change the state of the world (its pose). we might reward the robot for reaching checkpoints in an obstacle course. to reach each checkpoint, it must perform many actions, and it’s unclear which ones contributed to the reward when it is received and which were irrelevant. this is an example of the temporal credit assignment problem. asecondexampleislearningtoplaychess. again,theagenthasasetofvalidactions (chess moves) at any given time. however, these actions change the state of the system in a non-deterministic way; for any choice of action, the opposing player might respond withmanydifferentmoves. here,wemightsetuparewardstructurebasedoncapturing piecesorjusthaveasinglerewardattheendofthegameforwinning. inthelattercase, the temporal credit assignment problem is extreme; the system must learn which of the many moves it made were instrumental to success or failure. the exploration-exploitation trade-off is also apparent in these two examples. the robot may have discovered that it can make progress by lying on its side and pushing withoneleg. thisstrategywillmovetherobotandyieldsrewards,butmuchmoreslowly than the optimal solution: to balance on its legs and walk. so, it faces a choice between exploitingwhatitalreadyknows(howtoslidealongthefloorawkwardly)andexploring the space of actions (which might result in much faster locomotion). similarly, in the chess example, the agent may learn a reasonable sequence of opening moves. should it exploit this knowledge or explore different opening sequences? itisperhapsnotobvioushowdeeplearningfitsintothereinforcementlearningframe- work. there are several possible approaches, but one technique is to use deep networks to build a mapping from the observed world state to an action. this is known as a policy network. 
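a minimal sketch of such a policy network follows (pytorch, with the state size, action count, and layer widths chosen arbitrarily as placeholders); it maps an observed state to a distribution over discrete actions and samples one.

```python
import torch
import torch.nn as nn

state_dim, num_actions = 24, 6       # placeholder sizes, not tied to either example below

policy = nn.Sequential(              # a small fully connected policy network
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, num_actions),      # one score per possible action
)

state = torch.randn(1, state_dim)                          # observed world state
action_probs = torch.softmax(policy(state), dim=-1)        # distribution over actions
action = torch.multinomial(action_probs, num_samples=1)    # action chosen at this time step
```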
in the robot example, the policy network would learn a mapping from its sensor measurements to joint movements. in the chess example, the network would learn a mapping from the current state of the board to the choice of move (figure 1.13).

figure 1.13 policy networks for reinforcement learning. one way to incorporate deep neural networks into reinforcement learning is to use them to define a mapping from the state (here position on chessboard) to the actions (possible moves). this mapping is known as a policy.

1.4 ethics

it would be irresponsible to write this book without discussing the ethical implications of artificial intelligence. this potent technology will change the world to at least the same extent as electricity, the internal combustion engine, the transistor, or the internet.
the potential benefits in healthcare, design, entertainment, transport, education, and almosteveryareaofcommerceareenormous. however,scientistsandengineersareoften unrealistically optimistic about the outcomes of their work, and the potential for harm is just as great. the following paragraphs highlight five concerns. bias and fairness: if we train a system to predict salary levels for individuals based on historical data, then this system will reproduce historical biases; for example, it will probably predict that women should be paid less than men. several such cases have already become international news stories: an ai system for super-resolving face images made non-white people look more white; a system for generating images produced only pictures of men when asked to synthesize pictures of lawyers. careless application of algorithmicdecision-makingusingaihasthepotentialtoentrenchoraggravateexisting biases. see binns (2018) for further discussion. explainability: deep learning systems make decisions, but we do not usually know exactly how or based on what information. they may contain billions of parameters, and there is no way we can understand how they work based on examination. this has led to the sub-field of explainable ai. one moderately successful area is producing local explanations; we cannot explain the entire system, but we can produce an interpretable descriptionofwhyaparticulardecisionwasmade. however,itremainsunknownwhether itispossibletobuildcomplexdecision-makingsystemsthatarefullytransparenttotheir users or even their creators. see grennan et al. (2022) for further information. weaponizing ai: all significant technologies have been applied directly or indirectly toward war. sadly, violent conflict seems to be an inevitable feature of human behavior. ai is arguably the most powerful technology ever built and will doubtless be deployed extensively in a military context. indeed, this is already happening (heikkilä, 2022). draft: please send errata to [email protected] 1 introduction concentrating power: it is not from a benevolent interest in improving the lot of the human race that the world’s most powerful companies are investing heavily in artifi- cial intelligence. they know that these technologies will allow them to reap enormous profits. like any advanced technology, deep learning is likely to concentrate power in the hands of the few organizations that control it. automating jobs that are currently donebyhumanswillchangetheeconomicenvironmentanddisproportionatelyaffectthe livelihoods of lower-paid workers with fewer skills. optimists argue similar disruptions happened during the industrial revolution and resulted in shorter working hours. the truthisthatwesimplydonotknowwhateffectsthelarge-scaleadoptionofaiwillhave on society (see david, 2015). existential risk: the major existential risks to the human race all result from tech- nology. climate change has been driven by industrialization. nuclear weapons derive from the study of physics. pandemics are more probable and spread faster because in- novations in transport, agriculture, and construction have allowed a larger, denser, and more interconnected population. artificial intelligence brings new existential risks. we should be very cautious about building systems that are more capable and extensible than human beings. in the most optimistic case, it will put vast power in the hands of the owners. in the most pessimistic case, we will be unable to control it or even understand its motives (see tegmark, 2018). 
this list is far from exhaustive. ai could also enable surveillance, disinformation, violations of privacy, fraud, and manipulation of financial markets, and the energy re- quired to train ai systems contributes to climate change. moreover, these concerns are not speculative; there are already many examples of ethically dubious applications of ai (consult dao, 2021, for a partial list). in addition, the recent history of the inter- net has shown how new technology can cause harm in unexpected ways. the online community of the eighties and early nineties could hardly have predicted the prolifera- tion of fake news, spam, online harassment, fraud, cyberbullying, incel culture, political manipulation, doxxing, online radicalization, and revenge porn. everyone studying or researching (or writing books about) ai should contemplate to what degree scientists are accountable for the uses of their technology. we should consider that capitalism primarily drives the development of ai and that legal advances and deployment for social good are likely to lag significantly behind. we should reflect on whether it’s possible, as scientists and engineers, to control progress in this field |
and to reduce the potential for harm. we should consider what kind of organizations we are prepared to work for. how serious are they in their commitment to reducing the potential harms of ai? are they simply “ethics-washing” to reduce reputational risk, or do they actually implement mechanisms to halt ethically suspect projects? all readers are encouraged to investigate these issues further. the online course at https://ethics-of-ai.mooc.fi/ is a useful introductory resource. if you are a professor teaching from this book, you are encouraged to raise these issues with your students. if you are a student taking a course where this is not done, then lobby your professor to make this happen. if you are deploying or researching ai in a corporate environment, you are encouraged to scrutinize your employer’s values and to help change them (or leave) if they are wanting. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.5 structure of book 15 1.5 structure of book the structure of the book follows the structure of this introduction. chapters 2–9 walk throughthesupervisedlearningpipeline. wedescribeshallowanddeepneuralnetworks and discuss how to train them and measure and improve their performance. chap- ters 10–13 describe common architectural variations of deep neural networks, including convolutional networks, residual connections, and transformers. these architectures are used across supervised, unsupervised, and reinforcement learning. chapters 14–18 tackle unsupervised learning using deep neural networks. we devote a chapter each to four modern deep generative models: generative adversarial networks, variational autoencoders, normalizing flows, and diffusion models. chapter 19 is a brief introduction to deep reinforcement learning. this is a topic that easily justifies its own book, so the treatment is necessarily superficial. however, this treatment is intended to be a good starting point for readers unfamiliar with this area. despite the title of this book, some aspects of deep learning remain poorly under- stood. chapter 20 poses some fundamental questions. why are deep networks so easy to train? why do they generalize so well? why do they need so many parameters? do they need to be deep? along the way, we explore unexpected phenomena such as the structure of the loss function, double descent, grokking, and lottery tickets. the book concludes with chapter 21, which discusses ethics and deep learning. 1.6 other books this book is self-contained but is limited to coverage of deep learning. it is intended to bethespiritualsuccessortodeep learning(goodfellowetal.,2016)whichisafantastic resourcebutdoesnotcoverrecentadvances. forabroaderlookatmachinelearning,the most up-to-date and encyclopedic resource is probabilistic machine learning (murphy, 2022, 2023). however, pattern recognition and machine learning (bishop, 2006) is still an excellent and relevant book. ifyouenjoythisbook,thenmypreviousvolume,computervision: models,learning, and inference (prince, 2012), is still worth reading. some parts have dated badly, but it contains a thorough introduction to probability, including bayesian methods, and good introductorycoverageoflatentvariablemodels,geometryforcomputervision,gaussian processes,andgraphicalmodels. itusesidenticalnotationtothisbookandcanbefound online. 
a detailed treatment of graphical models can be found in probabilistic graphical models: principles and techniques (koller & friedman, 2009), and gaussian processes are covered by gaussian processes for machine learning (williams & rasmussen, 2006). for background mathematics, consult mathematics for machine learning (deisenroth et al., 2020). for a more coding-oriented approach, consult dive into deep learning (zhang et al., 2023). the best overview for computer vision is szeliski (2022), and there is also the impending book foundations of computer vision (torralba et al., 2024). a good starting point to learn about graph neural networks is graph representation learning (hamilton, 2020). the definitive work on reinforcement learning is reinforcement learning: an introduction (sutton & barto, 2018). a good initial resource is foundations of deep reinforcement learning (graesser & keng, 2019).

1.7 how to read this book

most remaining chapters in this
book contain a main body of text, a notes section, and asetofproblems. themainbodyofthetextisintendedtobeself-containedandcanbe readwithoutrecoursetotheotherpartsofthechapter. asmuchaspossible,background mathematics is incorporated into the main body of the text. however, for larger topics thatwouldbeadistractiontothemainthreadoftheargument,thebackgroundmaterial isappendicized, andareferenceisprovidedinthemargin. mostnotationinthisbookis appendixa standard. however, some conventions are less widely used, and the reader is encouraged notation to consult appendix a before proceeding. the main body of text includes many novel illustrations and visualizations of deep learning models and results. i’ve worked hard to provide new explanations of existing ideas rather than merely curate the work of others. deep learning is a new field, and sometimes phenomena are poorly understood. i try to make it clear where this is the case and when my explanations should be treated with caution. references are included in the main body of the chapter only where results are de- picted. instead, they can be found in the notes section at the end of the chapter. i do not generally respect historical precedent in the main text; if an ancestor of a current techniqueisnolongeruseful,theniwillnotmentionit. however,thehistoricaldevelop- mentofthefieldisdescribedinthenotessection,andhopefully,creditisfairlyassigned. the notes are organized into paragraphs and provide pointers for further reading. they should help the reader orient themselves within the sub-area and understand how it re- lates to other parts of machinelearning. the notes are less self-contained than the main text. dependingonyourlevelofbackgroundknowledgeandinterest,youmayfindthese sections more or less useful. eachchapterhasanumberofassociatedproblems. theyarereferencedinthemargin of the main text at the point that they should be attempted. as george pólya noted, “mathematics,yousee,isnotaspectatorsport.” hewascorrect,andihighlyrecommend that you attempt the problems as you go. in some cases, they provide insights that will helpyouunderstandthemaintext. problemsforwhichtheanswersareprovidedonthe associated website are indicated with an asterisk. additionally, python notebooks that will help you understand the ideas in this book are also available via the website, and these are also referenced in the margins of the text. indeed, if you are feeling rusty, it notebook1.1 might be worth working through the notebook on background mathematics right now. background mathematics unfortunately, the pace of research in ai makes it inevitable that this book will be a constant work in progress. if there are parts you find hard to understand, notable omis- sions, or sections that seem extraneous, please get in touch via the associated website. together, we can make the next edition better. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.chapter 2 supervised learning a supervised learning model defines a mapping from one or more inputs to one or more outputs. for example, the input might be the age and mileage of a secondhand toyota prius, and the output might be the estimated value of the car in dollars. the model is just a mathematical equation; when the inputs are passed through this equation,itcomputestheoutput,andthisistermedinference. themodelequationalso contains parameters. 
different parameter values change the outcome of the computation; the model equation describes a family of possible relationships between inputs and outputs, and the parameters specify the particular relationship.

when we train or learn a model, we find parameters that describe the true relationship between inputs and outputs. a learning algorithm takes a training set of input/output pairs and manipulates the parameters until the inputs predict their corresponding outputs as closely as possible. if the model works well for these training pairs, then we hope it will make good predictions for new inputs where the true output is unknown.

the goal of this chapter is to expand on these ideas. first, we describe this framework more formally and introduce some notation. then we work through a simple example in which we use a straight line to describe the relationship between input and output. this linear model is both familiar and easy to visualize, but nevertheless illustrates all the main ideas of supervised learning.
2.1 supervised learning overview

in supervised learning, we aim to build a model that takes an input x and outputs a prediction y. for simplicity, we assume that both the input x and output y are vectors of a predetermined and fixed size and that the elements of each vector are always ordered in the same way; in the prius example above, the input x would always contain the age of the car and then the mileage, in that order. this is termed structured or tabular data.

to make the prediction, we need a model f[•] that takes input x and returns y, so:

y = f[x].    (2.1)

when we compute the prediction y from the input x, we call this inference.

the model is just a mathematical equation with a fixed form. it represents a family of different relations between the input and the output. the model also contains parameters ϕ. the choice of parameters determines the particular relation between input and output, so we should really write:

y = f[x, ϕ].    (2.2)

when we talk about learning or training a model, we mean that we attempt to find parameters ϕ that make sensible output predictions from the input. we learn these parameters using a training dataset of I pairs of input and output examples {x_i, y_i}. we aim to select parameters that map each training input to its associated output as closely as possible. we quantify the degree of mismatch in this mapping with the loss l. this is a scalar value that summarizes how poorly the model predicts the training outputs from their corresponding inputs for parameters ϕ.

we can treat the loss as a function l[ϕ] of these parameters. when we train the model, we are seeking parameters ϕ̂ that minimize this loss function:¹

ϕ̂ = argmin_ϕ [ l[ϕ] ].    (2.3)

if the loss is small after this minimization, we have found model parameters that accurately predict the training outputs y_i from the training inputs x_i.

after training a model, we must now assess its performance; we run the model on separate test data to see how well it generalizes to examples that it didn't observe during training. if the performance is adequate, then we are ready to deploy the model.

2.2 linear regression example

let's now make these ideas concrete with a simple example. we consider a model y = f[x, ϕ] that predicts a single output y from a single input x. then we develop a loss function, and finally, we discuss model training.

2.2.1 1d linear regression model

a 1d linear regression model describes the relationship between input x and output y as a straight line:

y = f[x, ϕ] = ϕ_0 + ϕ_1 x.    (2.4)

¹more properly, the loss function also depends on the training data {x_i, y_i}, so we should write l[{x_i, y_i}, ϕ], but this is rather cumbersome.

figure 2.1 linear regression model. for a given choice of parameters ϕ = [ϕ_0, ϕ_1]^T, the model makes a prediction for the output (y-axis) based on the input (x-axis). different choices for the y-intercept ϕ_0 and the slope ϕ_1 change these predictions (cyan, orange, and gray lines). the linear regression model (equation 2.4) defines a family of input/output relations (lines) and the parameters determine the member of the family (the particular line).

this model has two parameters ϕ = [ϕ_0, ϕ_1]^T, where ϕ_0 is the y-intercept of the line and ϕ_1 is the slope. different choices for the y-intercept and slope result in different relations between input and output (figure 2.1).
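equation 2.4 is simple enough to compute directly; the sketch below evaluates the model for a few arbitrary parameter choices (the numbers are placeholders, not the values plotted in figure 2.1), echoing the idea that each choice of ϕ picks out one line from the family.

```python
import numpy as np

def f(x, phi_0, phi_1):
    """1d linear regression model (equation 2.4): y = phi_0 + phi_1 * x."""
    return phi_0 + phi_1 * x

x = np.linspace(0.0, 2.0, 5)                       # a few input values

# three arbitrary parameter choices = three members of the model family
for phi_0, phi_1 in [(0.0, 1.0), (1.0, 0.5), (-0.5, 2.0)]:
    y = f(x, phi_0, phi_1)                         # inference: predict y from x
    print(f"phi_0={phi_0}, phi_1={phi_1}: y={np.round(y, 2)}")
```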
hence, equation 2.4 defines a family of possible input-output relations (all possible lines), and the choice of parameters determines the member of this family (the particular line).

2.2.2 loss

for this model, the training dataset
(figure 2.2a) consists of I input/output pairs {x_i, y_i}. figures 2.2b–d show three lines defined by three sets of parameters. the green line in figure 2.2d describes the data more accurately than the other two since it is much closer to the data points. however, we need a principled approach for deciding which parameters ϕ are better than others. to this end, we assign a numerical value to each choice of parameters that quantifies the degree of mismatch between the model and the data. we term this value the loss; a lower loss means a better fit.

the mismatch is captured by the deviation between the model predictions f[x_i, ϕ] (height of the line at x_i) and the ground truth outputs y_i. these deviations are depicted as orange dashed lines in figures 2.2b–d. we quantify the total mismatch, training error, or loss as the sum of the squares of these deviations for all I training pairs:

l[ϕ] = Σ_{i=1}^{I} (f[x_i, ϕ] − y_i)²
     = Σ_{i=1}^{I} (ϕ_0 + ϕ_1 x_i − y_i)².    (2.5)

since the best parameters minimize this expression, we call this a least-squares loss. the squaring operation means that the direction of the deviation (i.e., whether the line is above or below the data) is unimportant. there are also theoretical reasons for this choice which we return to in chapter 5.

figure 2.2 linear regression training data, model, and loss. a) the training data (orange points) consist of I = 12 input/output pairs {x_i, y_i}. b–d) each panel shows the linear regression model with different parameters. depending on the choice of y-intercept and slope parameters ϕ = [ϕ_0, ϕ_1]^T, the model errors (orange dashed lines) may be larger or smaller. the loss l is the sum of the squares of these errors. the parameters that define the lines in panels (b) and (c) have large losses l = 7.07 and l = 10.28, respectively, because the models fit badly. the loss l = 0.20 in panel (d) is smaller because the model fits well; in fact, this has the smallest loss of all possible lines, so these are the optimal parameters.

figure 2.3 loss function for linear regression model with the dataset in figure 2.2a. a) each combination of parameters ϕ = [ϕ_0, ϕ_1]^T has an associated loss. the resulting loss function l[ϕ] can be visualized as a surface. the three circles represent the three lines from figure 2.2b–d. b) the loss can also be visualized as a heatmap, where brighter regions represent larger losses; here we are looking straight down at the surface in (a) from above, and gray ellipses represent isocontours. the best fitting line (figure 2.2d) has the parameters with the smallest loss (green circle).

the loss l is a function of the parameters ϕ; it will be larger when the model fit is poor (figure 2.2b,c) and smaller when it is good (figure 2.2d) (notebook 2.1: supervised learning). considered in this light, we term l[ϕ] the loss function or cost function. the goal is to find the parameters ϕ̂ that minimize this quantity:

ϕ̂ = argmin_ϕ [ l[ϕ] ]
   = argmin_ϕ [ Σ_{i=1}^{I} (f[x_i, ϕ] − y_i)² ]
   = argmin_ϕ [ Σ_{i=1}^{I} (ϕ_0 + ϕ_1 x_i − y_i)² ].    (2.6)

there are only two parameters (the y-intercept ϕ_0 and slope ϕ_1) (problems 2.1–2.2), so we can calculate the loss for every combination of
values and visualize the loss function as a surface (figure 2.3). the “best” parameters are at the minimum of this surface. draft: please send errata to [email protected] 2 supervised learning 2.2.3 training theprocessoffindingparametersthatminimizethelossistermedmodelfitting,training, or learning. the basic method is to choose the initial parameters randomly and then improvethemby“walkingdown”thelossfunctionuntilwereachthebottom(figure2.4). one way to do this is to measure the gradient of the surface at the current position and take a step in the direction that is most steeply downhill. then we repeat this process until the gradient is flat and we can improve no further.2 2.2.4 testing having trained the model, we want to know how it will perform in the real world. we do this by computing the loss on a separate set of test data. the degree to which the prediction accuracy generalizes to the test data depends in part on how representative andcompletethetrainingdatais. however,italsodependsonhowexpressivethemodel is. asimplemodellikealinemightnotbeabletocapturethetruerelationshipbetween input and output. this is known as underfitting. conversely, a very expressive model may describe statistical peculiarities of the training data that are atypical and lead to unusual predictions. this is known as overfitting. 2.3 summary a supervised learning model is a function y=f[x,ϕ] that relates inputs x to outputs y. the particular relationship is determined by parameters ϕ. to train the model, we definealossfunctionl[ϕ]overatrainingdataset{x ,y }. thisquantifiesthemismatch i i between the model predictions f[x ,ϕ] and observed outputs y as a function of the i i parameters ϕ. then we search for the parameters that minimize the loss. we evaluate the model on a different set of test data to see how well it generalizes to new inputs. chapters 3–9 expand on these ideas. first, we tackle the model itself; 1d linear regressionhastheobviousdrawbackthatitcanonlydescribetherelationshipbetweenthe inputandoutputasastraightline. shallowneuralnetworks(chapter3)areonlyslightly more complex than linear regression but describe a much larger family of input/output relationships. deep neural networks (chapter 4) are just as expressive but can describe complex functions with fewer parameters and work better in practice. chapter 5 investigates loss functions for different tasks and reveals the theoretical underpinnings of the least-squares loss. chapters 6 and 7 discuss the training process. chapter 8 discusses how to measure model performance. chapter 9 considers regular- ization techniques, which aim to improve that performance. 2thisiterativeapproachisnotactuallynecessaryforthelinearregressionmodel. here,it’spossible to find closed-form expressions for the parameters. however, this gradient descent approach works for morecomplexmodelswherethereisnoclosed-formsolutionandwheretherearetoomanyparameters toevaluatethelossforeverycombinationofvalues. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 23 figure2.4linearregressiontraining. thegoalistofindthey-interceptandslope parameters that correspond to the smallest loss. a) iterative training algorithms initializetheparametersrandomlyandthenimprovethemby“walkingdownhill” untilnofurtherimprovementcanbemade. here,westartatposition0andmove a certain distance downhill (perpendicular to the contours) to position 1. then we re-calculate the downhill direction and move to position 2. eventually, we reachtheminimumofthefunction(position4). 
b) each position 0–4 from panel (a) corresponds to a different y-intercept and slope and so represents a different line. as the loss decreases, the lines fit the data more closely.

notes

loss functions vs. cost functions: in much of machine learning and in this book, the terms loss function and cost function are used interchangeably. however, more properly, a loss function is the individual term associated with a data point (i.e., each of the squared terms on the right-hand side of equation 2.5), and the cost function is the overall quantity that is
hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . 132 8.6 summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 9 regularization 138 9.1 explicit regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 9.2 implicit regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 9.3 heuristics to improve performance. . . . . . . . . . . . . . . . . . . . . . 144 9.4 summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 10 convolutional networks 161 10.1 invariance and equivariance . . . . . . . . . . . . . . . . . . . . . . . . . 161 10.2 convolutional networks for 1d inputs . . . . . . . . . . . . . . . . . . . . 163 10.3 convolutional networks for 2d inputs . . . . . . . . . . . . . . . . . . . . 170 this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.contents v 10.4 downsampling and upsampling . . . . . . . . . . . . . . . . . . . . . . . 171 10.5 applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 10.6 summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 11 residual networks 186 11.1 sequential processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 11.2 residual connections and residual blocks . . . . . . . . . . . . . . . . . . 189 11.3 exploding gradients in residual networks . . . . . . . . . . . . . . . . . . 192 11.4 batch normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 11.5 common residual architectures . . . . . . . . . . . . . . . . . . . . . . . 195 11.6 why do nets with residual connections perform so well? . . . . . . . . . 199 11.7 summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 12 transformers 207 12.1 processing text data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 12.2 dot-product self-attention . . . . . . . . . . . . . . . . . . . . . . . . . . 208 12.3 extensions to dot-product self-attention . . . . . . . . . . . . . . . . . . 213 12.4 transformers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215 12.5 transformers for natural language processing. . . . . . . . . . . . . . . . 216 12.6 encoder model example: bert . . . . . . . . . . . . . . . . . . . . . . . 219 12.7 decoder model example: gpt3 . . . . . . . . . . . . . . . . . . . . . . . 222 12.8 encoder-decoder model example: machine translation . . . . . . . . . . . 226 12.9 transformers for long sequences . . . . . . . . . . . . . . . . . . . . . . . 227 12 |
minimized (i.e., the entire right-hand side of equation 2.5). a cost function can contain additional terms that are not associated with individual data points (see section 9.1). more generally, an objective function is any function that is to be maximized or minimized.

generative vs. discriminative models: the models y = f[x, ϕ] in this chapter are discriminative models. these make an output prediction y from real-world measurements x. another approach is to build a generative model x = g[y, ϕ], in which the real-world measurements x are computed as a function of the output y (problem 2.3).

the generative approach has the disadvantage that it doesn't directly predict y. to perform inference, we must invert the generative equation as y = g⁻¹[x, ϕ], and this may be difficult. however, generative models have the advantage that we can build in prior knowledge about how the data were created. for example, if we wanted to predict the 3d position and orientation y of a car in an image x, then we could build knowledge about car shape, 3d geometry, and light transport into the function x = g[y, ϕ].

this seems like a good idea, but in fact, discriminative models dominate modern machine learning; the advantage gained from exploiting prior knowledge in generative models is usually trumped by learning very flexible discriminative models with large amounts of training data.

problems

problem 2.1 to walk "downhill" on the loss function (equation 2.5), we measure its gradient with respect to the parameters ϕ_0 and ϕ_1. calculate expressions for the slopes ∂l/∂ϕ_0 and ∂l/∂ϕ_1.

problem 2.2 show that we can find the minimum of the loss function in closed form by setting the expression for the derivatives from problem 2.1 to zero and solving for ϕ_0 and ϕ_1. note that this works for linear regression but not for more complex models; this is why we use iterative model fitting methods like gradient descent (figure 2.4).

problem 2.3* consider reformulating linear regression as a generative model, so we have x = g[y, ϕ] = ϕ_0 + ϕ_1 y. what is the new loss function? find an expression for the inverse function y = g⁻¹[x, ϕ] that we would use to perform inference. will this model make the same predictions as the discriminative version for a given training dataset {x_i, y_i}? one way to establish this is to write code that fits a line to three data points using both methods and see if the result is the same.

chapter 3
shallow neural networks

chapter 2 introduced supervised learning using 1d linear regression. however, this model can only describe the input/output relationship as a line. this chapter introduces shallow neural networks. these describe piecewise linear functions and are expressive enough to approximate arbitrarily complex relationships between multi-dimensional inputs and outputs.

3.1 neural network example

shallow neural networks are functions y = f[x, ϕ] with parameters ϕ that map multivariate inputs x to multivariate outputs y. we defer a full definition until section 3.4 and introduce the main ideas using an example network f[x, ϕ] that maps a scalar input x to a scalar output y and has ten parameters ϕ = {ϕ_0, ϕ_1, ϕ_2, ϕ_3, θ_10, θ_11, θ_20, θ_21, θ_30, θ_31}:

y = f[x, ϕ] = ϕ_0 + ϕ_1 a[θ_10 + θ_11 x] + ϕ_2 a[θ_20 + θ_21 x] + ϕ_3 a[θ_30 + θ_31 x].    (3.1)

we can break down this calculation into three parts: first, we compute three linear functions of the input data (θ_10 + θ_11 x, θ_20 + θ_21 x, and θ_30 + θ_31 x
). second, we pass the three results through an activation function a[•]. finally, we weight the three resulting activations with ϕ_1, ϕ_2, and ϕ_3, sum them, and add an offset ϕ_0.

to complete the description, we must define the activation function a[•]. there are many possibilities, but the most common choice is the rectified linear unit or relu:

a[z] = relu[z] = { 0  if z < 0
                 { z  if z ≥ 0.    (3.2)

this returns the input when it is positive and zero otherwise (figure 3.1).

it is probably not obvious which family of input/output relations is represented by equation 3.1. nonetheless, the ideas from the previous chapter are all applicable. equation 3.1 represents a family of functions where the particular member of the family depends on the ten parameters in ϕ. if we know these parameters, we can perform inference (predict y) by evaluating the equation for a given input x. given a training dataset {x_i, y_i}, i = 1, ..., I, we can define a least squares loss function l[ϕ] and use this to measure how effectively the model describes this dataset for any given parameter values ϕ. to train the model, we search for the values ϕ̂ that minimize this loss.

figure 3.1 rectified linear unit (relu). this activation function returns zero if the input is less than zero and returns the input unchanged otherwise. in other words, it clips negative values to zero. note that there are many other possible choices for the activation function (see figure 3.13), but the relu is the most commonly used and the easiest to understand.

figure 3.2 family of functions defined by equation 3.1. a–c) functions for three different choices of the ten parameters ϕ. in each case, the input/output relation is piecewise linear. however, the positions of the joints, the slopes of the linear regions between them, and the overall height vary.

3.1.1 neural network intuition

in fact, equation 3.1 represents a family of continuous piecewise linear functions (figure 3.2) with up to four linear regions. we now break down equation 3.1 and show why it describes this family. to make this easier to understand, we split the function into two parts. first, we introduce the intermediate quantities:

h_1 = a[θ_10 + θ_11 x]
h_2 = a[θ_20 + θ_21 x]
h_3 = a[θ_30 + θ_31 x],    (3.3)

where we refer to h_1, h_2, and h_3 as hidden units. second, we compute the output by combining these hidden units with a linear function:¹

y = ϕ_0 + ϕ_1 h_1 + ϕ_2 h_2 + ϕ_3 h_3.    (3.4)

figure 3.3 shows the flow of computation that creates the function in figure 3.2a. each hidden unit contains a linear function θ_•0 + θ_•1 x of the input, and that line is clipped by the relu function a[•] below zero. the positions where the three lines cross zero become the three "joints" in the final output. the three clipped lines are then weighted by ϕ_1, ϕ_2, and ϕ_3, respectively. finally, the offset ϕ_0 is added, which controls the overall height of the final function (problems 3.1–3.8).

each linear region in figure 3.3j corresponds to a different activation pattern in the hidden units. when a unit is clipped, we refer to it as inactive, and when it is not clipped, we refer to it as active. for example, the shaded region receives contributions from h_1 and h_3 (which are active) but not from h_2 (which is inactive).
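the computation in equations 3.3–3.4 is compact enough to write out directly; here is a short numpy sketch with arbitrary parameter values (not the ones used to draw figure 3.3), which produces one member of the family of piecewise linear functions.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)                     # equation 3.2: clip negative values to zero

def shallow_1_3_1(x, phi, theta):
    """toy network of equation 3.1: scalar input, three hidden units, scalar output."""
    h1 = relu(theta[0, 0] + theta[0, 1] * x)      # equation 3.3: hidden units
    h2 = relu(theta[1, 0] + theta[1, 1] * x)
    h3 = relu(theta[2, 0] + theta[2, 1] * x)
    return phi[0] + phi[1] * h1 + phi[2] * h2 + phi[3] * h3   # equation 3.4

theta = np.array([[0.3, -1.0], [-1.0, 2.0], [-0.5, 0.65]])    # arbitrary parameter values
phi = np.array([-0.3, 2.0, -1.0, 7.0])

x = np.linspace(0.0, 2.0, 200)
y = shallow_1_3_1(x, phi, theta)     # a continuous piecewise linear function of x
```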
the slope of each linear region is determined by (i) the original slopes θ_•1 of the active inputs for this region and (ii) the weights ϕ_• that were subsequently applied. for example, the slope in the shaded region (see problem 3
.3) is θ_11 ϕ_1 + θ_31 ϕ_3, where the first term is the slope in panel (g) and the second term is the slope in panel (i).

each hidden unit contributes one "joint" to the function, so with three hidden units, there can be four linear regions (notebook 3.1: shallow networks i). however, only three of the slopes of these regions are independent; the fourth is either zero (if all the hidden units are inactive in this region) or is a sum of slopes from the other regions (problem 3.9).

3.1.2 depicting neural networks

we have been discussing a neural network with one input, one output, and three hidden units. we visualize this network in figure 3.4a. the input is on the left, the hidden units are in the middle, and the output is on the right. each connection represents one of the ten parameters. to simplify this representation, we do not typically draw the intercept parameters, so this network is usually depicted as in figure 3.4b.

¹for the purposes of this book, a linear function has the form z′ = ϕ_0 + Σ_i ϕ_i z_i. any other type of function is nonlinear. for instance, the relu function (equation 3.2) and the example neural network that contains it (equation 3.1) are both nonlinear. see notes at end of chapter for further clarification.

figure 3.3 computation for function in figure 3.2a. a–c) the input x is passed through three linear functions, each with a different y-intercept θ_•0 and slope θ_•1. d–f) each line is passed through the relu activation function, which clips negative values to zero. g–i) the three clipped lines are then weighted (scaled) by ϕ_1, ϕ_2, and ϕ_3, respectively. j) finally, the clipped and weighted functions are summed, and an offset ϕ_0 that controls the height is added. each of the four linear regions corresponds to a different activation pattern in the hidden units. in the shaded region, h_2 is inactive (clipped), but h_1 and h_3 are both active.

figure 3.4 depicting neural networks. a) the input x is on the left, the hidden units h_1, h_2, and h_3 in the center, and the output y on the right. computation flows from left to right. the input is used to compute the hidden units, which are combined to create the output. each of the ten arrows represents a parameter (intercepts in orange and slopes in black). each parameter multiplies its source and adds the result to its target. for example, we multiply the parameter ϕ_1 by source h_1 and add it to y. we introduce additional nodes containing ones (orange circles) to incorporate the offsets into this scheme, so we multiply ϕ_0 by one (with no effect) and add it to y. relu functions are applied at the hidden units. b) more typically, the intercepts, relu functions, and parameter names are omitted; this simpler depiction represents the same network.

3.2 universal approximation theorem

in the previous section, we introduced an example neural network with one input, one output, relu activation functions, and three hidden units. let's now generalize this slightly and consider the case with D hidden units where the dth hidden unit is:

h_d = a[θ_d0 + θ_d1 x],    (3.5)

and these are combined linearly to create the output:

y = ϕ_0 + Σ_{d=1}^{D} ϕ_d h_d.    (3.6)

the number of hidden units in a shallow network is a measure of the network capacity. with relu activation functions, the output of a network with D hidden units has at most D joints and so is a piecewise linear function with at most D+1 linear regions (problem 3.10).
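the "at most D joints and D+1 regions" claim is easy to check empirically: evaluate a randomly parameterized network of equations 3.5–3.6 on a fine grid and count how often the pattern of active hidden units changes. a rough sketch (assuming numpy; the parameter values are random stand-ins):

```python
import numpy as np

def shallow_1d(x, phi_0, phi, theta):
    """shallow net with scalar input/output and D hidden units (equations 3.5-3.6)."""
    pre = theta[:, 0:1] + theta[:, 1:2] * x[None, :]   # theta_d0 + theta_d1 * x, shape [D, N]
    h = np.maximum(0.0, pre)                           # relu activations
    return phi_0 + phi @ h, pre > 0                    # output and activation pattern

D = 10
rng = np.random.default_rng(1)
theta = rng.normal(size=(D, 2))
phi_0, phi = rng.normal(), rng.normal(size=D)

x = np.linspace(-5.0, 5.0, 50001)
y, active = shallow_1d(x, phi_0, phi, theta)

# each linear region corresponds to one activation pattern, so count pattern changes along x
n_regions = 1 + int(np.any(active[:, 1:] != active[:, :-1], axis=0).sum())
print(f"{n_regions} linear regions (at most D + 1 = {D + 1})")
```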
as we add more hidden units, the model can approximate more complex functions. indeed, with enough capacity (hidden units), a shallow network can describe any continuous 1d function defined on a compact
subsetofthereallinetoarbitraryprecision. toseethis,considerthateverytimeweaddahiddenunit,weaddanotherlinearregionto the function. as these regions become more numerous, they represent smaller sections of the function, which are increasingly well approximated by a line (figure 3.5). the universal approximation theorem proves that for any continuous function, there exists a shallow network that can approximate this function to any specified precision. draft: please send errata to [email protected] 3 shallow neural networks figure 3.5 approximation of a 1d function (dashed line) by a piecewise linear model. a–c) as the number of regions increases, the model becomes closer and closer to the continuous function. a neural network with a scalar input creates one extra linear region per hidden unit. the universal approximation theorem provesthat,withenoughhiddenunits,thereexistsashallowneuralnetworkthat can describe any given continuous function defined on a compact subset of rdi to arbitrary precision. 3.3 multivariate inputs and outputs intheaboveexample,thenetworkhasasinglescalarinputxandasinglescalaroutputy. however, the universal approximation theorem also holds for the more general case wherethenetworkmapsmultivariateinputsx=[x ,x ,...,x ]t tomultivariateoutput 1 2 di predictions y = [y ,y ,...,y ]t. we first explore how to extend the model to predict 1 2 do multivariate outputs. then we consider multivariate inputs. finally, in section 3.4, we present a general definition of a shallow neural network. 3.3.1 visualizing multivariate outputs toextendthenetworktomultivariateoutputsy,wesimplyuseadifferentlinearfunction of the hidden units for each output. so, a network with a scalar input x, four hidden units h ,h ,h , and h , and a 2d multivariate output y=[y ,y ]t would be defined as: 1 2 3 4 1 2 h = a[θ +θ x] 1 10 11 h = a[θ +θ x] 2 20 21 h = a[θ +θ x] 3 30 31 h = a[θ +θ x], (3.7) 4 40 41 and this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.3 multivariate inputs and outputs 31 figure 3.6 network with one input, four hidden units, and two outputs. a) visualizationofnetworkstructure. b)thisnetworkproducestwopiecewiselinear functions,y [x]andy [x]. thefour“joints”ofthesefunctions(atverticaldotted 1 2 lines) are constrained to be in the same places since they share the same hidden units, but the slopes and overall height may differ. figure 3.7 visualization of neural net- work with 2d multivariate input x = [x ,x ]t and scalar output y. 1 2 y = ϕ +ϕ h +ϕ h +ϕ h +ϕ h 1 10 11 1 12 2 13 3 14 4 y = ϕ +ϕ h +ϕ h +ϕ h +ϕ h . (3.8) 2 20 21 1 22 2 23 3 24 4 the two outputs are two different linear functions of the hidden units. as we saw in figure 3.3, the “joints” in the piecewise functions depend on where the initial linear functions θ•0+θ•1x are clipped by the relu functions a[•] at the hidden units. sincebothoutputsy andy aredifferentlinearfunctionsofthesamefourhidden 1 2 problem3.11 units, the four “joints” in each must be in the same places. however, the slopes of the linear regions and the overall vertical offset can differ (figure 3.6). 3.3.2 visualizing multivariate inputs to cope with multivariate inputs x, we extend the linear relations between the input and the hidden units. so a network with two inputs x=[x ,x ]t and a scalar output y 1 2 (figure 3.7) might have three hidden units defined by: draft: please send errata to [email protected] 3 shallow neural networks figure 3.8 |
processing in network with two inputs x = [x ,x ]t, three hidden 1 2 units h ,h ,h , and one output y. a–c) the input to each hidden unit is a 1 2 3 linearfunctionofthetwoinputs,whichcorrespondstoanorientedplane. bright- ness indicates function output. for example, in panel (a), the brightness repre- sents θ +θ x +θ x . thin lines are contours. d–f) each plane is clipped by 10 11 1 12 2 thereluactivationfunction(cyanlinesareequivalentto“joints”infigures3.3d– f). g-i) the clipped planes are then weighted, and j) summed together with an offsetthatdeterminestheoverallheightofthesurface. theresultisacontinuous surface made up of convex piecewise linear polygonal regions. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.4 shallow neural networks: general case 33 h = a[θ +θ x +θ x ] 1 10 11 1 12 2 h = a[θ +θ x +θ x ] 2 20 21 1 22 2 h = a[θ +θ x +θ x ], (3.9) 3 30 31 1 32 2 where there is now one slope parameter for each input. the hidden units are combined to form the output in the usual way: y =ϕ +ϕ h +ϕ h +ϕ h . (3.10) 0 1 1 2 2 3 3 figure3.8illustratestheprocessingofthisnetwork. eachhiddenunitreceivesalinear problems3.12–3.13 combination of the two inputs, which forms an oriented plane in the 3d input/output space. the activation function clips the negative values of these planes to zero. the notebook3.2 clipped planes are then recombined in a second linear function (equation 3.10) to create shallownetworksii acontinuouspiecewiselinearsurfaceconsistingofconvexpolygonalregions(figure3.8j). each region corresponds to a different activation pattern. for example, in the central appendixb.1.2 convexregion triangular region, the first and third hidden units are active, and the second is inactive. when there are more than two inputs to the model, it becomes difficult to visualize. however, the interpretation is similar. the output will be a continuous piecewise linear function of the input, where the linear regions are now convex polytopes in the multi- dimensional input space. notethatastheinputdimensionsgrow,thenumberoflinearregionsincreasesrapidly (figure 3.9). to get a feeling for how rapidly, consider that each hidden unit defines a hyperplane that delineates the part of space where this unit is active from the part notebook3.3 where it is not (cyan lines in 3.8d–f). if we had the same number of hidden units as shallownetwork input dimensions di, we could align each hyperplane with one of the coordinate axes regions (figure3.10). fortwoinputdimensions,thiswoulddividethespaceintofourquadrants. forthreedimensions, thiswouldcreateeightoctants, andford dimensions, thiswould i create2di orthants. shallowneuralnetworksusuallyhavemorehiddenunitsthaninput dimensions, so they typically create more than 2di linear regions. 3.4 shallow neural networks: general case wehavedescribedseveralexampleshallownetworkstohelpdevelopintuitionabouthow they work. we now define a general equation for a shallow neural network y = f[x,ϕ] that maps a multi-dimensional input x ∈ rdi to a multi-dimensional output y ∈ rdo using h∈rd hidden units. each hidden unit is computed as: " # xdi h =a θ + θ x , (3.11) d d0 di i i=1 and these are combined linearly to create the output: draft: please send errata to [email protected] 3 shallow neural networks figure 3.9 linear regions vs. hidden units. a) maximum possible regions as a function of the number of hidden units for five different input dimensions d |
Figure 3.9 Linear regions vs. hidden units. a) Maximum possible regions as a function of the number of hidden units for five different input dimensions D_i = {1, 5, 10, 50, 100}. The number of regions increases rapidly in high dimensions; with D = 500 units and input size D_i = 100, there can be greater than 10^{107} regions (solid circle). b) The same data are plotted as a function of the number of parameters. The solid circle represents the same model as in panel (a) with D = 500 hidden units. This network has 51,001 parameters and would be considered very small by modern standards.

Figure 3.10 Number of linear regions vs. input dimensions. a) With a single input dimension, a model with one hidden unit creates one joint, which divides the axis into two linear regions. b) With two input dimensions, a model with two hidden units can divide the input space using two lines (here aligned with axes) to create four regions. c) With three input dimensions, a model with three hidden units can divide the input space using three planes (again aligned with axes) to create eight regions. Continuing this argument, it follows that a model with D_i input dimensions and D_i hidden units can divide the input space with D_i hyperplanes to create 2^{D_i} linear regions.

3.4 Shallow neural networks: general case

We have described several example shallow networks to help develop intuition about how they work. We now define a general equation for a shallow neural network y = f[x, ϕ] that maps a multi-dimensional input x ∈ R^{D_i} to a multi-dimensional output y ∈ R^{D_o} using h ∈ R^D hidden units. Each hidden unit is computed as:

h_d = a[θ_{d0} + Σ_{i=1}^{D_i} θ_{di} x_i],    (3.11)

and these are combined linearly to create the output:

y_j = ϕ_{j0} + Σ_{d=1}^{D} ϕ_{jd} h_d,    (3.12)

where a[•] is a nonlinear activation function. The model has parameters ϕ = {θ_{••}, ϕ_{••}}. Figure 3.11 shows an example with three inputs, three hidden units, and two outputs (Problems 3.14–3.17).

Figure 3.11 Visualization of neural network with three inputs and two outputs. This network has twenty parameters. There are fifteen slopes (indicated by arrows) and five offsets (not shown).

The activation function permits the model to describe nonlinear relations between input and the output, and as such, it must be nonlinear itself; with no activation function, or a linear activation function, the overall mapping from input to output would be restricted to be linear. Many different activation functions have been tried (see figure 3.13), but the most common choice is the ReLU (figure 3.1), which has the merit of being easily interpretable (Notebook 3.4: Activation functions). With ReLU activations, the network divides the input space into convex polytopes defined by the intersections of hyperplanes computed by the "joints" in the ReLU functions. Each convex polytope contains a different linear function. The polytopes are the same for each output, but the linear functions they contain can differ.

3.5 Terminology

We conclude this chapter by introducing some terminology. Regrettably, neural networks have a lot of associated jargon. They are often referred to in terms of layers. The left of figure 3.12 is the input layer, the center is the hidden layer, and to the right is the output layer. We would say that the network in figure 3.12 has one hidden layer containing four hidden units. The hidden units themselves are sometimes referred to as neurons. When we pass data through the network, the values of the inputs to the hidden layer (i.e., before the ReLU functions are applied) are termed pre-activations. The values at the hidden layer (i.e., after the ReLU functions) are termed activations.

For historical reasons, any neural network with at least one hidden layer is also called a multi-layer perceptron, or MLP for short. Networks with one hidden layer (as described in this chapter) are sometimes referred to as shallow neural networks. Networks with multiple hidden layers (as described in the next chapter) are referred to as deep neural networks.
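As an illustration, equations 3.11–3.12 can be written in a few lines of vectorized code. This is only a sketch: the variable names follow the terminology just introduced (pre-activations and activations), and the random parameter values are placeholders rather than anything learned.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def shallow_network(x, theta_0, theta, phi_0, phi):
    """General shallow network (equations 3.11-3.12).

    x       : input, shape (D_i,)
    theta_0 : hidden-unit offsets, shape (D,)
    theta   : input-to-hidden slopes, shape (D, D_i)
    phi_0   : output offsets, shape (D_o,)
    phi     : hidden-to-output slopes, shape (D_o, D)
    """
    pre_activations = theta_0 + theta @ x     # equation 3.11, before a[.]
    activations = relu(pre_activations)       # hidden units h
    y = phi_0 + phi @ activations             # equation 3.12
    return y

# Example with D_i = 3 inputs, D = 3 hidden units, D_o = 2 outputs
# (cf. figure 3.11); parameters are drawn at random purely for illustration.
rng = np.random.default_rng(0)
D_i, D, D_o = 3, 3, 2
y = shallow_network(rng.standard_normal(D_i),
                    rng.standard_normal(D), rng.standard_normal((D, D_i)),
                    rng.standard_normal(D_o), rng.standard_normal((D_o, D)))
print(y.shape)   # (2,)
```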
Neural networks in which the connections form an acyclic graph (i.e., a graph with no loops, as in all the examples in this chapter) are referred to as feed-forward networks. If every element in one layer connects to every element in the next (as in all the examples in this chapter), the network is fully connected. These connections represent slope parameters in the underlying equations and are referred to as network weights. The offset parameters (not shown in figure 3.12) are called biases.
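Counting weights and biases gives the total number of parameters in a shallow network. The helper below is a small illustrative calculation (not code from the book) that reproduces the counts quoted earlier: 20 parameters for the network of figure 3.11 and 51,001 for the D_i = 100, D = 500 model highlighted in figure 3.9 (assuming a single output).

```python
def count_parameters(D_i, D, D_o):
    # weights: D_i*D (input-to-hidden) + D*D_o (hidden-to-output)
    weights = D_i * D + D * D_o
    # biases: one per hidden unit and one per output
    biases = D + D_o
    return weights + biases

print(count_parameters(D_i=3, D=3, D_o=2))       # 20, as in figure 3.11
print(count_parameters(D_i=100, D=500, D_o=1))   # 51001, as in figure 3.9
```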
Figure 3.12 Terminology. A shallow network consists of an input layer, a hidden layer, and an output layer. Each layer is connected to the next by forward connections (arrows). For this reason, these models are referred to as feed-forward networks. When every variable in one layer connects to every variable in the next, we call this a fully connected network. Each connection represents a slope parameter in the underlying equation, and these parameters are termed weights. The variables in the hidden layer are termed neurons or hidden units. The values feeding into the hidden units are termed pre-activations, and the values at the hidden units (i.e., after the ReLU function is applied) are termed activations.

3.6 Summary

Shallow neural networks have one hidden layer. They (i) compute several linear functions of the input, (ii) pass each result through an activation function, and then (iii) take a linear combination of these activations to form the outputs. Shallow neural networks make predictions y based on inputs x by dividing the input space into a continuous surface of piecewise linear regions. With enough hidden units (neurons), shallow neural networks can approximate any continuous function to arbitrary precision.

Chapter 4 discusses deep neural networks, which extend the models from this chapter by adding more hidden layers. Chapters 5–7 describe how to train these models.

Notes

"Neural" networks: If the models in this chapter are just functions, why are they called "neural networks"? The connection is, unfortunately, tenuous. Visualizations like figure 3.12 consist of nodes (inputs, hidden units, and outputs) that are densely connected to one another. This bears a superficial similarity to neurons in the mammalian brain, which also have dense connections. However, there is scant evidence that brain computation works in the same way as neural networks, and it is unhelpful to think about biology going forward.
Figure 3.13 Activation functions. a) Logistic sigmoid and tanh functions. b) Leaky ReLU and parametric ReLU with parameter 0.25. c) Softplus, Gaussian error linear unit, and sigmoid linear unit. d) Exponential linear unit with parameters 0.5 and 1.0. e) Scaled exponential linear unit. f) Swish with parameters 0.4, 1.0, and 1.4.

History of neural networks: McCulloch & Pitts (1943) first came up with the notion of an artificial neuron that combined inputs to produce an output, but this model did not have a practical learning algorithm. Rosenblatt (1958) developed the perceptron, which linearly combined inputs and then thresholded them to make a yes/no decision. He also provided an algorithm to learn the weights from data. Minsky & Papert (1969) argued that the linear function was inadequate for general classification problems but that adding hidden layers with nonlinear activation functions (hence the term multi-layer perceptron) could allow the learning of more general input/output relations. However, they concluded that Rosenblatt's algorithm could not learn the parameters of such models. It was not until the 1980s that a practical algorithm (backpropagation, see chapter 7) was developed, and significant work on neural networks resumed. The history of neural networks is chronicled by Kurenkov (2020), Sejnowski (2018), and Schmidhuber (2022).

Activation functions: The ReLU function has been used as far back as Fukushima (1969). However, in the early days of neural networks, it was more common to use the logistic sigmoid or tanh activation functions (figure 3.13a). The ReLU was re-popularized by Jarrett et al. (2009), Nair & Hinton (2010), and Glorot et al. (2011) and is an important part of the success story of modern neural networks. It has the nice property that the derivative of the output with respect to the input is always one for inputs greater than zero. This contributes to the stability and efficiency of training (see chapter 7) and contrasts with the derivatives of sigmoid activation functions, which saturate (become close to zero) for large positive and large negative inputs.

However, the ReLU function has the disadvantage that its derivative is zero for negative inputs. If all the training examples produce negative inputs to a given ReLU function, then we cannot improve the parameters feeding into this ReLU during training. The gradient with respect to the incoming weights is locally flat, so we cannot "walk downhill." This is known as the dying ReLU problem. Many variations on the ReLU have been proposed to resolve this problem (figure 3.13b), including (i) the leaky ReLU (Maas et al., 2013), which also has a linear output for negative values with a smaller slope of 0.1, (ii) the parametric ReLU (He et al., 2015), which treats the slope of the negative portion as an unknown parameter, and (iii) the concatenated ReLU (Shang et al., 2016), which produces two outputs, one of which clips below zero (i.e., like a typical ReLU) and one of which clips above zero.

A variety of smooth functions have also been investigated (figure 3.13c–d), including the softplus function (Glorot et al., 2011), Gaussian error linear unit (Hendrycks & Gimpel, 2016), sigmoid linear unit (Hendrycks & Gimpel, 2016), and exponential linear unit (Clevert et al., 2015). Most of these are attempts to avoid the dying ReLU problem while limiting the gradient for negative values. Klambauer et al. (2017) introduced the scaled exponential linear unit (figure 3.13e), which is particularly interesting as it helps stabilize the variance of the activations when the input variance has a limited range (see section 7.5).
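A minimal sketch of a few of these activation functions follows. The formulas are taken from the descriptions above (e.g., the leaky slope of 0.1 and the ELU form); default parameter values are illustrative choices, and exact definitions should be checked against the original papers.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.1):
    # linear with a smaller slope for negative inputs (Maas et al., 2013);
    # treating `slope` as a learned parameter gives the parametric ReLU
    return np.where(z > 0.0, z, slope * z)

def softplus(z):
    # smooth approximation to the ReLU; logaddexp is a stable log(1 + exp(z))
    return np.logaddexp(0.0, z)

def elu(z, alpha=1.0):
    # exponential linear unit (Clevert et al., 2015)
    return np.where(z > 0.0, z, alpha * (np.exp(z) - 1.0))

def silu(z, beta=1.0):
    # sigmoid linear unit; with learned beta this is the swish function
    return z / (1.0 + np.exp(-beta * z))

z = np.linspace(-3.0, 3.0, 7)
print(leaky_relu(z))
```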
Ramachandran et al. (2017) adopted an empirical approach to choosing an activation function. They searched the space of possible functions to find the one that performed best over a variety of supervised learning tasks.
The optimal function was found to be a[x] = x/(1 + exp[−βx]), where β is a learned parameter (figure 3.13f). They termed this function swish. Interestingly, this was a rediscovery of activation functions previously proposed by Hendrycks & Gimpel (2016) and Elfwing et al. (2018). Howard et al. (2019) approximated swish by the HardSwish function, which has a very similar shape but is faster to compute:

HardSwish[z] = { 0,            z < −3
               { z(z + 3)/6,   −3 ≤ z ≤ 3
               { z,            z > 3.    (3.13)

There is no definitive answer as to which of these activation functions is empirically superior. However, the leaky ReLU, parameterized ReLU, and many of the continuous functions can be shown to provide minor performance gains over the ReLU in particular situations. We restrict attention to neural networks with the basic ReLU function for the rest of this book because it's easy to characterize the functions they create in terms of the number of linear regions.

Universal approximation theorem: The width version of this theorem states that there exists a network with one hidden layer containing a finite number of hidden units that can approximate any specified continuous function on a compact subset of R^n to arbitrary accuracy. This was proved by Cybenko (1989) for a class of sigmoid activations and was later shown to be true for a larger class of nonlinear activation functions (Hornik, 1991).

Number of linear regions: Consider a shallow network with D_i ≥ 2-dimensional inputs and D hidden units. The number of linear regions is determined by the intersections of the D hyperplanes created by the "joints" in the ReLU functions (e.g., figure 3.8d–f). Each region is created by a different combination of the ReLU functions clipping or not clipping the input. The number of regions created by D hyperplanes in the D_i ≤ D-dimensional input space was shown by Zaslavsky (1975) to be at most Σ_{j=0}^{D_i} (D choose j), i.e., a sum of binomial coefficients (Appendix B.2: Binomial coefficient). As a rule of thumb, shallow neural networks almost always have a larger number D of hidden units than input dimensions D_i and create between 2^{D_i} and 2^D linear regions (Problem 3.18).

Linear, affine, and nonlinear functions: Technically, a linear transformation f[•] is any function that obeys the principle of superposition, so f[a + b] = f[a] + f[b]. This definition implies that f[2a] = 2f[a]. The weighted sum f[h_1, h_2, h_3] = ϕ_1 h_1 + ϕ_2 h_2 + ϕ_3 h_3 is linear, but once the offset (bias) is added so that f[h_1, h_2, h_3] = ϕ_0 + ϕ_1 h_1 + ϕ_2 h_2 + ϕ_3 h_3, this is no longer true. To see this, consider that the output is doubled when we double the arguments of the former function. This is not the case for the latter function, which is more properly termed an affine function. However, it is common in machine learning to conflate these terms. We follow this convention in this book and refer to both as linear. All other functions we will encounter are nonlinear.

Problems

Problem 3.1 What kind of mapping from input to output would be created if the activation function in equation 3.1 was linear so that a[z] = ψ_0 + ψ_1 z? What kind of mapping would be created if the activation function was removed, so a[z] = z?

Problem 3.2 For each of the four linear regions in figure 3.3j, indicate which hidden units are inactive and which are active (i.e., which do and do not clip their inputs).

Problem 3.3* Derive expressions for the positions of the "joints" in the function in figure 3.3j in terms of the ten parameters ϕ and the input x.
Derive expressions for the slopes of the four linear regions.
Problem 3.4 Draw a version of figure 3.3 where the y-intercept and slope of the third hidden unit have changed as in figure 3.14c. Assume that the remaining parameters remain the same.

Figure 3.14 Processing in network with one input, three hidden units, and one output for Problem 3.4. a–c) The input to each hidden unit is a linear function of the inputs. The first two are the same as in figure 3.3, but the last one differs.

Problem 3.5 Prove that the following property holds for α ∈ R+:

ReLU[α · z] = α · ReLU[z].    (3.14)

This is known as the non-negative homogeneity property of the ReLU function.

Problem 3.6 Following on from Problem 3.5, what happens to the shallow network defined in equations 3.3 and 3.4 when we multiply the parameters θ_{10} and θ_{11} by a positive constant α and divide the slope ϕ_1 by the same parameter α? What happens if α is negative?

Problem 3.7 Consider fitting the model in equation 3.1 using a least squares loss function. Does this loss function have a unique minimum? I.e., is there a single "best" set of parameters?

Problem 3.8 Consider replacing the ReLU activation function with (i) the Heaviside step function heaviside[z], (ii) the hyperbolic tangent function tanh[z], and (iii) the rectangular function rect[z], where:

heaviside[z] = { 0,  z < 0
               { 1,  z ≥ 0

rect[z] = { 0,  z < 0
          { 1,  0 ≤ z ≤ 1
          { 0,  z > 1.    (3.15)

Redraw a version of figure 3.3 for each of these functions. The original parameters were: ϕ = {ϕ_0, ϕ_1, ϕ_2, ϕ_3, θ_{10}, θ_{11}, θ_{20}, θ_{21}, θ_{30}, θ_{31}} = {−0.23, −1.3, 1.3, 0.66, −0.2, 0.4, −0.9, 0.9, 1.1, −0.7}. Provide an informal description of the family of functions that can be created by neural networks with one input, three hidden units, and one output for each activation function.

Problem 3.9* Show that the third linear region in figure 3.3 has a slope that is the sum of the slopes of the first and fourth linear regions.

Problem 3.10 Consider a neural network with one input, one output, and three hidden units. The construction in figure 3.3 shows how this creates four linear regions. Under what circumstances could this network produce a function with fewer than four linear regions?

Problem 3.11* How many parameters does the model in figure 3.6 have?

Problem 3.12 How many parameters does the model in figure 3.7 have?

Problem 3.13 What is the activation pattern for each of the seven regions in figure 3.8? In other words, which hidden units are active (pass the input) and which are inactive (clip the input) for each region?

Problem 3.14 Write out the equations that define the network in figure 3.11. There should be three equations to compute the three hidden units from the inputs and two equations to compute the outputs from the hidden units.

Problem 3.15* What is the maximum possible number of 3D linear regions that can be created by the network in figure 3.11?

Problem 3.16 Write out the equations for a network with two inputs, four hidden units, and three outputs. Draw this model in the style of figure 3.11.

Problem 3.17* Equations 3.11 and 3.12 define a general neural network with D_i inputs, one hidden layer containing D hidden units, and D_o outputs. Find an expression for the number of parameters in the model in terms of D_i, D, and D_o.

Problem 3.18* Show that the maximum number of regions created by a shallow network with D_i = 2-dimensional input, D_o = 1-dimensional output, and D = 3 hidden units is seven, as in figure 3.8j. Use the result of Zaslavsky (1975) that the maximum number of regions created by D hyperplanes in a D_i-dimensional space is Σ_{j=0}^{D_i} (D choose j) (see the note on the number of linear regions above).