  \tilde{L}[\phi] = L[\phi] + \frac{1}{2\alpha}\sum_{k} \phi_k^2,    (9.23)

where ϕ are the parameters, and α is the learning rate.

Problem 9.6  Consider a model with parameters ϕ = [ϕ_0, ϕ_1]^T. Draw the L0, L1/2, and L1 regularization terms in a similar form to figure 9.1b. The Lp regularization term is \sum_{d=1}^{D} |\phi_d|^p.

This work is subject to a Creative Commons CC-BY-NC-ND license. (C) MIT Press.

Chapter 10
Convolutional Networks

Chapters 2–9 introduced the supervised learning pipeline for deep neural networks. However, these chapters only considered fully connected networks with a single path from input to output. Chapters 10–13 introduce more specialized network components with sparser connections, shared weights, and parallel processing paths. This chapter describes convolutional layers, which are mainly used for processing image data.

Images have three properties that suggest the need for specialized model architecture. First, they are high-dimensional. A typical image for a classification task contains 224×224 RGB values (i.e., 150,528 input dimensions). Hidden layers in fully connected networks are generally larger than the input size, so even for a shallow network, the number of weights would exceed 150,528², or 22 billion. This poses obvious practical problems in terms of the required training data, memory, and computation.

Second, nearby image pixels are statistically related. However, fully connected networks have no notion of "nearby" and treat the relationship between every input equally. If the pixels of the training and test images were randomly permuted in the same way, the network could still be trained with no practical difference.

Third, the interpretation of an image is stable under geometric transformations. An image of a tree is still an image of a tree if we shift it leftwards by a few pixels. However, this shift changes every input to the network. Hence, a fully connected model must learn the patterns of pixels that signify a tree separately at every position, which is clearly inefficient.
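The parameter count in the first point above is easy to verify with a few lines of arithmetic (a minimal sketch; the assumption that the hidden layer is at least as large as the input follows the text):

```python
# Input dimensionality of a standard 224x224 RGB classification image.
height, width, channels = 224, 224, 3
input_dim = height * width * channels
print(input_dim)  # 150528 input dimensions

# A fully connected hidden layer with at least input_dim units needs a
# weight matrix with input_dim * input_dim entries (biases excluded).
fc_weights = input_dim ** 2
print(fc_weights)  # 22658678784, i.e., over 22 billion weights
```

Even at 4 bytes per weight, storing this single weight matrix would take roughly 90 GB, which makes the practical problems mentioned above concrete.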
Convolutional layers process each local image region independently, using parameters shared across the whole image. They use fewer parameters than fully connected layers, exploit the spatial relationships between nearby pixels, and don't have to re-learn the interpretation of the pixels at every position. A network predominantly consisting of convolutional layers is known as a convolutional neural network or CNN.

10.1 Invariance and equivariance

We argued above that some properties of images (e.g., tree texture) are stable under transformations. In this section, we make this idea more mathematically precise.

Draft: please send errata to udlbookmail@gmail.com.

Figure 10.1  Invariance and equivariance for translation. a–b) In image classification, the goal is to categorize both images as "mountain" regardless of the horizontal shift that has occurred. In other words, we require the network prediction to be invariant to translation. c,e) The goal of semantic segmentation is to associate a label with each pixel. d,f) When the input image is translated, we want the output (colored overlay) to translate in the same way. In other words, we require the output to be equivariant with respect to translation. Panels c–f) adapted from Bousselham et al. (2021).

A function f[x] of an image x is invariant to a transformation t[x] if:

  f[t[x]] = f[x].    (10.1)

In other words, the output of the function f[x] is the same regardless of the transformation t[x]. Networks for image classification should be invariant to geometric transformations of the image (figure 10.1a–b). The network f[x] should identify an image as containing the same object, even if it has been translated, rotated, flipped, or warped.

A function f[x] of an image x is equivariant or covariant to a transformation t[x] if:

  f[t[x]] = t[f[x]].    (10.2)
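The two definitions can be checked numerically. The sketch below (our own illustration, not from the book; the names `translate`, `conv`, and `pool` are ours) uses a 1-D signal in place of an image and circular shifts so that boundary effects don't spoil the exact equalities:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(12)        # a 1-D stand-in for an image
w = np.array([0.25, 0.5, 0.25])    # a small convolution kernel

def translate(x, shift=3):
    """The transformation t[x]: a circular translation."""
    return np.roll(x, shift)

def conv(x, w):
    """Circular convolution: a weighted sum over each local neighborhood."""
    return sum(w[k] * np.roll(x, len(w) // 2 - k) for k in range(len(w)))

def pool(x):
    """Global average pooling: collapses all positions into one output."""
    return x.mean()

# Equivariance (10.2): convolving the shifted signal gives the same result
# as shifting the convolved signal.
print(np.allclose(conv(translate(x), w), translate(conv(x, w))))  # True

# Invariance (10.1): global pooling produces the same output either way.
print(np.isclose(pool(translate(x)), pool(x)))  # True
```

This is exactly why CNN classifiers typically end with a pooling stage: the convolutional layers are equivariant to translation, and the pooling converts that equivariance into the invariance that classification requires.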