text "valid range (figure 10.2c). other possibilities include treating the input as circular or reflecting it at the boundaries. the second approach is to discard the output positions wherethe kernelexceeds the range of input positions. these valid convolutionshavethe advantage of introducing no extra information at the edges of the input. however, they have the disadvantage that the representation decreases in size. 10.2.3 stride, kernel size, and dilation in the example above, each output was a sum of the nearest three inputs. however, this is just one of a larger family of convolution operations, the members of which are distinguishedbytheirstride,kernelsize,anddilationrate. whenweevaluatetheoutput this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 165 at every position, we term this a stride of one. however, it is also possible to shift the kernel by a stride greater than one. if we have a stride of two, we create roughly half the number of outputs (figure 10.3a–b). the kernel size can be increased to integrate over a larger area (figure 10.3c). how- ever, it typically remains an odd number so that it can be centered around the current position. increasingthekernelsizehasthedisadvantageofrequiringmoreweights. this leads to the idea of dilated or atrous convolutions, in which the kernel values are inter- spersedwithzeros. forexample, wecanturnakernelofsizefiveintoadilatedkernelof size three by setting the second and fourth elements to zero. we still integrate informa- problems10.2–10.4 tion from a larger input region but only require three weights to do this (figure 10.3d). the number of zeros we intersperse between the weights determines the dilation rate. 10.2.4 convolutional layers aconvolutionallayercomputesitsoutputbyconvolvingtheinput, addingabiasβ, and passing each result through an activation function a[•]. with kernel size three, stride one, and dilation rate one, the ith hidden unit h would be computed as: i hi = a[2β+ω1xi−1+ω2xi3+ω3xi+1] x3 4 5 = a β+ ωjxi+j−2 , (10.4) j=1 where the bias β and kernel weights ω ,ω ,ω are trainable parameters, and (with zero 1 2 3 padding) we treat the input x as zero when it is out of the valid range. this is a special case of a fully connected layer that computes the ith hidden unit as: 2 3 xd 4 5 h = a β + ω x . (10.5) i i ij j j=1 ifthereared inputsx• andd hiddenunitsh•,thisfullyconnectedlayerwouldhaved2 weights ω•• and d biases β•. the convolutional layer only uses three weights and one bias. a fully connected layer can reproduce this exactly if most weights are set to zero problem10.5 and others are constrained to be identical (figure 10.4). 10.2.5 channels ifweonlyapplyasingleconvolution,informationwillinevitablybelost;weareaveraging nearby inputs, and the relu activation function clips results that are less than zero. hence,itisusualtocomputeseveralconvolutionsinparallel. eachconvolutionproduces a new set of hidden variables, termed a feature map or channel. draft: please send errata to udlbookmail@gmail.com.166 10 convolutional networks figure 10.4 fully connected vs. convolutional layers. a) a fully connected layer has a weight connecting each input x to each hidden unit h (colored arrows) and a bias for each hidden unit (not shown). b) hence, the associated weight matrixωcontains36weightsrelatingthesixinputstothesixhiddenunits. 
Figure 10.4 Fully connected vs. convolutional layers. a) A fully connected layer has a weight connecting each input x to each hidden unit h (colored arrows) and a bias for each hidden unit (not shown). b) Hence, the associated weight matrix Ω contains 36 weights relating the six inputs to the six hidden units. c) A convolutional layer with kernel size three computes each hidden unit as the same weighted sum of the three neighboring inputs (arrows) plus a bias (not shown). d) The weight matrix is a special case of the fully connected matrix where many weights are zero and others are repeated (same colors indicate the same value, white indicates zero weight). e)
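As a concrete check of the equivalence pictured in figure 10.4, the sketch below (the helper conv_as_weight_matrix and the numbers are illustrative assumptions) builds the 6×6 weight matrix of panel d, with zeros outside the band and the three kernel weights repeated along the diagonals, and confirms that applying it as a fully connected layer (equation 10.5) reproduces the zero-padded convolution of equation 10.4.

    import numpy as np

    def conv_as_weight_matrix(omega, D):
        # Fully connected weight matrix that reproduces a kernel-size-three
        # convolution with zero padding: row i holds the kernel at columns i-1, i, i+1.
        Omega = np.zeros((D, D))
        for i in range(D):
            for j, w in enumerate(omega):
                col = i + j - 1
                if 0 <= col < D:                      # zero padding drops out-of-range inputs
                    Omega[i, col] = w
        return Omega

    omega = np.array([0.2, 0.5, 0.3])
    beta = 0.1
    x = np.array([1.0, 3.0, -2.0, 0.5, 4.0, -1.0])
    Omega = conv_as_weight_matrix(omega, len(x))
    print(Omega)                                      # banded: many zeros, repeated weights

    # Fully connected computation (equation 10.5 with a shared bias) ...
    h_full = np.maximum(0.0, beta + Omega @ x)

    # ... matches the convolutional computation (equation 10.4 with zero padding)
    x_pad = np.concatenate([[0.0], x, [0.0]])
    h_conv = np.array([np.maximum(0.0, beta + np.dot(omega, x_pad[i:i + 3])) for i in range(len(x))])
    print(np.allclose(h_full, h_conv))                # True

With D inputs, this matrix has D^2 entries but only three distinct nonzero values, which is the parameter saving described in section 10.2.4.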