"] . (10.2) in other words, f[x] is equivariant to the transformation t[x] if its output changes in the same way under the transformation as the input. networks for per-pixel image segmentation should be equivariant to transformations (figure 10.1c–f); if the image is translated, rotated, or flipped, the network f[x] should return a segmentation that has been transformed in the same way. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 163 figure 10.2 1d convolution with kernel size three. each output z is a weighted i sum of the nearest three inputs xi−1, xi, and xi+1, where the weights are ω = [ω ,ω ,ω ]. a)outputz iscomputedasz =ω x +ω x +ω x . b)outputz 1 2 3 2 2 1 1 2 2 3 3 3 is computed as z = ω x +ω x +ω x . c) at position z , the kernel extends 3 1 2 2 3 3 4 1 beyond the first input x . this can be handled by zero padding, in which we 1 assume values outside the input are zero. the final output is treated similarly. d)alternatively,wecouldonlycomputeoutputswherethekernelfitswithinthe inputrange(“valid”convolution);now,theoutputwillbesmallerthantheinput. 10.2 convolutional networks for 1d inputs convolutionalnetworksconsistofaseriesofconvolutionallayers,eachofwhichisequiv- arianttotranslation. theyalsotypicallyincludepoolingmechanismsthatinducepartial invariance to translation. for clarity of exposition, we first consider convolutional net- works for 1d data, which are easier to visualize. in section 10.3, we progress to 2d convolution, which can be applied to image data. 10.2.1 1d convolution operation convolutional layers are network layers based on the convolution operation. in 1d, a convolution transforms an input vector x into an output vector z so that each output z i is a weighted sum of nearby inputs. the same weights are used at every position and are collectively called the convolution kernel or filter. the size of the region over which inputs are combined is termed the kernel size. for a kernel size of three, we have: zi =ω1xi−1+ω2xi+ω3xi+1, (10.3) where ω = [ω ,ω ,ω ]t is the kernel (figure 10.2).1 notice that the convolution oper- 1 2 3 problem10.1 ation is equivariant with respect to translation. if we translate the input x, then the corresponding output z is translated in the same way. 1strictlyspeaking, thisisacross-correlationandnotaconvolution, inwhichtheweightswouldbe flippedrelativetotheinput(sowewouldswitchxi−1withxi+1). regardless,this(incorrect)definition istheusualconventioninmachinelearning. draft: please send errata to [email protected] 10 convolutional networks figure 10.3stride,kernelsize,anddilation. a)withastrideoftwo,weevaluate the kernel at every other position, so the first output z is computed from a 1 weighted sum centered at x , and b) the second output z is computed from a 1 2 weighted sum centered at x and so on. c) the kernel size can also be changed. 3 withakernelsizeoffive,wetakeaweightedsumofthenearestfiveinputs. d)in dilated or atrous convolution, we intersperse zeros in the weight vector to allow us to combine information over a large area using fewer weights. 10.2.2 padding equation 10.3 shows that each output is computed by taking a weighted sum of the previous, current, and subsequent positions in the input. this begs the question of how to deal with the first output (where there is no previous input) and the final output (where there is no subsequent input). there are two common approaches. 
10.2.2 Padding

Equation 10.3 shows that each output is computed by taking a weighted sum of the previous, current, and subsequent positions in the input. This raises the question of how to deal with the first output (where there is no previous input) and the final output (where there is no subsequent input). There are two common approaches.

The first is to pad the edges of the inputs with new values and proceed as usual. Zero padding assumes the input is zero outside its valid range.
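As a sketch of this first approach (again our own illustration, reusing the conv1d helper above and assuming an odd kernel size such as three), zero padding yields an output the same length as the input:

```python
def conv1d_zero_padded(x, omega):
    """'Same' convolution via zero padding (figure 10.2c): values outside
    the input are assumed to be zero, so the output matches the input length."""
    pad = (len(omega) - 1) // 2            # zeros needed on each side
    x_padded = np.concatenate([np.zeros(pad), x, np.zeros(pad)])
    return conv1d(x_padded, omega)

z_same = conv1d_zero_padded(x, omega)      # same length as x
z_valid = conv1d(x, omega)                 # "valid": shorter than x
```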