An early network for semantic segmentation is depicted in figure 10.19. The input is a 224×224 RGB image, and the output is a 224×224×21 array that contains the probability of each of 21 possible classes at each position. The first part of the network is a smaller version of VGG (figure 10.17) that contains thirteen rather than sixteen convolutional layers and downsizes the representation to size 14×14. There is then one more max pooling operation, followed by two fully connected layers that map to two 1D representations of size 4096. These layers do not represent spatial position but instead combine information from across the whole image.

Here, the architecture diverges from VGG. Another fully connected layer reconstitutes the representation into 7×7 spatial positions and 512 channels. This is followed by a series of max unpooling layers (see figure 10.12b) and deconvolution layers. These are transposed convolutions (see figure 10.13) but in 2D and without the upsampling. Finally, there is a 1×1 convolution to create 21 channels representing the possible classes and a softmax operation at each spatial position to map the activations to class probabilities.

Figure 10.19 Semantic segmentation network of Noh et al. (2015). The input is a 224×224 image, which is passed through a version of the VGG network and eventually transformed into a representation of size 4096 using a fully connected layer. This contains information about the entire image. This is then reformed into a representation of size 7×7 using another fully connected layer, and the image is upsampled and deconvolved (transposed convolutions without upsampling) in a mirror image of the VGG network. The output is a 224×224×21 representation that gives the output probabilities for the 21 classes at each position.

This work is subject to a Creative Commons CC-BY-NC-ND license. (c) MIT Press.
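To make the max unpooling step concrete, here is a minimal NumPy sketch (our own simplification: 1D signals with non-overlapping windows, whereas the network operates in 2D). Pooling records which position held each maximum; unpooling scatters the pooled values back to those positions and fills the rest with zeros.

```python
import numpy as np

def max_pool_1d(x, size=2):
    """Max pool a 1D signal over non-overlapping windows,
    recording the argmax indices needed for later unpooling."""
    windows = x.reshape(-1, size)
    idx = windows.argmax(axis=1)
    pooled = windows[np.arange(len(windows)), idx]
    return pooled, idx

def max_unpool_1d(pooled, idx, size=2):
    """Scatter pooled values back to their recorded positions;
    all other positions are set to zero."""
    out = np.zeros((len(pooled), size))
    out[np.arange(len(pooled)), idx] = pooled
    return out.reshape(-1)

x = np.array([1.0, 3.0, 2.0, 0.0])
pooled, idx = max_pool_1d(x)            # pooled = [3., 2.]
restored = max_unpool_1d(pooled, idx)   # restored = [0., 3., 2., 0.]
```

Note that unpooling is not an inverse of pooling: the non-maximal values are lost, and the subsequent deconvolution layers must fill in that detail.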
The downsampling side of the network is sometimes referred to as an encoder, and the upsampling side as a decoder, so networks of this type are sometimes called encoder-decoder networks or hourglass networks due to their shape.

The final segmentation is generated using a heuristic method that greedily searches for the class that is most represented and infers its region, taking into account the probabilities but also encouraging connectedness. Then the next most-represented class is added where it dominates at the remaining unlabeled pixels. This continues until there is insufficient evidence to add more (figure 10.20).

Figure 10.20 Semantic segmentation results. The final result is created from the 21 probability maps by greedily selecting the best class and using a heuristic method to find a sensible binary map based on the probabilities and their spatial proximity. If there is enough evidence, subsequent classes are added, and their segmentation maps are combined. Adapted from Noh et al. (2015).

10.6 Summary

In convolutional layers, each hidden unit is computed by taking a weighted sum of the nearby inputs, adding a bias, and applying an activation function. The weights and the bias are the same at every spatial position, so there are far fewer parameters than in a fully connected network, and the number of parameters doesn't increase with the input image size. To ensure that information is not lost, this operation is repeated with different weights and biases to create multiple channels at each spatial position.

Typical convolutional networks consist of convolutional layers interspersed with layers that downsample by a factor of two. As the network progresses, the spatial dimensions usually decrease by factors of two, and the number of channels increases by factors of two.

Draft: please send errata to udlbookmail@gmail.com.
At the end of the network, there are typically one or more fully connected layers that integrate information from across the entire input and create the desired output. If the output is an image, a mirrored "decoder" upsamples back to the original size.

The translational equivariance of convolutional layers imposes a useful inductive bias that increases performance for image-based tasks relative to fully connected networks. We described image classification, object detection, and semantic segmentation networks. Image classification performance was shown to improve as the network became deeper. However, subsequent experiments showed that increasing the
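Translational equivariance can be checked directly: shifting the input and then convolving gives the same result as convolving and then shifting the output. A minimal 1D sketch (our own example; circular shifts are used to sidestep boundary effects, which is harmless here because the signal is zero at its ends):

```python
import numpy as np

def conv1d(x, w):
    """'Valid' 1D convolution: apply kernel w at every position of x."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

x = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0])
w = np.array([1.0, -1.0])

shift_then_conv = conv1d(np.roll(x, 2), w)
conv_then_shift = np.roll(conv1d(x, w), 2)
assert np.allclose(shift_then_conv, conv_then_shift)  # equivariance holds
```

The same weighted sum is computed at every position, so a translated input simply produces a correspondingly translated output.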