text "themodeloutputsandthegroundtruthpredictionsforatrainingdataset. inchapters6 and 7, we deal with the training process itself, in which we seek the parameter values that minimize this loss. notes deeplearning: ithaslongbeenunderstoodthatitispossibletobuildmorecomplexfunctions bycomposingshallowneuralnetworksordevelopingnetworkswithmorethanonehiddenlayer. indeed,theterm“deeplearning”wasfirstusedbydechter(1986). however,interestwaslimited due to practical concerns; it was not possible to train such networks well. the modern era of deep learning was kick-started by startling improvements in image classification reported by krizhevsky et al. (2012). this sudden progress was arguably due to the confluence of four factors: larger training datasets, improved processing power for training, the use of the relu activation function, and the use of stochastic gradient descent (see chapter 6). lecun et al. (2015) present an overview of early advances in the modern era of deep learning. number of linear regions: for deep networks using a total of d hidden units with relu activations, the upper bound on the number of regions is 2d (montufar et al., 2014). the same authors show that a deep relu n(cid:16)etwork with di-dime(cid:17)nsional input and k layers, each containingd≥di hiddenunits,haso (d/di)(k−1)diddi linearregions. montúfar(2017), arora et al. (2016) and serra et al. (2018) all provide tighter upper bounds that consider the possibility that each layer has different numbers of hidden units. serra et al. (2018) provide an algorithm that counts the number of linear regions in a neural network, although it is only practical for very small networks. if the number of hidden units d in each of the k layers is the same, and d is an integer multipleoftheinputdimensionalityd ,thenthemaximumnumberoflinearregionsn canbe i r this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 53 computed exactly and is: (cid:18) (cid:19) ! d di(k−1) xdi d n = +1 · . (4.17) r d j i j=0 thefirstterminthisexpressioncorrespondstothefirstk−1layersofthenetwork,whichcan be thought of as repeatedly folding the input space. however, we now need to devote d/d i hidden units to each input dimension to create these folds. the last term in this equation (a sum of binomial coefficients) is the number of regions that a shallow network can create and is appendixb.2 attributable to the last layer. for further information, consult montufar et al. (2014), pascanu binomialcoefficient et al. (2013), and montúfar (2017). universal approximation theorem: we argued in section 4.5.1 that if the layers of a deep network have enough hidden units, then the width version of the universal approximation the- orem applies: there exists a network that can approximate any given continuous function on a compactsubsetofrdi toarbitraryaccuracy. luetal.(2017)provedthatthereexistsanetwork withreluactivationfunctionsandatleastd +4hiddenunitsineachlayercanapproximate i any specified d -dimensional lebesgue integrable function to arbitrary accuracy given enough i layers. this is known as the depth version of the universal approximation theorem. depth efficiency: severalresultsshowthattherearefunctionsthatcanberealizedbydeep networks but not by any shallow network whose capacity is bounded above exponentially. in other words, it would take an exponentially larger number of units in a shallow network to describe these functions accurately. this is known as the depth efficiency of neural networks. 
Universal approximation theorem: We argued in Section 4.5.1 that if the layers of a deep network have enough hidden units, then the width version of the universal approximation theorem applies: there exists a network that can approximate any given continuous function on a compact subset of R^{D_i} to arbitrary accuracy. Lu et al. (2017) proved that a network with ReLU activation functions and at least D_i + 4 hidden units in each layer can approximate any specified D_i-dimensional Lebesgue integrable function to arbitrary accuracy, given enough layers. This is known as the depth version of the universal approximation theorem.

Depth efficiency: Several results show that there are functions that can be realized by deep networks but cannot be realized by any shallow network of sub-exponential size; in other words, it would take an exponentially larger number of units in a shallow network to describe these functions accurately. This is known as the depth efficiency of neural networks. Telgarsky (2016) shows that for any integer k, it is possible to construct networks with one input, one output, and O[k^3] layers of constant width, which cannot be realized with O[k] layers and less than 2^k width. Perhaps surprisingly, Eldan & Shamir (2016) showed that when there are multivariate inputs, there is