"tasks. the optimal function was found to be a[x] = x/(1+exp[−βx]), where β is a learned parameter(figure3.13f). theytermedthisfunctionswish. interestingly,thiswasarediscovery of activation functions previously proposed by hendrycks & gimpel (2016) and elfwing et al. (2018). howardetal.(2019)approximatedswishbythehardswishfunction,whichhasavery similar shape but is faster to compute: 8 ><0 z<−3 hardswish[z]= z(z+3)/6 −3≤z≤3. (3.13) >: z z>3 there is no definitive answer as to which of these activations functions is empirically superior. however, the leaky relu, parameterized relu, and many of the continuous functions can be shown to provide minor performance gains over the relu in particular situations. we restrict attentiontoneuralnetworkswiththebasicrelufunctionfortherestofthisbookbecauseit’s easy to characterize the functions they create in terms of the number of linear regions. universal approximation theorem: the width version of this theorem states that there exists a network with one hidden layer containing a finite number of hidden units that can approximateanyspecifiedcontinuousfunctiononacompactsubsetofrn toarbitraryaccuracy. this was proved by cybenko (1989) for a class of sigmoid activations and was later shown to be true for a larger class of nonlinear activation functions (hornik, 1991). number of linear regions: consider a shallow network with d ≥ 2-dimensional inputs i and d hidden units. the number of linear regions is determined by the intersections of the d hyperplanes created by the “joints” in the relu functions (e.g., figure 3.8d–f). each region is appendixb.2 created by a different combination of the relu functions clipping or not clipping the input. binomial the number of regions created by d hypeprplane(cid:0)s i(cid:1)n the di ≤ d-dimensional input space was coefficient shown by zaslavsky (1975) to be at most di d (i.e., a sum of binomial coefficients). as a j=0 j rule of thumb, shallow neural networks almost always have a larger number d of hidden units problem3.18 than input dimensions di and create between 2di and 2d linear regions. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 39 linear, affine, and nonlinear functions: technically, a linear transformation f[•] is any functionthatobeystheprincipleofsuperposition,sof[a+b]=f[a]+f[b]. thisdefinitionimplies that f[2a] = 2f[a].the weighted sum f[h ,h ,h ] = ϕ h +ϕ h +ϕ h is linear, but once the 1 2 3 1 1 2 2 3 3 offset (bias) is added so f[h ,h ,h ]=ϕ +ϕ h +ϕ h +ϕ h , this is no longer true. to see 1 2 3 0 1 1 2 2 3 3 this,considerthattheoutputisdoubledwhenwedoubletheargumentsoftheformerfunction. this is not the case for the latter function, which is more properly termed an affine function. however, it is common in machine learning to conflate these terms. we follow this convention in this book and refer to both as linear. all other functions we will encounter are nonlinear. problems problem 3.1 what kind of mapping from input to output would be created if the activation function in equation 3.1 was linear so that a[z]=ψ +ψ z? what kind of mapping would be 0 1 created if the activation function was removed, so a[z]=z? problem 3.2 for each of the four linear regions in figure 3.3j, indicate which hidden units are inactive and which are active (i.e., which do and do not clip their inputs). problem 3.3∗ derive expressions for the positions of the “joints” in function in figure 3.3j in terms of the ten parameters ϕ and the input x. 
Problems

Problem 3.1 What kind of mapping from input to output would be created if the activation function in equation 3.1 were linear, so that $a[z] = \psi_0 + \psi_1 z$? What kind of mapping would be created if the activation function were removed, so that $a[z] = z$?

Problem 3.2 For each of the four linear regions in figure 3.3j, indicate which hidden units are inactive and which are active (i.e., which do and do not clip their inputs).

Problem 3.3∗ Derive expressions for the positions of the "joints" in the function in figure 3.3j in terms of the ten parameters ϕ and the input x. Derive expressions for the slopes of the four linear regions.