"processing in network with two inputs x = [x ,x ]t, three hidden 1 2 units h ,h ,h , and one output y. a–c) the input to each hidden unit is a 1 2 3 linearfunctionofthetwoinputs,whichcorrespondstoanorientedplane. bright- ness indicates function output. for example, in panel (a), the brightness repre- sents θ +θ x +θ x . thin lines are contours. d–f) each plane is clipped by 10 11 1 12 2 thereluactivationfunction(cyanlinesareequivalentto“joints”infigures3.3d– f). g-i) the clipped planes are then weighted, and j) summed together with an offsetthatdeterminestheoverallheightofthesurface. theresultisacontinuous surface made up of convex piecewise linear polygonal regions. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.4 shallow neural networks: general case 33 h = a[θ +θ x +θ x ] 1 10 11 1 12 2 h = a[θ +θ x +θ x ] 2 20 21 1 22 2 h = a[θ +θ x +θ x ], (3.9) 3 30 31 1 32 2 where there is now one slope parameter for each input. the hidden units are combined to form the output in the usual way: y =ϕ +ϕ h +ϕ h +ϕ h . (3.10) 0 1 1 2 2 3 3 figure3.8illustratestheprocessingofthisnetwork. eachhiddenunitreceivesalinear problems3.12–3.13 combination of the two inputs, which forms an oriented plane in the 3d input/output space. the activation function clips the negative values of these planes to zero. the notebook3.2 clipped planes are then recombined in a second linear function (equation 3.10) to create shallownetworksii acontinuouspiecewiselinearsurfaceconsistingofconvexpolygonalregions(figure3.8j). each region corresponds to a different activation pattern. for example, in the central appendixb.1.2 convexregion triangular region, the first and third hidden units are active, and the second is inactive. when there are more than two inputs to the model, it becomes difficult to visualize. however, the interpretation is similar. the output will be a continuous piecewise linear function of the input, where the linear regions are now convex polytopes in the multi- dimensional input space. notethatastheinputdimensionsgrow,thenumberoflinearregionsincreasesrapidly (figure 3.9). to get a feeling for how rapidly, consider that each hidden unit defines a hyperplane that delineates the part of space where this unit is active from the part notebook3.3 where it is not (cyan lines in 3.8d–f). if we had the same number of hidden units as shallownetwork input dimensions di, we could align each hyperplane with one of the coordinate axes regions (figure3.10). fortwoinputdimensions,thiswoulddividethespaceintofourquadrants. forthreedimensions, thiswouldcreateeightoctants, andford dimensions, thiswould i create2di orthants. shallowneuralnetworksusuallyhavemorehiddenunitsthaninput dimensions, so they typically create more than 2di linear regions. 3.4 shallow neural networks: general case wehavedescribedseveralexampleshallownetworkstohelpdevelopintuitionabouthow they work. we now define a general equation for a shallow neural network y = f[x,ϕ] that maps a multi-dimensional input x ∈ rdi to a multi-dimensional output y ∈ rdo using h∈rd hidden units. each hidden unit is computed as: "" # xdi h =a θ + θ x , (3.11) d d0 di i i=1 and these are combined linearly to create the output: draft: please send errata to [email protected] 3 shallow neural networks figure 3.9 linear regions vs. hidden units. a) maximum possible regions as a function of the number of hidden units for five different input dimensions d" |