Vishwas1 committed
Commit
f807505
1 Parent(s): 727b197

Upload dataset_chunk_33.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_33.csv +2 -0
dataset_chunk_33.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "ahundredlayerswiththousandsofhiddenunitsateach layer. thenumberofhiddenunitsineachlayerisreferredtoasthewidthofthenetwork, and the number of hidden layers as the depth. the total number of hidden units is a measure of the network’s capacity. we denote the number of layers as k and the number of hidden units in each layer as d ,d ,...,d . these are examples of hyperparameters. they are quantities chosen 1 2 k problem4.2 before we learn the model parameters (i.e., the slope and intercept terms). for fixed hyperparameters (e.g., k = 2 layers with d = 3 hidden units in each), the model k describes a family of functions, and the parameters determine the particular function. hence, when we also consider the hyperparameters, we can think of neural networks as representing a family of families of functions relating input to output. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.4.3 deep neural networks 47 figure 4.5 computation for the deep network in figure 4.4. a–c) the inputs to the second hidden layer (i.e., the pre-activations) are three piecewise linear functions where the “joints” between the linear regions are at the same places (see figure 3.6). d–f) each piecewise linear function is clipped to zero by the relu activation function. g–i) these clipped functions are then weighted with parameters ϕ′,ϕ′, and ϕ′, respectively. j) finally, the clipped and weighted 1 2 3 functionsaresummedandanoffsetϕ′ thatcontrolstheoverallheightisadded. 0 draft: please send errata to [email protected] 4 deep neural networks figure4.6matrixnotationfornetworkwithd =3-dimensionalinputx,d =2- i o dimensional output y, and k = 3 hidden layers h ,h , and h of dimensions 1 2 3 d = 4, d = 2, and d = 3 respectively. the weights are stored in matrices 1 2 3 ω that pre-multiply the activations from the preceding layer to create the pre- k activations at the subsequent layer. for example, the weight matrix ω that 1 computes the pre-activations at h from the activations at h has dimension 2 1 2×4. itisappliedtothefourhiddenunitsinlayeroneandcreatestheinputsto the two hidden units at layer two. the biases are stored in vectors β and have k the dimension of the layer into which they feed. for example, the bias vector β 2 is length three because layer h contains three hidden units. 3 4.4 matrix notation we have seen that a deep neural network consists of linear transformations alternating appendixb.3 with activation functions. we could equivalently describe equations 4.7–4.9 in matrix matrices notation as: 2 3 22 3 2 3 3 h θ θ 1 10 11 4 5 44 5 4 5 5 h =a θ + θ x , (4.11) 2 20 21 h θ θ 3 30 31 2 3 22 3 2 32 33 h′ ψ ψ ψ ψ h 1 10 11 12 13 1 4h′5=a44ψ 5+4ψ ψ ψ 54h 55, (4.12) 2 20 21 22 23 2 h′ ψ ψ ψ ψ h 3 30 31 32 33 3 and 2 3 (cid:2) (cid:3) h′1 y′ =ϕ′ + ϕ′ ϕ′ ϕ′ 4h′5, (4.13) 0 1 2 3 2 h′ 3 this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.4.5 shallow vs. deep neural networks 49 or even more compactly in matrix notation as: h = a[θ +θx] 0 ′ h = a[ψ +ψh] 0 ′ ′ ′ ′ y = ϕ +ϕh, (4.14) 0 where, in each case, the function a[•] applies the activation function separately to every element of its vector input. 4.4."