dl_dataset_1 / dataset_chunk_20.csv
"imized(i.e., the entire right-hand side of equation 2.5). a cost function can contain additional terms that are not associated with individual data points (see section 9.1). more generally, an objective function is any function that is to be maximized or minimized. generative vs. discriminative models: themodelsy=f[x,ϕ]inthischapterarediscrim- inative models. thesemakeanoutputpredictionyfromreal-worldmeasurementsx. another problem2.3 approach is to build a generative model x = g[y,ϕ], in which the real-world measurements x are computed as a function of the output y. the generative approach has the disadvantage that it doesn’t directly predict y. to perform inference, we must invert the generative equation as y = g−1[x,ϕ], and this may be difficult. however,generativemodelshavetheadvantagethatwecanbuildinpriorknowledgeabouthow the data were created. for example, if we wanted to predict the 3d position and orientation y draft: please send errata to [email protected] 2 supervised learning ofacarinanimagex,thenwecouldbuildknowledgeaboutcarshape,3dgeometry,andlight transport into the function x=g[y,ϕ]. this seems like a good idea, but in fact, discriminative models dominate modern machine learning; the advantage gained from exploiting prior knowledge in generative models is usually trumped by learning very flexible discriminative models with large amounts of training data. problems problem2.1towalk“downhill”onthelossfunction(equation2.5),wemeasureitsgradientwith respecttotheparametersϕ andϕ . calculateexpressionsfortheslopes∂l/∂ϕ and∂l/∂ϕ . 0 1 0 1 problem 2.2 showthatwecanfindtheminimumofthelossfunctioninclosedformbysetting theexpressionforthederivativesfromproblem2.1tozeroandsolvingforϕ andϕ . notethat 0 1 this works for linear regression but not for more complex models; this is why we use iterative model fitting methods like gradient descent (figure 2.4). problem 2.3∗ consider reformulating linear regression as a generative model, so we have x = g[y,ϕ] = ϕ +ϕ y. what is the new loss function? find an expression for the inverse func- 0 1 tion y = g−1[x,ϕ] that we would use to perform inference. will this model make the same predictions as the discriminative version for a given training dataset {x ,y }? one way to es- i i tablish this is to write code that fits a line to three data points using both methods and see if the result is the same. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.chapter 3 shallow neural networks chapter2introducedsupervisedlearningusing1dlinearregression. however,thismodel canonlydescribetheinput/outputrelationshipasaline. thischapterintroducesshallow neural networks. these describe piecewise linear functions and are expressive enough to approximate arbitrarily complex relationships between multi-dimensional inputs and outputs. 3.1 neural network example shallowneuralnetworksarefunctionsy=f[x,ϕ]withparametersϕthatmapmultivari- ate inputs x to multivariate outputs y. we defer a full definition until section 3.4 and introduce the main ideas using an example network f[x,ϕ] that maps a scalar input x to a scalar output y and has ten parameters ϕ={ϕ ,ϕ ,ϕ ,ϕ ,θ ,θ ,θ ,θ ,θ ,θ }: 0 1 2 3 10 11 20 21 30 31 y = f[x,ϕ] = ϕ +ϕ a[θ +θ x]+ϕ a[θ +θ x]+ϕ a[θ +θ x]. (3.1) 0 1 10 11 2 20 21 3 30 31 we can break down this calculation into three parts: first, we compute three linear functions of the input data (θ +θ x, θ +θ x, and θ +θ x"