"test example x, return either the full distribu- tion pr(y|f[x,ϕˆ]) or the maximum of this distribution. we devote most of the rest of this chapter to constructing loss functions for common prediction types using this recipe. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.3 example 1: univariate regression 61 figure 5.3 the univariate normal distri- bution (also known as the gaussian dis- tribution) is defined on the real line z ∈ r and has parameters µ and σ2. the mean µ determines the position of the peak. the positive root of the vari- ance σ2 (the standard deviation) de- termines the width of the distribution. since the total probability density sums to one, the peak becomes higher as the variance decreases and the distribution becomes narrower. 5.3 example 1: univariate regression westartbyconsideringunivariateregressionmodels. herethegoalistopredictasingle scalar output y ∈ r from input x using a model f[x,ϕ] with parameters ϕ. following therecipe, wechooseaprobabilitydistributionovertheoutputdomainy. weselectthe univariate normal (figure 5.3), which is defined over y ∈ r. this distribution has two parameters (mean µ and variance σ2) and has a probability density function: (cid:20) (cid:21) 1 (y−µ)2 pr(y|µ,σ2)= √ exp − . (5.7) 2πσ2 2σ2 second,wesetthemachinelearningmodelf[x,ϕ]tocomputeoneormoreoftheparam- eters of this distribution. here, we just compute the mean so µ=f[x,ϕ]: (cid:20) (cid:21) 1 (y−f[x,ϕ])2 pr(y|f[x,ϕ],σ2)= √ exp − . (5.8) 2πσ2 2σ2 we aim to find the parameters ϕ that make the training data {x ,y } most probable i i under this distribution (figure 5.4). to accomplish this, we choose a loss function l[ϕ] based on the negative log-likelihood: xi (cid:2) (cid:3) l[ϕ] = − log pr(y |f[x ,ϕ],σ2) i i i=1 (cid:20) (cid:20) (cid:21)(cid:21) xi 1 (y −f[x ,ϕ])2 = − log √ exp − i i . (5.9) 2πσ2 2σ2 i=1 when we train the model, we seek parameters ϕˆ that minimize this loss. draft: please send errata to [email protected] 5 loss functions 5.3.1 least squares loss function now let’s perform some algebraic manipulations on the loss function. we seek: "" (cid:20) (cid:20) (cid:21)(cid:21)# xi 1 (y −f[x ,ϕ])2 ϕˆ = argmin − log √ exp − i i ϕ 2πσ2 2σ2 "" i=1(cid:18) (cid:20) (cid:21) (cid:19)# xi 1 (y −f[x ,ϕ])2 = argmin − log √ − i i ϕ 2πσ2 2σ2 "" i=1 # xi (y −f[x ,ϕ])2 = argmin − − i i 2σ2 ϕ "" i=1 # xi = argmin (y −f[x ,ϕ])2 , (5.10) i i ϕ i=1 wherewehaveremovedthefirsttermbetweenthesecondandthirdlinesbecauseitdoes notdependonϕ. wehaveremovedthedenominatorbetweenthethirdandfourthlines, asthisisjustaconstantscalingfactorthatdoesnotaffectthepositionoftheminimum. the result of these manipulations is the least squares loss function that" |