"ically increasing. if z>z′, then log[z]>log[z′]. it follows that the maximum of any function g[z] will be at the same position as the maximum of log[g[z]]. b) a function g[z]. c) thelogarithmofthisfunctionlog[g[z]]. allpositionsong[z]withapositiveslope retain a positive slope after the log transform, and those with a negative slope retain a negative slope. the position of the maximum remains the same. 5.1.3 maximizing log-likelihood the maximum likelihood criterion (equation 5.1) is not very practical. each term pr(y |f[x ,ϕ]) can be small, so the product of many of these terms can be tiny. it i i may be difficult to represent this quantity with finite precision arithmetic. fortunately, we can equivalently maximize the logarithm of the likelihood: "" # yi ϕˆ = argmax pr(y |f[x ,ϕ]) i i ϕ ""i=1"" ## yi = argmax log pr(y |f[x ,ϕ]) i i ϕ "" i=1 # xi h i = argmax log pr(y |f[x ,ϕ]) . (5.3) i i ϕ i=1 this log-likelihood criterion is equivalent because the logarithm is a monotonically in- creasing function: if z>z′, then log[z]>log[z′] and vice versa (figure 5.2). it follows thatwhenwechangethemodelparametersϕtoimprovethelog-likelihoodcriterion,we also improve the original maximum likelihood criterion. it also follows that the overall maxima of the two criteria must be in the same place, so the best model parameters ϕˆ are the same in both cases. however, the log-likelihood criterion has the practical ad- vantage of using a sum of terms, not a product, so representing it with finite precision isn’t problematic. draft: please send errata to [email protected] 5 loss functions 5.1.4 minimizing negative log-likelihood finally, we note that, by convention, model fitting problems are framed in terms of minimizing a loss. to convert the maximum log-likelihood criterion to a minimization problem, we multiply by minus one, which gives us the negative log-likelihood criterion: "" # xi h i ϕˆ = argmin − log pr(y |f[x ,ϕ]) i i ϕ h i=i1 = argmin l[ϕ] , (5.4) ϕ which is what forms the final loss function l[ϕ]. 5.1.5 inference the network no longer directly predicts the outputs y but instead determines a proba- bility distribution over y. when we perform inference, we often want a point estimate rather than a distribution, so we return the maximum of the distribution: h i yˆ =argmax pr(y|f[x,ϕˆ]) . (5.5) y it is usually possible to find an expression for this in terms of the distribution parame- ters θ predicted by the model. for example, in the univariate normal distribution, the maximum occurs at the mean µ. 5.2 recipe for constructing loss functions the recipe for constructing loss functions for training data {x ,y } using the maximum i i likelihood approach is hence: 1. choose a suitable probability distribution pr(y|θ) defined over the domain of the predictions y with distribution parameters θ. 2. set the machine learning model f[x,ϕ] to predict one or more of these parameters, so θ =f[x,ϕ] and pr(y|θ)=pr(y|f[x,ϕ]). 3. to train the model, find the network parameters ϕˆ that minimize the negative log-likelihood loss function over the training dataset pairs {x ,y }: i i "" # h i xi h i ϕˆ =argmin l[ϕ] =argmin − log pr(y |f[x ,ϕ]) . (5.6) i i ϕ ϕ i=1 4. to perform inference for a new" | |