Upload dataset_chunk_39.csv with huggingface_hub
"the loss encourages each training output y to have i a high probability under the distribution pr(y |x ) computed from the corresponding i i input x (figure 5.1). i this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.1 maximum likelihood 57 figure 5.1 predicting distributions over outputs. a) regression task, where the goalistopredictareal-valuedoutputy fromtheinputxbasedontrainingdata {x ,y }(orangepoints). foreachinputvaluex,themachinelearningmodelpre- i i dictsadistributionpr(y|x)overtheoutputy∈r(cyancurvesshowdistributions for x=2.0 and x=7.0). the loss function aims to maximize the probability of theobservedtrainingoutputsy underthedistributionpredictedfromthecorre- i spondinginputsx . b)topredictdiscreteclassesy∈{1,2,3,4}inaclassification i task, we use a discrete probability distribution, so the model predicts a different histogram over the four possible values of y for each value of x . c) to predict i i counts y∈{0,1,2,...} and d) direction y∈(−π,π], we use distributions defined over positive integers and circular domains, respectively. draft: please send errata to [email protected] 5 loss functions 5.1.1 computing a distribution over outputs this raises the question of exactly how a model f[x,ϕ] can be adapted to compute a probability distribution. the solution is simple. first, we choose a parametric distribu- tionpr(y|θ)definedontheoutputdomainy. thenweusethenetworktocomputeone or more of the parameters θ of this distribution. for example, suppose the prediction domain is the set of real numbers, so y ∈ r. here, we might choose the univariate normal distribution, which is defined on r. this distribution is defined by the mean µ and variance σ2, so θ = {µ,σ2}. the machine learning model might predict the mean µ, and the variance σ2 could be treated as an unknown constant. 5.1.2 maximum likelihood criterion themodelnowcomputesdifferentdistributionparametersθ =f[x ,ϕ]foreachtraining i i input x . each observed training output y should have high probability under its i i correspondingdistributionpr(y |θ ). hence, wechoosethemodelparametersϕsothat i i they maximize the combined probability across all i training examples: "" # yi ϕˆ = argmax pr(y |x ) i i ϕ ""i=1 # yi = argmax pr(y |θ ) i i ϕ ""i=1 # yi = argmax pr(y |f[x ,ϕ]) . (5.1) i i ϕ i=1 thecombinedprobabilitytermisthelikelihoodoftheparameters,andhenceequation5.1 is known as the maximum likelihood criterion.1 here we are implicitly making two assumptions. first, we assume that the data are identically distributed (the form of the probability distribution over the outputs y i is the same for each data point). second, we assume that the conditional distribu- appendixc.1.5 independence tions pr(yi|xi) of the output given the input are independent, so the total likelihood of the training data decomposes as: yi pr(y ,y ,...,y |x ,x ,...,x )= pr(y |x ). (5.2) 1 2 i 1 2 i i i i=1 in other words, we assume the data are independent and identically distributed (i.i.d.). 1a conditional probability pr(z|ψ) can be considered in two ways. as a function of z, it is a probability distribution that sums to one. as a function of ψ, it is known as a likelihood and does not generallysumtoone. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.1 maximum likelihood 59 figure 5.2 the log transform. a) the log function is monoton"
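The maximum likelihood criterion of equation 5.1 can likewise be evaluated directly. The sketch below is again illustrative; the data values and the stand-in linear model $f[x, \phi] = \phi x$ are assumptions, not the book's. It computes the combined probability $\prod_i Pr(y_i|f[x_i, \phi])$ for a few candidate parameter values and prefers the largest:

```python
import numpy as np

# Toy i.i.d. training set {x_i, y_i}; the values are invented.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

def normal_pdf(y, mu, sigma2):
    """Univariate normal density Pr(y | mu, sigma2)."""
    return np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def likelihood(phi, sigma2=1.0):
    """Combined probability prod_i Pr(y_i | f[x_i, phi]) from equation 5.1,
    with a stand-in linear model f[x, phi] = phi * x predicting the mean."""
    mu = phi * x                       # theta_i = f[x_i, phi]
    return np.prod(normal_pdf(y, mu, sigma2))

# Maximum likelihood prefers whichever phi makes the observed outputs
# most probable; phi = 2.0 wins here because y_i is roughly 2 * x_i.
for phi in (1.0, 2.0, 3.0):
    print(f"phi = {phi}: likelihood = {likelihood(phi):.3g}")
```

Note that the product of many probabilities, each typically much less than one, quickly underflows floating point; this is why the text turns next to the log transform of figure 5.2 and maximizes the log-likelihood instead.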