Upload dataset_chunk_18.csv with huggingface_hub
"(figure2.2a)consistsofi input/outputpairs{x ,y }. i i figures 2.2b–d show three lines defined by three sets of parameters. the green line in figure 2.2d describes the data more accurately than the other two since it is much closer to the data points. however, we need a principled approach for deciding which parameters ϕ are better than others. to this end, we assign a numerical value to each choice of parameters that quantifies the degree of mismatch between the model and the data. we term this value the loss; a lower loss means a better fit. the mismatch is captured by the deviation between the model predictions f[x ,ϕ] i (heightofthelineatx )andthegroundtruthoutputsy . thesedeviationsaredepicted i i asorangedashedlinesinfigures2.2b–d. wequantifythetotalmismatch,training error, or loss as the sum of the squares of these deviations for all i training pairs: xi l[ϕ] = (f[x ,ϕ]−y )2 i i i=1 xi = (ϕ +ϕ x −y )2. (2.5) 0 1 i i i=1 sincethebestparametersminimizethisexpression,wecallthisaleast-squaresloss. the squaring operation means that the direction of the deviation (i.e., whether the line is draft: please send errata to [email protected] 2 supervised learning figure 2.2linearregressiontrainingdata,model,andloss. a)thetrainingdata (orange points) consist of i = 12 input/output pairs {x ,y }. b–d) each panel i i shows the linear regression model with different parameters. depending on the choiceofy-interceptandslopeparametersϕ=[ϕ ,ϕ ]t,themodelerrors(orange 0 1 dashed lines) may be larger or smaller. the loss l is the sum of the squares of theseerrors. theparametersthatdefinethelinesinpanels(b)and(c)havelarge losses l=7.07 and l=10.28, respectively because the models fit badly. the loss l=0.20 in panel (d) is smaller because the model fits well; in fact, this has the smallest loss of all possible lines, so these are the optimal parameters. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.2.2 linear regression example 21 figure2.3lossfunctionforlinearregressionmodelwiththedatasetinfigure2.2a. a) each combination of parameters ϕ=[ϕ ,ϕ ]t has an associated loss. the re- 0 1 sultinglossfunctionl[ϕ]canbevisualizedasasurface. thethreecirclesrepresent thethreelinesfromfigure2.2b–d. b)thelosscanalsobevisualizedasaheatmap, where brighter regions represent larger losses; here we are looking straight down atthesurfacein(a)fromaboveandgrayellipsesrepresentisocontours. thebest fittingline(figure2.2d)hastheparameterswiththesmallestloss(greencircle). above or below the data) is unimportant. there are also theoretical reasons for this choice which we return to in chapter 5. the loss l is a function of the parameters ϕ; it will be larger when the model fit is notebook2.1 poor (figure 2.2b,c) and smaller when it is good (figure 2.2d). considered in this light, supervised we term l[ϕ] the loss function or cost function. the goal is to find the parameters ϕˆ learning that minimize this quantity: h i ϕˆ = argmin l[ϕ] ϕ "" # xi = argmin (f[x ,ϕ]−y )2 i i ϕ ""i=1 # xi = argmin (ϕ +ϕ x −y )2 . (2.6) 0 1 i i ϕ i=1 there are only two parameters (the y-intercept ϕ and slope ϕ ), so we can calculate 0 1 problems2.1–2.2 the loss for every combination of"
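Because there are only two parameters, the minimization in equation 2.6 can be approximated by brute force: evaluate $L[\phi]$ on a grid of $(\phi_0, \phi_1)$ values, which is exactly the surface of figure 2.3, and take the grid point with the smallest loss. This sketch continues the one above; the grid ranges and resolution are arbitrary choices.

```python
# Evaluate the loss over a grid of parameter values -- the surface of
# figure 2.3 -- and take the smallest entry as an approximate argmin.
phi_0_grid = np.linspace(-1.0, 1.0, 201)   # candidate y-intercepts (arbitrary range)
phi_1_grid = np.linspace(-1.0, 3.0, 201)   # candidate slopes (arbitrary range)

losses = np.array([[least_squares_loss(np.array([p0, p1]), x, y)
                    for p1 in phi_1_grid]
                   for p0 in phi_0_grid])

i, j = np.unravel_index(np.argmin(losses), losses.shape)
print("approximate optimum: phi_0 =", phi_0_grid[i], ", phi_1 =", phi_1_grid[j])
```

For linear regression, the optimum is also available in closed form via the normal equations; exhaustive evaluation is only feasible here because the parameter space is two-dimensional, but it makes the loss-surface picture of figure 2.3 concrete.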