Chapter 2 Got Math? The Very Beginning ====================================== Business analytics requires the use of various quantitative tools, from algebra and calculus, to statistics and econometrics, with implementations in various programming languages and software. It calls for technical expertise as well as good judgment, and the ability to ask insightful questions and to deploy data towards answering the questions. The presence of the web as the primary platform for business and marketing has spawned huge quantities of data, driving firms to attempt to exploit vast stores of information in honing their competitive edge. As a consequence, firms in Silicon Valley (and elsewhere) are hiring a new breed of employee known as “data scientist” whose role is to analyze “Big Data” using tools such as the ones you will learn in this course. This chapter will review some of the mathematics, statistics, linear algebra, and calculus you might have not used in many years. It is more fun than it looks. We will also learn to use some mathematical packages along the way. We’ll revisit some standard calculations and analyses that you will have encountered in previous courses you might have taken. You will refresh some old concepts, learn new ones, and become technically adept with the tools of the trade. 2\.1 Logarithms and Exponentials, Continuous Compounding -------------------------------------------------------- It is fitting to begin with the fundamental mathematical constant, \\(e \= 2\.718281828\...\\), which is also the function \\(\\exp(\\cdot)\\). We often write this function as \\(e^x\\), where \\(x\\) can be a real or complex variable. It shows up in many places, especially in Finance, where it is used for continuous compounding and discounting of money at a given interest rate \\(r\\) over some time horizon \\(t\\). Given \\(y\=e^x\\), a fixed change in \\(x\\) results in the same continuous percentage change in \\(y\\). This is because \\(\\ln(y) \= x\\), where \\(\\ln(\\cdot)\\) is the natural logarithm function, and is the inverse function of the exponential function. Recall also that the first derivative of this function is \\(\\frac{dy}{dx} \= e^x\\), i.e., the function itself. The constant \\(e\\) is defined as the limit of a specific function: \\\[ e \= \\lim\_{n \\rightarrow \\infty} \\left( 1 \+ \\frac{1}{n} \\right)^n \\] Exponential compounding is the limit of successively shorter intervals over discrete compounding. ``` x = c(1,2,3) y = exp(x) print(y) ``` ``` ## [1] 2.718282 7.389056 20.085537 ``` ``` print(log(y)) ``` ``` ## [1] 1 2 3 ``` 2\.2 Calculus ------------- EXAMPLE: Bond Mathematics Given a horizon \\(t\\) divided into \\(n\\) intervals per year, one dollar compounded from time zero to time \\(t\\) years over these \\(n\\) intervals at per annum rate \\(r\\) may be written as \\(\\left(1 \+ \\frac{r}{n} \\right)^{nt}\\). Continuous\-compounding is the limit of this equation when the number of periods \\(n\\) goes to infinity: \\\[ \\lim\_{n \\rightarrow \\infty} \\left(1 \+ \\frac{r}{n} \\right)^{nt} \= \\lim\_{n \\rightarrow \\infty} \\left\[ \\left(1 \+ \\frac{1}{n/r} \\right)^{n/r}\\right]^{tr} \= e^{rt} \\] This is the forward value of one dollar. Present value is just the reverse. Therefore, the price today of a dollar received \\(t\\) years from today is \\(P \= e^{\-rt}\\). 
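To make this limit concrete, here is a minimal R sketch (the rate \\(r\\), horizon \\(t\\), and compounding frequencies below are arbitrary illustrative choices): as the number of compounding intervals \\(n\\) grows, the discrete compounding factor approaches \\(e^{rt}\\).

```
#DISCRETE COMPOUNDING CONVERGES TO CONTINUOUS COMPOUNDING (illustrative values)
r = 0.05; t = 2                       #per annum rate and horizon in years
for (n in c(1, 2, 12, 365, 1e6)) {    #compounding intervals per year
  print(c(n, (1 + r/n)^(n*t)))        #discrete compounding factor
}
print(exp(r*t))                       #continuous-compounding limit e^(rt)
```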
The yield of a bond is: \\\[ r \= \-\\frac{1}{t} \\ln(P) \\] In bond mathematics, the negative of the percentage price sensitivity of a bond to changes in interest rates is known as “Duration”: \\\[ \-\\frac{dP}{dr}\\frac{1}{P} \= \-\\left(\-t e^{\-rt}\\frac{1}{P}\\right) \= t P\\frac{1}{P} \= t \\] The derivative \\(\\frac{dP}{dr}\\) is the price sensitivity of the bond to changes in interest rates, and is negative. Further dividing this by \\(P\\) gives the percentage price sensitivity. The minus sign in front of the definition of duration is applied to convert the negative number to a positive one. The “Convexity” of a bond is its percentage price sensitivity relative to the second derivative, i.e., \\\[ \\frac{d^2P}{dr^2}\\frac{1}{P} \= t^2 P\\frac{1}{P} \= t^2 \\] Because the second derivative is positive, we know that the bond pricing function is convex. 2\.3 Normal Distribution ------------------------ This distribution is the workhorse of many models in the social sciences, and is assumed to generate much of the data that comprises the Big Data universe. Interestingly, most phenomena (variables) in the real world are not normally distributed. They tend to be “power law” distributed, i.e., many observations of low value, and very few of high value. The probability distribution declines from left to right and does not have the characteristic hump shape of the normal distribution. An example of data that is distributed thus is income distribution (many people with low income, very few with high income). Other examples are word frequencies in languages, population sizes of cities, number of connections of people in a social network, etc. Still, we do need to learn about the normal distribution because it is important in statistics, and the central limit theorem does govern much of the data we look at. Examples of approximately normally distributed data are stock returns, and human heights. If \\(x \\sim N(\\mu,\\sigma^2\)\\), that is, \\(x\\) is normally distributed with mean \\(\\mu\\) and variance \\(\\sigma^2\\), then the probability “density” function for \\(x\\) is: \\\[ f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\] The cumulative probability is given by the “distribution” function \\\[ F(x) \= \\int\_{\-\\infty}^x f(u) du \\] and \\\[ F(x) \= 1 \- F(\-x) \\] because the normal distribution is symmetric. We often also use the notation \\(N(\\cdot)\\) or \\(\\Phi(\\cdot)\\) instead of \\(F(\\cdot)\\). The “standard normal” distribution is: \\(x \\sim N(0,1\)\\). For the standard normal distribution: \\(F(0\) \= \\frac{1}{2}\\). The normal distribution has continuous support, i.e., a range of values of \\(x\\) that goes continuously from \\(\-\\infty\\) to \\(\+\\infty\\). ``` #DENSITY FUNCTION x = seq(-4,4,0.001) plot(x,dnorm(x),type="l",col="red") grid(lwd=2) ``` ``` print(dnorm(0)) ``` ``` ## [1] 0.3989423 ``` ``` fx = dnorm(x) Fx = pnorm(x) plot(x,Fx,type="l",col="blue",main="Normal Probability",ylab="f(x) and F(x)") lines(x,fx,col="red") grid(col="green",lwd=2) ``` ``` res = c(pnorm(-6),pnorm(0),pnorm(6)) print(round(res,6)) ``` ``` ## [1] 0.0 0.5 1.0 ``` 2\.4 Poisson Distribution ------------------------- The Poisson is also known as the rare\-event distribution. Its density function is: \\\[ f(n; \\lambda) \= \\frac{e^{\-\\lambda} \\lambda^n}{n!} \\] where there is only one parameter, i.e., the mean \\(\\lambda\\). 
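Before looking at the shape of this distribution, a quick sanity check in R (a minimal sketch; the choice of \\(\\lambda\\) and the range of \\(n\\) are arbitrary) confirms that the built\-in dpois function matches the formula above.

```
#CHECK dpois AGAINST THE POISSON DENSITY FORMULA (illustrative values)
lambda = 4
n = 0:10
print(max(abs(dpois(n, lambda) - exp(-lambda)*lambda^n/factorial(n))))  #should be ~0
```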
The density function is over discrete values of \\(n\\), the number of occurrences given the mean number of outcomes \\(\\lambda\\). The mean and variance of the Poisson distribution are both \\(\\lambda\\). The Poisson is a discrete\-support distribution, with a range of values \\(n\=\\{0,1,2, \\ldots\\}\\). ``` x = seq(0,25) lambda = 4 fx = dpois(x,lambda) barplot(fx,names.arg=x) ``` ``` print(sum(fx)) ``` ``` ## [1] 1 ``` ``` #Check that the mean is lambda print(sum(x*fx)) ``` ``` ## [1] 4 ``` ``` #Check that the variance is lambda print(sum(x^2*fx)-sum(x*fx)^2) ``` ``` ## [1] 4 ``` There are of course, many other probability distributions. Figure [2\.1](MathPrimer.html#fig:ProbDistributions) displays them succinctly. Figure 2\.1: Probability Distributions 2\.5 Moments of Random Variables -------------------------------- The following formulae are useful to review because any analysis of data begins with descriptive statistics, and the following statistical “moments” are computed in order to get a first handle on the data. Given a random variable \\(x\\) with probability density function \\(f(x)\\), then the following are the first four moments. \\\[ \\mbox{Mean (first moment or average)} \= E(x) \= \\int x f(x) dx \\] In like fashion, powers of the variable result in higher (\\(n\\)\-th order) moments. These are “non\-central” moments, i.e., they are moments of the raw random variable \\(x\\), not its deviation from its mean, i.e., \\(\[x \- E(x)]\\). \\\[ n^{th} \\mbox{ moment} \= E(x^n) \= \\int x^n f(x) dx \\] Central moments are moments of de\-meaned random variables. The second central moment is the variance: \\\[ \\mbox{Variance } \= Var(x) \= E\[x\-E(x)]^2 \= E(x^2\) \- \[E(x)]^2 \\] The standard deviation is the square\-root of the variance, i.e., \\(\\sigma \= \\sqrt{Var(x)}\\). The third central moment, normalized by the standard deviation to a suitable power is the skewness: \\\[ \\mbox{Skewness } \= \\frac{E\[x\-E(x)]^3}{Var(x)^{3/2}} \\] The absolute value of skewness relates to the degree of asymmetry in the probability density. If more extreme values occur to the left than the right, the distribution is left\-skewed. And vice\-versa, the distribution is right\-skewed. Correspondingly, the fourth central, normalized moment is kurtosis. \\\[ \\mbox{Kurtosis } \= \\frac{E\[x\-E(x)]^4}{\[Var(x)]^2} \\] Kurtosis in the normal distribution has value \\(3\\). We define “Excess Kurtosis” to be Kurtosis minus 3\. When a probability distribution has positive excess kurtosis we call it “leptokurtic”. Such distributions have fatter tails (either or both sides) than a normal distribution. ``` #EXAMPLES dx = 0.001 x = seq(-5,5,dx) fx = dnorm(x) mn = sum(x*fx*dx) print(c("Mean=",mn)) ``` ``` ## [1] "Mean=" "3.20341408642798e-19" ``` ``` vr = sum(x^2*fx*dx)-mn^2 print(c("Variance=",vr)) ``` ``` ## [1] "Variance=" "0.999984596641201" ``` ``` sk = sum((x-mn)^3*fx*dx)/vr^(3/2) print(c("Skewness=",sk)) ``` ``` ## [1] "Skewness=" "4.09311659106139e-19" ``` ``` kt = sum((x-mn)^4*fx*dx)/vr^2 print(c("Kurtosis=",kt)) ``` ``` ## [1] "Kurtosis=" "2.99967533661497" ``` 2\.6 Combining Random Variables ------------------------------- Since we often have to deal with composites of random variables, i.e., more than one random variable, we review here some simple rules for moments of combinations of random variables. There are several other expressions for the same equations, but we examine just a few here, as these are the ones we will use more frequently. 
First, we see that means are additive and scalable, i.e., \\\[ E(ax \+ by) \= a E(x) \+ b E(y) \\] where \\(x, y\\) are random variables, and \\(a, b\\) are scalar constants. The variance of scaled, summed random variables is as follows: \\\[ Var(ax \+ by) \= a^2 Var(x) \+ b^2 Var(y) \+ 2ab Cov(x,y) \\] And the covariance and correlation between two random variables is \\\[ Cov(x,y) \= E(xy) \- E(x)E(y) \\] \\\[ Corr(x,y) \= \\frac{Cov(x,y)}{\\sqrt{Var(x)Var(y)}} \\] Students of finance will be well\-versed with these expressions. They are facile and easy to implement. ``` #CHECK MEAN x = rnorm(1000) y = runif(1000) a = 3; b = 5 print(c(mean(a*x+b*y),a*mean(x)+b*mean(y))) ``` ``` ## [1] 2.488377 2.488377 ``` ``` #CHECK VARIANCE vr = var(a*x+b*y) vr2 = a^2*var(x) + b^2*var(y) + 2*a*b*cov(x,y) print(c(vr,vr2)) ``` ``` ## [1] 11.03522 11.03522 ``` ``` #CHECK COVARIANCE FORMULA cv = cov(x,y) cv2 = mean(x*y)-mean(x)*mean(y) print(c(cv,cv2)) ``` ``` ## [1] 0.0008330452 0.0008322121 ``` ``` corr = cov(x,y)/(sd(x)*sd(y)) print(corr) ``` ``` ## [1] 0.002889551 ``` ``` print(cor(x,y)) ``` ``` ## [1] 0.002889551 ``` 2\.7 Vector Algebra ------------------- We will be using linear algebra in many of the models that we explore in this book. Linear algebra requires the manipulation of vectors and matrices. We will also use vector calculus. Vector algebra and calculus are very powerful methods for tackling problems that involve solutions in spaces of several variables, i.e., in high dimension. The parsimony of using vector notation will become apparent as we proceed. This introduction is very light and meant for the reader who is mostly uninitiated in linear algebra. Rather than work with an abstract exposition, it is better to introduce ideas using an example. We’ll examine the use of vectors in the context of stock portfolios. We define the returns for each stock in a portfolio as: \\\[ {\\bf R} \= \\left(\\begin{array}{c} R\_1 \\\\ R\_2 \\\\ : \\\\ : \\\\ R\_N \\end{array} \\right) \\] This is a random vector, because each return \\(R\_i, i \= 1,2, \\ldots, N\\) comes from its own distribution, and the returns of all these stocks are correlated. This random vector’s probability is represented as a joint or multivariate probability distribution. Note that we use a bold font to denote the vector \\({\\bf R}\\). We also define a Unit vector: \\\[ {\\bf 1} \= \\left(\\begin{array}{c} 1 \\\\ 1 \\\\ : \\\\ : \\\\ 1 \\end{array} \\right) \\] The use of this unit vector will become apparent shortly, but it will be used in myriad ways and is a useful analytical object. A *portfolio* vector is defined as a set of portfolio weights, i.e., the fraction of the portfolio that is invested in each stock: \\\[ {\\bf w} \= \\left(\\begin{array}{c} w\_1 \\\\ w\_2 \\\\ : \\\\ : \\\\ w\_N \\end{array} \\right) \\] The total of portfolio weights must add up to 1\. \\\[ \\sum\_{i\=1}^N w\_i \= 1, \\;\\;\\; {\\bf w}'{\\bf 1} \= 1 \\] Pay special attention to the line above. In it, there are two ways in which to describe the sum of portfolio weights. The first one uses summation notation, and the second one uses a simple vector algebraic statement, i.e., that the transpose of \\({\\bf w}\\), denoted \\({\\bf w'}\\) times the unit vector \\({\\bf 1}\\) equals 1\.[27](#fn27) The two elements on the left\-hand\-side of the equation are vectors, and the 1 on the right hand side is a scalar. The dimension of \\({\\bf w'}\\) is \\((1 \\times N)\\) and the dimension of \\({\\bf 1}\\) is \\((N \\times 1\)\\). 
And a \\((1 \\times N)\\) vector multiplied by a \\((N \\times 1\)\\) results in a \\((1 \\times 1\)\\) vector, i.e., a scalar. We may also invest in a risk free asset (denoted as asset zero, \\(i\=0\\)), with return \\(R\_0 \= r\_f\\). In this case, the total portfolio weights including that of the risk free asset must sum to 1, and the weight \\(w\_0\\) is: \\\[ w\_0 \= 1 \- \\sum\_{i\=1}^N w\_i \= 1 \- {\\bf w}^\\top {\\bf 1} \\] Now we can use vector notation to compute statistics and quantities of the portfolio. The portfolio return is \\\[ R\_p \= \\sum\_{i\=1}^N w\_i R\_i \= {\\bf w}' {\\bf R} \\] Again, note that the left\-hand\-side quantity is a scalar, and the two right\-hand\-side quantities are vectors. Since \\({\\bf R}\\) is a random vector, \\(R\_p\\) is a random scalar (i.e., of dimension \\(1 \\times 1\\), not a vector). Such a product is called a scalar product of two vectors. In order for the calculation to work, the two vectors or matrices must be “conformable”, i.e., the inner dimensions of the matrices must be the same. In this case we are multiplying \\({\\bf w}'\\) of dimension \\(1 \\times N\\) with \\({\\bf R}\\) of dimension \\(N \\times 1\\) and since the two “inside” dimensions are both \\(N\\), the calculation is proper as the matrices are conformable. The result of the calculation will take the size of the “outer” dimensions, i.e., in this case \\(1 \\times 1\\). Now, suppose \\\[ {\\bf R} \\sim MVN\[{\\boldsymbol \\mu}; {\\bf \\Sigma}] \\] That is, returns are multivariate normally distributed with mean vector \\(E\[{\\bf R}] \= {\\boldsymbol \\mu} \= \[\\mu\_1,\\mu\_2,\\ldots,\\mu\_N]' \\in R^N\\) and covariance matrix \\({\\bf \\Sigma} \\in R^{N \\times N}\\). The notation \\(R^N\\) denotes a real\-valued array of dimension \\(N\\). If it’s just \\(N\\), then it means a vector of dimension \\(N\\). If it’s written as \\(N \\times M\\), then it’s a matrix of that dimension, i.e., \\(N\\) rows and \\(M\\) columns. We can write the portfolio’s mean return as: \\\[ E\[{\\bf w}' {\\bf R}] \= {\\bf w}' E\[{\\bf R}] \= {\\bf w}'{\\boldsymbol \\mu} \= \\sum\_{i\=1}^N w\_i \\mu\_i \\] The portfolio’s return variance is \\\[ Var(R\_p) \= Var({\\bf w}'{\\bf R}) \= {\\bf w}'{\\bf \\Sigma}{\\bf w} \\] Showing why this is true is left as an exercise to the reader. Take a case where \\(N\=2\\) and write out the expression for the variance of the portfolio in long form, term by term. Then undertake the same calculation using the variance formula \\({\\bf w' \\Sigma w}\\) and verify the equivalence. Also note carefully that this expression works because \\({\\bf \\Sigma}\\) is a symmetric matrix. The multivariate normal density function is: \\\[ f({\\bf R}) \= \\frac{1}{(2\\pi)^{N/2}\\sqrt{\|\\Sigma\|}} \\exp\\left\[\-\\frac{1}{2} \\boldsymbol{(R\-\\mu)'\\Sigma^{\-1}(R\-\\mu)} \\right] \\] Now, we take a look at some simple applications expressed in terms of vector notation. 2\.8 Basic Regression Model (OLS) --------------------------------- We assume here that you are conversant with a linear regression and how to interpret the outputs of the regression. This subsection only serves to demonstrate how to run the simple ordinary least squares (OLS) regression in R. We create some random data and then find the relation between \\(Y\\) (the dependent variable) and \\(X\\) (the independent variable). 
``` #STATISTICAL REGRESSION x = rnorm(1000) y = 3 + 4*x + 0.5*x^2 + rt(1000,5) res = lm(y~x) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -5.0031 -0.9200 -0.0879 0.7987 10.2941 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.53187 0.04591 76.94 <2e-16 *** ## x 3.91528 0.04634 84.49 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.451 on 998 degrees of freedom ## Multiple R-squared: 0.8774, Adjusted R-squared: 0.8772 ## F-statistic: 7139 on 1 and 998 DF, p-value: < 2.2e-16 ``` ``` plot(x,y) abline(res,col="red") ``` The plot of the data and the best fit regression line shows a good fit. The regression \\(R^2\=88\\%\\), which means that the right\-hand\-side (RHS, independent) variables explain 88% of the variation in the left\-hand\-side (LHS, dependent) variable. The \\(t\\)\-statistics are highly significant, suggesting that the RHS variables are useful in explaining the LHS variable. Also, the \\(F\\)\-statistic has a very small \\(p\\)\-value, meaning that the collection of RHS variables forms a statistically good model for explaining the LHS variable. The output of the regression is stored in an output object **res** and it has various components (in R, we call these “attributes”). The **names** function is used to see what these components are. ``` names(res) ``` ``` ## [1] "coefficients" "residuals" "effects" "rank" ## [5] "fitted.values" "assign" "qr" "df.residual" ## [9] "xlevels" "call" "terms" "model" ``` We might be interested in extracting the coefficients of the regression, which are a component in the output object. These are addressed using the “$” extractor, i.e., follow the output object with a “$” and then the name of the attribute you want to extract. Here is an example. ``` res$coefficients ``` ``` ## (Intercept) x ## 3.531866 3.915277 ``` 2\.9 Regression with Dummy Variables ------------------------------------ In R, we may “factor” a variable that has levels (categorical values instead of numerical values). These categorical values are often encountered in data, for example, we may have a data set of customers and they are classified into gender, income category, etc. If you have a column of such data, then you want to “factor” the data to make it clear to R that this is categorical. Here is an example. ``` x1 = rnorm(1000) w = ceiling(3*runif(1000)) y = 4 + 5*x + 6*w + rnorm(1000)*0.2 x2 = factor(w) print(head(x2,20)) ``` ``` ## [1] 3 1 2 3 2 3 3 1 1 1 2 1 2 2 1 1 2 2 3 2 ## Levels: 1 2 3 ``` We took the data **w** and factored it, so that R would treat it as categorical even though the data itself was originally numerical. The way it has been coded up will lead to three categories: \\(x\_2 \= \\{1,2,3\\}\\). We then run a regression of \\(y\\) on \\(x\_1\\) (numerical) and \\(x\_2\\) (categorical). ``` res = lm(y~x1+x2) summary(res) ``` ``` ## ## Call: ## lm(formula = y ~ x1 + x2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -20.9459 -3.3428 -0.0159 3.2414 16.2667 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.73486 0.27033 36.01 <2e-16 *** ## x1 -0.03866 0.15478 -0.25 0.803 ## x22 6.37973 0.38186 16.71 <2e-16 *** ## x23 11.95114 0.38658 30.91 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 4.968 on 996 degrees of freedom ## Multiple R-squared: 0.4905, Adjusted R-squared: 0.4889 ## F-statistic: 319.6 on 3 and 996 DF, p-value: < 2.2e-16 ``` Notice that the categorical \\(x\_2\\) variable has been split into two RHS variables **x22** and **x23**. But there are 3 levels, so why only 2 dummy variables? The way to think of this is to treat category 1 as the “baseline” and then the dummy variables **x22** and **x23** capture the difference between categories 1 and 2, and categories 1 and 3, respectively. We see that both are significant and positive, implying that both categories 2 and 3 increase the LHS variable by their coefficients over and above the effect of category 1\. How is the regression actually run internally? What R does with a factor variable that has \\(N\\) levels is to create \\(N\-1\\) columns of dummy variables, for categories 2 to \\(N\\), where each column is for one of these categories and takes value 1 if the column corresponds to the category and 0 otherwise. In order to see this, let’s redo the regression but create the dummy variable columns without factoring the data in R. This will serve as a cross\-check of the regression above. ``` #CHECK THE DUMMY VARIABLE REGRESSION idx = which(w==2); x22 = y*0; x22[idx] = 1 idx = which(w==3); x23 = y*0; x23[idx] = 1 res = lm(y~x1+x22+x23) summary(res) ``` ``` ## ## Call: ## lm(formula = y ~ x1 + x22 + x23) ## ## Residuals: ## Min 1Q Median 3Q Max ## -20.9459 -3.3428 -0.0159 3.2414 16.2667 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.73486 0.27033 36.01 <2e-16 *** ## x1 -0.03866 0.15478 -0.25 0.803 ## x22 6.37973 0.38186 16.71 <2e-16 *** ## x23 11.95114 0.38658 30.91 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 4.968 on 996 degrees of freedom ## Multiple R-squared: 0.4905, Adjusted R-squared: 0.4889 ## F-statistic: 319.6 on 3 and 996 DF, p-value: < 2.2e-16 ``` As we see, the regressions are identical. It is of course much easier to factor a variable and have R make the entire set of dummy variable columns than to write separate code to do so. 2\.10 Matrix Calculations ------------------------- Since the representation of data is usually in the form of tables, these are nothing but matrices and vectors. Therefore, it is a good idea to review basic matrix math, equations, calculus, etc. We start with a simple example where we manipulate economic quantities stored in matrices and vectors. We have already seen vectors for finance in an earlier section, and here we will get some practice manipulating these vectors in R. Example: Financial Portfolios. A portfolio is described by its holdings, i.e., the proportions of various securities you may hold. These proportions are also called “weights”. The return you get on a portfolio is random, because each security has some average return, but also has a standard deviation of movement around this mean return, and for a portfolio, each security also covaries with the others. Hence, the basic information about all securities is stored in a mean return vector, which has the average return for each security. There is also a covariance matrix of returns, generated from data, and finally, the portfolio itself is described by the vector of weights. We create these matrices below for a small portfolio comprised of four securities. We then calculate the expected return on the portfolio and the standard deviation of portfolio return. 
``` #PORTFOLIO CALCULATIONS w = matrix(c(0.3,0.4,0.2,0.1),4,1) #PORTFOLIO WEIGHTS mu = matrix(c(0.01,0.05,0.10,0.20),4,1) #MEAN RETURNS cv = matrix(c(0.002,0.001,0.001,0.001,0.001,0.02,0.01,0.01,0.001, 0.01,0.04,0.01,0.001,0.01,0.01,0.09),4,4) print(w) ``` ``` ## [,1] ## [1,] 0.3 ## [2,] 0.4 ## [3,] 0.2 ## [4,] 0.1 ``` ``` print(mu) ``` ``` ## [,1] ## [1,] 0.01 ## [2,] 0.05 ## [3,] 0.10 ## [4,] 0.20 ``` ``` print(cv) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] 0.002 0.001 0.001 0.001 ## [2,] 0.001 0.020 0.010 0.010 ## [3,] 0.001 0.010 0.040 0.010 ## [4,] 0.001 0.010 0.010 0.090 ``` ``` print(c("Mean return = ",t(w)%*%mu)) ``` ``` ## [1] "Mean return = " "0.063" ``` ``` print(c("Return Std Dev = ",sqrt(t(w)%*%cv%*%w))) ``` ``` ## [1] "Return Std Dev = " "0.0953939201416946" ``` ### 2\.10\.1 Diversification of a Portfolio It is useful to examine the power of using vector algebra with an application. Here we use vector and summation math to understand how diversification in stock portfolios works. Diversification occurs when we increase the number of non\-perfectly correlated stocks in a portfolio, thereby reducing portfolio variance. In order to compute the variance of the portfolio we need to use the portfolio weights \\({\\bf w}\\) and the covariance matrix of stock returns \\({\\bf R}\\), denoted \\({\\bf \\Sigma}\\). We first write down the formula for a portfolio’s return variance: \\\[\\begin{equation} Var(\\boldsymbol{w'R}) \= \\boldsymbol{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \+ \\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\boldsymbol{w\_i w\_j \\sigma\_{ij}} \\end{equation}\\] Readers are strongly encouraged to implement this by hand for \\(n\=2\\) to convince themselves that the vector form of the expression for variance \\(\\boldsymbol{w'\\Sigma w}\\) is the same thing as the long form on the right\-hand side of the equation above. If returns are independent, then the formula collapses to: \\\[\\begin{equation} Var(\\bf{w'R}) \= \\bf{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \\end{equation}\\] If returns are dependent, and equal amounts are invested in each asset (\\(w\_i\=1/n,\\;\\;\\forall i\\)): \\\[\\begin{eqnarray\*} Var(\\bf{w'R}) \&\=\& \\frac{1}{n}\\sum\_{i\=1}^n \\frac{\\sigma\_i^2}{n} \+ \\frac{n\-1}{n}\\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\frac{\\sigma\_{ij}}{n(n\-1\)}\\\\ \&\=\& \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\frac{n\-1}{n} \\bar{\\sigma\_{ij}}\\\\ \&\=\& \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\left(1 \- \\frac{1}{n} \\right) \\bar{\\sigma\_{ij}} \\end{eqnarray\*}\\] The first term is the average variance, denoted \\(\\bar{\\sigma\_1}^2\\) divided by \\(n\\), and the second is the average covariance, denoted \\(\\bar{\\sigma\_{ij}}\\) multiplied by factor \\((n\-1\)/n\\). As \\(n \\rightarrow \\infty\\), \\\[\\begin{equation} Var({\\bf w'R}) \= \\bar{\\sigma\_{ij}} \\end{equation}\\] This produces the remarkable result that in a well diversified portfolio, the variances of each stock’s return does not matter at all for portfolio risk! Further the risk of the portfolio, i.e., its variance, is nothing but the average of off\-diagonal terms in the covariance matrix. 
``` sd=0.20; cv=0.01 n = seq(1,100) sd_p = matrix(0,length(n),1) for (j in n) { cv_mat = matrix(cv,j,j) diag(cv_mat) = sd^2 w = matrix(1/j,j,1) sd_p[j] = sqrt(t(w) %*% cv_mat %*% w) } plot(n,sd_p,type="l",col="blue") ``` ### 2\.10\.2 Diversification exercise Implement the math above using R to compute the standard deviation of a portfolio of \\(n\\) identical securities with variance 0\.04, and pairwise covariances equal to 0\.01\. Keep increasing \\(n\\) and report the value of the standard deviation. What do you see? Why would this be easier to do in R versus Excel? 2\.11 Matrix Equations ---------------------- Here we examine how matrices may be used to represent large systems of equations easily and also solve them. Using the values of matrices \\({\\bf A}\\), \\({\\bf B}\\) and \\({\\bf w}\\) from the previous section, we write out the following in long form: \\\[\\begin{equation} {\\bf A} {\\bf w} \= {\\bf B} \\end{equation}\\] That is, we have \\\[\\begin{equation} \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \= \\left\[ \\begin{array}{c} 3 \\\\ 4 \\end{array} \\right] \\end{equation}\\] Do you get 2 equations? If so, write them out. Find the solution values \\(w\_1\\) and \\(w\_2\\) by hand. And then we may compute the solution for \\({\\bf w}\\) by “dividing” \\({\\bf B}\\) by \\({\\bf A}\\). This is not regular division because \\({\\bf A}\\) and \\({\\bf B}\\) are matrices. Instead we need to multiply the inverse of \\({\\bf A}\\) (which is its “reciprocal”) by \\({\\bf B}\\). The inverse of \\({\\bf A}\\) is \\\[\\begin{equation} {\\bf A}^{\-1} \= \\left\[ \\begin{array}{cc} 0\.500 \& \-0\.250 \\\\ \-0\.250 \& 0\.375 \\end{array} \\right] \\end{equation}\\] Now compute by hand: \\\[\\begin{equation} {\\bf A}^{\-1} {\\bf B} \= \\left\[ \\begin{array}{c} 0\.50 \\\\ 0\.75 \\end{array} \\right] \\end{equation}\\] which should be the same as your solution by hand. Literally, this is all the matrix algebra and calculus you will need for most of the work we will do. ``` A = matrix(c(3,2,2,4),2,2) B = matrix(c(3,4),2,1) print(solve(A) %*% B) ``` ``` ## [,1] ## [1,] 0.50 ## [2,] 0.75 ``` ### 2\.11\.1 Matrix algebra exercise The following brief notes will introduce you to everything you need to know about the vocabulary of vectors and matrices in a “DIY” (do\-it\-yourself) mode. Define \\\[ w \= \[w\_1 \\;\\;\\; w\_2]' \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] \\\[ I \= \\left\[ \\begin{array}{cc} 1 \& 0 \\\\ 0 \& 1 \\end{array} \\right] \\] \\\[ \\Sigma \= \\left\[ \\begin{array}{cc} \\sigma\_1^2 \& \\sigma\_{12} \\\\ \\sigma\_{12} \& \\sigma\_2^2 \\end{array} \\right] \\] Do the following exercises in long hand: * Show that \\(I\\;w \= w\\). * Show that the dimensions make sense at all steps of your calculations. * Show that \\\[ w' \\; \\Sigma \\; w \= w\_1^2 \\sigma\_1^2 \+ 2 w\_1 w\_2 \\sigma\_{12} \+ w\_2^2 \\sigma\_2^2 \\] 2\.12 Matrix Calculus --------------------- It is simple to undertake calculus when working with matrices. Calculations using matrices are mere functions of many variables. These functions are amenable to applying calculus, just as you would do in multivariate calculus. However, using vectors and matrices makes things simpler in fact, because we end up taking derivatives of these multivariate functions in one fell swoop rather than one\-by\-one for each variable. An example will make this clear. 
Suppose \\\[ {\\bf w} \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] and \\\[ {\\bf B} \= \\left\[ \\begin{array}{c} 3 \\\\ 4 \\end{array} \\right] \\] Let \\(f({\\bf w}) \= {\\bf w}' {\\bf B}\\). This is a function of two variables \\(w\_1, w\_2\\). If we write out \\(f({\\bf w})\\) in long form, we get \\(3 w\_1 \+ 4 w\_2\\). The derivative of \\(f({\\bf w})\\) with respect to \\(w\_1\\) is \\(\\frac{\\partial f}{\\partial w\_1} \= 3\\). The derivative of \\(f({\\bf w})\\) with respect to \\(w\_2\\) is \\(\\frac{\\partial f}{\\partial w\_2} \= 4\\). Compare these answers to vector \\({\\bf B}\\). What do you see? What is \\(\\frac{df}{d{\\bf w}}\\)? It’s \\({\\bf B}\\). The insight here is that if we simply treat the vectors as regular scalars and conduct calculus accordingly, we will end up with vector derivatives. Hence, the derivative of \\({\\bf w}' {\\bf B}\\) with respect to \\({\\bf w}\\) is just \\({\\bf B}\\). Of course, \\({\\bf w}' {\\bf B}\\) is an entire function and \\({\\bf B}\\) is a vector. But the beauty of this is that we can take all derivatives of function \\({\\bf w}' {\\bf B}\\) at one time! These ideas can also be extended to higher\-order matrix functions. Suppose \\\[ {\\bf A} \= \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\] and \\\[ {\\bf w} \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] Let \\(f({\\bf w}) \= {\\bf w}' {\\bf A} {\\bf w}\\). If we write out \\(f({\\bf w})\\) in long form, we get \\\[ {\\bf w}' {\\bf A} {\\bf w} \= 3 w\_1^2 \+ 4 w\_2^2 \+ 2 (2\) w\_1 w\_2 \\] Take the derivative of \\(f({\\bf w})\\) with respect to \\(w\_1\\), and this is \\\[ \\frac{df}{dw\_1} \= 6 w\_1 \+ 4 w\_2 \\] Take the derivative of \\(f({\\bf w})\\) with respect to \\(w\_2\\), and this is \\\[ \\frac{df}{dw\_2} \= 8 w\_2 \+ 4 w\_1 \\] Now, we write out the following calculation in long form: \\\[ 2\\; {\\bf A} \\; {\\bf w} \= 2 \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \= \\left\[ \\begin{array}{c} 6 w\_1 \+ 4 w\_2 \\\\ 8 w\_2 \+ 4 w\_1 \\end{array} \\right] \\] What do you notice about this solution when compared to the previous two answers? It is nothing but \\(\\frac{df}{dw}\\). Since \\(w \\in R^2\\), i.e., is of dimension 2, the derivative \\(\\frac{df}{dw}\\) will also be of that dimension. To see how this corresponds to scalar calculus, think of the function \\(f({\\bf w}) \= {\\bf w}' {\\bf A} {\\bf w}\\) as simply \\({\\bf A} w^2\\), where \\(w\\) is scalar. The derivative of this function with respect to \\(w\\) would be \\(2{\\bf A}w\\). And, this is the same as what we get when we look at a function of vectors, but with the caveat below. **Note**: This computation only works out because \\({\\bf A}\\) is symmetric. What should the expression be for the derivative of this vector function if \\({\\bf A}\\) is not symmetric but is a square matrix? It turns out that this is \\\[ \\frac{\\partial f}{\\partial {\\bf w}} \= {\\bf A}' {\\bf w} \+ {\\bf A} {\\bf w} \\neq 2 {\\bf A} {\\bf w} \\] Let’s try this and see. 
Suppose \\\[ {\\bf A} \= \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 1 \& 4 \\end{array} \\right] \\] You can check that all of the following hold: \\\[\\begin{eqnarray\*} {\\bf w}' {\\bf A} {\\bf w} \&\=\& 3 w\_1^2 \+ 4 w\_2^2 \+ 3 w\_1 w\_2 \\\\ \\frac{\\partial f}{\\partial w\_1} \&\=\& 6 w\_1 \+ 3 w\_2\\\\ \\frac{\\partial f}{\\partial w\_2} \&\=\& 3 w\_1 \+ 8 w\_2 \\end{eqnarray\*}\\] and \\\[ {\\bf A}' {\\bf w} \+ {\\bf A} {\\bf w} \= \\left\[ \\begin{array}{c} 6w\_1\+3w\_2 \\\\ 3w\_1\+8w\_2 \\end{array} \\right] \\] which is correct; note, however, that the symmetric\-case formula gives the wrong answer here: \\\[ 2{\\bf A} {\\bf w} \= \\left\[ \\begin{array}{c} 6w\_1\+4w\_2 \\\\ 2w\_1\+8w\_2 \\end{array} \\right] \\] ### 2\.12\.1 More exercises Try the following questions for practice. 1. What is the value of \\\[ {\\bf A}^{\-1} {\\bf A} {\\bf B} \\] Is this a vector or a scalar? 2. What is the final dimension of \\\[ {\\bf (w'B) (A A A^{\-1} B)} \\] 2\.13 Complex Numbers and Euler’s Equation ------------------------------------------ You can also handle complex numbers in R. A complex number \\(x \+ iy\\) is written as x \+ y\*1i. Euler’s equation states that \\\[ e^{i \\pi}\+1\=0 \\] which we can verify numerically: ``` print(pi) ``` ``` ## [1] 3.141593 ``` ``` exp(1i*pi)+1 ``` ``` ## [1] 0+1.224647e-16i ```
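As a further check (a minimal sketch; the angles below are arbitrary), the general form of Euler’s formula, \\(e^{i\\theta} \= \\cos\\theta \+ i\\sin\\theta\\), also holds numerically in R.

```
#CHECK EULER'S FORMULA: exp(1i*theta) equals cos(theta) + 1i*sin(theta)
theta = c(0, pi/6, pi/4, pi/2, pi)
print(exp(1i*theta))
print(cos(theta) + 1i*sin(theta))     #matches the line above
```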
The yield of a bond is: \\\[ r \= \-\\frac{1}{t} \\ln(P) \\] In bond mathematics, the negative of the percentage price sensitivity of a bond to changes in interest rates is known as “Duration”: \\\[ \-\\frac{dP}{dr}\\frac{1}{P} \= \-\\left(\-t e^{\-rt}\\frac{1}{P}\\right) \= t P\\frac{1}{P} \= t \\] The derivative \\(\\frac{dP}{dr}\\) is the price sensitivity of the bond to changes in interest rates, and is negative. Further dividing this by \\(P\\) gives the percentage price sensitivity. The minus sign in front of the definition of duration is applied to convert the negative number to a positive one. The “Convexity” of a bond is its percentage price sensitivity relative to the second derivative, i.e., \\\[ \\frac{d^2P}{dr^2}\\frac{1}{P} \= t^2 P\\frac{1}{P} \= t^2 \\] Because the second derivative is positive, we know that the bond pricing function is convex. 2\.3 Normal Distribution ------------------------ This distribution is the workhorse of many models in the social sciences, and is assumed to generate much of the data that comprises the Big Data universe. Interestingly, most phenomena (variables) in the real world are not normally distributed. They tend to be “power law” distributed, i.e., many observations of low value, and very few of high value. The probability distribution declines from left to right and does not have the characteristic hump shape of the normal distribution. An example of data that is distributed thus is income distribution (many people with low income, very few with high income). Other examples are word frequencies in languages, population sizes of cities, number of connections of people in a social network, etc. Still, we do need to learn about the normal distribution because it is important in statistics, and the central limit theorem does govern much of the data we look at. Examples of approximately normally distributed data are stock returns, and human heights. If \\(x \\sim N(\\mu,\\sigma^2\)\\), that is, \\(x\\) is normally distributed with mean \\(\\mu\\) and variance \\(\\sigma^2\\), then the probability “density” function for \\(x\\) is: \\\[ f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\] The cumulative probability is given by the “distribution” function \\\[ F(x) \= \\int\_{\-\\infty}^x f(u) du \\] and \\\[ F(x) \= 1 \- F(\-x) \\] because the normal distribution is symmetric. We often also use the notation \\(N(\\cdot)\\) or \\(\\Phi(\\cdot)\\) instead of \\(F(\\cdot)\\). The “standard normal” distribution is: \\(x \\sim N(0,1\)\\). For the standard normal distribution: \\(F(0\) \= \\frac{1}{2}\\). The normal distribution has continuous support, i.e., a range of values of \\(x\\) that goes continuously from \\(\-\\infty\\) to \\(\+\\infty\\). ``` #DENSITY FUNCTION x = seq(-4,4,0.001) plot(x,dnorm(x),type="l",col="red") grid(lwd=2) ``` ``` print(dnorm(0)) ``` ``` ## [1] 0.3989423 ``` ``` fx = dnorm(x) Fx = pnorm(x) plot(x,Fx,type="l",col="blue",main="Normal Probability",ylab="f(x) and F(x)") lines(x,fx,col="red") grid(col="green",lwd=2) ``` ``` res = c(pnorm(-6),pnorm(0),pnorm(6)) print(round(res,6)) ``` ``` ## [1] 0.0 0.5 1.0 ``` 2\.4 Poisson Distribution ------------------------- The Poisson is also known as the rare\-event distribution. Its density function is: \\\[ f(n; \\lambda) \= \\frac{e^{\-\\lambda} \\lambda^n}{n!} \\] where there is only one parameter, i.e., the mean \\(\\lambda\\). 
The density function is over discrete values of \\(n\\), the number of occurrences given the mean number of outcomes \\(\\lambda\\). The mean and variance of the Poisson distribution are both \\(\\lambda\\). The Poisson is a discrete\-support distribution, with a range of values \\(n\=\\{0,1,2, \\ldots\\}\\). ``` x = seq(0,25) lambda = 4 fx = dpois(x,lambda) barplot(fx,names.arg=x) ``` ``` print(sum(fx)) ``` ``` ## [1] 1 ``` ``` #Check that the mean is lambda print(sum(x*fx)) ``` ``` ## [1] 4 ``` ``` #Check that the variance is lambda print(sum(x^2*fx)-sum(x*fx)^2) ``` ``` ## [1] 4 ``` There are of course, many other probability distributions. Figure [2\.1](MathPrimer.html#fig:ProbDistributions) displays them succinctly. Figure 2\.1: Probability Distributions 2\.5 Moments of Random Variables -------------------------------- The following formulae are useful to review because any analysis of data begins with descriptive statistics, and the following statistical “moments” are computed in order to get a first handle on the data. Given a random variable \\(x\\) with probability density function \\(f(x)\\), then the following are the first four moments. \\\[ \\mbox{Mean (first moment or average)} \= E(x) \= \\int x f(x) dx \\] In like fashion, powers of the variable result in higher (\\(n\\)\-th order) moments. These are “non\-central” moments, i.e., they are moments of the raw random variable \\(x\\), not its deviation from its mean, i.e., \\(\[x \- E(x)]\\). \\\[ n^{th} \\mbox{ moment} \= E(x^n) \= \\int x^n f(x) dx \\] Central moments are moments of de\-meaned random variables. The second central moment is the variance: \\\[ \\mbox{Variance } \= Var(x) \= E\[x\-E(x)]^2 \= E(x^2\) \- \[E(x)]^2 \\] The standard deviation is the square\-root of the variance, i.e., \\(\\sigma \= \\sqrt{Var(x)}\\). The third central moment, normalized by the standard deviation to a suitable power is the skewness: \\\[ \\mbox{Skewness } \= \\frac{E\[x\-E(x)]^3}{Var(x)^{3/2}} \\] The absolute value of skewness relates to the degree of asymmetry in the probability density. If more extreme values occur to the left than the right, the distribution is left\-skewed. And vice\-versa, the distribution is right\-skewed. Correspondingly, the fourth central, normalized moment is kurtosis. \\\[ \\mbox{Kurtosis } \= \\frac{E\[x\-E(x)]^4}{\[Var(x)]^2} \\] Kurtosis in the normal distribution has value \\(3\\). We define “Excess Kurtosis” to be Kurtosis minus 3\. When a probability distribution has positive excess kurtosis we call it “leptokurtic”. Such distributions have fatter tails (either or both sides) than a normal distribution. ``` #EXAMPLES dx = 0.001 x = seq(-5,5,dx) fx = dnorm(x) mn = sum(x*fx*dx) print(c("Mean=",mn)) ``` ``` ## [1] "Mean=" "3.20341408642798e-19" ``` ``` vr = sum(x^2*fx*dx)-mn^2 print(c("Variance=",vr)) ``` ``` ## [1] "Variance=" "0.999984596641201" ``` ``` sk = sum((x-mn)^3*fx*dx)/vr^(3/2) print(c("Skewness=",sk)) ``` ``` ## [1] "Skewness=" "4.09311659106139e-19" ``` ``` kt = sum((x-mn)^4*fx*dx)/vr^2 print(c("Kurtosis=",kt)) ``` ``` ## [1] "Kurtosis=" "2.99967533661497" ``` 2\.6 Combining Random Variables ------------------------------- Since we often have to deal with composites of random variables, i.e., more than one random variable, we review here some simple rules for moments of combinations of random variables. There are several other expressions for the same equations, but we examine just a few here, as these are the ones we will use more frequently. 
First, we see that means are additive and scalable, i.e., \\\[ E(ax \+ by) \= a E(x) \+ b E(y) \\] where \\(x, y\\) are random variables, and \\(a, b\\) are scalar constants. The variance of scaled, summed random variables is as follows: \\\[ Var(ax \+ by) \= a^2 Var(x) \+ b^2 Var(y) \+ 2ab Cov(x,y) \\] And the covariance and correlation between two random variables is \\\[ Cov(x,y) \= E(xy) \- E(x)E(y) \\] \\\[ Corr(x,y) \= \\frac{Cov(x,y)}{\\sqrt{Var(x)Var(y)}} \\] Students of finance will be well\-versed with these expressions. They are facile and easy to implement. ``` #CHECK MEAN x = rnorm(1000) y = runif(1000) a = 3; b = 5 print(c(mean(a*x+b*y),a*mean(x)+b*mean(y))) ``` ``` ## [1] 2.488377 2.488377 ``` ``` #CHECK VARIANCE vr = var(a*x+b*y) vr2 = a^2*var(x) + b^2*var(y) + 2*a*b*cov(x,y) print(c(vr,vr2)) ``` ``` ## [1] 11.03522 11.03522 ``` ``` #CHECK COVARIANCE FORMULA cv = cov(x,y) cv2 = mean(x*y)-mean(x)*mean(y) print(c(cv,cv2)) ``` ``` ## [1] 0.0008330452 0.0008322121 ``` ``` corr = cov(x,y)/(sd(x)*sd(y)) print(corr) ``` ``` ## [1] 0.002889551 ``` ``` print(cor(x,y)) ``` ``` ## [1] 0.002889551 ``` 2\.7 Vector Algebra ------------------- We will be using linear algebra in many of the models that we explore in this book. Linear algebra requires the manipulation of vectors and matrices. We will also use vector calculus. Vector algebra and calculus are very powerful methods for tackling problems that involve solutions in spaces of several variables, i.e., in high dimension. The parsimony of using vector notation will become apparent as we proceed. This introduction is very light and meant for the reader who is mostly uninitiated in linear algebra. Rather than work with an abstract exposition, it is better to introduce ideas using an example. We’ll examine the use of vectors in the context of stock portfolios. We define the returns for each stock in a portfolio as: \\\[ {\\bf R} \= \\left(\\begin{array}{c} R\_1 \\\\ R\_2 \\\\ : \\\\ : \\\\ R\_N \\end{array} \\right) \\] This is a random vector, because each return \\(R\_i, i \= 1,2, \\ldots, N\\) comes from its own distribution, and the returns of all these stocks are correlated. This random vector’s probability is represented as a joint or multivariate probability distribution. Note that we use a bold font to denote the vector \\({\\bf R}\\). We also define a Unit vector: \\\[ {\\bf 1} \= \\left(\\begin{array}{c} 1 \\\\ 1 \\\\ : \\\\ : \\\\ 1 \\end{array} \\right) \\] The use of this unit vector will become apparent shortly, but it will be used in myriad ways and is a useful analytical object. A *portfolio* vector is defined as a set of portfolio weights, i.e., the fraction of the portfolio that is invested in each stock: \\\[ {\\bf w} \= \\left(\\begin{array}{c} w\_1 \\\\ w\_2 \\\\ : \\\\ : \\\\ w\_N \\end{array} \\right) \\] The total of portfolio weights must add up to 1\. \\\[ \\sum\_{i\=1}^N w\_i \= 1, \\;\\;\\; {\\bf w}'{\\bf 1} \= 1 \\] Pay special attention to the line above. In it, there are two ways in which to describe the sum of portfolio weights. The first one uses summation notation, and the second one uses a simple vector algebraic statement, i.e., that the transpose of \\({\\bf w}\\), denoted \\({\\bf w'}\\) times the unit vector \\({\\bf 1}\\) equals 1\.[27](#fn27) The two elements on the left\-hand\-side of the equation are vectors, and the 1 on the right hand side is a scalar. The dimension of \\({\\bf w'}\\) is \\((1 \\times N)\\) and the dimension of \\({\\bf 1}\\) is \\((N \\times 1\)\\). 
And a \\((1 \\times N)\\) vector multiplied by a \\((N \\times 1\)\\) results in a \\((1 \\times 1\)\\) vector, i.e., a scalar. We may also invest in a risk free asset (denoted as asset zero, \\(i\=0\\)), with return \\(R\_0 \= r\_f\\). In this case, the total portfolio weights including that of the risk free asset must sum to 1, and the weight \\(w\_0\\) is: \\\[ w\_0 \= 1 \- \\sum\_{i\=1}^N w\_i \= 1 \- {\\bf w}^\\top {\\bf 1} \\] Now we can use vector notation to compute statistics and quantities of the portfolio. The portfolio return is \\\[ R\_p \= \\sum\_{i\=1}^N w\_i R\_i \= {\\bf w}' {\\bf R} \\] Again, note that the left\-hand\-side quantity is a scalar, and the two right\-hand\-side quantities are vectors. Since \\({\\bf R}\\) is a random vector, \\(R\_p\\) is a random (scalar, i.e., not a vector, of dimension \\(1 \\times 1\\)) variable. Such a product is called a scalar product of two vectors. In order for the calculation to work, the two vectors or matrices must be “conformable”, i.e., the inner dimensions of the matrices must be the same. In this case we are multiplying \\({\\bf w}'\\) of dimension \\(1 \\times N\\) with \\({\\bf R}\\) of dimension \\(N \\times 1\\) and since the two “inside” dimensions are both \\(n\\), the calculation is proper as the matrices are conformable. The result of the calculation will take the size of the “outer” dimensions, i.e., in this case \\(1 \\times 1\\). Now, suppose \\\[ {\\bf R} \\sim MVN\[{\\boldsymbol \\mu}; {\\bf \\Sigma}] \\] That is, returns are multivariate normally distributed with mean vector \\(E\[{\\bf R}] \= {\\boldsymbol \\mu} \= \[\\mu\_1,\\mu\_2,\\ldots,\\mu\_N]' \\in R^N\\) and covariance matrix \\({\\bf \\Sigma} \\in R^{N \\times N}\\). The notation \\(R^N\\) stands for a \`\`real\-valued matrix of dimension \\(N\\).’’ If it’s just \\(N\\), then it means a vector of dimension \\(N\\). If it’s written as \\(N \\times M\\), then it’s a matrix of that dimension, i.e., \\(N\\) rows and \\(M\\) columns. We can write the portfolio’s mean return as: \\\[ E\[{\\bf w}' {\\bf R}] \= {\\bf w}' E\[{\\bf R}] \= {\\bf w}'{\\boldsymbol \\mu} \= \\sum\_{i\=1}^N w\_i \\mu\_i \\] The portfolio’s return variance is \\\[ Var(R\_p) \= Var({\\bf w}'{\\bf R}) \= {\\bf w}'{\\bf \\Sigma}{\\bf w} \\] Showing why this is true is left as an exercise to the reader. Take a case where \\(N\=2\\) and write out the expression for the variance of the portfolio using equation . Then also undertake the same calculation using the variance formula \\({\\bf w' \\Sigma w}\\) and see the equivalence. Also note carefully that this expression works because \\({\\bf \\Sigma}\\) is a symmetric matrix. The multivariate normal density function is: \\\[ f({\\bf R}) \= \\frac{1}{2\\pi^{N/2}\\sqrt{\|\\Sigma\|}} \\exp\\left\[\-\\frac{1}{2} \\boldsymbol{(R\-\\mu)'\\Sigma^{\-1}(R\-\\mu)} \\right] \\] Now, we take a look at some simple applications expressed in terms of vector notation. 2\.8 Basic Regression Model (OLS) --------------------------------- We assume here that you are conversant with a linear regression and how to interpret the outputs of the regression. This subsection only serves to demonstrate how to run the simple ordinary least squares (OLS) regression in R. We create some random data and then find the relation between \\(Y\\) (the dependent variable) and \\(X\\) (the independent variable). 
``` #STATISTICAL REGRESSION x = rnorm(1000) y = 3 + 4*x + 0.5*x^2 + rt(1000,5) res = lm(y~x) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -5.0031 -0.9200 -0.0879 0.7987 10.2941 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.53187 0.04591 76.94 <2e-16 *** ## x 3.91528 0.04634 84.49 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.451 on 998 degrees of freedom ## Multiple R-squared: 0.8774, Adjusted R-squared: 0.8772 ## F-statistic: 7139 on 1 and 998 DF, p-value: < 2.2e-16 ``` ``` plot(x,y) abline(res,col="red") ``` The plot of the data and the best fit regression line shows a good fit. The regression \\(R^2\=88\\%\\), which means that the right\-hand\-side (RHS, independent) variables explain 88% of the variation in the left\-hand\-side (LHS, dependent) variable. The \\(t\\)\-statistics are highly significant, suggesting that the RHS variables are useful in explaining the LHS variable. Also, the \\(F\\)\-statistic has a very small \\(p\\)\-value, meaning that the collection of RHS variables form a statistically good model for explaining the LHS variable. The output of the regression is stored in an output object **res** and it has various components (in R, we call these “attributes”). The **names** function is used to see what these components are. ``` names(res) ``` ``` ## [1] "coefficients" "residuals" "effects" "rank" ## [5] "fitted.values" "assign" "qr" "df.residual" ## [9] "xlevels" "call" "terms" "model" ``` We might be interested in extracting the coefficients of the regression, which are a component in the output object. These are addressed using the “$” extractor, i.e., follow the output object with a “$” and then the name of the attribute you want to extract. Here is an example. ``` res$coefficients ``` ``` ## (Intercept) x ## 3.531866 3.915277 ``` 2\.9 Regression with Dummy Variables ------------------------------------ In R, we may “factor” a variable that has levels (categorical values instead of numerical values). These categorical values are often encountered in data, for example, we may have a data set of customers and they are classified into gender, income category, etc. If you have a column of such data, then you want to “factor” the data to make it clear to R that this is categorical. Here is an example. ``` x1 = rnorm(1000) w = ceiling(3*runif(1000)) y = 4 + 5*x + 6*w + rnorm(1000)*0.2 x2 = factor(w) print(head(x2,20)) ``` ``` ## [1] 3 1 2 3 2 3 3 1 1 1 2 1 2 2 1 1 2 2 3 2 ## Levels: 1 2 3 ``` We took the data **w** and factored it, so that R would treat it as categorical even thought the data itself was originally numerical. The way it has been coded up, will lead to three categories: \\(x\_2 \= \\{1,2,3\\}\\). We then run a regression of \\(y\\) on \\(x\_1\\) (numerical) and \\(x\_2\\) (categorical). ``` res = lm(y~x1+x2) summary(res) ``` ``` ## ## Call: ## lm(formula = y ~ x1 + x2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -20.9459 -3.3428 -0.0159 3.2414 16.2667 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.73486 0.27033 36.01 <2e-16 *** ## x1 -0.03866 0.15478 -0.25 0.803 ## x22 6.37973 0.38186 16.71 <2e-16 *** ## x23 11.95114 0.38658 30.91 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 4.968 on 996 degrees of freedom ## Multiple R-squared: 0.4905, Adjusted R-squared: 0.4889 ## F-statistic: 319.6 on 3 and 996 DF, p-value: < 2.2e-16 ``` Notice that the categorical \\(x\_2\\) variable has been split into two RHS variables **x22** and **x23**. However, there are 3 levels, not 2? The way to think of this is to treat category 1 as the “baseline” and then the dummy variables **x22** and **x23** capture the difference between category 1 and 2, and 1 and 3, respectively. We see that both are significant and positive implying that both categories 2 and 3 increase the LHS variable by their coefficients over and above the effect of category 1\. How is the regression actually run internally? What R does with a factor variable that has \\(N\\) levels is to create \\(N\-1\\) columns of dummy variables, for categories 2 to \\(N\\), where each column is for one of these categories and takes value 1 if the column corresponds to the category and 0 otherwise. In order to see this, let’s redo the regression but create the dummy variable columns without factoring the data in R. This will serve as a cross\-check of the regression above. ``` #CHECK THE DUMMY VARIABLE REGRESSION idx = which(w==2); x22 = y*0; x22[idx] = 1 idx = which(w==3); x23 = y*0; x23[idx] = 1 res = lm(y~x1+x22+x23) summary(res) ``` ``` ## ## Call: ## lm(formula = y ~ x1 + x22 + x23) ## ## Residuals: ## Min 1Q Median 3Q Max ## -20.9459 -3.3428 -0.0159 3.2414 16.2667 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.73486 0.27033 36.01 <2e-16 *** ## x1 -0.03866 0.15478 -0.25 0.803 ## x22 6.37973 0.38186 16.71 <2e-16 *** ## x23 11.95114 0.38658 30.91 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 4.968 on 996 degrees of freedom ## Multiple R-squared: 0.4905, Adjusted R-squared: 0.4889 ## F-statistic: 319.6 on 3 and 996 DF, p-value: < 2.2e-16 ``` As we see, the regressions are identical. It is of course much easier to factor a variable and have R make the entire set of dummy variable columns than to write separate code to do so. 2\.10 Matrix Calculations ------------------------- Since the representation of data is usually in the form of tables, these are nothing but matrices and vectors. Therefore, it is a good idea to review basic matrix math, equations, calculus, etc. We start with a simple example where we manipulate economic quantities stored in matrices and vectors. We have already seen vectors for finance in an earlier section, and here we will get some practice manipulating these vectors in R. Example: Financial Portfolios. A portfolio is described by its holdings, i.e., the proportions of various securities you may hold. These proportions are also called “weights”. The return you get on a portfolio is random, because each security has some average return, but also has a standard deviation of movement around this mean return, and for a portfolio, each security also covaries with the others. Hence, the basic information about all securities is stored in a mean return vector, which has the average return for each security. There is also a covariance matrix of returns, generated from data, and finally, the portfolio itself is described by the vector of weights. We create these matrices below for a small portfolio comprised of four securities. We then calculate the expected return on the portfolio and the standrd deviation of portfolio return. 
``` #PORTFOLIO CALCULATIONS w = matrix(c(0.3,0.4,0.2,0.1),4,1) #PORTFOLIO WEIGHTS mu = matrix(c(0.01,0.05,0.10,0.20),4,1) #MEAN RETURNS cv = matrix(c(0.002,0.001,0.001,0.001,0.001,0.02,0.01,0.01,0.001, 0.01,0.04,0.01,0.001,0.01,0.01,0.09),4,4) print(w) ``` ``` ## [,1] ## [1,] 0.3 ## [2,] 0.4 ## [3,] 0.2 ## [4,] 0.1 ``` ``` print(mu) ``` ``` ## [,1] ## [1,] 0.01 ## [2,] 0.05 ## [3,] 0.10 ## [4,] 0.20 ``` ``` print(cv) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] 0.002 0.001 0.001 0.001 ## [2,] 0.001 0.020 0.010 0.010 ## [3,] 0.001 0.010 0.040 0.010 ## [4,] 0.001 0.010 0.010 0.090 ``` ``` print(c("Mean return = ",t(w)%*%mu)) ``` ``` ## [1] "Mean return = " "0.063" ``` ``` print(c("Return Std Dev = ",sqrt(t(w)%*%cv%*%w))) ``` ``` ## [1] "Return Std Dev = " "0.0953939201416946" ``` ### 2\.10\.1 Diversification of a Portfolio It is useful to examine the power of using vector algebra with an application. Here we use vector and summation math to understand how diversification in stock portfolios works. Diversification occurs when we increase the number of non\-perfectly correlated stocks in a portfolio, thereby reducing portfolio variance. In order to compute the variance of the portfolio we need to use the portfolio weights \\({\\bf w}\\) and the covariance matrix of stock returns \\({\\bf R}\\), denoted \\({\\bf \\Sigma}\\). We first write down the formula for a portfolio’s return variance: \\\[\\begin{equation} Var(\\boldsymbol{w'R}) \= \\boldsymbol{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \+ \\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\boldsymbol{w\_i w\_j \\sigma\_{ij}} \\end{equation}\\] Readers are strongly encouraged to implement this by hand for \\(n\=2\\) to convince themselves that the vector form of the expression for variance \\(\\boldsymbol{w'\\Sigma w}\\) is the same thing as the long form on the right\-hand side of the equation above. If returns are independent, then the formula collapses to: \\\[\\begin{equation} Var(\\bf{w'R}) \= \\bf{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \\end{equation}\\] If returns are dependent, and equal amounts are invested in each asset (\\(w\_i\=1/n,\\;\\;\\forall i\\)): \\\[\\begin{eqnarray\*} Var(\\bf{w'R}) \&\=\& \\frac{1}{n}\\sum\_{i\=1}^n \\frac{\\sigma\_i^2}{n} \+ \\frac{n\-1}{n}\\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\frac{\\sigma\_{ij}}{n(n\-1\)}\\\\ \&\=\& \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\frac{n\-1}{n} \\bar{\\sigma\_{ij}}\\\\ \&\=\& \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\left(1 \- \\frac{1}{n} \\right) \\bar{\\sigma\_{ij}} \\end{eqnarray\*}\\] The first term is the average variance, denoted \\(\\bar{\\sigma\_i}^2\\), divided by \\(n\\), and the second is the average covariance, denoted \\(\\bar{\\sigma\_{ij}}\\), multiplied by factor \\((n\-1\)/n\\). As \\(n \\rightarrow \\infty\\), \\\[\\begin{equation} Var({\\bf w'R}) \= \\bar{\\sigma\_{ij}} \\end{equation}\\] This produces the remarkable result that in a well diversified portfolio, the variance of each stock’s return does not matter at all for portfolio risk! Further, the risk of the portfolio, i.e., its variance, is nothing but the average of off\-diagonal terms in the covariance matrix.
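The short simulation below illustrates this numerically: it builds the covariance matrix for \\(n\\) identical securities (variance \\(0.04\\) on the diagonal, pairwise covariance \\(0.01\\) elsewhere), computes the equally\-weighted portfolio standard deviation for \\(n \= 1,\\ldots,100\\), and plots how it falls and flattens out near the square root of the average covariance.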
``` sd=0.20; cv=0.01 n = seq(1,100) sd_p = matrix(0,length(n),1) for (j in n) { cv_mat = matrix(cv,j,j) diag(cv_mat) = sd^2 w = matrix(1/j,j,1) sd_p[j] = sqrt(t(w) %*% cv_mat %*% w) } plot(n,sd_p,type="l",col="blue") ``` ### 2\.10\.2 Diversification exercise Implement the math above using R to compute the standard deviation of a portfolio of \\(n\\) identical securities with variance 0\.04, and pairwise covariances equal to 0\.01\. Keep increasing \\(n\\) and report the value of the standard deviation. What do you see? Why would this be easier to do in R versus Excel?
2\.11 Matrix Equations ---------------------- Here we examine how matrices may be used to represent large systems of equations easily and also solve them. Using the values of matrices \\({\\bf A}\\), \\({\\bf B}\\) and \\({\\bf w}\\) from the previous section, we write out the following in long form: \\\[\\begin{equation} {\\bf A} {\\bf w} \= {\\bf B} \\end{equation}\\] That is, we have \\\[\\begin{equation} \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \= \\left\[ \\begin{array}{c} 3 \\\\ 4 \\end{array} \\right] \\end{equation}\\] Do you get 2 equations? If so, write them out. Find the solution values \\(w\_1\\) and \\(w\_2\\) by hand. And then we may compute the solution for \\({\\bf w}\\) by “dividing” \\({\\bf B}\\) by \\({\\bf A}\\). This is not regular division because \\({\\bf A}\\) and \\({\\bf B}\\) are matrices. Instead we need to multiply the inverse of \\({\\bf A}\\) (which is its “reciprocal”) by \\({\\bf B}\\). The inverse of \\({\\bf A}\\) is \\\[\\begin{equation} {\\bf A}^{\-1} \= \\left\[ \\begin{array}{cc} 0\.500 \& \-0\.250 \\\\ \-0\.250 \& 0\.375 \\end{array} \\right] \\end{equation}\\] Now compute by hand: \\\[\\begin{equation} {\\bf A}^{\-1} {\\bf B} \= \\left\[ \\begin{array}{c} 0\.50 \\\\ 0\.75 \\end{array} \\right] \\end{equation}\\] which should be the same as your solution by hand. Literally, this is all the matrix algebra and calculus you will need for most of the work we will do. ``` A = matrix(c(3,2,2,4),2,2) B = matrix(c(3,4),2,1) print(solve(A) %*% B) ``` ``` ## [,1] ## [1,] 0.50 ## [2,] 0.75 ``` ### 2\.11\.1 Matrix algebra exercise The following brief notes will introduce you to everything you need to know about the vocabulary of vectors and matrices in a “DIY” (do\-it\-yourself) mode. Define \\\[ w \= \[w\_1 \\;\\;\\; w\_2]' \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] \\\[ I \= \\left\[ \\begin{array}{cc} 1 \& 0 \\\\ 0 \& 1 \\end{array} \\right] \\] \\\[ \\Sigma \= \\left\[ \\begin{array}{cc} \\sigma\_1^2 \& \\sigma\_{12} \\\\ \\sigma\_{12} \& \\sigma\_2^2 \\end{array} \\right] \\] Do the following exercises in long hand: * Show that \\(I\\;w \= w\\). * Show that the dimensions make sense at all steps of your calculations. * Show that \\\[ w' \\; \\Sigma \\; w \= w\_1^2 \\sigma\_1^2 \+ 2 w\_1 w\_2 \\sigma\_{12} \+ w\_2^2 \\sigma\_2^2 \\] 2\.12 Matrix Calculus --------------------- It is simple to undertake calculus when working with matrices. Calculations using matrices are mere functions of many variables. These functions are amenable to applying calculus, just as you would do in multivariate calculus.
However, using vectors and matrices makes things simpler in fact, because we end up taking derivatives of these multivariate functions in one fell swoop rather than one\-by\-one for each variable. An example will make this clear. Suppose \\\[ {\\bf w} \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] and \\\[ {\\bf B} \= \\left\[ \\begin{array}{c} 3 \\\\ 4 \\end{array} \\right] \\] Let \\(f({\\bf w}) \= {\\bf w}' {\\bf B}\\). This is a function of two variables \\(w\_1, w\_2\\). If we write out \\(f({\\bf w})\\) in long form, we get \\(3 w\_1 \+ 4 w\_2\\). The derivative of \\(f({\\bf w})\\) with respect to \\(w\_1\\) is \\(\\frac{\\partial f}{\\partial w\_1} \= 3\\). The derivative of \\(f({\\bf w})\\) with respect to \\(w\_2\\) is \\(\\frac{\\partial f}{\\partial w\_2} \= 4\\). Compare these answers to vector \\({\\bf B}\\). What do you see? What is \\(\\frac{df}{d{\\bf w}}\\)? It’s \\({\\bf B}\\). The insight here is that if we simply treat the vectors as regular scalars and conduct calculus accordingly, we will end up with vector derivatives. Hence, the derivative of \\({\\bf w}' {\\bf B}\\) with respect to \\({\\bf w}\\) is just \\({\\bf B}\\). Of course, \\({\\bf w}' {\\bf B}\\) is an entire function and \\({\\bf B}\\) is a vector. But the beauty of this is that we can take all derivatives of function \\({\\bf w}' {\\bf B}\\) at one time! These ideas can also be extended to higher\-order matrix functions. Suppose \\\[ {\\bf A} \= \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\] and \\\[ {\\bf w} \= \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \\] Let \\(f({\\bf w}) \= {\\bf w}' {\\bf A} {\\bf w}\\). If we write out \\(f({\\bf w})\\) in long form, we get \\\[ {\\bf w}' {\\bf A} {\\bf w} \= 3 w\_1^2 \+ 4 w\_2^2 \+ 2 (2\) w\_1 w\_2 \\] Take the derivative of \\(f({\\bf w})\\) with respect to \\(w\_1\\), and this is \\\[ \\frac{df}{dw\_1} \= 6 w\_1 \+ 4 w\_2 \\] Take the derivative of \\(f({\\bf w})\\) with respect to \\(w\_2\\), and this is \\\[ \\frac{df}{dw\_2} \= 8 w\_2 \+ 4 w\_1 \\] Now, we write out the following calculation in long form: \\\[ 2\\; {\\bf A} \\; {\\bf w} \= 2 \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 2 \& 4 \\end{array} \\right] \\left\[ \\begin{array}{c} w\_1 \\\\ w\_2 \\end{array} \\right] \= \\left\[ \\begin{array}{c} 6 w\_1 \+ 4 w\_2 \\\\ 8 w\_2 \+ 4 w\_1 \\end{array} \\right] \\] What do you notice about this solution when compared to the previous two answers? It is nothing but \\(\\frac{df}{dw}\\). Since \\(w \\in R^2\\), i.e., is of dimension 2, the derivative \\(\\frac{df}{dw}\\) will also be of that dimension. To see how this corresponds to scalar calculus, think of the function \\(f({\\bf w}) \= {\\bf w}' {\\bf A} {\\bf w}\\) as simply \\({\\bf A} w^2\\), where \\(w\\) is scalar. The derivative of this function with respect to \\(w\\) would be \\(2{\\bf A}w\\). And, this is the same as what we get when we look at a function of vectors, but with the caveat below. **Note**: This computation only works out because \\({\\bf A}\\) is symmetric. What should the expression be for the derivative of this vector function if \\({\\bf A}\\) is not symmetric but is a square matrix? It turns out that this is \\\[ \\frac{\\partial f}{\\partial {\\bf w}} \= {\\bf A}' {\\bf w} \+ {\\bf A} {\\bf w} \\neq 2 {\\bf A} {\\bf w} \\] Let’s try this and see. 
Suppose \\\[ {\\bf A} \= \\left\[ \\begin{array}{cc} 3 \& 2 \\\\ 1 \& 4 \\end{array} \\right] \\] You can check that the following is all true: \\\[\\begin{eqnarray\*} {\\bf w}' {\\bf A} {\\bf w} \&\=\& 3 w\_1^2 \+ 4 w\_2^2 \+ 3 w\_1 w\_2 \\\\ \\frac{\\partial f}{\\partial w\_1} \&\=\& 6 w\_1 \+ 3 w\_2\\\\ \\frac{\\partial f}{\\partial w\_2} \&\=\& 3 w\_1 \+ 8 w\_2 \\end{eqnarray\*}\\] and \\\[ {\\bf A}' {\\bf w} \+ {\\bf A} {\\bf w} \= \\left\[ \\begin{array}{c} 6w\_1\+3w\_2 \\\\ 3w\_1\+8w\_2 \\end{array} \\right] \\] which is correct, but note that the formula for symmetric \\({\\bf A}\\) is not! \\\[ 2{\\bf A} {\\bf w} \= \\left\[ \\begin{array}{c} 6w\_1\+4w\_2 \\\\ 2w\_1\+8w\_2 \\end{array} \\right] \\] ### 2\.12\.1 More exercises Try the following questions for practice. 1. What is the value of \\\[ {\\bf A}^{\-1} {\\bf A} {\\bf B} \\] Is this vector or scalar? 2. What is the final dimension of \\\[ {\\bf (w'B) (A A A^{\-1} B)} \\] 2\.13 Complex Numbers and Euler’s Equation ------------------------------------------ You can also handle complex numbers in R. The representation is x \+ y\*1i \\\[ e^{i \\pi}\+1\=0 \\] ``` print(pi) ``` ``` ## [1] 3.141593 ``` ``` exp(1i*pi)+1 ``` ``` ## [1] 0+1.224647e-16i ```
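As a small illustrative sketch (the particular number used below is arbitrary), base R also provides the usual complex\-number utilities such as **Re**, **Im**, **Mod**, **Arg**, and **Conj**:

```
#BASIC COMPLEX NUMBER OPERATIONS (illustrative example)
z = 3 + 4*1i            #a complex number in the x + y*1i representation
print(Re(z))            #real part: 3
print(Im(z))            #imaginary part: 4
print(Mod(z))           #modulus: sqrt(3^2 + 4^2) = 5
print(Arg(z))           #argument (angle in radians)
print(Conj(z))          #complex conjugate: 3 - 4i
print(Mod(exp(1i*pi)))  #Euler again: e^(i*pi) lies on the unit circle
```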
Chapter 3 Open Source: R Programming ==================================== > “Walking on water and developing software from a specification are easy if both are frozen” – Edward V. Berard 3\.1 Got R? ----------- In this chapter, we develop some expertise in using the R statistical package. See the manual [https://cran.r\-project.org/doc/manuals/r\-release/R\-intro.pdf](https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf) on the R web site. Work through Appendix A, at least the first page. Also see Grant Farnsworth’s document “Econometrics in R”: [https://cran.r\-project.org/doc/contrib/Farnsworth\-EconometricsInR.pdf](https://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf). There is also a great book that I personally find to be of very high quality, titled “The Art of R Programming” by Norman Matloff. You can easily install the R programming language, which is a very useful tool for Machine Learning. See: <http://en.wikipedia.org/wiki/Machine_learning> Get R from: [http://www.r\-project.org/](http://www.r-project.org/) (download and install it). If you want to use R in IDE mode, download RStudio: <http://www.rstudio.com>. Here is a quick test to make sure your installation of R is working along with graphics capabilities. ``` #PLOT HISTOGRAM FROM STANDARD NORMAL RANDOM NUMBERS x = rnorm(1000000) hist(x,50) grid(col="blue",lwd=2) ``` ### 3\.1\.1 System Commands If you want to directly access the system you can issue system commands as follows: ``` #SYSTEM COMMANDS #The following command will show the files in the directory which are notebooks. print(system("ls -lt")) #This command will not work in the notebook. ``` ``` ## [1] 0 ``` 3\.2 Loading Data ----------------- To get started, we need to grab some data. Go to Yahoo! Finance and download some historical data in an Excel spreadsheet, re\-sort it into chronological order, then save it as a CSV file. Read the file into R as follows. ``` #READ IN DATA FROM CSV FILE data = read.csv("DSTMAA_data/goog.csv",header=TRUE) print(head(data)) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2011-04-06 572.18 575.16 568.00 574.18 2668300 574.18 ## 2 2011-04-05 581.08 581.49 565.68 569.09 6047500 569.09 ## 3 2011-04-04 593.00 594.74 583.10 587.68 2054500 587.68 ## 4 2011-04-01 588.76 595.19 588.76 591.80 2613200 591.80 ## 5 2011-03-31 583.00 588.16 581.74 586.76 2029400 586.76 ## 6 2011-03-30 584.38 585.50 580.58 581.84 1422300 581.84 ``` ``` m = length(data) n = length(data[,1]) print(c("Number of columns = ",m)) ``` ``` ## [1] "Number of columns = " "7" ``` ``` print(c("Length of data series = ",n)) ``` ``` ## [1] "Length of data series = " "1671" ``` ``` #REVERSE ORDER THE DATA (Also get some practice with a for loop) for (j in 1:m) { data[,j] = rev(data[,j]) } print(head(data)) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2004-08-19 100.00 104.06 95.96 100.34 22351900 100.34 ## 2 2004-08-20 101.01 109.08 100.50 108.31 11428600 108.31 ## 3 2004-08-23 110.75 113.48 109.05 109.40 9137200 109.40 ## 4 2004-08-24 111.24 111.60 103.57 104.87 7631300 104.87 ## 5 2004-08-25 104.96 108.00 103.88 106.00 4598900 106.00 ## 6 2004-08-26 104.95 107.95 104.66 107.91 3551000 107.91 ``` ``` stkp = as.matrix(data[,7]) plot(stkp,type="l",col="blue") grid(lwd=2) ``` The **rev()** function applied inside the loop reverses the sequence of the data, if required, so that it runs in chronological order. 3\.3 Getting External Stock Data -------------------------------- We can do the same data set up exercise for financial data using the **quantmod** package.
*Note*: to install a package you can use the drop down menus on Windows and Mac operating systems, and use a package installer on Linux. Or issue the following command: ``` install.packages("quantmod") ``` Now we move on to using this package for one stock. ``` #USE THE QUANTMOD PACKAGE TO GET STOCK DATA library(quantmod) ``` ``` ## Loading required package: xts ``` ``` ## Loading required package: zoo ``` ``` ## ## Attaching package: 'zoo' ``` ``` ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric ``` ``` ## Loading required package: TTR ``` ``` ## Loading required package: methods ``` ``` ## Version 0.4-0 included new data defaults. See ?getSymbols. ``` ``` getSymbols("IBM") ``` ``` ## As of 0.4-0, 'getSymbols' uses env=parent.frame() and ## auto.assign=TRUE by default. ## ## This behavior will be phased out in 0.5-0 when the call will ## default to use auto.assign=FALSE. getOption("getSymbols.env") and ## getOptions("getSymbols.auto.assign") are now checked for alternate defaults ## ## This message is shown once per session and may be disabled by setting ## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for more details. ``` ``` ## [1] "IBM" ``` ``` chartSeries(IBM) ``` Let’s take a quick look at the data. ``` head(IBM) ``` ``` ## IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted ## 2007-01-03 97.18 98.40 96.26 97.27 9196800 77.73997 ## 2007-01-04 97.25 98.79 96.88 98.31 10524500 78.57116 ## 2007-01-05 97.60 97.95 96.91 97.42 7221300 77.85985 ## 2007-01-08 98.50 99.50 98.35 98.90 10340000 79.04270 ## 2007-01-09 99.08 100.33 99.07 100.07 11108200 79.97778 ## 2007-01-10 98.50 99.05 97.93 98.89 8744800 79.03470 ``` Extract the dates using pipes (we will see this in more detail later). ``` library(magrittr) dts = IBM %>% as.data.frame %>% row.names dts %>% head %>% print ``` ``` ## [1] "2007-01-03" "2007-01-04" "2007-01-05" "2007-01-08" "2007-01-09" ## [6] "2007-01-10" ``` ``` dts %>% length %>% print ``` ``` ## [1] 2574 ``` Plot the data. ``` stkp = as.matrix(IBM$IBM.Adjusted) rets = diff(log(stkp)) dts = as.Date(dts) plot(dts,stkp,type="l",col="blue",xlab="Years",ylab="Stock Price of IBM") grid(lwd=2) ``` Summarize the data. ``` #DESCRIPTIVE STATS summary(IBM) ``` ``` ## Index IBM.Open IBM.High IBM.Low ## Min. :2007-01-03 Min. : 72.74 Min. : 76.98 Min. : 69.5 ## 1st Qu.:2009-07-23 1st Qu.:122.59 1st Qu.:123.97 1st Qu.:121.5 ## Median :2012-02-09 Median :155.01 Median :156.29 Median :154.0 ## Mean :2012-02-11 Mean :151.07 Mean :152.32 Mean :150.0 ## 3rd Qu.:2014-09-02 3rd Qu.:183.52 3rd Qu.:184.77 3rd Qu.:182.4 ## Max. :2017-03-23 Max. :215.38 Max. :215.90 Max. :214.3 ## IBM.Close IBM.Volume IBM.Adjusted ## Min. : 71.74 Min. : 1027500 Min. : 59.16 ## 1st Qu.:122.70 1st Qu.: 3615825 1st Qu.:101.43 ## Median :155.38 Median : 4979650 Median :143.63 ## Mean :151.19 Mean : 5869075 Mean :134.12 ## 3rd Qu.:183.54 3rd Qu.: 7134350 3rd Qu.:166.28 ## Max. :215.80 Max. :30770700 Max. :192.08 ``` Compute risk (volatility). ``` #STOCK VOLATILITY sigma_daily = sd(rets) sigma_annual = sigma_daily*sqrt(252) print(sigma_annual) ``` ``` ## [1] 0.2234349 ``` ``` print(c("Sharpe ratio = ",mean(rets)*252/sigma_annual)) ``` ``` ## [1] "Sharpe ratio = " "0.355224144170446" ``` We may also use the package to get data for more than one stock. ``` library(quantmod) getSymbols(c("GOOG","AAPL","CSCO","IBM")) ``` ``` ## [1] "GOOG" "AAPL" "CSCO" "IBM" ``` We now go ahead and concatenate columns of data into one stock data set. 
``` goog = as.numeric(GOOG[,6]) aapl = as.numeric(AAPL[,6]) csco = as.numeric(CSCO[,6]) ibm = as.numeric(IBM[,6]) stkdata = cbind(goog,aapl,csco,ibm) dim(stkdata) ``` ``` ## [1] 2574 4 ``` Now, compute daily returns. This time, we do log returns in continuous\-time. The mean returns are: ``` n = dim(stkdata)[1] rets = log(stkdata[2:n,]/stkdata[1:(n-1),]) colMeans(rets) ``` ``` ## goog aapl csco ibm ## 0.0004869421 0.0009962588 0.0001426355 0.0003149582 ``` We can also compute the covariance matrix and correlation matrix: ``` cv = cov(rets) print(cv,2) ``` ``` ## goog aapl csco ibm ## goog 0.00034 0.00020 0.00017 0.00012 ## aapl 0.00020 0.00042 0.00019 0.00014 ## csco 0.00017 0.00019 0.00036 0.00015 ## ibm 0.00012 0.00014 0.00015 0.00020 ``` ``` cr = cor(rets) print(cr,4) ``` ``` ## goog aapl csco ibm ## goog 1.0000 0.5342 0.4984 0.4627 ## aapl 0.5342 1.0000 0.4840 0.4743 ## csco 0.4984 0.4840 1.0000 0.5711 ## ibm 0.4627 0.4743 0.5711 1.0000 ``` Notice the print command allows you to choose the number of significant digits (in this case 4\). Also, as expected, the four return time series are positively correlated with each other. 3\.4 Data Frames ---------------- Data frames are the most essential data structure in the R programming language. One may think of a data frame as simply a spreadsheet. In fact you can view it as such with the following command. ``` View(data) ``` However, data frames in R are much more than mere spreadsheets, which is why Excel will never trump R in the handling and analysis of data, except for very small applications on small spreadsheets. One may also think of data frames as databases, and there are many commands that we may use that are database\-like, such as joins, merges, filters, selections, etc. Indeed, packages such as **dplyr** and **data.table** are designed to make these operations seamless, and operate efficiently on big data, where the number of observations (rows) is of the order of hundreds of millions. Data frames can be addressed by column names, so that we do not need to remember column numbers specifically. If you want to find the names of all columns in a data frame, the **names** function does the trick. To address a chosen column, append the column name to the data frame using the “$” connector, as shown below. ``` #THIS IS A DATA FRAME AND CAN BE REFERENCED BY COLUMN NAMES print(names(data)) ``` ``` ## [1] "Date" "Open" "High" "Low" "Close" "Volume" ## [7] "Adj.Close" ``` ``` print(head(data$Close)) ``` ``` ## [1] 100.34 108.31 109.40 104.87 106.00 107.91 ``` The command printed out the first few observations in the column “Close”. All variables and functions in R are “objects”, and you are well\-served to know the object *type*, because objects have properties and methods apply differently to objects of various types. Therefore, to check an object type, use the **class** function. ``` class(data) ``` ``` ## [1] "data.frame" ``` To obtain descriptive statistics on the data variables in a data frame, the **summary** function is very handy. ``` #DESCRIPTIVE STATISTICS summary(data) ``` ``` ## Date Open High Low ## 2004-08-19: 1 Min. : 99.19 Min. :101.7 Min. : 95.96 ## 2004-08-20: 1 1st Qu.:353.79 1st Qu.:359.5 1st Qu.:344.25 ## 2004-08-23: 1 Median :457.57 Median :462.2 Median :452.42 ## 2004-08-24: 1 Mean :434.70 Mean :439.7 Mean :429.15 ## 2004-08-25: 1 3rd Qu.:532.62 3rd Qu.:537.2 3rd Qu.:526.15 ## 2004-08-26: 1 Max. :741.13 Max. :747.2 Max. :725.00 ## (Other) :1665 ## Close Volume Adj.Close ## Min. :100.0 Min. : 858700 Min.
:100.0 ## 1st Qu.:353.5 1st Qu.: 3200350 1st Qu.:353.5 ## Median :457.4 Median : 5028000 Median :457.4 ## Mean :434.4 Mean : 6286021 Mean :434.4 ## 3rd Qu.:531.6 3rd Qu.: 7703250 3rd Qu.:531.6 ## Max. :741.8 Max. :41116700 Max. :741.8 ## ``` Let’s take a given column of data and perform some transformations on it. We can also plot the data, with some arguments for look and feel, using the **plot** function. ``` #USING A PARTICULAR COLUMN stkp = data$Adj.Close dt = data$Date print(c("Length of stock series = ",length(stkp))) ``` ``` ## [1] "Length of stock series = " "1671" ``` ``` #Ln of differenced stk prices gives continuous returns rets = diff(log(stkp)) #diff() takes first differences print(c("Length of return series = ",length(rets))) ``` ``` ## [1] "Length of return series = " "1670" ``` ``` print(head(rets)) ``` ``` ## [1] 0.07643307 0.01001340 -0.04228940 0.01071761 0.01785845 -0.01644436 ``` ``` plot(rets,type="l",col="blue") ``` In case you want more descriptive statistics than provided by the **summary** function, then use an appropriate package. We may be interested in the higher\-order moments, and we use the **moments** package for this. ``` print(summary(rets)) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -0.1234000 -0.0092080 0.0007246 0.0010450 0.0117100 0.1823000 ``` Compute the daily and annualized standard deviation of returns. ``` r_sd = sd(rets) r_sd_annual = r_sd*sqrt(252) print(c(r_sd,r_sd_annual)) ``` ``` ## [1] 0.02266823 0.35984704 ``` ``` #What if we take the stdev of annualized returns? print(sd(rets*252)) ``` ``` ## [1] 5.712395 ``` ``` #Huh? print(sd(rets*252))/252 ``` ``` ## [1] 5.712395 ``` ``` ## [1] 0.02266823 ``` ``` print(sd(rets*252))/sqrt(252) ``` ``` ## [1] 5.712395 ``` ``` ## [1] 0.359847 ``` Notice the interesting use of the **print** function here. The variance is easy as well. ``` #Variance r_var = var(rets) r_var_annual = var(rets)*252 print(c(r_var,r_var_annual)) ``` ``` ## [1] 0.0005138488 0.1294898953 ``` 3\.5 Higher\-Order Moments -------------------------- Skewness and kurtosis are key moments that arise in all return distributions. We need a different library in R for these. We use the **moments** library. \\\[\\begin{equation} \\mbox{Skewness} \= \\frac{E\[(X\-\\mu)^3]}{\\sigma^{3}} \\end{equation}\\] Skewness means one tail is fatter than the other (asymmetry). Fatter right (left) tail implies positive (negative) skewness. \\\[\\begin{equation} \\mbox{Kurtosis} \= \\frac{E\[(X\-\\mu)^4]}{\\sigma^{4}} \\end{equation}\\] Kurtosis means both tails are fatter than with a normal distribution. ``` #HIGHER-ORDER MOMENTS library(moments) hist(rets,50) ``` ``` print(c("Skewness=",skewness(rets))) ``` ``` ## [1] "Skewness=" "0.487479193296115" ``` ``` print(c("Kurtosis=",kurtosis(rets))) ``` ``` ## [1] "Kurtosis=" "9.95591572103069" ``` For the normal distribution, skewness is zero, and kurtosis is 3\. Kurtosis minus three is denoted “excess kurtosis”. ``` skewness(rnorm(1000000)) ``` ``` ## [1] 0.001912514 ``` ``` kurtosis(rnorm(1000000)) ``` ``` ## [1] 2.995332 ``` What is the skewness and kurtosis of the stock index (S\&P500\)? 3\.6 Reading space delimited files ---------------------------------- Often the original data is in a space delimited file, not a comma separated one, in which case the **read.table** function is appropriate. 
``` #READ IN MORE DATA USING SPACE DELIMITED FILE data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) print(head(data)) ``` ``` ## X.DATE SUNW MSFT IBM CSCO AMZN ## 1 20010102 -0.087443948 0.000000000 -0.002205882 -0.129084975 -0.10843374 ## 2 20010103 0.297297299 0.105187319 0.115696386 0.240150094 0.26576576 ## 3 20010104 -0.060606062 0.010430248 -0.015191546 0.013615734 -0.11743772 ## 4 20010105 -0.096774191 0.014193549 0.008718981 -0.125373140 -0.06048387 ## 5 20010108 0.006696429 -0.003816794 -0.004654255 -0.002133106 0.02575107 ## 6 20010109 0.044345897 0.058748405 -0.010688043 0.015818726 0.09623431 ## mktrf smb hml rf ## 1 -0.0345 -0.0037 0.0209 0.00026 ## 2 0.0527 0.0097 -0.0493 0.00026 ## 3 -0.0121 0.0083 -0.0015 0.00026 ## 4 -0.0291 0.0027 0.0242 0.00026 ## 5 -0.0037 -0.0053 0.0129 0.00026 ## 6 0.0046 0.0044 -0.0026 0.00026 ``` ``` print(c("Length of data series = ",length(data$X.DATE))) ``` ``` ## [1] "Length of data series = " "1507" ``` We compute covariance and correlation in the data frame. ``` #COMPUTE COVARIANCE AND CORRELATION rets = as.data.frame(cbind(data$SUNW,data$MSFT,data$IBM,data$CSCO,data$AMZN)) names(rets) = c("SUNW","MSFT","IBM","CSCO","AMZN") print(cov(rets)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 0.0014380649 0.0003241903 0.0003104236 0.0007174466 0.0004594254 ## MSFT 0.0003241903 0.0003646160 0.0001968077 0.0003301491 0.0002678712 ## IBM 0.0003104236 0.0001968077 0.0002991120 0.0002827622 0.0002056656 ## CSCO 0.0007174466 0.0003301491 0.0002827622 0.0009502685 0.0005041975 ## AMZN 0.0004594254 0.0002678712 0.0002056656 0.0005041975 0.0016479809 ``` ``` print(cor(rets)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349 ## MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669 ## IBM 0.4733132 0.5959466 1.0000000 0.5303729 0.2929333 ## CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038 ## AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000 ``` 3\.7 Pipes with *magrittr* -------------------------- We may redo the example above using a very useful package called **magrittr** which mimics pipes in the Unix operating system. In the code below, we pipe the returns data into the correlation function and then “pipe” the output of that into the print function. This is analogous to issuing the command *print(cor(rets))*. ``` #Repeat the same process using pipes library(magrittr) rets %>% cor %>% print ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349 ## MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669 ## IBM 0.4733132 0.5959466 1.0000000 0.5303729 0.2929333 ## CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038 ## AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000 ``` 3\.8 Matrices ------------- > *Question*: What do you get if you cross a mountain\-climber with a mosquito? *Answer*: Can’t be done. You’ll be crossing a scaler with a vector. We will use matrices extensively in modeling, and here we examine the basic commands needed to create and manipulate matrices in R. We create a \\(4 \\times 3\\) matrix with random numbers as follows: ``` x = matrix(rnorm(12),4,3) print(x) ``` ``` ## [,1] [,2] [,3] ## [1,] -0.69430984 0.7897995 0.3524628 ## [2,] 1.08377771 0.7380866 0.4088171 ## [3,] -0.37520601 -1.3140870 2.0383614 ## [4,] -0.06818956 -0.6813911 0.1423782 ``` Transposing the matrix, notice that the dimensions are reversed. 
``` print(t(x),3) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] -0.694 1.084 -0.375 -0.0682 ## [2,] 0.790 0.738 -1.314 -0.6814 ## [3,] 0.352 0.409 2.038 0.1424 ``` Of course, it is easy to multiply matrices as long as they conform. By “conform” we mean that when multiplying one matrix by another, the number of columns of the matrix on the left must be equal to the number of rows of the matrix on the right. The resultant matrix that holds the answer of this computation will have the number of rows of the matrix on the left, and the number of columns of the matrix on the right. See the examples below: ``` print(t(x) %*% x,3) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.802 0.791 -0.576 ## [2,] 0.791 3.360 -2.195 ## [3,] -0.576 -2.195 4.467 ``` ``` print(x %*% t(x),3) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] 1.2301 -0.0254 -0.0589 -0.441 ## [2,] -0.0254 1.8865 -0.5432 -0.519 ## [3,] -0.0589 -0.5432 6.0225 1.211 ## [4,] -0.4406 -0.5186 1.2112 0.489 ``` Here is an example of non\-conforming matrices. ``` #CREATE A RANDOM MATRIX x = matrix(runif(12),4,3) print(x) ``` ``` ## [,1] [,2] [,3] ## [1,] 0.9508065 0.3802924 0.7496199 ## [2,] 0.2546922 0.2621244 0.9214230 ## [3,] 0.3521408 0.1808846 0.8633504 ## [4,] 0.9065475 0.2655430 0.7270766 ``` ``` print(x*2) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9016129 0.7605847 1.499240 ## [2,] 0.5093844 0.5242488 1.842846 ## [3,] 0.7042817 0.3617692 1.726701 ## [4,] 1.8130949 0.5310859 1.454153 ``` ``` print(x+x) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9016129 0.7605847 1.499240 ## [2,] 0.5093844 0.5242488 1.842846 ## [3,] 0.7042817 0.3617692 1.726701 ## [4,] 1.8130949 0.5310859 1.454153 ``` ``` print(t(x) %*% x) #THIS SHOULD BE 3x3 ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9147325 0.7327696 1.910573 ## [2,] 0.7327696 0.3165638 0.875839 ## [3,] 1.9105731 0.8758390 2.684965 ``` ``` #print(x %*% x) #SHOULD GIVE AN ERROR ``` Taking the inverse of the covariance matrix, we get: ``` cv_inv = solve(cv) print(cv_inv,3) ``` ``` ## goog aapl csco ibm ## goog 4670 -1430 -1099 -1011 ## aapl -1430 3766 -811 -1122 ## csco -1099 -811 4801 -2452 ## ibm -1011 -1122 -2452 8325 ``` Check that the inverse is really so! ``` print(cv_inv %*% cv,3) ``` ``` ## goog aapl csco ibm ## goog 1.00e+00 -2.78e-16 -1.94e-16 -2.78e-17 ## aapl -2.78e-17 1.00e+00 8.33e-17 -5.55e-17 ## csco 1.67e-16 1.11e-16 1.00e+00 1.11e-16 ## ibm 0.00e+00 -2.22e-16 -2.22e-16 1.00e+00 ``` It is, the result of multiplying the inverse matrix by the matrix itself results in the identity matrix. A covariance matrix should be positive definite. Why? What happens if it is not? Checking for this property is easy. ``` library(corpcor) is.positive.definite(cv) ``` ``` ## [1] TRUE ``` What happens if you compute pairwise covariances from differing lengths of data for each pair? Let’s take the returns data we have and find the inverse. ``` cv = cov(rets) print(round(cv,6)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 0.001438 0.000324 0.000310 0.000717 0.000459 ## MSFT 0.000324 0.000365 0.000197 0.000330 0.000268 ## IBM 0.000310 0.000197 0.000299 0.000283 0.000206 ## CSCO 0.000717 0.000330 0.000283 0.000950 0.000504 ## AMZN 0.000459 0.000268 0.000206 0.000504 0.001648 ``` ``` cv_inv = solve(cv) #TAKE THE INVERSE print(round(cv_inv %*% cv,2)) #CHECK THAT WE GET IDENTITY MATRIX ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1 0 0 0 0 ## MSFT 0 1 0 0 0 ## IBM 0 0 1 0 0 ## CSCO 0 0 0 1 0 ## AMZN 0 0 0 0 1 ``` ``` #CHECK IF MATRIX IS POSITIVE DEFINITE (why do we check this?) 
library(corpcor) is.positive.definite(cv) ``` ``` ## [1] TRUE ``` 3\.9 Root Finding ----------------- Finding roots of nonlinear equations is often required, and R has several packages for this purpose. Here we examine a few examples. Suppose we are given the function \\\[ (x^2 \+ y^2 \- 1\)^3 \- x^2 y^3 \= 0 \\] and for various values of \\(y\\) we wish to solve for the values of \\(x\\). The function we use is called **multiroot** and the use of the function is shown below. ``` #ROOT SOLVING IN R library(rootSolve) fn = function(x,y) { result = (x^2+y^2-1)^3 - x^2*y^3 } yy = 1 sol = multiroot(f=fn,start=1,maxiter=10000,rtol=0.000001,atol=0.000001,ctol=0.00001,y=yy) print(c("solution=",sol$root)) ``` ``` ## [1] "solution=" "1" ``` ``` check = fn(sol$root,yy) print(check) ``` ``` ## [1] 0 ``` Here we demonstrate the use of another function, **uniroot.all** (also from the **rootSolve** package), which finds all the roots of a function within a given interval. ``` fn = function(x) { result = 0.065*(x*(1-x))^0.5- 0.05 +0.05*x } sol = uniroot.all(f=fn,c(0,1)) print(sol) ``` ``` ## [1] 1.0000000 0.3717627 ``` ``` check = fn(sol) print(check) ``` ``` ## [1] 0.000000e+00 1.041576e-06 ``` 3\.10 Regression ---------------- In a *multivariate* linear regression, we have \\\[\\begin{equation} Y \= X \\cdot \\beta \+ e \\end{equation}\\] where \\(Y \\in R^{t \\times 1}\\), \\(X \\in R^{t \\times n}\\), and \\(\\beta \\in R^{n \\times 1}\\), and the regression solution is simply equal to \\(\\beta \= (X'X)^{\-1}(X'Y) \\in R^{n \\times 1}\\). To get this result we minimize the sum of squared errors. \\\[\\begin{eqnarray\*} \\min\_{\\beta} e'e \&\=\& (Y \- X \\cdot \\beta)' (Y\-X \\cdot \\beta) \\\\ \&\=\& Y'(Y\-X \\cdot \\beta) \- (X \\beta)'\\cdot (Y\-X \\cdot \\beta) \\\\ \&\=\& Y'Y \- Y' X \\beta \- (\\beta' X') Y \+ \\beta' X'X \\beta \\\\ \&\=\& Y'Y \- Y' X \\beta \- Y' X \\beta \+ \\beta' X'X \\beta \\\\ \&\=\& Y'Y \- 2Y' X \\beta \+ \\beta' X'X \\beta \\end{eqnarray\*}\\] Note that this expression is a scalar. Differentiating w.r.t. \\(\\beta'\\) gives the following first\-order condition (f.o.c.): \\\[\\begin{eqnarray\*} \- 2 X'Y \+ 2 X'X \\beta\&\=\& {\\bf 0} \\\\ \& \\Longrightarrow \& \\\\ \\beta \&\=\& (X'X)^{\-1} (X'Y) \\end{eqnarray\*}\\] There is another useful expression for each individual \\(\\beta\_i \= \\frac{Cov(X\_i,Y)}{Var(X\_i)}\\). You should compute this and check that each coefficient in the regression is indeed equal to the \\(\\beta\_i\\) from this calculation. *Example*: We run a stock return regression to exemplify the algebra above.
``` data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) #THESE DATA ARE RETURNS print(names(data)) #THIS IS A DATA FRAME (important construct in R) ``` ``` ## [1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" ## [8] "smb" "hml" "rf" ``` ``` head(data) ``` ``` ## X.DATE SUNW MSFT IBM CSCO AMZN ## 1 20010102 -0.087443948 0.000000000 -0.002205882 -0.129084975 -0.10843374 ## 2 20010103 0.297297299 0.105187319 0.115696386 0.240150094 0.26576576 ## 3 20010104 -0.060606062 0.010430248 -0.015191546 0.013615734 -0.11743772 ## 4 20010105 -0.096774191 0.014193549 0.008718981 -0.125373140 -0.06048387 ## 5 20010108 0.006696429 -0.003816794 -0.004654255 -0.002133106 0.02575107 ## 6 20010109 0.044345897 0.058748405 -0.010688043 0.015818726 0.09623431 ## mktrf smb hml rf ## 1 -0.0345 -0.0037 0.0209 0.00026 ## 2 0.0527 0.0097 -0.0493 0.00026 ## 3 -0.0121 0.0083 -0.0015 0.00026 ## 4 -0.0291 0.0027 0.0242 0.00026 ## 5 -0.0037 -0.0053 0.0129 0.00026 ## 6 0.0046 0.0044 -0.0026 0.00026 ``` ``` #RUN A MULTIVARIATE REGRESSION ON STOCK DATA Y = as.matrix(data$SUNW) X = as.matrix(data[,3:6]) res = lm(Y~X) summary(res) ``` ``` ## ## Call: ## lm(formula = Y ~ X) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.233758 -0.014921 -0.000711 0.014214 0.178859 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.0007256 0.0007512 -0.966 0.33422 ## XMSFT 0.1382312 0.0529045 2.613 0.00907 ** ## XIBM 0.3791500 0.0566232 6.696 3.02e-11 *** ## XCSCO 0.5769097 0.0317799 18.153 < 2e-16 *** ## XAMZN 0.0324899 0.0204802 1.586 0.11286 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.02914 on 1502 degrees of freedom ## Multiple R-squared: 0.4112, Adjusted R-squared: 0.4096 ## F-statistic: 262.2 on 4 and 1502 DF, p-value: < 2.2e-16 ``` Now we can cross\-check the regression using the algebraic solution for the regression coefficients. ``` #CHECK THE REGRESSION n = length(Y) X = cbind(matrix(1,n,1),X) b = solve(t(X) %*% X) %*% (t(X) %*% Y) print(b) ``` ``` ## [,1] ## -0.0007256342 ## MSFT 0.1382312148 ## IBM 0.3791500328 ## CSCO 0.5769097262 ## AMZN 0.0324898716 ``` *Example*: As a second example, we take data on basketball teams in a cross\-section, and try to explain their performance using team statistics. Here is a simple regression run on some data from the 2005\-06 NCAA basketball season for the March madness stats. The data is stored in a space\-delimited file called **ncaa.txt**. We use the metric of performance to be the number of games played, with more successful teams playing more playoff games, and then try to see what variables explain it best. We apply a simple linear regression that uses the R command **lm**, which stands for “linear model”. 
``` #REGRESSION ON NCAA BASKETBALL PLAYOFF DATA ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) print(head(ncaa)) ``` ``` ## No NAME GMS PTS REB AST TO A.T STL BLK PF FG FT ## 1 1 NorthCarolina 6 84.2 41.5 17.8 12.8 1.39 6.7 3.8 16.7 0.514 0.664 ## 2 2 Illinois 6 74.5 34.0 19.0 10.2 1.87 8.0 1.7 16.5 0.457 0.753 ## 3 3 Louisville 5 77.4 35.4 13.6 11.0 1.24 5.4 4.2 16.6 0.479 0.702 ## 4 4 MichiganState 5 80.8 37.8 13.0 12.6 1.03 8.4 2.4 19.8 0.445 0.783 ## 5 5 Arizona 4 79.8 35.0 15.8 14.5 1.09 6.0 6.5 13.3 0.542 0.759 ## 6 6 Kentucky 4 72.8 32.3 12.8 13.5 0.94 7.3 3.5 19.5 0.510 0.663 ## X3P ## 1 0.417 ## 2 0.361 ## 3 0.376 ## 4 0.329 ## 5 0.397 ## 6 0.400 ``` ``` y = ncaa[3] y = as.matrix(y) x = ncaa[4:14] x = as.matrix(x) fm = lm(y~x) res = summary(fm) res ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` An alternative specification of regression using data frames is somewhat easier to implement. ``` #CREATING DATA FRAMES ncaa_data_frame = data.frame(y=as.matrix(ncaa[3]),x=as.matrix(ncaa[4:14])) fm = lm(y~x,data=ncaa_data_frame) summary(fm) ``` ``` ## ## Call: ## lm(formula = y ~ x, data = ncaa_data_frame) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` 3\.11 Parts of a regression --------------------------- The linear regression is fit by minimizing the sum of squared errors, but the same concept may also be applied to a nonlinear regression as well. So we might have: \\\[ y\_i \= f(x\_{i1},x\_{i2},...,x\_{ip}) \+ \\epsilon\_i, \\quad i\=1,2,...,n \\] which describes a data set that has \\(n\\) rows and \\(p\\) columns, which are the standard variables for the number of rows and columns. Note that the error term (residual) is \\(\\epsilon\_i\\). 
The regression will have \\((p\+1\)\\) coefficients, i.e., \\({\\bf b} \= \\{b\_0,b\_1,b\_2,...,b\_p\\}\\), and \\({\\bf x}\_i \= \\{x\_{i1},x\_{i2},...,x\_{ip}\\}\\). The model is fit by minimizing the sum of squared residuals, i.e., \\\[ \\min\_{\\bf b} \\sum\_{i\=1}^n \\epsilon\_i^2 \\] We define the following: * Sum of squared residuals (errors): \\(SSE \= \\sum\_{i\=1}^n \\epsilon\_i^2\\), with degrees of freedom \\(DFE\= n\-p\\). * Total sum of squares: \\(SST \= \\sum\_{i\=1}^n (y\_i \- {\\bar y})^2\\), where \\({\\bar y}\\) is the mean of \\(y\\). Degrees of freedom are \\(DFT \= n\-1\\). * Regression (model) sum of squares: \\(SSM \= \\sum\_{i\=1}^n (f({\\bf x}\_i) \- {\\bar y})^2\\); with degrees of freedom \\(DFM \= p\-1\\). * Note that \\(SST \= SSM \+ SSE\\). * Check that \\(DFT \= DFM \+ DFE\\). The \\(R\\)\-squared of the regression is \\\[ R^2 \= \\left( 1 \- \\frac{SSE}{SST} \\right) \\quad \\in (0,1\) \\] The \\(F\\)\-statistic in the regression is what tells us if the RHS variables comprise a model that explains the LHS variable sufficiently. Do the RHS variables offer more of an explanation than simply assuming that the mean value of \\(y\\) is the best prediction? The null hypothesis we care about is * \\(H\_0\\): \\(b\_k \= 0, k\=0,1,2,...,p\\), versus an alternate hypothesis of * \\(H\_1\\): \\(b\_k \\neq 0\\) for at least one \\(k\\). To test this the \\(F\\)\-statistic is computed as the following ratio: \\\[ F \= \\frac{\\mbox{Explained variance}}{\\mbox{Unexplained variance}} \= \\frac{SSM/DFM}{SSE/DFE} \= \\frac{MSM}{MSE} \\] where \\(MSM\\) is the mean squared model error, and \\(MSE\\) is the mean squared error. Now let’s relate this to \\(R^2\\). First, we find an approximation for the \\(R^2\\). \\\[ R^2 \= 1 \- \\frac{SSE}{SST} \\\\ \= 1 \- \\frac{SSE/n}{SST/n} \\\\ \\approx 1 \- \\frac{MSE}{MST} \\\\ \= \\frac{MST\-MSE}{MST} \\\\ \= \\frac{MSM}{MST} \\] The \\(R^2\\) of a regression that has no RHS variables is zero, and of course \\(MSM\=0\\). In such a regression \\(MST \= MSE\\). So the expression above becomes: \\\[ R^2\_{p\=0} \= \\frac{MSM}{MST} \= 0 \\] We can also see, with some manipulation, that \\(R^2\\) is related to \\(F\\) (approximately, assuming large \\(n\\)). \\\[ R^2 \+ \\frac{1}{F\+1}\=1 \\quad \\mbox{or} \\quad 1\+F \= \\frac{1}{1\-R^2} \\] Check to see that when \\(R^2\=0\\), then \\(F\=0\\). We can further check the formulae with a numerical example, by creating some sample data. ``` x = matrix(runif(300),100,3) y = 5 + 4*x[,1] + 3*x[,2] + 2*x[,3] + rnorm(100) y = as.matrix(y) res = lm(y~x) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.7194 -0.5876 0.0410 0.7223 2.5900 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.0819 0.3141 16.178 < 2e-16 *** ## x1 4.3444 0.3753 11.575 < 2e-16 *** ## x2 2.8944 0.3335 8.679 1.02e-13 *** ## x3 1.8143 0.3397 5.341 6.20e-07 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1 ## ## Residual standard error: 1.005 on 96 degrees of freedom ## Multiple R-squared: 0.7011, Adjusted R-squared: 0.6918 ## F-statistic: 75.06 on 3 and 96 DF, p-value: < 2.2e-16 ``` ``` e = res$residuals SSE = sum(e^2) SST = sum((y-mean(y))^2) SSM = SST - SSE print(c(SSE,SSM,SST)) ``` ``` ## [1] 97.02772 227.60388 324.63160 ``` ``` R2 = 1 - SSE/SST print(R2) ``` ``` ## [1] 0.7011144 ``` ``` n = dim(x)[1] p = dim(x)[2]+1 MSE = SSE/(n-p) MSM = SSM/(p-1) MST = SST/(n-1) print(c(n,p,MSE,MSM,MST)) ``` ``` ## [1] 100.000000 4.000000 1.010705 75.867960 3.279107 ``` ``` Fstat = MSM/MSE print(Fstat) ``` ``` ## [1] 75.06436 ``` We can also compare two regressions, say one with 5 RHS variables versus one that has only 3 of those five, to see whether the additional two variables have any extra value. The ratio of the two \\(MSM\\) values from the first and second regressions is also an \\(F\\)\-statistic that may be tested to see whether it is large enough. Note that if the residuals \\(\\epsilon\\) are assumed to be normally distributed, then squared residuals are distributed as per the chi\-square (\\(\\chi^2\\)) distribution. Further, the sum of residuals is normally distributed and the sum of squared residuals is distributed \\(\\chi^2\\). And finally, the ratio of two \\(\\chi^2\\) variables is \\(F\\)\-distributed, which is why we call it the \\(F\\)\-statistic: it is the ratio of two sums of squared errors. 3\.12 Heteroskedasticity ------------------------ Simple linear regression assumes that the standard error of the residuals is the same for all observations. Many regressions suffer from the failure of this condition. The word for this is “heteroskedastic” errors. “Hetero” means different, and “skedastic” means dependent on type. We can first test for the presence of heteroskedasticity using a standard Breusch\-Pagan test available in R. This resides in the **lmtest** package, which is loaded before running the test. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y = as.matrix(ncaa[3]) x = as.matrix(ncaa[4:14]) result = lm(y~x) library(lmtest) bptest(result) ``` ``` ## ## studentized Breusch-Pagan test ## ## data: result ## BP = 15.538, df = 11, p-value = 0.1592 ``` We can see that there is very little evidence of heteroskedasticity in the standard errors as the \\(p\\)\-value is not small. However, let’s go ahead and correct the t\-statistics for heteroskedasticity as follows, using the **hccm** function. The name **hccm** stands for heteroskedasticity\-corrected covariance matrix. ``` wuns = matrix(1,64,1) z = cbind(wuns,x) b = solve(t(z) %*% z) %*% (t(z) %*% y) result = lm(y~x) library(car) vb = hccm(result) stdb = sqrt(diag(vb)) tstats = b/stdb print(tstats) ``` ``` ## GMS ## -2.68006069 ## PTS -0.38212818 ## REB 2.38342637 ## AST -0.40848721 ## TO -0.28709450 ## A.T 0.65632053 ## STL 2.13627108 ## BLK 0.09548606 ## PF -0.68036944 ## FG 3.52193532 ## FT 2.35677255 ## X3P 1.23897636 ``` Compare these to the t\-statistics in the original model. ``` summary(result) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std.
Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` It is apparent that when corrected for heteroskedasticity, the t\-statistics in the regression are lower, and also render some of the previously significant coefficients insignificant. 3\.13 Auto\-Regressive Models ----------------------------- When data is autocorrelated, i.e., has dependence in time, not accounting for this issue results in unnecessarily high statistical significance (in terms of inflated t\-statistics). Intuitively, this is because observations are treated as independent when actually they are correlated in time, and therefore, the true number of observations is effectively less. Consider a finance application. In efficient markets, the correlation of stock returns from one period to the next should be close to zero. We use the returns on Google stock as an example. First, read in the data. ``` data = read.csv("DSTMAA_data/goog.csv",header=TRUE) head(data) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2011-04-06 572.18 575.16 568.00 574.18 2668300 574.18 ## 2 2011-04-05 581.08 581.49 565.68 569.09 6047500 569.09 ## 3 2011-04-04 593.00 594.74 583.10 587.68 2054500 587.68 ## 4 2011-04-01 588.76 595.19 588.76 591.80 2613200 591.80 ## 5 2011-03-31 583.00 588.16 581.74 586.76 2029400 586.76 ## 6 2011-03-30 584.38 585.50 580.58 581.84 1422300 581.84 ``` Next, create the returns time series. ``` n = length(data$Close) stkp = rev(data$Adj.Close) rets = as.matrix(log(stkp[2:n]/stkp[1:(n-1)])) n = length(rets) plot(rets,type="l",col="blue") ``` ``` print(n) ``` ``` ## [1] 1670 ``` Examine the autocorrelation. This is one lag, also known as first\-order autocorrelation. ``` cor(rets[1:(n-1)],rets[2:n]) ``` ``` ## [1] 0.007215026 ``` Run the Durbin\-Watson test for autocorrelation. Here we test for up to 10 lags. ``` library(car) res = lm(rets[2:n]~rets[1:(n-1)]) durbinWatsonTest(res,max.lag=10) ``` ``` ## lag Autocorrelation D-W Statistic p-value ## 1 -0.0006436855 2.001125 0.950 ## 2 -0.0109757002 2.018298 0.696 ## 3 -0.0002853870 1.996723 0.982 ## 4 0.0252586312 1.945238 0.324 ## 5 0.0188824874 1.957564 0.444 ## 6 -0.0555810090 2.104550 0.018 ## 7 0.0020507562 1.989158 0.986 ## 8 0.0746953706 1.843219 0.004 ## 9 -0.0375308940 2.067304 0.108 ## 10 0.0085641680 1.974756 0.798 ## Alternative hypothesis: rho[lag] != 0 ``` There is no evidence of auto\-correlation when the DW statistic is close to 2\. If the DW\-statistic is greater than 2 it indicates negative autocorrelation, and if it is less than 2, it indicates positive autocorrelation. If there is autocorrelation we can correct for it as follows. Let’s take a different data set. 
``` md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) names(md) ``` ``` ## [1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" ## [8] "smb" "hml" "rf" ``` Test for autocorrelation. ``` y = as.matrix(md[2]) x = as.matrix(md[7:9]) rf = as.matrix(md[10]) y = y-rf library(car) results = lm(y ~ x) print(summary(results)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.213676 -0.014356 -0.000733 0.014462 0.191089 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.000197 0.000785 -0.251 0.8019 ## xmktrf 1.657968 0.085816 19.320 <2e-16 *** ## xsmb 0.299735 0.146973 2.039 0.0416 * ## xhml -1.544633 0.176049 -8.774 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.03028 on 1503 degrees of freedom ## Multiple R-squared: 0.3636, Adjusted R-squared: 0.3623 ## F-statistic: 286.3 on 3 and 1503 DF, p-value: < 2.2e-16 ``` ``` durbinWatsonTest(results,max.lag=6) ``` ``` ## lag Autocorrelation D-W Statistic p-value ## 1 -0.07231926 2.144549 0.008 ## 2 -0.04595240 2.079356 0.122 ## 3 0.02958136 1.926791 0.180 ## 4 -0.01608143 2.017980 0.654 ## 5 -0.02360625 2.032176 0.474 ## 6 -0.01874952 2.021745 0.594 ## Alternative hypothesis: rho[lag] != 0 ``` Now make the correction to the t\-statistics. We use the procedure formulated by Newey and West ([1987](#ref-10.2307/1913610)). This correction is part of the **sandwich** package. ``` #CORRECT FOR AUTOCORRELATION library(sandwich) b = results$coefficients print(b) ``` ``` ## (Intercept) xmktrf xsmb xhml ## -0.0001970164 1.6579682191 0.2997353765 -1.5446330690 ``` ``` vb = NeweyWest(results,lag=1) stdb = sqrt(diag(vb)) tstats = b/stdb print(tstats) ``` ``` ## (Intercept) xmktrf xsmb xhml ## -0.2633665 15.5779184 1.8300340 -6.1036120 ``` Compare these to the stats we had earlier. Notice how they have come down after correction for AR. Note that there are several steps needed to correct for autocorrelation, and it might have been nice to roll one’s own function for this. (I leave this as an exercise for you.)
Figure 3\.1: From Lo and MacKinlay (1999\)
For fun, let’s look at the autocorrelation in stock market indexes, shown in Figure [3\.1](IntroductoryRprogamming.html#fig:ARequityindexes). The following graphic is taken from the book “A Non\-Random Walk Down Wall Street” by A. W. Lo and MacKinlay ([1999](#ref-10.2307/j.ctt7tccx)). Is the autocorrelation higher for equally\-weighted or value\-weighted indexes? Why? 3\.14 Maximum Likelihood ------------------------ Assume that the stock returns \\(R(t)\\) mentioned above have a normal distribution with mean \\(\\mu\\) and variance \\(\\sigma^2\\) per year. MLE requires finding the parameters \\(\\{\\mu,\\sigma\\}\\) that maximize the likelihood of seeing the empirical sequence of returns \\(R(t)\\). A normal probability function is required, and we have one above for \\(R(t)\\), which is assumed to be i.i.d. (independent and identically distributed). First, a quick recap of the normal distribution. If \\(x \\sim N(\\mu,\\sigma^2\)\\), then \\\[\\begin{equation} \\mbox{density function:} \\quad f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\end{equation}\\] \\\[\\begin{equation} N(x) \= 1 \- N(\-x) \\end{equation}\\] \\\[\\begin{equation} F(x) \= \\int\_{\-\\infty}^x f(u) du \\end{equation}\\] The standard normal distribution is \\(x \\sim N(0,1\)\\).
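As a quick numerical check of these expressions, here is a small sketch using base R’s **dnorm** and **pnorm** functions for the standard normal; the evaluation point \\(x \= 1.5\\) is arbitrary.

```
#CHECK THE NORMAL DENSITY AND THE SYMMETRY PROPERTY FOR N(0,1)
x = 1.5                                  #an arbitrary evaluation point
f_manual = 1/sqrt(2*pi)*exp(-0.5*x^2)    #density formula with mu=0, sigma=1
print(c(f_manual, dnorm(x)))             #the two values should match
print(pnorm(x) + pnorm(-x))              #N(x) = 1 - N(-x), so this sum is 1
```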
For the standard normal distribution: \\(F(0\) \= \\frac{1}{2}\\). Noting that when returns are i.i.d., the mean return and the variance of returns scale with time, and therefore, the standard deviation of returns scales with the square\-root of time. If the time intervals between return observations is \\(h\\) years, then the probability density of \\(R(t)\\) is normal with the following equation: \\\[\\begin{equation} f\[R(t)] \= \\frac{1}{\\sqrt{2 \\pi \\sigma^2 h}} \\cdot \\exp\\left\[ \-\\frac{1}{2} \\cdot \\frac{(R(t)\-\\alpha)^2}{\\sigma^2 h} \\right] \\end{equation}\\] where \\(\\alpha \= \\left(\\mu\-\\frac{1}{2}\\sigma^2 \\right) h\\). In our case, we have daily data and \\(h\=1/252\\). For periods \\(t\=1,2,\\ldots,T\\) the likelihood of the entire series is \\\[\\begin{equation} \\prod\_{t\=1}^T f\[R(t)] \\end{equation}\\] It is easier (computationally) to maximize \\\[\\begin{equation} \\max\_{\\mu,\\sigma} \\; {\\cal L} \\equiv \\sum\_{t\=1}^T \\ln f\[R(t)] \\end{equation}\\] known as the log\-likelihood. This is easily done in R. First we create the log\-likelihood function, so you can see how functions are defined in R. Note that \\\[\\begin{equation} \\ln \\; f\[R(t)] \= \-\\ln \\sqrt{2 \\pi \\sigma^2 h} \- \\frac{\[R(t)\-\\alpha]^2}{2 \\sigma^2 h} \\end{equation}\\] We have used variable “sigsq” in function “LL” for \\(\\sigma^2 h\\). ``` #LOG-LIKELIHOOD FUNCTION LL = function(params,rets) { alpha = params[1]; sigsq = params[2] logf = -log(sqrt(2*pi*sigsq)) - (rets-alpha)^2/(2*sigsq) LL = -sum(logf) } ``` We now read in the data and maximize the log\-likelihood to find the required parameters of the return distribution. ``` #READ DATA data = read.csv("DSTMAA_data/goog.csv",header=TRUE) stkp = data$Adj.Close #Ln of differenced stk prices gives continuous returns rets = diff(log(stkp)) #diff() takes first differences print(c("mean return = ",mean(rets),mean(rets)*252)) ``` ``` ## [1] "mean return = " "-0.00104453803410475" "-0.263223584594396" ``` ``` print(c("stdev returns = ",sd(rets),sd(rets)*sqrt(252))) ``` ``` ## [1] "stdev returns = " "0.0226682330750677" "0.359847044267268" ``` ``` #Create starting guess for parameters params = c(0.001,0.001) res = nlm(LL,params,rets) print(res) ``` ``` ## $minimum ## [1] -3954.813 ## ## $estimate ## [1] -0.0010450602 0.0005130408 ## ## $gradient ## [1] -0.07215158 -1.93982032 ## ## $code ## [1] 2 ## ## $iterations ## [1] 8 ``` Let’s annualize the parameters and see what they are, comparing them to the raw mean and variance of returns. ``` h = 1/252 alpha = res$estimate[1] sigsq = res$estimate[2] print(c("alpha=",alpha)) ``` ``` ## [1] "alpha=" "-0.00104506019968994" ``` ``` print(c("sigsq=",sigsq)) ``` ``` ## [1] "sigsq=" "0.000513040809008682" ``` ``` sigma = sqrt(sigsq/h) mu = alpha/h + 0.5*sigma^2 print(c("mu=",mu)) ``` ``` ## [1] "mu=" "-0.19871202838677" ``` ``` print(c("sigma=",sigma)) ``` ``` ## [1] "sigma=" "0.359564019154014" ``` ``` print(mean(rets*252)) ``` ``` ## [1] -0.2632236 ``` ``` print(sd(rets)*sqrt(252)) ``` ``` ## [1] 0.359847 ``` As we can see, the parameters under the normal distribution are quite close to the raw moments. 3\.15 Logit ----------- We have seen how to fit a linear regression model in R. In that model we placed no restrictions on the dependent variable. However, when the LHS variable in a regression is categorical and binary, i.e., takes the value 1 or 0, then a logit regression is more apt. 
This regression fits a model that will always return a fitted value of the dependent variable that lies between \\((0,1\)\\). This class of specifications covers what are known as *limited dependent variables* models. In this introduction to R, we will simply run a few examples of these models, leaving a more detailed analysis for later in this book. *Example*: For the NCAA data, there are 64 observations (teams) ordered from best to worst. We take the top 32 teams and make their dependent variable 1 (above median teams), and that of the bottom 32 teams zero (below median). Our goal is to fit a regression model that returns each team’s predicted probability of being in the upper half of the rankings. First, we create the dependent variable. ``` y = c(rep(1,32),rep(0,32)) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[,4:14]) y = as.matrix(y) ``` We use the function **glm** for this task. Running the model is pretty easy as follows. ``` h = glm(y~x, family=binomial(link="logit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.44779 (df=12) ``` ``` print(summary(h)) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.80174 -0.40502 -0.00238 0.37584 2.31767 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -45.83315 14.97564 -3.061 0.00221 ** ## xPTS -0.06127 0.09549 -0.642 0.52108 ## xREB 0.49037 0.18089 2.711 0.00671 ** ## xAST 0.16422 0.26804 0.613 0.54010 ## xTO -0.38405 0.23434 -1.639 0.10124 ## xA.T 1.56351 3.17091 0.493 0.62196 ## xSTL 0.78360 0.32605 2.403 0.01625 * ## xBLK 0.07867 0.23482 0.335 0.73761 ## xPF 0.02602 0.13644 0.191 0.84874 ## xFG 46.21374 17.33685 2.666 0.00768 ** ## xFT 10.72992 4.47729 2.397 0.01655 * ## xX3P 5.41985 5.77966 0.938 0.34838 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.896 on 52 degrees of freedom ## AIC: 66.896 ## ## Number of Fisher Scoring iterations: 6 ``` Thus, we see that the best variables that separate upper\-half teams from lower\-half teams are the number of rebounds and the field goal percentage. To a lesser extent, free throw percentage and steals also provide some explanatory power. The logit regression is specified as follows: \\\[\\begin{eqnarray\*} z \&\=\& \\frac{e^y}{1\+e^y}\\\\ y \&\=\& b\_0 \+ b\_1 x\_1 \+ b\_2 x\_2 \+ \\ldots \+ b\_k x\_k \\end{eqnarray\*}\\] The original data take values \\(z \\in \\{0,1\\}\\). The range of values of \\(y\\) is \\((\-\\infty,\+\\infty)\\). And as required, the fitted \\(z \\in (0,1\)\\). The variables \\(x\\) are the RHS variables. The fitting is done using MLE. Suppose we ran this with a simple linear regression. ``` h = lm(y~x) summary(h) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.65982 -0.26830 0.03183 0.24712 0.83049 ## ## Coefficients: ## Estimate Std.
Error t value Pr(>|t|) ## (Intercept) -4.114185 1.174308 -3.503 0.000953 *** ## xPTS -0.005569 0.010263 -0.543 0.589709 ## xREB 0.046922 0.015003 3.128 0.002886 ** ## xAST 0.015391 0.036990 0.416 0.679055 ## xTO -0.046479 0.028988 -1.603 0.114905 ## xA.T 0.103216 0.450763 0.229 0.819782 ## xSTL 0.063309 0.028015 2.260 0.028050 * ## xBLK 0.023088 0.030474 0.758 0.452082 ## xPF 0.011492 0.018056 0.636 0.527253 ## xFG 4.842722 1.616465 2.996 0.004186 ** ## xFT 1.162177 0.454178 2.559 0.013452 * ## xX3P 0.476283 0.712184 0.669 0.506604 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.3905 on 52 degrees of freedom ## Multiple R-squared: 0.5043, Adjusted R-squared: 0.3995 ## F-statistic: 4.81 on 11 and 52 DF, p-value: 4.514e-05 ``` We get the same variables again showing up as significant. 3\.16 Probit ------------ We can redo the same regression in the logit using a probit instead. A probit is identical in spirit to the logit regression, except that the function that is used is \\\[\\begin{eqnarray\*} z \&\=\& \\Phi(y)\\\\ y \&\=\& b\_0 \+ b\_1 x\_1 \+ b\_2 x\_2 \+ \\ldots \+ b\_k x\_k \\end{eqnarray\*}\\] where \\(\\Phi(\\cdot)\\) is the cumulative normal probability function. It is implemented in R as follows. ``` h = glm(y~x, family=binomial(link="probit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.27924 (df=12) ``` ``` print(summary(h)) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "probit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.76353 -0.41212 -0.00031 0.34996 2.24568 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -26.28219 8.09608 -3.246 0.00117 ** ## xPTS -0.03463 0.05385 -0.643 0.52020 ## xREB 0.28493 0.09939 2.867 0.00415 ** ## xAST 0.10894 0.15735 0.692 0.48874 ## xTO -0.23742 0.13642 -1.740 0.08180 . ## xA.T 0.71485 1.86701 0.383 0.70181 ## xSTL 0.45963 0.18414 2.496 0.01256 * ## xBLK 0.03029 0.13631 0.222 0.82415 ## xPF 0.01041 0.07907 0.132 0.89529 ## xFG 26.58461 9.38711 2.832 0.00463 ** ## xFT 6.28278 2.51452 2.499 0.01247 * ## xX3P 3.15824 3.37841 0.935 0.34988 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.558 on 52 degrees of freedom ## AIC: 66.558 ## ## Number of Fisher Scoring iterations: 8 ``` The results confirm those obtained from the linear regression and logit regression. 3\.17 ARCH and GARCH -------------------- GARCH stands for “Generalized Auto\-Regressive Conditional Heteroskedasticity”. Engle ([1982](#ref-10.2307/1912773)) invented ARCH (for which he got the Nobel prize) and this was extended by Bollerslev ([1986](#ref-RePEc:eee:econom:v:31:y:1986:i:3:p:307-327)) to GARCH. ARCH models are based on the idea that volatility tends to cluster, i.e., volatility for period \\(t\\), is auto\-correlated with volatility from period \\((t\-1\)\\), or more preceding periods. If we had a time series of stock returns following a random walk, we might model it as follows \\\[\\begin{equation} r\_t \= \\mu \+ e\_t, \\quad e\_t \\sim N(0,\\sigma\_t^2\) \\end{equation}\\] Returns have constant mean \\(\\mu\\) and time\-varying variance \\(\\sigma\_t^2\\). If the variance were stationary then \\(\\sigma\_t^2\\) would be constant. But under GARCH it is auto\-correlated with previous variances. 
Hence, we have \\\[\\begin{equation} \\sigma\_{t}^2 \= \\beta\_0 \+ \\sum\_{j\=1}^p \\beta\_{1j} \\sigma\_{t\-j}^2 \+ \\sum\_{k\=1}^q \\beta\_{2k} e\_{t\-k}^2 \\end{equation}\\] So current variance (\\(\\sigma\_t^2\\)) depends on past squared shocks (\\(e\_{t\-k}^2\\)) and past variances (\\(\\sigma\_{t\-j}^2\\)). The number of lags of past variance is \\(p\\), and that of lagged shocks is \\(q\\). The model is thus known as a GARCH\\((p,q)\\) model. For the model to be stationary, the sum of all the \\(\\beta\\) terms should be less than 1\. In GARCH, stock returns are conditionally normal, and independent, but not identically distributed because the variance changes over time. Since at every time \\(t\\), we know the conditional distribution of returns, because \\(\\sigma\_t\\) is based on past \\(\\sigma\_{t\-j}\\) and past shocks \\(e\_{t\-k}\\), we can estimate the parameters \\(\\{\\beta\_0,\\beta{1j}, \\beta\_{2k}\\}, \\forall j,k\\), of the model using MLE. The good news is that this comes canned in R, so all we need to do is use the **tseries** package. ``` library(tseries) res = garch(rets,order=c(1,1)) ``` ``` ## ## ***** ESTIMATION WITH ANALYTICAL GRADIENT ***** ## ## ## I INITIAL X(I) D(I) ## ## 1 4.624639e-04 1.000e+00 ## 2 5.000000e-02 1.000e+00 ## 3 5.000000e-02 1.000e+00 ## ## IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF ## 0 1 -5.512e+03 ## 1 7 -5.513e+03 1.82e-04 2.97e-04 2.0e-04 4.3e+09 2.0e-05 6.33e+05 ## 2 8 -5.513e+03 8.45e-06 9.19e-06 1.9e-04 2.0e+00 2.0e-05 1.57e+01 ## 3 15 -5.536e+03 3.99e-03 6.04e-03 4.4e-01 2.0e+00 8.0e-02 1.56e+01 ## 4 18 -5.569e+03 6.02e-03 4.17e-03 7.4e-01 1.9e+00 3.2e-01 4.54e-01 ## 5 20 -5.579e+03 1.85e-03 1.71e-03 7.9e-02 2.0e+00 6.4e-02 1.67e+02 ## 6 22 -5.604e+03 4.44e-03 3.94e-03 1.3e-01 2.0e+00 1.3e-01 1.93e+04 ## 7 24 -5.610e+03 9.79e-04 9.71e-04 2.2e-02 2.0e+00 2.6e-02 2.93e+06 ## 8 26 -5.621e+03 1.92e-03 1.96e-03 4.1e-02 2.0e+00 5.1e-02 2.76e+08 ## 9 27 -5.639e+03 3.20e-03 4.34e-03 7.4e-02 2.0e+00 1.0e-01 2.26e+02 ## 10 34 -5.640e+03 2.02e-04 3.91e-04 3.7e-06 4.0e+00 5.5e-06 1.73e+01 ## 11 35 -5.640e+03 7.02e-06 8.09e-06 3.6e-06 2.0e+00 5.5e-06 5.02e+00 ## 12 36 -5.640e+03 2.22e-07 2.36e-07 3.7e-06 2.0e+00 5.5e-06 5.26e+00 ## 13 43 -5.641e+03 2.52e-04 3.98e-04 1.5e-02 2.0e+00 2.3e-02 5.26e+00 ## 14 45 -5.642e+03 2.28e-04 1.40e-04 1.7e-02 0.0e+00 3.2e-02 1.40e-04 ## 15 46 -5.644e+03 3.17e-04 3.54e-04 3.9e-02 1.0e-01 8.8e-02 3.57e-04 ## 16 56 -5.644e+03 1.60e-05 3.69e-05 5.7e-07 3.2e+00 9.7e-07 6.48e-05 ## 17 57 -5.644e+03 1.91e-06 1.96e-06 5.0e-07 2.0e+00 9.7e-07 1.20e-05 ## 18 58 -5.644e+03 8.57e-11 5.45e-09 5.2e-07 2.0e+00 9.7e-07 9.38e-06 ## 19 66 -5.644e+03 6.92e-06 9.36e-06 4.2e-03 6.2e-02 7.8e-03 9.38e-06 ## 20 67 -5.644e+03 7.42e-07 1.16e-06 1.2e-03 0.0e+00 2.2e-03 1.16e-06 ## 21 68 -5.644e+03 8.44e-08 1.50e-07 7.1e-04 0.0e+00 1.6e-03 1.50e-07 ## 22 69 -5.644e+03 1.39e-08 2.44e-09 8.6e-05 0.0e+00 1.8e-04 2.44e-09 ## 23 70 -5.644e+03 -7.35e-10 1.24e-11 3.1e-06 0.0e+00 5.4e-06 1.24e-11 ## ## ***** RELATIVE FUNCTION CONVERGENCE ***** ## ## FUNCTION -5.644379e+03 RELDX 3.128e-06 ## FUNC. EVALS 70 GRAD. EVALS 23 ## PRELDF 1.242e-11 NPRELDF 1.242e-11 ## ## I FINAL X(I) D(I) G(I) ## ## 1 1.807617e-05 1.000e+00 1.035e+01 ## 2 1.304314e-01 1.000e+00 -2.837e-02 ## 3 8.457819e-01 1.000e+00 -2.915e-02 ``` ``` summary(res) ``` ``` ## ## Call: ## garch(x = rets, order = c(1, 1)) ## ## Model: ## GARCH(1,1) ## ## Residuals: ## Min 1Q Median 3Q Max ## -9.17102 -0.59191 -0.03853 0.43929 4.64677 ## ## Coefficient(s): ## Estimate Std. 
Error t value Pr(>|t|) ## a0 1.808e-05 2.394e-06 7.551 4.33e-14 *** ## a1 1.304e-01 1.292e-02 10.094 < 2e-16 *** ## b1 8.458e-01 1.307e-02 64.720 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Diagnostic Tests: ## Jarque Bera Test ## ## data: Residuals ## X-squared = 3199.7, df = 2, p-value < 2.2e-16 ## ## ## Box-Ljung test ## ## data: Squared.Residuals ## X-squared = 0.14094, df = 1, p-value = 0.7073 ``` That’s it! Certainly much less painful than programming the entire MLE procedure. We see that the parameters \\(\\{\\beta\_0,\\beta\_1,\\beta\_2\\}\\) (reported as **a0**, **a1**, and **b1** above) are all statistically significant. Given the fitted parameters, we can also examine the extracted time series of volatility. ``` #PLOT VOLATILITY TIME SERIES print(names(res)) ``` ``` ## [1] "order" "coef" "n.likeli" "n.used" ## [5] "residuals" "fitted.values" "series" "frequency" ## [9] "call" "vcov" ``` ``` plot(res$fitted.values[,1],type="l",col="red") grid(lwd=2) ``` We may also plot it side by side with the stock price series. ``` par(mfrow=c(2,1)) plot(res$fitted.values[,1],col="blue",type="l") plot(stkp,type="l",col="red") ``` Notice how the volatility series clumps into periods of high volatility, interspersed with longer periods of calm. As is often the case, volatility tends to be higher when the stock price is lower. 3\.18 Vector Autoregression --------------------------- Also known as VAR (not the same thing as Value\-at\-Risk, denoted VaR). VAR is useful for estimating systems where there are simultaneous regression equations, and the variables influence each other over time. So in a VAR, each variable in a system is assumed to depend on lagged values of itself and the other variables. The number of lags may be chosen by the econometrician based on the expected decay in time\-dependence of the variables in the VAR. In the following example, we examine the inter\-relatedness of returns of the following three tickers: SUNW, MSFT, IBM. For vector autoregressions (VARs), we run the following R commands: ``` md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) y = as.matrix(md[2:4]) library(stats) var6 = ar(y,aic=TRUE,order=6) print(var6$order) ``` ``` ## [1] 1 ``` ``` print(var6$ar) ``` ``` ## , , SUNW ## ## SUNW MSFT IBM ## 1 -0.00985635 0.02224093 0.002072782 ## ## , , MSFT ## ## SUNW MSFT IBM ## 1 0.008658304 -0.1369503 0.0306552 ## ## , , IBM ## ## SUNW MSFT IBM ## 1 -0.04517035 0.0975497 -0.01283037 ``` We print out the Akaike Information Criterion (AIC)[28](#fn28) to see which lags are significant. ``` print(var6$aic) ``` ``` ## 0 1 2 3 4 5 6 ## 23.950676 0.000000 2.762663 5.284709 5.164238 10.065300 8.924513 ``` Since there are three stocks’ returns moving over time, we have a system of three equations, each estimated with up to six lags, so each equation has six lags of coefficients on each of the three variables. We print out these coefficients here, and examine the sign. We note however that only one lag is significant, as the “order” of the system was estimated as 1 in the VAR above.
``` print(var6$partialacf) ``` ``` ## , , SUNW ## ## SUNW MSFT IBM ## 1 -0.00985635 0.022240931 0.002072782 ## 2 -0.07857841 -0.019721982 -0.006210487 ## 3 0.03382375 0.003658121 0.032990758 ## 4 0.02259522 0.030023132 0.020925226 ## 5 -0.03944162 -0.030654949 -0.012384084 ## 6 -0.03109748 -0.021612632 -0.003164879 ## ## , , MSFT ## ## SUNW MSFT IBM ## 1 0.008658304 -0.13695027 0.030655201 ## 2 -0.053224374 -0.02396291 -0.047058278 ## 3 0.080632420 0.03720952 -0.004353203 ## 4 -0.038171317 -0.07573402 -0.004913021 ## 5 0.002727220 0.05886752 0.050568308 ## 6 0.242148823 0.03534206 0.062799122 ## ## , , IBM ## ## SUNW MSFT IBM ## 1 -0.04517035 0.097549700 -0.01283037 ## 2 0.05436993 0.021189756 0.05430338 ## 3 -0.08990973 -0.077140955 -0.03979962 ## 4 0.06651063 0.056250866 0.05200459 ## 5 0.03117548 -0.056192843 -0.06080490 ## 6 -0.13131366 -0.003776726 -0.01502191 ``` Interestingly we see that each of the tickers has a negative relation to its lagged value, but a positive correlation with the lagged values of the other two stocks. Hence, there is positive cross autocorrelation amongst these tech stocks. We can also run a model with three lags. ``` ar(y,method="ols",order=3) ``` ``` ## ## Call: ## ar(x = y, order.max = 3, method = "ols") ## ## $ar ## , , 1 ## ## SUNW MSFT IBM ## SUNW 0.01407 -0.0006952 -0.036839 ## MSFT 0.02693 -0.1440645 0.100557 ## IBM 0.01330 0.0211160 -0.009662 ## ## , , 2 ## ## SUNW MSFT IBM ## SUNW -0.082017 -0.04079 0.04812 ## MSFT -0.020668 -0.01722 0.01761 ## IBM -0.006717 -0.04790 0.05537 ## ## , , 3 ## ## SUNW MSFT IBM ## SUNW 0.035412 0.081961 -0.09139 ## MSFT 0.003999 0.037252 -0.07719 ## IBM 0.033571 -0.003906 -0.04031 ## ## ## $x.intercept ## SUNW MSFT IBM ## -9.623e-05 -7.366e-05 -6.257e-05 ## ## $var.pred ## SUNW MSFT IBM ## SUNW 0.0013593 0.0003007 0.0002842 ## MSFT 0.0003007 0.0003511 0.0001888 ## IBM 0.0002842 0.0001888 0.0002881 ``` We examine cross autocorrelation found across all stocks by Lo and Mackinlay in their book “A Non\-Random Walk Down Wall Street” – see Figure [3\.2](IntroductoryRprogamming.html#fig:ARcross). Figure 3\.2: From Lo and MacKinlay (1999\) We see that one\-lag cross autocorrelations are positive. Compare these portfolio autocorrelations with the individual stock autocorrelations in the example here. 3\.19 Solving Non\-Linear Equations ----------------------------------- Earlier we examined root finding. Here we develop it further. We have also not done much with user\-generated functions. Here is a neat model in R to solve for the implied volatility in the Black\-Merton\-Scholes class of models. First, we code up the Black and Scholes ([1973](#ref-doi:10.1086/260062)) model; this is the function **bms73** below. Then we write a user\-defined function that solves for the implied volatility from a given call or put option price. The package **minpack.lm** is used for the equation solving, and the function call is **nls.lm**. If you are not familiar with the Nobel Prize winning Black\-Scholes model, never mind, almost the entire world has never heard of it. Just think of it as a nonlinear multivariate function that we will use as an exemplar for equation solving. We are going to use the function below to solve for the value of **sig** in the expressions below. We set up two functions. 
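For reference, the call value computed inside the **bms73** function below is the standard Black\-Merton\-Scholes price with a continuous dividend yield \\(q\\): \\\[ C \= S e^{\-qT} N(d\_1\) \- K e^{\-rT} N(d\_2\), \\quad d\_1 \= \\frac{\\ln(S/K) \+ (r \- q \+ \\frac{1}{2}\\sigma^2\)T}{\\sigma \\sqrt{T}}, \\quad d\_2 \= d\_1 \- \\sigma \\sqrt{T} \\] where \\(N(\\cdot)\\) is the cumulative standard normal distribution (computed with **pnorm**), and the corresponding put value is \\(P \= K e^{\-rT} N(\-d\_2\) \- S e^{\-qT} N(\-d\_1\)\\). The solver will vary \\(\\sigma\\) (the argument **sig**) until this model value matches the observed option price.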
``` #Black-Merton-Scholes 1973 #sig: volatility #S: stock price #K: strike price #T: maturity #r: risk free rate #q: dividend rate #cp = 1 for calls and -1 for puts #optprice: observed option price bms73 = function(sig,S,K,T,r,q,cp=1,optprice) { d1 = (log(S/K)+(r-q+0.5*sig^2)*T)/(sig*sqrt(T)) d2 = d1 - sig*sqrt(T) if (cp==1) { optval = S*exp(-q*T)*pnorm(d1)-K*exp(-r*T)*pnorm(d2) } else { optval = -S*exp(-q*T)*pnorm(-d1)+K*exp(-r*T)*pnorm(-d2) } #If option price is supplied we want the implied vol, else optprice bs = optval - optprice } #Function to return Imp Vol with starting guess sig0 impvol = function(sig0,S,K,T,r,q,cp,optprice) { sol = nls.lm(par=sig0,fn=bms73,S=S,K=K,T=T,r=r,q=q, cp=cp,optprice=optprice) } ``` We use the minimizer to solve the nonlinear function for the value of **sig**. The calls to this model are as follows: ``` library(minpack.lm) optprice = 4 res = impvol(0.2,40,40,1,0.03,0,-1,optprice) print(names(res)) ``` ``` ## [1] "par" "hessian" "fvec" "info" "message" "diag" ## [7] "niter" "rsstrace" "deviance" ``` ``` print(c("Implied vol = ",res$par)) ``` ``` ## [1] "Implied vol = " "0.291522285803426" ``` We note that the function **impvol** was written such that the argument that we needed to solve for, **sig0**, the implied volatility, was the first argument in the function. However, the expression **par\=sig0** does inform the solver which argument is being searched for in order to satisfy the non\-linear equation for implied volatility. Note also that the function **bms73** returns the difference between the model price and observed price, not the model price alone. This is necessary as the solver tries to set this function value to zero by finding the implied volatility. Lets check if we put this volatility back into the bms function that we get back the option price of 4\. Voila! ``` #CHECK optp = bms73(res$par,40,40,1,0.03,0,0,4) + optprice print(c("Check option price = ",optp)) ``` ``` ## [1] "Check option price = " "4" ``` 3\.20 Web\-Enabling R Functions ------------------------------- We may be interested in hosting our R programs for users to run through a browser interface. This section walks you through the process to do so. This is an extract of my blog post at [http://sanjivdas.wordpress.com/2010/11/07/web\-enabling\-r\-functions\-with\-cgi\-on\-a\-mac\-os\-x\-desktop/](http://sanjivdas.wordpress.com/2010/11/07/web-enabling-r-functions-with-cgi-on-a-mac-os-x-desktop/). The same may be achieved by using the **Shiny** package in R, which enables you to create interactive browser\-based applications, and is in fact a more powerful environment in which to create web\-driven applications. See: <https://shiny.rstudio.com/>. Here we desribe an example based on the **Rcgi** package from David Firth, and for full details of using R with CGI, see <http://www.omegahat.org/CGIwithR/>. Download the document on using R with CGI. It’s titled “CGIwithR: Facilities for Processing Web Forms with R.”[29](#fn29) You need two program files to get everything working. (These instructions are for a Mac environment.) 1. The html file that is the web form for input data. 2. The R file, with special tags for use with the **CGIwithR** package. Our example will be simple, i.e., a calculator to work out the monthly payment on a standard fixed rate mortgage. The three inputs are the loan principal, annual loan rate, and the number of remaining months to maturity. But first, let’s create the html file for the web page that will take these three input values. 
We call it **mortgage\_calc.html**. The code is all standard, for those familiar with html, and even if you are not used to html, the code is self\-explanatory. See Figure [3\.3](IntroductoryRprogamming.html#fig:rcgi1). Figure 3\.3: HTML code for the Rcgi application Notice that line 06 will be the one referencing the R program that does the calculation. The three inputs are accepted in lines 08\-10\. Line 12 sends the inputs to the R program. Next, we look at the R program, suitably modified to include html tags. We name it **mortgage\_calc.R**. See Figure [3\.4](IntroductoryRprogamming.html#fig:rcgi2). Figure 3\.4: R code for the Rcgi application We can see that all html calls in the R program are made using the **tag()** construct. Lines 22–35 take in the three inputs from the html form. Lines 43–44 do the calculations and line 45 prints the result. The **cat()** function prints its arguments to the web browser page. Okay, we have seen how the two programs (html, R) are written and these templates may be used with changes as needed. We also need to pay attention to setting up the R environment to make sure that the function is served up by the system. The following steps are needed: Make sure that your Mac is allowing connections to its web server. Go to System Preferences and choose Sharing. In this window enable Web Sharing by ticking the box next to it. Place the html file **mortgage\_calc.html** in the directory that serves up web pages. On a Mac, there is already a web directory for this called **Sites**. It’s a good idea to open a separate subdirectory called (say) **Rcgi** below this one for the R related programs and put the html file there. The R program **mortgage\_calc.R** must go in the directory that has been assigned for CGI executables. On a Mac, the default for this directory is **/Library/WebServer/CGI\-Executables** and is usually referenced by the alias **cgi\-bin** (stands for cgi binaries). Drop the R program into this directory. Two more important files are created when you install the **Rcgi** package. The **CGIwithR** installation creates two files: 1. A hidden file called **.Rprofile**; 2. A file called **R.cgi**. Place both these files in the directory: **/Library/WebServer/CGI\-Executables**. If you cannot find the **.Rprofile** file then create it directly by opening a text editor and adding two lines to the file: ``` #! /usr/bin/R library(CGIwithR,warn.conflicts=FALSE) ``` Now, open the **R.cgi** file and make sure that the line pointing to the R executable in the file is showing > R\_DEFAULT\=/usr/bin/R The file may actually have it as **\#!/usr/local/bin/R** which is for Linux platforms, but the usual Mac install has the executable in **\#! /usr/bin/R** so make sure this is done. Make both files executable as follows: \> chmod a\+rx .Rprofile \> chmod a\+rx R.cgi Finally, make the **\\(\\sim\\)/Sites/Rcgi/** directory write accessible: > chmod a\+wx \\(\\sim\\)/Sites/Rcgi Just being patient and following all the steps makes sure it all works well. Having done it once, it’s easy to repeat and create several functions. The inputs are as follows: Loan principal (enter a dollar amount). Annual loan rate (enter it in decimals, e.g., six percent is entered as 0\.06\). Remaining maturity in months (enter 300 if the remaining maturity is 25 years). 3\.1 Got R? ----------- In this chapter, we develop some expertise in using the R statistical package. 
See the manual [https://cran.r\-project.org/doc/manuals/r\-release/R\-intro.pdf](https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf) on the R web site. Work through Appendix A, at least the first page. Also see Grant Farnsworth’s document “Econometrics in R”: [https://cran.r\-project.org/doc/contrib/Farnsworth\-EconometricsInR.pdf](https://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf). There is also a great book that I personally find to be of very high quality, titled “The Art of R Programming” by Norman Matloff. You can easily install the R programming language, which is a very useful tool for Machine Learning. See: <http://en.wikipedia.org/wiki/Machine_learning> Get R from: [http://www.r\-project.org/](http://www.r-project.org/) (download and install it). If you want to use R in IDE mode, download RStudio: <http://www.rstudio.com>. Here is a quick test to make sure your installation of R is working along with graphics capabilities. ``` #PLOT HISTOGRAM FROM STANDARD NORMAL RANDOM NUMBERS x = rnorm(1000000) hist(x,50) grid(col="blue",lwd=2) ``` ### 3\.1\.1 System Commands If you want to directly access the system you can issue system commands as follows: ``` #SYSTEM COMMANDS #The following command lists the files in the current working directory. print(system("ls -lt")) #This command will not work in the notebook. ``` ``` ## [1] 0 ``` 3\.2 Loading Data ----------------- To get started, we need to grab some data. Go to Yahoo! Finance and download some historical data in an Excel spreadsheet, re\-sort it into chronological order, then save it as a CSV file. Read the file into R as follows. ``` #READ IN DATA FROM CSV FILE data = read.csv("DSTMAA_data/goog.csv",header=TRUE) print(head(data)) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2011-04-06 572.18 575.16 568.00 574.18 2668300 574.18 ## 2 2011-04-05 581.08 581.49 565.68 569.09 6047500 569.09 ## 3 2011-04-04 593.00 594.74 583.10 587.68 2054500 587.68 ## 4 2011-04-01 588.76 595.19 588.76 591.80 2613200 591.80 ## 5 2011-03-31 583.00 588.16 581.74 586.76 2029400 586.76 ## 6 2011-03-30 584.38 585.50 580.58 581.84 1422300 581.84 ``` ``` m = length(data) n = length(data[,1]) print(c("Number of columns = ",m)) ``` ``` ## [1] "Number of columns = " "7" ``` ``` print(c("Length of data series = ",n)) ``` ``` ## [1] "Length of data series = " "1671" ``` ``` #REVERSE ORDER THE DATA (Also get some practice with a for loop) for (j in 1:m) { data[,j] = rev(data[,j]) } print(head(data)) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2004-08-19 100.00 104.06 95.96 100.34 22351900 100.34 ## 2 2004-08-20 101.01 109.08 100.50 108.31 11428600 108.31 ## 3 2004-08-23 110.75 113.48 109.05 109.40 9137200 109.40 ## 4 2004-08-24 111.24 111.60 103.57 104.87 7631300 104.87 ## 5 2004-08-25 104.96 108.00 103.88 106.00 4598900 106.00 ## 6 2004-08-26 104.95 107.95 104.66 107.91 3551000 107.91 ``` ``` stkp = as.matrix(data[,7]) plot(stkp,type="l",col="blue") grid(lwd=2) ``` The loop with **rev** above reverses the sequence of the data, if required, so that it runs in chronological order. 3\.3 Getting External Stock Data -------------------------------- We can do the same data set up exercise for financial data using the **quantmod** package.
*Note*: to install a package you can use the drop down menus on Windows and Mac operating systems, and use a package installer on Linux. Or issue the following command: ``` install.packages("quantmod") ``` Now we move on to using this package for one stock. ``` #USE THE QUANTMOD PACKAGE TO GET STOCK DATA library(quantmod) ``` ``` ## Loading required package: xts ``` ``` ## Loading required package: zoo ``` ``` ## ## Attaching package: 'zoo' ``` ``` ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric ``` ``` ## Loading required package: TTR ``` ``` ## Loading required package: methods ``` ``` ## Version 0.4-0 included new data defaults. See ?getSymbols. ``` ``` getSymbols("IBM") ``` ``` ## As of 0.4-0, 'getSymbols' uses env=parent.frame() and ## auto.assign=TRUE by default. ## ## This behavior will be phased out in 0.5-0 when the call will ## default to use auto.assign=FALSE. getOption("getSymbols.env") and ## getOptions("getSymbols.auto.assign") are now checked for alternate defaults ## ## This message is shown once per session and may be disabled by setting ## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for more details. ``` ``` ## [1] "IBM" ``` ``` chartSeries(IBM) ``` Let’s take a quick look at the data. ``` head(IBM) ``` ``` ## IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted ## 2007-01-03 97.18 98.40 96.26 97.27 9196800 77.73997 ## 2007-01-04 97.25 98.79 96.88 98.31 10524500 78.57116 ## 2007-01-05 97.60 97.95 96.91 97.42 7221300 77.85985 ## 2007-01-08 98.50 99.50 98.35 98.90 10340000 79.04270 ## 2007-01-09 99.08 100.33 99.07 100.07 11108200 79.97778 ## 2007-01-10 98.50 99.05 97.93 98.89 8744800 79.03470 ``` Extract the dates using pipes (we will see this in more detail later). ``` library(magrittr) dts = IBM %>% as.data.frame %>% row.names dts %>% head %>% print ``` ``` ## [1] "2007-01-03" "2007-01-04" "2007-01-05" "2007-01-08" "2007-01-09" ## [6] "2007-01-10" ``` ``` dts %>% length %>% print ``` ``` ## [1] 2574 ``` Plot the data. ``` stkp = as.matrix(IBM$IBM.Adjusted) rets = diff(log(stkp)) dts = as.Date(dts) plot(dts,stkp,type="l",col="blue",xlab="Years",ylab="Stock Price of IBM") grid(lwd=2) ``` Summarize the data. ``` #DESCRIPTIVE STATS summary(IBM) ``` ``` ## Index IBM.Open IBM.High IBM.Low ## Min. :2007-01-03 Min. : 72.74 Min. : 76.98 Min. : 69.5 ## 1st Qu.:2009-07-23 1st Qu.:122.59 1st Qu.:123.97 1st Qu.:121.5 ## Median :2012-02-09 Median :155.01 Median :156.29 Median :154.0 ## Mean :2012-02-11 Mean :151.07 Mean :152.32 Mean :150.0 ## 3rd Qu.:2014-09-02 3rd Qu.:183.52 3rd Qu.:184.77 3rd Qu.:182.4 ## Max. :2017-03-23 Max. :215.38 Max. :215.90 Max. :214.3 ## IBM.Close IBM.Volume IBM.Adjusted ## Min. : 71.74 Min. : 1027500 Min. : 59.16 ## 1st Qu.:122.70 1st Qu.: 3615825 1st Qu.:101.43 ## Median :155.38 Median : 4979650 Median :143.63 ## Mean :151.19 Mean : 5869075 Mean :134.12 ## 3rd Qu.:183.54 3rd Qu.: 7134350 3rd Qu.:166.28 ## Max. :215.80 Max. :30770700 Max. :192.08 ``` Compute risk (volatility). ``` #STOCK VOLATILITY sigma_daily = sd(rets) sigma_annual = sigma_daily*sqrt(252) print(sigma_annual) ``` ``` ## [1] 0.2234349 ``` ``` print(c("Sharpe ratio = ",mean(rets)*252/sigma_annual)) ``` ``` ## [1] "Sharpe ratio = " "0.355224144170446" ``` We may also use the package to get data for more than one stock. ``` library(quantmod) getSymbols(c("GOOG","AAPL","CSCO","IBM")) ``` ``` ## [1] "GOOG" "AAPL" "CSCO" "IBM" ``` We now go ahead and concatenate columns of data into one stock data set. 
``` goog = as.numeric(GOOG[,6]) aapl = as.numeric(AAPL[,6]) csco = as.numeric(CSCO[,6]) ibm = as.numeric(IBM[,6]) stkdata = cbind(goog,aapl,csco,ibm) dim(stkdata) ``` ``` ## [1] 2574 4 ``` Now, compute daily returns. This time, we do log returns in continuous\-time. The mean returns are: ``` n = dim(stkdata)[1] rets = log(stkdata[2:n,]/stkdata[1:(n-1),]) colMeans(rets) ``` ``` ## goog aapl csco ibm ## 0.0004869421 0.0009962588 0.0001426355 0.0003149582 ``` We can also compute the covariance matrix and correlation matrix: ``` cv = cov(rets) print(cv,2) ``` ``` ## goog aapl csco ibm ## goog 0.00034 0.00020 0.00017 0.00012 ## aapl 0.00020 0.00042 0.00019 0.00014 ## csco 0.00017 0.00019 0.00036 0.00015 ## ibm 0.00012 0.00014 0.00015 0.00020 ``` ``` cr = cor(rets) print(cr,4) ``` ``` ## goog aapl csco ibm ## goog 1.0000 0.5342 0.4984 0.4627 ## aapl 0.5342 1.0000 0.4840 0.4743 ## csco 0.4984 0.4840 1.0000 0.5711 ## ibm 0.4627 0.4743 0.5711 1.0000 ``` Notice the print command allows you to choose the number of significant digits (in this case 4\). Also, as expected the four return time series are positively correlated with each other. 3\.4 Data Frames ---------------- Data frames are the most essential data structure in the R programming language. One may think of a data frame as simply a spreadsheet. In fact you can view it as such with the following command. ``` View(data) ``` However, data frames in R are much more than mere spreadsheets, which is why Excel will never trump R in the hanlding and analysis of data, except for very small applications on small spreadsheets. One may also think of data frames as databases, and there are many commands that we may use that are database\-like, such as joins, merges, filters, selections, etc. Indeed, packages such as **dplyr** and **data.table** are designed to make these operations seamless, and operate efficiently on big data, where the number of observations (rows) are of the order of hundreds of millions. Data frames can be addressed by column names, so that we do not need to remember column numbers specifically. If you want to find the names of all columns in a data frame, the **names** function does the trick. To address a chosen column, append the column name to the data frame using the “$” connector, as shown below. ``` #THIS IS A DATA FRAME AND CAN BE REFERENCED BY COLUMN NAMES print(names(data)) ``` ``` ## [1] "Date" "Open" "High" "Low" "Close" "Volume" ## [7] "Adj.Close" ``` ``` print(head(data$Close)) ``` ``` ## [1] 100.34 108.31 109.40 104.87 106.00 107.91 ``` The command printed out the first few observations in the column “Close”. All variables and functions in R are “objects”, and you are well\-served to know the object *type*, because objects have properties and methods apply differently to objects of various types. Therefore, to check an object type, use the **class** function. ``` class(data) ``` ``` ## [1] "data.frame" ``` To obtain descriptive statistics on the data variables in a data frame, the **summary** function is very handy. ``` #DESCRIPTIVE STATISTICS summary(data) ``` ``` ## Date Open High Low ## 2004-08-19: 1 Min. : 99.19 Min. :101.7 Min. : 95.96 ## 2004-08-20: 1 1st Qu.:353.79 1st Qu.:359.5 1st Qu.:344.25 ## 2004-08-23: 1 Median :457.57 Median :462.2 Median :452.42 ## 2004-08-24: 1 Mean :434.70 Mean :439.7 Mean :429.15 ## 2004-08-25: 1 3rd Qu.:532.62 3rd Qu.:537.2 3rd Qu.:526.15 ## 2004-08-26: 1 Max. :741.13 Max. :747.2 Max. :725.00 ## (Other) :1665 ## Close Volume Adj.Close ## Min. :100.0 Min. : 858700 Min. 
:100.0 ## 1st Qu.:353.5 1st Qu.: 3200350 1st Qu.:353.5 ## Median :457.4 Median : 5028000 Median :457.4 ## Mean :434.4 Mean : 6286021 Mean :434.4 ## 3rd Qu.:531.6 3rd Qu.: 7703250 3rd Qu.:531.6 ## Max. :741.8 Max. :41116700 Max. :741.8 ## ``` Let’s take a given column of data and perform some transformations on it. We can also plot the data, with some arguments for look and feel, using the **plot** function. ``` #USING A PARTICULAR COLUMN stkp = data$Adj.Close dt = data$Date print(c("Length of stock series = ",length(stkp))) ``` ``` ## [1] "Length of stock series = " "1671" ``` ``` #Ln of differenced stk prices gives continuous returns rets = diff(log(stkp)) #diff() takes first differences print(c("Length of return series = ",length(rets))) ``` ``` ## [1] "Length of return series = " "1670" ``` ``` print(head(rets)) ``` ``` ## [1] 0.07643307 0.01001340 -0.04228940 0.01071761 0.01785845 -0.01644436 ``` ``` plot(rets,type="l",col="blue") ``` In case you want more descriptive statistics than provided by the **summary** function, then use an appropriate package. We may be interested in the higher\-order moments, and we use the **moments** package for this. ``` print(summary(rets)) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -0.1234000 -0.0092080 0.0007246 0.0010450 0.0117100 0.1823000 ``` Compute the daily and annualized standard deviation of returns. ``` r_sd = sd(rets) r_sd_annual = r_sd*sqrt(252) print(c(r_sd,r_sd_annual)) ``` ``` ## [1] 0.02266823 0.35984704 ``` ``` #What if we take the stdev of annualized returns? print(sd(rets*252)) ``` ``` ## [1] 5.712395 ``` ``` #Huh? print(sd(rets*252))/252 ``` ``` ## [1] 5.712395 ``` ``` ## [1] 0.02266823 ``` ``` print(sd(rets*252))/sqrt(252) ``` ``` ## [1] 5.712395 ``` ``` ## [1] 0.359847 ``` Notice the interesting use of the **print** function here. The variance is easy as well. ``` #Variance r_var = var(rets) r_var_annual = var(rets)*252 print(c(r_var,r_var_annual)) ``` ``` ## [1] 0.0005138488 0.1294898953 ``` 3\.5 Higher\-Order Moments -------------------------- Skewness and kurtosis are key moments that arise in all return distributions. We need a different library in R for these. We use the **moments** library. \\\[\\begin{equation} \\mbox{Skewness} \= \\frac{E\[(X\-\\mu)^3]}{\\sigma^{3}} \\end{equation}\\] Skewness means one tail is fatter than the other (asymmetry). Fatter right (left) tail implies positive (negative) skewness. \\\[\\begin{equation} \\mbox{Kurtosis} \= \\frac{E\[(X\-\\mu)^4]}{\\sigma^{4}} \\end{equation}\\] Kurtosis means both tails are fatter than with a normal distribution. ``` #HIGHER-ORDER MOMENTS library(moments) hist(rets,50) ``` ``` print(c("Skewness=",skewness(rets))) ``` ``` ## [1] "Skewness=" "0.487479193296115" ``` ``` print(c("Kurtosis=",kurtosis(rets))) ``` ``` ## [1] "Kurtosis=" "9.95591572103069" ``` For the normal distribution, skewness is zero, and kurtosis is 3\. Kurtosis minus three is denoted “excess kurtosis”. ``` skewness(rnorm(1000000)) ``` ``` ## [1] 0.001912514 ``` ``` kurtosis(rnorm(1000000)) ``` ``` ## [1] 2.995332 ``` What is the skewness and kurtosis of the stock index (S\&P500\)? 3\.6 Reading space delimited files ---------------------------------- Often the original data is in a space delimited file, not a comma separated one, in which case the **read.table** function is appropriate. 
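In fact, **read.csv** is just **read.table** with comma\-separated defaults, so the main thing to get right is the field separator. A tiny self\-contained illustration (using an in\-memory string via **textConnection** rather than an actual file):

```
#read.table PARSES WHITESPACE-DELIMITED TEXT
txt = "A B C\n1 2 3\n4 5 6"
print(read.table(textConnection(txt), header=TRUE))
```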
``` #READ IN MORE DATA USING SPACE DELIMITED FILE data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) print(head(data)) ``` ``` ## X.DATE SUNW MSFT IBM CSCO AMZN ## 1 20010102 -0.087443948 0.000000000 -0.002205882 -0.129084975 -0.10843374 ## 2 20010103 0.297297299 0.105187319 0.115696386 0.240150094 0.26576576 ## 3 20010104 -0.060606062 0.010430248 -0.015191546 0.013615734 -0.11743772 ## 4 20010105 -0.096774191 0.014193549 0.008718981 -0.125373140 -0.06048387 ## 5 20010108 0.006696429 -0.003816794 -0.004654255 -0.002133106 0.02575107 ## 6 20010109 0.044345897 0.058748405 -0.010688043 0.015818726 0.09623431 ## mktrf smb hml rf ## 1 -0.0345 -0.0037 0.0209 0.00026 ## 2 0.0527 0.0097 -0.0493 0.00026 ## 3 -0.0121 0.0083 -0.0015 0.00026 ## 4 -0.0291 0.0027 0.0242 0.00026 ## 5 -0.0037 -0.0053 0.0129 0.00026 ## 6 0.0046 0.0044 -0.0026 0.00026 ``` ``` print(c("Length of data series = ",length(data$X.DATE))) ``` ``` ## [1] "Length of data series = " "1507" ``` We compute covariance and correlation in the data frame. ``` #COMPUTE COVARIANCE AND CORRELATION rets = as.data.frame(cbind(data$SUNW,data$MSFT,data$IBM,data$CSCO,data$AMZN)) names(rets) = c("SUNW","MSFT","IBM","CSCO","AMZN") print(cov(rets)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 0.0014380649 0.0003241903 0.0003104236 0.0007174466 0.0004594254 ## MSFT 0.0003241903 0.0003646160 0.0001968077 0.0003301491 0.0002678712 ## IBM 0.0003104236 0.0001968077 0.0002991120 0.0002827622 0.0002056656 ## CSCO 0.0007174466 0.0003301491 0.0002827622 0.0009502685 0.0005041975 ## AMZN 0.0004594254 0.0002678712 0.0002056656 0.0005041975 0.0016479809 ``` ``` print(cor(rets)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349 ## MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669 ## IBM 0.4733132 0.5959466 1.0000000 0.5303729 0.2929333 ## CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038 ## AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000 ``` 3\.7 Pipes with *magrittr* -------------------------- We may redo the example above using a very useful package called **magrittr** which mimics pipes in the Unix operating system. In the code below, we pipe the returns data into the correlation function and then “pipe” the output of that into the print function. This is analogous to issuing the command *print(cor(rets))*. ``` #Repeat the same process using pipes library(magrittr) rets %>% cor %>% print ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349 ## MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669 ## IBM 0.4733132 0.5959466 1.0000000 0.5303729 0.2929333 ## CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038 ## AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000 ``` 3\.8 Matrices ------------- > *Question*: What do you get if you cross a mountain\-climber with a mosquito? *Answer*: Can’t be done. You’ll be crossing a scaler with a vector. We will use matrices extensively in modeling, and here we examine the basic commands needed to create and manipulate matrices in R. We create a \\(4 \\times 3\\) matrix with random numbers as follows: ``` x = matrix(rnorm(12),4,3) print(x) ``` ``` ## [,1] [,2] [,3] ## [1,] -0.69430984 0.7897995 0.3524628 ## [2,] 1.08377771 0.7380866 0.4088171 ## [3,] -0.37520601 -1.3140870 2.0383614 ## [4,] -0.06818956 -0.6813911 0.1423782 ``` Transposing the matrix, notice that the dimensions are reversed. 
``` print(t(x),3) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] -0.694 1.084 -0.375 -0.0682 ## [2,] 0.790 0.738 -1.314 -0.6814 ## [3,] 0.352 0.409 2.038 0.1424 ``` Of course, it is easy to multiply matrices as long as they conform. By “conform” we mean that when multiplying one matrix by another, the number of columns of the matrix on the left must be equal to the number of rows of the matrix on the right. The resultant matrix that holds the answer of this computation will have the number of rows of the matrix on the left, and the number of columns of the matrix on the right. See the examples below: ``` print(t(x) %*% x,3) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.802 0.791 -0.576 ## [2,] 0.791 3.360 -2.195 ## [3,] -0.576 -2.195 4.467 ``` ``` print(x %*% t(x),3) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] 1.2301 -0.0254 -0.0589 -0.441 ## [2,] -0.0254 1.8865 -0.5432 -0.519 ## [3,] -0.0589 -0.5432 6.0225 1.211 ## [4,] -0.4406 -0.5186 1.2112 0.489 ``` Here is an example of non\-conforming matrices. ``` #CREATE A RANDOM MATRIX x = matrix(runif(12),4,3) print(x) ``` ``` ## [,1] [,2] [,3] ## [1,] 0.9508065 0.3802924 0.7496199 ## [2,] 0.2546922 0.2621244 0.9214230 ## [3,] 0.3521408 0.1808846 0.8633504 ## [4,] 0.9065475 0.2655430 0.7270766 ``` ``` print(x*2) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9016129 0.7605847 1.499240 ## [2,] 0.5093844 0.5242488 1.842846 ## [3,] 0.7042817 0.3617692 1.726701 ## [4,] 1.8130949 0.5310859 1.454153 ``` ``` print(x+x) ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9016129 0.7605847 1.499240 ## [2,] 0.5093844 0.5242488 1.842846 ## [3,] 0.7042817 0.3617692 1.726701 ## [4,] 1.8130949 0.5310859 1.454153 ``` ``` print(t(x) %*% x) #THIS SHOULD BE 3x3 ``` ``` ## [,1] [,2] [,3] ## [1,] 1.9147325 0.7327696 1.910573 ## [2,] 0.7327696 0.3165638 0.875839 ## [3,] 1.9105731 0.8758390 2.684965 ``` ``` #print(x %*% x) #SHOULD GIVE AN ERROR ``` Taking the inverse of the covariance matrix, we get: ``` cv_inv = solve(cv) print(cv_inv,3) ``` ``` ## goog aapl csco ibm ## goog 4670 -1430 -1099 -1011 ## aapl -1430 3766 -811 -1122 ## csco -1099 -811 4801 -2452 ## ibm -1011 -1122 -2452 8325 ``` Check that the inverse is really so! ``` print(cv_inv %*% cv,3) ``` ``` ## goog aapl csco ibm ## goog 1.00e+00 -2.78e-16 -1.94e-16 -2.78e-17 ## aapl -2.78e-17 1.00e+00 8.33e-17 -5.55e-17 ## csco 1.67e-16 1.11e-16 1.00e+00 1.11e-16 ## ibm 0.00e+00 -2.22e-16 -2.22e-16 1.00e+00 ``` It is, the result of multiplying the inverse matrix by the matrix itself results in the identity matrix. A covariance matrix should be positive definite. Why? What happens if it is not? Checking for this property is easy. ``` library(corpcor) is.positive.definite(cv) ``` ``` ## [1] TRUE ``` What happens if you compute pairwise covariances from differing lengths of data for each pair? Let’s take the returns data we have and find the inverse. ``` cv = cov(rets) print(round(cv,6)) ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 0.001438 0.000324 0.000310 0.000717 0.000459 ## MSFT 0.000324 0.000365 0.000197 0.000330 0.000268 ## IBM 0.000310 0.000197 0.000299 0.000283 0.000206 ## CSCO 0.000717 0.000330 0.000283 0.000950 0.000504 ## AMZN 0.000459 0.000268 0.000206 0.000504 0.001648 ``` ``` cv_inv = solve(cv) #TAKE THE INVERSE print(round(cv_inv %*% cv,2)) #CHECK THAT WE GET IDENTITY MATRIX ``` ``` ## SUNW MSFT IBM CSCO AMZN ## SUNW 1 0 0 0 0 ## MSFT 0 1 0 0 0 ## IBM 0 0 1 0 0 ## CSCO 0 0 0 1 0 ## AMZN 0 0 0 0 1 ``` ``` #CHECK IF MATRIX IS POSITIVE DEFINITE (why do we check this?) 
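#ASIDE: a covariance matrix must be positive (semi-)definite because any portfolio
#variance t(w) %*% cv %*% w can never be negative; estimates built from pairwise or
#differing-length samples can violate this, and a matrix that is not positive definite
#cannot be inverted reliably for portfolio optimization, which is why we check it here.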
library(corpcor) is.positive.definite(cv) ``` ``` ## [1] TRUE ``` 3\.9 Root Finding ----------------- Finding roots of nonlinear equations is often required, and R has several packages for this purpose. Here we examine a few examples. Suppose we are given the function \\\[ (x^2 \+ y^2 \- 1\)^3 \- x^2 y^3 \= 0 \\] and for various values of \\(y\\) we wish to solve for the values of \\(x\\). The function we use is called **multiroot** and the use of the function is shown below. ``` #ROOT SOLVING IN R library(rootSolve) fn = function(x,y) { result = (x^2+y^2-1)^3 - x^2*y^3 } yy = 1 sol = multiroot(f=fn,start=1,maxiter=10000,rtol=0.000001,atol=0.000001,ctol=0.00001,y=yy) print(c("solution=",sol$root)) ``` ``` ## [1] "solution=" "1" ``` ``` check = fn(sol$root,yy) print(check) ``` ``` ## [1] 0 ``` Here we demonstrate the use of another function called **uniroot**. ``` fn = function(x) { result = 0.065*(x*(1-x))^0.5- 0.05 +0.05*x } sol = uniroot.all(f=fn,c(0,1)) print(sol) ``` ``` ## [1] 1.0000000 0.3717627 ``` ``` check = fn(sol) print(check) ``` ``` ## [1] 0.000000e+00 1.041576e-06 ``` 3\.10 Regression ---------------- In a *multivariate* linear regression, we have \\\[\\begin{equation} Y \= X \\cdot \\beta \+ e \\end{equation}\\] where \\(Y \\in R^{t \\times 1}\\), \\(X \\in R^{t \\times n}\\), and \\(\\beta \\in R^{n \\times 1}\\), and the regression solution is simply equal to \\(\\beta \= (X'X)^{\-1}(X'Y) \\in R^{n \\times 1}\\). To get this result we minimize the sum of squared errors. \\\[\\begin{eqnarray\*} \\min\_{\\beta} e'e \&\=\& (Y \- X \\cdot \\beta)' (Y\-X \\cdot \\beta) \\\\ \&\=\& Y'(Y\-X \\cdot \\beta) \- (X \\beta)'\\cdot (Y\-X \\cdot \\beta) \\\\ \&\=\& Y'Y \- Y' X \\beta \- (\\beta' X') Y \+ \\beta' X'X \\beta \\\\ \&\=\& Y'Y \- Y' X \\beta \- Y' X \\beta \+ \\beta' X'X \\beta \\\\ \&\=\& Y'Y \- 2Y' X \\beta \+ \\beta' X'X \\beta \\end{eqnarray\*}\\] Note that this expression is a scalar. Differentiating w.r.t. \\(\\beta'\\) gives the following f.o.c: \\\[\\begin{eqnarray\*} \- 2 X'Y \+ 2 X'X \\beta\&\=\& {\\bf 0} \\\\ \& \\Longrightarrow \& \\\\ \\beta \&\=\& (X'X)^{\-1} (X'Y) \\end{eqnarray\*}\\] There is another useful expression for each individual \\(\\beta\_i \= \\frac{Cov(X\_i,Y)}{Var(X\_i)}\\). You should compute this and check that each coefficient in the regression is indeed equal to the \\(\\beta\_i\\) from this calculation. *Example*: We run a stock return regression to exemplify the algebra above. 
``` data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) #THESE DATA ARE RETURNS print(names(data)) #THIS IS A DATA FRAME (important construct in R) ``` ``` ## [1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" ## [8] "smb" "hml" "rf" ``` ``` head(data) ``` ``` ## X.DATE SUNW MSFT IBM CSCO AMZN ## 1 20010102 -0.087443948 0.000000000 -0.002205882 -0.129084975 -0.10843374 ## 2 20010103 0.297297299 0.105187319 0.115696386 0.240150094 0.26576576 ## 3 20010104 -0.060606062 0.010430248 -0.015191546 0.013615734 -0.11743772 ## 4 20010105 -0.096774191 0.014193549 0.008718981 -0.125373140 -0.06048387 ## 5 20010108 0.006696429 -0.003816794 -0.004654255 -0.002133106 0.02575107 ## 6 20010109 0.044345897 0.058748405 -0.010688043 0.015818726 0.09623431 ## mktrf smb hml rf ## 1 -0.0345 -0.0037 0.0209 0.00026 ## 2 0.0527 0.0097 -0.0493 0.00026 ## 3 -0.0121 0.0083 -0.0015 0.00026 ## 4 -0.0291 0.0027 0.0242 0.00026 ## 5 -0.0037 -0.0053 0.0129 0.00026 ## 6 0.0046 0.0044 -0.0026 0.00026 ``` ``` #RUN A MULTIVARIATE REGRESSION ON STOCK DATA Y = as.matrix(data$SUNW) X = as.matrix(data[,3:6]) res = lm(Y~X) summary(res) ``` ``` ## ## Call: ## lm(formula = Y ~ X) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.233758 -0.014921 -0.000711 0.014214 0.178859 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.0007256 0.0007512 -0.966 0.33422 ## XMSFT 0.1382312 0.0529045 2.613 0.00907 ** ## XIBM 0.3791500 0.0566232 6.696 3.02e-11 *** ## XCSCO 0.5769097 0.0317799 18.153 < 2e-16 *** ## XAMZN 0.0324899 0.0204802 1.586 0.11286 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.02914 on 1502 degrees of freedom ## Multiple R-squared: 0.4112, Adjusted R-squared: 0.4096 ## F-statistic: 262.2 on 4 and 1502 DF, p-value: < 2.2e-16 ``` Now we can cross\-check the regression using the algebraic solution for the regression coefficients. ``` #CHECK THE REGRESSION n = length(Y) X = cbind(matrix(1,n,1),X) b = solve(t(X) %*% X) %*% (t(X) %*% Y) print(b) ``` ``` ## [,1] ## -0.0007256342 ## MSFT 0.1382312148 ## IBM 0.3791500328 ## CSCO 0.5769097262 ## AMZN 0.0324898716 ``` *Example*: As a second example, we take data on basketball teams in a cross\-section, and try to explain their performance using team statistics. Here is a simple regression run on some data from the 2005\-06 NCAA basketball season for the March madness stats. The data is stored in a space\-delimited file called **ncaa.txt**. We use the metric of performance to be the number of games played, with more successful teams playing more playoff games, and then try to see what variables explain it best. We apply a simple linear regression that uses the R command **lm**, which stands for “linear model”. 
``` #REGRESSION ON NCAA BASKETBALL PLAYOFF DATA ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) print(head(ncaa)) ``` ``` ## No NAME GMS PTS REB AST TO A.T STL BLK PF FG FT ## 1 1 NorthCarolina 6 84.2 41.5 17.8 12.8 1.39 6.7 3.8 16.7 0.514 0.664 ## 2 2 Illinois 6 74.5 34.0 19.0 10.2 1.87 8.0 1.7 16.5 0.457 0.753 ## 3 3 Louisville 5 77.4 35.4 13.6 11.0 1.24 5.4 4.2 16.6 0.479 0.702 ## 4 4 MichiganState 5 80.8 37.8 13.0 12.6 1.03 8.4 2.4 19.8 0.445 0.783 ## 5 5 Arizona 4 79.8 35.0 15.8 14.5 1.09 6.0 6.5 13.3 0.542 0.759 ## 6 6 Kentucky 4 72.8 32.3 12.8 13.5 0.94 7.3 3.5 19.5 0.510 0.663 ## X3P ## 1 0.417 ## 2 0.361 ## 3 0.376 ## 4 0.329 ## 5 0.397 ## 6 0.400 ``` ``` y = ncaa[3] y = as.matrix(y) x = ncaa[4:14] x = as.matrix(x) fm = lm(y~x) res = summary(fm) res ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` An alternative specification of regression using data frames is somewhat easier to implement. ``` #CREATING DATA FRAMES ncaa_data_frame = data.frame(y=as.matrix(ncaa[3]),x=as.matrix(ncaa[4:14])) fm = lm(y~x,data=ncaa_data_frame) summary(fm) ``` ``` ## ## Call: ## lm(formula = y ~ x, data = ncaa_data_frame) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` 3\.11 Parts of a regression --------------------------- The linear regression is fit by minimizing the sum of squared errors, but the same concept may also be applied to a nonlinear regression as well. So we might have: \\\[ y\_i \= f(x\_{i1},x\_{i2},...,x\_{ip}) \+ \\epsilon\_i, \\quad i\=1,2,...,n \\] which describes a data set that has \\(n\\) rows and \\(p\\) columns, which are the standard variables for the number of rows and columns. Note that the error term (residual) is \\(\\epsilon\_i\\). 
The regression will have \\((p\+1\)\\) coefficients, i.e., \\({\\bf b} \= \\{b\_0,b\_1,b\_2,...,b\_p\\}\\), and \\({\\bf x}\_i \= \\{x\_{i1},x\_{i2},...,x\_{ip}\\}\\). The model is fit by minimizing the sum of squared residuals, i.e., \\\[ \\min\_{\\bf b} \\sum\_{i\=1}^n \\epsilon\_i^2 \\] We define the following: * Sum of squared residuals (errors): \\(SSE \= \\sum\_{i\=1}^n \\epsilon\_i^2\\), with degrees of freedom \\(DFE \= n\-p\-1\\). * Total sum of squares: \\(SST \= \\sum\_{i\=1}^n (y\_i \- {\\bar y})^2\\), where \\({\\bar y}\\) is the mean of \\(y\\). Degrees of freedom are \\(DFT \= n\-1\\). * Regression (model) sum of squares: \\(SSM \= \\sum\_{i\=1}^n (f({\\bf x}\_i) \- {\\bar y})^2\\); with degrees of freedom \\(DFM \= p\\). * Note that \\(SST \= SSM \+ SSE\\). * Check that \\(DFT \= DFM \+ DFE\\). (The code below obtains the same numbers by setting its variable **p** to the total number of estimated coefficients, intercept included.) The \\(R\\)\-squared of the regression is \\\[ R^2 \= \\left( 1 \- \\frac{SSE}{SST} \\right) \\quad \\in (0,1\) \\] The \\(F\\)\-statistic in the regression is what tells us if the RHS variables comprise a model that explains the LHS variable sufficiently. Do the RHS variables offer more of an explanation than simply assuming that the mean value of \\(y\\) is the best prediction? The null hypothesis we care about is * \\(H\_0\\): \\(b\_k \= 0, k\=0,1,2,...,p\\), versus an alternate hypothesis of * \\(H\_1\\): \\(b\_k \\neq 0\\) for at least one \\(k\\). To test this the \\(F\\)\-statistic is computed as the following ratio: \\\[ F \= \\frac{\\mbox{Explained variance}}{\\mbox{Unexplained variance}} \= \\frac{SSM/DFM}{SSE/DFE} \= \\frac{MSM}{MSE} \\] where \\(MSM\\) is the model (explained) mean square, and \\(MSE\\) is the mean squared error. Now let’s relate this to \\(R^2\\). First, we find an approximation for the \\(R^2\\). \\\[ R^2 \= 1 \- \\frac{SSE}{SST} \\\\ \= 1 \- \\frac{SSE/n}{SST/n} \\\\ \\approx 1 \- \\frac{MSE}{MST} \\\\ \= \\frac{MST\-MSE}{MST} \\\\ \= \\frac{MSM}{MST} \\] The \\(R^2\\) of a regression that has no RHS variables is zero, and of course \\(MSM\=0\\). In such a regression \\(MST \= MSE\\). So the expression above becomes: \\\[ R^2\_{p\=0} \= \\frac{MSM}{MST} \= 0 \\] We can also see, with some manipulation, that \\(R^2\\) is related to \\(F\\) (approximately, assuming large \\(n\\)). \\\[ R^2 \+ \\frac{1}{F\+1}\=1 \\quad \\mbox{or} \\quad 1\+F \= \\frac{1}{1\-R^2} \\] Check to see that when \\(R^2\=0\\), then \\(F\=0\\). We can further check the formulae with a numerical example, by creating some sample data. ``` x = matrix(runif(300),100,3) y = 5 + 4*x[,1] + 3*x[,2] + 2*x[,3] + rnorm(100) y = as.matrix(y) res = lm(y~x) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.7194 -0.5876 0.0410 0.7223 2.5900 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.0819 0.3141 16.178 < 2e-16 *** ## x1 4.3444 0.3753 11.575 < 2e-16 *** ## x2 2.8944 0.3335 8.679 1.02e-13 *** ## x3 1.8143 0.3397 5.341 6.20e-07 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1 ## ## Residual standard error: 1.005 on 96 degrees of freedom ## Multiple R-squared: 0.7011, Adjusted R-squared: 0.6918 ## F-statistic: 75.06 on 3 and 96 DF, p-value: < 2.2e-16 ``` ``` e = res$residuals SSE = sum(e^2) SST = sum((y-mean(y))^2) SSM = SST - SSE print(c(SSE,SSM,SST)) ``` ``` ## [1] 97.02772 227.60388 324.63160 ``` ``` R2 = 1 - SSE/SST print(R2) ``` ``` ## [1] 0.7011144 ``` ``` n = dim(x)[1] p = dim(x)[2]+1 MSE = SSE/(n-p) MSM = SSM/(p-1) MST = SST/(n-1) print(c(n,p,MSE,MSM,MST)) ``` ``` ## [1] 100.000000 4.000000 1.010705 75.867960 3.279107 ``` ``` Fstat = MSM/MSE print(Fstat) ``` ``` ## [1] 75.06436 ``` We can also compare two regressions, say one with 5 RHS variables with one that has only 3 of those five to see whether the additional two variables has any extra value. The ratio of the two \\(MSM\\) values from the first and second regressions is also a \\(F\\)\-statistic that may be tested for it to be large enough. Note that if the residuals \\(\\epsilon\\) are assumed to be normally distributed, then squared residuals are distributed as per the chi\-square (\\(\\chi^2\\)) distribution. Further, the sum of residuals is distributed normal and the sum of squared residuals is distributed \\(\\chi^2\\). And finally, the ratio of two \\(\\chi^2\\) variables is \\(F\\)\-distributed, which is why we call it the \\(F\\)\-statistic, it is the ratio of two sums of squared errors. 3\.12 Heteroskedasticity ------------------------ Simple linear regression assumes that the standard error of the residuals is the same for all observations. Many regressions suffer from the failure of this condition. The word for this is “heteroskedastic” errors. “Hetero” means different, and “skedastic” means dependent on type. We can first test for the presence of heteroskedasticity using a standard Breusch\-Pagan test available in R. This resides in the **lmtest** package which is loaded in before running the test. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y = as.matrix(ncaa[3]) x = as.matrix(ncaa[4:14]) result = lm(y~x) library(lmtest) bptest(result) ``` ``` ## ## studentized Breusch-Pagan test ## ## data: result ## BP = 15.538, df = 11, p-value = 0.1592 ``` We can see that there is very little evidence of heteroskedasticity in the standard errors as the \\(p\\)\-value is not small. However, lets go ahead and correct the t\-statistics for heteroskedasticity as follows, using the **hccm** function. The **hccm** stands for heteroskedasticity corrected covariance matrix. ``` wuns = matrix(1,64,1) z = cbind(wuns,x) b = solve(t(z) %*% z) %*% (t(z) %*% y) result = lm(y~x) library(car) vb = hccm(result) stdb = sqrt(diag(vb)) tstats = b/stdb print(tstats) ``` ``` ## GMS ## -2.68006069 ## PTS -0.38212818 ## REB 2.38342637 ## AST -0.40848721 ## TO -0.28709450 ## A.T 0.65632053 ## STL 2.13627108 ## BLK 0.09548606 ## PF -0.68036944 ## FG 3.52193532 ## FT 2.35677255 ## X3P 1.23897636 ``` Compare these to the t\-statistics in the original model ``` summary(result) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.5074 -0.5527 -0.2454 0.6705 2.2344 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -10.194804 2.892203 -3.525 0.000893 *** ## xPTS -0.010442 0.025276 -0.413 0.681218 ## xREB 0.105048 0.036951 2.843 0.006375 ** ## xAST -0.060798 0.091102 -0.667 0.507492 ## xTO -0.034545 0.071393 -0.484 0.630513 ## xA.T 1.325402 1.110184 1.194 0.237951 ## xSTL 0.181015 0.068999 2.623 0.011397 * ## xBLK 0.007185 0.075054 0.096 0.924106 ## xPF -0.031705 0.044469 -0.713 0.479050 ## xFG 13.823190 3.981191 3.472 0.001048 ** ## xFT 2.694716 1.118595 2.409 0.019573 * ## xX3P 2.526831 1.754038 1.441 0.155698 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9619 on 52 degrees of freedom ## Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 ## F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06 ``` It is apparent that when corrected for heteroskedasticity, the t\-statistics in the regression are lower, and also render some of the previously significant coefficients insignificant. 3\.13 Auto\-Regressive Models ----------------------------- When data is autocorrelated, i.e., has dependence in time, not accounting for this issue results in unnecessarily high statistical significance (in terms of inflated t\-statistics). Intuitively, this is because observations are treated as independent when actually they are correlated in time, and therefore, the true number of observations is effectively less. Consider a finance application. In efficient markets, the correlation of stock returns from one period to the next should be close to zero. We use the returns on Google stock as an example. First, read in the data. ``` data = read.csv("DSTMAA_data/goog.csv",header=TRUE) head(data) ``` ``` ## Date Open High Low Close Volume Adj.Close ## 1 2011-04-06 572.18 575.16 568.00 574.18 2668300 574.18 ## 2 2011-04-05 581.08 581.49 565.68 569.09 6047500 569.09 ## 3 2011-04-04 593.00 594.74 583.10 587.68 2054500 587.68 ## 4 2011-04-01 588.76 595.19 588.76 591.80 2613200 591.80 ## 5 2011-03-31 583.00 588.16 581.74 586.76 2029400 586.76 ## 6 2011-03-30 584.38 585.50 580.58 581.84 1422300 581.84 ``` Next, create the returns time series. ``` n = length(data$Close) stkp = rev(data$Adj.Close) rets = as.matrix(log(stkp[2:n]/stkp[1:(n-1)])) n = length(rets) plot(rets,type="l",col="blue") ``` ``` print(n) ``` ``` ## [1] 1670 ``` Examine the autocorrelation. This is one lag, also known as first\-order autocorrelation. ``` cor(rets[1:(n-1)],rets[2:n]) ``` ``` ## [1] 0.007215026 ``` Run the Durbin\-Watson test for autocorrelation. Here we test for up to 10 lags. ``` library(car) res = lm(rets[2:n]~rets[1:(n-1)]) durbinWatsonTest(res,max.lag=10) ``` ``` ## lag Autocorrelation D-W Statistic p-value ## 1 -0.0006436855 2.001125 0.950 ## 2 -0.0109757002 2.018298 0.696 ## 3 -0.0002853870 1.996723 0.982 ## 4 0.0252586312 1.945238 0.324 ## 5 0.0188824874 1.957564 0.444 ## 6 -0.0555810090 2.104550 0.018 ## 7 0.0020507562 1.989158 0.986 ## 8 0.0746953706 1.843219 0.004 ## 9 -0.0375308940 2.067304 0.108 ## 10 0.0085641680 1.974756 0.798 ## Alternative hypothesis: rho[lag] != 0 ``` There is no evidence of auto\-correlation when the DW statistic is close to 2\. If the DW\-statistic is greater than 2 it indicates negative autocorrelation, and if it is less than 2, it indicates positive autocorrelation. If there is autocorrelation we can correct for it as follows. Let’s take a different data set. 
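As an aside before we do: the reason 2 is the benchmark value is that the DW statistic is approximately \\(2(1\-\\rho)\\), where \\(\\rho\\) is the first\-order autocorrelation of the residuals. Here is a minimal sketch of that relation, re\-using the **res** object fitted just above (no new data or packages are needed).

```
#DW is roughly 2*(1 - rho), where rho is the lag-1 autocorrelation of the residuals
e = res$residuals
dw = sum(diff(e)^2)/sum(e^2)       #Durbin-Watson statistic computed by hand
rho = cor(e[-1], e[-length(e)])    #first-order autocorrelation of the residuals
print(c(dw, 2*(1 - rho)))          #the two values should be close
```

With \\(\\rho\\) essentially zero for these returns, both values should come out very close to 2, matching the first row of the Durbin\-Watson output above.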
``` md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) names(md) ``` ``` ## [1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" ## [8] "smb" "hml" "rf" ``` Test for autocorrelation. ``` y = as.matrix(md[2]) x = as.matrix(md[7:9]) rf = as.matrix(md[10]) y = y-rf library(car) results = lm(y ~ x) print(summary(results)) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.213676 -0.014356 -0.000733 0.014462 0.191089 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.000197 0.000785 -0.251 0.8019 ## xmktrf 1.657968 0.085816 19.320 <2e-16 *** ## xsmb 0.299735 0.146973 2.039 0.0416 * ## xhml -1.544633 0.176049 -8.774 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.03028 on 1503 degrees of freedom ## Multiple R-squared: 0.3636, Adjusted R-squared: 0.3623 ## F-statistic: 286.3 on 3 and 1503 DF, p-value: < 2.2e-16 ``` ``` durbinWatsonTest(results,max.lag=6) ``` ``` ## lag Autocorrelation D-W Statistic p-value ## 1 -0.07231926 2.144549 0.008 ## 2 -0.04595240 2.079356 0.122 ## 3 0.02958136 1.926791 0.180 ## 4 -0.01608143 2.017980 0.654 ## 5 -0.02360625 2.032176 0.474 ## 6 -0.01874952 2.021745 0.594 ## Alternative hypothesis: rho[lag] != 0 ``` Now make the correction to the t\-statistics. We use the procedure formulated by Newey and West ([1987](#ref-10.2307/1913610)). This correction is part of the **car** package. ``` #CORRECT FOR AUTOCORRELATION library(sandwich) b = results$coefficients print(b) ``` ``` ## (Intercept) xmktrf xsmb xhml ## -0.0001970164 1.6579682191 0.2997353765 -1.5446330690 ``` ``` vb = NeweyWest(results,lag=1) stdb = sqrt(diag(vb)) tstats = b/stdb print(tstats) ``` ``` ## (Intercept) xmktrf xsmb xhml ## -0.2633665 15.5779184 1.8300340 -6.1036120 ``` Compare these to the stats we had earlier. Notice how they have come down after correction for AR. Note that there are several steps needed to correct for autocorrelation, and it might have been nice to roll one’s own function for this. (I leave this as an exercise for you.) Figure 3\.1: From Lo and MacKinlay (1999\) For fun, lets look at the autocorrelation in stock market indexes, shown in Figure [3\.1](IntroductoryRprogamming.html#fig:ARequityindexes). The following graphic is taken from the book “A Non\-Random Walk Down Wall Street” by A. W. Lo and MacKinlay ([1999](#ref-10.2307/j.ctt7tccx)). Is the autocorrelation higher for equally\-weighted or value\-weighted indexes? Why? 3\.14 Maximum Likelihood ------------------------ Assume that the stock returns \\(R(t)\\) mentioned above have a normal distribution with mean \\(\\mu\\) and variance \\(\\sigma^2\\) per year. MLE estimation requires finding the parameters \\(\\{\\mu,\\sigma\\}\\) that maximize the likelihood of seeing the empirical sequence of returns \\(R(t)\\). A normal probability function is required, and we have one above for \\(R(t)\\), which is assumed to be i.i.d. (independent and identically distributed). First, a quick recap of the normal distribution. If \\(x \\sim N(\\mu,\\sigma^2\)\\), then \\\[\\begin{equation} \\mbox{density function:} \\quad f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\end{equation}\\] \\\[\\begin{equation} N(x) \= 1 \- N(\-x) \\end{equation}\\] \\\[\\begin{equation} F(x) \= \\int\_{\-\\infty}^x f(u) du \\end{equation}\\] The standard normal distribution is \\(x \\sim N(0,1\)\\). 
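These properties are easy to verify numerically using the built\-in R functions **dnorm** (density) and **pnorm** (cumulative probability); a minimal check for the standard normal:

```
#Check the density formula and the symmetry property N(x) = 1 - N(-x)
x = 1.5
print(dnorm(x))                     #built-in standard normal density
print(exp(-0.5*x^2)/sqrt(2*pi))     #density formula with mu=0, sigma=1
print(pnorm(x) + pnorm(-x))         #equals 1, i.e., N(x) = 1 - N(-x)
```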
For the standard normal distribution: \\(F(0\) \= \\frac{1}{2}\\). Noting that when returns are i.i.d., the mean return and the variance of returns scale with time, and therefore, the standard deviation of returns scales with the square\-root of time. If the time intervals between return observations is \\(h\\) years, then the probability density of \\(R(t)\\) is normal with the following equation: \\\[\\begin{equation} f\[R(t)] \= \\frac{1}{\\sqrt{2 \\pi \\sigma^2 h}} \\cdot \\exp\\left\[ \-\\frac{1}{2} \\cdot \\frac{(R(t)\-\\alpha)^2}{\\sigma^2 h} \\right] \\end{equation}\\] where \\(\\alpha \= \\left(\\mu\-\\frac{1}{2}\\sigma^2 \\right) h\\). In our case, we have daily data and \\(h\=1/252\\). For periods \\(t\=1,2,\\ldots,T\\) the likelihood of the entire series is \\\[\\begin{equation} \\prod\_{t\=1}^T f\[R(t)] \\end{equation}\\] It is easier (computationally) to maximize \\\[\\begin{equation} \\max\_{\\mu,\\sigma} \\; {\\cal L} \\equiv \\sum\_{t\=1}^T \\ln f\[R(t)] \\end{equation}\\] known as the log\-likelihood. This is easily done in R. First we create the log\-likelihood function, so you can see how functions are defined in R. Note that \\\[\\begin{equation} \\ln \\; f\[R(t)] \= \-\\ln \\sqrt{2 \\pi \\sigma^2 h} \- \\frac{\[R(t)\-\\alpha]^2}{2 \\sigma^2 h} \\end{equation}\\] We have used variable “sigsq” in function “LL” for \\(\\sigma^2 h\\). ``` #LOG-LIKELIHOOD FUNCTION LL = function(params,rets) { alpha = params[1]; sigsq = params[2] logf = -log(sqrt(2*pi*sigsq)) - (rets-alpha)^2/(2*sigsq) LL = -sum(logf) } ``` We now read in the data and maximize the log\-likelihood to find the required parameters of the return distribution. ``` #READ DATA data = read.csv("DSTMAA_data/goog.csv",header=TRUE) stkp = data$Adj.Close #Ln of differenced stk prices gives continuous returns rets = diff(log(stkp)) #diff() takes first differences print(c("mean return = ",mean(rets),mean(rets)*252)) ``` ``` ## [1] "mean return = " "-0.00104453803410475" "-0.263223584594396" ``` ``` print(c("stdev returns = ",sd(rets),sd(rets)*sqrt(252))) ``` ``` ## [1] "stdev returns = " "0.0226682330750677" "0.359847044267268" ``` ``` #Create starting guess for parameters params = c(0.001,0.001) res = nlm(LL,params,rets) print(res) ``` ``` ## $minimum ## [1] -3954.813 ## ## $estimate ## [1] -0.0010450602 0.0005130408 ## ## $gradient ## [1] -0.07215158 -1.93982032 ## ## $code ## [1] 2 ## ## $iterations ## [1] 8 ``` Let’s annualize the parameters and see what they are, comparing them to the raw mean and variance of returns. ``` h = 1/252 alpha = res$estimate[1] sigsq = res$estimate[2] print(c("alpha=",alpha)) ``` ``` ## [1] "alpha=" "-0.00104506019968994" ``` ``` print(c("sigsq=",sigsq)) ``` ``` ## [1] "sigsq=" "0.000513040809008682" ``` ``` sigma = sqrt(sigsq/h) mu = alpha/h + 0.5*sigma^2 print(c("mu=",mu)) ``` ``` ## [1] "mu=" "-0.19871202838677" ``` ``` print(c("sigma=",sigma)) ``` ``` ## [1] "sigma=" "0.359564019154014" ``` ``` print(mean(rets*252)) ``` ``` ## [1] -0.2632236 ``` ``` print(sd(rets)*sqrt(252)) ``` ``` ## [1] 0.359847 ``` As we can see, the parameters under the normal distribution are quite close to the raw moments. 3\.15 Logit ----------- We have seen how to fit a linear regression model in R. In that model we placed no restrictions on the dependent variable. However, when the LHS variable in a regression is categorical and binary, i.e., takes the value 1 or 0, then a logit regression is more apt. 
This regression fits a model that will always return a fitted value of the dependent variable that lies between \\((0,1\)\\). This class of specifications covers what are known as *limited dependent variables* models. In this introduction to R, we will simply run a few examples of these models, leaving a more detailed analysis for later in this book. Example: For the NCAA data, there are 64 observatios (teams) ordered from best to worst. We take the top 32 teams and make their dependent variable 1 (above median teams), and that of the bottom 32 teams zero (below median). Our goal is to fit a regression model that returns a team’s predicted percentile ranking. First, we create the dependent variable. ``` y = c(rep(1,32),rep(0,32)) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[,4:14]) y = as.matrix(y) ``` We use the function **glm** for this task. Running the model is pretty easy as follows. ``` h = glm(y~x, family=binomial(link="logit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.44779 (df=12) ``` ``` print(summary(h)) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.80174 -0.40502 -0.00238 0.37584 2.31767 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -45.83315 14.97564 -3.061 0.00221 ** ## xPTS -0.06127 0.09549 -0.642 0.52108 ## xREB 0.49037 0.18089 2.711 0.00671 ** ## xAST 0.16422 0.26804 0.613 0.54010 ## xTO -0.38405 0.23434 -1.639 0.10124 ## xA.T 1.56351 3.17091 0.493 0.62196 ## xSTL 0.78360 0.32605 2.403 0.01625 * ## xBLK 0.07867 0.23482 0.335 0.73761 ## xPF 0.02602 0.13644 0.191 0.84874 ## xFG 46.21374 17.33685 2.666 0.00768 ** ## xFT 10.72992 4.47729 2.397 0.01655 * ## xX3P 5.41985 5.77966 0.938 0.34838 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.896 on 52 degrees of freedom ## AIC: 66.896 ## ## Number of Fisher Scoring iterations: 6 ``` Thus, we see that the best variables that separate upper\-half teams from lower\-half teams are the number of rebounds and the field goal percentage. To a lesser extent, field goal percentage and steals also provide some explanatory power. The logit regression is specified as follows: \\\[\\begin{eqnarray\*} z \&\=\& \\frac{e^y}{1\+e^y}\\\\ y \&\=\& b\_0 \+ b\_1 x\_1 \+ b\_2 x\_2 \+ \\ldots \+ b\_k x\_k \\end{eqnarray\*}\\] The original data \\(z \= \\{0,1\\}\\). The range of values of \\(y\\) is \\((\-\\infty,\+\\infty)\\). And as required, the fitted \\(z \\in (0,1\)\\). The variables \\(x\\) are the RHS variables. The fitting is done using MLE. Suppose we ran this with a simple linear regression. ``` h = lm(y~x) summary(h) ``` ``` ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.65982 -0.26830 0.03183 0.24712 0.83049 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -4.114185 1.174308 -3.503 0.000953 *** ## xPTS -0.005569 0.010263 -0.543 0.589709 ## xREB 0.046922 0.015003 3.128 0.002886 ** ## xAST 0.015391 0.036990 0.416 0.679055 ## xTO -0.046479 0.028988 -1.603 0.114905 ## xA.T 0.103216 0.450763 0.229 0.819782 ## xSTL 0.063309 0.028015 2.260 0.028050 * ## xBLK 0.023088 0.030474 0.758 0.452082 ## xPF 0.011492 0.018056 0.636 0.527253 ## xFG 4.842722 1.616465 2.996 0.004186 ** ## xFT 1.162177 0.454178 2.559 0.013452 * ## xX3P 0.476283 0.712184 0.669 0.506604 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.3905 on 52 degrees of freedom ## Multiple R-squared: 0.5043, Adjusted R-squared: 0.3995 ## F-statistic: 4.81 on 11 and 52 DF, p-value: 4.514e-05 ``` We get the same variables again showing up as significant. 3\.16 Probit ------------ We can redo the same regression in the logit using a probit instead. A probit is identical in spirit to the logit regression, except that the function that is used is \\\[\\begin{eqnarray\*} z \&\=\& \\Phi(y)\\\\ y \&\=\& b\_0 \+ b\_1 x\_1 \+ b\_2 x\_2 \+ \\ldots \+ b\_k x\_k \\end{eqnarray\*}\\] where \\(\\Phi(\\cdot)\\) is the cumulative normal probability function. It is implemented in R as follows. ``` h = glm(y~x, family=binomial(link="probit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.27924 (df=12) ``` ``` print(summary(h)) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "probit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.76353 -0.41212 -0.00031 0.34996 2.24568 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -26.28219 8.09608 -3.246 0.00117 ** ## xPTS -0.03463 0.05385 -0.643 0.52020 ## xREB 0.28493 0.09939 2.867 0.00415 ** ## xAST 0.10894 0.15735 0.692 0.48874 ## xTO -0.23742 0.13642 -1.740 0.08180 . ## xA.T 0.71485 1.86701 0.383 0.70181 ## xSTL 0.45963 0.18414 2.496 0.01256 * ## xBLK 0.03029 0.13631 0.222 0.82415 ## xPF 0.01041 0.07907 0.132 0.89529 ## xFG 26.58461 9.38711 2.832 0.00463 ** ## xFT 6.28278 2.51452 2.499 0.01247 * ## xX3P 3.15824 3.37841 0.935 0.34988 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.558 on 52 degrees of freedom ## AIC: 66.558 ## ## Number of Fisher Scoring iterations: 8 ``` The results confirm those obtained from the linear regression and logit regression. 3\.17 ARCH and GARCH -------------------- GARCH stands for “Generalized Auto\-Regressive Conditional Heteroskedasticity”. Engle ([1982](#ref-10.2307/1912773)) invented ARCH (for which he got the Nobel prize) and this was extended by Bollerslev ([1986](#ref-RePEc:eee:econom:v:31:y:1986:i:3:p:307-327)) to GARCH. ARCH models are based on the idea that volatility tends to cluster, i.e., volatility for period \\(t\\), is auto\-correlated with volatility from period \\((t\-1\)\\), or more preceding periods. If we had a time series of stock returns following a random walk, we might model it as follows \\\[\\begin{equation} r\_t \= \\mu \+ e\_t, \\quad e\_t \\sim N(0,\\sigma\_t^2\) \\end{equation}\\] Returns have constant mean \\(\\mu\\) and time\-varying variance \\(\\sigma\_t^2\\). If the variance were stationary then \\(\\sigma\_t^2\\) would be constant. But under GARCH it is auto\-correlated with previous variances. 
Hence, we have \\\[\\begin{equation} \\sigma\_{t}^2 \= \\beta\_0 \+ \\sum\_{j\=1}^p \\beta\_{1j} \\sigma\_{t\-j}^2 \+ \\sum\_{k\=1}^q \\beta\_{2k} e\_{t\-k}^2 \\end{equation}\\] So current variance (\\(\\sigma\_t^2\\)) depends on past squared shocks (\\(e\_{t\-k}^2\\)) and past variances (\\(\\sigma\_{t\-j}^2\\)). The number of lags of past variance is \\(p\\), and that of lagged shocks is \\(q\\). The model is thus known as a GARCH\\((p,q)\\) model. For the model to be stationary, the sum of all the \\(\\beta\\) terms should be less than 1\. In GARCH, stock returns are conditionally normal, and independent, but not identically distributed because the variance changes over time. Since at every time \\(t\\), we know the conditional distribution of returns, because \\(\\sigma\_t\\) is based on past \\(\\sigma\_{t\-j}\\) and past shocks \\(e\_{t\-k}\\), we can estimate the parameters \\(\\{\\beta\_0,\\beta{1j}, \\beta\_{2k}\\}, \\forall j,k\\), of the model using MLE. The good news is that this comes canned in R, so all we need to do is use the **tseries** package. ``` library(tseries) res = garch(rets,order=c(1,1)) ``` ``` ## ## ***** ESTIMATION WITH ANALYTICAL GRADIENT ***** ## ## ## I INITIAL X(I) D(I) ## ## 1 4.624639e-04 1.000e+00 ## 2 5.000000e-02 1.000e+00 ## 3 5.000000e-02 1.000e+00 ## ## IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF ## 0 1 -5.512e+03 ## 1 7 -5.513e+03 1.82e-04 2.97e-04 2.0e-04 4.3e+09 2.0e-05 6.33e+05 ## 2 8 -5.513e+03 8.45e-06 9.19e-06 1.9e-04 2.0e+00 2.0e-05 1.57e+01 ## 3 15 -5.536e+03 3.99e-03 6.04e-03 4.4e-01 2.0e+00 8.0e-02 1.56e+01 ## 4 18 -5.569e+03 6.02e-03 4.17e-03 7.4e-01 1.9e+00 3.2e-01 4.54e-01 ## 5 20 -5.579e+03 1.85e-03 1.71e-03 7.9e-02 2.0e+00 6.4e-02 1.67e+02 ## 6 22 -5.604e+03 4.44e-03 3.94e-03 1.3e-01 2.0e+00 1.3e-01 1.93e+04 ## 7 24 -5.610e+03 9.79e-04 9.71e-04 2.2e-02 2.0e+00 2.6e-02 2.93e+06 ## 8 26 -5.621e+03 1.92e-03 1.96e-03 4.1e-02 2.0e+00 5.1e-02 2.76e+08 ## 9 27 -5.639e+03 3.20e-03 4.34e-03 7.4e-02 2.0e+00 1.0e-01 2.26e+02 ## 10 34 -5.640e+03 2.02e-04 3.91e-04 3.7e-06 4.0e+00 5.5e-06 1.73e+01 ## 11 35 -5.640e+03 7.02e-06 8.09e-06 3.6e-06 2.0e+00 5.5e-06 5.02e+00 ## 12 36 -5.640e+03 2.22e-07 2.36e-07 3.7e-06 2.0e+00 5.5e-06 5.26e+00 ## 13 43 -5.641e+03 2.52e-04 3.98e-04 1.5e-02 2.0e+00 2.3e-02 5.26e+00 ## 14 45 -5.642e+03 2.28e-04 1.40e-04 1.7e-02 0.0e+00 3.2e-02 1.40e-04 ## 15 46 -5.644e+03 3.17e-04 3.54e-04 3.9e-02 1.0e-01 8.8e-02 3.57e-04 ## 16 56 -5.644e+03 1.60e-05 3.69e-05 5.7e-07 3.2e+00 9.7e-07 6.48e-05 ## 17 57 -5.644e+03 1.91e-06 1.96e-06 5.0e-07 2.0e+00 9.7e-07 1.20e-05 ## 18 58 -5.644e+03 8.57e-11 5.45e-09 5.2e-07 2.0e+00 9.7e-07 9.38e-06 ## 19 66 -5.644e+03 6.92e-06 9.36e-06 4.2e-03 6.2e-02 7.8e-03 9.38e-06 ## 20 67 -5.644e+03 7.42e-07 1.16e-06 1.2e-03 0.0e+00 2.2e-03 1.16e-06 ## 21 68 -5.644e+03 8.44e-08 1.50e-07 7.1e-04 0.0e+00 1.6e-03 1.50e-07 ## 22 69 -5.644e+03 1.39e-08 2.44e-09 8.6e-05 0.0e+00 1.8e-04 2.44e-09 ## 23 70 -5.644e+03 -7.35e-10 1.24e-11 3.1e-06 0.0e+00 5.4e-06 1.24e-11 ## ## ***** RELATIVE FUNCTION CONVERGENCE ***** ## ## FUNCTION -5.644379e+03 RELDX 3.128e-06 ## FUNC. EVALS 70 GRAD. EVALS 23 ## PRELDF 1.242e-11 NPRELDF 1.242e-11 ## ## I FINAL X(I) D(I) G(I) ## ## 1 1.807617e-05 1.000e+00 1.035e+01 ## 2 1.304314e-01 1.000e+00 -2.837e-02 ## 3 8.457819e-01 1.000e+00 -2.915e-02 ``` ``` summary(res) ``` ``` ## ## Call: ## garch(x = rets, order = c(1, 1)) ## ## Model: ## GARCH(1,1) ## ## Residuals: ## Min 1Q Median 3Q Max ## -9.17102 -0.59191 -0.03853 0.43929 4.64677 ## ## Coefficient(s): ## Estimate Std. 
Error t value Pr(>|t|) ## a0 1.808e-05 2.394e-06 7.551 4.33e-14 *** ## a1 1.304e-01 1.292e-02 10.094 < 2e-16 *** ## b1 8.458e-01 1.307e-02 64.720 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Diagnostic Tests: ## Jarque Bera Test ## ## data: Residuals ## X-squared = 3199.7, df = 2, p-value < 2.2e-16 ## ## ## Box-Ljung test ## ## data: Squared.Residuals ## X-squared = 0.14094, df = 1, p-value = 0.7073 ``` That’s it! Certainly much less painful than programming the entire MLE procedure. We see that the parameters \\(\\{\\beta\_0,\\beta\_1,\\beta\_2\\}\\) are all statistically significant. Given the fitted parameters, we can also examine the extracted time series of volatilty. ``` #PLOT VOLATILITY TIMES SERIES print(names(res)) ``` ``` ## [1] "order" "coef" "n.likeli" "n.used" ## [5] "residuals" "fitted.values" "series" "frequency" ## [9] "call" "vcov" ``` ``` plot(res$fitted.values[,1],type="l",col="red") grid(lwd=2) ``` We may also plot is side by side with the stock price series. ``` par(mfrow=c(2,1)) plot(res$fitted.values[,1],col="blue",type="l") plot(stkp,type="l",col="red") ``` Notice how the volatility series clumps into periods of high volatility, interspersed with larger periods of calm. As is often the case, volatility tends to be higher when the stock price is lower. 3\.18 Vector Autoregression --------------------------- Also known as VAR (not the same thing as Value\-at\-Risk, denoted VaR). VAR is useful for estimating systems where there are simultaneous regression equations, and the variables influence each other over time. So in a VAR, each variable in a system is assumed to depend on lagged values of itself and the other variables. The number of lags may be chosen by the econometrician based on the expected decay in time\-dependence of the variables in the VAR. In the following example, we examine the inter\-relatedness of returns of the following three tickers: SUNW, MSFT, IBM. For vector autoregressions (VARs), we run the following R commands: ``` md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) y = as.matrix(md[2:4]) library(stats) var6 = ar(y,aic=TRUE,order=6) print(var6$order) ``` ``` ## [1] 1 ``` ``` print(var6$ar) ``` ``` ## , , SUNW ## ## SUNW MSFT IBM ## 1 -0.00985635 0.02224093 0.002072782 ## ## , , MSFT ## ## SUNW MSFT IBM ## 1 0.008658304 -0.1369503 0.0306552 ## ## , , IBM ## ## SUNW MSFT IBM ## 1 -0.04517035 0.0975497 -0.01283037 ``` We print out the Akaike Information Criterion (AIC)[28](#fn28) to see which lags are significant. ``` print(var6$aic) ``` ``` ## 0 1 2 3 4 5 6 ## 23.950676 0.000000 2.762663 5.284709 5.164238 10.065300 8.924513 ``` Since there are three stocks’ returns moving over time, we have a system of three equations, each with six lags, so there will be six lagged coefficients for each equation. We print out these coefficients here, and examine the sign. We note however that only one lag is significant, as the “order” of the system was estimated as 1 in the VAR above. 
``` print(var6$partialacf) ``` ``` ## , , SUNW ## ## SUNW MSFT IBM ## 1 -0.00985635 0.022240931 0.002072782 ## 2 -0.07857841 -0.019721982 -0.006210487 ## 3 0.03382375 0.003658121 0.032990758 ## 4 0.02259522 0.030023132 0.020925226 ## 5 -0.03944162 -0.030654949 -0.012384084 ## 6 -0.03109748 -0.021612632 -0.003164879 ## ## , , MSFT ## ## SUNW MSFT IBM ## 1 0.008658304 -0.13695027 0.030655201 ## 2 -0.053224374 -0.02396291 -0.047058278 ## 3 0.080632420 0.03720952 -0.004353203 ## 4 -0.038171317 -0.07573402 -0.004913021 ## 5 0.002727220 0.05886752 0.050568308 ## 6 0.242148823 0.03534206 0.062799122 ## ## , , IBM ## ## SUNW MSFT IBM ## 1 -0.04517035 0.097549700 -0.01283037 ## 2 0.05436993 0.021189756 0.05430338 ## 3 -0.08990973 -0.077140955 -0.03979962 ## 4 0.06651063 0.056250866 0.05200459 ## 5 0.03117548 -0.056192843 -0.06080490 ## 6 -0.13131366 -0.003776726 -0.01502191 ``` Interestingly we see that each of the tickers has a negative relation to its lagged value, but a positive correlation with the lagged values of the other two stocks. Hence, there is positive cross autocorrelation amongst these tech stocks. We can also run a model with three lags. ``` ar(y,method="ols",order=3) ``` ``` ## ## Call: ## ar(x = y, order.max = 3, method = "ols") ## ## $ar ## , , 1 ## ## SUNW MSFT IBM ## SUNW 0.01407 -0.0006952 -0.036839 ## MSFT 0.02693 -0.1440645 0.100557 ## IBM 0.01330 0.0211160 -0.009662 ## ## , , 2 ## ## SUNW MSFT IBM ## SUNW -0.082017 -0.04079 0.04812 ## MSFT -0.020668 -0.01722 0.01761 ## IBM -0.006717 -0.04790 0.05537 ## ## , , 3 ## ## SUNW MSFT IBM ## SUNW 0.035412 0.081961 -0.09139 ## MSFT 0.003999 0.037252 -0.07719 ## IBM 0.033571 -0.003906 -0.04031 ## ## ## $x.intercept ## SUNW MSFT IBM ## -9.623e-05 -7.366e-05 -6.257e-05 ## ## $var.pred ## SUNW MSFT IBM ## SUNW 0.0013593 0.0003007 0.0002842 ## MSFT 0.0003007 0.0003511 0.0001888 ## IBM 0.0002842 0.0001888 0.0002881 ``` We examine cross autocorrelation found across all stocks by Lo and Mackinlay in their book “A Non\-Random Walk Down Wall Street” – see Figure [3\.2](IntroductoryRprogamming.html#fig:ARcross). Figure 3\.2: From Lo and MacKinlay (1999\) We see that one\-lag cross autocorrelations are positive. Compare these portfolio autocorrelations with the individual stock autocorrelations in the example here. 3\.19 Solving Non\-Linear Equations ----------------------------------- Earlier we examined root finding. Here we develop it further. We have also not done much with user\-generated functions. Here is a neat model in R to solve for the implied volatility in the Black\-Merton\-Scholes class of models. First, we code up the Black and Scholes ([1973](#ref-doi:10.1086/260062)) model; this is the function **bms73** below. Then we write a user\-defined function that solves for the implied volatility from a given call or put option price. The package **minpack.lm** is used for the equation solving, and the function call is **nls.lm**. If you are not familiar with the Nobel Prize winning Black\-Scholes model, never mind, almost the entire world has never heard of it. Just think of it as a nonlinear multivariate function that we will use as an exemplar for equation solving. We are going to use the function below to solve for the value of **sig** in the expressions below. We set up two functions. 
``` #Black-Merton-Scholes 1973 #sig: volatility #S: stock price #K: strike price #T: maturity #r: risk free rate #q: dividend rate #cp = 1 for calls and -1 for puts #optprice: observed option price bms73 = function(sig,S,K,T,r,q,cp=1,optprice) { d1 = (log(S/K)+(r-q+0.5*sig^2)*T)/(sig*sqrt(T)) d2 = d1 - sig*sqrt(T) if (cp==1) { optval = S*exp(-q*T)*pnorm(d1)-K*exp(-r*T)*pnorm(d2) } else { optval = -S*exp(-q*T)*pnorm(-d1)+K*exp(-r*T)*pnorm(-d2) } #If option price is supplied we want the implied vol, else optprice bs = optval - optprice } #Function to return Imp Vol with starting guess sig0 impvol = function(sig0,S,K,T,r,q,cp,optprice) { sol = nls.lm(par=sig0,fn=bms73,S=S,K=K,T=T,r=r,q=q, cp=cp,optprice=optprice) } ``` We use the minimizer to solve the nonlinear function for the value of **sig**. The calls to this model are as follows: ``` library(minpack.lm) optprice = 4 res = impvol(0.2,40,40,1,0.03,0,-1,optprice) print(names(res)) ``` ``` ## [1] "par" "hessian" "fvec" "info" "message" "diag" ## [7] "niter" "rsstrace" "deviance" ``` ``` print(c("Implied vol = ",res$par)) ``` ``` ## [1] "Implied vol = " "0.291522285803426" ``` We note that the function **impvol** was written such that the argument that we needed to solve for, **sig0**, the implied volatility, was the first argument in the function. However, the expression **par\=sig0** does inform the solver which argument is being searched for in order to satisfy the non\-linear equation for implied volatility. Note also that the function **bms73** returns the difference between the model price and observed price, not the model price alone. This is necessary as the solver tries to set this function value to zero by finding the implied volatility. Lets check if we put this volatility back into the bms function that we get back the option price of 4\. Voila! ``` #CHECK optp = bms73(res$par,40,40,1,0.03,0,0,4) + optprice print(c("Check option price = ",optp)) ``` ``` ## [1] "Check option price = " "4" ``` 3\.20 Web\-Enabling R Functions ------------------------------- We may be interested in hosting our R programs for users to run through a browser interface. This section walks you through the process to do so. This is an extract of my blog post at [http://sanjivdas.wordpress.com/2010/11/07/web\-enabling\-r\-functions\-with\-cgi\-on\-a\-mac\-os\-x\-desktop/](http://sanjivdas.wordpress.com/2010/11/07/web-enabling-r-functions-with-cgi-on-a-mac-os-x-desktop/). The same may be achieved by using the **Shiny** package in R, which enables you to create interactive browser\-based applications, and is in fact a more powerful environment in which to create web\-driven applications. See: <https://shiny.rstudio.com/>. Here we desribe an example based on the **Rcgi** package from David Firth, and for full details of using R with CGI, see <http://www.omegahat.org/CGIwithR/>. Download the document on using R with CGI. It’s titled “CGIwithR: Facilities for Processing Web Forms with R.”[29](#fn29) You need two program files to get everything working. (These instructions are for a Mac environment.) 1. The html file that is the web form for input data. 2. The R file, with special tags for use with the **CGIwithR** package. Our example will be simple, i.e., a calculator to work out the monthly payment on a standard fixed rate mortgage. The three inputs are the loan principal, annual loan rate, and the number of remaining months to maturity. But first, let’s create the html file for the web page that will take these three input values. 
We call it **mortgage\_calc.html**. The code is all standard, for those familiar with html, and even if you are not used to html, the code is self\-explanatory. See Figure [3\.3](IntroductoryRprogamming.html#fig:rcgi1). Figure 3\.3: HTML code for the Rcgi application Notice that line 06 will be the one referencing the R program that does the calculation. The three inputs are accepted in lines 08\-10\. Line 12 sends the inputs to the R program. Next, we look at the R program, suitably modified to include html tags. We name it **mortgage\_calc.R**. See Figure [3\.4](IntroductoryRprogamming.html#fig:rcgi2). Figure 3\.4: R code for the Rcgi application We can see that all html calls in the R program are made using the **tag()** construct. Lines 22–35 take in the three inputs from the html form. Lines 43–44 do the calculations and line 45 prints the result. The **cat()** function prints its arguments to the web browser page. Okay, we have seen how the two programs (html, R) are written and these templates may be used with changes as needed. We also need to pay attention to setting up the R environment to make sure that the function is served up by the system. The following steps are needed: Make sure that your Mac is allowing connections to its web server. Go to System Preferences and choose Sharing. In this window enable Web Sharing by ticking the box next to it. Place the html file **mortgage\_calc.html** in the directory that serves up web pages. On a Mac, there is already a web directory for this called **Sites**. It’s a good idea to open a separate subdirectory called (say) **Rcgi** below this one for the R related programs and put the html file there. The R program **mortgage\_calc.R** must go in the directory that has been assigned for CGI executables. On a Mac, the default for this directory is **/Library/WebServer/CGI\-Executables** and is usually referenced by the alias **cgi\-bin** (stands for cgi binaries). Drop the R program into this directory. Two more important files are created when you install the **Rcgi** package. The **CGIwithR** installation creates two files: 1. A hidden file called **.Rprofile**; 2. A file called **R.cgi**. Place both these files in the directory: **/Library/WebServer/CGI\-Executables**. If you cannot find the **.Rprofile** file then create it directly by opening a text editor and adding two lines to the file: ``` #! /usr/bin/R library(CGIwithR,warn.conflicts=FALSE) ``` Now, open the **R.cgi** file and make sure that the line pointing to the R executable in the file is showing > R\_DEFAULT\=/usr/bin/R The file may actually have it as **\#!/usr/local/bin/R** which is for Linux platforms, but the usual Mac install has the executable in **\#! /usr/bin/R** so make sure this is done. Make both files executable as follows: \> chmod a\+rx .Rprofile \> chmod a\+rx R.cgi Finally, make the **\\(\\sim\\)/Sites/Rcgi/** directory write accessible: > chmod a\+wx \\(\\sim\\)/Sites/Rcgi Just being patient and following all the steps makes sure it all works well. Having done it once, it’s easy to repeat and create several functions. The inputs are as follows: Loan principal (enter a dollar amount). Annual loan rate (enter it in decimals, e.g., six percent is entered as 0\.06\). Remaining maturity in months (enter 300 if the remaining maturity is 25 years).
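For reference, the core calculation inside **mortgage\_calc.R** is the standard fixed\-rate annuity formula: with monthly rate \\(r\_m \= r/12\\) and \\(n\\) remaining months, the monthly payment on principal \\(P\\) is \\(P \\cdot r\_m/\\left(1 \- (1\+r\_m)^{\-n}\\right)\\). Here is a minimal stand\-alone sketch of that calculation (the function and variable names are illustrative, not taken verbatim from Figure 3\.4\).

```
#Monthly payment on a fixed-rate mortgage (illustrative sketch)
#P: loan principal; r: annual rate in decimals; n: remaining months
mortgage_payment = function(P, r, n) {
  r_m = r/12                          #monthly rate
  P*r_m/(1 - (1 + r_m)^(-n))          #standard annuity payment formula
}
print(mortgage_payment(300000, 0.06, 300))   #e.g., $300,000 at 6% over 300 months
```

Wrapping this calculation in the **tag()** and **cat()** calls shown in Figure 3\.4 is what turns it into a web\-enabled function.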
Chapter 4 MoRe: Data Handling and Other Useful Things ===================================================== In this chapter, we will revisit some of the topics considered in the previous chapters, and demonstrate alternate programming approaches in R. There are some extremely powerful packages in R that allow sql\-like operations on data sets, making for advanced data handling. One of the most time\-consuming activities in data analytics is cleaning and arranging data, and here we will show examples of many tools available for that purpose. Let’s assume we have a good working knowledge of R by now. Here we revisit some more packages, functions, and data structures. 4\.1 Data Extraction of stocks using the *quantmod* package ----------------------------------------------------------- We have seen the package already in the previous chapter. Now, we proceed to use it to get some initial data. ``` library(quantmod) ``` ``` ## Loading required package: xts ``` ``` ## Loading required package: zoo ``` ``` ## ## Attaching package: 'zoo' ``` ``` ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric ``` ``` ## Loading required package: TTR ``` ``` ## Loading required package: methods ``` ``` ## Version 0.4-0 included new data defaults. See ?getSymbols. ``` ``` tickers = c("AAPL","YHOO","IBM","CSCO","C","GSPC") getSymbols(tickers) ``` ``` ## As of 0.4-0, 'getSymbols' uses env=parent.frame() and ## auto.assign=TRUE by default. ## ## This behavior will be phased out in 0.5-0 when the call will ## default to use auto.assign=FALSE. getOption("getSymbols.env") and ## getOptions("getSymbols.auto.assign") are now checked for alternate defaults ## ## This message is shown once per session and may be disabled by setting ## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for more details. ``` ``` ## pausing 1 second between requests for more than 5 symbols ## pausing 1 second between requests for more than 5 symbols ``` ``` ## [1] "AAPL" "YHOO" "IBM" "CSCO" "C" "GSPC" ``` ### 4\.1\.1 Print the length of each stock series. Are they all the same? Here we need to extract the ticker symbol without quotes. ``` print(head(AAPL)) ``` ``` ## AAPL.Open AAPL.High AAPL.Low AAPL.Close AAPL.Volume ## 2007-01-03 86.29 86.58 81.90 83.80 309579900 ## 2007-01-04 84.05 85.95 83.82 85.66 211815100 ## 2007-01-05 85.77 86.20 84.40 85.05 208685400 ## 2007-01-08 85.96 86.53 85.28 85.47 199276700 ## 2007-01-09 86.45 92.98 85.15 92.57 837324600 ## 2007-01-10 94.75 97.80 93.45 97.00 738220000 ## AAPL.Adjusted ## 2007-01-03 10.85709 ## 2007-01-04 11.09807 ## 2007-01-05 11.01904 ## 2007-01-08 11.07345 ## 2007-01-09 11.99333 ## 2007-01-10 12.56728 ``` ``` length(tickers) ``` ``` ## [1] 6 ``` Now we can examine the number of observations in each ticker. ``` for (t in tickers) { a = get(noquote(t))[,1] print(c(t,length(a))) } ``` ``` ## [1] "AAPL" "2574" ## [1] "YHOO" "2574" ## [1] "IBM" "2574" ## [1] "CSCO" "2574" ## [1] "C" "2574" ## [1] "GSPC" "2567" ``` We see that they are not all the same. The stock series are all the same length but the S\&P index is shorter by 7 days. ### 4\.1\.2 Convert closing adjusted prices of all stocks into individual data.frames. First, we create a **list** of data.frames. This will also illustrate how useful lists are because we store data.frames in lists. Notice how we also add a new column to each data.frame so that the dates column may later be used as an index to join the individual stock data.frames into one composite data.frame. 
``` df = list() j = 0 for (t in tickers) { j = j + 1 a = noquote(t) b = data.frame(get(a)[,6]) b$dt = row.names(b) df[[j]] = b } ``` ### 4\.1\.3 Make a single data frame Second, we combine all the stocks adjusted closing prices into a single data.frame using a join, excluding all dates for which all stocks do not have data. The main function used here is *merge* which could be an intersect join or a union join. The default is the intersect join. ``` stock_table = df[[1]] for (j in 2:length(df)) { stock_table = merge(stock_table,df[[j]],by="dt") } print(dim(stock_table)) ``` ``` ## [1] 2567 7 ``` ``` class(stock_table) ``` ``` ## [1] "data.frame" ``` Note that the stock table contains the number of rows of the stock index, which had fewer observations than the individual stocks. So since this is an intersect join, some rows have been dropped. ### 4\.1\.4 Plot the stock series Plot all stocks in a single data.frame using ggplot2, which is more advanced than the basic plot function. We use the basic plot function first. ``` par(mfrow=c(3,2)) #Set the plot area to six plots for (j in 1:length(tickers)) { plot(as.Date(stock_table[,1]),stock_table[,j+1], type="l", ylab=tickers[j],xlab="date") } ``` ``` par(mfrow=c(1,1)) #Set the plot figure back to a single plot ``` ### 4\.1\.5 Convert the data into returns These are continuously compounded returns, or log returns. ``` n = length(stock_table[,1]) rets = stock_table[,2:(length(tickers)+1)] for (j in 1:length(tickers)) { rets[2:n,j] = diff(log(rets[,j])) } rets$dt = stock_table$dt rets = rets[2:n,] #lose the first row when converting to returns print(head(rets)) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 2 0.021952895 0.047282882 0.010635146 0.0259847487 -0.003444886 ## 3 -0.007146715 0.032609594 -0.009094219 0.0003512839 -0.005280834 ## 4 0.004926208 0.006467863 0.015077746 0.0056042360 0.005099241 ## 5 0.079799692 -0.012252406 0.011760684 -0.0056042360 -0.008757558 ## 6 0.046745798 0.039806285 -0.011861824 0.0073491742 -0.008095767 ## 7 -0.012448257 0.017271586 -0.002429871 0.0003486308 0.000738734 ## GSPC.Adjusted dt ## 2 -0.0003760369 2007-01-04 ## 3 0.0000000000 2007-01-05 ## 4 0.0093082361 2007-01-08 ## 5 -0.0127373254 2007-01-09 ## 6 0.0000000000 2007-01-10 ## 7 0.0053269494 2007-01-11 ``` ``` class(rets) ``` ``` ## [1] "data.frame" ``` ### 4\.1\.6 Descriptive statistics The data.frame of returns can be used to present the descriptive statistics of returns. ``` summary(rets) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted ## Min. :-0.197470 Min. :-0.2340251 Min. :-0.0864191 ## 1st Qu.:-0.008318 1st Qu.:-0.0107879 1st Qu.:-0.0064540 ## Median : 0.001008 Median : 0.0003064 Median : 0.0004224 ## Mean : 0.000999 Mean : 0.0002333 Mean : 0.0003158 ## 3rd Qu.: 0.011628 3rd Qu.: 0.0115493 3rd Qu.: 0.0076022 ## Max. : 0.130194 Max. : 0.3918166 Max. : 0.1089889 ## CSCO.Adjusted C.Adjusted GSPC.Adjusted ## Min. :-0.1768648 Min. :-0.4946962 Min. :-0.1542612 ## 1st Qu.:-0.0076399 1st Qu.:-0.0119556 1st Qu.:-0.0040400 ## Median : 0.0003616 Median :-0.0000931 Median : 0.0000000 ## Mean : 0.0001430 Mean :-0.0008315 Mean : 0.0001502 ## 3rd Qu.: 0.0089725 3rd Qu.: 0.0115179 3rd Qu.: 0.0048274 ## Max. : 0.1479930 Max. : 0.4563162 Max. : 0.1967094 ## dt ## Length:2566 ## Class :character ## Mode :character ## ## ## ``` 4\.2 Correlation matrix ----------------------- Now we compute the correlation matrix of returns. 
``` cor(rets[,1:length(tickers)]) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted ## AAPL.Adjusted 1.0000000 0.3548475 0.4754687 0.4860619 ## YHOO.Adjusted 0.3548475 1.0000000 0.3832693 0.4133302 ## IBM.Adjusted 0.4754687 0.3832693 1.0000000 0.5710565 ## CSCO.Adjusted 0.4860619 0.4133302 0.5710565 1.0000000 ## C.Adjusted 0.3731001 0.3377278 0.4329949 0.4633700 ## GSPC.Adjusted 0.2220585 0.1667948 0.1996484 0.2277044 ## C.Adjusted GSPC.Adjusted ## AAPL.Adjusted 0.3731001 0.2220585 ## YHOO.Adjusted 0.3377278 0.1667948 ## IBM.Adjusted 0.4329949 0.1996484 ## CSCO.Adjusted 0.4633700 0.2277044 ## C.Adjusted 1.0000000 0.3303486 ## GSPC.Adjusted 0.3303486 1.0000000 ``` ### 4\.2\.1 Correlogram Show the correlogram for the six return series. This is a useful way to visualize the relationship between all variables in the data set. ``` library(corrgram) corrgram(rets[,1:length(tickers)], order=TRUE, lower.panel=panel.ellipse, upper.panel=panel.pts, text.panel=panel.txt) ``` ### 4\.2\.2 Market regression To see the relation between the stocks and the index, run a regression of each of the five stocks on the index returns. ``` betas = NULL for (j in 1:(length(tickers)-1)) { res = lm(rets[,j]~rets[,6]) betas[j] = res$coefficients[2] } print(betas) ``` ``` ## [1] 0.2921709 0.2602061 0.1790612 0.2746572 0.8101568 ``` The \\(\\beta\\)s indicate the level of systematic risk for each stock. We notice that all the betas are positive and highly significant, but they are not close to unity; in fact, all are lower. This is evidence of misspecification, which may arise from the fact that these stocks are in the tech sector; better explanatory power would come from an index more relevant to the technology sector. ### 4\.2\.3 Return versus systematic risk In order to assess whether, in the cross\-section, there is a relation between average returns and the systematic risk or \\(\\beta\\) of a stock, we run a regression of the five average returns on the five betas estimated above. ``` betas = matrix(betas) avgrets = colMeans(rets[,1:(length(tickers)-1)]) res = lm(avgrets~betas) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = avgrets ~ betas) ## ## Residuals: ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 6.785e-04 -1.540e-04 -2.411e-04 -2.141e-04 -6.938e-05 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.0009311 0.0003754 2.480 0.0892 . ## betas -0.0020901 0.0008766 -2.384 0.0972 . ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.0004445 on 3 degrees of freedom ## Multiple R-squared: 0.6546, Adjusted R-squared: 0.5394 ## F-statistic: 5.685 on 1 and 3 DF, p-value: 0.09724 ``` ``` plot(betas,avgrets) abline(res,col="red") ``` Indeed, we see that there is an unexpected negative relation between \\(\\beta\\) and the return levels. This may be on account of the particular small sample we used for illustration here; however, we note that the CAPM (Capital Asset Pricing Model) dictates a positive relation between stock returns and a firm’s systematic risk level. 4\.3 Using the *merge* function ------------------------------- Data frames are very much like spreadsheets or tables, but they are also a lot like databases; think of them as a happy medium between the two. If you want to join two data frames, it is the same as joining two database tables. For this R has the **merge** function. It is best illustrated with an example. 
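Before turning to a real application in the next subsection, here is a minimal toy sketch (with made\-up data) of what **merge** does: it joins two data frames on a common key column, much like a SQL join.

```
#Toy illustration of merge() using made-up data
a = data.frame(Symbol=c("AAPL","IBM","CSCO"), Price=c(140,175,34))
b = data.frame(Symbol=c("IBM","AAPL"), Sector=c("Technology","Technology"))
print(merge(a, b, by="Symbol"))   #keeps only Symbols present in both data frames
```

By default the join is an intersection; setting the argument **all** to TRUE produces a union (outer) join, which we will use later when assembling the interest rate series from FRED.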
### 4\.3\.1 Extracting online corporate data Suppose we have a list of ticker symbols and we want to generate a dataframe with more details on these tickers, especially their sector and the full name of the company. Let’s look at the input list of tickers. Suppose I have them in a file called **tickers.csv** where the delimiter is the colon sign. We read this in as follows. ``` tickers = read.table("DSTMAA_data/tickers.csv",header=FALSE,sep=":") ``` The line of code reads in the file and this gives us two columns of data. We can look at the top of the file (first 6 rows). ``` head(tickers) ``` ``` ## V1 V2 ## 1 NasdaqGS ACOR ## 2 NasdaqGS AKAM ## 3 NYSE ARE ## 4 NasdaqGS AMZN ## 5 NasdaqGS AAPL ## 6 NasdaqGS AREX ``` Note that the ticker symbols relate to stocks from different exchanges, in this case Nasdaq and NYSE. The file may also contain AMEX listed stocks. The second line of code below counts the number of input tickers, and the third line of code renames the columns of the dataframe. We need to call the column of ticker symbols as \`\`Symbol’’ because we will see that the dataframe with which we will merge this one also has a column with the same name. This column becomes the index on which the two dataframes are matched and joined. ``` n = dim(tickers)[1] print(n) ``` ``` ## [1] 98 ``` ``` names(tickers) = c("Exchange","Symbol") head(tickers) ``` ``` ## Exchange Symbol ## 1 NasdaqGS ACOR ## 2 NasdaqGS AKAM ## 3 NYSE ARE ## 4 NasdaqGS AMZN ## 5 NasdaqGS AAPL ## 6 NasdaqGS AREX ``` ### 4\.3\.2 Get all stock symbols from exchanges Next, we read in lists of all stocks on Nasdaq, NYSE, and AMEX as follows: ``` library(quantmod) nasdaq_names = stockSymbols(exchange="NASDAQ") ``` ``` ## Fetching NASDAQ symbols... ``` ``` nyse_names = stockSymbols(exchange="NYSE") ``` ``` ## Fetching NYSE symbols... ``` ``` amex_names = stockSymbols(exchange="AMEX") ``` ``` ## Fetching AMEX symbols... ``` We can look at the top of the Nasdaq file. ``` head(nasdaq_names) ``` ``` ## Symbol Name LastSale MarketCap IPOyear ## 1 AAAP Advanced Accelerator Applications S.A. 39.68 $1.72B 2015 ## 2 AAL American Airlines Group, Inc. 41.42 $20.88B NA ## 3 AAME Atlantic American Corporation 3.90 $79.62M NA ## 4 AAOI Applied Optoelectronics, Inc. 51.51 $962.1M 2013 ## 5 AAON AAON, Inc. 36.40 $1.92B NA ## 6 AAPC Atlantic Alliance Partnership Corp. 9.80 $36.13M 2015 ## Sector Industry Exchange ## 1 Health Care Major Pharmaceuticals NASDAQ ## 2 Transportation Air Freight/Delivery Services NASDAQ ## 3 Finance Life Insurance NASDAQ ## 4 Technology Semiconductors NASDAQ ## 5 Capital Goods Industrial Machinery/Components NASDAQ ## 6 Consumer Services Services-Misc. Amusement & Recreation NASDAQ ``` Next we merge all three dataframes for each of the exchanges into one data frame. ``` co_names = rbind(nyse_names,nasdaq_names,amex_names) ``` To see how many rows are there in this merged file, we check dimensions. ``` dim(co_names) ``` ``` ## [1] 6692 8 ``` Finally, use the merge function to combine the ticker symbols file with the exchanges data to extend the tickers file to include the information from the exchanges file. ``` result = merge(tickers,co_names,by="Symbol") head(result) ``` ``` ## Symbol Exchange.x Name LastSale ## 1 AAPL NasdaqGS Apple Inc. 140.94 ## 2 ACOR NasdaqGS Acorda Therapeutics, Inc. 25.35 ## 3 AKAM NasdaqGS Akamai Technologies, Inc. 63.67 ## 4 AMZN NasdaqGS Amazon.com, Inc. 847.38 ## 5 ARE NYSE Alexandria Real Estate Equities, Inc. 112.09 ## 6 AREX NasdaqGS Approach Resources Inc. 
2.28 ## MarketCap IPOyear Sector ## 1 $739.45B 1980 Technology ## 2 $1.18B 2006 Health Care ## 3 $11.03B 1999 Miscellaneous ## 4 $404.34B 1997 Consumer Services ## 5 $10.73B NA Consumer Services ## 6 $184.46M 2007 Energy ## Industry Exchange.y ## 1 Computer Manufacturing NASDAQ ## 2 Biotechnology: Biological Products (No Diagnostic Substances) NASDAQ ## 3 Business Services NASDAQ ## 4 Catalog/Specialty Distribution NASDAQ ## 5 Real Estate Investment Trusts NYSE ## 6 Oil & Gas Production NASDAQ ``` An alternate package to download stock tickers en masse is **BatchGetSymbols**. 4\.4 Using the DT package ------------------------- The Data Table package is a very good way to examine tabular data through an R\-driven user interface. ``` library(DT) datatable(co_names, options = list(pageLength = 25)) ``` 4\.5 Web scraping ----------------- Now suppose we want to find the CEOs of these 98 companies. There is no one file with compay CEO listings freely available for download. However, sites like Google Finance have a page for each stock and mention the CEOs name on the page. By writing R code to scrape the data off these pages one by one, we can extract these CEO names and augment the tickers dataframe. The code for this is simple in R. ``` library(stringr) #READ IN THE LIST OF TICKERS tickers = read.table("DSTMAA_data/tickers.csv",header=FALSE,sep=":") n = dim(tickers)[1] names(tickers) = c("Exchange","Symbol") tickers$ceo = NA #PULL CEO NAMES FROM GOOGLE FINANCE (take random 10 firms) for (j in sample(1:n,10)) { url = paste("https://www.google.com/finance?q=",tickers[j,2],sep="") text = readLines(url) idx = grep("Chief Executive",text) if (length(idx)>0) { tickers[j,3] = str_split(text[idx-2],">")[[1]][2] } else { tickers[j,3] = NA } print(tickers[j,]) } ``` ``` ## Exchange Symbol ceo ## 19 NasdaqGS FORR George F. Colony ## Exchange Symbol ceo ## 23 NYSE GDOT Steven W. Streit ## Exchange Symbol ceo ## 6 NasdaqGS AREX J. Ross Craft P.E. ## Exchange Symbol ceo ## 33 NYSE IPI Robert P Jornayvaz III ## Exchange Symbol ceo ## 96 NasdaqGS WERN Derek J. Leathers ## Exchange Symbol ceo ## 93 NasdaqGS VSAT Mark D. Dankberg ## Exchange Symbol ceo ## 94 NasdaqGS VRTU Krishan A. Canekeratne ## Exchange Symbol ceo ## 1 NasdaqGS ACOR Ron Cohen M.D. ## Exchange Symbol ceo ## 4 NasdaqGS AMZN Jeffrey P. Bezos ## Exchange Symbol ceo ## 90 NasdaqGS VASC <NA> ``` ``` #WRITE CEO_NAMES TO CSV write.table(tickers,file="DSTMAA_data/ceo_names.csv", row.names=FALSE,sep=",") ``` The code uses the **stringr** package so that string handling is simplified. After extracting the page, we search for the line in which the words \`\`Chief Executive’’ show up, and we note that the name of the CEO appears two lines before in the html page. A sample web page for Apple Inc is shown here: The final dataframe with CEO names is shown here (the top 6 lines): ``` head(tickers) ``` ``` ## Exchange Symbol ceo ## 1 NasdaqGS ACOR Ron Cohen M.D. ## 2 NasdaqGS AKAM <NA> ## 3 NYSE ARE <NA> ## 4 NasdaqGS AMZN Jeffrey P. Bezos ## 5 NasdaqGS AAPL <NA> ## 6 NasdaqGS AREX J. Ross Craft P.E. ``` 4\.6 Using the *apply* class of functions ----------------------------------------- Sometimes we need to apply a function to many cases, and these case parameters may be supplied in a vector, matrix, or list. This is analogous to looping through a set of values to repeat evaluations of a function using different sets of parameters. We illustrate here by computing the mean returns of all stocks in our sample using the **apply** function. 
The first argument of the function is the data.frame to which it is being applied, the second argument is either 1 (by rows) or 2 (by columns). The third argument is the function being evaluated. ``` tickers = c("AAPL","YHOO","IBM","CSCO","C","GSPC") apply(rets[,1:(length(tickers)-1)],2,mean) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 0.0009989766 0.0002332882 0.0003158174 0.0001430246 -0.0008315260 ``` We see that the function returns the column means of the data set. The variants of the function pertain to what the loop is being applied to. The **lapply** is a function applied to a list, and **sapply** is for matrices and vectors. Likewise, **mapply** uses multiple arguments. To cross check, we can simply use the **colMeans** function: ``` colMeans(rets[,1:(length(tickers)-1)]) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 0.0009989766 0.0002332882 0.0003158174 0.0001430246 -0.0008315260 ``` As we see, this result is verified. 4\.7 Getting interest rate data from FRED ----------------------------------------- In finance, data on interest rates is widely used. An authoritative source of data on interest rates is FRED (Federal Reserve Economic Data), maintained by the St. Louis Federal Reserve Bank, and is warehoused at the following web site: <https://research.stlouisfed.org/fred2/>. Let’s assume that we want to download the data using R from FRED directly. To do this we need to write some custom code. There used to be a package for this but since the web site changed, it has been updated but does not work properly. Still, see that it is easy to roll your own code quite easily in R. ``` #FUNCTION TO READ IN CSV FILES FROM FRED #Enter SeriesID as a text string readFRED = function(SeriesID) { url = paste("https://research.stlouisfed.org/fred2/series/", SeriesID, "/downloaddata/",SeriesID,".csv",sep="") data = readLines(url) n = length(data) data = data[2:n] n = length(data) df = matrix(0,n,2) #top line is header for (j in 1:n) { tmp = strsplit(data[j],",") df[j,1] = tmp[[1]][1] df[j,2] = tmp[[1]][2] } rate = as.numeric(df[,2]) idx = which(rate>0) idx = setdiff(seq(1,n),idx) rate[idx] = -99 date = df[,1] df = data.frame(date,rate) names(df)[2] = SeriesID result = df } ``` ### 4\.7\.1 Using the custom function Now, we provide a list of economic time series and download data accordingly using the function above. Note that we also join these individual series using the data as index. We download constant maturity interest rates (yields) starting from a maturity of one month (DGS1MO) to a maturity of thirty years (DGS30\). 
``` #EXTRACT TERM STRUCTURE DATA FOR ALL RATES FROM 1 MO to 30 YRS FROM FRED id_list = c("DGS1MO","DGS3MO","DGS6MO","DGS1","DGS2","DGS3", "DGS5","DGS7","DGS10","DGS20","DGS30") k = 0 for (id in id_list) { out = readFRED(id) if (k>0) { rates = merge(rates,out,"date",all=TRUE) } else { rates = out } k = k + 1 } head(rates) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 1 2001-07-31 3.67 3.54 3.47 3.53 3.79 4.06 4.57 4.86 5.07 5.61 ## 2 2001-08-01 3.65 3.53 3.47 3.56 3.83 4.09 4.62 4.90 5.11 5.63 ## 3 2001-08-02 3.65 3.53 3.46 3.57 3.89 4.17 4.69 4.97 5.17 5.68 ## 4 2001-08-03 3.63 3.52 3.47 3.57 3.91 4.22 4.72 4.99 5.20 5.70 ## 5 2001-08-06 3.62 3.52 3.47 3.56 3.88 4.17 4.71 4.99 5.19 5.70 ## 6 2001-08-07 3.63 3.52 3.47 3.56 3.90 4.19 4.72 5.00 5.20 5.71 ## DGS30 ## 1 5.51 ## 2 5.53 ## 3 5.57 ## 4 5.59 ## 5 5.59 ## 6 5.60 ``` ### 4\.7\.2 Organize the data by date Having done this, we now have a data.frame called **rates** containing all the time series we are interested in. We now convert the dates into numeric strings and sort the data.frame by date. ``` #CONVERT ALL DATES TO NUMERIC AND SORT BY DATE dates = rates[,1] library(stringr) dates = as.numeric(str_replace_all(dates,"-","")) res = sort(dates,index.return=TRUE) for (j in 1:dim(rates)[2]) { rates[,j] = rates[res$ix,j] } head(rates) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 1 1962-01-02 NA NA NA 3.22 NA 3.70 3.88 NA 4.06 NA ## 2 1962-01-03 NA NA NA 3.24 NA 3.70 3.87 NA 4.03 NA ## 3 1962-01-04 NA NA NA 3.24 NA 3.69 3.86 NA 3.99 NA ## 4 1962-01-05 NA NA NA 3.26 NA 3.71 3.89 NA 4.02 NA ## 5 1962-01-08 NA NA NA 3.31 NA 3.71 3.91 NA 4.03 NA ## 6 1962-01-09 NA NA NA 3.32 NA 3.74 3.93 NA 4.05 NA ## DGS30 ## 1 NA ## 2 NA ## 3 NA ## 4 NA ## 5 NA ## 6 NA ``` ### 4\.7\.3 Handling missing values Note that there are missing values, denoted by **NA**. Also there are rows with “\-99” values and we can clean those out too but they represent periods when there was no yield available of that maturity, so we leave this in. ``` #REMOVE THE NA ROWS idx = which(rowSums(is.na(rates))==0) rates2 = rates[idx,] print(head(rates2)) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 10326 2001-07-31 3.67 3.54 3.47 3.53 3.79 4.06 4.57 4.86 5.07 5.61 ## 10327 2001-08-01 3.65 3.53 3.47 3.56 3.83 4.09 4.62 4.90 5.11 5.63 ## 10328 2001-08-02 3.65 3.53 3.46 3.57 3.89 4.17 4.69 4.97 5.17 5.68 ## 10329 2001-08-03 3.63 3.52 3.47 3.57 3.91 4.22 4.72 4.99 5.20 5.70 ## 10330 2001-08-06 3.62 3.52 3.47 3.56 3.88 4.17 4.71 4.99 5.19 5.70 ## 10331 2001-08-07 3.63 3.52 3.47 3.56 3.90 4.19 4.72 5.00 5.20 5.71 ## DGS30 ## 10326 5.51 ## 10327 5.53 ## 10328 5.57 ## 10329 5.59 ## 10330 5.59 ## 10331 5.60 ``` 4\.8 Cross\-Sectional Data (an example) --------------------------------------- 1. A great resource for data sets in corporate finance is on Aswath Damodaran’s web site, see: <http://people.stern.nyu.edu/adamodar/New_Home_Page/data.html> 2. Financial statement data sets are available at: [http://www.sec.gov/dera/data/financial\-statement\-data\-sets.html](http://www.sec.gov/dera/data/financial-statement-data-sets.html) 3. And another comprehensive data source: <http://fisher.osu.edu/fin/fdf/osudata.htm> 4. 
Open government data: <https://www.data.gov/finance/> Let’s read in the list of failed banks: <http://www.fdic.gov/bank/individual/failed/banklist.csv> ``` #download.file(url="http://www.fdic.gov/bank/individual/ #failed/banklist.csv",destfile="failed_banks.csv") ``` (This does not work, and has been an issue for a while.) ### 4\.8\.1 Access file from the web using the *readLines* function You can also read in the data using **readLines** but then further work is required to clean it up, but it works well in downloading the data. ``` url = "https://www.fdic.gov/bank/individual/failed/banklist.csv" data = readLines(url) head(data) ``` ``` ## [1] "Bank Name,City,ST,CERT,Acquiring Institution,Closing Date,Updated Date" ## [2] "Proficio Bank,Cottonwood Heights,UT,35495,Cache Valley Bank,3-Mar-17,14-Mar-17" ## [3] "Seaway Bank and Trust Company,Chicago,IL,19328,State Bank of Texas,27-Jan-17,17-Feb-17" ## [4] "Harvest Community Bank,Pennsville,NJ,34951,First-Citizens Bank & Trust Company,13-Jan-17,17-Feb-17" ## [5] "Allied Bank,Mulberry,AR,91,Today's Bank,23-Sep-16,17-Nov-16" ## [6] "The Woodbury Banking Company,Woodbury,GA,11297,United Bank,19-Aug-16,17-Nov-16" ``` #### 4\.8\.1\.1 Or, read the file from disk It may be simpler to just download the data and read it in from the csv file: ``` data = read.csv("DSTMAA_data/banklist.csv",header=TRUE) print(names(data)) ``` ``` ## [1] "Bank.Name" "City" "ST" ## [4] "CERT" "Acquiring.Institution" "Closing.Date" ## [7] "Updated.Date" ``` This gives a data.frame which is easy to work with. We will illustrate some interesting ways in which to manipulate this data. ### 4\.8\.2 Failed banks by State Suppose we want to get subtotals of how many banks failed by state. First add a column of ones to the data.frame. ``` print(head(data)) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date ## 1 North Shore Bank, FSB 31-May-13 31-May-13 ## 2 Western State Bank 14-May-13 20-May-13 ## 3 Synovus Bank 10-May-13 21-May-13 ## 4 Capital Bank, N.A. 10-May-13 14-May-13 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 ``` ``` data$count = 1 print(head(data)) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date count ## 1 North Shore Bank, FSB 31-May-13 31-May-13 1 ## 2 Western State Bank 14-May-13 20-May-13 1 ## 3 Synovus Bank 10-May-13 21-May-13 1 ## 4 Capital Bank, N.A. 10-May-13 14-May-13 1 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 1 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 1 ``` #### 4\.8\.2\.1 Check for missing data It’s good to check that there is no missing data. ``` any(is.na(data)) ``` ``` ## [1] FALSE ``` #### 4\.8\.2\.2 Sort by State Now we sort the data by state to see how many there are. 
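Base R's `table()` function tallies the failed banks by state in a single line, which is a useful cross-check for the **aggregate** subtotals computed below (a quick preview on the same data object; output not shown):

```
#ONE-LINE TALLY OF FAILED BANKS BY STATE (cross-check for the subtotals below)
head(table(data$ST))
```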
``` res = sort(as.matrix(data$ST),index.return=TRUE) print(head(data[res$ix,])) ``` ``` ## Bank.Name City ST CERT ## 42 Alabama Trust Bank, National Association Sylacauga AL 35224 ## 126 Superior Bank Birmingham AL 17750 ## 127 Nexity Bank Birmingham AL 19794 ## 279 First Lowndes Bank Fort Deposit AL 24957 ## 318 New South Federal Savings Bank Irondale AL 32276 ## 375 CapitalSouth Bank Birmingham AL 22130 ## Acquiring.Institution Closing.Date Updated.Date count ## 42 Southern States Bank 18-May-12 20-May-13 1 ## 126 Superior Bank, National Association 15-Apr-11 30-Nov-12 1 ## 127 AloStar Bank of Commerce 15-Apr-11 4-Sep-12 1 ## 279 First Citizens Bank 19-Mar-10 23-Aug-12 1 ## 318 Beal Bank 18-Dec-09 23-Aug-12 1 ## 375 IBERIABANK 21-Aug-09 15-Jan-13 1 ``` ``` print(head(sort(unique(data$ST)))) ``` ``` ## [1] AL AR AZ CA CO CT ## 44 Levels: AL AR AZ CA CO CT FL GA HI IA ID IL IN KS KY LA MA MD MI ... WY ``` ``` print(length(unique(data$ST))) ``` ``` ## [1] 44 ``` ### 4\.8\.3 Use the *aggregate* function (for subtotals) We can directly use the **aggregate** function to get subtotals by state. ``` head(aggregate(count ~ ST,data,sum),10) ``` ``` ## ST count ## 1 AL 7 ## 2 AR 3 ## 3 AZ 15 ## 4 CA 40 ## 5 CO 9 ## 6 CT 1 ## 7 FL 71 ## 8 GA 89 ## 9 HI 1 ## 10 IA 1 ``` #### 4\.8\.3\.1 Data by acquiring bank And another example, subtotal by acquiring bank. Note how we take the subtotals into another data.frame, which is then sorted and returned in order using the index of the sort. ``` acq = aggregate(count~Acquiring.Institution,data,sum) idx = sort(as.matrix(acq$count),decreasing=TRUE,index.return=TRUE)$ix head(acq[idx,],15) ``` ``` ## Acquiring.Institution count ## 158 No Acquirer 30 ## 208 State Bank and Trust Company 12 ## 9 Ameris Bank 10 ## 245 U.S. Bank N.A. 9 ## 25 Bank of the Ozarks 7 ## 41 Centennial Bank 7 ## 61 Community & Southern Bank 7 ## 212 Stearns Bank, N.A. 7 ## 43 CenterState Bank of Florida, N.A. 6 ## 44 Central Bank 6 ## 103 First-Citizens Bank & Trust Company 6 ## 143 MB Financial Bank, N.A. 6 ## 48 CertusBank, National Association 5 ## 58 Columbia State Bank 5 ## 178 Premier American Bank, N.A. 5 ``` 4\.9 Handling dates with *lubridate* ------------------------------------ Suppose we want to take the preceding data.frame of failed banks and aggregate the data by year, or month, etc. In this case, it us useful to use a dates package. Another useful tool developed by Hadley Wickham is the **lubridate** package. ``` head(data) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date count ## 1 North Shore Bank, FSB 31-May-13 31-May-13 1 ## 2 Western State Bank 14-May-13 20-May-13 1 ## 3 Synovus Bank 10-May-13 21-May-13 1 ## 4 Capital Bank, N.A. 
10-May-13 14-May-13 1 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 1 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 1 ``` ``` library(lubridate) ``` ``` ## ## Attaching package: 'lubridate' ``` ``` ## The following object is masked from 'package:base': ## ## date ``` ``` data$Cdate = dmy(data$Closing.Date) data$Cyear = year(data$Cdate) fd = aggregate(count~Cyear,data,sum) print(fd) ``` ``` ## Cyear count ## 1 2000 2 ## 2 2001 4 ## 3 2002 11 ## 4 2003 3 ## 5 2004 4 ## 6 2007 3 ## 7 2008 25 ## 8 2009 140 ## 9 2010 157 ## 10 2011 92 ## 11 2012 51 ## 12 2013 14 ``` ``` plot(count~Cyear,data=fd,type="l",lwd=3,col="red",xlab="Year") grid(lwd=3) ``` ### 4\.9\.1 By Month Let’s do the same thing by month to see if there is seasonality ``` data$Cmonth = month(data$Cdate) fd = aggregate(count~Cmonth,data,sum) print(fd) ``` ``` ## Cmonth count ## 1 1 44 ## 2 2 40 ## 3 3 38 ## 4 4 56 ## 5 5 36 ## 6 6 31 ## 7 7 71 ## 8 8 36 ## 9 9 35 ## 10 10 53 ## 11 11 34 ## 12 12 32 ``` ``` plot(count~Cmonth,data=fd,type="l",lwd=3,col="green"); grid(lwd=3) ``` ### 4\.9\.2 By Day There does not appear to be any seasonality. What about day? ``` data$Cday = day(data$Cdate) fd = aggregate(count~Cday,data,sum) print(fd) ``` ``` ## Cday count ## 1 1 8 ## 2 2 17 ## 3 3 3 ## 4 4 21 ## 5 5 15 ## 6 6 12 ## 7 7 18 ## 8 8 13 ## 9 9 9 ## 10 10 13 ## 11 11 17 ## 12 12 10 ## 13 13 10 ## 14 14 20 ## 15 15 20 ## 16 16 20 ## 17 17 21 ## 18 18 20 ## 19 19 28 ## 20 20 25 ## 21 21 17 ## 22 22 18 ## 23 23 26 ## 24 24 17 ## 25 25 11 ## 26 26 15 ## 27 27 16 ## 28 28 16 ## 29 29 15 ## 30 30 28 ## 31 31 7 ``` ``` plot(count~Cday,data=fd,type="l",lwd=3,col="blue"); grid(lwd=3) ``` Definitely, counts are lower at the start and end of the month! 4\.10 Using the *data.table* package ------------------------------------ This is an incredibly useful package that was written by Matt Dowle. It essentially allows your data.frame to operate as a database. It enables very fast handling of massive quantities of data, and much of this technology is now embedded in the IP of the company called h2o: <http://h2o.ai/> The data.table cheat sheet is here: [https://s3\.amazonaws.com/assets.datacamp.com/img/blog/data\+table\+cheat\+sheet.pdf](https://s3.amazonaws.com/assets.datacamp.com/img/blog/data+table+cheat+sheet.pdf) ### 4\.10\.1 California Crime Statistics We start with some freely downloadable crime data statistics for California. We placed the data in a csv file which is then easy to read in to R. ``` data = read.csv("DSTMAA_data/CA_Crimes_Data_2004-2013.csv",header=TRUE) ``` It is easy to convert this into a data.table. ``` library(data.table) ``` ``` ## ## Attaching package: 'data.table' ``` ``` ## The following objects are masked from 'package:lubridate': ## ## hour, mday, month, quarter, wday, week, yday, year ``` ``` ## The following object is masked from 'package:xts': ## ## last ``` ``` D_T = as.data.table(data) print(class(D_T)) ``` ``` ## [1] "data.table" "data.frame" ``` Note, it is still a **data.frame** also. Hence, it inherits its properties from the **data.frame** class. ### 4\.10\.2 Examine the *data.table* Let’s see how it works, noting that the syntax is similar to that for data.frames as much as possible. We print only a part of the names list. And do not go through each and everyone. 
``` print(dim(D_T)) ``` ``` ## [1] 7301 69 ``` ``` print(names(D_T)) ``` ``` ## [1] "Year" "County" "NCICCode" ## [4] "Violent_sum" "Homicide_sum" "ForRape_sum" ## [7] "Robbery_sum" "AggAssault_sum" "Property_sum" ## [10] "Burglary_sum" "VehicleTheft_sum" "LTtotal_sum" ## [13] "ViolentClr_sum" "HomicideClr_sum" "ForRapeClr_sum" ## [16] "RobberyClr_sum" "AggAssaultClr_sum" "PropertyClr_sum" ## [19] "BurglaryClr_sum" "VehicleTheftClr_sum" "LTtotalClr_sum" ## [22] "TotalStructural_sum" "TotalMobile_sum" "TotalOther_sum" ## [25] "GrandTotal_sum" "GrandTotClr_sum" "RAPact_sum" ## [28] "ARAPact_sum" "FROBact_sum" "KROBact_sum" ## [31] "OROBact_sum" "SROBact_sum" "HROBnao_sum" ## [34] "CHROBnao_sum" "GROBnao_sum" "CROBnao_sum" ## [37] "RROBnao_sum" "BROBnao_sum" "MROBnao_sum" ## [40] "FASSact_sum" "KASSact_sum" "OASSact_sum" ## [43] "HASSact_sum" "FEBURact_Sum" "UBURact_sum" ## [46] "RESDBUR_sum" "RNBURnao_sum" "RDBURnao_sum" ## [49] "RUBURnao_sum" "NRESBUR_sum" "NNBURnao_sum" ## [52] "NDBURnao_sum" "NUBURnao_sum" "MVTact_sum" ## [55] "TMVTact_sum" "OMVTact_sum" "PPLARnao_sum" ## [58] "PSLARnao_sum" "SLLARnao_sum" "MVLARnao_sum" ## [61] "MVPLARnao_sum" "BILARnao_sum" "FBLARnao_sum" ## [64] "COMLARnao_sum" "AOLARnao_sum" "LT400nao_sum" ## [67] "LT200400nao_sum" "LT50200nao_sum" "LT50nao_sum" ``` ``` head(D_T) ``` ``` ## Year County NCICCode Violent_sum ## 1: 2004 Alameda County Alameda Co. Sheriff's Department 461 ## 2: 2004 Alameda County Alameda 342 ## 3: 2004 Alameda County Albany 42 ## 4: 2004 Alameda County Berkeley 557 ## 5: 2004 Alameda County Emeryville 83 ## 6: 2004 Alameda County Fremont 454 ## Homicide_sum ForRape_sum Robbery_sum AggAssault_sum Property_sum ## 1: 5 29 174 253 3351 ## 2: 1 12 89 240 2231 ## 3: 1 3 29 9 718 ## 4: 4 17 355 181 8611 ## 5: 2 4 53 24 1066 ## 6: 5 24 165 260 5723 ## Burglary_sum VehicleTheft_sum LTtotal_sum ViolentClr_sum ## 1: 731 947 1673 170 ## 2: 376 333 1522 244 ## 3: 130 142 446 10 ## 4: 1382 1128 6101 169 ## 5: 94 228 744 15 ## 6: 939 881 3903 232 ## HomicideClr_sum ForRapeClr_sum RobberyClr_sum AggAssaultClr_sum ## 1: 5 4 43 118 ## 2: 1 8 45 190 ## 3: 0 1 3 6 ## 4: 1 6 72 90 ## 5: 1 0 8 6 ## 6: 2 18 51 161 ## PropertyClr_sum BurglaryClr_sum VehicleTheftClr_sum LTtotalClr_sum ## 1: 275 58 129 88 ## 2: 330 65 57 208 ## 3: 53 24 2 27 ## 4: 484 58 27 399 ## 5: 169 14 4 151 ## 6: 697 84 135 478 ## TotalStructural_sum TotalMobile_sum TotalOther_sum GrandTotal_sum ## 1: 7 23 3 33 ## 2: 5 1 9 15 ## 3: 3 0 5 8 ## 4: 21 21 17 59 ## 5: 0 1 0 1 ## 6: 8 10 3 21 ## GrandTotClr_sum RAPact_sum ARAPact_sum FROBact_sum KROBact_sum ## 1: 4 27 2 53 17 ## 2: 5 12 0 18 4 ## 3: 0 3 0 9 1 ## 4: 15 12 5 126 20 ## 5: 0 4 0 13 6 ## 6: 5 23 1 64 22 ## OROBact_sum SROBact_sum HROBnao_sum CHROBnao_sum GROBnao_sum ## 1: 9 95 81 19 6 ## 2: 11 56 49 14 0 ## 3: 1 18 21 1 0 ## 4: 71 138 201 58 6 ## 5: 1 33 33 11 2 ## 6: 6 73 89 19 3 ## CROBnao_sum RROBnao_sum BROBnao_sum MROBnao_sum FASSact_sum KASSact_sum ## 1: 13 17 13 25 17 35 ## 2: 3 9 4 10 8 23 ## 3: 1 2 3 1 0 3 ## 4: 2 24 22 42 15 16 ## 5: 1 1 0 5 4 0 ## 6: 28 2 12 12 19 56 ## OASSact_sum HASSact_sum FEBURact_Sum UBURact_sum RESDBUR_sum ## 1: 132 69 436 295 538 ## 2: 86 123 183 193 213 ## 3: 4 2 61 69 73 ## 4: 73 77 748 634 962 ## 5: 9 11 61 33 36 ## 6: 120 65 698 241 593 ## RNBURnao_sum RDBURnao_sum RUBURnao_sum NRESBUR_sum NNBURnao_sum ## 1: 131 252 155 193 76 ## 2: 40 67 106 163 31 ## 3: 11 60 2 57 25 ## 4: 225 418 319 420 171 ## 5: 8 25 3 58 40 ## 6: 106 313 174 346 76 ## NDBURnao_sum NUBURnao_sum MVTact_sum TMVTact_sum 
OMVTact_sum
## 1: 33 84 879 2 66
## 2: 18 114 250 59 24
## 3: 31 1 116 21 5
## 4: 112 137 849 169 110
## 5: 14 4 182 33 13
## 6: 34 236 719 95 67
## PPLARnao_sum PSLARnao_sum SLLARnao_sum MVLARnao_sum MVPLARnao_sum
## 1: 14 14 76 1048 56
## 2: 0 1 176 652 14
## 3: 1 2 27 229 31
## 4: 22 34 376 2373 1097
## 5: 17 2 194 219 122
## 6: 3 26 391 2269 325
## BILARnao_sum FBLARnao_sum COMLARnao_sum AOLARnao_sum LT400nao_sum
## 1: 54 192 5 214 681
## 2: 176 172 8 323 371
## 3: 47 60 1 48 76
## 4: 374 539 7 1279 1257
## 5: 35 44 0 111 254
## 6: 79 266 13 531 1298
## LT200400nao_sum LT50200nao_sum LT50nao_sum
## 1: 301 308 383
## 2: 274 336 541
## 3: 101 120 149
## 4: 1124 1178 2542
## 5: 110 141 239
## 6: 663 738 1204
```

### 4\.10\.3 Indexing the *data.table*

A nice feature of the data.table is that it can be indexed, i.e., resorted on the fly by making any column in the database the key. Once that is done, it becomes easy to compute subtotals, and to generate plots from these subtotals as well. The data table can be used like a database, and you can directly apply summarization functions to it. Essentially, a query is governed by a format summarized as (\\(i\\),\\(j\\),by): apply some rule to the rows \\(i\\), then to some columns \\(j\\), and optionally group by some columns. We can see how this works with the following example.

```
setkey(D_T,Year)
crime = 6
res = D_T[,sum(ForRape_sum),by=Year]
print(res)
```

```
## Year V1
## 1: 2004 9598
## 2: 2005 9345
## 3: 2006 9213
## 4: 2007 9047
## 5: 2008 8906
## 6: 2009 8698
## 7: 2010 8325
## 8: 2011 7678
## 9: 2012 7828
## 10: 2013 7459
```

```
class(res)
```

```
## [1] "data.table" "data.frame"
```

The data table was operated on over all rows, i.e., all \\(i\\), and the \\(j\\) column we are interested in was the “ForRape\_sum” column, which we want to total by Year. This returns a summary of only the Year and the total number of rapes per year. Note that the output is itself of type data.table, which includes the class data.frame also.

### 4\.10\.4 Plotting from the *data.table*

Next, we plot the results from the **data.table** in the same way as we would for a **data.frame**.

```
plot(res$Year,res$V1,type="b",lwd=3,col="blue",
     xlab="Year",ylab="Forced Rape")
```

#### 4\.10\.4\.1 By County

Repeat the process looking at crime (Rape) totals by county.
``` setkey(D_T,County) res = D_T[,sum(ForRape_sum),by=County] print(res) ``` ``` ## County V1 ## 1: Alameda County 4979 ## 2: Alpine County 15 ## 3: Amador County 153 ## 4: Butte County 930 ## 5: Calaveras County 148 ## 6: Colusa County 60 ## 7: Contra Costa County 1848 ## 8: Del Norte County 236 ## 9: El Dorado County 351 ## 10: Fresno County 1960 ## 11: Glenn County 56 ## 12: Humboldt County 495 ## 13: Imperial County 263 ## 14: Inyo County 52 ## 15: Kern County 1935 ## 16: Kings County 356 ## 17: Lake County 262 ## 18: Lassen County 96 ## 19: Los Angeles County 21483 ## 20: Madera County 408 ## 21: Marin County 452 ## 22: Mariposa County 46 ## 23: Mendocino County 328 ## 24: Merced County 738 ## 25: Modoc County 64 ## 26: Mono County 61 ## 27: Monterey County 1062 ## 28: Napa County 354 ## 29: Nevada County 214 ## 30: Orange County 4509 ## 31: Placer County 611 ## 32: Plumas County 115 ## 33: Riverside County 4321 ## 34: Sacramento County 4084 ## 35: San Benito County 151 ## 36: San Bernardino County 4900 ## 37: San Diego County 7378 ## 38: San Francisco County 1498 ## 39: San Joaquin County 1612 ## 40: San Luis Obispo County 900 ## 41: San Mateo County 1381 ## 42: Santa Barbara County 1352 ## 43: Santa Clara County 3832 ## 44: Santa Cruz County 865 ## 45: Shasta County 1089 ## 46: Sierra County 2 ## 47: Siskiyou County 143 ## 48: Solano County 1150 ## 49: Sonoma County 1558 ## 50: Stanislaus County 1348 ## 51: Sutter County 274 ## 52: Tehama County 165 ## 53: Trinity County 28 ## 54: Tulare County 1114 ## 55: Tuolumne County 160 ## 56: Ventura County 1146 ## 57: Yolo County 729 ## 58: Yuba County 277 ## County V1 ``` ``` setnames(res,"V1","Rapes") County_Rapes = as.data.table(res) #This is not really needed setkey(County_Rapes,Rapes) print(County_Rapes) ``` ``` ## County Rapes ## 1: Sierra County 2 ## 2: Alpine County 15 ## 3: Trinity County 28 ## 4: Mariposa County 46 ## 5: Inyo County 52 ## 6: Glenn County 56 ## 7: Colusa County 60 ## 8: Mono County 61 ## 9: Modoc County 64 ## 10: Lassen County 96 ## 11: Plumas County 115 ## 12: Siskiyou County 143 ## 13: Calaveras County 148 ## 14: San Benito County 151 ## 15: Amador County 153 ## 16: Tuolumne County 160 ## 17: Tehama County 165 ## 18: Nevada County 214 ## 19: Del Norte County 236 ## 20: Lake County 262 ## 21: Imperial County 263 ## 22: Sutter County 274 ## 23: Yuba County 277 ## 24: Mendocino County 328 ## 25: El Dorado County 351 ## 26: Napa County 354 ## 27: Kings County 356 ## 28: Madera County 408 ## 29: Marin County 452 ## 30: Humboldt County 495 ## 31: Placer County 611 ## 32: Yolo County 729 ## 33: Merced County 738 ## 34: Santa Cruz County 865 ## 35: San Luis Obispo County 900 ## 36: Butte County 930 ## 37: Monterey County 1062 ## 38: Shasta County 1089 ## 39: Tulare County 1114 ## 40: Ventura County 1146 ## 41: Solano County 1150 ## 42: Stanislaus County 1348 ## 43: Santa Barbara County 1352 ## 44: San Mateo County 1381 ## 45: San Francisco County 1498 ## 46: Sonoma County 1558 ## 47: San Joaquin County 1612 ## 48: Contra Costa County 1848 ## 49: Kern County 1935 ## 50: Fresno County 1960 ## 51: Santa Clara County 3832 ## 52: Sacramento County 4084 ## 53: Riverside County 4321 ## 54: Orange County 4509 ## 55: San Bernardino County 4900 ## 56: Alameda County 4979 ## 57: San Diego County 7378 ## 58: Los Angeles County 21483 ## County Rapes ``` #### 4\.10\.4\.2 Barplot of crime Now, we can go ahead and plot it using a different kind of plot, a horizontal barplot. 
``` par(las=2) #makes label horizontal #par(mar=c(3,4,2,1)) #increase y-axis margins barplot(County_Rapes$Rapes, names.arg=County_Rapes$County, horiz=TRUE, cex.names=0.4, col=8) ``` ### 4\.10\.5 Bay Area Bike Share data We show some other features using a different data set, the bike information on Silicon Valley routes for the Bike Share program. This is a much larger data set. ``` trips = read.csv("DSTMAA_data/201408_trip_data.csv",header=TRUE) print(names(trips)) ``` ``` ## [1] "Trip.ID" "Duration" "Start.Date" ## [4] "Start.Station" "Start.Terminal" "End.Date" ## [7] "End.Station" "End.Terminal" "Bike.." ## [10] "Subscriber.Type" "Zip.Code" ``` #### 4\.10\.5\.1 Summarize Trips Data Next we print some descriptive statistics. ``` print(length(trips$Trip.ID)) ``` ``` ## [1] 171792 ``` ``` print(summary(trips$Duration/60)) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 1.000 5.750 8.617 18.880 12.680 11940.000 ``` ``` print(mean(trips$Duration/60,trim=0.01)) ``` ``` ## [1] 13.10277 ``` #### 4\.10\.5\.2 Start and End Bike Stations Now, we quickly check how many start and end stations there are. ``` start_stn = unique(trips$Start.Terminal) print(sort(start_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(start_stn)) ``` ``` ## [1] 70 ``` ``` end_stn = unique(trips$End.Terminal) print(sort(end_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(end_stn)) ``` ``` ## [1] 70 ``` As we can see, there are quite a few stations in the bike share program where riders can pick up and drop off bikes. The trip duration information is stored in seconds, so has been converted to minutes in the code above. 4\.11 The *plyr* package family ------------------------------- This package by Hadley Wickham is useful for applying functions to tables of data, i.e., data.frames. Since we may want to write custom functions, this is a highly useful package. R users often select either the **data.table** or the **plyr** class of packages for handling data.frames as databases. The latest incarnation is the **dplyr** package, which focuses only on data.frames. ``` require(plyr) ``` ``` ## Loading required package: plyr ``` ``` ## ## Attaching package: 'plyr' ``` ``` ## The following object is masked from 'package:lubridate': ## ## here ``` ``` ## The following object is masked from 'package:corrgram': ## ## baseball ``` ``` library(dplyr) ``` ``` ## ------------------------------------------------------------------------- ``` ``` ## data.table + dplyr code now lives in dtplyr. ## Please library(dtplyr)! 
``` ``` ## ------------------------------------------------------------------------- ``` ``` ## ## Attaching package: 'dplyr' ``` ``` ## The following objects are masked from 'package:plyr': ## ## arrange, count, desc, failwith, id, mutate, rename, summarise, ## summarize ``` ``` ## The following objects are masked from 'package:data.table': ## ## between, last ``` ``` ## The following objects are masked from 'package:lubridate': ## ## intersect, setdiff, union ``` ``` ## The following objects are masked from 'package:xts': ## ## first, last ``` ``` ## The following objects are masked from 'package:stats': ## ## filter, lag ``` ``` ## The following objects are masked from 'package:base': ## ## intersect, setdiff, setequal, union ``` ### 4\.11\.1 Filter the data One of the useful things you can use is the **filter** function, to subset the rows of the dataset you might want to select for further analysis. ``` res = filter(trips,Start.Terminal==50,End.Terminal==51) head(res) ``` ``` ## Trip.ID Duration Start.Date Start.Station ## 1 432024 3954 8/30/2014 14:46 Harry Bridges Plaza (Ferry Building) ## 2 432022 4120 8/30/2014 14:44 Harry Bridges Plaza (Ferry Building) ## 3 431895 1196 8/30/2014 12:04 Harry Bridges Plaza (Ferry Building) ## 4 431891 1249 8/30/2014 12:03 Harry Bridges Plaza (Ferry Building) ## 5 430408 145 8/29/2014 9:08 Harry Bridges Plaza (Ferry Building) ## 6 429148 862 8/28/2014 13:47 Harry Bridges Plaza (Ferry Building) ## Start.Terminal End.Date End.Station End.Terminal Bike.. ## 1 50 8/30/2014 15:52 Embarcadero at Folsom 51 306 ## 2 50 8/30/2014 15:52 Embarcadero at Folsom 51 659 ## 3 50 8/30/2014 12:24 Embarcadero at Folsom 51 556 ## 4 50 8/30/2014 12:23 Embarcadero at Folsom 51 621 ## 5 50 8/29/2014 9:11 Embarcadero at Folsom 51 400 ## 6 50 8/28/2014 14:02 Embarcadero at Folsom 51 589 ## Subscriber.Type Zip.Code ## 1 Customer 94952 ## 2 Customer 94952 ## 3 Customer 11238 ## 4 Customer 11238 ## 5 Subscriber 94070 ## 6 Subscriber 94107 ``` ### 4\.11\.2 Sorting using the *arrange* function The **arrange** function is useful for sorting by any number of columns as needed. Here we sort by the start and end stations. ``` trips_sorted = arrange(trips,Start.Station,End.Station) head(trips_sorted) ``` ``` ## Trip.ID Duration Start.Date Start.Station Start.Terminal ## 1 426408 120 8/27/2014 7:40 2nd at Folsom 62 ## 2 411496 21183 8/16/2014 13:36 2nd at Folsom 62 ## 3 396676 3707 8/6/2014 11:38 2nd at Folsom 62 ## 4 385761 123 7/29/2014 19:52 2nd at Folsom 62 ## 5 364633 6395 7/15/2014 13:39 2nd at Folsom 62 ## 6 362776 9433 7/14/2014 13:36 2nd at Folsom 62 ## End.Date End.Station End.Terminal Bike.. Subscriber.Type ## 1 8/27/2014 7:42 2nd at Folsom 62 527 Subscriber ## 2 8/16/2014 19:29 2nd at Folsom 62 508 Customer ## 3 8/6/2014 12:40 2nd at Folsom 62 109 Customer ## 4 7/29/2014 19:55 2nd at Folsom 62 421 Subscriber ## 5 7/15/2014 15:26 2nd at Folsom 62 448 Customer ## 6 7/14/2014 16:13 2nd at Folsom 62 454 Customer ## Zip.Code ## 1 94107 ## 2 94105 ## 3 31200 ## 4 94107 ## 5 2184 ## 6 2184 ``` ### 4\.11\.3 Reverse order sort The sort can also be done in reverse order as follows. 
```
trips_sorted = arrange(trips,desc(Start.Station),End.Station)
head(trips_sorted)
```

```
## Trip.ID Duration Start.Date
## 1 416755 285 8/20/2014 11:37
## 2 411270 257 8/16/2014 7:03
## 3 410269 286 8/15/2014 10:34
## 4 405273 382 8/12/2014 14:27
## 5 398372 401 8/7/2014 10:10
## 6 393012 317 8/4/2014 10:59
## Start.Station Start.Terminal
## 1 Yerba Buena Center of the Arts (3rd @ Howard) 68
## 2 Yerba Buena Center of the Arts (3rd @ Howard) 68
## 3 Yerba Buena Center of the Arts (3rd @ Howard) 68
## 4 Yerba Buena Center of the Arts (3rd @ Howard) 68
## 5 Yerba Buena Center of the Arts (3rd @ Howard) 68
## 6 Yerba Buena Center of the Arts (3rd @ Howard) 68
## End.Date End.Station End.Terminal Bike.. Subscriber.Type
## 1 8/20/2014 11:42 2nd at Folsom 62 383 Customer
## 2 8/16/2014 7:07 2nd at Folsom 62 614 Subscriber
## 3 8/15/2014 10:38 2nd at Folsom 62 545 Subscriber
## 4 8/12/2014 14:34 2nd at Folsom 62 344 Customer
## 5 8/7/2014 10:16 2nd at Folsom 62 597 Subscriber
## 6 8/4/2014 11:04 2nd at Folsom 62 367 Subscriber
## Zip.Code
## 1 95060
## 2 94107
## 3 94127
## 4 94110
## 5 94127
## 6 94127
```

### 4\.11\.4 Descriptive statistics

The **dplyr** grammar also offers a fantastic way to do descriptive statistics! First, group the data by start point, and then produce statistics by this group, choosing to count the number of trips starting from each station and the average duration of each trip.

```
byStartStation = group_by(trips,Start.Station)
res = summarise(byStartStation, count=n(), time=mean(Duration)/60)
print(res)
```

```
## # A tibble: 70 x 3
## Start.Station count time
## <fctr> <int> <dbl>
## 1 2nd at Folsom 4165 9.32088
## 2 2nd at South Park 4569 11.60195
## 3 2nd at Townsend 6824 15.14786
## 4 5th at Howard 3183 14.23254
## 5 Adobe on Almaden 360 10.06120
## 6 Arena Green / SAP Center 510 43.82833
## 7 Beale at Market 4293 15.74702
## 8 Broadway at Main 22 54.82121
## 9 Broadway St at Battery St 2433 15.31862
## 10 California Ave Caltrain Station 329 51.30709
## # ... with 60 more rows
```

### 4\.11\.5 Other functions in *dplyr*

Try also the **select()**, **extract()**, **mutate()**, **summarise()**, **sample\_n()**, **sample\_frac()** functions. The **group\_by()** function is particularly useful, as we have seen.

4\.12 Application to IPO Data
-----------------------------

Let's revisit all the stock exchange data from before, where we download the table of firms listed on the NYSE, NASDAQ, and AMEX using the *quantmod* package.

```
library(quantmod)
nasdaq_names = stockSymbols(exchange = "NASDAQ")
```

```
## Fetching NASDAQ symbols...
```

```
nyse_names = stockSymbols(exchange = "NYSE")
```

```
## Fetching NYSE symbols...
```

```
amex_names = stockSymbols(exchange = "AMEX")
```

```
## Fetching AMEX symbols...
```

```
tickers = rbind(nasdaq_names,nyse_names,amex_names)
tickers$Count = 1
print(dim(tickers))
```

```
## [1] 6692 9
```

We then clean off the rows with incomplete data, using the very useful **complete.cases** function.

```
idx = complete.cases(tickers)
df = tickers[idx,]
print(nrow(df))
```

```
## [1] 2198
```
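Since the cleaned table is an ordinary data.frame, the same **dplyr** verbs from the previous section can summarize it along other dimensions as well. For instance, the following small aside (it relies only on the *Sector* and *Count* columns already present in the table; output not shown) counts listings per sector:

```
#NUMBER OF LISTED FIRMS BY SECTOR (using the dplyr verbs from the previous section)
df %>% group_by(Sector) %>% summarise(numListed = sum(Count))
```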
We create a table of the frequency of IPOs by year to see hot and cold IPO markets.

1. First, remove all rows with missing IPO data.
2. Plot IPO Activity with a bar plot. We make sure to label the axes properly.
3. Plot IPO Activity using the **rbokeh** package to make a pretty line plot (see: <https://hafen.github.io/rbokeh/>).

```
library(dplyr)
library(magrittr)
idx = which(!is.na(tickers$IPOyear))
df = tickers[idx,]
res = df %>% group_by(IPOyear) %>% summarise(numIPO = sum(Count))
print(res)
```

```
## # A tibble: 40 x 2
## IPOyear numIPO
## <int> <dbl>
## 1 1972 4
## 2 1973 1
## 3 1980 2
## 4 1981 6
## 5 1982 4
## 6 1983 13
## 7 1984 6
## 8 1985 4
## 9 1986 38
## 10 1987 28
## # ... with 30 more rows
```

```
barplot(res$numIPO,names.arg = res$IPOyear)
```

4\.13 Bokeh plots
-----------------

These are really nice looking but require only simple code. The “hover” feature makes these plots especially appealing.

```
library(rbokeh)
p = figure(width=500,height=300) %>%
  ly_points(IPOyear,numIPO,data=res,hover=c(IPOyear,numIPO)) %>%
  ly_lines(IPOyear,numIPO,data=res)
p
```
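Because **rbokeh** figures are built on the **htmlwidgets** framework, the interactive plot can also be saved as a standalone HTML file and shared outside of R. A minimal example (the file name here is arbitrary):

```
library(htmlwidgets)
saveWidget(p, "ipo_activity.html")   #writes a self-contained interactive HTML file
```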
``` ``` nyse_names = stockSymbols(exchange="NYSE") ``` ``` ## Fetching NYSE symbols... ``` ``` amex_names = stockSymbols(exchange="AMEX") ``` ``` ## Fetching AMEX symbols... ``` We can look at the top of the Nasdaq file. ``` head(nasdaq_names) ``` ``` ## Symbol Name LastSale MarketCap IPOyear ## 1 AAAP Advanced Accelerator Applications S.A. 39.68 $1.72B 2015 ## 2 AAL American Airlines Group, Inc. 41.42 $20.88B NA ## 3 AAME Atlantic American Corporation 3.90 $79.62M NA ## 4 AAOI Applied Optoelectronics, Inc. 51.51 $962.1M 2013 ## 5 AAON AAON, Inc. 36.40 $1.92B NA ## 6 AAPC Atlantic Alliance Partnership Corp. 9.80 $36.13M 2015 ## Sector Industry Exchange ## 1 Health Care Major Pharmaceuticals NASDAQ ## 2 Transportation Air Freight/Delivery Services NASDAQ ## 3 Finance Life Insurance NASDAQ ## 4 Technology Semiconductors NASDAQ ## 5 Capital Goods Industrial Machinery/Components NASDAQ ## 6 Consumer Services Services-Misc. Amusement & Recreation NASDAQ ``` Next we merge all three dataframes for each of the exchanges into one data frame. ``` co_names = rbind(nyse_names,nasdaq_names,amex_names) ``` To see how many rows are there in this merged file, we check dimensions. ``` dim(co_names) ``` ``` ## [1] 6692 8 ``` Finally, use the merge function to combine the ticker symbols file with the exchanges data to extend the tickers file to include the information from the exchanges file. ``` result = merge(tickers,co_names,by="Symbol") head(result) ``` ``` ## Symbol Exchange.x Name LastSale ## 1 AAPL NasdaqGS Apple Inc. 140.94 ## 2 ACOR NasdaqGS Acorda Therapeutics, Inc. 25.35 ## 3 AKAM NasdaqGS Akamai Technologies, Inc. 63.67 ## 4 AMZN NasdaqGS Amazon.com, Inc. 847.38 ## 5 ARE NYSE Alexandria Real Estate Equities, Inc. 112.09 ## 6 AREX NasdaqGS Approach Resources Inc. 2.28 ## MarketCap IPOyear Sector ## 1 $739.45B 1980 Technology ## 2 $1.18B 2006 Health Care ## 3 $11.03B 1999 Miscellaneous ## 4 $404.34B 1997 Consumer Services ## 5 $10.73B NA Consumer Services ## 6 $184.46M 2007 Energy ## Industry Exchange.y ## 1 Computer Manufacturing NASDAQ ## 2 Biotechnology: Biological Products (No Diagnostic Substances) NASDAQ ## 3 Business Services NASDAQ ## 4 Catalog/Specialty Distribution NASDAQ ## 5 Real Estate Investment Trusts NYSE ## 6 Oil & Gas Production NASDAQ ``` An alternate package to download stock tickers en masse is **BatchGetSymbols**. 4\.4 Using the DT package ------------------------- The Data Table package is a very good way to examine tabular data through an R\-driven user interface. ``` library(DT) datatable(co_names, options = list(pageLength = 25)) ``` 4\.5 Web scraping ----------------- Now suppose we want to find the CEOs of these 98 companies. There is no one file with compay CEO listings freely available for download. However, sites like Google Finance have a page for each stock and mention the CEOs name on the page. By writing R code to scrape the data off these pages one by one, we can extract these CEO names and augment the tickers dataframe. The code for this is simple in R. 
``` library(stringr) #READ IN THE LIST OF TICKERS tickers = read.table("DSTMAA_data/tickers.csv",header=FALSE,sep=":") n = dim(tickers)[1] names(tickers) = c("Exchange","Symbol") tickers$ceo = NA #PULL CEO NAMES FROM GOOGLE FINANCE (take random 10 firms) for (j in sample(1:n,10)) { url = paste("https://www.google.com/finance?q=",tickers[j,2],sep="") text = readLines(url) idx = grep("Chief Executive",text) if (length(idx)>0) { tickers[j,3] = str_split(text[idx-2],">")[[1]][2] } else { tickers[j,3] = NA } print(tickers[j,]) } ``` ``` ## Exchange Symbol ceo ## 19 NasdaqGS FORR George F. Colony ## Exchange Symbol ceo ## 23 NYSE GDOT Steven W. Streit ## Exchange Symbol ceo ## 6 NasdaqGS AREX J. Ross Craft P.E. ## Exchange Symbol ceo ## 33 NYSE IPI Robert P Jornayvaz III ## Exchange Symbol ceo ## 96 NasdaqGS WERN Derek J. Leathers ## Exchange Symbol ceo ## 93 NasdaqGS VSAT Mark D. Dankberg ## Exchange Symbol ceo ## 94 NasdaqGS VRTU Krishan A. Canekeratne ## Exchange Symbol ceo ## 1 NasdaqGS ACOR Ron Cohen M.D. ## Exchange Symbol ceo ## 4 NasdaqGS AMZN Jeffrey P. Bezos ## Exchange Symbol ceo ## 90 NasdaqGS VASC <NA> ``` ``` #WRITE CEO_NAMES TO CSV write.table(tickers,file="DSTMAA_data/ceo_names.csv", row.names=FALSE,sep=",") ``` The code uses the **stringr** package so that string handling is simplified. After extracting the page, we search for the line in which the words "Chief Executive" show up, and we note that the name of the CEO appears two lines before in the html page; a sample Google Finance page for Apple Inc illustrates this layout. The final dataframe with CEO names is shown here (the top 6 lines): ``` head(tickers) ``` ``` ## Exchange Symbol ceo ## 1 NasdaqGS ACOR Ron Cohen M.D. ## 2 NasdaqGS AKAM <NA> ## 3 NYSE ARE <NA> ## 4 NasdaqGS AMZN Jeffrey P. Bezos ## 5 NasdaqGS AAPL <NA> ## 6 NasdaqGS AREX J. Ross Craft P.E. ``` 4\.6 Using the *apply* class of functions ----------------------------------------- Sometimes we need to apply a function to many cases, and these case parameters may be supplied in a vector, matrix, or list. This is analogous to looping through a set of values to repeat evaluations of a function using different sets of parameters. We illustrate here by computing the mean returns of all stocks in our sample using the **apply** function. The first argument of the function is the data.frame to which it is being applied, the second argument is either 1 (by rows) or 2 (by columns). The third argument is the function being evaluated. ``` tickers = c("AAPL","YHOO","IBM","CSCO","C","GSPC") apply(rets[,1:(length(tickers)-1)],2,mean) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 0.0009989766 0.0002332882 0.0003158174 0.0001430246 -0.0008315260 ``` We see that the function returns the column means of the data set. The variants of the function pertain to what the loop is being applied to. The **lapply** function applies over a list and returns a list, while **sapply** does the same but simplifies the result to a vector or matrix where possible. Likewise, **mapply** applies a function over multiple argument vectors in parallel. To cross check, we can simply use the **colMeans** function: ``` colMeans(rets[,1:(length(tickers)-1)]) ``` ``` ## AAPL.Adjusted YHOO.Adjusted IBM.Adjusted CSCO.Adjusted C.Adjusted ## 0.0009989766 0.0002332882 0.0003158174 0.0001430246 -0.0008315260 ``` As we see, this result is verified.
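To make these variants concrete, here is a minimal sketch on made-up toy data (the list `x` and the small vectors below are hypothetical, purely for illustration):

```
x = list(a = 1:5, b = seq(0, 1, by = 0.25))   #a list of two numeric vectors
lapply(x, mean)                               #returns a list of means
sapply(x, mean)                               #same computation, simplified to a named vector
mapply(function(u, v) u + v, 1:3, 4:6)        #loops over multiple argument vectors in parallel
```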
4\.7 Getting interest rate data from FRED ----------------------------------------- In finance, data on interest rates is widely used. An authoritative source of data on interest rates is FRED (Federal Reserve Economic Data), maintained by the St. Louis Federal Reserve Bank, and is warehoused at the following web site: <https://research.stlouisfed.org/fred2/>. Let's assume that we want to download the data using R from FRED directly. To do this we need to write some custom code. There used to be a package for this, but it stopped working properly after the web site changed, so we see here that it is easy to roll your own download code in R. ``` #FUNCTION TO READ IN CSV FILES FROM FRED #Enter SeriesID as a text string readFRED = function(SeriesID) { url = paste("https://research.stlouisfed.org/fred2/series/", SeriesID, "/downloaddata/",SeriesID,".csv",sep="") data = readLines(url) n = length(data) data = data[2:n] n = length(data) df = matrix(0,n,2) #top line is header for (j in 1:n) { tmp = strsplit(data[j],",") df[j,1] = tmp[[1]][1] df[j,2] = tmp[[1]][2] } rate = as.numeric(df[,2]) idx = which(rate>0) idx = setdiff(seq(1,n),idx) rate[idx] = -99 date = df[,1] df = data.frame(date,rate) names(df)[2] = SeriesID result = df } ``` ### 4\.7\.1 Using the custom function Now, we provide a list of economic time series and download data accordingly using the function above. Note that we also join these individual series using the date as the index. We download constant maturity interest rates (yields) starting from a maturity of one month (DGS1MO) to a maturity of thirty years (DGS30\). ``` #EXTRACT TERM STRUCTURE DATA FOR ALL RATES FROM 1 MO to 30 YRS FROM FRED id_list = c("DGS1MO","DGS3MO","DGS6MO","DGS1","DGS2","DGS3", "DGS5","DGS7","DGS10","DGS20","DGS30") k = 0 for (id in id_list) { out = readFRED(id) if (k>0) { rates = merge(rates,out,"date",all=TRUE) } else { rates = out } k = k + 1 } head(rates) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 1 2001-07-31 3.67 3.54 3.47 3.53 3.79 4.06 4.57 4.86 5.07 5.61 ## 2 2001-08-01 3.65 3.53 3.47 3.56 3.83 4.09 4.62 4.90 5.11 5.63 ## 3 2001-08-02 3.65 3.53 3.46 3.57 3.89 4.17 4.69 4.97 5.17 5.68 ## 4 2001-08-03 3.63 3.52 3.47 3.57 3.91 4.22 4.72 4.99 5.20 5.70 ## 5 2001-08-06 3.62 3.52 3.47 3.56 3.88 4.17 4.71 4.99 5.19 5.70 ## 6 2001-08-07 3.63 3.52 3.47 3.56 3.90 4.19 4.72 5.00 5.20 5.71 ## DGS30 ## 1 5.51 ## 2 5.53 ## 3 5.57 ## 4 5.59 ## 5 5.59 ## 6 5.60 ``` ### 4\.7\.2 Organize the data by date Having done this, we now have a data.frame called **rates** containing all the time series we are interested in. We now convert the dates into numeric strings and sort the data.frame by date. ``` #CONVERT ALL DATES TO NUMERIC AND SORT BY DATE dates = rates[,1] library(stringr) dates = as.numeric(str_replace_all(dates,"-","")) res = sort(dates,index.return=TRUE) for (j in 1:dim(rates)[2]) { rates[,j] = rates[res$ix,j] } head(rates) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 1 1962-01-02 NA NA NA 3.22 NA 3.70 3.88 NA 4.06 NA ## 2 1962-01-03 NA NA NA 3.24 NA 3.70 3.87 NA 4.03 NA ## 3 1962-01-04 NA NA NA 3.24 NA 3.69 3.86 NA 3.99 NA ## 4 1962-01-05 NA NA NA 3.26 NA 3.71 3.89 NA 4.02 NA ## 5 1962-01-08 NA NA NA 3.31 NA 3.71 3.91 NA 4.03 NA ## 6 1962-01-09 NA NA NA 3.32 NA 3.74 3.93 NA 4.05 NA ## DGS30 ## 1 NA ## 2 NA ## 3 NA ## 4 NA ## 5 NA ## 6 NA ``` ### 4\.7\.3 Handling missing values Note that there are missing values, denoted by **NA**. There are also rows with "\-99" values; we could clean those out as well, but they represent periods when no yield of that maturity was available, so we leave them in.
``` #REMOVE THE NA ROWS idx = which(rowSums(is.na(rates))==0) rates2 = rates[idx,] print(head(rates2)) ``` ``` ## date DGS1MO DGS3MO DGS6MO DGS1 DGS2 DGS3 DGS5 DGS7 DGS10 DGS20 ## 10326 2001-07-31 3.67 3.54 3.47 3.53 3.79 4.06 4.57 4.86 5.07 5.61 ## 10327 2001-08-01 3.65 3.53 3.47 3.56 3.83 4.09 4.62 4.90 5.11 5.63 ## 10328 2001-08-02 3.65 3.53 3.46 3.57 3.89 4.17 4.69 4.97 5.17 5.68 ## 10329 2001-08-03 3.63 3.52 3.47 3.57 3.91 4.22 4.72 4.99 5.20 5.70 ## 10330 2001-08-06 3.62 3.52 3.47 3.56 3.88 4.17 4.71 4.99 5.19 5.70 ## 10331 2001-08-07 3.63 3.52 3.47 3.56 3.90 4.19 4.72 5.00 5.20 5.71 ## DGS30 ## 10326 5.51 ## 10327 5.53 ## 10328 5.57 ## 10329 5.59 ## 10330 5.59 ## 10331 5.60 ```
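If one did also want to drop the rows that carry the \-99 placeholder, a minimal sketch (assuming the **rates2** data.frame built above) would be:

```
#OPTIONALLY DROP ROWS WHERE ANY SERIES CARRIES THE -99 PLACEHOLDER
idx_ok = which(rowSums(rates2[,-1] == -99) == 0)   #exclude the date column, test each yield column
rates3 = rates2[idx_ok,]
```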
4\.8 Cross\-Sectional Data (an example) --------------------------------------- 1. A great resource for data sets in corporate finance is on Aswath Damodaran's web site, see: <http://people.stern.nyu.edu/adamodar/New_Home_Page/data.html> 2. Financial statement data sets are available at: [http://www.sec.gov/dera/data/financial\-statement\-data\-sets.html](http://www.sec.gov/dera/data/financial-statement-data-sets.html) 3. And another comprehensive data source: <http://fisher.osu.edu/fin/fdf/osudata.htm> 4. Open government data: <https://www.data.gov/finance/> Let's read in the list of failed banks: <http://www.fdic.gov/bank/individual/failed/banklist.csv> ``` #download.file(url="http://www.fdic.gov/bank/individual/ #failed/banklist.csv",destfile="failed_banks.csv") ``` (This does not work, and has been an issue for a while.) ### 4\.8\.1 Access file from the web using the *readLines* function You can also read in the data using **readLines**; the download works well, but further work is then required to clean up the result. ``` url = "https://www.fdic.gov/bank/individual/failed/banklist.csv" data = readLines(url) head(data) ``` ``` ## [1] "Bank Name,City,ST,CERT,Acquiring Institution,Closing Date,Updated Date" ## [2] "Proficio Bank,Cottonwood Heights,UT,35495,Cache Valley Bank,3-Mar-17,14-Mar-17" ## [3] "Seaway Bank and Trust Company,Chicago,IL,19328,State Bank of Texas,27-Jan-17,17-Feb-17" ## [4] "Harvest Community Bank,Pennsville,NJ,34951,First-Citizens Bank & Trust Company,13-Jan-17,17-Feb-17" ## [5] "Allied Bank,Mulberry,AR,91,Today's Bank,23-Sep-16,17-Nov-16" ## [6] "The Woodbury Banking Company,Woodbury,GA,11297,United Bank,19-Aug-16,17-Nov-16" ``` #### 4\.8\.1\.1 Or, read the file from disk It may be simpler to just download the data and read it in from the csv file: ``` data = read.csv("DSTMAA_data/banklist.csv",header=TRUE) print(names(data)) ``` ``` ## [1] "Bank.Name" "City" "ST" ## [4] "CERT" "Acquiring.Institution" "Closing.Date" ## [7] "Updated.Date" ``` This gives a data.frame which is easy to work with. We will illustrate some interesting ways in which to manipulate this data. ### 4\.8\.2 Failed banks by State Suppose we want to get subtotals of how many banks failed by state. First add a column of ones to the data.frame. ``` print(head(data)) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date ## 1 North Shore Bank, FSB 31-May-13 31-May-13 ## 2 Western State Bank 14-May-13 20-May-13 ## 3 Synovus Bank 10-May-13 21-May-13 ## 4 Capital Bank, N.A.
10-May-13 14-May-13 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 ``` ``` data$count = 1 print(head(data)) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date count ## 1 North Shore Bank, FSB 31-May-13 31-May-13 1 ## 2 Western State Bank 14-May-13 20-May-13 1 ## 3 Synovus Bank 10-May-13 21-May-13 1 ## 4 Capital Bank, N.A. 10-May-13 14-May-13 1 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 1 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 1 ``` #### 4\.8\.2\.1 Check for missing data It's good to check that there is no missing data. ``` any(is.na(data)) ``` ``` ## [1] FALSE ``` #### 4\.8\.2\.2 Sort by State Now we sort the data by state to see how many failures there are in each state. ``` res = sort(as.matrix(data$ST),index.return=TRUE) print(head(data[res$ix,])) ``` ``` ## Bank.Name City ST CERT ## 42 Alabama Trust Bank, National Association Sylacauga AL 35224 ## 126 Superior Bank Birmingham AL 17750 ## 127 Nexity Bank Birmingham AL 19794 ## 279 First Lowndes Bank Fort Deposit AL 24957 ## 318 New South Federal Savings Bank Irondale AL 32276 ## 375 CapitalSouth Bank Birmingham AL 22130 ## Acquiring.Institution Closing.Date Updated.Date count ## 42 Southern States Bank 18-May-12 20-May-13 1 ## 126 Superior Bank, National Association 15-Apr-11 30-Nov-12 1 ## 127 AloStar Bank of Commerce 15-Apr-11 4-Sep-12 1 ## 279 First Citizens Bank 19-Mar-10 23-Aug-12 1 ## 318 Beal Bank 18-Dec-09 23-Aug-12 1 ## 375 IBERIABANK 21-Aug-09 15-Jan-13 1 ``` ``` print(head(sort(unique(data$ST)))) ``` ``` ## [1] AL AR AZ CA CO CT ## 44 Levels: AL AR AZ CA CO CT FL GA HI IA ID IL IN KS KY LA MA MD MI ... WY ``` ``` print(length(unique(data$ST))) ``` ``` ## [1] 44 ``` ### 4\.8\.3 Use the *aggregate* function (for subtotals) We can directly use the **aggregate** function to get subtotals by state. ``` head(aggregate(count ~ ST,data,sum),10) ``` ``` ## ST count ## 1 AL 7 ## 2 AR 3 ## 3 AZ 15 ## 4 CA 40 ## 5 CO 9 ## 6 CT 1 ## 7 FL 71 ## 8 GA 89 ## 9 HI 1 ## 10 IA 1 ``` #### 4\.8\.3\.1 Data by acquiring bank As another example, we subtotal by acquiring bank. Note how we take the subtotals into another data.frame, which is then sorted and returned in order using the index of the sort. ``` acq = aggregate(count~Acquiring.Institution,data,sum) idx = sort(as.matrix(acq$count),decreasing=TRUE,index.return=TRUE)$ix head(acq[idx,],15) ``` ``` ## Acquiring.Institution count ## 158 No Acquirer 30 ## 208 State Bank and Trust Company 12 ## 9 Ameris Bank 10 ## 245 U.S. Bank N.A. 9 ## 25 Bank of the Ozarks 7 ## 41 Centennial Bank 7 ## 61 Community & Southern Bank 7 ## 212 Stearns Bank, N.A. 7 ## 43 CenterState Bank of Florida, N.A. 6 ## 44 Central Bank 6 ## 103 First-Citizens Bank & Trust Company 6 ## 143 MB Financial Bank, N.A. 6 ## 48 CertusBank, National Association 5 ## 58 Columbia State Bank 5 ## 178 Premier American Bank, N.A. 5 ```
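As a quick cross-check of the subtotals produced by **aggregate**, base R's **table** function gives the same counts of failures by state (a minimal sketch on the same data.frame):

```
#CROSS-CHECK: TABULATE FAILURES BY STATE AND SHOW THE LARGEST FEW
st_counts = sort(table(data$ST), decreasing=TRUE)
head(st_counts)
```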
4\.9 Handling dates with *lubridate* ------------------------------------ Suppose we want to take the preceding data.frame of failed banks and aggregate the data by year, or month, etc. In this case, it is useful to use a dates package. Another useful tool developed by Hadley Wickham is the **lubridate** package. ``` head(data) ``` ``` ## Bank.Name City ST CERT ## 1 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386 ## 2 Central Arizona Bank Scottsdale AZ 34527 ## 3 Sunrise Bank Valdosta GA 58185 ## 4 Pisgah Community Bank Asheville NC 58701 ## 5 Douglas County Bank Douglasville GA 21649 ## 6 Parkway Bank Lenoir NC 57158 ## Acquiring.Institution Closing.Date Updated.Date count ## 1 North Shore Bank, FSB 31-May-13 31-May-13 1 ## 2 Western State Bank 14-May-13 20-May-13 1 ## 3 Synovus Bank 10-May-13 21-May-13 1 ## 4 Capital Bank, N.A. 10-May-13 14-May-13 1 ## 5 Hamilton State Bank 26-Apr-13 16-May-13 1 ## 6 CertusBank, National Association 26-Apr-13 17-May-13 1 ``` ``` library(lubridate) ``` ``` ## ## Attaching package: 'lubridate' ``` ``` ## The following object is masked from 'package:base': ## ## date ``` ``` data$Cdate = dmy(data$Closing.Date) data$Cyear = year(data$Cdate) fd = aggregate(count~Cyear,data,sum) print(fd) ``` ``` ## Cyear count ## 1 2000 2 ## 2 2001 4 ## 3 2002 11 ## 4 2003 3 ## 5 2004 4 ## 6 2007 3 ## 7 2008 25 ## 8 2009 140 ## 9 2010 157 ## 10 2011 92 ## 11 2012 51 ## 12 2013 14 ``` ``` plot(count~Cyear,data=fd,type="l",lwd=3,col="red",xlab="Year") grid(lwd=3) ``` ### 4\.9\.1 By Month Let's do the same thing by month to see if there is seasonality. ``` data$Cmonth = month(data$Cdate) fd = aggregate(count~Cmonth,data,sum) print(fd) ``` ``` ## Cmonth count ## 1 1 44 ## 2 2 40 ## 3 3 38 ## 4 4 56 ## 5 5 36 ## 6 6 31 ## 7 7 71 ## 8 8 36 ## 9 9 35 ## 10 10 53 ## 11 11 34 ## 12 12 32 ``` ``` plot(count~Cmonth,data=fd,type="l",lwd=3,col="green"); grid(lwd=3) ``` ### 4\.9\.2 By Day There does not appear to be any seasonality by month. What about by day of the month? ``` data$Cday = day(data$Cdate) fd = aggregate(count~Cday,data,sum) print(fd) ``` ``` ## Cday count ## 1 1 8 ## 2 2 17 ## 3 3 3 ## 4 4 21 ## 5 5 15 ## 6 6 12 ## 7 7 18 ## 8 8 13 ## 9 9 9 ## 10 10 13 ## 11 11 17 ## 12 12 10 ## 13 13 10 ## 14 14 20 ## 15 15 20 ## 16 16 20 ## 17 17 21 ## 18 18 20 ## 19 19 28 ## 20 20 25 ## 21 21 17 ## 22 22 18 ## 23 23 26 ## 24 24 17 ## 25 25 11 ## 26 26 15 ## 27 27 16 ## 28 28 16 ## 29 29 15 ## 30 30 28 ## 31 31 7 ``` ``` plot(count~Cday,data=fd,type="l",lwd=3,col="blue"); grid(lwd=3) ``` Definitely, counts are lower at the start and end of the month!
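The same pattern extends to other lubridate accessors. For example, a minimal sketch of checking day-of-week seasonality (the **wday** accessor is part of lubridate; the new Cwday column is introduced here purely for illustration and is not part of the original analysis):

```
data$Cwday = wday(data$Cdate, label=TRUE)   #day of week as an ordered factor
fd = aggregate(count~Cwday, data, sum)      #subtotal failures by day of week
print(fd)
barplot(fd$count, names.arg=as.character(fd$Cwday), col="purple")
```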
4\.10 Using the *data.table* package ------------------------------------ This is an incredibly useful package that was written by Matt Dowle. It essentially allows your data.frame to operate as a database. It enables very fast handling of massive quantities of data, and much of this technology is now embedded in the IP of the company called h2o: <http://h2o.ai/> The data.table cheat sheet is here: [https://s3\.amazonaws.com/assets.datacamp.com/img/blog/data\+table\+cheat\+sheet.pdf](https://s3.amazonaws.com/assets.datacamp.com/img/blog/data+table+cheat+sheet.pdf) ### 4\.10\.1 California Crime Statistics We start with some freely downloadable crime data statistics for California. We placed the data in a csv file which is then easy to read in to R. ``` data = read.csv("DSTMAA_data/CA_Crimes_Data_2004-2013.csv",header=TRUE) ``` It is easy to convert this into a data.table. ``` library(data.table) ``` ``` ## ## Attaching package: 'data.table' ``` ``` ## The following objects are masked from 'package:lubridate': ## ## hour, mday, month, quarter, wday, week, yday, year ``` ``` ## The following object is masked from 'package:xts': ## ## last ``` ``` D_T = as.data.table(data) print(class(D_T)) ``` ``` ## [1] "data.table" "data.frame" ``` Note, it is still a **data.frame** also. Hence, it inherits its properties from the **data.frame** class. ### 4\.10\.2 Examine the *data.table* Let's see how it works, noting that the syntax is kept as similar as possible to that for data.frames. We print the list of column names below, but will not go through each and every one.
``` print(dim(D_T)) ``` ``` ## [1] 7301 69 ``` ``` print(names(D_T)) ``` ``` ## [1] "Year" "County" "NCICCode" ## [4] "Violent_sum" "Homicide_sum" "ForRape_sum" ## [7] "Robbery_sum" "AggAssault_sum" "Property_sum" ## [10] "Burglary_sum" "VehicleTheft_sum" "LTtotal_sum" ## [13] "ViolentClr_sum" "HomicideClr_sum" "ForRapeClr_sum" ## [16] "RobberyClr_sum" "AggAssaultClr_sum" "PropertyClr_sum" ## [19] "BurglaryClr_sum" "VehicleTheftClr_sum" "LTtotalClr_sum" ## [22] "TotalStructural_sum" "TotalMobile_sum" "TotalOther_sum" ## [25] "GrandTotal_sum" "GrandTotClr_sum" "RAPact_sum" ## [28] "ARAPact_sum" "FROBact_sum" "KROBact_sum" ## [31] "OROBact_sum" "SROBact_sum" "HROBnao_sum" ## [34] "CHROBnao_sum" "GROBnao_sum" "CROBnao_sum" ## [37] "RROBnao_sum" "BROBnao_sum" "MROBnao_sum" ## [40] "FASSact_sum" "KASSact_sum" "OASSact_sum" ## [43] "HASSact_sum" "FEBURact_Sum" "UBURact_sum" ## [46] "RESDBUR_sum" "RNBURnao_sum" "RDBURnao_sum" ## [49] "RUBURnao_sum" "NRESBUR_sum" "NNBURnao_sum" ## [52] "NDBURnao_sum" "NUBURnao_sum" "MVTact_sum" ## [55] "TMVTact_sum" "OMVTact_sum" "PPLARnao_sum" ## [58] "PSLARnao_sum" "SLLARnao_sum" "MVLARnao_sum" ## [61] "MVPLARnao_sum" "BILARnao_sum" "FBLARnao_sum" ## [64] "COMLARnao_sum" "AOLARnao_sum" "LT400nao_sum" ## [67] "LT200400nao_sum" "LT50200nao_sum" "LT50nao_sum" ``` ``` head(D_T) ``` ``` ## Year County NCICCode Violent_sum ## 1: 2004 Alameda County Alameda Co. Sheriff's Department 461 ## 2: 2004 Alameda County Alameda 342 ## 3: 2004 Alameda County Albany 42 ## 4: 2004 Alameda County Berkeley 557 ## 5: 2004 Alameda County Emeryville 83 ## 6: 2004 Alameda County Fremont 454 ## Homicide_sum ForRape_sum Robbery_sum AggAssault_sum Property_sum ## 1: 5 29 174 253 3351 ## 2: 1 12 89 240 2231 ## 3: 1 3 29 9 718 ## 4: 4 17 355 181 8611 ## 5: 2 4 53 24 1066 ## 6: 5 24 165 260 5723 ## Burglary_sum VehicleTheft_sum LTtotal_sum ViolentClr_sum ## 1: 731 947 1673 170 ## 2: 376 333 1522 244 ## 3: 130 142 446 10 ## 4: 1382 1128 6101 169 ## 5: 94 228 744 15 ## 6: 939 881 3903 232 ## HomicideClr_sum ForRapeClr_sum RobberyClr_sum AggAssaultClr_sum ## 1: 5 4 43 118 ## 2: 1 8 45 190 ## 3: 0 1 3 6 ## 4: 1 6 72 90 ## 5: 1 0 8 6 ## 6: 2 18 51 161 ## PropertyClr_sum BurglaryClr_sum VehicleTheftClr_sum LTtotalClr_sum ## 1: 275 58 129 88 ## 2: 330 65 57 208 ## 3: 53 24 2 27 ## 4: 484 58 27 399 ## 5: 169 14 4 151 ## 6: 697 84 135 478 ## TotalStructural_sum TotalMobile_sum TotalOther_sum GrandTotal_sum ## 1: 7 23 3 33 ## 2: 5 1 9 15 ## 3: 3 0 5 8 ## 4: 21 21 17 59 ## 5: 0 1 0 1 ## 6: 8 10 3 21 ## GrandTotClr_sum RAPact_sum ARAPact_sum FROBact_sum KROBact_sum ## 1: 4 27 2 53 17 ## 2: 5 12 0 18 4 ## 3: 0 3 0 9 1 ## 4: 15 12 5 126 20 ## 5: 0 4 0 13 6 ## 6: 5 23 1 64 22 ## OROBact_sum SROBact_sum HROBnao_sum CHROBnao_sum GROBnao_sum ## 1: 9 95 81 19 6 ## 2: 11 56 49 14 0 ## 3: 1 18 21 1 0 ## 4: 71 138 201 58 6 ## 5: 1 33 33 11 2 ## 6: 6 73 89 19 3 ## CROBnao_sum RROBnao_sum BROBnao_sum MROBnao_sum FASSact_sum KASSact_sum ## 1: 13 17 13 25 17 35 ## 2: 3 9 4 10 8 23 ## 3: 1 2 3 1 0 3 ## 4: 2 24 22 42 15 16 ## 5: 1 1 0 5 4 0 ## 6: 28 2 12 12 19 56 ## OASSact_sum HASSact_sum FEBURact_Sum UBURact_sum RESDBUR_sum ## 1: 132 69 436 295 538 ## 2: 86 123 183 193 213 ## 3: 4 2 61 69 73 ## 4: 73 77 748 634 962 ## 5: 9 11 61 33 36 ## 6: 120 65 698 241 593 ## RNBURnao_sum RDBURnao_sum RUBURnao_sum NRESBUR_sum NNBURnao_sum ## 1: 131 252 155 193 76 ## 2: 40 67 106 163 31 ## 3: 11 60 2 57 25 ## 4: 225 418 319 420 171 ## 5: 8 25 3 58 40 ## 6: 106 313 174 346 76 ## NDBURnao_sum NUBURnao_sum MVTact_sum TMVTact_sum 
OMVTact_sum ## 1: 33 84 879 2 66 ## 2: 18 114 250 59 24 ## 3: 31 1 116 21 5 ## 4: 112 137 849 169 110 ## 5: 14 4 182 33 13 ## 6: 34 236 719 95 67 ## PPLARnao_sum PSLARnao_sum SLLARnao_sum MVLARnao_sum MVPLARnao_sum ## 1: 14 14 76 1048 56 ## 2: 0 1 176 652 14 ## 3: 1 2 27 229 31 ## 4: 22 34 376 2373 1097 ## 5: 17 2 194 219 122 ## 6: 3 26 391 2269 325 ## BILARnao_sum FBLARnao_sum COMLARnao_sum AOLARnao_sum LT400nao_sum ## 1: 54 192 5 214 681 ## 2: 176 172 8 323 371 ## 3: 47 60 1 48 76 ## 4: 374 539 7 1279 1257 ## 5: 35 44 0 111 254 ## 6: 79 266 13 531 1298 ## LT200400nao_sum LT50200nao_sum LT50nao_sum ## 1: 301 308 383 ## 2: 274 336 541 ## 3: 101 120 149 ## 4: 1124 1178 2542 ## 5: 110 141 239 ## 6: 663 738 1204 ``` ### 4\.10\.3 Indexing the *data.table* A nice feature of the data.table is that it can be indexed, i.e., resorted on the fly by making any column in the database the key. Once that is done, then it becomes easy to compute subtotals, and generate plots from these subtotals as well. The data table can be used like a database, and you can directly apply summarization functions to it. Essentially, it is governed by a format that is summarized as (\\(i\\),\\(j\\),by), i.e., apply some rule to rows \\(i\\), then to some columns \\(j\\), and one may also group by some columns. We can see how this works with the following example. ``` setkey(D_T,Year) crime = 6 res = D_T[,sum(ForRape_sum),by=Year] print(res) ``` ``` ## Year V1 ## 1: 2004 9598 ## 2: 2005 9345 ## 3: 2006 9213 ## 4: 2007 9047 ## 5: 2008 8906 ## 6: 2009 8698 ## 7: 2010 8325 ## 8: 2011 7678 ## 9: 2012 7828 ## 10: 2013 7459 ``` ``` class(res) ``` ``` ## [1] "data.table" "data.frame" ``` The data table was operated on for all columns, i.e., all \\(i\\), and the \\(j\\) column we are interested in was the “ForRape\_sum” which we want to total by Year. This returns a summary of only the Year and the total number of rapes per year. See that the type of output is also of the type data.table, which includes the class data.frame also. ### 4\.10\.4 Plotting from the *data.table* Next, we plot the results from the **data.table** in the same way as we would for a **data.frame**. ``` plot(res$Year,res$V1,type="b",lwd=3,col="blue", xlab="Year",ylab="Forced Rape") ``` #### 4\.10\.4\.1 By County Repeat the process looking at crime (Rape) totals by county. 
``` setkey(D_T,County) res = D_T[,sum(ForRape_sum),by=County] print(res) ``` ``` ## County V1 ## 1: Alameda County 4979 ## 2: Alpine County 15 ## 3: Amador County 153 ## 4: Butte County 930 ## 5: Calaveras County 148 ## 6: Colusa County 60 ## 7: Contra Costa County 1848 ## 8: Del Norte County 236 ## 9: El Dorado County 351 ## 10: Fresno County 1960 ## 11: Glenn County 56 ## 12: Humboldt County 495 ## 13: Imperial County 263 ## 14: Inyo County 52 ## 15: Kern County 1935 ## 16: Kings County 356 ## 17: Lake County 262 ## 18: Lassen County 96 ## 19: Los Angeles County 21483 ## 20: Madera County 408 ## 21: Marin County 452 ## 22: Mariposa County 46 ## 23: Mendocino County 328 ## 24: Merced County 738 ## 25: Modoc County 64 ## 26: Mono County 61 ## 27: Monterey County 1062 ## 28: Napa County 354 ## 29: Nevada County 214 ## 30: Orange County 4509 ## 31: Placer County 611 ## 32: Plumas County 115 ## 33: Riverside County 4321 ## 34: Sacramento County 4084 ## 35: San Benito County 151 ## 36: San Bernardino County 4900 ## 37: San Diego County 7378 ## 38: San Francisco County 1498 ## 39: San Joaquin County 1612 ## 40: San Luis Obispo County 900 ## 41: San Mateo County 1381 ## 42: Santa Barbara County 1352 ## 43: Santa Clara County 3832 ## 44: Santa Cruz County 865 ## 45: Shasta County 1089 ## 46: Sierra County 2 ## 47: Siskiyou County 143 ## 48: Solano County 1150 ## 49: Sonoma County 1558 ## 50: Stanislaus County 1348 ## 51: Sutter County 274 ## 52: Tehama County 165 ## 53: Trinity County 28 ## 54: Tulare County 1114 ## 55: Tuolumne County 160 ## 56: Ventura County 1146 ## 57: Yolo County 729 ## 58: Yuba County 277 ## County V1 ``` ``` setnames(res,"V1","Rapes") County_Rapes = as.data.table(res) #This is not really needed setkey(County_Rapes,Rapes) print(County_Rapes) ``` ``` ## County Rapes ## 1: Sierra County 2 ## 2: Alpine County 15 ## 3: Trinity County 28 ## 4: Mariposa County 46 ## 5: Inyo County 52 ## 6: Glenn County 56 ## 7: Colusa County 60 ## 8: Mono County 61 ## 9: Modoc County 64 ## 10: Lassen County 96 ## 11: Plumas County 115 ## 12: Siskiyou County 143 ## 13: Calaveras County 148 ## 14: San Benito County 151 ## 15: Amador County 153 ## 16: Tuolumne County 160 ## 17: Tehama County 165 ## 18: Nevada County 214 ## 19: Del Norte County 236 ## 20: Lake County 262 ## 21: Imperial County 263 ## 22: Sutter County 274 ## 23: Yuba County 277 ## 24: Mendocino County 328 ## 25: El Dorado County 351 ## 26: Napa County 354 ## 27: Kings County 356 ## 28: Madera County 408 ## 29: Marin County 452 ## 30: Humboldt County 495 ## 31: Placer County 611 ## 32: Yolo County 729 ## 33: Merced County 738 ## 34: Santa Cruz County 865 ## 35: San Luis Obispo County 900 ## 36: Butte County 930 ## 37: Monterey County 1062 ## 38: Shasta County 1089 ## 39: Tulare County 1114 ## 40: Ventura County 1146 ## 41: Solano County 1150 ## 42: Stanislaus County 1348 ## 43: Santa Barbara County 1352 ## 44: San Mateo County 1381 ## 45: San Francisco County 1498 ## 46: Sonoma County 1558 ## 47: San Joaquin County 1612 ## 48: Contra Costa County 1848 ## 49: Kern County 1935 ## 50: Fresno County 1960 ## 51: Santa Clara County 3832 ## 52: Sacramento County 4084 ## 53: Riverside County 4321 ## 54: Orange County 4509 ## 55: San Bernardino County 4900 ## 56: Alameda County 4979 ## 57: San Diego County 7378 ## 58: Los Angeles County 21483 ## County Rapes ``` #### 4\.10\.4\.2 Barplot of crime Now, we can go ahead and plot it using a different kind of plot, a horizontal barplot. 
``` par(las=2) #makes label horizontal #par(mar=c(3,4,2,1)) #increase y-axis margins barplot(County_Rapes$Rapes, names.arg=County_Rapes$County, horiz=TRUE, cex.names=0.4, col=8) ``` ### 4\.10\.5 Bay Area Bike Share data We show some other features using a different data set, the bike information on Silicon Valley routes for the Bike Share program. This is a much larger data set. ``` trips = read.csv("DSTMAA_data/201408_trip_data.csv",header=TRUE) print(names(trips)) ``` ``` ## [1] "Trip.ID" "Duration" "Start.Date" ## [4] "Start.Station" "Start.Terminal" "End.Date" ## [7] "End.Station" "End.Terminal" "Bike.." ## [10] "Subscriber.Type" "Zip.Code" ``` #### 4\.10\.5\.1 Summarize Trips Data Next we print some descriptive statistics. ``` print(length(trips$Trip.ID)) ``` ``` ## [1] 171792 ``` ``` print(summary(trips$Duration/60)) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 1.000 5.750 8.617 18.880 12.680 11940.000 ``` ``` print(mean(trips$Duration/60,trim=0.01)) ``` ``` ## [1] 13.10277 ``` #### 4\.10\.5\.2 Start and End Bike Stations Now, we quickly check how many start and end stations there are. ``` start_stn = unique(trips$Start.Terminal) print(sort(start_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(start_stn)) ``` ``` ## [1] 70 ``` ``` end_stn = unique(trips$End.Terminal) print(sort(end_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(end_stn)) ``` ``` ## [1] 70 ``` As we can see, there are quite a few stations in the bike share program where riders can pick up and drop off bikes. The trip duration information is stored in seconds, so has been converted to minutes in the code above.
``` start_stn = unique(trips$Start.Terminal) print(sort(start_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(start_stn)) ``` ``` ## [1] 70 ``` ``` end_stn = unique(trips$End.Terminal) print(sort(end_stn)) ``` ``` ## [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 16 21 22 23 24 25 26 27 28 29 ## [24] 30 31 32 33 34 35 36 37 38 39 41 42 45 46 47 48 49 50 51 54 55 56 57 ## [47] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 80 82 83 ## [70] 84 ``` ``` print(length(end_stn)) ``` ``` ## [1] 70 ``` As we can see, there are quite a few stations in the bike share program where riders can pick up and drop off bikes. The trip duration information is stored in seconds, so has been converted to minutes in the code above. 4\.11 The *plyr* package family ------------------------------- This package by Hadley Wickham is useful for applying functions to tables of data, i.e., data.frames. Since we may want to write custom functions, this is a highly useful package. R users often select either the **data.table** or the **plyr** class of packages for handling data.frames as databases. The latest incarnation is the **dplyr** package, which focuses only on data.frames. ``` require(plyr) ``` ``` ## Loading required package: plyr ``` ``` ## ## Attaching package: 'plyr' ``` ``` ## The following object is masked from 'package:lubridate': ## ## here ``` ``` ## The following object is masked from 'package:corrgram': ## ## baseball ``` ``` library(dplyr) ``` ``` ## ------------------------------------------------------------------------- ``` ``` ## data.table + dplyr code now lives in dtplyr. ## Please library(dtplyr)! ``` ``` ## ------------------------------------------------------------------------- ``` ``` ## ## Attaching package: 'dplyr' ``` ``` ## The following objects are masked from 'package:plyr': ## ## arrange, count, desc, failwith, id, mutate, rename, summarise, ## summarize ``` ``` ## The following objects are masked from 'package:data.table': ## ## between, last ``` ``` ## The following objects are masked from 'package:lubridate': ## ## intersect, setdiff, union ``` ``` ## The following objects are masked from 'package:xts': ## ## first, last ``` ``` ## The following objects are masked from 'package:stats': ## ## filter, lag ``` ``` ## The following objects are masked from 'package:base': ## ## intersect, setdiff, setequal, union ``` ### 4\.11\.1 Filter the data One of the useful things you can use is the **filter** function, to subset the rows of the dataset you might want to select for further analysis. ``` res = filter(trips,Start.Terminal==50,End.Terminal==51) head(res) ``` ``` ## Trip.ID Duration Start.Date Start.Station ## 1 432024 3954 8/30/2014 14:46 Harry Bridges Plaza (Ferry Building) ## 2 432022 4120 8/30/2014 14:44 Harry Bridges Plaza (Ferry Building) ## 3 431895 1196 8/30/2014 12:04 Harry Bridges Plaza (Ferry Building) ## 4 431891 1249 8/30/2014 12:03 Harry Bridges Plaza (Ferry Building) ## 5 430408 145 8/29/2014 9:08 Harry Bridges Plaza (Ferry Building) ## 6 429148 862 8/28/2014 13:47 Harry Bridges Plaza (Ferry Building) ## Start.Terminal End.Date End.Station End.Terminal Bike.. 
## 1 50 8/30/2014 15:52 Embarcadero at Folsom 51 306 ## 2 50 8/30/2014 15:52 Embarcadero at Folsom 51 659 ## 3 50 8/30/2014 12:24 Embarcadero at Folsom 51 556 ## 4 50 8/30/2014 12:23 Embarcadero at Folsom 51 621 ## 5 50 8/29/2014 9:11 Embarcadero at Folsom 51 400 ## 6 50 8/28/2014 14:02 Embarcadero at Folsom 51 589 ## Subscriber.Type Zip.Code ## 1 Customer 94952 ## 2 Customer 94952 ## 3 Customer 11238 ## 4 Customer 11238 ## 5 Subscriber 94070 ## 6 Subscriber 94107 ``` ### 4\.11\.2 Sorting using the *arrange* function The **arrange** function is useful for sorting by any number of columns as needed. Here we sort by the start and end stations. ``` trips_sorted = arrange(trips,Start.Station,End.Station) head(trips_sorted) ``` ``` ## Trip.ID Duration Start.Date Start.Station Start.Terminal ## 1 426408 120 8/27/2014 7:40 2nd at Folsom 62 ## 2 411496 21183 8/16/2014 13:36 2nd at Folsom 62 ## 3 396676 3707 8/6/2014 11:38 2nd at Folsom 62 ## 4 385761 123 7/29/2014 19:52 2nd at Folsom 62 ## 5 364633 6395 7/15/2014 13:39 2nd at Folsom 62 ## 6 362776 9433 7/14/2014 13:36 2nd at Folsom 62 ## End.Date End.Station End.Terminal Bike.. Subscriber.Type ## 1 8/27/2014 7:42 2nd at Folsom 62 527 Subscriber ## 2 8/16/2014 19:29 2nd at Folsom 62 508 Customer ## 3 8/6/2014 12:40 2nd at Folsom 62 109 Customer ## 4 7/29/2014 19:55 2nd at Folsom 62 421 Subscriber ## 5 7/15/2014 15:26 2nd at Folsom 62 448 Customer ## 6 7/14/2014 16:13 2nd at Folsom 62 454 Customer ## Zip.Code ## 1 94107 ## 2 94105 ## 3 31200 ## 4 94107 ## 5 2184 ## 6 2184 ``` ### 4\.11\.3 Reverse order sort The sort can also be done in reverse order as follows. ``` trips_sorted = arrange(trips,desc(Start.Station),End.Station) head(trips_sorted) ``` ``` ## Trip.ID Duration Start.Date ## 1 416755 285 8/20/2014 11:37 ## 2 411270 257 8/16/2014 7:03 ## 3 410269 286 8/15/2014 10:34 ## 4 405273 382 8/12/2014 14:27 ## 5 398372 401 8/7/2014 10:10 ## 6 393012 317 8/4/2014 10:59 ## Start.Station Start.Terminal ## 1 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 2 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 3 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 4 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 5 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 6 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## End.Date End.Station End.Terminal Bike.. Subscriber.Type ## 1 8/20/2014 11:42 2nd at Folsom 62 383 Customer ## 2 8/16/2014 7:07 2nd at Folsom 62 614 Subscriber ## 3 8/15/2014 10:38 2nd at Folsom 62 545 Subscriber ## 4 8/12/2014 14:34 2nd at Folsom 62 344 Customer ## 5 8/7/2014 10:16 2nd at Folsom 62 597 Subscriber ## 6 8/4/2014 11:04 2nd at Folsom 62 367 Subscriber ## Zip.Code ## 1 95060 ## 2 94107 ## 3 94127 ## 4 94110 ## 5 94127 ## 6 94127 ``` ### 4\.11\.4 Descriptive statistics Data.table also offers a fantastic way to do descriptive statistics! First, group the data by start point, and then produce statistics by this group, choosing to count the number of trips starting from each station and the average duration of each trip. 
``` byStartStation = group_by(trips,Start.Station) res = summarise(byStartStation, count=n(), time=mean(Duration)/60) print(res) ``` ``` ## # A tibble: 70 x 3 ## Start.Station count time ## <fctr> <int> <dbl> ## 1 2nd at Folsom 4165 9.32088 ## 2 2nd at South Park 4569 11.60195 ## 3 2nd at Townsend 6824 15.14786 ## 4 5th at Howard 3183 14.23254 ## 5 Adobe on Almaden 360 10.06120 ## 6 Arena Green / SAP Center 510 43.82833 ## 7 Beale at Market 4293 15.74702 ## 8 Broadway at Main 22 54.82121 ## 9 Broadway St at Battery St 2433 15.31862 ## 10 California Ave Caltrain Station 329 51.30709 ## # ... with 60 more rows ``` ### 4\.11\.5 Other functions in *dplyr* Try also the **select()**, **extract()**, **mutate()**, **summarise()**, **sample\_n()**, **sample\_frac()** functions. The **group\_by()** function is particularly useful as we have seen. ### 4\.11\.1 Filter the data One of the useful things you can use is the **filter** function, to subset the rows of the dataset you might want to select for further analysis. ``` res = filter(trips,Start.Terminal==50,End.Terminal==51) head(res) ``` ``` ## Trip.ID Duration Start.Date Start.Station ## 1 432024 3954 8/30/2014 14:46 Harry Bridges Plaza (Ferry Building) ## 2 432022 4120 8/30/2014 14:44 Harry Bridges Plaza (Ferry Building) ## 3 431895 1196 8/30/2014 12:04 Harry Bridges Plaza (Ferry Building) ## 4 431891 1249 8/30/2014 12:03 Harry Bridges Plaza (Ferry Building) ## 5 430408 145 8/29/2014 9:08 Harry Bridges Plaza (Ferry Building) ## 6 429148 862 8/28/2014 13:47 Harry Bridges Plaza (Ferry Building) ## Start.Terminal End.Date End.Station End.Terminal Bike.. ## 1 50 8/30/2014 15:52 Embarcadero at Folsom 51 306 ## 2 50 8/30/2014 15:52 Embarcadero at Folsom 51 659 ## 3 50 8/30/2014 12:24 Embarcadero at Folsom 51 556 ## 4 50 8/30/2014 12:23 Embarcadero at Folsom 51 621 ## 5 50 8/29/2014 9:11 Embarcadero at Folsom 51 400 ## 6 50 8/28/2014 14:02 Embarcadero at Folsom 51 589 ## Subscriber.Type Zip.Code ## 1 Customer 94952 ## 2 Customer 94952 ## 3 Customer 11238 ## 4 Customer 11238 ## 5 Subscriber 94070 ## 6 Subscriber 94107 ``` ### 4\.11\.2 Sorting using the *arrange* function The **arrange** function is useful for sorting by any number of columns as needed. Here we sort by the start and end stations. ``` trips_sorted = arrange(trips,Start.Station,End.Station) head(trips_sorted) ``` ``` ## Trip.ID Duration Start.Date Start.Station Start.Terminal ## 1 426408 120 8/27/2014 7:40 2nd at Folsom 62 ## 2 411496 21183 8/16/2014 13:36 2nd at Folsom 62 ## 3 396676 3707 8/6/2014 11:38 2nd at Folsom 62 ## 4 385761 123 7/29/2014 19:52 2nd at Folsom 62 ## 5 364633 6395 7/15/2014 13:39 2nd at Folsom 62 ## 6 362776 9433 7/14/2014 13:36 2nd at Folsom 62 ## End.Date End.Station End.Terminal Bike.. Subscriber.Type ## 1 8/27/2014 7:42 2nd at Folsom 62 527 Subscriber ## 2 8/16/2014 19:29 2nd at Folsom 62 508 Customer ## 3 8/6/2014 12:40 2nd at Folsom 62 109 Customer ## 4 7/29/2014 19:55 2nd at Folsom 62 421 Subscriber ## 5 7/15/2014 15:26 2nd at Folsom 62 448 Customer ## 6 7/14/2014 16:13 2nd at Folsom 62 454 Customer ## Zip.Code ## 1 94107 ## 2 94105 ## 3 31200 ## 4 94107 ## 5 2184 ## 6 2184 ``` ### 4\.11\.3 Reverse order sort The sort can also be done in reverse order as follows. 
``` trips_sorted = arrange(trips,desc(Start.Station),End.Station) head(trips_sorted) ``` ``` ## Trip.ID Duration Start.Date ## 1 416755 285 8/20/2014 11:37 ## 2 411270 257 8/16/2014 7:03 ## 3 410269 286 8/15/2014 10:34 ## 4 405273 382 8/12/2014 14:27 ## 5 398372 401 8/7/2014 10:10 ## 6 393012 317 8/4/2014 10:59 ## Start.Station Start.Terminal ## 1 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 2 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 3 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 4 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 5 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## 6 Yerba Buena Center of the Arts (3rd @ Howard) 68 ## End.Date End.Station End.Terminal Bike.. Subscriber.Type ## 1 8/20/2014 11:42 2nd at Folsom 62 383 Customer ## 2 8/16/2014 7:07 2nd at Folsom 62 614 Subscriber ## 3 8/15/2014 10:38 2nd at Folsom 62 545 Subscriber ## 4 8/12/2014 14:34 2nd at Folsom 62 344 Customer ## 5 8/7/2014 10:16 2nd at Folsom 62 597 Subscriber ## 6 8/4/2014 11:04 2nd at Folsom 62 367 Subscriber ## Zip.Code ## 1 95060 ## 2 94107 ## 3 94127 ## 4 94110 ## 5 94127 ## 6 94127 ``` ### 4\.11\.4 Descriptive statistics Data.table also offers a fantastic way to do descriptive statistics! First, group the data by start point, and then produce statistics by this group, choosing to count the number of trips starting from each station and the average duration of each trip. ``` byStartStation = group_by(trips,Start.Station) res = summarise(byStartStation, count=n(), time=mean(Duration)/60) print(res) ``` ``` ## # A tibble: 70 x 3 ## Start.Station count time ## <fctr> <int> <dbl> ## 1 2nd at Folsom 4165 9.32088 ## 2 2nd at South Park 4569 11.60195 ## 3 2nd at Townsend 6824 15.14786 ## 4 5th at Howard 3183 14.23254 ## 5 Adobe on Almaden 360 10.06120 ## 6 Arena Green / SAP Center 510 43.82833 ## 7 Beale at Market 4293 15.74702 ## 8 Broadway at Main 22 54.82121 ## 9 Broadway St at Battery St 2433 15.31862 ## 10 California Ave Caltrain Station 329 51.30709 ## # ... with 60 more rows ``` ### 4\.11\.5 Other functions in *dplyr* Try also the **select()**, **extract()**, **mutate()**, **summarise()**, **sample\_n()**, **sample\_frac()** functions. The **group\_by()** function is particularly useful as we have seen. 4\.12 Application to IPO Data ----------------------------- Let’s revisit all the stock exchange data from before, where we download the table of firms listed on the NYSE, NASDAQ, and AMEX using the *quantmod* package. ``` library(quantmod) nasdaq_names = stockSymbols(exchange = "NASDAQ") ``` ``` ## Fetching NASDAQ symbols... ``` ``` nyse_names = stockSymbols(exchange = "NYSE") ``` ``` ## Fetching NYSE symbols... ``` ``` amex_names = stockSymbols(exchange = "AMEX") ``` ``` ## Fetching AMEX symbols... ``` ``` tickers = rbind(nasdaq_names,nyse_names,amex_names) tickers$Count = 1 print(dim(tickers)) ``` ``` ## [1] 6692 9 ``` We then clean off the rows with incomplete data, using the very useful **complete.cases** function. ``` idx = complete.cases(tickers) df = tickers[idx,] print(nrow(df)) ``` ``` ## [1] 2198 ``` We create a table of the frequency of IPOs by year to see hot and cold IPO markets. 1\. First, remove all rows with missing IPO data. 2\. Plot IPO Activity with a bar plot. We make sure to label the axes properly. 3\. Plot IPO Activity using the **rbokeh** package to make a pretty line plot. 
See: <https://hafen.github.io/rbokeh/>

```
library(dplyr)
library(magrittr)
idx = which(!is.na(tickers$IPOyear))
df = tickers[idx,]
res = df %>% group_by(IPOyear) %>% summarise(numIPO = sum(Count))
print(res)
```

```
## # A tibble: 40 x 2
##    IPOyear numIPO
##      <int>  <dbl>
## 1     1972      4
## 2     1973      1
## 3     1980      2
## 4     1981      6
## 5     1982      4
## 6     1983     13
## 7     1984      6
## 8     1985      4
## 9     1986     38
## 10    1987     28
## # ... with 30 more rows
```

```
barplot(res$numIPO,names.arg = res$IPOyear)
```

4\.13 Bokeh plots
-----------------

These plots look really nice, yet require only simple code. The "hover" features make them especially appealing.

```
library(rbokeh)
p = figure(width=500,height=300) %>%
  ly_points(IPOyear,numIPO,data=res,hover=c(IPOyear,numIPO)) %>%
  ly_lines(IPOyear,numIPO,data=res)
p
```
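The same hover\-enabled idea can also be implemented with the **plotly** package, which appears again in the Shiny app code of the next chapter. The snippet below is a sketch, assuming **plotly** is installed, and it reuses the **res** table built above; hovering over a point shows its coordinates by default.

```
#INTERACTIVE LINE PLOT WITH PLOTLY (an alternative to rbokeh)
library(plotly)
plot_ly(res, x=~IPOyear, y=~numIPO, type="scatter", mode="lines+markers")
```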
Chapter 5 Interactive applications with *Shiny*
===============================================

**Shiny** is an R framework in which you can set up browser\-based interactive applications and use them to interact with the data. This approach results in a better understanding of models you may build in R. Full documentation and details are available at <http://shiny.rstudio.com/>

Preparing an application in **Shiny** requires creating the back end processing code, which has to be stored in a file named **server.R**, and a front end graphical user interface (GUI), placed in a file named **ui.R**. Both these file names are mandated, as the **shiny** package will look for these files. One may also create a file called **app.R** in which both a *server* function and a *ui* function are embedded. To illustrate, we will create an interactive application to price options using the well\-known Black\-Scholes\-Merton (1973\) model.

5\.1 The Black\-Scholes\-Merton (1973\) model
---------------------------------------------

The price of a call option in this model is given by the following formula

\\\[ C \= S e^{\-qT} \\cdot N(d\_1\) \- K e^{\-rT} \\cdot N(d\_2\) \\\]

where

\\\[ d\_1 \= \\frac{\\ln(S/K)\+(r\-q\+v^2/2\)T}{v \\sqrt{T}} \\\]

and \\(d\_2 \= d\_1 \- v \\sqrt{T}\\). Here \\(S\\) is the stock price, \\(K\\) is the strike price, \\(T\\) is option maturity, \\(v\\) is the annualized volatility of the stock, and \\(r\\) is the continuous risk free rate of interest for maturity \\(T\\). Finally, \\(q\\) is the annual dividend rate, assuming it is paid continuously. Likewise, the formula for a put option is

\\\[ P \= K e^{\-rT} \\cdot N(\-d\_2\) \- S e^{\-qT} \\cdot N(\-d\_1\) \\\]

and \\(d\_1\\) and \\(d\_2\\) are the same as for the call option.

5\.2 The application program
----------------------------

The full code is stored in a single file called **app.R** and is listed below, after a brief stand\-alone check of the pricing formulas.
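Before wiring these formulas into the app, the short sketch below prices one call and one put directly in R as a sanity check. The inputs are the default values used in the app's user interface (stock price 100, strike 100, one\-year maturity, 15% volatility, 1% risk free rate, 1% dividend rate); the function name **BScheck** is chosen only for this illustration.

```
#STAND-ALONE CHECK OF THE BSM FORMULAS (inputs match the app defaults below)
BScheck = function(S,K,T,v,rf,dv) {
  d1 = (log(S/K) + (rf-dv+0.5*v^2)*T)/(v*sqrt(T))
  d2 = d1 - v*sqrt(T)
  bscall = S*exp(-dv*T)*pnorm(d1) - K*exp(-rf*T)*pnorm(d2)
  bsput = K*exp(-rf*T)*pnorm(-d2) - S*exp(-dv*T)*pnorm(-d1)
  c(call=bscall, put=bsput)
}
print(BScheck(S=100, K=100, T=1, v=0.15, rf=0.01, dv=0.01))
```

The same calculation appears as the function **BS** inside the server logic of the app.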
``` library(shiny) library(plotly) library(ggplot2) ##### SERVER ##### # Define server logic for random distribution application server <- function(input, output) { #Generate Black-Scholes values BS = function(S,K,T,v,rf,dv) { d1 = (log(S/K) + (rf-dv+0.5*v^2)*T)/(v*sqrt(T)) d2 = d1 - v*sqrt(T) bscall = S*exp(-dv*T)*pnorm(d1) - K*exp(-rf*T)*pnorm(d2) bsput = -S*exp(-dv*T)*pnorm(-d1) + K*exp(-rf*T)*pnorm(-d2) res = c(bscall,bsput) } #Call option price output$BScall <- renderText({ #Get inputs S = input$stockprice K = input$strike T = input$maturity v = input$volatility rf = input$riskfreerate dv = input$divrate res = round(BS(S,K,T,v,rf,dv)[1],4) }) #Put option price output$BSput <- renderText({ #Get inputs S = input$stockprice K = input$strike T = input$maturity v = input$volatility rf = input$riskfreerate dv = input$divrate res = round(BS(S,K,T,v,rf,dv)[2],4) }) #Call plot output$plotCall <- renderPlot({ S = input$stockprice K = input$strike T = input$maturity v = input$volatility rf = input$riskfreerate dv = input$divrate vcall = NULL; vput = NULL strikes = seq(K-30,K+30) for (k in strikes) { vcall = c(vcall,BS(S,k,T,v,rf,dv)[1]) vput = c(vput,BS(S,k,T,v,rf,dv)[2]) } df = data.frame(strikes,vcall,vput) ggplot(df,aes(x=strikes,y=vcall)) + geom_point(color=strikes) }, height = 350, width = 600) #Put plot output$plotPut <- renderPlot({ S = input$stockprice K = input$strike T = input$maturity v = input$volatility rf = input$riskfreerate dv = input$divrate vcall = NULL; vput = NULL strikes = seq(K-30,K+30) for (k in strikes) { vcall = c(vcall,BS(S,k,T,v,rf,dv)[1]) vput = c(vput,BS(S,k,T,v,rf,dv)[2]) } df = data.frame(strikes,vcall,vput) ggplot(df,aes(x=strikes,y=vput)) + geom_point(color=strikes) }, height = 350, width = 600) } ##### UI ##### ui <- shinyUI(fluidPage( titlePanel("Black-Scholes-Merton (1973)"), sidebarLayout( sidebarPanel( numericInput('stockprice','Stock Price',100), numericInput('strike','Strike Price',100), sliderInput('maturity','Maturity (years)',min=0.1,max=10,value=1,step=0.01), sliderInput('volatility','Volatility',min=0.1,max=0.9,value=0.15,step=0.01), sliderInput('riskfreerate','Risk free rate',min=0.0,max=0.5,value=0.01,step=0.01), sliderInput('divrate','Dividend rate',min=0.0,max=0.25,value=0.01,step=0.01), hr(), p('Please refer to following for more details:', a("Black-Scholes (1973)", href = "https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model")), hr() ), mainPanel( p('Call price'), textOutput("BScall"), hr(), p('Put price'), textOutput("BSput"), hr(), tabsetPanel( tabPanel("Calls", plotOutput("plotCall",width="100%")), tabPanel("Puts", plotOutput("plotPut",width="100%")) ) ) ) )) ##### Run ##### shinyApp(ui = ui, server = server) ``` 5\.3 Running the App -------------------- To run the app, open the file **app.R** in RStudio and then execute **RunApp** from the menu. This app will generate the following screen. 1. Note the sidebar panel, that allows numeric input for the stock price and the strike price. 2. Note also the slider input for the other variables of the model. 3. Changing the inputs results in automatic interactive updates in the output panel, both to call and put prices, as well as the plots. 4. Look at the panel with the plots, it has two tabs, and one can click to switch between the plot for calls and the one for puts. 5\.4 Server section of the App ------------------------------ The server section has the following features (examine the code above). 1. 
The packages used may be invoked at the top of the file, as they may be used by both the server and ui functions.
2\. Each external output is created by a separate function. The text output is carried out by a shiny function called **renderText** and the plots are generated by a function called **renderPlot**.
3\. One may also create subsidiary functions that do not generate external output, but are called by other functions inside the program. For example, the function **BS** in the code above implements the option pricing formula but does not return anything to the UI.

5\.5 UI section of the App
--------------------------

The ui section has the following features (examine the code above).

1\. There are three panels: title, sidebar, main. This allows for a nice layout of inputs and outputs. In the example here, we use the sidebar panel to input values to the app, and the main panel to present outputs.
2\. All inputs are taken into an object called **input**, which is then accessed by the server section of the program. Different formats for the inputs are allowed, and here we show numeric and slider inputs as examples.
3\. The output can be tabbed, as is done for the plots.

5\.6 Using the *reactive* mode in the app
-----------------------------------------

The ui portion of the program takes input values and makes them available to the server section. We see that each function in the server section has to collect all the inputs for itself, and as a result the initialization of variables in this section occurs inside each function in a repetitive manner. In order to avoid this, and thereby shorten and speed up the code, we may use the inputs in *reactive* mode. What this means is that the inputs are live and available globally to all functions in the server segment of the program. Here is the **ui.R** file from the reactive version. We see that it is much the same as before.

```
##### UI #####
library(shiny)

fluidPage(
  titlePanel("Black-Scholes-Merton (1973)"),
  sidebarLayout(
    sidebarPanel(
      numericInput('stockprice','Stock Price',100),
      numericInput('strike','Strike Price',100),
      sliderInput('maturity','Maturity (years)',min=0.1,max=10,value=1,step=0.01),
      sliderInput('volatility','Volatility',min=0.1,max=0.9,value=0.15,step=0.01),
      sliderInput('riskfreerate','Risk free rate',min=0.0,max=0.5,value=0.01,step=0.01),
      sliderInput('divrate','Dividend rate',min=0.0,max=0.25,value=0.01,step=0.01),
      hr(),
      p('Please refer to following for more details:',
        a("Black-Scholes (1973)",
          href = "https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model")),
      hr()
    ),
    mainPanel(
      p('Call price'),
      textOutput("BScall"),
      hr(),
      p('Put price'),
      textOutput("BSput"),
      hr(),
      tabsetPanel(
        tabPanel("Calls", plotOutput("plotCall",width="100%")),
        tabPanel("Puts", plotOutput("plotPut",width="100%"))
      )
    )
  )
)
```

However, the **server.R** file is quite different: it needed the Black\-Scholes pricing function **BS** to be refactored to take reactive input, which allowed us to shorten the code considerably, as shown here.
``` library(shiny) library(plotly) library(ggplot2) ##### SERVER ##### # Define server logic for random distribution application function(input, output) { #Generate Black-Scholes values BS = function(x) { S=x[1]; K=x[2]; T=x[3]; v=x[4]; rf=x[5]; dv=x[6] d1 = (log(S/K) + (rf-dv+0.5*v^2)*T)/(v*sqrt(T)) d2 = d1 - v*sqrt(T) bscall = S*exp(-dv*T)*pnorm(d1) - K*exp(-rf*T)*pnorm(d2) bsput = -S*exp(-dv*T)*pnorm(-d1) + K*exp(-rf*T)*pnorm(-d2) res = c(bscall,bsput) } data <- reactive({ #Get inputs matrix(c(input$stockprice,input$strike,input$maturity, input$volatility,input$riskfreerate,input$divrate)) }) #Call option price output$BScall <- renderText({ res = round(BS(data())[1],4) }) #Put option price output$BSput <- renderText({ res = round(BS(data())[2],4) }) #Call plot output$plotCall <- renderPlot({ vcall = NULL; vput = NULL K = data()[2] strikes = seq(K-30,K+30) for (k in strikes) { d = data(); d[2]=k vcall = c(vcall,BS(d)[1]) vput = c(vput,BS(d)[2]) } df = data.frame(strikes,vcall,vput) ggplot(df,aes(x=strikes,y=vcall)) + geom_point(color=strikes) }, height = 350, width = 600) #Put plot output$plotPut <- renderPlot({ vcall = NULL; vput = NULL K = data()[2] strikes = seq(K-30,K+30) for (k in strikes) { d = data(); d[2]=k vcall = c(vcall,BS(d)[1]) vput = c(vput,BS(d)[2]) } df = data.frame(strikes,vcall,vput) ggplot(df,aes(x=strikes,y=vput)) + geom_point(color=strikes) }, height = 350, width = 600) } ``` You can copy this code and create two files in your directory and then run the app to see it execute in exactly the same way as before when reactive inputs were not used. 5\.7 Market Liquidity in Real Time using *Shiny* ------------------------------------------------ In this segment we combine web scraping with *shiny* to create a real time liquidity model. The app is based on the paper by George Chacko, Sanjiv Das, and Rong Fan titled “An Index\-Based Measure of Liquidity”, published in the *Journal of Banking and Finance*, 2016, v68, 162\-178\. It is available at: [http://algo.scu.edu/\~sanjivdas/etfliq.pdf](http://algo.scu.edu/~sanjivdas/etfliq.pdf) The main idea of the paper’s algorithm is as follows. Since the ETF is usually more liquid than the underlying bonds it represents, any difference in the price of the ETF and the NAV (net asset value) of the underlying bonds must be on account of liquidity, because market risk is otherwise the same for the ETF and its underlying. The paper uses an option pricing based derivation of the illiquidity of the market sector represented by the ETF. This illiquidity is represented in a basis points spread given by the following equation: \\\[ BILLIQ \= \-10000 \\ln \\left(\\frac{NAV}{NAV \+ \|ETF\-NAV\|}\\right) \\] 5\.8 Program files ------------------ For this application here are the **ui.R** and **server.R** files. You can cut and paste them into separate files in RStudio, and then run the app. 
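Before turning to the two program files, a quick numeric illustration of the BILLIQ formula may be helpful. The NAV and ETF price below are made\-up values used purely for illustration, not data from any actual fund.

```
#ILLUSTRATION OF THE BILLIQ FORMULA (hypothetical NAV and ETF price)
NAV = 100; ETF = 100.50
BILLIQ = -10000*log(NAV/(NAV + abs(ETF - NAV)))
print(BILLIQ)   #about 50 basis points of illiquidity
```

A half\-dollar gap between the ETF price and a NAV of 100 thus maps into roughly a 50 basis point illiquidity spread.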
``` #ui.R library(shiny) # Define UI for miles per gallon application shinyUI(pageWithSidebar( # Application title headerPanel("Index-Based Illiquidity"), sidebarPanel( textInput("ticker", "Input ETF Ticker ", "LQD"), submitButton("Submit"), p(" "), p("Example of ETF tickers are: LQD, HYG, CSJ, CFT, CIU, AGG, GBF, GVI, MBB, EMB, IVV, BIV, BLV, BND, BSV, etc.") ), mainPanel( verbatimTextOutput("text4"), verbatimTextOutput("text1"), verbatimTextOutput("text2"), verbatimTextOutput("text3"), helpText("The paper that derives this measure of illiquidity is:"), helpText(a("George Chacko, Sanjiv Das, Rong Fan (2016), An Index-Based Measure of Liquidity, Journal of Banking and Finance, v68, 162-178.", href="http://algo.scu.edu/~sanjivdas/etfliq.pdf")) ) )) ``` ``` #server.R library(shiny) library(magrittr) library(stringr) library(rvest) library(httr) library(XML) library(RCurl) # Note that this logic may have to be uodated when the web page format is altered. shinyServer(function(input, output) { observe({ ## Read in the URL for the ETF ticker etf = input$ticker url = paste("http://finance.yahoo.com/quote/",etf,sep="") page = try(readLines(url)) #Get Closing Price doc.html = read_html(url) x = doc.html %>% html_nodes("span") %>% html_text() Price = as.numeric(x[16]) ## Process page for NAV s = '\"navPrice\"' idx = grep(s,page) y = str_locate(page[idx],s) x = substr(page[idx],y[2],y[2]+20) NAV = as.numeric(regmatches(x,gregexpr("[0-9]+.[0-9]+",x))) ## Compute BILLIQ BILLIQ = -10000*log(NAV/(NAV+abs(Price-NAV))) ## Process page for Yield s = '\"yield\"' idx = grep(s,page) y = str_locate(page[idx],s) x = substr(page[idx],y[2],y[2]+32) Yield = unlist(regmatches(x,gregexpr("[0-9]+.[0-9]+%",x))) ## Output output$text1 = renderText(paste("Price = ",Price)) output$text2 = renderText(paste("NAV = ",NAV)) output$text3 = renderText(paste("BILLIQ = ",BILLIQ," (bps)")) output$text4 = renderText(paste("Yield = ",Yield)) return() }) }) ``` When the app is launched the following interactive screen comes up so one may enter the ETF market for which the liquidity is being computed. As one can see, several statistics are provided, after being scraped from the web. The code in **server.R** shows how the information is sourced from the web. 5\.9 Using *Shiny* with Data Table ---------------------------------- In this section we will redisplay the data set for finance firms that we looked at earlier in the previous chapter using shiny. What we will do is add to the shiny app a feature that lets you select which columns of the data set to display. The resulting Shiny App should look as follows: We create the data that we need to apply to the shiny app. Here are the few lines of code needed if we do not use an app. Following this, we will look at the app code. ``` #GetData.R #Subset Finance sector nasdaq_names = stockSymbols(exchange = "NASDAQ") nyse_names = stockSymbols(exchange = "NYSE") amex_names = stockSymbols(exchange = "AMEX") df = rbind(nasdaq_names,nyse_names,amex_names) #Convert all values into millions idx = grep("B",df$MarketCap) x = df$MarketCap; df$MarketCap = as.numeric(substr(x,2,nchar(x)-1)) df$MarketCap[idx] = df$MarketCap[idx]*1000 #For the billion cases idx = which(df$MarketCap>0) df = df[idx,] Finance = df %>% filter(Sector=="Finance") ``` Next, here is the **server.R** code. 
```
#server.R
library(shiny)
library(ggplot2)
library(quantmod)
library(DT)
library(dplyr)
library(magrittr)

function(input, output, session) {
  #Subset Finance sector
  nasdaq_names = stockSymbols(exchange = "NASDAQ")
  nyse_names = stockSymbols(exchange = "NYSE")
  amex_names = stockSymbols(exchange = "AMEX")
  df = rbind(nasdaq_names,nyse_names,amex_names)
  #Convert all values into millions
  idx = grep("B",df$MarketCap)
  x = df$MarketCap; df$MarketCap = as.numeric(substr(x,2,nchar(x)-1))
  df$MarketCap[idx] = df$MarketCap[idx]*1000   #For the billion cases
  idx = which(df$MarketCap>0)
  df = df[idx,]
  Finance = df %>% filter(Sector=="Finance")

  output$mytable1 <- DT::renderDataTable({
    DT::datatable(Finance[, input$show_vars, drop = FALSE])
  })
}
```

And then, as needed, the **ui.R** script.

```
#ui.R
library(shiny)
library(ggplot2)
library(quantmod)
library(DT)
library(dplyr)
library(magrittr)

fluidPage(
  title = 'Financial Firms Data',
  sidebarLayout(
    sidebarPanel(
      conditionalPanel(
        'input.dataset === "Finance"',
        checkboxGroupInput('show_vars', 'Choose data elements:',
                           names(Finance), selected = names(Finance))
      )
    ),
    mainPanel(
      tabsetPanel(
        id = 'dataset',
        tabPanel('Finance', DT::dataTableOutput('mytable1'))
      )
    )
  )
)
```

Cut and paste this code into the **server.R** and **ui.R** scripts in your directory and then execute the app.
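As an alternative to pressing **RunApp** in RStudio, the app can be launched from the R console by pointing **runApp** at the folder that holds the two scripts; the folder name used below is just a placeholder.

```
#LAUNCH THE APP FROM THE CONSOLE ("FinanceApp" is a placeholder folder name)
library(shiny)
runApp("FinanceApp")   #directory containing the server.R and ui.R files above
```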
Chapter 6 Bayes Models: Learning from Experience ================================================ For a fairly good introduction to Bayes Rule, see Wikipedia, <http://en.wikipedia.org/wiki/Bayes_theorem>. The various R packages for Bayesian inference are at: [http://cran.r\-project.org/web/views/Bayesian.html](http://cran.r-project.org/web/views/Bayesian.html). In business, we often want to ask, is a given phenomena real or just a coincidence? Bayes theorem really helps with that. For example, we may ask – is Warren Buffet’s investment success a coincidence? How would you answer this question? Would it depend on your prior probability of Buffet being able to beat the market? How would this answer change as additional information about his performance was being released over time? 6\.1 Bayes’ Theorem ------------------- Bayes rule follows easily from a decomposition of joint probability, i.e., \\\[ Pr\[A \\cap B] \= Pr(A\|B)\\; Pr(B) \= Pr(B\|A)\\; Pr(A) \\] Then the last two terms may be arranged to give \\\[ Pr(A\|B) \= \\frac{Pr(B\|A)\\; Pr(A)}{Pr(B)} \\] or \\\[ Pr(B\|A) \= \\frac{Pr(A\|B)\\; Pr(B)}{Pr(A)} \\] 6\.2 Example: Aids Testing -------------------------- This is an interesting problem, because it shows that if you are diagnosed with AIDS, there is a good chance the diagnosis is wrong, but if you are diagnosed as not having AIDS then there is a good chance it is right \- hopefully this is comforting news. Define, \\(\\{Pos,Neg\\}\\) as a positive or negative diagnosis of having AIDS. Also define \\(\\{Dis, NoDis\\}\\) as the event of having the disease versus not having it. There are 1\.5 million AIDS cases in the U.S. and about 300 million people which means the probability of AIDS in the population is 0\.005 (half a percent). Hence, a random test will uncover someone with AIDS with a half a percent probability. The confirmation accuracy of the AIDS test is 99%, such that we have \\\[ Pr(Pos \| Dis) \= 0\.99 \\] Hence the test is reasonably good. The accuracy of the test for people who do not have AIDS is \\\[ Pr(Neg \| NoDis) \= 0\.95 \\] What we really want is the probability of having the disease when the test comes up positive, i.e. we need to compute \\(Pr(Dis \| Pos)\\). Using Bayes Rule we calculate: \\\[ \\begin{aligned} Pr(Dis \| Pos) \&\= \\frac{Pr(Pos \| Dis)Pr(Dis)}{Pr(Pos)} \\\\ \&\= \\frac{Pr(Pos \| Dis)Pr(Dis)}{Pr(Pos \| Dis)Pr(Dis) \+ Pr(Pos\|NoDis) Pr(NoDis)} \\\\ \&\= \\frac{0\.99 \\times 0\.005}{(0\.99\)(0\.005\) \+ (0\.05\)(0\.995\)} \\\\ \&\= 0\.0904936 \\end{aligned} \\] Hence, the chance of having AIDS when the test is positive is only 9%. We might also care about the chance of not having AIDS when the test is positive \\\[ Pr(NoDis \| Pos) \= 1 \- Pr(Dis \| Pos) \= 1 \- 0\.09 \= 0\.91 \\] Finally, what is the chance that we have AIDS even when the test is negative \- this would also be a matter of concern to many of us, who might not relish the chance to be on some heavy drugs for the rest of our lives. \\\[ \\begin{aligned} Pr(Dis \| Neg) \&\= \\frac{Pr(Neg \| Dis)Pr(Dis)}{Pr(Neg)} \\\\ \&\= \\frac{Pr(Neg \| Dis)Pr(Dis)}{Pr(Neg \| Dis)Pr(Dis) \+ Pr(Neg\|NoDis) Pr(NoDis)} \\\\ \&\= \\frac{0\.01 \\times 0\.005}{(0\.01\)(0\.005\) \+ (0\.95\)(0\.995\)} \\\\ \&\= 0\.000053 \\end{aligned} \\] Hence, this is a worry we should not have. If the test is negative, there is a miniscule chance that we are infected with AIDS. 
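These two hand calculations are easy to verify directly in R; the sketch below simply re\-codes the numbers used above.

```
#BAYES RULE CHECK FOR THE AIDS EXAMPLE
p_dis = 0.005                #Pr(Dis)
p_pos_given_dis = 0.99       #Pr(Pos | Dis)
p_neg_given_nodis = 0.95     #Pr(Neg | NoDis)
p_dis_given_pos = (p_pos_given_dis*p_dis)/
  (p_pos_given_dis*p_dis + (1-p_neg_given_nodis)*(1-p_dis))
p_dis_given_neg = ((1-p_pos_given_dis)*p_dis)/
  ((1-p_pos_given_dis)*p_dis + p_neg_given_nodis*(1-p_dis))
print(c(p_dis_given_pos, p_dis_given_neg))   #approximately 0.0905 and 0.000053
```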
6\.3 Computational Approach using Sets -------------------------------------- The preceding analysis is a good lead in to (a) the connection with joint probability distributions, and (b) using R to demonstrate a computational way of thinking about Bayes theorem. Let’s begin by assuming that we have 300,000 people in the population, to scale down the numbers from the millions for convenience. Of these 1,500 have AIDS. So let’s create the population and then sample from it. See the use of the **sample** function in R. ``` #PEOPLE WITH AIDS people = seq(1,300000) people_aids = sample(people,1500) people_noaids = setdiff(people,people_aids) ``` Note, how we also used the **setdiff** function to get the complement set of the people who do not have AIDS. Now, of the people who have AIDS, we know that 99% of them test positive so let’s subset that list, and also take its complement. These are joint events, and their numbers proscribe the joint distribution. ``` people_aids_pos = sample(people_aids,1500*0.99) people_aids_neg = setdiff(people_aids,people_aids_pos) print(length(people_aids_pos)) ``` ``` ## [1] 1485 ``` ``` print(length(people_aids_neg)) ``` ``` ## [1] 15 ``` ``` people_aids_neg ``` ``` ## [1] 35037 126781 139889 193826 149185 135464 28387 14428 257567 114212 ## [11] 57248 151006 283192 168069 153407 ``` We can also subset the group that does not have AIDS, as we know that the test is negative for them 95% of the time. ``` #PEOPLE WITHOUT AIDS people_noaids_neg = sample(people_noaids,298500*0.95) people_noaids_pos = setdiff(people_noaids,people_noaids_neg) print(length(people_noaids_neg)) ``` ``` ## [1] 283575 ``` ``` print(length(people_noaids_pos)) ``` ``` ## [1] 14925 ``` We can now compute the probability that someone actually has AIDS when the test comes out positive. ``` #HAVE AIDS GIVEN TEST IS POSITIVE pr_aids_given_pos = (length(people_aids_pos))/ (length(people_aids_pos)+length(people_noaids_pos)) pr_aids_given_pos ``` ``` ## [1] 0.0904936 ``` This confirms the formal Bayes computation that we had undertaken earlier. And of course, as we had examined earlier, what’s the chance that you have AIDS when the test is negative, i.e., a false negative? ``` #FALSE NEGATIVE: HAVE AIDS WHEN TEST IS NEGATIVE pr_aids_given_neg = (length(people_aids_neg))/ (length(people_aids_neg)+length(people_noaids_neg)) pr_aids_given_neg ``` ``` ## [1] 5.289326e-05 ``` Phew! Note here that we first computed the joint sets covering joint outcomes, and then used these to compute conditional (Bayes) probabilities. The approach used R to apply a set\-theoretic, computational approach to calculating conditional probabilities. 6\.4 A Second Opinion --------------------- What if we tested positive, and then decided to get a second opinion, i.e., take another test. What would now be the posterior probability in this case? Here is the calculation. ``` #SECOND OPINION MEDICAL TEST FOR AIDS 0.99*0.09/(0.99*0.09 + 0.05*0.91) ``` ``` ## [1] 0.6619614 ``` Note that we used the previous posterior probability (0\.91\) as the prior probability in this calculation. 6\.5 Correlated Default ----------------------- Bayes theorem is very useful when we want to extract conditional default information. Bond fund managers are not as interested in the correlation of default of the bonds in their portfolio as much as the conditional default of bonds. What this means is that they care about the *conditional* probability of bond A defaulting if bond B has defaulted already. 
Modern finance provides many tools to obtain the default probabilities of firms. Suppose we know that firm 1 has default probability \\(p\_1 \= 1\\%\\) and firm 2 has default probability \\(p\_2\=3\\%\\). If the correlation of default of the two firms is 40% over one year, then if either bond defaults, what is the probability of default of the other, conditional on the first default? ### 6\.5\.1 Indicator Functions for Default We can see that even with this limited information, Bayes theorem allows us to derive the conditional probabilities of interest. First define \\(d\_i, i\=1,2\\) as default indicators for firms 1 and 2\. \\(d\_i\=1\\) if the firm defaults, and zero otherwise. We note that: \\\[ E(d\_1\)\=1 . p\_1 \+ 0 . (1\-p\_1\) \= p\_1 \= 0\.01\. \\] Likewise \\\[ E(d\_2\)\=1 . p\_2 \+ 0 . (1\-p\_2\) \= p\_2 \= 0\.03\. \\] The Bernoulli distribution lets us derive the standard deviation of \\(d\_1\\) and \\(d\_2\\). \\\[ \\begin{aligned} \\sigma\_1 \&\= \\sqrt{p\_1 (1\-p\_1\)} \= \\sqrt{(0\.01\)(0\.99\)} \= 0\.099499 \\\\ \\sigma\_2 \&\= \\sqrt{p\_2 (1\-p\_2\)} \= \\sqrt{(0\.03\)(0\.97\)} \= 0\.17059 \\end{aligned} \\] ### 6\.5\.2 Default Correlation Now, we note that \\\[ \\begin{aligned} Cov(d\_1,d\_2\) \&\= E(d\_1 . d\_2\) \- E(d\_1\)E(d\_2\) \\\\ \\rho \\sigma\_1 \\sigma\_2 \&\= E(d\_1 . d\_2\) \- p\_1 p\_2 \\\\ (0\.4\)(0\.099499\)(0\.17059\) \&\= E(d\_1 . d\_2\) \- (0\.01\)(0\.03\) \\\\ E(d\_1 . d\_2\) \&\= 0\.0070894 \\\\ E(d\_1 . d\_2\) \&\\equiv p\_{12} \\end{aligned} \\] where \\(p\_{12}\\) is the probability of default of both firm 1 and 2\. We now get the conditional probabilities: \\\[ \\begin{aligned} p(d\_1 \| d\_2\) \&\= p\_{12}/p\_2 \= 0\.0070894/0\.03 \= 0\.23631 \\\\ p(d\_2 \| d\_1\) \&\= p\_{12}/p\_1 \= 0\.0070894/0\.01 \= 0\.70894 \\end{aligned} \\] These conditional probabilities are non\-trivial in size, even though the individual probabilities of default are very small. What this means is that default contagion can be quite severe once firms begin to default. (This example used our knowledge of Bayes’ rule, correlations, covariances, and joint events.) 6\.6 Continuous Space Bayes Theorem ----------------------------------- In Bayesian approaches, the terms “prior”, “posterior”, and “likelihood” are commonly used and we explore this terminology here. We are usually interested in the parameter \\(\\theta\\), the mean of the distribution of some data \\(x\\) (I am using the standard notation here). But in the Bayesian setting we do not just want the value of \\(\\theta\\), but we want a distribution of values of \\(\\theta\\) starting from some prior assumption about this distribution. So we start with \\(p(\\theta)\\), which we call the **prior** distribution. We then observe data \\(x\\), and combine the data with the prior to get the **posterior** distribution \\(p(\\theta \| x)\\). To do this, we need to compute the probability of seeing the data \\(x\\) given our prior \\(p(\\theta)\\) and this probability is given by the **likelihood** function \\(L(x \| \\theta)\\). Assume that the variance of the data \\(x\\) is known, i.e., is \\(\\sigma^2\\). 
### 6\.6\.1 Formulation Applying Bayes’ theorem we have \\\[ p(\\theta \| x) \= \\frac{L(x \| \\theta)\\; p(\\theta)}{\\int L(x \| \\theta) \\; p(\\theta)\\; d\\theta} \\propto L(x \| \\theta)\\; p(\\theta) \\] If we assume the prior distribution for the mean of the data is normal, i.e., \\(p(\\theta) \\sim N\[\\mu\_0, \\sigma\_0^2]\\), and the likelihood is also normal, i.e., \\(L(x \| \\theta) \\sim N\[\\theta, \\sigma^2]\\), then we have that \\\[ \\begin{aligned} p(\\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma\_0^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\sim N\[\\theta \| \\mu\_0, \\sigma\_0^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\\\ L(x \| \\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\sim N\[x \| \\theta, \\sigma^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\end{aligned} \\] ### 6\.6\.2 Posterior Distribution Given this, the posterior is as follows: \\\[ p(\\theta \| x) \\propto L(x \| \\theta) p(\\theta) \\;\\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\] Define the precision values to be \\(\\tau\_0 \= \\frac{1}{\\sigma\_0^2}\\), and \\(\\tau \= \\frac{1}{\\sigma^2}\\). Then it can be shown that when you observe a new value of the data \\(x\\), the posterior distribution is written down in closed form as: \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+\\tau} x, \\; \\; \\frac{1}{\\tau\_0 \+ \\tau} \\right] \\] When the posterior distribution and prior distribution have the same form, they are said to be “conjugate” with respect to the specific likelihood function. ### 6\.6\.3 Example To take an example, suppose our prior for the mean of the equity premium per month is \\(p(\\theta) \\sim N\[0\.005, 0\.001^2]\\). The standard deviation of the equity premium is 0\.04\. If next month we observe an equity premium of 1%, what is the posterior distribution of the mean equity premium? ``` mu0 = 0.005 sigma0 = 0.001 sigma=0.04 x = 0.01 tau0 = 1/sigma0^2 tau = 1/sigma^2 posterior_mean = tau0*mu0/(tau0+tau) + tau*x/(tau0+tau) print(posterior_mean) ``` ``` ## [1] 0.005003123 ``` ``` posterior_var = 1/(tau0+tau) print(sqrt(posterior_var)) ``` ``` ## [1] 0.0009996876 ``` Hence, we see that after updating the mean has increased mildly because the data came in higher than expected. ### 6\.6\.4 General Formula for \\(n\\) sequential updates If we observe \\(n\\) new values of \\(x\\), then the new posterior is \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+n\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+n\\tau} \\sum\_{j\=1}^n x\_j, \\; \\; \\frac{1}{\\tau\_0 \+ n \\tau} \\right] \\] This is easy to derive, as it is just the result you obtain if you took each \\(x\_j\\) and updated the posterior one at a time. Try this as an Exercise. *Estimate the equity risk premium*. We will use data and discrete Bayes to come up with a forecast of the equity risk premium. Proceed along the following lines using the **LearnBayes** package. 1. We’ll use data from 1926 onwards from the Fama\-French data repository. All you need is the equity premium \\((r\_m\-r\_f)\\) data, and I will leave it up to you to choose if you want to use annual or monthly data. Download this and load it into R. 2. 
Using the series only up to the year 2000, present the descriptive statistics for the equity premium. State these in annualized terms. 3. Present the distribution of returns as a histogram. 4. Store the results of the histogram, i.e., the range of discrete values of the equity premium, and the probability of each one. Treat this as your prior distribution. 5. Now take the remaining data for the years after 2000, and use this data to update the prior and construct a posterior. Assume that the prior, likelihood, and posterior are normally distributed. Use the **discrete.bayes** function to construct the posterior distribution and plot it using a histogram. See if you can put the prior and posterior on the same plot to see how the new data has changed the prior. 6. What is the forecasted equity premium, and what is the confidence interval around your forecast? 6\.7 Bayes Classifier --------------------- Suppose we want to classify entities (emails, consumers, companies, tumors, images, etc.) into categories \\(c\\). Think of a data set in which each row is one observation with several characteristics, i.e., the \\(x\\) variables, together with its category \\(c\\). Suppose there are \\(n\\) variables, and \\(m\\) categories. We use the data to construct the prior and conditional probabilities. Once these probabilities are computed, we say that the model is “trained”. The trained classifier contains the unconditional probabilities \\(p(c)\\) of each class, which are simply the frequencies with which each category appears. It also contains the conditional probability distributions \\(p(x \|c)\\), summarized as the mean and standard deviation of each variable within each class. The posterior probabilities are computed as follows. These tell us the most likely category given the data \\(x\\) on any observation. \\\[ p(c\=i \| x\_1,x\_2,...,x\_n) \= \\frac{p(x\_1,x\_2,...,x\_n\|c\=i) \\cdot p(c\=i)}{\\sum\_{j\=1}^m p(x\_1,x\_2,...,x\_n\|c\=j) \\cdot p(c\=j)}, \\quad \\forall i\=1,2,...,m \\] In the naive Bayes model, it is assumed that all the \\(x\\) variables are independent of each other, so that we may write \\\[ p(x\_1,x\_2,...,x\_n \| c\=i) \= p(x\_1 \| c\=i) \\cdot p(x\_2 \| c\=i) \\cdots p(x\_n \| c\=i) \\] We use the **e1071** package. Its **naiveBayes()** function takes in the tagged training data set in a single call and returns the trained classifier model. We may take this trained model and re\-apply it to the training data set to see how well it does. We use the **predict()** function for this. The data set here is the classic Iris data. 
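Before turning to the packaged implementation in the next subsection, here is a minimal hand\-rolled sketch of the same calculation for a single observation (our own illustrative code and variable names). For numeric predictors the trained classifier stores a mean and standard deviation per variable within each class, as seen in the output below, so the posterior is obtained by multiplying one normal density per variable (the independence assumption), weighting by the class frequency, and normalizing.

```
# Hand-rolled Gaussian naive Bayes posterior for one iris observation.
# Illustrative sketch of essentially what the packaged classifier computes.
data(iris)
x = unlist(iris[1, 1:4])                   # one observation (row 1)
classes = levels(iris$Species)
unnormalized = sapply(classes, function(cl) {
  sub = iris[iris$Species == cl, 1:4]
  prior = mean(iris$Species == cl)         # p(c): class frequency
  # independence assumption: product of univariate normal densities p(x_j | c)
  lik = prod(dnorm(x, mean = colMeans(sub), sd = apply(sub, 2, sd)))
  prior * lik
})
print(unnormalized / sum(unnormalized))    # posterior p(c | x), sums to one
```

For the first row of the data this puts essentially all of the posterior mass on the setosa class, in line with the first row of the **predict()** output shown next.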
### 6\.7\.1 Example ``` library(e1071) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` tail(iris) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 145 6.7 3.3 5.7 2.5 virginica ## 146 6.7 3.0 5.2 2.3 virginica ## 147 6.3 2.5 5.0 1.9 virginica ## 148 6.5 3.0 5.2 2.0 virginica ## 149 6.2 3.4 5.4 2.3 virginica ## 150 5.9 3.0 5.1 1.8 virginica ``` ``` #NAIVE BAYES res = naiveBayes(iris[,1:4],iris[,5]) #SHOWS THE PRIOR AND LIKELIHOOD FUNCTIONS res ``` ``` ## ## Naive Bayes Classifier for Discrete Predictors ## ## Call: ## naiveBayes.default(x = iris[, 1:4], y = iris[, 5]) ## ## A-priori probabilities: ## iris[, 5] ## setosa versicolor virginica ## 0.3333333 0.3333333 0.3333333 ## ## Conditional probabilities: ## Sepal.Length ## iris[, 5] [,1] [,2] ## setosa 5.006 0.3524897 ## versicolor 5.936 0.5161711 ## virginica 6.588 0.6358796 ## ## Sepal.Width ## iris[, 5] [,1] [,2] ## setosa 3.428 0.3790644 ## versicolor 2.770 0.3137983 ## virginica 2.974 0.3224966 ## ## Petal.Length ## iris[, 5] [,1] [,2] ## setosa 1.462 0.1736640 ## versicolor 4.260 0.4699110 ## virginica 5.552 0.5518947 ## ## Petal.Width ## iris[, 5] [,1] [,2] ## setosa 0.246 0.1053856 ## versicolor 1.326 0.1977527 ## virginica 2.026 0.2746501 ``` ``` #SHOWS POSTERIOR PROBABILITIES predict(res,iris[,1:4],type="raw") ``` ``` ## setosa versicolor virginica ## [1,] 1.000000e+00 2.981309e-18 2.152373e-25 ## [2,] 1.000000e+00 3.169312e-17 6.938030e-25 ## [3,] 1.000000e+00 2.367113e-18 7.240956e-26 ## [4,] 1.000000e+00 3.069606e-17 8.690636e-25 ## [5,] 1.000000e+00 1.017337e-18 8.885794e-26 ## [6,] 1.000000e+00 2.717732e-14 4.344285e-21 ## [7,] 1.000000e+00 2.321639e-17 7.988271e-25 ## [8,] 1.000000e+00 1.390751e-17 8.166995e-25 ## [9,] 1.000000e+00 1.990156e-17 3.606469e-25 ## [10,] 1.000000e+00 7.378931e-18 3.615492e-25 ## [11,] 1.000000e+00 9.396089e-18 1.474623e-24 ## [12,] 1.000000e+00 3.461964e-17 2.093627e-24 ## [13,] 1.000000e+00 2.804520e-18 1.010192e-25 ## [14,] 1.000000e+00 1.799033e-19 6.060578e-27 ## [15,] 1.000000e+00 5.533879e-19 2.485033e-25 ## [16,] 1.000000e+00 6.273863e-17 4.509864e-23 ## [17,] 1.000000e+00 1.106658e-16 1.282419e-23 ## [18,] 1.000000e+00 4.841773e-17 2.350011e-24 ## [19,] 1.000000e+00 1.126175e-14 2.567180e-21 ## [20,] 1.000000e+00 1.808513e-17 1.963924e-24 ## [21,] 1.000000e+00 2.178382e-15 2.013989e-22 ## [22,] 1.000000e+00 1.210057e-15 7.788592e-23 ## [23,] 1.000000e+00 4.535220e-20 3.130074e-27 ## [24,] 1.000000e+00 3.147327e-11 8.175305e-19 ## [25,] 1.000000e+00 1.838507e-14 1.553757e-21 ## [26,] 1.000000e+00 6.873990e-16 1.830374e-23 ## [27,] 1.000000e+00 3.192598e-14 1.045146e-21 ## [28,] 1.000000e+00 1.542562e-17 1.274394e-24 ## [29,] 1.000000e+00 8.833285e-18 5.368077e-25 ## [30,] 1.000000e+00 9.557935e-17 3.652571e-24 ## [31,] 1.000000e+00 2.166837e-16 6.730536e-24 ## [32,] 1.000000e+00 3.940500e-14 1.546678e-21 ## [33,] 1.000000e+00 1.609092e-20 1.013278e-26 ## [34,] 1.000000e+00 7.222217e-20 4.261853e-26 ## [35,] 1.000000e+00 6.289348e-17 1.831694e-24 ## [36,] 1.000000e+00 2.850926e-18 8.874002e-26 ## [37,] 1.000000e+00 7.746279e-18 7.235628e-25 ## [38,] 1.000000e+00 8.623934e-20 1.223633e-26 ## [39,] 1.000000e+00 4.612936e-18 9.655450e-26 ## [40,] 1.000000e+00 2.009325e-17 1.237755e-24 ## [41,] 1.000000e+00 1.300634e-17 5.657689e-25 ## [42,] 
1.000000e+00 1.577617e-15 5.717219e-24 ## [43,] 1.000000e+00 1.494911e-18 4.800333e-26 ## [44,] 1.000000e+00 1.076475e-10 3.721344e-18 ## [45,] 1.000000e+00 1.357569e-12 1.708326e-19 ## [46,] 1.000000e+00 3.882113e-16 5.587814e-24 ## [47,] 1.000000e+00 5.086735e-18 8.960156e-25 ## [48,] 1.000000e+00 5.012793e-18 1.636566e-25 ## [49,] 1.000000e+00 5.717245e-18 8.231337e-25 ## [50,] 1.000000e+00 7.713456e-18 3.349997e-25 ## [51,] 4.893048e-107 8.018653e-01 1.981347e-01 ## [52,] 7.920550e-100 9.429283e-01 5.707168e-02 ## [53,] 5.494369e-121 4.606254e-01 5.393746e-01 ## [54,] 1.129435e-69 9.999621e-01 3.789964e-05 ## [55,] 1.473329e-105 9.503408e-01 4.965916e-02 ## [56,] 1.931184e-89 9.990013e-01 9.986538e-04 ## [57,] 4.539099e-113 6.592515e-01 3.407485e-01 ## [58,] 2.549753e-34 9.999997e-01 3.119517e-07 ## [59,] 6.562814e-97 9.895385e-01 1.046153e-02 ## [60,] 5.000210e-69 9.998928e-01 1.071638e-04 ## [61,] 7.354548e-41 9.999997e-01 3.143915e-07 ## [62,] 4.799134e-86 9.958564e-01 4.143617e-03 ## [63,] 4.631287e-60 9.999925e-01 7.541274e-06 ## [64,] 1.052252e-103 9.850868e-01 1.491324e-02 ## [65,] 4.789799e-55 9.999700e-01 2.999393e-05 ## [66,] 1.514706e-92 9.787587e-01 2.124125e-02 ## [67,] 1.338348e-97 9.899311e-01 1.006893e-02 ## [68,] 2.026115e-62 9.999799e-01 2.007314e-05 ## [69,] 6.547473e-101 9.941996e-01 5.800427e-03 ## [70,] 3.016276e-58 9.999913e-01 8.739959e-06 ## [71,] 1.053341e-127 1.609361e-01 8.390639e-01 ## [72,] 1.248202e-70 9.997743e-01 2.256698e-04 ## [73,] 3.294753e-119 9.245812e-01 7.541876e-02 ## [74,] 1.314175e-95 9.979398e-01 2.060233e-03 ## [75,] 3.003117e-83 9.982736e-01 1.726437e-03 ## [76,] 2.536747e-92 9.865372e-01 1.346281e-02 ## [77,] 1.558909e-111 9.102260e-01 8.977398e-02 ## [78,] 7.014282e-136 7.989607e-02 9.201039e-01 ## [79,] 5.034528e-99 9.854957e-01 1.450433e-02 ## [80,] 1.439052e-41 9.999984e-01 1.601574e-06 ## [81,] 1.251567e-54 9.999955e-01 4.500139e-06 ## [82,] 8.769539e-48 9.999983e-01 1.742560e-06 ## [83,] 3.447181e-62 9.999664e-01 3.361987e-05 ## [84,] 1.087302e-132 6.134355e-01 3.865645e-01 ## [85,] 4.119852e-97 9.918297e-01 8.170260e-03 ## [86,] 1.140835e-102 8.734107e-01 1.265893e-01 ## [87,] 2.247339e-110 7.971795e-01 2.028205e-01 ## [88,] 4.870630e-88 9.992978e-01 7.022084e-04 ## [89,] 2.028672e-72 9.997620e-01 2.379898e-04 ## [90,] 2.227900e-69 9.999461e-01 5.390514e-05 ## [91,] 5.110709e-81 9.998510e-01 1.489819e-04 ## [92,] 5.774841e-99 9.885399e-01 1.146006e-02 ## [93,] 5.146736e-66 9.999591e-01 4.089540e-05 ## [94,] 1.332816e-34 9.999997e-01 2.716264e-07 ## [95,] 6.094144e-77 9.998034e-01 1.966331e-04 ## [96,] 1.424276e-72 9.998236e-01 1.764463e-04 ## [97,] 8.302641e-77 9.996692e-01 3.307548e-04 ## [98,] 1.835520e-82 9.988601e-01 1.139915e-03 ## [99,] 5.710350e-30 9.999997e-01 3.094739e-07 ## [100,] 3.996459e-73 9.998204e-01 1.795726e-04 ## [101,] 3.993755e-249 1.031032e-10 1.000000e+00 ## [102,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [103,] 2.460661e-216 2.327488e-07 9.999998e-01 ## [104,] 2.864831e-173 2.290954e-03 9.977090e-01 ## [105,] 8.299884e-214 3.175384e-07 9.999997e-01 ## [106,] 1.371182e-267 3.807455e-10 1.000000e+00 ## [107,] 3.444090e-107 9.719885e-01 2.801154e-02 ## [108,] 3.741929e-224 1.782047e-06 9.999982e-01 ## [109,] 5.564644e-188 5.823191e-04 9.994177e-01 ## [110,] 2.052443e-260 2.461662e-12 1.000000e+00 ## [111,] 8.669405e-159 4.895235e-04 9.995105e-01 ## [112,] 4.220200e-163 3.168643e-03 9.968314e-01 ## [113,] 4.360059e-190 6.230821e-06 9.999938e-01 ## [114,] 6.142256e-151 1.423414e-02 9.857659e-01 ## [115,] 
2.201426e-186 1.393247e-06 9.999986e-01 ## [116,] 2.949945e-191 6.128385e-07 9.999994e-01 ## [117,] 2.909076e-168 2.152843e-03 9.978472e-01 ## [118,] 1.347608e-281 2.872996e-12 1.000000e+00 ## [119,] 2.786402e-306 1.151469e-12 1.000000e+00 ## [120,] 2.082510e-123 9.561626e-01 4.383739e-02 ## [121,] 2.194169e-217 1.712166e-08 1.000000e+00 ## [122,] 3.325791e-145 1.518718e-02 9.848128e-01 ## [123,] 6.251357e-269 1.170872e-09 1.000000e+00 ## [124,] 4.415135e-135 1.360432e-01 8.639568e-01 ## [125,] 6.315716e-201 1.300512e-06 9.999987e-01 ## [126,] 5.257347e-203 9.507989e-06 9.999905e-01 ## [127,] 1.476391e-129 2.067703e-01 7.932297e-01 ## [128,] 8.772841e-134 1.130589e-01 8.869411e-01 ## [129,] 5.230800e-194 1.395719e-05 9.999860e-01 ## [130,] 7.014892e-179 8.232518e-04 9.991767e-01 ## [131,] 6.306820e-218 1.214497e-06 9.999988e-01 ## [132,] 2.539020e-247 4.668891e-10 1.000000e+00 ## [133,] 2.210812e-201 2.000316e-06 9.999980e-01 ## [134,] 1.128613e-128 7.118948e-01 2.881052e-01 ## [135,] 8.114869e-151 4.900992e-01 5.099008e-01 ## [136,] 7.419068e-249 1.448050e-10 1.000000e+00 ## [137,] 1.004503e-215 9.743357e-09 1.000000e+00 ## [138,] 1.346716e-167 2.186989e-03 9.978130e-01 ## [139,] 1.994716e-128 1.999894e-01 8.000106e-01 ## [140,] 8.440466e-185 6.769126e-06 9.999932e-01 ## [141,] 2.334365e-218 7.456220e-09 1.000000e+00 ## [142,] 2.179139e-183 6.352663e-07 9.999994e-01 ## [143,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [144,] 3.426814e-229 6.597015e-09 1.000000e+00 ## [145,] 2.011574e-232 2.620636e-10 1.000000e+00 ## [146,] 1.078519e-187 7.915543e-07 9.999992e-01 ## [147,] 1.061392e-146 2.770575e-02 9.722942e-01 ## [148,] 1.846900e-164 4.398402e-04 9.995602e-01 ## [149,] 1.439996e-195 3.384156e-07 9.999997e-01 ## [150,] 2.771480e-143 5.987903e-02 9.401210e-01 ``` ``` #CONFUSION MATRIX out = table(predict(res,iris[,1:4]),iris[,5]) out ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 47 3 ## virginica 0 3 47 ``` 6\.8 Bayes Nets --------------- Higher\-dimension Bayes problems and joint distributions over several outcomes/events are easy to visualize with a network diagram, also called a Bayes net. A Bayes net is a directed, acyclic graph (known as a DAG), i.e., cycles are not permitted in the graph. A good way to understand a Bayes net is with an example of economic distress. There are three levels at which distress may be noticed: economy level (\\(E\=1\\)), industry level (\\(I\=1\\)), or at a particular firm level (\\(F\=1\\)). Economic distress can lead to industry distress and/or firm distress, and industry distress may or may not result in a firm’s distress. The probabilities are as follows. Note that the probabilities in the first tableau are unconditional, but in all the subsequent tableaus they are conditional probabilities. See @(fig:bayesnet1\). Figure 6\.1: Conditional probabilities The Bayes net shows the pathways of economic distress. There are three channels: \\(a\\) is the inducement of industry distress from economy distress; \\(b\\) is the inducement of firm distress directly from economy distress; \\(c\\) is the inducement of firm distress directly from industry distress. See @(fig:bayesnet2\). Figure 6\.2: Bayesian network Note here that each pair of conditional probabilities adds up to 1\. The “channels” in the tableaus refer to the arrows in the Bayes net diagram. 
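The network is small enough that we can also enumerate the joint distribution \\(Pr(E, I, F)\\) implied by these tables directly, rather than sampling. The sketch below is our own illustrative code; the probability values are the ones used in the hand calculation and the set\-based script that follow, and it computes \\(Pr(I\=1\|F\=1\)\\) by summing joint probabilities.

```
# Exact enumeration of the joint distribution Pr(E, I, F) from the
# conditional tables (values as used in the calculations that follow).
pE1    = 0.1                                  # Pr(E = 1)
pI1.E  = c(E0 = 0.2, E1 = 0.6)                # Pr(I = 1 | E)
pF1.EI = c(E0I0 = 0.10, E1I0 = 0.70,          # Pr(F = 1 | E, I)
           E0I1 = 0.80, E1I1 = 0.95)
prF1 = 0; prI1F1 = 0
for (e in 0:1) {
  for (i in 0:1) {
    p_e  = if (e == 1) pE1 else 1 - pE1
    p_i  = if (i == 1) pI1.E[e + 1] else 1 - pI1.E[e + 1]
    p_f1 = pF1.EI[paste0("E", e, "I", i)]
    prF1 = prF1 + p_e * p_i * p_f1            # accumulate Pr(F = 1)
    if (i == 1) prI1F1 = prI1F1 + p_e * p_i * p_f1
  }
}
print(as.numeric(prI1F1 / prF1))              # Pr(I = 1 | F = 1) = 0.6677741
```

This matches both the hand calculation and the sampling\-based answer in the next subsections.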
#### 6\.8\.0\.1 Conditional Probability \- 1 Now we will compute an answer to the question: What is the probability that the industry is distressed if the firm is known to be in distress? The calculation is as follows: \\\[ \\begin{aligned} Pr(I\=1\|F\=1\) \&\= \\frac{Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\)}{Pr(F\=1\)} \\\\ Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\) \&\= Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\|E\=1\)\\cdot Pr(E\=1\) \\\\ \&\+ Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\|E\=0\)\\cdot Pr(E\=0\)\\\\ \&\= 0\.95 \\times 0\.6 \\times 0\.1 \+ 0\.8 \\times 0\.2 \\times 0\.9 \= 0\.201 \\\\ \\end{aligned} \\] \\\[ \\begin{aligned} Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\) \&\= Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\|E\=1\)\\cdot Pr(E\=1\) \\\\ \&\+ Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\|E\=0\)\\cdot Pr(E\=0\)\\\\ \&\= 0\.7 \\times 0\.4 \\times 0\.1 \+ 0\.1 \\times 0\.8 \\times 0\.9 \= 0\.100 \\end{aligned} \\] \\\[ \\begin{aligned} Pr(F\=1\) \&\= Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\) \\\\ \&\+ Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\) \= 0\.301 \\end{aligned} \\] \\\[ Pr(I\=1\|F\=1\) \= \\frac{Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\)}{Pr(F\=1\)} \= \\frac{0\.201}{0\.301} \= 0\.6677741 \\] #### 6\.8\.0\.2 Computational set\-theoretic approach We may write a R script to compute the conditional probability that the industry is distressed when a firm is distressed. This uses the set approach that we visited earlier. ``` #BAYES NET COMPUTATIONS E = seq(1,100000) n = length(E) E1 = sample(E,length(E)*0.1) E0 = setdiff(E,E1) E1I1 = sample(E1,length(E1)*0.6) E1I0 = setdiff(E1,E1I1) E0I1 = sample(E0,length(E0)*0.2) E0I0 = setdiff(E0,E0I1) E1I1F1 = sample(E1I1,length(E1I1)*0.95) E1I1F0 = setdiff(E1I1,E1I1F1) E1I0F1 = sample(E1I0,length(E1I0)*0.70) E1I0F0 = setdiff(E1I0,E1I0F1) E0I1F1 = sample(E0I1,length(E0I1)*0.80) E0I1F0 = setdiff(E0I1,E0I1F1) E0I0F1 = sample(E0I0,length(E0I0)*0.10) E0I0F0 = setdiff(E0I0,E0I0F1) pr_I1_given_F1 = length(c(E1I1F1,E0I1F1))/ length(c(E1I1F1,E1I0F1,E0I1F1,E0I0F1)) print(pr_I1_given_F1) ``` ``` ## [1] 0.6677741 ``` Running this program gives the desired probability and confirms the previous result. #### 6\.8\.0\.3 Conditional Probability \- 2 Compute the conditional probability that the economy is in distress if the firm is in distress. Compare this to the previous conditional probability we computed of 0\.6677741\. Should it be lower? ``` pr_E1_given_F1 = length(c(E1I1F1,E1I0F1))/length(c(E1I1F1,E1I0F1,E0I1F1,E0I0F1)) print(pr_E1_given_F1) ``` ``` ## [1] 0.282392 ``` Yes, it should be lower than the probability that the industry is in distress when the firm is in distress, because the economy is one network layer removed from the firm, unlike the industry. #### 6\.8\.0\.4 R Packages for Bayes Nets What packages does R provide for doing Bayes Nets? See: [http://cran.r\-project.org/web/views/Bayesian.html](http://cran.r-project.org/web/views/Bayesian.html) 6\.9 Bayes in Marketing ----------------------- In pilot market tests (part of a larger market research campaign), Bayes theorem shows up in a simple manner. Suppose we have a project whose value is \\(x\\). If the product is successful (\\(S\\)), the payoff is \\(\+100\\) and if the product fails (\\(F\\)) the payoff is \\(\-70\\). The probability of these two events is: \\\[ Pr(S) \= 0\.7, \\quad Pr(F) \= 0\.3 \\] You can easily check that the expected value is \\(E(x) \= 49\\). Suppose we were able to buy protection for a failed product, then this protection would be a put option (of the real option type), and would be worth \\(0\.3 \\times 70 \= 21\\). 
Since the put saves the loss on failure, its value is simply the expected loss amount, conditional on loss. Market researchers think of this as the value of **perfect information**. #### 6\.9\.0\.1 Product Launch? Would you proceed with this product launch given these odds? **Yes**, the expected value is positive (note that we are assuming away risk aversion issues here; this is a marketing research analysis, not a finance exercise). #### 6\.9\.0\.2 Pilot Test Now suppose there is an intermediate choice, i.e., you can undertake a pilot test (denoted \\(T\\)). Pilot tests are not highly accurate, though they are reasonably sophisticated. The pilot test signals success (\\(T\+\\)) or failure (\\(T\-\\)) with the following probabilities: \\\[ Pr(T\+ \| S) \= 0\.8 \\\\ Pr(T\- \| S) \= 0\.2 \\\\ Pr(T\+ \| F) \= 0\.3 \\\\ Pr(T\- \| F) \= 0\.7 \\] What are these? We note that \\(Pr(T\+ \| S)\\) stands for the probability that the pilot signals success when the underlying product launch will indeed be successful. Thus the pilot gives an accurate reading of success only 80% of the time. Analogously, one can interpret the other probabilities. We may compute the probability that the pilot gives a positive result: \\\[ \\begin{aligned} Pr(T\+) \&\= Pr(T\+ \| S)Pr(S) \+ Pr(T\+ \| F)Pr(F) \\\\ \&\= (0\.8\)(0\.7\)\+(0\.3\)(0\.3\) \= 0\.65 \\end{aligned} \\] and that the result is negative: \\\[ \\begin{aligned} Pr(T\-) \&\= Pr(T\- \| S)Pr(S) \+ Pr(T\- \| F)Pr(F) \\\\ \&\= (0\.2\)(0\.7\)\+(0\.7\)(0\.3\) \= 0\.35 \\end{aligned} \\] which now allows us to compute the following conditional probabilities: \\\[ \\begin{aligned} Pr(S \| T\+) \&\= \\frac{Pr(T\+\|S)Pr(S)}{Pr(T\+)} \= \\frac{(0\.8\)(0\.7\)}{0\.65} \= 0\.86154 \\\\ Pr(S \| T\-) \&\= \\frac{Pr(T\-\|S)Pr(S)}{Pr(T\-)} \= \\frac{(0\.2\)(0\.7\)}{0\.35} \= 0\.4 \\\\ Pr(F \| T\+) \&\= \\frac{Pr(T\+\|F)Pr(F)}{Pr(T\+)} \= \\frac{(0\.3\)(0\.3\)}{0\.65} \= 0\.13846 \\\\ Pr(F \| T\-) \&\= \\frac{Pr(T\-\|F)Pr(F)}{Pr(T\-)} \= \\frac{(0\.7\)(0\.3\)}{0\.35} \= 0\.6 \\end{aligned} \\] Armed with these conditional probabilities, we may now re\-evaluate our product launch. If the pilot comes out positive, the expected value of the product launch is: \\\[ E(x \| T\+) \= 100 Pr(S\|T\+) \+(\-70\) Pr(F\|T\+) \\\\ \= 100(0\.86154\)\-70(0\.13846\) \\\\ \= 76\.462 \\] And if the pilot comes out negative, the value of the launch is: \\\[ E(x \| T\-) \= 100 Pr(S\|T\-) \+(\-70\) Pr(F\|T\-) \\\\ \= 100(0\.4\)\-70(0\.6\) \\\\ \= \-2 \\] So we see that if the pilot is negative, the expected value from the main product launch is negative, and we do not proceed; the payoff in that branch is therefore 0 rather than \\(\-2\\). Thus, the overall expected value after the pilot is \\\[ E(x) \= E(x\|T\+)Pr(T\+) \+ E(x\|T\-)Pr(T\-) \\\\ \= 76\.462(0\.65\) \+ (0\)(0\.35\) \\\\ \= 49\.70 \\] The incremental value over the case without the pilot test is \\(0\.70\\). This is the information value of the pilot test. 6\.10 Other Marketing Applications ---------------------------------- Bayesian methods show up in many areas of the Marketing field, especially around customer heterogeneity; see Allenby and Rossi ([1998](#ref-RePEc:eee:econom:v:89:y:1998:i:1-2:p:57-78)). Other papers are as follows: * See the paper “The HB Revolution: How Bayesian Methods Have Changed the Face of Marketing Research,” by Allenby, Bakken, and Rossi ([2004](#ref-AllenbyBakkenRossi)). * See also the paper by David Bakken, titled “The Bayesian Revolution in Marketing Research”. 
* In conjoint analysis, see the paper by Sawtooth software. [https://www.sawtoothsoftware.com/download/techpap/undca15\.pdf](https://www.sawtoothsoftware.com/download/techpap/undca15.pdf) 
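Finally, the pilot\-test arithmetic in the marketing example above is easy to check computationally. The sketch below is our own illustrative R code, simply re\-tracing those calculations with the probabilities and payoffs assumed there; it recovers the conditional expected values and the 0\.70 information value of the pilot test.

```
# Value-of-information check for the pilot-test example above.
pS = 0.7; pF = 0.3                     # launch success / failure
payS = 100; payF = -70                 # payoffs
pTpos_S = 0.8; pTneg_S = 0.2           # test signal given success
pTpos_F = 0.3; pTneg_F = 0.7           # test signal given failure
pTpos = pTpos_S*pS + pTpos_F*pF        # Pr(T+) = 0.65
pTneg = 1 - pTpos                      # Pr(T-) = 0.35
pS_Tpos = pTpos_S*pS/pTpos             # Pr(S | T+) = 0.86154
pS_Tneg = pTneg_S*pS/pTneg             # Pr(S | T-) = 0.4
Ex_Tpos = payS*pS_Tpos + payF*(1 - pS_Tpos)   # 76.462
Ex_Tneg = payS*pS_Tneg + payF*(1 - pS_Tneg)   # -2, so do not launch
Ex_pilot = Ex_Tpos*pTpos + 0*pTneg            # 49.70 (abandon on a negative pilot)
print(c(Ex_Tpos, Ex_Tneg, Ex_pilot, Ex_pilot - (payS*pS + payF*pF)))
```

The last number printed is the incremental (information) value of the pilot test, 0\.70\.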
First define \\(d\_i, i\=1,2\\) as default indicators for firms 1 and 2\. \\(d\_i\=1\\) if the firm defaults, and zero otherwise. We note that: \\\[ E(d\_1\)\=1 . p\_1 \+ 0 . (1\-p\_1\) \= p\_1 \= 0\.01\. \\] Likewise \\\[ E(d\_2\)\=1 . p\_2 \+ 0 . (1\-p\_2\) \= p\_2 \= 0\.03\. \\] The Bernoulli distribution lets us derive the standard deviation of \\(d\_1\\) and \\(d\_2\\). \\\[ \\begin{aligned} \\sigma\_1 \&\= \\sqrt{p\_1 (1\-p\_1\)} \= \\sqrt{(0\.01\)(0\.99\)} \= 0\.099499 \\\\ \\sigma\_2 \&\= \\sqrt{p\_2 (1\-p\_2\)} \= \\sqrt{(0\.03\)(0\.97\)} \= 0\.17059 \\end{aligned} \\] ### 6\.5\.2 Default Correlation Now, we note that \\\[ \\begin{aligned} Cov(d\_1,d\_2\) \&\= E(d\_1 . d\_2\) \- E(d\_1\)E(d\_2\) \\\\ \\rho \\sigma\_1 \\sigma\_2 \&\= E(d\_1 . d\_2\) \- p\_1 p\_2 \\\\ (0\.4\)(0\.099499\)(0\.17059\) \&\= E(d\_1 . d\_2\) \- (0\.01\)(0\.03\) \\\\ E(d\_1 . d\_2\) \&\= 0\.0070894 \\\\ E(d\_1 . d\_2\) \&\\equiv p\_{12} \\end{aligned} \\] where \\(p\_{12}\\) is the probability of default of both firm 1 and 2\. We now get the conditional probabilities: \\\[ \\begin{aligned} p(d\_1 \| d\_2\) \&\= p\_{12}/p\_2 \= 0\.0070894/0\.03 \= 0\.23631 \\\\ p(d\_2 \| d\_1\) \&\= p\_{12}/p\_1 \= 0\.0070894/0\.01 \= 0\.70894 \\end{aligned} \\] These conditional probabilities are non\-trivial in size, even though the individual probabilities of default are very small. What this means is that default contagion can be quite severe once firms begin to default. (This example used our knowledge of Bayes’ rule, correlations, covariances, and joint events.) ### 6\.5\.1 Indicator Functions for Default We can see that even with this limited information, Bayes theorem allows us to derive the conditional probabilities of interest. First define \\(d\_i, i\=1,2\\) as default indicators for firms 1 and 2\. \\(d\_i\=1\\) if the firm defaults, and zero otherwise. We note that: \\\[ E(d\_1\)\=1 . p\_1 \+ 0 . (1\-p\_1\) \= p\_1 \= 0\.01\. \\] Likewise \\\[ E(d\_2\)\=1 . p\_2 \+ 0 . (1\-p\_2\) \= p\_2 \= 0\.03\. \\] The Bernoulli distribution lets us derive the standard deviation of \\(d\_1\\) and \\(d\_2\\). \\\[ \\begin{aligned} \\sigma\_1 \&\= \\sqrt{p\_1 (1\-p\_1\)} \= \\sqrt{(0\.01\)(0\.99\)} \= 0\.099499 \\\\ \\sigma\_2 \&\= \\sqrt{p\_2 (1\-p\_2\)} \= \\sqrt{(0\.03\)(0\.97\)} \= 0\.17059 \\end{aligned} \\] ### 6\.5\.2 Default Correlation Now, we note that \\\[ \\begin{aligned} Cov(d\_1,d\_2\) \&\= E(d\_1 . d\_2\) \- E(d\_1\)E(d\_2\) \\\\ \\rho \\sigma\_1 \\sigma\_2 \&\= E(d\_1 . d\_2\) \- p\_1 p\_2 \\\\ (0\.4\)(0\.099499\)(0\.17059\) \&\= E(d\_1 . d\_2\) \- (0\.01\)(0\.03\) \\\\ E(d\_1 . d\_2\) \&\= 0\.0070894 \\\\ E(d\_1 . d\_2\) \&\\equiv p\_{12} \\end{aligned} \\] where \\(p\_{12}\\) is the probability of default of both firm 1 and 2\. We now get the conditional probabilities: \\\[ \\begin{aligned} p(d\_1 \| d\_2\) \&\= p\_{12}/p\_2 \= 0\.0070894/0\.03 \= 0\.23631 \\\\ p(d\_2 \| d\_1\) \&\= p\_{12}/p\_1 \= 0\.0070894/0\.01 \= 0\.70894 \\end{aligned} \\] These conditional probabilities are non\-trivial in size, even though the individual probabilities of default are very small. What this means is that default contagion can be quite severe once firms begin to default. (This example used our knowledge of Bayes’ rule, correlations, covariances, and joint events.) 6\.6 Continuous Space Bayes Theorem ----------------------------------- In Bayesian approaches, the terms “prior”, “posterior”, and “likelihood” are commonly used and we explore this terminology here. 
We are usually interested in the parameter \\(\\theta\\), the mean of the distribution of some data \\(x\\) (I am using the standard notation here). But in the Bayesian setting we do not just want the value of \\(\\theta\\), but we want a distribution of values of \\(\\theta\\) starting from some prior assumption about this distribution. So we start with \\(p(\\theta)\\), which we call the **prior** distribution. We then observe data \\(x\\), and combine the data with the prior to get the **posterior** distribution \\(p(\\theta \| x)\\). To do this, we need to compute the probability of seeing the data \\(x\\) given our prior \\(p(\\theta)\\) and this probability is given by the **likelihood** function \\(L(x \| \\theta)\\). Assume that the variance of the data \\(x\\) is known, i.e., is \\(\\sigma^2\\). ### 6\.6\.1 Formulation Applying Bayes’ theorem we have \\\[ p(\\theta \| x) \= \\frac{L(x \| \\theta)\\; p(\\theta)}{\\int L(x \| \\theta) \\; p(\\theta)\\; d\\theta} \\propto L(x \| \\theta)\\; p(\\theta) \\] If we assume the prior distribution for the mean of the data is normal, i.e., \\(p(\\theta) \\sim N\[\\mu\_0, \\sigma\_0^2]\\), and the likelihood is also normal, i.e., \\(L(x \| \\theta) \\sim N\[\\theta, \\sigma^2]\\), then we have that \\\[ \\begin{aligned} p(\\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma\_0^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\sim N\[\\theta \| \\mu\_0, \\sigma\_0^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\\\ L(x \| \\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\sim N\[x \| \\theta, \\sigma^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\end{aligned} \\] ### 6\.6\.2 Posterior Distribution Given this, the posterior is as follows: \\\[ p(\\theta \| x) \\propto L(x \| \\theta) p(\\theta) \\;\\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\] Define the precision values to be \\(\\tau\_0 \= \\frac{1}{\\sigma\_0^2}\\), and \\(\\tau \= \\frac{1}{\\sigma^2}\\). Then it can be shown that when you observe a new value of the data \\(x\\), the posterior distribution is written down in closed form as: \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+\\tau} x, \\; \\; \\frac{1}{\\tau\_0 \+ \\tau} \\right] \\] When the posterior distribution and prior distribution have the same form, they are said to be “conjugate” with respect to the specific likelihood function. ### 6\.6\.3 Example To take an example, suppose our prior for the mean of the equity premium per month is \\(p(\\theta) \\sim N\[0\.005, 0\.001^2]\\). The standard deviation of the equity premium is 0\.04\. If next month we observe an equity premium of 1%, what is the posterior distribution of the mean equity premium? ``` mu0 = 0.005 sigma0 = 0.001 sigma=0.04 x = 0.01 tau0 = 1/sigma0^2 tau = 1/sigma^2 posterior_mean = tau0*mu0/(tau0+tau) + tau*x/(tau0+tau) print(posterior_mean) ``` ``` ## [1] 0.005003123 ``` ``` posterior_var = 1/(tau0+tau) print(sqrt(posterior_var)) ``` ``` ## [1] 0.0009996876 ``` Hence, we see that after updating the mean has increased mildly because the data came in higher than expected. 
### 6\.6\.4 General Formula for \\(n\\) sequential updates If we observe \\(n\\) new values of \\(x\\), then the new posterior is \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+n\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+n\\tau} \\sum\_{j\=1}^n x\_j, \\; \\; \\frac{1}{\\tau\_0 \+ n \\tau} \\right] \\] This is easy to derive, as it is just the result you obtain if you took each \\(x\_j\\) and updated the posterior one at a time. Try this as an Exercise. *Estimate the equity risk premium*. We will use data and discrete Bayes to come up with a forecast of the equity risk premium. Proceed along the following lines using the **LearnBayes** package. 1. We’ll use data from 1926 onwards from the Fama\-French data repository. All you need is the equity premium \\((r\_m\-r\_f)\\) data, and I will leave it up to you to choose if you want to use annual or monthly data. Download this and load it into R. 2. Using the series only up to the year 2000, present the descriptive statistics for the equity premium. State these in annualized terms. 3. Present the distribution of returns as a histogram. 4. Store the results of the histogram, i.e., the range of discrete values of the equity premium, and the probability of each one. Treat this as your prior distribution. 5. Now take the remaining data for the years after 2000, and use this data to update the prior and construct a posterior. Assume that the prior, likelihood, and posterior are normally distributed. Use the **discrete.bayes** function to construct the posterior distribution and plot it using a histogram. See if you can put the prior and posterior on the same plot to see how the new data has changed the prior. 6. What is the forecasted equity premium, and what is the confidence interval around your forecast? ### 6\.6\.1 Formulation Applying Bayes’ theorem we have \\\[ p(\\theta \| x) \= \\frac{L(x \| \\theta)\\; p(\\theta)}{\\int L(x \| \\theta) \\; p(\\theta)\\; d\\theta} \\propto L(x \| \\theta)\\; p(\\theta) \\] If we assume the prior distribution for the mean of the data is normal, i.e., \\(p(\\theta) \\sim N\[\\mu\_0, \\sigma\_0^2]\\), and the likelihood is also normal, i.e., \\(L(x \| \\theta) \\sim N\[\\theta, \\sigma^2]\\), then we have that \\\[ \\begin{aligned} p(\\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma\_0^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\sim N\[\\theta \| \\mu\_0, \\sigma\_0^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\\\ L(x \| \\theta) \&\= \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\sim N\[x \| \\theta, \\sigma^2] \\; \\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \\right] \\end{aligned} \\] ### 6\.6\.2 Posterior Distribution Given this, the posterior is as follows: \\\[ p(\\theta \| x) \\propto L(x \| \\theta) p(\\theta) \\;\\; \\propto \\exp\\left\[\-\\frac{1}{2} \\frac{(x\-\\theta)^2}{\\sigma^2} \-\\frac{1}{2} \\frac{(\\theta\-\\mu\_0\)^2}{\\sigma\_0^2} \\right] \\] Define the precision values to be \\(\\tau\_0 \= \\frac{1}{\\sigma\_0^2}\\), and \\(\\tau \= \\frac{1}{\\sigma^2}\\). 
Then it can be shown that when you observe a new value of the data \\(x\\), the posterior distribution is written down in closed form as: \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+\\tau} x, \\; \\; \\frac{1}{\\tau\_0 \+ \\tau} \\right] \\] When the posterior distribution and prior distribution have the same form, they are said to be “conjugate” with respect to the specific likelihood function. ### 6\.6\.3 Example To take an example, suppose our prior for the mean of the equity premium per month is \\(p(\\theta) \\sim N\[0\.005, 0\.001^2]\\). The standard deviation of the equity premium is 0\.04\. If next month we observe an equity premium of 1%, what is the posterior distribution of the mean equity premium? ``` mu0 = 0.005 sigma0 = 0.001 sigma=0.04 x = 0.01 tau0 = 1/sigma0^2 tau = 1/sigma^2 posterior_mean = tau0*mu0/(tau0+tau) + tau*x/(tau0+tau) print(posterior_mean) ``` ``` ## [1] 0.005003123 ``` ``` posterior_var = 1/(tau0+tau) print(sqrt(posterior_var)) ``` ``` ## [1] 0.0009996876 ``` Hence, we see that after updating the mean has increased mildly because the data came in higher than expected. ### 6\.6\.4 General Formula for \\(n\\) sequential updates If we observe \\(n\\) new values of \\(x\\), then the new posterior is \\\[ p(\\theta \| x) \\sim N\\left\[ \\frac{\\tau\_0}{\\tau\_0\+n\\tau} \\mu\_0 \+ \\frac{\\tau}{\\tau\_0\+n\\tau} \\sum\_{j\=1}^n x\_j, \\; \\; \\frac{1}{\\tau\_0 \+ n \\tau} \\right] \\] This is easy to derive, as it is just the result you obtain if you took each \\(x\_j\\) and updated the posterior one at a time. Try this as an Exercise. *Estimate the equity risk premium*. We will use data and discrete Bayes to come up with a forecast of the equity risk premium. Proceed along the following lines using the **LearnBayes** package. 1. We’ll use data from 1926 onwards from the Fama\-French data repository. All you need is the equity premium \\((r\_m\-r\_f)\\) data, and I will leave it up to you to choose if you want to use annual or monthly data. Download this and load it into R. 2. Using the series only up to the year 2000, present the descriptive statistics for the equity premium. State these in annualized terms. 3. Present the distribution of returns as a histogram. 4. Store the results of the histogram, i.e., the range of discrete values of the equity premium, and the probability of each one. Treat this as your prior distribution. 5. Now take the remaining data for the years after 2000, and use this data to update the prior and construct a posterior. Assume that the prior, likelihood, and posterior are normally distributed. Use the **discrete.bayes** function to construct the posterior distribution and plot it using a histogram. See if you can put the prior and posterior on the same plot to see how the new data has changed the prior. 6. What is the forecasted equity premium, and what is the confidence interval around your forecast? 6\.7 Bayes Classifier --------------------- Suppose we want to classify entities (emails, consumers, companies, tumors, images, etc.) into categories \\(c\\). Think of a data set with rows each giving one instance of the data set with several characteristics, i.e., the \\(x\\) variables, and the row will also contain the category \\(c\\). Suppose there are \\(n\\) variables, and \\(m\\) categories. We use the data to construct the prior and conditional probabilities. Once these probabilities are computed we say that the model is “trained”. 
The trained classifier contains the unconditional probabilities \\(p(c)\\) of each class, which are merely frequencies with which each category appears. It also shows the conditional probability distributions \\(p(x \|c)\\) given as the mean and standard deviation of the occurrence of these terms in each class. The posterior probabilities are computed as follows. These tell us the most likely category given the data \\(x\\) on any observation. \\\[ p(c\=i \| x\_1,x\_2,...,x\_n) \= \\frac{p(x\_1,x\_2,...,x\_n\|c\=i) \\cdot p(c\=i)}{\\sum\_{j\=1}^m p(x\_1,x\_2,...,x\_n\|c\=j) \\cdot p(c\=j)}, \\quad \\forall i\=1,2,...,m \\] In the naive Bayes model, it is assumed that all the \\(x\\) variables are independent of each other, so that we may write \\\[ p(x\_1,x\_2,...,x\_n \| c\=i) \= p(x\_1 \| c\=i) \\cdot p(x\_1 \| c\=i) \\cdot \\cdot \\cdot p(x\_n \| c\=i) \\] We use the **e1071** package. It has a one\-line command that takes in the tagged training dataset using the function **naiveBayes()**. It returns the trained classifier model. We may take this trained model and re\-apply to the training data set to see how well it does. We use the **predict()** function for this. The data set here is the classic Iris data. ### 6\.7\.1 Example ``` library(e1071) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` tail(iris) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 145 6.7 3.3 5.7 2.5 virginica ## 146 6.7 3.0 5.2 2.3 virginica ## 147 6.3 2.5 5.0 1.9 virginica ## 148 6.5 3.0 5.2 2.0 virginica ## 149 6.2 3.4 5.4 2.3 virginica ## 150 5.9 3.0 5.1 1.8 virginica ``` ``` #NAIVE BAYES res = naiveBayes(iris[,1:4],iris[,5]) #SHOWS THE PRIOR AND LIKELIHOOD FUNCTIONS res ``` ``` ## ## Naive Bayes Classifier for Discrete Predictors ## ## Call: ## naiveBayes.default(x = iris[, 1:4], y = iris[, 5]) ## ## A-priori probabilities: ## iris[, 5] ## setosa versicolor virginica ## 0.3333333 0.3333333 0.3333333 ## ## Conditional probabilities: ## Sepal.Length ## iris[, 5] [,1] [,2] ## setosa 5.006 0.3524897 ## versicolor 5.936 0.5161711 ## virginica 6.588 0.6358796 ## ## Sepal.Width ## iris[, 5] [,1] [,2] ## setosa 3.428 0.3790644 ## versicolor 2.770 0.3137983 ## virginica 2.974 0.3224966 ## ## Petal.Length ## iris[, 5] [,1] [,2] ## setosa 1.462 0.1736640 ## versicolor 4.260 0.4699110 ## virginica 5.552 0.5518947 ## ## Petal.Width ## iris[, 5] [,1] [,2] ## setosa 0.246 0.1053856 ## versicolor 1.326 0.1977527 ## virginica 2.026 0.2746501 ``` ``` #SHOWS POSTERIOR PROBABILITIES predict(res,iris[,1:4],type="raw") ``` ``` ## setosa versicolor virginica ## [1,] 1.000000e+00 2.981309e-18 2.152373e-25 ## [2,] 1.000000e+00 3.169312e-17 6.938030e-25 ## [3,] 1.000000e+00 2.367113e-18 7.240956e-26 ## [4,] 1.000000e+00 3.069606e-17 8.690636e-25 ## [5,] 1.000000e+00 1.017337e-18 8.885794e-26 ## [6,] 1.000000e+00 2.717732e-14 4.344285e-21 ## [7,] 1.000000e+00 2.321639e-17 7.988271e-25 ## [8,] 1.000000e+00 1.390751e-17 8.166995e-25 ## [9,] 1.000000e+00 1.990156e-17 3.606469e-25 ## [10,] 1.000000e+00 7.378931e-18 3.615492e-25 ## [11,] 1.000000e+00 9.396089e-18 1.474623e-24 ## [12,] 1.000000e+00 3.461964e-17 2.093627e-24 ## [13,] 1.000000e+00 2.804520e-18 1.010192e-25 ## [14,] 1.000000e+00 1.799033e-19 6.060578e-27 ## [15,] 1.000000e+00 5.533879e-19 2.485033e-25 ## [16,] 1.000000e+00 
6.273863e-17 4.509864e-23 ## [17,] 1.000000e+00 1.106658e-16 1.282419e-23 ## [18,] 1.000000e+00 4.841773e-17 2.350011e-24 ## [19,] 1.000000e+00 1.126175e-14 2.567180e-21 ## [20,] 1.000000e+00 1.808513e-17 1.963924e-24 ## [21,] 1.000000e+00 2.178382e-15 2.013989e-22 ## [22,] 1.000000e+00 1.210057e-15 7.788592e-23 ## [23,] 1.000000e+00 4.535220e-20 3.130074e-27 ## [24,] 1.000000e+00 3.147327e-11 8.175305e-19 ## [25,] 1.000000e+00 1.838507e-14 1.553757e-21 ## [26,] 1.000000e+00 6.873990e-16 1.830374e-23 ## [27,] 1.000000e+00 3.192598e-14 1.045146e-21 ## [28,] 1.000000e+00 1.542562e-17 1.274394e-24 ## [29,] 1.000000e+00 8.833285e-18 5.368077e-25 ## [30,] 1.000000e+00 9.557935e-17 3.652571e-24 ## [31,] 1.000000e+00 2.166837e-16 6.730536e-24 ## [32,] 1.000000e+00 3.940500e-14 1.546678e-21 ## [33,] 1.000000e+00 1.609092e-20 1.013278e-26 ## [34,] 1.000000e+00 7.222217e-20 4.261853e-26 ## [35,] 1.000000e+00 6.289348e-17 1.831694e-24 ## [36,] 1.000000e+00 2.850926e-18 8.874002e-26 ## [37,] 1.000000e+00 7.746279e-18 7.235628e-25 ## [38,] 1.000000e+00 8.623934e-20 1.223633e-26 ## [39,] 1.000000e+00 4.612936e-18 9.655450e-26 ## [40,] 1.000000e+00 2.009325e-17 1.237755e-24 ## [41,] 1.000000e+00 1.300634e-17 5.657689e-25 ## [42,] 1.000000e+00 1.577617e-15 5.717219e-24 ## [43,] 1.000000e+00 1.494911e-18 4.800333e-26 ## [44,] 1.000000e+00 1.076475e-10 3.721344e-18 ## [45,] 1.000000e+00 1.357569e-12 1.708326e-19 ## [46,] 1.000000e+00 3.882113e-16 5.587814e-24 ## [47,] 1.000000e+00 5.086735e-18 8.960156e-25 ## [48,] 1.000000e+00 5.012793e-18 1.636566e-25 ## [49,] 1.000000e+00 5.717245e-18 8.231337e-25 ## [50,] 1.000000e+00 7.713456e-18 3.349997e-25 ## [51,] 4.893048e-107 8.018653e-01 1.981347e-01 ## [52,] 7.920550e-100 9.429283e-01 5.707168e-02 ## [53,] 5.494369e-121 4.606254e-01 5.393746e-01 ## [54,] 1.129435e-69 9.999621e-01 3.789964e-05 ## [55,] 1.473329e-105 9.503408e-01 4.965916e-02 ## [56,] 1.931184e-89 9.990013e-01 9.986538e-04 ## [57,] 4.539099e-113 6.592515e-01 3.407485e-01 ## [58,] 2.549753e-34 9.999997e-01 3.119517e-07 ## [59,] 6.562814e-97 9.895385e-01 1.046153e-02 ## [60,] 5.000210e-69 9.998928e-01 1.071638e-04 ## [61,] 7.354548e-41 9.999997e-01 3.143915e-07 ## [62,] 4.799134e-86 9.958564e-01 4.143617e-03 ## [63,] 4.631287e-60 9.999925e-01 7.541274e-06 ## [64,] 1.052252e-103 9.850868e-01 1.491324e-02 ## [65,] 4.789799e-55 9.999700e-01 2.999393e-05 ## [66,] 1.514706e-92 9.787587e-01 2.124125e-02 ## [67,] 1.338348e-97 9.899311e-01 1.006893e-02 ## [68,] 2.026115e-62 9.999799e-01 2.007314e-05 ## [69,] 6.547473e-101 9.941996e-01 5.800427e-03 ## [70,] 3.016276e-58 9.999913e-01 8.739959e-06 ## [71,] 1.053341e-127 1.609361e-01 8.390639e-01 ## [72,] 1.248202e-70 9.997743e-01 2.256698e-04 ## [73,] 3.294753e-119 9.245812e-01 7.541876e-02 ## [74,] 1.314175e-95 9.979398e-01 2.060233e-03 ## [75,] 3.003117e-83 9.982736e-01 1.726437e-03 ## [76,] 2.536747e-92 9.865372e-01 1.346281e-02 ## [77,] 1.558909e-111 9.102260e-01 8.977398e-02 ## [78,] 7.014282e-136 7.989607e-02 9.201039e-01 ## [79,] 5.034528e-99 9.854957e-01 1.450433e-02 ## [80,] 1.439052e-41 9.999984e-01 1.601574e-06 ## [81,] 1.251567e-54 9.999955e-01 4.500139e-06 ## [82,] 8.769539e-48 9.999983e-01 1.742560e-06 ## [83,] 3.447181e-62 9.999664e-01 3.361987e-05 ## [84,] 1.087302e-132 6.134355e-01 3.865645e-01 ## [85,] 4.119852e-97 9.918297e-01 8.170260e-03 ## [86,] 1.140835e-102 8.734107e-01 1.265893e-01 ## [87,] 2.247339e-110 7.971795e-01 2.028205e-01 ## [88,] 4.870630e-88 9.992978e-01 7.022084e-04 ## [89,] 2.028672e-72 9.997620e-01 2.379898e-04 ## [90,] 
2.227900e-69 9.999461e-01 5.390514e-05 ## [91,] 5.110709e-81 9.998510e-01 1.489819e-04 ## [92,] 5.774841e-99 9.885399e-01 1.146006e-02 ## [93,] 5.146736e-66 9.999591e-01 4.089540e-05 ## [94,] 1.332816e-34 9.999997e-01 2.716264e-07 ## [95,] 6.094144e-77 9.998034e-01 1.966331e-04 ## [96,] 1.424276e-72 9.998236e-01 1.764463e-04 ## [97,] 8.302641e-77 9.996692e-01 3.307548e-04 ## [98,] 1.835520e-82 9.988601e-01 1.139915e-03 ## [99,] 5.710350e-30 9.999997e-01 3.094739e-07 ## [100,] 3.996459e-73 9.998204e-01 1.795726e-04 ## [101,] 3.993755e-249 1.031032e-10 1.000000e+00 ## [102,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [103,] 2.460661e-216 2.327488e-07 9.999998e-01 ## [104,] 2.864831e-173 2.290954e-03 9.977090e-01 ## [105,] 8.299884e-214 3.175384e-07 9.999997e-01 ## [106,] 1.371182e-267 3.807455e-10 1.000000e+00 ## [107,] 3.444090e-107 9.719885e-01 2.801154e-02 ## [108,] 3.741929e-224 1.782047e-06 9.999982e-01 ## [109,] 5.564644e-188 5.823191e-04 9.994177e-01 ## [110,] 2.052443e-260 2.461662e-12 1.000000e+00 ## [111,] 8.669405e-159 4.895235e-04 9.995105e-01 ## [112,] 4.220200e-163 3.168643e-03 9.968314e-01 ## [113,] 4.360059e-190 6.230821e-06 9.999938e-01 ## [114,] 6.142256e-151 1.423414e-02 9.857659e-01 ## [115,] 2.201426e-186 1.393247e-06 9.999986e-01 ## [116,] 2.949945e-191 6.128385e-07 9.999994e-01 ## [117,] 2.909076e-168 2.152843e-03 9.978472e-01 ## [118,] 1.347608e-281 2.872996e-12 1.000000e+00 ## [119,] 2.786402e-306 1.151469e-12 1.000000e+00 ## [120,] 2.082510e-123 9.561626e-01 4.383739e-02 ## [121,] 2.194169e-217 1.712166e-08 1.000000e+00 ## [122,] 3.325791e-145 1.518718e-02 9.848128e-01 ## [123,] 6.251357e-269 1.170872e-09 1.000000e+00 ## [124,] 4.415135e-135 1.360432e-01 8.639568e-01 ## [125,] 6.315716e-201 1.300512e-06 9.999987e-01 ## [126,] 5.257347e-203 9.507989e-06 9.999905e-01 ## [127,] 1.476391e-129 2.067703e-01 7.932297e-01 ## [128,] 8.772841e-134 1.130589e-01 8.869411e-01 ## [129,] 5.230800e-194 1.395719e-05 9.999860e-01 ## [130,] 7.014892e-179 8.232518e-04 9.991767e-01 ## [131,] 6.306820e-218 1.214497e-06 9.999988e-01 ## [132,] 2.539020e-247 4.668891e-10 1.000000e+00 ## [133,] 2.210812e-201 2.000316e-06 9.999980e-01 ## [134,] 1.128613e-128 7.118948e-01 2.881052e-01 ## [135,] 8.114869e-151 4.900992e-01 5.099008e-01 ## [136,] 7.419068e-249 1.448050e-10 1.000000e+00 ## [137,] 1.004503e-215 9.743357e-09 1.000000e+00 ## [138,] 1.346716e-167 2.186989e-03 9.978130e-01 ## [139,] 1.994716e-128 1.999894e-01 8.000106e-01 ## [140,] 8.440466e-185 6.769126e-06 9.999932e-01 ## [141,] 2.334365e-218 7.456220e-09 1.000000e+00 ## [142,] 2.179139e-183 6.352663e-07 9.999994e-01 ## [143,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [144,] 3.426814e-229 6.597015e-09 1.000000e+00 ## [145,] 2.011574e-232 2.620636e-10 1.000000e+00 ## [146,] 1.078519e-187 7.915543e-07 9.999992e-01 ## [147,] 1.061392e-146 2.770575e-02 9.722942e-01 ## [148,] 1.846900e-164 4.398402e-04 9.995602e-01 ## [149,] 1.439996e-195 3.384156e-07 9.999997e-01 ## [150,] 2.771480e-143 5.987903e-02 9.401210e-01 ``` ``` #CONFUSION MATRIX out = table(predict(res,iris[,1:4]),iris[,5]) out ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 47 3 ## virginica 0 3 47 ``` ### 6\.7\.1 Example ``` library(e1071) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` tail(iris) ``` ``` ## Sepal.Length 
Sepal.Width Petal.Length Petal.Width Species ## 145 6.7 3.3 5.7 2.5 virginica ## 146 6.7 3.0 5.2 2.3 virginica ## 147 6.3 2.5 5.0 1.9 virginica ## 148 6.5 3.0 5.2 2.0 virginica ## 149 6.2 3.4 5.4 2.3 virginica ## 150 5.9 3.0 5.1 1.8 virginica ``` ``` #NAIVE BAYES res = naiveBayes(iris[,1:4],iris[,5]) #SHOWS THE PRIOR AND LIKELIHOOD FUNCTIONS res ``` ``` ## ## Naive Bayes Classifier for Discrete Predictors ## ## Call: ## naiveBayes.default(x = iris[, 1:4], y = iris[, 5]) ## ## A-priori probabilities: ## iris[, 5] ## setosa versicolor virginica ## 0.3333333 0.3333333 0.3333333 ## ## Conditional probabilities: ## Sepal.Length ## iris[, 5] [,1] [,2] ## setosa 5.006 0.3524897 ## versicolor 5.936 0.5161711 ## virginica 6.588 0.6358796 ## ## Sepal.Width ## iris[, 5] [,1] [,2] ## setosa 3.428 0.3790644 ## versicolor 2.770 0.3137983 ## virginica 2.974 0.3224966 ## ## Petal.Length ## iris[, 5] [,1] [,2] ## setosa 1.462 0.1736640 ## versicolor 4.260 0.4699110 ## virginica 5.552 0.5518947 ## ## Petal.Width ## iris[, 5] [,1] [,2] ## setosa 0.246 0.1053856 ## versicolor 1.326 0.1977527 ## virginica 2.026 0.2746501 ``` ``` #SHOWS POSTERIOR PROBABILITIES predict(res,iris[,1:4],type="raw") ``` ``` ## setosa versicolor virginica ## [1,] 1.000000e+00 2.981309e-18 2.152373e-25 ## [2,] 1.000000e+00 3.169312e-17 6.938030e-25 ## [3,] 1.000000e+00 2.367113e-18 7.240956e-26 ## [4,] 1.000000e+00 3.069606e-17 8.690636e-25 ## [5,] 1.000000e+00 1.017337e-18 8.885794e-26 ## [6,] 1.000000e+00 2.717732e-14 4.344285e-21 ## [7,] 1.000000e+00 2.321639e-17 7.988271e-25 ## [8,] 1.000000e+00 1.390751e-17 8.166995e-25 ## [9,] 1.000000e+00 1.990156e-17 3.606469e-25 ## [10,] 1.000000e+00 7.378931e-18 3.615492e-25 ## [11,] 1.000000e+00 9.396089e-18 1.474623e-24 ## [12,] 1.000000e+00 3.461964e-17 2.093627e-24 ## [13,] 1.000000e+00 2.804520e-18 1.010192e-25 ## [14,] 1.000000e+00 1.799033e-19 6.060578e-27 ## [15,] 1.000000e+00 5.533879e-19 2.485033e-25 ## [16,] 1.000000e+00 6.273863e-17 4.509864e-23 ## [17,] 1.000000e+00 1.106658e-16 1.282419e-23 ## [18,] 1.000000e+00 4.841773e-17 2.350011e-24 ## [19,] 1.000000e+00 1.126175e-14 2.567180e-21 ## [20,] 1.000000e+00 1.808513e-17 1.963924e-24 ## [21,] 1.000000e+00 2.178382e-15 2.013989e-22 ## [22,] 1.000000e+00 1.210057e-15 7.788592e-23 ## [23,] 1.000000e+00 4.535220e-20 3.130074e-27 ## [24,] 1.000000e+00 3.147327e-11 8.175305e-19 ## [25,] 1.000000e+00 1.838507e-14 1.553757e-21 ## [26,] 1.000000e+00 6.873990e-16 1.830374e-23 ## [27,] 1.000000e+00 3.192598e-14 1.045146e-21 ## [28,] 1.000000e+00 1.542562e-17 1.274394e-24 ## [29,] 1.000000e+00 8.833285e-18 5.368077e-25 ## [30,] 1.000000e+00 9.557935e-17 3.652571e-24 ## [31,] 1.000000e+00 2.166837e-16 6.730536e-24 ## [32,] 1.000000e+00 3.940500e-14 1.546678e-21 ## [33,] 1.000000e+00 1.609092e-20 1.013278e-26 ## [34,] 1.000000e+00 7.222217e-20 4.261853e-26 ## [35,] 1.000000e+00 6.289348e-17 1.831694e-24 ## [36,] 1.000000e+00 2.850926e-18 8.874002e-26 ## [37,] 1.000000e+00 7.746279e-18 7.235628e-25 ## [38,] 1.000000e+00 8.623934e-20 1.223633e-26 ## [39,] 1.000000e+00 4.612936e-18 9.655450e-26 ## [40,] 1.000000e+00 2.009325e-17 1.237755e-24 ## [41,] 1.000000e+00 1.300634e-17 5.657689e-25 ## [42,] 1.000000e+00 1.577617e-15 5.717219e-24 ## [43,] 1.000000e+00 1.494911e-18 4.800333e-26 ## [44,] 1.000000e+00 1.076475e-10 3.721344e-18 ## [45,] 1.000000e+00 1.357569e-12 1.708326e-19 ## [46,] 1.000000e+00 3.882113e-16 5.587814e-24 ## [47,] 1.000000e+00 5.086735e-18 8.960156e-25 ## [48,] 1.000000e+00 5.012793e-18 1.636566e-25 ## [49,] 1.000000e+00 
5.717245e-18 8.231337e-25 ## [50,] 1.000000e+00 7.713456e-18 3.349997e-25 ## [51,] 4.893048e-107 8.018653e-01 1.981347e-01 ## [52,] 7.920550e-100 9.429283e-01 5.707168e-02 ## [53,] 5.494369e-121 4.606254e-01 5.393746e-01 ## [54,] 1.129435e-69 9.999621e-01 3.789964e-05 ## [55,] 1.473329e-105 9.503408e-01 4.965916e-02 ## [56,] 1.931184e-89 9.990013e-01 9.986538e-04 ## [57,] 4.539099e-113 6.592515e-01 3.407485e-01 ## [58,] 2.549753e-34 9.999997e-01 3.119517e-07 ## [59,] 6.562814e-97 9.895385e-01 1.046153e-02 ## [60,] 5.000210e-69 9.998928e-01 1.071638e-04 ## [61,] 7.354548e-41 9.999997e-01 3.143915e-07 ## [62,] 4.799134e-86 9.958564e-01 4.143617e-03 ## [63,] 4.631287e-60 9.999925e-01 7.541274e-06 ## [64,] 1.052252e-103 9.850868e-01 1.491324e-02 ## [65,] 4.789799e-55 9.999700e-01 2.999393e-05 ## [66,] 1.514706e-92 9.787587e-01 2.124125e-02 ## [67,] 1.338348e-97 9.899311e-01 1.006893e-02 ## [68,] 2.026115e-62 9.999799e-01 2.007314e-05 ## [69,] 6.547473e-101 9.941996e-01 5.800427e-03 ## [70,] 3.016276e-58 9.999913e-01 8.739959e-06 ## [71,] 1.053341e-127 1.609361e-01 8.390639e-01 ## [72,] 1.248202e-70 9.997743e-01 2.256698e-04 ## [73,] 3.294753e-119 9.245812e-01 7.541876e-02 ## [74,] 1.314175e-95 9.979398e-01 2.060233e-03 ## [75,] 3.003117e-83 9.982736e-01 1.726437e-03 ## [76,] 2.536747e-92 9.865372e-01 1.346281e-02 ## [77,] 1.558909e-111 9.102260e-01 8.977398e-02 ## [78,] 7.014282e-136 7.989607e-02 9.201039e-01 ## [79,] 5.034528e-99 9.854957e-01 1.450433e-02 ## [80,] 1.439052e-41 9.999984e-01 1.601574e-06 ## [81,] 1.251567e-54 9.999955e-01 4.500139e-06 ## [82,] 8.769539e-48 9.999983e-01 1.742560e-06 ## [83,] 3.447181e-62 9.999664e-01 3.361987e-05 ## [84,] 1.087302e-132 6.134355e-01 3.865645e-01 ## [85,] 4.119852e-97 9.918297e-01 8.170260e-03 ## [86,] 1.140835e-102 8.734107e-01 1.265893e-01 ## [87,] 2.247339e-110 7.971795e-01 2.028205e-01 ## [88,] 4.870630e-88 9.992978e-01 7.022084e-04 ## [89,] 2.028672e-72 9.997620e-01 2.379898e-04 ## [90,] 2.227900e-69 9.999461e-01 5.390514e-05 ## [91,] 5.110709e-81 9.998510e-01 1.489819e-04 ## [92,] 5.774841e-99 9.885399e-01 1.146006e-02 ## [93,] 5.146736e-66 9.999591e-01 4.089540e-05 ## [94,] 1.332816e-34 9.999997e-01 2.716264e-07 ## [95,] 6.094144e-77 9.998034e-01 1.966331e-04 ## [96,] 1.424276e-72 9.998236e-01 1.764463e-04 ## [97,] 8.302641e-77 9.996692e-01 3.307548e-04 ## [98,] 1.835520e-82 9.988601e-01 1.139915e-03 ## [99,] 5.710350e-30 9.999997e-01 3.094739e-07 ## [100,] 3.996459e-73 9.998204e-01 1.795726e-04 ## [101,] 3.993755e-249 1.031032e-10 1.000000e+00 ## [102,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [103,] 2.460661e-216 2.327488e-07 9.999998e-01 ## [104,] 2.864831e-173 2.290954e-03 9.977090e-01 ## [105,] 8.299884e-214 3.175384e-07 9.999997e-01 ## [106,] 1.371182e-267 3.807455e-10 1.000000e+00 ## [107,] 3.444090e-107 9.719885e-01 2.801154e-02 ## [108,] 3.741929e-224 1.782047e-06 9.999982e-01 ## [109,] 5.564644e-188 5.823191e-04 9.994177e-01 ## [110,] 2.052443e-260 2.461662e-12 1.000000e+00 ## [111,] 8.669405e-159 4.895235e-04 9.995105e-01 ## [112,] 4.220200e-163 3.168643e-03 9.968314e-01 ## [113,] 4.360059e-190 6.230821e-06 9.999938e-01 ## [114,] 6.142256e-151 1.423414e-02 9.857659e-01 ## [115,] 2.201426e-186 1.393247e-06 9.999986e-01 ## [116,] 2.949945e-191 6.128385e-07 9.999994e-01 ## [117,] 2.909076e-168 2.152843e-03 9.978472e-01 ## [118,] 1.347608e-281 2.872996e-12 1.000000e+00 ## [119,] 2.786402e-306 1.151469e-12 1.000000e+00 ## [120,] 2.082510e-123 9.561626e-01 4.383739e-02 ## [121,] 2.194169e-217 1.712166e-08 1.000000e+00 ## [122,] 
3.325791e-145 1.518718e-02 9.848128e-01 ## [123,] 6.251357e-269 1.170872e-09 1.000000e+00 ## [124,] 4.415135e-135 1.360432e-01 8.639568e-01 ## [125,] 6.315716e-201 1.300512e-06 9.999987e-01 ## [126,] 5.257347e-203 9.507989e-06 9.999905e-01 ## [127,] 1.476391e-129 2.067703e-01 7.932297e-01 ## [128,] 8.772841e-134 1.130589e-01 8.869411e-01 ## [129,] 5.230800e-194 1.395719e-05 9.999860e-01 ## [130,] 7.014892e-179 8.232518e-04 9.991767e-01 ## [131,] 6.306820e-218 1.214497e-06 9.999988e-01 ## [132,] 2.539020e-247 4.668891e-10 1.000000e+00 ## [133,] 2.210812e-201 2.000316e-06 9.999980e-01 ## [134,] 1.128613e-128 7.118948e-01 2.881052e-01 ## [135,] 8.114869e-151 4.900992e-01 5.099008e-01 ## [136,] 7.419068e-249 1.448050e-10 1.000000e+00 ## [137,] 1.004503e-215 9.743357e-09 1.000000e+00 ## [138,] 1.346716e-167 2.186989e-03 9.978130e-01 ## [139,] 1.994716e-128 1.999894e-01 8.000106e-01 ## [140,] 8.440466e-185 6.769126e-06 9.999932e-01 ## [141,] 2.334365e-218 7.456220e-09 1.000000e+00 ## [142,] 2.179139e-183 6.352663e-07 9.999994e-01 ## [143,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [144,] 3.426814e-229 6.597015e-09 1.000000e+00 ## [145,] 2.011574e-232 2.620636e-10 1.000000e+00 ## [146,] 1.078519e-187 7.915543e-07 9.999992e-01 ## [147,] 1.061392e-146 2.770575e-02 9.722942e-01 ## [148,] 1.846900e-164 4.398402e-04 9.995602e-01 ## [149,] 1.439996e-195 3.384156e-07 9.999997e-01 ## [150,] 2.771480e-143 5.987903e-02 9.401210e-01 ``` ``` #CONFUSION MATRIX out = table(predict(res,iris[,1:4]),iris[,5]) out ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 47 3 ## virginica 0 3 47 ``` 6\.8 Bayes Nets --------------- Higher\-dimension Bayes problems and joint distributions over several outcomes/events are easy to visualize with a network diagram, also called a Bayes net. A Bayes net is a directed, acyclic graph (known as a DAG), i.e., cycles are not permitted in the graph. A good way to understand a Bayes net is with an example of economic distress. There are three levels at which distress may be noticed: economy level (\\(E\=1\\)), industry level (\\(I\=1\\)), or at a particular firm level (\\(F\=1\\)). Economic distress can lead to industry distress and/or firm distress, and industry distress may or may not result in a firm’s distress. The probabilities are as follows. Note that the probabilities in the first tableau are unconditional, but in all the subsequent tableaus they are conditional probabilities. See @(fig:bayesnet1\). Figure 6\.1: Conditional probabilities The Bayes net shows the pathways of economic distress. There are three channels: \\(a\\) is the inducement of industry distress from economy distress; \\(b\\) is the inducement of firm distress directly from economy distress; \\(c\\) is the inducement of firm distress directly from industry distress. See @(fig:bayesnet2\). Figure 6\.2: Bayesian network Note here that each pair of conditional probabilities adds up to 1\. The “channels” in the tableaus refer to the arrows in the Bayes net diagram. #### 6\.8\.0\.1 Conditional Probability \- 1 Now we will compute an answer to the question: What is the probability that the industry is distressed if the firm is known to be in distress? 
The calculation is as follows:

\\\[
\\begin{aligned}
Pr(I\=1\|F\=1\) \&\= \\frac{Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\)}{Pr(F\=1\)} \\\\
Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\) \&\= Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\|E\=1\)\\cdot Pr(E\=1\) \\\\
\&\+ Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\|E\=0\)\\cdot Pr(E\=0\)\\\\
\&\= 0\.95 \\times 0\.6 \\times 0\.1 \+ 0\.8 \\times 0\.2 \\times 0\.9 \= 0\.201 \\\\
\\end{aligned}
\\]

\\\[
\\begin{aligned}
Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\) \&\= Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\|E\=1\)\\cdot Pr(E\=1\) \\\\
\&\+ Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\|E\=0\)\\cdot Pr(E\=0\)\\\\
\&\= 0\.7 \\times 0\.4 \\times 0\.1 \+ 0\.1 \\times 0\.8 \\times 0\.9 \= 0\.100
\\end{aligned}
\\]

\\\[
\\begin{aligned}
Pr(F\=1\) \&\= Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\) \\\\
\&\+ Pr(F\=1\|I\=0\)\\cdot Pr(I\=0\) \= 0\.301
\\end{aligned}
\\]

\\\[
Pr(I\=1\|F\=1\) \= \\frac{Pr(F\=1\|I\=1\)\\cdot Pr(I\=1\)}{Pr(F\=1\)} \= \\frac{0\.201}{0\.301} \= 0\.6677741
\\]

#### 6\.8\.0\.2 Computational set\-theoretic approach

We may write an R script to compute the conditional probability that the industry is distressed when a firm is distressed. This uses the set approach that we visited earlier.

```
#BAYES NET COMPUTATIONS
#Simulate a population of 100,000 outcomes and partition it into sets
#according to the conditional probabilities in the tableaus.
E = seq(1,100000)
n = length(E)
E1 = sample(E,length(E)*0.1)              #Pr(E=1) = 0.1
E0 = setdiff(E,E1)
E1I1 = sample(E1,length(E1)*0.6)          #Pr(I=1|E=1) = 0.6
E1I0 = setdiff(E1,E1I1)
E0I1 = sample(E0,length(E0)*0.2)          #Pr(I=1|E=0) = 0.2
E0I0 = setdiff(E0,E0I1)
E1I1F1 = sample(E1I1,length(E1I1)*0.95)   #Pr(F=1|E=1,I=1) = 0.95
E1I1F0 = setdiff(E1I1,E1I1F1)
E1I0F1 = sample(E1I0,length(E1I0)*0.70)   #Pr(F=1|E=1,I=0) = 0.70
E1I0F0 = setdiff(E1I0,E1I0F1)
E0I1F1 = sample(E0I1,length(E0I1)*0.80)   #Pr(F=1|E=0,I=1) = 0.80
E0I1F0 = setdiff(E0I1,E0I1F1)
E0I0F1 = sample(E0I0,length(E0I0)*0.10)   #Pr(F=1|E=0,I=0) = 0.10
E0I0F0 = setdiff(E0I0,E0I0F1)
#Pr(I=1|F=1) = #(I=1 and F=1) / #(F=1)
pr_I1_given_F1 = length(c(E1I1F1,E0I1F1))/
  length(c(E1I1F1,E1I0F1,E0I1F1,E0I0F1))
print(pr_I1_given_F1)
```

```
## [1] 0.6677741
```

Running this program gives the desired probability and confirms the previous result.

#### 6\.8\.0\.3 Conditional Probability \- 2

Compute the conditional probability that the economy is in distress if the firm is in distress. Compare this to the previous conditional probability we computed of 0\.6677741\. Should it be lower?

```
pr_E1_given_F1 = length(c(E1I1F1,E1I0F1))/length(c(E1I1F1,E1I0F1,E0I1F1,E0I0F1))
print(pr_E1_given_F1)
```

```
## [1] 0.282392
```

Yes, it should be lower than the probability that the industry is in distress when the firm is in distress, because the economy is one network layer removed from the firm, unlike the industry.

#### 6\.8\.0\.4 R Packages for Bayes Nets

What packages does R provide for doing Bayes Nets? See: [http://cran.r\-project.org/web/views/Bayesian.html](http://cran.r-project.org/web/views/Bayesian.html)
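The same conditional probabilities may also be computed exactly, without sampling, by enumerating the joint distribution over (E, I, F) implied by the tableaus. The short sketch below is offered only as a cross-check of the simulation above; the variable names are ours, and the probabilities are the ones used in this example.

```
#BAYES NET: EXACT ENUMERATION (cross-check of the sampled answers above)
pE = c(0.9, 0.1)                            #Pr(E=0), Pr(E=1)
pI1_given_E = c(0.2, 0.6)                   #Pr(I=1|E=0), Pr(I=1|E=1)
pF1_given_EI = matrix(c(0.10, 0.80,         #Pr(F=1|E=0,I=0), Pr(F=1|E=0,I=1)
                        0.70, 0.95),        #Pr(F=1|E=1,I=0), Pr(F=1|E=1,I=1)
                      nrow=2, byrow=TRUE)
#Joint probability Pr(E=e, I=i, F=1); rows index E, columns index I
jointF1 = matrix(pE,2,2) * cbind(1-pI1_given_E, pI1_given_E) * pF1_given_EI
prF1 = sum(jointF1)
print(prF1)                                 #Pr(F=1) = 0.301
print(sum(jointF1[,2])/prF1)                #Pr(I=1|F=1) = 0.6677741
print(sum(jointF1[2,])/prF1)                #Pr(E=1|F=1) = 0.282392
```

Both conditional probabilities agree with the simulation exactly, because the sampled sets above are carved out in exact proportions of the population.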
6\.9 Bayes in Marketing
-----------------------

In pilot market tests (part of a larger market research campaign), Bayes theorem shows up in a simple manner. Suppose we have a project whose value is \\(x\\). If the product is successful (\\(S\\)), the payoff is \\(\+100\\) and if the product fails (\\(F\\)) the payoff is \\(\-70\\). The probability of these two events is:

\\\[
Pr(S) \= 0\.7, \\quad Pr(F) \= 0\.3
\\]

You can easily check that the expected value is \\(E(x) \= 49\\). Suppose we were able to buy protection for a failed product, then this protection would be a put option (of the real option type), and would be worth \\(0\.3 \\times 70 \= 21\\). Since the put saves the loss on failure, the value is simply the expected loss amount, conditional on loss. Market researchers think of this as the value of **perfect information**.
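Both numbers are simple expectations, and it may help to see the arithmetic laid out in R. This is just a sketch of the calculation described above, using the payoffs and probabilities from this example.

```
#EXPECTED VALUE OF THE LAUNCH AND VALUE OF PERFECT INFORMATION
payoff = c(100,-70)          #payoff if S, payoff if F
prob = c(0.7,0.3)            #Pr(S), Pr(F)
print(sum(payoff*prob))      #E(x) = 49
print(prob[2]*70)            #value of protection against failure = 21
```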
#### 6\.9\.0\.1 Product Launch?

Would you proceed with this product launch given these odds? **Yes**, the expected value is positive (note that we are assuming away risk aversion issues here; this is a marketing research analysis, not a finance topic).

#### 6\.9\.0\.2 Pilot Test

Now suppose there is an intermediate choice, i.e. you can undertake a pilot test (denoted \\(T\\)). Pilot tests are not highly accurate though they are reasonably sophisticated. The pilot test signals success (\\(T\+\\)) or failure (\\(T\-\\)) with the following probabilities:

\\\[
Pr(T\+ \| S) \= 0\.8 \\\\
Pr(T\- \| S) \= 0\.2 \\\\
Pr(T\+ \| F) \= 0\.3 \\\\
Pr(T\- \| F) \= 0\.7
\\]

What are these? We note that \\(Pr(T\+ \| S)\\) stands for the probability that the pilot signals success when indeed the underlying product launch will be successful. Thus the pilot in this case gives an accurate reading of success only 80% of the time. Analogously, one can interpret the other probabilities. We may compute the probability that the pilot gives a positive result:

\\\[
\\begin{aligned}
Pr(T\+) \&\= Pr(T\+ \| S)Pr(S) \+ Pr(T\+ \| F)Pr(F) \\\\
\&\= (0\.8\)(0\.7\)\+(0\.3\)(0\.3\) \= 0\.65
\\end{aligned}
\\]

and that the result is negative:

\\\[
\\begin{aligned}
Pr(T\-) \&\= Pr(T\- \| S)Pr(S) \+ Pr(T\- \| F)Pr(F) \\\\
\&\= (0\.2\)(0\.7\)\+(0\.7\)(0\.3\) \= 0\.35
\\end{aligned}
\\]

which now allows us to compute the following conditional probabilities:

\\\[
\\begin{aligned}
Pr(S \| T\+) \&\= \\frac{Pr(T\+\|S)Pr(S)}{Pr(T\+)} \= \\frac{(0\.8\)(0\.7\)}{0\.65} \= 0\.86154 \\\\
Pr(S \| T\-) \&\= \\frac{Pr(T\-\|S)Pr(S)}{Pr(T\-)} \= \\frac{(0\.2\)(0\.7\)}{0\.35} \= 0\.4 \\\\
Pr(F \| T\+) \&\= \\frac{Pr(T\+\|F)Pr(F)}{Pr(T\+)} \= \\frac{(0\.3\)(0\.3\)}{0\.65} \= 0\.13846 \\\\
Pr(F \| T\-) \&\= \\frac{Pr(T\-\|F)Pr(F)}{Pr(T\-)} \= \\frac{(0\.7\)(0\.3\)}{0\.35} \= 0\.6
\\end{aligned}
\\]

Armed with these conditional probabilities, we may now re\-evaluate our product launch. If the pilot comes out positive, what is the expected value of the product launch? This is as follows:

\\\[
E(x \| T\+) \= 100 Pr(S\|T\+) \+(\-70\) Pr(F\|T\+) \\\\
\= 100(0\.86154\)\-70(0\.13846\) \\\\
\= 76\.462
\\]

And if the pilot comes out negative, then the value of the launch is:

\\\[
E(x \| T\-) \= 100 Pr(S\|T\-) \+(\-70\) Pr(F\|T\-) \\\\
\= 100(0\.4\)\-70(0\.6\) \\\\
\= \-2
\\]

So, we see that if the pilot is negative, then we know that the expected value from the main product launch is negative, and we do not proceed. Thus, the overall expected value after the pilot is (with the value after a negative pilot set to zero, since we do not launch in that case)

\\\[
E(x) \= E(x\|T\+)Pr(T\+) \+ E(x\|T\-)Pr(T\-) \\\\
\= 76\.462(0\.65\) \+ (0\)(0\.35\) \\\\
\= 49\.70
\\]

The incremental value over the case without the pilot test is \\(0\.70\\). This is the information value of the pilot test.
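The entire pilot-test valuation is easily scripted, which also makes it simple to vary the test's accuracy and see how the information value changes. The sketch below simply re-does the algebra above in R; the variable names are ours.

```
#VALUE OF INFORMATION FROM THE PILOT TEST
pS = 0.7; pF = 0.3                      #Pr(S), Pr(F)
pTpos_S = 0.8; pTpos_F = 0.3            #Pr(T+|S), Pr(T+|F)
pTpos = pTpos_S*pS + pTpos_F*pF         #Pr(T+) = 0.65
pTneg = 1 - pTpos                       #Pr(T-) = 0.35
pS_Tpos = pTpos_S*pS/pTpos              #Pr(S|T+) = 0.86154
pS_Tneg = (1-pTpos_S)*pS/pTneg          #Pr(S|T-) = 0.40
EV_Tpos = 100*pS_Tpos - 70*(1-pS_Tpos)  #E(x|T+) = 76.462
EV_Tneg = 100*pS_Tneg - 70*(1-pS_Tneg)  #E(x|T-) = -2, so do not launch after T-
EV_pilot = max(EV_Tpos,0)*pTpos + max(EV_Tneg,0)*pTneg
print(EV_pilot)                         #overall expected value = 49.7
print(EV_pilot - 49)                    #information value of the pilot = 0.7
```

The last two numbers reproduce the overall expected value of 49.70 and the incremental information value of 0.70 computed above.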
6\.10 Other Marketing Applications
----------------------------------

Bayesian methods show up in many areas in the Marketing field, especially around customer heterogeneity; see Allenby and Rossi ([1998](#ref-RePEc:eee:econom:v:89:y:1998:i:1-2:p:57-78)). Other papers are as follows:

* See the paper "The HB Revolution: How Bayesian Methods Have Changed the Face of Marketing Research," by Allenby, Bakken, and Rossi ([2004](#ref-AllenbyBakkenRossi)).
* See also the paper by David Bakken, titled "The Bayesian Revolution in Marketing Research".
* In conjoint analysis, see the paper by Sawtooth Software: [https://www.sawtoothsoftware.com/download/techpap/undca15\.pdf](https://www.sawtoothsoftware.com/download/techpap/undca15.pdf)
Chapter 7 More than Words: Text Analytics
=========================================

7\.1 Introduction
-----------------

Text expands the universe of data many\-fold. See my monograph on text mining in finance at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf>

In Finance, for example, text has become a major source of trading information, leading to a new field known as News Metrics. News analysis is defined as “the measurement of the various qualitative and quantitative attributes of textual news stories. Some of these attributes are: sentiment, relevance, and novelty. Expressing news stories as numbers permits the manipulation of everyday information in a mathematical and statistical way.” (Wikipedia).

In this chapter, I provide a framework for text analytics techniques that are in widespread use. I will discuss various text analytic methods and software, and then provide a set of metrics that may be used to assess the performance of analytics. Various directions for this field are discussed through the exposition. The techniques herein can aid in the valuation and trading of securities, facilitate investment decision making, meet regulatory requirements, provide marketing insights, or manage risk. See: [https://www.amazon.com/Handbook\-News\-Analytics\-Finance/dp/047066679X/ref\=sr\_1\_1?ie\=UTF8\&qid\=1466897817\&sr\=8\-1\&keywords\=handbook\+of\+news\+analytics](https://www.amazon.com/Handbook-News-Analytics-Finance/dp/047066679X/ref=sr_1_1?ie=UTF8&qid=1466897817&sr=8-1&keywords=handbook+of+news+analytics)

“News analytics are used in financial modeling, particularly in quantitative and algorithmic trading. Further, news analytics can be used to plot and characterize firm behaviors over time and thus yield important strategic insights about rival firms. News analytics are usually derived through automated text analysis and applied to digital texts using elements from natural language processing and machine learning such as latent semantic analysis, support vector machines, ‘bag of words’, among other techniques.” (Wikipedia)

7\.2 Text as Data
-----------------

There are many reasons why text has business value. But this is a narrow view. Textual data provides a means of understanding all human behavior through a data\-driven, analytical approach. Let’s enumerate some reasons for this.

1. Big Text: there is more textual data than numerical data.
2. Text is versatile. Nuances and behavioral expressions are not conveyed with numbers, so analyzing text allows us to explore these aspects of human interaction.
3. Text contains emotive content. This has led to the ubiquity of “Sentiment analysis”. See for example: Admati\-Pfleiderer 2001; DeMarzo et al 2003; Antweiler\-Frank 2004, 2005; Das\-Chen 2007; Tetlock 2007; Tetlock et al 2008; Mitra et al 2008; Leinweber\-Sisk 2010\.
4. Text contains opinions and connections. See: Das et al 2005; Das and Sisk 2005; Godes et al 2005; Li 2006; Hochberg et al 2007\.
5. Numbers aggregate; text disaggregates. Text allows us to drill down into underlying behavior when understanding human interaction.

1. In a talk at the 17th ACM Conference on Information Knowledge and Management (CIKM ’08), Google’s director of research Peter Norvig stated his unequivocal preference for data over algorithms: “data is more agile than code.” Yet, it is well\-understood that too much data can lead to overfitting so that an algorithm becomes mostly useless out\-of\-sample.
2. Chris Anderson: “Data is the New Theory.”
3.
These issues are relevant to text mining, but let’s put them on hold till the end of the session.

7\.3 Definition: Text\-Mining
-----------------------------

I will make an attempt to provide a comprehensive definition of “Text Mining”. As definitions go, it is often easier to enumerate various versions and nuances of an activity than to describe something in one single statement. So here goes:

1. Text mining is the large\-scale, automated processing of plain text language in digital form to extract data that is converted into useful quantitative or qualitative information.
2. Text mining is automated on big data that is not amenable to human processing within reasonable time frames. It entails extracting data that is converted into information of many types.
3. Simple: Text mining may be as simple as key word searches and counts.
4. Complicated: It may require language parsing and complex rules for information extraction.
5. It involves structured text, such as the information in forms and some kinds of web pages.
6. It may be applied to unstructured text, which is a much harder endeavor.
7. Text mining is also aimed at unearthing unseen relationships in unstructured text, as in meta analyses of research papers; see Van Noorden 2012\.

7\.4 Data and Algorithms
------------------------

7\.5 Text Extraction
--------------------

The R programming language is increasingly being used to download text from the web and then analyze it. The ease with which R may be used to scrape text from a web site may be seen from the following simple command in R:

```
text = readLines("http://srdas.github.io/bio-candid.html")
text[15:20]
```

```
## [1] "journals. Prior to being an academic, he worked in the derivatives"
## [2] "business in the Asia-Pacific region as a Vice-President at"
## [3] "Citibank. His current research interests include: machine learning,"
## [4] "social networks, derivatives pricing models, portfolio theory, the"
## [5] "modeling of default risk, and venture capital. He has published over"
## [6] "ninety articles in academic journals, and has won numerous awards for"
```

Here, we downloaded my bio page from my university’s web site. It’s a simple HTML file.

```
length(text)
```

```
## [1] 80
```

7\.6 String Parsing
-------------------

Suppose we just want the 17th line; we do:

```
text[17]
```

```
## [1] "Citibank. His current research interests include: machine learning,"
```

And, to find out the character length of this line, we use the function:

```
library(stringr)
str_length(text[17])
```

```
## [1] 67
```

We have first invoked the library **stringr**, which contains many string handling functions. In fact, we may also get the length of each line in the text vector by applying the function **str\_length()** to the entire text vector.

```
text_len = str_length(text)
print(text_len)
```

```
## [1] 6 69 0 66 70 70 70 63 69 65 59 59 70 67 66 58 67 66 69 69 67 62 63
## [24] 19 0 0 56 0 65 67 66 65 64 66 69 63 69 65 27 0 3 0 71 71 69 68
## [47] 71 12 0 3 0 71 70 68 71 69 63 67 69 64 67 7 0 3 0 67 71 65 63
## [70] 72 69 68 66 69 70 70 43 0 0 0
```

```
print(text_len[55])
```

```
## [1] 71
```

```
text_len[17]
```

```
## [1] 67
```

7\.7 Sort by Length
-------------------

Some lines are very long and are the ones we are mainly interested in, as they contain the bulk of the story, whereas many of the remaining lines that are shorter contain html formatting instructions. Thus, we may extract the top three lengthy lines with the following set of commands.
``` res = sort(text_len,decreasing=TRUE,index.return=TRUE) idx = res$ix text2 = text[idx] text2 ``` ``` ## [1] "important to open the academic door to the ivory tower and let the world" ## [2] "Sanjiv is now a Professor of Finance at Santa Clara University. He came" ## [3] "to SCU from Harvard Business School and spent a year at UC Berkeley. In" ## [4] "previous lives into his current existence, which is incredibly confused" ## [5] "Sanjiv's research style is instilled with a distinct \"New York state of" ## [6] "funds, the internet, portfolio choice, banking models, credit risk, and" ## [7] "ocean. The many walks in Greenwich village convinced him that there is" ## [8] "Santa Clara University's Leavey School of Business. He previously held" ## [9] "faculty appointments as Associate Professor at Harvard Business School" ## [10] "and UC Berkeley. He holds post-graduate degrees in Finance (M.Phil and" ## [11] "Management, co-editor of The Journal of Derivatives and The Journal of" ## [12] "mind\" - it is chaotic, diverse, with minimal method to the madness. He" ## [13] "any time you like, but you can never leave.\" Which is why he is doomed" ## [14] "to a lifetime in Hotel California. And he believes that, if this is as" ## [15] "<BODY background=\"http://algo.scu.edu/~sanjivdas/graphics/back2.gif\">" ## [16] "Berkeley), an MBA from the Indian Institute of Management, Ahmedabad," ## [17] "modeling of default risk, and venture capital. He has published over" ## [18] "ninety articles in academic journals, and has won numerous awards for" ## [19] "science fiction movies, and writing cool software code. When there is" ## [20] "academic papers, which helps him relax. Always the contrarian, Sanjiv" ## [21] "his past life in the unreal world, Sanjiv worked at Citibank, N.A. in" ## [22] "has unpublished articles in many other areas. Some years ago, he took" ## [23] "There he learnt about the fascinating field of Randomized Algorithms," ## [24] "in. Academia is a real challenge, given that he has to reconcile many" ## [25] "explains, you never really finish your education - \"you can check out" ## [26] "the Asia-Pacific region. He takes great pleasure in merging his many" ## [27] "has published articles on derivatives, term-structure models, mutual" ## [28] "more opinions than ideas. He has been known to have turned down many" ## [29] "Financial Services Research, and Associate Editor of other academic" ## [30] "Citibank. His current research interests include: machine learning," ## [31] "research and teaching. His recent book \"Derivatives: Principles and" ## [32] "growing up, Sanjiv moved to New York to change the world, hopefully" ## [33] "confirming that an unchecked hobby can quickly become an obsession." ## [34] "pursuits, many of which stem from being in the epicenter of Silicon" ## [35] "Coastal living did a lot to mold Sanjiv, who needs to live near the" ## [36] "Sanjiv Das is the William and Janice Terry Professor of Finance at" ## [37] "journals. Prior to being an academic, he worked in the derivatives" ## [38] "social networks, derivatives pricing models, portfolio theory, the" ## [39] "through research. He graduated in 1994 with a Ph.D. from NYU, and" ## [40] "mountains meet the sea, riding sport motorbikes, reading, gadgets," ## [41] "offers from Mad magazine to publish his academic work. 
As he often" ## [42] "B.Com in Accounting and Economics (University of Bombay, Sydenham" ## [43] "After loafing and working in many parts of Asia, but never really" ## [44] "since then spent five years in Boston, and now lives in San Jose," ## [45] "thinks that New York City is the most calming place in the world," ## [46] "no such thing as a representative investor, yet added many unique" ## [47] "California. Sanjiv loves animals, places in the world where the" ## [48] "skills he now applies earnestly to his editorial work, and other" ## [49] "Ph.D. from New York University), Computer Science (M.S. from UC" ## [50] "currently also serves as a Senior Fellow at the FDIC Center for" ## [51] "time available from the excitement of daily life, Sanjiv writes" ## [52] "time off to get another degree in computer science at Berkeley," ## [53] "features to his personal utility function. He learnt that it is" ## [54] "Practice\" was published in May 2010 (second edition 2016). He" ## [55] "College), and is also a qualified Cost and Works Accountant" ## [56] "(AICWA). He is a senior editor of The Journal of Investment" ## [57] "business in the Asia-Pacific region as a Vice-President at" ## [58] "<p> <B>Sanjiv Das: A Short Academic Life History</B> <p>" ## [59] "bad as it gets, life is really pretty good." ## [60] "after California of course." ## [61] "Financial Research." ## [62] "and diverse." ## [63] "Valley." ## [64] "<HTML>" ## [65] "<p>" ## [66] "<p>" ## [67] "<p>" ## [68] "" ## [69] "" ## [70] "" ## [71] "" ## [72] "" ## [73] "" ## [74] "" ## [75] "" ## [76] "" ## [77] "" ## [78] "" ## [79] "" ## [80] "" ``` 7\.8 Text cleanup ----------------- In short, text extraction can be exceedingly simple, though getting clean text is not as easy an operation. Removing html tags and other unnecessary elements in the file is also a fairly simple operation. We undertake the following steps that use generalized regular expressions (i.e., **grep**) to eliminate html formatting characters. This will generate one single paragraph of text, relatively clean of formatting characters. Such a text collection is also known as a “bag of words”. ``` text = paste(text,collapse="\n") print(text) ``` ``` ## [1] "<HTML>\n<BODY background=\"http://algo.scu.edu/~sanjivdas/graphics/back2.gif\">\n\nSanjiv Das is the William and Janice Terry Professor of Finance at\nSanta Clara University's Leavey School of Business. He previously held\nfaculty appointments as Associate Professor at Harvard Business School\nand UC Berkeley. He holds post-graduate degrees in Finance (M.Phil and\nPh.D. from New York University), Computer Science (M.S. from UC\nBerkeley), an MBA from the Indian Institute of Management, Ahmedabad,\nB.Com in Accounting and Economics (University of Bombay, Sydenham\nCollege), and is also a qualified Cost and Works Accountant\n(AICWA). He is a senior editor of The Journal of Investment\nManagement, co-editor of The Journal of Derivatives and The Journal of\nFinancial Services Research, and Associate Editor of other academic\njournals. Prior to being an academic, he worked in the derivatives\nbusiness in the Asia-Pacific region as a Vice-President at\nCitibank. His current research interests include: machine learning,\nsocial networks, derivatives pricing models, portfolio theory, the\nmodeling of default risk, and venture capital. He has published over\nninety articles in academic journals, and has won numerous awards for\nresearch and teaching. 
His recent book \"Derivatives: Principles and\nPractice\" was published in May 2010 (second edition 2016). He\ncurrently also serves as a Senior Fellow at the FDIC Center for\nFinancial Research.\n\n\n<p> <B>Sanjiv Das: A Short Academic Life History</B> <p>\n\nAfter loafing and working in many parts of Asia, but never really\ngrowing up, Sanjiv moved to New York to change the world, hopefully\nthrough research. He graduated in 1994 with a Ph.D. from NYU, and\nsince then spent five years in Boston, and now lives in San Jose,\nCalifornia. Sanjiv loves animals, places in the world where the\nmountains meet the sea, riding sport motorbikes, reading, gadgets,\nscience fiction movies, and writing cool software code. When there is\ntime available from the excitement of daily life, Sanjiv writes\nacademic papers, which helps him relax. Always the contrarian, Sanjiv\nthinks that New York City is the most calming place in the world,\nafter California of course.\n\n<p>\n\nSanjiv is now a Professor of Finance at Santa Clara University. He came\nto SCU from Harvard Business School and spent a year at UC Berkeley. In\nhis past life in the unreal world, Sanjiv worked at Citibank, N.A. in\nthe Asia-Pacific region. He takes great pleasure in merging his many\nprevious lives into his current existence, which is incredibly confused\nand diverse.\n\n<p>\n\nSanjiv's research style is instilled with a distinct \"New York state of\nmind\" - it is chaotic, diverse, with minimal method to the madness. He\nhas published articles on derivatives, term-structure models, mutual\nfunds, the internet, portfolio choice, banking models, credit risk, and\nhas unpublished articles in many other areas. Some years ago, he took\ntime off to get another degree in computer science at Berkeley,\nconfirming that an unchecked hobby can quickly become an obsession.\nThere he learnt about the fascinating field of Randomized Algorithms,\nskills he now applies earnestly to his editorial work, and other\npursuits, many of which stem from being in the epicenter of Silicon\nValley.\n\n<p>\n\nCoastal living did a lot to mold Sanjiv, who needs to live near the\nocean. The many walks in Greenwich village convinced him that there is\nno such thing as a representative investor, yet added many unique\nfeatures to his personal utility function. He learnt that it is\nimportant to open the academic door to the ivory tower and let the world\nin. Academia is a real challenge, given that he has to reconcile many\nmore opinions than ideas. He has been known to have turned down many\noffers from Mad magazine to publish his academic work. As he often\nexplains, you never really finish your education - \"you can check out\nany time you like, but you can never leave.\" Which is why he is doomed\nto a lifetime in Hotel California. 
And he believes that, if this is as\nbad as it gets, life is really pretty good.\n\n\n" ``` ``` text = str_replace_all(text,"[<>{}()&;,.\n]"," ") print(text) ``` ``` ## [1] " HTML BODY background=\"http://algo scu edu/~sanjivdas/graphics/back2 gif\" Sanjiv Das is the William and Janice Terry Professor of Finance at Santa Clara University's Leavey School of Business He previously held faculty appointments as Associate Professor at Harvard Business School and UC Berkeley He holds post-graduate degrees in Finance M Phil and Ph D from New York University Computer Science M S from UC Berkeley an MBA from the Indian Institute of Management Ahmedabad B Com in Accounting and Economics University of Bombay Sydenham College and is also a qualified Cost and Works Accountant AICWA He is a senior editor of The Journal of Investment Management co-editor of The Journal of Derivatives and The Journal of Financial Services Research and Associate Editor of other academic journals Prior to being an academic he worked in the derivatives business in the Asia-Pacific region as a Vice-President at Citibank His current research interests include: machine learning social networks derivatives pricing models portfolio theory the modeling of default risk and venture capital He has published over ninety articles in academic journals and has won numerous awards for research and teaching His recent book \"Derivatives: Principles and Practice\" was published in May 2010 second edition 2016 He currently also serves as a Senior Fellow at the FDIC Center for Financial Research p B Sanjiv Das: A Short Academic Life History /B p After loafing and working in many parts of Asia but never really growing up Sanjiv moved to New York to change the world hopefully through research He graduated in 1994 with a Ph D from NYU and since then spent five years in Boston and now lives in San Jose California Sanjiv loves animals places in the world where the mountains meet the sea riding sport motorbikes reading gadgets science fiction movies and writing cool software code When there is time available from the excitement of daily life Sanjiv writes academic papers which helps him relax Always the contrarian Sanjiv thinks that New York City is the most calming place in the world after California of course p Sanjiv is now a Professor of Finance at Santa Clara University He came to SCU from Harvard Business School and spent a year at UC Berkeley In his past life in the unreal world Sanjiv worked at Citibank N A in the Asia-Pacific region He takes great pleasure in merging his many previous lives into his current existence which is incredibly confused and diverse p Sanjiv's research style is instilled with a distinct \"New York state of mind\" - it is chaotic diverse with minimal method to the madness He has published articles on derivatives term-structure models mutual funds the internet portfolio choice banking models credit risk and has unpublished articles in many other areas Some years ago he took time off to get another degree in computer science at Berkeley confirming that an unchecked hobby can quickly become an obsession There he learnt about the fascinating field of Randomized Algorithms skills he now applies earnestly to his editorial work and other pursuits many of which stem from being in the epicenter of Silicon Valley p Coastal living did a lot to mold Sanjiv who needs to live near the ocean The many walks in Greenwich village convinced him that there is no such thing as a representative investor yet added many unique features to 
his personal utility function He learnt that it is important to open the academic door to the ivory tower and let the world in Academia is a real challenge given that he has to reconcile many more opinions than ideas He has been known to have turned down many offers from Mad magazine to publish his academic work As he often explains you never really finish your education - \"you can check out any time you like but you can never leave \" Which is why he is doomed to a lifetime in Hotel California And he believes that if this is as bad as it gets life is really pretty good " ``` 7\.9 The *XML* Package ---------------------- The **XML** package in R also comes with many functions that aid in cleaning up text and dropping it (mostly unformatted) into a flat file or data frame. This may then be further processed. Here is some example code for this. ### 7\.9\.1 Processing XML files in R into a data frame The following example has been adapted from r\-bloggers.com. It uses the following URL: <http://www.w3schools.com/xml/plant_catalog.xml> ``` library(XML) #Part1: Reading an xml and creating a data frame with it. xml.url <- "http://www.w3schools.com/xml/plant_catalog.xml" xmlfile <- xmlTreeParse(xml.url) xmltop <- xmlRoot(xmlfile) plantcat <- xmlSApply(xmltop, function(x) xmlSApply(x, xmlValue)) plantcat_df <- data.frame(t(plantcat),row.names=NULL) plantcat_df[1:5,1:4] ``` ### 7\.9\.2 Creating a XML file from a data frame ``` library(XML) ``` ``` ## Warning: package 'XML' was built under R version 3.3.2 ``` ``` ## Loading required package: methods ``` ``` #Example adapted from https://stat.ethz.ch/pipermail/r-help/2008-September/175364.html #Load the iris data set and create a data frame data("iris") data <- as.data.frame(iris) xml <- xmlTree() xml$addTag("document", close=FALSE) ``` ``` ## Warning in xmlRoot.XMLInternalDocument(currentNodes[[1]]): empty XML ## document ``` ``` for (i in 1:nrow(data)) { xml$addTag("row", close=FALSE) for (j in names(data)) { xml$addTag(j, data[i, j]) } xml$closeTag() } xml$closeTag() #view the xml (uncomment line below to see XML, long output) cat(saveXML(xml)) ``` ``` ## <?xml version="1.0"?> ## ## <document> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.9</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## 
<Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.3</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.1</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>4</Sepal.Width> ## <Petal.Length>1.2</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>4.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.9</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.5</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.9</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> 
## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>4.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>4.2</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.2</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.5</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.6</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.9</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## 
<Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.3</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.6</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.6</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>2.4</Sepal.Width> ## <Petal.Length>3.3</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.6</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.6</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>3.9</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>2</Sepal.Width> ## <Petal.Length>3.5</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.2</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>2.2</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## 
<Species>versicolor</Species> ## </row>
## ... [output truncated: the remaining <row> entries for the versicolor and virginica observations of the iris data follow the same pattern] ...
## </document>
```

7\.10 The Response to News
--------------------------

### 7\.10\.1 Das, Martinez\-Jerez, and Tufano (FM 2005\)

### 7\.10\.2 Breakdown of News Flow

### 7\.10\.3 Frequency of Postings

### 7\.10\.4 Weekly Posting

### 7\.10\.5 Intraday Posting

### 7\.10\.6 Number of Characters per Posting

7\.11 Text Handling
-------------------

First, let’s read in a simple web page (my landing page)

```
text = readLines("http://srdas.github.io/")
print(text[1:4])
```

```
## [1] "<html>"
## [2] ""
## [3] "<head>"
## [4] "<title>SCU Web Page of Sanjiv Ranjan Das</title>"
```

```
print(length(text))
```

```
## [1] 36
```

### 7\.11\.1 String Detection

String handling is a basic need, so we use the **stringr** package.
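Before extracting substrings, here is a minimal detection sketch using **str\_detect** and **str\_count** from **stringr** (the results naturally depend on whatever the page contains at the time it is read):

```
#Which of the first few lines mention "Sanjiv", and how often does it appear overall?
library(stringr)
print(str_detect(text[1:4],"Sanjiv"))   #logical flag per line
print(sum(str_count(text,"Sanjiv")))    #total number of matches across the page
```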
``` #EXTRACTING SUBSTRINGS (take some time to look at #the "stringr" package also) library(stringr) substr(text[4],24,29) ``` ``` ## [1] "Sanjiv" ``` ``` #IF YOU WANT TO LOCATE A STRING res = regexpr("Sanjiv",text[4]) print(res) ``` ``` ## [1] 24 ## attr(,"match.length") ## [1] 6 ## attr(,"useBytes") ## [1] TRUE ``` ``` print(substr(text[4],res[1],res[1]+nchar("Sanjiv")-1)) ``` ``` ## [1] "Sanjiv" ``` ``` #ANOTHER WAY res = str_locate(text[4],"Sanjiv") print(res) ``` ``` ## start end ## [1,] 24 29 ``` ``` print(substr(text[4],res[1],res[2])) ``` ``` ## [1] "Sanjiv" ``` ### 7\.11\.2 Cleaning Text Now we look at using regular expressions with the **grep** command to clean out text. I will read in my research page to process this. Here we are undertaking a “ruthless” cleanup. ``` #SIMPLE TEXT HANDLING text = readLines("http://srdas.github.io/research.htm") print(length(text)) ``` ``` ## [1] 845 ``` ``` #print(text) text = text[setdiff(seq(1,length(text)),grep("<",text))] text = text[setdiff(seq(1,length(text)),grep(">",text))] text = text[setdiff(seq(1,length(text)),grep("]",text))] text = text[setdiff(seq(1,length(text)),grep("}",text))] text = text[setdiff(seq(1,length(text)),grep("_",text))] text = text[setdiff(seq(1,length(text)),grep("\\/",text))] print(length(text)) ``` ``` ## [1] 350 ``` ``` #print(text) text = str_replace_all(text,"[\"]","") idx = which(nchar(text)==0) research = text[setdiff(seq(1,length(text)),idx)] print(research) ``` ``` ## [1] "Data Science: Theories, Models, Algorithms, and Analytics (web book -- work in progress)" ## [2] "Derivatives: Principles and Practice (2010)," ## [3] "(Rangarajan Sundaram and Sanjiv Das), McGraw Hill." ## [4] "An Index-Based Measure of Liquidity,'' (with George Chacko and Rong Fan), (2016)." ## [5] "Matrix Metrics: Network-Based Systemic Risk Scoring, (2016)." ## [6] "of systemic risk. This paper won the First Prize in the MIT-CFP competition 2016 for " ## [7] "the best paper on SIFIs (systemically important financial institutions). " ## [8] "It also won the best paper award at " ## [9] "Credit Spreads with Dynamic Debt (with Seoyoung Kim), (2015), " ## [10] "Text and Context: Language Analytics for Finance, (2014)," ## [11] "Strategic Loan Modification: An Options-Based Response to Strategic Default," ## [12] "Options and Structured Products in Behavioral Portfolios, (with Meir Statman), (2013), " ## [13] "and barrier range notes, in the presence of fat-tailed outcomes using copulas." ## [14] "Polishing Diamonds in the Rough: The Sources of Syndicated Venture Performance, (2011), (with Hoje Jo and Yongtae Kim), " ## [15] "Optimization with Mental Accounts, (2010), (with Harry Markowitz, Jonathan" ## [16] "Accounting-based versus market-based cross-sectional models of CDS spreads, " ## [17] "(with Paul Hanouna and Atulya Sarin), (2009), " ## [18] "Hedging Credit: Equity Liquidity Matters, (with Paul Hanouna), (2009)," ## [19] "An Integrated Model for Hybrid Securities," ## [20] "Yahoo for Amazon! Sentiment Extraction from Small Talk on the Web," ## [21] "Common Failings: How Corporate Defaults are Correlated " ## [22] "(with Darrell Duffie, Nikunj Kapadia and Leandro Saita)." 
## [23] "A Clinical Study of Investor Discussion and Sentiment, " ## [24] "(with Asis Martinez-Jerez and Peter Tufano), 2005, " ## [25] "International Portfolio Choice with Systemic Risk," ## [26] "The loss resulting from diminished diversification is small, while" ## [27] "Speech: Signaling, Risk-sharing and the Impact of Fee Structures on" ## [28] "investor welfare. Contrary to regulatory intuition, incentive structures" ## [29] "A Discrete-Time Approach to No-arbitrage Pricing of Credit derivatives" ## [30] "with Rating Transitions, (with Viral Acharya and Rangarajan Sundaram)," ## [31] "Pricing Interest Rate Derivatives: A General Approach,''(with George Chacko)," ## [32] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [33] "The Psychology of Financial Decision Making: A Case" ## [34] "for Theory-Driven Experimental Enquiry,''" ## [35] "1999, (with Priya Raghubir)," ## [36] "Of Smiles and Smirks: A Term Structure Perspective,''" ## [37] "A Theory of Banking Structure, 1999, (with Ashish Nanda)," ## [38] "by function based upon two dimensions: the degree of information asymmetry " ## [39] "A Theory of Optimal Timing and Selectivity,'' " ## [40] "A Direct Discrete-Time Approach to" ## [41] "Poisson-Gaussian Bond Option Pricing in the Heath-Jarrow-Morton " ## [42] "The Central Tendency: A Second Factor in" ## [43] "Bond Yields, 1998, (with Silverio Foresi and Pierluigi Balduzzi), " ## [44] "Efficiency with Costly Information: A Reinterpretation of" ## [45] "Evidence from Managed Portfolios, (with Edwin Elton, Martin Gruber and Matt " ## [46] "Presented and Reprinted in the Proceedings of The " ## [47] "Seminar on the Analysis of Security Prices at the Center " ## [48] "for Research in Security Prices at the University of " ## [49] "Managing Rollover Risk with Capital Structure Covenants" ## [50] "in Structured Finance Vehicles (2016)," ## [51] "The Design and Risk Management of Structured Finance Vehicles (2016)," ## [52] "Post the recent subprime financial crisis, we inform the creation of safer SIVs " ## [53] "in structured finance, and propose avenues of mitigating risks faced by senior debt through " ## [54] "Coming up Short: Managing Underfunded Portfolios in an LDI-ES Framework (2014), " ## [55] "(with Seoyoung Kim and Meir Statman), " ## [56] "Going for Broke: Restructuring Distressed Debt Portfolios (2014)," ## [57] "Digital Portfolios. (2013), " ## [58] "Options on Portfolios with Higher-Order Moments, (2009)," ## [59] "options on a multivariate system of assets, calibrated to the return " ## [60] "Dealing with Dimension: Option Pricing on Factor Trees, (2009)," ## [61] "you to price options on multiple assets in a unified fraamework. Computational" ## [62] "Modeling" ## [63] "Correlated Default with a Forest of Binomial Trees, (2007), (with" ## [64] "Basel II: Correlation Related Issues (2007), " ## [65] "Correlated Default Risk, (2006)," ## [66] "(with Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [67] "increase as markets worsen. Regime switching models are needed to explain dynamic" ## [68] "A Simple Model for Pricing Equity Options with Markov" ## [69] "Switching State Variables (2006)," ## [70] "(with Donald Aingworth and Rajeev Motwani)," ## [71] "The Firm's Management of Social Interactions, (2005)" ## [72] "(with D. Godes, D. Mayzlin, Y. Chen, S. Das, C. Dellarocas, " ## [73] "B. Pfeieffer, B. Libai, S. Sen, M. Shi, and P. Verlegh). " ## [74] "Financial Communities (with Jacob Sisk), 2005, " ## [75] "Summer, 112-123." 
## [76] "Monte Carlo Markov Chain Methods for Derivative Pricing" ## [77] "and Risk Assessment,(with Alistair Sinclair), 2005, " ## [78] "where incomplete information about the value of an asset may be exploited to " ## [79] "undertake fast and accurate pricing. Proof that a fully polynomial randomized " ## [80] "Correlated Default Processes: A Criterion-Based Copula Approach," ## [81] "Special Issue on Default Risk. " ## [82] "Private Equity Returns: An Empirical Examination of the Exit of" ## [83] "Venture-Backed Companies, (with Murali Jagannathan and Atulya Sarin)," ## [84] "firm being financed, the valuation at the time of financing, and the prevailing market" ## [85] "sentiment. Helps understand the risk premium required for the" ## [86] "Issue on Computational Methods in Economics and Finance), " ## [87] "December, 55-69." ## [88] "Bayesian Migration in Credit Ratings Based on Probabilities of" ## [89] "The Impact of Correlated Default Risk on Credit Portfolios," ## [90] "(with Gifford Fong, and Gary Geng)," ## [91] "How Diversified are Internationally Diversified Portfolios:" ## [92] "Time-Variation in the Covariances between International Returns," ## [93] "Discrete-Time Bond and Option Pricing for Jump-Diffusion" ## [94] "Macroeconomic Implications of Search Theory for the Labor Market," ## [95] "Auction Theory: A Summary with Applications and Evidence" ## [96] "from the Treasury Markets, 1996, (with Rangarajan Sundaram)," ## [97] "A Simple Approach to Three Factor Affine Models of the" ## [98] "Term Structure, (with Pierluigi Balduzzi, Silverio Foresi and Rangarajan" ## [99] "Analytical Approximations of the Term Structure" ## [100] "for Jump-diffusion Processes: A Numerical Analysis, 1996, " ## [101] "Markov Chain Term Structure Models: Extensions and Applications," ## [102] "Exact Solutions for Bond and Options Prices" ## [103] "with Systematic Jump Risk, 1996, (with Silverio Foresi)," ## [104] "Pricing Credit Sensitive Debt when Interest Rates, Credit Ratings" ## [105] "and Credit Spreads are Stochastic, 1996, " ## [106] "v5(2), 161-198." ## [107] "Did CDS Trading Improve the Market for Corporate Bonds, (2016), " ## [108] "(with Madhu Kalimipalli and Subhankar Nayak), " ## [109] "Big Data's Big Muscle, (2016), " ## [110] "Portfolios for Investors Who Want to Reach Their Goals While Staying on the Mean-Variance Efficient Frontier, (2011), " ## [111] "(with Harry Markowitz, Jonathan Scheid, and Meir Statman), " ## [112] "News Analytics: Framework, Techniques and Metrics, The Handbook of News Analytics in Finance, May 2011, John Wiley & Sons, U.K. 
" ## [113] "Random Lattices for Option Pricing Problems in Finance, (2011)," ## [114] "Implementing Option Pricing Models using Python and Cython, (2010)," ## [115] "The Finance Web: Internet Information and Markets, (2010), " ## [116] "Financial Applications with Parallel R, (2009), " ## [117] "Recovery Swaps, (2009), (with Paul Hanouna), " ## [118] "Recovery Rates, (2009),(with Paul Hanouna), " ## [119] "``A Simple Model for Pricing Securities with a Debt-Equity Linkage,'' 2008, in " ## [120] "Credit Default Swap Spreads, 2006, (with Paul Hanouna), " ## [121] "Multiple-Core Processors for Finance Applications, 2006, " ## [122] "Power Laws, 2005, (with Jacob Sisk), " ## [123] "Genetic Algorithms, 2005," ## [124] "Recovery Risk, 2005," ## [125] "Venture Capital Syndication, (with Hoje Jo and Yongtae Kim), 2004" ## [126] "Technical Analysis, (with David Tien), 2004" ## [127] "Liquidity and the Bond Markets, (with Jan Ericsson and " ## [128] "Madhu Kalimipalli), 2003," ## [129] "Modern Pricing of Interest Rate Derivatives - Book Review, " ## [130] "Contagion, 2003," ## [131] "Hedge Funds, 2003," ## [132] "Reprinted in " ## [133] "Working Papers on Hedge Funds, in The World of Hedge Funds: " ## [134] "Characteristics and " ## [135] "Analysis, 2005, World Scientific." ## [136] "The Internet and Investors, 2003," ## [137] " Useful things to know about Correlated Default Risk," ## [138] "(with Gifford Fong, Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [139] "The Regulation of Fee Structures in Mutual Funds: A Theoretical Analysis,'' " ## [140] "(with Rangarajan Sundaram), 1998, NBER WP No 6639, in the" ## [141] "Courant Institute of Mathematical Sciences, special volume on" ## [142] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [143] "(with Rangarajan Sundaram), reprinted in " ## [144] "the Courant Institute of Mathematical Sciences, special volume on" ## [145] "Stochastic Mean Models of the Term Structure,''" ## [146] "(with Pierluigi Balduzzi, Silverio Foresi and Rangarajan Sundaram), " ## [147] "John Wiley & Sons, Inc., 128-161." ## [148] "Interest Rate Modeling with Jump-Diffusion Processes,'' " ## [149] "John Wiley & Sons, Inc., 162-189." ## [150] "Comments on 'Pricing Excess-of-Loss Reinsurance Contracts against" ## [151] "Catastrophic Loss,' by J. David Cummins, C. Lewis, and Richard Phillips," ## [152] "Froot (Ed.), University of Chicago Press, 1999, 141-145." ## [153] " Pricing Credit Derivatives,'' " ## [154] "J. Frost and J.G. Whittaker, 101-138." ## [155] "On the Recursive Implementation of Term Structure Models,'' " ## [156] "Zero-Revelation RegTech: Detecting Risk through" ## [157] "Linguistic Analysis of Corporate Emails and News " ## [158] "(with Seoyoung Kim and Bhushan Kothari)." ## [159] "Summary for the Columbia Law School blog: " ## [160] " " ## [161] "Dynamic Risk Networks: A Note " ## [162] "(with Seoyoung Kim and Dan Ostrov)." ## [163] "Research Challenges in Financial Data Modeling and Analysis " ## [164] "(with Lewis Alexander, Zachary Ives, H.V. Jagadish, and Claire Monteleoni)." ## [165] "Local Volatility and the Recovery Rate of Credit Default Swaps " ## [166] "(with Jeroen Jansen and Frank Fabozzi)." 
## [167] "Efficient Rebalancing of Taxable Portfolios (with Dan Ostrov, Dennis Ding, Vincent Newell), " ## [168] "The Fast and the Curious: VC Drift " ## [169] "(with Amit Bubna and Paul Hanouna), " ## [170] "Venture Capital Communities (with Amit Bubna and Nagpurnanand Prabhala), " ## [171] " " ``` Take a look at the text now to see how cleaned up it is. But there is a better way, i.e., use the text\-mining package **tm**. 7\.12 Package *tm* ------------------ 1. The R programming language supports a text\-mining package, succinctly named {tm}. Using functions such as {readDOC()}, {readPDF()}, etc., for reading DOC and PDF files, the package makes accessing various file formats easy. 2. Text mining involves applying functions to many text documents. A library of text documents (irrespective of format) is called a **corpus**. The essential and highly useful feature of text mining packages is the ability to operate on the entire set of documents at one go. ``` library(tm) ``` ``` ## Loading required package: NLP ``` ``` text = c("INTL is expected to announce good earnings report", "AAPL first quarter disappoints","GOOG announces new wallet", "YHOO ascends from old ways") text_corpus = Corpus(VectorSource(text)) print(text_corpus) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 4 ``` ``` writeCorpus(text_corpus) ``` The **writeCorpus()** function in **tm** creates separate text files on the hard drive, and by default are names **1\.txt**, **2\.txt**, etc. The simple program code above shows how text scraped off a web page and collapsed into a single character string for each document, may then be converted into a corpus of documents using the **Corpus()** function. It is easy to inspect the corpus as follows: ``` inspect(text_corpus) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 4 ## ## [[1]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 49 ## ## [[2]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 30 ## ## [[3]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 25 ## ## [[4]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 26 ``` ### 7\.12\.1 A second example Here we use **lapply** to inspect the contents of the corpus. ``` #USING THE tm PACKAGE library(tm) text = c("Doc1;","This is doc2 --", "And, then Doc3.") ctext = Corpus(VectorSource(text)) ctext ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ``` ``` #writeCorpus(ctext) #THE CORPUS IS A LIST OBJECT in R of type VCorpus or Corpus inspect(ctext) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 5 ## ## [[2]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 15 ## ## [[3]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 15 ``` ``` print(as.character(ctext[[1]])) ``` ``` ## [1] "Doc1;" ``` ``` print(lapply(ctext[1:2],as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This is doc2 --" ``` ``` ctext = tm_map(ctext,tolower) #Lower case all text in all docs inspect(ctext) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] doc1; ## ## [[2]] ## [1] this is doc2 -- ## ## [[3]] ## [1] and, then doc3. 
``` ``` ctext2 = tm_map(ctext,toupper) inspect(ctext2) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] DOC1; ## ## [[2]] ## [1] THIS IS DOC2 -- ## ## [[3]] ## [1] AND, THEN DOC3. ``` ### 7\.12\.2 Function *tm\_map* * The **tm\_map** function is very useful for cleaning up the documents. We may want to remove some words. * We may also remove *stopwords*, punctuation, numbers, etc. ``` #FIRST CURATE TO UPPER CASE dropWords = c("IS","AND","THEN") ctext2 = tm_map(ctext2,removeWords,dropWords) inspect(ctext2) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] DOC1; ## ## [[2]] ## [1] THIS DOC2 -- ## ## [[3]] ## [1] , DOC3. ``` ``` ctext = Corpus(VectorSource(text)) temp = ctext print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This is doc2 --" ## ## $`3` ## [1] "And, then Doc3." ``` ``` temp = tm_map(temp,removeWords,stopwords("english")) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This doc2 --" ## ## $`3` ## [1] "And, Doc3." ``` ``` temp = tm_map(temp,removePunctuation) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1" ## ## $`2` ## [1] "This doc2 " ## ## $`3` ## [1] "And Doc3" ``` ``` temp = tm_map(temp,removeNumbers) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc" ## ## $`2` ## [1] "This doc " ## ## $`3` ## [1] "And Doc" ``` ### 7\.12\.3 Bag of Words We can create a *bag of words* by collapsing all the text into one bundle. ``` #CONVERT CORPUS INTO ARRAY OF STRINGS AND FLATTEN txt = NULL for (j in 1:length(temp)) { txt = c(txt,temp[[j]]$content) } txt = paste(txt,collapse=" ") txt = tolower(txt) print(txt) ``` ``` ## [1] "doc this doc and doc" ``` ### 7\.12\.4 Example (on my bio page) Now we will do a full pass through of this on my bio. ``` text = readLines("http://srdas.github.io/bio-candid.html") ctext = Corpus(VectorSource(text)) ctext ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 80 ``` ``` #Print a few lines print(lapply(ctext, as.character)[10:15]) ``` ``` ## $`10` ## [1] "B.Com in Accounting and Economics (University of Bombay, Sydenham" ## ## $`11` ## [1] "College), and is also a qualified Cost and Works Accountant" ## ## $`12` ## [1] "(AICWA). He is a senior editor of The Journal of Investment" ## ## $`13` ## [1] "Management, co-editor of The Journal of Derivatives and The Journal of" ## ## $`14` ## [1] "Financial Services Research, and Associate Editor of other academic" ## ## $`15` ## [1] "journals. 
Prior to being an academic, he worked in the derivatives" ``` ``` ctext = tm_map(ctext,removePunctuation) print(lapply(ctext, as.character)[10:15]) ``` ``` ## $`10` ## [1] "BCom in Accounting and Economics University of Bombay Sydenham" ## ## $`11` ## [1] "College and is also a qualified Cost and Works Accountant" ## ## $`12` ## [1] "AICWA He is a senior editor of The Journal of Investment" ## ## $`13` ## [1] "Management coeditor of The Journal of Derivatives and The Journal of" ## ## $`14` ## [1] "Financial Services Research and Associate Editor of other academic" ## ## $`15` ## [1] "journals Prior to being an academic he worked in the derivatives" ``` ``` txt = NULL for (j in 1:length(ctext)) { txt = c(txt,ctext[[j]]$content) } txt = paste(txt,collapse=" ") txt = tolower(txt) print(txt) ``` ``` ## [1] "html body backgroundhttpalgoscuedusanjivdasgraphicsback2gif sanjiv das is the william and janice terry professor of finance at santa clara universitys leavey school of business he previously held faculty appointments as associate professor at harvard business school and uc berkeley he holds postgraduate degrees in finance mphil and phd from new york university computer science ms from uc berkeley an mba from the indian institute of management ahmedabad bcom in accounting and economics university of bombay sydenham college and is also a qualified cost and works accountant aicwa he is a senior editor of the journal of investment management coeditor of the journal of derivatives and the journal of financial services research and associate editor of other academic journals prior to being an academic he worked in the derivatives business in the asiapacific region as a vicepresident at citibank his current research interests include machine learning social networks derivatives pricing models portfolio theory the modeling of default risk and venture capital he has published over ninety articles in academic journals and has won numerous awards for research and teaching his recent book derivatives principles and practice was published in may 2010 second edition 2016 he currently also serves as a senior fellow at the fdic center for financial research p bsanjiv das a short academic life historyb p after loafing and working in many parts of asia but never really growing up sanjiv moved to new york to change the world hopefully through research he graduated in 1994 with a phd from nyu and since then spent five years in boston and now lives in san jose california sanjiv loves animals places in the world where the mountains meet the sea riding sport motorbikes reading gadgets science fiction movies and writing cool software code when there is time available from the excitement of daily life sanjiv writes academic papers which helps him relax always the contrarian sanjiv thinks that new york city is the most calming place in the world after california of course p sanjiv is now a professor of finance at santa clara university he came to scu from harvard business school and spent a year at uc berkeley in his past life in the unreal world sanjiv worked at citibank na in the asiapacific region he takes great pleasure in merging his many previous lives into his current existence which is incredibly confused and diverse p sanjivs research style is instilled with a distinct new york state of mind it is chaotic diverse with minimal method to the madness he has published articles on derivatives termstructure models mutual funds the internet portfolio choice banking models credit risk and has unpublished articles 
in many other areas some years ago he took time off to get another degree in computer science at berkeley confirming that an unchecked hobby can quickly become an obsession there he learnt about the fascinating field of randomized algorithms skills he now applies earnestly to his editorial work and other pursuits many of which stem from being in the epicenter of silicon valley p coastal living did a lot to mold sanjiv who needs to live near the ocean the many walks in greenwich village convinced him that there is no such thing as a representative investor yet added many unique features to his personal utility function he learnt that it is important to open the academic door to the ivory tower and let the world in academia is a real challenge given that he has to reconcile many more opinions than ideas he has been known to have turned down many offers from mad magazine to publish his academic work as he often explains you never really finish your education you can check out any time you like but you can never leave which is why he is doomed to a lifetime in hotel california and he believes that if this is as bad as it gets life is really pretty good " ``` 7\.13 Term Document Matrix (TDM) -------------------------------- An extremeley important object in text analysis is the **Term\-Document Matrix**. This allows us to store an entire library of text inside a single matrix. This may then be used for analysis as well as searching documents. It forms the basis of search engines, topic analysis, and classification (spam filtering). It is a table that provides the frequency count of every word (term) in each document. The number of rows in the TDM is equal to the number of unique terms, and the number of columns is equal to the number of documents. ``` #TERM-DOCUMENT MATRIX tdm = TermDocumentMatrix(ctext,control=list(minWordLength=1)) print(tdm) ``` ``` ## <<TermDocumentMatrix (terms: 321, documents: 80)>> ## Non-/sparse entries: 502/25178 ## Sparsity : 98% ## Maximal term length: 49 ## Weighting : term frequency (tf) ``` ``` inspect(tdm[10:20,11:18]) ``` ``` ## <<TermDocumentMatrix (terms: 11, documents: 8)>> ## Non-/sparse entries: 5/83 ## Sparsity : 94% ## Maximal term length: 10 ## Weighting : term frequency (tf) ## ## Docs ## Terms 11 12 13 14 15 16 17 18 ## after 0 0 0 0 0 0 0 0 ## ago 0 0 0 0 0 0 0 0 ## ahmedabad 0 0 0 0 0 0 0 0 ## aicwa 0 1 0 0 0 0 0 0 ## algorithms 0 0 0 0 0 0 0 0 ## also 1 0 0 0 0 0 0 0 ## always 0 0 0 0 0 0 0 0 ## and 2 0 1 1 0 0 0 0 ## animals 0 0 0 0 0 0 0 0 ## another 0 0 0 0 0 0 0 0 ## any 0 0 0 0 0 0 0 0 ``` ``` out = findFreqTerms(tdm,lowfreq=5) print(out) ``` ``` ## [1] "academic" "and" "derivatives" "from" "has" ## [6] "his" "many" "research" "sanjiv" "that" ## [11] "the" "world" ``` 7\.14 Term Frequency \- Inverse Document Frequency (TF\-IDF) ------------------------------------------------------------ This is a weighting scheme provided to sharpen the importance of rare words in a document, relative to the frequency of these words in the corpus. It is based on simple calculations and even though it does not have strong theoretical foundations, it is still very useful in practice. The TF\-IDF is the importance of a word \\(w\\) in a document \\(d\\) in a corpus \\(C\\). Therefore it is a function of all these three, i.e., we write it as TF\-IDF\\((w,d,C)\\), and is the product of term frequency (TF) and inverse document frequency (IDF). 
The frequency of a word in a document is defined as

\\\[ f(w,d) \= \\frac{\\\#w \\in d}{\|d\|} \\]

where \\(\|d\|\\) is the number of words in the document. We usually normalize word frequency so that

\\\[ TF(w,d) \= \\ln\[f(w,d)] \\]

This is log normalization. Another form of normalization is known as double normalization and is as follows:

\\\[ TF(w,d) \= \\frac{1}{2} \+ \\frac{1}{2} \\frac{f(w,d)}{\\max\_{w \\in d} f(w,d)} \\]

Note that normalization is not necessary, but it tends to help shrink the difference between counts of words. Inverse document frequency is as follows:

\\\[ IDF(w,C) \= \\ln\\left\[ \\frac{\|C\|}{\|d\_{w \\in d}\|} \\right] \\]

That is, we compute the ratio of the number of documents in the corpus \\(C\\) divided by the number of documents with word \\(w\\) in the corpus. Finally, we have the weighting score for a given word \\(w\\) in document \\(d\\) in corpus \\(C\\):

\\\[ \\mbox{TF\-IDF}(w,d,C) \= TF(w,d) \\times IDF(w,C) \\]

### 7\.14\.1 Example of TF\-IDF

We illustrate this with an application to the previously computed term\-document matrix.

```
tdm_mat = as.matrix(tdm)  #Convert tdm into a matrix
print(dim(tdm_mat))
```

```
## [1] 321 80
```

```
nw = dim(tdm_mat)[1]
nd = dim(tdm_mat)[2]
doc = 13               #Choose document
word = "derivatives"   #Choose word

#COMPUTE TF
f = NULL
for (w in row.names(tdm_mat)) {
  f = c(f,tdm_mat[w,doc]/sum(tdm_mat[,doc]))
}
fw = tdm_mat[word,doc]/sum(tdm_mat[,doc])
TF = 0.5 + 0.5*fw/max(f)
print(TF)
```

```
## [1] 0.75
```

```
#COMPUTE IDF
nw = length(which(tdm_mat[word,]>0))
print(nw)
```

```
## [1] 5
```

```
IDF = nd/nw
print(IDF)
```

```
## [1] 16
```

```
#COMPUTE TF-IDF
TF_IDF = TF*IDF
print(TF_IDF)   #With normalization
```

```
## [1] 12
```

```
print(fw*IDF)   #Without normalization
```

```
## [1] 2
```

Note that, for simplicity, this quick calculation uses the raw ratio IDF \= 80/5 \= 16 rather than its logarithm as in the formula above (which would give ln(16\), about 2\.77\). We can write this code into a function and work out the TF\-IDF for all words. Then these word weights may be used in further text analysis.

### 7\.14\.2 TF\-IDF in the **tm** package

We may also directly use the **weightTfIdf** function in the **tm** package. This undertakes the following computation:

* Term frequency \\({\\it tf}\_{i,j}\\) counts the number of occurrences \\(n\_{i,j}\\) of a term \\(t\_i\\) in a document \\(d\_j\\). In the case of normalization, the term frequency \\(\\mathit{tf}\_{i,j}\\) is divided by \\(\\sum\_k n\_{k,j}\\).
* Inverse document frequency for a term \\(t\_i\\) is defined as \\(\\mathit{idf}\_i \= \\log\_2 \\frac{\|D\|}{\|{d\_{t\_i \\in d}}\|}\\) where \\(\|D\|\\) denotes the total number of documents and \\(\|{d\_{t\_i \\in d}}\|\\) is the number of documents where the term \\(t\_i\\) appears.
* Term frequency \- inverse document frequency is now defined as \\(\\mathit{tf}\_{i,j} \\cdot \\mathit{idf}\_i\\).
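To make these definitions concrete, here is a minimal sketch that applies them directly to a small made\-up term\-document matrix (the matrix **m**, its term and document labels, and all counts are invented for illustration); with normalization switched on, **weightTfIdf** should produce weights of this kind.

```
#A small, made-up term-document matrix (terms in rows, documents in columns)
m = matrix(c(2,0,1, 0,3,1, 1,1,0), nrow=3, byrow=TRUE,
           dimnames=list(c("credit","risk","model"),c("d1","d2","d3")))
tf = t(t(m)/colSums(m))            #normalize counts by document length
idf = log2(ncol(m)/rowSums(m>0))   #log2 of (#docs / #docs containing the term)
tfidf = tf*idf                     #element-wise product, term by document
print(round(tfidf,3))
```

We now apply the built\-in weighting to the corpus of the bio page used earlier.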
``` tdm = TermDocumentMatrix(ctext,control=list(minWordLength=1,weighting=weightTfIdf)) ``` ``` ## Warning in weighting(x): empty document(s): 3 25 26 28 40 41 42 49 50 51 63 ## 64 65 78 79 80 ``` ``` print(tdm) ``` ``` ## <<TermDocumentMatrix (terms: 321, documents: 80)>> ## Non-/sparse entries: 502/25178 ## Sparsity : 98% ## Maximal term length: 49 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ``` ``` inspect(tdm[10:20,11:18]) ``` ``` ## <<TermDocumentMatrix (terms: 11, documents: 8)>> ## Non-/sparse entries: 5/83 ## Sparsity : 94% ## Maximal term length: 10 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ## ## Docs ## Terms 11 12 13 14 15 16 17 18 ## after 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## ago 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## ahmedabad 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## aicwa 0.0000000 1.053655 0.0000000 0.0000000 0 0 0 0 ## algorithms 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## also 0.6652410 0.000000 0.0000000 0.0000000 0 0 0 0 ## always 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## and 0.5185001 0.000000 0.2592501 0.2592501 0 0 0 0 ## animals 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## another 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## any 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ``` *Example*: ``` library(tm) textarray = c("Free software comes with ABSOLUTELY NO certain WARRANTY","You are welcome to redistribute free software under certain conditions","Natural language support for software in an English locale","A collaborative project with many contributors") textcorpus = Corpus(VectorSource(textarray)) m = TermDocumentMatrix(textcorpus) print(as.matrix(m)) ``` ``` ## Docs ## Terms 1 2 3 4 ## absolutely 1 0 0 0 ## are 0 1 0 0 ## certain 1 1 0 0 ## collaborative 0 0 0 1 ## comes 1 0 0 0 ## conditions 0 1 0 0 ## contributors 0 0 0 1 ## english 0 0 1 0 ## for 0 0 1 0 ## free 1 1 0 0 ## language 0 0 1 0 ## locale 0 0 1 0 ## many 0 0 0 1 ## natural 0 0 1 0 ## project 0 0 0 1 ## redistribute 0 1 0 0 ## software 1 1 1 0 ## support 0 0 1 0 ## under 0 1 0 0 ## warranty 1 0 0 0 ## welcome 0 1 0 0 ## with 1 0 0 1 ## you 0 1 0 0 ``` ``` print(as.matrix(weightTfIdf(m))) ``` ``` ## Docs ## Terms 1 2 3 4 ## absolutely 0.28571429 0.00000000 0.00000000 0.0 ## are 0.00000000 0.22222222 0.00000000 0.0 ## certain 0.14285714 0.11111111 0.00000000 0.0 ## collaborative 0.00000000 0.00000000 0.00000000 0.4 ## comes 0.28571429 0.00000000 0.00000000 0.0 ## conditions 0.00000000 0.22222222 0.00000000 0.0 ## contributors 0.00000000 0.00000000 0.00000000 0.4 ## english 0.00000000 0.00000000 0.28571429 0.0 ## for 0.00000000 0.00000000 0.28571429 0.0 ## free 0.14285714 0.11111111 0.00000000 0.0 ## language 0.00000000 0.00000000 0.28571429 0.0 ## locale 0.00000000 0.00000000 0.28571429 0.0 ## many 0.00000000 0.00000000 0.00000000 0.4 ## natural 0.00000000 0.00000000 0.28571429 0.0 ## project 0.00000000 0.00000000 0.00000000 0.4 ## redistribute 0.00000000 0.22222222 0.00000000 0.0 ## software 0.05929107 0.04611528 0.05929107 0.0 ## support 0.00000000 0.00000000 0.28571429 0.0 ## under 0.00000000 0.22222222 0.00000000 0.0 ## warranty 0.28571429 0.00000000 0.00000000 0.0 ## welcome 0.00000000 0.22222222 0.00000000 0.0 ## with 0.14285714 0.00000000 0.00000000 0.2 ## you 0.00000000 0.22222222 0.00000000 0.0 ``` 7\.15 Cosine Similarity in the Text Domain ------------------------------------------ In this segment we will learn some popular functions on text that are used in 
practice. One of the first things we like to do is to find similar text or like sentences (think of web search as one application). Since documents are vectors in the TDM, we may want to find the closest vectors or compute the distance between vectors.

\\\[ cos(\\theta) \= \\frac{A \\cdot B}{\|\|A\|\| \\times \|\|B\|\|} \\]

where \\(\|\|A\|\| \= \\sqrt{A \\cdot A}\\) is the norm of \\(A\\), i.e., the square root of the dot product of \\(A\\) with itself. This gives the cosine of the angle between the two vectors, which is zero for orthogonal vectors and 1 for identical vectors.

```
#COSINE DISTANCE OR SIMILARITY
A = as.matrix(c(0,3,4,1,7,0,1))
B = as.matrix(c(0,4,3,0,6,1,1))
cos = t(A) %*% B / (sqrt(t(A)%*%A) * sqrt(t(B)%*%B))
print(cos)
```

```
##           [,1]
## [1,] 0.9682728
```

```
library(lsa)
```

```
## Loading required package: SnowballC
```

```
#THE COSINE FUNCTION IN LSA ONLY TAKES ARRAYS
A = c(0,3,4,1,7,0,1)
B = c(0,4,3,0,6,1,1)
print(cosine(A,B))
```

```
##           [,1]
## [1,] 0.9682728
```

7\.16 Using the ANLP package for bigrams and trigrams
-----------------------------------------------------

This package has a few additional functions that make the preceding ideas more streamlined to implement. First let’s read in the usual text.

```
library(ANLP)
download.file("http://srdas.github.io/bio-candid.html",destfile = "text")
text = readTextFile("text","UTF-8")
ctext = cleanTextData(text) #Creates a text corpus
```

The last function removes non\-English characters, numbers, white spaces, brackets, and punctuation. It also handles cases like abbreviations and contractions, and it converts the entire text to lower case.

We now make TDMs for unigrams, bigrams, and trigrams, and then combine them all into one list for word prediction.

```
g1 = generateTDM(ctext,1)
g2 = generateTDM(ctext,2)
g3 = generateTDM(ctext,3)
gmodel = list(g1,g2,g3)
```

Next, use the **back\-off** algorithm to predict the next word in a sequence.

```
print(predict_Backoff("you never",gmodel))
print(predict_Backoff("life is",gmodel))
print(predict_Backoff("been known",gmodel))
print(predict_Backoff("needs to",gmodel))
print(predict_Backoff("worked at",gmodel))
print(predict_Backoff("being an",gmodel))
print(predict_Backoff("publish",gmodel))
```

7\.17 Wordclouds
----------------

Wordclouds are interesting ways in which to represent text. They give an instant visual summary. The **wordcloud** package in R may be used to create your own wordclouds.

```
#MAKE A WORDCLOUD
library(wordcloud)
```

```
## Loading required package: RColorBrewer
```

```
tdm2 = as.matrix(tdm)
wordcount = sort(rowSums(tdm2),decreasing=TRUE)
tdm_names = names(wordcount)
wordcloud(tdm_names,wordcount)
```

```
## Warning in wordcloud(tdm_names, wordcount):
## backgroundhttpalgoscuedusanjivdasgraphicsback2gif could not be fit on page.
## It will not be plotted.
```

```
#REMOVE STOPWORDS, NUMBERS, STEMMING
ctext1 = tm_map(ctext,removeWords,stopwords("english"))
ctext1 = tm_map(ctext1, removeNumbers)
tdm = TermDocumentMatrix(ctext1,control=list(minWordLength=1))
tdm2 = as.matrix(tdm)
wordcount = sort(rowSums(tdm2),decreasing=TRUE)
tdm_names = names(wordcount)
wordcloud(tdm_names,wordcount)
```

7\.18 Manipulating Text
-----------------------

### 7\.18\.1 Stemming

**Stemming** is the procedure by which a word is reduced to its root or stem. This is done so as to treat words with the same stem as one word, rather than as separate words. We do not want “eaten” and “eating” to be treated as different words, for example.
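Individual words can also be stemmed directly. The following is a minimal sketch using **wordStem** from the **SnowballC** package (loaded earlier as a dependency of **lsa**), which I believe is the stemmer that **stemDocument** calls under the hood:

```
#Stem a few individual words with the Porter stemmer
library(SnowballC)
print(wordStem(c("models","modeling","modeled","prices","pricing")))
#The variants of each word should collapse to a common stem
```

The chunk below applies the same idea to every document in the corpus.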
```
#STEMMING
ctext2 = tm_map(ctext,removeWords,stopwords("english"))
ctext2 = tm_map(ctext2, stemDocument)
print(lapply(ctext2, as.character)[10:15])
```

```
## $`10`
## [1] "BCom Account Econom Univers Bombay Sydenham"
## 
## $`11`
## [1] "Colleg also qualifi Cost Work Accountant"
## 
## $`12`
## [1] "AICWA He senior editor The Journal Investment"
## 
## $`13`
## [1] "Manag coeditor The Journal Deriv The Journal"
## 
## $`14`
## [1] "Financi Servic Research Associat Editor academ"
## 
## $`15`
## [1] "journal Prior academ work deriv"
```

### 7\.18\.2 Regular Expressions

Regular expressions are a syntax for matching and modifying strings in an efficient manner. They are complicated but extremely effective. Here we will illustrate with a few examples, but you are encouraged to explore more on your own, as the variations are endless. What you need to do will depend on the application at hand, and with some experience you will become better at using regular expressions; the initial use will, however, be somewhat confusing. We start with a simple example of a text array where we wish to replace the string "data" with a blank, i.e., we eliminate this string from the text we have.

```
library(tm)
#Create a text array
text = c("Doc1 is datavision","Doc2 is datatable","Doc3 is data","Doc4 is nodata","Doc5 is simpler")
print(text)
```

```
## [1] "Doc1 is datavision" "Doc2 is datatable" "Doc3 is data"
## [4] "Doc4 is nodata" "Doc5 is simpler"
```

```
#Remove the string "data" wherever it occurs, in all docs
print(gsub("data","",text))
```

```
## [1] "Doc1 is vision" "Doc2 is table" "Doc3 is " "Doc4 is no"
## [5] "Doc5 is simpler"
```

```
#Remove "data" and everything that follows it
print(gsub("*data.*","",text))
```

```
## [1] "Doc1 is " "Doc2 is " "Doc3 is " "Doc4 is no"
## [5] "Doc5 is simpler"
```

```
#Remove "data" together with the single character that precedes it
print(gsub("*.data*","",text))
```

```
## [1] "Doc1 isvision" "Doc2 istable" "Doc3 is" "Doc4 is n"
## [5] "Doc5 is simpler"
```

```
#Remove "data", the character before it, and everything that follows it
print(gsub("*.data.*","",text))
```

```
## [1] "Doc1 is" "Doc2 is" "Doc3 is" "Doc4 is n"
## [5] "Doc5 is simpler"
```

### 7\.18\.3 Complex Regular Expressions using *grep*

We now explore some more complex regular expressions. One common case is searching for special types of strings, such as telephone numbers. Suppose we have a text array that may contain telephone numbers in different formats; we can use a single **grep** command to extract these numbers. Here is some code to illustrate this.

```
#Create an array with some strings which may also contain telephone numbers as strings.
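#The pattern used below matches any one of three telephone formats:
#  [[:digit:]]{3}-[[:digit:]]{4}   e.g., 234-5678
#  [[:digit:]]{3} [[:digit:]]{4}   e.g., 234 5678
#  a ten-digit number with a leading 1-9, e.g., 1234567890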
x = c("234-5678","234 5678","2345678","1234567890","0123456789","abc 234-5678","234 5678 def","xx 2345678","abc1234567890def")

#Now use grep to find which elements of the array contain telephone numbers
idx = grep("[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]",x)
print(idx)
```

```
## [1] 1 2 4 6 7 9
```

```
print(x[idx])
```

```
## [1] "234-5678" "234 5678" "1234567890"
## [4] "abc 234-5678" "234 5678 def" "abc1234567890def"
```

```
#We can shorten this as follows
idx = grep("[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9]{9}",x)
print(idx)
```

```
## [1] 1 2 4 6 7 9
```

```
print(x[idx])
```

```
## [1] "234-5678" "234 5678" "1234567890"
## [4] "abc 234-5678" "234 5678 def" "abc1234567890def"
```

```
#What if we want to extract only the phone number and drop the rest of the text?
pattern = "[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9]{9}"
print(regmatches(x, gregexpr(pattern,x)))
```

```
## [[1]]
## [1] "234-5678"
## 
## [[2]]
## [1] "234 5678"
## 
## [[3]]
## character(0)
## 
## [[4]]
## [1] "1234567890"
## 
## [[5]]
## character(0)
## 
## [[6]]
## [1] "234-5678"
## 
## [[7]]
## [1] "234 5678"
## 
## [[8]]
## character(0)
## 
## [[9]]
## [1] "1234567890"
```

```
#Or use the stringr package, which is a lot better
library(stringr)
str_extract(x,pattern)
```

```
## [1] "234-5678"   "234 5678"   NA           "1234567890" NA
## [6] "234-5678"   "234 5678"   NA           "1234567890"
```

### 7\.18\.4 Using *grep* for emails

Now we use grep to extract emails by looking for the "@" sign in the text string. We would proceed as in the following example.

```
x = c("sanjiv das","[email protected]","SCU","[email protected]")
print(grep("\\@",x))
```

```
## [1] 2 4
```

```
print(x[grep("\\@",x)])
```

```
## [1] "[email protected]" "[email protected]"
```

You get the idea. Using the functions **gsub**, **grep**, **regmatches**, and **gregexpr**, you can manage most of the fancy string handling that is needed.

7\.19 Web Extraction using the *rvest* package
----------------------------------------------

The **rvest** package, written by Hadley Wickham, is a powerful tool for extracting text from web pages. The package provides wrappers around the ‘xml2’ and ‘httr’ packages to make it easy to download, and then manipulate, HTML and XML. The package is best illustrated with some simple examples.

### 7\.19\.1 Program to read a web page using the selector gadget

The selector gadget is a useful tool to be used in conjunction with the *rvest* package. It allows you to find the HTML tag in a web page that you need to pass to the program in order to parse the page element you are interested in. Download from: <http://selectorgadget.com/>

Here is some code to read in the slashdot web page and gather the stories currently on their headlines.
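Before that, here is a tiny self\-contained sketch of the same **read\_html** → **html\_nodes** → **html\_text** pipeline on a hand\-written HTML snippet (the snippet and its *story* class are invented for illustration), which can be run without a network connection:

```
#Parse a small hand-written HTML string and pull out nodes by CSS class
library(rvest)
snippet = "<html><body><p class='story'>Story one</p><p class='story'>Story two</p></body></html>"
page = read_html(snippet)
print(page %>% html_nodes(".story") %>% html_text())
```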
``` library(rvest) ``` ``` ## Loading required package: xml2 ``` ``` ## ## Attaching package: 'rvest' ``` ``` ## The following object is masked from 'package:XML': ## ## xml ``` ``` url = "https://slashdot.org/" doc.html = read_html(url) text = doc.html %>% html_nodes(".story") %>% html_text() text = gsub("[\t\n]","",text) #text = paste(text, collapse=" ") print(text[1:20]) ``` ``` ## [1] " Samsung's Calls For Industry To Embrace Its Battery Check Process as a New Standard Have Been Ignored (cnet.com) " ## [2] " Blinking Cursor Devours CPU Cycles in Visual Studio Code Editor (theregister.co.uk) 39" ## [3] " Alcohol Is Good for Your Heart -- Most of the Time (time.com) 58" ## [4] " App That Lets People Make Personalized Emojis Is the Fastest Growing App In Past Two Years (axios.com) 22" ## [5] " Americans' Shift To The Suburbs Sped Up Last Year (fivethirtyeight.com) 113" ## [6] " Some Of Hacker Group's Claims Of Having Access To 250M iCloud Accounts Aren't False (zdnet.com) 33" ## [7] " Amazon Wins $1.5 Billion Tax Dispute Over IRS (reuters.com) 63" ## [8] " Hollywood Producer Blames Rotten Tomatoes For Convincing People Not To See His Movie (vanityfair.com) 283" ## [9] " Sea Ice Extent Sinks To Record Lows At Both Poles (sciencedaily.com) 130" ## [10] " Molecule Kills Elderly Cells, Reduces Signs of Aging In Mice (sciencemag.org) 94" ## [11] " Red-Light Camera Grace Period Goes From 0.1 To 0.3 Seconds, Chicago To Lose $17 Million (arstechnica.com) 201" ## [12] " US Ordered 'Mandatory Social Media Check' For Visa Applicants Who Visited ISIS Territory (theverge.com) 177" ## [13] " Google Reducing Trust In Symantec Certificates Following Numerous Slip-Ups (bleepingcomputer.com) 63" ## [14] " Twitter Considers Premium Version After 11 Years As a Free Service (reuters.com) 81" ## [15] " Apple Explores Using An iPhone, iPad To Power a Laptop (appleinsider.com) 63" ## [16] NA ## [17] NA ## [18] NA ## [19] NA ## [20] NA ``` ### 7\.19\.2 Program to read a web table using the selector gadget Sometimes we need to read a table embedded in a web page and this is also a simple exercise, which is undertaken also with **rvest**. ``` library(rvest) url = "http://finance.yahoo.com/q?uhb=uhb2&fr=uh3_finance_vert_gs&type=2button&s=IBM" doc.html = read_html(url) table = doc.html %>% html_nodes("table") %>% html_table() print(table) ``` ``` ## [[1]] ## X1 X2 ## 1 NA Search ## ## [[2]] ## X1 X2 ## 1 Previous Close 174.82 ## 2 Open 175.12 ## 3 Bid 174.80 x 300 ## 4 Ask 174.99 x 300 ## 5 Day's Range 173.94 - 175.50 ## 6 52 Week Range 142.50 - 182.79 ## 7 Volume 1,491,738 ## 8 Avg. Volume 3,608,856 ## ## [[3]] ## X1 X2 ## 1 Market Cap 164.3B ## 2 Beta 0.87 ## 3 PE Ratio (TTM) 14.07 ## 4 EPS (TTM) N/A ## 5 Earnings Date N/A ## 6 Dividend & Yield 5.60 (3.20%) ## 7 Ex-Dividend Date N/A ## 8 1y Target Est N/A ``` Note that this code extracted all the web tables in the Yahoo! Finance page and returned each one as a list item. ### 7\.19\.3 Program to read a web table into a data frame Here we take note of some Russian language sites where we want to extract forex quotes and store them in a data frame. 
``` library(rvest) url1 <- "http://finance.i.ua/market/kiev/?type=1" #Buy USD url2 <- "http://finance.i.ua/market/kiev/?type=2" #Sell USD doc1.html = read_html(url1) table1 = doc1.html %>% html_nodes("table") %>% html_table() result1 = table1[[1]] print(head(result1)) ``` ``` ## X1 X2 X3 X4 ## 1 Время Курс Сумма Телефон ## 2 13:03 0.462 250000 \u20bd +38 063 \nПоказать ## 3 13:07 27.0701 72000 $ +38 063 \nПоказать ## 4 19:05 27.11 2000 $ +38 068 \nПоказать ## 5 18:48 27.08 200000 $ +38 063 \nПоказать ## 6 18:44 27.08 100000 $ +38 096 \nПоказать ## X5 ## 1 Район ## 2 м Дружбы народов ## 3 Обмен Валют Ленинградская пл ## 4 Центр. Могу подъехать. ## 5 Леси Украинки. Дружба Народов. Лыбидская ## 6 Ленинградская Пл. Левобережка. Печерск ## X6 ## 1 Комментарий ## 2 детектор, обмен валют ## 3 От 10т дол. Крупная гривна. От 30т нду. Звоните ## 4 Можно частями ## 5 П е ч е р с к , Подол. Лыбидская , от 10т. Обмен на Е В Р О 1. 0 82 ## 6 П е ч е р с к , Подол. Лыбидская , от 10т. Обмен на Е В Р О 1. 082 ``` ``` doc2.html = read_html(url2) table2 = doc2.html %>% html_nodes("table") %>% html_table() result2 = table2[[1]] print(head(result2)) ``` ``` ## X1 X2 X3 X4 ## 1 Время Курс Сумма Телефон ## 2 17:10 29.2299 62700 € +38 093 \nПоказать ## 3 19:04 27.14 5000 $ +38 098 \nПоказать ## 4 13:08 27.1099 72000 $ +38 063 \nПоказать ## 5 15:03 27.14 5200 $ +38 095 \nПоказать ## 6 17:05 27.2 40000 $ +38 093 \nПоказать ## X5 ## 1 Район ## 2 Обменный пункт Ленинградская пл и ## 3 Центр. Подъеду ## 4 Обмен Валют Ленинградская пл ## 5 Печерск ## 6 Подол ## X6 ## 1 Комментарий ## 2 Или за дол 1. 08 От 10т евро. 50 100 и 500 купюры. Звоните. Бронируйте. Еду от 10т. Артем ## 3 Можно Частями от 500 дол ## 4 От 10т дол. Крупная гривна. От 30т нду. Звоните ## 5 м Дружбы народов, от 500, детектор, обмен валют ## 6 Обмен валют, с 9-00 до 19-00 ``` 7\.20 Using the *rselenium* package ----------------------------------- ``` #Clicking Show More button Google Scholar page library(RCurl) library(RSelenium) library(rvest) library(stringr) library(igraph) checkForServer() startServer() remDr <- remoteDriver(remoteServerAddr = "localhost" , port = 4444 , browserName = "firefox" ) remDr$open() remDr$getStatus() ``` ### 7\.20\.1 Application to Google Scholar data ``` remDr$navigate("http://scholar.google.com") webElem <- remDr$findElement(using = 'css selector', "input#gs_hp_tsi") webElem$sendKeysToElement(list("Sanjiv Das", "\uE007")) link <- webElem$getCurrentUrl() page <- read_html(as.character(link)) citations <- page %>% html_nodes (".gs_rt2") matched <- str_match_all(citations, "<a href=\"(.*?)\"") scholarurl <- paste("https://scholar.google.com", matched[[1]][,2], sep="") page <- read_html(as.character(scholarurl)) remDr$navigate(as.character(scholarurl)) authorlist <- page %>% html_nodes(css=".gs_gray") %>% html_text() # Selecting fields after CSS selector .gs_gray authorlist <- as.data.frame(authorlist) odd_index <- seq(1,nrow(authorlist),2) #Sorting data by even/odd indexes to form a table. even_index <- seq (2,nrow(authorlist),2) authornames <- data.frame(x=authorlist[odd_index,1]) papernames <- data.frame(x=authorlist[even_index,1]) pubmatrix <- cbind(authorlist,papernames) # Building the view all link on scholar page. 
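#(The next lines pull the 12-character Scholar user id out of the matched URL,
# build the "view all co-authors" URL, scrape the co-author names from the
# .gsc_1usr_name nodes, and plot a simple star network around the author using igraph.)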
a=str_split(matched, "user=") x <- substring(a[[1]][2], 1,12) y<- paste("https://scholar.google.com/citations?view_op=list_colleagues&hl=en&user=", x, sep="") remDr$navigate(y) #Reading view all page to get author list: page <- read_html(as.character(y)) z <- page %>% html_nodes (".gsc_1usr_name") x <-lapply(z,str_extract,">[A-Z]+[a-z]+ .+<") x<-lapply(x,str_replace, ">","") x<-lapply(x,str_replace, "<","") # Graph function: bsk <- as.matrix(cbind("SR Das", unlist(x))) bsk.network<-graph.data.frame(bsk, directed=F) plot(bsk.network) ``` 7\.21 Web APIs -------------- We now look to getting text from the web and using various APIs from different services like Twitter, Facebook, etc. You will need to open free developer accounts to do this on each site. You will also need the special R packages for each different source. ### 7\.21\.1 Twitter First create a Twitter developer account to get the required credentials for accessing the API. See: <https://dev.twitter.com/> The Twitter API needs a lot of handshaking… ``` ##TWITTER EXTRACTOR library(twitteR) library(ROAuth) library(RCurl) download.file(url="https://curl.haxx.se/ca/cacert.pem",destfile="cacert.pem") #certificate file based on Privacy Enhanced Mail (PEM) protocol: https://en.wikipedia.org/wiki/Privacy-enhanced_Electronic_Mail cKey = "oV89mZ970KM9vO8a5mktV7Aqw" #These are my keys and won't work for you cSecret = "cNriTUShd69AJaVPpZHCMDZI5U7nnXVcd72vmK4psqDUQhIEEY" #use your own secret reqURL = "https://api.twitter.com/oauth/request_token" accURL = "https://api.twitter.com/oauth/access_token" authURL = "https://api.twitter.com/oauth/authorize" #NOW SUBMIT YOUR CODES AND ASK FOR CREDENTIALS cred = OAuthFactory$new(consumerKey=cKey, consumerSecret=cSecret,requestURL=reqURL, accessURL=accURL,authURL=authURL) cred$handshake(cainfo="cacert.pem") #Asks for token #Test and save credentials #registerTwitterOAuth(cred) #save(list="cred",file="twitteR_credentials") #FIRST PHASE DONE ``` ### 7\.21\.2 Accessing Twitter ``` ##USE httr, SECOND PHASE library(httr) #options(httr_oauth_cache=T) accToken = "18666236-DmDE1wwbpvPbDcw9kwt9yThGeyYhjfpVVywrHuhOQ" accTokenSecret = "cttbpxpTtqJn7wrCP36I59omNI5GQHXXgV41sKwUgc" setup_twitter_oauth(cKey,cSecret,accToken,accTokenSecret) #At prompt type 1 ``` This more direct code chunk does handshaking better and faster than the preceding. ``` library(stringr) library(twitteR) library(ROAuth) library(RCurl) ``` ``` ## Loading required package: bitops ``` ``` cKey = "oV89mZ970KM9vO8a5mktV7Aqw" cSecret = "cNriTUShd69AJaVPpZHCMDZI5U7nnXVcd72vmK4psqDUQhIEEY" accToken = "18666236-DmDE1wwbpvPbDcw9kwt9yThGeyYhjfpVVywrHuhOQ" accTokenSecret = "cttbpxpTtqJn7wrCP36I59omNI5GQHXXgV41sKwUgc" setup_twitter_oauth(consumer_key = cKey, consumer_secret = cSecret, access_token = accToken, access_secret = accTokenSecret) ``` ``` ## [1] "Using direct authentication" ``` This completes the handshaking with Twitter. Now we can access tweets using the functions in the **twitteR** package. ### 7\.21\.3 Using the *twitteR* package ``` #EXAMPLE 1 s = searchTwitter("#GOOG") #This is a list s ``` ``` ## [[1]] ## [1] "_KevinRosales_: @Origengg @UnicornsOfLove #GoOg siempre apoyándolos hasta la muerte" ## ## [[2]] ## [1] "uncle_otc: @Jasik @crtaylor81 seen? MyDx, Inc. 
(OTC:$MYDX) Revolutionary Medical Software That's Poised To Earn Billions, https://t.co/KbgNIEoAlB #GOOG" ## ## [[3]] ## [1] "prabhumap: \"O-MG, the Developer Preview of Android O is here!\" https://t.co/cShgn63DrJ #goog #feedly" ## ## [[4]] ## [1] "top10USstocks: Alphabet Inc (NASDAQ:GOOG) loses -1.45% on Thursday-Top10 Worst Performer in NASDAQ100 #NASDAQ #GOOG https://t.co/FPbW5Ablez" ## ## [[5]] ## [1] "wlstcom: Alphabet - 25% Upside Potential #GOOGLE #GOOG #GOOGL #StockMarketSherpa #LongIdeas $GOOG https://t.co/IIGxCsBvab https://t.co/raegkUwI0j" ## ## [[6]] ## [1] "wlstcom: Scenarios For The Healthcare Bill - Cramer's Mad Money (3/23/17) #JPM #C #MLM #USCR #GOOG #GOOGL #AAPL #AMGN #CSCO https://t.co/B3GscATmg3" ## ## [[7]] ## [1] "seajourney2004: Lake Tekapo, New Zealand from Brent (@brentpurcell.nz) on Instagram: “Tekapo Blue\" #LakeTekapo #goog https://t.co/agzGy6ortN" ## ## [[8]] ## [1] "ScottWestBand: #Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## ## [[9]] ## [1] "savvyyabby: Thought leadership is 1 part Common Sense and 99 parts Leadership. I have no idea what Google is smoking but I am getting SHORT #GOOG" ## ## [[10]] ## [1] "Addiply: @marcwebber @thetimes Rupert, Dacre and Co all want @DCMS @DamianCollins et al to clip #GOOG wings. Cos they ain't getting their slice..." ## ## [[11]] ## [1] "onlinemedialist: RT @wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.…" ## ## [[12]] ## [1] "wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.co/0rnSbVUvGX" ## ## [[13]] ## [1] "zeyumw: Google Agrees to YouTube Metrics Audit to Ease Advertisers’ Concerns https://t.co/OsSjVDY24X #goog #media #googl" ## ## [[14]] ## [1] "wlstcom: Apple Acquires DeskConnect For Workflow Application #GOOG #AAPL #GOOGL #DonovanJones $AAPL https://t.co/YIGqHyYwrm https://t.co/UI2ejtP0Jo" ## ## [[15]] ## [1] "wlstcom: Apple Acquires DeskConnect For Workflow Application #GOOGLE #GOOG #AAPL #DonovanJones $GOOG https://t.co/Yd01TL5ZZb https://t.co/Vo6VEeSxw7" ## ## [[16]] ## [1] "send2katz: Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud https://t.co/W7JLhPb1CG #GCE #Goog" ## ## [[17]] ## [1] "MarkYu_DPT: Ah, really? First @Google Medical Diagnostics Center soon?\n#GOOGL #GOOG\nhttps://t.co/PhmPsB0xgf" ## ## [[18]] ## [1] "AskFriedrich: Alphabet — GOOGL\nnot meeting Friedrich criteria, &amp; EXTREMELY expensive\n\n#alphabet #google $google $GOOGL #GOOG… https://t.co/N1x8LUUz5T" ## ## [[19]] ## [1] "HotHardware: #GoogleMaps To Offer Optional Real-Time User #LocationTracking Allowing You To Share Your ETA… https://t.co/OTF73K6a3w" ## ## [[20]] ## [1] "ConsumerFeed: Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/7c3Hart1rT $GOOG #GOOG" ## ## [[21]] ## [1] "RatingsNetwork: Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. 
https://t.co/LUCXvQDHX4 $GOOG #GOOG" ## ## [[22]] ## [1] "rContentRich: (#Google #Resurrected a #Dead #Product on #Wednesday and no one #Noticed (#GOOG))\n \nhttps://t.co/7YFLbMDyp7 https://t.co/CIfrOPmmKh" ## ## [[23]] ## [1] "ScottWestBand: #Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## ## [[24]] ## [1] "APPLE_GOOGLE_TW: Virgin Tonic : Merci Google Maps ! On va enfin pouvoir retrouver notre voiture sur le parking - Virgin Radio https://t.co/l5IpUUyIGz #Goog…" ## ## [[25]] ## [1] "carlosmoisescet: RT @JUANJmauricio: #goog nigth #fuck hard #ass #cock # fuck mounth https://t.co/2dpIdWtlxX" ``` ``` #CONVERT TWITTER LIST TO TEXT ARRAY (see documentation in twitteR package) twts = twListToDF(s) #This gives a dataframe with the tweets names(twts) ``` ``` ## [1] "text" "favorited" "favoriteCount" "replyToSN" ## [5] "created" "truncated" "replyToSID" "id" ## [9] "replyToUID" "statusSource" "screenName" "retweetCount" ## [13] "isRetweet" "retweeted" "longitude" "latitude" ``` ``` twts_array = twts$text print(twts$retweetCount) ``` ``` ## [1] 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 ## [24] 0 47 ``` ``` twts_array ``` ``` ## [1] "@Origengg @UnicornsOfLove #GoOg siempre apoyándolos hasta la muerte" ## [2] "@Jasik @crtaylor81 seen? MyDx, Inc. (OTC:$MYDX) Revolutionary Medical Software That's Poised To Earn Billions, https://t.co/KbgNIEoAlB #GOOG" ## [3] "\"O-MG, the Developer Preview of Android O is here!\" https://t.co/cShgn63DrJ #goog #feedly" ## [4] "Alphabet Inc (NASDAQ:GOOG) loses -1.45% on Thursday-Top10 Worst Performer in NASDAQ100 #NASDAQ #GOOG https://t.co/FPbW5Ablez" ## [5] "Alphabet - 25% Upside Potential #GOOGLE #GOOG #GOOGL #StockMarketSherpa #LongIdeas $GOOG https://t.co/IIGxCsBvab https://t.co/raegkUwI0j" ## [6] "Scenarios For The Healthcare Bill - Cramer's Mad Money (3/23/17) #JPM #C #MLM #USCR #GOOG #GOOGL #AAPL #AMGN #CSCO https://t.co/B3GscATmg3" ## [7] "Lake Tekapo, New Zealand from Brent (@brentpurcell.nz) on Instagram: “Tekapo Blue\" #LakeTekapo #goog https://t.co/agzGy6ortN" ## [8] "#Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## [9] "Thought leadership is 1 part Common Sense and 99 parts Leadership. I have no idea what Google is smoking but I am getting SHORT #GOOG" ## [10] "@marcwebber @thetimes Rupert, Dacre and Co all want @DCMS @DamianCollins et al to clip #GOOG wings. Cos they ain't getting their slice..." ## [11] "RT @wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.…" ## [12] "Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.co/0rnSbVUvGX" ## [13] "Google Agrees to YouTube Metrics Audit to Ease Advertisers’ Concerns https://t.co/OsSjVDY24X #goog #media #googl" ## [14] "Apple Acquires DeskConnect For Workflow Application #GOOG #AAPL #GOOGL #DonovanJones $AAPL https://t.co/YIGqHyYwrm https://t.co/UI2ejtP0Jo" ## [15] "Apple Acquires DeskConnect For Workflow Application #GOOGLE #GOOG #AAPL #DonovanJones $GOOG https://t.co/Yd01TL5ZZb https://t.co/Vo6VEeSxw7" ## [16] "Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud https://t.co/W7JLhPb1CG #GCE #Goog" ## [17] "Ah, really? 
First @Google Medical Diagnostics Center soon?\n#GOOGL #GOOG\nhttps://t.co/PhmPsB0xgf" ## [18] "Alphabet — GOOGL\nnot meeting Friedrich criteria, &amp; EXTREMELY expensive\n\n#alphabet #google $google $GOOGL #GOOG… https://t.co/N1x8LUUz5T" ## [19] "#GoogleMaps To Offer Optional Real-Time User #LocationTracking Allowing You To Share Your ETA… https://t.co/OTF73K6a3w" ## [20] "Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/7c3Hart1rT $GOOG #GOOG" ## [21] "Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/LUCXvQDHX4 $GOOG #GOOG" ## [22] "(#Google #Resurrected a #Dead #Product on #Wednesday and no one #Noticed (#GOOG))\n \nhttps://t.co/7YFLbMDyp7 https://t.co/CIfrOPmmKh" ## [23] "#Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## [24] "Virgin Tonic : Merci Google Maps ! On va enfin pouvoir retrouver notre voiture sur le parking - Virgin Radio https://t.co/l5IpUUyIGz #Goog…" ## [25] "RT @JUANJmauricio: #goog nigth #fuck hard #ass #cock # fuck mounth https://t.co/2dpIdWtlxX" ``` ``` #EXAMPLE 2 s = getUser("srdas") fr = s$getFriends() print(length(fr)) ``` ``` ## [1] 154 ``` ``` print(fr[1:10]) ``` ``` ## $`60816617` ## [1] "cedarwright" ## ## $`2511461743` ## [1] "rightrelevance" ## ## $`3097250541` ## [1] "MichiganCFLP" ## ## $`894057794` ## [1] "BigDataGal" ## ## $`365145609` ## [1] "mathbabedotorg" ## ## $`19251838` ## [1] "ClimbingMag" ## ## $`235261861` ## [1] "rstudio" ## ## $`5849202` ## [1] "jcheng" ## ## $`46486816` ## [1] "ramnath_vaidya" ## ## $`39010299` ## [1] "xieyihui" ``` ``` s_tweets = userTimeline("srdas",n=20) print(s_tweets) ``` ``` ## [[1]] ## [1] "srdas: Bestselling author of 'Moneyball' says laziness is the key to success. @MindaZetlin https://t.co/OTjzI3bHRm via @Inc" ## ## [[2]] ## [1] "srdas: Difference between Data Science, Machine Learning and Data Mining on Data Science Central: https://t.co/hreJ3QsmFG" ## ## [[3]] ## [1] "srdas: High-frequency traders fall on hard times https://t.co/626yKMshvY via @WSJ" ## ## [[4]] ## [1] "srdas: Shapes of Probability Distributions https://t.co/3hKE8FR9rx" ## ## [[5]] ## [1] "srdas: The one thing you need to master data science https://t.co/hmAwGKUAZg via @Rbloggers" ## ## [[6]] ## [1] "srdas: The Chess Problem that a Computer Cannot Solve: https://t.co/1qwCFPnMFz" ## ## [[7]] ## [1] "srdas: The dystopian future of price discrimination https://t.co/w7BuGJjjEJ via @BV" ## ## [[8]] ## [1] "srdas: How artificial intelligence is transforming the workplace https://t.co/V0TrDlm3D2 via @WSJ" ## ## [[9]] ## [1] "srdas: John Maeda: If you want to survive in design, you better learn to code https://t.co/EGyM5DvfyZ via @WIRED" ## ## [[10]] ## [1] "srdas: On mentorship and finding your way around https://t.co/wojEs6TTsD via @techcrunch" ## ## [[11]] ## [1] "srdas: Information Avoidance: How People Select Their Own Reality https://t.co/ytogtYqq4P" ## ## [[12]] ## [1] "srdas: Paul Ryan says he’s been “dreaming” of Medicaid cuts since he was “drinking out of kegs” https://t.co/5rZmZTtTyZ via @voxdotcom" ## ## [[13]] ## [1] "srdas: Don't Ask How to Define Data Science: https://t.co/WGVO0yB8Hy" ## ## [[14]] ## [1] "srdas: Kurzweil Claims That the Singularity Will Happen by 2045 https://t.co/Inl60a2KLv via @Futurism" ## ## [[15]] ## [1] "srdas: Did Uber steal the driverless future from Google? 
https://t.co/sDrtfHob34 via @BW" ## ## [[16]] ## [1] "srdas: Think Like a Data Scientist: \nhttps://t.co/aNFtL1tqDs" ## ## [[17]] ## [1] "srdas: Why Employees At Apple And Google Are More Productive https://t.co/E3WESsKkFO" ## ## [[18]] ## [1] "srdas: Cutting down the clutter in online conversations https://t.co/41ZH5iR9Hy" ## ## [[19]] ## [1] "srdas: I invented the web. Here are three things we need to change to save it | Tim Berners-Lee https://t.co/ORQaXiBXWC" ## ## [[20]] ## [1] "srdas: Let’s calculate pi on a Raspberry Pi to celebrate Pi Day https://t.co/D3gW0l2ZHt via @WIRED" ``` ``` getCurRateLimitInfo(c("users")) ``` ``` ## resource limit remaining reset ## 1 /users/report_spam 15 15 2017-03-24 18:55:44 ## 2 /users/show/:id 900 899 2017-03-24 18:55:42 ## 3 /users/search 900 900 2017-03-24 18:55:44 ## 4 /users/suggestions/:slug 15 15 2017-03-24 18:55:44 ## 5 /users/derived_info 15 15 2017-03-24 18:55:44 ## 6 /users/profile_banner 180 180 2017-03-24 18:55:44 ## 7 /users/suggestions/:slug/members 15 15 2017-03-24 18:55:44 ## 8 /users/lookup 900 898 2017-03-24 18:55:43 ## 9 /users/suggestions 15 15 2017-03-24 18:55:44 ``` 7\.22 Quick Process ------------------- ``` library(ngram) ``` ``` ## Warning: package 'ngram' was built under R version 3.3.2 ``` ``` library(NLP) library(syuzhet) twts = twListToDF(s_tweets) x = paste(twts$text,collapse=" ") y = get_tokens(x) sen = get_sentiment(y) print(sen) ``` ``` ## [1] 0.80 0.00 0.00 0.00 0.00 -1.00 0.00 0.00 0.00 0.00 0.75 ## [12] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [23] 0.00 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [34] 0.00 0.00 0.00 0.00 0.00 -0.25 0.00 -0.25 0.00 0.00 0.00 ## [45] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [56] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [67] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.75 0.00 0.00 0.00 ## [78] 0.00 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [89] -0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 ## [100] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [111] 0.00 0.00 0.00 0.00 0.80 0.00 0.00 0.00 0.80 0.80 0.00 ## [122] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [133] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 -0.80 ## [144] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [155] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.25 0.00 0.00 ## [166] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [177] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [188] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [199] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.75 0.00 0.00 0.00 ## [210] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 ## [221] 0.00 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [232] 0.00 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [243] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.60 0.00 ## [254] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 ## [265] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [276] 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 0.00 ## [287] 0.00 0.00 0.00 0.00 ``` ``` print(sum(sen)) ``` ``` ## [1] 4.9 ``` ### 7\.22\.1 Getting Streaming Data from Twitter This assumes you have a working twitter account and have already connected R to it using twitteR package. 
* Retrieving tweets for a particular search query
* Example 1 adapted from [http://bogdanrau.com/blog/collecting\-tweets\-using\-r\-and\-the\-twitter\-streaming\-api/](http://bogdanrau.com/blog/collecting-tweets-using-r-and-the-twitter-streaming-api/)
* Additional reference: [https://cran.r\-project.org/web/packages/streamR/streamR.pdf](https://cran.r-project.org/web/packages/streamR/streamR.pdf)

```
library(streamR)
filterStream(file.name = "tweets.json", # Save tweets in a json file
             track = "useR_Stanford",   # Collect tweets mentioning useR_Stanford over 30 seconds. Can use twitter handles or keywords.
             language = "en",
             timeout = 30,              # Keep connection alive for 30 seconds
             oauth = cred)              # Use OAuth credentials
tweets.df <- parseTweets("tweets.json", simplify = FALSE) # parse the json file and save to a data frame called tweets.df. simplify = FALSE ensures that we include lat/lon information in that data frame.
```

### 7\.22\.2 Retrieving tweets of a particular user over a 30 second time period

```
filterStream(file.name = "tweets.json", # Save tweets in a json file
             track = "3497513953",      # Collect tweets from the useR2016 feed over 30 seconds. Must use the twitter ID of the user.
             language = "en",
             timeout = 30,              # Keep connection alive for 30 seconds
             oauth = cred)              # Use my_oauth file as the OAuth credentials
tweets.df <- parseTweets("tweets.json", simplify = FALSE)
```

### 7\.22\.3 Streaming messages from the accounts your user follows.

```
userStream(file.name = "my_timeline.json", with = "followings", tweets = 10, oauth = cred)
```

### 7\.22\.4 Facebook

Now we move on to using Facebook, which is a little less trouble than Twitter. The results may also be used for creating interesting networks.

```
##FACEBOOK EXTRACTOR
library(Rfacebook)
library(SnowballC)
library(Rook)
library(ROAuth)
app_id = "847737771920076"   # USE YOUR OWN IDs
app_secret = "eb8b1c4639a3f5de2fd8582a16b9e5a9"
fb_oauth = fbOAuth(app_id,app_secret,extended_permissions=TRUE)
#save(fb_oauth,file="fb_oauth")
#DIRECT LOAD
#load("fb_oauth")
```

### 7\.22\.5 Examples

```
##EXAMPLES
bbn = getUsers("bloombergnews",token=fb_oauth)
print(bbn)
page = getPage(page="bloombergnews",token=fb_oauth,n=20)
print(dim(page))
print(head(page))
print(names(page))
print(page$message)
print(page$message[11])
```

### 7\.22\.6 Yelp \- Setting up an authorization

First we examine the protocol for connecting to the Yelp API. This assumes you have opened a Yelp developer account, which provides the keys and tokens used below.

```
###CODE to connect to YELP.
consumerKey = "z6w-Or6HSyKbdUTmV9lbOA"
consumerSecret = "ImUufP3yU9FmNWWx54NUbNEBcj8"
token = "mBzEBjhYIGgJZnmtTHLVdQ-0cyfFVRGu"
token_secret = "v0FGCL0TS_dFDWFwH3HptDZhiLE"
```

### 7\.22\.7 Yelp \- handshaking with the API

```
require(httr)
require(httpuv)
require(jsonlite)
# authorization
myapp = oauth_app("YELP", key=consumerKey, secret=consumerSecret)
sig=sign_oauth1.0(myapp, token=token,token_secret=token_secret)
```

```
## Searching the top ten bars in Chicago and SF.
limit <- 10
# 10 bars in Chicago
yelpurl <- paste0("http://api.yelp.com/v2/search/?limit=",limit,"&location=Chicago%20IL&term=bar")
# or 10 bars by geo-coordinates
yelpurl <- paste0("http://api.yelp.com/v2/search/?limit=",limit,"&ll=37.788022,-122.399797&term=bar")
locationdata=GET(yelpurl, sig)
locationdataContent = content(locationdata)
locationdataList=jsonlite::fromJSON(toJSON(locationdataContent))
head(data.frame(locationdataList))
for (j in 1:limit) {
  print(locationdataContent$businesses[[j]]$snippet_text)
}
```

7\.23 Dictionaries
------------------

1.
Webster’s defines a “dictionary” as “…a reference source in print or electronic form containing words usually alphabetically arranged along with information about their forms, pronunciations, functions, etymologies, meanings, and syntactical and idiomatic uses.” 2. The Harvard General Inquirer: [http://www.wjh.harvard.edu/\~inquirer/](http://www.wjh.harvard.edu/~inquirer/) 3. Standard Dictionaries: www.dictionary.com, and www.merriam\-webster.com. 4. Computer dictionary: <http://www.hyperdictionary.com/computer> that contains about 14,000 computer related words, such as “byte” or “hyperlink”. 5. Math dictionary, such as <http://www.amathsdictionaryforkids.com/dictionary.html>. 6. Medical dictionary, see <http://www.hyperdictionary.com/medical>. 7. Internet lingo dictionaries may be used to complement standard dictionaries with words that are not usually found in standard language, for example, see <http://www.netlingo.com/dictionary/all.php> for words such as “2BZ4UQT” which stands for “too busy for you cutey” (LOL). When extracting text messages, postings on Facebook, or stock message board discussions, internet lingo does need to be parsed and such a dictionary is very useful. 8. Associative dictionaries are also useful when trying to find context, as the word may be related to a concept, identified using a dictionary such as <http://www.visuwords.com/>. This dictionary doubles up as a thesaurus, as it provides alternative words and phrases that mean the same thing, and also related concepts. 9. Value dictionaries deal with values and may be useful when only affect (positive or negative) is insufficient for scoring text. The Lasswell Value Dictionary [http://www.wjh.harvard.edu/\~inquirer/lasswell.htm](http://www.wjh.harvard.edu/~inquirer/lasswell.htm) may be used to score the loading of text on the eight basic value categories: Wealth, Power, Respect, Rectitude, Skill, Enlightenment, Affection, and Well being. 7\.24 Lexicons -------------- 1. A **lexicon** is defined by Webster’s as “a book containing an alphabetical arrangement of the words in a language and their definitions; the vocabulary of a language, an individual speaker or group of speakers, or a subject; the total stock of morphemes in a language.” This suggests it is not that different from a dictionary. 2. A “morpheme” is defined as “a word or a part of a word that has a meaning and that contains no smaller part that has a meaning.” 3. In the text analytics realm, we will take a lexicon to be a smaller, special purpose dictionary, containing words that are relevant to the domain of interest. 4. The benefit of a lexicon is that it enables focusing only on words that are relevant to the analytics and discards words that are not. 5. Another benefit is that since it is a smaller dictionary, the computational effort required by text analytics algorithms is drastically reduced. ### 7\.24\.1 Constructing a lexicon 1. By hand. This is an effective technique and the simplest. It calls for a human reader who scans a representative sample of text documents and culls important words that lend interpretive meaning. 2. Examine the term document matrix for most frequent words, and pick the ones that have high connotation for the classification task at hand. 3. Use pre\-classified documents in a text corpus. We analyze the separate groups of documents to find words whose difference in frequency between groups is highest. Such words are likely to be better in discriminating between groups. ### 7\.24\.2 Lexicons as Word Lists 1. 
Das and Chen (2007\) constructed a lexicon of about 375 words that are useful in parsing sentiment from stock message boards.
2. Loughran and McDonald (2011\):

* Taking a sample of 50,115 firm\-year 10\-Ks from 1994 to 2008, they found that almost three\-fourths of the words identified as negative by the Harvard Inquirer dictionary are not typically negative words in a financial context.
* Therefore, they specifically created separate lists of words for the following attributes: negative, positive, uncertainty, litigious, strong modal, and weak modal. Modal words are based on Jordan’s categories of strong and weak modal words. These word lists may be downloaded from [http://www3\.nd.edu/\~mcdonald/Word\_Lists.html](http://www3.nd.edu/~mcdonald/Word_Lists.html).

### 7\.24\.3 Negation Tagging

Das and Chen (2007\) introduced the notion of “negation tagging” into the literature. Negation tags create additional words in the word list using some rule. In this case, the rule used was to take any sentence, and if a negation word occurred, then tag all remaining positive words in the sentence as negative. For example, take the sentence “This is not a good book.” Here the positive words after “not” are candidates for negation tagging, so we would replace the sentence with “This is not a n\_\_good book.”

Sometimes this is more nuanced. Consider a sentence such as “There is nothing better than sliced bread.” Here the negation word “nothing” is used in conjunction with “better” and so is an exception to the rule. Such exceptions need to be coded into the rules for parsing textual content; a short sketch of the basic rule appears at the end of the next section.

The Grammarly Handbook provides the following negation words (see <https://www.grammarly.com/handbook/>):

* Negative words: No, Not, None, No one, Nobody, Nothing, Neither, Nowhere, Never.
* Negative Adverbs: Hardly, Scarcely, Barely.
* Negative verbs: Doesn’t, Isn’t, Wasn’t, Shouldn’t, Wouldn’t, Couldn’t, Won’t, Can’t, Don’t.

7\.25 Scoring Text
------------------

* Text can be scored using dictionaries and word lists. Here is an example of mood scoring. We use a psychological dictionary from Harvard. There is also WordNet.
* WordNet is a large database of words in English, i.e., a lexicon. The repository is at <http://wordnet.princeton.edu>. WordNet groups words together based on their meanings (synonyms) and hence may be used as a thesaurus. WordNet is also useful for natural language processing as it provides word lists by language category, such as noun, verb, adjective, etc.
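To make these ideas concrete before the full Harvard Inquirer example in the next section, here is a minimal sketch of the negation\-tagging rule described above. The tiny positive and negative word lists below are purely illustrative placeholders, not the dictionaries used later.

```
#Minimal sketch of negation tagging (illustrative word lists only)
negwords_small = c("no","not","none","nothing","never","neither","nowhere")
poswords_small = c("good","great","better","gain","up")

negation_tag = function(sentence) {
  w = tolower(strsplit(sentence," ")[[1]])
  w = gsub("[[:punct:]]","",w)               #strip punctuation
  hit = which(w %in% negwords_small)         #positions of any negation words
  if (length(hit)>0 && min(hit)<length(w)) {
    idx = seq(min(hit)+1,length(w))          #words after the first negation word
    flip = idx[w[idx] %in% poswords_small]   #positive words to be tagged
    w[flip] = paste0("n__",w[flip])
  }
  paste(w,collapse=" ")
}

negation_tag("This is not a good book.")     #returns "this is not a n__good book"
negation_tag("The market may gain today.")   #unchanged, since no negation word is present
```

A scoring routine would then count matches against the positive list and treat any term carrying the “n\_\_” prefix as a negative hit. The rule is deliberately crude, flipping every positive word after the first negation word, so exceptions such as “nothing better than” still require hand\-coded handling, as noted above.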
7\.26 Mood Scoring using Harvard Inquirer ----------------------------------------- ### 7\.26\.1 Creating Positive and Negative Word Lists ``` #MOOD SCORING USING HARVARD INQUIRER #Read in the Harvard Inquirer Dictionary #And create a list of positive and negative words HIDict = readLines("DSTMAA_data/inqdict.txt") dict_pos = HIDict[grep("Pos",HIDict)] poswords = NULL for (s in dict_pos) { s = strsplit(s,"#")[[1]][1] poswords = c(poswords,strsplit(s," ")[[1]][1]) } dict_neg = HIDict[grep("Neg",HIDict)] negwords = NULL for (s in dict_neg) { s = strsplit(s,"#")[[1]][1] negwords = c(negwords,strsplit(s," ")[[1]][1]) } poswords = tolower(poswords) negwords = tolower(negwords) print(sample(poswords,25)) ``` ``` ## [1] "rouse" "donation" "correct" "eager" ## [5] "shiny" "train" "gain" "competent" ## [9] "aristocracy" "arisen" "comeback" "honeymoon" ## [13] "inspire" "faith" "sympathize" "uppermost" ## [17] "fulfill" "relaxation" "appreciative" "create" ## [21] "luck" "protection" "entrust" "fortify" ## [25] "dignified" ``` ``` print(sample(negwords,25)) ``` ``` ## [1] "suspicion" "censorship" "conspire" "even" ## [5] "order" "perverse" "withhold" "collision" ## [9] "muddy" "frown" "war" "discriminate" ## [13] "competitor" "challenge" "blah" "need" ## [17] "pass" "frustrate" "lying" "frantically" ## [21] "haggard" "blunder" "confuse" "scold" ## [25] "audacity" ``` ``` poswords = unique(poswords) negwords = unique(negwords) print(length(poswords)) ``` ``` ## [1] 1647 ``` ``` print(length(negwords)) ``` ``` ## [1] 2121 ``` The preceding code created two arrays, one of positive words and another of negative words. You can also directly use the EmoLex which contains positive and negative words already, see: NRC Word\-Emotion Lexicon: [http://saifmohammad.com/WebPages/NRC\-Emotion\-Lexicon.htm](http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm) ### 7\.26\.2 One Function to Rule All Text In order to score text, we need to clean it first and put it into an array to compare with the word list of positive and negative words. I wrote a general purpose function that grabs text and cleans it up for further use. 
``` library(tm) library(stringr) #READ IN TEXT FOR ANALYSIS, PUT IT IN A CORPUS, OR ARRAY, OR FLAT STRING #cstem=1, if stemming needed #cstop=1, if stopwords to be removed #ccase=1 for lower case, ccase=2 for upper case #cpunc=1, if punctuation to be removed #cflat=1 for flat text wanted, cflat=2 if text array, else returns corpus read_web_page = function(url,cstem=0,cstop=0,ccase=0,cpunc=0,cflat=0) { text = readLines(url) text = text[setdiff(seq(1,length(text)),grep("<",text))] text = text[setdiff(seq(1,length(text)),grep(">",text))] text = text[setdiff(seq(1,length(text)),grep("]",text))] text = text[setdiff(seq(1,length(text)),grep("}",text))] text = text[setdiff(seq(1,length(text)),grep("_",text))] text = text[setdiff(seq(1,length(text)),grep("\\/",text))] ctext = Corpus(VectorSource(text)) if (cstem==1) { ctext = tm_map(ctext, stemDocument) } if (cstop==1) { ctext = tm_map(ctext, removeWords, stopwords("english"))} if (cpunc==1) { ctext = tm_map(ctext, removePunctuation) } if (ccase==1) { ctext = tm_map(ctext, tolower) } if (ccase==2) { ctext = tm_map(ctext, toupper) } text = ctext #CONVERT FROM CORPUS IF NEEDED if (cflat>0) { text = NULL for (j in 1:length(ctext)) { temp = ctext[[j]]$content if (temp!="") { text = c(text,temp) } } text = as.array(text) } if (cflat==1) { text = paste(text,collapse="\n") text = str_replace_all(text, "[\r\n]" , " ") } result = text } ``` ### 7\.26\.3 Example Now apply this function and see how we can get some clean text. ``` url = "http://srdas.github.io/research.htm" res = read_web_page(url,0,0,0,1,1) print(res) ``` ``` ## [1] "Data Science Theories Models Algorithms and Analytics web book work in progress Derivatives Principles and Practice 2010 Rangarajan Sundaram and Sanjiv Das McGraw Hill An IndexBased Measure of Liquidity with George Chacko and Rong Fan 2016 Matrix Metrics NetworkBased Systemic Risk Scoring 2016 of systemic risk This paper won the First Prize in the MITCFP competition 2016 for the best paper on SIFIs systemically important financial institutions It also won the best paper award at Credit Spreads with Dynamic Debt with Seoyoung Kim 2015 Text and Context Language Analytics for Finance 2014 Strategic Loan Modification An OptionsBased Response to Strategic Default Options and Structured Products in Behavioral Portfolios with Meir Statman 2013 and barrier range notes in the presence of fattailed outcomes using copulas Polishing Diamonds in the Rough The Sources of Syndicated Venture Performance 2011 with Hoje Jo and Yongtae Kim Optimization with Mental Accounts 2010 with Harry Markowitz Jonathan Accountingbased versus marketbased crosssectional models of CDS spreads with Paul Hanouna and Atulya Sarin 2009 Hedging Credit Equity Liquidity Matters with Paul Hanouna 2009 An Integrated Model for Hybrid Securities Yahoo for Amazon Sentiment Extraction from Small Talk on the Web Common Failings How Corporate Defaults are Correlated with Darrell Duffie Nikunj Kapadia and Leandro Saita A Clinical Study of Investor Discussion and Sentiment with Asis MartinezJerez and Peter Tufano 2005 International Portfolio Choice with Systemic Risk The loss resulting from diminished diversification is small while Speech Signaling Risksharing and the Impact of Fee Structures on investor welfare Contrary to regulatory intuition incentive structures A DiscreteTime Approach to Noarbitrage Pricing of Credit derivatives with Rating Transitions with Viral Acharya and Rangarajan Sundaram Pricing Interest Rate Derivatives A General Approachwith George Chacko A 
DiscreteTime Approach to ArbitrageFree Pricing of Credit Derivatives The Psychology of Financial Decision Making A Case for TheoryDriven Experimental Enquiry 1999 with Priya Raghubir Of Smiles and Smirks A Term Structure Perspective A Theory of Banking Structure 1999 with Ashish Nanda by function based upon two dimensions the degree of information asymmetry A Theory of Optimal Timing and Selectivity A Direct DiscreteTime Approach to PoissonGaussian Bond Option Pricing in the HeathJarrowMorton The Central Tendency A Second Factor in Bond Yields 1998 with Silverio Foresi and Pierluigi Balduzzi Efficiency with Costly Information A Reinterpretation of Evidence from Managed Portfolios with Edwin Elton Martin Gruber and Matt Presented and Reprinted in the Proceedings of The Seminar on the Analysis of Security Prices at the Center for Research in Security Prices at the University of Managing Rollover Risk with Capital Structure Covenants in Structured Finance Vehicles 2016 The Design and Risk Management of Structured Finance Vehicles 2016 Post the recent subprime financial crisis we inform the creation of safer SIVs in structured finance and propose avenues of mitigating risks faced by senior debt through Coming up Short Managing Underfunded Portfolios in an LDIES Framework 2014 with Seoyoung Kim and Meir Statman Going for Broke Restructuring Distressed Debt Portfolios 2014 Digital Portfolios 2013 Options on Portfolios with HigherOrder Moments 2009 options on a multivariate system of assets calibrated to the return Dealing with Dimension Option Pricing on Factor Trees 2009 you to price options on multiple assets in a unified fraamework Computational Modeling Correlated Default with a Forest of Binomial Trees 2007 with Basel II Correlation Related Issues 2007 Correlated Default Risk 2006 with Laurence Freed Gary Geng and Nikunj Kapadia increase as markets worsen Regime switching models are needed to explain dynamic A Simple Model for Pricing Equity Options with Markov Switching State Variables 2006 with Donald Aingworth and Rajeev Motwani The Firms Management of Social Interactions 2005 with D Godes D Mayzlin Y Chen S Das C Dellarocas B Pfeieffer B Libai S Sen M Shi and P Verlegh Financial Communities with Jacob Sisk 2005 Summer 112123 Monte Carlo Markov Chain Methods for Derivative Pricing and Risk Assessmentwith Alistair Sinclair 2005 where incomplete information about the value of an asset may be exploited to undertake fast and accurate pricing Proof that a fully polynomial randomized Correlated Default Processes A CriterionBased Copula Approach Special Issue on Default Risk Private Equity Returns An Empirical Examination of the Exit of VentureBacked Companies with Murali Jagannathan and Atulya Sarin firm being financed the valuation at the time of financing and the prevailing market sentiment Helps understand the risk premium required for the Issue on Computational Methods in Economics and Finance December 5569 Bayesian Migration in Credit Ratings Based on Probabilities of The Impact of Correlated Default Risk on Credit Portfolios with Gifford Fong and Gary Geng How Diversified are Internationally Diversified Portfolios TimeVariation in the Covariances between International Returns DiscreteTime Bond and Option Pricing for JumpDiffusion Macroeconomic Implications of Search Theory for the Labor Market Auction Theory A Summary with Applications and Evidence from the Treasury Markets 1996 with Rangarajan Sundaram A Simple Approach to Three Factor Affine Models of the Term Structure with Pierluigi 
Balduzzi Silverio Foresi and Rangarajan Analytical Approximations of the Term Structure for Jumpdiffusion Processes A Numerical Analysis 1996 Markov Chain Term Structure Models Extensions and Applications Exact Solutions for Bond and Options Prices with Systematic Jump Risk 1996 with Silverio Foresi Pricing Credit Sensitive Debt when Interest Rates Credit Ratings and Credit Spreads are Stochastic 1996 v52 161198 Did CDS Trading Improve the Market for Corporate Bonds 2016 with Madhu Kalimipalli and Subhankar Nayak Big Datas Big Muscle 2016 Portfolios for Investors Who Want to Reach Their Goals While Staying on the MeanVariance Efficient Frontier 2011 with Harry Markowitz Jonathan Scheid and Meir Statman News Analytics Framework Techniques and Metrics The Handbook of News Analytics in Finance May 2011 John Wiley Sons UK Random Lattices for Option Pricing Problems in Finance 2011 Implementing Option Pricing Models using Python and Cython 2010 The Finance Web Internet Information and Markets 2010 Financial Applications with Parallel R 2009 Recovery Swaps 2009 with Paul Hanouna Recovery Rates 2009with Paul Hanouna A Simple Model for Pricing Securities with a DebtEquity Linkage 2008 in Credit Default Swap Spreads 2006 with Paul Hanouna MultipleCore Processors for Finance Applications 2006 Power Laws 2005 with Jacob Sisk Genetic Algorithms 2005 Recovery Risk 2005 Venture Capital Syndication with Hoje Jo and Yongtae Kim 2004 Technical Analysis with David Tien 2004 Liquidity and the Bond Markets with Jan Ericsson and Madhu Kalimipalli 2003 Modern Pricing of Interest Rate Derivatives Book Review Contagion 2003 Hedge Funds 2003 Reprinted in Working Papers on Hedge Funds in The World of Hedge Funds Characteristics and Analysis 2005 World Scientific The Internet and Investors 2003 Useful things to know about Correlated Default Risk with Gifford Fong Laurence Freed Gary Geng and Nikunj Kapadia The Regulation of Fee Structures in Mutual Funds A Theoretical Analysis with Rangarajan Sundaram 1998 NBER WP No 6639 in the Courant Institute of Mathematical Sciences special volume on A DiscreteTime Approach to ArbitrageFree Pricing of Credit Derivatives with Rangarajan Sundaram reprinted in the Courant Institute of Mathematical Sciences special volume on Stochastic Mean Models of the Term Structure with Pierluigi Balduzzi Silverio Foresi and Rangarajan Sundaram John Wiley Sons Inc 128161 Interest Rate Modeling with JumpDiffusion Processes John Wiley Sons Inc 162189 Comments on Pricing ExcessofLoss Reinsurance Contracts against Catastrophic Loss by J David Cummins C Lewis and Richard Phillips Froot Ed University of Chicago Press 1999 141145 Pricing Credit Derivatives J Frost and JG Whittaker 101138 On the Recursive Implementation of Term Structure Models ZeroRevelation RegTech Detecting Risk through Linguistic Analysis of Corporate Emails and News with Seoyoung Kim and Bhushan Kothari Summary for the Columbia Law School blog Dynamic Risk Networks A Note with Seoyoung Kim and Dan Ostrov Research Challenges in Financial Data Modeling and Analysis with Lewis Alexander Zachary Ives HV Jagadish and Claire Monteleoni Local Volatility and the Recovery Rate of Credit Default Swaps with Jeroen Jansen and Frank Fabozzi Efficient Rebalancing of Taxable Portfolios with Dan Ostrov Dennis Ding Vincent Newell The Fast and the Curious VC Drift with Amit Bubna and Paul Hanouna Venture Capital Communities with Amit Bubna and Nagpurnanand Prabhala " ``` ### 7\.26\.4 Mood Scoring Text Now we will take a different page of text and 
mood score it. ``` #EXAMPLE OF MOOD SCORING library(stringr) url = "http://srdas.github.io/bio-candid.html" text = read_web_page(url,cstem=0,cstop=0,ccase=0,cpunc=1,cflat=1) text = str_replace_all(text,"nbsp"," ") text = unlist(strsplit(text," ")) posmatch = match(text,poswords) numposmatch = length(posmatch[which(posmatch>0)]) negmatch = match(text,negwords) numnegmatch = length(negmatch[which(negmatch>0)]) print(c(numposmatch,numnegmatch)) ``` ``` ## [1] 26 16 ``` ``` #FURTHER EXPLORATION OF THESE OBJECTS print(length(text)) ``` ``` ## [1] 647 ``` ``` print(posmatch) ``` ``` ## [1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [15] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [29] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [43] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [57] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [71] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [85] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [99] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [113] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [127] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [141] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [155] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [169] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [183] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [197] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [211] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [225] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [239] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [253] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [267] NA 994 NA NA NA NA NA NA NA NA NA NA NA NA ## [281] NA NA NA NA NA NA NA NA NA NA 611 NA NA NA ## [295] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [309] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [323] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [337] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [351] 800 NA NA NA NA NA NA NA NA NA NA NA NA NA ## [365] NA NA NA NA 761 1144 NA NA 800 NA NA NA NA 800 ## [379] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [393] NA 515 NA NA NA NA 1011 NA NA NA NA NA NA NA ## [407] NA NA NA NA NA NA NA NA NA NA NA NA 1036 NA ## [421] NA NA NA NA NA NA 455 NA NA NA NA NA NA NA ## [435] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [449] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [463] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [477] NA NA 800 NA NA NA NA NA NA NA NA NA NA NA ## [491] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [505] NA NA NA 941 NA NA NA NA NA NA NA NA NA NA ## [519] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [533] NA 1571 NA NA 800 NA NA NA NA NA NA NA NA 838 ## [547] NA 1076 NA NA NA NA NA NA NA NA NA NA NA NA ## [561] NA NA NA 1255 NA NA NA NA NA NA 1266 NA NA NA ## [575] NA NA NA NA NA NA NA 781 NA NA NA NA NA NA ## [589] NA NA NA 800 NA NA NA NA NA NA NA NA NA NA ## [603] 1645 542 NA NA NA NA NA NA NA NA 940 NA NA NA ## [617] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [631] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [645] NA 1184 747 ``` ``` print(text[77]) ``` ``` ## [1] "qualified" ``` ``` print(poswords[204]) ``` ``` ## [1] "back" ``` ``` is.na(posmatch) ``` ``` ## [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [12] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [23] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [34] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [45] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [56] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [67] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## 
[78] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [89] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [100] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [111] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [122] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [133] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [144] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [155] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [166] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [177] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [188] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [199] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [210] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [221] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [232] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [243] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [254] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [265] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [276] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [287] TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE ## [298] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [309] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [320] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [331] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [342] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [353] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [364] TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE ## [375] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [386] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [397] TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [408] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [419] FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [430] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [441] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [452] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [463] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [474] TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE ## [485] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [496] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [507] TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [518] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [529] TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE ## [540] TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE ## [551] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [562] TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [573] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [584] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [595] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE ## [606] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE ## [617] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [628] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [639] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE ``` 7\.27 Language Detection and Translation ---------------------------------------- We may be scraping web sites from many countries and need to detect the language and then translate it into English for mood 
scoring. The useful package **textcat** enables us to categorize the language. ``` library(textcat) text = c("Je suis un programmeur novice.", "I am a programmer who is a novice.", "Sono un programmatore alle prime armi.", "Ich bin ein Anfänger Programmierer", "Soy un programador con errores.") lang = textcat(text) print(lang) ``` ``` ## [1] "french" "english" "italian" "german" "spanish" ``` ### 7\.27\.1 Language Translation And of course, once the language is detected, we may translate it into English. ``` library(translate) set.key("AIzaSyDIB8qQTmhLlbPNN38Gs4dXnlN4a7lRrHQ") print(translate(text[1],"fr","en")) print(translate(text[3],"it","en")) print(translate(text[4],"de","en")) print(translate(text[5],"es","en")) ``` This requires a Google API for which you need to set up a paid account. 7\.28 Text Classification ------------------------- 1. Machine classification is, from a layman’s point of view, nothing but learning by example. In new\-fangled modern parlance, it is a technique in the field of “machine learning”. 2. Learning by machines falls into two categories, supervised and unsupervised. When a number of explanatory \\(X\\) variables are used to determine some outcome \\(Y\\), and we train an algorithm to do this, we are performing supervised (machine) learning. The outcome \\(Y\\) may be a dependent variable (for example, the left hand side in a linear regression), or a classification (i.e., discrete outcome). 3. When we only have \\(X\\) variables and no separate outcome variable \\(Y\\), we perform unsupervised learning. For example, cluster analysis produces groupings based on the \\(X\\) variables of various entities, and is a common example. We start with a simple example on numerical data befoe discussing how this is to be applied to text. We first look at the Bayes classifier. 7\.29 Bayes Classifier ---------------------- Bayes classification extends the Document\-Term model with a document\-term\-classification model. These are the three entities in the model and we denote them as \\((d,t,c)\\). Assume that there are \\(D\\) documents to classify into \\(C\\) categories, and we employ a dictionary/lexicon (as the case may be) of \\(T\\) terms or words. Hence we have \\(d\_i, i \= 1, ... , D\\), and \\(t\_j, j \= 1, ... , T\\). And correspondingly the categories for classification are \\(c\_k, k \= 1, ... , C\\). Suppose we are given a text corpus of stock market related documents (tweets for example), and wish to classify them into bullish (\\(c\_1\\)), neutral (\\(c\_2\\)), or bearish (\\(c\_3\\)), where \\(C\=3\\). We first need to train the Bayes classifier using a training data set, with pre\-classified documents, numbering \\(D\\). For each term \\(t\\) in the lexicon, we can compute how likely it is to appear in documents in each class \\(c\_k\\). Therefore, for each class, there is a \\(T\\)\-sided dice with each face representing a term and having a probability of coming up. These dice are the prior probabilities of seeing a word for each class of document. We denote these probabilities succinctly as \\(p(t \| c)\\). For example in a bearish document, if the word “sell” comprises 10% of the words that appear, then \\(p(t\=\\mbox{sell} \| c\=\\mbox{bearish})\=0\.10\\). 
In order to ensure that just because a word does not appear in a class, it has a non\-zero probability we compute the probabilities as follows: \\\[ \\begin{equation} p(t \| c) \= \\frac{n(t \| c) \+ 1}{n(c)\+T} \\end{equation} \\] where \\(n(t \| c)\\) is the number of times word \\(t\\) appears in category \\(c\\), and \\(n(c) \= \\sum\_t n(t \| c)\\) is the total number of words in the training data in class \\(c\\). Note that if there are no words in the class \\(c\\), then each term \\(t\\) has probability \\(1/T\\). A document \\(d\_i\\) is a collection or set of words \\(t\_j\\). The probability of seeing a given document in each category is given by the following *multinomial* probability: \\\[ \\begin{equation} p(d \| c) \= \\frac{n(d)!}{n(t\_1\|d)! \\cdot n(t\_2\|d)! \\cdots n(t\_T\|d)!} \\times p(t\_1 \| c) \\cdot p(t\_2 \| c) \\cdots p(t\_T \| c) \\nonumber \\end{equation} \\] where \\(n(d)\\) is the number of words in the document, and \\(n(t\_j \| d)\\) is the number of occurrences of word \\(t\_j\\) in the same document \\(d\\). These \\(p(d \| c)\\) are the prior probabilities in the Bayes classifier, computed from all documents in the training data. The posterior probabilities are computed for each document in the test data as follows: \\\[ p(c \| d) \= \\frac{p(d \| c) p(c)}{\\sum\_k \\; p(d \| c\_k) p(c\_k)}, \\forall k \= 1, \\ldots, C \\nonumber \\] Note that we get \\(C\\) posterior probabilities for document \\(d\\), and assign the document to class \\(\\max\_k c\_k\\), i.e., the class with the highest posterior probability for the given document. ### 7\.29\.1 Naive Bayes in R We use the **e1071** package. It has a one\-line command that takes in the tagged training dataset using the function **naiveBayes()**. It returns the trained classifier model. The trained classifier contains the unconditional probabilities \\(p(c)\\) of each class, which are merely frequencies with which each document appears. It also shows the conditional probability distributions \\(p(t \|c)\\) given as the mean and standard deviation of the occurrence of these terms in each class. We may take this trained model and re\-apply to the training data set to see how well it does. We use the **predict()** function for this. The data set here is the classic Iris data. For text mining, the feature set in the data will be the set of all words, and there will be one column for each word. Hence, this will be a large feature set. In order to keep this small, we may instead reduce the number of words by only using a lexicon’s words as the set of features. This will vastly reduce and make more specific the feature set used in the classifier. 
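Before turning to the packaged implementation in the example that follows, it is useful to compute the ingredients of the classifier by hand. The sketch below uses made\-up word counts for a three\-term lexicon and two classes (bullish, bearish); the numbers are purely illustrative and simply show the smoothed likelihoods \\(p(t \| c)\\) and the posterior \\(p(c \| d)\\) for a toy document.

```
#Hand computation of the naive Bayes pieces on made-up counts (illustrative only)
lexicon = c("buy","sell","hold")                     #T = 3 terms
n_tc = rbind(bullish = c(buy=60,sell=10,hold=30),    #n(t|c): training word counts by class
             bearish = c(buy=15,sell=70,hold=15))
T_n = length(lexicon)
p_tc = (n_tc+1)/(rowSums(n_tc)+T_n)                  #smoothed p(t|c) = (n(t|c)+1)/(n(c)+T)
print(p_tc)

p_c = c(bullish=0.5, bearish=0.5)                    #prior class probabilities p(c)
d = c(buy=1, sell=3, hold=0)                         #toy test document: word counts n(t|d)

#The multinomial coefficient in p(d|c) is the same for every class, so it cancels
#in the posterior and is dropped; logs are used to avoid numerical underflow.
logpost = log(p_c) + as.vector(log(p_tc) %*% d)
post = exp(logpost-max(logpost))
post = post/sum(post)                                #posterior p(c|d)
print(post)                                          #assign the document to the class with the highest posterior
```

With three occurrences of “sell” and only one of “buy”, almost all of the posterior mass falls on the bearish class, which is exactly what the formulas above imply.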
### 7\.29\.2 Example ``` library(e1071) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` tail(iris) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 145 6.7 3.3 5.7 2.5 virginica ## 146 6.7 3.0 5.2 2.3 virginica ## 147 6.3 2.5 5.0 1.9 virginica ## 148 6.5 3.0 5.2 2.0 virginica ## 149 6.2 3.4 5.4 2.3 virginica ## 150 5.9 3.0 5.1 1.8 virginica ``` ``` #NAIVE BAYES res = naiveBayes(iris[,1:4],iris[,5]) #SHOWS THE PRIOR AND LIKELIHOOD FUNCTIONS res ``` ``` ## ## Naive Bayes Classifier for Discrete Predictors ## ## Call: ## naiveBayes.default(x = iris[, 1:4], y = iris[, 5]) ## ## A-priori probabilities: ## iris[, 5] ## setosa versicolor virginica ## 0.3333333 0.3333333 0.3333333 ## ## Conditional probabilities: ## Sepal.Length ## iris[, 5] [,1] [,2] ## setosa 5.006 0.3524897 ## versicolor 5.936 0.5161711 ## virginica 6.588 0.6358796 ## ## Sepal.Width ## iris[, 5] [,1] [,2] ## setosa 3.428 0.3790644 ## versicolor 2.770 0.3137983 ## virginica 2.974 0.3224966 ## ## Petal.Length ## iris[, 5] [,1] [,2] ## setosa 1.462 0.1736640 ## versicolor 4.260 0.4699110 ## virginica 5.552 0.5518947 ## ## Petal.Width ## iris[, 5] [,1] [,2] ## setosa 0.246 0.1053856 ## versicolor 1.326 0.1977527 ## virginica 2.026 0.2746501 ``` ``` #SHOWS POSTERIOR PROBABILITIES predict(res,iris[,1:4],type="raw") ``` ``` ## setosa versicolor virginica ## [1,] 1.000000e+00 2.981309e-18 2.152373e-25 ## [2,] 1.000000e+00 3.169312e-17 6.938030e-25 ## [3,] 1.000000e+00 2.367113e-18 7.240956e-26 ## [4,] 1.000000e+00 3.069606e-17 8.690636e-25 ## [5,] 1.000000e+00 1.017337e-18 8.885794e-26 ## [6,] 1.000000e+00 2.717732e-14 4.344285e-21 ## [7,] 1.000000e+00 2.321639e-17 7.988271e-25 ## [8,] 1.000000e+00 1.390751e-17 8.166995e-25 ## [9,] 1.000000e+00 1.990156e-17 3.606469e-25 ## [10,] 1.000000e+00 7.378931e-18 3.615492e-25 ## [11,] 1.000000e+00 9.396089e-18 1.474623e-24 ## [12,] 1.000000e+00 3.461964e-17 2.093627e-24 ## [13,] 1.000000e+00 2.804520e-18 1.010192e-25 ## [14,] 1.000000e+00 1.799033e-19 6.060578e-27 ## [15,] 1.000000e+00 5.533879e-19 2.485033e-25 ## [16,] 1.000000e+00 6.273863e-17 4.509864e-23 ## [17,] 1.000000e+00 1.106658e-16 1.282419e-23 ## [18,] 1.000000e+00 4.841773e-17 2.350011e-24 ## [19,] 1.000000e+00 1.126175e-14 2.567180e-21 ## [20,] 1.000000e+00 1.808513e-17 1.963924e-24 ## [21,] 1.000000e+00 2.178382e-15 2.013989e-22 ## [22,] 1.000000e+00 1.210057e-15 7.788592e-23 ## [23,] 1.000000e+00 4.535220e-20 3.130074e-27 ## [24,] 1.000000e+00 3.147327e-11 8.175305e-19 ## [25,] 1.000000e+00 1.838507e-14 1.553757e-21 ## [26,] 1.000000e+00 6.873990e-16 1.830374e-23 ## [27,] 1.000000e+00 3.192598e-14 1.045146e-21 ## [28,] 1.000000e+00 1.542562e-17 1.274394e-24 ## [29,] 1.000000e+00 8.833285e-18 5.368077e-25 ## [30,] 1.000000e+00 9.557935e-17 3.652571e-24 ## [31,] 1.000000e+00 2.166837e-16 6.730536e-24 ## [32,] 1.000000e+00 3.940500e-14 1.546678e-21 ## [33,] 1.000000e+00 1.609092e-20 1.013278e-26 ## [34,] 1.000000e+00 7.222217e-20 4.261853e-26 ## [35,] 1.000000e+00 6.289348e-17 1.831694e-24 ## [36,] 1.000000e+00 2.850926e-18 8.874002e-26 ## [37,] 1.000000e+00 7.746279e-18 7.235628e-25 ## [38,] 1.000000e+00 8.623934e-20 1.223633e-26 ## [39,] 1.000000e+00 4.612936e-18 9.655450e-26 ## [40,] 1.000000e+00 2.009325e-17 1.237755e-24 ## [41,] 1.000000e+00 1.300634e-17 5.657689e-25 ## [42,] 
1.000000e+00 1.577617e-15 5.717219e-24 ## [43,] 1.000000e+00 1.494911e-18 4.800333e-26 ## [44,] 1.000000e+00 1.076475e-10 3.721344e-18 ## [45,] 1.000000e+00 1.357569e-12 1.708326e-19 ## [46,] 1.000000e+00 3.882113e-16 5.587814e-24 ## [47,] 1.000000e+00 5.086735e-18 8.960156e-25 ## [48,] 1.000000e+00 5.012793e-18 1.636566e-25 ## [49,] 1.000000e+00 5.717245e-18 8.231337e-25 ## [50,] 1.000000e+00 7.713456e-18 3.349997e-25 ## [51,] 4.893048e-107 8.018653e-01 1.981347e-01 ## [52,] 7.920550e-100 9.429283e-01 5.707168e-02 ## [53,] 5.494369e-121 4.606254e-01 5.393746e-01 ## [54,] 1.129435e-69 9.999621e-01 3.789964e-05 ## [55,] 1.473329e-105 9.503408e-01 4.965916e-02 ## [56,] 1.931184e-89 9.990013e-01 9.986538e-04 ## [57,] 4.539099e-113 6.592515e-01 3.407485e-01 ## [58,] 2.549753e-34 9.999997e-01 3.119517e-07 ## [59,] 6.562814e-97 9.895385e-01 1.046153e-02 ## [60,] 5.000210e-69 9.998928e-01 1.071638e-04 ## [61,] 7.354548e-41 9.999997e-01 3.143915e-07 ## [62,] 4.799134e-86 9.958564e-01 4.143617e-03 ## [63,] 4.631287e-60 9.999925e-01 7.541274e-06 ## [64,] 1.052252e-103 9.850868e-01 1.491324e-02 ## [65,] 4.789799e-55 9.999700e-01 2.999393e-05 ## [66,] 1.514706e-92 9.787587e-01 2.124125e-02 ## [67,] 1.338348e-97 9.899311e-01 1.006893e-02 ## [68,] 2.026115e-62 9.999799e-01 2.007314e-05 ## [69,] 6.547473e-101 9.941996e-01 5.800427e-03 ## [70,] 3.016276e-58 9.999913e-01 8.739959e-06 ## [71,] 1.053341e-127 1.609361e-01 8.390639e-01 ## [72,] 1.248202e-70 9.997743e-01 2.256698e-04 ## [73,] 3.294753e-119 9.245812e-01 7.541876e-02 ## [74,] 1.314175e-95 9.979398e-01 2.060233e-03 ## [75,] 3.003117e-83 9.982736e-01 1.726437e-03 ## [76,] 2.536747e-92 9.865372e-01 1.346281e-02 ## [77,] 1.558909e-111 9.102260e-01 8.977398e-02 ## [78,] 7.014282e-136 7.989607e-02 9.201039e-01 ## [79,] 5.034528e-99 9.854957e-01 1.450433e-02 ## [80,] 1.439052e-41 9.999984e-01 1.601574e-06 ## [81,] 1.251567e-54 9.999955e-01 4.500139e-06 ## [82,] 8.769539e-48 9.999983e-01 1.742560e-06 ## [83,] 3.447181e-62 9.999664e-01 3.361987e-05 ## [84,] 1.087302e-132 6.134355e-01 3.865645e-01 ## [85,] 4.119852e-97 9.918297e-01 8.170260e-03 ## [86,] 1.140835e-102 8.734107e-01 1.265893e-01 ## [87,] 2.247339e-110 7.971795e-01 2.028205e-01 ## [88,] 4.870630e-88 9.992978e-01 7.022084e-04 ## [89,] 2.028672e-72 9.997620e-01 2.379898e-04 ## [90,] 2.227900e-69 9.999461e-01 5.390514e-05 ## [91,] 5.110709e-81 9.998510e-01 1.489819e-04 ## [92,] 5.774841e-99 9.885399e-01 1.146006e-02 ## [93,] 5.146736e-66 9.999591e-01 4.089540e-05 ## [94,] 1.332816e-34 9.999997e-01 2.716264e-07 ## [95,] 6.094144e-77 9.998034e-01 1.966331e-04 ## [96,] 1.424276e-72 9.998236e-01 1.764463e-04 ## [97,] 8.302641e-77 9.996692e-01 3.307548e-04 ## [98,] 1.835520e-82 9.988601e-01 1.139915e-03 ## [99,] 5.710350e-30 9.999997e-01 3.094739e-07 ## [100,] 3.996459e-73 9.998204e-01 1.795726e-04 ## [101,] 3.993755e-249 1.031032e-10 1.000000e+00 ## [102,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [103,] 2.460661e-216 2.327488e-07 9.999998e-01 ## [104,] 2.864831e-173 2.290954e-03 9.977090e-01 ## [105,] 8.299884e-214 3.175384e-07 9.999997e-01 ## [106,] 1.371182e-267 3.807455e-10 1.000000e+00 ## [107,] 3.444090e-107 9.719885e-01 2.801154e-02 ## [108,] 3.741929e-224 1.782047e-06 9.999982e-01 ## [109,] 5.564644e-188 5.823191e-04 9.994177e-01 ## [110,] 2.052443e-260 2.461662e-12 1.000000e+00 ## [111,] 8.669405e-159 4.895235e-04 9.995105e-01 ## [112,] 4.220200e-163 3.168643e-03 9.968314e-01 ## [113,] 4.360059e-190 6.230821e-06 9.999938e-01 ## [114,] 6.142256e-151 1.423414e-02 9.857659e-01 ## [115,] 
2.201426e-186 1.393247e-06 9.999986e-01 ## [116,] 2.949945e-191 6.128385e-07 9.999994e-01 ## [117,] 2.909076e-168 2.152843e-03 9.978472e-01 ## [118,] 1.347608e-281 2.872996e-12 1.000000e+00 ## [119,] 2.786402e-306 1.151469e-12 1.000000e+00 ## [120,] 2.082510e-123 9.561626e-01 4.383739e-02 ## [121,] 2.194169e-217 1.712166e-08 1.000000e+00 ## [122,] 3.325791e-145 1.518718e-02 9.848128e-01 ## [123,] 6.251357e-269 1.170872e-09 1.000000e+00 ## [124,] 4.415135e-135 1.360432e-01 8.639568e-01 ## [125,] 6.315716e-201 1.300512e-06 9.999987e-01 ## [126,] 5.257347e-203 9.507989e-06 9.999905e-01 ## [127,] 1.476391e-129 2.067703e-01 7.932297e-01 ## [128,] 8.772841e-134 1.130589e-01 8.869411e-01 ## [129,] 5.230800e-194 1.395719e-05 9.999860e-01 ## [130,] 7.014892e-179 8.232518e-04 9.991767e-01 ## [131,] 6.306820e-218 1.214497e-06 9.999988e-01 ## [132,] 2.539020e-247 4.668891e-10 1.000000e+00 ## [133,] 2.210812e-201 2.000316e-06 9.999980e-01 ## [134,] 1.128613e-128 7.118948e-01 2.881052e-01 ## [135,] 8.114869e-151 4.900992e-01 5.099008e-01 ## [136,] 7.419068e-249 1.448050e-10 1.000000e+00 ## [137,] 1.004503e-215 9.743357e-09 1.000000e+00 ## [138,] 1.346716e-167 2.186989e-03 9.978130e-01 ## [139,] 1.994716e-128 1.999894e-01 8.000106e-01 ## [140,] 8.440466e-185 6.769126e-06 9.999932e-01 ## [141,] 2.334365e-218 7.456220e-09 1.000000e+00 ## [142,] 2.179139e-183 6.352663e-07 9.999994e-01 ## [143,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [144,] 3.426814e-229 6.597015e-09 1.000000e+00 ## [145,] 2.011574e-232 2.620636e-10 1.000000e+00 ## [146,] 1.078519e-187 7.915543e-07 9.999992e-01 ## [147,] 1.061392e-146 2.770575e-02 9.722942e-01 ## [148,] 1.846900e-164 4.398402e-04 9.995602e-01 ## [149,] 1.439996e-195 3.384156e-07 9.999997e-01 ## [150,] 2.771480e-143 5.987903e-02 9.401210e-01 ``` ``` #CONFUSION MATRIX out = table(predict(res,iris[,1:4]),iris[,5]) out ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 47 3 ## virginica 0 3 47 ``` 7\.30 Support Vector Machines (SVM) ----------------------------------- The goal of the SVM is to map a set of entities with inputs \\(X\=\\{x\_1,x\_2,\\ldots,x\_n\\}\\) of dimension \\(n\\), i.e., \\(X \\in R^n\\), into a set of categories \\(Y\=\\{y\_1,y\_2,\\ldots,y\_m\\}\\) of dimension \\(m\\), such that the \\(n\\)\-dimensional \\(X\\)\-space is divided using hyperplanes, which result in the maximal separation between classes \\(Y\\). A hyperplane is the set of points \\({\\bf x}\\) satisfying the equation \\\[ {\\bf w} \\cdot {\\bf x} \= b \\] where \\(b\\) is a scalar constant, and \\({\\bf w} \\in R^n\\) is the normal vector to the hyperplane, i.e., the vector at right angles to the plane. The distance between this hyperplane and \\({\\bf w} \\cdot {\\bf x} \= 0\\) is given by \\(b/\|\|{\\bf w}\|\|\\), where \\(\|\|{\\bf w}\|\|\\) is the norm of vector \\({\\bf w}\\). This set up is sufficient to provide intuition about how the SVM is implemented. Suppose we have two categories of data, i.e., \\(y \= \\{y\_1, y\_2\\}\\). Assume that all points in category \\(y\_1\\) lie above a hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1\\), and all points in category \\(y\_2\\) lie below a hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2\\), then the distance between the two hyperplanes is \\(\\frac{\|b\_1\-b\_2\|}{\|\|{\\bf w}\|\|}\\). 
```
#Example of hyperplane geometry
w1 = 1; w2 = 2
b1 = 10
#Plot hyperplane in x1, x2 space
x1 = seq(-3,3,0.1)
x2 = (b1-w1*x1)/w2
plot(x1,x2,type="l")
#Create hyperplane 2
b2 = 8
x2 = (b2-w1*x1)/w2
lines(x1,x2,col="red")
```

```
#Compute distance to hyperplane 2
print(abs(b1-b2)/sqrt(w1^2+w2^2))
```

```
## [1] 0.8944272
```

We see that this gives the *perpendicular* distance between the two parallel hyperplanes. The goal of the SVM is to maximize the distance (separation) between the two hyperplanes, and this is achieved by minimizing the norm \\(\|\|{\\bf w}\|\|\\). This naturally leads to a quadratic optimization problem.

\\\[ \\min\_{b\_1,b\_2,{\\bf w}} \\frac{1}{2} \|\|{\\bf w}\|\|^2 \\]

subject to \\({\\bf w} \\cdot {\\bf x} \\geq b\_1\\) for points in category \\(y\_1\\) and \\({\\bf w} \\cdot {\\bf x} \\leq b\_2\\) for points in category \\(y\_2\\). Note that this program may find a solution where many of the elements of \\({\\bf w}\\) are zero, i.e., it also finds the minimal set of “support” vectors that separate the two groups. The “half” in front of the minimand is for mathematical convenience in solving the quadratic program.

Of course, there may be no linear hyperplane that perfectly separates the two groups. This slippage may be accounted for in the SVM by allowing for points on the wrong side of the separating hyperplanes using cost functions, i.e., we modify the quadratic program as follows:

\\\[ \\min\_{b\_1,b\_2,{\\bf w},\\{\\eta\_i\\}} \\frac{1}{2} \|\|{\\bf w}\|\|^2 \+ C\_1 \\sum\_{i \\in y\_1} \\eta\_i \+ C\_2 \\sum\_{i \\in y\_2} \\eta\_i \\]

where \\(C\_1,C\_2\\) are the costs for slippage in groups 1 and 2, respectively, and each sum runs over the observations in that group. Often implementations assume \\(C\_1\=C\_2\\). The values \\(\\eta\_i\\) are positive for observations that are not perfectly separated, i.e., lead to slippage. Thus, for group 1, \\(\\eta\_i\\) is the perpendicular amount by which observation \\(i\\) lies below the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1\\), i.e., it lies on the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1 \- \\eta\_i\\). For group 2, \\(\\eta\_i\\) is the perpendicular amount by which observation \\(i\\) lies above the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2\\), i.e., it lies on the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2 \+ \\eta\_i\\). For observations within the respective hyperplanes, of course, \\(\\eta\_i\=0\\).
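In packaged solvers this slippage penalty shows up as a single cost parameter (i.e., \\(C\_1 \= C\_2\\)). The short sketch below is an added illustration, not part of the original example: it fits the e1071 svm function to the iris data with a linear kernel at a low and a high cost, since a larger cost penalizes slack more heavily and typically leaves fewer support vectors.

```
library(e1071)
data(iris)

#Linear-kernel SVMs with a small and a large slack penalty (cost)
model_soft = svm(iris[,1:4], iris[,5], kernel="linear", cost=0.1)
model_hard = svm(iris[,1:4], iris[,5], kernel="linear", cost=100)

#Total number of support vectors under each penalty
print(c(model_soft$tot.nSV, model_hard$tot.nSV))

#In-sample accuracy under each penalty
print(mean(predict(model_soft, iris[,1:4]) == iris[,5]))
print(mean(predict(model_hard, iris[,1:4]) == iris[,5]))
```

The exact counts will vary with the data and kernel, but the comparison shows how the cost terms in the program above trade off margin width against slippage.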
### 7\.30\.1 Example of SVM with Confusion Matrix ``` library(e1071) #EXAMPLE 1 for SVM model = svm(iris[,1:4],iris[,5]) model ``` ``` ## ## Call: ## svm.default(x = iris[, 1:4], y = iris[, 5]) ## ## ## Parameters: ## SVM-Type: C-classification ## SVM-Kernel: radial ## cost: 1 ## gamma: 0.25 ## ## Number of Support Vectors: 51 ``` ``` out = predict(model,iris[,1:4]) out ``` ``` ## 1 2 3 4 5 6 ## setosa setosa setosa setosa setosa setosa ## 7 8 9 10 11 12 ## setosa setosa setosa setosa setosa setosa ## 13 14 15 16 17 18 ## setosa setosa setosa setosa setosa setosa ## 19 20 21 22 23 24 ## setosa setosa setosa setosa setosa setosa ## 25 26 27 28 29 30 ## setosa setosa setosa setosa setosa setosa ## 31 32 33 34 35 36 ## setosa setosa setosa setosa setosa setosa ## 37 38 39 40 41 42 ## setosa setosa setosa setosa setosa setosa ## 43 44 45 46 47 48 ## setosa setosa setosa setosa setosa setosa ## 49 50 51 52 53 54 ## setosa setosa versicolor versicolor versicolor versicolor ## 55 56 57 58 59 60 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 61 62 63 64 65 66 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 67 68 69 70 71 72 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 73 74 75 76 77 78 ## versicolor versicolor versicolor versicolor versicolor virginica ## 79 80 81 82 83 84 ## versicolor versicolor versicolor versicolor versicolor virginica ## 85 86 87 88 89 90 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 91 92 93 94 95 96 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 97 98 99 100 101 102 ## versicolor versicolor versicolor versicolor virginica virginica ## 103 104 105 106 107 108 ## virginica virginica virginica virginica virginica virginica ## 109 110 111 112 113 114 ## virginica virginica virginica virginica virginica virginica ## 115 116 117 118 119 120 ## virginica virginica virginica virginica virginica versicolor ## 121 122 123 124 125 126 ## virginica virginica virginica virginica virginica virginica ## 127 128 129 130 131 132 ## virginica virginica virginica virginica virginica virginica ## 133 134 135 136 137 138 ## virginica versicolor virginica virginica virginica virginica ## 139 140 141 142 143 144 ## virginica virginica virginica virginica virginica virginica ## 145 146 147 148 149 150 ## virginica virginica virginica virginica virginica virginica ## Levels: setosa versicolor virginica ``` ``` print(length(out)) ``` ``` ## [1] 150 ``` ``` table(matrix(out),iris[,5]) ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 48 2 ## virginica 0 2 48 ``` So it does marginally better than naive Bayes. Here is another example. 
### 7\.30\.2 Another example ``` #EXAMPLE 2 for SVM train_data = matrix(rpois(60,3),10,6) print(train_data) ``` ``` ## [,1] [,2] [,3] [,4] [,5] [,6] ## [1,] 0 4 7 6 4 2 ## [2,] 2 4 4 4 2 3 ## [3,] 2 3 5 1 6 2 ## [4,] 2 5 3 5 4 4 ## [5,] 1 3 3 1 2 3 ## [6,] 2 2 4 8 4 0 ## [7,] 2 4 3 3 4 2 ## [8,] 4 4 4 5 2 0 ## [9,] 1 5 4 1 1 2 ## [10,] 5 3 6 4 4 2 ``` ``` train_class = as.matrix(c(2,3,1,2,2,1,3,2,3,3)) print(train_class) ``` ``` ## [,1] ## [1,] 2 ## [2,] 3 ## [3,] 1 ## [4,] 2 ## [5,] 2 ## [6,] 1 ## [7,] 3 ## [8,] 2 ## [9,] 3 ## [10,] 3 ``` ``` library(e1071) model = svm(train_data,train_class) model ``` ``` ## ## Call: ## svm.default(x = train_data, y = train_class) ## ## ## Parameters: ## SVM-Type: eps-regression ## SVM-Kernel: radial ## cost: 1 ## gamma: 0.1666667 ## epsilon: 0.1 ## ## ## Number of Support Vectors: 9 ``` ``` pred = predict(model,train_data, type="raw") table(pred,train_class) ``` ``` ## train_class ## pred 1 2 3 ## 1.25759920432731 1 0 0 ## 1.56659922213705 1 0 0 ## 2.03896978308775 0 1 0 ## 2.07877220630261 0 1 0 ## 2.07882451500643 0 1 0 ## 2.079102996171 0 1 0 ## 2.50854276105477 0 0 1 ## 2.60314938880547 0 0 1 ## 2.80915400612272 0 0 1 ## 2.92106239193998 0 0 1 ``` ``` train_fitted = round(pred,0) print(cbind(train_class,train_fitted)) ``` ``` ## train_fitted ## 1 2 2 ## 2 3 3 ## 3 1 2 ## 4 2 2 ## 5 2 2 ## 6 1 1 ## 7 3 3 ## 8 2 2 ## 9 3 3 ## 10 3 3 ``` ``` train_fitted = matrix(train_fitted) table(train_class,train_fitted) ``` ``` ## train_fitted ## train_class 1 2 3 ## 1 1 1 0 ## 2 0 4 0 ## 3 0 0 4 ``` How do we know if the confusion matrix shows statistically significant classification power? We do a chi\-square test. ``` library(e1071) res = naiveBayes(iris[,1:4],iris[,5]) pred = predict(res,iris[,1:4]) out = table(pred,iris[,5]) out ``` ``` ## ## pred setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 47 3 ## virginica 0 3 47 ``` ``` chisq.test(out) ``` ``` ## ## Pearson's Chi-squared test ## ## data: out ## X-squared = 266.16, df = 4, p-value < 2.2e-16 ``` 7\.31 Word count classifiers, adjectives, and adverbs ----------------------------------------------------- 1. Given a lexicon of selected words, one may sign the words as positive or negative, and then do a simple word count to compute net sentiment or mood of text. By establishing appropriate cut offs, one can determine the classification of text into optimistic, neutral, or pessimistic. These cut offs are determined using the training and testing data sets. 2. Word count classifiers may be enhanced by focusing on “emphasis words” such as adjectives and adverbs, especially when classifying emotive content. One approach used in Das and Chen (2007\) is to identify all adjectives and adverbs in the text and then only consider words that are within \\(\\pm 3\\) words before and after the adjective or adverb. This extracts the most emphatic parts of the text only, and then mood scores it. 7\.32 Fisher’s discriminant --------------------------- * Fisher’s discriminant is simply the ratio of the variation of a given word across groups to the variation within group. * More formally, Fisher’s discriminant score \\(F(w)\\) for word \\(w\\) is \\\[ F(w) \= \\frac{\\frac{1}{K} \\sum\_{j\=1}^K ({\\bar w}\_j \- {\\bar w}\_0\)^2}{\\frac{1}{K} \\sum\_{j\=1}^K \\sigma\_j^2} \\nonumber \\] where \\(K\\) is the number of categories and \\({\\bar w}\_j\\) is the mean occurrence of the word \\(w\\) in each text in category \\(j\\), and \\({\\bar w}\_0\\) is the mean occurrence across all categories. 
And \\(\\sigma\_j^2\\) is the variance of the word occurrence in category \\(j\\). This is just one way in which Fisher’s discriminant may be calculated, and there are other variations on the theme. * We may compute \\(F(w)\\) for each word \\(w\\), and then use it to weight the word counts of each text, thereby giving greater credence to words that are better discriminants. 7\.33 Vector\-Distance Classifier --------------------------------- Suppose we have 500 documents in each of two categories, bullish and bearish. These 1,000 documents may all be placed as points in \\(n\\)\-dimensional space. It is more than likely that the points in each category will lie closer to each other than to the points in the other category. Now, if we wish to classify a new document, with vector \\(D\_i\\), the obvious idea is to look at which cluster it is closest to, or which point in either cluster it is closest to. The closeness between two documents \\(i\\) and \\(j\\) is determined easily by the well known metric of cosine distance, i.e., \\\[ 1 \- \\cos(\\theta\_{ij}) \= 1 \- \\frac{D\_i^\\top D\_j}{\|\|D\_i\|\| \\cdot \|\|D\_j\|\|} \\nonumber \\] where \\(\|\|D\_i\|\| \= \\sqrt{D\_i^\\top D\_i}\\) is the norm of the vector \\(D\_i\\). The cosine of the angle between the two document vectors is 1 if the two vectors are identical, and in this case the distance between them would be zero. 7\.34 Confusion matrix ---------------------- The confusion matrix is the classic tool for assessing classification accuracy. Given \\(n\\) categories, the matrix is of dimension \\(n \\times n\\). The rows relate to the category assigned by the analytic algorithm and the columns refer to the correct category in which the text resides. Each cell \\((i,j)\\) of the matrix contains the number of text messages that were of type \\(j\\) and were classified as type \\(i\\). The cells on the diagonal of the confusion matrix state the number of times the algorithm got the classification right. All other cells are instances of classification error. If an algorithm has no classification ability, then the rows and columns of the matrix will be independent of each other. Under this null hypothesis, the statistic that is examined for rejection is as follows: \\\[ \\chi^2\[dof\=(n\-1\)^2] \= \\sum\_{i\=1}^n \\sum\_{j\=1}^n \\frac{\[A(i,j) \- E(i,j)]^2}{E(i,j)} \\] where \\(A(i,j)\\) are the actual numbers observed in the confusion matrix, and \\(E(i,j)\\) are the expected numbers, assuming no classification ability under the null. If \\(T(i)\\) represents the total across row \\(i\\) of the confusion matrix, and \\(T(j)\\) the column total, then \\\[ E(i,j) \= \\frac{T(i) \\times T(j)}{\\sum\_{i\=1}^n T(i)} \\equiv \\frac{T(i) \\times T(j)}{\\sum\_{j\=1}^n T(j)} \\] The degrees of freedom of the \\(\\chi^2\\) statistic is \\((n\-1\)^2\\). This statistic is very easy to implement and may be applied to models for any \\(n\\). A highly significant statistic is evidence of classification ability. 7\.35 Accuracy -------------- Algorithm accuracy over a classification scheme is the percentage of text that is correctly classified. This may be done in\-sample or out\-of\-sample. To compute this off the confusion matrix, we calculate \\\[ \\mbox{Accuracy} \= \\frac{ \\sum\_{i\=1}^K O(i,i)}{\\sum\_{j\=1}^K M(j)} \= \\frac{ \\sum\_{i\=1}^K O(i,i)}{\\sum\_{i\=1}^K M(i)} \\] We should hope that this is at least greater than \\(1/K\\), which is the accuracy level achieved on average from random guessing. 
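As a quick numerical check of this formula, the minimal sketch below recomputes the naive Bayes confusion matrix for the iris data used earlier, and reads off accuracy as the sum of the diagonal cells \\(O(i,i)\\) divided by the total number of observations, next to the \\(1/K\\) guessing benchmark.

```
library(e1071)
data(iris)

#Confusion matrix for the naive Bayes classifier fitted earlier
res = naiveBayes(iris[,1:4], iris[,5])
out = table(predict(res,iris[,1:4]), iris[,5])

#Accuracy = sum of diagonal entries / total observations
print(sum(diag(out))/sum(out))

#Random-guessing benchmark 1/K, with K = 3 species
print(1/nlevels(iris[,5]))
```

With the confusion matrix shown earlier (50, 47, and 47 on the diagonal), this works out to \\(144/150 \= 0\.96\\), well above the benchmark of \\(1/3\\).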
### 7\.35\.1 Sentiment over Time

### 7\.35\.2 Stock Sentiment Correlations

### 7\.35\.3 Phase Lag Analysis

7\.36 False Positives
---------------------

1. The percentage of false positives is a useful metric to work with. It may be calculated as a simple count or as a weighted count (by nearness of wrong category) of false classifications divided by total classifications undertaken.
2. For example, assume that in the example above, category 1 is BULLISH and category 3 is BEARISH, whereas category 2 is NEUTRAL. The false positives would arise from mis\-classifying category 1 as 3 and vice\-versa. We compute the false positive rate for illustration.
3. The false positive rate is just 1% in the example below.

```
Omatrix = matrix(c(22,1,0,3,44,3,1,1,25),3,3)
print((Omatrix[1,3]+Omatrix[3,1])/sum(Omatrix))
```

```
## [1] 0.01
```

7\.37 Sentiment Error
---------------------

In a 3\-way classification scheme, where category 1 is BULLISH and category 3 is BEARISH, whereas category 2 is NEUTRAL, we can compute this metric as follows, where \\(M(i\=\\cdot)\\) denotes the row (classified) totals and \\(M(j\=\\cdot)\\) the column (actual) totals of the confusion matrix.

\\\[ \\mbox{Sentiment Error} \= 1 \- \\frac{M(i\=1\)\-M(i\=3\)}{M(j\=1\)\-M(j\=3\)} \\nonumber \\]

In our illustrative example, we may easily calculate this metric. The classified sentiment from the algorithm was \\(\-2 \= 26\-28\\), whereas it actually should have been \\(\-4 \= 23\-27\\). The percentage error in sentiment is therefore 50%.

```
print(Omatrix)
```

```
## [,1] [,2] [,3]
## [1,] 22 3 1
## [2,] 1 44 1
## [3,] 0 3 25
```

```
rsum = rowSums(Omatrix)
csum = colSums(Omatrix)
print(rsum)
```

```
## [1] 26 46 28
```

```
print(csum)
```

```
## [1] 23 50 27
```

```
print(1 - (-2)/(-4))
```

```
## [1] 0.5
```

7\.38 Disagreement
------------------

This metric uses the number of signed buys and sells in the day (based on a sentiment model) to determine how much difference of opinion there is in the market. The metric is computed as follows:

\\\[ \\mbox{DISAG} \= \\left\| 1 \- \\left\| \\frac{B\-S}{B\+S} \\right\| \\right\| \\]

where \\(B, S\\) are the numbers of classified buys and sells. Note that DISAG is bounded between zero and one. Using the classified buys (category 1, BULLISH) and sells (category 3, BEARISH) in the same example as before, we may compute disagreement. Since there is little agreement (26 buys and 28 sells), disagreement is high.

```
print(Omatrix)
```

```
## [,1] [,2] [,3]
## [1,] 22 3 1
## [2,] 1 44 1
## [3,] 0 3 25
```

```
DISAG = abs(1-abs((26-28)/(26+28)))
print(DISAG)
```

```
## [1] 0.962963
```

7\.39 Precision and Recall
--------------------------

The creation of the confusion matrix leads naturally to two measures that are associated with it. Precision is the fraction of positives identified that are truly positive, and is also known as positive predictive value. It is a measure of usefulness of prediction. So if the algorithm (say) was tasked with selecting those account holders on LinkedIn who are actually looking for a job, and it identifies \\(n\\) such people of which only \\(m\\) were really looking for a job, then the precision would be \\(m/n\\). Recall is the proportion of positives that are correctly identified, and is also known as sensitivity. It is a measure of how complete the prediction is. If the actual number of people looking for a job on LinkedIn was \\(M\\), then recall would be \\(m/M\\). For example, suppose we have the following confusion matrix.

| | **Actual:** Looking for Job | **Actual:** Not Looking | Total |
| --- | --- | --- | --- |
| **Predicted:** Looking for Job | 10 | 2 | 12 |
| **Predicted:** Not Looking | 1 | 16 | 17 |
| Total | 11 | 18 | 29 |

In this case precision is \\(10/12\\) and recall is \\(10/11\\). One minus precision is the rate of false positives (Type I errors) among the predicted positives, and one minus recall is the rate of false negatives (Type II errors) among the actual positives. One may also think of this in terms of true and false positives. In total, 12 positives are predicted by the model, of which 10 are true positives and 2 are false positives. These values go into calculating precision. Of the predicted negatives, 1 is a false negative, and this goes into calculating recall. Precision therefore captures the relevancy of the results returned, while recall captures how completely the relevant results are retrieved.

7\.40 RTextTools package
------------------------

This package bundles several text classification algorithms into a single interface.

```
library(tm)
library(RTextTools)
```

```
## Loading required package: SparseM
```

```
## Warning: package 'SparseM' was built under R version 3.3.2
```

```
## 
## Attaching package: 'SparseM'
```

```
## The following object is masked from 'package:base':
## 
## backsolve
```

```
## 
## Attaching package: 'RTextTools'
```

```
## The following objects are masked from 'package:SnowballC':
## 
## getStemLanguages, wordStem
```

```
#Create sample text with positive and negative markers
n = 1000
npos = round(runif(n,1,25))
nneg = round(runif(n,1,25))
flag = matrix(0,n,1)
flag[which(npos>nneg)] = 1
text = NULL
for (j in 1:n) {
  res = paste(c(sample(poswords,npos[j]),sample(negwords,nneg[j])),collapse=" ")
  text = c(text,res)
}

#Text Classification
m = create_matrix(text)
print(m)
```

```
## <<DocumentTermMatrix (documents: 1000, terms: 3711)>>
## Non-/sparse entries: 26023/3684977
## Sparsity : 99%
## Maximal term length: 17
## Weighting : term frequency (tf)
```

```
m = create_matrix(text,weighting=weightTfIdf)
print(m)
```

```
## <<DocumentTermMatrix (documents: 1000, terms: 3711)>>
## Non-/sparse entries: 26023/3684977
## Sparsity : 99%
## Maximal term length: 17
## Weighting : term frequency - inverse document frequency (normalized) (tf-idf)
```

```
container <- create_container(m,flag,trainSize=1:(n/2), testSize=(n/2+1):n,virgin=FALSE)
#models <- train_models(container, algorithms=c("MAXENT","SVM","GLMNET","SLDA","TREE","BAGGING","BOOSTING","RF"))
models <- train_models(container, algorithms=c("MAXENT","SVM","GLMNET","TREE"))
results <- classify_models(container, models)
analytics <- create_analytics(container, results)

#RESULTS
#analytics@algorithm_summary # SUMMARY OF PRECISION, RECALL, F-SCORES, AND ACCURACY SORTED BY TOPIC CODE FOR EACH ALGORITHM
#analytics@label_summary # SUMMARY OF LABEL (e.g. TOPIC) ACCURACY
#analytics@document_summary # RAW SUMMARY OF ALL DATA AND SCORING
#analytics@ensemble_summary # SUMMARY OF ENSEMBLE PRECISION/COVERAGE. USES THE n VARIABLE PASSED INTO create_analytics()

#CONFUSION MATRIX
yhat = as.matrix(analytics@document_summary$CONSENSUS_CODE)
y = flag[(n/2+1):n]
print(table(y,yhat))
```

```
## yhat
## y 0 1
## 0 255 6
## 1 212 27
```

7\.41 Grading Text
------------------

In recent years, the SAT exams added a new essay section. While the test aimed at assessing original writing, it also introduced automated grading. A goal of the test is to assess the writing level of the student. This is associated with the notion of *readability*.

### 7\.41\.1 Readability

"Readability" is a metric of how easy it is to comprehend text.
Given a goal of efficient markets, regulators want to foster transparency by making sure financial documents that are disseminated to the investing public are readable. Hence, metrics for readability are very important and have recently been gaining traction.

### 7\.41\.2 Gunning\-Fog Index

Gunning (1952\) developed the Fog index. The index estimates the years of formal education needed to understand text on a first reading. A fog index of 12 requires the reading level of a U.S. high school senior (around 18 years old). The index is based on the idea that poor readability is associated with longer sentences and complex words. Complex words are those that have more than two syllables. The formula for the Fog index is

\\\[ 0\.4 \\cdot \\left\[\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}} \+ 100 \\cdot \\left( \\frac{\\mbox{\\\#complex words}}{\\mbox{\\\#words}} \\right) \\right] \\]

Alternative readability scores use similar ideas. The Flesch Reading Ease Score and the Flesch\-Kincaid Grade Level also use counts of words, syllables, and sentences. See [http://en.wikipedia.org/wiki/Flesch\-Kincaid\_readability\_tests](http://en.wikipedia.org/wiki/Flesch-Kincaid_readability_tests). The Flesch Reading Ease Score is defined as

\\\[ 206\.835 \- 1\.015 \\left(\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}}\\right) \- 84\.6 \\left( \\frac{\\mbox{\\\#syllables}}{\\mbox{\\\#words}} \\right) \\]

Scores of 90\-100 are easily understood by an 11\-year\-old, scores of 60\-70 are easy to understand for 13\-15 year olds, and scores of 0\-30 are best suited to university graduates.

### 7\.41\.3 The Flesch\-Kincaid Grade Level

This is defined as

\\\[ 0\.39 \\left(\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}}\\right) \+ 11\.8 \\left( \\frac{\\mbox{\\\#syllables}}{\\mbox{\\\#words}} \\right) \-15\.59 \\]

which gives a number that corresponds to the grade level. As expected, these two measures are negatively correlated. Various other measures of readability use the same ideas as in the Fog index. For example, the Coleman and Liau (1975\) index does not even require a count of syllables, as follows:

\\\[ CLI \= 0\.0588 L \- 0\.296 S \- 15\.8 \\]

where \\(L\\) is the average number of letters per hundred words and \\(S\\) is the average number of sentences per hundred words. Standard readability metrics may not work well for financial text. Loughran and McDonald (2014\) find that the Fog index is inferior to simply looking at 10\-K file size.

**References**

M. Coleman and T. L. Liau (1975\). A computer readability formula designed for machine scoring. *Journal of Applied Psychology* 60, 283\-284\.

T. Loughran and W. McDonald (2014\). Measuring readability in financial disclosures. *The Journal of Finance* 69, 1643\-1671\.

7\.42 koRpus package
--------------------

The R package koRpus provides a wide range of readability measures; see [http://www.inside\-r.org/packages/cran/koRpus/docs/readability](http://www.inside-r.org/packages/cran/koRpus/docs/readability). First, let’s grab some text from my web site.

```
library(rvest)
url = "http://srdas.github.io/bio-candid.html"
doc.html = read_html(url)
text = doc.html %>% html_nodes("p") %>% html_text()
text = gsub("[\t\n]"," ",text)
text = gsub('"'," ",text) #removes single backslash
text = paste(text, collapse=" ")
print(text)
```

```
## [1] " Sanjiv Das: A Short Academic Life History After loafing and working in many parts of Asia, but never really growing up, Sanjiv moved to New York to change the world, hopefully through research. He graduated in 1994 with a Ph.D.
from NYU, and since then spent five years in Boston, and now lives in San Jose, California. Sanjiv loves animals, places in the world where the mountains meet the sea, riding sport motorbikes, reading, gadgets, science fiction movies, and writing cool software code. When there is time available from the excitement of daily life, Sanjiv writes academic papers, which helps him relax. Always the contrarian, Sanjiv thinks that New York City is the most calming place in the world, after California of course. Sanjiv is now a Professor of Finance at Santa Clara University. He came to SCU from Harvard Business School and spent a year at UC Berkeley. In his past life in the unreal world, Sanjiv worked at Citibank, N.A. in the Asia-Pacific region. He takes great pleasure in merging his many previous lives into his current existence, which is incredibly confused and diverse. Sanjiv's research style is instilled with a distinct New York state of mind - it is chaotic, diverse, with minimal method to the madness. He has published articles on derivatives, term-structure models, mutual funds, the internet, portfolio choice, banking models, credit risk, and has unpublished articles in many other areas. Some years ago, he took time off to get another degree in computer science at Berkeley, confirming that an unchecked hobby can quickly become an obsession. There he learnt about the fascinating field of Randomized Algorithms, skills he now applies earnestly to his editorial work, and other pursuits, many of which stem from being in the epicenter of Silicon Valley. Coastal living did a lot to mold Sanjiv, who needs to live near the ocean. The many walks in Greenwich village convinced him that there is no such thing as a representative investor, yet added many unique features to his personal utility function. He learnt that it is important to open the academic door to the ivory tower and let the world in. Academia is a real challenge, given that he has to reconcile many more opinions than ideas. He has been known to have turned down many offers from Mad magazine to publish his academic work. As he often explains, you never really finish your education - you can check out any time you like, but you can never leave. Which is why he is doomed to a lifetime in Hotel California. And he believes that, if this is as bad as it gets, life is really pretty good. " ``` Now we can assess it for readability. ``` library(koRpus) ``` ``` ## ## Attaching package: 'koRpus' ``` ``` ## The following object is masked from 'package:lsa': ## ## query ``` ``` write(text,file="textvec.txt") text_tokens = tokenize("textvec.txt",lang="en") #print(text_tokens) print(c("Number of sentences: ",text_tokens@desc$sentences)) ``` ``` ## [1] "Number of sentences: " "24" ``` ``` print(c("Number of words: ",text_tokens@desc$words)) ``` ``` ## [1] "Number of words: " "446" ``` ``` print(c("Number of words per sentence: ",text_tokens@desc$avg.sentc.length)) ``` ``` ## [1] "Number of words per sentence: " "18.5833333333333" ``` ``` print(c("Average length of words: ",text_tokens@desc$avg.word.length)) ``` ``` ## [1] "Average length of words: " "4.67488789237668" ``` Next we generate several indices of readability, which are worth looking at. 
```
print(readability(text_tokens))
```

```
## Hyphenation (language: en)
```

```
## Warning: Bormuth: Missing word list, hence not calculated.
```

```
## Warning: Coleman: POS tags are not elaborate enough, can't count pronouns
## and prepositions. Formulae skipped.
```

```
## Warning: Dale-Chall: Missing word list, hence not calculated.
```

```
## Warning: DRP: Missing Bormuth Mean Cloze, hence not calculated.
```

```
## Warning: Harris.Jacobson: Missing word list, hence not calculated.
```

```
## Warning: Spache: Missing word list, hence not calculated.
```

```
## Warning: Traenkle.Bailer: POS tags are not elaborate enough, can't count
## prepositions and conjuctions.
Formulae skipped. ``` ``` ## Warning: Note: The implementations of these formulas are still subject to validation: ## Coleman, Danielson.Bryan, Dickes.Steiwer, ELF, Fucks, Harris.Jacobson, nWS, Strain, Traenkle.Bailer, TRI ## Use the results with caution, even if they seem plausible! ``` ``` ## ## Automated Readability Index (ARI) ## Parameters: default ## Grade: 9.88 ## ## ## Coleman-Liau ## Parameters: default ## ECP: 47% (estimted cloze percentage) ## Grade: 10.09 ## Grade: 10.1 (short formula) ## ## ## Danielson-Bryan ## Parameters: default ## DB1: 7.64 ## DB2: 48.58 ## Grade: 9-12 ## ## ## Dickes-Steiwer's Handformel ## Parameters: default ## TTR: 0.58 ## Score: 42.76 ## ## ## Easy Listening Formula ## Parameters: default ## Exsyls: 149 ## Score: 6.21 ## ## ## Farr-Jenkins-Paterson ## Parameters: default ## RE: 56.1 ## Grade: >= 10 (high school) ## ## ## Flesch Reading Ease ## Parameters: en (Flesch) ## RE: 59.75 ## Grade: >= 10 (high school) ## ## ## Flesch-Kincaid Grade Level ## Parameters: default ## Grade: 9.54 ## Age: 14.54 ## ## ## Gunning Frequency of Gobbledygook (FOG) ## Parameters: default ## Grade: 12.55 ## ## ## FORCAST ## Parameters: default ## Grade: 10.01 ## Age: 15.01 ## ## ## Fucks' Stilcharakteristik ## Score: 86.88 ## Grade: 9.32 ## ## ## Linsear Write ## Parameters: default ## Easy words: 87 ## Hard words: 13 ## Grade: 11.71 ## ## ## Läsbarhetsindex (LIX) ## Parameters: default ## Index: 40.56 ## Rating: standard ## Grade: 6 ## ## ## Neue Wiener Sachtextformeln ## Parameters: default ## nWS 1: 5.42 ## nWS 2: 5.97 ## nWS 3: 6.28 ## nWS 4: 6.81 ## ## ## Readability Index (RIX) ## Parameters: default ## Index: 4.08 ## Grade: 9 ## ## ## Simple Measure of Gobbledygook (SMOG) ## Parameters: default ## Grade: 12.01 ## Age: 17.01 ## ## ## Strain Index ## Parameters: default ## Index: 8.45 ## ## ## Kuntzsch's Text-Redundanz-Index ## Parameters: default ## Short words: 297 ## Punctuation: 71 ## Foreign: 0 ## Score: -56.22 ## ## ## Tuldava's Text Difficulty Formula ## Parameters: default ## Index: 4.43 ## ## ## Wheeler-Smith ## Parameters: default ## Score: 62.08 ## Grade: > 4 ## ## Text language: en ``` 7\.43 Text Summarization ------------------------ It is really easy to write a summarizer in a few lines of code. The function below takes in a text array and does the needful. Each element of the array is one sentence of the document we wan summarized. In the function we need to calculate how similar each sentence is to any other one. This could be done using cosine similarity, but here we use another approach, Jaccard similarity. Given two sentences, Jaccard similarity is the ratio of the size of the intersection word set divided by the size of the union set. ### 7\.43\.1 Jaccard Similarity A document \\(D\\) is comprised of \\(m\\) sentences \\(s\_i, i\=1,2,...,m\\), where each \\(s\_i\\) is a set of words. We compute the pairwise overlap between sentences using the **Jaccard** similarity index: \\\[ J\_{ij} \= J(s\_i, s\_j) \= \\frac{\|s\_i \\cap s\_j\|}{\|s\_i \\cup s\_j\|} \= J\_{ji} \\] The overlap is the ratio of the size of the intersect of the two word sets in sentences \\(s\_i\\) and \\(s\_j\\), divided by the size of the union of the two sets. The similarity score of each sentence is computed as the row sums of the Jaccard similarity matrix. \\\[ {\\cal S}\_i \= \\sum\_{j\=1}^m J\_{ij} \\] ### 7\.43\.2 Generating the summary Once the row sums are obtained, they are sorted and the summary is the first \\(n\\) sentences based on the \\({\\cal S}\_i\\) values. 
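Before turning to the full summarizer, here is a minimal added illustration of the Jaccard calculation for two short, made\-up sentences, treating each sentence as a set of words via intersect and union, exactly as the function below does.

```
#Two toy sentences treated as word sets
a = unlist(strsplit("the cat sat on the mat"," "))
b = unlist(strsplit("the dog sat on the rug"," "))

#Jaccard similarity = size of intersection / size of union
J = length(intersect(a,b))/length(union(a,b))
print(J)
```

The intersection is {the, sat, on} and the union contains seven distinct words, so \\(J \= 3/7 \\approx 0\.43\\).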
``` # FUNCTION TO RETURN n SENTENCE SUMMARY # Input: array of sentences (text) # Output: n most common intersecting sentences text_summary = function(text, n) { m = length(text) # No of sentences in input jaccard = matrix(0,m,m) #Store match index for (i in 1:m) { for (j in i:m) { a = text[i]; aa = unlist(strsplit(a," ")) b = text[j]; bb = unlist(strsplit(b," ")) jaccard[i,j] = length(intersect(aa,bb))/ length(union(aa,bb)) jaccard[j,i] = jaccard[i,j] } } similarity_score = rowSums(jaccard) res = sort(similarity_score, index.return=TRUE, decreasing=TRUE) idx = res$ix[1:n] summary = text[idx] } ``` ### 7\.43\.3 Example: Summarization We will use a sample of text that I took from Bloomberg news. It is about the need for data scientists. ``` url = "DSTMAA_data/dstext_sample.txt" #You can put any text file or URL here text = read_web_page(url,cstem=0,cstop=0,ccase=0,cpunc=0,cflat=1) print(length(text[[1]])) ``` ``` ## [1] 1 ``` ``` print("ORIGINAL TEXT") ``` ``` ## [1] "ORIGINAL TEXT" ``` ``` print(text) ``` ``` ## [1] "THERE HAVE BEEN murmurings that we are now in the “trough of disillusionment” of big data, the hype around it having surpassed the reality of what it can deliver. Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data. Data scientists were meant to be the answer to this issue. Indeed, Hal Varian, Chief Economist at Google famously joked that “The sexy job in the next 10 years will be statisticians.” He was clearly right as we are now used to hearing that data scientists are the key to unlocking the value of big data. This has created a huge market for people with these skills. US recruitment agency, Glassdoor, report that the average salary for a data scientist is $118,709 versus $64,537 for a skilled programmer. And a McKinsey study predicts that by 2018, the United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and a 1.5 million shortage of managers with the skills to understand and make decisions based on analysis of big data. It’s no wonder that companies are keen to employ data scientists when, for example, a retailer using big data can reportedly increase their margin by more than 60%. However, is it really this simple? Can data scientists actually justify earning their salaries when brands seem to be struggling to realize the promise of big data? Perhaps we are expecting too much of data scientists. May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets. The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole. This theme of centralized vs. decentralized decision-making is one that has long been debated in the management literature. For many organisations a centralized structure helps maintain control over a vast international operation, plus ensures consistency of customer experience. Others, meanwhile, may give managers at a local level decision-making power particularly when it comes to tactical needs. 
But the issue urgently needs revisiting in the context of big data as the way in which organisations manage themselves around data may well be a key factor for brands in realizing the value of their data assets. Economist and philosopher Friedrich Hayek took the view that organisations should consider the purpose of the information itself. Centralized decision-making can be more cost-effective and co-ordinated, he believed, but decentralization can add speed and local information that proves more valuable, even if the bigger picture is less clear. He argued that organisations thought too highly of centralized knowledge, while ignoring ‘knowledge of the particular circumstances of time and place’. But it is only relatively recently that economists are starting to accumulate data that allows them to gauge how successful organisations organize themselves. One such exercise reported by Tim Harford was carried out by Harvard Professor Julie Wulf and the former chief economist of the International Monetary Fund, Raghuram Rajan. They reviewed the workings of large US organisations over fifteen years from the mid-80s. What they found was successful companies were often associated with a move towards decentralisation, often driven by globalisation and the need to react promptly to a diverse and swiftly-moving range of markets, particularly at a local level. Their research indicated that decentralisation pays. And technological advancement often goes hand-in-hand with decentralization. Data analytics is starting to filter down to the department layer, where executives are increasingly eager to trawl through the mass of information on offer. Cloud computing, meanwhile, means that line managers no longer rely on IT teams to deploy computer resources. They can do it themselves, in just minutes. The decentralization trend is now impacting on technology spending. According to Gartner, chief marketing officers have been given the same purchasing power in this area as IT managers and, as their spending rises, so that of data centre managers is falling. Tim Harford makes a strong case for the way in which this decentralization is important given that the environment in which we operate is so unpredictable. Innovation typically comes, he argues from a “swirling mix of ideas not from isolated minds.” And he cites Jane Jacobs, writer on urban planning– who suggested we find innovation in cities rather than on the Pacific islands. But this approach is not necessarily always adopted. For example, research by academics Donald Marchand and Joe Peppard discovered that there was still a tendency for brands to approach big data projects the same way they would existing IT projects: i.e. using centralized IT specialists with a focus on building and deploying technology on time, to plan, and within budget. The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e. how do people actually deliver value from data assets. Marchand and Peppard suggest (among other recommendations) that those who need to be able to create meaning from data should be at the heart of any initiative. As ever then, the real value from data comes from asking the right questions of the data. And the right questions to ask only emerge if you are close enough to the business to see them. Are data scientists earning their salary? 
In my view they are a necessary but not sufficient part of the solution; brands need to be making greater investment in working with a greater range of users to help them ask questions of the data. Which probably means that data scientists’ salaries will need to take a hit in the process." ``` ``` text2 = strsplit(text,". ",fixed=TRUE) #Special handling of the period. text2 = text2[[1]] print("SENTENCES") ``` ``` ## [1] "SENTENCES" ``` ``` print(text2) ``` ``` ## [1] "THERE HAVE BEEN murmurings that we are now in the “trough of disillusionment” of big data, the hype around it having surpassed the reality of what it can deliver" ## [2] " Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data" ## [3] "Data scientists were meant to be the answer to this issue" ## [4] "Indeed, Hal Varian, Chief Economist at Google famously joked that “The sexy job in the next 10 years will be statisticians.” He was clearly right as we are now used to hearing that data scientists are the key to unlocking the value of big data" ## [5] "This has created a huge market for people with these skills" ## [6] "US recruitment agency, Glassdoor, report that the average salary for a data scientist is $118,709 versus $64,537 for a skilled programmer" ## [7] "And a McKinsey study predicts that by 2018, the United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and a 1.5 million shortage of managers with the skills to understand and make decisions based on analysis of big data" ## [8] " It’s no wonder that companies are keen to employ data scientists when, for example, a retailer using big data can reportedly increase their margin by more than 60%" ## [9] " However, is it really this simple? Can data scientists actually justify earning their salaries when brands seem to be struggling to realize the promise of big data? 
Perhaps we are expecting too much of data scientists" ## [10] "May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets" ## [11] "The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole" ## [12] "This theme of centralized vs" ## [13] "decentralized decision-making is one that has long been debated in the management literature" ## [14] " For many organisations a centralized structure helps maintain control over a vast international operation, plus ensures consistency of customer experience" ## [15] "Others, meanwhile, may give managers at a local level decision-making power particularly when it comes to tactical needs" ## [16] " But the issue urgently needs revisiting in the context of big data as the way in which organisations manage themselves around data may well be a key factor for brands in realizing the value of their data assets" ## [17] "Economist and philosopher Friedrich Hayek took the view that organisations should consider the purpose of the information itself" ## [18] "Centralized decision-making can be more cost-effective and co-ordinated, he believed, but decentralization can add speed and local information that proves more valuable, even if the bigger picture is less clear" ## [19] " He argued that organisations thought too highly of centralized knowledge, while ignoring ‘knowledge of the particular circumstances of time and place’" ## [20] "But it is only relatively recently that economists are starting to accumulate data that allows them to gauge how successful organisations organize themselves" ## [21] "One such exercise reported by Tim Harford was carried out by Harvard Professor Julie Wulf and the former chief economist of the International Monetary Fund, Raghuram Rajan" ## [22] "They reviewed the workings of large US organisations over fifteen years from the mid-80s" ## [23] "What they found was successful companies were often associated with a move towards decentralisation, often driven by globalisation and the need to react promptly to a diverse and swiftly-moving range of markets, particularly at a local level" ## [24] "Their research indicated that decentralisation pays" ## [25] "And technological advancement often goes hand-in-hand with decentralization" ## [26] "Data analytics is starting to filter down to the department layer, where executives are increasingly eager to trawl through the mass of information on offer" ## [27] "Cloud computing, meanwhile, means that line managers no longer rely on IT teams to deploy computer resources" ## [28] "They can do it themselves, in just minutes" ## [29] " The decentralization trend is now impacting on technology spending" ## [30] "According to Gartner, chief marketing officers have been given the same purchasing power in this area as IT managers and, as their spending rises, so that of data centre managers is falling" ## [31] "Tim Harford makes a strong case for the way in which this decentralization is important given that the environment in which we operate is so unpredictable" ## [32] "Innovation typically comes, he argues from a “swirling mix of ideas not from isolated minds.” And he cites Jane Jacobs, writer on urban planning– who suggested we find innovation in cities rather than on the Pacific islands" ## [33] "But this approach is not 
necessarily always adopted" ## [34] "For example, research by academics Donald Marchand and Joe Peppard discovered that there was still a tendency for brands to approach big data projects the same way they would existing IT projects: i.e" ## [35] "using centralized IT specialists with a focus on building and deploying technology on time, to plan, and within budget" ## [36] "The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e" ## [37] "how do people actually deliver value from data assets" ## [38] "Marchand and Peppard suggest (among other recommendations) that those who need to be able to create meaning from data should be at the heart of any initiative" ## [39] "As ever then, the real value from data comes from asking the right questions of the data" ## [40] "And the right questions to ask only emerge if you are close enough to the business to see them" ## [41] "Are data scientists earning their salary? In my view they are a necessary but not sufficient part of the solution; brands need to be making greater investment in working with a greater range of users to help them ask questions of the data" ## [42] "Which probably means that data scientists’ salaries will need to take a hit in the process." ``` ``` print("SUMMARY") ``` ``` ## [1] "SUMMARY" ``` ``` res = text_summary(text2,5) print(res) ``` ``` ## [1] " Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data" ## [2] "The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole" ## [3] "May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets" ## [4] "The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e" ## [5] "Which probably means that data scientists’ salaries will need to take a hit in the process." ``` 7\.44 Research in Finance ------------------------- In this segment we explore various text mining research in the field of finance. 1. Lu, Chen, Chen, Hung, and Li (2010\) categorize finance related textual content into three categories: (a) forums, blogs, and wikis; (b) news and research reports; and (c) content generated by firms. 2. Extracting sentiment and other information from messages posted to stock message boards such as Yahoo!, Motley Fool, Silicon Investor, Raging Bull, etc., see Tumarkin and Whitelaw (2001\), Antweiler and Frank (2004\), Antweiler and Frank (2005\), Das, Martinez\-Jerez and Tufano (2005\), Das and Chen (2007\). 3. Other news sources: Lexis\-Nexis, Factiva, Dow Jones News, etc., see Das, Martinez\-Jerez and Tufano (2005\); Boudoukh, Feldman, Kogan, Richardson (2012\). 4. The Heard on the Street column in the Wall Street Journal has been used in work by Tetlock (2007\), Tetlock, Saar\-Tsechansky and Macskassay (2008\); see also the use of Wall Street Journal articles by Lu, Chen, Chen, Hung, and Li (2010\). 5. 
Thomson\-Reuters NewsScope Sentiment Engine (RNSE) based on Infonics/Lexalytics algorithms and varied data on stocks and text from internal databases, see Leinweber and Sisk (2011\). Zhang and Skiena (2010\) develop a market neutral trading strategy using news media such as tweets, over 500 newspapers, Spinn3r RSS feeds, and LiveJournal. ### 7\.44\.1 Das and Chen (*Management Science* 2007\) ### 7\.44\.2 Using Twitter and Facebook for Market Prediction 1. Bollen, Mao, and Zeng (2010\) claimed that stock direction of the Dow Jones Industrial Average can be predicted using tweets with 87\.6% accuracy. 2. Bar\-Haim, Dinur, Feldman, Fresko and Goldstein (2011\) attempt to predict stock direction using tweets by detecting and overweighting the opinion of expert investors. 3. Brown (2012\) looks at the correlation between tweets and the stock market via several measures. 4. Logunov (2011\) uses OpinionFinder to generate many measures of sentiment from tweets. 5. Twitter based sentiment developed by Rao and Srivastava (2012\) is found to be highly correlated with stock prices and indexes, as high as 0\.88 for returns. 6. Sprenger and Welpe (2010\) find that tweet bullishness is associated with abnormal stock returns and tweet volume predicts trading volume. 7\.45 Polarity and Subjectivity ------------------------------- Zhang and Skiena (2010\) use Twitter feeds and also three other sources of text: over 500 nationwide newspapers, RSS feeds from blogs, and LiveJournal blogs. These are used to compute two metrics. \\\[ \\mbox{polarity} \= \\frac{n\_{pos} \- n\_{neg}}{n\_{pos} \+ n\_{neg}} \\] \\\[ \\mbox{subjectivity} \= \\frac{n\_{pos} \+ n\_{neg}}{N} \\] where \\(N\\) is the total number of words in a text document, \\(n\_{pos}, n\_{neg}\\) are the number of positive and negative words, respectively. * They find that the number of articles is predictive of trading volume. * Subjectivity is also predictive of trading volume, lending credence to the idea that differences of opinion make markets. * Stock return prediction is weak using polarity, but tweets do seem to have some predictive power. * Various sentiment driven market neutral strategies are shown to be profitable, though the study is not tested for robustness. Logunov (2011\) uses tweets data, and applies OpinionFinder and also developed a new classifier called Naive Emoticon Classification to encode sentiment. This is an unusual and original, albeit quite intuitive use of emoticons to determine mood in text mining. If an emoticon exists, then the tweet is automatically coded with that sentiment of emotion. Four types of emoticons are considered: Happy (H), Sad (S), Joy (J), and Cry (C). Polarity is defined here as \\\[ \\mbox{polarity} \= A \= \\frac{n\_H \+ n\_J}{n\_H \+ n\_S \+ n\_J \+ n\_C} \\] Values greater than 0\.5 are positive. \\(A\\) stands for aggregate sentiment and appears to be strongly autocorrelated. Overall, prediction evidence is weak. ### 7\.45\.1 Text Mining Corporate Reports * Text analysis is undertaken across companies in a cross\-section. * The quality of text in company reports is much better than in message postings. * Textual analysis in this area has also resulted in technical improvements. Rudimentary approaches such as word count methods have been extended to weighted schemes, where weights are determined in statistical ways. In Das and Chen (2007\), the discriminant score of each word across classification categories is used as a weighting index for the importance of words. 
There is a proliferation of word\-weighting schemes. One popular idea is to use the “inverse document frequency’’ (\\(idf\\)) as a weighting coefficient. Hence, the \\(idf\\) for word \\(j\\) would be \\\[ w\_j^{idf} \= \\ln \\left( \\frac{N}{df\_j} \\right) \\] where \\(N\\) is the total number of documents, and \\(df\_j\\) is the number of documents containing word \\(j\\). This scheme was proposed by Manning and Schutze (1999\). * Loughran and McDonald (2011\) use this weighting approach to modify the word (term) frequency counts in the documents they analyze. The weight on word \\(j\\) in document \\(i\\) is specified as \\\[ w\_{ij} \= \\max\[0,1 \+ \\ln(f\_{ij}) w\_{j}^{idf}] \\] where \\(f\_{ij}\\) is the frequency count of word \\(j\\) in document \\(i\\). This leads naturally to a document score of \\\[ S\_i^{LM} \= \\frac{1}{1\+\\ln(a\_i)} \\sum\_{j\=1}^J w\_{ij} \\] Here \\(a\_i\\) is the total number of words in document \\(i\\), and \\(J\\) is the total number of words in the lexicon. (The \\(LM\\) superscript signifies the weighting approach.) * Whereas the \\(idf\\) approach is intuitive, it does not have to be relevant for market activity. An alternate and effective weighting scheme has been developed in Jegadeesh and Wu (2013, JW) using market movements. Words that occur more often on large market move days are given a greater weight than other words. JW show that this scheme is superior to an unweighted one, and delivers an accurate system for determining the “tone’’ of a regulatory filing. * JW also conduct robustness checks that suggest that the approach is quite general, and applies to other domains with no additional modifications to the specification. Indeed, they find that tone extraction from 10\-Ks may be used to predict IPO underpricing. 7\.46 Tone ---------- * Jegadeesh and Wu (2013\) create a “global lexicon’’ merging multiple word lists from the Harvard\-IV\-4 Psychological Dictionaries (Harvard Inquirer), the Lasswell Value Dictionary, the Loughran and McDonald lists, and the word list in Bradley and Lang (1999\). They test this lexicon for robustness by checking (a) that the lexicon delivers accurate tone scores and (b) that it is complete, by discarding 50% of the words and seeing whether it causes a material change in results (it does not). * This approach provides a more reliable measure of document tone than preceding approaches. Their measure of **filing tone** is statistically related to filing period returns after accounting for reasonable control variables. Tone is significantly related to returns for up to two weeks after filing, and it appears that the market underreacts to tone, and that this is corrected within this two\-week window. * The tone score of document \\(i\\) in the JW paper is specified as \\\[ S\_i^{JW} \= \\frac{1}{a\_i} \\sum\_{j\=1}^J w\_j f\_{ij} \\] where \\(w\_j\\) is the weight for word \\(j\\) based on its relationship to market movement. (The \\(JW\\) superscript signifies the weighting approach.) * The following regression is used to determine the value of \\(w\_j\\) (across all documents).
\\\[ \\begin{aligned} r\_i \&\= a \+ b \\cdot S\_i^{JW} \+ \\epsilon\_i \\\\ \&\= a \+ b \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J w\_j f\_{ij} \\right) \+ \\epsilon\_i \\\\ \&\= a \+ \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J (b w\_j) f\_{ij} \\right) \+ \\epsilon\_i \\\\ \&\= a \+ \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J B\_j f\_{ij} \\right) \+ \\epsilon\_i \\end{aligned} \\] where \\(r\_i\\) is the abnormal return around the release of document \\(i\\), and \\(B\_j\=b w\_j\\) is a modified word weight. This is then translated back into the original estimated word weight by normalization, i.e., \\\[ w\_j \= \\frac{B\_j \- \\frac{1}{J}\\sum\_{j\=1}^J B\_j}{\\sigma(B\_j)} \\] where \\(\\sigma(B\_j)\\) is the standard deviation of \\(B\_j\\) across all \\(J\\) words in the lexicon. * Abnormal return \\(r\_i\\) is defined as the three\-day excess return over the CRSP value\-weighted return. \\\[ r\_i \= \\prod\_{t\=0}^3 ret\_{it} \- \\prod\_{t\=1}^3 ret\_{VW,t} \\] Instead of \\(r\_i\\) as the left\-hand side variable in the regression, one might also use a binary variable for good and bad news, positive or negative 10\-Ks, etc., and instead of the regression we would use a limited dependent variable structure such as logit, probit, or even a Bayes classifier. However, the advantages of \\(r\_i\\) being a continuous variable are considerable, for it offers a range of outcomes and a simpler regression fit. * JW use data from 10\-K filings over the period 1995–2010 extracted from SEC’s EDGAR database. They ignore positive and negative words when a negator occurs within a distance of three words, the negators being the words “not, no, never’’. * Word weight scores are computed for the entire sample, and also for three roughly equal concatenated subperiods. The correlation of word weights across these subperiods is high, around 0\.50 on average. Hence, the word weights appear to be quite stable over time and across different economic regimes. As would be expected, when two subperiods are used the correlation of word weights is higher, suggesting that longer samples deliver better weighting scores. Interestingly, the correlation of JW scores with LM \\(idf\\) scores is low, and therefore, they are not substitutes. * JW examine the market variables that determine the document score \\(S\_i^{JW}\\) for each 10\-K, with right\-hand side variables such as the size of the firm, book\-to\-market, volatility, turnover, the three\-day excess return over CRSP VW around earnings announcements, and accruals. Both positive and negative tone are significantly related to size and BM, suggesting that risk factors are captured in the score. * Volatility is also significant and has the correct sign, i.e., increases in volatility make negative tone more negative and positive tone less positive. * The same holds for turnover, in that more turnover makes tone pessimistic. The greater the earnings announcement abnormal return, the higher the tone, though this is not significant. Accruals do not significantly relate to the score. * When regressing the filing period return on document score and other controls (same as in the previous paragraph), the score is always statistically significant. Hence text in the 10\-Ks does correlate with the market’s view of the firm after incorporating the information in the 10\-K and from other sources. * Finally, JW find a negative relation between tone and IPO underpricing, suggesting that term weights from one domain can be reliably used in a different domain.
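To make the weighting mechanics above concrete, here is a small, self-contained R sketch. It is not the authors’ code: the word list, frequency counts, document lengths, and abnormal returns below are all made up for illustration. The sketch computes \\(idf\\) weights, evaluates a tone score of the form \\(S\_i \= \\frac{1}{a\_i} \\sum\_j w\_j f\_{ij}\\) for each document, and backs out regression-based word weights in the spirit of JW from a toy term-frequency matrix.

```
# Toy data (hypothetical): 6 filings, 4 lexicon words; f[i,j] = count of word j in document i
f = rbind(doc1 = c(2, 0, 1, 0),
          doc2 = c(0, 1, 4, 0),
          doc3 = c(1, 2, 0, 1),
          doc4 = c(3, 0, 2, 0),
          doc5 = c(0, 3, 1, 2),
          doc6 = c(1, 1, 0, 0))
colnames(f) = c("loss", "gain", "risk", "growth")
a = c(120, 180, 150, 200, 160, 140)     # total number of words in each document (made up)

# (1) idf weights: w_j = ln(N / df_j), where df_j = number of documents containing word j
N     = nrow(f)
w_idf = log(N / colSums(f > 0))
print(w_idf)

# (2) Tone score for a given weight vector: S_i = (1/a_i) * sum_j w_j * f_ij
score = function(w) as.numeric((f %*% w) / a)

# (3) Regression-based weights: regress (made-up) abnormal filing returns on f_ij / a_i
r    = c(-0.012, 0.034, 0.001, -0.020, 0.015, -0.003)   # hypothetical 3-day abnormal returns
X    = f / a                                            # scaled term frequencies
fit  = lm(r ~ X)
B    = coef(fit)[-1]                                    # B_j = b * w_j from the regression
w_jw = (B - mean(B)) / sd(B)                            # normalize as in the text
print(w_jw)

# Compare document tone under the two weighting schemes
print(cbind(S_idf = score(w_idf), S_jw = score(w_jw)))
```

With only six documents this is purely illustrative; the actual JW weights are estimated over thousands of 10-K filings and a much larger lexicon.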
### 7\.46\.1 MD\&A Usage * When using company filings, an important issue is whether to use the entire text of the filing or not. Sharper conclusions may be possible from specific sections of a filing such as a 10\-K. Loughran and McDonald (2011\) examined whether the Management Discussion and Analysis (MD\&A) section of the filing was better at providing tone (sentiment) than the entire 10\-K. They found that it was not. * They also showed that using their six tailor\-made word lists gave better results for detecting tone than did the Harvard Inquirer words. And as discussed earlier, proper word\-weighting also improves tone detection. Their word lists also worked well in detecting tone for seasoned equity offerings and news articles, providing good correlation with returns. ### 7\.46\.2 Readability of Financial Reports * Loughran and McDonald (2014\) examine the readability of financial documents by studying the text in 10\-K filings. They compute the Fog index for these documents and compare this to post\-filing measures of the information environment, such as the volatility of returns and the dispersion of analysts’ recommendations. When the text is readable, there should be less dispersion in the information environment, i.e., lower volatility and lower dispersion of analysts’ expectations around the release of the 10\-K. * Whereas they find that the Fog index does not seem to correlate well with these measures of the information environment, the file size of the 10\-K is a much better measure and is significantly related to return volatility, earnings forecast errors, and earnings forecast dispersion, after accounting for control variates such as size, book\-to\-market, lagged volatility, lagged return, and industry effects. * Li (2008\) also shows that 10\-Ks with a high Fog index and longer length have lower subsequent earnings. Thus managers with poor performance may try to hide this by increasing the complexity of their documents, mostly by increasing the size of their filings. * The readability of business documents has caught the attention of many researchers, not unexpectedly in the accounting area. DeFranco et al (2013\) combine the Fog, Flesch\-Kincaid, and Flesch scores to show that higher readability of analysts’ reports is related to higher trading volume, suggesting that a better information environment induces people to trade more and not shy away from the market. * Lehavy et al (2011\) show that a greater Fog index on 10\-Ks is correlated with greater analyst following, more analyst dispersion, and lower accuracy of their forecasts. Most of the literature focuses on 10\-Ks because these are deemed the most informative to investors, but it would be interesting to see if readability is any different when looking at shorter documents such as 10\-Qs. Whether the simple, dominant (albeit language\-independent) measure of file size is still a strong indicator of readability for documents other than 10\-Ks remains to be seen. * Another examination of 10\-K text appears in Bodnaruk et al (2013\). Here, the authors measure the percentage of negative words in 10\-Ks to see if this is an indicator of financial constraints that improves on existing measures. There is low correlation of this measure with size, where bigger firms are widely posited to be less financially constrained. But an increase in the percentage of negative words suggests an inflection point indicating the tendency of a firm to lapse into a state of financial constraint.
Using control variables such as market capitalization, prior returns, and a negative earnings indicator, the percentage of negative words helps more in identifying which firm will be financially constrained than widely used constraint indexes. The negative word count is useful in that it is independent of the way in which the filing is written, and picks up cues from managers who tend to use more negative words. * The number of negative words is useful in predicting liquidity events such as dividend cuts or omissions, downgrades, and asset growth. A one standard deviation increase in negative words increases the likelihood of a dividend omission by 8\.9% and a debt downgrade by 10\.8%. An obvious extension of this work would be to see whether default probability models may be enhanced by using the percentage of negative words as an explanatory variable. ### 7\.46\.3 Corporate Finance and Risk Management 1. Sprenger (2011\) integrates data from text classification of tweets, user voting, and a proprietary stock game to extract the bullishness of online investors; these ideas are behind the site <http://TweetTrader.net>. 2. Tweets also pose interesting problems of big streaming data, discussed in Pervin, Fang, Datta, and Dutta (2013\). 3. Data used here is from filings such as 10\-Ks (Loughran and McDonald (2011\); Burdick et al (2011\); Bodnaruk, Loughran, and McDonald (2013\); Jegadeesh and Wu (2013\); Loughran and McDonald (2014\)). ### 7\.46\.4 Predicting Markets 1. Wysocki (1999\) found that for the 50 top firms in message posting volume on Yahoo! Finance, message volume predicted next\-day abnormal stock returns. Using a broader set of firms, he also found that high message volume firms were those with inflated valuations (relative to fundamentals), high trading volume, high short seller activity (given possibly inflated valuations), high analyst following (message posting appears to be related to news as well, correlated with a general notion of “attention” stocks), and low institutional holdings (hence broader investor discussion and interest), all intuitive outcomes. 2. Bagnoli, Beneish, and Watts (1999\) examined earnings “whispers”, unofficial crowd\-sourced forecasts of quarterly earnings from small investors, and found them to be more accurate than First Call analyst forecasts. 3. Tumarkin and Whitelaw (2001\) examined self\-reported sentiment on the Raging Bull message board and found no predictive content, either for returns or volume. ### 7\.46\.5 Bullishness Index Antweiler and Frank (2004\) used the Naive Bayes algorithm for classification, implemented in the {Rainbow} package of Andrew McCallum (1996\). They also repeated the same using Support Vector Machines (SVMs) as a robustness check. Both algorithms generate similar empirical results. Once the algorithm is trained, they use it out\-of\-sample to sign each message as \\(\\{Buy, Hold, Sell\\}\\). Let \\(n\_B, n\_S\\) be the number of buy and sell messages, respectively. Then \\(R \= n\_B/n\_S\\) is just the ratio of buy to sell messages. Based on this they define their bullishness index \\\[ B \= \\frac{n\_B \- n\_S}{n\_B \+ n\_S} \= \\frac{R\-1}{R\+1} \\in (\-1,\+1\) \\] This metric is independent of the number of messages, i.e., it is homogeneous of degree zero in \\(n\_B,n\_S\\).
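As a quick illustration of these definitions (this is not Antweiler and Frank’s code, and the message classifications below are made up), the following R snippet computes \\(R\\) and the bullishness index \\(B\\) from a handful of classified messages.

```
# Hypothetical classified messages for one stock on one day
msgs = c("Buy", "Buy", "Sell", "Hold", "Buy", "Sell", "Buy", "Hold", "Buy", "Sell")

n_B = sum(msgs == "Buy")      # number of buy messages
n_S = sum(msgs == "Sell")     # number of sell messages
R   = n_B / n_S               # ratio of buy to sell messages

# Bullishness index B = (n_B - n_S)/(n_B + n_S) = (R - 1)/(R + 1), which lies in (-1, +1)
B = (n_B - n_S) / (n_B + n_S)
print(c(n_B = n_B, n_S = n_S, R = R, B = B, B_check = (R - 1)/(R + 1)))
```

The alternative measures \\(B^\*\\) and \\(B^{\*\*}\\) described next are computed from these same counts.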
An alternative measure is also proposed, i.e., \\\[ \\begin{aligned} B^\* \&\= \\ln\\left\[\\frac{1\+n\_B}{1\+n\_S} \\right] \\\\ \&\= \\ln\\left\[\\frac{1\+R(1\+n\_B\+n\_S)}{1\+R\+n\_B\+n\_S} \\right] \\\\ \&\= \\ln\\left\[\\frac{2\+(n\_B\+n\_S)(1\+B)}{2\+(n\_B\+n\_S)(1\-B)} \\right] \\\\ \& \\approx B \\cdot \\ln(1\+n\_B\+n\_S) \\end{aligned} \\] This measure takes the bullishness index \\(B\\) and weights it by the number of messages of both categories. It is homogeneous of degree between zero and one. They also propose a third measure, which is much more direct, i.e., \\\[ B^{\*\*} \= n\_B \- n\_S \= (n\_B\+n\_S) \\cdot \\frac{R\-1}{R\+1} \= M \\cdot B \\] which is homogeneous of degree one, and is a message\-weighted bullishness index (here \\(M \= n\_B \+ n\_S\\) is the total number of messages). They prefer to use \\(B^\*\\) in their algorithms as it appears to deliver the best predictive results. Finally, they produce an agreement index, \\\[ A \= 1 \- \\sqrt{1\-B^2} \\in (0,1\) \\] Note how closely this is related to the disagreement index seen earlier. * The bullishness index does not predict returns, but returns do explain message posting. More messages are posted in periods of negative returns, but this is not a significant relationship. * A contemporaneous relation between returns and bullishness is present. Overall, \\(AF04\\) present some important results that are indicative of the results in this literature, confirmed also in subsequent work. * First, that message board postings do not predict returns. * Second, that disagreement (measured from postings) induces trading. * Third, message posting does predict volatility at daily frequencies and intraday. * Fourth, messages reflect public information rapidly. Overall, they conclude that stock chat is meaningful in content and not just noise. 7\.47 Commercial Developments ----------------------------- ### 7\.47\.1 IBM’s Midas System ### 7\.47\.2 Stock Twits ### 7\.47\.3 iSentium ### 7\.47\.4 RavenPack ### 7\.47\.5 Possible Applications for Finance Firms An illustrative list of **applications** for finance firms is as follows: * Monitoring corporate buzz. * Analyzing textual data to detect, analyze, and understand the more profitable customers or products. * Targeting new clients. * Customer retention, which is a huge issue. Text mining complaints to prioritize customer remedial action makes a huge difference, especially in the insurance business. * Lending activity \- automated management of profiling information for lending screening. * Market prediction and trading. * Risk management. * Automated financial analysts. * Financial forensics to prevent rogue employees from inflicting large losses. * Fraud detection. * Detecting market manipulation. * Social network analysis of clients. * Measuring institutional risk from systemic risk. 7\.48 Latent Semantic Analysis (LSA) ------------------------------------ Latent Semantic Analysis (LSA) is an approach for reducing the dimension of the Term\-Document Matrix (TDM), or the corresponding Document\-Term Matrix (DTM); the two are generally used interchangeably unless a specific one is invoked. Dimension reduction of the TDM offers two benefits: * The DTM is usually a sparse matrix, and sparseness means that our algorithms have to work harder on missing data, which is clearly wasteful. Some of this sparseness is attenuated by applying LSA to the TDM. * The problem of synonymy also exists in the TDM, which usually contains thousands of terms (words). Synonymy arises because many words have similar meanings, i.e., redundancy exists in the list of terms.
LSA mitigates this redundancy, as we shall see through the ensuing anaysis of LSA. * While not precisely the same thing, think of LSA in the text domain as analogous to PCA in the data domain. ### 7\.48\.1 How is LSA implemented using SVD? LSA is the application of Singular Value Decomposition (SVD) to the TDM, extracted from a text corpus. Define the TDM to be a matrix \\(M \\in {\\cal R}^{m \\times n}\\), where \\(m\\) is the number of terms and \\(n\\) is the number of documents. The SVD of matrix \\(M\\) is given by \\\[ M \= T \\cdot S \\cdot D^\\top \\] where \\(T \\in {\\cal R}^{m \\times n}\\) and \\(D \\in {\\cal R}^{n \\times n}\\) are orthonormal to each other, and \\(S \\in {\\cal R}^{n \\times n}\\) is the “singluar values” matrix, i.e., a diagonal matrix with singular values on the diagonal. These values denote the relative importance of the terms in the TDM. ### 7\.48\.2 Example Create a temporary directory and add some documents to it. This is a modification of the example in the **lsa** package ``` system("mkdir D") write( c("blue", "red", "green"), file=paste("D", "D1.txt", sep="/")) write( c("black", "blue", "red"), file=paste("D", "D2.txt", sep="/")) write( c("yellow", "black", "green"), file=paste("D", "D3.txt", sep="/")) write( c("yellow", "red", "black"), file=paste("D", "D4.txt", sep="/")) ``` Create a TDM using the **textmatrix** function. ``` library(lsa) tdm = textmatrix("D",minWordLength=1) print(tdm) ``` ``` ## docs ## terms D1.txt D2.txt D3.txt D4.txt ## blue 1 1 0 0 ## green 1 0 1 0 ## red 1 1 0 1 ## black 0 1 1 1 ## yellow 0 0 1 1 ``` Remove the extra directory. ``` system("rm -rf D") ``` 7\.49 Singular Value Decomposition (SVD) ---------------------------------------- SVD tries to connect the correlation matrix of terms (\\(M \\cdot M^\\top\\)) with the correlation matrix of documents (\\(M^\\top \\cdot M\\)) through the singular matrix. To see this connection, note that matrix \\(T\\) contains the eigenvectors of the correlation matrix of terms. Likewise, the matrix \\(D\\) contains the eigenvectors of the correlation matrix of documents. To see this, let’s compute ``` et = eigen(tdm %*% t(tdm))$vectors print(et) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] -0.3629044 -6.015010e-01 -0.06829369 3.717480e-01 0.6030227 ## [2,] -0.3328695 -2.220446e-16 -0.89347008 5.551115e-16 -0.3015113 ## [3,] -0.5593741 -3.717480e-01 0.31014767 -6.015010e-01 -0.3015113 ## [4,] -0.5593741 3.717480e-01 0.31014767 6.015010e-01 -0.3015113 ## [5,] -0.3629044 6.015010e-01 -0.06829369 -3.717480e-01 0.6030227 ``` ``` ed = eigen(t(tdm) %*% tdm)$vectors print(ed) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] -0.4570561 0.601501 -0.5395366 -0.371748 ## [2,] -0.5395366 0.371748 0.4570561 0.601501 ## [3,] -0.4570561 -0.601501 -0.5395366 0.371748 ## [4,] -0.5395366 -0.371748 0.4570561 -0.601501 ``` ### 7\.49\.1 Dimension reduction of the TDM via LSA If we wish to reduce the dimension of the latent semantic space to \\(k \< n\\) then we use only the first \\(k\\) eigenvectors. The **lsa** function does this automatically. We call LSA and ask it to automatically reduce the dimension of the TDM using a built\-in function **dimcalc\_share**. 
``` res = lsa(tdm,dims=dimcalc_share()) print(res) ``` ``` ## $tk ## [,1] [,2] ## blue -0.3629044 -6.015010e-01 ## green -0.3328695 -5.551115e-17 ## red -0.5593741 -3.717480e-01 ## black -0.5593741 3.717480e-01 ## yellow -0.3629044 6.015010e-01 ## ## $dk ## [,1] [,2] ## D1.txt -0.4570561 -0.601501 ## D2.txt -0.5395366 -0.371748 ## D3.txt -0.4570561 0.601501 ## D4.txt -0.5395366 0.371748 ## ## $sk ## [1] 2.746158 1.618034 ## ## attr(,"class") ## [1] "LSAspace" ``` We can see that the dimension has been reduced from \\(n\=4\\) to \\(n\=2\\). The output is shown for both the term matrix and the document matrix, both of which have only two columns. Think of these as the two “principal semantic components” of the TDM. Compare the output of the LSA to the eigenvectors above to see that it is exactly that. The singular values in the ouput are connected to SVD as follows. ### 7\.49\.2 LSA and SVD: the connection? First of all we see that the **lsa** function is nothing but the **svd** function in base R. ``` res2 = svd(tdm) print(res2) ``` ``` ## $d ## [1] 2.746158 1.618034 1.207733 0.618034 ## ## $u ## [,1] [,2] [,3] [,4] ## [1,] -0.3629044 -6.015010e-01 0.06829369 3.717480e-01 ## [2,] -0.3328695 -5.551115e-17 0.89347008 -3.455569e-15 ## [3,] -0.5593741 -3.717480e-01 -0.31014767 -6.015010e-01 ## [4,] -0.5593741 3.717480e-01 -0.31014767 6.015010e-01 ## [5,] -0.3629044 6.015010e-01 0.06829369 -3.717480e-01 ## ## $v ## [,1] [,2] [,3] [,4] ## [1,] -0.4570561 -0.601501 0.5395366 -0.371748 ## [2,] -0.5395366 -0.371748 -0.4570561 0.601501 ## [3,] -0.4570561 0.601501 0.5395366 0.371748 ## [4,] -0.5395366 0.371748 -0.4570561 -0.601501 ``` The output here is the same as that of LSA except it is provided for \\(n\=4\\). So we have four columns in \\(T\\) and \\(D\\) rather than two. Compare the results here to the previous two slides to see the connection. ### 7\.49\.3 What is the rank of the TDM? We may reconstruct the TDM using the result of the LSA. ``` tdm_lsa = res$tk %*% diag(res$sk) %*% t(res$dk) print(tdm_lsa) ``` ``` ## D1.txt D2.txt D3.txt D4.txt ## blue 1.0409089 0.8995016 -0.1299115 0.1758948 ## green 0.4178005 0.4931970 0.4178005 0.4931970 ## red 1.0639006 1.0524048 0.3402938 0.6051912 ## black 0.3402938 0.6051912 1.0639006 1.0524048 ## yellow -0.1299115 0.1758948 1.0409089 0.8995016 ``` We see the new TDM after the LSA operation, it has non\-integer frequency counts, but it may be treated in the same way as the original TDM. The document vectors populate a slightly different hyperspace. LSA reduces the rank of the correlation matrix of terms \\(M \\cdot M^\\top\\) to \\(n\=2\\). Here we see the rank before and after LSA. ``` library(Matrix) print(rankMatrix(tdm)) ``` ``` ## [1] 4 ## attr(,"method") ## [1] "tolNorm2" ## attr(,"useGrad") ## [1] FALSE ## attr(,"tol") ## [1] 1.110223e-15 ``` ``` print(rankMatrix(tdm_lsa)) ``` ``` ## [1] 2 ## attr(,"method") ## [1] "tolNorm2" ## attr(,"useGrad") ## [1] FALSE ## attr(,"tol") ## [1] 1.110223e-15 ``` 7\.50 Topic Analysis with Latent Dirichlet Allocation (LDA) ----------------------------------------------------------- ### 7\.50\.1 What does LDA have to do with LSA? It is similar to LSA, in that it seeks to find the most related words and cluster them into topics. It uses a Bayesian approach to do this, but more on that later. Here, let’s just do an example to see how we might use the **topicmodels** package. 
``` #Load the package library(topicmodels) #Load data on news articles from Associated Press data(AssociatedPress) print(dim(AssociatedPress)) ``` ``` ## [1] 2246 10473 ``` This is a large DTM (not TDM). It has more than 10,000 terms, and more than 2,000 documents. This is very large and LDA will take some time, so let’s run it on a subset of the documents. ``` dtm = AssociatedPress[1:100,] dim(dtm) ``` ``` ## [1] 100 10473 ``` Now we run LDA on this data set. ``` #Set parameters for Gibbs sampling burnin = 4000 iter = 2000 thin = 500 seed = list(2003,5,63,100001,765) nstart = 5 best = TRUE #Number of topics k = 5 ``` ``` #Run LDA res <-LDA(dtm, k, method="Gibbs", control = list(nstart = nstart, seed = seed, best = best, burnin = burnin, iter = iter, thin = thin)) #Show topics res.topics = as.matrix(topics(res)) print(res.topics) ``` ``` ## [,1] ## [1,] 5 ## [2,] 4 ## [3,] 5 ## [4,] 1 ## [5,] 1 ## [6,] 4 ## [7,] 2 ## [8,] 1 ## [9,] 5 ## [10,] 5 ## [11,] 5 ## [12,] 3 ## [13,] 1 ## [14,] 4 ## [15,] 2 ## [16,] 3 ## [17,] 1 ## [18,] 1 ## [19,] 2 ## [20,] 3 ## [21,] 5 ## [22,] 2 ## [23,] 2 ## [24,] 1 ## [25,] 2 ## [26,] 4 ## [27,] 4 ## [28,] 2 ## [29,] 4 ## [30,] 3 ## [31,] 2 ## [32,] 1 ## [33,] 4 ## [34,] 1 ## [35,] 5 ## [36,] 4 ## [37,] 1 ## [38,] 4 ## [39,] 4 ## [40,] 2 ## [41,] 2 ## [42,] 2 ## [43,] 1 ## [44,] 1 ## [45,] 5 ## [46,] 3 ## [47,] 2 ## [48,] 3 ## [49,] 1 ## [50,] 4 ## [51,] 1 ## [52,] 2 ## [53,] 3 ## [54,] 1 ## [55,] 3 ## [56,] 4 ## [57,] 4 ## [58,] 2 ## [59,] 5 ## [60,] 2 ## [61,] 2 ## [62,] 3 ## [63,] 2 ## [64,] 1 ## [65,] 2 ## [66,] 4 ## [67,] 5 ## [68,] 2 ## [69,] 4 ## [70,] 5 ## [71,] 5 ## [72,] 5 ## [73,] 2 ## [74,] 5 ## [75,] 2 ## [76,] 1 ## [77,] 1 ## [78,] 1 ## [79,] 3 ## [80,] 5 ## [81,] 1 ## [82,] 3 ## [83,] 5 ## [84,] 3 ## [85,] 3 ## [86,] 5 ## [87,] 2 ## [88,] 5 ## [89,] 2 ## [90,] 5 ## [91,] 3 ## [92,] 1 ## [93,] 1 ## [94,] 4 ## [95,] 3 ## [96,] 4 ## [97,] 4 ## [98,] 4 ## [99,] 5 ## [100,] 5 ``` ``` #Show top terms res.terms = as.matrix(terms(res,10)) print(res.terms) ``` ``` ## Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 ## [1,] "i" "percent" "new" "soviet" "police" ## [2,] "people" "year" "york" "government" "central" ## [3,] "state" "company" "expected" "official" "man" ## [4,] "years" "last" "states" "two" "monday" ## [5,] "bush" "new" "officials" "union" "friday" ## [6,] "president" "bank" "program" "officials" "city" ## [7,] "get" "oil" "california" "war" "four" ## [8,] "told" "prices" "week" "president" "school" ## [9,] "administration" "report" "air" "world" "high" ## [10,] "dukakis" "million" "help" "leaders" "national" ``` ``` #Show topic probabilities res.topicProbs = as.data.frame(res@gamma) print(res.topicProbs) ``` ``` ## V1 V2 V3 V4 V5 ## 1 0.19169329 0.06070288 0.04472843 0.10223642 0.60063898 ## 2 0.12149533 0.14330218 0.08099688 0.58255452 0.07165109 ## 3 0.27213115 0.04262295 0.05901639 0.07868852 0.54754098 ## 4 0.29571984 0.16731518 0.19844358 0.19455253 0.14396887 ## 5 0.31896552 0.15517241 0.20689655 0.14655172 0.17241379 ## 6 0.30360934 0.08492569 0.08492569 0.46284501 0.06369427 ## 7 0.17050691 0.40092166 0.15668203 0.17050691 0.10138249 ## 8 0.37142857 0.15238095 0.14285714 0.20000000 0.13333333 ## 9 0.19298246 0.17543860 0.19298246 0.19298246 0.24561404 ## 10 0.19879518 0.16265060 0.17469880 0.18674699 0.27710843 ## 11 0.21212121 0.20202020 0.16161616 0.15151515 0.27272727 ## 12 0.20143885 0.15827338 0.25899281 0.17985612 0.20143885 ## 13 0.41395349 0.16279070 0.18139535 0.12558140 0.11627907 ## 14 0.17948718 0.17948718 0.12820513 0.30769231 
0.20512821 ## 15 0.05135952 0.78247734 0.06344411 0.06042296 0.04229607 ## 16 0.09770115 0.24712644 0.35632184 0.14942529 0.14942529 ## 17 0.43103448 0.18103448 0.09051724 0.10775862 0.18965517 ## 18 0.67857143 0.04591837 0.06377551 0.08418367 0.12755102 ## 19 0.07083333 0.70000000 0.08750000 0.07500000 0.06666667 ## 20 0.15196078 0.05637255 0.69117647 0.04656863 0.05392157 ## 21 0.21782178 0.11881188 0.12871287 0.15841584 0.37623762 ## 22 0.16666667 0.30000000 0.16666667 0.16666667 0.20000000 ## 23 0.19298246 0.21052632 0.17543860 0.21052632 0.21052632 ## 24 0.31775701 0.20560748 0.16822430 0.18691589 0.12149533 ## 25 0.05121951 0.65121951 0.15365854 0.08536585 0.05853659 ## 26 0.11740891 0.09311741 0.08502024 0.37246964 0.33198381 ## 27 0.06583072 0.05956113 0.10658307 0.68338558 0.08463950 ## 28 0.15068493 0.30136986 0.12328767 0.26027397 0.16438356 ## 29 0.07860262 0.04148472 0.05676856 0.68995633 0.13318777 ## 30 0.13968254 0.17142857 0.46031746 0.07936508 0.14920635 ## 31 0.08405172 0.74784483 0.07112069 0.05172414 0.04525862 ## 32 0.66137566 0.10846561 0.06349206 0.07407407 0.09259259 ## 33 0.14655172 0.18103448 0.15517241 0.41379310 0.10344828 ## 34 0.29605263 0.19736842 0.21052632 0.13157895 0.16447368 ## 35 0.08080808 0.05050505 0.10437710 0.07070707 0.69360269 ## 36 0.13333333 0.07878788 0.08484848 0.46666667 0.23636364 ## 37 0.46202532 0.08227848 0.12974684 0.16139241 0.16455696 ## 38 0.09442060 0.07296137 0.12017167 0.64377682 0.06866953 ## 39 0.11764706 0.08359133 0.10526316 0.62538700 0.06811146 ## 40 0.10869565 0.56521739 0.14492754 0.07246377 0.10869565 ## 41 0.07671958 0.43650794 0.16137566 0.25396825 0.07142857 ## 42 0.11445783 0.57831325 0.11445783 0.09036145 0.10240964 ## 43 0.55793991 0.10944206 0.08798283 0.09442060 0.15021459 ## 44 0.40939597 0.10067114 0.22818792 0.12751678 0.13422819 ## 45 0.20000000 0.15121951 0.12682927 0.25853659 0.26341463 ## 46 0.14828897 0.11406844 0.56653992 0.08365019 0.08745247 ## 47 0.09929078 0.41134752 0.13475177 0.22695035 0.12765957 ## 48 0.20129870 0.07467532 0.54870130 0.10714286 0.06818182 ## 49 0.46800000 0.09600000 0.18400000 0.10400000 0.14800000 ## 50 0.22955145 0.08179420 0.05013193 0.60158311 0.03693931 ## 51 0.28368794 0.17730496 0.18439716 0.14893617 0.20567376 ## 52 0.12977099 0.45801527 0.12977099 0.18320611 0.09923664 ## 53 0.10507246 0.14492754 0.55072464 0.06884058 0.13043478 ## 54 0.42647059 0.13725490 0.15196078 0.15686275 0.12745098 ## 55 0.11881188 0.19801980 0.44554455 0.08910891 0.14851485 ## 56 0.22857143 0.15714286 0.13571429 0.37142857 0.10714286 ## 57 0.15294118 0.07058824 0.06117647 0.66823529 0.04705882 ## 58 0.11494253 0.49425287 0.14367816 0.12068966 0.12643678 ## 59 0.13278008 0.04979253 0.13692946 0.26556017 0.41493776 ## 60 0.16666667 0.31666667 0.16666667 0.16666667 0.18333333 ## 61 0.06796117 0.73786408 0.08090615 0.04854369 0.06472492 ## 62 0.12680115 0.12968300 0.58213256 0.12103746 0.04034582 ## 63 0.07902736 0.72948328 0.09118541 0.05471125 0.04559271 ## 64 0.44285714 0.12142857 0.14285714 0.13214286 0.16071429 ## 65 0.19540230 0.31034483 0.19540230 0.14942529 0.14942529 ## 66 0.18518519 0.22222222 0.17037037 0.28888889 0.13333333 ## 67 0.07024793 0.07851240 0.08677686 0.04545455 0.71900826 ## 68 0.10181818 0.48000000 0.14909091 0.12727273 0.14181818 ## 69 0.12307692 0.15384615 0.10000000 0.43076923 0.19230769 ## 70 0.12745098 0.07352941 0.14215686 0.13235294 0.52450980 ## 71 0.21582734 0.10791367 0.16546763 0.14388489 0.36690647 ## 72 0.17560976 0.11219512 0.17073171 0.15609756 0.38536585 ## 73 
0.12280702 0.46198830 0.07602339 0.23976608 0.09941520 ## 74 0.20535714 0.16964286 0.17857143 0.14285714 0.30357143 ## 75 0.07567568 0.47027027 0.11891892 0.19459459 0.14054054 ## 76 0.67310789 0.15619968 0.07407407 0.05152979 0.04508857 ## 77 0.63834423 0.07189542 0.09150327 0.11546841 0.08278867 ## 78 0.61504425 0.09292035 0.11946903 0.11504425 0.05752212 ## 79 0.10971787 0.07523511 0.65830721 0.07210031 0.08463950 ## 80 0.11111111 0.08666667 0.11111111 0.05777778 0.63333333 ## 81 0.49681529 0.03821656 0.15286624 0.14437367 0.16772824 ## 82 0.20111732 0.17318436 0.24022346 0.15642458 0.22905028 ## 83 0.10731707 0.15609756 0.11219512 0.23902439 0.38536585 ## 84 0.26016260 0.10569106 0.36585366 0.13008130 0.13821138 ## 85 0.11525424 0.10508475 0.39322034 0.30508475 0.08135593 ## 86 0.15454545 0.06060606 0.15757576 0.09696970 0.53030303 ## 87 0.08301887 0.67924528 0.07924528 0.09433962 0.06415094 ## 88 0.16666667 0.15972222 0.22916667 0.11805556 0.32638889 ## 89 0.12389381 0.47787611 0.09734513 0.14159292 0.15929204 ## 90 0.12389381 0.11061947 0.23008850 0.10176991 0.43362832 ## 91 0.19724771 0.11009174 0.30275229 0.16972477 0.22018349 ## 92 0.33854167 0.13541667 0.12500000 0.11458333 0.28645833 ## 93 0.40131579 0.13815789 0.10526316 0.18421053 0.17105263 ## 94 0.06930693 0.10231023 0.09240924 0.67656766 0.05940594 ## 95 0.09130435 0.15000000 0.65434783 0.03043478 0.07391304 ## 96 0.13370474 0.13091922 0.12256267 0.49303621 0.11977716 ## 97 0.06709265 0.06070288 0.11501597 0.60383387 0.15335463 ## 98 0.16438356 0.16438356 0.17808219 0.28767123 0.20547945 ## 99 0.06274510 0.08235294 0.16470588 0.06666667 0.62352941 ## 100 0.11627907 0.20465116 0.11162791 0.16744186 0.40000000 ``` ``` #Check that each term is allocated to all topics print(rowSums(res.topicProbs)) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## [36] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## [71] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ``` Note that the highest probability in each row assigns each document to a topic. 7\.51 LDA Explained (Briefly) ----------------------------- Latent Dirichlet Allocation (LDA) was created by David Blei, Andrew Ng, and Michael Jordan in 2003, see their paper titled “Latent Dirichlet Allocation” in the *Journal of Machine Learning Research*, pp 993–1022\. The simplest way to think about LDA is as a probability model that connects documents with words and topics. The components are: * A Vocabulary of \\(V\\) words, i.e., \\(w\_1,w\_2,...,w\_i,...,w\_V\\), each word indexed by \\(i\\). * A Document is a vector of \\(N\\) words, i.e., \\({\\bf w}\\). * A Corpus \\(D\\) is a collection of \\(M\\) documents, each document indexed by \\(j\\), i.e. \\(d\_j\\). Next, we connect the above objects to \\(K\\) topics, indexed by \\(l\\), i.e., \\(t\_l\\). We will see that LDA is encapsulated in two matrices: Matrix \\(A\\) and Matrix \\(B\\). ### 7\.51\.1 Matrix \\(A\\): Connecting Documents with Topics * This matrix has documents on the rows, so there are \\(M\\) rows. * The topics are on the columns, so there are \\(K\\) columns. * Therefore \\(A \\in {\\cal R}^{M \\times K}\\). * The row sums equal \\(1\\), i.e., for each document, we have a probability that it pertains to a given topic, i.e., \\(A\_{jl} \= Pr\[t\_l \| d\_j]\\), and \\(\\sum\_{l\=1}^K A\_{jl} \= 1\\). ### 7\.51\.2 Matrix \\(B\\): Connecting Words with Topics * This matrix has topics on the rows, so there are \\(K\\) rows. 
* The words are on the columns, so there are \\(V\\) columns. * Therefore \\(B \\in {\\cal R}^{K \\times V}\\). * The row sums equal \\(1\\), i.e., for each topic, we have a probability that it pertains to a given word, i.e., \\(B\_{li} \= Pr\[w\_i \| t\_l]\\), and \\(\\sum\_{i\=1}^V B\_{li} \= 1\\). ### 7\.51\.3 Distribution of Topics in a Document * Using Matrix \\(A\\), we can sample a \\(K\\)\-vector of probabilities of topics for a single document. Denote the probability of this vector as \\(p(\\theta \| \\alpha)\\), where \\(\\theta, \\alpha \\in {\\cal R}^K\\), \\(\\theta, \\alpha \\geq 0\\), and \\(\\sum\_l \\theta\_l \= 1\\). * The probability \\(p(\\theta \| \\alpha)\\) is governed by a Dirichlet distribution, with density function \\\[ p(\\theta \| \\alpha) \= \\frac{\\Gamma(\\sum\_{l\=1}^K \\alpha\_l)}{\\prod\_{l\=1}^K \\Gamma(\\alpha\_l)} \\; \\prod\_{l\=1}^K \\theta\_l^{\\alpha\_l \- 1} \\] where \\(\\Gamma(\\cdot)\\) is the Gamma function. \- LDA thus gets its name from the use of the Dirichlet distribution, embodied in Matrix \\(A\\). Since the topics are latent, it explains the rest of the nomenclature. \- Given \\(\\theta\\), we sample topics from matrix \\(A\\) with probability \\(p(t \| \\theta)\\). ### 7\.51\.4 Distribution of Words and Topics for a Document * The number of words in a document is assumed to be distributed Poisson with parameter \\(\\xi\\). * Matrix \\(B\\) gives the probability of a word appearing in a topic, \\(p(w \| t)\\). * The topics mixture is given by \\(\\theta\\). * The joint distribution over \\(K\\) topics and \\(K\\) words for a topic mixture is given by \\\[ p(\\theta, {\\bf t}, {\\bf w}) \= p(\\theta \| \\alpha) \\prod\_{l\=1}^K p(t\_l \| \\theta) p(w\_l \| t\_l) \\] * The marginal distribution for a document’s words comes from integrating out the topic mixture \\(\\theta\\), and summing out the topics \\({\\bf t}\\), i.e., \\\[ p({\\bf w}) \= \\int p(\\theta \| \\alpha) \\left(\\prod\_{l\=1}^K \\sum\_{t\_l} p(t\_l \| \\theta) p(w\_l \| t\_l)\\; \\right) d\\theta \\] ### 7\.51\.5 Likelihood of the entire Corpus * This is given by: \\\[ p(D) \= \\prod\_{j\=1}^M \\int p(\\theta\_j \| \\alpha) \\left(\\prod\_{l\=1}^K \\sum\_{t\_{jl}} p(t\_l \| \\theta\_j) p(w\_l \| t\_l)\\; \\right) d\\theta\_j \\] * The goal is to maximize this likelihood by picking the vector \\(\\alpha\\) and the probabilities in the matrix \\(B\\). (Note that were a Dirichlet distribution not used, then we could directly pick values in Matrices \\(A\\) and \\(B\\).) * The computation is undertaken using MCMC with Gibbs sampling as shown in the example earlier. ### 7\.51\.6 Examples in Finance ### 7\.51\.7 word2vec (explained) For more details, see: [https://www.quora.com/How\-does\-word2vec\-work](https://www.quora.com/How-does-word2vec-work) **A geometrical interpretation**: word2vec is a shallow word embedding model. This means that the model learns to map each discrete word id (0 through the number of words in the vocabulary) into a low\-dimensional continuous vector\-space from their distributional properties observed in some raw text corpus. Geometrically, one may interpret these vectors as tracing out points on the outside surface of a manifold in the “embedded space”. If we initialize these vectors from a spherical gaussian distribution, then you can imagine this manifold to look something like a hypersphere initially. Let us focus on the CBOW for now. CBOW is trained to predict the target word t from the contextual words that surround it, c, i.e. 
the goal is to maximize P(t \| c) over the training set. I am simplifying somewhat, but you can show that this probability is roughly inversely proportional to the distance between the current vectors assigned to t and to c. Since this model is trained in an online setting (one example at a time), at time T the goal is therefore to take a small step (mediated by the “learning rate”) in order to minimize the distance between the current vectors for t and c (and thereby increase the probability P(t \|c)). By repeating this process over the entire training set, we have that vectors for words that habitually co\-occur tend to be nudged closer together, and by gradually lowering the learning rate, this process converges towards some final state of the vectors. By the Distributional Hypothesis (Firth, 1957; see also the Wikipedia page on Distributional semantics), words with similar distributional properties (i.e. that co\-occur regularly) tend to share some aspect of semantic meaning. For example, we may find several sentences in the training set such as “citizens of X protested today” where X (the target word t) may be names of cities or countries that are semantically related. You can therefore interpret each training step as deforming or morphing the initial manifold by nudging the vectors for some words somewhat closer together, and the result, after projecting down to two dimensions, is the familiar t\-SNE visualizations where related words cluster together (e.g. Word representations for NLP). For the skipgram, the direction of the prediction is simply inverted, i.e. now we try to predict P(citizens \| X), P(of \| X), etc. This turns out to learn finer\-grained vectors when one trains over more data. The main reason is that the CBOW smooths over a lot of the distributional statistics by averaging over all context words while the skipgram does not. With little data, this “regularizing” effect of the CBOW turns out to be helpful, but since data is the ultimate regularizer the skipgram is able to extract more information when more data is available. There’s a bit more going on behind the scenes, but hopefully this helps to give a useful geometrical intuition as to how these models work. 7\.52 End Note! --------------- Biblio at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf> 7\.1 Introduction ----------------- Text expands the universe of data many\-fold. See my monograph on text mining in finance at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf> In Finance, for example, text has become a major source of trading information, leading to a new field known as News Metrics. News analysis is defined as “the measurement of the various qualitative and quantitative attributes of textual news stories. Some of these attributes are: sentiment, relevance, and novelty. Expressing news stories as numbers permits the manipulation of everyday information in a mathematical and statistical way.” (Wikipedia). In this chapter, I provide a framework for text analytics techniques that are in widespread use. I will discuss various text analytic methods and software, and then provide a set of metrics that may be used to assess the performance of analytics. Various directions for this field are discussed through the exposition. The techniques herein can aid in the valuation and trading of securities, facilitate investment decision making, meet regulatory requirements, provide marketing insights, or manage risk. 
See: [https://www.amazon.com/Handbook\-News\-Analytics\-Finance/dp/047066679X/ref\=sr\_1\_1?ie\=UTF8\&qid\=1466897817\&sr\=8\-1\&keywords\=handbook\+of\+news\+analytics](https://www.amazon.com/Handbook-News-Analytics-Finance/dp/047066679X/ref=sr_1_1?ie=UTF8&qid=1466897817&sr=8-1&keywords=handbook+of+news+analytics) “News analytics are used in financial modeling, particularly in quantitative and algorithmic trading. Further, news analytics can be used to plot and characterize firm behaviors over time and thus yield important strategic insights about rival firms. News analytics are usually derived through automated text analysis and applied to digital texts using elements from natural language processing and machine learning such as latent semantic analysis, support vector machines, \`bag of words’, among other techniques.” (Wikipedia) 7\.2 Text as Data ----------------- There are many reasons why text has business value. But business value is only a narrow view: textual data provides a means of understanding all human behavior through a data\-driven, analytical approach. Let’s enumerate some reasons for this. 1. Big Text: there is more textual data than numerical data. 2. Text is versatile. Nuances and behavioral expressions are not conveyed with numbers, so analyzing text allows us to explore these aspects of human interaction. 3. Text contains emotive content. This has led to the ubiquity of “Sentiment analysis”. See for example: Admati\-Pfleiderer 2001; DeMarzo et al 2003; Antweiler\-Frank 2004, 2005; Das\-Chen 2007; Tetlock 2007; Tetlock et al 2008; Mitra et al 2008; Leinweber\-Sisk 2010\. 4. Text contains opinions and connections. See: Das et al 2005; Das and Sisk 2005; Godes et al 2005; Li 2006; Hochberg et al 2007\. 5. Numbers aggregate; text disaggregates. Text allows us to drill down into underlying behavior when understanding human interaction. 1\. In a talk at the 17th ACM Conference on Information Knowledge and Management (CIKM ’08\), Google’s director of research Peter Norvig stated his unequivocal preference for data over algorithms: “data is more agile than code.” Yet, it is well\-understood that too much data can lead to overfitting so that an algorithm becomes mostly useless out\-of\-sample. 2\. Chris Anderson: “Data is the New Theory.” 3\. These issues are relevant to text mining, but let’s put them on hold till the end of the session. 7\.3 Definition: Text\-Mining ----------------------------- I will make an attempt to provide a comprehensive definition of “Text Mining”. As definitions go, it is often easier to enumerate various versions and nuances of an activity than to describe something in one single statement. So here goes: 1. Text mining is the large\-scale, automated processing of plain text language in digital form to extract data that is converted into useful quantitative or qualitative information. 2. Text mining is automated on big data that is not amenable to human processing within reasonable time frames. It entails extracting data that is converted into information of many types. 3. Simple: Text mining may be as simple as key word searches and counts (see the short sketch following this list). 4. Complicated: It may require language parsing and complex rules for information extraction. 5. It involves structured text, such as the information in forms and some kinds of web pages. 6. It may be applied to unstructured text, which is a much harder endeavor. 7. Text mining is also aimed at unearthing unseen relationships in unstructured text, as in meta\-analyses of research papers; see Van Noorden 2012\.
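To give a flavor of the “simple” end of the spectrum in item 3 above, here is a tiny illustrative sketch; the sentences and keywords are made up, and the only assumption is the **stringr** package, which is also used later in this chapter.

```
library(stringr)

# A tiny, made-up corpus of three sentences
docs = c("The firm reported strong growth and lower risk this quarter.",
         "Analysts see rising risk of default despite growth in revenue.",
         "Growth slowed, but risk management improved.")

# Count occurrences of each keyword in each document (case-insensitive via lowercasing)
keywords = c("growth", "risk", "default")
counts = sapply(keywords, function(k) str_count(str_to_lower(docs), k))
rownames(counts) = paste0("doc", seq_along(docs))
print(counts)
```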
7\.4 Data and Algorithms ------------------------ 7\.5 Text Extraction -------------------- The R programming language is increasingly being used to download text from the web and then analyze it. The ease with which R may be used to scrape text from web site may be seen from the following simple command in R: ``` text = readLines("http://srdas.github.io/bio-candid.html") text[15:20] ``` ``` ## [1] "journals. Prior to being an academic, he worked in the derivatives" ## [2] "business in the Asia-Pacific region as a Vice-President at" ## [3] "Citibank. His current research interests include: machine learning," ## [4] "social networks, derivatives pricing models, portfolio theory, the" ## [5] "modeling of default risk, and venture capital. He has published over" ## [6] "ninety articles in academic journals, and has won numerous awards for" ``` Here, we downloaded the my bio page from my university’s web site. It’s a simple HTML file. ``` length(text) ``` ``` ## [1] 80 ``` 7\.6 String Parsing ------------------- Suppose we just want the 17th line, we do: ``` text[17] ``` ``` ## [1] "Citibank. His current research interests include: machine learning," ``` And, to find out the character length of the this line we use the function: ``` library(stringr) str_length(text[17]) ``` ``` ## [1] 67 ``` We have first invoked the library **stringr** that contains many string handling functions. In fact, we may also get the length of each line in the text vector by applying the function **length()** to the entire text vector. ``` text_len = str_length(text) print(text_len) ``` ``` ## [1] 6 69 0 66 70 70 70 63 69 65 59 59 70 67 66 58 67 66 69 69 67 62 63 ## [24] 19 0 0 56 0 65 67 66 65 64 66 69 63 69 65 27 0 3 0 71 71 69 68 ## [47] 71 12 0 3 0 71 70 68 71 69 63 67 69 64 67 7 0 3 0 67 71 65 63 ## [70] 72 69 68 66 69 70 70 43 0 0 0 ``` ``` print(text_len[55]) ``` ``` ## [1] 71 ``` ``` text_len[17] ``` ``` ## [1] 67 ``` 7\.7 Sort by Length ------------------- Some lines are very long and are the ones we are mainly interested in as they contain the bulk of the story, whereas many of the remaining lines that are shorter contain html formatting instructions. Thus, we may extract the top three lengthy lines with the following set of commands. ``` res = sort(text_len,decreasing=TRUE,index.return=TRUE) idx = res$ix text2 = text[idx] text2 ``` ``` ## [1] "important to open the academic door to the ivory tower and let the world" ## [2] "Sanjiv is now a Professor of Finance at Santa Clara University. He came" ## [3] "to SCU from Harvard Business School and spent a year at UC Berkeley. In" ## [4] "previous lives into his current existence, which is incredibly confused" ## [5] "Sanjiv's research style is instilled with a distinct \"New York state of" ## [6] "funds, the internet, portfolio choice, banking models, credit risk, and" ## [7] "ocean. The many walks in Greenwich village convinced him that there is" ## [8] "Santa Clara University's Leavey School of Business. He previously held" ## [9] "faculty appointments as Associate Professor at Harvard Business School" ## [10] "and UC Berkeley. He holds post-graduate degrees in Finance (M.Phil and" ## [11] "Management, co-editor of The Journal of Derivatives and The Journal of" ## [12] "mind\" - it is chaotic, diverse, with minimal method to the madness. He" ## [13] "any time you like, but you can never leave.\" Which is why he is doomed" ## [14] "to a lifetime in Hotel California. 
And he believes that, if this is as" ## [15] "<BODY background=\"http://algo.scu.edu/~sanjivdas/graphics/back2.gif\">" ## [16] "Berkeley), an MBA from the Indian Institute of Management, Ahmedabad," ## [17] "modeling of default risk, and venture capital. He has published over" ## [18] "ninety articles in academic journals, and has won numerous awards for" ## [19] "science fiction movies, and writing cool software code. When there is" ## [20] "academic papers, which helps him relax. Always the contrarian, Sanjiv" ## [21] "his past life in the unreal world, Sanjiv worked at Citibank, N.A. in" ## [22] "has unpublished articles in many other areas. Some years ago, he took" ## [23] "There he learnt about the fascinating field of Randomized Algorithms," ## [24] "in. Academia is a real challenge, given that he has to reconcile many" ## [25] "explains, you never really finish your education - \"you can check out" ## [26] "the Asia-Pacific region. He takes great pleasure in merging his many" ## [27] "has published articles on derivatives, term-structure models, mutual" ## [28] "more opinions than ideas. He has been known to have turned down many" ## [29] "Financial Services Research, and Associate Editor of other academic" ## [30] "Citibank. His current research interests include: machine learning," ## [31] "research and teaching. His recent book \"Derivatives: Principles and" ## [32] "growing up, Sanjiv moved to New York to change the world, hopefully" ## [33] "confirming that an unchecked hobby can quickly become an obsession." ## [34] "pursuits, many of which stem from being in the epicenter of Silicon" ## [35] "Coastal living did a lot to mold Sanjiv, who needs to live near the" ## [36] "Sanjiv Das is the William and Janice Terry Professor of Finance at" ## [37] "journals. Prior to being an academic, he worked in the derivatives" ## [38] "social networks, derivatives pricing models, portfolio theory, the" ## [39] "through research. He graduated in 1994 with a Ph.D. from NYU, and" ## [40] "mountains meet the sea, riding sport motorbikes, reading, gadgets," ## [41] "offers from Mad magazine to publish his academic work. As he often" ## [42] "B.Com in Accounting and Economics (University of Bombay, Sydenham" ## [43] "After loafing and working in many parts of Asia, but never really" ## [44] "since then spent five years in Boston, and now lives in San Jose," ## [45] "thinks that New York City is the most calming place in the world," ## [46] "no such thing as a representative investor, yet added many unique" ## [47] "California. Sanjiv loves animals, places in the world where the" ## [48] "skills he now applies earnestly to his editorial work, and other" ## [49] "Ph.D. from New York University), Computer Science (M.S. from UC" ## [50] "currently also serves as a Senior Fellow at the FDIC Center for" ## [51] "time available from the excitement of daily life, Sanjiv writes" ## [52] "time off to get another degree in computer science at Berkeley," ## [53] "features to his personal utility function. He learnt that it is" ## [54] "Practice\" was published in May 2010 (second edition 2016). He" ## [55] "College), and is also a qualified Cost and Works Accountant" ## [56] "(AICWA). He is a senior editor of The Journal of Investment" ## [57] "business in the Asia-Pacific region as a Vice-President at" ## [58] "<p> <B>Sanjiv Das: A Short Academic Life History</B> <p>" ## [59] "bad as it gets, life is really pretty good." ## [60] "after California of course." ## [61] "Financial Research." 
## [62] "and diverse." ## [63] "Valley." ## [64] "<HTML>" ## [65] "<p>" ## [66] "<p>" ## [67] "<p>" ## [68] "" ## [69] "" ## [70] "" ## [71] "" ## [72] "" ## [73] "" ## [74] "" ## [75] "" ## [76] "" ## [77] "" ## [78] "" ## [79] "" ## [80] "" ``` 7\.8 Text cleanup ----------------- In short, text extraction can be exceedingly simple, though getting clean text is not as easy an operation. Removing html tags and other unnecessary elements in the file is also a fairly simple operation. We undertake the following steps that use generalized regular expressions (i.e., **grep**) to eliminate html formatting characters. This will generate one single paragraph of text, relatively clean of formatting characters. Such a text collection is also known as a “bag of words”. ``` text = paste(text,collapse="\n") print(text) ``` ``` ## [1] "<HTML>\n<BODY background=\"http://algo.scu.edu/~sanjivdas/graphics/back2.gif\">\n\nSanjiv Das is the William and Janice Terry Professor of Finance at\nSanta Clara University's Leavey School of Business. He previously held\nfaculty appointments as Associate Professor at Harvard Business School\nand UC Berkeley. He holds post-graduate degrees in Finance (M.Phil and\nPh.D. from New York University), Computer Science (M.S. from UC\nBerkeley), an MBA from the Indian Institute of Management, Ahmedabad,\nB.Com in Accounting and Economics (University of Bombay, Sydenham\nCollege), and is also a qualified Cost and Works Accountant\n(AICWA). He is a senior editor of The Journal of Investment\nManagement, co-editor of The Journal of Derivatives and The Journal of\nFinancial Services Research, and Associate Editor of other academic\njournals. Prior to being an academic, he worked in the derivatives\nbusiness in the Asia-Pacific region as a Vice-President at\nCitibank. His current research interests include: machine learning,\nsocial networks, derivatives pricing models, portfolio theory, the\nmodeling of default risk, and venture capital. He has published over\nninety articles in academic journals, and has won numerous awards for\nresearch and teaching. His recent book \"Derivatives: Principles and\nPractice\" was published in May 2010 (second edition 2016). He\ncurrently also serves as a Senior Fellow at the FDIC Center for\nFinancial Research.\n\n\n<p> <B>Sanjiv Das: A Short Academic Life History</B> <p>\n\nAfter loafing and working in many parts of Asia, but never really\ngrowing up, Sanjiv moved to New York to change the world, hopefully\nthrough research. He graduated in 1994 with a Ph.D. from NYU, and\nsince then spent five years in Boston, and now lives in San Jose,\nCalifornia. Sanjiv loves animals, places in the world where the\nmountains meet the sea, riding sport motorbikes, reading, gadgets,\nscience fiction movies, and writing cool software code. When there is\ntime available from the excitement of daily life, Sanjiv writes\nacademic papers, which helps him relax. Always the contrarian, Sanjiv\nthinks that New York City is the most calming place in the world,\nafter California of course.\n\n<p>\n\nSanjiv is now a Professor of Finance at Santa Clara University. He came\nto SCU from Harvard Business School and spent a year at UC Berkeley. In\nhis past life in the unreal world, Sanjiv worked at Citibank, N.A. in\nthe Asia-Pacific region. 
He takes great pleasure in merging his many\nprevious lives into his current existence, which is incredibly confused\nand diverse.\n\n<p>\n\nSanjiv's research style is instilled with a distinct \"New York state of\nmind\" - it is chaotic, diverse, with minimal method to the madness. He\nhas published articles on derivatives, term-structure models, mutual\nfunds, the internet, portfolio choice, banking models, credit risk, and\nhas unpublished articles in many other areas. Some years ago, he took\ntime off to get another degree in computer science at Berkeley,\nconfirming that an unchecked hobby can quickly become an obsession.\nThere he learnt about the fascinating field of Randomized Algorithms,\nskills he now applies earnestly to his editorial work, and other\npursuits, many of which stem from being in the epicenter of Silicon\nValley.\n\n<p>\n\nCoastal living did a lot to mold Sanjiv, who needs to live near the\nocean. The many walks in Greenwich village convinced him that there is\nno such thing as a representative investor, yet added many unique\nfeatures to his personal utility function. He learnt that it is\nimportant to open the academic door to the ivory tower and let the world\nin. Academia is a real challenge, given that he has to reconcile many\nmore opinions than ideas. He has been known to have turned down many\noffers from Mad magazine to publish his academic work. As he often\nexplains, you never really finish your education - \"you can check out\nany time you like, but you can never leave.\" Which is why he is doomed\nto a lifetime in Hotel California. And he believes that, if this is as\nbad as it gets, life is really pretty good.\n\n\n" ``` ``` text = str_replace_all(text,"[<>{}()&;,.\n]"," ") print(text) ``` ``` ## [1] " HTML BODY background=\"http://algo scu edu/~sanjivdas/graphics/back2 gif\" Sanjiv Das is the William and Janice Terry Professor of Finance at Santa Clara University's Leavey School of Business He previously held faculty appointments as Associate Professor at Harvard Business School and UC Berkeley He holds post-graduate degrees in Finance M Phil and Ph D from New York University Computer Science M S from UC Berkeley an MBA from the Indian Institute of Management Ahmedabad B Com in Accounting and Economics University of Bombay Sydenham College and is also a qualified Cost and Works Accountant AICWA He is a senior editor of The Journal of Investment Management co-editor of The Journal of Derivatives and The Journal of Financial Services Research and Associate Editor of other academic journals Prior to being an academic he worked in the derivatives business in the Asia-Pacific region as a Vice-President at Citibank His current research interests include: machine learning social networks derivatives pricing models portfolio theory the modeling of default risk and venture capital He has published over ninety articles in academic journals and has won numerous awards for research and teaching His recent book \"Derivatives: Principles and Practice\" was published in May 2010 second edition 2016 He currently also serves as a Senior Fellow at the FDIC Center for Financial Research p B Sanjiv Das: A Short Academic Life History /B p After loafing and working in many parts of Asia but never really growing up Sanjiv moved to New York to change the world hopefully through research He graduated in 1994 with a Ph D from NYU and since then spent five years in Boston and now lives in San Jose California Sanjiv loves animals places in the world where the mountains meet the 
sea riding sport motorbikes reading gadgets science fiction movies and writing cool software code When there is time available from the excitement of daily life Sanjiv writes academic papers which helps him relax Always the contrarian Sanjiv thinks that New York City is the most calming place in the world after California of course p Sanjiv is now a Professor of Finance at Santa Clara University He came to SCU from Harvard Business School and spent a year at UC Berkeley In his past life in the unreal world Sanjiv worked at Citibank N A in the Asia-Pacific region He takes great pleasure in merging his many previous lives into his current existence which is incredibly confused and diverse p Sanjiv's research style is instilled with a distinct \"New York state of mind\" - it is chaotic diverse with minimal method to the madness He has published articles on derivatives term-structure models mutual funds the internet portfolio choice banking models credit risk and has unpublished articles in many other areas Some years ago he took time off to get another degree in computer science at Berkeley confirming that an unchecked hobby can quickly become an obsession There he learnt about the fascinating field of Randomized Algorithms skills he now applies earnestly to his editorial work and other pursuits many of which stem from being in the epicenter of Silicon Valley p Coastal living did a lot to mold Sanjiv who needs to live near the ocean The many walks in Greenwich village convinced him that there is no such thing as a representative investor yet added many unique features to his personal utility function He learnt that it is important to open the academic door to the ivory tower and let the world in Academia is a real challenge given that he has to reconcile many more opinions than ideas He has been known to have turned down many offers from Mad magazine to publish his academic work As he often explains you never really finish your education - \"you can check out any time you like but you can never leave \" Which is why he is doomed to a lifetime in Hotel California And he believes that if this is as bad as it gets life is really pretty good " ``` 7\.9 The *XML* Package ---------------------- The **XML** package in R also comes with many functions that aid in cleaning up text and dropping it (mostly unformatted) into a flat file or data frame. This may then be further processed. Here is some example code for this. ### 7\.9\.1 Processing XML files in R into a data frame The following example has been adapted from r\-bloggers.com. It uses the following URL: <http://www.w3schools.com/xml/plant_catalog.xml> ``` library(XML) #Part1: Reading an xml and creating a data frame with it. 
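#If parsing the URL directly fails (for example, if the site only serves the
#file over https), one workaround is to download the file first and then parse
#the local copy; the local file name below is just an illustrative choice:
#  download.file("http://www.w3schools.com/xml/plant_catalog.xml", "plant_catalog.xml")
#  xmlfile <- xmlTreeParse("plant_catalog.xml")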
xml.url <- "http://www.w3schools.com/xml/plant_catalog.xml" xmlfile <- xmlTreeParse(xml.url) xmltop <- xmlRoot(xmlfile) plantcat <- xmlSApply(xmltop, function(x) xmlSApply(x, xmlValue)) plantcat_df <- data.frame(t(plantcat),row.names=NULL) plantcat_df[1:5,1:4] ``` ### 7\.9\.2 Creating a XML file from a data frame ``` library(XML) ``` ``` ## Warning: package 'XML' was built under R version 3.3.2 ``` ``` ## Loading required package: methods ``` ``` #Example adapted from https://stat.ethz.ch/pipermail/r-help/2008-September/175364.html #Load the iris data set and create a data frame data("iris") data <- as.data.frame(iris) xml <- xmlTree() xml$addTag("document", close=FALSE) ``` ``` ## Warning in xmlRoot.XMLInternalDocument(currentNodes[[1]]): empty XML ## document ``` ``` for (i in 1:nrow(data)) { xml$addTag("row", close=FALSE) for (j in names(data)) { xml$addTag(j, data[i, j]) } xml$closeTag() } xml$closeTag() #view the xml (uncomment line below to see XML, long output) cat(saveXML(xml)) ``` ``` ## <?xml version="1.0"?> ## ## <document> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.9</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.3</Sepal.Length> ## 
<Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.1</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>4</Sepal.Width> ## <Petal.Length>1.2</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>4.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.9</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>1.7</Petal.Length> ## <Petal.Width>0.5</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.9</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## 
<Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>4.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>4.2</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.2</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.1</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.5</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.4</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.3</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.5</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.6</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.9</Petal.Length> ## <Petal.Width>0.4</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.8</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.3</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>1.6</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>4.6</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5.3</Sepal.Length> ## <Sepal.Width>3.7</Sepal.Width> ## <Petal.Length>1.5</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>1.4</Petal.Length> ## <Petal.Width>0.2</Petal.Width> ## <Species>setosa</Species> ## </row> ## <row> ## <Sepal.Length>7</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## 
<row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.6</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.6</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>2.4</Sepal.Width> ## <Petal.Length>3.3</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.6</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.6</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.2</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>3.9</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>2</Sepal.Width> ## <Petal.Length>3.5</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.2</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>2.2</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>3.6</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>4.4</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>4.1</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.2</Sepal.Length> ## <Sepal.Width>2.2</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>3.9</Petal.Length> ## 
<Petal.Width>1.1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.9</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>4.8</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.2</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.3</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.6</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.4</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.8</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.8</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5</Petal.Length> ## <Petal.Width>1.7</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.6</Sepal.Width> ## <Petal.Length>3.5</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.4</Sepal.Width> ## <Petal.Length>3.8</Petal.Length> ## <Petal.Width>1.1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.4</Sepal.Width> ## <Petal.Length>3.7</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>3.9</Petal.Length> ## <Petal.Width>1.2</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.6</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.4</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.6</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>4.7</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>4.4</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> 
## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.1</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.5</Sepal.Length> ## <Sepal.Width>2.6</Sepal.Width> ## <Petal.Length>4.4</Petal.Length> ## <Petal.Width>1.2</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.6</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.6</Sepal.Width> ## <Petal.Length>4</Petal.Length> ## <Petal.Width>1.2</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5</Sepal.Length> ## <Sepal.Width>2.3</Sepal.Width> ## <Petal.Length>3.3</Petal.Length> ## <Petal.Width>1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>4.2</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.2</Petal.Length> ## <Petal.Width>1.2</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.2</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.2</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>4.3</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.1</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>3</Petal.Length> ## <Petal.Width>1.1</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.1</Petal.Length> ## <Petal.Width>1.3</Petal.Width> ## <Species>versicolor</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>6</Petal.Length> ## <Petal.Width>2.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.1</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.9</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.8</Petal.Length> ## <Petal.Width>2.2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.6</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>6.6</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>4.9</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>4.5</Petal.Length> ## <Petal.Width>1.7</Petal.Width> ## 
<Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.3</Sepal.Length> ## <Sepal.Width>2.9</Sepal.Width> ## <Petal.Length>6.3</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>5.8</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.2</Sepal.Length> ## <Sepal.Width>3.6</Sepal.Width> ## <Petal.Length>6.1</Petal.Length> ## <Petal.Width>2.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>5.3</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.8</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.5</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.7</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>5</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>2.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>5.3</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.5</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.7</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>6.7</Petal.Length> ## <Petal.Width>2.2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.7</Sepal.Length> ## <Sepal.Width>2.6</Sepal.Width> ## <Petal.Length>6.9</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>2.2</Sepal.Width> ## <Petal.Length>5</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>5.7</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.6</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.7</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>6.7</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>5.7</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.2</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## 
<Petal.Length>6</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.2</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>4.8</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.9</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.2</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.8</Petal.Length> ## <Petal.Width>1.6</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.4</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>6.1</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.9</Sepal.Length> ## <Sepal.Width>3.8</Sepal.Width> ## <Petal.Length>6.4</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>2.2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.8</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.1</Sepal.Length> ## <Sepal.Width>2.6</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>1.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>7.7</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>6.1</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>2.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.5</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.8</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.4</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>2.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.8</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>5.9</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## 
<Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>5.7</Petal.Length> ## <Petal.Width>2.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.2</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>5</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.2</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.2</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>5.4</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## </document> ```
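The XML tree built above was only streamed to the console with `cat()`. A natural last step is to save it to disk and check that it can be read back. The short sketch below does this; it assumes the `xml` object from the chunk above is still in memory, and the file name `iris.xml` and the `xmlToDataFrame()` call are illustrative choices rather than part of the original example.

```
#Sketch: write the XML tree to disk and read it back into a data frame
#(assumes the xml object from the previous chunk; "iris.xml" is illustrative)
library(XML)
cat(saveXML(xml), file="iris.xml")      #saveXML() returns the XML as text; cat() writes it to a file
iris_back = xmlToDataFrame("iris.xml")  #each <row> node becomes one data frame row
print(dim(iris_back))
print(head(iris_back,3))
```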
<Petal.Width>2.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.4</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.5</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>4.8</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.4</Petal.Length> ## <Petal.Width>2.1</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.6</Petal.Length> ## <Petal.Width>2.4</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.9</Sepal.Length> ## <Sepal.Width>3.1</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.8</Sepal.Length> ## <Sepal.Width>2.7</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.8</Sepal.Length> ## <Sepal.Width>3.2</Sepal.Width> ## <Petal.Length>5.9</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3.3</Sepal.Width> ## <Petal.Length>5.7</Petal.Length> ## <Petal.Width>2.5</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.7</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.2</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.3</Sepal.Length> ## <Sepal.Width>2.5</Sepal.Width> ## <Petal.Length>5</Petal.Length> ## <Petal.Width>1.9</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.5</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.2</Petal.Length> ## <Petal.Width>2</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>6.2</Sepal.Length> ## <Sepal.Width>3.4</Sepal.Width> ## <Petal.Length>5.4</Petal.Length> ## <Petal.Width>2.3</Petal.Width> ## <Species>virginica</Species> ## </row> ## <row> ## <Sepal.Length>5.9</Sepal.Length> ## <Sepal.Width>3</Sepal.Width> ## <Petal.Length>5.1</Petal.Length> ## <Petal.Width>1.8</Petal.Width> ## <Species>virginica</Species> ## </row> ## </document> ``` 7\.10 The Response to News -------------------------- ### 7\.10\.1 Das, Martinez\-Jerez, and Tufano (FM 2005\) ### 7\.10\.2 Breakdown of News Flow ### 7\.10\.3 Frequency of Postings ### 7\.10\.4 Weekly Posting ### 7\.10\.5 Intraday Posting ### 7\.10\.6 Number of Characters per Posting ### 7\.10\.1 Das, Martinez\-Jerez, and Tufano (FM 2005\) ### 7\.10\.2 Breakdown of News Flow ### 7\.10\.3 Frequency of Postings ### 7\.10\.4 Weekly Posting ### 7\.10\.5 Intraday Posting ### 7\.10\.6 Number of Characters per Posting 7\.11 Text Handling ------------------- First, let’s read in a simple web page (my landing page) ``` text = readLines("http://srdas.github.io/") print(text[1:4]) ``` ``` ## [1] "<html>" ## [2] "" ## [3] "<head>" ## [4] "<title>SCU Web Page of Sanjiv Ranjan Das</title>" ``` ``` print(length(text)) ``` ``` ## [1] 36 ``` ### 7\.11\.1 String Detection String handling is a basic need, so we use the **stringr** package. 
``` #EXTRACTING SUBSTRINGS (take some time to look at #the "stringr" package also) library(stringr) substr(text[4],24,29) ``` ``` ## [1] "Sanjiv" ``` ``` #IF YOU WANT TO LOCATE A STRING res = regexpr("Sanjiv",text[4]) print(res) ``` ``` ## [1] 24 ## attr(,"match.length") ## [1] 6 ## attr(,"useBytes") ## [1] TRUE ``` ``` print(substr(text[4],res[1],res[1]+nchar("Sanjiv")-1)) ``` ``` ## [1] "Sanjiv" ``` ``` #ANOTHER WAY res = str_locate(text[4],"Sanjiv") print(res) ``` ``` ## start end ## [1,] 24 29 ``` ``` print(substr(text[4],res[1],res[2])) ``` ``` ## [1] "Sanjiv" ``` ### 7\.11\.2 Cleaning Text Now we look at using regular expressions with the **grep** command to clean out text. I will read in my research page to process this. Here we are undertaking a “ruthless” cleanup. ``` #SIMPLE TEXT HANDLING text = readLines("http://srdas.github.io/research.htm") print(length(text)) ``` ``` ## [1] 845 ``` ``` #print(text) text = text[setdiff(seq(1,length(text)),grep("<",text))] text = text[setdiff(seq(1,length(text)),grep(">",text))] text = text[setdiff(seq(1,length(text)),grep("]",text))] text = text[setdiff(seq(1,length(text)),grep("}",text))] text = text[setdiff(seq(1,length(text)),grep("_",text))] text = text[setdiff(seq(1,length(text)),grep("\\/",text))] print(length(text)) ``` ``` ## [1] 350 ``` ``` #print(text) text = str_replace_all(text,"[\"]","") idx = which(nchar(text)==0) research = text[setdiff(seq(1,length(text)),idx)] print(research) ``` ``` ## [1] "Data Science: Theories, Models, Algorithms, and Analytics (web book -- work in progress)" ## [2] "Derivatives: Principles and Practice (2010)," ## [3] "(Rangarajan Sundaram and Sanjiv Das), McGraw Hill." ## [4] "An Index-Based Measure of Liquidity,'' (with George Chacko and Rong Fan), (2016)." ## [5] "Matrix Metrics: Network-Based Systemic Risk Scoring, (2016)." ## [6] "of systemic risk. This paper won the First Prize in the MIT-CFP competition 2016 for " ## [7] "the best paper on SIFIs (systemically important financial institutions). " ## [8] "It also won the best paper award at " ## [9] "Credit Spreads with Dynamic Debt (with Seoyoung Kim), (2015), " ## [10] "Text and Context: Language Analytics for Finance, (2014)," ## [11] "Strategic Loan Modification: An Options-Based Response to Strategic Default," ## [12] "Options and Structured Products in Behavioral Portfolios, (with Meir Statman), (2013), " ## [13] "and barrier range notes, in the presence of fat-tailed outcomes using copulas." ## [14] "Polishing Diamonds in the Rough: The Sources of Syndicated Venture Performance, (2011), (with Hoje Jo and Yongtae Kim), " ## [15] "Optimization with Mental Accounts, (2010), (with Harry Markowitz, Jonathan" ## [16] "Accounting-based versus market-based cross-sectional models of CDS spreads, " ## [17] "(with Paul Hanouna and Atulya Sarin), (2009), " ## [18] "Hedging Credit: Equity Liquidity Matters, (with Paul Hanouna), (2009)," ## [19] "An Integrated Model for Hybrid Securities," ## [20] "Yahoo for Amazon! Sentiment Extraction from Small Talk on the Web," ## [21] "Common Failings: How Corporate Defaults are Correlated " ## [22] "(with Darrell Duffie, Nikunj Kapadia and Leandro Saita)." 
## [23] "A Clinical Study of Investor Discussion and Sentiment, " ## [24] "(with Asis Martinez-Jerez and Peter Tufano), 2005, " ## [25] "International Portfolio Choice with Systemic Risk," ## [26] "The loss resulting from diminished diversification is small, while" ## [27] "Speech: Signaling, Risk-sharing and the Impact of Fee Structures on" ## [28] "investor welfare. Contrary to regulatory intuition, incentive structures" ## [29] "A Discrete-Time Approach to No-arbitrage Pricing of Credit derivatives" ## [30] "with Rating Transitions, (with Viral Acharya and Rangarajan Sundaram)," ## [31] "Pricing Interest Rate Derivatives: A General Approach,''(with George Chacko)," ## [32] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [33] "The Psychology of Financial Decision Making: A Case" ## [34] "for Theory-Driven Experimental Enquiry,''" ## [35] "1999, (with Priya Raghubir)," ## [36] "Of Smiles and Smirks: A Term Structure Perspective,''" ## [37] "A Theory of Banking Structure, 1999, (with Ashish Nanda)," ## [38] "by function based upon two dimensions: the degree of information asymmetry " ## [39] "A Theory of Optimal Timing and Selectivity,'' " ## [40] "A Direct Discrete-Time Approach to" ## [41] "Poisson-Gaussian Bond Option Pricing in the Heath-Jarrow-Morton " ## [42] "The Central Tendency: A Second Factor in" ## [43] "Bond Yields, 1998, (with Silverio Foresi and Pierluigi Balduzzi), " ## [44] "Efficiency with Costly Information: A Reinterpretation of" ## [45] "Evidence from Managed Portfolios, (with Edwin Elton, Martin Gruber and Matt " ## [46] "Presented and Reprinted in the Proceedings of The " ## [47] "Seminar on the Analysis of Security Prices at the Center " ## [48] "for Research in Security Prices at the University of " ## [49] "Managing Rollover Risk with Capital Structure Covenants" ## [50] "in Structured Finance Vehicles (2016)," ## [51] "The Design and Risk Management of Structured Finance Vehicles (2016)," ## [52] "Post the recent subprime financial crisis, we inform the creation of safer SIVs " ## [53] "in structured finance, and propose avenues of mitigating risks faced by senior debt through " ## [54] "Coming up Short: Managing Underfunded Portfolios in an LDI-ES Framework (2014), " ## [55] "(with Seoyoung Kim and Meir Statman), " ## [56] "Going for Broke: Restructuring Distressed Debt Portfolios (2014)," ## [57] "Digital Portfolios. (2013), " ## [58] "Options on Portfolios with Higher-Order Moments, (2009)," ## [59] "options on a multivariate system of assets, calibrated to the return " ## [60] "Dealing with Dimension: Option Pricing on Factor Trees, (2009)," ## [61] "you to price options on multiple assets in a unified fraamework. Computational" ## [62] "Modeling" ## [63] "Correlated Default with a Forest of Binomial Trees, (2007), (with" ## [64] "Basel II: Correlation Related Issues (2007), " ## [65] "Correlated Default Risk, (2006)," ## [66] "(with Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [67] "increase as markets worsen. Regime switching models are needed to explain dynamic" ## [68] "A Simple Model for Pricing Equity Options with Markov" ## [69] "Switching State Variables (2006)," ## [70] "(with Donald Aingworth and Rajeev Motwani)," ## [71] "The Firm's Management of Social Interactions, (2005)" ## [72] "(with D. Godes, D. Mayzlin, Y. Chen, S. Das, C. Dellarocas, " ## [73] "B. Pfeieffer, B. Libai, S. Sen, M. Shi, and P. Verlegh). " ## [74] "Financial Communities (with Jacob Sisk), 2005, " ## [75] "Summer, 112-123." 
## [76] "Monte Carlo Markov Chain Methods for Derivative Pricing" ## [77] "and Risk Assessment,(with Alistair Sinclair), 2005, " ## [78] "where incomplete information about the value of an asset may be exploited to " ## [79] "undertake fast and accurate pricing. Proof that a fully polynomial randomized " ## [80] "Correlated Default Processes: A Criterion-Based Copula Approach," ## [81] "Special Issue on Default Risk. " ## [82] "Private Equity Returns: An Empirical Examination of the Exit of" ## [83] "Venture-Backed Companies, (with Murali Jagannathan and Atulya Sarin)," ## [84] "firm being financed, the valuation at the time of financing, and the prevailing market" ## [85] "sentiment. Helps understand the risk premium required for the" ## [86] "Issue on Computational Methods in Economics and Finance), " ## [87] "December, 55-69." ## [88] "Bayesian Migration in Credit Ratings Based on Probabilities of" ## [89] "The Impact of Correlated Default Risk on Credit Portfolios," ## [90] "(with Gifford Fong, and Gary Geng)," ## [91] "How Diversified are Internationally Diversified Portfolios:" ## [92] "Time-Variation in the Covariances between International Returns," ## [93] "Discrete-Time Bond and Option Pricing for Jump-Diffusion" ## [94] "Macroeconomic Implications of Search Theory for the Labor Market," ## [95] "Auction Theory: A Summary with Applications and Evidence" ## [96] "from the Treasury Markets, 1996, (with Rangarajan Sundaram)," ## [97] "A Simple Approach to Three Factor Affine Models of the" ## [98] "Term Structure, (with Pierluigi Balduzzi, Silverio Foresi and Rangarajan" ## [99] "Analytical Approximations of the Term Structure" ## [100] "for Jump-diffusion Processes: A Numerical Analysis, 1996, " ## [101] "Markov Chain Term Structure Models: Extensions and Applications," ## [102] "Exact Solutions for Bond and Options Prices" ## [103] "with Systematic Jump Risk, 1996, (with Silverio Foresi)," ## [104] "Pricing Credit Sensitive Debt when Interest Rates, Credit Ratings" ## [105] "and Credit Spreads are Stochastic, 1996, " ## [106] "v5(2), 161-198." ## [107] "Did CDS Trading Improve the Market for Corporate Bonds, (2016), " ## [108] "(with Madhu Kalimipalli and Subhankar Nayak), " ## [109] "Big Data's Big Muscle, (2016), " ## [110] "Portfolios for Investors Who Want to Reach Their Goals While Staying on the Mean-Variance Efficient Frontier, (2011), " ## [111] "(with Harry Markowitz, Jonathan Scheid, and Meir Statman), " ## [112] "News Analytics: Framework, Techniques and Metrics, The Handbook of News Analytics in Finance, May 2011, John Wiley & Sons, U.K. 
" ## [113] "Random Lattices for Option Pricing Problems in Finance, (2011)," ## [114] "Implementing Option Pricing Models using Python and Cython, (2010)," ## [115] "The Finance Web: Internet Information and Markets, (2010), " ## [116] "Financial Applications with Parallel R, (2009), " ## [117] "Recovery Swaps, (2009), (with Paul Hanouna), " ## [118] "Recovery Rates, (2009),(with Paul Hanouna), " ## [119] "``A Simple Model for Pricing Securities with a Debt-Equity Linkage,'' 2008, in " ## [120] "Credit Default Swap Spreads, 2006, (with Paul Hanouna), " ## [121] "Multiple-Core Processors for Finance Applications, 2006, " ## [122] "Power Laws, 2005, (with Jacob Sisk), " ## [123] "Genetic Algorithms, 2005," ## [124] "Recovery Risk, 2005," ## [125] "Venture Capital Syndication, (with Hoje Jo and Yongtae Kim), 2004" ## [126] "Technical Analysis, (with David Tien), 2004" ## [127] "Liquidity and the Bond Markets, (with Jan Ericsson and " ## [128] "Madhu Kalimipalli), 2003," ## [129] "Modern Pricing of Interest Rate Derivatives - Book Review, " ## [130] "Contagion, 2003," ## [131] "Hedge Funds, 2003," ## [132] "Reprinted in " ## [133] "Working Papers on Hedge Funds, in The World of Hedge Funds: " ## [134] "Characteristics and " ## [135] "Analysis, 2005, World Scientific." ## [136] "The Internet and Investors, 2003," ## [137] " Useful things to know about Correlated Default Risk," ## [138] "(with Gifford Fong, Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [139] "The Regulation of Fee Structures in Mutual Funds: A Theoretical Analysis,'' " ## [140] "(with Rangarajan Sundaram), 1998, NBER WP No 6639, in the" ## [141] "Courant Institute of Mathematical Sciences, special volume on" ## [142] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [143] "(with Rangarajan Sundaram), reprinted in " ## [144] "the Courant Institute of Mathematical Sciences, special volume on" ## [145] "Stochastic Mean Models of the Term Structure,''" ## [146] "(with Pierluigi Balduzzi, Silverio Foresi and Rangarajan Sundaram), " ## [147] "John Wiley & Sons, Inc., 128-161." ## [148] "Interest Rate Modeling with Jump-Diffusion Processes,'' " ## [149] "John Wiley & Sons, Inc., 162-189." ## [150] "Comments on 'Pricing Excess-of-Loss Reinsurance Contracts against" ## [151] "Catastrophic Loss,' by J. David Cummins, C. Lewis, and Richard Phillips," ## [152] "Froot (Ed.), University of Chicago Press, 1999, 141-145." ## [153] " Pricing Credit Derivatives,'' " ## [154] "J. Frost and J.G. Whittaker, 101-138." ## [155] "On the Recursive Implementation of Term Structure Models,'' " ## [156] "Zero-Revelation RegTech: Detecting Risk through" ## [157] "Linguistic Analysis of Corporate Emails and News " ## [158] "(with Seoyoung Kim and Bhushan Kothari)." ## [159] "Summary for the Columbia Law School blog: " ## [160] " " ## [161] "Dynamic Risk Networks: A Note " ## [162] "(with Seoyoung Kim and Dan Ostrov)." ## [163] "Research Challenges in Financial Data Modeling and Analysis " ## [164] "(with Lewis Alexander, Zachary Ives, H.V. Jagadish, and Claire Monteleoni)." ## [165] "Local Volatility and the Recovery Rate of Credit Default Swaps " ## [166] "(with Jeroen Jansen and Frank Fabozzi)." 
## [167] "Efficient Rebalancing of Taxable Portfolios (with Dan Ostrov, Dennis Ding, Vincent Newell), " ## [168] "The Fast and the Curious: VC Drift " ## [169] "(with Amit Bubna and Paul Hanouna), " ## [170] "Venture Capital Communities (with Amit Bubna and Nagpurnanand Prabhala), " ## [171] " " ``` Take a look at the text now to see how cleaned up it is. But there is a better way, i.e., use the text\-mining package **tm**. ### 7\.11\.1 String Detection String handling is a basic need, so we use the **stringr** package. ``` #EXTRACTING SUBSTRINGS (take some time to look at #the "stringr" package also) library(stringr) substr(text[4],24,29) ``` ``` ## [1] "Sanjiv" ``` ``` #IF YOU WANT TO LOCATE A STRING res = regexpr("Sanjiv",text[4]) print(res) ``` ``` ## [1] 24 ## attr(,"match.length") ## [1] 6 ## attr(,"useBytes") ## [1] TRUE ``` ``` print(substr(text[4],res[1],res[1]+nchar("Sanjiv")-1)) ``` ``` ## [1] "Sanjiv" ``` ``` #ANOTHER WAY res = str_locate(text[4],"Sanjiv") print(res) ``` ``` ## start end ## [1,] 24 29 ``` ``` print(substr(text[4],res[1],res[2])) ``` ``` ## [1] "Sanjiv" ``` ### 7\.11\.2 Cleaning Text Now we look at using regular expressions with the **grep** command to clean out text. I will read in my research page to process this. Here we are undertaking a “ruthless” cleanup. ``` #SIMPLE TEXT HANDLING text = readLines("http://srdas.github.io/research.htm") print(length(text)) ``` ``` ## [1] 845 ``` ``` #print(text) text = text[setdiff(seq(1,length(text)),grep("<",text))] text = text[setdiff(seq(1,length(text)),grep(">",text))] text = text[setdiff(seq(1,length(text)),grep("]",text))] text = text[setdiff(seq(1,length(text)),grep("}",text))] text = text[setdiff(seq(1,length(text)),grep("_",text))] text = text[setdiff(seq(1,length(text)),grep("\\/",text))] print(length(text)) ``` ``` ## [1] 350 ``` ``` #print(text) text = str_replace_all(text,"[\"]","") idx = which(nchar(text)==0) research = text[setdiff(seq(1,length(text)),idx)] print(research) ``` ``` ## [1] "Data Science: Theories, Models, Algorithms, and Analytics (web book -- work in progress)" ## [2] "Derivatives: Principles and Practice (2010)," ## [3] "(Rangarajan Sundaram and Sanjiv Das), McGraw Hill." ## [4] "An Index-Based Measure of Liquidity,'' (with George Chacko and Rong Fan), (2016)." ## [5] "Matrix Metrics: Network-Based Systemic Risk Scoring, (2016)." ## [6] "of systemic risk. This paper won the First Prize in the MIT-CFP competition 2016 for " ## [7] "the best paper on SIFIs (systemically important financial institutions). " ## [8] "It also won the best paper award at " ## [9] "Credit Spreads with Dynamic Debt (with Seoyoung Kim), (2015), " ## [10] "Text and Context: Language Analytics for Finance, (2014)," ## [11] "Strategic Loan Modification: An Options-Based Response to Strategic Default," ## [12] "Options and Structured Products in Behavioral Portfolios, (with Meir Statman), (2013), " ## [13] "and barrier range notes, in the presence of fat-tailed outcomes using copulas." ## [14] "Polishing Diamonds in the Rough: The Sources of Syndicated Venture Performance, (2011), (with Hoje Jo and Yongtae Kim), " ## [15] "Optimization with Mental Accounts, (2010), (with Harry Markowitz, Jonathan" ## [16] "Accounting-based versus market-based cross-sectional models of CDS spreads, " ## [17] "(with Paul Hanouna and Atulya Sarin), (2009), " ## [18] "Hedging Credit: Equity Liquidity Matters, (with Paul Hanouna), (2009)," ## [19] "An Integrated Model for Hybrid Securities," ## [20] "Yahoo for Amazon! 
Sentiment Extraction from Small Talk on the Web," ## [21] "Common Failings: How Corporate Defaults are Correlated " ## [22] "(with Darrell Duffie, Nikunj Kapadia and Leandro Saita)." ## [23] "A Clinical Study of Investor Discussion and Sentiment, " ## [24] "(with Asis Martinez-Jerez and Peter Tufano), 2005, " ## [25] "International Portfolio Choice with Systemic Risk," ## [26] "The loss resulting from diminished diversification is small, while" ## [27] "Speech: Signaling, Risk-sharing and the Impact of Fee Structures on" ## [28] "investor welfare. Contrary to regulatory intuition, incentive structures" ## [29] "A Discrete-Time Approach to No-arbitrage Pricing of Credit derivatives" ## [30] "with Rating Transitions, (with Viral Acharya and Rangarajan Sundaram)," ## [31] "Pricing Interest Rate Derivatives: A General Approach,''(with George Chacko)," ## [32] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [33] "The Psychology of Financial Decision Making: A Case" ## [34] "for Theory-Driven Experimental Enquiry,''" ## [35] "1999, (with Priya Raghubir)," ## [36] "Of Smiles and Smirks: A Term Structure Perspective,''" ## [37] "A Theory of Banking Structure, 1999, (with Ashish Nanda)," ## [38] "by function based upon two dimensions: the degree of information asymmetry " ## [39] "A Theory of Optimal Timing and Selectivity,'' " ## [40] "A Direct Discrete-Time Approach to" ## [41] "Poisson-Gaussian Bond Option Pricing in the Heath-Jarrow-Morton " ## [42] "The Central Tendency: A Second Factor in" ## [43] "Bond Yields, 1998, (with Silverio Foresi and Pierluigi Balduzzi), " ## [44] "Efficiency with Costly Information: A Reinterpretation of" ## [45] "Evidence from Managed Portfolios, (with Edwin Elton, Martin Gruber and Matt " ## [46] "Presented and Reprinted in the Proceedings of The " ## [47] "Seminar on the Analysis of Security Prices at the Center " ## [48] "for Research in Security Prices at the University of " ## [49] "Managing Rollover Risk with Capital Structure Covenants" ## [50] "in Structured Finance Vehicles (2016)," ## [51] "The Design and Risk Management of Structured Finance Vehicles (2016)," ## [52] "Post the recent subprime financial crisis, we inform the creation of safer SIVs " ## [53] "in structured finance, and propose avenues of mitigating risks faced by senior debt through " ## [54] "Coming up Short: Managing Underfunded Portfolios in an LDI-ES Framework (2014), " ## [55] "(with Seoyoung Kim and Meir Statman), " ## [56] "Going for Broke: Restructuring Distressed Debt Portfolios (2014)," ## [57] "Digital Portfolios. (2013), " ## [58] "Options on Portfolios with Higher-Order Moments, (2009)," ## [59] "options on a multivariate system of assets, calibrated to the return " ## [60] "Dealing with Dimension: Option Pricing on Factor Trees, (2009)," ## [61] "you to price options on multiple assets in a unified fraamework. Computational" ## [62] "Modeling" ## [63] "Correlated Default with a Forest of Binomial Trees, (2007), (with" ## [64] "Basel II: Correlation Related Issues (2007), " ## [65] "Correlated Default Risk, (2006)," ## [66] "(with Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [67] "increase as markets worsen. Regime switching models are needed to explain dynamic" ## [68] "A Simple Model for Pricing Equity Options with Markov" ## [69] "Switching State Variables (2006)," ## [70] "(with Donald Aingworth and Rajeev Motwani)," ## [71] "The Firm's Management of Social Interactions, (2005)" ## [72] "(with D. Godes, D. Mayzlin, Y. Chen, S. 
Das, C. Dellarocas, " ## [73] "B. Pfeieffer, B. Libai, S. Sen, M. Shi, and P. Verlegh). " ## [74] "Financial Communities (with Jacob Sisk), 2005, " ## [75] "Summer, 112-123." ## [76] "Monte Carlo Markov Chain Methods for Derivative Pricing" ## [77] "and Risk Assessment,(with Alistair Sinclair), 2005, " ## [78] "where incomplete information about the value of an asset may be exploited to " ## [79] "undertake fast and accurate pricing. Proof that a fully polynomial randomized " ## [80] "Correlated Default Processes: A Criterion-Based Copula Approach," ## [81] "Special Issue on Default Risk. " ## [82] "Private Equity Returns: An Empirical Examination of the Exit of" ## [83] "Venture-Backed Companies, (with Murali Jagannathan and Atulya Sarin)," ## [84] "firm being financed, the valuation at the time of financing, and the prevailing market" ## [85] "sentiment. Helps understand the risk premium required for the" ## [86] "Issue on Computational Methods in Economics and Finance), " ## [87] "December, 55-69." ## [88] "Bayesian Migration in Credit Ratings Based on Probabilities of" ## [89] "The Impact of Correlated Default Risk on Credit Portfolios," ## [90] "(with Gifford Fong, and Gary Geng)," ## [91] "How Diversified are Internationally Diversified Portfolios:" ## [92] "Time-Variation in the Covariances between International Returns," ## [93] "Discrete-Time Bond and Option Pricing for Jump-Diffusion" ## [94] "Macroeconomic Implications of Search Theory for the Labor Market," ## [95] "Auction Theory: A Summary with Applications and Evidence" ## [96] "from the Treasury Markets, 1996, (with Rangarajan Sundaram)," ## [97] "A Simple Approach to Three Factor Affine Models of the" ## [98] "Term Structure, (with Pierluigi Balduzzi, Silverio Foresi and Rangarajan" ## [99] "Analytical Approximations of the Term Structure" ## [100] "for Jump-diffusion Processes: A Numerical Analysis, 1996, " ## [101] "Markov Chain Term Structure Models: Extensions and Applications," ## [102] "Exact Solutions for Bond and Options Prices" ## [103] "with Systematic Jump Risk, 1996, (with Silverio Foresi)," ## [104] "Pricing Credit Sensitive Debt when Interest Rates, Credit Ratings" ## [105] "and Credit Spreads are Stochastic, 1996, " ## [106] "v5(2), 161-198." ## [107] "Did CDS Trading Improve the Market for Corporate Bonds, (2016), " ## [108] "(with Madhu Kalimipalli and Subhankar Nayak), " ## [109] "Big Data's Big Muscle, (2016), " ## [110] "Portfolios for Investors Who Want to Reach Their Goals While Staying on the Mean-Variance Efficient Frontier, (2011), " ## [111] "(with Harry Markowitz, Jonathan Scheid, and Meir Statman), " ## [112] "News Analytics: Framework, Techniques and Metrics, The Handbook of News Analytics in Finance, May 2011, John Wiley & Sons, U.K. 
" ## [113] "Random Lattices for Option Pricing Problems in Finance, (2011)," ## [114] "Implementing Option Pricing Models using Python and Cython, (2010)," ## [115] "The Finance Web: Internet Information and Markets, (2010), " ## [116] "Financial Applications with Parallel R, (2009), " ## [117] "Recovery Swaps, (2009), (with Paul Hanouna), " ## [118] "Recovery Rates, (2009),(with Paul Hanouna), " ## [119] "``A Simple Model for Pricing Securities with a Debt-Equity Linkage,'' 2008, in " ## [120] "Credit Default Swap Spreads, 2006, (with Paul Hanouna), " ## [121] "Multiple-Core Processors for Finance Applications, 2006, " ## [122] "Power Laws, 2005, (with Jacob Sisk), " ## [123] "Genetic Algorithms, 2005," ## [124] "Recovery Risk, 2005," ## [125] "Venture Capital Syndication, (with Hoje Jo and Yongtae Kim), 2004" ## [126] "Technical Analysis, (with David Tien), 2004" ## [127] "Liquidity and the Bond Markets, (with Jan Ericsson and " ## [128] "Madhu Kalimipalli), 2003," ## [129] "Modern Pricing of Interest Rate Derivatives - Book Review, " ## [130] "Contagion, 2003," ## [131] "Hedge Funds, 2003," ## [132] "Reprinted in " ## [133] "Working Papers on Hedge Funds, in The World of Hedge Funds: " ## [134] "Characteristics and " ## [135] "Analysis, 2005, World Scientific." ## [136] "The Internet and Investors, 2003," ## [137] " Useful things to know about Correlated Default Risk," ## [138] "(with Gifford Fong, Laurence Freed, Gary Geng, and Nikunj Kapadia)," ## [139] "The Regulation of Fee Structures in Mutual Funds: A Theoretical Analysis,'' " ## [140] "(with Rangarajan Sundaram), 1998, NBER WP No 6639, in the" ## [141] "Courant Institute of Mathematical Sciences, special volume on" ## [142] "A Discrete-Time Approach to Arbitrage-Free Pricing of Credit Derivatives,'' " ## [143] "(with Rangarajan Sundaram), reprinted in " ## [144] "the Courant Institute of Mathematical Sciences, special volume on" ## [145] "Stochastic Mean Models of the Term Structure,''" ## [146] "(with Pierluigi Balduzzi, Silverio Foresi and Rangarajan Sundaram), " ## [147] "John Wiley & Sons, Inc., 128-161." ## [148] "Interest Rate Modeling with Jump-Diffusion Processes,'' " ## [149] "John Wiley & Sons, Inc., 162-189." ## [150] "Comments on 'Pricing Excess-of-Loss Reinsurance Contracts against" ## [151] "Catastrophic Loss,' by J. David Cummins, C. Lewis, and Richard Phillips," ## [152] "Froot (Ed.), University of Chicago Press, 1999, 141-145." ## [153] " Pricing Credit Derivatives,'' " ## [154] "J. Frost and J.G. Whittaker, 101-138." ## [155] "On the Recursive Implementation of Term Structure Models,'' " ## [156] "Zero-Revelation RegTech: Detecting Risk through" ## [157] "Linguistic Analysis of Corporate Emails and News " ## [158] "(with Seoyoung Kim and Bhushan Kothari)." ## [159] "Summary for the Columbia Law School blog: " ## [160] " " ## [161] "Dynamic Risk Networks: A Note " ## [162] "(with Seoyoung Kim and Dan Ostrov)." ## [163] "Research Challenges in Financial Data Modeling and Analysis " ## [164] "(with Lewis Alexander, Zachary Ives, H.V. Jagadish, and Claire Monteleoni)." ## [165] "Local Volatility and the Recovery Rate of Credit Default Swaps " ## [166] "(with Jeroen Jansen and Frank Fabozzi)." 
## [167] "Efficient Rebalancing of Taxable Portfolios (with Dan Ostrov, Dennis Ding, Vincent Newell), " ## [168] "The Fast and the Curious: VC Drift " ## [169] "(with Amit Bubna and Paul Hanouna), " ## [170] "Venture Capital Communities (with Amit Bubna and Nagpurnanand Prabhala), " ## [171] " " ``` Take a look at the text now to see how cleaned up it is. But there is a better way, i.e., use the text\-mining package **tm**. 7\.12 Package *tm* ------------------ 1. The R programming language supports a text\-mining package, succinctly named {tm}. Using functions such as {readDOC()}, {readPDF()}, etc., for reading DOC and PDF files, the package makes accessing various file formats easy. 2. Text mining involves applying functions to many text documents. A library of text documents (irrespective of format) is called a **corpus**. The essential and highly useful feature of text mining packages is the ability to operate on the entire set of documents at one go. ``` library(tm) ``` ``` ## Loading required package: NLP ``` ``` text = c("INTL is expected to announce good earnings report", "AAPL first quarter disappoints","GOOG announces new wallet", "YHOO ascends from old ways") text_corpus = Corpus(VectorSource(text)) print(text_corpus) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 4 ``` ``` writeCorpus(text_corpus) ``` The **writeCorpus()** function in **tm** creates separate text files on the hard drive, and by default are names **1\.txt**, **2\.txt**, etc. The simple program code above shows how text scraped off a web page and collapsed into a single character string for each document, may then be converted into a corpus of documents using the **Corpus()** function. It is easy to inspect the corpus as follows: ``` inspect(text_corpus) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 4 ## ## [[1]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 49 ## ## [[2]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 30 ## ## [[3]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 25 ## ## [[4]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 26 ``` ### 7\.12\.1 A second example Here we use **lapply** to inspect the contents of the corpus. ``` #USING THE tm PACKAGE library(tm) text = c("Doc1;","This is doc2 --", "And, then Doc3.") ctext = Corpus(VectorSource(text)) ctext ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ``` ``` #writeCorpus(ctext) #THE CORPUS IS A LIST OBJECT in R of type VCorpus or Corpus inspect(ctext) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 5 ## ## [[2]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 15 ## ## [[3]] ## <<PlainTextDocument>> ## Metadata: 7 ## Content: chars: 15 ``` ``` print(as.character(ctext[[1]])) ``` ``` ## [1] "Doc1;" ``` ``` print(lapply(ctext[1:2],as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This is doc2 --" ``` ``` ctext = tm_map(ctext,tolower) #Lower case all text in all docs inspect(ctext) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] doc1; ## ## [[2]] ## [1] this is doc2 -- ## ## [[3]] ## [1] and, then doc3. 
``` ``` ctext2 = tm_map(ctext,toupper) inspect(ctext2) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] DOC1; ## ## [[2]] ## [1] THIS IS DOC2 -- ## ## [[3]] ## [1] AND, THEN DOC3. ``` ### 7\.12\.2 Function *tm\_map* * The **tm\_map** function is very useful for cleaning up the documents. We may want to remove some words. * We may also remove *stopwords*, punctuation, numbers, etc. ``` #FIRST CURATE TO UPPER CASE dropWords = c("IS","AND","THEN") ctext2 = tm_map(ctext2,removeWords,dropWords) inspect(ctext2) ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 3 ## ## [[1]] ## [1] DOC1; ## ## [[2]] ## [1] THIS DOC2 -- ## ## [[3]] ## [1] , DOC3. ``` ``` ctext = Corpus(VectorSource(text)) temp = ctext print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This is doc2 --" ## ## $`3` ## [1] "And, then Doc3." ``` ``` temp = tm_map(temp,removeWords,stopwords("english")) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1;" ## ## $`2` ## [1] "This doc2 --" ## ## $`3` ## [1] "And, Doc3." ``` ``` temp = tm_map(temp,removePunctuation) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc1" ## ## $`2` ## [1] "This doc2 " ## ## $`3` ## [1] "And Doc3" ``` ``` temp = tm_map(temp,removeNumbers) print(lapply(temp,as.character)) ``` ``` ## $`1` ## [1] "Doc" ## ## $`2` ## [1] "This doc " ## ## $`3` ## [1] "And Doc" ``` ### 7\.12\.3 Bag of Words We can create a *bag of words* by collapsing all the text into one bundle. ``` #CONVERT CORPUS INTO ARRAY OF STRINGS AND FLATTEN txt = NULL for (j in 1:length(temp)) { txt = c(txt,temp[[j]]$content) } txt = paste(txt,collapse=" ") txt = tolower(txt) print(txt) ``` ``` ## [1] "doc this doc and doc" ``` ### 7\.12\.4 Example (on my bio page) Now we will do a full pass through of this on my bio. ``` text = readLines("http://srdas.github.io/bio-candid.html") ctext = Corpus(VectorSource(text)) ctext ``` ``` ## <<VCorpus>> ## Metadata: corpus specific: 0, document level (indexed): 0 ## Content: documents: 80 ``` ``` #Print a few lines print(lapply(ctext, as.character)[10:15]) ``` ``` ## $`10` ## [1] "B.Com in Accounting and Economics (University of Bombay, Sydenham" ## ## $`11` ## [1] "College), and is also a qualified Cost and Works Accountant" ## ## $`12` ## [1] "(AICWA). He is a senior editor of The Journal of Investment" ## ## $`13` ## [1] "Management, co-editor of The Journal of Derivatives and The Journal of" ## ## $`14` ## [1] "Financial Services Research, and Associate Editor of other academic" ## ## $`15` ## [1] "journals. 
Prior to being an academic, he worked in the derivatives" ``` ``` ctext = tm_map(ctext,removePunctuation) print(lapply(ctext, as.character)[10:15]) ``` ``` ## $`10` ## [1] "BCom in Accounting and Economics University of Bombay Sydenham" ## ## $`11` ## [1] "College and is also a qualified Cost and Works Accountant" ## ## $`12` ## [1] "AICWA He is a senior editor of The Journal of Investment" ## ## $`13` ## [1] "Management coeditor of The Journal of Derivatives and The Journal of" ## ## $`14` ## [1] "Financial Services Research and Associate Editor of other academic" ## ## $`15` ## [1] "journals Prior to being an academic he worked in the derivatives" ``` ``` txt = NULL for (j in 1:length(ctext)) { txt = c(txt,ctext[[j]]$content) } txt = paste(txt,collapse=" ") txt = tolower(txt) print(txt) ``` ``` ## [1] "html body backgroundhttpalgoscuedusanjivdasgraphicsback2gif sanjiv das is the william and janice terry professor of finance at santa clara universitys leavey school of business he previously held faculty appointments as associate professor at harvard business school and uc berkeley he holds postgraduate degrees in finance mphil and phd from new york university computer science ms from uc berkeley an mba from the indian institute of management ahmedabad bcom in accounting and economics university of bombay sydenham college and is also a qualified cost and works accountant aicwa he is a senior editor of the journal of investment management coeditor of the journal of derivatives and the journal of financial services research and associate editor of other academic journals prior to being an academic he worked in the derivatives business in the asiapacific region as a vicepresident at citibank his current research interests include machine learning social networks derivatives pricing models portfolio theory the modeling of default risk and venture capital he has published over ninety articles in academic journals and has won numerous awards for research and teaching his recent book derivatives principles and practice was published in may 2010 second edition 2016 he currently also serves as a senior fellow at the fdic center for financial research p bsanjiv das a short academic life historyb p after loafing and working in many parts of asia but never really growing up sanjiv moved to new york to change the world hopefully through research he graduated in 1994 with a phd from nyu and since then spent five years in boston and now lives in san jose california sanjiv loves animals places in the world where the mountains meet the sea riding sport motorbikes reading gadgets science fiction movies and writing cool software code when there is time available from the excitement of daily life sanjiv writes academic papers which helps him relax always the contrarian sanjiv thinks that new york city is the most calming place in the world after california of course p sanjiv is now a professor of finance at santa clara university he came to scu from harvard business school and spent a year at uc berkeley in his past life in the unreal world sanjiv worked at citibank na in the asiapacific region he takes great pleasure in merging his many previous lives into his current existence which is incredibly confused and diverse p sanjivs research style is instilled with a distinct new york state of mind it is chaotic diverse with minimal method to the madness he has published articles on derivatives termstructure models mutual funds the internet portfolio choice banking models credit risk and has unpublished articles 
in many other areas some years ago he took time off to get another degree in computer science at berkeley confirming that an unchecked hobby can quickly become an obsession there he learnt about the fascinating field of randomized algorithms skills he now applies earnestly to his editorial work and other pursuits many of which stem from being in the epicenter of silicon valley p coastal living did a lot to mold sanjiv who needs to live near the ocean the many walks in greenwich village convinced him that there is no such thing as a representative investor yet added many unique features to his personal utility function he learnt that it is important to open the academic door to the ivory tower and let the world in academia is a real challenge given that he has to reconcile many more opinions than ideas he has been known to have turned down many offers from mad magazine to publish his academic work as he often explains you never really finish your education you can check out any time you like but you can never leave which is why he is doomed to a lifetime in hotel california and he believes that if this is as bad as it gets life is really pretty good "
```

7\.13 Term Document Matrix (TDM)
--------------------------------

An extremely important object in text analysis is the **Term\-Document Matrix**. This allows us to store an entire library of text inside a single matrix. This may then be used for analysis as well as searching documents. It forms the basis of search engines, topic analysis, and classification (spam filtering). It is a table that provides the frequency count of every word (term) in each document. The number of rows in the TDM is equal to the number of unique terms, and the number of columns is equal to the number of documents.
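As a toy illustration of this structure (my own example, not from the original text), the TDM for two one\-line hypothetical documents can be tabulated directly in base R, before turning to **tm**'s `TermDocumentMatrix()` below.

```
#HAND-BUILT TDM FOR TWO TINY (HYPOTHETICAL) DOCUMENTS
docs = c("credit risk and default risk", "default models for credit")
words = strsplit(tolower(docs), " ")          #tokenize each document
terms = sort(unique(unlist(words)))           #unique terms form the rows of the TDM
tdm_toy = sapply(words, function(w) table(factor(w, levels = terms)))
colnames(tdm_toy) = c("doc1", "doc2")
print(tdm_toy)
#Expected counts: "risk" appears twice in doc1; "credit" and "default" appear in both documents.
```

The `TermDocumentMatrix()` call that follows does the same thing at scale for the corpus built above.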
``` #TERM-DOCUMENT MATRIX tdm = TermDocumentMatrix(ctext,control=list(minWordLength=1)) print(tdm) ``` ``` ## <<TermDocumentMatrix (terms: 321, documents: 80)>> ## Non-/sparse entries: 502/25178 ## Sparsity : 98% ## Maximal term length: 49 ## Weighting : term frequency (tf) ``` ``` inspect(tdm[10:20,11:18]) ``` ``` ## <<TermDocumentMatrix (terms: 11, documents: 8)>> ## Non-/sparse entries: 5/83 ## Sparsity : 94% ## Maximal term length: 10 ## Weighting : term frequency (tf) ## ## Docs ## Terms 11 12 13 14 15 16 17 18 ## after 0 0 0 0 0 0 0 0 ## ago 0 0 0 0 0 0 0 0 ## ahmedabad 0 0 0 0 0 0 0 0 ## aicwa 0 1 0 0 0 0 0 0 ## algorithms 0 0 0 0 0 0 0 0 ## also 1 0 0 0 0 0 0 0 ## always 0 0 0 0 0 0 0 0 ## and 2 0 1 1 0 0 0 0 ## animals 0 0 0 0 0 0 0 0 ## another 0 0 0 0 0 0 0 0 ## any 0 0 0 0 0 0 0 0 ``` ``` out = findFreqTerms(tdm,lowfreq=5) print(out) ``` ``` ## [1] "academic" "and" "derivatives" "from" "has" ## [6] "his" "many" "research" "sanjiv" "that" ## [11] "the" "world" ``` 7\.14 Term Frequency \- Inverse Document Frequency (TF\-IDF) ------------------------------------------------------------ This is a weighting scheme provided to sharpen the importance of rare words in a document, relative to the frequency of these words in the corpus. It is based on simple calculations and even though it does not have strong theoretical foundations, it is still very useful in practice. The TF\-IDF is the importance of a word \\(w\\) in a document \\(d\\) in a corpus \\(C\\). Therefore it is a function of all these three, i.e., we write it as TF\-IDF\\((w,d,C)\\), and is the product of term frequency (TF) and inverse document frequency (IDF). The frequency of a word in a document is defined as \\\[ f(w,d) \= \\frac{\\\#w \\in d}{\|d\|} \\] where \\(\|d\|\\) is the number of words in the document. We usually normalize word frequency so that \\\[ TF(w,d) \= \\ln\[f(w,d)] \\] This is log normalization. Another form of normalization is known as double normalization and is as follows: \\\[ TF(w,d) \= \\frac{1}{2} \+ \\frac{1}{2} \\frac{f(w,d)}{\\max\_{w \\in d} f(w,d)} \\] Note that normalization is not necessary, but it tends to help shrink the difference between counts of words. Inverse document frequency is as follows: \\\[ IDF(w,C) \= \\ln\\left\[ \\frac{\|C\|}{\|d\_{w \\in d}\|} \\right] \\] That is, we compute the ratio of the number of documents in the corpus \\(C\\) divided by the number of documents with word \\(w\\) in the corpus. Finally, we have the weighting score for a given word \\(w\\) in document \\(d\\) in corpus \\(C\\): \\\[ \\mbox{TF\-IDF}(w,d,C) \= TF(w,d) \\times IDF(w,C) \\] ### 7\.14\.1 Example of TD\-IDF We illustrate this with an application to the previously computed term\-document matrix. ``` tdm_mat = as.matrix(tdm) #Convert tdm into a matrix print(dim(tdm_mat)) ``` ``` ## [1] 321 80 ``` ``` nw = dim(tdm_mat)[1] nd = dim(tdm_mat)[2] doc = 13 #Choose document word = "derivatives" #Choose word #COMPUTE TF f = NULL for (w in row.names(tdm_mat)) { f = c(f,tdm_mat[w,doc]/sum(tdm_mat[,doc])) } fw = tdm_mat[word,doc]/sum(tdm_mat[,doc]) TF = 0.5 + 0.5*fw/max(f) print(TF) ``` ``` ## [1] 0.75 ``` ``` #COMPUTE IDF nw = length(which(tdm_mat[word,]>0)) print(nw) ``` ``` ## [1] 5 ``` ``` IDF = nd/nw print(IDF) ``` ``` ## [1] 16 ``` ``` #COMPUTE TF-IDF TF_IDF = TF*IDF print(TF_IDF) #With normalization ``` ``` ## [1] 12 ``` ``` print(fw*IDF) #Without normalization ``` ``` ## [1] 2 ``` We can write this code into a function and work out the TF\-IDF for all words. 
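For instance, here is a minimal sketch of such a function (my own illustration; the name `tf_idf` is not from the text, and it uses the logged IDF from the formula above, whereas the worked example used the unlogged ratio).

```
#SKETCH: TF-IDF FOR ALL WORDS IN ONE DOCUMENT (illustrative, not from the text)
tf_idf = function(tdm_mat, doc) {
  f = tdm_mat[, doc]/sum(tdm_mat[, doc])          #word frequencies f(w,d) in the document
  TF = 0.5 + 0.5*f/max(f)                         #double normalization, as above
  IDF = log(ncol(tdm_mat)/rowSums(tdm_mat > 0))   #logged IDF from the formula above
  res = TF*IDF
  sort(res[tdm_mat[, doc] > 0], decreasing=TRUE)  #keep only words present in the document
}
#Example: highest-weighted words in document 13
#print(head(tf_idf(tdm_mat, 13)))
```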
Then these word weights may be used in further text analysis. ### 7\.14\.2 TF\-IDF in the **tm** package We may also directly use the **weightTfIdf** function in the **tm** package. This undertakes the following computation: * Term frequency \\({\\it tf}\_{i,j}\\) counts the number of occurrences \\(n\_{i,j}\\) of a term \\(t\_i\\) in a document \\(d\_j\\). In the case of normalization, the term frequency \\(\\mathit{tf}\_{i,j}\\) is divided by \\(\\sum\_k n\_{k,j}\\). * Inverse document frequency for a term \\(t\_i\\) is defined as \\(\\mathit{idf}\_i \= \\log\_2 \\frac{\|D\|}{\|{d\_{t\_i \\in d}}\|}\\) where \\(\|D\|\\) denotes the total number of documents \\(\|{d\_{t\_i \\in d}}\|\\) is the number of documents where the term \\(t\_i\\) appears. * Term frequency \- inverse document frequency is now defined as \\(\\mathit{tf}\_{i,j} \\cdot \\mathit{idf}\_i\\). ``` tdm = TermDocumentMatrix(ctext,control=list(minWordLength=1,weighting=weightTfIdf)) ``` ``` ## Warning in weighting(x): empty document(s): 3 25 26 28 40 41 42 49 50 51 63 ## 64 65 78 79 80 ``` ``` print(tdm) ``` ``` ## <<TermDocumentMatrix (terms: 321, documents: 80)>> ## Non-/sparse entries: 502/25178 ## Sparsity : 98% ## Maximal term length: 49 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ``` ``` inspect(tdm[10:20,11:18]) ``` ``` ## <<TermDocumentMatrix (terms: 11, documents: 8)>> ## Non-/sparse entries: 5/83 ## Sparsity : 94% ## Maximal term length: 10 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ## ## Docs ## Terms 11 12 13 14 15 16 17 18 ## after 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## ago 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## ahmedabad 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## aicwa 0.0000000 1.053655 0.0000000 0.0000000 0 0 0 0 ## algorithms 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## also 0.6652410 0.000000 0.0000000 0.0000000 0 0 0 0 ## always 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## and 0.5185001 0.000000 0.2592501 0.2592501 0 0 0 0 ## animals 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## another 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ## any 0.0000000 0.000000 0.0000000 0.0000000 0 0 0 0 ``` *Example*: ``` library(tm) textarray = c("Free software comes with ABSOLUTELY NO certain WARRANTY","You are welcome to redistribute free software under certain conditions","Natural language support for software in an English locale","A collaborative project with many contributors") textcorpus = Corpus(VectorSource(textarray)) m = TermDocumentMatrix(textcorpus) print(as.matrix(m)) ``` ``` ## Docs ## Terms 1 2 3 4 ## absolutely 1 0 0 0 ## are 0 1 0 0 ## certain 1 1 0 0 ## collaborative 0 0 0 1 ## comes 1 0 0 0 ## conditions 0 1 0 0 ## contributors 0 0 0 1 ## english 0 0 1 0 ## for 0 0 1 0 ## free 1 1 0 0 ## language 0 0 1 0 ## locale 0 0 1 0 ## many 0 0 0 1 ## natural 0 0 1 0 ## project 0 0 0 1 ## redistribute 0 1 0 0 ## software 1 1 1 0 ## support 0 0 1 0 ## under 0 1 0 0 ## warranty 1 0 0 0 ## welcome 0 1 0 0 ## with 1 0 0 1 ## you 0 1 0 0 ``` ``` print(as.matrix(weightTfIdf(m))) ``` ``` ## Docs ## Terms 1 2 3 4 ## absolutely 0.28571429 0.00000000 0.00000000 0.0 ## are 0.00000000 0.22222222 0.00000000 0.0 ## certain 0.14285714 0.11111111 0.00000000 0.0 ## collaborative 0.00000000 0.00000000 0.00000000 0.4 ## comes 0.28571429 0.00000000 0.00000000 0.0 ## conditions 0.00000000 0.22222222 0.00000000 0.0 ## contributors 0.00000000 0.00000000 0.00000000 0.4 ## english 0.00000000 0.00000000 0.28571429 0.0 ## 
for 0.00000000 0.00000000 0.28571429 0.0
## free 0.14285714 0.11111111 0.00000000 0.0
## language 0.00000000 0.00000000 0.28571429 0.0
## locale 0.00000000 0.00000000 0.28571429 0.0
## many 0.00000000 0.00000000 0.00000000 0.4
## natural 0.00000000 0.00000000 0.28571429 0.0
## project 0.00000000 0.00000000 0.00000000 0.4
## redistribute 0.00000000 0.22222222 0.00000000 0.0
## software 0.05929107 0.04611528 0.05929107 0.0
## support 0.00000000 0.00000000 0.28571429 0.0
## under 0.00000000 0.22222222 0.00000000 0.0
## warranty 0.28571429 0.00000000 0.00000000 0.0
## welcome 0.00000000 0.22222222 0.00000000 0.0
## with 0.14285714 0.00000000 0.00000000 0.2
## you 0.00000000 0.22222222 0.00000000 0.0
```

7\.15 Cosine Similarity in the Text Domain
------------------------------------------

In this segment we will learn some popular functions on text that are used in
practice. One of the first things we like to do is to find similar text or like sentences (think of web search as one application). Since documents are vectors in the TDM, we may want to find the closest vectors or compute the distance between vectors. A standard measure is the cosine similarity between two document vectors \\(A\\) and \\(B\\):
\\[ \\cos(\\theta) \= \\frac{A \\cdot B}{\|\|A\|\| \\times \|\|B\|\|} \\]
where \\(\|\|A\|\| \= \\sqrt{A \\cdot A}\\) is the norm of \\(A\\), i.e., the square root of the dot product of \\(A\\) with itself. This gives the cosine of the angle between the two vectors: it is zero for orthogonal vectors and 1 for vectors that point in the same direction.
``` #COSINE DISTANCE OR SIMILARITY A = as.matrix(c(0,3,4,1,7,0,1)) B = as.matrix(c(0,4,3,0,6,1,1)) cos = t(A) %*% B / (sqrt(t(A)%*%A) * sqrt(t(B)%*%B)) print(cos) ```
``` ## [,1] ## [1,] 0.9682728 ```
``` library(lsa) ```
``` ## Loading required package: SnowballC ```
``` #THE COSINE FUNCTION IN LSA ONLY TAKES ARRAYS A = c(0,3,4,1,7,0,1) B = c(0,4,3,0,6,1,1) print(cosine(A,B)) ```
``` ## [,1] ## [1,] 0.9682728 ```
7\.16 Using the ANLP package for bigrams and trigrams
-----------------------------------------------------
This package has a few additional functions that make the preceding ideas more streamlined to implement. First let’s read in the usual text.
``` library(ANLP) download.file("http://srdas.github.io/bio-candid.html",destfile = "text") text = readTextFile("text","UTF-8") ctext = cleanTextData(text) #Creates a text corpus ```
The last function removes non\-English characters, numbers, white spaces, brackets, and punctuation. It also handles cases like abbreviations and contractions, and converts the entire text to lower case. We now make TDMs for unigrams, bigrams, and trigrams, and then combine them all into one list for word prediction.
``` g1 = generateTDM(ctext,1) g2 = generateTDM(ctext,2) g3 = generateTDM(ctext,3) gmodel = list(g1,g2,g3) ```
Next, use the **back\-off** algorithm to predict the next sequence of words.
``` print(predict_Backoff("you never",gmodel)) print(predict_Backoff("life is",gmodel)) print(predict_Backoff("been known",gmodel)) print(predict_Backoff("needs to",gmodel)) print(predict_Backoff("worked at",gmodel)) print(predict_Backoff("being an",gmodel)) print(predict_Backoff("publish",gmodel)) ```
7\.17 Wordclouds
----------------
Wordclouds are an interesting way to represent text: they give an instant visual summary. The **wordcloud** package in R may be used to create your own wordclouds.
``` #MAKE A WORDCLOUD library(wordcloud) ```
``` ## Loading required package: RColorBrewer ```
``` tdm2 = as.matrix(tdm) wordcount = sort(rowSums(tdm2),decreasing=TRUE) tdm_names = names(wordcount) wordcloud(tdm_names,wordcount) ```
``` ## Warning in wordcloud(tdm_names, wordcount): ## backgroundhttpalgoscuedusanjivdasgraphicsback2gif could not be fit on page. ## It will not be plotted. ```
``` #REMOVE STOPWORDS AND NUMBERS, THEN REDO THE WORDCLOUD ctext1 = tm_map(ctext,removeWords,stopwords("english")) ctext1 = tm_map(ctext1, removeNumbers) tdm = TermDocumentMatrix(ctext1,control=list(minWordLength=1)) tdm2 = as.matrix(tdm) wordcount = sort(rowSums(tdm2),decreasing=TRUE) tdm_names = names(wordcount) wordcloud(tdm_names,wordcount) ```
7\.18 Manipulating Text
-----------------------
### 7\.18\.1 Stemming
**Stemming** is the procedure by which a word is reduced to its root or stem. This is done so that words sharing a stem are treated as the same word, rather than as separate words. We do not want “eaten” and “eating” to be treated as different words, for example.
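Before applying the stemmer to the whole corpus, it helps to see what it does to a few individual words. Below is a minimal sketch that calls the **SnowballC** stemmer directly (the stemmer behind **tm**'s *stemDocument*); the word list is made up for illustration.

```
#STEM A SMALL WORD LIST DIRECTLY (illustration only)
library(SnowballC)
words = c("eaten","eating","eats","running","runs","runner")
print(wordStem(words, language = "english"))
```

The exact stems depend on the Porter rules, so some related forms (irregular participles, for instance) may not collapse to a single stem; stemming is a heuristic rather than a dictionary lookup. The corpus-level version using **tm** is shown next.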
``` #STEMMING ctext2 = tm_map(ctext,removeWords,stopwords("english")) ctext2 = tm_map(ctext2, stemDocument) print(lapply(ctext2, as.character)[10:15]) ``` ``` ## $`10` ## [1] "BCom Account Econom Univers Bombay Sydenham" ## ## $`11` ## [1] "Colleg also qualifi Cost Work Accountant" ## ## $`12` ## [1] "AICWA He senior editor The Journal Investment" ## ## $`13` ## [1] "Manag coeditor The Journal Deriv The Journal" ## ## $`14` ## [1] "Financi Servic Research Associat Editor academ" ## ## $`15` ## [1] "journal Prior academ work deriv" ``` ### 7\.18\.2 Regular Expressions Regular expressions are syntax used to modify strings in an efficient manner. They are complicated but extremely effective. Here we will illustrate with a few examples, but you are encouraged to explore more on your own, as the variations are endless. What you need to do will depend on the application at hand, and with some experience you will become better at using regular expressions. The initial use will however be somewhat confusing. We start with a simple example of a text array where we wish replace the string “data” with a blank, i.e., we eliminate this string from the text we have. ``` library(tm) #Create a text array text = c("Doc1 is datavision","Doc2 is datatable","Doc3 is data","Doc4 is nodata","Doc5 is simpler") print(text) ``` ``` ## [1] "Doc1 is datavision" "Doc2 is datatable" "Doc3 is data" ## [4] "Doc4 is nodata" "Doc5 is simpler" ``` ``` #Remove all strings with the chosen text for all docs print(gsub("data","",text)) ``` ``` ## [1] "Doc1 is vision" "Doc2 is table" "Doc3 is " "Doc4 is no" ## [5] "Doc5 is simpler" ``` ``` #Remove all words that contain "data" at the start even if they are longer than data print(gsub("*data.*","",text)) ``` ``` ## [1] "Doc1 is " "Doc2 is " "Doc3 is " "Doc4 is no" ## [5] "Doc5 is simpler" ``` ``` #Remove all words that contain "data" at the end even if they are longer than data print(gsub("*.data*","",text)) ``` ``` ## [1] "Doc1 isvision" "Doc2 istable" "Doc3 is" "Doc4 is n" ## [5] "Doc5 is simpler" ``` ``` #Remove all words that contain "data" at the end even if they are longer than data print(gsub("*.data.*","",text)) ``` ``` ## [1] "Doc1 is" "Doc2 is" "Doc3 is" "Doc4 is n" ## [5] "Doc5 is simpler" ``` ### 7\.18\.3 Complex Regular Expressions using *grep* We now explore some more complex regular expressions. One case that is common is handling the search for special types of strings like telephone numbers. Suppose we have a text array that may contain telephone numbers in different formats, we can use a single **grep** command to extract these numbers. Here is some code to illustrate this. ``` #Create an array with some strings which may also contain telephone numbers as strings. 
x = c("234-5678","234 5678","2345678","1234567890","0123456789","abc 234-5678","234 5678 def","xx 2345678","abc1234567890def") #Now use grep to find which elements of the array contain telephone numbers idx = grep("[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]",x) print(idx) ``` ``` ## [1] 1 2 4 6 7 9 ``` ``` print(x[idx]) ``` ``` ## [1] "234-5678" "234 5678" "1234567890" ## [4] "abc 234-5678" "234 5678 def" "abc1234567890def" ``` ``` #We can shorten this as follows idx = grep("[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9]{9}",x) print(idx) ``` ``` ## [1] 1 2 4 6 7 9 ``` ``` print(x[idx]) ``` ``` ## [1] "234-5678" "234 5678" "1234567890" ## [4] "abc 234-5678" "234 5678 def" "abc1234567890def" ``` ``` #What if we want to extract only the phone number and drop the rest of the text? pattern = "[[:digit:]]{3}-[[:digit:]]{4}|[[:digit:]]{3} [[:digit:]]{4}|[1-9][0-9]{9}" print(regmatches(x, gregexpr(pattern,x))) ``` ``` ## [[1]] ## [1] "234-5678" ## ## [[2]] ## [1] "234 5678" ## ## [[3]] ## character(0) ## ## [[4]] ## [1] "1234567890" ## ## [[5]] ## character(0) ## ## [[6]] ## [1] "234-5678" ## ## [[7]] ## [1] "234 5678" ## ## [[8]] ## character(0) ## ## [[9]] ## [1] "1234567890" ``` ``` #Or use the stringr package, which is a lot better library(stringr) str_extract(x,pattern) ``` ``` ## [1] "234-5678" "234 5678" NA "1234567890" NA ## [6] "234-5678" "234 5678" NA "1234567890" ``` ### 7\.18\.4 Using *grep* for emails Now we use grep to extract emails by looking for the “@” sign in the text string. We would proceed as in the following example. ``` x = c("sanjiv das","[email protected]","SCU","[email protected]") print(grep("\\@",x)) ``` ``` ## [1] 2 4 ``` ``` print(x[grep("\\@",x)]) ``` ``` ## [1] "[email protected]" "[email protected]" ``` You get the idea. Using the functions **gsub**, **grep**, **regmatches**, and **gregexpr**, you can manage most fancy string handling that is needed. ### 7\.18\.1 Stemming **Stemming** is the procedure by which a word is reduced to its root or stem. This is done so as to treat words from the one stem as the same word, rather than as separate words. We do not want “eaten” and “eating” to be treated as different words for example. ``` #STEMMING ctext2 = tm_map(ctext,removeWords,stopwords("english")) ctext2 = tm_map(ctext2, stemDocument) print(lapply(ctext2, as.character)[10:15]) ``` ``` ## $`10` ## [1] "BCom Account Econom Univers Bombay Sydenham" ## ## $`11` ## [1] "Colleg also qualifi Cost Work Accountant" ## ## $`12` ## [1] "AICWA He senior editor The Journal Investment" ## ## $`13` ## [1] "Manag coeditor The Journal Deriv The Journal" ## ## $`14` ## [1] "Financi Servic Research Associat Editor academ" ## ## $`15` ## [1] "journal Prior academ work deriv" ``` ### 7\.18\.2 Regular Expressions Regular expressions are syntax used to modify strings in an efficient manner. They are complicated but extremely effective. Here we will illustrate with a few examples, but you are encouraged to explore more on your own, as the variations are endless. What you need to do will depend on the application at hand, and with some experience you will become better at using regular expressions. The initial use will however be somewhat confusing. We start with a simple example of a text array where we wish replace the string “data” with a blank, i.e., we eliminate this string from the text we have. 
7\.19 Web Extraction using the *rvest* package
----------------------------------------------
The **rvest** package, written by Hadley Wickham, is a powerful tool for extracting text from web pages. The package provides wrappers around the ‘xml2’ and ‘httr’ packages to make it easy to download, and then manipulate, HTML and XML. The package is best illustrated with some simple examples.
### 7\.19\.1 Program to read a web page using the selector gadget
The selector gadget is a useful tool to be used in conjunction with the *rvest* package. It lets you identify the CSS selector or HTML tag for the page element you want to parse, which you then pass to the program. Download from: <http://selectorgadget.com/>
Here is some code to read in the slashdot web page and gather the stories currently on their headlines.
``` library(rvest) ```
``` ## Loading required package: xml2 ```
``` ## ## Attaching package: 'rvest' ```
``` ## The following object is masked from 'package:XML': ## ## xml ```
``` url = "https://slashdot.org/" doc.html = read_html(url) text = doc.html %>% html_nodes(".story") %>% html_text() text = gsub("[\t\n]","",text) #text = paste(text, collapse=" ") print(text[1:20]) ```
``` ## [1] " Samsung's Calls For Industry To Embrace Its Battery Check Process as a New Standard Have Been Ignored (cnet.com) " ## [2] " Blinking Cursor Devours CPU Cycles in Visual Studio Code Editor (theregister.co.uk) 39" ## [3] " Alcohol Is Good for Your Heart -- Most of the Time (time.com) 58" ## [4] " App That Lets People Make Personalized Emojis Is the Fastest Growing App In Past Two Years (axios.com) 22" ## [5] " Americans' Shift To The Suburbs Sped Up Last Year (fivethirtyeight.com) 113" ## [6] " Some Of Hacker Group's Claims Of Having Access To 250M iCloud Accounts Aren't False (zdnet.com) 33" ## [7] " Amazon Wins $1.5 Billion Tax Dispute Over IRS (reuters.com) 63" ## [8] " Hollywood Producer Blames Rotten Tomatoes For Convincing People Not To See His Movie (vanityfair.com) 283" ## [9] " Sea Ice Extent Sinks To Record Lows At Both Poles (sciencedaily.com) 130" ## [10] " Molecule Kills Elderly Cells, Reduces Signs of Aging In Mice (sciencemag.org) 94" ## [11] " Red-Light Camera Grace Period Goes From 0.1 To 0.3 Seconds, Chicago To Lose $17 Million (arstechnica.com) 201" ## [12] " US Ordered 'Mandatory Social Media Check' For Visa Applicants Who Visited ISIS Territory (theverge.com) 177" ## [13] " Google Reducing Trust In Symantec Certificates Following Numerous Slip-Ups (bleepingcomputer.com) 63" ## [14] " Twitter Considers Premium Version After 11 Years As a Free Service (reuters.com) 81" ## [15] " Apple Explores Using An iPhone, iPad To Power a Laptop (appleinsider.com) 63" ## [16] NA ## [17] NA ## [18] NA ## [19] NA ## [20] NA ```
### 7\.19\.2 Program to read a web table using the selector gadget
Sometimes we need to read a table embedded in a web page, and this is also a simple exercise that is undertaken with **rvest**.
``` library(rvest) url = "http://finance.yahoo.com/q?uhb=uhb2&fr=uh3_finance_vert_gs&type=2button&s=IBM" doc.html = read_html(url) table = doc.html %>% html_nodes("table") %>% html_table() print(table) ```
``` ## [[1]] ## X1 X2 ## 1 NA Search ## ## [[2]] ## X1 X2 ## 1 Previous Close 174.82 ## 2 Open 175.12 ## 3 Bid 174.80 x 300 ## 4 Ask 174.99 x 300 ## 5 Day's Range 173.94 - 175.50 ## 6 52 Week Range 142.50 - 182.79 ## 7 Volume 1,491,738 ## 8 Avg. Volume 3,608,856 ## ## [[3]] ## X1 X2 ## 1 Market Cap 164.3B ## 2 Beta 0.87 ## 3 PE Ratio (TTM) 14.07 ## 4 EPS (TTM) N/A ## 5 Earnings Date N/A ## 6 Dividend & Yield 5.60 (3.20%) ## 7 Ex-Dividend Date N/A ## 8 1y Target Est N/A ```
Note that this code extracted all the web tables in the Yahoo! Finance page and returned each one as a list item.
### 7\.19\.3 Program to read a web table into a data frame
Here we take note of some Russian language sites where we want to extract forex quotes and store them in a data frame.
``` library(rvest) url1 <- "http://finance.i.ua/market/kiev/?type=1" #Buy USD url2 <- "http://finance.i.ua/market/kiev/?type=2" #Sell USD doc1.html = read_html(url1) table1 = doc1.html %>% html_nodes("table") %>% html_table() result1 = table1[[1]] print(head(result1)) ```
``` ## X1 X2 X3 X4 ## 1 Время Курс Сумма Телефон ## 2 13:03 0.462 250000 \u20bd +38 063 \nПоказать ## 3 13:07 27.0701 72000 $ +38 063 \nПоказать ## 4 19:05 27.11 2000 $ +38 068 \nПоказать ## 5 18:48 27.08 200000 $ +38 063 \nПоказать ## 6 18:44 27.08 100000 $ +38 096 \nПоказать ## X5 ## 1 Район ## 2 м Дружбы народов ## 3 Обмен Валют Ленинградская пл ## 4 Центр. Могу подъехать. ## 5 Леси Украинки. Дружба Народов. Лыбидская ## 6 Ленинградская Пл. Левобережка. Печерск ## X6 ## 1 Комментарий ## 2 детектор, обмен валют ## 3 От 10т дол. Крупная гривна. От 30т нду. Звоните ## 4 Можно частями ## 5 П е ч е р с к , Подол. Лыбидская , от 10т. Обмен на Е В Р О 1. 0 82 ## 6 П е ч е р с к , Подол. Лыбидская , от 10т. Обмен на Е В Р О 1. 082 ```
``` doc2.html = read_html(url2) table2 = doc2.html %>% html_nodes("table") %>% html_table() result2 = table2[[1]] print(head(result2)) ```
``` ## X1 X2 X3 X4 ## 1 Время Курс Сумма Телефон ## 2 17:10 29.2299 62700 € +38 093 \nПоказать ## 3 19:04 27.14 5000 $ +38 098 \nПоказать ## 4 13:08 27.1099 72000 $ +38 063 \nПоказать ## 5 15:03 27.14 5200 $ +38 095 \nПоказать ## 6 17:05 27.2 40000 $ +38 093 \nПоказать ## X5 ## 1 Район ## 2 Обменный пункт Ленинградская пл и ## 3 Центр. Подъеду ## 4 Обмен Валют Ленинградская пл ## 5 Печерск ## 6 Подол ## X6 ## 1 Комментарий ## 2 Или за дол 1. 08 От 10т евро. 50 100 и 500 купюры. Звоните. Бронируйте. Еду от 10т. Артем ## 3 Можно Частями от 500 дол ## 4 От 10т дол. Крупная гривна. От 30т нду. Звоните ## 5 м Дружбы народов, от 500, детектор, обмен валют ## 6 Обмен валют, с 9-00 до 19-00 ```
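The scraped tables come back with generic column names (X1, X2, …) and the real header sitting in the first row, so a little post-processing is usually needed before analysis. Here is a minimal sketch, assuming *result1* keeps the layout shown above (header labels in the first row, the quoted rate in the second column); the helper function name is made up for illustration.

```
#PROMOTE THE SCRAPED HEADER ROW AND COERCE THE RATE COLUMN (illustration only)
clean_quotes = function(tab) {
  names(tab) = as.character(unlist(tab[1, ]))   #first row holds the column labels
  tab = tab[-1, ]                               #drop that header row from the data
  tab[[2]] = as.numeric(tab[[2]])               #second column is the quoted rate
  rownames(tab) = NULL
  tab
}
result1_clean = clean_quotes(result1)
print(head(result1_clean))
```

The same treatment applies to *result2*.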
7\.20 Using the *rselenium* package
-----------------------------------
The **RSelenium** package drives a live browser session from R, which is useful when the content you want is rendered or paginated by JavaScript. Here we use it to work through a Google Scholar page and then hand the resulting HTML to **rvest**.
``` #Clicking Show More button Google Scholar page library(RCurl) library(RSelenium) library(rvest) library(stringr) library(igraph) checkForServer() startServer() remDr <- remoteDriver(remoteServerAddr = "localhost" , port = 4444 , browserName = "firefox" ) remDr$open() remDr$getStatus() ```
### 7\.20\.1 Application to Google Scholar data
``` remDr$navigate("http://scholar.google.com") webElem <- remDr$findElement(using = 'css selector', "input#gs_hp_tsi") webElem$sendKeysToElement(list("Sanjiv Das", "\uE007")) link <- webElem$getCurrentUrl() page <- read_html(as.character(link)) citations <- page %>% html_nodes (".gs_rt2") matched <- str_match_all(citations, "<a href=\"(.*?)\"") scholarurl <- paste("https://scholar.google.com", matched[[1]][,2], sep="") page <- read_html(as.character(scholarurl)) remDr$navigate(as.character(scholarurl)) authorlist <- page %>% html_nodes(css=".gs_gray") %>% html_text() # Selecting fields after CSS selector .gs_gray authorlist <- as.data.frame(authorlist) odd_index <- seq(1,nrow(authorlist),2) #Sorting data by even/odd indexes to form a table. even_index <- seq (2,nrow(authorlist),2) authornames <- data.frame(x=authorlist[odd_index,1]) papernames <- data.frame(x=authorlist[even_index,1]) pubmatrix <- cbind(authorlist,papernames) # Building the view all link on scholar page.
a=str_split(matched, "user=") x <- substring(a[[1]][2], 1,12) y<- paste("https://scholar.google.com/citations?view_op=list_colleagues&hl=en&user=", x, sep="") remDr$navigate(y) #Reading view all page to get author list: page <- read_html(as.character(y)) z <- page %>% html_nodes (".gsc_1usr_name") x <-lapply(z,str_extract,">[A-Z]+[a-z]+ .+<") x<-lapply(x,str_replace, ">","") x<-lapply(x,str_replace, "<","") # Graph function: bsk <- as.matrix(cbind("SR Das", unlist(x))) bsk.network<-graph.data.frame(bsk, directed=F) plot(bsk.network) ```
7\.21 Web APIs
--------------
We now look to getting text from the web and using various APIs from different services like Twitter, Facebook, etc. You will need to open free developer accounts to do this on each site. You will also need the special R packages for each different source.
### 7\.21\.1 Twitter
First create a Twitter developer account to get the required credentials for accessing the API.
See: <https://dev.twitter.com/> The Twitter API needs a lot of handshaking… ``` ##TWITTER EXTRACTOR library(twitteR) library(ROAuth) library(RCurl) download.file(url="https://curl.haxx.se/ca/cacert.pem",destfile="cacert.pem") #certificate file based on Privacy Enhanced Mail (PEM) protocol: https://en.wikipedia.org/wiki/Privacy-enhanced_Electronic_Mail cKey = "oV89mZ970KM9vO8a5mktV7Aqw" #These are my keys and won't work for you cSecret = "cNriTUShd69AJaVPpZHCMDZI5U7nnXVcd72vmK4psqDUQhIEEY" #use your own secret reqURL = "https://api.twitter.com/oauth/request_token" accURL = "https://api.twitter.com/oauth/access_token" authURL = "https://api.twitter.com/oauth/authorize" #NOW SUBMIT YOUR CODES AND ASK FOR CREDENTIALS cred = OAuthFactory$new(consumerKey=cKey, consumerSecret=cSecret,requestURL=reqURL, accessURL=accURL,authURL=authURL) cred$handshake(cainfo="cacert.pem") #Asks for token #Test and save credentials #registerTwitterOAuth(cred) #save(list="cred",file="twitteR_credentials") #FIRST PHASE DONE ``` ### 7\.21\.2 Accessing Twitter ``` ##USE httr, SECOND PHASE library(httr) #options(httr_oauth_cache=T) accToken = "18666236-DmDE1wwbpvPbDcw9kwt9yThGeyYhjfpVVywrHuhOQ" accTokenSecret = "cttbpxpTtqJn7wrCP36I59omNI5GQHXXgV41sKwUgc" setup_twitter_oauth(cKey,cSecret,accToken,accTokenSecret) #At prompt type 1 ``` This more direct code chunk does handshaking better and faster than the preceding. ``` library(stringr) library(twitteR) library(ROAuth) library(RCurl) ``` ``` ## Loading required package: bitops ``` ``` cKey = "oV89mZ970KM9vO8a5mktV7Aqw" cSecret = "cNriTUShd69AJaVPpZHCMDZI5U7nnXVcd72vmK4psqDUQhIEEY" accToken = "18666236-DmDE1wwbpvPbDcw9kwt9yThGeyYhjfpVVywrHuhOQ" accTokenSecret = "cttbpxpTtqJn7wrCP36I59omNI5GQHXXgV41sKwUgc" setup_twitter_oauth(consumer_key = cKey, consumer_secret = cSecret, access_token = accToken, access_secret = accTokenSecret) ``` ``` ## [1] "Using direct authentication" ``` This completes the handshaking with Twitter. Now we can access tweets using the functions in the **twitteR** package. ### 7\.21\.3 Using the *twitteR* package ``` #EXAMPLE 1 s = searchTwitter("#GOOG") #This is a list s ``` ``` ## [[1]] ## [1] "_KevinRosales_: @Origengg @UnicornsOfLove #GoOg siempre apoyándolos hasta la muerte" ## ## [[2]] ## [1] "uncle_otc: @Jasik @crtaylor81 seen? MyDx, Inc. (OTC:$MYDX) Revolutionary Medical Software That's Poised To Earn Billions, https://t.co/KbgNIEoAlB #GOOG" ## ## [[3]] ## [1] "prabhumap: \"O-MG, the Developer Preview of Android O is here!\" https://t.co/cShgn63DrJ #goog #feedly" ## ## [[4]] ## [1] "top10USstocks: Alphabet Inc (NASDAQ:GOOG) loses -1.45% on Thursday-Top10 Worst Performer in NASDAQ100 #NASDAQ #GOOG https://t.co/FPbW5Ablez" ## ## [[5]] ## [1] "wlstcom: Alphabet - 25% Upside Potential #GOOGLE #GOOG #GOOGL #StockMarketSherpa #LongIdeas $GOOG https://t.co/IIGxCsBvab https://t.co/raegkUwI0j" ## ## [[6]] ## [1] "wlstcom: Scenarios For The Healthcare Bill - Cramer's Mad Money (3/23/17) #JPM #C #MLM #USCR #GOOG #GOOGL #AAPL #AMGN #CSCO https://t.co/B3GscATmg3" ## ## [[7]] ## [1] "seajourney2004: Lake Tekapo, New Zealand from Brent (@brentpurcell.nz) on Instagram: “Tekapo Blue\" #LakeTekapo #goog https://t.co/agzGy6ortN" ## ## [[8]] ## [1] "ScottWestBand: #Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## ## [[9]] ## [1] "savvyyabby: Thought leadership is 1 part Common Sense and 99 parts Leadership. 
I have no idea what Google is smoking but I am getting SHORT #GOOG" ## ## [[10]] ## [1] "Addiply: @marcwebber @thetimes Rupert, Dacre and Co all want @DCMS @DamianCollins et al to clip #GOOG wings. Cos they ain't getting their slice..." ## ## [[11]] ## [1] "onlinemedialist: RT @wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.…" ## ## [[12]] ## [1] "wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.co/0rnSbVUvGX" ## ## [[13]] ## [1] "zeyumw: Google Agrees to YouTube Metrics Audit to Ease Advertisers’ Concerns https://t.co/OsSjVDY24X #goog #media #googl" ## ## [[14]] ## [1] "wlstcom: Apple Acquires DeskConnect For Workflow Application #GOOG #AAPL #GOOGL #DonovanJones $AAPL https://t.co/YIGqHyYwrm https://t.co/UI2ejtP0Jo" ## ## [[15]] ## [1] "wlstcom: Apple Acquires DeskConnect For Workflow Application #GOOGLE #GOOG #AAPL #DonovanJones $GOOG https://t.co/Yd01TL5ZZb https://t.co/Vo6VEeSxw7" ## ## [[16]] ## [1] "send2katz: Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud https://t.co/W7JLhPb1CG #GCE #Goog" ## ## [[17]] ## [1] "MarkYu_DPT: Ah, really? First @Google Medical Diagnostics Center soon?\n#GOOGL #GOOG\nhttps://t.co/PhmPsB0xgf" ## ## [[18]] ## [1] "AskFriedrich: Alphabet — GOOGL\nnot meeting Friedrich criteria, &amp; EXTREMELY expensive\n\n#alphabet #google $google $GOOGL #GOOG… https://t.co/N1x8LUUz5T" ## ## [[19]] ## [1] "HotHardware: #GoogleMaps To Offer Optional Real-Time User #LocationTracking Allowing You To Share Your ETA… https://t.co/OTF73K6a3w" ## ## [[20]] ## [1] "ConsumerFeed: Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/7c3Hart1rT $GOOG #GOOG" ## ## [[21]] ## [1] "RatingsNetwork: Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/LUCXvQDHX4 $GOOG #GOOG" ## ## [[22]] ## [1] "rContentRich: (#Google #Resurrected a #Dead #Product on #Wednesday and no one #Noticed (#GOOG))\n \nhttps://t.co/7YFLbMDyp7 https://t.co/CIfrOPmmKh" ## ## [[23]] ## [1] "ScottWestBand: #Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## ## [[24]] ## [1] "APPLE_GOOGLE_TW: Virgin Tonic : Merci Google Maps ! On va enfin pouvoir retrouver notre voiture sur le parking - Virgin Radio https://t.co/l5IpUUyIGz #Goog…" ## ## [[25]] ## [1] "carlosmoisescet: RT @JUANJmauricio: #goog nigth #fuck hard #ass #cock # fuck mounth https://t.co/2dpIdWtlxX" ``` ``` #CONVERT TWITTER LIST TO TEXT ARRAY (see documentation in twitteR package) twts = twListToDF(s) #This gives a dataframe with the tweets names(twts) ``` ``` ## [1] "text" "favorited" "favoriteCount" "replyToSN" ## [5] "created" "truncated" "replyToSID" "id" ## [9] "replyToUID" "statusSource" "screenName" "retweetCount" ## [13] "isRetweet" "retweeted" "longitude" "latitude" ``` ``` twts_array = twts$text print(twts$retweetCount) ``` ``` ## [1] 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 ## [24] 0 47 ``` ``` twts_array ``` ``` ## [1] "@Origengg @UnicornsOfLove #GoOg siempre apoyándolos hasta la muerte" ## [2] "@Jasik @crtaylor81 seen? MyDx, Inc. 
(OTC:$MYDX) Revolutionary Medical Software That's Poised To Earn Billions, https://t.co/KbgNIEoAlB #GOOG" ## [3] "\"O-MG, the Developer Preview of Android O is here!\" https://t.co/cShgn63DrJ #goog #feedly" ## [4] "Alphabet Inc (NASDAQ:GOOG) loses -1.45% on Thursday-Top10 Worst Performer in NASDAQ100 #NASDAQ #GOOG https://t.co/FPbW5Ablez" ## [5] "Alphabet - 25% Upside Potential #GOOGLE #GOOG #GOOGL #StockMarketSherpa #LongIdeas $GOOG https://t.co/IIGxCsBvab https://t.co/raegkUwI0j" ## [6] "Scenarios For The Healthcare Bill - Cramer's Mad Money (3/23/17) #JPM #C #MLM #USCR #GOOG #GOOGL #AAPL #AMGN #CSCO https://t.co/B3GscATmg3" ## [7] "Lake Tekapo, New Zealand from Brent (@brentpurcell.nz) on Instagram: “Tekapo Blue\" #LakeTekapo #goog https://t.co/agzGy6ortN" ## [8] "#Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## [9] "Thought leadership is 1 part Common Sense and 99 parts Leadership. I have no idea what Google is smoking but I am getting SHORT #GOOG" ## [10] "@marcwebber @thetimes Rupert, Dacre and Co all want @DCMS @DamianCollins et al to clip #GOOG wings. Cos they ain't getting their slice..." ## [11] "RT @wlstcom: Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.…" ## [12] "Augmented Reality: The Next Big Thing In Wearables #APPLE #AAPL #FB #SSNLF #GOOG #GOOGL $FB https://t.co/PwqUrm4VU4 https://t.co/0rnSbVUvGX" ## [13] "Google Agrees to YouTube Metrics Audit to Ease Advertisers’ Concerns https://t.co/OsSjVDY24X #goog #media #googl" ## [14] "Apple Acquires DeskConnect For Workflow Application #GOOG #AAPL #GOOGL #DonovanJones $AAPL https://t.co/YIGqHyYwrm https://t.co/UI2ejtP0Jo" ## [15] "Apple Acquires DeskConnect For Workflow Application #GOOGLE #GOOG #AAPL #DonovanJones $GOOG https://t.co/Yd01TL5ZZb https://t.co/Vo6VEeSxw7" ## [16] "Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud https://t.co/W7JLhPb1CG #GCE #Goog" ## [17] "Ah, really? First @Google Medical Diagnostics Center soon?\n#GOOGL #GOOG\nhttps://t.co/PhmPsB0xgf" ## [18] "Alphabet — GOOGL\nnot meeting Friedrich criteria, &amp; EXTREMELY expensive\n\n#alphabet #google $google $GOOGL #GOOG… https://t.co/N1x8LUUz5T" ## [19] "#GoogleMaps To Offer Optional Real-Time User #LocationTracking Allowing You To Share Your ETA… https://t.co/OTF73K6a3w" ## [20] "Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/7c3Hart1rT $GOOG #GOOG" ## [21] "Alphabet's buy rating reiterated at Mizuho. $1,024.00 PT. https://t.co/LUCXvQDHX4 $GOOG #GOOG" ## [22] "(#Google #Resurrected a #Dead #Product on #Wednesday and no one #Noticed (#GOOG))\n \nhttps://t.co/7YFLbMDyp7 https://t.co/CIfrOPmmKh" ## [23] "#Cowboy #Song #Western #Music Westerns #CowboySong #WesternMusic Theme https://t.co/bi8psLXB8G #Trending #Youtube #Twitter #Facebook #Goog…" ## [24] "Virgin Tonic : Merci Google Maps ! 
On va enfin pouvoir retrouver notre voiture sur le parking - Virgin Radio https://t.co/l5IpUUyIGz #Goog…" ## [25] "RT @JUANJmauricio: #goog nigth #fuck hard #ass #cock # fuck mounth https://t.co/2dpIdWtlxX" ``` ``` #EXAMPLE 2 s = getUser("srdas") fr = s$getFriends() print(length(fr)) ``` ``` ## [1] 154 ``` ``` print(fr[1:10]) ``` ``` ## $`60816617` ## [1] "cedarwright" ## ## $`2511461743` ## [1] "rightrelevance" ## ## $`3097250541` ## [1] "MichiganCFLP" ## ## $`894057794` ## [1] "BigDataGal" ## ## $`365145609` ## [1] "mathbabedotorg" ## ## $`19251838` ## [1] "ClimbingMag" ## ## $`235261861` ## [1] "rstudio" ## ## $`5849202` ## [1] "jcheng" ## ## $`46486816` ## [1] "ramnath_vaidya" ## ## $`39010299` ## [1] "xieyihui" ``` ``` s_tweets = userTimeline("srdas",n=20) print(s_tweets) ``` ``` ## [[1]] ## [1] "srdas: Bestselling author of 'Moneyball' says laziness is the key to success. @MindaZetlin https://t.co/OTjzI3bHRm via @Inc" ## ## [[2]] ## [1] "srdas: Difference between Data Science, Machine Learning and Data Mining on Data Science Central: https://t.co/hreJ3QsmFG" ## ## [[3]] ## [1] "srdas: High-frequency traders fall on hard times https://t.co/626yKMshvY via @WSJ" ## ## [[4]] ## [1] "srdas: Shapes of Probability Distributions https://t.co/3hKE8FR9rx" ## ## [[5]] ## [1] "srdas: The one thing you need to master data science https://t.co/hmAwGKUAZg via @Rbloggers" ## ## [[6]] ## [1] "srdas: The Chess Problem that a Computer Cannot Solve: https://t.co/1qwCFPnMFz" ## ## [[7]] ## [1] "srdas: The dystopian future of price discrimination https://t.co/w7BuGJjjEJ via @BV" ## ## [[8]] ## [1] "srdas: How artificial intelligence is transforming the workplace https://t.co/V0TrDlm3D2 via @WSJ" ## ## [[9]] ## [1] "srdas: John Maeda: If you want to survive in design, you better learn to code https://t.co/EGyM5DvfyZ via @WIRED" ## ## [[10]] ## [1] "srdas: On mentorship and finding your way around https://t.co/wojEs6TTsD via @techcrunch" ## ## [[11]] ## [1] "srdas: Information Avoidance: How People Select Their Own Reality https://t.co/ytogtYqq4P" ## ## [[12]] ## [1] "srdas: Paul Ryan says he’s been “dreaming” of Medicaid cuts since he was “drinking out of kegs” https://t.co/5rZmZTtTyZ via @voxdotcom" ## ## [[13]] ## [1] "srdas: Don't Ask How to Define Data Science: https://t.co/WGVO0yB8Hy" ## ## [[14]] ## [1] "srdas: Kurzweil Claims That the Singularity Will Happen by 2045 https://t.co/Inl60a2KLv via @Futurism" ## ## [[15]] ## [1] "srdas: Did Uber steal the driverless future from Google? https://t.co/sDrtfHob34 via @BW" ## ## [[16]] ## [1] "srdas: Think Like a Data Scientist: \nhttps://t.co/aNFtL1tqDs" ## ## [[17]] ## [1] "srdas: Why Employees At Apple And Google Are More Productive https://t.co/E3WESsKkFO" ## ## [[18]] ## [1] "srdas: Cutting down the clutter in online conversations https://t.co/41ZH5iR9Hy" ## ## [[19]] ## [1] "srdas: I invented the web. 
Here are three things we need to change to save it | Tim Berners-Lee https://t.co/ORQaXiBXWC" ## ## [[20]] ## [1] "srdas: Let’s calculate pi on a Raspberry Pi to celebrate Pi Day https://t.co/D3gW0l2ZHt via @WIRED" ```
``` getCurRateLimitInfo(c("users")) ```
``` ## resource limit remaining reset ## 1 /users/report_spam 15 15 2017-03-24 18:55:44 ## 2 /users/show/:id 900 899 2017-03-24 18:55:42 ## 3 /users/search 900 900 2017-03-24 18:55:44 ## 4 /users/suggestions/:slug 15 15 2017-03-24 18:55:44 ## 5 /users/derived_info 15 15 2017-03-24 18:55:44 ## 6 /users/profile_banner 180 180 2017-03-24 18:55:44 ## 7 /users/suggestions/:slug/members 15 15 2017-03-24 18:55:44 ## 8 /users/lookup 900 898 2017-03-24 18:55:43 ## 9 /users/suggestions 15 15 2017-03-24 18:55:44 ```
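Raw tweets carry links, user handles, and retweet markers that add noise to any downstream text analysis, such as the sentiment scoring in the next section. Below is a minimal cleanup sketch using **gsub**, assuming the *twts_array* object created above; the patterns are illustrative rather than exhaustive.

```
#LIGHT CLEANUP OF TWEET TEXT BEFORE ANALYSIS (illustration only)
clean_tweets = gsub("http[s]?://\\S+", " ", twts_array)   #drop links
clean_tweets = gsub("@\\w+", " ", clean_tweets)           #drop @handles
clean_tweets = gsub("#", "", clean_tweets)                #keep hashtag words, drop the symbol
clean_tweets = gsub("^RT\\s+", "", clean_tweets)          #drop the retweet prefix
clean_tweets = gsub("\\s+", " ", clean_tweets)            #squeeze repeated whitespace
print(head(clean_tweets))
```

These cleaned strings can then be fed to the tokenizer and sentiment scorer used below.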
https://t.co/sDrtfHob34 via @BW" ## ## [[16]] ## [1] "srdas: Think Like a Data Scientist: \nhttps://t.co/aNFtL1tqDs" ## ## [[17]] ## [1] "srdas: Why Employees At Apple And Google Are More Productive https://t.co/E3WESsKkFO" ## ## [[18]] ## [1] "srdas: Cutting down the clutter in online conversations https://t.co/41ZH5iR9Hy" ## ## [[19]] ## [1] "srdas: I invented the web. Here are three things we need to change to save it | Tim Berners-Lee https://t.co/ORQaXiBXWC" ## ## [[20]] ## [1] "srdas: Let’s calculate pi on a Raspberry Pi to celebrate Pi Day https://t.co/D3gW0l2ZHt via @WIRED" ``` ``` getCurRateLimitInfo(c("users")) ``` ``` ## resource limit remaining reset ## 1 /users/report_spam 15 15 2017-03-24 18:55:44 ## 2 /users/show/:id 900 899 2017-03-24 18:55:42 ## 3 /users/search 900 900 2017-03-24 18:55:44 ## 4 /users/suggestions/:slug 15 15 2017-03-24 18:55:44 ## 5 /users/derived_info 15 15 2017-03-24 18:55:44 ## 6 /users/profile_banner 180 180 2017-03-24 18:55:44 ## 7 /users/suggestions/:slug/members 15 15 2017-03-24 18:55:44 ## 8 /users/lookup 900 898 2017-03-24 18:55:43 ## 9 /users/suggestions 15 15 2017-03-24 18:55:44 ``` 7\.22 Quick Process ------------------- ``` library(ngram) ``` ``` ## Warning: package 'ngram' was built under R version 3.3.2 ``` ``` library(NLP) library(syuzhet) twts = twListToDF(s_tweets) x = paste(twts$text,collapse=" ") y = get_tokens(x) sen = get_sentiment(y) print(sen) ``` ``` ## [1] 0.80 0.00 0.00 0.00 0.00 -1.00 0.00 0.00 0.00 0.00 0.75 ## [12] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [23] 0.00 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [34] 0.00 0.00 0.00 0.00 0.00 -0.25 0.00 -0.25 0.00 0.00 0.00 ## [45] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [56] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [67] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.75 0.00 0.00 0.00 ## [78] 0.00 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [89] -0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 ## [100] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [111] 0.00 0.00 0.00 0.00 0.80 0.00 0.00 0.00 0.80 0.80 0.00 ## [122] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [133] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 -0.80 ## [144] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [155] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.25 0.00 0.00 ## [166] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [177] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [188] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [199] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.75 0.00 0.00 0.00 ## [210] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 ## [221] 0.00 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [232] 0.00 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [243] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.60 0.00 ## [254] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 ## [265] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ## [276] 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 0.00 ## [287] 0.00 0.00 0.00 0.00 ``` ``` print(sum(sen)) ``` ``` ## [1] 4.9 ``` ### 7\.22\.1 Getting Streaming Data from Twitter This assumes you have a working twitter account and have already connected R to it using twitteR package. 
* Retrieving tweets for a particular search query
* Example 1 adapted from [http://bogdanrau.com/blog/collecting\-tweets\-using\-r\-and\-the\-twitter\-streaming\-api/](http://bogdanrau.com/blog/collecting-tweets-using-r-and-the-twitter-streaming-api/)
* Additional reference: [https://cran.r\-project.org/web/packages/streamR/streamR.pdf](https://cran.r-project.org/web/packages/streamR/streamR.pdf)

```
library(streamR)
filterStream(file.name = "tweets.json",  # Save tweets in a json file
     track = "useR_Stanford",            # Collect tweets mentioning useR_Stanford over 30 seconds. Can use twitter handles or keywords.
     language = "en",
     timeout = 30,                       # Keep connection alive for 30 seconds
     oauth = cred)                       # Use OAuth credentials
tweets.df <- parseTweets("tweets.json", simplify = FALSE)  # parse the json file and save to a data frame called tweets.df. simplify = FALSE ensures that we include lat/lon information in that data frame.
```

### 7\.22\.2 Retrieving tweets of a particular user over a short time period

```
filterStream(file.name = "tweets.json",  # Save tweets in a json file
     track = "3497513953",               # Collect tweets from the useR2016 feed over 30 seconds. Must use the twitter ID of the user.
     language = "en",
     timeout = 30,                       # Keep connection alive for 30 seconds
     oauth = cred)                       # Use my_oauth file as the OAuth credentials
tweets.df <- parseTweets("tweets.json", simplify = FALSE)
```

### 7\.22\.3 Streaming messages from the accounts your user follows

```
userStream(file.name="my_timeline.json", with="followings", tweets=10, oauth=cred)
```

### 7\.22\.4 Facebook

Now we move on to using Facebook, which is a little less trouble than Twitter. The results may also be used for creating interesting networks.

```
##FACEBOOK EXTRACTOR
library(Rfacebook)
library(SnowballC)
library(Rook)
library(ROAuth)
app_id = "847737771920076"   # USE YOUR OWN IDs
app_secret = "eb8b1c4639a3f5de2fd8582a16b9e5a9"
fb_oauth = fbOAuth(app_id,app_secret,extended_permissions=TRUE)
#save(fb_oauth,file="fb_oauth")
#DIRECT LOAD
#load("fb_oauth")
```

### 7\.22\.5 Examples

```
##EXAMPLES
bbn = getUsers("bloombergnews",token=fb_oauth)
print(bbn)
page = getPage(page="bloombergnews",token=fb_oauth,n=20)
print(dim(page))
print(head(page))
print(names(page))
print(page$message)
print(page$message[11])
```

### 7\.22\.6 Yelp \- Setting up an authorization

First we examine the protocol for connecting to the Yelp API. This assumes you have opened a Yelp developer account and obtained API credentials.

```
###CODE to connect to YELP.
consumerKey = "z6w-Or6HSyKbdUTmV9lbOA"
consumerSecret = "ImUufP3yU9FmNWWx54NUbNEBcj8"
token = "mBzEBjhYIGgJZnmtTHLVdQ-0cyfFVRGu"
token_secret = "v0FGCL0TS_dFDWFwH3HptDZhiLE"
```

### 7\.22\.7 Yelp \- handshaking with the API

```
require(httr)
require(httpuv)
require(jsonlite)
# authorization
myapp = oauth_app("YELP", key=consumerKey, secret=consumerSecret)
sig = sign_oauth1.0(myapp, token=token, token_secret=token_secret)
```

```
## Searching the top ten bars in Chicago and SF.
limit <- 10
# 10 bars in Chicago
yelpurl <- paste0("http://api.yelp.com/v2/search/?limit=",limit,"&location=Chicago%20IL&term=bar")
# or 10 bars by geo-coordinates
yelpurl <- paste0("http://api.yelp.com/v2/search/?limit=",limit,"&ll=37.788022,-122.399797&term=bar")
locationdata = GET(yelpurl, sig)
locationdataContent = content(locationdata)
locationdataList = jsonlite::fromJSON(toJSON(locationdataContent))
head(data.frame(locationdataList))
for (j in 1:limit) {
  print(locationdataContent$businesses[[j]]$snippet_text)
}
```
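The Yelp response above comes back as JSON, which **jsonlite** converts into nested lists and data frames. Since running the query requires registered Yelp credentials, here is a minimal offline sketch of just the parsing step, using a hand\-written JSON string; the field names below are made up for illustration and are not Yelp's actual schema.

```
library(jsonlite)
# A hand-written stand-in for an API response (hypothetical fields, not Yelp's schema)
json_txt = '{"businesses":[{"name":"Bar A","rating":4.5},{"name":"Bar B","rating":4.0}]}'
resp = fromJSON(json_txt)        # simplifies the JSON array into a data frame
print(resp$businesses)           # columns: name, rating
print(resp$businesses$name[1])   # extract a single field, as done with snippet_text above
```

The same pattern, i.e., GET the URL, extract the content, then flatten with **fromJSON()**, applies to most JSON\-returning web APIs.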
7\.23 Dictionaries
------------------

1. Webster’s defines a “dictionary” as “…a reference source in print or electronic form containing words usually alphabetically arranged along with information about their forms, pronunciations, functions, etymologies, meanings, and syntactical and idiomatic uses.”
2. The Harvard General Inquirer: [http://www.wjh.harvard.edu/\~inquirer/](http://www.wjh.harvard.edu/~inquirer/)
3. Standard Dictionaries: www.dictionary.com, and www.merriam\-webster.com.
4. Computer dictionary: <http://www.hyperdictionary.com/computer> that contains about 14,000 computer related words, such as “byte” or “hyperlink”.
5. Math dictionary, such as <http://www.amathsdictionaryforkids.com/dictionary.html>.
6. Medical dictionary, see <http://www.hyperdictionary.com/medical>.
7. Internet lingo dictionaries may be used to complement standard dictionaries with words that are not usually found in standard language, for example, see <http://www.netlingo.com/dictionary/all.php> for words such as “2BZ4UQT” which stands for “too busy for you cutey” (LOL). When extracting text messages, postings on Facebook, or stock message board discussions, internet lingo does need to be parsed and such a dictionary is very useful.
8. Associative dictionaries are also useful when trying to find context, as the word may be related to a concept, identified using a dictionary such as <http://www.visuwords.com/>. This dictionary doubles up as a thesaurus, as it provides alternative words and phrases that mean the same thing, and also related concepts.
9. Value dictionaries deal with values and may be useful when only affect (positive or negative) is insufficient for scoring text. The Lasswell Value Dictionary [http://www.wjh.harvard.edu/\~inquirer/lasswell.htm](http://www.wjh.harvard.edu/~inquirer/lasswell.htm) may be used to score the loading of text on the eight basic value categories: Wealth, Power, Respect, Rectitude, Skill, Enlightenment, Affection, and Well being.

7\.24 Lexicons
--------------

1. A **lexicon** is defined by Webster’s as “a book containing an alphabetical arrangement of the words in a language and their definitions; the vocabulary of a language, an individual speaker or group of speakers, or a subject; the total stock of morphemes in a language.” This suggests it is not that different from a dictionary.
2. A “morpheme” is defined as “a word or a part of a word that has a meaning and that contains no smaller part that has a meaning.”
3. In the text analytics realm, we will take a lexicon to be a smaller, special purpose dictionary, containing words that are relevant to the domain of interest.
4. The benefit of a lexicon is that it enables focusing only on words that are relevant to the analytics and discards words that are not.
5. Another benefit is that since it is a smaller dictionary, the computational effort required by text analytics algorithms is drastically reduced.

### 7\.24\.1 Constructing a lexicon

1. By hand. This is an effective technique and the simplest. It calls for a human reader who scans a representative sample of text documents and culls important words that lend interpretive meaning.
2. Examine the term document matrix for most frequent words, and pick the ones that have high connotation for the classification task at hand.
3. Use pre\-classified documents in a text corpus. We analyze the separate groups of documents to find words whose difference in frequency between groups is highest. Such words are likely to be better in discriminating between groups.

### 7\.24\.2 Lexicons as Word Lists

1. Das and Chen (2007\) constructed a lexicon of about 375 words that are useful in parsing sentiment from stock message boards.
2. Loughran and McDonald (2011\):
* Taking a sample of 50,115 firm\-year 10\-Ks from 1994 to 2008, they found that almost three\-fourths of the words identified as negative by the Harvard Inquirer dictionary are not typically negative words in a financial context.
* Therefore, they specifically created separate lists of words by the following attributes of words: negative, positive, uncertainty, litigious, strong modal, and weak modal. Modal words are based on Jordan’s categories of strong and weak modal words. These word lists may be downloaded from [http://www3\.nd.edu/\~mcdonald/Word\_Lists.html](http://www3.nd.edu/~mcdonald/Word_Lists.html).

### 7\.24\.3 Negation Tagging

Das and Chen (2007\) introduced the notion of “negation tagging” into the literature. Negation tags create additional words in the word list using some rule. In this case, the rule used was to take any sentence, and if a negation word occurred, then tag all remaining positive words in the sentence as negative. For example, take the sentence “This is not a good book.” Here the positive words after “not” are candidates for negation tagging, so we would replace the sentence with “This is not a n\_\_good book.” Sometimes this can be more nuanced. In a sentence such as “There is nothing better than sliced bread,” the negation word “nothing” is used in conjunction with “better,” so it is an exception to the rule. Such exceptions may need to be coded into the rules for parsing textual content. A small sketch of the basic tagging rule appears after the list below.

The Grammarly Handbook provides the following negation words (see <https://www.grammarly.com/handbook/>):

* Negative words: No, Not, None, No one, Nobody, Nothing, Neither, Nowhere, Never.
* Negative Adverbs: Hardly, Scarcely, Barely.
* Negative verbs: Doesn’t, Isn’t, Wasn’t, Shouldn’t, Wouldn’t, Couldn’t, Won’t, Can’t, Don’t.
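Below is a minimal R sketch of the negation\-tagging rule described above. It is not the Das and Chen (2007\) implementation; the tiny word lists here are assumptions purely for illustration, and in practice the positive words would come from a full lexicon such as the Harvard Inquirer lists built later in this chapter.

```
# Toy word lists (assumed for illustration only)
negators = c("no","not","none","nobody","nothing","neither","nowhere","never")
poswords_toy = c("good","better","great")

negation_tag = function(sentence) {
  words = tolower(unlist(strsplit(sentence, " ")))
  seen_negator = FALSE
  for (j in 1:length(words)) {
    if (words[j] %in% negators) {
      seen_negator = TRUE                   # a negation word switches the tagger on
    } else if (seen_negator & (words[j] %in% poswords_toy)) {
      words[j] = paste0("n__", words[j])    # tag subsequent positive words as negated
    }
  }
  paste(words, collapse=" ")
}

negation_tag("This is not a good book")     # "this is not a n__good book"
```

Exceptions such as “nothing better” would need additional rules layered on top of this simple scheme.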
7\.25 Scoring Text
------------------

* Text can be scored using dictionaries and word lists. Here is an example of mood scoring: we use a psychological dictionary from Harvard. A toy sketch of the basic matching idea follows this list, and the next section builds the full word lists. There is also WordNet.
* WordNet is a large database of words in English, i.e., a lexicon. The repository is at <http://wordnet.princeton.edu>. WordNet groups words together based on their meanings (synonyms) and hence may be used as a thesaurus. WordNet is also useful for natural language processing as it provides word lists by language category, such as noun, verb, adjective, etc.
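Before building real word lists from the Harvard Inquirer in the next section, the following toy sketch shows the basic matching idea behind mood scoring; the mini word lists and the sample sentence are assumptions for illustration only.

```
# Toy positive/negative word lists (assumed for illustration)
pos_toy = c("gain","up","bullish","good")
neg_toy = c("loss","down","bearish","bad")
txt = "Markets closed up on good earnings but bearish analysts still expect a loss"
words = tolower(unlist(strsplit(txt, " ")))
score = sum(words %in% pos_toy) - sum(words %in% neg_toy)
print(score)   # net count of positive minus negative matches
```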
7\.26 Mood Scoring using Harvard Inquirer
-----------------------------------------
### 7\.26\.1 Creating Positive and Negative Word Lists ``` #MOOD SCORING USING HARVARD INQUIRER #Read in the Harvard Inquirer Dictionary #And create a list of positive and negative words HIDict = 
readLines("DSTMAA_data/inqdict.txt") dict_pos = HIDict[grep("Pos",HIDict)] poswords = NULL for (s in dict_pos) { s = strsplit(s,"#")[[1]][1] poswords = c(poswords,strsplit(s," ")[[1]][1]) } dict_neg = HIDict[grep("Neg",HIDict)] negwords = NULL for (s in dict_neg) { s = strsplit(s,"#")[[1]][1] negwords = c(negwords,strsplit(s," ")[[1]][1]) } poswords = tolower(poswords) negwords = tolower(negwords) print(sample(poswords,25)) ``` ``` ## [1] "rouse" "donation" "correct" "eager" ## [5] "shiny" "train" "gain" "competent" ## [9] "aristocracy" "arisen" "comeback" "honeymoon" ## [13] "inspire" "faith" "sympathize" "uppermost" ## [17] "fulfill" "relaxation" "appreciative" "create" ## [21] "luck" "protection" "entrust" "fortify" ## [25] "dignified" ``` ``` print(sample(negwords,25)) ``` ``` ## [1] "suspicion" "censorship" "conspire" "even" ## [5] "order" "perverse" "withhold" "collision" ## [9] "muddy" "frown" "war" "discriminate" ## [13] "competitor" "challenge" "blah" "need" ## [17] "pass" "frustrate" "lying" "frantically" ## [21] "haggard" "blunder" "confuse" "scold" ## [25] "audacity" ``` ``` poswords = unique(poswords) negwords = unique(negwords) print(length(poswords)) ``` ``` ## [1] 1647 ``` ``` print(length(negwords)) ``` ``` ## [1] 2121 ``` The preceding code created two arrays, one of positive words and another of negative words. You can also directly use the EmoLex which contains positive and negative words already, see: NRC Word\-Emotion Lexicon: [http://saifmohammad.com/WebPages/NRC\-Emotion\-Lexicon.htm](http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm) ### 7\.26\.2 One Function to Rule All Text In order to score text, we need to clean it first and put it into an array to compare with the word list of positive and negative words. I wrote a general purpose function that grabs text and cleans it up for further use. ``` library(tm) library(stringr) #READ IN TEXT FOR ANALYSIS, PUT IT IN A CORPUS, OR ARRAY, OR FLAT STRING #cstem=1, if stemming needed #cstop=1, if stopwords to be removed #ccase=1 for lower case, ccase=2 for upper case #cpunc=1, if punctuation to be removed #cflat=1 for flat text wanted, cflat=2 if text array, else returns corpus read_web_page = function(url,cstem=0,cstop=0,ccase=0,cpunc=0,cflat=0) { text = readLines(url) text = text[setdiff(seq(1,length(text)),grep("<",text))] text = text[setdiff(seq(1,length(text)),grep(">",text))] text = text[setdiff(seq(1,length(text)),grep("]",text))] text = text[setdiff(seq(1,length(text)),grep("}",text))] text = text[setdiff(seq(1,length(text)),grep("_",text))] text = text[setdiff(seq(1,length(text)),grep("\\/",text))] ctext = Corpus(VectorSource(text)) if (cstem==1) { ctext = tm_map(ctext, stemDocument) } if (cstop==1) { ctext = tm_map(ctext, removeWords, stopwords("english"))} if (cpunc==1) { ctext = tm_map(ctext, removePunctuation) } if (ccase==1) { ctext = tm_map(ctext, tolower) } if (ccase==2) { ctext = tm_map(ctext, toupper) } text = ctext #CONVERT FROM CORPUS IF NEEDED if (cflat>0) { text = NULL for (j in 1:length(ctext)) { temp = ctext[[j]]$content if (temp!="") { text = c(text,temp) } } text = as.array(text) } if (cflat==1) { text = paste(text,collapse="\n") text = str_replace_all(text, "[\r\n]" , " ") } result = text } ``` ### 7\.26\.3 Example Now apply this function and see how we can get some clean text. 
``` url = "http://srdas.github.io/research.htm" res = read_web_page(url,0,0,0,1,1) print(res) ``` ``` ## [1] "Data Science Theories Models Algorithms and Analytics web book work in progress Derivatives Principles and Practice 2010 Rangarajan Sundaram and Sanjiv Das McGraw Hill An IndexBased Measure of Liquidity with George Chacko and Rong Fan 2016 Matrix Metrics NetworkBased Systemic Risk Scoring 2016 of systemic risk This paper won the First Prize in the MITCFP competition 2016 for the best paper on SIFIs systemically important financial institutions It also won the best paper award at Credit Spreads with Dynamic Debt with Seoyoung Kim 2015 Text and Context Language Analytics for Finance 2014 Strategic Loan Modification An OptionsBased Response to Strategic Default Options and Structured Products in Behavioral Portfolios with Meir Statman 2013 and barrier range notes in the presence of fattailed outcomes using copulas Polishing Diamonds in the Rough The Sources of Syndicated Venture Performance 2011 with Hoje Jo and Yongtae Kim Optimization with Mental Accounts 2010 with Harry Markowitz Jonathan Accountingbased versus marketbased crosssectional models of CDS spreads with Paul Hanouna and Atulya Sarin 2009 Hedging Credit Equity Liquidity Matters with Paul Hanouna 2009 An Integrated Model for Hybrid Securities Yahoo for Amazon Sentiment Extraction from Small Talk on the Web Common Failings How Corporate Defaults are Correlated with Darrell Duffie Nikunj Kapadia and Leandro Saita A Clinical Study of Investor Discussion and Sentiment with Asis MartinezJerez and Peter Tufano 2005 International Portfolio Choice with Systemic Risk The loss resulting from diminished diversification is small while Speech Signaling Risksharing and the Impact of Fee Structures on investor welfare Contrary to regulatory intuition incentive structures A DiscreteTime Approach to Noarbitrage Pricing of Credit derivatives with Rating Transitions with Viral Acharya and Rangarajan Sundaram Pricing Interest Rate Derivatives A General Approachwith George Chacko A DiscreteTime Approach to ArbitrageFree Pricing of Credit Derivatives The Psychology of Financial Decision Making A Case for TheoryDriven Experimental Enquiry 1999 with Priya Raghubir Of Smiles and Smirks A Term Structure Perspective A Theory of Banking Structure 1999 with Ashish Nanda by function based upon two dimensions the degree of information asymmetry A Theory of Optimal Timing and Selectivity A Direct DiscreteTime Approach to PoissonGaussian Bond Option Pricing in the HeathJarrowMorton The Central Tendency A Second Factor in Bond Yields 1998 with Silverio Foresi and Pierluigi Balduzzi Efficiency with Costly Information A Reinterpretation of Evidence from Managed Portfolios with Edwin Elton Martin Gruber and Matt Presented and Reprinted in the Proceedings of The Seminar on the Analysis of Security Prices at the Center for Research in Security Prices at the University of Managing Rollover Risk with Capital Structure Covenants in Structured Finance Vehicles 2016 The Design and Risk Management of Structured Finance Vehicles 2016 Post the recent subprime financial crisis we inform the creation of safer SIVs in structured finance and propose avenues of mitigating risks faced by senior debt through Coming up Short Managing Underfunded Portfolios in an LDIES Framework 2014 with Seoyoung Kim and Meir Statman Going for Broke Restructuring Distressed Debt Portfolios 2014 Digital Portfolios 2013 Options on Portfolios with HigherOrder Moments 2009 options on a 
multivariate system of assets calibrated to the return Dealing with Dimension Option Pricing on Factor Trees 2009 you to price options on multiple assets in a unified fraamework Computational Modeling Correlated Default with a Forest of Binomial Trees 2007 with Basel II Correlation Related Issues 2007 Correlated Default Risk 2006 with Laurence Freed Gary Geng and Nikunj Kapadia increase as markets worsen Regime switching models are needed to explain dynamic A Simple Model for Pricing Equity Options with Markov Switching State Variables 2006 with Donald Aingworth and Rajeev Motwani The Firms Management of Social Interactions 2005 with D Godes D Mayzlin Y Chen S Das C Dellarocas B Pfeieffer B Libai S Sen M Shi and P Verlegh Financial Communities with Jacob Sisk 2005 Summer 112123 Monte Carlo Markov Chain Methods for Derivative Pricing and Risk Assessmentwith Alistair Sinclair 2005 where incomplete information about the value of an asset may be exploited to undertake fast and accurate pricing Proof that a fully polynomial randomized Correlated Default Processes A CriterionBased Copula Approach Special Issue on Default Risk Private Equity Returns An Empirical Examination of the Exit of VentureBacked Companies with Murali Jagannathan and Atulya Sarin firm being financed the valuation at the time of financing and the prevailing market sentiment Helps understand the risk premium required for the Issue on Computational Methods in Economics and Finance December 5569 Bayesian Migration in Credit Ratings Based on Probabilities of The Impact of Correlated Default Risk on Credit Portfolios with Gifford Fong and Gary Geng How Diversified are Internationally Diversified Portfolios TimeVariation in the Covariances between International Returns DiscreteTime Bond and Option Pricing for JumpDiffusion Macroeconomic Implications of Search Theory for the Labor Market Auction Theory A Summary with Applications and Evidence from the Treasury Markets 1996 with Rangarajan Sundaram A Simple Approach to Three Factor Affine Models of the Term Structure with Pierluigi Balduzzi Silverio Foresi and Rangarajan Analytical Approximations of the Term Structure for Jumpdiffusion Processes A Numerical Analysis 1996 Markov Chain Term Structure Models Extensions and Applications Exact Solutions for Bond and Options Prices with Systematic Jump Risk 1996 with Silverio Foresi Pricing Credit Sensitive Debt when Interest Rates Credit Ratings and Credit Spreads are Stochastic 1996 v52 161198 Did CDS Trading Improve the Market for Corporate Bonds 2016 with Madhu Kalimipalli and Subhankar Nayak Big Datas Big Muscle 2016 Portfolios for Investors Who Want to Reach Their Goals While Staying on the MeanVariance Efficient Frontier 2011 with Harry Markowitz Jonathan Scheid and Meir Statman News Analytics Framework Techniques and Metrics The Handbook of News Analytics in Finance May 2011 John Wiley Sons UK Random Lattices for Option Pricing Problems in Finance 2011 Implementing Option Pricing Models using Python and Cython 2010 The Finance Web Internet Information and Markets 2010 Financial Applications with Parallel R 2009 Recovery Swaps 2009 with Paul Hanouna Recovery Rates 2009with Paul Hanouna A Simple Model for Pricing Securities with a DebtEquity Linkage 2008 in Credit Default Swap Spreads 2006 with Paul Hanouna MultipleCore Processors for Finance Applications 2006 Power Laws 2005 with Jacob Sisk Genetic Algorithms 2005 Recovery Risk 2005 Venture Capital Syndication with Hoje Jo and Yongtae Kim 2004 Technical Analysis with David Tien 
2004 Liquidity and the Bond Markets with Jan Ericsson and Madhu Kalimipalli 2003 Modern Pricing of Interest Rate Derivatives Book Review Contagion 2003 Hedge Funds 2003 Reprinted in Working Papers on Hedge Funds in The World of Hedge Funds Characteristics and Analysis 2005 World Scientific The Internet and Investors 2003 Useful things to know about Correlated Default Risk with Gifford Fong Laurence Freed Gary Geng and Nikunj Kapadia The Regulation of Fee Structures in Mutual Funds A Theoretical Analysis with Rangarajan Sundaram 1998 NBER WP No 6639 in the Courant Institute of Mathematical Sciences special volume on A DiscreteTime Approach to ArbitrageFree Pricing of Credit Derivatives with Rangarajan Sundaram reprinted in the Courant Institute of Mathematical Sciences special volume on Stochastic Mean Models of the Term Structure with Pierluigi Balduzzi Silverio Foresi and Rangarajan Sundaram John Wiley Sons Inc 128161 Interest Rate Modeling with JumpDiffusion Processes John Wiley Sons Inc 162189 Comments on Pricing ExcessofLoss Reinsurance Contracts against Catastrophic Loss by J David Cummins C Lewis and Richard Phillips Froot Ed University of Chicago Press 1999 141145 Pricing Credit Derivatives J Frost and JG Whittaker 101138 On the Recursive Implementation of Term Structure Models ZeroRevelation RegTech Detecting Risk through Linguistic Analysis of Corporate Emails and News with Seoyoung Kim and Bhushan Kothari Summary for the Columbia Law School blog Dynamic Risk Networks A Note with Seoyoung Kim and Dan Ostrov Research Challenges in Financial Data Modeling and Analysis with Lewis Alexander Zachary Ives HV Jagadish and Claire Monteleoni Local Volatility and the Recovery Rate of Credit Default Swaps with Jeroen Jansen and Frank Fabozzi Efficient Rebalancing of Taxable Portfolios with Dan Ostrov Dennis Ding Vincent Newell The Fast and the Curious VC Drift with Amit Bubna and Paul Hanouna Venture Capital Communities with Amit Bubna and Nagpurnanand Prabhala " ``` ### 7\.26\.4 Mood Scoring Text Now we will take a different page of text and mood score it. 
``` #EXAMPLE OF MOOD SCORING library(stringr) url = "http://srdas.github.io/bio-candid.html" text = read_web_page(url,cstem=0,cstop=0,ccase=0,cpunc=1,cflat=1) text = str_replace_all(text,"nbsp"," ") text = unlist(strsplit(text," ")) posmatch = match(text,poswords) numposmatch = length(posmatch[which(posmatch>0)]) negmatch = match(text,negwords) numnegmatch = length(negmatch[which(negmatch>0)]) print(c(numposmatch,numnegmatch)) ``` ``` ## [1] 26 16 ``` ``` #FURTHER EXPLORATION OF THESE OBJECTS print(length(text)) ``` ``` ## [1] 647 ``` ``` print(posmatch) ``` ``` ## [1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [15] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [29] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [43] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [57] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [71] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [85] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [99] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [113] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [127] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [141] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [155] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [169] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [183] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [197] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [211] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [225] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [239] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [253] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [267] NA 994 NA NA NA NA NA NA NA NA NA NA NA NA ## [281] NA NA NA NA NA NA NA NA NA NA 611 NA NA NA ## [295] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [309] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [323] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [337] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [351] 800 NA NA NA NA NA NA NA NA NA NA NA NA NA ## [365] NA NA NA NA 761 1144 NA NA 800 NA NA NA NA 800 ## [379] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [393] NA 515 NA NA NA NA 1011 NA NA NA NA NA NA NA ## [407] NA NA NA NA NA NA NA NA NA NA NA NA 1036 NA ## [421] NA NA NA NA NA NA 455 NA NA NA NA NA NA NA ## [435] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [449] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [463] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [477] NA NA 800 NA NA NA NA NA NA NA NA NA NA NA ## [491] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [505] NA NA NA 941 NA NA NA NA NA NA NA NA NA NA ## [519] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [533] NA 1571 NA NA 800 NA NA NA NA NA NA NA NA 838 ## [547] NA 1076 NA NA NA NA NA NA NA NA NA NA NA NA ## [561] NA NA NA 1255 NA NA NA NA NA NA 1266 NA NA NA ## [575] NA NA NA NA NA NA NA 781 NA NA NA NA NA NA ## [589] NA NA NA 800 NA NA NA NA NA NA NA NA NA NA ## [603] 1645 542 NA NA NA NA NA NA NA NA 940 NA NA NA ## [617] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [631] NA NA NA NA NA NA NA NA NA NA NA NA NA NA ## [645] NA 1184 747 ``` ``` print(text[77]) ``` ``` ## [1] "qualified" ``` ``` print(poswords[204]) ``` ``` ## [1] "back" ``` ``` is.na(posmatch) ``` ``` ## [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [12] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [23] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [34] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [45] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [56] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [67] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [78] TRUE TRUE 
TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [89] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [100] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [111] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [122] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [133] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [144] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [155] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [166] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [177] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [188] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [199] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [210] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [221] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [232] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [243] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [254] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [265] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [276] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [287] TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE ## [298] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [309] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [320] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [331] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [342] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [353] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [364] TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE ## [375] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [386] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [397] TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [408] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [419] FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [430] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [441] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [452] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [463] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [474] TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE ## [485] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [496] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [507] TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [518] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [529] TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE ## [540] TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE ## [551] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [562] TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [573] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE ## [584] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE ## [595] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE ## [606] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE ## [617] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [628] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE ## [639] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE ``` 7\.27 Language Detection and Translation ---------------------------------------- We may be scraping web sites from many countries and need to detect the language and then translate it into English for mood scoring. 
The useful package **textcat** enables us to categorize the language.

```
library(textcat)
text = c("Je suis un programmeur novice.",
         "I am a programmer who is a novice.",
         "Sono un programmatore alle prime armi.",
         "Ich bin ein Anfänger Programmierer",
         "Soy un programador con errores.")
lang = textcat(text)
print(lang)
```

```
## [1] "french"  "english" "italian" "german"  "spanish"
```

### 7\.27\.1 Language Translation

And of course, once the language is detected, we may translate it into English.

```
library(translate)
set.key("AIzaSyDIB8qQTmhLlbPNN38Gs4dXnlN4a7lRrHQ")
print(translate(text[1],"fr","en"))
print(translate(text[3],"it","en"))
print(translate(text[4],"de","en"))
print(translate(text[5],"es","en"))
```

This requires a Google API for which you need to set up a paid account.

7\.28 Text Classification
-------------------------

1. Machine classification is, from a layman’s point of view, nothing but learning by example. In new\-fangled modern parlance, it is a technique in the field of “machine learning”.
2. Learning by machines falls into two categories, supervised and unsupervised. When a number of explanatory \\(X\\) variables are used to determine some outcome \\(Y\\), and we train an algorithm to do this, we are performing supervised (machine) learning. The outcome \\(Y\\) may be a dependent variable (for example, the left hand side in a linear regression), or a classification (i.e., discrete outcome).
3. When we only have \\(X\\) variables and no separate outcome variable \\(Y\\), we perform unsupervised learning. For example, cluster analysis, which produces groupings of entities based on their \\(X\\) variables, is a common case.

We start with a simple example on numerical data before discussing how this is to be applied to text. We first look at the Bayes classifier.

7\.29 Bayes Classifier
----------------------

Bayes classification extends the Document\-Term model with a document\-term\-classification model. These are the three entities in the model and we denote them as \\((d,t,c)\\). Assume that there are \\(D\\) documents to classify into \\(C\\) categories, and we employ a dictionary/lexicon (as the case may be) of \\(T\\) terms or words. Hence we have \\(d\_i, i \= 1, ... , D\\), and \\(t\_j, j \= 1, ... , T\\). And correspondingly the categories for classification are \\(c\_k, k \= 1, ... , C\\).

Suppose we are given a text corpus of stock market related documents (tweets for example), and wish to classify them into bullish (\\(c\_1\\)), neutral (\\(c\_2\\)), or bearish (\\(c\_3\\)), where \\(C\=3\\). We first need to train the Bayes classifier using a training data set, with pre\-classified documents, numbering \\(D\\). For each term \\(t\\) in the lexicon, we can compute how likely it is to appear in documents in each class \\(c\_k\\). Therefore, for each class, there is a \\(T\\)\-sided die with each face representing a term and having a probability of coming up. These dice give the probabilities of seeing each word in a given class of document. We denote these probabilities succinctly as \\(p(t \| c)\\).
For example, in a bearish document, if the word “sell” comprises 10% of the words that appear, then \\(p(t\=\\mbox{sell} \| c\=\\mbox{bearish})\=0\.10\\). In order to ensure that a word that happens not to appear in a class still has a non\-zero probability, we compute the (Laplace\-smoothed) probabilities as follows:

\\\[ \\begin{equation} p(t \| c) \= \\frac{n(t \| c) \+ 1}{n(c)\+T} \\end{equation} \\]

where \\(n(t \| c)\\) is the number of times word \\(t\\) appears in category \\(c\\), and \\(n(c) \= \\sum\_t n(t \| c)\\) is the total number of words in the training data in class \\(c\\). Note that if there are no words in the class \\(c\\), then each term \\(t\\) has probability \\(1/T\\).

A document \\(d\_i\\) is a collection or set of words \\(t\_j\\). The probability of seeing a given document in each category is given by the following *multinomial* probability:

\\\[ \\begin{equation} p(d \| c) \= \\frac{n(d)!}{n(t\_1\|d)! \\cdot n(t\_2\|d)! \\cdots n(t\_T\|d)!} \\times p(t\_1 \| c) \\cdot p(t\_2 \| c) \\cdots p(t\_T \| c) \\nonumber \\end{equation} \\]

where \\(n(d)\\) is the number of words in the document, and \\(n(t\_j \| d)\\) is the number of occurrences of word \\(t\_j\\) in the same document \\(d\\). These \\(p(d \| c)\\) are the likelihoods in the Bayes classifier, computed from the term probabilities estimated over all documents in the training data. The posterior probabilities are computed for each document in the test data as follows:

\\\[ p(c \| d) \= \\frac{p(d \| c) p(c)}{\\sum\_k \\; p(d \| c\_k) p(c\_k)}, \\forall k \= 1, \\ldots, C \\nonumber \\]

Note that we get \\(C\\) posterior probabilities for document \\(d\\), and assign the document to the class with the highest posterior probability, i.e., \\(\\arg\\max\_{k} p(c\_k \| d)\\).

### 7\.29\.1 Naive Bayes in R

We use the **e1071** package. It has a one\-line command that takes in the tagged training dataset using the function **naiveBayes()**. It returns the trained classifier model. The trained classifier contains the unconditional probabilities \\(p(c)\\) of each class, which are simply the relative frequencies with which each class appears in the training data. It also shows the conditional probability distributions \\(p(t \|c)\\), given as the mean and standard deviation of the occurrence of each feature in each class. We may take this trained model and re\-apply it to the training data set to see how well it does. We use the **predict()** function for this. The data set here is the classic Iris data.

For text mining, the feature set in the data will be the set of all words, and there will be one column for each word. Hence, this will be a large feature set. In order to keep this small, we may instead reduce the number of words by only using a lexicon’s words as the set of features. This will vastly reduce and make more specific the feature set used in the classifier.
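Before turning to the packaged example below, here is a minimal hand\-rolled sketch of the multinomial classifier just described. It is not from the text: the tiny training corpus, its labels, and the lexicon are made up purely for illustration. It estimates the smoothed term probabilities \\(p(t \| c)\\), the class priors \\(p(c)\\), and the posterior \\(p(c \| d)\\) for a new document; the multinomial coefficient is omitted since it cancels in the posterior.

```
#Hand-rolled multinomial Bayes sketch (illustrative only; data are made up)
train = data.frame(text  = c("buy strong rally", "sell weak losses",
                             "buy gains rally strong", "weak sell"),
                   class = c("bullish", "bearish", "bullish", "bearish"),
                   stringsAsFactors = FALSE)
lexicon = c("buy","sell","strong","weak","rally","gains","losses")   #the T terms
Tn = length(lexicon)
classes = unique(train$class)

#Laplace-smoothed term probabilities p(t|c) = (n(t|c)+1)/(n(c)+T)
ptc = sapply(classes, function(cl) {
  words = unlist(strsplit(train$text[train$class == cl], " "))
  ntc   = table(factor(words, levels = lexicon))        #n(t|c) over the lexicon
  as.numeric((ntc + 1) / (sum(ntc) + Tn))
})
rownames(ptc) = lexicon

#Class priors p(c): fraction of training documents in each class
pc = as.numeric(table(train$class)[classes]) / nrow(train)

#Posterior p(c|d) for a new document (multinomial coefficient cancels)
newdoc = c("strong", "rally", "sell")
loglik = colSums(log(ptc[newdoc, , drop = FALSE]))      #sum of log p(t|c) over its words
post   = exp(loglik) * pc
print(post / sum(post))
```

Note that the **naiveBayes()** example that follows models continuous features with Gaussian densities rather than term counts, but the structure of priors, likelihoods, and posteriors is exactly the same.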
### 7\.29\.2 Example ``` library(e1071) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` tail(iris) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 145 6.7 3.3 5.7 2.5 virginica ## 146 6.7 3.0 5.2 2.3 virginica ## 147 6.3 2.5 5.0 1.9 virginica ## 148 6.5 3.0 5.2 2.0 virginica ## 149 6.2 3.4 5.4 2.3 virginica ## 150 5.9 3.0 5.1 1.8 virginica ``` ``` #NAIVE BAYES res = naiveBayes(iris[,1:4],iris[,5]) #SHOWS THE PRIOR AND LIKELIHOOD FUNCTIONS res ``` ``` ## ## Naive Bayes Classifier for Discrete Predictors ## ## Call: ## naiveBayes.default(x = iris[, 1:4], y = iris[, 5]) ## ## A-priori probabilities: ## iris[, 5] ## setosa versicolor virginica ## 0.3333333 0.3333333 0.3333333 ## ## Conditional probabilities: ## Sepal.Length ## iris[, 5] [,1] [,2] ## setosa 5.006 0.3524897 ## versicolor 5.936 0.5161711 ## virginica 6.588 0.6358796 ## ## Sepal.Width ## iris[, 5] [,1] [,2] ## setosa 3.428 0.3790644 ## versicolor 2.770 0.3137983 ## virginica 2.974 0.3224966 ## ## Petal.Length ## iris[, 5] [,1] [,2] ## setosa 1.462 0.1736640 ## versicolor 4.260 0.4699110 ## virginica 5.552 0.5518947 ## ## Petal.Width ## iris[, 5] [,1] [,2] ## setosa 0.246 0.1053856 ## versicolor 1.326 0.1977527 ## virginica 2.026 0.2746501 ``` ``` #SHOWS POSTERIOR PROBABILITIES predict(res,iris[,1:4],type="raw") ``` ``` ## setosa versicolor virginica ## [1,] 1.000000e+00 2.981309e-18 2.152373e-25 ## [2,] 1.000000e+00 3.169312e-17 6.938030e-25 ## [3,] 1.000000e+00 2.367113e-18 7.240956e-26 ## [4,] 1.000000e+00 3.069606e-17 8.690636e-25 ## [5,] 1.000000e+00 1.017337e-18 8.885794e-26 ## [6,] 1.000000e+00 2.717732e-14 4.344285e-21 ## [7,] 1.000000e+00 2.321639e-17 7.988271e-25 ## [8,] 1.000000e+00 1.390751e-17 8.166995e-25 ## [9,] 1.000000e+00 1.990156e-17 3.606469e-25 ## [10,] 1.000000e+00 7.378931e-18 3.615492e-25 ## [11,] 1.000000e+00 9.396089e-18 1.474623e-24 ## [12,] 1.000000e+00 3.461964e-17 2.093627e-24 ## [13,] 1.000000e+00 2.804520e-18 1.010192e-25 ## [14,] 1.000000e+00 1.799033e-19 6.060578e-27 ## [15,] 1.000000e+00 5.533879e-19 2.485033e-25 ## [16,] 1.000000e+00 6.273863e-17 4.509864e-23 ## [17,] 1.000000e+00 1.106658e-16 1.282419e-23 ## [18,] 1.000000e+00 4.841773e-17 2.350011e-24 ## [19,] 1.000000e+00 1.126175e-14 2.567180e-21 ## [20,] 1.000000e+00 1.808513e-17 1.963924e-24 ## [21,] 1.000000e+00 2.178382e-15 2.013989e-22 ## [22,] 1.000000e+00 1.210057e-15 7.788592e-23 ## [23,] 1.000000e+00 4.535220e-20 3.130074e-27 ## [24,] 1.000000e+00 3.147327e-11 8.175305e-19 ## [25,] 1.000000e+00 1.838507e-14 1.553757e-21 ## [26,] 1.000000e+00 6.873990e-16 1.830374e-23 ## [27,] 1.000000e+00 3.192598e-14 1.045146e-21 ## [28,] 1.000000e+00 1.542562e-17 1.274394e-24 ## [29,] 1.000000e+00 8.833285e-18 5.368077e-25 ## [30,] 1.000000e+00 9.557935e-17 3.652571e-24 ## [31,] 1.000000e+00 2.166837e-16 6.730536e-24 ## [32,] 1.000000e+00 3.940500e-14 1.546678e-21 ## [33,] 1.000000e+00 1.609092e-20 1.013278e-26 ## [34,] 1.000000e+00 7.222217e-20 4.261853e-26 ## [35,] 1.000000e+00 6.289348e-17 1.831694e-24 ## [36,] 1.000000e+00 2.850926e-18 8.874002e-26 ## [37,] 1.000000e+00 7.746279e-18 7.235628e-25 ## [38,] 1.000000e+00 8.623934e-20 1.223633e-26 ## [39,] 1.000000e+00 4.612936e-18 9.655450e-26 ## [40,] 1.000000e+00 2.009325e-17 1.237755e-24 ## [41,] 1.000000e+00 1.300634e-17 5.657689e-25 ## [42,] 
1.000000e+00 1.577617e-15 5.717219e-24 ## [43,] 1.000000e+00 1.494911e-18 4.800333e-26 ## [44,] 1.000000e+00 1.076475e-10 3.721344e-18 ## [45,] 1.000000e+00 1.357569e-12 1.708326e-19 ## [46,] 1.000000e+00 3.882113e-16 5.587814e-24 ## [47,] 1.000000e+00 5.086735e-18 8.960156e-25 ## [48,] 1.000000e+00 5.012793e-18 1.636566e-25 ## [49,] 1.000000e+00 5.717245e-18 8.231337e-25 ## [50,] 1.000000e+00 7.713456e-18 3.349997e-25 ## [51,] 4.893048e-107 8.018653e-01 1.981347e-01 ## [52,] 7.920550e-100 9.429283e-01 5.707168e-02 ## [53,] 5.494369e-121 4.606254e-01 5.393746e-01 ## [54,] 1.129435e-69 9.999621e-01 3.789964e-05 ## [55,] 1.473329e-105 9.503408e-01 4.965916e-02 ## [56,] 1.931184e-89 9.990013e-01 9.986538e-04 ## [57,] 4.539099e-113 6.592515e-01 3.407485e-01 ## [58,] 2.549753e-34 9.999997e-01 3.119517e-07 ## [59,] 6.562814e-97 9.895385e-01 1.046153e-02 ## [60,] 5.000210e-69 9.998928e-01 1.071638e-04 ## [61,] 7.354548e-41 9.999997e-01 3.143915e-07 ## [62,] 4.799134e-86 9.958564e-01 4.143617e-03 ## [63,] 4.631287e-60 9.999925e-01 7.541274e-06 ## [64,] 1.052252e-103 9.850868e-01 1.491324e-02 ## [65,] 4.789799e-55 9.999700e-01 2.999393e-05 ## [66,] 1.514706e-92 9.787587e-01 2.124125e-02 ## [67,] 1.338348e-97 9.899311e-01 1.006893e-02 ## [68,] 2.026115e-62 9.999799e-01 2.007314e-05 ## [69,] 6.547473e-101 9.941996e-01 5.800427e-03 ## [70,] 3.016276e-58 9.999913e-01 8.739959e-06 ## [71,] 1.053341e-127 1.609361e-01 8.390639e-01 ## [72,] 1.248202e-70 9.997743e-01 2.256698e-04 ## [73,] 3.294753e-119 9.245812e-01 7.541876e-02 ## [74,] 1.314175e-95 9.979398e-01 2.060233e-03 ## [75,] 3.003117e-83 9.982736e-01 1.726437e-03 ## [76,] 2.536747e-92 9.865372e-01 1.346281e-02 ## [77,] 1.558909e-111 9.102260e-01 8.977398e-02 ## [78,] 7.014282e-136 7.989607e-02 9.201039e-01 ## [79,] 5.034528e-99 9.854957e-01 1.450433e-02 ## [80,] 1.439052e-41 9.999984e-01 1.601574e-06 ## [81,] 1.251567e-54 9.999955e-01 4.500139e-06 ## [82,] 8.769539e-48 9.999983e-01 1.742560e-06 ## [83,] 3.447181e-62 9.999664e-01 3.361987e-05 ## [84,] 1.087302e-132 6.134355e-01 3.865645e-01 ## [85,] 4.119852e-97 9.918297e-01 8.170260e-03 ## [86,] 1.140835e-102 8.734107e-01 1.265893e-01 ## [87,] 2.247339e-110 7.971795e-01 2.028205e-01 ## [88,] 4.870630e-88 9.992978e-01 7.022084e-04 ## [89,] 2.028672e-72 9.997620e-01 2.379898e-04 ## [90,] 2.227900e-69 9.999461e-01 5.390514e-05 ## [91,] 5.110709e-81 9.998510e-01 1.489819e-04 ## [92,] 5.774841e-99 9.885399e-01 1.146006e-02 ## [93,] 5.146736e-66 9.999591e-01 4.089540e-05 ## [94,] 1.332816e-34 9.999997e-01 2.716264e-07 ## [95,] 6.094144e-77 9.998034e-01 1.966331e-04 ## [96,] 1.424276e-72 9.998236e-01 1.764463e-04 ## [97,] 8.302641e-77 9.996692e-01 3.307548e-04 ## [98,] 1.835520e-82 9.988601e-01 1.139915e-03 ## [99,] 5.710350e-30 9.999997e-01 3.094739e-07 ## [100,] 3.996459e-73 9.998204e-01 1.795726e-04 ## [101,] 3.993755e-249 1.031032e-10 1.000000e+00 ## [102,] 1.228659e-149 2.724406e-02 9.727559e-01 ## [103,] 2.460661e-216 2.327488e-07 9.999998e-01 ## [104,] 2.864831e-173 2.290954e-03 9.977090e-01 ## [105,] 8.299884e-214 3.175384e-07 9.999997e-01 ## [106,] 1.371182e-267 3.807455e-10 1.000000e+00 ## [107,] 3.444090e-107 9.719885e-01 2.801154e-02 ## [108,] 3.741929e-224 1.782047e-06 9.999982e-01 ## [109,] 5.564644e-188 5.823191e-04 9.994177e-01 ## [110,] 2.052443e-260 2.461662e-12 1.000000e+00 ## [111,] 8.669405e-159 4.895235e-04 9.995105e-01 ## [112,] 4.220200e-163 3.168643e-03 9.968314e-01 ## [113,] 4.360059e-190 6.230821e-06 9.999938e-01 ## [114,] 6.142256e-151 1.423414e-02 9.857659e-01 ## [115,] 
2.201426e-186 1.393247e-06 9.999986e-01
## [116,] 2.949945e-191 6.128385e-07 9.999994e-01
## [117,] 2.909076e-168 2.152843e-03 9.978472e-01
## [118,] 1.347608e-281 2.872996e-12 1.000000e+00
## [119,] 2.786402e-306 1.151469e-12 1.000000e+00
## [120,] 2.082510e-123 9.561626e-01 4.383739e-02
## [121,] 2.194169e-217 1.712166e-08 1.000000e+00
## [122,] 3.325791e-145 1.518718e-02 9.848128e-01
## [123,] 6.251357e-269 1.170872e-09 1.000000e+00
## [124,] 4.415135e-135 1.360432e-01 8.639568e-01
## [125,] 6.315716e-201 1.300512e-06 9.999987e-01
## [126,] 5.257347e-203 9.507989e-06 9.999905e-01
## [127,] 1.476391e-129 2.067703e-01 7.932297e-01
## [128,] 8.772841e-134 1.130589e-01 8.869411e-01
## [129,] 5.230800e-194 1.395719e-05 9.999860e-01
## [130,] 7.014892e-179 8.232518e-04 9.991767e-01
## [131,] 6.306820e-218 1.214497e-06 9.999988e-01
## [132,] 2.539020e-247 4.668891e-10 1.000000e+00
## [133,] 2.210812e-201 2.000316e-06 9.999980e-01
## [134,] 1.128613e-128 7.118948e-01 2.881052e-01
## [135,] 8.114869e-151 4.900992e-01 5.099008e-01
## [136,] 7.419068e-249 1.448050e-10 1.000000e+00
## [137,] 1.004503e-215 9.743357e-09 1.000000e+00
## [138,] 1.346716e-167 2.186989e-03 9.978130e-01
## [139,] 1.994716e-128 1.999894e-01 8.000106e-01
## [140,] 8.440466e-185 6.769126e-06 9.999932e-01
## [141,] 2.334365e-218 7.456220e-09 1.000000e+00
## [142,] 2.179139e-183 6.352663e-07 9.999994e-01
## [143,] 1.228659e-149 2.724406e-02 9.727559e-01
## [144,] 3.426814e-229 6.597015e-09 1.000000e+00
## [145,] 2.011574e-232 2.620636e-10 1.000000e+00
## [146,] 1.078519e-187 7.915543e-07 9.999992e-01
## [147,] 1.061392e-146 2.770575e-02 9.722942e-01
## [148,] 1.846900e-164 4.398402e-04 9.995602e-01
## [149,] 1.439996e-195 3.384156e-07 9.999997e-01
## [150,] 2.771480e-143 5.987903e-02 9.401210e-01
```

```
#CONFUSION MATRIX
out = table(predict(res,iris[,1:4]),iris[,5])
out
```

```
##             
##              setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         47         3
##   virginica       0          3        47
```
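The confusion matrix above is computed in\-sample. As a small extension, not in the original text, the same classifier can be scored out of sample with a random train/test split; the seed and split size below are arbitrary.

```
#Out-of-sample check of the naive Bayes classifier (illustrative sketch)
library(e1071)
set.seed(42)                          #arbitrary seed, for reproducibility
idx   = sample(1:nrow(iris), 100)     #100 training rows, 50 held out
res2  = naiveBayes(iris[idx,1:4], iris[idx,5])
pred2 = predict(res2, iris[-idx,1:4])
table(pred2, iris[-idx,5])            #out-of-sample confusion matrix
```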
7\.30 Support Vector Machines (SVM)
-----------------------------------

The goal of the SVM is to map a set of entities with inputs \\(X\=\\{x\_1,x\_2,\\ldots,x\_n\\}\\) of dimension \\(n\\), i.e., \\(X \\in R^n\\), into a set of categories \\(Y\=\\{y\_1,y\_2,\\ldots,y\_m\\}\\) of dimension \\(m\\), such that the \\(n\\)\-dimensional \\(X\\)\-space is divided using hyperplanes, which result in the maximal separation between classes \\(Y\\).

A hyperplane is the set of points \\({\\bf x}\\) satisfying the equation

\\\[ {\\bf w} \\cdot {\\bf x} \= b \\]

where \\(b\\) is a scalar constant, and \\({\\bf w} \\in R^n\\) is the normal vector to the hyperplane, i.e., the vector at right angles to the plane. The distance between this hyperplane and \\({\\bf w} \\cdot {\\bf x} \= 0\\) is given by \\(b/\|\|{\\bf w}\|\|\\), where \\(\|\|{\\bf w}\|\|\\) is the norm of vector \\({\\bf w}\\).

This set up is sufficient to provide intuition about how the SVM is implemented. Suppose we have two categories of data, i.e., \\(y \= \\{y\_1, y\_2\\}\\). Assume that all points in category \\(y\_1\\) lie above a hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1\\), and all points in category \\(y\_2\\) lie below a hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2\\), then the distance between the two hyperplanes is \\(\\frac{\|b\_1\-b\_2\|}{\|\|{\\bf w}\|\|}\\).
```
#Example of hyperplane geometry
w1 = 1; w2 = 2
b1 = 10
#Plot hyperplane in x1, x2 space
x1 = seq(-3,3,0.1)
x2 = (b1-w1*x1)/w2
plot(x1,x2,type="l")
#Create hyperplane 2
b2 = 8
x2 = (b2-w1*x1)/w2
lines(x1,x2,col="red")
```

```
#Compute distance to hyperplane 2
print(abs(b1-b2)/sqrt(w1^2+w2^2))
```

```
## [1] 0.8944272
```

We see that this gives the *perpendicular* distance between the two parallel hyperplanes. The goal of the SVM is to maximize the distance (separation) between the two hyperplanes, and this is achieved by minimizing the norm \\(\|\|{\\bf w}\|\|\\). This naturally leads to a quadratic optimization problem.

\\\[ \\min\_{b\_1,b\_2,{\\bf w}} \\frac{1}{2} \|\|{\\bf w}\|\|^2 \\]

subject to \\({\\bf w} \\cdot {\\bf x} \\geq b\_1\\) for points in category \\(y\_1\\) and \\({\\bf w} \\cdot {\\bf x} \\leq b\_2\\) for points in category \\(y\_2\\). Note that this program may find a solution where many of the elements of \\({\\bf w}\\) are zero, i.e., it also finds the minimal set of “support” vectors that separate the two groups. The “half” in front of the minimand is for mathematical convenience in solving the quadratic program.

Of course, there may be no linear hyperplane that perfectly separates the two groups. This slippage may be accounted for in the SVM by allowing for points on the wrong side of the separating hyperplanes using cost functions, i.e., we modify the quadratic program as follows:

\\\[ \\min\_{b\_1,b\_2,{\\bf w},\\{\\eta\_i\\}} \\frac{1}{2} \|\|{\\bf w}\|\|^2 \+ C\_1 \\sum\_{i \\in \\mbox{group } 1} \\eta\_i \+ C\_2 \\sum\_{i \\in \\mbox{group } 2} \\eta\_i \\]

where \\(C\_1,C\_2\\) are the costs for slippage in groups 1 and 2, respectively, and each sum runs over the observations in that group. Often implementations assume \\(C\_1\=C\_2\\). The values \\(\\eta\_i\\) are positive for observations that are not perfectly separated, i.e., lead to slippage. Thus, for group 1, \\(\\eta\_i\\) is the amount by which observation \\(i\\) falls below the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1\\), i.e., it lies on the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_1 \- \\eta\_i\\). For group 2, \\(\\eta\_i\\) is the amount by which observation \\(i\\) lies above the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2\\), i.e., it lies on the hyperplane \\({\\bf w} \\cdot {\\bf x} \= b\_2 \+ \\eta\_i\\). For observations on the correct side of their respective hyperplanes, of course, \\(\\eta\_i\=0\\).
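To see what the slack variables mean numerically, here is a small sketch that is not from the text; it reuses the hyperplanes from the plot above (\\(w\=(1,2\)\\), \\(b\_1\=10\\), \\(b\_2\=8\\)) and applies them to a few made\-up observations.

```
#Slack variables for given w, b1, b2 (illustrative sketch; the points are made up)
w  = c(1, 2); b1 = 10; b2 = 8
x_group1 = rbind(c(4,3.5), c(2,4.5), c(5,2))   #should satisfy w.x >= b1
x_group2 = rbind(c(1,3), c(2,3.5), c(3,1))     #should satisfy w.x <= b2
eta1 = pmax(0, b1 - x_group1 %*% w)            #shortfall below w.x = b1
eta2 = pmax(0, x_group2 %*% w - b2)            #overshoot above w.x = b2
print(cbind(x_group1, eta1))                   #eta = 0 for correctly separated points
print(cbind(x_group2, eta2))
```

Dividing these \\(\\eta\_i\\) by \\(\|\|{\\bf w}\|\|\\) converts them into perpendicular distances, exactly as in the hyperplane distance computed earlier.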
### 7\.30\.1 Example of SVM with Confusion Matrix ``` library(e1071) #EXAMPLE 1 for SVM model = svm(iris[,1:4],iris[,5]) model ``` ``` ## ## Call: ## svm.default(x = iris[, 1:4], y = iris[, 5]) ## ## ## Parameters: ## SVM-Type: C-classification ## SVM-Kernel: radial ## cost: 1 ## gamma: 0.25 ## ## Number of Support Vectors: 51 ``` ``` out = predict(model,iris[,1:4]) out ``` ``` ## 1 2 3 4 5 6 ## setosa setosa setosa setosa setosa setosa ## 7 8 9 10 11 12 ## setosa setosa setosa setosa setosa setosa ## 13 14 15 16 17 18 ## setosa setosa setosa setosa setosa setosa ## 19 20 21 22 23 24 ## setosa setosa setosa setosa setosa setosa ## 25 26 27 28 29 30 ## setosa setosa setosa setosa setosa setosa ## 31 32 33 34 35 36 ## setosa setosa setosa setosa setosa setosa ## 37 38 39 40 41 42 ## setosa setosa setosa setosa setosa setosa ## 43 44 45 46 47 48 ## setosa setosa setosa setosa setosa setosa ## 49 50 51 52 53 54 ## setosa setosa versicolor versicolor versicolor versicolor ## 55 56 57 58 59 60 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 61 62 63 64 65 66 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 67 68 69 70 71 72 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 73 74 75 76 77 78 ## versicolor versicolor versicolor versicolor versicolor virginica ## 79 80 81 82 83 84 ## versicolor versicolor versicolor versicolor versicolor virginica ## 85 86 87 88 89 90 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 91 92 93 94 95 96 ## versicolor versicolor versicolor versicolor versicolor versicolor ## 97 98 99 100 101 102 ## versicolor versicolor versicolor versicolor virginica virginica ## 103 104 105 106 107 108 ## virginica virginica virginica virginica virginica virginica ## 109 110 111 112 113 114 ## virginica virginica virginica virginica virginica virginica ## 115 116 117 118 119 120 ## virginica virginica virginica virginica virginica versicolor ## 121 122 123 124 125 126 ## virginica virginica virginica virginica virginica virginica ## 127 128 129 130 131 132 ## virginica virginica virginica virginica virginica virginica ## 133 134 135 136 137 138 ## virginica versicolor virginica virginica virginica virginica ## 139 140 141 142 143 144 ## virginica virginica virginica virginica virginica virginica ## 145 146 147 148 149 150 ## virginica virginica virginica virginica virginica virginica ## Levels: setosa versicolor virginica ``` ``` print(length(out)) ``` ``` ## [1] 150 ``` ``` table(matrix(out),iris[,5]) ``` ``` ## ## setosa versicolor virginica ## setosa 50 0 0 ## versicolor 0 48 2 ## virginica 0 2 48 ``` So it does marginally better than naive Bayes. Here is another example. 
### 7\.30\.2 Another example

```
#EXAMPLE 2 for SVM
train_data = matrix(rpois(60,3),10,6)
print(train_data)
```

```
##       [,1] [,2] [,3] [,4] [,5] [,6]
##  [1,]    0    4    7    6    4    2
##  [2,]    2    4    4    4    2    3
##  [3,]    2    3    5    1    6    2
##  [4,]    2    5    3    5    4    4
##  [5,]    1    3    3    1    2    3
##  [6,]    2    2    4    8    4    0
##  [7,]    2    4    3    3    4    2
##  [8,]    4    4    4    5    2    0
##  [9,]    1    5    4    1    1    2
## [10,]    5    3    6    4    4    2
```

```
train_class = as.matrix(c(2,3,1,2,2,1,3,2,3,3))
print(train_class)
```

```
##       [,1]
##  [1,]    2
##  [2,]    3
##  [3,]    1
##  [4,]    2
##  [5,]    2
##  [6,]    1
##  [7,]    3
##  [8,]    2
##  [9,]    3
## [10,]    3
```

```
library(e1071)
model = svm(train_data,train_class)
model
```

```
## 
## Call:
## svm.default(x = train_data, y = train_class)
## 
## 
## Parameters:
##    SVM-Type:  eps-regression 
##  SVM-Kernel:  radial 
##        cost:  1 
##       gamma:  0.1666667 
##     epsilon:  0.1 
## 
## 
## Number of Support Vectors:  9
```

```
pred = predict(model,train_data, type="raw")
table(pred,train_class)
```

```
##                   train_class
## pred               1 2 3
##   1.25759920432731 1 0 0
##   1.56659922213705 1 0 0
##   2.03896978308775 0 1 0
##   2.07877220630261 0 1 0
##   2.07882451500643 0 1 0
##   2.079102996171   0 1 0
##   2.50854276105477 0 0 1
##   2.60314938880547 0 0 1
##   2.80915400612272 0 0 1
##   2.92106239193998 0 0 1
```

```
train_fitted = round(pred,0)
print(cbind(train_class,train_fitted))
```

```
##      train_fitted
## 1  2            2
## 2  3            3
## 3  1            2
## 4  2            2
## 5  2            2
## 6  1            1
## 7  3            3
## 8  2            2
## 9  3            3
## 10 3            3
```

```
train_fitted = matrix(train_fitted)
table(train_class,train_fitted)
```

```
##            train_fitted
## train_class 1 2 3
##           1 1 1 0
##           2 0 4 0
##           3 0 0 4
```

How do we know if the confusion matrix shows statistically significant classification power? We do a chi\-square test.

```
library(e1071)
res = naiveBayes(iris[,1:4],iris[,5])
pred = predict(res,iris[,1:4])
out = table(pred,iris[,5])
out
```

```
## 
## pred         setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         47         3
##   virginica       0          3        47
```

```
chisq.test(out)
```

```
## 
##  Pearson's Chi-squared test
## 
## data:  out
## X-squared = 266.16, df = 4, p-value < 2.2e-16
```
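As a quick sanity check on what **chisq.test()** is doing, and as a preview of the formula presented with the confusion matrix below, the expected cell counts under the independence null are the products of the row and column totals divided by the grand total. The by\-hand sketch below is not from the text, but it should reproduce the statistic above.

```
#By-hand chi-square statistic for the confusion matrix 'out' computed above
A    = as.matrix(out)                         #observed counts
E    = rowSums(A) %o% colSums(A) / sum(A)     #expected counts under independence
chi2 = sum((A - E)^2 / E)
dof  = (nrow(A) - 1) * (ncol(A) - 1)
print(c(chi2, dof, 1 - pchisq(chi2, dof)))    #statistic, degrees of freedom, p-value
```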
7\.31 Word count classifiers, adjectives, and adverbs
-----------------------------------------------------

1. Given a lexicon of selected words, one may sign the words as positive or negative, and then do a simple word count to compute net sentiment or mood of text.
By establishing appropriate cut offs, one can determine the classification of text into optimistic, neutral, or pessimistic. These cut offs are determined using the training and testing data sets. 2. Word count classifiers may be enhanced by focusing on “emphasis words” such as adjectives and adverbs, especially when classifying emotive content. One approach used in Das and Chen (2007\) is to identify all adjectives and adverbs in the text and then only consider words that are within \\(\\pm 3\\) words before and after the adjective or adverb. This extracts the most emphatic parts of the text only, and then mood scores it. 7\.32 Fisher’s discriminant --------------------------- * Fisher’s discriminant is simply the ratio of the variation of a given word across groups to the variation within group. * More formally, Fisher’s discriminant score \\(F(w)\\) for word \\(w\\) is \\\[ F(w) \= \\frac{\\frac{1}{K} \\sum\_{j\=1}^K ({\\bar w}\_j \- {\\bar w}\_0\)^2}{\\frac{1}{K} \\sum\_{j\=1}^K \\sigma\_j^2} \\nonumber \\] where \\(K\\) is the number of categories and \\({\\bar w}\_j\\) is the mean occurrence of the word \\(w\\) in each text in category \\(j\\), and \\({\\bar w}\_0\\) is the mean occurrence across all categories. And \\(\\sigma\_j^2\\) is the variance of the word occurrence in category \\(j\\). This is just one way in which Fisher’s discriminant may be calculated, and there are other variations on the theme. * We may compute \\(F(w)\\) for each word \\(w\\), and then use it to weight the word counts of each text, thereby giving greater credence to words that are better discriminants. 7\.33 Vector\-Distance Classifier --------------------------------- Suppose we have 500 documents in each of two categories, bullish and bearish. These 1,000 documents may all be placed as points in \\(n\\)\-dimensional space. It is more than likely that the points in each category will lie closer to each other than to the points in the other category. Now, if we wish to classify a new document, with vector \\(D\_i\\), the obvious idea is to look at which cluster it is closest to, or which point in either cluster it is closest to. The closeness between two documents \\(i\\) and \\(j\\) is determined easily by the well known metric of cosine distance, i.e., \\\[ 1 \- \\cos(\\theta\_{ij}) \= 1 \- \\frac{D\_i^\\top D\_j}{\|\|D\_i\|\| \\cdot \|\|D\_j\|\|} \\nonumber \\] where \\(\|\|D\_i\|\| \= \\sqrt{D\_i^\\top D\_i}\\) is the norm of the vector \\(D\_i\\). The cosine of the angle between the two document vectors is 1 if the two vectors are identical, and in this case the distance between them would be zero. 7\.34 Confusion matrix ---------------------- The confusion matrix is the classic tool for assessing classification accuracy. Given \\(n\\) categories, the matrix is of dimension \\(n \\times n\\). The rows relate to the category assigned by the analytic algorithm and the columns refer to the correct category in which the text resides. Each cell \\((i,j)\\) of the matrix contains the number of text messages that were of type \\(j\\) and were classified as type \\(i\\). The cells on the diagonal of the confusion matrix state the number of times the algorithm got the classification right. All other cells are instances of classification error. If an algorithm has no classification ability, then the rows and columns of the matrix will be independent of each other. 
Under this null hypothesis, the statistic that is examined for rejection is as follows:

\\\[ \\chi^2\[dof\=(n\-1\)^2] \= \\sum\_{i\=1}^n \\sum\_{j\=1}^n \\frac{\[A(i,j) \- E(i,j)]^2}{E(i,j)} \\]

where \\(A(i,j)\\) are the actual numbers observed in the confusion matrix, and \\(E(i,j)\\) are the expected numbers, assuming no classification ability under the null. If \\(T(i)\\) represents the total across row \\(i\\) of the confusion matrix, and \\(T(j)\\) the column total, then

\\\[ E(i,j) \= \\frac{T(i) \\times T(j)}{\\sum\_{i\=1}^n T(i)} \\equiv \\frac{T(i) \\times T(j)}{\\sum\_{j\=1}^n T(j)} \\]

The degrees of freedom of the \\(\\chi^2\\) statistic is \\((n\-1\)^2\\). This statistic is very easy to implement and may be applied to models for any \\(n\\). A highly significant statistic is evidence of classification ability.

7\.35 Accuracy
--------------

Algorithm accuracy over a classification scheme is the percentage of text that is correctly classified. This may be done in\-sample or out\-of\-sample. To compute this off the confusion matrix, we calculate

\\\[ \\mbox{Accuracy} \= \\frac{ \\sum\_{i\=1}^K O(i,i)}{\\sum\_{j\=1}^K M(j)} \= \\frac{ \\sum\_{i\=1}^K O(i,i)}{\\sum\_{i\=1}^K M(i)} \\]

where \\(O(i,i)\\) are the diagonal entries of the confusion matrix, and \\(M(i)\\) and \\(M(j)\\) are the row and column totals, which sum to the same grand total. We should hope that this is at least greater than \\(1/K\\), which is the accuracy level achieved on average from random guessing.

### 7\.35\.1 Sentiment over Time

### 7\.35\.2 Stock Sentiment Correlations

### 7\.35\.3 Phase Lag Analysis

7\.36 False Positives
---------------------

1. The percentage of false positives is a useful metric to work with. It may be calculated as a simple count or as a weighted count (by nearness of wrong category) of false classifications divided by total classifications undertaken.
2. For example, assume that category 1 is BULLISH and category 3 is BEARISH, whereas category 2 is NEUTRAL. The false positives would arise from mis\-classifying category 1 as 3 and vice\-versa. We compute the false positive rate for illustration.
3. The false positive rate is just 1% in the example below.

```
Omatrix = matrix(c(22,1,0,3,44,3,1,1,25),3,3)
print((Omatrix[1,3]+Omatrix[3,1])/sum(Omatrix))
```

```
## [1] 0.01
```

7\.37 Sentiment Error
---------------------

In a 3\-way classification scheme, where category 1 is BULLISH and category 3 is BEARISH, whereas category 2 is NEUTRAL, we can compute this metric as follows.

\\\[ \\mbox{Sentiment Error} \= 1 \- \\frac{M(i\=1\)\-M(i\=3\)}{M(j\=1\)\-M(j\=3\)} \\nonumber \\]

where, as before, \\(i\\) indexes the rows (the algorithm’s classifications) and \\(j\\) indexes the columns (the true categories). In our illustrative example, we may easily calculate this metric. The net sentiment classified by the algorithm (from the row totals) is \\(\-2 \= 26\-28\\), whereas it actually should have been \\(\-4 \= 23\-27\\) (from the column totals). The error in sentiment is therefore \\(1 \- (\-2)/(\-4\) \= 0\.5\\), i.e., 50%.

```
print(Omatrix)
```

```
##      [,1] [,2] [,3]
## [1,]   22    3    1
## [2,]    1   44    1
## [3,]    0    3   25
```

```
rsum = rowSums(Omatrix)
csum = colSums(Omatrix)
print(rsum)
```

```
## [1] 26 46 28
```

```
print(csum)
```

```
## [1] 23 50 27
```

```
print(1 - (-2)/(-4))
```

```
## [1] 0.5
```

7\.38 Disagreement
------------------

The metric uses the number of signed buys and sells in the day (based on a sentiment model) to determine how much difference of opinion there is in the market. The metric is computed as follows:

\\\[ \\mbox{DISAG} \= \\left\| 1 \- \\left\| \\frac{B\-S}{B\+S} \\right\| \\right\| \\]

where \\(B, S\\) are the numbers of classified buys and sells. Note that DISAG is bounded between zero and one.
Using the classified buys (category 1, BULLISH) and sells (category 3, BEARISH) in the same example as before, we may compute disagreement. Since there is little agreement (26 classified buys versus 28 classified sells), disagreement is high.

```
print(Omatrix)
```

```
##      [,1] [,2] [,3]
## [1,]   22    3    1
## [2,]    1   44    1
## [3,]    0    3   25
```

```
DISAG = abs(1-abs((26-28)/(26+28)))
print(DISAG)
```

```
## [1] 0.962963
```

7\.39 Precision and Recall
--------------------------

The creation of the confusion matrix leads naturally to two measures that are associated with it.

Precision is the fraction of positives identified that are truly positive, and is also known as positive predictive value. It is a measure of usefulness of prediction. So if the algorithm (say) was tasked with selecting those account holders on LinkedIn who are actually looking for a job, and it identifies \\(n\\) such people of which only \\(m\\) were really looking for a job, then the precision would be \\(m/n\\).

Recall is the proportion of positives that are correctly identified, and is also known as sensitivity. It is a measure of how complete the prediction is. If the actual number of people looking for a job on LinkedIn was \\(M\\), then recall would be \\(m/M\\).

For example, suppose we have the following confusion matrix.

| **Predicted \\ Actual** | Looking for Job | Not Looking | Total |
| --- | --- | --- | --- |
| Looking for Job | 10 | 2 | 12 |
| Not Looking | 1 | 16 | 17 |
| Total | 11 | 18 | 29 |

In this case precision is \\(10/12\\) and recall is \\(10/11\\). Precision is related to false positives: one minus precision is the fraction of predicted positives that are wrong (akin to a Type I error rate). Recall is related to false negatives: one minus recall is the fraction of actual positives that the model misses (akin to a Type II error rate).

One may also think of this in terms of true and false positives. There are in total 12 positives predicted by the model, of which 10 are true positives, and 2 are false positives. These values go into calculating precision. Of the predicted negatives, 1 is false (a missed positive), and this goes into calculating recall. Precision thus refers to the relevancy of the results returned, while recall refers to their completeness.

7\.40 RTextTools package
------------------------

This package bundles several text classification algorithms into a single interface.
``` library(tm) library(RTextTools) ``` ``` ## Loading required package: SparseM ``` ``` ## Warning: package 'SparseM' was built under R version 3.3.2 ``` ``` ## ## Attaching package: 'SparseM' ``` ``` ## The following object is masked from 'package:base': ## ## backsolve ``` ``` ## ## Attaching package: 'RTextTools' ``` ``` ## The following objects are masked from 'package:SnowballC': ## ## getStemLanguages, wordStem ``` ``` #Create sample text with positive and negative markers n = 1000 npos = round(runif(n,1,25)) nneg = round(runif(n,1,25)) flag = matrix(0,n,1) flag[which(npos>nneg)] = 1 text = NULL for (j in 1:n) { res = paste(c(sample(poswords,npos[j]),sample(negwords,nneg[j])),collapse=" ") text = c(text,res) } #Text Classification m = create_matrix(text) print(m) ``` ``` ## <<DocumentTermMatrix (documents: 1000, terms: 3711)>> ## Non-/sparse entries: 26023/3684977 ## Sparsity : 99% ## Maximal term length: 17 ## Weighting : term frequency (tf) ``` ``` m = create_matrix(text,weighting=weightTfIdf) print(m) ``` ``` ## <<DocumentTermMatrix (documents: 1000, terms: 3711)>> ## Non-/sparse entries: 26023/3684977 ## Sparsity : 99% ## Maximal term length: 17 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ``` ``` container <- create_container(m,flag,trainSize=1:(n/2), testSize=(n/2+1):n,virgin=FALSE) #models <- train_models(container, algorithms=c("MAXENT","SVM","GLMNET","SLDA","TREE","BAGGING","BOOSTING","RF")) models <- train_models(container, algorithms=c("MAXENT","SVM","GLMNET","TREE")) results <- classify_models(container, models) analytics <- create_analytics(container, results) #RESULTS #analytics@algorithm_summary # SUMMARY OF PRECISION, RECALL, F-SCORES, AND ACCURACY SORTED BY TOPIC CODE FOR EACH ALGORITHM #analytics@label_summary # SUMMARY OF LABEL (e.g. TOPIC) ACCURACY #analytics@document_summary # RAW SUMMARY OF ALL DATA AND SCORING #analytics@ensemble_summary # SUMMARY OF ENSEMBLE PRECISION/COVERAGE. USES THE n VARIABLE PASSED INTO create_analytics() #CONFUSION MATRIX yhat = as.matrix(analytics@document_summary$CONSENSUS_CODE) y = flag[(n/2+1):n] print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 255 6 ## 1 212 27 ``` 7\.41 Grading Text ------------------ In recent years, the SAT exams added a new essay section. While the test aimed at assessing original writing, it also introduced automated grading. A goal of the test is to assess the writing level of the student. This is associated with the notion of *readability*. ### 7\.41\.1 Readability “Readability” is a metric of how easy it is to comprehend text. Given a goal of efficient markets, regulators want to foster transparency by making sure financial documents that are disseminated to the investing public are readable. Hence, metrics for readability are very important and are recently gaining traction. ### 7\.41\.2 Gunning\-Fog Index Gunning (1952\) developed the Fog index. The index estimates the years of formal education needed to understand text on a first reading. A fog index of 12 requires the reading level of a U.S. high school senior (around 18 years old). The index is based on the idea that poor readability is associated with longer sentences and complex words. Complex words are those that have more than two syllables. The formula for the Fog index is \\\[ 0\.4 \\cdot \\left\[\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}} \+ 100 \\cdot \\left( \\frac{\\mbox{\\\#complex words}}{\\mbox{\\\#words}} \\right) \\right] \\] Alternative readability scores use similar ideas. 
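Before turning to those alternatives, note that the Fog formula itself is easy to compute directly. The sketch below is not from the text: it splits sentences on simple punctuation and crudely proxies “complex” (three\-or\-more\-syllable) words by counting words with three or more vowel groups, which is only a rough stand\-in for a true syllable count.

```
#Minimal Gunning-Fog sketch (illustrative; vowel groups proxy for syllables)
fog_index = function(txt) {
  sentences = unlist(strsplit(txt, "[.!?]+"))
  sentences = sentences[grepl("[[:alpha:]]", sentences)]   #drop empty splits
  words = unlist(strsplit(tolower(txt), "[^[:alpha:]]+"))
  words = words[words != ""]
  vgrps = sapply(gregexpr("[aeiouy]+", words), function(m) sum(m > 0))
  cmplx = sum(vgrps >= 3)                                  #crude "complex word" count
  0.4 * (length(words)/length(sentences) + 100 * cmplx/length(words))
}
fog_index("The index estimates the years of formal education needed to understand
           text on a first reading. Poor readability is associated with longer
           sentences and complex words.")
```

A proper syllable counter, such as the hyphenation routines used by the **koRpus** package below, would replace the crude vowel\-group proxy.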
The Flesch Reading Ease Score and the Flesch\-Kincaid Grade Level also use counts of words, syllables, and sentences. See [http://en.wikipedia.org/wiki/Flesch\-Kincaid\_readability\_tests](http://en.wikipedia.org/wiki/Flesch-Kincaid_readability_tests). The Flesch Reading Ease Score is defined as

\\\[ 206\.835 \- 1\.015 \\left(\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}}\\right) \- 84\.6 \\left( \\frac{\\mbox{\\\#syllables}}{\\mbox{\\\#words}} \\right) \\]

Scores in the range 90\-100 are easily accessible to an 11\-year old, scores of 60\-70 are easy to understand for 13\-15 year olds, and scores of 0\-30 are best suited to university graduates.

### 7\.41\.3 The Flesch\-Kincaid Grade Level

This is defined as

\\\[ 0\.39 \\left(\\frac{\\mbox{\\\#words}}{\\mbox{\\\#sentences}}\\right) \+ 11\.8 \\left( \\frac{\\mbox{\\\#syllables}}{\\mbox{\\\#words}} \\right) \-15\.59 \\]

which gives a number that corresponds to the grade level. As expected, these two measures are negatively correlated.

Various other measures of readability use the same ideas as in the Fog index. For example the Coleman and Liau (1975\) index does not even require a count of syllables, as follows:

\\\[ CLI \= 0\.0588 L \- 0\.296 S \- 15\.8 \\]

where \\(L\\) is the average number of letters per hundred words and \\(S\\) is the average number of sentences per hundred words.

Standard readability metrics may not work well for financial text. Loughran and McDonald (2014\) find that the Fog index is inferior to simply looking at 10\-K file size.

**References**

M. Coleman and T. L. Liau. (1975\). A computer readability formula designed for machine scoring. *Journal of Applied Psychology* 60, 283\-284\.

T. Loughran and W. McDonald, (2014\). Measuring readability in financial disclosures, *The Journal of Finance* 69, 1643\-1671\.
7\.42 koRpus package
--------------------

The R package **koRpus** provides readability scoring; see [http://www.inside\-r.org/packages/cran/koRpus/docs/readability](http://www.inside-r.org/packages/cran/koRpus/docs/readability).

First, let’s grab some text from my web site.

```
library(rvest)
url = "http://srdas.github.io/bio-candid.html"
doc.html = read_html(url)
text = doc.html %>% html_nodes("p") %>% html_text()
text = gsub("[\t\n]"," ",text)
text = gsub('"'," ",text)   #removes double quotes
text = paste(text, collapse=" ")
print(text)
```

```
## [1] " Sanjiv Das: A Short Academic Life History After loafing and working in many parts of Asia, but never really growing up, Sanjiv moved to New York to change the world, hopefully through research. He graduated in 1994 with a Ph.D. from NYU, and since then spent five years in Boston, and now lives in San Jose, California. Sanjiv loves animals, places in the world where the mountains meet the sea, riding sport motorbikes, reading, gadgets, science fiction movies, and writing cool software code. When there is time available from the excitement of daily life, Sanjiv writes academic papers, which helps him relax. Always the contrarian, Sanjiv thinks that New York City is the most calming place in the world, after California of course. Sanjiv is now a Professor of Finance at Santa Clara University. He came to SCU from Harvard Business School and spent a year at UC Berkeley. In his past life in the unreal world, Sanjiv worked at Citibank, N.A. in the Asia-Pacific region. He takes great pleasure in merging his many previous lives into his current existence, which is incredibly confused and diverse. Sanjiv's research style is instilled with a distinct New York state of mind - it is chaotic, diverse, with minimal method to the madness. He has published articles on derivatives, term-structure models, mutual funds, the internet, portfolio choice, banking models, credit risk, and has unpublished articles in many other areas. Some years ago, he took time off to get another degree in computer science at Berkeley, confirming that an unchecked hobby can quickly become an obsession. There he learnt about the fascinating field of Randomized Algorithms, skills he now applies earnestly to his editorial work, and other pursuits, many of which stem from being in the epicenter of Silicon Valley.
Coastal living did a lot to mold Sanjiv, who needs to live near the ocean. The many walks in Greenwich village convinced him that there is no such thing as a representative investor, yet added many unique features to his personal utility function. He learnt that it is important to open the academic door to the ivory tower and let the world in. Academia is a real challenge, given that he has to reconcile many more opinions than ideas. He has been known to have turned down many offers from Mad magazine to publish his academic work. As he often explains, you never really finish your education - you can check out any time you like, but you can never leave. Which is why he is doomed to a lifetime in Hotel California. And he believes that, if this is as bad as it gets, life is really pretty good. " ```

Now we can assess it for readability.

``` library(koRpus) ``` ``` ## ## Attaching package: 'koRpus' ``` ``` ## The following object is masked from 'package:lsa': ## ## query ``` ``` write(text,file="textvec.txt") text_tokens = tokenize("textvec.txt",lang="en") #print(text_tokens) print(c("Number of sentences: ",text_tokens@desc$sentences)) ``` ``` ## [1] "Number of sentences: " "24" ``` ``` print(c("Number of words: ",text_tokens@desc$words)) ``` ``` ## [1] "Number of words: " "446" ``` ``` print(c("Number of words per sentence: ",text_tokens@desc$avg.sentc.length)) ``` ``` ## [1] "Number of words per sentence: " "18.5833333333333" ``` ``` print(c("Average length of words: ",text_tokens@desc$avg.word.length)) ``` ``` ## [1] "Average length of words: " "4.67488789237668" ```

Next we generate several indices of readability, which are worth looking at.

``` print(readability(text_tokens)) ``` ``` ## Hyphenation (language: en) ```
``` ## Warning: Bormuth: Missing word list, hence not calculated. ``` ``` ## Warning: Coleman: POS tags are not elaborate enough, can't count pronouns ## and prepositions. Formulae skipped. ``` ``` ## Warning: Dale-Chall: Missing word list, hence not calculated. ``` ``` ## Warning: DRP: Missing Bormuth Mean Cloze, hence not calculated. ``` ``` ## Warning: Harris.Jacobson: Missing word list, hence not calculated. ``` ``` ## Warning: Spache: Missing word list, hence not calculated. ``` ``` ## Warning: Traenkle.Bailer: POS tags are not elaborate enough, can't count ## prepositions and conjuctions. Formulae skipped. ``` ``` ## Warning: Note: The implementations of these formulas are still subject to validation: ## Coleman, Danielson.Bryan, Dickes.Steiwer, ELF, Fucks, Harris.Jacobson, nWS, Strain, Traenkle.Bailer, TRI ## Use the results with caution, even if they seem plausible! 
``` ``` ## ## Automated Readability Index (ARI) ## Parameters: default ## Grade: 9.88 ## ## ## Coleman-Liau ## Parameters: default ## ECP: 47% (estimted cloze percentage) ## Grade: 10.09 ## Grade: 10.1 (short formula) ## ## ## Danielson-Bryan ## Parameters: default ## DB1: 7.64 ## DB2: 48.58 ## Grade: 9-12 ## ## ## Dickes-Steiwer's Handformel ## Parameters: default ## TTR: 0.58 ## Score: 42.76 ## ## ## Easy Listening Formula ## Parameters: default ## Exsyls: 149 ## Score: 6.21 ## ## ## Farr-Jenkins-Paterson ## Parameters: default ## RE: 56.1 ## Grade: >= 10 (high school) ## ## ## Flesch Reading Ease ## Parameters: en (Flesch) ## RE: 59.75 ## Grade: >= 10 (high school) ## ## ## Flesch-Kincaid Grade Level ## Parameters: default ## Grade: 9.54 ## Age: 14.54 ## ## ## Gunning Frequency of Gobbledygook (FOG) ## Parameters: default ## Grade: 12.55 ## ## ## FORCAST ## Parameters: default ## Grade: 10.01 ## Age: 15.01 ## ## ## Fucks' Stilcharakteristik ## Score: 86.88 ## Grade: 9.32 ## ## ## Linsear Write ## Parameters: default ## Easy words: 87 ## Hard words: 13 ## Grade: 11.71 ## ## ## Läsbarhetsindex (LIX) ## Parameters: default ## Index: 40.56 ## Rating: standard ## Grade: 6 ## ## ## Neue Wiener Sachtextformeln ## Parameters: default ## nWS 1: 5.42 ## nWS 2: 5.97 ## nWS 3: 6.28 ## nWS 4: 6.81 ## ## ## Readability Index (RIX) ## Parameters: default ## Index: 4.08 ## Grade: 9 ## ## ## Simple Measure of Gobbledygook (SMOG) ## Parameters: default ## Grade: 12.01 ## Age: 17.01 ## ## ## Strain Index ## Parameters: default ## Index: 8.45 ## ## ## Kuntzsch's Text-Redundanz-Index ## Parameters: default ## Short words: 297 ## Punctuation: 71 ## Foreign: 0 ## Score: -56.22 ## ## ## Tuldava's Text Difficulty Formula ## Parameters: default ## Index: 4.43 ## ## ## Wheeler-Smith ## Parameters: default ## Score: 62.08 ## Grade: > 4 ## ## Text language: en ```

7\.43 Text Summarization
------------------------

It is really easy to write a summarizer in a few lines of code. The function below takes in an array of sentences and returns an \\(n\\)\-sentence summary. Each element of the input array is one sentence of the document we want summarized. In the function we need to calculate how similar each sentence is to every other one. This could be done using cosine similarity, but here we use another approach, Jaccard similarity. Given two sentences, Jaccard similarity is the ratio of the size of their intersection word set to the size of their union word set.

### 7\.43\.1 Jaccard Similarity

A document \\(D\\) is composed of \\(m\\) sentences \\(s\_i, i\=1,2,...,m\\), where each \\(s\_i\\) is a set of words. We compute the pairwise overlap between sentences using the **Jaccard** similarity index: \\\[ J\_{ij} \= J(s\_i, s\_j) \= \\frac{\|s\_i \\cap s\_j\|}{\|s\_i \\cup s\_j\|} \= J\_{ji} \\\] The overlap is the ratio of the size of the intersection of the two word sets in sentences \\(s\_i\\) and \\(s\_j\\), divided by the size of the union of the two sets. The similarity score of each sentence is computed as the row sums of the Jaccard similarity matrix. \\\[ {\\cal S}\_i \= \\sum\_{j\=1}^m J\_{ij} \\\]

### 7\.43\.2 Generating the summary

Once the row sums are obtained, they are sorted and the summary is the first \\(n\\) sentences based on the \\({\\cal S}\_i\\) values. 
``` # FUNCTION TO RETURN n SENTENCE SUMMARY # Input: array of sentences (text) # Output: n most common intersecting sentences text_summary = function(text, n) { m = length(text) # No of sentences in input jaccard = matrix(0,m,m) #Store match index for (i in 1:m) { for (j in i:m) { a = text[i]; aa = unlist(strsplit(a," ")) b = text[j]; bb = unlist(strsplit(b," ")) jaccard[i,j] = length(intersect(aa,bb))/ length(union(aa,bb)) jaccard[j,i] = jaccard[i,j] } } similarity_score = rowSums(jaccard) res = sort(similarity_score, index.return=TRUE, decreasing=TRUE) idx = res$ix[1:n] summary = text[idx] } ``` ### 7\.43\.3 Example: Summarization We will use a sample of text that I took from Bloomberg news. It is about the need for data scientists. ``` url = "DSTMAA_data/dstext_sample.txt" #You can put any text file or URL here text = read_web_page(url,cstem=0,cstop=0,ccase=0,cpunc=0,cflat=1) print(length(text[[1]])) ``` ``` ## [1] 1 ``` ``` print("ORIGINAL TEXT") ``` ``` ## [1] "ORIGINAL TEXT" ``` ``` print(text) ``` ``` ## [1] "THERE HAVE BEEN murmurings that we are now in the “trough of disillusionment” of big data, the hype around it having surpassed the reality of what it can deliver. Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data. Data scientists were meant to be the answer to this issue. Indeed, Hal Varian, Chief Economist at Google famously joked that “The sexy job in the next 10 years will be statisticians.” He was clearly right as we are now used to hearing that data scientists are the key to unlocking the value of big data. This has created a huge market for people with these skills. US recruitment agency, Glassdoor, report that the average salary for a data scientist is $118,709 versus $64,537 for a skilled programmer. And a McKinsey study predicts that by 2018, the United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and a 1.5 million shortage of managers with the skills to understand and make decisions based on analysis of big data. It’s no wonder that companies are keen to employ data scientists when, for example, a retailer using big data can reportedly increase their margin by more than 60%. However, is it really this simple? Can data scientists actually justify earning their salaries when brands seem to be struggling to realize the promise of big data? Perhaps we are expecting too much of data scientists. May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets. The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole. This theme of centralized vs. decentralized decision-making is one that has long been debated in the management literature. For many organisations a centralized structure helps maintain control over a vast international operation, plus ensures consistency of customer experience. Others, meanwhile, may give managers at a local level decision-making power particularly when it comes to tactical needs. 
But the issue urgently needs revisiting in the context of big data as the way in which organisations manage themselves around data may well be a key factor for brands in realizing the value of their data assets. Economist and philosopher Friedrich Hayek took the view that organisations should consider the purpose of the information itself. Centralized decision-making can be more cost-effective and co-ordinated, he believed, but decentralization can add speed and local information that proves more valuable, even if the bigger picture is less clear. He argued that organisations thought too highly of centralized knowledge, while ignoring ‘knowledge of the particular circumstances of time and place’. But it is only relatively recently that economists are starting to accumulate data that allows them to gauge how successful organisations organize themselves. One such exercise reported by Tim Harford was carried out by Harvard Professor Julie Wulf and the former chief economist of the International Monetary Fund, Raghuram Rajan. They reviewed the workings of large US organisations over fifteen years from the mid-80s. What they found was successful companies were often associated with a move towards decentralisation, often driven by globalisation and the need to react promptly to a diverse and swiftly-moving range of markets, particularly at a local level. Their research indicated that decentralisation pays. And technological advancement often goes hand-in-hand with decentralization. Data analytics is starting to filter down to the department layer, where executives are increasingly eager to trawl through the mass of information on offer. Cloud computing, meanwhile, means that line managers no longer rely on IT teams to deploy computer resources. They can do it themselves, in just minutes. The decentralization trend is now impacting on technology spending. According to Gartner, chief marketing officers have been given the same purchasing power in this area as IT managers and, as their spending rises, so that of data centre managers is falling. Tim Harford makes a strong case for the way in which this decentralization is important given that the environment in which we operate is so unpredictable. Innovation typically comes, he argues from a “swirling mix of ideas not from isolated minds.” And he cites Jane Jacobs, writer on urban planning– who suggested we find innovation in cities rather than on the Pacific islands. But this approach is not necessarily always adopted. For example, research by academics Donald Marchand and Joe Peppard discovered that there was still a tendency for brands to approach big data projects the same way they would existing IT projects: i.e. using centralized IT specialists with a focus on building and deploying technology on time, to plan, and within budget. The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e. how do people actually deliver value from data assets. Marchand and Peppard suggest (among other recommendations) that those who need to be able to create meaning from data should be at the heart of any initiative. As ever then, the real value from data comes from asking the right questions of the data. And the right questions to ask only emerge if you are close enough to the business to see them. Are data scientists earning their salary? 
In my view they are a necessary but not sufficient part of the solution; brands need to be making greater investment in working with a greater range of users to help them ask questions of the data. Which probably means that data scientists’ salaries will need to take a hit in the process." ``` ``` text2 = strsplit(text,". ",fixed=TRUE) #Special handling of the period. text2 = text2[[1]] print("SENTENCES") ``` ``` ## [1] "SENTENCES" ``` ``` print(text2) ``` ``` ## [1] "THERE HAVE BEEN murmurings that we are now in the “trough of disillusionment” of big data, the hype around it having surpassed the reality of what it can deliver" ## [2] " Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data" ## [3] "Data scientists were meant to be the answer to this issue" ## [4] "Indeed, Hal Varian, Chief Economist at Google famously joked that “The sexy job in the next 10 years will be statisticians.” He was clearly right as we are now used to hearing that data scientists are the key to unlocking the value of big data" ## [5] "This has created a huge market for people with these skills" ## [6] "US recruitment agency, Glassdoor, report that the average salary for a data scientist is $118,709 versus $64,537 for a skilled programmer" ## [7] "And a McKinsey study predicts that by 2018, the United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and a 1.5 million shortage of managers with the skills to understand and make decisions based on analysis of big data" ## [8] " It’s no wonder that companies are keen to employ data scientists when, for example, a retailer using big data can reportedly increase their margin by more than 60%" ## [9] " However, is it really this simple? Can data scientists actually justify earning their salaries when brands seem to be struggling to realize the promise of big data? 
Perhaps we are expecting too much of data scientists" ## [10] "May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets" ## [11] "The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole" ## [12] "This theme of centralized vs" ## [13] "decentralized decision-making is one that has long been debated in the management literature" ## [14] " For many organisations a centralized structure helps maintain control over a vast international operation, plus ensures consistency of customer experience" ## [15] "Others, meanwhile, may give managers at a local level decision-making power particularly when it comes to tactical needs" ## [16] " But the issue urgently needs revisiting in the context of big data as the way in which organisations manage themselves around data may well be a key factor for brands in realizing the value of their data assets" ## [17] "Economist and philosopher Friedrich Hayek took the view that organisations should consider the purpose of the information itself" ## [18] "Centralized decision-making can be more cost-effective and co-ordinated, he believed, but decentralization can add speed and local information that proves more valuable, even if the bigger picture is less clear" ## [19] " He argued that organisations thought too highly of centralized knowledge, while ignoring ‘knowledge of the particular circumstances of time and place’" ## [20] "But it is only relatively recently that economists are starting to accumulate data that allows them to gauge how successful organisations organize themselves" ## [21] "One such exercise reported by Tim Harford was carried out by Harvard Professor Julie Wulf and the former chief economist of the International Monetary Fund, Raghuram Rajan" ## [22] "They reviewed the workings of large US organisations over fifteen years from the mid-80s" ## [23] "What they found was successful companies were often associated with a move towards decentralisation, often driven by globalisation and the need to react promptly to a diverse and swiftly-moving range of markets, particularly at a local level" ## [24] "Their research indicated that decentralisation pays" ## [25] "And technological advancement often goes hand-in-hand with decentralization" ## [26] "Data analytics is starting to filter down to the department layer, where executives are increasingly eager to trawl through the mass of information on offer" ## [27] "Cloud computing, meanwhile, means that line managers no longer rely on IT teams to deploy computer resources" ## [28] "They can do it themselves, in just minutes" ## [29] " The decentralization trend is now impacting on technology spending" ## [30] "According to Gartner, chief marketing officers have been given the same purchasing power in this area as IT managers and, as their spending rises, so that of data centre managers is falling" ## [31] "Tim Harford makes a strong case for the way in which this decentralization is important given that the environment in which we operate is so unpredictable" ## [32] "Innovation typically comes, he argues from a “swirling mix of ideas not from isolated minds.” And he cites Jane Jacobs, writer on urban planning– who suggested we find innovation in cities rather than on the Pacific islands" ## [33] "But this approach is not 
necessarily always adopted" ## [34] "For example, research by academics Donald Marchand and Joe Peppard discovered that there was still a tendency for brands to approach big data projects the same way they would existing IT projects: i.e" ## [35] "using centralized IT specialists with a focus on building and deploying technology on time, to plan, and within budget" ## [36] "The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e" ## [37] "how do people actually deliver value from data assets" ## [38] "Marchand and Peppard suggest (among other recommendations) that those who need to be able to create meaning from data should be at the heart of any initiative" ## [39] "As ever then, the real value from data comes from asking the right questions of the data" ## [40] "And the right questions to ask only emerge if you are close enough to the business to see them" ## [41] "Are data scientists earning their salary? In my view they are a necessary but not sufficient part of the solution; brands need to be making greater investment in working with a greater range of users to help them ask questions of the data" ## [42] "Which probably means that data scientists’ salaries will need to take a hit in the process." ``` ``` print("SUMMARY") ``` ``` ## [1] "SUMMARY" ``` ``` res = text_summary(text2,5) print(res) ``` ``` ## [1] " Gartner suggested that the “gravitational pull of big data is now so strong that even people who haven’t a clue as to what it’s all about report that they’re running big data projects.” Indeed, their research with business decision makers suggests that organisations are struggling to get value from big data" ## [2] "The focus on the data scientist often implies a centralized approach to analytics and decision making; we implicitly assume that a small team of highly skilled individuals can meet the needs of the organisation as a whole" ## [3] "May be we are investing too much in a relatively small number of individuals rather than thinking about how we can design organisations to help us get the most from data assets" ## [4] "The problem with a centralized ‘IT-style’ approach is that it ignores the human side of the process of considering how people create and use information i.e" ## [5] "Which probably means that data scientists’ salaries will need to take a hit in the process." ```
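As noted earlier, cosine similarity could be used in place of Jaccard similarity for scoring sentences. A minimal sketch of that variant is given below; it builds a simple term\-frequency matrix by hand (no additional packages assumed) and otherwise reuses the same row\-sum scoring logic as `text_summary`.

```
# Summarizer variant that scores sentences by cosine similarity of raw term frequencies
text_summary_cosine = function(text, n) {
  tokens = lapply(text, function(s) unlist(strsplit(tolower(s), "[^a-z']+")))
  vocab = unique(unlist(tokens))
  tf = t(sapply(tokens, function(w) table(factor(w, levels = vocab))))  # sentences x terms
  norms = sqrt(rowSums(tf^2))
  norms[norms == 0] = 1                        # guard against empty sentences
  cosine = (tf %*% t(tf)) / (norms %o% norms)  # pairwise cosine similarity matrix
  score = rowSums(cosine)                      # row-sum score, as with Jaccard
  idx = sort(score, index.return = TRUE, decreasing = TRUE)$ix[1:n]
  text[idx]
}

# Usage, with text2 as created above:
# print(text_summary_cosine(text2, 5))
```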
7\.44 Research in Finance
-------------------------

In this segment we explore various text mining research in the field of finance.

1. Lu, Chen, Chen, Hung, and Li (2010\) categorize finance related textual content into three categories: (a) forums, blogs, and wikis; (b) news and research reports; and (c) content generated by firms.
2. Extracting sentiment and other information from messages posted to stock message boards such as Yahoo!, Motley Fool, Silicon Investor, Raging Bull, etc., see Tumarkin and Whitelaw (2001\), Antweiler and Frank (2004\), Antweiler and Frank (2005\), Das, Martinez\-Jerez and Tufano (2005\), Das and Chen (2007\).
3. Other news sources: Lexis\-Nexis, Factiva, Dow Jones News, etc., see Das, Martinez\-Jerez and Tufano (2005\); Boudoukh, Feldman, Kogan, Richardson (2012\).
4. The Heard on the Street column in the Wall Street Journal has been used in work by Tetlock (2007\), Tetlock, Saar\-Tsechansky and Macskassay (2008\); see also the use of Wall Street Journal articles by Lu, Chen, Chen, Hung, and Li (2010\).
5.
Thomson\-Reuters NewsScope Sentiment Engine (RNSE), based on Infonics/Lexalytics algorithms and varied data on stocks and text from internal databases, see Leinweber and Sisk (2011\). Zhang and Skiena (2010\) develop a market neutral trading strategy using news media such as tweets, over 500 newspapers, Spinn3r RSS feeds, and LiveJournal.

### 7\.44\.1 Das and Chen (*Management Science* 2007\)

### 7\.44\.2 Using Twitter and Facebook for Market Prediction

1. Bollen, Mao, and Zeng (2010\) claimed that stock direction of the Dow Jones Industrial Average can be predicted using tweets with 87\.6% accuracy.
2. Bar\-Haim, Dinur, Feldman, Fresko and Goldstein (2011\) attempt to predict stock direction using tweets by detecting and overweighting the opinion of expert investors.
3. Brown (2012\) looks at the correlation between tweets and the stock market via several measures.
4. Logunov (2011\) uses OpinionFinder to generate many measures of sentiment from tweets.
5. Twitter\-based sentiment developed by Rao and Srivastava (2012\) is found to be highly correlated with stock prices and indexes, as high as 0\.88 for returns.
6. Sprenger and Welpe (2010\) find that tweet bullishness is associated with abnormal stock returns and tweet volume predicts trading volume.

7\.45 Polarity and Subjectivity
-------------------------------

Zhang and Skiena (2010\) use Twitter feeds and also three other sources of text: over 500 nationwide newspapers, RSS feeds from blogs, and LiveJournal blogs. These are used to compute two metrics. \\\[ \\mbox{polarity} \= \\frac{n\_{pos} \- n\_{neg}}{n\_{pos} \+ n\_{neg}} \\\] \\\[ \\mbox{subjectivity} \= \\frac{n\_{pos} \+ n\_{neg}}{N} \\\] where \\(N\\) is the total number of words in a text document, and \\(n\_{pos}, n\_{neg}\\) are the number of positive and negative words, respectively.

* They find that the number of articles is predictive of trading volume.
* Subjectivity is also predictive of trading volume, lending credence to the idea that differences of opinion make markets.
* Stock return prediction is weak using polarity, but tweets do seem to have some predictive power.
* Various sentiment driven market neutral strategies are shown to be profitable, though the study is not tested for robustness.

Logunov (2011\) uses Twitter data, applies OpinionFinder, and also develops a new classifier called Naive Emoticon Classification to encode sentiment. This is an unusual and original, albeit quite intuitive, use of emoticons to determine mood in text mining. If an emoticon exists, then the tweet is automatically coded with the corresponding sentiment. 
Four types of emoticons are considered: Happy (H), Sad (S), Joy (J), and Cry (C). Polarity is defined here as \\\[ \\mbox{polarity} \= A \= \\frac{n\_H \+ n\_J}{n\_H \+ n\_S \+ n\_J \+ n\_C} \\\] Values greater than 0\.5 are positive. \\(A\\) stands for aggregate sentiment and appears to be strongly autocorrelated. Overall, prediction evidence is weak.

### 7\.45\.1 Text Mining Corporate Reports

* Text analysis is undertaken across companies in a cross\-section.
* The quality of text in company reports is much better than in message postings.
* Textual analysis in this area has also resulted in technical improvements. Rudimentary approaches such as word count methods have been extended to weighted schemes, where weights are determined in statistical ways. In Das and Chen (2007\), the discriminant score of each word across classification categories is used as a weighting index for the importance of words. There is a proliferation of word\-weighting schemes; a popular one uses “inverse document frequency” (\\(idf\\)) as a weighting coefficient. The \\(idf\\) for word \\(j\\) is \\\[ w\_j^{idf} \= \\ln \\left( \\frac{N}{df\_j} \\right) \\\] where \\(N\\) is the total number of documents, and \\(df\_j\\) is the number of documents containing word \\(j\\). This scheme was proposed by Manning and Schutze (1999\).
* Loughran and McDonald (2011\) use this weighting approach to modify the word (term) frequency counts in the documents they analyze. The weight on word \\(j\\) in document \\(i\\) is specified as \\\[ w\_{ij} \= \\max\[0,1 \+ \\ln(f\_{ij}) w\_{j}^{idf}] \\\] where \\(f\_{ij}\\) is the frequency count of word \\(j\\) in document \\(i\\). This leads naturally to a document score of \\\[ S\_i^{LM} \= \\frac{1}{1\+\\ln(a\_i)} \\sum\_{j\=1}^J w\_{ij} \\\] Here \\(a\_i\\) is the total number of words in document \\(i\\), and \\(J\\) is the total number of words in the lexicon. (The \\(LM\\) superscript signifies the weighting approach.) A small code sketch of this scoring scheme appears after this list.
* Whereas the \\(idf\\) approach is intuitive, it does not have to be relevant for market activity. An alternate and effective weighting scheme has been developed in Jegadeesh and Wu (2013, JW) using market movements. Words that occur more often on large market move days are given a greater weight than other words. JW show that this scheme is superior to an unweighted one, and delivers an accurate system for determining the “tone” of a regulatory filing.
* JW also conduct robustness checks that suggest that the approach is quite general, and applies to other domains with no additional modifications to the specification. Indeed, they find that tone extraction from 10\-Ks may be used to predict IPO underpricing.
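As flagged in the list above, here is a minimal R sketch of the \\(idf\\) weights and the Loughran\-McDonald style score \\(S\_i^{LM}\\). It assumes the input is a document\-term frequency matrix `f` (documents in rows, lexicon words in columns); this is an illustrative sketch under that assumption, not the authors' code.

```
# f: term-frequency matrix, rows = documents, cols = lexicon words
lm_score = function(f) {
  N = nrow(f)
  df = colSums(f > 0)                      # number of documents containing each word
  w_idf = log(N / df)                      # idf weight per word
  w = matrix(0, nrow(f), ncol(f))
  pos = which(f > 0)                       # only nonzero counts contribute
  w[pos] = pmax(0, (1 + log(f[pos])) * w_idf[col(f)[pos]])  # weighted term counts w_ij
  a = rowSums(f)                           # total words per document
  rowSums(w) / (1 + log(a))                # document scores S_i^LM
}

# Toy example: 3 documents, 4 lexicon words
f = rbind(c(2,0,1,0), c(0,3,0,1), c(1,1,1,1))
print(lm_score(f))
```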
7\.46 Tone
----------

* Jegadeesh and Wu (2013\) create a “global lexicon” merging multiple word lists: the Harvard\-IV\-4 Psychological Dictionaries (Harvard Inquirer), the Lasswell Value Dictionary, the Loughran and McDonald lists, and the word list in Bradley and Lang (1999\). They test this lexicon for robustness by checking (a) that the lexicon delivers accurate tone scores and (b) that it is complete, by discarding 50% of the words and seeing whether it causes a material change in results (it does not).
* This approach provides a more reliable measure of document tone than preceding approaches. Their measure of **filing tone** is statistically related to filing period returns after allowing for reasonable control variables. Tone is significantly related to returns for up to two weeks after filing, and it appears that the market underreacts to tone, with the underreaction corrected within this two\-week window.
* The tone score of document \\(i\\) in the JW paper is specified as \\\[ S\_i^{JW} \= \\frac{1}{a\_i} \\sum\_{j\=1}^J w\_j f\_{ij} \\\] where \\(w\_j\\) is the weight for word \\(j\\) based on its relationship to market movement. (The \\(JW\\) superscript signifies the weighting approach.)
* The following regression is used to determine the value of \\(w\_j\\) (across all documents). \\\[ \\begin{aligned} r\_i \&\= a \+ b \\cdot S\_i^{JW} \+ \\epsilon\_i \\\\ \&\= a \+ b \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J w\_j f\_{ij} \\right) \+ \\epsilon\_i \\\\ \&\= a \+ \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J (b w\_j) f\_{ij} \\right) \+ \\epsilon\_i \\\\ \&\= a \+ \\left( \\frac{1}{a\_i} \\sum\_{j\=1}^J B\_j f\_{ij} \\right) \+ \\epsilon\_i \\end{aligned} \\\] where \\(r\_i\\) is the abnormal return around the release of document \\(i\\), and \\(B\_j\=b w\_j\\) is a modified word weight. 
This is then translated back into the original estimated word weight by normalization, i.e., \\\[ w\_j \= \\frac{B\_j \- \\frac{1}{J}\\sum\_{j\=1}^J B\_j}{\\sigma(B\_j)} \\\] where \\(\\sigma(B\_j)\\) is the standard deviation of \\(B\_j\\) across all \\(J\\) words in the lexicon.

* Abnormal return \\(r\_i\\) is defined as the three\-day excess return over the CRSP value\-weighted return. \\\[ r\_i \= \\prod\_{t\=0}^3 ret\_{it} \- \\prod\_{t\=1}^3 ret\_{VW,t} \\\] Instead of \\(r\_i\\) as the left\-hand side variable in the regression, one might also use a binary variable for good and bad news, positive or negative 10\-Ks, etc., and instead of the regression we would use a limited dependent variable structure such as logit, probit, or even a Bayes classifier. However, the advantages of \\(r\_i\\) being a continuous variable are considerable, for it offers a range of outcomes and a simpler regression fit.
* JW use data from 10\-K filings over the period 1995–2010 extracted from SEC's EDGAR database. They ignore positive and negative words when a negator occurs within a distance of three words, the negators being “not,” “no,” and “never.”
* Word weight scores are computed for the entire sample, and also for three roughly equal concatenated subperiods. The correlation of word weights across these subperiods is high, around 0\.50 on average. Hence, the word weights appear to be quite stable over time and different economic regimes. As would be expected, when two subperiods are used the correlation of word weights is higher, suggesting that longer samples deliver better weighting scores. Interestingly, the correlation of JW scores with LM \\(idf\\) scores is low, and therefore, they are not substitutes.
* JW examine the market variables that determine document score \\(S\_i^{JW}\\) for each 10\-K, with right\-hand side variables being the size of the firm, book\-to\-market, volatility, turnover, the three\-day excess return over the CRSP VW index around earnings announcements, and accruals. Both positive and negative tone are significantly related to size and BM, suggesting that risk factors are captured in the score.
* Volatility is also significant and has the correct sign, i.e., increases in volatility make negative tone more negative and positive tone less positive.
* The same holds for turnover, in that more turnover makes tone pessimistic. The greater the earnings announcement abnormal return, the higher the tone, though this is not significant. Accruals do not significantly relate to score.
* When regressing filing period return on document score and other controls (same as in the previous paragraph), the score is always statistically significant. Hence text in the 10\-Ks does correlate with the market's view of the firm after incorporating the information in the 10\-K and from other sources.
* Finally, JW find a negative relation between tone and IPO underpricing, suggesting that term weights from one domain can be reliably used in a different domain.

### 7\.46\.1 MD\&A Usage

* When using company filings, it is often an important issue whether to use the entire text of the filing or not. Sharper conclusions may be possible from specific sections of a filing such as the 10\-K. Loughran and McDonald (2011\) examined whether the Management Discussion and Analysis (MD\&A) section of the filing was better at providing tone (sentiment) than the entire 10\-K. They found that it was not. 
* They also showed that using their six tailor\-made word lists gave better results for detecting tone than did the Harvard Inquirer words. And as discussed earlier, proper word\-weighting also improves tone detection. Their word lists also worked well in detecting tone for seasoned equity offerings and news articles, providing good correlation with returns. ### 7\.46\.2 Readability of Financial Reports * Loughran and McDonald (2014\) examine the readability of financial documents by studying the text in 10\-K filings. They compute the Fog index for these documents and compare this to post\-filing measures of the information environment, such as the volatility of returns and the dispersion of analysts’ recommendations. When the text is readable, there should be less dispersion in the information environment, i.e., lower volatility and lower dispersion of analysts’ expectations around the release of the 10\-K. * While they find that the Fog index does not seem to correlate well with these measures of the information environment, the file size of the 10\-K is a much better measure and is significantly related to return volatility, earnings forecast errors, and earnings forecast dispersion, after accounting for control variables such as size, book\-to\-market, lagged volatility, lagged return, and industry effects. * Li (2008\) also shows that 10\-Ks with a high Fog index and longer length have lower subsequent earnings. Thus managers with poor performance may try to hide this by increasing the complexity of their documents, mostly by increasing the size of their filings. * The readability of business documents has caught the attention of many researchers, not unexpectedly in the accounting area. DeFranco et al (2013\) combine the Fog, Flesch\-Kincaid, and Flesch scores to show that higher readability of analysts’ reports is related to higher trading volume, suggesting that a better information environment induces people to trade more and not shy away from the market. * Lehavy et al (2011\) show that a greater Fog index on 10\-Ks is correlated with greater analyst following, more analyst dispersion, and lower accuracy of their forecasts. Most of the literature focuses on 10\-Ks because these are deemed the most informative to investors, but it would be interesting to see if readability is any different when looking at shorter documents such as 10\-Qs. Whether the simple, dominant (albeit language\-independent) measure of file size remains a strong indicator of readability remains to be seen in documents other than 10\-Ks. * Another examination of 10\-K text appears in Bodnaruk et al (2013\). Here, the authors measure the percentage of negative words in 10\-Ks to see if this is an indicator of financial constraints that improves on existing measures. There is low correlation of this measure with size, where bigger firms are widely posited to be less financially constrained. But an increase in the percentage of negative words suggests an inflection point indicating the tendency of a firm to lapse into a state of financial constraint. Using control variables such as market capitalization, prior returns, and a negative earnings indicator, the percentage of negative words helps more in identifying which firm will be financially constrained than widely used constraint indexes. The negative word count is useful in that it is independent of the way in which the filing is written, and picks up cues from managers who tend to use more negative words.
* The number of negative words is useful in predicting liquidity events such as dividend cuts or omissions, downgrades, and asset growth. A one standard deviation increase in negative words increases the likelihood of a dividend omission by 8\.9% and a debt downgrade by 10\.8%. An obvious extension of this work would be to see whether default probability models may be enhanced by using the percentage of negative words as an explanatory variable. ### 7\.46\.3 Corporate Finance and Risk Management 1. Sprenger (2011\) integrates data from text classification of tweets, user voting, and a proprietary stock game to extract the bullishness of online investors; these ideas are behind the site <http://TweetTrader.net>. 2. Tweets also pose interesting problems of big streaming data, discussed in Pervin, Fang, Datta, and Dutta (2013\). 3. The data used here are from filings such as 10\-Ks (Loughran and McDonald (2011\); Burdick et al (2011\); Bodnaruk, Loughran, and McDonald (2013\); Jegadeesh and Wu (2013\); Loughran and McDonald (2014\)). ### 7\.46\.4 Predicting Markets 1. Wysocki (1999\) found that for the 50 top firms in message posting volume on Yahoo! Finance, message volume predicted next\-day abnormal stock returns. Using a broader set of firms, he also found that high message volume firms were those with inflated valuations (relative to fundamentals), high trading volume, high short seller activity (given possibly inflated valuations), high analyst following (message posting appears to be related to news as well, correlated with a general notion of “attention” stocks), and low institutional holdings (hence broader investor discussion and interest), all intuitive outcomes. 2. Bagnoli, Beneish, and Watts (1999\) found that earnings “whispers”, unofficial crowd\-sourced forecasts of quarterly earnings from small investors, are more accurate than First Call analyst forecasts. 3. Tumarkin and Whitelaw (2001\) examined self\-reported sentiment on the Raging Bull message board and found no predictive content, either for returns or for volume. ### 7\.46\.5 Bullishness Index Antweiler and Frank (2004\), henceforth AF04, used the Naive Bayes algorithm for classification, implemented in the {Rainbow} package of Andrew McCallum (1996\). They also repeated the same analysis using Support Vector Machines (SVMs) as a robustness check. Both algorithms generate similar empirical results. Once the algorithm is trained, they use it out\-of\-sample to label each message as \\(\\{Buy, Hold, Sell\\}\\). Let \\(n\_B, n\_S\\) be the number of buy and sell messages, respectively. Then \\(R \= n\_B/n\_S\\) is just the ratio of buy to sell messages. Based on this they define their bullishness index \\\[ B \= \\frac{n\_B \- n\_S}{n\_B \+ n\_S} \= \\frac{R\-1}{R\+1} \\in (\-1,\+1\) \\] This metric is independent of the number of messages, i.e., it is homogeneous of degree zero in \\(n\_B,n\_S\\). An alternative measure is also proposed, i.e., \\\[ \\begin{aligned} B^\* \&\= \\ln\\left\[\\frac{1\+n\_B}{1\+n\_S} \\right] \\\\ \&\= \\ln\\left\[\\frac{1\+R(1\+n\_B\+n\_S)}{1\+R\+n\_B\+n\_S} \\right] \\\\ \&\= \\ln\\left\[\\frac{2\+(n\_B\+n\_S)(1\+B)}{2\+(n\_B\+n\_S)(1\-B)} \\right] \\\\ \& \\approx B \\cdot \\ln(1\+n\_B\+n\_S) \\end{aligned} \\] This measure takes the bullishness index \\(B\\) and weights it by the number of messages of both categories. It is homogeneous of a degree between zero and one.
They also propose a third, more direct measure, i.e., \\\[ B^{\*\*} \= n\_B \- n\_S \= (n\_B\+n\_S) \\cdot \\frac{R\-1}{R\+1} \= M \\cdot B \\] which is homogeneous of degree one, and is a message\-weighted bullishness index (here \\(M \= n\_B \+ n\_S\\) is the total number of classified messages). They prefer to use \\(B^\*\\) in their algorithms as it appears to deliver the best predictive results. Finally, they produce an agreement index, \\\[ A \= 1 \- \\sqrt{1\-B^2} \\in (0,1\) \\] Note how closely this is related to the disagreement index seen earlier. * The bullishness index does not predict returns, but returns do explain message posting. More messages are posted in periods of negative returns, but this is not a significant relationship. * A contemporaneous relation between returns and bullishness is present. Overall, AF04 present some important results that are indicative of the results in this literature, confirmed also in subsequent work. * First, that message board postings do not predict returns. * Second, that disagreement (measured from postings) induces trading. * Third, message posting does predict volatility at daily frequencies and intraday. * Fourth, messages reflect public information rapidly. Overall, they conclude that stock chat is meaningful in content and not just noise.
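These indices are simple to compute once messages have been classified. The following minimal sketch, using made\-up daily counts of buy and sell messages (the vectors `nB` and `nS` are purely illustrative), shows the calculation of \\(B\\), \\(B^\*\\), \\(B^{\*\*}\\), and the agreement index \\(A\\):

```
# Hypothetical daily counts of messages classified as Buy and Sell
nB = c(120, 45, 300, 10)
nS = c(80, 60, 100, 90)
M = nB + nS                        # total number of classified messages per day

B = (nB - nS)/(nB + nS)            # basic bullishness index, in (-1, +1)
Bstar = log((1 + nB)/(1 + nS))     # message-weighted variant, approximately B*log(1+M)
Bstarstar = nB - nS                # fully message-weighted variant, equals M*B
A = 1 - sqrt(1 - B^2)              # agreement index, in (0, 1)

print(round(cbind(nB, nS, B, Bstar, Bstarstar, A), 3))
```

Note that scaling both `nB` and `nS` by the same factor leaves \\(B\\) unchanged but increases the magnitude of \\(B^\*\\) and \\(B^{\*\*}\\), which is exactly the homogeneity property discussed above.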
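Before turning to commercial developments, here is the small computational sketch of the term weighting referred to earlier in this section. It is only illustrative: the frequency matrix `f`, the four lexicon words, and the use of total lexicon counts as the document length \\(a\_i\\) are hypothetical simplifications, and the weight formula is read as the dampened term frequency \\(\\max\[0, 1 \+ \\ln f\_{ij}]\\) scaled by \\(w\_j^{idf}\\).

```
# Hypothetical term-frequency matrix f: 3 documents x 4 lexicon words
f = rbind(c(2, 0, 1, 5),
          c(0, 3, 0, 1),
          c(1, 1, 0, 0))
rownames(f) = c("doc1", "doc2", "doc3")
colnames(f) = c("loss", "adverse", "gain", "litigation")

N = nrow(f)              # total number of documents
df = colSums(f > 0)      # number of documents containing each word
w_idf = log(N/df)        # idf weight for each word j

# Dampened term frequencies, zeroed out for words absent from a document
w_tf = 1 + log(f)
w_tf[f == 0] = 0
w = w_tf %*% diag(w_idf) # weight on word j in document i
dimnames(w) = dimnames(f)

# Document scores, scaled by document length a_i (proxied here by total lexicon counts)
a = rowSums(f)
S_LM = rowSums(w)/(1 + log(a))
print(round(w_idf, 3)); print(round(w, 3)); print(round(S_LM, 3))
```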
7\.47 Commercial Developments ----------------------------- ### 7\.47\.1 IBM’s Midas System ### 7\.47\.2 Stock Twits ### 7\.47\.3 iSentium ### 7\.47\.4 RavenPack ### 7\.47\.5 Possible Applications for Finance Firms An illustrative list of **applications** for finance firms is as follows: * Monitoring corporate buzz. * Analyzing textual data to identify and understand the more profitable customers or products. * Targeting new clients. * Customer retention, which is a huge issue.
Text mining complaints to prioritize customer remedial action makes a huge difference, especially in the insurance business. * Lending activity: automated management of profiling information for lending screening. * Market prediction and trading. * Risk management. * Automated financial analysts. * Financial forensics to prevent rogue employees from inflicting large losses. * Fraud detection. * Detecting market manipulation. * Social network analysis of clients. * Measuring institutional risk from systemic risk. 7\.48 Latent Semantic Analysis (LSA) ------------------------------------ Latent Semantic Analysis (LSA) is an approach for reducing the dimension of the Term\-Document Matrix (TDM), or the corresponding Document\-Term Matrix (DTM); the two terms are generally used interchangeably unless a specific one is invoked. Dimension reduction of the TDM offers two benefits: * The DTM is usually a sparse matrix, and sparseness means that our algorithms have to do a lot of work on cells that are mostly zero, which is clearly wasteful. Some of this sparseness is attenuated by applying LSA to the TDM. * The problem of synonymy also exists in the TDM, which usually contains thousands of terms (words). Synonymy arises because many words have similar meanings, i.e., redundancy exists in the list of terms. LSA mitigates this redundancy, as we shall see through the ensuing analysis. * While not precisely the same thing, think of LSA in the text domain as analogous to PCA in the data domain. ### 7\.48\.1 How is LSA implemented using SVD? LSA is the application of Singular Value Decomposition (SVD) to the TDM, extracted from a text corpus. Define the TDM to be a matrix \\(M \\in {\\cal R}^{m \\times n}\\), where \\(m\\) is the number of terms and \\(n\\) is the number of documents. The SVD of matrix \\(M\\) is given by \\\[ M \= T \\cdot S \\cdot D^\\top \\] where \\(T \\in {\\cal R}^{m \\times n}\\) and \\(D \\in {\\cal R}^{n \\times n}\\) are matrices with orthonormal columns, and \\(S \\in {\\cal R}^{n \\times n}\\) is the “singular values” matrix, i.e., a diagonal matrix with singular values on the diagonal. These values denote the relative importance of the latent dimensions in the TDM. ### 7\.48\.2 Example Create a temporary directory and add some documents to it.
This is a modification of the example in the **lsa** package. ``` system("mkdir D") write( c("blue", "red", "green"), file=paste("D", "D1.txt", sep="/")) write( c("black", "blue", "red"), file=paste("D", "D2.txt", sep="/")) write( c("yellow", "black", "green"), file=paste("D", "D3.txt", sep="/")) write( c("yellow", "red", "black"), file=paste("D", "D4.txt", sep="/")) ``` Create a TDM using the **textmatrix** function. ``` library(lsa) tdm = textmatrix("D",minWordLength=1) print(tdm) ``` ``` ## docs ## terms D1.txt D2.txt D3.txt D4.txt ## blue 1 1 0 0 ## green 1 0 1 0 ## red 1 1 0 1 ## black 0 1 1 1 ## yellow 0 0 1 1 ``` Remove the extra directory. ``` system("rm -rf D") ``` 7\.49 Singular Value Decomposition (SVD) ---------------------------------------- SVD tries to connect the correlation matrix of terms (\\(M \\cdot M^\\top\\)) with the correlation matrix of documents (\\(M^\\top \\cdot M\\)) through the singular value matrix. To see this connection, note that matrix \\(T\\) contains the eigenvectors of the correlation matrix of terms, while matrix \\(D\\) contains the eigenvectors of the correlation matrix of documents. We can verify this by computing both sets of eigenvectors. ``` et = eigen(tdm %*% t(tdm))$vectors print(et) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] -0.3629044 -6.015010e-01 -0.06829369 3.717480e-01 0.6030227 ## [2,] -0.3328695 -2.220446e-16 -0.89347008 5.551115e-16 -0.3015113 ## [3,] -0.5593741 -3.717480e-01 0.31014767 -6.015010e-01 -0.3015113 ## [4,] -0.5593741 3.717480e-01 0.31014767 6.015010e-01 -0.3015113 ## [5,] -0.3629044 6.015010e-01 -0.06829369 -3.717480e-01 0.6030227 ``` ``` ed = eigen(t(tdm) %*% tdm)$vectors print(ed) ``` ``` ## [,1] [,2] [,3] [,4] ## [1,] -0.4570561 0.601501 -0.5395366 -0.371748 ## [2,] -0.5395366 0.371748 0.4570561 0.601501 ## [3,] -0.4570561 -0.601501 -0.5395366 0.371748 ## [4,] -0.5395366 -0.371748 0.4570561 -0.601501 ``` ### 7\.49\.1 Dimension reduction of the TDM via LSA If we wish to reduce the dimension of the latent semantic space to \\(k \< n\\) then we use only the first \\(k\\) eigenvectors.
The **lsa** function does this automatically. We call LSA and ask it to reduce the dimension of the TDM using the built\-in function **dimcalc\_share**. ``` res = lsa(tdm,dims=dimcalc_share()) print(res) ``` ``` ## $tk ## [,1] [,2] ## blue -0.3629044 -6.015010e-01 ## green -0.3328695 -5.551115e-17 ## red -0.5593741 -3.717480e-01 ## black -0.5593741 3.717480e-01 ## yellow -0.3629044 6.015010e-01 ## ## $dk ## [,1] [,2] ## D1.txt -0.4570561 -0.601501 ## D2.txt -0.5395366 -0.371748 ## D3.txt -0.4570561 0.601501 ## D4.txt -0.5395366 0.371748 ## ## $sk ## [1] 2.746158 1.618034 ## ## attr(,"class") ## [1] "LSAspace" ``` We can see that the dimension has been reduced from \\(n\=4\\) to \\(n\=2\\). The output is shown for both the term matrix and the document matrix, both of which have only two columns. Think of these as the two “principal semantic components” of the TDM. Compare the output of the LSA to the eigenvectors above to see that it is exactly that. The singular values in the output are connected to the SVD as follows. ### 7\.49\.2 LSA and SVD: the connection? First of all, we see that the **lsa** function is essentially the **svd** function in base R. ``` res2 = svd(tdm) print(res2) ``` ``` ## $d ## [1] 2.746158 1.618034 1.207733 0.618034 ## ## $u ## [,1] [,2] [,3] [,4] ## [1,] -0.3629044 -6.015010e-01 0.06829369 3.717480e-01 ## [2,] -0.3328695 -5.551115e-17 0.89347008 -3.455569e-15 ## [3,] -0.5593741 -3.717480e-01 -0.31014767 -6.015010e-01 ## [4,] -0.5593741 3.717480e-01 -0.31014767 6.015010e-01 ## [5,] -0.3629044 6.015010e-01 0.06829369 -3.717480e-01 ## ## $v ## [,1] [,2] [,3] [,4] ## [1,] -0.4570561 -0.601501 0.5395366 -0.371748 ## [2,] -0.5395366 -0.371748 -0.4570561 0.601501 ## [3,] -0.4570561 0.601501 0.5395366 0.371748 ## [4,] -0.5395366 0.371748 -0.4570561 -0.601501 ``` The output here is the same as that of LSA except it is provided for \\(n\=4\\). So we have four columns in \\(T\\) and \\(D\\) rather than two. Compare the results here to the **lsa** output above to see the connection. ### 7\.49\.3 What is the rank of the TDM? We may reconstruct the TDM using the result of the LSA. ``` tdm_lsa = res$tk %*% diag(res$sk) %*% t(res$dk) print(tdm_lsa) ``` ``` ## D1.txt D2.txt D3.txt D4.txt ## blue 1.0409089 0.8995016 -0.1299115 0.1758948 ## green 0.4178005 0.4931970 0.4178005 0.4931970 ## red 1.0639006 1.0524048 0.3402938 0.6051912 ## black 0.3402938 0.6051912 1.0639006 1.0524048 ## yellow -0.1299115 0.1758948 1.0409089 0.8995016 ``` We see that the new TDM after the LSA operation has non\-integer frequency counts, but it may be treated in the same way as the original TDM. The document vectors populate a slightly different hyperspace. LSA reduces the rank of the TDM (and hence of the correlation matrix of terms \\(M \\cdot M^\\top\\)) from 4 to 2. Here we see the rank before and after LSA. ``` library(Matrix) print(rankMatrix(tdm)) ``` ``` ## [1] 4 ## attr(,"method") ## [1] "tolNorm2" ## attr(,"useGrad") ## [1] FALSE ## attr(,"tol") ## [1] 1.110223e-15 ``` ``` print(rankMatrix(tdm_lsa)) ``` ``` ## [1] 2 ## attr(,"method") ## [1] "tolNorm2" ## attr(,"useGrad") ## [1] FALSE ## attr(,"tol") ## [1] 1.110223e-15 ```
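The same rank\-2 reconstruction can be obtained directly from the base R **svd** output, which makes the LSA\-SVD connection explicit. The sketch below assumes the `tdm` and `tdm_lsa` objects from the example above are still in the workspace; the difference between the two reconstructions should be on the order of machine precision.

```
# Keep only the first k = 2 singular triplets and reconstruct the TDM
k = 2
res2 = svd(tdm)
tdm_k = res2$u[, 1:k] %*% diag(res2$d[1:k]) %*% t(res2$v[, 1:k])
dimnames(tdm_k) = dimnames(tdm)
print(round(tdm_k, 4))

# Compare with the reconstruction from the lsa() output
print(max(abs(tdm_k - tdm_lsa)))
```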
7\.50 Topic Analysis with Latent Dirichlet Allocation (LDA) ----------------------------------------------------------- ### 7\.50\.1 What does LDA have to do with LSA? It is similar to LSA, in that it seeks to find the most related words and cluster them into topics. It uses a Bayesian approach to do this, but more on that later. Here, let’s just do an example to see how we might use the **topicmodels** package.
``` #Load the package library(topicmodels) #Load data on news articles from Associated Press data(AssociatedPress) print(dim(AssociatedPress)) ``` ``` ## [1] 2246 10473 ``` This is a large DTM (not TDM). It has more than 10,000 terms, and more than 2,000 documents. This is very large and LDA will take some time, so let’s run it on a subset of the documents. ``` dtm = AssociatedPress[1:100,] dim(dtm) ``` ``` ## [1] 100 10473 ``` Now we run LDA on this data set. ``` #Set parameters for Gibbs sampling burnin = 4000 iter = 2000 thin = 500 seed = list(2003,5,63,100001,765) nstart = 5 best = TRUE #Number of topics k = 5 ``` ``` #Run LDA res <-LDA(dtm, k, method="Gibbs", control = list(nstart = nstart, seed = seed, best = best, burnin = burnin, iter = iter, thin = thin)) #Show topics res.topics = as.matrix(topics(res)) print(res.topics) ``` ``` ## [,1] ## [1,] 5 ## [2,] 4 ## [3,] 5 ## [4,] 1 ## [5,] 1 ## [6,] 4 ## [7,] 2 ## [8,] 1 ## [9,] 5 ## [10,] 5 ## [11,] 5 ## [12,] 3 ## [13,] 1 ## [14,] 4 ## [15,] 2 ## [16,] 3 ## [17,] 1 ## [18,] 1 ## [19,] 2 ## [20,] 3 ## [21,] 5 ## [22,] 2 ## [23,] 2 ## [24,] 1 ## [25,] 2 ## [26,] 4 ## [27,] 4 ## [28,] 2 ## [29,] 4 ## [30,] 3 ## [31,] 2 ## [32,] 1 ## [33,] 4 ## [34,] 1 ## [35,] 5 ## [36,] 4 ## [37,] 1 ## [38,] 4 ## [39,] 4 ## [40,] 2 ## [41,] 2 ## [42,] 2 ## [43,] 1 ## [44,] 1 ## [45,] 5 ## [46,] 3 ## [47,] 2 ## [48,] 3 ## [49,] 1 ## [50,] 4 ## [51,] 1 ## [52,] 2 ## [53,] 3 ## [54,] 1 ## [55,] 3 ## [56,] 4 ## [57,] 4 ## [58,] 2 ## [59,] 5 ## [60,] 2 ## [61,] 2 ## [62,] 3 ## [63,] 2 ## [64,] 1 ## [65,] 2 ## [66,] 4 ## [67,] 5 ## [68,] 2 ## [69,] 4 ## [70,] 5 ## [71,] 5 ## [72,] 5 ## [73,] 2 ## [74,] 5 ## [75,] 2 ## [76,] 1 ## [77,] 1 ## [78,] 1 ## [79,] 3 ## [80,] 5 ## [81,] 1 ## [82,] 3 ## [83,] 5 ## [84,] 3 ## [85,] 3 ## [86,] 5 ## [87,] 2 ## [88,] 5 ## [89,] 2 ## [90,] 5 ## [91,] 3 ## [92,] 1 ## [93,] 1 ## [94,] 4 ## [95,] 3 ## [96,] 4 ## [97,] 4 ## [98,] 4 ## [99,] 5 ## [100,] 5 ``` ``` #Show top terms res.terms = as.matrix(terms(res,10)) print(res.terms) ``` ``` ## Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 ## [1,] "i" "percent" "new" "soviet" "police" ## [2,] "people" "year" "york" "government" "central" ## [3,] "state" "company" "expected" "official" "man" ## [4,] "years" "last" "states" "two" "monday" ## [5,] "bush" "new" "officials" "union" "friday" ## [6,] "president" "bank" "program" "officials" "city" ## [7,] "get" "oil" "california" "war" "four" ## [8,] "told" "prices" "week" "president" "school" ## [9,] "administration" "report" "air" "world" "high" ## [10,] "dukakis" "million" "help" "leaders" "national" ``` ``` #Show topic probabilities res.topicProbs = as.data.frame(res@gamma) print(res.topicProbs) ``` ``` ## V1 V2 V3 V4 V5 ## 1 0.19169329 0.06070288 0.04472843 0.10223642 0.60063898 ## 2 0.12149533 0.14330218 0.08099688 0.58255452 0.07165109 ## 3 0.27213115 0.04262295 0.05901639 0.07868852 0.54754098 ## 4 0.29571984 0.16731518 0.19844358 0.19455253 0.14396887 ## 5 0.31896552 0.15517241 0.20689655 0.14655172 0.17241379 ## 6 0.30360934 0.08492569 0.08492569 0.46284501 0.06369427 ## 7 0.17050691 0.40092166 0.15668203 0.17050691 0.10138249 ## 8 0.37142857 0.15238095 0.14285714 0.20000000 0.13333333 ## 9 0.19298246 0.17543860 0.19298246 0.19298246 0.24561404 ## 10 0.19879518 0.16265060 0.17469880 0.18674699 0.27710843 ## 11 0.21212121 0.20202020 0.16161616 0.15151515 0.27272727 ## 12 0.20143885 0.15827338 0.25899281 0.17985612 0.20143885 ## 13 0.41395349 0.16279070 0.18139535 0.12558140 0.11627907 ## 14 0.17948718 0.17948718 0.12820513 0.30769231 
0.20512821 ## 15 0.05135952 0.78247734 0.06344411 0.06042296 0.04229607 ## 16 0.09770115 0.24712644 0.35632184 0.14942529 0.14942529 ## 17 0.43103448 0.18103448 0.09051724 0.10775862 0.18965517 ## 18 0.67857143 0.04591837 0.06377551 0.08418367 0.12755102 ## 19 0.07083333 0.70000000 0.08750000 0.07500000 0.06666667 ## 20 0.15196078 0.05637255 0.69117647 0.04656863 0.05392157 ## 21 0.21782178 0.11881188 0.12871287 0.15841584 0.37623762 ## 22 0.16666667 0.30000000 0.16666667 0.16666667 0.20000000 ## 23 0.19298246 0.21052632 0.17543860 0.21052632 0.21052632 ## 24 0.31775701 0.20560748 0.16822430 0.18691589 0.12149533 ## 25 0.05121951 0.65121951 0.15365854 0.08536585 0.05853659 ## 26 0.11740891 0.09311741 0.08502024 0.37246964 0.33198381 ## 27 0.06583072 0.05956113 0.10658307 0.68338558 0.08463950 ## 28 0.15068493 0.30136986 0.12328767 0.26027397 0.16438356 ## 29 0.07860262 0.04148472 0.05676856 0.68995633 0.13318777 ## 30 0.13968254 0.17142857 0.46031746 0.07936508 0.14920635 ## 31 0.08405172 0.74784483 0.07112069 0.05172414 0.04525862 ## 32 0.66137566 0.10846561 0.06349206 0.07407407 0.09259259 ## 33 0.14655172 0.18103448 0.15517241 0.41379310 0.10344828 ## 34 0.29605263 0.19736842 0.21052632 0.13157895 0.16447368 ## 35 0.08080808 0.05050505 0.10437710 0.07070707 0.69360269 ## 36 0.13333333 0.07878788 0.08484848 0.46666667 0.23636364 ## 37 0.46202532 0.08227848 0.12974684 0.16139241 0.16455696 ## 38 0.09442060 0.07296137 0.12017167 0.64377682 0.06866953 ## 39 0.11764706 0.08359133 0.10526316 0.62538700 0.06811146 ## 40 0.10869565 0.56521739 0.14492754 0.07246377 0.10869565 ## 41 0.07671958 0.43650794 0.16137566 0.25396825 0.07142857 ## 42 0.11445783 0.57831325 0.11445783 0.09036145 0.10240964 ## 43 0.55793991 0.10944206 0.08798283 0.09442060 0.15021459 ## 44 0.40939597 0.10067114 0.22818792 0.12751678 0.13422819 ## 45 0.20000000 0.15121951 0.12682927 0.25853659 0.26341463 ## 46 0.14828897 0.11406844 0.56653992 0.08365019 0.08745247 ## 47 0.09929078 0.41134752 0.13475177 0.22695035 0.12765957 ## 48 0.20129870 0.07467532 0.54870130 0.10714286 0.06818182 ## 49 0.46800000 0.09600000 0.18400000 0.10400000 0.14800000 ## 50 0.22955145 0.08179420 0.05013193 0.60158311 0.03693931 ## 51 0.28368794 0.17730496 0.18439716 0.14893617 0.20567376 ## 52 0.12977099 0.45801527 0.12977099 0.18320611 0.09923664 ## 53 0.10507246 0.14492754 0.55072464 0.06884058 0.13043478 ## 54 0.42647059 0.13725490 0.15196078 0.15686275 0.12745098 ## 55 0.11881188 0.19801980 0.44554455 0.08910891 0.14851485 ## 56 0.22857143 0.15714286 0.13571429 0.37142857 0.10714286 ## 57 0.15294118 0.07058824 0.06117647 0.66823529 0.04705882 ## 58 0.11494253 0.49425287 0.14367816 0.12068966 0.12643678 ## 59 0.13278008 0.04979253 0.13692946 0.26556017 0.41493776 ## 60 0.16666667 0.31666667 0.16666667 0.16666667 0.18333333 ## 61 0.06796117 0.73786408 0.08090615 0.04854369 0.06472492 ## 62 0.12680115 0.12968300 0.58213256 0.12103746 0.04034582 ## 63 0.07902736 0.72948328 0.09118541 0.05471125 0.04559271 ## 64 0.44285714 0.12142857 0.14285714 0.13214286 0.16071429 ## 65 0.19540230 0.31034483 0.19540230 0.14942529 0.14942529 ## 66 0.18518519 0.22222222 0.17037037 0.28888889 0.13333333 ## 67 0.07024793 0.07851240 0.08677686 0.04545455 0.71900826 ## 68 0.10181818 0.48000000 0.14909091 0.12727273 0.14181818 ## 69 0.12307692 0.15384615 0.10000000 0.43076923 0.19230769 ## 70 0.12745098 0.07352941 0.14215686 0.13235294 0.52450980 ## 71 0.21582734 0.10791367 0.16546763 0.14388489 0.36690647 ## 72 0.17560976 0.11219512 0.17073171 0.15609756 0.38536585 ## 73 
0.12280702 0.46198830 0.07602339 0.23976608 0.09941520 ## 74 0.20535714 0.16964286 0.17857143 0.14285714 0.30357143 ## 75 0.07567568 0.47027027 0.11891892 0.19459459 0.14054054 ## 76 0.67310789 0.15619968 0.07407407 0.05152979 0.04508857 ## 77 0.63834423 0.07189542 0.09150327 0.11546841 0.08278867 ## 78 0.61504425 0.09292035 0.11946903 0.11504425 0.05752212 ## 79 0.10971787 0.07523511 0.65830721 0.07210031 0.08463950 ## 80 0.11111111 0.08666667 0.11111111 0.05777778 0.63333333 ## 81 0.49681529 0.03821656 0.15286624 0.14437367 0.16772824 ## 82 0.20111732 0.17318436 0.24022346 0.15642458 0.22905028 ## 83 0.10731707 0.15609756 0.11219512 0.23902439 0.38536585 ## 84 0.26016260 0.10569106 0.36585366 0.13008130 0.13821138 ## 85 0.11525424 0.10508475 0.39322034 0.30508475 0.08135593 ## 86 0.15454545 0.06060606 0.15757576 0.09696970 0.53030303 ## 87 0.08301887 0.67924528 0.07924528 0.09433962 0.06415094 ## 88 0.16666667 0.15972222 0.22916667 0.11805556 0.32638889 ## 89 0.12389381 0.47787611 0.09734513 0.14159292 0.15929204 ## 90 0.12389381 0.11061947 0.23008850 0.10176991 0.43362832 ## 91 0.19724771 0.11009174 0.30275229 0.16972477 0.22018349 ## 92 0.33854167 0.13541667 0.12500000 0.11458333 0.28645833 ## 93 0.40131579 0.13815789 0.10526316 0.18421053 0.17105263 ## 94 0.06930693 0.10231023 0.09240924 0.67656766 0.05940594 ## 95 0.09130435 0.15000000 0.65434783 0.03043478 0.07391304 ## 96 0.13370474 0.13091922 0.12256267 0.49303621 0.11977716 ## 97 0.06709265 0.06070288 0.11501597 0.60383387 0.15335463 ## 98 0.16438356 0.16438356 0.17808219 0.28767123 0.20547945 ## 99 0.06274510 0.08235294 0.16470588 0.06666667 0.62352941 ## 100 0.11627907 0.20465116 0.11162791 0.16744186 0.40000000 ``` ``` #Check that each document's topic probabilities sum to one print(rowSums(res.topicProbs)) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## [36] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## [71] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ``` Note that the highest probability in each row assigns each document to a topic.
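As a quick check of that statement, a minimal sketch (assuming the `res.topicProbs` and `res.topics` objects created above are still in memory) recovers each document's topic as the column with the largest probability and compares it with the assignments from **topics**:

```
# Assign each document to its highest-probability topic
myTopics = apply(res.topicProbs, 1, which.max)
print(table(myTopics))                          # number of documents per topic

# This should agree with the assignments from topics(res) shown earlier
print(all(myTopics == as.vector(res.topics)))
```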
7\.51 LDA Explained (Briefly) ----------------------------- Latent Dirichlet Allocation (LDA) was created by David Blei, Andrew Ng, and Michael Jordan in 2003; see their paper titled “Latent Dirichlet Allocation” in the *Journal of Machine Learning Research*, pp 993–1022\. The simplest way to think about LDA is as a probability model that connects documents with words and topics. The components are: * A Vocabulary of \\(V\\) words, i.e., \\(w\_1,w\_2,...,w\_i,...,w\_V\\), each word indexed by \\(i\\). * A Document is a vector of \\(N\\) words, i.e., \\({\\bf w}\\). * A Corpus \\(D\\) is a collection of \\(M\\) documents, each document indexed by \\(j\\), i.e. \\(d\_j\\). Next, we connect the above objects to \\(K\\) topics, indexed by \\(l\\), i.e., \\(t\_l\\). We will see that LDA is encapsulated in two matrices: Matrix \\(A\\) and Matrix \\(B\\). ### 7\.51\.1 Matrix \\(A\\): Connecting Documents with Topics * This matrix has documents on the rows, so there are \\(M\\) rows. * The topics are on the columns, so there are \\(K\\) columns. * Therefore \\(A \\in {\\cal R}^{M \\times K}\\). * The row sums equal \\(1\\), i.e., for each document, we have a probability that it pertains to a given topic, i.e., \\(A\_{jl} \= Pr\[t\_l \| d\_j]\\), and \\(\\sum\_{l\=1}^K A\_{jl} \= 1\\). ### 7\.51\.2 Matrix \\(B\\): Connecting Words with Topics * This matrix has topics on the rows, so there are \\(K\\) rows. * The words are on the columns, so there are \\(V\\) columns. * Therefore \\(B \\in {\\cal R}^{K \\times V}\\). * The row sums equal \\(1\\), i.e., for each topic, we have a probability that it generates a given word, i.e., \\(B\_{li} \= Pr\[w\_i \| t\_l]\\), and \\(\\sum\_{i\=1}^V B\_{li} \= 1\\). ### 7\.51\.3 Distribution of Topics in a Document * Using Matrix \\(A\\), we can sample a \\(K\\)\-vector of probabilities of topics for a single document.
Denote the probability of this vector as \\(p(\\theta \| \\alpha)\\), where \\(\\theta, \\alpha \\in {\\cal R}^K\\), \\(\\theta, \\alpha \\geq 0\\), and \\(\\sum\_l \\theta\_l \= 1\\). * The probability \\(p(\\theta \| \\alpha)\\) is governed by a Dirichlet distribution, with density function \\\[ p(\\theta \| \\alpha) \= \\frac{\\Gamma(\\sum\_{l\=1}^K \\alpha\_l)}{\\prod\_{l\=1}^K \\Gamma(\\alpha\_l)} \\; \\prod\_{l\=1}^K \\theta\_l^{\\alpha\_l \- 1} \\] where \\(\\Gamma(\\cdot)\\) is the Gamma function. * LDA thus gets its name from the use of the Dirichlet distribution, embodied in Matrix \\(A\\); since the topics are latent, this explains the rest of the nomenclature. * Given \\(\\theta\\), we sample topics from matrix \\(A\\) with probability \\(p(t \| \\theta)\\). ### 7\.51\.4 Distribution of Words and Topics for a Document * The number of words in a document is assumed to be distributed Poisson with parameter \\(\\xi\\). * Matrix \\(B\\) gives the probability of a word appearing in a topic, \\(p(w \| t)\\). * The topic mixture is given by \\(\\theta\\). * The joint distribution over the topic mixture \\(\\theta\\), the \\(N\\) topic assignments \\({\\bf t}\\), and the \\(N\\) words \\({\\bf w}\\) of a document is given by \\\[ p(\\theta, {\\bf t}, {\\bf w}) \= p(\\theta \| \\alpha) \\prod\_{n\=1}^N p(t\_n \| \\theta) p(w\_n \| t\_n) \\] * The marginal distribution for a document’s words comes from integrating out the topic mixture \\(\\theta\\), and summing out the topics \\({\\bf t}\\), i.e., \\\[ p({\\bf w}) \= \\int p(\\theta \| \\alpha) \\left(\\prod\_{n\=1}^N \\sum\_{t\_n} p(t\_n \| \\theta) p(w\_n \| t\_n)\\; \\right) d\\theta \\] ### 7\.51\.5 Likelihood of the entire Corpus * This is given by: \\\[ p(D) \= \\prod\_{j\=1}^M \\int p(\\theta\_j \| \\alpha) \\left(\\prod\_{n\=1}^{N\_j} \\sum\_{t\_{jn}} p(t\_{jn} \| \\theta\_j) p(w\_{jn} \| t\_{jn})\\; \\right) d\\theta\_j \\] where \\(N\_j\\) is the number of words in document \\(j\\). * The goal is to maximize this likelihood by picking the vector \\(\\alpha\\) and the probabilities in the matrix \\(B\\). (Note that were a Dirichlet distribution not used, then we could directly pick values in Matrices \\(A\\) and \\(B\\).) * The computation is undertaken using MCMC with Gibbs sampling as shown in the example earlier. ### 7\.51\.6 Examples in Finance ### 7\.51\.7 word2vec (explained) For more details, see: [https://www.quora.com/How\-does\-word2vec\-work](https://www.quora.com/How-does-word2vec-work) **A geometrical interpretation**: word2vec is a shallow word embedding model. This means that the model learns to map each discrete word id (0 through the number of words in the vocabulary) into a low\-dimensional continuous vector space based on its distributional properties observed in some raw text corpus. Geometrically, one may interpret these vectors as tracing out points on the outside surface of a manifold in the “embedded space”. If we initialize these vectors from a spherical Gaussian distribution, then you can imagine this manifold to look something like a hypersphere initially. Let us focus on the CBOW for now. CBOW is trained to predict the target word t from the contextual words that surround it, c, i.e. the goal is to maximize P(t \| c) over the training set. I am simplifying somewhat, but you can show that this probability is roughly inversely proportional to the distance between the current vectors assigned to t and to c.
Since this model is trained in an online setting (one example at a time), at time T the goal is therefore to take a small step (mediated by the “learning rate”) in order to minimize the distance between the current vectors for t and c (and thereby increase the probability P(t \| c)). By repeating this process over the entire training set, we have that vectors for words that habitually co\-occur tend to be nudged closer together, and by gradually lowering the learning rate, this process converges towards some final state of the vectors. By the Distributional Hypothesis (Firth, 1957; see also the Wikipedia page on Distributional semantics), words with similar distributional properties (i.e. that co\-occur regularly) tend to share some aspect of semantic meaning. For example, we may find several sentences in the training set such as “citizens of X protested today” where X (the target word t) may be names of cities or countries that are semantically related. You can therefore interpret each training step as deforming or morphing the initial manifold by nudging the vectors for some words somewhat closer together, and the result, after projecting down to two dimensions, is the familiar t\-SNE visualizations where related words cluster together (e.g. Word representations for NLP). For the skipgram, the direction of the prediction is simply inverted, i.e. now we try to predict P(citizens \| X), P(of \| X), etc. This turns out to learn finer\-grained vectors when one trains over more data. The main reason is that the CBOW smooths over a lot of the distributional statistics by averaging over all context words while the skipgram does not. With little data, this “regularizing” effect of the CBOW turns out to be helpful, but since data is the ultimate regularizer the skipgram is able to extract more information when more data is available. There’s a bit more going on behind the scenes, but hopefully this helps to give a useful geometrical intuition as to how these models work.
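Returning to the LDA machinery in Sections 7\.51\.1 through 7\.51\.5, the generative model is easy to simulate once the matrix \\(B\\) and the Dirichlet parameter \\(\\alpha\\) are specified. The sketch below uses a made\-up two\-topic example (the vocabulary, \\(\\alpha\\), the row\-stochastic matrix `B`, and a fixed document length all stand in for real estimates), and draws from the Dirichlet by normalizing independent Gamma variates:

```
set.seed(42)
K = 2                                   # number of topics (hypothetical)
M = 3                                   # number of documents to simulate
Nw = 8                                  # words per document (the model assumes a Poisson draw)
vocab = c("bank", "loan", "rate", "game", "team", "score")
alpha = rep(0.5, K)                     # Dirichlet parameter (hypothetical)

# Matrix B: each row is a topic's probability distribution over the vocabulary (hypothetical)
B = rbind(c(0.30, 0.30, 0.30, 0.03, 0.03, 0.04),
          c(0.03, 0.03, 0.04, 0.30, 0.30, 0.30))

# A Dirichlet draw is obtained by normalizing independent Gamma draws
rdirichlet = function(a) { g = rgamma(length(a), shape = a); g/sum(g) }

for (j in 1:M) {
  theta = rdirichlet(alpha)                                       # topic mixture for document j
  z = sample(1:K, Nw, replace = TRUE, prob = theta)               # topic for each word slot
  words = sapply(z, function(l) sample(vocab, 1, prob = B[l, ]))  # word drawn given its topic
  cat("Doc", j, ": theta =", round(theta, 2), "| words:", words, "\n")
}
```

Estimation reverses this process: given only the observed words, the Gibbs sampler in the **topicmodels** example above infers the rows of Matrix \\(A\\) (the \\(\\theta\_j\\)) and the rows of Matrix \\(B\\).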
\- LDA thus gets its name from the use of the Dirichlet distribution, embodied in Matrix \\(A\\). Since the topics are latent, it explains the rest of the nomenclature. \- Given \\(\\theta\\), we sample topics from matrix \\(A\\) with probability \\(p(t \| \\theta)\\). ### 7\.51\.4 Distribution of Words and Topics for a Document * The number of words in a document is assumed to be distributed Poisson with parameter \\(\\xi\\). * Matrix \\(B\\) gives the probability of a word appearing in a topic, \\(p(w \| t)\\). * The topics mixture is given by \\(\\theta\\). * The joint distribution over \\(K\\) topics and \\(K\\) words for a topic mixture is given by \\\[ p(\\theta, {\\bf t}, {\\bf w}) \= p(\\theta \| \\alpha) \\prod\_{l\=1}^K p(t\_l \| \\theta) p(w\_l \| t\_l) \\] * The marginal distribution for a document’s words comes from integrating out the topic mixture \\(\\theta\\), and summing out the topics \\({\\bf t}\\), i.e., \\\[ p({\\bf w}) \= \\int p(\\theta \| \\alpha) \\left(\\prod\_{l\=1}^K \\sum\_{t\_l} p(t\_l \| \\theta) p(w\_l \| t\_l)\\; \\right) d\\theta \\] ### 7\.51\.5 Likelihood of the entire Corpus * This is given by: \\\[ p(D) \= \\prod\_{j\=1}^M \\int p(\\theta\_j \| \\alpha) \\left(\\prod\_{l\=1}^K \\sum\_{t\_{jl}} p(t\_l \| \\theta\_j) p(w\_l \| t\_l)\\; \\right) d\\theta\_j \\] * The goal is to maximize this likelihood by picking the vector \\(\\alpha\\) and the probabilities in the matrix \\(B\\). (Note that were a Dirichlet distribution not used, then we could directly pick values in Matrices \\(A\\) and \\(B\\).) * The computation is undertaken using MCMC with Gibbs sampling as shown in the example earlier. ### 7\.51\.6 Examples in Finance ### 7\.51\.7 word2vec (explained) For more details, see: [https://www.quora.com/How\-does\-word2vec\-work](https://www.quora.com/How-does-word2vec-work) **A geometrical interpretation**: word2vec is a shallow word embedding model. This means that the model learns to map each discrete word id (0 through the number of words in the vocabulary) into a low\-dimensional continuous vector\-space from their distributional properties observed in some raw text corpus. Geometrically, one may interpret these vectors as tracing out points on the outside surface of a manifold in the “embedded space”. If we initialize these vectors from a spherical gaussian distribution, then you can imagine this manifold to look something like a hypersphere initially. Let us focus on the CBOW for now. CBOW is trained to predict the target word t from the contextual words that surround it, c, i.e. the goal is to maximize P(t \| c) over the training set. I am simplifying somewhat, but you can show that this probability is roughly inversely proportional to the distance between the current vectors assigned to t and to c. Since this model is trained in an online setting (one example at a time), at time T the goal is therefore to take a small step (mediated by the “learning rate”) in order to minimize the distance between the current vectors for t and c (and thereby increase the probability P(t \|c)). By repeating this process over the entire training set, we have that vectors for words that habitually co\-occur tend to be nudged closer together, and by gradually lowering the learning rate, this process converges towards some final state of the vectors. By the Distributional Hypothesis (Firth, 1957; see also the Wikipedia page on Distributional semantics), words with similar distributional properties (i.e. 
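To make the “nudging” intuition above concrete, here is a toy sketch in base R. It is only a caricature of CBOW training (real word2vec optimizes a softmax or negative\-sampling objective and updates the context vectors as well), and the vocabulary, embedding dimension, and learning rate below are invented purely for illustration.

```
set.seed(42)
K = 10                                    # embedding dimension (illustrative)
vocab = c("citizens", "of", "paris", "protested", "today")

# Random initial word vectors, one row per word (the "spherical gaussian" start)
W = matrix(rnorm(length(vocab) * K), nrow = length(vocab),
           dimnames = list(vocab, NULL))

cosine = function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))

# One training pair: the context words are used to predict the target "paris"
target  = "paris"
context = c("citizens", "of", "protested", "today")
lr      = 0.1                             # learning rate

print(cosine(W[target, ], colMeans(W[context, ])))    # similarity before training
for (step in 1:50) {
  h = colMeans(W[context, ])                          # CBOW-style average of the context vectors
  W[target, ] = W[target, ] + lr * (h - W[target, ])  # nudge the target toward the context
}
print(cosine(W[target, ], colMeans(W[context, ])))    # similarity after (close to 1)
```

Because only the target vector moves in this toy version, the loop simply converges to the context average; the point is just to see how a sequence of small, local updates raises the similarity of co\-occurring words.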
7\.52 End Note! --------------- Biblio at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf>
Machine Learning
srdas.github.io
https://srdas.github.io/MLBook/Text2Vec.html
Chapter 8 Much More: Word Embeddings ==================================== 8\.1 Word Embeddings with *text2vec* ------------------------------------ See the original vignette from which this is abstracted. [https://cran.r\-project.org/web/packages/text2vec/vignettes/text\-vectorization.html](https://cran.r-project.org/web/packages/text2vec/vignettes/text-vectorization.html) ``` suppressMessages(library(text2vec)) ``` 8\.2 How to process data quickly using *text2vec* ------------------------------------------------- ### 8\.2\.1 Read in the provided data. ``` suppressMessages(library(data.table)) data("movie_review") setDT(movie_review) setkey(movie_review, id) set.seed(2016L) all_ids = movie_review$id train_ids = sample(all_ids, 4000) test_ids = setdiff(all_ids, train_ids) train = movie_review[J(train_ids)] test = movie_review[J(test_ids)] print(head(train)) ``` ``` ## id sentiment ## 1: 11912_2 0 ## 2: 11507_10 1 ## 3: 8194_9 1 ## 4: 11426_10 1 ## 5: 4043_3 0 ## 6: 11287_3 0 ## review ## 1: The story behind this movie is very interesting, and in general the plot is not so bad... but the details: writing, directing, continuity, pacing, action sequences, stunts, and use of CG all cheapen and spoil the film.<br /><br />First off, action sequences. They are all quite unexciting. Most consist of someone standing up and getting shot, making no attempt to run, fight, dodge, or whatever, even though they have all the time in the world. The sequences just seem bland for something made in 2004.<br /><br />The CG features very nicely rendered and animated effects, but they come off looking cheap because of how they are used.<br /><br />Pacing: everything happens too quickly. For example, \\"Elle\\" is trained to fight in a couple of hours, and from the start can do back-flips, etc. Why is she so acrobatic? None of this is explained in the movie. As Lilith, she wouldn't have needed to be able to do back flips - maybe she couldn't, since she had wings.<br /><br />Also, we have sequences like a woman getting run over by a car, and getting up and just wandering off into a deserted room with a sink and mirror, and then stabbing herself in the throat, all for no apparent reason, and without any of the spectators really caring that she just got hit by a car (and then felt the secondary effects of another, exploding car)... \\"Are you okay?\\" asks the driver \\"yes, I'm fine\\" she says, bloodied and disheveled.<br /><br />I watched it all, though, because the introduction promised me that it would be interesting... but in the end, the poor execution made me wish for anything else: Blade, Vampire Hunter D, even that movie with vampires where Jackie Chan was comic relief, because they managed to suspend my disbelief, but this just made me want to shake the director awake, and give the writer a good talking to. ## 2: I remember the original series vividly mostly due to it's unique blend of wry humor and macabre subject matter. Kolchak was hard-bitten newsman from the Ben Hecht school of big-city reporting, and his gritty determination and wise-ass demeanor made even the most mundane episode eminently watchable. My personal fave was \\"The Spanish Moss Murders\\" due to it's totally original storyline. A poor,troubled Cajun youth from Louisiana bayou country, takes part in a sleep research experiment, for the purpose of dream analysis. Something goes inexplicably wrong, and he literally dreams to life a swamp creature inhabiting the dark folk tales of his youth. 
This malevolent manifestation seeks out all persons who have wronged the dreamer in his conscious state, and brutally suffocates them to death. Kolchak investigates and uncovers this horrible truth, much to the chagrin of police captain Joe \\"Mad Dog\\" Siska(wonderfully essayed by a grumpy Keenan Wynn)and the head sleep researcher played by Second City improv founder, Severn Darden, to droll, understated perfection. The wickedly funny, harrowing finale takes place in the Chicago sewer system, and is a series highlight. Kolchak never got any better. Timeless. ## 3: Despite the other comments listed here, this is probably the best Dirty Harry movie made; a film that reflects -- for better or worse -- the country's socio-political feelings during the Reagan glory years of the early '80's. It's also a kickass action movie.<br /><br />Opening with a liberal, female judge overturning a murder case due to lack of tangible evidence and then going straight into the coffee shop encounter with several unfortunate hoodlums (the scene which prompts the famous, \\"Go ahead, make my day\\" line), \\"Sudden Impact\\" is one non-stop roller coaster of an action film. The first time you get to catch your breath is when the troublesome Inspector Callahan is sent away to a nearby city to investigate the background of a murdered hood. It gets only better from there with an over-the-top group of grotesque thugs for Callahan to deal with along with a sherriff with a mysterious past. Superb direction and photography and a at-times hilarious script help make this film one of the best of the '80's. ## 4: I think this movie would be more enjoyable if everyone thought of it as a picture of colonial Africa in the 50's and 60's rather than as a story. Because there is no real story here. Just one vignette on top of another like little points of light that don't mean much until you have enough to paint a picture. The first time I saw Chocolat I didn't really \\"get it\\" until having thought about it for a few days. Then I realized there were lots of things to \\"get\\", including the end of colonialism which was but around the corner, just no plot. Anyway, it's one of my all-time favorite movies. The scene at the airport with the brief shower and beautiful music was sheer poetry. If you like \\"exciting\\" movies, don't watch this--you'll be bored to tears. But, for some of you..., you can thank me later for recommending it to you. ## 5: The film begins with promise, but lingers too long in a sepia world of distance and alienation. We are left hanging, but with nothing much else save languid shots of grave and pensive male faces to savour. Certainly no rope up the wall to help us climb over. It's a shame, because the concept is not without merit.<br /><br />We are left wondering why a loving couple - a father and son no less - should be so estranged from the real world that their own world is preferable when claustrophobic beyond all imagining. This loss of presence in the real world is, rather too obviously and unnecessarily, contrasted with the son having enlisted in the armed forces. Why not the circus, so we can at least appreciate some colour? We are left with a gnawing sense of loss, but sadly no enlightenment, which is bewildering given the film is apparently about some form of attainment not available to us all. ## 6: This is a film that had a lot to live down to . 
on the year of its release legendary film critic Barry Norman considered it the worst film of the year and I'd heard nothing but bad things about it especially a plot that was criticised for being too complicated <br /><br />To be honest the plot is something of a red herring and the film suffers even more when the word \\" plot \\" is used because as far as I can see there is no plot as such . There's something involving Russian gangsters , a character called Pete Thompson who's trying to get his wife Sarah pregnant , and an Irish bloke called Sean . How they all fit into something called a \\" plot \\" I'm not sure . It's difficult to explain the plots of Guy Ritchie films but if you watch any of his films I'm sure we can all agree that they all posses one no matter how complicated they may seem on first viewing . Likewise a James Bond film though the plots are stretched out with action scenes . You will have a serious problem believing RANCID ALUMINIUM has any type of central plot that can be cogently explained <br /><br />Taking a look at the cast list will ring enough warning bells as to what sort of film you'll be watching . Sadie Frost has appeared in some of the worst British films made in the last 15 years and she's doing nothing to become inconsistent . Steven Berkoff gives acting a bad name ( and he plays a character called Kant which sums up the wit of this movie ) while one of the supporting characters is played by a TV presenter presumably because no serious actress would be seen dead in this <br /><br />The only good thing I can say about this movie is that it's utterly forgettable . I saw it a few days ago and immediately after watching I was going to write a very long a critical review warning people what they are letting themselves in for by watching , but by now I've mainly forgotten why . But this doesn't alter the fact that I remember disliking this piece of crap immensely ``` The processing steps are: 1. Lower case the documents and then tokenize them. 2. Create an iterator. (Step 1 can also be done while making the iterator, as the *itoken* function supports this, see below.) 3. Use the iterator to create the vocabulary, which is nothing but the list of unique words across all documents. 4. Vectorize the vocabulary, i.e., create a data structure of words that can be used later for matrix factorizations needed for various text analytics. 5. Using the iterator and vectorized vocabulary, form text matrices, such as the Document\-Term Matrix (DTM) or the Term Co\-occurrence Matrix (TCM). 6. Use the TCM or DTM to undertake various text analytics such as classification, word2vec, topic modeling using LDA (Latent Dirichlet Allocation), and LSA (Latent Semantic Analysis). 8\.3 Preprocessing and Tokenization ----------------------------------- ``` prep_fun = tolower tok_fun = word_tokenizer #Create an iterator to pass to the create_vocabulary function it_train = itoken(train$review, preprocessor = prep_fun, tokenizer = tok_fun, ids = train$id, progressbar = FALSE) #Now create a vocabulary vocab = create_vocabulary(it_train) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... ## ngram_min = 1; ngram_max = 1 ## Vocabulary: ## terms terms_counts doc_counts ## 1: overturned 1 1 ## 2: disintegration 1 1 ## 3: vachon 1 1 ## 4: interfered 1 1 ## 5: michonoku 1 1 ## --- ## 35592: penises 2 2 ## 35593: arabian 1 1 ## 35594: personal 102 94 ## 35595: end 921 743 ## 35596: address 10 10 ``` 8\.4 Iterator ------------- An iterator is an object that traverses a container. 
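For a concrete (if minimal) illustration, the iterators package, which is not used elsewhere in this chapter and is assumed to be installed, wraps an ordinary R vector in an iterator object that hands back one element per call:

```
library(iterators)   # assumed installed; separate from the text2vec functions used in this chapter

it = iter(c("this", "movie", "was", "great"))  # wrap a vector in an iterator
nextElem(it)   # returns "this"
nextElem(it)   # returns "movie"
# Each call advances the traversal; once the vector is exhausted,
# nextElem() signals a 'StopIteration' condition.
```

In this chapter, *itoken* plays that role: it returns an iterator over the tokenized reviews, which create\_vocabulary and create\_dtm then traverse.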
A list is iterable. See: [https://www.r\-bloggers.com/iterators\-in\-r/](https://www.r-bloggers.com/iterators-in-r/) 8\.5 Vectorize -------------- ``` vectorizer = vocab_vectorizer(vocab) ``` 8\.6 Document Term Matrix (DTM) ------------------------------- ``` dtm_train = create_dtm(it_train, vectorizer) print(dim(as.matrix(dtm_train))) ``` ``` ## [1] 4000 35596 ``` 8\.7 N\-Grams ------------- n\-grams are phrases made by coupling words that co\-occur. For example, a bi\-gram is a set of two consecutive words. ``` vocab = create_vocabulary(it_train, ngram = c(1, 2)) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... ## ngram_min = 1; ngram_max = 2 ## Vocabulary: ## terms terms_counts doc_counts ## 1: bad_characterization 1 1 ## 2: few_step 1 1 ## 3: also_took 1 1 ## 4: in_graphics 1 1 ## 5: like_poke 1 1 ## --- ## 397499: original_uncut 1 1 ## 397500: settle_his 2 2 ## 397501: first_blood 2 1 ## 397502: occasional_at 1 1 ## 397503: the_brothers 14 14 ``` This creates a vocabulary of both single words and bi\-grams. Notice how large it is compared to the unigram vocabulary from earlier. Because of this we go ahead and prune the vocabulary first, as this will speed up computation. ### 8\.7\.1 Redo classification with n\-grams. ``` library(glmnet) ``` ``` ## Loading required package: Matrix ``` ``` ## Loading required package: foreach ``` ``` ## Loaded glmnet 2.0-5 ``` ``` NFOLDS = 5 vocab = vocab %>% prune_vocabulary(term_count_min = 10, doc_proportion_max = 0.5) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... ## ngram_min = 1; ngram_max = 2 ## Vocabulary: ## terms terms_counts doc_counts ## 1: morvern 14 1 ## 2: race_films 10 1 ## 3: bazza 11 1 ## 4: thunderbirds 10 1 ## 5: mary_lou 21 1 ## --- ## 17866: br_also 36 36 ## 17867: a_better 96 89 ## 17868: tourists 10 10 ## 17869: in_each 14 14 ## 17870: the_brothers 14 14 ``` ``` bigram_vectorizer = vocab_vectorizer(vocab) dtm_train = create_dtm(it_train, bigram_vectorizer) res = cv.glmnet(x = dtm_train, y = train[['sentiment']], family = 'binomial', alpha = 1, type.measure = "auc", nfolds = NFOLDS, thresh = 1e-3, maxit = 1e3) plot(res) ``` ``` print(names(res)) ``` ``` ## [1] "lambda" "cvm" "cvsd" "cvup" "cvlo" ## [6] "nzero" "name" "glmnet.fit" "lambda.min" "lambda.1se" ``` ``` #AUC (area under curve) print(max(res$cvm)) ``` ``` ## [1] 0.9267776 ``` ### 8\.7\.2 Out\-of\-sample test ``` it_test = test$review %>% prep_fun %>% tok_fun %>% itoken(ids = test$id, # turn off progressbar because it won't look nice in rmd progressbar = FALSE) dtm_test = create_dtm(it_test, bigram_vectorizer) preds = predict(res, dtm_test, type = 'response')[,1] glmnet:::auc(test$sentiment, preds) ``` ``` ## [1] 0.9309295 ``` 8\.8 TF\-IDF ------------ We have seen the TF\-IDF discussion earlier, and here we see how to implement it using the *text2vec* package. ``` vocab = create_vocabulary(it_train) vectorizer = vocab_vectorizer(vocab) dtm_train = create_dtm(it_train, vectorizer) tfidf = TfIdf$new() dtm_train_tfidf = fit_transform(dtm_train, tfidf) dtm_test_tfidf = create_dtm(it_test, vectorizer) %>% transform(tfidf) ``` Now we take the TF\-IDF adjusted DTM and run the classifier. 
8\.9 Refit classifier --------------------- ``` res = cv.glmnet(x = dtm_train_tfidf, y = train[['sentiment']], family = 'binomial', alpha = 1, type.measure = "auc", nfolds = NFOLDS, thresh = 1e-3, maxit = 1e3) print(paste("max AUC =", round(max(res$cvm), 4))) ``` ``` ## [1] "max AUC = 0.9115" ``` ``` #Test on hold-out sample preds = predict(res, dtm_test_tfidf, type = 'response')[,1] glmnet:::auc(test$sentiment, preds) ``` ``` ## [1] 0.9039965 ``` 8\.10 Embeddings (word2vec) --------------------------- From: [http://stackoverflow.com/questions/39514941/preparing\-word\-embeddings\-in\-text2vec\-r\-package](http://stackoverflow.com/questions/39514941/preparing-word-embeddings-in-text2vec-r-package) Do the entire creation of the TCM (Term Co\-occurrence Matrix) ``` suppressMessages(library(magrittr)) suppressMessages(library(text2vec)) data("movie_review") tokens = movie_review$review %>% tolower %>% word_tokenizer() it = itoken(tokens) v = create_vocabulary(it) %>% prune_vocabulary(term_count_min=10) vectorizer = vocab_vectorizer(v, grow_dtm = FALSE, skip_grams_window = 5) tcm = create_tcm(it, vectorizer) print(dim(tcm)) ``` ``` ## [1] 7797 7797 ``` Now fit the word embeddings using GloVe See: <http://nlp.stanford.edu/projects/glove/> ``` model = GlobalVectors$new(word_vectors_size=50, vocabulary=v, x_max=10, learning_rate=0.20) model$fit(tcm,n_iter=25) ``` ``` ## 2017-03-24 11:41:55 - epoch 1, expected cost 0.0820 ``` ``` ## 2017-03-24 11:41:55 - epoch 2, expected cost 0.0508 ``` ``` ## 2017-03-24 11:41:56 - epoch 3, expected cost 0.0433 ``` ``` ## 2017-03-24 11:41:56 - epoch 4, expected cost 0.0390 ``` ``` ## 2017-03-24 11:41:56 - epoch 5, expected cost 0.0359 ``` ``` ## 2017-03-24 11:41:56 - epoch 6, expected cost 0.0337 ``` ``` ## 2017-03-24 11:41:56 - epoch 7, expected cost 0.0321 ``` ``` ## 2017-03-24 11:41:57 - epoch 8, expected cost 0.0307 ``` ``` ## 2017-03-24 11:41:57 - epoch 9, expected cost 0.0296 ``` ``` ## 2017-03-24 11:41:57 - epoch 10, expected cost 0.0288 ``` ``` ## 2017-03-24 11:41:57 - epoch 11, expected cost 0.0281 ``` ``` ## 2017-03-24 11:41:57 - epoch 12, expected cost 0.0275 ``` ``` ## 2017-03-24 11:41:58 - epoch 13, expected cost 0.0269 ``` ``` ## 2017-03-24 11:41:58 - epoch 14, expected cost 0.0264 ``` ``` ## 2017-03-24 11:41:58 - epoch 15, expected cost 0.0260 ``` ``` ## 2017-03-24 11:41:58 - epoch 16, expected cost 0.0257 ``` ``` ## 2017-03-24 11:41:59 - epoch 17, expected cost 0.0253 ``` ``` ## 2017-03-24 11:41:59 - epoch 18, expected cost 0.0251 ``` ``` ## 2017-03-24 11:41:59 - epoch 19, expected cost 0.0248 ``` ``` ## 2017-03-24 11:41:59 - epoch 20, expected cost 0.0246 ``` ``` ## 2017-03-24 11:41:59 - epoch 21, expected cost 0.0243 ``` ``` ## 2017-03-24 11:42:00 - epoch 22, expected cost 0.0242 ``` ``` ## 2017-03-24 11:42:00 - epoch 23, expected cost 0.0240 ``` ``` ## 2017-03-24 11:42:00 - epoch 24, expected cost 0.0238 ``` ``` ## 2017-03-24 11:42:00 - epoch 25, expected cost 0.0236 ``` ``` wv = model$get_word_vectors() #Dimension words x wvec_size ``` 8\.11 Distance between words (or find close words) -------------------------------------------------- ``` #Make distance matrix d = dist2(wv, method="cosine") #Smaller values means closer print(dim(d)) ``` ``` ## [1] 7797 7797 ``` ``` #Pass: w=word, d=dist matrix, n=nomber of close words findCloseWords = function(w,d,n) { words = rownames(d) i = which(words==w) if (length(i) > 0) { res = sort(d[i,]) print(as.matrix(res[2:(n+1)])) } else { print("Word not in corpus.") } } ``` Example: Show the ten words close to 
the word “man” and “woman”. ``` findCloseWords("man",d,10) ``` ``` ## [,1] ## woman 0.2009660 ## girl 0.2371918 ## guy 0.2802020 ## who 0.3009101 ## young 0.3341396 ## person 0.3397372 ## boy 0.3733406 ## hit 0.3953263 ## old 0.4037096 ## he 0.4111968 ``` ``` findCloseWords("woman",d,10) ``` ``` ## [,1] ## young 0.1754151 ## man 0.2009660 ## girl 0.2546709 ## boy 0.2981061 ## who 0.3186094 ## guy 0.3222383 ## named 0.3372591 ## kid 0.3728761 ## child 0.3759926 ## doctor 0.3941979 ``` This is a very useful feature of word embeddings, as it is often argued that in the embedded space, words that are close to each other, also tend to have semantic similarities, even though the closeness is computed simply by using their co\-occurence frequencies. 8\.12 word2vec (explained) -------------------------- For more details, see: [https://www.quora.com/How\-does\-word2vec\-work](https://www.quora.com/How-does-word2vec-work) **A geometrical interpretation**: word2vec is a shallow word embedding model. This means that the model learns to map each discrete word id (0 through the number of words in the vocabulary) into a low\-dimensional continuous vector\-space from their distributional properties observed in some raw text corpus. Geometrically, one may interpret these vectors as tracing out points on the outside surface of a manifold in the “embedded space”. If we initialize these vectors from a spherical gaussian distribution, then you can imagine this manifold to look something like a hypersphere initially. Let us focus on the CBOW for now. CBOW is trained to predict the target word t from the contextual words that surround it, c, i.e. the goal is to maximize P(t \| c) over the training set. I am simplifying somewhat, but you can show that this probability is roughly inversely proportional to the distance between the current vectors assigned to t and to c. Since this model is trained in an online setting (one example at a time), at time T the goal is therefore to take a small step (mediated by the “learning rate”) in order to minimize the distance between the current vectors for t and c (and thereby increase the probability P(t \|c)). By repeating this process over the entire training set, we have that vectors for words that habitually co\-occur tend to be nudged closer together, and by gradually lowering the learning rate, this process converges towards some final state of the vectors. By the Distributional Hypothesis (Firth, 1957; see also the Wikipedia page on Distributional semantics), words with similar distributional properties (i.e. that co\-occur regularly) tend to share some aspect of semantic meaning. For example, we may find several sentences in the training set such as “citizens of X protested today” where X (the target word t) may be names of cities or countries that are semantically related. You can therefore interpret each training step as deforming or morphing the initial manifold by nudging the vectors for some words somewhat closer together, and the result, after projecting down to two dimensions, is the familiar t\-SNE visualizations where related words cluster together (e.g. Word representations for NLP). For the skipgram, the direction of the prediction is simply inverted, i.e. now we try to predict P(citizens \| X), P(of \| X), etc. This turns out to learn finer\-grained vectors when one trains over more data. The main reason is that the CBOW smooths over a lot of the distributional statistics by averaging over all context words while the skipgram does not. 
With little data, this “regularizing” effect of the CBOW turns out to be helpful, but since data is the ultimate regularizer the skipgram is able to extract more information when more data is available. There’s a bit more going on behind the scenes, but hopefully this helps to give a useful geometrical intuition as to how these models work. 8\.13 Topic Analysis -------------------- Uses Latent Dirichlet Allocation. ``` suppressMessages(library(tm)) suppressMessages(library(text2vec)) stopw = stopwords('en') stopw = c(stopw,"br","t","s","m","ve","2","d","1") #Make DTM data("movie_review") tokens = movie_review$review %>% tolower %>% word_tokenizer() it = itoken(tokens) v = create_vocabulary(it, stopwords = stopw) %>% prune_vocabulary(term_count_min=5) vectrzr = vocab_vectorizer(v, grow_dtm = TRUE, skip_grams_window = 5) dtm = create_dtm(it, vectrzr) print(dim(dtm)) ``` ``` ## [1] 5000 12733 ``` ``` #Do LDA lda = LatentDirichletAllocation$new(n_topics=5, v) lda$fit(dtm,n_iter = 25) doc_topics = lda$fit_transform(dtm,n_iter = 25) print(dim(doc_topics)) ``` ``` ## [1] 5000 5 ``` ``` #Get word vectors by topic topic_wv = lda$get_word_vectors() print(dim(topic_wv)) ``` ``` ## [1] 12733 5 ``` ``` #Plot LDA suppressMessages(library(LDAvis)) lda$plot() ``` ``` ## Loading required namespace: servr ``` This produces a terrific interactive plot. 8\.14 Latent Semantic Analysis (LSA) ------------------------------------ ``` lsa = LatentSemanticAnalysis$new(n_topics = 5) res = lsa$fit_transform(dtm) print(dim(res)) ``` ``` ## [1] 5000 5 ``` Biblio at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf>
As Lilith, she wouldn't have needed to be able to do back flips - maybe she couldn't, since she had wings.<br /><br />Also, we have sequences like a woman getting run over by a car, and getting up and just wandering off into a deserted room with a sink and mirror, and then stabbing herself in the throat, all for no apparent reason, and without any of the spectators really caring that she just got hit by a car (and then felt the secondary effects of another, exploding car)... \\"Are you okay?\\" asks the driver \\"yes, I'm fine\\" she says, bloodied and disheveled.<br /><br />I watched it all, though, because the introduction promised me that it would be interesting... but in the end, the poor execution made me wish for anything else: Blade, Vampire Hunter D, even that movie with vampires where Jackie Chan was comic relief, because they managed to suspend my disbelief, but this just made me want to shake the director awake, and give the writer a good talking to. ## 2: I remember the original series vividly mostly due to it's unique blend of wry humor and macabre subject matter. Kolchak was hard-bitten newsman from the Ben Hecht school of big-city reporting, and his gritty determination and wise-ass demeanor made even the most mundane episode eminently watchable. My personal fave was \\"The Spanish Moss Murders\\" due to it's totally original storyline. A poor,troubled Cajun youth from Louisiana bayou country, takes part in a sleep research experiment, for the purpose of dream analysis. Something goes inexplicably wrong, and he literally dreams to life a swamp creature inhabiting the dark folk tales of his youth. This malevolent manifestation seeks out all persons who have wronged the dreamer in his conscious state, and brutally suffocates them to death. Kolchak investigates and uncovers this horrible truth, much to the chagrin of police captain Joe \\"Mad Dog\\" Siska(wonderfully essayed by a grumpy Keenan Wynn)and the head sleep researcher played by Second City improv founder, Severn Darden, to droll, understated perfection. The wickedly funny, harrowing finale takes place in the Chicago sewer system, and is a series highlight. Kolchak never got any better. Timeless. ## 3: Despite the other comments listed here, this is probably the best Dirty Harry movie made; a film that reflects -- for better or worse -- the country's socio-political feelings during the Reagan glory years of the early '80's. It's also a kickass action movie.<br /><br />Opening with a liberal, female judge overturning a murder case due to lack of tangible evidence and then going straight into the coffee shop encounter with several unfortunate hoodlums (the scene which prompts the famous, \\"Go ahead, make my day\\" line), \\"Sudden Impact\\" is one non-stop roller coaster of an action film. The first time you get to catch your breath is when the troublesome Inspector Callahan is sent away to a nearby city to investigate the background of a murdered hood. It gets only better from there with an over-the-top group of grotesque thugs for Callahan to deal with along with a sherriff with a mysterious past. Superb direction and photography and a at-times hilarious script help make this film one of the best of the '80's. ## 4: I think this movie would be more enjoyable if everyone thought of it as a picture of colonial Africa in the 50's and 60's rather than as a story. Because there is no real story here. 
Just one vignette on top of another like little points of light that don't mean much until you have enough to paint a picture. The first time I saw Chocolat I didn't really \\"get it\\" until having thought about it for a few days. Then I realized there were lots of things to \\"get\\", including the end of colonialism which was but around the corner, just no plot. Anyway, it's one of my all-time favorite movies. The scene at the airport with the brief shower and beautiful music was sheer poetry. If you like \\"exciting\\" movies, don't watch this--you'll be bored to tears. But, for some of you..., you can thank me later for recommending it to you. ## 5: The film begins with promise, but lingers too long in a sepia world of distance and alienation. We are left hanging, but with nothing much else save languid shots of grave and pensive male faces to savour. Certainly no rope up the wall to help us climb over. It's a shame, because the concept is not without merit.<br /><br />We are left wondering why a loving couple - a father and son no less - should be so estranged from the real world that their own world is preferable when claustrophobic beyond all imagining. This loss of presence in the real world is, rather too obviously and unnecessarily, contrasted with the son having enlisted in the armed forces. Why not the circus, so we can at least appreciate some colour? We are left with a gnawing sense of loss, but sadly no enlightenment, which is bewildering given the film is apparently about some form of attainment not available to us all. ## 6: This is a film that had a lot to live down to . on the year of its release legendary film critic Barry Norman considered it the worst film of the year and I'd heard nothing but bad things about it especially a plot that was criticised for being too complicated <br /><br />To be honest the plot is something of a red herring and the film suffers even more when the word \\" plot \\" is used because as far as I can see there is no plot as such . There's something involving Russian gangsters , a character called Pete Thompson who's trying to get his wife Sarah pregnant , and an Irish bloke called Sean . How they all fit into something called a \\" plot \\" I'm not sure . It's difficult to explain the plots of Guy Ritchie films but if you watch any of his films I'm sure we can all agree that they all posses one no matter how complicated they may seem on first viewing . Likewise a James Bond film though the plots are stretched out with action scenes . You will have a serious problem believing RANCID ALUMINIUM has any type of central plot that can be cogently explained <br /><br />Taking a look at the cast list will ring enough warning bells as to what sort of film you'll be watching . Sadie Frost has appeared in some of the worst British films made in the last 15 years and she's doing nothing to become inconsistent . Steven Berkoff gives acting a bad name ( and he plays a character called Kant which sums up the wit of this movie ) while one of the supporting characters is played by a TV presenter presumably because no serious actress would be seen dead in this <br /><br />The only good thing I can say about this movie is that it's utterly forgettable . I saw it a few days ago and immediately after watching I was going to write a very long a critical review warning people what they are letting themselves in for by watching , but by now I've mainly forgotten why . 
But this doesn't alter the fact that I remember disliking this piece of crap immensely ``` The processing steps are: 1. Lower case the documents and then tokenize them. 2. Create an iterator. (Step 1 can also be done while making the iterator, as the *itoken* function supports this, see below.) 3. Use the iterator to create the vocabulary, which is nothing but the list of unique words across all documents. 4. Vectorize the vocabulary, i.e., create a data structure of words that can be used later for matrix factorizations needed for various text analytics. 5. Using the iterator and vectorized vocabulary, form text matrices, such as the Document\-Term Matrix (DTM) or the Term Co\-occurrence Matrix (TCM). 6. Use the TCM or DTM to undertake various text analytics such as classification, word2vec, topic modeling using LDA (Latent Dirichlet Allocation), and LSA (Latent Semantic Analysis). ### 8\.2\.1 Read in the provided data. ``` suppressMessages(library(data.table)) data("movie_review") setDT(movie_review) setkey(movie_review, id) set.seed(2016L) all_ids = movie_review$id train_ids = sample(all_ids, 4000) test_ids = setdiff(all_ids, train_ids) train = movie_review[J(train_ids)] test = movie_review[J(test_ids)] print(head(train)) ``` ``` ## id sentiment ## 1: 11912_2 0 ## 2: 11507_10 1 ## 3: 8194_9 1 ## 4: 11426_10 1 ## 5: 4043_3 0 ## 6: 11287_3 0 ## review ## 1: The story behind this movie is very interesting, and in general the plot is not so bad... but the details: writing, directing, continuity, pacing, action sequences, stunts, and use of CG all cheapen and spoil the film.<br /><br />First off, action sequences. They are all quite unexciting. Most consist of someone standing up and getting shot, making no attempt to run, fight, dodge, or whatever, even though they have all the time in the world. The sequences just seem bland for something made in 2004.<br /><br />The CG features very nicely rendered and animated effects, but they come off looking cheap because of how they are used.<br /><br />Pacing: everything happens too quickly. For example, \\"Elle\\" is trained to fight in a couple of hours, and from the start can do back-flips, etc. Why is she so acrobatic? None of this is explained in the movie. As Lilith, she wouldn't have needed to be able to do back flips - maybe she couldn't, since she had wings.<br /><br />Also, we have sequences like a woman getting run over by a car, and getting up and just wandering off into a deserted room with a sink and mirror, and then stabbing herself in the throat, all for no apparent reason, and without any of the spectators really caring that she just got hit by a car (and then felt the secondary effects of another, exploding car)... \\"Are you okay?\\" asks the driver \\"yes, I'm fine\\" she says, bloodied and disheveled.<br /><br />I watched it all, though, because the introduction promised me that it would be interesting... but in the end, the poor execution made me wish for anything else: Blade, Vampire Hunter D, even that movie with vampires where Jackie Chan was comic relief, because they managed to suspend my disbelief, but this just made me want to shake the director awake, and give the writer a good talking to. ## 2: I remember the original series vividly mostly due to it's unique blend of wry humor and macabre subject matter. Kolchak was hard-bitten newsman from the Ben Hecht school of big-city reporting, and his gritty determination and wise-ass demeanor made even the most mundane episode eminently watchable. 
My personal fave was \\"The Spanish Moss Murders\\" due to it's totally original storyline. A poor,troubled Cajun youth from Louisiana bayou country, takes part in a sleep research experiment, for the purpose of dream analysis. Something goes inexplicably wrong, and he literally dreams to life a swamp creature inhabiting the dark folk tales of his youth. This malevolent manifestation seeks out all persons who have wronged the dreamer in his conscious state, and brutally suffocates them to death. Kolchak investigates and uncovers this horrible truth, much to the chagrin of police captain Joe \\"Mad Dog\\" Siska(wonderfully essayed by a grumpy Keenan Wynn)and the head sleep researcher played by Second City improv founder, Severn Darden, to droll, understated perfection. The wickedly funny, harrowing finale takes place in the Chicago sewer system, and is a series highlight. Kolchak never got any better. Timeless. ## 3: Despite the other comments listed here, this is probably the best Dirty Harry movie made; a film that reflects -- for better or worse -- the country's socio-political feelings during the Reagan glory years of the early '80's. It's also a kickass action movie.<br /><br />Opening with a liberal, female judge overturning a murder case due to lack of tangible evidence and then going straight into the coffee shop encounter with several unfortunate hoodlums (the scene which prompts the famous, \\"Go ahead, make my day\\" line), \\"Sudden Impact\\" is one non-stop roller coaster of an action film. The first time you get to catch your breath is when the troublesome Inspector Callahan is sent away to a nearby city to investigate the background of a murdered hood. It gets only better from there with an over-the-top group of grotesque thugs for Callahan to deal with along with a sherriff with a mysterious past. Superb direction and photography and a at-times hilarious script help make this film one of the best of the '80's. ## 4: I think this movie would be more enjoyable if everyone thought of it as a picture of colonial Africa in the 50's and 60's rather than as a story. Because there is no real story here. Just one vignette on top of another like little points of light that don't mean much until you have enough to paint a picture. The first time I saw Chocolat I didn't really \\"get it\\" until having thought about it for a few days. Then I realized there were lots of things to \\"get\\", including the end of colonialism which was but around the corner, just no plot. Anyway, it's one of my all-time favorite movies. The scene at the airport with the brief shower and beautiful music was sheer poetry. If you like \\"exciting\\" movies, don't watch this--you'll be bored to tears. But, for some of you..., you can thank me later for recommending it to you. ## 5: The film begins with promise, but lingers too long in a sepia world of distance and alienation. We are left hanging, but with nothing much else save languid shots of grave and pensive male faces to savour. Certainly no rope up the wall to help us climb over. It's a shame, because the concept is not without merit.<br /><br />We are left wondering why a loving couple - a father and son no less - should be so estranged from the real world that their own world is preferable when claustrophobic beyond all imagining. This loss of presence in the real world is, rather too obviously and unnecessarily, contrasted with the son having enlisted in the armed forces. Why not the circus, so we can at least appreciate some colour? 
We are left with a gnawing sense of loss, but sadly no enlightenment, which is bewildering given the film is apparently about some form of attainment not available to us all. ## 6: This is a film that had a lot to live down to . on the year of its release legendary film critic Barry Norman considered it the worst film of the year and I'd heard nothing but bad things about it especially a plot that was criticised for being too complicated <br /><br />To be honest the plot is something of a red herring and the film suffers even more when the word \\" plot \\" is used because as far as I can see there is no plot as such . There's something involving Russian gangsters , a character called Pete Thompson who's trying to get his wife Sarah pregnant , and an Irish bloke called Sean . How they all fit into something called a \\" plot \\" I'm not sure . It's difficult to explain the plots of Guy Ritchie films but if you watch any of his films I'm sure we can all agree that they all posses one no matter how complicated they may seem on first viewing . Likewise a James Bond film though the plots are stretched out with action scenes . You will have a serious problem believing RANCID ALUMINIUM has any type of central plot that can be cogently explained <br /><br />Taking a look at the cast list will ring enough warning bells as to what sort of film you'll be watching . Sadie Frost has appeared in some of the worst British films made in the last 15 years and she's doing nothing to become inconsistent . Steven Berkoff gives acting a bad name ( and he plays a character called Kant which sums up the wit of this movie ) while one of the supporting characters is played by a TV presenter presumably because no serious actress would be seen dead in this <br /><br />The only good thing I can say about this movie is that it's utterly forgettable . I saw it a few days ago and immediately after watching I was going to write a very long a critical review warning people what they are letting themselves in for by watching , but by now I've mainly forgotten why . But this doesn't alter the fact that I remember disliking this piece of crap immensely ``` The processing steps are: 1. Lower case the documents and then tokenize them. 2. Create an iterator. (Step 1 can also be done while making the iterator, as the *itoken* function supports this, see below.) 3. Use the iterator to create the vocabulary, which is nothing but the list of unique words across all documents. 4. Vectorize the vocabulary, i.e., create a data structure of words that can be used later for matrix factorizations needed for various text analytics. 5. Using the iterator and vectorized vocabulary, form text matrices, such as the Document\-Term Matrix (DTM) or the Term Co\-occurrence Matrix (TCM). 6. Use the TCM or DTM to undertake various text analytics such as classification, word2vec, topic modeling using LDA (Latent Dirichlet Allocation), and LSA (Latent Semantic Analysis). 8\.3 Preprocessing and Tokenization ----------------------------------- ``` prep_fun = tolower tok_fun = word_tokenizer #Create an iterator to pass to the create_vocabulary function it_train = itoken(train$review, preprocessor = prep_fun, tokenizer = tok_fun, ids = train$id, progressbar = FALSE) #Now create a vocabulary vocab = create_vocabulary(it_train) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... 
## ngram_min = 1; ngram_max = 1 ## Vocabulary: ## terms terms_counts doc_counts ## 1: overturned 1 1 ## 2: disintegration 1 1 ## 3: vachon 1 1 ## 4: interfered 1 1 ## 5: michonoku 1 1 ## --- ## 35592: penises 2 2 ## 35593: arabian 1 1 ## 35594: personal 102 94 ## 35595: end 921 743 ## 35596: address 10 10 ``` 8\.4 Iterator ------------- An iterator is an object that traverses a container. A list is iterable. See: [https://www.r\-bloggers.com/iterators\-in\-r/](https://www.r-bloggers.com/iterators-in-r/) 8\.5 Vectorize -------------- ``` vectorizer = vocab_vectorizer(vocab) ``` 8\.6 Document Term Matrix (DTM) ------------------------------- ``` dtm_train = create_dtm(it_train, vectorizer) print(dim(as.matrix(dtm_train))) ``` ``` ## [1] 4000 35596 ``` 8\.7 N\-Grams ------------- n\-grams are phrases made by coupling words that co\-occur. For example, a bi\-gram is a set of two consecutive words. ``` vocab = create_vocabulary(it_train, ngram = c(1, 2)) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... ## ngram_min = 1; ngram_max = 2 ## Vocabulary: ## terms terms_counts doc_counts ## 1: bad_characterization 1 1 ## 2: few_step 1 1 ## 3: also_took 1 1 ## 4: in_graphics 1 1 ## 5: like_poke 1 1 ## --- ## 397499: original_uncut 1 1 ## 397500: settle_his 2 2 ## 397501: first_blood 2 1 ## 397502: occasional_at 1 1 ## 397503: the_brothers 14 14 ``` This creates a vocabulary of both single words and bi\-grams. Notice how large it is compared to the unigram vocabulary from earlier. Because of this we go ahead and prune the vocabulary first, as this will speed up computation. ### 8\.7\.1 Redo classification with n\-grams. ``` library(glmnet) ``` ``` ## Loading required package: Matrix ``` ``` ## Loading required package: foreach ``` ``` ## Loaded glmnet 2.0-5 ``` ``` NFOLDS = 5 vocab = vocab %>% prune_vocabulary(term_count_min = 10, doc_proportion_max = 0.5) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... ## ngram_min = 1; ngram_max = 2 ## Vocabulary: ## terms terms_counts doc_counts ## 1: morvern 14 1 ## 2: race_films 10 1 ## 3: bazza 11 1 ## 4: thunderbirds 10 1 ## 5: mary_lou 21 1 ## --- ## 17866: br_also 36 36 ## 17867: a_better 96 89 ## 17868: tourists 10 10 ## 17869: in_each 14 14 ## 17870: the_brothers 14 14 ``` ``` bigram_vectorizer = vocab_vectorizer(vocab) dtm_train = create_dtm(it_train, bigram_vectorizer) res = cv.glmnet(x = dtm_train, y = train[['sentiment']], family = 'binomial', alpha = 1, type.measure = "auc", nfolds = NFOLDS, thresh = 1e-3, maxit = 1e3) plot(res) ``` ``` print(names(res)) ``` ``` ## [1] "lambda" "cvm" "cvsd" "cvup" "cvlo" ## [6] "nzero" "name" "glmnet.fit" "lambda.min" "lambda.1se" ``` ``` #AUC (area under curve) print(max(res$cvm)) ``` ``` ## [1] 0.9267776 ``` ### 8\.7\.2 Out\-of\-sample test ``` it_test = test$review %>% prep_fun %>% tok_fun %>% itoken(ids = test$id, # turn off progressbar because it won't look nice in rmd progressbar = FALSE) dtm_test = create_dtm(it_test, bigram_vectorizer) preds = predict(res, dtm_test, type = 'response')[,1] glmnet:::auc(test$sentiment, preds) ``` ``` ## [1] 0.9309295 ``` ### 8\.7\.1 Redo classification with n\-grams. ``` library(glmnet) ``` ``` ## Loading required package: Matrix ``` ``` ## Loading required package: foreach ``` ``` ## Loaded glmnet 2.0-5 ``` ``` NFOLDS = 5 vocab = vocab %>% prune_vocabulary(term_count_min = 10, doc_proportion_max = 0.5) print(vocab) ``` ``` ## Number of docs: 4000 ## 0 stopwords: ... 
## ngram_min = 1; ngram_max = 2 ## Vocabulary: ## terms terms_counts doc_counts ## 1: morvern 14 1 ## 2: race_films 10 1 ## 3: bazza 11 1 ## 4: thunderbirds 10 1 ## 5: mary_lou 21 1 ## --- ## 17866: br_also 36 36 ## 17867: a_better 96 89 ## 17868: tourists 10 10 ## 17869: in_each 14 14 ## 17870: the_brothers 14 14 ``` ``` bigram_vectorizer = vocab_vectorizer(vocab) dtm_train = create_dtm(it_train, bigram_vectorizer) res = cv.glmnet(x = dtm_train, y = train[['sentiment']], family = 'binomial', alpha = 1, type.measure = "auc", nfolds = NFOLDS, thresh = 1e-3, maxit = 1e3) plot(res) ``` ``` print(names(res)) ``` ``` ## [1] "lambda" "cvm" "cvsd" "cvup" "cvlo" ## [6] "nzero" "name" "glmnet.fit" "lambda.min" "lambda.1se" ``` ``` #AUC (area under curve) print(max(res$cvm)) ``` ``` ## [1] 0.9267776 ``` ### 8\.7\.2 Out\-of\-sample test ``` it_test = test$review %>% prep_fun %>% tok_fun %>% itoken(ids = test$id, # turn off progressbar because it won't look nice in rmd progressbar = FALSE) dtm_test = create_dtm(it_test, bigram_vectorizer) preds = predict(res, dtm_test, type = 'response')[,1] glmnet:::auc(test$sentiment, preds) ``` ``` ## [1] 0.9309295 ``` 8\.8 TF\-IDF ------------ We have seen the TF\-IDF discussion earlier, and here we see how to implement it using the *text2vec* package. ``` vocab = create_vocabulary(it_train) vectorizer = vocab_vectorizer(vocab) dtm_train = create_dtm(it_train, vectorizer) tfidf = TfIdf$new() dtm_train_tfidf = fit_transform(dtm_train, tfidf) dtm_test_tfidf = create_dtm(it_test, vectorizer) %>% transform(tfidf) ``` Now we take the TF\-IDF adjusted DTM and run the classifier. 8\.9 Refit classifier --------------------- ``` res = cv.glmnet(x = dtm_train_tfidf, y = train[['sentiment']], family = 'binomial', alpha = 1, type.measure = "auc", nfolds = NFOLDS, thresh = 1e-3, maxit = 1e3) print(paste("max AUC =", round(max(res$cvm), 4))) ``` ``` ## [1] "max AUC = 0.9115" ``` ``` #Test on hold-out sample preds = predict(res, dtm_test_tfidf, type = 'response')[,1] glmnet:::auc(test$sentiment, preds) ``` ``` ## [1] 0.9039965 ``` 8\.10 Embeddings (word2vec) --------------------------- From: [http://stackoverflow.com/questions/39514941/preparing\-word\-embeddings\-in\-text2vec\-r\-package](http://stackoverflow.com/questions/39514941/preparing-word-embeddings-in-text2vec-r-package) Do the entire creation of the TCM (Term Co\-occurrence Matrix) ``` suppressMessages(library(magrittr)) suppressMessages(library(text2vec)) data("movie_review") tokens = movie_review$review %>% tolower %>% word_tokenizer() it = itoken(tokens) v = create_vocabulary(it) %>% prune_vocabulary(term_count_min=10) vectorizer = vocab_vectorizer(v, grow_dtm = FALSE, skip_grams_window = 5) tcm = create_tcm(it, vectorizer) print(dim(tcm)) ``` ``` ## [1] 7797 7797 ``` Now fit the word embeddings using GloVe See: <http://nlp.stanford.edu/projects/glove/> ``` model = GlobalVectors$new(word_vectors_size=50, vocabulary=v, x_max=10, learning_rate=0.20) model$fit(tcm,n_iter=25) ``` ``` ## 2017-03-24 11:41:55 - epoch 1, expected cost 0.0820 ``` ``` ## 2017-03-24 11:41:55 - epoch 2, expected cost 0.0508 ``` ``` ## 2017-03-24 11:41:56 - epoch 3, expected cost 0.0433 ``` ``` ## 2017-03-24 11:41:56 - epoch 4, expected cost 0.0390 ``` ``` ## 2017-03-24 11:41:56 - epoch 5, expected cost 0.0359 ``` ``` ## 2017-03-24 11:41:56 - epoch 6, expected cost 0.0337 ``` ``` ## 2017-03-24 11:41:56 - epoch 7, expected cost 0.0321 ``` ``` ## 2017-03-24 11:41:57 - epoch 8, expected cost 0.0307 ``` ``` ## 2017-03-24 11:41:57 - epoch 9, 
expected cost 0.0296 ``` ``` ## 2017-03-24 11:41:57 - epoch 10, expected cost 0.0288 ``` ``` ## 2017-03-24 11:41:57 - epoch 11, expected cost 0.0281 ``` ``` ## 2017-03-24 11:41:57 - epoch 12, expected cost 0.0275 ``` ``` ## 2017-03-24 11:41:58 - epoch 13, expected cost 0.0269 ``` ``` ## 2017-03-24 11:41:58 - epoch 14, expected cost 0.0264 ``` ``` ## 2017-03-24 11:41:58 - epoch 15, expected cost 0.0260 ``` ``` ## 2017-03-24 11:41:58 - epoch 16, expected cost 0.0257 ``` ``` ## 2017-03-24 11:41:59 - epoch 17, expected cost 0.0253 ``` ``` ## 2017-03-24 11:41:59 - epoch 18, expected cost 0.0251 ``` ``` ## 2017-03-24 11:41:59 - epoch 19, expected cost 0.0248 ``` ``` ## 2017-03-24 11:41:59 - epoch 20, expected cost 0.0246 ``` ``` ## 2017-03-24 11:41:59 - epoch 21, expected cost 0.0243 ``` ``` ## 2017-03-24 11:42:00 - epoch 22, expected cost 0.0242 ``` ``` ## 2017-03-24 11:42:00 - epoch 23, expected cost 0.0240 ``` ``` ## 2017-03-24 11:42:00 - epoch 24, expected cost 0.0238 ``` ``` ## 2017-03-24 11:42:00 - epoch 25, expected cost 0.0236 ``` ``` wv = model$get_word_vectors() #Dimension words x wvec_size ``` 8\.11 Distance between words (or find close words) -------------------------------------------------- ``` #Make distance matrix d = dist2(wv, method="cosine") #Smaller values means closer print(dim(d)) ``` ``` ## [1] 7797 7797 ``` ``` #Pass: w=word, d=dist matrix, n=nomber of close words findCloseWords = function(w,d,n) { words = rownames(d) i = which(words==w) if (length(i) > 0) { res = sort(d[i,]) print(as.matrix(res[2:(n+1)])) } else { print("Word not in corpus.") } } ``` Example: Show the ten words close to the word “man” and “woman”. ``` findCloseWords("man",d,10) ``` ``` ## [,1] ## woman 0.2009660 ## girl 0.2371918 ## guy 0.2802020 ## who 0.3009101 ## young 0.3341396 ## person 0.3397372 ## boy 0.3733406 ## hit 0.3953263 ## old 0.4037096 ## he 0.4111968 ``` ``` findCloseWords("woman",d,10) ``` ``` ## [,1] ## young 0.1754151 ## man 0.2009660 ## girl 0.2546709 ## boy 0.2981061 ## who 0.3186094 ## guy 0.3222383 ## named 0.3372591 ## kid 0.3728761 ## child 0.3759926 ## doctor 0.3941979 ``` This is a very useful feature of word embeddings, as it is often argued that in the embedded space, words that are close to each other, also tend to have semantic similarities, even though the closeness is computed simply by using their co\-occurence frequencies. 8\.12 word2vec (explained) -------------------------- For more details, see: [https://www.quora.com/How\-does\-word2vec\-work](https://www.quora.com/How-does-word2vec-work) **A geometrical interpretation**: word2vec is a shallow word embedding model. This means that the model learns to map each discrete word id (0 through the number of words in the vocabulary) into a low\-dimensional continuous vector\-space from their distributional properties observed in some raw text corpus. Geometrically, one may interpret these vectors as tracing out points on the outside surface of a manifold in the “embedded space”. If we initialize these vectors from a spherical gaussian distribution, then you can imagine this manifold to look something like a hypersphere initially. Let us focus on the CBOW for now. CBOW is trained to predict the target word t from the contextual words that surround it, c, i.e. the goal is to maximize P(t \| c) over the training set. I am simplifying somewhat, but you can show that this probability is roughly inversely proportional to the distance between the current vectors assigned to t and to c. 
Since this model is trained in an online setting (one example at a time), at time T the goal is therefore to take a small step (mediated by the “learning rate”) in order to minimize the distance between the current vectors for t and c (and thereby increase the probability P(t \|c)). By repeating this process over the entire training set, we have that vectors for words that habitually co\-occur tend to be nudged closer together, and by gradually lowering the learning rate, this process converges towards some final state of the vectors. By the Distributional Hypothesis (Firth, 1957; see also the Wikipedia page on Distributional semantics), words with similar distributional properties (i.e. that co\-occur regularly) tend to share some aspect of semantic meaning. For example, we may find several sentences in the training set such as “citizens of X protested today” where X (the target word t) may be names of cities or countries that are semantically related. You can therefore interpret each training step as deforming or morphing the initial manifold by nudging the vectors for some words somewhat closer together, and the result, after projecting down to two dimensions, is the familiar t\-SNE visualizations where related words cluster together (e.g. Word representations for NLP). For the skipgram, the direction of the prediction is simply inverted, i.e. now we try to predict P(citizens \| X), P(of \| X), etc. This turns out to learn finer\-grained vectors when one trains over more data. The main reason is that the CBOW smooths over a lot of the distributional statistics by averaging over all context words while the skipgram does not. With little data, this “regularizing” effect of the CBOW turns out to be helpful, but since data is the ultimate regularizer the skipgram is able to extract more information when more data is available. There’s a bit more going on behind the scenes, but hopefully this helps to give a useful geometrical intuition as to how these models work. 8\.13 Topic Analysis -------------------- Uses Latent Dirichlet Allocation. ``` suppressMessages(library(tm)) suppressMessages(library(text2vec)) stopw = stopwords('en') stopw = c(stopw,"br","t","s","m","ve","2","d","1") #Make DTM data("movie_review") tokens = movie_review$review %>% tolower %>% word_tokenizer() it = itoken(tokens) v = create_vocabulary(it, stopwords = stopw) %>% prune_vocabulary(term_count_min=5) vectrzr = vocab_vectorizer(v, grow_dtm = TRUE, skip_grams_window = 5) dtm = create_dtm(it, vectrzr) print(dim(dtm)) ``` ``` ## [1] 5000 12733 ``` ``` #Do LDA lda = LatentDirichletAllocation$new(n_topics=5, v) lda$fit(dtm,n_iter = 25) doc_topics = lda$fit_transform(dtm,n_iter = 25) print(dim(doc_topics)) ``` ``` ## [1] 5000 5 ``` ``` #Get word vectors by topic topic_wv = lda$get_word_vectors() print(dim(topic_wv)) ``` ``` ## [1] 12733 5 ``` ``` #Plot LDA suppressMessages(library(LDAvis)) lda$plot() ``` ``` ## Loading required namespace: servr ``` This produces a terrific interactive plot. 8\.14 Latent Semantic Analysis (LSA) ------------------------------------ ``` lsa = LatentSemanticAnalysis$new(n_topics = 5) res = lsa$fit_transform(dtm) print(dim(res)) ``` ``` ## [1] 5000 5 ``` Biblio at: <http://srdas.github.io/Das_TextAnalyticsInFinance.pdf>
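As a closing aside to this chapter, the five\-dimensional LSA document scores above can themselves be used as classifier features. The following is a minimal sketch, not part of the original chapter: it assumes the objects *res* (the 5000 x 5 LSA matrix), *movie_review*, and *NFOLDS* are still in memory from the chunks above, and it reuses the same *cv.glmnet* call pattern as the sentiment models fitted earlier.

```
#Hedged sketch: reuse the LSA document scores as features for the
#same lasso logistic regression used earlier in this chapter
suppressMessages(library(glmnet))
lsa_fit = cv.glmnet(x = res, y = movie_review[['sentiment']],
                    family = 'binomial', alpha = 1,
                    type.measure = "auc", nfolds = NFOLDS)
print(max(lsa_fit$cvm))  #cross-validated AUC from only five LSA features
```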
Chapter 9 Making Connections: Networks ====================================== 9\.1 Networks are beautiful --------------------------- 9\.2 Small Worlds ----------------- 9\.3 Academic Networks ---------------------- <http://academic.research.microsoft.com> Useful introductory book on networks: [http://www.cs.cornell.edu/home/kleinber/networks\-book/](http://www.cs.cornell.edu/home/kleinber/networks-book/) 9\.4 Graphs ----------- What is a graph? It is a picture of a network, a diagram consisting of relationships between entities. We call the entities as vertices or nodes (set \\(V\\)) and the relationships are called the edges of a graph (set \\(E\\)). Hence a graph \\(G\\) is defined as \\\[\\begin{equation} G \= (V,E) \\end{equation}\\] ### 9\.4\.1 Types of graphs If the edges \\(e \\in E\\) of a graph are not tipped with arrows implying some direction or causality, we call the graph an “undirected” graph. If there are arrows of direction then the graph is a “directed” graph. If the connections (edges) between vertices \\(v \\in V\\) have weights on them, then we call the graph a “weighted graph” else it’s “unweighted”. In an unweighted graph, for any pair of vertices \\((u,v)\\), we have \\\[\\begin{equation} w(u,v) \= \\left\\{ \\begin{array}{ll} w(u,v) \= 1, \& \\mbox{ if } (u,v) \\in E \\\\ w(u,v) \= 0, \& \\mbox{ if } (u,v) \\ni E \\end{array} \\right. \\end{equation}\\] In a weighted graph the value of \\(w(u,v)\\) is unrestricted, and can also be negative. Directed graphs can be cyclic or acyclic. In a cyclic graph there is a path from a source node that leads back to the node itself. Not so in an acyclic graph. The term **dag** is used to connote a “directed acyclic graph”. The binomial option pricing model in finance that you have learnt is an example of a dag. 9\.5 Adjacency Matrix --------------------- A graph may be represented by its adjacency matrix. This is simply the matrix \\(A \= \\{w(u,v)\\}, \\forall u,v\\). You can take the transpose of this matrix as well, which in the case of a directed graph will simply reverse the direction of all edges. 9\.6 igraph package ------------------- ``` library(igraph) ``` ``` ## Loading required package: methods ``` ``` ## ## Attaching package: 'igraph' ``` ``` ## The following objects are masked from 'package:stats': ## ## decompose, spectrum ``` ``` ## The following object is masked from 'package:base': ## ## union ``` ``` g = erdos.renyi.game(20,1/10) g ``` ``` ## IGRAPH U--- 20 23 -- Erdos renyi (gnp) graph ## + attr: name (g/c), type (g/c), loops (g/l), p (g/n) ## + edges: ## [1] 1-- 5 8-- 9 2--10 6--11 7--11 1--12 6--12 10--12 11--12 6--13 ## [11] 7--15 12--15 1--16 4--16 8--16 3--17 1--18 1--19 10--19 14--19 ## [21] 16--19 5--20 9--20 ``` ``` plot.igraph(g) ``` ``` print(clusters(g)) ``` ``` ## $membership ## [1] 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 ## ## $csize ## [1] 18 2 ## ## $no ## [1] 2 ``` ``` g$no[[1]] ``` ``` ## NULL ``` 9\.7 Graph Attributes --------------------- ``` #GENERATE RANDOM GRAPH g = erdos.renyi.game(30,0.2) plot(g) ``` ``` #COMPUTE DEGREE DISTRIBUTION dd = degree.distribution(g) dd = as.matrix(dd) d = as.matrix(seq(0,max(degree(g)))) plot(d,dd,type="l") ``` 9\.8 Dijkstra’s Shortest Paths Algorithm ---------------------------------------- This is one of the most well\-known algorithms in theoretical computer science. Given a source vertex on a weighted, directed graph, it finds the shortest path to all other nodes from source \\(s\\). The weight between two vertices is denoted \\(w(u,v)\\) as before. 
Dijkstra’s algorithm works for graphs where \\(w(u,v) \\geq 0\\). For negative weights, there is the Bellman\-Ford algorithm. The algorithm is as follows. function DIJKSTRA(G,w,s) S \= { } %S \= Set of vertices whose shortest paths from %source s have been found Q \= V(G) while Q notequal { } : u \= getMin(Q) S \= S \+ u Q \= Q \- u for each vertex v in SUCC(u): if d\[v] \> d\[u]\+w(u,v) then: d\[v] \= d\[u]\+w(u,v) PRED(v) \= u ``` #DIJSKATRA'S SHORTEST PATHS ALGORITHM e = matrix(nc=3, byrow=TRUE, c(1,2,8, 1,4,4, 2,4,3, 4,2,1, 2,3,1, 2,5,7, 4,5,4, 3,5,1)) e ``` ``` ## [,1] [,2] [,3] ## [1,] 1 2 8 ## [2,] 1 4 4 ## [3,] 2 4 3 ## [4,] 4 2 1 ## [5,] 2 3 1 ## [6,] 2 5 7 ## [7,] 4 5 4 ## [8,] 3 5 1 ``` ``` g = graph.empty(5) g = add.edges(g,t(e[,1:2]),weight=e[,3]) plot(g) ``` ``` plot(g,edge.width=e[,3],edge.label=e[,3]) ``` ``` get.shortest.paths(g,1) ``` ``` ## $vpath ## $vpath[[1]] ## + 0/5 vertices: ## ## $vpath[[2]] ## + 3/5 vertices: ## [1] 1 4 2 ## ## $vpath[[3]] ## + 4/5 vertices: ## [1] 1 4 2 3 ## ## $vpath[[4]] ## + 2/5 vertices: ## [1] 1 4 ## ## $vpath[[5]] ## + 5/5 vertices: ## [1] 1 4 2 3 5 ## ## ## $epath ## NULL ## ## $predecessors ## NULL ## ## $inbound_edges ## NULL ``` ``` print(shortest.paths(g)) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0 5 6 4 7 ## [2,] 5 0 1 1 2 ## [3,] 6 1 0 2 1 ## [4,] 4 1 2 0 3 ## [5,] 7 2 1 3 0 ``` ``` print(average.path.length(g)) ``` ``` ## [1] 1.272727 ``` ``` el <- matrix(nc=3, byrow=TRUE,c(0,1,0, 0,2,2, 0,3,1, 1,2,0, 1,4,5, 1,5,2, 2,1,1, 2,3,1, 2,6,1, 3,2,0, 3,6,2, 4,5,2, 4,7,8, 5,2,2, 5,6,1, 5,8,1, 5,9,3, 7,5,1, 7,8,1, 8,9,4) ) el[,1:2] = el[,1:2]+1 #Note that the zero vertex option does not exist any more, so we added 1 g = add.edges(graph.empty(10), t(el[,1:2]), weight=el[,3]) plot(g) ``` ``` #GRAPHING MAIN NETWORK g = simplify(g) V(g)$name = seq(vcount(g)) #l = layout.fruchterman.reingold(g) #l = layout.kamada.kawai(g) l = layout.circle(g) l = layout.norm(l, -1,1,-1,1) #pdf(file="network_plot.pdf") plot(g, layout=l, vertex.size=10, vertex.label=seq(1,10), vertex.color="#ff000033", edge.color="grey", edge.arrow.size=0.75, rescale=FALSE, xlim=range(l[,1]), ylim=range(l[,2])) ``` 9\.9 D3 plots ------------- D3 is a well known framework for plotting spring graphs. The following plot shows how one may use javascript in R, using the **html widgets** framework. See: <http://www.htmlwidgets.org/> ``` library(networkD3) links = data.frame(el[,1:2])-1 names(links) = c("source","target") links$value = 1 nodes = data.frame(unique(c(links$target,links$source))) names(nodes) = "name" nodes$group = ceiling(3*runif(length(nodes$name))) forceNetwork(Links = links, Nodes = nodes, Source = "source", Target = "target", Value = "value", NodeID = "name", Group = "group", opacity = 0.8, fontSize = 75) ``` 9\.10 Centrality ---------------- Centrality is a property of vertices in the network. Given the adjacency matrix \\(A\=\\{w(u,v)\\}\\), we can obtain a measure of the “influence” of all vertices in the network. Let \\(x\_i\\) be the influence of vertex \\(i\\). Then the column vector \\(x\\) contains the influence of each vertex. What is influence? Think of a web page. It has more influence the more links it has both, to the page, and from the page to other pages. Or think of a alumni network. People with more connections have more influence, they are more “central”. It is possible that you might have no connections yourself, but are connected to people with great connections. In this case, you do have influence. 
Hence, your influence depends on your own influence and that which you derive through others. Hence, the entire system of influence is interdependent, and can be written as the following matrix equation \\\[\\begin{equation} x \= A\\;x \\end{equation}\\] Now, we can just add a scalar here to this to get \\\[\\begin{equation} \\xi \\; x \= A x \\end{equation}\\] an eigensystem. Decompose this to get the principle eigenvector, and its values give you the influence of each member. In this way you can find the most influential people in any network. There are several applications of this idea to real data. This is eigenvector centrality is exactly what Google trademarked as **PageRank**, even though they did not invent eigenvector centrality. ``` A = matrix(nc=3, byrow=TRUE, c(0,1,1, 1,0,1, 1,1,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 1 1 ## [2,] 1 0 1 ## [3,] 1 1 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g) print(names(res)) ``` ``` ## [1] "vector" "value" "options" ``` ``` res$vector ``` ``` ## [1] 1 1 1 ``` ``` res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.5773503 0.5773503 0.5773503 ``` ``` A = matrix(nc=3, byrow=TRUE, c(0,1,1, 1,0,0, 1,0,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 1 1 ## [2,] 1 0 0 ## [3,] 1 0 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.7071068 0.5000000 0.5000000 ``` ``` A = matrix(nc=3, byrow=TRUE, c(0,2,1, 2,0,0, 1,0,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 2 1 ## [2,] 2 0 0 ## [3,] 1 0 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.7071068 0.6324555 0.3162278 ``` 9\.11 Betweenness ----------------- Another concept of centrality is known as “betweenness”. This is the proportion of shortest paths that go through a node relative to all paths that go through the same node. This may be expressed as \\\[\\begin{equation} B(v) \= \\sum\_{a \\neq v \\neq b} \\frac{n\_{a,b}(v)}{n\_{a,b}} \\end{equation}\\] where \\(n\_{a,b}\\) is the number of shortest paths from node \\(a\\) to node \\(b\\), and \\(n\_{a,b}(v)\\) are the number of those paths that traverse through vertex \\(v\\). Here is an example from an earlier directed graph. ``` el = matrix(nc=3, byrow=TRUE, c(0,1,1, 0,2,2, 0,3,1, 1,2,1, 1,4,5, 1,5,2, 2,1,1, 2,3,1, 2,6,1, 3,2,1, 3,6,2, 4,5,2, 4,7,8, 5,2,2, 5,6,1, 5,8,1, 5,9,3, 7,5,1, 7,8,1, 8,9,4) ) el[,1:2] = el[,1:2] + 1 g = add.edges(graph.empty(10), t(el[,1:2]), weight=el[,3]) plot(g) ``` ``` res = betweenness(g) res ``` ``` ## [1] 0.0000000 18.5833333 18.2500000 0.8333333 5.0000000 20.0000000 ## [7] 0.0000000 0.0000000 0.0000000 0.0000000 ``` ``` g = erdos.renyi.game(30,0.1) d = seq(0,max(degree(g))) dd = degree.distribution(g) plot(g) ``` ``` #DIAMETER print(diameter(g)) ``` ``` ## [1] 6 ``` ``` #FRAGILITY print((t(d^2) %*% dd)/(t(d) %*% dd)) ``` ``` ## [,1] ## [1,] 3.837209 ``` ``` #CENTRALITY res = evcent(g) res$vector ``` ``` ## [1] 0.13020514 0.10654809 0.50328790 0.53703737 0.22421218 0.23555387 ## [7] 0.33641755 0.09718898 0.07088808 0.61028079 0.37861544 0.27615600 ## [13] 0.37620605 0.17105358 1.00000000 0.07332221 0.08635696 0.12932960 ## [19] 0.15630895 0.28404621 0.17887855 0.27369218 0.13102918 0.25669577 ## [25] 0.25669577 0.72508578 0.23833268 0.69685043 0.25944866 0.41435043 ``` 9\.12 Communities ----------------- Community detection methods partition nodes into clusters that tend to interact together. 
It is useful to point out the considerable flexibility and realism built into the definition of our community clusters. We do not require all nodes to belong to communities. Nor do we fix the number of communities that may exist at a time, and we also allow each community to have different size. With this flexibility, the key computational challenge is to find the “best” partition because the number of possible partitions of the nodes is extremely large. Community detection methods attempt to determine a set of clusters that are internally tight\-knit. Mathematically, this is equivalent to finding a partition of clusters to maximize the observed number of connections between cluster members minus what is expected conditional on the connections within the cluster, aggregated across all clusters. More formally, we choose partitions with high modularity \\(Q\\), where \\\[\\begin{equation} Q \= \\frac{1}{2m} \\sum\_{i,j} \\left\[ A\_{ij} \- \\frac{d\_i \\times d\_j}{2m} \\right] \\cdot \\delta(i,j) \\end{equation}\\] \\(A\_{ij}\\) is the \\((i,j)\\)\-th entry in the adjacency matrix, i.e., the number of connections in which \\(i\\) and \\(j\\) jointly participated, \\(d\_i\=\\sum\_j A\_{ij}\\) is the total number of transactions that node \\(i\\) participated in (or, the degree of \\(i\\)) and \\(m \= \\frac{1}{2} \\sum\_{ij} A\_{ij}\\) is the sum of all edge weights in matrix \\(A\\). The function \\(\\delta(i,j)\\) is an indicator equal to 1\.0 if nodes \\(i\\) and \\(j\\) are from the same community, and zero otherwise. \\(Q\\) is bounded in \[\-1, \+1]. If \\(Q \> 0\\), intra\-community connections exceed the expected number given deal flow. Consider a network of five nodes \\(\\{A,B,C,D,E\\}\\), where the edge weights are as follows: \\(A:B\=6\\), \\(A:C\=5\\), \\(B:C\=2\\), \\(C:D\=2\\), and \\(D:E\=10\\). Assume that a community detection algorithm assigns \\(\\{A,B,C\\}\\) to one community and \\(\\{D,E\\}\\) to another, i.e., only two communities. The adjacency matrix for this graph is given by matrix \\(A\\) below. 
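As a quick check (a worked calculation added here, not in the original), the modularity of this partition can be computed by hand. The total edge weight is \\(m \= 25\\), so \\(2m \= 50\\), and the weighted degrees are \\(d\_A \= 11\\), \\(d\_B \= 8\\), \\(d\_C \= 9\\), \\(d\_D \= 12\\), \\(d\_E \= 10\\). Summing \\(A\_{ij} \- \\frac{d\_i \\times d\_j}{2m}\\) over all ordered pairs within \\(\\{A,B,C\\}\\) gives 10\.32, and within \\(\\{D,E\\}\\) another 10\.32, so \\(Q \= 20\.64/50 \= 0\.4128\\), matching the modularity printed by the code below.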
``` A = matrix(c(0,6,5,0,0,6,0,2,0,0,5,2,0,2,0,0,0,2,0,10,0,0,0,10,0),5,5) print(A) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0 6 5 0 0 ## [2,] 6 0 2 0 0 ## [3,] 5 2 0 2 0 ## [4,] 0 0 2 0 10 ## [5,] 0 0 0 10 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) wtc = walktrap.community(g) res=membership(wtc) print(res) ``` ``` ## [1] 1 1 1 2 2 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) fgc = fastgreedy.community(g,merges=TRUE,modularity=TRUE, weights=E(g)$weight) res = membership(fgc) res ``` ``` ## [1] 1 1 1 2 2 ``` ``` g = graph.adjacency(A,mode="undirected",diag=FALSE) wtc = walktrap.community(g) res = membership(wtc) print(res) ``` ``` ## [1] 2 2 2 1 1 ``` ``` print(modularity(g,res)) ``` ``` ## [1] 0.4128 ``` ``` #New functions in igraph for walktrap res = cluster_walktrap(g) print(res) ``` ``` ## IGRAPH clustering walktrap, groups: 2, mod: 0.41 ## + groups: ## $`1` ## [1] 4 5 ## ## $`2` ## [1] 1 2 3 ## ``` ``` print(modularity(g,res$membership)) ``` ``` ## [1] 0.4128 ``` 9\.13 Financial Applications ---------------------------- 9\.14 Risk Networks ------------------- ``` #RISK NETWORKS PROGRAM CODE #LOAD GRAPH NETWORK LIBRARY library(igraph) #FUNCTION FOR RISK INCREMENT AND DECOMP NetRisk = function(Ri,X) { S = sqrt(t(Ri) %*% X %*% Ri) RiskIncr = 0.5 * (X %*% Ri + t(X) %*% Ri)/S[1,1] RiskDecomp = RiskIncr * Ri result = list(S,RiskIncr,RiskDecomp) } ``` ### 9\.14\.1 Example ``` #READ IN DATA data = read.csv(file="DSTMAA_data/AdjacencyMatrix.csv",sep=",") na = dim(data)[2]-1 #columns (assets) nc = 20 #Number of controls m = dim(data)[1] #rows (first 1 is header, next n are assets, next 20 are controls, remaining are business lines, last line is weights) nb = m-na-nc-2 #Number of business lines X = data[2:(1+na),2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) controls = data[(2+na):(1+na+nc),2:(na+1)] controls = matrix(as.numeric(as.matrix(controls)),nc,na) Ri = matrix(colSums(controls),na,1) #Aggregate risk by asset bus = data[(2+na+nc):(m-1),2:(na+1)] bus = matrix(as.numeric(as.matrix(bus)),nb,na) bus_names = as.matrix(data[(2+na+nc):(m-1),1]) wts = data[m,2:(1+nb)] wts = matrix(as.numeric(as.matrix(wts)),nb,1)/100 #percentage weights ``` ``` #TABLE OF ASSETS: Asset number, Asset name, IP address tab_assets = cbind(seq(1,na),names(data)[2:(na+1)],t(data[1,2:(na+1)])) write(t(tab_assets),file="DSTMAA_data/tab_assets.txt",ncolumns=3) ``` ``` #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) plot.igraph(g,layout=layout.fruchterman.reingold,edge.arrow.size=0.5,vertex.size=15,vertex.label=seq(1,na)) ``` ### 9\.14\.2 Overall Risk Score ``` #COMPUTE OVERALL RISK SCORE #A computation that considers the risk level of each asset (Ri) #and the interlinkages between all assets (in adjacency matrix X) #The function S below is homogenous of degree 1, i.e., S(m*Ri) = m*S(Ri) S = sqrt(t(Ri) %*% X %*% Ri); print(c("Risk Score",S)) ``` ``` ## [1] "Risk Score" "11.6189500386223" ``` ``` S ``` ``` ## [,1] ## [1,] 11.61895 ``` ### 9\.14\.3 Risk Decomposition ``` #COMPUTE RISK DECOMPOSITION #Exploits the homogeneity degree 1 property to compute individual asset #risk contributions, i.e., a risk decomposition. #Risk increment is the change in total risk score if any one asset's #risk level increases by 1. 
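#Added derivation note (not in the original code): since S = sqrt(Ri' X Ri) is
#homogeneous of degree 1 in Ri, Euler's theorem gives S = sum_i Ri_i * dS/dRi_i.
#The gradient is dS/dRi = (X %*% Ri + t(X) %*% Ri)/(2*S), which is exactly the
#RiskIncr vector computed next; RiskDecomp = RiskIncr * Ri therefore sums back
#to S, as verified by print(sum(RiskDecomp)) below.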
RiskIncr = 0.5 * (X %*% Ri + t(X) %*% Ri)/S[1,1] RiskDecomp = RiskIncr * Ri sorted_RiskDecomp = sort(RiskDecomp,decreasing=TRUE,index.return=TRUE) RD = sorted_RiskDecomp$x idxRD = sorted_RiskDecomp$ix print("Risk Contribution"); print(RiskDecomp); print(sum(RiskDecomp)) ``` ``` ## [1] "Risk Contribution" ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.6885304 ## [4,] 0.8606630 ## [5,] 1.3770607 ## [6,] 0.6885304 ## [7,] 0.8606630 ## [8,] 1.3770607 ## [9,] 0.7745967 ## [10,] 0.0000000 ## [11,] 1.2049282 ## [12,] 1.2049282 ## [13,] 1.2049282 ## [14,] 0.5163978 ## [15,] 0.1721326 ## [16,] 0.0000000 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` ## [1] 11.61895 ``` ``` barplot(t(RD),col="dark green",xlab="Node Number",names.arg=idxRD,cex.names=0.75) ``` ### 9\.14\.4 Centrality ``` #NODE EIGEN VALUE CENTRALITY #Centrality is a measure of connectedness and influence of a node in a network #accounting for all its linkages and influence of all other nodes. Centrality #is based on connections only and not risk scores, and measures the propensity #of a node to propagate a security breach if the node is compromised. #It is a score that is normalized to the range (0,1) cent = evcent(g)$vector print("Normalized Centrality Scores") ``` ``` ## [1] "Normalized Centrality Scores" ``` ``` print(cent) ``` ``` ## [1] 1.0000000 0.4567810 0.4922349 0.3627391 0.3345007 0.1982681 0.3322908 ## [8] 0.4593151 0.5590561 0.5492208 0.5492208 0.5492208 0.5492208 0.3044259 ## [15] 0.2944982 0.5231594 0.4121079 0.2944982 ``` ``` sorted_cent = sort(cent,decreasing=TRUE,index.return=TRUE) Scent = sorted_cent$x idxScent = sorted_cent$ix barplot(t(Scent),col="dark red",xlab="Node Number",names.arg=idxScent,cex.names=0.75) ``` ### 9\.14\.5 Risk Increment ``` #COMPUTE RISK INCREMENTS sorted_RiskIncr = sort(RiskIncr,decreasing=TRUE,index.return=TRUE) RI = sorted_RiskIncr$x idxRI = sorted_RiskIncr$ix print("Risk Increment (per unit increase in any node risk"); print(RiskIncr) ``` ``` ## [1] "Risk Increment (per unit increase in any node risk" ``` ``` ## [,1] ## [1,] 1.9795248 ## [2,] 0.7745967 ## [3,] 0.6885304 ## [4,] 0.4303315 ## [5,] 0.6885304 ## [6,] 0.3442652 ## [7,] 0.4303315 ## [8,] 0.6885304 ## [9,] 0.7745967 ## [10,] 0.6024641 ## [11,] 0.6024641 ## [12,] 0.6024641 ## [13,] 0.6024641 ## [14,] 0.2581989 ## [15,] 0.1721326 ## [16,] 0.9036961 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` barplot(t(RI),col="dark blue",xlab="Node Number",names.arg=idxRI,cex.names=0.75) ``` ### 9\.14\.6 Criticality ``` #CRITICALITY #Criticality is compromise-weighted centrality. #This is an element-wise multiplication of vectors $C$ and $x$. 
crit = Ri * cent print("Criticality Vector") ``` ``` ## [1] "Criticality Vector" ``` ``` print(crit) ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.4922349 ## [4,] 0.7254782 ## [5,] 0.6690015 ## [6,] 0.3965362 ## [7,] 0.6645815 ## [8,] 0.9186302 ## [9,] 0.5590561 ## [10,] 0.0000000 ## [11,] 1.0984415 ## [12,] 1.0984415 ## [13,] 1.0984415 ## [14,] 0.6088518 ## [15,] 0.2944982 ## [16,] 0.0000000 ## [17,] 0.4121079 ## [18,] 0.2944982 ``` ``` sorted_crit = sort(crit,decreasing=TRUE,index.return=TRUE) Scrit = sorted_crit$x idxScrit = sorted_crit$ix barplot(t(Scrit),col="orange",xlab="Node Number",names.arg=idxScrit,cex.names=0.75) ``` ### 9\.14\.7 Cross Risk ### 9\.14\.8 Risk Scaling: Spillovers ``` #CROSS IMPACT MATRIX #CHECK FOR SPILLOVER EFFECTS FROM ONE NODE TO ALL OTHERS d_RiskDecomp = NULL n = length(Ri) for (j in 1:n) { Ri2 = Ri Ri2[j] = Ri[j]+1 res = NetRisk(Ri2,X) d_Risk = as.matrix(res[[3]]) - RiskDecomp d_RiskDecomp = cbind(d_RiskDecomp,d_Risk) #Column by column for each asset } #3D plots library("RColorBrewer"); library("lattice"); library("latticeExtra") cloud(d_RiskDecomp, panel.3d.cloud = panel.3dbars, xbase = 0.25, ybase = 0.25, zlim = c(min(d_RiskDecomp), max(d_RiskDecomp)), scales = list(arrows = FALSE, just = "right"), xlab = "On", ylab = "From", zlab = NULL, main="Change in Risk Contribution", col.facet = level.colors(d_RiskDecomp, at = do.breaks(range(d_RiskDecomp), 20), col.regions = cm.colors, colors = TRUE), colorkey = list(col = cm.colors, at = do.breaks(range(d_RiskDecomp), 20)), #screen = list(z = 40, x = -30) ) ``` ``` brewer.div <- colorRampPalette(brewer.pal(11, "Spectral"), interpolate = "spline") levelplot(d_RiskDecomp, aspect = "iso", col.regions = brewer.div(20), ylab="Impact from", xlab="Impact on", main="Change in Risk Contribution") ``` ### 9\.14\.9 Risk Scaling with Increased Connectivity ``` #SIMULATION OF EFFECT OF INCREASED CONNECTIVITY #RANDOM GRAPHS n=50; k=100; pvec=seq(0.05,0.50,0.05); svec=NULL; sbarvec=NULL for (p in pvec) { s_temp = NULL sbar_temp = NULL for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } #plot(pvec,svec,type="l",xlab="Prob of connecting to a node",ylab="S",lwd=3,col="red") plot(pvec,sbarvec,type="l",xlab="Prob of connecting to a node",ylab="S_Avg",lwd=3,col="red") ``` ### 9\.14\.10 Too Big To Fail The change in risk score \\({S}\\) as the number of nodes increases, while keeping the average number of connections between nodes constant. This mimics the case where banks are divided into smaller banks, each of which then contains part of the transacting volume of the previous bank. The plot shows how the risk score increases as the number of nodes increases from 10 to 100, while expected number of total edges in the network remains the same. A compromise vector is also generated with equally likely values \\(\\{0,1,2\\}\\). This is repeated 5000 times for each fixed number of nodes and the mean risk score across 5000 simulations. 
``` #SIMULATION OF EFFECT OF INCREASED NODES AND REDUCED CONNECTIVITY nvec=seq(10,100,10); k=100; svec=NULL; sbarvec=NULL for (n in nvec) { s_temp = NULL sbar_temp = NULL p = 5/n for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } plot(nvec,svec,type="l",xlab="Number of nodes",ylab="S",ylim=c(0,max(svec)),lwd=3,col="red") ``` ``` #plot(nvec,sbarvec,type="l",xlab="Number of nodes",ylab="S_Avg",ylim=c(0,max(sbarvec)),lwd=3,col="red") ``` 9\.15 Systemic Risk in Indian Banks ----------------------------------- 9\.16 Systemic Risk Portals --------------------------- [http://www.systemic\-risk.org/](http://www.systemic-risk.org/) [http://www.systemic\-risk\-hub.org/risk\_centers.php](http://www.systemic-risk-hub.org/risk_centers.php) 9\.17 Shiny application ----------------------- The example above may also be embedded in a shiny application for which the code is provided below. The screen will appear as follows. The files below also require the data file **systemicR.csv** or an upload. ``` #SERVER.R library(shiny) library(plotly) library(igraph) # Define server logic for random distribution application shinyServer(function(input, output) { fData = reactive({ # input$file1 will be NULL initially. After the user selects and uploads a # file, it will be a data frame with 'name', 'size', 'type', and 'datapath' # columns. The 'datapath' column will contain the local filenames where the # data can be found. inFile <- input$file if (is.null(inFile)){ data = read.csv(file="systemicR.csv",sep=",") } else read.csv(file=inFile$datapath) }) observeEvent(input$compute, { output$text1 <- renderText({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) paste("Overall Risk Score",round(S,2)) }) output$plot <- renderPlot({ data = fData() na = dim(data)[1] #columns (assets) bnames = names(data) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) V(g)$color = "#ffec78" V(g)$color[degree(g)==max(degree(g))] = "#ff4040" V(g)$color[degree(g)==min(degree(g))] = "#b4eeb4" V(g)$size = Ri*8+10 plot.igraph(g,layout=layout.fruchterman.reingold,edge.arrow.size=0.5, vertex.label.color="black",edge.arrow.width=0.8, vertex.label=bnames[1:na+1], vertex.label.cex=0.8) }, height = 550, width = 800) output$text2 <- renderText({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) Y = X; diag(Y)=0 g = graph.adjacency(Y) H = ((sum(degree(g)^2))/na)/((sum(degree(g)))/na) paste("Fragility of the Network is ",round(H,2)) }) output$plot2 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) RiskIncr = 0.5 * as.numeric((X %*% Ri + t(X) %*% Ri))/S RiskDecomp = RiskIncr * Ri sorted_RiskDecomp = sort(RiskDecomp,decreasing=TRUE,index.return=TRUE) RD = 
as.numeric(as.matrix(sorted_RiskDecomp$x)) idxRD = as.character(as.matrix(sorted_RiskDecomp$ix)) idxRD = paste("B",idxRD,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Risk Decomposition") plot_ly(y = RD,x = idxRD,marker = list(color = toRGB("dark green")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(RD),col="dark green",xlab="Node Number",names.arg=idxRD,cex.names=0.75) }) output$plot3 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) cent = evcent(g)$vector # print("Normalized Centrality Scores") sorted_cent = sort(cent,decreasing=TRUE,index.return=TRUE) Scent = sorted_cent$x idxScent = sorted_cent$ix idxScent = paste("B",idxScent,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Eigen Value Centrality" ) plot_ly(y = as.numeric(t(Scent)),x = idxScent,marker = list(color = toRGB("red")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(Scent),col="dark red",xlab="Node Number",names.arg=idxScent,cex.names=0.75) }) output$plot4 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) RiskIncr = 0.5 * as.numeric((X %*% Ri + t(X) %*% Ri))/S #COMPUTE RISK INCREMENTS sorted_RiskIncr = sort(RiskIncr,decreasing=TRUE,index.return=TRUE) RI = sorted_RiskIncr$x idxRI = sorted_RiskIncr$ix idxRI = paste("B",idxRI,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Risk Increments" ) plot_ly(y = as.numeric(t(RI)),x = idxRI,marker = list(color = toRGB("green")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(RI),col="dark blue",xlab="Node Number",names.arg=idxRI,cex.names=0.75) }) #CRITICALITY #Criticality is compromise-weighted centrality. #This is an element-wise multiplication of vectors $C$ and $x$. output$plot5 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) cent = evcent(g)$vector crit = Ri * cent print("Criticality Vector") print(crit) sorted_crit = sort(crit,decreasing=TRUE,index.return=TRUE) Scrit = sorted_crit$x idxScrit = sorted_crit$ix idxScrit = paste("B",idxScrit,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Criticality Vector" ) plot_ly(y = as.numeric(t(sorted_crit$x)),x = idxScrit,marker = list(color = toRGB("orange")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(Scrit),col="orange",xlab="Node Number",names.arg=idxScrit,cex.names=0.75) }) }) }) ``` ``` #UI.R library(plotly) shinyUI(fluidPage( titlePanel("Systemic Risk Scoring"), sidebarLayout( sidebarPanel( # Inputs excluded for brevity p('Upload a .csv file having header as Credit Scores and names of n banks. 
Dimensions of file will be (n*n+1) excluding the header.'), fileInput("file", label = h3("File input")), actionButton("compute","Compute Scores"), hr(), textOutput("text1"), textOutput("text2"), hr(), p('Please refer following Paper published for further details', a("Matrix Metrics: Network-Based Systemic Risk Scoring.", href = "http://srdas.github.io/Papers/JAI_Das_issue.pdf")) ), mainPanel( tabsetPanel( tabPanel("Network Graph", plotOutput("plot",width="100%")), tabPanel("Risk Decomposition", plotlyOutput("plot2")), tabPanel("Node Centrality", plotlyOutput("plot3")), tabPanel("Risk Increments", plotlyOutput("plot4")), tabPanel("Criticality", plotlyOutput("plot5")) ) ) ) )) ```
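To run the app locally, one possible setup (an assumption for illustration; the original only lists the two source files) is to save the listings above as **server.R** and **ui.R** in a single directory together with the default data file **systemicR.csv**, and then launch it with *shiny::runApp*:

```
#Hedged sketch: assumed folder layout for launching the Shiny app above
# systemic_app/
#   server.R       (the #SERVER.R listing)
#   ui.R           (the #UI.R listing)
#   systemicR.csv  (default data file read by server.R)
library(shiny)
runApp("systemic_app")  #opens the risk scoring app in the browser
```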
9\.5 Adjacency Matrix --------------------- A graph may be represented by its adjacency matrix. This is simply the matrix \\(A \= \\{w(u,v)\\}, \\forall u,v\\). You can take the transpose of this matrix as well, which in the case of a directed graph will simply reverse the direction of all edges. 9\.6 igraph package ------------------- ``` library(igraph) ``` ``` ## Loading required package: methods ``` ``` ## ## Attaching package: 'igraph' ``` ``` ## The following objects are masked from 'package:stats': ## ## decompose, spectrum ``` ``` ## The following object is masked from 'package:base': ## ## union ``` ``` g = erdos.renyi.game(20,1/10) g ``` ``` ## IGRAPH U--- 20 23 -- Erdos renyi (gnp) graph ## + attr: name (g/c), type (g/c), loops (g/l), p (g/n) ## + edges: ## [1] 1-- 5 8-- 9 2--10 6--11 7--11 1--12 6--12 10--12 11--12 6--13 ## [11] 7--15 12--15 1--16 4--16 8--16 3--17 1--18 1--19 10--19 14--19 ## [21] 16--19 5--20 9--20 ``` ``` plot.igraph(g) ``` ``` print(clusters(g)) ``` ``` ## $membership ## [1] 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 ## ## $csize ## [1] 18 2 ## ## $no ## [1] 2 ``` ``` g$no[[1]] ``` ``` ## NULL ``` 9\.7 Graph Attributes --------------------- ``` #GENERATE RANDOM GRAPH g = erdos.renyi.game(30,0.2) plot(g) ``` ``` #COMPUTE DEGREE DISTRIBUTION dd = degree.distribution(g) dd = as.matrix(dd) d = as.matrix(seq(0,max(degree(g)))) plot(d,dd,type="l") ``` 9\.8 Dijkstra’s Shortest Paths Algorithm ---------------------------------------- This is one of the most well\-known algorithms in theoretical computer science. Given a source vertex on a weighted, directed graph, it finds the shortest path to all other nodes from source \\(s\\). The weight between two vertices is denoted \\(w(u,v)\\) as before. Dijkstra’s algorithm works for graphs where \\(w(u,v) \\geq 0\\). For negative weights, there is the Bellman\-Ford algorithm. The algorithm is as follows. 
function DIJKSTRA(G,w,s) S \= { } %S \= Set of vertices whose shortest paths from %source s have been found Q \= V(G) while Q notequal { } : u \= getMin(Q) S \= S \+ u Q \= Q \- u for each vertex v in SUCC(u): if d\[v] \> d\[u]\+w(u,v) then: d\[v] \= d\[u]\+w(u,v) PRED(v) \= u ``` #DIJSKATRA'S SHORTEST PATHS ALGORITHM e = matrix(nc=3, byrow=TRUE, c(1,2,8, 1,4,4, 2,4,3, 4,2,1, 2,3,1, 2,5,7, 4,5,4, 3,5,1)) e ``` ``` ## [,1] [,2] [,3] ## [1,] 1 2 8 ## [2,] 1 4 4 ## [3,] 2 4 3 ## [4,] 4 2 1 ## [5,] 2 3 1 ## [6,] 2 5 7 ## [7,] 4 5 4 ## [8,] 3 5 1 ``` ``` g = graph.empty(5) g = add.edges(g,t(e[,1:2]),weight=e[,3]) plot(g) ``` ``` plot(g,edge.width=e[,3],edge.label=e[,3]) ``` ``` get.shortest.paths(g,1) ``` ``` ## $vpath ## $vpath[[1]] ## + 0/5 vertices: ## ## $vpath[[2]] ## + 3/5 vertices: ## [1] 1 4 2 ## ## $vpath[[3]] ## + 4/5 vertices: ## [1] 1 4 2 3 ## ## $vpath[[4]] ## + 2/5 vertices: ## [1] 1 4 ## ## $vpath[[5]] ## + 5/5 vertices: ## [1] 1 4 2 3 5 ## ## ## $epath ## NULL ## ## $predecessors ## NULL ## ## $inbound_edges ## NULL ``` ``` print(shortest.paths(g)) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0 5 6 4 7 ## [2,] 5 0 1 1 2 ## [3,] 6 1 0 2 1 ## [4,] 4 1 2 0 3 ## [5,] 7 2 1 3 0 ``` ``` print(average.path.length(g)) ``` ``` ## [1] 1.272727 ``` ``` el <- matrix(nc=3, byrow=TRUE,c(0,1,0, 0,2,2, 0,3,1, 1,2,0, 1,4,5, 1,5,2, 2,1,1, 2,3,1, 2,6,1, 3,2,0, 3,6,2, 4,5,2, 4,7,8, 5,2,2, 5,6,1, 5,8,1, 5,9,3, 7,5,1, 7,8,1, 8,9,4) ) el[,1:2] = el[,1:2]+1 #Note that the zero vertex option does not exist any more, so we added 1 g = add.edges(graph.empty(10), t(el[,1:2]), weight=el[,3]) plot(g) ``` ``` #GRAPHING MAIN NETWORK g = simplify(g) V(g)$name = seq(vcount(g)) #l = layout.fruchterman.reingold(g) #l = layout.kamada.kawai(g) l = layout.circle(g) l = layout.norm(l, -1,1,-1,1) #pdf(file="network_plot.pdf") plot(g, layout=l, vertex.size=10, vertex.label=seq(1,10), vertex.color="#ff000033", edge.color="grey", edge.arrow.size=0.75, rescale=FALSE, xlim=range(l[,1]), ylim=range(l[,2])) ``` 9\.9 D3 plots ------------- D3 is a well known framework for plotting spring graphs. The following plot shows how one may use javascript in R, using the **html widgets** framework. See: <http://www.htmlwidgets.org/> ``` library(networkD3) links = data.frame(el[,1:2])-1 names(links) = c("source","target") links$value = 1 nodes = data.frame(unique(c(links$target,links$source))) names(nodes) = "name" nodes$group = ceiling(3*runif(length(nodes$name))) forceNetwork(Links = links, Nodes = nodes, Source = "source", Target = "target", Value = "value", NodeID = "name", Group = "group", opacity = 0.8, fontSize = 75) ``` 9\.10 Centrality ---------------- Centrality is a property of vertices in the network. Given the adjacency matrix \\(A\=\\{w(u,v)\\}\\), we can obtain a measure of the “influence” of all vertices in the network. Let \\(x\_i\\) be the influence of vertex \\(i\\). Then the column vector \\(x\\) contains the influence of each vertex. What is influence? Think of a web page. It has more influence the more links it has both, to the page, and from the page to other pages. Or think of a alumni network. People with more connections have more influence, they are more “central”. It is possible that you might have no connections yourself, but are connected to people with great connections. In this case, you do have influence. Hence, your influence depends on your own influence and that which you derive through others. 
Hence, the entire system of influence is interdependent, and can be written as the following matrix equation \\\[\\begin{equation} x \= A\\;x \\end{equation}\\] Now, we can just add a scalar here to this to get \\\[\\begin{equation} \\xi \\; x \= A x \\end{equation}\\] an eigensystem. Decompose this to get the principle eigenvector, and its values give you the influence of each member. In this way you can find the most influential people in any network. There are several applications of this idea to real data. This is eigenvector centrality is exactly what Google trademarked as **PageRank**, even though they did not invent eigenvector centrality. ``` A = matrix(nc=3, byrow=TRUE, c(0,1,1, 1,0,1, 1,1,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 1 1 ## [2,] 1 0 1 ## [3,] 1 1 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g) print(names(res)) ``` ``` ## [1] "vector" "value" "options" ``` ``` res$vector ``` ``` ## [1] 1 1 1 ``` ``` res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.5773503 0.5773503 0.5773503 ``` ``` A = matrix(nc=3, byrow=TRUE, c(0,1,1, 1,0,0, 1,0,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 1 1 ## [2,] 1 0 0 ## [3,] 1 0 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.7071068 0.5000000 0.5000000 ``` ``` A = matrix(nc=3, byrow=TRUE, c(0,2,1, 2,0,0, 1,0,0)) print(A) ``` ``` ## [,1] [,2] [,3] ## [1,] 0 2 1 ## [2,] 2 0 0 ## [3,] 1 0 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) res = evcent(g,scale=FALSE) res$vector ``` ``` ## [1] 0.7071068 0.6324555 0.3162278 ``` 9\.11 Betweenness ----------------- Another concept of centrality is known as “betweenness”. This is the proportion of shortest paths that go through a node relative to all paths that go through the same node. This may be expressed as \\\[\\begin{equation} B(v) \= \\sum\_{a \\neq v \\neq b} \\frac{n\_{a,b}(v)}{n\_{a,b}} \\end{equation}\\] where \\(n\_{a,b}\\) is the number of shortest paths from node \\(a\\) to node \\(b\\), and \\(n\_{a,b}(v)\\) are the number of those paths that traverse through vertex \\(v\\). Here is an example from an earlier directed graph. ``` el = matrix(nc=3, byrow=TRUE, c(0,1,1, 0,2,2, 0,3,1, 1,2,1, 1,4,5, 1,5,2, 2,1,1, 2,3,1, 2,6,1, 3,2,1, 3,6,2, 4,5,2, 4,7,8, 5,2,2, 5,6,1, 5,8,1, 5,9,3, 7,5,1, 7,8,1, 8,9,4) ) el[,1:2] = el[,1:2] + 1 g = add.edges(graph.empty(10), t(el[,1:2]), weight=el[,3]) plot(g) ``` ``` res = betweenness(g) res ``` ``` ## [1] 0.0000000 18.5833333 18.2500000 0.8333333 5.0000000 20.0000000 ## [7] 0.0000000 0.0000000 0.0000000 0.0000000 ``` ``` g = erdos.renyi.game(30,0.1) d = seq(0,max(degree(g))) dd = degree.distribution(g) plot(g) ``` ``` #DIAMETER print(diameter(g)) ``` ``` ## [1] 6 ``` ``` #FRAGILITY print((t(d^2) %*% dd)/(t(d) %*% dd)) ``` ``` ## [,1] ## [1,] 3.837209 ``` ``` #CENTRALITY res = evcent(g) res$vector ``` ``` ## [1] 0.13020514 0.10654809 0.50328790 0.53703737 0.22421218 0.23555387 ## [7] 0.33641755 0.09718898 0.07088808 0.61028079 0.37861544 0.27615600 ## [13] 0.37620605 0.17105358 1.00000000 0.07332221 0.08635696 0.12932960 ## [19] 0.15630895 0.28404621 0.17887855 0.27369218 0.13102918 0.25669577 ## [25] 0.25669577 0.72508578 0.23833268 0.69685043 0.25944866 0.41435043 ``` 9\.12 Communities ----------------- Community detection methods partition nodes into clusters that tend to interact together. 
It is useful to point out the considerable flexibility and realism built into the definition of our community clusters. We do not require all nodes to belong to communities. Nor do we fix the number of communities that may exist at a time, and we also allow each community to have different size. With this flexibility, the key computational challenge is to find the “best” partition because the number of possible partitions of the nodes is extremely large. Community detection methods attempt to determine a set of clusters that are internally tight\-knit. Mathematically, this is equivalent to finding a partition of clusters to maximize the observed number of connections between cluster members minus what is expected conditional on the connections within the cluster, aggregated across all clusters. More formally, we choose partitions with high modularity \\(Q\\), where \\\[\\begin{equation} Q \= \\frac{1}{2m} \\sum\_{i,j} \\left\[ A\_{ij} \- \\frac{d\_i \\times d\_j}{2m} \\right] \\cdot \\delta(i,j) \\end{equation}\\] \\(A\_{ij}\\) is the \\((i,j)\\)\-th entry in the adjacency matrix, i.e., the number of connections in which \\(i\\) and \\(j\\) jointly participated, \\(d\_i\=\\sum\_j A\_{ij}\\) is the total number of transactions that node \\(i\\) participated in (or, the degree of \\(i\\)) and \\(m \= \\frac{1}{2} \\sum\_{ij} A\_{ij}\\) is the sum of all edge weights in matrix \\(A\\). The function \\(\\delta(i,j)\\) is an indicator equal to 1\.0 if nodes \\(i\\) and \\(j\\) are from the same community, and zero otherwise. \\(Q\\) is bounded in \[\-1, \+1]. If \\(Q \> 0\\), intra\-community connections exceed the expected number given deal flow. Consider a network of five nodes \\(\\{A,B,C,D,E\\}\\), where the edge weights are as follows: \\(A:B\=6\\), \\(A:C\=5\\), \\(B:C\=2\\), \\(C:D\=2\\), and \\(D:E\=10\\). Assume that a community detection algorithm assigns \\(\\{A,B,C\\}\\) to one community and \\(\\{D,E\\}\\) to another, i.e., only two communities. The adjacency matrix for this graph is given by matrix \\(A\\) below. 
``` A = matrix(c(0,6,5,0,0,6,0,2,0,0,5,2,0,2,0,0,0,2,0,10,0,0,0,10,0),5,5) print(A) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0 6 5 0 0 ## [2,] 6 0 2 0 0 ## [3,] 5 2 0 2 0 ## [4,] 0 0 2 0 10 ## [5,] 0 0 0 10 0 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) wtc = walktrap.community(g) res=membership(wtc) print(res) ``` ``` ## [1] 1 1 1 2 2 ``` ``` g = graph.adjacency(A,mode="undirected",weighted=TRUE,diag=FALSE) fgc = fastgreedy.community(g,merges=TRUE,modularity=TRUE, weights=E(g)$weight) res = membership(fgc) res ``` ``` ## [1] 1 1 1 2 2 ``` ``` g = graph.adjacency(A,mode="undirected",diag=FALSE) wtc = walktrap.community(g) res = membership(wtc) print(res) ``` ``` ## [1] 2 2 2 1 1 ``` ``` print(modularity(g,res)) ``` ``` ## [1] 0.4128 ``` ``` #New functions in igraph for walktrap res = cluster_walktrap(g) print(res) ``` ``` ## IGRAPH clustering walktrap, groups: 2, mod: 0.41 ## + groups: ## $`1` ## [1] 4 5 ## ## $`2` ## [1] 1 2 3 ## ``` ``` print(modularity(g,res$membership)) ``` ``` ## [1] 0.4128 ``` 9\.13 Financial Applications ---------------------------- 9\.14 Risk Networks ------------------- ``` #RISK NETWORKS PROGRAM CODE #LOAD GRAPH NETWORK LIBRARY library(igraph) #FUNCTION FOR RISK INCREMENT AND DECOMP NetRisk = function(Ri,X) { S = sqrt(t(Ri) %*% X %*% Ri) RiskIncr = 0.5 * (X %*% Ri + t(X) %*% Ri)/S[1,1] RiskDecomp = RiskIncr * Ri result = list(S,RiskIncr,RiskDecomp) } ``` ### 9\.14\.1 Example ``` #READ IN DATA data = read.csv(file="DSTMAA_data/AdjacencyMatrix.csv",sep=",") na = dim(data)[2]-1 #columns (assets) nc = 20 #Number of controls m = dim(data)[1] #rows (first 1 is header, next n are assets, next 20 are controls, remaining are business lines, last line is weights) nb = m-na-nc-2 #Number of business lines X = data[2:(1+na),2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) controls = data[(2+na):(1+na+nc),2:(na+1)] controls = matrix(as.numeric(as.matrix(controls)),nc,na) Ri = matrix(colSums(controls),na,1) #Aggregate risk by asset bus = data[(2+na+nc):(m-1),2:(na+1)] bus = matrix(as.numeric(as.matrix(bus)),nb,na) bus_names = as.matrix(data[(2+na+nc):(m-1),1]) wts = data[m,2:(1+nb)] wts = matrix(as.numeric(as.matrix(wts)),nb,1)/100 #percentage weights ``` ``` #TABLE OF ASSETS: Asset number, Asset name, IP address tab_assets = cbind(seq(1,na),names(data)[2:(na+1)],t(data[1,2:(na+1)])) write(t(tab_assets),file="DSTMAA_data/tab_assets.txt",ncolumns=3) ``` ``` #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) plot.igraph(g,layout=layout.fruchterman.reingold,edge.arrow.size=0.5,vertex.size=15,vertex.label=seq(1,na)) ``` ### 9\.14\.2 Overall Risk Score ``` #COMPUTE OVERALL RISK SCORE #A computation that considers the risk level of each asset (Ri) #and the interlinkages between all assets (in adjacency matrix X) #The function S below is homogenous of degree 1, i.e., S(m*Ri) = m*S(Ri) S = sqrt(t(Ri) %*% X %*% Ri); print(c("Risk Score",S)) ``` ``` ## [1] "Risk Score" "11.6189500386223" ``` ``` S ``` ``` ## [,1] ## [1,] 11.61895 ``` ### 9\.14\.3 Risk Decomposition ``` #COMPUTE RISK DECOMPOSITION #Exploits the homogeneity degree 1 property to compute individual asset #risk contributions, i.e., a risk decomposition. #Risk increment is the change in total risk score if any one asset's #risk level increases by 1. 
RiskIncr = 0.5 * (X %*% Ri + t(X) %*% Ri)/S[1,1] RiskDecomp = RiskIncr * Ri sorted_RiskDecomp = sort(RiskDecomp,decreasing=TRUE,index.return=TRUE) RD = sorted_RiskDecomp$x idxRD = sorted_RiskDecomp$ix print("Risk Contribution"); print(RiskDecomp); print(sum(RiskDecomp)) ``` ``` ## [1] "Risk Contribution" ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.6885304 ## [4,] 0.8606630 ## [5,] 1.3770607 ## [6,] 0.6885304 ## [7,] 0.8606630 ## [8,] 1.3770607 ## [9,] 0.7745967 ## [10,] 0.0000000 ## [11,] 1.2049282 ## [12,] 1.2049282 ## [13,] 1.2049282 ## [14,] 0.5163978 ## [15,] 0.1721326 ## [16,] 0.0000000 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` ## [1] 11.61895 ``` ``` barplot(t(RD),col="dark green",xlab="Node Number",names.arg=idxRD,cex.names=0.75) ``` ### 9\.14\.4 Centrality ``` #NODE EIGEN VALUE CENTRALITY #Centrality is a measure of connectedness and influence of a node in a network #accounting for all its linkages and influence of all other nodes. Centrality #is based on connections only and not risk scores, and measures the propensity #of a node to propagate a security breach if the node is compromised. #It is a score that is normalized to the range (0,1) cent = evcent(g)$vector print("Normalized Centrality Scores") ``` ``` ## [1] "Normalized Centrality Scores" ``` ``` print(cent) ``` ``` ## [1] 1.0000000 0.4567810 0.4922349 0.3627391 0.3345007 0.1982681 0.3322908 ## [8] 0.4593151 0.5590561 0.5492208 0.5492208 0.5492208 0.5492208 0.3044259 ## [15] 0.2944982 0.5231594 0.4121079 0.2944982 ``` ``` sorted_cent = sort(cent,decreasing=TRUE,index.return=TRUE) Scent = sorted_cent$x idxScent = sorted_cent$ix barplot(t(Scent),col="dark red",xlab="Node Number",names.arg=idxScent,cex.names=0.75) ``` ### 9\.14\.5 Risk Increment ``` #COMPUTE RISK INCREMENTS sorted_RiskIncr = sort(RiskIncr,decreasing=TRUE,index.return=TRUE) RI = sorted_RiskIncr$x idxRI = sorted_RiskIncr$ix print("Risk Increment (per unit increase in any node risk"); print(RiskIncr) ``` ``` ## [1] "Risk Increment (per unit increase in any node risk" ``` ``` ## [,1] ## [1,] 1.9795248 ## [2,] 0.7745967 ## [3,] 0.6885304 ## [4,] 0.4303315 ## [5,] 0.6885304 ## [6,] 0.3442652 ## [7,] 0.4303315 ## [8,] 0.6885304 ## [9,] 0.7745967 ## [10,] 0.6024641 ## [11,] 0.6024641 ## [12,] 0.6024641 ## [13,] 0.6024641 ## [14,] 0.2581989 ## [15,] 0.1721326 ## [16,] 0.9036961 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` barplot(t(RI),col="dark blue",xlab="Node Number",names.arg=idxRI,cex.names=0.75) ``` ### 9\.14\.6 Criticality ``` #CRITICALITY #Criticality is compromise-weighted centrality. #This is an element-wise multiplication of vectors $C$ and $x$. 
crit = Ri * cent print("Criticality Vector") ``` ``` ## [1] "Criticality Vector" ``` ``` print(crit) ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.4922349 ## [4,] 0.7254782 ## [5,] 0.6690015 ## [6,] 0.3965362 ## [7,] 0.6645815 ## [8,] 0.9186302 ## [9,] 0.5590561 ## [10,] 0.0000000 ## [11,] 1.0984415 ## [12,] 1.0984415 ## [13,] 1.0984415 ## [14,] 0.6088518 ## [15,] 0.2944982 ## [16,] 0.0000000 ## [17,] 0.4121079 ## [18,] 0.2944982 ``` ``` sorted_crit = sort(crit,decreasing=TRUE,index.return=TRUE) Scrit = sorted_crit$x idxScrit = sorted_crit$ix barplot(t(Scrit),col="orange",xlab="Node Number",names.arg=idxScrit,cex.names=0.75) ``` ### 9\.14\.7 Cross Risk ### 9\.14\.8 Risk Scaling: Spillovers ``` #CROSS IMPACT MATRIX #CHECK FOR SPILLOVER EFFECTS FROM ONE NODE TO ALL OTHERS d_RiskDecomp = NULL n = length(Ri) for (j in 1:n) { Ri2 = Ri Ri2[j] = Ri[j]+1 res = NetRisk(Ri2,X) d_Risk = as.matrix(res[[3]]) - RiskDecomp d_RiskDecomp = cbind(d_RiskDecomp,d_Risk) #Column by column for each asset } #3D plots library("RColorBrewer"); library("lattice"); library("latticeExtra") cloud(d_RiskDecomp, panel.3d.cloud = panel.3dbars, xbase = 0.25, ybase = 0.25, zlim = c(min(d_RiskDecomp), max(d_RiskDecomp)), scales = list(arrows = FALSE, just = "right"), xlab = "On", ylab = "From", zlab = NULL, main="Change in Risk Contribution", col.facet = level.colors(d_RiskDecomp, at = do.breaks(range(d_RiskDecomp), 20), col.regions = cm.colors, colors = TRUE), colorkey = list(col = cm.colors, at = do.breaks(range(d_RiskDecomp), 20)), #screen = list(z = 40, x = -30) ) ``` ``` brewer.div <- colorRampPalette(brewer.pal(11, "Spectral"), interpolate = "spline") levelplot(d_RiskDecomp, aspect = "iso", col.regions = brewer.div(20), ylab="Impact from", xlab="Impact on", main="Change in Risk Contribution") ``` ### 9\.14\.9 Risk Scaling with Increased Connectivity ``` #SIMULATION OF EFFECT OF INCREASED CONNECTIVITY #RANDOM GRAPHS n=50; k=100; pvec=seq(0.05,0.50,0.05); svec=NULL; sbarvec=NULL for (p in pvec) { s_temp = NULL sbar_temp = NULL for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } #plot(pvec,svec,type="l",xlab="Prob of connecting to a node",ylab="S",lwd=3,col="red") plot(pvec,sbarvec,type="l",xlab="Prob of connecting to a node",ylab="S_Avg",lwd=3,col="red") ``` ### 9\.14\.10 Too Big To Fail The change in risk score \\({S}\\) as the number of nodes increases, while keeping the average number of connections between nodes constant. This mimics the case where banks are divided into smaller banks, each of which then contains part of the transacting volume of the previous bank. The plot shows how the risk score increases as the number of nodes increases from 10 to 100, while expected number of total edges in the network remains the same. A compromise vector is also generated with equally likely values \\(\\{0,1,2\\}\\). This is repeated 5000 times for each fixed number of nodes and the mean risk score across 5000 simulations. 
``` #SIMULATION OF EFFECT OF INCREASED NODES AND REDUCED CONNECTIVITY nvec=seq(10,100,10); k=100; svec=NULL; sbarvec=NULL for (n in nvec) { s_temp = NULL sbar_temp = NULL p = 5/n for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } plot(nvec,svec,type="l",xlab="Number of nodes",ylab="S",ylim=c(0,max(svec)),lwd=3,col="red") ``` ``` #plot(nvec,sbarvec,type="l",xlab="Number of nodes",ylab="S_Avg",ylim=c(0,max(sbarvec)),lwd=3,col="red") ``` ### 9\.14\.1 Example ``` #READ IN DATA data = read.csv(file="DSTMAA_data/AdjacencyMatrix.csv",sep=",") na = dim(data)[2]-1 #columns (assets) nc = 20 #Number of controls m = dim(data)[1] #rows (first 1 is header, next n are assets, next 20 are controls, remaining are business lines, last line is weights) nb = m-na-nc-2 #Number of business lines X = data[2:(1+na),2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) controls = data[(2+na):(1+na+nc),2:(na+1)] controls = matrix(as.numeric(as.matrix(controls)),nc,na) Ri = matrix(colSums(controls),na,1) #Aggregate risk by asset bus = data[(2+na+nc):(m-1),2:(na+1)] bus = matrix(as.numeric(as.matrix(bus)),nb,na) bus_names = as.matrix(data[(2+na+nc):(m-1),1]) wts = data[m,2:(1+nb)] wts = matrix(as.numeric(as.matrix(wts)),nb,1)/100 #percentage weights ``` ``` #TABLE OF ASSETS: Asset number, Asset name, IP address tab_assets = cbind(seq(1,na),names(data)[2:(na+1)],t(data[1,2:(na+1)])) write(t(tab_assets),file="DSTMAA_data/tab_assets.txt",ncolumns=3) ``` ``` #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) plot.igraph(g,layout=layout.fruchterman.reingold,edge.arrow.size=0.5,vertex.size=15,vertex.label=seq(1,na)) ``` ### 9\.14\.2 Overall Risk Score ``` #COMPUTE OVERALL RISK SCORE #A computation that considers the risk level of each asset (Ri) #and the interlinkages between all assets (in adjacency matrix X) #The function S below is homogenous of degree 1, i.e., S(m*Ri) = m*S(Ri) S = sqrt(t(Ri) %*% X %*% Ri); print(c("Risk Score",S)) ``` ``` ## [1] "Risk Score" "11.6189500386223" ``` ``` S ``` ``` ## [,1] ## [1,] 11.61895 ``` ### 9\.14\.3 Risk Decomposition ``` #COMPUTE RISK DECOMPOSITION #Exploits the homogeneity degree 1 property to compute individual asset #risk contributions, i.e., a risk decomposition. #Risk increment is the change in total risk score if any one asset's #risk level increases by 1. 
RiskIncr = 0.5 * (X %*% Ri + t(X) %*% Ri)/S[1,1] RiskDecomp = RiskIncr * Ri sorted_RiskDecomp = sort(RiskDecomp,decreasing=TRUE,index.return=TRUE) RD = sorted_RiskDecomp$x idxRD = sorted_RiskDecomp$ix print("Risk Contribution"); print(RiskDecomp); print(sum(RiskDecomp)) ``` ``` ## [1] "Risk Contribution" ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.6885304 ## [4,] 0.8606630 ## [5,] 1.3770607 ## [6,] 0.6885304 ## [7,] 0.8606630 ## [8,] 1.3770607 ## [9,] 0.7745967 ## [10,] 0.0000000 ## [11,] 1.2049282 ## [12,] 1.2049282 ## [13,] 1.2049282 ## [14,] 0.5163978 ## [15,] 0.1721326 ## [16,] 0.0000000 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` ## [1] 11.61895 ``` ``` barplot(t(RD),col="dark green",xlab="Node Number",names.arg=idxRD,cex.names=0.75) ``` ### 9\.14\.4 Centrality ``` #NODE EIGEN VALUE CENTRALITY #Centrality is a measure of connectedness and influence of a node in a network #accounting for all its linkages and influence of all other nodes. Centrality #is based on connections only and not risk scores, and measures the propensity #of a node to propagate a security breach if the node is compromised. #It is a score that is normalized to the range (0,1) cent = evcent(g)$vector print("Normalized Centrality Scores") ``` ``` ## [1] "Normalized Centrality Scores" ``` ``` print(cent) ``` ``` ## [1] 1.0000000 0.4567810 0.4922349 0.3627391 0.3345007 0.1982681 0.3322908 ## [8] 0.4593151 0.5590561 0.5492208 0.5492208 0.5492208 0.5492208 0.3044259 ## [15] 0.2944982 0.5231594 0.4121079 0.2944982 ``` ``` sorted_cent = sort(cent,decreasing=TRUE,index.return=TRUE) Scent = sorted_cent$x idxScent = sorted_cent$ix barplot(t(Scent),col="dark red",xlab="Node Number",names.arg=idxScent,cex.names=0.75) ``` ### 9\.14\.5 Risk Increment ``` #COMPUTE RISK INCREMENTS sorted_RiskIncr = sort(RiskIncr,decreasing=TRUE,index.return=TRUE) RI = sorted_RiskIncr$x idxRI = sorted_RiskIncr$ix print("Risk Increment (per unit increase in any node risk"); print(RiskIncr) ``` ``` ## [1] "Risk Increment (per unit increase in any node risk" ``` ``` ## [,1] ## [1,] 1.9795248 ## [2,] 0.7745967 ## [3,] 0.6885304 ## [4,] 0.4303315 ## [5,] 0.6885304 ## [6,] 0.3442652 ## [7,] 0.4303315 ## [8,] 0.6885304 ## [9,] 0.7745967 ## [10,] 0.6024641 ## [11,] 0.6024641 ## [12,] 0.6024641 ## [13,] 0.6024641 ## [14,] 0.2581989 ## [15,] 0.1721326 ## [16,] 0.9036961 ## [17,] 0.5163978 ## [18,] 0.1721326 ``` ``` barplot(t(RI),col="dark blue",xlab="Node Number",names.arg=idxRI,cex.names=0.75) ``` ### 9\.14\.6 Criticality ``` #CRITICALITY #Criticality is compromise-weighted centrality. #This is an element-wise multiplication of vectors $C$ and $x$. 
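#The result crit is an n x 1 vector with crit[i] = Ri[i]*cent[i], flagging nodes that are
#both high-risk and highly central in the network.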
crit = Ri * cent print("Criticality Vector") ``` ``` ## [1] "Criticality Vector" ``` ``` print(crit) ``` ``` ## [,1] ## [1,] 0.0000000 ## [2,] 0.0000000 ## [3,] 0.4922349 ## [4,] 0.7254782 ## [5,] 0.6690015 ## [6,] 0.3965362 ## [7,] 0.6645815 ## [8,] 0.9186302 ## [9,] 0.5590561 ## [10,] 0.0000000 ## [11,] 1.0984415 ## [12,] 1.0984415 ## [13,] 1.0984415 ## [14,] 0.6088518 ## [15,] 0.2944982 ## [16,] 0.0000000 ## [17,] 0.4121079 ## [18,] 0.2944982 ``` ``` sorted_crit = sort(crit,decreasing=TRUE,index.return=TRUE) Scrit = sorted_crit$x idxScrit = sorted_crit$ix barplot(t(Scrit),col="orange",xlab="Node Number",names.arg=idxScrit,cex.names=0.75) ``` ### 9\.14\.7 Cross Risk ### 9\.14\.8 Risk Scaling: Spillovers ``` #CROSS IMPACT MATRIX #CHECK FOR SPILLOVER EFFECTS FROM ONE NODE TO ALL OTHERS d_RiskDecomp = NULL n = length(Ri) for (j in 1:n) { Ri2 = Ri Ri2[j] = Ri[j]+1 res = NetRisk(Ri2,X) d_Risk = as.matrix(res[[3]]) - RiskDecomp d_RiskDecomp = cbind(d_RiskDecomp,d_Risk) #Column by column for each asset } #3D plots library("RColorBrewer"); library("lattice"); library("latticeExtra") cloud(d_RiskDecomp, panel.3d.cloud = panel.3dbars, xbase = 0.25, ybase = 0.25, zlim = c(min(d_RiskDecomp), max(d_RiskDecomp)), scales = list(arrows = FALSE, just = "right"), xlab = "On", ylab = "From", zlab = NULL, main="Change in Risk Contribution", col.facet = level.colors(d_RiskDecomp, at = do.breaks(range(d_RiskDecomp), 20), col.regions = cm.colors, colors = TRUE), colorkey = list(col = cm.colors, at = do.breaks(range(d_RiskDecomp), 20)), #screen = list(z = 40, x = -30) ) ``` ``` brewer.div <- colorRampPalette(brewer.pal(11, "Spectral"), interpolate = "spline") levelplot(d_RiskDecomp, aspect = "iso", col.regions = brewer.div(20), ylab="Impact from", xlab="Impact on", main="Change in Risk Contribution") ``` ### 9\.14\.9 Risk Scaling with Increased Connectivity ``` #SIMULATION OF EFFECT OF INCREASED CONNECTIVITY #RANDOM GRAPHS n=50; k=100; pvec=seq(0.05,0.50,0.05); svec=NULL; sbarvec=NULL for (p in pvec) { s_temp = NULL sbar_temp = NULL for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } #plot(pvec,svec,type="l",xlab="Prob of connecting to a node",ylab="S",lwd=3,col="red") plot(pvec,sbarvec,type="l",xlab="Prob of connecting to a node",ylab="S_Avg",lwd=3,col="red") ``` ### 9\.14\.10 Too Big To Fail The change in risk score \\({S}\\) as the number of nodes increases, while keeping the average number of connections between nodes constant. This mimics the case where banks are divided into smaller banks, each of which then contains part of the transacting volume of the previous bank. The plot shows how the risk score increases as the number of nodes increases from 10 to 100, while expected number of total edges in the network remains the same. A compromise vector is also generated with equally likely values \\(\\{0,1,2\\}\\). This is repeated 5000 times for each fixed number of nodes and the mean risk score across 5000 simulations. 
``` #SIMULATION OF EFFECT OF INCREASED NODES AND REDUCED CONNECTIVITY nvec=seq(10,100,10); k=100; svec=NULL; sbarvec=NULL for (n in nvec) { s_temp = NULL sbar_temp = NULL p = 5/n for (j in 1:k) { g = erdos.renyi.game(n,p,directed=TRUE); A = get.adjacency(g) diag(A) = 1 c = as.matrix(round(runif(n,0,2),0)) syscore = as.numeric(sqrt(t(c) %*% A %*% c)) sbarscore = syscore/n s_temp = c(s_temp,syscore) sbar_temp = c(sbar_temp,sbarscore) } svec = c(svec,mean(s_temp)) sbarvec = c(sbarvec,mean(sbar_temp)) } plot(nvec,svec,type="l",xlab="Number of nodes",ylab="S",ylim=c(0,max(svec)),lwd=3,col="red") ``` ``` #plot(nvec,sbarvec,type="l",xlab="Number of nodes",ylab="S_Avg",ylim=c(0,max(sbarvec)),lwd=3,col="red") ``` 9\.15 Systemic Risk in Indian Banks ----------------------------------- 9\.16 Systemic Risk Portals --------------------------- [http://www.systemic\-risk.org/](http://www.systemic-risk.org/) [http://www.systemic\-risk\-hub.org/risk\_centers.php](http://www.systemic-risk-hub.org/risk_centers.php) 9\.17 Shiny application ----------------------- The example above may also be embedded in a shiny application for which the code is provided below. The screen will appear as follows. The files below also require the data file **systemicR.csv** or an upload. ``` #SERVER.R library(shiny) library(plotly) library(igraph) # Define server logic for random distribution application shinyServer(function(input, output) { fData = reactive({ # input$file1 will be NULL initially. After the user selects and uploads a # file, it will be a data frame with 'name', 'size', 'type', and 'datapath' # columns. The 'datapath' column will contain the local filenames where the # data can be found. inFile <- input$file if (is.null(inFile)){ data = read.csv(file="systemicR.csv",sep=",") } else read.csv(file=inFile$datapath) }) observeEvent(input$compute, { output$text1 <- renderText({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) paste("Overall Risk Score",round(S,2)) }) output$plot <- renderPlot({ data = fData() na = dim(data)[1] #columns (assets) bnames = names(data) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) V(g)$color = "#ffec78" V(g)$color[degree(g)==max(degree(g))] = "#ff4040" V(g)$color[degree(g)==min(degree(g))] = "#b4eeb4" V(g)$size = Ri*8+10 plot.igraph(g,layout=layout.fruchterman.reingold,edge.arrow.size=0.5, vertex.label.color="black",edge.arrow.width=0.8, vertex.label=bnames[1:na+1], vertex.label.cex=0.8) }, height = 550, width = 800) output$text2 <- renderText({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) Y = X; diag(Y)=0 g = graph.adjacency(Y) H = ((sum(degree(g)^2))/na)/((sum(degree(g)))/na) paste("Fragility of the Network is ",round(H,2)) }) output$plot2 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) RiskIncr = 0.5 * as.numeric((X %*% Ri + t(X) %*% Ri))/S RiskDecomp = RiskIncr * Ri sorted_RiskDecomp = sort(RiskDecomp,decreasing=TRUE,index.return=TRUE) RD = 
as.numeric(as.matrix(sorted_RiskDecomp$x)) idxRD = as.character(as.matrix(sorted_RiskDecomp$ix)) idxRD = paste("B",idxRD,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Risk Decomposition") plot_ly(y = RD,x = idxRD,marker = list(color = toRGB("dark green")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(RD),col="dark green",xlab="Node Number",names.arg=idxRD,cex.names=0.75) }) output$plot3 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) cent = evcent(g)$vector # print("Normalized Centrality Scores") sorted_cent = sort(cent,decreasing=TRUE,index.return=TRUE) Scent = sorted_cent$x idxScent = sorted_cent$ix idxScent = paste("B",idxScent,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Eigen Value Centrality" ) plot_ly(y = as.numeric(t(Scent)),x = idxScent,marker = list(color = toRGB("red")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(Scent),col="dark red",xlab="Node Number",names.arg=idxScent,cex.names=0.75) }) output$plot4 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) S = as.numeric(sqrt(t(Ri) %*% X %*% Ri)) RiskIncr = 0.5 * as.numeric((X %*% Ri + t(X) %*% Ri))/S #COMPUTE RISK INCREMENTS sorted_RiskIncr = sort(RiskIncr,decreasing=TRUE,index.return=TRUE) RI = sorted_RiskIncr$x idxRI = sorted_RiskIncr$ix idxRI = paste("B",idxRI,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Risk Increments" ) plot_ly(y = as.numeric(t(RI)),x = idxRI,marker = list(color = toRGB("green")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(RI),col="dark blue",xlab="Node Number",names.arg=idxRI,cex.names=0.75) }) #CRITICALITY #Criticality is compromise-weighted centrality. #This is an element-wise multiplication of vectors $C$ and $x$. output$plot5 <- renderPlotly({ data = fData() na = dim(data)[1] #columns (assets) Ri = matrix(data[,1],na,1) #Aggregate risk by asset X = data[1:na,2:(na+1)] X = matrix(as.numeric(as.matrix(X)),na,na) #GRAPH NETWORK: plot of the assets and the links with directed arrows Y = X; diag(Y)=0 g = graph.adjacency(Y) cent = evcent(g)$vector crit = Ri * cent print("Criticality Vector") print(crit) sorted_crit = sort(crit,decreasing=TRUE,index.return=TRUE) Scrit = sorted_crit$x idxScrit = sorted_crit$ix idxScrit = paste("B",idxScrit,sep="") xAx <- list( title = "Node Number" ) yAx <- list( title = "Criticality Vector" ) plot_ly(y = as.numeric(t(sorted_crit$x)),x = idxScrit,marker = list(color = toRGB("orange")),type="bar")%>% layout(xaxis = xAx, yaxis = yAx) # barplot(t(Scrit),col="orange",xlab="Node Number",names.arg=idxScrit,cex.names=0.75) }) }) }) ``` ``` #UI.R library(plotly) shinyUI(fluidPage( titlePanel("Systemic Risk Scoring"), sidebarLayout( sidebarPanel( # Inputs excluded for brevity p('Upload a .csv file having header as Credit Scores and names of n banks. 
Dimensions of file will be (n*n+1) excluding the header.'), fileInput("file", label = h3("File input")), actionButton("compute","Compute Scores"), hr(), textOutput("text1"), textOutput("text2"), hr(), p('Please refer following Paper published for further details', a("Matrix Metrics: Network-Based Systemic Risk Scoring.", href = "http://srdas.github.io/Papers/JAI_Das_issue.pdf")) ), mainPanel( tabsetPanel( tabPanel("Network Graph", plotOutput("plot",width="100%")), tabPanel("Risk Decomposition", plotlyOutput("plot2")), tabPanel("Node Centrality", plotlyOutput("plot3")), tabPanel("Risk Increments", plotlyOutput("plot4")), tabPanel("Criticality", plotlyOutput("plot5")) ) ) ) )) ```
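To run this application locally, here is a minimal sketch; it assumes the two listings above are saved as **server.R** and **ui.R** inside a single app folder, together with **systemicR.csv**, and the folder name used below is purely illustrative.

```
library(shiny)
#Launch the app; replace the folder name with wherever server.R and ui.R are saved
runApp("systemic_risk_app")
```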
Chapter 10 Extracting Dimensions: Discriminant and Factor Analysis ================================================================== 10\.1 Introduction ------------------ In discriminant analysis (DA), we develop statistical models that differentiate two or more population types, such as immigrants vs natives, males vs females, etc. In factor analysis (FA), we attempt to collapse an enormous amount of data about the population into a few common explanatory variables. DA is an attempt to explain categorical data, and FA is an attempt to reduce the dimensionality of the data that we use to explain both categorical or continuous data. They are distinct techniques, related in that they both exploit the techniques of linear algebra. 10\.2 Discriminant Analysis --------------------------- In DA, what we are trying to explain is very often a dichotomous split of our observations. For example, if we are trying to understand what determines a good versus a bad creditor. We call the good vs bad the “criterion” variable, or the “dependent” variable. The variables we use to explain the split between the criterion variables are called “predictor” or “explanatory” variables. We may think of the criterion variables as left\-hand side variables or dependent variables in the lingo of regression analysis. Likewise, the explanatory variables are the right\-hand side ones. What distinguishes DA is that the left\-hand side (lhs) variables are essentially **qualitative** in nature. They have some underlying numerical value, but are in essence qualitative. For example, when universities go through the admission process, they may have a cut off score for admission. This cut off score discriminates the students that they want to admit and the ones that they wish to reject. DA is a very useful tool for determining this cut off score. In short, DA is the means by which quantitative explanatory variables are used to explain qualitative criterion variables. The number of qualitative categories need not be restricted to just two. DA encompasses a larger number of categories too. 10\.3 Notation and assumptions ------------------------------ * Assume that there are \\(N\\) categories or groups indexed by \\(i\=2\...N\\). * Within each group there are observations \\(y\_j\\), indexed by \\(j\=1\...M\_i\\). The size of each group need not be the same, i.e., it is possible that \\(M\_i \\neq M\_j\\). * There are a set of predictor variables \\(x \= \[x\_1,x\_2,\\ldots,x\_K]'\\). Clearly, there must be good reasons for choosing these so as to explain the groups in which the \\(y\_j\\) reside. Hence the value of the \\(k\\)th variable for group \\(i\\), observation \\(j\\), is denoted as \\(x\_{ijk}\\). * Observations are mutually exclusive, i.e., each object can only belong to any one of the groups. * The \\(K \\times K\\) covariance matrix of explanatory variables is assumed to be the same for all groups, i.e., \\(Cov(x\_i) \= Cov(x\_j)\\). This is the homoskedasticity assumption, and makes the criterion for choosing one class over the other a simple projection on the \\(z\\) axis where it may be compared to a cut off. 10\.4 Discriminant Function --------------------------- DA involves finding a discriminant function \\(D\\) that best classifies the observations into the chosen groups. The function may be nonlinear, but the most common approach is to use linear DA. 
The function takes the following form: \\\[\\begin{equation} D \= a\_1 x\_1 \+ a\_2 x\_2 \+ \\ldots \+ a\_K x\_K \= \\sum\_{k\=1}^K a\_k x\_k \\end{equation}\\] where the \\(a\_k\\) coefficients are discriminant weights. The analysis requires the inclusion of a cut\-off score \\(C\\). For example, if \\(N\=2\\), i.e., there are 2 groups, then if \\(D\>C\\) the observation falls into group 1, and if \\(D \\leq C\\), then the observation falls into group 2\. Hence, the *objective* function is to choose \\(\\{\\{a\_k\\}, C\\}\\) such that classification error is minimized. The equation \\(C\=D(\\{x\_k\\}; \\{a\_k\\})\\) is the equation of a hyperplane that cuts the space of the observations into 2 parts if there are only two groups. Note that if there are \\(N\\) groups then there will be \\((N\-1\)\\) cutoffs \\(\\{C\_1,C\_2,\\ldots,C\_{N\-1}\\}\\), and a corresponding number of hyperplanes. The variables \\(x\_k\\) are also known as the “discriminants”. In the extraction of the discriminant function, better discriminants will have higher statistical significance. 10\.5 How good is the discriminant function? -------------------------------------------- After fitting the discriminant function, the next question to ask is how good the fit is. There are various measures that have been suggested for this. All of them have the essential property that they best separate the distribution of observations for different groups. There are many such measures: (a) Point biserial correlation, (b) Mahalobis \\(D\_M\\), (c) Wilks’ \\(\\lambda\\), (d) Rao’s \\(V\\), and (e) the confusion matrix. Each of the measures assesses the degree of classification error. * The point biserial correlation is the \\(R^2\\) of a regression in which the classified observations are signed as \\(y\_{ij}\=1, i\=1\\) for group 1 and \\(y\_{ij}\=0, i\=2\\) for group 2, and the rhs variables are the \\(x\_{ijk}\\) values. * The Mahalanobis distance between any two characteristic vectors for two entities in the data is given by \\\[\\begin{equation} D\_M \= \\sqrt{({\\bf x}\_1 \- {\\bf x}\_2\)' {\\bf \\Sigma}^{\-1} ({\\bf x}\_1 \- {\\bf x}\_2\)} \\end{equation}\\] where \\({\\bf x}\_1, {\\bf x}\_2\\) are two vectors and \\({\\bf \\Sigma}\\) is the covariance matrix of characteristics of all observations in the data set. First, note that if \\({\\bf \\Sigma}\\) is the identity matrix, then \\(D\_M\\) defaults to the Euclidean distance between two vectors. Second, one of the vectors may be treated as the mean vector for a given category, in which case the Mahalanobis distance can be used to assess the distances within and across groups in a pairwise manner. The quality of the discriminant function is then gauged by computing the ratio of the average distance across groups to the average distance within groups. Such ratios are often called the Fisher’s discriminant value. 10\.6 Confusion Matrix ---------------------- The confusion matrix is a cross\-tabulation of the actual versus predicted classification. For example, a \\(n\\)\-category model will result in a \\(n \\times n\\) confusion matrix. A comparison of this matrix with a matrix where the model is assumed to have no classification ability leads to a \\(\\chi^2\\) statistic that informs us about the statistical strength of the classification ability of the model. We will examine this in more detail shortly. 
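Before turning to the example, here is a quick illustration of the Mahalanobis measure defined in the previous section. It is a minimal sketch on simulated data, where the group means, the common covariance matrix, and the sample sizes are made up purely for illustration. Note that base R's **mahalanobis** function returns the squared distance, so we take its square root to obtain \\(D\_M\\).

```
library(MASS)   #for mvrnorm, used to simulate the two groups
set.seed(1)
Sigma = matrix(c(1.0,0.5,0.2,
                 0.5,1.0,0.3,
                 0.2,0.3,1.0),3,3)   #assumed common covariance (homoskedasticity)
x1 = mvrnorm(100,mu=c(0,0,0),Sigma=Sigma)   #group 1 observations, 3 features
x2 = mvrnorm(100,mu=c(1,1,0),Sigma=Sigma)   #group 2 observations
xbar1 = colMeans(x1); xbar2 = colMeans(x2)
V = cov(rbind(scale(x1,scale=FALSE),scale(x2,scale=FALSE)))   #pooled covariance estimate
D2 = mahalanobis(xbar1,center=xbar2,cov=V)   #squared Mahalanobis distance
print(sqrt(D2))   #D_M between the two group means
```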
### 10\.6\.1 Example Using Basketball Data ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = as.matrix(ncaa[4:14]) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) library(MASS) dm = lda(y~x) dm ``` ``` ## Call: ## lda(y ~ x) ## ## Prior probabilities of groups: ## 0 1 ## 0.5 0.5 ## ## Group means: ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 62.10938 33.85938 11.46875 15.01562 0.835625 6.609375 2.375 18.84375 ## 1 72.09375 35.07500 14.02812 12.90000 1.120000 7.037500 3.125 18.46875 ## xFG xFT xX3P ## 0 0.4001562 0.6685313 0.3142187 ## 1 0.4464375 0.7144063 0.3525313 ## ## Coefficients of linear discriminants: ## LD1 ## xPTS -0.02192489 ## xREB 0.18473974 ## xAST 0.06059732 ## xTO -0.18299304 ## xA.T 0.40637827 ## xSTL 0.24925833 ## xBLK 0.09090269 ## xPF 0.04524600 ## xFG 19.06652563 ## xFT 4.57566671 ## xX3P 1.87519768 ``` ``` head(ncaa) ``` ``` ## No NAME GMS PTS REB AST TO A.T STL BLK PF FG FT ## 1 1 NorthCarolina 6 84.2 41.5 17.8 12.8 1.39 6.7 3.8 16.7 0.514 0.664 ## 2 2 Illinois 6 74.5 34.0 19.0 10.2 1.87 8.0 1.7 16.5 0.457 0.753 ## 3 3 Louisville 5 77.4 35.4 13.6 11.0 1.24 5.4 4.2 16.6 0.479 0.702 ## 4 4 MichiganState 5 80.8 37.8 13.0 12.6 1.03 8.4 2.4 19.8 0.445 0.783 ## 5 5 Arizona 4 79.8 35.0 15.8 14.5 1.09 6.0 6.5 13.3 0.542 0.759 ## 6 6 Kentucky 4 72.8 32.3 12.8 13.5 0.94 7.3 3.5 19.5 0.510 0.663 ## X3P ## 1 0.417 ## 2 0.361 ## 3 0.376 ## 4 0.329 ## 5 0.397 ## 6 0.400 ``` ``` print(names(dm)) ``` ``` ## [1] "prior" "counts" "means" "scaling" "lev" "svd" "N" ## [8] "call" "terms" "xlevels" ``` ``` print(dm$scaling) ``` ``` ## LD1 ## xPTS -0.02192489 ## xREB 0.18473974 ## xAST 0.06059732 ## xTO -0.18299304 ## xA.T 0.40637827 ## xSTL 0.24925833 ## xBLK 0.09090269 ## xPF 0.04524600 ## xFG 19.06652563 ## xFT 4.57566671 ## xX3P 1.87519768 ``` ``` print(dm$means) ``` ``` ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 62.10938 33.85938 11.46875 15.01562 0.835625 6.609375 2.375 18.84375 ## 1 72.09375 35.07500 14.02812 12.90000 1.120000 7.037500 3.125 18.46875 ## xFG xFT xX3P ## 0 0.4001562 0.6685313 0.3142187 ## 1 0.4464375 0.7144063 0.3525313 ``` ``` print(sum(dm$scaling*colMeans(dm$means))) ``` ``` ## [1] 18.16674 ``` ``` print(sum(dm$scaling*dm$means[1,])) ``` ``` ## [1] 17.17396 ``` ``` print(sum(dm$scaling*dm$means[2,])) ``` ``` ## [1] 19.15952 ``` ``` y_pred = predict(dm)$class print(y_pred) ``` ``` ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ``` ``` predict(dm) ``` ``` ## $class ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ## ## $posterior ## 0 1 ## 1 0.001299131 0.998700869 ## 2 0.011196418 0.988803582 ## 3 0.046608204 0.953391796 ## 4 0.025364951 0.974635049 ## 5 0.006459513 0.993540487 ## 6 0.056366779 0.943633221 ## 7 0.474976979 0.525023021 ## 8 0.081379875 0.918620125 ## 9 0.502094785 0.497905215 ## 10 0.327329832 0.672670168 ## 11 0.065547282 0.934452718 ## 12 0.341547846 0.658452154 ## 13 0.743464274 0.256535726 ## 14 0.024815082 0.975184918 ## 15 0.285683981 0.714316019 ## 16 0.033598255 0.966401745 ## 17 0.751098160 0.248901840 ## 18 0.136470406 0.863529594 ## 19 0.565743827 0.434256173 ## 20 0.106256858 0.893743142 ## 21 0.079260811 0.920739189 ## 22 0.211287405 0.788712595 ## 23 0.016145814 0.983854186 ## 24 0.017916328 0.982083672 ## 25 0.053361102 0.946638898 ## 26 0.929799893 0.070200107 ## 27 0.421467187 0.578532813 ## 28 0.041196674 
0.958803326 ## 29 0.160473313 0.839526687 ## 30 0.226165888 0.773834112 ## 31 0.103861216 0.896138784 ## 32 0.328218436 0.671781564 ## 33 0.511514581 0.488485419 ## 34 0.595293351 0.404706649 ## 35 0.986761936 0.013238064 ## 36 0.676574981 0.323425019 ## 37 0.926833195 0.073166805 ## 38 0.955066682 0.044933318 ## 39 0.986527865 0.013472135 ## 40 0.877497556 0.122502444 ## 41 0.859503954 0.140496046 ## 42 0.991731912 0.008268088 ## 43 0.827209283 0.172790717 ## 44 0.964180566 0.035819434 ## 45 0.958246183 0.041753817 ## 46 0.517839067 0.482160933 ## 47 0.992279182 0.007720818 ## 48 0.241060617 0.758939383 ## 49 0.358987835 0.641012165 ## 50 0.653092701 0.346907299 ## 51 0.799810486 0.200189514 ## 52 0.933218396 0.066781604 ## 53 0.297058121 0.702941879 ## 54 0.222809854 0.777190146 ## 55 0.996971215 0.003028785 ## 56 0.924919737 0.075080263 ## 57 0.583330536 0.416669464 ## 58 0.483663571 0.516336429 ## 59 0.946886736 0.053113264 ## 60 0.860202673 0.139797327 ## 61 0.961358779 0.038641221 ## 62 0.998027953 0.001972047 ## 63 0.859521185 0.140478815 ## 64 0.706002516 0.293997484 ## ## $x ## LD1 ## 1 3.346531869 ## 2 2.256737828 ## 3 1.520095227 ## 4 1.837609440 ## 5 2.536163975 ## 6 1.419170979 ## 7 0.050452000 ## 8 1.220682015 ## 9 -0.004220052 ## 10 0.362761452 ## 11 1.338252835 ## 12 0.330587901 ## 13 -0.535893942 ## 14 1.848931516 ## 15 0.461550632 ## 16 1.691762218 ## 17 -0.556253363 ## 18 0.929165997 ## 19 -0.133214789 ## 20 1.072519927 ## 21 1.235130454 ## 22 0.663378952 ## 23 2.069846547 ## 24 2.016535392 ## 25 1.448370738 ## 26 -1.301200562 ## 27 0.159527985 ## 28 1.585103944 ## 29 0.833369746 ## 30 0.619515440 ## 31 1.085352883 ## 32 0.360730337 ## 33 -0.023200674 ## 34 -0.194348531 ## 35 -2.171336821 ## 36 -0.371720701 ## 37 -1.278744604 ## 38 -1.539410745 ## 39 -2.162390029 ## 40 -0.991628191 ## 41 -0.912171192 ## 42 -2.410924430 ## 43 -0.788680213 ## 44 -1.658362422 ## 45 -1.578045708 ## 46 -0.035952755 ## 47 -2.445692660 ## 48 0.577605329 ## 49 0.291987243 ## 50 -0.318630304 ## 51 -0.697589676 ## 52 -1.328191375 ## 53 0.433803969 ## 54 0.629224272 ## 55 -2.919349215 ## 56 -1.264701997 ## 57 -0.169453310 ## 58 0.032922090 ## 59 -1.450847181 ## 60 -0.915091388 ## 61 -1.618696192 ## 62 -3.135987051 ## 63 -0.912243063 ## 64 -0.441208000 ``` ``` out = table(y,y_pred) print(out) ``` ``` ## y_pred ## y 0 1 ## 0 27 5 ## 1 5 27 ``` ``` chisq.test(out) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: out ## X-squared = 27.562, df = 1, p-value = 1.521e-07 ``` ``` chisq.test(out,correct=FALSE) ``` ``` ## ## Pearson's Chi-squared test ## ## data: out ## X-squared = 30.25, df = 1, p-value = 3.798e-08 ``` ``` ldahist(data = predict(dm)$x[,1], g=predict(dm)$class) ``` ``` predict(dm) ``` ``` ## $class ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ## ## $posterior ## 0 1 ## 1 0.001299131 0.998700869 ## 2 0.011196418 0.988803582 ## 3 0.046608204 0.953391796 ## 4 0.025364951 0.974635049 ## 5 0.006459513 0.993540487 ## 6 0.056366779 0.943633221 ## 7 0.474976979 0.525023021 ## 8 0.081379875 0.918620125 ## 9 0.502094785 0.497905215 ## 10 0.327329832 0.672670168 ## 11 0.065547282 0.934452718 ## 12 0.341547846 0.658452154 ## 13 0.743464274 0.256535726 ## 14 0.024815082 0.975184918 ## 15 0.285683981 0.714316019 ## 16 0.033598255 0.966401745 ## 17 0.751098160 0.248901840 ## 18 0.136470406 0.863529594 ## 19 0.565743827 0.434256173 ## 20 0.106256858 
0.893743142 ## 21 0.079260811 0.920739189 ## 22 0.211287405 0.788712595 ## 23 0.016145814 0.983854186 ## 24 0.017916328 0.982083672 ## 25 0.053361102 0.946638898 ## 26 0.929799893 0.070200107 ## 27 0.421467187 0.578532813 ## 28 0.041196674 0.958803326 ## 29 0.160473313 0.839526687 ## 30 0.226165888 0.773834112 ## 31 0.103861216 0.896138784 ## 32 0.328218436 0.671781564 ## 33 0.511514581 0.488485419 ## 34 0.595293351 0.404706649 ## 35 0.986761936 0.013238064 ## 36 0.676574981 0.323425019 ## 37 0.926833195 0.073166805 ## 38 0.955066682 0.044933318 ## 39 0.986527865 0.013472135 ## 40 0.877497556 0.122502444 ## 41 0.859503954 0.140496046 ## 42 0.991731912 0.008268088 ## 43 0.827209283 0.172790717 ## 44 0.964180566 0.035819434 ## 45 0.958246183 0.041753817 ## 46 0.517839067 0.482160933 ## 47 0.992279182 0.007720818 ## 48 0.241060617 0.758939383 ## 49 0.358987835 0.641012165 ## 50 0.653092701 0.346907299 ## 51 0.799810486 0.200189514 ## 52 0.933218396 0.066781604 ## 53 0.297058121 0.702941879 ## 54 0.222809854 0.777190146 ## 55 0.996971215 0.003028785 ## 56 0.924919737 0.075080263 ## 57 0.583330536 0.416669464 ## 58 0.483663571 0.516336429 ## 59 0.946886736 0.053113264 ## 60 0.860202673 0.139797327 ## 61 0.961358779 0.038641221 ## 62 0.998027953 0.001972047 ## 63 0.859521185 0.140478815 ## 64 0.706002516 0.293997484 ## ## $x ## LD1 ## 1 3.346531869 ## 2 2.256737828 ## 3 1.520095227 ## 4 1.837609440 ## 5 2.536163975 ## 6 1.419170979 ## 7 0.050452000 ## 8 1.220682015 ## 9 -0.004220052 ## 10 0.362761452 ## 11 1.338252835 ## 12 0.330587901 ## 13 -0.535893942 ## 14 1.848931516 ## 15 0.461550632 ## 16 1.691762218 ## 17 -0.556253363 ## 18 0.929165997 ## 19 -0.133214789 ## 20 1.072519927 ## 21 1.235130454 ## 22 0.663378952 ## 23 2.069846547 ## 24 2.016535392 ## 25 1.448370738 ## 26 -1.301200562 ## 27 0.159527985 ## 28 1.585103944 ## 29 0.833369746 ## 30 0.619515440 ## 31 1.085352883 ## 32 0.360730337 ## 33 -0.023200674 ## 34 -0.194348531 ## 35 -2.171336821 ## 36 -0.371720701 ## 37 -1.278744604 ## 38 -1.539410745 ## 39 -2.162390029 ## 40 -0.991628191 ## 41 -0.912171192 ## 42 -2.410924430 ## 43 -0.788680213 ## 44 -1.658362422 ## 45 -1.578045708 ## 46 -0.035952755 ## 47 -2.445692660 ## 48 0.577605329 ## 49 0.291987243 ## 50 -0.318630304 ## 51 -0.697589676 ## 52 -1.328191375 ## 53 0.433803969 ## 54 0.629224272 ## 55 -2.919349215 ## 56 -1.264701997 ## 57 -0.169453310 ## 58 0.032922090 ## 59 -1.450847181 ## 60 -0.915091388 ## 61 -1.618696192 ## 62 -3.135987051 ## 63 -0.912243063 ## 64 -0.441208000 ``` ### 10\.6\.2 Confusion Matrix This matrix shows some classification ability. Now we ask, what if the model has no classification ability, then what would the average confusion matrix look like? It’s easy to see that this would give a matrix that would assume no relation between the rows and columns, and the numbers in each cell would reflect the average number drawn based on row and column totals. In this case since the row and column totals are all 32, we get the following confusion matrix of no classification ability: \\\[\\begin{equation} E \= \\left\[ \\begin{array}{cc} 16 \& 16\\\\ 16 \& 16 \\end{array} \\right] \\end{equation}\\] The test statistic is the sum of squared normalized differences in the cells of both matrices, i.e., \\\[\\begin{equation} \\mbox{Test\-Stat } \= \\sum\_{i,j} \\frac{\[A\_{ij} \- E\_{ij}]^2}{E\_{ij}} \\end{equation}\\] We compute this in R. 
``` A = matrix(c(27,5,5,27),2,2); print(A) ``` ``` ## [,1] [,2] ## [1,] 27 5 ## [2,] 5 27 ``` ``` E = matrix(c(16,16,16,16),2,2); print(E) ``` ``` ## [,1] [,2] ## [1,] 16 16 ## [2,] 16 16 ``` ``` test_stat = sum((A-E)^2/E); print(test_stat) ``` ``` ## [1] 30.25 ``` ``` print(1-pchisq(test_stat,1)) ``` ``` ## [1] 3.797912e-08 ``` 10\.7 Explanation of LDA ------------------------ We assume two groups first for simplicity, 1 and 2\. Assume a feature space \\(x \\in R^d\\). Group 1 has \\(n\_1\\) observations, and group 2 has \\(n\_2\\) observations, i.e., tuples of dimension \\(d\\). We want to find weights \\(w \\in R^d\\) that will project each observation in each group onto a point \\(z\\) on a line, i.e., \\\[\\begin{equation} z \= w\_1 x\_1 \+ w\_2 x\_2 \+ ... \+ w\_d x\_d \= w' x \\end{equation}\\] We want the \\(z\\) values of group 1 to be as far away as possible from that of group 2, accounting for the variation within and across groups. The **scatter** within group \\(j\=1,2\\) is defined as: \\\[\\begin{equation} S\_j \= \\sum\_{i\=1}^{n\_j} (z\_{ji} \- \\bar{z}\_j)^2 \= \\sum\_{i\=1}^{n\_j} (w' x\_{ji} \- w'\\bar{x}\_j)^2 \\end{equation}\\] where \\(\\bar{z}\_j\\) is the scalar mean of \\(z\\) values for group \\(j\\), and \\(\\bar{x}\_j\\) is the mean of \\(x\\) values for group \\(j\\), and is of dimension \\(d \\times 1\\). We want to capture this scatter more formally, so we define \\\[\\begin{eqnarray} S\_j \= w' (x\_{ji} \- \\bar{x}\_j)(x\_{ji} \- \\bar{x}\_j)' w \= w' V\_j w \\end{eqnarray}\\] where we have defined \\(V\_j \= (x\_{ji} \- \\bar{x}\_j)(x\_{ji} \- \\bar{x}\_j)'\\) as the variation within group \\(j\\). We also define total within group variation as \\(V\_w \= V\_1 \+ V\_2\\). Think of \\(V\_j\\) as a kind of covariance matrix of group \\(j\\). We note that \\(w\\) is dimension \\(d \\times 1\\), \\((x\_{ji} \- \\bar{x}\_j)\\) is dimension \\(d \\times n\_j\\), so that \\(S\_j\\) is scalar. We sum the within group scatter values to get the total within group variation, i.e., \\\[\\begin{equation} w' (V\_1 \+ V\_2\) w \= w' V\_w w \\end{equation}\\] For between group scatter, we get an analogous expression, i.e., \\\[\\begin{equation} w' V\_b w \= w' (\\bar{x}\_1 \- \\bar{x}\_2\)(\\bar{x}\_1 \- \\bar{x}\_2\)' w \\end{equation}\\] where we note that \\((\\bar{x}\_1 \- \\bar{x}\_2\)(\\bar{x}\_1 \- \\bar{x}\_2\)'\\) is the between group covariance, and \\(w\\) is \\((d \\times 1\)\\), \\((\\bar{x}\_1 \- \\bar{x}\_2\)\\) is dimension \\((d \\times 1\)\\). 10\.8 Fischer’s Discriminant ---------------------------- The Fischer linear discriminant approach is to maximize between group variation and minimize within group variation, i.e., \\\[\\begin{equation} F \= \\frac{w' V\_b w}{w' V\_w w} \\end{equation}\\] Taking the vector derivative w.r.t. \\(w\\) to maximize, we get \\\[\\begin{equation} \\frac{dF}{dw} \= \\frac{w' V\_w w (2 V\_b w) \- w' V\_b w (2 V\_w w)}{(w' V\_w w)^2} \= {\\bf 0} \\end{equation}\\] \\\[\\begin{equation} V\_b w \- \\frac{w' V\_b w}{w' V\_w w} V\_w w \= {\\bf 0} \\end{equation}\\] \\\[\\begin{equation} V\_b w \- F V\_w w \= {\\bf 0} \\end{equation}\\] \\\[\\begin{equation} V\_w^{\-1} V\_b w \- F w \= {\\bf 0} \\end{equation}\\] Rewrite this is an eigensystem and solve to get \\\[\\begin{eqnarray} Aw \&\=\& \\lambda w \\\\ w^\* \&\=\& V\_w^{\-1}(\\bar{x}\_1 \- \\bar{x}\_2\) \\end{eqnarray}\\] where \\(A \= V\_w^{\-1} V\_b\\), and \\(\\lambda\=F\\). Note: An easy way to see how to solve for \\(w^\*\\) is as follows. 
First, find the largest eigenvalue of matrix \\(A\\). Second, substitute that into the eigensystem and solve a system of \\(d\\) equations to get \\(w\\). 10\.9 Generalizing number of groups ----------------------------------- We proceed to \\(k\+1\\) groups. Therefore now we need \\(k\\) discriminant vectors, i.e., \\\[\\begin{equation} W \= \[w\_1, w\_2, ... , w\_k] \\in R^{d \\times k} \\end{equation}\\] The Fischer discriminant generalizes to \\\[\\begin{equation} F \= \\frac{\|W' V\_b W\|}{\|W' V\_w W\|} \\end{equation}\\] where we now use the determinant as the numerator and denominator are no longer scalars. Note that between group variation is now \\(V\_w \= V\_1 \+ V\_2 \+ ... \+ V\_k\\), and the denominator is the determinant of a \\((k \\times k)\\) matrix. The numerator is also the determinant of a \\((k \\times k)\\) matrix, and \\\[\\begin{equation} V\_b \= \\sum\_{i\=1}^k n\_i (x\_i \- \\bar{x}\_i)(x\_i \- \\bar{x}\_i)' \\end{equation}\\] where \\((x\_i \- \\bar{x}\_i)\\) is of dimension \\((d \\times n\_i)\\), so that \\(V\_b\\) is dimension \\((d \\times d)\\). ``` y1 = rep(3,16) y2 = rep(2,16) y3 = rep(1,16) y4 = rep(0,16) y = c(y1,y2,y3,y4) res = lda(y~x) res ``` ``` ## Call: ## lda(y ~ x) ## ## Prior probabilities of groups: ## 0 1 2 3 ## 0.25 0.25 0.25 0.25 ## ## Group means: ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 61.43750 33.18750 11.93750 14.37500 0.888750 6.12500 1.8750 19.5000 ## 1 62.78125 34.53125 11.00000 15.65625 0.782500 7.09375 2.8750 18.1875 ## 2 70.31250 36.59375 13.50000 12.71875 1.094375 6.84375 3.1875 19.4375 ## 3 73.87500 33.55625 14.55625 13.08125 1.145625 7.23125 3.0625 17.5000 ## xFG xFT xX3P ## 0 0.4006875 0.7174375 0.3014375 ## 1 0.3996250 0.6196250 0.3270000 ## 2 0.4223750 0.7055625 0.3260625 ## 3 0.4705000 0.7232500 0.3790000 ## ## Coefficients of linear discriminants: ## LD1 LD2 LD3 ## xPTS -0.03190376 -0.09589269 -0.03170138 ## xREB 0.16962627 0.08677669 -0.11932275 ## xAST 0.08820048 0.47175998 0.04601283 ## xTO -0.20264768 -0.29407195 -0.02550334 ## xA.T 0.02619042 -3.28901817 -1.42081485 ## xSTL 0.23954511 -0.26327278 -0.02694612 ## xBLK 0.05424102 -0.14766348 -0.17703174 ## xPF 0.03678799 0.22610347 -0.09608475 ## xFG 21.25583140 0.48722022 9.50234314 ## xFT 5.42057568 6.39065311 2.72767409 ## xX3P 1.98050128 -2.74869782 0.90901853 ## ## Proportion of trace: ## LD1 LD2 LD3 ## 0.6025 0.3101 0.0873 ``` ``` y_pred = predict(res)$class print(y_pred) ``` ``` ## [1] 3 3 3 3 3 3 3 3 1 3 3 2 0 3 3 3 0 3 2 3 2 2 3 2 2 0 2 2 2 2 2 2 3 1 1 ## [36] 1 0 1 1 1 1 1 1 1 1 1 0 2 2 0 0 0 0 2 0 0 2 0 1 0 1 1 0 0 ## Levels: 0 1 2 3 ``` ``` print(table(y,y_pred)) ``` ``` ## y_pred ## y 0 1 2 3 ## 0 10 3 3 0 ## 1 2 12 1 1 ## 2 2 0 11 3 ## 3 1 1 1 13 ``` ``` print(chisq.test(table(y,y_pred))) ``` ``` ## Warning in chisq.test(table(y, y_pred)): Chi-squared approximation may be ## incorrect ``` ``` ## ## Pearson's Chi-squared test ## ## data: table(y, y_pred) ## X-squared = 78.684, df = 9, p-value = 2.949e-13 ``` The idea is that when we have 4 groups, we project each observation in the data into a 3\-D space, which is then separated by hyperplanes to demarcate the 4 groups. 10\.10 Eigen Systems -------------------- We now move on to understanding some properties of matrices that may be useful in classifying data or deriving its underlying components. We download Treasury interest rate date from the FRED website, <http://research.stlouisfed.org/fred2/>. I have placed the data in a file called “tryrates.txt”. Let’s read in the file. 
``` rates = read.table("DSTMAA_data/tryrates.txt",header=TRUE) print(names(rates)) ``` ``` ## [1] "DATE" "FYGM3" "FYGM6" "FYGT1" "FYGT2" "FYGT3" "FYGT5" "FYGT7" ## [9] "FYGT10" ``` ``` print(head(rates)) ``` ``` ## DATE FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 1 Jun-76 5.41 5.77 6.52 7.06 7.31 7.61 7.75 7.86 ## 2 Jul-76 5.23 5.53 6.20 6.85 7.12 7.49 7.70 7.83 ## 3 Aug-76 5.14 5.40 6.00 6.63 6.86 7.31 7.58 7.77 ## 4 Sep-76 5.08 5.30 5.84 6.42 6.66 7.13 7.41 7.59 ## 5 Oct-76 4.92 5.06 5.50 5.98 6.24 6.75 7.16 7.41 ## 6 Nov-76 4.75 4.88 5.29 5.81 6.09 6.52 6.86 7.29 ``` Understanding eigenvalues and eigenvectors is best done visually. An excellent simple exposition is available at: [http://setosa.io/ev/eigenvectors\-and\-eigenvalues/](http://setosa.io/ev/eigenvectors-and-eigenvalues/) A \\(M \\times M\\) matrix \\(A\\) has attendant \\(M\\) eigenvectors \\(V\\) and eigenvalue \\(\\lambda\\) if we can write \\\[\\begin{equation} \\lambda V \= A \\; V \\end{equation}\\] Starting with matrix \\(A\\), the eigenvalue decomposition gives both \\(V\\) and \\(\\lambda\\). It turns out we can find \\(M\\) such eigenvalues and eigenvectors, as there is no unique solution to this equation. We also require that \\(\\lambda \\neq 0\\). We may implement this in R as follows, setting matrix \\(A\\) equal to the covariance matrix of the rates of different maturities: ``` A = matrix(c(5,2,1,4),2,2) E = eigen(A) print(E) ``` ``` ## $values ## [1] 6 3 ## ## $vectors ## [,1] [,2] ## [1,] 0.7071068 -0.4472136 ## [2,] 0.7071068 0.8944272 ``` ``` v1 = E$vectors[,1] v2 = E$vectors[,2] e1 = E$values[1] e2 = E$values[2] print(t(e1*v1)) ``` ``` ## [,1] [,2] ## [1,] 4.242641 4.242641 ``` ``` print(A %*% v1) ``` ``` ## [,1] ## [1,] 4.242641 ## [2,] 4.242641 ``` ``` print(t(e2*v2)) ``` ``` ## [,1] [,2] ## [1,] -1.341641 2.683282 ``` ``` print(A %*% v2) ``` ``` ## [,1] ## [1,] -1.341641 ## [2,] 2.683282 ``` We see that the origin, eigenvalues and eigenvectors comprise \\(n\\) eigenspaces. The line from the origin through an eigenvector (i.e., a coordinate given by any one eigenvector) is called an “eigenspace”. All points on eigenspaces are themselves eigenvectors. These eigenpaces are dimensions in which the relationships between vectors in the matrix \\(A\\) load. We may also think of the matrix \\(A\\) as an “operator” or function on vectors/matrices. 
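To see the eigenspace point concretely, here is a small check, reusing the matrix \\(A\\) and the first eigen pair extracted above, that any nonzero scalar multiple of an eigenvector is again an eigenvector with the same eigenvalue.

```
c_scale = 3.7   #any nonzero scalar
w = c_scale*v1   #a point on the first eigenspace
print(A %*% w)   #equals e1*w, so w is itself an eigenvector of A
print(e1*w)
```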
``` rates = as.matrix(rates[,2:9]) eigen(cov(rates)) ``` ``` ## $values ## [1] 7.070996e+01 1.655049e+00 9.015819e-02 1.655911e-02 3.001199e-03 ## [6] 2.145993e-03 1.597282e-03 8.562439e-04 ## ## $vectors ## [,1] [,2] [,3] [,4] [,5] [,6] ## [1,] 0.3596990 -0.49201202 0.59353257 -0.38686589 0.34419189 -0.07045281 ## [2,] 0.3581944 -0.40372601 0.06355170 0.20153645 -0.79515713 0.07823632 ## [3,] 0.3875117 -0.28678312 -0.30984414 0.61694982 0.45913099 0.20442661 ## [4,] 0.3753168 -0.01733899 -0.45669522 -0.19416861 -0.03906518 -0.46590654 ## [5,] 0.3614653 0.13461055 -0.36505588 -0.41827644 0.06076305 -0.14203743 ## [6,] 0.3405515 0.31741378 -0.01159915 -0.18845999 0.03366277 0.72373049 ## [7,] 0.3260941 0.40838395 0.19017973 -0.05000002 -0.16835391 0.09196861 ## [8,] 0.3135530 0.47616732 0.41174955 0.42239432 0.06132982 -0.42147082 ## [,7] [,8] ## [1,] -0.04282858 0.03645143 ## [2,] 0.15571962 -0.03744201 ## [3,] -0.10492279 -0.16540673 ## [4,] -0.30395044 0.54916644 ## [5,] 0.45521861 -0.55849003 ## [6,] 0.19935685 0.42773742 ## [7,] -0.70469469 -0.39347299 ## [8,] 0.35631546 0.13650940 ``` ``` rcorr = cor(rates) rcorr ``` ``` ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 ## FYGM3 1.0000000 0.9975369 0.9911255 0.9750889 0.9612253 0.9383289 ## FYGM6 0.9975369 1.0000000 0.9973496 0.9851248 0.9728437 0.9512659 ## FYGT1 0.9911255 0.9973496 1.0000000 0.9936959 0.9846924 0.9668591 ## FYGT2 0.9750889 0.9851248 0.9936959 1.0000000 0.9977673 0.9878921 ## FYGT3 0.9612253 0.9728437 0.9846924 0.9977673 1.0000000 0.9956215 ## FYGT5 0.9383289 0.9512659 0.9668591 0.9878921 0.9956215 1.0000000 ## FYGT7 0.9220409 0.9356033 0.9531304 0.9786511 0.9894029 0.9984354 ## FYGT10 0.9065636 0.9205419 0.9396863 0.9680926 0.9813066 0.9945691 ## FYGT7 FYGT10 ## FYGM3 0.9220409 0.9065636 ## FYGM6 0.9356033 0.9205419 ## FYGT1 0.9531304 0.9396863 ## FYGT2 0.9786511 0.9680926 ## FYGT3 0.9894029 0.9813066 ## FYGT5 0.9984354 0.9945691 ## FYGT7 1.0000000 0.9984927 ## FYGT10 0.9984927 1.0000000 ``` ### 10\.10\.1 Intuition So we calculated the eigenvalues and eigenvectors for the covariance matrix of the data. What does it really mean? Think of the covariance matrix as the summarization of the connections between the rates of different maturities in our data set. What we do not know is how many dimensions of commonality there are in these rates, and what is the relative importance of these dimensions. For each dimension of commonality, we wish to ask (a) how important is that dimension (the eigenvalue), and (b) the relative influence of that dimension on each rate (the values in the eigenvector). The most important dimension is the one with the highest eigenvalue, known as the **principal** eigenvalue, corresponding to which we have the principal eigenvector. It should be clear by now that the eigenvalue and its eigenvector are **eigen pairs**. It should also be intuitive why we call this the **eigenvalue decomposition** of a matrix. 10\.11 Determinants ------------------- These functions of a matrix are also difficult to get an intuition for. But its best to think of the determinant as one possible function that returns the “sizing” of a matrix. More specifically, it relates to the volume of the space defined by the matrix. But not exactly, because it can also be negative, though the absolute size will give some sense of volume as well. For example, let’s take the two\-dimensional identity matrix, which defines the unit square. 
``` a = matrix(0,2,2); diag(a) = 1 print(det(a)) ``` ``` ## [1] 1 ``` ``` print(det(2*a)) ``` ``` ## [1] 4 ``` We see immediately that when we multiply the matrix by 2, we get a determinant value that is four times the original, because the volume in two\-dimensional space is area, and that has changed by 4\. To verify, we’ll try the three\-dimensional identity matrix. ``` a = matrix(0,3,3); diag(a) = 1 print(det(a)) ``` ``` ## [1] 1 ``` ``` print(det(2*a)) ``` ``` ## [1] 8 ``` Now we see that the orginal determinant has grown by \\(2^3\\) when all dimensions are doubled. We may also distort just one dimension, and see what happens. ``` a = matrix(0,2,2); diag(a) = 1 print(det(a)) ``` ``` ## [1] 1 ``` ``` a[2,2] = 2 print(det(a)) ``` ``` ## [1] 2 ``` That’s pretty self\-explanatory! 10\.12 Dimension Reduction: Factor Analysis and PCA --------------------------------------------------- **Factor analysis** is the use of eigenvalue decomposition to uncover the underlying structure of the data. Given a data set of observations and explanatory variables, factor analysis seeks to achieve a decomposition with these two properties: 1. Obtain a reduced dimension set of explanatory variables, known as derived/extracted/discovered factors. Factors must be **orthogonal**, i.e., uncorrelated with each other. 2. Obtain data reduction, i.e., suggest a limited set of variables. Each such subset is a manifestation of an abstract underlying dimension. 3. These subsets are ordered in terms of their ability to explain the variation across observations. See the article by Richard Darlington: <http://www.psych.cornell.edu/Darlington/factor.htm>, which is as good as any explanation one can get. See also the article by Statsoft: <http://www.statsoft.com/textbook/stfacan.html>. ### 10\.12\.1 Notation * Observations: \\(y\_i, i\=1\...N\\). * Original explanatory variables: \\(x\_{ik}, k\=1\...K\\). * Factors: \\(F\_j, j\=1\...M\\). * \\(M \< K\\). ### 10\.12\.2 The Idea As you can see in the rates data, there are eight different rates. If we wanted to model the underlying drivers of this system of rates, we could assume a separate driver for each one leading to \\(K\=8\\) underlying factors. But the whole idea of factor analysis is to reduce the number of drivers that exist. So we may want to go with a smaller number of \\(M \< K\\) factors. The main concept here is to **project** the variables \\(x \\in R^{K}\\) onto the reduced factor set \\(F \\in R^M\\) such that we can explain most of the variables by the factors. Hence we are looking for a relation \\\[\\begin{equation} x \= B F \\end{equation}\\] where \\(B \= \\{b\_{kj}\\}\\in R^{K \\times M}\\) is a matrix of factor **loadings** for the variables. Through matrix \\(B\\), \\(x\\) may be represented in smaller dimension \\(M\\). The entries in matrix \\(B\\) may be positive or negative. Negative loadings mean that the variable is negatively correlated with the factor. The whole idea is that we want to replace the relation of \\(y\\) to \\(x\\) with a relation of \\(y\\) to a reduced set \\(F\\). Once we have the set of factors defined, then the \\(N\\) observations \\(y\\) may be expressed in terms of the factors through a factor **score matrix** \\(A \= \\{a\_{ij}\\} \\in R^{N \\times M}\\) as follows: \\\[\\begin{equation} y \= A F \\end{equation}\\] Again, factor scores may be positive or negative. There are many ways in which such a transformation from variables to factors might be undertaken. We look at the most common one. 
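To make the idea concrete, here is a minimal simulation sketch with made\-up loadings: \\(M\=2\\) hypothetical factors generate \\(K\=6\\) observed variables through a loadings matrix \\(B\\) plus idiosyncratic noise, and the **factanal** function (introduced later in this chapter) recovers the two\-block structure of \\(B\\), up to sign and rotation.

```
set.seed(42)
N = 500; M = 2; K = 6
Fac = matrix(rnorm(N*M),N,M)   #latent factor realizations (unobserved)
B = matrix(c(0.9,0.8,0.7,0.1,0.2,0.0,   #made-up loadings: variables 1-3 on factor 1
             0.0,0.1,0.2,0.8,0.9,0.7),K,M)   #variables 4-6 on factor 2
x = Fac %*% t(B) + 0.3*matrix(rnorm(N*K),N,K)   #observed data = common factors + noise
fa = factanal(x,factors=2)
print(fa$loadings)   #recovers the two-block pattern
```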
10\.13 Principal Components Analysis (PCA) ------------------------------------------ In PCA, each component (factor) is viewed as a weighted combination of the other variables (this is not always the way factor analysis is implemented, but is certainly one of the most popular). The starting point for PCA is the covariance matrix of the data. Essentially what is involved is an eigenvalue analysis of this matrix to extract the principal eigenvectors. We can do the analysis using the R statistical package. Here is the sample session: ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = ncaa[4:14] print(names(x)) ``` ``` ## [1] "PTS" "REB" "AST" "TO" "A.T" "STL" "BLK" "PF" "FG" "FT" "X3P" ``` ``` result = princomp(x) summary(result) ``` ``` ## Importance of components: ## Comp.1 Comp.2 Comp.3 Comp.4 ## Standard deviation 9.8747703 5.2870154 3.95773149 3.19879732 ## Proportion of Variance 0.5951046 0.1705927 0.09559429 0.06244717 ## Cumulative Proportion 0.5951046 0.7656973 0.86129161 0.92373878 ## Comp.5 Comp.6 Comp.7 Comp.8 ## Standard deviation 2.43526651 2.04505010 1.53272256 0.1314860827 ## Proportion of Variance 0.03619364 0.02552391 0.01433727 0.0001055113 ## Cumulative Proportion 0.95993242 0.98545633 0.99979360 0.9998991100 ## Comp.9 Comp.10 Comp.11 ## Standard deviation 1.062179e-01 6.591218e-02 3.007832e-02 ## Proportion of Variance 6.885489e-05 2.651372e-05 5.521365e-06 ## Cumulative Proportion 9.999680e-01 9.999945e-01 1.000000e+00 ``` ``` screeplot(result) ``` ``` screeplot(result,type="lines") ``` ``` result$loadings ``` ``` ## ## Loadings: ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 Comp.9 Comp.10 ## PTS 0.964 0.240 ## REB 0.940 -0.316 ## AST 0.257 -0.228 -0.283 -0.431 -0.778 ## TO 0.194 -0.908 -0.116 0.313 -0.109 ## A.T 0.712 0.642 0.262 ## STL -0.194 0.205 0.816 0.498 ## BLK 0.516 -0.849 ## PF -0.110 -0.223 0.862 -0.364 -0.228 ## FG ## FT 0.619 -0.762 0.175 ## X3P -0.315 0.948 ## Comp.11 ## PTS ## REB ## AST ## TO ## A.T ## STL ## BLK ## PF ## FG -0.996 ## FT ## X3P ## ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## SS loadings 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 ## Proportion Var 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 ## Cumulative Var 0.091 0.182 0.273 0.364 0.455 0.545 0.636 0.727 ## Comp.9 Comp.10 Comp.11 ## SS loadings 1.000 1.000 1.000 ## Proportion Var 0.091 0.091 0.091 ## Cumulative Var 0.818 0.909 1.000 ``` ``` print(names(result)) ``` ``` ## [1] "sdev" "loadings" "center" "scale" "n.obs" "scores" ## [7] "call" ``` ``` result$sdev ``` ``` ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 ## 9.87477028 5.28701542 3.95773149 3.19879732 2.43526651 2.04505010 ## Comp.7 Comp.8 Comp.9 Comp.10 Comp.11 ## 1.53272256 0.13148608 0.10621791 0.06591218 0.03007832 ``` ``` biplot(result) ``` The alternative function **prcomp** returns the same stuff, but gives all the factor loadings immediately. 
``` prcomp(x) ``` ``` ## Standard deviations: ## [1] 9.95283292 5.32881066 3.98901840 3.22408465 2.45451793 2.06121675 ## [7] 1.54483913 0.13252551 0.10705759 0.06643324 0.03031610 ## ## Rotation: ## PC1 PC2 PC3 PC4 PC5 ## PTS -0.963808450 -0.052962387 0.018398319 0.094091517 -0.240334810 ## REB -0.022483140 -0.939689339 0.073265952 0.026260543 0.315515827 ## AST -0.256799635 0.228136664 -0.282724110 -0.430517969 0.778063875 ## TO 0.061658120 -0.193810802 -0.908005124 -0.115659421 -0.313055838 ## A.T -0.021008035 0.030935414 0.035465079 -0.022580766 0.068308725 ## STL -0.006513483 0.081572061 -0.193844456 0.205272135 0.014528901 ## BLK -0.012711101 -0.070032329 0.035371935 0.073370876 -0.034410932 ## PF -0.012034143 0.109640846 -0.223148274 0.862316681 0.364494150 ## FG -0.003729350 0.002175469 -0.001708722 -0.006568270 -0.001837634 ## FT -0.001210397 0.003852067 0.001793045 0.008110836 -0.019134412 ## X3P -0.003804597 0.003708648 -0.001211492 -0.002352869 -0.003849550 ## PC6 PC7 PC8 PC9 PC10 ## PTS 0.029408534 -0.0196304356 0.0026169995 -0.004516521 0.004889708 ## REB -0.040851345 -0.0951099200 -0.0074120623 0.003557921 -0.008319362 ## AST -0.044767132 0.0681222890 0.0359559264 0.056106512 0.015018370 ## TO 0.108917779 0.0864648004 -0.0416005762 -0.039363263 -0.012726102 ## A.T -0.004846032 0.0061047937 -0.7122315249 -0.642496008 -0.262468560 ## STL -0.815509399 -0.4981690905 0.0008726057 -0.008845999 -0.005846547 ## BLK -0.516094006 0.8489313874 0.0023262933 -0.001364270 0.008293758 ## PF 0.228294830 0.0972181527 0.0005835116 0.001302210 -0.001385509 ## FG 0.004118140 0.0041758373 0.0848448651 -0.019610637 0.030860027 ## FT -0.005525032 0.0001301938 -0.6189703010 0.761929615 -0.174641147 ## X3P 0.001012866 0.0094289825 0.3151374823 0.038279107 -0.948194531 ## PC11 ## PTS 0.0037883918 ## REB -0.0043776255 ## AST 0.0058744543 ## TO -0.0001063247 ## A.T -0.0560584903 ## STL -0.0062405867 ## BLK 0.0013213701 ## PF -0.0043605809 ## FG -0.9956716097 ## FT -0.0731951151 ## X3P -0.0031976296 ``` ### 10\.13\.1 Difference between PCA and LDA ### 10\.13\.2 Application to Treasury Yield Curves We had previously downloaded monthly data for constant maturity yields from June 1976 to December 2006\. Here is the 3D plot. It shows the change in the yield curve over time for a range of maturities. 
``` persp(rates,theta=30,phi=0,xlab="years",ylab="maturity",zlab="rates") ``` ``` tryrates = read.table("DSTMAA_data/tryrates.txt",header=TRUE) rates = as.matrix(tryrates[2:9]) result = princomp(rates) result$loadings ``` ``` ## ## Loadings: ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## FYGM3 -0.360 -0.492 0.594 -0.387 0.344 ## FYGM6 -0.358 -0.404 0.202 -0.795 -0.156 ## FYGT1 -0.388 -0.287 -0.310 0.617 0.459 0.204 0.105 -0.165 ## FYGT2 -0.375 -0.457 -0.194 -0.466 0.304 0.549 ## FYGT3 -0.361 0.135 -0.365 -0.418 -0.142 -0.455 -0.558 ## FYGT5 -0.341 0.317 -0.188 0.724 -0.199 0.428 ## FYGT7 -0.326 0.408 0.190 -0.168 0.705 -0.393 ## FYGT10 -0.314 0.476 0.412 0.422 -0.421 -0.356 0.137 ## ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## SS loadings 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 ## Proportion Var 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 ## Cumulative Var 0.125 0.250 0.375 0.500 0.625 0.750 0.875 1.000 ``` ``` result$sdev ``` ``` ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 ## 8.39745750 1.28473300 0.29985418 0.12850678 0.05470852 0.04626171 ## Comp.7 Comp.8 ## 0.03991152 0.02922175 ``` ``` summary(result) ``` ``` ## Importance of components: ## Comp.1 Comp.2 Comp.3 Comp.4 ## Standard deviation 8.397458 1.28473300 0.299854180 0.1285067846 ## Proportion of Variance 0.975588 0.02283477 0.001243916 0.0002284667 ## Cumulative Proportion 0.975588 0.99842275 0.999666666 0.9998951326 ## Comp.5 Comp.6 Comp.7 Comp.8 ## Standard deviation 5.470852e-02 4.626171e-02 3.991152e-02 2.922175e-02 ## Proportion of Variance 4.140766e-05 2.960835e-05 2.203775e-05 1.181363e-05 ## Cumulative Proportion 9.999365e-01 9.999661e-01 9.999882e-01 1.000000e+00 ``` ### 10\.13\.3 Results The results are interesting. We see that the loadings are large in the first three component vectors for all maturity rates. The loadings correspond to a classic feature of the yield curve, i.e., there are three components: level, slope, and curvature. Note that the first component has almost equal loadings for all rates that are all identical in sign. Hence, this is the **level** factor. The second component has negative loadings for the shorter maturity rates and positive loadings for the later maturity ones. Therefore, when this factor moves up, the short rates will go down, and the long rates will go up, resulting in a steepening of the yield curve. If the factor goes down, the yield curve will become flatter. Hence, the second principal component is clearly the **slope** factor. Examining the loadings of the third principal component should make it clear that the effect of this factor is to modulate the **curvature** or hump of the yield curve. Still, from looking at the results, it is clear that 97% of the common variation is explained by just the first factor, and a wee bit more by the next two. The resultant **biplot** shows the dominance of the main component. ``` biplot(result) ``` 10\.14 Difference between PCA and FA ------------------------------------ The difference between PCA and FA is that for the purposes of matrix computations PCA assumes that all variance is common, with all unique factors set equal to zero; while FA assumes that there is some unique variance. Hence PCA may also be thought of as a subset of FA. The level of unique variance is dictated by the FA model which is chosen. Accordingly, PCA is a model of a closed system, while FA is a model of an open system. FA tries to decompose the correlation matrix into common and unique portions. 
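One way to see the closed\-system point is a small check, reusing the **princomp** output for the Treasury rates above: when all eight components are retained, they reproduce the demeaned data exactly, leaving no unique variance behind.

```
centered = scale(rates,center=TRUE,scale=FALSE)   #demeaned rates
recon = result$scores %*% t(unclass(result$loadings))   #rebuild the data from all 8 components
print(max(abs(centered - recon)))   #essentially zero
```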
10\.15 Factor Rotation ---------------------- Finally, there are some times when the variables would load better on the factors if the factor system were to be rotated. This called factor rotation, and many times the software does this automatically. Remember that we decomposed variables \\(x\\) as follows: \\\[\\begin{equation} x \= B\\;F \+ e \\end{equation}\\] where \\(x\\) is dimension \\(K\\), \\(B \\in R^{K \\times M}\\), \\(F \\in R^{M}\\), and \\(e\\) is a \\(K\\)\-dimension vector. This implies that \\\[\\begin{equation} Cov(x) \= BB' \+ \\psi \\end{equation}\\] Recall that \\(B\\) is the matrix of factor loadings. The system remains unchanged if \\(B\\) is replaced by \\(BG\\), where \\(G \\in R^{M \\times M}\\), and \\(G\\) is orthogonal. Then we call \\(G\\) a **rotation** of \\(B\\). The idea of rotation is easier to see with the following diagram. Two conditions need to be satisfied: (a) The new axis (and the old one) should be orthogonal. (b) The difference in loadings on the factors by each variable must increase. In the diagram below we can see that the rotation has made the variables align better along the new axis system. ### 10\.15\.1 Using the factor analysis function To illustrate, let’s undertake a factor analysis of the Treasury rates data. In R, we can implement it generally with the **factanal** command. ``` factanal(rates,2) ``` ``` ## ## Call: ## factanal(x = rates, factors = 2) ## ## Uniquenesses: ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 0.006 0.005 0.005 0.005 0.005 0.005 0.005 0.005 ## ## Loadings: ## Factor1 Factor2 ## FYGM3 0.843 0.533 ## FYGM6 0.826 0.562 ## FYGT1 0.793 0.608 ## FYGT2 0.726 0.686 ## FYGT3 0.681 0.731 ## FYGT5 0.617 0.786 ## FYGT7 0.579 0.814 ## FYGT10 0.546 0.836 ## ## Factor1 Factor2 ## SS loadings 4.024 3.953 ## Proportion Var 0.503 0.494 ## Cumulative Var 0.503 0.997 ## ## Test of the hypothesis that 2 factors are sufficient. ## The chi square statistic is 3556.38 on 13 degrees of freedom. ## The p-value is 0 ``` Notice how the first factor explains the shorter maturities better and the second factor explains the longer maturity rates. Hence, the two factors cover the range of maturities. Note that the ability of the factors to separate the variables increases when we apply a **factor rotation**: ``` factanal(rates,2,rotation="promax") ``` ``` ## ## Call: ## factanal(x = rates, factors = 2, rotation = "promax") ## ## Uniquenesses: ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 0.006 0.005 0.005 0.005 0.005 0.005 0.005 0.005 ## ## Loadings: ## Factor1 Factor2 ## FYGM3 0.110 0.902 ## FYGM6 0.174 0.846 ## FYGT1 0.282 0.747 ## FYGT2 0.477 0.560 ## FYGT3 0.593 0.443 ## FYGT5 0.746 0.284 ## FYGT7 0.829 0.194 ## FYGT10 0.895 0.118 ## ## Factor1 Factor2 ## SS loadings 2.745 2.730 ## Proportion Var 0.343 0.341 ## Cumulative Var 0.343 0.684 ## ## Factor Correlations: ## Factor1 Factor2 ## Factor1 1.000 -0.854 ## Factor2 -0.854 1.000 ## ## Test of the hypothesis that 2 factors are sufficient. ## The chi square statistic is 3556.38 on 13 degrees of freedom. ## The p-value is 0 ``` The factors have been reversed after the rotation. Now the first factor explains long rates and the second factor explains short rates. 
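To connect this back to the decomposition \\(Cov(x) \= BB' \+ \\psi\\) given above, here is a minimal check, assuming the **rates** matrix is still in memory, that the two\-factor loadings and uniquenesses approximately rebuild the observed correlation matrix.

```
f = factanal(rates,2)
L = unclass(f$loadings)   #B: the 8 x 2 matrix of loadings
fitted_corr = L %*% t(L) + diag(f$uniquenesses)   #BB' + psi
print(round(fitted_corr - cor(rates),3))   #residuals should be close to zero
```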
If we want the time series of the factors, use the following command: ``` result = factanal(rates,2,scores="regression") ts = result$scores par(mfrow=c(2,1)) plot(ts[,1],type="l") plot(ts[,2],type="l") ``` ``` result$scores ``` ``` ## Factor1 Factor2 ## [1,] -0.355504878 0.3538523566 ## [2,] -0.501355106 0.4219522836 ## [3,] -0.543664379 0.3889362268 ## [4,] -0.522169984 0.2906034115 ## [5,] -0.566607393 0.1900987229 ## [6,] -0.584273677 0.1158550772 ## [7,] -0.617786769 -0.0509882532 ## [8,] -0.624247257 0.1623048344 ## [9,] -0.677009820 0.2997973824 ## [10,] -0.733334654 0.3687408921 ## [11,] -0.727719655 0.3139994343 ## [12,] -0.500063146 0.2096808039 ## [13,] -0.384131543 0.0410744861 ## [14,] -0.295154982 0.0079262851 ## [15,] -0.074469748 -0.0869377108 ## [16,] 0.116075785 -0.2371344010 ## [17,] 0.281023133 -0.2477845555 ## [18,] 0.236661204 -0.1984323585 ## [19,] 0.157626371 -0.0889735514 ## [20,] 0.243074384 -0.0298846923 ## [21,] 0.229996509 0.0114794387 ## [22,] 0.147494917 0.0837694919 ## [23,] 0.142866056 0.1429388300 ## [24,] 0.217975571 0.1794260505 ## [25,] 0.333131324 0.1632220682 ## [26,] 0.427011092 0.1745390683 ## [27,] 0.526015625 0.0105962505 ## [28,] 0.930970981 -0.2759351140 ## [29,] 1.099941917 -0.3067850535 ## [30,] 1.531649405 -0.5218883427 ## [31,] 1.612359229 -0.4795275595 ## [32,] 1.674541369 -0.4768444035 ## [33,] 1.628259706 -0.4725850979 ## [34,] 1.666619753 -0.4812732821 ## [35,] 1.607802989 -0.4160125641 ## [36,] 1.637193575 -0.4306264237 ## [37,] 1.453482425 -0.4656836872 ## [38,] 1.525156467 -0.5096808367 ## [39,] 1.674848519 -0.5570384352 ## [40,] 2.049336334 -0.6730573078 ## [41,] 2.541609184 -0.5458070626 ## [42,] 2.420122121 -0.3166891875 ## [43,] 2.598308192 -0.6327155757 ## [44,] 2.391009307 -0.3356467032 ## [45,] 2.311818441 0.5221104615 ## [46,] 3.605474901 -0.1557021034 ## [47,] 2.785430927 -0.2516679525 ## [48,] 0.485057576 0.7228887760 ## [49,] -0.189141617 0.9855640276 ## [50,] 0.122281914 0.9105895503 ## [51,] 0.511485539 1.1255567094 ## [52,] 1.064745422 1.0034602577 ## [53,] 1.750902392 0.6022272759 ## [54,] 2.603592320 0.4009099335 ## [55,] 3.355620751 -0.0481064328 ## [56,] 3.096436233 -0.0475952393 ## [57,] 2.790570579 0.4732116005 ## [58,] 1.952978382 1.0839764053 ## [59,] 2.007654491 1.3008974495 ## [60,] 3.280609956 0.6027071203 ## [61,] 2.650522546 0.7811051077 ## [62,] 2.600300068 1.1915626752 ## [63,] 2.766003209 1.4022416607 ## [64,] 2.146320286 2.0370917324 ## [65,] 1.479726566 2.3555071345 ## [66,] 0.552668203 2.1652137124 ## [67,] 0.556340456 2.3056213923 ## [68,] 1.031484956 2.3872744033 ## [69,] 1.723405950 1.8108125155 ## [70,] 1.449614947 1.7709138593 ## [71,] 1.460961876 1.7702209124 ## [72,] 1.135992230 1.8967045582 ## [73,] 1.135689418 2.2082173178 ## [74,] 0.666878126 2.3873764566 ## [75,] -0.383975947 2.7314819419 ## [76,] -0.403354427 2.4378117276 ## [77,] -0.261254207 1.6718118006 ## [78,] 0.010954309 1.2752998691 ## [79,] -0.092289703 1.3197429280 ## [80,] -0.174691946 1.3083222077 ## [81,] -0.097560278 1.3574900674 ## [82,] 0.150646660 1.0910471461 ## [83,] 0.121953667 1.0829765752 ## [84,] 0.078801527 1.1050249969 ## [85,] 0.278156097 1.2016627452 ## [86,] 0.258501480 1.4588567047 ## [87,] 0.210284188 1.6848813104 ## [88,] 0.056036784 1.7137233052 ## [89,] -0.118921800 1.7816790973 ## [90,] -0.117431498 1.8372880351 ## [91,] -0.040073664 1.8448115903 ## [92,] -0.053649940 1.7738312784 ## [93,] -0.027125996 1.8236531568 ## [94,] 0.049919465 1.9851081358 ## [95,] 0.029704916 2.1507133812 ## [96,] -0.088880625 
2.5931510323 ## [97,] -0.047171830 2.6850656261 ## [98,] 0.127458117 2.4718496073 ## [99,] 0.538302707 1.8902746778 ## [100,] 0.519981276 1.8260867038 ## [101,] 0.287350732 1.8070920575 ## [102,] -0.143185374 1.8168901486 ## [103,] -0.477616832 1.9938013470 ## [104,] -0.613354610 2.0298832121 ## [105,] -0.412838433 1.9458918523 ## [106,] -0.297013068 2.0396170842 ## [107,] -0.510299939 1.9824043717 ## [108,] -0.582920837 1.7520202839 ## [109,] -0.620119822 1.4751073269 ## [110,] -0.611872307 1.5171154200 ## [111,] -0.547668692 1.5025027015 ## [112,] -0.583785173 1.5461201027 ## [113,] -0.495210980 1.4215226364 ## [114,] -0.251451362 1.0449328603 ## [115,] -0.082066002 0.6903391640 ## [116,] -0.033194050 0.6316345737 ## [117,] 0.182241740 0.2936690259 ## [118,] 0.301423491 -0.1838473881 ## [119,] 0.189478645 -0.3060949875 ## [120,] 0.034277252 0.0074803060 ## [121,] 0.031909353 0.0570923793 ## [122,] 0.027356842 -0.1748564026 ## [123,] -0.100678983 -0.1801001545 ## [124,] -0.404727556 0.1406985128 ## [125,] -0.424620066 0.1335285826 ## [126,] -0.238905541 -0.0635401642 ## [127,] -0.074664082 -0.2315185060 ## [128,] -0.126155469 -0.2071550795 ## [129,] -0.095540492 -0.1620034845 ## [130,] -0.078865638 -0.1717327847 ## [131,] -0.323056834 0.3504769061 ## [132,] -0.515629047 0.7919922740 ## [133,] -0.450893817 0.6472867847 ## [134,] -0.549249387 0.7161373931 ## [135,] -0.461526588 0.7850863426 ## [136,] -0.477585081 1.0841412516 ## [137,] -0.607936481 1.2313669640 ## [138,] -0.602383745 0.9170263524 ## [139,] -0.561466443 0.9439199208 ## [140,] -0.440679406 0.7183641932 ## [141,] -0.379694393 0.4646994387 ## [142,] -0.448884489 0.5804226311 ## [143,] -0.447585272 0.7304696952 ## [144,] -0.394150535 0.8590552893 ## [145,] -0.208356333 0.6731650551 ## [146,] -0.089538357 0.6552198933 ## [147,] 0.063317301 0.6517126106 ## [148,] 0.251481083 0.3963555025 ## [149,] 0.401325001 0.2069459108 ## [150,] 0.566691007 0.1813057709 ## [151,] 0.730739423 0.1753541513 ## [152,] 0.828629006 0.1125881742 ## [153,] 0.937069127 0.0763716514 ## [154,] 1.044340934 0.0956119916 ## [155,] 1.009393906 0.0347124400 ## [156,] 1.003079712 -0.1255034699 ## [157,] 1.017520561 -0.4004578618 ## [158,] 0.932546637 -0.5165964072 ## [159,] 0.952361490 -0.4406600026 ## [160,] 0.875515542 -0.3342672213 ## [161,] 0.869656935 -0.4237046276 ## [162,] 0.888125852 -0.5145540230 ## [163,] 0.861924343 -0.5076632865 ## [164,] 0.738497876 -0.2536767792 ## [165,] 0.691510554 -0.0954080233 ## [166,] 0.741059090 -0.0544984271 ## [167,] 0.614055561 0.1175151057 ## [168,] 0.583992721 0.1208051871 ## [169,] 0.655094889 -0.0609062254 ## [170,] 0.585834845 -0.0430834033 ## [171,] 0.348303688 0.1979721122 ## [172,] 0.231869484 0.3331562586 ## [173,] 0.200162810 0.2747729337 ## [174,] 0.267236920 0.0828341446 ## [175,] 0.210187651 -0.0004188853 ## [176,] -0.109270296 0.2268927070 ## [177,] -0.213761239 0.1965527314 ## [178,] -0.348143133 0.4200966364 ## [179,] -0.462961583 0.4705859027 ## [180,] -0.578054300 0.5511131060 ## [181,] -0.593897266 0.6647046884 ## [182,] -0.606218752 0.6648334975 ## [183,] -0.633747164 0.4861920257 ## [184,] -0.595576784 0.3376759766 ## [185,] -0.655205129 0.2879100847 ## [186,] -0.877512941 0.3630640184 ## [187,] -1.042216136 0.3097316247 ## [188,] -1.210234114 0.4345481035 ## [189,] -1.322308783 0.6532822938 ## [190,] -1.277192666 0.7643285790 ## [191,] -1.452808921 0.8312821327 ## [192,] -1.487541641 0.8156176243 ## [193,] -1.394870534 0.6686928699 ## [194,] -1.479383323 0.4841365561 ## [195,] -1.406886161 
0.3273062670 ## [196,] -1.492942737 0.3000294646 ## [197,] -1.562195349 0.4406992406 ## [198,] -1.516051602 0.5752479903 ## [199,] -1.451353552 0.5211772634 ## [200,] -1.501708646 0.4624047169 ## [201,] -1.354991806 0.1902649452 ## [202,] -1.228608089 -0.0070402815 ## [203,] -1.267977350 -0.0029138561 ## [204,] -1.230161999 0.0042656449 ## [205,] -1.096818811 -0.0947205249 ## [206,] -1.050883407 -0.1864794956 ## [207,] -1.002987371 -0.2674961604 ## [208,] -0.888334747 -0.4730245331 ## [209,] -0.832011974 -0.5241786702 ## [210,] -0.950806163 -0.2717874846 ## [211,] -0.990904734 -0.2173246581 ## [212,] -1.025888696 -0.2110302502 ## [213,] -0.961207504 -0.1336593297 ## [214,] -1.008873152 0.1426874706 ## [215,] -1.066127710 0.4267411899 ## [216,] -0.832669187 0.3633991700 ## [217,] -0.804268297 0.3062188682 ## [218,] -0.775554360 0.3751582494 ## [219,] -0.654699498 0.2680646665 ## [220,] -0.655827369 0.3622377616 ## [221,] -0.572138953 0.4346262554 ## [222,] -0.446528852 0.4693814204 ## [223,] -0.065472508 0.2004701690 ## [224,] -0.047390852 0.1708246675 ## [225,] 0.033716643 -0.0546444756 ## [226,] 0.090511779 -0.2360703511 ## [227,] 0.096712210 -0.3211426773 ## [228,] 0.263153818 -0.6427860627 ## [229,] 0.327938463 -0.8977202535 ## [230,] 0.227009433 -0.7738217993 ## [231,] 0.146847582 -0.6164082349 ## [232,] 0.217408892 -0.7820706869 ## [233,] 0.303059068 -0.9119089249 ## [234,] 0.346164990 -1.0156070316 ## [235,] 0.344495268 -1.0989068333 ## [236,] 0.254605496 -1.0839365333 ## [237,] 0.076434520 -0.9212083749 ## [238,] -0.038930459 -0.5853123528 ## [239,] -0.124579936 -0.3899503999 ## [240,] -0.184503898 -0.2610908904 ## [241,] -0.195782588 -0.1682655163 ## [242,] -0.130929970 -0.2396129985 ## [243,] -0.107305460 -0.3638191317 ## [244,] -0.146037350 -0.2440039282 ## [245,] -0.091759778 -0.4265627928 ## [246,] 0.060904468 -0.6770486218 ## [247,] -0.021981240 -0.5691143174 ## [248,] -0.098778176 -0.3937451878 ## [249,] -0.046565752 -0.4968429844 ## [250,] -0.074221981 -0.3346834015 ## [251,] -0.114633531 -0.2075481471 ## [252,] -0.080181397 -0.3167544243 ## [253,] -0.077245027 -0.4075464988 ## [254,] 0.067095102 -0.6330318266 ## [255,] 0.070287704 -0.6063439043 ## [256,] 0.034358274 -0.6110384546 ## [257,] 0.122570752 -0.7498264729 ## [258,] 0.268350996 -0.9191662258 ## [259,] 0.341928786 -0.9953776859 ## [260,] 0.358493675 -1.1493486058 ## [261,] 0.366995992 -1.1315765328 ## [262,] 0.308211094 -1.0360637068 ## [263,] 0.296634032 -1.0283183308 ## [264,] 0.333921857 -1.0482262664 ## [265,] 0.399654634 -1.1547504178 ## [266,] 0.384082293 -1.1639983135 ## [267,] 0.398207702 -1.2498402091 ## [268,] 0.458285541 -1.5595689354 ## [269,] 0.190961643 -1.5179769824 ## [270,] 0.312795727 -1.4594244181 ## [271,] 0.384110006 -1.5668180503 ## [272,] 0.289341234 -1.4408671342 ## [273,] 0.219416836 -1.2581560002 ## [274,] 0.109564976 -1.0724088237 ## [275,] 0.062406607 -1.0647289538 ## [276,] -0.003233728 -0.8644137409 ## [277,] -0.073271391 -0.6429640308 ## [278,] -0.092114043 -0.6751620268 ## [279,] -0.035775597 -0.6458887585 ## [280,] -0.018356448 -0.6699793136 ## [281,] -0.024265930 -0.5752117330 ## [282,] 0.169113471 -0.7594497105 ## [283,] 0.196907611 -0.6785741261 ## [284,] 0.099214208 -0.4437077861 ## [285,] 0.261745559 -0.5584470428 ## [286,] 0.459835499 -0.7964931207 ## [287,] 0.571275193 -0.9824797396 ## [288,] 0.480016597 -0.7239083896 ## [289,] 0.584006730 -0.9603237689 ## [290,] 0.684635191 -1.0869791122 ## [291,] 0.854501019 -1.2873287505 ## [292,] 0.829639616 -1.3076896394 ## [293,] 
0.904390403 -1.4233854975 ## [294,] 0.965487586 -1.4916665856 ## [295,] 0.939437320 -1.6964516427 ## [296,] 0.503593382 -1.4775751602 ## [297,] 0.360893182 -1.3829316066 ## [298,] 0.175593148 -1.3465999103 ## [299,] -0.251176076 -0.9627487991 ## [300,] -0.539075038 -0.6634413175 ## [301,] -0.599350551 -0.6725569082 ## [302,] -0.556412743 -0.7281211894 ## [303,] -0.540217609 -0.8466812382 ## [304,] -0.862343566 -0.7743682184 ## [305,] -1.120682354 -0.6757445700 ## [306,] -1.332197920 -0.4766963100 ## [307,] -1.635390509 -0.0574670942 ## [308,] -1.640813369 -0.0797300906 ## [309,] -1.529734133 -0.1952548992 ## [310,] -1.611895694 0.0685046158 ## [311,] -1.620979516 0.0300820065 ## [312,] -1.611657565 -0.0337932009 ## [313,] -1.521101087 -0.2270269452 ## [314,] -1.434980209 -0.4497880483 ## [315,] -1.283417015 -0.7628290825 ## [316,] -1.072346961 -1.0683534564 ## [317,] -1.140637580 -1.0104383462 ## [318,] -1.395549643 -0.7734735074 ## [319,] -1.415043289 -0.7733548411 ## [320,] -1.454986296 -0.7501208892 ## [321,] -1.388833790 -0.8644898171 ## [322,] -1.365505724 -0.9246379945 ## [323,] -1.439150405 -0.8129456121 ## [324,] -1.262015053 -1.1101810729 ## [325,] -1.242212525 -1.2288228293 ## [326,] -1.575868993 -0.7274654884 ## [327,] -1.776113351 -0.3592139365 ## [328,] -1.688938879 -0.5119478063 ## [329,] -1.700951156 -0.4941221141 ## [330,] -1.694672567 -0.4605841099 ## [331,] -1.702468087 -0.4640479153 ## [332,] -1.654904379 -0.5634761675 ## [333,] -1.601784931 -0.6271607888 ## [334,] -1.459084170 -0.8494350933 ## [335,] -1.690953476 -0.4241288061 ## [336,] -1.763251101 -0.1746603929 ## [337,] -1.569093305 -0.2888010297 ## [338,] -1.408665012 -0.5098879003 ## [339,] -1.249641136 -0.7229902408 ## [340,] -1.064271255 -0.9142618698 ## [341,] -0.969933254 -0.9878591695 ## [342,] -0.829422105 -1.0259461991 ## [343,] -0.746049960 -1.0573799245 ## [344,] -0.636393008 -1.1066676094 ## [345,] -0.496790978 -1.1981395438 ## [346,] -0.526818274 -1.0157822994 ## [347,] -0.406273939 -1.1747944777 ## [348,] -0.266428973 -1.3514185013 ## [349,] -0.152652610 -1.4757833223 ## [350,] -0.063065136 -1.4551322378 ## [351,] 0.044113220 -1.4821790342 ## [352,] 0.083554485 -1.5531582261 ## [353,] 0.149851616 -1.4719167589 ## [354,] 0.214089933 -1.4732795716 ## [355,] 0.267359067 -1.5397675087 ## [356,] 0.433101487 -1.6864685717 ## [357,] 0.487372036 -1.6363593913 ## [358,] 0.465044913 -1.5603091398 ## [359,] 0.407435603 -1.4222412386 ## [360,] 0.424439377 -1.3921872057 ## [361,] 0.500793195 -1.4233665943 ## [362,] 0.590547206 -1.5031899730 ## [363,] 0.658037559 -1.6520855175 ## [364,] 0.663797018 -1.7232186290 ## [365,] 0.700576947 -1.7445853037 ## [366,] 0.780491234 -1.8529250191 ## [367,] 0.747690062 -1.8487246210 ``` 10\.1 Introduction ------------------ In discriminant analysis (DA), we develop statistical models that differentiate two or more population types, such as immigrants vs natives, males vs females, etc. In factor analysis (FA), we attempt to collapse an enormous amount of data about the population into a few common explanatory variables. DA is an attempt to explain categorical data, and FA is an attempt to reduce the dimensionality of the data that we use to explain both categorical or continuous data. They are distinct techniques, related in that they both exploit the techniques of linear algebra. 10\.2 Discriminant Analysis --------------------------- In DA, what we are trying to explain is very often a dichotomous split of our observations. 
For example, we may be trying to understand what determines a good versus a bad creditor. We call the good vs bad outcome the "criterion" variable, or the "dependent" variable. The variables we use to explain the split in the criterion variable are called "predictor" or "explanatory" variables. We may think of the criterion variables as left\-hand side variables or dependent variables in the lingo of regression analysis. Likewise, the explanatory variables are the right\-hand side ones. What distinguishes DA is that the left\-hand side (lhs) variables are essentially **qualitative** in nature. They have some underlying numerical value, but are in essence qualitative. For example, when universities go through the admission process, they may have a cut off score for admission. This cut off score discriminates between the students they want to admit and the ones they wish to reject. DA is a very useful tool for determining this cut off score. In short, DA is the means by which quantitative explanatory variables are used to explain qualitative criterion variables. The number of qualitative categories need not be restricted to just two. DA encompasses a larger number of categories too.

10\.3 Notation and assumptions
------------------------------

* Assume that there are \\(N\\) categories or groups indexed by \\(i\=1\...N\\).
* Within each group there are observations \\(y\_j\\), indexed by \\(j\=1\...M\_i\\). The size of each group need not be the same, i.e., it is possible that \\(M\_i \\neq M\_j\\).
* There is a set of predictor variables \\(x \= \[x\_1,x\_2,\\ldots,x\_K]'\\). Clearly, there must be good reasons for choosing these so as to explain the groups in which the \\(y\_j\\) reside. Hence the value of the \\(k\\)th variable for group \\(i\\), observation \\(j\\), is denoted as \\(x\_{ijk}\\).
* Observations are mutually exclusive, i.e., each object can only belong to any one of the groups.
* The \\(K \\times K\\) covariance matrix of explanatory variables is assumed to be the same for all groups, i.e., \\(Cov(x\_i) \= Cov(x\_j)\\). This is the homoskedasticity assumption, and it makes the criterion for choosing one class over the other a simple projection onto the \\(z\\) axis, where it may be compared to a cutoff.

10\.4 Discriminant Function
---------------------------

DA involves finding a discriminant function \\(D\\) that best classifies the observations into the chosen groups. The function may be nonlinear, but the most common approach is to use linear DA. The function takes the following form:

\\\[\\begin{equation} D \= a\_1 x\_1 \+ a\_2 x\_2 \+ \\ldots \+ a\_K x\_K \= \\sum\_{k\=1}^K a\_k x\_k \\end{equation}\\]

where the \\(a\_k\\) coefficients are discriminant weights. The analysis requires the inclusion of a cut\-off score \\(C\\). For example, if \\(N\=2\\), i.e., there are 2 groups, then if \\(D\>C\\) the observation falls into group 1, and if \\(D \\leq C\\), then the observation falls into group 2\. Hence, the *objective* function is to choose \\(\\{\\{a\_k\\}, C\\}\\) such that classification error is minimized. The equation \\(C\=D(\\{x\_k\\}; \\{a\_k\\})\\) is the equation of a hyperplane that cuts the space of the observations into 2 parts if there are only two groups. Note that if there are \\(N\\) groups then there will be \\((N\-1\)\\) cutoffs \\(\\{C\_1,C\_2,\\ldots,C\_{N\-1}\\}\\), and a corresponding number of hyperplanes. The variables \\(x\_k\\) are also known as the "discriminants". In the extraction of the discriminant function, better discriminants will have higher statistical significance.
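As a toy illustration of the classification rule, consider the sketch below; the weights \\(a\_k\\), the cutoff \\(C\\), and the observation are all hypothetical rather than estimated from data:

```
a = c(0.4, 0.25, 0.35)          # hypothetical discriminant weights a_k
C = 1.0                         # hypothetical cutoff score
x_new = c(1.2, 0.8, 1.5)        # predictor values for one new observation
D = sum(a * x_new)              # discriminant score D = sum_k a_k x_k
print(c(D = D, group = ifelse(D > C, 1, 2)))   # assign group 1 if D > C, else group 2
```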
10\.5 How good is the discriminant function?
--------------------------------------------

After fitting the discriminant function, the next question to ask is how good the fit is. There are various measures that have been suggested for this. All of them assess how well the discriminant function separates the distributions of observations across groups. There are many such measures: (a) the point biserial correlation, (b) the Mahalanobis distance \\(D\_M\\), (c) Wilks' \\(\\lambda\\), (d) Rao's \\(V\\), and (e) the confusion matrix. Each of the measures assesses the degree of classification error.

* The point biserial correlation is the \\(R^2\\) of a regression in which the classified observations are signed as \\(y\_{ij}\=1, i\=1\\) for group 1 and \\(y\_{ij}\=0, i\=2\\) for group 2, and the rhs variables are the \\(x\_{ijk}\\) values.
* The Mahalanobis distance between any two characteristic vectors for two entities in the data is given by

\\\[\\begin{equation} D\_M \= \\sqrt{({\\bf x}\_1 \- {\\bf x}\_2\)' {\\bf \\Sigma}^{\-1} ({\\bf x}\_1 \- {\\bf x}\_2\)} \\end{equation}\\]

where \\({\\bf x}\_1, {\\bf x}\_2\\) are two vectors and \\({\\bf \\Sigma}\\) is the covariance matrix of characteristics of all observations in the data set. First, note that if \\({\\bf \\Sigma}\\) is the identity matrix, then \\(D\_M\\) defaults to the Euclidean distance between two vectors. Second, one of the vectors may be treated as the mean vector for a given category, in which case the Mahalanobis distance can be used to assess the distances within and across groups in a pairwise manner. The quality of the discriminant function is then gauged by computing the ratio of the average distance across groups to the average distance within groups. Such ratios are often called Fisher's discriminant value.

10\.6 Confusion Matrix
----------------------

The confusion matrix is a cross\-tabulation of the actual versus predicted classification. For example, an \\(n\\)\-category model will result in an \\(n \\times n\\) confusion matrix. A comparison of this matrix with a matrix where the model is assumed to have no classification ability leads to a \\(\\chi^2\\) statistic that informs us about the statistical strength of the classification ability of the model. We will examine this in more detail shortly.
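Before turning to a worked example, here is a small self\-contained sketch of the Mahalanobis distance between two group mean vectors, on simulated data (the two groups and the dimension \\(d\=3\\) are made up purely for illustration):

```
set.seed(1)
x1 = matrix(rnorm(50*3, mean=0), 50, 3)     # 50 observations from group 1, d = 3
x2 = matrix(rnorm(50*3, mean=1), 50, 3)     # 50 observations from group 2
Sigma = cov(rbind(x1, x2))                  # covariance of all observations
d = colMeans(x1) - colMeans(x2)             # difference in mean vectors
print(sqrt(t(d) %*% solve(Sigma) %*% d))    # D_M computed from the formula above
print(sqrt(mahalanobis(colMeans(x1), colMeans(x2), Sigma)))  # same value via mahalanobis()
```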
### 10\.6\.1 Example Using Basketball Data ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = as.matrix(ncaa[4:14]) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) library(MASS) dm = lda(y~x) dm ``` ``` ## Call: ## lda(y ~ x) ## ## Prior probabilities of groups: ## 0 1 ## 0.5 0.5 ## ## Group means: ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 62.10938 33.85938 11.46875 15.01562 0.835625 6.609375 2.375 18.84375 ## 1 72.09375 35.07500 14.02812 12.90000 1.120000 7.037500 3.125 18.46875 ## xFG xFT xX3P ## 0 0.4001562 0.6685313 0.3142187 ## 1 0.4464375 0.7144063 0.3525313 ## ## Coefficients of linear discriminants: ## LD1 ## xPTS -0.02192489 ## xREB 0.18473974 ## xAST 0.06059732 ## xTO -0.18299304 ## xA.T 0.40637827 ## xSTL 0.24925833 ## xBLK 0.09090269 ## xPF 0.04524600 ## xFG 19.06652563 ## xFT 4.57566671 ## xX3P 1.87519768 ``` ``` head(ncaa) ``` ``` ## No NAME GMS PTS REB AST TO A.T STL BLK PF FG FT ## 1 1 NorthCarolina 6 84.2 41.5 17.8 12.8 1.39 6.7 3.8 16.7 0.514 0.664 ## 2 2 Illinois 6 74.5 34.0 19.0 10.2 1.87 8.0 1.7 16.5 0.457 0.753 ## 3 3 Louisville 5 77.4 35.4 13.6 11.0 1.24 5.4 4.2 16.6 0.479 0.702 ## 4 4 MichiganState 5 80.8 37.8 13.0 12.6 1.03 8.4 2.4 19.8 0.445 0.783 ## 5 5 Arizona 4 79.8 35.0 15.8 14.5 1.09 6.0 6.5 13.3 0.542 0.759 ## 6 6 Kentucky 4 72.8 32.3 12.8 13.5 0.94 7.3 3.5 19.5 0.510 0.663 ## X3P ## 1 0.417 ## 2 0.361 ## 3 0.376 ## 4 0.329 ## 5 0.397 ## 6 0.400 ``` ``` print(names(dm)) ``` ``` ## [1] "prior" "counts" "means" "scaling" "lev" "svd" "N" ## [8] "call" "terms" "xlevels" ``` ``` print(dm$scaling) ``` ``` ## LD1 ## xPTS -0.02192489 ## xREB 0.18473974 ## xAST 0.06059732 ## xTO -0.18299304 ## xA.T 0.40637827 ## xSTL 0.24925833 ## xBLK 0.09090269 ## xPF 0.04524600 ## xFG 19.06652563 ## xFT 4.57566671 ## xX3P 1.87519768 ``` ``` print(dm$means) ``` ``` ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 62.10938 33.85938 11.46875 15.01562 0.835625 6.609375 2.375 18.84375 ## 1 72.09375 35.07500 14.02812 12.90000 1.120000 7.037500 3.125 18.46875 ## xFG xFT xX3P ## 0 0.4001562 0.6685313 0.3142187 ## 1 0.4464375 0.7144063 0.3525313 ``` ``` print(sum(dm$scaling*colMeans(dm$means))) ``` ``` ## [1] 18.16674 ``` ``` print(sum(dm$scaling*dm$means[1,])) ``` ``` ## [1] 17.17396 ``` ``` print(sum(dm$scaling*dm$means[2,])) ``` ``` ## [1] 19.15952 ``` ``` y_pred = predict(dm)$class print(y_pred) ``` ``` ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ``` ``` predict(dm) ``` ``` ## $class ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ## ## $posterior ## 0 1 ## 1 0.001299131 0.998700869 ## 2 0.011196418 0.988803582 ## 3 0.046608204 0.953391796 ## 4 0.025364951 0.974635049 ## 5 0.006459513 0.993540487 ## 6 0.056366779 0.943633221 ## 7 0.474976979 0.525023021 ## 8 0.081379875 0.918620125 ## 9 0.502094785 0.497905215 ## 10 0.327329832 0.672670168 ## 11 0.065547282 0.934452718 ## 12 0.341547846 0.658452154 ## 13 0.743464274 0.256535726 ## 14 0.024815082 0.975184918 ## 15 0.285683981 0.714316019 ## 16 0.033598255 0.966401745 ## 17 0.751098160 0.248901840 ## 18 0.136470406 0.863529594 ## 19 0.565743827 0.434256173 ## 20 0.106256858 0.893743142 ## 21 0.079260811 0.920739189 ## 22 0.211287405 0.788712595 ## 23 0.016145814 0.983854186 ## 24 0.017916328 0.982083672 ## 25 0.053361102 0.946638898 ## 26 0.929799893 0.070200107 ## 27 0.421467187 0.578532813 ## 28 0.041196674 
0.958803326 ## 29 0.160473313 0.839526687 ## 30 0.226165888 0.773834112 ## 31 0.103861216 0.896138784 ## 32 0.328218436 0.671781564 ## 33 0.511514581 0.488485419 ## 34 0.595293351 0.404706649 ## 35 0.986761936 0.013238064 ## 36 0.676574981 0.323425019 ## 37 0.926833195 0.073166805 ## 38 0.955066682 0.044933318 ## 39 0.986527865 0.013472135 ## 40 0.877497556 0.122502444 ## 41 0.859503954 0.140496046 ## 42 0.991731912 0.008268088 ## 43 0.827209283 0.172790717 ## 44 0.964180566 0.035819434 ## 45 0.958246183 0.041753817 ## 46 0.517839067 0.482160933 ## 47 0.992279182 0.007720818 ## 48 0.241060617 0.758939383 ## 49 0.358987835 0.641012165 ## 50 0.653092701 0.346907299 ## 51 0.799810486 0.200189514 ## 52 0.933218396 0.066781604 ## 53 0.297058121 0.702941879 ## 54 0.222809854 0.777190146 ## 55 0.996971215 0.003028785 ## 56 0.924919737 0.075080263 ## 57 0.583330536 0.416669464 ## 58 0.483663571 0.516336429 ## 59 0.946886736 0.053113264 ## 60 0.860202673 0.139797327 ## 61 0.961358779 0.038641221 ## 62 0.998027953 0.001972047 ## 63 0.859521185 0.140478815 ## 64 0.706002516 0.293997484 ## ## $x ## LD1 ## 1 3.346531869 ## 2 2.256737828 ## 3 1.520095227 ## 4 1.837609440 ## 5 2.536163975 ## 6 1.419170979 ## 7 0.050452000 ## 8 1.220682015 ## 9 -0.004220052 ## 10 0.362761452 ## 11 1.338252835 ## 12 0.330587901 ## 13 -0.535893942 ## 14 1.848931516 ## 15 0.461550632 ## 16 1.691762218 ## 17 -0.556253363 ## 18 0.929165997 ## 19 -0.133214789 ## 20 1.072519927 ## 21 1.235130454 ## 22 0.663378952 ## 23 2.069846547 ## 24 2.016535392 ## 25 1.448370738 ## 26 -1.301200562 ## 27 0.159527985 ## 28 1.585103944 ## 29 0.833369746 ## 30 0.619515440 ## 31 1.085352883 ## 32 0.360730337 ## 33 -0.023200674 ## 34 -0.194348531 ## 35 -2.171336821 ## 36 -0.371720701 ## 37 -1.278744604 ## 38 -1.539410745 ## 39 -2.162390029 ## 40 -0.991628191 ## 41 -0.912171192 ## 42 -2.410924430 ## 43 -0.788680213 ## 44 -1.658362422 ## 45 -1.578045708 ## 46 -0.035952755 ## 47 -2.445692660 ## 48 0.577605329 ## 49 0.291987243 ## 50 -0.318630304 ## 51 -0.697589676 ## 52 -1.328191375 ## 53 0.433803969 ## 54 0.629224272 ## 55 -2.919349215 ## 56 -1.264701997 ## 57 -0.169453310 ## 58 0.032922090 ## 59 -1.450847181 ## 60 -0.915091388 ## 61 -1.618696192 ## 62 -3.135987051 ## 63 -0.912243063 ## 64 -0.441208000 ``` ``` out = table(y,y_pred) print(out) ``` ``` ## y_pred ## y 0 1 ## 0 27 5 ## 1 5 27 ``` ``` chisq.test(out) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: out ## X-squared = 27.562, df = 1, p-value = 1.521e-07 ``` ``` chisq.test(out,correct=FALSE) ``` ``` ## ## Pearson's Chi-squared test ## ## data: out ## X-squared = 30.25, df = 1, p-value = 3.798e-08 ``` ``` ldahist(data = predict(dm)$x[,1], g=predict(dm)$class) ``` ``` predict(dm) ``` ``` ## $class ## [1] 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 ## Levels: 0 1 ## ## $posterior ## 0 1 ## 1 0.001299131 0.998700869 ## 2 0.011196418 0.988803582 ## 3 0.046608204 0.953391796 ## 4 0.025364951 0.974635049 ## 5 0.006459513 0.993540487 ## 6 0.056366779 0.943633221 ## 7 0.474976979 0.525023021 ## 8 0.081379875 0.918620125 ## 9 0.502094785 0.497905215 ## 10 0.327329832 0.672670168 ## 11 0.065547282 0.934452718 ## 12 0.341547846 0.658452154 ## 13 0.743464274 0.256535726 ## 14 0.024815082 0.975184918 ## 15 0.285683981 0.714316019 ## 16 0.033598255 0.966401745 ## 17 0.751098160 0.248901840 ## 18 0.136470406 0.863529594 ## 19 0.565743827 0.434256173 ## 20 0.106256858 
0.893743142 ## 21 0.079260811 0.920739189 ## 22 0.211287405 0.788712595 ## 23 0.016145814 0.983854186 ## 24 0.017916328 0.982083672 ## 25 0.053361102 0.946638898 ## 26 0.929799893 0.070200107 ## 27 0.421467187 0.578532813 ## 28 0.041196674 0.958803326 ## 29 0.160473313 0.839526687 ## 30 0.226165888 0.773834112 ## 31 0.103861216 0.896138784 ## 32 0.328218436 0.671781564 ## 33 0.511514581 0.488485419 ## 34 0.595293351 0.404706649 ## 35 0.986761936 0.013238064 ## 36 0.676574981 0.323425019 ## 37 0.926833195 0.073166805 ## 38 0.955066682 0.044933318 ## 39 0.986527865 0.013472135 ## 40 0.877497556 0.122502444 ## 41 0.859503954 0.140496046 ## 42 0.991731912 0.008268088 ## 43 0.827209283 0.172790717 ## 44 0.964180566 0.035819434 ## 45 0.958246183 0.041753817 ## 46 0.517839067 0.482160933 ## 47 0.992279182 0.007720818 ## 48 0.241060617 0.758939383 ## 49 0.358987835 0.641012165 ## 50 0.653092701 0.346907299 ## 51 0.799810486 0.200189514 ## 52 0.933218396 0.066781604 ## 53 0.297058121 0.702941879 ## 54 0.222809854 0.777190146 ## 55 0.996971215 0.003028785 ## 56 0.924919737 0.075080263 ## 57 0.583330536 0.416669464 ## 58 0.483663571 0.516336429 ## 59 0.946886736 0.053113264 ## 60 0.860202673 0.139797327 ## 61 0.961358779 0.038641221 ## 62 0.998027953 0.001972047 ## 63 0.859521185 0.140478815 ## 64 0.706002516 0.293997484 ## ## $x ## LD1 ## 1 3.346531869 ## 2 2.256737828 ## 3 1.520095227 ## 4 1.837609440 ## 5 2.536163975 ## 6 1.419170979 ## 7 0.050452000 ## 8 1.220682015 ## 9 -0.004220052 ## 10 0.362761452 ## 11 1.338252835 ## 12 0.330587901 ## 13 -0.535893942 ## 14 1.848931516 ## 15 0.461550632 ## 16 1.691762218 ## 17 -0.556253363 ## 18 0.929165997 ## 19 -0.133214789 ## 20 1.072519927 ## 21 1.235130454 ## 22 0.663378952 ## 23 2.069846547 ## 24 2.016535392 ## 25 1.448370738 ## 26 -1.301200562 ## 27 0.159527985 ## 28 1.585103944 ## 29 0.833369746 ## 30 0.619515440 ## 31 1.085352883 ## 32 0.360730337 ## 33 -0.023200674 ## 34 -0.194348531 ## 35 -2.171336821 ## 36 -0.371720701 ## 37 -1.278744604 ## 38 -1.539410745 ## 39 -2.162390029 ## 40 -0.991628191 ## 41 -0.912171192 ## 42 -2.410924430 ## 43 -0.788680213 ## 44 -1.658362422 ## 45 -1.578045708 ## 46 -0.035952755 ## 47 -2.445692660 ## 48 0.577605329 ## 49 0.291987243 ## 50 -0.318630304 ## 51 -0.697589676 ## 52 -1.328191375 ## 53 0.433803969 ## 54 0.629224272 ## 55 -2.919349215 ## 56 -1.264701997 ## 57 -0.169453310 ## 58 0.032922090 ## 59 -1.450847181 ## 60 -0.915091388 ## 61 -1.618696192 ## 62 -3.135987051 ## 63 -0.912243063 ## 64 -0.441208000 ``` ### 10\.6\.2 Confusion Matrix This matrix shows some classification ability. Now we ask, what if the model has no classification ability, then what would the average confusion matrix look like? It’s easy to see that this would give a matrix that would assume no relation between the rows and columns, and the numbers in each cell would reflect the average number drawn based on row and column totals. In this case since the row and column totals are all 32, we get the following confusion matrix of no classification ability: \\\[\\begin{equation} E \= \\left\[ \\begin{array}{cc} 16 \& 16\\\\ 16 \& 16 \\end{array} \\right] \\end{equation}\\] The test statistic is the sum of squared normalized differences in the cells of both matrices, i.e., \\\[\\begin{equation} \\mbox{Test\-Stat } \= \\sum\_{i,j} \\frac{\[A\_{ij} \- E\_{ij}]^2}{E\_{ij}} \\end{equation}\\] We compute this in R. 
```
A = matrix(c(27,5,5,27),2,2); print(A)
```

```
##      [,1] [,2]
## [1,]   27    5
## [2,]    5   27
```

```
E = matrix(c(16,16,16,16),2,2); print(E)
```

```
##      [,1] [,2]
## [1,]   16   16
## [2,]   16   16
```

```
test_stat = sum((A-E)^2/E); print(test_stat)
```

```
## [1] 30.25
```

```
print(1-pchisq(test_stat,1))
```

```
## [1] 3.797912e-08
```
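For completeness, the point biserial correlation measure from Section 10.5 is also easy to compute for this example; the sketch below assumes the **x** matrix and the 0/1 labels **y** defined above are still in memory:

```
# R-squared of a regression of the 0/1 group labels on the predictors
print(summary(lm(y ~ x))$r.squared)
```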
10\.7 Explanation of LDA
------------------------

We assume two groups first for simplicity, 1 and 2\. Assume a feature space \\(x \\in R^d\\). Group 1 has \\(n\_1\\) observations, and group 2 has \\(n\_2\\) observations, i.e., tuples of dimension \\(d\\). We want to find weights \\(w \\in R^d\\) that will project each observation in each group onto a point \\(z\\) on a line, i.e.,

\\\[\\begin{equation} z \= w\_1 x\_1 \+ w\_2 x\_2 \+ ... \+ w\_d x\_d \= w' x \\end{equation}\\]

We want the \\(z\\) values of group 1 to be as far away as possible from those of group 2, accounting for the variation within and across groups. The **scatter** within group \\(j\=1,2\\) is defined as:

\\\[\\begin{equation} S\_j \= \\sum\_{i\=1}^{n\_j} (z\_{ji} \- \\bar{z}\_j)^2 \= \\sum\_{i\=1}^{n\_j} (w' x\_{ji} \- w'\\bar{x}\_j)^2 \\end{equation}\\]

where \\(\\bar{z}\_j\\) is the scalar mean of \\(z\\) values for group \\(j\\), and \\(\\bar{x}\_j\\) is the mean of \\(x\\) values for group \\(j\\), and is of dimension \\(d \\times 1\\). We want to capture this scatter more formally, so we define

\\\[\\begin{eqnarray} S\_j \= w' (x\_{ji} \- \\bar{x}\_j)(x\_{ji} \- \\bar{x}\_j)' w \= w' V\_j w \\end{eqnarray}\\]

where we have defined \\(V\_j \= (x\_{ji} \- \\bar{x}\_j)(x\_{ji} \- \\bar{x}\_j)'\\) as the variation within group \\(j\\). We also define total within group variation as \\(V\_w \= V\_1 \+ V\_2\\). Think of \\(V\_j\\) as a kind of covariance matrix of group \\(j\\). We note that \\(w\\) is dimension \\(d \\times 1\\), \\((x\_{ji} \- \\bar{x}\_j)\\) is dimension \\(d \\times n\_j\\), so that \\(S\_j\\) is scalar. We sum the within group scatter values to get the total within group variation, i.e.,

\\\[\\begin{equation} w' (V\_1 \+ V\_2\) w \= w' V\_w w \\end{equation}\\]

For between group scatter, we get an analogous expression, i.e.,

\\\[\\begin{equation} w' V\_b w \= w' (\\bar{x}\_1 \- \\bar{x}\_2\)(\\bar{x}\_1 \- \\bar{x}\_2\)' w \\end{equation}\\]

where we note that \\((\\bar{x}\_1 \- \\bar{x}\_2\)(\\bar{x}\_1 \- \\bar{x}\_2\)'\\) is the between group covariance, \\(w\\) is \\((d \\times 1\)\\), and \\((\\bar{x}\_1 \- \\bar{x}\_2\)\\) is of dimension \\((d \\times 1\)\\).

10\.8 Fisher's Discriminant
---------------------------

The Fisher linear discriminant approach is to maximize between group variation and minimize within group variation, i.e.,

\\\[\\begin{equation} F \= \\frac{w' V\_b w}{w' V\_w w} \\end{equation}\\]

Taking the vector derivative w.r.t.
\\(w\\) to maximize, we get

\\\[\\begin{equation} \\frac{dF}{dw} \= \\frac{w' V\_w w (2 V\_b w) \- w' V\_b w (2 V\_w w)}{(w' V\_w w)^2} \= {\\bf 0} \\end{equation}\\]

\\\[\\begin{equation} V\_b w \- \\frac{w' V\_b w}{w' V\_w w} V\_w w \= {\\bf 0} \\end{equation}\\]

\\\[\\begin{equation} V\_b w \- F V\_w w \= {\\bf 0} \\end{equation}\\]

\\\[\\begin{equation} V\_w^{\-1} V\_b w \- F w \= {\\bf 0} \\end{equation}\\]

Rewrite this as an eigensystem and solve to get

\\\[\\begin{eqnarray} Aw \&\=\& \\lambda w \\\\ w^\* \&\=\& V\_w^{\-1}(\\bar{x}\_1 \- \\bar{x}\_2\) \\end{eqnarray}\\]

where \\(A \= V\_w^{\-1} V\_b\\), and \\(\\lambda\=F\\). Note: an easy way to see how to solve for \\(w^\*\\) is as follows. First, find the largest eigenvalue of matrix \\(A\\). Second, substitute that into the eigensystem and solve a system of \\(d\\) equations to get \\(w\\).

10\.9 Generalizing number of groups
-----------------------------------

We proceed to \\(k\+1\\) groups. Therefore now we need \\(k\\) discriminant vectors, i.e.,

\\\[\\begin{equation} W \= \[w\_1, w\_2, ... , w\_k] \\in R^{d \\times k} \\end{equation}\\]

The Fisher discriminant generalizes to

\\\[\\begin{equation} F \= \\frac{\|W' V\_b W\|}{\|W' V\_w W\|} \\end{equation}\\]

where we now use determinants, since the numerator and denominator are no longer scalars. Note that the within group variation is now \\(V\_w \= V\_1 \+ V\_2 \+ ... \+ V\_{k\+1}\\), and the denominator is the determinant of a \\((k \\times k)\\) matrix. The numerator is also the determinant of a \\((k \\times k)\\) matrix, with the between group variation given by

\\\[\\begin{equation} V\_b \= \\sum\_{i\=1}^{k\+1} n\_i (\\bar{x}\_i \- \\bar{x})(\\bar{x}\_i \- \\bar{x})' \\end{equation}\\]

where \\(\\bar{x}\\) is the mean vector across all observations, so that each \\((\\bar{x}\_i \- \\bar{x})\\) is of dimension \\((d \\times 1\)\\) and \\(V\_b\\) is of dimension \\((d \\times d)\\).
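Before generalizing to more groups in code, here is a sketch that computes the two\-group Fisher direction \\(w^\* \= V\_w^{\-1}(\\bar{x}\_1 \- \\bar{x}\_2\)\\) directly for the basketball data; it assumes the **x** matrix, the 0/1 labels **y**, and the fitted **dm** object from Section 10.6.1 are still in memory:

```
x1 = x[y == 1, ]; x0 = x[y == 0, ]          # split the feature matrix by group
xc1 = scale(x1, scale = FALSE)              # center group 1 (deviations from its mean)
xc0 = scale(x0, scale = FALSE)              # center group 0
Vw = t(xc1) %*% xc1 + t(xc0) %*% xc0        # total within-group variation V_w
w_star = drop(solve(Vw) %*% (colMeans(x1) - colMeans(x0)))
# Up to an arbitrary scale (and possibly a sign flip), this should match lda()'s direction:
print(round(cbind(fisher = w_star/sqrt(sum(w_star^2)),
                  lda = drop(dm$scaling)/sqrt(sum(dm$scaling^2))), 4))
```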
``` y1 = rep(3,16) y2 = rep(2,16) y3 = rep(1,16) y4 = rep(0,16) y = c(y1,y2,y3,y4) res = lda(y~x) res ``` ``` ## Call: ## lda(y ~ x) ## ## Prior probabilities of groups: ## 0 1 2 3 ## 0.25 0.25 0.25 0.25 ## ## Group means: ## xPTS xREB xAST xTO xA.T xSTL xBLK xPF ## 0 61.43750 33.18750 11.93750 14.37500 0.888750 6.12500 1.8750 19.5000 ## 1 62.78125 34.53125 11.00000 15.65625 0.782500 7.09375 2.8750 18.1875 ## 2 70.31250 36.59375 13.50000 12.71875 1.094375 6.84375 3.1875 19.4375 ## 3 73.87500 33.55625 14.55625 13.08125 1.145625 7.23125 3.0625 17.5000 ## xFG xFT xX3P ## 0 0.4006875 0.7174375 0.3014375 ## 1 0.3996250 0.6196250 0.3270000 ## 2 0.4223750 0.7055625 0.3260625 ## 3 0.4705000 0.7232500 0.3790000 ## ## Coefficients of linear discriminants: ## LD1 LD2 LD3 ## xPTS -0.03190376 -0.09589269 -0.03170138 ## xREB 0.16962627 0.08677669 -0.11932275 ## xAST 0.08820048 0.47175998 0.04601283 ## xTO -0.20264768 -0.29407195 -0.02550334 ## xA.T 0.02619042 -3.28901817 -1.42081485 ## xSTL 0.23954511 -0.26327278 -0.02694612 ## xBLK 0.05424102 -0.14766348 -0.17703174 ## xPF 0.03678799 0.22610347 -0.09608475 ## xFG 21.25583140 0.48722022 9.50234314 ## xFT 5.42057568 6.39065311 2.72767409 ## xX3P 1.98050128 -2.74869782 0.90901853 ## ## Proportion of trace: ## LD1 LD2 LD3 ## 0.6025 0.3101 0.0873 ``` ``` y_pred = predict(res)$class print(y_pred) ``` ``` ## [1] 3 3 3 3 3 3 3 3 1 3 3 2 0 3 3 3 0 3 2 3 2 2 3 2 2 0 2 2 2 2 2 2 3 1 1 ## [36] 1 0 1 1 1 1 1 1 1 1 1 0 2 2 0 0 0 0 2 0 0 2 0 1 0 1 1 0 0 ## Levels: 0 1 2 3 ``` ``` print(table(y,y_pred)) ``` ``` ## y_pred ## y 0 1 2 3 ## 0 10 3 3 0 ## 1 2 12 1 1 ## 2 2 0 11 3 ## 3 1 1 1 13 ``` ``` print(chisq.test(table(y,y_pred))) ``` ``` ## Warning in chisq.test(table(y, y_pred)): Chi-squared approximation may be ## incorrect ``` ``` ## ## Pearson's Chi-squared test ## ## data: table(y, y_pred) ## X-squared = 78.684, df = 9, p-value = 2.949e-13 ``` The idea is that when we have 4 groups, we project each observation in the data into a 3\-D space, which is then separated by hyperplanes to demarcate the 4 groups. 10\.10 Eigen Systems -------------------- We now move on to understanding some properties of matrices that may be useful in classifying data or deriving its underlying components. We download Treasury interest rate date from the FRED website, <http://research.stlouisfed.org/fred2/>. I have placed the data in a file called “tryrates.txt”. Let’s read in the file. ``` rates = read.table("DSTMAA_data/tryrates.txt",header=TRUE) print(names(rates)) ``` ``` ## [1] "DATE" "FYGM3" "FYGM6" "FYGT1" "FYGT2" "FYGT3" "FYGT5" "FYGT7" ## [9] "FYGT10" ``` ``` print(head(rates)) ``` ``` ## DATE FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 1 Jun-76 5.41 5.77 6.52 7.06 7.31 7.61 7.75 7.86 ## 2 Jul-76 5.23 5.53 6.20 6.85 7.12 7.49 7.70 7.83 ## 3 Aug-76 5.14 5.40 6.00 6.63 6.86 7.31 7.58 7.77 ## 4 Sep-76 5.08 5.30 5.84 6.42 6.66 7.13 7.41 7.59 ## 5 Oct-76 4.92 5.06 5.50 5.98 6.24 6.75 7.16 7.41 ## 6 Nov-76 4.75 4.88 5.29 5.81 6.09 6.52 6.86 7.29 ``` Understanding eigenvalues and eigenvectors is best done visually. An excellent simple exposition is available at: [http://setosa.io/ev/eigenvectors\-and\-eigenvalues/](http://setosa.io/ev/eigenvectors-and-eigenvalues/) A \\(M \\times M\\) matrix \\(A\\) has attendant \\(M\\) eigenvectors \\(V\\) and eigenvalue \\(\\lambda\\) if we can write \\\[\\begin{equation} \\lambda V \= A \\; V \\end{equation}\\] Starting with matrix \\(A\\), the eigenvalue decomposition gives both \\(V\\) and \\(\\lambda\\). 
It turns out we can find \\(M\\) such eigenvalues and eigenvectors, as there is no unique solution to this equation. We also require that \\(\\lambda \\neq 0\\). We may implement this in R as follows, setting matrix \\(A\\) equal to the covariance matrix of the rates of different maturities: ``` A = matrix(c(5,2,1,4),2,2) E = eigen(A) print(E) ``` ``` ## $values ## [1] 6 3 ## ## $vectors ## [,1] [,2] ## [1,] 0.7071068 -0.4472136 ## [2,] 0.7071068 0.8944272 ``` ``` v1 = E$vectors[,1] v2 = E$vectors[,2] e1 = E$values[1] e2 = E$values[2] print(t(e1*v1)) ``` ``` ## [,1] [,2] ## [1,] 4.242641 4.242641 ``` ``` print(A %*% v1) ``` ``` ## [,1] ## [1,] 4.242641 ## [2,] 4.242641 ``` ``` print(t(e2*v2)) ``` ``` ## [,1] [,2] ## [1,] -1.341641 2.683282 ``` ``` print(A %*% v2) ``` ``` ## [,1] ## [1,] -1.341641 ## [2,] 2.683282 ``` We see that the origin, eigenvalues and eigenvectors comprise \\(n\\) eigenspaces. The line from the origin through an eigenvector (i.e., a coordinate given by any one eigenvector) is called an “eigenspace”. All points on eigenspaces are themselves eigenvectors. These eigenpaces are dimensions in which the relationships between vectors in the matrix \\(A\\) load. We may also think of the matrix \\(A\\) as an “operator” or function on vectors/matrices. ``` rates = as.matrix(rates[,2:9]) eigen(cov(rates)) ``` ``` ## $values ## [1] 7.070996e+01 1.655049e+00 9.015819e-02 1.655911e-02 3.001199e-03 ## [6] 2.145993e-03 1.597282e-03 8.562439e-04 ## ## $vectors ## [,1] [,2] [,3] [,4] [,5] [,6] ## [1,] 0.3596990 -0.49201202 0.59353257 -0.38686589 0.34419189 -0.07045281 ## [2,] 0.3581944 -0.40372601 0.06355170 0.20153645 -0.79515713 0.07823632 ## [3,] 0.3875117 -0.28678312 -0.30984414 0.61694982 0.45913099 0.20442661 ## [4,] 0.3753168 -0.01733899 -0.45669522 -0.19416861 -0.03906518 -0.46590654 ## [5,] 0.3614653 0.13461055 -0.36505588 -0.41827644 0.06076305 -0.14203743 ## [6,] 0.3405515 0.31741378 -0.01159915 -0.18845999 0.03366277 0.72373049 ## [7,] 0.3260941 0.40838395 0.19017973 -0.05000002 -0.16835391 0.09196861 ## [8,] 0.3135530 0.47616732 0.41174955 0.42239432 0.06132982 -0.42147082 ## [,7] [,8] ## [1,] -0.04282858 0.03645143 ## [2,] 0.15571962 -0.03744201 ## [3,] -0.10492279 -0.16540673 ## [4,] -0.30395044 0.54916644 ## [5,] 0.45521861 -0.55849003 ## [6,] 0.19935685 0.42773742 ## [7,] -0.70469469 -0.39347299 ## [8,] 0.35631546 0.13650940 ``` ``` rcorr = cor(rates) rcorr ``` ``` ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 ## FYGM3 1.0000000 0.9975369 0.9911255 0.9750889 0.9612253 0.9383289 ## FYGM6 0.9975369 1.0000000 0.9973496 0.9851248 0.9728437 0.9512659 ## FYGT1 0.9911255 0.9973496 1.0000000 0.9936959 0.9846924 0.9668591 ## FYGT2 0.9750889 0.9851248 0.9936959 1.0000000 0.9977673 0.9878921 ## FYGT3 0.9612253 0.9728437 0.9846924 0.9977673 1.0000000 0.9956215 ## FYGT5 0.9383289 0.9512659 0.9668591 0.9878921 0.9956215 1.0000000 ## FYGT7 0.9220409 0.9356033 0.9531304 0.9786511 0.9894029 0.9984354 ## FYGT10 0.9065636 0.9205419 0.9396863 0.9680926 0.9813066 0.9945691 ## FYGT7 FYGT10 ## FYGM3 0.9220409 0.9065636 ## FYGM6 0.9356033 0.9205419 ## FYGT1 0.9531304 0.9396863 ## FYGT2 0.9786511 0.9680926 ## FYGT3 0.9894029 0.9813066 ## FYGT5 0.9984354 0.9945691 ## FYGT7 1.0000000 0.9984927 ## FYGT10 0.9984927 1.0000000 ``` ### 10\.10\.1 Intuition So we calculated the eigenvalues and eigenvectors for the covariance matrix of the data. What does it really mean? Think of the covariance matrix as the summarization of the connections between the rates of different maturities in our data set. 
What we do not know is how many dimensions of commonality there are in these rates, and what is the relative importance of these dimensions. For each dimension of commonality, we wish to ask (a) how important is that dimension (the eigenvalue), and (b) the relative influence of that dimension on each rate (the values in the eigenvector). The most important dimension is the one with the highest eigenvalue, known as the **principal** eigenvalue, corresponding to which we have the principal eigenvector. It should be clear by now that the eigenvalue and its eigenvector are **eigen pairs**. It should also be intuitive why we call this the **eigenvalue decomposition** of a matrix.

10\.11 Determinants
-------------------

These functions of a matrix are also difficult to get an intuition for. But it's best to think of the determinant as one possible function that returns the "sizing" of a matrix. More specifically, it relates to the volume of the space defined by the matrix. But not exactly, because it can also be negative, though the absolute size will give some sense of volume as well. For example, let's take the two\-dimensional identity matrix, which defines the unit square.

```
a = matrix(0,2,2); diag(a) = 1
print(det(a))
```

```
## [1] 1
```

```
print(det(2*a))
```

```
## [1] 4
```

We see immediately that when we multiply the matrix by 2, we get a determinant value that is four times the original, because the volume in two\-dimensional space is area, and that has increased by a factor of 4\. To verify, we'll try the three\-dimensional identity matrix.

```
a = matrix(0,3,3); diag(a) = 1
print(det(a))
```

```
## [1] 1
```

```
print(det(2*a))
```

```
## [1] 8
```

Now we see that the original determinant has grown by \\(2^3\\) when all dimensions are doubled. We may also distort just one dimension, and see what happens.

```
a = matrix(0,2,2); diag(a) = 1
print(det(a))
```

```
## [1] 1
```

```
a[2,2] = 2
print(det(a))
```

```
## [1] 2
```

That's pretty self\-explanatory!
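One more connection to the previous section, as a short sketch (the matrix here is made up for illustration): the determinant equals the product of the eigenvalues, which is another way to see why it measures how the matrix scales volume.

```
a = matrix(c(2,1,1,3), 2, 2)
print(det(a))                    # 2*3 - 1*1 = 5
print(prod(eigen(a)$values))     # the same value, as the product of the eigenvalues
```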
10\.12 Dimension Reduction: Factor Analysis and PCA
---------------------------------------------------

**Factor analysis** is the use of eigenvalue decomposition to uncover the underlying structure of the data. Given a data set of observations and explanatory variables, factor analysis seeks to achieve a decomposition with the following properties:

1. Obtain a reduced dimension set of explanatory variables, known as derived/extracted/discovered factors. Factors must be **orthogonal**, i.e., uncorrelated with each other.
2. Obtain data reduction, i.e., suggest a limited set of variables. Each such subset is a manifestation of an abstract underlying dimension.
3. These subsets are ordered in terms of their ability to explain the variation across observations.

See the article by Richard Darlington: <http://www.psych.cornell.edu/Darlington/factor.htm>, which is as good as any explanation one can get. See also the article by Statsoft: <http://www.statsoft.com/textbook/stfacan.html>.

### 10\.12\.1 Notation

* Observations: \\(y\_i, i\=1\...N\\).
* Original explanatory variables: \\(x\_{ik}, k\=1\...K\\).
* Factors: \\(F\_j, j\=1\...M\\).
* \\(M \< K\\).

### 10\.12\.2 The Idea

As you can see in the rates data, there are eight different rates. If we wanted to model the underlying drivers of this system of rates, we could assume a separate driver for each one leading to \\(K\=8\\) underlying factors. But the whole idea of factor analysis is to reduce the number of drivers that exist. So we may want to go with a smaller number of \\(M \< K\\) factors. The main concept here is to **project** the variables \\(x \\in R^{K}\\) onto the reduced factor set \\(F \\in R^M\\) such that we can explain most of the variables by the factors. Hence we are looking for a relation

\\\[\\begin{equation} x \= B F \\end{equation}\\]

where \\(B \= \\{b\_{kj}\\}\\in R^{K \\times M}\\) is a matrix of factor **loadings** for the variables. Through matrix \\(B\\), \\(x\\) may be represented in smaller dimension \\(M\\). The entries in matrix \\(B\\) may be positive or negative. Negative loadings mean that the variable is negatively correlated with the factor. The whole idea is that we want to replace the relation of \\(y\\) to \\(x\\) with a relation of \\(y\\) to a reduced set \\(F\\).
Once we have the set of factors defined, then the \\(N\\) observations \\(y\\) may be expressed in terms of the factors through a factor **score matrix** \\(A \= \\{a\_{ij}\\} \\in R^{N \\times M}\\) as follows: \\\[\\begin{equation} y \= A F \\end{equation}\\] Again, factor scores may be positive or negative. There are many ways in which such a transformation from variables to factors might be undertaken. We look at the most common one. 10\.13 Principal Components Analysis (PCA) ------------------------------------------ In PCA, each component (factor) is viewed as a weighted combination of the other variables (this is not always the way factor analysis is implemented, but is certainly one of the most popular). The starting point for PCA is the covariance matrix of the data. Essentially what is involved is an eigenvalue analysis of this matrix to extract the principal eigenvectors. We can do the analysis using the R statistical package. Here is the sample session: ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = ncaa[4:14] print(names(x)) ``` ``` ## [1] "PTS" "REB" "AST" "TO" "A.T" "STL" "BLK" "PF" "FG" "FT" "X3P" ``` ``` result = princomp(x) summary(result) ``` ``` ## Importance of components: ## Comp.1 Comp.2 Comp.3 Comp.4 ## Standard deviation 9.8747703 5.2870154 3.95773149 3.19879732 ## Proportion of Variance 0.5951046 0.1705927 0.09559429 0.06244717 ## Cumulative Proportion 0.5951046 0.7656973 0.86129161 0.92373878 ## Comp.5 Comp.6 Comp.7 Comp.8 ## Standard deviation 2.43526651 2.04505010 1.53272256 0.1314860827 ## Proportion of Variance 0.03619364 0.02552391 0.01433727 0.0001055113 ## Cumulative Proportion 0.95993242 0.98545633 0.99979360 0.9998991100 ## Comp.9 Comp.10 Comp.11 ## Standard deviation 1.062179e-01 6.591218e-02 3.007832e-02 ## Proportion of Variance 6.885489e-05 2.651372e-05 5.521365e-06 ## Cumulative Proportion 9.999680e-01 9.999945e-01 1.000000e+00 ``` ``` screeplot(result) ``` ``` screeplot(result,type="lines") ``` ``` result$loadings ``` ``` ## ## Loadings: ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 Comp.9 Comp.10 ## PTS 0.964 0.240 ## REB 0.940 -0.316 ## AST 0.257 -0.228 -0.283 -0.431 -0.778 ## TO 0.194 -0.908 -0.116 0.313 -0.109 ## A.T 0.712 0.642 0.262 ## STL -0.194 0.205 0.816 0.498 ## BLK 0.516 -0.849 ## PF -0.110 -0.223 0.862 -0.364 -0.228 ## FG ## FT 0.619 -0.762 0.175 ## X3P -0.315 0.948 ## Comp.11 ## PTS ## REB ## AST ## TO ## A.T ## STL ## BLK ## PF ## FG -0.996 ## FT ## X3P ## ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## SS loadings 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 ## Proportion Var 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 ## Cumulative Var 0.091 0.182 0.273 0.364 0.455 0.545 0.636 0.727 ## Comp.9 Comp.10 Comp.11 ## SS loadings 1.000 1.000 1.000 ## Proportion Var 0.091 0.091 0.091 ## Cumulative Var 0.818 0.909 1.000 ``` ``` print(names(result)) ``` ``` ## [1] "sdev" "loadings" "center" "scale" "n.obs" "scores" ## [7] "call" ``` ``` result$sdev ``` ``` ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 ## 9.87477028 5.28701542 3.95773149 3.19879732 2.43526651 2.04505010 ## Comp.7 Comp.8 Comp.9 Comp.10 Comp.11 ## 1.53272256 0.13148608 0.10621791 0.06591218 0.03007832 ``` ``` biplot(result) ``` The alternative function **prcomp** returns the same stuff, but gives all the factor loadings immediately. 
``` prcomp(x) ``` ``` ## Standard deviations: ## [1] 9.95283292 5.32881066 3.98901840 3.22408465 2.45451793 2.06121675 ## [7] 1.54483913 0.13252551 0.10705759 0.06643324 0.03031610 ## ## Rotation: ## PC1 PC2 PC3 PC4 PC5 ## PTS -0.963808450 -0.052962387 0.018398319 0.094091517 -0.240334810 ## REB -0.022483140 -0.939689339 0.073265952 0.026260543 0.315515827 ## AST -0.256799635 0.228136664 -0.282724110 -0.430517969 0.778063875 ## TO 0.061658120 -0.193810802 -0.908005124 -0.115659421 -0.313055838 ## A.T -0.021008035 0.030935414 0.035465079 -0.022580766 0.068308725 ## STL -0.006513483 0.081572061 -0.193844456 0.205272135 0.014528901 ## BLK -0.012711101 -0.070032329 0.035371935 0.073370876 -0.034410932 ## PF -0.012034143 0.109640846 -0.223148274 0.862316681 0.364494150 ## FG -0.003729350 0.002175469 -0.001708722 -0.006568270 -0.001837634 ## FT -0.001210397 0.003852067 0.001793045 0.008110836 -0.019134412 ## X3P -0.003804597 0.003708648 -0.001211492 -0.002352869 -0.003849550 ## PC6 PC7 PC8 PC9 PC10 ## PTS 0.029408534 -0.0196304356 0.0026169995 -0.004516521 0.004889708 ## REB -0.040851345 -0.0951099200 -0.0074120623 0.003557921 -0.008319362 ## AST -0.044767132 0.0681222890 0.0359559264 0.056106512 0.015018370 ## TO 0.108917779 0.0864648004 -0.0416005762 -0.039363263 -0.012726102 ## A.T -0.004846032 0.0061047937 -0.7122315249 -0.642496008 -0.262468560 ## STL -0.815509399 -0.4981690905 0.0008726057 -0.008845999 -0.005846547 ## BLK -0.516094006 0.8489313874 0.0023262933 -0.001364270 0.008293758 ## PF 0.228294830 0.0972181527 0.0005835116 0.001302210 -0.001385509 ## FG 0.004118140 0.0041758373 0.0848448651 -0.019610637 0.030860027 ## FT -0.005525032 0.0001301938 -0.6189703010 0.761929615 -0.174641147 ## X3P 0.001012866 0.0094289825 0.3151374823 0.038279107 -0.948194531 ## PC11 ## PTS 0.0037883918 ## REB -0.0043776255 ## AST 0.0058744543 ## TO -0.0001063247 ## A.T -0.0560584903 ## STL -0.0062405867 ## BLK 0.0013213701 ## PF -0.0043605809 ## FG -0.9956716097 ## FT -0.0731951151 ## X3P -0.0031976296 ``` ### 10\.13\.1 Difference between PCA and LDA ### 10\.13\.2 Application to Treasury Yield Curves We had previously downloaded monthly data for constant maturity yields from June 1976 to December 2006\. Here is the 3D plot. It shows the change in the yield curve over time for a range of maturities. 
``` persp(rates,theta=30,phi=0,xlab="years",ylab="maturity",zlab="rates") ``` ``` tryrates = read.table("DSTMAA_data/tryrates.txt",header=TRUE) rates = as.matrix(tryrates[2:9]) result = princomp(rates) result$loadings ``` ``` ## ## Loadings: ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## FYGM3 -0.360 -0.492 0.594 -0.387 0.344 ## FYGM6 -0.358 -0.404 0.202 -0.795 -0.156 ## FYGT1 -0.388 -0.287 -0.310 0.617 0.459 0.204 0.105 -0.165 ## FYGT2 -0.375 -0.457 -0.194 -0.466 0.304 0.549 ## FYGT3 -0.361 0.135 -0.365 -0.418 -0.142 -0.455 -0.558 ## FYGT5 -0.341 0.317 -0.188 0.724 -0.199 0.428 ## FYGT7 -0.326 0.408 0.190 -0.168 0.705 -0.393 ## FYGT10 -0.314 0.476 0.412 0.422 -0.421 -0.356 0.137 ## ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 ## SS loadings 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 ## Proportion Var 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 ## Cumulative Var 0.125 0.250 0.375 0.500 0.625 0.750 0.875 1.000 ``` ``` result$sdev ``` ``` ## Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 ## 8.39745750 1.28473300 0.29985418 0.12850678 0.05470852 0.04626171 ## Comp.7 Comp.8 ## 0.03991152 0.02922175 ``` ``` summary(result) ``` ``` ## Importance of components: ## Comp.1 Comp.2 Comp.3 Comp.4 ## Standard deviation 8.397458 1.28473300 0.299854180 0.1285067846 ## Proportion of Variance 0.975588 0.02283477 0.001243916 0.0002284667 ## Cumulative Proportion 0.975588 0.99842275 0.999666666 0.9998951326 ## Comp.5 Comp.6 Comp.7 Comp.8 ## Standard deviation 5.470852e-02 4.626171e-02 3.991152e-02 2.922175e-02 ## Proportion of Variance 4.140766e-05 2.960835e-05 2.203775e-05 1.181363e-05 ## Cumulative Proportion 9.999365e-01 9.999661e-01 9.999882e-01 1.000000e+00 ``` ### 10\.13\.3 Results The results are interesting. We see that the loadings are large in the first three component vectors for all maturity rates. The loadings correspond to a classic feature of the yield curve, i.e., there are three components: level, slope, and curvature. Note that the first component has almost equal loadings for all rates that are all identical in sign. Hence, this is the **level** factor. The second component has negative loadings for the shorter maturity rates and positive loadings for the later maturity ones. Therefore, when this factor moves up, the short rates will go down, and the long rates will go up, resulting in a steepening of the yield curve. If the factor goes down, the yield curve will become flatter. Hence, the second principal component is clearly the **slope** factor. Examining the loadings of the third principal component should make it clear that the effect of this factor is to modulate the **curvature** or hump of the yield curve. Still, from looking at the results, it is clear that 97% of the common variation is explained by just the first factor, and a wee bit more by the next two. The resultant **biplot** shows the dominance of the main component. ``` biplot(result) ``` 10\.14 Difference between PCA and FA ------------------------------------ The difference between PCA and FA is that for the purposes of matrix computations PCA assumes that all variance is common, with all unique factors set equal to zero; while FA assumes that there is some unique variance. Hence PCA may also be thought of as a subset of FA. The level of unique variance is dictated by the FA model which is chosen. Accordingly, PCA is a model of a closed system, while FA is a model of an open system. FA tries to decompose the correlation matrix into common and unique portions. 
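To make this difference concrete, here is a minimal sketch (assuming the **tryrates** data loaded above; the `rates` matrix is rebuilt for clarity): the full set of eigen pairs reproduces the covariance matrix exactly, treating all variance as common, whereas **factanal** leaves a nonzero unique portion for each rate.

```
# PCA view: all eigen pairs together reproduce the covariance matrix exactly
rates = as.matrix(tryrates[2:9])
ev = eigen(cov(rates))
print(max(abs(ev$vectors %*% diag(ev$values) %*% t(ev$vectors) - cov(rates))))  # ~0
# FA view: factanal (fitted to the correlation matrix) leaves nonzero unique variances
fa = factanal(rates, factors=2)
print(fa$uniquenesses)
# communality plus uniqueness is approximately 1 for each standardized rate
print(round(diag(fa$loadings %*% t(fa$loadings)) + fa$uniquenesses, 3))
```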
10\.15 Factor Rotation
----------------------

Finally, there are times when the variables would load better on the factors if the factor system were to be rotated. This is called factor rotation, and many times the software does this automatically. Remember that we decomposed variables \\(x\\) as follows: \\\[\\begin{equation} x \= B\\;F \+ e \\end{equation}\\\] where \\(x\\) is dimension \\(K\\), \\(B \\in R^{K \\times M}\\), \\(F \\in R^{M}\\), and \\(e\\) is a \\(K\\)\-dimension vector. This implies that \\\[\\begin{equation} Cov(x) \= BB' \+ \\psi \\end{equation}\\\] Recall that \\(B\\) is the matrix of factor loadings. The system remains unchanged if \\(B\\) is replaced by \\(BG\\), where \\(G \\in R^{M \\times M}\\), and \\(G\\) is orthogonal. Then we call \\(G\\) a **rotation** of \\(B\\). The idea of rotation is easier to see with the following diagram. Two conditions need to be satisfied: (a) The new axes (like the old ones) should be orthogonal. (b) The difference in loadings on the factors by each variable must increase. In the diagram below we can see that the rotation has made the variables align better along the new axis system.
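As a quick check of this invariance, here is a small sketch using a hypothetical 3 x 2 loadings matrix B and an explicit orthogonal rotation G; the **varimax** function (one standard orthogonal rotation criterion, not necessarily the one used later in this section) is shown as well.

```
# Sketch: replacing B by BG, with G orthogonal, leaves the common part BB' unchanged
B = matrix(c(0.9, 0.8, 0.2,
             0.1, 0.3, 0.9), nrow=3, ncol=2)            # hypothetical loadings
theta = pi/6
G = matrix(c(cos(theta), -sin(theta),
             sin(theta),  cos(theta)), nrow=2, ncol=2)  # orthogonal rotation matrix
print(max(abs(B %*% t(B) - (B %*% G) %*% t(B %*% G))))  # ~0, so Cov(x) is unaffected
# varimax() picks a rotation G that sharpens the loading pattern
print(varimax(B)$loadings)
```

The point of a rotation criterion such as varimax (or the oblique promax used below) is only to choose G so that each variable loads heavily on as few factors as possible.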
### 10\.15\.1 Using the factor analysis function

To illustrate, let’s undertake a factor analysis of the Treasury rates data. In R, we can implement it generally with the **factanal** command. ``` factanal(rates,2) ``` ``` ## ## Call: ## factanal(x = rates, factors = 2) ## ## Uniquenesses: ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 0.006 0.005 0.005 0.005 0.005 0.005 0.005 0.005 ## ## Loadings: ## Factor1 Factor2 ## FYGM3 0.843 0.533 ## FYGM6 0.826 0.562 ## FYGT1 0.793 0.608 ## FYGT2 0.726 0.686 ## FYGT3 0.681 0.731 ## FYGT5 0.617 0.786 ## FYGT7 0.579 0.814 ## FYGT10 0.546 0.836 ## ## Factor1 Factor2 ## SS loadings 4.024 3.953 ## Proportion Var 0.503 0.494 ## Cumulative Var 0.503 0.997 ## ## Test of the hypothesis that 2 factors are sufficient.
## The chi square statistic is 3556.38 on 13 degrees of freedom. ## The p-value is 0 ``` Notice how the first factor explains the shorter maturities better and the second factor explains the longer maturity rates. Hence, the two factors cover the range of maturities. Note that the ability of the factors to separate the variables increases when we apply a **factor rotation**: ``` factanal(rates,2,rotation="promax") ``` ``` ## ## Call: ## factanal(x = rates, factors = 2, rotation = "promax") ## ## Uniquenesses: ## FYGM3 FYGM6 FYGT1 FYGT2 FYGT3 FYGT5 FYGT7 FYGT10 ## 0.006 0.005 0.005 0.005 0.005 0.005 0.005 0.005 ## ## Loadings: ## Factor1 Factor2 ## FYGM3 0.110 0.902 ## FYGM6 0.174 0.846 ## FYGT1 0.282 0.747 ## FYGT2 0.477 0.560 ## FYGT3 0.593 0.443 ## FYGT5 0.746 0.284 ## FYGT7 0.829 0.194 ## FYGT10 0.895 0.118 ## ## Factor1 Factor2 ## SS loadings 2.745 2.730 ## Proportion Var 0.343 0.341 ## Cumulative Var 0.343 0.684 ## ## Factor Correlations: ## Factor1 Factor2 ## Factor1 1.000 -0.854 ## Factor2 -0.854 1.000 ## ## Test of the hypothesis that 2 factors are sufficient. ## The chi square statistic is 3556.38 on 13 degrees of freedom. ## The p-value is 0 ``` The factors have been reversed after the rotation. Now the first factor explains long rates and the second factor explains short rates. If we want the time series of the factors, use the following command: ``` result = factanal(rates,2,scores="regression") ts = result$scores par(mfrow=c(2,1)) plot(ts[,1],type="l") plot(ts[,2],type="l") ``` ``` result$scores ``` ``` ## Factor1 Factor2 ## [1,] -0.355504878 0.3538523566 ## [2,] -0.501355106 0.4219522836 ## [3,] -0.543664379 0.3889362268 ## [4,] -0.522169984 0.2906034115 ## [5,] -0.566607393 0.1900987229 ## [6,] -0.584273677 0.1158550772 ## [7,] -0.617786769 -0.0509882532 ## [8,] -0.624247257 0.1623048344 ## [9,] -0.677009820 0.2997973824 ## [10,] -0.733334654 0.3687408921 ## [11,] -0.727719655 0.3139994343 ## [12,] -0.500063146 0.2096808039 ## [13,] -0.384131543 0.0410744861 ## [14,] -0.295154982 0.0079262851 ## [15,] -0.074469748 -0.0869377108 ## [16,] 0.116075785 -0.2371344010 ## [17,] 0.281023133 -0.2477845555 ## [18,] 0.236661204 -0.1984323585 ## [19,] 0.157626371 -0.0889735514 ## [20,] 0.243074384 -0.0298846923 ## [21,] 0.229996509 0.0114794387 ## [22,] 0.147494917 0.0837694919 ## [23,] 0.142866056 0.1429388300 ## [24,] 0.217975571 0.1794260505 ## [25,] 0.333131324 0.1632220682 ## [26,] 0.427011092 0.1745390683 ## [27,] 0.526015625 0.0105962505 ## [28,] 0.930970981 -0.2759351140 ## [29,] 1.099941917 -0.3067850535 ## [30,] 1.531649405 -0.5218883427 ## [31,] 1.612359229 -0.4795275595 ## [32,] 1.674541369 -0.4768444035 ## [33,] 1.628259706 -0.4725850979 ## [34,] 1.666619753 -0.4812732821 ## [35,] 1.607802989 -0.4160125641 ## [36,] 1.637193575 -0.4306264237 ## [37,] 1.453482425 -0.4656836872 ## [38,] 1.525156467 -0.5096808367 ## [39,] 1.674848519 -0.5570384352 ## [40,] 2.049336334 -0.6730573078 ## [41,] 2.541609184 -0.5458070626 ## [42,] 2.420122121 -0.3166891875 ## [43,] 2.598308192 -0.6327155757 ## [44,] 2.391009307 -0.3356467032 ## [45,] 2.311818441 0.5221104615 ## [46,] 3.605474901 -0.1557021034 ## [47,] 2.785430927 -0.2516679525 ## [48,] 0.485057576 0.7228887760 ## [49,] -0.189141617 0.9855640276 ## [50,] 0.122281914 0.9105895503 ## [51,] 0.511485539 1.1255567094 ## [52,] 1.064745422 1.0034602577 ## [53,] 1.750902392 0.6022272759 ## [54,] 2.603592320 0.4009099335 ## [55,] 3.355620751 -0.0481064328 ## [56,] 3.096436233 -0.0475952393 ## [57,] 2.790570579 0.4732116005 ## [58,] 1.952978382 
1.0839764053 ## [59,] 2.007654491 1.3008974495 ## [60,] 3.280609956 0.6027071203 ## [61,] 2.650522546 0.7811051077 ## [62,] 2.600300068 1.1915626752 ## [63,] 2.766003209 1.4022416607 ## [64,] 2.146320286 2.0370917324 ## [65,] 1.479726566 2.3555071345 ## [66,] 0.552668203 2.1652137124 ## [67,] 0.556340456 2.3056213923 ## [68,] 1.031484956 2.3872744033 ## [69,] 1.723405950 1.8108125155 ## [70,] 1.449614947 1.7709138593 ## [71,] 1.460961876 1.7702209124 ## [72,] 1.135992230 1.8967045582 ## [73,] 1.135689418 2.2082173178 ## [74,] 0.666878126 2.3873764566 ## [75,] -0.383975947 2.7314819419 ## [76,] -0.403354427 2.4378117276 ## [77,] -0.261254207 1.6718118006 ## [78,] 0.010954309 1.2752998691 ## [79,] -0.092289703 1.3197429280 ## [80,] -0.174691946 1.3083222077 ## [81,] -0.097560278 1.3574900674 ## [82,] 0.150646660 1.0910471461 ## [83,] 0.121953667 1.0829765752 ## [84,] 0.078801527 1.1050249969 ## [85,] 0.278156097 1.2016627452 ## [86,] 0.258501480 1.4588567047 ## [87,] 0.210284188 1.6848813104 ## [88,] 0.056036784 1.7137233052 ## [89,] -0.118921800 1.7816790973 ## [90,] -0.117431498 1.8372880351 ## [91,] -0.040073664 1.8448115903 ## [92,] -0.053649940 1.7738312784 ## [93,] -0.027125996 1.8236531568 ## [94,] 0.049919465 1.9851081358 ## [95,] 0.029704916 2.1507133812 ## [96,] -0.088880625 2.5931510323 ## [97,] -0.047171830 2.6850656261 ## [98,] 0.127458117 2.4718496073 ## [99,] 0.538302707 1.8902746778 ## [100,] 0.519981276 1.8260867038 ## [101,] 0.287350732 1.8070920575 ## [102,] -0.143185374 1.8168901486 ## [103,] -0.477616832 1.9938013470 ## [104,] -0.613354610 2.0298832121 ## [105,] -0.412838433 1.9458918523 ## [106,] -0.297013068 2.0396170842 ## [107,] -0.510299939 1.9824043717 ## [108,] -0.582920837 1.7520202839 ## [109,] -0.620119822 1.4751073269 ## [110,] -0.611872307 1.5171154200 ## [111,] -0.547668692 1.5025027015 ## [112,] -0.583785173 1.5461201027 ## [113,] -0.495210980 1.4215226364 ## [114,] -0.251451362 1.0449328603 ## [115,] -0.082066002 0.6903391640 ## [116,] -0.033194050 0.6316345737 ## [117,] 0.182241740 0.2936690259 ## [118,] 0.301423491 -0.1838473881 ## [119,] 0.189478645 -0.3060949875 ## [120,] 0.034277252 0.0074803060 ## [121,] 0.031909353 0.0570923793 ## [122,] 0.027356842 -0.1748564026 ## [123,] -0.100678983 -0.1801001545 ## [124,] -0.404727556 0.1406985128 ## [125,] -0.424620066 0.1335285826 ## [126,] -0.238905541 -0.0635401642 ## [127,] -0.074664082 -0.2315185060 ## [128,] -0.126155469 -0.2071550795 ## [129,] -0.095540492 -0.1620034845 ## [130,] -0.078865638 -0.1717327847 ## [131,] -0.323056834 0.3504769061 ## [132,] -0.515629047 0.7919922740 ## [133,] -0.450893817 0.6472867847 ## [134,] -0.549249387 0.7161373931 ## [135,] -0.461526588 0.7850863426 ## [136,] -0.477585081 1.0841412516 ## [137,] -0.607936481 1.2313669640 ## [138,] -0.602383745 0.9170263524 ## [139,] -0.561466443 0.9439199208 ## [140,] -0.440679406 0.7183641932 ## [141,] -0.379694393 0.4646994387 ## [142,] -0.448884489 0.5804226311 ## [143,] -0.447585272 0.7304696952 ## [144,] -0.394150535 0.8590552893 ## [145,] -0.208356333 0.6731650551 ## [146,] -0.089538357 0.6552198933 ## [147,] 0.063317301 0.6517126106 ## [148,] 0.251481083 0.3963555025 ## [149,] 0.401325001 0.2069459108 ## [150,] 0.566691007 0.1813057709 ## [151,] 0.730739423 0.1753541513 ## [152,] 0.828629006 0.1125881742 ## [153,] 0.937069127 0.0763716514 ## [154,] 1.044340934 0.0956119916 ## [155,] 1.009393906 0.0347124400 ## [156,] 1.003079712 -0.1255034699 ## [157,] 1.017520561 -0.4004578618 ## [158,] 0.932546637 -0.5165964072 ## [159,] 
0.952361490 -0.4406600026 ## [160,] 0.875515542 -0.3342672213 ## [161,] 0.869656935 -0.4237046276 ## [162,] 0.888125852 -0.5145540230 ## [163,] 0.861924343 -0.5076632865 ## [164,] 0.738497876 -0.2536767792 ## [165,] 0.691510554 -0.0954080233 ## [166,] 0.741059090 -0.0544984271 ## [167,] 0.614055561 0.1175151057 ## [168,] 0.583992721 0.1208051871 ## [169,] 0.655094889 -0.0609062254 ## [170,] 0.585834845 -0.0430834033 ## [171,] 0.348303688 0.1979721122 ## [172,] 0.231869484 0.3331562586 ## [173,] 0.200162810 0.2747729337 ## [174,] 0.267236920 0.0828341446 ## [175,] 0.210187651 -0.0004188853 ## [176,] -0.109270296 0.2268927070 ## [177,] -0.213761239 0.1965527314 ## [178,] -0.348143133 0.4200966364 ## [179,] -0.462961583 0.4705859027 ## [180,] -0.578054300 0.5511131060 ## [181,] -0.593897266 0.6647046884 ## [182,] -0.606218752 0.6648334975 ## [183,] -0.633747164 0.4861920257 ## [184,] -0.595576784 0.3376759766 ## [185,] -0.655205129 0.2879100847 ## [186,] -0.877512941 0.3630640184 ## [187,] -1.042216136 0.3097316247 ## [188,] -1.210234114 0.4345481035 ## [189,] -1.322308783 0.6532822938 ## [190,] -1.277192666 0.7643285790 ## [191,] -1.452808921 0.8312821327 ## [192,] -1.487541641 0.8156176243 ## [193,] -1.394870534 0.6686928699 ## [194,] -1.479383323 0.4841365561 ## [195,] -1.406886161 0.3273062670 ## [196,] -1.492942737 0.3000294646 ## [197,] -1.562195349 0.4406992406 ## [198,] -1.516051602 0.5752479903 ## [199,] -1.451353552 0.5211772634 ## [200,] -1.501708646 0.4624047169 ## [201,] -1.354991806 0.1902649452 ## [202,] -1.228608089 -0.0070402815 ## [203,] -1.267977350 -0.0029138561 ## [204,] -1.230161999 0.0042656449 ## [205,] -1.096818811 -0.0947205249 ## [206,] -1.050883407 -0.1864794956 ## [207,] -1.002987371 -0.2674961604 ## [208,] -0.888334747 -0.4730245331 ## [209,] -0.832011974 -0.5241786702 ## [210,] -0.950806163 -0.2717874846 ## [211,] -0.990904734 -0.2173246581 ## [212,] -1.025888696 -0.2110302502 ## [213,] -0.961207504 -0.1336593297 ## [214,] -1.008873152 0.1426874706 ## [215,] -1.066127710 0.4267411899 ## [216,] -0.832669187 0.3633991700 ## [217,] -0.804268297 0.3062188682 ## [218,] -0.775554360 0.3751582494 ## [219,] -0.654699498 0.2680646665 ## [220,] -0.655827369 0.3622377616 ## [221,] -0.572138953 0.4346262554 ## [222,] -0.446528852 0.4693814204 ## [223,] -0.065472508 0.2004701690 ## [224,] -0.047390852 0.1708246675 ## [225,] 0.033716643 -0.0546444756 ## [226,] 0.090511779 -0.2360703511 ## [227,] 0.096712210 -0.3211426773 ## [228,] 0.263153818 -0.6427860627 ## [229,] 0.327938463 -0.8977202535 ## [230,] 0.227009433 -0.7738217993 ## [231,] 0.146847582 -0.6164082349 ## [232,] 0.217408892 -0.7820706869 ## [233,] 0.303059068 -0.9119089249 ## [234,] 0.346164990 -1.0156070316 ## [235,] 0.344495268 -1.0989068333 ## [236,] 0.254605496 -1.0839365333 ## [237,] 0.076434520 -0.9212083749 ## [238,] -0.038930459 -0.5853123528 ## [239,] -0.124579936 -0.3899503999 ## [240,] -0.184503898 -0.2610908904 ## [241,] -0.195782588 -0.1682655163 ## [242,] -0.130929970 -0.2396129985 ## [243,] -0.107305460 -0.3638191317 ## [244,] -0.146037350 -0.2440039282 ## [245,] -0.091759778 -0.4265627928 ## [246,] 0.060904468 -0.6770486218 ## [247,] -0.021981240 -0.5691143174 ## [248,] -0.098778176 -0.3937451878 ## [249,] -0.046565752 -0.4968429844 ## [250,] -0.074221981 -0.3346834015 ## [251,] -0.114633531 -0.2075481471 ## [252,] -0.080181397 -0.3167544243 ## [253,] -0.077245027 -0.4075464988 ## [254,] 0.067095102 -0.6330318266 ## [255,] 0.070287704 -0.6063439043 ## [256,] 0.034358274 -0.6110384546 ## [257,] 
0.122570752 -0.7498264729 ## [258,] 0.268350996 -0.9191662258 ## [259,] 0.341928786 -0.9953776859 ## [260,] 0.358493675 -1.1493486058 ## [261,] 0.366995992 -1.1315765328 ## [262,] 0.308211094 -1.0360637068 ## [263,] 0.296634032 -1.0283183308 ## [264,] 0.333921857 -1.0482262664 ## [265,] 0.399654634 -1.1547504178 ## [266,] 0.384082293 -1.1639983135 ## [267,] 0.398207702 -1.2498402091 ## [268,] 0.458285541 -1.5595689354 ## [269,] 0.190961643 -1.5179769824 ## [270,] 0.312795727 -1.4594244181 ## [271,] 0.384110006 -1.5668180503 ## [272,] 0.289341234 -1.4408671342 ## [273,] 0.219416836 -1.2581560002 ## [274,] 0.109564976 -1.0724088237 ## [275,] 0.062406607 -1.0647289538 ## [276,] -0.003233728 -0.8644137409 ## [277,] -0.073271391 -0.6429640308 ## [278,] -0.092114043 -0.6751620268 ## [279,] -0.035775597 -0.6458887585 ## [280,] -0.018356448 -0.6699793136 ## [281,] -0.024265930 -0.5752117330 ## [282,] 0.169113471 -0.7594497105 ## [283,] 0.196907611 -0.6785741261 ## [284,] 0.099214208 -0.4437077861 ## [285,] 0.261745559 -0.5584470428 ## [286,] 0.459835499 -0.7964931207 ## [287,] 0.571275193 -0.9824797396 ## [288,] 0.480016597 -0.7239083896 ## [289,] 0.584006730 -0.9603237689 ## [290,] 0.684635191 -1.0869791122 ## [291,] 0.854501019 -1.2873287505 ## [292,] 0.829639616 -1.3076896394 ## [293,] 0.904390403 -1.4233854975 ## [294,] 0.965487586 -1.4916665856 ## [295,] 0.939437320 -1.6964516427 ## [296,] 0.503593382 -1.4775751602 ## [297,] 0.360893182 -1.3829316066 ## [298,] 0.175593148 -1.3465999103 ## [299,] -0.251176076 -0.9627487991 ## [300,] -0.539075038 -0.6634413175 ## [301,] -0.599350551 -0.6725569082 ## [302,] -0.556412743 -0.7281211894 ## [303,] -0.540217609 -0.8466812382 ## [304,] -0.862343566 -0.7743682184 ## [305,] -1.120682354 -0.6757445700 ## [306,] -1.332197920 -0.4766963100 ## [307,] -1.635390509 -0.0574670942 ## [308,] -1.640813369 -0.0797300906 ## [309,] -1.529734133 -0.1952548992 ## [310,] -1.611895694 0.0685046158 ## [311,] -1.620979516 0.0300820065 ## [312,] -1.611657565 -0.0337932009 ## [313,] -1.521101087 -0.2270269452 ## [314,] -1.434980209 -0.4497880483 ## [315,] -1.283417015 -0.7628290825 ## [316,] -1.072346961 -1.0683534564 ## [317,] -1.140637580 -1.0104383462 ## [318,] -1.395549643 -0.7734735074 ## [319,] -1.415043289 -0.7733548411 ## [320,] -1.454986296 -0.7501208892 ## [321,] -1.388833790 -0.8644898171 ## [322,] -1.365505724 -0.9246379945 ## [323,] -1.439150405 -0.8129456121 ## [324,] -1.262015053 -1.1101810729 ## [325,] -1.242212525 -1.2288228293 ## [326,] -1.575868993 -0.7274654884 ## [327,] -1.776113351 -0.3592139365 ## [328,] -1.688938879 -0.5119478063 ## [329,] -1.700951156 -0.4941221141 ## [330,] -1.694672567 -0.4605841099 ## [331,] -1.702468087 -0.4640479153 ## [332,] -1.654904379 -0.5634761675 ## [333,] -1.601784931 -0.6271607888 ## [334,] -1.459084170 -0.8494350933 ## [335,] -1.690953476 -0.4241288061 ## [336,] -1.763251101 -0.1746603929 ## [337,] -1.569093305 -0.2888010297 ## [338,] -1.408665012 -0.5098879003 ## [339,] -1.249641136 -0.7229902408 ## [340,] -1.064271255 -0.9142618698 ## [341,] -0.969933254 -0.9878591695 ## [342,] -0.829422105 -1.0259461991 ## [343,] -0.746049960 -1.0573799245 ## [344,] -0.636393008 -1.1066676094 ## [345,] -0.496790978 -1.1981395438 ## [346,] -0.526818274 -1.0157822994 ## [347,] -0.406273939 -1.1747944777 ## [348,] -0.266428973 -1.3514185013 ## [349,] -0.152652610 -1.4757833223 ## [350,] -0.063065136 -1.4551322378 ## [351,] 0.044113220 -1.4821790342 ## [352,] 0.083554485 -1.5531582261 ## [353,] 0.149851616 -1.4719167589 ## [354,] 
0.214089933 -1.4732795716 ## [355,] 0.267359067 -1.5397675087 ## [356,] 0.433101487 -1.6864685717 ## [357,] 0.487372036 -1.6363593913 ## [358,] 0.465044913 -1.5603091398 ## [359,] 0.407435603 -1.4222412386 ## [360,] 0.424439377 -1.3921872057 ## [361,] 0.500793195 -1.4233665943 ## [362,] 0.590547206 -1.5031899730 ## [363,] 0.658037559 -1.6520855175 ## [364,] 0.663797018 -1.7232186290 ## [365,] 0.700576947 -1.7445853037 ## [366,] 0.780491234 -1.8529250191 ## [367,] 0.747690062 -1.8487246210 ```
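One way to read these score series (a rough sketch, assuming the `rates` matrix and the scores `ts` from the code above) is to correlate them with simple summaries of the yield curve, such as its average level and its long-minus-short slope. Unlike the principal components earlier in the chapter, the two rotated factors need not map one-for-one onto level and slope; the correlations simply show how that information is shared between them.

```
# Sketch: compare the factor score series with simple yield-curve summaries
level = rowMeans(rates)                      # average yield across maturities
slope = rates[,"FYGT10"] - rates[,"FYGM3"]   # long minus short maturity spread
print(cor(ts[,1], level))
print(cor(ts[,2], slope))
```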
Chapter 11 Truncate and Estimate: Limited Dependent Variables
=============================================================

11\.1 Maximum\-Likelihood Estimation (MLE)
------------------------------------------

Suppose we wish to fit data to a given distribution; we may then use this technique to do so. Many of the data fitting procedures need to use MLE. MLE is a general technique, and applies widely. It is also a fundamental approach to many estimation tools in econometrics. Here we recap this. Let's say we have a series of data \\(x\\), with \\(T\\) observations. If \\(x \\sim N(\\mu,\\sigma^2\)\\), then \\\[\\begin{equation} \\mbox{density function:} \\quad f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\end{equation}\\\] The cumulative distribution function is \\\[\\begin{equation} F(x) \= \\int\_{\-\\infty}^x f(u) du \\end{equation}\\\] The standard normal distribution is \\(x \\sim N(0,1\)\\); its cumulative distribution is symmetric, \\\[\\begin{equation} F(x) \= 1 \- F(\-x) \\end{equation}\\\] so that, for the standard normal distribution, \\(F(0\) \= \\frac{1}{2}\\). The likelihood of the entire series is \\\[\\begin{equation} \\prod\_{t\=1}^T f\[x(t)] \\end{equation}\\\] It is easier (computationally) to maximize \\\[\\begin{equation} \\max\_{\\mu,\\sigma} \\; {\\cal L} \\equiv \\sum\_{t\=1}^T \\ln f\[x(t)] \\end{equation}\\\] known as the log\-likelihood.

11\.2 Implementation
--------------------

This is easily done in R. First we create the log\-likelihood function, so you can see how functions are defined in R. Second, we optimize it: the function below returns the **negative** of the log\-likelihood because the optimizer **nlm** is a minimizer, and minimizing the negative log\-likelihood is the same as maximizing the log\-likelihood, hence the name maximum likelihood estimation (MLE).

```
#LOG-LIKELIHOOD FUNCTION
LL = function(params,x) {
  mu = params[1]; sigsq = params[2]
  f = (1/sqrt(2*pi*sigsq))*exp(-0.5*(x-mu)^2/sigsq)
  LL = -sum(log(f))
}
```

```
#GENERATE DATA FROM A NORMAL DISTRIBUTION
x = rnorm(10000, mean=5, sd=3)
#MAXIMIZE LOG-LIKELIHOOD
params = c(4,2)   #Create starting guess for parameters
res = nlm(LL,params,x)
print(res)
```

```
## $minimum
## [1] 25257.34
##
## $estimate
## [1] 4.965689 9.148508
##
## $gradient
## [1] 0.0014777011 -0.0002584778
##
## $code
## [1] 1
##
## $iterations
## [1] 11
```

We can see that the result was a fitted normal distribution with mean close to 5 and variance close to 9 (standard deviation close to 3), roughly the same parameters as the distribution from which the data was originally generated. Further, notice that the gradient is close to zero for both parameters, as it should be when the maximum is reached.

11\.3 Logit and Probit Models
-----------------------------

Usually we run regressions using continuous variables for the dependent (\\(y\\)) variables, such as, for example, when we regress income on education. Sometimes, however, the dependent variable may be discrete, and could be binomial or multinomial. That is, the dependent variable is **limited**. In such cases, we need a different approach. **Discrete dependent** variables are a special case of **limited dependent** variables. The Logit and Probit models we look at here are examples of discrete dependent variable models. Such models are also often called **qualitative response** (QR) models. In particular, when the variable is binary, i.e., takes values of \\(\\{0,1\\}\\), then we get a probability model. If we just regressed the left hand side variables of ones and zeros on a suite of right hand side variables we could of course fit a linear regression.
Then if we took another observation with values for the right hand side, i.e., \\(x \= \\{x\_1,x\_2,\\ldots,x\_k\\}\\), we could compute the value of the \\(y\\) variable using the fitted coefficients. But of course, this value will not be exactly 0 or 1, except by unlikely coincidence. Nor will this value lie in the range \\((0,1\)\\). There is also a relationship to classifier models. In classifier models, we are interested in allocating observations to categories. In limited dependent models we also want to explain the reasons (i.e., find explanatory variables) for the allocation across categories. Some examples of such models are to explain whether a person is employed or not, whether a firm is syndicated or not, whether a firm is solvent or not, which field of work is chosen by graduates, where consumers shop, whether they choose Coke versus Pepsi, etc. These fitted values might not even lie between 0 and 1 with a linear regression. However, if we used a carefully chosen nonlinear regression function, then we could ensure that the fitted values of \\(y\\) are restricted to the range \\((0,1\)\\), and then we would get a model where we fitted a probability. There are two such model forms that are widely used: (a) Logit, also known as a logistic regression, and (b) Probit models. We look at each one in turn. 11\.4 Logit ----------- A logit model takes the following form: \\\[\\begin{equation} y \= \\frac{e^{f(x)}}{1\+e^{f(x)}}, \\quad f(x) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\ldots \\beta\_k x\_k \\end{equation}\\] We are interested in fitting the coefficients \\(\\{\\beta\_0,\\beta\_1, \\ldots, \\beta\_k\\}\\). Note that, irrespective of the coefficients, \\(f(x) \\in (\-\\infty,\+\\infty)\\), but \\(y \\in (0,1\)\\). When \\(f(x) \\rightarrow \-\\infty\\), \\(y \\rightarrow 0\\), and when \\(f(x) \\rightarrow \+\\infty\\), \\(y \\rightarrow 1\\). We also write this model as \\\[\\begin{equation} y \= \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\equiv \\Lambda(\\beta' x) \\end{equation}\\] where \\(\\Lambda\\) (lambda) is for logit. The model generates a \\(S\\)\-shaped curve for \\(y\\), and we can plot it as follows. The fitted value of \\(y\\) is nothing but the probability that \\(y\=1\\). ``` logit = function(fx) { res = exp(fx)/(1+exp(fx)) } fx = seq(-4,4,0.01) y = logit(fx) plot(fx,y,type="l",xlab="x",ylab="f(x)",col="blue",lwd=3) ``` ### 11\.4\.1 Example For the NCAA data, take the top 32 teams and make their dependent variable 1, and that of the bottom 32 teams zero. Therefore, the teams that have \\(y\=1\\) are those that did not lose in the first round of the playoffs, and the teams that have \\(y\=0\\) are those that did. Estimation is done by maximizing the log\-likelihood. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="logit")) names(h) ``` ``` ## [1] "coefficients" "residuals" "fitted.values" ## [4] "effects" "R" "rank" ## [7] "qr" "family" "linear.predictors" ## [10] "deviance" "aic" "null.deviance" ## [13] "iter" "weights" "prior.weights" ## [16] "df.residual" "df.null" "y" ## [19] "converged" "boundary" "model" ## [22] "call" "formula" "terms" ## [25] "data" "offset" "control" ## [28] "method" "contrasts" "xlevels" ``` ``` print(logLik(h)) ``` ``` ## 'log Lik.' 
-21.44779 (df=12) ``` ``` summary(h) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.80174 -0.40502 -0.00238 0.37584 2.31767 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -45.83315 14.97564 -3.061 0.00221 ** ## xPTS -0.06127 0.09549 -0.642 0.52108 ## xREB 0.49037 0.18089 2.711 0.00671 ** ## xAST 0.16422 0.26804 0.613 0.54010 ## xTO -0.38405 0.23434 -1.639 0.10124 ## xA.T 1.56351 3.17091 0.493 0.62196 ## xSTL 0.78360 0.32605 2.403 0.01625 * ## xBLK 0.07867 0.23482 0.335 0.73761 ## xPF 0.02602 0.13644 0.191 0.84874 ## xFG 46.21374 17.33685 2.666 0.00768 ** ## xFT 10.72992 4.47729 2.397 0.01655 * ## xX3P 5.41985 5.77966 0.938 0.34838 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.896 on 52 degrees of freedom ## AIC: 66.896 ## ## Number of Fisher Scoring iterations: 6 ``` ``` h$fitted.values ``` ``` ## 1 2 3 4 5 ## 0.9998267965 0.9983229192 0.9686755530 0.9909359265 0.9977011039 ## 6 7 8 9 10 ## 0.9639506326 0.5381841865 0.9505255187 0.4329829232 0.7413280575 ## 11 12 13 14 15 ## 0.9793554057 0.7273235463 0.2309261473 0.9905414749 0.7344407215 ## 16 17 18 19 20 ## 0.9936312074 0.2269619354 0.8779507370 0.2572796426 0.9335376447 ## 21 22 23 24 25 ## 0.9765843274 0.7836742557 0.9967552281 0.9966486903 0.9715110760 ## 26 27 28 29 30 ## 0.0681674628 0.4984153630 0.9607522159 0.8624544140 0.6988578200 ## 31 32 33 34 35 ## 0.9265057217 0.7472357037 0.5589318497 0.2552381741 0.0051790298 ## 36 37 38 39 40 ## 0.4394307950 0.0205919396 0.0545333361 0.0100662111 0.0995262051 ## 41 42 43 44 45 ## 0.1219394290 0.0025416737 0.3191888357 0.0149772804 0.0685930622 ## 46 47 48 49 50 ## 0.3457439539 0.0034943441 0.5767386617 0.5489544863 0.4637012227 ## 51 52 53 54 55 ## 0.2354894587 0.0487342700 0.6359622098 0.8027221707 0.0003240393 ## 56 57 58 59 60 ## 0.0479116454 0.3422867567 0.4649889328 0.0547385409 0.0722894447 ## 61 62 63 64 ## 0.0228629774 0.0002730981 0.0570387301 0.2830628760 ``` 11\.5 Probit ------------ Probit has essentially the same idea as the logit except that the probability function is replaced by the normal distribution. The nonlinear regression equation is as follows: \\\[\\begin{equation} y \= \\Phi\[f(x)], \\quad f(x) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\ldots \\beta\_k x\_k \\end{equation}\\] where \\(\\Phi(.)\\) is the cumulative normal probability function. Again, irrespective of the coefficients, \\(f(x) \\in (\-\\infty,\+\\infty)\\), but \\(y \\in (0,1\)\\). When \\(f(x) \\rightarrow \-\\infty\\), \\(y \\rightarrow 0\\), and when \\(f(x) \\rightarrow \+\\infty\\), \\(y \\rightarrow 1\\). We can redo the same previous logit model using a probit instead: ``` h = glm(y~x, family=binomial(link="probit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.27924 (df=12) ``` ``` summary(h) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "probit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.76353 -0.41212 -0.00031 0.34996 2.24568 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -26.28219 8.09608 -3.246 0.00117 ** ## xPTS -0.03463 0.05385 -0.643 0.52020 ## xREB 0.28493 0.09939 2.867 0.00415 ** ## xAST 0.10894 0.15735 0.692 0.48874 ## xTO -0.23742 0.13642 -1.740 0.08180 . 
## xA.T 0.71485 1.86701 0.383 0.70181 ## xSTL 0.45963 0.18414 2.496 0.01256 * ## xBLK 0.03029 0.13631 0.222 0.82415 ## xPF 0.01041 0.07907 0.132 0.89529 ## xFG 26.58461 9.38711 2.832 0.00463 ** ## xFT 6.28278 2.51452 2.499 0.01247 * ## xX3P 3.15824 3.37841 0.935 0.34988 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.558 on 52 degrees of freedom ## AIC: 66.558 ## ## Number of Fisher Scoring iterations: 8 ``` ``` h$fitted.values ``` ``` ## 1 2 3 4 5 ## 9.999998e-01 9.999048e-01 9.769711e-01 9.972812e-01 9.997756e-01 ## 6 7 8 9 10 ## 9.721166e-01 5.590209e-01 9.584564e-01 4.367808e-01 7.362946e-01 ## 11 12 13 14 15 ## 9.898112e-01 7.262200e-01 2.444006e-01 9.968605e-01 7.292286e-01 ## 16 17 18 19 20 ## 9.985910e-01 2.528807e-01 8.751178e-01 2.544738e-01 9.435318e-01 ## 21 22 23 24 25 ## 9.850437e-01 7.841357e-01 9.995601e-01 9.996077e-01 9.825306e-01 ## 26 27 28 29 30 ## 8.033540e-02 5.101626e-01 9.666841e-01 8.564489e-01 6.657773e-01 ## 31 32 33 34 35 ## 9.314164e-01 7.481401e-01 5.810465e-01 2.488875e-01 1.279599e-03 ## 36 37 38 39 40 ## 4.391782e-01 1.020269e-02 5.461190e-02 4.267754e-03 1.067584e-01 ## 41 42 43 44 45 ## 1.234915e-01 2.665101e-04 3.212605e-01 6.434112e-03 7.362892e-02 ## 46 47 48 49 50 ## 3.673105e-01 4.875193e-04 6.020993e-01 5.605770e-01 4.786576e-01 ## 51 52 53 54 55 ## 2.731573e-01 4.485079e-02 6.194202e-01 7.888145e-01 1.630556e-06 ## 56 57 58 59 60 ## 4.325189e-02 3.899566e-01 4.809365e-01 5.043005e-02 7.330590e-02 ## 61 62 63 64 ## 1.498018e-02 8.425836e-07 5.515960e-02 3.218696e-01 ``` 11\.6 Analysis -------------- Both these models are just settings in which we are computing binomial (binary) probabilities, i.e. \\\[\\begin{equation} \\mbox{Pr}\[y\=1] \= F(\\beta' x) \\end{equation}\\] where \\(\\beta\\) is a vector of coefficients, and \\(x\\) is a vector of explanatory variables. \\(F\\) is the logit/probit function. \\\[\\begin{equation} {\\hat y} \= F(\\beta' x) \\end{equation}\\] where \\({\\hat y}\\) is the fitted value of \\(y\\) for a given \\(x\\), and now \\(\\beta\\) is the fitted model’s coefficients. In each case the function takes the logit or probit form that we provided earlier. Of course, \\\[\\begin{equation} \\mbox{Pr}\[y\=0] \= 1 \- F(\\beta' x) \\end{equation}\\] Note that the model may also be expressed in conditional expectation form, i.e. \\\[\\begin{equation} E\[y \| x] \= F(\\beta' x) (y\=1\) \+ \[1\-F(\\beta' x)] (y\=0\) \= F(\\beta' x) \\end{equation}\\] 11\.7 Odds Ratio and Slopes (Coefficients) in a Logit ----------------------------------------------------- In a linear regression, it is easy to see how the dependent variable changes when any right hand side variable changes. Not so with nonlinear models. A little bit of pencil pushing is required (add some calculus too). The coefficient of an independent variable in a logit regression tell us by how much the log odds of the dependent variable change with a one unit change in the independent variable. If you want the odds ratio, then simply take the exponentiation of the log odds. The odds ratio says that when the independent variable increases by one, then the odds of the dependent outcome occurring increase by a factor of the odds ratio. What are odds ratios? An odds ratio is the ratio of probability of success to the probability of failure. 
If the probability of success is \\(p\\), then we have \\\[ \\mbox{Odds Ratio (OR)} \= \\frac{p}{1\-p}, \\quad p \= \\frac{OR}{1\+OR} \\] For example, if \\(p\=0\.3\\), then the odds ratio will be \\(OR\=0\.3/0\.7 \= 0\.4285714\\). If the coefficient \\(\\beta\\) (log odds) of an independent variable in the logit is (say) 2, then it means that when the variable increases by 1, the odds ratio is multiplied by the factor \\(\\exp(2\) \= 7\.39\\). Suppose the independent variable increases by 1\. Then the odds ratio and probabilities change as follows. ``` p = 0.3 OR = p/(1-p); print(OR) ``` ``` ## [1] 0.4285714 ``` ``` beta = 2 OR_new = OR * exp(beta); print(OR_new) ``` ``` ## [1] 3.166738 ``` ``` p_new = OR_new/(1+OR_new); print(p_new) ``` ``` ## [1] 0.7600041 ``` So we see that the probability of the dependent outcome occurring has increased from \\(0\.3\\) to \\(0\.76\\). Now let's do the same example with the NCAA data. ``` h = glm(y~x, family=binomial(link="logit")) b = h$coefficients #Odds ratio is the exponentiated coefficients print(exp(b)) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## 1.244270e-20 9.405653e-01 1.632927e+00 1.178470e+00 6.810995e-01 ## xA.T xSTL xBLK xPF xFG ## 4.775577e+00 2.189332e+00 1.081849e+00 1.026364e+00 1.175903e+20 ## xFT xX3P ## 4.570325e+04 2.258450e+02 ``` ``` x1 = c(1,as.numeric(x[18,])) #Take row 18 and create the RHS variables array p1 = 1/(1+exp(-sum(b*x1))) print(p1) ``` ``` ## [1] 0.8779507 ``` ``` OR1 = p1/(1-p1) print(OR1) ``` ``` ## [1] 7.193413 ``` Now, let's see what happens if the rebounds increase by 1\. ``` x2 = x1 x2[3] = x2[3] + 1 p2 = 1/(1+exp(-sum(b*x2))) print(p2) ``` ``` ## [1] 0.921546 ``` So, the probability increases as expected. We can check that the new odds ratio will give the new probability as well. ``` OR2 = OR1 * exp(b[3]) print(OR2/(1+OR2)) ``` ``` ## xREB ## 0.921546 ``` And we see that this is exactly as required. 11\.8 Calculus of the logit coefficients ---------------------------------------- Remember that \\(y\\) lies in the range \\((0,1\)\\). 
Hence, we may be interested in how \\(E(y\|x)\\) changes as any of the explanatory variables changes in value, so we can take the derivative: \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= F'(\\beta' x) \\beta \\equiv f(\\beta' x) \\beta \\end{equation}\\] For each model we may compute this at the means of the regressors: \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\beta\\left( \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\right) \\left( 1 \- \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\right) \\end{equation}\\] which may be re\-written as \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\beta \\cdot \\Lambda(\\beta' x) \\cdot \[1\-\\Lambda(\\beta'x)] \\end{equation}\\] ``` h = glm(y~x, family=binomial(link="logit")) beta = h$coefficients print(beta) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## -45.83315262 -0.06127422 0.49037435 0.16421685 -0.38404689 ## xA.T xSTL xBLK xPF xFG ## 1.56351478 0.78359670 0.07867125 0.02602243 46.21373793 ## xFT xX3P ## 10.72992472 5.41984900 ``` ``` print(dim(x)) ``` ``` ## [1] 64 11 ``` ``` beta = as.matrix(beta) print(dim(beta)) ``` ``` ## [1] 12 1 ``` ``` wuns = matrix(1,64,1) x = cbind(wuns,x) xbar = as.matrix(colMeans(x)) xbar ``` ``` ## [,1] ## 1.0000000 ## PTS 67.1015625 ## REB 34.4671875 ## AST 12.7484375 ## TO 13.9578125 ## A.T 0.9778125 ## STL 6.8234375 ## BLK 2.7500000 ## PF 18.6562500 ## FG 0.4232969 ## FT 0.6914687 ## X3P 0.3333750 ``` ``` logitfunction = exp(t(beta) %*% xbar)/(1+exp(t(beta) %*% xbar)) print(logitfunction) ``` ``` ## [,1] ## [1,] 0.5139925 ``` ``` slopes = beta * logitfunction[1] * (1-logitfunction[1]) slopes ``` ``` ## [,1] ## (Intercept) -11.449314459 ## xPTS -0.015306558 ## xREB 0.122497576 ## xAST 0.041022062 ## xTO -0.095936529 ## xA.T 0.390572574 ## xSTL 0.195745753 ## xBLK 0.019652410 ## xPF 0.006500512 ## xFG 11.544386272 ## xFT 2.680380362 ## xX3P 1.353901094 ``` ### 11\.8\.1 How about the Probit model? In the probit model this is \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\phi(\\beta' x) \\beta \\end{equation}\\] where \\(\\phi(.)\\) is the normal density function (not the cumulative probability). 
``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="probit")) beta = h$coefficients print(beta) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## -26.28219202 -0.03462510 0.28493498 0.10893727 -0.23742076 ## xA.T xSTL xBLK xPF xFG ## 0.71484863 0.45963279 0.03029006 0.01040612 26.58460638 ## xFT xX3P ## 6.28277680 3.15823537 ``` ``` print(dim(x)) ``` ``` ## [1] 64 11 ``` ``` beta = as.matrix(beta) print(dim(beta)) ``` ``` ## [1] 12 1 ``` ``` wuns = matrix(1,64,1) x = cbind(wuns,x) xbar = as.matrix(colMeans(x)) print(xbar) ``` ``` ## [,1] ## 1.0000000 ## PTS 67.1015625 ## REB 34.4671875 ## AST 12.7484375 ## TO 13.9578125 ## A.T 0.9778125 ## STL 6.8234375 ## BLK 2.7500000 ## PF 18.6562500 ## FG 0.4232969 ## FT 0.6914687 ## X3P 0.3333750 ``` ``` probitfunction = t(beta) %*% xbar slopes = probitfunction[1] * beta slopes ``` ``` ## [,1] ## (Intercept) -1.401478911 ## xPTS -0.001846358 ## xREB 0.015193952 ## xAST 0.005809001 ## xTO -0.012660291 ## xA.T 0.038118787 ## xSTL 0.024509587 ## xBLK 0.001615196 ## xPF 0.000554899 ## xFG 1.417604938 ## xFT 0.335024536 ## xX3P 0.168410621 ``` 11\.9 Maximum\-Likelihood Estimation (MLE) of these Choice Models ----------------------------------------------------------------- Estimation in the models above, using the **glm** function is done by R using MLE. Lets write this out a little formally. Since we have say \\(n\\) observations, and each LHS variable is \\(y \= \\{0,1\\}\\), we have the likelihood function as follows: \\\[\\begin{equation} L \= \\prod\_{i\=1}^n F(\\beta'x)^{y\_i} \[1\-F(\\beta'x)]^{1\-y\_i} \\end{equation}\\] The log\-likelihood will be \\\[\\begin{equation} \\ln L \= \\sum\_{i\=1}^n \\left\[ y\_i \\ln F(\\beta'x) \+ (1\-y\_i) \\ln \[1\-F(\\beta'x)] \\right] \\end{equation}\\] To maximize the log\-likelihood we take the derivative: \\\[\\begin{equation} \\frac{\\partial \\ln L}{\\partial \\beta} \= \\sum\_{i\=1}^n \\left\[ y\_i \\frac{f(\\beta'x)}{F(\\beta'x)} \- (1\-y\_i) \\frac{f(\\beta'x)}{1\-F(\\beta'x)} \\right]x \= 0 \\end{equation}\\] which gives a system of equations to be solved for \\(\\beta\\). This is what the software is doing. The system of first\-order conditions are collectively called the **likelihood equation**. You may well ask, how do we get the t\-statistics of the parameter estimates \\(\\beta\\)? The formal derivation is beyond the scope of this class, as it requires probability limit theorems, but let’s just do this a little heuristically, so you have some idea of what lies behind it. The t\-stat for a coefficient is its value divided by its standard deviation. We get some idea of the standard deviation by asking the question: how does the coefficient set \\(\\beta\\) change when the log\-likelihood changes? That is, we are interested in \\(\\partial \\beta / \\partial \\ln L\\). Above we have computed the reciprocal of this, as you can see. 
Lets define \\\[\\begin{equation} g \= \\frac{\\partial \\ln L}{\\partial \\beta} \\end{equation}\\] We also define the second derivative (also known as the Hessian matrix) \\\[\\begin{equation} H \= \\frac{\\partial^2 \\ln L}{\\partial \\beta \\partial \\beta'} \\end{equation}\\] Note that the following are valid: \\\[\\begin{eqnarray\*} E(g) \&\=\& 0 \\quad \\mbox{(this is a vector)} \\\\ Var(g) \&\=\& E(g g') \- E(g)^2 \= E(g g') \\\\ \&\=\& \-E(H) \\quad \\mbox{(this is a non\-trivial proof)} \\end{eqnarray\*}\\] We call \\\[\\begin{equation} I(\\beta) \= \-E(H) \\end{equation}\\] the information matrix. Since (heuristically) the variation in log\-likelihood with changes in beta is given by \\(Var(g)\=\-E(H)\=I(\\beta)\\), the inverse gives the variance of \\(\\beta\\). Therefore, we have \\\[\\begin{equation} Var(\\beta) \\rightarrow I(\\beta)^{\-1} \\end{equation}\\] We take the square root of the diagonal of this matrix and divide the values of \\(\\beta\\) by that to get the t\-statistics. 11\.10 Multinomial Logit ------------------------ You will need the **nnet** package for this. This model takes the following form: \\\[\\begin{equation} \\mbox{Prob}\[y \= j] \= p\_j\= \\frac{\\exp(\\beta\_j' x)}{1\+\\sum\_{j\=1}^{J} \\exp(\\beta\_j' x)} \\end{equation}\\] We usually set \\\[\\begin{equation} \\mbox{Prob}\[y \= 0] \= p\_0 \= \\frac{1}{1\+\\sum\_{j\=1}^{J} \\exp(\\beta\_j' x)} \\end{equation}\\] ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = as.matrix(ncaa[4:14]) w1 = (1:16)*0 + 1 w0 = (1:16)*0 y1 = c(w1,w0,w0,w0) y2 = c(w0,w1,w0,w0) y3 = c(w0,w0,w1,w0) y4 = c(w0,w0,w0,w1) y = cbind(y1,y2,y3,y4) library(nnet) res = multinom(y~x) ``` ``` ## # weights: 52 (36 variable) ## initial value 88.722839 ## iter 10 value 71.177975 ## iter 20 value 60.076921 ## iter 30 value 51.167439 ## iter 40 value 47.005269 ## iter 50 value 45.196280 ## iter 60 value 44.305029 ## iter 70 value 43.341689 ## iter 80 value 43.260097 ## iter 90 value 43.247324 ## iter 100 value 43.141297 ## final value 43.141297 ## stopped after 100 iterations ``` ``` res ``` ``` ## Call: ## multinom(formula = y ~ x) ## ## Coefficients: ## (Intercept) xPTS xREB xAST xTO xA.T ## y2 -8.847514 -0.1595873 0.3134622 0.6198001 -0.2629260 -2.1647350 ## y3 65.688912 0.2983748 -0.7309783 -0.6059289 0.9284964 -0.5720152 ## y4 31.513342 -0.1382873 -0.2432960 0.2887910 0.2204605 -2.6409780 ## xSTL xBLK xPF xFG xFT xX3P ## y2 -0.813519 0.01472506 0.6521056 -13.77579 10.374888 -3.436073 ## y3 -1.310701 0.63038878 -0.1788238 -86.37410 -24.769245 -4.897203 ## y4 -1.470406 -0.31863373 0.5392835 -45.18077 6.701026 -7.841990 ## ## Residual Deviance: 86.28259 ## AIC: 158.2826 ``` ``` print(names(res)) ``` ``` ## [1] "n" "nunits" "nconn" "conn" ## [5] "nsunits" "decay" "entropy" "softmax" ## [9] "censored" "value" "wts" "convergence" ## [13] "fitted.values" "residuals" "call" "terms" ## [17] "weights" "deviance" "rank" "lab" ## [21] "coefnames" "vcoefnames" "xlevels" "edf" ## [25] "AIC" ``` ``` res$fitted.values ``` ``` ## y1 y2 y3 y4 ## 1 6.785454e-01 3.214178e-01 7.032345e-06 2.972107e-05 ## 2 6.168467e-01 3.817718e-01 2.797313e-06 1.378715e-03 ## 3 7.784836e-01 1.990510e-01 1.688098e-02 5.584445e-03 ## 4 5.962949e-01 3.988588e-01 5.018346e-04 4.344392e-03 ## 5 9.815286e-01 1.694721e-02 1.442350e-03 8.179230e-05 ## 6 9.271150e-01 6.330104e-02 4.916966e-03 4.666964e-03 ## 7 4.515721e-01 9.303667e-02 3.488898e-02 4.205023e-01 ## 8 8.210631e-01 1.530721e-01 7.631770e-03 1.823302e-02 ## 9 1.567804e-01 9.375075e-02 6.413693e-01 
1.080996e-01 ## 10 8.403357e-01 9.793135e-03 1.396393e-01 1.023186e-02 ## 11 9.163789e-01 6.747946e-02 7.847380e-05 1.606316e-02 ## 12 2.448850e-01 4.256001e-01 2.880803e-01 4.143463e-02 ## 13 1.040352e-01 1.534272e-01 1.369554e-01 6.055822e-01 ## 14 8.468755e-01 1.506311e-01 5.083480e-04 1.985036e-03 ## 15 7.136048e-01 1.294146e-01 7.385294e-02 8.312770e-02 ## 16 9.885439e-01 1.114547e-02 2.187311e-05 2.887256e-04 ## 17 6.478074e-02 3.547072e-01 1.988993e-01 3.816127e-01 ## 18 4.414721e-01 4.497228e-01 4.716550e-02 6.163956e-02 ## 19 6.024508e-03 3.608270e-01 7.837087e-02 5.547777e-01 ## 20 4.553205e-01 4.270499e-01 3.614863e-04 1.172681e-01 ## 21 1.342122e-01 8.627911e-01 1.759865e-03 1.236845e-03 ## 22 1.877123e-02 6.423037e-01 5.456372e-05 3.388705e-01 ## 23 5.620528e-01 4.359459e-01 5.606424e-04 1.440645e-03 ## 24 2.837494e-01 7.154506e-01 2.190456e-04 5.809815e-04 ## 25 1.787749e-01 8.037335e-01 3.361806e-04 1.715541e-02 ## 26 3.274874e-02 3.484005e-02 1.307795e-01 8.016317e-01 ## 27 1.635480e-01 3.471676e-01 1.131599e-01 3.761245e-01 ## 28 2.360922e-01 7.235497e-01 3.375018e-02 6.607966e-03 ## 29 1.618602e-02 7.233098e-01 5.762083e-06 2.604984e-01 ## 30 3.037741e-02 8.550873e-01 7.487804e-02 3.965729e-02 ## 31 1.122897e-01 8.648388e-01 3.935657e-03 1.893584e-02 ## 32 2.312231e-01 6.607587e-01 4.770775e-02 6.031045e-02 ## 33 6.743125e-01 2.028181e-02 2.612683e-01 4.413746e-02 ## 34 1.407693e-01 4.089518e-02 7.007541e-01 1.175815e-01 ## 35 6.919547e-04 4.194577e-05 9.950322e-01 4.233924e-03 ## 36 8.051225e-02 4.213965e-03 9.151287e-01 1.450423e-04 ## 37 5.691220e-05 7.480549e-02 5.171594e-01 4.079782e-01 ## 38 2.709867e-02 3.808987e-02 6.193969e-01 3.154145e-01 ## 39 4.531001e-05 2.248580e-08 9.999542e-01 4.626258e-07 ## 40 1.021976e-01 4.597678e-03 5.133839e-01 3.798208e-01 ## 41 2.005837e-02 2.063200e-01 5.925050e-01 1.811166e-01 ## 42 1.829028e-04 1.378795e-03 6.182839e-01 3.801544e-01 ## 43 1.734296e-01 9.025284e-04 7.758862e-01 4.978171e-02 ## 44 4.314938e-05 3.131390e-06 9.997892e-01 1.645004e-04 ## 45 1.516231e-02 2.060325e-03 9.792594e-01 3.517926e-03 ## 46 2.917597e-01 6.351166e-02 4.943818e-01 1.503468e-01 ## 47 1.278933e-04 1.773509e-03 1.209486e-01 8.771500e-01 ## 48 1.320000e-01 2.064338e-01 6.324904e-01 2.907578e-02 ## 49 1.683221e-02 4.007848e-01 1.628981e-03 5.807540e-01 ## 50 9.670085e-02 4.314765e-01 7.669035e-03 4.641536e-01 ## 51 4.953577e-02 1.370037e-01 9.882004e-02 7.146405e-01 ## 52 1.787927e-02 9.825660e-02 2.203037e-01 6.635604e-01 ## 53 1.174053e-02 4.723628e-01 2.430072e-03 5.134666e-01 ## 54 2.053871e-01 6.721356e-01 4.169640e-02 8.078090e-02 ## 55 3.060369e-06 1.418623e-03 1.072549e-02 9.878528e-01 ## 56 1.122164e-02 6.566169e-02 3.080641e-01 6.150525e-01 ## 57 8.873716e-03 4.996907e-01 8.222034e-03 4.832136e-01 ## 58 2.164962e-02 2.874313e-01 1.136455e-03 6.897826e-01 ## 59 5.230443e-03 6.430174e-04 9.816825e-01 1.244406e-02 ## 60 8.743368e-02 6.710327e-02 4.260116e-01 4.194514e-01 ## 61 1.913578e-01 6.458463e-04 3.307553e-01 4.772410e-01 ## 62 6.450967e-07 5.035697e-05 7.448285e-01 2.551205e-01 ## 63 2.400365e-04 4.651537e-03 8.183390e-06 9.951002e-01 ## 64 1.515894e-04 2.631451e-01 1.002332e-05 7.366933e-01 ``` You can see from the results that the probability for category 1 is the same as \\(p\_0\\). What this means is that we compute the other three probabilities, and the remaining is for the first category. 
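As a quick check, we can recompute the fitted probabilities for any one school directly from the estimated coefficients; the short sketch below uses row 18 of \\(x\\) as an arbitrary example and reuses the **res** and **x** objects created above.

```
b = coef(res)                   #3 x 12 matrix of coefficients for categories y2, y3, y4
x18 = c(1, as.numeric(x[18,]))  #prepend a 1 for the intercept
num = exp(b %*% x18)            #numerators for categories 2, 3, and 4
p = c(1, num)/(1 + sum(num))    #category 1 gets the remaining probability mass
print(p)
```

These four numbers should match row 18 of **res$fitted.values** shown above.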
We check that the probabilities across each row for all four categories add up to 1: ``` rowSums(res$fitted.values) ``` ``` ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## 51 52 53 54 55 56 57 58 59 60 61 62 63 64 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ``` 11\.11 When OLS fails --------------------- The standard linear regression model often does not apply, and we need to be careful not to overuse it. Peter Kennedy in his excellent book “A Guide to Econometrics” states five cases where violations of critical assumptions for OLS occur, and we should then be warned against its use. 1. The OLS model is in error when (a) the RHS variables are incorrect (**inappropriate regressors**) for use to explain the LHS variable. This is just the presence of a poor model. Hopefully, the F\-statistic from such a regression will warn against use of the model. (b) The relationship between the LHS and RHS is **nonlinear**, and this makes use of a linear regression inaccurate. (c) The model is **non\-stationary**, i.e., the data spans a period where the coefficients cannot be reasonably expected to remain the same. 2. **Non\-zero mean regression residuals**. This occurs with truncated residuals (see discussion below) and in **sample selection** problems, where the fitted model to a selected subsample would result in non\-zero mean errors for the full sample. This is also known as the biased intercept problem. The errors may also be correlated with the regressors, i.e., endogeneity (see below). 3. **Residuals are not iid**. This occurs in two ways. (a) Heteroskedasticity, i.e., the variances of the residuals for all observations are not the same, i.e., violation of the identically distributed assumption. (b) Autocorrelation, where the residuals are correlated with each other, i.e., violation of the independence assumption. 4. **Endogeneity**. Here the observations of regressors \\(x\\) cannot be assumed to be fixed in repeated samples. This occurs in several ways. (a) Errors in variables, i.e., measurement of \\(x\\) in error. (b) Omitted variables, which is a form of errors in variables. (c) Autoregression, i.e., using a lagged value of the dependent variable as an independent variable, as in VARs. (d) Simultaneous equation systems, where all variables are endogenous, and this is also known as **reverse causality**. For example, changes in tax rates change economic behavior, and hence income, which may result in further policy changes in tax rates, and so on. Because the \\(x\\) variables are correlated with the errors \\(\\epsilon\\), they are no longer exogenous, and hence we term this situation one of “endogeneity”. 5. **Violation of \\(n \> p\\)**. OLS requires the number of observations (\\(n\\)) to exceed the number of independent variables (\\(p\\)), i.e., the dimension of \\(x\\). The estimation also breaks down when two regressors are highly correlated with each other, which is known as **multicollinearity**. 11\.12 Truncated Variables and Sample Selection ----------------------------------------------- Sample selection problems arise because the sample is truncated based on some selection criterion, and the regression that is run is biased because the sample is biased and does not reflect the true/full population. 
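A tiny simulation (with made\-up data) makes the bias concrete: we generate a dependent variable from a linear model with slope 1, keep only observations above a cutoff, and compare the OLS slopes on the full and the truncated samples.

```
set.seed(42)
n = 10000
xs = rnorm(n)
ys = 1*xs + rnorm(n)                 #true slope is 1
idx = which(ys > 0)                  #selection: we only observe ys > 0
print(coef(lm(ys ~ xs)))             #full sample: slope close to 1
print(coef(lm(ys[idx] ~ xs[idx])))   #truncated sample: slope biased toward zero
```

The truncated sample delivers a slope well below 1 because, at low values of the regressor, only observations with large positive errors survive the cutoff.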
For example, wage data is only available for people who decided to work, i.e., the wage was worth their while, and above their reservation wage. If we are interested in finding out the determinants of wages, we need to take this fact into account, i.e., the sample only contains people who were willing to work at the wage levels that were in turn determined by demand and supply of labor. The sample becomes non\-random. It explains the curious case that women with more children tend to have lower wages (because they need the money and hence, their reservation wage is lower). Usually we handle sample selection issues using a two\-equation regression approach. The first equation determines if an observation enters the sample. The second equation then assesses the model of interest, e.g., what determines wages. We will look at an example later. But first, we provide some basic mathematical results that we need later. And of course, we need to revisit our Bayesian ideas again! * Given a probability density \\(f(x)\\), \\\[\\begin{equation} f(x \| x \> a) \= \\frac{f(x)}{Pr(x\>a)} \\end{equation}\\] If we are using the normal distribution then this is: \\\[\\begin{equation} f(x \| x \> a) \= \\frac{\\phi(x)}{1\-\\Phi(a)} \\end{equation}\\] * If \\(x \\sim N(\\mu, \\sigma^2\)\\), then \\\[\\begin{equation} E(x \| x\>a) \= \\mu \+ \\sigma\\; \\frac{\\phi(c)}{1\-\\Phi(c)}, \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{equation}\\] Note that this expectation is provided without proof, as are the next few ones. For example if we let \\(x\\) be standard normal and we want \\(E(\[x \| x \> \-1]\\), we have ``` dnorm(-1)/(1-pnorm(-1)) ``` ``` ## [1] 0.2876 ``` For the same distribution \\\[\\begin{equation} E(x \| x \< a) \= \\mu \+ \\sigma\\; \\frac{\-\\phi(c)}{\\Phi(c)}, \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{equation}\\] For example, \\(E\[x \| x \< 1]\\) is ``` -dnorm(1)/pnorm(1) ``` ``` ## [1] -0.2876 ``` 11\.13 Inverse Mills Ratio -------------------------- The values \\(\\frac{\\phi(c)}{1\-\\Phi(c)}\\) or \\(\\frac{\-\\phi(c)}{\\Phi(c)}\\) as the case may be is often shortened to the variable \\(\\lambda(c)\\), which is also known as the Inverse Mills Ratio. If \\(y\\) and \\(x\\) are correlated (with correlation \\(\\rho\\)), and \\(y \\sim N(\\mu\_y,\\sigma\_y^2\)\\), then \\\[\\begin{eqnarray\*} Pr(y,x \| x\>a) \&\=\& \\frac{f(y,x)}{Pr(x\>a)} \\\\ E(y \| x\>a) \&\=\& \\mu\_y \+ \\sigma\_y \\rho \\lambda(c), \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{eqnarray\*}\\] This leads naturally to the truncated regression model. Suppose we have the usual regression model where \\\[\\begin{equation} y \= \\beta'x \+ e, \\quad e \\sim N(0,\\sigma^2\) \\end{equation}\\] But suppose we restrict attention in our model to values of \\(y\\) that are greater than a cut off \\(a\\). We can then write down by inspection the following correct model (no longer is the simple linear regression valid) \\\[\\begin{equation} E(y \| y \> a) \= \\beta' x \+ \\sigma \\; \\frac{\\phi\[(a\-\\beta'x)/\\sigma]}{1\-\\Phi\[(a\-\\beta'x)/\\sigma]} \\end{equation}\\] Therefore, when the sample is truncated, then we need to run the regression above, i.e., the usual right\-hand side \\(\\beta' x\\) with an additional variable, i.e., the Inverse Mill’s ratio. We look at this in a real\-world example. ### 11\.13\.1 Example: Limited Dependent Variables in VC Syndications Not all venture\-backed firms end up making a successful exit, either via an IPO, through a buyout, or by means of another exit route. 
By examining a large sample of startup firms, we can measure the probability of a firm making a successful exit. By designating successful exits as \\(S\=1\\), and setting \\(S\=0\\) otherwise, we use matrix \\(X\\) of explanatory variables and fit a Probit model to the data. We define \\(S\\) to be based on a **latent** threshold variable \\(S^\*\\) such that \\\[\\begin{equation} S \= \\left\\{ \\begin{array}{ll} 1 \& \\mbox{if } S^\* \> 0\\\\ 0 \& \\mbox{if } S^\* \\leq 0\. \\end{array} \\right. \\end{equation}\\] where the latent variable is modeled as \\\[\\begin{equation} S^\* \= \\gamma' X \+ u, \\quad u \\sim N(0,\\sigma\_u^2\) \\end{equation}\\] The fitted model provides us the probability of exit, i.e., \\(E(S)\\), for all financing rounds. \\\[\\begin{equation} E(S) \= E(S^\* \> 0\) \= E(u \> \-\\gamma' X) \= 1 \- \\Phi(\-\\gamma' X) \= \\Phi(\\gamma' X), \\end{equation}\\] where \\(\\gamma\\) is the vector of coefficients fitted in the Probit model, using standard likelihood methods. The last expression in the equation above follows from the use of normality in the Probit specification. \\(\\Phi(.)\\) denotes the cumulative normal distribution. 11\.14 Sample Selection Problems (and endogeneity) -------------------------------------------------- Suppose we want to examine the role of syndication in venture success. Success in a syndicated venture comes from two broad sources of VC expertise. First, VCs are experienced in picking good projects to invest in, and syndicates are efficient vehicles for picking good firms; this is the selection hypothesis put forth by Lerner (1994\). Amongst two projects that appear a\-priori similar in prospects, the fact that one of them is selected by a syndicate is evidence that the project is of better quality (ex\-post to being vetted by the syndicate, but ex\-ante to effort added by the VCs), since the process of syndication effectively entails getting a second opinion by the lead VC. Second, syndicates may provide better monitoring as they bring a wide range of skills to the venture, and this is suggested in the value\-added hypothesis of Brander, Amit, and Antweiler (2002\). A regression of venture returns on various firm characteristics and a dummy variable for syndication allows a first pass estimate of whether syndication impacts performance. However, it may be that syndicated firms are simply of higher quality and deliver better performance, whether or not they chose to syndicate. Better firms are more likely to syndicate because VCs tend to prefer such firms and can identify them. In this case, the coefficient on the dummy variable might reveal a value\-add from syndication, when indeed, there is none. Hence, we correct the specification for endogeneity, and then examine whether the dummy variable remains significant. Greene, in his classic book “Econometric Analysis” provides the correction for endogeneity required here. We briefly summarize the model required. The performance regression is of the form: \\\[\\begin{equation} Y \= \\beta' X \+ \\delta Q \+ \\epsilon, \\quad \\epsilon \\sim N(0,\\sigma\_{\\epsilon}^2\) \\end{equation}\\] where \\(Y\\) is the performance variable; \\(Q\\) is the dummy variable taking a value of 1 if the firm is syndicated, and zero otherwise, and \\(\\delta\\) is a coefficient that determines whether performance is different on account of syndication. 
If it is not, then it implies that the variables \\(X\\) are sufficient to explain the differential performance across firms, or that there is no differential performance across the two types of firms. However, since these same variables determine also, whether the firm syndicates or not, we have an endogeneity issue which is resolved by adding a correction to the model above. The error term \\(\\epsilon\\) is affected by censoring bias in the subsamples of syndicated and non\-syndicated firms. When \\(Q\=1\\), i.e. when the firm’s financing is syndicated, then the residual \\(\\epsilon\\) has the following expectation \\\[\\begin{equation} E(\\epsilon \| Q\=1\) \= E(\\epsilon \| S^\* \>0\) \= E(\\epsilon \| u \> \-\\gamma' X) \= \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] where \\(\\rho \= Corr(\\epsilon,u)\\), and \\(\\sigma\_{\\epsilon}\\) is the standard deviation of \\(\\epsilon\\). This implies that \\\[\\begin{equation} E(Y \| Q\=1\) \= \\beta'X \+ \\delta \+ \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] Note that \\(\\phi(\-\\gamma'X)\=\\phi(\\gamma'X)\\), and \\(1\-\\Phi(\-\\gamma'X)\=\\Phi(\\gamma'X)\\). For estimation purposes, we write this as the following regression equation: EQN1 \\\[\\begin{equation} Y \= \\delta \+ \\beta' X \+ \\beta\_m m(\\gamma' X) \\end{equation}\\] where \\(m(\\gamma' X) \= \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)}\\) and \\(\\beta\_m \= \\rho \\sigma\_{\\epsilon}\\). Thus, \\(\\{\\delta,\\beta,\\beta\_m\\}\\) are the coefficients estimated in the regression. (Note here that \\(m(\\gamma' X)\\) is also known as the inverse Mill’s ratio.) Likewise, for firms that are not syndicated, we have the following result \\\[\\begin{equation} E(Y \| Q\=0\) \= \\beta'X \+ \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\-\\phi(\\gamma' X)}{1\-\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] This may also be estimated by linear cross\-sectional regression. EQN0 \\\[\\begin{equation} Y \= \\beta' X \+ \\beta\_m \\cdot m'(\\gamma' X) \\end{equation}\\] where \\(m' \= \\frac{\-\\phi(\\gamma' X)}{1\-\\Phi(\\gamma' X)}\\) and \\(\\beta\_m \= \\rho \\sigma\_{\\epsilon}\\). The estimation model will take the form of a stacked linear regression comprising both equations (EQN1\) and (EQN0\). This forces \\(\\beta\\) to be the same across all firms without necessitating additional constraints, and allows the specification to remain within the simple OLS form. If \\(\\delta\\) is significant after this endogeneity correction, then the empirical evidence supports the hypothesis that syndication is a driver of differential performance. If the coefficients \\(\\{\\delta, \\beta\_m\\}\\) are significant, then the expected difference in performance for each syndicated financing round \\((i,j)\\) is \\\[\\begin{equation} \\delta \+ \\beta\_m \\left\[ m(\\gamma\_{ij}' X\_{ij}) \- m'(\\gamma\_{ij}' X\_{ij}) \\right], \\;\\;\\; \\forall i,j. \\end{equation}\\] The method above forms one possible approach to addressing treatment effects. Another approach is to estimate a Probit model first, and then to set \\(m(\\gamma' X) \= \\Phi(\\gamma' X)\\). This is known as the instrumental variables approach. Some **References**: Brander, Amit, and Antweiler ([2002](#ref-JEMS:JEMS423)); Lerner ([1994](#ref-10.2307/3665618)) The correct regression may be run using the **sampleSelection** package in R. 
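Before turning to the package, here is a minimal hand\-rolled sketch of the two\-step procedure, using the **Mroz87** labor\-market data that is analyzed in the example below (the variable names are those in the package's data set): a probit selection equation is fit first, its linear predictor \\(\\gamma' X\\) is converted into the inverse Mills ratio, and that ratio is added as a regressor in the second\-stage wage equation for the selected (working) subsample.

```
library(sampleSelection)    #provides the Mroz87 data used below
data(Mroz87)
#Step 1: probit model for labor force participation (the selection equation)
probit1 = glm(lfp ~ age + I(age^2) + faminc + kids5 + educ,
              family = binomial(link = "probit"), data = Mroz87)
gX = predict(probit1, type = "link")        #the probit index gamma'X
Mroz87$imr = dnorm(gX)/pnorm(gX)            #inverse Mills ratio
#Step 2: OLS wage equation on the working subsample, with the Mills ratio added
step2 = lm(wage ~ exper + I(exper^2) + educ + city + imr,
           data = Mroz87, subset = (lfp == 1))
summary(step2)
```

The standard errors from this naive second stage are not corrected for the fact that the Mills ratio is itself estimated; the **sampleSelection** functions in the example that follows handle that correction automatically.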
Sample selection models correct for the fact that two subsamples may be different because of treatment effects. Let’s take an example with data from the wage market. ### 11\.14\.1 Example: Women in the Labor Market This is an example from the package in R itself. The data used is also within the package. After loading in the package **sampleSelection** we can use the data set called **Mroz87**. This contains labour market participation data for women as well as wage levels for women. If we are explaining what drives women’s wages we can simply run the following regression. See: [http://www.inside\-r.org/packages/cran/sampleSelection/docs/Mroz87](http://www.inside-r.org/packages/cran/sampleSelection/docs/Mroz87) The original paper may be downloaded at: [http://eml.berkeley.edu/\~cle/e250a\_f13/mroz\-paper.pdf](http://eml.berkeley.edu/~cle/e250a_f13/mroz-paper.pdf) ``` library(sampleSelection) ``` ``` ## Loading required package: maxLik ``` ``` ## Loading required package: miscTools ``` ``` ## Warning: package 'miscTools' was built under R version 3.3.2 ``` ``` ## Loading required package: methods ``` ``` ## ## Please cite the 'maxLik' package as: ## Henningsen, Arne and Toomet, Ott (2011). maxLik: A package for maximum likelihood estimation in R. Computational Statistics 26(3), 443-458. DOI 10.1007/s00180-010-0217-1. ## ## If you have questions, suggestions, or comments regarding the 'maxLik' package, please use a forum or 'tracker' at maxLik's R-Forge site: ## https://r-forge.r-project.org/projects/maxlik/ ``` ``` data(Mroz87) Mroz87$kids = (Mroz87$kids5 + Mroz87$kids618 > 0) Mroz87$numkids = Mroz87$kids5 + Mroz87$kids618 summary(Mroz87) ``` ``` ## lfp hours kids5 kids618 ## Min. :0.0000 Min. : 0.0 Min. :0.0000 Min. :0.000 ## 1st Qu.:0.0000 1st Qu.: 0.0 1st Qu.:0.0000 1st Qu.:0.000 ## Median :1.0000 Median : 288.0 Median :0.0000 Median :1.000 ## Mean :0.5684 Mean : 740.6 Mean :0.2377 Mean :1.353 ## 3rd Qu.:1.0000 3rd Qu.:1516.0 3rd Qu.:0.0000 3rd Qu.:2.000 ## Max. :1.0000 Max. :4950.0 Max. :3.0000 Max. :8.000 ## age educ wage repwage ## Min. :30.00 Min. : 5.00 Min. : 0.000 Min. :0.00 ## 1st Qu.:36.00 1st Qu.:12.00 1st Qu.: 0.000 1st Qu.:0.00 ## Median :43.00 Median :12.00 Median : 1.625 Median :0.00 ## Mean :42.54 Mean :12.29 Mean : 2.375 Mean :1.85 ## 3rd Qu.:49.00 3rd Qu.:13.00 3rd Qu.: 3.788 3rd Qu.:3.58 ## Max. :60.00 Max. :17.00 Max. :25.000 Max. :9.98 ## hushrs husage huseduc huswage ## Min. : 175 Min. :30.00 Min. : 3.00 Min. : 0.4121 ## 1st Qu.:1928 1st Qu.:38.00 1st Qu.:11.00 1st Qu.: 4.7883 ## Median :2164 Median :46.00 Median :12.00 Median : 6.9758 ## Mean :2267 Mean :45.12 Mean :12.49 Mean : 7.4822 ## 3rd Qu.:2553 3rd Qu.:52.00 3rd Qu.:15.00 3rd Qu.: 9.1667 ## Max. :5010 Max. :60.00 Max. :17.00 Max. :40.5090 ## faminc mtr motheduc fatheduc ## Min. : 1500 Min. :0.4415 Min. : 0.000 Min. : 0.000 ## 1st Qu.:15428 1st Qu.:0.6215 1st Qu.: 7.000 1st Qu.: 7.000 ## Median :20880 Median :0.6915 Median :10.000 Median : 7.000 ## Mean :23081 Mean :0.6789 Mean : 9.251 Mean : 8.809 ## 3rd Qu.:28200 3rd Qu.:0.7215 3rd Qu.:12.000 3rd Qu.:12.000 ## Max. :96000 Max. :0.9415 Max. :17.000 Max. :17.000 ## unem city exper nwifeinc ## Min. : 3.000 Min. :0.0000 Min. : 0.00 Min. :-0.02906 ## 1st Qu.: 7.500 1st Qu.:0.0000 1st Qu.: 4.00 1st Qu.:13.02504 ## Median : 7.500 Median :1.0000 Median : 9.00 Median :17.70000 ## Mean : 8.624 Mean :0.6428 Mean :10.63 Mean :20.12896 ## 3rd Qu.:11.000 3rd Qu.:1.0000 3rd Qu.:15.00 3rd Qu.:24.46600 ## Max. :14.000 Max. :1.0000 Max. :45.00 Max. 
:96.00000 ## wifecoll huscoll kids numkids ## TRUE:212 TRUE:295 Mode :logical Min. :0.000 ## FALSE:541 FALSE:458 FALSE:229 1st Qu.:0.000 ## TRUE :524 Median :1.000 ## NA's :0 Mean :1.591 ## 3rd Qu.:3.000 ## Max. :8.000 ``` ``` res = lm(wage ~ age + age^2 + educ + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + educ + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.5331 -2.2710 -0.4765 1.3975 22.7241 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.2490882 0.9094210 -3.573 0.000376 *** ## age 0.0008193 0.0141084 0.058 0.953708 ## educ 0.4496393 0.0503591 8.929 < 2e-16 *** ## city 0.0998064 0.2388551 0.418 0.676174 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.079 on 749 degrees of freedom ## Multiple R-squared: 0.1016, Adjusted R-squared: 0.09799 ## F-statistic: 28.23 on 3 and 749 DF, p-value: < 2.2e-16 ``` So, education matters. But since education also determines labor force participation (variable **lfp**) it may just be that we can use **lfp** instead. Let’s try that. ``` res = lm(wage ~ age + age^2 + lfp + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.1815 -0.9869 -0.1624 0.3081 20.6809 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.478793 0.513001 -0.933 0.3510 ## age 0.004163 0.011333 0.367 0.7135 ## lfp 4.185897 0.183727 22.783 <2e-16 *** ## city 0.462158 0.190176 2.430 0.0153 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.489 on 749 degrees of freedom ## Multiple R-squared: 0.4129, Adjusted R-squared: 0.4105 ## F-statistic: 175.6 on 3 and 749 DF, p-value: < 2.2e-16 ``` ``` #LET'S TRY BOTH VARIABLES Mroz87$educlfp = Mroz87$educ*Mroz87$lfp res = lm(wage ~ age + age^2 + lfp + educ + city + educlfp , data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + educ + city + educlfp, ## data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -5.8139 -0.7307 -0.0712 0.2261 21.1120 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.528196 0.904949 -0.584 0.5596 ## age 0.009299 0.010801 0.861 0.3895 ## lfp -2.028354 0.963841 -2.104 0.0357 * ## educ -0.002723 0.060710 -0.045 0.9642 ## city 0.244245 0.182220 1.340 0.1805 ## educlfp 0.491515 0.077942 6.306 4.89e-10 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.347 on 747 degrees of freedom ## Multiple R-squared: 0.4792, Adjusted R-squared: 0.4757 ## F-statistic: 137.4 on 5 and 747 DF, p-value: < 2.2e-16 ``` ``` #LET'S TRY BOTH VARIABLES res = lm(wage ~ age + age^2 + lfp + educ + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + educ + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.9849 -1.1053 -0.1626 0.4762 21.0179 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -4.18595 0.71239 -5.876 6.33e-09 *** ## age 0.01421 0.01105 1.286 0.199 ## lfp 3.94731 0.18073 21.841 < 2e-16 *** ## educ 0.29043 0.04005 7.252 1.03e-12 *** ## city 0.22401 0.18685 1.199 0.231 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 2.407 on 748 degrees of freedom ## Multiple R-squared: 0.4514, Adjusted R-squared: 0.4485 ## F-statistic: 153.9 on 4 and 748 DF, p-value: < 2.2e-16 ``` In fact, it seems like both matter, but we should use the selection equation approach of Heckman, in two stages. ``` res = selection(lfp ~ age + age^2 + faminc + kids5 + educ, wage ~ exper + exper^2 + educ + city, data=Mroz87, method = "2step" ) summary(res) ``` ``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## 2-step Heckman / heckit estimation ## 753 observations (325 censored and 428 observed) ## 12 free parameters (df = 742) ## Probit selection equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.394e-01 4.119e-01 0.824 0.410 ## age -3.424e-02 6.728e-03 -5.090 4.55e-07 *** ## faminc 3.390e-06 4.267e-06 0.795 0.427 ## kids5 -8.624e-01 1.111e-01 -7.762 2.78e-14 *** ## educ 1.162e-01 2.361e-02 4.923 1.05e-06 *** ## Outcome equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.66736 1.30192 -2.049 0.0408 * ## exper 0.02370 0.01886 1.256 0.2093 ## educ 0.48816 0.07946 6.144 1.31e-09 *** ## city 0.44936 0.31585 1.423 0.1553 ## Multiple R-Squared:0.1248, Adjusted R-Squared:0.1165 ## Error terms: ## Estimate Std. Error t value Pr(>|t|) ## invMillsRatio 0.11082 0.73108 0.152 0.88 ## sigma 3.09434 NA NA NA ## rho 0.03581 NA NA NA ## -------------------------------------------- ``` Note that even after using education to explain **lfp** in the selection equation, it still matters in the wage equation. So education does really impact wages. ``` ## Example using binary outcome for selection model. ## We estimate the probability of womens' education on their ## chances to get high wage (> $5/hr in 1975 USD), using PSID data ## We use education as explanatory variable ## and add age, kids, and non-work income as exclusion restrictions. library(mvtnorm) data(Mroz87) m <- selection(lfp ~ educ + age + kids5 + kids618 + nwifeinc, wage >= 5 ~ educ, data = Mroz87 ) summary(m) ``` ``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## Maximum Likelihood estimation ## BHHH maximisation, 8 iterations ## Return code 2: successive function values within tolerance limit ## Log-Likelihood: -653.2037 ## 753 observations (325 censored and 428 observed) ## 9 free parameters (df = 744) ## Probit selection equation: ## Estimate Std. error t value Pr(> t) ## (Intercept) 0.430362 0.475966 0.904 0.366 ## educ 0.156223 0.023811 6.561 5.35e-11 *** ## age -0.034713 0.007649 -4.538 5.67e-06 *** ## kids5 -0.890560 0.112663 -7.905 2.69e-15 *** ## kids618 -0.038167 0.039320 -0.971 0.332 ## nwifeinc -0.020948 0.004318 -4.851 1.23e-06 *** ## Outcome equation: ## Estimate Std. error t value Pr(> t) ## (Intercept) -4.5213 0.5611 -8.058 7.73e-16 *** ## educ 0.2879 0.0369 7.800 6.18e-15 *** ## Error terms: ## Estimate Std. error t value Pr(> t) ## rho 0.1164 0.2706 0.43 0.667 ## -------------------------------------------- ``` ``` #CHECK THAT THE NUMBER OF KIDS MATTERS OR NOT Mroz87$numkids = Mroz87$kids5 + Mroz87$kids618 summary(lm(wage ~ numkids, data=Mroz87)) ``` ``` ## ## Call: ## lm(formula = wage ~ numkids, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.6814 -2.2957 -0.8125 1.3186 23.0900 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 2.68138 0.17421 15.39 <2e-16 *** ## numkids -0.19285 0.08069 -2.39 0.0171 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 3.232 on 751 degrees of freedom ## Multiple R-squared: 0.007548, Adjusted R-squared: 0.006227 ## F-statistic: 5.712 on 1 and 751 DF, p-value: 0.0171 ``` ``` res = selection(lfp ~ age + I(age^2) + faminc + numkids + educ, wage ~ exper + I(exper^2) + educ + city + numkids, data=Mroz87, method = "2step" ) summary(res) ``` ``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## 2-step Heckman / heckit estimation ## 753 observations (325 censored and 428 observed) ## 15 free parameters (df = 739) ## Probit selection equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.725e+00 1.398e+00 -2.664 0.00789 ** ## age 1.656e-01 6.482e-02 2.554 0.01084 * ## I(age^2) -2.198e-03 7.537e-04 -2.917 0.00365 ** ## faminc 4.001e-06 4.204e-06 0.952 0.34161 ## numkids -1.513e-01 3.827e-02 -3.955 8.39e-05 *** ## educ 9.224e-02 2.302e-02 4.007 6.77e-05 *** ## Outcome equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.2476932 2.0702572 -1.086 0.278 ## exper 0.0271253 0.0635033 0.427 0.669 ## I(exper^2) -0.0001957 0.0019429 -0.101 0.920 ## educ 0.4726828 0.1037086 4.558 6.05e-06 *** ## city 0.4389577 0.3166504 1.386 0.166 ## numkids -0.0471181 0.1420580 -0.332 0.740 ## Multiple R-Squared:0.1252, Adjusted R-Squared:0.1128 ## Error terms: ## Estimate Std. Error t value Pr(>|t|) ## invMillsRatio -0.11737 1.38036 -0.085 0.932 ## sigma 3.09374 NA NA NA ## rho -0.03794 NA NA NA ## -------------------------------------------- ``` 11\.15 Endogeneity: Some Theory to Wrap Up ---------------------------------------- Endogeneity may be technically expressed as arising from a correlation of the independent variables and the error term in a regression. This can be stated as: \\\[\\begin{equation} Y \= \\beta' X \+ u, \\quad E(X\\cdot u) \\neq 0 \\end{equation}\\] This can happen in many ways: * **Measurement error** (or errors in variables): If \\(X\\) is measured in error, we have \\({\\tilde X} \= X \+ e\\). The regression becomes \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 ({\\tilde X} \- e) \+ u \= \\beta\_0 \+ \\beta\_1 {\\tilde X} \+ (u \- \\beta\_1 e) \= \\beta\_0 \+ \\beta\_1 {\\tilde X} \+ v \\end{equation}\\] We see that \\\[\\begin{equation} E({\\tilde X} \\cdot v) \= E\[(X\+e)(u \- \\beta\_1 e)] \= \-\\beta\_1 E(e^2\) \= \-\\beta\_1 Var(e) \\neq 0 \\end{equation}\\] * **Omitted variables**: Suppose the true model is \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ u \\end{equation}\\] but we do not have \\(X\_2\\), which happens to be correlated with \\(X\_1\\), then it will be subsumed in the error term and no longer will \\(E(X\_i \\cdot u) \= 0, \\forall i\\). * **Simultaneity**: This occurs when \\(Y\\) and \\(X\\) are jointly determined. For example, high wages and high education go together. Or, advertising and sales coincide. Or, better start\-up firms tend to receive syndication. The **structural form** of these settings may be written as: \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X \+ u, \\quad \\quad X \= \\alpha\_0 \+ \\alpha\_1 Y \+ v \\end{equation}\\] The solution to these equations gives the **reduced form** version of the model. 
\\\[\\begin{equation} Y \= \\frac{\\beta\_0 \+ \\beta\_1 \\alpha\_0}{1 \- \\alpha\_1 \\beta\_1} \+ \\frac{\\beta\_1 v \+ u}{1 \- \\alpha\_1 \\beta\_1}, \\quad \\quad X \= \\frac{\\alpha\_0 \+\\alpha\_1 \\beta\_0}{1 \- \\alpha\_1 \\beta\_1} \+ \\frac{v \+ \\alpha\_1 u}{1 \- \\alpha\_1 \\beta\_1} \\end{equation}\\] From which we can compute the endogeneity result. \\\[\\begin{equation} Cov(X, u) \= Cov\\left(\\frac{v \+ \\alpha\_1 u}{1 \- \\alpha\_1 \\beta\_1}, u \\right) \= \\frac{\\alpha\_1}{1 \- \\alpha\_1 \\beta\_1}\\cdot Var(u) \\end{equation}\\] To summarize, if \\(x\\) is correlated with \\(u\\) then \\(x\\) is said to be “endogenous”. Endogeneity biases parameter estimates. The solution is to find an **instrumental variable** (denoted \\(x'\\)) that is highly correlated with \\(x\\), but not correlated with \\(u\\). That is * \\(\|Corr(x,x')\|\\) is high. * \\(Corr(x',u)\=0\\). But since \\(x'\\) is not really \\(x\\), it adds (uncorrelated) variance to the residuals, because \\(x' \= x \+ \\eta\\). 11\.16 Cox Proportional Hazards Model ------------------------------------- This is a model used to estimate the expected time to an event. We may be interested in estimating mortality, failure time of equipment, time to successful IPO of a startup, etc. If we define the “stopping” time of an event as \\(\\tau\\), then we are interested in the cumulative probability of an event occurring in time \\(t\\) as \\\[ F(t) \= Pr(\\tau \\leq t ) \\] and the corresponding density function \\(f(t) \= F'(t)\\). The **hazard rate** is defined as the probability that the event occurs at time \\(t\\), conditional on it not having occurred until time \\(t\\), i.e., \\\[ \\lambda(t) \= \\frac{f(t)}{1\-F(t)} \\] Correspondingly, the probability of survival is \\\[ s(t) \= \\exp\\left( \-\\int\_0^t \\lambda(u)\\; du \\right) \\] with the probability of failure up to time \\(t\\) then given by \\\[ F(t) \= 1 \- s(t) \= 1 \-\\exp\\left( \-\\int\_0^t \\lambda(u)\\; du \\right) \\] Empirically, we estimate the hazard rate as follows, for individual \\(i\\): \\\[ \\lambda\_i(t) \= \\lambda\_0(t) \\exp\[\\beta^\\top x\_i] \\geq 0 \\] where \\(\\beta\\) is a vector of coefficients, and \\(x\_i\\) is a vector of characteristics of individual \\(i\\). The function \\(\\lambda\_0(t) \\geq 0\\) is known as the “baseline hazard function”. The hazard ratio is defined as \\(\\lambda\_i(t)/\\lambda\_0(t)\\). When greater than 1, individual \\(i\\) has a greater hazard than baseline. The log hazard ratio is linear in \\(x\_i\\). \\\[ \\ln \\left\[ \\frac{\\lambda\_i(t)}{\\lambda\_0(t)} \\right] \= \\beta^\\top x\_i \\] In order to get some intuition for the hazard rate, suppose we have three friends who just graduated from college, and they all have an equal chance of getting married. Then at any time \\(t\\), the probability that any one gets married, given no one has been married so far, is \\(\\lambda\_i(t) \= \\lambda\_0(t) \= 1/3, \\forall t\\). Now, if anyone gets married, then the hazard rate will jump to \\(1/2\\). But what if all three friends are of different ages, and the propensity to get married is proportional to age? Then \\\[ \\lambda\_i(t) \= \\frac{\\mbox{Age}\_i(t)}{\\sum\_{j\=1}^3 \\mbox{Age}\_j(t)} \\] This model may also be extended to include gender and other variables. Given we have data on \\(M\\) individuals, we can order the data by times \\(t\_1 \< t\_2 \< ... t\_i \< ... \< t\_M\\). 
Some of these times are times to the event, and some are times of existence without the event, the latter is also known as “censoring” times. The values \\(\\delta\_1, \\delta\_2, ..., \\delta\_i, ..., \\delta\_M\\) take values 1 if the individual has experienced the event and zero otherwise. The likelihood of an individual experiencing the event is \\\[ L\_i(\\beta) \= \\frac{\\lambda\_i(t\_i)}{\\sum\_{j\=i}^M \\lambda\_j(t\_i)} \= \\frac{\\lambda\_0(t\_i) e^{\\beta^\\top x\_i}}{\\sum\_{j\=i}^M \\lambda\_0(t\_i) e^{\\beta^\\top x\_j}} \= \\frac{ e^{\\beta^\\top x\_i}}{\\sum\_{j\=i}^M e^{\\beta^\\top x\_j}} \\] This accounts for all remaining individuals in the population at time \\(t\_i\\). We see that the likelihood does not depend on \\(t\\) as the baseline hazard function cancels out. The parameters \\(\\beta\\) are obtained by maximizing the likelihood function: \\\[ L(\\beta) \= \\prod\_{i\=1}^M L\_i(\\beta)^{\\delta\_i} \\] which uses the subset of data where \\(\\delta\_i \= 1\\). We use the **survival** package in R. ``` library(survival) ``` Here is a very small data set. Note the columns that correspond to time to event, and the indictor variable “death” (\\(\\delta\\)). The \\(x\\) variables are “age” and “female”. ``` SURV = read.table("DSTMAA_data/survival_data.txt",header=TRUE) SURV ``` ``` ## id time death age female ## 1 1 1 1 20 0 ## 2 2 4 0 21 1 ## 3 3 7 1 19 0 ## 4 4 10 1 22 1 ## 5 5 12 0 20 0 ## 6 6 13 1 24 1 ``` We can of course run a linear regression just to see how age and gender affect death, by merely looking at the sign, and we see that being older means on average a greater chance of dying, and being female reduces risk. ``` #SIMPLE REGRESSION APPROACH summary(lm(death ~ age+female, SURV)) ``` ``` ## ## Call: ## lm(formula = death ~ age + female, data = SURV) ## ## Residuals: ## 1 2 3 4 5 6 ## 0.27083 -0.41667 0.45833 0.39583 -0.72917 0.02083 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.0208 5.2751 -0.573 0.607 ## age 0.1875 0.2676 0.701 0.534 ## female -0.5000 0.8740 -0.572 0.607 ## ## Residual standard error: 0.618 on 3 degrees of freedom ## Multiple R-squared: 0.1406, Adjusted R-squared: -0.4323 ## F-statistic: 0.2455 on 2 and 3 DF, p-value: 0.7967 ``` Instead of a linear regression, estimate the Cox PH model for the survival time. Here the coefficients are reversed in sign because we are estimating survival and not death. ``` #COX APPROACH res = coxph(Surv(time, death) ~ female + age, data = SURV) summary(res) ``` ``` ## Call: ## coxph(formula = Surv(time, death) ~ female + age, data = SURV) ## ## n= 6, number of events= 4 ## ## coef exp(coef) se(coef) z Pr(>|z|) ## female 1.5446 4.6860 2.7717 0.557 0.577 ## age -0.9453 0.3886 1.0637 -0.889 0.374 ## ## exp(coef) exp(-coef) lower .95 upper .95 ## female 4.6860 0.2134 0.02049 1071.652 ## age 0.3886 2.5735 0.04831 3.125 ## ## Concordance= 0.65 (se = 0.218 ) ## Rsquare= 0.241 (max possible= 0.76 ) ## Likelihood ratio test= 1.65 on 2 df, p=0.4378 ## Wald test = 1.06 on 2 df, p=0.5899 ## Score (logrank) test = 1.26 on 2 df, p=0.5319 ``` ``` plot(survfit(res)) #Plot the baseline survival function ``` Note that the **exp(coef)** is the hazard ratio. When it is greater than 1, there is an increase in hazard, and when it is less than 1, there is a decrease in the hazard. We can do a test for proportional hazards as follows, and examine the p\-values. 
``` cox.zph(res) ``` ``` ## rho chisq p ## female 0.563 1.504 0.220 ## age -0.472 0.743 0.389 ## GLOBAL NA 1.762 0.414 ``` Finally, we are interested in obtaining the baseline hazard function \\(\\lambda\_0(t)\\), which as we know has dropped out of the estimation. So how do we recover it? In fact, without it, where do we even get \\(\\lambda\_i(t)\\) from? We would also like to get the cumulative baseline hazard, i.e., \\(\\Lambda\_0(t) \= \\int\_0^t \\lambda\_0(u) du\\). Sadly, this is a major deficiency of the Cox PH model. However, one may make a distributional assumption about the form of \\(\\lambda\_0(t)\\) and then fit it to maximize the likelihood of survival times, after the coefficients \\(\\beta\\) have been fit already. For example, one function might be \\(\\lambda\_0(t) \= e^{\\alpha t}\\), and it would only need the estimation of \\(\\alpha\\). We can then obtain the estimated survival probabilities over time. ``` covs <- data.frame(age = 21, female = 0) summary(survfit(res, newdata = covs, type = "aalen")) ``` ``` ## Call: survfit(formula = res, newdata = covs, type = "aalen") ## ## time n.risk n.event survival std.err lower 95% CI upper 95% CI ## 1 6 1 0.9475 0.108 7.58e-01 1 ## 7 4 1 0.8672 0.236 5.08e-01 1 ## 10 3 1 0.7000 0.394 2.32e-01 1 ## 13 1 1 0.0184 0.117 7.14e-08 1 ``` The “survival” column gives the survival probability for various time horizons shown in the first column. For a useful guide, see <https://rpubs.com/daspringate/survival>. To sum up, note that the Cox PH model estimates the hazard rate function \\(\\lambda(t)\\): \\\[ \\lambda(t) \= \\lambda\_0(t) \\exp\[\\beta^\\top x] \\] The “exp(coef)” is the baseline hazard rate multiplier effect. If exp(coef)\>1, then an increase in the variable \\(x\\) increases the hazard rate by that factor, and if exp(coef)\<1, then it reduces the hazard rate \\(\\lambda(t)\\) by that factor. Note that the hazard rate is NOT the probability of survival, and in fact \\(\\lambda(t) \\in (0,\\infty)\\). Note that the probability of survival over time \\(t\\), if we assume a constant hazard rate \\(\\lambda\\), is \\(s(t) \= e^{\-\\lambda t}\\). Of course \\(s(t) \\in (0,1\)\\). So for example, if the current (assumed constant) hazard rate is \\(\\lambda \= 0\.02\\), then the 3\-year survival probability is \\\[ s(t) \= e^{\-0\.02 \\times 3} \= 0\.9418 \\] If the person is female, then the new hazard rate is \\(\\lambda \\times 4\.686 \= 0\.09372\\). So the new survival probability is \\\[ s(t\=3\) \= e^{\-0\.09372 \\times 3} \= 0\.7549 \\] If age increases by one year then the new hazard rate will be \\(0\.02 \\times 0\.3886 \= 0\.007772\\). And the new survival probability will be \\\[ s(t\=3\) \= e^{\-0\.007772 \\times 3} \= 0\.977 \\] Note that the hazard rate and the probability of survival go in opposite directions. 11\.17 GLMNET: Lasso and Ridge Regressions ------------------------------------------ The **glmnet** package is from Stanford, and you can get all the details and examples here: [https://web.stanford.edu/\~hastie/glmnet/glmnet\_alpha.html](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html). The package fits generalized linear models and also penalizes the size of the model, with various standard models as special cases. 
The function equation for minimization is \\\[ \\min\_{\\beta} \\frac{1}{n}\\sum\_{i\=1}^n w\_i L(y\_i,\\beta^\\top x\_i) \+ \\lambda \\left\[(1\-\\alpha) \\frac{1}{2}\\\| \\beta \\\|\_2^2 \+ \\alpha \\\|\\beta \\\|\_1\\right] \\] where \\(\\\|\\beta\\\|\_1\\) and \\(\\\|\\beta\\\|\_2\\) are the \\(L\_1\\) and \\(L\_2\\) norms for tge vector \\(\\beta\\). The idea is to take any loss function and penalize it. For example, if the loss function is just the sum of squared residuals \\(y\_i\-\\beta^\\top x\_i\\), and \\(w\_i\=1, \\lambda\=0\\), then we get an ordinary least squares regression model. The function \\(L\\) is usually set to be the log\-likelihood function. If the \\(L\_1\\) norm is applied only, i.e., \\(\\alpha\=1\\), then we get the Lasso model. If the \\(L\_2\\) norm is solely applied, i.e., \\(\\alpha\=0\\), then we get a ridge regression. As is obvious from the equation, \\(\\lambda\\) is the size of the penalty applied, and increasing this parameter forces a more parsimonious model. Here is an example of lasso (\\(\\alpha\=1\\)): ``` suppressMessages(library(glmnet)) ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y = c(rep(1,32),rep(0,32)) x = as.matrix(ncaa[4:14]) res = cv.glmnet(x = x, y = y, family = 'binomial', alpha = 1, type.measure = "auc") ``` ``` ## Warning: Too few (< 10) observations per fold for type.measure='auc' in ## cv.lognet; changed to type.measure='deviance'. Alternatively, use smaller ## value for nfolds ``` ``` plot(res) ``` We may also run glmnet to get coefficients. ``` res = glmnet(x = x, y = y, family = 'binomial', alpha = 1) print(names(res)) ``` ``` ## [1] "a0" "beta" "df" "dim" "lambda" ## [6] "dev.ratio" "nulldev" "npasses" "jerr" "offset" ## [11] "classnames" "call" "nobs" ``` ``` print(res) ``` ``` ## ## Call: glmnet(x = x, y = y, family = "binomial", alpha = 1) ## ## Df %Dev Lambda ## [1,] 0 1.602e-16 0.2615000 ## [2,] 1 3.357e-02 0.2383000 ## [3,] 1 6.172e-02 0.2171000 ## [4,] 1 8.554e-02 0.1978000 ## [5,] 1 1.058e-01 0.1803000 ## [6,] 1 1.231e-01 0.1642000 ## [7,] 1 1.380e-01 0.1496000 ## [8,] 1 1.508e-01 0.1364000 ## [9,] 1 1.618e-01 0.1242000 ## [10,] 2 1.721e-01 0.1132000 ## [11,] 4 1.851e-01 0.1031000 ## [12,] 5 1.990e-01 0.0939800 ## [13,] 4 2.153e-01 0.0856300 ## [14,] 4 2.293e-01 0.0780300 ## [15,] 4 2.415e-01 0.0711000 ## [16,] 5 2.540e-01 0.0647800 ## [17,] 8 2.730e-01 0.0590200 ## [18,] 8 2.994e-01 0.0537800 ## [19,] 8 3.225e-01 0.0490000 ## [20,] 8 3.428e-01 0.0446500 ## [21,] 8 3.608e-01 0.0406800 ## [22,] 8 3.766e-01 0.0370700 ## [23,] 8 3.908e-01 0.0337800 ## [24,] 8 4.033e-01 0.0307800 ## [25,] 8 4.145e-01 0.0280400 ## [26,] 9 4.252e-01 0.0255500 ## [27,] 10 4.356e-01 0.0232800 ## [28,] 10 4.450e-01 0.0212100 ## [29,] 10 4.534e-01 0.0193300 ## [30,] 10 4.609e-01 0.0176100 ## [31,] 10 4.676e-01 0.0160500 ## [32,] 10 4.735e-01 0.0146200 ## [33,] 10 4.789e-01 0.0133200 ## [34,] 10 4.836e-01 0.0121400 ## [35,] 10 4.878e-01 0.0110600 ## [36,] 9 4.912e-01 0.0100800 ## [37,] 9 4.938e-01 0.0091820 ## [38,] 9 4.963e-01 0.0083670 ## [39,] 9 4.984e-01 0.0076230 ## [40,] 9 5.002e-01 0.0069460 ## [41,] 9 5.018e-01 0.0063290 ## [42,] 9 5.032e-01 0.0057670 ## [43,] 9 5.044e-01 0.0052540 ## [44,] 9 5.055e-01 0.0047880 ## [45,] 9 5.064e-01 0.0043620 ## [46,] 9 5.071e-01 0.0039750 ## [47,] 10 5.084e-01 0.0036220 ## [48,] 10 5.095e-01 0.0033000 ## [49,] 10 5.105e-01 0.0030070 ## [50,] 10 5.114e-01 0.0027400 ## [51,] 10 5.121e-01 0.0024960 ## [52,] 10 5.127e-01 0.0022750 ## [53,] 11 5.133e-01 0.0020720 ## [54,] 11 5.138e-01 0.0018880 ## 
[55,] 11 5.142e-01 0.0017210 ## [56,] 11 5.146e-01 0.0015680 ## [57,] 11 5.149e-01 0.0014280 ## [58,] 11 5.152e-01 0.0013020 ## [59,] 11 5.154e-01 0.0011860 ## [60,] 11 5.156e-01 0.0010810 ## [61,] 11 5.157e-01 0.0009846 ## [62,] 11 5.158e-01 0.0008971 ## [63,] 11 5.160e-01 0.0008174 ## [64,] 11 5.160e-01 0.0007448 ## [65,] 11 5.161e-01 0.0006786 ## [66,] 11 5.162e-01 0.0006183 ## [67,] 11 5.162e-01 0.0005634 ## [68,] 11 5.163e-01 0.0005134 ## [69,] 11 5.163e-01 0.0004678 ## [70,] 11 5.164e-01 0.0004262 ## [71,] 11 5.164e-01 0.0003883 ## [72,] 11 5.164e-01 0.0003538 ## [73,] 11 5.164e-01 0.0003224 ## [74,] 11 5.164e-01 0.0002938 ## [75,] 11 5.165e-01 0.0002677 ## [76,] 11 5.165e-01 0.0002439 ## [77,] 11 5.165e-01 0.0002222 ``` ``` b = coef(res)[,25] #Choose the best case with 8 coefficients print(b) ``` ``` ## (Intercept) PTS REB AST TO ## -17.30807199 0.04224762 0.13304541 0.00000000 -0.13440922 ## A.T STL BLK PF FG ## 0.63059336 0.21867734 0.11635708 0.00000000 17.14864201 ## FT X3P ## 3.00069901 0.00000000 ``` ``` x1 = c(1,as.numeric(x[18,])) p = 1/(1+exp(-sum(b*x1))) print(p) ``` ``` ## [1] 0.7696481 ``` ### 11\.17\.1 Prediction on test data ``` preds = predict(res, x, type = 'response') print(dim(preds)) ``` ``` ## [1] 64 77 ``` ``` preds = preds[,25] #Take the 25th case print(preds) ``` ``` ## [1] 0.97443940 0.90157397 0.87711437 0.89911656 0.95684199 0.82949042 ## [7] 0.53186622 0.83745812 0.45979765 0.58355756 0.78726183 0.55050365 ## [13] 0.30633472 0.93605170 0.70646742 0.85811465 0.42394178 0.76964806 ## [19] 0.40172414 0.66137964 0.69620096 0.61569705 0.88800581 0.92834645 ## [25] 0.82719624 0.17209046 0.66881541 0.84149477 0.58937886 0.64674446 ## [31] 0.79368965 0.51186217 0.58500925 0.61275721 0.17532362 0.47406867 ## [37] 0.24314471 0.11843924 0.26787937 0.24296988 0.21129918 0.05041436 ## [43] 0.30109650 0.14989973 0.17976216 0.57119150 0.05514704 0.46220128 ## [49] 0.63788393 0.32605605 0.35544396 0.12647374 0.61772958 0.63883954 ## [55] 0.02306762 0.21285032 0.36455131 0.53953727 0.18563868 0.23598354 ## [61] 0.11821886 0.04258418 0.19603015 0.24630145 ``` ``` print(glmnet:::auc(y, preds)) ``` ``` ## [1] 0.9072266 ``` ``` print(table(y,round(preds,0))) #rounding needed to make 0,1 ``` ``` ## ## y 0 1 ## 0 25 7 ## 1 5 27 ``` 11\.18 ROC Curves ----------------- ROC stands for Receiver Operating Characteristic. The acronym comes from signal theory, where the users are interested in the number of true positive signals that are identified. The idea is simple, and best explained with an example. Let’s say you have an algorithm that detects customers probability \\(p \\in (0,1\)\\) of buying a product. Take a tagged set of training data and sort the customers by this probability in a line with the highest propensity to buy on the left and moving to the right the probabilty declines monotonically. (Tagged means you know whether they bought the product or not.) Now, starting from the left, plot a line that jumps vertically by a unit if the customer buys the product as you move across else remains flat. If the algorithm is a good one, the line will quickly move up at first and then flatten out. Let’s take the train and test data here and plot the ROC curve by writing our own code. We can do the same with the **pROC** package. Here is the code. 
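The hand\-rolled sketch below is one possible implementation (it reuses **y** and **preds** computed above): it sweeps a threshold across the fitted probabilities, records the true and false positive rates at each threshold, and integrates the resulting staircase with the trapezoidal rule to get the AUC.

```
#Hand-rolled ROC: sweep thresholds over the fitted probabilities
th = sort(unique(preds), decreasing = TRUE)
tpr = sapply(th, function(u) sum(preds >= u & y == 1)/sum(y == 1))  #true positive rate
fpr = sapply(th, function(u) sum(preds >= u & y == 0)/sum(y == 0))  #false positive rate
tpr = c(0, tpr); fpr = c(0, fpr)
plot(fpr, tpr, type = "l", xlab = "False positive rate", ylab = "True positive rate"); grid()
auc = sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1))/2)            #trapezoidal rule
print(auc)
```

The area computed this way should match the AUC reported by **glmnet** above.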
``` suppressMessages(library(pROC)) ``` ``` ## Warning: package 'pROC' was built under R version 3.3.2 ``` ``` res = roc(response=y,predictor=preds) print(res) ``` ``` ## ## Call: ## roc.default(response = y, predictor = preds) ## ## Data: preds in 32 controls (y 0) < 32 cases (y 1). ## Area under the curve: 0.9072 ``` ``` plot(res); grid() ``` We see that “specificity” equals the true negative rate, and is also denoted as “recall”. And that the true positive rate is also labeled as “sensitivity”. The AUC or “area under the curve”" is the area between the curve and the diagonal divided by the area in the top right triangle of the diagram. This is also reported and is the same number as obtained when we fitted the model using the **glmnet** function before. For nice graphics that explain all these measures and more, see <https://en.wikipedia.org/wiki/Precision_and_recall> 11\.19 Glmnet Cox Models ------------------------ As we did before, we may fit a Cox PH model using GLMNET with the additional feature that we include a penalty when we maximize the likelihood function. ``` SURV = read.table("DSTMAA_data/survival_data.txt",header=TRUE) print(SURV) ``` ``` ## id time death age female ## 1 1 1 1 20 0 ## 2 2 4 0 21 1 ## 3 3 7 1 19 0 ## 4 4 10 1 22 1 ## 5 5 12 0 20 0 ## 6 6 13 1 24 1 ``` ``` names(SURV)[3] = "status" y = as.matrix(SURV[,2:3]) x = as.matrix(SURV[,4:5]) res = glmnet(x, y, family = "cox") print(res) ``` ``` ## ## Call: glmnet(x = x, y = y, family = "cox") ## ## Df %Dev Lambda ## [1,] 0 0.00000 0.331700 ## [2,] 1 0.02347 0.302200 ## [3,] 1 0.04337 0.275400 ## [4,] 1 0.06027 0.250900 ## [5,] 1 0.07466 0.228600 ## [6,] 1 0.08690 0.208300 ## [7,] 1 0.09734 0.189800 ## [8,] 1 0.10620 0.172900 ## [9,] 1 0.11380 0.157600 ## [10,] 1 0.12020 0.143600 ## [11,] 1 0.12570 0.130800 ## [12,] 1 0.13040 0.119200 ## [13,] 1 0.13430 0.108600 ## [14,] 1 0.13770 0.098970 ## [15,] 1 0.14050 0.090180 ## [16,] 1 0.14300 0.082170 ## [17,] 1 0.14500 0.074870 ## [18,] 1 0.14670 0.068210 ## [19,] 1 0.14820 0.062150 ## [20,] 1 0.14940 0.056630 ## [21,] 1 0.15040 0.051600 ## [22,] 1 0.15130 0.047020 ## [23,] 1 0.15200 0.042840 ## [24,] 1 0.15260 0.039040 ## [25,] 1 0.15310 0.035570 ## [26,] 2 0.15930 0.032410 ## [27,] 2 0.16480 0.029530 ## [28,] 2 0.16930 0.026910 ## [29,] 2 0.17320 0.024520 ## [30,] 2 0.17640 0.022340 ## [31,] 2 0.17910 0.020350 ## [32,] 2 0.18140 0.018540 ## [33,] 2 0.18330 0.016900 ## [34,] 2 0.18490 0.015400 ## [35,] 2 0.18630 0.014030 ## [36,] 2 0.18740 0.012780 ## [37,] 2 0.18830 0.011650 ## [38,] 2 0.18910 0.010610 ## [39,] 2 0.18980 0.009669 ## [40,] 2 0.19030 0.008810 ## [41,] 2 0.19080 0.008028 ## [42,] 2 0.19120 0.007314 ## [43,] 2 0.19150 0.006665 ## [44,] 2 0.19180 0.006073 ## [45,] 2 0.19200 0.005533 ## [46,] 2 0.19220 0.005042 ## [47,] 2 0.19240 0.004594 ## [48,] 2 0.19250 0.004186 ## [49,] 2 0.19260 0.003814 ## [50,] 2 0.19270 0.003475 ## [51,] 2 0.19280 0.003166 ## [52,] 2 0.19280 0.002885 ## [53,] 2 0.19290 0.002629 ## [54,] 2 0.19290 0.002395 ## [55,] 2 0.19300 0.002182 ## [56,] 2 0.19300 0.001988 ``` ``` plot(res) ``` ``` print(coef(res)) ``` ``` ## 2 x 56 sparse Matrix of class "dgCMatrix" ``` ``` ## [[ suppressing 56 column names 's0', 's1', 's2' ... ]] ``` ``` ## ## age . -0.03232796 -0.06240328 -0.09044971 -0.1166396 -0.1411157 ## female . . . . . . ## ## age -0.1639991 -0.185342 -0.2053471 -0.2240373 -0.2414872 -0.2577658 ## female . . . . . . ## ## age -0.272938 -0.2870651 -0.3002053 -0.3124148 -0.3237473 -0.3342545 ## female . . . . . . 
## ## age -0.3440275 -0.3530249 -0.3613422 -0.3690231 -0.3761098 -0.3826423 ## female . . . . . . ## ## age -0.3886591 -0.4300447 -0.4704889 -0.5078614 -0.5424838 -0.5745449 ## female . 0.1232263 0.2429576 0.3522138 0.4522592 0.5439278 ## ## age -0.6042077 -0.6316057 -0.6569988 -0.6804703 -0.7022042 -0.7222141 ## female 0.6279337 0.7048655 0.7754539 0.8403575 0.9000510 0.9546989 ## ## age -0.7407295 -0.7577467 -0.773467 -0.7878944 -0.8012225 -0.8133071 ## female 1.0049765 1.0509715 1.093264 1.1319284 1.1675026 1.1999905 ## ## age -0.8246563 -0.8349496 -0.8442393 -0.8528942 -0.860838 -0.8680639 ## female 1.2297716 1.2570025 1.2817654 1.3045389 1.325398 1.3443458 ## ## age -0.874736 -0.8808466 -0.8863844 -0.8915045 -0.8961894 -0.9004172 ## female 1.361801 1.3777603 1.3922138 1.4055495 1.4177359 1.4287319 ## ## age -0.9043351 -0.9079181 ## female 1.4389022 1.4481934 ``` With cross validation, we get the usual plot for the fit. ``` cvfit = cv.glmnet(x, y, family = "cox") plot(cvfit) ``` ``` print(cvfit$lambda.min) ``` ``` ## [1] 0.0989681 ``` ``` print(coef(cvfit,s=cvfit$lambda.min)) ``` ``` ## 2 x 1 sparse Matrix of class "dgCMatrix" ## 1 ## age -0.2870651 ## female . ``` Note that the signs of the coefficients are the same as we had earlier, i.e., survival is lower with age and higher for females. 11\.1 Maximum\-Likelihood Estimation (MLE) ------------------------------------------ Suppose we wish to fit data to a given distribution, then we may use this technique to do so. Many of the data fitting procedures need to use MLE. MLE is a general technique, and applies widely. It is also a fundamental approach to many estimation tools in econometrics. Here we recap this. Let’s say we have a series of data \\(x\\), with \\(T\\) observations. If \\(x \\sim N(\\mu,\\sigma^2\)\\), then \\\[\\begin{equation} \\mbox{density function:} \\quad f(x) \= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left\[\-\\frac{1}{2}\\frac{(x\-\\mu)^2}{\\sigma^2} \\right] \\end{equation}\\] \\\[\\begin{equation} N(x) \= 1 \- N(\-x) \\end{equation}\\] \\\[\\begin{equation} F(x) \= \\int\_{\-\\infty}^x f(u) du \\end{equation}\\] The standard normal distribution is \\(x \\sim N(0,1\)\\). For the standard normal distribution: \\(F(0\) \= \\frac{1}{2}\\). The likelihood of the entire series is \\\[\\begin{equation} \\prod\_{t\=1}^T f\[R(t)] \\end{equation}\\] It is easier (computationally) to maximize \\\[\\begin{equation} \\max\_{\\mu,\\sigma} \\; {\\cal L} \\equiv \\sum\_{t\=1}^T \\ln f\[R(t)] \\end{equation}\\] known as the log\-likelihood. 11\.2 Implementation -------------------- This is easily done in R. First we create the log\-likelihood function, so you can see how functions are defined in R. Second, we optimize the log\-likelihood, i.e., we find the maximum value, hence it is known as maximum log\-likelihood estimation (MLE). 
```
#LOG-LIKELIHOOD FUNCTION
#(returns the NEGATIVE log-likelihood, since nlm below is a minimizer)
LL = function(params,x) {
  mu = params[1]; sigsq = params[2]
  f = (1/sqrt(2*pi*sigsq))*exp(-0.5*(x-mu)^2/sigsq)
  LL = -sum(log(f))
}
```

```
#GENERATE DATA FROM A NORMAL DISTRIBUTION
x = rnorm(10000, mean=5, sd=3)
#MAXIMIZE LOG-LIKELIHOOD (nlm minimizes the negative log-likelihood)
params = c(4,2)   #Create starting guess for parameters
res = nlm(LL,params,x)
print(res)
```

```
## $minimum
## [1] 25257.34
##
## $estimate
## [1] 4.965689 9.148508
##
## $gradient
## [1] 0.0014777011 -0.0002584778
##
## $code
## [1] 1
##
## $iterations
## [1] 11
```

We can see that the result is a fitted normal distribution with mean close to 5 and variance close to 9, whose square root (about 3\) matches the standard deviation of the distribution from which the data was originally generated. Further, notice that the gradient is essentially zero for both parameters, as it should be at the optimum.

11\.3 Logit and Probit Models
-----------------------------

Usually we run regressions using continuous variables for the dependent (\\(y\\)) variables, such as, for example, when we regress income on education. Sometimes, however, the dependent variable may be discrete, and could be binomial or multinomial. That is, the dependent variable is **limited**. In such cases, we need a different approach.

**Discrete dependent** variables are a special case of **limited dependent** variables. The Logit and Probit models we look at here are examples of discrete dependent variable models. Such models are also often called **qualitative response** (QR) models. In particular, when the variable is binary, i.e., takes values of \\(\\{0,1\\}\\), then we get a probability model. If we just regressed the left hand side variable of ones and zeros on a suite of right hand side variables, we could of course fit a linear regression. Then if we took another observation with values for the right hand side, i.e., \\(x \= \\{x\_1,x\_2,\\ldots,x\_k\\}\\), we could compute the value of the \\(y\\) variable using the fitted coefficients. But of course, this value will not be exactly 0 or 1, except by unlikely coincidence. Nor is it guaranteed to lie in the range \\((0,1\)\\).

There is also a relationship to classifier models. In classifier models, we are interested in allocating observations to categories. In limited dependent variable models we also want to explain the reasons (i.e., find explanatory variables) for the allocation across categories. Some examples of such models are to explain whether a person is employed or not, whether a firm is syndicated or not, whether a firm is solvent or not, which field of work is chosen by graduates, where consumers shop, whether they choose Coke versus Pepsi, etc.

Since the fitted values from a linear regression are not guaranteed to lie between 0 and 1, we need a different functional form. If we use a carefully chosen nonlinear regression function, then we can ensure that the fitted values of \\(y\\) are restricted to the range \\((0,1\)\\), and then we get a model where we fit a probability. There are two such model forms that are widely used: (a) Logit, also known as a logistic regression, and (b) Probit models. We look at each one in turn.

11\.4 Logit
-----------

A logit model takes the following form:

\\\[\\begin{equation} y \= \\frac{e^{f(x)}}{1\+e^{f(x)}}, \\quad f(x) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\ldots \+ \\beta\_k x\_k \\end{equation}\\]

We are interested in fitting the coefficients \\(\\{\\beta\_0,\\beta\_1, \\ldots, \\beta\_k\\}\\). Note that, irrespective of the coefficients, \\(f(x) \\in (\-\\infty,\+\\infty)\\), but \\(y \\in (0,1\)\\).
When \\(f(x) \\rightarrow \-\\infty\\), \\(y \\rightarrow 0\\), and when \\(f(x) \\rightarrow \+\\infty\\), \\(y \\rightarrow 1\\). We also write this model as \\\[\\begin{equation} y \= \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\equiv \\Lambda(\\beta' x) \\end{equation}\\] where \\(\\Lambda\\) (lambda) is for logit. The model generates a \\(S\\)\-shaped curve for \\(y\\), and we can plot it as follows. The fitted value of \\(y\\) is nothing but the probability that \\(y\=1\\). ``` logit = function(fx) { res = exp(fx)/(1+exp(fx)) } fx = seq(-4,4,0.01) y = logit(fx) plot(fx,y,type="l",xlab="x",ylab="f(x)",col="blue",lwd=3) ``` ### 11\.4\.1 Example For the NCAA data, take the top 32 teams and make their dependent variable 1, and that of the bottom 32 teams zero. Therefore, the teams that have \\(y\=1\\) are those that did not lose in the first round of the playoffs, and the teams that have \\(y\=0\\) are those that did. Estimation is done by maximizing the log\-likelihood. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="logit")) names(h) ``` ``` ## [1] "coefficients" "residuals" "fitted.values" ## [4] "effects" "R" "rank" ## [7] "qr" "family" "linear.predictors" ## [10] "deviance" "aic" "null.deviance" ## [13] "iter" "weights" "prior.weights" ## [16] "df.residual" "df.null" "y" ## [19] "converged" "boundary" "model" ## [22] "call" "formula" "terms" ## [25] "data" "offset" "control" ## [28] "method" "contrasts" "xlevels" ``` ``` print(logLik(h)) ``` ``` ## 'log Lik.' -21.44779 (df=12) ``` ``` summary(h) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.80174 -0.40502 -0.00238 0.37584 2.31767 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -45.83315 14.97564 -3.061 0.00221 ** ## xPTS -0.06127 0.09549 -0.642 0.52108 ## xREB 0.49037 0.18089 2.711 0.00671 ** ## xAST 0.16422 0.26804 0.613 0.54010 ## xTO -0.38405 0.23434 -1.639 0.10124 ## xA.T 1.56351 3.17091 0.493 0.62196 ## xSTL 0.78360 0.32605 2.403 0.01625 * ## xBLK 0.07867 0.23482 0.335 0.73761 ## xPF 0.02602 0.13644 0.191 0.84874 ## xFG 46.21374 17.33685 2.666 0.00768 ** ## xFT 10.72992 4.47729 2.397 0.01655 * ## xX3P 5.41985 5.77966 0.938 0.34838 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.896 on 52 degrees of freedom ## AIC: 66.896 ## ## Number of Fisher Scoring iterations: 6 ``` ``` h$fitted.values ``` ``` ## 1 2 3 4 5 ## 0.9998267965 0.9983229192 0.9686755530 0.9909359265 0.9977011039 ## 6 7 8 9 10 ## 0.9639506326 0.5381841865 0.9505255187 0.4329829232 0.7413280575 ## 11 12 13 14 15 ## 0.9793554057 0.7273235463 0.2309261473 0.9905414749 0.7344407215 ## 16 17 18 19 20 ## 0.9936312074 0.2269619354 0.8779507370 0.2572796426 0.9335376447 ## 21 22 23 24 25 ## 0.9765843274 0.7836742557 0.9967552281 0.9966486903 0.9715110760 ## 26 27 28 29 30 ## 0.0681674628 0.4984153630 0.9607522159 0.8624544140 0.6988578200 ## 31 32 33 34 35 ## 0.9265057217 0.7472357037 0.5589318497 0.2552381741 0.0051790298 ## 36 37 38 39 40 ## 0.4394307950 0.0205919396 0.0545333361 0.0100662111 0.0995262051 ## 41 42 43 44 45 ## 0.1219394290 0.0025416737 0.3191888357 0.0149772804 0.0685930622 ## 46 47 48 49 50 ## 0.3457439539 0.0034943441 0.5767386617 0.5489544863 0.4637012227 ## 51 52 53 54 55 ## 0.2354894587 0.0487342700 0.6359622098 0.8027221707 0.0003240393 ## 56 57 58 59 60 ## 0.0479116454 0.3422867567 0.4649889328 0.0547385409 0.0722894447 ## 61 62 63 64 ## 0.0228629774 0.0002730981 0.0570387301 0.2830628760 ``` ### 11\.4\.1 Example For the NCAA data, take the top 32 teams and make their dependent variable 1, and that of the bottom 32 teams zero. Therefore, the teams that have \\(y\=1\\) are those that did not lose in the first round of the playoffs, and the teams that have \\(y\=0\\) are those that did. Estimation is done by maximizing the log\-likelihood. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="logit")) names(h) ``` ``` ## [1] "coefficients" "residuals" "fitted.values" ## [4] "effects" "R" "rank" ## [7] "qr" "family" "linear.predictors" ## [10] "deviance" "aic" "null.deviance" ## [13] "iter" "weights" "prior.weights" ## [16] "df.residual" "df.null" "y" ## [19] "converged" "boundary" "model" ## [22] "call" "formula" "terms" ## [25] "data" "offset" "control" ## [28] "method" "contrasts" "xlevels" ``` ``` print(logLik(h)) ``` ``` ## 'log Lik.' -21.44779 (df=12) ``` ``` summary(h) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.80174 -0.40502 -0.00238 0.37584 2.31767 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -45.83315 14.97564 -3.061 0.00221 ** ## xPTS -0.06127 0.09549 -0.642 0.52108 ## xREB 0.49037 0.18089 2.711 0.00671 ** ## xAST 0.16422 0.26804 0.613 0.54010 ## xTO -0.38405 0.23434 -1.639 0.10124 ## xA.T 1.56351 3.17091 0.493 0.62196 ## xSTL 0.78360 0.32605 2.403 0.01625 * ## xBLK 0.07867 0.23482 0.335 0.73761 ## xPF 0.02602 0.13644 0.191 0.84874 ## xFG 46.21374 17.33685 2.666 0.00768 ** ## xFT 10.72992 4.47729 2.397 0.01655 * ## xX3P 5.41985 5.77966 0.938 0.34838 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.896 on 52 degrees of freedom ## AIC: 66.896 ## ## Number of Fisher Scoring iterations: 6 ``` ``` h$fitted.values ``` ``` ## 1 2 3 4 5 ## 0.9998267965 0.9983229192 0.9686755530 0.9909359265 0.9977011039 ## 6 7 8 9 10 ## 0.9639506326 0.5381841865 0.9505255187 0.4329829232 0.7413280575 ## 11 12 13 14 15 ## 0.9793554057 0.7273235463 0.2309261473 0.9905414749 0.7344407215 ## 16 17 18 19 20 ## 0.9936312074 0.2269619354 0.8779507370 0.2572796426 0.9335376447 ## 21 22 23 24 25 ## 0.9765843274 0.7836742557 0.9967552281 0.9966486903 0.9715110760 ## 26 27 28 29 30 ## 0.0681674628 0.4984153630 0.9607522159 0.8624544140 0.6988578200 ## 31 32 33 34 35 ## 0.9265057217 0.7472357037 0.5589318497 0.2552381741 0.0051790298 ## 36 37 38 39 40 ## 0.4394307950 0.0205919396 0.0545333361 0.0100662111 0.0995262051 ## 41 42 43 44 45 ## 0.1219394290 0.0025416737 0.3191888357 0.0149772804 0.0685930622 ## 46 47 48 49 50 ## 0.3457439539 0.0034943441 0.5767386617 0.5489544863 0.4637012227 ## 51 52 53 54 55 ## 0.2354894587 0.0487342700 0.6359622098 0.8027221707 0.0003240393 ## 56 57 58 59 60 ## 0.0479116454 0.3422867567 0.4649889328 0.0547385409 0.0722894447 ## 61 62 63 64 ## 0.0228629774 0.0002730981 0.0570387301 0.2830628760 ``` 11\.5 Probit ------------ Probit has essentially the same idea as the logit except that the probability function is replaced by the normal distribution. The nonlinear regression equation is as follows: \\\[\\begin{equation} y \= \\Phi\[f(x)], \\quad f(x) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\ldots \\beta\_k x\_k \\end{equation}\\] where \\(\\Phi(.)\\) is the cumulative normal probability function. Again, irrespective of the coefficients, \\(f(x) \\in (\-\\infty,\+\\infty)\\), but \\(y \\in (0,1\)\\). When \\(f(x) \\rightarrow \-\\infty\\), \\(y \\rightarrow 0\\), and when \\(f(x) \\rightarrow \+\\infty\\), \\(y \\rightarrow 1\\). We can redo the same previous logit model using a probit instead: ``` h = glm(y~x, family=binomial(link="probit")) print(logLik(h)) ``` ``` ## 'log Lik.' -21.27924 (df=12) ``` ``` summary(h) ``` ``` ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "probit")) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.76353 -0.41212 -0.00031 0.34996 2.24568 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -26.28219 8.09608 -3.246 0.00117 ** ## xPTS -0.03463 0.05385 -0.643 0.52020 ## xREB 0.28493 0.09939 2.867 0.00415 ** ## xAST 0.10894 0.15735 0.692 0.48874 ## xTO -0.23742 0.13642 -1.740 0.08180 . ## xA.T 0.71485 1.86701 0.383 0.70181 ## xSTL 0.45963 0.18414 2.496 0.01256 * ## xBLK 0.03029 0.13631 0.222 0.82415 ## xPF 0.01041 0.07907 0.132 0.89529 ## xFG 26.58461 9.38711 2.832 0.00463 ** ## xFT 6.28278 2.51452 2.499 0.01247 * ## xX3P 3.15824 3.37841 0.935 0.34988 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 88.723 on 63 degrees of freedom ## Residual deviance: 42.558 on 52 degrees of freedom ## AIC: 66.558 ## ## Number of Fisher Scoring iterations: 8 ``` ``` h$fitted.values ``` ``` ## 1 2 3 4 5 ## 9.999998e-01 9.999048e-01 9.769711e-01 9.972812e-01 9.997756e-01 ## 6 7 8 9 10 ## 9.721166e-01 5.590209e-01 9.584564e-01 4.367808e-01 7.362946e-01 ## 11 12 13 14 15 ## 9.898112e-01 7.262200e-01 2.444006e-01 9.968605e-01 7.292286e-01 ## 16 17 18 19 20 ## 9.985910e-01 2.528807e-01 8.751178e-01 2.544738e-01 9.435318e-01 ## 21 22 23 24 25 ## 9.850437e-01 7.841357e-01 9.995601e-01 9.996077e-01 9.825306e-01 ## 26 27 28 29 30 ## 8.033540e-02 5.101626e-01 9.666841e-01 8.564489e-01 6.657773e-01 ## 31 32 33 34 35 ## 9.314164e-01 7.481401e-01 5.810465e-01 2.488875e-01 1.279599e-03 ## 36 37 38 39 40 ## 4.391782e-01 1.020269e-02 5.461190e-02 4.267754e-03 1.067584e-01 ## 41 42 43 44 45 ## 1.234915e-01 2.665101e-04 3.212605e-01 6.434112e-03 7.362892e-02 ## 46 47 48 49 50 ## 3.673105e-01 4.875193e-04 6.020993e-01 5.605770e-01 4.786576e-01 ## 51 52 53 54 55 ## 2.731573e-01 4.485079e-02 6.194202e-01 7.888145e-01 1.630556e-06 ## 56 57 58 59 60 ## 4.325189e-02 3.899566e-01 4.809365e-01 5.043005e-02 7.330590e-02 ## 61 62 63 64 ## 1.498018e-02 8.425836e-07 5.515960e-02 3.218696e-01 ``` 11\.6 Analysis -------------- Both these models are just settings in which we are computing binomial (binary) probabilities, i.e. \\\[\\begin{equation} \\mbox{Pr}\[y\=1] \= F(\\beta' x) \\end{equation}\\] where \\(\\beta\\) is a vector of coefficients, and \\(x\\) is a vector of explanatory variables. \\(F\\) is the logit/probit function. \\\[\\begin{equation} {\\hat y} \= F(\\beta' x) \\end{equation}\\] where \\({\\hat y}\\) is the fitted value of \\(y\\) for a given \\(x\\), and now \\(\\beta\\) is the fitted model’s coefficients. In each case the function takes the logit or probit form that we provided earlier. Of course, \\\[\\begin{equation} \\mbox{Pr}\[y\=0] \= 1 \- F(\\beta' x) \\end{equation}\\] Note that the model may also be expressed in conditional expectation form, i.e. \\\[\\begin{equation} E\[y \| x] \= F(\\beta' x) (y\=1\) \+ \[1\-F(\\beta' x)] (y\=0\) \= F(\\beta' x) \\end{equation}\\] 11\.7 Odds Ratio and Slopes (Coefficients) in a Logit ----------------------------------------------------- In a linear regression, it is easy to see how the dependent variable changes when any right hand side variable changes. Not so with nonlinear models. A little bit of pencil pushing is required (add some calculus too). The coefficient of an independent variable in a logit regression tell us by how much the log odds of the dependent variable change with a one unit change in the independent variable. If you want the odds ratio, then simply take the exponentiation of the log odds. The odds ratio says that when the independent variable increases by one, then the odds of the dependent outcome occurring increase by a factor of the odds ratio. What are odds ratios? An odds ratio is the ratio of probability of success to the probability of failure. If the probability of success is \\(p\\), then we have \\\[ \\mbox{Odds Ratio (OR)} \= \\frac{p}{1\-p}, \\quad p \= \\frac{OR}{1\+OR} \\] For example, if \\(p\=0\.3\\), then the odds ratio will be \\(OR\=0\.3/0\.7 \= 0\.4285714\\). If the coefficient \\(\\beta\\) (log odds) of an independent variable in the logit is (say) 2, then it meands the odds ratio is \\(\\exp(2\) \= 7\.38\\). 
This is the factor by which the variable impacts the odds ratio when the variable increases by 1\. Suppose the independent variable increases by 1\. Then the odds ratio and probabilities change as follows. ``` p = 0.3 OR = p/(1-p); print(OR) ``` ``` ## [1] 0.4285714 ``` ``` beta = 2 OR_new = OR * exp(beta); print(OR_new) ``` ``` ## [1] 3.166738 ``` ``` p_new = OR_new/(1+OR_new); print(p_new) ``` ``` ## [1] 0.7600041 ``` So we see that the probability of the dependent outcome occurring has increased from \\(0\.3\\) to \\(0\.76\\). Now let’s do the same example with the NCAA data. ``` h = glm(y~x, family=binomial(link="logit")) b = h$coefficients #Odds ratio is the exponentiated coefficients print(exp(b)) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## 1.244270e-20 9.405653e-01 1.632927e+00 1.178470e+00 6.810995e-01 ## xA.T xSTL xBLK xPF xFG ## 4.775577e+00 2.189332e+00 1.081849e+00 1.026364e+00 1.175903e+20 ## xFT xX3P ## 4.570325e+04 2.258450e+02 ``` ``` x1 = c(1,as.numeric(x[18,])) #Take row 18 and create the RHS variables array p1 = 1/(1+exp(-sum(b*x1))) print(p1) ``` ``` ## [1] 0.8779507 ``` ``` OR1 = p1/(1-p1) print(OR1) ``` ``` ## [1] 7.193413 ``` Now, let’s see what happens if the rebounds increase by 1\. ``` x2 = x1 x2[3] = x2[3] + 1 p2 = 1/(1+exp(-sum(b*x2))) print(p2) ``` ``` ## [1] 0.921546 ``` So, the probability increases as expected. We can check that the new odds ratio will give the new probability as well. ``` OR2 = OR1 * exp(b[3]) print(OR2/(1+OR2)) ``` ``` ## xREB ## 0.921546 ``` And we see that this is exactly as required. 11\.8 Calculus of the logit coefficients ---------------------------------------- Remember that \\(y\\) lies in the range \\((0,1\)\\). Hence, we may be interested in how \\(E(y\|x)\\) changes as any of the explanatory variables changes in value, so we can take the derivative: \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= F'(\\beta' x) \\beta \\equiv f(\\beta' x) \\beta \\end{equation}\\] For each model we may compute this at the means of the regressors: \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\beta\\left( \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\right) \\left( 1 \- \\frac{e^{\\beta' x}}{1\+e^{\\beta' x}} \\right) \\end{equation}\\] which may be re\-written as \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\beta \\cdot \\Lambda(\\beta' x) \\cdot \[1\-\\Lambda(\\beta'x)] \\end{equation}\\] ``` h = glm(y~x, family=binomial(link="logit")) beta = h$coefficients print(beta) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## -45.83315262 -0.06127422 0.49037435 0.16421685 -0.38404689 ## xA.T xSTL xBLK xPF xFG ## 1.56351478 0.78359670 0.07867125 0.02602243 46.21373793 ## xFT xX3P ## 10.72992472 5.41984900 ``` ``` print(dim(x)) ``` ``` ## [1] 64 11 ``` ``` beta = as.matrix(beta) print(dim(beta)) ``` ``` ## [1] 12 1 ``` ``` wuns = matrix(1,64,1) x = cbind(wuns,x) xbar = as.matrix(colMeans(x)) xbar ``` ``` ## [,1] ## 1.0000000 ## PTS 67.1015625 ## REB 34.4671875 ## AST 12.7484375 ## TO 13.9578125 ## A.T 0.9778125 ## STL 6.8234375 ## BLK 2.7500000 ## PF 18.6562500 ## FG 0.4232969 ## FT 0.6914687 ## X3P 0.3333750 ``` ``` logitfunction = exp(t(beta) %*% xbar)/(1+exp(t(beta) %*% xbar)) print(logitfunction) ``` ``` ## [,1] ## [1,] 0.5139925 ``` ``` slopes = beta * logitfunction[1] * (1-logitfunction[1]) slopes ``` ``` ## [,1] ## (Intercept) -11.449314459 ## xPTS -0.015306558 ## xREB 0.122497576 ## xAST 0.041022062 ## xTO -0.095936529 ## xA.T 0.390572574 ## xSTL 0.195745753 ## xBLK 0.019652410 ## xPF 0.006500512 ## 
xFG 11.544386272 ## xFT 2.680380362 ## xX3P 1.353901094 ``` ### 11\.8\.1 How about the Probit model? In the probit model this is \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\phi(\\beta' x) \\beta \\end{equation}\\] where \\(\\phi(.)\\) is the normal density function (not the cumulative probability). ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="probit")) beta = h$coefficients print(beta) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## -26.28219202 -0.03462510 0.28493498 0.10893727 -0.23742076 ## xA.T xSTL xBLK xPF xFG ## 0.71484863 0.45963279 0.03029006 0.01040612 26.58460638 ## xFT xX3P ## 6.28277680 3.15823537 ``` ``` print(dim(x)) ``` ``` ## [1] 64 11 ``` ``` beta = as.matrix(beta) print(dim(beta)) ``` ``` ## [1] 12 1 ``` ``` wuns = matrix(1,64,1) x = cbind(wuns,x) xbar = as.matrix(colMeans(x)) print(xbar) ``` ``` ## [,1] ## 1.0000000 ## PTS 67.1015625 ## REB 34.4671875 ## AST 12.7484375 ## TO 13.9578125 ## A.T 0.9778125 ## STL 6.8234375 ## BLK 2.7500000 ## PF 18.6562500 ## FG 0.4232969 ## FT 0.6914687 ## X3P 0.3333750 ``` ``` probitfunction = t(beta) %*% xbar slopes = probitfunction[1] * beta slopes ``` ``` ## [,1] ## (Intercept) -1.401478911 ## xPTS -0.001846358 ## xREB 0.015193952 ## xAST 0.005809001 ## xTO -0.012660291 ## xA.T 0.038118787 ## xSTL 0.024509587 ## xBLK 0.001615196 ## xPF 0.000554899 ## xFG 1.417604938 ## xFT 0.335024536 ## xX3P 0.168410621 ``` ### 11\.8\.1 How about the Probit model? In the probit model this is \\\[\\begin{equation} \\frac{\\partial E(y\|x)}{\\partial x} \= \\phi(\\beta' x) \\beta \\end{equation}\\] where \\(\\phi(.)\\) is the normal density function (not the cumulative probability). ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) h = glm(y~x, family=binomial(link="probit")) beta = h$coefficients print(beta) ``` ``` ## (Intercept) xPTS xREB xAST xTO ## -26.28219202 -0.03462510 0.28493498 0.10893727 -0.23742076 ## xA.T xSTL xBLK xPF xFG ## 0.71484863 0.45963279 0.03029006 0.01040612 26.58460638 ## xFT xX3P ## 6.28277680 3.15823537 ``` ``` print(dim(x)) ``` ``` ## [1] 64 11 ``` ``` beta = as.matrix(beta) print(dim(beta)) ``` ``` ## [1] 12 1 ``` ``` wuns = matrix(1,64,1) x = cbind(wuns,x) xbar = as.matrix(colMeans(x)) print(xbar) ``` ``` ## [,1] ## 1.0000000 ## PTS 67.1015625 ## REB 34.4671875 ## AST 12.7484375 ## TO 13.9578125 ## A.T 0.9778125 ## STL 6.8234375 ## BLK 2.7500000 ## PF 18.6562500 ## FG 0.4232969 ## FT 0.6914687 ## X3P 0.3333750 ``` ``` probitfunction = t(beta) %*% xbar slopes = probitfunction[1] * beta slopes ``` ``` ## [,1] ## (Intercept) -1.401478911 ## xPTS -0.001846358 ## xREB 0.015193952 ## xAST 0.005809001 ## xTO -0.012660291 ## xA.T 0.038118787 ## xSTL 0.024509587 ## xBLK 0.001615196 ## xPF 0.000554899 ## xFG 1.417604938 ## xFT 0.335024536 ## xX3P 0.168410621 ``` 11\.9 Maximum\-Likelihood Estimation (MLE) of these Choice Models ----------------------------------------------------------------- Estimation in the models above, using the **glm** function is done by R using MLE. 
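As a quick numerical check of this claim, the following sketch minimizes the negative log\-likelihood of the logit directly with **optim** (the starting values and optimizer settings are purely illustrative); the estimates should be close to the **glm** logit coefficients reported earlier, and the inverse Hessian gives approximate standard errors, anticipating the derivation below.

```
#Sketch: reproduce the logit fit by maximizing the log-likelihood directly
#(we minimize the negative log-likelihood with optim; settings are illustrative)
ncaa = read.table("DSTMAA_data/ncaa.txt", header = TRUE)
y = c(rep(1, 32), rep(0, 32))
X = cbind(1, as.matrix(ncaa[4:14]))      #prepend an intercept column

negLL = function(b, X, y) {
  eta = as.vector(X %*% b)
  sum(log(1 + exp(eta))) - sum(y * eta)  #negative log-likelihood of the logit
}

fit = optim(rep(0, ncol(X)), negLL, X = X, y = y, method = "BFGS",
            hessian = TRUE, control = list(maxit = 1000))
print(fit$par)                           #should be close to the glm logit coefficients above
print(sqrt(diag(solve(fit$hessian))))    #approximate standard errors (see the derivation below)
```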
Let's write this out a little formally. Since we have, say, \\(n\\) observations, and each LHS variable is \\(y \= \\{0,1\\}\\), we have the likelihood function as follows:

\\\[\\begin{equation} L \= \\prod\_{i\=1}^n F(\\beta'x\_i)^{y\_i} \[1\-F(\\beta'x\_i)]^{1\-y\_i} \\end{equation}\\]

The log\-likelihood will be

\\\[\\begin{equation} \\ln L \= \\sum\_{i\=1}^n \\left\[ y\_i \\ln F(\\beta'x\_i) \+ (1\-y\_i) \\ln \[1\-F(\\beta'x\_i)] \\right] \\end{equation}\\]

To maximize the log\-likelihood we take the derivative:

\\\[\\begin{equation} \\frac{\\partial \\ln L}{\\partial \\beta} \= \\sum\_{i\=1}^n \\left\[ y\_i \\frac{f(\\beta'x\_i)}{F(\\beta'x\_i)} \- (1\-y\_i) \\frac{f(\\beta'x\_i)}{1\-F(\\beta'x\_i)} \\right]x\_i \= 0 \\end{equation}\\]

which gives a system of equations to be solved for \\(\\beta\\). This is what the software is doing. The system of first\-order conditions is collectively called the **likelihood equation**.

You may well ask, how do we get the t\-statistics of the parameter estimates \\(\\beta\\)? The formal derivation is beyond the scope of this class, as it requires probability limit theorems, but let's just do this a little heuristically, so you have some idea of what lies behind it.

The t\-stat for a coefficient is its value divided by its standard error. We get some idea of the standard error by asking the question: how does the coefficient set \\(\\beta\\) change when the log\-likelihood changes? That is, we are interested in \\(\\partial \\beta / \\partial \\ln L\\). Above we have computed the reciprocal of this, as you can see. Let's define

\\\[\\begin{equation} g \= \\frac{\\partial \\ln L}{\\partial \\beta} \\end{equation}\\]

We also define the second derivative (also known as the Hessian matrix)

\\\[\\begin{equation} H \= \\frac{\\partial^2 \\ln L}{\\partial \\beta \\partial \\beta'} \\end{equation}\\]

Note that the following are valid:

\\\[\\begin{eqnarray\*} E(g) \&\=\& 0 \\quad \\mbox{(this is a vector)} \\\\ Var(g) \&\=\& E(g g') \- E(g)^2 \= E(g g') \\\\ \&\=\& \-E(H) \\quad \\mbox{(this is a non\-trivial proof)} \\end{eqnarray\*}\\]

We call

\\\[\\begin{equation} I(\\beta) \= \-E(H) \\end{equation}\\]

the information matrix. Since (heuristically) the variation in the log\-likelihood with changes in beta is given by \\(Var(g)\=\-E(H)\=I(\\beta)\\), the inverse gives the variance of \\(\\beta\\). Therefore, we have

\\\[\\begin{equation} Var(\\beta) \\rightarrow I(\\beta)^{\-1} \\end{equation}\\]

We take the square root of the diagonal of this matrix and divide the values of \\(\\beta\\) by that to get the t\-statistics.

11\.10 Multinomial Logit
------------------------

You will need the **nnet** package for this.
This model takes the following form: \\\[\\begin{equation} \\mbox{Prob}\[y \= j] \= p\_j\= \\frac{\\exp(\\beta\_j' x)}{1\+\\sum\_{j\=1}^{J} \\exp(\\beta\_j' x)} \\end{equation}\\] We usually set \\\[\\begin{equation} \\mbox{Prob}\[y \= 0] \= p\_0 \= \\frac{1}{1\+\\sum\_{j\=1}^{J} \\exp(\\beta\_j' x)} \\end{equation}\\] ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) x = as.matrix(ncaa[4:14]) w1 = (1:16)*0 + 1 w0 = (1:16)*0 y1 = c(w1,w0,w0,w0) y2 = c(w0,w1,w0,w0) y3 = c(w0,w0,w1,w0) y4 = c(w0,w0,w0,w1) y = cbind(y1,y2,y3,y4) library(nnet) res = multinom(y~x) ``` ``` ## # weights: 52 (36 variable) ## initial value 88.722839 ## iter 10 value 71.177975 ## iter 20 value 60.076921 ## iter 30 value 51.167439 ## iter 40 value 47.005269 ## iter 50 value 45.196280 ## iter 60 value 44.305029 ## iter 70 value 43.341689 ## iter 80 value 43.260097 ## iter 90 value 43.247324 ## iter 100 value 43.141297 ## final value 43.141297 ## stopped after 100 iterations ``` ``` res ``` ``` ## Call: ## multinom(formula = y ~ x) ## ## Coefficients: ## (Intercept) xPTS xREB xAST xTO xA.T ## y2 -8.847514 -0.1595873 0.3134622 0.6198001 -0.2629260 -2.1647350 ## y3 65.688912 0.2983748 -0.7309783 -0.6059289 0.9284964 -0.5720152 ## y4 31.513342 -0.1382873 -0.2432960 0.2887910 0.2204605 -2.6409780 ## xSTL xBLK xPF xFG xFT xX3P ## y2 -0.813519 0.01472506 0.6521056 -13.77579 10.374888 -3.436073 ## y3 -1.310701 0.63038878 -0.1788238 -86.37410 -24.769245 -4.897203 ## y4 -1.470406 -0.31863373 0.5392835 -45.18077 6.701026 -7.841990 ## ## Residual Deviance: 86.28259 ## AIC: 158.2826 ``` ``` print(names(res)) ``` ``` ## [1] "n" "nunits" "nconn" "conn" ## [5] "nsunits" "decay" "entropy" "softmax" ## [9] "censored" "value" "wts" "convergence" ## [13] "fitted.values" "residuals" "call" "terms" ## [17] "weights" "deviance" "rank" "lab" ## [21] "coefnames" "vcoefnames" "xlevels" "edf" ## [25] "AIC" ``` ``` res$fitted.values ``` ``` ## y1 y2 y3 y4 ## 1 6.785454e-01 3.214178e-01 7.032345e-06 2.972107e-05 ## 2 6.168467e-01 3.817718e-01 2.797313e-06 1.378715e-03 ## 3 7.784836e-01 1.990510e-01 1.688098e-02 5.584445e-03 ## 4 5.962949e-01 3.988588e-01 5.018346e-04 4.344392e-03 ## 5 9.815286e-01 1.694721e-02 1.442350e-03 8.179230e-05 ## 6 9.271150e-01 6.330104e-02 4.916966e-03 4.666964e-03 ## 7 4.515721e-01 9.303667e-02 3.488898e-02 4.205023e-01 ## 8 8.210631e-01 1.530721e-01 7.631770e-03 1.823302e-02 ## 9 1.567804e-01 9.375075e-02 6.413693e-01 1.080996e-01 ## 10 8.403357e-01 9.793135e-03 1.396393e-01 1.023186e-02 ## 11 9.163789e-01 6.747946e-02 7.847380e-05 1.606316e-02 ## 12 2.448850e-01 4.256001e-01 2.880803e-01 4.143463e-02 ## 13 1.040352e-01 1.534272e-01 1.369554e-01 6.055822e-01 ## 14 8.468755e-01 1.506311e-01 5.083480e-04 1.985036e-03 ## 15 7.136048e-01 1.294146e-01 7.385294e-02 8.312770e-02 ## 16 9.885439e-01 1.114547e-02 2.187311e-05 2.887256e-04 ## 17 6.478074e-02 3.547072e-01 1.988993e-01 3.816127e-01 ## 18 4.414721e-01 4.497228e-01 4.716550e-02 6.163956e-02 ## 19 6.024508e-03 3.608270e-01 7.837087e-02 5.547777e-01 ## 20 4.553205e-01 4.270499e-01 3.614863e-04 1.172681e-01 ## 21 1.342122e-01 8.627911e-01 1.759865e-03 1.236845e-03 ## 22 1.877123e-02 6.423037e-01 5.456372e-05 3.388705e-01 ## 23 5.620528e-01 4.359459e-01 5.606424e-04 1.440645e-03 ## 24 2.837494e-01 7.154506e-01 2.190456e-04 5.809815e-04 ## 25 1.787749e-01 8.037335e-01 3.361806e-04 1.715541e-02 ## 26 3.274874e-02 3.484005e-02 1.307795e-01 8.016317e-01 ## 27 1.635480e-01 3.471676e-01 1.131599e-01 3.761245e-01 ## 28 2.360922e-01 7.235497e-01 3.375018e-02 
6.607966e-03 ## 29 1.618602e-02 7.233098e-01 5.762083e-06 2.604984e-01 ## 30 3.037741e-02 8.550873e-01 7.487804e-02 3.965729e-02 ## 31 1.122897e-01 8.648388e-01 3.935657e-03 1.893584e-02 ## 32 2.312231e-01 6.607587e-01 4.770775e-02 6.031045e-02 ## 33 6.743125e-01 2.028181e-02 2.612683e-01 4.413746e-02 ## 34 1.407693e-01 4.089518e-02 7.007541e-01 1.175815e-01 ## 35 6.919547e-04 4.194577e-05 9.950322e-01 4.233924e-03 ## 36 8.051225e-02 4.213965e-03 9.151287e-01 1.450423e-04 ## 37 5.691220e-05 7.480549e-02 5.171594e-01 4.079782e-01 ## 38 2.709867e-02 3.808987e-02 6.193969e-01 3.154145e-01 ## 39 4.531001e-05 2.248580e-08 9.999542e-01 4.626258e-07 ## 40 1.021976e-01 4.597678e-03 5.133839e-01 3.798208e-01 ## 41 2.005837e-02 2.063200e-01 5.925050e-01 1.811166e-01 ## 42 1.829028e-04 1.378795e-03 6.182839e-01 3.801544e-01 ## 43 1.734296e-01 9.025284e-04 7.758862e-01 4.978171e-02 ## 44 4.314938e-05 3.131390e-06 9.997892e-01 1.645004e-04 ## 45 1.516231e-02 2.060325e-03 9.792594e-01 3.517926e-03 ## 46 2.917597e-01 6.351166e-02 4.943818e-01 1.503468e-01 ## 47 1.278933e-04 1.773509e-03 1.209486e-01 8.771500e-01 ## 48 1.320000e-01 2.064338e-01 6.324904e-01 2.907578e-02 ## 49 1.683221e-02 4.007848e-01 1.628981e-03 5.807540e-01 ## 50 9.670085e-02 4.314765e-01 7.669035e-03 4.641536e-01 ## 51 4.953577e-02 1.370037e-01 9.882004e-02 7.146405e-01 ## 52 1.787927e-02 9.825660e-02 2.203037e-01 6.635604e-01 ## 53 1.174053e-02 4.723628e-01 2.430072e-03 5.134666e-01 ## 54 2.053871e-01 6.721356e-01 4.169640e-02 8.078090e-02 ## 55 3.060369e-06 1.418623e-03 1.072549e-02 9.878528e-01 ## 56 1.122164e-02 6.566169e-02 3.080641e-01 6.150525e-01 ## 57 8.873716e-03 4.996907e-01 8.222034e-03 4.832136e-01 ## 58 2.164962e-02 2.874313e-01 1.136455e-03 6.897826e-01 ## 59 5.230443e-03 6.430174e-04 9.816825e-01 1.244406e-02 ## 60 8.743368e-02 6.710327e-02 4.260116e-01 4.194514e-01 ## 61 1.913578e-01 6.458463e-04 3.307553e-01 4.772410e-01 ## 62 6.450967e-07 5.035697e-05 7.448285e-01 2.551205e-01 ## 63 2.400365e-04 4.651537e-03 8.183390e-06 9.951002e-01 ## 64 1.515894e-04 2.631451e-01 1.002332e-05 7.366933e-01 ``` You can see from the results that the probability for category 1 is the same as \\(p\_0\\). What this means is that we compute the other three probabilities, and the remaining is for the first category. We check that the probabilities across each row for all four categories add up to 1: ``` rowSums(res$fitted.values) ``` ``` ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## 51 52 53 54 55 56 57 58 59 60 61 62 63 64 ## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ``` 11\.11 When OLS fails --------------------- The standard linear regression model often does not apply, and we need to be careful to not overuse it. Peter Kennedy in his excellent book “A Guide to Econometrics” states five cases where violations of critical assumptions for OLS occur, and we should then be warned against its use. 1. The OLS model is in error when (a) the RHS variables are incorrect (**inapproprate regressors**) for use to explain the LHS variable. This is just the presence of a poor model. Hopefully, the F\-statistic from such a regression will warn against use of the model. (b) The relationship between the LHS and RHS is **nonlinear**, and this makes use of a linear regression inaccurate. 
(c) the model is **non\-stationary**, i.e., the data spans a period where the coefficients cannot be reasonably expected to remain the same.
2. **Non\-zero mean regression residuals**. This occurs with truncated residuals (see discussion below) and in **sample selection** problems, where the model fitted to a selected subsample would result in non\-zero mean errors for the full sample. This is also known as the biased intercept problem. The errors may also be correlated with the regressors, i.e., endogeneity (see below).
3. **Residuals are not iid**. This occurs in two ways. (a) Heteroskedasticity, i.e., the variances of the residuals are not the same across observations, violating the identically distributed assumption. (b) Autocorrelation, where the residuals are correlated with each other, violating the independence assumption.
4. **Endogeneity**. Here the observations of regressors \\(x\\) cannot be assumed to be fixed in repeated samples. This occurs in several ways. (a) Errors in variables, i.e., measurement of \\(x\\) in error. (b) Omitted variables, which is a form of errors in variables. (c) Autoregression, i.e., using a lagged value of the dependent variable as an independent variable, as in VARs. (d) Simultaneous equation systems, where all variables are endogenous, and this is also known as **reverse causality**. For example, changes in tax rates change economic behavior, and hence income, which may result in further policy changes in tax rates, and so on. Because the \\(x\\) variables are correlated with the errors \\(\\epsilon\\), they are no longer exogenous, and hence we term this situation one of “endogeneity”.
5. **\\(n \< p\\)**. OLS requires the number of observations (\\(n\\)) to exceed the number of independent variables (\\(p\\)), i.e., the dimension of \\(x\\), and it breaks down when this does not hold. A related breakdown occurs when two regressors are highly correlated with each other, which is known as **multicollinearity**.

11\.12 Truncated Variables and Sample Selection
-----------------------------------------------

Sample selection problems arise because the sample is truncated based on some selection criterion, and the regression that is run is biased because the sample is biased and does not reflect the true/full population. For example, wage data is only available for people who decided to work, i.e., the wage was worth their while, and above their reservation wage. If we are interested in finding out the determinants of wages, we need to take this fact into account, i.e., the sample only contains people who were willing to work at the wage levels that were in turn determined by demand and supply of labor. The sample becomes non\-random. It explains the curious case that women with more children tend to have lower wages (because they need the money and hence, their reservation wage is lower).

Usually we handle sample selection issues using a two\-equation regression approach. The first equation determines if an observation enters the sample. The second equation then assesses the model of interest, e.g., what determines wages. We will look at an example later. But first, we provide some basic mathematical results that we need later. And of course, we need to revisit our Bayesian ideas again!
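To see the problem concretely before turning to those results, here is a small simulated sketch (the data\-generating parameters are invented purely for illustration): we truncate a simple linear model on its dependent variable and compare the OLS fits on the full and truncated samples.

```
#Simulated illustration of truncation bias (all parameter values are made up)
set.seed(123)
xs = rnorm(10000)
ys = 1 + 2*xs + rnorm(10000)            #true intercept = 1, true slope = 2
print(coef(lm(ys ~ xs)))                #full sample: close to (1, 2)
keep = (ys > 1)                         #truncate the sample on the dependent variable
print(coef(lm(ys[keep] ~ xs[keep])))    #slope biased toward zero, intercept biased upward
```

The truncated sample's slope is pulled toward zero and its intercept is pushed up, which is the bias that the truncated regression results below make precise.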
* Given a probability density \\(f(x)\\), \\\[\\begin{equation} f(x \| x \> a) \= \\frac{f(x)}{Pr(x\>a)} \\end{equation}\\] If we are using the normal distribution then this is: \\\[\\begin{equation} f(x \| x \> a) \= \\frac{\\phi(x)}{1\-\\Phi(a)} \\end{equation}\\] * If \\(x \\sim N(\\mu, \\sigma^2\)\\), then \\\[\\begin{equation} E(x \| x\>a) \= \\mu \+ \\sigma\\; \\frac{\\phi(c)}{1\-\\Phi(c)}, \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{equation}\\] Note that this expectation is provided without proof, as are the next few ones. For example if we let \\(x\\) be standard normal and we want \\(E(\[x \| x \> \-1]\\), we have ``` dnorm(-1)/(1-pnorm(-1)) ``` ``` ## [1] 0.2876 ``` For the same distribution \\\[\\begin{equation} E(x \| x \< a) \= \\mu \+ \\sigma\\; \\frac{\-\\phi(c)}{\\Phi(c)}, \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{equation}\\] For example, \\(E\[x \| x \< 1]\\) is ``` -dnorm(1)/pnorm(1) ``` ``` ## [1] -0.2876 ``` 11\.13 Inverse Mills Ratio -------------------------- The values \\(\\frac{\\phi(c)}{1\-\\Phi(c)}\\) or \\(\\frac{\-\\phi(c)}{\\Phi(c)}\\) as the case may be is often shortened to the variable \\(\\lambda(c)\\), which is also known as the Inverse Mills Ratio. If \\(y\\) and \\(x\\) are correlated (with correlation \\(\\rho\\)), and \\(y \\sim N(\\mu\_y,\\sigma\_y^2\)\\), then \\\[\\begin{eqnarray\*} Pr(y,x \| x\>a) \&\=\& \\frac{f(y,x)}{Pr(x\>a)} \\\\ E(y \| x\>a) \&\=\& \\mu\_y \+ \\sigma\_y \\rho \\lambda(c), \\quad c \= \\frac{a\-\\mu}{\\sigma} \\end{eqnarray\*}\\] This leads naturally to the truncated regression model. Suppose we have the usual regression model where \\\[\\begin{equation} y \= \\beta'x \+ e, \\quad e \\sim N(0,\\sigma^2\) \\end{equation}\\] But suppose we restrict attention in our model to values of \\(y\\) that are greater than a cut off \\(a\\). We can then write down by inspection the following correct model (no longer is the simple linear regression valid) \\\[\\begin{equation} E(y \| y \> a) \= \\beta' x \+ \\sigma \\; \\frac{\\phi\[(a\-\\beta'x)/\\sigma]}{1\-\\Phi\[(a\-\\beta'x)/\\sigma]} \\end{equation}\\] Therefore, when the sample is truncated, then we need to run the regression above, i.e., the usual right\-hand side \\(\\beta' x\\) with an additional variable, i.e., the Inverse Mill’s ratio. We look at this in a real\-world example. ### 11\.13\.1 Example: Limited Dependent Variables in VC Syndications Not all venture\-backed firms end up making a successful exit, either via an IPO, through a buyout, or by means of another exit route. By examining a large sample of startup firms, we can measure the probability of a firm making a successful exit. By designating successful exits as \\(S\=1\\), and setting \\(S\=0\\) otherwise, we use matrix \\(X\\) of explanatory variables and fit a Probit model to the data. We define \\(S\\) to be based on a **latent** threshold variable \\(S^\*\\) such that \\\[\\begin{equation} S \= \\left\\{ \\begin{array}{ll} 1 \& \\mbox{if } S^\* \> 0\\\\ 0 \& \\mbox{if } S^\* \\leq 0\. \\end{array} \\right. \\end{equation}\\] where the latent variable is modeled as \\\[\\begin{equation} S^\* \= \\gamma' X \+ u, \\quad u \\sim N(0,\\sigma\_u^2\) \\end{equation}\\] The fitted model provides us the probability of exit, i.e., \\(E(S)\\), for all financing rounds. \\\[\\begin{equation} E(S) \= E(S^\* \> 0\) \= E(u \> \-\\gamma' X) \= 1 \- \\Phi(\-\\gamma' X) \= \\Phi(\\gamma' X), \\end{equation}\\] where \\(\\gamma\\) is the vector of coefficients fitted in the Probit model, using standard likelihood methods. 
The last expression in the equation above follows from the use of normality in the Probit specification. \\(\\Phi(.)\\) denotes the cumulative normal distribution.

11\.14 Sample Selection Problems (and endogeneity)
--------------------------------------------------

Suppose we want to examine the role of syndication in venture success. Success in a syndicated venture comes from two broad sources of VC expertise. First, VCs are experienced in picking good projects to invest in, and syndicates are efficient vehicles for picking good firms; this is the selection hypothesis put forth by Lerner (1994\). Amongst two projects that appear a\-priori similar in prospects, the fact that one of them is selected by a syndicate is evidence that the project is of better quality (ex\-post to being vetted by the syndicate, but ex\-ante to effort added by the VCs), since the process of syndication effectively entails getting a second opinion by the lead VC. Second, syndicates may provide better monitoring as they bring a wide range of skills to the venture, and this is suggested in the value\-added hypothesis of Brander, Amit, and Antweiler (2002\).

A regression of venture returns on various firm characteristics and a dummy variable for syndication allows a first pass estimate of whether syndication impacts performance. However, it may be that syndicated firms are simply of higher quality and deliver better performance, whether or not they chose to syndicate. Better firms are more likely to syndicate because VCs tend to prefer such firms and can identify them. In this case, the coefficient on the dummy variable might reveal a value\-add from syndication, when indeed, there is none. Hence, we correct the specification for endogeneity, and then examine whether the dummy variable remains significant.

Greene, in his classic book “Econometric Analysis” provides the correction for endogeneity required here. We briefly summarize the model required.
The performance regression is of the form: \\\[\\begin{equation} Y \= \\beta' X \+ \\delta Q \+ \\epsilon, \\quad \\epsilon \\sim N(0,\\sigma\_{\\epsilon}^2\) \\end{equation}\\] where \\(Y\\) is the performance variable; \\(Q\\) is the dummy variable taking a value of 1 if the firm is syndicated, and zero otherwise, and \\(\\delta\\) is a coefficient that determines whether performance is different on account of syndication. If it is not, then it implies that the variables \\(X\\) are sufficient to explain the differential performance across firms, or that there is no differential performance across the two types of firms. However, since these same variables determine also, whether the firm syndicates or not, we have an endogeneity issue which is resolved by adding a correction to the model above. The error term \\(\\epsilon\\) is affected by censoring bias in the subsamples of syndicated and non\-syndicated firms. When \\(Q\=1\\), i.e. when the firm’s financing is syndicated, then the residual \\(\\epsilon\\) has the following expectation \\\[\\begin{equation} E(\\epsilon \| Q\=1\) \= E(\\epsilon \| S^\* \>0\) \= E(\\epsilon \| u \> \-\\gamma' X) \= \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] where \\(\\rho \= Corr(\\epsilon,u)\\), and \\(\\sigma\_{\\epsilon}\\) is the standard deviation of \\(\\epsilon\\). This implies that \\\[\\begin{equation} E(Y \| Q\=1\) \= \\beta'X \+ \\delta \+ \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] Note that \\(\\phi(\-\\gamma'X)\=\\phi(\\gamma'X)\\), and \\(1\-\\Phi(\-\\gamma'X)\=\\Phi(\\gamma'X)\\). For estimation purposes, we write this as the following regression equation: EQN1 \\\[\\begin{equation} Y \= \\delta \+ \\beta' X \+ \\beta\_m m(\\gamma' X) \\end{equation}\\] where \\(m(\\gamma' X) \= \\frac{\\phi(\\gamma' X)}{\\Phi(\\gamma' X)}\\) and \\(\\beta\_m \= \\rho \\sigma\_{\\epsilon}\\). Thus, \\(\\{\\delta,\\beta,\\beta\_m\\}\\) are the coefficients estimated in the regression. (Note here that \\(m(\\gamma' X)\\) is also known as the inverse Mill’s ratio.) Likewise, for firms that are not syndicated, we have the following result \\\[\\begin{equation} E(Y \| Q\=0\) \= \\beta'X \+ \\rho \\sigma\_{\\epsilon} \\left\[ \\frac{\-\\phi(\\gamma' X)}{1\-\\Phi(\\gamma' X)} \\right]. \\end{equation}\\] This may also be estimated by linear cross\-sectional regression. EQN0 \\\[\\begin{equation} Y \= \\beta' X \+ \\beta\_m \\cdot m'(\\gamma' X) \\end{equation}\\] where \\(m' \= \\frac{\-\\phi(\\gamma' X)}{1\-\\Phi(\\gamma' X)}\\) and \\(\\beta\_m \= \\rho \\sigma\_{\\epsilon}\\). The estimation model will take the form of a stacked linear regression comprising both equations (EQN1\) and (EQN0\). This forces \\(\\beta\\) to be the same across all firms without necessitating additional constraints, and allows the specification to remain within the simple OLS form. If \\(\\delta\\) is significant after this endogeneity correction, then the empirical evidence supports the hypothesis that syndication is a driver of differential performance. If the coefficients \\(\\{\\delta, \\beta\_m\\}\\) are significant, then the expected difference in performance for each syndicated financing round \\((i,j)\\) is \\\[\\begin{equation} \\delta \+ \\beta\_m \\left\[ m(\\gamma\_{ij}' X\_{ij}) \- m'(\\gamma\_{ij}' X\_{ij}) \\right], \\;\\;\\; \\forall i,j. \\end{equation}\\] The method above forms one possible approach to addressing treatment effects. 
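Since the syndication data itself is not provided here, the following sketch only illustrates how equations (EQN1\) and (EQN0\) could be stacked into a single OLS after a first\-stage probit, using simulated data; every variable name and parameter value in it is invented purely for illustration.

```
#Sketch: stacked estimation of (EQN1) and (EQN0) on simulated data
#(all names and parameter values are made up for illustration only)
set.seed(99)
n = 5000
X = rnorm(n)                              #observable firm characteristic
u = rnorm(n)                              #selection (syndication) equation error
eps = 0.5*u + rnorm(n)                    #performance error, correlated with u
Q = as.numeric(0.5 + X + u > 0)           #Q = 1 if the round is syndicated
Y = 2 + 1.5*X + 1.0*Q + eps               #true delta = 1

#First stage: probit for the syndication decision, giving fitted gamma'X
probit = glm(Q ~ X, family = binomial(link = "probit"))
gX = predict(probit)                      #linear predictor gamma'X

#Inverse Mills ratio terms: m for Q = 1 (EQN1), m' for Q = 0 (EQN0)
m = ifelse(Q == 1, dnorm(gX)/pnorm(gX), -dnorm(gX)/(1 - pnorm(gX)))

#Stacked OLS: the coefficient on Q estimates delta after the correction
summary(lm(Y ~ X + Q + m))
#Compare with summary(lm(Y ~ X + Q)): without the Mills ratio term,
#the coefficient on Q tends to be overstated in this setup
```

Dropping the Mills ratio term from the stacked regression tends to overstate the coefficient on \\(Q\\) in this simulated setup, which is precisely the endogeneity problem described above.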
Another approach is to estimate a Probit model first, and then to set \\(m(\\gamma' X) \= \\Phi(\\gamma' X)\\). This is known as the instrumental variables approach. Some **References**: Brander, Amit, and Antweiler ([2002](#ref-JEMS:JEMS423)); Lerner ([1994](#ref-10.2307/3665618)) The correct regression may be run using the **sampleSelection** package in R. Sample selection models correct for the fact that two subsamples may be different because of treatment effects. Let’s take an example with data from the wage market. ### 11\.14\.1 Example: Women in the Labor Market This is an example from the package in R itself. The data used is also within the package. After loading in the package **sampleSelection** we can use the data set called **Mroz87**. This contains labour market participation data for women as well as wage levels for women. If we are explaining what drives women’s wages we can simply run the following regression. See: [http://www.inside\-r.org/packages/cran/sampleSelection/docs/Mroz87](http://www.inside-r.org/packages/cran/sampleSelection/docs/Mroz87) The original paper may be downloaded at: [http://eml.berkeley.edu/\~cle/e250a\_f13/mroz\-paper.pdf](http://eml.berkeley.edu/~cle/e250a_f13/mroz-paper.pdf) ``` library(sampleSelection) ``` ``` ## Loading required package: maxLik ``` ``` ## Loading required package: miscTools ``` ``` ## Warning: package 'miscTools' was built under R version 3.3.2 ``` ``` ## Loading required package: methods ``` ``` ## ## Please cite the 'maxLik' package as: ## Henningsen, Arne and Toomet, Ott (2011). maxLik: A package for maximum likelihood estimation in R. Computational Statistics 26(3), 443-458. DOI 10.1007/s00180-010-0217-1. ## ## If you have questions, suggestions, or comments regarding the 'maxLik' package, please use a forum or 'tracker' at maxLik's R-Forge site: ## https://r-forge.r-project.org/projects/maxlik/ ``` ``` data(Mroz87) Mroz87$kids = (Mroz87$kids5 + Mroz87$kids618 > 0) Mroz87$numkids = Mroz87$kids5 + Mroz87$kids618 summary(Mroz87) ``` ``` ## lfp hours kids5 kids618 ## Min. :0.0000 Min. : 0.0 Min. :0.0000 Min. :0.000 ## 1st Qu.:0.0000 1st Qu.: 0.0 1st Qu.:0.0000 1st Qu.:0.000 ## Median :1.0000 Median : 288.0 Median :0.0000 Median :1.000 ## Mean :0.5684 Mean : 740.6 Mean :0.2377 Mean :1.353 ## 3rd Qu.:1.0000 3rd Qu.:1516.0 3rd Qu.:0.0000 3rd Qu.:2.000 ## Max. :1.0000 Max. :4950.0 Max. :3.0000 Max. :8.000 ## age educ wage repwage ## Min. :30.00 Min. : 5.00 Min. : 0.000 Min. :0.00 ## 1st Qu.:36.00 1st Qu.:12.00 1st Qu.: 0.000 1st Qu.:0.00 ## Median :43.00 Median :12.00 Median : 1.625 Median :0.00 ## Mean :42.54 Mean :12.29 Mean : 2.375 Mean :1.85 ## 3rd Qu.:49.00 3rd Qu.:13.00 3rd Qu.: 3.788 3rd Qu.:3.58 ## Max. :60.00 Max. :17.00 Max. :25.000 Max. :9.98 ## hushrs husage huseduc huswage ## Min. : 175 Min. :30.00 Min. : 3.00 Min. : 0.4121 ## 1st Qu.:1928 1st Qu.:38.00 1st Qu.:11.00 1st Qu.: 4.7883 ## Median :2164 Median :46.00 Median :12.00 Median : 6.9758 ## Mean :2267 Mean :45.12 Mean :12.49 Mean : 7.4822 ## 3rd Qu.:2553 3rd Qu.:52.00 3rd Qu.:15.00 3rd Qu.: 9.1667 ## Max. :5010 Max. :60.00 Max. :17.00 Max. :40.5090 ## faminc mtr motheduc fatheduc ## Min. : 1500 Min. :0.4415 Min. : 0.000 Min. : 0.000 ## 1st Qu.:15428 1st Qu.:0.6215 1st Qu.: 7.000 1st Qu.: 7.000 ## Median :20880 Median :0.6915 Median :10.000 Median : 7.000 ## Mean :23081 Mean :0.6789 Mean : 9.251 Mean : 8.809 ## 3rd Qu.:28200 3rd Qu.:0.7215 3rd Qu.:12.000 3rd Qu.:12.000 ## Max. :96000 Max. :0.9415 Max. :17.000 Max. :17.000 ## unem city exper nwifeinc ## Min. 
: 3.000 Min. :0.0000 Min. : 0.00 Min. :-0.02906 ## 1st Qu.: 7.500 1st Qu.:0.0000 1st Qu.: 4.00 1st Qu.:13.02504 ## Median : 7.500 Median :1.0000 Median : 9.00 Median :17.70000 ## Mean : 8.624 Mean :0.6428 Mean :10.63 Mean :20.12896 ## 3rd Qu.:11.000 3rd Qu.:1.0000 3rd Qu.:15.00 3rd Qu.:24.46600 ## Max. :14.000 Max. :1.0000 Max. :45.00 Max. :96.00000 ## wifecoll huscoll kids numkids ## TRUE:212 TRUE:295 Mode :logical Min. :0.000 ## FALSE:541 FALSE:458 FALSE:229 1st Qu.:0.000 ## TRUE :524 Median :1.000 ## NA's :0 Mean :1.591 ## 3rd Qu.:3.000 ## Max. :8.000 ``` ``` res = lm(wage ~ age + age^2 + educ + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + educ + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.5331 -2.2710 -0.4765 1.3975 22.7241 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.2490882 0.9094210 -3.573 0.000376 *** ## age 0.0008193 0.0141084 0.058 0.953708 ## educ 0.4496393 0.0503591 8.929 < 2e-16 *** ## city 0.0998064 0.2388551 0.418 0.676174 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.079 on 749 degrees of freedom ## Multiple R-squared: 0.1016, Adjusted R-squared: 0.09799 ## F-statistic: 28.23 on 3 and 749 DF, p-value: < 2.2e-16 ``` So, education matters. But since education also determines labor force participation (variable **lfp**) it may just be that we can use **lfp** instead. Let’s try that. ``` res = lm(wage ~ age + age^2 + lfp + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.1815 -0.9869 -0.1624 0.3081 20.6809 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.478793 0.513001 -0.933 0.3510 ## age 0.004163 0.011333 0.367 0.7135 ## lfp 4.185897 0.183727 22.783 <2e-16 *** ## city 0.462158 0.190176 2.430 0.0153 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.489 on 749 degrees of freedom ## Multiple R-squared: 0.4129, Adjusted R-squared: 0.4105 ## F-statistic: 175.6 on 3 and 749 DF, p-value: < 2.2e-16 ``` ``` #LET'S TRY BOTH VARIABLES Mroz87$educlfp = Mroz87$educ*Mroz87$lfp res = lm(wage ~ age + age^2 + lfp + educ + city + educlfp , data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + educ + city + educlfp, ## data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -5.8139 -0.7307 -0.0712 0.2261 21.1120 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.528196 0.904949 -0.584 0.5596 ## age 0.009299 0.010801 0.861 0.3895 ## lfp -2.028354 0.963841 -2.104 0.0357 * ## educ -0.002723 0.060710 -0.045 0.9642 ## city 0.244245 0.182220 1.340 0.1805 ## educlfp 0.491515 0.077942 6.306 4.89e-10 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.347 on 747 degrees of freedom ## Multiple R-squared: 0.4792, Adjusted R-squared: 0.4757 ## F-statistic: 137.4 on 5 and 747 DF, p-value: < 2.2e-16 ``` ``` #LET'S TRY BOTH VARIABLES res = lm(wage ~ age + age^2 + lfp + educ + city, data=Mroz87) summary(res) ``` ``` ## ## Call: ## lm(formula = wage ~ age + age^2 + lfp + educ + city, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -4.9849 -1.1053 -0.1626 0.4762 21.0179 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -4.18595 0.71239 -5.876 6.33e-09 *** ## age 0.01421 0.01105 1.286 0.199 ## lfp 3.94731 0.18073 21.841 < 2e-16 *** ## educ 0.29043 0.04005 7.252 1.03e-12 *** ## city 0.22401 0.18685 1.199 0.231 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.407 on 748 degrees of freedom ## Multiple R-squared: 0.4514, Adjusted R-squared: 0.4485 ## F-statistic: 153.9 on 4 and 748 DF, p-value: < 2.2e-16 ``` In fact, it seems like both matter, but we should use the selection equation approach of Heckman, in two stages. ``` res = selection(lfp ~ age + age^2 + faminc + kids5 + educ, wage ~ exper + exper^2 + educ + city, data=Mroz87, method = "2step" ) summary(res) ``` ``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## 2-step Heckman / heckit estimation ## 753 observations (325 censored and 428 observed) ## 12 free parameters (df = 742) ## Probit selection equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.394e-01 4.119e-01 0.824 0.410 ## age -3.424e-02 6.728e-03 -5.090 4.55e-07 *** ## faminc 3.390e-06 4.267e-06 0.795 0.427 ## kids5 -8.624e-01 1.111e-01 -7.762 2.78e-14 *** ## educ 1.162e-01 2.361e-02 4.923 1.05e-06 *** ## Outcome equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.66736 1.30192 -2.049 0.0408 * ## exper 0.02370 0.01886 1.256 0.2093 ## educ 0.48816 0.07946 6.144 1.31e-09 *** ## city 0.44936 0.31585 1.423 0.1553 ## Multiple R-Squared:0.1248, Adjusted R-Squared:0.1165 ## Error terms: ## Estimate Std. Error t value Pr(>|t|) ## invMillsRatio 0.11082 0.73108 0.152 0.88 ## sigma 3.09434 NA NA NA ## rho 0.03581 NA NA NA ## -------------------------------------------- ``` Note that even after using education to explain **lfp** in the selection equation, it still matters in the wage equation. So education does really impact wages. ``` ## Example using binary outcome for selection model. ## We estimate the probability of womens' education on their ## chances to get high wage (> $5/hr in 1975 USD), using PSID data ## We use education as explanatory variable ## and add age, kids, and non-work income as exclusion restrictions. library(mvtnorm) data(Mroz87) m <- selection(lfp ~ educ + age + kids5 + kids618 + nwifeinc, wage >= 5 ~ educ, data = Mroz87 ) summary(m) ``` ``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## Maximum Likelihood estimation ## BHHH maximisation, 8 iterations ## Return code 2: successive function values within tolerance limit ## Log-Likelihood: -653.2037 ## 753 observations (325 censored and 428 observed) ## 9 free parameters (df = 744) ## Probit selection equation: ## Estimate Std. error t value Pr(> t) ## (Intercept) 0.430362 0.475966 0.904 0.366 ## educ 0.156223 0.023811 6.561 5.35e-11 *** ## age -0.034713 0.007649 -4.538 5.67e-06 *** ## kids5 -0.890560 0.112663 -7.905 2.69e-15 *** ## kids618 -0.038167 0.039320 -0.971 0.332 ## nwifeinc -0.020948 0.004318 -4.851 1.23e-06 *** ## Outcome equation: ## Estimate Std. error t value Pr(> t) ## (Intercept) -4.5213 0.5611 -8.058 7.73e-16 *** ## educ 0.2879 0.0369 7.800 6.18e-15 *** ## Error terms: ## Estimate Std. 
error t value Pr(> t) ## rho 0.1164 0.2706 0.43 0.667 ## -------------------------------------------- ```

``` #CHECK THAT THE NUMBER OF KIDS MATTERS OR NOT Mroz87$numkids = Mroz87$kids5 + Mroz87$kids618 summary(lm(wage ~ numkids, data=Mroz87)) ```

``` ## ## Call: ## lm(formula = wage ~ numkids, data = Mroz87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.6814 -2.2957 -0.8125 1.3186 23.0900 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 2.68138 0.17421 15.39 <2e-16 *** ## numkids -0.19285 0.08069 -2.39 0.0171 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.232 on 751 degrees of freedom ## Multiple R-squared: 0.007548, Adjusted R-squared: 0.006227 ## F-statistic: 5.712 on 1 and 751 DF, p-value: 0.0171 ```

``` res = selection(lfp ~ age + I(age^2) + faminc + numkids + educ, wage ~ exper + I(exper^2) + educ + city + numkids, data=Mroz87, method = "2step" ) summary(res) ```

``` ## -------------------------------------------- ## Tobit 2 model (sample selection model) ## 2-step Heckman / heckit estimation ## 753 observations (325 censored and 428 observed) ## 15 free parameters (df = 739) ## Probit selection equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.725e+00 1.398e+00 -2.664 0.00789 ** ## age 1.656e-01 6.482e-02 2.554 0.01084 * ## I(age^2) -2.198e-03 7.537e-04 -2.917 0.00365 ** ## faminc 4.001e-06 4.204e-06 0.952 0.34161 ## numkids -1.513e-01 3.827e-02 -3.955 8.39e-05 *** ## educ 9.224e-02 2.302e-02 4.007 6.77e-05 *** ## Outcome equation: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.2476932 2.0702572 -1.086 0.278 ## exper 0.0271253 0.0635033 0.427 0.669 ## I(exper^2) -0.0001957 0.0019429 -0.101 0.920 ## educ 0.4726828 0.1037086 4.558 6.05e-06 *** ## city 0.4389577 0.3166504 1.386 0.166 ## numkids -0.0471181 0.1420580 -0.332 0.740 ## Multiple R-Squared:0.1252, Adjusted R-Squared:0.1128 ## Error terms: ## Estimate Std. Error t value Pr(>|t|) ## invMillsRatio -0.11737 1.38036 -0.085 0.932 ## sigma 3.09374 NA NA NA ## rho -0.03794 NA NA NA ## -------------------------------------------- ```
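To see what the two\-step estimator is doing under the hood, the following sketch (not from the text) reproduces its logic by hand on the same **Mroz87** data: a probit for labor force participation, the inverse Mills ratio computed from its fitted index, and an OLS wage equation on the selected subsample that includes the inverse Mills ratio as a regressor. The point estimates should be close to the first `selection(..., method = "2step")` output above; the standard errors are not corrected for the estimated Mills ratio here, which is why one uses the package in practice. (In that earlier call, `age^2` and `exper^2` were not wrapped in `I()`, so the formula interface reduced them to `age` and `exper` — that is why no squared terms appear in its output — and they are omitted here as well.)

```
# Sketch (not from the text): Heckman's two-step correction done by hand
# Step 1: probit for selection (labor force participation)
probit = glm(lfp ~ age + faminc + kids5 + educ,
             family = binomial(link = "probit"), data = Mroz87)
xb = predict(probit, type = "link")     # estimated selection index gamma'X
Mroz87$imr = dnorm(xb) / pnorm(xb)      # inverse Mills ratio
# Step 2: OLS wage equation on the selected sample, adding the inverse Mills ratio
wage_eq = lm(wage ~ exper + educ + city + imr, data = Mroz87, subset = (lfp == 1))
summary(wage_eq)                        # the coefficient on imr estimates rho*sigma
```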
11\.15 Endogeneity: Some Theory to Wrap Up
------------------------------------------

Endogeneity arises when the independent variables are correlated with the error term in a regression. This can be stated as: \\[\begin{equation} Y \= \beta' X \+ u, \quad E(X\cdot u) \neq 0 \end{equation}\\] This can happen in many ways:

* **Measurement error** (or errors in variables): If \\(X\\) is measured with error, we have \\({\tilde X} \= X \+ e\\). The regression becomes \\[\begin{equation} Y \= \beta\_0 \+ \beta\_1 ({\tilde X} \- e) \+ u \= \beta\_0 \+ \beta\_1 {\tilde X} \+ (u \- \beta\_1 e) \= \beta\_0 \+ \beta\_1 {\tilde X} \+ v \end{equation}\\] Assuming the measurement error \\(e\\) has zero mean and is uncorrelated with \\(X\\) and \\(u\\), we see that \\[\begin{equation} E({\tilde X} \cdot v) \= E\[(X\+e)(u \- \beta\_1 e)] \= \-\beta\_1 E(e^2\) \= \-\beta\_1 Var(e) \neq 0 \end{equation}\\]
* **Omitted variables**: Suppose the true model is \\[\begin{equation} Y \= \beta\_0 \+ \beta\_1 X\_1 \+ \beta\_2 X\_2 \+ u \end{equation}\\] but we do not have \\(X\_2\\). If \\(X\_2\\) happens to be correlated with \\(X\_1\\), it will be subsumed in the error term, and no longer will \\(E(X\_i \cdot u) \= 0, \forall i\\).
* **Simultaneity**: This occurs when \\(Y\\) and \\(X\\) are jointly determined. For example, high wages and high education go together. Or, advertising and sales coincide. Or better start\-up firms tend to receive syndication. The **structural form** of these settings may be written as: \\[\begin{equation} Y \= \beta\_0 \+ \beta\_1 X \+ u, \quad \quad X \= \alpha\_0 \+ \alpha\_1 Y \+ v \end{equation}\\] The solution to these equations gives the **reduced form** of the model. \\[\begin{equation} Y \= \frac{\beta\_0 \+ \beta\_1 \alpha\_0}{1 \- \alpha\_1 \beta\_1} \+ \frac{\beta\_1 v \+ u}{1 \- \alpha\_1 \beta\_1}, \quad \quad X \= \frac{\alpha\_0 \+\alpha\_1 \beta\_0}{1 \- \alpha\_1 \beta\_1} \+ \frac{v \+ \alpha\_1 u}{1 \- \alpha\_1 \beta\_1} \end{equation}\\] From this we can compute the endogeneity result: \\[\begin{equation} Cov(X, u) \= Cov\left(\frac{v \+ \alpha\_1 u}{1 \- \alpha\_1 \beta\_1}, u \right) \= \frac{\alpha\_1}{1 \- \alpha\_1 \beta\_1}\cdot Var(u) \end{equation}\\]

To summarize, if \\(x\\) is correlated with \\(u\\) then \\(x\\) is said to be “endogenous”. Endogeneity biases parameter estimates. The solution is to find an **instrumental variable** (denoted \\(x'\\)) that is highly correlated with \\(x\\), but not correlated with \\(u\\). That is

* \\(\|Corr(x,x')\|\\) is high.
* \\(Corr(x',u)\=0\\).

But since \\(x'\\) is not really \\(x\\), it adds (uncorrelated) variance to the residuals, because \\(x' \= x \+ \eta\\).

11\.16 Cox Proportional Hazards Model
-------------------------------------

This is a model used to estimate the expected time to an event. We may be interested in estimating mortality, failure time of equipment, time to successful IPO of a startup, etc.
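Before turning to the survival\-model details, here is a small simulation that makes the simultaneity bias of the previous section concrete. This sketch is not from the text (all parameter values are made up): it simulates the two structural equations above, shows that OLS is biased for \\(\beta\_1\\), and then recovers it with a hand\-rolled two\-stage least squares, using an instrument \\(z\\) that moves \\(X\\) but is excluded from the \\(Y\\) equation.

```
# Sketch (not from the text): simultaneity bias and a two-stage least squares fix
set.seed(123)
n = 10000
beta0 = 1;  beta1 = 0.5      # structural equation of interest: Y = beta0 + beta1*X + u
alpha0 = 2; alpha1 = 0.4     # reverse equation: X = alpha0 + alpha1*Y + gamma*z + v
gamma = 1                    # z is the instrument: it shifts X but is excluded from Y
z = rnorm(n); u = rnorm(n); v = rnorm(n)
denom = 1 - alpha1*beta1     # solve the structural equations for the reduced form
X = (alpha0 + alpha1*beta0 + gamma*z + v + alpha1*u) / denom
Y = (beta0 + beta1*alpha0 + beta1*(gamma*z + v) + u) / denom
coef(lm(Y ~ X))              # OLS slope is biased away from beta1 = 0.5 since Cov(X,u) != 0
Xhat = fitted(lm(X ~ z))     # first stage: project X on the instrument
coef(lm(Y ~ Xhat))           # second stage: the slope is now close to beta1 = 0.5
```

The OLS slope is contaminated because \\(Cov(X,u) \neq 0\\), exactly as derived above, while the two\-stage estimate is consistent. In practice one would use a packaged estimator (for example `ivreg` in the **AER** package), which also corrects the standard errors.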
If we define the “stopping” time of an event as \\(\tau\\), then we are interested in the cumulative probability of the event occurring by time \\(t\\), \\[ F(t) \= Pr(\tau \leq t ) \\] and the corresponding density function \\(f(t) \= F'(t)\\). The **hazard rate** is defined as the probability that the event occurs at time \\(t\\), conditional on it not having occurred until time \\(t\\), i.e., \\[ \lambda(t) \= \frac{f(t)}{1\-F(t)} \\] Correspondingly, the probability of survival is \\[ s(t) \= \exp\left( \-\int\_0^t \lambda(u)\; du \right) \\] with the probability of failure up to time \\(t\\) then given by \\[ F(t) \= 1 \- s(t) \= 1 \-\exp\left( \-\int\_0^t \lambda(u)\; du \right) \\] Empirically, we estimate the hazard rate as follows, for individual \\(i\\): \\[ \lambda\_i(t) \= \lambda\_0(t) \exp\[\beta^\top x\_i] \geq 0 \\] where \\(\beta\\) is a vector of coefficients, and \\(x\_i\\) is a vector of characteristics of individual \\(i\\). The function \\(\lambda\_0(t) \geq 0\\) is known as the “baseline hazard function”. The hazard ratio is defined as \\(\lambda\_i(t)/\lambda\_0(t)\\). When it is greater than 1, individual \\(i\\) has a greater hazard than baseline. The log hazard ratio is linear in \\(x\_i\\): \\[ \ln \left\[ \frac{\lambda\_i(t)}{\lambda\_0(t)} \right] \= \beta^\top x\_i \\]

In order to get some intuition for the hazard rate, suppose we have three friends who just graduated from college, and they all have an equal chance of getting married. Then at any time \\(t\\), the probability that any one gets married, given that no one has been married so far, is \\(\lambda\_i(t) \= \lambda\_0(t) \= 1/3, \forall t\\). Now, if anyone gets married, the hazard rate for the others jumps to \\(1/2\\). But what if the three friends are of different ages, and the propensity to get married is proportional to age? Then \\[ \lambda\_i(t) \= \frac{\mbox{Age}\_i(t)}{\sum\_{j\=1}^3 \mbox{Age}\_j(t)} \\] This model may also be extended to include gender and other variables.

Given data on \\(M\\) individuals, we can order the data by times \\(t\_1 \< t\_2 \< ... t\_i \< ... \< t\_M\\). Some of these times are times to the event, and some are times of existence without the event; the latter are known as “censoring” times. The values \\(\delta\_1, \delta\_2, ..., \delta\_i, ..., \delta\_M\\) take the value 1 if the individual has experienced the event and zero otherwise. The likelihood of individual \\(i\\) experiencing the event at time \\(t\_i\\) is \\[ L\_i(\beta) \= \frac{\lambda\_i(t\_i)}{\sum\_{j\=i}^M \lambda\_j(t\_i)} \= \frac{\lambda\_0(t\_i) e^{\beta^\top x\_i}}{\sum\_{j\=i}^M \lambda\_0(t\_i) e^{\beta^\top x\_j}} \= \frac{ e^{\beta^\top x\_i}}{\sum\_{j\=i}^M e^{\beta^\top x\_j}} \\] This accounts for all individuals still at risk at time \\(t\_i\\). We see that the likelihood does not depend on \\(t\\), as the baseline hazard function cancels out. The parameters \\(\beta\\) are obtained by maximizing the (partial) likelihood function: \\[ L(\beta) \= \prod\_{i\=1}^M L\_i(\beta)^{\delta\_i} \\] which uses the subset of the data where \\(\delta\_i \= 1\\).

We use the **survival** package in R.

``` library(survival) ```

Here is a very small data set. Note the columns that correspond to the time to the event and the indicator variable “death” (\\(\delta\\)). The \\(x\\) variables are “age” and “female”.
``` SURV = read.table("DSTMAA_data/survival_data.txt",header=TRUE) SURV ``` ``` ## id time death age female ## 1 1 1 1 20 0 ## 2 2 4 0 21 1 ## 3 3 7 1 19 0 ## 4 4 10 1 22 1 ## 5 5 12 0 20 0 ## 6 6 13 1 24 1 ``` We can of course run a linear regression just to see how age and gender affect death, by merely looking at the sign, and we see that being older means on average a greater chance of dying, and being female reduces risk. ``` #SIMPLE REGRESSION APPROACH summary(lm(death ~ age+female, SURV)) ``` ``` ## ## Call: ## lm(formula = death ~ age + female, data = SURV) ## ## Residuals: ## 1 2 3 4 5 6 ## 0.27083 -0.41667 0.45833 0.39583 -0.72917 0.02083 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -3.0208 5.2751 -0.573 0.607 ## age 0.1875 0.2676 0.701 0.534 ## female -0.5000 0.8740 -0.572 0.607 ## ## Residual standard error: 0.618 on 3 degrees of freedom ## Multiple R-squared: 0.1406, Adjusted R-squared: -0.4323 ## F-statistic: 0.2455 on 2 and 3 DF, p-value: 0.7967 ``` Instead of a linear regression, estimate the Cox PH model for the survival time. Here the coefficients are reversed in sign because we are estimating survival and not death. ``` #COX APPROACH res = coxph(Surv(time, death) ~ female + age, data = SURV) summary(res) ``` ``` ## Call: ## coxph(formula = Surv(time, death) ~ female + age, data = SURV) ## ## n= 6, number of events= 4 ## ## coef exp(coef) se(coef) z Pr(>|z|) ## female 1.5446 4.6860 2.7717 0.557 0.577 ## age -0.9453 0.3886 1.0637 -0.889 0.374 ## ## exp(coef) exp(-coef) lower .95 upper .95 ## female 4.6860 0.2134 0.02049 1071.652 ## age 0.3886 2.5735 0.04831 3.125 ## ## Concordance= 0.65 (se = 0.218 ) ## Rsquare= 0.241 (max possible= 0.76 ) ## Likelihood ratio test= 1.65 on 2 df, p=0.4378 ## Wald test = 1.06 on 2 df, p=0.5899 ## Score (logrank) test = 1.26 on 2 df, p=0.5319 ``` ``` plot(survfit(res)) #Plot the baseline survival function ``` Note that the **exp(coef)** is the hazard ratio. When it is greater than 1, there is an increase in hazard, and when it is less than 1, there is a decrease in the hazard. We can do a test for proportional hazards as follows, and examine the p\-values. ``` cox.zph(res) ``` ``` ## rho chisq p ## female 0.563 1.504 0.220 ## age -0.472 0.743 0.389 ## GLOBAL NA 1.762 0.414 ``` Finally, we are interested in obtaining the baseline hazard function \\(\\lambda\_0(t)\\) which as we know has dropped out of the estimation. So how do we recover it? In fact, without it, where do we even get \\(\\lambda\_i(t)\\) from? We would also like to get the cumulative baseline hazard, i.e., \\(\\Lambda\_0(t) \= \\int\_0^t \\lambda\_0(u) du\\). Sadly, this is a major deficiency of the Cox PH model. However, one may make a distributional assumption about the form of \\(\\lambda\_0(t)\\) and then fit it to maximize the likelihood of survival times, after the coefficients \\(\\beta\\) have been fit already. For example, one function might be \\(\\lambda\_0(t) \= e^{\\alpha t}\\), and it would only need the estimation of \\(\\alpha\\). We can then obtain the estimated survival probabilities over time. 
``` covs <- data.frame(age = 21, female = 0) summary(survfit(res, newdata = covs, type = "aalen")) ```

``` ## Call: survfit(formula = res, newdata = covs, type = "aalen") ## ## time n.risk n.event survival std.err lower 95% CI upper 95% CI ## 1 6 1 0.9475 0.108 7.58e-01 1 ## 7 4 1 0.8672 0.236 5.08e-01 1 ## 10 3 1 0.7000 0.394 2.32e-01 1 ## 13 1 1 0.0184 0.117 7.14e-08 1 ```

The “survival” column gives the survival probability for the various time horizons shown in the first column. For a useful guide, see <https://rpubs.com/daspringate/survival>

To sum up, the Cox PH model estimates the hazard rate function \\(\lambda(t)\\): \\[ \lambda(t) \= \lambda\_0(t) \exp\[\beta^\top x] \\] The “exp(coef)” is the multiplier applied to the baseline hazard rate. If exp(coef)\>1, then an increase in the variable \\(x\\) increases the hazard rate by that factor, and if exp(coef)\<1, then it reduces the hazard rate \\(\lambda(t)\\) by that factor. Note that the hazard rate is NOT the probability of survival, and in fact \\(\lambda(t) \in (0,\infty)\\). The probability of survival over time \\(t\\), if we assume a constant hazard rate \\(\lambda\\), is \\(s(t) \= e^{\-\lambda t}\\). Of course \\(s(t) \in (0,1\)\\). So for example, if the current (assumed constant) hazard rate is \\(\lambda \= 0\.02\\), then the 3\-year survival probability is \\[ s(t) \= e^{\-0\.02 \times 3} \= 0\.9418 \\] If the person is female, then the new hazard rate is \\(\lambda \times 4\.686 \= 0\.09372\\). So the new survival probability is \\[ s(t\=3\) \= e^{\-0\.09372 \times 3} \= 0\.7549 \\] If Age increases by one year, then the new hazard rate will be \\(0\.02 \times 0\.3886 \= 0\.007772\\), and the new survival probability will be \\[ s(t\=3\) \= e^{\-0\.007772 \times 3} \= 0\.977 \\] Note that the hazard rate and the probability of survival move in opposite directions.

11\.17 GLMNET: Lasso and Ridge Regressions
------------------------------------------

The **glmnet** package is from Stanford, and you can get all the details and examples here: [https://web.stanford.edu/\~hastie/glmnet/glmnet\_alpha.html](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html)

The package fits generalized linear models and also penalizes the size of the model, with various standard models as special cases. The objective function for the minimization is \\[ \min\_{\beta} \frac{1}{n}\sum\_{i\=1}^n w\_i L(y\_i,\beta^\top x\_i) \+ \lambda \left\[(1\-\alpha) \frac{1}{2}\\| \beta \\|\_2^2 \+ \alpha \\|\beta \\|\_1\right] \\] where \\(\\|\beta\\|\_1\\) and \\(\\|\beta\\|\_2\\) are the \\(L\_1\\) and \\(L\_2\\) norms of the vector \\(\beta\\). The idea is to take any loss function and penalize it. For example, if the loss function is just the sum of squared residuals \\(y\_i\-\beta^\top x\_i\\), and \\(w\_i\=1, \lambda\=0\\), then we get an ordinary least squares regression model. The function \\(L\\) is usually set to be the log\-likelihood function. If only the \\(L\_1\\) norm is applied, i.e., \\(\alpha\=1\\), then we get the lasso model. If only the \\(L\_2\\) norm is applied, i.e., \\(\alpha\=0\\), then we get a ridge regression. As is obvious from the equation, \\(\lambda\\) is the size of the penalty applied, and increasing this parameter forces a more parsimonious model.
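The role of \\(\alpha\\) is easiest to see on simulated data. The following sketch is not from the text (the data and parameter values are made up): it fits the same simulated problem with \\(\alpha\=1\\) (lasso) and \\(\alpha\=0\\) (ridge) and counts the nonzero coefficients at the cross\-validated \\(\lambda\\). The lasso sets most coefficients exactly to zero, while ridge only shrinks them. The NCAA lasso example from the text follows.

```
# Sketch (not from the text): lasso vs ridge on simulated data, 3 true predictors out of 20
library(glmnet)
set.seed(42)
n = 200; p = 20
X = matrix(rnorm(n*p), n, p)
beta = c(2, -1.5, 1, rep(0, p-3))
y = X %*% beta + rnorm(n)
cv_lasso = cv.glmnet(X, y, alpha = 1)   # L1 penalty
cv_ridge = cv.glmnet(X, y, alpha = 0)   # L2 penalty
# Nonzero coefficients (including the intercept) at lambda.min
print(sum(as.vector(coef(cv_lasso, s = "lambda.min")) != 0))  # small: a sparse model
print(sum(as.vector(coef(cv_ridge, s = "lambda.min")) != 0))  # typically all 21: shrunk, not selected
```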
Here is an example of lasso (\\(\\alpha\=1\\)): ``` suppressMessages(library(glmnet)) ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y = c(rep(1,32),rep(0,32)) x = as.matrix(ncaa[4:14]) res = cv.glmnet(x = x, y = y, family = 'binomial', alpha = 1, type.measure = "auc") ``` ``` ## Warning: Too few (< 10) observations per fold for type.measure='auc' in ## cv.lognet; changed to type.measure='deviance'. Alternatively, use smaller ## value for nfolds ``` ``` plot(res) ``` We may also run glmnet to get coefficients. ``` res = glmnet(x = x, y = y, family = 'binomial', alpha = 1) print(names(res)) ``` ``` ## [1] "a0" "beta" "df" "dim" "lambda" ## [6] "dev.ratio" "nulldev" "npasses" "jerr" "offset" ## [11] "classnames" "call" "nobs" ``` ``` print(res) ``` ``` ## ## Call: glmnet(x = x, y = y, family = "binomial", alpha = 1) ## ## Df %Dev Lambda ## [1,] 0 1.602e-16 0.2615000 ## [2,] 1 3.357e-02 0.2383000 ## [3,] 1 6.172e-02 0.2171000 ## [4,] 1 8.554e-02 0.1978000 ## [5,] 1 1.058e-01 0.1803000 ## [6,] 1 1.231e-01 0.1642000 ## [7,] 1 1.380e-01 0.1496000 ## [8,] 1 1.508e-01 0.1364000 ## [9,] 1 1.618e-01 0.1242000 ## [10,] 2 1.721e-01 0.1132000 ## [11,] 4 1.851e-01 0.1031000 ## [12,] 5 1.990e-01 0.0939800 ## [13,] 4 2.153e-01 0.0856300 ## [14,] 4 2.293e-01 0.0780300 ## [15,] 4 2.415e-01 0.0711000 ## [16,] 5 2.540e-01 0.0647800 ## [17,] 8 2.730e-01 0.0590200 ## [18,] 8 2.994e-01 0.0537800 ## [19,] 8 3.225e-01 0.0490000 ## [20,] 8 3.428e-01 0.0446500 ## [21,] 8 3.608e-01 0.0406800 ## [22,] 8 3.766e-01 0.0370700 ## [23,] 8 3.908e-01 0.0337800 ## [24,] 8 4.033e-01 0.0307800 ## [25,] 8 4.145e-01 0.0280400 ## [26,] 9 4.252e-01 0.0255500 ## [27,] 10 4.356e-01 0.0232800 ## [28,] 10 4.450e-01 0.0212100 ## [29,] 10 4.534e-01 0.0193300 ## [30,] 10 4.609e-01 0.0176100 ## [31,] 10 4.676e-01 0.0160500 ## [32,] 10 4.735e-01 0.0146200 ## [33,] 10 4.789e-01 0.0133200 ## [34,] 10 4.836e-01 0.0121400 ## [35,] 10 4.878e-01 0.0110600 ## [36,] 9 4.912e-01 0.0100800 ## [37,] 9 4.938e-01 0.0091820 ## [38,] 9 4.963e-01 0.0083670 ## [39,] 9 4.984e-01 0.0076230 ## [40,] 9 5.002e-01 0.0069460 ## [41,] 9 5.018e-01 0.0063290 ## [42,] 9 5.032e-01 0.0057670 ## [43,] 9 5.044e-01 0.0052540 ## [44,] 9 5.055e-01 0.0047880 ## [45,] 9 5.064e-01 0.0043620 ## [46,] 9 5.071e-01 0.0039750 ## [47,] 10 5.084e-01 0.0036220 ## [48,] 10 5.095e-01 0.0033000 ## [49,] 10 5.105e-01 0.0030070 ## [50,] 10 5.114e-01 0.0027400 ## [51,] 10 5.121e-01 0.0024960 ## [52,] 10 5.127e-01 0.0022750 ## [53,] 11 5.133e-01 0.0020720 ## [54,] 11 5.138e-01 0.0018880 ## [55,] 11 5.142e-01 0.0017210 ## [56,] 11 5.146e-01 0.0015680 ## [57,] 11 5.149e-01 0.0014280 ## [58,] 11 5.152e-01 0.0013020 ## [59,] 11 5.154e-01 0.0011860 ## [60,] 11 5.156e-01 0.0010810 ## [61,] 11 5.157e-01 0.0009846 ## [62,] 11 5.158e-01 0.0008971 ## [63,] 11 5.160e-01 0.0008174 ## [64,] 11 5.160e-01 0.0007448 ## [65,] 11 5.161e-01 0.0006786 ## [66,] 11 5.162e-01 0.0006183 ## [67,] 11 5.162e-01 0.0005634 ## [68,] 11 5.163e-01 0.0005134 ## [69,] 11 5.163e-01 0.0004678 ## [70,] 11 5.164e-01 0.0004262 ## [71,] 11 5.164e-01 0.0003883 ## [72,] 11 5.164e-01 0.0003538 ## [73,] 11 5.164e-01 0.0003224 ## [74,] 11 5.164e-01 0.0002938 ## [75,] 11 5.165e-01 0.0002677 ## [76,] 11 5.165e-01 0.0002439 ## [77,] 11 5.165e-01 0.0002222 ``` ``` b = coef(res)[,25] #Choose the best case with 8 coefficients print(b) ``` ``` ## (Intercept) PTS REB AST TO ## -17.30807199 0.04224762 0.13304541 0.00000000 -0.13440922 ## A.T STL BLK PF FG ## 0.63059336 0.21867734 0.11635708 0.00000000 17.14864201 ## FT X3P ## 3.00069901 
0.00000000 ``` ``` x1 = c(1,as.numeric(x[18,])) p = 1/(1+exp(-sum(b*x1))) print(p) ``` ``` ## [1] 0.7696481 ``` ### 11\.17\.1 Prediction on test data ``` preds = predict(res, x, type = 'response') print(dim(preds)) ``` ``` ## [1] 64 77 ``` ``` preds = preds[,25] #Take the 25th case print(preds) ``` ``` ## [1] 0.97443940 0.90157397 0.87711437 0.89911656 0.95684199 0.82949042 ## [7] 0.53186622 0.83745812 0.45979765 0.58355756 0.78726183 0.55050365 ## [13] 0.30633472 0.93605170 0.70646742 0.85811465 0.42394178 0.76964806 ## [19] 0.40172414 0.66137964 0.69620096 0.61569705 0.88800581 0.92834645 ## [25] 0.82719624 0.17209046 0.66881541 0.84149477 0.58937886 0.64674446 ## [31] 0.79368965 0.51186217 0.58500925 0.61275721 0.17532362 0.47406867 ## [37] 0.24314471 0.11843924 0.26787937 0.24296988 0.21129918 0.05041436 ## [43] 0.30109650 0.14989973 0.17976216 0.57119150 0.05514704 0.46220128 ## [49] 0.63788393 0.32605605 0.35544396 0.12647374 0.61772958 0.63883954 ## [55] 0.02306762 0.21285032 0.36455131 0.53953727 0.18563868 0.23598354 ## [61] 0.11821886 0.04258418 0.19603015 0.24630145 ``` ``` print(glmnet:::auc(y, preds)) ``` ``` ## [1] 0.9072266 ``` ``` print(table(y,round(preds,0))) #rounding needed to make 0,1 ``` ``` ## ## y 0 1 ## 0 25 7 ## 1 5 27 ``` ### 11\.17\.1 Prediction on test data ``` preds = predict(res, x, type = 'response') print(dim(preds)) ``` ``` ## [1] 64 77 ``` ``` preds = preds[,25] #Take the 25th case print(preds) ``` ``` ## [1] 0.97443940 0.90157397 0.87711437 0.89911656 0.95684199 0.82949042 ## [7] 0.53186622 0.83745812 0.45979765 0.58355756 0.78726183 0.55050365 ## [13] 0.30633472 0.93605170 0.70646742 0.85811465 0.42394178 0.76964806 ## [19] 0.40172414 0.66137964 0.69620096 0.61569705 0.88800581 0.92834645 ## [25] 0.82719624 0.17209046 0.66881541 0.84149477 0.58937886 0.64674446 ## [31] 0.79368965 0.51186217 0.58500925 0.61275721 0.17532362 0.47406867 ## [37] 0.24314471 0.11843924 0.26787937 0.24296988 0.21129918 0.05041436 ## [43] 0.30109650 0.14989973 0.17976216 0.57119150 0.05514704 0.46220128 ## [49] 0.63788393 0.32605605 0.35544396 0.12647374 0.61772958 0.63883954 ## [55] 0.02306762 0.21285032 0.36455131 0.53953727 0.18563868 0.23598354 ## [61] 0.11821886 0.04258418 0.19603015 0.24630145 ``` ``` print(glmnet:::auc(y, preds)) ``` ``` ## [1] 0.9072266 ``` ``` print(table(y,round(preds,0))) #rounding needed to make 0,1 ``` ``` ## ## y 0 1 ## 0 25 7 ## 1 5 27 ``` 11\.18 ROC Curves ----------------- ROC stands for Receiver Operating Characteristic. The acronym comes from signal theory, where the users are interested in the number of true positive signals that are identified. The idea is simple, and best explained with an example. Let’s say you have an algorithm that detects customers probability \\(p \\in (0,1\)\\) of buying a product. Take a tagged set of training data and sort the customers by this probability in a line with the highest propensity to buy on the left and moving to the right the probabilty declines monotonically. (Tagged means you know whether they bought the product or not.) Now, starting from the left, plot a line that jumps vertically by a unit if the customer buys the product as you move across else remains flat. If the algorithm is a good one, the line will quickly move up at first and then flatten out. Let’s take the train and test data here and plot the ROC curve by writing our own code. We can do the same with the **pROC** package. Here is the code. 
``` suppressMessages(library(pROC)) ```

``` ## Warning: package 'pROC' was built under R version 3.3.2 ```

``` res = roc(response=y,predictor=preds) print(res) ```

``` ## ## Call: ## roc.default(response = y, predictor = preds) ## ## Data: preds in 32 controls (y 0) < 32 cases (y 1). ## Area under the curve: 0.9072 ```

``` plot(res); grid() ```

We see that “specificity” is the true negative rate, while the true positive rate is labeled “sensitivity” and is also known as “recall”. The AUC, or “area under the curve”, is the area under the ROC curve itself, expressed as a fraction of the unit square: a perfect classifier has an AUC of 1 and random guessing gives 0\.5\. (The related Gini coefficient is the area between the curve and the diagonal divided by the area of the triangle above the diagonal, which equals \\(2 \times AUC \- 1\\).) The AUC is also reported here and is the same number as obtained when we fitted the model using the **glmnet** function before. For nice graphics that explain all these measures and more, see <https://en.wikipedia.org/wiki/Precision_and_recall>

11\.19 Glmnet Cox Models
------------------------

As we did before, we may fit a Cox PH model using GLMNET, with the additional feature that we include a penalty when we maximize the likelihood function.

``` SURV = read.table("DSTMAA_data/survival_data.txt",header=TRUE) print(SURV) ```

``` ## id time death age female ## 1 1 1 1 20 0 ## 2 2 4 0 21 1 ## 3 3 7 1 19 0 ## 4 4 10 1 22 1 ## 5 5 12 0 20 0 ## 6 6 13 1 24 1 ```

``` names(SURV)[3] = "status" y = as.matrix(SURV[,2:3]) x = as.matrix(SURV[,4:5]) res = glmnet(x, y, family = "cox") print(res) ```

``` ## ## Call: glmnet(x = x, y = y, family = "cox") ## ## Df %Dev Lambda ## [1,] 0 0.00000 0.331700 ## [2,] 1 0.02347 0.302200 ## [3,] 1 0.04337 0.275400 ## [4,] 1 0.06027 0.250900 ## [5,] 1 0.07466 0.228600 ## [6,] 1 0.08690 0.208300 ## [7,] 1 0.09734 0.189800 ## [8,] 1 0.10620 0.172900 ## [9,] 1 0.11380 0.157600 ## [10,] 1 0.12020 0.143600 ## [11,] 1 0.12570 0.130800 ## [12,] 1 0.13040 0.119200 ## [13,] 1 0.13430 0.108600 ## [14,] 1 0.13770 0.098970 ## [15,] 1 0.14050 0.090180 ## [16,] 1 0.14300 0.082170 ## [17,] 1 0.14500 0.074870 ## [18,] 1 0.14670 0.068210 ## [19,] 1 0.14820 0.062150 ## [20,] 1 0.14940 0.056630 ## [21,] 1 0.15040 0.051600 ## [22,] 1 0.15130 0.047020 ## [23,] 1 0.15200 0.042840 ## [24,] 1 0.15260 0.039040 ## [25,] 1 0.15310 0.035570 ## [26,] 2 0.15930 0.032410 ## [27,] 2 0.16480 0.029530 ## [28,] 2 0.16930 0.026910 ## [29,] 2 0.17320 0.024520 ## [30,] 2 0.17640 0.022340 ## [31,] 2 0.17910 0.020350 ## [32,] 2 0.18140 0.018540 ## [33,] 2 0.18330 0.016900 ## [34,] 2 0.18490 0.015400 ## [35,] 2 0.18630 0.014030 ## [36,] 2 0.18740 0.012780 ## [37,] 2 0.18830 0.011650 ## [38,] 2 0.18910 0.010610 ## [39,] 2 0.18980 0.009669 ## [40,] 2 0.19030 0.008810 ## [41,] 2 0.19080 0.008028 ## [42,] 2 0.19120 0.007314 ## [43,] 2 0.19150 0.006665 ## [44,] 2 0.19180 0.006073 ## [45,] 2 0.19200 0.005533 ## [46,] 2 0.19220 0.005042 ## [47,] 2 0.19240 0.004594 ## [48,] 2 0.19250 0.004186 ## [49,] 2 0.19260 0.003814 ## [50,] 2 0.19270 0.003475 ## [51,] 2 0.19280 0.003166 ## [52,] 2 0.19280 0.002885 ## [53,] 2 0.19290 0.002629 ## [54,] 2 0.19290 0.002395 ## [55,] 2 0.19300 0.002182 ## [56,] 2 0.19300 0.001988 ```

``` plot(res) ```

``` print(coef(res)) ```

``` ## 2 x 56 sparse Matrix of class "dgCMatrix" ```

``` ## [[ suppressing 56 column names 's0', 's1', 's2' ... ]] ```

``` ## ## age . -0.03232796 -0.06240328 -0.09044971 -0.1166396 -0.1411157 ## female . . . . . . ## ## age -0.1639991 -0.185342 -0.2053471 -0.2240373 -0.2414872 -0.2577658 ## female . . . . . . ## ## age -0.272938 -0.2870651 -0.3002053 -0.3124148 -0.3237473 -0.3342545 ## female . . . . . .
## ## age -0.3440275 -0.3530249 -0.3613422 -0.3690231 -0.3761098 -0.3826423 ## female . . . . . . ## ## age -0.3886591 -0.4300447 -0.4704889 -0.5078614 -0.5424838 -0.5745449 ## female . 0.1232263 0.2429576 0.3522138 0.4522592 0.5439278 ## ## age -0.6042077 -0.6316057 -0.6569988 -0.6804703 -0.7022042 -0.7222141 ## female 0.6279337 0.7048655 0.7754539 0.8403575 0.9000510 0.9546989 ## ## age -0.7407295 -0.7577467 -0.773467 -0.7878944 -0.8012225 -0.8133071 ## female 1.0049765 1.0509715 1.093264 1.1319284 1.1675026 1.1999905 ## ## age -0.8246563 -0.8349496 -0.8442393 -0.8528942 -0.860838 -0.8680639 ## female 1.2297716 1.2570025 1.2817654 1.3045389 1.325398 1.3443458 ## ## age -0.874736 -0.8808466 -0.8863844 -0.8915045 -0.8961894 -0.9004172 ## female 1.361801 1.3777603 1.3922138 1.4055495 1.4177359 1.4287319 ## ## age -0.9043351 -0.9079181 ## female 1.4389022 1.4481934 ```

With cross\-validation, we get the usual plot for the fit.

``` cvfit = cv.glmnet(x, y, family = "cox") plot(cvfit) ```

``` print(cvfit$lambda.min) ```

``` ## [1] 0.0989681 ```

``` print(coef(cvfit,s=cvfit$lambda.min)) ```

``` ## 2 x 1 sparse Matrix of class "dgCMatrix" ## 1 ## age -0.2870651 ## female . ```

Note that the signs of the coefficients are the same as in the earlier **coxph** fit: the age coefficient is negative and the female coefficient, where it enters, is positive. In other words, in this tiny sample the estimated hazard is lower (and survival therefore higher) with age, and higher (survival lower) for females.
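To use the penalized Cox fit for prediction, we can compute each individual's hazard relative to the (unspecified) baseline. The following is a sketch, not from the text, evaluated at the cross\-validated \\(\lambda\\) on the same six observations.

```
# Sketch (not from the text): relative hazards from the penalized Cox fit at lambda.min
lp = predict(cvfit, newx = x, s = "lambda.min", type = "link")  # linear predictor beta'x
print(exp(lp))   # exp(beta'x): each person's hazard relative to the baseline
```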
Chapter 12 In the Same Boat: Cluster Analysis and Prediction Trees
==================================================================

12\.1 Overview
--------------

There are many aspects of data analysis that call for grouping individuals, firms, projects, etc. These fall under the rubric of what may be termed **classification** analysis. Cluster analysis comprises a group of techniques that uses distance metrics to bunch data into categories. There are two broad approaches to cluster analysis:

1. Agglomerative or Hierarchical or Bottom\-up: In this case we begin with all entities in the analysis being given their own cluster, so that we start with \\(n\\) clusters. Then, entities are grouped into clusters based on a given distance metric between each pair of entities. In this way a **hierarchy** of clusters is built up and the researcher can choose which grouping is preferred.
2. Partitioning or Top\-down: In this approach, the entire set of \\(n\\) entities is assumed to be a cluster. Then it is progressively partitioned into smaller and smaller clusters.

We will employ both clustering approaches and examine their properties with various data sets as examples.

12\.2 k\-MEANS
--------------

This approach is bottom\-up. If we have a sample of \\(n\\) observations to be allocated to \\(k\\) clusters, then we can initialize the clusters in many ways. One approach is to assume that each observation is a cluster unto itself. We proceed by taking each observation and allocating it to the nearest cluster using a distance metric. At the outset, we would simply allocate an observation to its nearest neighbor.

How is nearness measured? We need a distance metric, and one common one is Euclidean distance. Suppose we have two observations \\(x\_i\\) and \\(x\_j\\). These may be represented by a vector of attributes. Suppose our observations are people, and the attributes are {height, weight, IQ} \= \\(x\_i \= \\{h\_i, w\_i, I\_i\\}\\) for the \\(i\\)\-th individual. Then the Euclidean distance between two individuals \\(i\\) and \\(j\\) is \\[ d\_{ij} \= \sqrt{(h\_i\-h\_j)^2 \+ (w\_i\-w\_j)^2 \+ (I\_i \- I\_j)^2} \\] It is usually computed using normalized variables, so that no single variable of large size dominates the distance calculation. (Normalization is the process of subtracting the mean from each observation and then dividing by the standard deviation.) In contrast, the “Manhattan” distance is given by (when is this more appropriate?) \\[ d\_{ij} \= \|h\_i\-h\_j\| \+ \|w\_i\-w\_j\| \+ \|I\_i \- I\_j\| \\] We may use other metrics such as the cosine distance, or the Mahalanobis distance. A matrix of the \\(n \times n\\) values of all the \\(d\_{ij}\\)s is called the **distance matrix**.

Using this distance metric we assign nodes to clusters or attach them to nearest neighbors. After a few iterations, clusters are no longer made up of singleton observations, and the number of clusters reaches \\(k\\), the preset number required; all nodes are then assigned to one of these \\(k\\) clusters. As we examine each observation we then assign it (or re\-assign it) to the nearest cluster, where the distance is measured from the observation to some representative node of the cluster. Some common choices of the representative node in a cluster are:

1. Centroid of the cluster. This is the mean of the observations in the cluster for each attribute. The centroid of the two observations above is the average vector \\(\\{(h\_i\+h\_j)/2, (w\_i\+w\_j)/2, (I\_i \+ I\_j)/2\\}\\).
This is often called the **center** of the cluster. If there are more nodes then the centroid is the average of the same coordinate for all nodes. 2. Closest member of the cluster. 3. Furthest member of the cluster. The algorithm converges when no re\-assignments of observations to clusters occurs. Note that \\(k\\)\-means is a random algorithm, and may not always return the same clusters every time the algorithm is run. Also, one needs to specify the number of clusters to begin with and there may be no a\-priori way in which to ascertain the correct number. Hence, trial and error and examination of the results is called for. Also, the algorithm aims to have balanced clusters, but this may not always be appropriate. In R, we may construct the distance matrix using the **dist** function. Using the NCAA data we are already familiar with, we have: ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) print(names(ncaa)) ``` ``` ## [1] "No" "NAME" "GMS" "PTS" "REB" "AST" "TO" "A.T" "STL" "BLK" ## [11] "PF" "FG" "FT" "X3P" ``` ``` d = dist(ncaa[,3:14], method="euclidian") print(head(d)) ``` ``` ## [1] 12.842301 10.354557 7.996641 9.588546 15.892854 20.036546 ``` Examining this matrix will show that it contains \\(n(n\-1\)/2\\) elements, i.e., the number of pairs of nodes. Only the lower triangular matrix of \\(d\\) is populated. Clustering takes many observations with their characteristics and then allocates them into buckets or clusters based on their similarity. In finance, we may use cluster analysis to determine groups of similar firms. Unlike regression analysis, cluster analysis uses only the right\-hand side variables, and there is no dependent variable required. We group observations purely on their overall similarity across characteristics. Hence, it is closely linked to the notion of **communities** that we studied in network analysis, though that concept lives primarily in the domain of networks. ### 12\.2\.1 Example: Randomly generated data in k\-means Here we use the example from the **kmeans** function to see how the clusters appear. This function is standard issue, i.e., it comes with the **stats** package, which is included in the base R distribution and does not need to be separately installed. The data is randomly generated but has two bunches of items with different means, so we should be easily able to see two separate clusters. You will need the **graphics** package which is also in the base installation. 
``` # a 2-dimensional example x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2), matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2)) colnames(x) <- c("x", "y") (cl <- kmeans(x, 2)) ``` ``` ## K-means clustering with 2 clusters of sizes 49, 51 ## ## Cluster means: ## x y ## 1 1.04959200 1.05894643 ## 2 -0.01334206 0.02180248 ## ## Clustering vector: ## [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 ## [36] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## [71] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ## ## Within cluster sum of squares by cluster: ## [1] 11.059361 9.740516 ## (between_SS / total_SS = 72.6 %) ## ## Available components: ## ## [1] "cluster" "centers" "totss" "withinss" ## [5] "tot.withinss" "betweenss" "size" "iter" ## [9] "ifault" ``` ``` #PLOTTING CLUSTERS print(names(cl)) ``` ``` ## [1] "cluster" "centers" "totss" "withinss" ## [5] "tot.withinss" "betweenss" "size" "iter" ## [9] "ifault" ``` ``` plot(x, col = cl$cluster) points(cl$centers, col = 1:2, pch = 8, cex=4) ``` ``` #REDO ANALYSIS WITH 5 CLUSTERS ## random starts do help here with too many clusters (cl <- kmeans(x, 5, nstart = 25)) ``` ``` ## K-means clustering with 5 clusters of sizes 24, 16, 23, 27, 10 ## ## Cluster means: ## x y ## 1 0.1426836 0.3005998 ## 2 1.3211293 0.8482919 ## 3 0.7201982 0.9970443 ## 4 -0.1520315 -0.2260174 ## 5 1.3727382 1.5383686 ## ## Clustering vector: ## [1] 1 1 1 4 4 1 4 1 4 4 4 4 1 1 4 1 1 1 4 4 4 1 1 4 4 1 1 4 4 1 4 4 4 4 4 ## [36] 4 1 1 4 1 4 4 4 1 4 1 1 1 1 4 1 2 5 5 3 3 3 3 2 3 5 3 3 3 3 3 3 2 2 2 ## [71] 5 2 2 3 2 3 5 2 2 3 5 2 5 5 3 3 3 3 3 3 3 2 3 3 2 2 5 2 2 5 ## ## Within cluster sum of squares by cluster: ## [1] 2.6542258 1.2278786 1.2401518 2.4590282 0.7752739 ## (between_SS / total_SS = 89.0 %) ## ## Available components: ## ## [1] "cluster" "centers" "totss" "withinss" ## [5] "tot.withinss" "betweenss" "size" "iter" ## [9] "ifault" ``` ``` plot(x, col = cl$cluster) points(cl$centers, col = 1:5, pch = 8) ``` ### 12\.2\.2 Example: NCAA teams We revisit our NCAA data set, and form clusters there. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) print(names(ncaa)) ``` ``` ## [1] "No" "NAME" "GMS" "PTS" "REB" "AST" "TO" "A.T" "STL" "BLK" ## [11] "PF" "FG" "FT" "X3P" ``` ``` fit = kmeans(ncaa[,3:14],4) print(fit$size) ``` ``` ## [1] 6 27 14 17 ``` ``` print(fit$centers) ``` ``` ## GMS PTS REB AST TO A.T STL ## 1 1.000000 50.33333 28.83333 10.333333 12.50000 0.9000000 6.666667 ## 2 1.777778 68.39259 33.17407 13.596296 12.83704 1.1107407 6.822222 ## 3 3.357143 80.12857 34.15714 16.357143 13.70714 1.2357143 6.821429 ## 4 1.529412 60.24118 38.76471 9.282353 16.45882 0.5817647 6.882353 ## BLK PF FG FT X3P ## 1 2.166667 19.33333 0.3835000 0.6565000 0.2696667 ## 2 2.918519 18.68519 0.4256296 0.7071852 0.3263704 ## 3 2.514286 18.48571 0.4837143 0.7042143 0.4035714 ## 4 2.882353 18.51176 0.3838824 0.6683529 0.3091765 ``` ``` #Since there are more than two attributes of each observation in the data, #we picked two of them {AST, PTS} and plotted the clusters against those. idx = c(4,6) plot(ncaa[,idx],col=fit$cluster) ``` 12\.3 Hierarchical Clustering ----------------------------- Hierarchical clustering is both, a top\-down (divisive) approach and bottom\-up (agglomerative) approach. At the top level there is just one cluster. A level below, this may be broken down into a few clusters, which are then further broken down into more sub\-clusters a level below, and so on. 
This clustering approach is computationally expensive, and the divisive approach is exponentially expensive in \\(n\\), the number of entities being clustered. In fact, the algorithm is \\({\\cal O}(2^n)\\). The function for clustering is **hclust** and is included in the **stats** package in the base R distribution. We begin by first computing the distance matrix. Then we call the **hclust** function and the **plot** function applied to object **fit** gives what is known as a **dendrogram** plot, showing the cluster hierarchy. We may pick clusters at any level. In this case, we chose a **cut** level such that we get four clusters, and the **rect.hclust** function allows us to superimpose boxes on the clusters so we can see the grouping more clearly. The result is plotted in the Figure below. ``` d = dist(ncaa[,3:14], method="euclidian") fit = hclust(d, method="ward") ``` ``` ## The "ward" method has been renamed to "ward.D"; note new "ward.D2" ``` ``` names(fit) ``` ``` ## [1] "merge" "height" "order" "labels" "method" ## [6] "call" "dist.method" ``` ``` plot(fit,main="NCAA Teams") groups = cutree(fit, k=3) rect.hclust(fit, k=3, border="blue") ``` We can also visualize the clusters loaded on to the top two principal components as follows, using the **clusplot** function that resides in package **cluster**. The result is plotted in the Figure below. ``` print(groups) ``` ``` ## [1] 1 1 1 1 1 2 1 1 2 2 1 2 2 1 1 1 2 2 2 2 2 2 1 1 2 2 1 2 2 2 2 2 1 2 2 ## [36] 2 2 3 1 2 3 3 3 2 2 2 3 2 1 2 2 3 1 2 3 2 2 2 2 3 3 3 3 2 ``` ``` library(cluster) clusplot(ncaa[,3:14],groups,color=TRUE,shade=TRUE,labels=2,lines=0) ``` ``` #Using the correlation matrix as a proxy for distance x = t(as.matrix(ncaa[,3:14])) d = as.dist((1-cor(x))/2) fit = hclust(d, method="ward") ``` ``` ## The "ward" method has been renamed to "ward.D"; note new "ward.D2" ``` ``` plot(fit,main="NCAA Teams") groups = cutree(fit, k=3) rect.hclust(fit, k=3, border="red") ``` ``` print(groups) ``` ``` ## [1] 1 1 1 1 1 1 1 1 2 1 1 2 2 1 1 1 1 2 1 1 2 1 1 1 2 2 1 2 1 2 2 2 1 1 1 ## [36] 2 3 3 1 1 3 3 3 3 2 1 3 2 1 2 2 2 1 1 2 2 2 2 3 2 1 3 1 2 ``` ``` library(cluster) clusplot(ncaa[,3:14],groups,color=TRUE,shade=TRUE,labels=2,lines=0) ``` 12\.4 k Nearest Neighbors ------------------------- This is one of the simplest algorithms for classification and grouping. Simply define a distance metric over a set of observations, each with \\(M\\) characteristics, i.e., \\(x\_1, x\_2,..., x\_M\\). Compute the pairwise distance between each pair of observations, using any of the metrics above. Next, fix \\(k\\), the number of nearest neighbors in the population to be considered. Finally, assign the category based on which one has the majority of nearest neighbors to the case we are trying to classify. We see an example in R using the **iris** data set that we examined before. 
``` library(class) data(iris) print(head(iris)) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ``` #SAMPLE A SUBSET idx = seq(1,length(iris[,1])) train_idx = sample(idx,100) test_idx = setdiff(idx,train_idx) x_train = iris[train_idx,1:4] x_test = iris[test_idx,1:4] y_train = iris[train_idx,5] y_test = iris[test_idx,5] #RUN knn res = knn(x_train, x_test, y_train, k = 3, prob = FALSE, use.all = TRUE) res ``` ``` ## [1] setosa setosa setosa setosa setosa setosa ## [7] setosa setosa setosa setosa setosa setosa ## [13] setosa setosa setosa versicolor versicolor versicolor ## [19] versicolor versicolor versicolor versicolor versicolor versicolor ## [25] versicolor versicolor versicolor versicolor versicolor versicolor ## [31] virginica versicolor versicolor versicolor versicolor virginica ## [37] virginica virginica virginica virginica virginica virginica ## [43] virginica virginica virginica virginica virginica virginica ## [49] virginica virginica ## Levels: setosa versicolor virginica ``` ``` table(res,y_test) ``` ``` ## y_test ## res setosa versicolor virginica ## setosa 15 0 0 ## versicolor 0 19 0 ## virginica 0 1 15 ``` 12\.5 Prediction Trees ---------------------- Prediction trees are a natural outcome of recursive partitioning of the data. Hence, they are a particular form of clustering at different levels. Usual cluster analysis results in a **flat** partition, but prediction trees develop a multi\-level cluster of trees. The term used here is CART, which stands for classification analysis and regression trees. But prediction trees are different from vanilla clustering in an important way – there is a dependent variable, i.e., a category or a range of values (e.g., a score) that one is attempting to predict. Prediction trees are of two types: (a) Classification trees, where the leaves of the trees are different categories of discrete outcomes. and (b) Regression trees, where the leaves are continuous outcomes. We may think of the former as a generalized form of limited dependent variables, and the latter as a generalized form of regression analysis. To set ideas, suppose we want to predict the credit score of an individual using age, income, and education as explanatory variables. Assume that income is the best explanatory variable of the three. Then, at the top of the tree, there will be income as the branching variable, i.e., if income is less than some threshold, then we go down the left branch of the tree, else we go down the right. At the next level, it may be that we use education to make the next bifurcation, and then at the third level we use age. A variable may even be repeatedly used at more than one level. This leads us to several leaves at the bottom of the tree that contain the average values of the credit scores that may be reached. For example if we get an individual of young age, low income, and no education, it is very likely that this path down the tree will lead to a low credit score on average. Instead of credit score (an example of a regression tree), consider credit ratings of companies (an example of a classification tree). These ideas will become clearer once we present some examples. ### 12\.5\.1 Fitting the tree Recursive partitioning is the main algorithmic construct behind prediction trees. 
We take the data and, using a single explanatory variable, try to bifurcate it into two categories such that the split yields better **information** than the unsplit sample. For example, suppose we are trying to predict who will make donations and who will not using a single variable – income. If we have a sample of people and have not yet analyzed their incomes, we only have the raw frequency \\(p\\) of how many people made donations, i.e., a number between 0 and 1\. The **information** of the predicted likelihood \\(p\\) is inversely related to the sum of squared errors (SSE) between this value \\(p\\) and the 0 and 1 values of the observations.

\\\[ SSE\_1 \= \\sum\_{i\=1}^n (x\_i \- p)^2 \\]

where \\(x\_i \= \\{0,1\\}\\), depending on whether person \\(i\\) made a donation or not. Now, if we bifurcate the sample based on income, say to the left we have people with income less than \\(K\\), and to the right, people with incomes greater than or equal to \\(K\\). If we find that the proportion of people on the left making donations is \\(p\_L \< p\\) and on the right is \\(p\_R \> p\\), our new information is:

\\\[ SSE\_2 \= \\sum\_{i, Income \< K} (x\_i \- p\_L)^2 \+ \\sum\_{i, Income \\geq K} (x\_i \- p\_R)^2 \\]

By choosing \\(K\\) correctly, our recursive partitioning algorithm will maximize the gain, i.e., \\(\\delta \= (SSE\_1 \- SSE\_2\)\\). We stop branching further when, at a given tree level, \\(\\delta\\) is less than a pre\-specified threshold. We note that as \\(n\\) gets large, the computation of binary splits on any variable is expensive, i.e., of order \\({\\cal O}(2^n)\\). But as we go down the tree and use smaller subsamples, the algorithm becomes faster and faster. In general, this is quite an efficient algorithm to implement.

Prediction trees are motivated by decision trees. They also help make sense of complicated regression scenarios with interactions across many variables, where it becomes difficult to interpret the meaning and importance of explanatory variables in a prediction setting. By proceeding in a hierarchical manner down a tree, the decision analysis becomes transparent, and can also be used in practical settings to make decisions.

12\.6 Classification Trees
--------------------------

To demonstrate this, let’s use a data set that is already in R. We use the **kyphosis** data set, which contains data on children who have had spinal surgery. The model we wish to fit is to predict whether a child has a post\-operative deformity or not (variable: Kyphosis \= {absent, present}). The variables we use are Age in months, the number of vertebrae operated on (Number), and the beginning of the range of vertebrae operated on (Start). The package used is called **rpart**, which stands for **recursive partitioning**.
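Before fitting a full tree on the kyphosis data with **rpart** below, the following small sketch illustrates the single\-split search described above on simulated donation data. The variables (income, donated) and the grid of candidate thresholds are hypothetical; the point is only to make the gain \\(\\delta \= SSE\_1 \- SSE\_2\\) concrete.

```
# Sketch of one recursive-partitioning step: choose the threshold K that
# maximizes the gain delta = SSE_1 - SSE_2 (simulated donation data)
set.seed(1)
income  = runif(200, 10, 100)
donated = rbinom(200, 1, prob = ifelse(income > 60, 0.8, 0.2))

p    = mean(donated)
SSE1 = sum((donated - p)^2)        # information before any split

gain = function(K) {
  L = donated[income <  K]
  R = donated[income >= K]
  SSE2 = sum((L - mean(L))^2) + sum((R - mean(R))^2)
  SSE1 - SSE2
}

# Search candidate thresholds over interior quantiles of income
Kgrid  = quantile(income, probs = seq(0.05, 0.95, by = 0.05))
deltas = sapply(Kgrid, gain)
print(Kgrid[which.max(deltas)])    # best split point
print(max(deltas))                 # gain delta at that split
```

The best threshold lands near the point where the simulated donation propensity changes, which is exactly the kind of split a tree algorithm favors at its root.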
``` library(rpart) data(kyphosis) head(kyphosis) ``` ``` ## Kyphosis Age Number Start ## 1 absent 71 3 5 ## 2 absent 158 3 14 ## 3 present 128 4 5 ## 4 absent 2 5 1 ## 5 absent 1 4 15 ## 6 absent 1 2 16 ``` ``` fit = rpart(Kyphosis~Age+Number+Start, method="class", data=kyphosis) printcp(fit) ``` ``` ## ## Classification tree: ## rpart(formula = Kyphosis ~ Age + Number + Start, data = kyphosis, ## method = "class") ## ## Variables actually used in tree construction: ## [1] Age Start ## ## Root node error: 17/81 = 0.20988 ## ## n= 81 ## ## CP nsplit rel error xerror xstd ## 1 0.176471 0 1.00000 1.0000 0.21559 ## 2 0.019608 1 0.82353 1.2353 0.23200 ## 3 0.010000 4 0.76471 1.2353 0.23200 ``` ``` summary(kyphosis) ``` ``` ## Kyphosis Age Number Start ## absent :64 Min. : 1.00 Min. : 2.000 Min. : 1.00 ## present:17 1st Qu.: 26.00 1st Qu.: 3.000 1st Qu.: 9.00 ## Median : 87.00 Median : 4.000 Median :13.00 ## Mean : 83.65 Mean : 4.049 Mean :11.49 ## 3rd Qu.:130.00 3rd Qu.: 5.000 3rd Qu.:16.00 ## Max. :206.00 Max. :10.000 Max. :18.00 ``` ``` summary(fit) ``` ``` ## Call: ## rpart(formula = Kyphosis ~ Age + Number + Start, data = kyphosis, ## method = "class") ## n= 81 ## ## CP nsplit rel error xerror xstd ## 1 0.17647059 0 1.0000000 1.000000 0.2155872 ## 2 0.01960784 1 0.8235294 1.235294 0.2320031 ## 3 0.01000000 4 0.7647059 1.235294 0.2320031 ## ## Variable importance ## Start Age Number ## 64 24 12 ## ## Node number 1: 81 observations, complexity param=0.1764706 ## predicted class=absent expected loss=0.2098765 P(node) =1 ## class counts: 64 17 ## probabilities: 0.790 0.210 ## left son=2 (62 obs) right son=3 (19 obs) ## Primary splits: ## Start < 8.5 to the right, improve=6.762330, (0 missing) ## Number < 5.5 to the left, improve=2.866795, (0 missing) ## Age < 39.5 to the left, improve=2.250212, (0 missing) ## Surrogate splits: ## Number < 6.5 to the left, agree=0.802, adj=0.158, (0 split) ## ## Node number 2: 62 observations, complexity param=0.01960784 ## predicted class=absent expected loss=0.09677419 P(node) =0.7654321 ## class counts: 56 6 ## probabilities: 0.903 0.097 ## left son=4 (29 obs) right son=5 (33 obs) ## Primary splits: ## Start < 14.5 to the right, improve=1.0205280, (0 missing) ## Age < 55 to the left, improve=0.6848635, (0 missing) ## Number < 4.5 to the left, improve=0.2975332, (0 missing) ## Surrogate splits: ## Number < 3.5 to the left, agree=0.645, adj=0.241, (0 split) ## Age < 16 to the left, agree=0.597, adj=0.138, (0 split) ## ## Node number 3: 19 observations ## predicted class=present expected loss=0.4210526 P(node) =0.2345679 ## class counts: 8 11 ## probabilities: 0.421 0.579 ## ## Node number 4: 29 observations ## predicted class=absent expected loss=0 P(node) =0.3580247 ## class counts: 29 0 ## probabilities: 1.000 0.000 ## ## Node number 5: 33 observations, complexity param=0.01960784 ## predicted class=absent expected loss=0.1818182 P(node) =0.4074074 ## class counts: 27 6 ## probabilities: 0.818 0.182 ## left son=10 (12 obs) right son=11 (21 obs) ## Primary splits: ## Age < 55 to the left, improve=1.2467530, (0 missing) ## Start < 12.5 to the right, improve=0.2887701, (0 missing) ## Number < 3.5 to the right, improve=0.1753247, (0 missing) ## Surrogate splits: ## Start < 9.5 to the left, agree=0.758, adj=0.333, (0 split) ## Number < 5.5 to the right, agree=0.697, adj=0.167, (0 split) ## ## Node number 10: 12 observations ## predicted class=absent expected loss=0 P(node) =0.1481481 ## class counts: 12 0 ## probabilities: 1.000 0.000 ## ## Node number 11: 21 
observations, complexity param=0.01960784
## predicted class=absent expected loss=0.2857143 P(node) =0.2592593
## class counts: 15 6
## probabilities: 0.714 0.286
## left son=22 (14 obs) right son=23 (7 obs)
## Primary splits:
## Age < 111 to the right, improve=1.71428600, (0 missing)
## Start < 12.5 to the right, improve=0.79365080, (0 missing)
## Number < 3.5 to the right, improve=0.07142857, (0 missing)
##
## Node number 22: 14 observations
## predicted class=absent expected loss=0.1428571 P(node) =0.1728395
## class counts: 12 2
## probabilities: 0.857 0.143
##
## Node number 23: 7 observations
## predicted class=present expected loss=0.4285714 P(node) =0.08641975
## class counts: 3 4
## probabilities: 0.429 0.571
```

We can plot the tree as well using the **plot** command. The dendrogram\-like tree shows the allocation of the \\(n\=81\\) cases to various branches of the tree.

```
plot(fit, uniform=TRUE)
text(fit, use.n=TRUE, all=TRUE, cex=0.8)
```

12\.7 C4\.5 Classifier
----------------------

This classifier also follows recursive partitioning, as in the previous case, but instead of minimizing the sum of squared errors between the sample data \\(x\\) and the true value \\(p\\) at each level, here the goal is to minimize entropy, which improves the information gain. The natural (base \\(e\\)) entropy \\(H\\) of the data \\(x\\) is defined as

\\\[ H \= \-\\sum\_x f(x) \\cdot \\ln f(x) \\]

where \\(f(x)\\) is the probability density of \\(x\\). This is intuitive because after the optimal split in recursing down the tree, the distribution of \\(x\\) becomes narrower, lowering entropy. This measure is also often known as “differential entropy.” To see this, let’s do a quick example. We compute entropy for two normal distributions of varying spread (standard deviation).

```
dx = 0.001
x = seq(-5,5,dx)
H2 = -sum(dnorm(x,sd=2)*log(dnorm(x,sd=2))*dx)
print(H2)
```

```
## [1] 2.042076
```

```
H3 = -sum(dnorm(x,sd=3)*log(dnorm(x,sd=3))*dx)
print(H3)
```

```
## [1] 2.111239
```

Therefore, we see that entropy increases as the normal distribution becomes wider.

```
library(RWeka)
data(iris)
print(head(iris))
res = J48(Species~.,data=iris)
print(res)
summary(res)
```

12\.8 Regression Trees
----------------------

We move from classification trees (discrete outcomes) to regression trees (scored or continuous outcomes). Again, we use an example that already exists in R, i.e., the car data in the **cu.summary** data frame (which ships with the **rpart** package). Let’s load it up.
``` data(cu.summary) print(names(cu.summary)) ``` ``` ## [1] "Price" "Country" "Reliability" "Mileage" "Type" ``` ``` print(head(cu.summary)) ``` ``` ## Price Country Reliability Mileage Type ## Acura Integra 4 11950 Japan Much better NA Small ## Dodge Colt 4 6851 Japan <NA> NA Small ## Dodge Omni 4 6995 USA Much worse NA Small ## Eagle Summit 4 8895 USA better 33 Small ## Ford Escort 4 7402 USA worse 33 Small ## Ford Festiva 4 6319 Korea better 37 Small ``` ``` print(tail(cu.summary)) ``` ``` ## Price Country Reliability Mileage Type ## Ford Aerostar V6 12267 USA average 18 Van ## Mazda MPV V6 14944 Japan Much better 19 Van ## Mitsubishi Wagon 4 14929 Japan <NA> 20 Van ## Nissan Axxess 4 13949 Japan <NA> 20 Van ## Nissan Van 4 14799 Japan <NA> 19 Van ## Volkswagen Vanagon 4 14080 Germany <NA> NA Van ``` ``` print(dim(cu.summary)) ``` ``` ## [1] 117 5 ``` ``` print(unique(cu.summary$Type)) ``` ``` ## [1] Small Sporty Compact Medium Large Van ## Levels: Compact Large Medium Small Sporty Van ``` ``` print(unique(cu.summary$Country)) ``` ``` ## [1] Japan USA Korea Japan/USA Mexico Brazil Germany ## [8] France Sweden England ## 10 Levels: Brazil England France Germany Japan Japan/USA Korea ... USA ``` We will try and predict Mileage using the other variables. (Note: if we tried to predict Reliability, then we would be back in the realm of classification trees, here we are looking at regression trees.) ``` library(rpart) fit <- rpart(Mileage~Price + Country + Reliability + Type, method="anova", data=cu.summary) print(summary(fit)) ``` ``` ## Call: ## rpart(formula = Mileage ~ Price + Country + Reliability + Type, ## data = cu.summary, method = "anova") ## n=60 (57 observations deleted due to missingness) ## ## CP nsplit rel error xerror xstd ## 1 0.62288527 0 1.0000000 1.0278364 0.17665513 ## 2 0.13206061 1 0.3771147 0.5199982 0.10233496 ## 3 0.02544094 2 0.2450541 0.4095695 0.08549195 ## 4 0.01160389 3 0.2196132 0.4195450 0.09312124 ## 5 0.01000000 4 0.2080093 0.4171213 0.08786038 ## ## Variable importance ## Price Type Country ## 48 42 10 ## ## Node number 1: 60 observations, complexity param=0.6228853 ## mean=24.58333, MSE=22.57639 ## left son=2 (48 obs) right son=3 (12 obs) ## Primary splits: ## Price < 9446.5 to the right, improve=0.6228853, (0 missing) ## Type splits as LLLRLL, improve=0.5044405, (0 missing) ## Reliability splits as LLLRR, improve=0.1263005, (11 missing) ## Country splits as --LRLRRRLL, improve=0.1243525, (0 missing) ## Surrogate splits: ## Type splits as LLLRLL, agree=0.950, adj=0.750, (0 split) ## Country splits as --LLLLRRLL, agree=0.833, adj=0.167, (0 split) ## ## Node number 2: 48 observations, complexity param=0.1320606 ## mean=22.70833, MSE=8.498264 ## left son=4 (23 obs) right son=5 (25 obs) ## Primary splits: ## Type splits as RLLRRL, improve=0.43853830, (0 missing) ## Price < 12154.5 to the right, improve=0.25748500, (0 missing) ## Country splits as --RRLRL-LL, improve=0.13345700, (0 missing) ## Reliability splits as LLLRR, improve=0.01637086, (10 missing) ## Surrogate splits: ## Price < 12215.5 to the right, agree=0.812, adj=0.609, (0 split) ## Country splits as --RRLRL-RL, agree=0.646, adj=0.261, (0 split) ## ## Node number 3: 12 observations ## mean=32.08333, MSE=8.576389 ## ## Node number 4: 23 observations, complexity param=0.02544094 ## mean=20.69565, MSE=2.907372 ## left son=8 (10 obs) right son=9 (13 obs) ## Primary splits: ## Type splits as -LR--L, improve=0.515359600, (0 missing) ## Price < 14962 to the left, improve=0.131259400, (0 missing) ## Country 
splits as ----L-R--R, improve=0.007022107, (0 missing)
## Surrogate splits:
## Price < 13572 to the right, agree=0.609, adj=0.1, (0 split)
##
## Node number 5: 25 observations, complexity param=0.01160389
## mean=24.56, MSE=6.4864
## left son=10 (14 obs) right son=11 (11 obs)
## Primary splits:
## Price < 11484.5 to the right, improve=0.09693168, (0 missing)
## Reliability splits as LLRRR, improve=0.07767167, (4 missing)
## Type splits as L--RR-, improve=0.04209834, (0 missing)
## Country splits as --LRRR--LL, improve=0.02201687, (0 missing)
## Surrogate splits:
## Country splits as --LLLL--LR, agree=0.80, adj=0.545, (0 split)
## Type splits as L--RL-, agree=0.64, adj=0.182, (0 split)
##
## Node number 8: 10 observations
## mean=19.3, MSE=2.21
##
## Node number 9: 13 observations
## mean=21.76923, MSE=0.7928994
##
## Node number 10: 14 observations
## mean=23.85714, MSE=7.693878
##
## Node number 11: 11 observations
## mean=25.45455, MSE=3.520661
##
## n=60 (57 observations deleted due to missingness)
##
## node), split, n, deviance, yval
## * denotes terminal node
##
## 1) root 60 1354.58300 24.58333
## 2) Price>=9446.5 48 407.91670 22.70833
## 4) Type=Large,Medium,Van 23 66.86957 20.69565
## 8) Type=Large,Van 10 22.10000 19.30000 *
## 9) Type=Medium 13 10.30769 21.76923 *
## 5) Type=Compact,Small,Sporty 25 162.16000 24.56000
## 10) Price>=11484.5 14 107.71430 23.85714 *
## 11) Price< 11484.5 11 38.72727 25.45455 *
## 3) Price< 9446.5 12 102.91670 32.08333 *
```

```
plot(fit, uniform=TRUE)
text(fit, use.n=TRUE, all=TRUE, cex=.8)
```

### 12\.8\.1 Example: California Home Data

This example is taken from a data set posted by Cosmo Shalizi at CMU. We use a different package here, called **tree**, though most of its functionality has since been subsumed by the **rpart** package used earlier. The analysis is as follows:

```
library(tree)
cahomes = read.table("DSTMAA_data/cahomedata.txt",header=TRUE)
print(dim(cahomes))
```

```
## [1] 20640 9
```

```
head(cahomes)
```

```
## MedianHouseValue MedianIncome MedianHouseAge TotalRooms TotalBedrooms
## 1 452600 8.3252 41 880 129
## 2 358500 8.3014 21 7099 1106
## 3 352100 7.2574 52 1467 190
## 4 341300 5.6431 52 1274 235
## 5 342200 3.8462 52 1627 280
## 6 269700 4.0368 52 919 213
## Population Households Latitude Longitude
## 1 322 126 37.88 -122.23
## 2 2401 1138 37.86 -122.22
## 3 496 177 37.85 -122.24
## 4 558 219 37.85 -122.25
## 5 565 259 37.85 -122.25
## 6 413 193 37.85 -122.25
```

```
summary(cahomes)
```

```
## MedianHouseValue MedianIncome MedianHouseAge TotalRooms
## Min. : 14999 Min. : 0.4999 Min. : 1.00 Min. : 2
## 1st Qu.:119600 1st Qu.: 2.5634 1st Qu.:18.00 1st Qu.: 1448
## Median :179700 Median : 3.5348 Median :29.00 Median : 2127
## Mean :206856 Mean : 3.8707 Mean :28.64 Mean : 2636
## 3rd Qu.:264725 3rd Qu.: 4.7432 3rd Qu.:37.00 3rd Qu.: 3148
## Max. :500001 Max. :15.0001 Max. :52.00 Max. :39320
## TotalBedrooms Population Households Latitude
## Min. : 1.0 Min. : 3 Min. : 1.0 Min. :32.54
## 1st Qu.: 295.0 1st Qu.: 787 1st Qu.: 280.0 1st Qu.:33.93
## Median : 435.0 Median : 1166 Median : 409.0 Median :34.26
## Mean : 537.9 Mean : 1425 Mean : 499.5 Mean :35.63
## 3rd Qu.: 647.0 3rd Qu.: 1725 3rd Qu.: 605.0 3rd Qu.:37.71
## Max. :6445.0 Max. :35682 Max. :6082.0 Max. :41.95
## Longitude
## Min. :-124.3
## 1st Qu.:-121.8
## Median :-118.5
## Mean :-119.6
## 3rd Qu.:-118.0
## Max. :-114.3
```

```
mhv = as.matrix(as.numeric(cahomes$MedianHouseValue))
logmhv = log(mhv)
lat = as.matrix(as.numeric(cahomes$Latitude))
lon = as.matrix(as.numeric(cahomes$Longitude))
summary(lm(mhv~lat+lon))
```

```
##
## Call:
## lm(formula = mhv ~ lat + lon)
##
## Residuals:
## Min 1Q Median 3Q Max
## -316022 -67502 -22903 46042 483381
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -5829397.0 82092.2 -71.01 <2e-16 ***
## lat -69551.0 859.6 -80.91 <2e-16 ***
## lon -71209.4 916.4 -77.70 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 100400 on 20637 degrees of freedom
## Multiple R-squared: 0.2424, Adjusted R-squared: 0.2423
## F-statistic: 3302 on 2 and 20637 DF, p-value: < 2.2e-16
```

```
fit = tree(logmhv~lon+lat)
plot(fit)
text(fit,cex=0.8)
```

```
price.deciles = quantile(mhv,0:10/10)
cut.prices = cut(mhv,price.deciles,include.lowest=TRUE)
plot(lon, lat, col=grey(10:2/11)[cut.prices],pch=20,xlab="Longitude",ylab="Latitude")
partition.tree(fit,ordvars=c("lon","lat"),add=TRUE,cex=0.8)
```

12\.9 Random Forests
--------------------

A random forest model is an extension of the CART class of models. In CART, at each decision node, all variables in the feature set are considered, and the best one determines the bifurcation rule at that node. This approach tends to overfit the model to the training data. To ameliorate overfitting, Breiman (2001\) suggested generating classification and regression trees using a random subset of the feature set at each split. Trees are grown one at a time, and by building a large set of such random trees (the default number in R is 500\), we get a “random forest” of decision trees. When the algorithm is run, each tree in the forest classifies the input, and the output classification is determined by taking the modal classification across all trees. The number of variables sampled at each split from a feature set of \\(p\\) variables defaults to \\(p/3\\), rounded down, for a regression tree, and \\(\\sqrt{p}\\) for a classification tree.

**Reference**: Breiman ([2001](#ref-Breiman:2001:RF:570181.570182))

For the NCAA data, take the top 32 teams and make their dependent variable 1, and that of the bottom 32 teams zero.

```
ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE)
y1 = 1:32
y1 = y1*0+1
y2 = y1*0
y = c(y1,y2)
print(y)
```

```
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0
## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```

```
x = as.matrix(ncaa[4:14])
```

```
library(randomForest)
```

```
## randomForest 4.6-12
```

```
## Type rfNews() to see new features/changes/bug fixes.
```

```
yf = factor(y)
res = randomForest(x,yf)
print(res)
```

```
##
## Call:
## randomForest(x = x, y = yf)
## Type of random forest: classification
## Number of trees: 500
## No. of variables tried at each split: 3
##
## OOB estimate of error rate: 28.12%
## Confusion matrix:
## 0 1 class.error
## 0 24 8 0.2500
## 1 10 22 0.3125
```

```
print(importance(res))
```

```
## MeanDecreaseGini
## PTS 4.625922
## REB 1.605147
## AST 1.999855
## TO 3.882536
## A.T 3.880554
## STL 2.026178
## BLK 1.951694
## PF 1.756469
## FG 4.159391
## FT 3.258861
## X3P 2.354894
```

```
res = randomForest(x,yf,mtry=3)
print(res)
```

```
##
## Call:
## randomForest(x = x, y = yf, mtry = 3)
## Type of random forest: classification
## Number of trees: 500
## No.
of variables tried at each split: 3 ## ## OOB estimate of error rate: 31.25% ## Confusion matrix: ## 0 1 class.error ## 0 23 9 0.28125 ## 1 11 21 0.34375 ``` ``` print(importance(res)) ``` ``` ## MeanDecreaseGini ## PTS 4.576616 ## REB 1.379877 ## AST 2.158874 ## TO 3.847833 ## A.T 3.674293 ## STL 1.983024 ## BLK 2.089959 ## PF 1.621722 ## FG 4.408469 ## FT 3.562817 ## X3P 2.191143 ``` 12\.10 Top Ten Algorithms in Data Science ----------------------------------------- The top voted algorithms in machine learning are: C4\.5, k\-means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, CART. (This is just from one source, and differences of opinion will remain.) 12\.11 Boosting --------------- Boosting is an immensely popular machine learning technique. It is an iterative approach that takes weak learning algorithms and “boosts” them into strong learners. The method is intuitive. Start out with a classification algorithm such as logit for binary classification and run one pass to fit the model. Check which cases are correctly predicted in\-sample, and which are incorrect. In the next classification pass (also known as a round), reweight the misclassified observations versus the correctly classified ones, by overweighting the former, and underweighting the latter. This forces the learner to “focus” more on the tougher cases, and adjust so that it gets these classified more accurately. Through multiple rounds, the results are boosted to higher levels of accuracy. Because there are many different weighting schemes, data scientists have evolved many different boosting algorithms. AdaBoost is one such popular algorithm, developed by Schapire and Singer ([1999](#ref-Schapire99improvedboosting)). In recent times, these boosting algorithms have improved in their computer implementation, mostly through parallelization to speed them up when using huge data sets. Such versions are known as “extreme gradient” boosting algorithms. In R, the package **xgboost** contains an easy to use implementation. We illustrate with an example. We use the sample data that comes with the **xgboost** package. Read in the data for the model. ``` library(xgboost) ``` ``` ## Warning: package 'xgboost' was built under R version 3.3.2 ``` ``` data("agaricus.train") print(names(agaricus.train)) ``` ``` ## [1] "data" "label" ``` ``` print(dim(agaricus.train$data)) ``` ``` ## [1] 6513 126 ``` ``` print(length(agaricus.train$label)) ``` ``` ## [1] 6513 ``` ``` data("agaricus.test") print(names(agaricus.test)) ``` ``` ## [1] "data" "label" ``` ``` print(dim(agaricus.test$data)) ``` ``` ## [1] 1611 126 ``` ``` print(length(agaricus.test$label)) ``` ``` ## [1] 1611 ``` Fit the model. All that is needed is a single\-line call to the *xgboost* function. ``` res = xgboost(data=agaricus.train$data, label=agaricus.train$label, objective = "binary:logistic", nrounds=5) ``` ``` ## [1] train-error:0.000614 ## [2] train-error:0.001228 ## [3] train-error:0.000614 ## [4] train-error:0.000614 ## [5] train-error:0.000000 ``` ``` print(names(res)) ``` ``` ## [1] "handle" "raw" "niter" "evaluation_log" ## [5] "call" "params" "callbacks" ``` Undertake prediction using the *predict* function and then examine the confusion matrix for performance. 
``` #In sample yhat = predict(res,agaricus.train$data) print(head(yhat,50)) ``` ``` ## [1] 0.8973738 0.1030030 0.1030030 0.8973738 0.1018238 0.1030030 0.1030030 ## [8] 0.8973738 0.1030030 0.1030030 0.1030030 0.1030030 0.1018238 0.1058771 ## [15] 0.1018238 0.8973738 0.8973738 0.8973738 0.1030030 0.8973738 0.1030030 ## [22] 0.1030030 0.8973738 0.1030030 0.1030030 0.8973738 0.1030030 0.1057071 ## [29] 0.1030030 0.1144627 0.1058771 0.1139800 0.1030030 0.1057071 0.1058771 ## [36] 0.1030030 0.1030030 0.1030030 0.1030030 0.1057071 0.1057071 0.1030030 ## [43] 0.1030030 0.8973738 0.1030030 0.1030030 0.1057071 0.1058771 0.1030030 ## [50] 0.1030030 ``` ``` cm = table(agaricus.train$label,as.integer(round(yhat))) print(cm) ``` ``` ## ## 0 1 ## 0 3373 0 ## 1 0 3140 ``` ``` print(chisq.test(cm)) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: cm ## X-squared = 6509, df = 1, p-value < 2.2e-16 ``` ``` #Out of sample yhat = predict(res,agaricus.test$data) print(head(yhat,50)) ``` ``` ## [1] 0.1030030 0.8973738 0.1030030 0.1030030 0.1058771 0.1139800 0.8973738 ## [8] 0.1030030 0.8973738 0.1057071 0.8973738 0.1030030 0.1018238 0.1030030 ## [15] 0.1018238 0.1057071 0.1030030 0.8973738 0.1058771 0.1030030 0.1030030 ## [22] 0.1057071 0.1030030 0.1030030 0.1057071 0.8973738 0.1139800 0.1030030 ## [29] 0.1030030 0.1018238 0.1030030 0.1030030 0.1057071 0.1058771 0.1030030 ## [36] 0.1030030 0.1139800 0.8973738 0.1030030 0.1030030 0.1058771 0.1030030 ## [43] 0.1030030 0.1030030 0.1030030 0.1144627 0.1057071 0.1144627 0.1058771 ## [50] 0.1030030 ``` ``` cm = table(agaricus.test$label,as.integer(round(yhat))) print(cm) ``` ``` ## ## 0 1 ## 0 835 0 ## 1 0 776 ``` ``` print(chisq.test(cm)) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: cm ## X-squared = 1607, df = 1, p-value < 2.2e-16 ``` There are many types of algorithms that may be used with boosting, see the documentation of the function in R. But here are some of the options. * reg:linear, linear regression (Default). * reg:logistic, logistic regression. * binary:logistic, logistic regression for binary classification. Output probability. * binary:logitraw, logistic regression for binary classification, output score before logistic transformation. * multi:softmax, set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to num\_class \- 1\. * multi:softprob, same as softmax, but prediction outputs a vector of ndata \* nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class. * rank:pairwise set xgboost to do ranking task by minimizing the pairwise loss. Let’s repeat the exercise using the NCAA data. 
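Before that, a brief aside on the multi\-class objectives in the list above: the sketch below applies the multi:softmax objective to the iris data, which we have already seen. The choice of data set, the number of rounds, and the verbose setting are illustrative assumptions, not part of the chapter’s exercise.

```
# Illustrative multi-class fit with the multi:softmax objective (iris data)
library(xgboost)
data(iris)
x_iris = as.matrix(iris[, 1:4])
y_iris = as.numeric(iris$Species) - 1     # classes must be coded 0, 1, ..., num_class-1

res_mc = xgboost(data = x_iris, label = y_iris, objective = "multi:softmax",
                 num_class = 3, nrounds = 5, verbose = 0)

yhat_mc = predict(res_mc, x_iris)         # returns predicted class codes 0, 1, 2
print(table(yhat_mc, iris$Species))
```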
```
ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE)
y = as.matrix(c(rep(1,32),rep(0,32)))
x = as.matrix(ncaa[4:14])
res = xgboost(data=x,label=y,objective = "binary:logistic", nrounds=10)
```

```
## [1] train-error:0.109375
## [2] train-error:0.062500
## [3] train-error:0.031250
## [4] train-error:0.046875
## [5] train-error:0.046875
## [6] train-error:0.031250
## [7] train-error:0.015625
## [8] train-error:0.015625
## [9] train-error:0.015625
## [10] train-error:0.000000
```

```
yhat = predict(res,x)
print(yhat)
```

```
## [1] 0.93651539 0.91299230 0.94973743 0.92731959 0.88483542 0.78989410
## [7] 0.87560666 0.90532523 0.86085796 0.83430755 0.91133112 0.77964365
## [13] 0.65978771 0.91299230 0.93371087 0.91403663 0.78532064 0.80347157
## [19] 0.60545647 0.79564470 0.84763408 0.86694145 0.79334742 0.91165835
## [25] 0.80980736 0.76779360 0.90779346 0.88314682 0.85020524 0.77409834
## [31] 0.85503411 0.92695338 0.49809304 0.15059802 0.13718443 0.30433667
## [37] 0.35902274 0.08057866 0.16935477 0.06189578 0.08516480 0.12777112
## [43] 0.06224639 0.18913418 0.07675765 0.33156753 0.06586388 0.13792981
## [49] 0.22327985 0.08479820 0.16396984 0.10236575 0.16346745 0.27498406
## [55] 0.10642117 0.07299758 0.15809764 0.15259050 0.07768227 0.15006000
## [61] 0.08349544 0.06932075 0.10376420 0.11887703
```

```
cm = table(y,as.integer(round(yhat)))
print(cm)
```

```
##
## y 0 1
## 0 32 0
## 1 0 32
```

```
print(chisq.test(cm))
```

```
##
## Pearson's Chi-squared test with Yates' continuity correction
##
## data: cm
## X-squared = 60.062, df = 1, p-value = 9.189e-15
```
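To make the reweighting idea behind boosting concrete, here is a minimal AdaBoost\-style sketch that combines decision stumps (one\-split **rpart** trees) on the same NCAA data. This is a bare\-bones illustration of the weight updates, not the algorithm implemented by **xgboost**; the number of rounds, the stump settings, and the object names are assumptions made only for this sketch.

```
# Minimal AdaBoost-style sketch: reweight misclassified cases each round
library(rpart)
y01 = c(rep(1, 32), rep(0, 32))                 # NCAA labels as above
df  = data.frame(y = factor(y01), ncaa[4:14])

n = nrow(df)
w = rep(1/n, n)                                 # start with equal weights
M = 5                                           # number of boosting rounds
alphas = numeric(M)
stumps = vector("list", M)

for (m in 1:M) {
  stump = rpart(y ~ ., data = df, weights = w,
                control = rpart.control(maxdepth = 1, cp = 0))
  pred  = predict(stump, df, type = "class")
  miss  = as.numeric(pred != df$y)
  err   = min(max(sum(w * miss) / sum(w), 1e-10), 1 - 1e-10)  # guarded error rate
  alpha = 0.5 * log((1 - err) / err)            # weight on this round's stump
  w     = w * exp(alpha * (2 * miss - 1))       # up-weight the misclassified cases
  w     = w / sum(w)
  alphas[m] = alpha
  stumps[[m]] = stump
}

# Final classifier: weighted vote of the stumps
votes   = sapply(stumps, function(s) ifelse(predict(s, df, type = "class") == "1", 1, -1))
boosted = as.integer(votes %*% alphas > 0)
print(table(boosted, y01))
```

Each round concentrates weight on the cases the previous stumps got wrong, and the final vote weights more accurate rounds more heavily, which is the essence of the boosting idea described above.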
splits as ----L-R--R, improve=0.007022107, (0 missing) ## Surrogate splits: ## Price < 13572 to the right, agree=0.609, adj=0.1, (0 split) ## ## Node number 5: 25 observations, complexity param=0.01160389 ## mean=24.56, MSE=6.4864 ## left son=10 (14 obs) right son=11 (11 obs) ## Primary splits: ## Price < 11484.5 to the right, improve=0.09693168, (0 missing) ## Reliability splits as LLRRR, improve=0.07767167, (4 missing) ## Type splits as L--RR-, improve=0.04209834, (0 missing) ## Country splits as --LRRR--LL, improve=0.02201687, (0 missing) ## Surrogate splits: ## Country splits as --LLLL--LR, agree=0.80, adj=0.545, (0 split) ## Type splits as L--RL-, agree=0.64, adj=0.182, (0 split) ## ## Node number 8: 10 observations ## mean=19.3, MSE=2.21 ## ## Node number 9: 13 observations ## mean=21.76923, MSE=0.7928994 ## ## Node number 10: 14 observations ## mean=23.85714, MSE=7.693878 ## ## Node number 11: 11 observations ## mean=25.45455, MSE=3.520661 ## ## n=60 (57 observations deleted due to missingness) ## ## node), split, n, deviance, yval ## * denotes terminal node ## ## 1) root 60 1354.58300 24.58333 ## 2) Price>=9446.5 48 407.91670 22.70833 ## 4) Type=Large,Medium,Van 23 66.86957 20.69565 ## 8) Type=Large,Van 10 22.10000 19.30000 * ## 9) Type=Medium 13 10.30769 21.76923 * ## 5) Type=Compact,Small,Sporty 25 162.16000 24.56000 ## 10) Price>=11484.5 14 107.71430 23.85714 * ## 11) Price< 11484.5 11 38.72727 25.45455 * ## 3) Price< 9446.5 12 102.91670 32.08333 * ``` ``` plot(fit, uniform=TRUE) text(fit, use.n=TRUE, all=TRUE, cex=.8) ``` ### 12\.8\.1 Example: Califonia Home Data This example is taken from a data set posted by Cosmo Shalizi at CMU. We use a different package here, called **tree**, though this has been subsumed in most of its functionality by **rpart** used earlier. The analysis is as follows: ``` library(tree) cahomes = read.table("DSTMAA_data/cahomedata.txt",header=TRUE) print(dim(cahomes)) ``` ``` ## [1] 20640 9 ``` ``` head(cahomes) ``` ``` ## MedianHouseValue MedianIncome MedianHouseAge TotalRooms TotalBedrooms ## 1 452600 8.3252 41 880 129 ## 2 358500 8.3014 21 7099 1106 ## 3 352100 7.2574 52 1467 190 ## 4 341300 5.6431 52 1274 235 ## 5 342200 3.8462 52 1627 280 ## 6 269700 4.0368 52 919 213 ## Population Households Latitude Longitude ## 1 322 126 37.88 -122.23 ## 2 2401 1138 37.86 -122.22 ## 3 496 177 37.85 -122.24 ## 4 558 219 37.85 -122.25 ## 5 565 259 37.85 -122.25 ## 6 413 193 37.85 -122.25 ``` ``` summary(cahomes) ``` ``` ## MedianHouseValue MedianIncome MedianHouseAge TotalRooms ## Min. : 14999 Min. : 0.4999 Min. : 1.00 Min. : 2 ## 1st Qu.:119600 1st Qu.: 2.5634 1st Qu.:18.00 1st Qu.: 1448 ## Median :179700 Median : 3.5348 Median :29.00 Median : 2127 ## Mean :206856 Mean : 3.8707 Mean :28.64 Mean : 2636 ## 3rd Qu.:264725 3rd Qu.: 4.7432 3rd Qu.:37.00 3rd Qu.: 3148 ## Max. :500001 Max. :15.0001 Max. :52.00 Max. :39320 ## TotalBedrooms Population Households Latitude ## Min. : 1.0 Min. : 3 Min. : 1.0 Min. :32.54 ## 1st Qu.: 295.0 1st Qu.: 787 1st Qu.: 280.0 1st Qu.:33.93 ## Median : 435.0 Median : 1166 Median : 409.0 Median :34.26 ## Mean : 537.9 Mean : 1425 Mean : 499.5 Mean :35.63 ## 3rd Qu.: 647.0 3rd Qu.: 1725 3rd Qu.: 605.0 3rd Qu.:37.71 ## Max. :6445.0 Max. :35682 Max. :6082.0 Max. :41.95 ## Longitude ## Min. :-124.3 ## 1st Qu.:-121.8 ## Median :-118.5 ## Mean :-119.6 ## 3rd Qu.:-118.0 ## Max. 
:-114.3 ``` ``` mhv = as.matrix(as.numeric(cahomes$MedianHouseValue)) logmhv = log(mhv) lat = as.matrix(as.numeric(cahomes$Latitude)) lon = as.matrix(as.numeric(cahomes$Longitude)) summary(lm(mhv~lat+lon)) ``` ``` ## ## Call: ## lm(formula = mhv ~ lat + lon) ## ## Residuals: ## Min 1Q Median 3Q Max ## -316022 -67502 -22903 46042 483381 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -5829397.0 82092.2 -71.01 <2e-16 *** ## lat -69551.0 859.6 -80.91 <2e-16 *** ## lon -71209.4 916.4 -77.70 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 100400 on 20637 degrees of freedom ## Multiple R-squared: 0.2424, Adjusted R-squared: 0.2423 ## F-statistic: 3302 on 2 and 20637 DF, p-value: < 2.2e-16 ``` ``` fit = tree(logmhv~lon+lat) plot(fit) text(fit,cex=0.8) ``` ``` price.deciles = quantile(mhv,0:10/10) cut.prices = cut(mhv,price.deciles,include.lowest=TRUE) plot(lon, lat, col=grey(10:2/11)[cut.prices],pch=20,xlab="Longitude",ylab="Latitude") partition.tree(fit,ordvars=c("lon","lat"),add=TRUE,cex=0.8) ``` 
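The fitted tree can also be queried for predictions with the *predict* function. The following is a minimal sketch that was not part of the original analysis: the two coordinate pairs are made up for illustration, and predictions come back on the log scale because the tree was fit to *logmhv*. Depending on the package version, you may need to supply the new coordinates in the same one\-column matrix form used in the fit.

```
#Hypothetical new locations (illustrative values only)
newpts = data.frame(lon = c(-122.25, -118.30), lat = c(37.85, 34.10))
pred_log = predict(fit, newdata = newpts)  #predicted log median house value
print(exp(pred_log))                       #convert back to dollar values
```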
12\.9 Random Forests -------------------- A random forest model is an extension of the CART class of models. In CART, at each decision node, all variables in the feature set are selected and the best one is determined for the bifurcation rule at that node. This approach tends to overfit the model to training data. To ameliorate overfitting, Breiman (2001\) suggested generating classification and regression trees using a random subset of the feature set at each split. One at a time, a random tree is grown. By building a large set of random trees (the default number in R is 500\), we get a “random forest” of decision trees, and when the algorithm is run, each tree in the forest classifies the input. The output classification is determined by taking the modal classification across all trees. For a feature set of \\(p\\) variables, the number of variables tried at each split defaults to \\(p/3\\), rounded down, for a regression tree, and \\(\\sqrt{p}\\) for a classification tree. **Reference**: Breiman ([2001](#ref-Breiman:2001:RF:570181.570182)) For the NCAA data, take the top 32 teams and make their dependent variable 1, and that of the bottom 32 teams zero. ``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y1 = 1:32 y1 = y1*0+1 y2 = y1*0 y = c(y1,y2) print(y) ``` ``` ## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 ## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` ``` x = as.matrix(ncaa[4:14]) ``` ``` library(randomForest) ``` ``` ## randomForest 4.6-12 ``` ``` ## Type rfNews() to see new features/changes/bug fixes. ``` ``` yf = factor(y) res = randomForest(x,yf) print(res) ``` ``` ## ## Call: ## randomForest(x = x, y = yf) ## Type of random forest: classification ## Number of trees: 500 ## No. of variables tried at each split: 3 ## ## OOB estimate of error rate: 28.12% ## Confusion matrix: ## 0 1 class.error ## 0 24 8 0.2500 ## 1 10 22 0.3125 ``` ``` print(importance(res)) ``` ``` ## MeanDecreaseGini ## PTS 4.625922 ## REB 1.605147 ## AST 1.999855 ## TO 3.882536 ## A.T 3.880554 ## STL 2.026178 ## BLK 1.951694 ## PF 1.756469 ## FG 4.159391 ## FT 3.258861 ## X3P 2.354894 ``` ``` res = randomForest(x,yf,mtry=3) print(res) ``` ``` ## ## Call: ## randomForest(x = x, y = yf, mtry = 3) ## Type of random forest: classification ## Number of trees: 500 ## No. of variables tried at each split: 3 ## ## OOB estimate of error rate: 31.25% ## Confusion matrix: ## 0 1 class.error ## 0 23 9 0.28125 ## 1 11 21 0.34375 ``` ``` print(importance(res)) ``` ``` ## MeanDecreaseGini ## PTS 4.576616 ## REB 1.379877 ## AST 2.158874 ## TO 3.847833 ## A.T 3.674293 ## STL 1.983024 ## BLK 2.089959 ## PF 1.621722 ## FG 4.408469 ## FT 3.562817 ## X3P 2.191143 ``` 12\.10 Top Ten Algorithms in Data Science ----------------------------------------- The top voted algorithms in machine learning are: C4\.5, k\-means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, CART. (This is just from one source, and differences of opinion will remain.) 
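Before moving on, note that the random forest fitted earlier on the NCAA data (the object *res*) can be used like any other classifier. The short sketch below is illustrative and was not part of the original analysis; it uses standard functions from the **randomForest** package.

```
#Out-of-bag predictions for the training cases (omit newdata to get OOB votes)
print(head(predict(res)))
#Class probabilities for the training cases
print(head(predict(res, x, type = "prob")))
#Dot chart of variable importance (mean decrease in Gini)
varImpPlot(res)
```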
12\.11 Boosting --------------- Boosting is an immensely popular machine learning technique. It is an iterative approach that takes weak learning algorithms and “boosts” them into strong learners. The method is intuitive. Start out with a classification algorithm such as logit for binary classification and run one pass to fit the model. Check which cases are correctly predicted in\-sample, and which are incorrect. In the next classification pass (also known as a round), reweight the misclassified observations versus the correctly classified ones, by overweighting the former, and underweighting the latter. This forces the learner to “focus” more on the tougher cases, and adjust so that it gets these classified more accurately. Through multiple rounds, the results are boosted to higher levels of accuracy. Because there are many different weighting schemes, data scientists have evolved many different boosting algorithms. AdaBoost is one such popular algorithm, developed by Schapire and Singer ([1999](#ref-Schapire99improvedboosting)). In recent times, these boosting algorithms have improved in their computer implementation, mostly through parallelization to speed them up when using huge data sets. Such versions are known as “extreme gradient” boosting algorithms. In R, the package **xgboost** contains an easy to use implementation. We illustrate with an example. We use the sample data that comes with the **xgboost** package. Read in the data for the model. ``` library(xgboost) ``` ``` ## Warning: package 'xgboost' was built under R version 3.3.2 ``` ``` data("agaricus.train") print(names(agaricus.train)) ``` ``` ## [1] "data" "label" ``` ``` print(dim(agaricus.train$data)) ``` ``` ## [1] 6513 126 ``` ``` print(length(agaricus.train$label)) ``` ``` ## [1] 6513 ``` ``` data("agaricus.test") print(names(agaricus.test)) ``` ``` ## [1] "data" "label" ``` ``` print(dim(agaricus.test$data)) ``` ``` ## [1] 1611 126 ``` ``` print(length(agaricus.test$label)) ``` ``` ## [1] 1611 ``` Fit the model. All that is needed is a single\-line call to the *xgboost* function. ``` res = xgboost(data=agaricus.train$data, label=agaricus.train$label, objective = "binary:logistic", nrounds=5) ``` ``` ## [1] train-error:0.000614 ## [2] train-error:0.001228 ## [3] train-error:0.000614 ## [4] train-error:0.000614 ## [5] train-error:0.000000 ``` ``` print(names(res)) ``` ``` ## [1] "handle" "raw" "niter" "evaluation_log" ## [5] "call" "params" "callbacks" ``` Undertake prediction using the *predict* function and then examine the confusion matrix for performance. 
``` #In sample yhat = predict(res,agaricus.train$data) print(head(yhat,50)) ``` ``` ## [1] 0.8973738 0.1030030 0.1030030 0.8973738 0.1018238 0.1030030 0.1030030 ## [8] 0.8973738 0.1030030 0.1030030 0.1030030 0.1030030 0.1018238 0.1058771 ## [15] 0.1018238 0.8973738 0.8973738 0.8973738 0.1030030 0.8973738 0.1030030 ## [22] 0.1030030 0.8973738 0.1030030 0.1030030 0.8973738 0.1030030 0.1057071 ## [29] 0.1030030 0.1144627 0.1058771 0.1139800 0.1030030 0.1057071 0.1058771 ## [36] 0.1030030 0.1030030 0.1030030 0.1030030 0.1057071 0.1057071 0.1030030 ## [43] 0.1030030 0.8973738 0.1030030 0.1030030 0.1057071 0.1058771 0.1030030 ## [50] 0.1030030 ``` ``` cm = table(agaricus.train$label,as.integer(round(yhat))) print(cm) ``` ``` ## ## 0 1 ## 0 3373 0 ## 1 0 3140 ``` ``` print(chisq.test(cm)) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: cm ## X-squared = 6509, df = 1, p-value < 2.2e-16 ``` ``` #Out of sample yhat = predict(res,agaricus.test$data) print(head(yhat,50)) ``` ``` ## [1] 0.1030030 0.8973738 0.1030030 0.1030030 0.1058771 0.1139800 0.8973738 ## [8] 0.1030030 0.8973738 0.1057071 0.8973738 0.1030030 0.1018238 0.1030030 ## [15] 0.1018238 0.1057071 0.1030030 0.8973738 0.1058771 0.1030030 0.1030030 ## [22] 0.1057071 0.1030030 0.1030030 0.1057071 0.8973738 0.1139800 0.1030030 ## [29] 0.1030030 0.1018238 0.1030030 0.1030030 0.1057071 0.1058771 0.1030030 ## [36] 0.1030030 0.1139800 0.8973738 0.1030030 0.1030030 0.1058771 0.1030030 ## [43] 0.1030030 0.1030030 0.1030030 0.1144627 0.1057071 0.1144627 0.1058771 ## [50] 0.1030030 ``` ``` cm = table(agaricus.test$label,as.integer(round(yhat))) print(cm) ``` ``` ## ## 0 1 ## 0 835 0 ## 1 0 776 ``` ``` print(chisq.test(cm)) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: cm ## X-squared = 1607, df = 1, p-value < 2.2e-16 ``` There are many types of algorithms that may be used with boosting, see the documentation of the function in R. But here are some of the options. * reg:linear, linear regression (Default). * reg:logistic, logistic regression. * binary:logistic, logistic regression for binary classification. Output probability. * binary:logitraw, logistic regression for binary classification, output score before logistic transformation. * multi:softmax, set xgboost to do multiclass classification using the softmax objective. Class is represented by a number and should be from 0 to num\_class \- 1\. * multi:softprob, same as softmax, but prediction outputs a vector of ndata \* nclass elements, which can be further reshaped to ndata, nclass matrix. The result contains predicted probabilities of each data point belonging to each class. * rank:pairwise set xgboost to do ranking task by minimizing the pairwise loss. Let’s repeat the exercise using the NCAA data. 
``` ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE) y = as.matrix(c(rep(1,32),rep(0,32))) x = as.matrix(ncaa[4:14]) res = xgboost(data=x,label=y,objective = "binary:logistic", nrounds=10) ``` ``` ## [1] train-error:0.109375 ## [2] train-error:0.062500 ## [3] train-error:0.031250 ## [4] train-error:0.046875 ## [5] train-error:0.046875 ## [6] train-error:0.031250 ## [7] train-error:0.015625 ## [8] train-error:0.015625 ## [9] train-error:0.015625 ## [10] train-error:0.000000 ``` ``` yhat = predict(res,x) print(yhat) ``` ``` ## [1] 0.93651539 0.91299230 0.94973743 0.92731959 0.88483542 0.78989410 ## [7] 0.87560666 0.90532523 0.86085796 0.83430755 0.91133112 0.77964365 ## [13] 0.65978771 0.91299230 0.93371087 0.91403663 0.78532064 0.80347157 ## [19] 0.60545647 0.79564470 0.84763408 0.86694145 0.79334742 0.91165835 ## [25] 0.80980736 0.76779360 0.90779346 0.88314682 0.85020524 0.77409834 ## [31] 0.85503411 0.92695338 0.49809304 0.15059802 0.13718443 0.30433667 ## [37] 0.35902274 0.08057866 0.16935477 0.06189578 0.08516480 0.12777112 ## [43] 0.06224639 0.18913418 0.07675765 0.33156753 0.06586388 0.13792981 ## [49] 0.22327985 0.08479820 0.16396984 0.10236575 0.16346745 0.27498406 ## [55] 0.10642117 0.07299758 0.15809764 0.15259050 0.07768227 0.15006000 ## [61] 0.08349544 0.06932075 0.10376420 0.11887703 ``` ``` cm = table(y,as.integer(round(yhat))) print(cm) ``` ``` ## ## y 0 1 ## 0 32 0 ## 1 0 32 ``` ``` print(chisq.test(cm)) ``` ``` ## ## Pearson's Chi-squared test with Yates' continuity correction ## ## data: cm ## X-squared = 60.062, df = 1, p-value = 9.189e-15 ```
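To make the reweighting idea behind boosting concrete, here is a minimal AdaBoost\-style sketch built from one\-split **rpart** stumps. This illustrates only the weighting scheme described at the start of this section; it is not the algorithm inside **xgboost**, and the function **adaboost\_sketch** and its details are hypothetical.

```
#Minimal AdaBoost-style sketch: one-split rpart stumps, with observation
#weights re-scaled each round so that misclassified cases get more weight.
#All names here are illustrative; y must be coded as -1/+1.
library(rpart)
adaboost_sketch = function(x, y, rounds = 10) {
  y = as.numeric(y)
  n = nrow(x)
  w = rep(1/n, n)                                   #start with equal weights
  df = data.frame(x, y = factor(y))
  learners = list()
  alphas = numeric(rounds)
  for (m in 1:rounds) {
    stump = rpart(y ~ ., data = df, weights = w,
                  control = rpart.control(maxdepth = 1))
    pred = ifelse(predict(stump, df, type = "class") == "1", 1, -1)
    err = sum(w * (pred != y)) / sum(w)              #weighted error this round
    alpha = 0.5 * log((1 - err) / max(err, 1e-10))   #weight given to this learner
    w = w * exp(-alpha * y * pred)                   #up-weight the mistakes
    w = w / sum(w)
    learners[[m]] = stump
    alphas[m] = alpha
  }
  list(learners = learners, alphas = alphas)
}
#Example call on the NCAA data (labels recoded from 0/1 to -1/+1):
#bst = adaboost_sketch(x, 2*y - 1, rounds = 20)
```

The final boosted classification would be the sign of the alpha\-weighted sum of the stump predictions, which is exactly the sense in which many weak learners are combined into a strong one.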
Chapter 13 Statistical Brains: Neural Networks ============================================== 13\.1 Overview -------------- Neural networks are special forms of nonlinear regressions where the decision system for which the NN is built mimics the way the brain is supposed to work (whether it works like a NN is up for grabs of course). Terrific online book: <http://neuralnetworksanddeeplearning.com/> 13\.2 Perceptrons ----------------- The basic building block of a neural network is a perceptron. A perceptron is like a neuron in a human brain. It takes inputs (e.g. sensory in a real brain) and then produces an output signal. An entire network of perceptrons is called a neural net. For example, if you make a credit card application, then the inputs comprise a whole set of personal data such as age, sex, income, credit score, employment status, etc, which are then passed to a series of perceptrons in parallel. This is the first **layer** of assessment. Each of the perceptrons then emits an output signal which may then be passed to another layer of perceptrons, who again produce another signal. This second layer is often known as the **hidden** perceptron layer. Finally, after many hidden layers, the signals are all passed to a single perceptron which emits the decision signal to issue you a credit card or to deny your application. Perceptrons may emit continuous signals or binary \\((0,1\)\\) signals. In the case of the credit card application, the final perceptron is a binary one. Such perceptrons are implemented by means of **squashing** functions. For example, a really simple squashing function is one that issues a 1 if the function value is positive and a 0 if it is negative. More generally, \\\[\\begin{equation} S(x) \= \\left\\{ \\begin{array}{cl} 1 \& \\mbox{if } g(x)\>T \\\\ 0 \& \\mbox{if } g(x) \\leq T \\end{array} \\right. \\end{equation}\\] where \\(g(x)\\) is any function taking positive and negative values, for instance, \\(g(x) \\in (\-\\infty, \+\\infty)\\). \\(T\\) is a threshold level. A neural network with many layers is also known as a **multi\-layered** perceptron, i.e., all those perceptrons together may be thought of as one single, big perceptron. **x** is the input layer, **y** is the hidden layer, and **z** is the output layer. 13\.3 Deep Learning ------------------- Neural net models are related to **Deep Learning**, where the number of hidden layers is vastly greater than was possible in the past when computational power was limited. Now, deep learning nets cascade through 20\-30 layers, resulting in a surprising ability of neural nets in mimicking human learning processes. see: <http://en.wikipedia.org/wiki/Deep_learning> And also see: <http://deeplearning.net/> 13\.4 Binary NNs ---------------- Binary NNs are also thought of as a category of classifier systems. They are widely used to divide members of a population into classes. But NNs with continuous output are also popular. As we will see later, researchers have used NNs to learn the Black\-Scholes option pricing model. Areas of application: credit cards, risk management, forecasting corporate defaults, forecasting economic regimes, measuring the gains from mass mailings by mapping demographics to success rates. 13\.5 Squashing Functions ------------------------- Squashing functions may be more general than just binary. They usually squash the output signal into a narrow range, usually \\((0,1\)\\). A common choice is the logistic function (also known as the sigmoid function). 
\\\[\\begin{equation} f(x) \= \\frac{1}{1\+e^{\-w\\;x}} \\end{equation}\\] Think of \\(w\\) as the adjustable weight. Another common choice is the probit function \\\[\\begin{equation} f(x) \= \\Phi(w\\;x) \\end{equation}\\] where \\(\\Phi(\\cdot)\\) is the cumulative normal distribution function. 13\.6 How does the NN work? --------------------------- The easiest way to see how a NN works is to think of the simplest NN, i.e. one with a single perceptron generating a binary output. The perceptron has \\(n\\) inputs, with values \\(x\_i, i\=1\...n\\) and current weights \\(w\_i, i\=1\...n\\). It generates an output \\(y\\). The **net input** is defined as \\\[\\begin{equation} \\sum\_{i\=1}^n w\_i x\_i \\end{equation}\\] If a function of the net input is greater than a threshold \\(T\\), then the output signal is \\(y\=1\\), and if it is less than \\(T\\), the output is \\(y\=0\\). The actual output is called the **desired** output and is denoted \\(d \= \\{0,1\\}\\). Hence, the **training** data provided to the NN comprises both the inputs \\(x\_i\\) and the desired output \\(d\\). The output of our single perceptron model will be the sigmoid function of the net input, i.e. \\\[\\begin{equation} y \= \\frac{1}{1\+\\exp\\left( \- \\sum\_{i\=1}^n w\_i x\_i \\right)} \\end{equation}\\] For a given input set, the error in the NN is given by some loss function, an example of which is below: \\\[\\begin{equation} E \= \\frac{1}{2} \\sum\_{j\=1}^m (y\_j \- d\_j)^2 \\end{equation}\\] where \\(m\\) is the size of the training data set. The optimal NN for given data is obtained by finding the weights \\(w\_i\\) that minimize this error function \\(E\\). Once we have the optimal weights, we have a calibrated **feed\-forward** neural net. For a given squashing function \\(f\\), and input \\(x \= \[x\_1, x\_2, \\ldots, x\_n]'\\), the multi\-layer perceptron will given an output at each node of the hidden layer of \\\[\\begin{equation} y(x) \= f \\left(w\_0 \+ \\sum\_{j\=1}^n w\_j x\_j \\right) \\end{equation}\\] and then at the final output level the node is \\\[\\begin{equation} z(x) \= f\\left( w\_0 \+ \\sum\_{i\=1}^N w\_i \\cdot f \\left(w\_{0i} \+ \\sum\_{j\=1}^n w\_{ji} x\_j \\right) \\right) \\end{equation}\\] where the nested structure of the neural net is quite apparent. The \\(f\\) functions are also known as **activation** functions. 13\.7 Relationship to Logit/Probit Models ----------------------------------------- The special model above with a single perceptron is actually nothing else than the logit regression model. If the squashing function is taken to the cumulative normal distribution, then the model becomes the probit regression model. In both cases though, the model is fitted by minimizing squared errors, not by maximum likelihood, which is how standard logit/probit models are parameterized. 13\.8 Connection to hyperplanes ------------------------------- Note that in binary squashing functions, the net input is passed through a sigmoid function and then compared to the threshold level \\(T\\). This sigmoid function is a monotone one. Hence, this means that there must be a level \\(T'\\) at which the net input \\(\\sum\_{i\=1}^n w\_i x\_i\\) must be for the result to be on the cusp. The following is the equation for a hyperplane \\\[\\begin{equation} \\sum\_{i\=1}^n w\_i x\_i \= T' \\end{equation}\\] which also implies that observations in \\(n\\)\-dimensional space of the inputs \\(x\_i\\), must lie on one side or the other of this hyperplane. 
If above the hyperplane, then \\(y\=1\\), else \\(y\=0\\). Hence, single perceptrons in neural nets have a simple geometrical intuition. 13\.9 Gradient Descent ---------------------- We start with a simple function that we want to minimize. But let’s plot it first to see where the minimum lies. ``` f = function(x) { result = 3*x^2 - 5*x + 10 } x = seq(-4,4,0.1) plot(x,f(x),type="l") ``` Next, we solve for \\(x\_{min}\\), the value at which the function is minimized, which appears to lie between \\(0\\) and \\(2\\). We do this by gradient descent, from an initial value of \\(x\=\-3\\). We then run down the function to its minimum but manage the rate of descent using a parameter \\(\\eta\=0\.10\\). The evolution (descent) equation is called recursively through the following dynamics for \\(x\\): \\\[ x \\leftarrow x \- \\eta \\cdot \\frac{\\partial f}{\\partial x} \\] If the gradient is positive, then we need to head in the opposite direction to reach the minimum, and hence, we have a negative sign in front of the modification term above. But first we need to calculate the gradient, and then the descent. To repeat, first gradient, then descent! ``` x = -3 eta = 0.10 dx = 0.0001 grad = (f(x+dx)-f(x))/dx x = x - eta*grad print(x) ``` ``` ## [1] -0.70003 ``` We see that \\(x\\) has moved closer to the value that minimizes the function. We can repeat this many times till it settles down at the minimum, each round of updates being called an **epoch**. We run 20 epochs next. ``` for (j in 1:20) { grad = (f(x+dx)-f(x))/dx x = x - eta*grad print(c(j,x,grad,f(x))) } ``` ``` ## [1] 1.000000 0.219958 -9.199880 9.045355 ## [1] 2.0000000 0.5879532 -3.6799520 8.0973009 ## [1] 3.0000000 0.7351513 -1.4719808 7.9455858 ## [1] 4.0000000 0.7940305 -0.5887923 7.9213008 ## [1] 5.0000000 0.8175822 -0.2355169 7.9174110 ## [1] 6.00000000 0.82700288 -0.09420677 7.91678689 ## [1] 7.00000000 0.83077115 -0.03768271 7.91668636 ## [1] 8.00000000 0.83227846 -0.01507308 7.91667000 ## [1] 9.000000000 0.832881384 -0.006029233 7.916667279 ## [1] 10.000000000 0.833122554 -0.002411693 7.916666800 ## [1] 11.0000000000 0.8332190215 -0.0009646773 7.9166667059 ## [1] 12.0000000000 0.8332576086 -0.0003858709 7.9166666839 ## [1] 13.0000000000 0.8332730434 -0.0001543484 7.9166666776 ## [1] 1.400000e+01 8.332792e-01 -6.173935e-05 7.916667e+00 ## [1] 1.500000e+01 8.332817e-01 -2.469573e-05 7.916667e+00 ## [1] 1.600000e+01 8.332827e-01 -9.878303e-06 7.916667e+00 ## [1] 1.700000e+01 8.332831e-01 -3.951310e-06 7.916667e+00 ## [1] 1.800000e+01 8.332832e-01 -1.580540e-06 7.916667e+00 ## [1] 1.900000e+01 8.332833e-01 -6.322054e-07 7.916667e+00 ## [1] 2.000000e+01 8.332833e-01 -2.528822e-07 7.916667e+00 ``` It has converged really quickly! At convergence, the gradient goes to zero. 13\.10 Feedback and Backpropagation ----------------------------------- What distinguishes neural nets from ordinary nonlinear regressions is feedback. Neural nets **learn** from feedback as they are used. Feedback is implemented using a technique called backpropagation. Suppose you have a calibrated NN. Now you obtain another observation of data and run it through the NN. Comparing the output value \\(y\\) with the desired observation \\(d\\) gives you the error for this observation. If the error is large, then it makes sense to update the weights in the NN, so as to self\-correct. This process of self\-correction is known as **gradient descent** via **backpropagation**. 
The benefit of gradient descent via backpropagation is that a full re\-fitting exercise may not be required. Using simple rules the correction to the weights can be applied gradually in a learning manner. Lets look at fitting with a simple example using a single perceptron. Consider the \\(k\\)\-th perceptron. The sigmoid of this is \\\[\\begin{equation} y\_k \= \\frac{1}{1\+\\exp\\left( \- \\sum\_{i\=1}^n w\_{i} x\_{ik} \\right)} \\end{equation}\\] where \\(y\_k\\) is the output of the \\(k\\)\-th perceptron, and \\(x\_{ik}\\) is the \\(i\\)\-th input to the \\(k\\)\-th perceptron. The error from this observation is \\((y\_k \- d\_k)\\). Recalling that \\(E \= \\frac{1}{2} \\sum\_{j\=1}^m (y\_j \- d\_j)^2\\), we may compute the change in error with respect to the \\(j\\)\-th output, i.e. \\\[\\begin{equation} \\frac{\\partial E}{\\partial y\_j} \= y\_j \- d\_j, \\quad \\forall j \\end{equation}\\] Note also that \\\[\\begin{equation} \\frac{dy\_j}{dx\_{ij}} \= y\_j (1\-y\_j) w\_i \\end{equation}\\] and \\\[\\begin{equation} \\frac{dy\_j}{dw\_i} \= y\_j (1\-y\_j) x\_{ij} \\end{equation}\\] Next, we examine how the error changes with input values: \\\[\\begin{equation} \\frac{\\partial E}{\\partial x\_{ij}} \= \\frac{\\partial E}{\\partial y\_j} \\times \\frac{dy\_j}{dx\_{ij}} \= (y\_j \- d\_j) y\_j (1\-y\_j) w\_i \\end{equation}\\] We can now get to the value of interest, which is the change in error value with respect to the weights \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{i}} \= \\frac{\\partial E}{\\partial y\_j} \\times \\frac{dy\_j}{dw\_i} \= (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ij}, \\forall i \\end{equation}\\] We thus have one equation for each weight \\(w\_i\\) and each observation \\(j\\). (Note that the \\(w\_i\\) apply across perceptrons. A more general case might be where we have weights for each perceptron, i.e., \\(w\_{ij}\\).) Instead of updating on just one observation, we might want to do this for many observations in which case the error derivative would be \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{i}} \= \\sum\_j (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ij}, \\forall i \\end{equation}\\] Therefore, if \\(\\frac{\\partial E}{\\partial w\_{i}} \> 0\\), then we would need to reduce \\(w\_i\\) to bring down \\(E\\). By how much? Here is where some art and judgment is imposed. There is a tuning parameter \\(0\<\\gamma\<1\\) which we apply to \\(w\_i\\) to shrink it when the weight needs to be reduced. Likewise, if the derivative \\(\\frac{\\partial E}{\\partial w\_{i}} \< 0\\), then we would increase \\(w\_i\\) by dividing it by \\(\\gamma\\). This is known as **gradient descent**. 13\.11 Backpropagation ---------------------- ### 13\.11\.1 Extension to many observations Our notation now becomes extended to weights \\(w\_{ik}\\) which stand for the weight on the \\(i\\)\-th input to the \\(k\\)\-th perceptron. The derivative for the error becomes, across all observations \\(j\\): \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{ik}} \= \\sum\_j (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ikj}, \\forall i,k \\end{equation}\\] Hence all nodes in the network have their weights updated. In many cases of course, we can just take the derivatives numerically. Change the weight \\(w\_{ik}\\) and see what happens to the error. However, the formal process of finding all the gradients using a fast algorithm via backpropagation requires more formal calculus, and the rest of this section provides detailed analysis showing how this is done. 
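Before the general treatment, the single\-perceptron case above can be checked with a small numerical sketch. The data below are simulated purely for illustration (none of this appears in the original example), and the update applies plain gradient descent using the derivative \\(\\partial E/\\partial w\_{i}\\) summed over observations, exactly as derived above.

```
#Numerical check of the single-perceptron gradient formula on simulated data
set.seed(42)
n = 200
x = cbind(1, matrix(rnorm(n*2), n, 2))        #first column of 1s acts as a bias input
d = as.numeric(x[,2] + x[,3] + rnorm(n) > 0)  #desired outputs
w = rep(0, 3)                                 #initial weights
eta = 0.01                                    #learning rate
for (epoch in 1:500) {
  y = 1/(1 + exp(-x %*% w))                   #perceptron outputs
  grad = t(x) %*% ((y - d)*y*(1 - y))         #dE/dw, summed over observations
  w = w - eta*grad                            #gradient descent step
}
print(t(w))
print(table(d, as.numeric(y > 0.5)))          #in-sample confusion matrix
```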
13\.12 Backprop: Detailed Analysis ---------------------------------- In this section, we dig deeper into the incredible algebra that drives the unreasonable effectiveness of deep learning algorithms. To do this, we will work with a richer algebra, and extended notation. ### 13\.12\.1 Net Input Assume several hidden layers in a deep learning net (DLN), indexed by \\(r\=1,2,...,R\\). Consider two adjacent layers \\((r)\\) and \\((r\+1\)\\). Each layer as number of nodes \\(n\_r\\) and \\(n\_{r\+1}\\), respectively. The output of node \\(i\\) in layer \\((r)\\) is \\(Z\_i^{(r)} \= f(a\_i^{(r)})\\). The function \\(f\\) is the *activation* function. At node \\(j\\) in layer \\((r\+1\)\\), these inputs are taken and used to compute an intermediate value, known as the *net value*: \\\[ a\_j^{(r\+1\)} \= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)}Z\_i^{r} \+ b\_j^{(r\+1\)} \\] ### 13\.12\.2 Activation Function The net value is then ingested by an activation function to create the output from layer \\((r\+1\)\\). \\\[ Z\_j^{(r\+1\)} \= f(a\_j^{(r\+1\)}) \\] The activation functions may be simple sigmoid functions or other functions such as ReLU (Rectified Linear Unit). The final output of the DLN is from layer \\((R)\\), i.e., \\(Z\_j^{(R)}\\). For the first hidden layer \\(r\=1\\), and the net input will be based on the original data \\(X^{(m)}\\) \\\[ a\_j^{(1\)} \= \\sum\_{m\=1}^M W\_{mj}^{(1\)} X\_m \+ b\_j^{(1\)} \\] ### 13\.12\.3 Loss Function Fitting the DLN is an exercise where the best weights \\(\\{W,b\\} \= \\{W\_{ij}^{(r\+1\)}, b\_j^{(r\+1\)}\\},\\forall r\\) for all layers are determined to minimize a loss function generally denoted as \\\[ \\min\_{W,b} \\sum\_{m\=1}^M L\_m\[h(X^{(m)}),T^{(m)}] \\] where \\(M\\) is the number of training observations (rows in the data set), \\(T^{(m)}\\) is the true value of the output, and \\(h(X^{(m)})\\) is the model output from the DLN. The loss function \\(L\_m\\) quantifies the difference between the model output and the true output. ### 13\.12\.4 Gradients To solve this minimization problem, we need gradients for all \\(W,b\\). These are denoted as \\\[ \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}}, \\quad \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}}, \\quad \\forall r\+1, j \\] We write out these gradients using the chain rule: \\\[ \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial W\_{ij}^{(r\+1\)}} \= \\delta\_j^{(r\+1\)} \\cdot Z\_i^{(r)} \\] where we have written \\\[ \\delta\_j^{(r\+1\)} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\] Likewise, we have \\\[ \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial b\_{j}^{(r\+1\)}} \= \\delta\_j^{(r\+1\)} \\cdot 1 \= \\delta\_j^{(r\+1\)} \\] ### 13\.12\.5 Delta Values So we need to find all the \\(\\delta\_j^{(r\+1\)}\\) values. To do so, we need the following intermediate calculation. 
\\\[ \\begin{align} a\_j^{(r\+1\)} \&\= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)} Z\_i^{(r)} \+ b\_j^{(r\+1\)} \\\\ \&\= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)} f(a\_i^{(r)}) \+ b\_j^{(r\+1\)} \\\\ \\end{align} \\] This implies that \\\[ \\frac{\\partial a\_j^{(r\+1\)}}{\\partial a\_i^{(r)}} \= W\_{ij}^{(r\+1\)} \\cdot f'(a\_i^{(r)}) \\] Using this we may now rewrite the \\(\\delta\\) value for layer \\((r)\\) as follows: \\\[ \\begin{align} \\delta\_i^{(r)} \&\= \\frac{\\partial L\_m}{\\partial a\_i^{(r)}} \\\\ \&\= \\sum\_{j\=1}^{n\_{r\+1}} \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial a\_i^{(r)}} \\\\ \&\= \\sum\_{j\=1}^{n\_{r\+1}}\\delta\_j^{(r\+1\)} \\cdot W\_{ij}^{(r\+1\)} \\cdot f'(a\_i^{(r)}) \\\\ \&\= f'(a\_i^{(r)}) \\cdot \\sum\_{j\=1}^{n\_{r\+1}}\\delta\_j^{(r\+1\)} \\cdot W\_{ij}^{(r\+1\)} \\end{align} \\] ### 13\.12\.6 Output layer The output layer takes as input the last hidden layer \\({(R)}\\)’s output \\(Z\_j^{(R)}\\), and computes the net input \\(a\_j^{(R\+1\)}\\) and then the activation function \\(h(a\_j^{(R\+1\)})\\) is applied to generate the final output. \\\[ \\begin{align} a\_j^{(R\+1\)} \&\= \\sum\_{i\=1}^{n\_R} W\_{ij}^{(R\+1\)} Z\_j^{(R)} \+ b\_j^{(R\+1\)} \\\\ \\mbox{Final output} \&\= h(a\_j^{(R\+1\)}) \\end{align} \\] The \\(\\delta\\) for the final layer is simple. \\\[ \\delta\_j^{(R\+1\)} \= \\frac{\\partial L\_m}{\\partial a\_j^{(R\+1\)}} \= h'(a\_j^{(R\+1\)}) \\] ### 13\.12\.7 Feedforward and Backward Propagation Fitting the DLN requires getting the weights \\(\\{W,b\\}\\) that minimize \\(L\_m\\). These are done using gradient descent, i.e., \\\[ \\begin{align} W\_{ij}^{(r\+1\)} \\leftarrow W\_{ij}^{(r\+1\)} \- \\eta \\cdot \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}} \\\\ b\_{j}^{(r\+1\)} \\leftarrow b\_{j}^{(r\+1\)} \- \\eta \\cdot \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}} \\end{align} \\] Here \\(\\eta\\) is the learning rate parameter. We iterate on these functions until the gradients become zero, and the weights discontinue changing with each update, also known as an **epoch**. The steps are as follows: 1. Start with an initial set of weights \\(\\{w,b\\}\\). 2. Feedforward the initial data and weights into the DLN, and find the \\(\\{a\_i^{(r)}, Z\_i^{(r)}\\}, \\forall r,i\\). This is one forward pass through the network. 3. Then, using backpropagation, compute all \\(\\delta\_i^{(r)}, \\forall r,i\\). 4. Use these \\(\\delta\_i^{(r)}\\) values to get all the new gradients. 5. Apply gradient descent to get new weights. 6. Keep iterating steps 2\-5, until the chosen number of epochs is completed. The entire process is summarized in Figure [13\.1](NeuralNetsDeepLearning.html#fig:BackPropSummary): Figure 13\.1: Quick Summary of Backpropagation 13\.13 Research Applications ---------------------------- * Discovering Black\-Scholes: See the paper by Hutchinson, Lo, and Poggio ([1994](#ref-RePEc:bla:jfinan:v:49:y:1994:i:3:p:851-89)), A Nonparametric Approach to Pricing and Hedging Securities Via Learning Networks, The Journal of Finance, Vol XLIX. * Forecasting: See the paper by Ghiassi, Saidane, and Zimbra ([2005](#ref-CIS-201490)). “A dynamic artificial neural network model for forecasting time series events,” International Journal of Forecasting 21, 341–362\. 13\.14 Package *neuralnet* in R ------------------------------- The package focuses on multi\-layer perceptrons (MLP), see Bishop (1995\), which are well applicable when modeling functional relation\- ships. 
The underlying structure of an MLP is a directed graph, i.e. it consists of vertices and directed edges, in this context called neurons and synapses. See Bishop (1995\), Neural networks for pattern recognition. Oxford University Press, New York. The data set used by this package as an example is the infert data set that comes bundled with R. This data set examines infertility after induced and spontaneous abortion. The variables **induced** and **spontaneous** take values in \\(\\{0,1,2\\}\\) indicating the number of previous abortions. The variable **parity** denotes the number of births. The variable **case** equals 1 if the woman is infertile and 0 otherwise. The idea is to model infertility. ``` library(neuralnet) data(infert) print(names(infert)) ``` ``` ## [1] "education" "age" "parity" "induced" ## [5] "case" "spontaneous" "stratum" "pooled.stratum" ``` ``` head(infert) ``` ``` ## education age parity induced case spontaneous stratum pooled.stratum ## 1 0-5yrs 26 6 1 1 2 1 3 ## 2 0-5yrs 42 1 1 1 0 2 1 ## 3 0-5yrs 39 6 2 1 0 3 4 ## 4 0-5yrs 34 4 2 1 0 4 2 ## 5 6-11yrs 35 3 1 1 1 5 32 ## 6 6-11yrs 36 4 2 1 1 6 36 ``` ``` summary(infert) ``` ``` ## education age parity induced ## 0-5yrs : 12 Min. :21.00 Min. :1.000 Min. :0.0000 ## 6-11yrs:120 1st Qu.:28.00 1st Qu.:1.000 1st Qu.:0.0000 ## 12+ yrs:116 Median :31.00 Median :2.000 Median :0.0000 ## Mean :31.50 Mean :2.093 Mean :0.5726 ## 3rd Qu.:35.25 3rd Qu.:3.000 3rd Qu.:1.0000 ## Max. :44.00 Max. :6.000 Max. :2.0000 ## case spontaneous stratum pooled.stratum ## Min. :0.0000 Min. :0.0000 Min. : 1.00 Min. : 1.00 ## 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:21.00 1st Qu.:19.00 ## Median :0.0000 Median :0.0000 Median :42.00 Median :36.00 ## Mean :0.3347 Mean :0.5766 Mean :41.87 Mean :33.58 ## 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:62.25 3rd Qu.:48.25 ## Max. :1.0000 Max. :2.0000 Max. :83.00 Max. :63.00 ``` ### 13\.14\.1 First step, fit a logit model to the data. ``` res = glm(case ~ age+parity+induced+spontaneous, family=binomial(link="logit"), data=infert) summary(res) ``` ``` ## ## Call: ## glm(formula = case ~ age + parity + induced + spontaneous, family = binomial(link = "logit"), ## data = infert) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.6281 -0.8055 -0.5299 0.8669 2.6141 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -2.85239 1.00428 -2.840 0.00451 ** ## age 0.05318 0.03014 1.764 0.07767 . ## parity -0.70883 0.18091 -3.918 8.92e-05 *** ## induced 1.18966 0.28987 4.104 4.06e-05 *** ## spontaneous 1.92534 0.29863 6.447 1.14e-10 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 316.17 on 247 degrees of freedom ## Residual deviance: 260.94 on 243 degrees of freedom ## AIC: 270.94 ## ## Number of Fisher Scoring iterations: 4 ``` ### 13\.14\.2 Second step, fit a NN ``` nn = neuralnet(case~age+parity+induced+spontaneous,hidden=2,data=infert) ``` ``` print(names(nn)) ``` ``` ## [1] "call" "response" "covariate" ## [4] "model.list" "err.fct" "act.fct" ## [7] "linear.output" "data" "net.result" ## [10] "weights" "startweights" "generalized.weights" ## [13] "result.matrix" ``` ``` nn$result.matrix ``` ``` ## 1 ## error 19.75482621709 ## reached.threshold 0.00796839405 ## steps 3891.00000000000 ## Intercept.to.1layhid1 -2.39345712918 ## age.to.1layhid1 -0.51858603247 ## parity.to.1layhid1 0.26786607381 ## induced.to.1layhid1 -346.33808632368 ## spontaneous.to.1layhid1 6.50949229932 ## Intercept.to.1layhid2 6.18035131278 ## age.to.1layhid2 -0.13013668178 ## parity.to.1layhid2 2.31764808626 ## induced.to.1layhid2 -2.78558680449 ## spontaneous.to.1layhid2 -4.58533007894 ## Intercept.to.case 1.08152541274 ## 1layhid.1.to.case -6.43770238799 ## 1layhid.2.to.case -0.93730921525 ``` ``` plot(nn) #Run this plot from the command line. #<img src="image_files/nn.png" height=510 width=740> ``` ``` head(cbind(nn$covariate,nn$net.result[[1]])) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## 1 26 6 1 2 0.1522843862 ## 2 42 1 1 0 0.5553601474 ## 3 39 6 2 0 0.1442907090 ## 4 34 4 2 0 0.1482055348 ## 5 35 3 1 1 0.3599162573 ## 6 36 4 2 1 0.4743072882 ``` ### 13\.14\.3 Logit vs NN We can compare the output to that from the logit model, by looking at the correlation of the fitted values from both models. ``` cor(cbind(nn$net.result[[1]],res$fitted.values)) ``` ``` ## [,1] [,2] ## [1,] 1.0000000000 0.8869825759 ## [2,] 0.8869825759 1.0000000000 ``` ### 13\.14\.4 Backpropagation option We can add in an option for back propagation, and see how the results change. ``` nn2 = neuralnet(case~age+parity+induced+spontaneous, hidden=2, algorithm="rprop+", data=infert) print(cor(cbind(nn2$net.result[[1]],res$fitted.values))) ``` ``` ## [,1] [,2] ## [1,] 1.0000000000 0.9157468405 ## [2,] 0.9157468405 1.0000000000 ``` ``` cor(cbind(nn2$net.result[[1]],nn$fitted.result[[1]])) ``` ``` ## [,1] ## [1,] 1 ``` Given a calibrated neural net, how do we use it to compute values for a new observation? Here is an example. 
``` compute(nn,covariate=matrix(c(30,1,0,1),1,4)) ``` ``` ## $neurons ## $neurons[[1]] ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 30 1 0 1 ## ## $neurons[[2]] ## [,1] [,2] [,3] ## [1,] 1 0.00001403868578 0.5021422036 ## ## ## $net.result ## [,1] ## [1,] 0.6107725211 ``` 13\.15 Statistical Significance ------------------------------- We can assess statistical significance of the model as follows: ``` confidence.interval(nn,alpha=0.10) ``` ``` ## $lower.ci ## $lower.ci[[1]] ## $lower.ci[[1]][[1]] ## [,1] [,2] ## [1,] -15.8007772276 4.3682646706 ## [2,] -1.3384298107 -0.1876702868 ## [3,] -0.2530961989 1.4895025332 ## [4,] -346.3380863237 -3.6315599341 ## [5,] -0.2056362177 -5.6749552264 ## ## $lower.ci[[1]][[2]] ## [,1] ## [1,] 0.9354811195 ## [2,] -38.0986993664 ## [3,] -1.0879829307 ## ## ## ## $upper.ci ## $upper.ci[[1]] ## $upper.ci[[1]][[1]] ## [,1] [,2] ## [1,] 11.0138629693 7.99243795495 ## [2,] 0.3012577458 -0.07260307674 ## [3,] 0.7888283465 3.14579363935 ## [4,] -346.3380863237 -1.93961367486 ## [5,] 13.2246208164 -3.49570493146 ## ## $upper.ci[[1]][[2]] ## [,1] ## [1,] 1.2275697059 ## [2,] 25.2232945904 ## [3,] -0.7866354998 ## ## ## ## $nic ## [1] 21.21884675 ``` The confidence level is \\((1\-\\alpha)\\). This is at the 90% level, and at the 5% level we get: ``` confidence.interval(nn,alpha=0.95) ``` ``` ## $lower.ci ## $lower.ci[[1]] ## $lower.ci[[1]][[1]] ## [,1] [,2] ## [1,] -2.9045845818 6.1112691082 ## [2,] -0.5498409484 -0.1323300362 ## [3,] 0.2480054218 2.2860766817 ## [4,] -346.3380863237 -2.8178378500 ## [5,] 6.2534913605 -4.6268698737 ## ## $lower.ci[[1]][[2]] ## [,1] ## [1,] 1.0759577641 ## [2,] -7.6447150209 ## [3,] -0.9430533514 ## ## ## ## $upper.ci ## $upper.ci[[1]] ## $upper.ci[[1]][[1]] ## [,1] [,2] ## [1,] -1.8823296766 6.2494335173 ## [2,] -0.4873311166 -0.1279433273 ## [3,] 0.2877267259 2.3492194908 ## [4,] -346.3380863237 -2.7533357590 ## [5,] 6.7654932382 -4.5437902841 ## ## $upper.ci[[1]][[2]] ## [,1] ## [1,] 1.0870930614 ## [2,] -5.2306897551 ## [3,] -0.9315650791 ## ## ## ## $nic ## [1] 21.21884675 ``` 13\.16 Deep Learning Overview ----------------------------- The Wikipedia entry is excellent: <https://en.wikipedia.org/wiki/Deep_learning> <http://deeplearning.net/> [https://www.youtube.com/watch?v\=S75EdAcXHKk](https://www.youtube.com/watch?v=S75EdAcXHKk) [https://www.youtube.com/watch?v\=czLI3oLDe8M](https://www.youtube.com/watch?v=czLI3oLDe8M) Article on Google’s Deep Learning team’s work on image processing: [https://medium.com/backchannel/inside\-deep\-dreams\-how\-google\-made\-its\-computers\-go\-crazy\-83b9d24e66df\#.gtfwip891](https://medium.com/backchannel/inside-deep-dreams-how-google-made-its-computers-go-crazy-83b9d24e66df#.gtfwip891) ### 13\.16\.1 Grab Some Data The **mlbench** package contains some useful datasets for testing machine learning algorithms. One of these is a small dataset of cancer cases, and contains ten characteristics of cancer cells, and a flag for whether cancer is present or the cells are benign. We use this dataset to try out some deep learning algorithms in R, and see if they improve on vanilla neural nets. First, let’s fit a neural net to this data. We’ll fit this using the **deepnet** package, which allows for more hidden layers. ### 13\.16\.2 Simple Example ``` library(neuralnet) library(deepnet) ``` First, we use randomly generated data, and train the NN. ``` #From the **deepnet** package by Xiao Rong. First train the model using one hidden layer. 
Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2)) Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1)) x <- matrix(c(Var1, Var2), nrow = 100, ncol = 2) y <- c(rep(1, 50), rep(0, 50)) plot(x,col=y+1) ``` ``` nn <- nn.train(x, y, hidden = c(5)) ``` ### 13\.16\.3 Prediction ``` #Next, predict the model. This is in-sample. test_Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2)) test_Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1)) test_x <- matrix(c(test_Var1, test_Var2), nrow = 100, ncol = 2) yy <- nn.predict(nn, test_x) ``` ### 13\.16\.4 Test Predictive Ability of the Model ``` #The output is just a number that is higher for one class and lower for another. #One needs to separate these to get groups. yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(yhat,y)) ``` ``` ## y ## yhat 0 1 ## 0 49 0 ## 1 1 50 ``` ### 13\.16\.5 Prediction Error ``` #Testing the results. err <- nn.test(nn, test_x, y, t=mean(yy)) print(err) ``` ``` ## [1] 0.005 ``` 13\.17 Cancer dataset --------------------- Now we’ll try the Breast Cancer data set. First we use the NN in the **deepnet** package. ``` library(mlbench) data("BreastCancer") head(BreastCancer) ``` ``` ## Id Cl.thickness Cell.size Cell.shape Marg.adhesion Epith.c.size ## 1 1000025 5 1 1 1 2 ## 2 1002945 5 4 4 5 7 ## 3 1015425 3 1 1 1 2 ## 4 1016277 6 8 8 1 3 ## 5 1017023 4 1 1 3 2 ## 6 1017122 8 10 10 8 7 ## Bare.nuclei Bl.cromatin Normal.nucleoli Mitoses Class ## 1 1 3 1 1 benign ## 2 10 3 2 1 benign ## 3 2 3 1 1 benign ## 4 4 3 7 1 benign ## 5 1 3 1 1 benign ## 6 10 9 7 1 malignant ``` ``` BreastCancer = BreastCancer[which(complete.cases(BreastCancer)==TRUE),] ``` ``` y = as.matrix(BreastCancer[,11]) y[which(y=="benign")] = 0 y[which(y=="malignant")] = 1 y = as.numeric(y) x = as.numeric(as.matrix(BreastCancer[,2:10])) x = matrix(as.numeric(x),ncol=9) nn <- nn.train(x, y, hidden = c(5)) yy = nn.predict(nn, x) yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 424 20 ## 1 5 234 ``` ### 13\.17\.1 Compare to a simple NN It does really well. Now we compare it to a simple neural net. ``` library(neuralnet) df = data.frame(cbind(x,y)) nn = neuralnet(y~V1+V2+V3+V4+V5+V6+V7+V8+V9,data=df,hidden = 5) yy = nn$net.result[[1]] yhat = matrix(0,length(y),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 429 15 ## 1 0 239 ``` Somehow, the **neuralnet** package appears to perform better. Which is interesting. But the deep learning net was not “deep” \- it had only one hidden layer. 13\.18 Deeper Net: More Hidden Layers ------------------------------------- Now we’ll try the **deepnet** function with two hidden layers. ``` dnn <- sae.dnn.train(x, y, hidden = c(5,5)) ``` ``` ## begin to train sae ...... ``` ``` ## training layer 1 autoencoder ... ``` ``` ## training layer 2 autoencoder ... ``` ``` ## sae has been trained. ``` ``` ## begin to train deep nn ...... ``` ``` ## deep nn has been trained. ``` ``` yy = nn.predict(dnn, x) yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 119 325 ## 1 0 239 ``` This performs terribly. Maybe there is something wrong here. 13\.19 Using h2o ---------------- Here we start up a server using all cores of the machine, and then use the h2o package’s deep learning toolkit to fit a model. 
13\.21 MxNet Package -------------------- The package needs the correct version of Java to run.
``` #From R-bloggers require(mlbench) ## Loading required package: mlbench require(mxnet) ## Loading required package: mxnet ## Loading required package: methods data(Sonar, package="mlbench") Sonar[,61] = as.numeric(Sonar[,61])-1 train.ind = c(1:50, 100:150) train.x = data.matrix(Sonar[train.ind, 1:60]) train.y = Sonar[train.ind, 61] test.x = data.matrix(Sonar[-train.ind, 1:60]) test.y = Sonar[-train.ind, 61] mx.set.seed(0) model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2, out_activation="softmax", num.round=100, array.batch.size=15, learning.rate=0.25, momentum=0.9, eval.metric=mx.metric.accuracy) preds = predict(model, test.x) ## Auto detect layout of input matrix, use rowmajor.. pred.label = max.col(t(preds))-1 table(pred.label, test.y) ``` ### 13\.21\.1 Cancer Data Now an example using the BreastCancer data set. ``` data("BreastCancer") BreastCancer = BreastCancer[which(complete.cases(BreastCancer)==TRUE),] y = as.matrix(BreastCancer[,11]) y[which(y=="benign")] = 0 y[which(y=="malignant")] = 1 y = as.numeric(y) x = as.numeric(as.matrix(BreastCancer[,2:10])) x = matrix(as.numeric(x),ncol=9) train.x = x train.y = y test.x = x test.y = y mx.set.seed(0) model <- mx.mlp(train.x, train.y, hidden_node=5, out_node=10, out_activation="softmax", num.round=30, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy) preds = predict(model, test.x) ## Auto detect layout of input matrix, use rowmajor.. pred.label = max.col(t(preds))-1 table(pred.label, test.y) ``` 13\.22 Convolutional Neural Nets (CNNs) --------------------------------------- To be written See: [https://adeshpande3\.github.io/adeshpande3\.github.io/A\-Beginner's\-Guide\-To\-Understanding\-Convolutional\-Neural\-Networks/](https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/) 13\.23 Recurrent Neural Nets (RNNs) ----------------------------------- To be written 13\.1 Overview -------------- Neural networks are special forms of nonlinear regressions where the decision system for which the NN is built mimics the way the brain is supposed to work (whether it works like a NN is up for grabs of course). Terrific online book: <http://neuralnetworksanddeeplearning.com/> 13\.2 Perceptrons ----------------- The basic building block of a neural network is a perceptron. A perceptron is like a neuron in a human brain. It takes inputs (e.g. sensory in a real brain) and then produces an output signal. An entire network of perceptrons is called a neural net. For example, if you make a credit card application, then the inputs comprise a whole set of personal data such as age, sex, income, credit score, employment status, etc, which are then passed to a series of perceptrons in parallel. This is the first **layer** of assessment. Each of the perceptrons then emits an output signal which may then be passed to another layer of perceptrons, who again produce another signal. This second layer is often known as the **hidden** perceptron layer. Finally, after many hidden layers, the signals are all passed to a single perceptron which emits the decision signal to issue you a credit card or to deny your application. Perceptrons may emit continuous signals or binary \\((0,1\)\\) signals. In the case of the credit card application, the final perceptron is a binary one. Such perceptrons are implemented by means of **squashing** functions. 
For example, a really simple squashing function is one that issues a 1 if the function value is positive and a 0 if it is negative. More generally, \\\[\\begin{equation} S(x) \= \\left\\{ \\begin{array}{cl} 1 \& \\mbox{if } g(x)\>T \\\\ 0 \& \\mbox{if } g(x) \\leq T \\end{array} \\right. \\end{equation}\\] where \\(g(x)\\) is any function taking positive and negative values, for instance, \\(g(x) \\in (\-\\infty, \+\\infty)\\). \\(T\\) is a threshold level. A neural network with many layers is also known as a **multi\-layered** perceptron, i.e., all those perceptrons together may be thought of as one single, big perceptron. **x** is the input layer, **y** is the hidden layer, and **z** is the output layer. 13\.3 Deep Learning ------------------- Neural net models are related to **Deep Learning**, where the number of hidden layers is vastly greater than was possible in the past when computational power was limited. Now, deep learning nets cascade through 20\-30 layers, resulting in a surprising ability of neural nets in mimicking human learning processes. see: <http://en.wikipedia.org/wiki/Deep_learning> And also see: <http://deeplearning.net/> 13\.4 Binary NNs ---------------- Binary NNs are also thought of as a category of classifier systems. They are widely used to divide members of a population into classes. But NNs with continuous output are also popular. As we will see later, researchers have used NNs to learn the Black\-Scholes option pricing model. Areas of application: credit cards, risk management, forecasting corporate defaults, forecasting economic regimes, measuring the gains from mass mailings by mapping demographics to success rates. 13\.5 Squashing Functions ------------------------- Squashing functions may be more general than just binary. They usually squash the output signal into a narrow range, usually \\((0,1\)\\). A common choice is the logistic function (also known as the sigmoid function). \\\[\\begin{equation} f(x) \= \\frac{1}{1\+e^{\-w\\;x}} \\end{equation}\\] Think of \\(w\\) as the adjustable weight. Another common choice is the probit function \\\[\\begin{equation} f(x) \= \\Phi(w\\;x) \\end{equation}\\] where \\(\\Phi(\\cdot)\\) is the cumulative normal distribution function. 13\.6 How does the NN work? --------------------------- The easiest way to see how a NN works is to think of the simplest NN, i.e. one with a single perceptron generating a binary output. The perceptron has \\(n\\) inputs, with values \\(x\_i, i\=1\...n\\) and current weights \\(w\_i, i\=1\...n\\). It generates an output \\(y\\). The **net input** is defined as \\\[\\begin{equation} \\sum\_{i\=1}^n w\_i x\_i \\end{equation}\\] If a function of the net input is greater than a threshold \\(T\\), then the output signal is \\(y\=1\\), and if it is less than \\(T\\), the output is \\(y\=0\\). The actual output is called the **desired** output and is denoted \\(d \= \\{0,1\\}\\). Hence, the **training** data provided to the NN comprises both the inputs \\(x\_i\\) and the desired output \\(d\\). The output of our single perceptron model will be the sigmoid function of the net input, i.e. \\\[\\begin{equation} y \= \\frac{1}{1\+\\exp\\left( \- \\sum\_{i\=1}^n w\_i x\_i \\right)} \\end{equation}\\] For a given input set, the error in the NN is given by some loss function, an example of which is below: \\\[\\begin{equation} E \= \\frac{1}{2} \\sum\_{j\=1}^m (y\_j \- d\_j)^2 \\end{equation}\\] where \\(m\\) is the size of the training data set. 
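To fix ideas before turning to calibration, here is a minimal R sketch of a single sigmoid perceptron and the loss \\(E\\) above; the weight vector and the tiny training set are made up purely for illustration and are not part of any example in this chapter.

```
# A minimal sketch: one sigmoid perceptron and the squared-error loss E.
# The weights and the tiny training set below are made up for illustration only.
sigmoid = function(z) 1/(1 + exp(-z))

x = matrix(c( 0.2,  1.5,
              1.0, -0.3,
             -0.7,  0.9), ncol = 2, byrow = TRUE)   # 3 observations, 2 inputs
d = c(1, 0, 1)      # desired (observed) outputs
w = c(0.5, -0.25)   # current weights, chosen arbitrarily

y = sigmoid(x %*% w)       # perceptron output, squashed into (0,1)
E = 0.5 * sum((y - d)^2)   # the loss function E defined above
print(y)
print(E)
```

With this in hand, calibrating the network is just a matter of choosing the weights to make \\(E\\) as small as possible, which is taken up next.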
The optimal NN for given data is obtained by finding the weights \\(w\_i\\) that minimize this error function \\(E\\). Once we have the optimal weights, we have a calibrated **feed\-forward** neural net. For a given squashing function \\(f\\), and input \\(x \= \[x\_1, x\_2, \\ldots, x\_n]'\\), the multi\-layer perceptron will give an output at each node of the hidden layer of \\\[\\begin{equation} y(x) \= f \\left(w\_0 \+ \\sum\_{j\=1}^n w\_j x\_j \\right) \\end{equation}\\] and then at the final output level the output is \\\[\\begin{equation} z(x) \= f\\left( w\_0 \+ \\sum\_{i\=1}^N w\_i \\cdot f \\left(w\_{0i} \+ \\sum\_{j\=1}^n w\_{ji} x\_j \\right) \\right) \\end{equation}\\] where the nested structure of the neural net is quite apparent. The \\(f\\) functions are also known as **activation** functions. 13\.7 Relationship to Logit/Probit Models ----------------------------------------- The special model above with a single perceptron is actually nothing other than the logit regression model. If the squashing function is taken to be the cumulative normal distribution, then the model becomes the probit regression model. In both cases, though, the model here is fitted by minimizing squared errors, not by maximum likelihood, which is how standard logit/probit models are parameterized. 13\.8 Connection to hyperplanes ------------------------------- Note that in binary squashing functions, the net input is passed through a sigmoid function and then compared to the threshold level \\(T\\). The sigmoid is monotone, so there must be a level \\(T'\\) that the net input \\(\\sum\_{i\=1}^n w\_i x\_i\\) must reach for the result to be exactly on the cusp. The following is the equation for a hyperplane \\\[\\begin{equation} \\sum\_{i\=1}^n w\_i x\_i \= T' \\end{equation}\\] which implies that observations in the \\(n\\)\-dimensional space of the inputs \\(x\_i\\) must lie on one side or the other of this hyperplane. If above the hyperplane, then \\(y\=1\\), else \\(y\=0\\). Hence, single perceptrons in neural nets have a simple geometrical intuition. 13\.9 Gradient Descent ---------------------- We start with a simple function that we want to minimize. Let’s plot it first to see where the minimum lies. ``` f = function(x) { result = 3*x^2 - 5*x + 10 } x = seq(-4,4,0.1) plot(x,f(x),type="l") ``` Next, we solve for \\(x\_{min}\\), the value at which the function is minimized, which appears to lie between \\(0\\) and \\(2\\). We do this by gradient descent, starting from an initial value of \\(x\=\-3\\). We then run down the function to its minimum, managing the rate of descent with a parameter \\(\\eta\=0\.10\\). The evolution (descent) equation is applied recursively through the following dynamics for \\(x\\): \\\[ x \\leftarrow x \- \\eta \\cdot \\frac{\\partial f}{\\partial x} \\] If the gradient is positive, then we need to head in the opposite direction to reach the minimum, and hence, we have a negative sign in front of the modification term above. But first we need to calculate the gradient, and then the descent. To repeat, first gradient, then descent! ``` x = -3 eta = 0.10 dx = 0.0001 grad = (f(x+dx)-f(x))/dx x = x - eta*grad print(x) ``` ``` ## [1] -0.70003 ``` We see that \\(x\\) has moved closer to the value that minimizes the function. We can repeat this many times till it settles down at the minimum, each round of updates being called an **epoch**. We run 20 epochs next.
``` for (j in 1:20) { grad = (f(x+dx)-f(x))/dx x = x - eta*grad print(c(j,x,grad,f(x))) } ``` ``` ## [1] 1.000000 0.219958 -9.199880 9.045355 ## [1] 2.0000000 0.5879532 -3.6799520 8.0973009 ## [1] 3.0000000 0.7351513 -1.4719808 7.9455858 ## [1] 4.0000000 0.7940305 -0.5887923 7.9213008 ## [1] 5.0000000 0.8175822 -0.2355169 7.9174110 ## [1] 6.00000000 0.82700288 -0.09420677 7.91678689 ## [1] 7.00000000 0.83077115 -0.03768271 7.91668636 ## [1] 8.00000000 0.83227846 -0.01507308 7.91667000 ## [1] 9.000000000 0.832881384 -0.006029233 7.916667279 ## [1] 10.000000000 0.833122554 -0.002411693 7.916666800 ## [1] 11.0000000000 0.8332190215 -0.0009646773 7.9166667059 ## [1] 12.0000000000 0.8332576086 -0.0003858709 7.9166666839 ## [1] 13.0000000000 0.8332730434 -0.0001543484 7.9166666776 ## [1] 1.400000e+01 8.332792e-01 -6.173935e-05 7.916667e+00 ## [1] 1.500000e+01 8.332817e-01 -2.469573e-05 7.916667e+00 ## [1] 1.600000e+01 8.332827e-01 -9.878303e-06 7.916667e+00 ## [1] 1.700000e+01 8.332831e-01 -3.951310e-06 7.916667e+00 ## [1] 1.800000e+01 8.332832e-01 -1.580540e-06 7.916667e+00 ## [1] 1.900000e+01 8.332833e-01 -6.322054e-07 7.916667e+00 ## [1] 2.000000e+01 8.332833e-01 -2.528822e-07 7.916667e+00 ``` It has converged really quickly! At convergence, the gradient goes to zero. 13\.10 Feedback and Backpropagation ----------------------------------- What distinguishes neural nets from ordinary nonlinear regressions is feedback. Neural nets **learn** from feedback as they are used. Feedback is implemented using a technique called backpropagation. Suppose you have a calibrated NN. Now you obtain another observation of data and run it through the NN. Comparing the output value \\(y\\) with the desired observation \\(d\\) gives you the error for this observation. If the error is large, then it makes sense to update the weights in the NN, so as to self\-correct. This process of self\-correction is known as **gradient descent** via **backpropagation**. The benefit of gradient descent via backpropagation is that a full re\-fitting exercise may not be required. Using simple rules the correction to the weights can be applied gradually in a learning manner. Lets look at fitting with a simple example using a single perceptron. Consider the \\(k\\)\-th perceptron. The sigmoid of this is \\\[\\begin{equation} y\_k \= \\frac{1}{1\+\\exp\\left( \- \\sum\_{i\=1}^n w\_{i} x\_{ik} \\right)} \\end{equation}\\] where \\(y\_k\\) is the output of the \\(k\\)\-th perceptron, and \\(x\_{ik}\\) is the \\(i\\)\-th input to the \\(k\\)\-th perceptron. The error from this observation is \\((y\_k \- d\_k)\\). Recalling that \\(E \= \\frac{1}{2} \\sum\_{j\=1}^m (y\_j \- d\_j)^2\\), we may compute the change in error with respect to the \\(j\\)\-th output, i.e. 
\\\[\\begin{equation} \\frac{\\partial E}{\\partial y\_j} \= y\_j \- d\_j, \\quad \\forall j \\end{equation}\\] Note also that \\\[\\begin{equation} \\frac{dy\_j}{dx\_{ij}} \= y\_j (1\-y\_j) w\_i \\end{equation}\\] and \\\[\\begin{equation} \\frac{dy\_j}{dw\_i} \= y\_j (1\-y\_j) x\_{ij} \\end{equation}\\] Next, we examine how the error changes with input values: \\\[\\begin{equation} \\frac{\\partial E}{\\partial x\_{ij}} \= \\frac{\\partial E}{\\partial y\_j} \\times \\frac{dy\_j}{dx\_{ij}} \= (y\_j \- d\_j) y\_j (1\-y\_j) w\_i \\end{equation}\\] We can now get to the value of interest, which is the change in error value with respect to the weights \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{i}} \= \\frac{\\partial E}{\\partial y\_j} \\times \\frac{dy\_j}{dw\_i} \= (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ij}, \\forall i \\end{equation}\\] We thus have one equation for each weight \\(w\_i\\) and each observation \\(j\\). (Note that the \\(w\_i\\) apply across perceptrons. A more general case might be where we have weights for each perceptron, i.e., \\(w\_{ij}\\).) Instead of updating on just one observation, we might want to do this for many observations in which case the error derivative would be \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{i}} \= \\sum\_j (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ij}, \\forall i \\end{equation}\\] Therefore, if \\(\\frac{\\partial E}{\\partial w\_{i}} \> 0\\), then we would need to reduce \\(w\_i\\) to bring down \\(E\\). By how much? Here is where some art and judgment is imposed. There is a tuning parameter \\(0\<\\gamma\<1\\) which we apply to \\(w\_i\\) to shrink it when the weight needs to be reduced. Likewise, if the derivative \\(\\frac{\\partial E}{\\partial w\_{i}} \< 0\\), then we would increase \\(w\_i\\) by dividing it by \\(\\gamma\\). This is known as **gradient descent**. 13\.11 Backpropagation ---------------------- ### 13\.11\.1 Extension to many observations Our notation now becomes extended to weights \\(w\_{ik}\\) which stand for the weight on the \\(i\\)\-th input to the \\(k\\)\-th perceptron. The derivative for the error becomes, across all observations \\(j\\): \\\[\\begin{equation} \\frac{\\partial E}{\\partial w\_{ik}} \= \\sum\_j (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ikj}, \\forall i,k \\end{equation}\\] Hence all nodes in the network have their weights updated. In many cases of course, we can just take the derivatives numerically. Change the weight \\(w\_{ik}\\) and see what happens to the error. However, the formal process of finding all the gradients using a fast algorithm via backpropagation requires more formal calculus, and the rest of this section provides detailed analysis showing how this is done.
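Before the detailed treatment, here is a hedged sketch of the weight-update idea on made-up data: it trains a single sigmoid perceptron with the analytical derivative \\(\\frac{\\partial E}{\\partial w\_{i}} \= \\sum\_j (y\_j \- d\_j)y\_j (1\-y\_j) x\_{ij}\\) derived above, using a plain learning-rate step \\(\\eta\\) (as in the earlier gradient descent example) rather than the \\(\\gamma\\) shrink-and-divide rule, and then checks the analytical gradient against a numerical one.

```
# Sketch on made-up data: gradient descent for a single sigmoid perceptron,
# using dE/dw_i = sum_j (y_j - d_j) * y_j * (1 - y_j) * x_ij derived above.
sigmoid = function(z) 1/(1 + exp(-z))
E = function(w) { y = sigmoid(x %*% w); 0.5 * sum((y - d)^2) }

set.seed(42)
x = matrix(rnorm(200), ncol = 2)       # 100 observations, 2 inputs (made up)
d = as.numeric(x[, 1] - x[, 2] > 0)    # made-up desired outputs

w = c(0, 0)
eta = 0.10                             # learning-rate (tuning) parameter
for (epoch in 1:500) {
  y = as.vector(sigmoid(x %*% w))
  grad = colSums((y - d) * y * (1 - y) * x)   # one derivative per weight w_i
  w = w - eta * grad                          # step against the gradient
}

# Numerical derivative check, as suggested above: perturb w_1 and difference E
dw = 1e-6
y = as.vector(sigmoid(x %*% w))
grad_analytic = colSums((y - d) * y * (1 - y) * x)[1]
grad_numeric = (E(w + c(dw, 0)) - E(w)) / dw
print(c(grad_analytic, grad_numeric))
print(E(w))
```

The two gradient values should agree closely; backpropagation is simply a fast, organized way of producing these derivatives for every weight in a multi-layer network.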
13\.12 Backprop: Detailed Analysis ---------------------------------- In this section, we dig deeper into the incredible algebra that drives the unreasonable effectiveness of deep learning algorithms. To do this, we will work with a richer algebra, and extended notation.
### 13\.12\.1 Net Input Assume several hidden layers in a deep learning net (DLN), indexed by \\(r\=1,2,...,R\\). Consider two adjacent layers \\((r)\\) and \\((r\+1\)\\). Each layer has a number of nodes \\(n\_r\\) and \\(n\_{r\+1}\\), respectively. The output of node \\(i\\) in layer \\((r)\\) is \\(Z\_i^{(r)} \= f(a\_i^{(r)})\\). The function \\(f\\) is the *activation* function. At node \\(j\\) in layer \\((r\+1\)\\), these inputs are taken and used to compute an intermediate value, known as the *net value*: \\\[ a\_j^{(r\+1\)} \= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)}Z\_i^{(r)} \+ b\_j^{(r\+1\)} \\] ### 13\.12\.2 Activation Function The net value is then ingested by an activation function to create the output from layer \\((r\+1\)\\).
\\\[ Z\_j^{(r\+1\)} \= f(a\_j^{(r\+1\)}) \\] The activation functions may be simple sigmoid functions or other functions such as ReLU (Rectified Linear Unit). The final output of the DLN is from layer \\((R)\\), i.e., \\(Z\_j^{(R)}\\). For the first hidden layer \\(r\=1\\), the net input will be based on the original data \\(X^{(m)}\\) \\\[ a\_j^{(1\)} \= \\sum\_{m\=1}^M W\_{mj}^{(1\)} X\_m \+ b\_j^{(1\)} \\] ### 13\.12\.3 Loss Function Fitting the DLN is an exercise where the best weights \\(\\{W,b\\} \= \\{W\_{ij}^{(r\+1\)}, b\_j^{(r\+1\)}\\},\\forall r\\) for all layers are determined to minimize a loss function generally denoted as \\\[ \\min\_{W,b} \\sum\_{m\=1}^M L\_m\[h(X^{(m)}),T^{(m)}] \\] where \\(M\\) is the number of training observations (rows in the data set), \\(T^{(m)}\\) is the true value of the output, and \\(h(X^{(m)})\\) is the model output from the DLN. The loss function \\(L\_m\\) quantifies the difference between the model output and the true output. ### 13\.12\.4 Gradients To solve this minimization problem, we need gradients for all \\(W,b\\). These are denoted as \\\[ \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}}, \\quad \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}}, \\quad \\forall r\+1, j \\] We write out these gradients using the chain rule: \\\[ \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial W\_{ij}^{(r\+1\)}} \= \\delta\_j^{(r\+1\)} \\cdot Z\_i^{(r)} \\] where we have written \\\[ \\delta\_j^{(r\+1\)} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\] Likewise, we have \\\[ \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}} \= \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial b\_{j}^{(r\+1\)}} \= \\delta\_j^{(r\+1\)} \\cdot 1 \= \\delta\_j^{(r\+1\)} \\] ### 13\.12\.5 Delta Values So we need to find all the \\(\\delta\_j^{(r\+1\)}\\) values. To do so, we need the following intermediate calculation. \\\[ \\begin{align} a\_j^{(r\+1\)} \&\= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)} Z\_i^{(r)} \+ b\_j^{(r\+1\)} \\\\ \&\= \\sum\_{i\=1}^{n\_r} W\_{ij}^{(r\+1\)} f(a\_i^{(r)}) \+ b\_j^{(r\+1\)} \\\\ \\end{align} \\] This implies that \\\[ \\frac{\\partial a\_j^{(r\+1\)}}{\\partial a\_i^{(r)}} \= W\_{ij}^{(r\+1\)} \\cdot f'(a\_i^{(r)}) \\] Using this we may now rewrite the \\(\\delta\\) value for layer \\((r)\\) as follows: \\\[ \\begin{align} \\delta\_i^{(r)} \&\= \\frac{\\partial L\_m}{\\partial a\_i^{(r)}} \\\\ \&\= \\sum\_{j\=1}^{n\_{r\+1}} \\frac{\\partial L\_m}{\\partial a\_j^{(r\+1\)}} \\cdot \\frac{\\partial a\_j^{(r\+1\)}}{\\partial a\_i^{(r)}} \\\\ \&\= \\sum\_{j\=1}^{n\_{r\+1}}\\delta\_j^{(r\+1\)} \\cdot W\_{ij}^{(r\+1\)} \\cdot f'(a\_i^{(r)}) \\\\ \&\= f'(a\_i^{(r)}) \\cdot \\sum\_{j\=1}^{n\_{r\+1}}\\delta\_j^{(r\+1\)} \\cdot W\_{ij}^{(r\+1\)} \\end{align} \\] ### 13\.12\.6 Output layer The output layer takes as input the last hidden layer \\({(R)}\\)’s output \\(Z\_j^{(R)}\\), and computes the net input \\(a\_j^{(R\+1\)}\\) and then the activation function \\(h(a\_j^{(R\+1\)})\\) is applied to generate the final output. \\\[ \\begin{align} a\_j^{(R\+1\)} \&\= \\sum\_{i\=1}^{n\_R} W\_{ij}^{(R\+1\)} Z\_j^{(R)} \+ b\_j^{(R\+1\)} \\\\ \\mbox{Final output} \&\= h(a\_j^{(R\+1\)}) \\end{align} \\] The \\(\\delta\\) for the final layer is simple.
\\\[ \\delta\_j^{(R\+1\)} \= \\frac{\\partial L\_m}{\\partial a\_j^{(R\+1\)}} \= h'(a\_j^{(R\+1\)}) \\] ### 13\.12\.7 Feedforward and Backward Propagation Fitting the DLN requires getting the weights \\(\\{W,b\\}\\) that minimize \\(L\_m\\). These are done using gradient descent, i.e., \\\[ \\begin{align} W\_{ij}^{(r\+1\)} \\leftarrow W\_{ij}^{(r\+1\)} \- \\eta \\cdot \\frac{\\partial L\_m}{\\partial W\_{ij}^{(r\+1\)}} \\\\ b\_{j}^{(r\+1\)} \\leftarrow b\_{j}^{(r\+1\)} \- \\eta \\cdot \\frac{\\partial L\_m}{\\partial b\_{j}^{(r\+1\)}} \\end{align} \\] Here \\(\\eta\\) is the learning rate parameter. We iterate on these functions until the gradients become zero, and the weights discontinue changing with each update, also known as an **epoch**. The steps are as follows: 1. Start with an initial set of weights \\(\\{w,b\\}\\). 2. Feedforward the initial data and weights into the DLN, and find the \\(\\{a\_i^{(r)}, Z\_i^{(r)}\\}, \\forall r,i\\). This is one forward pass through the network. 3. Then, using backpropagation, compute all \\(\\delta\_i^{(r)}, \\forall r,i\\). 4. Use these \\(\\delta\_i^{(r)}\\) values to get all the new gradients. 5. Apply gradient descent to get new weights. 6. Keep iterating steps 2\-5, until the chosen number of epochs is completed. The entire process is summarized in Figure [13\.1](NeuralNetsDeepLearning.html#fig:BackPropSummary): Figure 13\.1: Quick Summary of Backpropagation 13\.13 Research Applications ---------------------------- * Discovering Black\-Scholes: See the paper by Hutchinson, Lo, and Poggio ([1994](#ref-RePEc:bla:jfinan:v:49:y:1994:i:3:p:851-89)), A Nonparametric Approach to Pricing and Hedging Securities Via Learning Networks, The Journal of Finance, Vol XLIX. * Forecasting: See the paper by Ghiassi, Saidane, and Zimbra ([2005](#ref-CIS-201490)). “A dynamic artificial neural network model for forecasting time series events,” International Journal of Forecasting 21, 341–362\. 13\.14 Package *neuralnet* in R ------------------------------- The package focuses on multi\-layer perceptrons (MLP), see Bishop (1995\), which are well applicable when modeling functional relation\- ships. The underlying structure of an MLP is a directed graph, i.e. it consists of vertices and directed edges, in this context called neurons and synapses. See Bishop (1995\), Neural networks for pattern recognition. Oxford University Press, New York. The data set used by this package as an example is the infert data set that comes bundled with R. This data set examines infertility after induced and spontaneous abortion. The variables **induced** and **spontaneous** take values in \\(\\{0,1,2\\}\\) indicating the number of previous abortions. The variable **parity** denotes the number of births. The variable **case** equals 1 if the woman is infertile and 0 otherwise. The idea is to model infertility. ``` library(neuralnet) data(infert) print(names(infert)) ``` ``` ## [1] "education" "age" "parity" "induced" ## [5] "case" "spontaneous" "stratum" "pooled.stratum" ``` ``` head(infert) ``` ``` ## education age parity induced case spontaneous stratum pooled.stratum ## 1 0-5yrs 26 6 1 1 2 1 3 ## 2 0-5yrs 42 1 1 1 0 2 1 ## 3 0-5yrs 39 6 2 1 0 3 4 ## 4 0-5yrs 34 4 2 1 0 4 2 ## 5 6-11yrs 35 3 1 1 1 5 32 ## 6 6-11yrs 36 4 2 1 1 6 36 ``` ``` summary(infert) ``` ``` ## education age parity induced ## 0-5yrs : 12 Min. :21.00 Min. :1.000 Min. 
:0.0000 ## 6-11yrs:120 1st Qu.:28.00 1st Qu.:1.000 1st Qu.:0.0000 ## 12+ yrs:116 Median :31.00 Median :2.000 Median :0.0000 ## Mean :31.50 Mean :2.093 Mean :0.5726 ## 3rd Qu.:35.25 3rd Qu.:3.000 3rd Qu.:1.0000 ## Max. :44.00 Max. :6.000 Max. :2.0000 ## case spontaneous stratum pooled.stratum ## Min. :0.0000 Min. :0.0000 Min. : 1.00 Min. : 1.00 ## 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:21.00 1st Qu.:19.00 ## Median :0.0000 Median :0.0000 Median :42.00 Median :36.00 ## Mean :0.3347 Mean :0.5766 Mean :41.87 Mean :33.58 ## 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:62.25 3rd Qu.:48.25 ## Max. :1.0000 Max. :2.0000 Max. :83.00 Max. :63.00 ``` This data set examines infertility after induced and spontaneous abortion. The variables \*\* induced\*\* and **spontaneous** take values in \\(\\{0,1,2\\}\\) indicating the number of previous abortions. The variable **parity** denotes the number of births. The variable **case** equals 1 if the woman is infertile and 0 otherwise. The idea is to model infertility. ### 13\.14\.1 First step, fit a logit model to the data. ``` res = glm(case ~ age+parity+induced+spontaneous, family=binomial(link="logit"), data=infert) summary(res) ``` ``` ## ## Call: ## glm(formula = case ~ age + parity + induced + spontaneous, family = binomial(link = "logit"), ## data = infert) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.6281 -0.8055 -0.5299 0.8669 2.6141 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -2.85239 1.00428 -2.840 0.00451 ** ## age 0.05318 0.03014 1.764 0.07767 . ## parity -0.70883 0.18091 -3.918 8.92e-05 *** ## induced 1.18966 0.28987 4.104 4.06e-05 *** ## spontaneous 1.92534 0.29863 6.447 1.14e-10 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 316.17 on 247 degrees of freedom ## Residual deviance: 260.94 on 243 degrees of freedom ## AIC: 270.94 ## ## Number of Fisher Scoring iterations: 4 ``` ### 13\.14\.2 Second step, fit a NN ``` nn = neuralnet(case~age+parity+induced+spontaneous,hidden=2,data=infert) ``` ``` print(names(nn)) ``` ``` ## [1] "call" "response" "covariate" ## [4] "model.list" "err.fct" "act.fct" ## [7] "linear.output" "data" "net.result" ## [10] "weights" "startweights" "generalized.weights" ## [13] "result.matrix" ``` ``` nn$result.matrix ``` ``` ## 1 ## error 19.75482621709 ## reached.threshold 0.00796839405 ## steps 3891.00000000000 ## Intercept.to.1layhid1 -2.39345712918 ## age.to.1layhid1 -0.51858603247 ## parity.to.1layhid1 0.26786607381 ## induced.to.1layhid1 -346.33808632368 ## spontaneous.to.1layhid1 6.50949229932 ## Intercept.to.1layhid2 6.18035131278 ## age.to.1layhid2 -0.13013668178 ## parity.to.1layhid2 2.31764808626 ## induced.to.1layhid2 -2.78558680449 ## spontaneous.to.1layhid2 -4.58533007894 ## Intercept.to.case 1.08152541274 ## 1layhid.1.to.case -6.43770238799 ## 1layhid.2.to.case -0.93730921525 ``` ``` plot(nn) #Run this plot from the command line. #<img src="image_files/nn.png" height=510 width=740> ``` ``` head(cbind(nn$covariate,nn$net.result[[1]])) ``` ``` ## [,1] [,2] [,3] [,4] [,5] ## 1 26 6 1 2 0.1522843862 ## 2 42 1 1 0 0.5553601474 ## 3 39 6 2 0 0.1442907090 ## 4 34 4 2 0 0.1482055348 ## 5 35 3 1 1 0.3599162573 ## 6 36 4 2 1 0.4743072882 ``` ### 13\.14\.3 Logit vs NN We can compare the output to that from the logit model, by looking at the correlation of the fitted values from both models. 
``` cor(cbind(nn$net.result[[1]],res$fitted.values)) ``` ``` ## [,1] [,2] ## [1,] 1.0000000000 0.8869825759 ## [2,] 0.8869825759 1.0000000000 ``` ### 13\.14\.4 Backpropagation option We can add in an option for back propagation, and see how the results change. ``` nn2 = neuralnet(case~age+parity+induced+spontaneous, hidden=2, algorithm="rprop+", data=infert) print(cor(cbind(nn2$net.result[[1]],res$fitted.values))) ``` ``` ## [,1] [,2] ## [1,] 1.0000000000 0.9157468405 ## [2,] 0.9157468405 1.0000000000 ``` ``` cor(cbind(nn2$net.result[[1]],nn$fitted.result[[1]])) ``` ``` ## [,1] ## [1,] 1 ``` Given a calibrated neural net, how do we use it to compute values for a new observation? Here is an example. ``` compute(nn,covariate=matrix(c(30,1,0,1),1,4)) ``` ``` ## $neurons ## $neurons[[1]] ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 30 1 0 1 ## ## $neurons[[2]] ## [,1] [,2] [,3] ## [1,] 1 0.00001403868578 0.5021422036 ## ## ## $net.result ## [,1] ## [1,] 0.6107725211 ```
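The `net.result` returned above is the squashed output of the network, not a class label. Here is a small hedged sketch of one way to turn it into a predicted class; the 0\.5 cutoff is an illustrative choice rather than something fixed by the model, and the code assumes the fitted `nn` object from the chunks above is still in the workspace.

```
# Sketch: convert the compute() output above into a class prediction.
# The 0.5 cutoff is an illustrative choice; any threshold could be used.
new_obs = matrix(c(30, 1, 0, 1), 1, 4)    # age, parity, induced, spontaneous
out = compute(nn, covariate = new_obs)
fitted_value = out$net.result[1, 1]       # squashed output in (0,1)
predicted_class = ifelse(fitted_value > 0.5, 1, 0)  # 1 = case, 0 = otherwise
print(c(fitted_value, predicted_class))
```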
13\.15 Statistical Significance ------------------------------- We can assess statistical significance of the model as follows: ``` confidence.interval(nn,alpha=0.10) ``` ``` ## $lower.ci ## $lower.ci[[1]] ## $lower.ci[[1]][[1]] ## [,1] [,2] ## [1,] -15.8007772276 4.3682646706 ## [2,] -1.3384298107 -0.1876702868 ## [3,] -0.2530961989 1.4895025332 ## [4,] -346.3380863237 -3.6315599341 ## [5,] -0.2056362177 -5.6749552264 ## ## $lower.ci[[1]][[2]] ## [,1] ## [1,] 0.9354811195 ## [2,] -38.0986993664 ## [3,] -1.0879829307 ## ## ## ## $upper.ci ## $upper.ci[[1]] ## $upper.ci[[1]][[1]] ## [,1] [,2] ## [1,] 11.0138629693 7.99243795495 ## [2,] 0.3012577458 -0.07260307674 ## [3,] 0.7888283465 3.14579363935 ## [4,] -346.3380863237 -1.93961367486 ## [5,] 13.2246208164 -3.49570493146 ## ## $upper.ci[[1]][[2]] ## [,1] ## [1,] 1.2275697059 ## [2,] 25.2232945904 ## [3,] -0.7866354998 ## ## ## ## $nic ## [1] 21.21884675 ``` The confidence level is \\((1\-\\alpha)\\).
This is at the 90% level, and at the 5% level we get: ``` confidence.interval(nn,alpha=0.95) ``` ``` ## $lower.ci ## $lower.ci[[1]] ## $lower.ci[[1]][[1]] ## [,1] [,2] ## [1,] -2.9045845818 6.1112691082 ## [2,] -0.5498409484 -0.1323300362 ## [3,] 0.2480054218 2.2860766817 ## [4,] -346.3380863237 -2.8178378500 ## [5,] 6.2534913605 -4.6268698737 ## ## $lower.ci[[1]][[2]] ## [,1] ## [1,] 1.0759577641 ## [2,] -7.6447150209 ## [3,] -0.9430533514 ## ## ## ## $upper.ci ## $upper.ci[[1]] ## $upper.ci[[1]][[1]] ## [,1] [,2] ## [1,] -1.8823296766 6.2494335173 ## [2,] -0.4873311166 -0.1279433273 ## [3,] 0.2877267259 2.3492194908 ## [4,] -346.3380863237 -2.7533357590 ## [5,] 6.7654932382 -4.5437902841 ## ## $upper.ci[[1]][[2]] ## [,1] ## [1,] 1.0870930614 ## [2,] -5.2306897551 ## [3,] -0.9315650791 ## ## ## ## $nic ## [1] 21.21884675 ``` 13\.16 Deep Learning Overview ----------------------------- The Wikipedia entry is excellent: <https://en.wikipedia.org/wiki/Deep_learning> <http://deeplearning.net/> [https://www.youtube.com/watch?v\=S75EdAcXHKk](https://www.youtube.com/watch?v=S75EdAcXHKk) [https://www.youtube.com/watch?v\=czLI3oLDe8M](https://www.youtube.com/watch?v=czLI3oLDe8M) Article on Google’s Deep Learning team’s work on image processing: [https://medium.com/backchannel/inside\-deep\-dreams\-how\-google\-made\-its\-computers\-go\-crazy\-83b9d24e66df\#.gtfwip891](https://medium.com/backchannel/inside-deep-dreams-how-google-made-its-computers-go-crazy-83b9d24e66df#.gtfwip891) ### 13\.16\.1 Grab Some Data The **mlbench** package contains some useful datasets for testing machine learning algorithms. One of these is a small dataset of cancer cases, and contains ten characteristics of cancer cells, and a flag for whether cancer is present or the cells are benign. We use this dataset to try out some deep learning algorithms in R, and see if they improve on vanilla neural nets. First, let’s fit a neural net to this data. We’ll fit this using the **deepnet** package, which allows for more hidden layers. ### 13\.16\.2 Simple Example ``` library(neuralnet) library(deepnet) ``` First, we use randomly generated data, and train the NN. ``` #From the **deepnet** package by Xiao Rong. First train the model using one hidden layer. Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2)) Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1)) x <- matrix(c(Var1, Var2), nrow = 100, ncol = 2) y <- c(rep(1, 50), rep(0, 50)) plot(x,col=y+1) ``` ``` nn <- nn.train(x, y, hidden = c(5)) ``` ### 13\.16\.3 Prediction ``` #Next, predict the model. This is in-sample. test_Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2)) test_Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1)) test_x <- matrix(c(test_Var1, test_Var2), nrow = 100, ncol = 2) yy <- nn.predict(nn, test_x) ``` ### 13\.16\.4 Test Predictive Ability of the Model ``` #The output is just a number that is higher for one class and lower for another. #One needs to separate these to get groups. yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(yhat,y)) ``` ``` ## y ## yhat 0 1 ## 0 49 0 ## 1 1 50 ``` ### 13\.16\.5 Prediction Error ``` #Testing the results. err <- nn.test(nn, test_x, y, t=mean(yy)) print(err) ``` ``` ## [1] 0.005 ```
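Note that the cutoff at `mean(yy)` above is just one way to split the scores into groups. Here is a hedged sketch of the same in-sample check with a fixed 0\.5 cutoff instead, assuming the `nn`, `test_x`, and `y` objects from the chunks above are still in the workspace and that the output unit is the default sigmoid, so scores lie in (0,1).

```
# Sketch: same check as above, but with a fixed 0.5 cutoff instead of mean(yy).
# Assumes nn, test_x, and y exist from the preceding chunks.
yy = nn.predict(nn, test_x)
yhat05 = as.numeric(yy > 0.5)   # 1 if the squashed output exceeds 0.5
print(table(yhat05, y))         # confusion matrix under the 0.5 cutoff
print(mean(yhat05 == y))        # in-sample accuracy under the 0.5 cutoff
```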
13\.17 Cancer dataset --------------------- Now we’ll try the Breast Cancer data set. First we use the NN in the **deepnet** package. ``` library(mlbench) data("BreastCancer") head(BreastCancer) ``` ``` ## Id Cl.thickness Cell.size Cell.shape Marg.adhesion Epith.c.size ## 1 1000025 5 1 1 1 2 ## 2 1002945 5 4 4 5 7 ## 3 1015425 3 1 1 1 2 ## 4 1016277 6 8 8 1 3 ## 5 1017023 4 1 1 3 2 ## 6 1017122 8 10 10 8 7 ## Bare.nuclei Bl.cromatin Normal.nucleoli Mitoses Class ## 1 1 3 1 1 benign ## 2 10 3 2 1 benign ## 3 2 3 1 1 benign ## 4 4 3 7 1 benign ## 5 1 3 1 1 benign ## 6 10 9 7 1 malignant ``` ``` BreastCancer = BreastCancer[which(complete.cases(BreastCancer)==TRUE),] ``` ``` y = as.matrix(BreastCancer[,11]) y[which(y=="benign")] = 0 y[which(y=="malignant")] = 1 y = as.numeric(y) x = as.numeric(as.matrix(BreastCancer[,2:10])) x = matrix(as.numeric(x),ncol=9) nn <- nn.train(x, y, hidden = c(5)) yy = nn.predict(nn, x) yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 424 20 ## 1 5 234 ``` ### 13\.17\.1 Compare to a simple NN It does really well. Now we compare it to a simple neural net.
``` library(neuralnet) df = data.frame(cbind(x,y)) nn = neuralnet(y~V1+V2+V3+V4+V5+V6+V7+V8+V9,data=df,hidden = 5) yy = nn$net.result[[1]] yhat = matrix(0,length(y),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 429 15 ## 1 0 239 ``` Somehow, the **neuralnet** package appears to perform better. Which is interesting. But the deep learning net was not “deep” \- it had only one hidden layer. 13\.18 Deeper Net: More Hidden Layers ------------------------------------- Now we’ll try the **deepnet** function with two hidden layers. ``` dnn <- sae.dnn.train(x, y, hidden = c(5,5)) ``` ``` ## begin to train sae ...... ``` ``` ## training layer 1 autoencoder ... ``` ``` ## training layer 2 autoencoder ... ``` ``` ## sae has been trained. ``` ``` ## begin to train deep nn ...... ``` ``` ## deep nn has been trained. ``` ``` yy = nn.predict(dnn, x) yhat = matrix(0,length(yy),1) yhat[which(yy > mean(yy))] = 1 yhat[which(yy <= mean(yy))] = 0 print(table(y,yhat)) ``` ``` ## yhat ## y 0 1 ## 0 119 325 ## 1 0 239 ``` This performs terribly. Maybe there is something wrong here. 13\.19 Using h2o ---------------- Here we start up a server using all cores of the machine, and then use the h2o package’s deep learning toolkit to fit a model. ``` library(h2o) ``` ``` ## Loading required package: methods ``` ``` ## Loading required package: statmod ``` ``` ## Warning: package 'statmod' was built under R version 3.3.2 ``` ``` ## ## ---------------------------------------------------------------------- ## ## Your next step is to start H2O: ## > h2o.init() ## ## For H2O package documentation, ask for help: ## > ??h2o ## ## After starting H2O, you can use the Web UI at http://localhost:54321 ## For more information visit http://docs.h2o.ai ## ## ---------------------------------------------------------------------- ``` ``` ## ## Attaching package: 'h2o' ``` ``` ## The following objects are masked from 'package:stats': ## ## cor, sd, var ``` ``` ## The following objects are masked from 'package:base': ## ## &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames, ## colnames<-, ifelse, is.character, is.factor, is.numeric, log, ## log10, log1p, log2, round, signif, trunc ``` ``` localH2O = h2o.init(ip="localhost", port = 54321, startH2O = TRUE, nthreads=-1) ``` ``` ## ## H2O is not running yet, starting it now... ## ## Note: In case of errors look at the following log files: ## /var/folders/yd/h1lvwd952wbgw189srw3m3z80000gn/T//RtmpTFfRwJ/h2o_srdas_started_from_r.out ## /var/folders/yd/h1lvwd952wbgw189srw3m3z80000gn/T//RtmpTFfRwJ/h2o_srdas_started_from_r.err ## ## ## Starting H2O JVM and connecting: ... Connection successful! ## ## R is connected to the H2O cluster: ## H2O cluster uptime: 2 seconds 576 milliseconds ## H2O cluster version: 3.10.0.8 ## H2O cluster version age: 5 months and 13 days !!! ## H2O cluster name: H2O_started_from_R_srdas_dpl191 ## H2O cluster total nodes: 1 ## H2O cluster total memory: 3.56 GB ## H2O cluster total cores: 4 ## H2O cluster allowed cores: 4 ## H2O cluster healthy: TRUE ## H2O Connection ip: localhost ## H2O Connection port: 54321 ## H2O Connection proxy: NA ## R Version: R version 3.3.1 (2016-06-21) ``` ``` ## Warning in h2o.clusterInfo(): ## Your H2O cluster version is too old (5 months and 13 days)! 
## Please download and install the latest version from http://h2o.ai/download/ ``` ``` train <- h2o.importFile("DSTMAA_data/BreastCancer.csv") ``` ``` ## | | | 0% | |=================================================================| 100% ``` ``` test <- h2o.importFile("DSTMAA_data/BreastCancer.csv") ``` ``` ## | | | 0% | |=================================================================| 100% ``` ``` y = names(train)[11] x = names(train)[1:10] train[,y] = as.factor(train[,y]) test[,y] = as.factor(train[,y]) model = h2o.deeplearning(x=x, y=y, training_frame=train, validation_frame=test, distribution = "multinomial", activation = "RectifierWithDropout", hidden = c(10,10,10,10), input_dropout_ratio = 0.2, l1 = 1e-5, epochs = 50) ``` ``` ## | | | 0% | |=================================================================| 100% ``` ``` model ``` ``` ## Model Details: ## ============== ## ## H2OBinomialModel: deeplearning ## Model ID: DeepLearning_model_R_1490380949733_1 ## Status of Neuron Layers: predicting Class, 2-class classification, multinomial distribution, CrossEntropy loss, 462 weights/biases, 12.5 KB, 34,150 training samples, mini-batch size 1 ## layer units type dropout l1 l2 mean_rate ## 1 1 10 Input 20.00 % ## 2 2 10 RectifierDropout 50.00 % 0.000010 0.000000 0.001514 ## 3 3 10 RectifierDropout 50.00 % 0.000010 0.000000 0.000861 ## 4 4 10 RectifierDropout 50.00 % 0.000010 0.000000 0.001387 ## 5 5 10 RectifierDropout 50.00 % 0.000010 0.000000 0.002995 ## 6 6 2 Softmax 0.000010 0.000000 0.002317 ## rate_rms momentum mean_weight weight_rms mean_bias bias_rms ## 1 ## 2 0.000975 0.000000 -0.005576 0.381788 0.560946 0.127078 ## 3 0.000416 0.000000 -0.009698 0.356050 0.993232 0.107121 ## 4 0.002665 0.000000 -0.027108 0.354956 0.890325 0.095600 ## 5 0.009876 0.000000 -0.114653 0.464009 0.871405 0.324988 ## 6 0.001023 0.000000 0.308202 1.334877 -0.006136 0.450422 ## ## ## H2OBinomialMetrics: deeplearning ## ** Reported on training data. ** ## ** Metrics reported on full training frame ** ## ## MSE: 0.02467530347 ## RMSE: 0.1570837467 ## LogLoss: 0.09715290711 ## Mean Per-Class Error: 0.02011006823 ## AUC: 0.9944494704 ## Gini: 0.9888989408 ## ## Confusion Matrix for F1-optimal threshold: ## benign malignant Error Rate ## benign 428 16 0.036036 =16/444 ## malignant 1 238 0.004184 =1/239 ## Totals 429 254 0.024890 =17/683 ## ## Maximum Metrics: Maximum metrics at their respective thresholds ## metric threshold value idx ## 1 max f1 0.153224 0.965517 248 ## 2 max f2 0.153224 0.983471 248 ## 3 max f0point5 0.745568 0.954962 234 ## 4 max accuracy 0.153224 0.975110 248 ## 5 max precision 0.943219 1.000000 0 ## 6 max recall 0.002179 1.000000 288 ## 7 max specificity 0.943219 1.000000 0 ## 8 max absolute_mcc 0.153224 0.947145 248 ## 9 max min_per_class_accuracy 0.439268 0.970721 240 ## 10 max mean_per_class_accuracy 0.153224 0.979890 248 ## ## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)` ## H2OBinomialMetrics: deeplearning ## ** Reported on validation data. 
** ## ** Metrics reported on full validation frame ** ## ## MSE: 0.02467530347 ## RMSE: 0.1570837467 ## LogLoss: 0.09715290711 ## Mean Per-Class Error: 0.02011006823 ## AUC: 0.9944494704 ## Gini: 0.9888989408 ## ## Confusion Matrix for F1-optimal threshold: ## benign malignant Error Rate ## benign 428 16 0.036036 =16/444 ## malignant 1 238 0.004184 =1/239 ## Totals 429 254 0.024890 =17/683 ## ## Maximum Metrics: Maximum metrics at their respective thresholds ## metric threshold value idx ## 1 max f1 0.153224 0.965517 248 ## 2 max f2 0.153224 0.983471 248 ## 3 max f0point5 0.745568 0.954962 234 ## 4 max accuracy 0.153224 0.975110 248 ## 5 max precision 0.943219 1.000000 0 ## 6 max recall 0.002179 1.000000 288 ## 7 max specificity 0.943219 1.000000 0 ## 8 max absolute_mcc 0.153224 0.947145 248 ## 9 max min_per_class_accuracy 0.439268 0.970721 240 ## 10 max mean_per_class_accuracy 0.153224 0.979890 248 ## ## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)` ``` The h2o deep learning package does very well. ``` #h2o.shutdown(prompt=FALSE) ``` 13\.20 Character Recognition ---------------------------- We use the MNIST dataset ``` library(h2o) localH2O = h2o.init(ip="localhost", port = 54321, startH2O = TRUE) ``` ``` ## Connection successful! ## ## R is connected to the H2O cluster: ## H2O cluster uptime: 7 seconds 810 milliseconds ## H2O cluster version: 3.10.0.8 ## H2O cluster version age: 5 months and 13 days !!! ## H2O cluster name: H2O_started_from_R_srdas_dpl191 ## H2O cluster total nodes: 1 ## H2O cluster total memory: 3.54 GB ## H2O cluster total cores: 4 ## H2O cluster allowed cores: 4 ## H2O cluster healthy: TRUE ## H2O Connection ip: localhost ## H2O Connection port: 54321 ## H2O Connection proxy: NA ## R Version: R version 3.3.1 (2016-06-21) ``` ``` ## Warning in h2o.clusterInfo(): ## Your H2O cluster version is too old (5 months and 13 days)! ## Please download and install the latest version from http://h2o.ai/download/ ``` ``` ## Import MNIST CSV as H2O train <- h2o.importFile("DSTMAA_data/train.csv") ``` ``` ## | | | 0% | |============================================================= | 94% | |=================================================================| 100% ``` ``` test <- h2o.importFile("DSTMAA_data/test.csv") ``` ``` ## | | | 0% | |================ | 25% | |=================================================================| 100% ``` ``` #summary(train) #summary(test) ``` ``` y <- "C785" x <- setdiff(names(train), y) train[,y] <- as.factor(train[,y]) test[,y] <- as.factor(test[,y]) # Train a Deep Learning model and validate on a test set model <- h2o.deeplearning(x = x, y = y, training_frame = train, validation_frame = test, distribution = "multinomial", activation = "RectifierWithDropout", hidden = c(100,100,100), input_dropout_ratio = 0.2, l1 = 1e-5, epochs = 20) ``` ``` ## Warning in .h2o.startModelJob(algo, params, h2oRestApiVersion): Dropping constant columns: [C86, C85, C729, C728, C646, C645, C169, C760, C561, C53, C11, C55, C10, C54, C57, C12, C56, C58, C17, C19, C18, C731, C730, C20, C22, C21, C24, C23, C26, C25, C28, C27, C702, C701, C29, C700, C1, C2, C784, C3, C783, C4, C782, C5, C781, C6, C142, C7, C141, C8, C9, C31, C30, C32, C759, C758, C757, C756, C755, C477, C113, C674, C112, C673, C672, C84, C83]. 
``` ``` ## | | | 0% | |== | 3% | |==== | 6% | |====== | 9% | |======== | 12% | |========= | 15% | |=========== | 18% | |============= | 20% | |=============== | 23% | |================= | 26% | |=================== | 29% | |===================== | 32% | |======================= | 35% | |========================= | 38% | |=========================== | 41% | |============================ | 44% | |============================== | 47% | |================================ | 50% | |================================== | 53% | |==================================== | 55% | |====================================== | 58% | |======================================== | 61% | |========================================== | 64% | |============================================ | 67% | |============================================== | 70% | |=============================================== | 73% | |================================================= | 76% | |=================================================== | 79% | |===================================================== | 82% | |======================================================= | 85% | |========================================================= | 88% | |=========================================================== | 91% | |============================================================= | 93% | |=============================================================== | 96% | |=================================================================| 99% | |=================================================================| 100% ``` ``` model ``` ``` ## Model Details: ## ============== ## ## H2OMultinomialModel: deeplearning ## Model ID: DeepLearning_model_R_1490380949733_6 ## Status of Neuron Layers: predicting C785, 10-class classification, multinomial distribution, CrossEntropy loss, 93,010 weights/biases, 1.3 MB, 1,227,213 training samples, mini-batch size 1 ## layer units type dropout l1 l2 mean_rate ## 1 1 717 Input 20.00 % ## 2 2 100 RectifierDropout 50.00 % 0.000010 0.000000 0.047334 ## 3 3 100 RectifierDropout 50.00 % 0.000010 0.000000 0.000400 ## 4 4 100 RectifierDropout 50.00 % 0.000010 0.000000 0.000849 ## 5 5 10 Softmax 0.000010 0.000000 0.006109 ## rate_rms momentum mean_weight weight_rms mean_bias bias_rms ## 1 ## 2 0.115335 0.000000 0.035077 0.109799 -0.348895 0.248555 ## 3 0.000188 0.000000 -0.032328 0.100167 0.791746 0.120468 ## 4 0.000482 0.000000 -0.029638 0.101666 0.562113 0.126671 ## 5 0.011201 0.000000 -0.533853 0.731477 -3.217856 0.626765 ## ## ## H2OMultinomialMetrics: deeplearning ## ** Reported on training data. 
** ## ** Metrics reported on temporary training frame with 10101 samples ** ## ## Training Set Metrics: ## ===================== ## ## MSE: (Extract with `h2o.mse`) 0.03144841538 ## RMSE: (Extract with `h2o.rmse`) 0.1773370107 ## Logloss: (Extract with `h2o.logloss`) 0.1154417969 ## Mean Per-Class Error: 0.03526829002 ## Confusion Matrix: Extract with `h2o.confusionMatrix(<model>,train = TRUE)`) ## ========================================================================= ## Confusion Matrix: vertical: actual; across: predicted ## 0 1 2 3 4 5 6 7 8 9 Error Rate ## 0 994 0 7 4 3 1 2 0 1 2 0.0197 = 20 / 1,014 ## 1 0 1151 8 9 2 5 4 2 5 0 0.0295 = 35 / 1,186 ## 2 0 2 930 12 4 0 2 7 4 2 0.0343 = 33 / 963 ## 3 1 0 18 982 2 9 0 8 8 4 0.0484 = 50 / 1,032 ## 4 3 3 4 1 927 1 4 2 1 12 0.0324 = 31 / 958 ## 5 3 0 2 10 2 913 7 1 7 4 0.0379 = 36 / 949 ## 6 8 0 2 0 1 8 927 0 6 0 0.0263 = 25 / 952 ## 7 1 9 6 5 4 1 0 1019 2 3 0.0295 = 31 / 1,050 ## 8 4 4 5 3 2 7 2 1 952 3 0.0315 = 31 / 983 ## 9 0 1 1 13 17 3 0 23 6 950 0.0631 = 64 / 1,014 ## Totals 1014 1170 983 1039 964 948 948 1063 992 980 0.0352 = 356 / 10,101 ## ## Hit Ratio Table: Extract with `h2o.hit_ratio_table(<model>,train = TRUE)` ## ======================================================================= ## Top-10 Hit Ratios: ## k hit_ratio ## 1 1 0.964756 ## 2 2 0.988021 ## 3 3 0.993664 ## 4 4 0.996337 ## 5 5 0.997921 ## 6 6 0.998812 ## 7 7 0.998911 ## 8 8 0.999307 ## 9 9 0.999703 ## 10 10 1.000000 ## ## ## H2OMultinomialMetrics: deeplearning ## ** Reported on validation data. ** ## ** Metrics reported on full validation frame ** ## ## Validation Set Metrics: ## ===================== ## ## Extract validation frame with `h2o.getFrame("RTMP_sid_9b15_8")` ## MSE: (Extract with `h2o.mse`) 0.036179964 ## RMSE: (Extract with `h2o.rmse`) 0.1902103152 ## Logloss: (Extract with `h2o.logloss`) 0.1374188218 ## Mean Per-Class Error: 0.04004564619 ## Confusion Matrix: Extract with `h2o.confusionMatrix(<model>,valid = TRUE)`) ## ========================================================================= ## Confusion Matrix: vertical: actual; across: predicted ## 0 1 2 3 4 5 6 7 8 9 Error Rate ## 0 963 0 1 1 0 6 3 3 2 1 0.0173 = 17 / 980 ## 1 0 1117 7 2 0 0 3 1 5 0 0.0159 = 18 / 1,135 ## 2 5 0 988 6 5 1 5 10 12 0 0.0426 = 44 / 1,032 ## 3 1 0 12 969 0 10 0 10 7 1 0.0406 = 41 / 1,010 ## 4 1 1 5 0 941 2 9 5 3 15 0.0418 = 41 / 982 ## 5 2 0 3 8 2 858 5 2 9 3 0.0381 = 34 / 892 ## 6 8 3 3 0 3 17 920 0 4 0 0.0397 = 38 / 958 ## 7 1 6 15 6 1 0 1 987 0 11 0.0399 = 41 / 1,028 ## 8 3 2 4 7 4 15 5 6 926 2 0.0493 = 48 / 974 ## 9 3 8 2 12 18 6 1 17 9 933 0.0753 = 76 / 1,009 ## Totals 987 1137 1040 1011 974 915 952 1041 977 966 0.0398 = 398 / 10,000 ## ## Hit Ratio Table: Extract with `h2o.hit_ratio_table(<model>,valid = TRUE)` ## ======================================================================= ## Top-10 Hit Ratios: ## k hit_ratio ## 1 1 0.960200 ## 2 2 0.983900 ## 3 3 0.991700 ## 4 4 0.995900 ## 5 5 0.997600 ## 6 6 0.998800 ## 7 7 0.999200 ## 8 8 0.999600 ## 9 9 1.000000 ## 10 10 1.000000 ``` 13\.21 MxNet Package -------------------- The package needs the correct version of Java to run. 
``` #From R-bloggers require(mlbench) ## Loading required package: mlbench require(mxnet) ## Loading required package: mxnet ## Loading required package: methods data(Sonar, package="mlbench") Sonar[,61] = as.numeric(Sonar[,61])-1 train.ind = c(1:50, 100:150) train.x = data.matrix(Sonar[train.ind, 1:60]) train.y = Sonar[train.ind, 61] test.x = data.matrix(Sonar[-train.ind, 1:60]) test.y = Sonar[-train.ind, 61] mx.set.seed(0) model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2, out_activation="softmax", num.round=100, array.batch.size=15, learning.rate=0.25, momentum=0.9, eval.metric=mx.metric.accuracy) preds = predict(model, test.x) ## Auto detect layout of input matrix, use rowmajor.. pred.label = max.col(t(preds))-1 table(pred.label, test.y) ``` ### 13\.21\.1 Cancer Data Now an example using the BreastCancer data set. ``` data("BreastCancer") BreastCancer = BreastCancer[which(complete.cases(BreastCancer)==TRUE),] y = as.matrix(BreastCancer[,11]) y[which(y=="benign")] = 0 y[which(y=="malignant")] = 1 y = as.numeric(y) x = as.numeric(as.matrix(BreastCancer[,2:10])) x = matrix(as.numeric(x),ncol=9) train.x = x train.y = y test.x = x test.y = y mx.set.seed(0) model <- mx.mlp(train.x, train.y, hidden_node=5, out_node=10, out_activation="softmax", num.round=30, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy) preds = predict(model, test.x) ## Auto detect layout of input matrix, use rowmajor.. pred.label = max.col(t(preds))-1 table(pred.label, test.y) ``` 13\.22 Convolutional Neural Nets (CNNs) --------------------------------------- To be written See: [https://adeshpande3\.github.io/adeshpande3\.github.io/A\-Beginner's\-Guide\-To\-Understanding\-Convolutional\-Neural\-Networks/](https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/) 13\.23 Recurrent Neural Nets (RNNs) ----------------------------------- To be written
Chapter 14 The Machine Knows What You Want: Recommender Systems =============================================================== 14\.1 Introduction ------------------ A recommendation algorithm tells you what you like or want. It may tell you about many things you like, sorted in order as well. It tries to understand your preferences using recorded data on your likes and dislikes. Netflix has a recommendation engine for movies. It tries to show you movies that you prefer. If you think about all the \\(N\\) users of a movie service, each one having preferences over \\(K\\) attributes of a movie, then we can represent this matrix as a collection of weights, with each user on the columns, and the attributes on the rows. This would be a matrix \\(u \\in R^{K \\times N}\\). Each element of the matrix is indexed as \\(u\_{ki}\\), where \\(i\=1,2,...,N\\), and \\(k\=1,2,...,K\\). Likewise imagine another matrix \\(m\\) of \\(M\\) movies on the columns and the same \\(K\\) attributes on the rows. We get a matrix \\(m \\in R^{K \\times M}\\). Each element of the matrix is indexed as \\(m\_{kj}\\), where \\(j\=1,2,...,M\\), and \\(k\=1,2,...,K\\). For any user \\(i\\), we may rank movies based on the *predicted* score \\(r\_{ij}\\) for movie \\(j\\), easily calculated as \\\[ r\_{ij} \= \\sum\_{k\=1}^K u\_{ki} m\_{kj} \= u\_i^\\top m\_j \\] where \\(u\_i\\) is a column vector of size \\(K \\times 1\\), and \\(m\_j\\) is a column vector of size \\(K \\times 1\\) as well. The elements of \\(r\_{ij}\\) form a matrix \\(r\\) of dimension \\(N \\times M\\). Some, but not all of these elements are actually observed, because users rate movies. While matrix \\(r\\) may be observable with ratings data, matrices \\(u\\) and \\(m\\) are latent, because the \\(K\\) attributes are unknown. We do know that \\\[ r \= u^\\top m \\] Therefore, we would like to factorize matrix \\(r\\) into the two matrices \\(u,m\\). If the true score for movie \\(j\\), user \\(i\\), is \\(y\_{ij}\\), then we want to find \\(u,m\\) that deliver the closest value of \\(r\_{ij}\\) to its true value. This is done using a technique known as Alternating Least Squares (ALS). 14\.2 Alternating Least Squares ------------------------------- The best fit recommender system is specified as the solution to the following problem, where we minimize loss function \\(L\\). Since the notation gets hairy here, remember that any variable with two subscripts is scalar, with one subscript is a vector, and with no subscripts is a matrix. \\\[ \\begin{align} L \&\= \\sum\_{i\=1}^N \\sum\_{j\=1}^M (y\_{ij}\-r\_{ij})^2 \+ \\lambda\_u \\sum\_{i\=1}^N \\parallel u\_i \\parallel^2 \+ \\lambda\_m \\sum\_{j\=1}^M \\parallel m\_j \\parallel^2 \\\\ \&\= \\sum\_{i\=1}^N \\sum\_{j\=1}^M (y\_{ij}\-u\_i^\\top m\_j)^2 \+ \\lambda\_u \\sum\_{i\=1}^N u\_i^\\top u\_i \+ \\lambda\_m \\sum\_{j\=1}^M m\_j^\\top m\_j \\end{align} \\] We wish to find matrices \\(u,m\\) that solve this minimization problem. We begin with some starting guess for both matrices. Then, we differentiate function \\(L\\) with respect to just one matrix, say \\(u\\), and solve for its optimal values. Next, we take the new values of \\(u\\), and old values of \\(m\\), and differentiate \\(L\\) with respect to \\(m\\) to solve for the new optimal values of \\(m\\), holding \\(u\\) fixed. We then repeat the process for a chosen number of epochs, alternating between the \\(u\\) and \\(m\\) subproblems. 
This eventually converges to the optimal \\(u\\) and \\(m\\) matrices, completing the factorization that minimizes the loss function. 14\.3 Solve \\(u\\) matrix -------------------------- Differentiate \\(L\\) with respect to each user, obtaining \\(N\\) first\-order equations, which may then be set to zero and solved. \\\[ \\frac{\\partial L}{\\partial u\_i} \= \\sum\_{j\=1}^M (y\_{ij}\-u\_i^\\top m\_j)(\-2 m\_j) \+ 2 \\lambda\_u u\_i \= 0 \\quad \\in {\\cal R}^{K \\times 1} \\] which gives the following equation: \\\[ \\sum\_{j\=1}^M (y\_{ij}\-u\_i^\\top m\_j)m\_j \= \\lambda\_u u\_i \\] If you work it out carefully you can write this purely in matrix form as follows: \\\[ (y\_i \- u\_i^\\top m)\\; m^\\top \= \\lambda\_u u\_i^\\top \\quad \\in {\\cal R}^{1 \\times K} \\] (Note that \\(y\_i \\in {\\cal R}^{1 \\times M}\\); \\(m^\\top \\in {\\cal R}^{M \\times K}\\); \\(u\_i \\in {\\cal R}^{K \\times 1}\\). And so the LHS is \\({\\cal R}^{1 \\times K}\\), as is the RHS.) We rewrite this economically as \\\[ \\begin{align} m \\cdot (y\_i^\\top \- m^\\top u\_i) \&\= \\lambda\_u u\_i \\quad \\in {\\cal R}^{K \\times 1} \\\\ m \\cdot y\_i^\\top \&\= (\\lambda\_u I \+ m \\cdot m^\\top) \\cdot u\_i \\\\ \\mbox{ } \\\\ u\_i \&\= (\\lambda\_u I \+ m \\cdot m^\\top)^{\-1} \\cdot m \\cdot y\_i^\\top \\quad \\in {\\cal R}^{K \\times 1} \\end{align} \\] where \\(I \\in {\\cal R}^{K \\times K}\\) is an identity matrix. This gives one column \\(u\_i\\) of the \\(u\\) matrix, and we compute this over all \\(i\=1,2,...,N\\). 14\.4 Solve \\(m\\) matrix -------------------------- This is analogous to the \\(u\\) matrix, and the answer is \\\[ m\_j \= (\\lambda\_m I \+ u \\cdot u^\\top)^{\-1} \\cdot u \\cdot y\_j \\quad \\in {\\cal R}^{K \\times 1} \\] This gives one column \\(m\_j\\) of the \\(m\\) matrix, and we compute this over all \\(j\=1,2,...,M\\). Note that \\(I \\in {\\cal R}^{K \\times K}\\) is an identity matrix, and \\(y\_j \\in {\\cal R}^{N \\times 1}\\). For those students who are uncomfortable with this sort of matrix algebra, I strongly recommend taking a small system of \\(N\=4\\) users and \\(M\=3\\) movies, and factorizing the matrix \\(r \\in {\\cal R}^{4 \\times 3}\\). Maybe set \\(K\=2\\). Rework the calculus and algebra above to get comfortable with these mathematical objects. The alternating least squares algorithm is similar to the Expectation\-Maximization (EM) algorithm of Dempster, Laird, and Rubin ([1977](#ref-10.2307/2984875)). 14\.5 ALS package ----------------- In R, we have the ALS package to do this matrix factorization. ``` library(ALS) ``` ``` ## Loading required package: nnls ``` ``` ## Loading required package: Iso ``` ``` ## Iso 0.0-17 ``` Suppose we have 50 users who rate 200 movies. This gives us the \\(y\\) matrix of true ratings. We want to factorize this matrix into two latent matrices \\(u\\) and \\(m\\), by choosing \\(K\=2\\) latent attributes. The code for this is simple. ``` N=50; M=200; K=2 y = matrix(ceiling(runif(N*M)*5),N,M) #Matrix (i,j) to be factorized u0 = matrix(rnorm(K*N),N,K) #Guess for u matrix m0 = matrix(rnorm(K*M),M,K) #Guess for m matrix res = als(CList=list(u0),S=m0,PsiList=list(y)) ``` ``` ## Initial RSS 129355.7 ## Iteration (opt. S): 1, RSS: 109957.2, RD: 0.1499624 ## Iteration (opt. C): 2, RSS: 67315.62, RD: 0.3878016 ## Iteration (opt. S): 3, RSS: 19471.09, RD: 0.7107493 ## Iteration (opt. C): 4, RSS: 19029.84, RD: 0.02266185 ## Iteration (opt. S): 5, RSS: 18992.98, RD: 0.001936723 ## Iteration (opt.
C): 6, RSS: 18966.15, RD: 0.001412627 ## Iteration (opt. S): 7, RSS: 18947.37, RD: 0.0009902627 ## Initial RSS / Final RSS = 129355.7 / 18947.37 = 6.827104 ``` We now extract the two latent matrices. The predicted ratings matrix \\(r\\) is also generated. We compute the mean squared error (MSE) of the ratings predictions. ``` #Results u = t(res$CList[[1]]); print(dim(u)) #Put in K x N format ``` ``` ## [1] 2 50 ``` ``` m = t(res$S); print(dim(m)) #In K x M format ``` ``` ## [1] 2 200 ``` ``` r = t(u) %*% m; print(dim(r)) #Should be N x M ``` ``` ## [1] 50 200 ``` ``` e = (r-y)^2; print(mean(e)) ``` ``` ## [1] 1.894737 ``` ``` #Check print(mean(res$resid[[1]]^2)) ``` ``` ## [1] 1.894737 ``` 14\.6 Interpretation and Use ---------------------------- What does the \\(u\\) matrix tell us? It tells us, for each of the 50 users, how much weight they place on each of the two attributes. For example, the first user has the following weights: ``` print(u[,1]) ``` ``` ## [1] 11.46459 5.84150 ``` You can see that each attribute is given a different weight. Likewise, you can take matrix \\(m\\) and see how much each movie offers on each attribute dimension. For example, the first movie’s loadings are ``` print(m[,1]) ``` ``` ## [1] 0.2137344 0.0583482 ``` How do we use this decomposition? We can take a new user and find the closest existing user, using cosine similarity on some other observable characteristics. Then we can use that user’s weights in matrix \\(u\\) to obtain the movie ranking for the new user. Suppose the new user’s weights just happen to be the mean of all the other users’. Then we have ``` #Find new user's weights on attributes u_new = as.matrix(rowMeans(u)) print(u_new) ``` ``` ## [,1] ## [1,] 9.576559 ## [2,] 9.489472 ``` ``` #Find predicted ratings for all M movies for the new user pred_ratings = t(m) %*% u_new sol = sort(pred_ratings,decreasing=TRUE,index.return=TRUE) print(head(sol$ix)) ``` ``` ## [1] 87 162 158 165 112 97 ``` The top 6 movie numbers are listed.
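The matching step mentioned above, finding the closest existing user by cosine similarity, is easy to sketch. The user\_features matrix and new\_features vector below are hypothetical stand\-ins for whatever observable characteristics are available; they are not objects created earlier in this chapter.

```
#Sketch: match a new user to the closest existing user by cosine similarity,
#then rank movies with that user's latent weights. The feature data here is
#simulated purely for illustration.
cosine_sim = function(a, b) sum(a*b) / (sqrt(sum(a*a)) * sqrt(sum(b*b)))
d = 4                                      #number of observable features
user_features = matrix(rnorm(N*d), N, d)   #stand-in features for the N users
new_features = rnorm(d)                    #stand-in features for the new user
sims = apply(user_features, 1, cosine_sim, b = new_features)
closest = which.max(sims)
pred_ratings = t(m) %*% u[, closest]       #rank movies using the closest user's weights
print(head(sort(pred_ratings, decreasing=TRUE, index.return=TRUE)$ix))
```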
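To close the loop on the derivations in sections 14\.3 and 14\.4, the closed\-form updates can also be coded directly in a few lines of base R. This is only a sketch: it reuses y, N, M, and K from the ALS example above, while the penalty lambda and the number of sweeps are arbitrary choices, not values taken from the text.

```
#Sketch: hand-rolled alternating least squares using the update equations
#u_i = (lambda I + m m')^{-1} m y_i'  and  m_j = (lambda I + u u')^{-1} u y_j.
#Reuses y (N x M), N, M, K from above; lambda and the sweep count are arbitrary.
lambda = 0.1
u2 = matrix(rnorm(K*N), K, N)    #K x N starting guess
m2 = matrix(rnorm(K*M), K, M)    #K x M starting guess
for (sweep in 1:25) {
  u2 = solve(lambda*diag(K) + m2 %*% t(m2)) %*% m2 %*% t(y)   #update all user columns
  m2 = solve(lambda*diag(K) + u2 %*% t(u2)) %*% u2 %*% y      #update all movie columns
}
r2 = t(u2) %*% m2                #N x M matrix of fitted ratings
print(mean((r2 - y)^2))          #in-sample MSE, comparable to the ALS fit above
```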
Chapter 15 Product Market Forecasting using the Bass Model ========================================================== 15\.1 Main Ideas ---------------- The **Bass** product diffusion model is a classic one in the marketing literature. It has been successfully used to predict the market shares of various newly introduced products, as well as mature ones. The main idea of the model is that the adoption rate of a product comes from two sources: 1. The propensity of consumers to adopt the product independent of social influences to do so. 2. The additional propensity to adopt the product because others have adopted it. Hence, at some point in the life cycle of a good product, social contagion, i.e. the influence of the early adopters becomes sufficiently strong so as to drive many others to adopt the product as well. It may be going too far to think of this as a **network** effect, because Frank Bass did this work well before the concept of network effect was introduced, but essentially that is what it is. The Bass model shows how the information of the first few periods of sales data may be used to develop a fairly good forecast of future sales. One can easily see that whereas this model came from the domain of marketing, it may just as easily be used to model forecasts of cashflows to determine the value of a start\-up company. 15\.2 Historical Examples ------------------------- There are some classic examples from the literature of the Bass model providing a very good forecast of the ramp up in product adoption as a function of the two sources described above. See for example the actual versus predicted market growth for VCRs in the 80s and the adoption of answering machines shown in the Figures below. 15\.3 The Basic Idea -------------------- We follow the exposition in Bass (1969\). Define the cumulative probability of purchase of a product from time zero to time \\(t\\) by a single individual as \\(F(t)\\). Then, the probability of purchase at time \\(t\\) is the density function \\(f(t) \= F'(t)\\). The rate of purchase at time \\(t\\), given no purchase so far, logically follows, i.e. \\\[ \\frac{f(t)}{1\-F(t)}. \\] Modeling this is just like modeling the adoption rate of the product at a given time \\(t\\). 15\.4 Main Differential Equation -------------------------------- Bass suggested that this adoption rate be defined as \\\[ \\frac{f(t)}{1\-F(t)} \= p \+ q\\; F(t). \\] where we may think of \\(p\\) as defining the **independent rate** of a consumer adopting the product, and \\(q\\) as the **imitation rate**, because it modulates the impact from the cumulative intensity of adoption, \\(F(t)\\). Hence, if we can find \\(p\\) and \\(q\\) for a product, we can forecast its adoption over time, and thereby generate a time path of sales. To summarize: * \\(p\\): coefficient of innovation. * \\(q\\): coefficient of imitation. 15\.5 Solving the Model for \\(F(t)\\) -------------------------------------- We rewrite the Bass equation: \\\[ \\frac{dF/dt}{1\-F} \= p \+ q\\; F. \\] and note that \\(F(0\)\=0\\). 
The steps in the solution are: \\\[ \\begin{eqnarray} \\frac{dF}{dt} \&\=\& (p\+qF)(1\-F) \\\\ \\frac{dF}{dt} \&\=\& p \+ (q\-p)F \- qF^2 \\\\ \\int \\frac{1}{p \+ (q\-p)F \- qF^2}\\;dF \&\=\& \\int dt \\\\ \\frac{\\ln(p\+qF) \- \\ln(1\-F)}{p\+q} \&\=\& t\+c\_1 \\quad \\quad (\*) \\\\ t\=0 \&\\Rightarrow\& F(0\)\=0 \\\\ t\=0 \&\\Rightarrow\& c\_1 \= \\frac{\\ln p}{p\+q} \\\\ F(t) \&\=\& \\frac{p(e^{(p\+q)t}\-1\)}{p e^{(p\+q)t} \+ q} \\end{eqnarray} \\] 15\.6 Another solution ---------------------- An alternative approach (this was suggested by students Muhammad Sagarwalla based on ideas from Alexey Orlovsky) goes as follows. First, split the integral above into partial fractions. \\\[ \\int \\frac{1}{(p\+qF)(1\-F)}\\;dF \= \\int dt \\] So we write \\\[ \\begin{eqnarray} \\frac{1}{(p\+qF)(1\-F)} \&\=\& \\frac{A}{p\+qF} \+ \\frac{B}{1\-F}\\\\ \&\=\& \\frac{A\-AF\+pB\+qFB}{(p\+qF)(1\-F)}\\\\ \&\=\& \\frac{A\+pB\+F(qB\-A)}{(p\+qF)(1\-F)} \\end{eqnarray} \\] This implies that \\\[ \\begin{eqnarray} A\+pB \&\=\& 1 \\\\ qB\-A \&\=\& 0 \\end{eqnarray} \\] Solving we get \\\[ \\begin{eqnarray} A \&\=\& q/(p\+q)\\\\ B \&\=\& 1/(p\+q) \\end{eqnarray} \\] so that \\\[ \\begin{eqnarray} \\int \\frac{1}{(p\+qF)(1\-F)}\\;dF \&\=\& \\int dt \\\\ \\int \\left(\\frac{A}{p\+qF} \+ \\frac{B}{1\-F}\\right) \\; dF\&\=\& t \+ c\_1 \\\\ \\int \\left(\\frac{q/(p\+q)}{p\+qF} \+ \\frac{1/(p\+q)}{1\-F}\\right) \\; dF\&\=\& t\+c\_1\\\\ \\frac{1}{p\+q}\\ln(p\+qF) \- \\frac{1}{p\+q}\\ln(1\-F) \&\=\& t\+c\_1\\\\ \\frac{\\ln(p\+qF) \- \\ln(1\-F)}{p\+q} \&\=\& t\+c\_1 \\end{eqnarray} \\] which is the same as equation (\*). The solution as before is \\\[ F(t) \= \\frac{p(e^{(p\+q)t}\-1\)}{p e^{(p\+q)t} \+ q} \\] 15\.7 Solve for \\(f(t)\\) -------------------------- We may also solve for \\\[ f(t) \= \\frac{dF}{dt} \= \\frac{e^{(p\+q)t}\\; p \\; (p\+q)^2}{\[p e^{(p\+q)t} \+ q]^2} \\] Therefore, if the target market is of size \\(m\\), then at each \\(t\\), the adoptions are simply given by \\(m \\times f(t)\\). 15\.8 Example ------------- For example, set \\(m\=100,000\\), \\(p\=0\.01\\) and \\(q\=0\.2\\). Then the adoption rate is shown in the Figure below. ``` f = function(p,q,t) { res = (exp((p+q)*t)*p*(p+q)^2)/(p*exp((p+q)*t)+q)^2 } t = seq(1,20) m = 100000 p = 0.01 q = 0.20 plot(t,m*f(p,q,t),type="l",col="blue",lwd=3,xlab="Time (years)",ylab="Adoptions") grid(lwd=2) ``` 15\.9 Symbolic Math in R ------------------------ ``` #BASS MODEL FF = expression(p*(exp((p+q)*t)-1)/(p*exp((p+q)*t)+q)) print(FF) ``` ``` ## expression(p * (exp((p + q) * t) - 1)/(p * exp((p + q) * t) + ## q)) ``` ``` #Take derivative ff = D(FF,"t") print(ff) ``` ``` ## p * (exp((p + q) * t) * (p + q))/(p * exp((p + q) * t) + q) - ## p * (exp((p + q) * t) - 1) * (p * (exp((p + q) * t) * (p + ## q)))/(p * exp((p + q) * t) + q)^2 ``` ``` #SET UP THE FUNCTION ff = function(p,q,t) { res = D(FF,"t") } ``` ``` #NOTE THE USE OF eval m=100000; p=0.01; q=0.20; t=seq(1,20) plot(t,m*eval(ff(p,q,t)),type="l",col="red",lwd=3) grid(lwd=2) ``` 15\.10 Solution using Wolfram Alpha ----------------------------------- <https://www.wolframalpha.com/> 15\.11 Calibration ------------------ How do we get coefficients \\(p\\) and \\(q\\)? Given we have the current sales history of the product, we can use it to fit the adoption curve. * Sales in any period are: \\(s(t) \= m \\; f(t)\\). * Cumulative sales up to time \\(t\\) are: \\(S(t) \= m \\; F(t)\\). 
Substituting for \\(f(t)\\) and \\(F(t)\\) in the Bass equation gives: \\\[ \\frac{s(t)/m}{1\-S(t)/m} \= p \+ q\\; S(t)/m \\] We may rewrite this as \\\[ s(t) \= \[p\+q\\; S(t)/m]\[m \- S(t)] \\] Therefore, \\\[ \\begin{eqnarray} s(t) \&\=\& \\beta\_0 \+ \\beta\_1 \\; S(t) \+ \\beta\_2 \\; S(t)^2 \\quad (BASS) \\\\ \\beta\_0 \&\=\& pm \\\\ \\beta\_1 \&\=\& q\-p \\\\ \\beta\_2 \&\=\& \-q/m \\end{eqnarray} \\] Equation (BASS) may be estimated by a regression of sales against cumulative sales. Once the coefficients in the regression \\(\\{\\beta\_0, \\beta\_1, \\beta\_2\\}\\) are obtained, the equations above may be inverted to determine the values of \\(\\{m,p,q\\}\\). We note that since \\\[ \\beta\_1 \= q\-p \= \-m \\beta\_2 \- \\frac{\\beta\_0}{m}, \\] we obtain a quadratic equation in \\(m\\): \\\[ \\beta\_2 m^2 \+ \\beta\_1 m \+ \\beta\_0 \= 0 \\] Solving we have \\\[ m \= \\frac{\-\\beta\_1 \\pm \\sqrt{\\beta\_1^2 \- 4 \\beta\_0 \\beta\_2}}{2 \\beta\_2} \\] and then this value of \\(m\\) may be used to solve for \\\[ p \= \\frac{\\beta\_0}{m}; \\quad \\quad q \= \- m \\beta\_2 \\] 15\.12 iPhone Sales Forecast ---------------------------- As an example, let’s look at the trend for iPhone sales (we store the quarterly sales in a file and read it in, and then undertake the Bass model analysis). We get the data from: [http://www.statista.com/statistics/263401/global\-apple\-iphone\-sales\-since\-3rd\-quarter\-2007/](http://www.statista.com/statistics/263401/global-apple-iphone-sales-since-3rd-quarter-2007/) The R code for this computation is as follows: ``` #USING APPLE iPHONE SALES DATA data = read.table("DSTMAA_data/iphone_sales.txt",header=TRUE) print(head(data)) ``` ``` ## Quarter Sales_MM_units ## 1 Q3_07 0.27 ## 2 Q4_07 1.12 ## 3 Q1_08 2.32 ## 4 Q2_08 1.70 ## 5 Q3_08 0.72 ## 6 Q4_08 6.89 ``` ``` print(tail(data)) ``` ``` ## Quarter Sales_MM_units ## 30 Q4_14 39.27 ## 31 Q1_15 74.47 ## 32 Q2_15 61.17 ## 33 Q3_15 47.53 ## 34 Q4_15 48.05 ## 35 Q1_16 74.78 ``` ``` #data = data[1:31,] data = data[1:length(data[,1]),] ``` ``` isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -14.050 -3.413 -1.429 2.905 19.987 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.696e+00 2.205e+00 1.676 0.1034 ## cum_isales 1.130e-01 1.677e-02 6.737 1.31e-07 *** ## cum_isales2 -5.508e-05 2.110e-05 -2.610 0.0136 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1 ## ## Residual standard error: 7.844 on 32 degrees of freedom ## Multiple R-squared: 0.8729, Adjusted R-squared: 0.865 ## F-statistic: 109.9 on 2 and 32 DF, p-value: 4.61e-15 ``` ``` b = res$coefficients ``` ``` #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -32.20691 2083.82202 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 2083.822 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.00177381124189973" "0.114767511363674" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Qtrly Units (MM)",main="Apple Inc Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` 15\.13 Comparison to other products ----------------------------------- The estimated Apple coefficients are: \\(p\=0\.0018\\) and \\(q\=0\.1148\\). For several other products, the table below shows the estimated coefficients reported in Table I of the original Bass (1969\) paper. 15\.14 Sales Peak ----------------- It is easy to calculate the time at which adoptions will peak out. Differentiate \\(f(t)\\) with respect to \\(t\\), and set the result equal to zero, i.e. \\\[ t^\* \= \\mbox{argmax}\_t f(t) \\] which is equivalent to the solution to \\(f'(t)\=0\\). The calculations are simple and give \\\[ t^\* \= \\frac{\-1}{p\+q}\\; \\ln(p/q) \\] Hence, for the values \\(p\=0\.01\\) and \\(q\=0\.2\\), we have \\\[ t^\* \= \\frac{\-1}{0\.01\+0\.2} \\ln(0\.01/0\.2\) \= 14\.2654 \\; \\mbox{years}. \\] If we examine the plot in the earlier Figure in the symbolic math section we see this to be where the graph peaks out. For the Apple data, here is the computation of the sales peak, reported in number of quarters from inception. ``` #PEAK SALES TIME POINT (IN QUARTERS) FOR APPLE tstar = -1/(p+q)*log(p/q) print(tstar) ``` ``` ## (Intercept) ## 35.77939 ``` 15\.15 Samsung Galaxy Phone Sales --------------------------------- ``` #Read in Galaxy sales data data = read.csv("DSTMAA_data/galaxy_sales.csv") #Get coefficients isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -11.1239 -6.1774 0.4633 5.0862 13.2662 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.375e+01 4.506e+00 11.928 2.87e-10 *** ## cum_isales 7.660e-02 1.068e-02 7.173 8.15e-07 *** ## cum_isales2 -2.806e-05 5.074e-06 -5.530 2.47e-05 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 7.327 on 19 degrees of freedom ## Multiple R-squared: 0.8206, Adjusted R-squared: 0.8017 ## F-statistic: 43.44 on 2 and 19 DF, p-value: 8.167e-08 ``` ``` b = res$coefficients #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -578.9157 3308.9652 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 3308.965 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.0162432614649845" "0.0928432001791269" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Qtrly Units (MM)",main="Samsung Galaxy Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` 15\.16 Global Semiconductor Sales --------------------------------- ``` #Read in semiconductor sales data data = read.csv("DSTMAA_data/semiconductor_sales.csv") #Get coefficients isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -42.359 -12.415 0.698 12.963 45.489 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.086e+01 8.627e+00 5.896 3.76e-06 *** ## cum_isales 9.004e-02 9.601e-03 9.378 1.15e-09 *** ## cum_isales2 -6.878e-06 1.988e-06 -3.459 0.00196 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 21.46 on 25 degrees of freedom ## Multiple R-squared: 0.9515, Adjusted R-squared: 0.9476 ## F-statistic: 245.3 on 2 and 25 DF, p-value: < 2.2e-16 ``` ``` b = res$coefficients #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -542.4036 13633.3003 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 13633.3 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.00373048366213552" "0.0937656034785294" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Annual Sales",main="Semiconductor Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` Show data frame: ``` df = as.data.frame(cbind(t,1988+t,fn_f)) print(df) ``` ``` ## t V2 fn_f ## 1 0 1988 50.858804 ## 2 1 1989 55.630291 ## 3 2 1990 60.802858 ## 4 3 1991 66.400785 ## 5 4 1992 72.447856 ## 6 5 1993 78.966853 ## 7 6 1994 85.978951 ## 8 7 1995 93.503005 ## 9 8 1996 101.554731 ## 10 9 1997 110.145765 ## 11 10 1998 119.282622 ## 12 11 1999 128.965545 ## 13 12 2000 139.187272 ## 14 13 2001 149.931747 ## 15 14 2002 161.172802 ## 16 15 2003 172.872875 ## 17 16 2004 184.981811 ## 18 17 2005 197.435829 ## 19 18 2006 210.156736 ## 20 19 2007 223.051488 ## 21 20 2008 236.012180 ## 22 21 2009 248.916580 ## 23 22 2010 261.629271 ## 24 23 2011 274.003469 ## 25 24 2012 285.883554 ## 26 25 2013 297.108294 ## 27 26 2014 307.514709 ## 28 27 2015 316.942473 ## 29 28 2016 325.238670 ## 30 29 2017 332.262713 ## 31 30 2018 337.891162 ## 32 31 2019 342.022188 ## 33 32 2020 344.579402 ## 34 33 2021 345.514816 ## 35 34 2022 344.810743 ## 36 35 2023 342.480503 ## 37 36 2024 338.567889 ## 38 37 2025 333.145434 ## 39 38 2026 326.311588 ## 40 39 2027 318.186993 ## 41 40 2028 308.910101 ## 42 41 2029 298.632384 ## 43 42 2030 287.513417 ## 44 43 2031 275.716082 ## 
45 44 2032 263.402104 ## 46 45 2033 250.728110 ## 47 46 2034 237.842301 ## 48 47 2035 224.881829 ## 49 48 2036 211.970872 ## 50 49 2037 199.219405 ## 51 50 2038 186.722585 ## 52 51 2039 174.560685 ## 53 52 2040 162.799482 ## 54 53 2041 151.490998 ## 55 54 2042 140.674507 ## 56 55 2043 130.377708 ## 57 56 2044 120.618009 ## 58 57 2045 111.403835 ## 59 58 2046 102.735925 ## 60 59 2047 94.608582 ## 61 60 2048 87.010824 ## 62 61 2049 79.927455 ## 63 62 2050 73.340010 ## 64 63 2051 67.227596 ## 65 64 2052 61.567619 ## 66 65 2053 56.336405 ## 67 66 2054 51.509716 ## 68 67 2055 47.063180 ## 69 68 2056 42.972643 ## 70 69 2057 39.214436 ## 71 70 2058 35.765596 ## 72 71 2059 32.604021 ## 73 72 2060 29.708584 ## 74 73 2061 27.059212 ## 75 74 2062 24.636929 ## 76 75 2063 22.423878 ## 77 76 2064 20.403319 ## 78 77 2065 18.559614 ## 79 78 2066 16.878199 ## 80 79 2067 15.345546 ## 81 80 2068 13.949122 ## 82 81 2069 12.677336 ## 83 82 2070 11.519492 ## 84 83 2071 10.465737 ## 85 84 2072 9.507005 ## 86 85 2073 8.634969 ## 87 86 2074 7.841989 ## 88 87 2075 7.121062 ## 89 88 2076 6.465778 ## 90 89 2077 5.870270 ## 91 90 2078 5.329179 ## 92 91 2079 4.837608 ## 93 92 2080 4.391088 ## 94 93 2081 3.985542 ## 95 94 2082 3.617252 ## 96 95 2083 3.282831 ## 97 96 2084 2.979193 ## 98 97 2085 2.703528 ## 99 98 2086 2.453280 ## 100 99 2087 2.226120 ## 101 100 2088 2.019932 ``` 15\.17 Extensions ----------------- The Bass model has been extended to what is known as the generalized Bass model in a paper by Bass, Krishnan, and Jain (1994\). The idea is to extend the model to the following equation: \\\[ \\frac{f(t)}{1\-F(t)} \= \[p\+q\\; F(t)] \\; x(t) \\] where \\(x(t)\\) stands for current marketing effort. This additional variable allows (i) consideration of effort in the model, and (ii) given the function \\(x(t)\\), it may be optimized. The Bass model comes from a deterministic differential equation. Extensions to stochastic differential equations need to be considered. See also the paper on Bayesian inference in Bass models by Boatwright and Kamakura (2003\). * Bass, Frank. (1969\). “A New Product Growth Model for Consumer Durables,” *Management Science* 16, 215–227\. * Bass, Frank., Trichy Krishnan, and Dipak Jain (1994\). “Why the Bass Model Without Decision Variables,” *Marketing Science* 13, 204–223\. * Boatwright, Lee., and Wagner Kamakura (2003\). “Bayesian Model for Prelaunch Sales Forecasting of Recorded Music,” Management Science 49(2\), 179–196\. 15\.18 Trading off \\(p\\) vs \\(q\\) ------------------------------------- In the Bass model, if the coefficient of imitation increases relative to the coefficient of innovation, then which of the following is the most valid? 1. the peak of the product life cycle occurs later. 2. the peak of the product life cycle occurs sooner. 3. there may be an increasing chance of two life\-cycle peaks. 4. the peak may occur sooner or later, depending on the coefficient of innovation. 
Using the peak\-time formula, substitute \\(x\=q/p\\): \\\[ t^\*\=\\frac{\-1}{p\+q} \\ln(p/q) \= \\frac{\\ln(q/p)}{p\+q} \= \\frac{1}{p} \\; \\frac{\\ln(q/p)}{1\+q/p} \= \\frac{1}{p}\\; \\frac{\\ln(x)}{1\+x} \\] Differentiate with respect to \\(x\\) (we are interested in the sign of the first derivative \\(\\partial t^\*/\\partial q\\), which is the same as the sign of \\(\\partial t^\*/\\partial x\\)): \\\[ \\frac{\\partial t^\*}{\\partial x} \= \\frac{1}{p}\\left\[\\frac{1}{x(1\+x)}\-\\frac{\\ln x}{(1\+x)^2}\\right]\=\\frac{1\+x\-x\\ln x}{px(1\+x)^2} \\] From the Bass model we know that \\(q \> p \> 0\\), i.e. \\(x\>1\\), otherwise we could get negative adoption values or a shape with no maximum in the \\(0 \\le F \< 1\\) range. Therefore, the sign of \\(\\partial t^\*/\\partial x\\) is the same as: \\\[ sign\\left(\\frac{\\partial t^\*}{\\partial x}\\right) \= sign(1\+x\-x\\ln x), \~\~\~ x\>1 \\] But this non\-linear equation \\\[ 1\+x\-x\\ln x\=0, \~\~\~ x\>1 \\] has a root \\(x \\approx 3\.59\\). In other words, the derivative \\(\\partial t^\* / \\partial x\\) is negative when \\(x \> 3\.59\\) and positive when \\(x \< 3\.59\\). For low values of \\(x\=q/p\\), an increase in the coefficient of imitation \\(q\\) increases the time to the sales peak, and for high values of \\(q/p\\) the time decreases with increasing \\(q\\). So the right answer to the question appears to be “it depends on the values of \\(p\\) and \\(q\\)”. ``` t = seq(0,5,0.1) p = 0.1; q=0.22 plot(t,f(p,q,t),type="l",col="blue",lwd=2) p = 0.1; q=0.20 lines(t,f(p,q,t),type="l",col="red",lwd=2) ``` In the Figure, when \\(x\\) gets smaller, the peak occurs earlier.
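The root quoted above is easy to confirm numerically; this is just a quick check with base R’s uniroot.

```
#Numerical check of the root of 1 + x - x*log(x) = 0 for x > 1
g = function(x) 1 + x - x*log(x)
print(uniroot(g, c(2, 10))$root)   #approximately 3.59
```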
The rate of purchase at time \\(t\\), given no purchase so far, logically follows, i.e. \\\[ \\frac{f(t)}{1\-F(t)}. \\] Modeling this is just like modeling the adoption rate of the product at a given time \\(t\\). 15\.4 Main Differential Equation -------------------------------- Bass suggested that this adoption rate be defined as \\\[ \\frac{f(t)}{1\-F(t)} \= p \+ q\\; F(t). \\] where we may think of \\(p\\) as defining the **independent rate** of a consumer adopting the product, and \\(q\\) as the **imitation rate**, because it modulates the impact from the cumulative intensity of adoption, \\(F(t)\\). Hence, if we can find \\(p\\) and \\(q\\) for a product, we can forecast its adoption over time, and thereby generate a time path of sales. To summarize: * \\(p\\): coefficient of innovation. * \\(q\\): coefficient of imitation. 15\.5 Solving the Model for \\(F(t)\\) -------------------------------------- We rewrite the Bass equation: \\\[ \\frac{dF/dt}{1\-F} \= p \+ q\\; F. \\] and note that \\(F(0\)\=0\\). The steps in the solution are: \\\[ \\begin{eqnarray} \\frac{dF}{dt} \&\=\& (p\+qF)(1\-F) \\\\ \\frac{dF}{dt} \&\=\& p \+ (q\-p)F \- qF^2 \\\\ \\int \\frac{1}{p \+ (q\-p)F \- qF^2}\\;dF \&\=\& \\int dt \\\\ \\frac{\\ln(p\+qF) \- \\ln(1\-F)}{p\+q} \&\=\& t\+c\_1 \\quad \\quad (\*) \\\\ t\=0 \&\\Rightarrow\& F(0\)\=0 \\\\ t\=0 \&\\Rightarrow\& c\_1 \= \\frac{\\ln p}{p\+q} \\\\ F(t) \&\=\& \\frac{p(e^{(p\+q)t}\-1\)}{p e^{(p\+q)t} \+ q} \\end{eqnarray} \\] 15\.6 Another solution ---------------------- An alternative approach (this was suggested by students Muhammad Sagarwalla based on ideas from Alexey Orlovsky) goes as follows. First, split the integral above into partial fractions. \\\[ \\int \\frac{1}{(p\+qF)(1\-F)}\\;dF \= \\int dt \\] So we write \\\[ \\begin{eqnarray} \\frac{1}{(p\+qF)(1\-F)} \&\=\& \\frac{A}{p\+qF} \+ \\frac{B}{1\-F}\\\\ \&\=\& \\frac{A\-AF\+pB\+qFB}{(p\+qF)(1\-F)}\\\\ \&\=\& \\frac{A\+pB\+F(qB\-A)}{(p\+qF)(1\-F)} \\end{eqnarray} \\] This implies that \\\[ \\begin{eqnarray} A\+pB \&\=\& 1 \\\\ qB\-A \&\=\& 0 \\end{eqnarray} \\] Solving we get \\\[ \\begin{eqnarray} A \&\=\& q/(p\+q)\\\\ B \&\=\& 1/(p\+q) \\end{eqnarray} \\] so that \\\[ \\begin{eqnarray} \\int \\frac{1}{(p\+qF)(1\-F)}\\;dF \&\=\& \\int dt \\\\ \\int \\left(\\frac{A}{p\+qF} \+ \\frac{B}{1\-F}\\right) \\; dF\&\=\& t \+ c\_1 \\\\ \\int \\left(\\frac{q/(p\+q)}{p\+qF} \+ \\frac{1/(p\+q)}{1\-F}\\right) \\; dF\&\=\& t\+c\_1\\\\ \\frac{1}{p\+q}\\ln(p\+qF) \- \\frac{1}{p\+q}\\ln(1\-F) \&\=\& t\+c\_1\\\\ \\frac{\\ln(p\+qF) \- \\ln(1\-F)}{p\+q} \&\=\& t\+c\_1 \\end{eqnarray} \\] which is the same as equation (\*). The solution as before is \\\[ F(t) \= \\frac{p(e^{(p\+q)t}\-1\)}{p e^{(p\+q)t} \+ q} \\] 15\.7 Solve for \\(f(t)\\) -------------------------- We may also solve for \\\[ f(t) \= \\frac{dF}{dt} \= \\frac{e^{(p\+q)t}\\; p \\; (p\+q)^2}{\[p e^{(p\+q)t} \+ q]^2} \\] Therefore, if the target market is of size \\(m\\), then at each \\(t\\), the adoptions are simply given by \\(m \\times f(t)\\). 15\.8 Example ------------- For example, set \\(m\=100,000\\), \\(p\=0\.01\\) and \\(q\=0\.2\\). Then the adoption rate is shown in the Figure below. 
```
f = function(p,q,t) {
  res = (exp((p+q)*t)*p*(p+q)^2)/(p*exp((p+q)*t)+q)^2
}
t = seq(1,20)
m = 100000
p = 0.01
q = 0.20
plot(t,m*f(p,q,t),type="l",col="blue",lwd=3,xlab="Time (years)",ylab="Adoptions")
grid(lwd=2)
```

15\.9 Symbolic Math in R
------------------------

```
#BASS MODEL
FF = expression(p*(exp((p+q)*t)-1)/(p*exp((p+q)*t)+q))
print(FF)
```

```
## expression(p * (exp((p + q) * t) - 1)/(p * exp((p + q) * t) +
## q))
```

```
#Take derivative
ff = D(FF,"t")
print(ff)
```

```
## p * (exp((p + q) * t) * (p + q))/(p * exp((p + q) * t) + q) -
## p * (exp((p + q) * t) - 1) * (p * (exp((p + q) * t) * (p +
## q)))/(p * exp((p + q) * t) + q)^2
```

```
#SET UP THE FUNCTION
ff = function(p,q,t) {
  res = D(FF,"t")
}
```

```
#NOTE THE USE OF eval
m=100000; p=0.01; q=0.20; t=seq(1,20)
plot(t,m*eval(ff(p,q,t)),type="l",col="red",lwd=3)
grid(lwd=2)
```

15\.10 Solution using Wolfram Alpha
-----------------------------------

<https://www.wolframalpha.com/>

15\.11 Calibration
------------------

How do we get coefficients \\(p\\) and \\(q\\)? Given we have the current sales history of the product, we can use it to fit the adoption curve.

* Sales in any period are: \\(s(t) \= m \\; f(t)\\).
* Cumulative sales up to time \\(t\\) are: \\(S(t) \= m \\; F(t)\\).

Substituting for \\(f(t)\\) and \\(F(t)\\) in the Bass equation gives:

\\\[ \\frac{s(t)/m}{1\-S(t)/m} \= p \+ q\\; S(t)/m \\]

We may rewrite this as

\\\[ s(t) \= \[p\+q\\; S(t)/m]\[m \- S(t)] \\]

Therefore,

\\\[ \\begin{eqnarray} s(t) \&\=\& \\beta\_0 \+ \\beta\_1 \\; S(t) \+ \\beta\_2 \\; S(t)^2 \\quad (BASS) \\\\ \\beta\_0 \&\=\& pm \\\\ \\beta\_1 \&\=\& q\-p \\\\ \\beta\_2 \&\=\& \-q/m \\end{eqnarray} \\]

Equation (BASS) may be estimated by a regression of sales on cumulative sales and cumulative sales squared. Once the coefficients in the regression \\(\\{\\beta\_0, \\beta\_1, \\beta\_2\\}\\) are obtained, the equations above may be inverted to determine the values of \\(\\{m,p,q\\}\\). We note that since

\\\[ \\beta\_1 \= q\-p \= \-m \\beta\_2 \- \\frac{\\beta\_0}{m}, \\]

we obtain a quadratic equation in \\(m\\):

\\\[ \\beta\_2 m^2 \+ \\beta\_1 m \+ \\beta\_0 \= 0 \\]

Solving we have

\\\[ m \= \\frac{\-\\beta\_1 \\pm \\sqrt{\\beta\_1^2 \- 4 \\beta\_0 \\beta\_2}}{2 \\beta\_2} \\]

and then this value of \\(m\\) may be used to solve for

\\\[ p \= \\frac{\\beta\_0}{m}; \\quad \\quad q \= \- m \\beta\_2 \\]

15\.12 iPhone Sales Forecast
----------------------------

As an example, let's look at the trend for iPhone sales (we store the quarterly sales in a file and read it in, and then undertake the Bass model analysis).
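The inversion from the regression coefficients \\(\\{\\beta\_0, \\beta\_1, \\beta\_2\\}\\) back to \\(\\{m,p,q\\}\\) can be collected into a small helper function. The sketch below is illustrative (the name `bass_params` is not from the text); it is sanity\-checked on the parameters of the earlier example, \\(m\=100,000\\), \\(p\=0\.01\\), \\(q\=0\.2\\).

```
#Sketch: recover (m, p, q) from the regression coefficients (b0, b1, b2)
bass_params = function(b0,b1,b2) {
  roots = (-b1 + c(1,-1)*sqrt(b1^2 - 4*b0*b2))/(2*b2)
  m = max(roots)                    #keep the positive (economically sensible) root
  list(m = m, p = b0/m, q = -m*b2)
}
print(bass_params(b0 = 0.01*100000, b1 = 0.2-0.01, b2 = -0.2/100000))
```

The function returns \\(m\=100,000\\), \\(p\=0\.01\\), and \\(q\=0\.2\\), as expected.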
We get the data from: [http://www.statista.com/statistics/263401/global\-apple\-iphone\-sales\-since\-3rd\-quarter\-2007/](http://www.statista.com/statistics/263401/global-apple-iphone-sales-since-3rd-quarter-2007/) The R code for this computation is as follows: ``` #USING APPLE iPHONE SALES DATA data = read.table("DSTMAA_data/iphone_sales.txt",header=TRUE) print(head(data)) ``` ``` ## Quarter Sales_MM_units ## 1 Q3_07 0.27 ## 2 Q4_07 1.12 ## 3 Q1_08 2.32 ## 4 Q2_08 1.70 ## 5 Q3_08 0.72 ## 6 Q4_08 6.89 ``` ``` print(tail(data)) ``` ``` ## Quarter Sales_MM_units ## 30 Q4_14 39.27 ## 31 Q1_15 74.47 ## 32 Q2_15 61.17 ## 33 Q3_15 47.53 ## 34 Q4_15 48.05 ## 35 Q1_16 74.78 ``` ``` #data = data[1:31,] data = data[1:length(data[,1]),] ``` ``` isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -14.050 -3.413 -1.429 2.905 19.987 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3.696e+00 2.205e+00 1.676 0.1034 ## cum_isales 1.130e-01 1.677e-02 6.737 1.31e-07 *** ## cum_isales2 -5.508e-05 2.110e-05 -2.610 0.0136 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 7.844 on 32 degrees of freedom ## Multiple R-squared: 0.8729, Adjusted R-squared: 0.865 ## F-statistic: 109.9 on 2 and 32 DF, p-value: 4.61e-15 ``` ``` b = res$coefficients ``` ``` #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -32.20691 2083.82202 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 2083.822 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.00177381124189973" "0.114767511363674" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Qtrly Units (MM)",main="Apple Inc Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` 15\.13 Comparison to other products ----------------------------------- The estimated Apple coefficients are: \\(p\=0\.0018\\) and \\(q\=0\.1148\\). For several other products, the table below shows the estimated coefficients reported in Table I of the original Bass (1969\) paper. 15\.14 Sales Peak ----------------- It is easy to calculate the time at which adoptions will peak out. Differentiate \\(f(t)\\) with respect to \\(t\\), and set the result equal to zero, i.e. \\\[ t^\* \= \\mbox{argmax}\_t f(t) \\] which is equivalent to the solution to \\(f'(t)\=0\\). The calculations are simple and give \\\[ t^\* \= \\frac{\-1}{p\+q}\\; \\ln(p/q) \\] Hence, for the values \\(p\=0\.01\\) and \\(q\=0\.2\\), we have \\\[ t^\* \= \\frac{\-1}{0\.01\+0\.2} \\ln(0\.01/0\.2\) \= 14\.2654 \\; \\mbox{years}. \\] If we examine the plot in the earlier Figure in the symbolic math section we see this to be where the graph peaks out. For the Apple data, here is the computation of the sales peak, reported in number of quarters from inception. 
``` #PEAK SALES TIME POINT (IN QUARTERS) FOR APPLE tstar = -1/(p+q)*log(p/q) print(tstar) ``` ``` ## (Intercept) ## 35.77939 ``` 15\.15 Samsung Galaxy Phone Sales --------------------------------- ``` #Read in Galaxy sales data data = read.csv("DSTMAA_data/galaxy_sales.csv") #Get coefficients isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -11.1239 -6.1774 0.4633 5.0862 13.2662 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.375e+01 4.506e+00 11.928 2.87e-10 *** ## cum_isales 7.660e-02 1.068e-02 7.173 8.15e-07 *** ## cum_isales2 -2.806e-05 5.074e-06 -5.530 2.47e-05 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 7.327 on 19 degrees of freedom ## Multiple R-squared: 0.8206, Adjusted R-squared: 0.8017 ## F-statistic: 43.44 on 2 and 19 DF, p-value: 8.167e-08 ``` ``` b = res$coefficients #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -578.9157 3308.9652 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 3308.965 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.0162432614649845" "0.0928432001791269" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Qtrly Units (MM)",main="Samsung Galaxy Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` 15\.16 Global Semiconductor Sales --------------------------------- ``` #Read in semiconductor sales data data = read.csv("DSTMAA_data/semiconductor_sales.csv") #Get coefficients isales = data[,2] cum_isales = cumsum(isales) cum_isales2 = cum_isales^2 res = lm(isales ~ cum_isales+cum_isales2) print(summary(res)) ``` ``` ## ## Call: ## lm(formula = isales ~ cum_isales + cum_isales2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -42.359 -12.415 0.698 12.963 45.489 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 5.086e+01 8.627e+00 5.896 3.76e-06 *** ## cum_isales 9.004e-02 9.601e-03 9.378 1.15e-09 *** ## cum_isales2 -6.878e-06 1.988e-06 -3.459 0.00196 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 21.46 on 25 degrees of freedom ## Multiple R-squared: 0.9515, Adjusted R-squared: 0.9476 ## F-statistic: 245.3 on 2 and 25 DF, p-value: < 2.2e-16 ``` ``` b = res$coefficients #FIT THE MODEL m1 = (-b[2]+sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) m2 = (-b[2]-sqrt(b[2]^2-4*b[1]*b[3]))/(2*b[3]) print(c(m1,m2)) ``` ``` ## cum_isales cum_isales ## -542.4036 13633.3003 ``` ``` m = max(m1,m2); print(m) ``` ``` ## [1] 13633.3 ``` ``` p = b[1]/m q = -m*b[3] print(c("p,q=",p,q)) ``` ``` ## (Intercept) cum_isales2 ## "p,q=" "0.00373048366213552" "0.0937656034785294" ``` ``` #PLOT THE FITTED MODEL nqtrs = 100 t=seq(0,nqtrs) ff = D(FF,"t") fn_f = eval(ff)*m plot(t,fn_f,type="l",ylab="Annual Sales",main="Semiconductor Sales") n = length(isales) lines(1:n,isales,col="red",lwd=2,lty=2) ``` Show data frame: ``` df = as.data.frame(cbind(t,1988+t,fn_f)) print(df) ``` ``` ## t V2 fn_f ## 1 0 1988 50.858804 ## 2 1 1989 55.630291 ## 3 2 1990 60.802858 ## 4 3 1991 66.400785 ## 5 4 1992 72.447856 ## 6 5 1993 78.966853 ## 7 6 1994 85.978951 ## 8 7 1995 93.503005 ## 9 8 1996 101.554731 ## 10 9 1997 110.145765 ## 11 10 1998 119.282622 ## 12 11 1999 128.965545 ## 13 12 2000 139.187272 ## 14 13 2001 149.931747 ## 15 14 2002 161.172802 ## 16 15 2003 172.872875 ## 17 16 2004 184.981811 ## 18 17 2005 197.435829 ## 19 18 2006 210.156736 ## 20 19 2007 223.051488 ## 21 20 2008 236.012180 ## 22 21 2009 248.916580 ## 23 22 2010 261.629271 ## 24 23 2011 274.003469 ## 25 24 2012 285.883554 ## 26 25 2013 297.108294 ## 27 26 2014 307.514709 ## 28 27 2015 316.942473 ## 29 28 2016 325.238670 ## 30 29 2017 332.262713 ## 31 30 2018 337.891162 ## 32 31 2019 342.022188 ## 33 32 2020 344.579402 ## 34 33 2021 345.514816 ## 35 34 2022 344.810743 ## 36 35 2023 342.480503 ## 37 36 2024 338.567889 ## 38 37 2025 333.145434 ## 39 38 2026 326.311588 ## 40 39 2027 318.186993 ## 41 40 2028 308.910101 ## 42 41 2029 298.632384 ## 43 42 2030 287.513417 ## 44 43 2031 275.716082 ## 45 44 2032 263.402104 ## 46 45 2033 250.728110 ## 47 46 2034 237.842301 ## 48 47 2035 224.881829 ## 49 48 2036 211.970872 ## 50 49 2037 199.219405 ## 51 50 2038 186.722585 ## 52 51 2039 174.560685 ## 53 52 2040 162.799482 ## 54 53 2041 151.490998 ## 55 54 2042 140.674507 ## 56 55 2043 130.377708 ## 57 56 2044 120.618009 ## 58 57 2045 111.403835 ## 59 58 2046 102.735925 ## 60 59 2047 94.608582 ## 61 60 2048 87.010824 ## 62 61 2049 79.927455 ## 63 62 2050 73.340010 ## 64 63 2051 67.227596 ## 65 64 2052 61.567619 ## 66 65 2053 56.336405 ## 67 66 2054 51.509716 ## 68 67 2055 47.063180 ## 69 68 2056 42.972643 ## 70 69 2057 39.214436 ## 71 70 2058 35.765596 ## 72 71 2059 32.604021 ## 73 72 2060 29.708584 ## 74 73 2061 27.059212 ## 75 74 2062 24.636929 ## 76 75 2063 22.423878 ## 77 76 2064 20.403319 ## 78 77 2065 18.559614 ## 79 78 2066 16.878199 ## 80 79 2067 15.345546 ## 81 80 2068 13.949122 ## 82 81 2069 12.677336 ## 83 82 2070 11.519492 ## 84 83 2071 10.465737 ## 85 84 2072 9.507005 ## 86 85 2073 8.634969 ## 87 86 2074 7.841989 ## 88 87 2075 7.121062 ## 89 88 2076 6.465778 ## 90 89 2077 5.870270 ## 91 90 2078 5.329179 ## 92 91 2079 4.837608 ## 93 92 2080 4.391088 ## 94 93 2081 3.985542 ## 95 94 2082 3.617252 ## 96 95 2083 3.282831 ## 97 96 2084 2.979193 ## 98 97 2085 2.703528 ## 99 98 2086 2.453280 ## 100 99 2087 2.226120 ## 101 100 2088 2.019932 ``` 15\.17 Extensions ----------------- The Bass model has been extended to what is known as the generalized Bass model in a paper by Bass, Krishnan, and Jain (1994\). 
The idea is to extend the model to the following equation:

\\\[ \\frac{f(t)}{1\-F(t)} \= \[p\+q\\; F(t)] \\; x(t) \\]

where \\(x(t)\\) stands for current marketing effort. This additional variable allows (i) marketing effort to enter the model explicitly, and (ii) the effort path \\(x(t)\\) itself to be optimized. The Bass model comes from a deterministic differential equation; extensions based on stochastic differential equations may also be considered. See also the paper on Bayesian inference in Bass models by Boatwright and Kamakura (2003\).

* Bass, Frank (1969\). “A New Product Growth Model for Consumer Durables,” *Management Science* 16, 215–227\.
* Bass, Frank, Trichy Krishnan, and Dipak Jain (1994\). “Why the Bass Model Fits Without Decision Variables,” *Marketing Science* 13, 204–223\.
* Boatwright, Lee, and Wagner Kamakura (2003\). “Bayesian Model for Prelaunch Sales Forecasting of Recorded Music,” *Management Science* 49(2\), 179–196\.

15\.18 Trading off \\(p\\) vs \\(q\\)
-------------------------------------

In the Bass model, if the coefficient of imitation increases relative to the coefficient of innovation, then which of the following is the most valid?

1. the peak of the product life cycle occurs later.
2. the peak of the product life cycle occurs sooner.
3. there may be an increasing chance of two life\-cycle peaks.
4. the peak may occur sooner or later, depending on the coefficient of innovation.

Using the peak\-time formula, substitute \\(x\=q/p\\):

\\\[ t^\*\=\\frac{\-1}{p\+q} \\ln(p/q) \= \\frac{\\ln(q/p)}{p\+q} \= \\frac{1}{p} \\; \\frac{\\ln(q/p)}{1\+q/p} \= \\frac{1}{p}\\; \\frac{\\ln(x)}{1\+x} \\]

Differentiate with respect to \\(x\\) (we are interested in the sign of the first derivative \\(\\partial t^\*/\\partial q\\), which is the same as the sign of \\(\\partial t^\*/\\partial x\\), since \\(x\\) is increasing in \\(q\\) for fixed \\(p\\)):

\\\[ \\frac{\\partial t^\*}{\\partial x} \= \\frac{1}{p}\\left\[\\frac{1}{x(1\+x)}\-\\frac{\\ln x}{(1\+x)^2}\\right]\=\\frac{1\+x\-x\\ln x}{px(1\+x)^2} \\]

From the Bass model we know that \\(q \> p \> 0\\), i.e. \\(x\>1\\); otherwise we could get negative adoption rates, or an adoption curve with no interior maximum in the \\(0 \\le F \< 1\\) region. Therefore, the sign of \\(\\partial t^\*/\\partial x\\) is the same as:

\\\[ sign\\left(\\frac{\\partial t^\*}{\\partial x}\\right) \= sign(1\+x\-x\\ln x), \~\~\~ x\>1 \\]

This non\-linear equation

\\\[ 1\+x\-x\\ln x\=0, \~\~\~ x\>1 \\]

has a root at \\(x \\approx 3\.59\\). In other words, the derivative \\(\\partial t^\* / \\partial x\\) is negative when \\(x \> 3\.59\\) and positive when \\(x \< 3\.59\\). For low values of \\(x\=q/p\\), an increase in the coefficient of imitation \\(q\\) increases the time to the sales peak, and for high values of \\(q/p\\) the time decreases with increasing \\(q\\). So the right answer to the question appears to be “it depends on the values of \\(p\\) and \\(q\\)”.

```
t = seq(0,5,0.1)
p = 0.1; q=0.22
plot(t,f(p,q,t),type="l",col="blue",lwd=2)
p = 0.1; q=0.20
lines(t,f(p,q,t),type="l",col="red",lwd=2)
```

In the Figure, the curve with the smaller \\(x\\) (red, \\(q\=0\.20\\)) peaks earlier.
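The threshold \\(x \\approx 3\.59\\) is easy to confirm numerically with `uniroot`, and evaluating the peak\-time formula on either side of it shows the reversal directly (a short sketch, holding \\(p\=0\.1\\) fixed; the values of \\(q\\) are chosen purely for illustration):

```
#Sketch: locate the root of 1 + x - x*log(x) and illustrate the peak-time reversal
g = function(x) { 1 + x - x*log(x) }
xstar = uniroot(g, c(2,10))$root
print(xstar)                                   #approximately 3.59
tpeak = function(p,q) { log(q/p)/(p+q) }       #peak-time formula from the Sales Peak section
p = 0.1
print(c(tpeak(p,0.30), tpeak(p,0.36), tpeak(p,0.42)))  #x = 3.0, 3.6, 4.2
```

The three peak times rise and then fall as \\(q\\) increases past the threshold, which is exactly the “it depends” answer above.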
Machine Learning
srdas.github.io
https://srdas.github.io/MLBook/Fourier.html
Chapter 16 Riding the Wave: Fourier Analysis
============================================

16\.1 Introduction
------------------

Fourier analysis comprises many different connections between infinite series, complex numbers, vector theory, and geometry. We may think of different applications: (a) fitting economic time series, (b) pricing options, (c) wavelets, (d) obtaining risk\-neutral pricing distributions via Fourier inversion.

16\.2 Fourier Series
--------------------

### 16\.2\.1 Basic stuff

Fourier series are used to represent periodic time series by combinations of sine and cosine waves. The time it takes for one cycle of the wave is called the “period” \\(T\\) of the wave. The “frequency” \\(f\\) of the wave is the number of cycles per second, hence,

\\\[ f \= \\frac{1}{T} \\]

### 16\.2\.2 Unit Circle

We need some basic geometry on the unit circle. A circle of radius \\(a\\) is the unit circle if \\(a\=1\\). There is a nice link between the unit circle and the sine wave. See the next figure for this relationship. Hence, as we rotate through the angles, the height of the unit vector on the circle traces out the sine wave. In general for radius \\(a\\), we get a sine wave with amplitude \\(a\\), or we may write:

\\\[\\begin{equation} f(\\theta) \= a \\sin(\\theta) \\tag{16\.1} \\end{equation}\\]

### 16\.2\.3 Angular velocity

Velocity is distance per time (in a given direction). For angular velocity we measure distance in degrees, i.e. degrees per unit of time. The usual symbol for angular velocity is \\(\\omega\\). We can thus write

\\\[ \\omega \= \\frac{\\theta}{T}, \\quad \\theta \= \\omega T \\]

Hence, we can state the function in equation [(16\.1\)](Fourier.html#eq:ftheta) in terms of time as follows

\\\[ f(t) \= a \\sin \\omega t \\]

### 16\.2\.4 Fourier series

A Fourier series is a collection of sine and cosine waves which, when summed up, closely approximate any given waveform. We can express the Fourier series in terms of sine and cosine waves

\\\[ f(\\theta) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\theta \+ b\_n \\sin n \\theta \\right) \\]

\\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\]

The \\(a\_0\\) is needed since the waves may not be symmetric around the x\-axis.

### 16\.2\.5 Radians

Angles may also be expressed in units of radians rather than degrees. A radian is the angle defined in the following figure; it is equal to 57\.2958 degrees (approximately). This is slightly less than the 60 degrees you would expect to get with an equilateral triangle. Note that (since the circumference is \\(2 \\pi a\\)) \\(57\.2958 \\pi \= 57\.2958 \\times 3\.142 \= 180\\) degrees. So now for the unit circle

\\\[\\begin{align} 2 \\pi \&\= 360 \\mbox{(degrees)}\\\\ \\omega \&\= \\frac{360}{T} \\\\ \\omega \&\= \\frac{2\\pi}{T} \\end{align}\\]

Hence, we may rewrite the Fourier series equation as:

\\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\tag{16\.2} \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos \\frac{2\\pi n}{T} t \+ b\_n \\sin \\frac{2\\pi n}{T} t \\right) \\end{align}\\]

So we now need to figure out how to get the coefficients \\(\\{a\_0,a\_n,b\_n\\}\\).

### 16\.2\.6 Solving for the coefficients

We start by noting the interesting phenomenon that sines and cosines are orthogonal, i.e. their inner product over one period is zero. Hence,

\\\[\\begin{align} \\int\_0^T \\sin(n\\omega t) \\cdot \\cos(m\\omega t)\\; dt \&\= 0, \\quad \\forall n,m \\\\ \\int\_0^T \\sin(n\\omega t) \\cdot \\sin(m\\omega t)\\; dt \&\= 0, \\quad \\forall n \\neq m \\\\ \\int\_0^T \\cos(n\\omega t) \\cdot \\cos(m\\omega t)\\; dt \&\= 0, \\quad \\forall n \\neq m \\end{align}\\]

What this means is that when we multiply one wave by another, and then integrate the resultant wave from \\(0\\) to \\(T\\) (i.e. over any cycle, so we could go from say \\(\-T/2\\) to \\(\+T/2\\) also), then we get zero, unless the two waves have the same frequency. Hence, the way we get the coefficients of the Fourier series is as follows. Integrate both sides of the series in equation [(16\.2\)](Fourier.html#eq:fseries) from \\(0\\) to \\(T\\), i.e.

\\\[ \\int\_0^T f(t)\\; dt \= \\int\_0^T a\_0 \\;dt \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\right] \\;dt \\]

Except for the first term all the remaining terms are zero (integrating a sine or cosine wave over its cycle gives net zero). So we get

\\\[ \\int\_0^T f(t) \\;dt \= a\_0 T \\]

or

\\\[ a\_0 \= \\frac{1}{T} \\int\_0^T f(t) \\;dt \\]

Now let's try another integral, i.e.

\\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t)\\; dt \&\= \\int\_0^T a\_0 \\cos(\\omega t) \\;dt \\\\ \& \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right)\\cos(\\omega t) \\right] \\;dt \\end{align}\\]

Here, all terms are zero except for the term in \\(a\_1 \\cos(\\omega t)\\cos(\\omega t)\\), because we are multiplying two waves (pointwise) that have the same frequency. So we get

\\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t)\\; dt \&\= \\int\_0^T a\_1 \\cos(\\omega t)\\cos(\\omega t) \\;dt \\\\ \&\= a\_1 \\; \\frac{T}{2} \\end{align}\\]

How? Note here that for unit amplitude, integrating \\(\\cos(\\omega t)\\) over one cycle will give zero. If we multiply \\(\\cos(\\omega t)\\) by itself, we flip all the wave segments from below to above the zero line. The product wave now fills out half the area from \\(0\\) to \\(T\\), so we get \\(T/2\\). Thus

\\\[ a\_1 \= \\frac{2}{T} \\int\_0^T f(t) \\cos(\\omega t)\\; dt \\]

We can get all \\(a\_n\\) this way \- just multiply by \\(\\cos(n \\omega t)\\) and integrate. We can also get all \\(b\_n\\) this way \- just multiply by \\(\\sin(n \\omega t)\\) and integrate. This forms the basis of the following summary results that give the coefficients of the Fourier series.

\\\[\\begin{align} a\_0 \&\= \\frac{1}{T} \\int\_{\-T/2}^{T/2} f(t) \\;dt \= \\frac{1}{T} \\int\_{0}^{T} f(t) \\;dt\\\\ a\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\cos(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\cos(n\\omega t)\\;dt \\\\ b\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\sin(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\sin(n\\omega t)\\;dt \\end{align}\\]
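These coefficient formulas are easy to check numerically. The sketch below is illustrative (the grid\-average approximation of the integrals is an assumption of the sketch, not part of the text); it recovers the Fourier coefficients of a square wave and shows that the truncated series approximates the waveform:

```
#Sketch: truncated Fourier series for a square wave, using the coefficient formulas above
Tper = 2*pi; w = 2*pi/Tper                  #period and angular velocity
tt = seq(0, Tper, length.out=2000)
gw = ifelse(tt < Tper/2, 1, -1)             #square wave over one period
a0 = mean(gw)                               #(1/T) times the integral of the wave
fhat = rep(a0, length(tt))
for (n in 1:15) {
  an = 2*mean(gw*cos(n*w*tt))               #(2/T) integral of f(t)cos(n w t) dt, via a grid average
  bn = 2*mean(gw*sin(n*w*tt))               #(2/T) integral of f(t)sin(n w t) dt
  fhat = fhat + an*cos(n*w*tt) + bn*sin(n*w*tt)
}
plot(tt, gw, type="l", col="red", lwd=2, xlab="t", ylab="f(t)")
lines(tt, fhat, col="blue", lwd=2)          #the 15-term partial sum
```

Away from the jumps the partial sum tracks the square wave closely; the small overshoot near the discontinuities is the Gibbs phenomenon.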
16\.3 Complex Algebra
---------------------

Just for fun, recall that

\\\[ e \= \\sum\_{n\=0}^{\\infty} \\frac{1}{n!}. \\]

and

\\\[ e^{i \\theta} \= \\sum\_{n\=0}^{\\infty} \\frac{1}{n!} (i \\theta)^n \\]

\\\[\\begin{align} \\cos(\\theta) \&\= 1 \+ 0\.\\theta \- \\frac{1}{2!} \\theta^2 \+ 0\.\\theta^3 \+ \\frac{1}{4!} \\theta^4 \+ \\ldots \\\\ i \\sin(\\theta) \&\= 0 \+ i \\theta \+ 0\.\\theta^2 \- \\frac{1}{3!}i\\theta^3 \+ 0\.\\theta^4 \+ \\ldots \\end{align}\\]

Which leads into the famous Euler’s formula:

\\\[\\begin{equation} \\tag{16\.3} e^{i \\theta} \= \\cos \\theta \+ i \\sin \\theta \\end{equation}\\]

and the corresponding

\\\[\\begin{equation} \\tag{16\.4} e^{\-i \\theta} \= \\cos \\theta \- i \\sin \\theta \\end{equation}\\]

Recall also that \\(\\cos(\-\\theta) \= \\cos(\\theta)\\).
And \\(\\sin(\-\\theta) \= \- \\sin(\\theta)\\). Note also that if \\(\\theta \= \\pi\\), then

\\\[ e^{\-i \\pi} \= \\cos(\\pi) \- i \\sin(\\pi) \= \-1 \+ 0 \\]

which can be written as

\\\[ e^{\-i \\pi} \+ 1 \= 0 \\]

an equation that contains five fundamental mathematical constants: \\(\\{i, \\pi, e, 0, 1\\}\\), and three operators \\(\\{\+, \-, \=\\}\\).

### 16\.3\.1 Trig to Complex

Using equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi) gives

\\\[\\begin{align} \\cos \\theta \&\= \\frac{1}{2} (e^{i \\theta} \+ e^{\-i \\theta}) \\\\ \\sin \\theta \&\= \\frac{1}{2i} (e^{i \\theta} \- e^{\-i \\theta}) \\end{align}\\]

Now, return to the Fourier series,

\\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right)\\\\ \& \\mbox{where} \\nonumber \\\\ \& A\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\\\ \& B\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\nonumber \\end{align}\\]

How? Start with

\\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\]

Then

\\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{2i^2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\]

\\\[ \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{\-2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\]

\\\[\\begin{equation} f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\tag{16\.5} \\end{equation}\\]

Note that from equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi),

\\\[\\begin{align} a\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\cos(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2} \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\\\ a\_n \&\= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\tag{16\.6} \\end{align}\\]

In the same way, we can handle \\(b\_n\\), to get

\\\[\\begin{align} b\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\sin(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2i} \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\\\ \&\= \\frac{1}{i}\\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\end{align}\\]

So that

\\\[\\begin{equation} i b\_n \= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\tag{16\.7} \\end{equation}\\]

So from equations [(16\.6\)](Fourier.html#eq:an) and [(16\.7\)](Fourier.html#eq:ibn), we get

\\\[\\begin{align} \\frac{1}{2}(a\_n \- i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\equiv A\_n\\\\ \\frac{1}{2}(a\_n \+ i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\equiv B\_n \\end{align}\\]

Put these back into equation [(16\.5\)](Fourier.html#eq:ft) to get

\\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\\\ \&\= a\_0 \+
\\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right) \\end{align}\\] ### 16\.3\.2 Getting rid of \\(a\_0\\) Note that if we expand the range of the first summation to start from \\(n\=0\\), then we have a term \\(A\_0 e^{i0 \\omega t} \= A\_0 \\equiv a\_0\\). So we can then write our expression as \\\[ f(t) \= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\mbox{ (sum of A runs from zero)} \\] ### 16\.3\.3 Collapsing and Simplifying So now we want to collapse these two terms together. Lets note that \\\[ \\sum\_{n\=1}^2 x^n \= x^1 \+ x^2 \= \\sum\_{n\=\-2}^{\-1} x^{\-n} \= x^2 \+ x^1 \\] Applying this idea, we get \\\[\\begin{align} f(t) \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\\\ \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=\-\\infty}^{\-1} B\_{(\-n)} e^{in \\omega t} \\\\ \& \\mbox{where} \\nonumber \\\\ \& B\_{(\-n)} \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \= A\_n \\\\ \&\= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t}\\\\ \& \\mbox{where} \\nonumber \\\\ \& C\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\end{align}\\] where we just renamed \\(A\_n\\) to \\(C\_n\\) for clarity. The big win here is that we have been able to subsume \\(\\{a\_0,a\_n,b\_n\\}\\) all into one coefficient set \\(C\_n\\). For completeness we write \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \=\\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t} \\] This is the complex number representation of the Fourier series. 16\.4 Fourier Transform ----------------------- The FT is a cool technique that allows us to go from the Fourier series, which needs a period \\(T\\) to waves that are aperiodic. The idea is to simply let the period go to infinity. Which means the frequency gets very small. We can then sample a slice of the wave to do analysis. We will replace \\(f(t)\\) with \\(g(t)\\) because we now need to use \\(f\\) or \\(\\Delta f\\) to denote frequency. Recall that \\\[ \\omega \= \\frac{2\\pi}{T} \= 2\\pi f, \\quad n\\omega \= 2 \\pi f\_n \\] To recap \\\[\\begin{align} g(t) \&\= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t} \= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{i 2\\pi f t}\\\\ C\_n \&\= \\frac{1}{T} \\int\_0^T g(t) e^{\-in\\omega t} \\;dt \\end{align}\\] This may be written alternatively in frequency terms as follows \\\[ C\_n \= \\Delta f \\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\] which we substitute into the formula for \\(g(t)\\) and get \\\[ g(t) \= \\sum\_{n\=\-\\infty}^{\\infty} \\left\[\\Delta f \\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\right]e^{in\\omega t} \\] Taking limits \\\[ g(t) \= \\lim\_{T \\rightarrow \\infty} \\sum\_{n\=\-\\infty}^{\\infty} \\left\[\\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\right]e^{i 2 \\pi f\_n t} \\Delta f \\] gives a double integral \\\[ g(t) \= \\int\_{\-\\infty}^{\\infty} \\underbrace{\\left\[\\int\_{\-\\infty}^{\\infty} g(t) e^{\-i 2\\pi f t} \\;dt \\right]}\_{G(f)} e^{i 2 \\pi f t} \\;df \\] The \\(dt\\) is for the time domain and the \\(df\\) for the frequency domain. 
Hence, the **Fourier transform** goes from the time domain into the frequency domain, given by

\\\[ G(f) \= \\int\_{\-\\infty}^{\\infty} g(t) e^{\-i 2\\pi f t} \\;dt \\]

The **inverse Fourier transform** goes from the frequency domain into the time domain

\\\[ g(t) \= \\int\_{\-\\infty}^{\\infty} G(f) e^{i 2 \\pi f t} \\;df \\]

And the Fourier coefficients are as before

\\\[ C\_n \= \\frac{1}{T} \\int\_0^T g(t) e^{\-i 2\\pi f\_n t} \\;dt \= \\frac{1}{T} \\int\_0^T g(t) e^{\-in\\omega t} \\; dt \\]

Notice the incredible similarity between the coefficients and the transform. Note the following: the spectrum of a wave is a graph showing its component frequencies, i.e. which frequencies occur in the wave; it does not give their amplitudes.

### 16\.4\.1 Empirical Example

We can use the Fourier transform function in R to compute the main component frequencies of the time series of interest rate data as follows:

```
library(zoo)
```

```
##
## Attaching package: 'zoo'
```

```
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric
```

```
rd = read.table("DSTMAA_data/tryrates.txt",header=TRUE)
r1 = rd$FYGT1
dt = as.yearmon(rd$DATE,"%b-%y")
plot(dt,r1,type="l",ylab="One-Year Rate")
```

The line with

```
dr1 = resid(lm(r1 ~ seq(along = r1)))
```

detrends the series, and when we plot it we see that it is done. We can then subject the detrended line to Fourier analysis. The plot of the fit of the detrended one\-year interest rates is here:

```
dr1 = resid(lm(r1 ~ seq(along = r1)))
plot(dt,dr1,type="l",ylab="Detrended 1y Rate")
```

Now, carry out the Fourier transform.

```
y=fft(dr1)
plot(abs(y),type="l")
```

It is easy to see that the series has both low\-frequency and high\-frequency components. Essentially there are two factors. If we do a factor analysis of interest rates, it turns out we get two factors as well. See Chapter @ref{DiscriminantFactorAnalysis}.

16\.5 Application to Binomial Option Pricing
--------------------------------------------

Here we implement the option pricing example in Cerny ([2009](#ref-Cerny)), Chapter 7, Figure 8\.

```
ifft = function(x) { fft(x,inverse=TRUE)/length(x) }
ct = c(599.64,102,0,0)
q = c(0.43523,0.56477,0,0)
R = 1.0033
ifft(fft(ct)*( (4*ifft(q)/R)^3) )
```

```
## [1]  81.36464+0i 115.28447+0i 265.46949+0i 232.62076+0i
```

The resulting price is 81\.36\.

16\.6 Application to probability functions
------------------------------------------

### 16\.6\.1 Characteristic functions

A characteristic function of a variable \\(x\\) is given by the expectation of the following function of \\(x\\):

\\\[ \\phi(s) \= E\[e^{isx}] \= \\int\_{\-\\infty}^{\\infty} e^{isx} f(x) \\; dx \\]

where \\(f(x)\\) is the probability density of \\(x\\). By Taylor series for \\(e^{isx}\\) we have

\\\[\\begin{align} \\int\_{\-\\infty}^{\\infty} e^{isx} f(x) \\; dx \&\= \\int\_{\-\\infty}^{\\infty} \[1\+isx \+ \\frac{1}{2} (isx)^2 \+ \\ldots] f(x)dx \\\\ \&\= \\sum\_{j\=0}^{\\infty} \\frac{(is)^j}{j!} m\_j \\\\ \&\= 1 \+ (is) m\_1 \+ \\frac{1}{2} (is)^2 m\_2 \+ \\frac{1}{6} (is)^3 m\_3 \+ \\ldots \\end{align}\\]

where \\(m\_j\\) is the \\(j\\)\-th moment. It is therefore easy to see that

\\\[ m\_j \= \\frac{1}{i^j} \\left\[\\frac{d^j\\phi(s)}{ds^j} \\right]\_{s\=0} \\]

where \\(i\=\\sqrt{\-1}\\).

### 16\.6\.2 Finance application

In a paper in 1993, Steve Heston developed a new approach to valuing stock and foreign currency options using a Fourier inversion technique.
See also Duffie, Pan and Singleton (2001\) for an extension to jumps, and Chacko and Das (2002\) for a generalization of this to interest\-rate derivatives with jumps. Let's explore a much simpler model of the same so as to get an idea of how we can obtain probability functions when we are given a stochastic process for any security.

When we are thinking of a dynamically moving financial variable (say \\(x\_t\\)), we are usually interested in knowing what the probability is of this variable reaching a value \\(x\_{\\tau}\\) at time \\(t\=\\tau\\), given that right now, it has value \\(x\_0\\) at time \\(t\=0\\). Note that \\(\\tau\\) is the remaining time to maturity. Suppose we have the following financial variable \\(x\_t\\) following a very simple Brownian motion, i.e.

\\\[ dx\_t \= \\mu \\; dt \+ \\sigma\\; dz\_t \\]

Here, \\(\\mu\\) is known as the “drift” and \\(\\sigma\\) as the “volatility”. The differential equation above gives the movement of the variable \\(x\\), and the term \\(dz\\) is a Brownian motion, i.e. a random variable with normal distribution of mean zero and variance \\(dt\\).

What we are interested in is the characteristic function of this process. The characteristic function of \\(x\\) is defined as the Fourier transform of the density of \\(x\\), i.e.

\\\[ F(x) \= E\[e^{isx}] \= \\int e^{isx} f(x)\\; dx \\]

where \\(s\\) is the Fourier transform variable, and \\(i\=\\sqrt{\-1}\\), as usual. Notice the similarity to the Fourier transforms described earlier in the note. It turns out that once we have the characteristic function, then we can obtain by simple calculations the probability function for \\(x\\), as well as all the moments of \\(x\\).

### 16\.6\.3 Solving for the characteristic function

We write the characteristic function as \\(F(x,\\tau; s)\\). Then, using Ito’s lemma we have

\\\[ dF \= F\_x dx \+ \\frac{1}{2} F\_{xx} (dx)^2 \- F\_{\\tau} dt \\]

\\(F\_x\\) is the first derivative of \\(F\\) with respect to \\(x\\); \\(F\_{xx}\\) is the second derivative, and \\(F\_{\\tau}\\) is the derivative with respect to remaining maturity. Since \\(F\\) is a characteristic (probability) function, the expected change in \\(F\\) is zero.

\\\[ E(dF) \= \\mu F\_x \\;dt \+ \\frac{1}{2} \\sigma^2 F\_{xx} \\; dt \- F\_{\\tau}\\; dt \= 0 \\]

which gives a PDE in \\((x,\\tau)\\):

\\\[ \\mu F\_x \+ \\frac{1}{2} \\sigma^2 F\_{xx} \- F\_{\\tau} \= 0 \\]

We need a boundary condition for the characteristic function which is

\\\[ F(x,\\tau\=0;s) \= e^{isx} \\]

We solve the PDE by making an educated guess, which is

\\\[ F(x,\\tau;s) \= e^{isx \+ A(\\tau)} \\]

which implies that when \\(\\tau\=0\\), \\(A(\\tau\=0\)\=0\\) as well. We can see that

\\\[\\begin{align} F\_x \&\= isF \\\\ F\_{xx} \&\= \-s^2 F\\\\ F\_{\\tau} \&\= A\_{\\tau} F \\end{align}\\]

Substituting this back in the PDE gives

\\\[\\begin{align} \\mu is F \- \\frac{1}{2} \\sigma^2 s^2 F \- A\_{\\tau} F \&\= 0 \\\\ \\mu is \- \\frac{1}{2} \\sigma^2 s^2 \- A\_{\\tau} \&\= 0 \\\\ \\frac{dA}{d\\tau} \&\= \\mu is \- \\frac{1}{2} \\sigma^2 s^2 \\\\ \\mbox{gives: } A(\\tau) \&\= \\mu is \\tau \- \\frac{1}{2} \\sigma^2 s^2 \\tau, \\quad \\mbox{because } A(0\)\=0 \\end{align}\\]

Thus we finally have the characteristic function which is

\\\[ F(x,\\tau; s) \= \\exp\[isx \+ \\mu is \\tau \-\\frac{1}{2} \\sigma^2 s^2 \\tau] \\]

### 16\.6\.4 Computing the moments

In general, the moments are derived by differentiating the characteristic function with respect to \\(s\\) and setting \\(s\=0\\).
The \\(k\\)\-th moment will be

\\\[ E\[x^k] \= \\frac{1}{i^k} \\left\[ \\frac{\\partial^k F}{\\partial s^k} \\right]\_{s\=0} \\]

Let's test it by computing the mean (\\(k\=1\\)):

\\\[ E(x) \= \\frac{1}{i} \\left\[ \\frac{\\partial F}{\\partial s} \\right]\_{s\=0} \= x \+ \\mu \\tau \\]

where \\(x\\) is the current value \\(x\_0\\). How about the second moment?

\\\[ E(x^2\) \= \\frac{1}{i^2} \\left\[ \\frac{\\partial^2 F}{\\partial s^2} \\right]\_{s\=0} \=\\sigma^2 \\tau \+ (x\+\\mu \\tau)^2 \= \\sigma^2 \\tau \+ E(x)^2 \\]

Hence, the variance will be

\\\[ Var(x) \= E(x^2\) \- E(x)^2 \= \\sigma^2 \\tau \+ E(x)^2 \- E(x)^2 \= \\sigma^2 \\tau \\]

### 16\.6\.5 Probability density function

It turns out that we can “invert” the characteristic function to get the pdf (boy, this characteristic function sure is useful!). Again we use Fourier inversion, the result of which is stated as follows:

\\\[ f(x\_{\\tau} \| x\_0\) \= \\frac{1}{\\pi} \\int\_0^{\\infty} Re\[e^{\-isx\_{\\tau}} F(x\_0,\\tau; s)]\\; ds \\]

Here is an implementation

```
#Model for fourier inversion for arithmetic brownian motion
x0 = 10
mu = 10
sig = 5
tau = 0.25
s = (0:10000)/100
ds = s[2]-s[1]
phi = exp(1i*s*x0+mu*1i*s*tau-0.5*s^2*sig^2*tau)
x = (0:250)/10
fx=NULL
for (k in 1:length(x)) {
  g = sum(Re(exp(-1i*s*x[k]) * phi * ds))/pi
  #print(c(x[k],g))
  fx = c(fx,g)
}
plot(x,fx,type="l",main="Inverse Fourier plot")
```
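Since the underlying process here is arithmetic Brownian motion, the density recovered by Fourier inversion should coincide with a normal density with mean \\(x\_0 \+ \\mu \\tau\\) and variance \\(\\sigma^2 \\tau\\). The short sketch below reuses the objects from the code above to check this:

```
#Check: the inverted density should match the normal density N(x0 + mu*tau, sig^2*tau)
fx_exact = dnorm(x, mean = x0 + mu*tau, sd = sig*sqrt(tau))
print(max(abs(fx - fx_exact)))               #should be very close to zero
plot(x, fx, type="l", main="Inverse Fourier vs exact normal density")
lines(x, fx_exact, col="red", lty=2)
```

The two curves should lie on top of each other, up to truncation and discretization error in the numerical integral.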
We can express the Fourier series in terms of sine and cosine waves \\\[ f(\\theta) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\theta \+ b\_n \\sin n \\theta \\right) \\] \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\] The \\(a\_0\\) is needed since the waves may not be symmetric around the x\-axis. ### 16\.2\.5 Radians Degrees are expressed in units of radians. A radian is an angle defined in the following figure. The angle here is a radian which is equal to 57\.2958 degrees (approximately). This is slightly less than 60 degrees as you would expect to get with an equilateral triangle. Note that (since the circumference is \\(2 \\pi a\\)) \\(57\.2958 \\pi \= 57\.2958 \\times 3\.142 \= 180\\) degrees. So now for the unit circle \\\[\\begin{align} 2 \\pi \&\= 360 \\mbox{(degrees)}\\\\ \\omega \&\= \\frac{360}{T} \\\\ \\omega \&\= \\frac{2\\pi}{T} \\end{align}\\] Hence, we may rewrite the Fourier series equation as: \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\tag{16\.2} \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos \\frac{2\\pi n}{T} t \+ b\_n \\sin \\frac{2\\pi n}{T} t \\right) \\end{align}\\] So we now need to figure out how to get the coefficients \\(\\{a\_0,a\_n,b\_n\\}\\). ### 16\.2\.6 Solving for the coefficients We start by noting the interesting phenomenon that sines and cosines are orthogonal, i.e. their inner product is zero. Hence, \\\[\\begin{align} \\int\_0^T \\sin(nt) . \\cos(mt)\\; dt \&\= 0, \\forall n,m \\\\ \\int\_0^T \\sin(nt) . \\sin(mt)\\; dt \&\= 0, \\forall n \\neq m \\\\ \\int\_0^T \\cos(nt) . \\cos(mt)\\; dt \&\= 0, \\forall n \\neq m \\end{align}\\] What this means is that when we multiply one wave by another, and then integrate the resultant wave from \\(0\\) to \\(T\\) (i.e. over any cycle, so we could go from say \\(\-T/2\\) to \\(\+T/2\\) also), then we get zero, unless the two waves have the {} frequency. Hence, the way we get the coefficients of the Fourier series is as follows. Integrate both sides of the series in equation [(16\.2\)](Fourier.html#eq:fseries) from \\(0\\) to \\(T\\), i.e. \\\[ \\int\_0^T f(t) \= \\int\_0^T a\_0 \\;dt \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\;dt \\right] \\] Except for the first term all the remaining terms are zero (integrating a sine or cosine wave over its cycle gives net zero). So we get \\\[ \\int\_0^T f(t) \\;dt \= a\_0 T \\] or \\\[ a\_0 \= \\frac{1}{T} \\int\_0^T f(t) \\;dt \\] Now lets try another integral, i.e. \\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t) \&\= \\int\_0^T a\_0 \\cos(\\omega t) \\;dt \\\\ \& \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right)\\cos(\\omega t) \\;dt \\right] \\end{align}\\] Here, all terms are zero except for the term in \\(a\_1 \\cos(\\omega t)\\cos(\\omega t)\\), because we are multiplying two waves (pointwise) that have the same frequency. So we get \\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t) \&\= \\int\_0^T a\_1 \\cos(\\omega t)\\cos(\\omega t) \\;dt \\\\ \&\= a\_1 \\; \\frac{T}{2} \\end{align}\\] How? Note here that for unit amplitude, integrating \\(\\cos(\\omega t)\\) over one cycle will give zero. If we multiply \\(\\cos(\\omega t)\\) by itself, we flip all the wave segments from below to above the zero line. 
The product wave now fills out half the area from \\(0\\) to \\(T\\), so we get \\(T/2\\). Thus \\\[ a\_1 \= \\frac{2}{T} \\int\_0^T f(t) \\cos(\\omega t) \\] We can get all \\(a\_n\\) this way \- just multiply by \\(\\cos(n \\omega t)\\) and integrate. We can also get all \\(b\_n\\) this way \- just multiply by \\(\\sin(n \\omega t)\\) and integrate. This forms the basis of the following summary results that give the coefficients of the Fourier series. \\\[\\begin{align} a\_0 \&\= \\frac{1}{T} \\int\_{\-T/2}^{T/2} f(t) \\;dt \= \\frac{1}{T} \\int\_{0}^{T} f(t) \\;dt\\\\ a\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\cos(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\cos(n\\omega t)\\;dt \\\\ b\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\sin(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\sin(n\\omega t)\\;dt \\end{align}\\] ### 16\.2\.1 Basic stuff Fourier series are used to represent {} time series by combinations of sine and cosine waves. The time it takes for one cycle of the wave is called the `period'' $T$ of the wave. The`frequency’’ \\(f\\) of the wave is the number of cycles per second, hence, \\\[ f \= \\frac{1}{T} \\] ### 16\.2\.2 Unit Circle We need some basic geometry on the unit circle. This circle is the unit circle if \\(a\=1\\). There is a nice link between the unit circle and the sine wave. See the next figure for this relationship. Hence, as we rotate through the angles, the height of the unit vector on the circle traces out the sine wave. In general for radius \\(a\\), we get a sine wave with amplitude \\(a\\), or we may write: \\\[\\begin{equation} f(\\theta) \= a \\sin(\\theta) \\tag{16\.1} \\end{equation}\\] ### 16\.2\.3 Angular velocity Velocity is distance per time (in a given direction). For angular velocity we measure distance in degrees, i.e. degrees per unit of time. The usual symbol for angular velocity is \\(\\omega\\). We can thus write \\\[ \\omega \= \\frac{\\theta}{T}, \\quad \\theta \= \\omega T \\] Hence, we can state the function in equation [(16\.1\)](Fourier.html#eq:ftheta) in terms of time as follows \\\[ f(t) \= a \\sin \\omega t \\] ### 16\.2\.4 Fourier series A Fourier series is a collection of sine and cosine waves, which when summed up, closely approximate any given waveform. We can express the Fourier series in terms of sine and cosine waves \\\[ f(\\theta) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\theta \+ b\_n \\sin n \\theta \\right) \\] \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\] The \\(a\_0\\) is needed since the waves may not be symmetric around the x\-axis. ### 16\.2\.5 Radians Degrees are expressed in units of radians. A radian is an angle defined in the following figure. The angle here is a radian which is equal to 57\.2958 degrees (approximately). This is slightly less than 60 degrees as you would expect to get with an equilateral triangle. Note that (since the circumference is \\(2 \\pi a\\)) \\(57\.2958 \\pi \= 57\.2958 \\times 3\.142 \= 180\\) degrees. 
So now for the unit circle \\\[\\begin{align} 2 \\pi \&\= 360 \\mbox{(degrees)}\\\\ \\omega \&\= \\frac{360}{T} \\\\ \\omega \&\= \\frac{2\\pi}{T} \\end{align}\\] Hence, we may rewrite the Fourier series equation as: \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\tag{16\.2} \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos \\frac{2\\pi n}{T} t \+ b\_n \\sin \\frac{2\\pi n}{T} t \\right) \\end{align}\\] So we now need to figure out how to get the coefficients \\(\\{a\_0,a\_n,b\_n\\}\\). ### 16\.2\.6 Solving for the coefficients We start by noting the interesting phenomenon that sines and cosines are orthogonal, i.e. their inner product is zero. Hence, \\\[\\begin{align} \\int\_0^T \\sin(nt) . \\cos(mt)\\; dt \&\= 0, \\forall n,m \\\\ \\int\_0^T \\sin(nt) . \\sin(mt)\\; dt \&\= 0, \\forall n \\neq m \\\\ \\int\_0^T \\cos(nt) . \\cos(mt)\\; dt \&\= 0, \\forall n \\neq m \\end{align}\\] What this means is that when we multiply one wave by another, and then integrate the resultant wave from \\(0\\) to \\(T\\) (i.e. over any cycle, so we could go from say \\(\-T/2\\) to \\(\+T/2\\) also), then we get zero, unless the two waves have the {} frequency. Hence, the way we get the coefficients of the Fourier series is as follows. Integrate both sides of the series in equation [(16\.2\)](Fourier.html#eq:fseries) from \\(0\\) to \\(T\\), i.e. \\\[ \\int\_0^T f(t) \= \\int\_0^T a\_0 \\;dt \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\;dt \\right] \\] Except for the first term all the remaining terms are zero (integrating a sine or cosine wave over its cycle gives net zero). So we get \\\[ \\int\_0^T f(t) \\;dt \= a\_0 T \\] or \\\[ a\_0 \= \\frac{1}{T} \\int\_0^T f(t) \\;dt \\] Now lets try another integral, i.e. \\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t) \&\= \\int\_0^T a\_0 \\cos(\\omega t) \\;dt \\\\ \& \+ \\int\_0^T \\left\[\\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right)\\cos(\\omega t) \\;dt \\right] \\end{align}\\] Here, all terms are zero except for the term in \\(a\_1 \\cos(\\omega t)\\cos(\\omega t)\\), because we are multiplying two waves (pointwise) that have the same frequency. So we get \\\[\\begin{align} \\int\_0^T f(t) \\cos(\\omega t) \&\= \\int\_0^T a\_1 \\cos(\\omega t)\\cos(\\omega t) \\;dt \\\\ \&\= a\_1 \\; \\frac{T}{2} \\end{align}\\] How? Note here that for unit amplitude, integrating \\(\\cos(\\omega t)\\) over one cycle will give zero. If we multiply \\(\\cos(\\omega t)\\) by itself, we flip all the wave segments from below to above the zero line. The product wave now fills out half the area from \\(0\\) to \\(T\\), so we get \\(T/2\\). Thus \\\[ a\_1 \= \\frac{2}{T} \\int\_0^T f(t) \\cos(\\omega t) \\] We can get all \\(a\_n\\) this way \- just multiply by \\(\\cos(n \\omega t)\\) and integrate. We can also get all \\(b\_n\\) this way \- just multiply by \\(\\sin(n \\omega t)\\) and integrate. This forms the basis of the following summary results that give the coefficients of the Fourier series. 
\\\[\\begin{align} a\_0 \&\= \\frac{1}{T} \\int\_{\-T/2}^{T/2} f(t) \\;dt \= \\frac{1}{T} \\int\_{0}^{T} f(t) \\;dt\\\\ a\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\cos(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\cos(n\\omega t)\\;dt \\\\ b\_n \&\= \\frac{1}{T/2} \\int\_{\-T/2}^{T/2} f(t) \\sin(n\\omega t)\\;dt \= \\frac{2}{T} \\int\_{0}^{T} f(t) \\sin(n\\omega t)\\;dt \\end{align}\\] 16\.3 Complex Algebra --------------------- Just for fun, recall that \\\[ e \= \\sum\_{n\=0}^{\\infty} \\frac{1}{n!}. \\] and \\\[ e^{i \\theta} \= \\sum\_{n\=0}^{\\infty} \\frac{1}{n!} (i \\theta)^n \\] \\\[\\begin{align} \\cos(\\theta) \&\= 1 \+ 0\.\\theta \- \\frac{1}{2!} \\theta^2 \+ 0\.\\theta^3 \+ \\frac{1}{4!} \\theta^2 \+ \\ldots \\\\ i \\sin(\\theta) \&\= 0 \+ i \\theta \+ 0\.\\theta^2 \- \\frac{1}{3!}i\\theta^3 \+ 0\.\\theta^4 \+ \\ldots \\end{align}\\] Which leads into the famous Euler’s formula: \\\[\\begin{equation} \\tag{16\.3} e^{i \\theta} \= \\cos \\theta \+ i \\sin \\theta \\end{equation}\\] and the corresponding \\\[\\begin{equation} \\tag{16\.4} e^{\-i \\theta} \= \\cos \\theta \- i \\sin \\theta \\end{equation}\\] Recall also that \\(\\cos(\-\\theta) \= \\cos(\\theta)\\). And \\(\\sin(\-\\theta) \= \- \\sin(\\theta)\\). Note also that if \\(\\theta \= \\pi\\), then \\\[ e^{\-i \\pi} \= \\cos(\\pi) \- i \\sin(\\pi) \= \-1 \+ 0 \\] which can be written as \\\[ e^{\-i \\pi} \+ 1 \= 0 \\] an equation that contains five fundamental mathematical constants: \\(\\{i, \\pi, e, 0, 1\\}\\), and three operators \\(\\{\+, \-, \=\\}\\). ### 16\.3\.1 Trig to Complex Using equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi) gives \\\[\\begin{align} \\cos \\theta \&\= \\frac{1}{2} (e^{i \\theta} \+ e^{\-i \\theta}) \\\\ \\sin \\theta \&\= \\frac{1}{2}i (e^{i \\theta} \- e^{\-i \\theta}) \\end{align}\\] Now, return to the Fourier series, \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right)\\\\ \& \\mbox{where} \\nonumber \\\\ \& A\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\\\ \& B\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\nonumber \\end{align}\\] How? 
Start with \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] Then \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{2i^2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] \\\[ \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{\-2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] \\\[\\begin{equation} f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\tag{16\.5} \\end{equation}\\] Note that from equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi), \\\[\\begin{align} a\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\cos(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2} \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\\\ a\_n \&\= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\tag{16\.6} \\end{align}\\] In the same way, we can handle \\(b\_n\\), to get \\\[\\begin{align} b\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\sin(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2i} \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\\\ \&\= \\frac{1}{i}\\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\end{align}\\] So that \\\[\\begin{equation} i b\_n \= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\tag{16\.7} \\end{equation}\\] So from equations [(16\.6\)](Fourier.html#eq:an) and [(16\.7\)](Fourier.html#eq:ibn), we get \\\[\\begin{align} \\frac{1}{2}(a\_n \- i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\equiv A\_n\\\\ \\frac{1}{2}(a\_n \+ i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\equiv B\_n \\end{align}\\] Put these back into equation [(16\.5\)](Fourier.html#eq:ft) to get \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right) \\end{align}\\] ### 16\.3\.2 Getting rid of \\(a\_0\\) Note that if we expand the range of the first summation to start from \\(n\=0\\), then we have a term \\(A\_0 e^{i0 \\omega t} \= A\_0 \\equiv a\_0\\). So we can then write our expression as \\\[ f(t) \= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\mbox{ (sum of A runs from zero)} \\] ### 16\.3\.3 Collapsing and Simplifying So now we want to collapse these two terms together. 
Lets note that \\\[ \\sum\_{n\=1}^2 x^n \= x^1 \+ x^2 \= \\sum\_{n\=\-2}^{\-1} x^{\-n} \= x^2 \+ x^1 \\] Applying this idea, we get \\\[\\begin{align} f(t) \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\\\ \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=\-\\infty}^{\-1} B\_{(\-n)} e^{in \\omega t} \\\\ \& \\mbox{where} \\nonumber \\\\ \& B\_{(\-n)} \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \= A\_n \\\\ \&\= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t}\\\\ \& \\mbox{where} \\nonumber \\\\ \& C\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\end{align}\\] where we just renamed \\(A\_n\\) to \\(C\_n\\) for clarity. The big win here is that we have been able to subsume \\(\\{a\_0,a\_n,b\_n\\}\\) all into one coefficient set \\(C\_n\\). For completeness we write \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \=\\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t} \\] This is the complex number representation of the Fourier series. ### 16\.3\.1 Trig to Complex Using equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi) gives \\\[\\begin{align} \\cos \\theta \&\= \\frac{1}{2} (e^{i \\theta} \+ e^{\-i \\theta}) \\\\ \\sin \\theta \&\= \\frac{1}{2}i (e^{i \\theta} \- e^{\-i \\theta}) \\end{align}\\] Now, return to the Fourier series, \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right)\\\\ \& \\mbox{where} \\nonumber \\\\ \& A\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\\\ \& B\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\nonumber \\end{align}\\] How? 
Start with \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{1}{2i} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] Then \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{2i^2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] \\\[ \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\frac{1}{2} (e^{in\\omega t}\+e^{\-in\\omega t}) \+ b\_n \\frac{i}{\-2} (e^{in\\omega t} \- e^{\-i n \\omega t}) \\right) \\] \\\[\\begin{equation} f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\tag{16\.5} \\end{equation}\\] Note that from equations [(16\.3\)](Fourier.html#eq:eipi) and [(16\.4\)](Fourier.html#eq:e-ipi), \\\[\\begin{align} a\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\cos(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2} \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\\\ a\_n \&\= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \+ e^{\-in\\omega t}] \\;dt \\tag{16\.6} \\end{align}\\] In the same way, we can handle \\(b\_n\\), to get \\\[\\begin{align} b\_n \&\= \\frac{2}{T} \\int\_0^T f(t) \\sin(n \\omega t) \\;dt \\\\ \&\= \\frac{2}{T} \\int\_0^T f(t) \\frac{1}{2i} \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\\\ \&\= \\frac{1}{i}\\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\end{align}\\] So that \\\[\\begin{equation} i b\_n \= \\frac{1}{T} \\int\_0^T f(t) \[e^{in\\omega t} \- e^{\-in\\omega t}] \\;dt \\tag{16\.7} \\end{equation}\\] So from equations [(16\.6\)](Fourier.html#eq:an) and [(16\.7\)](Fourier.html#eq:ibn), we get \\\[\\begin{align} \\frac{1}{2}(a\_n \- i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\equiv A\_n\\\\ \\frac{1}{2}(a\_n \+ i b\_n) \&\= \\frac{1}{T} \\int\_0^T f(t) e^{in\\omega t} \\;dt \\equiv B\_n \\end{align}\\] Put these back into equation [(16\.5\)](Fourier.html#eq:ft) to get \\\[\\begin{align} f(t) \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( \\frac{1}{2}(a\_n \-ib\_n)e^{in\\omega t} \+ \\frac{1}{2}(a\_n \+ ib\_n)e^{\-in\\omega t} \\right) \\\\ \&\= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( A\_n e^{in\\omega t} \+ B\_n e^{\-in \\omega t} \\right) \\end{align}\\] ### 16\.3\.2 Getting rid of \\(a\_0\\) Note that if we expand the range of the first summation to start from \\(n\=0\\), then we have a term \\(A\_0 e^{i0 \\omega t} \= A\_0 \\equiv a\_0\\). So we can then write our expression as \\\[ f(t) \= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\mbox{ (sum of A runs from zero)} \\] ### 16\.3\.3 Collapsing and Simplifying So now we want to collapse these two terms together. 
Let's note that \\\[ \\sum\_{n\=1}^2 x^n \= x^1 \+ x^2 \= \\sum\_{n\=\-2}^{\-1} x^{\-n} \= x^2 \+ x^1 \\] Applying this idea, we get \\\[\\begin{align} f(t) \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=1}^{\\infty} B\_n e^{\-in \\omega t} \\\\ \&\= \\sum\_{n\=0}^{\\infty} A\_n e^{in\\omega t} \+ \\sum\_{n\=\-\\infty}^{\-1} B\_{(\-n)} e^{in \\omega t} \\\\ \& \\mbox{where} \\nonumber \\\\ \& B\_{(\-n)} \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \= A\_n \\\\ \&\= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t}\\\\ \& \\mbox{where} \\nonumber \\\\ \& C\_n \= \\frac{1}{T} \\int\_0^T f(t) e^{\-in\\omega t} \\;dt \\nonumber \\end{align}\\] where we just renamed \\(A\_n\\) to \\(C\_n\\) for clarity. The big win here is that we have been able to subsume \\(\\{a\_0,a\_n,b\_n\\}\\) all into one coefficient set \\(C\_n\\). For completeness we write \\\[ f(t) \= a\_0 \+ \\sum\_{n\=1}^{\\infty} \\left( a\_n \\cos n\\omega t \+ b\_n \\sin n \\omega t \\right) \=\\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t} \\] This is the complex number representation of the Fourier series.

16\.4 Fourier Transform
-----------------------

The FT is a cool technique that allows us to go from the Fourier series, which needs a period \\(T\\), to waves that are aperiodic. The idea is simply to let the period go to infinity, which means the fundamental frequency becomes very small. We can then sample a slice of the wave to do analysis. We will replace \\(f(t)\\) with \\(g(t)\\) because we now need to use \\(f\\) or \\(\\Delta f\\) to denote frequency. Recall that \\\[ \\omega \= \\frac{2\\pi}{T} \= 2\\pi f, \\quad n\\omega \= 2 \\pi f\_n \\] To recap \\\[\\begin{align} g(t) \&\= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{in\\omega t} \= \\sum\_{n\=\-\\infty}^{\\infty} C\_n e^{i 2\\pi f t}\\\\ C\_n \&\= \\frac{1}{T} \\int\_0^T g(t) e^{\-in\\omega t} \\;dt \\end{align}\\] This may be written alternatively in frequency terms as follows \\\[ C\_n \= \\Delta f \\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\] which we substitute into the formula for \\(g(t)\\) and get \\\[ g(t) \= \\sum\_{n\=\-\\infty}^{\\infty} \\left\[\\Delta f \\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\right]e^{in\\omega t} \\] Taking limits \\\[ g(t) \= \\lim\_{T \\rightarrow \\infty} \\sum\_{n\=\-\\infty}^{\\infty} \\left\[\\int\_{\-T/2}^{T/2} g(t) e^{\-i 2\\pi f\_n t} \\;dt \\right]e^{i 2 \\pi f\_n t} \\Delta f \\] gives a double integral \\\[ g(t) \= \\int\_{\-\\infty}^{\\infty} \\underbrace{\\left\[\\int\_{\-\\infty}^{\\infty} g(t) e^{\-i 2\\pi f t} \\;dt \\right]}\_{G(f)} e^{i 2 \\pi f t} \\;df \\] The \\(dt\\) is for the time domain and the \\(df\\) for the frequency domain. Hence, the **Fourier transform** goes from the time domain into the frequency domain, given by \\\[ G(f) \= \\int\_{\-\\infty}^{\\infty} g(t) e^{\-i 2\\pi f t} \\;dt \\] The **inverse Fourier transform** goes from the frequency domain into the time domain \\\[ g(t) \= \\int\_{\-\\infty}^{\\infty} G(f) e^{i 2 \\pi f t} \\;df \\] And the **Fourier coefficients** are as before \\\[ C\_n \= \\frac{1}{T} \\int\_0^T g(t) e^{\-i 2\\pi f\_n t} \\;dt \= \\frac{1}{T} \\int\_0^T g(t) e^{\-in\\omega t} \\; dt \\] Notice the incredible similarity between the coefficients and the transform. Note the following: the spectrum of a wave is a graph showing its component frequencies, i.e., the quantities in which they occur. It shows which frequency components are present in the wave, but it does not give their amplitudes.
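Before turning to the empirical example, here is a small numerical sketch (not part of the original text) connecting the complex coefficients \\(C\_n\\) to R's `fft()` function. For a signal sampled at \\(N\\) equally spaced points over one period, `fft(x)/N` approximates \\(C\_n\\), with the coefficient for frequency \\(n\\) stored at index \\(n\+1\\); the test signal below is an illustrative assumption.

```
# A sketch: approximate C_n = (1/T) int g(t) e^{-i n w t} dt by a Riemann sum
# and compare it with the discrete Fourier transform computed by fft().
N = 1024
Tper = 2*pi                                  # assumed period, so omega = 1
omega = 2*pi/Tper
t = (0:(N-1))*Tper/N
x = 3*cos(2*omega*t) + 0.5*sin(5*omega*t)    # hypothetical test signal
C2 = (1/Tper)*sum(x*exp(-1i*2*omega*t))*(Tper/N)   # direct approximation of C_2
C_fft = fft(x)/N                             # all coefficients at once
print(C2)                                    # about 1.5 + 0i
print(C_fft[3])                              # index 3 corresponds to n = 2; same value
```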
### 16\.4\.1 Empirical Example

We can use the Fourier transform function in R to compute the main component frequencies of the time series of interest rate data as follows:

```
library(zoo)
```

```
## 
## Attaching package: 'zoo'
```

```
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
```

```
rd = read.table("DSTMAA_data/tryrates.txt",header=TRUE)
r1 = rd$FYGT1
dt = as.yearmon(rd$DATE,"%b-%y")
plot(dt,r1,type="l",ylab="One-Year Rate")
```

The line with

```
dr1 = resid(lm(r1 ~ seq(along = r1)))
```

detrends the series, and when we plot it we can see that it is indeed detrended. We can then subject the detrended series to Fourier analysis. The plot of the detrended one\-year interest rates is here:

```
dr1 = resid(lm(r1 ~ seq(along = r1)))
plot(dt,dr1,type="l",ylab="Detrended 1y Rate")
```

Now, carry out the Fourier transform.

```
y=fft(dr1)
plot(abs(y),type="l")
```

It's easy to see that the series has both low\-frequency and high\-frequency components. Essentially there are two factors. If we do a factor analysis of interest rates, it turns out we get two factors as well. See the chapter on Discriminant and Factor Analysis.

16\.5 Application to Binomial Option Pricing
--------------------------------------------

We implement the option pricing example in Cerny ([2009](#ref-Cerny)), Chapter 7, Figure 8\.

```
ifft = function(x) { fft(x,inverse=TRUE)/length(x) }
ct = c(599.64,102,0,0)
q = c(0.43523,0.56477,0,0)
R = 1.0033
ifft(fft(ct)*( (4*ifft(q)/R)^3) )
```

```
## [1] 81.36464+0i 115.28447+0i 265.46949+0i 232.62076+0i
```

The resulting price is 81\.36\.

16\.6 Application to probability functions
------------------------------------------

### 16\.6\.1 Characteristic functions

A characteristic function of a variable \\(x\\) is given by the expectation of the following function of \\(x\\): \\\[ \\phi(s) \= E\[e^{isx}] \= \\int\_{\-\\infty}^{\\infty} e^{isx} f(x) \\; dx \\] where \\(f(x)\\) is the probability density of \\(x\\). By Taylor series for \\(e^{isx}\\) we have \\\[\\begin{align} \\int\_{\-\\infty}^{\\infty} e^{isx} f(x) \\; dx \&\= \\int\_{\-\\infty}^{\\infty} \[1\+isx \+ \\frac{1}{2} (isx)^2 \+ \\ldots] f(x)dx \\\\ \&\= \\sum\_{j\=0}^{\\infty} \\frac{(is)^j}{j!} m\_j \\\\ \&\= 1 \+ (is) m\_1 \+ \\frac{1}{2} (is)^2 m\_2 \+ \\frac{1}{6} (is)^3 m\_3 \+ \\ldots \\end{align}\\] where \\(m\_j\\) is the \\(j\\)\-th moment.
It is therefore easy to see that \\\[ m\_j \= \\frac{1}{i^j} \\left\[\\frac{d^j\\phi(s)}{ds^j} \\right]\_{s\=0} \\] where \\(i\=\\sqrt{\-1}\\).

### 16\.6\.2 Finance application

In a paper in 1993, Steve Heston developed a new approach to valuing stock and foreign currency options using a Fourier inversion technique. See also Duffie, Pan and Singleton (2001\) for an extension to jumps, and Chacko and Das (2002\) for a generalization of this to interest\-rate derivatives with jumps. Let's explore a much simpler model of the same idea, to see how we can recover probability functions when we are given a stochastic process for a security. When we are thinking of a dynamically moving financial variable (say \\(x\_t\\)), we are usually interested in knowing what the probability is of this variable reaching a value \\(x\_{\\tau}\\) at time \\(t\=\\tau\\), given that right now, it has value \\(x\_0\\) at time \\(t\=0\\). Note that \\(\\tau\\) is the remaining time to maturity. Suppose we have the following financial variable \\(x\_t\\) following a very simple Brownian motion, i.e. \\\[ dx\_t \= \\mu \\; dt \+ \\sigma\\; dz\_t \\] Here, \\(\\mu\\) is known as the “drift” and \\(\\sigma\\) is the “volatility”. The differential equation above gives the movement of the variable \\(x\\); the term \\(dz\\) is a Brownian motion increment, i.e., a random variable with a normal distribution of mean zero and variance \\(dt\\). What we are interested in is the characteristic function of this process. The characteristic function of \\(x\\) is defined as the Fourier transform of its density \\(f(x)\\), i.e. \\\[ F(x) \= E\[e^{isx}] \= \\int e^{isx} f(x) \\;dx \\] where \\(s\\) is the Fourier variable of integration, and \\(i\=\\sqrt{\-1}\\), as usual. Notice the similarity to the Fourier transforms described earlier in the note. It turns out that once we have the characteristic function, then we can obtain by simple calculations the probability function for \\(x\\), as well as all the moments of \\(x\\).

### 16\.6\.3 Solving for the characteristic function

We write the characteristic function as \\(F(x,\\tau; s)\\). Then, using Ito’s lemma we have \\\[ dF \= F\_x dx \+ \\frac{1}{2} F\_{xx} (dx)^2 \-F\_{\\tau} dt \\] \\(F\_x\\) is the first derivative of \\(F\\) with respect to \\(x\\); \\(F\_{xx}\\) is the second derivative, and \\(F\_{\\tau}\\) is the derivative with respect to remaining maturity. Since \\(F\\) is a characteristic (probability) function, the expected change in \\(F\\) is zero. \\\[ E(dF) \= \\mu F\_x \\;dt\+ \\frac{1}{2} \\sigma^2 F\_{xx} \\; dt\- F\_{\\tau}\\; dt \= 0 \\] which gives a PDE in \\((x,\\tau)\\): \\\[ \\mu F\_x \+ \\frac{1}{2} \\sigma^2 F\_{xx} \- F\_{\\tau} \= 0 \\] We need a boundary condition for the characteristic function, which is \\\[ F(x,\\tau\=0;s) \= e^{isx} \\] We solve the PDE by making an educated guess, which is \\\[ F(x,\\tau;s) \= e^{isx \+ A(\\tau)} \\] which implies that when \\(\\tau\=0\\), \\(A(\\tau\=0\)\=0\\) as well.
We can see that \\\[\\begin{align} F\_x \&\= isF \\\\ F\_{xx} \&\= \-s^2 F\\\\ F\_{\\tau} \&\= A\_{\\tau} F \\end{align}\\] Substituting this back in the PDE gives \\\[\\begin{align} \\mu is F \- \\frac{1}{2} \\sigma^2 s^2 F \- A\_{\\tau} F \&\= 0 \\\\ \\mu is \- \\frac{1}{2} \\sigma^2 s^2 \- A\_{\\tau} \&\= 0 \\\\ \\frac{dA}{d\\tau} \&\= \\mu is \- \\frac{1}{2} \\sigma^2 s^2 \\\\ \\mbox{gives: } A(\\tau) \&\= \\mu is \\tau \- \\frac{1}{2} \\sigma^2 s^2 \\tau, \\quad \\mbox{because } A(0\)\=0 \\end{align}\\] Thus we finally have the characteristic function, which is \\\[ F(x,\\tau; s) \= \\exp\[isx \+ \\mu is \\tau \-\\frac{1}{2} \\sigma^2 s^2 \\tau] \\]

### 16\.6\.4 Computing the moments

In general, the moments are derived by differentiating the characteristic function by \\(s\\) and setting \\(s\=0\\). The \\(k\\)\-th moment will be \\\[ E\[x^k] \= \\frac{1}{i^k} \\left\[ \\frac{\\partial^k F}{\\partial s^k} \\right]\_{s\=0} \\] Let's test it by computing the mean (\\(k\=1\\)): \\\[ E(x) \= \\frac{1}{i} \\left\[ \\frac{\\partial F}{\\partial s} \\right]\_{s\=0} \= x \+ \\mu \\tau \\] where \\(x\\) is the current value \\(x\_0\\). How about the second moment? \\\[ E(x^2\) \= \\frac{1}{i^2} \\left\[ \\frac{\\partial^2 F}{\\partial s^2} \\right]\_{s\=0} \=\\sigma^2 \\tau \+ (x\+\\mu \\tau)^2 \= \\sigma^2 \\tau \+ E(x)^2 \\] Hence, the variance will be \\\[ Var(x) \= E(x^2\) \- E(x)^2 \= \\sigma^2 \\tau \+ E(x)^2 \- E(x)^2 \= \\sigma^2 \\tau \\]

### 16\.6\.5 Probability density function

It turns out that we can “invert” the characteristic function to get the pdf (boy, this characteristic function sure is useful!). Again we use Fourier inversion; the result is stated as follows: \\\[ f(x\_{\\tau} \| x\_0\) \= \\frac{1}{\\pi} \\int\_0^{\\infty} Re\[e^{\-isx\_{\\tau}} F(x\_0,\\tau; s)]\\; ds \\] Here is an implementation:

```
#Model for fourier inversion for arithmetic brownian motion
x0 = 10
mu = 10
sig = 5
tau = 0.25
s = (0:10000)/100
ds = s[2]-s[1]
phi = exp(1i*s*x0+mu*1i*s*tau-0.5*s^2*sig^2*tau)
x = (0:250)/10
fx=NULL
for (k in 1:length(x)) {
  g = sum(Re(exp(-1i*s*x[k]) * phi * ds))/pi
  #print(c(x[k],g))
  fx = c(fx,g)
}
plot(x,fx,type="l",main="Inverse Fourier plot")
```
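As a sanity check (a sketch, not part of the original text), the density recovered by Fourier inversion above should coincide with the closed\-form normal density implied by the arithmetic Brownian motion, which has mean \\(x\_0 \+ \\mu\\tau\\) and standard deviation \\(\\sigma\\sqrt{\\tau}\\). The snippet below plots that closed\-form density for the same parameter values, so the two plots can be compared.

```
# Sanity check sketch: closed-form density of x_tau under dx = mu dt + sig dz,
# i.e., a normal with mean x0 + mu*tau and sd sig*sqrt(tau), using the same
# parameter values as the inversion code above.
x0 = 10; mu = 10; sig = 5; tau = 0.25
x = (0:250)/10
plot(x, dnorm(x, mean = x0 + mu*tau, sd = sig*sqrt(tau)), type = "l",
     main = "Closed-form normal density (compare with the inverse Fourier plot)")
```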
Chapter 17 Finance Models
=========================

17\.1 Brownian Motions: Quick Introduction
------------------------------------------

The law of motion for stocks is often based on a geometric Brownian motion, i.e., \\\[ dS(t) \= \\mu S(t) \\; dt \+ \\sigma S(t) \\; dB(t), \\quad S(0\)\=S\_0\. \\] This is a **stochastic differential equation** (SDE), because it describes random movement of the stock \\(S(t)\\). The coefficient \\(\\mu\\) determines the drift of the process, and \\(\\sigma\\) determines its volatility. Randomness is injected by Brownian motion \\(B(t)\\). This is more general than a deterministic differential equation that is only a function of time, as with a bank account, whose accretion is based on the equation \\(dy(t) \= r \\;y(t) \\;dt\\), where \\(r\\) is the risk\-free rate of interest. The solution to an SDE is not a deterministic function but a random function. In this case, the solution for time interval \\(h\\) is known to be \\\[ S(t\+h) \= S(t) \\exp \\left\[\\left(\\mu\-\\frac{1}{2}\\sigma^2 \\right) h \+ \\sigma B(h) \\right] \\] The presence of \\(B(h) \\sim N(0,h)\\) in the solution makes the function random. The presence of the exponential return makes the stock price lognormal. (Note: if random variable \\(x\\) is normal, then \\(e^x\\) is lognormal.) Re\-arranging, the stock return is \\\[ R(t\+h) \= \\ln\\left( \\frac{S(t\+h)}{S(t)}\\right) \\sim N\\left\[ \\left(\\mu\-\\frac{1}{2}\\sigma^2 \\right) h, \\sigma^2 h \\right] \\] Using properties of the lognormal distribution, the conditional mean of the stock price becomes \\\[ E\[S(t\+h) \| S(t)] \= S(t)\\, e^{\\mu h} \\] The following R code computes the annualized volatility \\(\\sigma\\).

```
library(quantmod)
```

```
## Loading required package: xts
```

```
## Loading required package: zoo
```

```
## 
## Attaching package: 'zoo'
```

```
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
```

```
## Loading required package: TTR
```

```
## Loading required package: methods
```

```
## Version 0.4-0 included new data defaults. See ?getSymbols.
```

```
getSymbols('MSFT',src='google')
```

```
## As of 0.4-0, 'getSymbols' uses env=parent.frame() and
## auto.assign=TRUE by default.
## 
## This behavior will be phased out in 0.5-0 when the call will
## default to use auto.assign=FALSE. getOption("getSymbols.env") and
## getOptions("getSymbols.auto.assign") are now checked for alternate defaults
## 
## This message is shown once per session and may be disabled by setting
## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for more details.
```

```
## [1] "MSFT"
```

```
stkp = MSFT$MSFT.Close
rets = diff(log(stkp))[-1]
h = 1/252
sigma = sd(rets)/sqrt(h)
print(sigma)
```

```
## [1] 0.2802329
```

The parameter \\(\\mu\\) is also easily estimated as

```
mu = mean(rets)/h+0.5*sigma^2
print(mu)
```

```
## [1] 0.1152832
```

So the additional term \\(\\frac{1}{2}\\sigma^2\\) does matter substantially.

17\.2 Monte Carlo Simulation
----------------------------

It is easy to simulate a path of stock prices using a discrete form of the solution to the Geometric Brownian motion SDE. \\\[ S(t\+h) \= S(t) \\exp \\left\[\\left(\\mu\-\\frac{1}{2}\\sigma^2 \\right) h \+ \\sigma \\cdot e\\; \\sqrt{h} \\right] \\] Note that we replaced \\(B(h)\\) with \\(e \\sqrt{h}\\), where \\(e \\sim N(0,1\)\\). Both \\(B(h)\\) and \\(e \\sqrt{h}\\) have mean zero and variance \\(h\\). Knowing \\(S(t)\\), we can simulate \\(S(t\+h)\\) by drawing \\(e\\) from a standard normal distribution.
```
n = 252
s0 = 100
mu = 0.10
sig = 0.20
s = matrix(0,1,n+1)
h=1/n
s[1] = s0
for (j in 2:(n+1)) { s[j]=s[j-1]*exp((mu-sig^2/2)*h +sig*rnorm(1)*sqrt(h)) }
print(s[1:5])
```

```
## [1] 100.0000 100.8426 100.6249 100.6474 100.0126
```

```
print(s[(n-4):n])
```

```
## [1] 101.3609 101.0642 101.9908 101.3826 102.2418
```

```
plot(t(s),type="l",col="blue",xlab="Days",ylab="stock price"); grid(lwd=2)
```

17\.3 Vectorization
-------------------

The same logic may be used to generate multiple paths of stock prices, in a **vectorized** way as follows. In the following example we generate 3 paths. Because of the vectorization, the run time does not increase linearly with the number of paths, and in fact, hardly increases at all.

```
s = matrix(0,3,n+1)
s[,1] = s0
for (j in seq(2,(n+1))) { s[,j]=s[,j-1]*exp((mu-sig^2/2)*h +sig*matrix(rnorm(3),3,1)*sqrt(h)) }
ymin = min(s); ymax = max(s)
plot(t(s)[,1],ylim=c(ymin,ymax),type="l",xlab="Days",ylab="stock price"); grid(lwd=2)
lines(t(s)[,2],col="red",lty=2)
lines(t(s)[,3],col="blue",lty=3)
```

The plot code shows how to change the style of the path and its color. If you generate many more paths, how can you find the probability of the stock ending up below a defined price? Can you do this directly from the discrete version of the Geometric Brownian motion process above?

17\.4 Bivariate random variables
--------------------------------

To convert two independent random variables \\((e\_1,e\_2\) \\sim N(0,1\)\\) into two correlated random variables \\((x\_1,x\_2\)\\) with correlation \\(\\rho\\), use the following transformations. \\\[ x\_1 \= e\_1, \\quad \\quad x\_2 \= \\rho \\cdot e\_1 \+ \\sqrt{1\-\\rho^2} \\cdot e\_2 \\]

```
e = matrix(rnorm(20000),10000,2)
print(cor(e))
```

```
##              [,1]         [,2]
## [1,] 1.0000000000 0.0003501396
## [2,] 0.0003501396 1.0000000000
```

```
print(cor(e[,1],e[,2]))
```

```
## [1] 0.0003501396
```

We see that these are uncorrelated random variables. Now we create a pair of correlated variates using the formula above.

```
rho = 0.6
x1 = e[,1]
x2 = rho*e[,1]+sqrt(1-rho^2)*e[,2]
cor(x1,x2)
```

```
## [1] 0.5941021
```

Check algebraically that \\(E\[x\_i]\=0, i\=1,2\\), \\(Var\[x\_i]\=1, i\=1,2\\). Also check that \\(Cov\[x\_1,x\_2]\=\\rho \= Corr\[x\_1,x\_2]\\).

```
print(mean(x1))
```

```
## [1] -0.01568866
```

```
print(mean(x2))
```

```
## [1] 0.0004103859
```

```
print(var(x1))
```

```
## [1] 0.993663
```

```
print(var(x2))
```

```
## [1] 1.014452
```

```
print(cov(x1,x2))
```

```
## [1] 0.5964806
```
``` L = t(chol(cv)) print(L) ``` ``` ## [,1] [,2] [,3] ## [1,] 0.10 0.00000000 0.0000000 ## [2,] 0.05 0.19364917 0.0000000 ## [3,] 0.05 0.09036961 0.3864367 ``` The Cholesky decomposition is now used to generate multivariate random variables with the correct correlation. ``` e = matrix(rnorm(3*10000),10000,3) x = t(L %*% t(e)) print(dim(x)) ``` ``` ## [1] 10000 3 ``` ``` print(cov(x)) ``` ``` ## [,1] [,2] [,3] ## [1,] 0.009857043 0.004803601 0.005329556 ## [2,] 0.004803601 0.040708872 0.021150791 ## [3,] 0.005329556 0.021150791 0.161275573 ``` In the last calculation, we confirmed that the simulated data has the same covariance matrix as the one that we generated correlated random variables from. 17\.6 Portfolio Computations ---------------------------- Let’s enter a sample mean vector and covariance matrix and then using some sample weights, we will perform some basic matrix computations for portfolios to illustrate the use of R. ``` mu = matrix(c(0.01,0.05,0.15),3,1) cv = matrix(c(0.01,0,0,0,0.04,0.02, 0,0.02,0.16),3,3) print(mu) ``` ``` ## [,1] ## [1,] 0.01 ## [2,] 0.05 ## [3,] 0.15 ``` ``` print(cv) ``` ``` ## [,1] [,2] [,3] ## [1,] 0.01 0.00 0.00 ## [2,] 0.00 0.04 0.02 ## [3,] 0.00 0.02 0.16 ``` ``` w = matrix(c(0.3,0.3,0.4)) print(w) ``` ``` ## [,1] ## [1,] 0.3 ## [2,] 0.3 ## [3,] 0.4 ``` The expected return of the portfolio is ``` muP = t(w) %*% mu print(muP) ``` ``` ## [,1] ## [1,] 0.078 ``` And the standard deviation of the portfolio is ``` stdP = sqrt(t(w) %*% cv %*% w) print(stdP) ``` ``` ## [,1] ## [1,] 0.1868154 ``` We thus generated the expected return and risk of the portfolio, i.e., the values \\(0\.078\\) and \\(0\.187\\), respectively. We are interested in the risk of a portfolio, often measured by its variance. As we had seen in a previous chapter, as we increase \\(n\\), the number of securities in the portfolio, the variance keeps dropping, and asymptotes to a level equal to the average covariance of all the assets. It is interesting to see what happens as \\(n\\) increases through a very simple function in R that returns the standard deviation of the portfolio. ``` sigport = function(n,sig_i2,sig_ij) { cv = matrix(sig_ij,n,n) diag(cv) = sig_i2 w = matrix(1/n,n,1) result = sqrt(t(w) %*% cv %*% w) } ``` We now apply it to increasingly diversified portfolios. ``` n = seq(5,100,5) risk_n = NULL for (nn in n) { risk_n = c(risk_n,sigport(nn,0.04,0.01)) } print(risk_n) ``` ``` ## [1] 0.1264911 0.1140175 0.1095445 0.1072381 0.1058301 0.1048809 0.1041976 ## [8] 0.1036822 0.1032796 0.1029563 0.1026911 0.1024695 0.1022817 0.1021204 ## [15] 0.1019804 0.1018577 0.1017494 0.1016530 0.1015667 0.1014889 ``` We can plot this to see the classic systematic risk plot. Systematic risk declines as the number of stocks in the portfolio increases. ``` plot(n,risk_n,type="l",ylab="Portfolio Std Dev") ``` 17\.7 Optimal Portfolio ----------------------- We will review the notation one more time. Assume that the risk free asset has return \\(r\_f\\). And we have \\(n\\) risky assets, with mean returns \\(\\mu\_i, i\=1\...n\\). We need to invest in optimal weights \\(w\_i\\) in each asset. Let \\(w \= \[w\_1, \\ldots, w\_n]'\\) be a column vector of portfolio weights. We define \\(\\mu \= \[\\mu\_1, \\ldots, \\mu\_n]'\\) be the column vector of mean returns on each asset, and \\({\\bf 1} \= \[1,\\ldots,1]'\\) be a column vector of ones. 
Hence, the expected return on the portfolio will be \\\[ E(R\_p) \= (1\-w'{\\bf 1}) r\_f \+ w'\\mu \\] The variance of return on the portfolio will be \\\[ Var(R\_p) \= w' \\Sigma w \\] where \\(\\Sigma\\) is the covariance matrix of returns on the portfolio. The objective function is a trade\-off between return and risk, with \\(\\beta\\) modulating the balance between risk and return: \\\[ U(R\_p) \= r\_f \+ w'(\\mu \- r\_f {\\bf 1}) \- \\frac{\\beta}{2} w'\\Sigma w \\] The f.o.c. becomes a system of equations now (not a single equation), since we differentiate by an entire vector \\(w\\): \\\[ \\frac{dU}{dw'} \= \\mu \- r\_f {\\bf 1} \- \\beta \\Sigma w \= {\\bf 0} \\] where the RHS is a vector of zeros of dimension \\(n\\). Solving we have \\\[ w \= \\frac{1}{\\beta} \\Sigma^{\-1} (\\mu \- r\_f {\\bf 1}) \\] Therefore, allocation to the risky assets

* Increases when the relative return to it \\((\\mu \- r\_f {\\bf 1})\\) increases.
* Decreases when risk aversion increases.
* Decreases when riskiness of the assets increases as proxied for by \\(\\Sigma\\).

```
n = 3
print(cv)
```

```
##      [,1] [,2] [,3]
## [1,] 0.01 0.00 0.00
## [2,] 0.00 0.04 0.02
## [3,] 0.00 0.02 0.16
```

```
print(mu)
```

```
##      [,1]
## [1,] 0.01
## [2,] 0.05
## [3,] 0.15
```

```
rf=0.005
beta = 4
wuns = matrix(1,n,1)
print(wuns)
```

```
##      [,1]
## [1,]    1
## [2,]    1
## [3,]    1
```

```
w = (1/beta)*(solve(cv) %*% (mu-rf*wuns))
print(w)
```

```
##           [,1]
## [1,] 0.1250000
## [2,] 0.1791667
## [3,] 0.2041667
```

```
w_in_rf = 1-sum(w)
print(w_in_rf)
```

```
## [1] 0.4916667
```

What if we reduced beta?

```
beta = 3
w = (1/beta)*(solve(cv) %*% (mu-rf*wuns)); print(w)
```

```
##           [,1]
## [1,] 0.1666667
## [2,] 0.2388889
## [3,] 0.2722222
```

```
beta = 2
w = (1/beta)*(solve(cv) %*% (mu-rf*wuns)); print(w)
```

```
##           [,1]
## [1,] 0.2500000
## [2,] 0.3583333
## [3,] 0.4083333
```

Notice that the weights in stocks scale linearly with \\(1/\\beta\\), i.e., inversely with risk aversion. The relative proportions of the stocks themselves remain constant. Hence, \\(\\beta\\) modulates the proportions invested in a risk\-free asset versus a stock portfolio, in which the stock proportions remain the same. It is as if the stock versus bond decision can be taken separately from the decision about the composition of the stock portfolio. This is known as the **two\-fund separation** property, i.e., first determine the proportions in the bond fund vs stock fund, and the allocation within each fund can be handled subsequently.
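To make the two\-fund separation point concrete, here is a small sketch (not from the original text) that recomputes the risky\-asset mix \\(w/\\sum\_i w\_i\\) for several values of \\(\\beta\\), using the same inputs as above; the proportions should come out identical each time.

```
# Two-fund separation check (a sketch): the relative risky-asset mix w/sum(w)
# is the same for every risk-aversion level beta; only the split between the
# risk-free asset and the stock portfolio changes.
mu = matrix(c(0.01,0.05,0.15),3,1)
cv = matrix(c(0.01,0,0,0,0.04,0.02,0,0.02,0.16),3,3)
rf = 0.005
wuns = matrix(1,3,1)
for (beta in c(2,3,4)) {
  w = (1/beta)*(solve(cv) %*% (mu - rf*wuns))
  print(round(as.vector(w/sum(w)), 4))   # identical proportions each time
}
```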
Chapter 18 Being Mean with Variance: Markowitz Optimization
===========================================================

18\.1 Diversification of a portfolio
------------------------------------

It is useful to examine the power of using vector algebra with an application. Here we use vector and summation math to understand how diversification in stock portfolios works. Diversification occurs when we increase the number of non\-perfectly correlated stocks in a portfolio, thereby reducing portfolio variance. In order to compute the variance of the portfolio we need to use the portfolio weights \\({\\bf w}\\) and the covariance matrix of stock returns \\({\\bf R}\\), denoted \\({\\bf \\Sigma}\\). We first write down the formula for a portfolio’s return variance: \\\[ Var(\\boldsymbol{w'R}) \= \\boldsymbol{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \+ \\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\boldsymbol{w\_i w\_j \\sigma\_{ij}} \\] Readers are strongly encouraged to implement this by hand for \\(n\=2\\) to convince themselves that the vector form of the expression for variance \\(\\boldsymbol{w'\\Sigma w}\\) is the same thing as the long form on the right\-hand side of the equation above. If returns are independent, then the formula collapses to: \\\[ Var(\\bf{w'R}) \= \\bf{w'\\Sigma w} \= \\sum\_{i\=1}^n \\boldsymbol{w\_i^2 \\sigma\_i^2} \\] If returns are dependent, and equal amounts are invested in each asset (\\(w\_i\=1/n,\\;\\;\\forall i\\)): \\\[ \\begin{align} Var(\\bf{w'R}) \&\= \\frac{1}{n}\\sum\_{i\=1}^n \\frac{\\sigma\_i^2}{n} \+ \\frac{n\-1}{n}\\sum\_{i\=1}^n \\sum\_{j\=1,i \\neq j}^n \\frac{\\sigma\_{ij}}{n(n\-1\)}\\\\ \&\= \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\frac{n\-1}{n} \\bar{\\sigma\_{ij}}\\\\ \&\= \\frac{1}{n} \\bar{\\sigma\_i}^2 \+ \\left(1 \- \\frac{1}{n} \\right) \\bar{\\sigma\_{ij}} \\end{align} \\] The first term is the average variance, denoted \\(\\bar{\\sigma\_i}^2\\), divided by \\(n\\), and the second is the average covariance, denoted \\(\\bar{\\sigma\_{ij}}\\), multiplied by the factor \\((n\-1\)/n\\). As \\(n \\rightarrow \\infty\\), \\\[ Var({\\bf w'R}) \= \\bar{\\sigma\_{ij}} \\] This produces the remarkable result that in a well diversified portfolio, the variance of each individual stock’s return does not matter at all for portfolio risk! Further, the risk of the portfolio, i.e., its variance, is nothing but the average of the off\-diagonal terms in the covariance matrix.

```
sd=0.50; cv=0.05; m=100
sd_p = matrix(0,m,1)
for (j in 1:m) {
  cv_mat = matrix(1,j,j)*cv
  diag(cv_mat) = sd^2
  w = matrix(1/j,j,1)
  sd_p[j] = sqrt(t(w) %*% cv_mat %*% w)
}
plot(sd_p,type="l",col="blue")
```

18\.2 Markowitz Portfolio Problem
---------------------------------

We now explore the mathematics of a famous portfolio optimization result, known as the Markowitz mean\-variance problem. The solution to this problem is still widely used in practice. We are interested in portfolios of \\(n\\) assets, which have a mean return, denoted \\(E(r\_p)\\), and a variance, denoted \\(Var(r\_p)\\). Let \\(\\underline{w} \\in R^n\\) be the portfolio weights. What this means is that we allocate each $1 into various assets, such that the total of the weights sums up to 1\. Note that we do not preclude short\-selling, so that it is possible for weights to be negative as well. The optimization problem is defined as follows. We wish to find the portfolio that delivers the minimum variance (risk) while achieving a pre\-specified level of expected (mean) return.
\\\[ \\min\_{\\underline{w}} \\quad \\frac{1}{2}\\: \\underline{w}' \\underline{\\Sigma} \\: \\underline{w} \\] subject to \\\[ \\begin{align} \\underline{w}'\\:\\underline{\\mu} \&\= E(r\_p) \\\\ \\underline{w}'\\:\\underline{1} \&\= 1 \\end{align} \\] Note that we have a \\(\\frac{1}{2}\\) in front of the variance term above, which is for mathematical neatness as will become clear shortly. The minimized solution is not affected by scaling the objective function by a constant. The first constraint forces the expected return of the portfolio to a specified mean return, denoted \\(E(r\_p)\\), and the second constraint requires that the portfolio weights add up to 1, also known as the “fully invested” constraint. It is convenient that the constraints are equality constraints. 18\.3 The Solution by Lagrange Multipliers ------------------------------------------ This is a Lagrangian problem, and requires that we embed the constraints into the objective function using Lagragian multipliers \\(\\{\\lambda\_1, \\lambda\_2\\}\\). This results in the following minimization problem: \\\[ \\min\_{\\underline{w}\\, ,\\lambda\_1, \\lambda\_2} \\quad L\=\\frac{1}{2}\\:\\underline{w}'\\underline{\\Sigma} \\:\\underline{w}\+ \\lambda\_1\[E(r\_p)\-\\underline{w}'\\underline{\\mu}]\+\\lambda\_2\[1\-\\underline{w}'\\underline{1}\\;] \\] 18\.4 Optimization ------------------ To minimize this function, we take derivatives with respect to \\(\\underline{w}\\), \\(\\lambda\_1\\), and \\(\\lambda\_2\\), to arrive at the first order conditions: \\\[ \\begin{align} \\frac{\\partial L}{\\partial \\underline{w}} \&\= \\underline{\\Sigma}\\underline{w} \- \\lambda\_1 \\underline{\\mu} \- \\lambda\_2 \\underline{1}\= \\underline{0} \\qquad(1\) \\\\ \\\\ \\frac{\\partial L}{\\partial \\lambda\_1} \&\= E(r\_p)\-\\underline{w}'\\underline{\\mu}\= 0 \\\\ \\\\ \\frac{\\partial L}{\\partial \\lambda\_2} \&\= 1\-\\underline{w}'\\underline{1}\= 0 \\end{align} \\] The first equation above, is a system of \\(n\\) equations, because the derivative is taken with respect to every element of the vector \\(\\underline{w}\\). Hence, we have a total of \\((n\+2\)\\) first\-order conditions. From (1\) \\\[ \\begin{align} \\underline{w} \&\= \\Sigma^{\-1}(\\lambda\_1\\underline{\\mu}\+\\lambda\_2\\underline{1}) \\\\ \&\= \\lambda\_1\\Sigma^{\-1}\\underline{\\mu}\+\\lambda\_2\\Sigma^{\-1}\\underline{1} \\quad(2\) \\end{align} \\] Premultiply (2\) by \\(\\underline{\\mu}'\\): \\\[ \\underline{\\mu}'\\underline{w}\=\\lambda\_1\\underbrace{\\,\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{\\mu}\\,}\_B\+ \\lambda\_2\\underbrace{\\,\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{1}\\,}\_A\=E(r\_p) \\] Also premultiply (2\) by \\(\\underline{1}'\\): \\\[\\ \\underline{1}'\\underline{w}\=\\lambda\_1\\underbrace{\\,\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{\\mu}}\_A\+ \\lambda\_2\\underbrace{\\,\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{1}}\_C\=1 \\] Solve for \\(\\lambda\_1, \\lambda\_2\\) \\\[ \\lambda\_1\=\\frac{CE(r\_p)\-A}{D} \\] \\\[ \\lambda\_2\=\\frac{B\-AE(r\_p)}{D} \\] \\\[ \\mbox{where} \\quad D\=BC\-A^2 \\] 18\.5 Notes on the solution --------------------------- Note 1: Since \\(\\underline{\\Sigma}\\) is positive definite, \\(\\underline{\\Sigma}^{\-1}\\) is also positive definite: \\(B\>0, C\>0\\). Note 2: Given solutions for \\(\\lambda\_1, \\lambda\_2\\), we solve for \\(\\underline{w}\\). 
\\\[ \\underline{w}\=\\underbrace{\\;\\frac{1}{D}\\,\[B\\underline{\\Sigma}^{\-1}\\underline{1} \-A\\underline{\\Sigma}^{\-1}\\underline{\\mu}]}\_{\\underline{g}}\+\\underbrace{\\;\\frac{1}{D }\\,\[C\\underline{\\Sigma}^{\-1}\\underline{\\mu} \- A\\underline{\\Sigma}^{\-1}\\underline{1}\\,]}\_{\\underline{h}}\\cdot E(r\_p) \\] This is the expression for the optimal portfolio weights that minimize the variance for given expected return \\(E(r\_p)\\). We see that the vectors \\(\\underline{g}\\), \\(\\underline{h}\\) are fixed once we are given the inputs to the problem, i.e., \\(\\underline{\\mu}\\) and \\(\\underline{\\Sigma}\\). Note 3: We can vary \\(E(r\_p)\\) to get a set of frontier (efficient or optimal) portfolios \\(\\underline{w}\\). \\\[ \\underline{w}\=\\underline{g}\+\\underline{h}\\,E(r\_p) \\] \\\[ \\begin{align} if \\quad E(r\_p)\&\= 0,\\; \\underline{w} \= \\underline{g} \\\\ if \\quad E(r\_p)\&\= 1,\\; \\underline{w} \= \\underline{g}\+\\underline{h} \\end{align} \\] Note that \\\[ \\underline{w}\=\\underline{g}\+\\underline{h}\\,E(r\_p)\=\[1\-E(r\_p)]\\,\\underline{g}\+E(r\_p)\[\\,\\underline{g}\+\\underline{h}\\:] \\] Hence these 2 portfolios \\(\\underline{g}\\), \\(\\underline{g} \+ \\underline{h}\\) “generate” the entire frontier. 18\.6 The Function ------------------ We create a function to return the optimal portfolio weights. Here is the code for the function to do portfolio optimization: ``` markowitz = function(mu,cv,Er) { n = length(mu) wuns = matrix(1,n,1) A = t(wuns) %*% solve(cv) %*% mu B = t(mu) %*% solve(cv) %*% mu C = t(wuns) %*% solve(cv) %*% wuns D = B*C - A^2 lam = (C*Er-A)/D gam = (B-A*Er)/D wts = lam[1]*(solve(cv) %*% mu) + gam[1]*(solve(cv) %*% wuns) g = (B[1]*(solve(cv) %*% wuns) - A[1]*(solve(cv) %*% mu))/D[1] h = (C[1]*(solve(cv) %*% mu) - A[1]*(solve(cv) %*% wuns))/D[1] wts = g + h*Er } ``` 18\.7 Example ------------- We can enter an example of a mean return vector and the covariance matrix of returns, and then call the function for a given expected return. ``` #PARAMETERS mu = matrix(c(0.02,0.10,0.20),3,1) n = length(mu) cv = matrix(c(0.0001,0,0,0,0.04,0.02,0,0.02,0.16),n,n) print(mu) ``` ``` ## [,1] ## [1,] 0.02 ## [2,] 0.10 ## [3,] 0.20 ``` ``` print(round(cv,4)) ``` ``` ## [,1] [,2] [,3] ## [1,] 1e-04 0.00 0.00 ## [2,] 0e+00 0.04 0.02 ## [3,] 0e+00 0.02 0.16 ``` The output is the vector of optimal portfolio weights. ``` Er = 0.18 #SOLVE PORTFOLIO PROBLEM wts = markowitz(mu,cv,Er) print(wts) ``` ``` ## [,1] ## [1,] -0.3575931 ## [2,] 0.8436676 ## [3,] 0.5139255 ``` ``` print(sum(wts)) ``` ``` ## [1] 1 ``` ``` print(t(wts) %*% mu) ``` ``` ## [,1] ## [1,] 0.18 ``` ``` print(sqrt(t(wts) %*% cv %*% wts)) ``` ``` ## [,1] ## [1,] 0.2967932 ``` 18\.8 A different expected return --------------------------------- If we change the expected return to 0\.10, then we get a different set of portfolio weights. ``` Er = 0.10 #SOLVE PORTFOLIO PROBLEM wts = markowitz(mu,cv,Er) print(wts) ``` ``` ## [,1] ## [1,] 0.3209169 ## [2,] 0.4223496 ## [3,] 0.2567335 ``` ``` print(t(wts) %*% mu) ``` ``` ## [,1] ## [1,] 0.1 ``` ``` print(sqrt(t(wts) %*% cv %*% wts)) ``` ``` ## [,1] ## [1,] 0.1484205 ``` Note that in the first example, to get a high expected return of 0\.18, we needed to take some leverage, by shorting the low risk asset and going long the medium and high risk assets. When we dropped the expected return to 0\.10, all weights are positive, i.e., we have a long\-only portfolio. 
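As a usage sketch (not from the original text) that assumes the `markowitz()` function and the `mu` and `cv` inputs defined above are still in scope: since the frontier weights are affine in the target return, \\(\\underline{w} \= \\underline{g} \+ \\underline{h}\\,E(r\_p)\\), the difference quotient of the weights with respect to \\(E(r\_p)\\) should be the same vector \\(\\underline{h}\\) no matter which pair of target returns we use.

```
# Usage sketch (assumes markowitz(), mu, and cv from the code above):
# frontier weights are affine in the target return, w = g + h*Er, so the
# slope (w(Er2) - w(Er1))/(Er2 - Er1) is the same for any pair of targets.
w10 = markowitz(mu, cv, 0.10)
w14 = markowitz(mu, cv, 0.14)
w18 = markowitz(mu, cv, 0.18)
print(cbind((w14 - w10)/0.04, (w18 - w14)/0.04))   # two identical columns = h
```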
18\.9 Numerical Optimization with Constraints
---------------------------------------------

The **quadprog** package is an optimizer that takes a quadratic objective function with linear constraints. Hence, it is exactly what is needed for the mean\-variance portfolio problem we just considered. The advantage of this package is that we can also apply additional inequality constraints. For example, we may not wish to permit short\-sales of any asset, and thereby we might bound all the weights to lie between zero and one. The specification in the **quadprog** package of the problem set up is shown in the manual:

```
Description

This routine implements the dual method of Goldfarb and Idnani (1982, 1983)
for solving quadratic programming problems of the form
min(-d^T b + 1/2 b^T D b) with the constraints A^T b >= b_0.
(note: b here is the weights vector in our problem)

Usage

solve.QP(Dmat, dvec, Amat, bvec, meq=0, factorized=FALSE)

Arguments

Dmat        matrix appearing in the quadratic function to be minimized.
dvec        vector appearing in the quadratic function to be minimized.
Amat        matrix defining the constraints under which we want to minimize
            the quadratic function.
bvec        vector holding the values of b_0 (defaults to zero).
meq         the first meq constraints are treated as equality constraints,
            all further as inequality constraints (defaults to 0).
factorized  logical flag: if TRUE, then we are passing R^(-1) (where D = R^T R)
            instead of the matrix D in the argument Dmat.
```

In our problem set up, with three securities, and no short sales, we will have the following **Amat** and **bvec**. The constraints will be modulated by `meq = 2`, which states that the first two constraints will be equality constraints, and the last three will be greater\-than\-or\-equal\-to constraints. The constraints will be of the form \\(A'w \\geq b\_0\\), i.e., \\\[ \\begin{align} w\_1 \\mu\_1 \+ w\_2 \\mu\_2 \+ w\_3 \\mu\_3 \&\= E(r\_p) \\\\ w\_1 1 \+ w\_2 1 \+ w\_3 1 \&\= 1 \\\\ w\_1 \&\\geq 0\\\\ w\_2 \&\\geq 0\\\\ w\_3 \&\\geq 0 \\end{align} \\] The code for using the package is as follows. If we run this code we get the following result for expected return \= 0\.18, with short\-selling allowed.

```
#SOLVING THE PROBLEM WITH THE "quadprog" PACKAGE
Er = 0.18

library(quadprog)
nss = 0     #Equals 1 if no short sales allowed
Bmat = matrix(0,n,n)     #No Short sales matrix
diag(Bmat) = 1
Amat = matrix(c(mu,1,1,1),n,2)
if (nss==1) { Amat = matrix(c(Amat,Bmat),n,2+n) }
dvec = matrix(0,n,1)
bvec = matrix(c(Er,1),2,1)
if (nss==1) { bvec = t(c(bvec,matrix(0,3,1))) }
sol = solve.QP(cv,dvec,Amat,bvec,meq=2)
print(sol$solution)
```

```
## [1] -0.3575931  0.8436676  0.5139255
```

This is exactly what is obtained from the Markowitz solution. Hence, the model checks out. What if we restricted short\-selling? Then we would get the following solution.
``` #SOLVING THE PROBLEM WITH THE "quadprog" PACKAGE Er = 0.18 library(quadprog) nss = 1 #Equals 1 if no short sales allowed Bmat = matrix(0,n,n) #No Short sales matrix diag(Bmat) = 1 Amat = matrix(c(mu,1,1,1),n,2) if (nss==1) { Amat = matrix(c(Amat,Bmat),n,2+n) } dvec = matrix(0,n,1) bvec = matrix(c(Er,1),2,1) if (nss==1) { bvec = t(c(bvec,matrix(0,3,1))) } sol = solve.QP(cv,dvec,Amat,bvec,meq=2) print(sol$solution) ``` ``` ## [1] 0.0 0.2 0.8 ``` ``` wstar = as.matrix(sol$solution) print(t(wstar) %*% mu) ``` ``` ## [,1] ## [1,] 0.18 ``` ``` print(sqrt(t(wstar) %*% cv %*% wstar)) ``` ``` ## [,1] ## [1,] 0.332265 ``` 18\.10 The Efficient Frontier ----------------------------- Since we can use the Markowitz model to solve for the optimal portfolio weights when the expected return is fixed, we can keep solving for different values of \\(E(r\_p)\\). This will trace out the efficient frontier. The program to do this and plot the frontier is as follows. ``` #TRACING OUT THE EFFICIENT FRONTIER Er_vec = as.matrix(seq(0.01,0.5,0.01)) Sig_vec = matrix(0,50,1) j = 0 for (Er in Er_vec) { j = j+1 wts = markowitz(mu,cv,Er) Sig_vec[j] = sqrt(t(wts) %*% cv %*% wts) } plot(Sig_vec,Er_vec,type='l') ``` ``` print(cbind(Sig_vec,Er_vec)) ``` ``` ## [,1] [,2] ## [1,] 0.021486319 0.01 ## [2,] 0.009997134 0.02 ## [3,] 0.020681789 0.03 ## [4,] 0.038013721 0.04 ## [5,] 0.056141450 0.05 ## [6,] 0.074486206 0.06 ## [7,] 0.092919536 0.07 ## [8,] 0.111397479 0.08 ## [9,] 0.129900998 0.09 ## [10,] 0.148420529 0.10 ## [11,] 0.166950742 0.11 ## [12,] 0.185488436 0.12 ## [13,] 0.204031572 0.13 ## [14,] 0.222578791 0.14 ## [15,] 0.241129149 0.15 ## [16,] 0.259681974 0.16 ## [17,] 0.278236773 0.17 ## [18,] 0.296793176 0.18 ## [19,] 0.315350898 0.19 ## [20,] 0.333909721 0.20 ## [21,] 0.352469471 0.21 ## [22,] 0.371030008 0.22 ## [23,] 0.389591219 0.23 ## [24,] 0.408153014 0.24 ## [25,] 0.426715315 0.25 ## [26,] 0.445278059 0.26 ## [27,] 0.463841194 0.27 ## [28,] 0.482404674 0.28 ## [29,] 0.500968460 0.29 ## [30,] 0.519532521 0.30 ## [31,] 0.538096827 0.31 ## [32,] 0.556661353 0.32 ## [33,] 0.575226080 0.33 ## [34,] 0.593790987 0.34 ## [35,] 0.612356059 0.35 ## [36,] 0.630921280 0.36 ## [37,] 0.649486639 0.37 ## [38,] 0.668052123 0.38 ## [39,] 0.686617722 0.39 ## [40,] 0.705183428 0.40 ## [41,] 0.723749232 0.41 ## [42,] 0.742315127 0.42 ## [43,] 0.760881106 0.43 ## [44,] 0.779447163 0.44 ## [45,] 0.798013292 0.45 ## [46,] 0.816579490 0.46 ## [47,] 0.835145750 0.47 ## [48,] 0.853712070 0.48 ## [49,] 0.872278445 0.49 ## [50,] 0.890844871 0.50 ``` We can also simulate to see how the efficient frontier appears as the outer envelope of candidate portfolios. ``` #SIMULATE THE EFFICIENT FRONTIER n = 10000 w = matrix(rnorm(2*n),n,2) w = cbind(w,1-rowSums(w)) Exp_ret = w %*% mu Sd_ret = matrix(0,n,1) for (j in 1:n) { wt = as.matrix(w[j,]) Sd_ret[j] = sqrt(t(wt) %*% cv %*% wt) } plot(Sd_ret,Exp_ret,col="red") lines(Sig_vec,Er_vec,col="blue",lwd=6) ``` 18\.11 Covariances of frontier portfolios ----------------------------------------- Suppose we have two portfolios on the efficient frontier with weight vectors \\(\\underline{w}\_p\\) and \\(\\underline{w}\_q\\). 
The covariance between these two portfolios is: \\\[ Cov(r\_p,r\_q)\=\\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \=\[\\underline{g}\+\\underline{h}E(r\_p)]'\\underline{\\Sigma}\\,\[\\underline{g} \+\\underline{h}E(r\_q)] \\] Now, \\\[ \\underline{g}\+\\underline{h}E(r\_p)\=\\frac{1}{D}\[B\\underline{\\Sigma}^{\-1}\\underline{1} \-A\\underline{\\Sigma}^{\-1}\\underline{\\mu}]\+\\frac{1}{D}\[C\\underline{\\Sigma}^{\-1}\\underline{\\mu} \-A\\underline{\\Sigma}^{\-1}\\underline{1}\\,]\\underbrace{\[\\lambda\_1B\+\\lambda\_2A]}\_{\\frac{CE(r\_p)\-A}{D/B}\+\\frac{B\-AE(r\_p)}{D/A}} \\] After much simplification: \\\[ \\begin{align} Cov(r\_p,r\_q) \&\= \\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \\\\ \&\= \\frac{C}{D}\\,\[E(r\_p)\-A/C]\[E(r\_q)\-A/C]\+\\frac{1}{C}\\\\ \\\\ \\sigma^2\_p\=Cov(r\_p,r\_p)\&\= \\frac{C}{D}\[E(r\_p)\-A/C]^2\+\\frac{1}{C} \\end{align} \\] Therefore, \\\[ \\;\\frac{\\sigma^2\_p}{1/C}\-\\frac{\[E(r\_p)\-A/C]^2}{D/C^2}\=1 \\] which is the equation of a hyperbola in \\(\\: \\sigma, E(r)\\) space with center \\((0, A/C)\\), or \\\[ \\sigma^2\_p\=\\frac{1}{D}\[CE(r\_p)^2\-2AE(r\_p)\+B], \\] which is a parabola in \\(E(r), \\sigma\\) space.

18\.12 Combinations
-------------------

It is easy to see that linear combinations of portfolios on the frontier will also lie on the frontier. \\\[ \\begin{align} \\sum\_{i\=1}^m \\alpha\_i\\,\\underline{w}\_i \&\= \\sum\_{i\=1}^m \\alpha\_i\[\\,\\underline{g}\+\\underline{h}\\,E(r\_i)]\\\\ \&\= \\underline{g}\+\\underline{h}\\sum\_{i\=1}^m \\alpha\_iE(r\_i) \\\\ \\sum\_{i\=1}^m \\alpha\_i \&\=1 \\end{align} \\]

### 18\.12\.1 Exercise

Carry out the following analyses:

1. Use your R program to do the following. Set \\(E(r\_p)\=0\.10\\) (i.e., a return of 10%), and solve for the optimal portfolio weights for your 3 securities. Call this vector of weights \\(w\_1\\). Next, set \\(E(r\_p)\=0\.20\\) and again solve for the portfolio weights \\(w\_2\\).
2. Take a 50/50 combination of these two portfolios. What are the weights? What is the expected return?
3. For the expected return in the previous part, re\-solve the mean\-variance problem to get the new weights.
4. Compare the weights in part 3 to the ones in part 2 above. Explain your result.

We now turn to a special portfolio of interest, the zero\-covariance portfolio, and we will soon see why it matters. Find \\\[ E(r\_q), \\;s.t. \\; \\; Cov(r\_p,r\_q)\=0 \\] Suppose it exists, then the solution is: \\\[ E(r\_q)\=\\frac{A}{C}\-\\frac{D/C^2}{E(r\_p)\-A/C}\\:\\equiv\\:E(r\_{ZC(p)}) \\] Since \\(ZC(p)\\) exists for all \\(p\\), all frontier portfolios can be formed from \\(p\\) and \\(ZC(p)\\). \\\[ \\begin{align} Cov(r\_p,r\_q) \&\=\\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \\\\ \&\=\\lambda\_1\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma}\\: \\underline{w}\_q \+\\lambda\_2\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma} \\: \\underline{w}\_q \\\\ \&\= \\lambda\_1\\underline{\\mu}'\\underline{w}\_q\+\\lambda\_2\\underline{1}'\\underline{w}\_q\\\\ \&\= \\lambda\_1E(r\_q)\+\\lambda\_2 \\end{align} \\] Substitute in for \\(\\lambda\_1, \\lambda\_2\\) and rearrange to get \\\[ E(r\_q)\=(1\- \\beta\_{qp})E\[r\_{ZC(p)}]\+\\beta\_{qp}E(r\_p) \\] \\\[ \\beta\_{qp}\=\\frac{Cov(r\_q,r\_p)}{\\sigma\_p^2} \\] Therefore, the return on a portfolio can be written in terms of a basic portfolio \\(p\\) and its zero covariance portfolio \\(ZC(p)\\). This suggests a regression relationship, i.e.
\\\[ r\_q \= \\beta\_0 \+ \\beta\_1 r\_{ZC(p)}\+ \\beta\_2 r\_p \+ \\xi \\] which is nothing but a factor model, i.e., one with orthogonal factors.

18\.13 Portfolio problem with riskless assets
---------------------------------------------

We now enhance the portfolio problem to deal with riskless assets. The difference is that the fully\-invested constraint is expanded to include the risk free asset. We require just a single equality constraint. The problem may be specified as follows. \\\[ \\min\_{\\underline{w}} \\quad \\frac{1}{2}\\: \\underline{w}' \\underline{\\Sigma} \\: \\underline{w} \\] \\\[ s.t. \\quad \\underline{w}'\\underline{\\mu}\+(1\-\\underline{w}'\\underline{1}\\,)\\,r\_f\=E(r\_p) \\] The Lagrangian specification of the problem is as follows. \\\[ \\min\_{\\underline{w},\\lambda} \\quad L \= \\frac{1}{2}\\:\\underline{w}'\\underline{\\Sigma} \\: \\underline{w}\+\\lambda\[E(r\_p)\-\\underline{w}'\\underline{\\mu}\-(1\-\\underline{w}'\\underline{1})r\_f] \\] The first\-order conditions for the problem are as follows. \\\[ \\begin{align} \\frac{\\partial L}{\\partial \\underline{w}}\&\= \\underline{\\Sigma} \\: \\underline{w} \- \\lambda \\underline{\\mu}\+\\lambda\\,\\underline{1}\\,r\_f\=\\underline{0}\\\\ \\frac{\\partial L}{\\partial \\lambda}\&\= E(r\_p)\-\\underline{w}'\\underline{\\mu}\-(1\-\\underline{w}'\\underline{1})\\,r\_f\=0 \\end{align} \\] Re\-arranging, and solving for \\(\\underline{w}\\) and \\(\\lambda\\), we get the following manipulations, eventually leading to the desired solution. \\\[ \\begin{align} \\underline{\\Sigma} \\: \\underline{w}\&\= \\lambda(\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ E(r\_p)\-r\_f\&\= \\underline{w}'(\\underline{\\mu}\-\\underline{1}\\:r\_f) \\end{align} \\] Take the first equation and proceed as follows: \\\[ \\begin{align} \\underline{w}\&\= \\lambda \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ E(r\_p)\-r\_f \\equiv (\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{w}\&\= \\lambda (\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ \\end{align} \\] The first and third terms in the equation above then give that \\\[ \\lambda \= \\frac{E(r\_p)\-r\_f}{(\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)} \\] Substituting this back into the first of the first\-order conditions results in the final solution. \\\[ \\underline{w}\=\\underline{\\Sigma}^{\-1}(\\underline{\\mu}\-\\underline{1}\\:r\_f)\\frac{E(r\_p)\-r\_f}{H} \\] \\\[ \\mbox{where} \\quad H\=(\\underline{\\mu}\-r\_f\\underline{1}\\:)'\\underline{\\Sigma}^{\-1}(\\underline{\\mu}\-r\_f\\underline{1}\\:) \\]

### Example

We create a function for the solution to this problem, and then run the model.

```
markowitz2 = function(mu,cv,Er,rf) {
  n = length(mu)
  wuns = matrix(1,n,1)
  x = as.matrix(mu - rf*wuns)
  H = t(x) %*% solve(cv) %*% x
  wts = (solve(cv) %*% x) * (Er-rf)/H[1]
}
```

We run the code here.
```
#PARAMETERS
mu = matrix(c(0.02,0.10,0.20),3,1)
n = length(mu)
cv = matrix(c(0.0001,0,0,0,0.04,0.02,0,0.02,0.16),n,n)
Er = 0.18
rf = 0.01
sol = markowitz2(mu,cv,Er,rf)
print("Wts in stocks")
```

```
## [1] "Wts in stocks"
```

```
print(sol)
```

```
##            [,1]
## [1,] 12.6613704
## [2,]  0.2236842
## [3,]  0.1223932
```

```
print("Wts in risk free asset")
```

```
## [1] "Wts in risk free asset"
```

```
print(1-sum(sol))
```

```
## [1] -12.00745
```

```
print("Exp return")
```

```
## [1] "Exp return"
```

```
print(rf + t(sol) %*% (mu-rf))
```

```
##      [,1]
## [1,] 0.18
```

```
print("Std Dev of return")
```

```
## [1] "Std Dev of return"
```

```
print(sqrt(t(sol) %*% cv %*% sol))
```

```
##           [,1]
## [1,] 0.1467117
```
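Note that only the scale of the risky positions depends on the target return; the relative mix of the risky assets (the tangency portfolio) is pinned down by \\(\\underline{\\Sigma}^{\-1}(\\underline{\\mu}\-\\underline{1}\\:r\_f)\\) alone. The sketch below checks this by reusing `markowitz2` with the same inputs; the alternative target of 0\.05 and the normalization step are added here only for illustration.

```
#WITH A RISKLESS ASSET, THE RISKY MIX IS INDEPENDENT OF THE TARGET RETURN
sol_lo = markowitz2(mu,cv,0.05,rf)   #weights for a lower target return
sol_hi = markowitz2(mu,cv,0.18,rf)   #weights for the original target
print(cbind(sol_lo/sum(sol_lo), sol_hi/sum(sol_hi)))  #same normalized risky mix
print(sol_hi/sol_lo)                 #constant ratio (0.18-rf)/(0.05-rf) = 4.25
```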
We are interested in portfolios of \\(n\\) assets, which have a mean return which we denote as \\(E(r\_p)\\), and a variance, denoted \\(Var(r\_p)\\). Let \\(\\underline{w} \\in R^n\\) be the portfolio weights. What this means is that we allocate each $1 into various assets, such that the total of the weights sums up to 1\. Note that we do not preclude short\-selling, so that it is possible for weights to be negative as well. The optimization problem is defined as follows. We wish to find the portfolio that delivers the minimum variance (risk) while achieving a pre\-specified level of expected (mean) return. \\\[ \\min\_{\\underline{w}} \\quad \\frac{1}{2}\\: \\underline{w}' \\underline{\\Sigma} \\: \\underline{w} \\] subject to \\\[ \\begin{align} \\underline{w}'\\:\\underline{\\mu} \&\= E(r\_p) \\\\ \\underline{w}'\\:\\underline{1} \&\= 1 \\end{align} \\] Note that we have a \\(\\frac{1}{2}\\) in front of the variance term above, which is for mathematical neatness as will become clear shortly. The minimized solution is not affected by scaling the objective function by a constant. The first constraint forces the expected return of the portfolio to a specified mean return, denoted \\(E(r\_p)\\), and the second constraint requires that the portfolio weights add up to 1, also known as the “fully invested” constraint. It is convenient that the constraints are equality constraints. 18\.3 The Solution by Lagrange Multipliers ------------------------------------------ This is a Lagrangian problem, and requires that we embed the constraints into the objective function using Lagragian multipliers \\(\\{\\lambda\_1, \\lambda\_2\\}\\). This results in the following minimization problem: \\\[ \\min\_{\\underline{w}\\, ,\\lambda\_1, \\lambda\_2} \\quad L\=\\frac{1}{2}\\:\\underline{w}'\\underline{\\Sigma} \\:\\underline{w}\+ \\lambda\_1\[E(r\_p)\-\\underline{w}'\\underline{\\mu}]\+\\lambda\_2\[1\-\\underline{w}'\\underline{1}\\;] \\] 18\.4 Optimization ------------------ To minimize this function, we take derivatives with respect to \\(\\underline{w}\\), \\(\\lambda\_1\\), and \\(\\lambda\_2\\), to arrive at the first order conditions: \\\[ \\begin{align} \\frac{\\partial L}{\\partial \\underline{w}} \&\= \\underline{\\Sigma}\\underline{w} \- \\lambda\_1 \\underline{\\mu} \- \\lambda\_2 \\underline{1}\= \\underline{0} \\qquad(1\) \\\\ \\\\ \\frac{\\partial L}{\\partial \\lambda\_1} \&\= E(r\_p)\-\\underline{w}'\\underline{\\mu}\= 0 \\\\ \\\\ \\frac{\\partial L}{\\partial \\lambda\_2} \&\= 1\-\\underline{w}'\\underline{1}\= 0 \\end{align} \\] The first equation above, is a system of \\(n\\) equations, because the derivative is taken with respect to every element of the vector \\(\\underline{w}\\). Hence, we have a total of \\((n\+2\)\\) first\-order conditions. 
From (1\) \\\[ \\begin{align} \\underline{w} \&\= \\Sigma^{\-1}(\\lambda\_1\\underline{\\mu}\+\\lambda\_2\\underline{1}) \\\\ \&\= \\lambda\_1\\Sigma^{\-1}\\underline{\\mu}\+\\lambda\_2\\Sigma^{\-1}\\underline{1} \\quad(2\) \\end{align} \\] Premultiply (2\) by \\(\\underline{\\mu}'\\): \\\[ \\underline{\\mu}'\\underline{w}\=\\lambda\_1\\underbrace{\\,\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{\\mu}\\,}\_B\+ \\lambda\_2\\underbrace{\\,\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{1}\\,}\_A\=E(r\_p) \\] Also premultiply (2\) by \\(\\underline{1}'\\): \\\[\\ \\underline{1}'\\underline{w}\=\\lambda\_1\\underbrace{\\,\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{\\mu}}\_A\+ \\lambda\_2\\underbrace{\\,\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{1}}\_C\=1 \\] Solve for \\(\\lambda\_1, \\lambda\_2\\) \\\[ \\lambda\_1\=\\frac{CE(r\_p)\-A}{D} \\] \\\[ \\lambda\_2\=\\frac{B\-AE(r\_p)}{D} \\] \\\[ \\mbox{where} \\quad D\=BC\-A^2 \\] 18\.5 Notes on the solution --------------------------- Note 1: Since \\(\\underline{\\Sigma}\\) is positive definite, \\(\\underline{\\Sigma}^{\-1}\\) is also positive definite: \\(B\>0, C\>0\\). Note 2: Given solutions for \\(\\lambda\_1, \\lambda\_2\\), we solve for \\(\\underline{w}\\). \\\[ \\underline{w}\=\\underbrace{\\;\\frac{1}{D}\\,\[B\\underline{\\Sigma}^{\-1}\\underline{1} \-A\\underline{\\Sigma}^{\-1}\\underline{\\mu}]}\_{\\underline{g}}\+\\underbrace{\\;\\frac{1}{D }\\,\[C\\underline{\\Sigma}^{\-1}\\underline{\\mu} \- A\\underline{\\Sigma}^{\-1}\\underline{1}\\,]}\_{\\underline{h}}\\cdot E(r\_p) \\] This is the expression for the optimal portfolio weights that minimize the variance for given expected return \\(E(r\_p)\\). We see that the vectors \\(\\underline{g}\\), \\(\\underline{h}\\) are fixed once we are given the inputs to the problem, i.e., \\(\\underline{\\mu}\\) and \\(\\underline{\\Sigma}\\). Note 3: We can vary \\(E(r\_p)\\) to get a set of frontier (efficient or optimal) portfolios \\(\\underline{w}\\). \\\[ \\underline{w}\=\\underline{g}\+\\underline{h}\\,E(r\_p) \\] \\\[ \\begin{align} if \\quad E(r\_p)\&\= 0,\\; \\underline{w} \= \\underline{g} \\\\ if \\quad E(r\_p)\&\= 1,\\; \\underline{w} \= \\underline{g}\+\\underline{h} \\end{align} \\] Note that \\\[ \\underline{w}\=\\underline{g}\+\\underline{h}\\,E(r\_p)\=\[1\-E(r\_p)]\\,\\underline{g}\+E(r\_p)\[\\,\\underline{g}\+\\underline{h}\\:] \\] Hence these 2 portfolios \\(\\underline{g}\\), \\(\\underline{g} \+ \\underline{h}\\) “generate” the entire frontier. 18\.6 The Function ------------------ We create a function to return the optimal portfolio weights. Here is the code for the function to do portfolio optimization: ``` markowitz = function(mu,cv,Er) { n = length(mu) wuns = matrix(1,n,1) A = t(wuns) %*% solve(cv) %*% mu B = t(mu) %*% solve(cv) %*% mu C = t(wuns) %*% solve(cv) %*% wuns D = B*C - A^2 lam = (C*Er-A)/D gam = (B-A*Er)/D wts = lam[1]*(solve(cv) %*% mu) + gam[1]*(solve(cv) %*% wuns) g = (B[1]*(solve(cv) %*% wuns) - A[1]*(solve(cv) %*% mu))/D[1] h = (C[1]*(solve(cv) %*% mu) - A[1]*(solve(cv) %*% wuns))/D[1] wts = g + h*Er } ``` 18\.7 Example ------------- We can enter an example of a mean return vector and the covariance matrix of returns, and then call the function for a given expected return. 
``` #PARAMETERS mu = matrix(c(0.02,0.10,0.20),3,1) n = length(mu) cv = matrix(c(0.0001,0,0,0,0.04,0.02,0,0.02,0.16),n,n) print(mu) ``` ``` ## [,1] ## [1,] 0.02 ## [2,] 0.10 ## [3,] 0.20 ``` ``` print(round(cv,4)) ``` ``` ## [,1] [,2] [,3] ## [1,] 1e-04 0.00 0.00 ## [2,] 0e+00 0.04 0.02 ## [3,] 0e+00 0.02 0.16 ``` The output is the vector of optimal portfolio weights. ``` Er = 0.18 #SOLVE PORTFOLIO PROBLEM wts = markowitz(mu,cv,Er) print(wts) ``` ``` ## [,1] ## [1,] -0.3575931 ## [2,] 0.8436676 ## [3,] 0.5139255 ``` ``` print(sum(wts)) ``` ``` ## [1] 1 ``` ``` print(t(wts) %*% mu) ``` ``` ## [,1] ## [1,] 0.18 ``` ``` print(sqrt(t(wts) %*% cv %*% wts)) ``` ``` ## [,1] ## [1,] 0.2967932 ``` 18\.8 A different expected return --------------------------------- If we change the expected return to 0\.10, then we get a different set of portfolio weights. ``` Er = 0.10 #SOLVE PORTFOLIO PROBLEM wts = markowitz(mu,cv,Er) print(wts) ``` ``` ## [,1] ## [1,] 0.3209169 ## [2,] 0.4223496 ## [3,] 0.2567335 ``` ``` print(t(wts) %*% mu) ``` ``` ## [,1] ## [1,] 0.1 ``` ``` print(sqrt(t(wts) %*% cv %*% wts)) ``` ``` ## [,1] ## [1,] 0.1484205 ``` Note that in the first example, to get a high expected return of 0\.18, we needed to take some leverage, by shorting the low risk asset and going long the medium and high risk assets. When we dropped the expected return to 0\.10, all weights are positive, i.e., we have a long\-only portfolio. 18\.9 Numerical Optimization with Constraints --------------------------------------------- The **quadprog** package is an optimizer that takes a quadratic objective function with linear constraints. Hence, it is exactly what is needed for the mean\-variance portfolio problem we just considered. The advantage of this package is that we can also apply additional inequality constraints. For example, we may not wish to permit short\-sales of any asset, and thereby we might bound all the weights to lie between zero and one. The specification in the **quadprog** package of the problem set up is shown in the manual: ``` Description This routine implements the dual method of Goldfarb and Idnani (1982, 1983) for solving quadratic programming problems of the form min(-d^T b + 1/2 b^T D b) with the constraints A^T b >= b_0. (note: b here is the weights vector in our problem) Usage solve.QP(Dmat, dvec, Amat, bvec, meq=0, factorized=FALSE) Arguments Dmat matrix appearing in the quadratic function to be minimized. dvec vector appearing in the quadratic function to be minimized. Amat matrix defining the constraints under which we want to minimize the quadratic function. bvec vector holding the values of b_0 (defaults to zero). meq the first meq constraints are treated as equality constraints, all further as inequality constraints (defaults to 0). factorized logical flag: if TRUE, then we are passing R^(-1) (where D = R^T R) instead of the matrix D in the argument Dmat. \end{lstlisting} ``` In our problem set up, with three securities, and no short sales, we will have the following **Amat** and **bvec**. The constraints will be modulated by {meq \= 2}, which states that the first two constraints will be equality constraints, and the last three will be greater than equal to constraints. The constraints will be of the form \\(A'w \\geq b\_0\\), i.e., \\\[ \\begin{align} w\_1 \\mu\_1 \+ w\_2 \\mu\_2 \+ w\_3 \\mu\_3 \&\= E(r\_p) \\\\ w\_1 1 \+ w\_2 1 \+ w\_3 1 \&\= 1 \\\\ w\_1 \&\\geq 0\\\\ w\_2 \&\\geq 0\\\\ w\_3 \&\\geq 0 \\end{align} \\] The code for using the package is as follows. 
If we run this code we get the following result for expected return \= 0\.18, with short\-selling allowed. ``` #SOLVING THE PROBLEM WITH THE "quadprog" PACKAGE Er = 0.18 library(quadprog) nss = 0 #Equals 1 if no short sales allowed Bmat = matrix(0,n,n) #No Short sales matrix diag(Bmat) = 1 Amat = matrix(c(mu,1,1,1),n,2) if (nss==1) { Amat = matrix(c(Amat,Bmat),n,2+n) } dvec = matrix(0,n,1) bvec = matrix(c(Er,1),2,1) if (nss==1) { bvec = t(c(bvec,matrix(0,3,1))) } sol = solve.QP(cv,dvec,Amat,bvec,meq=2) print(sol$solution) ``` ``` ## [1] -0.3575931 0.8436676 0.5139255 ``` This is exactly what is obtained from the Markowitz solution. Hence, the model checks out. What if we restricted short\-selling? Then we would get the following solution. ``` #SOLVING THE PROBLEM WITH THE "quadprog" PACKAGE Er = 0.18 library(quadprog) nss = 1 #Equals 1 if no short sales allowed Bmat = matrix(0,n,n) #No Short sales matrix diag(Bmat) = 1 Amat = matrix(c(mu,1,1,1),n,2) if (nss==1) { Amat = matrix(c(Amat,Bmat),n,2+n) } dvec = matrix(0,n,1) bvec = matrix(c(Er,1),2,1) if (nss==1) { bvec = t(c(bvec,matrix(0,3,1))) } sol = solve.QP(cv,dvec,Amat,bvec,meq=2) print(sol$solution) ``` ``` ## [1] 0.0 0.2 0.8 ``` ``` wstar = as.matrix(sol$solution) print(t(wstar) %*% mu) ``` ``` ## [,1] ## [1,] 0.18 ``` ``` print(sqrt(t(wstar) %*% cv %*% wstar)) ``` ``` ## [,1] ## [1,] 0.332265 ``` 18\.10 The Efficient Frontier ----------------------------- Since we can use the Markowitz model to solve for the optimal portfolio weights when the expected return is fixed, we can keep solving for different values of \\(E(r\_p)\\). This will trace out the efficient frontier. The program to do this and plot the frontier is as follows. ``` #TRACING OUT THE EFFICIENT FRONTIER Er_vec = as.matrix(seq(0.01,0.5,0.01)) Sig_vec = matrix(0,50,1) j = 0 for (Er in Er_vec) { j = j+1 wts = markowitz(mu,cv,Er) Sig_vec[j] = sqrt(t(wts) %*% cv %*% wts) } plot(Sig_vec,Er_vec,type='l') ``` ``` print(cbind(Sig_vec,Er_vec)) ``` ``` ## [,1] [,2] ## [1,] 0.021486319 0.01 ## [2,] 0.009997134 0.02 ## [3,] 0.020681789 0.03 ## [4,] 0.038013721 0.04 ## [5,] 0.056141450 0.05 ## [6,] 0.074486206 0.06 ## [7,] 0.092919536 0.07 ## [8,] 0.111397479 0.08 ## [9,] 0.129900998 0.09 ## [10,] 0.148420529 0.10 ## [11,] 0.166950742 0.11 ## [12,] 0.185488436 0.12 ## [13,] 0.204031572 0.13 ## [14,] 0.222578791 0.14 ## [15,] 0.241129149 0.15 ## [16,] 0.259681974 0.16 ## [17,] 0.278236773 0.17 ## [18,] 0.296793176 0.18 ## [19,] 0.315350898 0.19 ## [20,] 0.333909721 0.20 ## [21,] 0.352469471 0.21 ## [22,] 0.371030008 0.22 ## [23,] 0.389591219 0.23 ## [24,] 0.408153014 0.24 ## [25,] 0.426715315 0.25 ## [26,] 0.445278059 0.26 ## [27,] 0.463841194 0.27 ## [28,] 0.482404674 0.28 ## [29,] 0.500968460 0.29 ## [30,] 0.519532521 0.30 ## [31,] 0.538096827 0.31 ## [32,] 0.556661353 0.32 ## [33,] 0.575226080 0.33 ## [34,] 0.593790987 0.34 ## [35,] 0.612356059 0.35 ## [36,] 0.630921280 0.36 ## [37,] 0.649486639 0.37 ## [38,] 0.668052123 0.38 ## [39,] 0.686617722 0.39 ## [40,] 0.705183428 0.40 ## [41,] 0.723749232 0.41 ## [42,] 0.742315127 0.42 ## [43,] 0.760881106 0.43 ## [44,] 0.779447163 0.44 ## [45,] 0.798013292 0.45 ## [46,] 0.816579490 0.46 ## [47,] 0.835145750 0.47 ## [48,] 0.853712070 0.48 ## [49,] 0.872278445 0.49 ## [50,] 0.890844871 0.50 ``` We can also simulate to see how the efficient frontier appears as the outer envelope of candidate portfolios. 
``` #SIMULATE THE EFFICIENT FRONTIER n = 10000 w = matrix(rnorm(2*n),n,2) w = cbind(w,1-rowSums(w)) Exp_ret = w %*% mu Sd_ret = matrix(0,n,1) for (j in 1:n) { wt = as.matrix(w[j,]) Sd_ret[j] = sqrt(t(wt) %*% cv %*% wt) } plot(Sd_ret,Exp_ret,col="red") lines(Sig_vec,Er_vec,col="blue",lwd=6) ``` 18\.11 Covariances of frontier portfolios ----------------------------------------- Suppose we have two portfolios on the efficient frontier with weight vectors \\(\\underline{w}\_p\\) and \\(\\underline{w}\_q\\). The covariance between these two portfolios is: \\\[ Cov(r\_p,r\_q)\=\\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \=\[\\underline{g}\+\\underline{h}E(r\_p)]'\\underline{\\Sigma}\\,\[\\underline{g} \+\\underline{h}E(r\_q)] \\] Now, \\\[ \\underline{g}\+\\underline{h}E(r\_p)\=\\frac{1}{D}\[B\\underline{\\Sigma}^{\-1}\\underline{1} \-A\\underline{\\Sigma}^{\-1}\\underline{\\mu}]\+\\frac{1}{D}\[C\\underline{\\Sigma}^{\-1}\\underline{\\mu} \-A\\underline{\\Sigma}^{\-1}\\underline{1}\\,]\\underbrace{\[\\lambda\_1B\+\\lambda\_2A]}\_{\\frac{CE(r\_p)\-A}{D/B}\+\\frac{B\-AE(r\_p)}{D/B}} \\] After much simplification: \\\[ \\begin{align} Cov(r\_p,r\_q) \&\= \\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q' \\\\ \&\= \\frac{C}{D}\\,\[E(r\_p)\-A/C]\[E(r\_q)\-A/C]\+\\frac{1}{C}\\\\ \\\\ \\sigma^2\_p\=Cov(r\_p,r\_p)\&\= \\frac{C}{D}\[E(r\_p)\-A/C]^2\+\\frac{1}{C} \\end{align} \\] Therefore, \\\[ \\;\\frac{\\sigma^2\_p}{1/C}\-\\frac{\[E(r\_p)\-A/C]^2}{D/C^2}\=1 \\] which is the equation of a hyperbola in \\(\\: \\sigma, E(r)\\) space with center \\((0, A/C)\\), or \\\[ \\sigma^2\_p\=\\frac{1}{D}\[CE(r\_p)^2\-2AE(r\_p)\+B], \\] which is a parabola in \\(E(r), \\sigma\\) space. 18\.12 Combinations ------------------- It is easy to see that linear combinations of portfolios on the frontier will also lie on the frontier. \\\[ \\begin{align} \\sum\_{i\=1}^m \\alpha\_i\\,\\underline{w}\_i \&\= \\sum\_{i\=1}^m \\alpha\_i\[\\,\\underline{g}\+\\underline{h}\\,E(r\_i)]\\\\ \&\= \\underline{g}\+\\underline{h}\\sum\_{i\=1}^m \\alpha\_iE(r\_i) \\\\ \\sum\_{i\=1}^m \\alpha\_i \&\=1 \\end{align} \\] ### 18\.12\.1 Exercise Carry out the following analyses: 1. Use your R program to do the following. Set \\(E(r\_p)\=0\.10\\) (i.e. return of 10%), and solve for the optimal portfolio weights for your 3 securities. Call this vector of weights \\(w\_1\\). Next, set \\(E(r\_p)\=0\.20\\) and again solve for the portfolios weights \\(w\_2\\). 2. Take a 50/50 combination of these two portfolios. What are the weights? What is the expected return? 3. For the expected return in the previous part, resolve the mean\-variance problem to get the new weights? 4. Compare these weights in part 3 to the ones in part 2 above. Explain your result. This is a special portfolio of interest, and we will soon see why. Find \\\[ E(r\_q), \\;s.t. \\; \\; Cov(r\_p,r\_q)\=0 \\] Suppose it exists, then the solution is: \\\[ E(r\_q)\=\\frac{A}{C}\-\\frac{D/C^2}{E(r\_p)\-A/C}\\:\\equiv\\:E(r\_{ZC(p)}) \\] Since \\(ZC(p)\\) exists for all p, all frontier portfolios can be formed from \\(p\\) and \\(ZC(p)\\). 
\\\[ \\begin{align} Cov(r\_p,r\_q) \&\=\\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \\\\ \&\=\\lambda\_1\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma}\\: \\underline{w}\_q \+\\lambda\_2\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma} \\: \\underline{w}\_q \\\\ \&\= \\lambda\_1\\underline{\\mu}'\\underline{w}\_q\+\\lambda\_2\\underline{1}'\\underline{w}\_q\\\\ \&\= \\lambda\_1E(r\_q)\+\\lambda\_2 \\end{align} \\] Substitute in for \\(\\lambda\_1, \\lambda\_2\\) and rearrange to get \\\[ E(r\_q)\=(1\- \\beta\_{qp})E\[r\_{ZC(p)}]\+\\beta\_{qp}E(r\_p) \\] \\\[ \\beta\_{qp}\=\\frac{Cov(r\_q,r\_p)}{\\sigma\_p^2} \\] Therefore, the return on a portfolio can be written in terms of a basic portfolio \\(p\\) and its zero covariance portfolio \\(ZC(p)\\). This suggests a regression relationship, i.e. \\\[ r\_q \= \\beta\_0 \+ \\beta\_1 r\_{ZC(p)}\+ \\beta\_2 r\_p \+ \\xi \\] which is nothing but a factor model, i.e. with orthogonal factors. ### 18\.12\.1 Exercise Carry out the following analyses: 1. Use your R program to do the following. Set \\(E(r\_p)\=0\.10\\) (i.e. return of 10%), and solve for the optimal portfolio weights for your 3 securities. Call this vector of weights \\(w\_1\\). Next, set \\(E(r\_p)\=0\.20\\) and again solve for the portfolios weights \\(w\_2\\). 2. Take a 50/50 combination of these two portfolios. What are the weights? What is the expected return? 3. For the expected return in the previous part, resolve the mean\-variance problem to get the new weights? 4. Compare these weights in part 3 to the ones in part 2 above. Explain your result. This is a special portfolio of interest, and we will soon see why. Find \\\[ E(r\_q), \\;s.t. \\; \\; Cov(r\_p,r\_q)\=0 \\] Suppose it exists, then the solution is: \\\[ E(r\_q)\=\\frac{A}{C}\-\\frac{D/C^2}{E(r\_p)\-A/C}\\:\\equiv\\:E(r\_{ZC(p)}) \\] Since \\(ZC(p)\\) exists for all p, all frontier portfolios can be formed from \\(p\\) and \\(ZC(p)\\). \\\[ \\begin{align} Cov(r\_p,r\_q) \&\=\\underline{w}\_p'\\:\\underline{\\Sigma}\\:\\underline{w}\_q \\\\ \&\=\\lambda\_1\\underline{\\mu}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma}\\: \\underline{w}\_q \+\\lambda\_2\\underline{1}'\\underline{\\Sigma}^{\-1}\\underline{\\Sigma} \\: \\underline{w}\_q \\\\ \&\= \\lambda\_1\\underline{\\mu}'\\underline{w}\_q\+\\lambda\_2\\underline{1}'\\underline{w}\_q\\\\ \&\= \\lambda\_1E(r\_q)\+\\lambda\_2 \\end{align} \\] Substitute in for \\(\\lambda\_1, \\lambda\_2\\) and rearrange to get \\\[ E(r\_q)\=(1\- \\beta\_{qp})E\[r\_{ZC(p)}]\+\\beta\_{qp}E(r\_p) \\] \\\[ \\beta\_{qp}\=\\frac{Cov(r\_q,r\_p)}{\\sigma\_p^2} \\] Therefore, the return on a portfolio can be written in terms of a basic portfolio \\(p\\) and its zero covariance portfolio \\(ZC(p)\\). This suggests a regression relationship, i.e. \\\[ r\_q \= \\beta\_0 \+ \\beta\_1 r\_{ZC(p)}\+ \\beta\_2 r\_p \+ \\xi \\] which is nothing but a factor model, i.e. with orthogonal factors. 18\.13 Portfolio problem with riskless assets --------------------------------------------- We now enhance the portfolio problem to deal with risk less assets. The difference is that the fully\-invested constraint is expanded to include the risk free asset. We require just a single equality constraint. The problem may be specified as follows. \\\[ \\min\_{\\underline{w}} \\quad \\frac{1}{2}\\: \\underline{w}' \\underline{\\Sigma} \\: \\underline{w} \\] \\\[ s.t. 
\\quad \\underline{w}'\\underline{\\mu}\+(1\-\\underline{w}'\\underline{1}\\,)\\,r\_f\=E(r\_p) \\] The Lagrangian specification of the problem is as follows. \\\[ \\min\_{\\underline{w},\\lambda} \\quad L \= \\frac{1}{2}\\:\\underline{w}'\\underline{\\Sigma} \\: \\underline{w}\+\\lambda\[E(r\_p)\-\\underline{w}'\\underline{\\mu}\-(1\-\\underline{w}'\\underline{1})r\_f] \\] The first\-order conditions for the problem are as follows. \\\[ \\begin{align} \\frac{\\partial L}{\\partial \\underline{w}}\&\= \\underline{\\Sigma} \\: \\underline{w} \- \\lambda \\underline{\\mu}\+\\lambda\\,\\underline{1}\\,r\_f\=\\underline{0}\\\\ \\frac{\\partial L}{\\partial \\lambda}\&\= E(r\_p)\-\\underline{w}'\\underline{\\mu}\-(1\-\\underline{w}'\\underline{1})\\,r\_f\=0 \\end{align} \\] Re\-aranging, and solving for \\(\\underline{w}\\) and \\(\\lambda\\), we get the following manipulations, eventually leading to the desired solution. \\\[ \\begin{align} \\underline{\\Sigma} \\: \\underline{w}\&\= \\lambda(\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ E(r\_p)\-r\_f\&\= \\underline{w}'(\\underline{\\mu}\-\\underline{1}\\:r\_f) \\end{align} \\] Take the first equation and proceed as follows: \\\[ \\begin{align} \\underline{w}\&\= \\lambda \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ E(r\_p)\-r\_f \\equiv (\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{w}\&\= \\lambda (\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)\\\\ \\end{align} \\] The first and third terms in the equation above then give that \\\[ \\lambda \= \\frac{E(r\_p)\-r\_f}{(\\underline{\\mu} \- \\underline{1} r\_f)' \\underline{\\Sigma}^{\-1} (\\underline{\\mu}\-\\underline{1}\\:r\_f)} \\] Substituting this back into the first foc results in the final solution. \\\[ \\underline{w}\=\\underline{\\Sigma}^{\-1}(\\underline{\\mu}\-\\underline{1}\\:r\_f)\\frac{E(r\_p)\-r\_f}{H} \\] \\\[ \\mbox{where} \\quad H\=(\\underline{\\mu}\-r\_f\\underline{1}\\:)'\\underline{\\Sigma}^{\-1}(\\underline{\\mu}\-r\_f\\underline{1}\\:) \\] \#\#\# Example We create a function for the solution to this problem, and then run the model. ``` markowitz2 = function(mu,cv,Er,rf) { n = length(mu) wuns = matrix(1,n,1) x = as.matrix(mu - rf*wuns) H = t(x) %*% solve(cv) %*% x wts = (solve(cv) %*% x) * (Er-rf)/H[1] } ``` We run the code here. ``` #PARAMETERS mu = matrix(c(0.02,0.10,0.20),3,1) n = length(mu) cv = matrix(c(0.0001,0,0,0,0.04,0.02,0,0.02,0.16),n,n) Er = 0.18 rf = 0.01 sol = markowitz2(mu,cv,Er,rf) print("Wts in stocks") ``` ``` ## [1] "Wts in stocks" ``` ``` print(sol) ``` ``` ## [,1] ## [1,] 12.6613704 ## [2,] 0.2236842 ## [3,] 0.1223932 ``` ``` print("Wts in risk free asset") ``` ``` ## [1] "Wts in risk free asset" ``` ``` print(1-sum(sol)) ``` ``` ## [1] -12.00745 ``` ``` print("Exp return") ``` ``` ## [1] "Exp return" ``` ``` print(rf + t(sol) %*% (mu-rf)) ``` ``` ## [,1] ## [1,] 0.18 ``` ``` print("Std Dev of return") ``` ``` ## [1] "Std Dev of return" ``` ``` print(sqrt(t(sol) %*% cv %*% sol)) ``` ``` ## [,1] ## [1,] 0.1467117 ```
Chapter 19 Zero or One: Optimal Digital Portfolios
==================================================

This chapter is taken from the published paper “Digital Portfolios”, see S. Das ([2013](#ref-DasDP)).

19\.1 Digital Assets
--------------------

Digital assets are investments with returns that are binary in nature, i.e., they either have a very large or very small payoff. We explore the features of optimal portfolios of digital assets such as venture investments, credit assets, search keyword groups, and lotteries. These portfolios comprise correlated assets with joint Bernoulli distributions. Using a simple, standard, fast recursion technique to generate the return distribution of the portfolio, we derive guidelines on how investors in digital assets may think about constructing their portfolios. We find that digital portfolios are better when they are homogeneous in the size of the assets, but heterogeneous in the success probabilities of the asset components. The return distributions of digital portfolios are highly skewed and fat\-tailed.

A good example of such a portfolio is a venture fund. A simple representation of the payoff to a digital investment is Bernoulli with a large payoff for a successful outcome and a very small (almost zero) payoff for a failed one. The probability of success of digital investments is typically small, in the region of 5–25% for new ventures, see Sarin, Das, and Jagannathan ([2003](#ref-DasJagSarin)). Optimizing portfolios of such investments is therefore not amenable to standard techniques used for mean\-variance optimization. It is also not apparent that the intuitions obtained from the mean\-variance setting carry over to portfolios of Bernoulli assets. For instance, it is interesting to ask, ceteris paribus, whether diversification by increasing the number of assets in the digital portfolio is always a good thing. Since Bernoulli portfolios involve higher moments, how diversification is achieved is by no means obvious. We may also ask whether it is preferable to include assets with as little correlation as possible, or is there a sweet spot for the optimal correlation levels of the assets? Should all the investments be of even size, or is it preferable to take a few large bets and several small ones? And finally, is a mixed portfolio of safe and risky assets preferred to one where the probability of success is more uniform across assets? These are all questions that are of interest to investors in digital\-type portfolios, such as CDO investors, venture capitalists and investors in venture funds.

We will use a method based on standard recursion for modeling the exact return distribution of a Bernoulli portfolio. This method on which we build was first developed by Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)) for generating loss distributions of credit portfolios. We then examine the properties of these portfolios in a stochastic dominance framework to provide guidelines to digital investors. These guidelines are found to be consistent with prescriptions from expected utility optimization. The prescriptions are as follows:

1. Holding all else the same, more digital investments are preferred, meaning, for example, that a venture portfolio should seek to maximize market share.
2. As with mean\-variance portfolios, lower asset correlation is better, unless the digital investor’s payoff depends on the upper tail of returns.
3.
A strategy of a few large bets and many small ones is inferior to one with bets being roughly the same size. 4. And finally, a mixed portfolio of low\-success and high\-success assets is better than one with all assets of the same average success probability level. 19\.2 Modeling Digital Portfolios --------------------------------- Assume that the investor has a choice of \\(n\\) investments in digital assets (e.g., start\-up firms). The investments are indexed \\(i\=1,2, \\ldots, n\\). Each investment has a probability of success that is denoted \\(q\_i\\), and if successful, the payoff returned is \\(S\_i\\) dollars. With probability \\((1\-q\_i)\\), the investment will not work out, the start\-up will fail, and the money will be lost in totality. Therefore, the payoff (cashflow) is \\\[ \\mbox{Payoff} \= C\_i \= \\left\\{ \\begin{array}{cl} S\_i \& \\mbox{with prob } q\_i \\\\ 0 \& \\mbox{with prob } (1\-q\_i) \\end{array} \\right. \\] The specification of the investment as a Bernoulli trial is a simple representation of reality in the case of digital portfolios. This mimics well for example, the case of the venture capital business. Two generalizations might be envisaged. First, we might extend the model to allowing \\(S\_i\\) to be random, i.e., drawn from a range of values. This will complicate the mathematics, but not add much in terms of enriching the model’s results. Second, the failure payoff might be non\-zero, say an amount \\(a\_i\\). Then we have a pair of Bernoulli payoffs \\(\\{S\_i, a\_i\\}\\). Note that we can decompose these investment payoffs into a project with constant payoff \\(a\_i\\) plus another project with payoffs \\(\\{S\_i\-a\_i,0\\}\\), the latter being exactly the original setting where the failure payoff is zero. Hence, the version of the model we solve in this note, with zero failure payoffs, is without loss of generality. Unlike stock portfolios where the choice set of assets is assumed to be multivariate normal, digital asset investments have a joint Bernoulli distribution. Portfolio returns of these investments are unlikely to be Gaussian, and hence higher\-order moments are likely to matter more. In order to generate the return distribution for the portfolio of digital assets, we need to account for the correlations across digital investments. We adopt the following simple model of correlation. Define \\(y\_i\\) to be the performance proxy for the \\(i\\)\-th asset. This proxy variable will be simulated for comparison with a threshold level of performance to determine whether the asset yielded a success or failure. It is defined by the following function, widely used in the correlated default modeling literature, see for example Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)): \\\[ y\_i \= \\rho\_i \\; X \+ \\sqrt{1\-\\rho\_i^2}\\; Z\_i, \\quad i \= 1 \\ldots n \\] where \\(\\rho\_i \\in \[0,1]\\) is a coefficient that correlates threshold \\(y\_i\\) with a normalized common factor \\(X \\sim N(0,1\)\\). The common factor drives the correlations amongst the digital assets in the portfolio. We assume that \\(Z\_i \\sim N(0,1\)\\) and \\(\\mbox{Corr}(X,Z\_i)\=0, \\forall i\\). Hence, the correlation between assets \\(i\\) and \\(j\\) is given by \\(\\rho\_i \\times \\rho\_j\\). Note that the mean and variance of \\(y\_i\\) are: \\(E(y\_i)\=0, Var(y\_i)\=1, \\forall i\\). Conditional on \\(X\\), the values of \\(y\_i\\) are all independent, as \\(\\mbox{Corr}(Z\_i, Z\_j)\=0\\). 
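A small simulation makes the role of the common factor concrete. The sketch below draws many scenarios of \\(y\_1\\) and \\(y\_2\\) for two assets and checks that each \\(y\_i\\) is standard normal and that their sample correlation is close to \\(\\rho\_1 \\times \\rho\_2\\); the particular loadings, seed, and sample size are arbitrary choices for illustration.

```
#SIMULATE THE ONE-FACTOR MODEL FOR TWO ASSETS
set.seed(42)
m = 100000                   #number of simulated scenarios
rho1 = 0.6; rho2 = 0.3       #factor loadings
X  = rnorm(m)                #common factor
y1 = rho1*X + sqrt(1-rho1^2)*rnorm(m)
y2 = rho2*X + sqrt(1-rho2^2)*rnorm(m)
print(c(mean(y1),var(y1)))   #approximately 0 and 1
print(cor(y1,y2))            #approximately rho1*rho2 = 0.18
```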
We now formalize the probability model governing the success or failure of the digital investment. We define a variable \\(x\_i\\), with distribution function \\(F(\\cdot)\\), such that \\(F(x\_i) \= q\_i\\), the probability of success of the digital investment. Conditional on a fixed value of \\(X\\), the probability of success of the \\(i\\)\-th investment is defined as \\\[ p\_i^X \\equiv Pr\[y\_i \< x\_i \| X] \\] Assuming \\(F\\) to be the normal distribution function, we have \\\[ \\begin{align} p\_i^X \&\= Pr \\left\[ \\rho\_i X \+ \\sqrt{1\-\\rho\_i^2}\\; Z\_i \< x\_i \| X \\right] \\nonumber \\\\ \&\= Pr \\left\[ Z\_i \< \\frac{x\_i \- \\rho\_i X}{\\sqrt{1\-\\rho\_i^2}} \| X \\right] \\nonumber \\\\ \&\= \\Phi \\left\[ \\frac{F^{\-1}(q\_i) \- \\rho\_i X}{\\sqrt{1\-\\rho\_i^2}} \\right] \\end{align} \\] where \\(\\Phi(.)\\) is the cumulative normal distribution function. Therefore, given the level of the common factor \\(X\\), asset correlation \\(\\rho\\), and the unconditional success probabilities \\(q\_i\\), we obtain the conditional success probability for each asset \\(p\_i^X\\). As \\(X\\) varies, so does \\(p\_i^X\\). For the numerical examples here we choose the function \\(F(x\_i)\\) to be the cumulative normal probability function.

19\.3 Fast Computation Approach
-------------------------------

We use a fast technique for building up distributions for sums of Bernoulli random variables. In finance, this *recursion* technique was introduced in the credit portfolio modeling literature by Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)). We deem an investment in a digital asset as successful if it achieves its high payoff \\(S\_i\\). The cashflow from the portfolio is a random variable \\(C \= \\sum\_{i\=1}^n C\_i\\). The maximum cashflow that may be generated by the portfolio will be the sum of all digital asset cashflows, because each and every outcome was a success, i.e., \\\[ C\_{max} \= \\sum\_{i\=1}^n \\; S\_i \\] To keep matters simple, we assume that each \\(S\_i\\) is an integer, and that we round off the amounts to the nearest significant digit. So, if the smallest unit we care about is a million dollars, then each \\(S\_i\\) will be in units of integer millions. Recall that, conditional on a value of \\(X\\), the probability of success of digital asset \\(i\\) is given as \\(p\_i^X\\). The recursion technique will allow us to generate the portfolio cashflow probability distribution for each level of \\(X\\). We will then simply compose these conditional (on \\(X\\)) distributions using the marginal distribution for \\(X\\), denoted \\(g(X)\\), into the unconditional distribution for the entire portfolio. Therefore, we define the probability of total cashflow from the portfolio, conditional on \\(X\\), to be \\(f(C \| X)\\). Then, the unconditional cashflow distribution of the portfolio becomes \\\[ f(C) \= \\int\_X \\; f(C \| X) \\cdot g(X)\\; dX \\quad \\quad \\quad (CONV) \\] The distribution \\(f(C \| X)\\) is easily computed numerically as follows. We index the assets with \\(i\=1 \\ldots n\\). The cashflow from all assets taken together will range from zero to \\(C\_{max}\\). Suppose this range is broken into integer buckets, resulting in \\(N\_B\\) buckets in total, each one containing an increasing level of total cashflow. We index these buckets by \\(j\=1 \\ldots N\_B\\), with the cashflow in each bucket equal to \\(B\_j\\).
\\(B\_j\\) represents the total cashflow from all assets (some pay off and some do not), and the buckets comprise the discrete support for the entire distribution of total cashflow from the portfolio. For example, suppose we had 10 assets, each with a payoff of \\(C\_i\=3\\). Then \\(C\_{max}\=30\\). A plausible set of buckets comprising the support of the cashflow distribution would be: \\(\\{0,3,6,9,12,15,18,21,24,27,C\_{max}\\}\\). Define \\(P(k,B\_j)\\) as the probability of bucket \\(j\\)’s cashflow level \\(B\_j\\) if we account for the first \\(k\\) assets. For example, if we had just 3 assets, with payoffs of value 1,3,2 respectively, then we would have 7 buckets, i.e. \\(B\_j\=\\{0,1,2,3,4,5,6\\}\\). After accounting for the first asset, the only possible buckets with positive probability would be \\(B\_j\=0,1\\), and after the first two assets, the buckets with positive probability would be \\(B\_j\=0,1,3,4\\). We begin with the first asset, then the second and so on, and compute the probability of seeing the returns in each bucket. Each probability is given by the following *recursion*: \\\[ P(k\+1,B\_j) \= P(k,B\_j)\\;\[1\-p^X\_{k\+1}] \+ P(k,B\_j \- S\_{k\+1}) \\; p^X\_{k\+1}, \\quad k \= 1, \\ldots, n\-1\. \\quad \\quad (REC) \\] Thus the probability of a total cashflow of \\(B\_j\\) after considering the first \\((k\+1\)\\) firms is equal to the sum of two probability terms. First, the probability of the same cashflow \\(B\_j\\) from the first \\(k\\) firms, given that firm \\((k\+1\)\\) did not succeed. Second, the probability of a cashflow of \\(B\_j \- S\_{k\+1}\\) from the first \\(k\\) firms and the \\((k\+1\)\\)\-st firm does succeed. We start off this recursion from the first asset, after which the \\(N\_B\\) buckets are all of probability zero, except for the bucket with zero cashflow (the first bucket) and the one with \\(S\_1\\) cashflow, i.e., \\\[ \\begin{align} P(1,0\) \&\= 1\-p^X\_1 \\\\ P(1,S\_1\) \&\= p^X\_1 \\end{align} \\] All the other buckets will have probability zero, i.e., \\(P(1,B\_j \\neq \\{0,S\_1\\})\=0\\). With these starting values, we can run the system up from the first asset to the \\(n\\)\-th one by repeated application of equation (**REC**). Finally, we will have the entire distribution \\(P(n,B\_j)\\), conditional on a given value of \\(X\\). We then compose all these distributions that are conditional on \\(X\\) into one single cashflow distribution using equation (**CONV**). This is done by numerically integrating over all values of \\(X\\). ``` library(pspline) #Library for Digital Portfolio Analysis #Copyright, Sanjiv Das, Dec 1, 2008. #------------------------------------------------------------ #Function to implement the Andersen-Sidenius-Basu (Risk, 2003) #recursion algorithm. Note that the probabilities are fixed, #i.e. conditional on a given level of factor. The full blown #distribution comes from the integral over all levels of the factor. 
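#The payoffs w must be positive integers, since they index the cashflow
#buckets directly. The function returns a (sum(w)+1) x 2 matrix: column 1
#is the cashflow bucket (0,1,...,sum(w)) and column 2 its probability.
#The double loop below makes the cost of the recursion O(N x maxloss).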
#INPUTS (example)
#w = c(1,7,3,2)               #Loss weights
#p = c(0.05, 0.2, 0.03, 0.1)  #Loss probabilities

asbrec = function(w,p) {
  #BASIC SET UP
  N = length(w)
  maxloss = sum(w)
  bucket = c(0,seq(maxloss))
  LP = matrix(0,N,maxloss+1)   #probability grid over losses
  #DO FIRST FIRM
  LP[1,1] = 1-p[1];
  LP[1,w[1]+1] = p[1];
  #LOOP OVER REMAINING FIRMS
  for (i in seq(2,N)) {
    for (j in seq(maxloss+1)) {
      LP[i,j] = LP[i-1,j]*(1-p[i])
      if (bucket[j]-w[i] >= 0) {
        LP[i,j] = LP[i,j] + LP[i-1,j-w[i]]*p[i]
      }
    }
  }
  #FINISH UP
  lossprobs = LP[N,]
  #print(t(LP))
  #print(c("Sum of final probs = ",sum(lossprobs)))
  result = matrix(c(bucket,lossprobs),(maxloss+1),2)
}
#END ASBREC
```

We use this function in the following example.

```
#EXAMPLE
w = c(1,7,3,2)
p = c(0.05, 0.2, 0.03, 0.1)
res = asbrec(w,p)
print(res)
```

```
##       [,1]    [,2]
##  [1,]    0 0.66348
##  [2,]    1 0.03492
##  [3,]    2 0.07372
##  [4,]    3 0.02440
##  [5,]    4 0.00108
##  [6,]    5 0.00228
##  [7,]    6 0.00012
##  [8,]    7 0.16587
##  [9,]    8 0.00873
## [10,]    9 0.01843
## [11,]   10 0.00610
## [12,]   11 0.00027
## [13,]   12 0.00057
## [14,]   13 0.00003
```

```
barplot(res[,2],names.arg=res[,1],col=2)
```

Here is a second example. Inside the function, each pass of the recursion fills one row of the grid `LP`; since there are five assets, there are five passes, and the final row (the final column of `t(LP)`, which the commented\-out `print(t(LP))` statement would display) is the distribution printed below.

```
#EXAMPLE
w = c(5,8,4,2,1)
p = array(1/length(w),length(w))
res = asbrec(w,p)
print(res)
```

```
##       [,1]    [,2]
##  [1,]    0 0.32768
##  [2,]    1 0.08192
##  [3,]    2 0.08192
##  [4,]    3 0.02048
##  [5,]    4 0.08192
##  [6,]    5 0.10240
##  [7,]    6 0.04096
##  [8,]    7 0.02560
##  [9,]    8 0.08704
## [10,]    9 0.04096
## [11,]   10 0.02560
## [12,]   11 0.01024
## [13,]   12 0.02176
## [14,]   13 0.02560
## [15,]   14 0.01024
## [16,]   15 0.00640
## [17,]   16 0.00128
## [18,]   17 0.00512
## [19,]   18 0.00128
## [20,]   19 0.00128
## [21,]   20 0.00032
```

```
barplot(res[,2],names.arg=res[,1],col=2)
```

We can explore these recursion calculations in some detail as follows. Note that in our example \\(p\_i \= 0\.2, i \= 1,2,3,4,5\\). We are interested in computing \\(P(k,B)\\), where \\(k\\) denotes the \\(k\\)\-th recursion pass, and \\(B\\) denotes the return bucket. Recall that we have five assets with return levels of \\(\\{5,8,4,2,1\\}\\), respectively. After \\(i\=1\\), we have \\\[ \\begin{align} P(1,0\) \&\= (1\-p\_1\) \= 0\.8\\\\ P(1,5\) \&\= p\_1 \= 0\.2\\\\ P(1,j) \&\= 0, j \\neq \\{0,5\\} \\end{align} \\] This completes the first recursion pass. (These values form the first column of `t(LP)` inside the function; the result printed above shows only the final pass, with column 1 holding the return buckets and column 2 the final probabilities.) We now move on to the calculations needed for the second pass in the recursion; the short sketch below prints the full pass\-by\-pass grid so that the hand calculations can be checked against it.
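To make the pass\-by\-pass probabilities visible, the sketch below simply repeats the interior of `asbrec` for this five\-asset example and prints the transposed `LP` grid; this is the same output that un\-commenting the `print(t(LP))` line inside the function would produce.

```
#VERIFY THE HAND CALCULATIONS: rebuild the probability grid LP for the
#five-asset example and print it with one column per recursion pass
w = c(5,8,4,2,1); p = array(0.2,length(w))
N = length(w); maxloss = sum(w); bucket = c(0,seq(maxloss))
LP = matrix(0,N,maxloss+1)
LP[1,1] = 1-p[1]; LP[1,w[1]+1] = p[1]
for (i in seq(2,N)) {
  for (j in seq(maxloss+1)) {
    LP[i,j] = LP[i-1,j]*(1-p[i])
    if (bucket[j]-w[i] >= 0) { LP[i,j] = LP[i,j] + LP[i-1,j-w[i]]*p[i] }
  }
}
print(round(t(LP),4))   #rows are buckets 0..20, columns are passes 1..5
```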
\\\[ \\begin{align} P(2,0\) \&\= P(1,0\)(1\-p\_2\) \= 0\.64\\\\ P(2,5\) \&\= P(1,5\)(1\-p\_2\) \+ P(1,5\-8\) p\_2 \= 0\.2 (0\.8\) \+ 0 (0\.2\) \= 0\.16\\\\ P(2,8\) \&\= P(1,8\) (1\-p\_2\) \+ P(1,8\-8\) p\_2 \= 0 (0\.8\) \+ 0\.8 (0\.2\) \= 0\.16\\\\ P(2,13\) \&\= P(1,13\)(1\-p\_2\) \+ P(1,13\-8\) p\_2 \= 0 (0\.8\) \+ 0\.2 (0\.2\) \= 0\.04\\\\ P(2,j) \&\= 0, j \\neq \\{0,5,8,13\\} \\end{align} \\] The third recursion pass is as follows: \\\[ \\begin{align} P(3,0\) \&\= P(2,0\)(1\-p\_3\) \= 0\.512\\\\ P(3,4\) \&\= P(2,4\)(1\-p\_3\) \+ P(2,4\-4\) p\_3 \= 0(0\.8\) \+ 0\.64(0\.2\) \= 0\.128\\\\ P(3,5\) \&\= P(2,5\)(1\-p\_3\) \+ P(2,5\-4\) p\_3 \= 0\.16 (0\.8\) \+ 0 (0\.2\) \= 0\.128\\\\ P(3,8\) \&\= P(2,8\) (1\-p\_3\) \+ P(2,8\-4\) p\_3 \= 0\.16 (0\.8\) \+ 0 (0\.2\) \= 0\.128\\\\ P(3,9\) \&\= P(2,9\) (1\-p\_3\) \+ P(2,9\-4\) p\_3 \= 0 (0\.8\) \+ 0\.16 (0\.2\) \= 0\.032\\\\ P(3,12\) \&\= P(2,12\) (1\-p\_3\) \+ P(2,12\-4\) p\_3 \= 0 (0\.8\) \+ 0\.16 (0\.2\) \= 0\.032\\\\ P(3,13\) \&\= P(2,13\) (1\-p\_3\) \+ P(2,13\-4\) p\_3 \= 0\.04 (0\.8\) \+ 0 (0\.2\) \= 0\.032\\\\ P(3,17\) \&\= P(2,17\) (1\-p\_3\) \+ P(2,17\-4\) p\_3 \= 0 (0\.8\) \+ 0\.04 (0\.2\) \= 0\.008\\\\ P(3,j) \&\= 0, j \\neq \\{0,4,5,8,9,12,13,17\\} \\end{align} \\] Note that the same computations work even when the outcomes are not of equal probability. Let’s do one more example.

```
#ONE FINAL EXAMPLE
#----------MAIN CALLING SEGMENT------------------
w = c(5,2,4,2,8,1,9)
p = array(0.2,length(w))
res = asbrec(w,p)
print(res)
```

```
##       [,1]      [,2]
##  [1,]    0 0.2097152
##  [2,]    1 0.0524288
##  [3,]    2 0.1048576
##  [4,]    3 0.0262144
##  [5,]    4 0.0655360
##  [6,]    5 0.0688128
##  [7,]    6 0.0393216
##  [8,]    7 0.0327680
##  [9,]    8 0.0622592
## [10,]    9 0.0827392
## [11,]   10 0.0434176
## [12,]   11 0.0393216
## [13,]   12 0.0245760
## [14,]   13 0.0344064
## [15,]   14 0.0272384
## [16,]   15 0.0180224
## [17,]   16 0.0106496
## [18,]   17 0.0198656
## [19,]   18 0.0086016
## [20,]   19 0.0092160
## [21,]   20 0.0036864
## [22,]   21 0.0047104
## [23,]   22 0.0045568
## [24,]   23 0.0025088
## [25,]   24 0.0020480
## [26,]   25 0.0006144
## [27,]   26 0.0010752
## [28,]   27 0.0002560
## [29,]   28 0.0004096
## [30,]   29 0.0001024
## [31,]   30 0.0000512
## [32,]   31 0.0000128
```

```
print(sum(res[,2]))
```

```
## [1] 1
```

```
barplot(res[,2],names.arg=res[,1],col=4)
```

19\.4 Combining conditional distributions
-----------------------------------------

We now demonstrate how we will integrate the conditional probability distributions \\(p^X\\) into an unconditional probability distribution of outcomes, denoted \\\[ p \= \\int\_X p^X g(X) \\; dX, \\] where \\(g(X)\\) is the density function of the state variable \\(X\\). We create a function to combine the conditional distribution functions. This function calls the **asbrec** function that we had used earlier.

```
#---------------------------
#FUNCTION TO COMPUTE FULL RETURN DISTRIBUTION
#INTEGRATES OVER X BY CALLING ASBREC.R
digiprob = function(L,q,rho) {
  #Note: L,q same as w,p from before
  dx = 0.1
  x = seq(-40,40)*dx
  fx = dnorm(x)*dx
  fx = fx/sum(fx)
  maxloss = sum(L)
  bucket = c(0,seq(maxloss))
  totp = array(0,(maxloss+1))
  for (i in seq(length(x))) {
    p = pnorm((qnorm(q)-rho*x[i])/sqrt(1-rho^2))
    ldist = asbrec(L,p)
    totp = totp + ldist[,2]*fx[i]
  }
  result = matrix(c(bucket,totp),(maxloss+1),2)
}
```

Note that now we will use the unconditional probabilities of success for each asset, and correlate them with a specified correlation level. We run this with two correlation levels, \\(\\rho \= 0\.25\\) and \\(\\rho \= 0\.75\\).
```
#------INTEGRATE OVER CONDITIONAL DISTRIBUTIONS----
w = c(5,8,4,2,1)
q = c(0.1,0.2,0.1,0.05,0.15)
rho = 0.25
res1 = digiprob(w,q,rho)
rho = 0.75
res2 = digiprob(w,q,rho)
par(mfrow=c(2,1))
barplot(res1[,2],names.arg=res1[,1],xlab="portfolio value",
        ylab="probability",main="rho = 0.25")
barplot(res2[,2],names.arg=res2[,1],xlab="portfolio value",
        ylab="probability",main="rho = 0.75")
```

```
cbind(res1,res2)
```

```
##       [,1]         [,2] [,3]        [,4]
##  [1,]    0 0.5391766174    0 0.666318464
##  [2,]    1 0.0863707325    1 0.046624312
##  [3,]    2 0.0246746918    2 0.007074104
##  [4,]    3 0.0049966420    3 0.002885901
##  [5,]    4 0.0534700675    4 0.022765422
##  [6,]    5 0.0640540228    5 0.030785967
##  [7,]    6 0.0137226107    6 0.009556413
##  [8,]    7 0.0039074039    7 0.002895774
##  [9,]    8 0.1247287209    8 0.081172499
## [10,]    9 0.0306776806    9 0.029154885
## [11,]   10 0.0086979993   10 0.008197488
## [12,]   11 0.0021989842   11 0.004841742
## [13,]   12 0.0152035638   12 0.014391319
## [14,]   13 0.0186144920   13 0.023667222
## [15,]   14 0.0046389439   14 0.012776165
## [16,]   15 0.0013978502   15 0.006233366
## [17,]   16 0.0003123473   16 0.004010559
## [18,]   17 0.0022521668   17 0.005706283
## [19,]   18 0.0006364672   18 0.010008267
## [20,]   19 0.0002001003   19 0.002144265
## [21,]   20 0.0000678949   20 0.008789582
```

The left column of probabilities has correlation of \\(\\rho\=0\.25\\) and the right one is the case when \\(\\rho\=0\.75\\). We see that the probabilities on the right are lower for low outcomes (except zero) and higher for high outcomes. Why?

19\.5 Stochastic Dominance (SD)
-------------------------------

SD is an ordering over probabilistic bundles. We may want to know if one VC’s portfolio dominates another in a risk\-adjusted sense. Different SD concepts apply to answer this question. For example if portfolio \\(A\\) does better than portfolio \\(B\\) in every state of the world, it clearly dominates. This is called **state\-by\-state** dominance, and is hardly ever encountered. Hence, we briefly examine two more common types of SD.

1. First\-order Stochastic Dominance (FSD): For cumulative distribution function \\(F(X)\\) over states \\(X\\), portfolio \\(A\\) dominates \\(B\\) if \\(\\mbox{Prob}(A \\geq k) \\geq \\mbox{Prob}(B \\geq k)\\) for all states \\(k \\in X\\), and \\(\\mbox{Prob}(A \\geq k) \> \\mbox{Prob}(B \\geq k)\\) for some \\(k\\). It is the same as \\(\\mbox{Prob}(A \\leq k) \\leq \\mbox{Prob}(B \\leq k)\\) for all states \\(k \\in X\\), and \\(\\mbox{Prob}(A \\leq k) \< \\mbox{Prob}(B \\leq k)\\) for some \\(k\\). This implies that \\(F\_A(k) \\leq F\_B(k)\\). The mean outcome under \\(A\\) will be higher than under \\(B\\), and all increasing utility functions will give higher utility for \\(A\\). This is a weaker notion of dominance than state\-wise, but also not as often encountered in practice.
2. Second\-order Stochastic Dominance (SSD): Here the portfolios have the same mean but the risk is less for portfolio \\(A\\). In this case portfolio \\(B\\) is a **mean\-preserving spread** of portfolio \\(A\\). Technically this is the same as \\(\\int\_{\-\\infty}^k \[F\_A(X) \- F\_B(X)] \\; dX \\leq 0\\) for all \\(k\\) (strictly for some \\(k\\)), and \\(\\int\_X X dF\_A(X) \= \\int\_X X dF\_B(X)\\). Mean\-variance models in which portfolios on the efficient frontier dominate those below are a special case of SSD.

See the examples below: in the first, \\(A\\) has a higher mean with the same risk, so FSD holds; in the second, the means are equal and \\(A\\) has lower risk, so there is no FSD, but there is SSD.
``` #FIRST_ORDER SD x = seq(-4,4,0.1) F_B = pnorm(x,mean=0,sd=1); F_A = pnorm(x,mean=0.25,sd=1); F_A-F_B #FSD exists ``` ``` ## [1] -2.098272e-05 -3.147258e-05 -4.673923e-05 -6.872414e-05 -1.000497e-04 ## [6] -1.442118e-04 -2.058091e-04 -2.908086e-04 -4.068447e-04 -5.635454e-04 ## [11] -7.728730e-04 -1.049461e-03 -1.410923e-03 -1.878104e-03 -2.475227e-03 ## [16] -3.229902e-03 -4.172947e-03 -5.337964e-03 -6.760637e-03 -8.477715e-03 ## [21] -1.052566e-02 -1.293895e-02 -1.574810e-02 -1.897740e-02 -2.264252e-02 ## [26] -2.674804e-02 -3.128519e-02 -3.622973e-02 -4.154041e-02 -4.715807e-02 ## [31] -5.300548e-02 -5.898819e-02 -6.499634e-02 -7.090753e-02 -7.659057e-02 ## [36] -8.191019e-02 -8.673215e-02 -9.092889e-02 -9.438507e-02 -9.700281e-02 ## [41] -9.870633e-02 -9.944553e-02 -9.919852e-02 -9.797262e-02 -9.580405e-02 ## [46] -9.275614e-02 -8.891623e-02 -8.439157e-02 -7.930429e-02 -7.378599e-02 ## [51] -6.797210e-02 -6.199648e-02 -5.598646e-02 -5.005857e-02 -4.431528e-02 ## [56] -3.884257e-02 -3.370870e-02 -2.896380e-02 -2.464044e-02 -2.075491e-02 ## [61] -1.730902e-02 -1.429235e-02 -1.168461e-02 -9.458105e-03 -7.580071e-03 ## [66] -6.014807e-03 -4.725518e-03 -3.675837e-03 -2.831016e-03 -2.158775e-03 ## [71] -1.629865e-03 -1.218358e-03 -9.017317e-04 -6.607827e-04 -4.794230e-04 ## [76] -3.443960e-04 -2.449492e-04 -1.724935e-04 -1.202675e-04 -8.302381e-05 ## [81] -5.674604e-05 ``` ``` #SECOND_ORDER SD x = seq(-4,4,0.1) F_B = pnorm(x,mean=0,sd=2); F_A = pnorm(x,mean=0,sd=1); print(F_A-F_B) #No FSD ``` ``` ## [1] -0.02271846 -0.02553996 -0.02864421 -0.03204898 -0.03577121 ## [6] -0.03982653 -0.04422853 -0.04898804 -0.05411215 -0.05960315 ## [11] -0.06545730 -0.07166345 -0.07820153 -0.08504102 -0.09213930 ## [16] -0.09944011 -0.10687213 -0.11434783 -0.12176261 -0.12899464 ## [21] -0.13590512 -0.14233957 -0.14812981 -0.15309708 -0.15705611 ## [26] -0.15982015 -0.16120699 -0.16104563 -0.15918345 -0.15549363 ## [31] -0.14988228 -0.14229509 -0.13272286 -0.12120570 -0.10783546 ## [36] -0.09275614 -0.07616203 -0.05829373 -0.03943187 -0.01988903 ## [41] 0.00000000 0.01988903 0.03943187 0.05829373 0.07616203 ## [46] 0.09275614 0.10783546 0.12120570 0.13272286 0.14229509 ## [51] 0.14988228 0.15549363 0.15918345 0.16104563 0.16120699 ## [56] 0.15982015 0.15705611 0.15309708 0.14812981 0.14233957 ## [61] 0.13590512 0.12899464 0.12176261 0.11434783 0.10687213 ## [66] 0.09944011 0.09213930 0.08504102 0.07820153 0.07166345 ## [71] 0.06545730 0.05960315 0.05411215 0.04898804 0.04422853 ## [76] 0.03982653 0.03577121 0.03204898 0.02864421 0.02553996 ## [81] 0.02271846 ``` ``` cumsum(F_A-F_B) ``` ``` ## [1] -2.271846e-02 -4.825842e-02 -7.690264e-02 -1.089516e-01 -1.447228e-01 ## [6] -1.845493e-01 -2.287779e-01 -2.777659e-01 -3.318781e-01 -3.914812e-01 ## [11] -4.569385e-01 -5.286020e-01 -6.068035e-01 -6.918445e-01 -7.839838e-01 ## [16] -8.834239e-01 -9.902961e-01 -1.104644e+00 -1.226407e+00 -1.355401e+00 ## [21] -1.491306e+00 -1.633646e+00 -1.781776e+00 -1.934873e+00 -2.091929e+00 ## [26] -2.251749e+00 -2.412956e+00 -2.574002e+00 -2.733185e+00 -2.888679e+00 ## [31] -3.038561e+00 -3.180856e+00 -3.313579e+00 -3.434785e+00 -3.542620e+00 ## [36] -3.635376e+00 -3.711538e+00 -3.769832e+00 -3.809264e+00 -3.829153e+00 ## [41] -3.829153e+00 -3.809264e+00 -3.769832e+00 -3.711538e+00 -3.635376e+00 ## [46] -3.542620e+00 -3.434785e+00 -3.313579e+00 -3.180856e+00 -3.038561e+00 ## [51] -2.888679e+00 -2.733185e+00 -2.574002e+00 -2.412956e+00 -2.251749e+00 ## [56] -2.091929e+00 -1.934873e+00 -1.781776e+00 -1.633646e+00 -1.491306e+00 
## [61] -1.355401e+00 -1.226407e+00 -1.104644e+00 -9.902961e-01 -8.834239e-01 ## [66] -7.839838e-01 -6.918445e-01 -6.068035e-01 -5.286020e-01 -4.569385e-01 ## [71] -3.914812e-01 -3.318781e-01 -2.777659e-01 -2.287779e-01 -1.845493e-01 ## [76] -1.447228e-01 -1.089516e-01 -7.690264e-02 -4.825842e-02 -2.271846e-02 ## [81] 1.353084e-16 ``` 19\.6 Portfolio Characteristics ------------------------------- Armed with this established machinery, there are several questions an investor (e.g. a VC) in a digital portfolio may pose. First, is there an optimal number of assets, i.e., ceteris paribus, are more assets better than fewer assets, assuming no span of control issues? Second, are Bernoulli portfolios different from mean\-variances ones, in that is it always better to have less asset correlation than more correlation? Third, is it better to have an even weighting of investment across the assets or might it be better to take a few large bets amongst many smaller ones? Fourth, is a high dispersion of probability of success better than a low dispersion? These questions are very different from the ones facing investors in traditional mean\-variance portfolios. We shall examine each of these questions in turn. 19\.7 How many assets? ---------------------- With mean\-variance portfolios, keeping the mean return of the portfolio fixed, more securities in the portfolio is better, because diversification reduces the variance of the portfolio. Also, with mean\-variance portfolios, higher\-order moments do not matter. But with portfolios of Bernoulli assets, increasing the number of assets might exacerbate higher\-order moments, even though it will reduce variance. Therefore it may not be worthwhile to increase the number of assets (\\(n\\)) beyond a point. In order to assess this issue we conducted the following experiment. We invested in \\(n\\) assets each with payoff of \\(1/n\\). Hence, if all assets succeed, the total (normalized) payoff is 1\. This normalization is only to make the results comparable across different \\(n\\), and is without loss of generality. We also assumed that the correlation parameter is \\(\\rho\_i \= 0\.25\\), for all \\(i\\). To make it easy to interpret the results, we assumed each asset to be identical with a success probability of \\(q\_i\=0\.05\\) for all \\(i\\). Using the recursion technique, we computed the probability distribution of the portfolio payoff for four values of \\(n \= \\{25,50,75,100\\}\\). The distribution function is plotted below. There are 4 plots, one for each \\(n\\), and if we look at the bottom left of the plot, the leftmost line is for \\(n\=100\\). The next line to the right is for \\(n\=75\\), and so on. One approach to determining if greater \\(n\\) is better for a digital portfolio is to investigate if a portfolio of \\(n\\) assets stochastically dominates one with less than \\(n\\) assets. On examination of the shapes of the distribution functions for different \\(n\\), we see that it is likely that as \\(n\\) increases, we obtain portfolios that exhibit second\-order stochastic dominance (SSD) over portfolios with smaller \\(n\\). The return distribution when \\(n\=100\\) (denoted \\(G\_{100}\\)) would dominate that for \\(n\=25\\) (denoted \\(G\_{25}\\)) in the SSD sense, if \\(\\int\_x x \\; dG\_{100}(x) \= \\int\_x x \\; dG\_{25}(x)\\), and \\(\\int\_0^u \[G\_{100}(x) \- G\_{25}(x)]\\; dx \\leq 0\\) for all \\(u \\in (0,1\)\\). 
That is, \\(G\_{25}\\) is a mean\-preserving spread of \\(G\_{100}\\); equivalently, \\(G\_{100}\\) has the same mean as \\(G\_{25}\\) but lower risk, which implies superior mean\-variance efficiency. To show this we plotted the integral \\(\\int\_0^u \[G\_{100}(x) \- G\_{25}(x)] \\; dx\\) and checked the SSD condition. We found that this condition is satisfied (see the plot of the integrated difference below). As is known, SSD implies mean\-variance efficiency as well.

We also examine if higher \\(n\\) portfolios are better for a power utility investor with utility function, \\(U(C) \= \\frac{(0\.1 \+ C)^{1\-\\gamma}}{1\-\\gamma}\\), where \\(C\\) is the normalized total payoff of the Bernoulli portfolio. Expected utility is given by \\(\\sum\_C U(C)\\; f(C)\\). We set the risk aversion coefficient to \\(\\gamma\=3\\) which is in the standard range in the asset\-pricing literature. The code below reports the results. We can see that the expected utility increases monotonically with \\(n\\). Hence, for a power utility investor, having more assets is better than fewer, keeping the mean return of the portfolio constant. Economically, in the specific case of VCs, this highlights the goal of trying to capture a larger share of the number of available ventures. The results from the SSD analysis are consistent with those of expected power utility.

```
#CHECK WHAT HAPPENS WHEN NUMBER OF ASSETS/ISSUERS INCREASES
#Result: No ordering with SSD, utility better with more names
#source("number_names.R")
#SECOND-ORDER STOCH DOMINANCE (SSD): GREATER num_names IS BETTER
num_names = c(25,50,75,100)
each_loss = 1
each_prob = 0.05
rho = 0.5^2
gam = 3
for (j in seq(4)) {
L = array(each_loss,num_names[j])
q = array(each_prob,num_names[j])
res = digiprob(L,q,rho)
rets = res[,1]/num_names[j]
probs = res[,2]
cumprobs = array(0,length(res[,2]))
cumprobs[1] = probs[1]
for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] }
if (j==1) {
plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability")
rets1 = rets
cumprobs1 = cumprobs
utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs)
}
if (j==2) {
lines(rets,cumprobs,type="l",col="Red")
utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs)
}
if (j==3) {
lines(rets,cumprobs,type="l",col="Green")
utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs)
}
if (j==4) {
lines(rets,cumprobs,type="l",col="Blue")
rets4 = rets
cumprobs4 = cumprobs
utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs)
}
mn = sum(rets*probs)
idx = which(rets>0.03); p03 = sum(probs[idx])
idx = which(rets>0.07); p07 = sum(probs[idx])
idx = which(rets>0.10); p10 = sum(probs[idx])
idx = which(rets>0.15); p15 = sum(probs[idx])
print(c(mn,p03,p07,p10,p15))
print(c("Utility = ",utility))
}
```

```
## [1] 0.04999545 0.66546862 0.34247245 0.15028422 0.05924270
## [1] "Utility = " "-29.2593289535026"
## [1] 0.04999545 0.63326734 0.25935510 0.08448287 0.02410500
## [1] "Utility = " "-26.7549907254343"
## [1] 0.04999545 0.61961559 0.22252474 0.09645862 0.01493276
## [1] "Utility = " "-25.8764941625812"
## [1] 0.04999545 0.61180443 0.20168330 0.07267614 0.01109592
## [1] "Utility = " "-25.433466221872"
```

We now look at stochastic dominance.
``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE FLAT WEIGHTS BETTER THAN RISING WEIGHTS fit = sm.spline(rets1,cumprobs1) cumprobs1 = predict(fit,rets4) plot(rets4,cumprobs1-matrix(cumprobs4),type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs4 = matrix(cumprobs4,length(cumprobs4),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs4[1:j]-cumprobs1[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. ``` ``` ## [1] "Max ssd = " "-0.295083435837737" ``` ``` plot(rets4,ssd,type="l",xlab="Normalized total payoff",ylab="Integrated F(G100) minus F(G25)") ``` 19\.8 The impact of correlation ------------------------------- As with mean\-variance portfolios, we expect that increases in payoff correlation for Bernoulli assets will adversely impact portfolios. In order to verify this intuition we analyzed portfolios keeping all other variables the same, but changing correlation. In the previous subsection, we set the parameter for correlation to be \\(\\rho \= 0\.25\\). Here, we examine four levels of the correlation parameter: \\(\\rho\=\\{0\.09, 0\.25, 0\.49, 0\.81\\}\\). For each level of correlation, we computed the normalized total payoff distribution. The number of assets is kept fixed at \\(n\=25\\) and the probability of success of each digital asset is \\(0\.05\\) as before. The results are shown in the Figures below where the probability distribution function of payoffs is shown for all four correlation levels. We find that the SSD condition is met, i.e., that lower correlation portfolios stochastically dominate (in the SSD sense) higher correlation portfolios. We also examined changing correlation in the context of a power utility investor with the same utility function as in the previous subsection. See results from the code below. We confirm that, as with mean\-variance portfolios, Bernoulli portfolios also improve if the assets have low correlation. Hence, digital investors should also optimally attempt to diversify their portfolios. Insurance companies are a good example—they diversify risk across geographical and other demographic divisions. 
``` #CHECK WHAT HAPPENS WHEN RHO INCREASES #Result: No ordering with SSD, Lower rho is better in the utility metric #source("change_rho.R") num_names = 25 each_loss = 1 each_prob = 0.05 rho = c(0.3,0.5,0.7,0.9)^2 gam = 3 for (j in seq(4)) { L = array(each_loss,num_names) q = array(each_prob,num_names) res = digiprob(L,q,rho[j]) rets = res[,1]/num_names probs = res[,2] cumprobs = array(0,length(res[,2])) cumprobs[1] = probs[1] for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] } if (j==1) { plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability") cumprobs1 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==2) { lines(rets,cumprobs,type="l",col="Red") utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==3) { lines(rets,cumprobs,type="l",col="Green") utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==4) { lines(rets,cumprobs,type="l",col="Blue") cumprobs2 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } mn = sum(rets*probs) idx = which(rets>0.03); p03 = sum(probs[idx]) idx = which(rets>0.07); p07 = sum(probs[idx]) idx = which(rets>0.10); p10 = sum(probs[idx]) idx = which(rets>0.15); p15 = sum(probs[idx]) print(c(mn,p03,p07,p10,p15)) print(c("Utility = ",utility)) } ``` ``` ## [1] 0.04999940 0.71474772 0.35589301 0.13099315 0.03759002 ## [1] "Utility = " "-28.1122419295505" ## [1] 0.04999545 0.66546862 0.34247245 0.15028422 0.05924270 ## [1] "Utility = " "-29.2593289535026" ## [1] 0.04998484 0.53141370 0.29432093 0.16957177 0.10034508 ## [1] "Utility = " "-32.6682122683527" ## [1] 0.04997715 0.28323169 0.18573188 0.13890452 0.10963643 ## [1] "Utility = " "-39.7578637369197" ``` 19\.9 Uneven bets? ------------------ Digital asset investors are often faced with the question of whether to bet even amounts across digital investments, or to invest with different weights. We explore this question by considering two types of Bernoulli portfolios. Both have \\(n\=25\\) assets within them, each with a success probability of \\(q\_i\=0\.05\\). The first has equal payoffs, i.e., \\(1/25\\) each. The second portfolio has payoffs that monotonically increase, i.e., the payoffs are equal to \\(j/325, j\=1,2,\\ldots,25\\). We note that the sum of the payoffs in both cases is 1\. The code output shows the utility of the investor, where the utility function is the same as in the previous sections. We see that the utility for the balanced portfolio is higher than that for the imbalanced one. Also the balanced portfolio evidences SSD over the imbalanced portfolio. However, the return distribution has fatter tails when the portfolio investments are imbalanced. Hence, investors seeking to distinguish themselves by taking on greater risk in their early careers may be better off with imbalanced portfolios. ``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE FLAT WEIGHTS BETTER THAN RISING WEIGHTS plot(rets,cumprobs1-cumprobs2,type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs2 = matrix(cumprobs2,length(cumprobs2),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs1[1:j]-cumprobs2[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. 
``` ``` ## [1] "Max ssd = " "-0.000556175706150297" ``` ``` plot(rets,ssd,type="l",xlab="Normalized total payoff",ylab="Integrated F(G[rho=0.09]) minus F(G[rho=0.81])") ``` We look at expected utility. ``` #CHECK WHAT HAPPENS WITH UNEVEN WEIGHTS #Result: No ordering with SSD, Utility lower if weights ascending. #source("uneven_weights.R") #Flat vs rising weights num_names = 25 each_loss1 = array(13,num_names) each_loss2 = seq(num_names) each_prob = 0.05 rho = 0.55 gam = 3 for (j in seq(2)) { if (j==1) { L = each_loss1 } if (j==2) { L = each_loss2 } q = array(each_prob,num_names) res = digiprob(L,q,rho) rets = res[,1]/sum(each_loss1) probs = res[,2] cumprobs = array(0,length(res[,2])) cumprobs[1] = probs[1] for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] } if (j==1) { plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability") cumprobs1 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==2) { lines(rets,cumprobs,type="l",col="Red") cumprobs2 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } mn = sum(rets*probs) idx = which(rets>0.01); p01 = sum(probs[idx]) idx = which(rets>0.02); p02 = sum(probs[idx]) idx = which(rets>0.03); p03 = sum(probs[idx]) idx = which(rets>0.07); p07 = sum(probs[idx]) idx = which(rets>0.10); p10 = sum(probs[idx]) idx = which(rets>0.15); p15 = sum(probs[idx]) idx = which(rets>0.25); p25 = sum(probs[idx]) print(c(mn,p01,p02,p03,p07,p10,p15,p25)) print(c("Utility = ",utility)) } ``` ``` ## [1] 0.04998222 0.49021241 0.49021241 0.49021241 0.27775760 0.16903478 ## [7] 0.10711351 0.03051047 ## [1] "Utility = " "-33.7820026132803" ## [1] 0.04998222 0.46435542 0.43702188 0.40774167 0.25741601 0.17644497 ## [7] 0.10250256 0.03688191 ## [1] "Utility = " "-34.4937532559838" ``` We now look at stochastic dominance. ``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE FLAT WEIGHTS BETTER THAN RISING WEIGHTS plot(rets,cumprobs1-cumprobs2,type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs2 = matrix(cumprobs2,length(cumprobs2),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs1[1:j]-cumprobs2[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. ``` ``` ## [1] "Max ssd = " "0" ``` 19\.10 Mixing safe and risky assets ----------------------------------- Is it better to have assets with a wide variation in probability of success or with similar probabilities? To examine this, we look at two portfolios of \\(n\=26\\) assets. In the first portfolio, all the assets have a probability of success equal to \\(q\_i \= 0\.10\\). In the second portfolio, half the firms have a success probability of \\(0\.05\\) and the other half have a probability of \\(0\.15\\). The payoff of all investments is \\(1/26\\). The probability distribution of payoffs and the expected utility for the same power utility investor (with \\(\\gamma\=3\\)) are given in code output below. We see that mixing the portfolio between investments with high and low probability of success results in higher expected utility than keeping the investments similar. We also confirmed that such imbalanced success probability portfolios also evidence SSD over portfolios with similar investments in terms of success rates. 
This result does not have a natural analog in the mean\-variance world with non\-digital assets. For empirical evidence on the efficacy of various diversification approaches, see (“The Performance of Private Equity Funds: Does Diversification Matter?” [2006](#ref-Lossen)). ``` #CHECK WHAT HAPPENS WITH MIXED PDs #Result: No SSD ordering, but Utility higher for mixed pds #source("mixed_pds.R") num_names = 26 each_loss = array(1,num_names) each_prob1 = array(0.10,num_names) each_prob2 = c(array(0.05,num_names/2),array(0.15,num_names/2)) rho = 0.55 gam = 3 #Risk aversion CARA for (j in seq(2)) { if (j==1) { q = each_prob1 } if (j==2) { q = each_prob2 } L = each_loss res = digiprob(L,q,rho) rets = res[,1]/sum(each_loss) probs = res[,2] cumprobs = array(0,length(res[,2])) cumprobs[1] = probs[1] for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] } if (j==1) { plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability") cumprobs1 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==2) { lines(rets,cumprobs,type="l",col="Red") cumprobs2 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } mn = sum(rets*probs) idx = which(rets>0.01); p01 = sum(probs[idx]) idx = which(rets>0.02); p02 = sum(probs[idx]) idx = which(rets>0.03); p03 = sum(probs[idx]) idx = which(rets>0.07); p07 = sum(probs[idx]) idx = which(rets>0.10); p10 = sum(probs[idx]) idx = which(rets>0.15); p15 = sum(probs[idx]) idx = which(rets>0.25); p25 = sum(probs[idx]) print(c(mn,p01,p02,p03,p07,p10,p15,p25)) print(c("Utility = ",utility)) } ``` ``` ## [1] 0.09998225 0.70142788 0.70142788 0.70142788 0.50249327 0.36635887 ## [7] 0.27007883 0.11105329 ## [1] "Utility = " "-24.6254789193705" ## [1] 0.09998296 0.72144189 0.72144189 0.72144189 0.51895166 0.37579336 ## [7] 0.27345532 0.10589547 ## [1] "Utility = " "-23.9454295328498" ``` And of course, an examination of stochastic dominance. ``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE EVERYWHERE MIXED PDs BETTER THAN FLAT PDs plot(rets,cumprobs1-cumprobs2,type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs2 = matrix(cumprobs2,length(cumprobs2),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs2[1:j]-cumprobs1[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. ``` ``` ## [1] "Max ssd = " "-1.85123605385695e-05" ``` 19\.11 Conclusions ------------------ Digital asset portfolios are different from mean\-variance ones because the asset returns are Bernoulli with small success probabilities. We used a recursion technique borrowed from the credit portfolio literature to construct the payoff distributions for Bernoulli portfolios. We find that many intuitions for these portfolios are similar to those of mean\-variance ones: diversification by adding assets is useful, low correlations amongst investments is good. However, we also find that uniform bet size is preferred to some small and some large bets. Rather than construct portfolios with assets having uniform success probabilities, it is preferable to have some assets with low success rates and others with high success probabilities, a feature that is noticed in the case of venture funds. These insights augment the standard understanding obtained from mean\-variance portfolio optimization. 
The approach taken here is simple to use. The only inputs needed are the expected payoffs of the assets \\(C\_i\\), success probabilities \\(q\_i\\), and the average correlation between assets, given by a parameter \\(\\rho\\). Broad statistics on these inputs are available, say for venture investments, from papers such as Sarin, Das, and Jagannathan ([2003](#ref-DasJagSarin)). Therefore, using data, it is easy to optimize the portfolio of a digital asset fund. The technical approach here is also easily extended to features including cost of effort by investors as the number of projects grows (Kanniainen and Keuschnigg ([2003](#ref-KannKeus))), syndication, etc. The number of portfolios with digital assets appears to be increasing in the marketplace, and the results of this analysis provide important intuition for asset managers.

The approach in Section 19\.2 is just one way in which to model joint success probabilities using a common factor. Undeniably, there are other ways too, such as modeling joint probabilities directly, making sure that they are consistent with each other, which itself may be mathematically tricky. It is indeed possible to envisage that, for some different system of joint success probabilities, the qualitative nature of the results may differ from the ones developed here. It is also possible that the system we adopt here with a single common factor \\(X\\) may be extended to more than one common factor, an approach often taken in the default literature.

19\.1 Digital Assets
--------------------

Digital assets are investments with returns that are binary in nature, i.e., they either have a very large or very small payoff. We explore the features of optimal portfolios of digital assets such as venture investments, credit assets, search keyword groups, and lotteries. These portfolios comprise correlated assets with joint Bernoulli distributions. Using a simple, standard, fast recursion technique to generate the return distribution of the portfolio, we derive guidelines on how investors in digital assets may think about constructing their portfolios. We find that digital portfolios are better when they are homogeneous in the size of the assets, but heterogeneous in the success probabilities of the asset components. The return distributions of digital portfolios are highly skewed and fat\-tailed. A good example of such a portfolio is a venture fund.

A simple representation of the payoff to a digital investment is Bernoulli with a large payoff for a successful outcome and a very small (almost zero) payoff for a failed one. The probability of success of digital investments is typically small, in the region of 5–25% for new ventures; see Sarin, Das, and Jagannathan ([2003](#ref-DasJagSarin)). Optimizing portfolios of such investments is therefore not amenable to standard techniques used for mean\-variance optimization. It is also not apparent that the intuitions obtained from the mean\-variance setting carry over to portfolios of Bernoulli assets. For instance, it is interesting to ask, ceteris paribus, whether diversification by increasing the number of assets in the digital portfolio is always a good thing. Since Bernoulli portfolios involve higher moments, how diversification is achieved is by no means obvious. We may also ask whether it is preferable to include assets with as little correlation as possible, or whether there is a sweet spot for the optimal correlation levels of the assets.
Should all the investments be of even size, or is it preferable to take a few large bets and several small ones? And finally, is a mixed portfolio of safe and risky assets preferred to one where the probability of success is more uniform across assets? These are all questions that are of interest to investors in digital type portfolios, such as CDO investors, venture capitalists and investors in venture funds.

We will use a method that is based on standard recursion for modeling the exact return distribution of a Bernoulli portfolio. This method on which we build was first developed by Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)) for generating loss distributions of credit portfolios. We then examine the properties of these portfolios in a stochastic dominance framework to provide guidelines to digital investors. These guidelines are found to be consistent with prescriptions from expected utility optimization. The prescriptions are as follows:

1. Holding all else the same, more digital investments are preferred, meaning, for example, that a venture portfolio should seek to maximize market share.
2. As with mean\-variance portfolios, lower asset correlation is better, unless the digital investor’s payoff depends on the upper tail of returns.
3. A strategy of a few large bets and many small ones is inferior to one with bets being roughly the same size.
4. And finally, a mixed portfolio of low\-success and high\-success assets is better than one with all assets of the same average success probability level.

19\.2 Modeling Digital Portfolios
---------------------------------

Assume that the investor has a choice of \\(n\\) investments in digital assets (e.g., start\-up firms). The investments are indexed \\(i\=1,2, \\ldots, n\\). Each investment has a probability of success that is denoted \\(q\_i\\), and if successful, the payoff returned is \\(S\_i\\) dollars. With probability \\((1\-q\_i)\\), the investment will not work out, the start\-up will fail, and the money will be lost in totality. Therefore, the payoff (cashflow) is

\\\[ \\mbox{Payoff} \= C\_i \= \\left\\{ \\begin{array}{cl} S\_i \& \\mbox{with prob } q\_i \\\\ 0 \& \\mbox{with prob } (1\-q\_i) \\end{array} \\right. \\]

The specification of the investment as a Bernoulli trial is a simple representation of reality in the case of digital portfolios. This mimics well, for example, the case of the venture capital business. Two generalizations might be envisaged. First, we might extend the model to allow \\(S\_i\\) to be random, i.e., drawn from a range of values. This will complicate the mathematics, but not add much in terms of enriching the model’s results. Second, the failure payoff might be non\-zero, say an amount \\(a\_i\\). Then we have a pair of Bernoulli payoffs \\(\\{S\_i, a\_i\\}\\). Note that we can decompose these investment payoffs into a project with constant payoff \\(a\_i\\) plus another project with payoffs \\(\\{S\_i\-a\_i,0\\}\\), the latter being exactly the original setting where the failure payoff is zero. Hence, the version of the model we solve in this chapter, with zero failure payoffs, is without loss of generality.

Unlike stock portfolios where the choice set of assets is assumed to be multivariate normal, digital asset investments have a joint Bernoulli distribution. Portfolio returns of these investments are unlikely to be Gaussian, and hence higher\-order moments are likely to matter more.
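To get a concrete feel for how non\-Gaussian such payoffs are, here is a minimal simulation sketch (not taken from the text): it draws independent Bernoulli payoffs and reports the skewness and excess kurtosis of the normalized portfolio payoff. The choices of 25 assets, success probability 0.1, and equal payoffs of 1/25 are assumptions made purely for illustration.

```
#ILLUSTRATIVE SKETCH (assumed parameter values, for intuition only)
set.seed(42)
n = 25                              #number of digital assets (assumed)
q = 0.10                            #success probability of each asset (assumed)
nsim = 100000                       #number of simulated portfolios
outcomes = matrix(rbinom(nsim*n,size=1,prob=q),nrow=nsim,ncol=n)
C = rowSums(outcomes)/n             #normalized total payoff in [0,1]
m = mean(C); s = sd(C)
skew = mean((C-m)^3)/s^3            #roughly 0 for a Gaussian
xkurt = mean((C-m)^4)/s^4 - 3       #excess kurtosis, roughly 0 for a Gaussian
print(round(c(mean=m,sd=s,skewness=skew,excess_kurtosis=xkurt),4))
hist(C,breaks=25,main="Independent digital assets",xlab="Normalized total payoff")
```

Even with fully independent assets the payoff distribution is visibly right\-skewed with positive excess kurtosis; the common\-factor correlation model introduced next tends to make these higher\-moment effects more pronounced.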
In order to generate the return distribution for the portfolio of digital assets, we need to account for the correlations across digital investments. We adopt the following simple model of correlation. Define \\(y\_i\\) to be the performance proxy for the \\(i\\)\-th asset. This proxy variable will be simulated for comparison with a threshold level of performance to determine whether the asset yielded a success or failure. It is defined by the following function, widely used in the correlated default modeling literature; see, for example, Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)):

\\\[ y\_i \= \\rho\_i \\; X \+ \\sqrt{1\-\\rho\_i^2}\\; Z\_i, \\quad i \= 1 \\ldots n \\]

where \\(\\rho\_i \\in \[0,1]\\) is a coefficient that correlates threshold \\(y\_i\\) with a normalized common factor \\(X \\sim N(0,1\)\\). The common factor drives the correlations amongst the digital assets in the portfolio. We assume that \\(Z\_i \\sim N(0,1\)\\) and \\(\\mbox{Corr}(X,Z\_i)\=0, \\forall i\\). Hence, the correlation between assets \\(i\\) and \\(j\\) is given by \\(\\rho\_i \\times \\rho\_j\\). Note that the mean and variance of \\(y\_i\\) are: \\(E(y\_i)\=0, Var(y\_i)\=1, \\forall i\\). Conditional on \\(X\\), the values of \\(y\_i\\) are all independent, as \\(\\mbox{Corr}(Z\_i, Z\_j)\=0\\).

We now formalize the probability model governing the success or failure of the digital investment. We define a variable \\(x\_i\\), with distribution function \\(F(\\cdot)\\), such that \\(F(x\_i) \= q\_i\\), the probability of success of the digital investment. Conditional on a fixed value of \\(X\\), the probability of success of the \\(i\\)\-th investment is defined as

\\\[ p\_i^X \\equiv Pr\[y\_i \< x\_i \| X] \\]

Assuming \\(F\\) to be the normal distribution function, we have

\\\[ \\begin{align} p\_i^X \&\= Pr \\left\[ \\rho\_i X \+ \\sqrt{1\-\\rho\_i^2}\\; Z\_i \< x\_i \| X \\right] \\nonumber \\\\ \&\= Pr \\left\[ Z\_i \< \\frac{x\_i \- \\rho\_i X}{\\sqrt{1\-\\rho\_i^2}} \| X \\right] \\nonumber \\\\ \&\= \\Phi \\left\[ \\frac{F^{\-1}(q\_i) \- \\rho\_i X}{\\sqrt{1\-\\rho\_i^2}} \\right] \\end{align} \\]

where \\(\\Phi(\\cdot)\\) is the cumulative normal distribution function. Therefore, given the level of the common factor \\(X\\), asset correlation \\(\\rho\\), and the unconditional success probabilities \\(q\_i\\), we obtain the conditional success probability for each asset \\(p\_i^X\\). As \\(X\\) varies, so does \\(p\_i^X\\). For the numerical examples here we choose the function \\(F(x\_i)\\) to be the cumulative normal distribution function.

19\.3 Fast Computation Approach
-------------------------------

We use a fast technique for building up distributions for sums of Bernoulli random variables. In finance, this *recursion* technique was introduced in the credit portfolio modeling literature by Andersen, Sidenius, and Basu ([2003](#ref-AndSidBasu)). We deem an investment in a digital asset as successful if it achieves its high payoff \\(S\_i\\). The cashflow from the portfolio is a random variable \\(C \= \\sum\_{i\=1}^n C\_i\\). The maximum cashflow that may be generated by the portfolio is the sum of all digital asset cashflows, which obtains when each and every outcome is a success, i.e.,

\\\[ C\_{max} \= \\sum\_{i\=1}^n \\; S\_i \\]

To keep matters simple, we assume that each \\(S\_i\\) is an integer, and that we round off the amounts to the nearest significant digit. So, if the smallest unit we care about is a million dollars, then each \\(S\_i\\) will be in units of integer millions.
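As a quick illustration of the conditional success probability formula derived above, the following sketch evaluates \\(p\_i^X\\) on a grid of values of the common factor. The inputs used here (an unconditional success probability of 0.05 and a correlation parameter of 0.25) are assumed only because they match the magnitudes used in the later numerical examples; the transformation itself is the same one applied inside the **digiprob** function of Section 19\.4.

```
#ILLUSTRATIVE SKETCH: conditional success probability as the common factor X varies
q = 0.05                            #unconditional success probability (assumed)
rho = 0.25                          #correlation parameter (assumed)
X = seq(-3,3,by=1)                  #grid of common factor values
pX = pnorm((qnorm(q)-rho*X)/sqrt(1-rho^2))
print(round(cbind(X,pX),4))
#Low values of X raise every asset's conditional success probability and high
#values lower it; this common movement is what correlates the asset outcomes.
```

Averaging these conditional distributions over the density of the common factor then recovers the unconditional portfolio distribution, which is what the recursion described next, combined with numerical integration over \\(X\\), delivers.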
Recall that, conditional on a value of \\(X\\), the probability of success of digital asset \\(i\\) is given as \\(p\_i^X\\). The recursion technique will allow us to generate the portfolio cashflow probability distribution for each level of \\(X\\). We will then simply compose these conditional (on \\(X\\)) distributions using the marginal distribution for \\(X\\), denoted \\(g(X)\\), into the unconditional distribution for the entire portfolio. Therefore, we define the probability of total cashflow from the portfolio, conditional on \\(X\\), to be \\(f(C \| X)\\). Then, the unconditional cashflow distribution of the portfolio becomes \\\[ f(C) \= \\int\_X \\; f(C \| X) \\cdot g(X)\\; dX \\quad \\quad \\quad (CONV) \\] The distribution \\(f(C \| X)\\) is easily computed numerically as follows. We index the assets with \\(i\=1 \\ldots n\\). The cashflow from all assets taken together will range from zero to \\(C\_{max}\\). Suppose this range is broken into integer buckets, resulting in \\(N\_B\\) buckets in total, each one containing an increasing level of total cashflow. We index these buckets by \\(j\=1 \\ldots N\_B\\), with the cashflow in each bucket equal to \\(B\_j\\). \\(B\_j\\) represents the total cashflow from all assets (some pay off and some do not), and the buckets comprise the discrete support for the entire distribution of total cashflow from the portfolio. For example, suppose we had 10 assets, each with a payoff of \\(C\_i\=3\\). Then \\(C\_{max}\=30\\). A plausible set of buckets comprising the support of the cashflow distribution would be: \\(\\{0,3,6,9,12,15,18,21,24,27,C\_{max}\\}\\). Define \\(P(k,B\_j)\\) as the probability of bucket \\(j\\)’s cashflow level \\(B\_j\\) if we account for the first \\(k\\) assets. For example, if we had just 3 assets, with payoffs of value 1,3,2 respectively, then we would have 7 buckets, i.e. \\(B\_j\=\\{0,1,2,3,4,5,6\\}\\). After accounting for the first asset, the only possible buckets with positive probability would be \\(B\_j\=0,1\\), and after the first two assets, the buckets with positive probability would be \\(B\_j\=0,1,3,4\\). We begin with the first asset, then the second and so on, and compute the probability of seeing the returns in each bucket. Each probability is given by the following *recursion*: \\\[ P(k\+1,B\_j) \= P(k,B\_j)\\;\[1\-p^X\_{k\+1}] \+ P(k,B\_j \- S\_{k\+1}) \\; p^X\_{k\+1}, \\quad k \= 1, \\ldots, n\-1\. \\quad \\quad (REC) \\] Thus the probability of a total cashflow of \\(B\_j\\) after considering the first \\((k\+1\)\\) firms is equal to the sum of two probability terms. First, the probability of the same cashflow \\(B\_j\\) from the first \\(k\\) firms, given that firm \\((k\+1\)\\) did not succeed. Second, the probability of a cashflow of \\(B\_j \- S\_{k\+1}\\) from the first \\(k\\) firms and the \\((k\+1\)\\)\-st firm does succeed. We start off this recursion from the first asset, after which the \\(N\_B\\) buckets are all of probability zero, except for the bucket with zero cashflow (the first bucket) and the one with \\(S\_1\\) cashflow, i.e., \\\[ \\begin{align} P(1,0\) \&\= 1\-p^X\_1 \\\\ P(1,S\_1\) \&\= p^X\_1 \\end{align} \\] All the other buckets will have probability zero, i.e., \\(P(1,B\_j \\neq \\{0,S\_1\\})\=0\\). With these starting values, we can run the system up from the first asset to the \\(n\\)\-th one by repeated application of equation (**REC**). Finally, we will have the entire distribution \\(P(n,B\_j)\\), conditional on a given value of \\(X\\). 
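Before turning to the full implementation that follows, it may help to see equation (**REC**) run on the small three\-asset example just described, with payoffs 1, 3, and 2. The success probabilities in this sketch are assumed purely for illustration (the text does not specify them); the point is only to confirm the bucket bookkeeping, namely that buckets \\(\\{0,1\\}\\) are reachable after one asset and \\(\\{0,1,3,4\\}\\) after two.

```
#ILLUSTRATIVE SKETCH of equation (REC) for the 3-asset example (payoffs 1,3,2)
S = c(1,3,2)                        #payoffs S_i (from the text)
p = c(0.10,0.20,0.30)               #assumed conditional success probabilities
B = 0:sum(S)                        #buckets 0,1,...,6
P = rep(0,length(B))                #distribution after the first asset only
P[B==0] = 1-p[1]
P[B==S[1]] = p[1]
cat("After asset 1, positive-probability buckets:",B[P>0],"\n")
for (k in 2:length(S)) {            #repeated application of (REC)
Pnew = P*(1-p[k])                   #asset k fails
shift = which(B-S[k] >= 0)          #buckets reachable when asset k succeeds
Pnew[shift] = Pnew[shift] + P[match(B[shift]-S[k],B)]*p[k]
P = Pnew
cat("After asset",k,", positive-probability buckets:",B[P>0],"\n")
}
print(rbind(bucket=B,prob=round(P,4)))   #probabilities sum to one
```

The **asbrec** function below implements exactly this loop, organized as a matrix with one row of bucket probabilities per recursion pass.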
We then compose all these distributions that are conditional on \\(X\\) into one single cashflow distribution using equation (**CONV**). This is done by numerically integrating over all values of \\(X\\). ``` library(pspline) #Library for Digital Portfolio Analysis #Copyright, Sanjiv Das, Dec 1, 2008. #------------------------------------------------------------ #Function to implement the Andersen-Sidenius-Basu (Risk, 2003) #recursion algorithm. Note that the probabilities are fixed, #i.e. conditional on a given level of factor. The full blown #distribution comes from the integral over all levels of the factor. #INPUTS (example) #w = c(1,7,3,2) #Loss weights #p = c(0.05, 0.2, 0.03, 0.1) #Loss probabilities asbrec = function(w,p) { #BASIC SET UP N = length(w) maxloss = sum(w) bucket = c(0,seq(maxloss)) LP = matrix(0,N,maxloss+1) #probability grid over losses #DO FIRST FIRM LP[1,1] = 1-p[1]; LP[1,w[1]+1] = p[1]; #LOOP OVER REMAINING FIRMS for (i in seq(2,N)) { for (j in seq(maxloss+1)) { LP[i,j] = LP[i-1,j]*(1-p[i]) if (bucket[j]-w[i] >= 0) { LP[i,j] = LP[i,j] + LP[i-1,j-w[i]]*p[i] } } } #FINISH UP lossprobs = LP[N,] #print(t(LP)) #print(c("Sum of final probs = ",sum(lossprobs))) result = matrix(c(bucket,lossprobs),(maxloss+1),2) } #END ASBREC ``` We use this function in the following example. ``` #EXAMPLE w = c(1,7,3,2) p = c(0.05, 0.2, 0.03, 0.1) res = asbrec(w,p) print(res) ``` ``` ## [,1] [,2] ## [1,] 0 0.66348 ## [2,] 1 0.03492 ## [3,] 2 0.07372 ## [4,] 3 0.02440 ## [5,] 4 0.00108 ## [6,] 5 0.00228 ## [7,] 6 0.00012 ## [8,] 7 0.16587 ## [9,] 8 0.00873 ## [10,] 9 0.01843 ## [11,] 10 0.00610 ## [12,] 11 0.00027 ## [13,] 12 0.00057 ## [14,] 13 0.00003 ``` ``` barplot(res[,2],names.arg=res[,1],col=2) ``` Here is a second example. Here each column represents one pass through the recursion. Since there are five assets, we get five passes, and the final column is the result we are looking for. ``` #EXAMPLE w = c(5,8,4,2,1) p = array(1/length(w),length(w)) res = asbrec(w,p) print(res) ``` ``` ## [,1] [,2] ## [1,] 0 0.32768 ## [2,] 1 0.08192 ## [3,] 2 0.08192 ## [4,] 3 0.02048 ## [5,] 4 0.08192 ## [6,] 5 0.10240 ## [7,] 6 0.04096 ## [8,] 7 0.02560 ## [9,] 8 0.08704 ## [10,] 9 0.04096 ## [11,] 10 0.02560 ## [12,] 11 0.01024 ## [13,] 12 0.02176 ## [14,] 13 0.02560 ## [15,] 14 0.01024 ## [16,] 15 0.00640 ## [17,] 16 0.00128 ## [18,] 17 0.00512 ## [19,] 18 0.00128 ## [20,] 19 0.00128 ## [21,] 20 0.00032 ``` ``` barplot(res[,2],names.arg=res[,1],col=2) ``` We can explore these recursion calculations in some detail as follows. Note that in our example \\(p\_i \= 0\.2, i \= 1,2,3,4,5\\). We are interested in computing \\(P(k,B)\\), where \\(k\\) denotes the \\(k\\)\-th recursion pass, and \\(B\\) denotes the return bucket. Recall that we have five assets with return levels of \\(\\{5,8,4,2,1\\}\\), respecitvely. After \\(i\=1\\), we have \\\[ \\begin{align} P(1,0\) \&\= (1\-p\_1\) \= 0\.8\\\\ P(1,5\) \&\= p\_1 \= 0\.2\\\\ P(1,j) \&\= 0, j \\neq \\{0,5\\} \\end{align} \\] The completes the first recursion pass and the values can be verified from the R output above by examining column 2 (column 1 contains the values of the return buckets). We now move on the calculations needed for the second pass in the recursion. 
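Those second\- and third\-pass calculations were tabulated step by step earlier in this chapter, in the discussion preceding Section 19\.4. As a numerical check on them, here is a small sketch (not part of the original code) that re\-runs the same double loop as **asbrec** while retaining the intermediate matrix LP, which the function builds internally with one row per pass but does not return.

```
#ILLUSTRATIVE CHECK: keep the intermediate recursion passes for w=c(5,8,4,2,1), p_i=0.2
w = c(5,8,4,2,1)
p = array(0.2,length(w))
N = length(w); maxloss = sum(w)
bucket = c(0,seq(maxloss))
LP = matrix(0,N,maxloss+1)          #one row per recursion pass
LP[1,1] = 1-p[1]; LP[1,w[1]+1] = p[1]
for (i in seq(2,N)) {
for (j in seq(maxloss+1)) {
LP[i,j] = LP[i-1,j]*(1-p[i])
if (bucket[j]-w[i] >= 0) { LP[i,j] = LP[i,j] + LP[i-1,j-w[i]]*p[i] }
}
}
nz = which(LP[3,] > 0)              #buckets reachable after three assets
print(rbind(bucket=bucket[nz],pass1=LP[1,nz],pass2=LP[2,nz],pass3=LP[3,nz]))
#pass2 is 0.64, 0.16, 0.16, 0.04 at buckets 0, 5, 8, 13; pass3 is 0.512 at 0,
#0.128 at 4, 5, 8, 0.032 at 9, 12, 13, and 0.008 at 17, matching the values
#computed by hand.
```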
``` ``` ## [1] "Max ssd = " "-0.000556175706150297" ``` ``` plot(rets,ssd,type="l",xlab="Normalized total payoff",ylab="Integrated F(G[rho=0.09]) minus F(G[rho=0.81])") ``` We look at expected utility. ``` #CHECK WHAT HAPPENS WITH UNEVEN WEIGHTS #Result: No ordering with SSD, Utility lower if weights ascending. #source("uneven_weights.R") #Flat vs rising weights num_names = 25 each_loss1 = array(13,num_names) each_loss2 = seq(num_names) each_prob = 0.05 rho = 0.55 gam = 3 for (j in seq(2)) { if (j==1) { L = each_loss1 } if (j==2) { L = each_loss2 } q = array(each_prob,num_names) res = digiprob(L,q,rho) rets = res[,1]/sum(each_loss1) probs = res[,2] cumprobs = array(0,length(res[,2])) cumprobs[1] = probs[1] for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] } if (j==1) { plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability") cumprobs1 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==2) { lines(rets,cumprobs,type="l",col="Red") cumprobs2 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } mn = sum(rets*probs) idx = which(rets>0.01); p01 = sum(probs[idx]) idx = which(rets>0.02); p02 = sum(probs[idx]) idx = which(rets>0.03); p03 = sum(probs[idx]) idx = which(rets>0.07); p07 = sum(probs[idx]) idx = which(rets>0.10); p10 = sum(probs[idx]) idx = which(rets>0.15); p15 = sum(probs[idx]) idx = which(rets>0.25); p25 = sum(probs[idx]) print(c(mn,p01,p02,p03,p07,p10,p15,p25)) print(c("Utility = ",utility)) } ``` ``` ## [1] 0.04998222 0.49021241 0.49021241 0.49021241 0.27775760 0.16903478 ## [7] 0.10711351 0.03051047 ## [1] "Utility = " "-33.7820026132803" ## [1] 0.04998222 0.46435542 0.43702188 0.40774167 0.25741601 0.17644497 ## [7] 0.10250256 0.03688191 ## [1] "Utility = " "-34.4937532559838" ``` We now look at stochastic dominance. ``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE FLAT WEIGHTS BETTER THAN RISING WEIGHTS plot(rets,cumprobs1-cumprobs2,type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs2 = matrix(cumprobs2,length(cumprobs2),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs1[1:j]-cumprobs2[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. ``` ``` ## [1] "Max ssd = " "0" ``` 19\.10 Mixing safe and risky assets ----------------------------------- Is it better to have assets with a wide variation in probability of success or with similar probabilities? To examine this, we look at two portfolios of \\(n\=26\\) assets. In the first portfolio, all the assets have a probability of success equal to \\(q\_i \= 0\.10\\). In the second portfolio, half the firms have a success probability of \\(0\.05\\) and the other half have a probability of \\(0\.15\\). The payoff of all investments is \\(1/26\\). The probability distribution of payoffs and the expected utility for the same power utility investor (with \\(\\gamma\=3\\)) are given in code output below. We see that mixing the portfolio between investments with high and low probability of success results in higher expected utility than keeping the investments similar. We also confirmed that such imbalanced success probability portfolios also evidence SSD over portfolios with similar investments in terms of success rates. 
This result does not have a natural analog in the mean\-variance world with non\-digital assets. For empirical evidence on the efficacy of various diversification approaches, see (“The Performance of Private Equity Funds: Does Diversification Matter?” [2006](#ref-Lossen)). ``` #CHECK WHAT HAPPENS WITH MIXED PDs #Result: No SSD ordering, but Utility higher for mixed pds #source("mixed_pds.R") num_names = 26 each_loss = array(1,num_names) each_prob1 = array(0.10,num_names) each_prob2 = c(array(0.05,num_names/2),array(0.15,num_names/2)) rho = 0.55 gam = 3 #Risk aversion CARA for (j in seq(2)) { if (j==1) { q = each_prob1 } if (j==2) { q = each_prob2 } L = each_loss res = digiprob(L,q,rho) rets = res[,1]/sum(each_loss) probs = res[,2] cumprobs = array(0,length(res[,2])) cumprobs[1] = probs[1] for (k in seq(2,length(res[,2]))) { cumprobs[k] = cumprobs[k-1] + probs[k] } if (j==1) { plot(rets,cumprobs,type="l",xlab="Normalized Total Payoff",ylab="Cumulative Probability") cumprobs1 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } if (j==2) { lines(rets,cumprobs,type="l",col="Red") cumprobs2 = cumprobs utility = sum(((0.1+rets)^(1-gam)/(1-gam))*probs) } mn = sum(rets*probs) idx = which(rets>0.01); p01 = sum(probs[idx]) idx = which(rets>0.02); p02 = sum(probs[idx]) idx = which(rets>0.03); p03 = sum(probs[idx]) idx = which(rets>0.07); p07 = sum(probs[idx]) idx = which(rets>0.10); p10 = sum(probs[idx]) idx = which(rets>0.15); p15 = sum(probs[idx]) idx = which(rets>0.25); p25 = sum(probs[idx]) print(c(mn,p01,p02,p03,p07,p10,p15,p25)) print(c("Utility = ",utility)) } ``` ``` ## [1] 0.09998225 0.70142788 0.70142788 0.70142788 0.50249327 0.36635887 ## [7] 0.27007883 0.11105329 ## [1] "Utility = " "-24.6254789193705" ## [1] 0.09998296 0.72144189 0.72144189 0.72144189 0.51895166 0.37579336 ## [7] 0.27345532 0.10589547 ## [1] "Utility = " "-23.9454295328498" ``` And of course, an examination of stochastic dominance. ``` #PLOT DIFFERENCE IN DISTRIBUTION FUNCTIONS #IF POSITIVE EVERYWHERE MIXED PDs BETTER THAN FLAT PDs plot(rets,cumprobs1-cumprobs2,type="l",xlab="Normalized total payoff",ylab="Difference in cumulative probs") ``` ``` #CHECK IF SSD IS SATISFIED #A SSD> B, if E(A)=E(B), and integral_0^y (F_A(z)-F_B(z)) dz <= 0, for all y cumprobs2 = matrix(cumprobs2,length(cumprobs2),1) n = length(cumprobs1) ssd = NULL for (j in 1:n) { check = sum(cumprobs2[1:j]-cumprobs1[1:j]) ssd = c(ssd,check) } print(c("Max ssd = ",max(ssd))) #If <0, then SSD satisfied, and it implies MV efficiency. ``` ``` ## [1] "Max ssd = " "-1.85123605385695e-05" ``` 19\.11 Conclusions ------------------ Digital asset portfolios are different from mean\-variance ones because the asset returns are Bernoulli with small success probabilities. We used a recursion technique borrowed from the credit portfolio literature to construct the payoff distributions for Bernoulli portfolios. We find that many intuitions for these portfolios are similar to those of mean\-variance ones: diversification by adding assets is useful, low correlations amongst investments is good. However, we also find that uniform bet size is preferred to some small and some large bets. Rather than construct portfolios with assets having uniform success probabilities, it is preferable to have some assets with low success rates and others with high success probabilities, a feature that is noticed in the case of venture funds. These insights augment the standard understanding obtained from mean\-variance portfolio optimization. 
The approach taken here is simple to use. The only inputs needed are the expected payoffs of the assets \\(C\_i\\), success probabilities \\(q\_i\\), and the average correlation between assets, given by a parameter \\(\\rho\\). Broad statistics on these inputs are available, say for venture investments, from papers such as Sarin, Das, and Jagannathan ([2003](#ref-DasJagSarin)). Therefore, using data, it is easy to optimize the portfolio of a digital asset fund. The technical approach here is also easily extended to features including cost of effort by investors as the number of projects grows (Kanniainen and Keuschnigg ([2003](#ref-KannKeus))), syndication, etc. The number of portfolios with digital assets appears to be increasing in the marketplace, and the results of this analysis provide important intuition for asset managers. The approach in Section 2 is just one way in which to model joint success probabilities using a common factor. Undeniably, there are other ways too, such as modeling joint probabilities directly, making sure that they are consistent with each other, which itself may be mathematically tricky. It is indeed possible to envisage that, for some different system of joint success probabilities, the qualitative nature of the results may differ from the ones developed here. It is also possible that the system we adopt here with a single common factor \\(X\\) may be extended to more than one common factor, an approach often taken in the default literature.
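As a concrete illustration of the point above that the only inputs needed are the payoffs \\(C\_i\\), success probabilities \\(q\_i\\), and the correlation parameter \\(\\rho\\), here is a minimal sketch (an added example, not part of the original analysis) of how these inputs feed the recursion to compare candidate portfolios by expected power utility. It assumes the `digiprob` function defined earlier in the chapter and reuses the utility specification from the experiments above; the two candidate portfolios are purely illustrative.

```
#Sketch: expected power utility of a digital portfolio from its payoff distribution
#Assumes digiprob(L,q,rho) from earlier in the chapter:
#  column 1 = total payoff levels, column 2 = probabilities
eu_digital = function(L, q, rho, gam=3) {
  res   = digiprob(L, q, rho)
  rets  = res[,1]/sum(L)                   #normalized total payoff
  probs = res[,2]
  sum(((0.1+rets)^(1-gam)/(1-gam))*probs)  #same utility function as used in the text
}
#Illustrative comparison: two 25-asset portfolios differing only in correlation
print(eu_digital(L=array(1,25), q=array(0.05,25), rho=0.25))
print(eu_digital(L=array(1,25), q=array(0.05,25), rho=0.49))
```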
Chapter 20 Against the Odds: The Mathematics of Gambling ======================================================== 20\.1 Introduction ------------------ Most people hate mathematics but love gambling. Which of course, is strange because gambling is driven mostly by math. Think of any type of gambling and no doubt there will be maths involved: Horse\-track betting, sports betting, blackjack, poker, roulette, stocks, etc. 20\.2 Odds ---------- Oddly, bets are defined by their odds. If a bet on a horse is quoted at 4\-to\-1 odds, it means that if you win, you receive 4 times your wager plus the amount wagered. That is, if you bet $1, you get back $5\. The odds effectively define the probability of winning. Lets define this to be \\(p\\). If the odds are fair, then the expected gain is zero, i.e. \\\[ 4p \+ (1 − p)(−1\) \= 0 \\] which implies that \\(p \= 1/5\\). Hence, if the odds are \\(x : 1\\), then the probability of winning is \\(p \= \\frac{1}{x\+1} \= 0\.2\\) 20\.3 Edge ---------- Everyone bets because they think they have an advantage, or an edge over the others. It might be that they just think they have better information, better understanding, are using secret technology, or actually have private information (which may be illegal). The edge is the expected profit that will be made from repeated trials relative to the bet size. You have an edge if you can win with higher probability (\\(p^∗\\)) than \\(p \= 1/(x \+ 1\)\\). In the above example, with bet size $1 each time, suppose your probability of winning is not \\(1/5\\), but instead it is \\(1/4\\). What is your edge? The expected profit is \\\[ (−1\)×(3/4\)\+4×(1/4\) \= 1/4 \\] Dividing this by the bet size (i.e. $1\) gives the edge equal to \\(1/4\\). 20\.4 Bookmakers ---------------- These folks set the odds. Odds are dynamic of course. If the bookie thinks the probability of a win is \\(1/5\\), then he will set the odds to be a bit less than 4:1, maybe something like 3\.5:1\. In this way his expected intake minus payout is positive. At 3\.5:1 odds, if there are still a lot of takers, then the bookie surely realizes that the probability of a win must be higher than in his own estimation. He also infers that \\(p \> 1/(3\.5\+1\)\\), and will then change the odds to say 3:1\. Therefore, he acts as a market maker in the bet. 20\.5 The Kelly Criterion ------------------------- Suppose you have an edge. How should you bet over repeated plays of the game to maximize your wealth? (Do you think this is the way that hedge funds operate?) The Kelly (1956\) criterion says that you should invest only a fraction of your wealth in the bet. By keeping some aside you are guaranteed to not end up in ruin. What fraction should you bet? The answer is that you should bet \\\[ \\begin{equation} f \= \\frac{Edge}{Odds} \= \\frac{p^∗ x−(1−p^∗)}{x} \\end{equation} \\] where the odds are expressed in the form \\(x : 1\\). Recall that \\(p^∗\\) is your privately known probability of winning. ``` #EXAMPLE x=4; pstar=1/4; f = (pstar*x-(1-pstar))/x print(c("Kelly share = ",f)) ``` ``` ## [1] "Kelly share = " "0.0625" ``` This means we invest 6\.25% of the current bankroll. 20\.6 Simulation of the betting strategy ---------------------------------------- Lets simulate this strategy using R. Here is a simple program to simulate it, with optimal Kelly betting, and over\- and under\-betting. 
``` #Simulation of the Kelly Criterion #Basic data p = 0.25 #private prob of winning odds = 4 #actual odds edge = p*odds - (1-p) f = edge/odds print(c("edge",edge,"f",f)) ``` ``` ## [1] "edge" "0.25" "f" "0.0625" ``` ``` n = 1000 x = runif(n) f_over = 1.5*f f_under = 0.5*f bankroll = rep(0,n); bankroll[1]=1 br_overbet = bankroll; br_overbet[1]=1 br_underbet = bankroll; br_underbet[1]=1 for (i in 2:n) { if (x[i]<=0.25) { bankroll[i] = bankroll[i-1] + bankroll[i-1]*f*odds br_overbet[i] = br_overbet[i-1] + br_overbet[i-1]*f_over*odds br_underbet[i] = br_underbet[i-1] + br_underbet[i-1]*f_under*odds } else { bankroll[i] = bankroll[i-1] - bankroll[i-1]*f br_overbet[i] = br_overbet[i-1] - br_overbet[i-1]*f_over br_underbet[i] = br_underbet[i-1] - br_underbet[i-1]*f_under } } par(mfrow=c(3,1)) plot(bankroll,type="l") plot(br_overbet,type="l") plot(br_underbet,type="l") ``` ``` print(c(bankroll[n] ,br_overbet[n] ,br_underbet[n])) ``` ``` ## [1] 40580.312 35159.498 1496.423 ``` ``` print(c(bankroll[n]/br_overbet[n],bankroll[n]/br_underbet[n])) ``` ``` ## [1] 1.154178 27.118207 ``` We repeat this bet a thousand times. The initial pot is $1 only, but after a thousand trials, the optimal strategy ends up being a multiple of the suboptimal ones. 20\.7 Half\-Kelly ----------------- Note here that over\-betting is usually worse then under\-betting the Kelly optimal. Hence, many players employ what is known as the **Half\-Kelly** rule, i.e., they bet \\(f/2\\). Look at the resultant plot of the three strategies for the above example. The top plot follows the Kelly criterion, but the other two deviate from it, by overbetting or underbetting the fraction given by Kelly. We can very clearly see that not betting Kelly leads to far worse outcomes than sticking with the Kelly optimal plan. We ran this for 1000 periods, as if we went to the casino every day and placed one bet (or we placed four bets every minute for about four hours straight). Even within a few trials, the performance of the Kelly is remarkable. Note though that this is only one of the simulated outcomes. The simulations would result in different types of paths of the bankroll value, but generally, the outcomes are similar to what we see in the figure. Over\-betting leads to losses faster than under\-betting as one would naturally expect, because it is the more risky strategy. In this model, under the optimal rule, the probability of dropping to \\(1/n\\) of the bankroll is \\(1/n\\). So the probability of dropping to 90% of the bankroll (\\(n\=1\.11\\)) is \\(0\.9\\). Or, there is a 90% chance of losing 10% of the bankroll. Alternate betting rules are: (a) fixed size bets, (b) double up bets. The former is too slow, the latter ruins eventually. 20\.8 Deriving the Kelly Criterion ---------------------------------- First we define some notation. Let \\(B\_t\\) be the bankroll at time \\(t\\). We index time as going from time \\(t\=1, \\ldots, N\\). The odds are denoted, as before \\(x:1\\), and the random variable denoting the outcome (i.e., gains) of the wager is written as \\\[ \\begin{equation} Z\_t \= \\left\\{ \\begin{array}{ll} x \& \\mbox{ w/p } p \\\\ \-1 \& \\mbox{ w/p } (1\-p) \\end{array} \\right. \\end{equation} \\] We are said to have an **edge** when \\(E(Z\_t)\>0\\). The edge will be equal to \\(px\-(1\-p)\>0\\). We invest fraction \\(f\\) of our bankroll, where \\(0\<f\<1\\), and since \\(f \\neq 1\\), there is no chance of being wiped out. Each wager is for an amount \\(f B\_t\\) and returns \\(f B\_t Z\_t\\). 
Hence, we may write \\\[ \\begin{eqnarray} B\_t \&\=\& B\_{t\-1} \+ f B\_{t\-1} Z\_t \\\\ \&\=\& B\_{t\-1} \[1 \+ f Z\_t] \\\\ \&\=\& B\_0 \\prod\_{i\=1}^t \[1\+f Z\_i] \\end{eqnarray} \\] If we define the growth rate as \\\[ \\begin{eqnarray} g\_t(f) \&\=\& \\frac{1}{t} \\ln \\left( \\frac{B\_t}{B\_0} \\right) \\\\ \&\=\& \\frac{1}{t} \\ln \\prod\_{i\=1}^t \[1\+f Z\_i] \\\\ \&\=\& \\frac{1}{t} \\sum\_{i\=1}^t \\ln \[1\+f Z\_i] \\end{eqnarray} \\] Taking the limit by applying the law of large numbers, we get \\\[ \\begin{equation} g(f) \= \\lim\_{t \\rightarrow \\infty} g\_t(f) \= E\[\\ln(1\+f Z)] \\end{equation} \\] which is nothing but the time average of \\(\\ln(1\+fZ)\\). We need to find the \\(f\\) that maximizes \\(g(f)\\). We can write this more explicitly as \\\[ \\begin{equation} g(f) \= p \\ln(1\+f x) \+ (1\-p) \\ln(1\-f) \\end{equation} \\] Differentiating to get the first\-order condition, \\\[ \\begin{equation} \\frac{\\partial g}{\\partial f} \= p \\frac{x}{1\+fx} \+ (1\-p) \\frac{\-1}{1\-f} \= 0 \\end{equation} \\] Solving this first\-order condition for \\(f\\) gives \\\[ \\begin{equation} \\mbox{The Kelly criterion: } f^\* \= \\frac{px \-(1\-p)}{x} \\end{equation} \\] This is the optimal fraction of the bankroll that should be invested in each wager. Note that we are back to the well\-known formula of **Edge/Odds** we saw before. 20\.9 Entropy ------------- Entropy is defined by physicists as the extent of disorder in the universe. Entropy in the universe keeps on increasing. Things get more and more disorderly. The arrow of time moves on inexorably, and entropy keeps on increasing. It is intuitive that as the entropy of a communication channel increases, its informativeness decreases. The connection between entropy and informativeness was made by Claude Shannon, the father of information theory. See Shannon (1948\). * Shannon, Claude (1948\). “A Mathematical Theory of Communication,” *The Bell System Technical Journal* 27, 379–423\. \[ [https://www.cs.ucf.edu/\~dcm/Teaching/COP5611\-Spring2012/Shannon48\-MathTheoryComm.pdf](https://www.cs.ucf.edu/~dcm/Teaching/COP5611-Spring2012/Shannon48-MathTheoryComm.pdf) ] With respect to probability distributions, the entropy of a discrete distribution with probabilities \\(\\{p\_1, p\_2, \\ldots, p\_K\\}\\) is \\\[ \\begin{equation} H \= \- \\sum\_{j\=1}^K p\_j \\ln (p\_j) \\end{equation} \\] For the simple wager we have been considering, entropy is \\\[ \\begin{equation} H \= \-\[p \\ln p \+ (1\-p) \\ln(1\-p)] \\end{equation} \\] This is called Shannon entropy after his seminal work in 1948\. For \\(p\=1/2, 1/5, 1/100\\) entropy is ``` p=0.5; res = -(p*log(p)+(1-p)*log(1-p)) print(res) ``` ``` ## [1] 0.6931472 ``` ``` p=0.2; res = -(p*log(p)+(1-p)*log(1-p)) print(res) ``` ``` ## [1] 0.5004024 ``` ``` p=0.01; res = -(p*log(p)+(1-p)*log(1-p)) print(res) ``` ``` ## [1] 0.05600153 ``` These three cases are in decreasing order of entropy. At \\(p\=0\.5\\) entropy is highest. Note that the normal distribution has the highest entropy among all distributions with a given mean and variance. 20\.10 Linking the Kelly Criterion to Entropy --------------------------------------------- For the particular case of a simple random walk, we have odds \\(x\=1\\). 
In this case, \\\[ f \= p\-(1\-p) \= 2p \- 1 \\] where we see that \\(f\=0\\) when \\(p\=1/2\\), and the optimal average growth rate is \\\[ \\begin{eqnarray} g \&\=\& p \\ln(1\+f) \+(1\-p) \\ln(1\-f) \\\\ \&\=\& p \\ln(2p) \+ (1\-p) \\ln\[2(1\-p)] \\\\ \&\=\& \\ln 2 \+ p \\ln p \+(1\-p) \\ln(1\-p) \\\\ \&\=\& \\ln 2 \- H \\end{eqnarray} \\] where \\(H\\) is the entropy of the distribution of \\(Z\\). For \\(p\=0\.5\\), we have \\\[ \\begin{equation} g \= \\ln 2 \+ 0\.5 \\ln(0\.5\) \+ 0\.5 \\ln (0\.5\) \= 0 \\end{equation} \\] so a fair coin offers no edge and no growth. We note that \\(g\\) is decreasing in entropy, because informativeness declines with entropy and so the portfolio earns less if we have less of an edge, i.e. our winning information is less than perfect. 20\.11 Linking the Kelly criterion to portfolio optimization ------------------------------------------------------------ A small change in the mathematics above leads to an analogous concept for portfolio policy. The value of a portfolio follows the dynamics below \\\[ \\begin{equation} B\_t \= B\_{t\-1} \[1 \+ (1\-f)r \+ f Z\_t] \= B\_0 \\prod\_{i\=1}^t \[1\+r \+f(Z\_i \-r)] \\end{equation} \\] Hence, the growth rate of the portfolio is given by \\\[ \\begin{eqnarray} g\_t(f) \&\=\& \\frac{1}{t} \\ln \\left( \\frac{B\_t}{B\_0} \\right) \\\\ \&\=\& \\frac{1}{t} \\ln \\left( \\prod\_{i\=1}^t \[1\+r \+f(Z\_i \-r)] \\right) \\\\ \&\=\& \\frac{1}{t} \\sum\_{i\=1}^t \\ln \\left( \[1\+r \+f(Z\_i \-r)] \\right) \\end{eqnarray} \\] Taking the limit by applying the law of large numbers, we get \\\[ \\begin{equation} g(f) \= \\lim\_{t \\rightarrow \\infty} g\_t(f) \= E\[\\ln(1\+r \+ f (Z\-r))] \\end{equation} \\] Hence, maximizing the growth rate of the portfolio is the same as maximizing expected log utility. For a much more detailed analysis, see Browne and Whitt ([1996](#ref-CIS-130546)). 20\.12 Implementing day trading ------------------------------- We may choose any suitable distribution for the asset \\(Z\\). Suppose \\(Z\\) is normally distributed with mean \\(\\mu\\) and variance \\(\\sigma^2\\). Then we just need to find \\(f\\) such that we have \\\[ \\begin{equation} f^\* \= \\mbox{argmax}\_f \\; \\; E\[\\ln(1\+r \+ f (Z\-r))] \\end{equation} \\] This may be done numerically. Note now that this does not guarantee that \\(0 \< f \< 1\\), so ruin is no longer precluded. How would a day\-trader think about portfolio optimization? His problem would be closer to that of a gambler, because he is very much like someone at the tables, making a series of bets whose outcomes become known in very short time frames. A day\-trader can easily look at his history of round\-trip trades and see how many of them made money, and how many lost money. He would then obtain an estimate of \\(p\\), the probability of winning, which is the fraction of total round\-trip trades that make money. The Lavinio ([2000](#ref-Lavinio)) \\(d\\)\-ratio is known as the **gain\-loss** ratio and is as follows: \\\[ \\begin{equation} d \= \\frac{n\_d \\times \\sum\_{j\=1}^n \\max(0,\-Z\_j)}{n\_u \\times \\sum\_{j\=1}^n \\max(0,Z\_j)} \\end{equation} \\] where \\(n\_d\\) is the number of down (loss) trades, \\(n\_u\\) is the number of up (gain) trades, \\(n \= n\_d \+ n\_u\\), and \\(Z\_j\\) are the returns on the trades. In our original example at the beginning of this chapter, we have odds of 4:1, implying \\(n\_d\=4\\) loss trades for each win (\\(n\_u\=1\\)) trade, and a winning trade nets \\(\+4\\), and a losing trade nets \\(\-1\\). 
Hence, we have \\\[ \\begin{equation} d \= \\frac{4 \\times (1\+1\+1\+1\)}{1 \\times 4} \= 4 \= x \\end{equation} \\] which is just equal to the odds. Once, these are computed, the day\-trader simply plugs them in to the formula we had before, i.e., \\\[ \\begin{equation} f \= \\frac{px \- (1\-p)}{x} \= p \- \\frac{(1\-p)}{x} \\end{equation} \\] Of course, here \\(p\=0\.2\\). A trader would also constantly re\-assess the values of \\(p\\) and \\(x\\) given that the markets change over time. 20\.13 Casino Games ------------------- The statistics of various casino games are displayed in [http://wizardofodds.com/gambling/house\-edge/](http://wizardofodds.com/gambling/house-edge/). To recap, note that the Kelly criterion maximizes the average bankroll and also minimizes the risk of ruin, but is of no use if the house had an edge. **You** need to have an edge before it works. But then it really works! It is not a short\-term formula and works over a long sequence of bets. Naturally it follows that it also minimizes the number of bets needed to double the bankroll. In a neat paper, E. O. Thorp ([2011](#ref-RePEc:wsi:wschap:9789814293501_0054)) presents various Kelly rules for blackjack, sports betting, and the stock market. Reading E. Thorp ([1962](#ref-ThorpBeatDealer)) for blackjack is highly recommended. And of course there is the great story of the MIT Blackjack Team in Mezrich ([2002](#ref-Mezrich)), in the well\-known book behind the movie “21” \[ [https://www.youtube.com/watch?v\=ZFWfXbjl95I](https://www.youtube.com/watch?v=ZFWfXbjl95I) ]. Here is an example from E. O. Thorp ([2011](#ref-RePEc:wsi:wschap:9789814293501_0054)). Suppose you have an edge where you can win \\(\+1\\) with probability \\(0\.51\\), and lose \\(\-1\\) with probability \\(0\.49\\) when the blackjack deck is **hot** and when it is cold the probabilities are reversed. We will bet \\(f\\) on the hot deck and \\(af, a\<1\\) on the cold deck. We have to bet on cold decks just to prevent the dealer from getting suspicious. Hot and cold decks occur with equal probability. Then the Kelly growth rate is \\\[ \\begin{equation} g(f) \= 0\.5 \[0\.51 \\ln(1\+f) \+ 0\.49 \\ln(1\-f)] \+ 0\.5 \[0\.49 \\ln(1\+af) \+ 0\.51 \\ln(1\-af)] \\end{equation} \\] If we do not bet on cold decks, then \\(a\=0\\) and \\(f^\*\=0\.02\\) using the usual formula. As \\(a\\) increases from 0 to 1, we see that \\(f^\*\\) decreases. Hence, we bet less of our pot to make up for losses from cold decks. We compute this and get the following: \\\[ \\begin{eqnarray} a\=0 \& \\rightarrow \& f^\* \= 0\.020\\\\ a\=1/4 \& \\rightarrow \& f^\* \= 0\.014\\\\ a\=1/2 \& \\rightarrow \& f^\* \= 0\.008\\\\ a\=3/4 \& \\rightarrow \& f^\* \= 0\.0032 \\end{eqnarray} \\]
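The four values of \\(f^\*\\) above can be reproduced numerically. The following small R sketch (an added illustration, not from the original text) maximizes the growth rate \\(g(f)\\) over \\(f\\) for each value of \\(a\\) using base R's `optimize` function.

```
#Numerically maximize the hot/cold deck growth rate g(f) for several values of a
g = function(f, a) {
  0.5*(0.51*log(1+f) + 0.49*log(1-f)) + 0.5*(0.49*log(1+a*f) + 0.51*log(1-a*f))
}
for (a in c(0, 1/4, 1/2, 3/4)) {
  fstar = optimize(g, interval=c(0,0.5), a=a, maximum=TRUE)$maximum
  print(c(a=a, fstar=round(fstar,4)))   #recovers the f* values listed above
}
```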
Chapter 21 Bidding it Up: Auctions ================================== 21\.1 Introduction ------------------ Auctions comprise one of the oldest market forms, and are still a popular mechanism for selling various assets and their related price discovery. In this chapter we will study different auction formats, bidding theory, and revenue maximization principles. Hal Varian, Chief Economist at Google (NYT, Aug 1, 2002\) writes: “Auctions, one of the oldest ways to buy and sell, have been reborn and revitalized on the Internet. When I say “old”, I mean it. Herodotus described a Babylonian marriage market, circa 500 B.C., in which potential wives were auctioned off. Notably, some of the brides sold for a negative price. The Romans used auctions for many purposes, including auctioning off the right to collect taxes. In A.D. 193, the Praetorian Guards even auctioned off the Roman empire itself! We don’t see auctions like this anymore (unless you count campaign finance practices), but auctions are used for just about everything else. Online, computer\-managed auctions are cheap to run and have become increasingly popular. EBay is the most prominent example, but other, less well\-known companies use similar technology." For a review paper, see: [http://algo.scu.edu/\~sanjivdas/DasSundaram\_FMII1996\_AuctionTheory.pdf](http://algo.scu.edu/~sanjivdas/DasSundaram_FMII1996_AuctionTheory.pdf) 21\.2 Overview -------------- Auctions have many features, but the key ingredient is **information asymmetry** between seller and buyers. The seller may know more about the product than the buyers, and the buyers themselves might have differential information about the item on sale. Moreover, buyers also take into account imperfect information about the behavior of the other bidders. We will examine how this information asymmetry plays into bidding strategy in the mathematical analysis that follows. Auction market mechanisms are **explicit**, with the prices and revenue a direct consequence of the auction design. In contrast, in other markets, the interaction of buyers and sellers might be more implicit, as in the case of commodities, where the market mechanism is based on demand and supply, resulting in the implicit, proverbial “invisible hand” setting prices. There are many *examples* of active auction markets, such as auctions of art and valuables, eBay, Treasury securities, Google ad auctions, and even the New York Stock Exchange, which is an example of a continuous call auction market. Auctions may be for a **single unit** (e.g., art) or **multiple units** (e.g., Treasury securities). 21\.3 Auction types ------------------- The main types of auctions may be classified as follows: 1. English (E): highest bid wins. The auction is open, i.e., bids are revealed to all participants as they occur. This is an ascending price auction. 2. Dutch (D): auctioneer starts at a high price and calls out successively lower prices. First bidder accepts and wins the auction. Again, bids are open. 3. 1st price sealed bid (1P): Bids are sealed. Highest bidder wins and pays his price. 4. 2nd price sealed bid (2P): Same as 1P but the price paid by the winner is the second\-highest price. Same as the auction analyzed by William Vickrey in his seminal paper in 1961 that led to a Nobel prize. See . 5. Anglo\-Dutch (AD): Open, ascending\-price auction till only two bidders remain, then it becomes sealed\-bid. 
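As a toy illustration of how the payment rule differs across the sealed\-bid formats above (an added sketch with hypothetical bids, not from the original text), note that the winner is the same under 1P and 2P, but the price paid differs:

```
#Winner and payment under first-price (1P) vs second-price (2P) sealed-bid rules
bids = c(4.2, 5.1, 3.8, 4.9)     #hypothetical sealed bids
winner = which.max(bids)
pay_1P = max(bids)                #winner pays his own (highest) bid
pay_2P = max(bids[-winner])       #winner pays the second-highest bid
print(c(winner=winner, pay_1P=pay_1P, pay_2P=pay_2P))
```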
21\.4 Value Determination ------------------------- The eventual outcome of an auction is price/value discovery of the item being sold. There are two characterizations of this value determination process, depending on the nature of the item being sold. 1. Independent private values model: Each buyer bids his own independent valuation of the item at sale (as in regular art auctions). 2. Common\-values model: Bidders aim to discover a common price, as in Treasury auctions. This is because there is usually an after market in which common value is traded. 21\.5 Bidder Types ------------------ The assumptions made about the bidders impacts the revenue raised in the auction and the optimal auction design chosen by the seller. We consider two types of bidders. 1. Symmetric: all bidders observe the same probability distribution of bids and **stop\-out** (SP) prices. The stop out price is the price of the lowest winning bid for the last unit sold. This is a robust assumption when markets are competitive. 2. Asymmetric or non\-symmetric. Here the bidders may have different distributions of value. This is often the case when markets are segmented. Example: bidding for firms in merger and acquisition deals. 21\.6 Benchmark Model (BM) -------------------------- We begin by analyzing what is known as the benchmark model. It is the simplest framework in which we can analyze auctions. It is based on 4 main assumptions: 1. Risk\-neutrality of bidders. We do not need utility functions in the analysis. 2. Private\-values model. Every bidder has her own value for the item. There is a distribution of bidders’ private values. 3. Symmetric bidders. Every bidder faces the same distribution of private values mentioned in the previous point. 4. Payment by winners is a function of bids alone. For a counterexample, think of payment via royalties for a book contract which depends on post auction outcomes. Or the bidding for movie rights, where the buyer takes a part share of the movie with the seller. 21\.7 Properties of the BM -------------------------- The following are the results and properties of the BM. 1. D \= 1P. That is, the Dutch auction and first price auction are equivalent to bidders. These two mechanisms are identical because in each the bidder needs to choose how high to bid without knowledge of the other bids. 2. In the BM, the optimal strategy is to bid one’s true valuation. This is easy to see for D and 1P. In both auctions, you do not see any other lower bids, so you bid up to your maximum value, i.e., one’s true value, and see if the bid ends up winning. For 2P, if you bid too high you overpay, bid too low you lose, so best to bid one’s valuation. For E, it’s best to keep bidding till price crosses your valuation (reservation price). 3. Equilibria types: * Dominant: A situation where bidders bid their true valuation irrespective of other bidders bids. Satisfied by E and 2P. * Nash: Bids are chosen based on the best guess of other bidders’ bids. Satisfied by D and 1P. 21\.8 Auction Math and Stats: Seller’s Expected Revenue ------------------------------------------------------- We now get away from the abstract definition of different types of auctions and work out an example of an auctions *equilibrium*. Let \\(F\\) be the probability distribution of the bids. And define \\(v\_i\\) as the true value of the \\(i\\)\-th bidder, on a continuum between 0 and 1\. Assume bidders are ranked in order of their true valuations \\(v\_i\\). How do we interpret \\(F(v)\\)? 
Think of the bids as being drawn from say, a beta distribution \\(F\\) on \\(v \\in (0,1\)\\), so that the probability of a very high or very low bid is lower than a bid around the mean of the distribution. The expected difference between the first and second highest bids is, given \\(v\_1\\) and \\(v\_2\\): \\\[ D \= \[1\-F(v\_2\)](v\_1\-v\_2\) \\] That is, multiply the difference between the first and second bids by the probability that \\(v\_2\\) is the second\-highest bid. Or think of the probability of there being a bid higher than \\(v\_2\\). Taking first\-order conditions (from the seller’s viewpoint): \\\[ \\frac{\\partial D}{\\partial v\_1} \= \[1\-F(v\_1\)] \- (v\_1\-v\_2\)F'(v\_1\)\=0 \\] Note that \\(v\_1 \\equiv^d v\_2\\), given bidders are symmetric in BM. The symbol \\(\\equiv^d\\) means “equivalent in distribution”. (Definition of equivalence in distribution: \\(v\_1, v\_2\\) are equivalent in distribution if \\(Pr\[v\_1 \\leq V] \= Pr\[v\_2 \\leq V]\\) for all values of \\(V\\).) This implies that \\\[ v\_1\-v\_2 \= \\frac{1\-F(v\_1\)}{f(v\_1\)} \\] The expected revenue to the seller is the same as the expected 2nd price (i.e., bounded below by this price). The second price comes from the following re\-arranged equation: \\\[ v\_2 \= v\_1 \- \\frac{1\-F(v\_1\)}{f(v\_1\)} \\] 21\.9 Optimization by bidders ----------------------------- The goal of bidder \\(i\\) is to find a function/bidding rule \\(B\\) that is a function of the private value \\(v\_i\\) such that \\\[ b\_i \= B(v\_i) \\] where \\(b\_i\\) is the actual bid. If there are \\(n\\) bidders, then \\\[ \\begin{eqnarray\*} Pr\[\\mbox{bidder } i \\mbox{ wins}] \&\=\& \\mbox{Pr}\[b\_i \> B(v\_j)], \\quad \\forall j \\neq i, \\\\ \&\=\& \[F(B^{\-1}(b\_i))]^{n\-1} \\end{eqnarray\*} \\] Each bidder tries to maximize her expected profit relative to her true valuation, which is \\\[ \\pi\_i \= (v\_i \- b\_i)\[F(B^{\-1}(b\_i))]^{n\-1} \= (v\_i\-b\_i)\[F(v\_i)]^{n\-1}, \\quad \\quad (EQN1\) \\] again invoking the notion of bidder symmetry. Optimize by taking \\(\\frac{\\partial \\pi\_i}{\\partial b\_i} \= 0\\). We can get this by taking first the total derivative of profit relative to the bidder’s value as follows (noting that \\(\\pi\_i\[v\_i,b\_i(v\_i)]\\) is the full form of the profit function): \\\[ \\frac{d \\pi\_i}{d v\_i} \= \\frac{\\partial \\pi\_i}{\\partial v\_i} \+ \\frac{\\partial \\pi\_i}{\\partial b\_i}\\frac{db\_i}{dv\_i} \= \\frac{\\partial \\pi\_i}{\\partial v\_i} \\] which reduces to the partial derivative of profit with respect to personal valuation because \\(\\frac{\\partial \\pi\_i}{\\partial b\_i} \= 0\\). This useful first partial derivative is taken from equation (EQN1\): \\\[ \\frac{\\partial \\pi\_i}{\\partial v\_i} \= \[F(B^{\-1}(b\_i))]^{n\-1} \= \[F(v\_i)]^{n\-1} \\] Now, let \\(v\_l\\) be the lowest bid. Integrate the previous equation to get \\\[ \\pi\_i \= \\int\_{v\_l}^{v\_i} \[F(x)]^{n\-1} \\; dx \\quad \\quad (EQN2\) \\] The previous is derived from the Fundamental Theorem of Calculus which is \\\[ F(x) \= \\int\_{x\_l}^x f(u) \\; du, \\quad \\quad \\int\_{v\_l}^{v\_h} f(u)du \= F(v\_h) \- F(v\_l) \\] Equating (EQN1\) and (EQN2\) gives \\\[ b\_i \= v\_i \- \\frac{\\int\_{v\_l}^{v\_i} \[F(x)]^{n\-1} \\; dx}{\[F(v\_i)]^{n\-1}} \= B(v\_i) \\] which gives the bidding rule \\(B(v\_i)\\) entirely in terms of the personal valuation of the bidder. 
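As a quick numerical check of this bidding rule (an added sketch, not in the original text), the integral can be evaluated directly for any assumed value distribution. Here we take valuations to be uniform on (0,1), the case worked out in closed form next.

```
#Numerical evaluation of the bidding rule B(v) for an assumed value distribution Fdist
bid_rule = function(v, n, Fdist, v_l=0) {
  num = integrate(function(x) Fdist(x)^(n-1), lower=v_l, upper=v)$value
  v - num/Fdist(v)^(n-1)
}
#Uniform valuations on (0,1), n=5 bidders, personal value v=0.8
print(bid_rule(0.8, 5, punif))  #numerical bid
print((5-1)*0.8/5)              #closed form (n-1)v/n derived next; both give 0.64
```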
If, for example, \\(F\\) is uniform, then assuming \\(v \\in (0,1\)\\), note that \\(F(v)\=v\\): \\\[ \\begin{eqnarray\*} b \&\=\& v \- \\frac{\\int\_0^v F(x)^{n\-1} \\;dx}{F(v)^{n\-1}} \\\\ \&\=\& v \- \\frac{\\int\_0^v x^{n\-1} \\;dx}{v^{n\-1}} \\\\ \&\=\& v \- \\frac{\[x^n/n]\_0^v}{v^{n\-1}} \\\\ \&\=\& v \- \\frac{v^n/n}{v^{n\-1}} \\end{eqnarray\*} \\] resulting in: \\\[ b \= B(v) \= \\frac{(n\-1\)v}{n} \\] Here we see that we “shade” our bid down slightly from our personal valuation. We bid less than true valuation to leave some room for profit. The amount of shading of our bid depends on how much competition there is, i.e., the number of bidders \\(n\\). Note that \\\[ \\frac{\\partial B}{\\partial v\_i} \> 0, \\quad \\quad \\frac{\\partial B}{\\partial n} \> 0 \\] i.e., you increase your bid as your personal value rises, and as the number of bidders increases. ### 21\.9\.1 Example We are bidding for a used laptop on eBay. Suppose the distribution of bids follows a beta distribution with parameters 2 and 4, a minimum value of $50, and a maximum value of $500\. Our personal value for the machine is $300\. Assume 10 other bidders. How much should we bid? ``` x = seq(0,1,1/1000) y = x*450+50 prob_y = dbeta(x,2,4) print(c("check=",sum(prob_y)/1000)) ``` ``` ## [1] "check=" "0.999998333334" ``` ``` prob_y = prob_y/sum(prob_y) plot(y,prob_y,type="l") grid(lwd=3) ``` Note that we have used the Beta distribution, with shape parameters \\(a\=2\\) and \\(b\=4\\). The beta distribution density function is: \\\[ Beta(x,a,b) \= \\frac{\\Gamma(a\+b)}{\\Gamma(a) \\Gamma(b)} x^{a\-1} (1\-x)^{b\-1} \\] for \\(x\\) taking values between 0 and 1\. The distribution of bids from 50 to 500 is shown above. An excellent blog post on the intuition for the beta distribution is here: [http://stats.stackexchange.com/questions/47771/what\-is\-the\-intuition\-behind\-beta\-distribution/47782\#47782](http://stats.stackexchange.com/questions/47771/what-is-the-intuition-behind-beta-distribution/47782#47782) Check the mean of the distribution. ``` mn = sum(y*prob_y) print(c("mean=",mn)) ``` ``` ## [1] "mean=" "200.000250000167" ``` Check the standard deviation. ``` stdev = sqrt(sum(y^2*prob_y)-mn^2) print(c("stdev=",stdev)) ``` ``` ## [1] "stdev=" "80.1782055353774" ``` Now use a computational approach to solve the problem. We program up equation (EQN1\) and then find the bid at which this is maximized. ``` x = seq(0,1,1/1000) y = 50+450*x cumprob_y = pbeta(x,2,4) exp_profit = (300-y)*cumprob_y^10 idx = which(exp_profit==max(exp_profit)) print(c("Optimal Bid = ",y[idx])) ``` ``` ## [1] "Optimal Bid = " "271.85" ``` For comparison, the uniform\-case shading formula with \\(n\=11\\) bidders (10 competitors plus ourselves) gives \\(\\frac{n\-1}{n} \\times 300 \= 272\.73\\), close to the grid\-search answer. ``` print(300*10/11) ``` ``` ## [1] 272.7273 ``` ``` print(idx) ``` ``` ## [1] 494 ``` From the plot, we can see the point of peak profit. ``` plot(y[1:550],exp_profit[1:550],type="l") ``` Hence, the bid of 271\.85 is a bit below the reservation price of $300; it is roughly 10% lower. If there were only 5 other bidders, then the bid would be: ``` #What if there were only 5 other bidders? exp_profit = (300-y)*cumprob_y^5 idx = which(exp_profit==max(exp_profit)) print(c("Optimal Bid = ",y[idx])) ``` ``` ## [1] "Optimal Bid = " "254.3" ``` Now, we shade the bid down much more, because there are fewer competing bidders, and so the chance of winning with a lower bid increases.
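As a further illustration (our own addition, using the same setup as the example above), we can trace out how the optimal bid changes with the number of competing bidders, which makes the comparative static \\(\\partial B/\\partial n \> 0\\) concrete:

```
# Sketch: re-run the grid search above for several numbers of competing bidders n.
# Larger n should push the optimal bid closer to the personal value of 300.
x = seq(0, 1, 1/1000)
y = 50 + 450*x
cumprob_y = pbeta(x, 2, 4)
for (n in c(2, 5, 10, 20)) {
  exp_profit = (300 - y) * cumprob_y^n   # expected profit as in (EQN1)
  cat("n =", n, " optimal bid =", y[which.max(exp_profit)], "\n")
}
```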
21\.10 Treasury Auctions ------------------------ This section is based on the published paper by S. Das and Sundaram ([1996](#ref-DasSundAuctions)). We move on from single\-unit auctions to a very common multi\-unit auction. Treasury auctions are the mechanism by which the Federal government issues its bills, notes, and bonds. Auctions are usually held on Wednesdays. Bids are received up to early afternoon, after which the top bidders are given their quantities requested (up to prescribed ceilings for any one bidder), until there is no remaining supply of securities. Even before the auction, Treasury securities trade in what is known as the **when\-issued** or pre\-market. This market gives early indications of price that may lead to tighter clustering of bids in the auction. There are two types of dealers in a Treasury auction: primary dealers, i.e., the big banks and investment houses, and smaller independent bidders. The auction is really played out amongst the primary dealers. They place what are known as **competitive** bids versus the others, who place **non\-competitive bids**. Bidders also keep an eye on the secondary market that ensues right after the auction. In many ways, the bidders are also influenced by the possible prices they expect the paper to be trading at in the secondary market, and indicators of these prices come from the when\-issued market. The winner in an auction experiences regret, because he knows he bid higher than everyone else, and senses that he overpaid. This phenomenon is known as the **winner’s curse**. Treasury auction participants talk amongst each other to mitigate winner’s curse. The Fed also talks to primary dealers to mitigate their winner’s curse and thereby induce them to bid higher, because someone with lower propensity for regret is likely to bid higher. 21\.11 DPA or UPA? ------------------ DPA stands for **discriminating price auction** and UPA for **uniform price auction**. The former was the preferred format for Treasury auctions and the latter was introduced only recently. In a DPA, the highest bidder gets his bid quantity at the price he bid. Then the next highest bidder wins his quantity at the price he bid. And so on, until the supply of Treasury securities is exhausted. In this manner the Treasury seeks to maximize revenue by filling each winning bid at its own bid price. Since the prices paid by each winning bidder are different, the auction is called **discriminating** in price. Revenue maximization is attempted by walking down the demand curve; see the Figure below. The shaded area quantifies the revenue raised. In a UPA, the highest bidder gets his bid quantity at the price of the last winning bid (this price is also known as the stop\-out price). Then the next highest bidder wins his quantity at the stop\-out price. And so on, until the supply of Treasury securities is exhausted. Thus, the UPA is also known as a **single\-price** auction. See the Figure above, lower panel, where the shaded area quantifies the revenue raised. It may intuitively appear that the DPA will raise more revenue, but in fact, empirically, the UPA has been more successful. This is because the UPA incentivizes higher bids, as the winner’s curse is mitigated. In a DPA, bids are shaded down on account of winner’s curse: winning means you paid higher than what a large number of other bidders were willing to pay. Some countries like Mexico have used the UPA format. The U.S. started with the DPA, and now runs both auction formats.
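To make the two formats concrete, here is a small illustrative R sketch (with made\-up bids and quantities, not data from the text) that allocates a fixed supply down the demand curve and compares the revenue raised under a DPA and a UPA for the *same* set of bids:

```
# Hypothetical bids (prices) and quantities, sorted from highest to lowest bid
bids   = c(101.2, 101.0, 100.8, 100.5, 100.1)
qty    = c(20, 30, 25, 25, 40)
supply = 80
# Walk down the demand curve, filling bids until supply runs out
remaining = pmax(supply - c(0, cumsum(qty)[-length(qty)]), 0)
filled    = pmin(qty, remaining)
stop_out  = min(bids[filled > 0])          # price of the last (lowest) winning bid
revenue_dpa = sum(filled * bids)           # DPA: each winner pays his own bid
revenue_upa = sum(filled) * stop_out       # UPA: everyone pays the stop-out price
print(c(DPA = revenue_dpa, UPA = revenue_upa, stop_out = stop_out))
```

Holding the bids fixed, the DPA mechanically collects more (the shaded area under the demand curve) than the UPA; the empirical point in the text is that the UPA changes bidding behavior, inducing higher bids that can more than offset this difference.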
An interesting study examined markups achieved over yields in the when\-issued market as an indicator of the success of the two auction formats. They examined the auctions of 2\- and 5\-year notes from June 1991 to 1994 \[from Mulvey, Archibald and Flynn, US Office of the Treasury]. See Figure below. The results of a regression of the markups on bid dispersion and duration of the auctioned securities show that markups increase in the dispersion of bids. If we think of bid dispersion as a proxy for the extent of winner’s curse, then we can see that the yields are pushed higher in the UPA than the DPA, therefore prices are lower in the UPA than the DPA. Markups are decreasing in the duration of the securities. The bid\-ask spread is shown in the Figure below. 21\.12 Collusion ---------------- Here are some examples of collusion in auctions, which can be explicit or implicit. Collusion amongst buyers results in mitigating the winner’s curse, and may work to either raise or lower revenues for the seller. * (Varian) 1999: German phone spectrum auction. Bids had to be in minimum 10% increments for multiple units. One firm bid 18\.18 million and 20 million for 2 lots; note that 18\.18 million plus the minimum 10% increment is 20 million (18\.18 × 1\.1 ≈ 20\). They thereby signaled that everyone could center at 20 million, which they believed was the fair price. This sort of implicit collusion averts a bidding war. * In Treasury auctions, firms can discuss bids, which is encouraged by the Treasury (why?). The restriction on cornering by placing a ceiling on how much of the supply one party can obtain in the auction aids collusion (why?). Repeated games in Treasury security auctions also aid collusion (why?). * Multiple\-unit auctions also allow punitive behavior: firms may bid up prices on lots they do not want in order to signal that others should not bid on the lots they do want. 21\.13 Web Advertising Auctions ------------------------------- The Google AdWords program enables you to create advertisements which will appear on relevant Google search results pages and its network of partner sites. See <http://www.adwords.google.com>. The Google AdSense (<https://www.google.com/adsense/>) program differs in that it delivers Google AdWords ads to individuals’ websites. Google then pays web publishers for the ads displayed on their site based on user clicks on ads or on ad impressions, depending on the type of ad. The material here refers to the elegant paper by Aggarwal, Goel, and Motwani (2006\) on keyword auctions in AdWords: [http://web.stanford.edu/\~ashishg/papers/laddered\_auction\_extended.pdf](http://web.stanford.edu/~ashishg/papers/laddered_auction_extended.pdf) We first list some basic features of search engine advertising models. Aggarwal went on to work for Google as they adopted this algorithm from her thesis at Stanford. 1. Search engine advertising uses three models: (a) CPM, cost per thousand views, (b) CPC, cost per click, and (c) CPA, cost per acquisition. These are all at different stages of the search page experience. 2. CPC seems to be mostly used. There are 2 models here: 1. **Direct ranking**: the Overture model. (Price ordering of bidders) 2. **Revenue ranking**: the Google model. (Revenue ordering, with modifications) 3. The merchant pays the price of the **next** click (different from **second** price auctions). This is non\-truthful under both ranking models, as we will see in a subsequent example. That is, bidders will not bid their true private valuations. 4. Asymmetric: there is an incentive to underbid, none to overbid. 5. Iterative: by placing many bids and watching responses, a bidder can figure out the bid ordering of other bidders for the same keywords, or basket of keywords. However, this is not obvious or simple.
Google used to provide the GBS, or Google Bid Simulator, so that sellers using AdWords could figure out their optimal bids. See the following for more details on AdWords: google.com/adwords/. 6. If revenue ranking were truthful, it would maximize utility of auctioneer and merchant. (Known as auction **efficiency**.) 7. Innovation: the **laddered auction**. Randomized weights are attached to bids. If weights are 1, then it’s direct ranking. If weights are CTR (click\-through rate), i.e., revenue\-based, it’s the revenue ranking. To get some insights about the process of optimal bidding in AdWords auctions, see the Hal Varian video: [http://www.youtube.com/watch?v\=jRx7AMb6rZ0](http://www.youtube.com/watch?v=jRx7AMb6rZ0). Three\-hour version: [https://www.youtube.com/watch?v\=VqCCAIeW4KY](https://www.youtube.com/watch?v=VqCCAIeW4KY) * Aggarwal, Gagan, Ashish Goel, and Rajeev Motwani (2006\). “Truthful Auctions for Pricing Search Keywords,” Working paper, Stanford University. 21\.14 Quick tutorial on Google Ad Auctions ------------------------------------------- Here is a quick summary of Hal Varian’s video. A merchant can figure out what the maximum bid per click should be in the following steps: 1. **Maximum profitable CPA**: This is the profit margin on the product. For example, if the selling price is $300 and cost is $200, then the profit margin is $100, which is also the maximum cost per acquisition (CPA) a seller would pay. 2. **Conversion Rate (CR)**: This is the rate at which clicks result in sales. Hence, CR is equal to the number of sales divided by the number of clicks. So, if for every 100 clicks we get 5 sales, the CR is 5%. 3. **Value per Click (VPC)**: Equal to the CR times the CPA. In the example, we have \\(VPC \= 0\.05 \\times 100 \= \\$5\\). 4. **Determine the profit\-maximizing CPC bid**: As the bid is lowered, the number of clicks falls and revenue falls, but the CPC falls as well, so the profit after acquisition costs can rise until the sweet spot is reached. To find the number of clicks expected at each bid price, use the Google Bid Simulator. See the table below (from Google) for the economics at different bid prices. Note that the price you bid is not the price you pay for the click, because it is a **next\-price** auction, based on a revenue ranking model, so the exact price you pay is based on Google’s model, discussed in the next section. We see that the profit is maximized at a bid of $4\. Just as an example, note that the profit is equal to \\\[ (VPC \- CPC) \\times \\mbox{\#Clicks} \= (CPA \\times CR \- CPC) \\times \\mbox{\#Clicks} \\] Hence, for a bid of $4, we have \\\[ (5 \- 407\.02/154\) \\times 154 \= 362\.98 \\] As pointed out by Varian, the rule is to compute the ICC (incremental cost per click), and find where it equals the VPC. The ICC at a bid of $5\.00 is \\\[ ICC(5\.00\) \= \\frac{697\.42\-594\.27}{208\-190} \= 5\.73 \> 5 \\] Then \\\[ ICC(4\.50\) \= \\frac{594\.27\-407\.02}{190\-154} \= 5\.20 \> 5 \\] \\\[ ICC(4\.00\) \= \\frac{407\.02\-309\.73}{154\-133} \= 4\.63 \< 5 \\] Hence, the optimal bid lies between 4\.00 and 4\.50\.
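These incremental calculations are easy to replicate in R. The snippet below is our own check, reusing the clicks and cost numbers quoted above; it computes the ICC at each step of the bid\-simulator table and shows where it crosses the VPC of $5:

```
# Verify the ICC numbers above: incremental cost divided by incremental clicks,
# moving down the table from a bid of 5.00 toward 2.50.
Clicks = c(208, 190, 154, 133, 113, 86)
Cost   = c(697.42, 594.27, 407.02, 309.73, 230.00, 140.37)
ICC = diff(Cost) / diff(Clicks)
print(round(ICC, 2))   # 5.73 5.20 4.63 3.99 3.32; the crossing of VPC = 5 lies between the 4.50 and 4.00 bid levels
```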
``` #GOOGLE CLICK AUCTION MATH Bid = seq(5,2.5,-0.5); print(c("Bid:", Bid)) ``` ``` ## [1] "Bid:" "5" "4.5" "4" "3.5" "3" "2.5" ``` ``` Clicks = c(208,190,154,133,113,86); print(c("Clicks:",Clicks)) ``` ``` ## [1] "Clicks:" "208" "190" "154" "133" "113" "86" ``` ``` Cost = c(697.42,594.27,407.02,309.73,230.00,140.37); print(c("Cost:",Cost)) ``` ``` ## [1] "Cost:" "697.42" "594.27" "407.02" "309.73" "230" "140.37" ``` ``` VPC = 5 Revenue = VPC*Clicks; print(c("Revenue = ",Revenue)) ``` ``` ## [1] "Revenue = " "1040" "950" "770" "665" ## [6] "565" "430" ``` ``` Profit = Revenue - Cost; print(c("Profit = ",Profit)) ``` ``` ## [1] "Profit = " "342.58" "355.73" "362.98" "355.27" "335" ## [7] "289.63" ``` 21\.15 Next Price Auctions -------------------------- In a next\-price auction, the CPC is based on the bid immediately below your own. Thus, you do not pay your bid price, but the one in the advertising slot just lower than yours. Hence, if your winning bid is for position \\(j\\) on the search screen, the price paid is that of the winning bid at position \\(j\+1\\). See the paper by Aggarwal, Goel, and Motwani ([2006](#ref-Aggarwal:2006:TAP:1134707.1134708)). Our discussion here is based on their paper. Let the true valuation (revenue) expected by bidder/seller \\(i\\) be equal to \\(v\_i\\). The CPC is denoted \\(p\_i\\). Let the click\-through\-rate (CTR) for seller/merchant \\(i\\) at a position \\(j\\) (where the ad shows up on the search screen) be denoted \\(CTR\_{ij}\\). \\(CTR\\) is the ratio of the number of clicks to the number of “impressions”, i.e., the number of times the ad is shown. * The **utility** to the seller is given by \\\[ \\mbox{Utility} \= CTR\_{ij} (v\_i\-p\_i) \\] * Example: 3 bidders \\(A\\), \\(B\\), \\(C\\), with private values 200, 180, 100\. There are two slots or ad positions with \\(CTR\\)s 0\.5 and 0\.4\. If bidder \\(A\\) bids 200, he pays 180, and utility is \\((200\-180\) \\times 0\.5\=10\\). But why not bid 110, for utility of \\((200\-100\) \\times 0\.4\=40\\)? This simple example shows that the next\-price auction is not truthful (a short numerical check follows this list). Also note that your bid determines your ranking but not the price you pay (CPC). * Ranking of bids is based on \\(w\_i b\_i\\), in descending order. If \\(w\_i\=1\\), then we get the Overture direct ranking model. And if \\(w\_i \= CTR\_{ij}\\) then we have Google’s original revenue ranking model. In the example below, the weights range from 0 to 100, not 0 to 1, but this is without any loss of generality. The weights assigned to each merchant bidder may be based on some qualitative ranking such as the Quality Score (QS) of the ad. * Price paid by bidder \\(i\\) is \\(\\frac{w\_{i\+1} b\_{i\+1}}{w\_i}\\). * Separable CTRs: CTRs of merchant \\(i\=1\\) and \\(i\=2\\) are the same for position \\(j\\); there is no bidder\-position dependence.
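As promised above, here is a quick numerical check of the three\-bidder example (a small illustrative calculation of our own, not code from the text):

```
# Utility of bidder A (value 200) in the next-price auction:
# bidding 200 wins slot 1 (CTR 0.5) and pays the next bid, 180;
# bidding 110 instead wins slot 2 (CTR 0.4) and pays the next bid, 100.
ctr = c(0.5, 0.4)
u_truthful = (200 - 180) * ctr[1]   # = 10
u_shaded   = (200 - 100) * ctr[2]   # = 40
print(c(u_truthful, u_shaded))      # bidding below value is better, so truth-telling fails
```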
21\.16 Laddered Auction ----------------------- Aggarwal, Goel, and Motwani ([2006](#ref-Aggarwal:2006:TAP:1134707.1134708)) call the revised auction the **laddered** auction. It gives a unique truthful auction. Assume \\(K\\) slots, indexed by \\(j\\), and bidders are indexed by \\(i\\). The main idea is to set the CPC to \\\[ p\_i \= \\sum\_{j\=i}^K \\left( \\frac{CTR\_{i,j}\-CTR\_{i,j\+1}}{CTR\_{i,i}} \\right) \\frac{w\_{j\+1} b\_{j\+1}}{w\_i}, \\quad 1 \\leq i \\leq K \\] so that \\\[ \\frac{\\\#Clicks\_i}{\\\#Impressions\_i} \\times p\_i \= CTR\_{ii}\\times p\_i \= \\sum\_{j\=i}^K \\left( CTR\_{i,j}\-CTR\_{i,j\+1} \\right) \\frac{w\_{j\+1} b\_{j\+1}}{w\_i} \\] The LHS is the expected revenue to Google per ad impression. Make no mistake, the whole point of the model is to maximize Google’s revenue, while making the auction system more effective for merchants. If this new model results in truthful equilibria, it is good for Google. The weights \\(w\_i\\) are arbitrary and not known to the merchants. Here is the table of \\(CTR\\)s for each slot by seller; these tables are the examples in the AGM 2006 paper, and the specific numbers reappear in the verification code below. The assigned weights and the eventual allocations and prices are shown below. We can verify these calculations as follows. ``` p3 = (0.20-0)/0.20 * 40/50 * 15; print(p3) ``` ``` ## [1] 12 ``` ``` p2 = (0.25-0.20)/0.25 * 50/40 * 16 + (0.20-0)/0.25 * 40/40 * 15; print(p2) ``` ``` ## [1] 16 ``` ``` p1 = (0.40-0.30)/0.40 * 40/60 * 30 + (0.30-0.18)/0.40 * 50/60 * 16 + (0.18-0)/0.40 * 40/60 * 15; p1 ``` ``` ## [1] 13.5 ``` Note: the prices paid are no longer monotone in the slot ranking; the winner of the highest slot may pay less than the winner of the second slot (here \\(p\_1 \= 13\.5 \< p\_2 \= 16\\)). But this does maximize revenue to Google. See the paper for more details, but this equilibrium is unique and truthful. Looking at this model, examine the following questions: * What happens to the prices paid when the \\(CTR\\)s drop rapidly as we go down the slots versus when they drop slowly? * As a merchant, would you prefer that your weight be higher or lower? * What is better for Google, a high dispersion in weights, or a low dispersion in weights? * Can you see that by watching bidding behavior of the merchants, Google can adjust their weights to maximize revenue? By seeing a week’s behavior Google can set weights for the next week. Is this legal? * Is Google better off if the bids are more dispersed than when they are close together? How would you use the data in the table above to answer this question using R? 21\.17 Remaining questions to ponder ------------------------------------ Whereas Google clearly has modeled its AdWords auction to maximize revenue, less is known about how merchants maximize their net revenue per ad, by designing ads and choosing keywords in an appropriate manner. Google offers merchants a product called **Google Bid Simulator** so that the return from an AdWord (keyword) may be determined. In this exercise, you will first take the time to role\-play a merchant who is trying to explore and understand AdWords, and then come up with an approach to maximize the return from a portfolio of AdWords. Here are some questions that will help in navigating the AdWords landscape. 1. What is the relation between keywords and cost\-per\-click (CPC)? 2. What is the Quality Score (QS) of your ad, and how does it relate to keywords and CPC? 3. What defines success in an ad auction? What are its determinants? (Consider the bid amount and ad quality, i.e., Ad Rank.) 4. What is AdRank? What does a higher AdRank buy for a merchant? 5. What are AdGroups and how do they relate to keywords? 6. What is automated CPC bidding? 7. What are the following tools? Keyword tool, Traffic estimator, Placement tool, Contextual targeting tool? 8. What is the incremental cost\-per\-click (ICC)?
Sketch a brief outline of how you might go about optimizing a portfolio of AdWords. Use the concepts we studied in Markowitz portfolio optimization for this.
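As one possible starting point for this exercise (a hypothetical sketch with made\-up numbers, not a prescribed solution), each keyword's net return per dollar of ad spend can be treated as an asset return, after which the usual Markowitz machinery applies:

```
# Hypothetical returns: rows are weeks, columns are keywords (net profit per dollar of spend)
set.seed(42)
R = matrix(rnorm(60, mean = rep(c(0.20, 0.15, 0.10), each = 20), sd = 0.05), ncol = 3)
colnames(R) = c("kw_laptop", "kw_notebook", "kw_ultrabook")   # illustrative keyword names
mu = colMeans(R)                    # expected return per keyword
S  = cov(R)                         # covariance of keyword returns
w_minvar = solve(S, rep(1, 3))      # minimum-variance budget weights (unnormalized)
w_minvar = w_minvar / sum(w_minvar)
print(round(w_minvar, 3))
```

In practice one would add constraints (non\-negative budgets, a total spend limit) and estimate the mean vector and covariance matrix from actual keyword\-level profit data rather than simulated numbers.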
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/tensors.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/autograd.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/optim_1.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/network_1.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/modules.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/optimizers.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/loss_functions.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/optim_2.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/network_2.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/data.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/training_with_luz.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/image_classification_1.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/overfitting.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/training_efficiency.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/image_classification_2.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/image_segmentation.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/tabular_data.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/time_series.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/audio_classification.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/matrix_computations_leastsquares.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/matrix_computations_convolution.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/fourier_transform_dft.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/fourier_transform_fft.html
Machine Learning
skeydan.github.io
https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/wavelets.html
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/preface.html
Preface ======= ``` @Book{, author = {Przemyslaw Biecek and Tomasz Burzykowski}, title = {{Explanatory Model Analysis}}, publisher = {Chapman and Hall/CRC, New York}, year = {2021}, isbn = {9780367135591}, url = {https://pbiecek.github.io/ema/}, } ```
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/do-it-yourself.html
3 Do\-it\-yourself ================== Most of the methods presented in this book are available in both R and Python and can be used in a uniform way. But each of these languages has also many other tools for Explanatory Model Analysis. In this book, we introduce various methods for instance\-level and dataset\-level exploration and explanation of predictive models. In each chapter, there is a section with code snippets for R and Python that shows how to use a particular method. 3\.1 Do\-it\-yourself with R ---------------------------- In this section, we provide a short description of the steps that are needed to set\-up the R environment with the required libraries. ### 3\.1\.1 What to install? Obviously, the R software (R Core Team [2018](#ref-RcoreT)) is needed. It is always a good idea to use the newest version. At least R in version 3\.6 is recommended. It can be downloaded from the CRAN website [https://cran.r\-project.org/](https://cran.r-project.org/). A good editor makes working with R much easier. There are plenty of choices, but, especially for beginners, consider the RStudio editor, an open\-source and enterprise\-ready tool for R. It can be downloaded from <https://www.rstudio.com/>. Once R and the editor are available, the required packages should be installed. The most important one is the `DALEX` package in version 1\.0 or newer. It is the entry point to solutions introduced in this book. The package can be installed by executing the following command from the R command line: ``` install.packages("DALEX") ``` Installation of `DALEX` will automatically take care about installation of other requirements (packages required by it), like the `ggplot2` package for data visualization, or `ingredients` and `iBreakDown` with specific methods for model exploration. ### 3\.1\.2 How to work with `DALEX`? To conduct model exploration with `DALEX`, first, a model has to be created. Then the model has got to be prepared for exploration. There are many packages in R that can be used to construct a model. Some packages are algorithm\-specific, like `randomForest` for random forest classification and regression models (Liaw and Wiener [2002](#ref-randomForest)), `gbm` for generalized boosted regression models (Ridgeway [2017](#ref-gbm)), `rms` with extensions for generalized linear models (Harrell Jr [2018](#ref-rms)), and many others. There are also packages that can be used for constructing models with different algorithms; these include the `h2o` package (LeDell et al. [2019](#ref-h2oPackage)), `caret` (Kuhn [2008](#ref-caret)) and its successor `parsnip` (Kuhn and Vaughan [2019](#ref-parsnipPackage)), a very powerful and extensible framework `mlr` (Bischl et al. [2016](#ref-mlr)), or `keras` that is a wrapper to Python library with the same name (Allaire and Chollet [2019](#ref-kerasPackage)). While it is great to have such a large choice of tools for constructing models, the disadvantage is that different packages have different interfaces and different arguments. Moreover, model\-objects created with different packages may have different internal structures. The main goal of the `DALEX` package is to create a level of abstraction around a model that makes it easier to explore and explain the model. Figure [3\.1](do-it-yourself.html#fig:DALEXarchitecture) illustrates the contents of the package. In particular, function `DALEX::explain` is THE function for model wrapping. 
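As a minimal illustration of this wrapping step (using the `apartments` data shipped with `DALEX`, rather than the examples developed later in the book), a model fitted with any package can be passed to `explain()` like this:

```
library("DALEX")
# fit a model with any package ...
apartments_lm <- lm(m2.price ~ ., data = apartments)
# ... and wrap it into a uniform "explainer" interface
apartments_lm_exp <- explain(model = apartments_lm,
                             data  = apartments[, colnames(apartments) != "m2.price"],
                             y     = apartments$m2.price,
                             label = "Linear Regression")
```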
There is only one argument that is required by the function; it is `model`, which is used to specify the model\-object with the fitted form of the model. However, the function allows additional arguments that extend its functionalities. They are discussed in Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode). Figure 3\.1: The `DALEX` package creates a layer of abstraction around models, allowing you to work with different models in a uniform way. The key function is the `explain()` function, which wraps any model into a uniform interface. Then other functions from the `DALEX` package can be applied to the resulting object to explore the model. ### 3\.1\.3 How to work with `archivist`? As we will focus on the exploration of predictive models, we prefer not to waste space nor time on replication of the code necessary for model development. This is where the `archivist` packages help. The `archivist` package (Biecek and Kosinski [2017](#ref-archivist)) is designed to store, share, and manage R objects. We will use it to easily access R objects for pre\-constructed models and pre\-calculated explainers. To install the package, the following command should be executed in the R command line: ``` install.packages("archivist") ``` Once the package has been installed, function `aread()` can be used to retrieve R objects from any remote repository. For this book, we use a GitHub repository `models` hosted at <https://github.com/pbiecek/models>. For instance, to download a model with the md5 hash `ceb40`, the following command has to be executed: ``` archivist::aread("pbiecek/models/ceb40") ``` Since the md5 hash `ceb40` uniquely defines the model, referring to the repository object results in using exactly the same model and the same explanations. Thus, in the subsequent chapters, pre\-constructed models will be accessed with `archivist` hooks. In the following sections, we will also use `archivist` hooks when referring to datasets. 3\.2 Do\-it\-yourself with Python --------------------------------- In this section, we provide a short description of steps that are needed to set\-up the Python environment with the required libraries. ### 3\.2\.1 What to install? The Python interpreter (Rossum and Drake [2009](#ref-python3)) is needed. It is always a good idea to use the newest version. Python in version 3\.6 is the minimum recommendation. It can be downloaded from the Python website <https://python.org/>. A popular environment for a simple Python installation and configuration is Anaconda, which can be downloaded from website <https://www.anaconda.com/>. There are many editors available for Python that allow editing the code in a convenient way. In the data science community a very popular solution is Jupyter Notebook. It is a web application that allows creating and sharing documents that contain live code, visualizations, and descriptions. Jupyter Notebook can be installed from the website <https://jupyter.org/>. Once Python and the editor are available, the required libraries should be installed. The most important one is the `dalex` library, currently in version `0.2.0`. The library can be installed with `pip` by executing the following instruction from the command line: ``` pip install dalex ``` Installation of `dalex` will automatically take care of other required libraries. ### 3\.2\.2 How to work with `dalex`? There are many libraries in Python that can be used to construct a predictive model. 
Among the most popular ones are algorithm\-specific libraries like `catboost` (Dorogush, Ershov, and Gulin [2018](#ref-catbooost)), `xgboost` (Chen and Guestrin [2016](#ref-xgboost)), and `keras` (Gulli and Pal [2017](#ref-chollet2015keras)), or libraries with multiple ML algorithms like `scikit-learn` (Pedregosa et al. [2011](#ref-scikitlearn)). While it is great to have such a large choice of tools for constructing models, the disadvantage is that different libraries have different interfaces and different arguments. Moreover, model\-objects created with different libraries may have different internal structures. The main goal of the `dalex` library is to create a level of abstraction around a model that makes it easier to explore and explain the model. Constructor `Explainer()` is THE method for model wrapping. There is only one argument that is required by the function; it is `model`, which is used to specify the model\-object with the fitted form of the model. However, the function also takes additional arguments that extend its functionalities. They are discussed in Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode). If these additional arguments are not provided by the user, the `dalex` library will try to extract them from the model. It is a good idea to specify them directly to avoid surprises. As soon as the model is wrapped by using the `Explainer()` function, all further functionalities can be performed on the resulting object. They will be presented in subsequent chapters in subsections *Code snippets for Python*. ### 3\.2\.3 Code snippets for Python A detailed description of model exploration will be presented in the next chapters. In general, however, the way of working with the `dalex` library can be described in the following steps: 1. Import the `dalex` library. ``` import dalex as dx ``` 2. Create an `Explainer` object. This serves as a wrapper around the model. ``` exp = dx.Explainer(model, X, y) ``` 3. Calculate predictions for the model. ``` exp.predict(henry) ``` 4. Calculate specific explanations. ``` obs_bd = exp.predict_parts(obs, type='break_down') ``` 5. Print calculated explanations. ``` obs_bd.result ``` 6. Plot calculated explanations. ``` obs_bd.plot() ```
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/dataSetsIntro.html
4 Datasets and Models ===================== We will illustrate the methods presented in this book by using three datasets related to: * predicting probability of survival for passengers of the *RMS Titanic*; * predicting prices of *apartments in Warsaw*; * predicting the value of the football players based on the *FIFA* dataset. The first dataset will be used to illustrate the application of the techniques in the case of a predictive (classification) model for a binary dependent variable. It is mainly used in the examples presented in the second part of the book. The second dataset will be used to illustrate the exploration of prediction models for a continuous dependent variable. It is mainly used in the examples in the third part of this book. The third dataset will be introduced in Chapter [21](UseCaseFIFA.html#UseCaseFIFA) and will be used to illustrate the use of all of the techniques introduced in the book. In this chapter, we provide a short description of the first two datasets, together with results of exploratory analyses. We also introduce models that will be used for illustration purposes in subsequent chapters. 4\.1 Sinking of the RMS Titanic ------------------------------- The sinking of the RMS Titanic is one of the deadliest maritime disasters in history (during peacetime). Over 1500 people died as a consequence of a collision with an iceberg. Projects like *Encyclopedia titanica* ([https://www.encyclopedia\-titanica.org/](https://www.encyclopedia-titanica.org/)) are a source of rich and precise data about Titanic’s passengers. The `stablelearner` package in R includes a data frame with information about passengers’ characteristics. The dataset, after some data cleaning and variable transformations, is also available in the `DALEX` package for R and in the `dalex` library for Python. In particular, the `titanic` data frame contains 2207 observations (for 1317 passengers and 890 crew members) and nine variables: * *gender*, person’s (passenger’s or crew member’s) gender, a factor (categorical variable) with two levels (categories): “male” (78%) and “female” (22%); * *age*, person’s age in years, a numerical variable; the age is given in (integer) years, in the range of 0–74 years; * *class*, the class in which the passenger travelled, or the duty class of a crew member; a factor with seven levels: “1st” (14\.7%), “2nd” (12\.9%), “3rd” (32\.1%), “deck crew” (3%), “engineering crew” (14\.7%), “restaurant staff” (3\.1%), and “victualling crew” (19\.5%); * *embarked*, the harbor in which the person embarked on the ship, a factor with four levels: “Belfast” (8\.9%), “Cherbourg” (12\.3%), “Queenstown” (5\.6%), and “Southampton” (73\.2%); * *country*, person’s home country, a factor with 48 levels; the most common levels are “England” (51%), “United States” (12%), “Ireland” (6\.2%), and “Sweden” (4\.8%); * *fare*, the price of the ticket (only available for passengers; 0 for crew members), a numerical variable in the range of 0–512; * *sibsp*, the number of siblings/spouses aboard the ship, a numerical variable in the range of 0–8; * *parch*, the number of parents/children aboard the ship, a numerical variable in the range of 0–9; * *survived*, a factor with two levels: “yes” (67\.8%) and “no” (32\.2%) indicating whether the person survived or not. The first six rows of this dataset are presented in the table below. 
| gender | age | class | embarked | fare | sibsp | parch | survived | | --- | --- | --- | --- | --- | --- | --- | --- | | male | 42 | 3rd | Southampton | 7\.11 | 0 | 0 | no | | male | 13 | 3rd | Southampton | 20\.05 | 0 | 2 | no | | male | 16 | 3rd | Southampton | 20\.05 | 1 | 1 | no | | female | 39 | 3rd | Southampton | 20\.05 | 1 | 1 | yes | | female | 16 | 3rd | Southampton | 7\.13 | 0 | 0 | yes | | male | 25 | 3rd | Southampton | 7\.13 | 0 | 0 | yes | Models considered for this dataset will use *survived* as the (binary) dependent variable. ### 4\.1\.1 Data exploration As discussed in Chapter [2](modelDevelopmentProcess.html#modelDevelopmentProcess), it is always advisable to explore data before modelling. However, as this book is focused on model exploration, we will limit the data exploration part. Before exploring the data, we first conduct some pre\-processing. In particular, the value of variables *age*, *country*, *sibsp*, *parch*, and *fare* is missing for a limited number of observations (2, 81, 10, 10, and 26, respectively). Analyzing data with missing values is a topic on its own (Schafer [1997](#ref-Schafer1997); Little and Rubin [2002](#ref-LittleRubin2002); Molenberghs and Kenward [2007](#ref-MolKen2007)). An often\-used approach is to impute the missing values. Toward this end, multiple imputations should be considered (Schafer [1997](#ref-Schafer1997); Molenberghs and Kenward [2007](#ref-MolKen2007); Buuren [2012](#ref-vanBuuren2012)). However, given the limited number of missing values and the intended illustrative use of the dataset, we will limit ourselves to, admittedly inferior, single imputation. In particular, we replace the missing *age* values by the mean of the observed ones, i.e., 30\. Missing *country* is encoded by `"X"`. For *sibsp* and *parch*, we replace the missing values by the most frequently observed value, i.e., 0\. Finally, for *fare*, we use the mean fare for a given *class*, i.e., 0 pounds for crew, 89 pounds for the first, 22 pounds for the second, and 13 pounds for the third class. After imputing the missing values, we investigate the association between survival status and other variables. Most variables in the Titanic dataset are categorical, except of *age* and *fare*. Figure [4\.1](dataSetsIntro.html#fig:titanicExplorationHistograms) shows histograms for the latter two variables. In order to keep the exploration uniform, we transform the two variables into categorical ones. In particular, *age* is discretized into five categories by using cutoffs equal to 5, 10, 20, and 30, while *fare* is discretized by applying cutoffs equal to 1, 10, 25, and 50\. Figure 4\.1: Histograms for variables *age* and *fare* from the Titanic data. Figures [4\.2](dataSetsIntro.html#fig:titanicExplorationGenderAge)–[4\.5](dataSetsIntro.html#fig:titanicExplorationCountryHarbor) present graphically, with the help of mosaic plots, the proportion of non\- and survivors for different levels of other variables. The width of the bars (on the x\-axis) reflects the marginal distribution (proportions) of the observed levels of the variable. On the other hand, the height of the bars (on the y\-axis) provides information about the proportion of non\- and survivors. The graphs for *age* and *fare* were constructed by using the categorized versions of the variables. Figure [4\.2](dataSetsIntro.html#fig:titanicExplorationGenderAge) indicates that the proportion of survivors was larger for females and children below 5 years of age. 
This is most likely the result of the “women and children first” principle that is often evoked in situations that require the evacuation of persons whose life is in danger. Figure 4\.2: Survival according to gender and age category in the Titanic data. The principle can, perhaps, partially explain the trend seen in Figure [4\.3](dataSetsIntro.html#fig:titanicExplorationParch), i.e., a higher proportion of survivors among those with 1\-2 parents/children and 1\-2 siblings/spouses aboard. Figure 4\.3: Survival according to the number of parents/children and siblings/spouses in the Titanic data. Figure [4\.4](dataSetsIntro.html#fig:titanicExplorationClassFare) indicates that passengers travelling in the first and second class had a higher chance of survival, perhaps due to the proximity of the location of their cabins to the deck. Interestingly, the proportion of survivors among the deck crew was similar to the proportion of the first\-class passengers. The figure also shows that the proportion of survivors increased with the fare, which is consistent with the fact that the proportion was higher for passengers travelling in the first and second class. Figure 4\.4: Survival according to travel\-class and ticket\-fare in the Titanic data. Finally, Figure [4\.5](dataSetsIntro.html#fig:titanicExplorationCountryHarbor) does not suggest any noteworthy trends. Figure 4\.5: Survival according to the embarked harbour and country in the Titanic data. 4\.2 Models for RMS Titanic, snippets for R ------------------------------------------- ### 4\.2\.1 Logistic regression model The dependent variable of interest, *survived*, is binary. Thus, a natural choice is to start the predictive modelling with a logistic regression model. As there is no reason to expect a linear relationship between age and odds of survival, we use linear tail\-restricted cubic splines, available in the `rcs()` function of the `rms` package (Harrell Jr [2018](#ref-rms)), to model the effect of age. We also do not expect linear relation for the *fare* variable, but because of its skewness (see Figure [4\.1](dataSetsIntro.html#fig:titanicExplorationHistograms)), we do not use splines for this variable. The results of the model are stored in model\-object `titanic_lmr`, which will be used in subsequent chapters. ``` library("rms") titanic_lmr <- lrm(survived == "yes" ~ gender + rcs(age) + class + sibsp + parch + fare + embarked, titanic) ``` Note that we are not very much interested in the assessment of the model’s predictive performance, but rather on understanding how the model yields its predictions. This is why we do not split the data into the training and testing subsets. Instead, the model is fitted to the entire dataset and will be examined on the same dataset. ### 4\.2\.2 Random forest model As an alternative to the logistic regression model we consider a random forest model. Random forest modelling is known for good predictive performance, ability to grasp low\-order variable interactions, and stability (Leo Breiman [2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)). To fit the model, we apply the `randomForest()` function, with default settings, from the package with the same name (Liaw and Wiener [2002](#ref-randomForest)). In particular, we fit a model with the same set of explanatory variables as the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). The results of the random forest model are stored in model\-object `titanic_rf`. 
``` library("randomForest") set.seed(1313) titanic_rf <- randomForest(survived ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic) ``` ### 4\.2\.3 Gradient boosting model Additionally, we consider the gradient boosting model (Friedman [2000](#ref-Friedman00greedyfunction)). Tree\-based boosting models are known for being able to accommodate higher\-order interactions between variables. We use the same set of six explanatory variables as for the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). To fit the gradient boosting model, we use function `gbm()` from the `gbm` package (Ridgeway [2017](#ref-gbm)). The results of the model are stored in model\-object `titanic_gbm`. ``` library("gbm") set.seed(1313) titanic_gbm <- gbm(survived == "yes" ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic, n.trees = 15000, distribution = "bernoulli") ``` ### 4\.2\.4 Support vector machine model Finally, we also consider a support vector machine (SVM) model (Cortes and Vapnik [1995](#ref-svm95vapnik)). We use the C\-classification mode. Again, we fit a model with the same set of explanatory variables as in the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) To fit the model, we use function `svm()` from the `e1071` package (Meyer et al. [2019](#ref-e1071)). The results of the model are stored in model\-object `titanic_svm`. ``` library("e1071") titanic_svm <- svm(survived == "yes" ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic, type = "C-classification", probability = TRUE) ``` ### 4\.2\.5 Models’ predictions Let us now compare predictions that are obtained from the different models. In particular, we compute the predicted probability of survival for Johnny D, an 8\-year\-old boy who embarked in Southampton and travelled in the first class with no parents nor siblings, and with a ticket costing 72 pounds. First, we create a data frame `johnny_d` that contains the data describing the passenger. ``` johnny_d <- data.frame( class = factor("1st", levels = c("1st", "2nd", "3rd", "deck crew", "engineering crew", "restaurant staff", "victualling crew")), gender = factor("male", levels = c("female", "male")), age = 8, sibsp = 0, parch = 0, fare = 72, embarked = factor("Southampton", levels = c("Belfast", "Cherbourg","Queenstown","Southampton"))) ``` Subsequently, we use the generic function `predict()` to obtain the predicted probability of survival for the logistic regression model. ``` (pred_lmr <- predict(titanic_lmr, johnny_d, type = "fitted")) ``` ``` ## 1 ## 0.7677036 ``` The predicted probability is equal to 0\.77\. We do the same for the remaining three models. ``` (pred_rf <- predict(titanic_rf, johnny_d, type = "prob")) ``` ``` ## no yes ## 1 0.578 0.422 ## attr(,"class") ## [1] "matrix" "array" "votes" ``` ``` (pred_gbm <- predict(titanic_gbm, johnny_d, type = "response", n.trees = 15000)) ``` ``` ## [1] 0.6632574 ``` ``` (pred_svm <- predict(titanic_svm, johnny_d, probability = TRUE)) ``` ``` ## 1 ## FALSE ## attr(,"probabilities") ## FALSE TRUE ## 1 0.7799685 0.2200315 ## Levels: FALSE TRUE ``` As a result, we obtain the predicted probabilities of 0\.42, 0\.66, and 0\.22 for the random forest, gradient boosting, and SVM models, respectively. The models lead to different probabilities. Thus, it might be of interest to understand the reason for the differences, as it could help us decide which of the predictions we might want to trust. 
We will investigate this issue in the subsequent chapters. Note that, for some examples later in the book, we will use another observation (instance). We will call this passenger Henry. ``` henry <- data.frame( class = factor("1st", levels = c("1st", "2nd", "3rd", "deck crew", "engineering crew", "restaurant staff", "victualling crew")), gender = factor("male", levels = c("female", "male")), age = 47, sibsp = 0, parch = 0, fare = 25, embarked = factor("Cherbourg", levels = c("Belfast", "Cherbourg","Queenstown","Southampton"))) ``` For Henry, the predicted probability of survival is lower than for Johnny D. ``` predict(titanic_lmr, henry, type = "fitted") ``` ``` ## 1 ## 0.4318245 ``` ``` predict(titanic_rf, henry, type = "prob")[,2] ``` ``` ## [1] 0.246 ``` ``` predict(titanic_gbm, henry, type = "response", n.trees = 15000) ``` ``` ## [1] 0.3073358 ``` ``` attr(predict(titanic_svm, henry, probability = TRUE),"probabilities")[,2] ``` ``` ## [1] 0.1767995 ``` ### 4\.2\.6 Models’ explainers Model\-objects created with different libraries may have different internal structures. Thus, first, we have got to create an “explainer,” i.e., an object that provides an uniform interface for different models. Toward this end, we use the `explain()` function from the `DALEX` package (Biecek [2018](#ref-DALEX)). As it was mentioned in Section [3\.1\.2](do-it-yourself.html#infoDALEX), there is only one argument that is required by the function, i.e., `model`. The argument is used to specify the model\-object with the fitted form of the model. However, the function allows additional arguments that extend its functionalities. In particular, the list of arguments includes the following: * `data`, a data frame or matrix providing data to which the model is to be applied; if not provided (`data = NULL` by default), the data are extracted from the model\-object. Note that the data object should not, in principle, contain the dependent variable. * `y`, observed values of the dependent variable corresponding to the data given in the `data` object; if not provided (`y = NULL` by default), the values are extracted from the model\-object; * `predict_function`, a function that returns prediction scores; if not specified (`predict_function = NULL` by default), then a default `predict()` function is used (note that this may lead to errors); * `residual_function`, a function that returns model residuals; if not specified (`residual_function = NULL` by default), then model residuals defined in equation [(2\.1\)](modelDevelopmentProcess.html#eq:modelResiduals) are calculated; * `verbose`, a logical argument (`verbose = TRUE` by default) indicating whether diagnostic messages are to be printed; * `precalculate`, a logical argument (`precalculate = TRUE` by default) indicating whether predicted values and residuals are to be calculated when the explainer is created. Note that this will also happen if `verbose = TRUE`. To skip the calculations, both `verbose` and `precalculate` should be set to FALSE . 
* `model_info`, a named list (with components `package`, `version`, and `type`) providing information about the model; if not specified (`model_info = NULL` by default), `DALEX` seeks for information on its own; * `type`, information about the type of the model, either `"classification"` (for a binary dependent variable) or `"regression"` (for a continuous dependent variable); if not specified (`type = NULL` by default), then the value of the argument is extracted from `model_info`; * `label`, a unique name of the model; if not specified (`label = NULL` by default), then it is extracted from `class(model)`. Application of function `explain()` provides an object of class `explainer`. It is a list of many components that include: * `model`, the explained model; * `data`, the data to which the model is applied; * `y`, observed values of the dependent variable corresponding to `data`; * `y_hat`, predictions obtained by applying `model` to `data`; * `residuals`, residuals computed based on `y` and `y_hat`; * `predict_function`, the function used to obtain the model’s predictions; * `residual_function`, the function used to obtain residuals; * `class`, class/classes of the model; * `label`, label of the model/explainer; * `model_info`, a named list (with components `package`, `version`, and `type`) providing information about the model. Thus, each explainer\-object contains all elements needed to create a model explanation. The code below creates explainers for the models (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)–[4\.2\.4](dataSetsIntro.html#model-titanic-svm)) fitted to the Titanic data. Note that, in the `data` argument, we indicate the `titanic` data frame without the ninth column, i.e., without the *survived* variable. The variable is used in the `y` argument to explicitly define the binary dependent variable equal to 1 for survivors and 0 for passengers who did not survive. ``` titanic_lmr_exp <- explain(model = titanic_lmr, data = titanic[, -9], y = titanic$survived == "yes", label = "Logistic Regression", type = "classification") titanic_rf_exp <- explain(model = titanic_rf, data = titanic[, -9], y = titanic$survived == "yes", label = "Random Forest") titanic_gbm_exp <- explain(model = titanic_gbm, data = titanic[, -9], y = titanic$survived == "yes", label = "Generalized Boosted Regression") titanic_svm_exp <- explain(model = titanic_svm, data = titanic[, -9], y = titanic$survived == "yes", label = "Support Vector Machine") ``` ### 4\.2\.7 List of model\-objects In the previous sections, we have built four predictive models for the Titanic dataset. The models will be used in the rest of the book to illustrate model\-explanation methods and tools. For the ease of reference, we summarize the models in Table [4\.1](dataSetsIntro.html#tab:archivistHooksOfModelsTitanic). The binary model\-objects can be downloaded by using the indicated `archivist` hooks (Biecek and Kosinski [2017](#ref-archivist)). By calling a function specified in the last column of the table, one can restore a selected model in its local R environment. Table 4\.1: Predictive models created for the `titanic` dataset. All models are fitted with following variables: *gender*, *age*, *class*, *sibsp*, *parch*, *fare*, *embarked.* | Model name / library | Link to this object | | --- | --- | | `titanic_lmr` | Get the model: `archivist::` | | `rms:: lmr` v.5\.1\.3 | `aread("pbiecek/models/58b24")`. 
| | `titanic_rf` | Get the model: `archivist::` | | `randomForest:: randomForest` v.4\.6\.14 | `aread("pbiecek/models/4e0fc")`. | | `titanic_gbm` | Get the model: `archivist::` | | `gbm:: gbm` v.2\.1\.5 | `aread("pbiecek/models/b7078")`. | | `titanic_svm` | Get the model: `archivist::` | | `e1071:: svm` v.1\.7\.3 | `aread("pbiecek/models/9c27f")`. | Table [4\.2](dataSetsIntro.html#tab:archivistHooksOfDataFramesTitanic) summarizes the data frames that will be used in examples in the subsequent chapters. Table 4\.2: Data frames created for the Titanic use\-case. All frames have following variables: *gender*, *age*, *class*, *embarked*, *country*, *fare*, *sibsp*, *parch*. The `titanic` data frame includes also the *survived* variable. | Description | Link to this object | | --- | --- | | `titanic` dataset with 2207 observations with imputed missing values | `archivist:: aread("pbiecek/models/27e5c")` | | `johnny_d` 8\-year\-old boy from the 1st class without parents, paid 72 pounds, embarked in Southampton | `archivist:: aread("pbiecek/models/e3596")` | | `henry` 47\-year\-old male from the 1st class, travelled alone, paid 25 pounds, embarked in Cherbourg | `archivist:: aread("pbiecek/models/a6538")` | 4\.3 Models for RMS Titanic, snippets for Python ------------------------------------------------ Titanic data are provided in the `titanic` dataset, which is available in the `dalex` library. The values of the dependent binary variable are given in the `survived` column; the remaining columns give the values of the explanatory variables that are used to construct the classifiers. The following instructions load the `titanic` dataset and split it into the dependent variable `y` and the explanatory variables `X`. Note that, for the purpose of this example, we do not divide the data into the training and testing sets. Instructions on how to deal with the situation when you want to analyze the model on data other than the training set will be presented in the subsequent chapters. ``` import dalex as dx titanic = dx.datasets.load_titanic() X = titanic.drop(columns='survived') y = titanic.survived ``` Dataset `X` contains numeric variables with different ranges (for instance, *age* and *fare*) and categorical variables. Machine\-learning algorithms in the `sklearn` library require data in a numeric form. Therefore, before modelling, we use a pipeline that performs data pre\-processing. In particular, we scale the continuous variables (*age*, *fare*, *parch*, and *sibsp*) and one\-hot\-encode the categorical variables (*gender*, *class*, *embarked*). ``` from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import make_column_transformer from sklearn.pipeline import make_pipeline preprocess = make_column_transformer( (StandardScaler(), ['age', 'fare', 'parch', 'sibsp']), (OneHotEncoder(), ['gender', 'class', 'embarked'])) ``` ### 4\.3\.1 Logistic regression model To fit the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)), we use the `LogisticRegression` algorithm from the `sklearn` library. By default, the implementation uses the ridge penalty, defined in [(2\.6\)](modelDevelopmentProcess.html#eq:ridgePenalty). For this reason it is important to scale continuous variables like `age` and `fare`. The fitted model is stored in object `titanic_lr`, which will be used in subsequent chapters. 
``` from sklearn.linear_model import LogisticRegression titanic_lr = make_pipeline( preprocess, LogisticRegression(penalty = 'l2')) titanic_lr.fit(X, y) ``` ### 4\.3\.2 Random forest model To fit the random forest model (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)), we use the `RandomForestClassifier` algorithm from the `sklearn` library. We use the default settings with trees not deeper than three levels, and the number of trees set to 500\. The fitted model is stored in object `titanic_rf`. ``` from sklearn.ensemble import RandomForestClassifier titanic_rf = make_pipeline( preprocess, RandomForestClassifier(max_depth = 3, n_estimators = 500)) titanic_rf.fit(X, y) ``` ### 4\.3\.3 Gradient boosting model To fit the gradient boosting model (see Section [4\.2\.3](dataSetsIntro.html#model-titanic-gbm)), we use the `GradientBoostingClassifier` algorithm from the `sklearn` library. We use the default settings, with the number of trees in the ensemble set to 100\. The fitted model is stored in object `titanic_gbc`. ``` from sklearn.ensemble import GradientBoostingClassifier titanic_gbc = make_pipeline( preprocess, GradientBoostingClassifier(n_estimators = 100)) titanic_gbc.fit(X, y) ``` ### 4\.3\.4 Support vector machine model Finally, to fit the SVM model with C\-Support Vector Classification mode (see Section [4\.2\.4](dataSetsIntro.html#model-titanic-svm)), we use the `SVC` algorithm from the `sklearn` library based on `libsvm`. The fitted model is stored in object `titanic_svm`. ``` from sklearn.svm import SVC titanic_svm = make_pipeline( preprocess, SVC(probability = True)) titanic_svm.fit(X, y) ``` ### 4\.3\.5 Models’ predictions Let us now compare predictions that are obtained from the different models. In particular, we compute the predicted probability of survival for Johnny D, an 8\-year\-old boy who embarked in Southampton and travelled in the first class with no parents nor siblings, and with a ticket costing 72 pounds (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). First, we create a data frame `johnny_d` that contains the data describing the passenger. ``` import pandas as pd johnny_d = pd.DataFrame({'gender': ['male'], 'age' : [8], 'class' : ['1st'], 'embarked': ['Southampton'], 'fare' : [72], 'sibsp' : [0], 'parch' : [0]}, index = ['JohnnyD']) ``` Subsequently, we use the method `predict_proba()` to obtain the predicted probability of survival for the logistic regression model. ``` titanic_lr.predict_proba(johnny_d) # array([[0.35884528, 0.64115472]]) ``` We do the same for the three remaining models. ``` titanic_rf.predict_proba(johnny_d) # array([[0.63028556, 0.36971444]]) titanic_gbc.predict_proba(johnny_d) # array([[0.1567194, 0.8432806]]) titanic_svm.predict_proba(johnny_d) # array([[0.78308146, 0.21691854]]) ``` We also create data frame for passenger Henry (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) and compute his predicted probability of survival. 
``` henry = pd.DataFrame({'gender' : ['male'], 'age' : [47], 'class' : ['1st'], 'embarked': ['Cherbourg'], 'fare' : [25], 'sibsp' : [0], 'parch' : [0]}, index = ['Henry']) titanic_lr.predict_proba(henry) # array([[0.56798421 0.43201579]]) titanic_rf.predict_proba(henry) # array([[0.69917845 0.30082155]]) titanic_gbc.predict_proba(henry) # array([[0.78542886 0.21457114]]) titanic_svm.predict_proba(henry) # array([[0.81725832 0.18274168]]) ``` ### 4\.3\.6 Models’ explainers The Python\-code examples shown above use functions from the `sklearn` library, which facilitates uniform working with models. However, we may want to, or have to, work with models built by using other libraries. To simplify the task, the `dalex` library wraps models in objects of class `Explainer` that contain, in a uniform way, all the functions necessary for working with models. There is only one argument that is required by the `Explainer()` constructor, i.e., `model`. However, the constructor allows additional arguments that extend its functionalities. In particular, the list of arguments includes the following: * `data`, a data frame or `numpy.ndarray` providing data to which the model is to be applied. It should be an object of the `pandas.DataFrame` class, otherwise it will be converted to `pandas.DataFrame`. * `y`, values of the dependent variable/target variable corresponding to the data given in the `data` object; * `predict_function`, a function that returns prediction scores; if not specified, then `dalex` will make a guess which function should be used (`predict()`, `predict_proba()`, or something else). Note that this function should work on `pandas.DataFrame` objects; if it works only on `numpy.ndarray` then an appropriate conversion should also be included in `predict_function`. * `residual_function`, a function that returns model residuals; * `label`, a unique name of the model; * `model_class`, the class of actual model; * `verbose`, a logical argument (`verbose = TRUE` by default) indicating whether diagnostic messages are to be printed; * `model_type`, information about the type of the model, either `"classification"` (for a binary dependent variable) or `"regression"` (for a continuous dependent variable); * `model_info`, a dictionary with additional information about the model. Application of constructor `Explainer()` provides an object of class `Explainer`. It is an object with many components that include: * `model`, the explained model; * `data`, the data to which the model is applied; * `y`, observed values of the dependent variable corresponding to `data`; * `y_hat`, predictions obtained by applying `model` to `data`; * `residuals`, residuals computed based on `y` and `y_hat`; * `predict_function`, the function used to obtain the model’s predictions; * `residual_function`, the function used to obtain residuals; * `class`, class/classes of the model; * `label`, label of the model/explainer; * `model_info`, a dictionary (with components `package`, `version`, and `type`) providing information about the model. Thus, each explainer\-object contains all elements needed to create a model explanation. The code below creates explainers for the models (see Sections [4\.3\.1](dataSetsIntro.html#model-titanic-python-lr)–[4\.3\.4](dataSetsIntro.html#model-titanic-python-svm)) fitted to the Titanic data. 
``` titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") titanic_lr_exp = dx.Explainer(titanic_lr, X, y, label = "Titanic LR Pipeline") titanic_gbc_exp = dx.Explainer(titanic_gbc, X, y, label = "Titanic GBC Pipeline") titanic_svm_exp = dx.Explainer(titanic_svm, X, y, label = "Titanic SVM Pipeline") ``` When an explainer is created, the specified model and data are tested for consistency. Diagnostic information is printed on the screen. The following output shows diagnostic information for the `titanic_rf` model. ``` Preparation of a new explainer is initiated -> data : 2207 rows 7 cols -> target variable : Argument 'y' was converted to a numpy.ndarray. -> target variable : 2207 values -> model_class : sklearn.pipeline.Pipeline (default) -> label : Titanic RF Pipeline -> predict function : <yhat_proba> will be used (default) -> predicted values : min = 0.171, mean = 0.322, max = 0.893 -> residual function : difference between y and yhat (default) -> residuals : min = -0.826, mean = 4.89e-05, max = 0.826 -> model_info : package sklearn A new explainer has been created! ``` 4\.4 Apartment prices --------------------- Predicting house prices is a common exercise used in machine\-learning courses. Various datasets for house prices are available at websites like [Kaggle](https://www.kaggle.com) or [UCI Machine Learning Repository](https://archive.ics.uci.edu). In this book, we will work with an interesting variant of this problem. The `apartments` dataset contains simulated data that match key characteristics of real apartments in Warsaw, the capital of Poland. However, the dataset is created in a way that two very different models, namely linear regression and random forest, offer almost exactly the same overall accuracy of predictions. The natural question is then: which model should we choose? We will show that model\-explanation tools provide important insight into the key model characteristics and are helpful in model selection. The dataset is available in the `DALEX` package in R and the `dalex` library in Python. It contains 1000 observations (apartments) and six variables: * *m2\.price*, apartment’s price per square meter (in EUR), a numerical variable in the range of 1607–6595; * *construction.year*, the year of construction of the block of flats in which the apartment is located, a numerical variable in the range of 1920–2010; * *surface*, apartment’s total surface in square meters, a numerical variable in the range of 20–150; * *floor*, the floor at which the apartment is located (ground floor taken to be the first floor), a numerical integer variable with values ranging from 1 to 10; * *no.rooms*, the total number of rooms, a numerical integer variable with values ranging from 1 to 6; * *district*, a factor with 10 levels indicating the district of Warsaw where the apartment is located. The first six rows of this dataset are presented in the table below. | m2\.price | construction.year | surface | floor | no.rooms | district | | --- | --- | --- | --- | --- | --- | | 5897 | 1953 | 25 | 3 | 1 | Srodmiescie | | 1818 | 1992 | 143 | 9 | 5 | Bielany | | 3643 | 1937 | 56 | 1 | 2 | Praga | | 3517 | 1995 | 93 | 7 | 3 | Ochota | | 3013 | 1992 | 144 | 6 | 5 | Mokotow | | 5795 | 1926 | 61 | 6 | 2 | Srodmiescie | Models considered for this dataset will use *m2\.price* as the (continuous) dependent variable. Models’ predictions will be validated on a set of 9000 apartments included in data frame `apartments_test`. 
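Both data frames are shipped with the `DALEX` package, so their dimensions can be checked directly (a quick verification, not part of the original text):

```
library("DALEX")
dim(apartments)        # 1000 observations, 6 variables
dim(apartments_test)   # 9000 observations, 6 variables
```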
Note that, usually, the training dataset is larger than the testing one. In this example, we deliberately use a small training set, so that model selection may be more challenging. ### 4\.4\.1 Data exploration Note that `apartments` is an artificial dataset created to illustrate and explain differences between random forest and linear regression. Hence, the structure of the data, the form and strength of association between variables, plausibility of distributional assumptions, etc., is less problematic than in a real\-life dataset. In fact, all these characteristics of the data are known. Nevertheless, we present some data exploration below to illustrate the important aspects of the data. The variable of interest is *m2\.price*, the price per square meter. The histogram presented in Figure [4\.6](dataSetsIntro.html#fig:apartmentsExplorationMi2) indicates that the distribution of the variable is slightly skewed to the right. Figure 4\.6: Distribution of the price per square meter in the apartment\-prices data. Figure [4\.7](dataSetsIntro.html#fig:apartmentsMi2Construction) suggests (possibly) a non\-linear relationship between *construction.year* and *m2\.price* and a linear relation between *surface* and *m2\.price*. Figure 4\.7: Apartment\-prices data. Price per square meter vs. year of construction (left\-hand\-side panel) and vs. surface (right\-hand\-side panel). Figure [4\.8](dataSetsIntro.html#fig:apartmentsMi2Floor) indicates that the relationship between *floor* and *m2\.price* is also close to linear, as well as is the association between *no.rooms* and *m2\.price* . Figure 4\.8: Apartment\-prices data. Price per square meter vs. floor (left\-hand\-side panel) and vs. number of rooms (right\-hand\-side panel). Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms) shows that *surface* and *number of rooms* are positively associated and that prices depend on the district. In particular, box plots in Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms) indicate that the highest prices per square meter are observed in Srodmiescie (Downtown). Figure 4\.9: Apartment\-prices data. Surface vs. number of rooms (left\-hand\-side panel) and price per square meter for different districts (right\-hand\-side panel). 4\.5 Models for apartment prices, snippets for R ------------------------------------------------ ### 4\.5\.1 Linear regression model The dependent variable of interest, *m2\.price*, is continuous. Thus, a natural choice to build a predictive model is linear regression. We treat all the other variables in the `apartments` data frame as explanatory and include them in the model. To fit the model, we apply the `lm()` function. The results of the model are stored in model\-object `apartments_lm`. ``` library("DALEX") apartments_lm <- lm(m2.price ~ ., data = apartments) anova(apartments_lm) ``` ``` ## Analysis of Variance Table ## ## Response: m2.price ## Df Sum Sq Mean Sq F value Pr(>F) ## construction.year 1 2629802 2629802 33.233 1.093e-08 *** ## surface 1 207840733 207840733 2626.541 < 2.2e-16 *** ## floor 1 79823027 79823027 1008.746 < 2.2e-16 *** ## no.rooms 1 956996 956996 12.094 0.000528 *** ## district 9 451993980 50221553 634.664 < 2.2e-16 *** ## Residuals 986 78023123 79131 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ``` ### 4\.5\.2 Random forest model As an alternative to linear regression, we consider a random forest model. 
Again, we treat all the variables in the `apartments` data frame other than *m2\.price* as explanatory and include them in the model. To fit the model, we apply the `randomForest()` function, with default settings, from the package with the same name (Liaw and Wiener [2002](#ref-randomForest)). The results of the model are stored in model\-object `apartments_rf`. ``` library("randomForest") set.seed(72) apartments_rf <- randomForest(m2.price ~ ., data = apartments) ``` ### 4\.5\.3 Support vector machine model Finally, we consider an SVM model, with all the variables in the `apartments` data frame other than *m2\.price* treated as explanatory. To fit the model, we use the `svm()` function, with default settings, from package `e1071` (Meyer et al. [2019](#ref-e1071)). The results of the model are stored in model\-object `apartments_svm`. ``` library("e1071") apartments_svm <- svm(m2.price ~ construction.year + surface + floor + no.rooms + district, data = apartments) ``` ### 4\.5\.4 Models’ predictions The `predict()` function calculates predictions for a specific model. In the example below, we use model\-objects `apartments_lm`, `apartments_rf`, and `apartments_svm`, to calculate predictions for prices of the apartments from the `apartments_test` data frame. Note that, for brevity’s sake, we compute the predictions only for the first six observations from the data frame. The actual prices for the first six observations from `apartments_test` are provided below. ``` apartments_test$m2.price[1:6] ``` ``` ## [1] 4644 3082 2498 2735 2781 2936 ``` Predicted prices for the linear regression model are as follows: ``` predict(apartments_lm, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4820.009 3292.678 2717.910 2922.751 2974.086 2527.043 ``` Predicted prices for the random forest model take the following values: ``` predict(apartments_rf, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4214.084 3178.061 2695.787 2744.775 2951.069 2999.450 ``` Predicted prices for the SVM model are as follows: ``` predict(apartments_svm, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4590.076 3012.044 2369.748 2712.456 2681.777 2750.904 ``` By using the code presented below, we summarize the predictive performance of the linear regression and random forest models by computing the square root of the mean\-squared\-error (RMSE). For a “perfect” predictive model, which would predict all observations exactly, RMSE should be equal to 0\. More information about RMSE can be found in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). ``` predicted_apartments_lm <- predict(apartments_lm, apartments_test) sqrt(mean((predicted_apartments_lm - apartments_test$m2.price)^2)) ``` ``` ## [1] 283.0865 ``` ``` predicted_apartments_rf <- predict(apartments_rf, apartments_test) sqrt(mean((predicted_apartments_rf - apartments_test$m2.price)^2)) ``` ``` ## [1] 282.9519 ``` For the random forest model, RMSE is equal to 283\. It is almost identical to the RMSE for the linear regression model, which is equal to 283\.1\. Thus, the question we may face is: should we choose the more complex but flexible random forest model, or the simpler and easier to interpret linear regression model? In the subsequent chapters, we will try to provide an answer to this question. 
In particular, we will show that a proper model exploration may help to discover weak and strong sides of any of the models and, in consequence, allow the creation of a new model, with better performance than either of the two. ### 4\.5\.5 Models’ explainers The code presented below creates explainers for the models (see Sections [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)–[4\.5\.3](dataSetsIntro.html#model-Apartments-svm)) fitted to the apartment\-prices data. Note that we use the `apartments_test` data frame without the first column, i.e., the *m2\.price* variable, in the `data` argument. This will be the dataset to which the model will be applied (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). The *m2\.price* variable is explicitly specified as the dependent variable in the `y` argument (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). ``` apartments_lm_exp <- explain(model = apartments_lm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Linear Regression") apartments_rf_exp <- explain(model = apartments_rf, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Random Forest") apartments_svm_exp <- explain(model = apartments_svm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Support Vector Machine") ``` ### 4\.5\.6 List of model\-objects In Sections [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)–[4\.5\.3](dataSetsIntro.html#model-Apartments-svm), we have built three predictive models for the `apartments` dataset. The models will be used in the rest of the book to illustrate the model\-explanation methods and tools. For the ease of reference, we summarize the models in Table [4\.3](dataSetsIntro.html#tab:archivistHooksOfModelsApartments). The binary model\-objects can be downloaded by using the indicated `archivist` hooks (Biecek and Kosinski [2017](#ref-archivist)). By calling a function specified in the last column of the table, one can restore a selected model in a local R environment. Table 4\.3: Predictive models created for the dataset Apartment prices. All models are fitted by using *construction.year*, *surface*, *floor*, *no.rooms*, and *district* as explanatory variables. | Model name / library | Link to this object | | --- | --- | | `apartments_lm` | Get the model: `archivist::` | | `stats:: lm` v.3\.5\.3 | `aread("pbiecek/models/55f19")`. | | `apartments_rf` | Get the model: `archivist::` | | `randomForest:: randomForest` v.4\.6\.14 | `aread("pbiecek/models/fe7a5")`. | | `apartments_svm` | Get the model: `archivist::` | | `e1071:: svm` v.1\.7\.3 | `aread("pbiecek/models/d2ca0")`. | 4\.6 Models for apartment prices, snippets for Python ----------------------------------------------------- Apartment\-prices data are provided in the `apartments` dataset, which is available in the `dalex` library. The values of the continuous dependent variable are given in the `m2_price` column; the remaining columns give the values of the explanatory variables that are used to construct the predictive models. The following instructions load the `apartments` dataset and split it into the dependent variable `y` and the explanatory variables `X`. ``` import dalex as dx apartments = dx.datasets.load_apartments() X = apartments.drop(columns='m2_price') y = apartments['m2_price'] ``` Dataset `X` contains numeric variables with different ranges (for instance, *surface* and *no.rooms*) and categorical variables (*district*). 
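This can be verified with a short, optional inspection of the explanatory variables; the sketch below assumes that `X` and `y` have been created as in the snippet above and only prints summaries.

```
# Column types: 'district' is categorical, the remaining variables are numeric.
print(X.dtypes)

# The numeric variables have clearly different ranges.
print(X.describe())

# Levels of the categorical variable.
print(X['district'].value_counts())
```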
Machine\-learning algorithms in the `sklearn` library require data in a numeric form. Therefore, before modelling, we use a pipeline that performs data pre\-processing. In particular, we scale the continuous variables (*construction.year*, *surface*, *floor*, and *no.rooms*) and one\-hot\-encode the categorical variables (*district*). ``` from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import make_column_transformer from sklearn.pipeline import make_pipeline preprocess = make_column_transformer( (StandardScaler(), ['construction_year', 'surface', 'floor', 'no_rooms']), (OneHotEncoder(), ['district'])) ``` ### 4\.6\.1 Linear regression model To fit the linear regression model (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)), we use the `LinearRegression` algorithm from the `sklearn` library. The fitted model is stored in object `apartments_lm`, which will be used in subsequent chapters. ``` from sklearn.linear_model import LinearRegression apartments_lm = make_pipeline( preprocess, LinearRegression()) apartments_lm.fit(X, y) ``` ### 4\.6\.2 Random forest model To fit the random forest model (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)), we use the `RandomForestRegressor` algorithm from the `sklearn` library. We apply the default settings with trees not deeper than three levels and the number of trees in the random forest set to 500\. The fitted model is stored in object `apartments_rf` for purpose of illustrations in subsequent chapters. ``` from sklearn.ensemble import RandomForestRegressor apartments_rf = make_pipeline( preprocess, RandomForestRegressor(max_depth = 3, n_estimators = 500)) apartments_rf.fit(X, y) ``` ### 4\.6\.3 Support vector machine model Finally, to fit the SVM model (see Section [4\.5\.3](dataSetsIntro.html#model-Apartments-svm)), we use the `SVR` algorithm from the `sklearn` library. The fitted model is stored in object `apartments_svm`, which will be used in subsequent chapters. ``` from sklearn.svm import SVR apartments_svm = make_pipeline( preprocess, SVR()) apartments_svm.fit(X, y) ``` ### 4\.6\.4 Models’ predictions Let us now compare predictions that are obtained from the different models for the `apartments_test` data. In the code below, we use the `predict()` method to obtain the predicted price per square meter for the linear regression model. ``` apartments_test = dx.datasets.load_apartments_test() apartments_test = apartments_test.drop(columns='m2_price') apartments_lm.predict(apartments_test) # array([4820.00943156, 3292.67756996, 2717.90972101, ..., 4836.44370353, # 3191.69063189, 5157.93680175]) ``` In a similar way, we obtain the predictions for the two remaining models. ``` apartments_rf.predict(apartments_test) # array([4708, 3819, 2273, ..., 4708, 4336, 4916]) ``` ``` apartments_svm.predict(apartments_test) # array([3344.48570564, 3323.01215313, 3321.97053977, ..., 3353.19750146, # 3383.51743883, 3376.31070911]) ``` ### 4\.6\.5 Models’ explainers The Python\-code examples presented for the models for the apartment\-prices dataset use functions from the `sklearn` library, which facilitates uniform working with models. However, we may want to, or have to, work with models built by using other libraries. To simplify the task, the `dalex` library wraps models in objects of class `Explainer` that contain, in a uniform way, all the functions necessary for working with models (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). 
The code below creates explainer\-objects for the models (see Sections [4\.6\.1](dataSetsIntro.html#model-Apartments-python-lr)–[4\.6\.3](dataSetsIntro.html#model-Apartments-python-svm)) fitted to the apartment\-prices data.

```
apartments_lm_exp = dx.Explainer(apartments_lm, X, y,
                    label = "Apartments LM Pipeline")
apartments_rf_exp = dx.Explainer(apartments_rf, X, y,
                    label = "Apartments RF Pipeline")
apartments_svm_exp = dx.Explainer(apartments_svm, X, y,
                    label = "Apartments SVM Pipeline")
```

When an explainer is created, the specified model and data are tested for consistency. Diagnostic information is printed on the screen. The following output shows diagnostic information for the `apartments_lm` model.

```
Preparation of a new explainer is initiated
  -> data              : 1000 rows 5 cols
  -> target variable   : Argument 'y' converted to a numpy.ndarray.
  -> target variable   : 1000 values
  -> model_class       : sklearn.pipeline.Pipeline (default)
  -> label             : Apartments LM Pipeline
  -> predict function  : <yhat at 0x117090840> will be used (default)
  -> predicted values  : min = 1.78e+03, mean = 3.49e+03, max = 6.18e+03
  -> residual function : difference between y and yhat (default)
  -> residuals         : min = -2.47e+02, mean = 2.06e-13, max = 4.69e+02
  -> model_info        : package sklearn

A new explainer has been created!
```

4\.1 Sinking of the RMS Titanic
-------------------------------

The sinking of the RMS Titanic is one of the deadliest maritime disasters in history (during peacetime). Over 1500 people died as a consequence of a collision with an iceberg. Projects like *Encyclopedia titanica* ([https://www.encyclopedia\-titanica.org/](https://www.encyclopedia-titanica.org/)) are a source of rich and precise data about Titanic’s passengers. The `stablelearner` package in R includes a data frame with information about passengers’ characteristics. The dataset, after some data cleaning and variable transformations, is also available in the `DALEX` package for R and in the `dalex` library for Python.
In particular, the `titanic` data frame contains 2207 observations (for 1317 passengers and 890 crew members) and nine variables:

* *gender*, person’s (passenger’s or crew member’s) gender, a factor (categorical variable) with two levels (categories): “male” (78%) and “female” (22%);
* *age*, person’s age in years, a numerical variable; the age is given in (integer) years, in the range of 0–74 years;
* *class*, the class in which the passenger travelled, or the duty class of a crew member; a factor with seven levels: “1st” (14\.7%), “2nd” (12\.9%), “3rd” (32\.1%), “deck crew” (3%), “engineering crew” (14\.7%), “restaurant staff” (3\.1%), and “victualling crew” (19\.5%);
* *embarked*, the harbor in which the person embarked on the ship, a factor with four levels: “Belfast” (8\.9%), “Cherbourg” (12\.3%), “Queenstown” (5\.6%), and “Southampton” (73\.2%);
* *country*, person’s home country, a factor with 48 levels; the most common levels are “England” (51%), “United States” (12%), “Ireland” (6\.2%), and “Sweden” (4\.8%);
* *fare*, the price of the ticket (only available for passengers; 0 for crew members), a numerical variable in the range of 0–512;
* *sibsp*, the number of siblings/spouses aboard the ship, a numerical variable in the range of 0–8;
* *parch*, the number of parents/children aboard the ship, a numerical variable in the range of 0–9;
* *survived*, a factor with two levels: “yes” (32\.2%) and “no” (67\.8%) indicating whether the person survived or not.

The first six rows of this dataset are presented in the table below.
| gender | age | class | embarked | fare | sibsp | parch | survived | | --- | --- | --- | --- | --- | --- | --- | --- | | male | 42 | 3rd | Southampton | 7\.11 | 0 | 0 | no | | male | 13 | 3rd | Southampton | 20\.05 | 0 | 2 | no | | male | 16 | 3rd | Southampton | 20\.05 | 1 | 1 | no | | female | 39 | 3rd | Southampton | 20\.05 | 1 | 1 | yes | | female | 16 | 3rd | Southampton | 7\.13 | 0 | 0 | yes | | male | 25 | 3rd | Southampton | 7\.13 | 0 | 0 | yes | Models considered for this dataset will use *survived* as the (binary) dependent variable. ### 4\.1\.1 Data exploration As discussed in Chapter [2](modelDevelopmentProcess.html#modelDevelopmentProcess), it is always advisable to explore data before modelling. However, as this book is focused on model exploration, we will limit the data exploration part. Before exploring the data, we first conduct some pre\-processing. In particular, the value of variables *age*, *country*, *sibsp*, *parch*, and *fare* is missing for a limited number of observations (2, 81, 10, 10, and 26, respectively). Analyzing data with missing values is a topic on its own (Schafer [1997](#ref-Schafer1997); Little and Rubin [2002](#ref-LittleRubin2002); Molenberghs and Kenward [2007](#ref-MolKen2007)). An often\-used approach is to impute the missing values. Toward this end, multiple imputations should be considered (Schafer [1997](#ref-Schafer1997); Molenberghs and Kenward [2007](#ref-MolKen2007); Buuren [2012](#ref-vanBuuren2012)). However, given the limited number of missing values and the intended illustrative use of the dataset, we will limit ourselves to, admittedly inferior, single imputation. In particular, we replace the missing *age* values by the mean of the observed ones, i.e., 30\. Missing *country* is encoded by `"X"`. For *sibsp* and *parch*, we replace the missing values by the most frequently observed value, i.e., 0\. Finally, for *fare*, we use the mean fare for a given *class*, i.e., 0 pounds for crew, 89 pounds for the first, 22 pounds for the second, and 13 pounds for the third class. After imputing the missing values, we investigate the association between survival status and other variables. Most variables in the Titanic dataset are categorical, except of *age* and *fare*. Figure [4\.1](dataSetsIntro.html#fig:titanicExplorationHistograms) shows histograms for the latter two variables. In order to keep the exploration uniform, we transform the two variables into categorical ones. In particular, *age* is discretized into five categories by using cutoffs equal to 5, 10, 20, and 30, while *fare* is discretized by applying cutoffs equal to 1, 10, 25, and 50\. Figure 4\.1: Histograms for variables *age* and *fare* from the Titanic data. Figures [4\.2](dataSetsIntro.html#fig:titanicExplorationGenderAge)–[4\.5](dataSetsIntro.html#fig:titanicExplorationCountryHarbor) present graphically, with the help of mosaic plots, the proportion of non\- and survivors for different levels of other variables. The width of the bars (on the x\-axis) reflects the marginal distribution (proportions) of the observed levels of the variable. On the other hand, the height of the bars (on the y\-axis) provides information about the proportion of non\- and survivors. The graphs for *age* and *fare* were constructed by using the categorized versions of the variables. Figure [4\.2](dataSetsIntro.html#fig:titanicExplorationGenderAge) indicates that the proportion of survivors was larger for females and children below 5 years of age. 
This is most likely the result of the “women and children first” principle that is often evoked in situations that require the evacuation of persons whose life is in danger.
Figure 4\.2: Survival according to gender and age category in the Titanic data.
The principle can, perhaps, partially explain the trend seen in Figure [4\.3](dataSetsIntro.html#fig:titanicExplorationParch), i.e., a higher proportion of survivors among those with 1\-2 parents/children and 1\-2 siblings/spouses aboard.
Figure 4\.3: Survival according to the number of parents/children and siblings/spouses in the Titanic data.
Figure [4\.4](dataSetsIntro.html#fig:titanicExplorationClassFare) indicates that passengers travelling in the first and second class had a higher chance of survival, perhaps due to the proximity of the location of their cabins to the deck. Interestingly, the proportion of survivors among the deck crew was similar to the proportion of the first\-class passengers. The figure also shows that the proportion of survivors increased with the fare, which is consistent with the fact that the proportion was higher for passengers travelling in the first and second class.
Figure 4\.4: Survival according to travel\-class and ticket\-fare in the Titanic data.
Finally, Figure [4\.5](dataSetsIntro.html#fig:titanicExplorationCountryHarbor) does not suggest any noteworthy trends.
Figure 4\.5: Survival according to the embarked harbour and country in the Titanic data.
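The proportions discussed above can also be reproduced numerically. The following minimal Python sketch uses the `titanic` data as loaded with the `dalex` library in Section 4\.3 and tabulates survival by gender and by the discretized age; it is only meant as a cross-check of the mosaic plots, with the age cutoffs taken from the text.

```
import dalex as dx
import pandas as pd

titanic = dx.datasets.load_titanic()

# Share of survivors within each gender (compare with Figure 4.2).
print(pd.crosstab(titanic['gender'], titanic['survived'], normalize='index'))

# Discretize age with the cutoffs used in the text (5, 10, 20, and 30)
# and tabulate the share of survivors within each age category.
age_category = pd.cut(titanic['age'], bins=[0, 5, 10, 20, 30, 100],
                      include_lowest=True)
print(pd.crosstab(age_category, titanic['survived'], normalize='index'))
```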
4\.2 Models for RMS Titanic, snippets for R
-------------------------------------------

### 4\.2\.1 Logistic regression model

The dependent variable of interest, *survived*, is binary. Thus, a natural choice is to start the predictive modelling with a logistic regression model. As there is no reason to expect a linear relationship between age and odds of survival, we use linear tail\-restricted cubic splines, available in the `rcs()` function of the `rms` package (Harrell Jr [2018](#ref-rms)), to model the effect of age. We also do not expect linear relation for the *fare* variable, but because of its skewness (see Figure [4\.1](dataSetsIntro.html#fig:titanicExplorationHistograms)), we do not use splines for this variable. The results of the model are stored in model\-object `titanic_lmr`, which will be used in subsequent chapters.
``` library("rms") titanic_lmr <- lrm(survived == "yes" ~ gender + rcs(age) + class + sibsp + parch + fare + embarked, titanic) ``` Note that we are not very much interested in the assessment of the model’s predictive performance, but rather on understanding how the model yields its predictions. This is why we do not split the data into the training and testing subsets. Instead, the model is fitted to the entire dataset and will be examined on the same dataset. ### 4\.2\.2 Random forest model As an alternative to the logistic regression model we consider a random forest model. Random forest modelling is known for good predictive performance, ability to grasp low\-order variable interactions, and stability (Leo Breiman [2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)). To fit the model, we apply the `randomForest()` function, with default settings, from the package with the same name (Liaw and Wiener [2002](#ref-randomForest)). In particular, we fit a model with the same set of explanatory variables as the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). The results of the random forest model are stored in model\-object `titanic_rf`. ``` library("randomForest") set.seed(1313) titanic_rf <- randomForest(survived ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic) ``` ### 4\.2\.3 Gradient boosting model Additionally, we consider the gradient boosting model (Friedman [2000](#ref-Friedman00greedyfunction)). Tree\-based boosting models are known for being able to accommodate higher\-order interactions between variables. We use the same set of six explanatory variables as for the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). To fit the gradient boosting model, we use function `gbm()` from the `gbm` package (Ridgeway [2017](#ref-gbm)). The results of the model are stored in model\-object `titanic_gbm`. ``` library("gbm") set.seed(1313) titanic_gbm <- gbm(survived == "yes" ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic, n.trees = 15000, distribution = "bernoulli") ``` ### 4\.2\.4 Support vector machine model Finally, we also consider a support vector machine (SVM) model (Cortes and Vapnik [1995](#ref-svm95vapnik)). We use the C\-classification mode. Again, we fit a model with the same set of explanatory variables as in the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) To fit the model, we use function `svm()` from the `e1071` package (Meyer et al. [2019](#ref-e1071)). The results of the model are stored in model\-object `titanic_svm`. ``` library("e1071") titanic_svm <- svm(survived == "yes" ~ class + gender + age + sibsp + parch + fare + embarked, data = titanic, type = "C-classification", probability = TRUE) ``` ### 4\.2\.5 Models’ predictions Let us now compare predictions that are obtained from the different models. In particular, we compute the predicted probability of survival for Johnny D, an 8\-year\-old boy who embarked in Southampton and travelled in the first class with no parents nor siblings, and with a ticket costing 72 pounds. First, we create a data frame `johnny_d` that contains the data describing the passenger. 
``` johnny_d <- data.frame( class = factor("1st", levels = c("1st", "2nd", "3rd", "deck crew", "engineering crew", "restaurant staff", "victualling crew")), gender = factor("male", levels = c("female", "male")), age = 8, sibsp = 0, parch = 0, fare = 72, embarked = factor("Southampton", levels = c("Belfast", "Cherbourg","Queenstown","Southampton"))) ``` Subsequently, we use the generic function `predict()` to obtain the predicted probability of survival for the logistic regression model. ``` (pred_lmr <- predict(titanic_lmr, johnny_d, type = "fitted")) ``` ``` ## 1 ## 0.7677036 ``` The predicted probability is equal to 0\.77\. We do the same for the remaining three models. ``` (pred_rf <- predict(titanic_rf, johnny_d, type = "prob")) ``` ``` ## no yes ## 1 0.578 0.422 ## attr(,"class") ## [1] "matrix" "array" "votes" ``` ``` (pred_gbm <- predict(titanic_gbm, johnny_d, type = "response", n.trees = 15000)) ``` ``` ## [1] 0.6632574 ``` ``` (pred_svm <- predict(titanic_svm, johnny_d, probability = TRUE)) ``` ``` ## 1 ## FALSE ## attr(,"probabilities") ## FALSE TRUE ## 1 0.7799685 0.2200315 ## Levels: FALSE TRUE ``` As a result, we obtain the predicted probabilities of 0\.42, 0\.66, and 0\.22 for the random forest, gradient boosting, and SVM models, respectively. The models lead to different probabilities. Thus, it might be of interest to understand the reason for the differences, as it could help us decide which of the predictions we might want to trust. We will investigate this issue in the subsequent chapters. Note that, for some examples later in the book, we will use another observation (instance). We will call this passenger Henry. ``` henry <- data.frame( class = factor("1st", levels = c("1st", "2nd", "3rd", "deck crew", "engineering crew", "restaurant staff", "victualling crew")), gender = factor("male", levels = c("female", "male")), age = 47, sibsp = 0, parch = 0, fare = 25, embarked = factor("Cherbourg", levels = c("Belfast", "Cherbourg","Queenstown","Southampton"))) ``` For Henry, the predicted probability of survival is lower than for Johnny D. ``` predict(titanic_lmr, henry, type = "fitted") ``` ``` ## 1 ## 0.4318245 ``` ``` predict(titanic_rf, henry, type = "prob")[,2] ``` ``` ## [1] 0.246 ``` ``` predict(titanic_gbm, henry, type = "response", n.trees = 15000) ``` ``` ## [1] 0.3073358 ``` ``` attr(predict(titanic_svm, henry, probability = TRUE),"probabilities")[,2] ``` ``` ## [1] 0.1767995 ``` ### 4\.2\.6 Models’ explainers Model\-objects created with different libraries may have different internal structures. Thus, first, we have got to create an “explainer,” i.e., an object that provides an uniform interface for different models. Toward this end, we use the `explain()` function from the `DALEX` package (Biecek [2018](#ref-DALEX)). As it was mentioned in Section [3\.1\.2](do-it-yourself.html#infoDALEX), there is only one argument that is required by the function, i.e., `model`. The argument is used to specify the model\-object with the fitted form of the model. However, the function allows additional arguments that extend its functionalities. In particular, the list of arguments includes the following: * `data`, a data frame or matrix providing data to which the model is to be applied; if not provided (`data = NULL` by default), the data are extracted from the model\-object. Note that the data object should not, in principle, contain the dependent variable. 
* `y`, observed values of the dependent variable corresponding to the data given in the `data` object; if not provided (`y = NULL` by default), the values are extracted from the model\-object; * `predict_function`, a function that returns prediction scores; if not specified (`predict_function = NULL` by default), then a default `predict()` function is used (note that this may lead to errors); * `residual_function`, a function that returns model residuals; if not specified (`residual_function = NULL` by default), then model residuals defined in equation [(2\.1\)](modelDevelopmentProcess.html#eq:modelResiduals) are calculated; * `verbose`, a logical argument (`verbose = TRUE` by default) indicating whether diagnostic messages are to be printed; * `precalculate`, a logical argument (`precalculate = TRUE` by default) indicating whether predicted values and residuals are to be calculated when the explainer is created. Note that this will also happen if `verbose = TRUE`. To skip the calculations, both `verbose` and `precalculate` should be set to FALSE . * `model_info`, a named list (with components `package`, `version`, and `type`) providing information about the model; if not specified (`model_info = NULL` by default), `DALEX` seeks for information on its own; * `type`, information about the type of the model, either `"classification"` (for a binary dependent variable) or `"regression"` (for a continuous dependent variable); if not specified (`type = NULL` by default), then the value of the argument is extracted from `model_info`; * `label`, a unique name of the model; if not specified (`label = NULL` by default), then it is extracted from `class(model)`. Application of function `explain()` provides an object of class `explainer`. It is a list of many components that include: * `model`, the explained model; * `data`, the data to which the model is applied; * `y`, observed values of the dependent variable corresponding to `data`; * `y_hat`, predictions obtained by applying `model` to `data`; * `residuals`, residuals computed based on `y` and `y_hat`; * `predict_function`, the function used to obtain the model’s predictions; * `residual_function`, the function used to obtain residuals; * `class`, class/classes of the model; * `label`, label of the model/explainer; * `model_info`, a named list (with components `package`, `version`, and `type`) providing information about the model. Thus, each explainer\-object contains all elements needed to create a model explanation. The code below creates explainers for the models (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)–[4\.2\.4](dataSetsIntro.html#model-titanic-svm)) fitted to the Titanic data. Note that, in the `data` argument, we indicate the `titanic` data frame without the ninth column, i.e., without the *survived* variable. The variable is used in the `y` argument to explicitly define the binary dependent variable equal to 1 for survivors and 0 for passengers who did not survive. 
```
titanic_lmr_exp <- explain(model = titanic_lmr,
                           data = titanic[, -9],
                           y = titanic$survived == "yes",
                           label = "Logistic Regression",
                           type = "classification")
titanic_rf_exp <- explain(model = titanic_rf,
                          data = titanic[, -9],
                          y = titanic$survived == "yes",
                          label = "Random Forest")
titanic_gbm_exp <- explain(model = titanic_gbm,
                           data = titanic[, -9],
                           y = titanic$survived == "yes",
                           label = "Generalized Boosted Regression")
titanic_svm_exp <- explain(model = titanic_svm,
                           data = titanic[, -9],
                           y = titanic$survived == "yes",
                           label = "Support Vector Machine")
```

### 4\.2\.7 List of model\-objects

In the previous sections, we have built four predictive models for the Titanic dataset. The models will be used in the rest of the book to illustrate model\-explanation methods and tools. For the ease of reference, we summarize the models in Table [4\.1](dataSetsIntro.html#tab:archivistHooksOfModelsTitanic). The binary model\-objects can be downloaded by using the indicated `archivist` hooks (Biecek and Kosinski [2017](#ref-archivist)). By calling a function specified in the last column of the table, one can restore a selected model in its local R environment.
Table 4\.1: Predictive models created for the `titanic` dataset. All models are fitted with following variables: *gender*, *age*, *class*, *sibsp*, *parch*, *fare*, *embarked.*

| Model name / library | Link to this object |
| --- | --- |
| `titanic_lmr` | Get the model: `archivist::` |
| `rms:: lmr` v.5\.1\.3 | `aread("pbiecek/models/58b24")`. |
| `titanic_rf` | Get the model: `archivist::` |
| `randomForest:: randomForest` v.4\.6\.14 | `aread("pbiecek/models/4e0fc")`. |
| `titanic_gbm` | Get the model: `archivist::` |
| `gbm:: gbm` v.2\.1\.5 | `aread("pbiecek/models/b7078")`. |
| `titanic_svm` | Get the model: `archivist::` |
| `e1071:: svm` v.1\.7\.3 | `aread("pbiecek/models/9c27f")`. |

Table [4\.2](dataSetsIntro.html#tab:archivistHooksOfDataFramesTitanic) summarizes the data frames that will be used in examples in the subsequent chapters.
Table 4\.2: Data frames created for the Titanic use\-case. All frames have following variables: *gender*, *age*, *class*, *embarked*, *country*, *fare*, *sibsp*, *parch*. The `titanic` data frame includes also the *survived* variable.

| Description | Link to this object |
| --- | --- |
| `titanic` dataset with 2207 observations with imputed missing values | `archivist:: aread("pbiecek/models/27e5c")` |
| `johnny_d` 8\-year\-old boy from the 1st class without parents, paid 72 pounds, embarked in Southampton | `archivist:: aread("pbiecek/models/e3596")` |
| `henry` 47\-year\-old male from the 1st class, travelled alone, paid 25 pounds, embarked in Cherbourg | `archivist:: aread("pbiecek/models/a6538")` |
4\.3 Models for RMS Titanic, snippets for Python
------------------------------------------------

Titanic data are provided in the `titanic` dataset, which is available in the `dalex` library. The values of the dependent binary variable are given in the `survived` column; the remaining columns give the values of the explanatory variables that are used to construct the classifiers.
The following instructions load the `titanic` dataset and split it into the dependent variable `y` and the explanatory variables `X`. Note that, for the purpose of this example, we do not divide the data into the training and testing sets. Instructions on how to deal with the situation when you want to analyze the model on data other than the training set will be presented in the subsequent chapters.
``` import dalex as dx titanic = dx.datasets.load_titanic() X = titanic.drop(columns='survived') y = titanic.survived ``` Dataset `X` contains numeric variables with different ranges (for instance, *age* and *fare*) and categorical variables. Machine\-learning algorithms in the `sklearn` library require data in a numeric form. Therefore, before modelling, we use a pipeline that performs data pre\-processing. In particular, we scale the continuous variables (*age*, *fare*, *parch*, and *sibsp*) and one\-hot\-encode the categorical variables (*gender*, *class*, *embarked*). ``` from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import make_column_transformer from sklearn.pipeline import make_pipeline preprocess = make_column_transformer( (StandardScaler(), ['age', 'fare', 'parch', 'sibsp']), (OneHotEncoder(), ['gender', 'class', 'embarked'])) ``` ### 4\.3\.1 Logistic regression model To fit the logistic regression model (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)), we use the `LogisticRegression` algorithm from the `sklearn` library. By default, the implementation uses the ridge penalty, defined in [(2\.6\)](modelDevelopmentProcess.html#eq:ridgePenalty). For this reason it is important to scale continuous variables like `age` and `fare`. The fitted model is stored in object `titanic_lr`, which will be used in subsequent chapters. ``` from sklearn.linear_model import LogisticRegression titanic_lr = make_pipeline( preprocess, LogisticRegression(penalty = 'l2')) titanic_lr.fit(X, y) ``` ### 4\.3\.2 Random forest model To fit the random forest model (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)), we use the `RandomForestClassifier` algorithm from the `sklearn` library. We use the default settings with trees not deeper than three levels, and the number of trees set to 500\. The fitted model is stored in object `titanic_rf`. ``` from sklearn.ensemble import RandomForestClassifier titanic_rf = make_pipeline( preprocess, RandomForestClassifier(max_depth = 3, n_estimators = 500)) titanic_rf.fit(X, y) ``` ### 4\.3\.3 Gradient boosting model To fit the gradient boosting model (see Section [4\.2\.3](dataSetsIntro.html#model-titanic-gbm)), we use the `GradientBoostingClassifier` algorithm from the `sklearn` library. We use the default settings, with the number of trees in the ensemble set to 100\. The fitted model is stored in object `titanic_gbc`. ``` from sklearn.ensemble import GradientBoostingClassifier titanic_gbc = make_pipeline( preprocess, GradientBoostingClassifier(n_estimators = 100)) titanic_gbc.fit(X, y) ``` ### 4\.3\.4 Support vector machine model Finally, to fit the SVM model with C\-Support Vector Classification mode (see Section [4\.2\.4](dataSetsIntro.html#model-titanic-svm)), we use the `SVC` algorithm from the `sklearn` library based on `libsvm`. The fitted model is stored in object `titanic_svm`. ``` from sklearn.svm import SVC titanic_svm = make_pipeline( preprocess, SVC(probability = True)) titanic_svm.fit(X, y) ``` ### 4\.3\.5 Models’ predictions Let us now compare predictions that are obtained from the different models. In particular, we compute the predicted probability of survival for Johnny D, an 8\-year\-old boy who embarked in Southampton and travelled in the first class with no parents nor siblings, and with a ticket costing 72 pounds (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). First, we create a data frame `johnny_d` that contains the data describing the passenger. 
```
import pandas as pd

johnny_d = pd.DataFrame({'gender'  : ['male'],
                         'age'     : [8],
                         'class'   : ['1st'],
                         'embarked': ['Southampton'],
                         'fare'    : [72],
                         'sibsp'   : [0],
                         'parch'   : [0]},
                        index = ['JohnnyD'])
```

Subsequently, we use the method `predict_proba()` to obtain the predicted probability of survival for the logistic regression model.

```
titanic_lr.predict_proba(johnny_d)
# array([[0.35884528, 0.64115472]])
```

We do the same for the three remaining models.

```
titanic_rf.predict_proba(johnny_d)
# array([[0.63028556, 0.36971444]])
titanic_gbc.predict_proba(johnny_d)
# array([[0.1567194, 0.8432806]])
titanic_svm.predict_proba(johnny_d)
# array([[0.78308146, 0.21691854]])
```

We also create a data frame for passenger Henry (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) and compute his predicted probability of survival.

```
henry = pd.DataFrame({'gender'  : ['male'],
                      'age'     : [47],
                      'class'   : ['1st'],
                      'embarked': ['Cherbourg'],
                      'fare'    : [25],
                      'sibsp'   : [0],
                      'parch'   : [0]},
                     index = ['Henry'])

titanic_lr.predict_proba(henry)
# array([[0.56798421, 0.43201579]])
titanic_rf.predict_proba(henry)
# array([[0.69917845, 0.30082155]])
titanic_gbc.predict_proba(henry)
# array([[0.78542886, 0.21457114]])
titanic_svm.predict_proba(henry)
# array([[0.81725832, 0.18274168]])
```

### 4\.3\.6 Models’ explainers

The Python\-code examples shown above use functions from the `sklearn` library, which facilitates uniform working with models. However, we may want to, or have to, work with models built by using other libraries. To simplify the task, the `dalex` library wraps models in objects of class `Explainer` that contain, in a uniform way, all the functions necessary for working with models.

There is only one argument that is required by the `Explainer()` constructor, i.e., `model`. However, the constructor allows additional arguments that extend its functionalities. In particular, the list of arguments includes the following:

* `data`, a data frame or `numpy.ndarray` providing data to which the model is to be applied; it should be an object of the `pandas.DataFrame` class, otherwise it will be converted to `pandas.DataFrame`;
* `y`, values of the dependent variable/target variable corresponding to the data given in the `data` object;
* `predict_function`, a function that returns prediction scores; if not specified, then `dalex` will try to guess which function should be used (`predict()`, `predict_proba()`, or something else). Note that this function should work on `pandas.DataFrame` objects; if it works only on `numpy.ndarray`, then an appropriate conversion should also be included in `predict_function`;
* `residual_function`, a function that returns model residuals;
* `label`, a unique name of the model;
* `model_class`, the class of the actual model;
* `verbose`, a logical argument (`verbose = True` by default) indicating whether diagnostic messages are to be printed;
* `model_type`, information about the type of the model, either `"classification"` (for a binary dependent variable) or `"regression"` (for a continuous dependent variable);
* `model_info`, a dictionary with additional information about the model.

Application of the constructor `Explainer()` provides an object of class `Explainer`.
It is an object with many components that include:

* `model`, the explained model;
* `data`, the data to which the model is applied;
* `y`, observed values of the dependent variable corresponding to `data`;
* `y_hat`, predictions obtained by applying `model` to `data`;
* `residuals`, residuals computed based on `y` and `y_hat`;
* `predict_function`, the function used to obtain the model’s predictions;
* `residual_function`, the function used to obtain residuals;
* `class`, class/classes of the model;
* `label`, label of the model/explainer;
* `model_info`, a dictionary (with components `package`, `version`, and `type`) providing information about the model.

Thus, each explainer\-object contains all elements needed to create a model explanation. The code below creates explainers for the models (see Sections [4\.3\.1](dataSetsIntro.html#model-titanic-python-lr)–[4\.3\.4](dataSetsIntro.html#model-titanic-python-svm)) fitted to the Titanic data.

```
titanic_rf_exp = dx.Explainer(titanic_rf, X, y, 
                    label = "Titanic RF Pipeline")
titanic_lr_exp = dx.Explainer(titanic_lr, X, y, 
                    label = "Titanic LR Pipeline")
titanic_gbc_exp = dx.Explainer(titanic_gbc, X, y, 
                    label = "Titanic GBC Pipeline")
titanic_svm_exp = dx.Explainer(titanic_svm, X, y, 
                    label = "Titanic SVM Pipeline")
```

When an explainer is created, the specified model and data are tested for consistency. Diagnostic information is printed on the screen. The following output shows diagnostic information for the `titanic_rf` model.

```
Preparation of a new explainer is initiated
-> data              : 2207 rows 7 cols
-> target variable   : Argument 'y' was converted to a numpy.ndarray.
-> target variable   : 2207 values
-> model_class       : sklearn.pipeline.Pipeline (default)
-> label             : Titanic RF Pipeline
-> predict function  : <yhat_proba> will be used (default)
-> predicted values  : min = 0.171, mean = 0.322, max = 0.893
-> residual function : difference between y and yhat (default)
-> residuals         : min = -0.826, mean = 4.89e-05, max = 0.826
-> model_info        : package sklearn
A new explainer has been created!
```
4\.4 Apartment prices
---------------------

Predicting house prices is a common exercise used in machine\-learning courses. Various datasets for house prices are available at websites like [Kaggle](https://www.kaggle.com) or [UCI Machine Learning Repository](https://archive.ics.uci.edu). In this book, we will work with an interesting variant of this problem.
The `apartments` dataset contains simulated data that match key characteristics of real apartments in Warsaw, the capital of Poland. However, the dataset is created in a way that two very different models, namely linear regression and random forest, offer almost exactly the same overall accuracy of predictions. The natural question is then: which model should we choose? We will show that model\-explanation tools provide important insight into the key model characteristics and are helpful in model selection. The dataset is available in the `DALEX` package in R and the `dalex` library in Python. It contains 1000 observations (apartments) and six variables: * *m2\.price*, apartment’s price per square meter (in EUR), a numerical variable in the range of 1607–6595; * *construction.year*, the year of construction of the block of flats in which the apartment is located, a numerical variable in the range of 1920–2010; * *surface*, apartment’s total surface in square meters, a numerical variable in the range of 20–150; * *floor*, the floor at which the apartment is located (ground floor taken to be the first floor), a numerical integer variable with values ranging from 1 to 10; * *no.rooms*, the total number of rooms, a numerical integer variable with values ranging from 1 to 6; * *district*, a factor with 10 levels indicating the district of Warsaw where the apartment is located. The first six rows of this dataset are presented in the table below. | m2\.price | construction.year | surface | floor | no.rooms | district | | --- | --- | --- | --- | --- | --- | | 5897 | 1953 | 25 | 3 | 1 | Srodmiescie | | 1818 | 1992 | 143 | 9 | 5 | Bielany | | 3643 | 1937 | 56 | 1 | 2 | Praga | | 3517 | 1995 | 93 | 7 | 3 | Ochota | | 3013 | 1992 | 144 | 6 | 5 | Mokotow | | 5795 | 1926 | 61 | 6 | 2 | Srodmiescie | Models considered for this dataset will use *m2\.price* as the (continuous) dependent variable. Models’ predictions will be validated on a set of 9000 apartments included in data frame `apartments_test`. Note that, usually, the training dataset is larger than the testing one. In this example, we deliberately use a small training set, so that model selection may be more challenging. ### 4\.4\.1 Data exploration Note that `apartments` is an artificial dataset created to illustrate and explain differences between random forest and linear regression. Hence, the structure of the data, the form and strength of association between variables, plausibility of distributional assumptions, etc., is less problematic than in a real\-life dataset. In fact, all these characteristics of the data are known. Nevertheless, we present some data exploration below to illustrate the important aspects of the data. The variable of interest is *m2\.price*, the price per square meter. The histogram presented in Figure [4\.6](dataSetsIntro.html#fig:apartmentsExplorationMi2) indicates that the distribution of the variable is slightly skewed to the right. Figure 4\.6: Distribution of the price per square meter in the apartment\-prices data. Figure [4\.7](dataSetsIntro.html#fig:apartmentsMi2Construction) suggests (possibly) a non\-linear relationship between *construction.year* and *m2\.price* and a linear relation between *surface* and *m2\.price*. Figure 4\.7: Apartment\-prices data. Price per square meter vs. year of construction (left\-hand\-side panel) and vs. surface (right\-hand\-side panel). 
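Plots like those in Figures 4\.6 and 4\.7 are straightforward to reproduce. The snippet below is only an illustrative sketch (it assumes that the `ggplot2` package is available and uses an arbitrary bin width); it is not the exact code used to generate the figures.

```
library("DALEX")
library("ggplot2")

# Distribution of the price per square meter (cf. Figure 4.6).
ggplot(apartments, aes(x = m2.price)) +
  geom_histogram(binwidth = 100)

# Price per square meter vs. year of construction (cf. Figure 4.7, left panel).
ggplot(apartments, aes(x = construction.year, y = m2.price)) +
  geom_point() +
  geom_smooth(se = FALSE)
```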
Figure [4\.8](dataSetsIntro.html#fig:apartmentsMi2Floor) indicates that the relationship between *floor* and *m2\.price* is also close to linear, as is the association between *no.rooms* and *m2\.price*.

Figure 4\.8: Apartment\-prices data. Price per square meter vs. floor (left\-hand\-side panel) and vs. number of rooms (right\-hand\-side panel).

Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms) shows that *surface* and *number of rooms* are positively associated and that prices depend on the district. In particular, box plots in Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms) indicate that the highest prices per square meter are observed in Srodmiescie (Downtown).

Figure 4\.9: Apartment\-prices data. Surface vs. number of rooms (left\-hand\-side panel) and price per square meter for different districts (right\-hand\-side panel).

4\.5 Models for apartment prices, snippets for R
------------------------------------------------

### 4\.5\.1 Linear regression model

The dependent variable of interest, *m2\.price*, is continuous. Thus, a natural choice to build a predictive model is linear regression. We treat all the other variables in the `apartments` data frame as explanatory and include them in the model. To fit the model, we apply the `lm()` function. The results of the model are stored in model\-object `apartments_lm`.
``` library("DALEX") apartments_lm <- lm(m2.price ~ ., data = apartments) anova(apartments_lm) ``` ``` ## Analysis of Variance Table ## ## Response: m2.price ## Df Sum Sq Mean Sq F value Pr(>F) ## construction.year 1 2629802 2629802 33.233 1.093e-08 *** ## surface 1 207840733 207840733 2626.541 < 2.2e-16 *** ## floor 1 79823027 79823027 1008.746 < 2.2e-16 *** ## no.rooms 1 956996 956996 12.094 0.000528 *** ## district 9 451993980 50221553 634.664 < 2.2e-16 *** ## Residuals 986 78023123 79131 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ``` ### 4\.5\.2 Random forest model As an alternative to linear regression, we consider a random forest model. Again, we treat all the variables in the `apartments` data frame other than *m2\.price* as explanatory and include them in the model. To fit the model, we apply the `randomForest()` function, with default settings, from the package with the same name (Liaw and Wiener [2002](#ref-randomForest)). The results of the model are stored in model\-object `apartments_rf`. ``` library("randomForest") set.seed(72) apartments_rf <- randomForest(m2.price ~ ., data = apartments) ``` ### 4\.5\.3 Support vector machine model Finally, we consider an SVM model, with all the variables in the `apartments` data frame other than *m2\.price* treated as explanatory. To fit the model, we use the `svm()` function, with default settings, from package `e1071` (Meyer et al. [2019](#ref-e1071)). The results of the model are stored in model\-object `apartments_svm`. ``` library("e1071") apartments_svm <- svm(m2.price ~ construction.year + surface + floor + no.rooms + district, data = apartments) ``` ### 4\.5\.4 Models’ predictions The `predict()` function calculates predictions for a specific model. In the example below, we use model\-objects `apartments_lm`, `apartments_rf`, and `apartments_svm`, to calculate predictions for prices of the apartments from the `apartments_test` data frame. Note that, for brevity’s sake, we compute the predictions only for the first six observations from the data frame. The actual prices for the first six observations from `apartments_test` are provided below. ``` apartments_test$m2.price[1:6] ``` ``` ## [1] 4644 3082 2498 2735 2781 2936 ``` Predicted prices for the linear regression model are as follows: ``` predict(apartments_lm, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4820.009 3292.678 2717.910 2922.751 2974.086 2527.043 ``` Predicted prices for the random forest model take the following values: ``` predict(apartments_rf, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4214.084 3178.061 2695.787 2744.775 2951.069 2999.450 ``` Predicted prices for the SVM model are as follows: ``` predict(apartments_svm, apartments_test[1:6,]) ``` ``` ## 1001 1002 1003 1004 1005 1006 ## 4590.076 3012.044 2369.748 2712.456 2681.777 2750.904 ``` By using the code presented below, we summarize the predictive performance of the linear regression and random forest models by computing the square root of the mean\-squared\-error (RMSE). For a “perfect” predictive model, which would predict all observations exactly, RMSE should be equal to 0\. More information about RMSE can be found in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). 
```
predicted_apartments_lm <- predict(apartments_lm, apartments_test)
sqrt(mean((predicted_apartments_lm - apartments_test$m2.price)^2))
```

```
## [1] 283.0865
```

```
predicted_apartments_rf <- predict(apartments_rf, apartments_test)
sqrt(mean((predicted_apartments_rf - apartments_test$m2.price)^2))
```

```
## [1] 282.9519
```

For the random forest model, RMSE is equal to 283\. It is almost identical to the RMSE for the linear regression model, which is equal to 283\.1\. Thus, the question we may face is: should we choose the more complex but flexible random forest model, or the simpler and easier to interpret linear regression model? In the subsequent chapters, we will try to provide an answer to this question. In particular, we will show that a proper model exploration may help to discover weak and strong sides of any of the models and, in consequence, allow the creation of a new model, with better performance than either of the two.

### 4\.5\.5 Models’ explainers

The code presented below creates explainers for the models (see Sections [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)–[4\.5\.3](dataSetsIntro.html#model-Apartments-svm)) fitted to the apartment\-prices data. Note that we use the `apartments_test` data frame without the first column, i.e., the *m2\.price* variable, in the `data` argument. This will be the dataset to which the model will be applied (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). The *m2\.price* variable is explicitly specified as the dependent variable in the `y` argument (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)).

```
apartments_lm_exp <- explain(model = apartments_lm, 
                             data = apartments_test[,-1], 
                             y = apartments_test$m2.price, 
                             label = "Linear Regression")
apartments_rf_exp <- explain(model = apartments_rf, 
                             data = apartments_test[,-1], 
                             y = apartments_test$m2.price, 
                             label = "Random Forest")
apartments_svm_exp <- explain(model = apartments_svm, 
                              data = apartments_test[,-1], 
                              y = apartments_test$m2.price, 
                              label = "Support Vector Machine")
```

### 4\.5\.6 List of model\-objects

In Sections [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)–[4\.5\.3](dataSetsIntro.html#model-Apartments-svm), we have built three predictive models for the `apartments` dataset. The models will be used in the rest of the book to illustrate the model\-explanation methods and tools. For the ease of reference, we summarize the models in Table [4\.3](dataSetsIntro.html#tab:archivistHooksOfModelsApartments). The binary model\-objects can be downloaded by using the indicated `archivist` hooks (Biecek and Kosinski [2017](#ref-archivist)). By calling a function specified in the last column of the table, one can restore a selected model in a local R environment.

Table 4\.3: Predictive models created for the dataset Apartment prices. All models are fitted by using *construction.year*, *surface*, *floor*, *no.rooms*, and *district* as explanatory variables.

| Model name / library | Link to this object |
| --- | --- |
| `apartments_lm`, `stats::lm` v.3\.5\.3 | Get the model: `archivist::aread("pbiecek/models/55f19")` |
| `apartments_rf`, `randomForest::randomForest` v.4\.6\.14 | Get the model: `archivist::aread("pbiecek/models/fe7a5")` |
| `apartments_svm`, `e1071::svm` v.1\.7\.3 | Get the model: `archivist::aread("pbiecek/models/d2ca0")` |
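As an illustration, the random forest model listed in Table 4\.3 can be restored directly from its `archivist` hook; a minimal sketch (assuming that the `archivist` package is installed):

```
# Restore the apartments_rf model from the hook given in Table 4.3.
library("archivist")
apartments_rf <- aread("pbiecek/models/fe7a5")
```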
4\.6 Models for apartment prices, snippets for Python
-----------------------------------------------------

Apartment\-prices data are provided in the `apartments` dataset, which is available in the `dalex` library. The values of the continuous dependent variable are given in the `m2_price` column; the remaining columns give the values of the explanatory variables that are used to construct the predictive models. The following instructions load the `apartments` dataset and split it into the dependent variable `y` and the explanatory variables `X`.

```
import dalex as dx
apartments = dx.datasets.load_apartments()
X = apartments.drop(columns='m2_price')
y = apartments['m2_price']
```

Dataset `X` contains numeric variables with different ranges (for instance, *surface* and *no.rooms*) and categorical variables (*district*). Machine\-learning algorithms in the `sklearn` library require data in a numeric form. Therefore, before modelling, we use a pipeline that performs data pre\-processing. In particular, we scale the continuous variables (*construction.year*, *surface*, *floor*, and *no.rooms*) and one\-hot\-encode the categorical variables (*district*).

```
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline

preprocess = make_column_transformer(
    (StandardScaler(), ['construction_year', 'surface', 'floor', 'no_rooms']),
    (OneHotEncoder(), ['district']))
```

### 4\.6\.1 Linear regression model

To fit the linear regression model (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)), we use the `LinearRegression` algorithm from the `sklearn` library. The fitted model is stored in object `apartments_lm`, which will be used in subsequent chapters.

```
from sklearn.linear_model import LinearRegression

apartments_lm = make_pipeline(
    preprocess,
    LinearRegression())
apartments_lm.fit(X, y)
```

### 4\.6\.2 Random forest model

To fit the random forest model (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)), we use the `RandomForestRegressor` algorithm from the `sklearn` library. We apply the default settings with trees not deeper than three levels and the number of trees in the random forest set to 500\. The fitted model is stored in object `apartments_rf` for the purpose of illustration in subsequent chapters.

```
from sklearn.ensemble import RandomForestRegressor

apartments_rf = make_pipeline(
    preprocess,
    RandomForestRegressor(max_depth = 3, n_estimators = 500))
apartments_rf.fit(X, y)
```

### 4\.6\.3 Support vector machine model

Finally, to fit the SVM model (see Section [4\.5\.3](dataSetsIntro.html#model-Apartments-svm)), we use the `SVR` algorithm from the `sklearn` library. The fitted model is stored in object `apartments_svm`, which will be used in subsequent chapters.

```
from sklearn.svm import SVR

apartments_svm = make_pipeline(
    preprocess,
    SVR())
apartments_svm.fit(X, y)
```

### 4\.6\.4 Models’ predictions

Let us now compare predictions that are obtained from the different models for the `apartments_test` data. In the code below, we use the `predict()` method to obtain the predicted price per square meter for the linear regression model.
```
apartments_test = dx.datasets.load_apartments_test()
apartments_test = apartments_test.drop(columns='m2_price')

apartments_lm.predict(apartments_test)
# array([4820.00943156, 3292.67756996, 2717.90972101, ..., 4836.44370353,
#        3191.69063189, 5157.93680175])
```

In a similar way, we obtain the predictions for the two remaining models.

```
apartments_rf.predict(apartments_test)
# array([4708, 3819, 2273, ..., 4708, 4336, 4916])
```

```
apartments_svm.predict(apartments_test)
# array([3344.48570564, 3323.01215313, 3321.97053977, ..., 3353.19750146,
#        3383.51743883, 3376.31070911])
```

### 4\.6\.5 Models’ explainers

The Python\-code examples presented for the models for the apartment\-prices dataset use functions from the `sklearn` library, which facilitates uniform working with models. However, we may want to, or have to, work with models built by using other libraries. To simplify the task, the `dalex` library wraps models in objects of class `Explainer` that contain, in a uniform way, all the functions necessary for working with models (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). The code below creates explainer\-objects for the models (see Sections [4\.6\.1](dataSetsIntro.html#model-Apartments-python-lr)–[4\.6\.3](dataSetsIntro.html#model-Apartments-python-svm)) fitted to the apartment\-prices data.

```
apartments_lm_exp = dx.Explainer(apartments_lm, X, y, 
                      label = "Apartments LM Pipeline")
apartments_rf_exp = dx.Explainer(apartments_rf, X, y, 
                      label = "Apartments RF Pipeline")
apartments_svm_exp = dx.Explainer(apartments_svm, X, y, 
                      label = "Apartments SVM Pipeline")
```

When an explainer is created, the specified model and data are tested for consistency. Diagnostic information is printed on the screen. The following output shows diagnostic information for the `apartments_lm` model.

```
Preparation of a new explainer is initiated
-> data              : 1000 rows 5 cols
-> target variable   : Argument 'y' converted to a numpy.ndarray.
-> target variable   : 1000 values
-> model_class       : sklearn.pipeline.Pipeline (default)
-> label             : Apartments LM Pipeline
-> predict function  : <yhat at 0x117090840> will be used (default)
-> predicted values  : min = 1.78e+03, mean = 3.49e+03, max = 6.18e+03
-> residual function : difference between y and yhat (default)
-> residuals         : min = -2.47e+02, mean = 2.06e-13, max = 4.69e+02
-> model_info        : package sklearn
A new explainer has been created!
```
6 Break\-down Plots for Additive Attributions
=============================================

6\.1 Introduction
-----------------

Probably the most commonly asked question when trying to understand a model’s prediction for a single observation is: *which variables contribute to this result the most?* There is no single best approach that can be used to answer this question. In this chapter, we introduce break\-down (BD) plots, which offer a possible solution. The plots can be used to present “variable attributions”, i.e., the decomposition of the model’s prediction into contributions that can be attributed to different explanatory variables. Note that the method is similar to the `EXPLAIN` algorithm introduced by Robnik\-Šikonja and Kononenko ([2008](#ref-explainPaper)) and implemented in the `ExplainPrediction` package (Robnik\-Šikonja [2018](#ref-explainPackage)).

6\.2 Intuition
--------------

As mentioned in Section [2\.5](modelDevelopmentProcess.html#fitting), we assume that prediction \\(f(\\underline{x})\\) is an approximation of the expected value of the dependent variable \\(Y\\) given values of explanatory variables \\(\\underline{x}\\). The underlying idea of BD plots is to capture the contribution of an explanatory variable to the model’s prediction by computing the shift in the expected value of \\(Y\\), while fixing the values of other variables.

This idea is illustrated in Figure [6\.1](breakDown.html#fig:BDPrice4). Consider an example related to the prediction obtained for the random forest model `titanic_rf` for the Titanic data (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). We are interested in the probability of survival for Johnny D, an 8\-year\-old passenger travelling in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). Panel A of Figure [6\.1](breakDown.html#fig:BDPrice4) shows the distribution of the model’s predictions for observations from the Titanic dataset. In particular, the violin plot in the row marked “all data” summarizes the distribution of predictions for all 2207 observations from the dataset. The red dot indicates the mean value that can be interpreted as an estimate of the expected value of the model’s predictions over the distribution of all explanatory variables. In this example, the mean value is equal to 23\.5%.

Figure 6\.1: Break\-down plots show how the contributions attributed to individual explanatory variables change the mean model’s prediction to yield the actual prediction for a particular single instance (observation). Panel A) The first row shows the distribution and the mean value (red dot) of the model’s predictions for all data. The next rows show the distribution and the mean value of the predictions when fixing values of subsequent explanatory variables. The last row shows the prediction for the particular instance of interest. B) Red dots indicate the mean predictions from panel A. C) The green and red bars indicate, respectively, positive and negative changes in the mean predictions (contributions attributed to explanatory variables).

To evaluate the contribution of individual explanatory variables to this particular single\-instance prediction, we investigate the changes in the model’s predictions when fixing the values of consecutive variables. For instance, the violin plot in the row marked “age\=8” in panel A of Figure [6\.1](breakDown.html#fig:BDPrice4) summarizes the distribution of the predictions obtained when the *age* variable takes the value “8 years”, as for Johnny D.
Again, the red dot indicates the mean of the predictions, and it can be interpreted as an estimate of the expected value of the predictions over the distribution of all explanatory variables other than *age*. The violin plot in the “class\=1st” row describes the distribution and the mean value of predictions with the values of variables *age* and *class* set to “8 years” and “1st class”, respectively. Subsequent rows contain similar information for other explanatory variables included in the random forest model. In the last row, all explanatory variables are fixed at the values describing Johnny D. Hence, the last row contains only one point, the red dot, which corresponds to the model’s prediction, i.e., survival probability, for Johnny D. The thin grey lines in panel A of Figure [6\.1](breakDown.html#fig:BDPrice4) show the change of predictions for different individuals when the value of a particular explanatory variable is being replaced by the value indicated in the name of the row. For instance, the lines between the first and the second row indicate that fixing the value of the *age* variable to “8 years” has a different effect for different individuals. For some individuals (most likely, passengers that are 8 years old) the model’s prediction does not change at all. For others, the predicted value increases (probably for the passengers older than 8 years) or decreases (most likely for the passengers younger than 8 years). Eventually, however, we may be interested in the mean predictions, or even only in the changes of the means. Thus, simplified plots, similar to those shown in panels B and C of Figure [6\.1](breakDown.html#fig:BDPrice4), may be of interest. Note that, in panel C, the row marked “intercept” presents the overall mean value (0\.235\) of predictions for the entire dataset. Consecutive rows present changes in the mean prediction induced by fixing the value of a particular explanatory variable. Positive changes are indicated with green bars, while negative differences are indicated with red bars. The last row, marked “prediction,” contains the sum of the overall mean value and the changes, i.e., the predicted value of survival probability for Johnny D, indicated by the blue bar. What can be learned from BD plots as those presented in Figure [6\.1](breakDown.html#fig:BDPrice4)? The plots offer a summary of the effects of particular explanatory variables on a model’s predictions. From Figure [6\.1](breakDown.html#fig:BDPrice4) we can conclude, for instance, that the mean prediction for the random forest model for the Titanic dataset is equal to 23\.5%. This is the predicted probability of survival averaged over all people on Titanic. Note that it is not the percentage of individuals that survived, but the mean model\-prediction. Thus, for a different model, we would most likely obtain a different mean value. The model’s prediction for Johnny D is equal to 42\.2%. It is much higher than the mean prediction. The two explanatory variables that influence this prediction the most are *class* (with the value “1st”) and *age* (with the value equal to 8\). By fixing the values of these two variables, we add 35\.6 percentage points to the mean prediction. All other explanatory variables have smaller effects, and they actually reduce the increase in the predicted value induced by *class* and *age*. For instance, *gender* (Johnny D was a boy) reduces the predicted survival probability by about 8\.3 percentage points. 
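To make the construction in panel A of Figure 6\.1 concrete, the mean prediction after fixing a single variable can be estimated by substituting the selected value into every row of the dataset and averaging the model’s predictions. The sketch below illustrates this for *age* fixed at 8 years; it uses the `titanic_rf_exp` explainer created in Section 4\.2\.6 and is only an illustration of the idea, not the code used by the `DALEX` package.

```
# Mean model-prediction over the whole dataset (the "all data" row).
mean_all <- mean(predict(titanic_rf_exp, titanic_rf_exp$data))

# Mean prediction after fixing age = 8 in every observation (the "age=8" row).
data_age8 <- titanic_rf_exp$data
data_age8$age <- 8
mean_age8 <- mean(predict(titanic_rf_exp, data_age8))

# Shift in the mean prediction attributable to fixing age = 8.
mean_age8 - mean_all
```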
It is worth noting that the part of the prediction attributed to an explanatory variable depends not only on the variable but also on the considered value. For instance, in the example presented in Figure [6\.1](breakDown.html#fig:BDPrice4), the effect of the *embarked* harbour is very small. This may be due to the fact that the variable is not very important for prediction. However, it is also possible that the variable is important, but the effect of the value considered for the particular instance (Johnny D, who embarked in Southampton) may be close to the mean, as compared to all other possible values of the variable.

It is also worth mentioning that, for models that include interactions, the part of the prediction attributed to a variable depends on the order in which one sets the values of the explanatory variables. Note that interactions do not have to be explicitly specified in the model structure, as they are in, for instance, linear\-regression models. They may also emerge as a result of fitting to the data a flexible model like, for instance, a regression tree.

To illustrate the point, Figure [6\.2](breakDown.html#fig:ordering) presents an example of a random forest model with only three explanatory variables fitted to the Titanic data. Subsequently, we focus on the model’s prediction for a 2\-year\-old boy who travelled in the second class. The predicted probability of survival is equal to 0\.964, more than double the mean model\-prediction of 0\.407\. We would like to understand which explanatory variables drive this prediction. Two possible explanations are illustrated in Figure [6\.2](breakDown.html#fig:ordering).

**Explanation 1:** We first consider the explanatory variables *gender*, *class*, and *age*, in that order. Figure [6\.2](breakDown.html#fig:ordering) indicates negative contributions for the first two variables and a positive contribution for the third one. Thus, the fact that the passenger was a boy decreases the chances of survival, as compared to the mean model\-prediction. He travelled in the second class, which further lowers the probability of survival. However, as the boy was very young, this substantially increases the odds of surviving. This last conclusion is the result of the fact that most passengers in the second class were adults; therefore, a kid from the second class had higher chances of survival.

**Explanation 2:** We now consider the following order of explanatory variables: *gender*, *age*, and *class*. Figure [6\.2](breakDown.html#fig:ordering) indicates a positive contribution of *class*, unlike in the first explanation. Again, the fact that the passenger was a boy decreases the chances of survival, as compared to the mean model\-prediction. However, he was very young, and this increases the probability of survival as compared to adult men. Finally, the fact that the boy travelled in the second class increases the chance even further. This last conclusion stems from the fact that most kids travelled in the third class; thus, being a child in the second class would increase chances of survival.

Figure 6\.2: An illustration of the order\-dependence of variable attributions. Two break\-down plots for the same observation for a random forest model for the Titanic dataset. The contribution attributed to class is negative in the plot at the top and positive in the one at the bottom.
The difference is due to the difference in the ordering of explanatory variables used to construct the plots (as seen in the labelling of the rows). 6\.3 Method ----------- In this section, we introduce more formally the method of variable attribution. We first focus on linear models, because their simple and additive structure allows building intuition. Then we consider a more general case. ### 6\.3\.1 Break\-down for linear models Assume the classical linear\-regression model for dependent variable \\(Y\\) with \\(p\\) explanatory variables, the values of which are collected in vector \\(\\underline{x}\\), and vector \\(\\underline{\\beta}\\) of \\(p\\) corresponding coefficients. Note that we separately consider \\(\\beta^0\\), which is the intercept. Prediction for \\(Y\\) is given by the expected value of \\(Y\\) conditional on \\(\\underline{x}\\). In particular, the expected value is given by the following linear combination: \\\[\\begin{equation} E\_Y(Y \| \\underline{x}) \= f(\\underline{x}) \= \\beta^0 \+ \\underline{x}'\\underline{\\beta}. \\tag{6\.1} \\end{equation}\\] Assume that we select a vector of values of explanatory variables \\(\\underline{x}\_\* \\in \\mathcal R^p\\). We are interested in the contribution of the \\(j\\)\-th explanatory variable to model’s prediction \\(f(\\underline{x}\_\*)\\) for a single observation described by \\(\\underline{x}\_\*\\). A possible approach to evaluate the contribution is to measure how much the expected value of \\(Y\\) changes after conditioning on \\({x}^j\_\*\\). Using the notation \\(\\underline{x}^{j\|\=X^j}\_\*\\) (see Section [2\.3](modelDevelopmentProcess.html#notation)) to indicate that we treat the value of the \\(j\\)\-th coordinate as a random variable \\(X^j\\), we can thus define \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= E\_Y(Y \| \\underline{x}\_\*) \- E\_{X^j}\\left\[E\_Y\\left\\{Y \| \\underline{x}^{j\|\=X^j}\_\*\\right\\}\\right]\= f(\\underline{x}\_\*) \- E\_{X^j}\\left\\{f\\left(\\underline{x}^{j\|\=X^j}\_\*\\right)\\right\\}, \\tag{6\.2} \\end{equation}\\] where \\(v(j, \\underline{x}\_\*)\\) is the *variable\-importance measure* for the \\(j\\)\-th explanatory variable evaluated at \\(\\underline{x}\_\*\\) and the last expected value on the right\-hand side of [(6\.2\)](breakDown.html#eq:BDattr1) is taken over the distribution of the variable (treated as random). For the linear\-regression model [(6\.1\)](breakDown.html#eq:BDkinreg), and if the explanatory variables are independent, \\(v(j,\\underline{x}\_\*)\\) can be expressed as follows: \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\beta^0 \+ \\underline{x}\_\*' \\underline{\\beta} \- E\_{X^j}\\left\\{\\beta^0 \+ \\left(\\underline{x}^{j\|\=X^j}\_\*\\right)' \\underline{\\beta}\\right\\} \= {\\beta}^j\\left\\{{x}\_\*^j \- E\_{X^j}(X^j)\\right\\}. \\tag{6\.3} \\end{equation}\\] Using [(6\.3\)](breakDown.html#eq:BDattr2), the linear\-regression prediction [(6\.1\)](breakDown.html#eq:BDkinreg) may be re\-expressed in the following way: \\\[\\begin{align} f(\\underline{x}\_\*) \&\= \\left\\{\\beta^0 \+ {\\beta}^1E\_{X^1}(X^1\) \+ ... \+ {\\beta}^pE\_{X^p}(X^p)\\right\\}\+ \\nonumber \\\\ \& \\ \\ \\ \\left\[\\left\\{{x}^1\_\* \- E\_{X^1}(X^1\)\\right\\} {\\beta}^1 \+ ... \+\\left\\{{x}^p\_\* \- E\_{X^p}(X^p)\\right\\} {\\beta}^p\\right] \\nonumber \\\\ \&\\equiv (mean \\ prediction) \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*). 
\\tag{6\.4} \\end{align}\\] Thus, the contributions of the explanatory variables \\(v(j, \\underline{x}\_\*)\\) sum up to the difference between the model’s prediction for \\(\\underline{x}\_\*\\) and the mean prediction. In practice, given a dataset, the expected value \\(E\_{X^j}(X^j)\\) can be estimated by the sample mean \\(\\bar x^j\\). This leads to \\\[\\begin{equation} {v}(j, \\underline{x}\_\*) \= {\\beta}^j ({x}\_\*^j \- \\bar x^j). \\end{equation}\\] Obviously, the sample mean \\(\\bar x^j\\) is an estimator of the expected value \\(E\_{X^j}(X^j)\\), calculated using a dataset. For the sake of simplicity, we do not emphasize this difference in the notation. Also, we ignore the fact that, in practice, we never know the true model coefficients and use their estimates instead. We are also silent about the fact that, usually, explanatory variables are not independent. We needed this simplified example just to build our intuition. ### 6\.3\.2 Break\-down for a general case Again, let \\(v(j, \\underline{x}\_\*)\\) denote the variable\-importance measure of the \\(j\\)\-th variable and instance \\(\\underline{x}\_\*\\), i.e., the contribution of the \\(j\\)\-th variable to the model’s prediction at \\(\\underline{x}\_\*\\). We would like the sum of the \\(v(j, \\underline{x}\_\*)\\) for all explanatory variables to be equal to the instance prediction. This property is called *local accuracy*. Thus, we want that \\\[\\begin{equation} f(\\underline{x}\_\*) \= v\_0 \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\tag{6\.5} \\end{equation}\\] where \\(v\_0\\) denotes the mean model\-prediction. Denote by \\(\\underline{X}\\) the vector of random values of explanatory variables. If we rewrite equation [(6\.5\)](breakDown.html#eq:generalBreakDownLocalAccuracy) as follows: \\\[\\begin{equation} E\_{\\underline{X}}\\{f(\\underline{X})\|X^1 \= {x}^1\_\*, \\ldots, X^p \= {x}^p\_\*\\} \= E\_{\\underline{X}}\\{f(\\underline{X})\\} \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\end{equation}\\] then a natural proposal for \\(v(j, \\underline{x}\_\*)\\) is \\\[\\begin{align} v(j, \\underline{x}\_\*) \=\& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^j \= {x}^j\_\*\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^{j\-1} \= {x}^{j\-1}\_\*\\}. \\tag{6\.6} \\end{align}\\] In other words, the contribution of the \\(j\\)\-th variable is the difference between the expected value of the model’s prediction conditional on setting the values of the first \\(j\\) variables equal to their values in \\(\\underline{x}\_\*\\) and the expected value conditional on setting the values of the first \\(j\-1\\) variables equal to their values in \\(\\underline{x}\_\*\\). Note that the definition does imply the dependence of \\(v(j, \\underline{x}\_\*)\\) on the order of the explanatory variables that is reflected in their indices (superscripts). To consider more general cases, let \\(J\\) denote a subset of \\(K\\) indices (\\(K\\leq p\\)) from \\(\\{1,2,\\ldots,p\\}\\), i.e., \\(J\=\\{j\_1,j\_2,\\ldots,j\_K\\}\\), where each \\(j\_k \\in \\{1,2,\\ldots,p\\}\\). Furthermore, let \\(L\\) denote another subset of \\(M\\) indices (\\(M\\leq p\-K\\)) from \\(\\{1,2,\\ldots,p\\}\\), distinct from \\(J\\). That is, \\(L\=\\{l\_1,l\_2,\\ldots,l\_M\\}\\), where each \\(l\_m \\in \\{1,2,\\ldots,p\\}\\) and \\(J \\cap L \= \\emptyset\\). 
Let us define now \\\[\\begin{align} \\Delta^{L\|J}(\\underline{x}\_\*) \\equiv \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l\_1} \= {x}\_\*^{l\_1},\\ldots,X^{l\_M} \= {x}\_\*^{l\_M},X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. \\end{align}\\] In other words, \\(\\Delta^{L\|J}(\\underline{x}\_\*)\\) is the change between the expected model\-prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup L\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). In particular, for the \\(l\\)\-th explanatory variable, let \\\[\\begin{align} \\Delta^{l\|J}(\\underline{x}\_\*) \\equiv \\Delta^{\\{l\\}\|J}(\\underline{x}\_\*) \= \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}, X^{l} \= {x}\_\*^{l}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. \\tag{6\.7} \\end{align}\\] Thus, \\(\\Delta^{l\|J}\\) is the change between the expected prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup \\{l\\}\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). Note that, if \\(J\=\\emptyset\\), then \\\[\\begin{equation} \\Delta^{l\|\\emptyset}(\\underline{x}\_\*) \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- E\_{\\underline{X}}\\{f(\\underline{X})\\} \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- v\_0\. \\tag{6\.8} \\end{equation}\\] It follows that \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\Delta^{j\|\\{1,\\ldots, j\-1\\}}(\\underline{x}\_\*) \= \\Delta^{\\{1,\\ldots, j\\}\|\\emptyset}(\\underline{x}\_\*)\-\\Delta^{\\{1,\\ldots, j\-1\\}\|\\emptyset}(\\underline{x}\_\*). \\tag{6\.9} \\end{equation}\\] As it was mentioned in Section [6\.2](breakDown.html#BDIntuition), for models that include interactions, the value of the variable\-importance measure \\(v(j, \\underline{x}\_\*)\\) depends on the order of conditioning on explanatory variables. A heuristic approach to address this issue consists of choosing an order in which the variables with the largest contributions are selected first. In particular, the following two\-step procedure can be considered. In the first step, the ordering is chosen based on the decreasing values of \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\). Note that the use of absolute values is needed because the variable contributions can be positive or negative. In the second step, the variable\-importance measure for the \\(j\\)\-th variable is calculated as \\\[ v(j, \\underline{x}\_\*) \= \\Delta ^{j\|J}(\\underline{x}\_\*), \\] where \\\[ J \= \\{k: \|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\| \< \|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\}. \\] That is, \\(J\\) is the set of indices of explanatory variables with scores \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\) smaller than the corresponding score for variable \\(j\\). 
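To make the two\-step procedure more concrete, the sketch below shows one way it could be implemented in R. It is a minimal, model\-agnostic illustration and not the implementation used in the `iBreakDown` package. The names `model`, `data`, `x_star`, and `pred_fun` are assumptions: a fitted model, a background data frame, a one\-row data frame with the instance of interest, and a function returning numeric predictions, respectively. The conditional expected values are estimated by averaging predictions over the background data with the selected variables fixed at their values in `x_star`.

```
# Minimal, model-agnostic sketch of the two-step break-down heuristic.
# Assumed (illustrative) inputs: `model` - a fitted model, `data` - a background
# data frame, `x_star` - a one-row data frame with the instance of interest,
# `pred_fun` - a function(model, newdata) returning numeric predictions.
break_down_sketch <- function(model, data, x_star, pred_fun) {
  p  <- ncol(data)
  v0 <- mean(pred_fun(model, data))          # mean model-prediction
  # Step 1: score each variable by |Delta^{j|empty}(x_star)| and order the
  # variables by decreasing absolute scores.
  delta0 <- vapply(seq_len(p), function(j) {
    data_j <- data
    data_j[[j]] <- x_star[[j]]               # condition on the j-th variable only
    mean(pred_fun(model, data_j)) - v0
  }, numeric(1))
  ordering <- order(abs(delta0), decreasing = TRUE)
  # Step 2: sequential contributions Delta^{j|J} along the chosen ordering.
  contribution <- numeric(p)
  data_cond <- data
  prev <- v0
  for (j in ordering) {
    data_cond[[j]] <- x_star[[j]]            # condition on one more variable
    curr <- mean(pred_fun(model, data_cond))
    contribution[j] <- curr - prev
    prev <- curr
  }
  data.frame(variable = names(data)[ordering],
             contribution = contribution[ordering])
}
```

For a classification model, `pred_fun` would typically extract the predicted probability of the class of interest, for example, `function(m, d) predict(m, d, type = "prob")[, "yes"]` for the random forest models used in this book. By construction, the returned contributions satisfy local accuracy: together with the mean prediction \\(v\_0\\), they sum up to the model’s prediction for `x_star`.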
The time complexity of each of the two steps of the procedure is \\(O(p)\\), where \\(p\\) is the number of explanatory variables. Note that there are also other possible approaches to the calculation of variable attributions. One consists of identifying the interactions that cause a difference in variable\-importance measures for different orderings and focusing on those interactions. This approach is discussed in Chapter [7](iBreakDown.html#iBreakDown). The other one consists of calculating an average value of the variable\-importance measure across all possible orderings. This approach is presented in Chapter [8](shapley.html#shapley). 6\.4 Example: Titanic data -------------------------- Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest in the Titanic data. The mean of the model’s predictions for all passengers is equal to \\(v\_0\=\\) 0\.2353095\. Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas) presents the scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) and the expected values \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\). Note that, given [(6\.8\)](breakDown.html#eq:deltaBreakDownAdditive), we have \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\=v\_0\+\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\\); thus, the scores in the table are equal to \\(\|E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\-v\_0\|\\). Table 6\.1: Expected values \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\) and scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) for the random forest model and Johnny D for the Titanic data. The scores are sorted in decreasing order. | variable \\(j\\) | \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\) | \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) | | --- | --- | --- | | age \= 8 | 0\.5051210 | 0\.2698115 | | class \= 1st | 0\.4204449 | 0\.1851354 | | fare \= 72 | 0\.3785383 | 0\.1432288 | | gender \= male | 0\.1102873 | 0\.1250222 | | embarked \= Southampton | 0\.2246035 | 0\.0107060 | | sibsp \= 0 | 0\.2429597 | 0\.0076502 | | parch \= 0 | 0\.2322655 | 0\.0030440 | Based on the ordering defined by the scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) from Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas), we can compute the variable\-importance measures based on the sequential contributions \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\). The computed values are presented in Table [6\.2](breakDown.html#tab:titanicBreakDownDeltasConseq). Table 6\.2: Variable\-importance measures \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\), with \\(J\=\\{1,\\ldots,j\\}\\), for the random forest model and Johnny D for the Titanic data, computed by using the ordering of variables defined in Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas). 
| variable \\(j\\) | \\(E\_{\\underline{X}}\\left\\{ f(\\underline{X}) \| \\underline{X}^{J} \= \\underline{x}^{J}\_\*\\right\\}\\) | \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\) | | --- | --- | --- | | intercept \\((v\_0\)\\) | 0\.2353095 | 0\.2353095 | | age \= 8 | 0\.5051210 | 0\.2698115 | | class \= 1st | 0\.5906969 | 0\.0855759 | | fare \= 72 | 0\.5443561 | \-0\.0463407 | | gender \= male | 0\.4611518 | \-0\.0832043 | | embarked \= Southampton | 0\.4584422 | \-0\.0027096 | | sibsp \= 0 | 0\.4523398 | \-0\.0061024 | | parch \= 0 | 0\.4220000 | \-0\.0303398 | | prediction | 0\.4220000 | 0\.4220000 | Results from Table [6\.2](breakDown.html#tab:titanicBreakDownDeltasConseq) are presented in Figure [6\.3](breakDown.html#fig:BDjohnyExample). The plot indicates that the largest positive contributions to the predicted probability of survival for Johnny D come from explanatory variables *age* and *class*. The contributions of the remaining variables are smaller (in absolute values) and negative. Figure 6\.3: Break\-down plot for the random forest model and Johnny D for the Titanic data. 6\.5 Pros and cons ------------------ BD plots offer a model\-agnostic approach that can be applied to any predictive model that returns a single number for a single observation (instance). The approach offers several advantages. The plots are, in general, easy to understand. They are compact; results for many explanatory variables can be presented in a limited space. The approach reduces to an intuitive interpretation for linear models. Numerical complexity of the BD algorithm is linear in the number of explanatory variables. An important issue is that BD plots may be misleading for models including interactions. This is because the plots show only the additive attributions. Thus, the choice of the ordering of the explanatory variables that is used in the calculation of the variable\-importance measures is important. Also, for models with a large number of variables, BD plots may be complex and include many explanatory variables with small contributions to the instance prediction. To address the issue of the dependence of the variable\-importance measure on the ordering of the explanatory variables, the heuristic approach described in Section [6\.3\.2](breakDown.html#BDMethodGen) can be applied. Alternative approaches are described in Chapters [7](iBreakDown.html#iBreakDown) and [8](shapley.html#shapley). 6\.6 Code snippets for R ------------------------ In this section, we use the `DALEX` package, which is a wrapper for the `iBreakDown` R package (Gosiewska and Biecek [2019](#ref-iBreakDownRPackage)). The package covers all methods presented in this chapter. It is available on `CRAN` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the 1st class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. 
``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist:: aread("pbiecek/models/4e0fc") (henry <- archivist::aread("pbiecek/models/a6538")) ``` ``` ## class gender age sibsp parch fare embarked ## 1 1st male 47 0 0 25 Cherbourg ``` Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. ``` library("randomForest") library("DALEX") explain_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` The explainer object allows uniform access to predictive models regardless of their internal structure. With this object, we can proceed to the model analysis. ### 6\.6\.1 Basic use of the `predict_parts()` function The `DALEX::predict_parts()` function decomposes model predictions into parts that can be attributed to individual variables. It calculates the variable\-attribution measures for a selected model and an instance of interest. The object obtained as a result of applying the function is a data frame containing the calculated measures. In the simplest call, the function requires three arguments: * `explainer` \- an explainer\-object, created with function `DALEX::explain()`; * `new_observation` \- an observation to be explained; it should be a data frame with a structure that matches the structure of the dataset used for fitting of the model; * `type` \- the method for calculation of variable attribution; the possible methods are `"break_down"` (the default), `"shap"`, `"oscillations"`, and `"break_down_interactions"`. In the code below, the argument `type = "break_down"` is explicitly used. The code essentially provides the variable\-importance values \\(\\Delta^{j\|\\{1,\\ldots,j\\}}(\\underline{x}\_\*)\\). ``` bd_rf <- predict_parts(explainer = explain_rf, new_observation = henry, type = "break_down") bd_rf ``` ``` ## contribution ## Random Forest: intercept 0.235 ## Random Forest: class = 1st 0.185 ## Random Forest: gender = male -0.124 ## Random Forest: embarked = Cherbourg 0.105 ## Random Forest: age = 47 -0.092 ## Random Forest: fare = 25 -0.030 ## Random Forest: sibsp = 0 -0.032 ## Random Forest: parch = 0 -0.001 ## Random Forest: prediction 0.246 ``` By applying the generic `plot()` function to the object resulting from the application of the `predict_parts()` function we obtain a BD plot. ``` plot(bd_rf) ``` The resulting plot is shown in Figure [6\.4](breakDown.html#fig:BDhenryExample). It can be used to compare the explanatory\-variable attributions obtained for Henry with those computed for Johnny D (see Figure [6\.3](breakDown.html#fig:BDjohnyExample)). Both explanations refer to the same random forest model. We can see that the predicted survival probability for Henry (0\.246\) is almost the same as the mean prediction (0\.235\), while the probability for Johnny D is higher (0\.422\). For Johnny D, this result can be mainly attributed to the positive contribution of *age* and *class*. For Henry, *class* still contributes positively to the chances of survival, but the effect of *age* is negative. For both passengers the effect of *gender* is negative. 
Thus, one could conclude that the difference in the predicted survival probabilities is mainly due to the difference in the age of Henry and Johnny D. Figure 6\.4: Break\-down plot for the random forest model and Henry for the Titanic data, obtained by the generic `plot()` function in R. ### 6\.6\.2 Advanced use of the `predict_parts()` function Apart from the `explainer`, `new_observation`, and `type` arguments, function `predict_parts()` allows additional ones. The most commonly used are: * `order` \- a vector of characters (column names) or integers (column indexes) that specify the order of explanatory variables to be used for computing the variable\-importance measures; if not specified (default), then a one\-step heuristic is used to determine the order; * `keep_distributions` \- a logical value (`FALSE` by default); if `TRUE`, then additional diagnostic information about conditional distributions of predictions is stored in the resulting object and can be plotted with the generic `plot()` function. In what follows, we illustrate the use of the arguments. First, we specify the ordering of the explanatory variables. Toward this end, we can use integer indexes or variable names. The latter option is preferable in most cases because of transparency. Additionally, to reduce clutter in the plot, we set `max_features = 3` argument in the `plot()` function. ``` bd_rf_order <- predict_parts(explainer = explain_rf, new_observation = henry, type = "break_down", order = c("class", "age", "gender", "fare", "parch", "sibsp", "embarked")) plot(bd_rf_order, max_features = 3) ``` The resulting plot is presented in Figure [6\.5](breakDown.html#fig:BDhenryExampleTop). It is worth noting that the attributions for variables *gender* and *fare* do differ from those shown in Figure [6\.4](breakDown.html#fig:BDhenryExample). This is the result of the change of the ordering of variables used in the computation of the attributions. Figure 6\.5: Break\-down plot for the top three variables for the random forest model and Henry for the Titanic data. We can use the `keep_distributions = TRUE` argument to enrich the resulting object with additional information about conditional distributions of predicted values. Subsequently, we can apply the `plot_distributions = TRUE` argument in the `plot()` function to present the distributions as violin plots. ``` bd_rf_distr <- predict_parts(explainer = explain_rf, new_observation = henry, type = "break_down", order = c("age", "class", "fare", "gender", "embarked", "sibsp", "parch"), keep_distributions = TRUE) plot(bd_rf_distr, plot_distributions = TRUE) ``` The resulting plot is presented in Figure [6\.6](breakDown.html#fig:BDhenryExampleDistr). Red dots indicate the mean model’s predictions. Thin grey lines between violin plots indicate changes in predictions for individual observations. They can be used to track how the model’s predictions change after consecutive conditionings. A similar code was used to create the plot in panel A of Figure [6\.1](breakDown.html#fig:BDPrice4) for Johnny D. Figure 6\.6: Break\-down plot with violin plots summarizing distributions of predicted values for a selected order of explanatory variables for the random forest model and Henry for the Titanic data. 6\.7 Code snippets for Python ----------------------------- In this section, we use the `dalex` library in Python. The package covers all methods presented in this chapter. 
It is available on `pip` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the first class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)). In the first step, we create an explainer object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). ``` import pandas as pd henry = pd.DataFrame({'gender' : ['male'], 'age' : [47], 'class' : ['1st'], 'embarked': ['Cherbourg'], 'fare' : [25], 'sibsp' : [0], 'parch' : [0]}, index = ['Henry']) import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` To apply the break\-down method we use the `predict_parts()` method. The first argument indicates the data for the observation for which the attributions are to be calculated. The `type` argument specifies the method of calculation of attributions. Results are stored in the `result` field. ``` bd_henry = titanic_rf_exp.predict_parts(henry, type = 'break_down') bd_henry.result ``` To obtain a waterfall chart we can use the `plot()` method. It generates an interactive chart based on the `plotly` library. ``` bd_henry.plot() ``` The resulting plot is presented in Figure [6\.7](breakDown.html#fig:bdPython1). Figure 6\.7: Break\-down plot for the random forest model and Henry for the Titanic data, obtained by the `plot()` method in Python. Advanced users can make use of the `order` argument of the `predict_parts()` method. It allows forcing a specific order of variables in the break\-down method. Also, if the model includes many explanatory variables, the waterfall chart may be hard to read. In this situation, the `max_vars` argument can be used in the `plot()` method to limit the number of variables presented in the graph. ``` import numpy as np bd_henry = titanic_rf_exp.predict_parts(henry, type = 'break_down', order = np.array(['gender', 'class', 'age', 'embarked', 'fare', 'sibsp', 'parch'])) bd_henry.plot(max_vars = 5) ``` The resulting plot is presented in Figure [6\.8](breakDown.html#fig:bdPython2). Figure 6\.8: Break\-down plot for a limited number of explanatory variables in a specified order for the random forest model and Henry for the Titanic data, obtained by the `plot()` method in Python. 
It is also worth mentioning that, for models that include interactions, the part of the prediction attributed to a variable depends on the order in which one sets the values of the explanatory variables. Note that the interactions do not have to be explicitly specified in the model structure as it is the case of, for instance, linear\-regression models. They may also emerge as a result of fitting to the data a flexible model like, for instance, a regression tree. To illustrate the point, Figure [6\.2](breakDown.html#fig:ordering) presents an example of a random forest model with only three explanatory variables fitted to the Titanic data. Subsequently, we focus on the model’s prediction for a 2\-year old boy that travelled in the second class. The predicted probability of survival is equal to 0\.964, more than a double of the mean model\-prediction of 0\.407\. We would like to understand which explanatory variables drive this prediction. Two possible explanations are illustrated in Figure [6\.2](breakDown.html#fig:ordering). **Explanation 1:** We first consider the explanatory variables *gender*, *class*, and *age*, in that order. Figure [6\.2](breakDown.html#fig:ordering) indicates negative contributions for the first two variables and a positive contribution for the third one. Thus, the fact that the passenger was a boy decreases the chances of survival, as compared to the mean model\-prediction. He travelled in the second class, which further lowers the probability of survival. However, as the boy was very young, this substantially increases the odds of surviving. This last conclusion is the result of the fact that most passengers in the second class were adults; therefore, a kid from the second class had higher chances of survival. **Explanation 2:** We now consider the following order of explanatory variables: *gender*, *age*, and *class*. Figure [6\.2](breakDown.html#fig:ordering) indicates a positive contribution of *class*, unlike in the first explanation. Again, the fact that the passenger was a boy decreases the chances of survival, as compared to the mean model\-prediction. However, he was very young, and this increases the probability of survival as compared to adult men. Finally, the fact that the boy travelled in the second class increases the chance even further. This last conclusion stems from the fact that most kids travelled in the third class; thus, being a child in the second class would increase chances of survival. Figure 6\.2: An illustration of the order\-dependence of variable attributions. Two break\-down plots for the same observation for a random forest model for the Titanic dataset. The contribution attributed to class is negative in the plot at the top and positive in the one at the bottom. The difference is due to the difference in the ordering of explanatory variables used to construct the plots (as seen in the labelling of the rows). 6\.3 Method ----------- In this section, we introduce more formally the method of variable attribution. We first focus on linear models, because their simple and additive structure allows building intuition. Then we consider a more general case. ### 6\.3\.1 Break\-down for linear models Assume the classical linear\-regression model for dependent variable \\(Y\\) with \\(p\\) explanatory variables, the values of which are collected in vector \\(\\underline{x}\\), and vector \\(\\underline{\\beta}\\) of \\(p\\) corresponding coefficients. Note that we separately consider \\(\\beta^0\\), which is the intercept. 
Prediction for \\(Y\\) is given by the expected value of \\(Y\\) conditional on \\(\\underline{x}\\). In particular, the expected value is given by the following linear combination: \\\[\\begin{equation} E\_Y(Y \| \\underline{x}) \= f(\\underline{x}) \= \\beta^0 \+ \\underline{x}'\\underline{\\beta}. \\tag{6\.1} \\end{equation}\\] Assume that we select a vector of values of explanatory variables \\(\\underline{x}\_\* \\in \\mathcal R^p\\). We are interested in the contribution of the \\(j\\)\-th explanatory variable to model’s prediction \\(f(\\underline{x}\_\*)\\) for a single observation described by \\(\\underline{x}\_\*\\). A possible approach to evaluate the contribution is to measure how much the expected value of \\(Y\\) changes after conditioning on \\({x}^j\_\*\\). Using the notation \\(\\underline{x}^{j\|\=X^j}\_\*\\) (see Section [2\.3](modelDevelopmentProcess.html#notation)) to indicate that we treat the value of the \\(j\\)\-th coordinate as a random variable \\(X^j\\), we can thus define \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= E\_Y(Y \| \\underline{x}\_\*) \- E\_{X^j}\\left\[E\_Y\\left\\{Y \| \\underline{x}^{j\|\=X^j}\_\*\\right\\}\\right]\= f(\\underline{x}\_\*) \- E\_{X^j}\\left\\{f\\left(\\underline{x}^{j\|\=X^j}\_\*\\right)\\right\\}, \\tag{6\.2} \\end{equation}\\] where \\(v(j, \\underline{x}\_\*)\\) is the *variable\-importance measure* for the \\(j\\)\-th explanatory variable evaluated at \\(\\underline{x}\_\*\\) and the last expected value on the right\-hand side of [(6\.2\)](breakDown.html#eq:BDattr1) is taken over the distribution of the variable (treated as random). For the linear\-regression model [(6\.1\)](breakDown.html#eq:BDkinreg), and if the explanatory variables are independent, \\(v(j,\\underline{x}\_\*)\\) can be expressed as follows: \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\beta^0 \+ \\underline{x}\_\*' \\underline{\\beta} \- E\_{X^j}\\left\\{\\beta^0 \+ \\left(\\underline{x}^{j\|\=X^j}\_\*\\right)' \\underline{\\beta}\\right\\} \= {\\beta}^j\\left\\{{x}\_\*^j \- E\_{X^j}(X^j)\\right\\}. \\tag{6\.3} \\end{equation}\\] Using [(6\.3\)](breakDown.html#eq:BDattr2), the linear\-regression prediction [(6\.1\)](breakDown.html#eq:BDkinreg) may be re\-expressed in the following way: \\\[\\begin{align} f(\\underline{x}\_\*) \&\= \\left\\{\\beta^0 \+ {\\beta}^1E\_{X^1}(X^1\) \+ ... \+ {\\beta}^pE\_{X^p}(X^p)\\right\\}\+ \\nonumber \\\\ \& \\ \\ \\ \\left\[\\left\\{{x}^1\_\* \- E\_{X^1}(X^1\)\\right\\} {\\beta}^1 \+ ... \+\\left\\{{x}^p\_\* \- E\_{X^p}(X^p)\\right\\} {\\beta}^p\\right] \\nonumber \\\\ \&\\equiv (mean \\ prediction) \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*). \\tag{6\.4} \\end{align}\\] Thus, the contributions of the explanatory variables \\(v(j, \\underline{x}\_\*)\\) sum up to the difference between the model’s prediction for \\(\\underline{x}\_\*\\) and the mean prediction. In practice, given a dataset, the expected value \\(E\_{X^j}(X^j)\\) can be estimated by the sample mean \\(\\bar x^j\\). This leads to \\\[\\begin{equation} {v}(j, \\underline{x}\_\*) \= {\\beta}^j ({x}\_\*^j \- \\bar x^j). \\end{equation}\\] Obviously, the sample mean \\(\\bar x^j\\) is an estimator of the expected value \\(E\_{X^j}(X^j)\\), calculated using a dataset. For the sake of simplicity, we do not emphasize this difference in the notation. Also, we ignore the fact that, in practice, we never know the true model coefficients and use their estimates instead. We are also silent about the fact that, usually, explanatory variables are not independent. 
We needed this simplified example just to build our intuition. ### 6\.3\.2 Break\-down for a general case Again, let \\(v(j, \\underline{x}\_\*)\\) denote the variable\-importance measure of the \\(j\\)\-th variable and instance \\(\\underline{x}\_\*\\), i.e., the contribution of the \\(j\\)\-th variable to the model’s prediction at \\(\\underline{x}\_\*\\). We would like the sum of the \\(v(j, \\underline{x}\_\*)\\) for all explanatory variables to be equal to the instance prediction. This property is called *local accuracy*. Thus, we want that \\\[\\begin{equation} f(\\underline{x}\_\*) \= v\_0 \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\tag{6\.5} \\end{equation}\\] where \\(v\_0\\) denotes the mean model\-prediction. Denote by \\(\\underline{X}\\) the vector of random values of explanatory variables. If we rewrite equation [(6\.5\)](breakDown.html#eq:generalBreakDownLocalAccuracy) as follows: \\\[\\begin{equation} E\_{\\underline{X}}\\{f(\\underline{X})\|X^1 \= {x}^1\_\*, \\ldots, X^p \= {x}^p\_\*\\} \= E\_{\\underline{X}}\\{f(\\underline{X})\\} \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\end{equation}\\] then a natural proposal for \\(v(j, \\underline{x}\_\*)\\) is \\\[\\begin{align} v(j, \\underline{x}\_\*) \=\& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^j \= {x}^j\_\*\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^{j\-1} \= {x}^{j\-1}\_\*\\}. \\tag{6\.6} \\end{align}\\] In other words, the contribution of the \\(j\\)\-th variable is the difference between the expected value of the model’s prediction conditional on setting the values of the first \\(j\\) variables equal to their values in \\(\\underline{x}\_\*\\) and the expected value conditional on setting the values of the first \\(j\-1\\) variables equal to their values in \\(\\underline{x}\_\*\\). Note that the definition does imply the dependence of \\(v(j, \\underline{x}\_\*)\\) on the order of the explanatory variables that is reflected in their indices (superscripts). To consider more general cases, let \\(J\\) denote a subset of \\(K\\) indices (\\(K\\leq p\\)) from \\(\\{1,2,\\ldots,p\\}\\), i.e., \\(J\=\\{j\_1,j\_2,\\ldots,j\_K\\}\\), where each \\(j\_k \\in \\{1,2,\\ldots,p\\}\\). Furthermore, let \\(L\\) denote another subset of \\(M\\) indices (\\(M\\leq p\-K\\)) from \\(\\{1,2,\\ldots,p\\}\\), distinct from \\(J\\). That is, \\(L\=\\{l\_1,l\_2,\\ldots,l\_M\\}\\), where each \\(l\_m \\in \\{1,2,\\ldots,p\\}\\) and \\(J \\cap L \= \\emptyset\\). Let us define now \\\[\\begin{align} \\Delta^{L\|J}(\\underline{x}\_\*) \\equiv \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l\_1} \= {x}\_\*^{l\_1},\\ldots,X^{l\_M} \= {x}\_\*^{l\_M},X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. \\end{align}\\] In other words, \\(\\Delta^{L\|J}(\\underline{x}\_\*)\\) is the change between the expected model\-prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup L\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). 
In particular, for the \\(l\\)\-th explanatory variable, let \\\[\\begin{align} \\Delta^{l\|J}(\\underline{x}\_\*) \\equiv \\Delta^{\\{l\\}\|J}(\\underline{x}\_\*) \= \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}, X^{l} \= {x}\_\*^{l}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. \\tag{6\.7} \\end{align}\\] Thus, \\(\\Delta^{l\|J}\\) is the change between the expected prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup \\{l\\}\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). Note that, if \\(J\=\\emptyset\\), then \\\[\\begin{equation} \\Delta^{l\|\\emptyset}(\\underline{x}\_\*) \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- E\_{\\underline{X}}\\{f(\\underline{X})\\} \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- v\_0\. \\tag{6\.8} \\end{equation}\\] It follows that \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\Delta^{j\|\\{1,\\ldots, j\-1\\}}(\\underline{x}\_\*) \= \\Delta^{\\{1,\\ldots, j\\}\|\\emptyset}(\\underline{x}\_\*)\-\\Delta^{\\{1,\\ldots, j\-1\\}\|\\emptyset}(\\underline{x}\_\*). \\tag{6\.9} \\end{equation}\\] As it was mentioned in Section [6\.2](breakDown.html#BDIntuition), for models that include interactions, the value of the variable\-importance measure \\(v(j, \\underline{x}\_\*)\\) depends on the order of conditioning on explanatory variables. A heuristic approach to address this issue consists of choosing an order in which the variables with the largest contributions are selected first. In particular, the following two\-step procedure can be considered. In the first step, the ordering is chosen based on the decreasing values of \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\). Note that the use of absolute values is needed because the variable contributions can be positive or negative. In the second step, the variable\-importance measure for the \\(j\\)\-th variable is calculated as \\\[ v(j, \\underline{x}\_\*) \= \\Delta ^{j\|J}(\\underline{x}\_\*), \\] where \\\[ J \= \\{k: \|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\| \< \|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\}. \\] That is, \\(J\\) is the set of indices of explanatory variables with scores \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\) smaller than the corresponding score for variable \\(j\\). The time complexity of each of the two steps of the procedure is \\(O(p)\\), where \\(p\\) is the number of explanatory variables. Note, that there are also other possible approaches to the problem of calculation of variable attributions. One consists of identifying the interactions that cause a difference in variable\-importance measures for different orderings and focusing on those interactions. This approach is discussed in Chapter [7](iBreakDown.html#iBreakDown). The other one consists of calculating an average value of the variance\-importance measure across all possible orderings. This approach is presented in Chapter [8](shapley.html#shapley). 
### 6\.3\.1 Break\-down for linear models Assume the classical linear\-regression model for dependent variable \\(Y\\) with \\(p\\) explanatory variables, the values of which are collected in vector \\(\\underline{x}\\), and vector \\(\\underline{\\beta}\\) of \\(p\\) corresponding coefficients. Note that we separately consider \\(\\beta^0\\), which is the intercept. Prediction for \\(Y\\) is given by the expected value of \\(Y\\) conditional on \\(\\underline{x}\\). In particular, the expected value is given by the following linear combination: \\\[\\begin{equation} E\_Y(Y \| \\underline{x}) \= f(\\underline{x}) \= \\beta^0 \+ \\underline{x}'\\underline{\\beta}. \\tag{6\.1} \\end{equation}\\] Assume that we select a vector of values of explanatory variables \\(\\underline{x}\_\* \\in \\mathcal R^p\\). We are interested in the contribution of the \\(j\\)\-th explanatory variable to model’s prediction \\(f(\\underline{x}\_\*)\\) for a single observation described by \\(\\underline{x}\_\*\\). A possible approach to evaluate the contribution is to measure how much the expected value of \\(Y\\) changes after conditioning on \\({x}^j\_\*\\). Using the notation \\(\\underline{x}^{j\|\=X^j}\_\*\\) (see Section [2\.3](modelDevelopmentProcess.html#notation)) to indicate that we treat the value of the \\(j\\)\-th coordinate as a random variable \\(X^j\\), we can thus define \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= E\_Y(Y \| \\underline{x}\_\*) \- E\_{X^j}\\left\[E\_Y\\left\\{Y \| \\underline{x}^{j\|\=X^j}\_\*\\right\\}\\right]\= f(\\underline{x}\_\*) \- E\_{X^j}\\left\\{f\\left(\\underline{x}^{j\|\=X^j}\_\*\\right)\\right\\}, \\tag{6\.2} \\end{equation}\\] where \\(v(j, \\underline{x}\_\*)\\) is the *variable\-importance measure* for the \\(j\\)\-th explanatory variable evaluated at \\(\\underline{x}\_\*\\) and the last expected value on the right\-hand side of [(6\.2\)](breakDown.html#eq:BDattr1) is taken over the distribution of the variable (treated as random). For the linear\-regression model [(6\.1\)](breakDown.html#eq:BDkinreg), and if the explanatory variables are independent, \\(v(j,\\underline{x}\_\*)\\) can be expressed as follows: \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\beta^0 \+ \\underline{x}\_\*' \\underline{\\beta} \- E\_{X^j}\\left\\{\\beta^0 \+ \\left(\\underline{x}^{j\|\=X^j}\_\*\\right)' \\underline{\\beta}\\right\\} \= {\\beta}^j\\left\\{{x}\_\*^j \- E\_{X^j}(X^j)\\right\\}. \\tag{6\.3} \\end{equation}\\] Using [(6\.3\)](breakDown.html#eq:BDattr2), the linear\-regression prediction [(6\.1\)](breakDown.html#eq:BDkinreg) may be re\-expressed in the following way: \\\[\\begin{align} f(\\underline{x}\_\*) \&\= \\left\\{\\beta^0 \+ {\\beta}^1E\_{X^1}(X^1\) \+ ... \+ {\\beta}^pE\_{X^p}(X^p)\\right\\}\+ \\nonumber \\\\ \& \\ \\ \\ \\left\[\\left\\{{x}^1\_\* \- E\_{X^1}(X^1\)\\right\\} {\\beta}^1 \+ ... \+\\left\\{{x}^p\_\* \- E\_{X^p}(X^p)\\right\\} {\\beta}^p\\right] \\nonumber \\\\ \&\\equiv (mean \\ prediction) \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*). \\tag{6\.4} \\end{align}\\] Thus, the contributions of the explanatory variables \\(v(j, \\underline{x}\_\*)\\) sum up to the difference between the model’s prediction for \\(\\underline{x}\_\*\\) and the mean prediction. In practice, given a dataset, the expected value \\(E\_{X^j}(X^j)\\) can be estimated by the sample mean \\(\\bar x^j\\). This leads to \\\[\\begin{equation} {v}(j, \\underline{x}\_\*) \= {\\beta}^j ({x}\_\*^j \- \\bar x^j). 
\\end{equation}\\] Obviously, the sample mean \\(\\bar x^j\\) is an estimator of the expected value \\(E\_{X^j}(X^j)\\), calculated using a dataset. For the sake of simplicity, we do not emphasize this difference in the notation. Also, we ignore the fact that, in practice, we never know the true model coefficients and use their estimates instead. We are also silent about the fact that, usually, explanatory variables are not independent. We needed this simplified example just to build our intuition. ### 6\.3\.2 Break\-down for a general case Again, let \\(v(j, \\underline{x}\_\*)\\) denote the variable\-importance measure of the \\(j\\)\-th variable and instance \\(\\underline{x}\_\*\\), i.e., the contribution of the \\(j\\)\-th variable to the model’s prediction at \\(\\underline{x}\_\*\\). We would like the sum of the \\(v(j, \\underline{x}\_\*)\\) for all explanatory variables to be equal to the instance prediction. This property is called *local accuracy*. Thus, we want that \\\[\\begin{equation} f(\\underline{x}\_\*) \= v\_0 \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\tag{6\.5} \\end{equation}\\] where \\(v\_0\\) denotes the mean model\-prediction. Denote by \\(\\underline{X}\\) the vector of random values of explanatory variables. If we rewrite equation [(6\.5\)](breakDown.html#eq:generalBreakDownLocalAccuracy) as follows: \\\[\\begin{equation} E\_{\\underline{X}}\\{f(\\underline{X})\|X^1 \= {x}^1\_\*, \\ldots, X^p \= {x}^p\_\*\\} \= E\_{\\underline{X}}\\{f(\\underline{X})\\} \+ \\sum\_{j\=1}^p v(j, \\underline{x}\_\*), \\end{equation}\\] then a natural proposal for \\(v(j, \\underline{x}\_\*)\\) is \\\[\\begin{align} v(j, \\underline{x}\_\*) \=\& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^j \= {x}^j\_\*\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^1 \= {x}^1\_\*, \\ldots, X^{j\-1} \= {x}^{j\-1}\_\*\\}. \\tag{6\.6} \\end{align}\\] In other words, the contribution of the \\(j\\)\-th variable is the difference between the expected value of the model’s prediction conditional on setting the values of the first \\(j\\) variables equal to their values in \\(\\underline{x}\_\*\\) and the expected value conditional on setting the values of the first \\(j\-1\\) variables equal to their values in \\(\\underline{x}\_\*\\). Note that the definition does imply the dependence of \\(v(j, \\underline{x}\_\*)\\) on the order of the explanatory variables that is reflected in their indices (superscripts). To consider more general cases, let \\(J\\) denote a subset of \\(K\\) indices (\\(K\\leq p\\)) from \\(\\{1,2,\\ldots,p\\}\\), i.e., \\(J\=\\{j\_1,j\_2,\\ldots,j\_K\\}\\), where each \\(j\_k \\in \\{1,2,\\ldots,p\\}\\). Furthermore, let \\(L\\) denote another subset of \\(M\\) indices (\\(M\\leq p\-K\\)) from \\(\\{1,2,\\ldots,p\\}\\), distinct from \\(J\\). That is, \\(L\=\\{l\_1,l\_2,\\ldots,l\_M\\}\\), where each \\(l\_m \\in \\{1,2,\\ldots,p\\}\\) and \\(J \\cap L \= \\emptyset\\). Let us define now \\\[\\begin{align} \\Delta^{L\|J}(\\underline{x}\_\*) \\equiv \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l\_1} \= {x}\_\*^{l\_1},\\ldots,X^{l\_M} \= {x}\_\*^{l\_M},X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. 
\\end{align}\\] In other words, \\(\\Delta^{L\|J}(\\underline{x}\_\*)\\) is the change between the expected model\-prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup L\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). In particular, for the \\(l\\)\-th explanatory variable, let \\\[\\begin{align} \\Delta^{l\|J}(\\underline{x}\_\*) \\equiv \\Delta^{\\{l\\}\|J}(\\underline{x}\_\*) \= \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}, X^{l} \= {x}\_\*^{l}\\} \- \\nonumber \\\\ \& E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{j\_1} \= {x}\_\*^{j\_1},\\ldots,X^{j\_K} \= {x}\_\*^{j\_K}\\}. \\tag{6\.7} \\end{align}\\] Thus, \\(\\Delta^{l\|J}\\) is the change between the expected prediction, when setting the values of the explanatory variables with indices from the set \\(J \\cup \\{l\\}\\) equal to their values in \\(\\underline{x}\_\*\\), and the expected prediction conditional on setting the values of the explanatory variables with indices from the set \\(J\\) equal to their values in \\(\\underline{x}\_\*\\). Note that, if \\(J\=\\emptyset\\), then \\\[\\begin{equation} \\Delta^{l\|\\emptyset}(\\underline{x}\_\*) \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- E\_{\\underline{X}}\\{f(\\underline{X})\\} \= E\_{\\underline{X}}\\{f(\\underline{X}) \| X^{l} \= {x}\_\*^{l}\\} \- v\_0\. \\tag{6\.8} \\end{equation}\\] It follows that \\\[\\begin{equation} v(j, \\underline{x}\_\*) \= \\Delta^{j\|\\{1,\\ldots, j\-1\\}}(\\underline{x}\_\*) \= \\Delta^{\\{1,\\ldots, j\\}\|\\emptyset}(\\underline{x}\_\*)\-\\Delta^{\\{1,\\ldots, j\-1\\}\|\\emptyset}(\\underline{x}\_\*). \\tag{6\.9} \\end{equation}\\] As it was mentioned in Section [6\.2](breakDown.html#BDIntuition), for models that include interactions, the value of the variable\-importance measure \\(v(j, \\underline{x}\_\*)\\) depends on the order of conditioning on explanatory variables. A heuristic approach to address this issue consists of choosing an order in which the variables with the largest contributions are selected first. In particular, the following two\-step procedure can be considered. In the first step, the ordering is chosen based on the decreasing values of \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\). Note that the use of absolute values is needed because the variable contributions can be positive or negative. In the second step, the variable\-importance measure for the \\(j\\)\-th variable is calculated as \\\[ v(j, \\underline{x}\_\*) \= \\Delta ^{j\|J}(\\underline{x}\_\*), \\] where \\\[ J \= \\{k: \|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\| \< \|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\}. \\] That is, \\(J\\) is the set of indices of explanatory variables with scores \\(\|\\Delta^{k\|\\emptyset}(\\underline{x}\_\*)\|\\) smaller than the corresponding score for variable \\(j\\). The time complexity of each of the two steps of the procedure is \\(O(p)\\), where \\(p\\) is the number of explanatory variables. Note, that there are also other possible approaches to the problem of calculation of variable attributions. One consists of identifying the interactions that cause a difference in variable\-importance measures for different orderings and focusing on those interactions. 
This approach is discussed in Chapter [7](iBreakDown.html#iBreakDown). The other one consists of calculating an average value of the variance\-importance measure across all possible orderings. This approach is presented in Chapter [8](shapley.html#shapley). 6\.4 Example: Titanic data -------------------------- Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest in the Titanic data. The mean of model’s predictions for all passengers is equal to \\(v\_0\=\\) 0\.2353095\. Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas) presents the scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) and the expected values \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\). Note that, given [(6\.8\)](breakDown.html#eq:deltaBreakDownAdditive) and the fact that \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\>v\_0\\) for all variables, we have got \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\=\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\+v\_0\\). Table 6\.1: Expected values \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\) and scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) for the random forest model and Johnny D for the Titanic data. The scores are sorted in decreasing order. | variable \\(j\\) | \\(E\_{\\underline{X}}\\{f(\\underline{X}) \| X^j \= {x}^j\_\*\\}\\) | \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) | | --- | --- | --- | | age \= 8 | 0\.5051210 | 0\.2698115 | | class \= 1st | 0\.4204449 | 0\.1851354 | | fare \= 72 | 0\.3785383 | 0\.1432288 | | gender \= male | 0\.1102873 | 0\.1250222 | | embarked \= Southampton | 0\.2246035 | 0\.0107060 | | sibsp \= 0 | 0\.2429597 | 0\.0076502 | | parch \= 0 | 0\.2322655 | 0\.0030440 | Based on the ordering defined by the scores \\(\|\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\|\\) from Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas), we can compute the variable\-importance measures based on the sequential contributions \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\). The computed values are presented in Table [6\.2](breakDown.html#tab:titanicBreakDownDeltasConseq). Table 6\.2: Variable\-importance measures \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\), with \\(J\=\\{1,\\ldots,j\\}\\), for the random forest model and Johnny D for the Titanic data, computed by using the ordering of variables defined in Table [6\.1](breakDown.html#tab:titanicBreakDownDeltas). | variable \\(j\\) | \\(E\_{\\underline{X}}\\left\\{ f(\\underline{X}) \| \\underline{X}^{J} \= \\underline{x}^{J}\_\*\\right\\}\\) | \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\) | | --- | --- | --- | | intercept \\((v\_0\)\\) | 0\.2353095 | 0\.2353095 | | age \= 8 | 0\.5051210 | 0\.2698115 | | class \= 1st | 0\.5906969 | 0\.0855759 | | fare \= 72 | 0\.5443561 | \-0\.0463407 | | gender \= male | 0\.4611518 | \-0\.0832043 | | embarked \= Southampton | 0\.4584422 | \-0\.0027096 | | sibsp \= 0 | 0\.4523398 | \-0\.0061024 | | parch \= 0 | 0\.4220000 | \-0\.0303398 | | prediction | 0\.4220000 | 0\.4220000 | Results from Table [6\.2](breakDown.html#tab:titanicBreakDownDeltasConseq) are presented in Figure [6\.3](breakDown.html#fig:BDjohnyExample). The plot indicates that the largest positive contributions to the predicted probability of survival for Johnny D come from explanatory variables *age* and *class*. 
The contributions of the remaining variables are smaller (in absolute values) and negative.

Figure 6\.3: Break\-down plot for the random forest model and Johnny D for the Titanic data.

6\.5 Pros and cons
------------------

BD plots offer a model\-agnostic approach that can be applied to any predictive model that returns a single number for a single observation (instance). The approach offers several advantages. The plots are, in general, easy to understand. They are compact; results for many explanatory variables can be presented in a limited space. The approach reduces to an intuitive interpretation for linear models. The numerical complexity of the BD algorithm is linear in the number of explanatory variables.

An important issue is that BD plots may be misleading for models including interactions. This is because the plots show only the additive attributions. Thus, the choice of the ordering of the explanatory variables that is used in the calculation of the variable\-importance measures is important. Also, for models with a large number of variables, BD plots may be complex and include many explanatory variables with small contributions to the instance prediction.

To address the issue of the dependence of the variable\-importance measure on the ordering of the explanatory variables, the heuristic approach described in Section [6\.3\.2](breakDown.html#BDMethodGen) can be applied. Alternative approaches are described in Chapters [7](iBreakDown.html#iBreakDown) and [8](shapley.html#shapley).

6\.6 Code snippets for R
------------------------

In this section, we use the `DALEX` package, which is a wrapper for the `iBreakDown` R package (Gosiewska and Biecek [2019](#ref-iBreakDownRPackage)). The package covers all methods presented in this chapter. It is available on `CRAN` and `GitHub`.

For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)).

We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.

```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
(henry <- archivist::aread("pbiecek/models/a6538"))
```

```
##   class gender age sibsp parch fare  embarked
## 1   1st   male  47     0     0   25 Cherbourg
```

Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available.

```
library("randomForest")
library("DALEX")
explain_rf <- DALEX::explain(model = titanic_rf,
                             data = titanic_imputed[, -9],
                             y = titanic_imputed$survived == "yes",
                             label = "Random Forest")
```

The explainer object allows uniform access to predictive models regardless of their internal structure. With this object, we can proceed to the model analysis.
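Before turning to the dedicated function, it may be instructive to approximate the quantities of Section 6.3.2 directly. The sketch below is our own illustration (it is not the code used internally by `DALEX`): it estimates each conditional expected value by replacing the selected columns of the explainer's data with Henry's values and averaging the model's predictions, computes the single-variable scores used to order the variables, and then the sequential contributions along that ordering. The helper `expected_fixed()` is ours; the sketch assumes that `predict()` applied to the explainer returns the predicted probability of survival and that the factor levels in `henry` match those used to fit the model.

```
# Our own sketch (not DALEX internals): estimate E{f(X) | selected variables
# fixed at Henry's values} by replacing those columns and averaging predictions.
expected_fixed <- function(explainer, instance, fixed_vars) {
  data_fixed <- explainer$data
  for (v in fixed_vars) data_fixed[[v]] <- instance[[v]]
  mean(predict(explainer, data_fixed))
}

vars <- colnames(henry)
v0   <- mean(predict(explain_rf, explain_rf$data))   # mean model prediction

# Step 1 of the heuristic: single-variable scores used to order the variables.
delta_single <- sapply(vars, function(v) expected_fixed(explain_rf, henry, v) - v0)
ordering     <- names(sort(abs(delta_single), decreasing = TRUE))

# Step 2: sequential contributions along the chosen ordering.
previous <- v0
for (j in seq_along(ordering)) {
  current <- expected_fixed(explain_rf, henry, ordering[1:j])
  cat(sprintf("%-10s %9.4f\n", ordering[j], current - previous))
  previous <- current
}
```

The `predict_parts()` function, described next, performs this type of computation in a more careful way and returns an object that can be printed and plotted.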
### 6\.6\.1 Basic use of the `predict_parts()` function The `DALEX::predict_parts()` function decomposes model predictions into parts that can be attributed to individual variables. It calculates the variable\-attribution measures for a selected model and an instance of interest. The object obtained as a result of applying the function is a data frame containing the calculated measures. In the simplest call, the function requires three arguments: * `explainer` \- an explainer\-object, created with function `DALEX::explain()`; * `new_observation` \- an observation to be explained; it should be a data frame with a structure that matches the structure of the dataset used for fitting of the model; * `type` \- the method for calculation of variable attribution; the possible methods are `"break_down"` (the default), `"shap"`, `"oscillations"`, and `"break_down_interactions"`. In the code below, the argument `type = "break_down"` is explicitly used. The code essentially provides the variable\-importance values \\(\\Delta^{j\|\\{1,\\ldots,j\\}}(\\underline{x}\_\*)\\). ``` bd_rf <- predict_parts(explainer = explain_rf, new_observation = henry, type = "break_down") bd_rf ``` ``` ## contribution ## Random Forest: intercept 0.235 ## Random Forest: class = 1st 0.185 ## Random Forest: gender = male -0.124 ## Random Forest: embarked = Cherbourg 0.105 ## Random Forest: age = 47 -0.092 ## Random Forest: fare = 25 -0.030 ## Random Forest: sibsp = 0 -0.032 ## Random Forest: parch = 0 -0.001 ## Random Forest: prediction 0.246 ``` By applying the generic `plot()` function to the object resulting from the application of the `predict_parts()` function we obtain a BD plot. ``` plot(bd_rf) ``` The resulting plot is shown in Figure [6\.4](breakDown.html#fig:BDhenryExample). It can be used to compare the explanatory\-variable attributions obtained for Henry with those computed for Johnny D (see Figure [6\.3](breakDown.html#fig:BDjohnyExample)). Both explanations refer to the same random forest model. We can see that the predicted survival probability for Henry (0\.246\) is almost the same as the mean prediction (0\.235\), while the probability for Johnny D is higher (0\.422\). For Johnny D, this result can be mainly attributed to the positive contribution of *age* and *class*. For Henry, *class* still contributes positively to the chances of survival, but the effect of *age* is negative. For both passengers the effect of *gender* is negative. Thus, one could conclude that the difference in the predicted survival probabilities is mainly due to the difference in the age of Henry and Johnny D. Figure 6\.4: Break\-down plot for the random forest model and Henry for the Titanic data, obtained by the generic `plot()` function in R. ### 6\.6\.2 Advanced use of the `predict_parts()` function Apart from the `explainer`, `new_observation`, and `type` arguments, function `predict_parts()` allows additional ones. The most commonly used are: * `order` \- a vector of characters (column names) or integers (column indexes) that specify the order of explanatory variables to be used for computing the variable\-importance measures; if not specified (default), then a one\-step heuristic is used to determine the order; * `keep_distributions` \- a logical value (`FALSE` by default); if `TRUE`, then additional diagnostic information about conditional distributions of predictions is stored in the resulting object and can be plotted with the generic `plot()` function. In what follows, we illustrate the use of the arguments. 
First, we specify the ordering of the explanatory variables. Toward this end, we can use integer indexes or variable names. The latter option is preferable in most cases because of transparency. Additionally, to reduce clutter in the plot, we set `max_features = 3` argument in the `plot()` function.

```
bd_rf_order <- predict_parts(explainer = explain_rf,
                             new_observation = henry,
                             type = "break_down",
                             order = c("class", "age", "gender", "fare",
                                       "parch", "sibsp", "embarked"))
plot(bd_rf_order, max_features = 3)
```

The resulting plot is presented in Figure [6\.5](breakDown.html#fig:BDhenryExampleTop). It is worth noting that the attributions for variables *gender* and *fare* do differ from those shown in Figure [6\.4](breakDown.html#fig:BDhenryExample). This is the result of the change of the ordering of variables used in the computation of the attributions.

Figure 6\.5: Break\-down plot for the top three variables for the random forest model and Henry for the Titanic data.

We can use the `keep_distributions = TRUE` argument to enrich the resulting object with additional information about conditional distributions of predicted values. Subsequently, we can apply the `plot_distributions = TRUE` argument in the `plot()` function to present the distributions as violin plots.

```
bd_rf_distr <- predict_parts(explainer = explain_rf,
                             new_observation = henry,
                             type = "break_down",
                             order = c("age", "class", "fare", "gender",
                                       "embarked", "sibsp", "parch"),
                             keep_distributions = TRUE)
plot(bd_rf_distr, plot_distributions = TRUE)
```

The resulting plot is presented in Figure [6\.6](breakDown.html#fig:BDhenryExampleDistr). Red dots indicate the mean model’s predictions. Thin grey lines between violin plots indicate changes in predictions for individual observations. They can be used to track how the model’s predictions change after consecutive conditionings. A similar code was used to create the plot in panel A of Figure [6\.1](breakDown.html#fig:BDPrice4) for Johnny D.

Figure 6\.6: Break\-down plot with violin plots summarizing distributions of predicted values for a selected order of explanatory variables for the random forest model and Henry for the Titanic data.
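Since, as discussed in Section 6.3.2, the attributions depend on the ordering of the explanatory variables, it may also be useful to check how stable they are for a particular instance. A simple way to do this, using only the arguments introduced above, is to recompute the break-down for a few randomly chosen orderings. The snippet below is a suggestion of ours rather than part of the workflow presented above; it foreshadows the averaging idea of Chapter 8.

```
# Recompute the break-down attributions for a few random orderings of variables.
vars <- colnames(henry)
set.seed(1)
bd_random <- lapply(1:3, function(i)
  predict_parts(explainer = explain_rf,
                new_observation = henry,
                type = "break_down",
                order = sample(vars)))

# Each element is a regular break-down object and can be plotted as before.
plot(bd_random[[1]])
```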
6\.7 Code snippets for Python
-----------------------------

In this section, we use the `dalex` library in Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`.

For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the first class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)).

In the first step, we create an explainer object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)).

```
import pandas as pd
henry = pd.DataFrame({'gender'  : ['male'],
                      'age'     : [47],
                      'class'   : ['1st'],
                      'embarked': ['Cherbourg'],
                      'fare'    : [25],
                      'sibsp'   : [0],
                      'parch'   : [0]},
                     index = ['Henry'])

import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y,
                              label = "Titanic RF Pipeline")
```

To apply the break\-down method we use the `predict_parts()` method. The first argument indicates the data for the observation for which the attributions are to be calculated. The `type` argument specifies the method of calculation of attributions. Results are stored in the `result` field.

```
bd_henry = titanic_rf_exp.predict_parts(henry, type = 'break_down')
bd_henry.result
```

To obtain a waterfall chart we can use the `plot()` method. It generates an interactive chart based on the `plotly` library.

```
bd_henry.plot()
```

The resulting plot is presented in Figure [6\.7](breakDown.html#fig:bdPython1).

Figure 6\.7: Break\-down plot for the random forest model and Henry for the Titanic data, obtained by the `plot()` method in Python.

Advanced users can make use of the `order` argument of the `predict_parts()` method. It allows forcing a specific order of variables in the break\-down method. Also, if the model includes many explanatory variables, the waterfall chart may be hard to read. In this situation, the `max_vars` argument can be used in the `plot()` method to limit the number of variables presented in the graph.
``` import numpy as np bd_henry = titanic_rf_exp.predict_parts(henry, type = 'break_down', order = np.array(['gender', 'class', 'age', 'embarked', 'fare', 'sibsp', 'parch'])) bd_henry.plot(max_vars = 5) ``` The resulting plot is presented in Figure [6\.8](breakDown.html#fig:bdPython2). Figure 6\.8: Break\-down plot for a limited number of explanatory variables in a specified order for the random forest model and Henry for the Titanic data, obtained by the `plot()` method in Python.
7 Break\-down Plots for Interactions ==================================== In Chapter [6](breakDown.html#breakDown), we presented a model\-agnostic approach to the calculation of the attribution of an explanatory variable to a model’s predictions. However, for some models, like models with interactions, the results of the method introduced in Chapter [6](breakDown.html#breakDown) depend on the ordering of the explanatory variables that are used in computations. In this chapter, we present an algorithm that addresses the issue. In particular, the algorithm identifies interactions between pairs of variables and takes them into account when constructing break\-down (BD) plots. In our presentation, we focus on pairwise interactions that involve pairs of explanatory variables, but the algorithm can be easily extended to interactions involving a larger number of variables. 7\.1 Intuition -------------- Interaction (deviation from additivity) means that the effect of an explanatory variable depends on the value(s) of other variable(s). To illustrate such a situation, we use the Titanic dataset (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). For the sake of simplicity, we consider only two variables, *age* and *class*. *Age* is a continuous variable, but we will use a dichotomized version of it, with two levels: boys (0\-16 years old) and adults (17\+ years old). Also, for *class*, we will consider just “2nd class” and “other”. Table [7\.1](iBreakDown.html#tab:titanicMaleSurvival) shows percentages of survivors for boys and adult men travelling in the second class and other classes on Titanic. Overall, the proportion of survivors among males is 20\.5%. However, among boys in the second class, the proportion is 91\.7%. How do *age* and *class* contribute to this higher survival probability? Let us consider the following two explanations. **Explanation 1:** The overall probability of survival for males is 20\.5%, but for the male passengers from the second class, the probability is even lower, i.e., 13\.5%. Thus, the effect of the travel class is negative, as it decreases the probability of survival by 7 percentage points. Now, if, for male passengers of the second class, we consider their age, we see that the survival probability for boys increases by 78\.2 percentage points, from 13\.5% (for a male in the second class) to 91\.7%. Thus, by considering first the effect of *class*, and then the effect of *age*, we can conclude the effect of \\(\-7\\) percentage points for *class* and \\(\+78\.2\\) percentage points for *age* (being a boy). **Explanation 2:** The overall probability of survival for males is 20\.5%, but for boys the probability is higher, i.e., 40\.7%. Thus, the effect of *age* (being a boy) is positive, as it increases the survival probability by 20\.2 percentage points. On the other hand, for boys, travelling in the second class increases the probability further from 40\.7% overall to 91\.7%. Thus, by considering first the effect of *age*, and then the effect of *class*, we can conclude the effect of \\(\+20\.2\\) percentage points for *age* (being a boy) and \\(\+51\\) percentage points for *class*. Table 7\.1: Proportion of survivors for men on Titanic. 
| Class | Boys (0\-16\) | Adults (\>16\) | Total | | --- | --- | --- | --- | | 2nd | 11/12 \= 91\.7% | 13/166 \= 7\.8% | 24/178 \= 13\.5% | | other | 22/69 \= 31\.9% | 306/1469 \= 20\.8% | 328/1538 \= 21\.3% | | Total | 33/81 \= 40\.7% | 319/1635 \= 19\.5% | 352/1716 \= 20\.5% | Thus, by considering the effects of *class* and *age* in a different order, we get very different attributions (contributions attributed to the variables). This is because there is an interaction: the effect of *class* depends on *age* and *vice versa*. In particular, from Table [7\.1](iBreakDown.html#tab:titanicMaleSurvival) we could conclude that the overall effect of the second class is negative ( \\(\-7\\) percentage points), as it decreases the probability of survival from 20\.5% to 13\.5%. On the other hand, the overall effect of being a boy is positive (\\(\+20\.2\\) percentage points), as it increases the probability of survival from 20\.5% to 40\.7%. Based on those effects, we would expect a probability of \\(20\.5\\% \- 7\\% \+ 20\.2\\% \= 33\.7\\%\\) for a boy in the second class. However, the observed proportion of survivors is much higher, 91\.7%. The difference \\(91\.7\\% \- 33\.7\\% \= 58\\%\\) is the interaction effect. We can interpret it as an additional effect of the second class specific for boys, or as an additional effect of being a boy for the male passengers travelling in the second class. The example illustrates that interactions complicate the evaluation of the importance of explanatory variables with respect to a model’s predictions. In the next section, we present an algorithm that allows including interactions in the BD plots. 7\.2 Method ----------- Identification of interactions in the model is performed in three steps (Gosiewska and Biecek [2019](#ref-iBreakDownRPackage)): 1. For each explanatory variable, compute \\(\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\\) as in equation [(6\.8\)](breakDown.html#eq:deltaBreakDownAdditive) in Section [6\.3\.2](breakDown.html#BDMethodGen). The measure quantifies the additive contribution of each variable to the instance prediction. 2. For each pair of explanatory variables, compute \\(\\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\\) as in equation [(6\.8\)](breakDown.html#eq:deltaBreakDownAdditive) in Section [6\.3\.2](breakDown.html#BDMethodGen), and then the “net effect” of the interaction \\\[\\begin{equation} \\Delta^{\\{i,j\\}}\_I(x\_\*) \\equiv \\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\-\\Delta^{i\|\\emptyset}(\\underline{x}\_\*)\-\\Delta^{j\|\\emptyset}(\\underline{x}\_\*). \\tag{7\.1} \\end{equation}\\] Note that \\(\\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\\) quantifies the joint contribution of a pair of variables. Thus, \\(\\Delta^{\\{i,j\\}}\_I(\\underline{x}\_\*)\\) measures the contribution related to the deviation from additivity, i.e., to the interaction between the \\(i\\)\-th and \\(j\\)\-th variable. 3. Rank the so\-obtained measures for individual explanatory variables and interactions to determine the final ordering for computing the variable\-importance measures. Using the ordering, compute variable\-importance measures \\(v(j, \\underline{x}\_\*)\\), as defined in equation [(6\.9\)](breakDown.html#eq:viBD) in Section [6\.3\.2](breakDown.html#BDMethodGen). The time complexity of the first step is \\(O(p)\\), where \\(p\\) is the number of explanatory variables. For the second step, the complexity is \\(O(p^2\)\\), while for the third step it is \\(O(p)\\). 
Thus, the time complexity of the entire procedure is \\(O(p^2\)\\). 7\.3 Example: Titanic data -------------------------- Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest in the Titanic data. Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList) presents single\-variable contributions \\(\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\\), paired\-variable contributions \\(\\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\\), and interaction contributions \\(\\Delta\_{I}^{\\{i,j\\}}(\\underline{x}\_\*)\\) for each explanatory variable and each pair of variables. All the measures are calculated for Johnny D, the instance of interest. Table 7\.2: Paired\-variable contributions \\(\\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\\), interaction contributions \\(\\Delta\_{I}^{\\{i,j\\}}(\\underline{x}\_\*)\\), and single\-variable contributions \\(\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\\) for the random forest model and Johnny D for the Titanic data. | Variable | \\(\\Delta^{\\{i,j\\}\|\\emptyset}(\\underline{x}\_\*)\\) | \\(\\Delta\_{I}^{\\{i,j\\}}(\\underline{x}\_\*)\\) | \\(\\Delta^{i\|\\emptyset}(\\underline{x}\_\*)\\) | | --- | --- | --- | --- | | age | | | 0\.270 | | fare:class | 0\.098 | \-0\.231 | | | class | | | 0\.185 | | fare:age | 0\.249 | \-0\.164 | | | fare | | | 0\.143 | | gender | | | \-0\.125 | | age:class | 0\.355 | \-0\.100 | | | age:gender | 0\.215 | 0\.070 | | | fare:gender | | | | | embarked | | | \-0\.011 | | embarked:age | 0\.269 | 0\.010 | | | parch:gender | \-0\.136 | \-0\.008 | | | sibsp | | | 0\.008 | | sibsp:age | 0\.284 | 0\.007 | | | sibsp:class | 0\.187 | \-0\.006 | | | embarked:fare | 0\.138 | 0\.006 | | | sibsp:gender | \-0\.123 | \-0\.005 | | | fare:parch | 0\.145 | 0\.005 | | | parch:sibsp | 0\.001 | \-0\.004 | | | parch | | | \-0\.003 | | parch:age | 0\.264 | \-0\.002 | | | embarked:gender | \-0\.134 | 0\.002 | | | embarked:parch | \-0\.012 | 0\.001 | | | fare:sibsp | 0\.152 | 0\.001 | | | embarked:class | 0\.173 | \-0\.001 | | | gender:class | 0\.061 | 0\.001 | | | embarked:sibsp | \-0\.002 | 0\.001 | | | parch:class | 0\.183 | 0\.000 | | The table illustrates the calculation of the contributions of interactions. For instance, the additive contribution of *age* is equal to 0\.270, while for *fare* it is equal to 0\.143\. The joint contribution of these two variables is equal to 0\.249\. Hence, the contribution attributed to the interaction is equal to \\(0\.249 \- 0\.270 \- 0\.143 \= \-0\.164\\). Note that the rows of Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList) are sorted according to the absolute value of the net contribution of the single explanatory variable or the net contribution of the interaction between two variables. For a single variable, the net contribution is simply measured by \\(\\Delta^{j\|\\emptyset}(\\underline{x}\_\*)\\), while for an interaction it is given by \\(\\Delta\_{I}^{\\{i,j\\}}(\\underline{x}\_\*)\\). In this way, if two variables are important and there is little interaction, then the net contribution of the interaction is smaller than the contribution of any of the two variables. Consequently, the interaction will be ranked lower. This is the case, for example, of variables *age* and *gender* in Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList). 
On the other hand, if the interaction is important, then its net contribution will be larger than the contribution of any of the two variables. This is the case, for example, of variables *fare* and *class* in Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList).

Based on the ordering of the rows in Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList), the following sequence of variables is identified as informative:

* *age*, because it has the largest (in absolute value) net contribution equal to 0\.270;
* *fare:class* interaction, because its net contribution (\-0\.231\) is the second largest (in absolute value);
* *gender*, because variables *class* and *fare* are already accounted for in the *fare:class* interaction and the net contribution of *gender*, equal to 0\.125, is the largest (in absolute value) among the remaining variables and interactions;
* *embarked* harbor (based on a similar reasoning as for *gender*);
* then *sibsp* and *parch* as variables with the smallest net contributions (among single variables), which are larger than the contribution of their interaction.

Table [7\.3](iBreakDown.html#tab:titanicIBreakDownList2) presents the variable\-importance measures computed by using the following ordering of explanatory variables and their pairwise interactions: *age*, *fare:class*, *gender*, *embarked*, *sibsp*, and *parch*. The table also presents the conditional expected values (see equations [(6\.5\)](breakDown.html#eq:generalBreakDownLocalAccuracy) and [(6\.9\)](breakDown.html#eq:viBD) in Section [6\.3\.2](breakDown.html#BDMethodGen))

\\\[E\_{\\underline{X}}\\left\\{f(\\underline{X}) \| \\underline{X}^{\\{1,\\ldots,j\\}} \= \\underline{x}^{\\{1,\\ldots,j\\}}\_\*\\right\\}\=v\_0\+\\sum\_{k\=1}^j v(k,\\underline{x}\_\*)\=v\_0\+\\Delta^{\\{1,\\ldots,j\\}\|\\emptyset}(\\underline{x}\_\*).\\]

Note that the expected value presented in the last row, 0\.422, corresponds to the model’s prediction for the instance of interest, passenger Johnny D.

Table 7\.3: Variable\-importance measures \\(v(j,\\underline{x}\_\*)\\) and the conditional expected values \\(v\_0\+\\sum\_{k\=1}^j v(k,\\underline{x}\_\*)\\) computed by using the sequence of variables *age*, *fare:class*, *gender*, *embarked*, *sibsp*, and *parch* for the random forest model and Johnny D for the Titanic data.

| Variable | \\(j\\) | \\(v(j,\\underline{x}\_\*)\\) | \\(v\_0\+\\sum\_{k\=1}^j v(k,\\underline{x}\_\*)\\) |
| --- | --- | --- | --- |
| intercept (\\(v\_0\\)) | | | 0\.235 |
| age \= 8 | 1 | 0\.269 | 0\.505 |
| fare:class \= 72:1st | 2 | 0\.039 | 0\.544 |
| gender \= male | 3 | \-0\.083 | 0\.461 |
| embarked \= Southampton | 4 | \-0\.002 | 0\.458 |
| sibsp \= 0 | 5 | \-0\.006 | 0\.452 |
| parch \= 0 | 6 | \-0\.030 | 0\.422 |

Figure [7\.1](iBreakDown.html#fig:iBreakDownTitanicExamplePlot) presents the interaction\-break\-down (iBD) plot corresponding to the results shown in Table [7\.3](iBreakDown.html#tab:titanicIBreakDownList2). The interaction between *fare* and *class* variables is included in the plot as a single bar. As the effects of these two variables cannot be disentangled, the plot uses just that single bar to represent the contribution of both variables. Table [7\.2](iBreakDown.html#tab:titanicIBreakDownList) indicates that *class* alone would increase the mean prediction by 0\.185, while *fare* would increase the mean prediction by 0\.143\. However, taken together, they increase the average prediction only by 0\.098\.
A possible explanation of this negative interaction could be that, while the ticket fare of 72 is high on average, it is actually below the median when the first\-class passengers are considered. Thus, if first\-class passengers with “cheaper” tickets, as Johnny D, were, for instance, placed in cabins that made it more difficult to reach a lifeboat, this could lead to lower chances of survival as compared to other passengers from the same class (though the chances could be still higher as compared to passengers from other, lower travel classes). Figure 7\.1: Break\-down plot with interactions for the random forest model and Johnny D for the Titanic data. 7\.4 Pros and cons ------------------ iBD plots share many advantages and disadvantages of BD plots for models without interactions (see Section [6\.5](breakDown.html#BDProsCons)). However, in the case of models with interactions, iBD plots provide more correct explanations. Though the numerical complexity of the iBD procedure is quadratic, it may be time\-consuming in case of models with a large number of explanatory variables. For a model with \\(p\\) explanatory variables, we have got to calculate \\(p\*(p\+1\)/2\\) net contributions for single variables and pairs of variables. For datasets with a small number of observations, the calculations of the net contributions will be subject to a larger variability and, therefore, larger randomness in the ranking of the contributions. It is also worth noting that the presented procedure of identification of interactions is not based on any formal statistical\-significance test. Thus, the procedure may lead to false\-positive findings and, especially for small sample sizes, false\-negative errors. 7\.5 Code snippets for R ------------------------ In this section, we use the `DALEX` package, which is a wrapper for `iBreakDown` R package. The package covers all methods presented in this chapter. It is available on `CRAN` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that the model is constructed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). First, we retrieve the `titanic_rf` model\-object and the data for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. ``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist:: aread("pbiecek/models/4e0fc") (henry <- archivist::aread("pbiecek/models/a6538")) ``` ``` class gender age sibsp parch fare embarked 1 1st male 47 0 0 25 Cherbourg ``` Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that, beforehand, we have got to load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. 
``` library("DALEX") library("randomForest") explain_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` The key function to construct iBD plots is the `DALEX::predict_parts()` function. The use of the function has already been explained in Section [6\.6](breakDown.html#BDR). In order to perform calculations that allow obtaining iBD plots, the required argument is `type = "break_down_interactions"`. ``` bd_rf <- predict_parts(explainer = explain_rf, new_observation = henry, type = "break_down_interactions") bd_rf ``` ``` ## contribution ## Random Forest: intercept 0.235 ## Random Forest: class = 1st 0.185 ## Random Forest: gender = male -0.124 ## Random Forest: embarked:fare = Cherbourg:25 0.107 ## Random Forest: age = 47 -0.125 ## Random Forest: sibsp = 0 -0.032 ## Random Forest: parch = 0 -0.001 ## Random Forest: prediction 0.246 ``` We can compare the obtained variable\-importance measures to those reported for Johnny D in Table [7\.3](iBreakDown.html#tab:titanicIBreakDownList2). For Henry, the most important positive contribution comes from *class*, while for Johnny D it is *age*. Interestingly, for Henry, a positive contribution of the interaction between *embarked* harbour and *fare* is found. For Johnny D, a different interaction was identified: for *fare* and *class*. Finding an explanation for this difference is not straightforward. In any case, in those two instances, the contribution of fare appears to be modified by effects of other variable(s), i.e., its effect is not purely additive. By applying the generic `plot()` function to the object created by the `DALEX::predict_parts()` function we obtain the iBD plot. ``` plot(bd_rf) ``` Figure 7\.2: Break\-down plot with interactions for the random forest model and Henry for the Titanic data, obtained by applying the generic `plot()` function in R. The resulting iBD plot for Henry is shown in Figure [7\.2](iBreakDown.html#fig:iBDforHenry). It can be compared to the iBD plot for Johnny D that is presented in Figure [7\.1](iBreakDown.html#fig:iBreakDownTitanicExamplePlot). 7\.6 Code snippets for Python ----------------------------- In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub.` For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the first class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)). In the first step, we create an explainer\-object that provides a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). ``` import pandas as pd henry = pd.DataFrame({'gender': ['male'], 'age': [47], 'class': ['1st'], 'embarked': ['Cherbourg'], 'fare': [25], 'sibsp': [0], 'parch': [0]}, index = ['Henry']) import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` To calculate the attributions with the break\-down method with interactions, we use the `predict_parts()` method with `type='break_down_interactions'` argument (see Section [6\.7](breakDown.html#BDPython)). 
The first argument indicates the data for the observation for which the attributions are to be calculated. Interactions are often weak and their net effects are not larger than the contributions of individual variables. If we would like to increase our preference for interactions, we can use the `interaction_preference` argument. The default value of \\(1\\) means no preference, while larger values indicate a larger preference. Results are stored in the `result` field.

```
bd_henry = titanic_rf_exp.predict_parts(henry,
                type = 'break_down_interactions',
                interaction_preference = 10)
bd_henry.result
```

By applying the `plot()` method to the resulting object, we construct the corresponding iBD plot.

```
bd_henry.plot()
```

The resulting plot for Henry is shown in Figure [7\.3](iBreakDown.html#fig:ibdPython2).

Figure 7\.3: Break\-down plot with interactions for the random forest model and Henry for the Titanic data, obtained by applying the `plot()` method in Python.
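To connect these plots back to equation (7.1), the net effect of a single interaction can also be approximated directly. The short R sketch below is our own illustration: it reuses the `explain_rf` explainer and `henry` from Section 7.5 and the same averaging estimator as in the sketch of Section 6.6 (the helper `expected_fixed()` is ours, not part of `DALEX`), and computes the deviation from additivity for the pair *fare* and *class*.

```
# Same estimator as in the Section 6.6 sketch: average the model's predictions
# after fixing the selected columns at Henry's values (our own helper).
expected_fixed <- function(explainer, instance, fixed_vars) {
  data_fixed <- explainer$data
  for (v in fixed_vars) data_fixed[[v]] <- instance[[v]]
  mean(predict(explainer, data_fixed))
}

v0          <- mean(predict(explain_rf, explain_rf$data))
delta_fare  <- expected_fixed(explain_rf, henry, "fare") - v0
delta_class <- expected_fixed(explain_rf, henry, "class") - v0
delta_pair  <- expected_fixed(explain_rf, henry, c("fare", "class")) - v0

# Deviation from additivity, i.e., the net interaction effect of equation (7.1).
delta_pair - delta_fare - delta_class
```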
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/shapley.html
8 Shapley Additive Explanations (SHAP) for Average Attributions =============================================================== In Chapter [6](breakDown.html#breakDown), we introduced break\-down (BD) plots, a procedure for the calculation of the attribution of an explanatory variable to a model’s prediction. We also indicated that, in the presence of interactions, the computed value of the attribution depends on the order of explanatory covariates that are used in calculations. One solution to the problem, presented in Chapter [6](breakDown.html#breakDown), is to find an ordering in which the most important variables are placed at the beginning. Another solution, described in Chapter [7](iBreakDown.html#iBreakDown), is to identify interactions and explicitly present their contributions to the predictions. In this chapter, we introduce yet another approach to address the ordering issue. It is based on the idea of averaging the value of a variable’s attribution over all (or a large number of) possible orderings. The idea is closely linked to “Shapley values” developed originally for cooperative games (Shapley [1953](#ref-shapleybook1952)). The approach was first translated to the machine\-learning domain by Štrumbelj and Kononenko ([2010](#ref-imeJLMR)) and Štrumbelj and Kononenko ([2014](#ref-Strumbelj2014)). It has been widely adopted after the publication of the paper by Lundberg and Lee ([2017](#ref-SHAP)) and Python’s library for SHapley Additive exPlanations, SHAP (Lundberg [2019](#ref-shapPackage)). The authors of SHAP introduced an efficient algorithm for tree\-based models (Lundberg, Erion, and Lee [2018](#ref-TreeSHAP)). They also showed that Shapley values could be presented as a unification of a collection of different commonly used techniques for model explanations (Lundberg and Lee [2017](#ref-SHAP)). 8\.1 Intuition -------------- Figure [8\.1](shapley.html#fig:shap10orderings) presents BD plots for 10 random orderings (indicated by the order of the rows in each plot) of explanatory variables for the prediction for Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) for the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for the Titanic dataset. The plots show clear differences in the contributions of various variables for different orderings. The most remarkable differences can be observed for variables *fare* and *class*, with contributions changing the sign depending on the ordering. Figure 8\.1: Break\-down plots for 10 random orderings of explanatory variables for the prediction for Johnny D for the random forest model `titanic_rf` for the Titanic dataset. Each panel presents a single ordering, indicated by the order of the rows in the plot. To remove the influence of the ordering of the variables, we can compute the mean value of the attributions. Figure [8\.2](shapley.html#fig:shapOrdering) presents the averages, calculated over the ten orderings presented in Figure [8\.1](shapley.html#fig:shap10orderings). Red and green bars present, respectively, the negative and positive averages. Violet box plots summarize the distribution of the attributions for each explanatory variable across the different orderings. The plot indicates that the most important variables, from the point of view of the prediction for Johnny D, are *age*, *class*, and *gender*. Figure 8\.2: Average attributions for 10 random orderings. Red and green bars present the means.
Box plots summarize the distribution of contributions for each explanatory variable across the orderings. 8\.2 Method ----------- SHapley Additive exPlanations (SHAP) are based on “Shapley values” developed by Shapley ([1953](#ref-shapleybook1952)) in cooperative game theory. Note that the terminology may be confusing at first glance. Shapley values are introduced for cooperative games. SHAP is an acronym for a method designed for predictive models. To avoid confusion, we will use the term “Shapley values”. Shapley values are a solution to the following problem. A coalition of players cooperates and obtains a certain overall gain from the cooperation. Players are not identical, and different players may have different importance. Cooperation is beneficial because it may bring more benefit than individual actions. The problem to solve is how to distribute the generated surplus among the players. Shapley values offer one possible fair answer to this question (Shapley [1953](#ref-shapleybook1952)). Let’s translate this problem to the context of a model’s predictions. Explanatory variables are the players, while model \\(f()\\) plays the role of the coalition. The payoff from the coalition is the model’s prediction. The problem to solve is how to distribute the model’s prediction across particular variables. The idea of using Shapley values for evaluation of local variable\-importance was introduced by Štrumbelj and Kononenko ([2010](#ref-imeJLMR)). We will define the values using the notation introduced in Section [6\.3\.2](breakDown.html#BDMethodGen). Let us consider a permutation \\(J\\) of the set of indices \\(\\{1,2,\\ldots,p\\}\\) corresponding to an ordering of \\(p\\) explanatory variables included in the model \\(f()\\). Denote by \\(\\pi(J,j)\\) the set of the indices of the variables that are positioned in \\(J\\) before the \\(j\\)\-th variable. Note that, if the \\(j\\)\-th variable is placed first, then \\(\\pi(J,j) \= \\emptyset\\). Consider the model’s prediction \\(f(\\underline{x}\_\*)\\) for a particular instance of interest \\(\\underline{x}\_\*\\). The Shapley value is defined as follows: \\\[\\begin{equation} \\varphi(\\underline{x}\_\*,j) \= \\frac{1}{p!} \\sum\_{J} \\Delta^{j\|\\pi(J,j)}(\\underline{x}\_\*), \\tag{8\.1} \\end{equation}\\] where the sum is taken over all \\(p!\\) possible permutations (orderings of explanatory variables) and the variable\-importance measure \\(\\Delta^{j\|J}(\\underline{x}\_\*)\\) was defined in equation [(6\.7\)](breakDown.html#eq:lcondJBD) in Section [6\.3\.2](breakDown.html#BDMethodGen). Essentially, \\(\\varphi(\\underline{x}\_\*,j)\\) is the average of the variable\-importance measures across all possible orderings of explanatory variables. It is worth noting that the value of \\(\\Delta^{j\|\\pi(J,j)}(\\underline{x}\_\*)\\) is constant for all permutations \\(J\\) that share the same subset \\(\\pi(J,j)\\). It follows that equation [(8\.1\)](shapley.html#eq:SHAP) can be expressed in an alternative form: \\\[\\begin{eqnarray} \\varphi(\\underline{x}\_\*,j) \&\=\& \\frac 1{p!}\\sum\_{s\=0}^{p\-1} \\sum\_{ \\substack{ S \\subseteq \\{1,\\ldots,p\\}\\setminus \\{j\\} \\\\ \|S\|\=s }} \\left\\{s!(p\-1\-s)! 
\\Delta^{j\|S}(\\underline{x}\_\*)\\right\\}\\nonumber\\\\ \&\=\& \\frac 1{p}\\sum\_{s\=0}^{p\-1} \\sum\_{ \\substack{ S \\subseteq \\{1,\\ldots,p\\}\\setminus \\{j\\} \\\\ \|S\|\=s }} \\left\\{{{p\-1}\\choose{s}}^{\-1} \\Delta^{j\|S}(\\underline{x}\_\*)\\right\\}, \\tag{8\.2} \\end{eqnarray}\\] where \\(\|S\|\\) denotes the cardinal number (size) of set \\(S\\) and the second sum is taken over all subsets \\(S\\) of explanatory variables, excluding the \\(j\\)\-th one, of size \\(s\\). Note that the number of all subsets of sizes from 0 to \\(p\-1\\) is \\(2^{p}\-1\\), i.e., it is much smaller than the number of all permutations \\(p!\\). Nevertheless, for a large \\(p\\), it may not be feasible to compute Shapley values by using either [(8\.1\)](shapley.html#eq:SHAP) or [(8\.2\)](shapley.html#eq:SHAP1). In that case, an estimate based on a sample of permutations may be considered. A Monte Carlo estimator was introduced by Štrumbelj and Kononenko ([2014](#ref-Strumbelj2014)). An efficient implementation of the computation of Shapley values for tree\-based models was provided in the SHAP package (Lundberg and Lee [2017](#ref-SHAP)). From the properties of Shapley values for cooperative games it follows that, in the context of predictive models, they enjoy the following properties: * Symmetry: if two explanatory variables \\(j\\) and \\(k\\) are interchangeable, i.e., if, for any set of explanatory variables \\(S \\subseteq \\{1,\\dots,p\\}\\setminus \\{j,k\\}\\) we have got \\\[ \\Delta^{j\|S}(\\underline{x}\_\*) \= \\Delta^{k\|S}(\\underline{x}\_\*), \\] then their Shapley values are equal: \\\[ \\varphi(\\underline{x}\_\*,j) \= \\varphi(\\underline{x}\_\*,k). \\] * Dummy feature: if an explanatory variable \\(j\\) does not contribute to any prediction for any set of explanatory variables \\(S \\subseteq \\{1,\\dots,p\\}\\setminus \\{j\\}\\), that is, if \\\[ \\Delta^{j\|S}(\\underline{x}\_\*) \= 0, \\] then its Shapley value is equal to 0: \\\[ \\varphi(\\underline{x}\_\*,j) \= 0\. \\] * Additivity: if model \\(f()\\) is a sum of two other models \\(g()\\) and \\(h()\\), then the Shapley value calculated for model \\(f()\\) is a sum of Shapley values for models \\(g()\\) and \\(h()\\). * Local accuracy (see Section [6\.3\.2](breakDown.html#BDMethodGen)): the sum of Shapley values is equal to the model’s prediction, that is, \\\[ f(\\underline{x}\_\*) \- E\_{\\underline{X}}\\{f(\\underline{X})\\} \= \\sum\_{j\=1}^p \\varphi(\\underline{x}\_\*,j), \\] where \\(\\underline{X}\\) is the vector of explanatory variables (corresponding to \\(\\underline{x}\_\*\\)) that are treated as random values. 8\.3 Example: Titanic data -------------------------- Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest in the Titanic data. Box plots in Figure [8\.3](shapley.html#fig:shappJohny02) present the distribution of the contributions \\(\\Delta^{j\|\\pi(J,j)}(\\underline{x}\_\*)\\) for each explanatory variable of the model for 25 random orderings of the explanatory variables. Red and green bars represent, respectively, the negative and positive Shapley values across the orderings. It is clear that the young age of Johnny D results in a positive contribution for all orderings; the resulting Shapley value is equal to 0\.2525\. On the other hand, the effect of gender is in all cases negative, with the Shapley value equal to \-0\.0908\. 
The picture for variables *fare* and *class* is more complex, as their contributions can even change the sign, depending on the ordering. Note that Figure [8\.3](shapley.html#fig:shappJohny02) presents Shapley values separately for each of the two variables. However, it is worth recalling that the iBD plot in Figure [7\.1](iBreakDown.html#fig:iBreakDownTitanicExamplePlot) indicated an important contribution of an interaction between the two variables. Hence, their contributions should not be separated. Thus, the Shapley values for *fare* and *class*, presented in Figure [8\.3](shapley.html#fig:shappJohny02), should be interpreted with caution. Figure 8\.3: Explanatory\-variable attributions for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data based on 25 random orderings. Left\-hand\-side plot: box plots summarize the distribution of the attributions for each explanatory variable across the orderings. Red and green bars present Shapley values. Right\-hand\-side plot: Shapley values (mean attributions) without box plots. In most applications, the detailed information about the distribution of variable contributions across the considered orderings of explanatory variables may not be of interest. Thus, one could simplify the plot by presenting only the Shapley values, as illustrated in the right\-hand\-side panel of Figure [8\.3](shapley.html#fig:shappJohny02). Table [8\.1](shapley.html#tab:shapOrderingTable) presents the Shapley values underlying this plot. Table 8\.1: Shapley values for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data based on 25 random orderings.
| Variable | Shapley value |
| --- | --- |
| age \= 8 | 0\.2525 |
| class \= 1st | 0\.0246 |
| embarked \= Southampton | \-0\.0032 |
| fare \= 72 | 0\.0140 |
| gender \= male | \-0\.0943 |
| parch \= 0 | \-0\.0097 |
| sibsp \= 0 | 0\.0027 |
8\.4 Pros and cons ------------------ Shapley values provide a uniform approach to decompose a model’s predictions into contributions that can be attributed additively to different explanatory variables. Lundberg and Lee ([2017](#ref-SHAP)) showed that the method unifies different approaches to additive variable attributions, like DeepLIFT (Shrikumar, Greenside, and Kundaje [2017](#ref-DeepLIFT)), Layer\-Wise Relevance Propagation (Binder et al. [2016](#ref-LWRP)), or Local Interpretable Model\-agnostic Explanations (Ribeiro, Singh, and Guestrin [2016](#ref-lime)). The method has got a strong formal foundation derived from cooperative game theory. It also enjoys an efficient implementation in Python, with ports or re\-implementations in R. An important drawback of Shapley values is that they provide additive contributions (attributions) of explanatory variables. If the model is not additive, then the Shapley values may be misleading. This issue can be seen as arising from the fact that, in cooperative games, the goal is to distribute the payoff among players. However, in the predictive modelling context, we want to understand how the players affect the payoff. Thus, we are not limited to independent payoff\-splits for players. It is worth noting that, for an additive model, the approaches presented in Chapters [6](breakDown.html#breakDown)–[7](iBreakDown.html#iBreakDown) and in the current one lead to the same attributions. The reason is that, for additive models, different orderings lead to the same contributions. 
Since the Shapley value can be seen as the mean across all orderings, it is essentially an average of identical values and, thus, it takes the same value as well. An important practical limitation of the general model\-agnostic method is that, for large models, the calculation of Shapley values is time\-consuming. However, sub\-sampling can be used to address the issue. For tree\-based models, effective implementations are available. 8\.5 Code snippets for R ------------------------ In this section, we use the `DALEX` package, which is a wrapper for the `iBreakDown` R package. The package covers all methods presented in this chapter. It is available on `CRAN` and `GitHub`. Note that there are also other R packages that offer similar functionalities, like `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)), `fastshap` (Greenwell [2020](#ref-fastshapRpackage)) or `shapper` (Maksymiuk, Gosiewska, and Biecek [2019](#ref-shapperPackage)), which is a wrapper for the Python library `SHAP` (Lundberg [2019](#ref-shapPackage)). For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. ``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist:: aread("pbiecek/models/4e0fc") henry <- archivist::aread("pbiecek/models/a6538") ``` Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. The model’s prediction for Henry is obtained with the help of the `predict()` function. ``` library("randomForest") library("DALEX") explain_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") predict(explain_rf, henry) ``` ``` ## [1] 0.246 ``` To compute Shapley values for Henry, we apply function `predict_parts()` (as in Section [6\.6](breakDown.html#BDR)) to the explainer\-object `explain_rf` and the data frame for the instance of interest, i.e., Henry. By specifying the `type="shap"` argument we indicate that we want to compute Shapley values. Additionally, the `B=25` argument indicates that we want to select 25 random orderings of explanatory variables for which the Shapley values are to be computed. Note that `B=25` is also the default value of the argument. ``` shap_henry <- predict_parts(explainer = explain_rf, new_observation = henry, type = "shap", B = 25) ``` The resulting object `shap_henry` is a data frame with variable\-specific attributions computed for every ordering. Printing out the object provides various summary statistics of the attributions, including, of course, the mean. 
``` shap_henry ``` ``` ## min q1 ## Random Forest: age = 47 -0.14872225 -0.081197100 ## Random Forest: class = 1st 0.12112732 0.123195061 ## Random Forest: embarked = Cherbourg 0.01245129 0.022680335 ## Random Forest: fare = 25 -0.03180517 -0.011710693 ## Random Forest: gender = male -0.15670412 -0.145184866 ## Random Forest: parch = 0 -0.02795650 -0.007438151 ## Random Forest: sibsp = 0 -0.03593203 -0.012978704 ## median mean ## Random Forest: age = 47 -0.040909832 -0.060137381 ## Random Forest: class = 1st 0.159974789 0.159090494 ## Random Forest: embarked = Cherbourg 0.045746262 0.051056420 ## Random Forest: fare = 25 -0.008647485 0.002175261 ## Random Forest: gender = male -0.126003135 -0.126984069 ## Random Forest: parch = 0 -0.003043951 -0.005439239 ## Random Forest: sibsp = 0 -0.005466244 -0.009070956 ## q3 max ## Random Forest: age = 47 -0.0230765745 -0.004967830 ## Random Forest: class = 1st 0.1851354780 0.232307204 ## Random Forest: embarked = Cherbourg 0.0558871772 0.117857725 ## Random Forest: fare = 25 0.0162267784 0.070487540 ## Random Forest: gender = male -0.1115160852 -0.101295877 ## Random Forest: parch = 0 -0.0008337109 0.003412778 ## Random Forest: sibsp = 0 0.0031207522 0.007650204 ``` By applying the generic function `plot()` to the `shap_henry` object, we obtain a graphical illustration of the results. ``` plot(shap_henry) ``` The resulting plot is shown in Figure [8\.4](shapley.html#fig:ShapforHenry). It includes the Shapley values and box plots summarizing the distributions of the variable\-specific contributions for the selected random orderings. Figure 8\.4: A plot of Shapley values with box plots for the `titanic_rf` model and passenger Henry for the Titanic data, obtained by applying the generic `plot()` function in R. To obtain a plot with only Shapley values, i.e., without the box plots, we apply the `show_boxplots=FALSE` argument in the `plot()` function call. ``` plot(shap_henry, show_boxplots = FALSE) ``` The resulting plot, shown in Figure [8\.5](shapley.html#fig:ShapOnlyforHenry), can be compared to the plot in the right\-hand\-side panel of Figure [8\.3](shapley.html#fig:shappJohny02) for Johnny D. The most remarkable difference is related to the contribution of *age*. The young age of Johnny D markedly increases the chances of survival, contrary to the negative contribution of the age of 47 for Henry. Figure 8\.5: A plot of Shapley values without box plots for the `titanic_rf` model and passenger Henry for the Titanic data, obtained by applying the generic `plot()` function in R. 8\.6 Code snippets for Python ----------------------------- In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. Note that the most popular implementation in Python is available in the `shap` library (Lundberg and Lee [2017](#ref-SHAP)). In this section, however, we show implementations from the `dalex` library because they are consistent with other methods presented in this book. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the 1st class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)). In the first step, we create an explainer\-object that provides a uniform interface for the predictive model. 
We use the `Explainer()` constructor for this purpose (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). ``` import pandas as pd henry = pd.DataFrame({'gender' : ['male'], 'age' : [47], 'class' : ['1st'], 'embarked': ['Cherbourg'], 'fare' : [25], 'sibsp' : [0], 'parch' : [0]}, index = ['Henry']) import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` To calculate Shapley values we use the `predict_parts()` method with the `type='shap'` argument (see Section [6\.7](breakDown.html#BDPython)). The first argument indicates the data observation for which the values are to be calculated. Results are stored in the `result` field. ``` bd_henry = titanic_rf_exp.predict_parts(henry, type = 'shap') bd_henry.result ``` To visualize the obtained values, we simply call the `plot()` method. ``` bd_henry.plot() ``` The resulting plot is shown in Figure [8\.6](shapley.html#fig:shapPython2). Figure 8\.6: A plot of Shapley values for the `titanic_rf` model and passenger Henry for the Titanic data, obtained by applying the `plot()` method in Python. By default, Shapley values are calculated and plotted for all variables in the data. To limit the number of variables included in the graph, we can use the argument `max_vars` in the `plot()` method (see Section [6\.7](breakDown.html#BDPython)).
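For instance, a minimal call restricting the plot to the five most important variables (the value 5 is an arbitrary, illustrative choice) might look as follows:

```
# Limit the plot to the five most important variables (illustrative value).
bd_henry.plot(max_vars = 5)
```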
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/LIME.html
9 Local Interpretable Model\-agnostic Explanations (LIME) ========================================================= 9\.1 Introduction ----------------- Break\-down (BD) plots and Shapley values, introduced in Chapters [6](breakDown.html#breakDown) and [8](shapley.html#shapley), respectively, are most suitable for models with a small or moderate number of explanatory variables. None of those approaches is well\-suited for models with a very large number of explanatory variables, because they usually determine non\-zero attributions for all variables in the model. However, in domains like, for instance, genomics or image recognition, models with hundreds of thousands, or even millions, of explanatory (input) variables are not uncommon. In such cases, sparse explanations with a small number of variables offer a useful alternative. The most popular example of such sparse explainers is the Local Interpretable Model\-agnostic Explanations (LIME) method and its modifications. The LIME method was originally proposed by Ribeiro, Singh, and Guestrin ([2016](#ref-lime)). The key idea behind it is to locally approximate a black\-box model by a simpler glass\-box model, which is easier to interpret. In this chapter, we describe this approach. 9\.2 Intuition -------------- The intuition behind the LIME method is explained in Figure [9\.1](LIME.html#fig:limeIntroduction). We want to understand the factors that influence a complex black\-box model around a single instance of interest (black cross). The coloured areas presented in Figure [9\.1](LIME.html#fig:limeIntroduction) correspond to decision regions for a binary classifier, i.e., they pertain to a prediction of a value of a binary dependent variable. The axes represent the values of two continuous explanatory variables. The coloured areas indicate combinations of values of the two variables for which the model classifies the observation to one of the two classes. To understand the local behavior of the complex model around the point of interest, we generate an artificial dataset, to which we fit a glass\-box model. The dots in Figure [9\.1](LIME.html#fig:limeIntroduction) represent the generated artificial data; the size of the dots corresponds to proximity to the instance of interest. We can fit a simpler glass\-box model to the artificial data so that it will locally approximate the predictions of the black\-box model. In Figure [9\.1](LIME.html#fig:limeIntroduction), a simple linear model (indicated by the dashed line) is used to construct the local approximation. The simpler model serves as a “local explainer” for the more complex model. We may select different classes of glass\-box models. The most typical choices are regularized linear models like LASSO regression (Tibshirani [1994](#ref-Tibshirani94regressionshrinkage)) or decision trees (Hothorn, Hornik, and Zeileis [2006](#ref-party2006)). Both lead to sparse models that are easier to understand. The important point is to limit the complexity of the models, so that they are easier to explain. Figure 9\.1: The idea behind the LIME approximation with a local glass\-box model. The coloured areas correspond to decision regions for a complex binary classification model. The black cross indicates the instance (observation) of interest. Dots correspond to artificial data around the instance of interest. The dashed line represents a simple linear model fitted to the artificial data. The simple model “explains” local behavior of the black\-box model around the instance of interest. 
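A minimal R sketch of this intuition is given below. It uses a toy black\-box function and hypothetical names, and it is not the code used by the packages discussed later in this chapter; its only purpose is to make the three ingredients (artificial data, proximity weights, simple local model) explicit.

```
# A toy illustration of the LIME idea (hypothetical example; this is not the
# implementation used by the packages discussed later in this chapter).
set.seed(42)

# A "black-box" binary classifier with two explanatory variables.
f <- function(x1, x2) as.numeric(x1^2 + x2 > 1)

# Instance of interest.
x_star <- c(x1 = 0.6, x2 = 0.4)

# Artificial data generated around the instance of interest.
N <- 1000
z <- data.frame(x1 = rnorm(N, mean = x_star["x1"], sd = 0.5),
                x2 = rnorm(N, mean = x_star["x2"], sd = 0.5))
z$y <- f(z$x1, z$x2)

# Weights reflecting the proximity of each artificial point to x_star.
z$w <- exp(-((z$x1 - x_star["x1"])^2 + (z$x2 - x_star["x2"])^2))

# Glass-box model: a simple linear model fitted locally with the weights.
glass_box <- lm(y ~ x1 + x2, data = z, weights = w)
coef(glass_box)
```

The coefficients of the weighted linear model describe how the two explanatory variables drive the black\-box predictions in the neighbourhood of the instance of interest.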
9\.3 Method ----------- We want to find a model that locally approximates a black\-box model \\(f()\\) around the instance of interest \\(\\underline{x}\_\*\\). Consider class \\(\\mathcal{G}\\) of simple, interpretable models like, for instance, linear models or decision trees. To find the required approximation, we minimize a “loss function”: \\\[ \\hat g \= \\arg \\min\_{g \\in \\mathcal{G}} L\\{f, g, \\nu(\\underline{x}\_\*)\\} \+ \\Omega (g), \\] where model \\(g()\\) belongs to class \\(\\mathcal{G}\\), \\(\\nu(\\underline{x}\_\*)\\) defines a neighborhood of \\(\\underline{x}\_\*\\) in which approximation is sought, \\(L()\\) is a function measuring the discrepancy between models \\(f()\\) and \\(g()\\) in the neighborhood \\(\\nu(\\underline{x}\_\*)\\), and \\(\\Omega(g)\\) is a penalty for the complexity of model \\(g()\\). The penalty is used to favour simpler models from class \\(\\mathcal{G}\\). In applications, this criterion is very often simplified by limiting class \\(\\mathcal{G}\\) to models with the same complexity, i.e., with the same number of coefficients. In such a situation, \\(\\Omega(g)\\) is the same for each model \\(g()\\), so it can be omitted in optimization. Note that models \\(f()\\) and \\(g()\\) may operate on different data spaces. The black\-box model (function) \\(f(\\underline{x}):\\mathcal X \\rightarrow \\mathcal R\\) is defined on a large, \\(p\\)\-dimensional space \\(\\mathcal X\\) corresponding to the \\(p\\) explanatory variables used in the model. The glass\-box model (function) \\(g(\\underline{x}):\\tilde{ \\mathcal X} \\rightarrow \\mathcal R\\) is defined on a \\(q\\)\-dimensional space \\(\\tilde{ \\mathcal X}\\) with \\(q \<\< p\\), often called the “space for interpretable representation”. We will present some examples of \\(\\tilde{ \\mathcal X}\\) in the next section. For now we will just assume that some function \\(h()\\) transforms \\(\\mathcal X\\) into \\(\\tilde{ \\mathcal X}\\). If we limit class \\(\\mathcal{G}\\) to linear models with a limited number, say \\(K\\), of non\-zero coefficients, then the following algorithm may be used to find an interpretable glass\-box model \\(g()\\) that includes \\(K\\) most important, interpretable, explanatory variables: ``` Input: x* - observation to be explained Input: N - sample size for the glass-box model Input: K - complexity, the number of variables for the glass-box model Input: similarity - a distance function in the original data space 1. Let x' = h(x*) be a version of x* in the lower-dimensional space 2. for i in 1...N { 3. z'[i] <- sample_around(x') 4. y'[i] <- f(z'[i]) # prediction for new observation z'[i] 5. w'[i] <- similarity(x', z'[i]) 6. } 7. return K-LASSO(y', z', w') ``` In Step 7, `K-LASSO(y', z', w')` stands for a weighted LASSO linear\-regression that selects \\(K\\) variables based on the sampled data `z'` and the corresponding predictions `y'` with weights `w'`. Practical implementation of this idea involves three important steps, which are discussed in the subsequent subsections. ### 9\.3\.1 Interpretable data representation As it has been mentioned, the black\-box model \\(f()\\) and the glass\-box model \\(g()\\) operate on different data spaces. For example, let us consider a VGG16 neural network (Simonyan and Zisserman [2015](#ref-Simonyan15)) trained on the ImageNet data (Deng et al. [2009](#ref-ImageNet)). The model uses an image of the size of 224 \\(\\times\\) 224 pixels as input and predicts to which of 1000 potential categories the image belongs. 
The original space \\(\\mathcal X\\) is of dimension 3 \\(\\times\\) 224 \\(\\times\\) 224 (three single\-color channels (*red, green, blue*) for a single pixel \\(\\times\\) 224 \\(\\times\\) 224 pixels), i.e., the input space is 150,528\-dimensional. Explaining predictions in such a high\-dimensional space is difficult. Instead, from the perspective of a single instance of interest, the space can be transformed into superpixels, which are treated as binary features that can be turned on or off. Figure [9\.2](LIME.html#fig:duckHorse06) (right\-hand\-side panel) presents an example of 100 superpixels created for an ambiguous picture. Thus, in this case the black\-box model \\(f()\\) operates on space \\(\\mathcal X\=\\mathcal{R}^{150528}\\), while the glass\-box model \\(g()\\) applies to space \\(\\tilde{ \\mathcal X} \= \\{0,1\\}^{100}\\). It is worth noting that superpixels, based on image segmentation, are frequent choices for image data. For text data, groups of words are frequently used as interpretable variables. For tabular data, continuous variables are often discretized to obtain interpretable categorical data. In the case of categorical variables, combinations of categories are often used. We will present examples in the next section. Figure 9\.2: The left\-hand\-side panel shows an ambiguous picture, half\-horse and half\-duck (source [Twitter](https://twitter.com/finmaddison/status/352128550704398338)). The right\-hand\-side panel shows 100 superpixels identified for this figure. ### 9\.3\.2 Sampling around the instance of interest To develop a local\-approximation glass\-box model, we need new data points in the low\-dimensional interpretable data space around the instance of interest. One could consider sampling the data points from the original dataset. However, there may not be enough points to sample from, because the data in high\-dimensional datasets are usually very sparse and data points are “far” from each other. Thus, we need new, artificial data points. For this reason, the data for the development of the glass\-box model is often created by using perturbations of the instance of interest. For binary variables in the low\-dimensional space, the common choice is to switch (from 0 to 1 or from 1 to 0\) the value of a randomly\-selected number of variables describing the instance of interest. For continuous variables, various proposals have been formulated in different papers. For example, Molnar, Bischl, and Casalicchio ([2018](#ref-imlRPackage)) and Molnar ([2019](#ref-molnar2019)) suggest adding Gaussian noise to continuous variables. Pedersen and Benesty ([2019](#ref-limePackage)) propose to discretize continuous variables by using quantiles and then perturb the discretized versions of the variables. Staniak et al. ([2019](#ref-localModelPackage)) discretize continuous variables based on segmentation of local ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)). In the example of the duck\-horse image in Figure [9\.2](LIME.html#fig:duckHorse06), the perturbations of the image could be created by randomly excluding some of the superpixels. An illustration of this process is shown in Figure [9\.3](LIME.html#fig:duckHorseProcess). Figure 9\.3: The original image (left\-hand\-side panel) is transformed into a lower\-dimensional data space by defining 100 superpixels (panel in the middle). The artificial data are created by using subsets of superpixels (right\-hand\-side panel). 
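To make the sampling step more concrete, the sketch below (with hypothetical names; it does not reproduce the code of any particular package) illustrates the two perturbation strategies mentioned above: switching components of a binary interpretable representation and adding Gaussian noise to continuous variables.

```
# Hypothetical sketch of two common perturbation strategies.
set.seed(1)

# (a) Binary interpretable representation (e.g., 100 superpixels, all "on"):
#     switch off a randomly selected subset of components.
x_interpretable <- rep(1, 100)
perturb_binary <- function(x) {
  k <- sample(length(x), 1)          # how many components to switch off
  x[sample(length(x), k)] <- 0
  x
}

# (b) Continuous variables: add Gaussian noise around the instance of interest.
x_continuous <- c(age = 47, fare = 25)
perturb_continuous <- function(x, sd = 1) x + rnorm(length(x), mean = 0, sd = sd)

# Generate a few artificial observations of each kind.
z_binary     <- t(replicate(5, perturb_binary(x_interpretable)))
z_continuous <- t(replicate(5, perturb_continuous(x_continuous)))
```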
### 9\.3\.3 Fitting the glass\-box model Once the artificial data around the instance of interest have been created, we may attempt to fit an interpretable glass\-box model \\(g()\\) from class \\(\\mathcal{G}\\). The most common choices for class \\(\\mathcal{G}\\) are generalized linear models. To get sparse models, i.e., models with a limited number of variables, LASSO (least absolute shrinkage and selection operator) (Tibshirani [1994](#ref-Tibshirani94regressionshrinkage)) or similar regularization\-modelling techniques are used. For instance, in the algorithm presented in Section [9\.3](LIME.html#LIMEMethod), the K\-LASSO method with \\(K\\) non\-zero coefficients has been mentioned. An alternative choice is classification\-and\-regression tree models (Breiman et al. [1984](#ref-CARTtree)). For the example of the duck\-horse image in Figure [9\.2](LIME.html#fig:duckHorse06), the VGG16 network provides 1000 probabilities that the image belongs to one of the 1000 classes used for training the network. It appears that the two most likely classes for the image are *‘standard poodle’* (probability of 0\.18\) and *‘goose’* (probability of 0\.15\). Figure [9\.4](LIME.html#fig:duckHorse04) presents LIME explanations for these two predictions. The explanations were obtained with the K\-LASSO method, which selected \\(K\=15\\) superpixels that were the most influential from a model\-prediction point of view. For each of the two selected classes, the \\(K\\) superpixels with non\-zero coefficients are highlighted. It is interesting to observe that the superpixel which contains the beak is influential for the *‘goose’* prediction, while superpixels linked with the white colour are influential for the *‘standard poodle’* prediction. At least for the former, the influential feature of the plot does correspond to the intended content of the image. Thus, the results of the explanation increase confidence in the model’s predictions. Figure 9\.4: LIME for two predictions (‘standard poodle’ and ‘goose’) obtained by the VGG16 network with ImageNet weights for the half\-duck, half\-horse image. 9\.4 Example: Titanic data -------------------------- Most examples of the LIME method are related to text or image data. In this section, we present an example of a binary classification for tabular data to facilitate comparisons between methods introduced in different chapters. Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest for the Titanic data. First, we have got to define an interpretable data space. One option would be to gather similar variables into larger constructs corresponding to some concepts. For example, the *class* and *fare* variables can be combined into “wealth”, *age* and *gender* into “demography”, and so on. In this example, however, we have got a relatively small number of variables, so we will use a simpler data representation in the form of a binary vector. Toward this aim, each variable is dichotomized into two levels. For example, *age* is transformed into a binary variable with categories “\\(\\leq\\) 15\.36” and “\>15\.36”, *class* is transformed into a binary variable with categories “1st/2nd/deck crew” and “other”, and so on. Once the lower\-dimension data space is defined, the LIME algorithm is applied to this space. 
In particular, we first have to transform the data for Johnny D into this binary representation. Subsequently, we generate a new, artificial dataset that is used for the K\-LASSO approximation of the random forest model. Specifically, the K\-LASSO method with \\(K\=3\\) is used to identify the three most influential (binary) variables that provide an explanation for the prediction for Johnny D. The three variables are: *age*, *gender*, and *class*. This result agrees with the conclusions drawn in the previous chapters. Figure [9\.5](LIME.html#fig:LIMEexample01) shows the coefficients estimated for the K\-LASSO model.

Figure 9\.5: LIME method for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data. Presented values are the coefficients of the K\-LASSO model fitted locally to the predictions from the original model.

9\.5 Pros and cons
------------------

As mentioned by Ribeiro, Singh, and Guestrin ([2016](#ref-lime)), the LIME method

* is *model\-agnostic*, as it does not make any assumptions about the black\-box model structure;
* offers an *interpretable representation*, because the original data space is transformed (for instance, by replacing individual pixels by superpixels for image data) into a more interpretable, lower\-dimensional space;
* provides *local fidelity*, i.e., the explanations are locally well\-fitted to the black\-box model.

The method has been widely adopted in text and image analysis, partly due to the interpretable data representation. In those applications, the explanations are delivered in the form of fragments of an image or text, and users can easily see the justification for such explanations. The underlying intuition for the method is easy to understand: a simpler model is used to approximate a more complex one. By using a simpler model, with a smaller number of interpretable explanatory variables, predictions are easier to explain. The LIME method can be applied to complex, high\-dimensional models.

There are several important limitations, however. For instance, as mentioned in Section [9\.3\.2](LIME.html#LIMEsample), there have been various proposals for finding interpretable representations for continuous and categorical explanatory variables in the case of tabular data. The issue has not been definitively solved yet. As a result, different implementations of LIME use different variable\-transformation methods and, consequently, may yield different explanations.

Another important point is that, because the glass\-box model is selected to approximate the black\-box model, and not the data themselves, the method does not control the quality of the local fit of the glass\-box model to the data. Thus, the glass\-box model may be misleading.

Finally, in high\-dimensional data, data points are sparse, and defining a “local neighborhood” of the instance of interest may not be straightforward. The importance of the neighborhood selection is discussed, for example, by Alvarez\-Melis and Jaakkola ([2018](#ref-LIMESHAPstability)). Sometimes even slight changes in the neighborhood strongly affect the obtained explanations.

To summarize, the most useful applications of LIME are limited to high\-dimensional data for which one can define a low\-dimensional interpretable data representation, as in image analysis, text analysis, or genomics.

9\.6 Code snippets for R
------------------------

LIME and its variants are implemented in various R and Python packages.
For example, `lime` (Pedersen and Benesty [2019](#ref-limePackage)) started as a port of the LIME Python library (Lundberg [2019](#ref-shapPackage)), while `localModel` (Staniak et al. [2019](#ref-localModelPackage)) and `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)) are separate packages that implement a version of this method entirely in R.

Different implementations of LIME offer different algorithms for the extraction of interpretable features, different methods of sampling, and different methods of weighting. For instance, regarding the transformation of continuous variables into interpretable features, `lime` performs global discretization using quartiles, `localModel` performs local discretization using ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)), while `iml` works directly on continuous variables. Due to these differences, the packages yield different results (explanations). Moreover, `lime`, `localModel`, and `iml` use different functions to implement the LIME method. Therefore, we will use the `predict_surrogate()` function from the `DALEXtra` package, which offers a uniform interface to the functions from the three packages.

In what follows, for illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that it was developed to predict the probability of survival from the sinking of the Titanic. Instance\-level explanations are calculated for Johnny D, an 8\-year\-old passenger who travelled in the first class.

We first retrieve the `titanic_rf` model\-object and the data frame for Johnny D via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.

```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
johnny_d <- archivist::aread("pbiecek/models/e3596")
```

```
  class gender age sibsp parch fare    embarked
1   1st   male   8     0     0   72 Southampton
```

Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available.

```
library("randomForest")
library("DALEX")
titanic_rf_exp <- DALEX::explain(model = titanic_rf,
                  data = titanic_imputed[, -9],
                  y = titanic_imputed$survived == "yes",
                  label = "Random Forest")
```

### 9\.6\.1 The `lime` package

The key functions in the `lime` package are `lime()`, which creates an explainer, and `explain()`, which computes explanations for new observations. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access these functions via an interface that is consistent with the approach used in the previous chapters.

The `predict_surrogate()` function requires two arguments: `explainer`, which specifies the name of the explainer\-object created with the help of function `explain()` from the `DALEX` package, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest.
An additional important argument is `type`, which indicates the package with the desired implementation of the LIME method: either `"localModel"` (default), `"lime"`, or `"iml"`. In the case of the `lime`\-package implementation, we can specify two additional arguments: `n_features`, to indicate the maximum number (\\(K\\)) of explanatory variables to be selected by the K\-LASSO method, and `n_permutations`, to specify the number of artificial data points to be sampled for the local\-model approximation.

In the code below, we apply the `predict_surrogate()` function to the explainer\-object for the random forest model `titanic_rf` and the data for Johnny D. Additionally, we specify that the K\-LASSO method should select no more than `n_features=3` explanatory variables based on a fit to `n_permutations=1000` sampled data points. Note that we use the `set.seed()` function to ensure repeatability of the sampling.

```
set.seed(1)
library("DALEXtra")
library("lime")
model_type.dalex_explainer <- DALEXtra::model_type.dalex_explainer
predict_model.dalex_explainer <- DALEXtra::predict_model.dalex_explainer

lime_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                  new_observation = johnny_d,
                  n_features = 3,
                  n_permutations = 1000,
                  type = "lime")
```

The contents of the resulting object can be printed out in the form of a data frame with 11 variables.

```
(as.data.frame(lime_johnny))
```

```
##   model_type case  model_r2 model_intercept model_prediction feature
## 1 regression    1 0.6826437       0.5541115        0.4784804  gender
## 2 regression    1 0.6826437       0.5541115        0.4784804     age
## 3 regression    1 0.6826437       0.5541115        0.4784804   class
##   feature_value feature_weight  feature_desc                 data
## 1             2     -0.4038175 gender = male 1, 2, 8, 0, 0, 72, 4
## 2             8      0.1636630     age <= 22 1, 2, 8, 0, 0, 72, 4
## 3             1      0.1645234   class = 1st 1, 2, 8, 0, 0, 72, 4
##   prediction
## 1      0.422
## 2      0.422
## 3      0.422
```

The output includes column `case` that provides indices of observations for which the explanations are calculated. In our case there is only one index equal to 1, because we asked for an explanation for only one observation, Johnny D. The `feature` column indicates which explanatory variables were given non\-zero coefficients in the K\-LASSO method. The `feature_value` column provides information about the values of the original explanatory variables for the observations for which the explanations are calculated. On the other hand, the `feature_desc` column indicates how the original explanatory variable was transformed. Note that the applied implementation of the LIME method dichotomizes continuous variables by using quartiles. Hence, for instance, *age* for Johnny D was transformed into a binary variable `age <= 22`.

Column `feature_weight` provides the estimated coefficients for the variables selected by the K\-LASSO method for the explanation. The `model_intercept` column provides the value of the intercept. Thus, the linear combination of the transformed explanatory variables used in the glass\-box model approximating the random forest model around the instance of interest, Johnny D, is given by the following equation (see Section [2\.5](modelDevelopmentProcess.html#fitting)):

\\\[ \\hat p\_{lime} \= 0\.55411 \- 0\.40381 \\cdot 1\_{male} \+ 0\.16366 \\cdot 1\_{age \<\= 22} \+ 0\.16452 \\cdot 1\_{class \= 1st} \= 0\.47848, \\\]

where \\(1\_A\\) denotes the indicator variable for condition \\(A\\). Note that the computed value corresponds to the number given in the column `model_prediction` in the printed output.
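As a quick arithmetic check, the linear combination above can be reproduced directly from the printed coefficients. The snippet below uses only the numbers shown in the output; it does not call any package.

```
# reproduce the local-model prediction for Johnny D from the printed LIME output:
# intercept + sum of the feature weights (all three indicator variables equal 1)
intercept <- 0.5541115
weights   <- c(gender_male = -0.4038175,
               age_le_22   =  0.1636630,
               class_1st   =  0.1645234)
intercept + sum(weights)
# [1] 0.4784804
```

This agrees with the value of `model_prediction` (0\.4784804\) reported for Johnny D.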
By applying the `plot()` function to the object containing the explanation, we obtain a graphical presentation of the results.

```
plot(lime_johnny)
```

The resulting plot is shown in Figure [9\.6](LIME.html#fig:limeExplLIMETitanic). The length of a bar indicates the magnitude (absolute value) of the estimated coefficient, while the colour indicates its sign (red for negative, blue for positive).

Figure 9\.6: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `lime` package.

### 9\.6\.2 The `localModel` package

The key function of the `localModel` package is the `individual_surrogate_model()` function that fits the local glass\-box model. The function is applied to the explainer\-object obtained with the help of the `DALEX::explain()` function (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters.

To choose the `localModel`\-implementation of LIME, we set argument `type="localModel"` (see Section [9\.6\.1](LIME.html#LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer` and `new_observation`, two additional arguments: `size`, which specifies the number of artificial data points to be sampled for the local\-model approximation, and `seed`, which sets the seed of the random\-number generator and thus allows for repeatable execution.

In the code below, we apply the `predict_surrogate()` function to the explainer\-object for the random forest model `titanic_rf` and the data for Johnny D. Additionally, we specify that 1000 data points are to be sampled and we set the random\-number\-generation seed.

```
library("localModel")
locMod_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                  new_observation = johnny_d,
                  size = 1000,
                  seed = 1,
                  type = "localModel")
```

The resulting object is a data frame with seven variables (columns). For brevity, we only print out the first three variables.

```
locMod_johnny[,1:3]
```

```
##     estimated                        variable original_variable
## 1  0.23530947                    (Model mean)                  
## 2  0.30331646                     (Intercept)                  
## 3  0.06004988                   gender = male            gender
## 4 -0.05222505                    age <= 15.36               age
## 5  0.20988506     class = 1st, 2nd, deck crew             class
## 6  0.00000000 embarked = Belfast, Southampton          embarked
```

The printed output includes column `estimated` that contains the estimated coefficients of the LASSO regression model, which is used to approximate the predictions from the random forest model. Column `variable` provides the information about the corresponding variables, which are transformations of `original_variable`.

Note that the version of LIME implemented in the `localModel` package dichotomizes continuous variables by using ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)). The profile for variable *age* for Johnny D can be obtained by using function `plot_interpretable_feature()`, as shown below.

```
plot_interpretable_feature(locMod_johnny, "age")
```

The resulting plot is presented in Figure [9\.7](LIME.html#fig:LIMEexample02). The profile indicates that the largest drop in the predicted probability of survival is observed when the value of *age* increases beyond about 15 years.
Hence, in the output of the `predict_surrogate()` function, we see the binary variable `age <= 15.36`, as Johnny D was 8 years old.

Figure 9\.7: Discretization of the *age* variable for Johnny D based on the ceteris\-paribus profile. The optimal change\-point is around 15 years of age.

By applying the generic `plot()` function to the object containing the LIME\-method results, we obtain a graphical representation of the results.

```
plot(locMod_johnny)
```

The resulting plot is shown in Figure [9\.8](LIME.html#fig:limeExplLocalModelTitanic). The lengths of the bars indicate the magnitude (absolute value) of the estimated coefficients of the LASSO logistic regression model. The bars are placed relative to the value of the mean prediction, 0\.235\.

Figure 9\.8: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `localModel` package.

### 9\.6\.3 The `iml` package

The key functions of the `iml` package are `Predictor$new()`, which creates an explainer, and `LocalModel$new()`, which develops the local glass\-box model. The main arguments of the `Predictor$new()` function are `model`, which specifies the model\-object, and `data`, the data frame used for fitting the model. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters.

To choose the `iml`\-implementation of LIME, we set argument `type="iml"` (see Section [9\.6\.1](LIME.html#LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer` and `new_observation`, an additional argument `k` that specifies the number of explanatory variables included in the local\-approximation model.

```
library("DALEXtra")
library("iml")
iml_johnny <- predict_surrogate(explainer = titanic_rf_exp,
                  new_observation = johnny_d,
                  k = 3,
                  type = "iml")
```

The resulting object includes the data frame `results`, with seven variables, that provides the results of the LASSO logistic regression model used to approximate the predictions of the random forest model. For brevity, we print out selected variables.

```
iml_johnny$results[,c(1:5,7)]
```

```
##            beta x.recoded      effect x.original     feature .class
## 1 -0.1992616770         1 -0.19926168        1st   class=1st     no
## 2  1.6005493672         1  1.60054937       male gender=male     no
## 3 -0.0002111346        72 -0.01520169         72        fare     no
## 4  0.1992616770         1  0.19926168        1st   class=1st    yes
## 5 -1.6005493672         1 -1.60054937       male gender=male    yes
## 6  0.0002111346        72  0.01520169         72        fare    yes
```

The printed output includes column `beta` that provides the estimated coefficients of the local\-approximation model. Note that two sets of three coefficients (six in total) are given, corresponding to the prediction of the probability of death (column `.class` assuming value `no`, corresponding to the value `"no"` of the `survived` dependent variable) and survival (`.class` assuming value `yes`). Column `x.recoded` contains the information about the value of the corresponding transformed (interpretable) variable. The value of the original explanatory variable is given in column `x.original`, with column `feature` providing the information about the corresponding variable. Note that the implemented version of LIME does not transform continuous variables.
Categorical variables are dichotomized, with the resulting binary variable assuming the value of 1 for the category observed for the instance of interest and 0 for other categories. The `effect` column provides the product of the estimated coefficient (from column `beta`) and the value of the interpretable covariate (from column `x.recoded`) of the model approximating the random forest model.

By applying the generic `plot()` function to the object containing the LIME\-method results, we obtain a graphical representation of the results.

```
plot(iml_johnny)
```

The resulting plot is shown in Figure [9\.9](LIME.html#fig:limeExplIMLTitanic). It shows the values of the two sets of three coefficients for both types of predictions (probability of death and survival).

Figure 9\.9: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `iml` package.

It is worth noting that *age*, *gender*, and *class* are correlated. For instance, crew members are only adults and mainly men. This is probably the reason why the three packages implementing the LIME method generate slightly different explanations for the model prediction for Johnny D.

9\.7 Code snippets for Python
-----------------------------

In this section, we use the `lime` library for Python, which is probably the most popular implementation of the LIME method (Ribeiro, Singh, and Guestrin [2016](#ref-lime)). The `lime` library requires categorical variables to be encoded in a numerical format, which requires some additional work with the data. Therefore, below we show how to use this method in Python step by step.

For illustration purposes, we use the random forest model for the Titanic data. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger who travelled in the first class.

In the first step, we read the Titanic data and encode the categorical variables. In this case, we use the simplest encoding for *gender*, *class*, and *embarked*, i.e., label encoding.

```
import dalex as dx
titanic = dx.datasets.load_titanic()
X = titanic.drop(columns='survived')
y = titanic.survived

from sklearn import preprocessing
le = preprocessing.LabelEncoder()
X['gender'] = le.fit_transform(X['gender'])
X['class'] = le.fit_transform(X['class'])
X['embarked'] = le.fit_transform(X['embarked'])
```

In the next step, we train a random forest model.

```
from sklearn.ensemble import RandomForestClassifier as rfc
titanic_rf = rfc()
titanic_rf.fit(X, y)
```

We now define the observation for which the model prediction will be explained and write Henry’s data into a `pandas.Series` object.

```
import pandas as pd
henry = pd.Series([1, 47.0, 0, 1, 25.0, 0, 0],
                  index=['gender', 'age', 'class', 'embarked',
                         'fare', 'sibsp', 'parch'])
```

The `lime` library explains models that operate on images, text, or tabular data. In the latter case, we have to use the `LimeTabularExplainer` module.

```
from lime.lime_tabular import LimeTabularExplainer
explainer = LimeTabularExplainer(X,
                      feature_names=X.columns,
                      class_names=['died', 'survived'],
                      discretize_continuous=False,
                      verbose=True)
```

The result is an explainer that can be used to interpret a model around specific observations. In the following example, we explain the behaviour of the model for Henry. The `explain_instance()` method finds a local approximation with an interpretable linear model. The result can be presented graphically with the `show_in_notebook()` method.
```
lime = explainer.explain_instance(henry, titanic_rf.predict_proba)
lime.show_in_notebook(show_table=True)
```

The resulting plot is shown in Figure [9\.10](LIME.html#fig:limePython1).

Figure 9\.10: A plot of LIME model values for the random forest model and passenger Henry for the Titanic data.

9\.1 Introduction
-----------------

Break\-down (BD) plots and Shapley values, introduced in Chapters [6](breakDown.html#breakDown) and [8](shapley.html#shapley), respectively, are most suitable for models with a small or moderate number of explanatory variables. Neither of those approaches is well\-suited for models with a very large number of explanatory variables, because they usually determine non\-zero attributions for all variables in the model. However, in domains like, for instance, genomics or image recognition, models with hundreds of thousands, or even millions, of explanatory (input) variables are not uncommon. In such cases, sparse explanations with a small number of variables offer a useful alternative. The most popular example of such sparse explainers is the Local Interpretable Model\-agnostic Explanations (LIME) method and its modifications.

The LIME method was originally proposed by Ribeiro, Singh, and Guestrin ([2016](#ref-lime)). The key idea behind it is to locally approximate a black\-box model by a simpler glass\-box model, which is easier to interpret. In this chapter, we describe this approach.

9\.2 Intuition
--------------

The intuition behind the LIME method is explained in Figure [9\.1](LIME.html#fig:limeIntroduction). We want to understand the factors that influence a complex black\-box model around a single instance of interest (black cross). The axes in the figure represent the values of two continuous explanatory variables, and the coloured areas correspond to the decision regions of a binary classifier, i.e., they indicate the combinations of values of the two variables for which the model classifies an observation into one of the two classes. To understand the local behavior of the complex model around the point of interest, we generate an artificial dataset, to which we fit a glass\-box model. The dots in Figure [9\.1](LIME.html#fig:limeIntroduction) represent the generated artificial data; the size of the dots corresponds to their proximity to the instance of interest. We can fit a simpler glass\-box model to the artificial data so that it will locally approximate the predictions of the black\-box model. In Figure [9\.1](LIME.html#fig:limeIntroduction), a simple linear model (indicated by the dashed line) is used to construct the local approximation. The simpler model serves as a “local explainer” for the more complex model.

We may select different classes of glass\-box models. The most typical choices are regularized linear models like LASSO regression (Tibshirani [1994](#ref-Tibshirani94regressionshrinkage)) or decision trees (Hothorn, Hornik, and Zeileis [2006](#ref-party2006)). Both lead to sparse models that are easier to understand. The important point is to limit the complexity of the models, so that they are easier to explain.

Figure 9\.1: The idea behind the LIME approximation with a local glass\-box model. The coloured areas correspond to decision regions for a complex binary classification model. The black cross indicates the instance (observation) of interest. Dots correspond to artificial data around the instance of interest.
The dashed line represents a simple linear model fitted to the artificial data. The simple model “explains” the local behavior of the black\-box model around the instance of interest.

9\.3 Method
-----------

We want to find a model that locally approximates a black\-box model \\(f()\\) around the instance of interest \\(\\underline{x}\_\*\\). Consider class \\(\\mathcal{G}\\) of simple, interpretable models like, for instance, linear models or decision trees. To find the required approximation, we minimize a “loss function”:

\\\[ \\hat g \= \\arg \\min\_{g \\in \\mathcal{G}} L\\{f, g, \\nu(\\underline{x}\_\*)\\} \+ \\Omega (g), \\\]

where model \\(g()\\) belongs to class \\(\\mathcal{G}\\), \\(\\nu(\\underline{x}\_\*)\\) defines a neighborhood of \\(\\underline{x}\_\*\\) in which approximation is sought, \\(L()\\) is a function measuring the discrepancy between models \\(f()\\) and \\(g()\\) in the neighborhood \\(\\nu(\\underline{x}\_\*)\\), and \\(\\Omega(g)\\) is a penalty for the complexity of model \\(g()\\). The penalty is used to favour simpler models from class \\(\\mathcal{G}\\). In applications, this criterion is very often simplified by limiting class \\(\\mathcal{G}\\) to models with the same complexity, i.e., with the same number of coefficients. In such a situation, \\(\\Omega(g)\\) is the same for each model \\(g()\\), so it can be omitted from the optimization.

Note that models \\(f()\\) and \\(g()\\) may operate on different data spaces. The black\-box model (function) \\(f(\\underline{x}):\\mathcal X \\rightarrow \\mathcal R\\) is defined on a large, \\(p\\)\-dimensional space \\(\\mathcal X\\) corresponding to the \\(p\\) explanatory variables used in the model. The glass\-box model (function) \\(g(\\underline{x}):\\tilde{ \\mathcal X} \\rightarrow \\mathcal R\\) is defined on a \\(q\\)\-dimensional space \\(\\tilde{ \\mathcal X}\\) with \\(q \\ll p\\), often called the “space for interpretable representation”. We will present some examples of \\(\\tilde{ \\mathcal X}\\) in the next section. For now we will just assume that some function \\(h()\\) transforms \\(\\mathcal X\\) into \\(\\tilde{ \\mathcal X}\\).

If we limit class \\(\\mathcal{G}\\) to linear models with a limited number, say \\(K\\), of non\-zero coefficients, then the following algorithm may be used to find an interpretable glass\-box model \\(g()\\) that includes the \\(K\\) most important, interpretable, explanatory variables:

```
Input: x* - observation to be explained
Input: N - sample size for the glass-box model
Input: K - complexity, the number of variables for the glass-box model
Input: similarity - a distance function in the original data space
1. Let x' = h(x*) be a version of x* in the lower-dimensional space
2. for i in 1...N {
3.   z'[i] <- sample_around(x')
4.   y'[i] <- f(z'[i])               # prediction for new observation z'[i]
5.   w'[i] <- similarity(x', z'[i])
6. }
7. return K-LASSO(y', z', w')
```

In Step 7, `K-LASSO(y', z', w')` stands for a weighted LASSO linear\-regression that selects \\(K\\) variables based on the sampled observations `z'` and the corresponding black\-box predictions `y'`, with weights `w'`. Practical implementation of this idea involves three important steps, which are discussed in the subsequent subsections.

### 9\.3\.1 Interpretable data representation

As already mentioned, the black\-box model \\(f()\\) and the glass\-box model \\(g()\\) operate on different data spaces. For example, let us consider a VGG16 neural network (Simonyan and Zisserman [2015](#ref-Simonyan15)) trained on the ImageNet data (Deng et al. [2009](#ref-ImageNet)).
The model uses an image of the size of 224 \\(\\times\\) 224 pixels as input and predicts to which of 1000 potential categories the image belongs.
Figure 9\.3: The original image (left\-hand\-side panel) is transformed into a lower\-dimensional data space by defining 100 super pixels (panel in the middle). The artificial data are created by using subsets of superpixels (right\-hand\-side panel). ### 9\.3\.3 Fitting the glass\-box model Once the artificial data around the instance of interest have been created, we may attempt to fit an interpretable glass\-box model \\(g()\\) from class \\(\\mathcal{G}\\). The most common choices for class \\(\\mathcal{G}\\) are generalized linear models. To get sparse models, i.e., models with a limited number of variables, LASSO (least absolute shrinkage and selection operator) (Tibshirani [1994](#ref-Tibshirani94regressionshrinkage)) or similar regularization\-modelling techniques are used. For instance, in the algorithm presented in Section [9\.3](LIME.html#LIMEMethod), the K\-LASSO method with K non\-zero coefficients has been mentioned. An alternative choice are classification\-and\-regression trees models (Breiman et al. [1984](#ref-CARTtree)). For the example of the duck\-horse image in Figure [9\.2](LIME.html#fig:duckHorse06), the VGG16 network provides 1000 probabilities that the image belongs to one of the 1000 classes used for training the network. It appears that the two most likely classes for the image are *‘standard poodle’* (probability of 0\.18\) and *‘goose’* (probability of 0\.15\). Figure [9\.4](LIME.html#fig:duckHorse04) presents LIME explanations for these two predictions. The explanations were obtained with the K\-LASSO method, which selected \\(K\=15\\) superpixels that were the most influential from a model\-prediction point of view. For each of the selected two classes, the \\(K\\) superpixels with non\-zero coefficients are highlighted. It is interesting to observe that the superpixel which contains the beak is influential for the *‘goose’* prediction, while superpixels linked with the white colour are influential for the *‘standard poodle’* prediction. At least for the former, the influential feature of the plot does correspond to the intended content of the image. Thus, the results of the explanation increase confidence in the model’s predictions. Figure 9\.4: LIME for two predictions (‘standard poodle’ and ‘goose’) obtained by the VGG16 network with ImageNet weights for the half\-duck, half\-horse image. TODO: fix apostrophes! ### 9\.3\.1 Interpretable data representation As it has been mentioned, the black\-box model \\(f()\\) and the glass\-box model \\(g()\\) operate on different data spaces. For example, let us consider a VGG16 neural network (Simonyan and Zisserman [2015](#ref-Simonyan15)) trained on the ImageNet data (Deng et al. [2009](#ref-ImageNet)). The model uses an image of the size of 244 \\(\\times\\) 244 pixels as input and predicts to which of 1000 potential categories does the image belong to. The original space \\(\\mathcal X\\) is of dimension 3 \\(\\times\\) 244 \\(\\times\\) 244 (three single\-color channels (*red, green, blue*) for a single pixel \\(\\times\\) 244 \\(\\times\\) 244 pixels), i.e., the input space is 178,608\-dimensional. Explaining predictions in such a high\-dimensional space is difficult. Instead, from the perspective of a single instance of interest, the space can be transformed into superpixels, which are treated as binary features that can be turned on or off. Figure [9\.2](LIME.html#fig:duckHorse06) (right\-hand\-side panel) presents an example of 100 superpixels created for an ambiguous picture. 
Thus, in this case the black\-box model \\(f()\\) operates on space \\(\\mathcal X\=\\mathcal{R}^{178608}\\), while the glass\-box model \\(g()\\) applies to space \\(\\tilde{ \\mathcal X} \= \\{0,1\\}^{100}\\). It is worth noting that superpixels, based on image segmentation, are frequent choices for image data. For text data, groups of words are frequently used as interpretable variables. For tabular data, continuous variables are often discretized to obtain interpretable categorical data. In the case of categorical variables, combination of categories is often used. We will present examples in the next section. Figure 9\.2: The left\-hand\-side panel shows an ambiguous picture, half\-horse and half\-duck (source [Twitter](https://twitter.com/finmaddison/status/352128550704398338)). The right\-hand\-side panel shows 100 superpixels identified for this figure. ### 9\.3\.2 Sampling around the instance of interest To develop a local\-approximation glass\-box model, we need new data points in the low\-dimensional interpretable data space around the instance of interest. One could consider sampling the data points from the original dataset. However, there may not be enough points to sample from, because the data in high\-dimensional datasets are usually very sparse and data points are “far” from each other. Thus, we need new, artificial data points. For this reason, the data for the development of the glass\-box model is often created by using perturbations of the instance of interest. For binary variables in the low\-dimensional space, the common choice is to switch (from 0 to 1 or from 1 to 0\) the value of a randomly\-selected number of variables describing the instance of interest. For continuous variables, various proposals have been formulated in different papers. For example, Molnar, Bischl, and Casalicchio ([2018](#ref-imlRPackage)) and Molnar ([2019](#ref-molnar2019)) suggest adding Gaussian noise to continuous variables. Pedersen and Benesty ([2019](#ref-limePackage)) propose to discretize continuous variables by using quantiles and then perturb the discretized versions of the variables. Staniak et al. ([2019](#ref-localModelPackage)) discretize continuous variables based on segmentation of local ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)). In the example of the duck\-horse image in Figure [9\.2](LIME.html#fig:duckHorse06), the perturbations of the image could be created by randomly excluding some of the superpixels. An illustration of this process is shown in Figure [9\.3](LIME.html#fig:duckHorseProcess). Figure 9\.3: The original image (left\-hand\-side panel) is transformed into a lower\-dimensional data space by defining 100 super pixels (panel in the middle). The artificial data are created by using subsets of superpixels (right\-hand\-side panel). ### 9\.3\.3 Fitting the glass\-box model Once the artificial data around the instance of interest have been created, we may attempt to fit an interpretable glass\-box model \\(g()\\) from class \\(\\mathcal{G}\\). The most common choices for class \\(\\mathcal{G}\\) are generalized linear models. To get sparse models, i.e., models with a limited number of variables, LASSO (least absolute shrinkage and selection operator) (Tibshirani [1994](#ref-Tibshirani94regressionshrinkage)) or similar regularization\-modelling techniques are used. 
For instance, in the algorithm presented in Section [9\.3](LIME.html#LIMEMethod), the K\-LASSO method with K non\-zero coefficients has been mentioned. An alternative choice are classification\-and\-regression trees models (Breiman et al. [1984](#ref-CARTtree)). For the example of the duck\-horse image in Figure [9\.2](LIME.html#fig:duckHorse06), the VGG16 network provides 1000 probabilities that the image belongs to one of the 1000 classes used for training the network. It appears that the two most likely classes for the image are *‘standard poodle’* (probability of 0\.18\) and *‘goose’* (probability of 0\.15\). Figure [9\.4](LIME.html#fig:duckHorse04) presents LIME explanations for these two predictions. The explanations were obtained with the K\-LASSO method, which selected \\(K\=15\\) superpixels that were the most influential from a model\-prediction point of view. For each of the selected two classes, the \\(K\\) superpixels with non\-zero coefficients are highlighted. It is interesting to observe that the superpixel which contains the beak is influential for the *‘goose’* prediction, while superpixels linked with the white colour are influential for the *‘standard poodle’* prediction. At least for the former, the influential feature of the plot does correspond to the intended content of the image. Thus, the results of the explanation increase confidence in the model’s predictions. Figure 9\.4: LIME for two predictions (‘standard poodle’ and ‘goose’) obtained by the VGG16 network with ImageNet weights for the half\-duck, half\-horse image. TODO: fix apostrophes! 9\.4 Example: Titanic data -------------------------- Most examples of the LIME method are related to the text or image data. In this section, we present an example of a binary classification for tabular data to facilitate comparisons between methods introduced in different chapters. Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and passenger Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) as the instance of interest for the Titanic data. First, we have got to define an interpretable data space. One option would be to gather similar variables into larger constructs corresponding to some concepts. For example *class* and *fare* variables can be combined into “wealth”, *age* and *gender* into “demography”, and so on. In this example, however, we have got a relatively small number of variables, so we will use a simpler data representation in the form of a binary vector. Toward this aim, each variable is dichotomized into two levels. For example, *age* is transformed into a binary variable with categories “\\(\\leq\\) 15\.36” and “\>15\.36”, *class* is transformed into a binary variable with categories “1st/2nd/deck crew” and “other”, and so on. Once the lower\-dimension data space is defined, the LIME algorithm is applied to this space. In particular, we first have got to appropriately transform data for Johnny D. Subsequently, we generate a new artificial dataset that will be used for K\-LASSO approximations of the random forest model. In particular, the K\-LASSO method with \\(K\=3\\) is used to identify the three most influential (binary) variables that will provide an explanation for the prediction for Johnny D. The three variables are: *age*, *gender*, and *class*. This result agrees with the conclusions drawn in the previous chapters. 
Figure [9\.5](LIME.html#fig:LIMEexample01) shows the coefficients estimated for the K\-LASSO model. Figure 9\.5: LIME method for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data. Presented values are the coefficients of the K\-LASSO model fitted locally to the predictions from the original model. 9\.5 Pros and cons ------------------ As mentioned by Ribeiro, Singh, and Guestrin ([2016](#ref-lime)), the LIME method * is *model\-agnostic*, as it does not imply any assumptions about the black\-box model structure; * offers an *interpretable representation*, because the original data space is transformed (for instance, by replacing individual pixels by superpixels for image data) into a more interpretable, lower\-dimension space; * provides *local fidelity*, i.e., the explanations are locally well\-fitted to the black\-box model. The method has been widely adopted in the text and image analysis, partly due to the interpretable data representation. In that case, the explanations are delivered in the form of fragments of an image/text, and users can easily find the justification of such explanations. The underlying intuition for the method is easy to understand: a simpler model is used to approximate a more complex one. By using a simpler model, with a smaller number of interpretable explanatory variables, predictions are easier to explain. The LIME method can be applied to complex, high\-dimensional models. There are several important limitations, however. For instance, as mentioned in Section [9\.3\.2](LIME.html#LIMEsample), there have been various proposals for finding interpretable representations for continuous and categorical explanatory variables in case of tabular data. The issue has not been solved yet. This leads to different implementations of LIME, which use different variable\-transformation methods and, consequently, that can lead to different results. Another important point is that, because the glass\-box model is selected to approximate the black\-box model, and not the data themselves, the method does not control the quality of the local fit of the glass\-box model to the data. Thus, the latter model may be misleading. Finally, in high\-dimensional data, data points are sparse. Defining a “local neighborhood” of the instance of interest may not be straightforward. Importance of the selection of the neighborhood is discussed, for example, by Alvarez\-Melis and Jaakkola ([2018](#ref-LIMESHAPstability)). Sometimes even slight changes in the neighborhood strongly affect the obtained explanations. To summarize, the most useful applications of LIME are limited to high\-dimensional data for which one can define a low\-dimensional interpretable data representation, as in image analysis, text analysis, or genomics. 9\.6 Code snippets for R ------------------------ LIME and its variants are implemented in various R and Python packages. For example, `lime` (Pedersen and Benesty [2019](#ref-limePackage)) started as a port of the LIME Python library (Lundberg [2019](#ref-shapPackage)), while `localModel` (Staniak et al. [2019](#ref-localModelPackage)), and `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)) are separate packages that implement a version of this method entirely in R. Different implementations of LIME offer different algorithms for extraction of interpretable features, different methods for sampling, and different methods of weighting. 
For instance, regarding transformation of continuous variables into interpretable features, `lime` performs global discretization using quartiles, `localModel` performs local discretization using ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)), while `iml` works directly on continuous variables. Due to these differences, the packages yield different results (explanations). Also, `lime`, `localModel`, and `iml` use different functions to implement the LIME method. Thus, we will use the `predict_surrogate()` method from the `DALEXtra` package. The function offers a uniform interface to the functions from the three packages. In what follows, for illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf). Recall that it is developed to predict the probability of survival from the sinking of the Titanic. Instance\-level explanations are calculated for Johnny D, an 8\-year\-old passenger that travelled in the first class. We first retrieve the `titanic_rf` model\-object and the data frame for Johnny D via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. ``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist:: aread("pbiecek/models/4e0fc") johnny_d <- archivist:: aread("pbiecek/models/e3596") ``` ``` class gender age sibsp parch fare embarked 1 1st male 8 0 0 72 Southampton ``` Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. ``` library("randomForest") library("DALEX") titanic_rf_exp <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` ### 9\.6\.1 The `lime` package The key functions in the `lime` package are `lime()`, which creates an explanation, and `explain()`, which evaluates explanations. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters. The `predict_surrogate()` function requires two arguments: `explainer`, which specifies the name of the explainer\-object created with the help of function `explain()` from the `DALEX` package, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest. An additional, important argument is `type` that indicates the package with the desired implementation of the LIME method: either `"localModel"` (default), `"lime"`, or `"iml"`. In case of the `lime`\-package implementation, we can specify two additional arguments: `n_features` to indicate the maximum number (\\(K\\)) of explanatory variables to be selected by the K\-LASSO method, and `n_permutations` to specify the number of artifical data points to be sampled for the local\-model approximation. 
In the code below, we apply the `predict_surrogate()` function to the explainer\-object for the random forest model `titanic_rf` and data for Johnny D. Additionally, we specify that the K\-LASSO method should select no more than `n_features=3` explanatory variables based on a fit to `n_permutations=1000` sampled data points. Note that we use the `set.seed()` function to ensure repeatability of the sampling. ``` set.seed(1) library("DALEXtra") library("lime") model_type.dalex_explainer <- DALEXtra::model_type.dalex_explainer predict_model.dalex_explainer <- DALEXtra::predict_model.dalex_explainer lime_johnny <- predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d, n_features = 3, n_permutations = 1000, type = "lime") ``` The contents of the resulting object can be printed out in the form of a data frame with 11 variables. ``` (as.data.frame(lime_johnny)) ``` ``` ## model_type case model_r2 model_intercept model_prediction feature ## 1 regression 1 0.6826437 0.5541115 0.4784804 gender ## 2 regression 1 0.6826437 0.5541115 0.4784804 age ## 3 regression 1 0.6826437 0.5541115 0.4784804 class ## feature_value feature_weight feature_desc data ## 1 2 -0.4038175 gender = male 1, 2, 8, 0, 0, 72, 4 ## 2 8 0.1636630 age <= 22 1, 2, 8, 0, 0, 72, 4 ## 3 1 0.1645234 class = 1st 1, 2, 8, 0, 0, 72, 4 ## prediction ## 1 0.422 ## 2 0.422 ## 3 0.422 ``` The output includes column `case` that provides indices of observations for which the explanations are calculated. In our case there is only one index equal to 1, because we asked for an explanation for only one observation, Johnny D. The `feature` column indicates which explanatory variables were given non\-zero coefficients in the K\-LASSO method. The `feature_value` column provides information about the values of the original explanatory variables for the observations for which the explanations are calculated. On the other hand, the `feature_desc` column indicates how the original explanatory variable was transformed. Note that the applied implementation of the LIME method dichotomizes continuous variables by using quartiles. Hence, for instance, *age* for Johnny D was transformed into a binary variable `age <= 22`. Column `feature_weight` provides the estimated coefficients for the variables selected by the K\-LASSO method for the explanation. The `model_intercept` column provides of the value of the intercept. Thus, the linear combination of the transformed explanatory variables used in the glass\-box model approximating the random forest model around the instance of interest, Johnny D, is given by the following equation (see Section [2\.5](modelDevelopmentProcess.html#fitting)): \\\[ \\hat p\_{lime} \= 0\.55411 \- 0\.40381 \\cdot 1\_{male} \+ 0\.16366 \\cdot 1\_{age \<\= 22} \+ 0\.16452 \\cdot 1\_{class \= 1st} \= 0\.47848, \\] where \\(1\_A\\) denotes the indicator variable for condition \\(A\\). Note that the computed value corresponds to the number given in the column `model_prediction` in the printed output. By applying the `plot()` function to the object containing the explanation, we obtain a graphical presentation of the results. ``` plot(lime_johnny) ``` The resulting plot is shown in Figure [9\.6](LIME.html#fig:limeExplLIMETitanic). The length of the bar indicates the magnitude (absolute value), while the color indicates the sign (red for negative, blue for positive) of the estimated coefficient. 
Figure 9\.6: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `lime` package. ### 9\.6\.2 The `localModel` package The key function of the `localModel` package is the `individual_surrogate_model()` function that fits the local glass\-box model. The function is applied to the explainer\-object obtained with the help of the `DALEX::explain()` function (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters. To choose the `localModel`\-implementation of LIME, we set argument `type="localMode"` (see Section [9\.6\.1](LIME.html#LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer` and `new_observation`, two additional arguments: `size`, which specifies the number of artificial data points to be sampled for the local\-model approximation, and `seed`, which sets the seed for the random\-number generation allowing for a repeatable execution. In the code below, we apply the `predict_surrogate()` function to the explainer\-object for the random forest model `titanic_rf` and data for Johnny D. Additionally, we specify that 1000 data points are to be sampled and we set the random\-number\-generation seed. ``` library("localModel") locMod_johnny <- predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d, size = 1000, seed = 1, type = "localModel") ``` The resulting object is a data frame with seven variables (columns). For brevity, we only print out the first three variables. ``` locMod_johnny[,1:3] ``` ``` ## estimated variable original_variable ## 1 0.23530947 (Model mean) ## 2 0.30331646 (Intercept) ## 3 0.06004988 gender = male gender ## 4 -0.05222505 age <= 15.36 age ## 5 0.20988506 class = 1st, 2nd, deck crew class ## 6 0.00000000 embarked = Belfast, Southampton embarked ``` The printed output includes column `estimated` that contains the estimated coefficients of the LASSO regression model, which is used to approximate the predictions from the random forest model. Column `variable` provides the information about the corresponding variables, which are transformations of `original_variable`. Note that the version of LIME, implemented in the `localModel` package, dichotomizes continuous variables by using ceteris\-paribus profiles (for more information about the profiles, see Chapter [10](ceterisParibus.html#ceterisParibus)). The profile for variable *age* for Johnny D can be obtained by using function `plot_interpretable_feature()`, as shown below. ``` plot_interpretable_feature(locMod_johnny, "age") ``` The resulting plot is presented in Figure [9\.7](LIME.html#fig:LIMEexample02). The profile indicates that the largest drop in the predicted probability of survival is observed when the value of *age* increases beyond about 15 years. Hence, in the output of the `predict_surrogate()` function, we see a binary variable `age <= 15.36`, as Johnny D was 8 years old. Figure 9\.7: Discretization of the *age* variable for Johnny D based on the ceteris\-paribus profile. The optimal change\-point is around 15 years of age. By applying the generic `plot()` function to the object containing the LIME\-method results, we obtain a graphical representation of the results. 
``` plot(locMod_johnny) ``` The resulting plot is shown in Figure [9\.8](LIME.html#fig:limeExplLocalModelTitanic). The lengths of the bars indicate the magnitude (absolute value) of the estimated coefficients of the LASSO logistic regression model. The bars are placed relative to the value of the mean prediction, 0\.235\. Figure 9\.8: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `localModel` package. ### 9\.6\.3 The `iml` package The key functions of the `iml` package are `Predictor$new()`, which creates an explainer, and `LocalModel$new()`, which develops the local glass\-box model. The main arguments of the `Predictor$new()` function are `model`, which specifies the model\-object, and `data`, the data frame used for fitting the model. As mentioned earlier, we will apply the `predict_surrogate()` function from the `DALEXtra` package to access the functions via an interface that is consistent with the approach used in the previous chapters. To choose the `iml`\-implementation of LIME, we set argument `type="iml"` (see Section [9\.6\.1](LIME.html#LIMERcodelime)). In that case, the method accepts, apart from the required arguments `explainer`and `new_observation`, an additional argument `k` that specifies the number of explanatory variables included in the local\-approximation model. ``` library("DALEXtra") library("iml") iml_johnny <- predict_surrogate(explainer = titanic_rf_exp, new_observation = johnny_d, k = 3, type = "iml") ``` The resulting object includes data frame `results` with seven variables that provides results of the LASSO logistic regression model which is used to approximate the predictions of the random forest model. For brevity, we print out selected variables. ``` iml_johnny$results[,c(1:5,7)] ``` ``` ## beta x.recoded effect x.original feature .class ## 1 -0.1992616770 1 -0.19926168 1st class=1st no ## 2 1.6005493672 1 1.60054937 male gender=male no ## 3 -0.0002111346 72 -0.01520169 72 fare no ## 4 0.1992616770 1 0.19926168 1st class=1st yes ## 5 -1.6005493672 1 -1.60054937 male gender=male yes ## 6 0.0002111346 72 0.01520169 72 fare yes ``` The printed output includes column `beta` that provides the estimated coefficients of the local\-approximation model. Note that two sets of three coefficients (six in total) are given, corresponding to the prediction of the probability of death (column `.class` assuming value `no`, corresponding to the value `"no"` of the `survived` dependent\-variable) and survival (`.class` asuming value `yes`). Column `x.recoded` contains the information about the value of the corresponding transformed (interpretable) variable. The value of the original explanatory variable is given in column `x.original`, with column `feature` providing the information about the corresponding variable. Note that the implemented version of LIME does not transform continuous variables. Categorical variables are dichotomized, with the resulting binary variable assuming the value of 1 for the category observed for the instance of interest and 0 for other categories. The `effect` column provides the product of the estimated coefficient (from column `beta`) and the value of the interpretable covariate (from column `x.recoded`) of the model approximating the random forest model. By applying the generic `plot()` function to the object containing the LIME\-method results, we obtain a graphical representation of the results. 
```
plot(iml_johnny)
```

The resulting plot is shown in Figure [9\.9](LIME.html#fig:limeExplIMLTitanic). It shows values of the two sets of three coefficients for both types of predictions (probability of death and survival).

Figure 9\.9: Illustration of the LIME\-method results for the prediction for Johnny D for the random forest model `titanic_rf` and the Titanic data, generated by the `iml` package.

It is worth noting that *age*, *gender*, and *class* are correlated. For instance, crew members are only adults and mainly men. This is probably the reason why the three packages implementing the LIME method generate slightly different explanations for the model prediction for Johnny D.
9\.7 Code snippets for Python
-----------------------------

In this section, we use the `lime` library for Python, which is probably the most popular implementation of the LIME method (Ribeiro, Singh, and Guestrin [2016](#ref-lime)). The `lime` library requires categorical variables to be encoded in a numerical format. This requires some additional work with the data. Therefore, below we will show you how to use this method in Python step by step.

For illustration purposes, we use the random forest model for the Titanic data. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the 1st class.

In the first step, we read the Titanic data and encode categorical variables. In this case, we use the simplest encoding for *gender*, *class*, and *embarked*, i.e., the label\-encoding.
```
import dalex as dx
titanic = dx.datasets.load_titanic()
X = titanic.drop(columns='survived')
y = titanic.survived

# label-encode the categorical variables
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
X['gender'] = le.fit_transform(X['gender'])
X['class'] = le.fit_transform(X['class'])
X['embarked'] = le.fit_transform(X['embarked'])
```

In the next step, we train a random forest model.

```
from sklearn.ensemble import RandomForestClassifier as rfc
titanic_rf = rfc()
titanic_rf.fit(X, y)
```

It is time to define the observation for which the model prediction will be explained. We write Henry's data into a `pandas.Series` object.

```
import pandas as pd
henry = pd.Series([1, 47.0, 0, 1, 25.0, 0, 0], 
                  index =['gender', 'age', 'class', 'embarked',
                          'fare', 'sibsp', 'parch'])
```

The `lime` library explains models that operate on images, text, or tabular data. In the latter case, we have to use the `LimeTabularExplainer` module.

```
from lime.lime_tabular import LimeTabularExplainer
# note: some versions of lime expect a numpy array here (e.g., X.values)
explainer = LimeTabularExplainer(X, 
                      feature_names=X.columns, 
                      class_names=['died', 'survived'], 
                      discretize_continuous=False, 
                      verbose=True)
```

The result is an explainer that can be used to interpret a model around specific observations. In the following example, we explain the behaviour of the model for Henry. The `explain_instance()` method finds a local approximation with an interpretable linear model. The result can be presented graphically with the `show_in_notebook()` method.

```
lime = explainer.explain_instance(henry, titanic_rf.predict_proba)
lime.show_in_notebook(show_table=True)
```

The resulting plot is shown in Figure [9\.10](LIME.html#fig:limePython1).

Figure 9\.10: A plot of LIME model values for the random forest model and passenger Henry for the Titanic data.
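The numbers underlying the plot can also be retrieved programmatically from the explanation object. The snippet below is a minimal sketch that assumes the `lime` object created above and a recent version of the `lime` library, in which the explanation exposes an `as_list()` method returning (feature, weight) pairs for the explained class; the exact API may differ slightly between versions.

```
# Hedged sketch: inspect the local linear model fitted by LIME for Henry.
# as_list() returns (feature, weight) pairs for the explained class.
for feature, weight in lime.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Such a tabular view is convenient when the explanation is to be logged or compared across observations outside a notebook.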
10 Ceteris\-paribus Profiles
============================

10\.1 Introduction
------------------

Chapters [6](breakDown.html#breakDown)–[9](LIME.html#LIME) focused on the methods that quantified the importance of explanatory variables in the context of a single\-instance prediction. Their application yields a decomposition of the prediction into components that can be attributed to particular variables.

In this chapter, we focus on a method that evaluates the effect of a selected explanatory variable in terms of changes of a model's prediction induced by changes in the variable's values. The method is based on the *ceteris paribus* principle. *"Ceteris paribus"* is a Latin phrase meaning "other things held constant" or "all else unchanged". The method examines the influence of an explanatory variable by assuming that the values of all other variables do not change. The main goal is to understand how changes in the values of the variable affect the model's predictions.

Explanation tools (explainers) presented in this chapter are linked to the second law introduced in Section [1\.2](introduction.html#three-single-laws), i.e., the law of "Prediction's speculation". This is why the tools are also known as "What\-if" model analysis or Individual Conditional Expectations (Goldstein et al. [2015](#ref-ICEbox)). It appears that it is easier to understand how a black\-box model works if we can explore the model by investigating the influence of explanatory variables separately, changing one at a time.

10\.2 Intuition
---------------

Ceteris\-paribus (CP) profiles show how a model's prediction would change if the value of a single explanatory variable changed. In essence, a CP profile shows the dependence of the conditional expectation of the dependent variable (response) on the values of the particular explanatory variable. For example, panel A of Figure [10\.1](ceterisParibus.html#fig:modelResponseCurveLine) presents the response (prediction) surface for two explanatory variables, *age* and *class*, for the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic dataset (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). We are interested in the change of the model's prediction for passenger Henry (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) induced by each of the variables. Toward this end, we may want to explore the curvature of the response surface around a single point with *age* equal to 47 and *class* equal to "1st", indicated in the plot. CP profiles are one\-dimensional plots that examine the curvature across each dimension, i.e., for each variable. Panel B of Figure [10\.1](ceterisParibus.html#fig:modelResponseCurveLine) presents the CP profiles for *age* and *class*. Note that, in the CP profile for *age*, the point of interest is indicated by the dot. The plots for both variables suggest that the predicted probability of survival varies considerably for different ages and classes.

Figure 10\.1: Panel A) shows the model response (prediction) surface for variables *age* and *class.* Ceteris\-paribus (CP) profiles are conditional, one\-dimensional plots that are marked with black curves. They help to understand the changes of the curvature of the surface induced by changes in only a single explanatory variable. Panel B) CP profiles for individual variables, *age* (continuous) and *class* (categorical).

10\.3 Method
------------

In this section, we introduce more formally one\-dimensional CP profiles.
Recall (see Section [2\.3](modelDevelopmentProcess.html#notation)) that we use \\(\\underline{x}\_i\\) to refer to the vector of values of explanatory variables corresponding to the \\(i\\)\-th observation in a dataset. A vector with arbitrary values (not linked to any particular observation in the dataset) is denoted by \\(\\underline{x}\_\*\\). Let \\(\\underline{x}^{j}\_{\*}\\) denote the \\(j\\)\-th element of \\(\\underline{x}\_{\*}\\), i.e., the value of the \\(j\\)\-th explanatory variable. We use \\(\\underline{x}^{\-j}\_{\*}\\) to refer to a vector resulting from removing the \\(j\\)\-th element from \\(\\underline{x}\_{\*}\\). By \\(\\underline{x}^{j\|\=z}\_{\*}\\), we denote a vector resulting from changing the value of the \\(j\\)\-th element of \\(\\underline{x}\_{\*}\\) to (a scalar) \\(z\\). We define a one\-dimensional CP profile \\(h()\\) for model \\(f()\\), the \\(j\\)\-th explanatory variable, and point of interest \\(\\underline{x}\_\*\\) as follows: \\\[\\begin{equation} h^{f,j}\_{\\underline{x}\_\*}(z) \\equiv f\\left(\\underline{x}\_\*^{j\|\=z}\\right). \\tag{10\.1} \\end{equation}\\] CP profile is a function that describes the dependence of the (approximated) conditional expected value (prediction) of \\(Y\\) on the value \\(z\\) of the \\(j\\)\-th explanatory variable. Note that, in practice, \\(z\\) assumes values from the entire observed range for the variable, while values of all other explanatory variables are kept fixed at the values specified by \\(\\underline{x}\_\*\\). Note that, in the situation when only a single model is considered, we will skip the model index and we will denote the CP profile for the \\(j\\)\-th explanatory variable and the point of interest \\(\\underline{x}\_\*\\) by \\(h^{j}\_{\\underline{x}\_\*}(z)\\). 10\.4 Example: Titanic data --------------------------- For continuous explanatory variables, a natural way to represent the CP function [(10\.1\)](ceterisParibus.html#eq:CPPdef) is to use a plot similar to one of those presented in Figure [10\.2](ceterisParibus.html#fig:profileAgeRf). In the figure, the dot on the curves marks the instance\-prediction of interest, i.e., prediction \\(f(\\underline{x}\_\*)\\) for a single observation \\(\\underline{x}\_\*\\). The curve itself shows how the prediction would change if the value of a particular explanatory variable changed. In particular, Figure [10\.2](ceterisParibus.html#fig:profileAgeRf) presents CP profiles for the *age* variable for the logistic regression model `titanic_lmr` and the random forest model `titanic_rf` for the Titanic dataset (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr) and [4\.2\.2](dataSetsIntro.html#model-titanic-rf), respectively). The instance of interest is passenger Henry, a 47\-year\-old man who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). It is worth observing that the profile for the logistic regression model is smooth, while the one for the random forest model is a step function with some variability. However, the general shape of the two CP profiles is similar. If Henry were a newborn, while keeping values of all other explanatory variables unchanged, his predicted survival probability would increase by about 40 percentage points for both models. And if Henry were 80 years old, the predictions would decrease by more than 10 percentage points. 
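To make the notation in [(10\.1\)](ceterisParibus.html#eq:CPPdef) concrete, the following is a worked instantiation for the case discussed above: the variable of interest is *age*, and the point of interest \\(\\underline{x}\_\*\\) is Henry's data (class "1st", gender "male", no siblings/spouses or parents/children aboard, fare 25, embarked in Cherbourg; see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). The CP profile is then simply the model evaluated with only *age* replaced by \\(z\\):

\\\[ h^{f,\\text{age}}\_{\\underline{x}\_{\\text{Henry}}}(z) \= f\\left(\\text{class}\=\\text{1st},\\ \\text{gender}\=\\text{male},\\ \\text{age}\=z,\\ \\text{sibsp}\=0,\\ \\text{parch}\=0,\\ \\text{fare}\=25,\\ \\text{embarked}\=\\text{Cherbourg}\\right). \\\]

The curves in Figure [10\.2](ceterisParibus.html#fig:profileAgeRf) are this function evaluated over a grid of values of \\(z\\) for the two models.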
Figure 10\.2: Ceteris\-paribus profiles for variable *age* for the logistic regression (`titanic_lmr`) and random forest (`titanic_rf`) models that predict the probability of survival of passenger Henry based on the Titanic data. Dots indicate the values of the variable and of the prediction for Henry.

For a categorical explanatory variable, a natural way to represent the CP function is to use a bar plot similar to one of those presented in Figure [10\.3](ceterisParibus.html#fig:profileAgeRf2). In particular, the figure presents CP profiles for the *class* variable in the logistic regression and random forest models for the Titanic dataset (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr) and [4\.2\.2](dataSetsIntro.html#model-titanic-rf), respectively). For this instance (observation), passenger Henry, the predicted probability for the logistic regression model would decrease substantially if the value of *class* changed to "2nd" or "3rd". On the other hand, for the random forest model, the largest change would be observed if *class* changed to "deck crew".

Figure 10\.3: Ceteris\-paribus profiles for variable *class* for the logistic regression (`titanic_lmr`) and random forest (`titanic_rf`) models that predict the probability of survival of passenger Henry based on the Titanic data. Dots indicate the values of the variable and of the prediction for Henry.

Usually, black\-box models contain a large number of explanatory variables. However, CP profiles are legible even for tiny subplots, if created with techniques like sparklines or small multiples (Tufte [1986](#ref-Tufte1986)). By using the techniques, we can display a large number of profiles, while at the same time keeping profiles for consecutive variables in separate panels, as shown in Figure [10\.4](ceterisParibus.html#fig:profileV4Rf) for the random forest model for the Titanic dataset. It helps if the panels are ordered so that the most important profiles are listed first. A method to assess the importance of CP profiles is discussed in the next chapter.

Figure 10\.4: Ceteris\-paribus profiles for all continuous explanatory variables for the random forest model `titanic_rf` for the Titanic dataset and passenger Henry. Dots indicate the values of the variables and of the prediction for Henry.

10\.5 Pros and cons
-------------------

One\-dimensional CP profiles, as presented in this chapter, offer a uniform, easy to communicate, and extendable approach to model exploration. Their graphical representation is easy to understand and explain. It is possible to show profiles for many variables or models in a single plot. CP profiles are easy to compare, as we can overlay profiles for two or more models to better understand differences between the models. We can also compare two or more instances to better understand model\-prediction's stability. CP profiles are also a useful tool for sensitivity analysis.

However, there are several issues related to the use of the CP profiles. One of the most important ones is related to the presence of correlated explanatory variables. For such variables, the application of the *ceteris\-paribus* principle may lead to unrealistic settings and misleading results, as it is not possible to keep one variable fixed while varying the other one. For example, variables like surface and number of rooms, which can be used in prediction of an apartment's price, are usually correlated. Thus, it is unrealistic to consider very small apartments with a large number of rooms.
In fact, in a training dataset, there may be no such combinations. Yet, as implied by [(10\.1\)](ceterisParibus.html#eq:CPPdef), to compute a CP profile for the number\-of\-rooms variable for a particular instance of a small\-surface apartment, we should consider the model’s predictions \\(f\\left(\\underline{x}\_\*^{j\|\=z}\\right)\\) for all values of \\(z\\) (i.e., numbers of rooms) observed in the training dataset, including large ones. This means that, especially for flexible models like, for example, regression trees, predictions for a large number of rooms \\(z\\) may have to be obtained by extrapolating the results obtained for large\-surface apartments. Needless to say, such extrapolation may be problematic. We will come back to this issue in Chapters [17](partialDependenceProfiles.html#partialDependenceProfiles) and [18](accumulatedLocalProfiles.html#accumulatedLocalProfiles). A somewhat similar issue is related to the presence of interactions in a model, as they imply the dependence of the effect of one variable on other one(s). Pairwise interactions require the use of two\-dimensional CP profiles that are more complex than one\-dimensional ones. Needless to say, interactions of higher orders pose even a greater challenge. A practical issue is that, in case of a model with hundreds or thousands of variables, the number of plots to inspect may be daunting. Finally, while bar plots allow visualization of CP profiles for factors (categorical explanatory variables), their use becomes less trivial in case of factors with many nominal (unordered) categories (like, for example, a ZIP\-code). 10\.6 Code snippets for R ------------------------- In this section, we present CP profiles as implemented in the `DALEX` package for R. Note that presented functions are, in fact, wrappers to package `ingredients` (Biecek et al. [2019](#ref-ingredientsRPackage)) with a simplified interface. There are also other R packages that offer similar functionalities, like `condvis` (O’Connell, Hurley, and Domijan [2017](#ref-condvisRPackage)), `pdp` (Greenwell [2017](#ref-pdpRPackage)), `ICEbox` (Goldstein et al. [2015](#ref-ICEbox)), `ALEPlot` (Apley [2018](#ref-ALEPlotRPackage)), or `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)). For illustration, we use two classification models developed in Chapter [4\.1](dataSetsIntro.html#TitanicDataset), namely the logistic regression model `titanic_lmr` (Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). They are developed to predict the probability of survival after sinking of Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old male passenger that travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). We first retrieve the `titanic_lmr` and `titanic_rf` model\-objects and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. 
``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_lmr <- archivist::aread("pbiecek/models/58b24") titanic_rf <- archivist::aread("pbiecek/models/4e0fc") (henry <- archivist::aread("pbiecek/models/a6538")) ``` ``` class gender age sibsp parch fare embarked 1 1st male 47 0 0 25 Cherbourg ``` Then we construct the explainers for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `rms` and `randomForest` packages, as the models were fitted by using functions from those packages and it is important to have the corresponding `predict()` functions available. ``` library("DALEX") library("rms") explain_lmr <- explain(model = titanic_lmr, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", type = "classification", label = "Logistic Regression") library("randomForest") explain_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` ### 10\.6\.1 Basic use of the `predict_profile()` function The easiest way to create and plot CP profiles is to use the `predict_profile()` function and then apply the generic `plot()` function to the resulting object. By default, profiles for all explanatory variables are calculated, while profiles for all numeric (continuous) variables are plotted. One can limit the number of variables for which calculations and/or plots are necessary by using the `variables` argument. To compute the CP profiles, the `predict_profile()` function requires arguments `explainer`, which specifies the name of the explainer\-object, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest. As a result, the function returns an object of class “ceteris\_paribus\_explainer”. It is a data frame with the model’s predictions. Below we illustrate the use of the function for the random forest model. ``` cp_titanic_rf <- predict_profile(explainer = explain_rf, new_observation = henry) cp_titanic_rf ``` ``` ## Top profiles : ## class gender age sibsp parch fare embarked _yhat_ ## 1 1st male 47 0 0 25 Cherbourg 0.246 ## 1.1 2nd male 47 0 0 25 Cherbourg 0.054 ## 1.2 3rd male 47 0 0 25 Cherbourg 0.100 ## 1.3 deck crew male 47 0 0 25 Cherbourg 0.454 ## 1.4 engineering crew male 47 0 0 25 Cherbourg 0.096 ## 1.5 restaurant staff male 47 0 0 25 Cherbourg 0.092 ## _vname_ _ids_ _label_ ## 1 class 1 Random Forest ## 1.1 class 1 Random Forest ## 1.2 class 1 Random Forest ## 1.3 class 1 Random Forest ## 1.4 class 1 Random Forest ## 1.5 class 1 Random Forest ## ## ## Top observations: ## class gender age sibsp parch fare embarked _yhat_ _label_ ## 1 1st male 47 0 0 25 Cherbourg 0.246 Random Forest ## _ids_ ## 1 1 ``` To obtain a graphical representation of CP profiles, the generic `plot()` function can be applied to the data frame returned by the `predict_profile()` function. It returns a `ggplot2` object that can be processed further if needed. In the examples below, we use the `ggplot2` functions like `ggtitle()` or `ylim()` to modify the plot’s title or the range of the y\-axis. Below we show the code that can be used to create plots similar to those presented in the upper part of Figure [10\.4](ceterisParibus.html#fig:profileV4Rf). By default, the `plot()` function provides a graph with plots for all numerical variables. 
To limit the display to variables *age* and *fare*, the names of the variables are provided in the `variables` argument. The resulting plot is shown in Figure [10\.5](ceterisParibus.html#fig:titanicCeterisProfile01).

```
library("ggplot2")
plot(cp_titanic_rf, variables = c("age", "fare")) + 
  ggtitle("Ceteris-paribus profile", "") + ylim(0, 0.8)
```

Figure 10\.5: Ceteris\-paribus profiles for variables *age* and *fare* and the `titanic_rf` random forest model for the Titanic data. Dots indicate the values of the variables and of the prediction for Henry.

To plot CP profiles for categorical variables, we have got to add the `variable_type = "categorical"` argument to the `plot()` function. In that case, we can use the `categorical_type` argument to specify whether we want to obtain a plot with `"lines"` (default) or `"bars"`. In the code below, we also use argument `variables` to indicate that we want to create plots only for variables *class* and *embarked*. The resulting plot is shown in Figure [10\.6](ceterisParibus.html#fig:titanicCeterisProfile01B).

```
plot(cp_titanic_rf, variables = c("class", "embarked"), 
     variable_type = "categorical", categorical_type = "bars") +
  ggtitle("Ceteris-paribus profile", "") 
```

Figure 10\.6: Ceteris\-paribus profiles for variables *class* and *embarked* and the `titanic_rf` random forest model for the Titanic data. Dots indicate the values of the variables and of the prediction for Henry.

### 10\.6\.2 Advanced use of the `predict_profile()` function

The `predict_profile()` function is very flexible. To better understand how it can be used, we briefly review its arguments:

* `explainer`, `data`, `predict_function`, `label` \- they provide information about the model. If the object provided in the `explainer` argument has been created with the `DALEX::explain()` function, then values of the other arguments are extracted from the object; this is how we use the function in this chapter. Otherwise, we have got to specify directly the model\-object, the data frame used for fitting the model, the function that should be used to compute predictions, and the model label.
* `new_observation` \- a data frame with data for the instance(s) for which we want to calculate CP profiles, with the same variables as in the data used to fit the model. Note, however, that it is best not to include the dependent variable in the data frame, as it should not appear in plots.
* `y` \- the observed values of the dependent variable corresponding to `new_observation`. The use of this argument is illustrated in Section [12\.1](localDiagnostics.html#cPLocDiagIntro).
* `variables` \- names of explanatory variables, for which CP profiles are to be calculated. By default, `variables = NULL` and the profiles are constructed for all variables, which may be time consuming.
* `variable_splits` \- a list of values for which CP profiles are to be calculated. By default, `variable_splits = NULL` and the list includes all values for categorical variables and uniformly\-placed values for continuous variables; for the latter, one can specify the number of the values with the `grid_points` argument (by default, `grid_points = 101`).

The code below uses argument `variable_splits` to specify that CP profiles are to be calculated for *age* and *fare*, together with the list of values at which the profiles are to be evaluated.
```
variable_splits = list(age = seq(0, 70, 0.1), fare = seq(0, 100, 0.1))
cp_titanic_rf <- predict_profile(explainer = explain_rf, 
                                 new_observation = henry,
                                 variable_splits = variable_splits)
```

Subsequently, to replicate the plots presented in the upper part of Figure [10\.4](ceterisParibus.html#fig:profileV4Rf), a call to function `plot()` can be used as below. The resulting plot is shown in Figure 10\.7.

```
plot(cp_titanic_rf, variables = c("age", "fare")) + 
  ggtitle("Ceteris-paribus profile", "")
```

Figure 10\.7: Ceteris\-paribus profiles for variables *age* and *fare* and the `titanic_rf` random forest model. Blue dots indicate the values of the variables and of the prediction for Henry.

In the example below, we present the code to create CP profiles for two passengers, Henry and Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)), for the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). Toward this end, we first retrieve the `johnny_d` data frame via the `archivist` hook, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We then apply the `predict_profile()` function with the explainer\-object `explain_rf` specified in the `explainer` argument and the combined data frame for Henry and Johnny D used in the `new_observation` argument. We also use argument `variable_splits` to specify that CP profiles are to be calculated for *age* and *fare*, together with the list of values at which the profiles are to be evaluated.

```
(johnny_d <- archivist::aread("pbiecek/models/e3596"))
```

```
##   class gender age sibsp parch fare    embarked
## 1   1st   male   8     0     0   72 Southampton
```

```
cp_titanic_rf2 <- predict_profile(explainer = explain_rf, 
                                  new_observation = rbind(henry, johnny_d),
                                  variable_splits = variable_splits)
```

To create the plots of the CP profiles, we apply the `plot()` function. We use the `scale_color_manual()` function to add names of passengers to the plot, and to control colors and positions.

```
library(ingredients)

plot(cp_titanic_rf2, color = "_ids_", variables = c("age", "fare")) + 
  scale_color_manual(name = "Passenger:", breaks = 1:2, 
            values = c("#4378bf", "#8bdcbe"), 
            labels = c("henry" , "johny_d")) 
```

The resulting graph, which includes CP profiles for Henry and Johnny D, is presented in Figure [10\.8](ceterisParibus.html#fig:titanicCeterisProfile01D). For Henry, the predicted probability of survival is smaller than for Johnny D, as seen from the location of the large dots on the profiles. The profiles for *age* indicate a somewhat larger effect of the variable for Henry, as the predicted probability, in general, decreases from about 0\.6 to 0\.1 with increasing values of the variable. For Johnny D, the probability changes from about 0\.45 to about 0\.05, with a somewhat less monotonic pattern. For *fare*, the effect is smaller for both passengers, as the probability changes within a smaller range of about 0\.2\. For Henry, the changes are approximately limited to the interval \[0\.1,0\.3], while for Johnny D they are limited to the interval \[0\.4,0\.6].

Figure 10\.8: Ceteris\-paribus profiles for the `titanic_rf` model. Profiles for different passengers are color\-coded. Dots indicate the values of the variables and of the predictions for the passengers.

### 10\.6\.3 Comparison of models (champion\-challenger)

One of the most interesting uses of the CP profiles is the comparison of two or more models.
To illustrate this possibility, first, we have to construct profiles for the models. In our illustration, for the sake of clarity, we limit ourselves to the logistic regression (Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and random forest (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) models for the Titanic data. Moreover, we use Henry as the instance for which predictions are of interest. We apply the `predict_profile()` function to compute the CP profiles for the two models.

```
cp_titanic_rf <- predict_profile(explain_rf, henry)
cp_titanic_lmr <- predict_profile(explain_lmr, henry)
```

Subsequently, we construct the plot with the help of the `plot()` function. Note that, for the sake of brevity, we use the `variables` argument to limit the plot only to profiles for variables *age* and *fare*. The `plot()` function can take a collection of explanation objects as arguments. In such a case, profiles for different models are combined in a single plot. In the code presented below, argument `color = "_label_"` is used to specify that models are to be color\-coded. The `_label_` refers to the name of the column in the CP explainer that contains the model's name.

```
plot(cp_titanic_rf, cp_titanic_lmr, color = "_label_",  
     variables = c("age", "fare")) +
     ggtitle("Ceteris-paribus profiles for Henry", "") 
```

The resulting plot is shown in Figure [10\.9](ceterisParibus.html#fig:titanicCeterisProfile01E). For Henry, the predicted probability of survival is higher for the logistic regression model than for the random forest model. CP profiles for *age* show a similar shape, however, and indicate decreasing probability with age. Note that this relation is not linear because we used a spline transformation for the *age* variable, see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr). For *fare*, the profile for the logistic regression model suggests a slight increase of the probability, while for the random forest a decreasing trend can be inferred. The difference between the values of the CP profiles for *fare* increases with the increasing values of the variable. We can only speculate about the reason for the difference. Perhaps the cause is the correlation between the ticket *fare* and *class*. The logistic regression model handles the dependency of variables differently from the random forest model.

Figure 10\.9: Comparison of the ceteris\-paribus profiles for Henry for the logistic regression and random forest models. Profiles for different models are color\-coded. Dots indicate the values of the variables and of the prediction for Henry.

10\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`.

For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of the Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger that travelled in the 1st class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)).

In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.
```
import pandas as pd
henry = pd.DataFrame({'gender'  : ['male'],
                      'age'     : [47],
                      'class'   : ['1st'],
                      'embarked': ['Cherbourg'],
                      'fare'    : [25],
                      'sibsp'   : [0],
                      'parch'   : [0]},
                     index = ['Henry'])

import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y, 
                    label = "Titanic RF Pipeline")
```

To calculate the CP profile we use the `predict_profile()` method. The first argument is the data frame for the observation for which the attributions are to be calculated. Results are stored in the `result` field.

```
cp_henry = titanic_rf_exp.predict_profile(henry)
cp_henry.result
```

The resulting object can be visualised by using the `plot()` method. By default, CP profiles for all continuous variables are plotted. To select specific variables, a vector with the names of the variables can be provided in the `variables` argument. In the code below, we select variables *age* and *fare*. The resulting plot is shown in Figure [10\.10](ceterisParibus.html#fig:cpPython1).

```
cp_henry.plot(variables = ['age', 'fare'])
```

Figure 10\.10: Ceteris\-paribus profiles for continuous explanatory variables *age* and *fare* for the random forest model for the Titanic data and passenger Henry. Dots indicate the values of the variables and of the prediction for Henry.

To plot profiles for categorical variables, we use the `variable_type = 'categorical'` argument. In the code below, we limit the plot to variables *class* and *embarked*. The resulting plot is shown in Figure [10\.11](ceterisParibus.html#fig:cpPython2).

```
cp_henry.plot(variables = ['class', 'embarked'], 
              variable_type = 'categorical')
```

Figure 10\.11: Ceteris\-paribus profiles for categorical explanatory variables *class* and *embarked* for the random forest model for the Titanic data and passenger Henry.

CP profiles for several models can be placed on a single chart by adding them as further arguments for the `plot()` function (see an example below). The resulting plot is shown in Figure [10\.12](ceterisParibus.html#fig:cpPython4).

```
cp_henry2 = titanic_lr_exp.predict_profile(henry)
cp_henry.plot(cp_henry2, variables = ['age', 'fare'])
```

Figure 10\.12: Ceteris\-paribus profiles for logistic regression model and random forest model for the Titanic data and passenger Henry.
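To connect this back to formula [(10\.1\)](ceterisParibus.html#eq:CPPdef), the snippet below is a minimal sketch of what `predict_profile()` computes for a single variable: it replicates Henry's row, varies only *age* over a grid, and collects the model's predictions. It assumes the `titanic_rf_exp` explainer and the `henry` data frame defined above; the range 0–80 and the 101\-point grid are arbitrary choices made here for illustration and need not match the grid used internally by the `dalex` library.

```
import numpy as np
import pandas as pd

# Grid of candidate values z for the age variable.
age_grid = np.linspace(0, 80, 101)

# Replicate Henry's row once per grid value and overwrite only age,
# keeping all other variables fixed (the ceteris-paribus principle).
cp_data = pd.concat([henry] * len(age_grid), ignore_index=True)
cp_data['age'] = age_grid

# Predictions along the grid form the CP profile h(z) for age.
cp_age = titanic_rf_exp.predict(cp_data)

print(pd.DataFrame({'age': age_grid, 'prediction': cp_age}).head())
```

Plotting `cp_age` against `age_grid` should reproduce, up to the choice of the grid, the *age* panel shown in Figure [10\.10](ceterisParibus.html#fig:cpPython1).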
It appears that it is easier to understand how a black\-box model works if we can explore the model by investigating the influence of explanatory variables separately, changing one at a time. 10\.2 Intuition --------------- Ceteris\-paribus (CP) profiles show how a model’s prediction would change if the value of a single exploratory variable changed. In essence, a CP profile shows the dependence of the conditional expectation of the dependent variable (response) on the values of the particular explanatory variable. For example, panel A of Figure [10\.1](ceterisParibus.html#fig:modelResponseCurveLine) presents response (prediction) surface for two explanatory variables, *age* and *class*, for the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic dataset (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). We are interested in the change of the model’s prediction for passenger Henry (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) induced by each of the variables. Toward this end, we may want to explore the curvature of the response surface around a single point with *age* equal to 47 and *class* equal to “1st”, indicated in the plot. CP profiles are one\-dimensional plots that examine the curvature across each dimension, i.e., for each variable. Panel B of Figure [10\.1](ceterisParibus.html#fig:modelResponseCurveLine) presents the CP profiles for *age* and *class*. Note that, in the CP profile for *age*, the point of interest is indicated by the dot. The plots for both variables suggest that the predicted probability of survival varies considerably for different ages and classes. Figure 10\.1: Panel A) shows the model response (prediction) surface for variables *age* and *class.* Ceteris\-paribus (CP) profiles are conditional, one\-dimensional plots that are marked with black curves. They help to understand the changes of the curvature of the surface induced by changes in only a single explanatory variable. Panel B) CP profiles for individual variables, *age* (continuous) and *class* (categorical). 10\.3 Method ------------ In this section, we introduce more formally one\-dimensional CP profiles. Recall (see Section [2\.3](modelDevelopmentProcess.html#notation)) that we use \\(\\underline{x}\_i\\) to refer to the vector of values of explanatory variables corresponding to the \\(i\\)\-th observation in a dataset. A vector with arbitrary values (not linked to any particular observation in the dataset) is denoted by \\(\\underline{x}\_\*\\). Let \\(\\underline{x}^{j}\_{\*}\\) denote the \\(j\\)\-th element of \\(\\underline{x}\_{\*}\\), i.e., the value of the \\(j\\)\-th explanatory variable. We use \\(\\underline{x}^{\-j}\_{\*}\\) to refer to a vector resulting from removing the \\(j\\)\-th element from \\(\\underline{x}\_{\*}\\). By \\(\\underline{x}^{j\|\=z}\_{\*}\\), we denote a vector resulting from changing the value of the \\(j\\)\-th element of \\(\\underline{x}\_{\*}\\) to (a scalar) \\(z\\). We define a one\-dimensional CP profile \\(h()\\) for model \\(f()\\), the \\(j\\)\-th explanatory variable, and point of interest \\(\\underline{x}\_\*\\) as follows: \\\[\\begin{equation} h^{f,j}\_{\\underline{x}\_\*}(z) \\equiv f\\left(\\underline{x}\_\*^{j\|\=z}\\right). \\tag{10\.1} \\end{equation}\\] CP profile is a function that describes the dependence of the (approximated) conditional expected value (prediction) of \\(Y\\) on the value \\(z\\) of the \\(j\\)\-th explanatory variable. 
Note that, in practice, \\(z\\) assumes values from the entire observed range for the variable, while values of all other explanatory variables are kept fixed at the values specified by \\(\\underline{x}\_\*\\). Note that, in the situation when only a single model is considered, we will skip the model index and we will denote the CP profile for the \\(j\\)\-th explanatory variable and the point of interest \\(\\underline{x}\_\*\\) by \\(h^{j}\_{\\underline{x}\_\*}(z)\\). 10\.4 Example: Titanic data --------------------------- For continuous explanatory variables, a natural way to represent the CP function [(10\.1\)](ceterisParibus.html#eq:CPPdef) is to use a plot similar to one of those presented in Figure [10\.2](ceterisParibus.html#fig:profileAgeRf). In the figure, the dot on the curves marks the instance\-prediction of interest, i.e., prediction \\(f(\\underline{x}\_\*)\\) for a single observation \\(\\underline{x}\_\*\\). The curve itself shows how the prediction would change if the value of a particular explanatory variable changed. In particular, Figure [10\.2](ceterisParibus.html#fig:profileAgeRf) presents CP profiles for the *age* variable for the logistic regression model `titanic_lmr` and the random forest model `titanic_rf` for the Titanic dataset (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr) and [4\.2\.2](dataSetsIntro.html#model-titanic-rf), respectively). The instance of interest is passenger Henry, a 47\-year\-old man who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). It is worth observing that the profile for the logistic regression model is smooth, while the one for the random forest model is a step function with some variability. However, the general shape of the two CP profiles is similar. If Henry were a newborn, while keeping values of all other explanatory variables unchanged, his predicted survival probability would increase by about 40 percentage points for both models. And if Henry were 80 years old, the predictions would decrease by more than 10 percentage points. Figure 10\.2: Ceteris\-paribus profiles for variable *age* for the logistic regression (`titanic_lmr`) and random forest (`titanic_rf` ) models that predict the probability of surviving of passenger Henry based on the Titanic data. Dots indicate the values of the variable and of the prediction for Henry. For a categorical explanatory variable, a natural way to represent the CP function is to use a bar plot similar to one of those presented in Figure [10\.3](ceterisParibus.html#fig:profileAgeRf2). In particular, the figure presents CP profiles for the *class* variable in the logistic regression and random forest models for the Titanic dataset (see Sections [4\.2\.1](dataSetsIntro.html#model-titanic-lmr) and [4\.2\.2](dataSetsIntro.html#model-titanic-rf), respectively). For this instance (observation), passenger Henry, the predicted probability for the logistic regression model would decrease substantially if the value of *class* changed to “2nd” or “3rd”. On the other hand, for the random forest model, the largest change would be marked if *class* changed to “desk crew”. Figure 10\.3: Ceteris\-paribus profiles for variable *class* for the logistic regression (`titanic_lmr`) and random forest (`titanic_rf` ) models that predict the probability of surviving of passenger Henry based on the Titanic data. Dots indicate the values of the variable and of the prediction for Henry. 
Usually, black\-box models contain a large number of explanatory variables. However, CP profiles are legible even for tiny subplots, if created with techniques like sparklines or small multiples (Tufte [1986](#ref-Tufte1986)). By using the techniques, we can display a large number of profiles, while at the same time keeping profiles for consecutive variables in separate panels, as shown in Figure [10\.4](ceterisParibus.html#fig:profileV4Rf) for the random forest model for the Titanic dataset. It helps if the panels are ordered so that the most important profiles are listed first. A method to assess the importance of CP profiles is discussed in the next chapter. Figure 10\.4: Ceteris\-paribus profiles for all continuous explanatory variables for the random forest model `titanic_rf` for the Titanic dataset and passenger Henry. Dots indicate the values of the variables and of the prediction for Henry. 10\.5 Pros and cons ------------------- One\-dimensional CP profiles, as presented in this chapter, offer a uniform, easy to communicate, and extendable approach to model exploration. Their graphical representation is easy to understand and explain. It is possible to show profiles for many variables or models in a single plot. CP profiles are easy to compare, as we can overlay profiles for two or more models to better understand differences between the models. We can also compare two or more instances to better understand model\-prediction’s stability. CP profiles are also a useful tool for sensitivity analysis. However, there are several issues related to the use of the CP profiles. One of the most important ones is related to the presence of correlated explanatory variables. For such variables, the application of the *ceteris\-paribus* principle may lead to unrealistic settings and misleading results, as it is not possible to keep one variable fixed while varying the other one. For example, variables like surface and number of rooms, which can be used in prediction of an apartment’s price, are usually correlated. Thus, it is unrealistic to consider very small apartments with a large number of rooms. In fact, in a training dataset, there may be no such combinations. Yet, as implied by [(10\.1\)](ceterisParibus.html#eq:CPPdef), to compute a CP profile for the number\-of\-rooms variable for a particular instance of a small\-surface apartment, we should consider the model’s predictions \\(f\\left(\\underline{x}\_\*^{j\|\=z}\\right)\\) for all values of \\(z\\) (i.e., numbers of rooms) observed in the training dataset, including large ones. This means that, especially for flexible models like, for example, regression trees, predictions for a large number of rooms \\(z\\) may have to be obtained by extrapolating the results obtained for large\-surface apartments. Needless to say, such extrapolation may be problematic. We will come back to this issue in Chapters [17](partialDependenceProfiles.html#partialDependenceProfiles) and [18](accumulatedLocalProfiles.html#accumulatedLocalProfiles). A somewhat similar issue is related to the presence of interactions in a model, as they imply the dependence of the effect of one variable on other one(s). Pairwise interactions require the use of two\-dimensional CP profiles that are more complex than one\-dimensional ones. Needless to say, interactions of higher orders pose even a greater challenge. A practical issue is that, in case of a model with hundreds or thousands of variables, the number of plots to inspect may be daunting. 
Finally, while bar plots allow visualization of CP profiles for factors (categorical explanatory variables), their use becomes less trivial in the case of factors with many nominal (unordered) categories (like, for example, a ZIP\-code).

10\.6 Code snippets for R
-------------------------

In this section, we present CP profiles as implemented in the `DALEX` package for R. Note that the presented functions are, in fact, wrappers for the `ingredients` package (Biecek et al. [2019](#ref-ingredientsRPackage)) with a simplified interface. There are also other R packages that offer similar functionalities, like `condvis` (O’Connell, Hurley, and Domijan [2017](#ref-condvisRPackage)), `pdp` (Greenwell [2017](#ref-pdpRPackage)), `ICEbox` (Goldstein et al. [2015](#ref-ICEbox)), `ALEPlot` (Apley [2018](#ref-ALEPlotRPackage)), or `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)).

For illustration, we use two classification models developed in Chapter [4\.1](dataSetsIntro.html#TitanicDataset), namely the logistic regression model `titanic_lmr` (Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). They are developed to predict the probability of survival after the sinking of the Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old male passenger who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)).

We first retrieve the `titanic_lmr` and `titanic_rf` model\-objects and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.

```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_lmr <- archivist::aread("pbiecek/models/58b24")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
(henry <- archivist::aread("pbiecek/models/a6538"))
```

```
  class gender age sibsp parch fare  embarked
1   1st   male  47     0     0   25 Cherbourg
```

Then we construct the explainers for the models by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `rms` and `randomForest` packages, as the models were fitted by using functions from those packages and it is important to have the corresponding `predict()` functions available.

```
library("DALEX")
library("rms")
explain_lmr <- explain(model = titanic_lmr, 
                       data = titanic_imputed[, -9],
                       y = titanic_imputed$survived == "yes",
                       type = "classification",
                       label = "Logistic Regression")

library("randomForest")
explain_rf <- DALEX::explain(model = titanic_rf, 
                             data = titanic_imputed[, -9],
                             y = titanic_imputed$survived == "yes", 
                             label = "Random Forest")
```

### 10\.6\.1 Basic use of the `predict_profile()` function

The easiest way to create and plot CP profiles is to use the `predict_profile()` function and then apply the generic `plot()` function to the resulting object. By default, profiles for all explanatory variables are calculated, while profiles for all numeric (continuous) variables are plotted. One can limit the number of variables for which calculations and/or plots are carried out by using the `variables` argument.

To compute the CP profiles, the `predict_profile()` function requires arguments `explainer`, which specifies the name of the explainer\-object, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest.
As a result, the function returns an object of class “ceteris\_paribus\_explainer”. It is a data frame with the model’s predictions. Below we illustrate the use of the function for the random forest model. ``` cp_titanic_rf <- predict_profile(explainer = explain_rf, new_observation = henry) cp_titanic_rf ``` ``` ## Top profiles : ## class gender age sibsp parch fare embarked _yhat_ ## 1 1st male 47 0 0 25 Cherbourg 0.246 ## 1.1 2nd male 47 0 0 25 Cherbourg 0.054 ## 1.2 3rd male 47 0 0 25 Cherbourg 0.100 ## 1.3 deck crew male 47 0 0 25 Cherbourg 0.454 ## 1.4 engineering crew male 47 0 0 25 Cherbourg 0.096 ## 1.5 restaurant staff male 47 0 0 25 Cherbourg 0.092 ## _vname_ _ids_ _label_ ## 1 class 1 Random Forest ## 1.1 class 1 Random Forest ## 1.2 class 1 Random Forest ## 1.3 class 1 Random Forest ## 1.4 class 1 Random Forest ## 1.5 class 1 Random Forest ## ## ## Top observations: ## class gender age sibsp parch fare embarked _yhat_ _label_ ## 1 1st male 47 0 0 25 Cherbourg 0.246 Random Forest ## _ids_ ## 1 1 ``` To obtain a graphical representation of CP profiles, the generic `plot()` function can be applied to the data frame returned by the `predict_profile()` function. It returns a `ggplot2` object that can be processed further if needed. In the examples below, we use the `ggplot2` functions like `ggtitle()` or `ylim()` to modify the plot’s title or the range of the y\-axis. Below we show the code that can be used to create plots similar to those presented in the upper part of Figure [10\.4](ceterisParibus.html#fig:profileV4Rf). By default, the `plot()` function provides a graph with plots for all numerical variables. To limit the display to variables *age* and *fare*, the names of the variables are provided in the `variables` argument. The resulting plot is shown in Figure [10\.5](ceterisParibus.html#fig:titanicCeterisProfile01). ``` library("ggplot2") plot(cp_titanic_rf, variables = c("age", "fare")) + ggtitle("Ceteris-paribus profile", "") + ylim(0, 0.8) ``` Figure 10\.5: Ceteris\-paribus profiles for variables *age* and *fare* and the `titanic_rf` random forest model for the Titanic data. Dots indicate the values of the variables and of the prediction for Henry. To plot CP profiles for categorical variables, we have got to add the `variable_type = "categorical"` argument to the `plot()` function. In that case, we can use the `categorical_type` argument to specify whether we want to obtain a plot with `"lines"` (default) or `"bars"`. In the code below, we also use argument `variables` to indicate that we want to create plots only for variables *class* and *embarked*. The resulting plot is shown in Figure [10\.6](ceterisParibus.html#fig:titanicCeterisProfile01B). ``` plot(cp_titanic_rf, variables = c("class", "embarked"), variable_type = "categorical", categorical_type = "bars") + ggtitle("Ceteris-paribus profile", "") ``` Figure 10\.6: Ceteris\-paribus profiles for variables *class* and *embarked* and the `titanic_rf` random forest model for the Titanic data. Dots indicate the values of the variables and of the prediction for Henry. ### 10\.6\.2 Advanced use of the `predict_profile()` function The `predict_profile()` function is very flexible. To better understand how it can be used, we briefly review its arguments: * `explainer`, `data`, `predict_function`, `label` \- they provide information about the model. 
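Because the object returned by `predict_profile()` is simply a data frame (with columns such as `_vname_` and `_yhat_`, as shown in the printout above), one is not restricted to the overloaded `plot()` function. The minimal sketch below builds a custom `ggplot2` graph for the *age* profile directly from that data frame; the selection of columns reflects the printout above, while the plot styling is an arbitrary choice.

```
library("ggplot2")

# Keep only the rows in which the age variable was varied.
cp_age <- cp_titanic_rf[cp_titanic_rf$`_vname_` == "age", ]

# A custom plot of the CP profile for age, built directly from the data frame.
ggplot(cp_age, aes(x = age, y = `_yhat_`)) +
  geom_line() +
  labs(x = "age", y = "predicted probability of survival",
       title = "Ceteris-paribus profile for age (custom plot)")
```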
If the object provided in the `explainer` argument has been created with the `DALEX::explain()` function, then values of the other arguments are extracted from the object; this is how we use the function in this chapter. Otherwise, we have got to specify directly the model\-object, the data frame used for fitting the model, the function that should be used to compute predictions, and the model label.
* `new_observation` \- a data frame with data for instance(s), for which we want to calculate CP profiles, with the same variables as in the data used to fit the model. Note, however, that it is best not to include the dependent variable in the data frame, as it should not appear in the plots.
* `y` \- the observed values of the dependent variable corresponding to `new_observation`. The use of this argument is illustrated in Section [12\.1](localDiagnostics.html#cPLocDiagIntro).
* `variables` \- names of explanatory variables, for which CP profiles are to be calculated. By default, `variables = NULL` and the profiles are constructed for all variables, which may be time consuming.
* `variable_splits` \- a list of values for which CP profiles are to be calculated. By default, `variable_splits = NULL` and the list includes all values for categorical variables and uniformly\-placed values for continuous variables; for the latter, one can specify the number of the values with the `grid_points` argument (by default, `grid_points = 101`).

The code below uses argument `variable_splits` to specify that CP profiles are to be calculated for *age* and *fare*, together with the list of values at which the profiles are to be evaluated.

```
variable_splits = list(age = seq(0, 70, 0.1),
                       fare = seq(0, 100, 0.1))
cp_titanic_rf <- predict_profile(explainer = explain_rf,
                                 new_observation = henry,
                                 variable_splits = variable_splits)
```

Subsequently, to replicate the plots presented in the upper part of Figure [10\.4](ceterisParibus.html#fig:profileV4Rf), a call to function `plot()` can be used as below. The resulting plot is shown in Figure 10\.7.

```
plot(cp_titanic_rf, variables = c("age", "fare")) +
  ggtitle("Ceteris-paribus profile", "")
```

Figure 10\.7: Ceteris\-paribus profiles for variables *age* and *fare* and the `titanic_rf` random forest model. Blue dots indicate the values of the variables and of the prediction for Henry.

In the example below, we present the code to create CP profiles for two passengers, Henry and Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)), for the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). Toward this end, we first retrieve the `johnny_d` data frame via the `archivist` hook, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We then apply the `predict_profile()` function with the explainer\-object `explain_rf` specified in the `explainer` argument and the combined data frame for Henry and Johnny D used in the `new_observation` argument. We also use argument `variable_splits` to specify that CP profiles are to be calculated for *age* and *fare*, together with the list of values at which the profiles are to be evaluated.
```
(johnny_d <- archivist::aread("pbiecek/models/e3596"))
```

```
##   class gender age sibsp parch fare    embarked
## 1   1st   male   8     0     0   72 Southampton
```

```
cp_titanic_rf2 <- predict_profile(explainer = explain_rf,
                                  new_observation = rbind(henry, johnny_d),
                                  variable_splits = variable_splits)
```

To create the plots of the CP profiles, we apply the `plot()` function. We use the `scale_color_manual()` function to add names of passengers to the plot, and to control colors and positions.

```
library(ingredients)

plot(cp_titanic_rf2, color = "_ids_", variables = c("age", "fare")) +
  scale_color_manual(name = "Passenger:", breaks = 1:2,
                     values = c("#4378bf", "#8bdcbe"),
                     labels = c("henry" , "johny_d"))
```

The resulting graph, which includes CP profiles for Henry and Johnny D, is presented in Figure [10\.8](ceterisParibus.html#fig:titanicCeterisProfile01D). For Henry, the predicted probability of survival is smaller than for Johnny D, as seen from the location of the large dots on the profiles. The profiles for *age* indicate a somewhat larger effect of the variable for Henry, as the predicted probability, in general, decreases from about 0\.6 to 0\.1 with increasing values of the variable. For Johnny D, the probability changes from about 0\.45 to about 0\.05, with a somewhat less monotonic pattern. For *fare*, the effect is smaller for both passengers, as the probability changes within a smaller range of about 0\.2\. For Henry, the changes are approximately limited to the interval \[0\.1,0\.3], while for Johnny D they are limited to the interval \[0\.4,0\.6].

Figure 10\.8: Ceteris\-paribus profiles for the `titanic_rf` model. Profiles for different passengers are color\-coded. Dots indicate the values of the variables and of the predictions for the passengers.

### 10\.6\.3 Comparison of models (champion\-challenger)

One of the most interesting uses of the CP profiles is the comparison of two or more models. To illustrate this possibility, first, we have to construct profiles for the models. In our illustration, for the sake of clarity, we limit ourselves to the logistic regression (Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and random forest (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) models for the Titanic data. Moreover, we use Henry as the instance for which predictions are of interest.

We apply the `predict_profile()` function to compute the CP profiles for the two models.

```
cp_titanic_rf <- predict_profile(explain_rf, henry)
cp_titanic_lmr <- predict_profile(explain_lmr, henry)
```

Subsequently, we construct the plot with the help of the `plot()` function. Note that, for the sake of brevity, we use the `variables` argument to limit the plot only to profiles for variables *age* and *fare*. The `plot()` function can take several CP\-profile objects as arguments. In such a case, profiles for different models are combined in a single plot. In the code presented below, argument `color = "_label_"` is used to specify that models are to be color\-coded. The `_label_` refers to the name of the column in the CP explainer that contains the model's name.

```
plot(cp_titanic_rf, cp_titanic_lmr, color = "_label_",
     variables = c("age", "fare")) +
     ggtitle("Ceteris-paribus profiles for Henry", "")
```

The resulting plot is shown in Figure [10\.9](ceterisParibus.html#fig:titanicCeterisProfile01E). For Henry, the predicted probability of survival is higher for the logistic regression model than for the random forest model.
However, the CP profiles for *age* have a similar shape and indicate a decreasing probability of survival with increasing age. Note that this relation is not linear, because a spline transformation was used for the *age* variable (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). For *fare*, the profile for the logistic regression model suggests a slight increase of the probability, while for the random forest model a decreasing trend can be inferred. The difference between the values of the CP profiles for *fare* increases with the increasing values of the variable. We can only speculate about the reason for the difference. Perhaps the cause is the correlation between the ticket *fare* and *class*. The logistic regression model handles the dependency between the variables differently from the random forest model.

Figure 10\.9: Comparison of the ceteris\-paribus profiles for Henry for the logistic regression and random forest models. Profiles for different models are color\-coded. Dots indicate the values of the variables and of the prediction for Henry.
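The conjecture about the relationship between *fare* and *class* can be checked directly in the training data. The minimal sketch below assumes the `titanic_imputed` data frame retrieved earlier in this section and summarizes the ticket fares per travel class; the choice of the median as the summary statistic is arbitrary.

```
# Median ticket fare and number of passengers per travel class
# in the (imputed) Titanic data.
aggregate(fare ~ class, data = titanic_imputed, FUN = median)
table(titanic_imputed$class)
```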
10\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of the Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old passenger who travelled in the 1st class (see Section [4\.3\.5](dataSetsIntro.html#predictions-titanic-python)).

In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.
```
import pandas as pd
henry = pd.DataFrame({'gender'  : ['male'],
                      'age'     : [47],
                      'class'   : ['1st'],
                      'embarked': ['Cherbourg'],
                      'fare'    : [25],
                      'sibsp'   : [0],
                      'parch'   : [0]},
                     index = ['Henry'])

import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y, 
                              label = "Titanic RF Pipeline")
```

To calculate the CP profiles, we use the `predict_profile()` method. The first argument is the data frame for the observation for which the profiles are to be calculated. Results are stored in the `result` field.

```
cp_henry = titanic_rf_exp.predict_profile(henry)
cp_henry.result
```

The resulting object can be visualised by using the `plot()` method. By default, CP profiles for all continuous variables are plotted. To select specific variables, a list with the names of the variables can be provided in the `variables` argument. In the code below, we select variables *age* and *fare*. The resulting plot is shown in Figure [10\.10](ceterisParibus.html#fig:cpPython1).

```
cp_henry.plot(variables = ['age', 'fare'])
```

Figure 10\.10: Ceteris\-paribus profiles for continuous explanatory variables *age* and *fare* for the random forest model for the Titanic data and passenger Henry. Dots indicate the values of the variables and of the prediction for Henry.

To plot profiles for categorical variables, we use the `variable_type = 'categorical'` argument. In the code below, we limit the plot to variables *class* and *embarked*. The resulting plot is shown in Figure [10\.11](ceterisParibus.html#fig:cpPython2).

```
cp_henry.plot(variables = ['class', 'embarked'],
              variable_type = 'categorical')
```

Figure 10\.11: Ceteris\-paribus profiles for categorical explanatory variables *class* and *embarked* for the random forest model for the Titanic data and passenger Henry.

CP profiles for several models can be placed on a single chart by adding them as further arguments to the `plot()` method (see the example below). The resulting plot is shown in Figure [10\.12](ceterisParibus.html#fig:cpPython4).

```
cp_henry2 = titanic_lr_exp.predict_profile(henry)
cp_henry.plot(cp_henry2, variables = ['age', 'fare'])
```

Figure 10\.12: Ceteris\-paribus profiles for the logistic regression and random forest models for the Titanic data and passenger Henry.
11 Ceteris\-paribus Oscillations
================================

11\.1 Introduction
------------------

Visual examination of ceteris\-paribus (CP) profiles, as illustrated in the previous chapter, is insightful. However, in the case of a model with a large number of explanatory variables, we may end up with a large number of plots that may be overwhelming. In such a situation, it might be useful to select the most interesting or important profiles. In this chapter, we describe a measure that can be used for such a purpose and that is directly linked to CP profiles. It can be seen as an instance\-level variable\-importance measure alternative to the measures discussed in Chapters [6](breakDown.html#breakDown)–[9](LIME.html#LIME).

11\.2 Intuition
---------------

To assign importance to CP profiles, we can use the concept of profile oscillations. It is worth noting that, the larger the influence of an explanatory variable on the prediction for a particular instance, the larger the fluctuations of the corresponding CP profile. For a variable that exercises little or no influence on a model's prediction, the profile will be flat or will barely change. In other words, the values of the CP profile should be close to the value of the model's prediction for a particular instance. Consequently, the sum of the absolute differences between the profile and the value of the prediction, taken across all possible values of the explanatory variable, should be close to zero. The sum can be graphically depicted by the area between the profile and the horizontal line representing the value of the single\-instance prediction. On the other hand, for an explanatory variable with a large influence on the prediction, the area should be large.

Figure [11\.1](ceterisParibusOscillations.html#fig:CPVIPprofiles) illustrates the concept based on CP profiles presented in Figure [10\.4](ceterisParibus.html#fig:profileV4Rf). The larger the highlighted area in Figure [11\.1](ceterisParibusOscillations.html#fig:CPVIPprofiles), the more important the variable is for the particular prediction.

Figure 11\.1: The value of the coloured area summarizes the oscillations of a ceteris\-paribus (CP) profile and provides the mean of the absolute deviations between the CP profile and the single\-instance prediction. The CP profiles are constructed for the `titanic_rf` random forest model for the Titanic data and passenger Henry.

11\.3 Method
------------

Let us formalize this concept now. Denote by \\(g^j(z)\\) the probability density function of the distribution of the \\(j\\)\-th explanatory variable. The summary measure of the variable's importance for model \\(f()\\)'s prediction at \\(\\underline{x}\_\*\\), \\(vip\_{CP}^{j}(\\underline{x}\_\*)\\), is defined as follows:

\\\[\\begin{equation} vip\_{CP}^j(\\underline{x}\_\*) \= \\int\_{\\mathcal R} \|h^{j}\_{\\underline{x}\_\*}(z) \- f(\\underline{x}\_\*)\| g^j(z)dz\=E\_{X^j}\\left\\{\|h^{j}\_{\\underline{x}\_\*}(X^j) \- f(\\underline{x}\_\*)\|\\right\\}. \\tag{11\.1} \\end{equation}\\]

Thus, \\(vip\_{CP}^j(\\underline{x}\_\*)\\) is the expected absolute deviation of the CP profile \\(h^{j}\_{\\underline{x}\_\*}()\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), from the model's prediction at \\(\\underline{x}\_\*\\), computed over the distribution \\(g^j(z)\\) of the \\(j\\)\-th explanatory variable. The true distribution of the \\(j\\)\-th explanatory variable is, in most cases, unknown.
There are several possible approaches to construct an estimator of [(11\.1\)](ceterisParibusOscillations.html#eq:VIPCPdef). One is to calculate the area under the CP curve, i.e., to assume that \\(g^j(z)\\) is a uniform distribution over the range of variable \\(X^j\\). It follows that a straightforward estimator of \\(vip\_{CP}^{j}(\\underline{x}\_\*)\\) is \\\[\\begin{equation} \\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*) \= \\frac 1k \\sum\_{l\=1}^k \|h^{j}\_{x\_\*}(z\_l) \- f(\\underline{x}\_\*)\|, \\tag{11\.2} \\end{equation}\\] where \\(z\_l\\) (\\(l\=1, \\ldots, k\\)) are selected values of the \\(j\\)\-th explanatory variable. For instance, one can consider all unique values of \\(X^{j}\\) in a dataset. Alternatively, for a continuous variable, one can use an equidistant grid of values. Another approach is to use the empirical distribution of \\(X^{j}\\). This leads to the estimator defined as follows: \\\[\\begin{equation} \\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*) \= \\frac 1n \\sum\_{i\=1}^n \|h^{j}\_{\\underline{x}\_\*}(x^{j}\_i) \- f(\\underline{x}\_\*)\|, \\tag{11\.3} \\end{equation}\\] where index \\(i\\) runs through all observations in a dataset. The use of \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\) is preferred when there are enough data to accurately estimate the empirical distribution and when the distribution is not uniform. On the other hand, \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\) is in most cases quicker to compute and, therefore, it is preferred if we look for fast approximations. Note that the local evaluation of the variables’ importance can be very different from the global evaluation. This is well illustrated by the following example. Consider the model \\\[ f(x^1, x^2\) \= x^1 \* x^2, \\] where variables \\(X^1\\) and \\(X^2\\) take values in \\(\[0,1]\\). Furthermore, consider prediction for an observation described by vector \\(\\underline{x}\_\* \= (0,1\)\\). In that case, the importance of \\(X^1\\) is larger than \\(X^2\\). This is because the CP profile \\(h^1\_{x\_\*}(z) \= z\\), while \\(h^2\_{x\_\*}(z) \= 0\\). Thus, there are oscillations for the first variable, but no oscillations for the second one. Hence, at \\(\\underline{x}\_\* \= (0,1\)\\), the first variable is more important than the second. Globally, however, both variables are equally important, because the model is symmetrical. 11\.4 Example: Titanic data --------------------------- Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) shows bar plots summarizing the size of oscillations for explanatory variables for the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for Henry, a 47\-year\-old man who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). The longer the bar, the larger the CP\-profile oscillations for the particular explanatory variable. The left\-hand\-side panel presents the variable\-importance measures computed by applying estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), given in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), to an equidistant grid of values. The right\-hand\-side panel shows the results obtained by applying estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), given in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp), with an empirical distribution for explanatory variables. 
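Both estimators can also be computed directly from their definitions, which is what the left\-hand\-side and right\-hand\-side panels of Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) correspond to. The minimal sketch below does this by hand for the *age* variable; it assumes that the explainer\-object `explain_rf` and the data frame `henry` (constructed in the previous chapter and again in Section 11\.6 below) are available, and the equidistant grid used for the "uniform" variant is an arbitrary choice.

```
# Prediction for the instance of interest, f(x_*).
f_henry <- predict(explain_rf, henry)

# CP profile for age: predictions with age replaced by the values in z,
# all other explanatory variables kept fixed (ceteris paribus).
cp_profile_age <- function(z) {
  henry_modified <- henry[rep(1, length(z)), ]
  henry_modified$age <- z
  predict(explain_rf, henry_modified)
}

# Estimator (11.2): mean absolute deviation over an equidistant grid.
grid_age <- seq(0, 80, length.out = 101)
vip_uni <- mean(abs(cp_profile_age(grid_age) - f_henry))

# Estimator (11.3): mean absolute deviation over the observed values of age.
vip_emp <- mean(abs(cp_profile_age(titanic_imputed$age) - f_henry))

c(uniform = vip_uni, empirical = vip_emp)
```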
The plots presented in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) indicate that both estimators consistently suggest that the most important variables for the model's prediction for Henry are *gender* and *age*, followed by *class*. However, a remarkable difference can be observed for the *sibsp* variable, which gains in relative importance for estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\). In this respect, it is worth recalling that this variable has a very skewed distribution (see Figure [4\.3](dataSetsIntro.html#fig:titanicExplorationParch)). In particular, a significant mass of the distribution is concentrated at zero, but a few high values have also been observed for the variable. As a result, the empirical distribution is very different from a uniform one. Hence the difference in the relative importance noted in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1).

It is worth noting that, while the variable\-importance plot in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) does indicate which explanatory variables are important, it does not describe how the variables influence the prediction. In that respect, the CP profile for *age* for Henry (see Figure [10\.4](ceterisParibus.html#fig:profileV4Rf)) suggested that, if Henry were older, this would significantly lower his probability of survival. On the other hand, the CP profile for *sibsp* (see Figure [10\.4](ceterisParibus.html#fig:profileV4Rf)) indicated that, were Henry not travelling alone, this would increase his chances of survival. Thus, the variable\-importance plots should always be accompanied by plots of the relevant CP profiles.

Figure 11\.2: Variable\-importance measures based on ceteris\-paribus oscillations estimated by using (left\-hand\-side panel) a uniform grid of explanatory\-variable values and (right\-hand\-side panel) the empirical distribution of explanatory variables for the random forest model and passenger Henry for the Titanic data.

11\.5 Pros and cons
-------------------

Oscillations of CP profiles are easy to interpret and understand. By using the average of oscillations, it is possible to select the most important variables for an instance prediction. This method can easily be extended to two or more variables.

There are several issues related to the use of the CP oscillations, though. For example, the oscillations may not be of help in situations when the use of CP profiles may itself be problematic (e.g., in the case of correlated explanatory variables or interactions – see Section [10\.5](ceterisParibus.html#CPProsCons)). An important issue is that the CP\-based variable\-importance measures [(11\.1\)](ceterisParibusOscillations.html#eq:VIPCPdef) do not fulfil the local accuracy condition (see Section [8\.2](shapley.html#SHAPMethod)), i.e., they do not sum up to the instance prediction for which they are calculated, unlike the break\-down attributions (see Chapter [6](breakDown.html#breakDown)) or Shapley values (see Chapter [8](shapley.html#shapley)).

11\.6 Code snippets for R
-------------------------

In this section, we present the analysis of CP\-profile oscillations as implemented in the `DALEX` package for R. For illustration, we use the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). The model was developed to predict the probability of survival after the sinking of the Titanic.
Instance\-level explanations are calculated for Henry, a 47\-year\-old male passenger who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)).

We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.

```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
(henry <- archivist::aread("pbiecek/models/a6538"))
```

```
  class gender age sibsp parch fare  embarked
1   1st   male  47     0     0   25 Cherbourg
```

Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. The model's prediction for Henry is obtained with the help of that function.

```
library("randomForest")
library("DALEX")
explain_rf <- DALEX::explain(model = titanic_rf,  
                             data = titanic_imputed[, -9],
                             y = titanic_imputed$survived == "yes", 
                             label = "Random Forest")
predict(explain_rf, henry)
```

```
[1] 0.246
```

### 11\.6\.1 Basic use of the `predict_parts()` function

To calculate CP\-profile oscillations, we use the `predict_parts()` function, already introduced in Section [6\.6](breakDown.html#BDR). In particular, to use estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), we specify argument `type="oscillations_uni"`, whereas for estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), defined in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp), we specify argument `type="oscillations_emp"`. By default, oscillations are calculated for all explanatory variables. To perform calculations only for a subset of variables, one can use the `variables` argument.

In the code below, we apply the function to the explainer\-object for the random forest model `titanic_rf` and the data frame for the instance of interest, i.e., `henry`. Additionally, we specify the `type="oscillations_uni"` argument to indicate that we want to compute CP\-profile oscillations and the estimated value of the variable\-importance measure as defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni).

```
oscillations_uniform <- predict_parts(explainer = explain_rf, 
                                      new_observation = henry, 
                                      type = "oscillations_uni")
oscillations_uniform
```

```
##    _vname_ _ids_ oscillations
## 2   gender     1   0.33700000
## 4    sibsp     1   0.16859406
## 3      age     1   0.16744554
## 1    class     1   0.14257143
## 6     fare     1   0.09942574
## 7 embarked     1   0.02400000
## 5    parch     1   0.01031683
```

The resulting object is of class `ceteris_paribus_oscillations`, which is a data frame with three variables: `_vname_`, `_ids_`, and `oscillations` that provide, respectively, the name of the variable, the value of the identifier of the instance, and the estimated value of the variable\-importance measure. Additionally, the object has an overloaded `plot()` function. We can use the latter function to plot the estimated values of the variable\-importance measure for the instance of interest.
In the code below, before creating the plot, we make the identifier for Henry more explicit. The resulting graph is shown in Figure [11\.3](ceterisParibusOscillations.html#fig:CPoscDefForHenry).

```
oscillations_uniform$`_ids_` <- "Henry"
plot(oscillations_uniform) +
  ggtitle("Ceteris-paribus Oscillations", 
          "Expectation over uniform distribution (unique values)")
```

Figure 11\.3: Variable\-importance measures based on ceteris\-paribus oscillations estimated by the `oscillations_uni` method of the `predict_parts()` function for the random forest model and passenger Henry for the Titanic data.

### 11\.6\.2 Advanced use of the `predict_parts()` function

As mentioned in the previous section, the `predict_parts()` function with argument `type = "oscillations_uni"` computes estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), while for argument `type="oscillations_emp"` it provides estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), defined in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp). However, one could also consider applying estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\) but using a pre\-defined grid of values for a continuous explanatory variable. Toward this aim, we can use the `variable_splits` argument to explicitly specify the grid of values to be used in the computations. Its application is illustrated in the code below for variables *age* and *fare* (with equidistant grids) and for *gender* and *class* (with their observed unique values). Note that, in this case, we use argument `type = "oscillations"`. It is also worth noting that the use of the `variable_splits` argument limits the computations to the variables specified in the argument.

```
oscillations_equidist <- predict_parts(explain_rf, henry, 
             variable_splits = list(age = seq(0, 65, 0.1),
                                    fare = seq(0, 200, 0.1),
                                    gender = unique(titanic_imputed$gender),
                                    class = unique(titanic_imputed$class)),
             type = "oscillations")
oscillations_equidist
```

```
##   _vname_ _ids_ oscillations
## 3  gender     1    0.3370000
## 1     age     1    0.1677235
## 4   class     1    0.1425714
## 2    fare     1    0.1040790
```

Subsequently, we can use the `plot()` function to construct a bar plot of the estimated values. In the code below, before creating the plot, we make the identifier for Henry more explicit. The resulting graph is shown in Figure [11\.4](ceterisParibusOscillations.html#fig:CPoscGridForHenry).

```
oscillations_equidist$`_ids_` <- "Henry"
plot(oscillations_equidist) +
  ggtitle("Ceteris-paribus Oscillations", 
          "Expectation over specified grid of points")
```

Figure 11\.4: Variable\-importance measures based on ceteris\-paribus oscillations estimated by using a specified grid of points for the random forest model and passenger Henry for the Titanic data.

11\.7 Code snippets for Python
------------------------------

At this point, we are not aware of any Python libraries that would implement the methods presented in the current chapter.
It can be seen as an instance\-level variable\-importance measure alternative to the measures discussed in Chapters [6](breakDown.html#breakDown)–[9](LIME.html#LIME). 11\.2 Intuition --------------- To assign importance to CP profiles, we can use the concept of profile oscillations. It is worth noting that the larger influence of an explanatory variable on prediction for a particular instance, the larger the fluctuations of the corresponding CP profile. For a variable that exercises little or no influence on a model’s prediction, the profile will be flat or will barely change. In other words, the values of the CP profile should be close to the value of the model’s prediction for a particular instance. Consequently, the sum of differences between the profile and the value of the prediction, taken across all possible values of the explanatory variable, should be close to zero. The sum can be graphically depicted by the area between the profile and the horizontal line representing the value of the single\-instance prediction. On the other hand, for an explanatory variable with a large influence on the prediction, the area should be large. Figure [11\.1](ceterisParibusOscillations.html#fig:CPVIPprofiles) illustrates the concept based on CP profiles presented in Figure [10\.4](ceterisParibus.html#fig:profileV4Rf). The larger the highlighted area in Figure [11\.1](ceterisParibusOscillations.html#fig:CPVIPprofiles), the more important is the variable for the particular prediction. Figure 11\.1: The value of the coloured area summarizes the oscillations of a ceteris\-paribus (CP) profile and provides the mean of the absolute deviations between the CP profile and the single\-instance prediction. The CP profiles are constructed for the `titanic_rf` random forest model for the Titanic data and passenger Henry. 11\.3 Method ------------ Let us formalize this concept now. Denote by \\(g^j(z)\\) the probability density function of the distribution of the \\(j\\)\-th explanatory variable. The summary measure of the variable’s importance for model \\(f()\\)’s prediction at \\(\\underline{x}\_\*\\), \\(vip\_{CP}^{j}(\\underline{x}\_\*)\\), is defined as follows: \\\[\\begin{equation} vip\_{CP}^j(\\underline{x}\_\*) \= \\int\_{\\mathcal R} \|h^{j}\_{\\underline{x}\_\*}(z) \- f(\\underline{x}\_\*)\| g^j(z)dz\=E\_{X^j}\\left\\{\|h^{j}\_{\\underline{x}\_\*}(X^j) \- f(\\underline{x}\_\*)\|\\right\\}. \\tag{11\.1} \\end{equation}\\] Thus, \\(vip\_{CP}^j(\\underline{x}\_\*)\\) is the expected absolute deviation of the CP profile \\(h^{j}\_{\\underline{x}\_\*}()\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), from the model’s prediction at \\(\\underline{x}\_\*\\), computed over the distribution \\(g^j(z)\\) of the \\(j\\)\-th explanatory variable. The true distribution of \\(j\\)\-th explanatory variable is, in most cases, unknown. There are several possible approaches to construct an estimator of [(11\.1\)](ceterisParibusOscillations.html#eq:VIPCPdef). One is to calculate the area under the CP curve, i.e., to assume that \\(g^j(z)\\) is a uniform distribution over the range of variable \\(X^j\\). It follows that a straightforward estimator of \\(vip\_{CP}^{j}(\\underline{x}\_\*)\\) is \\\[\\begin{equation} \\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*) \= \\frac 1k \\sum\_{l\=1}^k \|h^{j}\_{x\_\*}(z\_l) \- f(\\underline{x}\_\*)\|, \\tag{11\.2} \\end{equation}\\] where \\(z\_l\\) (\\(l\=1, \\ldots, k\\)) are selected values of the \\(j\\)\-th explanatory variable. 
For instance, one can consider all unique values of \\(X^{j}\\) in a dataset. Alternatively, for a continuous variable, one can use an equidistant grid of values. Another approach is to use the empirical distribution of \\(X^{j}\\). This leads to the estimator defined as follows: \\\[\\begin{equation} \\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*) \= \\frac 1n \\sum\_{i\=1}^n \|h^{j}\_{\\underline{x}\_\*}(x^{j}\_i) \- f(\\underline{x}\_\*)\|, \\tag{11\.3} \\end{equation}\\] where index \\(i\\) runs through all observations in a dataset. The use of \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\) is preferred when there are enough data to accurately estimate the empirical distribution and when the distribution is not uniform. On the other hand, \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\) is in most cases quicker to compute and, therefore, it is preferred if we look for fast approximations. Note that the local evaluation of the variables’ importance can be very different from the global evaluation. This is well illustrated by the following example. Consider the model \\\[ f(x^1, x^2\) \= x^1 \* x^2, \\] where variables \\(X^1\\) and \\(X^2\\) take values in \\(\[0,1]\\). Furthermore, consider prediction for an observation described by vector \\(\\underline{x}\_\* \= (0,1\)\\). In that case, the importance of \\(X^1\\) is larger than \\(X^2\\). This is because the CP profile \\(h^1\_{x\_\*}(z) \= z\\), while \\(h^2\_{x\_\*}(z) \= 0\\). Thus, there are oscillations for the first variable, but no oscillations for the second one. Hence, at \\(\\underline{x}\_\* \= (0,1\)\\), the first variable is more important than the second. Globally, however, both variables are equally important, because the model is symmetrical. 11\.4 Example: Titanic data --------------------------- Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) shows bar plots summarizing the size of oscillations for explanatory variables for the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for Henry, a 47\-year\-old man who travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). The longer the bar, the larger the CP\-profile oscillations for the particular explanatory variable. The left\-hand\-side panel presents the variable\-importance measures computed by applying estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), given in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), to an equidistant grid of values. The right\-hand\-side panel shows the results obtained by applying estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), given in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp), with an empirical distribution for explanatory variables. The plots presented in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) indicate that both estimators consistently suggest that the most important variables for the model’s prediction for Henry are *gender* and *age*, followed by *class*. However, a remarkable difference can be observed for the *sibsp* variable, which gains in relative importance for estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\). In this respect, it is worth recalling that this variable has a very skewed distribution (see Figure [4\.3](dataSetsIntro.html#fig:titanicExplorationParch)). In particular, a significant mass of the distribution is concentrated at zero, but there have been a few high values observed for the variable. 
As a result, the of empirical density is very different from a uniform distribution. Hence the difference in the relative importance noted in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1). It is worth noting that, while the variable\-importance plot in Figure [11\.2](ceterisParibusOscillations.html#fig:CPVIP1) does indicate which explanatory variables are important, it does not describe how do the variables influence the prediction. In that respect, the CP profile for *age* for Henry (see Figure [10\.4](ceterisParibus.html#fig:profileV4Rf)) suggested that, if Henry were older, this would significantly lower his probability of survival. One the other hand, the CP profile for *sibsp* (see Figure [10\.4](ceterisParibus.html#fig:profileV4Rf)) indicated that, were Henry not travelling alone, this would increase his chances of survival. Thus, the variable\-importance plots should always be accompanied by plots of the relevant CP profiles. Figure 11\.2: Variable\-importance measures based on ceteris\-paribus oscillations estimated by using (left\-hand\-side panel) a uniform grid of explanatory\-variable values and (right\-hand\-side panel) empirical distribution of explanatory\-variables for the random forest model and passenger Henry for the Titanic data. 11\.5 Pros and cons ------------------- Oscillations of CP profiles are easy to interpret and understand. By using the average of oscillations, it is possible to select the most important variables for an instance prediction. This method can easily be extended to two or more variables. There are several issues related to the use of the CP oscillations, though. For example, the oscillations may not be of help in situations when the use of CP profiles may itself be problematic (e.g., in the case of correlated explanatory variables or interactions – see Section [10\.5](ceterisParibus.html#CPProsCons)). An important issue is that the CP\-based variable\-importance measures [(11\.1\)](ceterisParibusOscillations.html#eq:VIPCPdef) do not fulfil the local accuracy condition (see Section [8\.2](shapley.html#SHAPMethod)), i.e., they do not sum up to the instance prediction for which they are calculated, unlike the break\-down attributions (see Chapter [6](breakDown.html#breakDown)) or Shapley values (see Chapter [8](shapley.html#shapley)). 11\.6 Code snippets for R ------------------------- In this section, we present analysis of CP\-profile oscillations as implemented in the `DALEX` package for R. For illustration, we use the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). The model was developed to predict the probability of survival after the sinking of the Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old male passenger that travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. 
```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
(henry <- archivist::aread("pbiecek/models/a6538"))
```

```
  class gender age sibsp parch fare  embarked
1   1st   male  47     0     0   25 Cherbourg
```

Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. The model’s prediction for Henry is obtained with the help of that function.

```
library("randomForest")
library("DALEX")
explain_rf <- DALEX::explain(model = titanic_rf,
                             data = titanic_imputed[, -9],
                             y = titanic_imputed$survived == "yes",
                             label = "Random Forest")
predict(explain_rf, henry)
```

```
[1] 0.246
```

### 11\.6\.1 Basic use of the `predict_parts()` function

To calculate CP\-profile oscillations, we use the `predict_parts()` function, already introduced in Section [6\.6](breakDown.html#BDR). In particular, to use estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), we specify argument `type="oscillations_uni"`, whereas for estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), defined in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp), we specify argument `type="oscillations_emp"`. By default, oscillations are calculated for all explanatory variables. To perform calculations only for a subset of variables, one can use the `variables` argument.

In the code below, we apply the function to the explainer\-object for the random forest model `titanic_rf` and the data frame for the instance of interest, i.e., `henry`. Additionally, we specify the `type="oscillations_uni"` argument to indicate that we want to compute CP\-profile oscillations and the estimated value of the variable\-importance measure as defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni).

```
oscillations_uniform <- predict_parts(explainer = explain_rf,
                                      new_observation = henry,
                                      type = "oscillations_uni")
oscillations_uniform
```

```
##    _vname_ _ids_ oscillations
## 2   gender     1   0.33700000
## 4    sibsp     1   0.16859406
## 3      age     1   0.16744554
## 1    class     1   0.14257143
## 6     fare     1   0.09942574
## 7 embarked     1   0.02400000
## 5    parch     1   0.01031683
```

The resulting object is of class `ceteris_paribus_oscillations`, which is a data frame with three variables: `_vname_`, `_ids_`, and `oscillations` that provide, respectively, the name of the variable, the value of the identifier of the instance, and the estimated value of the variable\-importance measure. Additionally, the object has also got an overloaded `plot()` function. We can use the latter function to plot the estimated values of the variable\-importance measure for the instance of interest. In the code below, before creating the plot, we make the identifier for Henry more explicit. The resulting graph is shown in Figure [11\.3](ceterisParibusOscillations.html#fig:CPoscDefForHenry).
```
oscillations_uniform$`_ids_` <- "Henry"
plot(oscillations_uniform) +
    ggtitle("Ceteris-paribus Oscillations",
            "Expectation over uniform distribution (unique values)")
```

Figure 11\.3: Variable\-importance measures based on ceteris\-paribus oscillations estimated by the `oscillations_uni` method of the `predict_parts()` function for the random forest model and passenger Henry for the Titanic data.

### 11\.6\.2 Advanced use of the `predict_parts()` function

As mentioned in the previous section, the `predict_parts()` function with argument `type = "oscillations_uni"` computes estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\), defined in [(11\.2\)](ceterisParibusOscillations.html#eq:VIPCPuni), while for argument `type="oscillations_emp"` it provides estimator \\(\\widehat{vip}\_{CP}^{j,emp}(\\underline{x}\_\*)\\), defined in [(11\.3\)](ceterisParibusOscillations.html#eq:VIPCPemp). However, one could also consider applying estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\) but using a pre\-defined grid of values for a continuous explanatory variable. Toward this aim, we can use the `variable_splits` argument to explicitly specify the grid of values to be used for the selected variables. Its application is illustrated in the code below for variables *age* and *fare*. Note that, in this case, we use argument `type = "oscillations"`. It is also worth noting that the use of the `variable_splits` argument limits the computations to the variables specified in the argument.

```
oscillations_equidist <- predict_parts(explain_rf, henry,
          variable_splits = list(age = seq(0, 65, 0.1),
                                 fare = seq(0, 200, 0.1),
                                 gender = unique(titanic_imputed$gender),
                                 class = unique(titanic_imputed$class)),
          type = "oscillations")
oscillations_equidist
```

```
##   _vname_ _ids_ oscillations
## 3  gender     1    0.3370000
## 1     age     1    0.1677235
## 4   class     1    0.1425714
## 2    fare     1    0.1040790
```

Subsequently, we can use the `plot()` function to construct a bar plot of the estimated values. In the code below, before creating the plot, we make the identifier for Henry more explicit. The resulting graph is shown in Figure [11\.4](ceterisParibusOscillations.html#fig:CPoscGridForHenry).

```
oscillations_equidist$`_ids_` <- "Henry"
plot(oscillations_equidist) +
    ggtitle("Ceteris-paribus Oscillations",
            "Expectation over specified grid of points")
```

Figure 11\.4: Variable\-importance measures based on ceteris\-paribus oscillations estimated by using a specified grid of points for the random forest model and passenger Henry for the Titanic data.
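To see what these estimates correspond to, estimator \\(\\widehat{vip}\_{CP}^{j,uni}(\\underline{x}\_\*)\\) can also be computed directly from its definition. The code below is only an illustrative sketch (it is not how `predict_parts()` is implemented internally); the helper function `cp_oscillation()` and the 0 to 80 grid for *age* are hypothetical choices.

```
# A minimal sketch of estimator (11.2): the average absolute deviation of the
# CP profile from the model's prediction for the instance of interest.
cp_oscillation <- function(explainer, new_observation, variable, grid) {
  f_star <- predict(explainer, new_observation)
  profile <- sapply(grid, function(z) {
    modified <- new_observation
    modified[[variable]] <- z          # replace the value of the selected variable
    predict(explainer, modified)
  })
  mean(abs(profile - f_star))
}

# Example: an equidistant grid for the continuous variable age
cp_oscillation(explain_rf, henry, "age", grid = seq(0, 80, length.out = 101))
```

The result will be close to, though not necessarily identical with, the value reported for *age* by `predict_parts()`, because the grid of values used above differs from the one used by the function.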
11\.7 Code snippets for Python
------------------------------

At this point we are not aware of any Python libraries that would implement the methods presented in the current chapter.
12 Local\-diagnostics Plots
===========================

12\.1 Introduction
------------------

It may happen that, despite the fact that the predictive performance of a model is satisfactory overall, the model’s predictions for some observations are drastically worse. In such a situation it is often said that “the model does not cover well some areas of the input space”. For example, a model fitted to the data for “typical” patients in a certain hospital may not perform well for patients from another hospital with possibly different characteristics. Or, a model developed to evaluate the risk of spring\-holiday consumer\-loans may not perform well in the case of autumn loans taken for Christmas\-holiday gifts. For this reason, in case of important decisions, it is worthwhile to check how the model behaves locally for observations similar to the instance of interest.

In this chapter, we present two local\-diagnostics techniques that address this issue. The first are *local\-fidelity plots*, which evaluate the local predictive performance of the model around the observation of interest. The second are *local\-stability plots*, which assess the (local) stability of predictions around the observation of interest.

12\.2 Intuition
---------------

Assume that, for the observation of interest, we have identified a set of observations from the training data with similar characteristics. We will call these similar observations “neighbours”. The basic idea behind local\-fidelity plots is to compare the distribution of residuals (i.e., differences between the observed and predicted value of the dependent variable; see equation [(2\.1\)](modelDevelopmentProcess.html#eq:modelResiduals)) for the neighbours with the distribution of residuals for the entire training dataset.

Figure [12\.1](localDiagnostics.html#fig:profileBack2BackHist) presents histograms of residuals for the entire dataset and for a selected set of 25 neighbours for an instance of interest for the random forest model for the apartment\-prices dataset (Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)). The distribution of residuals for the entire dataset is rather symmetric and centred around 0, suggesting a reasonable overall performance of the model. On the other hand, the residuals for the selected neighbours are centred around the value of 500\. This suggests that, for the apartments similar to the one of interest, the model is biased towards values smaller than the observed ones (residuals are positive, so, on average, the observed value of the dependent variable is larger than the predicted value).

Figure 12\.1: Histograms of residuals for the random forest model `apartments_rf` for the apartment\-prices dataset. Upper panel: residuals calculated for all observations from the dataset. Bottom panel: residuals calculated for 25 nearest neighbours of the instance of interest.

The idea behind local\-stability plots is to check whether small changes in the explanatory variables, as represented by the changes within the set of neighbours, have got much influence on the predictions. Figure [12\.2](localDiagnostics.html#fig:profileWith10NN) presents CP profiles for variable *age* for an instance of interest and its 10 nearest neighbours for the random forest model for the Titanic dataset (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). The profiles are almost parallel and very close to each other. In fact, some of them overlap so that only 5 different ones are visible.
This suggests that the model’s predictions are stable around the instance of interest.

Of course, CP profiles for different explanatory variables may be very different, so a natural question is: which variables should we examine? The obvious choice is to focus on the variables that are the most important according to a variable\-importance measure such as the ones discussed in Chapters [6](breakDown.html#breakDown), [8](shapley.html#shapley), [9](LIME.html#LIME), or [11](ceterisParibusOscillations.html#ceterisParibusOscillations).

Figure 12\.2: Ceteris\-paribus profiles for a selected instance (dark violet line) and 10 nearest neighbours (light grey lines) for the random forest model for the Titanic data.

12\.3 Method
------------

To construct local\-fidelity or local\-stability plots, we have got to, first, select “neighbours” of the observation of interest. Then, for the fidelity analysis, we have got to calculate and compare residuals for the neighbours. For the stability analysis, we have got to calculate and visualize CP profiles for the neighbours. In what follows, we discuss each of these steps in more detail.

### 12\.3\.1 Nearest neighbours

There are two important questions related to the selection of the neighbours “nearest” to the instance (observation) of interest:

* How many neighbours should we choose?
* What metric should be used to measure the “proximity” of observations?

The answer to both questions is *it depends*. The smaller the number of neighbours, the more local the analysis. However, selecting a very small number will lead to a larger variability of the results. In many cases we found that having about 20 neighbours works fine. However, one should always take into account computational time (because a smaller number of neighbours will result in faster calculations) and the size of the dataset (because, for a small dataset, a smaller set of neighbours may be preferred).

The metric is very important. The more explanatory variables, the more important the choice. In particular, the metric should be capable of accommodating variables of different nature (categorical, continuous). Our default choice is the Gower similarity measure:

\\\[\\begin{equation} d\_{gower}(\\underline{x}\_i, \\underline{x}\_j) \= \\frac{1}{p} \\sum\_{k\=1}^p d^k(x\_i^k, x\_j^k), \\tag{12\.1} \\end{equation}\\]

where \\(\\underline{x}\_i\\) is a \\(p\\)\-dimensional vector of values of explanatory variables for the \\(i\\)\-th observation and \\(d^k(x\_i^k,x\_j^k)\\) is the distance between the values of the \\(k\\)\-th variable for the \\(i\\)\-th and \\(j\\)\-th observations. Note that \\(p\\) may be equal to the number of all explanatory variables included in the model, or only a subset of them. Metric \\(d^k()\\) in [(12\.1\)](localDiagnostics.html#eq:Gower) depends on the nature of the variable. For a continuous variable, it is equal to

\\\[ d^k(x\_i^k, x\_j^k)\=\\frac{\|x\_i^k\-x\_j^k\|}{\\max(x\_1^k,\\ldots,x\_n^k)\-\\min(x\_1^k,\\ldots,x\_n^k)}, \\]

i.e., the absolute difference scaled by the observed range of the variable. On the other hand, for a categorical variable,

\\\[ d^k(x\_i^k, x\_j^k)\=1\_{x\_i^k \\neq x\_j^k}, \\]

where \\(1\_A\\) is the indicator function for condition \\(A\\).

An advantage of the Gower similarity measure is that it can be used for vectors with both categorical and continuous variables. A disadvantage is that it takes into account neither correlation between variables nor variable importance.
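As an illustration, the Gower distance in [(12\.1\)](localDiagnostics.html#eq:Gower) can be coded in a few lines. The function below is only a hand\-rolled sketch for two observations stored as one\-row data frames with the same columns (optimized implementations are available, e.g., in the `gower` package for R); the function name and the commented usage line are illustrative.

```
# A sketch of the Gower distance (12.1); `data` provides the ranges of the
# numeric variables used for scaling. Assumes x_i, x_j, and data share the
# same columns in the same order.
gower_distance <- function(x_i, x_j, data) {
  d_k <- mapply(function(xi, xj, column) {
    if (is.numeric(column)) {
      abs(xi - xj) / (max(column) - min(column))  # scaled absolute difference
    } else {
      as.numeric(xi != xj)                        # 1 if the categories differ, 0 otherwise
    }
  }, x_i, x_j, data)
  mean(d_k)
}

# Illustrative use for the Titanic data (assuming henry and titanic_imputed are loaded):
# gower_distance(henry, titanic_imputed[1, colnames(henry)],
#                titanic_imputed[, colnames(henry)])
```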
For a high\-dimensional setting, an interesting alternative is the proximity measure used in random forests (Leo Breiman [2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)), as it takes into account variable importance; however, it requires a fitted random forest model.

Once we have decided on the number of neighbours, we can use the chosen metric to select the required number of observations “closest” to the one of interest.

### 12\.3\.2 Local\-fidelity plot

Figure [12\.1](localDiagnostics.html#fig:profileBack2BackHist) summarizes two distributions of residuals, i.e., residuals for the neighbours of the observation of interest and residuals for the entire training dataset except for neighbours. For a typical observation, these two distributions should be similar. An alarming situation is when the residuals for the neighbours are shifted towards positive or negative values.

Apart from visual examination, we may use statistical tests to compare the two distributions. If we do not want to assume any particular parametric form of the distributions (like, e.g., normal), we may choose non\-parametric tests like the Wilcoxon test or the Kolmogorov\-Smirnov test. For statistical tests, it is important that the two sets are disjoint.

### 12\.3\.3 Local\-stability plot

Once neighbours of the observation of interest have been identified, we can graphically compare CP profiles for selected (or all) explanatory variables. For a model with a large number of variables, we may end up with a large number of plots. In such a case, a better strategy is to focus only on a few of the most important variables, selected by using a variable\-importance measure (see, for example, Chapter [11](ceterisParibusOscillations.html#ceterisParibusOscillations)).

CP profiles are helpful to assess model stability. In addition, we can enhance the plots by adding residuals to them to allow evaluation of the local model\-fit. The plot that includes CP profiles for the nearest neighbours and the corresponding residuals is called a local\-stability plot.

12\.4 Example: Titanic
----------------------

As an example, we will consider the prediction for Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)) for the random forest model for the Titanic data (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)).

Figure [12\.3](localDiagnostics.html#fig:localStabilityPlotAge) presents a detailed explanation of the elements of a local\-stability plot for *age*, a continuous explanatory variable. The plot includes eight nearest neighbours of Johnny D (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)). The green line shows the CP profile for the instance of interest. Profiles of the nearest neighbours are marked with grey lines. The vertical intervals correspond to residuals; the shorter the interval, the smaller the residual and the more accurate the model’s prediction. Blue intervals correspond to positive residuals, red intervals to negative residuals. For an additive model, CP profiles will be approximately parallel. For a model with stable predictions, the profiles should be close to each other. This is not the case in Figure [12\.3](localDiagnostics.html#fig:localStabilityPlotAge), in which the profiles are quite far apart from each other. Thus, the plot suggests potential instability of the model’s predictions. Note that there are positive and negative residuals included in the plot. This indicates that, on average, the instance prediction itself should not be biased.
Figure 12\.3: Elements of a local\-stability plot for a continuous explanatory variable. Ceteris\-paribus profiles for variable *age* for Johnny D and 5 nearest neighbours for the random forest model for the Titanic data.

12\.5 Pros and cons
-------------------

Local\-stability plots may be very helpful to check if the model is locally additive, as for such models the CP profiles should be parallel. Also, the plots allow assessment of whether the model is locally stable, as, in that case, the CP profiles should be close to each other. Local\-fidelity plots are useful in checking whether the model\-fit for the instance of interest is unbiased, as in that case the residuals should be small and their distribution should be symmetric around 0\.

The disadvantage of both types of plots is that they are quite complex and lack objective measures of the quality of the model\-fit. Thus, they are mainly suitable for exploratory analysis.

12\.6 Code snippets for R
-------------------------

In this section, we present local diagnostic plots as implemented in the `DALEX` package for R. For illustration, we use the random forest model `titanic_rf` (Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). The model was developed to predict the probability of survival after the sinking of the Titanic. Instance\-level explanations are calculated for Henry, a 47\-year\-old male passenger that travelled in the first class (see Section [4\.2\.5](dataSetsIntro.html#predictions-titanic)).

We first retrieve the `titanic_rf` model\-object and the data frame for Henry via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values.

```
titanic_imputed <- archivist::aread("pbiecek/models/27e5c")
titanic_rf <- archivist::aread("pbiecek/models/4e0fc")
(henry <- archivist::aread("pbiecek/models/a6538"))
```

```
##   class gender age sibsp parch fare  embarked
## 1   1st   male  47     0     0   25 Cherbourg
```

Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. The model’s prediction for Henry is obtained with the help of that function.

```
library("randomForest")
library("DALEX")
explain_rf <- DALEX::explain(model = titanic_rf,
                             data = titanic_imputed[, -9],
                             y = titanic_imputed$survived == "yes",
                             label = "Random Forest")
predict(explain_rf, henry)
```

```
## [1] 0.246
```

To construct a local\-fidelity plot similar to the one shown in Figure [12\.1](localDiagnostics.html#fig:profileBack2BackHist), we can use the `predict_diagnostics()` function from the `DALEX` package. The main arguments of the function are `explainer`, which specifies the name of the explainer\-object for the model to be explained, and `new_observation`, which specifies the name of the data frame for the instance for which prediction is of interest. Additional useful arguments are `neighbours`, which specifies the number of observations similar to the instance of interest to be selected (default is 50\), and `distance`, the function used to measure the similarity of the observations (by default, the Gower similarity measure is used).
Note that function `predict_diagnostics()` has to compute residuals. Thus, we have got to specify the `y` and `residual_function` arguments when using function `explain()` to create the explainer\-object (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). If the `residual_function` argument is applied with the default `NULL` value, then model residuals are calculated as in [(2\.1\)](modelDevelopmentProcess.html#eq:modelResiduals).

In the code below, we perform computations for the random forest model `titanic_rf` and Henry. We select 100 “neighbours” of Henry by using the (default) Gower similarity measure.

```
id_rf <- predict_diagnostics(explainer = explain_rf,
                             new_observation = henry,
                             neighbours = 100)
id_rf
```

```
## 
##  Two-sample Kolmogorov-Smirnov test
## 
## data:  residuals_other and residuals_sel
## D = 0.47767, p-value = 4.132e-10
## alternative hypothesis: two-sided
```

The resulting object is of class `predict_diagnostics`. It is a list of several components that includes, among others, histograms summarizing the distribution of residuals for the entire training dataset and for the neighbours, as well as the result of the Kolmogorov\-Smirnov test comparing the two distributions. The test result is given by default when the object is printed out. In our case, it suggests a statistically significant difference between the two distributions.

We can use the `plot()` function to compare the distributions graphically. The resulting graph is shown in Figure [12\.4](localDiagnostics.html#fig:localFidelityPlotResHenry). The plot suggests that the distribution of the residuals for Henry’s neighbours might be slightly shifted towards positive values, as compared to the overall distribution.

```
plot(id_rf)
```

Figure 12\.4: The local\-fidelity plot for the random forest model for the Titanic data and passenger Henry with 100 neighbours.

Function `predict_diagnostics()` can also be used to construct local\-stability plots. Toward this aim, we have got to select the explanatory variable for which we want to create the plot. We can do it by passing the name of the variable to the `variables` argument of the function. In the code below, we first calculate CP profiles and residuals for *age* and 10 neighbours of Henry.

```
id_rf_age <- predict_diagnostics(explainer = explain_rf,
                                 new_observation = henry,
                                 neighbours = 10,
                                 variables = "age")
```

By applying the `plot()` function to the resulting object, we obtain the local\-stability plot shown in Figure [12\.5](localDiagnostics.html#fig:localStabilityPlotAgeHenry). The profiles are relatively close to each other, suggesting the stability of predictions. There are more negative than positive residuals, which may be seen as a signal of a (local) positive bias of the predictions.

```
plot(id_rf_age)
```

Figure 12\.5: The local\-stability plot for variable *age* in the random forest model for the Titanic data and passenger Henry with 10 neighbours. Note that some profiles overlap, so the graph shows fewer lines.

In the code below, we conduct the necessary calculations for the categorical variable *class* and 10 neighbours of Henry.

```
id_rf_class <- predict_diagnostics(explainer = explain_rf,
                                   new_observation = henry,
                                   neighbours = 10,
                                   variables = "class")
```

By applying the `plot()` function to the resulting object, we obtain the corresponding local\-stability plot for the *class* variable. The profiles are not parallel, indicating non\-additivity of the effect.
However, they are relatively close to each other, suggesting the stability of predictions.

```
plot(id_rf_class)
```

12\.7 Code snippets for Python
------------------------------

At this point we are not aware of any Python libraries that would implement the methods presented in the current chapter.
15 Model\-performance Measures
==============================

15\.1 Introduction
------------------

In this chapter, we present measures that are useful for the evaluation of the overall performance of a (predictive) model. As mentioned in Sections [2\.1](modelDevelopmentProcess.html#MDPIntro) and [2\.5](modelDevelopmentProcess.html#fitting), in general, we can distinguish between the explanatory and predictive approaches to statistical modelling. Leo Breiman ([2001](#ref-twoCultures)[b](#ref-twoCultures)) indicates that validation of a model can be based on evaluation of *goodness\-of\-fit* (GoF) or on evaluation of predictive accuracy (which we will term *goodness\-of\-prediction*, GoP). In principle, GoF is mainly used for explanatory models, while GoP is applied to predictive models. In a nutshell, GoF pertains to the question: how well do the model’s predictions explain (fit) dependent\-variable values of the observations used for developing the model? On the other hand, GoP is related to the question: how well does the model predict the value of the dependent variable for a new observation? For some measures, their interpretation in terms of GoF or GoP depends on whether they are computed by using training or testing data.

The measures may be applied for several purposes, including:

* model evaluation: we may want to know how good the model is, i.e., how reliable are the model’s predictions (how frequent and how large are the errors that we may expect)?;
* model comparison: we may want to compare two or more models in order to choose between them;
* out\-of\-sample and out\-of\-time comparisons: we may want to check a model’s performance when applied to new data, to evaluate whether the performance has deteriorated.

Depending on the nature of the dependent variable (continuous, binary, categorical, count, etc.), different model\-performance measures may be used. Moreover, the list of useful measures is growing as new applications emerge. In this chapter, we discuss only a selected set of measures, some of which are used in dataset\-level exploration techniques introduced in subsequent chapters. We also limit ourselves to the two basic types of dependent variables considered in our book: continuous (including count) and categorical (including binary).

15\.2 Intuition
---------------

Most model\-performance measures are based on the comparison of the model’s predictions with the (known) values of the dependent variable in a dataset. For an ideal model, the predictions and the dependent\-variable values should be equal. In practice, this is never the case, and we want to quantify the disagreement.

In principle, model\-performance measures may be computed for the training dataset, i.e., the data used for developing the model. However, in that case there is a serious risk that the computed values will overestimate the quality of the model’s predictive performance. A more meaningful approach is to apply the measures to an independent testing dataset. Alternatively, a bias\-correction strategy can be used when applying them to the training data. Toward this aim, various strategies have been proposed, such as cross\-validation or bootstrapping (Kuhn and Johnson [2013](#ref-Kuhn2013); Harrell [2015](#ref-Harrell2015); Steyerberg [2019](#ref-Steyerberg2019)). In what follows, we mainly consider the simple data\-split strategy, i.e., we assume that the available data are split into a training set and a testing set. The model is created on the former, and the latter set is used to assess the model’s performance.
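For completeness, a basic version of such a split can be obtained with a few lines of base R. The snippet below is a generic sketch; the 70/30 proportion, the seed, and the name `dataset` (standing for any data frame with the dependent and explanatory variables) are arbitrary illustrative choices.

```
# A minimal sketch of a random train/test split of a data frame `dataset`.
set.seed(1313)                                   # for reproducibility
n <- nrow(dataset)
idx_train <- sample(seq_len(n), size = floor(0.7 * n))
train_set <- dataset[idx_train, ]                # used to fit the model
test_set  <- dataset[-idx_train, ]               # used to assess its performance
```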
It is worth mentioning that there are two important aspects of prediction: *calibration* and *discrimination* (Harrell, Lee, and Mark [1996](#ref-Harrell1996)). Calibration refers to the extent of bias in predicted values, i.e., the mean difference between the predicted and true values. Discrimination refers to the ability of the predictions to distinguish between individual true values. For instance, consider a model to be used for weather forecasts in a region where, on average, it rains half the year. A simple model that predicts that every other day is rainy is well\-calibrated because, on average, the resulting predicted risk of a rainy day in a year is 50%, which agrees with the actual situation. However, the model is not very discriminative (for each calendar day, the probability of a correct prediction is 50%, the same as for a fair\-coin toss) and, hence, not very useful. Thus, in addition to overall measures of GoP, we may need separate measures for calibration and discrimination of a model. Note that, for the latter, we may want to weigh differently the situation when the prediction is, for instance, larger than the true value, as compared to the case when it is smaller. Depending on the decision on how to weigh different types of disagreement, we may need different measures.

In the best possible scenario, we can specify a single model\-performance measure before the model is created and then optimize the model for this measure. But, in practice, a more common scenario is to use several performance measures, which are often selected after the model has been created.

15\.3 Method
------------

Assume that we have got a training dataset with \\(n\\) observations on \\(p\\) explanatory variables and on a dependent variable \\(Y\\). Let \\(\\underline{x}\_i\\) denote the (column) vector of values of the explanatory variables for the \\(i\\)\-th observation, and \\(y\_i\\) the corresponding value of the dependent variable. We will use \\(\\underline{X}\=(x'\_1,\\ldots,x'\_n)\\) to denote the matrix of explanatory variables for all \\(n\\) observations, and \\(\\underline{y}\=(y\_1,\\ldots,y\_n)'\\) to denote the (column) vector of the values of the dependent variable.

The training dataset is used to develop model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\), where \\(\\underline{\\hat{\\theta}}\\) denotes the estimated values of the model’s coefficients. Note that we could also use here the “penalized” estimates \\(\\underline{\\tilde{\\theta}}\\) (see Section [2\.5](modelDevelopmentProcess.html#fitting)). Let \\(\\widehat{y}\_i\\) indicate the model’s prediction corresponding to \\(y\_i\\).

The model performance analysis is often based on an independent dataset called a testing set. In some cases, model\-performance measures are based on a leave\-one\-out approach. We will denote by \\(\\underline{X}\_{\-i}\\) the matrix of explanatory variables when excluding the \\(i\\)\-th observation and by \\(f(\\underline{\\hat{\\theta}}\_{\-i}; \\underline{X}\_{\-i})\\) the model developed for the reduced data. It is worth noting here that the leave\-one\-out model \\(f(\\underline{\\hat{\\theta}}\_{\-i}; \\underline{X}\_{\-i})\\) is different from the full\-data model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\). But often they are close to each other and conclusions obtained from one can be transferred to the other.
In the subsequent sections, we present various model\-performance measures. The measures are applied in essentially the same way if a training or a testing dataset is used. If there is any difference in the interpretation or properties of the measures between the two situations, we will explicitly mention them. Note that, in what follows, we will ignore in the notation the fact that we consider the estimated model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\) and we will use \\(f()\\) as a generic notation for it.

### 15\.3\.1 Continuous dependent variable

#### 15\.3\.1\.1 Goodness\-of\-fit

The most popular GoF measure for models for a continuous dependent variable is the mean squared\-error, defined as

\\\[\\begin{equation} MSE(f,\\underline{X},\\underline{y}) \= \\frac{1}{n} \\sum\_{i}^{n} (\\widehat{y}\_i \- y\_i)^2 \= \\frac{1}{n} \\sum\_{i}^{n} r\_i^2, \\tag{15\.1} \\end{equation}\\]

where \\(r\_i\\) is the residual for the \\(i\\)\-th observation (see also Section [2\.3](modelDevelopmentProcess.html#notation)). Thus, MSE can be seen as the mean of the squared residuals. MSE is a convex differentiable function, which is important from an optimization point of view (see Section [2\.5](modelDevelopmentProcess.html#fitting)). As the measure weighs all differences equally, large residuals have got a high impact on MSE. Thus, the measure is sensitive to outliers. For a “perfect” model, which predicts (fits) all \\(y\_i\\) exactly, \\(MSE \= 0\\).

Note that MSE is constructed on a different scale from the dependent variable. Thus, a more interpretable variant of this measure is the root\-mean\-squared\-error (RMSE), defined as

\\\[\\begin{equation} RMSE(f, \\underline{X}, \\underline{y}) \= \\sqrt{MSE(f, \\underline{X}, \\underline{y})}. \\tag{15\.2} \\end{equation}\\]

A popular variant of RMSE is its normalized version, \\(R^2\\), defined as

\\\[\\begin{equation} R^2(f, \\underline{X}, \\underline{y}) \= 1 \- \\frac{MSE(f, \\underline{X}, \\underline{y})}{MSE(f\_0, \\underline{X},\\underline{y})}. \\tag{15\.3} \\end{equation}\\]

In [(15\.3\)](modelPerformance.html#eq:R2), \\(f\_0()\\) denotes a “baseline” model. For instance, in the case of the classical linear regression, \\(f\_0()\\) is the model that includes only the intercept, which implies the use of the mean value of \\(Y\\) as a prediction for all observations. \\(R^2\\) is normalized in the sense that the “perfectly” fitting model leads to \\(R^2 \= 1\\), while \\(R^2 \= 0\\) means that we are not doing better than the baseline model. In the context of the classical linear regression, \\(R^2\\) is the familiar coefficient of determination and can be interpreted as the fraction of the total variance of \\(Y\\) “explained” by model \\(f()\\).

Given the sensitivity of MSE to outliers, sometimes the median absolute\-deviation (MAD) is considered:

\\\[\\begin{equation} MAD(f, \\underline{X} ,\\underline{y}) \= median( \|r\_1\|, ..., \|r\_n\| ). \\tag{15\.4} \\end{equation}\\]

MAD is more robust to outliers than MSE. A disadvantage of MAD is its less favourable mathematical properties. Section [15\.4\.1](modelPerformance.html#modelPerformanceApartments) illustrates the use of measures for the linear regression model and the random forest model for the apartment\-prices data.
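A minimal sketch of the GoF measures defined in (15\.1\)\-(15\.4\), written as a small helper function that takes the observed values and the predictions; the linear model and the `mtcars` data below are arbitrary illustrations.

```
gof_measures <- function(y, y_hat) {
  r    <- y - y_hat                        # residuals
  mse  <- mean(r^2)                        # (15.1)
  rmse <- sqrt(mse)                        # (15.2)
  mse0 <- mean((y - mean(y))^2)            # MSE of the intercept-only baseline model
  r2   <- 1 - mse / mse0                   # (15.3)
  mad  <- median(abs(r))                   # (15.4)
  c(MSE = mse, RMSE = rmse, R2 = r2, MAD = mad)
}

fit <- lm(mpg ~ wt + hp, data = mtcars)
gof_measures(mtcars$mpg, fitted(fit))
```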
#### 15\.3\.1\.2 Goodness\-of\-prediction

Assume that a testing dataset is available. In that case, we can use model \\(f()\\), obtained by fitting the model to training data, to predict the values of the dependent variable observed in the testing dataset. Subsequently, we can compute MSE as in [(15\.1\)](modelPerformance.html#eq:MSE) to obtain the mean squared\-prediction\-error (MSPE) as a GoP measure (Kutner et al. [2005](#ref-Kutner2005)). By taking the square root of MSPE, we get the root\-mean\-squared\-prediction\-error (RMSPE).

In the absence of testing data, one of the best\-known GoP measures for models for a continuous dependent variable is the predicted sum\-of\-squares (PRESS), defined as

\\\[\\begin{equation} PRESS(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} (\\widehat{y}\_{i(\-i)} \- y\_i)^2\. \\tag{15\.5} \\end{equation}\\]

Thus, PRESS can be seen as a result of the application of the leave\-one\-out strategy to the evaluation of GoP of a model using the training data. Note that, for the classical linear regression model, there is no need to re\-fit the model \\(n\\) times to compute PRESS (Kutner et al. [2005](#ref-Kutner2005)).

Based on PRESS, one can define the predictive squared\-error \\(PSE\=PRESS/n\\) and the standard deviation error in prediction \\(SEP\=\\sqrt{PSE}\=\\sqrt{PRESS/n}\\) (Todeschini [2010](#ref-SummariesTutorial)). Another measure gaining in popularity is

\\\[\\begin{equation} Q^2(f,\\underline{X},\\underline{y}) \= 1\- \\frac{ PRESS(f,\\underline{X},\\underline{y})}{\\sum\_{i\=1}^{n} ({y}\_{i} \- \\bar{y})^2}. \\tag{15\.6} \\end{equation}\\]

It is sometimes called the cross\-validated \\(R^2\\) or the coefficient of prediction (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). It appears that \\(Q^2 \\leq R^2\\), i.e., the expected accuracy of out\-of\-sample predictions measured by \\(Q^2\\) cannot exceed the accuracy of in\-sample estimates (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). For a “perfect” predictive model, \\(Q^2\=1\\). It is worth noting that, while \\(R^2\\) always increases if an explanatory variable is added to a model, \\(Q^2\\) decreases when “noisy” variables are added to the model (Todeschini [2010](#ref-SummariesTutorial)).

The aforementioned measures capture the overall predictive performance of a model. A measure aimed at evaluating discrimination is the *concordance index* (c\-index) (Harrell, Lee, and Mark [1996](#ref-Harrell1996); Brentnall and Cuzick [2018](#ref-Brentnall2018)). It is computed by considering all pairs of observations and computing the fraction of the pairs in which the ordering of the predictions corresponds to the ordering of the true values (Brentnall and Cuzick [2018](#ref-Brentnall2018)). The index assumes the value of 1 in case of perfect discrimination and 0\.5 for random discrimination.

Calibration can be assessed by a scatter plot of the predicted values of \\(Y\\) as a function of the true ones (Harrell, Lee, and Mark [1996](#ref-Harrell1996); van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The plot can be characterized by its intercept and slope. In case of perfect prediction, the plot should assume the form of a straight line with intercept 0 and slope 1\. A deviation of the intercept from 0 indicates overall bias in predictions (“calibration\-in\-the\-large”), while the value of the slope smaller than 1 suggests overfitting of the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The estimated values of the coefficients can be used to re\-calibrate the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000)).
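For the classical linear regression model mentioned above, the leave\-one\-out residuals have a closed form, \\(y\_i \- \\widehat{y}\_{i(\-i)} \= r\_i/(1\-h\_{ii})\\), where \\(h\_{ii}\\) is the \\(i\\)\-th diagonal element of the hat matrix. The sketch below uses this identity to compute PRESS and \\(Q^2\\) without re\-fitting the model; the model and data are, again, arbitrary illustrations.

```
fit <- lm(mpg ~ wt + hp, data = mtcars)

r <- residuals(fit)
h <- hatvalues(fit)                         # leverages h_ii
press <- sum((r / (1 - h))^2)               # (15.5) via the closed-form LOO residuals

q2 <- 1 - press / sum((mtcars$mpg - mean(mtcars$mpg))^2)   # (15.6)
c(PRESS = press, Q2 = q2)
```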
### 15\.3\.2 Binary dependent variable

To introduce model\-performance measures, we, somewhat arbitrarily, label the two possible values of the dependent variable as “success” and “failure”. Of course, in a particular application, the meaning of the “success” outcome does not have to be positive or optimistic; for a diagnostic test, “success” often means detection of disease. We also assume that model prediction \\(\\widehat{y}\_i\\) takes the form of the predicted probability of success.

#### 15\.3\.2\.1 Goodness\-of\-fit

If we assign the value of 1 to success and 0 to failure, it is possible to use MSE, RMSE, and MAD, as defined in Equations [(15\.1\)](modelPerformance.html#eq:MSE), [(15\.2\)](modelPerformance.html#eq:RMSE), [(15\.4\)](modelPerformance.html#eq:MAD), respectively, as GoF measures. In fact, the MSE obtained in that way is equivalent to the Brier score, which can also be expressed as

\\\[ \\sum\_{i\=1}^{n} \\{y\_i(1\-\\widehat{y}\_i)^2\+(1\-y\_i)(\\widehat{y}\_i)^2\\}/n. \\]

Its minimum value is 0 for a “perfect” model and 0\.25 for an “uninformative” model that yields the predicted probability of 0\.5 for all observations. The Brier score is often also interpreted as an overall predictive\-performance measure for models for a binary dependent variable because it captures both calibration and the concentration of the predictive distribution (Rufibach [2010](#ref-Rufibach2010)).

One of the main issues related to the summary measures based on MSE is that they penalize too mildly for wrong predictions. In fact, the maximum penalty for an individual prediction is equal to 1 (if, for instance, the model yields zero probability for an actual success). To address this issue, the log\-likelihood function based on the Bernoulli distribution (see also [(2\.8\)](modelDevelopmentProcess.html#eq:modelTrainingBernoulli)) can be considered:

\\\[\\begin{equation} l(f, \\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\{y\_i \\ln(\\widehat{y}\_i)\+ (1\-y\_i)\\ln(1\-\\widehat{y}\_i)\\}. \\tag{15\.7} \\end{equation}\\]

Note that, in the machine\-learning world, function \\(\-l(f, \\underline{X} ,\\underline{y})/n\\) is often considered (sometimes also with \\(\\ln\\) replaced by \\(\\log\_2\\)) and termed “log\-loss” or “cross\-entropy”. The log\-likelihood heavily “penalizes” the cases when the model\-predicted probability of success \\(\\widehat{y}\_i\\) is high for an actual failure (\\(y\_i\=0\\)) and low for an actual success (\\(y\_i\=1\\)).

The log\-likelihood [(15\.7\)](modelPerformance.html#eq:bernoulli) can be used to define \\(R^2\\)\-like measures (for a review, see, for example, Allison ([2014](#ref-Allison2014))). One of the variants most often used is the measure proposed by Nagelkerke ([1991](#ref-Nagelkerke1991)):

\\\[\\begin{equation} R\_{bin}^2(f, \\underline{X}, \\underline{y}) \= \\frac{1\-\\exp\\left(\\frac{2}{n}\\{l(f\_0, \\underline{X},\\underline{y})\-l(f, \\underline{X},\\underline{y})\\}\\right)} {1\-\\exp\\left(\\frac{2}{n}l(f\_0, \\underline{X},\\underline{y})\\right)} . \\tag{15\.8} \\end{equation}\\]

It shares properties of the “classical” \\(R^2\\), defined in [(15\.3\)](modelPerformance.html#eq:R2). In [(15\.8\)](modelPerformance.html#eq:R2bin), \\(f\_0()\\) denotes the model that includes only the intercept, which implies the use of the observed fraction of successes as the predicted probability of success. If we denote the fraction by \\(\\hat{p}\\), then

\\\[ l(f\_0, \\underline{X},\\underline{y}) \= n \\hat{p} \\ln{\\hat{p}} \+ n(1\-\\hat{p}) \\ln{(1\-\\hat{p})}. \\]
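A minimal sketch of the GoF measures for a binary dependent variable discussed above: the Brier score, the log\-likelihood (15\.7\), and Nagelkerke’s \\(R\_{bin}^2\\) (15\.8\). The simulated data and the logistic regression model are arbitrary choices used only for illustration.

```
set.seed(123)
n <- 200
x <- rnorm(n)
y <- rbinom(n, size = 1, prob = plogis(0.5 + x))     # simulated binary dependent variable

fit   <- glm(y ~ x, family = binomial)
y_hat <- fitted(fit)                                 # predicted probabilities of success

brier  <- mean((y_hat - y)^2)                        # Brier score (MSE for a 0/1 outcome)
loglik <- sum(y * log(y_hat) + (1 - y) * log(1 - y_hat))            # (15.7)

p_hat   <- mean(y)                                   # intercept-only model f_0
loglik0 <- n * (p_hat * log(p_hat) + (1 - p_hat) * log(1 - p_hat))
r2_bin  <- (1 - exp(2 / n * (loglik0 - loglik))) / (1 - exp(2 / n * loglik0))   # (15.8)

c(Brier = brier, logLik = loglik, R2_bin = r2_bin)
```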
#### 15\.3\.2\.2 Goodness\-of\-prediction

In many situations, consequences of a prediction error depend on the form of the error. For this reason, performance measures based on the (estimated values of) probability of correct/wrong prediction are more often used.

To introduce some of those measures, we assume that, for each observation from the testing dataset, the predicted probability of success \\(\\widehat{y}\_i\\) is compared to a fixed cut\-off threshold, \\(C\\) say. If the probability is larger than \\(C\\), then we assume that the model predicts success; otherwise, we assume that it predicts failure. As a result of such a procedure, the comparison of the observed and predicted values of the dependent variable for the \\(n\\) observations in a dataset can be summarized in a table similar to Table [15\.1](modelPerformance.html#tab:confMat).

Table 15\.1: Confusion table for a classification model with scores \\(\\widehat{y}\_i\\).

| | True value: `success` | True value: `failure` | Total |
| --- | --- | --- | --- |
| \\(\\widehat{y}\_i \\geq C\\), predicted: `success` | True Positive: \\(TP\_C\\) | False Positive (Type I error): \\(FP\_C\\) | \\(P\_C\\) |
| \\(\\widehat{y}\_i \< C\\), predicted: `failure` | False Negative (Type II error): \\(FN\_C\\) | True Negative: \\(TN\_C\\) | \\(N\_C\\) |
| Total | \\(S\\) | \\(F\\) | \\(n\\) |

In the machine\-learning world, Table [15\.1](modelPerformance.html#tab:confMat) is often referred to as the “confusion table” or “confusion matrix”. In statistics, it is often called the “decision table”. The counts \\(TP\_C\\) and \\(TN\_C\\) on the diagonal of the table correspond to the cases when the predicted and observed value of the dependent variable \\(Y\\) coincide. \\(FP\_C\\) is the number of cases in which failure is predicted as a success. These are false\-positive, or Type I error, cases. On the other hand, \\(FN\_C\\) is the count of false\-negative, or Type II error, cases, in which success is predicted as failure. Marginally, there are \\(P\_C\\) predicted successes and \\(N\_C\\) predicted failures, with \\(P\_C\+N\_C\=n\\). In the testing dataset, there are \\(S\\) observed successes and \\(F\\) observed failures, with \\(S\+F\=n\\).

The effectiveness of such a classification procedure can be described by various measures. Let us present some of the most popular examples. The simplest measure of model performance is *accuracy*, defined as

\\\[ ACC\_C \= \\frac{TP\_C\+TN\_C}{n}. \\]

It is the fraction of correct predictions in the entire testing dataset. Accuracy is of interest if true positives and true negatives are more important than their false counterparts. However, accuracy may not be very informative when one of the binary categories is much more prevalent (so\-called unbalanced labels). For example, if the testing data contain 90% of successes, a model that would always predict a success would reach an accuracy of 0\.9, although one could argue that this is not a very useful model. There may be situations when false positives and/or false negatives may be of more concern. In that case, one might want to keep their number low. Hence, other measures, focused on the false results, might be of interest.
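As an aside, the unbalanced\-labels pitfall mentioned above is easy to reproduce with a couple of lines of code; the simulated labels below are purely illustrative.

```
set.seed(42)
y      <- rbinom(1000, size = 1, prob = 0.9)   # 90% of successes
y_pred <- rep(1, length(y))                    # a "model" that always predicts success
mean(y_pred == y)                              # accuracy close to 0.9, yet the model is useless
```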
In the machine\-learning world, two other measures are often considered: *precision* and *recall*. Precision is defined as

\\\[ Precision\_C \= \\frac{TP\_C}{TP\_C\+FP\_C} \= \\frac{TP\_C}{P\_C}. \\]

Precision is also referred to as the *positive predictive value*. It is the fraction of correct predictions among the predicted successes. Precision is high if the number of false positives is low. Thus, it is a useful measure when the penalty for committing the Type I error (false positive) is high. For instance, consider the use of a genetic test in cancer diagnostics, with a positive result of the test taken as an indication of an increased risk of developing a cancer. A false\-positive result of a genetic test might mean that a person would have to unnecessarily cope with emotions and, possibly, medical procedures related to the fact of being evaluated as having a high risk of developing a cancer. We might want to avoid this situation more than the false\-negative case. The latter would mean that the genetic test gives a negative result for a person who, actually, might be at an increased risk of developing a cancer. However, an increased risk does not mean that the person will develop cancer. And even so, we could hope that we could detect it in due time.

Recall is defined as

\\\[ Recall\_C \= \\frac{TP\_C}{TP\_C\+FN\_C} \= \\frac{TP\_C}{S}. \\]

Recall is also referred to as *sensitivity* or the *true\-positive rate*. It is the fraction of correct predictions among the true successes. Recall is high if the number of false negatives is low. Thus, it is a useful measure when the penalty for committing the Type II error (false negative) is high. For instance, consider the use of an algorithm that predicts whether a bank transaction is fraudulent. A false\-negative result means that the algorithm accepts a fraudulent transaction as a legitimate one. Such a decision may have immediate and unpleasant consequences for the bank, because it may imply a non\-recoverable loss of money. On the other hand, a false\-positive result means that a legitimate transaction is considered as a fraudulent one and is blocked. However, upon further checking, the legitimate nature of the transaction can be confirmed with, perhaps, the annoyed client as the only consequence for the bank.

The harmonic mean of these two measures defines the *F1 score*:

\\\[ F1\\ score\_C \= \\frac{2}{\\frac{1}{Precision\_C} \+ \\frac{1}{Recall\_C}} \= 2\\cdot\\frac{Precision\_C \\cdot Recall\_C}{Precision\_C \+ Recall\_C}. \\]

F1 score tends to give a low value if either precision or recall is low, and a high value if both precision and recall are high. For instance, if precision is 0, F1 score will also be 0 irrespective of the value of recall. Thus, it is a useful measure if we need to seek a balance between precision and recall.

In statistics, and especially in applications in medicine, the popular measures are *sensitivity* and *specificity*. Sensitivity is simply another name for recall. Specificity is defined as

\\\[ Specificity\_C \= \\frac{TN\_C}{TN\_C \+ FP\_C} \= \\frac{TN\_C}{F}. \\]

Specificity is also referred to as the *true\-negative rate*. It is the fraction of correct predictions among the true failures. Specificity is high if the number of false positives is low. Thus, like precision, it is a useful measure when the penalty for committing the Type I error (false positive) is high.
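All of the measures introduced above can be computed directly from the four counts of the confusion table. Below is a minimal helper together with a toy vector of predicted probabilities and a cut\-off of 0\.5; the numbers are made up for this example.

```
binary_measures <- function(y, y_hat, C = 0.5) {
  pred <- as.numeric(y_hat >= C)                 # predicted class for cut-off C
  TP <- sum(pred == 1 & y == 1)
  FP <- sum(pred == 1 & y == 0)
  TN <- sum(pred == 0 & y == 0)
  FN <- sum(pred == 0 & y == 1)
  precision   <- TP / (TP + FP)
  recall      <- TP / (TP + FN)                  # sensitivity
  specificity <- TN / (TN + FP)
  f1          <- 2 * precision * recall / (precision + recall)
  accuracy    <- (TP + TN) / length(y)
  c(accuracy = accuracy, precision = precision, recall = recall,
    specificity = specificity, F1 = f1)
}

y     <- c(1, 1, 0, 1, 0, 0, 1, 0)
y_hat <- c(0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.8, 0.1)
binary_measures(y, y_hat, C = 0.5)
```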
The reason why sensitivity and specificity may be more often used outside the machine\-learning world is related to the fact that their values do not depend on the proportion \\(S/n\\) (sometimes termed *prevalence*) of true successes. This means that, once estimated in a sample obtained from a population, they may be applied to other populations, in which the prevalence may be different. This is not true for precision, because one can write

\\\[ Precision\_C \= \\frac{Sensitivity\_C \\cdot \\frac{S}{n}}{Sensitivity\_C \\cdot \\frac{S}{n}\+\\left(1\-Specificity\_C\\right) \\cdot \\left(1\-\\frac{S}{n}\\right)}. \\]

All the measures depend on the choice of cut\-off \\(C\\). To assess the form and the strength of dependence, a common approach is to construct the Receiver Operating Characteristic (ROC) curve. The curve plots \\(Sensitivity\_C\\) as a function of \\(1\-Specificity\_C\\) for all possible, ordered values of \\(C\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model for the Titanic dataset. Note that the curve indicates an inverse relationship between sensitivity and specificity: by increasing one measure, the other is decreased.

The ROC curve is very informative. For a model that predicts successes and failures at random, the corresponding curve will be equal to the diagonal line. On the other hand, for a model that yields perfect predictions, the ROC curve reduces to two intervals that connect points (0,0\), (0,1\), and (1,1\).

Often, there is a need to summarize the ROC curve with one number, which can be used to compare models. A popular measure that is used toward this aim is the area under the curve (AUC). For a model that predicts successes and failures at random, AUC is the area under the diagonal line, i.e., it is equal to 0\.5\. For a model that yields perfect predictions, AUC is equal to 1\. It appears that, in this case, AUC is equivalent to the c\-index (see Section [15\.3\.1\.2](modelPerformance.html#modelPerformanceMethodContGOP)).

Another ROC\-curve\-based measure that is often used is the *Gini coefficient* \\(G\\). It is closely related to AUC; in fact, it can be calculated as \\(G \= 2 \\times AUC \- 1\\). For a model that predicts successes and failures at random, \\(G\=0\\); for a perfect\-prediction model, \\(G \= 1\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) illustrates the calculation of the Gini coefficient for the random forest model for the Titanic dataset (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)).

A variant of the ROC curve based on precision and recall is called a precision\-recall curve. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the curve for the random forest model for the Titanic dataset.

The value of the Gini coefficient or, equivalently, of \\(AUC\-0\.5\\) allows a comparison of the model\-based predictions with random guessing. A measure that explicitly compares a prediction model with a baseline (or null) model is the *lift*. Commonly, random guessing is considered as the baseline model. In that case,

\\\[ Lift\_C \= \\frac{\\frac{TP\_C}{P\_C}}{\\frac{S}{n}} \= n\\frac{Precision\_C}{S}. \\]

Note that \\(S/n\\) can be seen as the estimated probability of a correct prediction of success for random guessing. On the other hand, \\(TP\_C/P\_C\\) is the estimated probability of a correct prediction of a success given that the model predicts a success. Hence, informally speaking, the lift indicates how many times better (or worse) the model does in predicting success as compared to random guessing.
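AUC can also be computed directly from the predicted scores, without drawing the curve, as the Mann\-Whitney statistic: the fraction of success\-failure pairs in which the success receives the higher score. A minimal sketch, using a toy set of labels and scores, is given below.

```
auc_from_scores <- function(y, y_hat) {
  r  <- rank(y_hat)                     # ranks of the scores (ties get averaged ranks)
  n1 <- sum(y == 1)                     # number of observed successes
  n0 <- sum(y == 0)                     # number of observed failures
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

y     <- c(1, 1, 0, 1, 0, 0, 1, 0)
y_hat <- c(0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.8, 0.1)
auc  <- auc_from_scores(y, y_hat)
gini <- 2 * auc - 1                     # Gini coefficient
c(AUC = auc, Gini = gini)
```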
Like other measures, the lift depends on the choice of cut\-off \\(C\\). The plot of the lift as a function of \\(P\_C\\) is called the *lift chart*. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the lift chart for the random forest model for the Titanic dataset.

Calibration of predictions can be assessed by a scatter plot of the predicted values of \\(Y\\) as a function of the true ones. A complicating issue is the fact that the true values are only equal to 0 or 1\. Therefore, smoothing techniques or grouping of observations is needed to obtain a meaningful plot (Steyerberg et al. [2010](#ref-Steyerberg2010); Steyerberg [2019](#ref-Steyerberg2019)).

There are many more measures aimed at assessing the performance of a predictive model for a binary dependent variable. An overview can be found in, e.g., Berrar ([2019](#ref-Berrar2019)).

### 15\.3\.3 Categorical dependent variable

To introduce model\-performance measures for a categorical dependent variable, we assume that \\(\\underline{y}\_i\\) is now a vector of \\(K\\) elements. Each element \\(y\_{i}^k\\) (\\(k\=1,\\ldots,K\\)) is a binary variable indicating whether the \\(k\\)\-th category was observed for the \\(i\\)\-th observation. We assume that, for each observation, only one category can be observed. Thus, all elements of \\(\\underline{y}\_i\\) are equal to 0 except one that is equal to 1\. Furthermore, we assume that a model’s prediction takes the form of a vector, \\(\\underline{\\widehat{y}}\_i\\) say, of the predicted probabilities for each of the \\(K\\) categories, with \\({\\widehat{y}}\_i^k\\) denoting the probability for the \\(k\\)\-th category. The predicted category is the one with the highest predicted probability.

#### 15\.3\.3\.1 Goodness\-of\-fit

The log\-likelihood function [(15\.7\)](modelPerformance.html#eq:bernoulli) can be adapted to the categorical dependent variable case as follows:

\\\[\\begin{equation} l(f, \\underline{X} ,\\underline{y}) \= \\sum\_{i\=1}^{n}\\sum\_{k\=1}^{K} y\_{i}^k \\ln({\\widehat{y}}\_i^k). \\tag{15\.9} \\end{equation}\\]

It is essentially the log\-likelihood function based on a multinomial distribution. Based on the likelihood, an \\(R^2\\)\-like measure can be defined, using an approach similar to the one used in [(15\.8\)](modelPerformance.html#eq:R2bin) for construction of \\(R\_{bin}^2\\) (Harrell [2015](#ref-Harrell2015)).

#### 15\.3\.3\.2 Goodness\-of\-prediction

It is possible to extend measures like accuracy, precision, etc., introduced in Section [15\.3\.2](modelPerformance.html#modelPerformanceMethodBin) for a binary dependent variable, to the case of a categorical one. Toward this end, first, a confusion table is created for each category \\(k\\), treating the category as “success” and all other categories as “failure”. Let us denote the counts in the table by \\(TP\_{C,k}\\), \\(FP\_{C,k}\\), \\(TN\_{C,k}\\), and \\(FN\_{C,k}\\). Based on the counts, we can compute the average accuracy across all classes as follows:

\\\[\\begin{equation} \\overline{ACC\_C} \= \\frac{1}{K}\\sum\_{k\=1}^K\\frac{TP\_{C,k}\+TN\_{C,k}}{n}. \\tag{15\.10} \\end{equation}\\]

Similarly, one could compute the average precision, average sensitivity, etc. In the machine\-learning world, this approach is often termed “macro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)).
The averages computed in that way treat all classes equally. An alternative approach is to sum the appropriate counts from the confusion tables for all classes, and then form a measure based on the so\-computed cumulative counts. For instance, for precision, this would lead to

\\\[\\begin{equation} \\overline{Precision\_C}\_{\\mu} \= \\frac{\\sum\_{k\=1}^K TP\_{C,k}}{\\sum\_{k\=1}^K (TP\_{C,k}\+FP\_{C,k})}. \\tag{15\.11} \\end{equation}\\]

In the machine\-learning world, this approach is often termed “micro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)), hence subscript \\(\\mu\\) for “micro” in [(15\.11\)](modelPerformance.html#eq:precmicro). Note that, for accuracy, this computation still leads to [(15\.10\)](modelPerformance.html#eq:accmacro). The measures computed in that way favour classes with larger numbers of observations.

### 15\.3\.4 Count dependent variable

In the case of counts, one could consider using MSE or any of the measures for a continuous dependent variable mentioned in Section [15\.3\.1\.1](modelPerformance.html#modelPerformanceMethodContGOF). However, a particular feature of count dependent variables is that their variance depends on the mean value. Consequently, weighing all contributions to MSE equally, as in [(15\.1\)](modelPerformance.html#eq:MSE), is not appropriate, because the same residual value \\(r\_i\\) indicates a larger discrepancy for a smaller count \\(y\_i\\) than for a larger one. Therefore, a popular measure of performance of a predictive model for counts is Pearson’s statistic:

\\\[\\begin{equation} \\chi^2(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\frac{(\\widehat{y}\_i \- y\_i)^2}{\\widehat{y}\_i} \= \\sum\_{i\=1}^{n} \\frac{r\_i^2}{\\widehat{y}\_i}. \\tag{15\.12} \\end{equation}\\]

From [(15\.12\)](modelPerformance.html#eq:Pearson) it is clear that, if the same residual is obtained for two different observed counts, it is assigned a larger weight for the count for which the predicted value is smaller.

Of course, there are more measures of model performance as well as types of model responses (e.g., censored data). A complete list, even if it could be created, would be beyond the scope of this book.
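As a small illustration of the macro\- and micro\-averaging approaches discussed above, the sketch below computes both variants of precision for a toy three\-class problem; the labels and predictions are made up for this example.

```
y      <- factor(c("a", "a", "a", "b", "b", "c", "c", "c", "c", "c"))
y_pred <- factor(c("a", "a", "b", "b", "c", "c", "c", "c", "a", "b"),
                 levels = levels(y))

classes <- levels(y)
tp <- sapply(classes, function(k) sum(y_pred == k & y == k))   # true positives per class
fp <- sapply(classes, function(k) sum(y_pred == k & y != k))   # false positives per class

precision_macro <- mean(tp / (tp + fp))      # average of the per-class precisions
precision_micro <- sum(tp) / sum(tp + fp)    # (15.11): based on cumulative counts
c(macro = precision_macro, micro = precision_micro)
```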
15\.4 Example
-------------

### 15\.4\.1 Apartment prices

Let us consider the linear regression model `apartments_lm` (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices data (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that, for these data, the dependent variable, the price per square meter, is continuous. Hence, we can use the performance measures presented in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). In particular, we consider MSE and RMSE.

Figure [15\.1](modelPerformance.html#fig:prepareMPBoxplotEx) presents a box plot of the absolute values of residuals for the linear regression and random forest models, computed for the testing data. The computed values of RMSE are also indicated in the plots. The values are very similar for both models; we have already noted that fact in Section [4\.5\.4](dataSetsIntro.html#predictionsApartments).

Figure 15\.1: Box plot for the absolute values of residuals for the linear regression and random forest models for the apartment\-prices data. The red dot indicates the RMSE.

In particular, MSE, RMSE, \\(R^2\\), and MAD values for the linear regression model are equal to 80137, 283\.09, 0\.901, and 212\.7, respectively. For the random forest model, they are equal to 80137, 282\.95, 0\.901, and 169\.1, respectively. The values of the measures suggest that the predictive performance of the random forest model is slightly better. But is this difference relevant? It should be remembered that development of any random forest model includes a random component. This means that, when a random forest model is fitted to the same dataset several times, but using a different random\-number\-generation seed, the value of MSE or MAD for the fitted models will fluctuate. Thus, we should consider the values obtained for the linear regression and random forest models for the apartment\-prices data as indicating a similar performance of the two models rather than a superiority of one of them.

### 15\.4\.2 Titanic data

Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic data (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Recall that, for these data, the dependent variable is binary, with success defined as survival of the passenger.

First, we take a look at the random forest model. We will illustrate the “confusion table” by using threshold \\(C\\) equal to 0\.5, i.e., we will classify passengers as “survivors” and “non\-survivors” depending on whether their model\-predicted probability of survival was larger than 50% or not, respectively. Table [15\.2](modelPerformance.html#tab:confMatRF) presents the resulting table.

Table 15\.2: Confusion table for the random forest model for the Titanic data. Predicted survival status is equal to *survived* if the model\-predicted probability of survival \\(\\hat y\_i\\) is larger than 50%.

| | Actual: survived | Actual: died | Total |
| --- | --- | --- | --- |
| Predicted: survived | 454 | 60 | 514 |
| Predicted: died | 257 | 1436 | 1693 |
| Total | 711 | 1496 | 2207 |

Based on the table, we obtain the value of accuracy equal to (454 \+ 1436\) / 2207 \= 0\.8564\. The values of precision and recall (sensitivity) are equal to \\(454 / 514 \= 0\.8833\\) and \\(454 / 711 \= 0\.6385\\), respectively, with the resulting F1 score equal to 0\.7412\. Specificity is equal to \\(1436 / 1496 \= 0\.9599\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model. AUC is equal to 0\.8595, and the Gini coefficient is equal to 0\.719\.

Figure 15\.2: Receiver Operating Characteristic curve for the random forest model for the Titanic dataset. The Gini coefficient can be calculated as 2\\(\\times\\) the area between the ROC curve and the diagonal (this area is highlighted). The AUC coefficient is defined as the area under the ROC curve.

Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the precision\-recall curve (left\-hand\-side panel) and lift chart (right\-hand\-side panel) for the random forest model.

Figure 15\.3: Precision\-recall curve (left panel) and lift chart (right panel) for the random forest model for the Titanic dataset.
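The values reported above can be reproduced directly from the four counts in Table [15\.2](modelPerformance.html#tab:confMatRF); a quick sketch:

```
TP <- 454; FP <- 60; FN <- 257; TN <- 1436
n  <- TP + FP + FN + TN                                        # 2207

accuracy    <- (TP + TN) / n                                   # 0.8564
precision   <- TP / (TP + FP)                                  # 0.8833
recall      <- TP / (TP + FN)                                  # 0.6385
f1          <- 2 * precision * recall / (precision + recall)   # 0.7412
specificity <- TN / (TN + FP)                                  # 0.9599
round(c(accuracy, precision, recall, f1, specificity), 4)
```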
Table [15\.3](modelPerformance.html#tab:confMatLR) presents the confusion table for the logistic regression model for threshold \\(C\\) equal to 0\.5\. The resulting values of accuracy, precision, recall (sensitivity), F1 score, and specificity are equal to 0\.8043, 0\.7522, 0\.5851, 0\.6582, and 0\.9084, respectively. The values are smaller than for the random forest model, suggesting a better performance of the latter.

Table 15\.3: Confusion table for the logistic regression model for the Titanic data. Predicted survival status is equal to *survived* if the model\-predicted probability of survival is larger than 50%.

| | Actual: survived | Actual: died | Total |
| --- | --- | --- | --- |
| Predicted: survived | 416 | 137 | 553 |
| Predicted: died | 295 | 1359 | 1654 |
| Total | 711 | 1496 | 2207 |

The left\-hand\-side panel in Figure [15\.4](modelPerformance.html#fig:titanicROC) presents ROC curves for both the logistic regression and the random forest model. The curve for the random forest model lies above the one for the logistic regression model for the majority of the cut\-offs \\(C\\), except for the very high values of the cut\-off \\(C\\). AUC for the logistic regression model is equal to 0\.8174 and is smaller than for the random forest model. The right\-hand\-side panel in Figure [15\.4](modelPerformance.html#fig:titanicROC) presents lift charts for both models. Also in this case, the curves suggest a better performance of the random forest model than of the logistic regression model, except for the very high values of cut\-off \\(C\\).

Figure 15\.4: Receiver Operating Characteristic curves (left panel) and lift charts (right panel) for the random forest and logistic regression models for the Titanic dataset.

15\.5 Pros and cons
-------------------

All model\-performance measures presented in this chapter are subject to some limitations. For that reason, many measures are available, as the limitations of a particular measure were addressed by developing an alternative one. For instance, RMSE is frequently used and reported for linear regression models. However, as it is sensitive to outliers, MAD has been proposed as an alternative. In the case of predictive models for a binary dependent variable, measures like accuracy, F1 score, sensitivity, and specificity are often considered, depending on the consequences of correct/incorrect predictions in a particular application. However, the value of those measures depends on the cut\-off value used for creating predictions. For this reason, the ROC curve and AUC have been developed and have become very popular. They are not easily extended to the case of a categorical dependent variable, though.

Given the advantages and disadvantages of various measures and the fact that each may reflect a different aspect of the predictive performance of a model, it is customary to report and compare several of them when evaluating a model’s performance.

15\.6 Code snippets for R
-------------------------

In this section, we present model\-performance measures as implemented in the `DALEX` package for R. The package covers the most often used measures and methods presented in this chapter. More advanced measures of performance are available in the `auditor` package for R (Gosiewska and Biecek [2018](#ref-R-auditor)). Note that there are also other R packages that offer similar functionality. These include, for instance, packages `mlr` (Bischl et al. [2016](#ref-mlr)), `caret` (Kuhn [2008](#ref-caret)), `tidymodels` (Kuhn and Wickham [2018](#ref-tidymodels)), and `ROCR` (Sing et al. [2005](#ref-ROCR)).
For illustration purposes, we use the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic data (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Consequently, the `DALEX` functions are applied in the context of a binary classification problem. However, the same functions can be used for, for instance, linear regression models. To illustrate the use of the functions, we first retrieve the `titanic_lmr` and `titanic_rf` model\-objects via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. ``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_lmr <- archivist::aread("pbiecek/models/58b24") titanic_rf <- archivist::aread("pbiecek/models/4e0fc") ``` Then we construct the explainers for the models by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `rms` and `randomForest` packages, as the models were fitted by using functions from those packages and it is important to have the corresponding `predict()` functions available. ``` library("rms") library("DALEX") explain_lmr <- explain(model = titanic_lmr, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", type = "classification", label = "Logistic Regression") library("randomForest") explain_rf <- explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` Function `model_performance()` calculates, by default, a set of selected model\-performance measures. These include MSE, RMSE, \\(R^2\\), and MAD for linear regression models, and recall, precision, F1, accuracy, and AUC for models for a binary dependent variable. The function includes the `cutoff` argument that allows specifying the cut\-off value for the measures that require it, i.e., recall, precision, F1 score, and accuracy. By default, the cut\-off value is set at 0\.5\. Note that, by default, all measures are computed for the data that are extracted from the explainer object; these can be training or testing data. ``` (eva_rf <- DALEX::model_performance(explain_rf)) ``` ``` ## Measures for: classification ## recall : 0.6385373 ## precision : 0.8832685 ## f1 : 0.7412245 ## accuracy : 0.8563661 ## auc : 0.8595467 ## ## Residuals: ## 0% 10% 20% 30% 40% 50% 60% 70% ## -0.8920 -0.1140 -0.0240 -0.0080 -0.0040 0.0000 0.0000 0.0100 ## 80% 90% 100% ## 0.1400 0.5892 1.0000 ``` ``` (eva_lr <- DALEX::model_performance(explain_lmr)) ``` ``` ## Measures for: classification ## recall : 0.5850914 ## precision : 0.7522604 ## f1 : 0.6582278 ## accuracy : 0.8042592 ## auc : 0.81741 ## ## Residuals: ## 0% 10% 20% 30% 40% ## -0.98457244 -0.31904861 -0.23408037 -0.20311483 -0.15200813 ## 50% 60% 70% 80% 90% ## -0.10318060 -0.06933478 0.05858024 0.29306442 0.73666519 ## 100% ## 0.97151255 ``` Application of the `DALEX::model_performance()` function returns an object of class “model\_performance”, which includes estimated values of several model\-performance measures, as well as a data frame containing the observed and predicted values of the dependent variable together with their difference, i.e., residuals. An ROC curve or lift chart can be constructed by applying the generic `plot()` function to the object. 
The type of the required plot is indicated by using argument `geom`. In particular, the argument allows values `geom = "lift"` for lift charts, `geom = "roc"` for ROC curves, `geom = "prc"` for precision\-recall curves, `geom = "histogram"` for histograms of residuals, and `geom = "boxplot"` for box\-and\-whisker plots of residuals. The `plot()` function returns a `ggplot2` object. It is possible to apply the function to more than one object. In that case, the plots for the models corresponding to each object are combined in one graph. In the code below, we create two `ggplot2` objects: one for a graph containing precision\-recall curves for both models, and one for a histogram of residuals. Subsequently, we use the `patchwork` package to combine the graphs in one display.

```
p1 <- plot(eva_rf, eva_lr, geom = "histogram")
p2 <- plot(eva_rf, eva_lr, geom = "prc")
```

```
library("patchwork")
p1 + p2
```

Figure 15\.5: Precision\-recall curves and histograms for residuals obtained by the generic `plot()` function in R for the logistic regression model `titanic_lmr` and the random forest model `titanic_rf` for the Titanic dataset.

The resulting graph is shown in Figure [15\.5](modelPerformance.html#fig:titanicMEexamples). Combined with the plot of ROC curves and the lift charts presented in both panels of Figure [15\.4](modelPerformance.html#fig:titanicROC), it provides additional insight into the comparison of performance of the two models.

15\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. A collection of numerous metrics and performance charts is also available in the popular `sklearn.metrics` library.

For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of the Titanic.

In the first step, we create an explainer object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.

```
import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline")
```

To calculate selected measures of the overall performance, we use the `model_performance()` method. In the syntax below, we apply the `model_type` argument to indicate that we deal with a classification problem, and the `cutoff` argument to specify the cutoff value equal to 0\.5\. It is worth noting that we get results different from those obtained in R. This is because the R and Python models may differ slightly in implementation and are trained with different random seeds.

```
mp_rf = titanic_rf_exp.model_performance(model_type = "classification", cutoff = 0.5)
mp_rf.result
```

The resulting object can be visualised in many different ways. The code below constructs an ROC curve with the AUC measure. Figure [15\.6](modelPerformance.html#fig:examplePythonMP4) presents the created plot.

```
import plotly.express as px
from sklearn.metrics import roc_curve, auc

y_score = titanic_rf_exp.predict(X)
fpr, tpr, thresholds = roc_curve(y, y_score)

fig = px.area(x=fpr, y=tpr,
    title=f'ROC Curve (AUC={auc(fpr, tpr):.4f})',
    labels=dict(x='False Positive Rate', y='True Positive Rate'),
    width=700, height=500)
fig.add_shape(type='line', line=dict(dash='dash'), x0=0, x1=1, y0=0, y1=1)
fig.update_yaxes(scaleanchor="x", scaleratio=1)
fig.update_xaxes(constrain='domain')
fig.show()
```

Figure 15\.6: The ROC curve for the random forest model for the Titanic dataset.
The code below constructs a plot of the false\-positive and true\-positive rates as a function of different thresholds. Figure [15\.7](modelPerformance.html#fig:examplePythonMP3) presents the created plot.

```
import pandas as pd   # needed for the data frame constructed below

df = pd.DataFrame({'False Positive Rate': fpr, 'True Positive Rate': tpr},
                  index=thresholds)
df.index.name = "Thresholds"
df.columns.name = "Rate"

fig_thresh = px.line(df, title='TPR and FPR at every threshold',
                     width=700, height=500)
fig_thresh.update_yaxes(scaleanchor="x", scaleratio=1)
fig_thresh.update_xaxes(range=[0, 1], constrain='domain')
fig_thresh.show()
```

Figure 15\.7: False\-positive and true\-positive rates as a function of threshold for the random forest model for the Titanic dataset.
A more meaningful approach is to apply the measures to an independent testing dataset. Alternatively, a bias\-correction strategy can be used when applying them to the training data. Toward this aim, various strategies have been proposed, such as cross\-validation or bootstrapping (Kuhn and Johnson [2013](#ref-Kuhn2013); Harrell [2015](#ref-Harrell2015); Steyerberg [2019](#ref-Steyerberg2019)). In what follows, we mainly consider the simple data\-split strategy, i.e., we assume that the available data are split into a training set and a testing set. The model is created on the former, and the latter set is used to assess the model’s performance. It is worth mentioning that there are two important aspects of prediction: *calibration* and *discrimination* (Harrell, Lee, and Mark [1996](#ref-Harrell1996)). Calibration refers to the extent of bias in predicted values, i.e., the mean difference between the predicted and true values. Discrimination refers to the ability of the predictions to distinguish between individual true values. For instance, consider a model to be used for weather forecasts in a region where, on average, it rains half the year. A simple model that predicts that every other day is rainy is well\-calibrated because, on average, the resulting predicted risk of a rainy day in a year is 50%, which agrees with the actual situation. However, the model is not very much discriminative (for each calendar day, the probability of a correct prediction is 50%, the same as for a fair\-coin toss) and, hence, not very useful. Thus, in addition to overall measures of GoP, we may need separate measures for calibration and discrimination of a model. Note that, for the latter, we may want to weigh differently the situation when the prediction is, for instance, larger than the true value, as compared to the case when it is smaller. Depending on the decision on how to weigh different types of disagreement, we may need different measures. In the best possible scenario, we can specify a single model\-performance measure before the model is created and then optimize the model for this measure. But, in practice, a more common scenario is to use several performance measures, which are often selected after the model has been created. 15\.3 Method ------------ Assume that we have got a training dataset with \\(n\\) observations on \\(p\\) explanatory variables and on a dependent variable \\(Y\\). Let \\(\\underline{x}\_i\\) denote the (column) vector of values of the explanatory variables for the \\(i\\)\-th observation, and \\(y\_i\\) the corresponding value of the dependent variable. We will use \\(\\underline{X}\=(x'\_1,\\ldots,x'\_n)\\) to denote the matrix of explanatory variables for all \\(n\\) observations, and \\(\\underline{y}\=(y\_1,\\ldots,y\_n)'\\) to denote the (column) vector of the values of the dependent variable. The training dataset is used to develop model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\), where \\(\\underline{\\hat{\\theta}}\\) denotes the estimated values of the model’s coefficients. Note that could also use here the “penalized” estimates \\(\\underline{\\tilde{\\theta}}\\) (see Section [2\.5](modelDevelopmentProcess.html#fitting)). Let \\(\\widehat{y}\_i\\) indicate the model’s prediction corresponding to \\(y\_i.\\) The model performance analysis is often based on an independent dataset called a testing set. In some cases, model\-performance mesures are based on a leave\-one\-out approach. 
We will denote by \\(\\underline{X}\_{\-i}\\) the matrix of explanatory variables when excluding the \\(i\\)\-th observation and by \\(f(\\underline{\\hat{\\theta}}\_{\-i}; \\underline{X}\_{\-i})\\) the model developed for the reduced data. It is worth noting here that the leave\-one\-out model \\(f(\\underline{\\hat{\\theta}}\_{\-i}; \\underline{X}\_{\-i})\\) is different from the full\-data model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\). But often they are close to each other and conclusions obtained from one can be transferred to the other. We will use \\(\\widehat{y}\_{i(\-i)}\\) to denote the prediction for \\(y\_i\\) obtained from model \\(f(\\underline{\\hat{\\theta}}\_{\-i}; \\underline{X}\_{\-i})\\). In the subsequent sections, we present various model\-performance measures. The measures are applied in essentially the same way if a training or a testing dataset is used. If there is any difference in the interpretation or properties of the measures between the two situations, we will explicitly mention them. Note that, in what follows, we will ignore in the notation the fact that we consider the estimated model \\(f(\\underline{\\hat{\\theta}}; \\underline X)\\) and we will use \\(f()\\) as a generic notation for it. ### 15\.3\.1 Continuous dependent variable #### 15\.3\.1\.1 Goodness\-of\-fit The most popular GoF measure for models for a continuous dependent variable is the mean squared\-error, defined as \\\[\\begin{equation} MSE(f,\\underline{X},\\underline{y}) \= \\frac{1}{n} \\sum\_{i}^{n} (\\widehat{y}\_i \- y\_i)^2 \= \\frac{1}{n} \\sum\_{i}^{n} r\_i^2, \\tag{15\.1} \\end{equation}\\] where \\(r\_i\\) is the residual for the \\(i\\)\-th observation (see also Section [2\.3](modelDevelopmentProcess.html#notation)). Thus, MSE can be seen as a sum of squared residuals. MSE is a convex differentiable function, which is important from an optimization point of view (see Section [2\.5](modelDevelopmentProcess.html#fitting)). As the measure weighs all differences equally, large residuals have got a high impact on MSE. Thus, the measure is sensitive to outliers. For a “perfect” model, which predicts (fits) all \\(y\_i\\) exactly, \\(MSE \= 0\\). Note that MSE is constructed on a different scale from the dependent variable. Thus, a more interpretable variant of this measure is the root\-mean\-squared\-error (RMSE), defined as \\\[\\begin{equation} RMSE(f, \\underline{X}, \\underline{y}) \= \\sqrt{MSE(f, \\underline{X}, \\underline{y})}. \\tag{15\.2} \\end{equation}\\] A popular variant of RMSE is its normalized version, \\(R^2\\), defined as \\\[\\begin{equation} R^2(f, \\underline{X}, \\underline{y}) \= 1 \- \\frac{MSE(f, \\underline{X}, \\underline{y})}{MSE(f\_0, \\underline{X},\\underline{y})}. \\tag{15\.3} \\end{equation}\\] In [(15\.3\)](modelPerformance.html#eq:R2), \\(f\_0()\\) denotes a “baseline” model. For instance, in the case of the classical linear regression, \\(f\_0()\\) is the model that includes only the intercept, which implies the use of the mean value of \\(Y\\) as a prediction for all observations. \\(R^2\\) is normalized in the sense that the “perfectly” fitting model leads to \\(R^2 \= 1\\), while \\(R^2 \= 0\\) means that we are not doing better than the baseline model. In the context of the classical linear regression, \\(R^2\\) is the familiar coefficient of determination and can be interpreted as the fraction of the total variance of \\(Y\\) “explained” by model \\(f()\\). 
Given sensitivity of MSE to outliers, sometimes the median absolute\-deviation (MAD) is considered: \\\[\\begin{equation} MAD(f, \\underline{X} ,\\underline{y}) \= median( \|r\_1\|, ..., \|r\_n\| ). \\tag{15\.4} \\end{equation}\\] MAD is more robust to outliers than MSE. A disadvantage of MAD are its less favourable mathematical properties. Section [15\.4\.1](modelPerformance.html#modelPerformanceApartments) illustrates the use of measures for the linear regression model and the random forest model for the apartment\-prices data. #### 15\.3\.1\.2 Goodness\-of\-prediction Assume that a testing dataset is available. In that case, we can use model \\(f()\\), obtained by fitting the model to training data, to predict the values of the dependent variable observed in the testing dataset. Subsequently, we can compute MSE as in [(15\.1\)](modelPerformance.html#eq:MSE) to obtain the mean squared\-prediction\-error (MSPE) as a GoP measure (Kutner et al. [2005](#ref-Kutner2005)). By taking the square root of MSPE, we get the root\-mean\-squared\-prediction\-error (RMSPE). In the absence of testing data, one of the most known GoP measures for models for a continuous dependent variable is the predicted sum\-of\-squares (PRESS), defined as \\\[\\begin{equation} PRESS(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} (\\widehat{y}\_{i(\-i)} \- y\_i)^2\. \\tag{15\.5} \\end{equation}\\] Thus, PRESS can be seen as a result of the application of the leave\-one\-out strategy to the evaluation of GoP of a model using the training data. Note that, for the classical linear regression model, there is no need to re\-fit the model \\(n\\) times to compute PRESS (Kutner et al. [2005](#ref-Kutner2005)). Based on PRESS, one can define the predictive squared\-error \\(PSE\=PRESS/n\\) and the standard deviation error in prediction \\(SEP\=\\sqrt{PSE}\=\\sqrt{PRESS/n}\\) (Todeschini [2010](#ref-SummariesTutorial)). Another measure gaining in popularity is \\\[\\begin{equation} Q^2(f,\\underline{X},\\underline{y}) \= 1\- \\frac{ PRESS(f,\\underline{X},\\underline{y})}{\\sum\_{i\=1}^{n} ({y}\_{i} \- \\bar{y})^2}. \\tag{15\.6} \\end{equation}\\] It is sometimes called the cross\-validated \\(R^2\\) or the coefficient of prediction (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). It appears that \\(Q^2 \\leq R^2\\), i.e., the expected accuracy of out\-of\-sample predictions measured by \\(Q^2\\) cannot exceed the accuracy of in\-sample estimates (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). For a “perfect” predictive model, \\(Q^2\=1\\). It is worth noting that, while \\(R^2\\) always increases if an explanatory variable is added to a model, \\(Q^2\\) decreases when “noisy” variables are added to the model (Todeschini [2010](#ref-SummariesTutorial)). The aforementioned measures capture the overall predictive performance of a model. A measure aimed at evaluating discrimination is the *concordance index* (c\-index) (Harrell, Lee, and Mark [1996](#ref-Harrell1996); Brentnall and Cuzick [2018](#ref-Brentnall2018)). It is computed by considering all pairs of observations and computing the fraction of the pairs in which the ordering of the predictions corresponds to the ordering of the true values (Brentnall and Cuzick [2018](#ref-Brentnall2018)). The index assumes the value of 1 in case of perfect discrimination and 0\.25 for random discrimination. 
Calibration can be assessed by a scatter plot of the predicted values of \\(Y\\) in function of the true ones (Harrell, Lee, and Mark [1996](#ref-Harrell1996); van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The plot can be characterized by its intercept and slope. In case of perfect prediction, the plot should assume the form of a straight line with intercept 0 and slope 1\. A deviation of the intercept from 0 indicates overall bias in predictions (“calibration\-in\-the\-large”), while the value of the slope smaller than 1 suggests overfitting of the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The estimated values of the coefficients can be used to re\-calibrate the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000)). ### 15\.3\.2 Binary dependent variable To introduce model\-performance measures, we, somewhat arbitrarily, label the two possible values of the dependent variable as “success” and “failure”. Of course, in a particular application, the meaning of the “success” outcome does not have to be positive nor optimistic; for a diagnostic test, “success” often means detection of disease. We also assume that model prediction \\(\\widehat{y}\_i\\) takes the form of the predicted probability of success. #### 15\.3\.2\.1 Goodness\-of\-fit If we assign the value of 1 to success and 0 to failure, it is possible to use MSE, RMSE, and MAD, as defined in Equations [(15\.1\)](modelPerformance.html#eq:MSE), [(15\.2\)](modelPerformance.html#eq:RMSE), [(15\.4\)](modelPerformance.html#eq:MAD), respectively, as a GoF measure. In fact, the MSE obtained in that way is equivalent to the Brier score, which can be also expressed as \\\[ \\sum\_{i\=1}^{n} \\{y\_i(1\-\\widehat{y}\_i)^2\+(1\-y\_i)(\\widehat{y}\_i)^2\\}/n. \\] Its minimum value is 0 for a “perfect” model and 0\.25 for an “uninformative” model that yields the predicted probability of 0\.5 for all observations. The Brier score is often also interpreted as an overall predictive\-performance measure for models for a binary dependent variable because it captures both calibration and the concentration of the predictive distribution (Rufibach [2010](#ref-Rufibach2010)). One of the main issues related to the summary measures based on MSE is that they penalize too mildly for wrong predictions. In fact, the maximum penalty for an individual prediction is equal to 1 (if, for instance, the model yields zero probability for an actual success). To address this issue, the log\-likelihood function based on the Bernoulli distribution (see also [(2\.8\)](modelDevelopmentProcess.html#eq:modelTrainingBernoulli)) can be considered: \\\[\\begin{equation} l(f, \\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\{y\_i \\ln(\\widehat{y}\_i)\+ (1\-y\_i)\\ln(1\-\\widehat{y}\_i)\\}. \\tag{15\.7} \\end{equation}\\] Note that, in the machine\-learning world, function \\(\-l(f, \\underline{X} ,\\underline{y})/n\\) is often considered (sometimes also with \\(\\ln\\) replaced by \\(\\log\_2\\)) and termed “log\-loss” or “cross\-entropy”. The log\-likelihood heavily “penalizes” the cases when the model\-predicted probability of success \\(\\widehat{y}\_i\\) is high for an actual failure (\\(y\_i\=0\\)) and low for an actual success (\\(y\_i\=1\\)). The log\-likelihood [(15\.7\)](modelPerformance.html#eq:bernoulli) can be used to define \\(R^2\\)\-like measures (for a review, see, for example, Allison ([2014](#ref-Allison2014))). 
The log\-likelihood [(15\.7\)](modelPerformance.html#eq:bernoulli) can be used to define \\(R^2\\)\-like measures (for a review, see, for example, Allison ([2014](#ref-Allison2014))). One of the variants most often used is the measure proposed by Nagelkerke ([1991](#ref-Nagelkerke1991)):

\\\[\\begin{equation} R\_{bin}^2(f, \\underline{X}, \\underline{y}) \= \\frac{1\-\\exp\\left(\\frac{2}{n}\\{l(f\_0, \\underline{X},\\underline{y})\-l(f, \\underline{X},\\underline{y})\\}\\right)} {1\-\\exp\\left(\\frac{2}{n}l(f\_0, \\underline{X},\\underline{y})\\right)} . \\tag{15\.8} \\end{equation}\\]

It shares properties of the “classical” \\(R^2\\), defined in [(15\.3\)](modelPerformance.html#eq:R2). In [(15\.8\)](modelPerformance.html#eq:R2bin), \\(f\_0()\\) denotes the model that includes only the intercept, which implies the use of the observed fraction of successes as the predicted probability of success. If we denote the fraction by \\(\\hat{p}\\), then \\\[ l(f\_0, \\underline{X},\\underline{y}) \= n \\hat{p} \\ln{\\hat{p}} \+ n(1\-\\hat{p}) \\ln{(1\-\\hat{p})}. \\]

#### 15\.3\.2\.2 Goodness\-of\-prediction

In many situations, the consequences of a prediction error depend on the form of the error. For this reason, performance measures based on the (estimated values of) probability of correct/wrong prediction are more often used. To introduce some of those measures, we assume that, for each observation from the testing dataset, the predicted probability of success \\(\\widehat{y}\_i\\) is compared to a fixed cut\-off threshold, \\(C\\) say. If the probability is larger than \\(C\\), then we assume that the model predicts success; otherwise, we assume that it predicts failure. As a result of such a procedure, the comparison of the observed and predicted values of the dependent variable for the \\(n\\) observations in a dataset can be summarized in a table similar to Table [15\.1](modelPerformance.html#tab:confMat).

Table 15\.1: Confusion table for a classification model with scores \\(\\widehat{y}\_i\\).

| | True value: `success` | True value: `failure` | Total |
| --- | --- | --- | --- |
| \\(\\widehat{y}\_i \\geq C\\), predicted: `success` | True Positive: \\(TP\_C\\) | False Positive (Type I error): \\(FP\_C\\) | \\(P\_C\\) |
| \\(\\widehat{y}\_i \< C\\), predicted: `failure` | False Negative (Type II error): \\(FN\_C\\) | True Negative: \\(TN\_C\\) | \\(N\_C\\) |
| Total | \\(S\\) | \\(F\\) | \\(n\\) |

In the machine\-learning world, Table [15\.1](modelPerformance.html#tab:confMat) is often referred to as the “confusion table” or “confusion matrix”. In statistics, it is often called the “decision table”. The counts \\(TP\_C\\) and \\(TN\_C\\) on the diagonal of the table correspond to the cases when the predicted and observed values of the dependent variable \\(Y\\) coincide. \\(FP\_C\\) is the number of cases in which failure is predicted as success. These are false\-positive, or Type I error, cases. On the other hand, \\(FN\_C\\) is the count of false\-negative, or Type II error, cases, in which success is predicted as failure. Marginally, there are \\(P\_C\\) predicted successes and \\(N\_C\\) predicted failures, with \\(P\_C\+N\_C\=n\\). In the testing dataset, there are \\(S\\) observed successes and \\(F\\) observed failures, with \\(S\+F\=n\\).

The effectiveness of such a procedure can be described by various measures. Let us present some of the most popular examples. The simplest measure of model performance is *accuracy*, defined as \\\[ ACC\_C \= \\frac{TP\_C\+TN\_C}{n}. \\] It is the fraction of correct predictions in the entire testing dataset. Accuracy is of interest if true positives and true negatives are more important than their false counterparts.
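Continuing the illustrative `y`/`p_hat` sketch from above, the confusion table and accuracy for a chosen cut\-off can be obtained, for instance, as follows.

```
# Minimal sketch (illustrative names): confusion table and accuracy for cut-off C.
C      <- 0.5
y_pred <- as.numeric(p_hat >= C)            # 1 = predicted success, 0 = predicted failure
table(predicted = y_pred, observed = y)     # counterpart of Table 15.1
accuracy <- mean(y_pred == y)               # (TP_C + TN_C) / n
```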
However, accuracy may not be very informative when one of the binary categories is much more prevalent than the other (so\-called unbalanced labels). For example, if the testing data contain 90% successes, a model that always predicts a success would reach an accuracy of 0\.9, although one could argue that it is not a very useful model.

There may be situations in which false positives and/or false negatives are of more concern. In that case, one might want to keep their number low. Hence, other measures, focused on the false results, might be of interest. In the machine\-learning world, two other measures are often considered: *precision* and *recall*.

Precision is defined as \\\[ Precision\_C \= \\frac{TP\_C}{TP\_C\+FP\_C} \= \\frac{TP\_C}{P\_C}. \\] Precision is also referred to as the *positive predictive value*. It is the fraction of correct predictions among the predicted successes. Precision is high if the number of false positives is low. Thus, it is a useful measure when the penalty for committing a Type I error (false positive) is high. For instance, consider the use of a genetic test in cancer diagnostics, with a positive result of the test taken as an indication of an increased risk of developing cancer. A false\-positive result might mean that a person has to unnecessarily cope with the emotions and, possibly, medical procedures that follow from being evaluated as having a high risk of developing cancer. We might want to avoid this situation more than the false\-negative case. The latter would mean that the test gives a negative result for a person who, actually, might be at an increased risk of developing cancer. However, an increased risk does not mean that the person will develop cancer; and even if they do, we could hope to detect it in due time.

Recall is defined as \\\[ Recall\_C \= \\frac{TP\_C}{TP\_C\+FN\_C} \= \\frac{TP\_C}{S}. \\] Recall is also referred to as *sensitivity* or the *true\-positive rate*. It is the fraction of correct predictions among the true successes. Recall is high if the number of false negatives is low. Thus, it is a useful measure when the penalty for committing a Type II error (false negative) is high. For instance, consider the use of an algorithm that predicts whether a bank transaction is fraudulent. A false\-negative result means that the algorithm accepts a fraudulent transaction as a legitimate one. Such a decision may have immediate and unpleasant consequences for the bank, because it may imply a non\-recoverable loss of money. On the other hand, a false\-positive result means that a legitimate transaction is treated as fraudulent and is blocked. However, upon further checking, the legitimate nature of the transaction can be confirmed, with, perhaps, an annoyed client as the only consequence for the bank.

The harmonic mean of these two measures defines the *F1 score*: \\\[ F1\\ score\_C \= \\frac{2}{\\frac{1}{Precision\_C} \+ \\frac{1}{Recall\_C}} \= 2\\cdot\\frac{Precision\_C \\cdot Recall\_C}{Precision\_C \+ Recall\_C}. \\] The F1 score is low if either precision or recall is low, and high only if both precision and recall are high. For instance, if precision is 0, the F1 score will also be 0 irrespective of the value of recall. Thus, it is a useful measure when we need to strike a balance between precision and recall.
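A minimal sketch of these measures, again using the illustrative `y_pred` and `y` objects from the previous chunk, could look as follows.

```
# Minimal sketch (illustrative names): precision, recall, and F1 score
# computed from the counts of Table 15.1.
TP <- sum(y_pred == 1 & y == 1)
FP <- sum(y_pred == 1 & y == 0)
FN <- sum(y_pred == 0 & y == 1)

precision <- TP / (TP + FP)
recall    <- TP / (TP + FN)
f1        <- 2 * precision * recall / (precision + recall)
```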
In statistics, and especially in medical applications, the popular measures are *sensitivity* and *specificity*. Sensitivity is simply another name for recall. Specificity is defined as \\\[ Specificity\_C \= \\frac{TN\_C}{TN\_C \+ FP\_C} \= \\frac{TN\_C}{F}. \\] Specificity is also referred to as the *true\-negative rate*. It is the fraction of correct predictions among the true failures. Specificity is high if the number of false positives is low. Thus, like precision, it is a useful measure when the penalty for committing a Type I error (false positive) is high.

The reason why sensitivity and specificity may be more often used outside the machine\-learning world is related to the fact that their values do not depend on the proportion \\(S/n\\) (sometimes termed *prevalence*) of true successes. This means that, once estimated in a sample obtained from a population, they may be applied to other populations, in which the prevalence may be different. This is not true for precision, because one can write \\\[ Precision\_C \= \\frac{Sensitivity\_C \\cdot \\frac{S}{n}}{Sensitivity\_C \\cdot \\frac{S}{n}\+\\left(1\-Specificity\_C\\right) \\cdot \\left(1\-\\frac{S}{n}\\right)}. \\]

All the measures depend on the choice of the cut\-off \\(C\\). To assess the form and the strength of this dependence, a common approach is to construct the Receiver Operating Characteristic (ROC) curve. The curve plots \\(Sensitivity\_C\\) as a function of \\(1\-Specificity\_C\\) for all possible, ordered values of \\(C\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model for the Titanic dataset. Note that the curve indicates an inverse relationship between sensitivity and specificity: by increasing one measure, the other is decreased. The ROC curve is very informative. For a model that predicts successes and failures at random, the corresponding curve will be equal to the diagonal line. On the other hand, for a model that yields perfect predictions, the ROC curve reduces to two intervals that connect the points (0,0\), (0,1\), and (1,1\).

Often, there is a need to summarize the ROC curve with one number that can be used to compare models. A popular measure used toward this aim is the area under the curve (AUC). For a model that predicts successes and failures at random, AUC is the area under the diagonal line, i.e., it is equal to 0\.5\. For a model that yields perfect predictions, AUC is equal to 1\. It appears that, in this case, AUC is equivalent to the c\-index (see Section [15\.3\.1\.2](modelPerformance.html#modelPerformanceMethodContGOP)).

Another ROC\-curve\-based measure that is often used is the *Gini coefficient* \\(G\\). It is closely related to AUC; in fact, it can be calculated as \\(G \= 2 \\times AUC \- 1\\). For a model that predicts successes and failures at random, \\(G\=0\\); for a perfect\-prediction model, \\(G \= 1\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) illustrates the calculation of the Gini coefficient for the random forest model for the Titanic dataset (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). A variant of the ROC curve based on precision and recall is called the precision\-recall curve. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the curve for the random forest model for the Titanic dataset. The value of the Gini coefficient or, equivalently, of \\(AUC\-0\.5\\) allows a comparison of the model\-based predictions with random guessing.
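For illustration, AUC can be computed from the ranks of the predicted probabilities via its equivalence with the Mann\-Whitney statistic, and the Gini coefficient follows directly; the sketch below again uses the hypothetical `y` and `p_hat` vectors.

```
# Minimal sketch (illustrative names): AUC via the rank (Mann-Whitney) formulation,
# and the corresponding Gini coefficient.
n1   <- sum(y == 1)                                            # observed successes S
n0   <- sum(y == 0)                                            # observed failures F
auc  <- (sum(rank(p_hat)[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
gini <- 2 * auc - 1
```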
A measure that explicitly compares a prediction model with a baseline (or null) model is the *lift*. Commonly, random guessing is considered as the baseline model. In that case, \\\[ Lift\_C \= \\frac{\\frac{TP\_C}{P\_C}}{\\frac{S}{n}} \= n\\frac{Precision\_C}{S}. \\] Note that \\(S/n\\) can be seen as the estimated probability of a correct prediction of success for random guessing. On the other hand, \\(TP\_C/P\_C\\) is the estimated probability of a correct prediction of a success given that the model predicts a success. Hence, informally speaking, the lift indicates how many times better (or worse) the model is at predicting success than random guessing. Like the other measures, the lift depends on the choice of the cut\-off \\(C\\). The plot of the lift as a function of \\(P\_C\\) is called the *lift chart*. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the lift chart for the random forest model for the Titanic dataset.

Calibration of predictions can be assessed by a scatter plot of the predicted values of \\(Y\\) as a function of the true ones. A complicating issue is the fact that the true values are only equal to 0 or 1\. Therefore, smoothing techniques or grouping of observations is needed to obtain a meaningful plot (Steyerberg et al. [2010](#ref-Steyerberg2010); Steyerberg [2019](#ref-Steyerberg2019)).

There are many more measures of the performance of a predictive model for a binary dependent variable. An overview can be found in, e.g., Berrar ([2019](#ref-Berrar2019)).

### 15\.3\.3 Categorical dependent variable

To introduce model\-performance measures for a categorical dependent variable, we assume that \\(\\underline{y}\_i\\) is now a vector of \\(K\\) elements. Each element \\(y\_{i}^k\\) (\\(k\=1,\\ldots,K\\)) is a binary variable indicating whether the \\(k\\)\-th category was observed for the \\(i\\)\-th observation. We assume that, for each observation, only one category can be observed. Thus, all elements of \\(\\underline{y}\_i\\) are equal to 0 except one that is equal to 1\. Furthermore, we assume that the model’s prediction takes the form of a vector, \\(\\underline{\\widehat{y}}\_i\\) say, of the predicted probabilities for each of the \\(K\\) categories, with \\({\\widehat{y}}\_i^k\\) denoting the probability for the \\(k\\)\-th category. The predicted category is the one with the highest predicted probability.

#### 15\.3\.3\.1 Goodness\-of\-fit

The log\-likelihood function [(15\.7\)](modelPerformance.html#eq:bernoulli) can be adapted to the categorical dependent variable case as follows:

\\\[\\begin{equation} l(f, \\underline{X} ,\\underline{y}) \= \\sum\_{i\=1}^{n}\\sum\_{k\=1}^{K} y\_{i}^k \\ln({\\widehat{y}}\_i^k). \\tag{15\.9} \\end{equation}\\]

It is essentially the log\-likelihood function based on a multinomial distribution. Based on the likelihood, an \\(R^2\\)\-like measure can be defined, using an approach similar to the one used in [(15\.8\)](modelPerformance.html#eq:R2bin) for the construction of \\(R\_{bin}^2\\) (Harrell [2015](#ref-Harrell2015)).
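Under the notation above, the multinomial log\-likelihood reduces to a single line of R; the sketch assumes a hypothetical \\(n \\times K\\) indicator matrix `Y` and a matching matrix `P_hat` of predicted class probabilities.

```
# Minimal sketch (illustrative names): multinomial log-likelihood (15.9).
# Y is an n x K 0/1 indicator matrix, P_hat an n x K matrix of predicted probabilities.
loglik <- sum(Y * log(P_hat))
```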
#### 15\.3\.3\.2 Goodness\-of\-prediction

It is possible to extend measures like accuracy, precision, etc., introduced in Section [15\.3\.2](modelPerformance.html#modelPerformanceMethodBin) for a binary dependent variable, to the case of a categorical one. Toward this end, first, a confusion table is created for each category \\(k\\), treating the category as “success” and all other categories as “failure”. Let us denote the counts in the table by \\(TP\_k\\), \\(FP\_k\\), \\(TN\_k\\), and \\(FN\_k\\). Based on the counts, we can compute the average accuracy across all classes as follows:

\\\[\\begin{equation} \\overline{ACC\_C} \= \\frac{1}{K}\\sum\_{k\=1}^K\\frac{TP\_{C,k}\+TN\_{C,k}}{n}. \\tag{15\.10} \\end{equation}\\]

Similarly, one could compute the average precision, average sensitivity, etc. In the machine\-learning world, this approach is often termed “macro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)). The averages computed in that way treat all classes equally.

An alternative approach is to sum the appropriate counts from the confusion tables for all classes, and then form a measure based on the so\-computed cumulative counts. For instance, for precision, this would lead to

\\\[\\begin{equation} \\overline{Precision\_C}\_{\\mu} \= \\frac{\\sum\_{k\=1}^K TP\_{C,k}}{\\sum\_{k\=1}^K (TP\_{C,k}\+FP\_{C,k})}. \\tag{15\.11} \\end{equation}\\]

In the machine\-learning world, this approach is often termed “micro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)), hence the subscript \\(\\mu\\) for “micro” in [(15\.11\)](modelPerformance.html#eq:precmicro). Note that, for accuracy, this computation still leads to [(15\.10\)](modelPerformance.html#eq:accmacro). The measures computed in that way favour classes with larger numbers of observations.

### 15\.3\.4 Count dependent variable

In the case of counts, one could consider using MSE or any of the measures for a continuous dependent variable mentioned in Section [15\.3\.1\.1](modelPerformance.html#modelPerformanceMethodContGOF). However, a particular feature of count dependent variables is that their variance depends on the mean value. Consequently, weighting all contributions to MSE equally, as in [(15\.1\)](modelPerformance.html#eq:MSE), is not appropriate, because the same residual value \\(r\_i\\) indicates a larger discrepancy for a smaller count \\(y\_i\\) than for a larger one. Therefore, a popular measure of the performance of a predictive model for counts is Pearson’s statistic:

\\\[\\begin{equation} \\chi^2(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\frac{(\\widehat{y}\_i \- y\_i)^2}{\\widehat{y}\_i} \= \\sum\_{i\=1}^{n} \\frac{r\_i^2}{\\widehat{y}\_i}. \\tag{15\.12} \\end{equation}\\]

From [(15\.12\)](modelPerformance.html#eq:Pearson) it is clear that, if the same residual is obtained for two different observed counts, it is assigned a larger weight for the count for which the predicted value is smaller.
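For counts, the Pearson statistic is again a one\-liner; the sketch assumes hypothetical vectors `y` of observed counts and `y_hat` of predicted (positive) mean counts.

```
# Minimal sketch (illustrative names): Pearson's statistic (15.12) for count data.
pearson_chisq <- sum((y_hat - y)^2 / y_hat)
```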
As the measure weighs all differences equally, large residuals have got a high impact on MSE. Thus, the measure is sensitive to outliers. For a “perfect” model, which predicts (fits) all \\(y\_i\\) exactly, \\(MSE \= 0\\). Note that MSE is constructed on a different scale from the dependent variable. Thus, a more interpretable variant of this measure is the root\-mean\-squared\-error (RMSE), defined as \\\[\\begin{equation} RMSE(f, \\underline{X}, \\underline{y}) \= \\sqrt{MSE(f, \\underline{X}, \\underline{y})}. \\tag{15\.2} \\end{equation}\\] A popular variant of RMSE is its normalized version, \\(R^2\\), defined as \\\[\\begin{equation} R^2(f, \\underline{X}, \\underline{y}) \= 1 \- \\frac{MSE(f, \\underline{X}, \\underline{y})}{MSE(f\_0, \\underline{X},\\underline{y})}. \\tag{15\.3} \\end{equation}\\] In [(15\.3\)](modelPerformance.html#eq:R2), \\(f\_0()\\) denotes a “baseline” model. For instance, in the case of the classical linear regression, \\(f\_0()\\) is the model that includes only the intercept, which implies the use of the mean value of \\(Y\\) as a prediction for all observations. \\(R^2\\) is normalized in the sense that the “perfectly” fitting model leads to \\(R^2 \= 1\\), while \\(R^2 \= 0\\) means that we are not doing better than the baseline model. In the context of the classical linear regression, \\(R^2\\) is the familiar coefficient of determination and can be interpreted as the fraction of the total variance of \\(Y\\) “explained” by model \\(f()\\). Given sensitivity of MSE to outliers, sometimes the median absolute\-deviation (MAD) is considered: \\\[\\begin{equation} MAD(f, \\underline{X} ,\\underline{y}) \= median( \|r\_1\|, ..., \|r\_n\| ). \\tag{15\.4} \\end{equation}\\] MAD is more robust to outliers than MSE. A disadvantage of MAD are its less favourable mathematical properties. Section [15\.4\.1](modelPerformance.html#modelPerformanceApartments) illustrates the use of measures for the linear regression model and the random forest model for the apartment\-prices data. #### 15\.3\.1\.2 Goodness\-of\-prediction Assume that a testing dataset is available. In that case, we can use model \\(f()\\), obtained by fitting the model to training data, to predict the values of the dependent variable observed in the testing dataset. Subsequently, we can compute MSE as in [(15\.1\)](modelPerformance.html#eq:MSE) to obtain the mean squared\-prediction\-error (MSPE) as a GoP measure (Kutner et al. [2005](#ref-Kutner2005)). By taking the square root of MSPE, we get the root\-mean\-squared\-prediction\-error (RMSPE). In the absence of testing data, one of the most known GoP measures for models for a continuous dependent variable is the predicted sum\-of\-squares (PRESS), defined as \\\[\\begin{equation} PRESS(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} (\\widehat{y}\_{i(\-i)} \- y\_i)^2\. \\tag{15\.5} \\end{equation}\\] Thus, PRESS can be seen as a result of the application of the leave\-one\-out strategy to the evaluation of GoP of a model using the training data. Note that, for the classical linear regression model, there is no need to re\-fit the model \\(n\\) times to compute PRESS (Kutner et al. [2005](#ref-Kutner2005)). Based on PRESS, one can define the predictive squared\-error \\(PSE\=PRESS/n\\) and the standard deviation error in prediction \\(SEP\=\\sqrt{PSE}\=\\sqrt{PRESS/n}\\) (Todeschini [2010](#ref-SummariesTutorial)). 
Another measure gaining in popularity is \\\[\\begin{equation} Q^2(f,\\underline{X},\\underline{y}) \= 1\- \\frac{ PRESS(f,\\underline{X},\\underline{y})}{\\sum\_{i\=1}^{n} ({y}\_{i} \- \\bar{y})^2}. \\tag{15\.6} \\end{equation}\\] It is sometimes called the cross\-validated \\(R^2\\) or the coefficient of prediction (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). It appears that \\(Q^2 \\leq R^2\\), i.e., the expected accuracy of out\-of\-sample predictions measured by \\(Q^2\\) cannot exceed the accuracy of in\-sample estimates (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). For a “perfect” predictive model, \\(Q^2\=1\\). It is worth noting that, while \\(R^2\\) always increases if an explanatory variable is added to a model, \\(Q^2\\) decreases when “noisy” variables are added to the model (Todeschini [2010](#ref-SummariesTutorial)). The aforementioned measures capture the overall predictive performance of a model. A measure aimed at evaluating discrimination is the *concordance index* (c\-index) (Harrell, Lee, and Mark [1996](#ref-Harrell1996); Brentnall and Cuzick [2018](#ref-Brentnall2018)). It is computed by considering all pairs of observations and computing the fraction of the pairs in which the ordering of the predictions corresponds to the ordering of the true values (Brentnall and Cuzick [2018](#ref-Brentnall2018)). The index assumes the value of 1 in case of perfect discrimination and 0\.25 for random discrimination. Calibration can be assessed by a scatter plot of the predicted values of \\(Y\\) in function of the true ones (Harrell, Lee, and Mark [1996](#ref-Harrell1996); van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The plot can be characterized by its intercept and slope. In case of perfect prediction, the plot should assume the form of a straight line with intercept 0 and slope 1\. A deviation of the intercept from 0 indicates overall bias in predictions (“calibration\-in\-the\-large”), while the value of the slope smaller than 1 suggests overfitting of the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The estimated values of the coefficients can be used to re\-calibrate the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000)). #### 15\.3\.1\.1 Goodness\-of\-fit The most popular GoF measure for models for a continuous dependent variable is the mean squared\-error, defined as \\\[\\begin{equation} MSE(f,\\underline{X},\\underline{y}) \= \\frac{1}{n} \\sum\_{i}^{n} (\\widehat{y}\_i \- y\_i)^2 \= \\frac{1}{n} \\sum\_{i}^{n} r\_i^2, \\tag{15\.1} \\end{equation}\\] where \\(r\_i\\) is the residual for the \\(i\\)\-th observation (see also Section [2\.3](modelDevelopmentProcess.html#notation)). Thus, MSE can be seen as a sum of squared residuals. MSE is a convex differentiable function, which is important from an optimization point of view (see Section [2\.5](modelDevelopmentProcess.html#fitting)). As the measure weighs all differences equally, large residuals have got a high impact on MSE. Thus, the measure is sensitive to outliers. For a “perfect” model, which predicts (fits) all \\(y\_i\\) exactly, \\(MSE \= 0\\). Note that MSE is constructed on a different scale from the dependent variable. Thus, a more interpretable variant of this measure is the root\-mean\-squared\-error (RMSE), defined as \\\[\\begin{equation} RMSE(f, \\underline{X}, \\underline{y}) \= \\sqrt{MSE(f, \\underline{X}, \\underline{y})}. 
\\tag{15\.2} \\end{equation}\\] A popular variant of RMSE is its normalized version, \\(R^2\\), defined as \\\[\\begin{equation} R^2(f, \\underline{X}, \\underline{y}) \= 1 \- \\frac{MSE(f, \\underline{X}, \\underline{y})}{MSE(f\_0, \\underline{X},\\underline{y})}. \\tag{15\.3} \\end{equation}\\] In [(15\.3\)](modelPerformance.html#eq:R2), \\(f\_0()\\) denotes a “baseline” model. For instance, in the case of the classical linear regression, \\(f\_0()\\) is the model that includes only the intercept, which implies the use of the mean value of \\(Y\\) as a prediction for all observations. \\(R^2\\) is normalized in the sense that the “perfectly” fitting model leads to \\(R^2 \= 1\\), while \\(R^2 \= 0\\) means that we are not doing better than the baseline model. In the context of the classical linear regression, \\(R^2\\) is the familiar coefficient of determination and can be interpreted as the fraction of the total variance of \\(Y\\) “explained” by model \\(f()\\). Given sensitivity of MSE to outliers, sometimes the median absolute\-deviation (MAD) is considered: \\\[\\begin{equation} MAD(f, \\underline{X} ,\\underline{y}) \= median( \|r\_1\|, ..., \|r\_n\| ). \\tag{15\.4} \\end{equation}\\] MAD is more robust to outliers than MSE. A disadvantage of MAD are its less favourable mathematical properties. Section [15\.4\.1](modelPerformance.html#modelPerformanceApartments) illustrates the use of measures for the linear regression model and the random forest model for the apartment\-prices data. #### 15\.3\.1\.2 Goodness\-of\-prediction Assume that a testing dataset is available. In that case, we can use model \\(f()\\), obtained by fitting the model to training data, to predict the values of the dependent variable observed in the testing dataset. Subsequently, we can compute MSE as in [(15\.1\)](modelPerformance.html#eq:MSE) to obtain the mean squared\-prediction\-error (MSPE) as a GoP measure (Kutner et al. [2005](#ref-Kutner2005)). By taking the square root of MSPE, we get the root\-mean\-squared\-prediction\-error (RMSPE). In the absence of testing data, one of the most known GoP measures for models for a continuous dependent variable is the predicted sum\-of\-squares (PRESS), defined as \\\[\\begin{equation} PRESS(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} (\\widehat{y}\_{i(\-i)} \- y\_i)^2\. \\tag{15\.5} \\end{equation}\\] Thus, PRESS can be seen as a result of the application of the leave\-one\-out strategy to the evaluation of GoP of a model using the training data. Note that, for the classical linear regression model, there is no need to re\-fit the model \\(n\\) times to compute PRESS (Kutner et al. [2005](#ref-Kutner2005)). Based on PRESS, one can define the predictive squared\-error \\(PSE\=PRESS/n\\) and the standard deviation error in prediction \\(SEP\=\\sqrt{PSE}\=\\sqrt{PRESS/n}\\) (Todeschini [2010](#ref-SummariesTutorial)). Another measure gaining in popularity is \\\[\\begin{equation} Q^2(f,\\underline{X},\\underline{y}) \= 1\- \\frac{ PRESS(f,\\underline{X},\\underline{y})}{\\sum\_{i\=1}^{n} ({y}\_{i} \- \\bar{y})^2}. \\tag{15\.6} \\end{equation}\\] It is sometimes called the cross\-validated \\(R^2\\) or the coefficient of prediction (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). It appears that \\(Q^2 \\leq R^2\\), i.e., the expected accuracy of out\-of\-sample predictions measured by \\(Q^2\\) cannot exceed the accuracy of in\-sample estimates (Landram, Abdullat, and Shah [2005](#ref-Landram2005)). For a “perfect” predictive model, \\(Q^2\=1\\). 
It is worth noting that, while \\(R^2\\) always increases if an explanatory variable is added to a model, \\(Q^2\\) decreases when “noisy” variables are added to the model (Todeschini [2010](#ref-SummariesTutorial)). The aforementioned measures capture the overall predictive performance of a model. A measure aimed at evaluating discrimination is the *concordance index* (c\-index) (Harrell, Lee, and Mark [1996](#ref-Harrell1996); Brentnall and Cuzick [2018](#ref-Brentnall2018)). It is computed by considering all pairs of observations and computing the fraction of the pairs in which the ordering of the predictions corresponds to the ordering of the true values (Brentnall and Cuzick [2018](#ref-Brentnall2018)). The index assumes the value of 1 in case of perfect discrimination and 0\.25 for random discrimination. Calibration can be assessed by a scatter plot of the predicted values of \\(Y\\) in function of the true ones (Harrell, Lee, and Mark [1996](#ref-Harrell1996); van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The plot can be characterized by its intercept and slope. In case of perfect prediction, the plot should assume the form of a straight line with intercept 0 and slope 1\. A deviation of the intercept from 0 indicates overall bias in predictions (“calibration\-in\-the\-large”), while the value of the slope smaller than 1 suggests overfitting of the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000); Steyerberg et al. [2010](#ref-Steyerberg2010)). The estimated values of the coefficients can be used to re\-calibrate the model (van Houwelingen, H.C. [2000](#ref-vanHouwelingen2000)). ### 15\.3\.2 Binary dependent variable To introduce model\-performance measures, we, somewhat arbitrarily, label the two possible values of the dependent variable as “success” and “failure”. Of course, in a particular application, the meaning of the “success” outcome does not have to be positive nor optimistic; for a diagnostic test, “success” often means detection of disease. We also assume that model prediction \\(\\widehat{y}\_i\\) takes the form of the predicted probability of success. #### 15\.3\.2\.1 Goodness\-of\-fit If we assign the value of 1 to success and 0 to failure, it is possible to use MSE, RMSE, and MAD, as defined in Equations [(15\.1\)](modelPerformance.html#eq:MSE), [(15\.2\)](modelPerformance.html#eq:RMSE), [(15\.4\)](modelPerformance.html#eq:MAD), respectively, as a GoF measure. In fact, the MSE obtained in that way is equivalent to the Brier score, which can be also expressed as \\\[ \\sum\_{i\=1}^{n} \\{y\_i(1\-\\widehat{y}\_i)^2\+(1\-y\_i)(\\widehat{y}\_i)^2\\}/n. \\] Its minimum value is 0 for a “perfect” model and 0\.25 for an “uninformative” model that yields the predicted probability of 0\.5 for all observations. The Brier score is often also interpreted as an overall predictive\-performance measure for models for a binary dependent variable because it captures both calibration and the concentration of the predictive distribution (Rufibach [2010](#ref-Rufibach2010)). One of the main issues related to the summary measures based on MSE is that they penalize too mildly for wrong predictions. In fact, the maximum penalty for an individual prediction is equal to 1 (if, for instance, the model yields zero probability for an actual success). 
To address this issue, the log\-likelihood function based on the Bernoulli distribution (see also [(2\.8\)](modelDevelopmentProcess.html#eq:modelTrainingBernoulli)) can be considered: \\\[\\begin{equation} l(f, \\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\{y\_i \\ln(\\widehat{y}\_i)\+ (1\-y\_i)\\ln(1\-\\widehat{y}\_i)\\}. \\tag{15\.7} \\end{equation}\\] Note that, in the machine\-learning world, function \\(\-l(f, \\underline{X} ,\\underline{y})/n\\) is often considered (sometimes also with \\(\\ln\\) replaced by \\(\\log\_2\\)) and termed “log\-loss” or “cross\-entropy”. The log\-likelihood heavily “penalizes” the cases when the model\-predicted probability of success \\(\\widehat{y}\_i\\) is high for an actual failure (\\(y\_i\=0\\)) and low for an actual success (\\(y\_i\=1\\)). The log\-likelihood [(15\.7\)](modelPerformance.html#eq:bernoulli) can be used to define \\(R^2\\)\-like measures (for a review, see, for example, Allison ([2014](#ref-Allison2014))). One of the variants most often used is the measure proposed by Nagelkerke ([1991](#ref-Nagelkerke1991)): \\\[\\begin{equation} R\_{bin}^2(f, \\underline{X}, \\underline{y}) \= \\frac{1\-\\exp\\left(\\frac{2}{n}\\{l(f\_0, \\underline{X},\\underline{y})\-l(f, \\underline{X},\\underline{y})\\}\\right)} {1\-\\exp\\left(\\frac{2}{n}l(f\_0, \\underline{X},\\underline{y})\\right)} . \\tag{15\.8} \\end{equation}\\] It shares properties of the “classical” \\(R^2\\), defined in [(15\.3\)](modelPerformance.html#eq:R2). In [(15\.8\)](modelPerformance.html#eq:R2bin), \\(f\_0()\\) denotes the model that includes only the intercept, which implies the use of the observed fraction of successes as the predicted probability of success. If we denote the fraction by \\(\\hat{p}\\), then \\\[ l(f\_0, \\underline{X},\\underline{y}) \= n \\hat{p} \\ln{\\hat{p}} \+ n(1\-\\hat{p}) \\ln{(1\-\\hat{p})}. \\] #### 15\.3\.2\.2 Goodness\-of\-prediction In many situations, consequences of a prediction error depend on the form of the error. For this reason, performance measures based on the (estimated values of) probability of correct/wrong prediction are more often used. To introduce some of those measures, we assume that, for each observation from the testing dataset, the predicted probability of success \\(\\widehat{y}\_i\\) is compared to a fixed cut\-off threshold, \\(C\\) say. If the probability is larger than \\(C\\), then we assume that the model predicts success; otherwise, we assume that it predicts failure. As a result of such a procedure, the comparison of the observed and predicted values of the dependent variable for the \\(n\\) observations in a dataset can be summarized in a table similar to Table [15\.1](modelPerformance.html#tab:confMat). Table 15\.1: Confusion table for a classification model with scores \\(\\widehat{y}\_i\\). | | True value: `success` | True value: `failure` | Total | | --- | --- | --- | --- | | \\(\\widehat{y}\_i \\geq C\\), predicted: `success` | True Positive: \\(TP\_C\\) | False Positive (Type I error): \\(FP\_C\\) | \\(P\_C\\) | | \\(\\widehat{y}\_i \< C\\), predicted: `failure` | False Negative (Type II error): \\(FN\_C\\) | True Negative: \\(TN\_C\\) | \\(N\_C\\) | | Total | \\(S\\) | \\(F\\) | \\(n\\) | In the machine\-learning world, Table [15\.1](modelPerformance.html#tab:confMat) is often referred to as the “confusion table” or “confusion matrix”. In statistics, it is often called the “decision table”. 
The counts \\(TP\_C\\) and \\(TN\_C\\) on the diagonal of the table correspond to the cases when the predicted and observed value of the dependent variable \\(Y\\) coincide. \\(FP\_C\\) is the number of cases in which failure is predicted as a success. These are false\-positive, or Type I error, cases. On the other hand, \\(FN\_C\\) is the count of false\-negative, or Type II error, cases, in which success is predicted as failure. Marginally, there are \\(P\_C\\) predicted successes and \\(N\_C\\) predicted failures, with \\(P\_C\+N\_C\=n\\). In the testing dataset, there are \\(S\\) observed successes and \\(F\\) observed failures, with \\(S\+F\=n\\). The effectiveness of such a test can be described by various measures. Let us present some of the most popular examples. The simplest measure of model performance is *accuracy*, defined as \\\[ ACC\_C \= \\frac{TP\_C\+TN\_C}{n}. \\] It is the fraction of correct predictions in the entire testing dataset. Accuracy is of interest if true positives and true negatives are more important than their false counterparts. However, accuracy may not be very informative when one of the binary categories is much more prevalent (so called unbalanced labels). For example, if the testing data contain 90% of successes, a model that would always predict a success would reach an accuracy of 0\.9, although one could argue that this is not a very useful model. There may be situations when false positives and/or false negatives may be of more concern. In that case, one might want to keep their number low. Hence, other measures, focused on the false results, might be of interest. In the machine\-learning world, two other measures are often considered: *precision* and *recall*. Precision is defined as \\\[ Precision\_C \= \\frac{TP\_C}{TP\_C\+FP\_C} \= \\frac{TP\_C}{P\_C}. \\] Precision is also referred to as the *positive predictive value*. It is the fraction of correct predictions among the predicted successes. Precision is high if the number of false positives is low. Thus, it is a useful measure when the penalty for committing the Type I error (false positive) is high. For instance, consider the use of a genetic test in cancer diagnostics, with a positive result of the test taken as an indication of an increased risk of developing a cancer. A false\-positive result of a genetic test might mean that a person would have to unnecessarily cope with emotions and, possibly, medical procedures related to the fact of being evaluated as having a high risk of developing a cancer. We might want to avoid this situation more than the false\-negative case. The latter would mean that the genetic test gives a negative result for a person that, actually, might be at an increased risk of developing a cancer. However, an increased risk does not mean that the person will develop cancer. And even so, we could hope that we could detect it in due time. Recall is defined as \\\[ Recall\_C \= \\frac{TP\_C}{TP\_C\+FN\_C} \= \\frac{TP\_C}{S}. \\] Recall is also referred to as *sensitivity* or the *true\-positive rate*. It is the fraction of correct predictions among the true successes. Recall is high if the number of false negatives is low. Thus, it is a useful measure when the penalty for committing the Type II error (false negative) is high. For instance, consider the use of an algorithm that predicts whether a bank transaction is fraudulent. A false\-negative result means that the algorithm accepts a fraudulent transaction as a legitimate one. 
Such a decision may have immediate and unpleasant consequences for the bank, because it may imply a non\-recoverable loss of money. On the other hand, a false\-positive result means that a legitimate transaction is considered as a fraudulent one and is blocked. However, upon further checking, the legitimate nature of the transaction can be confirmed with, perhaps, the annoyed client as the only consequence for the bank. The harmonic mean of these two measures defines the *F1 score*: \\\[ F1\\ score\_C \= \\frac{2}{\\frac{1}{Precision\_C} \+ \\frac{1}{Recall\_C}} \= 2\\cdot\\frac{Precision\_C \\cdot Recall\_C}{Precision\_C \+ Recall\_C}. \\] F1 score tends to give a low value if either precision or recall is low, and a high value if both precision and recall are high. For instance, if precision is 0, F1 score will also be 0 irrespectively of the value of recall. Thus, it is a useful measure if we have got to seek a balance between precision and recall. In statistics, and especially in applications in medicine, the popular measures are *sensitivity* and *specificity*. Sensitivity is simply another name for recall. Specificity is defined as \\\[ Specificity\_C \= \\frac{TN\_C}{TN\_C \+ FP\_C} \= \\frac{TN\_C}{F}. \\] Specificity is also referred to as the *true\-negative rate*. It is the fraction of correct predictions among the true failures. Specificity is high if the number of false positives is low. Thus, as precision, it is a useful measure when the penalty for committing the Type I error (false positive) is high. The reason why sensitivity and specificity may be more often used outside the machine\-learning world is related to the fact that their values do not depend on the proportion \\(S/n\\) (sometimes termed *prevalence*) of true successes. This means that, once estimated in a sample obtained from a population, they may be applied to other populations, in which the prevalence may be different. This is not true for precision, because one can write \\\[ Precision\_C \= \\frac{Sensitivity\_C \\cdot \\frac{S}{n}}{Sensitivity\_C \\cdot \\frac{S}{n}\+Specificity\_C \\cdot \\left(1\-\\frac{S}{n}\\right)}. \\] All the measures depend on the choice of cut\-off \\(C\\). To assess the form and the strength of dependence, a common approach is to construct the Receiver Operating Characteristic (ROC) curve. The curve plots \\(Sensitivity\_C\\) in function of \\(1\-Specificity\_C\\) for all possible, ordered values of \\(C\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model for the Titanic dataset. Note that the curve indicates an inverse relationship between sensitivity and specificity: by increasing one measure, the other is decreased. The ROC curve is very informative. For a model that predicts successes and failures at random, the corresponding curve will be equal to the diagonal line. On the other hand, for a model that yields perfect predictions, the ROC curve reduces to two intervals that connect points (0,0\), (0,1\), and (1,1\). Often, there is a need to summarize the ROC curve with one number, which can be used to compare models. A popular measure that is used toward this aim is the area under the curve (AUC). For a model that predicts successes and failures at random, AUC is the area under the diagonal line, i.e., it is equal to 0\.5\. For a model that yields perfect predictions, AUC is equal to 1\. 
It appears that, in this case, AUC is equivalent to the c\-index (see Section [15\.3\.1\.2](modelPerformance.html#modelPerformanceMethodContGOP)). Another ROC\-curve\-based measure that is often used is the *Gini coefficient* \\(G\\). It is closely related to AUC; in fact, it can be calculated as \\(G \= 2 \\times AUC \- 1\\). For a model that predicts successes and failures at random, \\(G\=0\\); for a perfect\-prediction model, \\(G \= 1\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) illustrates the calculation of the Gini coefficient for the random forest model for the Titanic dataset (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). A variant of ROC curve based on precision and recall is a called a precision\-recall curve. Figure [15\.3](modelPerformance.html#fig:examplePRC) the curve for the random forest model for the Titanic dataset. The value of the Gini coefficient or, equivalently, of \\(AUC\-0\.5\\) allows a comparison of the model\-based predictions with random guessing. A measure that explicitly compares a prediction model with a baseline (or null) model is the *lift*. Commonly, random guessing is considered as the baseline model. In that case, \\\[ Lift\_C \= \\frac{\\frac{TP\_C}{P\_C}}{\\frac{S}{n}} \= n\\frac{Precision\_C}{S}. \\] Note that \\(S/n\\) can be seen as the estimated probability of a correct prediction of success for random guessing. On the other hand, \\(TP\_C/P\_C\\) is the estimated probability of a correct prediction of a success given that the model predicts a success. Hence, informally speaking, the lift indicates how many more (or less) times does the model do better in predicting success as compared to random guessing. As other measures, the lift depends on the choice of cut\-off \\(C\\). The plot of the lift as a function of \\(P\_C\\) is called the *lift chart*. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the lift chart for the random forest model for the Titanic dataset. Calibration of predictions can be assessed by a scatter plot of the predicted values of \\(Y\\) in function of the true ones. A complicating issue is a fact that the true values are only equal to 0 or 1\. Therefore, smoothing techniques or grouping of observations is needed to obtain a meaningful plot (Steyerberg et al. [2010](#ref-Steyerberg2010); Steyerberg [2019](#ref-Steyerberg2019)). There are many more measures aimed at measuring the performance of a predictive model for a binary dependent variable. An overview can be found in, e.g., Berrar ([2019](#ref-Berrar2019)). #### 15\.3\.2\.1 Goodness\-of\-fit If we assign the value of 1 to success and 0 to failure, it is possible to use MSE, RMSE, and MAD, as defined in Equations [(15\.1\)](modelPerformance.html#eq:MSE), [(15\.2\)](modelPerformance.html#eq:RMSE), [(15\.4\)](modelPerformance.html#eq:MAD), respectively, as a GoF measure. In fact, the MSE obtained in that way is equivalent to the Brier score, which can be also expressed as \\\[ \\sum\_{i\=1}^{n} \\{y\_i(1\-\\widehat{y}\_i)^2\+(1\-y\_i)(\\widehat{y}\_i)^2\\}/n. \\] Its minimum value is 0 for a “perfect” model and 0\.25 for an “uninformative” model that yields the predicted probability of 0\.5 for all observations. The Brier score is often also interpreted as an overall predictive\-performance measure for models for a binary dependent variable because it captures both calibration and the concentration of the predictive distribution (Rufibach [2010](#ref-Rufibach2010)). 
One of the main issues related to the summary measures based on MSE is that they penalize too mildly for wrong predictions. In fact, the maximum penalty for an individual prediction is equal to 1 (if, for instance, the model yields zero probability for an actual success). To address this issue, the log\-likelihood function based on the Bernoulli distribution (see also [(2\.8\)](modelDevelopmentProcess.html#eq:modelTrainingBernoulli)) can be considered: \\\[\\begin{equation} l(f, \\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\{y\_i \\ln(\\widehat{y}\_i)\+ (1\-y\_i)\\ln(1\-\\widehat{y}\_i)\\}. \\tag{15\.7} \\end{equation}\\] Note that, in the machine\-learning world, function \\(\-l(f, \\underline{X} ,\\underline{y})/n\\) is often considered (sometimes also with \\(\\ln\\) replaced by \\(\\log\_2\\)) and termed “log\-loss” or “cross\-entropy”. The log\-likelihood heavily “penalizes” the cases when the model\-predicted probability of success \\(\\widehat{y}\_i\\) is high for an actual failure (\\(y\_i\=0\\)) and low for an actual success (\\(y\_i\=1\\)). The log\-likelihood [(15\.7\)](modelPerformance.html#eq:bernoulli) can be used to define \\(R^2\\)\-like measures (for a review, see, for example, Allison ([2014](#ref-Allison2014))). One of the variants most often used is the measure proposed by Nagelkerke ([1991](#ref-Nagelkerke1991)): \\\[\\begin{equation} R\_{bin}^2(f, \\underline{X}, \\underline{y}) \= \\frac{1\-\\exp\\left(\\frac{2}{n}\\{l(f\_0, \\underline{X},\\underline{y})\-l(f, \\underline{X},\\underline{y})\\}\\right)} {1\-\\exp\\left(\\frac{2}{n}l(f\_0, \\underline{X},\\underline{y})\\right)} . \\tag{15\.8} \\end{equation}\\] It shares properties of the “classical” \\(R^2\\), defined in [(15\.3\)](modelPerformance.html#eq:R2). In [(15\.8\)](modelPerformance.html#eq:R2bin), \\(f\_0()\\) denotes the model that includes only the intercept, which implies the use of the observed fraction of successes as the predicted probability of success. If we denote the fraction by \\(\\hat{p}\\), then \\\[ l(f\_0, \\underline{X},\\underline{y}) \= n \\hat{p} \\ln{\\hat{p}} \+ n(1\-\\hat{p}) \\ln{(1\-\\hat{p})}. \\] #### 15\.3\.2\.2 Goodness\-of\-prediction In many situations, consequences of a prediction error depend on the form of the error. For this reason, performance measures based on the (estimated values of) probability of correct/wrong prediction are more often used. To introduce some of those measures, we assume that, for each observation from the testing dataset, the predicted probability of success \\(\\widehat{y}\_i\\) is compared to a fixed cut\-off threshold, \\(C\\) say. If the probability is larger than \\(C\\), then we assume that the model predicts success; otherwise, we assume that it predicts failure. As a result of such a procedure, the comparison of the observed and predicted values of the dependent variable for the \\(n\\) observations in a dataset can be summarized in a table similar to Table [15\.1](modelPerformance.html#tab:confMat). Table 15\.1: Confusion table for a classification model with scores \\(\\widehat{y}\_i\\). 
| | True value: `success` | True value: `failure` | Total | | --- | --- | --- | --- | | \\(\\widehat{y}\_i \\geq C\\), predicted: `success` | True Positive: \\(TP\_C\\) | False Positive (Type I error): \\(FP\_C\\) | \\(P\_C\\) | | \\(\\widehat{y}\_i \< C\\), predicted: `failure` | False Negative (Type II error): \\(FN\_C\\) | True Negative: \\(TN\_C\\) | \\(N\_C\\) | | Total | \\(S\\) | \\(F\\) | \\(n\\) | In the machine\-learning world, Table [15\.1](modelPerformance.html#tab:confMat) is often referred to as the “confusion table” or “confusion matrix”. In statistics, it is often called the “decision table”. The counts \\(TP\_C\\) and \\(TN\_C\\) on the diagonal of the table correspond to the cases when the predicted and observed value of the dependent variable \\(Y\\) coincide. \\(FP\_C\\) is the number of cases in which failure is predicted as a success. These are false\-positive, or Type I error, cases. On the other hand, \\(FN\_C\\) is the count of false\-negative, or Type II error, cases, in which success is predicted as failure. Marginally, there are \\(P\_C\\) predicted successes and \\(N\_C\\) predicted failures, with \\(P\_C\+N\_C\=n\\). In the testing dataset, there are \\(S\\) observed successes and \\(F\\) observed failures, with \\(S\+F\=n\\). The effectiveness of such a test can be described by various measures. Let us present some of the most popular examples. The simplest measure of model performance is *accuracy*, defined as \\\[ ACC\_C \= \\frac{TP\_C\+TN\_C}{n}. \\] It is the fraction of correct predictions in the entire testing dataset. Accuracy is of interest if true positives and true negatives are more important than their false counterparts. However, accuracy may not be very informative when one of the binary categories is much more prevalent (so called unbalanced labels). For example, if the testing data contain 90% of successes, a model that would always predict a success would reach an accuracy of 0\.9, although one could argue that this is not a very useful model. There may be situations when false positives and/or false negatives may be of more concern. In that case, one might want to keep their number low. Hence, other measures, focused on the false results, might be of interest. In the machine\-learning world, two other measures are often considered: *precision* and *recall*. Precision is defined as \\\[ Precision\_C \= \\frac{TP\_C}{TP\_C\+FP\_C} \= \\frac{TP\_C}{P\_C}. \\] Precision is also referred to as the *positive predictive value*. It is the fraction of correct predictions among the predicted successes. Precision is high if the number of false positives is low. Thus, it is a useful measure when the penalty for committing the Type I error (false positive) is high. For instance, consider the use of a genetic test in cancer diagnostics, with a positive result of the test taken as an indication of an increased risk of developing a cancer. A false\-positive result of a genetic test might mean that a person would have to unnecessarily cope with emotions and, possibly, medical procedures related to the fact of being evaluated as having a high risk of developing a cancer. We might want to avoid this situation more than the false\-negative case. The latter would mean that the genetic test gives a negative result for a person that, actually, might be at an increased risk of developing a cancer. However, an increased risk does not mean that the person will develop cancer. And even so, we could hope that we could detect it in due time. 
Recall is defined as \\\[ Recall\_C \= \\frac{TP\_C}{TP\_C\+FN\_C} \= \\frac{TP\_C}{S}. \\] Recall is also referred to as *sensitivity* or the *true\-positive rate*. It is the fraction of correct predictions among the true successes. Recall is high if the number of false negatives is low. Thus, it is a useful measure when the penalty for committing the Type II error (false negative) is high. For instance, consider the use of an algorithm that predicts whether a bank transaction is fraudulent. A false\-negative result means that the algorithm accepts a fraudulent transaction as a legitimate one. Such a decision may have immediate and unpleasant consequences for the bank, because it may imply a non\-recoverable loss of money. On the other hand, a false\-positive result means that a legitimate transaction is considered as a fraudulent one and is blocked. However, upon further checking, the legitimate nature of the transaction can be confirmed with, perhaps, the annoyed client as the only consequence for the bank. The harmonic mean of these two measures defines the *F1 score*: \\\[ F1\\ score\_C \= \\frac{2}{\\frac{1}{Precision\_C} \+ \\frac{1}{Recall\_C}} \= 2\\cdot\\frac{Precision\_C \\cdot Recall\_C}{Precision\_C \+ Recall\_C}. \\] F1 score tends to give a low value if either precision or recall is low, and a high value if both precision and recall are high. For instance, if precision is 0, F1 score will also be 0 irrespectively of the value of recall. Thus, it is a useful measure if we have got to seek a balance between precision and recall. In statistics, and especially in applications in medicine, the popular measures are *sensitivity* and *specificity*. Sensitivity is simply another name for recall. Specificity is defined as \\\[ Specificity\_C \= \\frac{TN\_C}{TN\_C \+ FP\_C} \= \\frac{TN\_C}{F}. \\] Specificity is also referred to as the *true\-negative rate*. It is the fraction of correct predictions among the true failures. Specificity is high if the number of false positives is low. Thus, as precision, it is a useful measure when the penalty for committing the Type I error (false positive) is high. The reason why sensitivity and specificity may be more often used outside the machine\-learning world is related to the fact that their values do not depend on the proportion \\(S/n\\) (sometimes termed *prevalence*) of true successes. This means that, once estimated in a sample obtained from a population, they may be applied to other populations, in which the prevalence may be different. This is not true for precision, because one can write \\\[ Precision\_C \= \\frac{Sensitivity\_C \\cdot \\frac{S}{n}}{Sensitivity\_C \\cdot \\frac{S}{n}\+Specificity\_C \\cdot \\left(1\-\\frac{S}{n}\\right)}. \\] All the measures depend on the choice of cut\-off \\(C\\). To assess the form and the strength of dependence, a common approach is to construct the Receiver Operating Characteristic (ROC) curve. The curve plots \\(Sensitivity\_C\\) in function of \\(1\-Specificity\_C\\) for all possible, ordered values of \\(C\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model for the Titanic dataset. Note that the curve indicates an inverse relationship between sensitivity and specificity: by increasing one measure, the other is decreased. The ROC curve is very informative. For a model that predicts successes and failures at random, the corresponding curve will be equal to the diagonal line. 
On the other hand, for a model that yields perfect predictions, the ROC curve reduces to two intervals that connect points (0,0\), (0,1\), and (1,1\). Often, there is a need to summarize the ROC curve with one number, which can be used to compare models. A popular measure that is used toward this aim is the area under the curve (AUC). For a model that predicts successes and failures at random, AUC is the area under the diagonal line, i.e., it is equal to 0\.5\. For a model that yields perfect predictions, AUC is equal to 1\. It appears that, in this case, AUC is equivalent to the c\-index (see Section [15\.3\.1\.2](modelPerformance.html#modelPerformanceMethodContGOP)). Another ROC\-curve\-based measure that is often used is the *Gini coefficient* \\(G\\). It is closely related to AUC; in fact, it can be calculated as \\(G \= 2 \\times AUC \- 1\\). For a model that predicts successes and failures at random, \\(G\=0\\); for a perfect\-prediction model, \\(G \= 1\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) illustrates the calculation of the Gini coefficient for the random forest model for the Titanic dataset (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). A variant of ROC curve based on precision and recall is a called a precision\-recall curve. Figure [15\.3](modelPerformance.html#fig:examplePRC) the curve for the random forest model for the Titanic dataset. The value of the Gini coefficient or, equivalently, of \\(AUC\-0\.5\\) allows a comparison of the model\-based predictions with random guessing. A measure that explicitly compares a prediction model with a baseline (or null) model is the *lift*. Commonly, random guessing is considered as the baseline model. In that case, \\\[ Lift\_C \= \\frac{\\frac{TP\_C}{P\_C}}{\\frac{S}{n}} \= n\\frac{Precision\_C}{S}. \\] Note that \\(S/n\\) can be seen as the estimated probability of a correct prediction of success for random guessing. On the other hand, \\(TP\_C/P\_C\\) is the estimated probability of a correct prediction of a success given that the model predicts a success. Hence, informally speaking, the lift indicates how many more (or less) times does the model do better in predicting success as compared to random guessing. As other measures, the lift depends on the choice of cut\-off \\(C\\). The plot of the lift as a function of \\(P\_C\\) is called the *lift chart*. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the lift chart for the random forest model for the Titanic dataset. Calibration of predictions can be assessed by a scatter plot of the predicted values of \\(Y\\) in function of the true ones. A complicating issue is a fact that the true values are only equal to 0 or 1\. Therefore, smoothing techniques or grouping of observations is needed to obtain a meaningful plot (Steyerberg et al. [2010](#ref-Steyerberg2010); Steyerberg [2019](#ref-Steyerberg2019)). There are many more measures aimed at measuring the performance of a predictive model for a binary dependent variable. An overview can be found in, e.g., Berrar ([2019](#ref-Berrar2019)). ### 15\.3\.3 Categorical dependent variable To introduce model\-performance measures for a categorical dependent variable, we assume that \\(\\underline{y}\_i\\) is now a vector of \\(K\\) elements. Each element \\(y\_{i}^k\\) (\\(k\=1,\\ldots,K\\)) is a binary variable indicating whether the \\(k\\)\-th category was observed for the \\(i\\)\-th observation. We assume that, for each observation, only one category can be observed. 
### 15\.3\.3 Categorical dependent variable To introduce model\-performance measures for a categorical dependent variable, we assume that \\(\\underline{y}\_i\\) is now a vector of \\(K\\) elements. Each element \\(y\_{i}^k\\) (\\(k\=1,\\ldots,K\\)) is a binary variable indicating whether the \\(k\\)\-th category was observed for the \\(i\\)\-th observation. We assume that, for each observation, only one category can be observed. Thus, all elements of \\(\\underline{y}\_i\\) are equal to 0 except one that is equal to 1\. Furthermore, we assume that a model’s prediction takes the form of a vector, \\(\\underline{\\widehat{y}}\_i\\) say, of the predicted probabilities for each of the \\(K\\) categories, with \\({\\widehat{y}}\_i^k\\) denoting the probability for the \\(k\\)\-th category. The predicted category is the one with the highest predicted probability. #### 15\.3\.3\.1 Goodness\-of\-fit The log\-likelihood function [(15\.7\)](modelPerformance.html#eq:bernoulli) can be adapted to the categorical dependent variable case as follows: \\\[\\begin{equation} l(f, \\underline{X} ,\\underline{y}) \= \\sum\_{i\=1}^{n}\\sum\_{k\=1}^{K} y\_{i}^k \\ln({\\widehat{y}}\_i^k). \\tag{15\.9} \\end{equation}\\\] It is essentially the log\-likelihood function based on a multinomial distribution. Based on the likelihood, an \\(R^2\\)\-like measure can be defined, using an approach similar to the one used in [(15\.8\)](modelPerformance.html#eq:R2bin) for construction of \\(R\_{bin}^2\\) (Harrell [2015](#ref-Harrell2015)). #### 15\.3\.3\.2 Goodness\-of\-prediction It is possible to extend measures like accuracy, precision, etc., introduced in Section [15\.3\.2](modelPerformance.html#modelPerformanceMethodBin) for a binary dependent variable, to the case of a categorical one. Toward this end, first, a confusion table is created for each category \\(k\\), treating the category as “success” and all other categories as “failure”. Let us denote the counts in the table by \\(TP\_k\\), \\(FP\_k\\), \\(TN\_k\\), and \\(FN\_k\\). Based on the counts, we can compute the average accuracy across all classes as follows: \\\[\\begin{equation} \\overline{ACC\_C} \= \\frac{1}{K}\\sum\_{k\=1}^K\\frac{TP\_{C,k}\+TN\_{C,k}}{n}. \\tag{15\.10} \\end{equation}\\\] Similarly, one could compute the average precision, average sensitivity, etc. In the machine\-learning world, this approach is often termed “macro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)). The averages computed in that way treat all classes equally. An alternative approach is to sum the appropriate counts from the confusion tables for all classes, and then form a measure based on the cumulative counts computed in this way. For instance, for precision, this would lead to \\\[\\begin{equation} \\overline{Precision\_C}\_{\\mu} \= \\frac{\\sum\_{k\=1}^K TP\_{C,k}}{\\sum\_{k\=1}^K (TP\_{C,k}\+FP\_{C,k})}. \\tag{15\.11} \\end{equation}\\\] In the machine\-learning world, this approach is often termed “micro\-averaging” (Sokolova and Lapalme [2009](#ref-Sokolova2009); Tsoumakas, Katakis, and Vlahavas [2010](#ref-Tsoumakas2010)), hence subscript \\(\\mu\\) for “micro” in [(15\.11\)](modelPerformance.html#eq:precmicro). Note that, for accuracy, this computation still leads to [(15\.10\)](modelPerformance.html#eq:accmacro). The measures computed in that way favour classes with larger numbers of observations.
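The difference between the two averaging schemes can be seen in a short sketch. The per-class counts below are hypothetical and chosen only to illustrate that macro- and micro-averaged precision may differ noticeably when class sizes are unbalanced.

```
# Hypothetical per-class true-positive and false-positive counts for K = 3 classes
TP <- c(50, 10, 5)
FP <- c(10,  5, 2)

# Macro-averaging: average the per-class precisions (all classes weighted equally)
mean(TP / (TP + FP))

# Micro-averaging: compute precision from the cumulated counts
# (classes with more observations carry more weight)
sum(TP) / sum(TP + FP)
```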
### 15\.3\.4 Count dependent variable In the case of counts, one could consider using MSE or any of the measures for a continuous dependent variable mentioned in Section [15\.3\.1\.1](modelPerformance.html#modelPerformanceMethodContGOF). However, a particular feature of count dependent variables is that their variance depends on the mean value. Consequently, weighting all contributions to MSE equally, as in [(15\.1\)](modelPerformance.html#eq:MSE), is not appropriate, because the same residual value \\(r\_i\\) indicates a larger discrepancy for a smaller count \\(y\_i\\) than for a larger one. Therefore, a popular measure of performance of a predictive model for counts is Pearson’s statistic: \\\[\\begin{equation} \\chi^2(f,\\underline{X},\\underline{y}) \= \\sum\_{i\=1}^{n} \\frac{(\\widehat{y}\_i \- y\_i)^2}{\\widehat{y}\_i} \= \\sum\_{i\=1}^{n} \\frac{r\_i^2}{\\widehat{y}\_i}. \\tag{15\.12} \\end{equation}\\\] From [(15\.12\)](modelPerformance.html#eq:Pearson) it is clear that, if the same residual is obtained for two different observed counts, it is assigned a larger weight for the count for which the predicted value is smaller. Of course, there are more measures of model performance as well as types of model responses (e.g., censored data). A complete list, even if it could be created, would be beyond the scope of this book.
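A minimal sketch of the calculation of Pearson’s statistic is given below; the observed counts and predicted values are made up for illustration. It also shows that the same residual contributes more to the statistic when the predicted count is small.

```
# Hypothetical observed counts and model-predicted values (illustration only)
y     <- c(2,   0,   5,   12,   1)
y_hat <- c(1.5, 0.8, 6.2, 10.4, 1.1)

# Pearson's statistic: squared residuals weighted by the inverse of the prediction
sum((y - y_hat)^2 / y_hat)

# The same residual of 1 contributes differently for small and large predictions
1^2 / 0.5   # predicted count equal to 0.5
1^2 / 10    # predicted count equal to 10
```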
15\.4 Example ------------- ### 15\.4\.1 Apartment prices Let us consider the linear regression model `apartments_lm` (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices data (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that, for these data, the dependent variable, the price per square meter, is continuous. Hence, we can use the performance measures presented in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). In particular, we consider MSE and RMSE. Figure [15\.1](modelPerformance.html#fig:prepareMPBoxplotEx) presents a box plot of the absolute values of residuals for the linear regression and random forest models, computed for the testing data. The computed values of RMSE are also indicated in the plots. The values are very similar for both models; we have already noted that fact in Section [4\.5\.4](dataSetsIntro.html#predictionsApartments). Figure 15\.1: Box plot for the absolute values of residuals for the linear regression and random forest models for the apartment\-prices data. The red dot indicates the RMSE. In particular, MSE, RMSE, \\(R^2\\), and MAD values for the linear regression model are equal to 80137, 283\.09, 0\.901, and 212\.7, respectively. For the random forest model, they are equal to 80061, 282\.95, 0\.901, and 169\.1, respectively. The values of the measures suggest that the predictive performance of the random forest model is slightly better. But is this difference relevant? It should be remembered that development of any random forest model includes a random component. This means that, when a random forest model is fitted to the same dataset several times, but using a different random\-number\-generation seed, the value of MSE or MAD for the fitted models will fluctuate. Thus, we should consider the values obtained for the linear regression and random forest models for the apartment\-prices data as indicating a similar performance of the two models rather than a superiority of one of them. ### 15\.4\.2 Titanic data Let us consider the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic data (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Recall that, for these data, the dependent variable is binary, with success defined as survival of the passenger. First, we take a look at the random forest model. We will illustrate the “confusion table” by using threshold \\(C\\) equal to 0\.5, i.e., we will classify passengers as “survivors” and “non\-survivors” depending on whether their model\-predicted probability of survival was larger than 50% or not, respectively. Table [15\.2](modelPerformance.html#tab:confMatRF) presents the resulting table. Table 15\.2: Confusion table for the random forest model for the Titanic data. Predicted survival status is equal to *survived* if the model\-predicted probability of survival \\(\\hat y\_i\\) is larger than 50%.

| | Actual: survived | Actual: died | Total |
| --- | --- | --- | --- |
| Predicted: survived | 454 | 60 | 514 |
| Predicted: died | 257 | 1436 | 1693 |
| Total | 711 | 1496 | 2207 |

Based on the table, we obtain the value of accuracy equal to (454 \+ 1436\) / 2207 \= 0\.8564\.
The values of precision and recall (sensitivity) are equal to \\(454 / 514 \= 0\.8833\\) and \\(454 / 711 \= 0\.6385\\), respectively, with the resulting F1 score equal to 0\.7412\. Specificity is equal to \\(1436 / 1496 \= 0\.9599\\). Figure [15\.2](modelPerformance.html#fig:exampleROC) presents the ROC curve for the random forest model. AUC is equal to 0\.8595, and the Gini coefficient is equal to 0\.719\. Figure 15\.2: Receiver Operating Characteristic curve for the random forest model for the Titanic dataset. The Gini coefficient can be calculated as 2\\(\\times\\) the area between the ROC curve and the diagonal (this area is highlighted). The AUC coefficient is defined as the area under the ROC curve. Figure [15\.3](modelPerformance.html#fig:examplePRC) presents the precision\-recall curve (left\-hand\-side panel) and lift chart (right\-hand\-side panel) for the random forest model. Figure 15\.3: Precision\-recall curve (left panel) and lift chart (right panel) for the random forest model for the Titanic dataset. Table [15\.3](modelPerformance.html#tab:confMatLR) presents the confusion table for the logistic regression model for threshold \\(C\\) equal to 0\.5\. The resulting values of accuracy, precision, recall (sensitivity), F1 score, and specificity are equal to 0\.8043, 0\.7522, 0\.5851, 0\.6582, and 0\.9084\. The values are smaller than for the random forest model, suggesting a better performance of the latter. Table 15\.3: Confusion table for the logistic regression model for the Titanic data. Predicted survival status is equal to *survived* if the model\-predicted probability of survival is larger than 50%.

| | Actual: survived | Actual: died | Total |
| --- | --- | --- | --- |
| Predicted: survived | 416 | 137 | 553 |
| Predicted: died | 295 | 1359 | 1654 |
| Total | 711 | 1496 | 2207 |

The left\-hand\-side panel in Figure [15\.4](modelPerformance.html#fig:titanicROC) presents ROC curves for both the logistic regression and the random forest model. The curve for the random forest model lies above the one for the logistic regression model for the majority of the cut\-offs \\(C\\), except for the very high values of the cut\-off \\(C\\). AUC for the logistic regression model is equal to 0\.8174 and is smaller than for the random forest model. The right\-hand\-side panel in Figure [15\.4](modelPerformance.html#fig:titanicROC) presents lift charts for both models. Also in this case, the curve for the random forest model suggests a better performance than that for the logistic regression model, except for the very high values of cut\-off \\(C\\). Figure 15\.4: Receiver Operating Characteristic curves (left panel) and lift charts (right panel) for the random forest and logistic regression models for the Titanic dataset.
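The values reported above for the random forest model can be reproduced directly from the counts in Table [15\.2](modelPerformance.html#tab:confMatRF); the short sketch below (plain R, no packages involved) recomputes them, together with the Gini coefficient implied by the reported AUC.

```
# Counts taken from Table 15.2 (random forest model, cut-off C = 0.5)
TP <- 454; FP <- 60; FN <- 257; TN <- 1436
n  <- TP + FP + FN + TN

accuracy    <- (TP + TN) / n         # 0.8564
precision   <- TP / (TP + FP)        # 0.8833
recall      <- TP / (TP + FN)        # 0.6385
f1          <- 2 * precision * recall / (precision + recall)  # 0.7412
specificity <- TN / (TN + FP)        # 0.9599

# Gini coefficient implied by the reported AUC
gini <- 2 * 0.8595 - 1               # 0.719

round(c(accuracy = accuracy, precision = precision, recall = recall,
        f1 = f1, specificity = specificity, gini = gini), 4)
```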
15\.5 Pros and cons ------------------- All model\-performance measures presented in this chapter are subject to some limitations. For that reason, many measures are available, as the limitations of a particular measure were addressed by developing an alternative one. For instance, RMSE is frequently used and reported for linear regression models. However, as it is sensitive to outliers, MAD has been proposed as an alternative. In the case of predictive models for a binary dependent variable, measures like accuracy, F1 score, sensitivity, and specificity are often considered, depending on the consequences of correct/incorrect predictions in a particular application. However, the value of those measures depends on the cut\-off value used for creating predictions. For this reason, the ROC curve and AUC have been developed and have become very popular. They are not easily extended to the case of a categorical dependent variable, though. Given the advantages and disadvantages of various measures and the fact that each may reflect a different aspect of the predictive performance of a model, it is customary to report and compare several of them when evaluating a model’s performance. 15\.6 Code snippets for R ------------------------- In this section, we present model\-performance measures as implemented in the `DALEX` package for R. The package covers the most often used measures and methods presented in this chapter.
More advanced measures of performance are available in the `auditor` package for R (Gosiewska and Biecek [2018](#ref-R-auditor)). Note that there are also other R packages that offer similar functionality. These include, for instance, packages `mlr` (Bischl et al. [2016](#ref-mlr)), `caret` (Kuhn [2008](#ref-caret)), `tidymodels` (Max and Wickham [2018](#ref-tidymodels)), and `ROCR` (Sing et al. [2005](#ref-ROCR)). For illustration purposes, we use the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic data (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Consequently, the `DALEX` functions are applied in the context of a binary classification problem. However, the same functions can be used for, for instance, linear regression models. To illustrate the use of the functions, we first retrieve the `titanic_lmr` and `titanic_rf` model\-objects via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). We also retrieve the version of the `titanic` data with imputed missing values. ``` titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_lmr <- archivist::aread("pbiecek/models/58b24") titanic_rf <- archivist::aread("pbiecek/models/4e0fc") ``` Then we construct the explainers for the models by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). We also load the `rms` and `randomForest` packages, as the models were fitted by using functions from those packages and it is important to have the corresponding `predict()` functions available. ``` library("rms") library("DALEX") explain_lmr <- explain(model = titanic_lmr, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", type = "classification", label = "Logistic Regression") library("randomForest") explain_rf <- explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived == "yes", label = "Random Forest") ``` Function `model_performance()` calculates, by default, a set of selected model\-performance measures. These include MSE, RMSE, \\(R^2\\), and MAD for linear regression models, and recall, precision, F1, accuracy, and AUC for models for a binary dependent variable. The function includes the `cutoff` argument that allows specifying the cut\-off value for the measures that require it, i.e., recall, precision, F1 score, and accuracy. By default, the cut\-off value is set at 0\.5\. Note that, by default, all measures are computed for the data that are extracted from the explainer object; these can be training or testing data. 
``` (eva_rf <- DALEX::model_performance(explain_rf)) ``` ``` ## Measures for: classification ## recall : 0.6385373 ## precision : 0.8832685 ## f1 : 0.7412245 ## accuracy : 0.8563661 ## auc : 0.8595467 ## ## Residuals: ## 0% 10% 20% 30% 40% 50% 60% 70% ## -0.8920 -0.1140 -0.0240 -0.0080 -0.0040 0.0000 0.0000 0.0100 ## 80% 90% 100% ## 0.1400 0.5892 1.0000 ``` ``` (eva_lr <- DALEX::model_performance(explain_lmr)) ``` ``` ## Measures for: classification ## recall : 0.5850914 ## precision : 0.7522604 ## f1 : 0.6582278 ## accuracy : 0.8042592 ## auc : 0.81741 ## ## Residuals: ## 0% 10% 20% 30% 40% ## -0.98457244 -0.31904861 -0.23408037 -0.20311483 -0.15200813 ## 50% 60% 70% 80% 90% ## -0.10318060 -0.06933478 0.05858024 0.29306442 0.73666519 ## 100% ## 0.97151255 ``` Application of the `DALEX::model_performance()` function returns an object of class “model\_performance”, which includes estimated values of several model\-performance measures, as well as a data frame containing the observed and predicted values of the dependent variable together with their difference, i.e., residuals. An ROC curve or lift chart can be constructed by applying the generic `plot()` function to the object. The type of the required plot is indicated by using argument `geom`. In particular, the argument allows values `geom = "lift"` for lift charts, `geom = "roc"` for ROC curves, `geom = "histogram"` for histograms of residuals, and `geom = "boxplot"` for box\-and\-whisker plots of residuals. The `plot()` function returns a `ggplot2` object. It is possible to apply the function to more than one object. In that case, the plots for the models corresponding to each object are combined in one graph. In the code below, we create two `ggplot2` objects: one for a graph containing precision\-recall curves for both models, and one for a histogram of residuals. Subsequently, we use the `patchwork` package to combine the graphs in one display. ``` p1 <- plot(eva_rf, eva_lr, geom = "histogram") p2 <- plot(eva_rf, eva_lr, geom = "prc") ``` ``` library("patchwork") p1 + p2 ``` Figure 15\.5: Precision\-recall curves and histograms for residuals obtained by the generic `plot()` function in R for the logistic regression model `titanic_lmr` and the random forest model `titanic_rf` for the Titanic dataset. The resulting graph is shown in Figure [15\.5](modelPerformance.html#fig:titanicMEexamples). Combined with the plot of ROC curves and the lift charts presented in both panels of Figure [15\.4](modelPerformance.html#fig:titanicROC), it provides additional insight into the comparison of performance of the two models. 15\.7 Code snippets for Python ------------------------------ In this section, we use the `dalex` library for Python. A collection of numerous metrics and performance charts is also available in the popular `sklearn.metrics` library. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose. ``` import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` To calculate selected measures of the overall performance, we use the `model_performance()` method. 
In the syntax below, we apply the `model_type` argument to indicate that we deal with a classification problem, and the `cutoff` argument to specify the cutoff value equal to 0\.5\. It is worth noting that the results differ from those obtained in R: the underlying models differ slightly in implementation and are also trained with different random seeds.

```
mp_rf = titanic_rf_exp.model_performance(model_type = "classification", cutoff = 0.5)
mp_rf.result
```

The resulting object can be visualised in many different ways. The code below constructs an ROC curve with the AUC measure. Figure [15\.6](modelPerformance.html#fig:examplePythonMP4) presents the created plot.

```
import plotly.express as px
from sklearn.metrics import roc_curve, auc

y_score = titanic_rf_exp.predict(X)
fpr, tpr, thresholds = roc_curve(y, y_score)

fig = px.area(x=fpr, y=tpr,
              title=f'ROC Curve (AUC={auc(fpr, tpr):.4f})',
              labels=dict(x='False Positive Rate', y='True Positive Rate'),
              width=700, height=500)
fig.add_shape(type='line', line=dict(dash='dash'), x0=0, x1=1, y0=0, y1=1)
fig.update_yaxes(scaleanchor="x", scaleratio=1)
fig.update_xaxes(constrain='domain')
fig.show()
```

Figure 15\.6: The ROC curve for the random forest model for the Titanic dataset. The code below constructs a plot of FP and TP rates as a function of different thresholds. Figure [15\.7](modelPerformance.html#fig:examplePythonMP3) presents the created plot.

```
# pandas is needed to build the data frame of rates
import pandas as pd

df = pd.DataFrame({'False Positive Rate': fpr,
                   'True Positive Rate': tpr}, index=thresholds)
df.index.name = "Thresholds"
df.columns.name = "Rate"

fig_thresh = px.line(df, title='TPR and FPR at every threshold',
                     width=700, height=500)
fig_thresh.update_yaxes(scaleanchor="x", scaleratio=1)
fig_thresh.update_xaxes(range=[0, 1], constrain='domain')
fig_thresh.show()
```

Figure 15\.7: False\-positive and true\-positive rates as a function of threshold for the random forest model for the Titanic dataset.
16 Variable\-importance Measures ================================ 16\.1 Introduction ------------------ In this chapter, we present a method that is useful for the evaluation of the importance of an explanatory variable. The method may be applied for several purposes. * Model simplification: variables that do not influence a model’s predictions may be excluded from the model. * Model exploration: comparison of variables’ importance in different models may help in discovering interrelations between the variables. Also, ordering the variables according to their importance is helpful in deciding in which order further model exploration should be performed. * Domain\-knowledge\-based model validation: identification of the most important variables may be helpful in assessing the validity of the model based on domain knowledge. * Knowledge generation: identification of the most important variables may lead to the discovery of new factors involved in a particular mechanism. The methods for assessment of variable importance can be divided, in general, into two groups: model\-specific and model\-agnostic. For linear models and many other types of models, there are methods of assessing an explanatory variable’s importance that exploit particular elements of the structure of the model. These are model\-specific methods. For instance, for linear models, one can use the value of the normalized regression coefficient or its corresponding p\-value as the variable\-importance measure. For tree\-based ensembles, such a measure may be based on the use of a particular variable in particular trees. A great example in this respect is the variable\-importance measure based on out\-of\-bag data for a random forest model (Leo Breiman [2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)), but there are also other approaches like methods implemented in the `XgboostExplainer` package (Foster [2017](#ref-xgboostExplainer)) for gradient boosting and `randomForestExplainer` (Paluszynska and Biecek [2017](#ref-randomForestExplainer)) for random forest. In this book, we focus on a model\-agnostic method that does not assume anything about the model structure. Therefore, it can be applied to any predictive model or ensemble of models. Moreover, and perhaps even more importantly, it allows comparing an explanatory\-variable’s importance between models with different structures. 16\.2 Intuition --------------- We focus on the method described in more detail by Fisher, Rudin, and Dominici ([2019](#ref-variableImportancePermutations)). The main idea is to measure how much a model’s performance changes if the effect of a selected explanatory variable, or of a group of variables, is removed. To remove the effect, we use perturbations, like resampling from an empirical distribution or permutation of the values of the variable. The idea is borrowed from the variable\-importance measure proposed by Leo Breiman ([2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)) for random forest. If a variable is important, then we expect that, after permuting the values of the variable, the model’s performance (as captured by one of the measures discussed in Chapter [15](modelPerformance.html#modelPerformance)) will worsen. The larger the change in the performance, the more important the variable. Despite the simplicity of the idea, the permutation\-based approach to measuring an explanatory\-variable’s importance is a very powerful model\-agnostic tool for model exploration.
Variable\-importance measures obtained in this way may be compared between different models. This property is discussed in detail in Section [16\.5](featureImportance.html#featureImportanceProsCons). 16\.3 Method ------------ Consider a set of \\(n\\) observations for a set of \\(p\\) explanatory variables and dependent variable \\(Y\\). Let \\(\\underline{X}\\) denote the matrix containing, in rows, the (transposed column\-vectors of) observed values of the explanatory variables for all observations. Denote by \\(\\underline{y}\\) the column vector of the observed values of \\(Y\\). Let \\(\\underline{\\hat{y}}\=(f(\\underline{x}\_1\),\\ldots,f(\\underline{x}\_n))'\\) denote the corresponding vector of predictions for \\(\\underline{y}\\) for model \\(f()\\). Let \\(\\mathcal L(\\underline{\\hat{y}}, \\underline X, \\underline{y})\\) be a loss function that quantifies goodness\-of\-fit of model \\(f()\\). For instance, \\(\\mathcal L()\\) may be the value of the log\-likelihood (see Chapter [15](modelPerformance.html#modelPerformance)) or any other model\-performance measure discussed in the previous chapter. Consider the following algorithm: 1. Compute \\(L^0 \= \\mathcal L(\\underline{\\hat{y}}, \\underline X, \\underline{y})\\), i.e., the value of the loss function for the original data. Then, for each explanatory variable \\(X^j\\) included in the model, do steps 2\-5\. 2. Create matrix \\(\\underline{X}^{\*j}\\) by permuting the \\(j\\)\-th column of \\(\\underline{X}\\), i.e., by permuting the vector of observed values of \\(X^j\\). 3. Compute model predictions \\(\\underline{\\hat{y}}^{\*j}\\) based on the modified data \\(\\underline{X}^{\*j}\\). 4. Compute the value of the loss function for the modified data: \\\[ L^{\*j} \= \\mathcal L(\\underline{\\hat{y}}^{\*j}, \\underline{X}^{\*j}, \\underline{y}). \\\] 5. Quantify the importance of \\(X^j\\) by calculating \\(vip\_{Diff}^j \= L^{\*j} \- L^0\\) or \\(vip\_{Ratio}^j \= L^{\*j} / L^0\\). Note that the use of resampling or permuting data in Step 2 involves randomness. Thus, the results of the procedure may depend on the obtained configuration of resampled/permuted values. Hence, it is advisable to repeat the procedure several (many) times. In this way, the uncertainty associated with the calculated variable\-importance values can be assessed. The calculations in Step 5 “normalize” the value of the variable\-importance measure with respect to \\(L^0\\). However, given that \\(L^0\\) is a constant, the normalization has no effect on the ranking of explanatory variables according to \\(vip\_{Diff}^j\\) or \\(vip\_{Ratio}^j\\). Thus, in practice, often the values of \\(L^{\*j}\\) are simply used to quantify a variable’s importance.
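The algorithm can be written down in a few lines of R. The sketch below is only an illustration of the procedure, not the implementation used in the `DALEX` package presented later in this chapter; it assumes a fitted model whose `predict()` function returns numeric scores, a data frame `X` of explanatory variables, a vector `y` of observed values of the dependent variable, and a loss function that takes observed and predicted values.

```
# A minimal sketch of the permutation-based algorithm (illustration only)
permutation_importance <- function(model, X, y, loss, B = 10) {
  L0 <- loss(y, predict(model, X))               # step 1: loss for the original data
  vip <- sapply(colnames(X), function(j) {
    mean(replicate(B, {
      X_star <- X
      X_star[[j]] <- sample(X_star[[j]])         # step 2: permute the j-th column
      L_star <- loss(y, predict(model, X_star))  # steps 3-4: predictions and loss
      L_star - L0                                # step 5: vip_Diff for variable j
    }))
  })
  sort(vip, decreasing = TRUE)
}
```

Averaging over `B` permutations addresses the randomness mentioned above, and permuting several columns at the same time would yield the joint importance of a group of variables.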
16\.4 Example: Titanic data --------------------------- In this section, we illustrate the use of the permutation\-based variable\-importance evaluation by applying it to the random forest model for the Titanic data (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). Recall that the goal is to predict survival probability of passengers based on their gender, age, class in which they travelled, ticket fare, the number of persons they travelled with, and the harbour they embarked the ship on. We use the area under the ROC curve (AUC, see Section [15\.3\.2\.2](modelPerformance.html#modelPerformanceMethodBinGOP)) as the model\-performance measure. Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp) shows, for each explanatory variable included in the model, the values of \\(1\-AUC^{\*j}\\) obtained by the algorithm described in the previous section. Additionally, the plot indicates the value of \\(L^0\\) by the vertical dashed line at the left\-hand side of the plot. The lengths of the bars correspond to \\(vip\_{Diff}^j\\) and provide the variable\-importance measures. Figure 16\.1: Single\-permutation\-based variable\-importance measures for the explanatory variables included in the random forest model for the Titanic data using 1\-AUC as the loss function. The plot in Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp) suggests that the most important variable in the model is *gender*. This agrees with the conclusions drawn in the exploratory analysis presented in Section [4\.1\.1](dataSetsIntro.html#exploration-titanic). The next three most important variables are *class* (passengers travelling in the first class had a higher chance of survival), *age* (children had a higher chance of survival), and *fare* (owners of more expensive tickets had a higher chance of survival). To take into account the uncertainty related to the use of permutations, we can consider computing the mean values of \\(L^{\*j}\\) over a set of, say, 10 permutations. The plot in Figure [16\.2](featureImportance.html#fig:TitanicRFFeatImp10) presents the mean values. The only remarkable difference, as compared to Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp), is the change in the ordering of the *sibsp* and *parch* variables. Figure 16\.2: Means (over 10 permutations) of permutation\-based variable\-importance measures for the explanatory variables included in the random forest model for the Titanic data using 1\-AUC as the loss function. Plots similar to those presented in Figures [16\.1](featureImportance.html#fig:TitanicRFFeatImp) and [16\.2](featureImportance.html#fig:TitanicRFFeatImp10) are useful for comparisons of a variable’s importance in different models. Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) presents single\-permutation results for the random forest, logistic regression (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)), and gradient boosting (see Section [4\.2\.3](dataSetsIntro.html#model-titanic-gbm)) models. The best result, in terms of the smallest value of \\(L^0\\), is obtained for the generalized boosted regression model (as indicated by the location of the dashed lines in the plots). Note that the indicated \\(L^0\\) value for the model is different from the one indicated in Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp). This is due to the difference in the set of (random) permutations used to compute the two values. Figure 16\.3: Single\-permutation\-based variable\-importance measures for the random forest, gradient boosting, and logistic regression models for the Titanic data with 1\-AUC as the loss function. Note the different starting locations for the bars, due to differences in the AUC value obtained for the original data for different models. The plots in Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) indicate that *gender* is the most important explanatory variable in all three models, followed by *class* and *age*. Variable *fare*, which is highly correlated with *class*, is important in the random forest and gradient boosting models, but not in the logistic regression model.
On the other hand, variable *parch* is, essentially, not important in either the gradient boosting or the logistic regression model, but it has some importance in the random forest model. *Country* is not important in any of the models. Overall, Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) indicates that, in the random forest model, all variables (except for *country*) have got some importance, while in the other two models the effect is mainly limited to *gender*, *class*, and *age* (and *fare* for the gradient boosting model). 16\.5 Pros and cons ------------------- Permutation\-based variable importance offers several advantages. It is a model\-agnostic approach to the assessment of the influence of an explanatory variable on a model’s performance. The plots of variable\-importance measures are easy to understand, as they are compact and present the most important variables in a single graph. The measures can be compared between models and may lead to interesting insights. For example, if variables are correlated, then models like random forest are expected to spread importance across many variables, while in regularized\-regression models the effect of one variable may dominate the effect of other correlated variables. The same approach can be used to measure the importance of a single explanatory variable or a group of variables. The latter is useful for “aspects,” i.e., groups of variables that are complementary to each other or are related to a similar concept. For example, in the Titanic data, the *fare* and *class* variables are related to the financial status of a passenger. Instead of assessing the importance of each of these variables separately, we may be interested in their joint importance. Toward this aim, we may compute the permutation\-based measure by permuting the values of both variables at the same time. The main disadvantage of the permutation\-based variable\-importance measure is its dependence on the random nature of the permutations. As a result, for different permutations, we will, in general, get different results. Also, the value of the measure depends on the choice of the loss function \\(\\mathcal L()\\). Thus, there is no single, “absolute” measure. 16\.6 Code snippets for R ------------------------- In this section, we present the implementation of the permutation\-based variable\-importance measure in the `DALEX` package for R. The key function is `model_parts()`, which allows computation of the measure. For the purposes of the computation, one can choose among several loss functions that include `loss_sum_of_squares()`, `loss_root_mean_square()`, `loss_accuracy()`, `loss_cross_entropy()`, and `loss_one_minus_auc()`. For the definitions of the loss functions, see Chapter [15](modelPerformance.html#modelPerformance). For illustration purposes, we use the random forest model `apartments_rf` for the apartment\-prices data (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)). We first load the model\-object via the `archivist` hook, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) and it is important to have the corresponding `predict()` function available. Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)).
Note that we use the `apartments_test` data frame without the first column, i.e., the *m2\.price* variable, in the `data` argument. This will be the dataset to which the model will be applied (see Section [4\.5\.5](dataSetsIntro.html#ExplainersApartmentsRCode)). The *m2\.price* variable is explicitly specified as the dependent variable in the `y` argument.

```
library("DALEX")
library("randomForest")
apartments_rf <- archivist::aread("pbiecek/models/fe7a5")

explainer_rf <- DALEX::explain(model = apartments_rf,
                               data = apartments_test[,-1],
                               y = apartments_test$m2.price,
                               label = "Random Forest")
```

A popular loss function is the root\-mean\-square\-error (RMSE) function [(15\.2\)](modelPerformance.html#eq:RMSE). It is implemented in the `DALEX` package as the `loss_root_mean_square()` function. The latter requires two arguments: `observed`, which indicates the vector of observed values of the dependent variable, and `predicted`, which specifies the object (either a vector or a matrix, as returned from the model\-specific `predict()` function) with the predicted values. The value \\(L^0\\) of RMSE for the random forest model, computed for the original (testing) data, can be obtained by applying the `loss_root_mean_square()` function in the form given below.

```
loss_root_mean_square(observed = apartments_test$m2.price,
                      predicted = predict(apartments_rf, apartments_test))
```

```
## [1] 282.9519
```

To compute the permutation\-based variable\-importance measure, we apply the `model_parts()` function. Note that it is a wrapper for function `feature_importance()` from the `ingredients` package. The only required argument is `explainer`, which indicates the explainer\-object (obtained with the help of the `explain()` function, see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)) for the model to be explained. The other (optional) arguments are: * `loss_function`, the loss function to be used (by default, it is the `loss_root_mean_square` function). * `type`, the form of the variable\-importance measure, with values `"raw"` resulting in the computation of \\(\\mathcal L()\\), `"difference"` yielding \\(vip\_{Diff}^j\\), and `"ratio"` providing \\(vip\_{Ratio}^j\\) (see Section [16\.3](featureImportance.html#featureImportanceMethod)). * `variables`, a character vector providing the names of the explanatory variables, for which the variable\-importance measure is to be computed. By default, `variables = NULL`, in which case computations are performed for all variables in the dataset. * `variable_groups`, a list of character vectors of names of explanatory variables. For each vector, a single variable\-importance measure is computed for the joint effect of the variables whose names are provided in the vector (an example is given right after this list). By default, `variable_groups = NULL`, in which case variable\-importance measures are computed separately for all variables indicated in the `variables` argument. * `B`, the number of permutations to be used for the purpose of calculation of the (mean) variable\-importance measures, with `B = 10` used by default. To get a single\-permutation\-based measure, use `B = 1`. * `N`, the number of observations that are to be sampled from the data available in the explainer\-object for the purpose of calculation of the variable\-importance measure; by default, `N = 1000` is used; if `N = NULL`, the entire dataset is used.
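For instance, the joint importance of groups of variables can be computed as follows; the grouping into a hypothetical "size" and "location" aspect is chosen purely for illustration.

```
set.seed(1980)
model_parts(explainer = explainer_rf,
            loss_function = loss_root_mean_square,
            B = 50,
            variable_groups = list(size     = c("surface", "no.rooms"),
                                   location = c("district", "floor")))
```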
To compute a single\-permutation\-based value of the RMSE for all the explanatory variables included in the random forest model `apartments_rf`, we apply the `model_parts()` function to the model’s explainer\-object as shown below. We use the `set.seed()` function to make the process of random selection of the permutation repeatable. ``` set.seed(1980) model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 1) ``` ``` ## variable mean_dropout_loss label ## 1 _full_model_ 271.9089 Random Forest ## 2 construction.year 389.4840 Random Forest ## 3 no.rooms 396.0281 Random Forest ## 4 floor 436.6190 Random Forest ## 5 surface 462.7374 Random Forest ## 6 district 794.7619 Random Forest ## 7 _baseline_ 1095.4724 Random Forest ``` Note that the outcome is identical to the following call below (results not shown). ``` set.seed(1980) model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 1, variables = colnames(explainer_rf$data)) ``` However, if we use a different ordering of the variables in the `variables` argument, the result is slightly different: ``` set.seed(1980) vars <- c("surface","floor","construction.year","no.rooms","district") model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 1, variables = vars) ``` ``` ## variable mean_dropout_loss label ## 1 _full_model_ 271.9089 Random Forest ## 2 construction.year 393.1586 Random Forest ## 3 no.rooms 396.0281 Random Forest ## 4 floor 440.9293 Random Forest ## 5 surface 483.1104 Random Forest ## 6 district 794.7619 Random Forest ## 7 _baseline_ 1095.4724 Random Forest ``` This is due to the fact that, despite the same seed, the first permutation is now selected for the *surface* variable, while in the previous code the same permutation was applied to the values of the *floor* variable. To compute the mean variable\-importance measure based on 50 permutations and using the RMSE difference \\(vip\_{Diff}^j\\) (see Section [16\.3](featureImportance.html#featureImportanceMethod)), we have got to specify the appropriate values of the `B` and `type` arguments. ``` set.seed(1980) (vip.50 <- model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 50, type = "difference")) ``` ``` ## variable mean_dropout_loss label ## 1 _full_model_ 0.0000 Random Forest ## 2 no.rooms 117.4678 Random Forest ## 3 construction.year 122.4445 Random Forest ## 4 floor 162.4554 Random Forest ## 5 surface 182.4368 Random Forest ## 6 district 563.7343 Random Forest ## 7 _baseline_ 843.0472 Random Forest ``` To obtain a graphical illustration, we apply the `plot()` function to the `vip.50` object. ``` library("ggplot2") plot(vip.50) + ggtitle("Mean variable-importance over 50 permutations", "") ``` The resulting graph is presented in Figure [16\.4](featureImportance.html#fig:featureImportanceUnoPlot). The bars in the plot indicate the mean values of the variable\-importance measures for all explanatory variables. Box plots are added to the bars to provide an idea about the distribution of the values of the measure across the permutations. Figure 16\.4: Mean variable\-importance calculated by using 50 permutations and the root\-mean\-squared\-error loss\-function for the random forest model `apartments_rf` for the apartment\-prices data. Plot obtained by using the generic `plot()` function in R. Variable\-importance measures are a very useful tool for model comparison. 
We will illustrate this application by considering the random forest model, linear\-regression model (Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)), and support\-vector\-machine (SVM) model (Section [4\.5\.3](dataSetsIntro.html#model-Apartments-svm)) for the apartment prices dataset. The models differ in their flexibility and structure; hence, it may be of interest to compare them. We first load the necessary model\-objects via the `archivist` hooks, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). ``` apartments_lm <- archivist::aread("pbiecek/models/55f19") apartments_svm <- archivist::aread("pbiecek/models/d2ca0") ``` Then we construct the corresponding explainer\-objects. We also load the `e1071` package, as it is important to have a suitable `predict()` function available for the SVM model. ``` explainer_lm <- DALEX::explain(model = apartments_lm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Linear Regression") library("e1071") explainer_svm <- DALEX::explain(model = apartments_svm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Support Vector Machine") ``` Subsequently, we compute mean values of the permutation\-based variable\-importance measure for 50 permutations and the RMSE loss function. Note that we use the `set.seed()` function to make the process of random selection of the permutation repeatable. By specifying `N = NULL` we include all the data from the apartments dataset in the calculations. ``` vip_lm <- model_parts(explainer = explainer_lm, B = 50, N = NULL) vip_rf <- model_parts(explainer = explainer_rf, B = 50, N = NULL) vip_svm <- model_parts(explainer = explainer_svm, B = 50, N = NULL) ``` Finally, we apply the `plot()` function to the created objects to obtain a single plot with the variable\-importance measures for all three models. ``` library("ggplot2") plot(vip_rf, vip_svm, vip_lm) + ggtitle("Mean variable-importance over 50 permutations", "") ``` The resulting graph is presented in Figure [16\.5](featureImportance.html#fig:featureImportanceTriPlot). The plots suggest that the best result, in terms of the smallest value of \\(L^0\\), is obtained for the SVM model (as indicated by the location of the dashed lines in the plots). The length of bars indicates that *district* is the most important explanatory variable in all three models, followed by *surface* and *floor*. *Construction year* is the fourth most important variable for the random forest and SVM models, but it is not important in the linear\-regression model at all. We will investigate the reason for this difference in the next chapter. Figure 16\.5: Mean variable\-importance calculated using 50 permutations and the root\-mean\-squared\-error loss for the random forest, support\-vector\-machine, and linear\-regression models for the apartment\-prices data. 16\.7 Code snippets for Python ------------------------------ In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose. 
```
import dalex as dx

titanic_rf_exp = dx.Explainer(titanic_rf, X, y,
                              label = "Titanic RF Pipeline")
```

To calculate the variable\-importance measure, we use the `model_parts()` method. By default, it performs `B = 10` permutations, with the variable\-importance measure calculated on `N = 1000` sampled observations.

```
mp_rf = titanic_rf_exp.model_parts()
mp_rf.result
```

The obtained results can be visualised by using the `plot()` method. Results are presented in Figure [16\.6](featureImportance.html#fig:examplePythonFIM2).

```
mp_rf.plot()
```

Figure 16\.6: Mean variable\-importance calculated by using 10 permutations and the root\-mean\-squared\-error loss\-function for the random forest model for the Titanic data. The `model_parts()` method in Python accepts similar arguments to the corresponding function in the `DALEX` package in R (see Section [16\.6](featureImportance.html#featureImportanceR)). These include, for example, the `loss_function` argument (with values such as `'rmse'` or `'1-auc'`); the `type` argument (with values `'variable_importance'`, `'ratio'`, and `'difference'`); and the `variable_groups` argument that allows specifying groups of explanatory variables, for which a single variable\-importance measure should be computed. In the code below, we illustrate the use of the `variable_groups` argument to specify two groups of variables. The resulting plot is presented in Figure [16\.7](featureImportance.html#fig:examplePythonFIM5).

```
vi_grouped = titanic_rf_exp.model_parts(
    variable_groups={'personal': ['gender', 'age', 'sibsp', 'parch'],
                     'wealth': ['class', 'fare']})
vi_grouped.result
```

```
vi_grouped.plot()
```

Figure 16\.7: Mean variable\-importance calculated for two groups of variables for the random forest model for the Titanic data.
A great example in this respect is the variable\-importance measure based on out\-of\-bag data for a random forest model (Leo Breiman [2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)), but there are also other approaches like methods implemented in the `XgboostExplainer` package (Foster [2017](#ref-xgboostExplainer)) for gradient boosting and `randomForestExplainer` (Paluszynska and Biecek [2017](#ref-randomForestExplainer)) for random forest. In this book, we focus on a model\-agnostic method that does not assume anything about the model structure. Therefore, it can be applied to any predictive model or ensemble of models. Moreover, and perhaps even more importantly, it allows comparing an explanatory\-variable’s importance between models with different structures. 16\.2 Intuition --------------- We focus on the method described in more detail by Fisher, Rudin, and Dominici ([2019](#ref-variableImportancePermutations)). The main idea is to measure how much does a model’s performance change if the effect of a selected explanatory variable, or of a group of variables, is removed? To remove the effect, we use perturbations, like resampling from an empirical distribution or permutation of the values of the variable. The idea is borrowed from the variable\-importance measure proposed by Leo Breiman ([2001](#ref-randomForestBreiman)[a](#ref-randomForestBreiman)) for random forest. If a variable is important, then we expect that, after permuting the values of the variable, the model’s performance (as captured by one of the measures discussed in Chapter [15](modelPerformance.html#modelPerformance)) will worsen. The larger the change in the performance, the more important is the variable. Despite the simplicity of the idea, the permutation\-based approach to measuring an explanatory\-variable’s importance is a very powerful model\-agnostic tool for model exploration. Variable\-importance measures obtained in this way may be compared between different models. This property is discussed in detail in Section [16\.5](featureImportance.html#featureImportanceProsCons). 16\.3 Method ------------ Consider a set of \\(n\\) observations for a set of \\(p\\) explanatory variables and dependent variable \\(Y\\). Let \\(\\underline{X}\\) denote the matrix containing, in rows, the (transposed column\-vectors of) observed values of the explanatory variables for all observations. Denote by \\(\\underline{y}\\) the column vector of the observed values of \\(Y\\). Let \\(\\underline{\\hat{y}}\=(f(\\underline{x}\_1\),\\ldots,f(\\underline{x}\_n))'\\) denote the corresponding vector of predictions for \\(\\underline{y}\\) for model \\(f()\\). Let \\(\\mathcal L(\\underline{\\hat{y}}, \\underline X, \\underline{y})\\) be a loss function that quantifies goodness\-of\-fit of model \\(f()\\). For instance, \\(\\mathcal L()\\) may be the value of log\-likelihood (see Chapter [15](modelPerformance.html#modelPerformance)) or any other model performance measure discussed in previous chapter. Consider the following algorithm: 1. Compute \\(L^0 \= \\mathcal L(\\underline{\\hat{y}}, \\underline X, \\underline{y})\\), i.e., the value of the loss function for the original data. Then, for each explanatory variable \\(X^j\\) included in the model, do steps 2\-5\. 2. Create matrix \\(\\underline{X}^{\*j}\\) by permuting the \\(j\\)\-th column of \\(\\underline{X}\\), i.e., by permuting the vector of observed values of \\(X^j\\). 3. 
Compute model predictions \\(\\underline{\\hat{y}}^{\*j}\\) based on the modified data \\(\\underline{X}^{\*j}\\). 4. Compute the value of the loss function for the modified data: \\\[ L^{\*j} \= \\mathcal L(\\underline{\\hat{y}}^{\*j}, \\underline{X}^{\*j}, \\underline{y}). \\] 5. Quantify the importance of \\(X^j\\) by calculating \\(vip\_{Diff}^j \= L^{\*j} \- L^0\\) or \\(vip\_{Ratio}^j \= L^{\*j} / L^0\\). Note that the use of resampling or permuting data in Step 2 involves randomness. Thus, the results of the procedure may depend on the obtained configuration of resampled/permuted values. Hence, it is advisable to repeat the procedure several (many) times. In this way, the uncertainty associated with the calculated variable\-importance values can be assessed. The calculations in Step 5 “normalize” the value of the variable\-importance measure with respect to \\(L^0\\). However, given that \\(L^0\\) is a constant, the normalization has no effect on the ranking of explanatory variables according to \\(vip\_{Diff}^j\\) nor \\(vip\_{Ratio}^j\\). Thus, in practice, often the values of \\(L^{\*j}\\) are simply used to quantify a variable’s importance. 16\.4 Example: Titanic data --------------------------- In this section, we illustrate the use of the permutation\-based variable\-importance evaluation by applying it to the random forest model for the Titanic data (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)). Recall that the goal is to predict survival probability of passengers based on their gender, age, class in which they travelled, ticket fare, the number of persons they travelled with, and the harbour they embarked the ship on. We use the area under the ROC curve (AUC, see Section [15\.3\.2\.2](modelPerformance.html#modelPerformanceMethodBinGOP)) as the model\-performance measure. Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp) shows, for each explanatory variable included in the model, the values of \\(1\-AUC^{\*j}\\) obtained by the algorithm described in the previous section. Additionally, the plot indicates the value of \\(L^0\\) by the vertical dashed\-line at the left\-hand\-side of the plot. The lengths of the bars correspond to \\(vip\_{Diff}^j\\) and provide the variable\-importance measures. Figure 16\.1: Single\-permutation\-based variable\-importance measures for the explanatory variables included in the random forest model for the Titanic data using 1\-AUC as the loss function. The plot in Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp) suggests that the most important variable in the model is *gender*. This agrees with the conclusions drawn in the exploratory analysis presented in Section [4\.1\.1](dataSetsIntro.html#exploration-titanic). The next three important variables are *class* (passengers travelling in the first class had a higher chance of survival), *age* (children had a higher chance of survival), and *fare* (owners of more expensive tickets had a higher chance of survival). To take into account the uncertainty related to the use of permutations, we can consider computing the mean values of \\(L^{\*j}\\) over a set of, say, 10 permutations. The plot in Figure [16\.2](featureImportance.html#fig:TitanicRFFeatImp10) presents the mean values. The only remarkable difference, as compared to Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp), is the change in the ordering of the *sibsp* and *parch* variables. 
Figure 16\.2: Means (over 10 permutations) of permutation\-based variable\-importance measures for the explanatory variables included in the random forest model for the Titanic data using 1\-AUC as the loss function.

Plots similar to those presented in Figures [16\.1](featureImportance.html#fig:TitanicRFFeatImp) and [16\.2](featureImportance.html#fig:TitanicRFFeatImp10) are useful for comparisons of a variable's importance in different models. Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) presents single\-permutation results for the random forest, logistic regression (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)), and gradient boosting (see Section [4\.2\.3](dataSetsIntro.html#model-titanic-gbm)) models. The best result, in terms of the smallest value of \\(L^0\\), is obtained for the generalized boosted regression model (as indicated by the location of the dashed lines in the plots). Note that the indicated \\(L^0\\) value for the model is different from the one indicated in Figure [16\.1](featureImportance.html#fig:TitanicRFFeatImp). This is due to the difference in the set of (random) permutations used to compute the two values.

Figure 16\.3: Single\-permutation\-based variable\-importance measures for the random forest, gradient boosting, and logistic regression models for the Titanic data with 1\-AUC as the loss function. Note the different starting locations for the bars, due to differences in the AUC value obtained for the original data for different models.

The plots in Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) indicate that *gender* is the most important explanatory variable in all three models, followed by *class* and *age*. Variable *fare*, which is highly correlated with *class*, is important in the random forest and gradient boosting models, but not in the logistic regression model. On the other hand, variable *parch* is essentially not important in either the gradient boosting or the logistic regression model, but it has some importance in the random forest model. *Country* is not important in any of the models. Overall, Figure [16\.3](featureImportance.html#fig:TitanicFeatImp) indicates that, in the random forest model, all variables (except *country*) have got some importance, while in the other two models the effect is mainly limited to *gender*, *class*, and *age* (and *fare* for the gradient boosting model).
Instead of assessing the importance of each of these variables separately, we may be interested in their joint importance. Toward this aim, we may compute the permutation\-based measure by permuting the values of both variables at the same time.

The main disadvantage of the permutation\-based variable\-importance measure is its dependence on the random nature of the permutations. As a result, for different permutations, we will, in general, get different results. Also, the value of the measure depends on the choice of the loss function \\(\\mathcal L()\\). Thus, there is no single, “absolute” measure.

16\.6 Code snippets for R
-------------------------

In this section, we present the implementation of the permutation\-based variable\-importance measure in the `DALEX` package for R. The key function is `model_parts()`, which allows computation of the measure. For the purposes of the computation, one can choose among several loss functions that include `loss_sum_of_squares()`, `loss_root_mean_square()`, `loss_accuracy()`, `loss_cross_entropy()`, and `loss_one_minus_auc()`. For the definitions of the loss functions, see Chapter [15](modelPerformance.html#modelPerformance).

For illustration purposes, we use the random forest model `apartments_rf` for the apartment\-prices data (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)). We first load the model\-object via the `archivist` hook, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). We also load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) and it is important to have the corresponding `predict()` function available. Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that we use the `apartments_test` data frame without the first column, i.e., the *m2\.price* variable, in the `data` argument. This will be the dataset to which the model will be applied (see Section [4\.5\.5](dataSetsIntro.html#ExplainersApartmentsRCode)). The *m2\.price* variable is explicitly specified as the dependent variable in the `y` argument.

```
library("DALEX")
library("randomForest")
apartments_rf <- archivist::aread("pbiecek/models/fe7a5")

explainer_rf <- DALEX::explain(model = apartments_rf, 
                               data = apartments_test[,-1], 
                               y = apartments_test$m2.price, 
                               label = "Random Forest")
```

A popular loss function is the root\-mean\-square\-error (RMSE) function [(15\.2\)](modelPerformance.html#eq:RMSE). It is implemented in the `DALEX` package as the `loss_root_mean_square()` function. The latter requires two arguments: `observed`, which indicates the vector of observed values of the dependent variable, and `predicted`, which specifies the object (either a vector or a matrix, as returned from the model\-specific `predict()` function) with the predicted values. The value \\(L^0\\) of RMSE for the random forest model on the original (testing) data can be obtained by applying `loss_root_mean_square()` in the form given below.

```
loss_root_mean_square(observed = apartments_test$m2.price, 
                      predicted = predict(apartments_rf, apartments_test))
```

```
## [1] 282.9519
```

To compute the permutation\-based variable\-importance measure, we apply the `model_parts()` function. Note that it is a wrapper for function `feature_importance()` from the `ingredients` package.
The only required argument is `explainer`, which indicates the explainer\-object (obtained with the help of the `explain()` function, see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)) for the model to be explained. The other (optional) arguments are:

* `loss_function`, the loss function to be used (by default, it is the `loss_root_mean_square` function).
* `type`, the form of the variable\-importance measure, with values `"raw"` resulting in the computation of \\(\\mathcal L()\\), `"difference"` yielding \\(vip\_{Diff}^j\\), and `"ratio"` providing \\(vip\_{Ratio}^j\\) (see Section [16\.3](featureImportance.html#featureImportanceMethod)).
* `variables`, a character vector providing the names of the explanatory variables, for which the variable\-importance measure is to be computed. By default, `variables = NULL`, in which case computations are performed for all variables in the dataset.
* `variable_groups`, a list of character vectors of names of explanatory variables. For each vector, a single variable\-importance measure is computed for the joint effect of the variables whose names are provided in the vector. By default, `variable_groups = NULL`, in which case variable\-importance measures are computed separately for all variables indicated in the `variables` argument.
* `B`, the number of permutations to be used for the purpose of calculation of the (mean) variable\-importance measures, with `B = 10` used by default. To get a single\-permutation\-based measure, use `B = 1`.
* `N`, the number of observations that are to be sampled from the data available in the explainer\-object for the purpose of calculation of the variable\-importance measure; by default, `N = 1000` is used; if `N = NULL`, the entire dataset is used.

To compute a single\-permutation\-based value of the RMSE for all the explanatory variables included in the random forest model `apartments_rf`, we apply the `model_parts()` function to the model's explainer\-object as shown below. We use the `set.seed()` function to make the process of random selection of the permutation repeatable.

```
set.seed(1980)
model_parts(explainer = explainer_rf, 
            loss_function = loss_root_mean_square,
            B = 1)
```

```
##            variable mean_dropout_loss         label
## 1      _full_model_          271.9089 Random Forest
## 2 construction.year          389.4840 Random Forest
## 3          no.rooms          396.0281 Random Forest
## 4             floor          436.6190 Random Forest
## 5           surface          462.7374 Random Forest
## 6          district          794.7619 Random Forest
## 7        _baseline_         1095.4724 Random Forest
```

Note that the outcome is identical to that of the following call (results not shown).
``` set.seed(1980) model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 1, variables = colnames(explainer_rf$data)) ``` However, if we use a different ordering of the variables in the `variables` argument, the result is slightly different: ``` set.seed(1980) vars <- c("surface","floor","construction.year","no.rooms","district") model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 1, variables = vars) ``` ``` ## variable mean_dropout_loss label ## 1 _full_model_ 271.9089 Random Forest ## 2 construction.year 393.1586 Random Forest ## 3 no.rooms 396.0281 Random Forest ## 4 floor 440.9293 Random Forest ## 5 surface 483.1104 Random Forest ## 6 district 794.7619 Random Forest ## 7 _baseline_ 1095.4724 Random Forest ``` This is due to the fact that, despite the same seed, the first permutation is now selected for the *surface* variable, while in the previous code the same permutation was applied to the values of the *floor* variable. To compute the mean variable\-importance measure based on 50 permutations and using the RMSE difference \\(vip\_{Diff}^j\\) (see Section [16\.3](featureImportance.html#featureImportanceMethod)), we have got to specify the appropriate values of the `B` and `type` arguments. ``` set.seed(1980) (vip.50 <- model_parts(explainer = explainer_rf, loss_function = loss_root_mean_square, B = 50, type = "difference")) ``` ``` ## variable mean_dropout_loss label ## 1 _full_model_ 0.0000 Random Forest ## 2 no.rooms 117.4678 Random Forest ## 3 construction.year 122.4445 Random Forest ## 4 floor 162.4554 Random Forest ## 5 surface 182.4368 Random Forest ## 6 district 563.7343 Random Forest ## 7 _baseline_ 843.0472 Random Forest ``` To obtain a graphical illustration, we apply the `plot()` function to the `vip.50` object. ``` library("ggplot2") plot(vip.50) + ggtitle("Mean variable-importance over 50 permutations", "") ``` The resulting graph is presented in Figure [16\.4](featureImportance.html#fig:featureImportanceUnoPlot). The bars in the plot indicate the mean values of the variable\-importance measures for all explanatory variables. Box plots are added to the bars to provide an idea about the distribution of the values of the measure across the permutations. Figure 16\.4: Mean variable\-importance calculated by using 50 permutations and the root\-mean\-squared\-error loss\-function for the random forest model `apartments_rf` for the apartment\-prices data. Plot obtained by using the generic `plot()` function in R. Variable\-importance measures are a very useful tool for model comparison. We will illustrate this application by considering the random forest model, linear\-regression model (Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)), and support\-vector\-machine (SVM) model (Section [4\.5\.3](dataSetsIntro.html#model-Apartments-svm)) for the apartment prices dataset. The models differ in their flexibility and structure; hence, it may be of interest to compare them. We first load the necessary model\-objects via the `archivist` hooks, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). ``` apartments_lm <- archivist::aread("pbiecek/models/55f19") apartments_svm <- archivist::aread("pbiecek/models/d2ca0") ``` Then we construct the corresponding explainer\-objects. We also load the `e1071` package, as it is important to have a suitable `predict()` function available for the SVM model. 
``` explainer_lm <- DALEX::explain(model = apartments_lm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Linear Regression") library("e1071") explainer_svm <- DALEX::explain(model = apartments_svm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Support Vector Machine") ``` Subsequently, we compute mean values of the permutation\-based variable\-importance measure for 50 permutations and the RMSE loss function. Note that we use the `set.seed()` function to make the process of random selection of the permutation repeatable. By specifying `N = NULL` we include all the data from the apartments dataset in the calculations. ``` vip_lm <- model_parts(explainer = explainer_lm, B = 50, N = NULL) vip_rf <- model_parts(explainer = explainer_rf, B = 50, N = NULL) vip_svm <- model_parts(explainer = explainer_svm, B = 50, N = NULL) ``` Finally, we apply the `plot()` function to the created objects to obtain a single plot with the variable\-importance measures for all three models. ``` library("ggplot2") plot(vip_rf, vip_svm, vip_lm) + ggtitle("Mean variable-importance over 50 permutations", "") ``` The resulting graph is presented in Figure [16\.5](featureImportance.html#fig:featureImportanceTriPlot). The plots suggest that the best result, in terms of the smallest value of \\(L^0\\), is obtained for the SVM model (as indicated by the location of the dashed lines in the plots). The length of bars indicates that *district* is the most important explanatory variable in all three models, followed by *surface* and *floor*. *Construction year* is the fourth most important variable for the random forest and SVM models, but it is not important in the linear\-regression model at all. We will investigate the reason for this difference in the next chapter. Figure 16\.5: Mean variable\-importance calculated using 50 permutations and the root\-mean\-squared\-error loss for the random forest, support\-vector\-machine, and linear\-regression models for the apartment\-prices data. 16\.7 Code snippets for Python ------------------------------ In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose. ``` import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` To calculate the variable\-importance measure, we use the `model_parts()` method. By default it performs `B = 10` permutations of variable importance calculated on `N = 1000` observations. ``` mp_rf = titanic_rf_exp.model_parts() mp_rf.result ``` The obtained results can be visualised by using the `plot()` method. Results are presented in Figure [16\.6](featureImportance.html#fig:examplePythonFIM2). ``` mp_rf.plot() ``` Figure 16\.6: Mean variable\-importance calculated by using 10 permutations and the root\-mean\-squared\-error loss\-function for the random forest model for the Titanic data. 
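For readers who prefer to see the algorithm of Section [16\.3](featureImportance.html#featureImportanceMethod) spelled out without the `dalex` wrapper, the snippet below is a minimal, from\-scratch sketch. It assumes the same fitted pipeline `titanic_rf` (with a `predict_proba()` method) and the pandas objects `X` and `y` used to build the explainer above, and it uses 1\-AUC as the loss; it is meant as an illustration of the idea, not as a replacement for `model_parts()`.

```
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1980)

def one_minus_auc(model, X, y):
    # 1-AUC computed from the predicted probabilities of survival
    return 1 - roc_auc_score(y, model.predict_proba(X)[:, 1])

L0 = one_minus_auc(titanic_rf, X, y)              # loss on the original data (step 1)
importance = {}
for column in X.columns:                          # steps 2-5 for each variable
    losses = []
    for _ in range(10):                           # B = 10 permutations
        X_perm = X.copy()
        X_perm[column] = rng.permutation(X_perm[column].to_numpy())
        losses.append(one_minus_auc(titanic_rf, X_perm, y))
    importance[column] = np.mean(losses) - L0     # vip_Diff^j

print(sorted(importance.items(), key=lambda item: item[1], reverse=True))
```

Averaging the increase in the loss over several permutations, as done here, mirrors the repeated\-permutation idea discussed earlier in this chapter.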
The `model_parts()` method in Python accepts arguments similar to those of the corresponding function in the `DALEX` package in R (see Section [16\.6](featureImportance.html#featureImportanceR)). These include, for example, the `loss_function` argument (with values such as `'rmse'` or `'1-auc'`); the `type` argument (with values `'variable_importance'`, `'ratio'`, and `'difference'`); and the `variable_groups` argument that allows specifying groups of explanatory variables, for which a single variable\-importance measure should be computed.

In the code below, we illustrate the use of the `variable_groups` argument to specify two groups of variables. The resulting plot is presented in Figure [16\.7](featureImportance.html#fig:examplePythonFIM5).

```
vi_grouped = titanic_rf_exp.model_parts(
    variable_groups={'personal': ['gender', 'age', 'sibsp', 'parch'],
                     'wealth': ['class', 'fare']})
vi_grouped.result
```

```
vi_grouped.plot()
```

Figure 16\.7: Mean variable\-importance calculated for two groups of variables for the random forest model for the Titanic data.
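As a rough by\-hand counterpart of the `variable_groups` argument, one can permute all columns of a group with the same shuffling of rows, so that the dependence structure within the group is preserved while its link with the response is broken. The sketch below assumes the same `titanic_rf`, `X`, and `y` as above; it illustrates the idea of grouped permutation importance rather than reproducing the internal implementation of `dalex`.

```
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def grouped_importance(model, X, y, columns, B=10):
    """Mean increase in 1-AUC after jointly permuting a group of columns."""
    base = 1 - roc_auc_score(y, model.predict_proba(X)[:, 1])
    drops = []
    for _ in range(B):
        idx = rng.permutation(len(X))          # one shuffle shared by the whole group
        X_perm = X.copy()
        for column in columns:
            X_perm[column] = X[column].to_numpy()[idx]
        drops.append(1 - roc_auc_score(y, model.predict_proba(X_perm)[:, 1]) - base)
    return float(np.mean(drops))

print(grouped_importance(titanic_rf, X, y, ['gender', 'age', 'sibsp', 'parch']))  # "personal"
print(grouped_importance(titanic_rf, X, y, ['class', 'fare']))                    # "wealth"
```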
17 Partial\-dependence Profiles =============================== 17\.1 Introduction ------------------ In this chapter, we focus on partial\-dependence (PD) plots, sometimes also called PD profiles. They were introduced in the context of gradient boosting machines (GBM) by Friedman ([2000](#ref-Friedman00greedyfunction)). For many years, PD profiles went unnoticed in the shadow of GBM. However, in recent years, they have become very popular and are available in many data\-science\-oriented packages like `DALEX` (Biecek [2018](#ref-DALEX)), `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)), `pdp` (Greenwell [2017](#ref-pdpRPackage)) or `PDPbox` (Jiangchun [2018](#ref-PDPbox)). The general idea underlying the construction of PD profiles is to show how does the expected value of model prediction behave as a function of a selected explanatory variable. For a single model, one can construct an overall PD profile by using all observations from a dataset, or several profiles for sub\-groups of the observations. Comparison of sub\-group\-specific profiles may provide important insight into, for instance, the stability of the model’s predictions. PD profiles are also useful for comparisons of different models: * *Agreement between profiles for different models is reassuring.* Some models are more flexible than others. If PD profiles for models, which differ with respect to flexibility, are similar, we can treat it as a piece of evidence that the more flexible model is not overfitting and that the models capture the same relationship. * *Disagreement between profiles may suggest a way to improve a model.* If a PD profile of a simpler, more interpretable model disagrees with a profile of a flexible model, this may suggest a variable transformation that can be used to improve the interpretable model. For example, if a random forest model indicates a non\-linear relationship between the dependent variable and an explanatory variable, then a suitable transformation of the explanatory variable may improve the fit or performance of a linear\-regression model. * *Evaluation of model performance at boundaries.* Models are known to have different behaviour at the boundaries of the possible range of a dependent variable, i.e., for the largest or the lowest values. For instance, random forest models are known to shrink predictions towards the average, whereas support\-vector machines are known for a larger variance at edges. Comparison of PD profiles may help to understand the differences in models’ behaviour at boundaries. 17\.2 Intuition --------------- To show how does the expected value of model prediction behave as a function of a selected explanatory variable, the average of a set of individual ceteris\-paribus (CP) profiles can be used. Recall that a CP profile (see Chapter [10](ceterisParibus.html#ceterisParibus)) shows the dependence of an instance\-level prediction on an explanatory variable. A PD profile is estimated by the mean of the CP profiles for all instances (observations) from a dataset. Note that, for additive models, CP profiles are parallel. In particular, they have got the same shape. Consequently, the mean retains the shape, while offering a more precise estimate. However, for models that, for instance, include interactions, CP profiles may not be parallel. In that case, the mean may not necessarily correspond to the shape of any particular profile. 
Nevertheless, it can still offer a summary of how (in general) do the model’s predictions depend on changes in a given explanatory variable. The left\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition) presents CP profiles for the explanatory variable *age* in the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 25 randomly selected instances (observations) from the Titanic dataset (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Note that the profiles are not parallel, indicating non\-additive effects of explanatory variables. The right\-hand\-side panel shows the mean of the CP profiles, which offers an estimate of the PD profile. Clearly, the shape of the PD profile does not capture, for instance, the shape of the group of five CP profiles shown at the top of the panel. Nevertheless, it does seem to reflect the fact that the majority of CP profiles suggest a substantial drop in the predicted probability of survival for the ages between 2 and 18\. Figure 17\.1: Ceteris\-paribus (CP) and partial\-dependence (PD) profiles for the random forest model for 25 randomly selected observations from the Titanic dataset. Left\-hand\-side plot: CP profiles for *age*; blue dots indicate the age and the corresponding prediction for the selected observations. Right\-hand\-side plot: CP profiles (grey lines) and the corresponding PD profile (blue line). 17\.3 Method ------------ ### 17\.3\.1 Partial\-dependence profiles The value of a PD profile for model \\(f()\\) and explanatory variable \\(X^j\\) at \\(z\\) is defined as follows: \\\[\\begin{equation} g\_{PD}^{j}(z) \= E\_{\\underline{X}^{\-j}}\\{f(X^{j\|\=z})\\}. \\tag{17\.1} \\end{equation}\\] Thus, it is the expected value of the model predictions when \\(X^j\\) is fixed at \\(z\\) over the (marginal) distribution of \\(\\underline{X}^{\-j}\\), i.e., over the joint distribution of all explanatory variables other than \\(X^j\\). Or, in other words, it is the expected value of the CP profile for \\(X^j\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), over the distribution of \\(\\underline{X}^{\-j}\\). Usually, we do not know the true distribution of \\(\\underline{X}^{\-j}\\). We can estimate it, however, by the empirical distribution of \\(n\\), say, observations available in a training dataset. This leads to the use of the mean of CP profiles for \\(X^j\\) as an estimator of the PD profile: \\\[\\begin{equation} \\hat g\_{PD}^{j}(z) \= \\frac{1}{n} \\sum\_{i\=1}^{n} f(\\underline{x}\_i^{j\|\=z}). \\tag{17\.2} \\end{equation}\\] ### 17\.3\.2 Clustered partial\-dependence profiles As it has been already mentioned, the mean of CP profiles is a good summary if the profiles are parallel. If they are not parallel, the average may not adequately represent the shape of a subset of profiles. To deal with this issue, one can consider clustering the profiles and calculating the mean separately for each cluster. To cluster the CP profiles, one may use standard methods like K\-means or hierarchical clustering. The similarities between observations can be calculated based on the Euclidean distance between CP profiles. Figure [17\.2](partialDependenceProfiles.html#fig:pdpPart4) illustrates an application of that approach to the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 100 randomly selected instances (observations) from the Titanic dataset. The CP profiles for the *age* variable are marked in grey. 
It can be noted that they could be split into three clusters: one for a group of passengers with a substantial drop in the predicted survival probability for ages below 18 (with the average represented by the blue line), one with an almost linear decrease of the probability with age (with the average represented by the red line), and one with an almost constant predicted probability (with the average represented by the green line). The plot itself does not allow us to identify the variables that may be linked with these clusters, but additional exploratory analysis could be performed for this purpose.

Figure 17\.2: Clustered partial\-dependence profiles for *age* for the random forest model for 100 randomly selected observations from the Titanic dataset. Grey lines indicate ceteris\-paribus profiles that are clustered into three groups with the average profiles indicated by the blue, green, and red lines.

### 17\.3\.3 Grouped partial\-dependence profiles

It may happen that we can identify an explanatory variable that influences the shape of CP profiles for the explanatory variable of interest. The most obvious situation is when a model includes an interaction between the variable and another one. In that case, a natural approach is to investigate the PD profiles for the variable of interest within the groups of observations defined by the variable involved in the interaction.

Figure [17\.3](partialDependenceProfiles.html#fig:pdpPart5) illustrates an application of the approach to the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 100 randomly selected instances (observations) from the Titanic dataset. The CP profiles for the explanatory variable *age* are marked in grey. The red and blue lines present the PD profiles for females and males, respectively. The gender\-specific averages have different shapes: the predicted survival probability for females is more stable across different ages, as compared to males. Thus, the PD profiles clearly indicate an interaction between age and gender.

Figure 17\.3: Partial\-dependence profiles for two genders for the random forest model for 100 randomly selected observations from the Titanic dataset. Grey lines indicate ceteris\-paribus profiles for *age*.

### 17\.3\.4 Contrastive partial\-dependence profiles

Comparison of clustered or grouped PD profiles for a single model may provide important insight into, for instance, the stability of the model's predictions. PD profiles can also be compared between different models.

Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7) presents PD profiles for *age* for the random forest model (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model with splines (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) for the Titanic data. The profiles are similar with respect to the general relationship between *age* and the predicted probability of survival (the younger the passenger, the higher the chance of survival). However, the profile for the random forest model is flatter. The difference between the two models is largest at the left edge of the age scale. This pattern can be seen as expected because random forest models, in general, shrink predictions towards the average and they are not very good for extrapolation outside the range of values observed in the training dataset.
Figure 17\.4: Partial\-dependence profiles for *age* for the random forest (green line) and logistic regression (blue line) models for the Titanic dataset. 17\.4 Example: apartment\-prices data ------------------------------------- In this section, we use PD profiles to evaluate performance of the random forest model (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *construction year*. We consider the predictions for the training dataset `apartments`. ### 17\.4\.1 Partial\-dependence profiles Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1) presents CP profiles (grey lines) for 100 randomly\-selected apartments together with the estimated PD profile (blue line) for *construction year* and *surface*. PD profile for *surface* suggests an approximately linear relationship between the explanatory variable and the predicted price. On the other hand, PD profile for *construction year* is U\-shaped: the predicted price is the highest for the very new and very old apartments. Note that, while the data were simulated, they were generated to reflect the effect of a lower quality of building materials used in rapid housing construction after the World War II. Figure 17\.5: Ceteris\-paribus and partial\-dependence profiles for *construction year* and *surface* for 100 randomly\-selected apartments for the random forest model for the apartment\-prices dataset. ### 17\.4\.2 Clustered partial\-dependence profiles Almost all CP profiles for *construction year*, presented in Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1), seem to be U\-shaped. The same shape is observed for the PD profile. One might want to confirm that the shape is, indeed, common for all the observations. The left\-hand\-side panel of Figure [17\.6](partialDependenceProfiles.html#fig:pdpApartment1clustered) presents clustered PD profiles for *construction year* for three clusters derived from the CP profiles presented in Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1). The three PD profiles differ slightly in the size of the oscillations at the edges, but they all are U\-shaped. Thus, we could conclude that the overall PD profile adequately captures the shape of the CP profiles. Or, put differently, there is little evidence that there might be any strong interaction between year of construction and any other variable in the model. Similar conclusions can be drawn for the CP and PD profiles for *surface*, presented in the right\-hand\-side panel of Figure [17\.6](partialDependenceProfiles.html#fig:pdpApartment1clustered). Figure 17\.6: Ceteris\-paribus (grey lines) and partial\-dependence profiles (red, green, and blue lines) for three clusters for 100 randomly\-selected apartments for the random forest model for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*. ### 17\.4\.3 Grouped partial\-dependence profiles One of the categorical explanatory variables in the apartment prices dataset is *district*. We may want to investigate whether the relationship between the model’s predictions and *construction year* and *surface* is similar for all districts. Toward this aim, we can use grouped PD profiles, for groups of apartments defined by districts. 
Figure [17\.7](partialDependenceProfiles.html#fig:pdpApartment2) shows PD profiles for *construction year* (left\-hand\-side panel) and *surface* (right\-hand\-side panel) for each district. Several observations are worth making. First, profiles for apartments in “Srodmiescie” (Downtown) are clearly much higher than for other districts. Second, the profiles are roughly parallel, indicating that the effects of *construction year* and *surface* are similar for each level of *district*. Third, the profiles appear to form three clusters, i.e., “Srodmiescie” (Downtown), three districts close to “Srodmiescie” (namely “Mokotow”, “Ochota”, and “Ursynow”), and the six remaining districts. Figure 17\.7: Partial\-dependence profiles for separate districts for the random forest model for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*. ### 17\.4\.4 Contrastive partial\-dependence profiles One of the main challenges in predictive modelling is to avoid overfitting. The issue is particularly important for flexible models, such as random forest models. Figure [17\.8](partialDependenceProfiles.html#fig:pdpApartment3) presents PD profiles for *construction year* (left\-hand\-side panel) and *surface* (right\-hand\-side panel) for the linear\-regression model (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model. Several observations are worth making. The linear\-regression model cannot, of course, accommodate the non\-monotonic relationship between *construction year* and the price per square meter. However, for *surface*, both models support a linear relationship, though the slope of the line resulting from the linear regression is steeper. This may be seen as an expected difference, given that random forest models yield predictions that are shrunk towards the mean. Overall, we could cautiously conclude that there is not much evidence for overfitting of the more flexible random forest model. Note that the non\-monotonic relationship between *construction year* and the price per square meter might be the reason why the explanatory variable was found not to be important in the model in Section [16\.6](featureImportance.html#featureImportanceR). In Section [4\.5\.4](dataSetsIntro.html#predictionsApartments), we mentioned that a proper model exploration may suggest a way to construct a model with improved performance, as compared to the random forest and linear\-regression models. In this respect, it is worth observing that the profiles in Figure [17\.8](partialDependenceProfiles.html#fig:pdpApartment3) suggest that both models miss some aspects of the data. In particular, the linear\-regression model does not capture the U\-shaped relationship between *construction year* and the price. On the other hand, the effect of *surface* on the apartment price seems to be underestimated by the random forest model. Hence, one could conclude that, by addressing the issues, one could improve either of the models, possibly with an improvement in predictive performance. Figure 17\.8: Partial\-dependence profiles for the linear\-regression and random forest models for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*. 17\.5 Pros and cons ------------------- PD profiles, presented in this chapter, offer a simple way to summarize the effect of a particular explanatory variable on the dependent variable. 
They are easy to explain and intuitive. They can be obtained for sub\-groups of observations and compared across different models. For these reasons, they have gained in popularity and have been implemented in various software packages, including R and Python. Given that the PD profiles are averages of CP profiles, they inherit the limitations of the latter. In particular, as CP profiles are problematic for correlated explanatory variables (see Section [10\.5](ceterisParibus.html#CPProsCons)), PD profiles are also not suitable for that case, as they may offer a crude and potentially misleading summarization. An approach to deal with this issue will be discussed in the next chapter. 17\.6 Code snippets for R ------------------------- In this section, we present the `DALEX` package for R, which covers the methods presented in this chapter. It uses the `ingredients` package with various implementations of variable profiles. Similar functions can be found in packages `pdp` (Greenwell [2017](#ref-pdpRPackage)), `ALEPlots` (Apley [2018](#ref-ALEPlotRPackage)), and `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)). For illustration purposes, we use the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for the Titanic data. Recall that the model has been developed to predict the probability of survival from the sinking of the Titanic. We first retrieve the version of the `titanic` data with imputed missing values and the `titanic_rf` model\-object via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that, beforehand, we have got to load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. ``` library("DALEX") library("randomForest") titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist::aread("pbiecek/models/4e0fc") explainer_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived, label = "Random Forest") ``` ### 17\.6\.1 Partial\-dependence profiles The function that allows computation of PD profiles in the `DALEX` package is `model_profile()`. The only required argument is `explainer`, which indicates the explainer\-object (obtained with the help of the `explain()` function, see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)) for the model to be explained. The other useful arguments include: * `variables`, a character vector providing the names of the explanatory variables, for which the profile is to be computed; by default, `variables = NULL`, in which case computations are performed for all numerical variables included in the model. * `N`, the number of (randomly sampled) observations that are to be used for the calculation of the PD profiles (`N = 100` by default); `N = NULL` implies the use of the entire dataset included in the explainer\-object. * `type`, the type of the PD profile, with values `"partial"` (default), `"conditional"`, and `"accumulated"`. 
* `variable_type`, a character string indicating whether calculations should be performed only for `"numerical"` (continuous) explanatory variables (default) or only for `"categorical"` variables. * `groups`, the name of the explanatory variable that will be used to group profiles, with `groups = NULL` by default (in which case no grouping of profiles is applied). * `k`, the number of clusters to be created with the help of the `hclust()` function, with `k = NULL` used by default and implying no clustering. In the example below, we calculate the PD profile for *age* by applying the `model_profile()` function to the explainer\-object for the random forest model. By default, the profile is based on 100 randomly selected observations. ``` pdp_rf <- model_profile(explainer = explainer_rf, variables = "age") ``` The resulting object of class `model_profile` contains the PD profile for *age*. By applying the `plot()` function to the object, we obtain a plot of the PD profile. Had we not used the `variables` argument, we would have obtained separate plots of PD profiles for all continuous explanatory variables. ``` library("ggplot2") plot(pdp_rf) + ggtitle("Partial-dependence profile for age") ``` The resulting plot for *age* (not shown) corresponds to the one presented in Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7). It may slightly differ, as the two plots are based on different sets of (randomly selected) 100 observations from the Titanic dataset. A PD profile can be plotted on top of CP profiles. This is a very useful feature if we want to check how well the former captures the latter. It is worth noting that, apart from the PD profile, the object created by the `model_profile()` function also contains the CP profiles for the selected observations and all explanatory variables included in the model. By specifying the argument `geom = "profiles"` in the `plot()` function, we add the CP profiles to the plot of the PD profile. ``` plot(pdp_rf, geom = "profiles") + ggtitle("Ceteris-paribus and partial-dependence profiles for age") ``` The resulting plot (not shown) is essentially the same as the one shown in the right\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition). ### 17\.6\.2 Clustered partial\-dependence profiles To calculate clustered PD profiles, we have got to cluster the CP profiles. Toward this aim, we use the `k` argument of the `model_profile()` function that specifies the number of clusters that are to be formed by the `hclust()` function. In the code below, we specify that three clusters are to be formed for profiles for *age*. ``` pdp_rf_clust <- model_profile(explainer = explainer_rf, variables = "age", k = 3) ``` The clustered PD profiles can be plotted on top of the CP profiles by using the `geom = "profiles"` argument in the `plot()` function. ``` plot(pdp_rf_clust, geom = "profiles") + ggtitle("Clustered partial-dependence profiles for age") ``` The resulting plot (not shown) resembles the one shown for the random forest model in Figure [17\.2](partialDependenceProfiles.html#fig:pdpPart4). The only difference may stem from the fact that the two plots are based on a different set of (randomly selected) 100 observations from the Titanic dataset. ### 17\.6\.3 Grouped partial\-dependence profiles The `model_profile()` function admits the `groups` argument that allows constructing PD profiles for groups of observations defined by the levels of an explanatory variable. 
In the example below, we use the argument to obtain PD profiles for *age*, while grouping them by *gender*.

```
pdp_rf_gender <- model_profile(explainer = explainer_rf,
                               variables = "age", groups = "gender")
```

The grouped PD profiles can be plotted on top of the CP profiles by using the `geom = "profiles"` argument in the `plot()` function.

```
plot(pdp_rf_gender, geom = "profiles") + 
  ggtitle("Partial-dependence profiles for age, grouped by gender")
```

The resulting plot (not shown) resembles the one shown in Figure [17\.3](partialDependenceProfiles.html#fig:pdpPart5).

### 17\.6\.4 Contrastive partial\-dependence profiles

It may be of interest to compare PD profiles for several models. We will compare the random forest model with the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). For the latter, we first have got to load it via the `archivist` hook, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). Then we construct the explainer for the model by using function `explain()`. Note that we first load the `rms` package, as the model was fitted by using function `lrm()` from this package (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and it is important to have the corresponding `predict()` function available. Finally, we apply the `model_profile()` function to compute CP profiles and the PD profile for *age* based on 100 randomly\-selected observations from the Titanic dataset. We also repeat the calculations of the profiles for the random forest model.

```
library("rms")
titanic_lmr <- archivist::aread("pbiecek/models/58b24")
explainer_lmr <- DALEX::explain(model = titanic_lmr, 
                                data = titanic_imputed[, -9],
                                y = titanic_imputed$survived,
                                label = "Logistic Regression")
pdp_lmr <- model_profile(explainer = explainer_lmr, variables = "age")
pdp_rf  <- model_profile(explainer = explainer_rf,  variables = "age")
```

To overlay the PD profiles for *age* for the two models in a single plot, we apply the `plot()` function to the `model_profile`\-class objects for the two models that contain the profiles for *age*.

```
plot(pdp_rf, pdp_lmr) +
  ggtitle("Partial-dependence profiles for age for two models")
```

As a result, the profiles are plotted in a single plot. The resulting graph (not shown) is essentially the same as the one presented in Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7), with a possible difference due to the use of a different set of (randomly selected) 100 observations from the Titanic dataset.
The other useful arguments include:

* `variables`, a `str`, `list`, `np.ndarray` or `pd.Series` providing the names of the explanatory variables, for which the profile is to be computed; by default, computations are performed for all numerical variables included in the model.
* `N`, the number of (randomly sampled) observations that are to be used for the calculation of the PD profiles (`N = 300` by default); `N = None` implies the use of the entire dataset included in the explainer\-object.
* `B`, the number of times (by default, 10\) the entire procedure is to be repeated.
* `type`, the type of the PD profile, with values `'partial'` (default), `'conditional'`, and `'accumulated'`.
* `variable_type`, a string indicating whether calculations should be performed only for `'numerical'` (continuous) explanatory variables (default) or only for `'categorical'` variables.
* `groups`, the name or list of names of the explanatory variables that will be used to group profiles, with `groups = None` by default (in which case no grouping of profiles is applied).

In the example below, we calculate the PD profiles for *age* and *fare* by applying the `model_profile()` function to the explainer\-object for the random forest model. By default, the profile is based on 300 randomly selected observations.

```
pd_rf = titanic_rf_exp.model_profile(variables = ['age', 'fare'])
pd_rf.result
```

The results can be visualised by applying the `plot()` method. Figure [17\.9](partialDependenceProfiles.html#fig:examplePythonMProfile2) presents the created plot.

```
pd_rf.plot()
```

Figure 17\.9: Partial\-dependence profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.

A PD profile can be plotted on top of CP profiles. This is a very useful feature if we want to check how well the former captures the latter. By specifying the argument `geom = 'profiles'` in the `plot()` method, we add the CP profiles to the plot of the PD profile.

```
pd_rf.plot(geom = 'profiles')
```

The left\-hand\-side panel of the resulting plot (see Figure [17\.10](partialDependenceProfiles.html#fig:examplePythonMProfile7)) is essentially the same as the one shown in the right\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition).

Figure 17\.10: Partial\-dependence profiles (blue) with corresponding ceteris\-paribus profiles (grey) for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.

By default, the `model_profile()` function computes the PD profiles only for continuous explanatory variables. To obtain the profiles for categorical variables, in the code that follows we use the argument `variable_type='categorical'`. Additionally, in the call to the `plot()` method we indicate that we want to display the profiles only for the variables *class* and *gender*.

```
pd_rf = titanic_rf_exp.model_profile(variable_type = 'categorical')
pd_rf.plot(variables = ['gender', 'class'])
```

The resulting plot is presented in Figure [17\.11](partialDependenceProfiles.html#fig:examplePythonMProfile3).

Figure 17\.11: Partial\-dependence profiles for *class* and *gender* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.

### 17\.7\.1 Grouped partial\-dependence profiles

The `model_profile()` function admits the `groups` argument that allows constructing PD profiles for groups of observations defined by the levels of an explanatory variable.
In the code below, we use the argument to compute the profiles for *age* and *fare*, while grouping them by *class*. Subsequently, we use the `plot()` method to obtain a graphical presentation of the results. The resulting plot is presented in Figure [17\.12](partialDependenceProfiles.html#fig:examplePythonMProfile4).

```
pd_rf = titanic_rf_exp.model_profile(groups = 'class',
                                     variables = ['age', 'fare'])
pd_rf.plot()
```

Figure 17\.12: Partial\-dependence profiles for *age* and *fare*, grouped by *class*, for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.

### 17\.7\.2 Contrastive partial\-dependence profiles

It may be of interest to compare PD profiles for several models. As an illustration, we will compare the random forest model with the logistic regression model `titanic_lr` (see Section [4\.3\.1](dataSetsIntro.html#model-titanic-python-lr)). First, we have got to compute the profiles for both models by using the `model_profile()` function.

```
pdp_rf = titanic_rf_exp.model_profile()
pdp_lr = titanic_lr_exp.model_profile()
```

Subsequently, we apply the `plot()` method to plot the profiles. Note that, in the code below, we use the `variables` argument to limit the display to the variables *age* and *fare*.

```
pdp_rf.plot(pdp_lr, variables = ['age', 'fare'])
```

As a result, the profiles for *age* and *fare* are presented in a single plot. The resulting graph is presented in Figure [17\.13](partialDependenceProfiles.html#fig:examplePythonMProfile6).

Figure 17\.13: Partial\-dependence profiles for *age* and *fare* for the random forest model and the logistic regression model for the Titanic data.
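To connect these tools back to the definition in equation [(17\.2\)](partialDependenceProfiles.html#eq:PDPprofile), the snippet below is a minimal, from\-scratch sketch of a PD profile: for every value on a grid, the selected variable is fixed at that value for all observations and the predictions are averaged. It assumes the fitted pipeline `titanic_rf` (with a `predict_proba()` method) and the data frame `X` used for the explainer above; `model_profile()` adds sampling of observations, repetitions, and plotting on top of this basic computation.

```
import numpy as np

def pd_profile(model, X, variable, grid=None):
    """Average of ceteris-paribus profiles, i.e. the estimator in equation (17.2)."""
    if grid is None:
        grid = np.linspace(X[variable].min(), X[variable].max(), 101)
    profile = []
    for z in grid:
        X_z = X.copy()
        X_z[variable] = z                                   # x_i^{j|=z} for every observation i
        profile.append(model.predict_proba(X_z)[:, 1].mean())
    return np.asarray(grid), np.asarray(profile)

grid_age, pd_age = pd_profile(titanic_rf, X, 'age')
```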
For example, if a random forest model indicates a non\-linear relationship between the dependent variable and an explanatory variable, then a suitable transformation of the explanatory variable may improve the fit or performance of a linear\-regression model. * *Evaluation of model performance at boundaries.* Models are known to have different behaviour at the boundaries of the possible range of a dependent variable, i.e., for the largest or the lowest values. For instance, random forest models are known to shrink predictions towards the average, whereas support\-vector machines are known for a larger variance at edges. Comparison of PD profiles may help to understand the differences in models’ behaviour at boundaries. 17\.2 Intuition --------------- To show how does the expected value of model prediction behave as a function of a selected explanatory variable, the average of a set of individual ceteris\-paribus (CP) profiles can be used. Recall that a CP profile (see Chapter [10](ceterisParibus.html#ceterisParibus)) shows the dependence of an instance\-level prediction on an explanatory variable. A PD profile is estimated by the mean of the CP profiles for all instances (observations) from a dataset. Note that, for additive models, CP profiles are parallel. In particular, they have got the same shape. Consequently, the mean retains the shape, while offering a more precise estimate. However, for models that, for instance, include interactions, CP profiles may not be parallel. In that case, the mean may not necessarily correspond to the shape of any particular profile. Nevertheless, it can still offer a summary of how (in general) do the model’s predictions depend on changes in a given explanatory variable. The left\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition) presents CP profiles for the explanatory variable *age* in the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 25 randomly selected instances (observations) from the Titanic dataset (see Section [4\.1](dataSetsIntro.html#TitanicDataset)). Note that the profiles are not parallel, indicating non\-additive effects of explanatory variables. The right\-hand\-side panel shows the mean of the CP profiles, which offers an estimate of the PD profile. Clearly, the shape of the PD profile does not capture, for instance, the shape of the group of five CP profiles shown at the top of the panel. Nevertheless, it does seem to reflect the fact that the majority of CP profiles suggest a substantial drop in the predicted probability of survival for the ages between 2 and 18\. Figure 17\.1: Ceteris\-paribus (CP) and partial\-dependence (PD) profiles for the random forest model for 25 randomly selected observations from the Titanic dataset. Left\-hand\-side plot: CP profiles for *age*; blue dots indicate the age and the corresponding prediction for the selected observations. Right\-hand\-side plot: CP profiles (grey lines) and the corresponding PD profile (blue line). 17\.3 Method ------------ ### 17\.3\.1 Partial\-dependence profiles The value of a PD profile for model \\(f()\\) and explanatory variable \\(X^j\\) at \\(z\\) is defined as follows: \\\[\\begin{equation} g\_{PD}^{j}(z) \= E\_{\\underline{X}^{\-j}}\\{f(X^{j\|\=z})\\}. 
### 17\.3\.2 Clustered partial\-dependence profiles As already mentioned, the mean of CP profiles is a good summary if the profiles are parallel. If they are not parallel, the average may not adequately represent the shape of a subset of profiles. To deal with this issue, one can consider clustering the profiles and calculating the mean separately for each cluster. To cluster the CP profiles, one may use standard methods like K\-means or hierarchical clustering. The similarities between observations can be calculated based on the Euclidean distance between CP profiles. Figure [17\.2](partialDependenceProfiles.html#fig:pdpPart4) illustrates an application of that approach to the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 100 randomly selected instances (observations) from the Titanic dataset. The CP profiles for the *age* variable are marked in grey. It can be noted that they could be split into three clusters: one for a group of passengers with a substantial drop in the predicted survival probability for ages below 18 (with the average represented by the blue line), one with an almost linear decrease of the probability with age (with the average represented by the red line), and one with almost constant predicted probability (with the average represented by the green line). The plot itself does not allow us to identify the variables that may be linked with these clusters, but additional exploratory analysis could be performed for this purpose. Figure 17\.2: Clustered partial\-dependence profiles for *age* for the random forest model for 100 randomly selected observations from the Titanic dataset. Grey lines indicate ceteris\-paribus profiles that are clustered into three groups with the average profiles indicated by the blue, green, and red lines.
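A minimal base\-R sketch of this clustering step (an illustration on simulated data with an interaction, not the Titanic example; the `DALEX` interface is shown in Section 17\.6\.2\): compute the matrix of CP profiles, cluster its rows with hierarchical clustering based on Euclidean distances, and average the profiles within each cluster.

```
# Rows of `cp` are CP profiles of individual observations on a common grid.
set.seed(17)
n  <- 300
df <- data.frame(x1 = runif(n), x2 = rbinom(n, 1, 0.5))
df$y <- df$x1 * df$x2 + rnorm(n, sd = 0.1)   # interaction => non-parallel CP profiles

model <- lm(y ~ x1 * x2, data = df)
grid  <- seq(0, 1, by = 0.1)

cp <- t(sapply(seq_len(n), function(i) {
  data_i    <- df[rep(i, length(grid)), ]    # copy observation i along the grid
  data_i$x1 <- grid
  predict(model, newdata = data_i)
}))

clusters <- cutree(hclust(dist(cp)), k = 2)  # Euclidean distances between profiles
cluster_profiles <- sapply(split(seq_len(n), clusters),
                           function(idx) colMeans(cp[idx, , drop = FALSE]))
round(cluster_profiles, 2)
# One cluster-average profile is roughly flat (observations with x2 = 0),
# the other increases with x1 (observations with x2 = 1).
```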
### 17\.3\.3 Grouped partial\-dependence profiles It may happen that we can identify an explanatory variable that influences the shape of CP profiles for the explanatory variable of interest. The most obvious situation is when a model includes an interaction between the variable and another one. In that case, a natural approach is to investigate the PD profiles for the variable of interest within the groups of observations defined by the variable involved in the interaction. Figure [17\.3](partialDependenceProfiles.html#fig:pdpPart5) illustrates an application of the approach to the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for 100 randomly selected instances (observations) from the Titanic dataset. The CP profiles for the explanatory variable *age* are marked in grey. The red and blue lines present the PD profiles for females and males, respectively. The gender\-specific averages have different shapes: the predicted survival probability for females is more stable across different ages, as compared to males. Thus, the PD profiles clearly indicate an interaction between age and gender. Figure 17\.3: Partial\-dependence profiles for two genders for the random forest model for 100 randomly selected observations from the Titanic dataset. Grey lines indicate ceteris\-paribus profiles for *age*. ### 17\.3\.4 Contrastive partial\-dependence profiles Comparison of clustered or grouped PD profiles for a single model may provide important insight into, for instance, the stability of the model’s predictions. PD profiles can also be compared between different models. Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7) presents PD profiles for *age* for the random forest model (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and the logistic regression model with splines for the Titanic data (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). The profiles are similar with respect to a general relationship between *age* and the predicted probability of survival (the younger the passenger, the higher the chance of survival). However, the profile for the random forest model is flatter. The difference between the two models is largest at the left edge of the age scale. This pattern can be seen as expected because random forest models, in general, shrink predictions towards the average and they are not very good for extrapolation outside the range of values observed in the training dataset. Figure 17\.4: Partial\-dependence profiles for *age* for the random forest (green line) and logistic regression (blue line) models for the Titanic dataset.
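To sketch such a comparison in a few lines of base R (simulated data, not the Titanic example; the `DALEX` interface is shown in Section 17\.6\.4\), one can estimate PD profiles, as in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest), for a simple and a flexible model and inspect where they disagree.

```
# Contrastive PD profiles: a linear model versus a flexible loess fit.
set.seed(17)
n  <- 500
df <- data.frame(x1 = runif(n), x2 = runif(n))
df$y <- df$x1^2 + df$x2 + rnorm(n, sd = 0.1)   # non-linear effect of x1

model_lm <- lm(y ~ x1 + x2, data = df)
model_lo <- loess(y ~ x1 + x2, data = df)      # flexible, non-parametric model

pd_profile <- function(model, data, variable, grid) {
  sapply(grid, function(z) {
    data_z <- data
    data_z[[variable]] <- z
    mean(predict(model, newdata = data_z), na.rm = TRUE)
  })
}

grid <- seq(0.05, 0.95, by = 0.1)              # stay inside the observed range
round(cbind(z      = grid,
            linear = pd_profile(model_lm, df, "x1", grid),
            loess  = pd_profile(model_lo, df, "x1", grid)), 2)
# The loess profile bends upwards for larger x1, while the linear-model profile
# is a straight line; the disagreement suggests adding, for instance,
# a quadratic term for x1 to the simpler model.
```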
17\.4 Example: apartment\-prices data ------------------------------------- In this section, we use PD profiles to evaluate the performance of the random forest model (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *construction year*. We consider the predictions for the training dataset `apartments`. ### 17\.4\.1 Partial\-dependence profiles Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1) presents CP profiles (grey lines) for 100 randomly\-selected apartments together with the estimated PD profile (blue line) for *construction year* and *surface*. The PD profile for *surface* suggests an approximately linear relationship between the explanatory variable and the predicted price. On the other hand, the PD profile for *construction year* is U\-shaped: the predicted price is the highest for the very new and very old apartments. Note that, while the data were simulated, they were generated to reflect the effect of a lower quality of building materials used in rapid housing construction after World War II. Figure 17\.5: Ceteris\-paribus and partial\-dependence profiles for *construction year* and *surface* for 100 randomly\-selected apartments for the random forest model for the apartment\-prices dataset. ### 17\.4\.2 Clustered partial\-dependence profiles Almost all CP profiles for *construction year*, presented in Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1), seem to be U\-shaped. The same shape is observed for the PD profile. One might want to confirm that the shape is, indeed, common for all the observations. The left\-hand\-side panel of Figure [17\.6](partialDependenceProfiles.html#fig:pdpApartment1clustered) presents clustered PD profiles for *construction year* for three clusters derived from the CP profiles presented in Figure [17\.5](partialDependenceProfiles.html#fig:pdpApartment1). The three PD profiles differ slightly in the size of the oscillations at the edges, but they are all U\-shaped. Thus, we could conclude that the overall PD profile adequately captures the shape of the CP profiles. Or, put differently, there is little evidence that there might be any strong interaction between year of construction and any other variable in the model. Similar conclusions can be drawn for the CP and PD profiles for *surface*, presented in the right\-hand\-side panel of Figure [17\.6](partialDependenceProfiles.html#fig:pdpApartment1clustered). Figure 17\.6: Ceteris\-paribus (grey lines) and partial\-dependence profiles (red, green, and blue lines) for three clusters for 100 randomly\-selected apartments for the random forest model for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*.
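As an aside, profiles like those in Figures 17\.5 and 17\.6 can be obtained with the `model_profile()` function described in Section 17\.6\. The calls below are only a sketch: they assume that an explainer\-object `explainer_apartments_rf` for the random forest model has already been constructed with `DALEX::explain()`, analogously to the Titanic examples.

```
library("DALEX")
# Assumed to exist (constructed as for the Titanic examples), e.g.:
# explainer_apartments_rf <- DALEX::explain(model = apartments_rf,
#                                           data  = apartments[, -1],
#                                           y     = apartments$m2.price)

# PD profiles for construction year and surface based on 100 apartments.
pdp_apart <- model_profile(explainer = explainer_apartments_rf,
                           variables = c("construction.year", "surface"),
                           N = 100)
plot(pdp_apart, geom = "profiles")

# Clustered PD profiles based on three clusters of CP profiles.
pdp_apart_clust <- model_profile(explainer = explainer_apartments_rf,
                                 variables = c("construction.year", "surface"),
                                 N = 100, k = 3)
plot(pdp_apart_clust, geom = "profiles")
```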
### 17\.4\.3 Grouped partial\-dependence profiles One of the categorical explanatory variables in the apartment prices dataset is *district*. We may want to investigate whether the relationship between the model’s predictions and *construction year* and *surface* is similar for all districts. Toward this aim, we can use grouped PD profiles, for groups of apartments defined by districts. Figure [17\.7](partialDependenceProfiles.html#fig:pdpApartment2) shows PD profiles for *construction year* (left\-hand\-side panel) and *surface* (right\-hand\-side panel) for each district. Several observations are worth making. First, profiles for apartments in “Srodmiescie” (Downtown) are clearly much higher than for other districts. Second, the profiles are roughly parallel, indicating that the effects of *construction year* and *surface* are similar for each level of *district*. Third, the profiles appear to form three clusters, i.e., “Srodmiescie” (Downtown), three districts close to “Srodmiescie” (namely “Mokotow”, “Ochota”, and “Ursynow”), and the six remaining districts. Figure 17\.7: Partial\-dependence profiles for separate districts for the random forest model for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*. ### 17\.4\.4 Contrastive partial\-dependence profiles One of the main challenges in predictive modelling is to avoid overfitting. The issue is particularly important for flexible models, such as random forest models. Figure [17\.8](partialDependenceProfiles.html#fig:pdpApartment3) presents PD profiles for *construction year* (left\-hand\-side panel) and *surface* (right\-hand\-side panel) for the linear\-regression model (see Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model. Several observations are worth making. The linear\-regression model cannot, of course, accommodate the non\-monotonic relationship between *construction year* and the price per square meter. However, for *surface*, both models support a linear relationship, though the slope of the line resulting from the linear regression is steeper. This may be seen as an expected difference, given that random forest models yield predictions that are shrunk towards the mean. Overall, we could cautiously conclude that there is not much evidence for overfitting of the more flexible random forest model. Note that the non\-monotonic relationship between *construction year* and the price per square meter might be the reason why the explanatory variable was found not to be important in the model in Section [16\.6](featureImportance.html#featureImportanceR). In Section [4\.5\.4](dataSetsIntro.html#predictionsApartments), we mentioned that a proper model exploration may suggest a way to construct a model with improved performance, as compared to the random forest and linear\-regression models. In this respect, it is worth observing that the profiles in Figure [17\.8](partialDependenceProfiles.html#fig:pdpApartment3) suggest that both models miss some aspects of the data. In particular, the linear\-regression model does not capture the U\-shaped relationship between *construction year* and the price. On the other hand, the effect of *surface* on the apartment price seems to be underestimated by the random forest model. Hence, one could conclude that, by addressing the issues, one could improve either of the models, possibly with an improvement in predictive performance. 
Figure 17\.8: Partial\-dependence profiles for the linear\-regression and random forest models for the apartment\-prices dataset. Left\-hand\-side panel: profiles for *construction year*. Right\-hand\-side panel: profiles for *surface*.
17\.5 Pros and cons ------------------- PD profiles, presented in this chapter, offer a simple way to summarize the effect of a particular explanatory variable on the dependent variable. They are easy to explain and intuitive. They can be obtained for sub\-groups of observations and compared across different models. For these reasons, they have gained in popularity and have been implemented in various software packages, including R and Python.
Given that the PD profiles are averages of CP profiles, they inherit the limitations of the latter. In particular, as CP profiles are problematic for correlated explanatory variables (see Section [10\.5](ceterisParibus.html#CPProsCons)), PD profiles are also not suitable for that case, as they may offer a crude and potentially misleading summarization. An approach to deal with this issue will be discussed in the next chapter. 17\.6 Code snippets for R ------------------------- In this section, we present the `DALEX` package for R, which covers the methods presented in this chapter. It uses the `ingredients` package with various implementations of variable profiles. Similar functions can be found in packages `pdp` (Greenwell [2017](#ref-pdpRPackage)), `ALEPlots` (Apley [2018](#ref-ALEPlotRPackage)), and `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)). For illustration purposes, we use the random forest model `titanic_rf` (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) for the Titanic data. Recall that the model has been developed to predict the probability of survival from the sinking of the Titanic. We first retrieve the version of the `titanic` data with imputed missing values and the `titanic_rf` model\-object via the `archivist` hooks, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). Then we construct the explainer for the model by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that, beforehand, we have got to load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. ``` library("DALEX") library("randomForest") titanic_imputed <- archivist::aread("pbiecek/models/27e5c") titanic_rf <- archivist::aread("pbiecek/models/4e0fc") explainer_rf <- DALEX::explain(model = titanic_rf, data = titanic_imputed[, -9], y = titanic_imputed$survived, label = "Random Forest") ``` ### 17\.6\.1 Partial\-dependence profiles The function that allows computation of PD profiles in the `DALEX` package is `model_profile()`. The only required argument is `explainer`, which indicates the explainer\-object (obtained with the help of the `explain()` function, see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)) for the model to be explained. The other useful arguments include: * `variables`, a character vector providing the names of the explanatory variables, for which the profile is to be computed; by default, `variables = NULL`, in which case computations are performed for all numerical variables included in the model. * `N`, the number of (randomly sampled) observations that are to be used for the calculation of the PD profiles (`N = 100` by default); `N = NULL` implies the use of the entire dataset included in the explainer\-object. * `type`, the type of the PD profile, with values `"partial"` (default), `"conditional"`, and `"accumulated"`. * `variable_type`, a character string indicating whether calculations should be performed only for `"numerical"` (continuous) explanatory variables (default) or only for `"categorical"` variables. * `groups`, the name of the explanatory variable that will be used to group profiles, with `groups = NULL` by default (in which case no grouping of profiles is applied). 
* `k`, the number of clusters to be created with the help of the `hclust()` function, with `k = NULL` used by default and implying no clustering. In the example below, we calculate the PD profile for *age* by applying the `model_profile()` function to the explainer\-object for the random forest model. By default, the profile is based on 100 randomly selected observations. ``` pdp_rf <- model_profile(explainer = explainer_rf, variables = "age") ``` The resulting object of class `model_profile` contains the PD profile for *age*. By applying the `plot()` function to the object, we obtain a plot of the PD profile. Had we not used the `variables` argument, we would have obtained separate plots of PD profiles for all continuous explanatory variables. ``` library("ggplot2") plot(pdp_rf) + ggtitle("Partial-dependence profile for age") ``` The resulting plot for *age* (not shown) corresponds to the one presented in Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7). It may slightly differ, as the two plots are based on different sets of (randomly selected) 100 observations from the Titanic dataset. A PD profile can be plotted on top of CP profiles. This is a very useful feature if we want to check how well the former captures the latter. It is worth noting that, apart from the PD profile, the object created by the `model_profile()` function also contains the CP profiles for the selected observations and all explanatory variables included in the model. By specifying the argument `geom = "profiles"` in the `plot()` function, we add the CP profiles to the plot of the PD profile. ``` plot(pdp_rf, geom = "profiles") + ggtitle("Ceteris-paribus and partial-dependence profiles for age") ``` The resulting plot (not shown) is essentially the same as the one shown in the right\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition). ### 17\.6\.2 Clustered partial\-dependence profiles To calculate clustered PD profiles, we have got to cluster the CP profiles. Toward this aim, we use the `k` argument of the `model_profile()` function that specifies the number of clusters that are to be formed by the `hclust()` function. In the code below, we specify that three clusters are to be formed for profiles for *age*. ``` pdp_rf_clust <- model_profile(explainer = explainer_rf, variables = "age", k = 3) ``` The clustered PD profiles can be plotted on top of the CP profiles by using the `geom = "profiles"` argument in the `plot()` function. ``` plot(pdp_rf_clust, geom = "profiles") + ggtitle("Clustered partial-dependence profiles for age") ``` The resulting plot (not shown) resembles the one shown for the random forest model in Figure [17\.2](partialDependenceProfiles.html#fig:pdpPart4). The only difference may stem from the fact that the two plots are based on a different set of (randomly selected) 100 observations from the Titanic dataset. ### 17\.6\.3 Grouped partial\-dependence profiles The `model_profile()` function admits the `groups` argument that allows constructing PD profiles for groups of observations defined by the levels of an explanatory variable. In the example below, we use the argument to obtain PD profiles for *age*, while grouping them by *gender*. ``` pdp_rf_gender <- model_profile(explainer = explainer_rf, variables = "age", groups = "gender") ``` The grouped PD profiles can be plotted on top of the CP profiles by using the `geom = "profiles"` argument in the `plot()` function. 
``` plot(pdp_rf_gender, geom = "profiles") + ggtitle("Partial-dependence profiles for age, grouped by gender") ``` The resulting plot (not shown) resembles the one shown in Figure [17\.3](partialDependenceProfiles.html#fig:pdpPart5). ### 17\.6\.4 Contrastive partial\-dependence profiles It may be of interest to compare PD profiles for several models. We will compare the random forest model with the logistic regression model `titanic_lmr` (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)). For the latter, we first have to load it via the `archivist` hook, as listed in Section [4\.2\.7](dataSetsIntro.html#ListOfModelsTitanic). Then we construct the explainer for the model by using function `explain()`. Note that we first load the `rms` package, as the model was fitted by using function `lrm()` from this package (see Section [4\.2\.1](dataSetsIntro.html#model-titanic-lmr)) and it is important to have the corresponding `predict()` function available. Finally, we apply the `model_profile()` function to compute CP profiles and the PD profile for *age* based on 100 randomly\-selected observations from the Titanic dataset. We also repeat the calculations of the profiles for the random forest model. ``` library("rms") titanic_lmr <- archivist::aread("pbiecek/models/58b24") explainer_lmr <- DALEX::explain(model = titanic_lmr, data = titanic_imputed[, -9], y = titanic_imputed$survived, label = "Logistic Regression") pdp_lmr <- model_profile(explainer = explainer_lmr, variables = "age") pdp_rf <- model_profile(explainer = explainer_rf, variables = "age") ``` To overlay the PD profiles for *age* for the two models in a single plot, we apply the `plot()` function to the `model_profile`\-class objects for the two models that contain the profiles for *age*. ``` plot(pdp_rf, pdp_lmr) + ggtitle("Partial-dependence profiles for age for two models") ``` As a result, the profiles are plotted in a single plot. The resulting graph (not shown) is essentially the same as the one presented in Figure [17\.4](partialDependenceProfiles.html#fig:pdpPart7), with a possible difference due to the use of a different set of (randomly selected) 100 observations from the Titanic dataset.
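The remaining arguments of `model_profile()` listed at the beginning of this section can be used in the same way. The calls below are a brief, illustrative sketch (they reuse the explainer `explainer_rf` created earlier):

```
# PD profiles for the categorical explanatory variables only.
pdp_rf_cat <- model_profile(explainer = explainer_rf,
                            variable_type = "categorical")
plot(pdp_rf_cat)

# Profile for age based on all observations (N = NULL) rather than a random
# subset, using the "accumulated" type discussed in the next chapter.
ale_rf <- model_profile(explainer = explainer_rf,
                        variables = "age",
                        N = NULL, type = "accumulated")
plot(ale_rf)
```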
17\.7 Code snippets for Python ------------------------------ In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. Similar functions can be found in library `PDPbox` (Jiangchun [2018](#ref-PDPbox)). For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose. ``` import dalex as dx titanic_rf_exp = dx.Explainer(titanic_rf, X, y, label = "Titanic RF Pipeline") ``` The function that allows calculations of PD profiles is `model_profile()`. By default, it calculates profiles for all continuous variables.
The other useful arguments include: * `variables`, a `str`, `list`, `np.ndarray` or `pd.Series` providing the names of the explanatory variables for which the profile is to be computed; by default computations are performed for all numerical variables included in the model. * `N`, the number of (randomly sampled) observations that are to be used for the calculation of the PD profiles (`N = 300` by default); `N = None` implies the use of the entire dataset included in the explainer\-object. * `B`, the number of times (by default, 10\) the entire procedure is to be repeated. * `type`, the type of the PD profile, with values `'partial'` (default), `'conditional'`, and `'accumulated'`. * `variable_type`, a character string indicating whether calculations should be performed only for `'numerical'` (continuous) explanatory variables (default) or only for `'categorical'` variables. * `groups`, the name or list of names of the explanatory variable that will be used to group profiles, with `groups = None` by default (in which case no grouping of profiles is applied). In the example below, we calculate the PD profiles for *age* and *fare* by applying the `model_profile()` function to the explainer\-object for the random forest model. By default, the profile is based on 300 randomly selected observations. ``` pd_rf = titanic_rf_exp.model_profile(variables = ['age', 'fare']) pd_rf.result ``` The results can be visualised by applying the `plot()` method. Figure [17\.9](partialDependenceProfiles.html#fig:examplePythonMProfile2) presents the created plot. ``` pd_rf.plot() ``` Figure 17\.9: Partial\-dependence profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python. A PD profile can be plotted on top of CP profiles. This is a very useful feature if we want to check how well the former captures the latter. By specifying the argument `geom = 'profiles'` in the `plot()` method, we add the CP profiles to the plot of the PD profile. ``` pd_rf.plot(geom = 'profiles') ``` The left\-hand\-side panel of the resulting plot (see Figure [17\.10](partialDependenceProfiles.html#fig:examplePythonMProfile7)) is essentially the same as the one shown in the right\-hand\-side panel of Figure [17\.1](partialDependenceProfiles.html#fig:pdpIntuition). Figure 17\.10: Partial\-dependence profiles (blue) with corresponding ceteris\-paribus profiles (grey) for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python. By default, the `model_profile()` function computes the PD profiles only for continuous explanatory variables. To obtain the profiles for categorical variables, in the code that follows we use the argument `variable_type='categorical'`. Additionally, in the call to the `plot()` method we indicate that we want to display the profiles only for variables *class* and *gender*. ``` pd_rf = titanic_rf_exp.model_profile( variable_type = 'categorical') pd_rf.plot(variables = ['gender', 'class']) ``` The resulting plot is presented in Figure [17\.11](partialDependenceProfiles.html#fig:examplePythonMProfile3). Figure 17\.11: Partial\-dependence profiles for *class* and *gender* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python. ### 17\.7\.1 Grouped partial\-dependence profiles The `model_profile()` function admits the `groups` argument that allows constructing PD profiles for groups of observations defined by the levels of an explanatory variable.
In the code below, we use the argument to compute the profiles for *age* and *fare*, while grouping them by *class*. Subsequently, we use the `plot()` method to obtain a graphical presentation of the results. The resulting plot is presented in Figure [17\.12](partialDependenceProfiles.html#fig:examplePythonMProfile4). ``` pd_rf = titanic_rf_exp.model_profile(groups = 'class', variables = ['age', 'fare']) pd_rf.plot() ``` Figure 17\.12: Partial\-dependence profiles for *age* and *fare*, grouped by *class*, for the random forest model for the Titanic data, obtained by using the `plot()` method in Python. ### 17\.7\.2 Contrastive partial\-dependence profiles It may be of interest to compare PD profiles for several models. As an illustration, we will compare the random forest model with the logistic regression model `titanic_lr` (see Section [4\.3\.1](dataSetsIntro.html#model-titanic-python-lr)). First, we have to compute the profiles for both models by using the `model_profile()` function. ``` pdp_rf = titanic_rf_exp.model_profile() pdp_lr = titanic_lr_exp.model_profile() ``` Subsequently, we apply the `plot()` method to plot the profiles. Note that, in the code below, we use the `variables` argument to limit the display to variables *age* and *fare*. ``` pdp_rf.plot(pdp_lr, variables = ['age', 'fare']) ``` As a result, the profiles for *age* and *fare* are presented in a single plot. The resulting graph is presented in Figure [17\.13](partialDependenceProfiles.html#fig:examplePythonMProfile6). Figure 17\.13: Partial\-dependence profiles for *age* and *fare* for the random forest model and the logistic regression model for the Titanic data.
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/accumulatedLocalProfiles.html
18 Local\-dependence and Accumulated\-local Profiles ==================================================== 18\.1 Introduction ------------------ Partial\-dependence (PD) profiles, introduced in the previous chapter, are easy to explain and interpret, especially given their estimation as the mean of ceteris\-paribus (CP) profiles. However, as it was mentioned in Section [17\.5](partialDependenceProfiles.html#PDPProsCons), the profiles may be misleading if, for instance, explanatory variables are correlated. In many applications, this is the case. For example, in the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)), one can expect that variables *surface* and *number of rooms* may be positively correlated, because apartments with a larger number of rooms usually also have a larger surface. Thus, in ceteris\-paribus profiles, it is not realistic to consider, for instance, an apartment with five rooms and a surface of 20 square meters. Similarly, in the Titanic dataset, a positive association can be expected for the values of variables *fare* and *class*, as tickets in the higher classes are more expensive than in the lower classes. In this chapter, we present accumulated\-local profiles that address this issue. As they are related to local\-dependence profiles, we introduce the latter first. Both approaches were proposed by Apley ([2018](#ref-ALEPlotRPackage)). 18\.2 Intuition --------------- Let us consider the following simple linear model with two explanatory variables: \\\[\\begin{equation} Y \= X^1 \+ X^2 \+ \\varepsilon \= f(X^1, X^2\) \+ \\varepsilon, \\tag{18\.1} \\end{equation}\\] where \\(\\varepsilon \\sim N(0,0\.1^2\)\\). For this model, the effect of \\(X^1\\) for any value of \\(X^2\\) is linear, i.e., it can be described by a straight line with the intercept equal to 0 and the slope equal to 1\. Assume that observations of explanatory variables \\(X^1\\) and \\(X^2\\) are uniformly distributed over the unit square, as illustrated in the left\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1) for a set of 1000 observations. The right\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1) presents the scatter plot of the observed values of \\(Y\\) in function of \\(X^1\\). The plot for \\(X^2\\) is, essentially, the same and we do not show it. Figure 18\.1: Observations of two explanatory variables uniformly distributed over the unit square (left\-hand\-side panel) and the scatter plot of the observed values of the dependent variable \\(Y\\) in function of \\(X^1\\) (right\-hand\-side panel). In view of the plot shown in the right\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1), we could consider using a simple linear model with \\(X^1\\) and \\(X^2\\) as explanatory variables. Assume, however, that we would like to analyze the data without postulating any particular parametric form of the effect of the variables. A naïve way would be to split the observed range of each of the two variables into, for instance, five intervals (as illustrated in the left\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1)), and estimate the means of observed values of \\(Y\\) for the resulting 25 groups of observations. Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans) presents the sample means (with rows and columns defined by the ranges of possible values of, respectively, \\(X^1\\) and \\(X^2\\)). 
Table 18\.1: Sample means of \\(Y\\) for 25 groups of observations resulting from splitting the ranges of explanatory variables \\(X^1\\) and \\(X^2\\) into five intervals (see the left\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1)). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 0\.19 | 0\.42 | 0\.63 | 0\.80 | 0\.99 | | (0\.2,0\.4] | 0\.39 | 0\.59 | 0\.81 | 1\.01 | 1\.19 | | (0\.4,0\.6] | 0\.59 | 0\.81 | 0\.98 | 1\.20 | 1\.44 | | (0\.6,0\.8] | 0\.76 | 1\.00 | 1\.20 | 1\.40 | 1\.58 | | (0\.8,1] | 1\.01 | 1\.22 | 1\.38 | 1\.58 | 1\.77 | Table [18\.2](accumulatedLocalProfiles.html#tab:FullDataNumbers) presents the number of observations for each of the sample means from Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans). Table 18\.2: Number of observations for the sample means from Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | total | | --- | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 51 | 39 | 31 | 43 | 43 | 207 | | (0\.2,0\.4] | 39 | 40 | 35 | 53 | 42 | 209 | | (0\.4,0\.6] | 28 | 42 | 35 | 49 | 40 | 194 | | (0\.6,0\.8] | 37 | 30 | 36 | 55 | 45 | 203 | | (0\.8,1] | 43 | 46 | 36 | 28 | 34 | 187 | | total | 198 | 197 | 173 | 228 | 204 | 1000 | By using this simple approach, we can compute the PD profile for \\(X^1\\). Consider \\(X^1\=z\\). To apply the estimator defined in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest), we need the predicted values \\(\\hat{f}(z,x^2\_i)\\) for any observed value of \\(x^2\_i \\in \[0,1]\\). As our observations are uncorrelated and fill\-in the unit\-square, we can use the suitable mean values for that purpose. In particular, for \\(z \\in \[0,0\.2]\\), we get \\\[\\begin{align} \\hat g\_{PD}^{1}(z) \&\= \\frac{1}{1000}\\sum\_{i}\\hat{f}(z,x^2\_i) \= \\nonumber \\\\ \&\= (198\\times 0\.19 \+ 197\\times 0\.42 \+ 173\\times 0\.63 \+ \\nonumber\\\\ \& \\ \\ \\ \\ \\ 228\\times 0\.80 \+ 204\\times 1\.00\)/1000 \= 0\.6\. \\nonumber \\tag{18\.2} \\end{align}\\] By following the same principle, for \\(z \\in (0\.2,0\.4]\\), \\((0\.4,0\.6]\\), \\((0\.6,0\.8]\\), and \\((0\.8,1]\\) we get the values of 0\.8, 1, 1\.2, and 1\.4, respectively. Thus, overall, we obtain a piecewise\-constant profile with values that capture the (correct) linear effect of \\(X^1\\) in model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). In fact, by using, for instance, midpoints of the intervals for \\(z\\), i.e., 0\.1, 0\.3, 0\.5, 0\.7, and 0\.9, we could describe the profile by the linear function \\(0\.5\+z\\). Assume now that we are given the data only from the regions on the diagonal of the unit square, as illustrated in the left\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2). In that case, the observed values of \\(X^1\\) and \\(X^2\\) are strongly correlated, with the estimated value of Pearson’s correlation coefficient equal to 0\.96\. The right\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) presents the scatter plot of the observed values of \\(Y\\) in the function of \\(X^1\\). Figure 18\.2: Correlated observations of two explanatory variables (left\-hand\-side panel) and the scatter plot of the observed values of the dependent variable \\(Y\\) in the function of \\(X^1\\) (right\-hand\-side panel). 
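As a short aside (not part of the original analysis), the binned\-means calculation above can be reproduced with a few lines of base R on freshly simulated data; the cell means will differ slightly from those in Tables 18\.1 and 18\.2 because of the random seed.

```
# Naive "model": within-cell means of y over a 5 x 5 grid of (x1, x2) bins.
set.seed(18)
n  <- 1000
x1 <- runif(n); x2 <- runif(n)
y  <- x1 + x2 + rnorm(n, sd = 0.1)

breaks <- seq(0, 1, by = 0.2)
b1 <- cut(x1, breaks, include.lowest = TRUE)
b2 <- cut(x2, breaks, include.lowest = TRUE)
cell_mean <- tapply(y, list(b1, b2), mean)    # analogue of Table 18.1

# PD profile for x1: for each x1-bin, average the cell means over the x2-bins
# of all observations (the marginal distribution of x2), as in (18.2).
pd_x1 <- apply(cell_mean, 1, function(row) mean(row[b2]))
round(pd_x1, 2)
# Approximately 0.6, 0.8, 1.0, 1.2, 1.4: the profile recovers the linear
# effect of x1 (roughly 0.5 + z at the bin midpoints).
```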
Now, the “naïve” modelling approach would amount to using only five sample means, as in the table below. Table 18\.3: Sample means of \\(Y\\) for five groups of observations (see the left\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2)). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 0\.19 | NA | NA | NA | NA | | (0\.2,0\.4] | NA | 0\.59 | NA | NA | NA | | (0\.4,0\.6] | NA | NA | 0\.98 | NA | NA | | (0\.6,0\.8] | NA | NA | NA | 1\.4 | NA | | (0\.8,1] | NA | NA | NA | NA | 1\.77 | When computing the PD profile for \\(X^1\\), we now encounter the issue related to the fact that, for instance, for \\(z \\in \[0,0\.2]\\), we have not got any observations and, hence, any sample mean for \\(x^2\_i\>0\.2\\). To overcome this issue, we could extrapolate the predictions (i.e., mean values) obtained for other intervals of \\(z\\). That is, we could assume that, for \\(x^2\_i \\in (0\.2,0\.4]\\), the prediction is equal to 0\.59, for \\(x^2\_i \\in (0\.4,0\.6]\\) it is equal to 0\.98, and so on. This leads to the following value of the PD profile for \\(z \\in \[0,0\.2]\\): \\\[\\begin{align} \\hat g\_{PD}^{1}(z) \&\= \\frac{1}{51 \+ 40 \+ 35 \+ 55 \+ 34}\\sum\_{i}\\hat{f}(z,x^2\_i) \= \\nonumber \\\\ \&\= \\frac{1}{215}(51\\times0\.19 \+ 40\\times0\.59 \+ 35\\times0\.98 \+ \\nonumber \\\\ \& \\ \\ \\ \\ \\ 55\\times1\.40 \+ 34\\times1\.77\)\=0\.95\. \\end{align}\\] This is a larger value than 0\.6 computed in [(18\.2\)](accumulatedLocalProfiles.html#eq:fullDataPD) for the uncorrelated data. The reason is the extrapolation: for instance, for \\(z \\in \[0,0\.2]\\) and \\(x^2\_i \\in (0\.6,0\.8]\\), we use 1\.40 as the predicted value of \\(Y\\). However, Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans) indicates that the sample mean for those observations is equal to 0\.80\. In fact, by using the same extrapolation principle, we get \\(\\hat g\_{PD}^{1}(z) \= 0\.95\\) also for \\(z \\in (0\.2,0\.4]\\), \\((0\.4,0\.6]\\), \\((0\.6,0\.8]\\), and \\((0\.8,1]\\). Thus, the obtained profile indicates no effect of \\(X^1\\), which is clearly a wrong conclusion. While the modelling approach presented in the example above may seem to be simplistic, it does illustrate the issue that would also appear for other flexible modelling methods like, for instance, regression trees. In particular, the left\-hand\-side panel of Figure [18\.3](accumulatedLocalProfiles.html#fig:PDPcorr3) presents a regression tree fitted to the data shown in Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) by using function `tree()` from the R package `tree`. The right\-hand\-side panel of Figure [18\.3](accumulatedLocalProfiles.html#fig:PDPcorr3) presents the corresponding split of the observations. According to the model, the predicted value of \\(Y\\) for the observations in the region \\(x^1 \\in \[0,0\.2]\\) and \\(x^2 \\in \[0\.8,1]\\) would be equal to 1\.74\. This extrapolation implies a substantial overestimation, as the true expected value of \\(Y\\) in the region is equal to 1\. Note that the latter is well estimated by the sample mean equal to 0\.99 (see Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans)) in the case of the uncorrelated data shown in Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1). The PD profile for \\(X^1\\) for the regression tree would be equal to 0\.2, 0\.8, and 1\.5 for \\(z \\in \[0,0\.2]\\), \\((0\.2,0\.6]\\), and \\((0\.6,1]\\), respectively. 
It does show an effect of \\(X^1\\), but if we used midpoints of the intervals for \\(z\\), i.e., 0\.1, 0\.4, and 0\.8, we could (approximately) describe the profile by the linear function \\(2z\\), i.e., with a slope larger than (the true value of) 1\. Figure 18\.3: Results of fitting of a regression tree to the data shown in Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) (left\-hand\-side panel) and the corresponding split of the observations of the two explanatory variables (right\-hand\-side panel). The issue stems from the fact that, in the definition [(17\.1\)](partialDependenceProfiles.html#eq:PDPdef0) of the PD profile, the expected value of model predictions is computed by using the marginal distribution of \\(X^2\\), which disregards the value of \\(X^1\\). Clearly, this is an issue when the explanatory variables are correlated. This observation suggests a modification: instead of the marginal distribution, one might use the conditional distribution of \\(X^2\\) given \\(X^1\\), because it reflects the association between the two variables. The modification leads to the definition of an LD profile. It turns out, however, that the modification does not fully address the issue of correlated explanatory variables. As argued by Apley and Zhu ([2020](#ref-Apley2019)), if an explanatory variable is correlated with some other variables, the LD profile for the variable will still capture the effect of the other variables. This is because the profile is obtained by marginalizing over (in fact, ignoring) the remaining variables in the model, which results in an effect similar to the “omitted variable” bias in linear regression. Thus, in this respect, LD profiles share the same limitation as PD profiles. To address the limitation, Apley and Zhu ([2020](#ref-Apley2019)) proposed the concept of local\-dependence effects and accumulated\-local (AL) profiles. 18\.3 Method ------------ ### 18\.3\.1 Local\-dependence profile Local\-dependence (LD) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows: \\\[\\begin{equation} g\_{LD}^{f, j}(z) \= E\_{\\underline{X}^{\-j}\|X^j\=z}\\left\\{f\\left(\\underline{X}^{j\|\=z}\\right)\\right\\}. \\tag{18\.3} \\end{equation}\\] Thus, it is the expected value of the model predictions over the conditional distribution of \\(\\underline{X}^{\-j}\\) given \\(X^j\=z\\), i.e., over the joint distribution of all explanatory variables other than \\(X^j\\) conditional on the value of the latter variable set to \\(z\\). Or, in other words, it is the expected value of the CP profiles for \\(X^j\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), over the conditional distribution of \\(\\underline{X}^{\-j} \| X^j \= z\\). As proposed by Apley and Zhu ([2020](#ref-Apley2019)), LD profile can be estimated as follows: \\\[\\begin{equation} \\hat g\_{LD}^{j}(z) \= \\frac{1}{\|N\_j\|} \\sum\_{k\\in N\_j} f\\left(\\underline{x}\_k^{j\| \= z}\\right), \\tag{18\.4} \\end{equation}\\] where \\(N\_j\\) is the set of observations with the value of \\(X^j\\) “close” to \\(z\\) that is used to estimate the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\=z\\). Note that, in general, the estimator given in [(18\.4\)](accumulatedLocalProfiles.html#eq:LDPest) is neither smooth nor continuous at boundaries between subsets \\(N\_j\\). 
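To make the estimator in [(18\.4\)](accumulatedLocalProfiles.html#eq:LDPest) more concrete, a minimal R sketch for a continuous explanatory variable is given below. The helper function `ld_naive()` and its `eps` argument are hypothetical (they are not part of any package used in this chapter); they only illustrate the idea of averaging predictions over a neighbourhood of \\(z\\).

```
# Sketch of estimator (18.4): average the model's predictions, with the j-th
# variable replaced by z, over the observations whose value of that variable
# lies close to z (the set N_j).
ld_naive <- function(model, data, variable, z, eps = 0.1,
                     predict_function = predict) {
  in_Nj  <- abs(data[[variable]] - z) <= eps   # neighbourhood N_j
  data_z <- data[in_Nj, , drop = FALSE]
  data_z[[variable]] <- z                      # observations x_k^{j|=z}
  mean(predict_function(model, data_z))
}

# possible use with a model fitted to the simulated data from the earlier sketch:
# fit <- lm(y ~ x1 + x2, data = sim_data)
# ld_naive(fit, sim_data, "x1", z = 0.5)
```

Because the neighbourhood changes abruptly as \\(z\\) moves across the observations, such an estimate is typically bumpy, which motivates the smooth, kernel\-weighted version introduced next.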
A smooth estimator for \\(g\_{LD}^{f,j}(z)\\) can be defined as follows:

\\\[\\begin{equation} \\tilde g\_{LD}^{j}(z) \= \\frac{1}{\\sum\_k w\_{k}(z)} \\sum\_{i \= 1}^n w\_i(z) f\\left(\\underline{x}\_i^{j\| \= z}\\right), \\tag{18\.5} \\end{equation}\\]

where weights \\(w\_i(z)\\) capture the distance between \\(z\\) and \\(x\_i^j\\). In particular, for a categorical variable, we may just use the indicator function \\(w\_i(z) \= 1\_{z \= x^j\_i}\\), while for a continuous variable we may use the Gaussian kernel:

\\\[\\begin{equation} w\_i(z) \= \\phi(z \- x\_i^j, 0, s), \\tag{18\.6} \\end{equation}\\]

where \\(\\phi(y,0,s)\\) is the density of a normal distribution with mean 0 and standard deviation \\(s\\). Note that \\(s\\) plays the role of a smoothing factor.

As already mentioned in Section [18\.2](accumulatedLocalProfiles.html#ALPIntuition), if an explanatory variable is correlated with some other variables, the LD profile for the variable will capture the effect of all of the variables. For instance, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). Assume that \\(X^1\\) has a uniform distribution on \\(\[0,1]\\) and that \\(X^1\=X^2\\), i.e., explanatory variables are perfectly correlated. In that case, the LD profile for \\(X^1\\) is given by

\\\[ g\_{LD}^{1}(z) \= E\_{X^2\|X^1\=z}(z\+X^2\) \= z \+ E\_{X^2\|X^1\=z}(X^2\) \= 2z. \\]

Hence, it suggests an effect of \\(X^1\\) that is twice as large as the correct one. To address the limitation, AL profiles can be used. We present them in the next section.

### 18\.3\.2 Accumulated\-local profile

Consider model \\(f()\\) and define

\\\[ q^j(\\underline{u})\=\\left\\{ \\frac{\\partial f(\\underline{x})}{\\partial x^j} \\right\\}\_{\\underline{x}\=\\underline{u}}. \\]

The accumulated\-local (AL) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows:

\\\[\\begin{equation} g\_{AL}^{j}(z) \= \\int\_{z\_0}^z \\left\[E\_{\\underline{X}^{\-j}\|X^j\=v}\\left\\{ q^j(\\underline{X}^{j\|\=v}) \\right\\}\\right] dv \+ c, \\tag{18\.7} \\end{equation}\\]

where \\(z\_0\\) is a value close to the lower bound of the effective support of the distribution of \\(X^j\\) and \\(c\\) is a constant, usually selected so that \\(E\_{X^j}\\left\\{g\_{AL}^{j}(X^j)\\right\\} \= 0\\).

To interpret [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), note that \\(q^j(\\underline{x}^{j\|\=v})\\) describes the local effect (change) of the model due to \\(X^j\\). Or, to put it in other words, \\(q^j(\\underline{x}^{j\|\=v})\\) describes how much the CP profile for \\(X^j\\) changes at \\((x^1,\\ldots,x^{j\-1},v,x^{j\+1},\\ldots,x^p)\\). This effect (change) is averaged over the “relevant” (according to the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\\)) values of \\(\\underline{x}^{\-j}\\) and, subsequently, accumulated (integrated) over values of \\(v\\) up to \\(z\\).

As argued by Apley and Zhu ([2020](#ref-Apley2019)), the averaging of the local effects allows avoiding the issue, present in the PD and LD profiles, of capturing the effect of other variables in the profile for a particular variable in additive models (without interactions). To see this, one can consider the approximation

\\\[ f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{x}^{j\|\=v}) \\approx q^j(\\underline{x}^{j\|\=v})dv, \\]

and note that the difference \\(f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{x}^{j\|\=v})\\), for a model without interaction, effectively removes the effect of all variables other than \\(X^j\\).
For example, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). In that case, \\(f(x^1,x^2\)\=x^1\+x^2\\) and \\(q^1(\\underline{u}) \= 1\\). Thus,

\\\[ f(u\+du,x^2\)\-f(u,x^2\) \= (u \+ du \+ x^2\) \- (u \+ x^2\) \= du \= q^1(u)du. \\]

Consequently, irrespective of the joint distribution of \\(X^1\\) and \\(X^2\\) and upon setting \\(c\=z\_0\\), we get

\\\[ g\_{AL}^{1}(z) \= \\int\_{z\_0}^z \\left\\{E\_{{X}^{2}\|X^1\=v}(1\)\\right\\} dv \+ z\_0 \= z. \\]

To estimate an AL profile, one replaces the integral in [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef) by a summation and the derivative with a finite difference (Apley and Zhu [2020](#ref-Apley2019)). In particular, consider a partition of the range of observed values \\(x\_{i}^j\\) of variable \\(X^j\\) into \\(K\\) intervals \\(N\_j(k)\=\\left(z\_{k\-1}^j,z\_k^j\\right]\\) (\\(k\=1,\\ldots,K\\)). Note that \\(z\_0^j\\) can be chosen just below \\(\\min(x\_1^j,\\ldots,x\_N^j)\\) and \\(z\_K^j\=\\max(x\_1^j,\\ldots,x\_N^j)\\). Let \\(n\_j(k)\\) denote the number of observations \\(x\_i^j\\) falling into \\(N\_j(k)\\), with \\(\\sum\_{k\=1}^K n\_j(k)\=n\\). An estimator of the AL profile for variable \\(X^j\\) can then be constructed as follows:

\\\[\\begin{equation} \\widehat{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^{k\_j(z)} \\frac{1}{n\_j(k)} \\sum\_{i: x\_i^j \\in N\_j(k)} \\left\\{ f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right) \\right\\} \- \\hat{c}, \\tag{18\.8} \\end{equation}\\]

where \\(k\_j(z)\\) is the index of interval \\(N\_j(k)\\) in which \\(z\\) falls, i.e., \\(z \\in N\_j\\{k\_j(z)\\}\\), and \\(\\hat{c}\\) is selected so that \\(\\sum\_{i\=1}^n \\widehat{g}\_{AL}^{f,j}(x\_i^j)\=0\\). To interpret [(18\.8\)](accumulatedLocalProfiles.html#eq:ALPest), note that difference \\(f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right)\\) corresponds to the difference of the CP profile for the \\(i\\)\-th observation at the limits of interval \\(N\_j(k)\\). These differences are then averaged across all observations for which the observed value of \\(X^j\\) falls into the interval and are then accumulated. Note that, in general, \\(\\widehat{g}\_{AL}^{f,j}(z)\\) is not smooth at the boundaries of intervals \\(N\_j(k)\\). A smooth estimate can be obtained as follows:

\\\[\\begin{equation} \\widetilde{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^K \\left\[ \\frac{1}{\\sum\_{l} w\_l(z\_k)} \\sum\_{i\=1}^N w\_{i}(z\_k) \\left\\{f\\left(\\underline{x}\_i^{j\| \= z\_k}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_k \- \\Delta}\\right)\\right\\}\\right] \- \\hat{c}, \\tag{18\.9} \\end{equation}\\]

where points \\(z\_k\\) (\\(k\=0, \\ldots, K\\)) form a uniform grid covering the interval \\((z\_0,z)\\) with step \\(\\Delta \= (z\-z\_0\)/K\\), and weight \\(w\_i(z\_k)\\) captures the distance between point \\(z\_k\\) and observation \\(x\_i^j\\). In particular, we may use similar weights as in the case of [(18\.5\)](accumulatedLocalProfiles.html#eq:LDPest2).

### 18\.3\.3 Dependence profiles for a model with interaction and correlated explanatory variables: an example

In this section, we illustrate in more detail the behavior of PD, LD, and AL profiles for a model with an interaction between correlated explanatory variables. In particular, let us consider the following simple model for two explanatory variables:

\\\[\\begin{equation} f(X^1, X^2\) \= (X^1 \+ 1\)\\cdot X^2\. \\tag{18\.10} \\end{equation}\\]
Moreover, assume that explanatory variables \\(X^1\\) and \\(X^2\\) are uniformly distributed over the interval \\(\[\-1,1]\\) and perfectly correlated, i.e., \\(X^2 \= X^1\\). Suppose that we have got a dataset with eight observations as in Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Note that, for both \\(X^1\\) and \\(X^2\\), the sum of all observed values is equal to 0\.

Table 18\.4: A sample of eight observations.

| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \\(X^1\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 |
| \\(X^2\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 |
| \\(y\\) | 0 | \-0\.2059 | \-0\.2451 | \-0\.1204 | 0\.1596 | 0\.6149 | 1\.2141 | 2 |

Note that PD, LD, and AL profiles describe the effect of a variable in isolation from the values of other variables. In model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the effect of variable \\(X^1\\) depends on the value of variable \\(X^2\\). For models with interactions, the definition of the “true” main effect of variable \\(X^1\\) is, to some extent, subjective. Complex predictive models often have interactions. By examining the case of model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), we will provide some intuition on how PD, LD, and AL profiles may behave in such cases.

Figure 18\.4: Partial\-dependence (PD), local\-dependence (LD), and accumulated\-local (AL) profiles for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel). Panel A: ceteris\-paribus (CP) profiles for eight observations from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Panel B: entire CP profiles (top) contribute to calculation of the corresponding PD profile (bottom). Panel C: only parts of the CP profiles (top), close to observations of interest, contribute to the calculation of the corresponding LD profile (bottom). Panel D: only parts of the CP profiles (top) contribute to the calculation of the corresponding AL profile (bottom).

Let us explicitly express the CP profile for \\(X^1\\) for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel):

\\\[\\begin{equation} h^{1}\_{CP}(z) \= f(z,X^2\) \= (z\+1\)\\cdot X^2\. \\tag{18\.11} \\end{equation}\\]

By allowing \\(z\\) to take any value in the interval \\(\[\-1,1]\\), we get the CP profiles as straight lines with the slope equal to the value of variable \\(X^2\\). Hence, for instance, the CP profile for observation \\((\-1,\-1\)\\) is a straight line with the slope equal to \\(\-1\\). The CP profiles for the eight observations from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData) are presented in panel A of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew).

Recall that the PD profile for \\(X^j\\), defined in equation [(17\.1\)](partialDependenceProfiles.html#eq:PDPdef0), is the expected value, over the joint distribution of all explanatory variables other than \\(X^j\\), of the model predictions when \\(X^j\\) is set to \\(z\\). This leads to the estimation of the profile by taking the average of CP profiles for \\(X^j\\), as given in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest). In our case, this implies that the PD profile for \\(X^1\\) is the expected value of the model predictions over the distribution of \\(X^2\\), i.e., over the uniform distribution on the interval \\(\[\-1,1]\\).
Thus, the PD profile is estimated by taking the average of the CP profiles, given by [(18\.11\)](accumulatedLocalProfiles.html#eq:CPtrickyModel), at each value of \\(z\\) in \\(\[\-1,1]\\):

\\\[\\begin{equation} \\hat g\_{PD}^{1}(z) \= \\frac{1}{8} \\sum\_{i\=1}^{8} (z\+1\)\\cdot X^2\_{i} \= \\frac{z\+1}{8} \\sum\_{i\=1}^{8} X^2\_{i} \= 0\. \\tag{18\.12} \\end{equation}\\]

As a result, the PD profile for \\(X^1\\) is estimated as a horizontal line at 0, as seen in the bottom part of panel B of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew).

Since the \\(X^1\\) and \\(X^2\\) variables are correlated, it can be argued that we should not include entire CP profiles in the calculation of the PD profile, but only parts of them. In fact, for perfectly correlated explanatory variables, the CP profile for the \\(i\\)\-th observation should actually be undefined for any values of \\(z\\) different from \\(x^2\_i\\). The estimated horizontal PD profile results from using the marginal distribution of \\(X^2\\), which disregards the value of \\(X^1\\), in the definition of the profile. This observation suggests a modification: instead of the marginal distribution, one might consider the conditional distribution of \\(X^2\\) given \\(X^1\\). The modification leads to the definition of the LD profile. For the data from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData), the conditional distribution of \\(X^2\\), given \\(X^1\=z\\), is just a probability mass of 1 at \\(z\\). Consequently, for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the LD profile for \\(X^1\\) and any \\(z \\in \[\-1,1]\\) is given by

\\\[\\begin{equation} g\_{LD}^{1}(z) \= z \\cdot (z\+1\). \\tag{18\.13} \\end{equation}\\]

The bottom part of panel C of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew) presents the LD profile estimated by applying estimator [(18\.4\)](accumulatedLocalProfiles.html#eq:LDPest), in which the conditional distribution was calculated by using four bins with two observations each (shown in the top part of the panel).
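The values derived above are easy to check numerically. The R sketch below is only an illustration: it works with the model and the eight observations from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData) directly, rather than with any estimator implemented in a package.

```
# Numerical check of the PD and LD values derived above for model (18.10)
# and the eight observations from Table 18.4.
f <- function(x1, x2) (x1 + 1) * x2
x <- seq(-1, 1, length.out = 8)   # x1 = x2 for all eight observations
z <- 0.5                          # an arbitrary value of X^1

mean(f(z, x))   # PD estimate (18.12): numerically 0, because sum(x) = 0
f(z, z)         # LD profile (18.13):  z * (z + 1) = 0.75
```

For other values of `z` the same pattern holds: the PD estimate stays at 0, while the LD value traces the parabola \\(z(z\+1\)\\).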
Therefore, when analyzing any model, it is worth checking how much do the PD, LD, and AL profiles differ. And if so, look for potential causes. Correlations can be detected at the stage of data exploration. Interactions can be noted by looking at individual CP profiles. 18\.4 Example: apartment\-prices data ------------------------------------- In this section, we use PD, LD, and AL profiles to evaluate performance of the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *number of rooms*, as they are correlated (see Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms)). Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment) shows the three types of profiles for both variables estimated according to formulas [(17\.2\)](partialDependenceProfiles.html#eq:PDPest), [(18\.5\)](accumulatedLocalProfiles.html#eq:LDPest2), and [(18\.9\)](accumulatedLocalProfiles.html#eq:ALPest2). As we can see from the plots, the profiles calculated with different methods are different. The LD profiles are steeper than the PD profiles. This is because, for instance, the effect of *surface* includes the effect of other correlated variables, including *number of rooms*. The AL profile eliminates the effect of correlated variables. Since the AL and PD profiles are parallel to each other, they suggest that the model is additive for these two explanatory variables. Figure 18\.5: Partial\-dependence, local\-dependence, and accumulated\-local profiles for the random forest model for the apartment\-prices dataset. 18\.5 Pros and cons ------------------- The LD and AL profiles, described in this chapter, are useful to summarize the influence of an explanatory variable on a model’s predictions. The profiles are constructed by using the CP profiles introduced in Chapter [10](ceterisParibus.html#ceterisParibus), but they differ in how the CP profiles for individual observations are summarized. When explanatory variables are independent and there are no interactions in the model, the CP profiles are parallel and their mean, i.e., the PD profile introduced in Chapter [17](partialDependenceProfiles.html#partialDependenceProfiles), adequately summarizes them. When the model is additive, but an explanatory variable is correlated with some other variables, neither PD nor LD profiles will properly capture the effect of the explanatory variable on the model’s predictions. However, the AL profile will provide a correct summary of the effect. When there are interactions in the model, none of the profiles will provide a correct assessment of the effect of any explanatory variable involved in the interaction(s). This is because the profiles for the variable will also include the effect of other variables. Comparison of PD, LD, and AL profiles may help in identifying whether there are any interactions in the model and/or whether explanatory variables are correlated. When there are interactions, they may be explored by using a generalization of the PD profiles for two or more dependent variables (Apley and Zhu [2020](#ref-Apley2019)). 18\.6 Code snippets for R ------------------------- In this section, we present the `DALEX` package for R, which covers the methods presented in this chapter. 
In particular, it includes wrappers for functions from the `ingredients` package (Biecek et al. [2019](#ref-ingredientsRPackage)). Note that similar functionalities can be found in package `ALEPlot` (Apley [2018](#ref-ALEPlotRPackage)) or `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)).

For illustration purposes, we use the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *number of rooms*.

We first load the model\-object via the `archivist` hook, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that, beforehand, we have got to load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available.

```
library("DALEX")
library("randomForest")
apartments_rf <- archivist::aread("pbiecek/models/fe7a5")

explainer_apart_rf <- DALEX::explain(model = apartments_rf, 
                                     data = apartments_test[,-1],
                                     y = apartments_test$m2.price,
                                     label = "Random Forest")
```

The function that allows the computation of LD and AL profiles in the `DALEX` package is `model_profile()`. Its use and arguments were described in Section [17\.6](partialDependenceProfiles.html#PDPR). LD profiles are calculated by specifying the argument `type = "conditional"`. In the example below, we also use the `variables` argument to calculate the profile only for the explanatory variables *surface* and *no.rooms*. By default, the profile is based on 100 randomly selected observations.

```
ld_rf <- model_profile(explainer = explainer_apart_rf,
                       type = "conditional",
                       variables = c("no.rooms", "surface"))
```

The resulting object of class “model\_profile” contains the LD profiles for both explanatory variables. By applying the `plot()` function to the object, we obtain separate plots of the profiles.

```
plot(ld_rf) +
  ggtitle("Local-dependence profiles for no. of rooms and surface", "")
```

The resulting plot is shown in Figure [18\.6](accumulatedLocalProfiles.html#fig:aleExample3Plot). The profiles essentially correspond to those included in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment).

Figure 18\.6: Local\-dependence profiles for the random forest model and explanatory variables *no.rooms* and *surface* for the apartment\-prices dataset.

AL profiles are calculated by applying function `model_profile()` with the additional argument `type = "accumulated"`. In the example below, we also use the `variables` argument to calculate the profile only for the explanatory variables *surface* and *no.rooms*.

```
al_rf <- model_profile(explainer = explainer_apart_rf,
                       type = "accumulated",
                       variables = c("no.rooms", "surface"))
```

By applying the `plot()` function to the object, we obtain separate plots of the AL profiles for *no.rooms* and *surface*. They are presented in Figure [18\.7](accumulatedLocalProfiles.html#fig:aleExample2Plot).
The profiles essentially correspond to those included in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment).

```
plot(al_rf) +
  ggtitle("Accumulated-local profiles for no. of rooms and surface", "")
```

Figure 18\.7: Accumulated\-local profiles for the random forest model and explanatory variables *no.rooms* and *surface* for the apartment\-prices dataset.

Function `plot()` allows including all plots in a single graph. We will show how to apply it in order to obtain Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment). Toward this end, we have got to create PD profiles first (see Section [17\.6](partialDependenceProfiles.html#PDPR)). We also modify the labels of the PD, LD, and AL profiles contained in the `agr_profiles` components of the “model\_profile”\-class objects created for the different profiles.

```
pd_rf <- model_profile(explainer = explainer_apart_rf,
                       type = "partial",
                       variables = c("no.rooms", "surface"))

pd_rf$agr_profiles$`_label_` = "partial dependence"
ld_rf$agr_profiles$`_label_` = "local dependence"
al_rf$agr_profiles$`_label_` = "accumulated local"
```

Subsequently, we simply apply the `plot()` function to the `agr_profiles` components of the “model\_profile”\-class objects for the different profiles (see Section [17\.6](partialDependenceProfiles.html#PDPR)).

```
plot(pd_rf, ld_rf, al_rf)
```

The resulting plot (not shown) is essentially the same as the one presented in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment), with a possible difference due to the use of a different set of (randomly selected) 100 observations from the apartment\-prices dataset.

18\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`.

For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of Titanic.

In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.

```
import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y, 
                  label = "Titanic RF Pipeline")
```

The function that allows calculations of LD profiles is `model_profile()`. It was already introduced in Section [17\.7](partialDependenceProfiles.html#PDPPython). By default, it calculates PD profiles. To obtain LD profiles, the argument `type = 'conditional'` should be used. In the example below, we calculate the LD profile for *age* and *fare* by applying the `model_profile()` function to the explainer\-object for the random forest model while specifying `type = 'conditional'`. Results are stored in the `ld_rf.result` field.

```
ld_rf = titanic_rf_exp.model_profile(type = 'conditional')
ld_rf.result['_label_'] = 'LD profiles'
ld_rf.result
```

Results can be visualised by using the `plot()` method. Note that, in the code below, we use the `variables` argument to display the LD profiles only for *age* and *fare*. The resulting plot is presented in Figure [18\.8](accumulatedLocalProfiles.html#fig:examplePythonLDProfile2).
```
ld_rf.plot(variables = ['age', 'fare'])
```

Figure 18\.8: Local\-dependence profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.

In order to calculate the AL profiles for *age* and *fare*, we apply the `model_profile()` function with the `type = 'accumulated'` option.

```
al_rf = titanic_rf_exp.model_profile(type = 'accumulated')
al_rf.result['_label_'] = 'AL profiles'
```

We can plot AL and LD profiles in a single chart. Toward this end, in the code that follows, we pass the `ld_rf` object, which contains LD profiles, as the first argument of the `plot()` method of the `al_rf` object that includes AL profiles. We also use the `variables` argument to display the profiles only for *age* and *fare*. The resulting plot is presented in Figure [18\.9](accumulatedLocalProfiles.html#fig:examplePythonALLDProfiles).

```
al_rf.plot(ld_rf, variables = ['age', 'fare'])
```

Figure 18\.9: Local\-dependence and accumulated\-local profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.
Figure 18\.1: Observations of two explanatory variables uniformly distributed over the unit square (left\-hand\-side panel) and the scatter plot of the observed values of the dependent variable \\(Y\\) in function of \\(X^1\\) (right\-hand\-side panel). In view of the plot shown in the right\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1), we could consider using a simple linear model with \\(X^1\\) and \\(X^2\\) as explanatory variables. Assume, however, that we would like to analyze the data without postulating any particular parametric form of the effect of the variables. A naïve way would be to split the observed range of each of the two variables into, for instance, five intervals (as illustrated in the left\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1)), and estimate the means of observed values of \\(Y\\) for the resulting 25 groups of observations. Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans) presents the sample means (with rows and columns defined by the ranges of possible values of, respectively, \\(X^1\\) and \\(X^2\\)). Table 18\.1: Sample means of \\(Y\\) for 25 groups of observations resulting from splitting the ranges of explanatory variables \\(X^1\\) and \\(X^2\\) into five intervals (see the left\-hand\-side panel of Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1)). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 0\.19 | 0\.42 | 0\.63 | 0\.80 | 0\.99 | | (0\.2,0\.4] | 0\.39 | 0\.59 | 0\.81 | 1\.01 | 1\.19 | | (0\.4,0\.6] | 0\.59 | 0\.81 | 0\.98 | 1\.20 | 1\.44 | | (0\.6,0\.8] | 0\.76 | 1\.00 | 1\.20 | 1\.40 | 1\.58 | | (0\.8,1] | 1\.01 | 1\.22 | 1\.38 | 1\.58 | 1\.77 | Table [18\.2](accumulatedLocalProfiles.html#tab:FullDataNumbers) presents the number of observations for each of the sample means from Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans). Table 18\.2: Number of observations for the sample means from Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | total | | --- | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 51 | 39 | 31 | 43 | 43 | 207 | | (0\.2,0\.4] | 39 | 40 | 35 | 53 | 42 | 209 | | (0\.4,0\.6] | 28 | 42 | 35 | 49 | 40 | 194 | | (0\.6,0\.8] | 37 | 30 | 36 | 55 | 45 | 203 | | (0\.8,1] | 43 | 46 | 36 | 28 | 34 | 187 | | total | 198 | 197 | 173 | 228 | 204 | 1000 | By using this simple approach, we can compute the PD profile for \\(X^1\\). Consider \\(X^1\=z\\). To apply the estimator defined in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest), we need the predicted values \\(\\hat{f}(z,x^2\_i)\\) for any observed value of \\(x^2\_i \\in \[0,1]\\). As our observations are uncorrelated and fill\-in the unit\-square, we can use the suitable mean values for that purpose. In particular, for \\(z \\in \[0,0\.2]\\), we get \\\[\\begin{align} \\hat g\_{PD}^{1}(z) \&\= \\frac{1}{1000}\\sum\_{i}\\hat{f}(z,x^2\_i) \= \\nonumber \\\\ \&\= (198\\times 0\.19 \+ 197\\times 0\.42 \+ 173\\times 0\.63 \+ \\nonumber\\\\ \& \\ \\ \\ \\ \\ 228\\times 0\.80 \+ 204\\times 1\.00\)/1000 \= 0\.6\. \\nonumber \\tag{18\.2} \\end{align}\\] By following the same principle, for \\(z \\in (0\.2,0\.4]\\), \\((0\.4,0\.6]\\), \\((0\.6,0\.8]\\), and \\((0\.8,1]\\) we get the values of 0\.8, 1, 1\.2, and 1\.4, respectively. 
Thus, overall, we obtain a piecewise\-constant profile with values that capture the (correct) linear effect of \\(X^1\\) in model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). In fact, by using, for instance, midpoints of the intervals for \\(z\\), i.e., 0\.1, 0\.3, 0\.5, 0\.7, and 0\.9, we could describe the profile by the linear function \\(0\.5\+z\\). Assume now that we are given the data only from the regions on the diagonal of the unit square, as illustrated in the left\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2). In that case, the observed values of \\(X^1\\) and \\(X^2\\) are strongly correlated, with the estimated value of Pearson’s correlation coefficient equal to 0\.96\. The right\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) presents the scatter plot of the observed values of \\(Y\\) in the function of \\(X^1\\). Figure 18\.2: Correlated observations of two explanatory variables (left\-hand\-side panel) and the scatter plot of the observed values of the dependent variable \\(Y\\) in the function of \\(X^1\\) (right\-hand\-side panel). Now, the “naïve” modelling approach would amount to using only five sample means, as in the table below. Table 18\.3: Sample means of \\(Y\\) for five groups of observations (see the left\-hand\-side panel of Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2)). | | (0,0\.2] | (0\.2,0\.4] | (0\.4,0\.6] | (0\.6,0\.8] | (0\.8,1] | | --- | --- | --- | --- | --- | --- | | (0,0\.2] | 0\.19 | NA | NA | NA | NA | | (0\.2,0\.4] | NA | 0\.59 | NA | NA | NA | | (0\.4,0\.6] | NA | NA | 0\.98 | NA | NA | | (0\.6,0\.8] | NA | NA | NA | 1\.4 | NA | | (0\.8,1] | NA | NA | NA | NA | 1\.77 | When computing the PD profile for \\(X^1\\), we now encounter the issue related to the fact that, for instance, for \\(z \\in \[0,0\.2]\\), we have not got any observations and, hence, any sample mean for \\(x^2\_i\>0\.2\\). To overcome this issue, we could extrapolate the predictions (i.e., mean values) obtained for other intervals of \\(z\\). That is, we could assume that, for \\(x^2\_i \\in (0\.2,0\.4]\\), the prediction is equal to 0\.59, for \\(x^2\_i \\in (0\.4,0\.6]\\) it is equal to 0\.98, and so on. This leads to the following value of the PD profile for \\(z \\in \[0,0\.2]\\): \\\[\\begin{align} \\hat g\_{PD}^{1}(z) \&\= \\frac{1}{51 \+ 40 \+ 35 \+ 55 \+ 34}\\sum\_{i}\\hat{f}(z,x^2\_i) \= \\nonumber \\\\ \&\= \\frac{1}{215}(51\\times0\.19 \+ 40\\times0\.59 \+ 35\\times0\.98 \+ \\nonumber \\\\ \& \\ \\ \\ \\ \\ 55\\times1\.40 \+ 34\\times1\.77\)\=0\.95\. \\end{align}\\] This is a larger value than 0\.6 computed in [(18\.2\)](accumulatedLocalProfiles.html#eq:fullDataPD) for the uncorrelated data. The reason is the extrapolation: for instance, for \\(z \\in \[0,0\.2]\\) and \\(x^2\_i \\in (0\.6,0\.8]\\), we use 1\.40 as the predicted value of \\(Y\\). However, Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans) indicates that the sample mean for those observations is equal to 0\.80\. In fact, by using the same extrapolation principle, we get \\(\\hat g\_{PD}^{1}(z) \= 0\.95\\) also for \\(z \\in (0\.2,0\.4]\\), \\((0\.4,0\.6]\\), \\((0\.6,0\.8]\\), and \\((0\.8,1]\\). Thus, the obtained profile indicates no effect of \\(X^1\\), which is clearly a wrong conclusion. 
While the modelling approach presented in the example above may seem to be simplistic, it does illustrate the issue that would also appear for other flexible modelling methods like, for instance, regression trees. In particular, the left\-hand\-side panel of Figure [18\.3](accumulatedLocalProfiles.html#fig:PDPcorr3) presents a regression tree fitted to the data shown in Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) by using function `tree()` from the R package `tree`. The right\-hand\-side panel of Figure [18\.3](accumulatedLocalProfiles.html#fig:PDPcorr3) presents the corresponding split of the observations. According to the model, the predicted value of \\(Y\\) for the observations in the region \\(x^1 \\in \[0,0\.2]\\) and \\(x^2 \\in \[0\.8,1]\\) would be equal to 1\.74\. This extrapolation implies a substantial overestimation, as the true expected value of \\(Y\\) in the region is equal to 1\. Note that the latter is well estimated by the sample mean equal to 0\.99 (see Table [18\.1](accumulatedLocalProfiles.html#tab:FullDataMeans)) in the case of the uncorrelated data shown in Figure [18\.1](accumulatedLocalProfiles.html#fig:PDPcorr1). The PD profile for \\(X^1\\) for the regression tree would be equal to 0\.2, 0\.8, and 1\.5 for \\(z \\in \[0,0\.2]\\), \\((0\.2,0\.6]\\), and \\((0\.6,1]\\), respectively. It does show an effect of \\(X^1\\), but if we used midpoints of the intervals for \\(z\\), i.e., 0\.1, 0\.4, and 0\.8, we could (approximately) describe the profile by the linear function \\(2z\\), i.e., with a slope larger than (the true value of) 1\. Figure 18\.3: Results of fitting of a regression tree to the data shown in Figure [18\.2](accumulatedLocalProfiles.html#fig:PDPcorr2) (left\-hand\-side panel) and the corresponding split of the observations of the two explanatory variables (right\-hand\-side panel). The issue stems from the fact that, in the definition [(17\.1\)](partialDependenceProfiles.html#eq:PDPdef0) of the PD profile, the expected value of model predictions is computed by using the marginal distribution of \\(X^2\\), which disregards the value of \\(X^1\\). Clearly, this is an issue when the explanatory variables are correlated. This observation suggests a modification: instead of the marginal distribution, one might use the conditional distribution of \\(X^2\\) given \\(X^1\\), because it reflects the association between the two variables. The modification leads to the definition of an LD profile. It turns out, however, that the modification does not fully address the issue of correlated explanatory variables. As argued by Apley and Zhu ([2020](#ref-Apley2019)), if an explanatory variable is correlated with some other variables, the LD profile for the variable will still capture the effect of the other variables. This is because the profile is obtained by marginalizing over (in fact, ignoring) the remaining variables in the model, which results in an effect similar to the “omitted variable” bias in linear regression. Thus, in this respect, LD profiles share the same limitation as PD profiles. To address the limitation, Apley and Zhu ([2020](#ref-Apley2019)) proposed the concept of local\-dependence effects and accumulated\-local (AL) profiles. 18\.3 Method ------------ ### 18\.3\.1 Local\-dependence profile Local\-dependence (LD) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows: \\\[\\begin{equation} g\_{LD}^{f, j}(z) \= E\_{\\underline{X}^{\-j}\|X^j\=z}\\left\\{f\\left(\\underline{X}^{j\|\=z}\\right)\\right\\}. 
\\tag{18\.3} \\end{equation}\\] Thus, it is the expected value of the model predictions over the conditional distribution of \\(\\underline{X}^{\-j}\\) given \\(X^j\=z\\), i.e., over the joint distribution of all explanatory variables other than \\(X^j\\) conditional on the value of the latter variable set to \\(z\\). Or, in other words, it is the expected value of the CP profiles for \\(X^j\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), over the conditional distribution of \\(\\underline{X}^{\-j} \| X^j \= z\\). As proposed by Apley and Zhu ([2020](#ref-Apley2019)), LD profile can be estimated as follows: \\\[\\begin{equation} \\hat g\_{LD}^{j}(z) \= \\frac{1}{\|N\_j\|} \\sum\_{k\\in N\_j} f\\left(\\underline{x}\_k^{j\| \= z}\\right), \\tag{18\.4} \\end{equation}\\] where \\(N\_j\\) is the set of observations with the value of \\(X^j\\) “close” to \\(z\\) that is used to estimate the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\=z\\). Note that, in general, the estimator given in [(18\.4\)](accumulatedLocalProfiles.html#eq:LDPest) is neither smooth nor continuous at boundaries between subsets \\(N\_j\\). A smooth estimator for \\(g\_{LD}^{f,j}(z)\\) can be defined as follows: \\\[\\begin{equation} \\tilde g\_{LD}^{j}(z) \= \\frac{1}{\\sum\_k w\_{k}(z)} \\sum\_{i \= 1}^n w\_i(z) f\\left(\\underline{x}\_i^{j\| \= z}\\right), \\tag{18\.5} \\end{equation}\\] where weights \\(w\_i(z)\\) capture the distance between \\(z\\) and \\(x\_i^j\\). In particular, for a categorical variable, we may just use the indicator function \\(w\_i(z) \= 1\_{z \= x^j\_i}\\), while for a continuous variable we may use the Gaussian kernel: \\\[\\begin{equation} w\_i(z) \= \\phi(z \- x\_i^j, 0, s), \\tag{18\.6} \\end{equation}\\] where \\(\\phi(y,0,s)\\) is the density of a normal distribution with mean 0 and standard deviation \\(s\\). Note that \\(s\\) plays the role of a smoothing factor. As already mentioned in Section [18\.2](accumulatedLocalProfiles.html#ALPIntuition), if an explanatory variable is correlated with some other variables, the LD profile for the variable will capture the effect of all of the variables. For instance, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). Assume that \\(X^1\\) has a uniform distribution on \\(\[0,1]\\) and that \\(X^1\=X^2\\), i.e., explanatory variables are perfectly correlated. In that case, the LD profile for \\(X^1\\) is given by \\\[ g\_{LD}^{1}(z) \= E\_{X^2\|X^1\=z}(z\+X^2\) \= z \+ E\_{X^2\|X^1\=z}(X^2\) \= 2z. \\] Hence, it suggests an effect of \\(X^1\\) twice larger than the correct one. To address the limitation, AL profiles can be used. We present them in the next section. ### 18\.3\.2 Accumulated\-local profile Consider model \\(f()\\) and define \\\[ q^j(\\underline{u})\=\\left\\{ \\frac{\\partial f(\\underline{x})}{\\partial x^j} \\right\\}\_{\\underline{x}\=\\underline{u}}. \\] Accumulated\-local (AL) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows: \\\[\\begin{equation} g\_{AL}^{j}(z) \= \\int\_{z\_0}^z \\left\[E\_{\\underline{X}^{\-j}\|X^j\=v}\\left\\{ q^j(\\underline{X}^{j\|\=v}) \\right\\}\\right] dv \+ c, \\tag{18\.7} \\end{equation}\\] where \\(z\_0\\) is a value close to the lower bound of the effective support of the distribution of \\(X^j\\) and \\(c\\) is a constant, usually selected so that \\(E\_{X^j}\\left\\{g\_{AL}^{j}(X^j)\\right\\} \= 0\\). 
To interpret [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), note that \\(q^j(\\underline{x}^{j\|\=v})\\) describes the local effect (change) of the model due to \\(X^j\\). Or, to put it in other words, \\(q^j(\\underline{x}^{j\|\=v})\\) describes how much the CP profile for \\(X^j\\) changes at \\((x^1,\\ldots,x^{j\-1},v,x^{j\+1},\\ldots,x^p)\\). This effect (change) is averaged over the “relevant” (according to the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\\)) values of \\(\\underline{x}^{\-j}\\) and, subsequently, accumulated (integrated) over values of \\(v\\) up to \\(z\\). As argued by Apley and Zhu ([2020](#ref-Apley2019)), the averaging of the local effects allows avoiding the issue, present in the PD and LD profiles, of capturing the effect of other variables in the profile for a particular variable in additive models (without interactions). To see this, one can consider the approximation \\\[ f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{x}^{j\|\=v}) \\approx q^j(\\underline{x}^{j\|\=v})dv, \\] and note that the difference \\(f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{v}^{j\|\=v})\\), for a model without interaction, effectively removes the effect of all variables other than \\(X^j\\). For example, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). In that case, \\(f(x^1,x^2\)\=x^1\+x^2\\) and \\(q^1(\\underline{u}) \= 1\\). Thus, \\\[ f(u\+du,x\_2\)\-f(u,x\_2\) \= (u \+ du \+ x^2\) \- (u \+ x^2\) \= du \= q^1(u)du. \\] Consequently, irrespective of the joint distribution of \\(X^1\\) and \\(X^2\\) and upon setting \\(c\=z\_0\\), we get \\\[ g\_{AL}^{1}(z) \= \\int\_{z\_0}^z \\left\\{E\_{{X}^{2}\|X^1\=v}(1\)\\right\\} dv \+ z\_0 \= z. \\] To estimate an AL profile, one replaces the integral in [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef) by a summation and the derivative with a finite difference (Apley and Zhu [2020](#ref-Apley2019)). In particular, consider a partition of the range of observed values \\(x\_{i}^j\\) of variable \\(X^j\\) into \\(K\\) intervals \\(N\_j(k)\=\\left(z\_{k\-1}^j,z\_k^j\\right]\\) (\\(k\=1,\\ldots,K\\)). Note that \\(z\_0^j\\) can be chosen just below \\(\\min(x\_1^j,\\ldots,x\_N^j)\\) and \\(z\_K^j\=\\max(x\_1^j,\\ldots,x\_N^j)\\). Let \\(n\_j(k)\\) denote the number of observations \\(x\_i^j\\) falling into \\(N\_j(k)\\), with \\(\\sum\_{k\=1}^K n\_j(k)\=n\\). An estimator of the AL profile for variable \\(X^j\\) can then be constructed as follows: \\\[\\begin{equation} \\widehat{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^{k\_j(z)} \\frac{1}{n\_j(k)} \\sum\_{i: x\_i^j \\in N\_j(k)} \\left\\{ f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right) \\right\\} \- \\hat{c}, \\tag{18\.8} \\end{equation}\\] where \\(k\_j(z)\\) is the index of interval \\(N\_j(k)\\) in which \\(z\\) falls, i.e., \\(z \\in N\_j\\{k\_j(z)\\}\\), and \\(\\hat{c}\\) is selected so that \\(\\sum\_{i\=1}^n \\widehat{g}\_{AL}^{f,j}(x\_i^j)\=0\\). To interpret [(18\.8\)](accumulatedLocalProfiles.html#eq:ALPest), note that difference \\(f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right)\\) corresponds to the difference of the CP profile for the \\(i\\)\-th observation at the limits of interval \\(N\_j(k)\\). These differences are then averaged across all observations for which the observed value of \\(X^j\\) falls into the interval and are then accumulated. 
Note that, in general, \\(\\widehat{g}\_{AL}^{f,j}(z)\\) is not smooth at the boundaries of intervals \\(N\_j(k)\\). A smooth estimate can obtained as follows: \\\[\\begin{equation} \\widetilde{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^K \\left\[ \\frac{1}{\\sum\_{l} w\_l(z\_k)} \\sum\_{i\=1}^N w\_{i}(z\_k) \\left\\{f\\left(\\underline{x}\_i^{j\| \= z\_k}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_k \- \\Delta}\\right)\\right\\}\\right] \- \\hat{c}, \\tag{18\.9} \\end{equation}\\] where points \\(z\_k\\) (\\(k\=0, \\ldots, K\\)) form a uniform grid covering the interval \\((z\_0,z)\\) with step \\(\\Delta \= (z\-z\_0\)/K\\), and weight \\(w\_i(z\_k)\\) captures the distance between point \\(z\_k\\) and observation \\(x\_i^j\\). In particular, we may use similar weights as in case of [(18\.5\)](accumulatedLocalProfiles.html#eq:LDPest2). ### 18\.3\.3 Dependence profiles for a model with interaction and correlated explanatory variables: an example In this section, we illustrate in more detail the behavior of PD, LD, and AL profiles for a model with an interaction between correlated explanatory variables. In particular, let us consider the following simple model for two explanatory variables: \\\[\\begin{equation} f(X^1, X^2\) \= (X^1 \+ 1\)\\cdot X^2\. \\tag{18\.10} \\end{equation}\\] Moreover, assume that explanatory variables \\(X^1\\) and \\(X^2\\) are uniformly distributed over the interval \\(\[\-1,1]\\) and perfectly correlated, i.e., \\(X^2 \= X^1\\). Suppose that we have got a dataset with eight observations as in Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Note that, for both \\(X^1\\) and \\(X^2\\), the sum of all observed values is equal to 0\. Table 18\.4: A sample of eight observations. | i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | \\(X^1\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 | | \\(X^2\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 | | \\(y\\) | 0 | \-0\.2059 | \-0\.2451 | \-0\.1204 | 0\.1596 | 0\.6149 | 1\.2141 | 2 | Note that PD, LD, AL profiles describe the effect of a variable in isolation from the values of other variables. In model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the effect of variable \\(X^1\\) depends on the value of variable \\(X^2\\). For models with interactions, it is subjective to define what would be the “true” main effect of variable \\(X^1\\). Complex predictive models often have interactions. By examining the case of model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), we will provide some intuition on how PD, LD and AL profiles may behave in such cases. Figure 18\.4: Partial\-dependence (PD), local\-dependence (LD), and accumulated\-local (AL) profiles for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel). Panel A: ceteris\-paribus (CP) profiles for eight observations from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Panel B: entire CP profiles (top) contribute to calculation of the corresponding PD profile (bottom). Panel C: only parts of the CP profiles (top), close to observations of interest, contribute to the calculation of the corresponding LD profile (bottom). Panel D: only parts of the CP profiles (top) contribute to the calculation of the corresponding AL profile (bottom). 
Let us explicitly express the CP profile for \\(X^1\\) for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel): \\\[\\begin{equation} h^{1}\_{CP}(z) \= f(z,X^2\) \= (z\+1\)\\cdot X^2\. \\tag{18\.11} \\end{equation}\\] By allowing \\(z\\) to take any value in the interval \\(\[\-1,1]\\), we get the CP profiles as straight lines with the slope equal to the value of variable \\(X^2\\). Hence, for instance, the CP profile for observation \\((\-1,\-1\)\\) is a straight line with the slope equal to \\(\-1\\). The CP profiles for the eight observations, from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData) are presented in panel A of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew). Recall that the PD profile for \\(X^j\\), defined in equation [(17\.1\)](partialDependenceProfiles.html#eq:PDPdef0), is the expected value, over the joint distribution of all explanatory variables other than \\(X^j\\), of the model predictions when \\(X^j\\) is set to \\(z\\). This leads to the estimation of the profile by taking the average of CP profiles for \\(X^j\\), as given in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest). In our case, this implies that the PD profile for \\(X^1\\) is the expected value of the model predictions over the distribution of \\(X^2\\), i.e., over the uniform distribution on the interval \\(\[\-1,1]\\). Thus, the PD profile is estimated by taking the average of the CP profiles, given by [(18\.11\)](accumulatedLocalProfiles.html#eq:CPtrickyModel), at each value of \\(z\\) in \\(\[\-1,1]\\): \\\[\\begin{equation} \\hat g\_{PD}^{1}(z) \= \\frac{1}{8} \\sum\_{i\=1}^{8} (z\+1\)\\cdot X^2\_{i} \= \\frac{z\+1}{8} \\sum\_{i\=1}^{8} X^2\_{i} \= 0\. \\tag{18\.12} \\end{equation}\\] As a result, the PD profile for \\(X^1\\) is estimated as a horizontal line at 0, as seen in the bottom part of Panel B of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew). Since the \\(X^1\\) and \\(X^2\\) variables are correlated, it can be argued that we should not include entire CP profiles in the calculation of the PD profile, but only parts of them. In fact, for perfectly correlated explanatory variables, the CP profile for the \\(i\\)\-th observation should actually be undefined for any values of \\(z\\) different from \\(x^2\_i\\). The estimated horizontal PD profile results from using the marginal distribution of \\(X^2\\), which disregards the value of \\(X^1\\), in the definition of the profile. This observation suggests a modification: instead of the marginal distribution, one might consider the conditional distribution of \\(X^2\\) given \\(X^1\\). The modification leads to the definition of LD profile. For the data from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData), the conditional distribution of \\(X^2\\), given \\(X^1\=z\\), is just a probability mass of 1 at \\(z\\). Consequently, for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the LD profile for \\(X^1\\) and any \\(z \\in \[\-1,1]\\) is given by \\\[\\begin{equation} g\_{LD}^{1}(z) \= z \\cdot (z\+1\). \\tag{18\.13} \\end{equation}\\] The bottom part of panel C of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew) presents the LD profile estimated by applying estimator [(18\.3\)](accumulatedLocalProfiles.html#eq:LDPdef), in which the conditional distribution was calculated by using four bins with two observations each (shown in the top part of the panel). 
The LD profile shows the average of predictions over the conditional distribution. Part of the average can be attributed to the effect of the correlated variable \\(X^2\\). AL profile shows the net effect of \\(X^1\\) variable. By using definition [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), the AL profile for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel) is given by \\\[\\begin{align} g\_{AL}^{1}(z) \&\= \\int\_{\-1}^z E \\left\[\\frac{\\partial f(X^1, X^2\)}{\\partial X^1} \| X^1 \= v \\right] dv \\nonumber \\\\ \& \= \\int\_{\-1}^z E \\left\[X^2 \| X^1 \= v \\right] dv \= \\int\_{\-1}^z v dv \= (z^2 \- 1\)/2\. \\tag{18\.14} \\end{align}\\] The bottom part of panel D of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew) presents the AL profile estimated by applying estimator [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), in which the range of observed values of \\(X^1\\) was split into four intervals with two observations each. It is clear that PD, LD and AL profiles show different aspects of the model. In the analyzed example of model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), we obtain three different explanations of the effect of variable \\(X^1\\). In practice, explanatory variables are typically correlated and complex predictive models are usually not additive. Therefore, when analyzing any model, it is worth checking how much do the PD, LD, and AL profiles differ. And if so, look for potential causes. Correlations can be detected at the stage of data exploration. Interactions can be noted by looking at individual CP profiles. ### 18\.3\.1 Local\-dependence profile Local\-dependence (LD) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows: \\\[\\begin{equation} g\_{LD}^{f, j}(z) \= E\_{\\underline{X}^{\-j}\|X^j\=z}\\left\\{f\\left(\\underline{X}^{j\|\=z}\\right)\\right\\}. \\tag{18\.3} \\end{equation}\\] Thus, it is the expected value of the model predictions over the conditional distribution of \\(\\underline{X}^{\-j}\\) given \\(X^j\=z\\), i.e., over the joint distribution of all explanatory variables other than \\(X^j\\) conditional on the value of the latter variable set to \\(z\\). Or, in other words, it is the expected value of the CP profiles for \\(X^j\\), defined in [(10\.1\)](ceterisParibus.html#eq:CPPdef), over the conditional distribution of \\(\\underline{X}^{\-j} \| X^j \= z\\). As proposed by Apley and Zhu ([2020](#ref-Apley2019)), LD profile can be estimated as follows: \\\[\\begin{equation} \\hat g\_{LD}^{j}(z) \= \\frac{1}{\|N\_j\|} \\sum\_{k\\in N\_j} f\\left(\\underline{x}\_k^{j\| \= z}\\right), \\tag{18\.4} \\end{equation}\\] where \\(N\_j\\) is the set of observations with the value of \\(X^j\\) “close” to \\(z\\) that is used to estimate the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\=z\\). Note that, in general, the estimator given in [(18\.4\)](accumulatedLocalProfiles.html#eq:LDPest) is neither smooth nor continuous at boundaries between subsets \\(N\_j\\). A smooth estimator for \\(g\_{LD}^{f,j}(z)\\) can be defined as follows: \\\[\\begin{equation} \\tilde g\_{LD}^{j}(z) \= \\frac{1}{\\sum\_k w\_{k}(z)} \\sum\_{i \= 1}^n w\_i(z) f\\left(\\underline{x}\_i^{j\| \= z}\\right), \\tag{18\.5} \\end{equation}\\] where weights \\(w\_i(z)\\) capture the distance between \\(z\\) and \\(x\_i^j\\). 
In particular, for a categorical variable, we may just use the indicator function \\(w\_i(z) \= 1\_{z \= x^j\_i}\\), while for a continuous variable we may use the Gaussian kernel: \\\[\\begin{equation} w\_i(z) \= \\phi(z \- x\_i^j, 0, s), \\tag{18\.6} \\end{equation}\\] where \\(\\phi(y,0,s)\\) is the density of a normal distribution with mean 0 and standard deviation \\(s\\). Note that \\(s\\) plays the role of a smoothing factor. As already mentioned in Section [18\.2](accumulatedLocalProfiles.html#ALPIntuition), if an explanatory variable is correlated with some other variables, the LD profile for the variable will capture the effect of all of the variables. For instance, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). Assume that \\(X^1\\) has a uniform distribution on \\(\[0,1]\\) and that \\(X^1\=X^2\\), i.e., explanatory variables are perfectly correlated. In that case, the LD profile for \\(X^1\\) is given by \\\[ g\_{LD}^{1}(z) \= E\_{X^2\|X^1\=z}(z\+X^2\) \= z \+ E\_{X^2\|X^1\=z}(X^2\) \= 2z. \\] Hence, it suggests an effect of \\(X^1\\) twice as large as the correct one. To address this limitation, AL profiles can be used. We present them in the next section.

### 18\.3\.2 Accumulated\-local profile

Consider model \\(f()\\) and define \\\[ q^j(\\underline{u})\=\\left\\{ \\frac{\\partial f(\\underline{x})}{\\partial x^j} \\right\\}\_{\\underline{x}\=\\underline{u}}. \\] Accumulated\-local (AL) profile for model \\(f()\\) and variable \\(X^j\\) is defined as follows: \\\[\\begin{equation} g\_{AL}^{j}(z) \= \\int\_{z\_0}^z \\left\[E\_{\\underline{X}^{\-j}\|X^j\=v}\\left\\{ q^j(\\underline{X}^{j\|\=v}) \\right\\}\\right] dv \+ c, \\tag{18\.7} \\end{equation}\\] where \\(z\_0\\) is a value close to the lower bound of the effective support of the distribution of \\(X^j\\) and \\(c\\) is a constant, usually selected so that \\(E\_{X^j}\\left\\{g\_{AL}^{j}(X^j)\\right\\} \= 0\\). To interpret [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), note that \\(q^j(\\underline{x}^{j\|\=v})\\) describes the local effect (change) of the model due to \\(X^j\\). In other words, \\(q^j(\\underline{x}^{j\|\=v})\\) describes how much the CP profile for \\(X^j\\) changes at \\((x^1,\\ldots,x^{j\-1},v,x^{j\+1},\\ldots,x^p)\\). This effect (change) is averaged over the “relevant” (according to the conditional distribution of \\(\\underline{X}^{\-j}\|X^j\\)) values of \\(\\underline{x}^{\-j}\\) and, subsequently, accumulated (integrated) over values of \\(v\\) up to \\(z\\). As argued by Apley and Zhu ([2020](#ref-Apley2019)), the averaging of the local effects allows avoiding the issue, present in the PD and LD profiles, of capturing the effect of other variables in the profile for a particular variable in additive models (without interactions). To see this, one can consider the approximation \\\[ f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{x}^{j\|\=v}) \\approx q^j(\\underline{x}^{j\|\=v})dv, \\] and note that the difference \\(f(\\underline{x}^{j\|\=v\+dv})\-f(\\underline{x}^{j\|\=v})\\), for a model without interaction, effectively removes the effect of all variables other than \\(X^j\\). For example, consider model [(18\.1\)](accumulatedLocalProfiles.html#eq:simpleModel). In that case, \\(f(x^1,x^2\)\=x^1\+x^2\\) and \\(q^1(\\underline{u}) \= 1\\). Thus, \\\[ f(u\+du,x^2\)\-f(u,x^2\) \= (u \+ du \+ x^2\) \- (u \+ x^2\) \= du \= q^1(u)du.
\\\] Consequently, irrespective of the joint distribution of \\(X^1\\) and \\(X^2\\) and upon setting \\(c\=z\_0\\), we get \\\[ g\_{AL}^{1}(z) \= \\int\_{z\_0}^z \\left\\{E\_{{X}^{2}\|X^1\=v}(1\)\\right\\} dv \+ z\_0 \= z. \\] To estimate an AL profile, one replaces the integral in [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef) by a summation and the derivative with a finite difference (Apley and Zhu [2020](#ref-Apley2019)). In particular, consider a partition of the range of observed values \\(x\_{i}^j\\) of variable \\(X^j\\) into \\(K\\) intervals \\(N\_j(k)\=\\left(z\_{k\-1}^j,z\_k^j\\right]\\) (\\(k\=1,\\ldots,K\\)). Note that \\(z\_0^j\\) can be chosen just below \\(\\min(x\_1^j,\\ldots,x\_N^j)\\) and \\(z\_K^j\=\\max(x\_1^j,\\ldots,x\_N^j)\\). Let \\(n\_j(k)\\) denote the number of observations \\(x\_i^j\\) falling into \\(N\_j(k)\\), with \\(\\sum\_{k\=1}^K n\_j(k)\=n\\). An estimator of the AL profile for variable \\(X^j\\) can then be constructed as follows: \\\[\\begin{equation} \\widehat{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^{k\_j(z)} \\frac{1}{n\_j(k)} \\sum\_{i: x\_i^j \\in N\_j(k)} \\left\\{ f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right) \\right\\} \- \\hat{c}, \\tag{18\.8} \\end{equation}\\] where \\(k\_j(z)\\) is the index of interval \\(N\_j(k)\\) in which \\(z\\) falls, i.e., \\(z \\in N\_j\\{k\_j(z)\\}\\), and \\(\\hat{c}\\) is selected so that \\(\\sum\_{i\=1}^n \\widehat{g}\_{AL}^{j}(x\_i^j)\=0\\). To interpret [(18\.8\)](accumulatedLocalProfiles.html#eq:ALPest), note that the difference \\(f\\left(\\underline{x}\_i^{j\| \= z\_k^j}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_{k\-1}^j}\\right)\\) corresponds to the difference of the CP profile for the \\(i\\)\-th observation at the limits of interval \\(N\_j(k)\\). These differences are then averaged across all observations for which the observed value of \\(X^j\\) falls into the interval and are then accumulated. Note that, in general, \\(\\widehat{g}\_{AL}^{j}(z)\\) is not smooth at the boundaries of intervals \\(N\_j(k)\\). A smooth estimate can be obtained as follows: \\\[\\begin{equation} \\widetilde{g}\_{AL}^{j}(z) \= \\sum\_{k\=1}^K \\left\[ \\frac{1}{\\sum\_{l} w\_l(z\_k)} \\sum\_{i\=1}^N w\_{i}(z\_k) \\left\\{f\\left(\\underline{x}\_i^{j\| \= z\_k}\\right) \- f\\left(\\underline{x}\_i^{j\| \= z\_k \- \\Delta}\\right)\\right\\}\\right] \- \\hat{c}, \\tag{18\.9} \\end{equation}\\] where points \\(z\_k\\) (\\(k\=0, \\ldots, K\\)) form a uniform grid covering the interval \\((z\_0,z)\\) with step \\(\\Delta \= (z\-z\_0\)/K\\), and weight \\(w\_i(z\_k)\\) captures the distance between point \\(z\_k\\) and observation \\(x\_i^j\\). In particular, we may use similar weights as in the case of [(18\.5\)](accumulatedLocalProfiles.html#eq:LDPest2).

### 18\.3\.3 Dependence profiles for a model with interaction and correlated explanatory variables: an example

In this section, we illustrate in more detail the behavior of PD, LD, and AL profiles for a model with an interaction between correlated explanatory variables. In particular, let us consider the following simple model for two explanatory variables: \\\[\\begin{equation} f(X^1, X^2\) \= (X^1 \+ 1\)\\cdot X^2\. \\tag{18\.10} \\end{equation}\\] Moreover, assume that explanatory variables \\(X^1\\) and \\(X^2\\) are uniformly distributed over the interval \\(\[\-1,1]\\) and perfectly correlated, i.e., \\(X^2 \= X^1\\).
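Before turning to the worked example with the eight observations in Table 18\.4 below, it may help to see these quantities computed numerically. The following is a minimal base\-R sketch written for this illustration (it is not part of the book's `DALEX` workflow): it encodes model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel) with eight equally spaced, perfectly correlated observations, and evaluates the PD average of CP profiles and the interval\-based AL estimator; the numbers it produces agree with the profiles derived in the remainder of this section.

```
# Toy model (18.10) with perfectly correlated explanatory variables.
f <- function(x1, x2) (x1 + 1) * x2

x1 <- seq(-1, 1, length.out = 8)   # eight equally spaced observations on [-1, 1]
x2 <- x1                           # perfect correlation: X^2 = X^1

# PD estimate (17.2): average of the CP profiles over all observations.
z_grid <- seq(-1, 1, length.out = 101)
pd_hat <- sapply(z_grid, function(z) mean(f(z, x2)))
range(pd_hat)                      # numerically zero for every z, in line with (18.12)

# LD profile: given X^1 = z, X^2 has a point mass at z, so the profile is z * (z + 1).
ld <- z_grid * (z_grid + 1)        # matches (18.13)

# AL estimate (18.8): four intervals with two observations each.
breaks <- quantile(x1, probs = seq(0, 1, length.out = 5))   # approx. -1, -0.5, 0, 0.5, 1
k <- cut(x1, breaks, include.lowest = TRUE, labels = FALSE)
cp_diff <- f(breaks[k + 1], x2) - f(breaks[k], x2)          # CP differences at interval limits
local_effect <- tapply(cp_diff, k, mean)                    # average within each interval
al_hat <- cumsum(local_effect)     # accumulated (uncentred); close to (z^2 - 1)/2 from (18.14)
```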
Suppose that we have got a dataset with eight observations as in Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Note that, for both \\(X^1\\) and \\(X^2\\), the sum of all observed values is equal to 0\. Table 18\.4: A sample of eight observations. | i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | \\(X^1\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 | | \\(X^2\\) | \-1 | \-0\.71 | \-0\.43 | \-0\.14 | 0\.14 | 0\.43 | 0\.71 | 1 | | \\(y\\) | 0 | \-0\.2059 | \-0\.2451 | \-0\.1204 | 0\.1596 | 0\.6149 | 1\.2141 | 2 | Note that PD, LD, AL profiles describe the effect of a variable in isolation from the values of other variables. In model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the effect of variable \\(X^1\\) depends on the value of variable \\(X^2\\). For models with interactions, it is subjective to define what would be the “true” main effect of variable \\(X^1\\). Complex predictive models often have interactions. By examining the case of model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), we will provide some intuition on how PD, LD and AL profiles may behave in such cases. Figure 18\.4: Partial\-dependence (PD), local\-dependence (LD), and accumulated\-local (AL) profiles for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel). Panel A: ceteris\-paribus (CP) profiles for eight observations from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData). Panel B: entire CP profiles (top) contribute to calculation of the corresponding PD profile (bottom). Panel C: only parts of the CP profiles (top), close to observations of interest, contribute to the calculation of the corresponding LD profile (bottom). Panel D: only parts of the CP profiles (top) contribute to the calculation of the corresponding AL profile (bottom). Let us explicitly express the CP profile for \\(X^1\\) for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel): \\\[\\begin{equation} h^{1}\_{CP}(z) \= f(z,X^2\) \= (z\+1\)\\cdot X^2\. \\tag{18\.11} \\end{equation}\\] By allowing \\(z\\) to take any value in the interval \\(\[\-1,1]\\), we get the CP profiles as straight lines with the slope equal to the value of variable \\(X^2\\). Hence, for instance, the CP profile for observation \\((\-1,\-1\)\\) is a straight line with the slope equal to \\(\-1\\). The CP profiles for the eight observations, from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData) are presented in panel A of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew). Recall that the PD profile for \\(X^j\\), defined in equation [(17\.1\)](partialDependenceProfiles.html#eq:PDPdef0), is the expected value, over the joint distribution of all explanatory variables other than \\(X^j\\), of the model predictions when \\(X^j\\) is set to \\(z\\). This leads to the estimation of the profile by taking the average of CP profiles for \\(X^j\\), as given in [(17\.2\)](partialDependenceProfiles.html#eq:PDPest). In our case, this implies that the PD profile for \\(X^1\\) is the expected value of the model predictions over the distribution of \\(X^2\\), i.e., over the uniform distribution on the interval \\(\[\-1,1]\\). 
Thus, the PD profile is estimated by taking the average of the CP profiles, given by [(18\.11\)](accumulatedLocalProfiles.html#eq:CPtrickyModel), at each value of \\(z\\) in \\(\[\-1,1]\\): \\\[\\begin{equation} \\hat g\_{PD}^{1}(z) \= \\frac{1}{8} \\sum\_{i\=1}^{8} (z\+1\)\\cdot X^2\_{i} \= \\frac{z\+1}{8} \\sum\_{i\=1}^{8} X^2\_{i} \= 0\. \\tag{18\.12} \\end{equation}\\] As a result, the PD profile for \\(X^1\\) is estimated as a horizontal line at 0, as seen in the bottom part of Panel B of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew). Since the \\(X^1\\) and \\(X^2\\) variables are correlated, it can be argued that we should not include entire CP profiles in the calculation of the PD profile, but only parts of them. In fact, for perfectly correlated explanatory variables, the CP profile for the \\(i\\)\-th observation should actually be undefined for any values of \\(z\\) different from \\(x^2\_i\\). The estimated horizontal PD profile results from using the marginal distribution of \\(X^2\\), which disregards the value of \\(X^1\\), in the definition of the profile. This observation suggests a modification: instead of the marginal distribution, one might consider the conditional distribution of \\(X^2\\) given \\(X^1\\). The modification leads to the definition of LD profile. For the data from Table [18\.4](accumulatedLocalProfiles.html#tab:trickyModelData), the conditional distribution of \\(X^2\\), given \\(X^1\=z\\), is just a probability mass of 1 at \\(z\\). Consequently, for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), the LD profile for \\(X^1\\) and any \\(z \\in \[\-1,1]\\) is given by \\\[\\begin{equation} g\_{LD}^{1}(z) \= z \\cdot (z\+1\). \\tag{18\.13} \\end{equation}\\] The bottom part of panel C of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew) presents the LD profile estimated by applying estimator [(18\.3\)](accumulatedLocalProfiles.html#eq:LDPdef), in which the conditional distribution was calculated by using four bins with two observations each (shown in the top part of the panel). The LD profile shows the average of predictions over the conditional distribution. Part of the average can be attributed to the effect of the correlated variable \\(X^2\\). AL profile shows the net effect of \\(X^1\\) variable. By using definition [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), the AL profile for model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel) is given by \\\[\\begin{align} g\_{AL}^{1}(z) \&\= \\int\_{\-1}^z E \\left\[\\frac{\\partial f(X^1, X^2\)}{\\partial X^1} \| X^1 \= v \\right] dv \\nonumber \\\\ \& \= \\int\_{\-1}^z E \\left\[X^2 \| X^1 \= v \\right] dv \= \\int\_{\-1}^z v dv \= (z^2 \- 1\)/2\. \\tag{18\.14} \\end{align}\\] The bottom part of panel D of Figure [18\.4](accumulatedLocalProfiles.html#fig:accumulatedLocalEffectsNew) presents the AL profile estimated by applying estimator [(18\.7\)](accumulatedLocalProfiles.html#eq:ALPdef), in which the range of observed values of \\(X^1\\) was split into four intervals with two observations each. It is clear that PD, LD and AL profiles show different aspects of the model. In the analyzed example of model [(18\.10\)](accumulatedLocalProfiles.html#eq:trickyModel), we obtain three different explanations of the effect of variable \\(X^1\\). In practice, explanatory variables are typically correlated and complex predictive models are usually not additive. 
Therefore, when analyzing any model, it is worth checking how much the PD, LD, and AL profiles differ and, if they do, looking for potential causes. Correlations can be detected at the stage of data exploration. Interactions can be noted by looking at individual CP profiles.

18\.4 Example: apartment\-prices data
-------------------------------------

In this section, we use PD, LD, and AL profiles to evaluate the performance of the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *number of rooms*, as they are correlated (see Figure [4\.9](dataSetsIntro.html#fig:apartmentsSurfaceNorooms)). Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment) shows the three types of profiles for both variables estimated according to formulas [(17\.2\)](partialDependenceProfiles.html#eq:PDPest), [(18\.5\)](accumulatedLocalProfiles.html#eq:LDPest2), and [(18\.9\)](accumulatedLocalProfiles.html#eq:ALPest2). As we can see from the plots, the profiles calculated with the different methods differ. The LD profiles are steeper than the PD profiles. This is because, for instance, the effect of *surface* includes the effect of other correlated variables, including *number of rooms*. The AL profile eliminates the effect of correlated variables. Since the AL and PD profiles are parallel to each other, they suggest that the model is additive for these two explanatory variables. Figure 18\.5: Partial\-dependence, local\-dependence, and accumulated\-local profiles for the random forest model for the apartment\-prices dataset.

18\.5 Pros and cons
-------------------

The LD and AL profiles, described in this chapter, are useful to summarize the influence of an explanatory variable on a model’s predictions. The profiles are constructed by using the CP profiles introduced in Chapter [10](ceterisParibus.html#ceterisParibus), but they differ in how the CP profiles for individual observations are summarized. When explanatory variables are independent and there are no interactions in the model, the CP profiles are parallel and their mean, i.e., the PD profile introduced in Chapter [17](partialDependenceProfiles.html#partialDependenceProfiles), adequately summarizes them. When the model is additive, but an explanatory variable is correlated with some other variables, neither PD nor LD profiles will properly capture the effect of the explanatory variable on the model’s predictions. However, the AL profile will provide a correct summary of the effect. When there are interactions in the model, none of the profiles will provide a correct assessment of the effect of any explanatory variable involved in the interaction(s). This is because the profiles for the variable will also include the effect of other variables. Comparison of PD, LD, and AL profiles may help in identifying whether there are any interactions in the model and/or whether explanatory variables are correlated. When there are interactions, they may be explored by using a generalization of the PD profiles for two or more explanatory variables (Apley and Zhu [2020](#ref-Apley2019)).

18\.6 Code snippets for R
-------------------------

In this section, we present the `DALEX` package for R, which covers the methods presented in this chapter.
In particular, it includes wrappers for functions from the `ingredients` package (Biecek et al. [2019](#ref-ingredientsRPackage)). Note that similar functionalities can be found in package `ALEPlots` (Apley [2018](#ref-ALEPlotRPackage)) or `iml` (Molnar, Bischl, and Casalicchio [2018](#ref-imlRPackage)). For illustration purposes, we use the random forest model `apartments_rf` (see Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment prices dataset (see Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the goal is to predict the price per square meter of an apartment. In our illustration, we focus on two explanatory variables, *surface* and *number of rooms*. We first load the model\-object via the `archivist` hook, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). Then we construct the explainer for the model by using the function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that, beforehand, we have got to load the `randomForest` package, as the model was fitted by using function `randomForest()` from this package (see Section [4\.2\.2](dataSetsIntro.html#model-titanic-rf)) and it is important to have the corresponding `predict()` function available. ``` library("DALEX") library("randomForest") apartments_rf <- archivist::aread("pbiecek/models/fe7a5") explainer_apart_rf <- DALEX::explain(model = apartments_rf, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Random Forest") ``` The function that allows the computation of LD and AL profiles in the `DALEX` package is `model_profile()`. Its use and arguments were described in Section [17\.6](partialDependenceProfiles.html#PDPR). LD profiles are calculated by specifying argument `type = "conditional"`. In the example below, we also use the `variables` argument to calculate the profile only for the explanatory variables *surface* and *no.rooms*. By default, the profile is based on 100 randomly selected observations. ``` ld_rf <- model_profile(explainer = explainer_apart_rf, type = "conditional", variables = c("no.rooms", "surface")) ``` The resulting object of class “model\_profile” contains the LD profiles for both explanatory variables. By applying the `plot()` function to the object, we obtain separate plots of the profiles. ``` plot(ld_rf) + ggtitle("Local-dependence profiles for no. of rooms and surface", "") ``` The resulting plot is shown in Figure [18\.6](accumulatedLocalProfiles.html#fig:aleExample3Plot). The profiles essentially correspond to those included in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment). Figure 18\.6: Local\-dependence profiles for the random forest model and explanatory variables *no.rooms* and *surface* for the apartment\-prices dataset. AL profiles are calculated by applying function `model_profile()` with the additional argument `type = "accumulated"`. In the example below, we also use the `variables` argument to calculate the profile only for the explanatory variables *surface* and *no.rooms*. ``` al_rf <- model_profile(explainer = explainer_apart_rf, type = "accumulated", variables = c("no.rooms", "surface")) ``` By applying the `plot()` function to the object, we obtain separate plots of the AL profiles for *no.rooms* and *surface*. They are presented in Figure [18\.7](accumulatedLocalProfiles.html#fig:aleExample2Plot). 
The profiles essentially correspond to those included in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment).

```
plot(al_rf) +
  ggtitle("Accumulated-local profiles for no. of rooms and surface", "")
```

Figure 18\.7: Accumulated\-local profiles for the random forest model and explanatory variables *no.rooms* and *surface* for the apartment\-prices dataset.

Function `plot()` allows including all plots in a single graph. We will show how to apply it in order to obtain Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment). Toward this end, we have got to create PD profiles first (see Section [17\.6](partialDependenceProfiles.html#PDPR)). We also modify the labels of the PD, LD, and AL profiles contained in the `agr_profiles` components of the “model\_profile”\-class objects created for the different profiles.

```
pd_rf <- model_profile(explainer = explainer_apart_rf, type = "partial",
                       variables = c("no.rooms", "surface"))

pd_rf$agr_profiles$`_label_` = "partial dependence"
ld_rf$agr_profiles$`_label_` = "local dependence"
al_rf$agr_profiles$`_label_` = "accumulated local"
```

Subsequently, we simply apply the `plot()` function to the `agr_profiles` components of the “model\_profile”\-class objects for the different profiles (see Section [17\.6](partialDependenceProfiles.html#PDPR)).

```
plot(pd_rf, ld_rf, al_rf)
```

The resulting plot (not shown) is essentially the same as the one presented in Figure [18\.5](accumulatedLocalProfiles.html#fig:featureEffectsApartment), with a possible difference due to the use of a different set of (randomly selected) 100 observations from the apartment\-prices dataset.

18\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. It is available on `pip` and `GitHub`. For illustration purposes, we use the `titanic_rf` random forest model for the Titanic data developed in Section [4\.3\.2](dataSetsIntro.html#model-titanic-python-rf). Recall that the model is developed to predict the probability of survival for passengers of the Titanic. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.

```
import dalex as dx
titanic_rf_exp = dx.Explainer(titanic_rf, X, y,
                  label = "Titanic RF Pipeline")
```

The function that allows calculations of LD profiles is `model_profile()`. It was already introduced in Section [17\.7](partialDependenceProfiles.html#PDPPython). By default, it calculates PD profiles. To obtain LD profiles, the argument `type = 'conditional'` should be used. In the example below, we calculate the LD profiles for *age* and *fare* by applying the `model_profile()` function to the explainer\-object for the random forest model while specifying `type = 'conditional'`. Results are stored in the `ld_rf.result` field.

```
ld_rf = titanic_rf_exp.model_profile(type = 'conditional')
ld_rf.result['_label_'] = 'LD profiles'
ld_rf.result
```

Results can be visualised by using the `plot()` method. Note that, in the code below, we use the `variables` argument to display the LD profiles only for *age* and *fare*. The resulting plot is presented in Figure [18\.8](accumulatedLocalProfiles.html#fig:examplePythonLDProfile2).
``` ld_rf.plot(variables = ['age', 'fare']) ``` Figure 18\.8: Local\-dependence profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python. In order to calculate the AL profiles for *age* and *fare*, we apply the `model_profile()` function with the `type = 'accumulated'` option. ``` al_rf = titanic_rf_exp.model_profile(type = 'accumulated') al_rf.result['_label_'] = 'AL profiles' ``` We can plot AL and LD profiles in a single chart. Toward this end, in the code that follows, we pass the `ld_rf` object, which contains LD profiles, as the first argument of the `plot()` method of the `al_rf` object that includes AL profiles. We also use the `variables` argument to display the profiles only for *age* and *fare*. The resulting plot is presented in Figure [18\.9](accumulatedLocalProfiles.html#fig:examplePythonALLDProfiles). ``` al_rf.plot(ld_rf, variables = ['age', 'fare']) ``` Figure 18\.9: Local\-dependence and accumulated\-local profiles for *age* and *fare* for the random forest model for the Titanic data, obtained by using the `plot()` method in Python.
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/residualDiagnostic.html
19 Residual\-diagnostics Plots ============================== 19\.1 Introduction ------------------ In this chapter, we present methods that are useful for a detailed examination of both overall and instance\-specific model performance. In particular, we focus on graphical methods that use residuals. The methods may be used for several purposes: * In Part II of the book, we discussed tools for single\-instance exploration. Residuals can be used to identify potentially problematic instances. The single\-instance explainers can then be used in the problematic cases to understand, for instance, which factors contribute most to the errors in prediction. * For most models, residuals should express a random behavior with certain properties (like, e.g., being concentrated around 0\). If we find any systematic deviations from the expected behavior, they may signal an issue with a model (for instance, an omitted explanatory variable or a wrong functional form of a variable included in the model). * In Chapter [15](modelPerformance.html#modelPerformance), we discussed measures that can be used to evaluate the overall performance of a predictive model. Sometimes, however, we may be more interested in cases with the largest prediction errors, which can be identified with the help of residuals. Residual diagnostics is a classical topic related to statistical modelling. It is most often discussed in the context of the evaluation of goodness\-of\-fit of a model. That is, residuals are computed using the training data and used to assess whether the model predictions “fit” the observed values of the dependent variable. The literature on the topic is vast, as essentially every book on statistical modeling includes some discussion about residuals. Thus, in this chapter, we are not aiming at being exhaustive. Rather, our goal is to present selected concepts that underlie the use of residuals for predictive models. 19\.2 Intuition --------------- As it was mentioned in Section [2\.3](modelDevelopmentProcess.html#notation), we primarily focus on models describing the expected value of the dependent variable as a function of explanatory variables. In such a case, for a “perfect” predictive model, the predicted value of the dependent variable should be exactly equal to the actual value of the variable for every observation. Perfect prediction is rarely, if ever, expected. In practice, we want the predictions to be reasonably close to the actual values. This suggests that we can use the difference between the predicted and the actual value of the dependent variable to quantify the quality of predictions obtained from a model. The difference is called a *residual*. For a single observation, residual will almost always be different from zero. While a large (absolute) value of a residual may indicate a problem with a prediction for a particular observation, it does not mean that the quality of predictions obtained from a model is unsatisfactory in general. To evaluate the quality, we should investigate the “behavior” of residuals for a group of observations. In other words, we should look at the distribution of the values of residuals. For a “good” model, residuals should deviate from zero randomly, i.e., not systematically. Thus, their distribution should be symmetric around zero, implying that their mean (or median) value should be zero. Also, residuals should be close to zero themselves, i.e., they should show low variability. Usually, to verify these properties, graphical methods are used. 
For instance, a histogram can be used to check the symmetry and location of the distribution of residuals. Note that a model may imply a concrete distribution for residuals. In such a case, the distributional assumption can be verified by using a suitable graphical method like, for instance, a quantile\-quantile plot. If the assumption is found to be violated, one might want to be careful when using predictions obtained from the model.

19\.3 Method
------------

As it was already mentioned in Chapter [2](modelDevelopmentProcess.html#modelDevelopmentProcess), for a continuous dependent variable \\(Y\\), residual \\(r\_i\\) for the \\(i\\)\-th observation in a dataset is the difference between the observed value of \\(Y\\) and the corresponding model prediction: \\\[\\begin{equation} r\_i \= y\_i \- f(\\underline{x}\_i) \= y\_i \- \\widehat{y}\_i. \\tag{19\.1} \\end{equation}\\] *Standardized residuals* are defined as \\\[\\begin{equation} \\tilde{r}\_i \= \\frac{r\_i}{\\sqrt{\\mbox{Var}(r\_i)}}, \\tag{19\.2} \\end{equation}\\] where \\(\\mbox{Var}(r\_i)\\) is the variance of the residual \\(r\_i\\). Of course, in practice, the variance of \\(r\_i\\) is usually unknown. Hence, the estimated value of \\(\\mbox{Var}(r\_i)\\) is used in [(19\.2\)](residualDiagnostic.html#eq:standresid). Residuals defined in this way are often called the *Pearson residuals* (Galecki and Burzykowski [2013](#ref-Galecki2013)). Their distribution should be approximately standard\-normal. For the classical linear\-regression model, \\(\\mbox{Var}(r\_i)\\) can be estimated by using the design matrix. On the other hand, for count data, the variance can be estimated by \\(f(\\underline{x}\_i)\\), i.e., the expected value of the count. In general, for complicated models, it may be hard to estimate \\(\\mbox{Var}(r\_i)\\), so it is often approximated by a constant for all residuals. Definition [(19\.2\)](residualDiagnostic.html#eq:standresid) can also be applied to a binary dependent variable if the model prediction \\(f(\\underline{x}\_i)\\) is the probability of observing \\(y\_i\\) and upon coding the two possible values of the variable as 0 and 1\. However, in this case, the range of possible values of \\(r\_i\\) is restricted to \\(\[\-1,1]\\), which limits the usefulness of the residuals. For this reason, the Pearson residuals are used more often. Note that, if the observed values of the explanatory\-variable vectors \\(\\underline{x}\_i\\) lead to different predictions \\(f(\\underline{x}\_i)\\) for different observations in a dataset, the distribution of the Pearson residuals will not be approximated by the standard\-normal one. This is the case when, for instance, one (or more) of the explanatory variables is continuous. Nevertheless, in that case, the index plot may still be useful to detect observations with large residuals. The standard\-normal approximation is more likely to apply in the situation when the observed values of vectors \\(\\underline{x}\_i\\) split the data into a few, say \\(K\\), groups, with observations in group \\(k\\) (\\(k\=1,\\ldots,K\\)) sharing the same predicted value \\(f\_k\\). This may happen if all explanatory variables are categorical with a limited number of categories. In that case, one can consider averaging residuals \\(r\_i\\) per group and standardizing them by \\(\\sqrt{f\_k(1\-f\_k)/n\_k}\\), where \\(n\_k\\) is the number of observations in group \\(k\\).
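As a brief, hedged illustration of these definitions (simulated data only, not the book's datasets), the base\-R sketch below computes raw residuals as in [(19\.1\)](residualDiagnostic.html#eq:resid), standardized residuals for a linear\-regression model (with the variance estimated from the design matrix), and Pearson residuals for a count\-data model (with the variance estimated by the predicted mean).

```
set.seed(1)
d <- data.frame(x = runif(100))

# Continuous dependent variable: raw and standardized residuals for lm().
d$y <- 2 + 3 * d$x + rnorm(100)
fit_lm <- lm(y ~ x, data = d)
r_raw <- d$y - fitted(fit_lm)   # r_i = y_i - f(x_i), as in (19.1)
r_std <- rstandard(fit_lm)      # standardized residuals; Var(r_i) estimated via the design matrix

# Count dependent variable: Pearson residuals for a Poisson model, i.e.,
# residuals divided by the square root of the estimated variance f(x_i).
d$count <- rpois(100, lambda = exp(0.5 + d$x))
fit_pois <- glm(count ~ x, family = poisson, data = d)
r_pearson <- residuals(fit_pois, type = "pearson")

head(cbind(raw = d$count - fitted(fit_pois), pearson = r_pearson))
```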
For categorical data, residuals are usually defined in terms of differences in predictions for the dummy binary variable indicating the category observed for the \\(i\\)\-th observation. Let us consider the classical linear\-regression model. In that case, residuals should be normally distributed with mean zero and variance defined by the diagonal of hat\-matrix \\(\\underline X(\\underline X^T \\underline X)^{\-1}\\underline X^T\\). For independent explanatory variables, it should lead to a constant variance of residuals. Figure [19\.1](residualDiagnostic.html#fig:residuals1234) presents examples of classical diagnostic plots for linear\-regression models that can be used to check whether the assumptions are fulfilled. In fact, the plots in Figure [19\.1](residualDiagnostic.html#fig:residuals1234) suggest issues with the assumptions. In particular, the top\-left panel presents the residuals in function of the estimated linear combination of explanatory variables, i.e., predicted (fitted) values. For a well\-fitting model, the plot should show points scattered symmetrically around the horizontal straight line at 0\. However, the scatter in the top\-left panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234) has got a shape of a funnel, reflecting increasing variability of residuals for increasing fitted values. This indicates a violation of the homoscedasticity, i.e., the constancy of variance, assumption. Also, the smoothed line suggests that the mean of residuals becomes increasingly positive for increasing fitted values. This indicates a violation of the assumption that residuals have got zero\-mean. The top\-right panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234) presents the scale\-location plot, i.e., the plot of \\(\\sqrt{\\tilde{r}\_i}\\) in function of the fitted values \\(f(\\underline{x}\_i)\\). For a well\-fitting model, the plot should show points scattered symmetrically across the horizontal axis. This is clearly not the case of the plot in Figure [19\.1](residualDiagnostic.html#fig:residuals1234), which indicates a violation of the homoscedasticity assumption. The bottom\-left panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234) presents the plot of standardized residuals in the function of *leverage*. Leverage is a measure of the distance between \\(\\underline{x}\_i\\) and the vector of mean values for all explanatory variables (Kutner et al. [2005](#ref-Kutner2005)). A large leverage value for the \\(i\\)\-th observation, say \\(l\_i\\), indicates that \\(\\underline{x}\_i\\) is distant from the center of all observed values of the vector of explanatory variables. Importantly, a large leverage value implies that the observation may have an important influence on predicted/fitted values. In fact, for the classical linear\-regression model, it can be shown that the predicted sum\-of\-squares, defined in [(15\.5\)](modelPerformance.html#eq:PRESS), can be written as \\\[\\begin{equation} PRESS \= \\sum\_{i\=1}^{n} (\\widehat{y}\_{i(\-i)} \- y\_i)^2 \= \\sum\_{i\=1}^{n} \\frac{r\_i^2}{(1\-l\_{i})^2}. \\tag{19\.3} \\end{equation}\\] Thus, [(19\.3\)](residualDiagnostic.html#eq:leveragePRESS) indicates that observations with a large \\(r\_i\\) (or \\(\\tilde{r}\_i\\)) and a large \\(l\_i\\) have an important influence on the overall predictive performance of the model. Hence, the plot of standardized residuals in the function of leverage can be used to detect such influential observations. 
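The identity in [(19\.3\)](residualDiagnostic.html#eq:leveragePRESS) can also be checked numerically. The short base\-R sketch below (simulated data, written for this illustration rather than taken from the book's code) compares the leave\-one\-out predicted sum\-of\-squares with the leverage\-based formula.

```
set.seed(123)
n <- 50
d <- data.frame(x = runif(n))
d$y <- 1 + 2 * d$x + rnorm(n)
fit <- lm(y ~ x, data = d)

r <- residuals(fit)    # residuals r_i
l <- hatvalues(fit)    # leverage values l_i (diagonal of the hat matrix)

# Left-hand side of (19.3): refit without the i-th observation and predict it.
press_loo <- sum(sapply(seq_len(n), function(i) {
  fit_i <- lm(y ~ x, data = d[-i, ])
  (predict(fit_i, newdata = d[i, , drop = FALSE]) - d$y[i])^2
}))

# Right-hand side of (19.3): residuals and leverages from the full fit.
press_formula <- sum(r^2 / (1 - l)^2)

all.equal(press_loo, press_formula)   # TRUE, up to numerical error
```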
Note that the plot can also be used to check homoscedasticity because, under that assumption, it should show a symmetric scatter of points around the horizontal line at 0\. This is not the case of the plot presented in the bottom\-left panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234). Hence, the plot suggests that the assumption is not fulfilled. However, it does not indicate any particular influential observations, which should be located in the upper\-right or lower\-right corners of the plot. Note that the plot of standardized residuals in function of leverage can also be used to detect observations with large differences between the predicted and observed value of the dependent variable. In particular, given that \\({\\tilde{r}\_i}\\) should have approximately standard\-normal distribution, only about 0\.5% of them should be larger, in absolute value, than 2\.57\. If there is an excess of such observations, this could be taken as a signal of issues with the fit of the model. At least two such observations (59 and 143\) are indicated in the plot shown in the bottom\-left panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234). Finally, the bottom\-right panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234) presents an example of a normal quantile\-quantile plot. In particular, the vertical axis represents the ordered values of the standardized residuals, whereas the horizontal axis represents the corresponding values expected from the standard normal distribution. If the normality assumption is fulfilled, the plot should show a scatter of points close to the \\(45^{\\circ}\\) diagonal. Clearly, this is not the case of the plot in the bottom\-right panel of Figure [19\.1](residualDiagnostic.html#fig:residuals1234). Figure 19\.1: Diagnostic plots for a linear\-regression model. Clockwise from the top\-left: residuals in function of fitted values, a scale\-location plot, a normal quantile\-quantile plot, and a leverage plot. In each panel, indexes of the three most extreme observations are indicated. 19\.4 Example: apartment\-prices data ------------------------------------- In this section, we consider the linear\-regression model `apartments_lm` (Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model `apartments_rf` (Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the apartment\-prices dataset (Section [4\.4](dataSetsIntro.html#ApartmentDataset)). Recall that the dependent variable of interest, the price per square meter, is continuous. Thus, we can use residuals \\(r\_i\\), as defined in [(19\.1\)](residualDiagnostic.html#eq:resid). We compute the residuals for the `apartments_test` testing dataset (see Section [4\.5\.4](dataSetsIntro.html#predictionsApartments)). It is worth noting that, as it was mentioned in Section [15\.4\.1](modelPerformance.html#modelPerformanceApartments), RMSE for both models is very similar for that dataset. Thus, overall, the two models could be seen as performing similarly on average. Figures [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) and [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1) summarize the distribution of residuals for both models. In particular, Figure [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) presents histograms of residuals, while Figure [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1) shows box\-and\-whisker plots for the absolute value of the residuals. 
Figure 19\.2: Histogram of residuals for the linear\-regression model `apartments_lm` and the random forest model `apartments_rf` for the `apartments_test` dataset. Despite the similar value of RMSE, the distributions of residuals for both models are different. In particular, Figure [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) indicates that the distribution for the linear\-regression model is, in fact, split into two separate, normal\-like parts, which may suggest omission of a binary explanatory variable in the model. The two components are located around the values of about \-200 and 400\. As mentioned in the previous chapters, the reason for this behavior of the residuals is the fact that the model does not capture the non\-linear relationship between the price and the year of construction. For instance, Figure [17\.8](partialDependenceProfiles.html#fig:pdpApartment3) indicates that the relationship between the construction year and the price may be U\-shaped. In particular, apartments built between 1940 and 1990 appear to be, on average, cheaper than those built earlier or later. As seen from Figure [19\.2](residualDiagnostic.html#fig:plotResidualDensity1), the distribution of residuals for the random forest model is skewed to the right and multimodal. It seems to be centered at a value closer to zero than the distribution for the linear\-regression model, but it shows a larger variation. These conclusions are confirmed by the box\-and\-whisker plots in Figure [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1). Figure 19\.3: Box\-and\-whisker plots of the absolute values of the residuals of the linear\-regression model `apartments_lm` and the random forest model `apartments_rf` for the `apartments_test` dataset. The dots indicate the mean value that corresponds to root\-mean\-squared\-error. The plots in Figures [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) and [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1) suggest that the residuals for the random forest model are more frequently smaller than the residuals for the linear\-regression model. However, a small fraction of the random forest\-model residuals is very large, and it is due to them that the RMSE is comparable for the two models. In the remainder of the section, we focus on the random forest model. Figure [19\.4](residualDiagnostic.html#fig:plotResidual1) shows a scatter plot of residuals (vertical axis) in function of the observed (horizontal axis) values of the dependent variable. For a “perfect” predictive model, we would expect the horizontal line at zero. For a “good” model, we would like to see a symmetric scatter of points around the horizontal line at zero, indicating random deviations of predictions from the observed values. The plot in Figure [19\.4](residualDiagnostic.html#fig:plotResidual1) shows that, for the large observed values of the dependent variable, the residuals are positive, while for small values they are negative. This trend is clearly captured by the smoothed curve included in the graph. Thus, the plot suggests that the predictions are shifted (biased) towards the average. Figure 19\.4: Residuals and observed values of the dependent variable for the random forest model `apartments_rf` for the `apartments_test` dataset. The shift towards the average can also be seen from Figure [19\.5](residualDiagnostic.html#fig:plotPrediction1) that shows a scatter plot of the predicted (vertical axis) and observed (horizontal axis) values of the dependent variable. 
For a “perfectly” fitting model we would expect a diagonal line (indicated in red). The plot shows that, for large observed values of the dependent variable, the predictions are smaller than the observed values, with an opposite trend for the small observed values of the dependent variable. Figure 19\.5: Predicted and observed values of the dependent variable for the random forest model `apartments_rf` for the `apartments_test` dataset. The red line indicates the diagonal. Figure [19\.6](residualDiagnostic.html#fig:plotResidual2) shows an index plot of residuals, i.e., their scatter plot in function of an (arbitrary) identifier of the observation (horizontal axis). The plot indicates an asymmetric distribution of residuals around zero, as there is an excess of large positive (larger than 500\) residuals without a corresponding fraction of negative values. This can be linked to the right\-skewed distribution seen in Figures [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) and [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1) for the random forest model. Figure 19\.6: Index plot of residuals for the random forest model `apartments_rf` for the `apartments_test` dataset. Figure [19\.7](residualDiagnostic.html#fig:plotResidual3) shows a scatter plot of residuals (vertical axis) in function of the predicted (horizontal axis) value of the dependent variable. For a “good” model, we would like to see a symmetric scatter of points around the horizontal line at zero. The plot in Figure [19\.7](residualDiagnostic.html#fig:plotResidual3), like the one in Figure [19\.4](residualDiagnostic.html#fig:plotResidual1), suggests that the predictions are shifted (biased) towards the average. Figure 19\.7: Residuals and predicted values of the dependent variable for the random forest model `apartments_rf` for the `apartments_test` dataset. The random forest model, like the linear\-regression model, assumes that residuals should be homoscedastic, i.e., that they should have a constant variance. Figure [19\.8](residualDiagnostic.html#fig:plotScaleLocation1) presents a variant of the scale\-location plot of residuals, i.e., a scatter plot of the absolute value of residuals (vertical axis) in function of the predicted values of the dependent variable (horizontal axis). The plot includes a smoothed line capturing the average trend. For homoscedastic residuals, we would expect a symmetric scatter around a horizontal line; the smoothed trend should also be horizontal. The plot in Figure [19\.8](residualDiagnostic.html#fig:plotScaleLocation1) deviates from the expected pattern and indicates that the variability of the residuals depends on the (predicted) value of the dependent variable. For models like linear regression, such heteroscedasticity of the residuals would be worrying. In random forest models, however, it may be less of a concern. This is because it may result from the fact that the models reduce the variability of residuals by introducing a bias (towards the average). Thus, it is up to the developer of a model to decide whether such a bias (in our example, for the cheapest and most expensive apartments) is a desirable price to pay for the reduced residual variability. Figure 19\.8: The scale\-location plot of residuals for the random forest model `apartments_rf` for the `apartments_test` dataset.

19\.5 Pros and cons
-------------------

Diagnostic methods based on residuals are a very useful tool in model exploration.
They allow identifying different types of issues with model fit or prediction, such as problems with distributional assumptions or with the assumed structure of the model (in terms of the selection of the explanatory variables and their form). The methods can help in detecting groups of observations for which a model’s predictions are biased and, hence, require inspection. A potential complication related to the use of residual diagnostics is that they rely on graphical displays. Hence, for a proper evaluation of a model, one may have to construct and review many graphs. Moreover, interpretation of the patterns seen in graphs may not be straightforward. Also, it may not be immediately obvious which element of the model may have to be changed to remove the potential issue with the model fit or predictions. 19\.6 Code snippets for R ------------------------- In this section, we present diagnostic plots as implemented in the `DALEX` package for R. The package covers all plots and methods presented in this chapter. Similar functions can be found in packages `auditor` (Gosiewska and Biecek [2018](#ref-R-auditor)), `rms` (Harrell Jr [2018](#ref-rms)), and `stats` (Faraway [2005](#ref-Faraway02practicalregression)). For illustration purposes, we will show how to create the plots shown in Section [19\.4](residualDiagnostic.html#ExampleResidualDiagnostic) for the linear\-regression model `apartments_lm` (Section [4\.5\.1](dataSetsIntro.html#model-Apartments-lr)) and the random forest model `apartments_rf` (Section [4\.5\.2](dataSetsIntro.html#model-Apartments-rf)) for the `apartments_test` dataset (Section [4\.4](dataSetsIntro.html#ApartmentDataset)). We first load the two models via the `archivist` hooks, as listed in Section [4\.5\.6](dataSetsIntro.html#ListOfModelsApartments). Subsequently, we construct the corresponding explainers by using function `explain()` from the `DALEX` package (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). Note that we use the `apartments_test` data frame without the first column, i.e., the *m2\.price* variable, in the `data` argument. This will be the dataset to which the model will be applied. The *m2\.price* variable is explicitly specified as the dependent variable in the `y` argument. We also load the `randomForest` package, as it is important to have the corresponding `predict()` function available for the random forest model. ``` library("DALEX") model_apart_lm <- archivist:: aread("pbiecek/models/55f19") explain_apart_lm <- DALEX::explain(model = model_apart_lm, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Linear Regression") library("randomForest") model_apart_rf <- archivist:: aread("pbiecek/models/fe7a5") explain_apart_rf <- DALEX::explain(model = model_apart_rf, data = apartments_test[,-1], y = apartments_test$m2.price, label = "Random Forest") ``` For exploration of residuals, `DALEX` includes two useful functions. The `model_performance()` function can be used to evaluate the distribution of the residuals. On the other hand, the `model_diagnostics()` function is suitable for investigating the relationship between residuals and other variables. The `model_performance()` function was already introduced in Section [15\.6](modelPerformance.html#modelPerformanceR). 
Application of the function to an explainer\-object returns an object of class “model\_performance” which includes, in addition to selected model\-performance measures, a data frame containing the observed and predicted values of the dependent variable together with the residuals. ``` mr_lm <- DALEX::model_performance(explain_apart_lm) mr_rf <- DALEX::model_performance(explain_apart_rf) ``` By applying the `plot()` function to a “model\_performance”\-class object we can obtain various plots. The required type of the plot is specified with the help of the `geom` argument (see Section [15\.6](modelPerformance.html#modelPerformanceR)). In particular, specifying `geom = "histogram"` results in a histogram of residuals. In the code below, we apply the `plot()` function to the “model\_performance”\-class objects for the linear\-regression and random forest models. As a result, we automatically get a single graph with the histograms of residuals for the two models. The resulting graph is shown in Figure [19\.2](residualDiagnostic.html#fig:plotResidualDensity1) ``` library("ggplot2") plot(mr_lm, mr_rf, geom = "histogram") ``` The box\-and\-whisker plots of the residuals for the two models can be constructed by applying the `geom = "boxplot"` argument. The resulting graph is shown in Figure [19\.3](residualDiagnostic.html#fig:plotResidualBoxplot1). ``` plot(mr_lm, mr_rf, geom = "boxplot") ``` Function `model_diagnostics()` can be applied to an explainer\-object to directly compute residuals. The resulting object of class “model\_diagnostics” is a data frame in which the residuals and their absolute values are combined with the observed and predicted values of the dependent variable and the observed values of the explanatory variables. The data frame can be used to create various plots illustrating the relationship between residuals and the other variables. ``` md_lm <- model_diagnostics(explain_apart_lm) md_rf <- model_diagnostics(explain_apart_rf) ``` Application of the `plot()` function to a `model_diagnostics`\-class object produces, by default, a scatter plot of residuals (on the vertical axis) in function of the predicted values of the dependent variable (on the horizontal axis). By using arguments `variable` and `yvariable`, it is possible to specify plots with other variables used for the horizontal and vertical axes, respectively. The two arguments accept, apart from the names of the explanatory variables, the following values: * `"y"` for the dependent variable, * `"y_hat"` for the predicted value of the dependent variable, * `"obs"` for the identifiers of observations, * `"residuals"` for residuals, * `"abs_residuals"` for absolute values of residuals. Thus, to obtain the plot of residuals in function of the observed values of the dependent variable, as shown in Figure [19\.4](residualDiagnostic.html#fig:plotResidual1), the syntax presented below can be used. ``` plot(md_rf, variable = "y", yvariable = "residuals") ``` To produce Figure [19\.5](residualDiagnostic.html#fig:plotPrediction1), we have got to use the predicted values of the dependent variable on the vertical axis. This is achieved by specifying the `yvariable = "y_hat"` argument. We add the diagonal reference line to the plot by using the `geom_abline()` function. 
```
plot(md_rf, variable = "y", yvariable = "y_hat") +
  geom_abline(colour = "red", intercept = 0, slope = 1)
```

Figure [19\.6](residualDiagnostic.html#fig:plotResidual2) presents an index plot of residuals, i.e., residuals (on the vertical axis) in function of identifiers of individual observations (on the horizontal axis). Toward this aim, we use the `plot()` function call as below.

```
plot(md_rf, variable = "ids", yvariable = "residuals")
```

Finally, Figure [19\.8](residualDiagnostic.html#fig:plotScaleLocation1) presents a variant of the scale\-location plot, with absolute values of the residuals shown on the vertical scale and the predicted values of the dependent variable on the horizontal scale. The plot is obtained with the syntax shown below.

```
plot(md_rf, variable = "y_hat", yvariable = "abs_residuals")
```

Note that, by default, all plots produced by applying the `plot()` function to a “model\_diagnostics”\-class object include a smoothed curve. To exclude the curve from a plot, one can use the argument `smooth = FALSE`.

19\.7 Code snippets for Python
------------------------------

In this section, we use the `dalex` library for Python. The package covers all methods presented in this chapter. But, as mentioned in Section [19\.1](residualDiagnostic.html#IntroResidualDiagnostic), residuals are a classical model\-diagnostics tool. Thus, essentially any model\-related library includes functions that allow calculation and plotting of residuals. For illustration purposes, we use the `apartments_rf` random forest model for the apartment\-prices data developed in Section [4\.6\.2](dataSetsIntro.html#model-Apartments-python-rf). Recall that the model is developed to predict the price per square meter of an apartment in Warsaw. In the first step, we create an explainer\-object that will provide a uniform interface for the predictive model. We use the `Explainer()` constructor for this purpose.

```
import dalex as dx
apartments_rf_exp = dx.Explainer(apartments_rf, X, y,
                  label = "Apartments RF Pipeline")
```

The function that calculates residuals, absolute residuals and observation ids is `model_diagnostics()`.

```
md_rf = apartments_rf_exp.model_diagnostics()
md_rf.result
```

The results can be visualised by applying the `plot()` method. Figure [19\.9](residualDiagnostic.html#fig:examplePythonMDiagnostics2) presents the created plot.

```
md_rf.plot()
```

Figure 19\.9: Residuals versus predicted values for the random forest model for the Apartments data. In the `plot()` function, we can specify what shall be presented on the horizontal and vertical axes. Possible values are columns in the `md_rf.result` data frame, i.e., `residuals`, `abs_residuals`, `y`, `y_hat`, `ids`, and variable names.

```
md_rf.plot(variable = "ids", yvariable = "abs_residuals")
```

Figure 19\.10: Absolute residuals versus indices of corresponding observations for the random forest model for the Apartments data.
21 FIFA 19
==========

21\.1 Introduction
------------------

In the previous chapters, we introduced a range of methods for the exploration of predictive models. Different methods were discussed in separate chapters, and while illustrated, they were not directly compared. Thus, in this chapter, we apply the methods to one dataset in order to present their relative merits. In particular, we present an example of a full process of a model development along the lines introduced in Chapter [2](modelDevelopmentProcess.html#modelDevelopmentProcess). This will allow us to show how one can combine results from different methods.

The Fédération Internationale de Football Association (FIFA) is a governing body of football (sometimes, especially in the USA, called soccer). FIFA is also a series of video games developed by EA Sports that faithfully reproduce the characteristics of real players. FIFA ratings of football players from the video game can be found at `https://sofifa.com/`. Data from this website for 2019 were scraped and made available at the Kaggle webpage `https://www.kaggle.com/karangadiya/fifa19`. We will use the data to build a predictive model for the evaluation of a player’s value. Subsequently, we will use the model exploration and explanation methods to better understand the model’s performance, as well as which variables influence a player’s value and how.

21\.2 Data preparation
----------------------

The original dataset contains 89 variables that describe 16,924 players. The variables include information such as age, nationality, club, wage, etc. In what follows, we focus on 45 variables that are included in the data frame `fifa` available in the `DALEX` package for R and Python. The variables from this dataset are listed in Table [21\.1](UseCaseFIFA.html#tab:FIFAvariables).

Table 21\.1: Variables in the FIFA 19 dataset.

| Name | Weak.Foot | FKAccuracy | Jumping | Composure |
| --- | --- | --- | --- | --- |
| Club | Skill.Moves | LongPassing | Stamina | Marking |
| Position | Crossing | BallControl | Strength | StandingTackle |
| Value.EUR | Finishing | Acceleration | LongShots | SlidingTackle |
| Age | HeadingAccuracy | SprintSpeed | Aggression | GKDiving |
| Overall | ShortPassing | Agility | Interceptions | GKHandling |
| Special | Volleys | Reactions | Positioning | GKKicking |
| Preferred.Foot | Dribbling | Balance | Vision | GKPositioning |
| Reputation | Curve | ShotPower | Penalties | GKReflexes |

In particular, variable `Value.EUR` contains the player’s value in EUR. This will be our dependent variable. The distribution of the variable is heavily skewed to the right. In particular, the quartiles are equal to 325,000 EUR, 725,000 EUR, and 2,534,478 EUR. There are three players with a value higher than 100 million EUR. Thus, in our analyses, we will consider a logarithmically\-transformed players’ value. Figure [21\.1](UseCaseFIFA.html#fig:distFIFA19Value) presents the empirical cumulative\-distribution function and histogram for the transformed value. They indicate that the transformation makes the distribution less skewed.

Figure 21\.1: The empirical cumulative\-distribution function and histogram for the log\\(\_{10}\\)\-transformed players’ values.

Additionally, we take a closer look at four characteristics that will be considered as explanatory variables later in this chapter. These are: `Age`, `Reactions` (a movement skill), `BallControl` (a general skill), and `Dribbling` (a general skill).
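Before turning to the plots, the quartiles of the values and the distributions of the four characteristics can also be inspected numerically. The short R sketch below is one way to do it; it assumes that the `fifa` data frame from the `DALEX` package has been loaded (see Section 21\.2\.1) and uses the column names listed in Table 21\.1\. Note that this data frame contains the subset of the 5,000 most valuable players, so the quartiles will differ somewhat from those quoted above for the full dataset.

```
library("DALEX")

# quartiles of players' values (in EUR) and of the log10-transformed values
quantile(fifa$Value.EUR, probs = c(0.25, 0.5, 0.75))
quantile(log10(fifa$Value.EUR), probs = c(0.25, 0.5, 0.75))

# summary statistics for the four selected characteristics
summary(fifa[, c("Age", "Reactions", "BallControl", "Dribbling")])
```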
Figure [21\.2](UseCaseFIFA.html#fig:distFIFA19histograms) presents histograms of the values of the four variables. From the plot for `Age` we can conclude that most of the players are between 20 and 30 years of age (median age: 25\). Variable `Reactions` has an approximately symmetric distribution, with quartiles equal to 56, 62, and 68\. Histograms of `BallControl` and `Dribbling` indicate, interestingly, bimodal distributions. The smaller modes are due to goalkeepers.

Figure 21\.2: Histograms for selected characteristics of players.

### 21\.2\.1 Code snippets for R

The subset of 5000 most valuable players from the FIFA 19 data is available in the `fifa` data frame in the `DALEX` package.

```
library("DALEX")
head(fifa)
```

### 21\.2\.2 Code snippets for Python

The subset of 5000 most valuable players from the FIFA 19 data can be loaded into Python with the `dalex.datasets.load_fifa()` method.

```
import dalex as dx
fifa = dx.datasets.load_fifa()
```

21\.3 Data understanding
------------------------

We will investigate the relationship between the four selected characteristics and the (logarithmically\-transformed) player’s value. Toward this aim, we use the scatter plots shown in Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter). Each plot includes a smoothed curve capturing the trend. For `Age`, the relationship is not monotonic. There seems to be an optimal age, between 25 and 30 years, at which the player’s value reaches the maximum. On the other hand, the value of the youngest and oldest players is about 10 times lower, as compared to the maximum.

For variables `BallControl` and `Dribbling`, the relationship is not monotonic. In general, the larger the value of these variables, the larger the value of a player. However, there are “local” maxima for players with low scores for `BallControl` and `Dribbling`. As it was suggested earlier, these are probably goalkeepers. For `Reactions`, the association with the player’s value is monotonic, with increasing values of the variable leading to increasing values of players.

Figure 21\.3: Scatter plots illustrating the relationship between the (logarithmically\-transformed) player’s value and selected characteristics.

Figure [21\.4](UseCaseFIFA.html#fig:distFIFA19scatter2) presents the scatter\-plot matrix for the four selected variables. It indicates that all variables are positively correlated, though with different strengths. In particular, `BallControl` and `Dribbling` are strongly correlated, with the estimated correlation coefficient larger than 0\.9\. `Reactions` is moderately correlated with the other three variables. Finally, there is a moderate correlation between `Age` and `Reactions`, but little correlation between `Age` and `BallControl` or `Dribbling`.

Figure 21\.4: Scatter\-plot matrix illustrating the relationship between selected characteristics of players.

21\.4 Model assembly
--------------------

In this section, we develop a model for players’ values. We consider all variables other than `Name`, `Club`, `Position`, `Value.EUR`, `Overall`, and `Special` (see Section [21\.2](UseCaseFIFA.html#FIFAdataprep)) as explanatory variables. The base\-10 logarithm of the player’s value is the dependent variable. Given different possible forms of relationship between the (logarithmically\-transformed) player’s value and explanatory variables (as seen, for example, in Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter)), we build four different, flexible models to check whether they are capable of capturing the various relationships.
In particular, we consider the following models:

* a boosting model with 250 trees of 1\-level depth, as implemented in package `gbm` (Ridgeway [2017](#ref-gbm)),
* a boosting model with 250 trees of 4\-level depth (this model should be able to catch interactions between variables),
* a random forest model with 250 trees, as implemented in package `ranger` (Wright and Ziegler [2017](#ref-rangerRpackage)),
* a linear model with a spline\-transformation of explanatory variables, as implemented in package `rms` (Harrell Jr [2018](#ref-rms)).

These models will be explored in detail in the following sections.

### 21\.4\.1 Code snippets for R

In this section, we show R\-code snippets used to develop the gradient boosting model. Other models were built in a similar way. The code below fits the model to the data. The dependent variable `LogValue` contains the base\-10 logarithm of `Value.EUR`, i.e., of the player’s value.

```
library("gbm")

fifa$LogValue <- log10(fifa$Value.EUR)
fifa_small <- fifa[,-c(1, 2, 3, 4, 6, 7)]

# boosting model with 250 trees of depth 4
fifa_gbm_deep <- gbm(LogValue~., data = fifa_small, n.trees = 250,
            interaction.depth = 4, distribution = "gaussian")
```

For model\-exploration purposes, we have got to create an explainer\-object with the help of the `DALEX::explain()` function (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). The code below is used for the gradient boosting model. Note that the model was fitted to the logarithmically\-transformed player’s value. However, it is more natural to interpret the predictions on the original scale. This is why, in the provided syntax, we apply the `predict_function` argument to specify a user\-defined function to obtain predictions on the original scale, in Euro. Additionally, we use the `data` and `y` arguments to indicate the data frame with explanatory variables and the values of the dependent variable, for which predictions are to be obtained. Finally, the model receives its own `label`.

```
library("DALEX")
fifa_gbm_exp_deep <- DALEX::explain(fifa_gbm_deep,
    data = fifa_small,
    y = 10^fifa_small$LogValue,
    predict_function = function(m,x) 10^predict(m, x, n.trees = 250),
    label = "GBM deep")
```

### 21\.4\.2 Code snippets for Python

In this section, we show Python\-code snippets used to develop the gradient boosting model. Other models were built in a similar way. The code below fits the model to the data. The dependent variable `ylog` contains the logarithm of `value_eur`, i.e., of the player’s value.

```
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
import numpy as np

X = fifa.drop(["nationality", "overall", "potential", "value_eur", "wage_eur"], axis = 1)
y = fifa['value_eur']

ylog = np.log(y)

X_train, X_test, ylog_train, ylog_test, y_train, y_test = train_test_split(X, ylog, y, test_size = 0.25, random_state = 4)

gbm_model = LGBMRegressor()
gbm_model.fit(X_train, ylog_train, verbose = False)
```

For model\-exploration purposes, we have to create the explainer\-object with the help of the `Explainer()` constructor from the `dalex` library (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). The code is provided below. Note that the model was fitted to the logarithmically\-transformed player’s value. However, it is more natural to interpret the predictions on the original scale. This is why, in the provided syntax, we apply the `predict_function` argument to specify a user\-defined function to obtain predictions on the original scale, in Euro.
Additionally, we use the `X` and `y` arguments to indicate the data frame with explanatory variables and the values of the dependent variable, for which predictions are to be obtained. Finally, the model receives its own `label`.

```
def predict_function(model, data):
    return np.exp(model.predict(data))

fifa_gbm_exp = dx.Explainer(gbm_model, X_test, y_test,
    predict_function = predict_function, label = 'gbm')
```

21\.5 Model audit
-----------------

Having developed the four candidate models, we may want to evaluate their performance. Toward this aim, we can use the measures discussed in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). The computed values are presented in Table [21\.2](UseCaseFIFA.html#tab:modelPerformanceFIFA). On average, the values of the root\-mean\-squared\-error (RMSE) and mean\-absolute\-deviation (MAD) are the smallest for the random forest model.

Table 21\.2: Model\-performance measures for the four models for the FIFA 19 data.

| | MSE | RMSE | R2 | MAD |
| --- | --- | --- | --- | --- |
| GBM shallow | 8\.990694e\+12 | 2998449 | 0\.7300429 | 183682\.91 |
| GBM deep | 2\.211439e\+12 | 1487091 | 0\.9335987 | 118425\.56 |
| RF | 1\.141176e\+12 | 1068258 | 0\.9657347 | 50693\.24 |
| RM | 2\.191297e\+13 | 4681129 | 0\.3420350 | 148187\.06 |

In addition to computing measures of the overall performance of the model, we should conduct a more detailed examination of both overall\- and instance\-specific performance. Toward this aim, we can apply residual diagnostics, as discussed in Chapter [19](residualDiagnostic.html#residualDiagnostic). For instance, we can create a plot comparing the predicted (fitted) and observed values of the dependent variable.

Figure 21\.5: Observed and predicted (fitted) players’ values for the four models for the FIFA 19 data.

The resulting plot is shown in Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot). It indicates that predictions are closest to the observed values of the dependent variable for the random forest model. It is worth noting that the smoothed trend for the model is close to a straight line, but with a slope smaller than 1\. This implies the random forest model underestimates the actual value of the most expensive players, while it overestimates the value for the least expensive ones. A similar pattern can be observed for the gradient boosting models. This “shrinking to the mean” is typical for this type of model.

### 21\.5\.1 Code snippets for R

In this section, we show R\-code snippets for model audit for the gradient boosting model. For other models, a similar syntax was used.

The `model_performance()` function (see Section [15\.6](modelPerformance.html#modelPerformanceR)) is used to calculate the values of RMSE, MSE, R\\(^2\\), and MAD for the model.

```
model_performance(fifa_gbm_exp_deep)
```

The `model_diagnostics()` function (see Section [19\.6](residualDiagnostic.html#RcodeResidualDiagnostic)) is used to create residual\-diagnostics plots. Results of this function can be visualised with the generic `plot()` function. In the code that follows, additional arguments are used to improve the appearance and interpretability of both axes.
```
library("ggplot2")
library("scales")

fifa_md_gbm_deep <- model_diagnostics(fifa_gbm_exp_deep)
plot(fifa_md_gbm_deep, variable = "y", yvariable = "y_hat") +
  scale_x_continuous("Value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  scale_y_continuous("Predicted value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  geom_abline(slope = 1) +
  ggtitle("Predicted and observed players' values", "")
```

### 21\.5\.2 Code snippets for Python

In this section, we show Python\-code snippets used to perform residual diagnostics for the trained gradient boosting model. Other models were tested in a similar way.

The `fifa_gbm_exp.model_diagnostics()` function (see Section [19\.7](residualDiagnostic.html#PythoncodeResidualDiagnostic)) is used to calculate the residuals and absolute residuals. Results of this function can be visualised with the `plot()` function. The code below produces diagnostic plots similar to those presented in Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot).

```
fifa_md_gbm = fifa_gbm_exp.model_diagnostics()
fifa_md_gbm.plot(variable = "y", yvariable = "y_hat")
```

21\.6 Model understanding (dataset\-level explanations)
-------------------------------------------------------

All four developed models involve many explanatory variables. It is of interest to understand which of the variables exercises the largest influence on the models’ predictions. Toward this aim, we can apply the permutation\-based variable\-importance measure discussed in Chapter [16](featureImportance.html#featureImportance). Subsequently, we can construct a plot of the obtained mean (over the default 10 permutations) variable\-importance measures. Note that we consider only the top\-20 variables.

Figure 21\.6: Mean variable\-importance calculated using 10 permutations for the four models for the FIFA 19 data.

The resulting plot is shown in Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot). The bar for each explanatory variable starts at the RMSE value of a particular model and ends at the (mean) RMSE calculated for data with permuted values of the variable. Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot) indicates that, for the gradient boosting and random forest models, the two explanatory variables with the largest values of the importance measure are `Reactions` and `BallControl`. The importance of other variables varies depending on the model. Interestingly, in the linear\-regression model, the highest importance is given to goal\-keeping skills.

We may also want to take a look at the partial\-dependence (PD) profiles discussed in Chapter [17](partialDependenceProfiles.html#partialDependenceProfiles). Recall that they illustrate how the expected value of a model’s predictions behaves as a function of an explanatory variable. To create the profiles, we apply function `model_profile()` from the `DALEX` package (see Section [17\.6](partialDependenceProfiles.html#PDPR)). We focus on variables `Reactions`, `BallControl`, and `Dribbling` that were important in the random forest model (see Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot)). We also consider `Age`, as it had some effect in the gradient boosting models. Subsequently, we can construct a plot of contrastive PD profiles (see Section [17\.3\.4](partialDependenceProfiles.html#contrastivePDPs)) that is shown in Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot).
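A contrastive plot like the one in Figure 21\.7 can be obtained by passing several “model\_profile” objects to a single `plot()` call. The sketch below assumes that explainers for the remaining three models have been constructed analogously to Section 21\.4\.1; the names `fifa_gbm_exp_shallow`, `fifa_rf_exp`, and `fifa_rm_exp` are hypothetical.

```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")

# PD profiles for each model (the explainer names below are hypothetical,
# except fifa_gbm_exp_deep created in Section 21.4.1)
pd_gbm_shallow <- model_profile(fifa_gbm_exp_shallow, variables = selected_variables)
pd_gbm_deep    <- model_profile(fifa_gbm_exp_deep,    variables = selected_variables)
pd_rf          <- model_profile(fifa_rf_exp,          variables = selected_variables)
pd_rm          <- model_profile(fifa_rm_exp,          variables = selected_variables)

# overlaying several "model_profile" objects in one plot yields contrastive profiles
plot(pd_gbm_shallow, pd_gbm_deep, pd_rf, pd_rm)
```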
Figure 21\.7: Contrastive partial\-dependence profiles for the four models and selected explanatory variables for the FIFA 19 data. Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) indicates that the shape of the PD profiles for `Reactions`, `BallControl`, and `Dribbling` is, in general, similar for all the models and implies an increasing predicted player’s value for an increasing (at least, after passing some threshold) value of the explanatory variable. However, for `Age`, the shape is different and suggests a decreasing player’s value after the age of about 25 years. It is worth noting that the range of expected model’s predictions is, in general, the smallest for the random forest model. Also, the three tree\-based models tend to stabilize the predictions at the ends of the explanatory\-variable ranges. The most interesting difference between the conclusions drawn from Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter) and those obtained from Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) is observed for variable `Age`. In particular, Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter) suggests that the relationship between player’s age and value is non\-monotonic, while Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) suggests a non\-increasing relationship. How can we explain this difference? A possible explanation is as follows. The youngest players have lower values, not because of their age, but because of their lower skills, which are correlated (as seen from the scatter\-plot matrix in Figure [21\.4](UseCaseFIFA.html#fig:distFIFA19scatter2)) with young age. The simple data exploration analysis, presented in the upper\-left panel of Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter), cannot separate the effects of age and skills. As a result, the analysis suggests a decrease in player’s value for the youngest players. In models, however, the effect of age is estimated while adjusting for the effect of skills. After this adjustment, the effect takes the form of a non\-increasing pattern, as shown by the PD profiles for `Age` in Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot). This example indicates that *exploration of models may provide more insight than exploration of raw data*. In exploratory data analysis, the effect of variable `Age` was confounded by the effect of skill\-related variables. By using a model, the confounding has been removed. ### 21\.6\.1 Code snippets for R In this section, we show R\-code snippets for dataset\-level exploration for the gradient boosting model. For other models a similar syntax was used. The `model_parts()` function from the `DALEX` package (see Section [16\.6](featureImportance.html#featureImportanceR)) is used to calculate the permutation\-based variable\-importance measure. The generic `plot()` function is applied to graphically present the computed values of the measure. The `max_vars` argument is used to limit the number of presented variables up to 20\. ``` fifa_mp_gbm_deep <- model_parts(fifa_gbm_exp_deep) plot(fifa_mp_gbm_deep, max_vars = 20, bar_width = 4, show_boxplots = FALSE) ``` The `model_profile()` function from the `DALEX` package (see Section [17\.6](partialDependenceProfiles.html#PDPR)) is used to calculate PD profiles. The generic `plot()` function is used to graphically present the profiles for selected variables. 
```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")
fifa19_pd_deep <- model_profile(fifa_gbm_exp_deep,
                                variables = selected_variables)
plot(fifa19_pd_deep)
```

### 21\.6\.2 Code snippets for Python

In this section, we show Python code snippets for dataset\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

The `model_parts()` method from the `dalex` library (see Section [16\.7](featureImportance.html#featureImportancePython)) is used to calculate the permutation\-based variable\-importance measure. The `plot()` method is applied to graphically present the computed values of the measure.

```
fifa_mp_gbm = fifa_gbm_exp.model_parts()
fifa_mp_gbm.plot(max_vars = 20)
```

The `model_profile()` method from the `dalex` library (see Section [17\.7](partialDependenceProfiles.html#PDPPython)) is used to calculate PD profiles. The `plot()` method is used to graphically present the computed profiles.

```
fifa_mp_gbm = fifa_gbm_exp.model_profile()
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control', 'skill_dribbling', 'age'])
```

In order to calculate other types of profiles, just change the `type` argument.

```
fifa_mp_gbm = fifa_gbm_exp.model_profile(type = 'accumulated')
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control', 'skill_dribbling', 'age'])
```

21\.7 Instance\-level explanations
----------------------------------

After evaluation of the models at the dataset level, we may want to focus on particular instances.

### 21\.7\.1 Robert Lewandowski

As a first example, we take a look at the value of *Robert Lewandowski*, for an obvious reason. Table [21\.3](UseCaseFIFA.html#tab:RobertLewandowski) presents his characteristics, as included in the analyzed dataset. Robert Lewandowski is a striker.

Table 21\.3: Characteristics of Robert Lewandowski.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 29 | Dribbling | 85 | ShotPower | 88 | Composure | 86 |
| Preferred.Foot | 2 | Curve | 77 | Jumping | 84 | Marking | 34 |
| Reputation | 4 | FKAccuracy | 86 | Stamina | 78 | StandingTackle | 42 |
| Weak.Foot | 4 | LongPassing | 65 | Strength | 84 | SlidingTackle | 19 |
| Skill.Moves | 4 | BallControl | 89 | LongShots | 84 | GKDiving | 15 |
| Crossing | 62 | Acceleration | 77 | Aggression | 80 | GKHandling | 6 |
| Finishing | 91 | SprintSpeed | 78 | Interceptions | 39 | GKKicking | 12 |
| HeadingAccuracy | 85 | Agility | 78 | Positioning | 91 | GKPositioning | 8 |
| ShortPassing | 83 | Reactions | 90 | Vision | 77 | GKReflexes | 10 |
| Volleys | 89 | Balance | 78 | Penalties | 88 | LogValue | 8 |

First, we take a look at variable attributions, discussed in Chapter [6](breakDown.html#breakDown). Recall that they decompose a model’s prediction into parts that can be attributed to different explanatory variables. The attributions can be presented in a break\-down (BD) plot. For brevity, we only consider the random forest model. The resulting BD plot is shown in Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot).

Figure 21\.8: Break\-down plot for Robert Lewandowski for the random forest model.

Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot) suggests that the explanatory variables with the largest effect are `Composure`, `Volleys`, `LongShots`, and `Stamina`.
However, in Chapter [6](breakDown.html#breakDown) it was mentioned that variable attributions may depend on the order of explanatory covariates that are used in calculations. Thus, in Chapter [8](shapley.html#shapley) we introduced Shapley values, based on the idea of averaging the attributions over many orderings. Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) presents the means of the Shapley values computed by using 25 random orderings for the random forest model.

Figure 21\.9: Shapley values for Robert Lewandowski for the random forest model.

Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) indicates that the five explanatory variables with the largest Shapley values are `BallControl`, `Dribbling`, `Reactions`, `ShortPassing`, and `Positioning`. This makes sense, as Robert Lewandowski is a striker.

In Chapter [10](ceterisParibus.html#ceterisParibus), we introduced ceteris\-paribus (CP) profiles. They capture the effect of a selected explanatory variable in terms of changes in a model’s prediction induced by changes in the variable’s values. Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) presents the profiles for variables `Age`, `Reactions`, `BallControl`, and `Dribbling` for the random forest model.

Figure 21\.10: Ceteris\-paribus profiles for Robert Lewandowski for four selected variables and the random forest model.

Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) suggests that, among the four variables, `BallControl` and `Reactions` lead to the largest changes of predictions for this instance. For all four variables, the profiles flatten at the left\- and right\-hand\-side edges. The predicted value for Robert Lewandowski reaches or is very close to the maximum for all four profiles. It is interesting to note that, for `Age`, the predicted value is located at the border of the age region at which the profile suggests a sharp drop in player’s value.

As it was argued in Chapter [12](localDiagnostics.html#localDiagnostics), it is worthwhile to check how the model behaves for observations similar to the instance of interest. Towards this aim, we may want to compare the distribution of residuals for “neighbors” of Robert Lewandowski. Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) presents the histogram of residuals for all data and the 30 neighbors of Robert Lewandowski.

Figure 21\.11: Distribution of residuals for the random forest model for all players and for 30 neighbors of Robert Lewandowski.

Clearly, the neighbors of Robert Lewandowski include some of the most expensive players. Therefore, as compared to the overall distribution, the distribution of residuals for the neighbors, presented in Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot), is skewed to the right, and its mean is larger than the overall mean. Thus, the model underestimates the actual value of the most expensive players. This was also noted based on the plot in the bottom\-left panel of Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot).

We can also look at the local\-stability plot, i.e., the plot that includes CP profiles for the nearest neighbors and the corresponding residuals (see Chapter [12](localDiagnostics.html#localDiagnostics)). In Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot), we present the plot for `Age`.

Figure 21\.12: Local\-stability plot for `Age` for 30 neighbors of Robert Lewandowski and the random forest model.
The CP profiles in Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot) are almost parallel but span quite a wide range of the predicted player’s values. Thus, one could conclude that the predictions for the most expensive players are not very stable. Also, the plot includes more positive residuals (indicated in the plot by green vertical intervals) than negative ones (indicated by red vertical intervals). This confirms the conclusion drawn from Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) that the values of the most expensive players are underestimated by the model.

### 21\.7\.2 Code snippets for R

In this section, we show R\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

The `predict_parts()` function from the `DALEX` package (see Chapters [6](breakDown.html#breakDown)\-[8](shapley.html#shapley)) is used to calculate variable attributions. Note that we apply the `type = "break_down"` argument to prepare BD plots. The generic `plot()` function is used to graphically present the plots.

```
fifa_bd_gbm <- predict_parts(fifa_gbm_exp,
                 new_observation = fifa["R. Lewandowski",],
                 type = "break_down")
plot(fifa_bd_gbm) +
  scale_y_continuous("Predicted value in Euro",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  ggtitle("Break-down plot for Robert Lewandowski","")
```

Shapley values are computed by applying the `type = "shap"` argument.

```
fifa_shap_gbm <- predict_parts(fifa_gbm_exp,
                   new_observation = fifa["R. Lewandowski",],
                   type = "shap")
plot(fifa_shap_gbm, show_boxplots = FALSE) +
  scale_y_continuous("Estimated value in Euro",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  ggtitle("Shapley values for Robert Lewandowski","")
```

The `predict_profile()` function from the `DALEX` package (see Section [10\.6](ceterisParibus.html#CPR)) is used to calculate the CP profiles. The generic `plot()` function is applied to graphically present the profiles.

```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")
fifa_cp_gbm <- predict_profile(fifa_gbm_exp,
                 new_observation = fifa["R. Lewandowski",],
                 variables = selected_variables)
plot(fifa_cp_gbm, variables = selected_variables)
```

Finally, the `predict_diagnostics()` function (see Section [12\.6](localDiagnostics.html#cPLocDiagR)) allows calculating local\-stability plots. The generic `plot()` function can be used to plot these profiles for selected variables.

```
id_gbm <- predict_diagnostics(fifa_gbm_exp,
            fifa["R. Lewandowski",],
            neighbors = 30)
plot(id_gbm) +
  scale_y_continuous("Estimated value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = ""))
```

### 21\.7\.3 Code snippets for Python

In this section, we show Python\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

First, we need to select an instance of interest. In this example, we will use *Cristiano Ronaldo*.

```
cr7 = X.loc['Cristiano Ronaldo',]
```

The `predict_parts()` method from the `dalex` library (see Sections [6\.7](breakDown.html#BDPython) and [8\.6](shapley.html#SHAPPythonCode)) can be used to calculate variable attributions. The `plot()` method with the `max_vars` argument is applied to graphically present the corresponding BD plot for up to 20 variables.
```
fifa_pp_gbm = fifa_gbm_exp.predict_parts(cr7, type='break_down')
fifa_pp_gbm.plot(max_vars = 20)
```

To calculate Shapley values, the `predict_parts()` method should be applied with the `type='shap'` argument (see Section [8\.6](shapley.html#SHAPPythonCode)).

The `predict_profile()` method from the `dalex` library (see Section [10\.7](ceterisParibus.html#CPPython)) allows calculation of the CP profiles. The `plot()` method with the `variables` argument plots the profiles for selected variables.

```
fifa_mp_gbm = fifa_gbm_exp.predict_profile(cr7)
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control',
                              'skill_dribbling', 'age'])
```

### 21\.7\.4 CR7

As a second example, we present explanations for the random forest model's prediction for *Cristiano Ronaldo* (CR7). Table [21\.4](UseCaseFIFA.html#tab:CR7) presents his characteristics, as included in the analyzed dataset. Note that Cristiano Ronaldo, like Robert Lewandowski, is a striker. It might thus be of interest to compare the characteristics contributing to the model's predictions for the two players.

Table 21\.4: Characteristics of Cristiano Ronaldo.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 33 | Dribbling | 88 | ShotPower | 95 | Composure | 95 |
| Preferred.Foot | 2 | Curve | 81 | Jumping | 95 | Marking | 28 |
| Reputation | 5 | FKAccuracy | 76 | Stamina | 88 | StandingTackle | 31 |
| Weak.Foot | 4 | LongPassing | 77 | Strength | 79 | SlidingTackle | 23 |
| Skill.Moves | 5 | BallControl | 94 | LongShots | 93 | GKDiving | 7 |
| Crossing | 84 | Acceleration | 89 | Aggression | 63 | GKHandling | 11 |
| Finishing | 94 | SprintSpeed | 91 | Interceptions | 29 | GKKicking | 15 |
| HeadingAccuracy | 89 | Agility | 87 | Positioning | 95 | GKPositioning | 14 |
| ShortPassing | 81 | Reactions | 96 | Vision | 82 | GKReflexes | 11 |
| Volleys | 87 | Balance | 70 | Penalties | 85 | LogValue | 8 |

The BD plot for Cristiano Ronaldo is presented in Figure [21\.13](UseCaseFIFA.html#fig:usecaseFIFAbreakDownCR7Plot). It suggests that the explanatory variables with the largest effect are `ShotPower`, `LongShots`, `Volleys`, and `Vision`.

Figure 21\.13: Break\-down plot for Cristiano Ronaldo for the random forest model.

Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot) presents Shapley values for Cristiano Ronaldo. It indicates that the four explanatory variables with the largest values are `Reactions`, `Dribbling`, `BallControl`, and `ShortPassing`. These are the same variables as for Robert Lewandowski, though in a different order. Interestingly, the plot for Cristiano Ronaldo includes variable `Age`, for which the Shapley value is negative. It suggests that CR7's age has a negative effect on the model's prediction.

Figure 21\.14: Shapley values for Cristiano Ronaldo for the random forest model.

Finally, Figure [21\.15](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusCR7Plot) presents CP profiles for `Age`, `Reactions`, `Dribbling`, and `BallControl`.

Figure 21\.15: Ceteris\-paribus profiles for Cristiano Ronaldo for four selected variables and the random forest model.

The profiles are similar to those presented in Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) for Robert Lewandowski. An interesting difference is that, for `Age`, the predicted value for Cristiano Ronaldo is located within the age region linked with a sharp drop in a player's value.
This is in accordance with the observation, made based on Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot), that CR7's age has a negative effect on the model's prediction.

### 21\.7\.5 Wojciech Szczęsny

One might be interested in the characteristics influencing the random forest model's predictions for players other than strikers. To address the question, we present explanations for *Wojciech Szczęsny*, a goalkeeper. Table [21\.5](UseCaseFIFA.html#tab:WS) presents his characteristics, as included in the analyzed dataset.

Table 21\.5: Characteristics of Wojciech Szczęsny.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 28 | Dribbling | 11 | ShotPower | 15 | Composure | 65 |
| Preferred.Foot | 2 | Curve | 16 | Jumping | 71 | Marking | 20 |
| Reputation | 3 | FKAccuracy | 14 | Stamina | 45 | StandingTackle | 13 |
| Weak.Foot | 3 | LongPassing | 36 | Strength | 65 | SlidingTackle | 12 |
| Skill.Moves | 1 | BallControl | 22 | LongShots | 14 | GKDiving | 85 |
| Crossing | 12 | Acceleration | 51 | Aggression | 40 | GKHandling | 81 |
| Finishing | 12 | SprintSpeed | 47 | Interceptions | 15 | GKKicking | 71 |
| HeadingAccuracy | 16 | Agility | 55 | Positioning | 14 | GKPositioning | 85 |
| ShortPassing | 32 | Reactions | 82 | Vision | 48 | GKReflexes | 87 |
| Volleys | 14 | Balance | 51 | Penalties | 18 | LogValue | 8 |

Figure [21\.16](UseCaseFIFA.html#fig:usecaseFIFAbreakDownWS) shows the BD plot. We can see that the most important contributions come from the explanatory variables related to goalkeeping skills like `GKPositioning`, `GKHandling`, and `GKReflexes`. Interestingly, field\-player skills like `BallControl` or `Dribbling` have a negative effect.

Figure 21\.16: Break\-down plot for Wojciech Szczęsny for the random forest model.

Figure [21\.17](UseCaseFIFA.html#fig:usecaseFIFAshapWS) presents Shapley values (over 25 random orderings of explanatory variables). The plot confirms that the most important contributions to the prediction for Wojciech Szczęsny are due to goalkeeping skills like `GKDiving`, `GKPositioning`, `GKReflexes`, and `GKHandling`. Interestingly, `Reactions` is also important, as was the case for Robert Lewandowski (see Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot)) and Cristiano Ronaldo (see Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot)).

Figure 21\.17: Shapley values for Wojciech Szczęsny for the random forest model.

### 21\.7\.6 Lionel Messi

This instance might be THE choice for some of the readers. However, we have decided to leave the explanation of the models' predictions in this case as an exercise for interested readers.

21\.1 Introduction
------------------

In the previous chapters, we introduced a range of methods for the exploration of predictive models. Different methods were discussed in separate chapters and, while illustrated, they were not directly compared. Thus, in this chapter, we apply the methods to one dataset in order to present their relative merits. In particular, we present an example of the full process of model development along the lines introduced in Chapter [2](modelDevelopmentProcess.html#modelDevelopmentProcess). This will allow us to show how one can combine results from different methods.

The Fédération Internationale de Football Association (FIFA) is the governing body of football (sometimes, especially in the USA, called soccer).
FIFA is also a series of video games developed by EA Sports which faithfully reproduces the characteristics of real players. FIFA ratings of football players from the video game can be found at `https://sofifa.com/`. Data from this website for 2019 were scraped and made available on the Kaggle webpage `https://www.kaggle.com/karangadiya/fifa19`. We will use the data to build a predictive model for the evaluation of a player's value. Subsequently, we will use the model exploration and explanation methods to better understand the model's performance, as well as which variables influence a player's value, and how.

21\.2 Data preparation
----------------------

The original dataset contains 89 variables that describe 16,924 players. The variables include information such as age, nationality, club, wage, etc. In what follows, we focus on 45 variables that are included in the `fifa` data frame available in the `DALEX` package for R and Python. The variables from this dataset are listed in Table [21\.1](UseCaseFIFA.html#tab:FIFAvariables).

Table 21\.1: Variables in the FIFA 19 dataset.

| Name | Weak.Foot | FKAccuracy | Jumping | Composure |
| --- | --- | --- | --- | --- |
| Club | Skill.Moves | LongPassing | Stamina | Marking |
| Position | Crossing | BallControl | Strength | StandingTackle |
| Value.EUR | Finishing | Acceleration | LongShots | SlidingTackle |
| Age | HeadingAccuracy | SprintSpeed | Aggression | GKDiving |
| Overall | ShortPassing | Agility | Interceptions | GKHandling |
| Special | Volleys | Reactions | Positioning | GKKicking |
| Preferred.Foot | Dribbling | Balance | Vision | GKPositioning |
| Reputation | Curve | ShotPower | Penalties | GKReflexes |

In particular, variable `Value.EUR` contains the player's value in EUR. This will be our dependent variable. The distribution of the variable is heavily skewed to the right. In particular, the quartiles are equal to 325,000 EUR, 725,000 EUR, and 2,534,478 EUR. There are three players with a value higher than 100 million EUR. Thus, in our analyses, we will consider a logarithmically\-transformed players' value. Figure [21\.1](UseCaseFIFA.html#fig:distFIFA19Value) presents the empirical cumulative\-distribution function and histogram for the transformed value. They indicate that the transformation makes the distribution less skewed.

Figure 21\.1: The empirical cumulative\-distribution function and histogram for the log\\(\_{10}\\)\-transformed players' values.

Additionally, we take a closer look at four characteristics that will be considered as explanatory variables later in this chapter. These are: `Age`, `Reactions` (a movement skill), `BallControl` (a general skill), and `Dribbling` (a general skill). Figure [21\.2](UseCaseFIFA.html#fig:distFIFA19histograms) presents histograms of the values of the four variables. From the plot for `Age`, we can conclude that most of the players are between 20 and 30 years of age (median age: 25). Variable `Reactions` has an approximately symmetric distribution, with quartiles equal to 56, 62, and 68. Histograms of `BallControl` and `Dribbling` indicate, interestingly, bimodal distributions. The smaller modes are due to goalkeepers.

Figure 21\.2: Histograms for selected characteristics of players.

### 21\.2\.1 Code snippets for R

The subset of 5000 most valuable players from the FIFA 19 data is available in the `fifa` data frame in the `DALEX` package.
```
library("DALEX")
head(fifa)
```

### 21\.2\.2 Code snippets for Python

The subset of 5000 most valuable players from the FIFA 19 data can be loaded into Python with the `dalex.datasets.load_fifa()` method.

```
import dalex as dx
fifa = dx.datasets.load_fifa()
```
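For a quick orientation, the loaded data frame can be inspected with standard pandas tools (a minimal sketch, not part of the original analysis; note that, in the Python copy of the data, the column names use the snake_case form employed later in this chapter, e.g., `value_eur`):

```
# Basic inspection of the loaded data frame: number of players,
# number of variables, and the first few column names.
print(fifa.shape)
print(list(fifa.columns[:8]))
fifa.head()
```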
21\.3 Data understanding
------------------------

We will investigate the relationship between the four selected characteristics and the (logarithmically\-transformed) player's value. Toward this aim, we use the scatter plots shown in Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter). Each plot includes a smoothed curve capturing the trend. For `Age`, the relationship is not monotonic. There seems to be an optimal age, between 25 and 30 years, at which the player's value reaches the maximum. On the other hand, the value of the youngest and oldest players is about 10 times lower than the maximum. For variables `BallControl` and `Dribbling`, the relationship is also not monotonic. In general, the larger the value of these variables, the larger the value of a player. However, there are "local" maxima for players with low scores for `BallControl` and `Dribbling`. As it was suggested earlier, these are probably goalkeepers. For `Reactions`, the association with the player's value is monotonic, with increasing values of the variable leading to increasing values of players.

Figure 21\.3: Scatter plots illustrating the relationship between the (logarithmically\-transformed) player's value and selected characteristics.

Figure [21\.4](UseCaseFIFA.html#fig:distFIFA19scatter2) presents the scatter\-plot matrix for the four selected variables. It indicates that all variables are positively correlated, though with different strength. In particular, `BallControl` and `Dribbling` are strongly correlated, with the estimated correlation coefficient larger than 0\.9. `Reactions` is moderately correlated with the other three variables. Finally, there is a moderate correlation between `Age` and `Reactions`, but little correlation between `Age` and `BallControl` or `Dribbling`.

Figure 21\.4: Scatter\-plot matrix illustrating the relationship between selected characteristics of players.

21\.4 Model assembly
--------------------

In this section, we develop a model for players' values. We consider all variables other than `Name`, `Club`, `Position`, `Value.EUR`, `Overall`, and `Special` (see Section [21\.2](UseCaseFIFA.html#FIFAdataprep)) as explanatory variables. The base\-10 logarithm of the player's value is the dependent variable. Given different possible forms of relationship between the (logarithmically\-transformed) player's value and explanatory variables (as seen, for example, in Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter)), we build four different, flexible models to check whether they are capable of capturing the various relationships. In particular, we consider the following models:

* a boosting model with 250 trees of 1\-level depth, as implemented in package `gbm` (Ridgeway [2017](#ref-gbm)),
* a boosting model with 250 trees of 4\-level depth (this model should be able to capture interactions between variables), 
* a random forest model with 250 trees, as implemented in package `ranger` (Wright and Ziegler [2017](#ref-rangerRpackage)),
* a linear model with a spline\-transformation of explanatory variables, as implemented in package `rms` (Harrell Jr [2018](#ref-rms)).

These models will be explored in detail in the following sections.

### 21\.4\.1 Code snippets for R

In this section, we show R\-code snippets used to develop the gradient boosting model. Other models were built in a similar way.

The code below fits the model to the data. The dependent variable `LogValue` contains the base\-10 logarithm of `Value.EUR`, i.e., of the player's value.

```
fifa$LogValue <- log10(fifa$Value.EUR)
fifa_small <- fifa[,-c(1, 2, 3, 4, 6, 7)]

fifa_gbm_deep <- gbm(LogValue~., data = fifa_small, n.trees = 250,
        interaction.depth = 4, distribution = "gaussian")
```

For model\-exploration purposes, we have to create an explainer object with the help of the `DALEX::explain()` function (see Section [4\.2\.6](dataSetsIntro.html#ExplainersTitanicRCode)). The code below is used for the gradient boosting model. Note that the model was fitted to the logarithmically\-transformed player's value. However, it is more natural to interpret the predictions on the original scale. This is why, in the provided syntax, we apply the `predict_function` argument to specify a user\-defined function to obtain predictions on the original scale, in Euro. Additionally, we use the `data` and `y` arguments to indicate the data frame with explanatory variables and the values of the dependent variable, for which predictions are to be obtained. Finally, the model receives its own `label`.

```
library("DALEX")
fifa_gbm_exp_deep <- DALEX::explain(fifa_gbm_deep,
        data = fifa_small,
        y = 10^fifa_small$LogValue,
        predict_function = function(m,x) 10^predict(m, x, n.trees = 250),
        label = "GBM deep")
```

### 21\.4\.2 Code snippets for Python

In this section, we show Python\-code snippets used to develop the gradient boosting model. Other models were built in a similar way.

The code below fits the model to the data. The dependent variable `ylog` contains the logarithm of `value_eur`, i.e., of the player's value.

```
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
import numpy as np

X = fifa.drop(["nationality", "overall", "potential",
               "value_eur", "wage_eur"], axis = 1)
y = fifa['value_eur']
ylog = np.log(y)

X_train, X_test, ylog_train, ylog_test, y_train, y_test = train_test_split(X, ylog, y, test_size = 0.25, random_state = 4)

gbm_model = LGBMRegressor()
gbm_model.fit(X_train, ylog_train, verbose = False)
```

For model\-exploration purposes, we have to create the explainer object with the help of the `Explainer()` constructor from the `dalex` library (see Section [4\.3\.6](dataSetsIntro.html#ExplainersTitanicPythonCode)). The code is provided below. Note that the model was fitted to the logarithmically\-transformed player's value. However, it is more natural to interpret the predictions on the original scale. This is why, in the provided syntax, we apply the `predict_function` argument to specify a user\-defined function to obtain predictions on the original scale, in Euro.
Additionally, we use the `X` and `y` arguments to indicate the data frame with explanatory variables and the values of the dependent variable, for which predictions are to be obtained. Finally, the model receives its own `label`.

```
def predict_function(model, data):
    return np.exp(model.predict(data))

fifa_gbm_exp = dx.Explainer(gbm_model, X_test, y_test,
    predict_function = predict_function, label = 'gbm')
```
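As a quick sanity check (a minimal sketch, not part of the original code), one may verify that the explainer indeed returns predictions on the original scale, in Euro, by comparing a few predictions with the observed values:

```
# Predictions from the explainer should be on the Euro scale, because
# predict_function back-transforms the model output from the log scale.
print(fifa_gbm_exp.predict(X_test.iloc[:3]))
print(y_test.iloc[:3].values)
```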
21\.5 Model audit
-----------------

Having developed the four candidate models, we may want to evaluate their performance. Toward this aim, we can use the measures discussed in Section [15\.3\.1](modelPerformance.html#modelPerformanceMethodCont). The computed values are presented in Table [21\.2](UseCaseFIFA.html#tab:modelPerformanceFIFA). On average, the values of the root\-mean\-squared\-error (RMSE) and mean\-absolute\-deviation (MAD) are the smallest for the random forest model.

Table 21\.2: Model\-performance measures for the four models for the FIFA 19 data.

| | MSE | RMSE | R2 | MAD |
| --- | --- | --- | --- | --- |
| GBM shallow | 8\.990694e\+12 | 2998449 | 0\.7300429 | 183682\.91 |
| GBM deep | 2\.211439e\+12 | 1487091 | 0\.9335987 | 118425\.56 |
| RF | 1\.141176e\+12 | 1068258 | 0\.9657347 | 50693\.24 |
| RM | 2\.191297e\+13 | 4681129 | 0\.3420350 | 148187\.06 |

In addition to computing measures of the overall performance of the model, we should conduct a more detailed examination of both overall\- and instance\-specific performance. Toward this aim, we can apply residual diagnostics, as discussed in Chapter [19](residualDiagnostic.html#residualDiagnostic). For instance, we can create a plot comparing the predicted (fitted) and observed values of the dependent variable.

Figure 21\.5: Observed and predicted (fitted) players' values for the four models for the FIFA 19 data.

The resulting plot is shown in Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot). It indicates that predictions are closest to the observed values of the dependent variable for the random forest model. It is worth noting that the smoothed trend for the model is close to a straight line, but with a slope smaller than 1\. This implies that the random forest model underestimates the actual value of the most expensive players, while it overestimates the value for the least expensive ones. A similar pattern can be observed for the gradient boosting models. This "shrinking to the mean" is typical for this type of model.

### 21\.5\.1 Code snippets for R

In this section, we show R\-code snippets for model audit for the gradient boosting model. For other models, a similar syntax was used.

The `model_performance()` function (see Section [15\.6](modelPerformance.html#modelPerformanceR)) is used to calculate the values of RMSE, MSE, R\\(^2\\), and MAD for the model.

```
model_performance(fifa_gbm_exp_deep)
```

The `model_diagnostics()` function (see Section [19\.6](residualDiagnostic.html#RcodeResidualDiagnostic)) is used to create residual\-diagnostics plots. Results of this function can be visualised with the generic `plot()` function. In the code that follows, additional arguments are used to improve the appearance and interpretability of both axes.

```
fifa_md_gbm_deep <- model_diagnostics(fifa_gbm_exp_deep)

plot(fifa_md_gbm_deep, variable = "y", yvariable = "y_hat") +
  scale_x_continuous("Value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  scale_y_continuous("Predicted value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  geom_abline(slope = 1) +
  ggtitle("Predicted and observed players' values", "")
```

### 21\.5\.2 Code snippets for Python

In this section, we show Python\-code snippets used to perform residual diagnostics for the trained gradient boosting model.
Other models were tested in a similar way.

The `fifa_gbm_exp.model_diagnostics()` function (see Section [19\.7](residualDiagnostic.html#PythoncodeResidualDiagnostic)) is used to calculate the residuals and absolute residuals. Results of this function can be visualised with the `plot()` function. The code below produces diagnostic plots similar to those presented in Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot).

```
fifa_md_gbm = fifa_gbm_exp.model_diagnostics()
fifa_md_gbm.plot(variable = "y", yvariable = "y_hat")
```
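Performance measures analogous to those in Table [21\.2](UseCaseFIFA.html#tab:modelPerformanceFIFA) can also be computed for the Python explainer (a minimal sketch; it assumes the `fifa_gbm_exp` object created earlier, and the resulting values refer to the test split used in the Python code):

```
# Overall performance measures (e.g., RMSE, R2, MAE) for the explainer
fifa_mperf_gbm = fifa_gbm_exp.model_performance(model_type = 'regression')
print(fifa_mperf_gbm.result)
```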
21\.6 Model understanding (dataset\-level explanations)
-------------------------------------------------------

All four developed models involve many explanatory variables. It is of interest to understand which of the variables exercises the largest influence on the models' predictions. Toward this aim, we can apply the permutation\-based variable\-importance measure discussed in Chapter [16](featureImportance.html#featureImportance). Subsequently, we can construct a plot of the obtained mean (over the default 10 permutations) variable\-importance measures. Note that we consider only the top\-20 variables.

Figure 21\.6: Mean variable\-importance calculated using 10 permutations for the four models for the FIFA 19 data.

The resulting plot is shown in Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot). The bar for each explanatory variable starts at the RMSE value of a particular model and ends at the (mean) RMSE calculated for data with permuted values of the variable. Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot) indicates that, for the gradient boosting and random forest models, the two explanatory variables with the largest values of the importance measure are `Reactions` and `BallControl`. The importance of other variables varies depending on the model. Interestingly, in the linear\-regression model, the highest importance is given to goal\-keeping skills.

We may also want to take a look at the partial\-dependence (PD) profiles discussed in Chapter [17](partialDependenceProfiles.html#partialDependenceProfiles). Recall that they illustrate how the expected value of a model's predictions behaves as a function of an explanatory variable. To create the profiles, we apply the `model_profile()` function from the `DALEX` package (see Section [17\.6](partialDependenceProfiles.html#PDPR)). We focus on variables `Reactions`, `BallControl`, and `Dribbling`, which were important in the random forest model (see Figure [21\.6](UseCaseFIFA.html#fig:featureImportancePlot)). We also consider `Age`, as it had some effect in the gradient boosting models. Subsequently, we can construct a plot of contrastive PD profiles (see Section [17\.3\.4](partialDependenceProfiles.html#contrastivePDPs)) that is shown in Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot).

Figure 21\.7: Contrastive partial\-dependence profiles for the four models and selected explanatory variables for the FIFA 19 data.

Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) indicates that the shape of the PD profiles for `Reactions`, `BallControl`, and `Dribbling` is, in general, similar for all the models and implies an increasing predicted player's value for an increasing (at least, after passing some threshold) value of the explanatory variable. However, for `Age`, the shape is different and suggests a decreasing player's value after the age of about 25 years. It is worth noting that the range of expected model's predictions is, in general, the smallest for the random forest model. Also, the three tree\-based models tend to stabilize the predictions at the ends of the explanatory\-variable ranges.

The most interesting difference between the conclusions drawn from Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter) and those obtained from Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) is observed for variable `Age`. In particular, Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter) suggests that the relationship between player's age and value is non\-monotonic, while Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot) suggests a non\-increasing relationship. How can we explain this difference?

A possible explanation is as follows. The youngest players have lower values, not because of their age, but because of their lower skills, which are correlated (as seen from the scatter\-plot matrix in Figure [21\.4](UseCaseFIFA.html#fig:distFIFA19scatter2)) with young age. The simple data exploration analysis, presented in the upper\-left panel of Figure [21\.3](UseCaseFIFA.html#fig:distFIFA19scatter), cannot separate the effects of age and skills. As a result, the analysis suggests a decrease in player's value for the youngest players. In models, however, the effect of age is estimated while adjusting for the effect of skills. After this adjustment, the effect takes the form of a non\-increasing pattern, as shown by the PD profiles for `Age` in Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot).
This example indicates that *exploration of models may provide more insight than exploration of raw data*. In exploratory data analysis, the effect of variable `Age` was confounded by the effect of skill\-related variables. By using a model, the confounding has been removed.

### 21\.6\.1 Code snippets for R

In this section, we show R\-code snippets for dataset\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

The `model_parts()` function from the `DALEX` package (see Section [16\.6](featureImportance.html#featureImportanceR)) is used to calculate the permutation\-based variable\-importance measure. The generic `plot()` function is applied to graphically present the computed values of the measure. The `max_vars` argument is used to limit the number of presented variables to at most 20.

```
fifa_mp_gbm_deep <- model_parts(fifa_gbm_exp_deep)
plot(fifa_mp_gbm_deep, max_vars = 20,
        bar_width = 4, show_boxplots = FALSE)
```

The `model_profile()` function from the `DALEX` package (see Section [17\.6](partialDependenceProfiles.html#PDPR)) is used to calculate PD profiles. The generic `plot()` function is used to graphically present the profiles for selected variables.

```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")

fifa19_pd_deep <- model_profile(fifa_gbm_exp_deep,
        variables = selected_variables)
plot(fifa19_pd_deep)
```

### 21\.6\.2 Code snippets for Python

In this section, we show Python code snippets for dataset\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

The `model_parts()` method from the `dalex` library (see Section [16\.7](featureImportance.html#featureImportancePython)) is used to calculate the permutation\-based variable\-importance measure. The `plot()` method is applied to graphically present the computed values of the measure.

```
fifa_mp_gbm = fifa_gbm_exp.model_parts()
fifa_mp_gbm.plot(max_vars = 20)
```

The `model_profile()` method from the `dalex` library (see Section [17\.7](partialDependenceProfiles.html#PDPPython)) is used to calculate PD profiles. The `plot()` method is used to graphically present the computed profiles.

```
fifa_mp_gbm = fifa_gbm_exp.model_profile()
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control',
                              'skill_dribbling', 'age'])
```

In order to calculate other types of profiles, just change the `type` argument.

```
fifa_mp_gbm = fifa_gbm_exp.model_profile(type = 'accumulated')
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control',
                              'skill_dribbling', 'age'])
```
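Contrastive profiles, like those shown in Figure [21\.7](UseCaseFIFA.html#fig:usecaseFIFApdPlot), can be obtained by overlaying the profiles computed for several explainers (a sketch; `fifa_rf_exp` is a hypothetical second explainer, e.g., for a random forest model, constructed in the same way as `fifa_gbm_exp`):

```
# fifa_rf_exp is assumed to be a second explainer built analogously
fifa_mp_rf  = fifa_rf_exp.model_profile()
fifa_mp_gbm = fifa_gbm_exp.model_profile()

# Overlay the partial-dependence profiles of both models in one plot
fifa_mp_gbm.plot(fifa_mp_rf, variables = ['movement_reactions', 'age'])
```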
21\.7 Instance\-level explanations
----------------------------------

After evaluation of the models at the dataset\-level, we may want to focus on particular instances.

### 21\.7\.1 Robert Lewandowski

As a first example, we take a look at the value of *Robert Lewandowski*, for an obvious reason. Table [21\.3](UseCaseFIFA.html#tab:RobertLewandowski) presents his characteristics, as included in the analyzed dataset. Robert Lewandowski is a striker.

Table 21\.3: Characteristics of Robert Lewandowski.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 29 | Dribbling | 85 | ShotPower | 88 | Composure | 86 |
| Preferred.Foot | 2 | Curve | 77 | Jumping | 84 | Marking | 34 |
| Reputation | 4 | FKAccuracy | 86 | Stamina | 78 | StandingTackle | 42 |
| Weak.Foot | 4 | LongPassing | 65 | Strength | 84 | SlidingTackle | 19 |
| Skill.Moves | 4 | BallControl | 89 | LongShots | 84 | GKDiving | 15 |
| Crossing | 62 | Acceleration | 77 | Aggression | 80 | GKHandling | 6 |
| Finishing | 91 | SprintSpeed | 78 | Interceptions | 39 | GKKicking | 12 |
| HeadingAccuracy | 85 | Agility | 78 | Positioning | 91 | GKPositioning | 8 |
| ShortPassing | 83 | Reactions | 90 | Vision | 77 | GKReflexes | 10 |
| Volleys | 89 | Balance | 78 | Penalties | 88 | LogValue | 8 |

First, we take a look at variable attributions, discussed in Chapter [6](breakDown.html#breakDown). Recall that they decompose a model's prediction into parts that can be attributed to different explanatory variables. The attributions can be presented in a break\-down (BD) plot. For brevity, we only consider the random forest model. The resulting BD plot is shown in Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot).

Figure 21\.8: Break\-down plot for Robert Lewandowski for the random forest model.

Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot) suggests that the explanatory variables with the largest effect are `Composure`, `Volleys`, `LongShots`, and `Stamina`.
However, in Chapter [6](breakDown.html#breakDown) it was mentioned that variable attributions may depend on the order of explanatory covariates that are used in calculations. Thus, in Chapter [8](shapley.html#shapley) we introduced Shapley values, based on the idea of averaging the attributions over many orderings. Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) presents the means of the Shapley values computed by using 25 random orderings for the random forest model.

Figure 21\.9: Shapley values for Robert Lewandowski for the random forest model.

Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) indicates that the five explanatory variables with the largest Shapley values are `BallControl`, `Dribbling`, `Reactions`, `ShortPassing`, and `Positioning`. This makes sense, as Robert Lewandowski is a striker.

In Chapter [10](ceterisParibus.html#ceterisParibus), we introduced ceteris\-paribus (CP) profiles. They capture the effect of a selected explanatory variable in terms of changes in a model's prediction induced by changes in the variable's values. Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) presents the profiles for variables `Age`, `Reactions`, `BallControl`, and `Dribbling` for the random forest model.

Figure 21\.10: Ceteris\-paribus profiles for Robert Lewandowski for four selected variables and the random forest model.

Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) suggests that, among the four variables, `BallControl` and `Reactions` lead to the largest changes of predictions for this instance. For all four variables, the profiles flatten at the left\- and right\-hand\-side edges. The predicted value of Robert Lewandowski reaches, or is very close to, the maximum for all four profiles. It is interesting to note that, for `Age`, the predicted value is located at the border of the age region at which the profile suggests a sharp drop in a player's value.

As argued in Chapter [12](localDiagnostics.html#localDiagnostics), it is worthwhile to check how the model behaves for observations similar to the instance of interest. Towards this aim, we may want to compare the distribution of residuals for the "neighbors" of Robert Lewandowski. Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) presents the histogram of residuals for all data and for the 30 neighbors of Robert Lewandowski.

Figure 21\.11: Distribution of residuals for the random forest model for all players and for 30 neighbors of Robert Lewandowski.

Clearly, the neighbors of Robert Lewandowski include some of the most expensive players. Therefore, as compared to the overall distribution, the distribution of residuals for the neighbors, presented in Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot), is skewed to the right, and its mean is larger than the overall mean. Thus, the model underestimates the actual value of the most expensive players. This was also noted based on the plot in the bottom\-left panel of Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot).

We can also look at the local\-stability plot, i.e., the plot that includes CP profiles for the nearest neighbors and the corresponding residuals (see Chapter [12](localDiagnostics.html#localDiagnostics)). In Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot), we present the plot for `Age`.

Figure 21\.12: Local\-stability plot for `Age` for 30 neighbors of Robert Lewandowski and the random forest model.
The CP profiles in Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot) are almost parallel but span quite a wide range of the predicted player's values. Thus, one could conclude that the predictions for the most expensive players are not very stable. Also, the plot includes more positive residuals (indicated in the plot by green vertical intervals) than negative ones (indicated by red vertical intervals). This confirms the conclusion drawn from Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) that the values of the most expensive players are underestimated by the model.

### 21\.7\.2 Code snippets for R

In this section, we show R\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

The `predict_parts()` function from the `DALEX` package (see Chapters [6](breakDown.html#breakDown)\-[8](shapley.html#shapley)) is used to calculate variable attributions. Note that we apply the `type = "break_down"` argument to prepare BD plots. The generic `plot()` function is used to graphically present the plots.

```
fifa_bd_gbm <- predict_parts(fifa_gbm_exp,
                   new_observation = fifa["R. Lewandowski",],
                   type = "break_down")

plot(fifa_bd_gbm) +
  scale_y_continuous("Predicted value in Euro",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  ggtitle("Break-down plot for Robert Lewandowski","")
```

Shapley values are computed by applying the `type = "shap"` argument.

```
fifa_shap_gbm <- predict_parts(fifa_gbm_exp,
                   new_observation = fifa["R. Lewandowski",],
                   type = "shap")

plot(fifa_shap_gbm, show_boxplots = FALSE) +
  scale_y_continuous("Estimated value in Euro",
                     labels = dollar_format(suffix = "€", prefix = "")) +
  ggtitle("Shapley values for Robert Lewandowski","")
```

The `predict_profile()` function from the `DALEX` package (see Section [10\.6](ceterisParibus.html#CPR)) is used to calculate the CP profiles. The generic `plot()` function is applied to graphically present the profiles.

```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")

fifa_cp_gbm <- predict_profile(fifa_gbm_exp,
                   new_observation = fifa["R. Lewandowski",],
                   variables = selected_variables)

plot(fifa_cp_gbm, variables = selected_variables)
```

Finally, the `predict_diagnostics()` function (see Section [12\.6](localDiagnostics.html#cPLocDiagR)) allows calculating local\-stability plots. The generic `plot()` function can be used to plot these profiles for selected variables.

```
id_gbm <- predict_diagnostics(fifa_gbm_exp,
                   fifa["R. Lewandowski",],
                   neighbors = 30)

plot(id_gbm) +
  scale_y_continuous("Estimated value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = ""))
```

### 21\.7\.3 Code snippets for Python

In this section, we show Python\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used.

First, we need to select an instance of interest. In this example, we will use *Cristiano Ronaldo*.

```
cr7 = X.loc['Cristiano Ronaldo',]
```

The `predict_parts()` method from the `dalex` library (see Sections [6\.7](breakDown.html#BDPython) and [8\.6](shapley.html#SHAPPythonCode)) can be used to calculate variable attributions. The `plot()` method with the `max_vars` argument is applied to graphically present the corresponding BD plot for up to 20 variables.
```
fifa_pp_gbm = fifa_gbm_exp.predict_parts(cr7, type='break_down')
fifa_pp_gbm.plot(max_vars = 20)
```

To calculate Shapley values, the `predict_parts()` method should be applied with the `type='shap'` argument (see Section [8\.6](shapley.html#SHAPPythonCode)).

The `predict_profile()` method from the `dalex` library (see Section [10\.7](ceterisParibus.html#CPPython)) allows calculation of the CP profiles. The `plot()` method with the `variables` argument plots the profiles for selected variables.

```
fifa_mp_gbm = fifa_gbm_exp.predict_profile(cr7)
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control',
                              'skill_dribbling', 'age'])
```

### 21\.7\.4 CR7

As a second example, we present explanations for the random forest model's prediction for *Cristiano Ronaldo* (CR7). Table [21\.4](UseCaseFIFA.html#tab:CR7) presents his characteristics, as included in the analyzed dataset. Note that Cristiano Ronaldo, like Robert Lewandowski, is a striker. It might thus be of interest to compare the characteristics contributing to the model's predictions for the two players.

Table 21\.4: Characteristics of Cristiano Ronaldo.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 33 | Dribbling | 88 | ShotPower | 95 | Composure | 95 |
| Preferred.Foot | 2 | Curve | 81 | Jumping | 95 | Marking | 28 |
| Reputation | 5 | FKAccuracy | 76 | Stamina | 88 | StandingTackle | 31 |
| Weak.Foot | 4 | LongPassing | 77 | Strength | 79 | SlidingTackle | 23 |
| Skill.Moves | 5 | BallControl | 94 | LongShots | 93 | GKDiving | 7 |
| Crossing | 84 | Acceleration | 89 | Aggression | 63 | GKHandling | 11 |
| Finishing | 94 | SprintSpeed | 91 | Interceptions | 29 | GKKicking | 15 |
| HeadingAccuracy | 89 | Agility | 87 | Positioning | 95 | GKPositioning | 14 |
| ShortPassing | 81 | Reactions | 96 | Vision | 82 | GKReflexes | 11 |
| Volleys | 87 | Balance | 70 | Penalties | 85 | LogValue | 8 |

The BD plot for Cristiano Ronaldo is presented in Figure [21\.13](UseCaseFIFA.html#fig:usecaseFIFAbreakDownCR7Plot). It suggests that the explanatory variables with the largest effect are `ShotPower`, `LongShots`, `Volleys`, and `Vision`.

Figure 21\.13: Break\-down plot for Cristiano Ronaldo for the random forest model.

Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot) presents Shapley values for Cristiano Ronaldo. It indicates that the four explanatory variables with the largest values are `Reactions`, `Dribbling`, `BallControl`, and `ShortPassing`. These are the same variables as for Robert Lewandowski, though in a different order. Interestingly, the plot for Cristiano Ronaldo includes variable `Age`, for which the Shapley value is negative. It suggests that CR7's age has a negative effect on the model's prediction.

Figure 21\.14: Shapley values for Cristiano Ronaldo for the random forest model.

Finally, Figure [21\.15](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusCR7Plot) presents CP profiles for `Age`, `Reactions`, `Dribbling`, and `BallControl`.

Figure 21\.15: Ceteris\-paribus profiles for Cristiano Ronaldo for four selected variables and the random forest model.

The profiles are similar to those presented in Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) for Robert Lewandowski. An interesting difference is that, for `Age`, the predicted value for Cristiano Ronaldo is located within the age region linked with a sharp drop in a player's value.
This is in accordance with the observation, made based on Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot), that CR7's age has a negative effect on the model's prediction.

### 21\.7\.5 Wojciech Szczęsny

One might be interested in the characteristics influencing the random forest model's predictions for players other than strikers. To address the question, we present explanations for *Wojciech Szczęsny*, a goalkeeper. Table [21\.5](UseCaseFIFA.html#tab:WS) presents his characteristics, as included in the analyzed dataset.

Table 21\.5: Characteristics of Wojciech Szczęsny.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 28 | Dribbling | 11 | ShotPower | 15 | Composure | 65 |
| Preferred.Foot | 2 | Curve | 16 | Jumping | 71 | Marking | 20 |
| Reputation | 3 | FKAccuracy | 14 | Stamina | 45 | StandingTackle | 13 |
| Weak.Foot | 3 | LongPassing | 36 | Strength | 65 | SlidingTackle | 12 |
| Skill.Moves | 1 | BallControl | 22 | LongShots | 14 | GKDiving | 85 |
| Crossing | 12 | Acceleration | 51 | Aggression | 40 | GKHandling | 81 |
| Finishing | 12 | SprintSpeed | 47 | Interceptions | 15 | GKKicking | 71 |
| HeadingAccuracy | 16 | Agility | 55 | Positioning | 14 | GKPositioning | 85 |
| ShortPassing | 32 | Reactions | 82 | Vision | 48 | GKReflexes | 87 |
| Volleys | 14 | Balance | 51 | Penalties | 18 | LogValue | 8 |

Figure [21\.16](UseCaseFIFA.html#fig:usecaseFIFAbreakDownWS) shows the BD plot. We can see that the most important contributions come from the explanatory variables related to goalkeeping skills like `GKPositioning`, `GKHandling`, and `GKReflexes`. Interestingly, field\-player skills like `BallControl` or `Dribbling` have a negative effect.

Figure 21\.16: Break\-down plot for Wojciech Szczęsny for the random forest model.

Figure [21\.17](UseCaseFIFA.html#fig:usecaseFIFAshapWS) presents Shapley values (over 25 random orderings of explanatory variables). The plot confirms that the most important contributions to the prediction for Wojciech Szczęsny are due to goalkeeping skills like `GKDiving`, `GKPositioning`, `GKReflexes`, and `GKHandling`. Interestingly, `Reactions` is also important, as was the case for Robert Lewandowski (see Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot)) and Cristiano Ronaldo (see Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot)).

Figure 21\.17: Shapley values for Wojciech Szczęsny for the random forest model.

### 21\.7\.6 Lionel Messi

This instance might be THE choice for some of the readers. However, we have decided to leave the explanation of the models' predictions in this case as an exercise for interested readers.
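As a starting point for that exercise, a minimal sketch in Python (assuming the `fifa_gbm_exp` explainer and the data frame `X` used above; the row label 'L. Messi' is an assumption and should be adjusted to the spelling used in your copy of the data) could look as follows:

```
# Select the instance of interest (the index label is an assumption;
# adjust it to the spelling used in your copy of the data).
messi = X.loc['L. Messi',]

# Break-down attributions and Shapley values for the selected player
messi_bd = fifa_gbm_exp.predict_parts(messi, type = 'break_down')
messi_bd.plot(max_vars = 20)

messi_shap = fifa_gbm_exp.predict_parts(messi, type = 'shap')
messi_shap.plot(max_vars = 20)

# Ceteris-paribus profiles for selected variables
messi_cp = fifa_gbm_exp.predict_profile(messi)
messi_cp.plot(variables = ['movement_reactions', 'skill_ball_control',
                           'skill_dribbling', 'age'])
```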
| variable | value | variable | value | variable | value | variable | value | | --- | --- | --- | --- | --- | --- | --- | --- | | Age | 29 | Dribbling | 85 | ShotPower | 88 | Composure | 86 | | Preferred.Foot | 2 | Curve | 77 | Jumping | 84 | Marking | 34 | | Reputation | 4 | FKAccuracy | 86 | Stamina | 78 | StandingTackle | 42 | | Weak.Foot | 4 | LongPassing | 65 | Strength | 84 | SlidingTackle | 19 | | Skill.Moves | 4 | BallControl | 89 | LongShots | 84 | GKDiving | 15 | | Crossing | 62 | Acceleration | 77 | Aggression | 80 | GKHandling | 6 | | Finishing | 91 | SprintSpeed | 78 | Interceptions | 39 | GKKicking | 12 | | HeadingAccuracy | 85 | Agility | 78 | Positioning | 91 | GKPositioning | 8 | | ShortPassing | 83 | Reactions | 90 | Vision | 77 | GKReflexes | 10 | | Volleys | 89 | Balance | 78 | Penalties | 88 | LogValue | 8 | First, we take a look at variable attributions, discussed in Chapter [6](breakDown.html#breakDown). Recall that they decompose model’s prediction into parts that can be attributed to different explanatory variables. The attributions can be presented in a break\-down (BD) plot. For brevity, we only consider the random forest model. The resulting BD plot is shown in Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot). Figure 21\.8: Break\-down plot for Robert Lewandowski for the random forest model. Figure [21\.8](UseCaseFIFA.html#fig:usecaseFIFAbreakDownPlot) suggests that the explanatory variables with the largest effect are `Composure`, `Volleys`, `LongShots`, and `Stamina`. However, in Chapter [6](breakDown.html#breakDown) it was mentioned that variable attributions may depend on the order of explanatory covariates that are used in calculations. Thus, in Chapter [8](shapley.html#shapley) we introduced Shapley values, based on the idea of averaging the attributions over many orderings. Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) presents the means of the Shapley values computed by using 25 random orderings for the random forest model. Figure 21\.9: Shapley values for Robert Lewandowski for the random forest model. Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot) indicates that the five explanatory variables with the largest Shapley values are `BallControl`, `Dribbling`, `Reactions`, `ShortPassing`, and `Positioning`. This makes sense, as Robert Lewandowski is a striker. In Chapter [10](ceterisParibus.html#ceterisParibus), we introduced ceteris\-paribus (CP) profiles. They capture the effect of a selected explanatory variable in terms of changes in a model’s prediction induced by changes in the variable’s values. Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) presents the profiles for variables `Age`, `Reactions`, `BallControl`, and `Dribbling` for the random forest model. Figure 21\.10: Ceteris\-paribus profiles for Robert Lewandowski for four selected variables and the random forest model. Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) suggests that, among the four variables, `BallControl` and `Reactions` lead to the largest changes of predictions for this instance. For all four variables, the profiles flatten at the left\- and right\-hand\-side edges. The predicted value of Robert Lewandowski reaches or is very close to the maximum for all four profiles. It is interesting to note that, for `Age`, the predicted value is located at the border of the age region at which the profile suggests a sharp drop in player’s value. 
As it was argued in Chapter [12](localDiagnostics.html#localDiagnostics), it is worthwhile to check how does the model behave for observations similar to the instance of interest. Towards this aim, we may want to compare the distribution of residuals for “neighbors” of Robert Lewandowski. Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) presents the histogram of residuals for all data and the 30 neighbors of Robert Lewandowski. Figure 21\.11: Distribution of residuals for the random forest model for all players and for 30 neighbors of Robert Lewandowski. Clearly, the neighbors of Robert Lewandowski include some of the most expensive players. Therefore, as compared to the overall distribution, the distribution of residuals for the neighbors, presented in Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot), is skewed to the right, and its mean is larger than the overall mean. Thus, the model underestimates the actual value of the most expensive players. This was also noted based on the plot in the bottom\-left panel of Figure [21\.5](UseCaseFIFA.html#fig:modelPerformanceScatterplot). We can also look at the local\-stability plot, i.e., the plot that includes CP profiles for the nearest neighbors and the corresponding residuals (see Chapter [12](localDiagnostics.html#localDiagnostics)). In Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot), we present the plot for `Age`. Figure 21\.12: Local\-stability plot for `Age` for 30 neighbors of Robert Lewandowski and the random forest model. The CP profiles in Figure [21\.12](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursAgeRFPlot) are almost parallel but span quite a wide range of the predicted player’s values. Thus, one could conclude that the predictions for the most expensive players are not very stable. Also, the plot includes more positive residuals (indicated in the plot by green vertical intervals) than negative ones (indicated by red vertical intervals). This confirms the conclusion drawn from Figure [21\.11](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusNeighboursPlot) that the values of the most expensive players are underestimated by the model. ### 21\.7\.2 Code snippets for R In this section, we show R\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used. The `predict_parts()` function from the `DALEX` package (see Chapters [6](breakDown.html#breakDown)\-[8](shapley.html#shapley)) is used to calculate variable attributions. Note that we apply the `type = "break_down"` argument to prepare BD plots. The generic `plot()` function is used to graphically present the plots. ``` fifa_bd_gbm <- predict_parts(fifa_gbm_exp, new_observation = fifa["R. Lewandowski",], type = "break_down") plot(fifa_bd_gbm) + scale_y_continuous("Predicted value in Euro", labels = dollar_format(suffix = "€", prefix = "")) + ggtitle("Break-down plot for Robert Lewandowski","") ``` Shapley values are computed by applying the `type = "shap"` argument. ``` fifa_shap_gbm <- predict_parts(fifa_gbm_exp, new_observation = fifa["R. Lewandowski",], type = "shap") plot(fifa_shap_gbm, show_boxplots = FALSE) + scale_y_continuous("Estimated value in Euro", labels = dollar_format(suffix = "€", prefix = "")) + ggtitle("Shapley values for Robert Lewandowski","") ``` The `predict_profile()` function from the `DALEX` package (see Section [10\.6](ceterisParibus.html#CPR)) is used to calculate the CP profiles. 
The generic `plot()` function is applied to graphically present the profiles.

```
selected_variables <- c("Reactions", "BallControl", "Dribbling", "Age")
fifa_cp_gbm <- predict_profile(fifa_gbm_exp,
                               new_observation = fifa["R. Lewandowski",],
                               variables = selected_variables)
plot(fifa_cp_gbm, variables = selected_variables)
```

Finally, the `predict_diagnostics()` function (see Section [12\.6](localDiagnostics.html#cPLocDiagR)) allows calculating local\-stability plots. The generic `plot()` function can be used to plot these profiles for selected variables.

```
id_gbm <- predict_diagnostics(fifa_gbm_exp,
                              fifa["R. Lewandowski",],
                              neighbors = 30)
plot(id_gbm) +
  scale_y_continuous("Estimated value in Euro", trans = "log10",
                     labels = dollar_format(suffix = "€", prefix = ""))
```

### 21\.7\.3 Code snippets for Python

In this section, we show Python\-code snippets for instance\-level exploration for the gradient boosting model. For other models, a similar syntax was used. First, we need to select an instance of interest. In this example, we will use *Cristiano Ronaldo*.

```
cr7 = X.loc['Cristiano Ronaldo',]
```

The `predict_parts()` method from the `dalex` library (see Sections [6\.7](breakDown.html#BDPython) and [8\.6](shapley.html#SHAPPythonCode)) can be used to calculate variable attributions. The `plot()` method with the `max_vars` argument is applied to graphically present the corresponding BD plot for up to 20 variables.

```
fifa_pp_gbm = fifa_gbm_exp.predict_parts(cr7, type='break_down')
fifa_pp_gbm.plot(max_vars = 20)
```

To calculate Shapley values, the `predict_parts()` method should be applied with the `type='shap'` argument (see Section [8\.6](shapley.html#SHAPPythonCode)).

The `predict_profile()` method from the `dalex` library (see Section [10\.7](ceterisParibus.html#CPPython)) allows calculation of the CP profiles. The `plot()` method with the `variables` argument plots the profiles for selected variables.

```
fifa_mp_gbm = fifa_gbm_exp.predict_profile(cr7)
fifa_mp_gbm.plot(variables = ['movement_reactions', 'skill_ball_control',
                              'skill_dribbling', 'age'])
```

### 21\.7\.4 CR7

As a second example, we present explanations for the random forest model's prediction for *Cristiano Ronaldo* (CR7\). Table [21\.4](UseCaseFIFA.html#tab:CR7) presents his characteristics, as included in the analyzed dataset. Note that Cristiano Ronaldo, like Robert Lewandowski, is a striker. It might thus be of interest to compare the characteristics contributing to the model's predictions for the two players.

Table 21\.4: Characteristics of Cristiano Ronaldo.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 33 | Dribbling | 88 | ShotPower | 95 | Composure | 95 |
| Preferred.Foot | 2 | Curve | 81 | Jumping | 95 | Marking | 28 |
| Reputation | 5 | FKAccuracy | 76 | Stamina | 88 | StandingTackle | 31 |
| Weak.Foot | 4 | LongPassing | 77 | Strength | 79 | SlidingTackle | 23 |
| Skill.Moves | 5 | BallControl | 94 | LongShots | 93 | GKDiving | 7 |
| Crossing | 84 | Acceleration | 89 | Aggression | 63 | GKHandling | 11 |
| Finishing | 94 | SprintSpeed | 91 | Interceptions | 29 | GKKicking | 15 |
| HeadingAccuracy | 89 | Agility | 87 | Positioning | 95 | GKPositioning | 14 |
| ShortPassing | 81 | Reactions | 96 | Vision | 82 | GKReflexes | 11 |
| Volleys | 87 | Balance | 70 | Penalties | 85 | LogValue | 8 |

The BD plot for Cristiano Ronaldo is presented in Figure [21\.13](UseCaseFIFA.html#fig:usecaseFIFAbreakDownCR7Plot).
It suggests that the explanatory variables with the largest effect are `ShotPower`, `LongShots`, `Volleys`, and `Vision`.

Figure 21\.13: Break\-down plot for Cristiano Ronaldo for the random forest model.

Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot) presents Shapley values for Cristiano Ronaldo. It indicates that the four explanatory variables with the largest values are `Reactions`, `Dribbling`, `BallControl`, and `ShortPassing`. These are the same variables as for Robert Lewandowski, though in a different order. Interestingly, the plot for Cristiano Ronaldo includes the variable `Age`, for which the Shapley value is negative. This suggests that CR7's age has a negative effect on the model's prediction.

Figure 21\.14: Shapley values for Cristiano Ronaldo for the random forest model.

Finally, Figure [21\.15](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusCR7Plot) presents CP profiles for `Age`, `Reactions`, `Dribbling`, and `BallControl`.

Figure 21\.15: Ceteris\-paribus profiles for Cristiano Ronaldo for four selected variables and the random forest model.

The profiles are similar to those presented in Figure [21\.10](UseCaseFIFA.html#fig:usecaseFIFAceterisParibusPlot) for Robert Lewandowski. An interesting difference is that, for `Age`, the predicted value for Cristiano Ronaldo is located within the age region linked with a sharp drop in a player's value. This is in accordance with the observation, made based on Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot), that CR7's age has a negative effect on the model's prediction.

### 21\.7\.5 Wojciech Szczęsny

One might be interested in the characteristics influencing the random forest model's predictions for players other than strikers. To address this question, we present explanations for *Wojciech Szczęsny*, a goalkeeper. Table [21\.5](UseCaseFIFA.html#tab:WS) presents his characteristics, as included in the analyzed dataset.

Table 21\.5: Characteristics of Wojciech Szczęsny.

| variable | value | variable | value | variable | value | variable | value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 28 | Dribbling | 11 | ShotPower | 15 | Composure | 65 |
| Preferred.Foot | 2 | Curve | 16 | Jumping | 71 | Marking | 20 |
| Reputation | 3 | FKAccuracy | 14 | Stamina | 45 | StandingTackle | 13 |
| Weak.Foot | 3 | LongPassing | 36 | Strength | 65 | SlidingTackle | 12 |
| Skill.Moves | 1 | BallControl | 22 | LongShots | 14 | GKDiving | 85 |
| Crossing | 12 | Acceleration | 51 | Aggression | 40 | GKHandling | 81 |
| Finishing | 12 | SprintSpeed | 47 | Interceptions | 15 | GKKicking | 71 |
| HeadingAccuracy | 16 | Agility | 55 | Positioning | 14 | GKPositioning | 85 |
| ShortPassing | 32 | Reactions | 82 | Vision | 48 | GKReflexes | 87 |
| Volleys | 14 | Balance | 51 | Penalties | 18 | LogValue | 8 |

Figure [21\.16](UseCaseFIFA.html#fig:usecaseFIFAbreakDownWS) shows the BD plot. We can see that the most important contributions come from the explanatory variables related to goalkeeping skills like `GKPositioning`, `GKHandling`, and `GKReflexes`. Interestingly, field\-player skills like `BallControl` or `Dribbling` have a negative effect.

Figure 21\.16: Break\-down plot for Wojciech Szczęsny for the random forest model.

Figure [21\.17](UseCaseFIFA.html#fig:usecaseFIFAshapWS) presents Shapley values (over 25 random orderings of explanatory variables).
The plot confirms that the most important contributions to the prediction for Wojciech Szczęsny are due to goalkeeping skills like `GKDiving`, `GKPositioning`, `GKReflexes`, and `GKHandling`. Interestingly, `Reactions` is also important, as was the case for Robert Lewandowski (see Figure [21\.9](UseCaseFIFA.html#fig:usecaseFIFAshapPlot)) and Cristiano Ronaldo (see Figure [21\.14](UseCaseFIFA.html#fig:usecaseFIFAshapCR7Plot)).

Figure 21\.17: Shapley values for Wojciech Szczęsny for the random forest model.

### 21\.7\.6 Lionel Messi

This instance might be THE choice for some of the readers. However, we have decided to leave the explanation of the models' predictions in this case as an exercise for interested readers.
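For readers who would like to attempt the exercise, a minimal sketch of a possible starting point is given below. It reuses the R calls shown earlier in this chapter; the explainer name `fifa_rf_exp` and the row label `"L. Messi"` are assumptions for illustration only and should be adapted to how the data and the random forest explainer were actually created.

```
# A possible starting point for the exercise; fifa_rf_exp (a DALEX explainer
# for the random forest model) and the row label "L. Messi" are assumptions.
messi <- fifa["L. Messi", ]

# Break-down plot (variable attributions for a single ordering).
messi_bd <- predict_parts(fifa_rf_exp, new_observation = messi,
                          type = "break_down")
plot(messi_bd)

# Shapley values (attributions averaged over random orderings).
messi_shap <- predict_parts(fifa_rf_exp, new_observation = messi,
                            type = "shap")
plot(messi_shap, show_boxplots = FALSE)

# Ceteris-paribus profiles for the variables used for the other strikers.
messi_cp <- predict_profile(fifa_rf_exp, new_observation = messi,
                            variables = c("Age", "Reactions",
                                          "BallControl", "Dribbling"))
plot(messi_cp, variables = c("Age", "Reactions", "BallControl", "Dribbling"))
```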
Machine Learning
pbiecek.github.io
https://pbiecek.github.io/ema/reproducibility.html
22 Reproducibility ================== All examples presented in this book are reproducible. Parts of the source codes are available in the book. Over time, some of the functionality of the described packages may change. The online version is updated. Fully reproducible code is available at <https://pbiecek.github.io/ema/>. Possible differences may be caused by other versions of the installed packages. Results in this version of the book are obtained with the following versions of the packages. 22\.1 Package versions for R ---------------------------- The current versions of packages in R can be checked with `sessionInfo()`. ``` sessionInfo() ``` ``` ## R version 4.0.2 (2020-06-22) ## Platform: x86_64-apple-darwin17.0 (64-bit) ## Running under: macOS Catalina 10.15.7 ## ## Matrix products: default ## BLAS: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib ## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib ## ## locale: ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 ## ## attached base packages: ## [1] grid stats graphics grDevices utils datasets ## [7] methods base ## ## other attached packages: ## [1] ranger_0.12.1 GGally_2.0.0 tidyr_1.1.2 ## [4] scales_1.1.1 tree_1.0-40 gridExtra_2.3 ## [7] dplyr_1.0.2 caret_6.0-86 ingredients_2.0 ## [10] gower_0.2.2 glmnet_4.0-2 Matrix_1.2-18 ## [13] iml_0.10.1 localModel_0.5 lime_0.5.1 ## [16] DALEXtra_2.0 iBreakDown_1.3.1.9000 e1071_1.7-4 ## [19] gbm_2.1.8 randomForest_4.6-14 rms_6.0-1 ## [22] SparseM_1.78 Hmisc_4.4-1 Formula_1.2-4 ## [25] survival_3.2-7 lattice_0.20-41 forcats_0.5.0 ## [28] patchwork_1.0.1 ggmosaic_0.3.0 kableExtra_1.2.1 ## [31] knitr_1.30 DALEX_2.0.1 ggplot2_3.3.2 ## ## loaded via a namespace (and not attached): ## [1] backports_1.1.10 workflows_0.2.1 plyr_1.8.6 ## [4] lazyeval_0.2.2 splines_4.0.2 listenv_0.8.0 ## [7] TH.data_1.0-10 digest_0.6.27 foreach_1.5.1 ## [10] htmltools_0.5.0 parsnip_0.1.3 fansi_0.4.1 ## [13] productplots_0.1.1 memoise_1.1.0 magrittr_2.0.1 ## [16] checkmate_2.0.0 cluster_2.1.0 Metrics_0.1.4 ## [19] recipes_0.1.14 globals_0.13.1 matrixStats_0.57.0 ## [22] sandwich_3.0-0 dials_0.0.9 jpeg_0.1-8.1 ## [25] colorspace_2.0-0 blob_1.2.1 rvest_0.3.6 ## [28] xfun_0.18 RCurl_1.98-1.2 crayon_1.3.4 ## [31] jsonlite_1.7.1 libcoin_1.0-6 flock_0.7 ## [34] zoo_1.8-8 iterators_1.0.13 glue_1.4.2 ## [37] gtable_0.3.0 ipred_0.9-9 webshot_0.5.2 ## [40] MatrixModels_0.4-1 shape_1.4.5 mvtnorm_1.1-1 ## [43] DBI_1.1.0 Rcpp_1.0.5 archivist_2.3.4 ## [46] viridisLite_0.3.0 xtable_1.8-4 htmlTable_2.1.0 ## [49] reticulate_1.16 bit_4.0.4 GPfit_1.0-8 ## [52] foreign_0.8-80 stats4_4.0.2 prediction_0.3.14 ## [55] lava_1.6.8 prodlim_2019.11.13 htmlwidgets_1.5.2 ## [58] httr_1.4.2 RColorBrewer_1.1-2 ellipsis_0.3.1 ## [61] reshape_0.8.8 pkgconfig_2.0.3 farver_2.0.3 ## [64] nnet_7.3-14 reshape2_1.4.4 DiceDesign_1.8-1 ## [67] tidyselect_1.1.0 labeling_0.4.2 rlang_0.4.8 ## [70] later_1.1.0.1 munsell_0.5.0 tools_4.0.2 ## [73] cli_2.2.0 RSQLite_2.2.1 generics_0.1.0 ## [76] evaluate_0.14 stringr_1.4.0 fastmap_1.0.1 ## [79] yaml_2.2.1 bit64_4.0.5 ModelMetrics_1.2.2.2 ## [82] purrr_0.3.4 future_1.19.1 nlme_3.1-149 ## [85] mime_0.9 quantreg_5.73 xml2_1.3.2 ## [88] compiler_4.0.2 shinythemes_1.1.2 rstudioapi_0.13 ## [91] plotly_4.9.2.1 png_0.1-7 tibble_3.0.4 ## [94] lhs_1.1.1 stringi_1.5.3 highr_0.8 ## [97] vctrs_0.3.5 pillar_1.4.7 lifecycle_0.2.0 ## [100] bitops_1.0-6 data.table_1.13.2 conquer_1.0.2 ## [103] httpuv_1.5.4 R6_2.5.0 latticeExtra_0.6-29 ## [106] bookdown_0.21 
promises_1.1.1 codetools_0.2-16 ## [109] polspline_1.1.19 MASS_7.3-53 assertthat_0.2.1 ## [112] withr_2.3.0 multcomp_1.4-14 mgcv_1.8-33 ## [115] parallel_4.0.2 rpart_4.1-15 timeDate_3043.102 ## [118] class_7.3-17 rmarkdown_2.4 inum_1.0-1 ## [121] pROC_1.16.2 partykit_1.2-10 shiny_1.5.0 ## [124] lubridate_1.7.9 base64enc_0.1-3 ``` 22\.2 Package versions for Python --------------------------------- The current versions of packages in Python can be checked with `pip`. ``` pip freeze ``` ``` absl-py==0.6.1 appnope==0.1.0 astor==0.7.1 atomicwrites==1.2.1 attrs==18.2.0 backcall==0.1.0 bleach==3.1.1 cycler==0.10.0 dalex==0.3.0 dbexplorer==1.21 decorator==4.4.2 defusedxml==0.6.0 entrypoints==0.3 future==0.17.1 gast==0.2.0 grpcio==1.16.1 h5py==2.8.0 imageio==2.9.0 importlib-metadata==1.5.0 innvestigate==1.0.4 ipykernel==5.1.4 ipython==7.13.0 ipython-genutils==0.2.0 jedi==0.16.0 Jinja2==2.11.1 joblib==0.14.1 json5==0.9.2 jsonschema==3.2.0 jupyter-client==6.0.0 jupyter-core==4.6.3 jupyterlab==2.0.1 jupyterlab-server==1.0.7 Keras==2.2.2 Keras-Applications==1.0.6 Keras-Preprocessing==1.0.5 kiwisolver==1.0.1 lightgbm==2.3.1 lime==0.2.0.1 Markdown==3.0.1 MarkupSafe==1.1.1 matplotlib==2.2.2 mistune==0.8.4 more-itertools==4.3.0 nbconvert==5.6.1 nbformat==5.0.4 networkx==2.4 notebook==6.0.3 numpy==1.19.0 pandas==1.1.1 pandocfilters==1.4.2 parso==0.6.2 patsy==0.5.1 pd==0.0.1 pexpect==4.8.0 pickleshare==0.7.5 Pillow==5.3.0 plotly==4.9.0 pluggy==0.8.0 prometheus-client==0.7.1 prompt-toolkit==3.0.3 protobuf==3.6.1 psycopg2==2.7.4 ptyprocess==0.6.0 py==1.7.0 Pygments==2.5.2 PyMySQL==0.8.0 pyodbc==4.0.23 pyparsing==2.2.0 pyrsistent==0.15.7 pytest==4.0.0 python-dateutil==2.7.3 pytz==2018.4 PyWavelets==1.1.1 PyYAML==3.13 pyzmq==19.0.0 retrying==1.3.3 scikit-image==0.17.2 scikit-learn==0.22.2.post1 scipy==1.1.0 Send2Trash==1.5.0 simplejson==3.14.0 six==1.11.0 sklearn==0.0 sphinx-rtd-theme==0.4.0 statsmodels==0.11.1 tabulate==0.8.7 tensorboard==1.12.0 tensorflow==1.12.0 termcolor==1.1.0 terminado==0.8.3 testpath==0.4.4 tifffile==2020.7.17 torch==1.0.1.post2 torchvision==0.2.1 tornado==6.0.4 tqdm==4.43.0 traitlets==4.3.3 typing==3.6.4 virtualenv==16.2.0 wcwidth==0.1.8 webencodings==0.5.1 Werkzeug==0.14.1 xgboost==0.72 zipp==3.1.0 ```
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/index.html
Preface ======= This book is sold by Taylor \& Francis Group, who owns the copyright. The physical copies are available at [Taylor \& Francis](https://www.crcpress.com/Hands-On-Machine-Learning-with-R/Boehmke-Greenwell/p/book/9781138495685) and [Amazon](https://www.amazon.com/gp/product/1138495689?pf_rd_p=ab873d20-a0ca-439b-ac45-cd78f07a84d8&pf_rd_r=JBRX0ZJ1WFSR9T3JPTQE). Welcome to *Hands\-On Machine Learning with R*. This book provides hands\-on modules for many of the most common machine learning methods to include: * Generalized low rank models * Clustering algorithms * Autoencoders * Regularized models * Random forests * Gradient boosting machines * Deep neural networks * Stacking / super learners * and more! You will learn how to build and tune these various models with R packages that have been tested and approved due to their ability to scale well. However, our motivation in almost every case is to describe the techniques in a way that helps develop intuition for its strengths and weaknesses. For the most part, we minimize mathematical complexity when possible but also provide resources to get deeper into the details if desired. Who should read this -------------------- We intend this work to be a practitioner’s guide to the machine learning process and a place where one can come to learn about the approach and to gain intuition about the many commonly used, modern, and powerful methods accepted in the machine learning community. If you are familiar with the analytic methodologies, this book may still serve as a reference for how to work with the various R packages for implementation. While an abundance of videos, blog posts, and tutorials exist online, we have long been frustrated by the lack of consistency, completeness, and bias towards singular packages for implementation. This is what inspired this book. This book is not meant to be an introduction to R or to programming in general; as we assume the reader has familiarity with the R language to include defining functions, managing R objects, controlling the flow of a program, and other basic tasks. If not, we would refer you to [R for Data Science](http://r4ds.had.co.nz/index.html) (Wickham and Grolemund [2016](#ref-wickham2016r)) to learn the fundamentals of data science with R such as importing, cleaning, transforming, visualizing, and exploring your data. For those looking to advance their R programming skills and knowledge of the language, we would refer you to [Advanced R](http://adv-r.had.co.nz/) (Wickham [2014](#ref-wickham2014advanced)). Nor is this book designed to be a deep dive into the theory and math underpinning machine learning algorithms. Several books already exist that do great justice in this arena (i.e. [Elements of Statistical Learning](https://web.stanford.edu/~hastie/ElemStatLearn/) (J. Friedman, Hastie, and Tibshirani [2001](#ref-esl)), [Computer Age Statistical Inference](https://web.stanford.edu/~hastie/CASI/) (Efron and Hastie [2016](#ref-efron2016computer)), [Deep Learning](http://www.deeplearningbook.org/) (Goodfellow, Bengio, and Courville [2016](#ref-goodfellow2016deep))). Instead, this book is meant to help R users learn to use the machine learning stack within R, which includes using various R packages such as **glmnet**, **h2o**, **ranger**, **xgboost**, **lime**, and others to effectively model and gain insight from your data. The book favors a hands\-on approach, growing an intuitive understanding of machine learning through concrete examples and just a little bit of theory. 
While you can read this book without opening R, we highly recommend you experiment with the code examples provided throughout. Why R ----- R has emerged over the last couple decades as a first\-class tool for scientific computing tasks, and has been a consistent leader in implementing statistical methodologies for analyzing data. The usefulness of R for data science stems from the large, active, and growing ecosystem of third\-party packages: **tidyverse** for common data analysis activities; **h2o**, **ranger**, **xgboost**, and others for fast and scalable machine learning; **iml**, **pdp**, **vip**, and others for machine learning interpretability; and many more tools will be mentioned throughout the pages that follow. Conventions used in this book ----------------------------- The following typographical conventions are used in this book: * ***strong italic***: indicates new terms, * **bold**: indicates package \& file names, * `inline code`: monospaced highlighted text indicates functions or other commands that could be typed literally by the user, * code chunk: indicates commands or other text that could be typed literally by the user ``` 1 + 2 ## [1] 3 ``` In addition to the general text used throughout, you will notice the following code chunks with images: Signifies a tip or suggestion Signifies a general note Signifies a warning or caution Additional resources -------------------- There are many great resources available to learn about machine learning. Throughout the chapters we try to include many of the resources that we have found extremely useful for digging deeper into the methodology and applying with code. However, due to print restrictions, the hard copy version of this book limits the concepts and methods discussed. Online supplementary material exists at <https://koalaverse.github.io/homlr/>. The additional material will accumulate over time and include extended chapter material (i.e., random forest package benchmarking) along with brand new content we couldn’t fit in (i.e., random hyperparameter search). In addition, you can download the data used throughout the book, find teaching resources (i.e., slides and exercises), and more. Feedback -------- Reader comments are greatly appreciated. To report errors or bugs please post an issue at <https://github.com/koalaverse/homlr/issues>. Acknowledgments --------------- We’d like to thank everyone who contributed feedback, typo corrections, and discussions while the book was being written. GitHub contributors included \\(@\\)agailloty, \\(@\\)asimumba, \\(@\\)benprew, \\(@\\)bfgray3, \\(@\\)bragks, \\(@\\)cunningjames, \\(@\\)DesmondChoy, \\(@\\)erickeniuk, \\(@\\)j\-ryanhart, \\(@\\)lcreteig, \\(@\\)liangwu82, \\(@\\)Lianta, \\(@\\)mccurcio, \\(@\\)mmelcher76, \\(@\\)MMonterosso89, \\(@\\)nsharkey, \\(@\\)raycblai, \\(@\\)schoonees, \\(@\\)tpristavec and \\(@\\)william3031\. We’d also like to thank folks such as Alex Gutman, Greg Anderson, Jay Cunningham, Joe Keller, Mike Pane, Scott Crawford, and several other co\-workers who provided great input around much of this machine learning content. Software information -------------------- This book was built with the following packages and R version. All code was executed on 2017 MacBook Pro with a 2\.9 GHz Intel Core i7 processor, 16 GB of memory, 2133 MHz speed, and double data rate synchronous dynamic random access memory (DDR3\). 
``` # packages used pkgs <- c( "AmesHousing", "AppliedPredictiveModeling", "bookdown", "broom", "caret", "caretEnsemble", "cluster", "cowplot", "DALEX", "data.table", "doParallel", "dplyr", "dslabs", "e1071", "earth", "emo", "extracat", "factoextra", "foreach", "forecast", "ggbeeswarm", "ggmap", "ggplot2", "ggplotify", "gbm", "glmnet", "gridExtra", "h2o", "HDclassif", "iml", "ipred", "kableExtra", "keras", "kernlab", "knitr", "lime", "markdown", "MASS", "Matrix", "mclust", "mlbench", "NbClust", "pBrackets", "pcadapt", "pdp", "plotROC", "pls", "pROC", "purrr", "ranger", "readr", "recipes", "reshape2", "ROCR", "rpart", "rpart.plot", "rsample", "scales", "sparsepca", "stringr", "subsemble", "SuperLearner", "tfruns", "tfestimators", "tidyr", "vip", "visdat", "xgboost", "yardstick" ) # package & session info sessioninfo::session_info(pkgs) #> ─ Session info ────────────────────────────────────────────────────────── #> setting value #> version R version 3.6.2 (2019-12-12) #> os macOS Mojave 10.14.6 #> system x86_64, darwin15.6.0 #> ui X11 #> language (EN) #> collate en_US.UTF-8 #> ctype en_US.UTF-8 #> tz America/New_York #> date 2020-02-01 #> #> ─ Packages ────────────────────────────────────────────────────────────── #> ! package * version date lib #> abind 1.4-5 2016-07-21 [1] #> AmesHousing 0.0.3 2017-12-17 [1] #> ape 5.3 2019-03-17 [1] #> AppliedPredictiveModeling 1.1-7 2018-05-22 [1] #> askpass 1.1 2019-01-13 [1] #> assertthat 0.2.1 2019-03-21 [1] #> backports 1.1.5 2019-10-02 [1] #> base64enc 0.1-3 2015-07-28 [1] #> beeswarm 0.2.3 2016-04-25 [1] #> BH 1.69.0-1 2019-01-07 [1] #> bitops 1.0-6 2013-08-17 [1] #> bookdown 0.11 2019-05-28 [1] #> boot 1.3-23 2019-07-05 [1] #> broom 0.5.2 2019-04-07 [1] #> callr 3.3.2 2019-09-22 [1] #> car 3.0-3 2019-05-27 [1] #> carData 3.0-2 2018-09-30 [1] #> caret 6.0-84 2019-04-27 [1] #> caretEnsemble 2.0.0 2016-02-07 [1] #> caTools 1.17.1.2 2019-03-06 [1] #> cellranger 1.1.0 2016-07-27 [1] #> checkmate 1.9.3 2019-05-03 [1] #> class 7.3-15 2019-01-01 [1] #> cli 2.0.1 2020-01-08 [1] #> clipr 0.7.0 2019-07-23 [1] #> cluster 2.1.0 2019-06-19 [1] #> codetools 0.2-16 2018-12-24 [1] #> colorspace 1.4-1 2019-03-18 [1] #> config 0.3 2018-03-27 [1] #> CORElearn 1.53.1 2018-09-29 [1] #> cowplot 0.9.4 2019-01-08 [1] #> crayon 1.3.4 2017-09-16 [1] #> crosstalk 1.0.0 2016-12-21 [1] #> curl 4.3 2019-12-02 [1] #> cvAUC 1.1.0 2014-12-09 [1] #> DALEX 0.4 2019-05-17 [1] #> data.table 1.12.6 2019-10-18 [1] #> dendextend 1.12.0 2019-05-11 [1] #> DEoptimR 1.0-8 2016-11-19 [1] #> digest 0.6.22 2019-10-21 [1] #> doParallel 1.0.14 2018-09-24 [1] #> dplyr 0.8.3 2019-07-04 [1] #> dslabs 0.5.2 2018-12-19 [1] #> e1071 1.7-2 2019-06-05 [1] #> earth 5.1.1 2019-04-12 [1] #> ellipse 0.4.1 2018-01-05 [1] #> ellipsis 0.3.0 2019-09-20 [1] #> emo 0.0.0.9000 2019-05-03 [1] #> evaluate 0.14 2019-05-28 [1] #> R extracat <NA> <NA> [?] 
#> factoextra 1.0.5 2017-08-22 [1] #> FactoMineR 1.41 2018-05-04 [1] #> fansi 0.4.1 2020-01-08 [1] #> fit.models 0.5-14 2017-04-06 [1] #> flashClust 1.01-2 2012-08-21 [1] #> forcats 0.4.0 2019-02-17 [1] #> foreach 1.4.4 2017-12-12 [1] #> forecast 8.7 2019-04-29 [1] #> foreign 0.8-72 2019-08-02 [1] #> forge 0.2.0 2019-02-26 [1] #> Formula 1.2-3 2018-05-03 [1] #> fracdiff 1.4-2 2012-12-02 [1] #> furrr 0.1.0 2018-05-16 [1] #> future 1.13.0 2019-05-08 [1] #> gbm 2.1.5 2019-01-14 [1] #> gdata 2.18.0 2017-06-06 [1] #> generics 0.0.2 2018-11-29 [1] #> ggbeeswarm 0.6.0 2017-08-07 [1] #> ggmap 3.0.0 2019-02-05 [1] #> ggplot2 3.2.1 2019-08-10 [1] #> ggplotify 0.0.3 2018-08-03 [1] #> ggpubr 0.2 2018-11-15 [1] #> ggrepel 0.8.1 2019-05-07 [1] #> ggsci 2.9 2018-05-14 [1] #> ggsignif 0.5.0 2019-02-20 [1] #> glmnet 3.0 2019-11-09 [1] #> globals 0.12.4 2018-10-11 [1] #> glue 1.3.1 2019-03-12 [1] #> gower 0.2.0 2019-03-07 [1] #> gplots 3.0.1.1 2019-01-27 [1] #> gridExtra 2.3 2017-09-09 [1] #> gridGraphics 0.4-1 2019-05-20 [1] #> gridSVG 1.7-0 2019-02-12 [1] #> gtable 0.3.0 2019-03-25 [1] #> gtools 3.8.1 2018-06-26 [1] #> h2o 3.22.1.1 2019-01-10 [1] #> haven 2.2.0 2019-11-08 [1] #> HDclassif 2.1.0 2018-05-11 [1] #> hexbin 1.27.3 2019-05-14 [1] #> highr 0.8 2019-03-20 [1] #> hms 0.5.2 2019-10-30 [1] #> htmltools 0.3.6 2017-04-28 [1] #> htmlwidgets 1.3 2018-09-30 [1] #> httpuv 1.5.1 2019-04-05 [1] #> httr 1.4.1 2019-08-05 [1] #> iml 0.9.0 2019-02-05 [1] #> inum 1.0-1 2019-04-25 [1] #> ipred 0.9-9 2019-04-28 [1] #> iterators 1.0.10 2018-07-13 [1] #> jpeg 0.1-8.1 2019-10-24 [1] #> jsonlite 1.6 2018-12-07 [1] #> kableExtra 1.1.0 2019-03-16 [1] #> keras 2.2.5.0 2019-10-08 [1] #> kernlab 0.9-27 2018-08-10 [1] #> KernSmooth 2.23-16 2019-10-15 [1] #> knitr 1.25 2019-09-18 [1] #> labeling 0.3 2014-08-23 [1] #> later 0.8.0 2019-02-11 [1] #> lattice 0.20-38 2018-11-04 [1] #> lava 1.6.5 2019-02-12 [1] #> lazyeval 0.2.2 2019-03-15 [1] #> leaps 3.0 2017-01-10 [1] #> libcoin 1.0-4 2019-02-28 [1] #> lifecycle 0.1.0 2019-08-01 [1] #> lime 0.5.1 2019-11-12 [1] #> listenv 0.7.0 2018-01-21 [1] #> lme4 1.1-21 2019-03-05 [1] #> lmtest 0.9-37 2019-04-30 [1] #> lubridate 1.7.4 2018-04-11 [1] #> magrittr 1.5 2014-11-22 [1] #> maptools 0.9-5 2019-02-18 [1] #> markdown 1.1 2019-08-07 [1] #> MASS 7.3-51.4 2019-03-31 [1] #> Matrix 1.2-18 2019-11-27 [1] #> MatrixModels 0.4-1 2015-08-22 [1] #> mclust 5.4.3 2019-03-14 [1] #> memuse 4.0-0 2017-11-10 [1] #> Metrics 0.1.4 2018-07-09 [1] #> mgcv 1.8-31 2019-11-09 [1] #> mime 0.8 2019-12-19 [1] #> minqa 1.2.4 2014-10-09 [1] #> mlbench 2.1-1 2012-07-10 [1] #> mmapcharr 0.3.0 2019-02-26 [1] #> ModelMetrics 1.2.2 2018-11-03 [1] #> munsell 0.5.0 2018-06-12 [1] #> mvtnorm 1.0-10 2019-03-05 [1] #> NbClust 3.0 2015-04-13 [1] #> nlme 3.1-142 2019-11-07 [1] #> nloptr 1.2.1 2018-10-03 [1] #> nnet 7.3-12 2016-02-02 [1] #> nnls 1.4 2012-03-19 [1] #> numDeriv 2016.8-1 2016-08-27 [1] #> openssl 1.4.1 2019-07-18 [1] #> openxlsx 4.1.0.1 2019-05-28 [1] #> partykit 1.2-3 2019-01-31 [1] #> pbapply 1.4-2 2019-08-31 [1] #> pbkrtest 0.4-7 2017-03-15 [1] #> pBrackets 1.0 2014-10-17 [1] #> pcadapt 4.1.0 2019-02-27 [1] #> pcaPP 1.9-73 2018-01-14 [1] #> pdp 0.7.0 2018-08-27 [1] #> permute 0.9-5 2019-03-12 [1] #> pillar 1.4.2 2019-06-29 [1] #> pinfsc50 1.1.0 2016-12-02 [1] #> pkgconfig 2.0.3 2019-09-22 [1] #> plogr 0.2.0 2018-03-25 [1] #> plotly 4.9.1 2019-11-07 [1] #> plotmo 3.5.4 2019-04-06 [1] #> plotrix 3.7-5 2019-04-07 [1] #> plotROC 2.2.1 2018-06-23 [1] #> pls 2.7-1 2019-03-23 [1] #> plyr 1.8.4 2016-06-08 [1] #> png 
0.1-7 2013-12-03 [1] #> polynom 1.4-0 2019-03-22 [1] #> prediction 0.3.6.2 2019-01-31 [1] #> prettyunits 1.0.2 2015-07-13 [1] #> pROC 1.14.0 2019-03-12 [1] #> processx 3.4.1 2019-07-18 [1] #> prodlim 2018.04.18 2018-04-18 [1] #> progress 1.2.2 2019-05-16 [1] #> promises 1.0.1 2018-04-13 [1] #> ps 1.3.0 2018-12-21 [1] #> purrr 0.3.3 2019-10-18 [1] #> quadprog 1.5-7 2019-05-06 [1] #> quantmod 0.4-15 2019-06-17 [1] #> quantreg 5.38 2018-12-18 [1] #> R6 2.4.1 2019-11-12 [1] #> ranger 0.11.2 2019-03-07 [1] #> rARPACK 0.11-0 2016-03-10 [1] #> RColorBrewer 1.1-2 2014-12-07 [1] #> Rcpp 1.0.3 2019-11-08 [1] #> RcppArmadillo 0.9.500.2.0 2019-06-12 [1] #> RcppEigen 0.3.3.5.0 2018-11-24 [1] #> RCurl 1.95-4.12 2019-03-04 [1] #> readr 1.3.1 2018-12-21 [1] #> readxl 1.3.1 2019-03-13 [1] #> recipes 0.1.7 2019-09-15 [1] #> rematch 1.0.1 2016-04-21 [1] #> reshape2 1.4.3 2017-12-11 [1] #> reticulate 1.13 2019-07-24 [1] #> RgoogleMaps 1.4.3 2018-11-07 [1] #> rio 0.5.16 2018-11-26 [1] #> rjson 0.2.20 2018-06-08 [1] #> rlang 0.4.4 2020-01-28 [1] #> rmarkdown 1.15.1 2019-09-09 [1] #> rmio 0.1.2 2019-02-22 [1] #> robust 0.4-18 2017-04-27 [1] #> robustbase 0.93-5 2019-05-12 [1] #> ROCR 1.0-7 2015-03-26 [1] #> rpart 4.1-15 2019-04-12 [1] #> rpart.plot 3.0.7 2019-04-12 [1] #> rrcov 1.4-7 2018-11-15 [1] #> rsample 0.0.5 2019-07-12 [1] #> RSpectra 0.14-0 2019-04-04 [1] #> rstudioapi 0.10 2019-03-19 [1] #> rsvd 1.0.0 2018-11-06 [1] #> rvcheck 0.1.3 2018-12-06 [1] #> rvest 0.3.5 2019-11-08 [1] #> scales 1.0.0 2018-08-09 [1] #> scatterplot3d 0.3-41 2018-03-14 [1] #> selectr 0.4-1 2018-04-06 [1] #> shape 1.4.4 2018-02-07 [1] #> shiny 1.3.2 2019-04-22 [1] #> shinythemes 1.1.2 2018-11-06 [1] #> sourcetools 0.1.7 2018-04-25 [1] #> sp 1.3-1 2018-06-05 [1] #> SparseM 1.77 2017-04-23 [1] #> sparsepca 0.1.2 2018-04-11 [1] #> SQUAREM 2017.10-1 2017-10-07 [1] #> stringi 1.4.3 2019-03-12 [1] #> stringr 1.4.0.9000 2019-11-12 [1] #> R subsemble <NA> <NA> [?] 
#> SuperLearner 2.0-25 2019-08-09 [1] #> survival 3.1-8 2019-12-03 [1] #> sys 3.3 2019-08-21 [1] #> TeachingDemos 2.10 2016-02-12 [1] #> tensorflow 2.0.0 2019-10-02 [1] #> tfestimators 1.9.1 2018-11-07 [1] #> tfruns 1.4 2018-08-25 [1] #> tibble 2.1.3 2019-06-06 [1] #> tidyr 1.0.0 2019-09-11 [1] #> tidyselect 0.2.5 2018-10-11 [1] #> timeDate 3043.102 2018-02-21 [1] #> tinytex 0.15 2019-08-07 [1] #> tseries 0.10-47 2019-06-05 [1] #> TTR 0.23-4 2018-09-20 [1] #> urca 1.3-0 2016-09-06 [1] #> utf8 1.1.4 2018-05-24 [1] #> vcfR 1.8.0 2018-04-17 [1] #> vctrs 0.2.0 2019-07-05 [1] #> vegan 2.5-5 2019-05-12 [1] #> vip 0.2.0 2020-01-20 [1] #> vipor 0.4.5 2017-03-22 [1] #> viridis 0.5.1 2018-03-29 [1] #> viridisLite 0.3.0 2018-02-01 [1] #> visdat 0.5.3 2019-02-15 [1] #> webshot 0.5.1 2018-09-28 [1] #> whisker 0.4 2019-08-28 [1] #> withr 2.1.2 2018-03-15 [1] #> xfun 0.10 2019-10-01 [1] #> xgboost 0.90.0.2 2019-08-01 [1] #> XML 3.98-1.19 2019-03-06 [1] #> xml2 1.2.2 2019-08-09 [1] #> xtable 1.8-4 2019-04-21 [1] #> xts 0.11-2 2018-11-05 [1] #> yaImpute 1.0-31 2019-01-09 [1] #> yaml 2.2.0 2018-07-25 [1] #> yardstick 0.0.3 2019-03-08 [1] #> zeallot 0.1.0 2018-01-28 [1] #> zip 2.0.4 2019-09-01 [1] #> zoo 1.8-6 2019-05-28 [1] #> source #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.1) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> Github (hadley/emo@02a5206) #> CRAN (R 3.6.0) #> <NA> #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 
3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> Github (rstudio/rmarkdown@ff285a0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> Github (tidyverse/stringr@80aaaac) #> <NA> #> CRAN (R 3.6.0) #> CRAN (R 3.6.2) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> Github (koalaverse/vip@a3323d3) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> CRAN (R 3.6.0) #> #> [1] /Library/Frameworks/R.framework/Versions/3.6/Resources/library #> #> R ── Package was removed from disk. ``` Who should read this -------------------- We intend this work to be a practitioner’s guide to the machine learning process and a place where one can come to learn about the approach and to gain intuition about the many commonly used, modern, and powerful methods accepted in the machine learning community. If you are familiar with the analytic methodologies, this book may still serve as a reference for how to work with the various R packages for implementation. While an abundance of videos, blog posts, and tutorials exist online, we have long been frustrated by the lack of consistency, completeness, and bias towards singular packages for implementation. This is what inspired this book. This book is not meant to be an introduction to R or to programming in general; as we assume the reader has familiarity with the R language to include defining functions, managing R objects, controlling the flow of a program, and other basic tasks. 
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/intro.html
Chapter 1 Introduction to Machine Learning ========================================== Machine learning (ML) continues to grow in importance for many organizations across nearly all domains. Some example applications of machine learning in practice include: * Predicting the likelihood of a patient returning to the hospital (*readmission*) within 30 days of discharge. * Segmenting customers based on common attributes or purchasing behavior for targeted marketing. * Predicting coupon redemption rates for a given marketing campaign. * Predicting customer churn so an organization can perform preventative intervention. * And many more! In essence, these tasks all seek to learn from data. To address each scenario, we can use a given set of *features* to train an algorithm and extract insights. These algorithms, or *learners*, can be classified according to the amount and type of supervision needed during training. The two main groups this book focuses on are: ***supervised learners*** which construct predictive models, and ***unsupervised learners*** which build descriptive models. Which type you will need to use depends on the learning task you hope to accomplish. 1\.1 Supervised learning ------------------------ A ***predictive model*** is used for tasks that involve the prediction of a given output (or target) using other variables (or features) in the data set. Or, as stated by Kuhn and Johnson ([2013](#ref-apm), 26:2\), predictive modeling is “…the process of developing a mathematical tool or model that generates an accurate prediction.” The learning algorithm in a predictive model attempts to discover and model the relationships among the target variable (the variable being predicted) and the other features (aka predictor variables). Examples of predictive modeling include: * using customer attributes to predict the probability of the customer churning in the next 6 weeks; * using home attributes to predict the sales price; * using employee attributes to predict the likelihood of attrition; * using patient attributes and symptoms to predict the risk of readmission; * using production attributes to predict time to market. Each of these examples has a defined learning task; they each intend to use attributes (\\(X\\)) to predict an outcome measurement (\\(Y\\)). Throughout this text we’ll use various terms interchangeably for * \\(X\\): “predictor variable”, “independent variable”, “attribute”, “feature”, “predictor” * \\(Y\\): “target variable”, “dependent variable”, “response”, “outcome measurement” The predictive modeling examples above describe what is known as *supervised learning*. The supervision refers to the fact that the target values provide a supervisory role, which indicates to the learner the task it needs to learn. Specifically, given a set of data, the learning algorithm attempts to optimize a function (the algorithmic steps) to find the combination of feature values that results in a predicted value that is as close to the actual target output as possible. In supervised learning, the training data you feed the algorithm includes the target values. Consequently, the solutions can be used to help *supervise* the training process to find the optimal algorithm parameters. Most supervised learning problems can be bucketed into one of two categories, *regression* or *classification*, which we discuss next. 
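Before turning to those two categories, here is a minimal sketch of the supervised setup using R's built-in `mtcars` data (a hypothetical illustration, not one of the data sets used in this book): the training data supply both the features and the target, and the fitted model is judged by how close its predictions come to the observed target values.

```
# Y = mpg (target); X = {wt, hp} (features). lm() optimizes its coefficients so
# that predictions are as close as possible to the observed target values.
fit <- lm(mpg ~ wt + hp, data = mtcars)
head(data.frame(actual = mtcars$mpg, predicted = round(predict(fit), 1)))
```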
### 1\.1\.1 Regression problems When the objective of our supervised learning is to predict a numeric outcome, we refer to this as a ***regression problem*** (not to be confused with linear regression modeling). Regression problems revolve around predicting output that falls on a continuum. In the examples above, predicting home sales prices and time to market reflect a regression problem because the output is numeric and continuous. This means, given the combination of predictor values, the response value could fall anywhere along some continuous spectrum (e.g., the predicted sales price of a particular home could be between $80,000 and $755,000\). Figure [1\.1](intro.html#fig:intro-regression-problem) illustrates average home sales prices as a function of two home features: year built and total square footage. Depending on the combination of these two features, the expected home sales price could fall anywhere along a plane. Figure 1\.1: Average home sales price as a function of year built and total square footage. ### 1\.1\.2 Classification problems When the objective of our supervised learning is to predict a categorical outcome, we refer to this as a ***classification problem***. Classification problems most commonly revolve around predicting a binary or multinomial response measure such as: * Did a customer redeem a coupon (coded as yes/no or 1/0\)? * Did a customer churn (coded as yes/no or 1/0\)? * Did a customer click on our online ad (coded as yes/no or 1/0\)? * Classifying customer reviews: + Binary: positive vs. negative. + Multinomial: extremely negative to extremely positive on a 0–5 Likert scale. Figure 1\.2: Classification problem modeling ‘Yes’/‘No’ response based on three features. However, when we apply machine learning models for classification problems, rather than predict a particular class (i.e., “yes” or “no”), we often want to predict the *probability* of a particular class (i.e., yes: 0\.65, no: 0\.35\). By default, the class with the highest predicted probability becomes the predicted class. Consequently, even though we are performing a classification problem, we are still predicting a numeric output (probability). However, the essence of the problem still makes it a classification problem. Although there are machine learning algorithms that can be applied to regression problems but not classification and vice versa, most of the supervised learning algorithms we cover in this book can be applied to both. These algorithms have become the most popular machine learning applications in recent years. 1\.2 Unsupervised learning -------------------------- ***Unsupervised learning***, in contrast to supervised learning, includes a set of statistical tools to better understand and describe your data, but performs the analysis without a target variable. In essence, unsupervised learning is concerned with identifying groups in a data set. The groups may be defined by the rows (i.e., *clustering*) or the columns (i.e., *dimension reduction*); however, the motive in each case is quite different. The goal of ***clustering*** is to segment observations into similar groups based on the observed variables; for example, to divide consumers into different homogeneous groups, a process known as market segmentation. In **dimension reduction**, we are often concerned with reducing the number of variables in a data set. For example, classical linear regression models break down in the presence of highly correlated features. 
Some dimension reduction techniques can be used to reduce the feature set to a potentially smaller set of uncorrelated variables. Such a reduced feature set is often used as input to downstream supervised learning models (e.g., principal component regression). Unsupervised learning is often performed as part of an exploratory data analysis (EDA). However, the exercise tends to be more subjective, and there is no simple goal for the analysis, such as prediction of a response. Furthermore, it can be hard to assess the quality of results obtained from unsupervised learning methods. The reason for this is simple. If we fit a predictive model using a supervised learning technique (i.e., linear regression), then it is possible to check our work by seeing how well our model predicts the response *Y* on observations not used in fitting the model. However, in unsupervised learning, there is no way to check our work because we don’t know the true answer—the problem is unsupervised! Despite its subjectivity, the importance of unsupervised learning should not be overlooked and such techniques are often used in organizations to: * Divide consumers into different homogeneous groups so that tailored marketing strategies can be developed and deployed for each segment. * Identify groups of online shoppers with similar browsing and purchase histories, as well as items that are of particular interest to the shoppers within each group. Then an individual shopper can be preferentially shown the items in which he or she is particularly likely to be interested, based on the purchase histories of similar shoppers. * Identify products that have similar purchasing behavior so that managers can manage them as product groups. These questions, and many more, can be addressed with unsupervised learning. Moreover, the outputs of unsupervised learning models can be used as inputs to downstream supervised learning models. 1\.3 Roadmap ------------ The goal of this book is to provide effective tools for uncovering relevant and useful patterns in your data by using R’s ML stack. We begin by providing an overview of the ML modeling process and discussing fundamental concepts that will carry through the rest of the book. These include feature engineering, data splitting, model validation and tuning, and performance measurement. These concepts will be discussed in Chapters [2](process.html#process)\-[3](engineering.html#engineering). Chapters [4](linear-regression.html#linear-regression)\-[14](svm.html#svm) focus on common supervised learners ranging from simpler linear regression models to the more complicated gradient boosting machines and deep neural networks. Here we will illustrate the fundamental concepts of each base learning algorithm and how to tune its hyperparameters to maximize predictive performance. Chapters [15](stacking.html#stacking)\-[16](iml.html#iml) delve into more advanced approaches to maximize effectiveness, efficiency, and interpretation of your ML models. We discuss how to combine multiple models to create a stacked model (aka *super learner*), which allows you to combine the strengths from each base learner and further maximize predictive accuracy. We then illustrate how to make the training and validation process more efficient with automated ML (aka AutoML). Finally, we illustrate many ways to extract insight from your “black box” models with various ML interpretation techniques. 
The latter part of the book focuses on unsupervised techniques aimed at reducing the dimensions of your data for more effective data representation (Chapters [17](pca.html#pca)\-[19](autoencoders.html#autoencoders)) and identifying common groups among your observations with clustering techniques (Chapters [20](kmeans.html#kmeans)\-[22](model-clustering.html#model-clustering)). 1\.4 The data sets ------------------ The data sets chosen for this book allow us to illustrate the different features of the presented machine learning algorithms. Since the goal of this book is to demonstrate how to implement R’s ML stack, we make the assumption that you have already spent significant time cleaning and getting to know your data via EDA. This would allow you to perform many necessary tasks prior to the ML tasks outlined in this book such as: * Feature selection (i.e., removing unnecessary variables and retaining only those variables you wish to include in your modeling process). * Recoding variable names and values so that they are meaningful and more interpretable. * Recoding, removing, or some other approach to handling missing values. Consequently, the exemplar data sets we use throughout this book have, for the most part, gone through the necessary cleaning processes. In some cases we illustrate concepts with stereotypical data sets (i.e. `mtcars`, `iris`, `geyser`); however, we tend to focus most of our discussion around the following data sets: * Property sales information as described in De Cock ([2011](#ref-de2011ames)). + **problem type**: supervised regression + **response variable**: `Sale_Price` (i.e., $195,000, $215,000\) + **features**: 80 + **observations**: 2,930 + **objective**: use property attributes to predict the sale price of a home + **access**: provided by the `AmesHousing` package (Kuhn [2017](#ref-R-AmesHousing)[a](#ref-R-AmesHousing)) + **more details**: See `?AmesHousing::ames_raw` ``` # access data ames <- AmesHousing::make_ames() # initial dimension dim(ames) ## [1] 2930 81 # response variable head(ames$Sale_Price) ## [1] 215000 105000 172000 244000 189900 195500 ``` You can see the entire data cleaning process to transform the raw Ames housing data (`AmesHousing::ames_raw`) to the final clean data (`AmesHousing::make_ames`) that we will use in machine learning algorithms throughout this book by typing `AmesHousing::make_ames` into the R console. * Employee attrition information originally provided by [IBM Watson Analytics Lab](https://www.ibm.com/communities/analytics/watson-analytics-blog/hr-employee-attrition/). + **problem type**: supervised binomial classification + **response variable**: `Attrition` (i.e., “Yes”, “No”) + **features**: 30 + **observations**: 1,470 + **objective**: use employee attributes to predict if they will attrit (leave the company) + **access**: provided by the `rsample` package (Kuhn and Wickham [2019](#ref-R-rsample)) + **more details**: See `?rsample::attrition` ``` # access data attrition <- rsample::attrition # initial dimension dim(attrition) ## [1] 1470 31 # response variable head(attrition$Attrition) ## [1] Yes No Yes No No No ## Levels: No Yes ``` * Image information for handwritten numbers originally presented to AT\&T Bell Labs to help build automatic mail\-sorting machines for the USPS. It has been used since the early 1990s to compare machine learning performance on pattern recognition (i.e., LeCun et al. ([1990](#ref-lecun1990handwritten)); LeCun et al. 
([1998](#ref-lecun1998gradient)); Cireşan, Meier, and Schmidhuber ([2012](#ref-cirecsan2012multi))). + **Problem type**: supervised multinomial classification + **response variable**: `V785` (i.e., numbers to predict: 0, 1, …, 9\) + **features**: 784 + **observations**: 60,000 (train) / 10,000 (test) + **objective**: use attributes about the “darkness” of each of the 784 pixels in images of handwritten numbers to predict if the number is 0, 1, …, or 9\. + **access**: provided by the `dslabs` package (Irizarry [2018](#ref-R-dslabs)) + **more details**: See `?dslabs::read_mnist()` and [online MNIST documentation](http://yann.lecun.com/exdb/mnist/) ``` #access data mnist <- dslabs::read_mnist() names(mnist) ## [1] "train" "test" # initial feature dimensions dim(mnist$train$images) ## [1] 60000 784 # response variable head(mnist$train$labels) ## [1] 5 0 4 1 9 2 ``` * Grocery items and quantities purchased. Each observation represents a single basket of goods that were purchased together. + **Problem type**: unsupervised basket analysis + **response variable**: NA + **features**: 42 + **observations**: 2,000 + **objective**: use attributes of each basket to identify common groupings of items purchased together. + **access**: available on the companion website for this book ``` # URL to download/read in the data url <- "https://koalaverse.github.io/homlr/data/my_basket.csv" # Access data my_basket <- readr::read_csv(url) # Print dimensions dim(my_basket) ## [1] 2000 42 # Peek at response variable my_basket ## # A tibble: 2,000 x 42 ## `7up` lasagna pepsi yop red.wine cheese bbq bulmers mayonnaise ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 0 0 0 0 0 0 0 0 0 ## 2 0 0 0 0 0 0 0 0 0 ## 3 0 0 0 0 0 0 0 0 0 ## 4 0 0 0 2 1 0 0 0 0 ## 5 0 0 0 0 0 0 0 2 0 ## 6 0 0 0 0 0 0 0 0 0 ## 7 1 1 0 0 0 0 1 0 0 ## 8 0 0 0 0 0 0 0 0 0 ## 9 0 1 0 0 0 0 0 0 0 ## 10 0 0 0 0 0 0 0 0 0 ## # … with 1,990 more rows, and 33 more variables: horlics <dbl>, ## # chicken.tikka <dbl>, milk <dbl>, mars <dbl>, coke <dbl>, ## # lottery <dbl>, bread <dbl>, pizza <dbl>, sunny.delight <dbl>, ## # ham <dbl>, lettuce <dbl>, kronenbourg <dbl>, leeks <dbl>, fanta <dbl>, ## # tea <dbl>, whiskey <dbl>, peas <dbl>, newspaper <dbl>, muesli <dbl>, ## # white.wine <dbl>, carrots <dbl>, spinach <dbl>, pate <dbl>, ## # instant.coffee <dbl>, twix <dbl>, potatoes <dbl>, fosters <dbl>, ## # soup <dbl>, toad.in.hole <dbl>, coco.pops <dbl>, kitkat <dbl>, ## # broccoli <dbl>, cigarettes <dbl> ```
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/process.html
Chapter 2 Modeling Process ========================== Much like EDA, the ML process is very iterative and heuristic\-based. With minimal knowledge of the problem or data at hand, it is difficult to know which ML method will perform best. This is known as the *no free lunch* theorem for ML (Wolpert [1996](#ref-wolpert1996lack)). Consequently, it is common for many ML approaches to be applied, evaluated, and modified before a final, optimal model can be determined. Performing this process correctly provides great confidence in our outcomes. If not, the results will be useless and, potentially, damaging.[1](#fn1) Approaching ML modeling correctly means approaching it strategically by spending our data wisely on learning and validation procedures, properly pre\-processing the feature and target variables, minimizing *data leakage* (Section [3\.8\.2](engineering.html#data-leakage)), tuning hyperparameters, and assessing model performance. Many books and courses portray the modeling process as a short sprint. A better analogy would be a marathon where many iterations of these steps are repeated before eventually finding the final optimal model. This process is illustrated in Figure [2\.1](process.html#fig:modeling-process-modeling-process). Before introducing specific algorithms, this chapter, and the next, introduce concepts that are fundamental to the ML modeling process and that you’ll see briskly covered in future modeling chapters. Although the discussions in this chapter focus on supervised ML modeling, many of the topics also apply to unsupervised methods. Figure 2\.1: General predictive machine learning process. 2\.1 Prerequisites ------------------ This chapter leverages the following packages. ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for awesome graphics # Modeling process packages library(rsample) # for resampling procedures library(caret) # for resampling and model training library(h2o) # for resampling and model training # h2o set-up h2o.no_progress() # turn off h2o progress bars h2o.init() # launch h2o ``` To illustrate some of the concepts, we’ll use the Ames Housing and employee attrition data sets introduced in Chapter [1](intro.html#intro). Throughout this book, we’ll demonstrate approaches with ordinary R data frames. However, since many of the supervised machine learning chapters leverage the **h2o** package, we’ll also show how to do some of the tasks with H2O objects. You can convert any R data frame to an H2O object (i.e., import it to the H2O cloud) easily with `as.h2o(<my-data-frame>)`. If you try to convert the original `rsample::attrition` data set to an H2O object, an error will occur. This is because several variables are *ordered factors* and H2O has no way of handling this data type. Consequently, you must convert any ordered factors to unordered; see `?base::ordered` for details. ``` # Ames housing data ames <- AmesHousing::make_ames() ames.h2o <- as.h2o(ames) # Job attrition data churn <- rsample::attrition %>% mutate_if(is.ordered, .funs = factor, ordered = FALSE) churn.h2o <- as.h2o(churn) ``` 2\.2 Data splitting ------------------- A major goal of the machine learning process is to find an algorithm \\(f\\left(X\\right)\\) that most accurately predicts future values (\\(\\hat{Y}\\)) based on a set of features (\\(X\\)). In other words, we want an algorithm that not only fits well to our past data, but more importantly, one that predicts a future outcome accurately. This is called the ***generalizability*** of our algorithm. 
How we “spend” our data will help us understand how well our algorithm generalizes to unseen data. To provide an accurate understanding of the generalizability of our final optimal model, we can split our data into training and test data sets: * **Training set**: these data are used to develop feature sets, train our algorithms, tune hyperparameters, compare models, and all of the other activities required to choose a final model (e.g., the model we want to put into production). * **Test set**: having chosen a final model, these data are used to estimate an unbiased assessment of the model’s performance, which we refer to as the *generalization error*. It is critical that the test set not be used prior to selecting your final model. Assessing results on the test set prior to final model selection biases the model selection process since the testing data will have become part of the model development process. Figure 2\.2: Splitting data into training and test sets. Given a fixed amount of data, typical recommendations for splitting your data into training\-test splits include 60% (training)–40% (testing), 70%–30%, or 80%–20%. Generally speaking, these are appropriate guidelines to follow; however, it is good to keep the following points in mind: * Spending too much in training (e.g., \\(\>80\\%\\)) won’t allow us to get a good assessment of predictive performance. We may find a model that fits the training data very well, but is not generalizable (*overfitting*). * Sometimes too much spent in testing (\\(\>40\\%\\)) won’t allow us to get a good assessment of model parameters. Other factors should also influence the allocation proportions. For example, very large training sets (e.g., \\(n \> 100\\texttt{K}\\)) often result in only marginal gains compared to smaller sample sizes. Consequently, you may use a smaller training sample to increase computation speed (e.g., models built on larger training sets often take longer to score new data sets in production). In contrast, as \\(p \\geq n\\) (where \\(p\\) represents the number of features), larger sample sizes are often required to identify consistent signals in the features. The two most common ways of splitting data include ***simple random sampling*** and ***stratified sampling***. ### 2\.2\.1 Simple random sampling The simplest way to split the data into training and test sets is to take a simple random sample. This does not control for any data attributes, such as the distribution of your response variable (\\(Y\\)). There are multiple ways to split our data in R. Here we show four options to produce a 70–30 split in the Ames housing data: Sampling is a random process so setting the random number generator with a common seed allows for reproducible results. Throughout this book we’ll often use the seed `123` for reproducibility but the number itself has no special meaning. 
``` # Using base R set.seed(123) # for reproducibility index_1 <- sample(1:nrow(ames), round(nrow(ames) * 0.7)) train_1 <- ames[index_1, ] test_1 <- ames[-index_1, ] # Using caret package set.seed(123) # for reproducibility index_2 <- createDataPartition(ames$Sale_Price, p = 0.7, list = FALSE) train_2 <- ames[index_2, ] test_2 <- ames[-index_2, ] # Using rsample package set.seed(123) # for reproducibility split_1 <- initial_split(ames, prop = 0.7) train_3 <- training(split_1) test_3 <- testing(split_1) # Using h2o package split_2 <- h2o.splitFrame(ames.h2o, ratios = 0.7, seed = 123) train_4 <- split_2[[1]] test_4 <- split_2[[2]] ``` With sufficient sample size, this sampling approach will typically result in a similar distribution of \\(Y\\) (e.g., `Sale_Price` in the `ames` data) between your training and test sets, as illustrated below. Figure 2\.3: Training (black) vs. test (red) response distribution. ### 2\.2\.2 Stratified sampling If we want to explicitly control the sampling so that our training and test sets have similar \\(Y\\) distributions, we can use stratified sampling. This is more common with classification problems where the response variable may be severely imbalanced (e.g., 90% of observations with response “Yes” and 10% with response “No”). However, we can also apply stratified sampling to regression problems for data sets that have a small sample size and where the response variable deviates strongly from normality (i.e., positively skewed like `Sale_Price`). With a continuous response variable, stratified sampling will segment \\(Y\\) into quantiles and randomly sample from each. Consequently, this will help ensure a balanced representation of the response distribution in both the training and test sets. The easiest way to perform stratified sampling on a response variable is to use the **rsample** package, where you specify the response variable to `strata`fy. The following illustrates that in our original employee attrition data we have an imbalanced response (No: 84%, Yes: 16%). By enforcing stratified sampling, both our training and testing sets have approximately equal response distributions. ``` # original response distribution table(churn$Attrition) %>% prop.table() ## ## No Yes ## 0.8387755 0.1612245 # stratified sampling with the rsample package set.seed(123) split_strat <- initial_split(churn, prop = 0.7, strata = "Attrition") train_strat <- training(split_strat) test_strat <- testing(split_strat) # consistent response ratio between train & test table(train_strat$Attrition) %>% prop.table() ## ## No Yes ## 0.838835 0.161165 table(test_strat$Attrition) %>% prop.table() ## ## No Yes ## 0.8386364 0.1613636 ``` ### 2\.2\.3 Class imbalances Imbalanced data can have a significant impact on model predictions and performance (Kuhn and Johnson [2013](#ref-apm)). Most often this involves classification problems where one class has a very small proportion of observations (e.g., defaults \- 5% versus nondefaults \- 95%). Several sampling methods have been developed to help remedy class imbalance and most of them can be categorized as either *up\-sampling* or *down\-sampling*. Down\-sampling balances the dataset by reducing the size of the abundant class(es) to match the frequencies in the least prevalent class. This method is used when the quantity of data is sufficient. By keeping all samples in the rare class and randomly selecting an equal number of samples in the abundant class, a balanced new dataset can be retrieved for further modeling. 
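To make the down-sampling idea concrete, here is a minimal base R sketch; it assumes the `churn` data created in the prerequisites above, and `caret::downSample()` provides a ready-made alternative.

```
# Keep every minority ("Yes") observation and randomly draw an equal number of
# majority ("No") observations, yielding a balanced data set
set.seed(123)
minority <- churn[churn$Attrition == "Yes", ]
majority <- churn[churn$Attrition == "No", ]
majority_down <- majority[sample(nrow(majority), nrow(minority)), ]
churn_down <- rbind(minority, majority_down)
table(churn_down$Attrition) %>% prop.table()
```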
Furthermore, the reduced sample size reduces the computation burden imposed by further steps in the ML process. In contrast, up\-sampling is used when the quantity of data is insufficient. It tries to balance the dataset by increasing the size of rarer samples. Rather than getting rid of abundant samples, new rare samples are generated by using repetition or bootstrapping (described further in Section [2\.4\.2](process.html#bootstrapping)). Note that there is no absolute advantage of one sampling method over another. Application of these two methods depends on the use case and the data set itself. A combination of over\- and under\-sampling is often successful and a common approach is known as Synthetic Minority Over\-Sampling Technique, or SMOTE (Chawla et al. [2002](#ref-chawla2002smote)). This alternative sampling approach, as well as others, can be implemented in R (see the `sampling` argument in `?caret::trainControl()`). Furthermore, many ML algorithms implemented in R have class weighting schemes to remedy imbalances internally (e.g., most **h2o** algorithms have a `weights_column` and `balance_classes` argument). 2\.3 Creating models in R ------------------------- The R ecosystem provides a wide variety of ML algorithm implementations. This makes many powerful algorithms available at your fingertips. Moreover, there is almost always more than one package to perform each algorithm (e.g., there are over 20 packages for fitting random forests). There are pros and cons to this wide selection; some implementations may be more computationally efficient while others may be more flexible (i.e., have more hyperparameter tuning options). Future chapters will expose you to many of the packages and algorithms that perform and scale best to the kinds of tabular data and problems encountered by most organizations. However, this also has resulted in some drawbacks as there are inconsistencies in how algorithms allow you to define the formula of interest and how the results and predictions are supplied.[2](#fn2) In addition to illustrating the more popular and powerful packages, we’ll also show you how to use implementations that provide more consistency. ### 2\.3\.1 Many formula interfaces To fit a model to our data, the model terms must be specified. Historically, there are two main interfaces for doing this. The formula interface uses R’s formula rules to specify a symbolic representation of the terms. For example, `Y ~ X` where we say “Y is a function of X”. To illustrate, suppose we have some generic modeling function called `model_fn()` which accepts an R formula, as in the following examples: ``` # Sale price as function of neighborhood and year sold model_fn(Sale_Price ~ Neighborhood + Year_Sold, data = ames) # Variables + interactions model_fn(Sale_Price ~ Neighborhood + Year_Sold + Neighborhood:Year_Sold, data = ames) # Shorthand for all predictors model_fn(Sale_Price ~ ., data = ames) # Inline functions / transformations model_fn(log10(Sale_Price) ~ ns(Longitude, df = 3) + ns(Latitude, df = 3), data = ames) ``` This is very convenient but it has some disadvantages. For example: * You can’t nest in\-line functions such as performing principal components analysis on the feature set prior to executing the model (`model_fn(y ~ pca(scale(x1), scale(x2), scale(x3)), data = df)`). * All the model matrix calculations happen at once and can’t be recycled when used in a model function. 
* For very wide data sets, the formula method can be extremely inefficient (Kuhn [2017](#ref-kuhnFormula)[b](#ref-kuhnFormula)). * There are limited roles that variables can take, which has led to several re\-implementations of formulas. * Specifying multivariate outcomes is clunky and inelegant. * Not all modeling functions have a formula method (lack of consistency!). Some modeling functions have a non\-formula (XY) interface. These functions have separate arguments for the predictors and the outcome(s): ``` # Use separate inputs for X and Y features <- c("Year_Sold", "Longitude", "Latitude") model_fn(x = ames[, features], y = ames$Sale_Price) ``` This provides more efficient calculations but can be inconvenient if you have transformations, factor variables, interactions, or any other operations to apply to the data prior to modeling. Overall, it is difficult to determine if a package has one or both of these interfaces. For example, the `lm()` function, which performs linear regression, only has the formula method. Consequently, until you are familiar with a particular implementation you will need to continue referencing the corresponding help documentation. A third interface is to use *variable name specification* where we provide all the data combined in one training frame but we specify the features and response with character strings. This is the interface used by the **h2o** package. ``` model_fn( x = c("Year_Sold", "Longitude", "Latitude"), y = "Sale_Price", data = ames.h2o ) ``` One approach to get around these inconsistencies is to use a meta engine, which we discuss next. ### 2\.3\.2 Many engines Although there are many individual ML packages available, there is also an abundance of meta engines that can be used to help provide consistency. For example, the following all produce the same linear regression model output: ``` lm_lm <- lm(Sale_Price ~ ., data = ames) lm_glm <- glm(Sale_Price ~ ., data = ames, family = gaussian) lm_caret <- train(Sale_Price ~ ., data = ames, method = "lm") ``` Here, `lm()` and `glm()` are two different algorithm engines that can be used to fit the linear model and `caret::train()` is a meta engine (aggregator) that allows you to apply almost any direct engine with `method = "<method-name>"`. There are trade\-offs to consider when using direct versus meta engines. For example, using direct engines can allow for extreme flexibility but also requires you to familiarize yourself with the unique differences of each implementation. For instance, the following highlights the various syntax nuances required to compute and extract predicted class probabilities across different direct engines.[3](#fn3)

Table 1: Syntax for computing predicted class probabilities with direct engines.

| Algorithm | Package | Code |
| --- | --- | --- |
| Linear discriminant analysis | **MASS** | `predict(obj)` |
| Generalized linear model | **stats** | `predict(obj, type = "response")` |
| Mixture discriminant analysis | **mda** | `predict(obj, type = "posterior")` |
| Decision tree | **rpart** | `predict(obj, type = "prob")` |
| Random Forest | **ranger** | `predict(obj)$predictions` |
| Gradient boosting machine | **gbm** | `predict(obj, type = "response", n.trees)` |

Meta engines provide you with more consistency in how you specify inputs and extract outputs but can be less flexible than direct engines. Future chapters will illustrate both approaches. 
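As a hedged sketch of that consistency (assuming the `churn` data from the prerequisites above; the three predictors are chosen only for illustration), a classification model fit through **caret** returns predicted class probabilities with a single, uniform `predict()` call rather than the engine-specific syntax listed in the table:

```
# caret wraps the underlying engine (here stats::glm) behind one interface;
# the same predict() call works if method is swapped for another engine
set.seed(123)
glm_caret <- train(
  Attrition ~ Age + MonthlyIncome + OverTime,
  data = churn,
  method = "glm",
  trControl = trainControl(method = "none")  # fit once, no resampling
)
head(predict(glm_caret, newdata = churn, type = "prob"))
```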
For meta engines, we’ll focus on the **caret** package in the hardcopy of the book while also demonstrating the newer **parsnip** package in the additional online resources.[4](#fn4) 2\.4 Resampling methods ----------------------- In Section [2\.2](process.html#splitting) we split our data into training and testing sets. Furthermore, we were very explicit about the fact that we ***do not*** use the test set to assess model performance during the training phase. So how do we assess the generalization performance of the model? One option is to assess an error metric based on the training data. Unfortunately, this leads to biased results as some models can perform very well on the training data but not generalize well to a new data set (we’ll illustrate this in Section [2\.5](process.html#bias-var)). A second method is to use a *validation* approach, which involves splitting the training set further to create two parts (as in Section [2\.2](process.html#splitting)): a training set and a validation set (or *holdout set*). We can then train our model(s) on the new training set and estimate the performance on the validation set. Unfortunately, validation using a single holdout set can be highly variable and unreliable unless you are working with very large data sets (Molinaro, Simon, and Pfeiffer [2005](#ref-molinaro2005prediction); Hawkins, Basak, and Mills [2003](#ref-hawkins2003assessing)). As the size of your data set reduces, this concern increases. Although we stick to our definitions of test, validation, and holdout sets, these terms are sometimes used interchangeably in other literature and software. What’s important to remember is to always put a portion of the data under lock and key until a final model has been selected (we refer to this as the test data, but others refer to it as the holdout set). **Resampling methods** provide an alternative approach by allowing us to repeatedly fit a model of interest to parts of the training data and test its performance on other parts. The two most commonly used resampling methods include *k\-fold cross validation* and *bootstrapping*. ### 2\.4\.1 *k*\-fold cross validation *k*\-fold cross\-validation (aka *k*\-fold CV) is a resampling method that randomly divides the training data into *k* groups (aka folds) of approximately equal size. The model is fit on \\(k\-1\\) folds and then the remaining fold is used to compute model performance. This procedure is repeated *k* times; each time, a different fold is treated as the validation set. This process results in *k* estimates of the generalization error (say \\(\\epsilon\_1, \\epsilon\_2, \\dots, \\epsilon\_k\\)). Thus, the *k*\-fold CV estimate is computed by averaging the *k* test errors, providing us with an approximation of the error we might expect on unseen data. Figure 2\.4: Illustration of the k\-fold cross validation process. Consequently, with *k*\-fold CV, every observation in the training data will be held out one time to be included in the test set as illustrated in Figure [2\.5](process.html#fig:modeling-process-cv). In practice, one typically uses \\(k \= 5\\) or \\(k \= 10\\). There is no formal rule as to the size of *k*; however, as *k* gets larger, the difference between the estimated performance and the true performance to be seen on the test set will decrease. On the other hand, using too large *k* can introduce computational burdens. 
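To make the mechanics of the estimate concrete, the following rough sketch (an illustration only; the two predictors are arbitrary) uses the **rsample** package loaded in the prerequisites to fit a simple `lm()` on each analysis set, compute the RMSE on the corresponding assessment set, and average the *k* error estimates.

```
set.seed(123)
cv <- vfold_cv(ames, v = 10)  # k = 10 folds

# Fit on each analysis set, score on the held-out assessment set
rmse_per_fold <- sapply(cv$splits, function(split) {
  fit <- lm(Sale_Price ~ Gr_Liv_Area + Year_Built, data = analysis(split))
  pred <- predict(fit, newdata = assessment(split))
  sqrt(mean((assessment(split)$Sale_Price - pred)^2))
})

mean(rmse_per_fold)  # k-fold CV estimate of the generalization error
```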
Moreover, Molinaro, Simon, and Pfeiffer ([2005](#ref-molinaro2005prediction)) found that \\(k\=10\\) performed similarly to leave\-one\-out cross validation (LOOCV) which is the most extreme approach (i.e., setting \\(k \= n\\)). Figure 2\.5: 10\-fold cross validation on 32 observations. Each observation is used once for validation and nine times for training. Although using \\(k \\geq 10\\) helps to minimize the variability in the estimated performance, *k*\-fold CV still tends to have higher variability than bootstrapping (discussed next). Kim ([2009](#ref-kim2009estimating)) showed that repeating *k*\-fold CV can help to increase the precision of the estimated generalization error. Consequently, for smaller data sets (say \\(n \< 10,000\\)), 10\-fold CV repeated 5 or 10 times will improve the accuracy of your estimated performance and also provide an estimate of its variability. Throughout this book we’ll cover multiple ways to incorporate CV as you can often perform CV directly within certain ML functions: ``` # Example using h2o h2o.cv <- h2o.glm( x = x, y = y, training_frame = ames.h2o, nfolds = 10 # perform 10-fold CV ) ``` Or externally as in the below chunk[5](#fn5). When applying it externally to an ML algorithm as below, we’ll need a process to apply the ML model to each resample, which we’ll also cover. ``` vfold_cv(ames, v = 10) ## # 10-fold cross-validation ## # A tibble: 10 x 2 ## splits id ## <named list> <chr> ## 1 <split [2.6K/293]> Fold01 ## 2 <split [2.6K/293]> Fold02 ## 3 <split [2.6K/293]> Fold03 ## 4 <split [2.6K/293]> Fold04 ## 5 <split [2.6K/293]> Fold05 ## 6 <split [2.6K/293]> Fold06 ## 7 <split [2.6K/293]> Fold07 ## 8 <split [2.6K/293]> Fold08 ## 9 <split [2.6K/293]> Fold09 ## 10 <split [2.6K/293]> Fold10 ``` ### 2\.4\.2 Bootstrapping A bootstrap sample is a random sample of the data taken *with replacement* (Efron and Tibshirani [1986](#ref-efron1986bootstrap)). This means that, after a data point is selected for inclusion in the subset, it’s still available for further selection. A bootstrap sample is the same size as the original data set from which it was constructed. Figure [2\.6](process.html#fig:modeling-process-bootstrapscheme) provides a schematic of bootstrap sampling where each bootstrap sample contains 12 observations just as in the original data set. Furthermore, bootstrap sampling will contain approximately the same distribution of values (represented by colors) as the original data set. Figure 2\.6: Illustration of the bootstrapping process. Since samples are drawn with replacement, each bootstrap sample is likely to contain duplicate values. In fact, on average, \\(\\approx 63\.21\\)% of the original sample ends up in any particular bootstrap sample. The original observations not contained in a particular bootstrap sample are considered *out\-of\-bag* (OOB). When bootstrapping, a model can be built on the selected samples and validated on the OOB samples; this is often done, for example, in random forests (see Chapter [11](random-forest.html#random-forest)). Since observations are replicated in bootstrapping, there tends to be less variability in the error measure compared with *k*\-fold CV (Efron [1983](#ref-efron1983estimating)). However, this can also increase the bias of your error estimate. This can be problematic with smaller data sets; however, for most average\-to\-large data sets (say \\(n \\geq 1,000\\)) this concern is often negligible. 
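The \\(\\approx 63\.21\\)% figure comes from the limit \\(1 \- (1 \- 1/n)^n \\rightarrow 1 \- e^{\-1}\\). As a quick sanity check (not from the book), a short simulation reproduces it:

```
# A hedged sketch: proportion of original rows that appear in one bootstrap sample
set.seed(123)
n <- 10000
boot_id <- sample(1:n, size = n, replace = TRUE)  # bootstrap sample of row indices
length(unique(boot_id)) / n   # roughly 0.632
1 - exp(-1)                   # theoretical limit of 1 - (1 - 1/n)^n as n grows
```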
Figure [2\.7](process.html#fig:modeling-process-sampling-comparison) compares bootstrapping to 10\-fold CV on a small data set with \\(n \= 32\\) observations. A thorough introduction to the bootstrap and its use in R is provided in Davison, Hinkley, and others ([1997](#ref-davison1997bootstrap)). Figure 2\.7: Bootstrap sampling (left) versus 10\-fold cross validation (right) on 32 observations. For bootstrap sampling, the observations that have zero replications (white) are the out\-of\-bag observations used for validation. We can create bootstrap samples easily with `rsample::bootstraps()`, as illustrated in the code chunk below. ``` bootstraps(ames, times = 10) ## # Bootstrap sampling ## # A tibble: 10 x 2 ## splits id ## <list> <chr> ## 1 <split [2.9K/1.1K]> Bootstrap01 ## 2 <split [2.9K/1.1K]> Bootstrap02 ## 3 <split [2.9K/1.1K]> Bootstrap03 ## 4 <split [2.9K/1K]> Bootstrap04 ## 5 <split [2.9K/1.1K]> Bootstrap05 ## 6 <split [2.9K/1.1K]> Bootstrap06 ## 7 <split [2.9K/1.1K]> Bootstrap07 ## 8 <split [2.9K/1.1K]> Bootstrap08 ## 9 <split [2.9K/1.1K]> Bootstrap09 ## 10 <split [2.9K/1K]> Bootstrap10 ``` Bootstrapping is, typically, more of an internal resampling procedure that is naturally built into certain ML algorithms. This will become more apparent in Chapters [10](bagging.html#bagging)–[11](random-forest.html#random-forest) where we discuss bagging and random forests, respectively. ### 2\.4\.3 Alternatives It is important to note that there are other useful resampling procedures. If you’re working with time\-series specific data then you will want to incorporate rolling origin and other time series resampling procedures. Hyndman and Athanasopoulos ([2018](#ref-hyndman2018forecasting)) is the dominant, R\-focused, time series resource[6](#fn6). Additionally, Efron ([1983](#ref-efron1983estimating)) developed the “632 method” and Efron and Tibshirani ([1997](#ref-efron1997improvements)) discuss the “632\+ method”; both approaches seek to minimize biases experienced with bootstrapping on smaller data sets and are available via **caret** (see `?caret::trainControl` for details). 2\.5 Bias variance trade\-off ----------------------------- Prediction errors can be decomposed into two important subcomponents: error due to “bias” and error due to “variance”. There is often a tradeoff between a model’s ability to minimize bias and variance. Understanding how different sources of error lead to bias and variance helps us improve the data fitting process resulting in more accurate models. ### 2\.5\.1 Bias *Bias* is the difference between the expected (or average) prediction of our model and the correct value which we are trying to predict. It measures how far off in general a model’s predictions are from the correct value, which provides a sense of how well a model can conform to the underlying structure of the data. Figure [2\.8](process.html#fig:modeling-process-bias-model) illustrates an example where the polynomial model does not capture the underlying structure well. Linear models are classical examples of high bias models as they are less flexible and rarely capture non\-linear, non\-monotonic relationships. We also need to think of bias\-variance in relation to resampling. Models with high bias are rarely affected by the noise introduced by resampling. If a model has high bias, it will have consistency in its resampling performance as illustrated by Figure [2\.8](process.html#fig:modeling-process-bias-model). 
Figure 2\.8: A biased polynomial model fit to a single data set does not capture the underlying non\-linear, non\-monotonic data structure (left). Models fit to 25 bootstrapped replicates of the data are undeterred by the noise and generate similar, yet still biased, predictions (right).

### 2\.5\.2 Variance

On the other hand, error due to *variance* is defined as the variability of a model prediction for a given data point. Many models (e.g., *k*\-nearest neighbor, decision trees, gradient boosting machines) are very adaptable and offer extreme flexibility in the patterns they can fit. However, these models bring their own problems, as they run the risk of overfitting to the training data. Although you may achieve very good performance on your training data, the model will not automatically generalize well to unseen data.

Figure 2\.9: A high variance *k*\-nearest neighbor model fit to a single data set captures the underlying non\-linear, non\-monotonic data structure well but also overfits to individual data points (left). Models fit to 25 bootstrapped replicates of the data are heavily influenced by the noise and generate highly variable predictions (right).

Since high variance models are more prone to overfitting, using resampling procedures is critical to reduce this risk. Moreover, many algorithms that are capable of achieving high generalization performance have lots of *hyperparameters* that control the level of model complexity (i.e., the tradeoff between bias and variance).

### 2\.5\.3 Hyperparameter tuning

Hyperparameters (aka *tuning parameters*) are the “knobs to twiddle”[7](#fn7) to control the complexity of machine learning algorithms and, therefore, the bias\-variance trade\-off. Not all algorithms have hyperparameters (e.g., ordinary least squares[8](#fn8)); however, most have at least one or more. The proper setting of these hyperparameters is often dependent on the data and problem at hand and cannot always be estimated from the training data alone. Consequently, we need a method of identifying the optimal setting. For example, in the previous section we illustrated a high variance *k*\-nearest neighbor model (we’ll discuss *k*\-nearest neighbor in Chapter [8](knn.html#knn)). *k*\-nearest neighbor models have a single hyperparameter (*k*) that determines the prediction based on the *k* nearest observations in the training data to the one being predicted. If *k* is small (e.g., \\(k\=3\\)), the model will make a prediction for a given observation based on the average of the response values for the 3 observations in the training data most similar to the observation being predicted. This often results in highly variable predicted values because we are basing the prediction (in this case, an average) on a very small subset of the training data. As *k* gets bigger, we base our predictions on an average of a larger subset of the training data, which naturally reduces the variance in our predicted values (remember this for later: averaging often helps to reduce variance!). Figure [2\.10](process.html#fig:modeling-process-knn-options) illustrates this point. Smaller *k* values (e.g., 2, 5, or 10\) lead to high variance (but lower bias) and larger values (e.g., 150\) lead to high bias (but lower variance). The optimal *k* value might exist somewhere between 20 and 50, but how do we know which value of *k* to use?

Figure 2\.10: *k*\-nearest neighbor model with differing values for *k*.
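To see this effect in a stripped\-down setting, consider the following toy sketch (not from the book); the simulated data and the `knn_pred()` helper are made up purely for illustration.

```
# A hedged sketch: effect of k on a simple 1-D k-nearest neighbor prediction
set.seed(123)
x <- runif(200)
y <- sin(4 * x) + rnorm(200, sd = 0.3)   # noisy, non-linear relationship

# Predict the response at x0 by averaging the k nearest training responses
knn_pred <- function(x0, k) mean(y[order(abs(x - x0))[1:k]])

sapply(c(2, 25, 150), function(k) knn_pred(0.5, k))
# small k: the prediction chases individual noisy points (high variance)
# very large k: the prediction is smoothed toward the global mean (high bias)
```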
One way to perform hyperparameter tuning is to fiddle with hyperparameters manually until you find a combination of hyperparameter values that results in high predictive accuracy (as measured using *k*\-fold CV, for instance). However, this can be very tedious work depending on the number of hyperparameters. An alternative approach is to perform a *grid search*. A grid search is an automated approach to searching across many combinations of hyperparameter values.

For our *k*\-nearest neighbor example, a grid search would predefine a candidate set of values for *k* (e.g., \\(k \= 1, 2, \\dots, j\\)) and perform a resampling method (e.g., *k*\-fold CV) to estimate which *k* value generalizes the best to unseen data. Figure [2\.11](process.html#fig:modeling-process-knn-tune) illustrates the results from a grid search to assess \\(k \= 2, 12, 14, \\dots, 150\\) using repeated 10\-fold CV. The error rate displayed represents the average error for each value of *k* across all the repeated CV folds. On average, \\(k\=46\\) was the optimal hyperparameter value to minimize error (in this case, RMSE, which is discussed in Section [2\.6](process.html#model-eval)) on unseen data.

Figure 2\.11: Results from a grid search for a *k*\-nearest neighbor model assessing values for *k* ranging from 2\-150\. We see high error values due to high model variance when *k* is small, and we also see high error values due to high model bias when *k* is large. The optimal model is found at *k* \= 46\.

Throughout this book you’ll be exposed to different approaches to performing grid searches. In the above example, we used a *full cartesian grid search*, which assesses every manually defined hyperparameter value. However, as models get more complex and offer more hyperparameters, this approach can become computationally burdensome and requires you to predefine the grid of settings to explore. Additional approaches we’ll illustrate include *random grid searches* (Bergstra and Bengio [2012](#ref-bergstra2012random)), which explore randomly selected hyperparameter values from a range of possible values; *early stopping*, which allows you to stop a grid search once improvement in the error becomes marginal; *adaptive resampling* via futility analysis (Kuhn [2014](#ref-kuhn2014futility)), which adaptively resamples candidate hyperparameter values based on approximately optimal performance; and more.

2\.6 Model evaluation
---------------------

Historically, the performance of statistical models was largely based on goodness\-of\-fit tests and assessment of residuals. Unfortunately, misleading conclusions may follow from predictive models that pass these kinds of assessments (Breiman and others [2001](#ref-breiman2001statistical)). Today, it has become widely accepted that a sounder approach to assessing model performance is to assess the predictive accuracy via *loss functions*. Loss functions are metrics that compare the predicted values to the actual value (the output of a loss function is often referred to as the *error* or pseudo *residual*). When performing resampling methods, we assess the predicted values for a validation set compared to the actual target value. For example, in regression, one way to measure error is to take the difference between the actual and predicted value for a given observation (this is the usual definition of a residual in ordinary linear regression).
The overall validation error of the model is computed by aggregating the errors across the entire validation data set.

There are many loss functions to choose from when assessing the performance of a predictive model, each providing a unique understanding of the predictive accuracy and differing between regression and classification models. Furthermore, the way a loss function is computed will tend to emphasize certain types of errors over others and can lead to drastic differences in how we interpret the “optimal model”. It’s important to consider the problem context when identifying the preferred performance metric to use. And when comparing multiple models, we need to compare them using the same metric.

### 2\.6\.1 Regression models

* **MSE**: Mean squared error is the average of the squared error (\\(MSE \= \\frac{1}{n} \\sum^n\_{i\=1}(y\_i \- \\hat y\_i)^2\\))[9](#fn9). The squared component results in larger errors having larger penalties. This (along with RMSE) is the most common error metric to use. **Objective: minimize**
* **RMSE**: Root mean squared error. This simply takes the square root of the MSE metric (\\(RMSE \= \\sqrt{\\frac{1}{n} \\sum^n\_{i\=1}(y\_i \- \\hat y\_i)^2}\\)) so that your error is in the same units as your response variable. If your response variable units are dollars, the units of MSE are dollars\-squared, but the RMSE will be in dollars. **Objective: minimize**
* **Deviance**: Short for mean residual deviance. In essence, it measures the degree to which a model explains the variation in a set of data when using maximum likelihood estimation. It compares a saturated model (i.e., a fully featured model) to an unsaturated model (i.e., an intercept\-only or average model). If the response variable distribution is Gaussian, then it will be approximately equal to MSE. When not, it usually gives a more useful estimate of error. Deviance is often used with classification models.[10](#fn10) **Objective: minimize**
* **MAE**: Mean absolute error. Similar to MSE but rather than squaring, it just takes the mean absolute difference between the actual and predicted values (\\(MAE \= \\frac{1}{n} \\sum^n\_{i\=1} \\vert y\_i \- \\hat y\_i \\vert\\)). This results in less emphasis on larger errors than MSE. **Objective: minimize**
* **RMSLE**: Root mean squared logarithmic error. Similar to RMSE but it performs a `log()` on the actual and predicted values prior to computing the difference (\\(RMSLE \= \\sqrt{\\frac{1}{n} \\sum^n\_{i\=1}(\\log(y\_i \+ 1\) \- \\log(\\hat y\_i \+ 1\))^2}\\)). When your response variable has a wide range of values, large response values with large errors can dominate the MSE/RMSE metric. RMSLE minimizes this impact so that small response values with large errors can have just as meaningful of an impact as large response values with large errors. **Objective: minimize**
* **\\(R^2\\)**: This is a popular metric that represents the proportion of the variance in the dependent variable that is predictable from the independent variable(s). Unfortunately, it has several limitations. For example, two models built from two different data sets could have the exact same RMSE, but if one has less variability in its response variable it would have a lower \\(R^2\\) than the other. You should not place too much emphasis on this metric. **Objective: maximize**

Most models we assess in this book will report most, if not all, of these metrics.
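To make these formulas concrete, the following sketch (not from the book) computes several of the metrics by hand; the actual and predicted values are made up purely for illustration.

```
# A hedged sketch: computing common regression loss functions directly
actual <- c(200000, 155000, 310000, 95000)
pred   <- c(195000, 170000, 280000, 105000)

mse   <- mean((actual - pred)^2)
rmse  <- sqrt(mse)                                    # same units as the response
mae   <- mean(abs(actual - pred))                     # softer penalty on large errors
rmsle <- sqrt(mean((log(actual + 1) - log(pred + 1))^2))

c(MSE = mse, RMSE = rmse, MAE = mae, RMSLE = rmsle)
```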
We will emphasize MSE and RMSE, but it’s important to realize that certain situations warrant emphasis on some metrics more than others.

### 2\.6\.2 Classification models

* **Misclassification**: This is the overall error. For example, say you are predicting 3 classes (*high*, *medium*, *low*) and each class has 25, 30, and 35 observations, respectively (90 observations total). If you misclassify 3 observations of class *high*, 6 of class *medium*, and 4 of class *low*, then you misclassified 13 out of 90 observations, resulting in a 14% misclassification rate. **Objective: minimize**
* **Mean per class error**: This is the average error rate for each class. For the above example, this would be the mean of \\(\\frac{3}{25}, \\frac{6}{30}, \\frac{4}{35}\\), which is 14\.5%. If your classes are balanced this will be identical to misclassification. **Objective: minimize**
* **MSE**: Mean squared error. This computes the squared distance between 1\.0 and the probability predicted for the true class. So, say we have three classes, A, B, and C, and your model predicts a probability of 0\.91 for A, 0\.07 for B, and 0\.02 for C. If the correct answer is A, then \\(MSE \= 0\.09^2 \= 0\.0081\\); if it is B, \\(MSE \= 0\.93^2 \= 0\.8649\\); and if it is C, \\(MSE \= 0\.98^2 \= 0\.9604\\). The squared component results in larger penalties when the predicted probability for the true class is far off. **Objective: minimize**
* **Cross\-entropy (aka Log Loss or Deviance)**: Similar to MSE but it incorporates the log of the predicted probability for the true class. Consequently, this metric disproportionately punishes predictions where we predict a small probability for the true class; in other words, having high confidence in the wrong answer is penalized heavily. **Objective: minimize**
* **Gini index**: Mainly used with tree\-based methods and commonly referred to as a measure of *purity*, where a small value indicates that a node contains predominantly observations from a single class. **Objective: minimize**

When applying classification models, we often use a *confusion matrix* to evaluate certain performance measures. A confusion matrix is simply a matrix that compares actual categorical levels (or events) to the predicted categorical levels. When we predict the right level, we refer to this as a *true positive*. However, if we predict a level or event that did not happen, this is called a *false positive* (i.e., we predicted a customer would redeem a coupon and they did not). Alternatively, when we do not predict an event and it does happen, this is called a *false negative* (i.e., a customer we did not predict to redeem a coupon does redeem it).

Figure 2\.12: Confusion matrix and relationships to terms such as true\-positive and false\-negative.

We can extract different levels of performance for binary classifiers. For example, given the classification (or confusion) matrix illustrated in Figure [2\.13](process.html#fig:modeling-process-confusion-matrix2), we can assess the following:

* **Accuracy**: Overall, how often is the classifier correct? This is the opposite of the misclassification rate above. Example: \\(\\frac{TP \+ TN}{total} \= \\frac{100\+50}{165} \= 0\.91\\). **Objective: maximize**
* **Precision**: How accurately does the classifier predict events? This metric is concerned with maximizing the ratio of true positives to false positives. In other words, of the events we predicted, how many actually occurred? Example: \\(\\frac{TP}{TP \+ FP} \= \\frac{100}{100\+10} \= 0\.91\\).
**Objective: maximize** * **Sensitivity (aka recall)**: How accurately does the classifier classify actual events? This metric is concerned with maximizing the true positives to false negatives ratio. In other words, for the events that occurred, how many did we predict? Example: \\(\\frac{TP}{TP \+ FN} \= \\frac{100}{100\+5} \= 0\.95\\). **Objective: maximize** * **Specificity**: How accurately does the classifier classify actual non\-events? Example: \\(\\frac{TN}{TN \+ FP} \= \\frac{50}{50\+10} \= 0\.83\\). **Objective: maximize** Figure 2\.13: Example confusion matrix. * **AUC**: Area under the curve. A good binary classifier will have high precision and sensitivity. This means the classifier does well when it predicts an event will and will not occur, which minimizes false positives and false negatives. To capture this balance, we often use a ROC curve that plots the false positive rate along the x\-axis and the true positive rate along the y\-axis. A line that is diagonal from the lower left corner to the upper right corner represents a random guess. The higher the line is in the upper left\-hand corner, the better. AUC computes the area under this curve. **Objective: maximize** Figure 2\.14: ROC curve. 2\.7 Putting the processes together ----------------------------------- To illustrate how this process works together via R code, let’s do a simple assessment on the `ames` housing data. First, we perform stratified sampling as illustrated in Section [2\.2\.2](process.html#stratified) to break our data into training vs. test data while ensuring we have consistent distributions between the training and test sets. ``` # Stratified sampling with the rsample package set.seed(123) split <- initial_split(ames, prop = 0.7, strata = "Sale_Price") ames_train <- training(split) ames_test <- testing(split) ``` Next, we’re going to apply a *k*\-nearest neighbor regressor to our data. To do so, we’ll use **caret**, which is a meta\-engine to simplify the resampling, grid search, and model application processes. The following defines: 1. **Resampling method**: we use 10\-fold CV repeated 5 times. 2. **Grid search**: we specify the hyperparameter values to assess (\\(k \= 2, 3, 4, \\dots, 25\\)). 3. **Model training \& Validation**: we train a *k*\-nearest neighbor (`method = "knn"`) model using our pre\-specified resampling procedure (`trControl = cv`), grid search (`tuneGrid = hyper_grid`), and preferred loss function (`metric = "RMSE"`). This grid search takes approximately 3\.5 minutes ``` # Specify resampling strategy cv <- trainControl( method = "repeatedcv", number = 10, repeats = 5 ) # Create grid of hyperparameter values hyper_grid <- expand.grid(k = seq(2, 25, by = 1)) # Tune a knn model using grid search knn_fit <- train( Sale_Price ~ ., data = ames_train, method = "knn", trControl = cv, tuneGrid = hyper_grid, metric = "RMSE" ) ``` Looking at our results we see that the best model coincided with \\(k\=\\) 7, which resulted in an RMSE of 43439\.07\. This implies that, on average, our model mispredicts the expected sale price of a home by $43,439\. Figure [2\.15](process.html#fig:modeling-process-example-process-assess) illustrates the cross\-validated error rate across the spectrum of hyperparameter values that we specified. ``` # Print and plot the CV results knn_fit ## k-Nearest Neighbors ## ## 2053 samples ## 80 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 1848, 1848, 1848, 1847, 1849, 1847, ... 
## Resampling results across tuning parameters: 
## 
##   k   RMSE      Rsquared   MAE     
##    2  47844.53  0.6538046  31002.72
##    3  45875.79  0.6769848  29784.69
##    4  44529.50  0.6949240  28992.48
##    5  43944.65  0.7026947  28738.66
##    6  43645.76  0.7079683  28553.50
##    7  43439.07  0.7129916  28617.80
##    8  43658.35  0.7123254  28769.16
##    9  43799.74  0.7128924  28905.50
##   10  44058.76  0.7108900  29061.68
##   11  44304.91  0.7091949  29197.78
##   12  44565.82  0.7073437  29320.81
##   13  44798.10  0.7056491  29475.33
##   14  44966.27  0.7051474  29561.70
##   15  45188.86  0.7036000  29731.56
##   16  45376.09  0.7027152  29860.67
##   17  45557.94  0.7016254  29974.44
##   18  45666.30  0.7021351  30018.59
##   19  45836.33  0.7013026  30105.50
##   20  46044.44  0.6997198  30235.80
##   21  46242.59  0.6983978  30367.95
##   22  46441.87  0.6969620  30481.48
##   23  46651.66  0.6953968  30611.48
##   24  46788.22  0.6948738  30681.97
##   25  46980.13  0.6928159  30777.25
## 
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was k = 7.

ggplot(knn_fit)
```

Figure 2\.15: Results from a grid search for a *k*\-nearest neighbor model on the Ames housing data assessing values for *k* ranging from 2\-25\.

The question remains: “Is this the best predictive model we can find?” We may have identified the optimal *k*\-nearest neighbor model for our given data set, but this doesn’t mean we’ve found the best possible overall model. Nor have we considered potential feature and target engineering options. The remainder of this book will walk you through the journey of identifying alternative solutions and, hopefully, a much more optimal model.
Future chapters will expose you to many of the packages and algorithms that perform and scale best to the kinds of tabular data and problems encountered by most organizations. However, this also has resulted in some drawbacks as there are inconsistencies in how algorithms allow you to define the formula of interest and how the results and predictions are supplied.[2](#fn2) In addition to illustrating the more popular and powerful packages, we’ll also show you how to use implementations that provide more consistency. ### 2\.3\.1 Many formula interfaces To fit a model to our data, the model terms must be specified. Historically, there are two main interfaces for doing this. The formula interface uses R’s formula rules to specify a symbolic representation of the terms. For example, `Y ~ X` where we say “Y is a function of X”. To illustrate, suppose we have some generic modeling function called `model_fn()` which accepts an R formula, as in the following examples: ``` # Sale price as function of neighborhood and year sold model_fn(Sale_Price ~ Neighborhood + Year_Sold, data = ames) # Variables + interactions model_fn(Sale_Price ~ Neighborhood + Year_Sold + Neighborhood:Year_Sold, data = ames) # Shorthand for all predictors model_fn(Sale_Price ~ ., data = ames) # Inline functions / transformations model_fn(log10(Sale_Price) ~ ns(Longitude, df = 3) + ns(Latitude, df = 3), data = ames) ``` This is very convenient but it has some disadvantages. For example: * You can’t nest in\-line functions such as performing principal components analysis on the feature set prior to executing the model (`model_fn(y ~ pca(scale(x1), scale(x2), scale(x3)), data = df)`). * All the model matrix calculations happen at once and can’t be recycled when used in a model function. * For very wide data sets, the formula method can be extremely inefficient (Kuhn [2017](#ref-kuhnFormula)[b](#ref-kuhnFormula)). * There are limited roles that variables can take which has led to several re\-implementations of formulas. * Specifying multivariate outcomes is clunky and inelegant. * Not all modeling functions have a formula method (lack of consistency!). Some modeling functions have a non\-formula (XY) interface. These functions have separate arguments for the predictors and the outcome(s): ``` # Use separate inputs for X and Y features <- c("Year_Sold", "Longitude", "Latitude") model_fn(x = ames[, features], y = ames$Sale_Price) ``` This provides more efficient calculations but can be inconvenient if you have transformations, factor variables, interactions, or any other operations to apply to the data prior to modeling. Overall, it is difficult to determine if a package has one or both of these interfaces. For example, the `lm()` function, which performs linear regression, only has the formula method. Consequently, until you are familiar with a particular implementation you will need to continue referencing the corresponding help documentation. A third interface, is to use *variable name specification* where we provide all the data combined in one training frame but we specify the features and response with character strings. This is the interface used by the **h2o** package. ``` model_fn( x = c("Year_Sold", "Longitude", "Latitude"), y = "Sale_Price", data = ames.h2o ) ``` One approach to get around these inconsistencies is to use a meta engine, which we discuss next. 
### 2\.3\.2 Many engines Although there are many individual ML packages available, there is also an abundance of meta engines that can be used to help provide consistency. For example, the following all produce the same linear regression model output: ``` lm_lm <- lm(Sale_Price ~ ., data = ames) lm_glm <- glm(Sale_Price ~ ., data = ames, family = gaussian) lm_caret <- train(Sale_Price ~ ., data = ames, method = "lm") ``` Here, `lm()` and `glm()` are two different algorithm engines that can be used to fit the linear model and `caret::train()` is a meta engine (aggregator) that allows you to apply almost any direct engine with `method = "<method-name>"`. There are trade\-offs to consider when using direct versus meta engines. For example, using direct engines can allow for extreme flexibility but also requires you to familiarize yourself with the unique differences of each implementation. For example, the following highlights the various syntax nuances required to compute and extract predicted class probabilities across different direct engines.[3](#fn3) Table 1: Syntax for computing predicted class probabilities with direct engines. | Algorithm | Package | Code | | --- | --- | --- | | Linear discriminant analysis | **MASS** | `predict(obj)` | | Generalized linear model | **stats** | `predict(obj, type = "response")` | | Mixture discriminant analysis | **mda** | `predict(obj, type = "posterior")` | | Decision tree | **rpart** | `predict(obj, type = "prob")` | | Random Forest | **ranger** | `predict(obj)$predictions` | | Gradient boosting machine | **gbm** | `predict(obj, type = "response", n.trees)` | Meta engines provide you with more consistency in how you specify inputs and extract outputs but can be less flexible than direct engines. Future chapters will illustrate both approaches. For meta engines, we’ll focus on the **caret** package in the hardcopy of the book while also demonstrating the newer **parsnip** package in the additional online resources.[4](#fn4) ### 2\.3\.1 Many formula interfaces To fit a model to our data, the model terms must be specified. Historically, there are two main interfaces for doing this. The formula interface uses R’s formula rules to specify a symbolic representation of the terms. For example, `Y ~ X` where we say “Y is a function of X”. To illustrate, suppose we have some generic modeling function called `model_fn()` which accepts an R formula, as in the following examples: ``` # Sale price as function of neighborhood and year sold model_fn(Sale_Price ~ Neighborhood + Year_Sold, data = ames) # Variables + interactions model_fn(Sale_Price ~ Neighborhood + Year_Sold + Neighborhood:Year_Sold, data = ames) # Shorthand for all predictors model_fn(Sale_Price ~ ., data = ames) # Inline functions / transformations model_fn(log10(Sale_Price) ~ ns(Longitude, df = 3) + ns(Latitude, df = 3), data = ames) ``` This is very convenient but it has some disadvantages. For example: * You can’t nest in\-line functions such as performing principal components analysis on the feature set prior to executing the model (`model_fn(y ~ pca(scale(x1), scale(x2), scale(x3)), data = df)`). * All the model matrix calculations happen at once and can’t be recycled when used in a model function. * For very wide data sets, the formula method can be extremely inefficient (Kuhn [2017](#ref-kuhnFormula)[b](#ref-kuhnFormula)). * There are limited roles that variables can take which has led to several re\-implementations of formulas. * Specifying multivariate outcomes is clunky and inelegant. 
* Not all modeling functions have a formula method (lack of consistency!). Some modeling functions have a non\-formula (XY) interface. These functions have separate arguments for the predictors and the outcome(s): ``` # Use separate inputs for X and Y features <- c("Year_Sold", "Longitude", "Latitude") model_fn(x = ames[, features], y = ames$Sale_Price) ``` This provides more efficient calculations but can be inconvenient if you have transformations, factor variables, interactions, or any other operations to apply to the data prior to modeling. Overall, it is difficult to determine if a package has one or both of these interfaces. For example, the `lm()` function, which performs linear regression, only has the formula method. Consequently, until you are familiar with a particular implementation you will need to continue referencing the corresponding help documentation. A third interface, is to use *variable name specification* where we provide all the data combined in one training frame but we specify the features and response with character strings. This is the interface used by the **h2o** package. ``` model_fn( x = c("Year_Sold", "Longitude", "Latitude"), y = "Sale_Price", data = ames.h2o ) ``` One approach to get around these inconsistencies is to use a meta engine, which we discuss next. ### 2\.3\.2 Many engines Although there are many individual ML packages available, there is also an abundance of meta engines that can be used to help provide consistency. For example, the following all produce the same linear regression model output: ``` lm_lm <- lm(Sale_Price ~ ., data = ames) lm_glm <- glm(Sale_Price ~ ., data = ames, family = gaussian) lm_caret <- train(Sale_Price ~ ., data = ames, method = "lm") ``` Here, `lm()` and `glm()` are two different algorithm engines that can be used to fit the linear model and `caret::train()` is a meta engine (aggregator) that allows you to apply almost any direct engine with `method = "<method-name>"`. There are trade\-offs to consider when using direct versus meta engines. For example, using direct engines can allow for extreme flexibility but also requires you to familiarize yourself with the unique differences of each implementation. For example, the following highlights the various syntax nuances required to compute and extract predicted class probabilities across different direct engines.[3](#fn3) Table 1: Syntax for computing predicted class probabilities with direct engines. | Algorithm | Package | Code | | --- | --- | --- | | Linear discriminant analysis | **MASS** | `predict(obj)` | | Generalized linear model | **stats** | `predict(obj, type = "response")` | | Mixture discriminant analysis | **mda** | `predict(obj, type = "posterior")` | | Decision tree | **rpart** | `predict(obj, type = "prob")` | | Random Forest | **ranger** | `predict(obj)$predictions` | | Gradient boosting machine | **gbm** | `predict(obj, type = "response", n.trees)` | Meta engines provide you with more consistency in how you specify inputs and extract outputs but can be less flexible than direct engines. Future chapters will illustrate both approaches. For meta engines, we’ll focus on the **caret** package in the hardcopy of the book while also demonstrating the newer **parsnip** package in the additional online resources.[4](#fn4) 2\.4 Resampling methods ----------------------- In Section [2\.2](process.html#splitting) we split our data into training and testing sets. 
Furthermore, we were very explicit about the fact that we ***do not*** use the test set to assess model performance during the training phase. So how do we assess the generalization performance of the model? One option is to assess an error metric based on the training data. Unfortunately, this leads to biased results as some models can perform very well on the training data but not generalize well to a new data set (we’ll illustrate this in Section [2\.5](process.html#bias-var)). A second method is to use a *validation* approach, which involves splitting the training set further to create two parts (as in Section [2\.2](process.html#splitting)): a training set and a validation set (or *holdout set*). We can then train our model(s) on the new training set and estimate the performance on the validation set. Unfortunately, validation using a single holdout set can be highly variable and unreliable unless you are working with very large data sets (Molinaro, Simon, and Pfeiffer [2005](#ref-molinaro2005prediction); Hawkins, Basak, and Mills [2003](#ref-hawkins2003assessing)). As the size of your data set reduces, this concern increases. Although we stick to our definitions of test, validation, and holdout sets, these terms are sometimes used interchangeably in other literature and software. What’s important to remember is to always put a portion of the data under lock and key until a final model has been selected (we refer to this as the test data, but others refer to it as the holdout set). **Resampling methods** provide an alternative approach by allowing us to repeatedly fit a model of interest to parts of the training data and test its performance on other parts. The two most commonly used resampling methods include *k\-fold cross validation* and *bootstrapping*. ### 2\.4\.1 *k*\-fold cross validation *k*\-fold cross\-validation (aka *k*\-fold CV) is a resampling method that randomly divides the training data into *k* groups (aka folds) of approximately equal size. The model is fit on \\(k\-1\\) folds and then the remaining fold is used to compute model performance. This procedure is repeated *k* times; each time, a different fold is treated as the validation set. This process results in *k* estimates of the generalization error (say \\(\\epsilon\_1, \\epsilon\_2, \\dots, \\epsilon\_k\\)). Thus, the *k*\-fold CV estimate is computed by averaging the *k* test errors, providing us with an approximation of the error we might expect on unseen data. Figure 2\.4: Illustration of the k\-fold cross validation process. Consequently, with *k*\-fold CV, every observation in the training data will be held out one time to be included in the test set as illustrated in Figure [2\.5](process.html#fig:modeling-process-cv). In practice, one typically uses \\(k \= 5\\) or \\(k \= 10\\). There is no formal rule as to the size of *k*; however, as *k* gets larger, the difference between the estimated performance and the true performance to be seen on the test set will decrease. On the other hand, using too large *k* can introduce computational burdens. Moreover, Molinaro, Simon, and Pfeiffer ([2005](#ref-molinaro2005prediction)) found that \\(k\=10\\) performed similarly to leave\-one\-out cross validation (LOOCV) which is the most extreme approach (i.e., setting \\(k \= n\\)). Figure 2\.5: 10\-fold cross validation on 32 observations. Each observation is used once for validation and nine times for training. 
Although using \\(k \\geq 10\\) helps to minimize the variability in the estimated performance, *k*\-fold CV still tends to have higher variability than bootstrapping (discussed next). Kim ([2009](#ref-kim2009estimating)) showed that repeating *k*\-fold CV can help to increase the precision of the estimated generalization error. Consequently, for smaller data sets (say \\(n \< 10,000\\)), 10\-fold CV repeated 5 or 10 times will improve the accuracy of your estimated performance and also provide an estimate of its variability. Throughout this book we’ll cover multiple ways to incorporate CV as you can often perform CV directly within certain ML functions: ``` # Example using h2o h2o.cv <- h2o.glm( x = x, y = y, training_frame = ames.h2o, nfolds = 10 # perform 10-fold CV ) ``` Or externally as in the below chunk[5](#fn5). When applying it externally to an ML algorithm as below, we’ll need a process to apply the ML model to each resample, which we’ll also cover. ``` vfold_cv(ames, v = 10) ## # 10-fold cross-validation ## # A tibble: 10 x 2 ## splits id ## <named list> <chr> ## 1 <split [2.6K/293]> Fold01 ## 2 <split [2.6K/293]> Fold02 ## 3 <split [2.6K/293]> Fold03 ## 4 <split [2.6K/293]> Fold04 ## 5 <split [2.6K/293]> Fold05 ## 6 <split [2.6K/293]> Fold06 ## 7 <split [2.6K/293]> Fold07 ## 8 <split [2.6K/293]> Fold08 ## 9 <split [2.6K/293]> Fold09 ## 10 <split [2.6K/293]> Fold10 ``` ### 2\.4\.2 Bootstrapping A bootstrap sample is a random sample of the data taken *with replacement* (Efron and Tibshirani [1986](#ref-efron1986bootstrap)). This means that, after a data point is selected for inclusion in the subset, it’s still available for further selection. A bootstrap sample is the same size as the original data set from which it was constructed. Figure [2\.6](process.html#fig:modeling-process-bootstrapscheme) provides a schematic of bootstrap sampling where each bootstrap sample contains 12 observations just as in the original data set. Furthermore, bootstrap sampling will contain approximately the same distribution of values (represented by colors) as the original data set. Figure 2\.6: Illustration of the bootstrapping process. Since samples are drawn with replacement, each bootstrap sample is likely to contain duplicate values. In fact, on average, \\(\\approx 63\.21\\)% of the original sample ends up in any particular bootstrap sample. The original observations not contained in a particular bootstrap sample are considered *out\-of\-bag* (OOB). When bootstrapping, a model can be built on the selected samples and validated on the OOB samples; this is often done, for example, in random forests (see Chapter [11](random-forest.html#random-forest)). Since observations are replicated in bootstrapping, there tends to be less variability in the error measure compared with *k*\-fold CV (Efron [1983](#ref-efron1983estimating)). However, this can also increase the bias of your error estimate. This can be problematic with smaller data sets; however, for most average\-to\-large data sets (say \\(n \\geq 1,000\\)) this concern is often negligible. Figure [2\.7](process.html#fig:modeling-process-sampling-comparison) compares bootstrapping to 10\-fold CV on a small data set with \\(n \= 32\\) observations. A thorough introduction to the bootstrap and its use in R is provided in Davison, Hinkley, and others ([1997](#ref-davison1997bootstrap)). Figure 2\.7: Bootstrap sampling (left) versus 10\-fold cross validation (right) on 32 observations. 
For bootstrap sampling, the observations that have zero replications (white) are the out\-of\-bag observations used for validation. We can create bootstrap samples easily with `rsample::bootstraps()`, as illustrated in the code chunk below. ``` bootstraps(ames, times = 10) ## # Bootstrap sampling ## # A tibble: 10 x 2 ## splits id ## <list> <chr> ## 1 <split [2.9K/1.1K]> Bootstrap01 ## 2 <split [2.9K/1.1K]> Bootstrap02 ## 3 <split [2.9K/1.1K]> Bootstrap03 ## 4 <split [2.9K/1K]> Bootstrap04 ## 5 <split [2.9K/1.1K]> Bootstrap05 ## 6 <split [2.9K/1.1K]> Bootstrap06 ## 7 <split [2.9K/1.1K]> Bootstrap07 ## 8 <split [2.9K/1.1K]> Bootstrap08 ## 9 <split [2.9K/1.1K]> Bootstrap09 ## 10 <split [2.9K/1K]> Bootstrap10 ``` Bootstrapping is, typically, more of an internal resampling procedure that is naturally built into certain ML algorithms. This will become more apparent in Chapters [10](bagging.html#bagging)–[11](random-forest.html#random-forest) where we discuss bagging and random forests, respectively. ### 2\.4\.3 Alternatives It is important to note that there are other useful resampling procedures. If you’re working with time\-series specific data then you will want to incorporate rolling origin and other time series resampling procedures. Hyndman and Athanasopoulos ([2018](#ref-hyndman2018forecasting)) is the dominant, R\-focused, time series resource[6](#fn6). Additionally, Efron ([1983](#ref-efron1983estimating)) developed the “632 method” and Efron and Tibshirani ([1997](#ref-efron1997improvements)) discuss the “632\+ method”; both approaches seek to minimize biases experienced with bootstrapping on smaller data sets and are available via **caret** (see `?caret::trainControl` for details).
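The \\(\\approx 63\.21\\)% in\-bag proportion quoted earlier, which also gives the “632” methods their name, is the limit of \\(1 \- (1 \- 1/n)^n\\), i.e., \\(1 \- e^{\-1}\\). The short simulation below is a sketch to verify this; the sample size and number of replications are arbitrary choices rather than values from the book.

```
# Sketch: proportion of original observations appearing in a bootstrap sample
set.seed(123)
n <- 10000
in_bag <- replicate(100, {
  boot_ids <- sample(1:n, size = n, replace = TRUE)  # bootstrap sample of row indices
  length(unique(boot_ids)) / n                       # share of original rows included
})
mean(in_bag)   # ~0.632; the remaining ~36.8% are the out-of-bag (OOB) observations
1 - exp(-1)    # theoretical limit: 0.6321206
```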
2\.5 Bias variance trade\-off ----------------------------- Prediction errors can be decomposed into two important subcomponents: error due to “bias” and error due to “variance”. There is often a tradeoff between a model’s ability to minimize bias and variance. Understanding how different sources of error lead to bias and variance helps us improve the data fitting process, resulting in more accurate models. ### 2\.5\.1 Bias *Bias* is the difference between the expected (or average) prediction of our model and the correct value which we are trying to predict. It measures how far off in general a model’s predictions are from the correct value, which provides a sense of how well a model can conform to the underlying structure of the data. Figure [2\.8](process.html#fig:modeling-process-bias-model) illustrates an example where the polynomial model does not capture the underlying structure well. Linear models are classical examples of high bias models as they are less flexible and rarely capture non\-linear, non\-monotonic relationships. We also need to think of bias\-variance in relation to resampling. Models with high bias are rarely affected by the noise introduced by resampling. If a model has high bias, it will have consistency in its resampling performance as illustrated by Figure [2\.8](process.html#fig:modeling-process-bias-model). Figure 2\.8: A biased polynomial model fit to a single data set does not capture the underlying non\-linear, non\-monotonic data structure (left). Models fit to 25 bootstrapped replicates of the data are undeterred by the noise and generate similar, yet still biased, predictions (right). ### 2\.5\.2 Variance On the other hand, error due to *variance* is defined as the variability of a model prediction for a given data point.
Many models (e.g., *k*\-nearest neighbor, decision trees, gradient boosting machines) are very adaptable and offer extreme flexibility in the patterns that they can fit. However, these models offer their own problems as they run the risk of overfitting to the training data. Although you may achieve very good performance on your training data, the model will not automatically generalize well to unseen data. Figure 2\.9: A high variance *k*\-nearest neighbor model fit to a single data set captures the underlying non\-linear, non\-monotonic data structure well but also overfits to individual data points (left). Models fit to 25 bootstrapped replicates of the data are deterred by the noise and generate highly variable predictions (right). Since high variance models are more prone to overfitting, using resampling procedures is critical to reduce this risk. Moreover, many algorithms that are capable of achieving high generalization performance have lots of *hyperparameters* that control the level of model complexity (i.e., the tradeoff between bias and variance). ### 2\.5\.3 Hyperparameter tuning Hyperparameters (aka *tuning parameters*) are the “knobs to twiddle”[7](#fn7) to control the complexity of machine learning algorithms and, therefore, the bias\-variance trade\-off. Not all algorithms have hyperparameters (e.g., ordinary least squares[8](#fn8)); however, most have at least one. The proper setting of these hyperparameters is often dependent on the data and problem at hand and cannot always be estimated by the training data alone. Consequently, we need a method of identifying the optimal setting. For example, in the high variance example in the previous section, we illustrated a high variance *k*\-nearest neighbor model (we’ll discuss *k*\-nearest neighbor in Chapter [8](knn.html#knn)). *k*\-nearest neighbor models have a single hyperparameter (*k*) that determines the predicted value to be made based on the *k* nearest observations in the training data to the one being predicted. If *k* is small (e.g., \\(k\=3\\)), the model will make a prediction for a given observation based on the average of the response values for the 3 observations in the training data most similar to the observation being predicted. This often results in highly variable predicted values because we are basing the prediction (in this case, an average) on a very small subset of the training data. As *k* gets bigger, we base our predictions on an average of a larger subset of the training data, which naturally reduces the variance in our predicted values (remember this for later, averaging often helps to reduce variance!). Figure [2\.10](process.html#fig:modeling-process-knn-options) illustrates this point. Smaller *k* values (e.g., 2, 5, or 10\) lead to high variance (but lower bias) and larger values (e.g., 150\) lead to high bias (but lower variance). The optimal *k* value might exist somewhere between 20–50, but how do we know which value of *k* to use? Figure 2\.10: *k*\-nearest neighbor model with differing values for *k*. One way to perform hyperparameter tuning is to fiddle with hyperparameters manually until you find a great combination of hyperparameter values that result in high predictive accuracy (as measured using *k*\-fold CV, for instance). However, this can be very tedious work depending on the number of hyperparameters. An alternative approach is to perform a *grid search*. A grid search is an automated approach to searching across many combinations of hyperparameter values.
For our *k*\-nearest neighbor example, a grid search would predefine a candidate set of values for *k* (e.g., \\(k \= 1, 2, \\dots, j\\)) and perform a resampling method (e.g., *k*\-fold CV) to estimate which *k* value generalizes the best to unseen data. Figure [2\.11](process.html#fig:modeling-process-knn-tune) illustrates the results from a grid search to assess \\(k \= 2, 12, 14, \\dots, 150\\) using repeated 10\-fold CV. The error rate displayed represents the average error for each value of *k* across all the repeated CV folds. On average, \\(k\=46\\) was the optimal hyperparameter value to minimize error (in this case, RMSE, which is discussed in Section [2\.6](process.html#model-eval)) on unseen data. Figure 2\.11: Results from a grid search for a *k*\-nearest neighbor model assessing values for *k* ranging from 2\-150\. We see high error values due to high model variance when *k* is small and we also see high error values due to high model bias when *k* is large. The optimal model is found at *k* \= 46\. Throughout this book you’ll be exposed to different approaches to performing grid searches. In the above example, we used a *full Cartesian grid search*, which assesses every manually defined hyperparameter value. However, as models get more complex and offer more hyperparameters, this approach can become computationally burdensome and requires you to define the optimal hyperparameter grid settings to explore. Additional approaches we’ll illustrate include *random grid searches* (Bergstra and Bengio [2012](#ref-bergstra2012random)), which explore randomly selected hyperparameter values from a range of possible values; *early stopping*, which allows you to stop a grid search once the error stops improving by a marginal amount; *adaptive resampling* via futility analysis (Kuhn [2014](#ref-kuhn2014futility)), which adaptively resamples candidate hyperparameter values based on approximately optimal performance; and more.
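As a rough sketch of how a random grid search differs from the full Cartesian approach, **caret** exposes a `search = "random"` option in `trainControl()`. The grid values, `tuneLength`, and the use of `ames_train` (an object created later in Section 2\.7\) are assumptions made only for illustration, not settings recommended by the book.

```
# Sketch: full Cartesian grid vs. random search with caret (illustrative settings)
library(caret)

# Full Cartesian grid: every listed value of k is assessed
hyper_grid <- expand.grid(k = seq(2, 150, by = 2))

# Random search: caret samples candidate hyperparameter values for you
cv_random <- trainControl(
  method = "repeatedcv", number = 10, repeats = 5,
  search = "random"
)

knn_random <- train(
  Sale_Price ~ ., data = ames_train,   # assumes ames_train from Section 2.7
  method     = "knn",
  trControl  = cv_random,
  tuneLength = 20,                     # number of randomly sampled candidates
  metric     = "RMSE"
)
```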
2\.6 Model evaluation --------------------- Historically, the performance of statistical models was largely based on goodness\-of\-fit tests and assessment of residuals. Unfortunately, misleading conclusions may follow from predictive models that pass these kinds of assessments (Breiman and others [2001](#ref-breiman2001statistical)). Today, it has become widely accepted that a more sound approach to assessing model performance is to assess the predictive accuracy via *loss functions*. Loss functions are metrics that compare the predicted values to the actual value (the output of a loss function is often referred to as the *error* or pseudo *residual*). When performing resampling methods, we assess the predicted values for a validation set compared to the actual target value. For example, in regression, one way to measure error is to take the difference between the actual and predicted value for a given observation (this is the usual definition of a residual in ordinary linear regression). The overall validation error of the model is computed by aggregating the errors across the entire validation data set. There are many loss functions to choose from when assessing the performance of a predictive model, each providing a unique understanding of the predictive accuracy and differing between regression and classification models. Furthermore, the way a loss function is computed will tend to emphasize certain types of errors over others and can lead to drastic differences in how we interpret the “optimal model”.
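To see how the choice of loss function emphasizes different kinds of errors, consider the small hand\-made example below. The residual values are made up purely for illustration; the point is that MAE treats the two error patterns identically while RMSE penalizes the single large miss more heavily.

```
# Two sets of residuals with the same total absolute error but different spread
errors_even    <- c(4, 4, 4, 4)    # four moderate misses
errors_extreme <- c(0, 0, 0, 16)   # one large miss

mae  <- function(e) mean(abs(e))
rmse <- function(e) sqrt(mean(e^2))

c(mae(errors_even), mae(errors_extreme))    # 4 and 4: identical MAE
c(rmse(errors_even), rmse(errors_extreme))  # 4 and 8: RMSE punishes the large miss
```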
It’s important to consider the problem context when identifying the preferred performance metric to use. And when comparing multiple models, we need to compare them across the same metric. ### 2\.6\.1 Regression models * **MSE**: Mean squared error is the average of the squared error (\\(MSE \= \\frac{1}{n} \\sum^n\_{i\=1}(y\_i \- \\hat y\_i)^2\\))[9](#fn9). The squared component results in larger errors having larger penalties. This (along with RMSE) is the most common error metric to use. **Objective: minimize** * **RMSE**: Root mean squared error. This simply takes the square root of the MSE metric (\\(RMSE \= \\sqrt{\\frac{1}{n} \\sum^n\_{i\=1}(y\_i \- \\hat y\_i)^2}\\)) so that your error is in the same units as your response variable. If your response variable units are dollars, the units of MSE are dollars\-squared, but the RMSE will be in dollars. **Objective: minimize** * **Deviance**: Short for mean residual deviance. In essence, it provides a degree to which a model explains the variation in a set of data when using maximum likelihood estimation. Essentially this compares a saturated model (i.e. fully featured model) to an unsaturated model (i.e. intercept only or average). If the response variable distribution is Gaussian, then it will be approximately equal to MSE. When not, it usually gives a more useful estimate of error. Deviance is often used with classification models.[10](#fn10) **Objective: minimize** * **MAE**: Mean absolute error. Similar to MSE but rather than squaring, it just takes the mean absolute difference between the actual and predicted values (\\(MAE \= \\frac{1}{n} \\sum^n\_{i\=1}(\\vert y\_i \- \\hat y\_i \\vert)\\)). This results in less emphasis on larger errors than MSE. **Objective: minimize** * **RMSLE**: Root mean squared logarithmic error. Similar to RMSE but it performs a `log()` on the actual and predicted values prior to computing the difference (\\(RMSLE \= \\sqrt{\\frac{1}{n} \\sum^n\_{i\=1}(log(y\_i \+ 1\) \- log(\\hat y\_i \+ 1\))^2}\\)). When your response variable has a wide range of values, large response values with large errors can dominate the MSE/RMSE metric. RMSLE minimizes this impact so that small response values with large errors can have just as meaningful an impact as large response values with large errors. **Objective: minimize** * **\\(R^2\\)**: This is a popular metric that represents the proportion of the variance in the dependent variable that is predictable from the independent variable(s). Unfortunately, it has several limitations. For example, two models built from two different data sets could have the exact same RMSE but if one has less variability in the response variable then it would have a lower \\(R^2\\) than the other. You should not place too much emphasis on this metric. **Objective: maximize** Most models we assess in this book will report most, if not all, of these metrics. We will emphasize MSE and RMSE but it’s important to realize that certain situations warrant emphasis on some metrics more than others. ### 2\.6\.2 Classification models * **Misclassification**: This is the overall error. For example, say you are predicting 3 classes (*high*, *medium*, *low*) and each class has 25, 30, and 35 observations, respectively (90 observations total). If you misclassify 3 observations of class *high*, 6 of class *medium*, and 4 of class *low*, then you misclassified 13 out of 90 observations resulting in a 14% misclassification rate.
**Objective: minimize** * **Mean per class error**: This is the average error rate for each class. For the above example, this would be the mean of \\(\\frac{3}{25}, \\frac{6}{30}, \\frac{4}{35}\\), which is 14\.5%. If your classes are balanced this will be identical to misclassification. **Objective: minimize** * **MSE**: Mean squared error. Computes the distance from 1\.0 to the probability suggested. So, say we have three classes, A, B, and C, and your model predicts a probability of 0\.91 for A, 0\.07 for B, and 0\.02 for C. If the correct answer was A the \\(MSE \= 0\.09^2 \= 0\.0081\\), if it is B \\(MSE \= 0\.93^2 \= 0\.8649\\), if it is C \\(MSE \= 0\.98^2 \= 0\.9604\\). The squared component results in large differences in probabilities for the true class having larger penalties. **Objective: minimize** * **Cross\-entropy (aka Log Loss or Deviance)**: Similar to MSE but it incorporates a log of the predicted probability multiplied by the true class. Consequently, this metric disproportionately punishes predictions where we predict a small probability for the true class, which is another way of saying having high confidence in the wrong answer is really bad. **Objective: minimize** * **Gini index**: Mainly used with tree\-based methods and commonly referred to as a measure of *purity* where a small value indicates that a node contains predominantly observations from a single class. **Objective: minimize** When applying classification models, we often use a *confusion matrix* to evaluate certain performance measures. A confusion matrix is simply a matrix that compares actual categorical levels (or events) to the predicted categorical levels. When we predict the right level, we refer to this as a *true positive*. However, if we predict a level or event that did not happen, this is called a *false positive* (i.e. we predicted a customer would redeem a coupon and they did not). Alternatively, when we do not predict a level or event and it does happen, this is called a *false negative* (i.e. a customer that we did not predict to redeem a coupon does). Figure 2\.12: Confusion matrix and relationships to terms such as true\-positive and false\-negative. We can extract different levels of performance for binary classifiers. For example, given the classification (or confusion) matrix illustrated in Figure [2\.13](process.html#fig:modeling-process-confusion-matrix2) we can assess the following: * **Accuracy**: Overall, how often is the classifier correct? Opposite of misclassification above. Example: \\(\\frac{TP \+ TN}{total} \= \\frac{100\+50}{165} \= 0\.91\\). **Objective: maximize** * **Precision**: How accurately does the classifier predict events? This metric is concerned with maximizing the true positives to false positive ratio. In other words, for the number of predictions that we made, how many were correct? Example: \\(\\frac{TP}{TP \+ FP} \= \\frac{100}{100\+10} \= 0\.91\\). **Objective: maximize** * **Sensitivity (aka recall)**: How accurately does the classifier classify actual events? This metric is concerned with maximizing the true positives to false negatives ratio. In other words, for the events that occurred, how many did we predict? Example: \\(\\frac{TP}{TP \+ FN} \= \\frac{100}{100\+5} \= 0\.95\\). **Objective: maximize** * **Specificity**: How accurately does the classifier classify actual non\-events? Example: \\(\\frac{TN}{TN \+ FP} \= \\frac{50}{50\+10} \= 0\.83\\). **Objective: maximize** Figure 2\.13: Example confusion matrix. * **AUC**: Area under the curve.
A good binary classifier will have high precision and sensitivity. This means the classifier does well when it predicts an event will and will not occur, which minimizes false positives and false negatives. To capture this balance, we often use a ROC curve that plots the false positive rate along the x\-axis and the true positive rate along the y\-axis. A line that is diagonal from the lower left corner to the upper right corner represents a random guess. The higher the line is in the upper left\-hand corner, the better. AUC computes the area under this curve. **Objective: maximize** Figure 2\.14: ROC curve.
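The cell counts used in the bullets above (TP \= 100, TN \= 50, FP \= 10, FN \= 5\) make these measures easy to reproduce; the sketch below simply mirrors that arithmetic and is not code from the book.

```
# Metrics from the example confusion matrix (TP = 100, TN = 50, FP = 10, FN = 5)
TP <- 100; TN <- 50; FP <- 10; FN <- 5

accuracy    <- (TP + TN) / (TP + TN + FP + FN)  # 0.91
precision   <- TP / (TP + FP)                   # 0.91
sensitivity <- TP / (TP + FN)                   # 0.95
specificity <- TN / (TN + FP)                   # 0.83

c(accuracy = accuracy, precision = precision,
  sensitivity = sensitivity, specificity = specificity)
```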
2\.7 Putting the processes together ----------------------------------- To illustrate how this process works together via R code, let’s do a simple assessment on the `ames` housing data. First, we perform stratified sampling as illustrated in Section [2\.2\.2](process.html#stratified) to break our data into training vs. test data while ensuring we have consistent distributions between the training and test sets. ``` # Stratified sampling with the rsample package set.seed(123) split <- initial_split(ames, prop = 0.7, strata = "Sale_Price") ames_train <- training(split) ames_test <- testing(split) ``` Next, we’re going to apply a *k*\-nearest neighbor regressor to our data. To do so, we’ll use **caret**, which is a meta\-engine to simplify the resampling, grid search, and model application processes. The following defines: 1. **Resampling method**: we use 10\-fold CV repeated 5 times. 2. **Grid search**: we specify the hyperparameter values to assess (\\(k \= 2, 3, 4, \\dots, 25\\)). 3. **Model training \& Validation**: we train a *k*\-nearest neighbor (`method = "knn"`) model using our pre\-specified resampling procedure (`trControl = cv`), grid search (`tuneGrid = hyper_grid`), and preferred loss function (`metric = "RMSE"`). This grid search takes approximately 3\.5 minutes. ``` # Specify resampling strategy cv <- trainControl( method = "repeatedcv", number = 10, repeats = 5 ) # Create grid of hyperparameter values hyper_grid <- expand.grid(k = seq(2, 25, by = 1)) # Tune a knn model using grid search knn_fit <- train( Sale_Price ~ ., data = ames_train, method = "knn", trControl = cv, tuneGrid = hyper_grid, metric = "RMSE" ) ``` Looking at our results, we see that the best model coincided with \\(k\=\\) 7, which resulted in an RMSE of 43439\.07\. This implies that, on average, our model mispredicts the expected sale price of a home by $43,439\. Figure [2\.15](process.html#fig:modeling-process-example-process-assess) illustrates the cross\-validated error rate across the spectrum of hyperparameter values that we specified. ``` # Print and plot the CV results knn_fit ## k-Nearest Neighbors ## ## 2053 samples ## 80 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 1848, 1848, 1848, 1847, 1849, 1847, ...
## Resampling results across tuning parameters: ## ## k RMSE Rsquared MAE ## 2 47844.53 0.6538046 31002.72 ## 3 45875.79 0.6769848 29784.69 ## 4 44529.50 0.6949240 28992.48 ## 5 43944.65 0.7026947 28738.66 ## 6 43645.76 0.7079683 28553.50 ## 7 43439.07 0.7129916 28617.80 ## 8 43658.35 0.7123254 28769.16 ## 9 43799.74 0.7128924 28905.50 ## 10 44058.76 0.7108900 29061.68 ## 11 44304.91 0.7091949 29197.78 ## 12 44565.82 0.7073437 29320.81 ## 13 44798.10 0.7056491 29475.33 ## 14 44966.27 0.7051474 29561.70 ## 15 45188.86 0.7036000 29731.56 ## 16 45376.09 0.7027152 29860.67 ## 17 45557.94 0.7016254 29974.44 ## 18 45666.30 0.7021351 30018.59 ## 19 45836.33 0.7013026 30105.50 ## 20 46044.44 0.6997198 30235.80 ## 21 46242.59 0.6983978 30367.95 ## 22 46441.87 0.6969620 30481.48 ## 23 46651.66 0.6953968 30611.48 ## 24 46788.22 0.6948738 30681.97 ## 25 46980.13 0.6928159 30777.25 ## ## RMSE was used to select the optimal model using the smallest value. ## The final value used for the model was k = 7. ggplot(knn_fit) ``` Figure 2\.15: Results from a grid search for a *k*\-nearest neighbor model on the Ames housing data assessing values for *k* ranging from 2\-25\. The question remains: “Is this the best predictive model we can find?” We may have identified the optimal *k*\-nearest neighbor model for our given data set, but this doesn’t mean we’ve found the best possible overall model. Nor have we considered potential feature and target engineering options. The remainder of this book will walk you through the journey of identifying alternative solutions and, hopefully, a much more optimal model.
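As a final note on the workflow above: once a final model has been selected, the test set we locked away at the start is used to estimate generalization error. A minimal sketch, assuming the `knn_fit` and `ames_test` objects created above, might look like the following; the resulting test RMSE is not reported in this excerpt, so no value is quoted here.

```
# Sketch: estimate generalization error of the final model on the untouched test set
test_pred <- predict(knn_fit, newdata = ames_test)

# Test RMSE via caret::RMSE() or computed manually
caret::RMSE(test_pred, ames_test$Sale_Price)
sqrt(mean((ames_test$Sale_Price - test_pred)^2))
```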
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/engineering.html
Chapter 3 Feature \& Target Engineering ======================================= Data preprocessing and engineering techniques generally refer to the addition, deletion, or transformation of data. The time spent on identifying data engineering needs can be significant and requires you to spend substantial time understanding your data…or as Leo Breiman said “live with your data before you plunge into modeling” (Breiman and others [2001](#ref-breiman2001statistical), 201\). Although this book primarily focuses on applying machine learning algorithms, feature engineering can make or break an algorithm’s predictive ability and deserves your continued focus and education. We will not cover all the potential ways of implementing feature engineering; however, we’ll cover several fundamental preprocessing tasks that can potentially significantly improve modeling performance. Moreover, different models have different sensitivities to the type of target and feature values in the model and we will try to highlight some of these concerns. For more in depth coverage of feature engineering, please refer to Kuhn and Johnson ([2019](#ref-kuhn2019feature)) and Zheng and Casari ([2018](#ref-zheng2018feature)). 3\.1 Prerequisites ------------------ This chapter leverages the following packages: ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for awesome graphics library(visdat) # for additional visualizations # Feature engineering packages library(caret) # for various ML tasks library(recipes) # for feature engineering tasks ``` We’ll also continue working with the `ames_train` data set created in Section [2\.7](process.html#put-process-together): 3\.2 Target engineering ----------------------- Although not always a requirement, transforming the response variable can lead to predictive improvement, especially with parametric models (which require that certain assumptions about the model be met). For instance, ordinary linear regression models assume that the prediction errors (and hence the response) are normally distributed. This is usually fine, except when the prediction target has heavy tails (i.e., *outliers*) or is skewed in one direction or the other. In these cases, the normality assumption likely does not hold. For example, as we saw in the data splitting section ([2\.2](process.html#splitting)), the response variable for the Ames housing data (`Sale_Price`) is right (or positively) skewed as illustrated in Figure [3\.1](engineering.html#fig:engineering-skewed-residuals) (ranging from $12,789 to $755,000\). A simple linear model, say \\(\\text{Sale\_Price}\=\\beta\_{0} \+ \\beta\_{1} \\text{Year\_Built} \+ \\epsilon\\), often assumes the error term \\(\\epsilon\\) (and hence `Sale_Price`) is normally distributed; fortunately, a simple log (or similar) transformation of the response can often help alleviate this concern as Figure [3\.1](engineering.html#fig:engineering-skewed-residuals) illustrates. Figure 3\.1: Transforming the response variable to minimize skewness can resolve concerns with non\-normally distributed errors. Furthermore, using a log (or other) transformation to minimize the response skewness can be used for shaping the business problem as well. 
For example, in the House Prices: Advanced Regression Techniques Kaggle competition[11](#fn11), which used the Ames housing data, the competition focused on using a log transformed Sale Price response because “…taking logs means that errors in predicting expensive houses and cheap houses will affect the result equally.” This would be an alternative to using the root mean squared logarithmic error (RMSLE) loss function as discussed in Section [2\.6](process.html#model-eval). There are two main approaches to help correct for positively skewed target variables: **Option 1**: normalize with a log transformation. This will transform most right skewed distributions to be approximately normal. One way to do this is to simply log transform the training and test set in a manual, single step manner similar to: ``` transformed_response <- log(ames_train$Sale_Price) ``` However, we should think of the preprocessing as creating a blueprint to be re\-applied strategically. For this, you can use the **recipe** package or something similar (e.g., `caret::preProcess()`). This will not return the actual log transformed values but, rather, a blueprint to be applied later. ``` # log transformation ames_recipe <- recipe(Sale_Price ~ ., data = ames_train) %>% step_log(all_outcomes()) ames_recipe ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ``` If your response has negative values or zeros then a log transformation will produce `NaN`s and `-Inf`s, respectively (you cannot take the logarithm of a negative number). If the nonpositive response values are small (say between \-0\.99 and 0\) then you can apply a small offset such as in `log1p()` which adds 1 to the value prior to applying a log transformation (you can do the same within `step_log()` by using the `offset` argument). If your data consists of values \\(\\le \-1\\), use the Yeo\-Johnson transformation mentioned next. ``` log(-0.5) ## [1] NaN log1p(-0.5) ## [1] -0.6931472 ``` **Option 2**: use a *Box Cox transformation*. A Box Cox transformation is more flexible than (but also includes as a special case) the log transformation and will find an appropriate transformation from a family of power transforms that will transform the variable as close as possible to a normal distribution (Box and Cox [1964](#ref-box1964analysis); Carroll and Ruppert [1981](#ref-carroll1981prediction)). At the core of the Box Cox transformation is an exponent, lambda (\\(\\lambda\\)), which varies from \-5 to 5\. All values of \\(\\lambda\\) are considered and the optimal value for the given data is estimated from the training data; The “optimal value” is the one which results in the best transformation to an approximate normal distribution. The transformation of the response \\(Y\\) has the form: \\\[ \\begin{equation} y(\\lambda) \= \\begin{cases} \\frac{Y^\\lambda\-1}{\\lambda}, \& \\text{if}\\ \\lambda \\neq 0 \\\\ \\log\\left(Y\\right), \& \\text{if}\\ \\lambda \= 0\. \\end{cases} \\end{equation} \\] Be sure to compute the `lambda` on the training set and apply that same `lambda` to both the training and test set to minimize *data leakage*. The **recipes** package automates this process for you. If your response has negative values, the Yeo\-Johnson transformation is very similar to the Box\-Cox but does not require the input variables to be strictly positive. To apply, use `step_YeoJohnson()`. 
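As with the log transformation, the Box Cox option can be expressed as a recipe step so that \\(\\lambda\\) is estimated on the training data and re\-applied consistently later. The sketch below parallels the `step_log()` recipe above; transforming the outcome this way is a modeling choice made for illustration, not a requirement.

```
# Sketch: Box Cox transformation of the response as a recipe step
ames_recipe_bc <- recipe(Sale_Price ~ ., data = ames_train) %>%
  step_BoxCox(all_outcomes())

ames_recipe_bc
```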
Figure [3\.2](engineering.html#fig:engineering-distribution-comparison) illustrates that the log transformation and Box Cox transformation both do about equally well in transforming `Sale_Price` to look more normally distributed. Figure 3\.2: Response variable transformations. Note that when you model with a transformed response variable, your predictions will also be on the transformed scale. You will likely want to undo (or re\-transform) your predicted values back to their normal scale so that decision\-makers can more easily interpret the results. This is illustrated in the following code chunk: ``` # Log transform a value y <- log(10) # Undo log-transformation exp(y) ## [1] 10 # Box Cox transform a value y <- forecast::BoxCox(10, lambda) # Inverse Box Cox function inv_box_cox <- function(x, lambda) { # for Box-Cox, lambda = 0 --> log transform if (lambda == 0) exp(x) else (lambda*x + 1)^(1/lambda) } # Undo Box Cox-transformation inv_box_cox(y, lambda) ## [1] 10 ## attr(,"lambda") ## [1] -0.03616899 ``` 3\.3 Dealing with missingness ----------------------------- Data quality is an important issue for any project involving analyzing data. Data quality issues deserve an entire book in their own right, and a good reference is The Quartz guide to bad data.[12](#fn12) One of the most common data quality concerns you will run into is missing values. Data can be missing for many different reasons; however, these reasons are usually lumped into two categories: *informative missingness* (Kuhn and Johnson [2013](#ref-apm)) and *missingness at random* (Little and Rubin [2014](#ref-little2014statistical)). Informative missingness implies a structural cause for the missing value that can provide insight in its own right; whether this be deficiencies in how the data was collected or abnormalities in the observational environment. Missingness at random implies that missing values occur independently of the data collection process[13](#fn13). The category that drives missing values will determine how you handle them. For example, we may give values that are driven by informative missingness their own category (e.g., `"None"`) as their unique value may affect predictive performance, whereas values that are missing at random may deserve deletion[14](#fn14) or imputation. Furthermore, different machine learning models handle missingness differently. Most algorithms cannot handle missingness (e.g., generalized linear models and their cousins, neural networks, and support vector machines) and, therefore, require them to be dealt with beforehand. A few models (mainly tree\-based) have built\-in procedures to deal with missing values. However, since the modeling process involves comparing and contrasting multiple models to identify the optimal one, you will want to handle missing values prior to applying any models so that your algorithms are based on the same data quality assumptions. ### 3\.3\.1 Visualizing missing values It is important to understand the distribution of missing values (i.e., `NA`) in any data set. So far, we have been using a pre\-processed version of the Ames housing data set (via the `AmesHousing::make_ames()` function). However, if we use the raw Ames housing data (via `AmesHousing::ames_raw`), there are actually 13,997 missing values—there is at least one missing value in each row of the original data!
``` sum(is.na(AmesHousing::ames_raw)) ## [1] 13997 ``` It is important to understand the distribution of missing values in a data set in order to determine the best approach for preprocessing. Heat maps are an efficient way to visualize the distribution of missing values for small\- to medium\-sized data sets. The code `is.na(<data-frame-name>)` will return a matrix of the same dimension as the given data frame, but each cell will contain either `TRUE` (if the corresponding value is missing) or `FALSE` (if the corresponding value is not missing). To construct such a plot, we can use R’s built\-in `heatmap()` or `image()` functions, or **ggplot2**’s `geom_raster()` function, among others; Figure [3\.3](engineering.html#fig:engineering-heat-map-missingness) illustrates `geom_raster()`. This allows us to easily see where the majority of missing values occur (i.e., in the variables `Alley`, `Fireplace Qual`, `Pool QC`, `Fence`, and `Misc Feature`). Due to their high frequency of missingness, these variables would likely need to be removed prior to statistical analysis, or imputed. We can also spot obvious patterns of missingness. For example, missing values appear to occur within the same observations across all garage variables. ``` AmesHousing::ames_raw %>% is.na() %>% reshape2::melt() %>% ggplot(aes(Var2, Var1, fill=value)) + geom_raster() + coord_flip() + scale_y_continuous(NULL, expand = c(0, 0)) + scale_fill_grey(name = "", labels = c("Present", "Missing")) + xlab("Observation") + theme(axis.text.y = element_text(size = 4)) ``` Figure 3\.3: Heat map of missing values in the raw Ames housing data. Digging a little deeper into these variables, we might notice that `Garage_Cars` and `Garage_Area` contain the value `0` whenever the other `Garage_xx` variables have missing values (i.e. a value of `NA`). This might be because they did not have a way to identify houses with no garages when the data were originally collected, and therefore, all houses with no garage were identified by including nothing. Since this missingness is informative, it would be appropriate to impute `NA` with a new category level (e.g., `"None"`) for these garage variables. Circumstances like this tend to only become apparent upon careful descriptive and visual examination of the data! ``` AmesHousing::ames_raw %>% filter(is.na(`Garage Type`)) %>% select(`Garage Type`, `Garage Cars`, `Garage Area`) ## # A tibble: 157 x 3 ## `Garage Type` `Garage Cars` `Garage Area` ## <chr> <int> <int> ## 1 <NA> 0 0 ## 2 <NA> 0 0 ## 3 <NA> 0 0 ## 4 <NA> 0 0 ## 5 <NA> 0 0 ## 6 <NA> 0 0 ## 7 <NA> 0 0 ## 8 <NA> 0 0 ## 9 <NA> 0 0 ## 10 <NA> 0 0 ## # … with 147 more rows ``` The `vis_miss()` function in R package `visdat` (Tierney [2019](#ref-R-visdat)) also allows for easy visualization of missing data patterns (with sorting and clustering options). We illustrate this functionality below using the raw Ames housing data (Figure [3\.4](engineering.html#fig:engineering-missingness-visna)). The columns of the heat map represent the 82 variables of the raw data and the rows represent the observations. Missing values (i.e., `NA`) are indicated via a black cell. The variables and `NA` patterns have been clustered by rows (i.e., `cluster = TRUE`). ``` vis_miss(AmesHousing::ames_raw, cluster = TRUE) ``` Figure 3\.4: Visualizing missing data patterns in the raw Ames housing data. Data can be missing for different reasons. 
Perhaps the values were never recorded (or lost in translation), or they were recorded in error (a common feature of data entered by hand). Regardless, it is important to identify and attempt to understand how missing values are distributed across a data set, as this can provide insight into how to deal with these observations.

### 3\.3\.2 Imputation

*Imputation* is the process of replacing a missing value with a substituted, “best guess” value. Imputation should be one of the first feature engineering steps you take as it will affect any downstream preprocessing[15](#fn15).

#### 3\.3\.2\.1 Estimated statistic

An elementary approach to imputing missing values for a feature is to compute descriptive statistics such as the mean, median, or mode (for categorical features) and use that value to replace `NA`s. Although computationally efficient, this approach does not consider any other attributes for a given observation when imputing (e.g., a female patient who is 63 inches tall may have her weight imputed as 175 lbs since that is the average weight across all observations, which contain 65% males averaging 70 inches in height).

An alternative is to use grouped statistics to capture expected values for observations that fall into similar groups. However, this becomes infeasible for larger data sets. Model\-based imputation can automate this process for you; the two most common methods are K\-nearest neighbor and tree\-based imputation, which are discussed next. However, it is important to remember that imputation should be performed **within the resampling process** and, as your data set gets larger, repeated model\-based imputation can compound the computational demands. Thus, you must weigh the pros and cons of the two approaches.

The following would build onto our `ames_recipe` and impute all missing values for the `Gr_Liv_Area` variable with the median value:

```
ames_recipe %>%
  step_medianimpute(Gr_Liv_Area)
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Operations:
## 
## Log transformation on all_outcomes
## Median Imputation for Gr_Liv_Area
```

Use `step_modeimpute()` to impute categorical features with the most common value.

#### 3\.3\.2\.2 *K*\-nearest neighbor

*K*\-nearest neighbor (KNN) imputes values by identifying observations with missing values, then identifying other observations that are most similar based on the other available features, and using the values from these nearest neighbor observations to impute missing values. We discuss KNN for predictive modeling in Chapter [8](knn.html#knn); the imputation application works in a similar manner. In KNN imputation, the missing value for a given observation is treated as the targeted response and is predicted based on the average (for quantitative values) or the mode (for qualitative values) of the *k* nearest neighbors.

As discussed in Chapter [8](knn.html#knn), if all features are quantitative then standard Euclidean distance is commonly used as the distance metric to identify the *k* neighbors, and when there is a mixture of quantitative and qualitative features then Gower’s distance (Gower [1971](#ref-gower1971general)) can be used. KNN imputation is best used on small to moderate sized data sets as it becomes computationally burdensome with larger data sets (Kuhn and Johnson [2019](#ref-kuhn2019feature)). As we saw in Section 2\.7, *k* is a tunable hyperparameter. Suggested values for imputation are 5–10 (Kuhn and Johnson [2019](#ref-kuhn2019feature)).
By default, `step_knnimpute()` will use 5 but can be adjusted with the `neighbors` argument.

```
ames_recipe %>%
  step_knnimpute(all_predictors(), neighbors = 6)
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Operations:
## 
## Log transformation on all_outcomes
## K-nearest neighbor imputation for all_predictors
```

#### 3\.3\.2\.3 Tree\-based

As previously discussed, several implementations of decision trees (Chapter [9](DT.html#DT)) and their derivatives can be constructed in the presence of missing values. Thus, they provide a good alternative for imputation. As discussed in Chapters [9](DT.html#DT)\-[11](random-forest.html#random-forest), single trees have high variance but aggregating across many trees creates a robust, low variance predictor. Random forest imputation procedures have been studied (Shah et al. [2014](#ref-shah2014comparison); Stekhoven [2015](#ref-stekhoven2015missforest)); however, they impose significant computational demands within a resampling environment (Kuhn and Johnson [2019](#ref-kuhn2019feature)). Bagged trees (Chapter [10](bagging.html#bagging)) offer a compromise between predictive accuracy and computational burden. Similar to KNN imputation, observations with missing values are identified and the feature containing the missing value is treated as the target and predicted using bagged decision trees.

```
ames_recipe %>%
  step_bagimpute(all_predictors())
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Operations:
## 
## Log transformation on all_outcomes
## Bagged tree imputation for all_predictors
```

Figure [3\.5](engineering.html#fig:engineering-imputation-examples) illustrates the differences between mean, KNN, and tree\-based imputation on the raw Ames housing data. It is apparent how descriptive statistic methods (e.g., using the mean and median) are inferior to the KNN and tree\-based imputation methods.

Figure 3\.5: Comparison of three different imputation methods. The red points represent actual values which were removed and made missing and the blue points represent the imputed values.

Estimated statistic imputation methods (i.e., mean and median) merely predict the same value for each observation and can reduce the signal between a feature and the response, whereas KNN and tree\-based procedures tend to maintain the feature distribution and relationship.

3\.4 Feature filtering
----------------------

In many data analyses and modeling projects we end up with hundreds or even thousands of collected features. From a practical perspective, a model with more features often becomes harder to interpret and is costly to compute. Some models are more resistant to non\-informative predictors (e.g., the Lasso and tree\-based methods) than others as illustrated in Figure [3\.6](engineering.html#fig:engineering-accuracy-comparison).[16](#fn16)

Figure 3\.6: Test set RMSE profiles when non\-informative predictors are added.

Although the performance of some of our models is not significantly affected by non\-informative predictors, the time to train these models can be negatively impacted as more features are added. Figure [3\.7](engineering.html#fig:engineering-impact-on-time) shows the increase in time to perform 10\-fold CV on the exemplar data, which consists of 10,000 observations. We see that many algorithms (e.g., elastic nets, random forests, and gradient boosting machines) become extremely time intensive the more predictors we add.
Consequently, filtering or reducing features prior to modeling may significantly speed up training time.

Figure 3\.7: Impact on model training time as non\-informative predictors are added.

Zero and near\-zero variance variables are low\-hanging fruit to eliminate. Zero variance variables, meaning features that contain only a single unique value, provide no useful information to a model. Some algorithms are unaffected by zero variance features. However, features that have near\-zero variance also offer very little, if any, information to a model. Furthermore, they can cause problems during resampling as there is a high probability that a given sample will only contain a single unique value (the dominant value) for that feature. A rule of thumb for detecting near\-zero variance features is:

* The fraction of unique values over the sample size is low (say \\(\\leq 10\\)%).
* The ratio of the frequency of the most prevalent value to the frequency of the second most prevalent value is large (say \\(\\geq 20\\)).

If both of these criteria are true then it is often advantageous to remove the variable from the model. For the Ames data, we do not have any zero variance predictors but there are 20 features that meet the near\-zero threshold.

```
caret::nearZeroVar(ames_train, saveMetrics = TRUE) %>% 
  tibble::rownames_to_column() %>% 
  filter(nzv)
##               rowname  freqRatio percentUnique zeroVar  nzv
## 1              Street  292.28571    0.09741841   FALSE TRUE
## 2               Alley   21.76136    0.14612762   FALSE TRUE
## 3        Land_Contour   21.78824    0.19483682   FALSE TRUE
## 4           Utilities 1025.00000    0.14612762   FALSE TRUE
## 5          Land_Slope   23.33333    0.14612762   FALSE TRUE
## 6         Condition_2  225.77778    0.34096444   FALSE TRUE
## 7           Roof_Matl  126.50000    0.24354603   FALSE TRUE
## 8           Bsmt_Cond   19.88043    0.29225524   FALSE TRUE
## 9      BsmtFin_Type_2   22.35897    0.34096444   FALSE TRUE
## 10            Heating   96.23810    0.24354603   FALSE TRUE
## 11    Low_Qual_Fin_SF 1012.00000    1.31514856   FALSE TRUE
## 12      Kitchen_AbvGr   23.10588    0.19483682   FALSE TRUE
## 13         Functional   38.95918    0.34096444   FALSE TRUE
## 14     Enclosed_Porch  107.68750    7.25767170   FALSE TRUE
## 15 Three_season_porch  675.00000    1.12031174   FALSE TRUE
## 16       Screen_Porch  234.87500    4.52995616   FALSE TRUE
## 17          Pool_Area 2045.00000    0.43838285   FALSE TRUE
## 18            Pool_QC  681.66667    0.24354603   FALSE TRUE
## 19       Misc_Feature   31.00000    0.19483682   FALSE TRUE
## 20           Misc_Val  152.76923    1.36385777   FALSE TRUE
```

We can add `step_zv()` and `step_nzv()` to our `ames_recipe` to remove zero or near\-zero variance features. Other feature filtering methods exist; see Saeys, Inza, and Larrañaga ([2007](#ref-saeys2007review)) for a thorough review. Furthermore, several wrapper methods exist that evaluate multiple models using procedures that add or remove predictors to find the optimal combination of features that maximizes model performance (see, for example, Kursa, Rudnicki, and others ([2010](#ref-kursa2010feature)), Granitto et al. ([2006](#ref-granitto2006recursive)), Maldonado and Weber ([2009](#ref-maldonado2009wrapper))). However, this topic is beyond the scope of this book.

3\.5 Numeric feature engineering
--------------------------------

Numeric features can create a host of problems for certain models when their distributions are skewed, contain outliers, or have a wide range in magnitudes. Tree\-based models are quite immune to these types of problems in the feature space, but many other models (e.g., GLMs, regularized regression, KNN, support vector machines, neural networks) can be greatly hampered by these issues.
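For instance, a quick way to see whether skewness is even a concern is to compute a rough moment\-based skewness estimate for a few numeric columns. The helper below is a minimal base R sketch (it is not part of **recipes** or any other package used in this chapter), and `Gr_Liv_Area` and `Lot_Area` are simply two illustrative features from `ames_train`:

```
# Rough moment-based skewness estimate (assumes no missing values)
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3

# Right-skewed features return values well above zero
sapply(ames_train[, c("Gr_Liv_Area", "Lot_Area")], skew)
```

Values far from zero flag features worth a closer look.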
Normalizing and standardizing heavily skewed features can help minimize these concerns. ### 3\.5\.1 Skewness Similar to the process discussed to normalize target variables, parametric models that have distributional assumptions (e.g., GLMs, and regularized models) can benefit from minimizing the skewness of numeric features. When normalizing many variables, it’s best to use the Box\-Cox (when feature values are strictly positive) or Yeo\-Johnson (when feature values are not strictly positive) procedures as these methods will identify if a transformation is required and what the optimal transformation will be. Non\-parametric models are rarely affected by skewed features; however, normalizing features will not have a negative effect on these models’ performance. For example, normalizing features will only shift the optimal split points in tree\-based algorithms. Consequently, when in doubt, normalize. ``` # Normalize all numeric columns recipe(Sale_Price ~ ., data = ames_train) %>% step_YeoJohnson(all_numeric()) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Yeo-Johnson transformation on all_numeric ``` ### 3\.5\.2 Standardization We must also consider the scale on which the individual features are measured. What are the largest and smallest values across all features and do they span several orders of magnitude? Models that incorporate smooth functions of input features are sensitive to the scale of the inputs. For example, \\(5X\+2\\) is a simple linear function of the input *X*, and the scale of its output depends directly on the scale of the input. Many algorithms use linear functions within their algorithms, some more obvious (e.g., GLMs and regularized regression) than others (e.g., neural networks, support vector machines, and principal components analysis). Other examples include algorithms that use distance measures such as the Euclidean distance (e.g., *k* nearest neighbor, *k*\-means clustering, and hierarchical clustering). For these models and modeling components, it is often a good idea to *standardize* the features. Standardizing features includes *centering* and *scaling* so that numeric variables have zero mean and unit variance, which provides a common comparable unit of measure across all the variables. Figure 3\.8: Standardizing features allows all features to be compared on a common value scale regardless of their real value differences. Some packages (e.g., **glmnet**, and **caret**) have built\-in options to standardize and some do not (e.g., **keras** for neural networks). However, you should standardize your variables within the recipe blueprint so that both training and test data standardization are based on the same mean and variance. This helps to minimize data leakage. ``` ames_recipe %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ## Centering for all_numeric, -, all_outcomes() ## Scaling for all_numeric, -, all_outcomes() ``` 3\.6 Categorical feature engineering ------------------------------------ Most models require that the predictors take numeric form. There are exceptions; for example, tree\-based models naturally handle numeric or categorical features. However, even tree\-based models can benefit from preprocessing categorical features. 
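Before deciding on an encoding, it can also help to see how many levels each categorical feature actually has, since high\-cardinality features are the ones most affected by the choices below. A minimal sketch (using only **dplyr** and base R with the factor\-typed `ames_train` used throughout this chapter):

```
# Number of levels in each nominal (factor) feature, largest first
ames_train %>%
  select_if(is.factor) %>%
  sapply(nlevels) %>%
  sort(decreasing = TRUE) %>%
  head()
```

Features with dozens of levels (e.g., `Neighborhood`) are prime candidates for lumping, while one\-hot encoding them would add many sparse columns.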
The following sections will discuss a few of the more common approaches to engineer categorical features. ### 3\.6\.1 Lumping Sometimes features will contain levels that have very few observations. For example, there are 28 unique neighborhoods represented in the Ames housing data but several of them only have a few observations. ``` count(ames_train, Neighborhood) %>% arrange(n) ## # A tibble: 28 x 2 ## Neighborhood n ## <fct> <int> ## 1 Landmark 1 ## 2 Green_Hills 2 ## 3 Greens 7 ## 4 Blueste 9 ## 5 Northpark_Villa 17 ## 6 Briardale 18 ## 7 Veenker 20 ## 8 Bloomington_Heights 21 ## 9 South_and_West_of_Iowa_State_University 30 ## 10 Meadow_Village 30 ## # … with 18 more rows ``` Even numeric features can have similar distributions. For example, `Screen_Porch` has 92% values recorded as zero (zero square footage meaning no screen porch) and the remaining 8% have unique dispersed values. ``` count(ames_train, Screen_Porch) %>% arrange(n) ## # A tibble: 93 x 2 ## Screen_Porch n ## <int> <int> ## 1 40 1 ## 2 80 1 ## 3 92 1 ## 4 94 1 ## 5 99 1 ## 6 104 1 ## 7 109 1 ## 8 110 1 ## 9 111 1 ## 10 117 1 ## # … with 83 more rows ``` Sometimes we can benefit from collapsing, or “lumping” these into a lesser number of categories. In the above examples, we may want to collapse all levels that are observed in less than 10% of the training sample into an “other” category. We can use `step_other()` to do so. However, lumping should be used sparingly as there is often a loss in model performance (Kuhn and Johnson [2013](#ref-apm)). Tree\-based models often perform exceptionally well with high cardinality features and are not as impacted by levels with small representation. ``` # Lump levels for two features lumping <- recipe(Sale_Price ~ ., data = ames_train) %>% step_other(Neighborhood, threshold = 0.01, other = "other") %>% step_other(Screen_Porch, threshold = 0.1, other = ">0") # Apply this blue print --> you will learn about this at # the end of the chapter apply_2_training <- prep(lumping, training = ames_train) %>% bake(ames_train) # New distribution of Neighborhood count(apply_2_training, Neighborhood) %>% arrange(n) ## # A tibble: 22 x 2 ## Neighborhood n ## <fct> <int> ## 1 Bloomington_Heights 21 ## 2 South_and_West_of_Iowa_State_University 30 ## 3 Meadow_Village 30 ## 4 Clear_Creek 31 ## 5 Stone_Brook 34 ## 6 Northridge 48 ## 7 Timberland 55 ## 8 Iowa_DOT_and_Rail_Road 62 ## 9 Crawford 72 ## 10 Mitchell 74 ## # … with 12 more rows # New distribution of Screen_Porch count(apply_2_training, Screen_Porch) %>% arrange(n) ## # A tibble: 2 x 2 ## Screen_Porch n ## <fct> <int> ## 1 >0 174 ## 2 0 1879 ``` ### 3\.6\.2 One\-hot \& dummy encoding Many models require that all predictor variables be numeric. Consequently, we need to intelligently transform any categorical variables into numeric representations so that these algorithms can compute. Some packages automate this process (e.g., **h2o** and **caret**) while others do not (e.g., **glmnet** and **keras**). There are many ways to recode categorical variables as numeric (e.g., one\-hot, ordinal, binary, sum, and Helmert). The most common is referred to as one\-hot encoding, where we transpose our categorical variables so that each level of the feature is represented as a boolean value. For example, one\-hot encoding the left data frame in Figure [3\.9](engineering.html#fig:engineering-one-hot) results in `X` being converted into three columns, one for each level. This is called less than *full rank* encoding . 
However, this creates perfect collinearity which causes problems with some predictive modeling algorithms (e.g., ordinary linear regression and neural networks). Alternatively, we can create a full\-rank encoding by dropping one of the levels (level `c` has been dropped). This is referred to as *dummy* encoding.

Figure 3\.9: Eight observations containing a categorical feature X and the difference in how one\-hot and dummy encoding transforms this feature.

We can one\-hot or dummy encode with the same function (`step_dummy()`). By default, `step_dummy()` will create a full rank encoding but you can change this by setting `one_hot = TRUE`.

```
# One-hot encode all nominal features
recipe(Sale_Price ~ ., data = ames_train) %>%
  step_dummy(all_nominal(), one_hot = TRUE)
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Operations:
## 
## Dummy variables from all_nominal
```

Since one\-hot encoding adds new features it can significantly increase the dimensionality of our data. If you have a data set with many categorical variables and those categorical variables in turn have many unique levels, the number of features can explode. In these cases you may want to explore label/ordinal encoding or some other alternative.

### 3\.6\.3 Label encoding

*Label encoding* is a pure numeric conversion of the levels of a categorical variable. If a categorical variable is a factor and it has pre\-specified levels then the numeric conversion will be in level order. If no levels are specified, the encoding will be based on alphabetical order. For example, the `MS_SubClass` variable has 16 levels, which we can recode numerically with `step_integer()`.

```
# Original categories
count(ames_train, MS_SubClass)
## # A tibble: 16 x 2
##    MS_SubClass                                   n
##    <fct>                                     <int>
##  1 One_Story_1946_and_Newer_All_Styles         749
##  2 One_Story_1945_and_Older                     93
##  3 One_Story_with_Finished_Attic_All_Ages        5
##  4 One_and_Half_Story_Unfinished_All_Ages       11
##  5 One_and_Half_Story_Finished_All_Ages        207
##  6 Two_Story_1946_and_Newer                    394
##  7 Two_Story_1945_and_Older                     98
##  8 Two_and_Half_Story_All_Ages                  17
##  9 Split_or_Multilevel                          78
## 10 Split_Foyer                                  31
## 11 Duplex_All_Styles_and_Ages                   69
## 12 One_Story_PUD_1946_and_Newer                144
## 13 One_and_Half_Story_PUD_All_Ages               1
## 14 Two_Story_PUD_1946_and_Newer                 98
## 15 PUD_Multilevel_Split_Level_Foyer              14
## 16 Two_Family_conversion_All_Styles_and_Ages    44

# Label encoded
recipe(Sale_Price ~ ., data = ames_train) %>%
  step_integer(MS_SubClass) %>%
  prep(ames_train) %>%
  bake(ames_train) %>%
  count(MS_SubClass)
## # A tibble: 16 x 2
##    MS_SubClass     n
##          <dbl> <int>
##  1           1   749
##  2           2    93
##  3           3     5
##  4           4    11
##  5           5   207
##  6           6   394
##  7           7    98
##  8           8    17
##  9           9    78
## 10          10    31
## 11          11    69
## 12          12   144
## 13          13     1
## 14          14    98
## 15          15    14
## 16          16    44
```

We should be careful with label encoding unordered categorical features because most models will treat them as ordered numeric features. If a categorical feature is naturally ordered then label encoding is a natural choice (most commonly referred to as ordinal encoding). For example, the various quality features in the Ames housing data are ordinal in nature (ranging from `Very_Poor` to `Very_Excellent`).
``` ames_train %>% select(contains("Qual")) ## # A tibble: 2,053 x 6 ## Overall_Qual Exter_Qual Bsmt_Qual Low_Qual_Fin_SF Kitchen_Qual ## <fct> <fct> <fct> <int> <fct> ## 1 Above_Avera… Typical Typical 0 Typical ## 2 Average Typical Typical 0 Typical ## 3 Above_Avera… Typical Typical 0 Good ## 4 Above_Avera… Typical Typical 0 Good ## 5 Very_Good Good Good 0 Good ## 6 Very_Good Good Good 0 Good ## 7 Good Typical Typical 0 Good ## 8 Above_Avera… Typical Good 0 Typical ## 9 Above_Avera… Typical Good 0 Typical ## 10 Good Typical Good 0 Good ## # … with 2,043 more rows, and 1 more variable: Garage_Qual <fct> ``` Ordinal encoding these features provides a natural and intuitive interpretation and can logically be applied to all models. The various `xxx_Qual` features in the Ames housing are not ordered factors. For ordered factors you could also use `step_ordinalscore()`. ``` # Original categories count(ames_train, Overall_Qual) ## # A tibble: 10 x 2 ## Overall_Qual n ## <fct> <int> ## 1 Very_Poor 4 ## 2 Poor 9 ## 3 Fair 27 ## 4 Below_Average 166 ## 5 Average 565 ## 6 Above_Average 513 ## 7 Good 438 ## 8 Very_Good 231 ## 9 Excellent 77 ## 10 Very_Excellent 23 # Label encoded recipe(Sale_Price ~ ., data = ames_train) %>% step_integer(Overall_Qual) %>% prep(ames_train) %>% bake(ames_train) %>% count(Overall_Qual) ## # A tibble: 10 x 2 ## Overall_Qual n ## <dbl> <int> ## 1 1 4 ## 2 2 9 ## 3 3 27 ## 4 4 166 ## 5 5 565 ## 6 6 513 ## 7 7 438 ## 8 8 231 ## 9 9 77 ## 10 10 23 ``` ### 3\.6\.4 Alternatives There are several alternative categorical encodings that are implemented in various R machine learning engines and are worth exploring. For example, target encoding is the process of replacing a categorical value with the mean (regression) or proportion (classification) of the target variable. For example, target encoding the `Neighborhood` feature would change `North_Ames` to 144617\. Table 3\.1: Example of target encoding the Neighborhood feature of the Ames housing data set. | Neighborhood | Avg Sale\_Price | | --- | --- | | North\_Ames | 144792\.9 | | College\_Creek | 199591\.6 | | Old\_Town | 123138\.4 | | Edwards | 131109\.4 | | Somerset | 227379\.6 | | Northridge\_Heights | 323289\.5 | | Gilbert | 192162\.9 | | Sawyer | 136320\.4 | | Northwest\_Ames | 187328\.2 | | Sawyer\_West | 188644\.6 | Target encoding runs the risk of *data leakage* since you are using the response variable to encode a feature. An alternative to this is to change the feature value to represent the proportion a particular level represents for a given feature. In this case, `North_Ames` would be changed to 0\.153\. In Chapter 9, we discuss how tree\-based models use this approach to order categorical features when choosing a split point. Table 3\.2: Example of categorical proportion encoding the Neighborhood feature of the Ames housing data set. 
| Neighborhood | Proportion |
| --- | --- |
| North\_Ames | 0\.1441792 |
| College\_Creek | 0\.0910862 |
| Old\_Town | 0\.0832927 |
| Edwards | 0\.0686800 |
| Somerset | 0\.0623478 |
| Northridge\_Heights | 0\.0560156 |
| Gilbert | 0\.0565027 |
| Sawyer | 0\.0496834 |
| Northwest\_Ames | 0\.0467608 |
| Sawyer\_West | 0\.0414028 |

Several alternative approaches include effect or likelihood encoding (Micci\-Barreca [2001](#ref-micci2001preprocessing); Zumel and Mount [2016](#ref-zumel2016vtreat)), empirical Bayes methods (West, Welch, and Galecki [2014](#ref-west2014linear)), word and entity embeddings (Guo and Berkhahn [2016](#ref-guo2016entity); Chollet and Allaire [2018](#ref-chollet2018deep)), and more. For more in\-depth coverage of categorical encodings we highly recommend Kuhn and Johnson ([2019](#ref-kuhn2019feature)).

3\.7 Dimension reduction
------------------------

Dimension reduction is an alternative approach to filter out non\-informative features without manually removing them. We discuss dimension reduction topics in depth later in the book (Chapters [17](pca.html#pca)\-[19](autoencoders.html#autoencoders)) so please refer to those chapters for details. However, we wanted to highlight that it is very common to include these types of dimension reduction approaches during the feature engineering process. For example, we may wish to reduce the dimension of our features with principal components analysis (Chapter [17](pca.html#pca)), retain the number of components required to explain, say, 95% of the variance, and use these components as features in downstream modeling.

```
recipe(Sale_Price ~ ., data = ames_train) %>%
  step_center(all_numeric()) %>%
  step_scale(all_numeric()) %>%
  step_pca(all_numeric(), threshold = .95)
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Operations:
## 
## Centering for all_numeric
## Scaling for all_numeric
## No PCA components were extracted.
```

3\.8 Proper implementation
--------------------------

We stated at the beginning of this chapter that we should think of feature engineering as creating a blueprint rather than manually performing each task individually. This helps us in two ways: (1\) thinking sequentially and (2\) applying the steps appropriately within the resampling process.

### 3\.8\.1 Sequential steps

Thinking of feature engineering as a blueprint forces us to think of the ordering of our preprocessing steps. Although each particular problem requires you to think of the effects of sequential preprocessing, there are some general suggestions that you should consider:

* If using a log or Box\-Cox transformation, don’t center the data first or do any operations that might make the data non\-positive. Alternatively, use the Yeo\-Johnson transformation so you don’t have to worry about this.
* One\-hot or dummy encoding typically results in sparse data which many algorithms can operate efficiently on. If you standardize sparse data you will create dense data and you lose the computational efficiency. Consequently, it’s often preferred to standardize your numeric features and then one\-hot/dummy encode.
* If you are lumping infrequently occurring categories together, do so before one\-hot/dummy encoding.
* Although you can perform dimension reduction procedures on categorical features, it is common to primarily apply these procedures to numeric features when doing so for feature engineering purposes.

While your project’s needs may vary, here is a suggested order of potential steps that should work for most problems: 1. 
Filter out zero or near\-zero variance features. 2. Perform imputation if required. 3. Normalize to resolve numeric feature skewness. 4. Standardize (center and scale) numeric features. 5. Perform dimension reduction (e.g., PCA) on numeric features. 6. One\-hot or dummy encode categorical features. ### 3\.8\.2 Data leakage *Data leakage* is when information from outside the training data set is used to create the model. Data leakage often occurs during the data preprocessing period. To minimize this, feature engineering should be done in isolation of each resampling iteration. Recall that resampling allows us to estimate the generalizable prediction error. Therefore, we should apply our feature engineering blueprint to each resample independently as illustrated in Figure [3\.10](engineering.html#fig:engineering-minimize-leakage). That way we are not leaking information from one data set to another (each resample is designed to act as isolated training and test data). Figure 3\.10: Performing feature engineering preprocessing within each resample helps to minimize data leakage. For example, when standardizing numeric features, each resampled training data should use its own mean and variance estimates and these specific values should be applied to the same resampled test set. This imitates how real\-life prediction occurs where we only know our current data’s mean and variance estimates; therefore, on new data that comes in where we need to predict we assume the feature values follow the same distribution of what we’ve seen in the past. ### 3\.8\.3 Putting the process together To illustrate how this process works together via R code, let’s do a simple re\-assessment on the `ames` data set that we did at the end of the last chapter (Section [2\.7](process.html#put-process-together)) and see if some simple feature engineering improves our prediction error. But first, we’ll formally introduce the **recipes** package, which we’ve been implicitly illustrating throughout. The **recipes** package allows us to develop our feature engineering blueprint in a sequential nature. The idea behind **recipes** is similar to `caret::preProcess()` where we want to create the preprocessing blueprint but apply it later and within each resample.[17](#fn17) There are three main steps in creating and applying feature engineering with **recipes**: 1. `recipe`: where you define your feature engineering steps to create your blueprint. 2. `prep`are: estimate feature engineering parameters based on training data. 3. `bake`: apply the blueprint to new data. The first step is where you define your blueprint (aka recipe). With this process, you supply the formula of interest (the target variable, features, and the data these are based on) with `recipe()` and then you sequentially add feature engineering steps with `step_xxx()`. For example, the following defines `Sale_Price` as the target variable and then uses all the remaining columns as features based on `ames_train`. We then: 1. Remove near\-zero variance features that are categorical (aka nominal). 2. Ordinal encode our quality\-based features (which are inherently ordinal). 3. Center and scale (i.e., standardize) all numeric features. 4. Perform dimension reduction by applying PCA to all numeric features. 
``` blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_nzv(all_nominal()) %>% step_integer(matches("Qual|Cond|QC|Qu")) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) %>% step_pca(all_numeric(), -all_outcomes()) blueprint ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Sparse, unbalanced variable filter on all_nominal ## Integer encoding for matches, Qual|Cond|QC|Qu ## Centering for all_numeric, -, all_outcomes() ## Scaling for all_numeric, -, all_outcomes() ## No PCA components were extracted. ``` Next, we need to train this blueprint on some training data. Remember, there are many feature engineering steps that we do not want to train on the test data (e.g., standardize and PCA) as this would create data leakage. So in this step we estimate these parameters based on the training data of interest. ``` prepare <- prep(blueprint, training = ames_train) prepare ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Training data contained 2053 data points and no missing data. ## ## Operations: ## ## Sparse, unbalanced variable filter removed Street, Alley, ... [trained] ## Integer encoding for Condition_1, Overall_Qual, Overall_Cond, ... [trained] ## Centering for Lot_Frontage, Lot_Area, ... [trained] ## Scaling for Lot_Frontage, Lot_Area, ... [trained] ## PCA extraction with Lot_Frontage, Lot_Area, ... [trained] ``` Lastly, we can apply our blueprint to new data (e.g., the training data or future test data) with `bake()`. ``` baked_train <- bake(prepare, new_data = ames_train) baked_test <- bake(prepare, new_data = ames_test) baked_train ## # A tibble: 2,053 x 27 ## MS_SubClass MS_Zoning Lot_Shape Lot_Config Neighborhood Bldg_Type ## <fct> <fct> <fct> <fct> <fct> <fct> ## 1 One_Story_… Resident… Slightly… Corner North_Ames OneFam ## 2 One_Story_… Resident… Regular Inside North_Ames OneFam ## 3 One_Story_… Resident… Slightly… Corner North_Ames OneFam ## 4 Two_Story_… Resident… Slightly… Inside Gilbert OneFam ## 5 One_Story_… Resident… Regular Inside Stone_Brook TwnhsE ## 6 One_Story_… Resident… Slightly… Inside Stone_Brook TwnhsE ## 7 Two_Story_… Resident… Regular Inside Gilbert OneFam ## 8 Two_Story_… Resident… Slightly… Corner Gilbert OneFam ## 9 Two_Story_… Resident… Slightly… Inside Gilbert OneFam ## 10 One_Story_… Resident… Regular Inside Gilbert OneFam ## # … with 2,043 more rows, and 21 more variables: House_Style <fct>, ## # Roof_Style <fct>, Exterior_1st <fct>, Exterior_2nd <fct>, ## # Mas_Vnr_Type <fct>, Foundation <fct>, Bsmt_Exposure <fct>, ## # BsmtFin_Type_1 <fct>, Central_Air <fct>, Electrical <fct>, ## # Garage_Type <fct>, Garage_Finish <fct>, Paved_Drive <fct>, ## # Fence <fct>, Sale_Type <fct>, Sale_Price <int>, PC1 <dbl>, PC2 <dbl>, ## # PC3 <dbl>, PC4 <dbl>, PC5 <dbl> ``` Consequently, the goal is to develop our blueprint, then within each resample iteration we want to apply `prep()` and `bake()` to our resample training and validation data. Luckily, the **caret** package simplifies this process. We only need to specify the blueprint and **caret** will automatically prepare and bake within each resample. We illustrate with the `ames` housing example. First, we create our feature engineering blueprint to perform the following tasks: 1. Filter out near\-zero variance features for categorical features. 2. Ordinally encode all quality features, which are on a 1–10 Likert scale. 3. 
Standardize (center and scale) all numeric features. 4. One\-hot encode our remaining categorical features. ``` blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_nzv(all_nominal()) %>% step_integer(matches("Qual|Cond|QC|Qu")) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) %>% step_dummy(all_nominal(), -all_outcomes(), one_hot = TRUE) ``` Next, we apply the same resampling method and hyperparameter search grid as we did in Section [2\.7](process.html#put-process-together). The only difference is when we train our resample models with `train()`, we supply our blueprint as the first argument and then **caret** takes care of the rest. ``` # Specify resampling plan cv <- trainControl( method = "repeatedcv", number = 10, repeats = 5 ) # Construct grid of hyperparameter values hyper_grid <- expand.grid(k = seq(2, 25, by = 1)) # Tune a knn model using grid search knn_fit2 <- train( blueprint, data = ames_train, method = "knn", trControl = cv, tuneGrid = hyper_grid, metric = "RMSE" ) ``` Looking at our results we see that the best model was associated with \\(k\=\\) 13, which resulted in a cross\-validated RMSE of 32,898\. Figure [3\.11](engineering.html#fig:engineering-knn-with-blueprint-assess) illustrates the cross\-validated error rate across the spectrum of hyperparameter values that we specified. ``` # print model results knn_fit2 ## k-Nearest Neighbors ## ## 2053 samples ## 80 predictor ## ## Recipe steps: nzv, integer, center, scale, dummy ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 1848, 1849, 1848, 1847, 1848, 1848, ... ## Resampling results across tuning parameters: ## ## k RMSE Rsquared MAE ## 2 36067.27 0.8031344 22618.51 ## 3 34924.85 0.8174313 21726.77 ## 4 34515.13 0.8223547 21281.38 ## 5 34040.72 0.8306678 20968.31 ## 6 33658.36 0.8366193 20850.36 ## 7 33477.81 0.8411600 20728.86 ## 8 33272.66 0.8449444 20607.91 ## 9 33151.51 0.8473631 20542.64 ## 10 33018.91 0.8496265 20540.82 ## 11 32963.31 0.8513253 20565.32 ## 12 32931.68 0.8531010 20615.63 ## 13 32898.37 0.8545475 20621.94 ## 14 32916.05 0.8554991 20660.38 ## 15 32911.62 0.8567444 20721.47 ## 16 32947.41 0.8574756 20771.31 ## 17 33012.23 0.8575633 20845.23 ## 18 33056.07 0.8576921 20942.94 ## 19 33152.81 0.8574236 21038.13 ## 20 33243.06 0.8570209 21125.38 ## 21 33300.40 0.8566910 21186.67 ## 22 33332.59 0.8569302 21240.79 ## 23 33442.28 0.8564495 21325.81 ## 24 33464.31 0.8567895 21345.11 ## 25 33514.23 0.8568821 21375.29 ## ## RMSE was used to select the optimal model using the smallest value. ## The final value used for the model was k = 13. # plot cross validation results ggplot(knn_fit2) ``` Figure 3\.11: Results from the same grid search performed in Section 2\.7 but with feature engineering performed within each resample. By applying a handful of the preprocessing techniques discussed throughout this chapter, we were able to reduce our prediction error by over $10,000\. The chapters that follow will look to see if we can continue reducing our error by applying different algorithms and feature engineering blueprints. 
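As a closing illustration, the tuned model can also be used to score the hold-out test set (`ames_test`) that was set aside in Chapter 2. This is a minimal sketch rather than part of the tuning workflow above; it assumes `knn_fit2` and `ames_test` are still in memory, and because the model was trained with a recipe, **caret** should apply the same trained blueprint to the new data before predicting:

```
# Predict on the hold-out test set; caret applies the blueprint for us
test_pred <- predict(knn_fit2, newdata = ames_test)

# Generalization error on data the model has never seen
caret::RMSE(test_pred, ames_test$Sale_Price)
```

Keep in mind that repeatedly peeking at the test set invites the very data leakage this chapter warns against, so an assessment like this is best reserved for a final model.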
Data quality issues deserve an entire book in their own right, and a good reference is The Quartz guide to bad data.[12](#fn12) One of the most common data quality concerns you will run into is missing values. Data can be missing for many different reasons; however, these reasons are usually lumped into two categories: *informative missingness* (Kuhn and Johnson [2013](#ref-apm)) and *missingness at random* (Little and Rubin [2014](#ref-little2014statistical)). Informative missingness implies a structural cause for the missing value that can provide insight in its own right; whether this be deficiencies in how the data was collected or abnormalities in the observational environment. Missingness at random implies that missing values occur independent of the data collection process[13](#fn13). The category that drives missing values will determine how you handle them. For example, we may give values that are driven by informative missingness their own category (e.g., `"None"`) as their unique value may affect predictive performance. Whereas values that are missing at random may deserve deletion[14](#fn14) or imputation. Furthermore, different machine learning models handle missingness differently. Most algorithms cannot handle missingness (e.g., generalized linear models and their cousins, neural networks, and support vector machines) and, therefore, require them to be dealt with beforehand. A few models (mainly tree\-based), have built\-in procedures to deal with missing values. However, since the modeling process involves comparing and contrasting multiple models to identify the optimal one, you will want to handle missing values prior to applying any models so that your algorithms are based on the same data quality assumptions. ### 3\.3\.1 Visualizing missing values It is important to understand the distribution of missing values (i.e., `NA`) in any data set. So far, we have been using a pre\-processed version of the Ames housing data set (via the `AmesHousing::make_ames()` function). However, if we use the raw Ames housing data (via `AmesHousing::ames_raw`), there are actually 13,997 missing values—there is at least one missing values in each row of the original data! ``` sum(is.na(AmesHousing::ames_raw)) ## [1] 13997 ``` It is important to understand the distribution of missing values in a data set in order to determine the best approach for preprocessing. Heat maps are an efficient way to visualize the distribution of missing values for small\- to medium\-sized data sets. The code `is.na(<data-frame-name>)` will return a matrix of the same dimension as the given data frame, but each cell will contain either `TRUE` (if the corresponding value is missing) or `FALSE` (if the corresponding value is not missing). To construct such a plot, we can use R’s built\-in `heatmap()` or `image()` functions, or **ggplot2**’s `geom_raster()` function, among others; Figure [3\.3](engineering.html#fig:engineering-heat-map-missingness) illustrates `geom_raster()`. This allows us to easily see where the majority of missing values occur (i.e., in the variables `Alley`, `Fireplace Qual`, `Pool QC`, `Fence`, and `Misc Feature`). Due to their high frequency of missingness, these variables would likely need to be removed prior to statistical analysis, or imputed. We can also spot obvious patterns of missingness. For example, missing values appear to occur within the same observations across all garage variables. 
``` AmesHousing::ames_raw %>% is.na() %>% reshape2::melt() %>% ggplot(aes(Var2, Var1, fill=value)) + geom_raster() + coord_flip() + scale_y_continuous(NULL, expand = c(0, 0)) + scale_fill_grey(name = "", labels = c("Present", "Missing")) + xlab("Observation") + theme(axis.text.y = element_text(size = 4)) ``` Figure 3\.3: Heat map of missing values in the raw Ames housing data. Digging a little deeper into these variables, we might notice that `Garage_Cars` and `Garage_Area` contain the value `0` whenever the other `Garage_xx` variables have missing values (i.e. a value of `NA`). This might be because they did not have a way to identify houses with no garages when the data were originally collected, and therefore, all houses with no garage were identified by including nothing. Since this missingness is informative, it would be appropriate to impute `NA` with a new category level (e.g., `"None"`) for these garage variables. Circumstances like this tend to only become apparent upon careful descriptive and visual examination of the data! ``` AmesHousing::ames_raw %>% filter(is.na(`Garage Type`)) %>% select(`Garage Type`, `Garage Cars`, `Garage Area`) ## # A tibble: 157 x 3 ## `Garage Type` `Garage Cars` `Garage Area` ## <chr> <int> <int> ## 1 <NA> 0 0 ## 2 <NA> 0 0 ## 3 <NA> 0 0 ## 4 <NA> 0 0 ## 5 <NA> 0 0 ## 6 <NA> 0 0 ## 7 <NA> 0 0 ## 8 <NA> 0 0 ## 9 <NA> 0 0 ## 10 <NA> 0 0 ## # … with 147 more rows ``` The `vis_miss()` function in R package `visdat` (Tierney [2019](#ref-R-visdat)) also allows for easy visualization of missing data patterns (with sorting and clustering options). We illustrate this functionality below using the raw Ames housing data (Figure [3\.4](engineering.html#fig:engineering-missingness-visna)). The columns of the heat map represent the 82 variables of the raw data and the rows represent the observations. Missing values (i.e., `NA`) are indicated via a black cell. The variables and `NA` patterns have been clustered by rows (i.e., `cluster = TRUE`). ``` vis_miss(AmesHousing::ames_raw, cluster = TRUE) ``` Figure 3\.4: Visualizing missing data patterns in the raw Ames housing data. Data can be missing for different reasons. Perhaps the values were never recorded (or lost in translation), or it was recorded in error (a common feature of data entered by hand). Regardless, it is important to identify and attempt to understand how missing values are distributed across a data set as it can provide insight into how to deal with these observations. ### 3\.3\.2 Imputation *Imputation* is the process of replacing a missing value with a substituted, “best guess” value. Imputation should be one of the first feature engineering steps you take as it will affect any downstream preprocessing[15](#fn15). #### 3\.3\.2\.1 Estimated statistic An elementary approach to imputing missing values for a feature is to compute descriptive statistics such as the mean, median, or mode (for categorical) and use that value to replace `NA`s. Although computationally efficient, this approach does not consider any other attributes for a given observation when imputing (e.g., a female patient that is 63 inches tall may have her weight imputed as 175 lbs since that is the average weight across all observations which contains 65% males that average a height of 70 inches). An alternative is to use grouped statistics to capture expected values for observations that fall into similar groups. However, this becomes infeasible for larger data sets. 
Modeling imputation can automate this process for you and the two most common methods include K\-nearest neighbor and tree\-based imputation, which are discussed next. However, it is important to remember that imputation should be performed **within the resampling process** and as your data set gets larger, repeated model\-based imputation can compound the computational demands. Thus, you must weigh the pros and cons of the two approaches. The following would build onto our `ames_recipe` and impute all missing values for the `Gr_Liv_Area` variable with the median value: ``` ames_recipe %>% step_medianimpute(Gr_Liv_Area) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ## Median Imputation for Gr_Liv_Area ``` Use `step_modeimpute()` to impute categorical features with the most common value. #### 3\.3\.2\.2 *K*\-nearest neighbor *K*\-nearest neighbor (KNN) imputes values by identifying observations with missing values, then identifying other observations that are most similar based on the other available features, and using the values from these nearest neighbor observations to impute missing values. We discuss KNN for predictive modeling in Chapter [8](knn.html#knn); the imputation application works in a similar manner. In KNN imputation, the missing value for a given observation is treated as the targeted response and is predicted based on the average (for quantitative values) or the mode (for qualitative values) of the *k* nearest neighbors. As discussed in Chapter [8](knn.html#knn), if all features are quantitative then standard Euclidean distance is commonly used as the distance metric to identify the *k* neighbors and when there is a mixture of quantitative and qualitative features then Gower’s distance (Gower [1971](#ref-gower1971general)) can be used. KNN imputation is best used on small to moderate sized data sets as it becomes computationally burdensome with larger data sets (Kuhn and Johnson [2019](#ref-kuhn2019feature)). As we saw in Section 2\.7, *k* is a tunable hyperparameter. Suggested values for imputation are 5–10 (Kuhn and Johnson [2019](#ref-kuhn2019feature)). By default, `step_knnimpute()` will use 5 but can be adjusted with the `neighbors` argument. ``` ames_recipe %>% step_knnimpute(all_predictors(), neighbors = 6) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ## K-nearest neighbor imputation for all_predictors ``` #### 3\.3\.2\.3 Tree\-based As previously discussed, several implementations of decision trees (Chapter [9](DT.html#DT)) and their derivatives can be constructed in the presence of missing values. Thus, they provide a good alternative for imputation. As discussed in Chapters [9](DT.html#DT)\-[11](random-forest.html#random-forest), single trees have high variance but aggregating across many trees creates a robust, low variance predictor. Random forest imputation procedures have been studied (Shah et al. [2014](#ref-shah2014comparison); Stekhoven [2015](#ref-stekhoven2015missforest)); however, they require significant computational demands in a resampling environment (Kuhn and Johnson [2019](#ref-kuhn2019feature)). Bagged trees (Chapter [10](bagging.html#bagging)) offer a compromise between predictive accuracy and computational burden. 
Similar to KNN imputation, observations with missing values are identified and the feature containing the missing value is treated as the target and predicted using bagged decision trees. ``` ames_recipe %>% step_bagimpute(all_predictors()) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ## Bagged tree imputation for all_predictors ``` Figure [3\.5](engineering.html#fig:engineering-imputation-examples) illustrates the differences between mean, KNN, and tree\-based imputation on the raw Ames housing data. It is apparent how descriptive statistic methods (e.g., using the mean and median) are inferior to the KNN and tree\-based imputation methods. Figure 3\.5: Comparison of three different imputation methods. The red points represent actual values which were removed and made missing and the blue points represent the imputed values. Estimated statistic imputation methods (i.e. mean, median) merely predict the same value for each observation and can reduce the signal between a feature and the response; whereas KNN and tree\-based procedures tend to maintain the feature distribution and relationship. ### 3\.3\.1 Visualizing missing values It is important to understand the distribution of missing values (i.e., `NA`) in any data set. So far, we have been using a pre\-processed version of the Ames housing data set (via the `AmesHousing::make_ames()` function). However, if we use the raw Ames housing data (via `AmesHousing::ames_raw`), there are actually 13,997 missing values—there is at least one missing values in each row of the original data! ``` sum(is.na(AmesHousing::ames_raw)) ## [1] 13997 ``` It is important to understand the distribution of missing values in a data set in order to determine the best approach for preprocessing. Heat maps are an efficient way to visualize the distribution of missing values for small\- to medium\-sized data sets. The code `is.na(<data-frame-name>)` will return a matrix of the same dimension as the given data frame, but each cell will contain either `TRUE` (if the corresponding value is missing) or `FALSE` (if the corresponding value is not missing). To construct such a plot, we can use R’s built\-in `heatmap()` or `image()` functions, or **ggplot2**’s `geom_raster()` function, among others; Figure [3\.3](engineering.html#fig:engineering-heat-map-missingness) illustrates `geom_raster()`. This allows us to easily see where the majority of missing values occur (i.e., in the variables `Alley`, `Fireplace Qual`, `Pool QC`, `Fence`, and `Misc Feature`). Due to their high frequency of missingness, these variables would likely need to be removed prior to statistical analysis, or imputed. We can also spot obvious patterns of missingness. For example, missing values appear to occur within the same observations across all garage variables. ``` AmesHousing::ames_raw %>% is.na() %>% reshape2::melt() %>% ggplot(aes(Var2, Var1, fill=value)) + geom_raster() + coord_flip() + scale_y_continuous(NULL, expand = c(0, 0)) + scale_fill_grey(name = "", labels = c("Present", "Missing")) + xlab("Observation") + theme(axis.text.y = element_text(size = 4)) ``` Figure 3\.3: Heat map of missing values in the raw Ames housing data. Digging a little deeper into these variables, we might notice that `Garage_Cars` and `Garage_Area` contain the value `0` whenever the other `Garage_xx` variables have missing values (i.e. a value of `NA`). 
3\.4 Feature filtering ---------------------- In many data analyses and modeling projects we end up with hundreds or even thousands of collected features. From a practical perspective, a model with more features often becomes harder to interpret and is costly to compute.
Some models are more resistant to non\-informative predictors (e.g., the Lasso and tree\-based methods) than others as illustrated in Figure [3\.6](engineering.html#fig:engineering-accuracy-comparison).[16](#fn16) Figure 3\.6: Test set RMSE profiles when non\-informative predictors are added. Although the performance of some of our models is not significantly affected by non\-informative predictors, the time to train these models can be negatively impacted as more features are added. Figure [3\.7](engineering.html#fig:engineering-impact-on-time) shows the increase in time to perform 10\-fold CV on the exemplar data, which consists of 10,000 observations. We see that many algorithms (e.g., elastic nets, random forests, and gradient boosting machines) become extremely time intensive the more predictors we add. Consequently, filtering or reducing features prior to modeling may significantly speed up training time. Figure 3\.7: Impact in model training time as non\-informative predictors are added. Zero and near\-zero variance variables are low\-hanging fruit to eliminate. Zero variance variables, meaning the feature only contains a single unique value, provide no useful information to a model. Some algorithms are unaffected by zero variance features. However, features that have near\-zero variance also offer very little, if any, information to a model. Furthermore, they can cause problems during resampling as there is a high probability that a given sample will only contain a single unique value (the dominant value) for that feature. A rule of thumb for detecting near\-zero variance features is: * The fraction of unique values over the sample size is low (say \\(\\leq 10\\)%). * The ratio of the frequency of the most prevalent value to the frequency of the second most prevalent value is large (say \\(\\geq 20\\)). If both of these criteria are true then it is often advantageous to remove the variable from the model. For the Ames data, we do not have any zero variance predictors but there are 20 features that meet the near\-zero threshold. ``` caret::nearZeroVar(ames_train, saveMetrics = TRUE) %>% tibble::rownames_to_column() %>% filter(nzv) ## rowname freqRatio percentUnique zeroVar nzv ## 1 Street 292.28571 0.09741841 FALSE TRUE ## 2 Alley 21.76136 0.14612762 FALSE TRUE ## 3 Land_Contour 21.78824 0.19483682 FALSE TRUE ## 4 Utilities 1025.00000 0.14612762 FALSE TRUE ## 5 Land_Slope 23.33333 0.14612762 FALSE TRUE ## 6 Condition_2 225.77778 0.34096444 FALSE TRUE ## 7 Roof_Matl 126.50000 0.24354603 FALSE TRUE ## 8 Bsmt_Cond 19.88043 0.29225524 FALSE TRUE ## 9 BsmtFin_Type_2 22.35897 0.34096444 FALSE TRUE ## 10 Heating 96.23810 0.24354603 FALSE TRUE ## 11 Low_Qual_Fin_SF 1012.00000 1.31514856 FALSE TRUE ## 12 Kitchen_AbvGr 23.10588 0.19483682 FALSE TRUE ## 13 Functional 38.95918 0.34096444 FALSE TRUE ## 14 Enclosed_Porch 107.68750 7.25767170 FALSE TRUE ## 15 Three_season_porch 675.00000 1.12031174 FALSE TRUE ## 16 Screen_Porch 234.87500 4.52995616 FALSE TRUE ## 17 Pool_Area 2045.00000 0.43838285 FALSE TRUE ## 18 Pool_QC 681.66667 0.24354603 FALSE TRUE ## 19 Misc_Feature 31.00000 0.19483682 FALSE TRUE ## 20 Misc_Val 152.76923 1.36385777 FALSE TRUE ``` We can add `step_zv()` and `step_nzv()` to our `ames_recipe` to remove zero or near\-zero variance features. Other feature filtering methods exist; see Saeys, Inza, and Larrañaga ([2007](#ref-saeys2007review)) for a thorough review.
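As a concrete sketch of adding these variance filters to our recipe (the `all_predictors()` selector is illustrative; later in the chapter the filter is restricted to nominal features):

```
# Illustrative sketch: drop zero variance features, then near-zero variance
# features, before any other preprocessing steps
ames_recipe %>%
  step_zv(all_predictors()) %>%
  step_nzv(all_predictors())
```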
Furthermore, several wrapper methods exist that evaluate multiple models using procedures that add or remove predictors to find the optimal combination of features that maximizes model performance (see, for example, Kursa, Rudnicki, and others ([2010](#ref-kursa2010feature)), Granitto et al. ([2006](#ref-granitto2006recursive)), Maldonado and Weber ([2009](#ref-maldonado2009wrapper))). However, this topic is beyond the scope of this book. 3\.5 Numeric feature engineering -------------------------------- Numeric features can create a host of problems for certain models when their distributions are skewed, contain outliers, or have a wide range in magnitudes. Tree\-based models are quite immune to these types of problems in the feature space, but many other models (e.g., GLMs, regularized regression, KNN, support vector machines, neural networks) can be greatly hampered by these issues. Normalizing and standardizing heavily skewed features can help minimize these concerns. ### 3\.5\.1 Skewness Similar to the process discussed to normalize target variables, parametric models that have distributional assumptions (e.g., GLMs, and regularized models) can benefit from minimizing the skewness of numeric features. When normalizing many variables, it’s best to use the Box\-Cox (when feature values are strictly positive) or Yeo\-Johnson (when feature values are not strictly positive) procedures as these methods will identify if a transformation is required and what the optimal transformation will be. Non\-parametric models are rarely affected by skewed features; however, normalizing features will not have a negative effect on these models’ performance. For example, normalizing features will only shift the optimal split points in tree\-based algorithms. Consequently, when in doubt, normalize. ``` # Normalize all numeric columns recipe(Sale_Price ~ ., data = ames_train) %>% step_YeoJohnson(all_numeric()) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Yeo-Johnson transformation on all_numeric ``` ### 3\.5\.2 Standardization We must also consider the scale on which the individual features are measured. What are the largest and smallest values across all features and do they span several orders of magnitude? Models that incorporate smooth functions of input features are sensitive to the scale of the inputs. For example, \\(5X\+2\\) is a simple linear function of the input *X*, and the scale of its output depends directly on the scale of the input. Many algorithms use linear functions within their algorithms, some more obvious (e.g., GLMs and regularized regression) than others (e.g., neural networks, support vector machines, and principal components analysis). Other examples include algorithms that use distance measures such as the Euclidean distance (e.g., *k* nearest neighbor, *k*\-means clustering, and hierarchical clustering). For these models and modeling components, it is often a good idea to *standardize* the features. Standardizing features includes *centering* and *scaling* so that numeric variables have zero mean and unit variance, which provides a common comparable unit of measure across all the variables. Figure 3\.8: Standardizing features allows all features to be compared on a common value scale regardless of their real value differences. Some packages (e.g., **glmnet**, and **caret**) have built\-in options to standardize and some do not (e.g., **keras** for neural networks). 
However, you should standardize your variables within the recipe blueprint so that both training and test data standardization are based on the same mean and variance. This helps to minimize data leakage. ``` ames_recipe %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Log transformation on all_outcomes ## Centering for all_numeric, -, all_outcomes() ## Scaling for all_numeric, -, all_outcomes() ```
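To see why reusing the training estimates matters, here is a tiny, purely illustrative sketch (the vectors are made up). The first computation mimics what a trained recipe applies to new data, while the second re\-estimates the statistics on the test set and therefore leaks information.

```
# Illustrative only: standardize test data with the *training* mean and sd
train_x <- c(10, 20, 30, 40)
test_x  <- c(25, 35)

mu    <- mean(train_x)  # estimated from the training data only
sigma <- sd(train_x)

(test_x - mu) / sigma   # what a prepared recipe applies to new data
scale(test_x)[, 1]      # re-estimating on the test set leaks information
```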
3\.6 Categorical feature engineering ------------------------------------ Most models require that the predictors take numeric form. There are exceptions; for example, tree\-based models naturally handle numeric or categorical features. However, even tree\-based models can benefit from preprocessing categorical features. The following sections will discuss a few of the more common approaches to engineer categorical features. ### 3\.6\.1 Lumping Sometimes features will contain levels that have very few observations. For example, there are 28 unique neighborhoods represented in the Ames housing data but several of them only have a few observations. ``` count(ames_train, Neighborhood) %>% arrange(n) ## # A tibble: 28 x 2 ## Neighborhood n ## <fct> <int> ## 1 Landmark 1 ## 2 Green_Hills 2 ## 3 Greens 7 ## 4 Blueste 9 ## 5 Northpark_Villa 17 ## 6 Briardale 18 ## 7 Veenker 20 ## 8 Bloomington_Heights 21 ## 9 South_and_West_of_Iowa_State_University 30 ## 10 Meadow_Village 30 ## # … with 18 more rows ``` Even numeric features can have similar distributions. For example, `Screen_Porch` has 92% values recorded as zero (zero square footage meaning no screen porch) and the remaining 8% have unique dispersed values. ``` count(ames_train, Screen_Porch) %>% arrange(n) ## # A tibble: 93 x 2 ## Screen_Porch n ## <int> <int> ## 1 40 1 ## 2 80 1 ## 3 92 1 ## 4 94 1 ## 5 99 1 ## 6 104 1 ## 7 109 1 ## 8 110 1 ## 9 111 1 ## 10 117 1 ## # … with 83 more rows ``` Sometimes we can benefit from collapsing, or “lumping” these into a smaller number of categories. In the above examples, we may want to collapse all levels that are observed in less than 10% of the training sample into an “other” category. We can use `step_other()` to do so. However, lumping should be used sparingly as there is often a loss in model performance (Kuhn and Johnson [2013](#ref-apm)). Tree\-based models often perform exceptionally well with high cardinality features and are not as impacted by levels with small representation. ``` # Lump levels for two features lumping <- recipe(Sale_Price ~ ., data = ames_train) %>% step_other(Neighborhood, threshold = 0.01, other = "other") %>% step_other(Screen_Porch, threshold = 0.1, other = ">0") # Apply this blueprint --> you will learn about this at # the end of the chapter apply_2_training <- prep(lumping, training = ames_train) %>% bake(ames_train) # New distribution of Neighborhood count(apply_2_training, Neighborhood) %>% arrange(n) ## # A tibble: 22 x 2 ## Neighborhood n ## <fct> <int> ## 1 Bloomington_Heights 21 ## 2 South_and_West_of_Iowa_State_University 30 ## 3 Meadow_Village 30 ## 4 Clear_Creek 31 ## 5 Stone_Brook 34 ## 6 Northridge 48 ## 7 Timberland 55 ## 8 Iowa_DOT_and_Rail_Road 62 ## 9 Crawford 72 ## 10 Mitchell 74 ## # … with 12 more rows # New distribution of Screen_Porch count(apply_2_training, Screen_Porch) %>% arrange(n) ## # A tibble: 2 x 2 ## Screen_Porch n ## <fct> <int> ## 1 >0 174 ## 2 0 1879 ``` ### 3\.6\.2 One\-hot \& dummy encoding Many models require that all predictor variables be numeric. Consequently, we need to intelligently transform any categorical variables into numeric representations so that these algorithms can compute.
Some packages automate this process (e.g., **h2o** and **caret**) while others do not (e.g., **glmnet** and **keras**). There are many ways to recode categorical variables as numeric (e.g., one\-hot, ordinal, binary, sum, and Helmert). The most common is referred to as one\-hot encoding, where we transpose our categorical variables so that each level of the feature is represented as a boolean value. For example, one\-hot encoding the left data frame in Figure [3\.9](engineering.html#fig:engineering-one-hot) results in `X` being converted into three columns, one for each level. This is called less than *full rank* encoding. However, this creates perfect collinearity which causes problems with some predictive modeling algorithms (e.g., ordinary linear regression and neural networks). Alternatively, we can create a full\-rank encoding by dropping one of the levels (level `c` has been dropped). This is referred to as *dummy* encoding. Figure 3\.9: Eight observations containing a categorical feature X and the difference in how one\-hot and dummy encoding transforms this feature. We can one\-hot or dummy encode with the same function (`step_dummy()`). By default, `step_dummy()` will create a full rank encoding but you can change this by setting `one_hot = TRUE`. ``` # One-hot encode all nominal (categorical) features recipe(Sale_Price ~ ., data = ames_train) %>% step_dummy(all_nominal(), one_hot = TRUE) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Dummy variables from all_nominal ``` Since one\-hot encoding adds new features it can significantly increase the dimensionality of our data. If you have a data set with many categorical variables and those categorical variables in turn have many unique levels, the number of features can explode. In these cases you may want to explore label/ordinal encoding or some other alternative. ### 3\.6\.3 Label encoding *Label encoding* is a pure numeric conversion of the levels of a categorical variable. If a categorical variable is a factor and it has pre\-specified levels then the numeric conversion will be in level order. If no levels are specified, the encoding will be based on alphabetical order. For example, the `MS_SubClass` variable has 16 levels, which we can recode numerically with `step_integer()`.
``` # Original categories count(ames_train, MS_SubClass) ## # A tibble: 16 x 2 ## MS_SubClass n ## <fct> <int> ## 1 One_Story_1946_and_Newer_All_Styles 749 ## 2 One_Story_1945_and_Older 93 ## 3 One_Story_with_Finished_Attic_All_Ages 5 ## 4 One_and_Half_Story_Unfinished_All_Ages 11 ## 5 One_and_Half_Story_Finished_All_Ages 207 ## 6 Two_Story_1946_and_Newer 394 ## 7 Two_Story_1945_and_Older 98 ## 8 Two_and_Half_Story_All_Ages 17 ## 9 Split_or_Multilevel 78 ## 10 Split_Foyer 31 ## 11 Duplex_All_Styles_and_Ages 69 ## 12 One_Story_PUD_1946_and_Newer 144 ## 13 One_and_Half_Story_PUD_All_Ages 1 ## 14 Two_Story_PUD_1946_and_Newer 98 ## 15 PUD_Multilevel_Split_Level_Foyer 14 ## 16 Two_Family_conversion_All_Styles_and_Ages 44 # Label encoded recipe(Sale_Price ~ ., data = ames_train) %>% step_integer(MS_SubClass) %>% prep(ames_train) %>% bake(ames_train) %>% count(MS_SubClass) ## # A tibble: 16 x 2 ## MS_SubClass n ## <dbl> <int> ## 1 1 749 ## 2 2 93 ## 3 3 5 ## 4 4 11 ## 5 5 207 ## 6 6 394 ## 7 7 98 ## 8 8 17 ## 9 9 78 ## 10 10 31 ## 11 11 69 ## 12 12 144 ## 13 13 1 ## 14 14 98 ## 15 15 14 ## 16 16 44 ``` We should be careful with label encoding unordered categorical features because most models will treat them as ordered numeric features. If a categorical feature is naturally ordered then label encoding is a natural choice (most commonly referred to as ordinal encoding). For example, the various quality features in the Ames housing data are ordinal in nature (ranging from `Very_Poor` to `Very_Excellent`). ``` ames_train %>% select(contains("Qual")) ## # A tibble: 2,053 x 6 ## Overall_Qual Exter_Qual Bsmt_Qual Low_Qual_Fin_SF Kitchen_Qual ## <fct> <fct> <fct> <int> <fct> ## 1 Above_Avera… Typical Typical 0 Typical ## 2 Average Typical Typical 0 Typical ## 3 Above_Avera… Typical Typical 0 Good ## 4 Above_Avera… Typical Typical 0 Good ## 5 Very_Good Good Good 0 Good ## 6 Very_Good Good Good 0 Good ## 7 Good Typical Typical 0 Good ## 8 Above_Avera… Typical Good 0 Typical ## 9 Above_Avera… Typical Good 0 Typical ## 10 Good Typical Good 0 Good ## # … with 2,043 more rows, and 1 more variable: Garage_Qual <fct> ``` Ordinal encoding these features provides a natural and intuitive interpretation and can logically be applied to all models. The various `xxx_Qual` features in the Ames housing are not ordered factors. For ordered factors you could also use `step_ordinalscore()`. ``` # Original categories count(ames_train, Overall_Qual) ## # A tibble: 10 x 2 ## Overall_Qual n ## <fct> <int> ## 1 Very_Poor 4 ## 2 Poor 9 ## 3 Fair 27 ## 4 Below_Average 166 ## 5 Average 565 ## 6 Above_Average 513 ## 7 Good 438 ## 8 Very_Good 231 ## 9 Excellent 77 ## 10 Very_Excellent 23 # Label encoded recipe(Sale_Price ~ ., data = ames_train) %>% step_integer(Overall_Qual) %>% prep(ames_train) %>% bake(ames_train) %>% count(Overall_Qual) ## # A tibble: 10 x 2 ## Overall_Qual n ## <dbl> <int> ## 1 1 4 ## 2 2 9 ## 3 3 27 ## 4 4 166 ## 5 5 565 ## 6 6 513 ## 7 7 438 ## 8 8 231 ## 9 9 77 ## 10 10 23 ``` ### 3\.6\.4 Alternatives There are several alternative categorical encodings that are implemented in various R machine learning engines and are worth exploring. For example, target encoding is the process of replacing a categorical value with the mean (regression) or proportion (classification) of the target variable. For example, target encoding the `Neighborhood` feature would change `North_Ames` to 144617\. Table 3\.1: Example of target encoding the Neighborhood feature of the Ames housing data set. 
| Neighborhood | Avg Sale\_Price | | --- | --- | | North\_Ames | 144792\.9 | | College\_Creek | 199591\.6 | | Old\_Town | 123138\.4 | | Edwards | 131109\.4 | | Somerset | 227379\.6 | | Northridge\_Heights | 323289\.5 | | Gilbert | 192162\.9 | | Sawyer | 136320\.4 | | Northwest\_Ames | 187328\.2 | | Sawyer\_West | 188644\.6 | Target encoding runs the risk of *data leakage* since you are using the response variable to encode a feature. An alternative to this is to change the feature value to represent the proportion a particular level represents for a given feature. In this case, `North_Ames` would be changed to 0\.153\. In Chapter 9, we discuss how tree\-based models use this approach to order categorical features when choosing a split point. Table 3\.2: Example of categorical proportion encoding the Neighborhood feature of the Ames housing data set. | Neighborhood | Proportion | | --- | --- | | North\_Ames | 0\.1441792 | | College\_Creek | 0\.0910862 | | Old\_Town | 0\.0832927 | | Edwards | 0\.0686800 | | Somerset | 0\.0623478 | | Northridge\_Heights | 0\.0560156 | | Gilbert | 0\.0565027 | | Sawyer | 0\.0496834 | | Northwest\_Ames | 0\.0467608 | | Sawyer\_West | 0\.0414028 | Several alternative approaches include effect or likelihood encoding (Micci\-Barreca [2001](#ref-micci2001preprocessing); Zumel and Mount [2016](#ref-zumel2016vtreat)), empirical Bayes methods (West, Welch, and Galecki [2014](#ref-west2014linear)), word and entity embeddings (Guo and Berkhahn [2016](#ref-guo2016entity); Chollet and Allaire [2018](#ref-chollet2018deep)), and more. For more in depth coverage of categorical encodings we highly recommend Kuhn and Johnson ([2019](#ref-kuhn2019feature)).
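As a rough sketch of how encodings like Tables 3\.1 and 3\.2 could be computed by hand with the dplyr verbs used elsewhere in this chapter (the new column names are made up for illustration); in practice any target encoding should be computed inside each resample to limit the leakage discussed above.

```
# Illustrative only: mean target encoding of Neighborhood (Table 3.1 style)
neighborhood_means <- ames_train %>%
  group_by(Neighborhood) %>%
  summarize(Neighborhood_target = mean(Sale_Price))

ames_train_encoded <- ames_train %>%
  left_join(neighborhood_means, by = "Neighborhood")

# Proportion encoding (Table 3.2 style): each level's relative frequency
count(ames_train, Neighborhood) %>%
  mutate(Neighborhood_prop = n / sum(n))
```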
3\.7 Dimension reduction ------------------------ Dimension reduction is an alternative approach to filter out non\-informative features without manually removing them. We discuss dimension reduction topics in depth later in the book (Chapters [17](pca.html#pca)\-[19](autoencoders.html#autoencoders)) so please refer to those chapters for details. However, we wanted to highlight that it is very common to include these types of dimension reduction approaches during the feature engineering process. For example, we may wish to reduce the dimension of our features with principal components analysis (Chapter [17](pca.html#pca)) and retain the number of components required to explain, say, 95% of the variance and use these components as features in downstream modeling. ``` recipe(Sale_Price ~ ., data = ames_train) %>% step_center(all_numeric()) %>% step_scale(all_numeric()) %>% step_pca(all_numeric(), threshold = .95) ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Centering for all_numeric ## Scaling for all_numeric ## No PCA components were extracted. ``` 3\.8 Proper implementation -------------------------- We stated at the beginning of this chapter that we should think of feature engineering as creating a blueprint rather than manually performing each task individually. This helps us in two ways: (1\) thinking sequentially and (2\) to apply appropriately within the resampling process. ### 3\.8\.1 Sequential steps Thinking of feature engineering as a blueprint forces us to think of the ordering of our preprocessing steps.
Although each particular problem requires you to think of the effects of sequential preprocessing, there are some general suggestions that you should consider: * If using a log or Box\-Cox transformation, don't center the data first or do any operations that might make the data non\-positive. Alternatively, use the Yeo\-Johnson transformation so you don't have to worry about this. * One\-hot or dummy encoding typically results in sparse data which many algorithms can operate efficiently on. If you standardize sparse data you will create dense data and you lose the computational efficiency. Consequently, it's often preferred to standardize your numeric features and then one\-hot/dummy encode. * If you are lumping infrequently occurring categories together, do so before one\-hot/dummy encoding. * Although you can perform dimension reduction procedures on categorical features, for feature engineering purposes it is most common to apply them to numeric features. While your project's needs may vary, here is a suggested order of potential steps that should work for most problems: 1. Filter out zero or near\-zero variance features. 2. Perform imputation if required. 3. Normalize to resolve numeric feature skewness. 4. Standardize (center and scale) numeric features. 5. Perform dimension reduction (e.g., PCA) on numeric features. 6. One\-hot or dummy encode categorical features. ### 3\.8\.2 Data leakage *Data leakage* occurs when information from outside the training data set is used to create the model; it most commonly arises during data preprocessing. To minimize this, feature engineering should be performed in isolation within each resampling iteration. Recall that resampling allows us to estimate the generalizable prediction error. Therefore, we should apply our feature engineering blueprint to each resample independently as illustrated in Figure [3\.10](engineering.html#fig:engineering-minimize-leakage). That way we are not leaking information from one data set to another (each resample is designed to act as isolated training and test data). Figure 3\.10: Performing feature engineering preprocessing within each resample helps to minimize data leakage. For example, when standardizing numeric features, each resampled training set should use its own mean and variance estimates, and these specific values should be applied to the same resampled test set. This imitates how real\-life prediction occurs: we only know the current data's mean and variance estimates, so when new data arrive for prediction we assume their features follow the same distribution as what we have seen in the past. ### 3\.8\.3 Putting the process together To illustrate how this process works together via R code, let's redo the simple assessment of the `ames` data set that we performed at the end of the last chapter (Section [2\.7](process.html#put-process-together)) and see if some simple feature engineering improves our prediction error. But first, we'll formally introduce the **recipes** package, which we've been implicitly illustrating throughout. The **recipes** package allows us to develop our feature engineering blueprint in a sequential manner. The idea behind **recipes** is similar to `caret::preProcess()` where we want to create the preprocessing blueprint but apply it later and within each resample.[17](#fn17) There are three main steps in creating and applying feature engineering with **recipes**: 1.
`recipe`: where you define your feature engineering steps to create your blueprint. 2. `prep`are: estimate feature engineering parameters based on training data. 3. `bake`: apply the blueprint to new data. The first step is where you define your blueprint (aka recipe). With this process, you supply the formula of interest (the target variable, features, and the data these are based on) with `recipe()` and then you sequentially add feature engineering steps with `step_xxx()`. For example, the following defines `Sale_Price` as the target variable and then uses all the remaining columns as features based on `ames_train`. We then: 1. Remove near\-zero variance features that are categorical (aka nominal). 2. Ordinal encode our quality\-based features (which are inherently ordinal). 3. Center and scale (i.e., standardize) all numeric features. 4. Perform dimension reduction by applying PCA to all numeric features. ``` blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_nzv(all_nominal()) %>% step_integer(matches("Qual|Cond|QC|Qu")) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) %>% step_pca(all_numeric(), -all_outcomes()) blueprint ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Sparse, unbalanced variable filter on all_nominal ## Integer encoding for matches, Qual|Cond|QC|Qu ## Centering for all_numeric, -, all_outcomes() ## Scaling for all_numeric, -, all_outcomes() ## No PCA components were extracted. ``` Next, we need to train this blueprint on some training data. Remember, there are many feature engineering steps that we do not want to train on the test data (e.g., standardize and PCA) as this would create data leakage. So in this step we estimate these parameters based on the training data of interest. ``` prepare <- prep(blueprint, training = ames_train) prepare ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Training data contained 2053 data points and no missing data. ## ## Operations: ## ## Sparse, unbalanced variable filter removed Street, Alley, ... [trained] ## Integer encoding for Condition_1, Overall_Qual, Overall_Cond, ... [trained] ## Centering for Lot_Frontage, Lot_Area, ... [trained] ## Scaling for Lot_Frontage, Lot_Area, ... [trained] ## PCA extraction with Lot_Frontage, Lot_Area, ... [trained] ``` Lastly, we can apply our blueprint to new data (e.g., the training data or future test data) with `bake()`. 
``` baked_train <- bake(prepare, new_data = ames_train) baked_test <- bake(prepare, new_data = ames_test) baked_train ## # A tibble: 2,053 x 27 ## MS_SubClass MS_Zoning Lot_Shape Lot_Config Neighborhood Bldg_Type ## <fct> <fct> <fct> <fct> <fct> <fct> ## 1 One_Story_… Resident… Slightly… Corner North_Ames OneFam ## 2 One_Story_… Resident… Regular Inside North_Ames OneFam ## 3 One_Story_… Resident… Slightly… Corner North_Ames OneFam ## 4 Two_Story_… Resident… Slightly… Inside Gilbert OneFam ## 5 One_Story_… Resident… Regular Inside Stone_Brook TwnhsE ## 6 One_Story_… Resident… Slightly… Inside Stone_Brook TwnhsE ## 7 Two_Story_… Resident… Regular Inside Gilbert OneFam ## 8 Two_Story_… Resident… Slightly… Corner Gilbert OneFam ## 9 Two_Story_… Resident… Slightly… Inside Gilbert OneFam ## 10 One_Story_… Resident… Regular Inside Gilbert OneFam ## # … with 2,043 more rows, and 21 more variables: House_Style <fct>, ## # Roof_Style <fct>, Exterior_1st <fct>, Exterior_2nd <fct>, ## # Mas_Vnr_Type <fct>, Foundation <fct>, Bsmt_Exposure <fct>, ## # BsmtFin_Type_1 <fct>, Central_Air <fct>, Electrical <fct>, ## # Garage_Type <fct>, Garage_Finish <fct>, Paved_Drive <fct>, ## # Fence <fct>, Sale_Type <fct>, Sale_Price <int>, PC1 <dbl>, PC2 <dbl>, ## # PC3 <dbl>, PC4 <dbl>, PC5 <dbl> ``` Consequently, the goal is to develop our blueprint, then within each resample iteration we want to apply `prep()` and `bake()` to our resample training and validation data. Luckily, the **caret** package simplifies this process. We only need to specify the blueprint and **caret** will automatically prepare and bake within each resample. We illustrate with the `ames` housing example. First, we create our feature engineering blueprint to perform the following tasks: 1. Filter out near\-zero variance features for categorical features. 2. Ordinally encode all quality features, which are on a 1–10 Likert scale. 3. Standardize (center and scale) all numeric features. 4. One\-hot encode our remaining categorical features. ``` blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_nzv(all_nominal()) %>% step_integer(matches("Qual|Cond|QC|Qu")) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) %>% step_dummy(all_nominal(), -all_outcomes(), one_hot = TRUE) ``` Next, we apply the same resampling method and hyperparameter search grid as we did in Section [2\.7](process.html#put-process-together). The only difference is when we train our resample models with `train()`, we supply our blueprint as the first argument and then **caret** takes care of the rest. ``` # Specify resampling plan cv <- trainControl( method = "repeatedcv", number = 10, repeats = 5 ) # Construct grid of hyperparameter values hyper_grid <- expand.grid(k = seq(2, 25, by = 1)) # Tune a knn model using grid search knn_fit2 <- train( blueprint, data = ames_train, method = "knn", trControl = cv, tuneGrid = hyper_grid, metric = "RMSE" ) ``` Looking at our results we see that the best model was associated with \\(k\=\\) 13, which resulted in a cross\-validated RMSE of 32,898\. Figure [3\.11](engineering.html#fig:engineering-knn-with-blueprint-assess) illustrates the cross\-validated error rate across the spectrum of hyperparameter values that we specified. 
``` # print model results knn_fit2 ## k-Nearest Neighbors ## ## 2053 samples ## 80 predictor ## ## Recipe steps: nzv, integer, center, scale, dummy ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 1848, 1849, 1848, 1847, 1848, 1848, ... ## Resampling results across tuning parameters: ## ## k RMSE Rsquared MAE ## 2 36067.27 0.8031344 22618.51 ## 3 34924.85 0.8174313 21726.77 ## 4 34515.13 0.8223547 21281.38 ## 5 34040.72 0.8306678 20968.31 ## 6 33658.36 0.8366193 20850.36 ## 7 33477.81 0.8411600 20728.86 ## 8 33272.66 0.8449444 20607.91 ## 9 33151.51 0.8473631 20542.64 ## 10 33018.91 0.8496265 20540.82 ## 11 32963.31 0.8513253 20565.32 ## 12 32931.68 0.8531010 20615.63 ## 13 32898.37 0.8545475 20621.94 ## 14 32916.05 0.8554991 20660.38 ## 15 32911.62 0.8567444 20721.47 ## 16 32947.41 0.8574756 20771.31 ## 17 33012.23 0.8575633 20845.23 ## 18 33056.07 0.8576921 20942.94 ## 19 33152.81 0.8574236 21038.13 ## 20 33243.06 0.8570209 21125.38 ## 21 33300.40 0.8566910 21186.67 ## 22 33332.59 0.8569302 21240.79 ## 23 33442.28 0.8564495 21325.81 ## 24 33464.31 0.8567895 21345.11 ## 25 33514.23 0.8568821 21375.29 ## ## RMSE was used to select the optimal model using the smallest value. ## The final value used for the model was k = 13. # plot cross validation results ggplot(knn_fit2) ``` Figure 3\.11: Results from the same grid search performed in Section 2\.7 but with feature engineering performed within each resample. By applying a handful of the preprocessing techniques discussed throughout this chapter, we were able to reduce our prediction error by over $10,000\. The chapters that follow will look to see if we can continue reducing our error by applying different algorithms and feature engineering blueprints.
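As a quick practical aside before moving on, the tuning summary printed above can also be extracted directly from the fitted **caret** object; a brief sketch assuming the `knn_fit2` object created above.

```
# Best hyperparameter found by the grid search
knn_fit2$bestTune

# Resampling results as a data frame (one row per value of k)
head(knn_fit2$results)

# Cross-validated RMSE of the winning model
min(knn_fit2$results$RMSE)
```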
To minimize this, feature engineering should be done in isolation of each resampling iteration. Recall that resampling allows us to estimate the generalizable prediction error. Therefore, we should apply our feature engineering blueprint to each resample independently as illustrated in Figure [3\.10](engineering.html#fig:engineering-minimize-leakage). That way we are not leaking information from one data set to another (each resample is designed to act as isolated training and test data). Figure 3\.10: Performing feature engineering preprocessing within each resample helps to minimize data leakage. For example, when standardizing numeric features, each resampled training data should use its own mean and variance estimates and these specific values should be applied to the same resampled test set. This imitates how real\-life prediction occurs where we only know our current data’s mean and variance estimates; therefore, on new data that comes in where we need to predict we assume the feature values follow the same distribution of what we’ve seen in the past. ### 3\.8\.3 Putting the process together To illustrate how this process works together via R code, let’s do a simple re\-assessment on the `ames` data set that we did at the end of the last chapter (Section [2\.7](process.html#put-process-together)) and see if some simple feature engineering improves our prediction error. But first, we’ll formally introduce the **recipes** package, which we’ve been implicitly illustrating throughout. The **recipes** package allows us to develop our feature engineering blueprint in a sequential nature. The idea behind **recipes** is similar to `caret::preProcess()` where we want to create the preprocessing blueprint but apply it later and within each resample.[17](#fn17) There are three main steps in creating and applying feature engineering with **recipes**: 1. `recipe`: where you define your feature engineering steps to create your blueprint. 2. `prep`are: estimate feature engineering parameters based on training data. 3. `bake`: apply the blueprint to new data. The first step is where you define your blueprint (aka recipe). With this process, you supply the formula of interest (the target variable, features, and the data these are based on) with `recipe()` and then you sequentially add feature engineering steps with `step_xxx()`. For example, the following defines `Sale_Price` as the target variable and then uses all the remaining columns as features based on `ames_train`. We then: 1. Remove near\-zero variance features that are categorical (aka nominal). 2. Ordinal encode our quality\-based features (which are inherently ordinal). 3. Center and scale (i.e., standardize) all numeric features. 4. Perform dimension reduction by applying PCA to all numeric features. ``` blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_nzv(all_nominal()) %>% step_integer(matches("Qual|Cond|QC|Qu")) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) %>% step_pca(all_numeric(), -all_outcomes()) blueprint ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## predictor 80 ## ## Operations: ## ## Sparse, unbalanced variable filter on all_nominal ## Integer encoding for matches, Qual|Cond|QC|Qu ## Centering for all_numeric, -, all_outcomes() ## Scaling for all_numeric, -, all_outcomes() ## No PCA components were extracted. ``` Next, we need to train this blueprint on some training data. 
Remember, there are many feature engineering steps that we do not want to train on the test data (e.g., standardize and PCA) as this would create data leakage. So in this step we estimate these parameters based on the training data of interest.

```
prepare <- prep(blueprint, training = ames_train)
prepare
## Data Recipe
## 
## Inputs:
## 
##       role #variables
##    outcome          1
##  predictor         80
## 
## Training data contained 2053 data points and no missing data.
## 
## Operations:
## 
## Sparse, unbalanced variable filter removed Street, Alley, ... [trained]
## Integer encoding for Condition_1, Overall_Qual, Overall_Cond, ... [trained]
## Centering for Lot_Frontage, Lot_Area, ... [trained]
## Scaling for Lot_Frontage, Lot_Area, ... [trained]
## PCA extraction with Lot_Frontage, Lot_Area, ... [trained]
```

Lastly, we can apply our blueprint to new data (e.g., the training data or future test data) with `bake()`:

```
baked_train <- bake(prepare, new_data = ames_train)
baked_test <- bake(prepare, new_data = ames_test)
```

Consequently, the goal is to develop our blueprint, then within each resample iteration we want to apply `prep()` and `bake()` to our resample training and validation data. Luckily, the **caret** package simplifies this process. We only need to specify the blueprint and **caret** will automatically prepare and bake within each resample.
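To make the leakage\-avoidance idea concrete outside of **recipes**, here is a minimal base R sketch (using a small simulated data frame rather than `ames`, so every object name in it is purely illustrative) of what `prep()` and `bake()` are doing when standardizing a feature: the mean and standard deviation are estimated on the training rows only and then reused, unchanged, on any new data.

```
set.seed(123)
dat <- data.frame(x = rnorm(100, mean = 50, sd = 10))
train_rows <- sample(seq_len(nrow(dat)), 80)
train <- dat[train_rows, , drop = FALSE]
test  <- dat[-train_rows, , drop = FALSE]

# "prep": estimate the standardization parameters on the training data only
x_mean <- mean(train$x)
x_sd   <- sd(train$x)

# "bake": apply those training-based estimates to both data sets;
# no test-set statistics are ever computed
train$x_std <- (train$x - x_mean) / x_sd
test$x_std  <- (test$x - x_mean) / x_sd
```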
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/linear-regression.html
Chapter 4 Linear Regression =========================== *Linear regression*, a staple of classical statistical modeling, is one of the simplest algorithms for doing supervised learning. Though it may seem somewhat dull compared to some of the more modern statistical learning approaches described in later chapters, linear regression is still a useful and widely applied statistical learning method. Moreover, it serves as a good starting point for more advanced approaches; as we will see in later chapters, many of the more sophisticated statistical learning approaches can be seen as generalizations to or extensions of ordinary linear regression. Consequently, it is important to have a good understanding of linear regression before studying more complex learning methods. This chapter introduces linear regression with an emphasis on prediction, rather than inference. An excellent and comprehensive overview of linear regression is provided in Kutner et al. ([2005](#ref-kutner-2005-applied)). See Faraway ([2016](#ref-faraway-2016-linear)[b](#ref-faraway-2016-linear)) for a discussion of linear regression in R (the book’s website also provides Python scripts). 4\.1 Prerequisites ------------------ This chapter leverages the following packages: ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for awesome graphics # Modeling packages library(caret) # for cross-validation, etc. # Model interpretability packages library(vip) # variable importance ``` We’ll also continue working with the `ames_train` data set created in Section [2\.7](process.html#put-process-together). 4\.2 Simple linear regression ----------------------------- Pearson’s correlation coefficient is often used to quantify the strength of the linear association between two continuous variables. In this section, we seek to fully characterize that linear relationship. *Simple linear regression* (SLR) assumes that the statistical relationship between two continuous variables (say \\(X\\) and \\(Y\\)) is (at least approximately) linear: \\\[\\begin{equation} \\tag{4\.1} Y\_i \= \\beta\_0 \+ \\beta\_1 X\_i \+ \\epsilon\_i, \\quad \\text{for } i \= 1, 2, \\dots, n, \\end{equation}\\] where \\(Y\_i\\) represents the *i*\-th response value, \\(X\_i\\) represents the *i*\-th feature value, \\(\\beta\_0\\) and \\(\\beta\_1\\) are fixed, but unknown constants (commonly referred to as coefficients or parameters) that represent the intercept and slope of the regression line, respectively, and \\(\\epsilon\_i\\) represents noise or random error. In this chapter, we’ll assume that the errors are normally distributed with mean zero and constant variance \\(\\sigma^2\\), denoted \\(\\stackrel{iid}{\\sim} \\left(0, \\sigma^2\\right)\\). Since the random errors are centered around zero (i.e., \\(E\\left(\\epsilon\\right) \= 0\\)), linear regression is really a problem of estimating a *conditional mean*: \\\[\\begin{equation} E\\left(Y\_i \| X\_i\\right) \= \\beta\_0 \+ \\beta\_1 X\_i. \\end{equation}\\] For brevity, we often drop the conditional piece and write \\(E\\left(Y \| X\\right) \= E\\left(Y\\right)\\). Consequently, the interpretation of the coefficients is in terms of the average, or mean response. For example, the intercept \\(\\beta\_0\\) represents the average response value when \\(X \= 0\\) (it is often not meaningful or of interest and is sometimes referred to as a *bias term*). 
The slope \\(\\beta\_1\\) represents the increase in the average response per one\-unit increase in \\(X\\) (i.e., it is a *rate of change*).

### 4\.2\.1 Estimation

Ideally, we want estimates of \\(\\beta\_0\\) and \\(\\beta\_1\\) that give us the “best fitting” line. But what is meant by “best fitting”? The most common approach is to use the method of *least squares* (LS) estimation; this form of linear regression is often referred to as ordinary least squares (OLS) regression. There are multiple ways to measure “best fitting”, but the LS criterion finds the “best fitting” line by minimizing the *residual sum of squares* (RSS):

\\\[\\begin{equation} \\tag{4\.2} RSS\\left(\\beta\_0, \\beta\_1\\right) \= \\sum\_{i\=1}^n\\left\[Y\_i \- \\left(\\beta\_0 \+ \\beta\_1 X\_i\\right)\\right]^2 \= \\sum\_{i\=1}^n\\left(Y\_i \- \\beta\_0 \- \\beta\_1 X\_i\\right)^2\. \\end{equation}\\]

The LS estimates of \\(\\beta\_0\\) and \\(\\beta\_1\\) are denoted as \\(\\widehat{\\beta}\_0\\) and \\(\\widehat{\\beta}\_1\\), respectively. Once obtained, we can generate predicted values, say at \\(X \= X\_{new}\\), using the estimated regression equation:

\\\[\\begin{equation} \\widehat{Y}\_{new} \= \\widehat{\\beta}\_0 \+ \\widehat{\\beta}\_1 X\_{new}, \\end{equation}\\]

where \\(\\widehat{Y}\_{new} \= \\widehat{E\\left(Y\_{new} \| X \= X\_{new}\\right)}\\) is the estimated mean response at \\(X \= X\_{new}\\).

With the Ames housing data, suppose we wanted to model a linear relationship between the total above ground living space of a home (`Gr_Liv_Area`) and sale price (`Sale_Price`). To fit an OLS regression model in R, we can use the `lm()` function:

```
model1 <- lm(Sale_Price ~ Gr_Liv_Area, data = ames_train)
```

The fitted model (`model1`) is displayed in the left plot in Figure [4\.1](linear-regression.html#fig:04-visualize-model1) where the points represent the values of `Sale_Price` in the training data. In the right plot of Figure [4\.1](linear-regression.html#fig:04-visualize-model1), the vertical lines represent the individual errors, called *residuals*, associated with each observation. The OLS criterion in Equation [(4\.2\)](linear-regression.html#eq:least-squares-simple) identifies the “best fitting” line that minimizes the sum of squares of these residuals.

Figure 4\.1: The least squares fit from regressing sale price on living space for the Ames housing data. Left: Fitted regression line. Right: Fitted regression line with vertical grey bars representing the residuals.

The `coef()` function extracts the estimated coefficients from the model. We can also use `summary()` to get a more detailed report of the model results.

```
summary(model1)
## 
## Call:
## lm(formula = Sale_Price ~ Gr_Liv_Area, data = ames_train)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -361143  -30668   -2449   22838  331357 
## 
## Coefficients:
##             Estimate Std. Error t value            Pr(>|t|)    
## (Intercept) 8732.938   3996.613   2.185               0.029 *  
## Gr_Liv_Area  114.876      2.531  45.385 <0.0000000000000002 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 56700 on 2051 degrees of freedom
## Multiple R-squared:  0.5011, Adjusted R-squared:  0.5008 
## F-statistic:  2060 on 1 and 2051 DF,  p-value: < 0.00000000000000022
```

The estimated coefficients from our model are \\(\\widehat{\\beta}\_0 \=\\) 8732\.94 and \\(\\widehat{\\beta}\_1 \=\\) 114\.88\. To interpret, we estimate that the mean selling price increases by 114\.88 for each additional one square foot of above ground living space.
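As a brief aside, the closed\-form least squares solution that `lm()` is computing for SLR is \\(\\widehat{\\beta}\_1 \= \\sum\_{i\=1}^n (X\_i \- \\bar{X})(Y\_i \- \\bar{Y}) / \\sum\_{i\=1}^n (X\_i \- \\bar{X})^2\\) and \\(\\widehat{\\beta}\_0 \= \\bar{Y} \- \\widehat{\\beta}\_1 \\bar{X}\\). The short sketch below verifies this against `lm()`; it uses the built\-in `mtcars` data rather than `ames_train` so that it is self\-contained, and the object names are illustrative only.

```
x <- mtcars$wt
y <- mtcars$mpg

# closed-form least squares estimates
b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
b0 <- mean(y) - b1 * mean(x)
c(intercept = b0, slope = b1)

# identical estimates reported by lm()
coef(lm(mpg ~ wt, data = mtcars))
```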
This simple description of the relationship between the sale price and square footage using a single number (i.e., the slope) is what makes linear regression such an intuitive and popular modeling tool.

One drawback of the LS procedure in linear regression is that it only provides estimates of the coefficients; it does not provide an estimate of the error variance \\(\\sigma^2\\)! LS also makes no assumptions about the random errors. These assumptions are important for inference and in estimating the error variance which we’re assuming is a constant value \\(\\sigma^2\\). One way to estimate \\(\\sigma^2\\) (which is required for characterizing the variability of our fitted model) is to use the method of *maximum likelihood* (ML) estimation (see Kutner et al. ([2005](#ref-kutner-2005-applied)) Section 1\.7 for details). The ML procedure requires that we assume a particular distribution for the random errors. Most often, we assume the errors to be normally distributed. In practice, under the usual assumptions stated above, an unbiased estimate of the error variance is given as the sum of the squared residuals divided by \\(n \- p\\) (where \\(p\\) is the number of regression coefficients or parameters in the model):

\\\[\\begin{equation} \\widehat{\\sigma}^2 \= \\frac{1}{n \- p}\\sum\_{i \= 1} ^ n r\_i ^ 2, \\end{equation}\\]

where \\(r\_i \= \\left(Y\_i \- \\widehat{Y}\_i\\right)\\) is referred to as the \\(i\\)th residual (i.e., the difference between the \\(i\\)th observed and predicted response value). The quantity \\(\\widehat{\\sigma}^2\\) is also referred to as the *mean square error* (MSE) and its square root is denoted RMSE (see Section [2\.6](process.html#model-eval) for discussion on these metrics). In R, the RMSE of a linear model can be extracted using the `sigma()` function:

Typically, these error metrics are computed on a separate validation set or using cross\-validation as discussed in Section 2\.4; however, they can also be computed on the same training data the model was trained on as illustrated here.

```
sigma(model1)    # RMSE
## [1] 56704.78
sigma(model1)^2  # MSE
## [1] 3215432370
```

Note that the RMSE is also reported as the `Residual standard error` in the output from `summary()`.

### 4\.2\.2 Inference

How accurate are the LS estimates of \\(\\beta\_0\\) and \\(\\beta\_1\\)? Point estimates by themselves are not very useful. It is often desirable to associate some measure of an estimate’s variability. The variability of an estimate is often measured by its *standard error* (SE)—the square root of its variance. If we assume that the errors in the linear regression model are \\(\\stackrel{iid}{\\sim} \\left(0, \\sigma^2\\right)\\), then simple expressions for the SEs of the estimated coefficients exist and are displayed in the column labeled `Std. Error` in the output from `summary()`. From this, we can also derive simple \\(t\\)\-tests to understand if the individual coefficients are significantly different from zero. The *t*\-statistics for such a test are nothing more than the estimated coefficients divided by their corresponding estimated standard errors (i.e., in the output from `summary()`, `t value` \= `Estimate` / `Std. Error`). The reported *t*\-statistics measure the number of standard deviations each coefficient is away from 0\. Thus, large *t*\-statistics (greater than two in absolute value, say) roughly indicate statistical significance at the \\(\\alpha \= 0\.05\\) level.
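To see the `t value` \= `Estimate` / `Std. Error` relationship directly, here is a minimal sketch (again using the built\-in `mtcars` data so it runs anywhere; the object names are illustrative) that recomputes the *t*\-statistics by hand from the coefficient table returned by `summary()`:

```
fit <- lm(mpg ~ wt, data = mtcars)
coefs <- coef(summary(fit))  # columns: Estimate, Std. Error, t value, Pr(>|t|)

# manual t-statistics match the "t value" column
coefs[, "Estimate"] / coefs[, "Std. Error"]
coefs[, "t value"]
```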
The *p*\-values for these tests are also reported by `summary()` in the column labeled `Pr(>|t|)`. Under the same assumptions, we can also derive confidence intervals for the coefficients. The formula for the traditional \\(100\\left(1 \- \\alpha\\right)\\)% confidence interval for \\(\\beta\_j\\) is

\\\[\\begin{equation} \\widehat{\\beta}\_j \\pm t\_{1 \- \\alpha / 2, n \- p} \\widehat{SE}\\left(\\widehat{\\beta}\_j\\right). \\tag{4\.3} \\end{equation}\\]

In R, we can construct such (one\-at\-a\-time) confidence intervals for each coefficient using `confint()`. For example, 95% confidence intervals for the coefficients in our SLR example can be computed using

```
confint(model1, level = 0.95)
##                 2.5 %     97.5 %
## (Intercept)  895.0961 16570.7805
## Gr_Liv_Area  109.9121   119.8399
```

To interpret, we estimate with 95% confidence that the mean selling price increases between 109\.91 and 119\.84 for each additional one square foot of above ground living space. We can also conclude that the slope \\(\\beta\_1\\) is significantly different from zero (or any other pre\-specified value not included in the interval) at the \\(\\alpha \= 0\.05\\) level. This is also supported by the output from `summary()`.

Most statistical software, including R, will include estimated standard errors, *t*\-statistics, etc. as part of its regression output. However, it is important to remember that such quantities depend on three major assumptions of the linear regression model:

1. Independent observations
2. The random errors have mean zero and constant variance
3. The random errors are normally distributed

If any or all of these assumptions are violated, then remedial measures need to be taken. For instance, *weighted least squares* (and other procedures) can be used when the constant variance assumption is violated. Transformations (of both the response and features) can also help to correct departures from these assumptions. The residuals are extremely useful in helping to identify how parametric models depart from such assumptions.

4\.3 Multiple linear regression
-------------------------------

In practice, we often have more than one predictor. For example, with the Ames housing data, we may wish to understand if above ground square footage (`Gr_Liv_Area`) and the year the house was built (`Year_Built`) are (linearly) related to sale price (`Sale_Price`). We can extend the SLR model so that it can directly accommodate multiple predictors; this is referred to as the *multiple linear regression* (MLR) model. With two predictors, the MLR model becomes:

\\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\epsilon, \\end{equation}\\]

where \\(X\_1\\) and \\(X\_2\\) are features of interest. In our Ames housing example, \\(X\_1\\) represents `Gr_Liv_Area` and \\(X\_2\\) represents `Year_Built`. In R, multiple linear regression models can be fit by separating all the features of interest with a `+`:

```
(model2 <- lm(Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train))
## 
## Call:
## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train)
## 
## Coefficients:
## (Intercept)  Gr_Liv_Area   Year_Built  
## -2123054.21        99.18      1093.48
```

Alternatively, we can use `update()` to update the model formula used in `model1`. The new formula can use a `.` as shorthand for keeping everything on either the left or right hand side of the formula, and a `+` or `-` can be used to add or remove terms from the original model, respectively.
In the case of adding `Year_Built` to `model1`, we could’ve used:

```
(model2 <- update(model1, . ~ . + Year_Built))
## 
## Call:
## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train)
## 
## Coefficients:
## (Intercept)  Gr_Liv_Area   Year_Built  
## -2123054.21        99.18      1093.48
```

The LS estimates of the regression coefficients are \\(\\widehat{\\beta}\_1 \=\\) 99\.176 and \\(\\widehat{\\beta}\_2 \=\\) 1093\.485 (the estimated intercept is \-2123054\.207\). In other words, every one square foot increase in above ground square footage is associated with an additional $99\.18 in **mean selling price** when holding the year the house was built constant. Likewise, for every year newer a home is, there is approximately an increase of $1,093\.48 in selling price when holding the above ground square footage constant.

A contour plot of the fitted regression surface is displayed in the left side of Figure [4\.2](linear-regression.html#fig:04-mlr-fit) below. Note how the fitted regression surface is flat (i.e., it does not twist or bend). This is true for all linear models that include only *main effects* (i.e., terms involving only a single predictor). One way to model curvature is to include *interaction effects*. An interaction occurs when the effect of one predictor on the response depends on the values of other predictors. In linear regression, interactions can be captured via products of features (i.e., \\(X\_1 \\times X\_2\\)). A model with two main effects can also include a two\-way interaction. For example, to include an interaction between \\(X\_1 \=\\) `Gr_Liv_Area` and \\(X\_2 \=\\) `Year_Built`, we introduce an additional product term:

\\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\beta\_3 X\_1 X\_2 \+ \\epsilon. \\end{equation}\\]

Note that in R, we use the `:` operator to include an interaction (technically, we could use `*` as well, but `x1 * x2` is shorthand for `x1 + x2 + x1:x2` so is slightly redundant):

```
lm(Sale_Price ~ Gr_Liv_Area + Year_Built + Gr_Liv_Area:Year_Built, data = ames_train)
## 
## Call:
## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built + Gr_Liv_Area:Year_Built, 
##     data = ames_train)
## 
## Coefficients:
##            (Intercept)             Gr_Liv_Area              Year_Built  
##            382194.3015              -1483.8810               -179.7979  
## Gr_Liv_Area:Year_Built  
##                 0.8037
```

A contour plot of the fitted regression surface with interaction is displayed in the right side of Figure [4\.2](linear-regression.html#fig:04-mlr-fit). Note the curvature in the contour lines. Interaction effects are quite prevalent in predictive modeling. Since linear models are an example of parametric modeling, it is up to the analyst to decide if and when to include interaction effects. In later chapters, we’ll discuss algorithms that can automatically detect and incorporate interaction effects (albeit in different ways). It is also important to understand a concept called the *hierarchy principle*—which demands that all lower\-order terms corresponding to an interaction be retained in the model—when considering interaction effects in linear regression models.

Figure 4\.2: In a three\-dimensional setting, with two predictors and one response, the least squares regression line becomes a plane. The ‘best\-fit’ plane minimizes the sum of squared errors between the actual sales price (individual dots) and the predicted sales price (plane).

In general, we can include as many predictors as we want, as long as we have more rows than parameters!
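Returning briefly to the interaction example above: to see exactly which columns R constructs from an interaction formula, a small sketch with `model.matrix()` on a toy two\-column data frame (not part of the `ames` analysis) makes the `x1 * x2` expansion explicit.

```
toy <- data.frame(x1 = c(1, 2, 3, 4), x2 = c(10, 20, 30, 40))

# x1 * x2 expands to an intercept, x1, x2, and the product term x1:x2
model.matrix(~ x1 * x2, data = toy)
```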
The general multiple linear regression model with *p* distinct predictors is \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\cdots \+ \\beta\_p X\_p \+ \\epsilon, \\end{equation}\\] where \\(X\_i\\) for \\(i \= 1, 2, \\dots, p\\) are the predictors of interest. Note some of these may represent interactions (e.g., \\(X\_3 \= X\_1 \\times X\_2\\)) between or transformations[18](#fn18) (e.g., \\(X\_4 \= \\sqrt{X\_1}\\)) of the original features. Unfortunately, visualizing beyond three dimensions is not practical as our best\-fit plane becomes a hyperplane. However, the motivation remains the same where the best\-fit hyperplane is identified by minimizing the RSS. The code below creates a third model where we use all features in our data set as main effects (i.e., no interaction terms) to predict `Sale_Price`. ``` # include all possible main effects model3 <- lm(Sale_Price ~ ., data = ames_train) # print estimated coefficients in a tidy data frame broom::tidy(model3) ## # A tibble: 283 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 (Intercept) -5.61e6 11261881. -0.498 0.618 ## 2 MS_SubClassOne_Story_1945_and_Older 3.56e3 3843. 0.926 0.355 ## 3 MS_SubClassOne_Story_with_Finished… 1.28e4 12834. 0.997 0.319 ## 4 MS_SubClassOne_and_Half_Story_Unfi… 8.73e3 12871. 0.678 0.498 ## 5 MS_SubClassOne_and_Half_Story_Fini… 4.11e3 6226. 0.660 0.509 ## 6 MS_SubClassTwo_Story_1946_and_Newer -1.09e3 5790. -0.189 0.850 ## 7 MS_SubClassTwo_Story_1945_and_Older 7.14e3 6349. 1.12 0.261 ## 8 MS_SubClassTwo_and_Half_Story_All_… -1.39e4 11003. -1.27 0.206 ## 9 MS_SubClassSplit_or_Multilevel -1.15e4 10512. -1.09 0.276 ## 10 MS_SubClassSplit_Foyer -4.39e3 8057. -0.545 0.586 ## # … with 273 more rows ``` 4\.4 Assessing model accuracy ----------------------------- We’ve fit three main effects models to the Ames housing data: a single predictor, two predictors, and all possible predictors. But the question remains, which model is “best”? To answer this question we have to define what we mean by “best”. In our case, we’ll use the RMSE metric and cross\-validation (Section [2\.4](process.html#resampling)) to determine the “best” model. We can use the `caret::train()` function to train a linear model (i.e., `method = "lm"`) using cross\-validation (or a variety of other validation methods). In practice, a number of factors should be considered in determining a “best” model (e.g., time constraints, model production cost, predictive accuracy, etc.). The benefit of **caret** is that it provides built\-in cross\-validation capabilities, whereas the `lm()` function does not[19](#fn19). The following code chunk uses `caret::train()` to refit `model1` using 10\-fold cross\-validation: ``` # Train model using 10-fold cross-validation set.seed(123) # for reproducibility (cv_model1 <- train( form = Sale_Price ~ Gr_Liv_Area, data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) )) ## Linear Regression ## ## 2053 samples ## 1 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold) ## Summary of sample sizes: 1846, 1848, 1848, 1848, 1848, 1848, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 56410.89 0.5069425 39169.09 ## ## Tuning parameter 'intercept' was held constant at a value of TRUE ``` The resulting cross\-validated RMSE is $56,410\.89 (this is the average RMSE across the 10 CV folds). How should we interpret this? 
When applied to unseen data, the predictions this model makes are, on average, about $56,410\.89 off from the actual sale price. We can perform cross\-validation on the other two models in a similar fashion, which we do in the code chunk below. ``` # model 2 CV set.seed(123) cv_model2 <- train( Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) ) # model 3 CV set.seed(123) cv_model3 <- train( Sale_Price ~ ., data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) ) # Extract out of sample performance measures summary(resamples(list( model1 = cv_model1, model2 = cv_model2, model3 = cv_model3 ))) ## ## Call: ## summary.resamples(object = resamples(list(model1 = cv_model1, model2 ## = cv_model2, model3 = cv_model3))) ## ## Models: model1, model2, model3 ## Number of resamples: 10 ## ## MAE ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 34457.58 36323.74 38943.81 39169.09 41660.81 45005.17 0 ## model2 28094.79 30594.47 31959.30 32246.86 34210.70 37441.82 0 ## model3 12458.27 15420.10 16484.77 16258.84 17262.39 19029.29 0 ## ## RMSE ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 47211.34 52363.41 54948.96 56410.89 60672.31 67679.05 0 ## model2 37698.17 42607.11 45407.14 46292.38 49668.59 54692.06 0 ## model3 20844.33 22581.04 24947.45 26098.00 27695.65 39521.49 0 ## ## Rsquared ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 0.3598237 0.4550791 0.5289068 0.5069425 0.5619841 0.5965793 0 ## model2 0.5714665 0.6392504 0.6800818 0.6703298 0.7067458 0.7348562 0 ## model3 0.7869022 0.9018567 0.9104351 0.8949642 0.9166564 0.9303504 0 ``` Extracting the results for each model, we see that by adding more information via more predictors, we are able to improve the out\-of\-sample cross validation performance metrics. Specifically, our cross\-validated RMSE reduces from $46,292\.38 (the model with two predictors) down to $26,098\.00 (for our full model). In this case, the model with all possible main effects performs the “best” (compared with the other two). 4\.5 Model concerns ------------------- As previously stated, linear regression has been a popular modeling tool due to the ease of interpreting the coefficients. However, linear regression makes several strong assumptions that are often violated as we include more predictors in our model. Violation of these assumptions can lead to flawed interpretation of the coefficients and prediction results. **1\. Linear relationship:** Linear regression assumes a linear relationship between the predictor and the response variable. However, as discussed in Chapter [3](engineering.html#engineering), non\-linear relationships can be made linear (or near\-linear) by applying transformations to the response and/or predictors. For example, Figure [4\.3](linear-regression.html#fig:04-linear-relationship) illustrates the relationship between sale price and the year a home was built. The left plot illustrates the non\-linear relationship that exists. However, we can achieve a near\-linear relationship by log transforming sale price, although some non\-linearity still exists for older homes. 
```
p1 <- ggplot(ames_train, aes(Year_Built, Sale_Price)) + 
  geom_point(size = 1, alpha = .4) +
  geom_smooth(se = FALSE) +
  scale_y_continuous("Sale price", labels = scales::dollar) +
  xlab("Year built") +
  ggtitle(paste("Non-transformed variables with a\n",
                "non-linear relationship."))

p2 <- ggplot(ames_train, aes(Year_Built, Sale_Price)) + 
  geom_point(size = 1, alpha = .4) +
  geom_smooth(method = "lm", se = FALSE) +
  scale_y_log10("Sale price", labels = scales::dollar, 
                breaks = seq(0, 400000, by = 100000)) +
  xlab("Year built") +
  ggtitle(paste("Transforming variables can provide a\n",
                "near-linear relationship."))

gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 4\.3: Linear regression assumes a linear relationship between the predictor(s) and the response variable; however, non\-linear relationships can often be altered to be near\-linear by applying a transformation to the variable(s).

**2\. Constant variance among residuals:** Linear regression assumes the variance among error terms (\\(\\epsilon\_1, \\epsilon\_2, \\dots, \\epsilon\_p\\)) is constant (this assumption is referred to as homoscedasticity). If the error variance is not constant, the *p*\-values and confidence intervals for the coefficients will be invalid. Similar to the linear relationship assumption, non\-constant variance can often be resolved with variable transformations or by including additional predictors. For example, Figure [4\.4](linear-regression.html#fig:04-homoskedasticity) shows the residuals vs. predicted values for `model1` and `model3`. `model1` displays a classic violation of constant variance as indicated by the cone\-shaped pattern. However, `model3` appears to have near\-constant variance.

The `broom::augment` function is an easy way to add model results to each observation (e.g., predicted values, residuals).

```
df1 <- broom::augment(cv_model1$finalModel, data = ames_train)

p1 <- ggplot(df1, aes(.fitted, .resid)) + 
  geom_point(size = 1, alpha = .4) +
  xlab("Predicted values") +
  ylab("Residuals") +
  ggtitle("Model 1", subtitle = "Sale_Price ~ Gr_Liv_Area")

df2 <- broom::augment(cv_model3$finalModel, data = ames_train)

p2 <- ggplot(df2, aes(.fitted, .resid)) + 
  geom_point(size = 1, alpha = .4) +
  xlab("Predicted values") +
  ylab("Residuals") +
  ggtitle("Model 3", subtitle = "Sale_Price ~ .")

gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 4\.4: Linear regression assumes constant variance among the residuals. `model1` (left) shows definitive signs of heteroskedasticity whereas `model3` (right) appears to have constant variance.

**3\. No autocorrelation:** Linear regression assumes the errors are independent and uncorrelated. If, in fact, there is correlation among the errors, then the estimated standard errors of the coefficients will be biased, leading to prediction intervals being narrower than they should be. For example, the left plot in Figure [4\.5](linear-regression.html#fig:04-autocorrelation) displays the residuals (\\(y\\)\-axis) vs. the observation ID (\\(x\\)\-axis) for `model1`. A clear pattern exists, suggesting that information about \\(\\epsilon\_1\\) provides information about \\(\\epsilon\_2\\). This pattern is a result of the data being ordered by neighborhood, which we have not accounted for in this model. Consequently, the residuals for homes in the same neighborhood are correlated (homes within a neighborhood are typically the same size and can often contain similar features). Since the `Neighborhood` predictor is included in `model3` (right plot), the correlation in the errors is reduced.
```
df1 <- mutate(df1, id = row_number())
df2 <- mutate(df2, id = row_number())

p1 <- ggplot(df1, aes(id, .resid)) + 
  geom_point(size = 1, alpha = .4) +
  xlab("Row ID") +
  ylab("Residuals") +
  ggtitle("Model 1", subtitle = "Correlated residuals.")

p2 <- ggplot(df2, aes(id, .resid)) + 
  geom_point(size = 1, alpha = .4) +
  xlab("Row ID") +
  ylab("Residuals") +
  ggtitle("Model 3", subtitle = "Uncorrelated residuals.")

gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 4\.5: Linear regression assumes uncorrelated errors. The residuals in `model1` (left) have a distinct pattern suggesting that information about \\(\\epsilon\_1\\) provides information about \\(\\epsilon\_2\\), whereas `model3` has no signs of autocorrelation.

**4\. More observations than predictors:** Although not an issue with the Ames housing data, when the number of features exceeds the number of observations (\\(p \> n\\)), the OLS estimates are not obtainable. To resolve this issue, an analyst can remove variables one\-at\-a\-time until \\(p \< n\\). Although pre\-processing tools can be used to guide this manual approach (Kuhn and Johnson [2013](#ref-apm), 26:43–47\), it can be cumbersome and prone to errors. In Chapter [6](regularized-regression.html#regularized-regression) we’ll introduce regularized regression which provides an alternative to OLS that can be used when \\(p \> n\\).

**5\. No or little multicollinearity:** *Collinearity* refers to the situation in which two or more predictor variables are closely related to one another. The presence of collinearity can pose problems for OLS, since it can be difficult to separate out the individual effects of collinear variables on the response. In fact, collinearity can cause predictor variables to appear as statistically insignificant when in fact they are significant. This obviously leads to an inaccurate interpretation of coefficients and makes it difficult to identify influential predictors.

In `ames`, for example, `Garage_Area` and `Garage_Cars` are two variables that have a correlation of 0\.89 and both variables are strongly related to our response variable (`Sale_Price`). Looking at our full model where both of these variables are included, we see that `Garage_Area` is found to be statistically significant but `Garage_Cars` is not:

```
# fit with two strongly correlated variables
summary(cv_model3) %>%
  broom::tidy() %>%
  filter(term %in% c("Garage_Area", "Garage_Cars"))
## # A tibble: 2 x 5
##   term        estimate std.error statistic p.value
##   <chr>          <dbl>     <dbl>     <dbl>   <dbl>
## 1 Garage_Cars    3021.     1771.      1.71 0.0882 
## 2 Garage_Area      19.7      6.03     3.26 0.00112
```

However, if we refit the full model without `Garage_Cars`, the coefficient estimate for `Garage_Area` increases substantially and its statistical significance strengthens considerably.

```
# model without Garage_Cars
set.seed(123)
mod_wo_Garage_Cars <- train(
  Sale_Price ~ ., 
  data = select(ames_train, -Garage_Cars), 
  method = "lm",
  trControl = trainControl(method = "cv", number = 10)
)

summary(mod_wo_Garage_Cars) %>%
  broom::tidy() %>%
  filter(term == "Garage_Area")
## # A tibble: 1 x 5
##   term        estimate std.error statistic  p.value
##   <chr>          <dbl>     <dbl>     <dbl>    <dbl>
## 1 Garage_Area     27.0      4.21      6.43 1.69e-10
```

This reflects the instability in the linear regression model caused by between\-predictor relationships; this instability also gets propagated directly to the model predictions.
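Before deciding how to handle collinear predictors, it can help to screen for them explicitly. The following is a hedged sketch of one way to do so using the pairwise correlation matrix and `caret::findCorrelation()`; it runs on the all\-numeric `mtcars` data so it is self\-contained, and the 0\.75 cutoff is an arbitrary choice for illustration, but the same pattern applies to the numeric columns of `ames_train`.

```
library(caret)

cor_mat <- cor(mtcars)  # all mtcars columns are numeric

# indices of columns flagged for potential removal due to high pairwise correlation
high_cor <- findCorrelation(cor_mat, cutoff = 0.75)
colnames(cor_mat)[high_cor]
```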
Considering 16 of our 34 numeric predictors have a medium to strong correlation (Chapter [17](pca.html#pca)), the biased coefficients of these predictors are likely restricting the predictive accuracy of our model. How can we control for this problem? One option is to manually remove the offending predictors (one\-at\-a\-time) until all pairwise correlations are below some pre\-determined threshold. However, when the number of predictors is large, such as in our case, this becomes tedious. Moreover, multicollinearity can arise when one feature is linearly related to two or more features (which is more difficult to detect[20](#fn20)). In these cases, manual removal of specific predictors may not be possible. Consequently, the following sections offer two simple extensions of linear regression where dimension reduction is applied prior to fitting the regression model. Chapter [6](regularized-regression.html#regularized-regression) offers a modified regression approach that helps to deal with the problem. And future chapters provide alternative methods that are less affected by multicollinearity.

4\.6 Principal component regression
-----------------------------------

As mentioned in Section [3\.7](engineering.html#feature-reduction) and fully discussed in Chapter [17](pca.html#pca), principal components analysis can be used to represent correlated variables with a smaller number of uncorrelated features (called principal components) and the resulting components can be used as predictors in a linear regression model. This two\-step process is known as *principal component regression* (PCR) (Massy [1965](#ref-massy1965principal)) and is illustrated in Figure [4\.6](linear-regression.html#fig:pcr-steps).

Figure 4\.6: A depiction of the steps involved in performing principal component regression.

Performing PCR with **caret** is an easy extension from our previous model. We simply specify `method = "pcr"` within `train()` to perform PCA on all our numeric predictors prior to fitting the model. Often, we can greatly improve performance by only using a small subset of all principal components as predictors. Consequently, you can think of the number of principal components as a tuning parameter (see Section [2\.5\.3](process.html#tune-overfit)). The following performs cross\-validated PCR with \\(1, 2, \\dots, 100\\) principal components, and Figure [4\.7](linear-regression.html#fig:pcr-regression) illustrates the cross\-validated RMSE. You can see a significant drop in prediction error from our previous linear models using just five principal components, followed by a gradual decrease thereafter. However, you may realize that it takes nearly 100 principal components to reach a minimum RMSE (see `cv_model_pcr` for a comparison of the cross\-validated results).

Note in the below example we use `preProcess` to remove near\-zero variance features and center/scale the numeric features. We then use `method = "pcr"`. This is equivalent to creating a blueprint as illustrated in Section 3\.8\.3 to remove near\-zero variance features, center/scale the numeric features, perform PCA on the numeric features, then feeding that blueprint into `train()` with `method = "lm"`.
```
# perform 10-fold cross validation on a PCR model tuning the 
# number of principal components to use as predictors from 1-100
set.seed(123)
cv_model_pcr <- train(
  Sale_Price ~ ., 
  data = ames_train, 
  method = "pcr",
  trControl = trainControl(method = "cv", number = 10),
  preProcess = c("zv", "center", "scale"),
  tuneLength = 100
)

# model with lowest RMSE
cv_model_pcr$bestTune
##    ncomp
## 97    97

# results for model with lowest RMSE
cv_model_pcr$results %>%
  dplyr::filter(ncomp == pull(cv_model_pcr$bestTune))
##   ncomp     RMSE  Rsquared      MAE   RMSESD RsquaredSD    MAESD
## 1    97 30135.51 0.8615453 20143.42 5191.887 0.03764501 1696.534

# plot cross-validated RMSE
ggplot(cv_model_pcr)
```

Figure 4\.7: The 10\-fold cross validation RMSE obtained using PCR with 1\-100 principal components.

By controlling for multicollinearity with PCR, we can experience significant improvement in our predictive accuracy compared to the previously obtained linear models (reducing the cross\-validated RMSE from about $37,000 to nearly $30,000\), which beats the *k*\-nearest neighbor model illustrated in Section [3\.8\.3](engineering.html#engineering-process-example).

It’s important to note that since PCR is a two\-step process, the PCA step does not consider any aspects of the response when it selects the components. Consequently, the new predictors produced by the PCA step are not designed to maximize the relationship with the response. Instead, it simply seeks to reduce the variability present throughout the predictor space. If that variability happens to be related to the response variability, then PCR has a good chance to identify a predictive relationship, as in our case. If, however, the variability in the predictor space is not related to the variability of the response, then PCR can have difficulty identifying a predictive relationship when one might actually exist (i.e., we may actually experience a decrease in our predictive accuracy).

An alternative approach to reduce the impact of multicollinearity is partial least squares.

4\.7 Partial least squares
--------------------------

*Partial least squares* (PLS) can be viewed as a supervised dimension reduction procedure (Kuhn and Johnson [2013](#ref-apm)). Similar to PCR, this technique also constructs a set of linear combinations of the inputs for regression, but unlike PCR it uses the response variable to aid the construction of the principal components as illustrated in Figure [4\.8](linear-regression.html#fig:pcr-vs-pls)[21](#fn21). Thus, we can think of PLS as a supervised dimension reduction procedure that finds new features that not only capture most of the information in the original features, but are also related to the response.

Figure 4\.8: A diagram depicting the differences between PCR (left) and PLS (right). PCR finds principal components (PCs) that maximally summarize the features independent of the response variable and then uses those PCs as predictor variables. PLS finds components that simultaneously summarize variation of the predictors while being optimally correlated with the outcome and then uses those PCs as predictors.

We illustrate PLS with some exemplar data[22](#fn22). Figure [4\.9](linear-regression.html#fig:pls-vs-pcr-relationship) illustrates that the first two PCs when using PCR have very little relationship to the response variable; however, the first two PCs when using PLS have a much stronger association to the response.
Figure 4\.9: Illustration showing that the first two PCs when using PCR have very little relationship to the response variable (top row); however, the first two PCs when using PLS have a much stronger association to the response (bottom row).

Referring to Equation [(17\.1\)](pca.html#eq:pca1) in Chapter [17](pca.html#pca), PLS will compute the first principal component (\\(z\_1\\)) by setting each \\(\\phi\_{j1}\\) to the coefficient from an SLR model of \\(y\\) onto that respective \\(x\_j\\). One can show that this coefficient is proportional to the correlation between \\(y\\) and \\(x\_j\\). Hence, in computing \\(z\_1 \= \\sum^p\_{j\=1} \\phi\_{j1}x\_j\\), PLS places the highest weight on the variables that are most strongly related to the response. To compute the second PC (\\(z\_2\\)), we first regress each variable on \\(z\_1\\). The residuals from this regression capture the remaining signal that has not been explained by the first PC. We substitute these residual values for the predictor values in Equation [(17\.2\)](pca.html#eq:pca2) in Chapter [17](pca.html#pca). This process continues until all \\(m\\) components have been computed and then we use OLS to regress the response on \\(z\_1, \\dots, z\_m\\). See J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) and Geladi and Kowalski ([1986](#ref-geladi1986partial)) for a thorough discussion of PLS.

Similar to PCR, we can easily fit a PLS model by changing the `method` argument in `train()`. As with PCR, the number of principal components to use is a tuning parameter that is determined by the model that maximizes predictive accuracy (minimizes RMSE in this case). The following performs cross\-validated PLS with \\(1, 2, \\dots, 30\\) PCs, and Figure [4\.10](linear-regression.html#fig:pls-regression) shows the cross\-validated RMSEs. You can see a greater drop in prediction error compared to PCR, and we reach this minimum RMSE with far fewer principal components because they are guided by the response.

```
# perform 10-fold cross validation on a PLS model tuning the 
# number of principal components to use as predictors from 1-30
set.seed(123)
cv_model_pls <- train(
  Sale_Price ~ ., 
  data = ames_train, 
  method = "pls",
  trControl = trainControl(method = "cv", number = 10),
  preProcess = c("zv", "center", "scale"),
  tuneLength = 30
)

# model with lowest RMSE
cv_model_pls$bestTune
##    ncomp
## 20    20

# results for model with lowest RMSE
cv_model_pls$results %>%
  dplyr::filter(ncomp == pull(cv_model_pls$bestTune))
##   ncomp     RMSE  Rsquared      MAE   RMSESD RsquaredSD   MAESD
## 1    20 25459.51 0.8998194 16022.68 5243.478 0.04278512 1665.61

# plot cross-validated RMSE
ggplot(cv_model_pls)
```

Figure 4\.10: The 10\-fold cross validation RMSE obtained using PLS with 1\-30 principal components.

4\.8 Feature interpretation
---------------------------

Once we’ve found the model that maximizes the predictive accuracy, our next goal is to interpret the model structure. Linear regression models provide a very intuitive model structure as they assume a *monotonic linear relationship* between the predictor variables and the response. The *linear* relationship part of that statement just means that, for every one unit change in a given predictor variable, there is a constant change in the response. As discussed earlier in the chapter, this constant rate of change is provided by the coefficient for a predictor. The *monotonic* relationship means that a given predictor variable will always have a positive or negative relationship.
But how do we determine the most influential variables? Variable importance seeks to identify those variables that are most influential in our model. For linear regression models, this is most often measured by the absolute value of the *t*\-statistic for each model parameter used; though simple, the results can be hard to interpret when the model includes interaction effects and complex transformations (in Chapter [16](iml.html#iml) we’ll discuss *model\-agnostic* approaches that don’t have this issue).

For a PLS model, variable importance can be computed using the weighted sums of the absolute regression coefficients. The weights are a function of the reduction of the RSS across the number of PLS components and are computed separately for each outcome. Therefore, the contributions of the coefficients are weighted proportionally to the reduction in the RSS.

We can use `vip::vip()` to extract and plot the most important variables. The importance measure is normalized from 100 (most important) to 0 (least important). Figure [4\.11](linear-regression.html#fig:pls-vip) illustrates that the top 4 most important variables are `Gr_Liv_Area`, `Total_Bsmt_SF`, `First_Flr_SF`, and `Garage_Area`, respectively.

```
vip(cv_model_pls, num_features = 20, method = "model")
```

Figure 4\.11: Top 20 most important variables for the PLS model.

As stated earlier, linear regression models assume a monotonic linear relationship. To illustrate this, we can construct partial dependence plots (PDPs). PDPs plot the change in the average predicted value (\\(\\widehat{y}\\)) as specified feature(s) vary over their marginal distribution. As you will see in later chapters, PDPs become more useful when non\-linear relationships are present (we discuss PDPs and other ML interpretation techniques in Chapter [16](iml.html#iml)). However, PDPs of linear models help illustrate how a fixed change in \\(x\_i\\) relates to a fixed linear change in \\(\\widehat{y}\_i\\) while taking into account the average effect of all the other features in the model (for linear models, the slope of the PDP is equal to the corresponding feature’s OLS coefficient).

The **pdp** package (Brandon Greenwell [2018](#ref-R-pdp)) provides convenient functions for computing and plotting PDPs. For example, the following code chunk would plot the PDP for the `Gr_Liv_Area` predictor.

`pdp::partial(cv_model_pls, "Gr_Liv_Area", grid.resolution = 20, plot = TRUE)`

All four of the most important predictors have a positive relationship with sale price; however, we see that the slope (\\(\\widehat{\\beta}\_i\\)) is steepest for the most important predictor and gradually decreases for less important variables.

Figure 4\.12: Partial dependence plots for the first four most important variables.

4\.9 Final thoughts
-------------------

Linear regression is usually the first supervised learning algorithm you will learn. The approach provides a solid fundamental understanding of the supervised learning task; however, as we’ve discussed, there are several concerns that result from the assumptions required. Although extensions of linear regression that integrate dimension reduction steps into the algorithm can help address some of the problems with linear regression, more advanced supervised algorithms typically provide greater flexibility and improved accuracy. Nonetheless, understanding linear regression provides a foundation that will serve you well in learning these more advanced methods.
4\.1 Prerequisites ------------------ This chapter leverages the following packages: ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for awesome graphics # Modeling packages library(caret) # for cross-validation, etc. # Model interpretability packages library(vip) # variable importance ``` We’ll also continue working with the `ames_train` data set created in Section [2\.7](process.html#put-process-together). 4\.2 Simple linear regression ----------------------------- Pearson’s correlation coefficient is often used to quantify the strength of the linear association between two continuous variables. In this section, we seek to fully characterize that linear relationship. *Simple linear regression* (SLR) assumes that the statistical relationship between two continuous variables (say \\(X\\) and \\(Y\\)) is (at least approximately) linear: \\\[\\begin{equation} \\tag{4\.1} Y\_i \= \\beta\_0 \+ \\beta\_1 X\_i \+ \\epsilon\_i, \\quad \\text{for } i \= 1, 2, \\dots, n, \\end{equation}\\] where \\(Y\_i\\) represents the *i*\-th response value, \\(X\_i\\) represents the *i*\-th feature value, \\(\\beta\_0\\) and \\(\\beta\_1\\) are fixed, but unknown constants (commonly referred to as coefficients or parameters) that represent the intercept and slope of the regression line, respectively, and \\(\\epsilon\_i\\) represents noise or random error. In this chapter, we’ll assume that the errors are normally distributed with mean zero and constant variance \\(\\sigma^2\\), denoted \\(\\stackrel{iid}{\\sim} \\left(0, \\sigma^2\\right)\\). Since the random errors are centered around zero (i.e., \\(E\\left(\\epsilon\\right) \= 0\\)), linear regression is really a problem of estimating a *conditional mean*: \\\[\\begin{equation} E\\left(Y\_i \| X\_i\\right) \= \\beta\_0 \+ \\beta\_1 X\_i. \\end{equation}\\] For brevity, we often drop the conditional piece and write \\(E\\left(Y \| X\\right) \= E\\left(Y\\right)\\). Consequently, the interpretation of the coefficients is in terms of the average, or mean response. For example, the intercept \\(\\beta\_0\\) represents the average response value when \\(X \= 0\\) (it is often not meaningful or of interest and is sometimes referred to as a *bias term*). The slope \\(\\beta\_1\\) represents the increase in the average response per one\-unit increase in \\(X\\) (i.e., it is a *rate of change*). ### 4\.2\.1 Estimation Ideally, we want estimates of \\(\\beta\_0\\) and \\(\\beta\_1\\) that give us the “best fitting” line. But what is meant by “best fitting”? The most common approach is to use the method of *least squares* (LS) estimation; this form of linear regression is often referred to as ordinary least squares (OLS) regression. There are multiple ways to measure “best fitting”, but the LS criterion finds the “best fitting” line by minimizing the *residual sum of squares* (RSS): \\\[\\begin{equation} \\tag{4\.2} RSS\\left(\\beta\_0, \\beta\_1\\right) \= \\sum\_{i\=1}^n\\left\[Y\_i \- \\left(\\beta\_0 \+ \\beta\_1 X\_i\\right)\\right]^2 \= \\sum\_{i\=1}^n\\left(Y\_i \- \\beta\_0 \- \\beta\_1 X\_i\\right)^2\. \\end{equation}\\] The LS estimates of \\(\\beta\_0\\) and \\(\\beta\_1\\) are denoted as \\(\\widehat{\\beta}\_0\\) and \\(\\widehat{\\beta}\_1\\), respectively. 
Once obtained, we can generate predicted values, say at \\(X \= X\_{new}\\), using the estimated regression equation: \\\[\\begin{equation} \\widehat{Y}\_{new} \= \\widehat{\\beta}\_0 \+ \\widehat{\\beta}\_1 X\_{new}, \\end{equation}\\] where \\(\\widehat{Y}\_{new} \= \\widehat{E\\left(Y\_{new} \| X \= X\_{new}\\right)}\\) is the estimated mean response at \\(X \= X\_{new}\\). With the Ames housing data, suppose we wanted to model a linear relationship between the total above ground living space of a home (`Gr_Liv_Area`) and sale price (`Sale_Price`). To perform an OLS regression model in R we can use the `lm()` function: ``` model1 <- lm(Sale_Price ~ Gr_Liv_Area, data = ames_train) ``` The fitted model (`model1`) is displayed in the left plot in Figure [4\.1](linear-regression.html#fig:04-visualize-model1) where the points represent the values of `Sale_Price` in the training data. In the right plot of Figure [4\.1](linear-regression.html#fig:04-visualize-model1), the vertical lines represent the individual errors, called *residuals*, associated with each observation. The OLS criterion in Equation [(4\.2\)](linear-regression.html#eq:least-squares-simple) identifies the “best fitting” line that minimizes the sum of squares of these residuals. Figure 4\.1: The least squares fit from regressing sale price on living space for the the Ames housing data. Left: Fitted regression line. Right: Fitted regression line with vertical grey bars representing the residuals. The `coef()` function extracts the estimated coefficients from the model. We can also use `summary()` to get a more detailed report of the model results. ``` summary(model1) ## ## Call: ## lm(formula = Sale_Price ~ Gr_Liv_Area, data = ames_train) ## ## Residuals: ## Min 1Q Median 3Q Max ## -361143 -30668 -2449 22838 331357 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 8732.938 3996.613 2.185 0.029 * ## Gr_Liv_Area 114.876 2.531 45.385 <0.0000000000000002 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 56700 on 2051 degrees of freedom ## Multiple R-squared: 0.5011, Adjusted R-squared: 0.5008 ## F-statistic: 2060 on 1 and 2051 DF, p-value: < 0.00000000000000022 ``` The estimated coefficients from our model are \\(\\widehat{\\beta}\_0 \=\\) 8732\.94 and \\(\\widehat{\\beta}\_1 \=\\) 114\.88\. To interpret, we estimate that the mean selling price increases by 114\.88 for each additional one square foot of above ground living space. This simple description of the relationship between the sale price and square footage using a single number (i.e., the slope) is what makes linear regression such an intuitive and popular modeling tool. One drawback of the LS procedure in linear regression is that it only provides estimates of the coefficients; it does not provide an estimate of the error variance \\(\\sigma^2\\)! LS also makes no assumptions about the random errors. These assumptions are important for inference and in estimating the error variance which we’re assuming is a constant value \\(\\sigma^2\\). One way to estimate \\(\\sigma^2\\) (which is required for characterizing the variability of our fitted model), is to use the method of *maximum likelihood* (ML) estimation (see Kutner et al. ([2005](#ref-kutner-2005-applied)) Section 1\.7 for details). The ML procedure requires that we assume a particular distribution for the random errors. Most often, we assume the errors to be normally distributed. 
In practice, under the usual assumptions stated above, an unbiased estimate of the error variance is given as the sum of the squared residuals divided by \\(n \- p\\) (where \\(p\\) is the number of regression coefficients or parameters in the model): \\\[\\begin{equation} \\widehat{\\sigma}^2 \= \\frac{1}{n \- p}\\sum\_{i \= 1} ^ n r\_i ^ 2, \\end{equation}\\] where \\(r\_i \= \\left(Y\_i \- \\widehat{Y}\_i\\right)\\) is referred to as the \\(i\\)th residual (i.e., the difference between the \\(i\\)th observed and predicted response value). The quantity \\(\\widehat{\\sigma}^2\\) is also referred to as the *mean square error* (MSE) and its square root is denoted RMSE (see Section [2\.6](process.html#model-eval) for discussion on these metrics). In R, the RMSE of a linear model can be extracted using the `sigma()` function: Typically, these error metrics are computed on a separate validation set or using cross\-validation as discussed in Section 2\.4; however, they can also be computed on the same training data the model was trained on as illustrated here. ``` sigma(model1) # RMSE ## [1] 56704.78 sigma(model1)^2 # MSE ## [1] 3215432370 ``` Note that the RMSE is also reported as the `Residual standard error` in the output from `summary()`. ### 4\.2\.2 Inference How accurate are the LS of \\(\\beta\_0\\) and \\(\\beta\_1\\)? Point estimates by themselves are not very useful. It is often desirable to associate some measure of an estimates variability. The variability of an estimate is often measured by its *standard error* (SE)—the square root of its variance. If we assume that the errors in the linear regression model are \\(\\stackrel{iid}{\\sim} \\left(0, \\sigma^2\\right)\\), then simple expressions for the SEs of the estimated coefficients exist and are displayed in the column labeled `Std. Error` in the output from `summary()`. From this, we can also derive simple \\(t\\)\-tests to understand if the individual coefficients are statistically significant from zero. The *t*\-statistics for such a test are nothing more than the estimated coefficients divided by their corresponding estimated standard errors (i.e., in the output from `summary()`, `t value` \= `Estimate` / `Std. Error`). The reported *t*\-statistics measure the number of standard deviations each coefficient is away from 0\. Thus, large *t*\-statistics (greater than two in absolute value, say) roughly indicate statistical significance at the \\(\\alpha \= 0\.05\\) level. The *p*\-values for these tests are also reported by `summary()` in the column labeled `Pr(>|t|)`. Under the same assumptions, we can also derive confidence intervals for the coefficients. The formula for the traditional \\(100\\left(1 \- \\alpha\\right)\\)% confidence interval for \\(\\beta\_j\\) is \\\[\\begin{equation} \\widehat{\\beta}\_j \\pm t\_{1 \- \\alpha / 2, n \- p} \\widehat{SE}\\left(\\widehat{\\beta}\_j\\right). \\tag{4\.3} \\end{equation}\\] In R, we can construct such (one\-at\-a\-time) confidence intervals for each coefficient using `confint()`. For example, a 95% confidence intervals for the coefficients in our SLR example can be computed using ``` confint(model1, level = 0.95) ## 2.5 % 97.5 % ## (Intercept) 895.0961 16570.7805 ## Gr_Liv_Area 109.9121 119.8399 ``` To interpret, we estimate with 95% confidence that the mean selling price increases between 109\.91 and 119\.84 for each additional one square foot of above ground living space. 
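To connect this output back to Equation (4.3), the interval for the slope can be reconstructed by hand. The sketch below assumes the `model1` fit from above; `est`, `se`, and `t_crit` are our own intermediate names.

```
# Rebuild the 95% CI for the Gr_Liv_Area slope from Equation (4.3)
est    <- coef(summary(model1))["Gr_Liv_Area", "Estimate"]
se     <- coef(summary(model1))["Gr_Liv_Area", "Std. Error"]
t_crit <- qt(0.975, df = df.residual(model1))  # t_{1 - alpha/2, n - p}
est + c(-1, 1) * t_crit * se                   # matches the confint() output above
```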
We can also conclude that the slope \\(\\beta\_1\\) is significantly different from zero (or any other pre\-specified value not included in the interval) at the \\(\\alpha \= 0\.05\\) level. This is also supported by the output from `summary()`. Most statistical software, including R, will include estimated standard errors, *t*\-statistics, etc. as part of its regression output. However, it is important to remember that such quantities depend on three major assumptions of the linear regression model:

1. Independent observations
2. The random errors have mean zero, and constant variance
3. The random errors are normally distributed

If any or all of these assumptions are violated, then remedial measures need to be taken. For instance, *weighted least squares* (and other procedures) can be used when the constant variance assumption is violated. Transformations (of both the response and features) can also help to correct departures from these assumptions. The residuals are extremely useful in helping to identify how parametric models depart from such assumptions.
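As a quick illustration of that last point, base R provides standard residual diagnostics for `lm` objects. This is a minimal sketch using the `model1` fit from above; Section 4.5 revisits these checks in more detail with **ggplot2**.

```
# Residuals vs. fitted values (checks constant variance) and a normal Q-Q
# plot (checks normality of the errors) for the simple model fit earlier
par(mfrow = c(1, 2))
plot(model1, which = 1:2)
par(mfrow = c(1, 1))
```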
4\.3 Multiple linear regression
-------------------------------

In practice, we often have more than one predictor. For example, with the Ames housing data, we may wish to understand if above ground square footage (`Gr_Liv_Area`) and the year the house was built (`Year_Built`) are (linearly) related to sale price (`Sale_Price`). We can extend the SLR model so that it can directly accommodate multiple predictors; this is referred to as the *multiple linear regression* (MLR) model.
With two predictors, the MLR model becomes: \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\epsilon, \\end{equation}\\] where \\(X\_1\\) and \\(X\_2\\) are features of interest. In our Ames housing example, \\(X\_1\\) represents `Gr_Liv_Area` and \\(X\_2\\) represents `Year_Built`. In R, multiple linear regression models can be fit by separating all the features of interest with a `+`: ``` (model2 <- lm(Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train)) ## ## Call: ## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train) ## ## Coefficients: ## (Intercept) Gr_Liv_Area Year_Built ## -2123054.21 99.18 1093.48 ``` Alternatively, we can use `update()` to update the model formula used in `model1`. The new formula can use a `.` as shorthand for keep everything on either the left or right hand side of the formula, and a `+` or `-` can be used to add or remove terms from the original model, respectively. In the case of adding `Year_Built` to `model1`, we could’ve used: ``` (model2 <- update(model1, . ~ . + Year_Built)) ## ## Call: ## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train) ## ## Coefficients: ## (Intercept) Gr_Liv_Area Year_Built ## -2123054.21 99.18 1093.48 ``` The LS estimates of the regression coefficients are \\(\\widehat{\\beta}\_1 \=\\) 99\.176 and \\(\\widehat{\\beta}\_2 \=\\) 1093\.485 (the estimated intercept is \-2123054\.207\. In other words, every one square foot increase to above ground square footage is associated with an additional $99\.18 in **mean selling price** when holding the year the house was built constant. Likewise, for every year newer a home is there is approximately an increase of $1,093\.48 in selling price when holding the above ground square footage constant. A contour plot of the fitted regression surface is displayed in the left side of Figure [4\.2](linear-regression.html#fig:04-mlr-fit) below. Note how the fitted regression surface is flat (i.e., it does not twist or bend). This is true for all linear models that include only *main effects* (i.e., terms involving only a single predictor). One way to model curvature is to include *interaction effects*. An interaction occurs when the effect of one predictor on the response depends on the values of other predictors. In linear regression, interactions can be captured via products of features (i.e., \\(X\_1 \\times X\_2\\)). A model with two main effects can also include a two\-way interaction. For example, to include an interaction between \\(X\_1 \=\\) `Gr_Liv_Area` and \\(X\_2 \=\\) `Year_Built`, we introduce an additional product term: \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\beta\_3 X\_1 X\_2 \+ \\epsilon. \\end{equation}\\] Note that in R, we use the `:` operator to include an interaction (technically, we could use `*` as well, but `x1 * x2` is shorthand for `x1 + x2 + x1:x2` so is slightly redundant): ``` lm(Sale_Price ~ Gr_Liv_Area + Year_Built + Gr_Liv_Area:Year_Built, data = ames_train) ## ## Call: ## lm(formula = Sale_Price ~ Gr_Liv_Area + Year_Built + Gr_Liv_Area:Year_Built, ## data = ames_train) ## ## Coefficients: ## (Intercept) Gr_Liv_Area Year_Built ## 382194.3015 -1483.8810 -179.7979 ## Gr_Liv_Area:Year_Built ## 0.8037 ``` A contour plot of the fitted regression surface with interaction is displayed in the right side of Figure [4\.2](linear-regression.html#fig:04-mlr-fit). Note the curvature in the contour lines. Interaction effects are quite prevalent in predictive modeling. 
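One simple way to gauge whether this particular interaction is worth keeping is a nested-model comparison. The sketch below is our own illustration (the object name `model2_int` is not from the text) and reuses the `model2` fit from above.

```
# F-test comparing the main-effects model to the model with the interaction
model2_int <- update(model2, . ~ . + Gr_Liv_Area:Year_Built)
anova(model2, model2_int)
```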
Since linear models are an example of parametric modeling, it is up to the analyst to decide if and when to include interaction effects. In later chapters, we’ll discuss algorithms that can automatically detect and incorporate interaction effects (albeit in different ways). It is also important to understand a concept called the *hierarchy principle*—which demands that all lower\-order terms corresponding to an interaction be retained in the model—when considering interaction effects in linear regression models. Figure 4\.2: In a three\-dimensional setting, with two predictors and one response, the least squares regression line becomes a plane. The ‘best\-fit’ plane minimizes the sum of squared errors between the actual sales price (individual dots) and the predicted sales price (plane). In general, we can include as many predictors as we want, as long as we have more rows than parameters! The general multiple linear regression model with *p* distinct predictors is \\\[\\begin{equation} Y \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\beta\_2 X\_2 \+ \\cdots \+ \\beta\_p X\_p \+ \\epsilon, \\end{equation}\\] where \\(X\_i\\) for \\(i \= 1, 2, \\dots, p\\) are the predictors of interest. Note some of these may represent interactions (e.g., \\(X\_3 \= X\_1 \\times X\_2\\)) between or transformations[18](#fn18) (e.g., \\(X\_4 \= \\sqrt{X\_1}\\)) of the original features. Unfortunately, visualizing beyond three dimensions is not practical as our best\-fit plane becomes a hyperplane. However, the motivation remains the same where the best\-fit hyperplane is identified by minimizing the RSS. The code below creates a third model where we use all features in our data set as main effects (i.e., no interaction terms) to predict `Sale_Price`. ``` # include all possible main effects model3 <- lm(Sale_Price ~ ., data = ames_train) # print estimated coefficients in a tidy data frame broom::tidy(model3) ## # A tibble: 283 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 (Intercept) -5.61e6 11261881. -0.498 0.618 ## 2 MS_SubClassOne_Story_1945_and_Older 3.56e3 3843. 0.926 0.355 ## 3 MS_SubClassOne_Story_with_Finished… 1.28e4 12834. 0.997 0.319 ## 4 MS_SubClassOne_and_Half_Story_Unfi… 8.73e3 12871. 0.678 0.498 ## 5 MS_SubClassOne_and_Half_Story_Fini… 4.11e3 6226. 0.660 0.509 ## 6 MS_SubClassTwo_Story_1946_and_Newer -1.09e3 5790. -0.189 0.850 ## 7 MS_SubClassTwo_Story_1945_and_Older 7.14e3 6349. 1.12 0.261 ## 8 MS_SubClassTwo_and_Half_Story_All_… -1.39e4 11003. -1.27 0.206 ## 9 MS_SubClassSplit_or_Multilevel -1.15e4 10512. -1.09 0.276 ## 10 MS_SubClassSplit_Foyer -4.39e3 8057. -0.545 0.586 ## # … with 273 more rows ``` 4\.4 Assessing model accuracy ----------------------------- We’ve fit three main effects models to the Ames housing data: a single predictor, two predictors, and all possible predictors. But the question remains, which model is “best”? To answer this question we have to define what we mean by “best”. In our case, we’ll use the RMSE metric and cross\-validation (Section [2\.4](process.html#resampling)) to determine the “best” model. We can use the `caret::train()` function to train a linear model (i.e., `method = "lm"`) using cross\-validation (or a variety of other validation methods). In practice, a number of factors should be considered in determining a “best” model (e.g., time constraints, model production cost, predictive accuracy, etc.). 
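To make the resampling idea concrete before turning to **caret**, a bare-bones 10-fold cross-validation of the single-predictor model might look like the following sketch. This is our own illustration only; the fold assignment and object names are not from the text.

```
# Manual 10-fold cross-validation of the simple model (illustration only)
set.seed(123)
folds <- sample(rep(1:10, length.out = nrow(ames_train)))
cv_rmse <- sapply(1:10, function(k) {
  fit  <- lm(Sale_Price ~ Gr_Liv_Area, data = ames_train[folds != k, ])
  pred <- predict(fit, newdata = ames_train[folds == k, ])
  sqrt(mean((ames_train$Sale_Price[folds == k] - pred)^2))
})
mean(cv_rmse)  # average out-of-fold RMSE
```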
The benefit of **caret** is that it provides built\-in cross\-validation capabilities, whereas the `lm()` function does not[19](#fn19). The following code chunk uses `caret::train()` to refit `model1` using 10\-fold cross\-validation: ``` # Train model using 10-fold cross-validation set.seed(123) # for reproducibility (cv_model1 <- train( form = Sale_Price ~ Gr_Liv_Area, data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) )) ## Linear Regression ## ## 2053 samples ## 1 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold) ## Summary of sample sizes: 1846, 1848, 1848, 1848, 1848, 1848, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 56410.89 0.5069425 39169.09 ## ## Tuning parameter 'intercept' was held constant at a value of TRUE ``` The resulting cross\-validated RMSE is $56,410\.89 (this is the average RMSE across the 10 CV folds). How should we interpret this? When applied to unseen data, the predictions this model makes are, on average, about $56,410\.89 off from the actual sale price. We can perform cross\-validation on the other two models in a similar fashion, which we do in the code chunk below. ``` # model 2 CV set.seed(123) cv_model2 <- train( Sale_Price ~ Gr_Liv_Area + Year_Built, data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) ) # model 3 CV set.seed(123) cv_model3 <- train( Sale_Price ~ ., data = ames_train, method = "lm", trControl = trainControl(method = "cv", number = 10) ) # Extract out of sample performance measures summary(resamples(list( model1 = cv_model1, model2 = cv_model2, model3 = cv_model3 ))) ## ## Call: ## summary.resamples(object = resamples(list(model1 = cv_model1, model2 ## = cv_model2, model3 = cv_model3))) ## ## Models: model1, model2, model3 ## Number of resamples: 10 ## ## MAE ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 34457.58 36323.74 38943.81 39169.09 41660.81 45005.17 0 ## model2 28094.79 30594.47 31959.30 32246.86 34210.70 37441.82 0 ## model3 12458.27 15420.10 16484.77 16258.84 17262.39 19029.29 0 ## ## RMSE ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 47211.34 52363.41 54948.96 56410.89 60672.31 67679.05 0 ## model2 37698.17 42607.11 45407.14 46292.38 49668.59 54692.06 0 ## model3 20844.33 22581.04 24947.45 26098.00 27695.65 39521.49 0 ## ## Rsquared ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 0.3598237 0.4550791 0.5289068 0.5069425 0.5619841 0.5965793 0 ## model2 0.5714665 0.6392504 0.6800818 0.6703298 0.7067458 0.7348562 0 ## model3 0.7869022 0.9018567 0.9104351 0.8949642 0.9166564 0.9303504 0 ``` Extracting the results for each model, we see that by adding more information via more predictors, we are able to improve the out\-of\-sample cross validation performance metrics. Specifically, our cross\-validated RMSE reduces from $46,292\.38 (the model with two predictors) down to $26,098\.00 (for our full model). In this case, the model with all possible main effects performs the “best” (compared with the other two). 4\.5 Model concerns ------------------- As previously stated, linear regression has been a popular modeling tool due to the ease of interpreting the coefficients. However, linear regression makes several strong assumptions that are often violated as we include more predictors in our model. Violation of these assumptions can lead to flawed interpretation of the coefficients and prediction results. **1\. 
Linear relationship:** Linear regression assumes a linear relationship between the predictor and the response variable. However, as discussed in Chapter [3](engineering.html#engineering), non\-linear relationships can be made linear (or near\-linear) by applying transformations to the response and/or predictors. For example, Figure [4\.3](linear-regression.html#fig:04-linear-relationship) illustrates the relationship between sale price and the year a home was built. The left plot illustrates the non\-linear relationship that exists. However, we can achieve a near\-linear relationship by log transforming sale price, although some non\-linearity still exists for older homes. ``` p1 <- ggplot(ames_train, aes(Year_Built, Sale_Price)) + geom_point(size = 1, alpha = .4) + geom_smooth(se = FALSE) + scale_y_continuous("Sale price", labels = scales::dollar) + xlab("Year built") + ggtitle(paste("Non-transformed variables with a\n", "non-linear relationship.")) p2 <- ggplot(ames_train, aes(Year_Built, Sale_Price)) + geom_point(size = 1, alpha = .4) + geom_smooth(method = "lm", se = FALSE) + scale_y_log10("Sale price", labels = scales::dollar, breaks = seq(0, 400000, by = 100000)) + xlab("Year built") + ggtitle(paste("Transforming variables can provide a\n", "near-linear relationship.")) gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 4\.3: Linear regression assumes a linear relationship between the predictor(s) and the response variable; however, non\-linear relationships can often be altered to be near\-linear by applying a transformation to the variable(s). **2\. Constant variance among residuals:** Linear regression assumes the variance among error terms (\\(\\epsilon\_1, \\epsilon\_2, \\dots, \\epsilon\_p\\)) are constant (this assumption is referred to as homoscedasticity). If the error variance is not constant, the *p*\-values and confidence intervals for the coefficients will be invalid. Similar to the linear relationship assumption, non\-constant variance can often be resolved with variable transformations or by including additional predictors. For example, Figure [4\.4](linear-regression.html#fig:04-homoskedasticity) shows the residuals vs. predicted values for `model1` and `model3`. `model1` displays a classic violation of constant variance as indicated by the cone\-shaped pattern. However, `model3` appears to have near\-constant variance. The `broom::augment` function is an easy way to add model results to each observation (i.e. predicted values, residuals). ``` df1 <- broom::augment(cv_model1$finalModel, data = ames_train) p1 <- ggplot(df1, aes(.fitted, .resid)) + geom_point(size = 1, alpha = .4) + xlab("Predicted values") + ylab("Residuals") + ggtitle("Model 1", subtitle = "Sale_Price ~ Gr_Liv_Area") df2 <- broom::augment(cv_model3$finalModel, data = ames_train) p2 <- ggplot(df2, aes(.fitted, .resid)) + geom_point(size = 1, alpha = .4) + xlab("Predicted values") + ylab("Residuals") + ggtitle("Model 3", subtitle = "Sale_Price ~ .") gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 4\.4: Linear regression assumes constant variance among the residuals. `model1` (left) shows definitive signs of heteroskedasticity whereas `model3` (right) appears to have constant variance. **3\. No autocorrelation:** Linear regression assumes the errors are independent and uncorrelated. If in fact, there is correlation among the errors, then the estimated standard errors of the coefficients will be biased leading to prediction intervals being narrower than they should be. 
For example, the left plot in Figure [4\.5](linear-regression.html#fig:04-autocorrelation) displays the residuals (\\(y\\)\-axis) vs. the observation ID (\\(x\\)\-axis) for `model1`. A clear pattern exists suggesting that information about \\(\\epsilon\_1\\) provides information about \\(\\epsilon\_2\\). This pattern is a result of the data being ordered by neighborhood, which we have not accounted for in this model. Consequently, the residuals for homes in the same neighborhood are correlated (homes within a neighborhood are typically the same size and can often contain similar features). Since the `Neighborhood` predictor is included in `model3` (right plot), the correlation in the errors is reduced. ``` df1 <- mutate(df1, id = row_number()) df2 <- mutate(df2, id = row_number()) p1 <- ggplot(df1, aes(id, .resid)) + geom_point(size = 1, alpha = .4) + xlab("Row ID") + ylab("Residuals") + ggtitle("Model 1", subtitle = "Correlated residuals.") p2 <- ggplot(df2, aes(id, .resid)) + geom_point(size = 1, alpha = .4) + xlab("Row ID") + ylab("Residuals") + ggtitle("Model 3", subtitle = "Uncorrelated residuals.") gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 4\.5: Linear regression assumes uncorrelated errors. The residuals in `model1` (left) have a distinct pattern suggesting that information about \\(\\epsilon\_1\\) provides information about \\(\\epsilon\_2\\). Whereas `model3` has no signs of autocorrelation. **4\. More observations than predictors:** Although not an issue with the Ames housing data, when the number of features exceeds the number of observations (\\(p \> n\\)), the OLS estimates are not obtainable. To resolve this issue an analyst can remove variables one\-at\-a\-time until \\(p \< n\\). Although pre\-processing tools can be used to guide this manual approach (Kuhn and Johnson [2013](#ref-apm), 26:43–47\), it can be cumbersome and prone to errors. In Chapter [6](regularized-regression.html#regularized-regression) we’ll introduce regularized regression which provides an alternative to OLS that can be used when \\(p \> n\\). **5\. No or little multicollinearity:** *Collinearity* refers to the situation in which two or more predictor variables are closely related to one another. The presence of collinearity can pose problems in the OLS, since it can be difficult to separate out the individual effects of collinear variables on the response. In fact, collinearity can cause predictor variables to appear as statistically insignificant when in fact they are significant. This obviously leads to an inaccurate interpretation of coefficients and makes it difficult to identify influential predictors. In `ames`, for example, `Garage_Area` and `Garage_Cars` are two variables that have a correlation of 0\.89 and both variables are strongly related to our response variable (`Sale_Price`). Looking at our full model where both of these variables are included, we see that `Garage_Cars` is found to be statistically significant but `Garage_Area` is not: ``` # fit with two strongly correlated variables summary(cv_model3) %>% broom::tidy() %>% filter(term %in% c("Garage_Area", "Garage_Cars")) ## # A tibble: 2 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 Garage_Cars 3021. 1771. 1.71 0.0882 ## 2 Garage_Area 19.7 6.03 3.26 0.00112 ``` However, if we refit the full model without `Garage_Cars`, the coefficient estimate for `Garage_Area` increases two fold and becomes statistically significant. 
```
# model without Garage_Cars
set.seed(123)
mod_wo_Garage_Cars <- train(
  Sale_Price ~ ., 
  data = select(ames_train, -Garage_Cars), 
  method = "lm",
  trControl = trainControl(method = "cv", number = 10)
)

summary(mod_wo_Garage_Cars) %>%
  broom::tidy() %>%
  filter(term == "Garage_Area")
## # A tibble: 1 x 5
##   term        estimate std.error statistic  p.value
##   <chr>          <dbl>     <dbl>     <dbl>    <dbl>
## 1 Garage_Area     27.0      4.21      6.43 1.69e-10
```

This reflects the instability in the linear regression model caused by between\-predictor relationships; this instability also gets propagated directly to the model predictions. Considering 16 of our 34 numeric predictors have a medium to strong correlation (Chapter [17](pca.html#pca)), the biased coefficients of these predictors are likely restricting the predictive accuracy of our model. How can we control for this problem? One option is to manually remove the offending predictors (one\-at\-a\-time) until all pairwise correlations are below some pre\-determined threshold. However, when the number of predictors is large such as in our case, this becomes tedious. Moreover, multicollinearity can arise when one feature is linearly related to two or more features (which is more difficult to detect[20](#fn20)). In these cases, manual removal of specific predictors may not be possible. Consequently, the following sections offer two simple extensions of linear regression where dimension reduction is applied prior to performing linear regression. Chapter [6](regularized-regression.html#regularized-regression) offers a modified regression approach that helps to deal with the problem. And future chapters provide alternative methods that are less affected by multicollinearity. 

4\.6 Principal component regression
-----------------------------------

As mentioned in Section [3\.7](engineering.html#feature-reduction) and fully discussed in Chapter [17](pca.html#pca), principal components analysis can be used to represent correlated variables with a smaller number of uncorrelated features (called principal components) and the resulting components can be used as predictors in a linear regression model. This two\-step process is known as *principal component regression* (PCR) (Massy [1965](#ref-massy1965principal)) and is illustrated in Figure [4\.6](linear-regression.html#fig:pcr-steps). Figure 4\.6: A depiction of the steps involved in performing principal component regression. Performing PCR with **caret** is an easy extension from our previous model. We simply specify `method = "pcr"` within `train()` to perform PCA on all our numeric predictors prior to fitting the model. Often, we can greatly improve performance by only using a small subset of all principal components as predictors. Consequently, you can think of the number of principal components as a tuning parameter (see Section [2\.5\.3](process.html#tune-overfit)). The following performs cross\-validated PCR with \\(1, 2, \\dots, 100\\) principal components, and Figure [4\.7](linear-regression.html#fig:pcr-regression) illustrates the cross\-validated RMSE. You can see a significant drop in prediction error from our previous linear models using just five principal components followed by a gradual decrease thereafter. However, you may realize that it takes nearly 100 principal components to reach a minimum RMSE (see `cv_model_pcr` for a comparison of the cross\-validated results). Note in the below example we use `preProcess` to remove near\-zero variance features and center/scale the numeric features. 
We then use `method = “pcr”`. This is equivalent to creating a blueprint as illustrated in Section 3\.8\.3 to remove near\-zero variance features, center/scale the numeric features, perform PCA on the numeric features, then feeding that blueprint into `train()` with `method = “lm”`. ``` # perform 10-fold cross validation on a PCR model tuning the # number of principal components to use as predictors from 1-100 set.seed(123) cv_model_pcr <- train( Sale_Price ~ ., data = ames_train, method = "pcr", trControl = trainControl(method = "cv", number = 10), preProcess = c("zv", "center", "scale"), tuneLength = 100 ) # model with lowest RMSE cv_model_pcr$bestTune ## ncomp ## 97 97 # results for model with lowest RMSE cv_model_pcr$results %>% dplyr::filter(ncomp == pull(cv_model_pcr$bestTune)) ## ncomp RMSE Rsquared MAE RMSESD RsquaredSD MAESD ## 1 97 30135.51 0.8615453 20143.42 5191.887 0.03764501 1696.534 # plot cross-validated RMSE ggplot(cv_model_pcr) ``` Figure 4\.7: The 10\-fold cross validation RMSE obtained using PCR with 1\-100 principal components. By controlling for multicollinearity with PCR, we can experience significant improvement in our predictive accuracy compared to the previously obtained linear models (reducing the cross\-validated RMSE from about $37,000 to nearly $30,000\), which beats the *k*\-nearest neighbor model illustrated in Section [3\.8\.3](engineering.html#engineering-process-example). It’s important to note that since PCR is a two step process, the PCA step does not consider any aspects of the response when it selects the components. Consequently, the new predictors produced by the PCA step are not designed to maximize the relationship with the response. Instead, it simply seeks to reduce the variability present throughout the predictor space. If that variability happens to be related to the response variability, then PCR has a good chance to identify a predictive relationship, as in our case. If, however, the variability in the predictor space is not related to the variability of the response, then PCR can have difficulty identifying a predictive relationship when one might actually exists (i.e., we may actually experience a decrease in our predictive accuracy). An alternative approach to reduce the impact of multicollinearity is partial least squares. 4\.7 Partial least squares -------------------------- *Partial least squares* (PLS) can be viewed as a supervised dimension reduction procedure (Kuhn and Johnson [2013](#ref-apm)). Similar to PCR, this technique also constructs a set of linear combinations of the inputs for regression, but unlike PCR it uses the response variable to aid the construction of the principal components as illustrated in Figure [4\.8](linear-regression.html#fig:pcr-vs-pls)[21](#fn21). Thus, we can think of PLS as a supervised dimension reduction procedure that finds new features that not only captures most of the information in the original features, but also are related to the response. Figure 4\.8: A diagram depicting the differences between PCR (left) and PLS (right). PCR finds principal components (PCs) that maximally summarize the features independent of the response variable and then uses those PCs as predictor variables. PLS finds components that simultaneously summarize variation of the predictors while being optimally correlated with the outcome and then uses those PCs as predictors. We illustrate PLS with some exemplar data[22](#fn22). 
Figure [4\.9](linear-regression.html#fig:pls-vs-pcr-relationship) illustrates that the first two PCs when using PCR have very little relationship to the response variable; however, the first two PCs when using PLS have a much stronger association to the response. Figure 4\.9: Illustration showing that the first two PCs when using PCR have very little relationship to the response variable (top row); however, the first two PCs when using PLS have a much stronger association to the response (bottom row). Referring to Equation [(17\.1\)](pca.html#eq:pca1) in Chapter [17](pca.html#pca), PLS will compute the first principal component (\\(z\_1\\)) by setting each \\(\\phi\_{j1}\\) to the coefficient from an SLR model of \\(y\\) onto that respective \\(x\_j\\). One can show that this coefficient is proportional to the correlation between \\(y\\) and \\(x\_j\\). Hence, in computing \\(z\_1 \= \\sum^p\_{j\=1} \\phi\_{j1}x\_j\\), PLS places the highest weight on the variables that are most strongly related to the response. To compute the second PC (\\(z\_2\\)), we first regress each variable on \\(z\_1\\). The residuals from this regression capture the remaining signal that has not been explained by the first PC. We substitute these residual values for the predictor values in Equation [(17\.2\)](pca.html#eq:pca2) in Chapter [17](pca.html#pca). This process continues until all \\(m\\) components have been computed and then we use OLS to regress the response on \\(z\_1, \\dots, z\_m\\). See J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) and Geladi and Kowalski ([1986](#ref-geladi1986partial)) for a thorough discussion of PLS. Similar to PCR, we can easily fit a PLS model by changing the `method` argument in `train()`. As with PCR, the number of principal components to use is a tuning parameter that is determined by the model that maximizes predictive accuracy (minimizes RMSE in this case). The following performs cross\-validated PLS with \\(1, 2, \\dots, 30\\) PCs, and Figure [4\.10](linear-regression.html#fig:pls-regression) shows the cross\-validated RMSEs. You can see a greater drop in prediction error compared to PCR and we reach this minimum RMSE with far fewer principal components because they are guided by the response. 

```
# perform 10-fold cross validation on a PLS model tuning the 
# number of principal components to use as predictors from 1-30
set.seed(123)
cv_model_pls <- train(
  Sale_Price ~ ., 
  data = ames_train, 
  method = "pls",
  trControl = trainControl(method = "cv", number = 10),
  preProcess = c("zv", "center", "scale"),
  tuneLength = 30
)

# model with lowest RMSE
cv_model_pls$bestTune
##    ncomp
## 20    20

# results for model with lowest RMSE
cv_model_pls$results %>%
  dplyr::filter(ncomp == pull(cv_model_pls$bestTune))
##   ncomp     RMSE  Rsquared      MAE   RMSESD RsquaredSD   MAESD
## 1    20 25459.51 0.8998194 16022.68 5243.478 0.04278512 1665.61

# plot cross-validated RMSE
ggplot(cv_model_pls)
```

Figure 4\.10: The 10\-fold cross\-validation RMSE obtained using PLS with 1\-30 principal components. 

4\.8 Feature interpretation
---------------------------

Once we’ve found the model that maximizes the predictive accuracy, our next goal is to interpret the model structure. Linear regression models provide a very intuitive model structure as they assume a *monotonic linear relationship* between the predictor variables and the response. 
The *linear* relationship part of that statement just means, for a given predictor variable, it assumes for every one unit change in a given predictor variable there is a constant change in the response. As discussed earlier in the chapter, this constant rate of change is provided by the coefficient for a predictor. The *monotonic* relationship means that a given predictor variable will always have a positive or negative relationship. But how do we determine the most influential variables? Variable importance seeks to identify those variables that are most influential in our model. For linear regression models, this is most often measured by the absolute value of the *t*\-statistic for each model parameter used; though simple, the results can be hard to interpret when the model includes interaction effects and complex transformations (in Chapter [16](iml.html#iml) we’ll discuss *model\-agnostic* approaches that don’t have this issue). For a PLS model, variable importance can be computed using the weighted sums of the absolute regression coefficients. The weights are a function of the reduction of the RSS across the number of PLS components and are computed separately for each outcome. Therefore, the contribution of the coefficients are weighted proportionally to the reduction in the RSS. We can use `vip::vip()` to extract and plot the most important variables. The importance measure is normalized from 100 (most important) to 0 (least important). Figure [4\.11](linear-regression.html#fig:pls-vip) illustrates that the top 4 most important variables are `Gr_liv_Area`, `Total_Bsmt_SF`, `First_Flr_SF`, and `Garage_Area` respectively. ``` vip(cv_model_pls, num_features = 20, method = "model") ``` Figure 4\.11: Top 20 most important variables for the PLS model. As stated earlier, linear regression models assume a monotonic linear relationship. To illustrate this, we can construct partial dependence plots (PDPs). PDPs plot the change in the average predicted value (\\(\\widehat{y}\\)) as specified feature(s) vary over their marginal distribution. As you will see in later chapters, PDPs become more useful when non\-linear relationships are present (we discuss PDPs and other ML interpretation techniques in Chapter [16](iml.html#iml)). However, PDPs of linear models help illustrate how a fixed change in \\(x\_i\\) relates to a fixed linear change in \\(\\widehat{y}\_i\\) while taking into account the average effect of all the other features in the model (for linear models, the slope of the PDP is equal to the corresponding features of the OLS coefficient). The **pdp** package (Brandon Greenwell [2018](#ref-R-pdp)) provides convenient functions for computing and plotting PDPs. For example, the following code chunk would plot the PDP for the `Gr_Liv_Area` predictor. `pdp::partial(cv_model_pls, "Gr_Liv_Area", grid.resolution = 20, plot = TRUE)` All four of the most important predictors have a positive relationship with sale price; however, we see that the slope (\\(\\widehat{\\beta}\_i\\)) is steepest for the most important predictor and gradually decreases for less important variables. Figure 4\.12: Partial dependence plots for the first four most important variables. 4\.9 Final thoughts ------------------- Linear regression is usually the first supervised learning algorithm you will learn. The approach provides a solid fundamental understanding of the supervised learning task; however, as we’ve discussed there are several concerns that result from the assumptions required. 
Although extensions of linear regression that integrate dimension reduction steps into the algorithm can help address some of the problems with linear regression, more advanced supervised algorithms typically provide greater flexibility and improved accuracy. Nonetheless, understanding linear regression provides a foundation that will serve you well in learning these more advanced methods.
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/logistic-regression.html
Chapter 5 Logistic Regression ============================= Linear regression is used to approximate the (linear) relationship between a continuous response variable and a set of predictor variables. However, when the response variable is binary (i.e., Yes/No), linear regression is not appropriate. Fortunately, analysts can turn to an analogous method, *logistic regression*, which is similar to linear regression in many ways. This chapter explores the use of logistic regression for binary response variables. Logistic regression can be expanded for multinomial problems (see Faraway ([2016](#ref-faraway2016extending)[a](#ref-faraway2016extending)) for discussion of multinomial logistic regression in R); however, that goes beyond our intent here. 5\.1 Prerequisites ------------------ For this section we’ll use the following packages: ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome plotting library(rsample) # for data splitting # Modeling packages library(caret) # for logistic regression modeling # Model interpretability packages library(vip) # variable importance ``` To illustrate logistic regression concepts we’ll use the employee attrition data, where our intent is to predict the `Attrition` response variable (coded as `"Yes"`/`"No"`). As in the previous chapter, we’ll set aside 30% of our data as a test set to assess our generalizability error. ``` df <- attrition %>% mutate_if(is.ordered, factor, ordered = FALSE) # Create training (70%) and test (30%) sets for the # rsample::attrition data. set.seed(123) # for reproducibility churn_split <- initial_split(df, prop = .7, strata = "Attrition") churn_train <- training(churn_split) churn_test <- testing(churn_split) ``` 5\.2 Why logistic regression ---------------------------- To provide a clear motivation for logistic regression, assume we have credit card default data for customers and we want to understand if the current credit card balance of a customer is an indicator of whether or not they’ll default on their credit card. To classify a customer as a high\- vs. low\-risk defaulter based on their balance we could use linear regression; however, the left plot in Figure [5\.1](logistic-regression.html#fig:whylogit) illustrates how linear regression would predict the probability of defaulting. Unfortunately, for balances close to zero we predict a negative probability of defaulting; if we were to predict for very large balances, we would get values bigger than 1\. These predictions are not sensible, since of course the true probability of defaulting, regardless of credit card balance, must fall between 0 and 1\. These inconsistencies only increase as our data become more imbalanced and the number of outliers increase. Contrast this with the logistic regression line (right plot) that is nonlinear (sigmoidal\-shaped). Figure 5\.1: Comparing the predicted probabilities of linear regression (left) to logistic regression (right). Predicted probabilities using linear regression results in flawed logic whereas predicted values from logistic regression will always lie between 0 and 1\. To avoid the inadequacies of the linear model fit on a binary response, we must model the probability of our response using a function that gives outputs between 0 and 1 for all values of \\(X\\). Many functions meet this description. In logistic regression, we use the logistic function, which is defined in Equation [(5\.1\)](logistic-regression.html#eq:logistic) and produces the S\-shaped curve in the right plot above. 
\\\[\\begin{equation} \\tag{5\.1} p\\left(X\\right) \= \\frac{e^{\\beta\_0 \+ \\beta\_1X}}{1 \+ e^{\\beta\_0 \+ \\beta\_1X}} \\end{equation}\\] The \\(\\beta\_i\\) parameters represent the coefficients as in linear regression and \\(p\\left(X\\right)\\) may be interpreted as the probability that the positive class (default in the above example) is present. The minimum for \\(p\\left(x\\right)\\) is obtained at \\(\\lim\_{a \\rightarrow \-\\infty} \\left\[ \\frac{e^a}{1\+e^a} \\right] \= 0\\), and the maximum for \\(p\\left(x\\right)\\) is obtained at \\(\\lim\_{a \\rightarrow \\infty} \\left\[ \\frac{e^a}{1\+e^a} \\right] \= 1\\) which restricts the output probabilities to 0–1\. Rearranging Equation [(5\.1\)](logistic-regression.html#eq:logistic) yields the *logit transformation* (which is where logistic regression gets its name): \\\[\\begin{equation} \\tag{5\.2} g\\left(X\\right) \= \\ln \\left\[ \\frac{p\\left(X\\right)}{1 \- p\\left(X\\right)} \\right] \= \\beta\_0 \+ \\beta\_1 X \\end{equation}\\] Applying a logit transformation to \\(p\\left(X\\right)\\) results in a linear equation similar to the mean response in a simple linear regression model. Using the logit transformation also results in an intuitive interpretation for the magnitude of \\(\\beta\_1\\): the odds (e.g., of defaulting) increase multiplicatively by \\(\\exp\\left(\\beta\_1\\right)\\) for every one\-unit increase in \\(X\\). A similar interpretation exists if \\(X\\) is categorical; see Agresti ([2003](#ref-agresti2003categorical)), Chapter 5, for details. 5\.3 Simple logistic regression ------------------------------- We will fit two logistic regression models in order to predict the probability of an employee attriting. The first predicts the probability of attrition based on their monthly income (`MonthlyIncome`) and the second is based on whether or not the employee works overtime (`OverTime`). The `glm()` function fits generalized linear models, a class of models that includes both logistic regression and simple linear regression as special cases. The syntax of the `glm()` function is similar to that of `lm()`, except that we must pass the argument `family = "binomial"` in order to tell R to run a logistic regression rather than some other type of generalized linear model (the default is `family = "gaussian"`, which is equivalent to ordinary linear regression assuming normally distributed errors). ``` model1 <- glm(Attrition ~ MonthlyIncome, family = "binomial", data = churn_train) model2 <- glm(Attrition ~ OverTime, family = "binomial", data = churn_train) ``` In the background `glm()` uses ML estimation to estimate the unknown model parameters. The basic intuition behind using ML estimation to fit a logistic regression model is as follows: we seek estimates for \\(\\beta\_0\\) and \\(\\beta\_1\\) such that the predicted probability \\(\\widehat p\\left(X\_i\\right)\\) of attrition for each employee corresponds as closely as possible to the employee’s observed attrition status. In other words, we try to find \\(\\widehat \\beta\_0\\) and \\(\\widehat \\beta\_1\\) such that plugging these estimates into the model for \\(p\\left(X\\right)\\) (Equation [(5\.1\)](logistic-regression.html#eq:logistic)) yields a number close to one for all employees who attrited, and a number close to zero for all employees who did not. 
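Before formalizing this idea, a quick sanity check (a sketch, assuming the `model1` fit above) shows that the fitted values from `glm()` are indeed probabilities bounded between 0 and 1, and that they are simply the logistic transform in Equation (5.1) applied to the linear predictor; the object names `p_hat` and `eta` are our own.

```
# Fitted probabilities from the monthly-income model lie strictly in (0, 1)
p_hat <- predict(model1, type = "response")
range(p_hat)

# They equal the logistic transform (Equation (5.1)) of the linear predictor
eta <- predict(model1, type = "link")  # beta0 + beta1 * MonthlyIncome
all.equal(p_hat, exp(eta) / (1 + exp(eta)))
```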
This intuition can be formalized using a mathematical equation called a *likelihood function*: \\\[\\begin{equation} \\tag{5\.3} \\ell\\left(\\beta\_0, \\beta\_1\\right) \= \\prod\_{i:y\_i\=1}p\\left(X\_i\\right) \\prod\_{i':y\_i'\=0}\\left\[1\-p\\left(x\_i'\\right)\\right] \\end{equation}\\] The estimates \\(\\widehat \\beta\_0\\) and \\(\\widehat \\beta\_1\\) are chosen to *maximize* this likelihood function. What results is the predicted probability of attrition. Figure [5\.2](logistic-regression.html#fig:glm-sigmoid) illustrates the predicted probabilities for the two models. Figure 5\.2: Predicted probablilities of employee attrition based on monthly income (left) and overtime (right). As monthly income increases, `model1` predicts a decreased probability of attrition and if employees work overtime `model2` predicts an increased probability. The table below shows the coefficient estimates and related information that result from fitting a logistic regression model in order to predict the probability of *Attrition \= Yes* for our two models. Bear in mind that the coefficient estimates from logistic regression characterize the relationship between the predictor and response variable on a *log\-odds* (i.e., logit) scale. For `model1`, the estimated coefficient for `MonthlyIncome` is \\(\\widehat \\beta\_1 \=\\) \-0\.000130, which is negative, indicating that an increase in `MonthlyIncome` is associated with a decrease in the probability of attrition. Similarly, for `model2`, employees who work `OverTime` are associated with an increased probability of attrition compared to those that do not work `OverTime`. ``` tidy(model1) ## # A tibble: 2 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 (Intercept) -0.924 0.155 -5.96 0.00000000259 ## 2 MonthlyIncome -0.000130 0.0000264 -4.93 0.000000836 tidy(model2) ## # A tibble: 2 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 (Intercept) -2.18 0.122 -17.9 6.76e-72 ## 2 OverTimeYes 1.41 0.176 8.00 1.20e-15 ``` As discussed earlier, it is easier to interpret the coefficients using an \\(\\exp()\\) transformation: ``` exp(coef(model1)) ## (Intercept) MonthlyIncome ## 0.3970771 0.9998697 exp(coef(model2)) ## (Intercept) OverTimeYes ## 0.1126126 4.0812121 ``` Thus, the odds of an employee attriting in `model1` increase multiplicatively by 0\.9999 for every one dollar increase in `MonthlyIncome`, whereas the odds of attriting in `model2` increase multiplicatively by 4\.0812 for employees that work `OverTime` compared to those that do not. Many aspects of the logistic regression output are similar to those discussed for linear regression. 
For example, we can use the estimated standard errors to get confidence intervals as we did for linear regression in Chapter [4](linear-regression.html#linear-regression): ``` confint(model1) # for odds, you can use `exp(confint(model1))` ## 2.5 % 97.5 % ## (Intercept) -1.2267754960 -6.180062e-01 ## MonthlyIncome -0.0001849796 -8.107634e-05 confint(model2) ## 2.5 % 97.5 % ## (Intercept) -2.430458 -1.952330 ## OverTimeYes 1.063246 1.752879 ``` 5\.4 Multiple logistic regression --------------------------------- We can also extend our model as seen in Equation [(5\.1\)](logistic-regression.html#eq:logistic) so that we can predict a binary response using multiple predictors: \\\[\\begin{equation} \\tag{5\.4} p\\left(X\\right) \= \\frac{e^{\\beta\_0 \+ \\beta\_1 X\_1 \+ \\cdots \+ \\beta\_p X\_p }}{1 \+ e^{\\beta\_0 \+ \\beta\_1 X\_1 \+ \\cdots \+ \\beta\_p X\_p}} \\end{equation}\\] Let’s go ahead and fit a model that predicts the probability of `Attrition` based on `MonthlyIncome` and `OverTime`. Our results show that both features are statistically significant (at the 0\.05 level) and Figure [5\.3](logistic-regression.html#fig:glm-sigmoid2) illustrates common trends between `MonthlyIncome` and `Attrition`; however, working `OverTime` tends to nearly double the probability of attrition. ``` model3 <- glm( Attrition ~ MonthlyIncome + OverTime, family = "binomial", data = churn_train ) tidy(model3) ## # A tibble: 3 x 5 ## term estimate std.error statistic p.value ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 (Intercept) -1.43 0.176 -8.11 5.25e-16 ## 2 MonthlyIncome -0.000139 0.0000270 -5.15 2.62e- 7 ## 3 OverTimeYes 1.47 0.180 8.16 3.43e-16 ``` Figure 5\.3: Predicted probability of attrition based on monthly income and whether or not employees work overtime. 5\.5 Assessing model accuracy ----------------------------- With a basic understanding of logistic regression under our belt, similar to linear regression our concern now shifts to how well our models predict. As in the last chapter, we’ll use `caret::train()` and fit three 10\-fold cross\-validated logistic regression models. Extracting the accuracy measures (in this case, classification accuracy), we see that both `cv_model1` and `cv_model2` had an average accuracy of 83\.88%. However, `cv_model3`, which used all predictor variables in our data, achieved an average accuracy rate of 87\.58%. ``` set.seed(123) cv_model1 <- train( Attrition ~ MonthlyIncome, data = churn_train, method = "glm", family = "binomial", trControl = trainControl(method = "cv", number = 10) ) set.seed(123) cv_model2 <- train( Attrition ~ MonthlyIncome + OverTime, data = churn_train, method = "glm", family = "binomial", trControl = trainControl(method = "cv", number = 10) ) set.seed(123) cv_model3 <- train( Attrition ~ ., data = churn_train, method = "glm", family = "binomial", trControl = trainControl(method = "cv", number = 10) ) # extract out of sample performance measures summary( resamples( list( model1 = cv_model1, model2 = cv_model2, model3 = cv_model3 ) ) )$statistics$Accuracy ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## model1 0.8349515 0.8349515 0.8365385 0.8388478 0.8431373 0.8446602 0 ## model2 0.8349515 0.8349515 0.8365385 0.8388478 0.8431373 0.8446602 0 ## model3 0.8365385 0.8495146 0.8792476 0.8757893 0.8907767 0.9313725 0 ``` We can get a better understanding of our model’s performance by assessing the confusion matrix (see Section [2\.6](process.html#model-eval)). We can use `caret::confusionMatrix()` to compute a confusion matrix.
We need to supply our model’s predicted class and the actuals from our training data. The confusion matrix provides a wealth of information. Particularly, we can see that although we do well predicting cases of non\-attrition (note the high specificity), our model performs particularly poorly at predicting actual cases of attrition (note the low sensitivity). By default the `predict()` function predicts the response class for a **caret** model; however, you can change the `type` argument to predict the probabilities (see `?caret::predict.train`). ``` # predict class pred_class <- predict(cv_model3, churn_train) # create confusion matrix confusionMatrix( data = relevel(pred_class, ref = "Yes"), reference = relevel(churn_train$Attrition, ref = "Yes") ) ## Confusion Matrix and Statistics ## ## Reference ## Prediction Yes No ## Yes 93 25 ## No 73 839 ## ## Accuracy : 0.9049 ## 95% CI : (0.8853, 0.9221) ## No Information Rate : 0.8388 ## P-Value [Acc > NIR] : 5.360e-10 ## ## Kappa : 0.6016 ## ## Mcnemar's Test P-Value : 2.057e-06 ## ## Sensitivity : 0.56024 ## Specificity : 0.97106 ## Pos Pred Value : 0.78814 ## Neg Pred Value : 0.91996 ## Prevalence : 0.16117 ## Detection Rate : 0.09029 ## Detection Prevalence : 0.11456 ## Balanced Accuracy : 0.76565 ## ## 'Positive' Class : Yes ## ``` One thing to point out is that the confusion matrix above reports the metric `No Information Rate: 0.839`. This represents the proportion of non\-attriting employees in our training data (`table(churn_train$Attrition) %>% prop.table()`). Consequently, if we simply predicted `"No"` for every employee we would still get an accuracy rate of 83\.9%. Therefore, our goal is to maximize our accuracy rate over and above this no\-information baseline while also trying to balance sensitivity and specificity. To that end, we plot the ROC curve (Section [2\.6](process.html#model-eval)), which is displayed in Figure [5\.4](logistic-regression.html#fig:logistic-regression-roc). If we compare our simple model (`cv_model1`) to our full model (`cv_model3`), we see the lift achieved with the more accurate model. ``` library(ROCR) # Compute predicted probabilities m1_prob <- predict(cv_model1, churn_train, type = "prob")$Yes m3_prob <- predict(cv_model3, churn_train, type = "prob")$Yes # Compute ROC curve (TPR vs. FPR) values for cv_model1 and cv_model3 perf1 <- prediction(m1_prob, churn_train$Attrition) %>% performance(measure = "tpr", x.measure = "fpr") perf2 <- prediction(m3_prob, churn_train$Attrition) %>% performance(measure = "tpr", x.measure = "fpr") # Plot ROC curves for cv_model1 and cv_model3 plot(perf1, col = "black", lty = 2) plot(perf2, add = TRUE, col = "blue") legend(0.8, 0.2, legend = c("cv_model1", "cv_model3"), col = c("black", "blue"), lty = 2:1, cex = 0.6) ``` Figure 5\.4: ROC curve for cross\-validated models 1 and 3\. The increase in the AUC represents the ‘lift’ that we achieve with model 3\. Similar to linear regression, we can perform a PLS logistic regression to assess if reducing the dimension of our numeric predictors helps to improve accuracy. There are 16 numeric features in our data set so the following code performs a 10\-fold cross\-validated PLS model while tuning the number of principal components to use from 1–16\. The optimal model uses 14 principal components, which is not reducing the dimension by much. However, the mean accuracy of 0\.876 is no better than the average CV accuracy of `cv_model3` (0\.876\). PLS was originally designed to be used for continuous features.
Although you are not restricted from using PLS on categorical features, it is commonly advised to start with numeric features and explore alternative options for categorical features (e.g., ordinal encoding, label encoding, or factor analysis); see also Russolillo and Lauro ([2011](#ref-russolillo2011proposal)) for an alternative PLS approach for categorical features. ``` # Perform 10-fold CV on a PLS model tuning the number of PCs to # use as predictors set.seed(123) cv_model_pls <- train( Attrition ~ ., data = churn_train, method = "pls", family = "binomial", trControl = trainControl(method = "cv", number = 10), preProcess = c("zv", "center", "scale"), tuneLength = 16 ) # Model with highest CV accuracy cv_model_pls$bestTune ## ncomp ## 14 14 # results for the best-performing model cv_model_pls$results %>% dplyr::filter(ncomp == pull(cv_model_pls$bestTune)) ## ncomp Accuracy Kappa AccuracySD KappaSD ## 1 14 0.8757518 0.3766944 0.01919777 0.1142592 # Plot cross-validated accuracy ggplot(cv_model_pls) ``` Figure 5\.5: The 10\-fold cross\-validation accuracy obtained using PLS with 1–16 principal components. 5\.6 Model concerns ------------------- As with linear models, it is important to check the adequacy of the logistic regression model (in fact, this should be done for all parametric models). This was discussed for linear models in Section [4\.5](linear-regression.html#lm-residuals) where the residuals played an important role. Although not as common, residual analysis and diagnostics are equally important to generalized linear models. The problem is that there is no obvious way to define what a residual is for more general models. For instance, how might we define a residual in logistic regression when the outcome is either 0 or 1? Nonetheless, attempts have been made, and a number of useful diagnostics can be constructed based on the idea of a *pseudo residual*; see, for example, Harrell ([2015](#ref-harrell2015regression)), Section 10\.4\. More recently, Liu and Zhang ([2018](#ref-dungang2018residuals)) introduced the concept of *surrogate residuals* that allows for residual\-based diagnostic procedures and plots not unlike those in traditional linear regression (e.g., checking for outliers and misspecified link functions). For an overview with examples in R using the **sure** package, see Brandon M. Greenwell et al. ([2018](#ref-greenwell2018residuals)). 5\.7 Feature interpretation --------------------------- Similar to linear regression, once our preferred logistic regression model is identified, we need to interpret how the features are influencing the results. As with normal linear regression models, variable importance for logistic regression models can be computed using the absolute value of the \\(z\\)\-statistic for each coefficient (albeit with the same issues previously discussed). Using `vip::vip()` we can extract our top 20 influential variables. Figure [5\.6](logistic-regression.html#fig:glm-vip) illustrates that `OverTime` is the most influential, followed by `JobSatisfaction` and `EnvironmentSatisfaction`. ``` vip(cv_model3, num_features = 20) ``` Figure 5\.6: Top 20 most important variables for the full logistic regression model (`cv_model3`). Similar to linear regression, logistic regression assumes a monotonic linear relationship between the features and the response. However, the linear relationship occurs on the logit scale; on the probability scale, the relationship will be nonlinear.
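The idea behind a partial dependence plot (PDP) can be sketched by hand: vary one feature over a grid of values while holding all other features at their observed values, and average the resulting predicted probabilities. A rough version of this calculation for `NumCompaniesWorked` (a sketch that assumes the `cv_model3` object from above and an illustrative 0–9 grid; it is not the code used to produce the book’s figures) looks like this:

```
# Manual partial dependence: average predicted probability of attrition
# as NumCompaniesWorked is varied over a grid, all else held at observed values
grid <- 0:9
pd <- sapply(grid, function(k) {
  tmp <- churn_train
  tmp$NumCompaniesWorked <- k
  mean(predict(cv_model3, tmp, type = "prob")$Yes)
})
data.frame(NumCompaniesWorked = grid, avg_prob_attrition = pd)
```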
This nonlinearity is illustrated by the PDP in Figure [5\.7](logistic-regression.html#fig:glm-pdp), which shows the functional relationship between the predicted probability of attrition and the number of companies an employee has worked for (`NumCompaniesWorked`) while taking into account the average effect of all the other predictors in the model. Employees who’ve experienced more employment changes tend to have a higher probability of making another change in the future. Furthermore, the PDPs for the top three categorical predictors (`OverTime`, `JobSatisfaction`, and `EnvironmentSatisfaction`) illustrate the change in predicted probability of attrition based on the employee’s status for each predictor. See the online supplemental material for the code to reproduce the plots in Figure [5\.7](logistic-regression.html#fig:glm-pdp). Figure 5\.7: Partial dependence plots for the first four most important variables. We can see how the predicted probability of attrition changes for each value of the influential predictors. 5\.8 Final thoughts ------------------- Logistic regression provides an alternative to linear regression for binary classification problems. However, similar to linear regression, logistic regression suffers from the many assumptions involved in the algorithm (e.g., a linear relationship between the predictors and the log\-odds, and sensitivity to multicollinearity). Moreover, we often have more than two classes to predict, which is commonly referred to as multinomial classification. Although multinomial extensions of logistic regression exist, the assumptions made only increase and, often, the stability of the coefficient estimates (and therefore the accuracy) decreases. Future chapters will discuss more advanced algorithms that provide a more natural and trustworthy approach to binary and multinomial classification prediction.
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/regularized-regression.html
Chapter 6 Regularized Regression ================================ Linear models (LMs) provide a simple, yet effective, approach to predictive modeling. Moreover, when certain assumptions required by LMs are met (e.g., constant variance), the estimated coefficients are unbiased and, of all linear unbiased estimates, have the lowest variance. However, in today’s world, data sets being analyzed typically contain a large number of features. As the number of features grows, certain assumptions typically break down and these models tend to overfit the training data, causing our out\-of\-sample error to increase. **Regularization** methods provide a means to constrain or *regularize* the estimated coefficients, which can reduce the variance and decrease out\-of\-sample error. 6\.1 Prerequisites ------------------ This chapter leverages the following packages. Most of these packages are playing a supporting role while the main emphasis will be on the **glmnet** package (Friedman et al. [2018](#ref-R-glmnet)). ``` # Helper packages library(recipes) # for feature engineering # Modeling packages library(glmnet) # for implementing regularized regression library(caret) # for automating the tuning process # Model interpretability packages library(vip) # for variable importance ``` To illustrate various regularization concepts we’ll continue working with the `ames_train` and `ames_test` data sets created in Section [2\.7](process.html#put-process-together); however, at the end of the chapter we’ll also apply regularized regression to the employee attrition data. 6\.2 Why regularize? -------------------- The easiest way to understand regularized regression is to explain how and why it is applied to ordinary least squares (OLS). The objective in OLS regression is to find the *hyperplane*[23](#fn23) (e.g., a straight line in two dimensions) that minimizes the sum of squared errors (SSE) between the observed and predicted response values (see Figure [6\.1](regularized-regression.html#fig:hyperplane) below). This means identifying the hyperplane that minimizes the grey lines, which measure the vertical distance between the observed (red dots) and predicted (blue line) response values. Figure 6\.1: Fitted regression line using Ordinary Least Squares. More formally, the objective function being minimized can be written as: \\\[\\begin{equation} \\tag{6\.1} \\text{minimize} \\left( SSE \= \\sum^n\_{i\=1} \\left(y\_i \- \\hat{y}\_i\\right)^2 \\right) \\end{equation}\\] As we discussed in Chapter [4](linear-regression.html#linear-regression), the OLS objective function performs quite well when our data adhere to a few key assumptions: * Linear relationship; * There are more observations (*n*) than features (*p*) (\\(n \> p\\)); * No or little multicollinearity. For classical statistical inference procedures (e.g., confidence intervals based on the classic *t*\-statistic) to be valid, we also need to make stronger assumptions regarding normality (of the errors) and homoscedasticity (i.e., constant error variance). Many real\-life data sets, like those common to *text mining* and *genomic studies*, are *wide*, meaning they contain a larger number of features (\\(p \> n\\)). As *p* increases, we’re more likely to violate some of the OLS assumptions and alternative approaches should be considered. This was briefly illustrated in Chapter [4](linear-regression.html#linear-regression) where the presence of multicollinearity diminished the interpretability of our estimated coefficients due to inflated variance.
By reducing multicollinearity, we were able to increase our model’s accuracy. Of course, multicollinearity can also occur when \\(n \> p\\). Having a large number of features invites additional issues in using classic regression models. For one, having a large number of features makes the model much less interpretable. Additionally, when \\(p \> n\\), there are many (in fact infinite) solutions to the OLS problem! In such cases, it is useful (and practical) to assume that a smaller subset of the features exhibits the strongest effects (something called the *bet on sparsity principle*; see Hastie, Tibshirani, and Wainwright [2015](#ref-hastie2015statistical), 2\). For this reason, we sometimes prefer estimation techniques that incorporate *feature selection*. One approach to this is called *hard thresholding* feature selection, which includes many of the traditional linear model selection approaches like *forward selection* and *backward elimination*. These procedures, however, can be computationally inefficient, do not scale well, and treat a feature as either in or out of the model (hence the name hard thresholding). In contrast, a more modern approach, called *soft thresholding*, slowly pushes the effects of irrelevant features toward zero, and in some cases, will zero out entire coefficients. As will be demonstrated, this can result in more accurate models that are also easier to interpret. With wide data (or data that exhibits multicollinearity), one alternative to OLS regression is to use regularized regression (also commonly referred to as *penalized* models or *shrinkage* methods as in J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) and Kuhn and Johnson ([2013](#ref-apm))) to constrain the total size of all the coefficient estimates. This constraint helps to reduce the magnitude and fluctuations of the coefficients and will reduce the variance of our model (at the expense of no longer being unbiased—a reasonable compromise). The objective function of a regularized regression model is similar to OLS, albeit with a penalty term \\(P\\). \\\[\\begin{equation} \\tag{6\.2} \\text{minimize} \\left( SSE \+ P \\right) \\end{equation}\\] This penalty parameter constrains the size of the coefficients such that the only way the coefficients can increase is if we experience a comparable decrease in the sum of squared errors (SSE). This concept generalizes to all GLM models (e.g., logistic and Poisson regression) and even some *survival models*. So far, we have been discussing OLS and the sum of squared errors loss function. However, different models within the GLM family have different loss functions (see Chapter 4 of J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl))). Yet we can think of the penalty parameter all the same—it constrains the size of the coefficients such that the only way the coefficients can increase is if we experience a comparable decrease in the model’s loss function. There are three common penalty parameters we can implement: 1. Ridge; 2. Lasso (or LASSO); 3. Elastic net (or ENET), which is a combination of ridge and lasso. ### 6\.2\.1 Ridge penalty Ridge regression (Hoerl and Kennard [1970](#ref-hoerl1970ridge)) controls the estimated coefficients by adding \\(\\lambda \\sum^p\_{j\=1} \\beta\_j^2\\) to the objective function.
\\\[\\begin{equation} \\tag{6\.3} \\text{minimize } \\left( SSE \+ \\lambda \\sum^p\_{j\=1} \\beta\_j^2 \\right) \\end{equation}\\] The size of this penalty, referred to as \\(L^2\\) (or Euclidean) norm, can take on a wide range of values, which is controlled by the *tuning parameter* \\(\\lambda\\). When \\(\\lambda \= 0\\) there is no effect and our objective function equals the normal OLS regression objective function of simply minimizing SSE. However, as \\(\\lambda \\rightarrow \\infty\\), the penalty becomes large and forces the coefficients toward zero (but not all the way). This is illustrated in Figure [6\.2](regularized-regression.html#fig:ridge-coef-example) where exemplar coefficients have been regularized with \\(\\lambda\\) ranging from 0 to over 8,000\. Figure 6\.2: Ridge regression coefficients for 15 exemplar predictor variables as \\(\\lambda\\) grows from \\(0 \\rightarrow \\infty\\). As \\(\\lambda\\) grows larger, our coefficient magnitudes are more constrained. Although these coefficients were scaled and centered prior to the analysis, you will notice that some are quite large when \\(\\lambda\\) is near zero. Furthermore, you’ll notice that feature `x1` has a large negative parameter that fluctuates until \\(\\lambda \\approx 7\\) where it then continuously shrinks toward zero. This is indicative of multicollinearity and likely illustrates that constraining our coefficients with \\(\\lambda \> 7\\) may reduce the variance, and therefore the error, in our predictions. In essence, the ridge regression model pushes many of the correlated features toward each other rather than allowing for one to be wildly positive and the other wildly negative. In addition, many of the less\-important features also get pushed toward zero. This helps to provide clarity in identifying the important signals in our data (i.e., the labeled features in Figure [6\.2](regularized-regression.html#fig:ridge-coef-example)). However, ridge regression does not perform feature selection and will retain **all** available features in the final model. Therefore, a ridge model is good if you believe there is a need to retain all features in your model yet reduce the noise that less influential variables may create (e.g., in smaller data sets with severe multicollinearity). If greater interpretation is necessary and many of the features are redundant or irrelevant then a lasso or elastic net penalty may be preferable. ### 6\.2\.2 Lasso penalty The lasso (*least absolute shrinkage and selection operator*) penalty (Tibshirani [1996](#ref-tibshirani1996regression)) is an alternative to the ridge penalty that requires only a small modification. The only difference is that we swap out the \\(L^2\\) norm for an \\(L^1\\) norm: \\(\\lambda \\sum^p\_{j\=1} \| \\beta\_j\|\\): \\\[\\begin{equation} \\tag{6\.4} \\text{minimize } \\left( SSE \+ \\lambda \\sum^p\_{j\=1} \| \\beta\_j \| \\right) \\end{equation}\\] Whereas the ridge penalty pushes variables to *approximately but not equal to zero*, the lasso penalty will actually push coefficients all the way to zero as illustrated in Figure [6\.3](regularized-regression.html#fig:lasso-coef-example). Switching to the lasso penalty not only improves the model but it also conducts automated feature selection. Figure 6\.3: Lasso regression coefficients as \\(\\lambda\\) grows from \\(0 \\rightarrow \\infty\\). 
In the figure above we see that when \\(\\lambda \< 0\.01\\), all 15 variables are included in the model; when \\(\\lambda \\approx 0\.5\\), 9 variables are retained; and when \\(\\log\\left(\\lambda\\right) \= 1\\), only 5 variables are retained. Consequently, when a data set has many features, lasso can be used to identify and extract those features with the largest (and most consistent) signal. ### 6\.2\.3 Elastic nets A generalization of the ridge and lasso penalties, called the *elastic net* (Zou and Hastie [2005](#ref-zou2005regularization)), combines the two penalties: \\\[\\begin{equation} \\tag{6\.5} \\text{minimize } \\left( SSE \+ \\lambda\_1 \\sum^p\_{j\=1} \\beta\_j^2 \+ \\lambda\_2 \\sum^p\_{j\=1} \| \\beta\_j \| \\right) \\end{equation}\\] Although lasso models perform feature selection, when two strongly correlated features are pushed towards zero, one may be pushed fully to zero while the other remains in the model. Furthermore, the process of one being in and one being out is not very systematic. In contrast, the ridge regression penalty is a little more effective in systematically handling correlated features together. Consequently, the advantage of the elastic net penalty is that it enables effective regularization via the ridge penalty with the feature selection characteristics of the lasso penalty. Figure 6\.4: Elastic net coefficients as \\(\\lambda\\) grows from \\(0 \\rightarrow \\infty\\). 6\.3 Implementation ------------------- First, we illustrate an implementation of regularized regression using the direct engine **glmnet**. This will provide you with a strong sense of what is happening with a regularized model. Realize there are other implementations available (e.g., **h2o**, **elasticnet**, **penalized**). Then, in Section [6\.4](regularized-regression.html#regression-glmnet-tune), we’ll demonstrate how to apply a regularized model so we can properly compare it with our previous predictive models. The **glmnet** package is extremely efficient and fast, even on very large data sets (mostly due to its use of Fortran to solve the lasso problem via *coordinate descent*); note, however, that it only accepts the non\-formula XY interface (see Section [2\.3\.1](process.html#many-formula-interfaces)) so prior to modeling we need to separate our feature and target sets. The following uses `model.matrix` to dummy encode our feature set (see `Matrix::sparse.model.matrix` for increased efficiency on larger data sets). We also \\(\\log\\) transform the response variable, which is not required; however, parametric models such as regularized regression are sensitive to skewed response values so transforming can often improve predictive performance. ``` # Create training feature matrices # we use model.matrix(...)[, -1] to discard the intercept X <- model.matrix(Sale_Price ~ ., ames_train)[, -1] # transform y with log transformation Y <- log(ames_train$Sale_Price) ``` To apply a regularized model we can use the `glmnet::glmnet()` function. The `alpha` parameter tells **glmnet** to perform a ridge (`alpha = 0`), lasso (`alpha = 1`), or elastic net (`0 < alpha < 1`) model. By default, **glmnet** will do two things that you should be aware of: 1. Since regularized methods apply a penalty to the coefficients, we need to ensure our coefficients are on a common scale. If not, then predictors with naturally larger values (e.g., total square footage) will be penalized more than predictors with naturally smaller values (e.g., total number of rooms).
By default, **glmnet** automatically standardizes your features. If you standardize your predictors prior to **glmnet** you can turn this argument off with `standardize = FALSE`. 2. **glmnet** will fit ridge models across a wide range of \\(\\lambda\\) values, which is illustrated in Figure [6\.5](regularized-regression.html#fig:ridge1). ``` # Apply ridge regression to ames data ridge <- glmnet( x = X, y = Y, alpha = 0 ) plot(ridge, xvar = "lambda") ``` Figure 6\.5: Coefficients for our ridge regression model as \\(\\lambda\\) grows from \\(0 \\rightarrow \\infty\\). We can see the exact \\(\\lambda\\) values applied with `ridge$lambda`. Although you can specify your own \\(\\lambda\\) values, by default **glmnet** applies 100 \\(\\lambda\\) values that are data derived. **glmnet** can auto\-generate the appropriate \\(\\lambda\\) values based on the data; the vast majority of the time you will have little need to adjust this default. We can also access the coefficients for a particular model using `coef()`. **glmnet** stores all the coefficients for each model in order of largest to smallest \\(\\lambda\\). Here we just peek at the two largest coefficients (which correspond to `Latitude` \& `Overall_QualVery_Excellent`) for the largest (285\.8054696\) and smallest (0\.0285805\) \\(\\lambda\\) values. You can see how the largest \\(\\lambda\\) value has pushed most of these coefficients to nearly 0\. ``` # lambdas applied to penalty parameter ridge$lambda %>% head() ## [1] 285.8055 260.4153 237.2807 216.2014 196.9946 179.4942 # small lambda results in large coefficients coef(ridge)[c("Latitude", "Overall_QualVery_Excellent"), 100] ## Latitude Overall_QualVery_Excellent ## 0.4048216 0.1423770 # large lambda results in small coefficients coef(ridge)[c("Latitude", "Overall_QualVery_Excellent"), 1] ## Latitude ## 0.0000000000000000000000000000000000063823847 ## Overall_QualVery_Excellent ## 0.0000000000000000000000000000000000009838114 ``` At this point, we do not understand how much improvement we are experiencing in our loss function across various \\(\\lambda\\) values. 6\.4 Tuning ----------- Recall that \\(\\lambda\\) is a tuning parameter that helps to control our model from over\-fitting to the training data. To identify the optimal \\(\\lambda\\) value we can use *k*\-fold cross\-validation (CV). `glmnet::cv.glmnet()` can perform *k*\-fold CV, and by default, performs 10\-fold CV. Below we perform a CV **glmnet** model with both a ridge and lasso penalty separately: By default, `glmnet::cv.glmnet()` uses MSE as the loss function but you can also use mean absolute error (MAE) for continuous outcomes by changing the `type.measure` argument; see `?glmnet::cv.glmnet()` for more details. ``` # Apply CV ridge regression to Ames data ridge <- cv.glmnet( x = X, y = Y, alpha = 0 ) # Apply CV lasso regression to Ames data lasso <- cv.glmnet( x = X, y = Y, alpha = 1 ) # plot results par(mfrow = c(1, 2)) plot(ridge, main = "Ridge penalty\n\n") plot(lasso, main = "Lasso penalty\n\n") ``` Figure 6\.6: 10\-fold CV MSE for a ridge and lasso model. First dotted vertical line in each plot represents the \\(\\lambda\\) with the smallest MSE and the second represents the \\(\\lambda\\) with an MSE within one standard error of the minimum MSE. Figure [6\.6](regularized-regression.html#fig:ridge-lasso-cv-models) illustrates the 10\-fold CV MSE across all the \\(\\lambda\\) values. 
In both models we see a slight improvement in the MSE as our penalty \\(\\log(\\lambda)\\) gets larger, suggesting that a regular OLS model likely overfits the training data. But as we constrain it further (i.e., continue to increase the penalty), our MSE starts to increase. The numbers across the top of the plot refer to the number of features in the model. Ridge regression does not force any variables to exactly zero, so all features will remain in the model, but we see the number of variables retained in the lasso model decrease as the penalty increases. The first and second vertical dashed lines represent the \\(\\lambda\\) value with the minimum MSE and the largest \\(\\lambda\\) value within one standard error of it. The minimum MSE for our ridge model is 0\.01748 (produced when \\(\\lambda \=\\) 0\.10513\), whereas the minimum MSE for our lasso model is 0\.01754 (produced when \\(\\lambda \=\\) 0\.00249\). ``` # Ridge model min(ridge$cvm) # minimum MSE ## [1] 0.01748122 ridge$lambda.min # lambda for this min MSE ## [1] 0.1051301 ridge$cvm[ridge$lambda == ridge$lambda.1se] # 1-SE rule ## [1] 0.01975572 ridge$lambda.1se # lambda for this MSE ## [1] 0.4657917 # Lasso model min(lasso$cvm) # minimum MSE ## [1] 0.01754244 lasso$lambda.min # lambda for this min MSE ## [1] 0.00248579 lasso$nzero[lasso$lambda == lasso$lambda.min] # No. of coef | Min MSE ## s51 ## 139 lasso$cvm[lasso$lambda == lasso$lambda.1se] # 1-SE rule ## [1] 0.01979976 lasso$lambda.1se # lambda for this MSE ## [1] 0.01003518 lasso$nzero[lasso$lambda == lasso$lambda.1se] # No. of coef | 1-SE MSE ## s36 ## 64 ``` We can assess this visually. Figure [6\.7](regularized-regression.html#fig:ridge-lasso-cv-viz-results) plots the estimated coefficients across the range of \\(\\lambda\\) values. The dashed red line represents the \\(\\lambda\\) value with the smallest MSE and the dashed blue line represents the largest \\(\\lambda\\) value that falls within one standard error of the minimum MSE. This shows you how much we can constrain the coefficients while still maximizing predictive accuracy. Above, we saw that both ridge and lasso penalties provide similar MSEs; however, these plots illustrate that ridge is still using all 294 features whereas the lasso model can get a similar MSE while reducing the feature set from 294 down to 139\. However, there will be some variability with this MSE and we can reasonably assume that we can achieve a similar MSE with a slightly more constrained model that uses only 64 features. Although this lasso model does not offer significant improvement over the ridge model, we get approximately the same accuracy by using only 64 features! If describing and interpreting the predictors is an important component of your analysis, this may significantly aid your endeavor. ``` # Ridge model ridge_min <- glmnet( x = X, y = Y, alpha = 0 ) # Lasso model lasso_min <- glmnet( x = X, y = Y, alpha = 1 ) par(mfrow = c(1, 2)) # plot ridge model plot(ridge_min, xvar = "lambda", main = "Ridge penalty\n\n") abline(v = log(ridge$lambda.min), col = "red", lty = "dashed") abline(v = log(ridge$lambda.1se), col = "blue", lty = "dashed") # plot lasso model plot(lasso_min, xvar = "lambda", main = "Lasso penalty\n\n") abline(v = log(lasso$lambda.min), col = "red", lty = "dashed") abline(v = log(lasso$lambda.1se), col = "blue", lty = "dashed") ``` Figure 6\.7: Coefficients for our ridge and lasso models.
First dotted vertical line in each plot represents the \\(\\lambda\\) with the smallest MSE and the second represents the \\(\\lambda\\) with an MSE within one standard error of the minimum MSE. So far we’ve implemented a pure ridge and pure lasso model. However, we can implement an elastic net the same way as the ridge and lasso models, by adjusting the `alpha` parameter. Any `alpha` value between 0–1 will perform an elastic net. When `alpha = 0.5`, we perform an equal combination of penalties, whereas `alpha` \\(\< 0\.5\\) will have a heavier ridge penalty applied and `alpha` \\(\> 0\.5\\) will have a heavier lasso penalty. Figure 6\.8: Coefficients for various penalty parameters. Often, the optimal model contains an `alpha` somewhere between 0–1; thus, we want to tune both the \\(\\lambda\\) and the `alpha` parameters. As in Chapters [4](linear-regression.html#linear-regression) and [5](logistic-regression.html#logistic-regression), we can use the **caret** package to automate the tuning process. This ensures that any feature engineering is appropriately applied within each resample. The following performs a grid search over 10 values of the alpha parameter between 0–1 and 10 values of the lambda parameter from the lowest to highest lambda values identified by **glmnet**. This grid search took roughly **71 seconds** to compute. The following snippet of code shows that the model that minimized RMSE used an alpha of 0\.1 and \\(\\lambda\\) of 0\.02\. The minimum RMSE of 0\.1277585 (\\(MSE \= 0\.1277585^2 \= 0\.01632223\\)) slightly improves upon the full ridge and lasso models produced earlier. Figure [6\.9](regularized-regression.html#fig:glmnet-tuning-grid) illustrates how the combination of alpha values (\\(x\\)\-axis) and \\(\\lambda\\) values (line color) influences the RMSE. ``` # for reproducibility set.seed(123) # grid search across cv_glmnet <- train( x = X, y = Y, method = "glmnet", preProc = c("zv", "center", "scale"), trControl = trainControl(method = "cv", number = 10), tuneLength = 10 ) # model with lowest RMSE cv_glmnet$bestTune ## alpha lambda ## 7 0.1 0.02007035 # results for model with lowest RMSE cv_glmnet$results %>% filter(alpha == cv_glmnet$bestTune$alpha, lambda == cv_glmnet$bestTune$lambda) ## alpha lambda RMSE Rsquared MAE RMSESD RsquaredSD ## 1 0.1 0.02007035 0.1277585 0.9001487 0.08102427 0.02235901 0.0346677 ## MAESD ## 1 0.005667366 # plot cross-validated RMSE ggplot(cv_glmnet) ``` Figure 6\.9: The 10\-fold cross\-validation RMSE across 10 alpha values (x\-axis) and 10 lambda values (line color). So how does this compare to our previous best model for the Ames data set? Keep in mind that for this chapter we \\(\\log\\) transformed the response variable (`Sale_Price`). Consequently, to provide a fair comparison to our previously obtained PLS model’s RMSE of $25,460, we need to re\-transform our predicted values. The following illustrates that our optimal regularized model achieved an RMSE of $19,905\. Introducing a penalty parameter to constrain the coefficients provided quite an improvement over our previously obtained dimension reduction approach. ``` # predict sales price on training data pred <- predict(cv_glmnet, X) # compute RMSE of transformed predicted RMSE(exp(pred), exp(Y)) ## [1] 19905.05 ``` 6\.5 Feature interpretation --------------------------- Variable importance for regularized models provides a similar interpretation as in linear (or logistic) regression.
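Since this interpretation rests on the coefficients themselves, it can be helpful to inspect them directly; the short sketch below (assuming the `cv_glmnet` object trained above) extracts the coefficients of the final **glmnet** model at the tuned \\(\\lambda\\) and sorts them by absolute magnitude.

```
# Coefficients of the final glmnet model at the tuned lambda value
beta <- as.matrix(coef(cv_glmnet$finalModel, s = cv_glmnet$bestTune$lambda))

# Drop the intercept and sort by absolute magnitude
beta_df <- data.frame(feature = rownames(beta), coefficient = beta[, 1])
beta_df <- beta_df[beta_df$feature != "(Intercept)", ]
head(beta_df[order(-abs(beta_df$coefficient)), ], 10)
```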
Importance is determined by magnitude of the standardized coefficients and we can see in Figure [6\.10](regularized-regression.html#fig:regularize-vip) some of the same features that were considered highly influential in our PLS model, albeit in differing order (i.e., `Gr_Liv_Area`, `Total_Bsmt_SF`, `Overall_Qual`, `Year_Built`).

```
vip(cv_glmnet, num_features = 20, geom = "point")
```

Figure 6\.10: Top 20 most important variables for the optimal regularized regression model.

Similar to linear and logistic regression, the relationship between the features and response is monotonic linear. However, since we modeled our response with a log transformation, the estimated relationships will still be monotonic but non\-linear on the original response scale. Figure [6\.11](regularized-regression.html#fig:regularized-top4-pdp) illustrates the relationship between the top four most influential variables (i.e., largest absolute coefficients) and the non\-transformed sales price. All relationships are positive in nature: as the values in these features increase (or, in the case of `Overall_QualExcellent`, when a home has this rating), the average predicted sales price increases.

Figure 6\.11: Partial dependence plots for the first four most important variables.

However, note that one of the top 20 most influential variables is `Overall_QualPoor`. When a home has an overall quality rating of poor, we see that the average predicted sales price decreases versus when it has some other overall quality rating. Consequently, it's important not only to look at the variable importance ranking, but also to observe the positive or negative nature of the relationship.

Figure 6\.12: Partial dependence plot for when the overall quality of a home is poor (1\) versus not poor (0\).

6\.6 Attrition data
-------------------

We saw that regularization significantly improved our predictive accuracy for the Ames data set, but how about for the employee attrition example? In Chapter [5](logistic-regression.html#logistic-regression) we saw a maximum CV accuracy of 86\.3% for our logistic regression model. We see a little improvement in the following with some preprocessing; however, performing a regularized logistic regression model provides us with an additional 0\.8% improvement in accuracy (likely within the margin of error).

```
df <- attrition %>% mutate_if(is.ordered, factor, ordered = FALSE)

# Create training (70%) and test (30%) sets for the
# rsample::attrition data. Use set.seed for reproducibility
set.seed(123)
churn_split <- initial_split(df, prop = .7, strata = "Attrition")
train <- training(churn_split)
test  <- testing(churn_split)

# train logistic regression model
set.seed(123)
glm_mod <- train(
  Attrition ~ ., 
  data = train, 
  method = "glm",
  family = "binomial",
  preProc = c("zv", "center", "scale"),
  trControl = trainControl(method = "cv", number = 10)
)

# train regularized logistic regression model
set.seed(123)
penalized_mod <- train(
  Attrition ~ ., 
  data = train, 
  method = "glmnet",
  family = "binomial",
  preProc = c("zv", "center", "scale"),
  trControl = trainControl(method = "cv", number = 10),
  tuneLength = 10
)

# extract out of sample performance measures
summary(resamples(list(
  logistic_model = glm_mod, 
  penalized_model = penalized_mod
)))$statistics$Accuracy
##                      Min.   1st Qu.    Median      Mean   3rd Qu.
## logistic_model  0.8365385 0.8495146 0.8792476 0.8757893 0.8907767
## penalized_model 0.8446602 0.8759280 0.8834951 0.8835759 0.8915469
##                      Max. NA's
## logistic_model  0.9313725    0
## penalized_model 0.9411765    0
```
6\.7 Final thoughts
-------------------

Regularized regression provides many great benefits over traditional GLMs when applied to large data sets with lots of features. It provides a great option for handling the \\(n \> p\\) problem, helps minimize the impact of multicollinearity, and can perform automated feature selection. It also has relatively few hyperparameters which makes it easy to tune, computationally efficient compared to other algorithms discussed in later chapters, and memory efficient.

However, regularized regression does require some feature preprocessing. Notably, all inputs must be numeric; however, some packages (e.g., **caret** and **h2o**) automate this process. They cannot automatically handle missing data, which requires you to remove or impute them prior to modeling. Similar to GLMs, they are also not robust to outliers in both the feature and target. Lastly, regularized regression models still assume a monotonic linear relationship (always increasing or decreasing in a linear fashion). It is also up to the analyst whether or not to include specific interaction effects.
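As a concrete illustration of the preprocessing requirements mentioned above, the following is a minimal base R sketch (not part of the original chapter) of the kind of input **glmnet** expects: median imputation for any missing numeric values and dummy encoding of categorical features. The **recipes**, **caret**, or **h2o** workflows used earlier can automate the same steps; the object names here (`ames_prepped`, `X_prepped`, `Y_prepped`) are illustrative, and the Ames training data may not actually contain missing values, so the imputation step is purely for demonstration.

```
# impute missing numeric values with the column median (glmnet errors on NAs)
ames_prepped <- ames_train
num_cols <- vapply(ames_prepped, is.numeric, logical(1))
ames_prepped[num_cols] <- lapply(ames_prepped[num_cols], function(x) {
  x[is.na(x)] <- median(x, na.rm = TRUE)
  x
})

# dummy encode categorical features and drop the intercept column
X_prepped <- model.matrix(Sale_Price ~ ., ames_prepped)[, -1]
Y_prepped <- log(ames_prepped$Sale_Price)
```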
Chapter 7 Multivariate Adaptive Regression Splines ================================================== The previous chapters discussed algorithms that are intrinsically linear. Many of these models can be adapted to nonlinear patterns in the data by manually adding nonlinear model terms (e.g., squared terms, interaction effects, and other transformations of the original features); however, to do so you the analyst must know the specific nature of the nonlinearities and interactions *a priori*. Alternatively, there are numerous algorithms that are inherently nonlinear. When using these models, the exact form of the nonlinearity does not need to be known explicitly or specified prior to model training. Rather, these algorithms will search for, and discover, nonlinearities and interactions in the data that help maximize predictive accuracy. This chapter discusses *multivariate adaptive regression splines* (MARS) (Friedman [1991](#ref-friedman1991multivariate)), an algorithm that automatically creates a piecewise linear model which provides an intuitive stepping block into nonlinearity after grasping the concept of multiple linear regression. Future chapters will focus on other nonlinear algorithms. 7\.1 Prerequisites ------------------ For this chapter we will use the following packages: ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome plotting # Modeling packages library(earth) # for fitting MARS models library(caret) # for automating the tuning process # Model interpretability packages library(vip) # for variable importance library(pdp) # for variable relationships ``` To illustrate various concepts we’ll continue with the `ames_train` and `ames_test` data sets created in Section [2\.7](process.html#put-process-together). 7\.2 The basic idea ------------------- In the previous chapters, we focused on linear models (where the analyst has to explicitly specify any nonlinear relationships and interaction effects). We illustrated some of the advantages of linear models such as their ease and speed of computation and also the intuitive nature of interpreting their coefficients. However, linear models make a strong assumption about linearity, and this assumption is often a poor one, which can affect predictive accuracy. We can extend linear models to capture any non\-linear relationship. Typically, this is done by explicitly including polynomial terms (e.g., \\(x\_i^2\\)) or step functions. Polynomial regression is a form of regression in which the relationship between \\(X\\) and \\(Y\\) is modeled as a \\(d\\)th degree polynomial in \\(X\\). For example, Equation [(7\.1\)](mars.html#eq:poly) represents a polynomial regression function where \\(Y\\) is modeled as a \\(d\\)\-th degree polynomial in \\(X\\). Generally speaking, it is unusual to use \\(d\\) greater than 3 or 4 as the larger \\(d\\) becomes, the easier the function fit becomes overly flexible and oddly shaped…especially near the boundaries of the range of \\(X\\) values. Increasing \\(d\\) also tends to increase the presence of multicollinearity. \\\[\\begin{equation} \\tag{7\.1} y\_i \= \\beta\_0 \+ \\beta\_1 x\_i \+ \\beta\_2 x^2\_i \+ \\beta\_3 x^3\_i \\dots \+ \\beta\_d x^d\_i \+ \\epsilon\_i, \\end{equation}\\] An alternative to polynomials is to use step functions. Whereas polynomial functions impose a global non\-linear relationship, step functions break the range of \\(X\\) into bins, and fit a simple constant (e.g., the mean response) in each. 
This amounts to converting a continuous feature into an ordered categorical variable such that our linear regression function is converted to Equation [(7\.2\)](mars.html#eq:steps) \\\[\\begin{equation} \\tag{7\.2} y\_i \= \\beta\_0 \+ \\beta\_1 C\_1(x\_i) \+ \\beta\_2 C\_2(x\_i) \+ \\beta\_3 C\_3(x\_i) \\dots \+ \\beta\_d C\_d(x\_i) \+ \\epsilon\_i, \\end{equation}\\] where \\(C\_1(x\_i)\\) represents \\(x\_i\\) values ranging from \\(c\_1 \\leq x\_i \< c\_2\\), \\(C\_2\\left(x\_i\\right)\\) represents \\(x\_i\\) values ranging from \\(c\_2 \\leq x\_i \< c\_3\\), \\(\\dots\\), \\(C\_d\\left(x\_i\\right)\\) represents \\(x\_i\\) values ranging from \\(c\_{d\-1} \\leq x\_i \< c\_d\\). Figure [7\.1](mars.html#fig:nonlinear-comparisons) contrasts linear, polynomial, and step function fits for non\-linear, non\-monotonic simulated data. Figure 7\.1: Blue line represents predicted (`y`) values as a function of `x` for alternative approaches to modeling explicit nonlinear regression patterns. (A) Traditional linear regression approach does not capture any nonlinearity unless the predictor or response is transformed (i.e. log transformation). (B) Degree\-2 polynomial, (C) Degree\-3 polynomial, (D) Step function cutting `x` into six categorical levels. Although useful, the typical implementation of polynomial regression and step functions require the user to explicitly identify and incorporate which variables should have what specific degree of interaction or at what points of a variable \\(X\\) should cut points be made for the step functions. Considering many data sets today can easily contain 50, 100, or more features, this would require an enormous and unnecessary time commitment from an analyst to determine these explicit non\-linear settings. ### 7\.2\.1 Multivariate adaptive regression splines Multivariate adaptive regression splines (MARS) provide a convenient approach to capture the nonlinear relationships in the data by assessing cutpoints (*knots*) similar to step functions. The procedure assesses each data point for each predictor as a knot and creates a linear regression model with the candidate feature(s). For example, consider our non\-linear, non\-monotonic data above where \\(Y \= f\\left(X\\right)\\). The MARS procedure will first look for the single point across the range of `X` values where two different linear relationships between `Y` and `X` achieve the smallest error (e.g., smallest SSE). What results is known as a hinge function \\(h\\left(x\-a\\right)\\), where \\(a\\) is the cutpoint value. For a single knot (Figure [7\.2](mars.html#fig:examples-of-multiple-knots) (A)), our hinge function is \\(h\\left(\\text{x}\-1\.183606\\right)\\) such that our two linear models for `Y` are \\\[\\begin{equation} \\tag{7\.3} \\text{y} \= \\begin{cases} \\beta\_0 \+ \\beta\_1(1\.183606 \- \\text{x}) \& \\text{x} \< 1\.183606, \\\\ \\beta\_0 \+ \\beta\_1(\\text{x} \- 1\.183606\) \& \\text{x} \> 1\.183606 \\end{cases} \\end{equation}\\] Once the first knot has been found, the search continues for a second knot which is found at \\(x \= 4\.898114\\) (Figure [7\.2](mars.html#fig:examples-of-multiple-knots) (B)). 
This results in three linear models for `y`: \\\[\\begin{equation} \\tag{7\.4} \\text{y} \= \\begin{cases} \\beta\_0 \+ \\beta\_1(1\.183606 \- \\text{x}) \& \\text{x} \< 1\.183606, \\\\ \\beta\_0 \+ \\beta\_1(\\text{x} \- 1\.183606\) \& \\text{x} \> 1\.183606 \\quad \\\& \\quad \\text{x} \< 4\.898114, \\\\ \\beta\_0 \+ \\beta\_1(4\.898114 \- \\text{x}) \& \\text{x} \> 4\.898114 \\end{cases} \\end{equation}\\] Figure 7\.2: Examples of fitted regression splines of one (A), two (B), three (C), and four (D) knots. This procedure continues until many knots are found, producing a (potentially) highly non\-linear prediction equation. Although including many knots may allow us to fit a really good relationship with our training data, it may not generalize very well to new, unseen data. Consequently, once the full set of knots has been identified, we can sequentially remove knots that do not contribute significantly to predictive accuracy. This process is known as “pruning” and we can use cross\-validation, as we have with the previous models, to find the optimal number of knots. 7\.3 Fitting a basic MARS model ------------------------------- We can fit a direct engine MARS model with the **earth** package (Trevor Hastie and Thomas Lumley’s leaps wrapper. [2019](#ref-R-earth)). By default, `earth::earth()` will assess all potential knots across all supplied features and then will prune to the optimal number of knots based on an expected change in \\(R^2\\) (for the training data) of less than 0\.001\. This calculation is performed by the Generalized cross\-validation (GCV) procedure, which is a computational shortcut for linear models that produces an approximate leave\-one\-out cross\-validation error metric (Golub, Heath, and Wahba [1979](#ref-golub1979generalized)). The term “MARS” is trademarked and licensed exclusively to Salford Systems: [https://www.salford\-systems.com](https://www.salford-systems.com). We can use MARS as an abbreviation; however, it cannot be used for competing software solutions. This is why the R package uses the name **earth**. Although, according to the package documentation, a backronym for “earth” is “Enhanced Adaptive Regression Through Hinges”. The following applies a basic MARS model to our **ames** example. The results show us the final models GCV statistic, generalized \\(R^2\\) (GRSq), and more. ``` # Fit a basic MARS model mars1 <- earth( Sale_Price ~ ., data = ames_train ) # Print model summary print(mars1) ## Selected 36 of 39 terms, and 27 of 307 predictors ## Termination condition: RSq changed by less than 0.001 at 39 terms ## Importance: Gr_Liv_Area, Year_Built, Total_Bsmt_SF, ... ## Number of terms at each degree of interaction: 1 35 (additive model) ## GCV 557038757 RSS 1.065869e+12 GRSq 0.9136059 RSq 0.9193997 ``` It also shows us that 36 of 39 terms were used from 27 of the 307 original predictors. But what does this mean? If we were to look at all the coefficients, we would see that there are 36 terms in our model (including the intercept). These terms include hinge functions produced from the original 307 predictors (307 predictors because the model automatically dummy encodes categorical features). Looking at the first 10 terms in our model, we see that `Gr_Liv_Area` is included with a knot at 2787 (the coefficient for \\(h\\left(2787\-\\text{Gr\_Liv\_Area}\\right)\\) is \-50\.84\), `Year_Built` is included with a knot at 2004, etc. You can check out all the coefficients with `summary(mars1)` or `coef(mars1)`. 
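Before inspecting the full coefficient table, it may help to see what a hinge function actually computes: \\(h\\left(x \- a\\right)\\) equals \\(x \- a\\) whenever that difference is positive and zero otherwise. The following is a small illustrative sketch, not part of the original text; the knot at 2,787 square feet and the rounded coefficient of \-50\.84 are taken from the model output described above, and the intercept and all other terms are ignored for simplicity.

```
# a hinge function simply truncates its argument at zero
h <- function(u) pmax(0, u)

# contribution of the h(2787 - Gr_Liv_Area) term for two hypothetical homes
gr_liv_area <- c(1500, 3200)
-50.84 * h(2787 - gr_liv_area)
## [1] -65431.08      0.00
```

For the 1,500 square foot home this term pulls the prediction down by roughly $65,431 relative to a home at the knot, while for the 3,200 square foot home the hinge is zero and the term drops out of the prediction entirely.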
``` summary(mars1) %>% .$coefficients %>% head(10) ## Sale_Price ## (Intercept) 223113.83301 ## h(2787-Gr_Liv_Area) -50.84125 ## h(Year_Built-2004) 3405.59787 ## h(2004-Year_Built) -382.79774 ## h(Total_Bsmt_SF-1302) 56.13784 ## h(1302-Total_Bsmt_SF) -29.72017 ## h(Bsmt_Unf_SF-534) -24.36493 ## h(534-Bsmt_Unf_SF) 16.61145 ## Overall_QualExcellent 80543.25421 ## Overall_QualVery_Excellent 118297.79515 ``` The plot method for MARS model objects provides useful performance and residual plots. Figure [7\.3](mars.html#fig:basic-mod-plot) illustrates the model selection plot that graphs the GCV \\(R^2\\) (left\-hand \\(y\\)\-axis and solid black line) based on the number of terms retained in the model (\\(x\\)\-axis) which are constructed from a certain number of original predictors (right\-hand \\(y\\)\-axis). The vertical dashed lined at 36 tells us the optimal number of terms retained where marginal increases in GCV \\(R^2\\) are less than 0\.001\. ``` plot(mars1, which = 1) ``` Figure 7\.3: Model summary capturing GCV \\(R^2\\) (left\-hand y\-axis and solid black line) based on the number of terms retained (x\-axis) which is based on the number of predictors used to make those terms (right\-hand side y\-axis). For this model, 35 non\-intercept terms were retained which are based on 27 predictors. Any additional terms retained in the model, over and above these 35, result in less than 0\.001 improvement in the GCV \\(R^2\\). In addition to pruning the number of knots, `earth::earth()` allows us to also assess potential interactions between different hinge functions. The following illustrates this by including a `degree = 2` argument. You can see that now our model includes interaction terms between a maximum of two hinge functions (e.g., `h(2004-Year_Built)*h(Total_Bsmt_SF-1330)` represents an interaction effect for those houses built after 2004 and has more than 1,330 square feet of basement space). ``` # Fit a basic MARS model mars2 <- earth( Sale_Price ~ ., data = ames_train, degree = 2 ) # check out the first 10 coefficient terms summary(mars2) %>% .$coefficients %>% head(10) ## Sale_Price ## (Intercept) 2.331420e+05 ## h(Gr_Liv_Area-2787) 1.084015e+02 ## h(2787-Gr_Liv_Area) -6.178182e+01 ## h(Year_Built-2004) 8.088153e+03 ## h(2004-Year_Built) -9.529436e+02 ## h(Total_Bsmt_SF-1302) 1.131967e+02 ## h(1302-Total_Bsmt_SF) -4.083722e+01 ## h(2004-Year_Built)*h(Total_Bsmt_SF-1330) -1.553894e+00 ## h(2004-Year_Built)*h(1330-Total_Bsmt_SF) 1.983699e-01 ## Condition_1PosN*h(Gr_Liv_Area-2787) -4.020535e+02 ``` 7\.4 Tuning ----------- There are two important tuning parameters associated with our MARS model: the maximum degree of interactions and the number of terms retained in the final model. We need to perform a grid search to identify the optimal combination of these hyperparameters that minimize prediction error (the above pruning process was based only on an approximation of CV model performance on the training data rather than an exact *k*\-fold CV process). As in previous chapters, we’ll perform a CV grid search to identify the optimal hyperparameter mix. Below, we set up a grid that assesses 30 different combinations of interaction complexity (`degree`) and the number of terms to retain in the final model (`nprune`). Rarely is there any benefit in assessing greater than 3\-rd degree interactions and we suggest starting out with 10 evenly spaced values for `nprune` and then you can always zoom in to a region once you find an approximate optimal solution. 
```
# create a tuning grid
hyper_grid <- expand.grid(
  degree = 1:3, 
  nprune = seq(2, 100, length.out = 10) %>% floor()
)

head(hyper_grid)
##   degree nprune
## 1      1      2
## 2      2      2
## 3      3      2
## 4      1     12
## 5      2     12
## 6      3     12
```

As in the previous chapters, we can use **caret** to perform a grid search using 10\-fold CV. The model that provides the optimal combination includes second degree interaction effects and retains 56 terms. The cross\-validated RMSE for these models is displayed in Figure [7\.4](mars.html#fig:grid-search); the optimal model’s cross\-validated RMSE was $26,817\. This grid search took roughly five minutes to complete.

```
# Cross-validated model
set.seed(123)  # for reproducibility
cv_mars <- train(
  x = subset(ames_train, select = -Sale_Price),
  y = ames_train$Sale_Price,
  method = "earth",
  metric = "RMSE",
  trControl = trainControl(method = "cv", number = 10),
  tuneGrid = hyper_grid
)

# View results
cv_mars$bestTune
##    nprune degree
## 16     56      2

cv_mars$results %>%
  filter(nprune == cv_mars$bestTune$nprune, degree == cv_mars$bestTune$degree)
##   degree nprune    RMSE  Rsquared      MAE   RMSESD RsquaredSD    MAESD
## 1      2     56 26817.1 0.8838914 16439.15 11683.73 0.09785945 1678.672

ggplot(cv_mars)
```

Figure 7\.4: Cross\-validated RMSE for the 30 different hyperparameter combinations in our grid search. The optimal model retains 56 terms and includes up to 2\\(^{nd}\\) degree interactions.

The above grid search helps to focus where we can further refine our model tuning. As a next step, we could perform a grid search that focuses on a refined grid space for `nprune` (e.g., comparing 45–65 terms retained). However, for brevity we’ll leave this as an exercise for the reader.

So how does this compare to our previously built models for the Ames housing data? The following table compares the cross\-validated RMSE for our tuned MARS model to an ordinary multiple regression model along with tuned principal component regression (PCR), partial least squares (PLS), and regularized regression (elastic net) models. Notice that our elastic net RMSE is higher than in the last chapter. This table compares these 5 modeling approaches without performing any logarithmic transformation on the target variable. However, our MARS model still outperforms the results from the best elastic net in the last chapter (RMSE \= 19,905\).

Table 7\.1: Cross\-validated RMSE results for tuned MARS and regression models.

| | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max. | NA’s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LM | 16533\.37 | 22621\.51 | 24773\.10 | 25957\.46 | 28351\.02 | 39572\.36 | 0 |
| PCR | 28279\.98 | 30963\.21 | 31425\.07 | 33871\.79 | 36925\.82 | 42676\.08 | 0 |
| PLS | 16645\.33 | 21832\.43 | 24611\.00 | 25296\.77 | 25879\.52 | 39231\.40 | 0 |
| ENET | 15610\.37 | 21035\.04 | 23609\.76 | 24647\.70 | 25653\.41 | 39184\.22 | 0 |
| MARS | 19888\.56 | 22240\.22 | 23370\.48 | 26817\.10 | 24320\.10 | 59443\.17 | 0 |

Although the MARS model did not have a lower mean RMSE than the elastic net and PLS models, you can see that the median RMSE of all the cross\-validation iterations was lower. However, there is one fold (`Fold08`) that had an extremely large RMSE that is skewing the mean RMSE for the MARS model. This would be worth exploring as there are likely some unique observations that are skewing the results.
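One way to begin that exploration is to rank the training observations by the size of their residuals from the tuned model and inspect the worst offenders. The following is a minimal sketch, not part of the original analysis, that assumes the `cv_mars` object fit above is available; the per\-fold results are shown immediately after.

```
# predictions from the tuned MARS model on the training data
train_pred  <- predict(cv_mars, newdata = subset(ames_train, select = -Sale_Price))
train_resid <- ames_train$Sale_Price - train_pred

# homes with the largest absolute errors are good candidates for inspection
worst <- order(abs(train_resid), decreasing = TRUE)[1:10]
data.frame(
  Sale_Price = ames_train$Sale_Price[worst],
  Predicted  = round(train_pred[worst]),
  Residual   = round(train_resid[worst])
)
```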
``` cv_mars$resample ## RMSE Rsquared MAE Resample ## 1 22468.90 0.9205286 15471.14 Fold03 ## 2 19888.56 0.9316275 14944.30 Fold04 ## 3 59443.17 0.6143857 20867.67 Fold08 ## 4 22163.99 0.9395510 16327.75 Fold07 ## 5 24249.53 0.9278253 16551.83 Fold01 ## 6 20711.49 0.9188620 15659.14 Fold05 ## 7 23439.68 0.9241964 15463.52 Fold09 ## 8 24343.62 0.9118472 16556.19 Fold02 ## 9 28160.73 0.8513779 16955.07 Fold06 ## 10 23301.28 0.8987123 15594.89 Fold10 ``` 7\.5 Feature interpretation --------------------------- MARS models via `earth::earth()` include a backwards elimination feature selection routine that looks at reductions in the GCV estimate of error as each predictor is added to the model. This total reduction is used as the variable importance measure (`value = "gcv"`). Since MARS will automatically include and exclude terms during the pruning process, it essentially performs automated feature selection. If a predictor was never used in any of the MARS basis functions in the final model (after pruning), it has an importance value of zero. This is illustrated in Figure [7\.5](mars.html#fig:vip) where 27 features have \\(\>0\\) importance values while the rest of the features have an importance value of zero since they were not included in the final model. Alternatively, you can also monitor the change in the residual sums of squares (RSS) as terms are added (`value = "rss"`); however, you will see very little difference between these methods. ``` # variable importance plots p1 <- vip(cv_mars, num_features = 40, geom = "point", value = "gcv") + ggtitle("GCV") p2 <- vip(cv_mars, num_features = 40, geom = "point", value = "rss") + ggtitle("RSS") gridExtra::grid.arrange(p1, p2, ncol = 2) ``` Figure 7\.5: Variable importance based on impact to GCV (left) and RSS (right) values as predictors are added to the model. Both variable importance measures will usually give you very similar results. Its important to realize that variable importance will only measure the impact of the prediction error as features are included; however, it does not measure the impact for particular hinge functions created for a given feature. For example, in Figure [7\.5](mars.html#fig:vip) we see that `Gr_Liv_Area` and `Year_Built` are the two most influential variables; however, variable importance does not tell us how our model is treating the non\-linear patterns for each feature. Also, if we look at the interaction terms our model retained, we see interactions between different hinge functions. ``` # extract coefficients, convert to tidy data frame, and # filter for interaction terms cv_mars$finalModel %>% coef() %>% broom::tidy() %>% filter(stringr::str_detect(names, "\\*")) ## # A tibble: 20 x 2 ## names x ## <chr> <dbl> ## 1 h(2004-Year_Built) * h(Total_Bsmt_SF-1330) -1.55 ## 2 h(2004-Year_Built) * h(1330-Total_Bsmt_SF) 0.198 ## 3 Condition_1PosN * h(Gr_Liv_Area-2787) -402. ## 4 h(17871-Lot_Area) * h(Total_Bsmt_SF-1302) -0.00703 ## 5 h(Year_Built-2004) * h(2787-Gr_Liv_Area) -4.54 ## 6 h(2004-Year_Built) * h(2787-Gr_Liv_Area) 0.135 ## 7 h(Year_Remod_Add-1973) * h(900-Garage_Area) -1.61 ## 8 Overall_QualExcellent * h(Year_Remod_Add-1973) 2038. ## 9 h(Total_Bsmt_SF-1302) * h(TotRms_AbvGrd-7) 12.2 ## 10 h(Total_Bsmt_SF-1302) * h(7-TotRms_AbvGrd) 30.6 ## 11 h(Total_Bsmt_SF-1302) * h(1-Half_Bath) -35.6 ## 12 h(Lot_Area-6130) * Overall_CondFair -3.04 ## 13 NeighborhoodStone_Brook * h(Year_Remod_Add-1973) 1153. ## 14 Overall_QualVery_Good * h(Bsmt_Full_Bath-1) 48011. 
## 15 Overall_QualVery_Good * h(1-Bsmt_Full_Bath) -12239. ## 16 Overall_CondGood * h(2004-Year_Built) 297. ## 17 h(Year_Remod_Add-1973) * h(Longitude- -93.6571) -9005. ## 18 h(Year_Remod_Add-1973) * h(-93.6571-Longitude) -14103. ## 19 Overall_CondAbove_Average * h(2787-Gr_Liv_Area) 5.80 ## 20 Condition_1Norm * h(2004-Year_Built) 148. ``` To better understand the relationship between these features and `Sale_Price`, we can create partial dependence plots (PDPs) for each feature individually and also together. The individual PDPs illustrate that our model found that one knot in each feature provides the best fit. For example, as homes exceed 2,787 square feet, each additional square foot demands a higher marginal increase in sale price than homes with less than 2,787 square feet. Similarly, for homes built in 2004 or later, there is a greater marginal effect on sales price based on the age of the home than for homes built prior to 2004\. The interaction plot (far right figure) illustrates the stronger effect these two features have when combined. ``` # Construct partial dependence plots p1 <- partial(cv_mars, pred.var = "Gr_Liv_Area", grid.resolution = 10) %>% autoplot() p2 <- partial(cv_mars, pred.var = "Year_Built", grid.resolution = 10) %>% autoplot() p3 <- partial(cv_mars, pred.var = c("Gr_Liv_Area", "Year_Built"), grid.resolution = 10) %>% plotPartial(levelplot = FALSE, zlab = "yhat", drape = TRUE, colorkey = TRUE, screen = list(z = -20, x = -60)) # Display plots side by side gridExtra::grid.arrange(p1, p2, p3, ncol = 3) ``` Figure 7\.6: Partial dependence plots to understand the relationship between `Sale_Price` and the `Gr_Liv_Area` and `Year_Built` features. The PDPs tell us that as `Gr_Liv_Area` increases and for newer homes, `Sale_Price` increases dramatically. 7\.6 Attrition data ------------------- The MARS method and algorithm can be extended to handle classification problems and GLMs in general.[24](#fn24) We saw significant improvement to our predictive accuracy on the Ames data with a MARS model, but how about the employee attrition example? In Chapter [5](logistic-regression.html#logistic-regression) we saw a slight improvement in our cross\-validated accuracy rate using regularized regression. Here, we tune a MARS model using the same search grid as we did above. We see our best models include no interaction effects and the optimal model retained 12 terms. ``` # get attrition data df <- rsample::attrition %>% mutate_if(is.ordered, factor, ordered = FALSE) # Create training (70%) and test (30%) sets for the rsample::attrition data. # Use set.seed for reproducibility set.seed(123) churn_split <- rsample::initial_split(df, prop = .7, strata = "Attrition") churn_train <- rsample::training(churn_split) churn_test <- rsample::testing(churn_split) # for reproducibiity set.seed(123) # cross validated model tuned_mars <- train( x = subset(churn_train, select = -Attrition), y = churn_train$Attrition, method = "earth", trControl = trainControl(method = "cv", number = 10), tuneGrid = hyper_grid ) # best model tuned_mars$bestTune ## nprune degree ## 2 12 1 # plot results ggplot(tuned_mars) ``` Figure 7\.7: Cross\-validated accuracy rate for the 30 different hyperparameter combinations in our grid search. The optimal model retains 12 terms and includes no interaction effects. However, comparing our MARS model to the previous linear models (logistic regression and regularized regression), we do not see any improvement in our overall accuracy rate. 
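A comparison table like the one below can be assembled with `caret::resamples()`. The following is only a sketch: it assumes fitted `train()` objects for the logistic regression and regularized regression models from Chapter 5 are available (the names `logistic_mod` and `enet_mod` are placeholders), with `tuned_mars` coming from the grid search above, and that all three models used the same resampling scheme.

```
# Collect cross-validated accuracy across models fit on comparable folds
# (logistic_mod and enet_mod are hypothetical objects from Chapter 5)
model_comparison <- resamples(list(
  Logistic_model = logistic_mod,
  Elastic_net    = enet_mod,
  MARS_model     = tuned_mars
))
summary(model_comparison)$statistics$Accuracy
```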
Table 7\.2: Cross\-validated accuracy results for tuned MARS and regression models. | | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max. | NA’s | | --- | --- | --- | --- | --- | --- | --- | --- | | Logistic\_model | 0\.8365385 | 0\.8495146 | 0\.8792476 | 0\.8757893 | 0\.8907767 | 0\.9313725 | 0 | | Elastic\_net | 0\.8446602 | 0\.8759280 | 0\.8834951 | 0\.8835759 | 0\.8915469 | 0\.9411765 | 0 | | MARS\_model | 0\.8155340 | 0\.8578463 | 0\.8780697 | 0\.8708500 | 0\.8907767 | 0\.9029126 | 0 | 7\.7 Final thoughts ------------------- There are several advantages to MARS. First, MARS naturally handles mixed types of predictors (quantitative and qualitative). MARS considers all possible binary partitions of the categories for a qualitative predictor into two groups.[25](#fn25) Each group then generates a pair of piecewise indicator functions for the two categories. MARS also requires minimal feature engineering (e.g., feature scaling) and performs automated feature selection. For example, since MARS scans each predictor to identify a split that improves predictive accuracy, non\-informative features will not be chosen. Furthermore, highly correlated predictors do not impede predictive accuracy as much as they do with OLS models. However, one disadvantage to MARS models is that they’re typically slower to train. Since the algorithm scans each value of each predictor for potential cutpoints, computational performance can suffer as both \\(n\\) and \\(p\\) increase. Also, although correlated predictors do not necessarily impede model performance, they can make model interpretation difficult. When two features are nearly perfectly correlated, the algorithm will essentially select the first one it happens to come across when scanning the features. Then, since it randomly selected one, the correlated feature will likely not be included as it adds no additional explanatory power.
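This last point is easy to see with a small simulation. The sketch below is illustrative only (it is not part of the Ames analysis): it builds two nearly perfectly correlated predictors and fits a MARS model with `earth()`; typically only one of the two ends up contributing to the selected terms.

```
# Simulate two nearly perfectly correlated predictors
set.seed(123)
n  <- 500
x1 <- runif(n, 0, 10)
x2 <- x1 + rnorm(n, sd = 0.01)   # x2 is essentially a noisy copy of x1
y  <- sin(x1) + rnorm(n, sd = 0.1)
sim <- data.frame(y = y, x1 = x1, x2 = x2)

# Fit a MARS model and check which predictors it actually used
mars_sim <- earth(y ~ x1 + x2, data = sim)
evimp(mars_sim)  # variable importance; usually only one of x1/x2 shows up
```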
Chapter 8 *K*\-Nearest Neighbors ================================ *K*\-nearest neighbor (KNN) is a very simple algorithm in which each observation is predicted based on its “similarity” to other observations. Unlike most methods in this book, KNN is a *memory\-based* algorithm and cannot be summarized by a closed\-form model. This means the training samples are required at run\-time and predictions are made directly from the sample relationships. Consequently, KNNs are also known as *lazy learners* (Cunningham and Delany [2007](#ref-cunningham2007k)) and can be computationally inefficient. However, KNNs have been successful in a large number of business problems (see, for example, Jiang et al. ([2012](#ref-jiang2012improved)) and Mccord and Chuah ([2011](#ref-mccord2011spam))) and are useful for preprocessing purposes as well (as was discussed in Section [3\.3\.2](engineering.html#impute)). 8\.1 Prerequisites ------------------ For this chapter we’ll use the following packages: ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome graphics library(rsample) # for creating validation splits library(recipes) # for feature engineering # Modeling packages library(caret) # for fitting KNN models ``` To illustrate various concepts we’ll continue working with the `ames_train` and `ames_test` data sets created in Section [2\.7](process.html#put-process-together); however, we’ll also illustrate the performance of KNNs on the employee attrition and MNIST data sets. ``` # create training (70%) set for the rsample::attrition data. attrit <- attrition %>% mutate_if(is.ordered, factor, ordered = FALSE) set.seed(123) churn_split <- initial_split(attrit, prop = .7, strata = "Attrition") churn_train <- training(churn_split) # import MNIST training data mnist <- dslabs::read_mnist() names(mnist) ## [1] "train" "test" ``` 8\.2 Measuring similarity ------------------------- The KNN algorithm identifies \\(k\\) observations that are “similar” or nearest to the new record being predicted and then uses the average response value (regression) or the most common class (classification) of those \\(k\\) observations as the predicted output. For illustration, consider our Ames housing data. In real estate, Realtors determine what price they will list (or market) a home for based on “comps” (comparable homes). To identify comps, they look for homes that have very similar attributes to the one being sold. This can include similar features (e.g., square footage, number of rooms, and style of the home), location (e.g., neighborhood and school district), and many other attributes. The Realtor will look at the typical sale price of these comps and will usually list the new home at a very similar price to the prices these comps sold for. As an example, Figure [8\.1](knn.html#fig:map-homes) maps 10 homes (blue) that are most similar to the home of interest (red). These homes are all relatively close to the target home and likely have similar characteristics (e.g., home style, size, and school district). Consequently, the Realtor would likely list the target home around the average price that these comps sold for. In essence, this is what the KNN algorithm will do. Figure 8\.1: The 10 nearest neighbors (blue) whose home attributes most closely resemble the house of interest (red). ### 8\.2\.1 Distance measures How do we determine the similarity between observations (or homes as in Figure [8\.1](knn.html#fig:map-homes))? 
We use distance (or dissimilarity) metrics to compute the pairwise differences between observations. The most common distance measures are the Euclidean [(8\.1\)](knn.html#eq:euclidean) and Manhattan [(8\.2\)](knn.html#eq:manhattan) distance metrics; both of which measure the distance between observation \\(x\_a\\) and \\(x\_b\\) for all \\(j\\) features.

\\\[\\begin{equation} \\tag{8\.1} \\sqrt{\\sum^P\_{j\=1}(x\_{aj} \- x\_{bj})^2} \\end{equation}\\]

\\\[\\begin{equation} \\tag{8\.2} \\sum^P\_{j\=1} \| x\_{aj} \- x\_{bj} \| \\end{equation}\\]

Euclidean distance is the most common and measures the straight\-line distance between two samples (i.e., how the crow flies). Manhattan measures the point\-to\-point travel time (i.e., city block) and is commonly used for binary predictors (e.g., one\-hot encoded 0/1 indicator variables). A simplified example is presented below and illustrated in Figure [8\.2](knn.html#fig:difference-btwn-distance-measures) where the distance measures are computed for the first two homes in `ames_train` and for only two features (`Gr_Liv_Area` \& `Year_Built`).

```
(two_houses <- ames_train[1:2, c("Gr_Liv_Area", "Year_Built")])
## # A tibble: 2 x 2
##   Gr_Liv_Area Year_Built
##         <int>      <int>
## 1        1656       1960
## 2         896       1961

# Euclidean
dist(two_houses, method = "euclidean")
##          1
## 2 760.0007

# Manhattan
dist(two_houses, method = "manhattan")
##     1
## 2 761
```

Figure 8\.2: Euclidean (A) versus Manhattan (B) distance.

There are other metrics to measure the distance between observations. For example, the Minkowski distance is a generalization of the Euclidean and Manhattan distances and is defined as

\\\[\\begin{equation} \\tag{8\.3} \\bigg( \\sum^P\_{j\=1} \| x\_{aj} \- x\_{bj} \| ^q \\bigg)^{\\frac{1}{q}}, \\end{equation}\\]

where \\(q \> 0\\) (Han, Pei, and Kamber [2011](#ref-han2011data)). When \\(q \= 2\\) the Minkowski distance equals the Euclidean distance and when \\(q \= 1\\) it is equal to the Manhattan distance. The Mahalanobis distance is also an attractive measure to use since it accounts for the correlation between two variables (De Maesschalck, Jouan\-Rimbaud, and Massart [2000](#ref-de2000mahalanobis)).

### 8\.2\.2 Pre\-processing

Due to the squaring in Equation [(8\.1\)](knn.html#eq:euclidean), the Euclidean distance is more sensitive to outliers. Furthermore, most distance measures are sensitive to the scale of the features. Data with features that have different scales will bias the distance measures as those predictors with the largest values will contribute most to the distance between two samples. For example, consider the three homes below: `home1` is a four bedroom built in 2008, `home2` is a two bedroom built in the same year, and `home3` is a three bedroom built a decade earlier.

```
home1
## # A tibble: 1 x 4
##   home  Bedroom_AbvGr Year_Built    id
##   <chr>         <int>      <int> <int>
## 1 home1             4       2008   423

home2
## # A tibble: 1 x 4
##   home  Bedroom_AbvGr Year_Built    id
##   <chr>         <int>      <int> <int>
## 1 home2             2       2008   424

home3
## # A tibble: 1 x 4
##   home  Bedroom_AbvGr Year_Built    id
##   <chr>         <int>      <int> <int>
## 1 home3             3       1998     6
```

The Euclidean distance between `home1` and `home3` is larger than the distance between `home1` and `home2` because of the larger difference in `Year_Built`.
``` features <- c("Bedroom_AbvGr", "Year_Built") # distance between home 1 and 2 dist(rbind(home1[,features], home2[,features])) ## 1 ## 2 2 # distance between home 1 and 3 dist(rbind(home1[,features], home3[,features])) ## 1 ## 2 10.04988 ``` However, `Year_Built` has a much larger range (1875–2010\) than `Bedroom_AbvGr` (0–8\). And if you ask most people, especially families with kids, the difference between 2 and 4 bedrooms is much more significant than a 10 year difference in the age of a home. If we standardize these features, we see that the difference between `home1` and `home2`’s standardized value for `Bedroom_AbvGr` is larger than the difference between `home1` and `home3`’s `Year_Built`. And if we compute the Euclidean distance between these standardized home features, we see that now `home1` and `home3` are more similar than `home1` and `home2`. ``` home1_std ## # A tibble: 1 x 4 ## home Bedroom_AbvGr Year_Built id ## <chr> <dbl> <dbl> <int> ## 1 home1 1.38 1.21 423 home2_std ## # A tibble: 1 x 4 ## home Bedroom_AbvGr Year_Built id ## <chr> <dbl> <dbl> <int> ## 1 home2 -1.03 1.21 424 home3_std ## # A tibble: 1 x 4 ## home Bedroom_AbvGr Year_Built id ## <chr> <dbl> <dbl> <int> ## 1 home3 0.176 0.881 6 # distance between home 1 and 2 dist(rbind(home1_std[,features], home2_std[,features])) ## 1 ## 2 2.416244 # distance between home 1 and 3 dist(rbind(home1_std[,features], home3_std[,features])) ## 1 ## 2 1.252547 ``` In addition to standardizing numeric features, all categorical features must be one\-hot encoded or encoded using another method (e.g., ordinal encoding) so that all categorical features are represented numerically. Furthermore, the KNN method is very sensitive to noisy predictors since they cause similar samples to have larger magnitudes and variability in distance values. Consequently, removing irrelevant, noisy features often leads to significant improvement. 8\.3 Choosing *k* ----------------- The performance of KNNs is very sensitive to the choice of \\(k\\). This was illustrated in Section [2\.5\.3](process.html#tune-overfit) where low values of \\(k\\) typically overfit and large values often underfit. At the extremes, when \\(k \= 1\\), we base our prediction on a single observation that has the closest distance measure. In contrast, when \\(k \= n\\), we are simply using the average (regression) or most common class (classification) across all training samples as our predicted value. There is no general rule about the best \\(k\\) as it depends greatly on the nature of the data. For high signal data with very few noisy (irrelevant) features, smaller values of \\(k\\) tend to work best. As more irrelevant features are involved, larger values of \\(k\\) are required to smooth out the noise. To illustrate, we saw in Section [3\.8\.3](engineering.html#engineering-process-example) that we optimized the RMSE for the `ames_train` data with \\(k \= 12\\). The `ames_train` data has 2054 observations, so such a small \\(k\\) likely suggests a strong signal exists. In contrast, the `churn_train` data has 1030 observations and Figure [8\.3](knn.html#fig:range-k-values) illustrates that our loss function is not optimized until \\(k \= 271\\). Moreover, the max ROC value is 0\.8078 and the overall proportion of attriting employees to non\-attriting is 0\.839\. This suggest there is likely not a very strong signal in the Attrition data. 
When using KNN for classification, it is best to assess odd numbers for \\(k\\) to avoid ties in the event there is equal proportion of response levels (i.e. when *k \= 2* one of the neighbors could have class “0” while the other neighbor has class “1”). ``` # Create blueprint blueprint <- recipe(Attrition ~ ., data = churn_train) %>% step_nzv(all_nominal()) %>% step_integer(contains("Satisfaction")) %>% step_integer(WorkLifeBalance) %>% step_integer(JobInvolvement) %>% step_dummy(all_nominal(), -all_outcomes(), one_hot = TRUE) %>% step_center(all_numeric(), -all_outcomes()) %>% step_scale(all_numeric(), -all_outcomes()) # Create a resampling method cv <- trainControl( method = "repeatedcv", number = 10, repeats = 5, classProbs = TRUE, summaryFunction = twoClassSummary ) # Create a hyperparameter grid search hyper_grid <- expand.grid( k = floor(seq(1, nrow(churn_train)/3, length.out = 20)) ) # Fit knn model and perform grid search knn_grid <- train( blueprint, data = churn_train, method = "knn", trControl = cv, tuneGrid = hyper_grid, metric = "ROC" ) ggplot(knn_grid) ``` Figure 8\.3: Cross validated search grid results for Attrition training data where 20 values between 1 and 343 are assessed for k. When k \= 1, the predicted value is based on a single observation that is closest to the target sample and when k \= 343, the predicted value is based on the response with the largest proportion for 1/3 of the training sample. 8\.4 MNIST example ------------------ The MNIST data set is significantly larger than the Ames housing and attrition data sets. Because we want this example to run locally and in a reasonable amount of time (\< 1 hour), we will train our initial models on a random sample of 10,000 rows from the training set. ``` set.seed(123) index <- sample(nrow(mnist$train$images), size = 10000) mnist_x <- mnist$train$images[index, ] mnist_y <- factor(mnist$train$labels[index]) ``` Recall that the MNIST data contains 784 features representing the darkness (0–255\) of pixels in images of handwritten numbers (0–9\). As stated in Section [8\.2\.2](knn.html#knn-preprocess), KNN models can be severely impacted by irrelevant features. One culprit of this is zero, or near\-zero variance features (see Section [3\.4](engineering.html#feature-filtering)). Figure [8\.4](knn.html#fig:mnist-plot-variance) illustrates that there are nearly 125 features that have zero variance and many more that have very little variation. ``` mnist_x %>% as.data.frame() %>% map_df(sd) %>% gather(feature, sd) %>% ggplot(aes(sd)) + geom_histogram(binwidth = 1) ``` Figure 8\.4: Distribution of variability across the MNIST features. We see a significant number of zero variance features that should be removed. Figure [8\.5](knn.html#fig:mnist-plot-nzv-feature-image) shows which features are driving this concern. Images (A)–(C) illustrate typical handwritten numbers from the test set. Image (D) illustrates which features in our images have variability. The white in the center shows that the features that represent the center pixels have regular variability whereas the black exterior highlights that the features representing the edge pixels in our images have zero or near\-zero variability. These features have low variability in pixel values because they are rarely drawn on. Figure 8\.5: Example images (A)\-(C) from our data set and (D) highlights near\-zero variance features around the edges of our images. 
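To get a feel for what these pixel features encode, a single digit can be rendered directly from its 784-value feature vector. The snippet below is a small sketch (not part of the original text) that mirrors the plotting approach used at the end of this section; it assumes the `mnist`, `index`, and `mnist_y` objects created above.

```
# Render the first sampled digit as a 28 x 28 grayscale image
digit <- matrix(mnist$train$images[index[1], ], nrow = 28, ncol = 28)
image(digit[, 28:1], col = gray(seq(0, 1, 0.05)),
      main = paste("Label:", mnist_y[1]), xaxt = "n", yaxt = "n")
```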
By identifying and removing these zero (or near\-zero) variance features, we end up keeping 249 of the original 784 predictors. This can cause dramatic improvements to both the accuracy and speed of our algorithm. Furthermore, by removing these upfront we can remove some of the overhead experienced by `caret::train()`. We also need to add column names to the feature matrices as these are required by **caret**.

```
# Rename features
colnames(mnist_x) <- paste0("V", 1:ncol(mnist_x))

# Remove near zero variance features manually
nzv <- nearZeroVar(mnist_x)
index <- setdiff(1:ncol(mnist_x), nzv)
mnist_x <- mnist_x[, index]
```

Next we perform our grid search. Since we are working with a larger data set, using resampling (e.g., \\(k\\)\-fold cross validation) becomes costly. Moreover, as we have more data, our estimated error rate produced by a simple train vs. validation set becomes less biased and variable. Consequently, the following CV procedure (`cv`) uses 70% of our data to train and the remaining 30% for validation. We can adjust the `number` of times we do this which becomes similar to the bootstrap procedure discussed in Section [2\.4](process.html#resampling). Our hyperparameter grid search assesses 12 \\(k\\) values between 3–25 and takes approximately 3 minutes.

```
# Use train/validate resampling method
cv <- trainControl(
  method = "LGOCV", 
  p = 0.7, 
  number = 1, 
  savePredictions = TRUE
)

# Create a hyperparameter grid search
hyper_grid <- expand.grid(k = seq(3, 25, by = 2))

# Execute grid search
knn_mnist <- train(
  mnist_x,
  mnist_y,
  method = "knn",
  tuneGrid = hyper_grid,
  preProc = c("center", "scale"),
  trControl = cv
)

ggplot(knn_mnist)
```

Figure 8\.6: KNN search grid results for the MNIST data

Figure [8\.6](knn.html#fig:mnist-initial-model) illustrates the grid search results and our best model used 3 nearest neighbors and provided an accuracy of 93\.8%. Looking at the results for each class, we can see that 8s were the hardest to detect followed by 4s, 5s, and 2s (based on sensitivity). The most common incorrect prediction is the digit 1 (it has the lowest specificity).

```
# Create confusion matrix
cm <- confusionMatrix(knn_mnist$pred$pred, knn_mnist$pred$obs)
cm$byClass[, c(1:2, 11)]  # sensitivity, specificity, & accuracy
##          Sensitivity Specificity Balanced Accuracy
## Class: 0   0.9641638   0.9962374         0.9802006
## Class: 1   0.9916667   0.9841210         0.9878938
## Class: 2   0.9155666   0.9955114         0.9555390
## Class: 3   0.9163952   0.9920325         0.9542139
## Class: 4   0.8698630   0.9960538         0.9329584
## Class: 5   0.9151404   0.9914891         0.9533148
## Class: 6   0.9795322   0.9888684         0.9842003
## Class: 7   0.9326520   0.9896962         0.9611741
## Class: 8   0.8224382   0.9978798         0.9101590
## Class: 9   0.9329897   0.9852687         0.9591292
```

Feature importance for KNNs is computed by finding the features with the smallest distance measure (see Equation [(8\.1\)](knn.html#eq:euclidean)). Since the response variable in the MNIST data is multiclass, the variable importance scores below sort the features by maximum importance across the classes.
``` # Top 20 most important features vi <- varImp(knn_mnist) vi ## ROC curve variable importance ## ## variables are sorted by maximum importance across the classes ## only 20 most important variables shown (out of 249) ## ## X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 ## V435 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 80.56 ## V407 99.42 99.42 99.42 99.42 99.42 99.42 99.42 99.42 99.42 75.21 ## V463 97.88 97.88 97.88 97.88 97.88 97.88 97.88 97.88 97.88 83.27 ## V379 97.38 97.38 97.38 97.38 97.38 97.38 97.38 97.38 97.38 86.56 ## V434 95.87 95.87 95.87 95.87 95.87 95.87 96.66 95.87 95.87 76.20 ## V380 96.10 96.10 96.10 96.10 96.10 96.10 96.10 96.10 96.10 88.04 ## V462 95.56 95.56 95.56 95.56 95.56 95.56 95.56 95.56 95.56 83.38 ## V408 95.37 95.37 95.37 95.37 95.37 95.37 95.37 95.37 95.37 75.05 ## V352 93.55 93.55 93.55 93.55 93.55 93.55 93.55 93.55 93.55 87.13 ## V490 93.07 93.07 93.07 93.07 93.07 93.07 93.07 93.07 93.07 81.88 ## V406 92.90 92.90 92.90 92.90 92.90 92.90 92.90 92.90 92.90 74.55 ## V437 70.79 60.44 92.79 52.04 71.11 83.42 75.51 91.15 52.02 70.79 ## V351 92.41 92.41 92.41 92.41 92.41 92.41 92.41 92.41 92.41 82.08 ## V409 70.55 76.12 88.11 54.54 79.94 77.69 84.88 91.91 52.72 76.12 ## V436 89.96 89.96 90.89 89.96 89.96 89.96 91.39 89.96 89.96 78.83 ## V464 76.73 76.51 90.24 76.51 76.51 76.58 77.67 82.02 76.51 76.73 ## V491 89.49 89.49 89.49 89.49 89.49 89.49 89.49 89.49 89.49 77.41 ## V598 68.01 68.01 88.44 68.01 68.01 84.92 68.01 88.25 68.01 38.76 ## V465 63.09 36.58 87.68 38.16 50.72 80.62 59.88 84.28 57.13 63.09 ## V433 63.74 55.69 76.69 55.69 57.43 55.69 87.59 68.44 55.69 63.74 ``` We can plot these results to get an understanding of what pixel features are driving our results. The image shows that the most influential features lie around the edges of numbers (outer white circle) and along the very center. This makes intuitive sense as many key differences between numbers lie in these areas. For example, the main difference between a 3 and an 8 is whether the left side of the number is enclosed. ``` # Get median value for feature importance imp <- vi$importance %>% rownames_to_column(var = "feature") %>% gather(response, imp, -feature) %>% group_by(feature) %>% summarize(imp = median(imp)) # Create tibble for all edge pixels edges <- tibble( feature = paste0("V", nzv), imp = 0 ) # Combine and plot imp <- rbind(imp, edges) %>% mutate(ID = as.numeric(str_extract(feature, "\\d+"))) %>% arrange(ID) image(matrix(imp$imp, 28, 28), col = gray(seq(0, 1, 0.05)), xaxt="n", yaxt="n") ``` Figure 8\.7: Image heat map showing which features, on average, are most influential across all response classes for our KNN model. We can look at a few of our correct (left) and incorrect (right) predictions in Figure [8\.8](knn.html#fig:correct-vs-incorrect). When looking at the incorrect predictions, we can rationalize some of the errors (e.g., the actual 4 where we predicted a 1 has a strong vertical stroke compared to the rest of the number’s features, the actual 2 where we predicted a 0 is blurry and not well defined.) 
``` # Get a few accurate predictions set.seed(9) good <- knn_mnist$pred %>% filter(pred == obs) %>% sample_n(4) # Get a few inaccurate predictions set.seed(9) bad <- knn_mnist$pred %>% filter(pred != obs) %>% sample_n(4) combine <- bind_rows(good, bad) # Get original feature set with all pixel features set.seed(123) index <- sample(nrow(mnist$train$images), 10000) X <- mnist$train$images[index,] # Plot results par(mfrow = c(4, 2), mar=c(1, 1, 1, 1)) layout(matrix(seq_len(nrow(combine)), 4, 2, byrow = FALSE)) for(i in seq_len(nrow(combine))) { image(matrix(X[combine$rowIndex[i],], 28, 28)[, 28:1], col = gray(seq(0, 1, 0.05)), main = paste("Actual:", combine$obs[i], " ", "Predicted:", combine$pred[i]), xaxt="n", yaxt="n") } ``` Figure 8\.8: Actual images from the MNIST data set along with our KNN model’s predictions. Left column illustrates a few accurate predictions and the right column illustrates a few inaccurate predictions. 8\.5 Final thoughts ------------------- KNNs are a very simplistic, and intuitive, algorithm that can provide average to decent predictive power, especially when the response is dependent on the local structure of the features. However, a major drawback of KNNs is their computation time, which increases by \\(n \\times p\\) for each observation. Furthermore, since KNNs are a lazy learner, they require the model be run at prediction time which limits their use for real\-time modeling. Some work has been done to minimize this effect; for example the **FNN** package (Beygelzimer et al. [2019](#ref-R-fnn)) provides a collection of fast \\(k\\)\-nearest neighbor search algorithms and applications such as cover\-tree (Beygelzimer, Kakade, and Langford [2006](#ref-beygelzimer2006cover)) and kd\-tree (Robinson [1981](#ref-robinson1981kdb)). Although KNNs rarely provide the best predictive performance, they have many benefits, for example, in feature engineering and in data cleaning and preprocessing. We discussed KNN for imputation in Section [3\.3\.2](engineering.html#impute). Bruce and Bruce ([2017](#ref-bruce2017practical)) discuss another approach that uses KNNs to add a *local knowledge* feature. This includes running a KNN to estimate the predicted output or class and using this predicted value as a new feature for downstream modeling. However, this approach also invites more opportunities for target leakage. Other alternatives to traditional KNNs such as using invariant metrics, tangent distance metrics, and adaptive nearest neighbor methods are also discussed in J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) and are worth exploring.
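As a concrete illustration of the *local knowledge* idea (and of the **FNN** package mentioned above), the sketch below adds a KNN-based prediction of `Sale_Price` as a new feature on the Ames training data. This is only a sketch under simplifying assumptions: it uses just two standardized numeric features, and it predicts the training data with itself, which is exactly the kind of shortcut that invites the target leakage warned about above; a real application would generate this feature with proper resampling.

```
library(FNN)  # fast k-nearest neighbor search

# Two standardized numeric predictors from the Ames training data
knn_features <- scale(ames_train[, c("Gr_Liv_Area", "Year_Built")])

# KNN regression prediction used as a new "local knowledge" feature
knn_fit <- knn.reg(
  train = knn_features,
  test  = knn_features,        # predicting the training data itself (leaky!)
  y     = ames_train$Sale_Price,
  k     = 10
)

# Append to a copy of the training data for downstream modeling
ames_train_lk <- ames_train
ames_train_lk$knn_price <- knn_fit$pred
```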
Chapter 9 Decision Trees
========================

*Tree\-based models* are a class of nonparametric algorithms that work by partitioning the feature space into a number of smaller (non\-overlapping) regions with similar response values using a set of *splitting rules*. Predictions are obtained by fitting a simpler model (e.g., a constant like the average response value) in each region. Such *divide\-and\-conquer* methods can produce simple rules that are easy to interpret and visualize with *tree diagrams*. As we’ll see, decision trees offer many benefits; however, they typically fall short in predictive performance compared to more complex algorithms like neural networks and MARS. Fortunately, future chapters will discuss powerful ensemble algorithms, like random forests and gradient boosting machines, which are constructed by combining many decision trees in a clever way. This chapter will provide you with a strong foundation in decision trees.

9\.1 Prerequisites
------------------

In this chapter we’ll use the following packages:

```
# Helper packages
library(dplyr)       # for data wrangling
library(ggplot2)     # for awesome plotting

# Modeling packages
library(rpart)       # direct engine for decision tree application
library(caret)       # meta engine for decision tree application

# Model interpretability packages
library(rpart.plot)  # for plotting decision trees
library(vip)         # for feature importance
library(pdp)         # for feature effects
```

We’ll continue to illustrate the main concepts using the Ames housing example from Section [2\.7](process.html#put-process-together).

9\.2 Structure
--------------

There are many methodologies for constructing decision trees but the most well\-known is the **c**lassification **a**nd **r**egression **t**ree (CART) algorithm proposed in Breiman ([1984](#ref-breiman2017classification)).[26](#fn26) A basic decision tree partitions the training data into homogeneous subgroups (i.e., groups with similar response values) and then fits a simple *constant* in each subgroup (e.g., the mean of the within group response values for regression). The subgroups (also called nodes) are formed recursively using binary partitions formed by asking simple yes\-or\-no questions about each feature (e.g., is `age < 18`?). This is done a number of times until a suitable stopping criterion is satisfied (e.g., a maximum depth of the tree is reached). After all the partitioning has been done, the model predicts the output based on (1\) the average response values for all observations that fall in that subgroup (regression problem), or (2\) the class that has majority representation (classification problem). For classification, predicted probabilities can be obtained using the proportion of each class within the subgroup.

What results is an inverted tree\-like structure such as that in Figure [9\.1](DT.html#fig:exemplar-decision-tree). In essence, our tree is a set of rules that allows us to make predictions by asking simple yes\-or\-no questions about each feature. For example, if the customer is loyal, has household income greater than $150,000, and is shopping in a store, the exemplar tree diagram in Figure [9\.1](DT.html#fig:exemplar-decision-tree) would predict that the customer will redeem a coupon.

Figure 9\.1: Exemplar decision tree predicting whether or not a customer will redeem a coupon (yes or no) based on the customer’s loyalty, household income, last month’s spend, coupon placement, and shopping mode.
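To see how such a tree translates into code, here is a minimal sketch of the coupon example as a set of nested yes\-or\-no rules. Only the loyal, income greater than $150,000, in\-store path is described in the text; the remaining branches are filled in arbitrarily for illustration.

```
# Hypothetical rule set mirroring Figure 9.1; only the "loyal, income > 150k,
# shopping in store -> redeem" path comes from the text, the other branches
# are placeholders for illustration.
predict_coupon <- function(loyal, household_income, in_store) {
  if (!loyal) return("no")
  if (household_income > 150000) {
    if (in_store) "yes" else "no"
  } else {
    "no"
  }
}

predict_coupon(loyal = TRUE, household_income = 200000, in_store = TRUE)
## [1] "yes"
```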
We refer to the first subgroup at the top of the tree as the *root node* (this node contains all of the training data). The final subgroups at the bottom of the tree are called the *terminal nodes* or *leaves*. Every subgroup in between is referred to as an internal node. The connections between nodes are called *branches*.

Figure 9\.2: Terminology of a decision tree.

9\.3 Partitioning
-----------------

As illustrated above, CART uses *binary recursive partitioning* (it’s recursive because each split or rule depends on the splits above it). The objective at each node is to find the “best” feature (\\(x\_i\\)) to partition the remaining data into one of two regions (\\(R\_1\\) and \\(R\_2\\)) such that the overall error between the actual response (\\(y\_i\\)) and the predicted constant (\\(c\_i\\)) is minimized. For regression problems, the objective function to minimize is the total SSE as defined in Equation [(9\.1\)](DT.html#eq:partobjective) below:

\\\[\\begin{equation} \\tag{9\.1} SSE \= \\sum\_{i \\in R\_1}\\left(y\_i \- c\_1\\right)^2 \+ \\sum\_{i \\in R\_2}\\left(y\_i \- c\_2\\right)^2 \\end{equation}\\]

For classification problems, the partitioning is usually made to maximize the reduction in cross\-entropy or the Gini index (see Section [2\.6](process.html#model-eval)).[27](#fn27) In both regression and classification trees, the objective of partitioning is to minimize dissimilarity in the terminal nodes. We suggest Therneau, Atkinson, and others ([1997](#ref-therneau1997introduction)) for a more thorough discussion regarding binary recursive partitioning.

Having found the best feature/split combination, the data are partitioned into two regions and the splitting process is repeated on each of the two regions (hence the name binary recursive partitioning). This process is continued until a suitable stopping criterion is reached (e.g., a maximum depth is reached or the tree becomes “too complex”).

It’s important to note that a single feature can be used multiple times in a tree. For example, say we have data generated from a simple \\(\\sin\\) function with Gaussian noise: \\(Y\_i \\stackrel{iid}{\\sim} N\\left(\\sin\\left(X\_i\\right), \\sigma^2\\right)\\), for \\(i \= 1, 2, \\dots, 500\\). A regression tree built with a single root node (often referred to as a decision stump) leads to a split occurring at \\(x \= 3\.1\\).

Figure 9\.3: Decision tree illustrating the single split on feature x (left). The resulting decision boundary illustrates the predicted value when x \< 3\.1 (0\.64\), and when x \> 3\.1 (\-0\.67\) (right).

If we build a deeper tree, we’ll continue to split on the same feature (\\(x\\)) as illustrated in Figure [9\.4](DT.html#fig:depth-3-decision-tree). This is because \\(x\\) is the only feature available to split on, so it will continue finding the optimal splits along this feature’s values until a pre\-determined stopping criterion is reached.

Figure 9\.4: Decision tree with depth \= 3, resulting in 7 decision splits along values of feature x and 8 prediction regions (left). The resulting decision boundary (right).

However, even when many features are available, a single feature may still dominate if it continues to provide the best split after each successive partition.
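To make the split search concrete, here is a minimal sketch (our own illustration, not code from the text) of how the best cut point for the \\(\\sin\\) example above can be found by scanning candidate splits and minimizing the SSE in Equation (9\.1\). The uniform range assumed for \\(x\\) is not specified in the text.

```
# Exhaustive single-split search minimizing Equation (9.1); the data-generating
# range for x is assumed (not given in the text)
set.seed(123)
x <- runif(500, 0, 2 * pi)
y <- sin(x) + rnorm(500, sd = 0.3)

sse_for_split <- function(cut) {
  left  <- y[x <  cut]
  right <- y[x >= cut]
  sum((left - mean(left))^2) + sum((right - mean(right))^2)
}

# Candidate cut points between the 2nd and 98th percentiles of x
cuts <- quantile(x, probs = seq(0.02, 0.98, length.out = 200))
best <- cuts[which.min(vapply(cuts, sse_for_split, numeric(1)))]
best  # should land near x = pi (about 3.1), matching the decision stump above
```

With more than one feature, this same search is simply repeated across every candidate feature at every node, and a single feature can keep winning.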
For example, a decision tree applied to the iris data set (Fisher [1936](#ref-fisher1936use)) where the species of the flower (setosa, versicolor, and virginica) is predicted based on two features (sepal width and sepal length) results in an optimal decision tree with two splits on each feature. Also, note how the decision boundary in a classification problem results in rectangular regions enclosing the observations. The predicted value is the response class with the greatest proportion within the enclosed region. Figure 9\.5: Decision tree for the iris classification problem (left). The decision boundary results in rectangular regions that enclose the observations. The class with the highest proportion in each region is the predicted value (right). 9\.4 How deep? -------------- This leads to an important question: how deep (i.e., complex) should we make the tree? If we grow an overly complex tree as in Figure [9\.6](DT.html#fig:deep-overfit-tree), we tend to overfit to our training data resulting in poor generalization performance. Figure 9\.6: Overfit decision tree with 56 splits. Consequently, there is a balance to be achieved in the depth and complexity of the tree to optimize predictive performance on future unseen data. To find this balance, we have two primary approaches: (1\) early stopping and (2\) pruning. ### 9\.4\.1 Early stopping Early stopping explicitly restricts the growth of the tree. There are several ways we can restrict tree growth but two of the most common approaches are to restrict the tree depth to a certain level or to restrict the minimum number of observations allowed in any terminal node. When limiting tree depth we stop splitting after a certain depth (e.g., only grow a tree that has a depth of 5 levels). The shallower the tree the less variance we have in our predictions; however, at some point we can start to inject too much bias as shallow trees (e.g., stumps) are not able to capture interactions and complex patterns in our data. When restricting minimum terminal node size (e.g., leaf nodes must contain at least 10 observations for predictions) we are deciding to not split intermediate nodes which contain too few data points. At the far end of the spectrum, a terminal node’s size of one allows for a single observation to be captured in the leaf node and used as a prediction (in this case, we’re interpolating the training data). This results in high variance and poor generalizability. On the other hand, large values restrict further splits therefore reducing variance. These two approaches can be implemented independently of one another; however, they do have interaction effects as illustrated by Figure [9\.7](DT.html#fig:dt-early-stopping). Figure 9\.7: Illustration of how early stopping affects the decision boundary of a regression decision tree. The columns illustrate how tree depth impacts the decision boundary and the rows illustrate how the minimum number of observations in the terminal node influences the decision boundary. ### 9\.4\.2 Pruning An alternative to explicitly specifying the depth of a decision tree is to grow a very large, complex tree and then *prune* it back to find an optimal subtree. We find the optimal subtree by using a *cost complexity parameter* (\\(\\alpha\\)) that penalizes our objective function in Equation [(9\.1\)](DT.html#eq:partobjective) for the number of terminal nodes of the tree (\\(T\\)) as in Equation [(9\.2\)](DT.html#eq:prune). 
\\\[\\begin{equation} \\tag{9\.2} \\texttt{minimize} \\left\\{ SSE \+ \\alpha \\vert T \\vert \\right\\} \\end{equation}\\]

For a given value of \\(\\alpha\\) we find the smallest pruned tree that has the lowest penalized error. You may recognize the close association to the lasso penalty discussed in Chapter [6](regularized-regression.html#regularized-regression). As with the regularization methods, smaller penalties tend to produce more complex models, which result in larger trees, whereas larger penalties result in much smaller trees. Consequently, as a tree grows larger, the reduction in the SSE must be greater than the cost complexity penalty. Typically, we evaluate multiple models across a spectrum of \\(\\alpha\\) and use CV to identify the optimal value and, therefore, the optimal subtree that generalizes best to unseen data.

Figure 9\.8: To prune a tree, we grow an overly complex tree (left) and then use a cost complexity parameter to identify the optimal subtree (right).

9\.5 Ames housing example
-------------------------

We can fit a regression tree using `rpart` and then visualize it using `rpart.plot`. The fitting process and the visual output of regression trees and classification trees are very similar. Both use the formula method for expressing the model (similar to `lm()`). However, when fitting a regression tree, we need to set `method = "anova"`. By default, `rpart()` will make an intelligent guess as to what method to use based on the data type of your response column, but it’s good practice to set this explicitly.

```
ames_dt1 <- rpart(
  formula = Sale_Price ~ .,
  data    = ames_train,
  method  = "anova"
)
```

Once we’ve fit our model we can take a peek at the decision tree output. This prints various information about the different splits. For example, we start with `2054` observations at the root node and the first variable we split on (i.e., the first variable gave the largest reduction in SSE) is `Overall_Qual`. We see that at the first node all observations with `Overall_Qual` \\(\\in\\) \\(\\{\\)`Very_Poor`, `Poor`, `Fair`, `Below_Average`, `Average`, `Above_Average`, `Good`\\(\\}\\) go to the 2nd (`2)`) branch. The total number of observations that follow this branch (`1708`), their average sales price (`156195`) and SSE (`3.964e+12`) are listed. If you look for the 3rd branch (`3)`) you will see that `346` observations with `Overall_Qual` \\(\\in\\) \\(\\{\\)`Very_Good`, `Excellent`, `Very_Excellent`\\(\\}\\) follow this branch and their average sales price is `304593` and the SSE in this region is `1.036e+12`. Basically, this is telling us that `Overall_Qual` is an important predictor of sales price with those homes on the upper end of the quality spectrum having almost double the average sales price.
```
ames_dt1
## n= 2054
##
## node), split, n, deviance, yval
##       * denotes terminal node
##
##  1) root 2054 13216450000000 181192.80
##    2) Overall_Qual=Very_Poor,Poor,Fair,Below_Average,Average,Above_Average,Good 1708 3963616000000 156194.90
##      4) Neighborhood=North_Ames,Old_Town,Edwards,Sawyer,Mitchell,Brookside,Iowa_DOT_and_Rail_Road,South_and_West_of_Iowa_State_University,Meadow_Village,Briardale,Northpark_Villa,Blueste,Landmark 1022 1251428000000 131978.70
##        8) Overall_Qual=Very_Poor,Poor,Fair,Below_Average 195 167094500000 98535.99 *
##        9) Overall_Qual=Average,Above_Average,Good 827 814819400000 139864.20
##         18) First_Flr_SF< 1214.5 631 383938300000 132177.10 *
##         19) First_Flr_SF>=1214.5 196 273557300000 164611.70 *
##      5) Neighborhood=College_Creek,Somerset,Northridge_Heights,Gilbert,Northwest_Ames,Sawyer_West,Crawford,Timberland,Northridge,Stone_Brook,Clear_Creek,Bloomington_Heights,Veenker,Green_Hills 686 1219988000000 192272.10
##       10) Gr_Liv_Area< 1725 492 517806100000 177796.00
##         20) Total_Bsmt_SF< 1334.5 353 233343200000 166929.30 *
##         21) Total_Bsmt_SF>=1334.5 139 136919100000 205392.70 *
##       11) Gr_Liv_Area>=1725 194 337602800000 228984.70 *
##    3) Overall_Qual=Very_Good,Excellent,Very_Excellent 346 2916752000000 304593.10
##      6) Overall_Qual=Very_Good 249 955363000000 272321.20
##       12) Gr_Liv_Area< 1969 152 313458900000 244124.20 *
##       13) Gr_Liv_Area>=1969 97 331677500000 316506.30 *
##      7) Overall_Qual=Excellent,Very_Excellent 97 1036369000000 387435.20
##       14) Total_Bsmt_SF< 1903 65 231940700000 349010.80 *
##       15) Total_Bsmt_SF>=1903 32 513524700000 465484.70
##         30) Year_Built>=2003.5 25 270259300000 429760.40 *
##         31) Year_Built< 2003.5 7 97411210000 593071.40 *
```

We can visualize our tree model with `rpart.plot()`. The `rpart.plot()` function has many plotting options, which we’ll leave to the reader to explore. However, in the default print it will show the percentage of data that fall in each node and the predicted outcome for that node. One thing you may notice is that this tree contains 10 internal nodes resulting in 11 terminal nodes. In other words, this tree is partitioning on only 10 features even though there are 80 variables in the training data. Why is that?

```
rpart.plot(ames_dt1)
```

Figure 9\.9: Diagram displaying the pruned decision tree for the Ames Housing data.

Behind the scenes `rpart()` is automatically applying a range of cost complexity (\\(\\alpha\\)) values to prune the tree. To compare the error for each \\(\\alpha\\) value, `rpart()` performs a 10\-fold CV (by default). In this example we find diminishing returns after 12 terminal nodes as illustrated in Figure [9\.10](DT.html#fig:plot-cp) (\\(y\\)\-axis is the CV error, lower \\(x\\)\-axis is the cost complexity (\\(\\alpha\\)) value, upper \\(x\\)\-axis is the number of terminal nodes, i.e., tree size \= \\(\\vert T \\vert\\)). You may also notice the dashed line which goes through the point \\(\\vert T \\vert \= 8\\). Breiman ([1984](#ref-breiman2017classification)) suggested that in actual practice, it’s common to instead use the smallest tree within 1 standard error (SE) of the minimum CV error (this is called the *1\-SE rule*). Thus, we could use a tree with 8 terminal nodes and reasonably expect to experience similar results within a small margin of error.

```
plotcp(ames_dt1)
```

Figure 9\.10: Pruning complexity parameter (cp) plot illustrating the relative cross validation error (y\-axis) for various cp values (lower x\-axis). Smaller cp values lead to larger trees (upper x\-axis).
Using the 1\-SE rule, a tree size of 10\-12 provides optimal cross validation results. To illustrate the point of selecting a tree with 11 terminal nodes (or 8 if you go by the 1\-SE rule), we can force `rpart()` to generate a full tree by setting `cp = 0` (no penalty results in a fully grown tree). Figure [9\.11](DT.html#fig:no-cp-tree) shows that after 11 terminal nodes, we see diminishing returns in error reduction as the tree grows deeper. Thus, we can significantly prune our tree and still achieve minimal expected error. ``` ames_dt2 <- rpart( formula = Sale_Price ~ ., data = ames_train, method = "anova", control = list(cp = 0, xval = 10) ) plotcp(ames_dt2) abline(v = 11, lty = "dashed") ``` Figure 9\.11: Pruning complexity parameter plot for a fully grown tree. Significant reduction in the cross validation error is achieved with tree sizes 6\-20 and then the cross validation error levels off with minimal or no additional improvements. So, by default, `rpart()` is performing some automated tuning, with an optimal subtree of 10 total splits, 11 terminal nodes, and a cross\-validated SSE of 0\.292\. Although `rpart()` does not provide the RMSE or other metrics, you can use **caret**. In both cases, smaller penalties (deeper trees) are providing better CV results. ``` # rpart cross validation results ames_dt1$cptable ## CP nsplit rel error xerror xstd ## 1 0.47940879 0 1.0000000 1.0014737 0.06120398 ## 2 0.11290476 1 0.5205912 0.5226036 0.03199501 ## 3 0.06999005 2 0.4076864 0.4098819 0.03111581 ## 4 0.02758522 3 0.3376964 0.3572726 0.02222507 ## 5 0.02347276 4 0.3101112 0.3339952 0.02184348 ## 6 0.02201070 5 0.2866384 0.3301630 0.02446178 ## 7 0.02039233 6 0.2646277 0.3244948 0.02421833 ## 8 0.01190364 7 0.2442354 0.3062031 0.02641595 ## 9 0.01116365 8 0.2323317 0.3025968 0.02708786 ## 10 0.01103581 9 0.2211681 0.2971663 0.02704837 ## 11 0.01000000 10 0.2101323 0.2920442 0.02704791 # caret cross validation results ames_dt3 <- train( Sale_Price ~ ., data = ames_train, method = "rpart", trControl = trainControl(method = "cv", number = 10), tuneLength = 20 ) ggplot(ames_dt3) ``` Figure 9\.12: Cross\-validated accuracy rate for the 20 different \\(\\alpha\\) parameter values in our grid search. Lower \\(\\alpha\\) values (deeper trees) help to minimize errors. 9\.6 Feature interpretation --------------------------- To measure feature importance, the reduction in the loss function (e.g., SSE) attributed to each variable at each split is tabulated. In some instances, a single variable could be used multiple times in a tree; consequently, the total reduction in the loss function across all splits by a variable are summed up and used as the total feature importance. When using **caret**, these values are standardized so that the most important feature has a value of 100 and the remaining features are scored based on their relative reduction in the loss function. Also, since there may be candidate variables that are important but are not used in a split, the top competing variables are also tabulated at each split. Figure [9\.13](DT.html#fig:dt-vip) illustrates the top 40 features in the Ames housing decision tree. Similar to MARS (Chapter [7](mars.html#mars)), decision trees perform automated feature selection where uninformative features are not used in the model. We can see this in Figure [9\.13](DT.html#fig:dt-vip) where the bottom four features in the plot have zero importance. 
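The 1\-SE rule can also be applied programmatically. Below is a minimal sketch (not from the text) that pulls the cp table from the fully grown tree above, finds the largest cp value whose cross\-validated error is within one standard error of the minimum, and prunes with `rpart::prune()`; the object name `ames_dt_pruned` is just an illustrative choice.

```
# Apply the 1-SE rule by hand and prune the fully grown tree (ames_dt2)
cp_tab  <- as.data.frame(ames_dt2$cptable)
min_row <- which.min(cp_tab$xerror)
thresh  <- cp_tab$xerror[min_row] + cp_tab$xstd[min_row]

# cptable rows run from the smallest tree to the largest, so the first row
# meeting the threshold gives the smallest qualifying tree (and its cp value)
best_cp <- cp_tab$CP[min(which(cp_tab$xerror <= thresh))]
ames_dt_pruned <- prune(ames_dt2, cp = best_cp)
```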
``` vip(ames_dt3, num_features = 40, bar = FALSE) ``` Figure 9\.13: Variable importance based on the total reduction in MSE for the Ames Housing decision tree. If we look at the same partial dependence plots that we created for the MARS models (Section [7\.5](mars.html#mars-features)), we can see the similarity in how decision trees are modeling the relationship between the features and target. In Figure [9\.14](DT.html#fig:dt-pdp), we see that `Gr_Liv_Area` has a non\-linear relationship such that it has increasingly stronger effects on the predicted sales price for `Gr_liv_Area` values between 1000–2500 but then has little, if any, influence when it exceeds 2500\. However, the 3\-D plot of the interaction effect between `Gr_Liv_Area` and `Year_Built` illustrates a key difference in how decision trees have rigid non\-smooth prediction surfaces compared to MARS; in fact, MARS was developed as an improvement to CART for regression problems. ``` # Construct partial dependence plots p1 <- partial(ames_dt3, pred.var = "Gr_Liv_Area") %>% autoplot() p2 <- partial(ames_dt3, pred.var = "Year_Built") %>% autoplot() p3 <- partial(ames_dt3, pred.var = c("Gr_Liv_Area", "Year_Built")) %>% plotPartial(levelplot = FALSE, zlab = "yhat", drape = TRUE, colorkey = TRUE, screen = list(z = -20, x = -60)) # Display plots side by side gridExtra::grid.arrange(p1, p2, p3, ncol = 3) ``` Figure 9\.14: Partial dependence plots to understand the relationship between sale price and the living space, and year built features. 9\.7 Final thoughts ------------------- Decision trees have a number of advantages. Trees require very little pre\-processing. This is not to say feature engineering may not improve upon a decision tree, but rather, that there are no pre\-processing requirements. Monotonic transformations (e.g., \\(\\log\\), \\(\\exp\\), and \\(\\sqrt{}\\)) are not required to meet algorithm assumptions as in many parametric models; instead, they only shift the location of the optimal split points. Outliers typically do not bias the results as much since the binary partitioning simply looks for a single location to make a split within the distribution of each feature. Decision trees can easily handle categorical features without preprocessing. For unordered categorical features with more than two levels, the classes are ordered based on the outcome (for regression problems, the mean of the response is used and for classification problems, the proportion of the positive outcome class is used). For more details see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)), Breiman and Ihaka ([1984](#ref-breiman1984nonlinear)), Ripley ([2007](#ref-ripley2007pattern)), Fisher ([1958](#ref-fisher1958grouping)), and Loh and Vanichsetakul ([1988](#ref-loh1988tree)). Missing values often cause problems with statistical models and analyses. Most procedures deal with them by refusing to deal with them—incomplete observations are tossed out. However, most decision tree implementations can easily handle missing values in the features and do not require imputation. This is handled in various ways but most commonly by creating a new “missing” class for categorical variables or using surrogate splits (see Therneau, Atkinson, and others ([1997](#ref-therneau1997introduction)) for details). However, individual decision trees generally do not often achieve state\-of\-the\-art predictive accuracy. 
In this chapter, we saw that the best pruned decision tree, although it performed better than linear regression (Chapter [4](linear-regression.html#linear-regression)), had a very poor RMSE ($41,019\) compared to some of the other models we’ve built. This is driven by the fact that decision trees are composed of simple yes\-or\-no rules that create rigid non\-smooth decision boundaries. Furthermore, we saw that deep trees tend to have high variance (and low bias) and shallow trees tend to be overly biased (but low variance). In the chapters that follow, we’ll see how we can combine multiple trees together into very powerful prediction models called *ensembles*.
9\.3 Partitioning ----------------- As illustrated above, CART uses *binary recursive partitioning* (it’s recursive because each split or rule depends on the the splits above it). The objective at each node is to find the “best” feature (\\(x\_i\\)) to partition the remaining data into one of two regions (\\(R\_1\\) and \\(R\_2\\)) such that the overall error between the actual response (\\(y\_i\\)) and the predicted constant (\\(c\_i\\)) is minimized. For regression problems, the objective function to minimize is the total SSE as defined in Equation [(9\.1\)](DT.html#eq:partobjective) below: \\\[\\begin{equation} \\tag{9\.1} SSE \= \\sum\_{i \\in R\_1}\\left(y\_i \- c\_1\\right)^2 \+ \\sum\_{i \\in R\_2}\\left(y\_i \- c\_2\\right)^2 \\end{equation}\\] For classification problems, the partitioning is usually made to maximize the reduction in cross\-entropy or the Gini index (see Section [2\.6](process.html#model-eval)).[27](#fn27) In both regression and classification trees, the objective of partitioning is to minimize dissimilarity in the terminal nodes. However, we suggest Therneau, Atkinson, and others ([1997](#ref-therneau1997introduction)) for a more thorough discussion regarding binary recursive partitioning. Having found the best feature/split combination, the data are partitioned into two regions and the splitting process is repeated on each of the two regions (hence the name binary recursive partitioning). This process is continued until a suitable stopping criterion is reached (e.g., a maximum depth is reached or the tree becomes “too complex”). It’s important to note that a single feature can be used multiple times in a tree. For example, say we have data generated from a simple \\(\\sin\\) function with Gaussian noise: \\(Y\_i \\stackrel{iid}{\\sim} N\\left(\\sin\\left(X\_i\\right), \\sigma^2\\right)\\), for \\(i \= 1, 2, \\dots, 500\\). A regression tree built with a single root node (often referred to as a decision stump) leads to a split occurring at \\(x \= 3\.1\\). Figure 9\.3: Decision tree illustrating the single split on feature x (left). The resulting decision boundary illustrates the predicted value when x \< 3\.1 (0\.64\), and when x \> 3\.1 (\-0\.67\) (right). If we build a deeper tree, we’ll continue to split on the same feature (\\(x\\)) as illustrated in Figure [9\.4](DT.html#fig:depth-3-decision-tree). This is because \\(x\\) is the only feature available to split on so it will continue finding the optimal splits along this feature’s values until a pre\-determined stopping criteria is reached. Figure 9\.4: Decision tree illustrating with depth \= 3, resulting in 7 decision splits along values of feature x and 8 prediction regions (left). The resulting decision boundary (right). However, even when many features are available, a single feature may still dominate if it continues to provide the best split after each successive partition. For example, a decision tree applied to the iris data set (Fisher [1936](#ref-fisher1936use)) where the species of the flower (setosa, versicolor, and virginica) is predicted based on two features (sepal width and sepal length) results in an optimal decision tree with two splits on each feature. Also, note how the decision boundary in a classification problem results in rectangular regions enclosing the observations. The predicted value is the response class with the greatest proportion within the enclosed region. Figure 9\.5: Decision tree for the iris classification problem (left). 
The decision boundary results in rectangular regions that enclose the observations. The class with the highest proportion in each region is the predicted value (right). 9\.4 How deep? -------------- This leads to an important question: how deep (i.e., complex) should we make the tree? If we grow an overly complex tree as in Figure [9\.6](DT.html#fig:deep-overfit-tree), we tend to overfit to our training data resulting in poor generalization performance. Figure 9\.6: Overfit decision tree with 56 splits. Consequently, there is a balance to be achieved in the depth and complexity of the tree to optimize predictive performance on future unseen data. To find this balance, we have two primary approaches: (1\) early stopping and (2\) pruning. ### 9\.4\.1 Early stopping Early stopping explicitly restricts the growth of the tree. There are several ways we can restrict tree growth but two of the most common approaches are to restrict the tree depth to a certain level or to restrict the minimum number of observations allowed in any terminal node. When limiting tree depth we stop splitting after a certain depth (e.g., only grow a tree that has a depth of 5 levels). The shallower the tree the less variance we have in our predictions; however, at some point we can start to inject too much bias as shallow trees (e.g., stumps) are not able to capture interactions and complex patterns in our data. When restricting minimum terminal node size (e.g., leaf nodes must contain at least 10 observations for predictions) we are deciding to not split intermediate nodes which contain too few data points. At the far end of the spectrum, a terminal node’s size of one allows for a single observation to be captured in the leaf node and used as a prediction (in this case, we’re interpolating the training data). This results in high variance and poor generalizability. On the other hand, large values restrict further splits therefore reducing variance. These two approaches can be implemented independently of one another; however, they do have interaction effects as illustrated by Figure [9\.7](DT.html#fig:dt-early-stopping). Figure 9\.7: Illustration of how early stopping affects the decision boundary of a regression decision tree. The columns illustrate how tree depth impacts the decision boundary and the rows illustrate how the minimum number of observations in the terminal node influences the decision boundary. ### 9\.4\.2 Pruning An alternative to explicitly specifying the depth of a decision tree is to grow a very large, complex tree and then *prune* it back to find an optimal subtree. We find the optimal subtree by using a *cost complexity parameter* (\\(\\alpha\\)) that penalizes our objective function in Equation [(9\.1\)](DT.html#eq:partobjective) for the number of terminal nodes of the tree (\\(T\\)) as in Equation [(9\.2\)](DT.html#eq:prune). \\\[\\begin{equation} \\tag{9\.2} \\texttt{minimize} \\left\\{ SSE \+ \\alpha \\vert T \\vert \\right\\} \\end{equation}\\] For a given value of \\(\\alpha\\) we find the smallest pruned tree that has the lowest penalized error. You may recognize the close association to the lasso penalty discussed in Chapter [6](regularized-regression.html#regularized-regression). As with the regularization methods, smaller penalties tend to produce more complex models, which result in larger trees. Whereas larger penalties result in much smaller trees. Consequently, as a tree grows larger, the reduction in the SSE must be greater than the cost complexity penalty. 
Typically, we evaluate multiple models across a spectrum of \\(\\alpha\\) and use CV to identify the optimal value and, therefore, the optimal subtree that generalizes best to unseen data. Figure 9\.8: To prune a tree, we grow an overly complex tree (left) and then use a cost complexity parameter to identify the optimal subtree (right). ### 9\.4\.1 Early stopping Early stopping explicitly restricts the growth of the tree. There are several ways we can restrict tree growth but two of the most common approaches are to restrict the tree depth to a certain level or to restrict the minimum number of observations allowed in any terminal node. When limiting tree depth we stop splitting after a certain depth (e.g., only grow a tree that has a depth of 5 levels). The shallower the tree the less variance we have in our predictions; however, at some point we can start to inject too much bias as shallow trees (e.g., stumps) are not able to capture interactions and complex patterns in our data. When restricting minimum terminal node size (e.g., leaf nodes must contain at least 10 observations for predictions) we are deciding to not split intermediate nodes which contain too few data points. At the far end of the spectrum, a terminal node’s size of one allows for a single observation to be captured in the leaf node and used as a prediction (in this case, we’re interpolating the training data). This results in high variance and poor generalizability. On the other hand, large values restrict further splits therefore reducing variance. These two approaches can be implemented independently of one another; however, they do have interaction effects as illustrated by Figure [9\.7](DT.html#fig:dt-early-stopping). Figure 9\.7: Illustration of how early stopping affects the decision boundary of a regression decision tree. The columns illustrate how tree depth impacts the decision boundary and the rows illustrate how the minimum number of observations in the terminal node influences the decision boundary. ### 9\.4\.2 Pruning An alternative to explicitly specifying the depth of a decision tree is to grow a very large, complex tree and then *prune* it back to find an optimal subtree. We find the optimal subtree by using a *cost complexity parameter* (\\(\\alpha\\)) that penalizes our objective function in Equation [(9\.1\)](DT.html#eq:partobjective) for the number of terminal nodes of the tree (\\(T\\)) as in Equation [(9\.2\)](DT.html#eq:prune). \\\[\\begin{equation} \\tag{9\.2} \\texttt{minimize} \\left\\{ SSE \+ \\alpha \\vert T \\vert \\right\\} \\end{equation}\\] For a given value of \\(\\alpha\\) we find the smallest pruned tree that has the lowest penalized error. You may recognize the close association to the lasso penalty discussed in Chapter [6](regularized-regression.html#regularized-regression). As with the regularization methods, smaller penalties tend to produce more complex models, which result in larger trees. Whereas larger penalties result in much smaller trees. Consequently, as a tree grows larger, the reduction in the SSE must be greater than the cost complexity penalty. Typically, we evaluate multiple models across a spectrum of \\(\\alpha\\) and use CV to identify the optimal value and, therefore, the optimal subtree that generalizes best to unseen data. Figure 9\.8: To prune a tree, we grow an overly complex tree (left) and then use a cost complexity parameter to identify the optimal subtree (right). 
9\.5 Ames housing example ------------------------- We can fit a regression tree using `rpart` and then visualize it using `rpart.plot`. The fitting process and the visual output of regression trees and classification trees are very similar. Both use the formula method for expressing the model (similar to `lm()`). However, when fitting a regression tree, we need to set `method = "anova"`. By default, `rpart()` will make an intelligent guess as to what method to use based on the data type of your response column, but it’s good practice to set this explicitly. ``` ames_dt1 <- rpart( formula = Sale_Price ~ ., data = ames_train, method = "anova" ) ``` Once we’ve fit our model we can take a peek at the decision tree output. This prints various information about the different splits. For example, we start with `2054` observations at the root node and the first variable we split on (i.e., the first variable gave the largest reduction in SSE) is `Overall_Qual`. We see that at the first node all observations with `Overall_Qual` \\(\\in\\) \\(\\{\\)`Very_Poor`, `Poor`, `Fair`, `Below_Average`, `Average`, `Above_Average`, `Good`\\(\\}\\) go to the 2nd (`2)`) branch. The total number of observations that follow this branch (`1708`), their average sales price (`156195`), and SSE (`3.964e+12`) are listed. If you look at the 3rd branch (`3)`), you will see that `346` observations with `Overall_Qual` \\(\\in\\) \\(\\{\\)`Very_Good`, `Excellent`, `Very_Excellent`\\(\\}\\) follow this branch and their average sales price is `304593` and the SSE in this region is `1.036e+12`. Basically, this is telling us that `Overall_Qual` is an important predictor of sales price, with those homes on the upper end of the quality spectrum having almost double the average sales price. ``` ames_dt1 ## n= 2054 ## ## node), split, n, deviance, yval ## * denotes terminal node ## ## 1) root 2054 13216450000000 181192.80 ## 2) Overall_Qual=Very_Poor,Poor,Fair,Below_Average,Average,Above_Average,Good 1708 3963616000000 156194.90 ## 4) Neighborhood=North_Ames,Old_Town,Edwards,Sawyer,Mitchell,Brookside,Iowa_DOT_and_Rail_Road,South_and_West_of_Iowa_State_University,Meadow_Village,Briardale,Northpark_Villa,Blueste,Landmark 1022 1251428000000 131978.70 ## 8) Overall_Qual=Very_Poor,Poor,Fair,Below_Average 195 167094500000 98535.99 * ## 9) Overall_Qual=Average,Above_Average,Good 827 814819400000 139864.20 ## 18) First_Flr_SF< 1214.5 631 383938300000 132177.10 * ## 19) First_Flr_SF>=1214.5 196 273557300000 164611.70 * ## 5) Neighborhood=College_Creek,Somerset,Northridge_Heights,Gilbert,Northwest_Ames,Sawyer_West,Crawford,Timberland,Northridge,Stone_Brook,Clear_Creek,Bloomington_Heights,Veenker,Green_Hills 686 1219988000000 192272.10 ## 10) Gr_Liv_Area< 1725 492 517806100000 177796.00 ## 20) Total_Bsmt_SF< 1334.5 353 233343200000 166929.30 * ## 21) Total_Bsmt_SF>=1334.5 139 136919100000 205392.70 * ## 11) Gr_Liv_Area>=1725 194 337602800000 228984.70 * ## 3) Overall_Qual=Very_Good,Excellent,Very_Excellent 346 2916752000000 304593.10 ## 6) Overall_Qual=Very_Good 249 955363000000 272321.20 ## 12) Gr_Liv_Area< 1969 152 313458900000 244124.20 * ## 13) Gr_Liv_Area>=1969 97 331677500000 316506.30 * ## 7) Overall_Qual=Excellent,Very_Excellent 97 1036369000000 387435.20 ## 14) Total_Bsmt_SF< 1903 65 231940700000 349010.80 * ## 15) Total_Bsmt_SF>=1903 32 513524700000 465484.70 ## 30) Year_Built>=2003.5 25 270259300000 429760.40 * ## 31) Year_Built< 2003.5 7 97411210000 593071.40 * ``` We can visualize our tree model with `rpart.plot()`.
The `rpart.plot()` function has many plotting options, which we’ll leave to the reader to explore. However, in the default print it will show the percentage of data that fall in each node and the predicted outcome for that node. One thing you may notice is that this tree contains 10 internal nodes resulting in 11 terminal nodes. In other words, this tree partitions on only a handful of features even though there are 80 variables in the training data. Why is that? ``` rpart.plot(ames_dt1) ``` Figure 9\.9: Diagram displaying the pruned decision tree for the Ames Housing data. Behind the scenes, `rpart()` is automatically applying a range of cost complexity (\\(\\alpha\\)) values to prune the tree. To compare the error for each \\(\\alpha\\) value, `rpart()` performs a 10\-fold CV (by default). In this example we find diminishing returns after 12 terminal nodes as illustrated in Figure [9\.10](DT.html#fig:plot-cp) (the \\(y\\)\-axis is the CV error, the lower \\(x\\)\-axis is the cost complexity (\\(\\alpha\\)) value, and the upper \\(x\\)\-axis is the number of terminal nodes, i.e., tree size \\(\\vert T \\vert\\)). You may also notice the dashed line which goes through the point \\(\\vert T \\vert \= 8\\). Breiman ([1984](#ref-breiman2017classification)) suggested that in actual practice, it’s common to instead use the smallest tree within 1 standard error (SE) of the minimum CV error (this is called the *1\-SE rule*). Thus, we could use a tree with 8 terminal nodes and reasonably expect to experience similar results within a small margin of error. ``` plotcp(ames_dt1) ``` Figure 9\.10: Pruning complexity parameter (cp) plot illustrating the relative cross validation error (y\-axis) for various cp values (lower x\-axis). Smaller cp values lead to larger trees (upper x\-axis). Using the 1\-SE rule, a tree size of 10\-12 provides optimal cross validation results. To illustrate the point of selecting a tree with 11 terminal nodes (or 8 if you go by the 1\-SE rule), we can force `rpart()` to generate a full tree by setting `cp = 0` (no penalty results in a fully grown tree). Figure [9\.11](DT.html#fig:no-cp-tree) shows that after 11 terminal nodes, we see diminishing returns in error reduction as the tree grows deeper. Thus, we can significantly prune our tree and still achieve minimal expected error. ``` ames_dt2 <- rpart( formula = Sale_Price ~ ., data = ames_train, method = "anova", control = list(cp = 0, xval = 10) ) plotcp(ames_dt2) abline(v = 11, lty = "dashed") ``` Figure 9\.11: Pruning complexity parameter plot for a fully grown tree. Significant reduction in the cross validation error is achieved with tree sizes 6\-20 and then the cross validation error levels off with minimal or no additional improvements. So, by default, `rpart()` is performing some automated tuning, with an optimal subtree of 10 total splits, 11 terminal nodes, and a cross\-validated SSE of 0\.292\. Although `rpart()` does not provide the RMSE or other metrics, you can use **caret**. In both cases, smaller penalties (deeper trees) are providing better CV results.
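Since the 1\-SE rule is only described verbally above, the following is a minimal sketch of one way to apply it programmatically using the `cptable` component of the fitted model (printed in the next code chunk) together with `prune()`; the object name `ames_dt1_1se` is purely illustrative.

```
# Apply the 1-SE rule: pick the largest cp (smallest tree) whose CV error is
# within one standard error of the minimum, then prune to that subtree
cp_tab <- ames_dt1$cptable
best   <- which.min(cp_tab[, "xerror"])                  # row with lowest CV error
thresh <- cp_tab[best, "xerror"] + cp_tab[best, "xstd"]  # 1-SE threshold
cp_1se <- cp_tab[cp_tab[, "xerror"] <= thresh, "CP"][1]  # smallest qualifying tree
ames_dt1_1se <- prune(ames_dt1, cp = cp_1se)
```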
``` # rpart cross validation results ames_dt1$cptable ## CP nsplit rel error xerror xstd ## 1 0.47940879 0 1.0000000 1.0014737 0.06120398 ## 2 0.11290476 1 0.5205912 0.5226036 0.03199501 ## 3 0.06999005 2 0.4076864 0.4098819 0.03111581 ## 4 0.02758522 3 0.3376964 0.3572726 0.02222507 ## 5 0.02347276 4 0.3101112 0.3339952 0.02184348 ## 6 0.02201070 5 0.2866384 0.3301630 0.02446178 ## 7 0.02039233 6 0.2646277 0.3244948 0.02421833 ## 8 0.01190364 7 0.2442354 0.3062031 0.02641595 ## 9 0.01116365 8 0.2323317 0.3025968 0.02708786 ## 10 0.01103581 9 0.2211681 0.2971663 0.02704837 ## 11 0.01000000 10 0.2101323 0.2920442 0.02704791 # caret cross validation results ames_dt3 <- train( Sale_Price ~ ., data = ames_train, method = "rpart", trControl = trainControl(method = "cv", number = 10), tuneLength = 20 ) ggplot(ames_dt3) ``` Figure 9\.12: Cross\-validated accuracy rate for the 20 different \\(\\alpha\\) parameter values in our grid search. Lower \\(\\alpha\\) values (deeper trees) help to minimize errors. 9\.6 Feature interpretation --------------------------- To measure feature importance, the reduction in the loss function (e.g., SSE) attributed to each variable at each split is tabulated. In some instances, a single variable could be used multiple times in a tree; consequently, the total reduction in the loss function across all splits by a variable is summed up and used as the total feature importance. When using **caret**, these values are standardized so that the most important feature has a value of 100 and the remaining features are scored based on their relative reduction in the loss function. Also, since there may be candidate variables that are important but are not used in a split, the top competing variables are also tabulated at each split. Figure [9\.13](DT.html#fig:dt-vip) illustrates the top 40 features in the Ames housing decision tree. Similar to MARS (Chapter [7](mars.html#mars)), decision trees perform automated feature selection where uninformative features are not used in the model. We can see this in Figure [9\.13](DT.html#fig:dt-vip) where the bottom four features in the plot have zero importance. ``` vip(ames_dt3, num_features = 40, bar = FALSE) ``` Figure 9\.13: Variable importance based on the total reduction in MSE for the Ames Housing decision tree. If we look at the same partial dependence plots that we created for the MARS models (Section [7\.5](mars.html#mars-features)), we can see the similarity in how decision trees are modeling the relationship between the features and target. In Figure [9\.14](DT.html#fig:dt-pdp), we see that `Gr_Liv_Area` has a non\-linear relationship such that it has increasingly stronger effects on the predicted sales price for `Gr_Liv_Area` values between 1000–2500 but then has little, if any, influence when it exceeds 2500\. However, the 3\-D plot of the interaction effect between `Gr_Liv_Area` and `Year_Built` illustrates a key difference in how decision trees have rigid non\-smooth prediction surfaces compared to MARS; in fact, MARS was developed as an improvement to CART for regression problems.
``` # Construct partial dependence plots p1 <- partial(ames_dt3, pred.var = "Gr_Liv_Area") %>% autoplot() p2 <- partial(ames_dt3, pred.var = "Year_Built") %>% autoplot() p3 <- partial(ames_dt3, pred.var = c("Gr_Liv_Area", "Year_Built")) %>% plotPartial(levelplot = FALSE, zlab = "yhat", drape = TRUE, colorkey = TRUE, screen = list(z = -20, x = -60)) # Display plots side by side gridExtra::grid.arrange(p1, p2, p3, ncol = 3) ``` Figure 9\.14: Partial dependence plots to understand the relationship between sale price and the living space and year built features. 9\.7 Final thoughts ------------------- Decision trees have a number of advantages. Trees require very little pre\-processing. This is not to say feature engineering may not improve upon a decision tree, but rather, that there are no pre\-processing requirements. Monotonic transformations (e.g., \\(\\log\\), \\(\\exp\\), and \\(\\sqrt{}\\)) are not required to meet algorithm assumptions as in many parametric models; instead, they only shift the location of the optimal split points. Outliers typically do not bias the results as much since the binary partitioning simply looks for a single location to make a split within the distribution of each feature. Decision trees can easily handle categorical features without preprocessing. For unordered categorical features with more than two levels, the classes are ordered based on the outcome (for regression problems, the mean of the response is used and for classification problems, the proportion of the positive outcome class is used). For more details see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)), Breiman and Ihaka ([1984](#ref-breiman1984nonlinear)), Ripley ([2007](#ref-ripley2007pattern)), Fisher ([1958](#ref-fisher1958grouping)), and Loh and Vanichsetakul ([1988](#ref-loh1988tree)). Missing values often cause problems with statistical models and analyses. Most procedures deal with them by refusing to deal with them: incomplete observations are simply tossed out. However, most decision tree implementations can easily handle missing values in the features and do not require imputation. This is handled in various ways but most commonly by creating a new “missing” class for categorical variables or using surrogate splits (see Therneau, Atkinson, and others ([1997](#ref-therneau1997introduction)) for details); a short sketch at the end of this chapter illustrates this behavior. However, individual decision trees generally do not achieve state\-of\-the\-art predictive accuracy. In this chapter, we saw that the best pruned decision tree, although it performed better than linear regression (Chapter [4](linear-regression.html#linear-regression)), had a very poor RMSE ($41,019\) compared to some of the other models we’ve built. This is driven by the fact that decision trees are composed of simple yes\-or\-no rules that create rigid non\-smooth decision boundaries. Furthermore, we saw that deep trees tend to have high variance (and low bias) and shallow trees tend to be overly biased (but have low variance). In the chapters that follow, we’ll see how we can combine multiple trees together into very powerful prediction models called *ensembles*.
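Before moving on, here is the brief, hedged sketch of the missing-value behavior mentioned above. The column chosen (`Gr_Liv_Area`), the amount of missingness (100 rows), and the object names are arbitrary; the point is simply that `rpart` falls back on surrogate splits, so no imputation step is required.

```
# Inject artificial missingness into a copy of the training data and refit;
# rpart handles the NA values through surrogate splits
set.seed(123)
ames_miss <- ames_train
ames_miss$Gr_Liv_Area[sample(nrow(ames_miss), 100)] <- NA

ames_dt_miss <- rpart(Sale_Price ~ ., data = ames_miss, method = "anova")
head(predict(ames_dt_miss, newdata = ames_miss))  # predictions are still produced
```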
Chapter 10 Bagging ================== In Section [2\.4\.2](process.html#bootstrapping) we learned about bootstrapping as a resampling procedure, which creates *b* new bootstrap samples by drawing samples with replacement from the original training data. This chapter illustrates how we can use bootstrapping to create an *ensemble* of predictions. Bootstrap aggregating, also called *bagging*, is one of the first ensemble algorithms[28](#fn28) machine learning practitioners learn and is designed to improve the stability and accuracy of regression and classification algorithms. By model averaging, bagging helps to reduce variance and minimize overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. 10\.1 Prerequisites ------------------- In this chapter we’ll make use of the following packages: ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome plotting library(doParallel) # for parallel backend to foreach library(foreach) # for parallel processing with for loops # Modeling packages library(caret) # for general model fitting library(rpart) # for fitting decision trees library(ipred) # for fitting bagged decision trees ``` We’ll continue to illustrate the main concepts with the `ames_train` data set created in Section [2\.7](process.html#put-process-together). 10\.2 Why and when bagging works -------------------------------- *Bootstrap aggregating* (bagging) prediction models is a general method for fitting multiple versions of a prediction model and then combining (or ensembling) them into an aggregated prediction (Breiman [1996](#ref-breiman1996bagging)[a](#ref-breiman1996bagging)). Bagging is a fairly straightforward algorithm in which *b* bootstrap copies of the original training data are created, the regression or classification algorithm (commonly referred to as the *base learner*) is applied to each bootstrap sample and, in the regression context, new predictions are made by averaging the predictions together from the individual base learners. When dealing with a classification problem, the base learner predictions are combined using plurality vote or by averaging the estimated class probabilities together. This is represented in Equation [(10\.1\)](bagging.html#eq:bagging) where \\(X\\) is the record for which we want to generate a prediction, \\(\\widehat{f\_{bag}}\\) is the bagged prediction, and \\(\\widehat{f\_1}\\left(X\\right), \\widehat{f\_2}\\left(X\\right), \\dots, \\widehat{f\_b}\\left(X\\right)\\) are the predictions from the individual base learners. \\\[\\begin{equation} \\tag{10\.1} \\widehat{f\_{bag}} \= \\frac{1}{b} \\left( \\widehat{f\_1}\\left(X\\right) \+ \\widehat{f\_2}\\left(X\\right) \+ \\cdots \+ \\widehat{f\_b}\\left(X\\right) \\right) \\end{equation}\\] Because of the aggregation process, bagging effectively reduces the variance of an individual base learner (i.e., averaging reduces variance); however, bagging does not always improve upon an individual base learner. As discussed in Section [2\.5](process.html#bias-var), some models have larger variance than others. Bagging works especially well for unstable, high variance base learners—algorithms whose predicted output undergoes major changes in response to small changes in the training data (Dietterich [2000](#ref-dietterich2000ensemble)[b](#ref-dietterich2000ensemble), [2000](#ref-dietterich2000experimental)[a](#ref-dietterich2000experimental)). This includes algorithms such as decision trees and KNN (when *k* is sufficiently small).
However, for algorithms that are more stable or have high bias, bagging offers less improvement on predicted outputs since there is less variability (e.g., bagging a linear regression model will effectively just return the original predictions for large enough \\(b\\)). The general idea behind bagging is referred to as the “wisdom of the crowd” effect and was popularized by Surowiecki ([2005](#ref-surowiecki2005wisdom)). It essentially means that the aggregation of information in large diverse groups results in decisions that are often better than could have been made by any single member of the group. The more diverse the group members are, the more diverse their perspectives and predictions will be, which often leads to better aggregated information. Think of estimating the number of jelly beans in a jar at a carnival. While any individual guess is likely to be way off, you’ll often find that the averaged guesses tend to be a lot closer to the true number. This is illustrated in Figure [10\.1](bagging.html#fig:bagging-multiple-models), which compares bagging \\(b \= 100\\) polynomial regression models, MARS models, and CART decision trees. You can see that the low variance base learner (polynomial regression) gains very little from bagging while the higher variance learner (decision trees) gains significantly more. Not only does bagging help minimize the high variability (instability) of single trees, but it also helps to smooth out the prediction surface. Figure 10\.1: The effect of bagging 100 base learners. High variance models such as decision trees (C) benefit the most from the aggregation effect in bagging, whereas low variance models such as polynomial regression (A) show little improvement. Optimal performance is often found by bagging 50–500 trees. Data sets that have a few strong predictors typically require fewer trees, whereas data sets with lots of noise or multiple strong predictors may need more. Using too many trees will not lead to overfitting. However, it’s important to realize that since multiple models are being run, the more iterations you perform, the greater the computational and time requirements will be. As these demands increase, performing *k*\-fold CV can become computationally burdensome. A benefit to creating ensembles via bagging, which is based on resampling with replacement, is that it can provide its own internal estimate of predictive performance with the out\-of\-bag (OOB) sample (see Section [2\.4\.2](process.html#bootstrapping)). The OOB sample can be used to test predictive performance and the results usually compare well with *k*\-fold CV, assuming your data set is sufficiently large (say \\(n \\geq 1,000\\)). Consequently, as your data sets become larger and your bagging iterations increase, it is common to use the OOB error estimate as a proxy for predictive performance. Think of the OOB estimate of generalization performance as an unstructured, but free, CV statistic. 10\.3 Implementation -------------------- In Chapter [9](DT.html#DT), we saw how decision trees performed in predicting the sales price for the Ames housing data. Performance was subpar compared to the MARS (Chapter [7](mars.html#mars)) and KNN (Chapter [8](knn.html#knn)) models we fit, even after tuning to find the optimal pruned tree. Rather than use a single pruned decision tree, we can use, say, 100 bagged unpruned trees (by not pruning the trees we’re keeping bias low and variance high, which is when bagging will have the biggest effect).
As the below code chunk illustrates, we gain significant improvement over our individual (pruned) decision tree (RMSE of 26,462 for bagged trees vs. 41,019 for the single decision tree). The `bagging()` function comes from the **ipred** package and we use `nbagg` to control how many iterations to include in the bagged model and `coob = TRUE` indicates to use the OOB error rate. By default, `bagging()` uses `rpart::rpart()` for decision tree base learners but other base learners are available. Since bagging just aggregates a base learner, we can tune the base learner parameters as normal. Here, we pass parameters to `rpart()` with the `control` parameter and we build deep trees (no pruning) that require just two observations in a node to split. ``` # make bootstrapping reproducible set.seed(123) # train bagged model ames_bag1 <- bagging( formula = Sale_Price ~ ., data = ames_train, nbagg = 100, coob = TRUE, control = rpart.control(minsplit = 2, cp = 0) ) ames_bag1 ## ## Bagging regression trees with 100 bootstrap replications ## ## Call: bagging.data.frame(formula = Sale_Price ~ ., data = ames_train, ## nbagg = 100, coob = TRUE, control = rpart.control(minsplit = 2, ## cp = 0)) ## ## Out-of-bag estimate of root mean squared error: 25528.78 ``` One thing to note is that typically, the more trees the better. As we add more trees we’re averaging over more high variance decision trees. Early on, we see a dramatic reduction in variance (and hence our error) but eventually the error will typically flatline and stabilize signaling that a suitable number of trees has been reached. Often, we need only 50–100 trees to stabilize the error (in other cases we may need 500 or more). For the Ames data we see that the error is stabilizing with just over 100 trees so we’ll likely not gain much improvement by simply bagging more trees. Unfortunately, `bagging()` does not provide the RMSE by tree so to produce this error curve we iterated over `nbagg` values of 1–200 and applied the same `bagging()` function above. Figure 10\.2: Error curve for bagging 1\-200 deep, unpruned decision trees. The benefit of bagging is optimized at 187 trees although the majority of error reduction occurred within the first 100 trees. We can also apply bagging within **caret** and use 10\-fold CV to see how well our ensemble will generalize. We see that the cross\-validated RMSE for 200 trees is similar to the OOB estimate (difference of 495\). However, using the OOB error took 58 seconds to compute whereas performing the following 10\-fold CV took roughly 26 minutes on our machine! ``` ames_bag2 <- train( Sale_Price ~ ., data = ames_train, method = "treebag", trControl = trainControl(method = "cv", number = 10), nbagg = 200, control = rpart.control(minsplit = 2, cp = 0) ) ames_bag2 ## Bagged CART ## ## 2054 samples ## 80 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold) ## Summary of sample sizes: 1849, 1848, 1848, 1849, 1849, 1847, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 26957.06 0.8900689 16713.14 ``` 10\.4 Easily parallelize ------------------------ As stated in Section [10\.2](bagging.html#why-bag), bagging can become computationally intense as the number of iterations increases. Fortunately, the process of bagging involves fitting models to each of the bootstrap samples which are completely independent of one another. This means that each model can be trained in parallel and the results aggregated in the end for the final model. 
Consequently, if you have access to a large cluster or number of cores, you can more quickly create bagged ensembles on larger data sets. The following illustrates parallelizing the bagging algorithm (with \\(b \= 160\\) decision trees) on the Ames housing data using eight cores and returning the predictions for the test data for each of the trees. ``` # Create a parallel socket cluster cl <- makeCluster(8) # use 8 workers registerDoParallel(cl) # register the parallel backend # Fit trees in parallel and compute predictions on the test set predictions <- foreach( icount(160), .packages = "rpart", .combine = cbind ) %dopar% { # bootstrap copy of training data index <- sample(nrow(ames_train), replace = TRUE) ames_train_boot <- ames_train[index, ] # fit tree to bootstrap copy bagged_tree <- rpart( Sale_Price ~ ., control = rpart.control(minsplit = 2, cp = 0), data = ames_train_boot ) predict(bagged_tree, newdata = ames_test) } predictions[1:5, 1:7] ## result.1 result.2 result.3 result.4 result.5 result.6 result.7 ## 1 176500 187000 179900 187500 187500 187500 187500 ## 2 180000 254000 251000 240000 180000 180000 221000 ## 3 175000 174000 192000 192000 185000 178900 163990 ## 4 197900 157000 217500 215000 180000 210000 218500 ## 5 120000 129000 130000 143000 136500 153600 148500 ``` We can then do some data wrangling to compute and plot the RMSE as additional trees are added. Our results, illustrated in Figure [10\.3](bagging.html#fig:plotting-parallel-bag), closely resemble the results obtained in Figure [10\.2](bagging.html#fig:n-bags-plot). This also illustrates how the OOB error closely approximates the test error. ``` predictions %>% as.data.frame() %>% mutate( observation = 1:n(), actual = ames_test$Sale_Price) %>% tidyr::gather(tree, predicted, -c(observation, actual)) %>% group_by(observation) %>% mutate(tree = stringr::str_extract(tree, '\\d+') %>% as.numeric()) %>% ungroup() %>% arrange(observation, tree) %>% group_by(observation) %>% mutate(avg_prediction = cummean(predicted)) %>% group_by(tree) %>% summarize(RMSE = RMSE(avg_prediction, actual)) %>% ggplot(aes(tree, RMSE)) + geom_line() + xlab('Number of trees') ``` Figure 10\.3: Error curve for custom parallel bagging of 1\-160 deep, unpruned decision trees. ``` # Shutdown parallel cluster stopCluster(cl) ``` 10\.5 Feature interpretation ---------------------------- Unfortunately, due to the bagging process, models that are normally perceived as interpretable are no longer so. However, we can still make inferences about how features are influencing our model. Recall in Section [9\.6](DT.html#dt-vip) that we measure feature importance based on the sum of the reduction in the loss function (e.g., SSE) attributed to each variable at each split in a given tree. For bagged decision trees, this process is similar. For each tree, we compute the sum of the reduction of the loss function across all splits. We then aggregate this measure across all trees for each feature. The features with the largest average decrease in SSE (for regression) are considered most important. Unfortunately, the **ipred** package does not capture the required information for computing variable importance but the **caret** package does. In the code chunk below, we use **vip** to construct a variable importance plot (VIP) of the top 40 features in the `ames_bag2` model. With a single decision tree, we saw that many non\-informative features were not used in the tree. 
However, with bagging, since we use many trees built on bootstrapped samples, we are likely to see many more features used for splits. Consequently, we tend to have many more features involved but with lower levels of importance. ``` vip::vip(ames_bag2, num_features = 40, bar = FALSE) ``` Figure 10\.4: Variable importance for 200 bagged trees for the Ames Housing data. Understanding the relationship between a feature and predicted response for bagged models follows the same procedure we’ve seen in previous chapters. PDPs tell us visually how each feature influences the predicted output, on average. Although the averaging effect of bagging diminishes the ability to interpret the final ensemble, PDPs and other interpretability methods (Chapter [16](iml.html#iml)) help us to interpret any “black box” model. Figure [10\.5](bagging.html#fig:bag-pdp) highlights the unique, and sometimes non\-linear, non\-monotonic relationships that may exist between a feature and response. ``` # Construct partial dependence plots p1 <- pdp::partial( ames_bag2, pred.var = "Lot_Area", grid.resolution = 20 ) %>% autoplot() p2 <- pdp::partial( ames_bag2, pred.var = "Lot_Frontage", grid.resolution = 20 ) %>% autoplot() gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 10\.5: Partial dependence plots to understand the relationship between sales price and the lot area and frontage size features. 10\.6 Final thoughts -------------------- Bagging improves the prediction accuracy for high variance (and low bias) models at the expense of interpretability and computational speed. However, using various interpretability algorithms such as VIPs and PDPs, we can still make inferences about how our bagged model leverages feature information. Also, since bagging consists of independent processes, the algorithm is easily parallelizable. However, when bagging trees, a problem still exists. Although the model building steps are independent, the trees in bagging are not completely independent of each other since all the original features are considered at every split of every tree. Rather, trees from different bootstrap samples typically have similar structure to each other (especially at the top of the tree) due to underlying strong relationships. For example, if we create six decision trees with different bootstrapped samples of the Boston housing data (Harrison Jr and Rubinfeld [1978](#ref-harrison1978hedonic)), we see a similar structure at the top of the trees. Although there are 15 predictor variables to split on, all six trees have both `lstat` and `rm` variables driving the first few splits. We use the Boston housing data in this example because it has fewer features and shorter names than the Ames housing data. Consequently, it is easier to compare multiple trees side\-by\-side; however, the same tree correlation problem exists in the Ames bagged model, and a short sketch at the end of this chapter illustrates one way to check this on the Ames data. Figure 10\.6: Six decision trees based on different bootstrap samples. This characteristic is known as *tree correlation* and prevents bagging from further reducing the variance of the base learner. In the next chapter, we discuss how *random forests* extend and improve upon bagged decision trees by reducing this correlation and thereby improving the accuracy of the overall ensemble.
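Here is the minimal sketch referenced above, checking tree correlation on the Ames data: it grows six trees on different bootstrap samples and tabulates which variable each tree chooses for its root split. The number of trees mirrors the Boston example and the object names are purely illustrative.

```
# Trees grown on different bootstrap samples tend to pick the same root split
set.seed(123)
root_splits <- sapply(1:6, function(i) {
  idx  <- sample(nrow(ames_train), replace = TRUE)
  tree <- rpart(Sale_Price ~ ., data = ames_train[idx, ], method = "anova")
  as.character(tree$frame$var[1])   # splitting variable at the root node
})
table(root_splits)
```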
Chapter 11 Random Forests ========================= *Random forests* are a modification of bagged decision trees that build a large collection of *de\-correlated* trees to further improve predictive performance. They have become a very popular “out\-of\-the\-box” or “off\-the\-shelf” learning algorithm that enjoys good predictive performance with relatively little hyperparameter tuning. Many modern implementations of random forests exist; however, Leo Breiman’s algorithm (Breiman [2001](#ref-breiman2001random)) has largely become the authoritative procedure. This chapter will cover the fundamentals of random forests. 11\.1 Prerequisites ------------------- This chapter leverages the following packages. Some of these packages play a supporting role; however, the emphasis is on how to implement random forests with the **ranger** (Wright and Ziegler [2017](#ref-JSSv077i01)) and **h2o** packages. ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome graphics # Modeling packages library(ranger) # a c++ implementation of random forest library(h2o) # a java-based implementation of random forest ``` We’ll continue working with the `ames_train` data set created in Section [2\.7](process.html#put-process-together) to illustrate the main concepts. 11\.2 Extending bagging ----------------------- Random forests are built using the same fundamental principles as decision trees (Chapter [9](DT.html#DT)) and bagging (Chapter [10](bagging.html#bagging)). Bagging trees introduces a random component into the tree building process by building many trees on bootstrapped copies of the training data. Bagging then aggregates the predictions across all the trees; this aggregation reduces the variance of the overall procedure and results in improved predictive performance. However, as we saw in Section [10\.6](bagging.html#bagging-thoughts), simply bagging trees results in tree correlation that limits the effect of variance reduction. Random forests help to reduce tree correlation by injecting more randomness into the tree\-growing process.[29](#fn29) More specifically, while growing a decision tree during the bagging process, random forests perform *split\-variable randomization* where each time a split is to be performed, the search for the split variable is limited to a random subset of \\(m\_{try}\\) of the original \\(p\\) features. Typical default values are \\(m\_{try} \= \\frac{p}{3}\\) (regression) and \\(m\_{try} \= \\sqrt{p}\\) (classification) but this should be considered a tuning parameter. The basic algorithm for a regression or classification random forest can be generalized as follows: ``` 1. Given a training data set 2. Select number of trees to build (n_trees) 3. for i = 1 to n_trees do 4. | Generate a bootstrap sample of the original data 5. | Grow a regression/classification tree to the bootstrapped data 6. | for each split do 7. | | Select m_try variables at random from all p variables 8. | | Pick the best variable/split-point among the m_try 9. | | Split the node into two child nodes 10. | end 11. | Use typical tree model stopping criteria to determine when a | tree is complete (but do not prune) 12. end 13. Output ensemble of trees ``` When \\(m\_{try} \= p\\), the algorithm is equivalent to *bagging* decision trees. 
Since the algorithm randomly selects a bootstrap sample to train on ***and*** a random sample of features to use at each split, a more diverse set of trees is produced which tends to lessen tree correlation beyond bagged trees and often dramatically increases predictive power. 11\.3 Out\-of\-the\-box performance ----------------------------------- Random forests have become popular because they tend to provide very good out\-of\-the\-box performance. Although they have several hyperparameters that can be tuned, the default values tend to produce good results. Moreover, Probst, Bischl, and Boulesteix ([2018](#ref-probst2018tunability)) illustrated that among the more popular machine learning algorithms, random forests have the least variability in their prediction accuracy when tuning. For example, if we train a random forest model[30](#fn30) with all hyperparameters set to their default values, we get an OOB RMSE that is better than any model we’ve run thus far (without any tuning). By default, **ranger** sets the `mtry` parameter to \\(\\text{floor}\\big(\\sqrt{\\texttt{number of features}}\\big)\\); however, for regression problems the preferred `mtry` to start with is \\(\\text{floor}\\big(\\frac{\\texttt{number of features}}{3}\\big)\\). We also set `respect.unordered.factors = "order"`. This specifies how to treat unordered factor variables and we recommend setting this to “order” (see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) Section 9\.2\.4 for details). ``` # number of features n_features <- length(setdiff(names(ames_train), "Sale_Price")) # train a default random forest model ames_rf1 <- ranger( Sale_Price ~ ., data = ames_train, mtry = floor(n_features / 3), respect.unordered.factors = "order", seed = 123 ) # get OOB RMSE (default_rmse <- sqrt(ames_rf1$prediction.error)) ## [1] 24859.27 ``` 11\.4 Hyperparameters --------------------- Although random forests perform well out\-of\-the\-box, there are several tunable hyperparameters that we should consider when training a model. Although we briefly discuss the main hyperparameters, Probst, Wright, and Boulesteix ([2019](#ref-probst2019hyperparameters)) provide a much more thorough discussion. The main hyperparameters to consider include: 1. The number of trees in the forest 2. The number of features to consider at any given split: \\(m\_{try}\\) 3. The complexity of each tree 4. The sampling scheme 5. The splitting rule to use during tree construction. Hyperparameters (1\) and (2\) typically have the largest impact on predictive accuracy and should always be tuned. (3\) and (4\) tend to have marginal impact on predictive accuracy but are still worth exploring. They also have the ability to influence computational efficiency. (5\) tends to have the smallest impact on predictive accuracy and is used primarily to increase computational efficiency. ### 11\.4\.1 Number of trees The first consideration is the number of trees within your random forest. Although not technically a hyperparameter, the number of trees needs to be sufficiently large to stabilize the error rate. A good rule of thumb is to start with 10 times the number of features as illustrated in Figure [11\.1](random-forest.html#fig:tuning-trees); however, as you adjust other hyperparameters such as \\(m\_{try}\\) and node size, more or fewer trees may be required. More trees provide more robust and stable error estimates and variable importance measures; however, the impact on computation time increases linearly with the number of trees.
Start with \\(p \\times 10\\) trees and adjust as necessary. Figure 11\.1: The Ames data has 80 features and starting with 10 times the number of features typically ensures the error estimate converges. ### 11\.4\.2 \\(m\_{try}\\) The hyperparameter that controls the split\-variable randomization feature of random forests is often referred to as \\(m\_{try}\\) and it helps to balance low tree correlation with reasonable predictive strength. With regression problems, the default value is often \\(m\_{try} \= \\frac{p}{3}\\) and for classification \\(m\_{try} \= \\sqrt{p}\\). However, when there are fewer relevant predictors (e.g., noisy data), a higher value of \\(m\_{try}\\) tends to perform better because it makes it more likely to select those features with the strongest signal. When there are many relevant predictors, a lower \\(m\_{try}\\) might perform better. Start with five evenly spaced values of \\(m\_{try}\\) across the range 2–\\(p\\) centered at the recommended default as illustrated in Figure 11\.2\. Figure 11\.2: For the Ames data, an mtry value slightly lower (21\) than the default (26\) improves performance. ### 11\.4\.3 Tree complexity Random forests are built on individual decision trees; consequently, most random forest implementations have one or more hyperparameters that allow us to control the depth and complexity of the individual trees. This will often include hyperparameters such as node size, max depth, max number of terminal nodes, or the required node size to allow additional splits. Node size is probably the most common hyperparameter to control tree complexity and most implementations use the default values of one for classification and five for regression as these values tend to produce good results (Díaz\-Uriarte and De Andres [2006](#ref-diaz2006gene); Goldstein, Polley, and Briggs [2011](#ref-goldstein2011random)). However, Segal ([2004](#ref-segal2004machine)) showed that if your data has many noisy predictors and higher \\(m\_{try}\\) values are performing best, then performance may improve by increasing node size (i.e., decreasing tree depth and complexity). Moreover, if computation time is a concern, then you can often decrease run time substantially by increasing the node size and have only a marginal impact on your error estimate as illustrated in Figure [11\.3](random-forest.html#fig:tuning-node-size). When adjusting node size, start with three values between 1–10 and adjust depending on the impact on accuracy and run time. Figure 11\.3: Increasing node size to reduce tree complexity will often have a larger impact on computation speed (right) than on your error estimate. ### 11\.4\.4 Sampling scheme The default sampling scheme for random forests is bootstrapping where 100% of the observations are sampled with replacement (in other words, each bootstrap copy has the same size as the original training data); however, we can adjust both the sample size and whether to sample with or without replacement. The sample size parameter determines how many observations are drawn for the training of each tree. Decreasing the sample size leads to more diverse trees and thereby lower between\-tree correlation, which can have a positive effect on the prediction accuracy. Consequently, if there are a few dominating features in your data set, reducing the sample size can also help to minimize between\-tree correlation.
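For reference, a minimal sketch of how these sampling-scheme settings map onto `ranger()` arguments; the specific values are illustrative only, `n_features` is the object defined earlier in this chapter, and the grid search in Section 11.5 tunes these settings systematically.

```
# Adjust the per-tree sample size and replacement behavior in ranger()
ames_rf_sample <- ranger(
  Sale_Price ~ ., data = ames_train,
  mtry = floor(n_features / 3),
  replace = FALSE,          # sample observations without replacement
  sample.fraction = 0.63,   # each tree is trained on 63% of the training data
  respect.unordered.factors = "order",
  seed = 123
)
sqrt(ames_rf_sample$prediction.error)  # OOB RMSE
```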
Also, when you have many categorical features with a varying number of levels, sampling with replacement can lead to biased variable split selection (Janitza, Binder, and Boulesteix [2016](#ref-janitza2016pitfalls); Strobl et al. [2007](#ref-strobl2007bias)). Consequently, if you have categories that are not balanced, sampling without replacement provides a less biased use of all levels across the trees in the random forest.

Assess 3–4 values of sample sizes ranging from 25%–100% and if you have unbalanced categorical features try sampling without replacement.

Figure 11\.4: The Ames data has several imbalanced categorical features such as neighborhood, zoning, overall quality, and more. Consequently, sampling without replacement appears to improve performance as it leads to less biased split variable selection and more uncorrelated trees.

### 11\.4\.5 Split rule

Recall that the default splitting rule during random forest tree building consists of selecting, out of all splits of the (randomly selected \\(m\_{try}\\)) candidate variables, the split that minimizes the Gini impurity (in the case of classification) or the SSE (in the case of regression). However, Strobl et al. ([2007](#ref-strobl2007bias)) illustrated that these default splitting rules favor the selection of features with many possible splits (e.g., continuous variables or categorical variables with many categories) over variables with fewer splits (the extreme case being binary variables, which have only one possible split). *Conditional inference trees* (Hothorn, Hornik, and Zeileis [2006](#ref-hothorn2006unbiased)) implement an alternative splitting mechanism that helps to reduce this variable selection bias.[31](#fn31) However, ensembling conditional inference trees has yet to be proven superior with regards to predictive accuracy and they take a lot longer to train.

To increase computational efficiency, splitting rules can be randomized so that only a random subset of possible splitting values is considered for a variable (Geurts, Ernst, and Wehenkel [2006](#ref-geurts2006extremely)). If only a single, randomly selected splitting value is considered, we call this procedure *extremely randomized trees*. Due to the added randomness of split points, this method tends to have no improvement, or often a negative impact, on predictive accuracy. Regarding runtime, extremely randomized trees are the fastest as the cutpoints are drawn completely randomly, followed by the classical random forest, while for conditional inference forests the runtime is the largest (Probst, Wright, and Boulesteix [2019](#ref-probst2019hyperparameters)).

If you need to significantly reduce run time, try extremely randomized trees; however, be sure to compare predictive accuracy against the traditional split rules, as this approach often has a negative impact on your loss function.

11\.5 Tuning strategies
-----------------------

As we introduce more complex algorithms with a greater number of hyperparameters, we should become more strategic with our tuning strategies. One way to become more strategic is to consider how we proceed through our grid search. Up to this point, all our grid searches have been *full Cartesian grid searches* where we assess every combination of hyperparameters of interest. We could continue to do the same; for example, the next code block searches across 120 combinations of hyperparameter settings. This grid search takes approximately 2 minutes.
``` # create hyperparameter grid hyper_grid <- expand.grid( mtry = floor(n_features * c(.05, .15, .25, .333, .4)), min.node.size = c(1, 3, 5, 10), replace = c(TRUE, FALSE), sample.fraction = c(.5, .63, .8), rmse = NA ) # execute full cartesian grid search for(i in seq_len(nrow(hyper_grid))) { # fit model for ith hyperparameter combination fit <- ranger( formula = Sale_Price ~ ., data = ames_train, num.trees = n_features * 10, mtry = hyper_grid$mtry[i], min.node.size = hyper_grid$min.node.size[i], replace = hyper_grid$replace[i], sample.fraction = hyper_grid$sample.fraction[i], verbose = FALSE, seed = 123, respect.unordered.factors = 'order', ) # export OOB error hyper_grid$rmse[i] <- sqrt(fit$prediction.error) } # assess top 10 models hyper_grid %>% arrange(rmse) %>% mutate(perc_gain = (default_rmse - rmse) / default_rmse * 100) %>% head(10) ## mtry min.node.size replace sample.fraction rmse perc_gain ## 1 32 1 FALSE 0.8 23975.32 3.555819 ## 2 32 3 FALSE 0.8 24022.97 3.364127 ## 3 32 5 FALSE 0.8 24032.69 3.325041 ## 4 26 3 FALSE 0.8 24103.53 3.040058 ## 5 20 1 FALSE 0.8 24132.35 2.924142 ## 6 26 5 FALSE 0.8 24144.38 2.875752 ## 7 20 3 FALSE 0.8 24194.64 2.673560 ## 8 26 1 FALSE 0.8 24216.02 2.587589 ## 9 32 10 FALSE 0.8 24224.18 2.554755 ## 10 20 5 FALSE 0.8 24249.46 2.453056 ``` If we look at the results we see that the top 10 models are all near or below an RMSE of 24000 (a 2\.5%–3\.5% improvement over our baseline model). In these results, the default `mtry` value of \\(\\left \\lfloor{\\frac{\\texttt{\# features}}{3}}\\right \\rfloor \= 26\\) is nearly sufficient and smaller node sizes (deeper trees) perform best. What stands out the most is that taking less than 100% sample rate and sampling without replacement consistently performs best. Sampling less than 100% adds additional randomness in the procedure, which helps to further de\-correlate the trees. Sampling without replacement likely improves performance because this data has a lot of high cardinality categorical features that are imbalanced. However, as we add more hyperparameters and values to search across and as our data sets become larger, you can see how a full Cartesian search can become exhaustive and computationally expensive. In addition to full Cartesian search, the **h2o** package provides a *random grid search* that allows you to jump from one random combination to another and it also provides *early stopping* rules that allow you to stop the grid search once a certain condition is met (e.g., a certain number of models have been trained, a certain runtime has elapsed, or the accuracy has stopped improving by a certain amount). Although using a random discrete search path will likely not find the optimal model, it typically does a good job of finding a very good model. To fit a random forest model with **h2o**, we first need to initiate our **h2o** session. ``` h2o.no_progress() h2o.init(max_mem_size = "5g") ``` Next, we need to convert our training and test data sets to objects that **h2o** can work with. ``` # convert training data to h2o object train_h2o <- as.h2o(ames_train) # set the response column to Sale_Price response <- "Sale_Price" # set the predictor names predictors <- setdiff(colnames(ames_train), response) ``` The following fits a default random forest model with **h2o** to illustrate that our baseline results (\\(\\text{OOB RMSE} \= 24439\\)) are very similar to the baseline **ranger** model we fit earlier. 
```
h2o_rf1 <- h2o.randomForest(
  x = predictors, 
  y = response,
  training_frame = train_h2o, 
  ntrees = n_features * 10,
  seed = 123
)

h2o_rf1
## Model Details:
## ==============
## 
## H2ORegressionModel: drf
## Model ID: DRF_model_R_1554292876245_2
## Model Summary:
##   number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves max_leaves mean_leaves
## 1             800                      800            12365675        19        20   19.99875       1148       1283  1225.04630
## 
## 
## H2ORegressionMetrics: drf
## ** Reported on training data. **
## ** Metrics reported on Out-Of-Bag training samples **
## 
## MSE: 597254712
## RMSE: 24438.8
## MAE: 14833.34
## RMSLE: 0.1396219
## Mean Residual Deviance : 597254712
```

To execute a grid search in **h2o** we need our hyperparameter grid to be a list. For example, the following code searches a larger grid space than before with a total of 240 hyperparameter combinations. We then create a random grid search strategy that will stop if none of the last 10 models has achieved at least a 0\.1% improvement in MSE compared to the best model before that. If we continue to find improvements then we cut the grid search off after 300 seconds (5 minutes).

```
# hyperparameter grid
hyper_grid <- list(
  mtries = floor(n_features * c(.05, .15, .25, .333, .4)),
  min_rows = c(1, 3, 5, 10),
  max_depth = c(10, 20, 30),
  sample_rate = c(.55, .632, .70, .80)
)

# random grid search strategy
search_criteria <- list(
  strategy = "RandomDiscrete",
  stopping_metric = "mse",
  stopping_tolerance = 0.001,   # stop if improvement is < 0.1%
  stopping_rounds = 10,         # over the last 10 models
  max_runtime_secs = 60*5       # or stop search after 5 min.
)
```

We can then perform the grid search with `h2o.grid()`. The following executes the grid search with early stopping turned on. The early stopping we specify below in `h2o.grid()` will stop growing an individual random forest model if we have not experienced at least a 0\.5% improvement in the overall OOB error over the last 10 trees. This is very useful as we can specify to build 1000 trees for each random forest model but **h2o** may only build 200 trees if we don’t experience any improvement. This grid search takes **5** minutes.

```
# perform grid search
random_grid <- h2o.grid(
  algorithm = "randomForest",
  grid_id = "rf_random_grid",
  x = predictors, 
  y = response, 
  training_frame = train_h2o,
  hyper_params = hyper_grid,
  ntrees = n_features * 10,
  seed = 123,
  stopping_metric = "RMSE",
  stopping_rounds = 10,         # stop a model if the last 10 trees added
  stopping_tolerance = 0.005,   # did not improve RMSE by at least 0.5%
  search_criteria = search_criteria
)
```

Our grid search assessed **129** models before stopping due to time. The best model (`max_depth = 30`, `min_rows = 1`, `mtries = 20`, and `sample_rate = 0.8`) achieved an OOB RMSE of 23932\. So although our random search assessed only 129 of the 240 possible models (roughly half of what a full grid search would evaluate), the more efficient random search found a near\-optimal model within the specified time constraint. In fact, we re\-ran the same grid search but allowed for a full search across all 240 hyperparameter combinations and the best model achieved an OOB RMSE of 23785\.
```
# collect the results and sort by our model performance metric 
# of choice
random_grid_perf <- h2o.getGrid(
  grid_id = "rf_random_grid", 
  sort_by = "mse", 
  decreasing = FALSE
)
random_grid_perf
## H2O Grid Details
## ================
## 
## Grid ID: rf_random_grid
## Used hyper parameters:
##   - max_depth
##   - min_rows
##   - mtries
##   - sample_rate
## Number of models: 129
## Number of failed models: 0
## 
## Hyper-Parameter Search Summary: ordered by increasing mse
##   max_depth min_rows mtries sample_rate                model_ids                 mse
## 1        30      1.0     20         0.8 rf_random_grid_model_113 5.727214331253618E8
## 2        20      1.0     20         0.8  rf_random_grid_model_39 5.727741137204964E8
## 3        20      1.0     32         0.7   rf_random_grid_model_8  5.76799145123527E8
## 4        30      1.0     26         0.7  rf_random_grid_model_67 5.815643260591004E8
## 5        30      1.0     12         0.8  rf_random_grid_model_64 5.951710701891141E8
## 
## ---
##     max_depth min_rows mtries sample_rate               model_ids                  mse
## 124        10     10.0      4         0.7 rf_random_grid_model_44 1.0367731339073703E9
## 125        20     10.0      4         0.8 rf_random_grid_model_73 1.0451421787520385E9
## 126        20      5.0      4        0.55 rf_random_grid_model_12 1.0710840266353173E9
## 127        10      5.0      4        0.55 rf_random_grid_model_75 1.0793293549247448E9
## 128        10     10.0      4       0.632 rf_random_grid_model_37 1.0804801985871077E9
## 129        20     10.0      4        0.55 rf_random_grid_model_22 1.1525799087784908E9
```

11\.6 Feature interpretation
----------------------------

Computing feature importance and feature effects for random forests follows the same procedure as discussed in Section [10\.5](bagging.html#bagging-vip). However, in addition to the impurity\-based measure of feature importance, where we base feature importance on the average total reduction of the loss function for a given feature across all trees, random forests also typically include a *permutation\-based* importance measure. In the permutation\-based approach, for each tree, the OOB sample is passed down the tree and the prediction accuracy is recorded. Then the values for each variable (one at a time) are randomly permuted and the accuracy is again computed. The decrease in accuracy as a result of this random shuffling of feature values is averaged over all the trees for each predictor. The variables with the largest average decrease in accuracy are considered most important. For example, we can compute both measures of feature importance with **ranger** by setting the `importance` argument.

For **ranger**, once you’ve identified the optimal parameter values from the grid search, you will want to re\-run your model with these hyperparameter values. You can also crank up the number of trees, which will help create more stable values of variable importance.

```
# re-run model with impurity-based variable importance
rf_impurity <- ranger(
  formula = Sale_Price ~ ., 
  data = ames_train, 
  num.trees = 2000,
  mtry = 32,
  min.node.size = 1,
  sample.fraction = .80,
  replace = FALSE,
  importance = "impurity",
  respect.unordered.factors = "order",
  verbose = FALSE,
  seed = 123
)

# re-run model with permutation-based variable importance
rf_permutation <- ranger(
  formula = Sale_Price ~ ., 
  data = ames_train, 
  num.trees = 2000,
  mtry = 32,
  min.node.size = 1,
  sample.fraction = .80,
  replace = FALSE,
  importance = "permutation",
  respect.unordered.factors = "order",
  verbose = FALSE,
  seed = 123
)
```

The resulting VIPs are displayed in Figure [11\.5](random-forest.html#fig:feature-importance-plot). Typically, you will not see the same variable importance order between the two options; however, you will often see similar variables at the top of the plots (and also the bottom).
Consequently, in this example, we can comfortably state that there appears to be enough evidence to suggest that three variables stand out as most influential:

* `Overall_Qual`
* `Gr_Liv_Area`
* `Neighborhood`

Looking at the next \~10 variables in both plots, you will also see some commonality in influential variables (e.g., `Garage_Cars`, `Exter_Qual`, `Bsmt_Qual`, and `Year_Built`).

```
p1 <- vip::vip(rf_impurity, num_features = 25, bar = FALSE)
p2 <- vip::vip(rf_permutation, num_features = 25, bar = FALSE)

gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 11\.5: Top 25 most important variables based on impurity (left) and permutation (right).

11\.7 Final thoughts
--------------------

Random forests provide a very powerful out\-of\-the\-box algorithm that often has great predictive accuracy. They come with all the benefits of decision trees (with the exception of surrogate splits) and bagging but greatly reduce instability and between\-tree correlation. And due to the added split variable selection attribute, random forests are also faster than bagging as they have a smaller feature search space at each tree split. However, random forests will still suffer from slow computational speed as your data sets get larger but, similar to bagging, the algorithm is built upon independent steps, and most modern implementations (e.g., **ranger**, **h2o**) allow for parallelization to improve training time.
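As a small illustration of that last point, **ranger** exposes a `num.threads` argument (it defaults to the number of available CPUs in recent versions), so parallelizing training is typically a one\-line change; the model settings below are arbitrary and shown only to make the argument visible.

```
# train across all available cores (num.threads shown explicitly for clarity)
ames_rf_parallel <- ranger(
  Sale_Price ~ .,
  data = ames_train,
  num.trees = 2000,
  respect.unordered.factors = "order",
  num.threads = parallel::detectCores(),
  seed = 123
)
```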
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/gbm.html
Chapter 12 Gradient Boosting
============================

Gradient boosting machines (GBMs) are an extremely popular machine learning algorithm that has proven successful across many domains and is one of the leading methods for winning Kaggle competitions. Whereas random forests (Chapter [11](random-forest.html#random-forest)) build an ensemble of deep independent trees, GBMs build an ensemble of shallow trees in sequence with each tree learning and improving on the previous one. Although shallow trees by themselves are rather weak predictive models, they can be “boosted” to produce a powerful “committee” that, when appropriately tuned, is often hard to beat with other algorithms. This chapter covers the fundamentals needed to understand and work with some popular implementations of GBMs.

12\.1 Prerequisites
-------------------

This chapter leverages the following packages. Some of these packages play a supporting role; however, our focus is on demonstrating how to implement GBMs with the **gbm** (B Greenwell et al. [2018](#ref-gbm-pkg)), **xgboost** (Chen et al. [2018](#ref-xgboost-pkg)), and **h2o** packages.

```
# Helper packages
library(dplyr)    # for general data wrangling needs

# Modeling packages
library(gbm)      # for original implementation of regular and stochastic GBMs
library(h2o)      # for a java-based implementation of GBM variants
library(xgboost)  # for fitting extreme gradient boosting
```

We’ll continue working with the `ames_train` data set created in Section [2\.7](process.html#put-process-together) to illustrate the main concepts. We’ll also demonstrate **h2o** functionality using the same setup from Section [11\.5](random-forest.html#rf-tuning-strategy).

```
h2o.init(max_mem_size = "10g")

train_h2o <- as.h2o(ames_train)
response <- "Sale_Price"
predictors <- setdiff(colnames(ames_train), response)
```

12\.2 How boosting works
------------------------

Several supervised machine learning algorithms are based on a single predictive model, for example: ordinary linear regression, penalized regression models, single decision trees, and support vector machines. Bagging and random forests, on the other hand, work by combining multiple models together into an overall ensemble. New predictions are made by combining the predictions from the individual base models that make up the ensemble (e.g., by averaging in regression). Since averaging reduces variance, bagging (and hence, random forests) are most effectively applied to models with low bias and high variance (e.g., an overgrown decision tree). While boosting is a general algorithm for building an ensemble out of simpler models (typically decision trees), it is more effectively applied to models with high bias and low variance! Although boosting, like bagging, can be applied to any type of model, it is often most effectively applied to decision trees (which we’ll assume from this point on).

### 12\.2\.1 A sequential ensemble approach

The main idea of boosting is to add new models to the ensemble ***sequentially***. In essence, boosting attacks the bias\-variance tradeoff by starting with a *weak* model (e.g., a decision tree with only a few splits) and sequentially *boosting* its performance by continuing to build new trees, where each new tree in the sequence tries to fix up where the previous one made the biggest mistakes (i.e., each new tree in the sequence will focus on the training rows where the previous tree had the largest prediction errors); see Figure [12\.1](gbm.html#fig:sequential-fig).
Figure 12\.1: Sequential ensemble approach.

Let’s discuss the important components of boosting in closer detail.

**The base learners**: Boosting is a framework that iteratively improves *any* weak learning model. Many gradient boosting applications allow you to “plug in” various classes of weak learners at your disposal. In practice however, boosted algorithms almost always use decision trees as the base\-learner. Consequently, this chapter will discuss boosting in the context of decision trees.

**Training weak models**: A weak model is one whose error rate is only slightly better than random guessing. The idea behind boosting is that each model in the sequence slightly improves upon the performance of the previous one (essentially, by focusing on the rows of the training data where the previous tree had the largest errors or residuals). With regards to decision trees, shallow trees (i.e., trees with relatively few splits) represent a weak learner. In boosting, trees with 1–6 splits are most common.

**Sequential training with respect to errors**: Boosted trees are grown sequentially; each tree is grown using information from previously grown trees to improve performance. This is illustrated in the following algorithm for boosting regression trees. By fitting each tree in the sequence to the previous tree’s residuals, we’re allowing each new tree in the sequence to focus on the previous tree’s mistakes:

1. Fit a decision tree to the data: \\(F\_1\\left(x\\right) \= y\\),
2. We then fit the next decision tree to the residuals of the previous: \\(h\_1\\left(x\\right) \= y \- F\_1\\left(x\\right)\\),
3. Add this new tree to our algorithm: \\(F\_2\\left(x\\right) \= F\_1\\left(x\\right) \+ h\_1\\left(x\\right)\\),
4. Fit the next decision tree to the residuals of \\(F\_2\\): \\(h\_2\\left(x\\right) \= y \- F\_2\\left(x\\right)\\),
5. Add this new tree to our algorithm: \\(F\_3\\left(x\\right) \= F\_2\\left(x\\right) \+ h\_2\\left(x\\right)\\),
6. Continue this process until some mechanism (e.g., cross validation) tells us to stop.

The final model here is a stagewise additive model of *B* individual trees:

\\\[ f\\left(x\\right) \= \\sum^B\_{b\=1}f^b\\left(x\\right) \\tag{1} \\]

Figure [12\.2](gbm.html#fig:boosting-in-action) illustrates this with a simple example where a single predictor (\\(x\\)) has a true underlying sine wave relationship (blue line) with *y* along with some irreducible error. The first tree fit in the series is a single decision stump (i.e., a tree with a single split). Each successive decision stump thereafter is fit to the previous one’s residuals. Initially there are large errors, but each additional decision stump in the sequence makes a small improvement in different areas across the feature space where errors still remain.

Figure 12\.2: Boosted regression decision stumps as 0\-1024 successive trees are added.

### 12\.2\.2 Gradient descent

Many algorithms in regression, including decision trees, focus on minimizing some function of the residuals; most typically the SSE loss function, or equivalently, the MSE or RMSE (this is accomplished through simple calculus and is the approach taken with least squares). The boosting algorithm for regression discussed in the previous section outlines the approach of sequentially fitting regression trees to the residuals from the previous tree. This specific approach is how gradient boosting minimizes the SSE loss function (for SSE loss, the gradient is nothing more than the residual error).
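To make the residual\-fitting idea concrete, here is a toy sketch (not the **gbm** package’s implementation) that boosts **rpart** decision stumps on simulated sine\-wave data similar to Figure 12\.2\. The sample size, noise level, number of stumps, and the small learning rate (i.e., shrinkage, discussed later in this chapter) are all arbitrary illustrative choices.

```
# toy stagewise boosting with decision stumps: each stump is fit to the
# residuals of the current ensemble (for SSE loss, the negative gradient)
library(rpart)

set.seed(123)
x <- runif(500, 0, 2 * pi)
y <- sin(x) + rnorm(500, sd = 0.3)   # sine wave plus irreducible error
dat <- data.frame(x = x, y = y)

n_trees <- 100
learning_rate <- 0.1
pred <- rep(mean(y), nrow(dat))      # F_0: start from the mean

for (b in seq_len(n_trees)) {
  dat$resid <- y - pred                                          # current residuals
  stump <- rpart(resid ~ x, data = dat,
                 control = rpart.control(maxdepth = 1, cp = 0))  # a single split
  pred <- pred + learning_rate * predict(stump, dat)             # shrink and add
}

sqrt(mean((y - pred)^2))             # training RMSE of the boosted ensemble
```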
However, we often wish to focus on other loss functions such as mean absolute error (MAE)—which is less sensitive to outliers—or to be able to apply the method to a classification problem with a loss function such as deviance, or log loss. The name ***gradient*** boosting machine comes from the fact that this procedure can be generalized to loss functions other than SSE.

Gradient boosting is considered a ***gradient descent*** algorithm. Gradient descent is a very generic optimization algorithm capable of finding optimal solutions to a wide range of problems. The general idea of gradient descent is to tweak parameter(s) iteratively in order to minimize a cost function. Suppose you are a downhill skier racing your friend. A good strategy to beat your friend to the bottom is to take the path with the steepest slope. This is exactly what gradient descent does—it measures the local gradient of the loss (cost) function for a given set of parameters (\\(\\Theta\\)) and takes steps in the direction of the descending gradient. As Figure [12\.3](gbm.html#fig:gradient-descent-fig)[32](#fn32) illustrates, once the gradient is zero, we have reached a minimum.

Figure 12\.3: Gradient descent is the process of gradually decreasing the cost function (i.e. MSE) by tweaking parameter(s) iteratively until you have reached a minimum.

Gradient descent can be performed on any loss function that is differentiable. Consequently, this allows GBMs to optimize different loss functions as desired (see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)), p. 360 for common loss functions). An important parameter in gradient descent is the size of the steps, which is controlled by the *learning rate*. If the learning rate is too small, then the algorithm will take many iterations (steps) to find the minimum. On the other hand, if the learning rate is too high, you might jump across the minimum and end up further away than when you started.

Figure 12\.4: A learning rate that is too small will require many iterations to find the minimum. A learning rate too big may jump over the minimum.

Moreover, not all cost functions are *convex* (i.e., bowl shaped). There may be local minima, plateaus, and other irregular terrain of the loss function that makes finding the global minimum difficult. ***Stochastic gradient descent*** can help us address this problem by sampling a fraction of the training observations (typically without replacement) and growing the next tree using that subsample. This makes the algorithm faster, but the stochastic nature of random sampling also adds some randomness when descending the loss function’s gradient. Although this randomness does not allow the algorithm to find the absolute global minimum, it can actually help the algorithm jump out of local minima and off plateaus to get sufficiently near the global minimum.

Figure 12\.5: Stochastic gradient descent will often find a near\-optimal solution by jumping out of local minima and off plateaus.

As we’ll see in the sections that follow, there are several hyperparameter tuning options available in stochastic gradient boosting (some control the gradient descent and others control the tree growing process). If properly tuned (e.g., with *k*\-fold CV) GBMs can lead to some of the most flexible and accurate predictive models you can build!
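A tiny, self\-contained illustration of the learning\-rate trade\-off shown in Figures 12\.3 and 12\.4 (this is plain gradient descent on an arbitrary quadratic loss, not part of any boosting package; the step sizes below are illustrative choices):

```
# gradient descent on f(theta) = (theta - 3)^2; the minimum is at theta = 3
grad_descent <- function(lr, theta = 0, iters = 25) {
  for (i in seq_len(iters)) {
    gradient <- 2 * (theta - 3)  # derivative of (theta - 3)^2
    theta <- theta - lr * gradient
  }
  theta
}

grad_descent(lr = 0.01)  # too small: still far from 3 after 25 steps
grad_descent(lr = 0.1)   # converges close to 3
grad_descent(lr = 1.1)   # too large: overshoots and diverges
```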
12\.3 Basic GBM
---------------

There are multiple variants of boosting algorithms, with the original focused on classification problems (Kuhn and Johnson [2013](#ref-apm)). Throughout the 1990s many approaches were developed, with the most successful being the AdaBoost algorithm (Freund and Schapire [1999](#ref-freund1999adaptive)). In 2000, Friedman related AdaBoost to important statistical concepts (e.g., loss functions and additive modeling), which allowed him to generalize the boosting framework to regression problems and multiple loss functions (J. H. Friedman [2001](#ref-friedman2001greedy)). This led to the typical GBM model that we think of today and that most modern implementations are built on.

### 12\.3\.1 Hyperparameters

A simple GBM model contains two categories of hyperparameters: *boosting hyperparameters* and *tree\-specific hyperparameters*. The two main boosting hyperparameters include:

* **Number of trees**: The total number of trees in the sequence or ensemble. The averaging of independently grown trees in bagging and random forests makes it very difficult to overfit with too many trees. However, GBMs function differently as each tree is grown in sequence to fix up the past tree’s mistakes. For example, in regression, GBMs will chase residuals as long as you allow them to. Also, depending on the values of the other hyperparameters, GBMs often require many trees (it is not uncommon to have many thousands of trees), but since they can easily overfit we must find the optimal number of trees that minimizes the loss function of interest with cross\-validation.
* **Learning rate**: Determines the contribution of each tree to the final outcome and controls how quickly the algorithm proceeds down the gradient descent (learns); see Figure [12\.3](gbm.html#fig:gradient-descent-fig). Values range from 0–1 with typical values between 0\.001–0\.3\. Smaller values make the model robust to the specific characteristics of each individual tree, thus allowing it to generalize well. Smaller values also make it easier to stop prior to overfitting; however, they increase the risk of not reaching the optimum with a fixed number of trees and are more computationally demanding. This hyperparameter is also called *shrinkage*. Generally, the smaller this value, the more accurate the model can be, but it will also require more trees in the sequence.

The two main tree hyperparameters in a simple GBM model include:

* **Tree depth**: Controls the depth of the individual trees. Typical values range from a depth of 3–8, but it is not uncommon to see a tree depth of 1 (J. Friedman, Hastie, and Tibshirani [2001](#ref-esl)). Smaller depth trees such as decision stumps are computationally efficient (but require more trees); however, higher depth trees allow the algorithm to capture unique interactions but also increase the risk of over\-fitting. Note that training data sets with larger \\(n\\) or \\(p\\) are more tolerant of deeper trees.
* **Minimum number of observations in terminal nodes**: Also controls the complexity of each tree. Since we tend to use shorter trees this rarely has a large impact on performance. Typical values range from 5–15, where higher values help prevent a model from learning relationships that might be highly specific to the particular sample selected for a tree (overfitting), while smaller values can help with imbalanced target classes in classification problems.

### 12\.3\.2 Implementation

There are many packages that implement GBMs and GBM variants. You can find a fairly comprehensive list at the CRAN Machine Learning Task View: [https://cran.r\-project.org/web/views/MachineLearning.html](https://cran.r-project.org/web/views/MachineLearning.html).
However, the most popular original R implementation of Friedman’s GBM algorithm (J. H. Friedman [2001](#ref-friedman2001greedy); Friedman [2002](#ref-friedman2002stochastic)) is the **gbm** package.

**gbm** has two training functions: `gbm::gbm()` and `gbm::gbm.fit()`. The primary difference is that `gbm::gbm()` uses the formula interface to specify your model whereas `gbm::gbm.fit()` requires the separated `x` and `y` matrices; `gbm::gbm.fit()` is more efficient and recommended for advanced users.

The default settings in **gbm** include a learning rate (`shrinkage`) of 0\.001\. This is a very small learning rate and typically requires a large number of trees to sufficiently minimize the loss function. However, **gbm** uses a default number of trees of 100, which is rarely sufficient. Consequently, we start with a learning rate of 0\.1 and increase the number of trees to train. The default depth of each tree (`interaction.depth`) is 1, which means we are ensembling a bunch of decision stumps (i.e., we are not able to capture any interaction effects). For the Ames housing data set, we increase the tree depth to 3 and use the default value for the minimum number of observations required in the trees’ terminal nodes (`n.minobsinnode`). Lastly, we set `cv.folds = 10` to perform a 10\-fold CV. This model takes a little over 2 minutes to run.

```
# run a basic GBM model
set.seed(123)  # for reproducibility
ames_gbm1 <- gbm(
  formula = Sale_Price ~ .,
  data = ames_train,
  distribution = "gaussian",  # SSE loss function
  n.trees = 5000,
  shrinkage = 0.1,
  interaction.depth = 3,
  n.minobsinnode = 10,
  cv.folds = 10
)

# find index for number trees with minimum CV error
best <- which.min(ames_gbm1$cv.error)

# get MSE and compute RMSE
sqrt(ames_gbm1$cv.error[best])
## [1] 23240.38
```

Our results show a cross\-validated RMSE of 23240, which was achieved with 1219 trees.

```
# plot error curve
gbm.perf(ames_gbm1, method = "cv")
```

Figure 12\.6: Training and cross\-validated MSE as n trees are added to the GBM algorithm.

```
## [1] 1219
```

### 12\.3\.3 General tuning strategy

Unlike random forests, GBMs can have high variability in accuracy dependent on their hyperparameter settings (Probst, Bischl, and Boulesteix [2018](#ref-probst2018tunability)). So tuning can require much more strategy than a random forest model. Often, a good approach is to:

1. Choose a relatively high learning rate. Generally the default value of 0\.1 works, but somewhere between 0\.05–0\.2 should work across a wide range of problems.
2. Determine the optimum number of trees for this learning rate.
3. Fix tree hyperparameters and tune learning rate and assess speed vs. performance.
4. Tune tree\-specific parameters for the decided learning rate.
5. Once tree\-specific parameters have been found, lower the learning rate to assess for any improvements in accuracy.
6. Use final hyperparameter settings and increase CV procedures to get more robust estimates.

Often, the above steps are performed with a simple validation procedure or 5\-fold CV due to computational constraints; if you used *k*\-fold CV throughout steps 1–5 then this last step is not necessary.

We already did (1\)–(2\) in the Ames example above with our first GBM model. Next, we’ll do (3\) and assess the performance of various learning rate values between 0\.005–0\.3\. Our results indicate that a learning rate of 0\.05 sufficiently minimizes our loss function and requires 2375 trees.
All our models take a little over 2 minutes to train, so we don’t see any significant impacts in training time based on the learning rate. The following grid search took us about 10 minutes.

```
# create grid search
hyper_grid <- expand.grid(
  learning_rate = c(0.3, 0.1, 0.05, 0.01, 0.005),
  RMSE = NA,
  trees = NA,
  time = NA
)

# execute grid search
for(i in seq_len(nrow(hyper_grid))) {

  # fit gbm
  set.seed(123)  # for reproducibility
  train_time <- system.time({
    m <- gbm(
      formula = Sale_Price ~ .,
      data = ames_train,
      distribution = "gaussian",
      n.trees = 5000,
      shrinkage = hyper_grid$learning_rate[i],
      interaction.depth = 3,
      n.minobsinnode = 10,
      cv.folds = 10
    )
  })

  # add RMSE, trees, and training time to results
  hyper_grid$RMSE[i]  <- sqrt(min(m$cv.error))
  hyper_grid$trees[i] <- which.min(m$cv.error)
  hyper_grid$time[i]  <- train_time[["elapsed"]]
}

# results
arrange(hyper_grid, RMSE)
##   learning_rate  RMSE trees  time
## 1         0.050 21382  2375 129.5
## 2         0.010 21828  4982 126.0
## 3         0.100 22252   874 137.6
## 4         0.005 23136  5000 136.8
## 5         0.300 24454   427 139.9
```

Next, we’ll set our learning rate at the optimal level (0\.05\) and tune the tree\-specific hyperparameters (`interaction.depth` and `n.minobsinnode`). Adjusting the tree\-specific parameters provides us with an additional reduction of about 600 in RMSE. This grid search takes about 30 minutes.

```
# search grid
hyper_grid <- expand.grid(
  n.trees = 4000,
  shrinkage = 0.05,
  interaction.depth = c(3, 5, 7),
  n.minobsinnode = c(5, 10, 15)
)

# create model fit function
model_fit <- function(n.trees, shrinkage, interaction.depth, n.minobsinnode) {
  set.seed(123)
  m <- gbm(
    formula = Sale_Price ~ .,
    data = ames_train,
    distribution = "gaussian",
    n.trees = n.trees,
    shrinkage = shrinkage,
    interaction.depth = interaction.depth,
    n.minobsinnode = n.minobsinnode,
    cv.folds = 10
  )
  # compute RMSE
  sqrt(min(m$cv.error))
}

# perform search grid with functional programming
hyper_grid$rmse <- purrr::pmap_dbl(
  hyper_grid,
  ~ model_fit(
    n.trees = ..1,
    shrinkage = ..2,
    interaction.depth = ..3,
    n.minobsinnode = ..4
  )
)

# results
arrange(hyper_grid, rmse)
##   n.trees shrinkage interaction.depth n.minobsinnode  rmse
## 1    4000      0.05                 5              5 20699
## 2    4000      0.05                 3              5 20723
## 3    4000      0.05                 7              5 21021
## 4    4000      0.05                 3             10 21382
## 5    4000      0.05                 5             10 21915
## 6    4000      0.05                 5             15 21924
## 7    4000      0.05                 3             15 21943
## 8    4000      0.05                 7             10 21999
## 9    4000      0.05                 7             15 22348
```

After this procedure, we took our top model’s hyperparameter settings, reduced the learning rate to 0\.005, and increased the number of trees (8000\) to see if we got any additional improvement in accuracy. We experienced no improvement in our RMSE and our training time increased to nearly 6 minutes.

12\.4 Stochastic GBMs
---------------------

An important insight made by Breiman (Breiman ([1996](#ref-breiman1996bagging)[a](#ref-breiman1996bagging)); Breiman ([2001](#ref-breiman2001random))) in developing his bagging and random forest algorithms was that training the algorithm on a random subsample of the training data set offered additional reduction in tree correlation and, therefore, improvement in prediction accuracy. Friedman ([2002](#ref-friedman2002stochastic)) used this same logic and updated the boosting algorithm accordingly. This procedure is known as *stochastic gradient boosting* and, as illustrated in Figure [12\.5](gbm.html#fig:stochastic-gradient-descent-fig), helps reduce the chances of getting stuck in local minima, plateaus, and other irregular terrain of the loss function so that we may find a near global optimum.
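In the **gbm** package used above, this row subsampling is controlled by the `bag.fraction` argument. The snippet below sketches how you might rerun the tuned model from the previous section with an explicit subsampling rate; the 0\.65 value is purely illustrative and the call is not part of the chapter’s analysis.

```
# sketch only: stochastic GBM in gbm by subsampling 65% of the rows per tree
set.seed(123)
ames_gbm_stochastic <- gbm(
  formula = Sale_Price ~ .,
  data = ames_train,
  distribution = "gaussian",
  n.trees = 4000,
  shrinkage = 0.05,
  interaction.depth = 5,
  n.minobsinnode = 5,
  bag.fraction = 0.65,  # fraction of training rows used to grow each tree
  cv.folds = 10
)

# cross-validated RMSE for comparison with the non-stochastic model
sqrt(min(ames_gbm_stochastic$cv.error))
```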
### 12\.4\.1 Stochastic hyperparameters

There are a few variants of stochastic gradient boosting that can be used, all of which have additional hyperparameters:

* Subsample rows before creating each tree (available in **gbm**, **h2o**, \& **xgboost**)
* Subsample columns before creating each tree (**h2o** \& **xgboost**)
* Subsample columns before considering each split in each tree (**h2o** \& **xgboost**)

Generally, aggressive subsampling of rows, such as selecting only 50% or less of the training data, has been shown to be beneficial, and typical values range between 0\.5–0\.8\. The impact of subsampling columns on performance largely depends on the nature of the data and whether there is strong multicollinearity or a lot of noisy features. Similar to the \\(m\_{try}\\) parameter in random forests (Section [11\.4\.2](random-forest.html#mtry)), if there are fewer relevant predictors (more noisy data), higher values of column subsampling tend to perform better because they make it more likely that the features with the strongest signal are selected. When there are many relevant predictors, lower values of column subsampling tend to perform well.

When adding in a stochastic procedure, you can either include it in step 4\) of the general tuning strategy above (Section [12\.3\.3](gbm.html#tuning-strategy)), or introduce it once you’ve found the optimal basic model (after 6\)). In our experience, we have not seen strong interactions between the stochastic hyperparameters and the other boosting and tree\-specific hyperparameters.

### 12\.4\.2 Implementation

The following uses **h2o** to implement a stochastic GBM. We use the optimal hyperparameters found in the previous section and build onto this by assessing a range of values for subsampling rows and columns before each tree is built, and for subsampling columns before each split. To speed up training we use early stopping for the individual GBM modeling process and also add a stochastic search criterion. This grid search ran for the entire 60 minutes and evaluated 18 of the possible 27 models.
```
# refined hyperparameter grid
hyper_grid <- list(
  sample_rate = c(0.5, 0.75, 1),              # row subsampling
  col_sample_rate = c(0.5, 0.75, 1),          # col subsampling for each split
  col_sample_rate_per_tree = c(0.5, 0.75, 1)  # col subsampling for each tree
)

# random grid search strategy
search_criteria <- list(
  strategy = "RandomDiscrete",
  stopping_metric = "mse",
  stopping_tolerance = 0.001,
  stopping_rounds = 10,
  max_runtime_secs = 60*60
)

# perform grid search
grid <- h2o.grid(
  algorithm = "gbm",
  grid_id = "gbm_grid",
  x = predictors,
  y = response,
  training_frame = train_h2o,
  hyper_params = hyper_grid,
  ntrees = 6000,
  learn_rate = 0.01,
  max_depth = 7,
  min_rows = 5,
  nfolds = 10,
  stopping_rounds = 10,
  stopping_tolerance = 0,
  search_criteria = search_criteria,
  seed = 123
)

# collect the results and sort by our model performance metric of choice
grid_perf <- h2o.getGrid(
  grid_id = "gbm_grid",
  sort_by = "mse",
  decreasing = FALSE
)

grid_perf
## H2O Grid Details
## ================
##
## Grid ID: gbm_grid
## Used hyper parameters:
##   -  col_sample_rate
##   -  col_sample_rate_per_tree
##   -  sample_rate
## Number of models: 18
## Number of failed models: 0
##
## Hyper-Parameter Search Summary: ordered by increasing mse
##    col_sample_rate col_sample_rate_per_tree sample_rate          model_ids                  mse
## 1              0.5                      0.5         0.5   gbm_grid_model_8  4.462965966345138E8
## 2              0.5                      1.0         0.5   gbm_grid_model_3  4.568248274796835E8
## 3              0.5                     0.75        0.75  gbm_grid_model_12 4.6466647244785947E8
## 4             0.75                      0.5        0.75   gbm_grid_model_5  4.689665768861389E8
## 5              1.0                     0.75         0.5  gbm_grid_model_14 4.7010349266737276E8
## 6              0.5                      0.5        0.75  gbm_grid_model_10  4.713882927949245E8
## 7             0.75                      1.0         0.5   gbm_grid_model_4  4.729884840420368E8
## 8              1.0                      1.0         0.5   gbm_grid_model_1  4.770705550988762E8
## 9              1.0                     0.75        0.75   gbm_grid_model_6 4.9292332262147874E8
## 10            0.75                      1.0        0.75  gbm_grid_model_13  4.985715082289563E8
## 11            0.75                      0.5         1.0   gbm_grid_model_2 5.0271257831462187E8
## 12            0.75                     0.75        0.75  gbm_grid_model_15 5.0981695262733763E8
## 13            0.75                     0.75         1.0   gbm_grid_model_9 5.3137490858680266E8
## 14            0.75                      1.0         1.0  gbm_grid_model_11  5.77518690995319E8
## 15             1.0                      1.0         1.0   gbm_grid_model_7  6.037512241688542E8
## 16             1.0                     0.75         1.0  gbm_grid_model_16 1.9742225720119803E9
## 17             0.5                      1.0        0.75  gbm_grid_model_17 4.1339991380839005E9
## 18             1.0                      0.5         1.0  gbm_grid_model_18  5.949489361558916E9
```

Our grid search highlights some important results. Randomly sampling the rows for each tree and randomly sampling the features before each split appear to positively impact performance. It is not definitive whether sampling features before each tree has an impact. Furthermore, the best sampling values are very low (0\.5\); a further grid search may be beneficial to evaluate even lower values.

The below code chunk extracts the best performing model. In this particular case, we do not see additional improvement in our 10\-fold CV RMSE over the best non\-stochastic GBM model.

```
# Grab the model_id for the top model, chosen by cross validation error
best_model_id <- grid_perf@model_ids[[1]]
best_model <- h2o.getModel(best_model_id)

# Now let’s get performance metrics on the best model
h2o.performance(model = best_model, xval = TRUE)
## H2ORegressionMetrics: gbm
## ** Reported on cross-validation data. **
## ** 10-fold cross-validation on training data (Metrics computed for combined holdout predictions) **
##
## MSE:  446296597
## RMSE:  21125.73
## MAE:  13045.95
## RMSLE:  0.1240542
## Mean Residual Deviance :  446296597
```
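To score new data with the selected model, you can pass any H2O frame to `h2o.predict()`. The `ames_test` object below is a hypothetical hold\-out set (it is not created in this chapter); you would convert it with `as.h2o()` first.

```
# sketch: score a hypothetical hold-out set with the best grid-search model
test_h2o <- as.h2o(ames_test)                        # ames_test is assumed to exist
pred <- h2o.predict(best_model, newdata = test_h2o)  # H2O frame of predictions
head(pred)
```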
12\.5 XGBoost
-------------

Extreme gradient boosting (XGBoost) is an optimized distributed gradient boosting library that is designed to be efficient, flexible, and portable across multiple languages (Chen and Guestrin [2016](#ref-xgboost-paper)). Although XGBoost provides the same boosting and tree\-based hyperparameter options illustrated in the previous sections, it also provides a few advantages over traditional boosting such as:

* **Regularization**: XGBoost offers additional regularization hyperparameters, which we will discuss shortly, that provide added protection against overfitting.
* **Early stopping**: Similar to **h2o**, XGBoost implements early stopping so that we can stop model assessment when additional trees offer no improvement.
* **Parallel Processing**: Since gradient boosting is sequential in nature it is extremely difficult to parallelize. XGBoost has implemented procedures to support GPU and Spark compatibility which allows you to fit gradient boosting using powerful distributed processing engines.
* **Loss functions**: XGBoost allows users to define and optimize gradient boosting models using custom objective and evaluation criteria.
* **Continue with existing model**: A user can train an XGBoost model, save the results, and later on return to that model and continue building onto the results. Whether you shut down for the day, wanted to review intermediate results, or came up with additional hyperparameter settings to evaluate, this allows you to continue training your model without starting from scratch.
* **Different base learners**: Most GBM implementations are built with decision trees but XGBoost also provides boosted generalized linear models.
* **Multiple languages**: XGBoost offers implementations in R, Python, Julia, Scala, Java, and C\+\+.

In addition to being offered across multiple languages, XGBoost can be implemented multiple ways within R. The main R implementation is the **xgboost** package; however, as illustrated throughout many chapters one can also use **caret** as a meta engine to implement XGBoost. The **h2o** package also offers an implementation of XGBoost. In this chapter we’ll demonstrate the **xgboost** package.

### 12\.5\.1 XGBoost hyperparameters

As previously mentioned, **xgboost** provides the traditional boosting and tree\-based hyperparameters we discussed in Sections [12\.3\.1](gbm.html#hyper-gbm1) and [12\.4\.1](gbm.html#hyper-gbm2). However, **xgboost** also provides additional hyperparameters that can help reduce the chances of overfitting, leading to less prediction variability and, therefore, improved accuracy.

#### 12\.5\.1\.1 Regularization

**xgboost** provides multiple regularization parameters to help reduce model complexity and guard against overfitting. The first, `gamma`, is a pseudo\-regularization hyperparameter known as a Lagrangian multiplier and controls the complexity of a given tree. `gamma` specifies a minimum loss reduction required to make a further partition on a leaf node of the tree. When `gamma` is specified, **xgboost** will grow the tree to the max depth specified but then prune the tree to find and remove splits that do not meet the specified `gamma`.
`gamma` tends to be worth exploring as the trees in your GBM become deeper and when you see a significant difference between the train and test CV error. The value of `gamma` ranges from \\(0\-\\infty\\) (0 means no constraint while large numbers mean a higher regularization). What qualifies as a large `gamma` value is dependent on the loss function, but generally lower values between 1–20 will do if `gamma` is influential.

Two more traditional regularization parameters include `alpha` and `lambda`. `alpha` provides an \\(L\_1\\) regularization (reference Section [6\.2\.2](regularized-regression.html#lasso)) and `lambda` provides an \\(L\_2\\) regularization (reference Section [6\.2\.1](regularized-regression.html#ridge)). Setting both of these to greater than 0 results in an elastic net regularization; similar to `gamma`, these parameters can range from \\(0\-\\infty\\). These regularization parameters limit how extreme the weights (or influence) of the leaves in a tree can become.

All three hyperparameters (`gamma`, `alpha`, `lambda`) work to constrain model complexity and reduce overfitting. Although `gamma` is more commonly implemented, your tuning strategy should explore the impact of all three. Figure [12\.7](gbm.html#fig:xgboost-learning-curve) illustrates how regularization can make an overfit model more conservative on the training data which, in some circumstances, can result in improvements to the validation error.

Figure 12\.7: When a GBM model significantly overfits to the training data (blue), adding regularization (dotted line) causes the model to be more conservative on the training data, which can improve the cross\-validated test error (red).

#### 12\.5\.1\.2 Dropout

Dropout is an alternative approach to reduce overfitting and can loosely be described as regularization. The dropout approach developed by Srivastava et al. ([2014](#ref-JMLR:v15:srivastava14a)[a](#ref-JMLR:v15:srivastava14a)) has been widely employed in deep learning to prevent deep neural networks from overfitting (see Section [13\.7\.3](deep-learning.html#dl-regularization)). Dropout can also be used to address overfitting in GBMs. When constructing a GBM, the first few trees added at the beginning of the ensemble typically dominate the model performance while trees added later typically improve the prediction for only a small subset of the feature space. This often increases the risk of overfitting, and the idea of dropout is to build an ensemble by randomly dropping trees in the boosting sequence. This is commonly referred to as DART (Rashmi and Gilad\-Bachrach [2015](#ref-rashmi2015dart)) since it was initially explored in the context of *Multiple Additive Regression Trees* (MART); DART refers to *Dropout Additive Regression Trees*. The percentage of dropouts is another regularization parameter. Typically, when `gamma`, `alpha`, or `lambda` cannot help to control overfitting, exploring DART hyperparameters would be the next best option.[33](#fn33)
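We do not tune DART in this chapter, but for reference, a DART booster can be requested in **xgboost** by setting `booster = "dart"` in the `params` list along with dropout\-specific parameters such as `rate_drop` and `skip_drop`. The sketch below assumes the `X` and `Y` objects created in the next section, and the specific values are illustrative only.

```
# illustrative sketch of a DART booster with xgb.cv; values are arbitrary examples
dart_cv <- xgb.cv(
  data = X,
  label = Y,
  nrounds = 1000,
  objective = "reg:linear",
  nfold = 10,
  early_stopping_rounds = 50,
  verbose = 0,
  params = list(
    booster = "dart",  # dropout additive regression trees
    eta = 0.05,
    max_depth = 3,
    rate_drop = 0.1,   # proportion of trees dropped at each boosting iteration
    skip_drop = 0.5    # probability of skipping the dropout step in an iteration
  )
)

# minimum test CV RMSE
min(dart_cv$evaluation_log$test_rmse_mean)
```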
### 12\.5\.2 Tuning strategy

The general tuning strategy for exploring **xgboost** hyperparameters builds onto the basic and stochastic GBM tuning strategies:

1. Crank up the number of trees and tune the learning rate with early stopping.
2. Tune tree\-specific hyperparameters.
3. Explore stochastic GBM attributes.
4. If substantial overfitting occurs (e.g., large differences between train and CV error), explore regularization hyperparameters.
5. If you find hyperparameter values that are substantially different from default settings, be sure to retune the learning rate.
6. Obtain the final “optimal” model.

Running an XGBoost model with **xgboost** requires some additional data preparation. **xgboost** requires a matrix input for the features and the response to be a vector. Consequently, to provide a matrix input of the features we need to encode our categorical variables numerically (e.g., one\-hot encoding, label encoding). The following numerically label encodes all categorical features and converts the training data frame to a matrix.

```
library(recipes)
xgb_prep <- recipe(Sale_Price ~ ., data = ames_train) %>%
  step_integer(all_nominal()) %>%
  prep(training = ames_train, retain = TRUE) %>%
  juice()

X <- as.matrix(xgb_prep[setdiff(names(xgb_prep), "Sale_Price")])
Y <- xgb_prep$Sale_Price
```

**xgboost** will accept three different kinds of matrices for the features: an ordinary R matrix, sparse matrices from the **Matrix** package, or **xgboost**’s internal `xgb.DMatrix` objects. See `?xgboost::xgboost` for details.

Next, we went through a series of grid searches similar to the previous sections and found the below model hyperparameters (provided via the `params` argument) to perform quite well. Our RMSE is slightly lower than the best regular and stochastic GBM models thus far.

```
set.seed(123)
ames_xgb <- xgb.cv(
  data = X,
  label = Y,
  nrounds = 6000,
  objective = "reg:linear",
  early_stopping_rounds = 50,
  nfold = 10,
  params = list(
    eta = 0.1,
    max_depth = 3,
    min_child_weight = 3,
    subsample = 0.8,
    colsample_bytree = 1.0
  ),
  verbose = 0
)

# minimum test CV RMSE
min(ames_xgb$evaluation_log$test_rmse_mean)
## [1] 20488
```

Next, we assess whether overfitting is limiting our model’s performance by performing a grid search that examines various regularization parameters (`gamma`, `lambda`, and `alpha`). Our results indicate that the best performing models use `lambda` equal to 1, and it doesn’t appear that `alpha` or `gamma` have any consistent patterns. However, even when `lambda` equals 1, our CV RMSE shows no improvement over our previous XGBoost model. Due to the low learning rate (`eta`), this Cartesian grid search takes a long time. We stopped the search after 2 hours, at which point only 98 of the 245 models had completed.
```
# hyperparameter grid
hyper_grid <- expand.grid(
  eta = 0.01,
  max_depth = 3,
  min_child_weight = 3,
  subsample = 0.5,
  colsample_bytree = 0.5,
  gamma = c(0, 1, 10, 100, 1000),
  lambda = c(0, 1e-2, 0.1, 1, 100, 1000, 10000),
  alpha = c(0, 1e-2, 0.1, 1, 100, 1000, 10000),
  rmse = 0,   # a place to dump RMSE results
  trees = 0   # a place to dump required number of trees
)

# grid search
for(i in seq_len(nrow(hyper_grid))) {
  set.seed(123)
  m <- xgb.cv(
    data = X,
    label = Y,
    nrounds = 4000,
    objective = "reg:linear",
    early_stopping_rounds = 50,
    nfold = 10,
    verbose = 0,
    params = list(
      eta = hyper_grid$eta[i],
      max_depth = hyper_grid$max_depth[i],
      min_child_weight = hyper_grid$min_child_weight[i],
      subsample = hyper_grid$subsample[i],
      colsample_bytree = hyper_grid$colsample_bytree[i],
      gamma = hyper_grid$gamma[i],
      lambda = hyper_grid$lambda[i],
      alpha = hyper_grid$alpha[i]
    )
  )
  hyper_grid$rmse[i] <- min(m$evaluation_log$test_rmse_mean)
  hyper_grid$trees[i] <- m$best_iteration
}

# results
hyper_grid %>%
  filter(rmse > 0) %>%
  arrange(rmse) %>%
  glimpse()
## Observations: 98
## Variables: 10
## $ eta              <dbl> 0.01, 0.01, 0.01, 0.01, 0.01, 0.0…
## $ max_depth        <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
## $ min_child_weight <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
## $ subsample        <dbl> 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5…
## $ colsample_bytree <dbl> 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5…
## $ gamma            <dbl> 0, 1, 10, 100, 1000, 0, 1, 10, 10…
## $ lambda           <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
## $ alpha            <dbl> 0.00, 0.00, 0.00, 0.00, 0.00, 0.1…
## $ rmse             <dbl> 20488, 20488, 20488, 20488, 20488…
## $ trees            <dbl> 3944, 3944, 3944, 3944, 3944, 381…
```

Once you’ve found the optimal hyperparameters, fit the final model with `xgb.train()` or `xgboost()`. Be sure to use the optimal number of trees found during cross\-validation. In our example, adding regularization provides no improvement so we exclude it from our final model.

```
# optimal parameter list
params <- list(
  eta = 0.01,
  max_depth = 3,
  min_child_weight = 3,
  subsample = 0.5,
  colsample_bytree = 0.5
)

# train final model
xgb.fit.final <- xgboost(
  params = params,
  data = X,
  label = Y,
  nrounds = 3944,
  objective = "reg:linear",
  verbose = 0
)
```

12\.6 Feature interpretation
----------------------------

Measuring GBM feature importance and effects follows the same construct as random forests. Similar to random forests, the **gbm** and **h2o** packages offer an impurity\-based feature importance. **xgboost** actually provides three built\-in measures for feature importance:

1. **Gain**: This is equivalent to the impurity measure in random forests (reference Section [11\.6](random-forest.html#rf-vip)) and is the most common model\-centric metric to use.
2. **Coverage**: The Coverage metric quantifies the relative number of observations influenced by this feature. For example, if you have 100 observations, 4 features and 3 trees, and suppose \\(x\_1\\) is used to decide the leaf node for 10, 5, and 2 observations in \\(tree\_1\\), \\(tree\_2\\) and \\(tree\_3\\) respectively, then the metric will count cover for this feature as \\(10\+5\+2 \= 17\\) observations. This will be calculated for all 4 features and expressed as a percentage.
3. **Frequency**: The percentage representing the relative number of times a particular feature occurs in the trees of the model. In the above example, if \\(x\_1\\) was used for 2 splits, 1 split and 3 splits in each of \\(tree\_1\\), \\(tree\_2\\) and \\(tree\_3\\) respectively, then the weight for \\(x\_1\\) will be \\(2\+1\+3\=6\\). The frequency for \\(x\_1\\) is calculated as its percentage weight over the weights of all \\(x\_p\\) features.
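The raw table behind these three measures can be pulled directly from a fitted booster with `xgboost::xgb.importance()`; the short sketch below applies it to the `xgb.fit.final` model trained above (the `top_n` choice is arbitrary).

```
# importance table with Gain, Cover, and Frequency for each feature
importance_matrix <- xgb.importance(model = xgb.fit.final)
head(importance_matrix)

# plot one of the measures, e.g., Gain
xgb.plot.importance(importance_matrix, top_n = 10, measure = "Gain")
```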
If we examine the top 10 influential features in our final model using the impurity (gain) metric, we see very similar results as we saw with our random forest model (Section [11\.6](random-forest.html#rf-vip)). The primary difference is we no longer see `Neighborhood` as a top influential feature, which is likely a result of how we label encoded the categorical features.

By default, `vip::vip()` uses the gain method for feature importance, but you can assess the other types using the `type` argument. You can also use `xgboost::xgb.ggplot.importance()` to plot the various feature importance measures, but you need to first run `xgb.importance()` on the final model.

```
# variable importance plot
vip::vip(xgb.fit.final)
```

Figure 12\.8: Top 10 most important variables based on the impurity (gain) metric.

12\.7 Final thoughts
--------------------

GBMs are one of the most powerful ensemble algorithms and are often first\-in\-class with predictive accuracy. Although they are less intuitive and more computationally demanding than many other machine learning algorithms, they are essential to have in your toolbox.

Although we discussed the most popular GBM algorithms, realize there are alternative algorithms not covered here. For example, LightGBM (Ke et al. [2017](#ref-ke2017lightgbm)) is a gradient boosting framework that focuses on *leaf\-wise* tree growth versus the traditional level\-wise tree growth. This means as a tree is grown deeper, it focuses on extending a single branch versus growing multiple branches (reference Figure [9\.2](DT.html#fig:decision-tree-terminology)). CatBoost (Dorogush, Ershov, and Gulin [2018](#ref-dorogush2018catboost)) is another gradient boosting framework that focuses on using efficient methods for encoding categorical features during the gradient boosting process. Both frameworks are available in R.
Since averaging reduces variance, bagging (and hence, random forests) are most effectively applied to models with low bias and high variance (e.g., an overgrown decision tree). While boosting is a general algorithm for building an ensemble out of simpler models (typically decision trees), it is more effectively applied to models with high bias and low variance! Although boosting, like bagging, can be applied to any type of model, it is often most effectively applied to decision trees (which we’ll assume from this point on). ### 12\.2\.1 A sequential ensemble approach The main idea of boosting is to add new models to the ensemble ***sequentially***. In essence, boosting attacks the bias\-variance\-tradeoff by starting with a *weak* model (e.g., a decision tree with only a few splits) and sequentially *boosts* its performance by continuing to build new trees, where each new tree in the sequence tries to fix up where the previous one made the biggest mistakes (i.e., each new tree in the sequence will focus on the training rows where the previous tree had the largest prediction errors); see Figure [12\.1](gbm.html#fig:sequential-fig). Figure 12\.1: Sequential ensemble approach. Let’s discuss the important components of boosting in closer detail. **The base learners**: Boosting is a framework that iteratively improves *any* weak learning model. Many gradient boosting applications allow you to “plug in” various classes of weak learners at your disposal. In practice however, boosted algorithms almost always use decision trees as the base\-learner. Consequently, this chapter will discuss boosting in the context of decision trees. **Training weak models**: A weak model is one whose error rate is only slightly better than random guessing. The idea behind boosting is that each model in the sequence slightly improves upon the performance of the previous one (essentially, by focusing on the rows of the training data where the previous tree had the largest errors or residuals). With regards to decision trees, shallow trees (i.e., trees with relatively few splits) represent a weak learner. In boosting, trees with 1–6 splits are most common. **Sequential training with respect to errors**: Boosted trees are grown sequentially; each tree is grown using information from previously grown trees to improve performance. This is illustrated in the following algorithm for boosting regression trees. By fitting each tree in the sequence to the previous tree’s residuals, we’re allowing each new tree in the sequence to focus on the previous tree’s mistakes: 1. Fit a decision tree to the data: \\(F\_1\\left(x\\right) \= y\\), 2. We then fit the next decision tree to the residuals of the previous: \\(h\_1\\left(x\\right) \= y \- F\_1\\left(x\\right)\\), 3. Add this new tree to our algorithm: \\(F\_2\\left(x\\right) \= F\_1\\left(x\\right) \+ h\_1\\left(x\\right)\\), 4. Fit the next decision tree to the residuals of \\(F\_2\\): \\(h\_2\\left(x\\right) \= y \- F\_2\\left(x\\right)\\), 5. Add this new tree to our algorithm: \\(F\_3\\left(x\\right) \= F\_2\\left(x\\right) \+ h\_2\\left(x\\right)\\), 6. Continue this process until some mechanism (i.e. cross validation) tells us to stop. 
The final model here is a stagewise additive model of *b* individual trees: \\\[ f\\left(x\\right) \= \\sum^B\_{b\=1}f^b\\left(x\\right) \\tag{1} \\] Figure [12\.2](gbm.html#fig:boosting-in-action) illustrates with a simple example where a single predictor (\\(x\\)) has a true underlying sine wave relationship (blue line) with *y* along with some irreducible error. The first tree fit in the series is a single decision stump (i.e., a tree with a single split). Each successive decision stump thereafter is fit to the previous one’s residuals. Initially there are large errors, but each additional decision stump in the sequence makes a small improvement in different areas across the feature space where errors still remain. Figure 12\.2: Boosted regression decision stumps as 0\-1024 successive trees are added. ### 12\.2\.2 Gradient descent Many algorithms in regression, including decision trees, focus on minimizing some function of the residuals; most typically the SSE loss function, or equivalently, the MSE or RMSE (this is accomplished through simple calculus and is the approach taken with least squares). The boosting algorithm for regression discussed in the previous section outlines the approach of sequentially fitting regression trees to the residuals from the previous tree. This specific approach is how gradient boosting minimizes the mean squared error (SSE) loss function (for SSE loss, the gradient is nothing more than the residual error). However, we often wish to focus on other loss functions such as mean absolute error (MAE)—which is less sensitive to outliers—or to be able to apply the method to a classification problem with a loss function such as deviance, or log loss. The name ***gradient*** boosting machine comes from the fact that this procedure can be generalized to loss functions other than SSE. Gradient boosting is considered a ***gradient descent*** algorithm. Gradient descent is a very generic optimization algorithm capable of finding optimal solutions to a wide range of problems. The general idea of gradient descent is to tweak parameter(s) iteratively in order to minimize a cost function. Suppose you are a downhill skier racing your friend. A good strategy to beat your friend to the bottom is to take the path with the steepest slope. This is exactly what gradient descent does—it measures the local gradient of the loss (cost) function for a given set of parameters (\\(\\Theta\\)) and takes steps in the direction of the descending gradient. As Figure [12\.3](gbm.html#fig:gradient-descent-fig)[32](#fn32) illustrates, once the gradient is zero, we have reached a minimum. Figure 12\.3: Gradient descent is the process of gradually decreasing the cost function (i.e. MSE) by tweaking parameter(s) iteratively until you have reached a minimum. Gradient descent can be performed on any loss function that is differentiable. Consequently, this allows GBMs to optimize different loss functions as desired (see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)), p. 360 for common loss functions). An important parameter in gradient descent is the size of the steps which is controlled by the *learning rate*. If the learning rate is too small, then the algorithm will take many iterations (steps) to find the minimum. On the other hand, if the learning rate is too high, you might jump across the minimum and end up further away than when you started. Figure 12\.4: A learning rate that is too small will require many iterations to find the minimum. 
A learning rate too big may jump over the minimum. Moreover, not all cost functions are *convex* (i.e., bowl shaped). There may be local minimas, plateaus, and other irregular terrain of the loss function that makes finding the global minimum difficult. ***Stochastic gradient descent*** can help us address this problem by sampling a fraction of the training observations (typically without replacement) and growing the next tree using that subsample. This makes the algorithm faster but the stochastic nature of random sampling also adds some random nature in descending the loss function’s gradient. Although this randomness does not allow the algorithm to find the absolute global minimum, it can actually help the algorithm jump out of local minima and off plateaus to get sufficiently near the global minimum. Figure 12\.5: Stochastic gradient descent will often find a near\-optimal solution by jumping out of local minimas and off plateaus. As we’ll see in the sections that follow, there are several hyperparameter tuning options available in stochastic gradient boosting (some control the gradient descent and others control the tree growing process). If properly tuned (e.g., with *k*\-fold CV) GBMs can lead to some of the most flexible and accurate predictive models you can build! ### 12\.2\.1 A sequential ensemble approach The main idea of boosting is to add new models to the ensemble ***sequentially***. In essence, boosting attacks the bias\-variance\-tradeoff by starting with a *weak* model (e.g., a decision tree with only a few splits) and sequentially *boosts* its performance by continuing to build new trees, where each new tree in the sequence tries to fix up where the previous one made the biggest mistakes (i.e., each new tree in the sequence will focus on the training rows where the previous tree had the largest prediction errors); see Figure [12\.1](gbm.html#fig:sequential-fig). Figure 12\.1: Sequential ensemble approach. Let’s discuss the important components of boosting in closer detail. **The base learners**: Boosting is a framework that iteratively improves *any* weak learning model. Many gradient boosting applications allow you to “plug in” various classes of weak learners at your disposal. In practice however, boosted algorithms almost always use decision trees as the base\-learner. Consequently, this chapter will discuss boosting in the context of decision trees. **Training weak models**: A weak model is one whose error rate is only slightly better than random guessing. The idea behind boosting is that each model in the sequence slightly improves upon the performance of the previous one (essentially, by focusing on the rows of the training data where the previous tree had the largest errors or residuals). With regards to decision trees, shallow trees (i.e., trees with relatively few splits) represent a weak learner. In boosting, trees with 1–6 splits are most common. **Sequential training with respect to errors**: Boosted trees are grown sequentially; each tree is grown using information from previously grown trees to improve performance. This is illustrated in the following algorithm for boosting regression trees. By fitting each tree in the sequence to the previous tree’s residuals, we’re allowing each new tree in the sequence to focus on the previous tree’s mistakes: 1. Fit a decision tree to the data: \\(F\_1\\left(x\\right) \= y\\), 2. We then fit the next decision tree to the residuals of the previous: \\(h\_1\\left(x\\right) \= y \- F\_1\\left(x\\right)\\), 3. 
Add this new tree to our algorithm: \\(F\_2\\left(x\\right) \= F\_1\\left(x\\right) \+ h\_1\\left(x\\right)\\), 4. Fit the next decision tree to the residuals of \\(F\_2\\): \\(h\_2\\left(x\\right) \= y \- F\_2\\left(x\\right)\\), 5. Add this new tree to our algorithm: \\(F\_3\\left(x\\right) \= F\_2\\left(x\\right) \+ h\_2\\left(x\\right)\\), 6. Continue this process until some mechanism (i.e. cross validation) tells us to stop. The final model here is a stagewise additive model of *b* individual trees: \\\[ f\\left(x\\right) \= \\sum^B\_{b\=1}f^b\\left(x\\right) \\tag{1} \\] Figure [12\.2](gbm.html#fig:boosting-in-action) illustrates with a simple example where a single predictor (\\(x\\)) has a true underlying sine wave relationship (blue line) with *y* along with some irreducible error. The first tree fit in the series is a single decision stump (i.e., a tree with a single split). Each successive decision stump thereafter is fit to the previous one’s residuals. Initially there are large errors, but each additional decision stump in the sequence makes a small improvement in different areas across the feature space where errors still remain. Figure 12\.2: Boosted regression decision stumps as 0\-1024 successive trees are added. ### 12\.2\.2 Gradient descent Many algorithms in regression, including decision trees, focus on minimizing some function of the residuals; most typically the SSE loss function, or equivalently, the MSE or RMSE (this is accomplished through simple calculus and is the approach taken with least squares). The boosting algorithm for regression discussed in the previous section outlines the approach of sequentially fitting regression trees to the residuals from the previous tree. This specific approach is how gradient boosting minimizes the mean squared error (SSE) loss function (for SSE loss, the gradient is nothing more than the residual error). However, we often wish to focus on other loss functions such as mean absolute error (MAE)—which is less sensitive to outliers—or to be able to apply the method to a classification problem with a loss function such as deviance, or log loss. The name ***gradient*** boosting machine comes from the fact that this procedure can be generalized to loss functions other than SSE. Gradient boosting is considered a ***gradient descent*** algorithm. Gradient descent is a very generic optimization algorithm capable of finding optimal solutions to a wide range of problems. The general idea of gradient descent is to tweak parameter(s) iteratively in order to minimize a cost function. Suppose you are a downhill skier racing your friend. A good strategy to beat your friend to the bottom is to take the path with the steepest slope. This is exactly what gradient descent does—it measures the local gradient of the loss (cost) function for a given set of parameters (\\(\\Theta\\)) and takes steps in the direction of the descending gradient. As Figure [12\.3](gbm.html#fig:gradient-descent-fig)[32](#fn32) illustrates, once the gradient is zero, we have reached a minimum. Figure 12\.3: Gradient descent is the process of gradually decreasing the cost function (i.e. MSE) by tweaking parameter(s) iteratively until you have reached a minimum. Gradient descent can be performed on any loss function that is differentiable. Consequently, this allows GBMs to optimize different loss functions as desired (see J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)), p. 360 for common loss functions). 
An important parameter in gradient descent is the size of the steps which is controlled by the *learning rate*. If the learning rate is too small, then the algorithm will take many iterations (steps) to find the minimum. On the other hand, if the learning rate is too high, you might jump across the minimum and end up further away than when you started. Figure 12\.4: A learning rate that is too small will require many iterations to find the minimum. A learning rate too big may jump over the minimum. Moreover, not all cost functions are *convex* (i.e., bowl shaped). There may be local minimas, plateaus, and other irregular terrain of the loss function that makes finding the global minimum difficult. ***Stochastic gradient descent*** can help us address this problem by sampling a fraction of the training observations (typically without replacement) and growing the next tree using that subsample. This makes the algorithm faster but the stochastic nature of random sampling also adds some random nature in descending the loss function’s gradient. Although this randomness does not allow the algorithm to find the absolute global minimum, it can actually help the algorithm jump out of local minima and off plateaus to get sufficiently near the global minimum. Figure 12\.5: Stochastic gradient descent will often find a near\-optimal solution by jumping out of local minimas and off plateaus. As we’ll see in the sections that follow, there are several hyperparameter tuning options available in stochastic gradient boosting (some control the gradient descent and others control the tree growing process). If properly tuned (e.g., with *k*\-fold CV) GBMs can lead to some of the most flexible and accurate predictive models you can build! 12\.3 Basic GBM --------------- There are multiple variants of boosting algorithms with the original focused on classification problems (Kuhn and Johnson [2013](#ref-apm)). Throughout the 1990’s many approaches were developed with the most successful being the AdaBoost algorithm (Freund and Schapire [1999](#ref-freund1999adaptive)). In 2000, Friedman related AdaBoost to important statistical concepts (e.g., loss functions and additive modeling), which allowed him to generalize the boosting framework to regression problems and multiple loss functions (J. H. Friedman [2001](#ref-friedman2001greedy)). This led to the typical GBM model that we think of today and that most modern implementations are built on. ### 12\.3\.1 Hyperparameters A simple GBM model contains two categories of hyperparameters: *boosting hyperparameters* and *tree\-specific hyperparameters*. The two main boosting hyperparameters include: * **Number of trees**: The total number of trees in the sequence or ensemble. The averaging of independently grown trees in bagging and random forests makes it very difficult to overfit with too many trees. However, GBMs function differently as each tree is grown in sequence to fix up the past tree’s mistakes. For example, in regression, GBMs will chase residuals as long as you allow them to. Also, depending on the values of the other hyperparameters, GBMs often require many trees (it is not uncommon to have many thousands of trees) but since they can easily overfit we must find the optimal number of trees that minimize the loss function of interest with cross validation. 
* **Learning rate**: Determines the contribution of each tree on the final outcome and controls how quickly the algorithm proceeds down the gradient descent (learns); see Figure [12\.3](gbm.html#fig:gradient-descent-fig). Values range from 0–1 with typical values between 0\.001–0\.3\. Smaller values make the model robust to the specific characteristics of each individual tree, thus allowing it to generalize well. Smaller values also make it easier to stop prior to overfitting; however, they increase the risk of not reaching the optimum with a fixed number of trees and are more computationally demanding. This hyperparameter is also called *shrinkage*. Generally, the smaller this value, the more accurate the model can be but also will require more trees in the sequence. The two main tree hyperparameters in a simple GBM model include: * **Tree depth**: Controls the depth of the individual trees. Typical values range from a depth of 3–8 but it is not uncommon to see a tree depth of 1 (J. Friedman, Hastie, and Tibshirani [2001](#ref-esl)). Smaller depth trees such as decision stumps are computationally efficient (but require more trees); however, higher depth trees allow the algorithm to capture unique interactions but also increase the risk of over\-fitting. Note that larger \\(n\\) or \\(p\\) training data sets are more tolerable to deeper trees. * **Minimum number of observations in terminal nodes**: Also, controls the complexity of each tree. Since we tend to use shorter trees this rarely has a large impact on performance. Typical values range from 5–15 where higher values help prevent a model from learning relationships which might be highly specific to the particular sample selected for a tree (overfitting) but smaller values can help with imbalanced target classes in classification problems. ### 12\.3\.2 Implementation There are many packages that implement GBMs and GBM variants. You can find a fairly comprehensive list at the CRAN Machine Learning Task View: [https://cran.r\-project.org/web/views/MachineLearning.html](https://cran.r-project.org/web/views/MachineLearning.html). However, the most popular original R implementation of Friedman’s GBM algorithm (J. H. Friedman [2001](#ref-friedman2001greedy); Friedman [2002](#ref-friedman2002stochastic)) is the **gbm** package. **gbm** has two training functions: `gbm::gbm()` and `gbm::gbm.fit()`. The primary difference is that `gbm::gbm()` uses the formula interface to specify your model whereas `gbm::gbm.fit()` requires the separated `x` and `y` matrices; `gbm::gbm.fit()` is more efficient and recommended for advanced users. The default settings in **gbm** include a learning rate (`shrinkage`) of 0\.001\. This is a very small learning rate and typically requires a large number of trees to sufficiently minimize the loss function. However, **gbm** uses a default number of trees of 100, which is rarely sufficient. Consequently, we start with a learning rate of 0\.1 and increase the number of trees to train. The default depth of each tree (`interaction.depth`) is 1, which means we are ensembling a bunch of decision stumps (i.e., we are not able to capture any interaction effects). For the Ames housing data set, we increase the tree depth to 3 and use the default value for minimum number of observations required in the trees terminal nodes (`n.minobsinnode`). Lastly, we set `cv.folds = 10` to perform a 10\-fold CV. This model takes a little over 2 minutes to run. 
``` # run a basic GBM model set.seed(123) # for reproducibility ames_gbm1 <- gbm( formula = Sale_Price ~ ., data = ames_train, distribution = "gaussian", # SSE loss function n.trees = 5000, shrinkage = 0.1, interaction.depth = 3, n.minobsinnode = 10, cv.folds = 10 ) # find index for number trees with minimum CV error best <- which.min(ames_gbm1$cv.error) # get MSE and compute RMSE sqrt(ames_gbm1$cv.error[best]) ## [1] 23240.38 ``` Our results show a cross\-validated SSE of 23240 which was achieved with 1219 trees. ``` # plot error curve gbm.perf(ames_gbm1, method = "cv") ``` Figure 12\.6: Training and cross\-validated MSE as n trees are added to the GBM algorithm. ``` ## [1] 1219 ``` ### 12\.3\.3 General tuning strategy Unlike random forests, GBMs can have high variability in accuracy dependent on their hyperparameter settings (Probst, Bischl, and Boulesteix [2018](#ref-probst2018tunability)). So tuning can require much more strategy than a random forest model. Often, a good approach is to: 1. Choose a relatively high learning rate. Generally the default value of 0\.1 works but somewhere between 0\.05–0\.2 should work across a wide range of problems. 2. Determine the optimum number of trees for this learning rate. 3. Fix tree hyperparameters and tune learning rate and assess speed vs. performance. 4. Tune tree\-specific parameters for decided learning rate. 5. Once tree\-specific parameters have been found, lower the learning rate to assess for any improvements in accuracy. 6. Use final hyperparameter settings and increase CV procedures to get more robust estimates. Often, the above steps are performed with a simple validation procedure or 5\-fold CV due to computational constraints. If you used *k*\-fold CV throughout steps 1–5 then this step is not necessary. We already did (1\)–(2\) in the Ames example above with our first GBM model. Next, we’ll do (3\) and asses the performance of various learning rate values between 0\.005–0\.3\. Our results indicate that a learning rate of 0\.05 sufficiently minimizes our loss function and requires 2375 trees. All our models take a little over 2 minutes to train so we don’t see any significant impacts in training time based on the learning rate. The following grid search took us about 10 minutes. ``` # create grid search hyper_grid <- expand.grid( learning_rate = c(0.3, 0.1, 0.05, 0.01, 0.005), RMSE = NA, trees = NA, time = NA ) # execute grid search for(i in seq_len(nrow(hyper_grid))) { # fit gbm set.seed(123) # for reproducibility train_time <- system.time({ m <- gbm( formula = Sale_Price ~ ., data = ames_train, distribution = "gaussian", n.trees = 5000, shrinkage = hyper_grid$learning_rate[i], interaction.depth = 3, n.minobsinnode = 10, cv.folds = 10 ) }) # add SSE, trees, and training time to results hyper_grid$RMSE[i] <- sqrt(min(m$cv.error)) hyper_grid$trees[i] <- which.min(m$cv.error) hyper_grid$Time[i] <- train_time[["elapsed"]] } # results arrange(hyper_grid, RMSE) ## learning_rate RMSE trees time ## 1 0.050 21382 2375 129.5 ## 2 0.010 21828 4982 126.0 ## 3 0.100 22252 874 137.6 ## 4 0.005 23136 5000 136.8 ## 5 0.300 24454 427 139.9 ``` Next, we’ll set our learning rate at the optimal level (0\.05\) and tune the tree specific hyperparameters (`interaction.depth` and `n.minobsinnode`). Adjusting the tree\-specific parameters provides us with an additional 600 reduction in RMSE. This grid search takes about 30 minutes. 
``` # search grid hyper_grid <- expand.grid( n.trees = 6000, shrinkage = 0.01, interaction.depth = c(3, 5, 7), n.minobsinnode = c(5, 10, 15) ) # create model fit function model_fit <- function(n.trees, shrinkage, interaction.depth, n.minobsinnode) { set.seed(123) m <- gbm( formula = Sale_Price ~ ., data = ames_train, distribution = "gaussian", n.trees = n.trees, shrinkage = shrinkage, interaction.depth = interaction.depth, n.minobsinnode = n.minobsinnode, cv.folds = 10 ) # compute RMSE sqrt(min(m$cv.error)) } # perform search grid with functional programming hyper_grid$rmse <- purrr::pmap_dbl( hyper_grid, ~ model_fit( n.trees = ..1, shrinkage = ..2, interaction.depth = ..3, n.minobsinnode = ..4 ) ) # results arrange(hyper_grid, rmse) ## n.trees shrinkage interaction.depth n.minobsinnode rmse ## 1 4000 0.05 5 5 20699 ## 2 4000 0.05 3 5 20723 ## 3 4000 0.05 7 5 21021 ## 4 4000 0.05 3 10 21382 ## 5 4000 0.05 5 10 21915 ## 6 4000 0.05 5 15 21924 ## 7 4000 0.05 3 15 21943 ## 8 4000 0.05 7 10 21999 ## 9 4000 0.05 7 15 22348 ``` After this procedure, we took our top model’s hyperparameter settings, reduced the learning rate to 0\.005, and increased the number of trees (8000\) to see if we got any additional improvement in accuracy. We experienced no improvement in our RMSE and our training time increased to nearly 6 minutes. ### 12\.3\.1 Hyperparameters A simple GBM model contains two categories of hyperparameters: *boosting hyperparameters* and *tree\-specific hyperparameters*. The two main boosting hyperparameters include: * **Number of trees**: The total number of trees in the sequence or ensemble. The averaging of independently grown trees in bagging and random forests makes it very difficult to overfit with too many trees. However, GBMs function differently as each tree is grown in sequence to fix up the past tree’s mistakes. For example, in regression, GBMs will chase residuals as long as you allow them to. Also, depending on the values of the other hyperparameters, GBMs often require many trees (it is not uncommon to have many thousands of trees) but since they can easily overfit we must find the optimal number of trees that minimize the loss function of interest with cross validation. * **Learning rate**: Determines the contribution of each tree on the final outcome and controls how quickly the algorithm proceeds down the gradient descent (learns); see Figure [12\.3](gbm.html#fig:gradient-descent-fig). Values range from 0–1 with typical values between 0\.001–0\.3\. Smaller values make the model robust to the specific characteristics of each individual tree, thus allowing it to generalize well. Smaller values also make it easier to stop prior to overfitting; however, they increase the risk of not reaching the optimum with a fixed number of trees and are more computationally demanding. This hyperparameter is also called *shrinkage*. Generally, the smaller this value, the more accurate the model can be but also will require more trees in the sequence. The two main tree hyperparameters in a simple GBM model include: * **Tree depth**: Controls the depth of the individual trees. Typical values range from a depth of 3–8 but it is not uncommon to see a tree depth of 1 (J. Friedman, Hastie, and Tibshirani [2001](#ref-esl)). Smaller depth trees such as decision stumps are computationally efficient (but require more trees); however, higher depth trees allow the algorithm to capture unique interactions but also increase the risk of over\-fitting. 
12.4 Stochastic GBMs
---------------------

An important insight made by Breiman ([1996](#ref-breiman1996bagging)[a](#ref-breiman1996bagging), [2001](#ref-breiman2001random)) in developing his bagging and random forest algorithms was that training the algorithm on a random subsample of the training data set offered additional reduction in tree correlation and, therefore, improvement in prediction accuracy. Friedman ([2002](#ref-friedman2002stochastic)) used this same logic and updated the boosting algorithm accordingly. This procedure is known as *stochastic gradient boosting* and, as illustrated in Figure [12.5](gbm.html#fig:stochastic-gradient-descent-fig), helps reduce the chances of getting stuck in local minima, plateaus, and other irregular terrain of the loss function so that we may find a near global optimum.

### 12.4.1 Stochastic hyperparameters

There are a few variants of stochastic gradient boosting that can be used, all of which have additional hyperparameters:

* Subsample rows before creating each tree (available in **gbm**, **h2o**, & **xgboost**)
* Subsample columns before creating each tree (**h2o** & **xgboost**)
* Subsample columns before considering each split in each tree (**h2o** & **xgboost**)

Generally, aggressive subsampling of rows, such as selecting only 50% or less of the training data, has been shown to be beneficial, and typical values range between 0.5–0.8. The impact of column subsampling on performance largely depends on the nature of the data and whether there is strong multicollinearity or a lot of noisy features. Similar to the \\(m\_{try}\\) parameter in random forests (Section [11.4.2](random-forest.html#mtry)), if there are fewer relevant predictors (more noisy data), higher values of column subsampling tend to perform better because they make it more likely to select the features with the strongest signal. When there are many relevant predictors, lower values of column subsampling tend to perform well.

When adding in a stochastic procedure, you can either include it in step (4) of the general tuning strategy above (Section [12.3.3](gbm.html#tuning-strategy)), or introduce it once you’ve found the optimal basic model (after step (6)). In our experience, we have not seen strong interactions between the stochastic hyperparameters and the other boosting and tree-specific hyperparameters.

### 12.4.2 Implementation

The following uses **h2o** to implement a stochastic GBM. We use the optimal hyperparameters found in the previous section and build on this by assessing a range of values for subsampling rows and columns before each tree is built, and subsampling columns before each split. To speed up training we use early stopping for the individual GBM modeling process and also add stochastic search criteria. This grid search ran for the entire 60 minutes and evaluated 18 of the possible 27 models.
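The grid search below relies on `train_h2o`, `predictors`, and `response`, which were created earlier in the book and are not shown in this section; a minimal sketch of one way to recreate them (the memory size is an arbitrary illustrative choice):

```
library(h2o)

# start an H2O instance and convert the training data to an H2OFrame
h2o.init(max_mem_size = "10g")
train_h2o <- as.h2o(ames_train)

# response name and predictor names
response <- "Sale_Price"
predictors <- setdiff(colnames(ames_train), response)
```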
```
# refined hyperparameter grid
hyper_grid <- list(
  sample_rate = c(0.5, 0.75, 1),              # row subsampling
  col_sample_rate = c(0.5, 0.75, 1),          # col subsampling for each split
  col_sample_rate_per_tree = c(0.5, 0.75, 1)  # col subsampling for each tree
)

# random grid search strategy
search_criteria <- list(
  strategy = "RandomDiscrete",
  stopping_metric = "mse",
  stopping_tolerance = 0.001,
  stopping_rounds = 10,
  max_runtime_secs = 60*60
)

# perform grid search
grid <- h2o.grid(
  algorithm = "gbm",
  grid_id = "gbm_grid",
  x = predictors,
  y = response,
  training_frame = train_h2o,
  hyper_params = hyper_grid,
  ntrees = 6000,
  learn_rate = 0.01,
  max_depth = 7,
  min_rows = 5,
  nfolds = 10,
  stopping_rounds = 10,
  stopping_tolerance = 0,
  search_criteria = search_criteria,
  seed = 123
)

# collect the results and sort by our model performance metric of choice
grid_perf <- h2o.getGrid(
  grid_id = "gbm_grid",
  sort_by = "mse",
  decreasing = FALSE
)

grid_perf
## H2O Grid Details
## ================
##
## Grid ID: gbm_grid
## Used hyper parameters:
##   - col_sample_rate
##   - col_sample_rate_per_tree
##   - sample_rate
## Number of models: 18
## Number of failed models: 0
##
## Hyper-Parameter Search Summary: ordered by increasing mse
##    col_sample_rate col_sample_rate_per_tree sample_rate          model_ids                  mse
## 1              0.5                      0.5         0.5   gbm_grid_model_8  4.462965966345138E8
## 2              0.5                      1.0         0.5   gbm_grid_model_3  4.568248274796835E8
## 3              0.5                     0.75        0.75  gbm_grid_model_12 4.6466647244785947E8
## 4             0.75                      0.5        0.75   gbm_grid_model_5  4.689665768861389E8
## 5              1.0                     0.75         0.5  gbm_grid_model_14 4.7010349266737276E8
## 6              0.5                      0.5        0.75  gbm_grid_model_10  4.713882927949245E8
## 7             0.75                      1.0         0.5   gbm_grid_model_4  4.729884840420368E8
## 8              1.0                      1.0         0.5   gbm_grid_model_1  4.770705550988762E8
## 9              1.0                     0.75        0.75   gbm_grid_model_6 4.9292332262147874E8
## 10            0.75                      1.0        0.75  gbm_grid_model_13  4.985715082289563E8
## 11            0.75                      0.5         1.0   gbm_grid_model_2 5.0271257831462187E8
## 12            0.75                     0.75        0.75  gbm_grid_model_15 5.0981695262733763E8
## 13            0.75                     0.75         1.0   gbm_grid_model_9 5.3137490858680266E8
## 14            0.75                      1.0         1.0  gbm_grid_model_11  5.77518690995319E8
## 15             1.0                      1.0         1.0   gbm_grid_model_7  6.037512241688542E8
## 16             1.0                     0.75         1.0  gbm_grid_model_16 1.9742225720119803E9
## 17             0.5                      1.0        0.75  gbm_grid_model_17 4.1339991380839005E9
## 18             1.0                      0.5         1.0  gbm_grid_model_18  5.949489361558916E9
```

Our grid search highlights some important results. Random sampling from the rows for each tree and randomly sampling features before each split appear to positively impact performance. It is not definitive whether sampling features before each tree has an impact. Furthermore, the best sampling values are very low (0.5); a further grid search may be beneficial to evaluate even lower values.

The code chunk below extracts the best-performing model. In this particular case, we do not see additional improvement in our 10-fold CV RMSE over the best non-stochastic GBM model.

```
# Grab the model_id for the top model, chosen by cross validation error
best_model_id <- grid_perf@model_ids[[1]]
best_model <- h2o.getModel(best_model_id)

# Now let’s get performance metrics on the best model
h2o.performance(model = best_model, xval = TRUE)
## H2ORegressionMetrics: gbm
## ** Reported on cross-validation data. **
## ** 10-fold cross-validation on training data (Metrics computed for combined holdout predictions) **
##
## MSE:  446296597
## RMSE:  21125.73
## MAE:  13045.95
## RMSLE:  0.1240542
## Mean Residual Deviance :  446296597
```
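Although the example above uses **h2o**, row subsampling is also available in the **gbm** package through its `bag.fraction` argument (which defaults to 0.5). A minimal, illustrative sketch of adding it to the earlier **gbm** model; the subsample rate of 0.65 and the object name are our own choices, not the book’s:

```
# stochastic GBM with gbm: subsample 65% of the rows before each tree
set.seed(123)
ames_gbm_stochastic <- gbm(
  formula = Sale_Price ~ .,
  data = ames_train,
  distribution = "gaussian",
  n.trees = 4000,
  shrinkage = 0.05,
  interaction.depth = 5,
  n.minobsinnode = 5,
  bag.fraction = 0.65,   # row subsampling rate
  cv.folds = 10
)

# cross-validated RMSE
sqrt(min(ames_gbm_stochastic$cv.error))
```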
12.5 XGBoost
-------------

Extreme gradient boosting (XGBoost) is an optimized distributed gradient boosting library that is designed to be efficient, flexible, and portable across multiple languages (Chen and Guestrin [2016](#ref-xgboost-paper)). Although XGBoost provides the same boosting and tree-based hyperparameter options illustrated in the previous sections, it also provides a few advantages over traditional boosting such as:

* **Regularization**: XGBoost offers additional regularization hyperparameters, which we will discuss shortly, that provide added protection against overfitting.
* **Early stopping**: Similar to **h2o**, XGBoost implements early stopping so that we can stop model assessment when additional trees offer no improvement.
* **Parallel Processing**: Since gradient boosting is sequential in nature it is extremely difficult to parallelize. XGBoost has implemented procedures to support GPU and Spark compatibility, which allows you to fit gradient boosting using powerful distributed processing engines.
* **Loss functions**: XGBoost allows users to define and optimize gradient boosting models using custom objective and evaluation criteria.
* **Continue with existing model**: A user can train an XGBoost model, save the results, and later on return to that model and continue building on the results. Whether you shut down for the day, wanted to review intermediate results, or came up with additional hyperparameter settings to evaluate, this allows you to continue training your model without starting from scratch.
* **Different base learners**: Most GBM implementations are built with decision trees but XGBoost also provides boosted generalized linear models.
* **Multiple languages**: XGBoost offers implementations in R, Python, Julia, Scala, Java, and C++.

In addition to being offered across multiple languages, XGBoost can be implemented multiple ways within R. The main R implementation is the **xgboost** package; however, as illustrated throughout many chapters, one can also use **caret** as a meta engine to implement XGBoost. The **h2o** package also offers an implementation of XGBoost. In this chapter we’ll demonstrate the **xgboost** package.

### 12.5.1 XGBoost hyperparameters

As previously mentioned, **xgboost** provides the traditional boosting and tree-based hyperparameters we discussed in Sections [12.3.1](gbm.html#hyper-gbm1) and [12.4.1](gbm.html#hyper-gbm2). However, **xgboost** also provides additional hyperparameters that can help reduce the chances of overfitting, leading to less prediction variability and, therefore, improved accuracy.

#### 12.5.1.1 Regularization

**xgboost** provides multiple regularization parameters to help reduce model complexity and guard against overfitting. The first, `gamma`, is a pseudo-regularization hyperparameter known as a Lagrangian multiplier and controls the complexity of a given tree. `gamma` specifies a minimum loss reduction required to make a further partition on a leaf node of the tree. When `gamma` is specified, **xgboost** will grow the tree to the max depth specified but then prune the tree to find and remove splits that do not meet the specified `gamma`.
`gamma` tends to be worth exploring as the trees in your GBM become deeper and when you see a significant difference between the train and test CV error. The value of `gamma` ranges from \\(0\-\\infty\\) (0 means no constraint while large numbers mean a higher regularization). What qualifies as a large `gamma` value depends on the loss function, but generally lower values between 1–20 will do if `gamma` is influential.

Two more traditional regularization parameters include `alpha` and `lambda`. `alpha` provides an \\(L\_1\\) regularization (reference Section [6.2.2](regularized-regression.html#lasso)) and `lambda` provides an \\(L\_2\\) regularization (reference Section [6.2.1](regularized-regression.html#ridge)). Setting both of these to greater than 0 results in an elastic net regularization; similar to `gamma`, these parameters can range from \\(0\-\\infty\\). These regularization parameters limit how extreme the weights (or influence) of the leaves in a tree can become.

All three hyperparameters (`gamma`, `alpha`, `lambda`) work to constrain model complexity and reduce overfitting. Although `gamma` is more commonly implemented, your tuning strategy should explore the impact of all three. Figure [12.7](gbm.html#fig:xgboost-learning-curve) illustrates how regularization can make an overfit model more conservative on the training data which, in some circumstances, can result in improvements to the validation error.

Figure 12.7: When a GBM model significantly overfits to the training data (blue), adding regularization (dotted line) causes the model to be more conservative on the training data, which can improve the cross-validated test error (red).

#### 12.5.1.2 Dropout

Dropout is an alternative approach to reduce overfitting and can loosely be described as regularization. The dropout approach developed by Srivastava et al. ([2014](#ref-JMLR:v15:srivastava14a)[a](#ref-JMLR:v15:srivastava14a)) has been widely employed in deep learning to prevent deep neural networks from overfitting (see Section [13.7.3](deep-learning.html#dl-regularization)). Dropout can also be used to address overfitting in GBMs. When constructing a GBM, the first few trees added at the beginning of the ensemble typically dominate the model performance while trees added later typically improve the prediction for only a small subset of the feature space. This often increases the risk of overfitting, and the idea of dropout is to build an ensemble by randomly dropping trees in the boosting sequence. This is commonly referred to as DART (Rashmi and Gilad-Bachrach [2015](#ref-rashmi2015dart)) since it was initially explored in the context of *Multiple Additive Regression Trees* (MART); DART refers to *Dropout Additive Regression Trees*. The percentage of dropouts is another regularization parameter. Typically, when `gamma`, `alpha`, or `lambda` cannot help to control overfitting, exploring DART hyperparameters would be the next best option.[33](#fn33)
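The chapter does not show a DART example, but **xgboost** exposes dropout through its `booster = "dart"` option along with `rate_drop` and `skip_drop` parameters. A minimal, illustrative sketch using the feature matrix `X` and response vector `Y` created in the next section; the dropout values and tree settings here are arbitrary:

```
# DART: gradient boosting with dropout applied to the tree sequence
set.seed(123)
ames_xgb_dart <- xgb.cv(
  data = X,
  label = Y,
  nrounds = 1000,
  objective = "reg:linear",
  early_stopping_rounds = 50,
  nfold = 10,
  verbose = 0,
  params = list(
    booster = "dart",    # dropout additive regression trees
    rate_drop = 0.1,     # fraction of previous trees dropped each iteration
    skip_drop = 0.5,     # probability of skipping dropout in an iteration
    eta = 0.1,
    max_depth = 3
  )
)

# minimum test CV RMSE
min(ames_xgb_dart$evaluation_log$test_rmse_mean)
```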
### 12.5.2 Tuning strategy

The general tuning strategy for exploring **xgboost** hyperparameters builds on the basic and stochastic GBM tuning strategies:

1. Crank up the number of trees and tune the learning rate with early stopping.
2. Tune tree-specific hyperparameters.
3. Explore stochastic GBM attributes.
4. If substantial overfitting occurs (e.g., large differences between train and CV error), explore regularization hyperparameters.
5. If you find hyperparameter values that are substantially different from default settings, be sure to retune the learning rate.
6. Obtain the final “optimal” model.

Running an XGBoost model with **xgboost** requires some additional data preparation. **xgboost** requires a matrix input for the features and the response to be a vector. Consequently, to provide a matrix input of the features we need to encode our categorical variables numerically (e.g., one-hot encoding, label encoding). The following numerically label encodes all categorical features and converts the training data frame to a matrix.

```
library(recipes)
xgb_prep <- recipe(Sale_Price ~ ., data = ames_train) %>%
  step_integer(all_nominal()) %>%
  prep(training = ames_train, retain = TRUE) %>%
  juice()

X <- as.matrix(xgb_prep[setdiff(names(xgb_prep), "Sale_Price")])
Y <- xgb_prep$Sale_Price
```

**xgboost** will accept three different kinds of matrices for the features: an ordinary R matrix, sparse matrices from the **Matrix** package, or **xgboost**’s internal `xgb.DMatrix` objects. See `?xgboost::xgboost` for details.

Next, we went through a series of grid searches similar to the previous sections and found the below model hyperparameters (provided via the `params` argument) to perform quite well. Our RMSE is slightly lower than the best regular and stochastic GBM models thus far.

```
set.seed(123)
ames_xgb <- xgb.cv(
  data = X,
  label = Y,
  nrounds = 6000,
  objective = "reg:linear",
  early_stopping_rounds = 50,
  nfold = 10,
  params = list(
    eta = 0.1,
    max_depth = 3,
    min_child_weight = 3,
    subsample = 0.8,
    colsample_bytree = 1.0),
  verbose = 0
)

# minimum test CV RMSE
min(ames_xgb$evaluation_log$test_rmse_mean)
## [1] 20488
```

Next, we assess if overfitting is limiting our model’s performance by performing a grid search that examines various regularization parameters (`gamma`, `lambda`, and `alpha`). Our results indicate that the best-performing models use `lambda` equal to 1, and it doesn’t appear that `alpha` or `gamma` have any consistent patterns. However, even when `lambda` equals 1, our CV RMSE has no improvement over our previous XGBoost model. Due to the low learning rate (`eta`), this Cartesian grid search takes a long time. We stopped the search after 2 hours, and only 98 of the 245 models had completed.
```
# hyperparameter grid
hyper_grid <- expand.grid(
  eta = 0.01,
  max_depth = 3,
  min_child_weight = 3,
  subsample = 0.5,
  colsample_bytree = 0.5,
  gamma = c(0, 1, 10, 100, 1000),
  lambda = c(0, 1e-2, 0.1, 1, 100, 1000, 10000),
  alpha = c(0, 1e-2, 0.1, 1, 100, 1000, 10000),
  rmse = 0,   # a place to dump RMSE results
  trees = 0   # a place to dump required number of trees
)

# grid search
for(i in seq_len(nrow(hyper_grid))) {
  set.seed(123)
  m <- xgb.cv(
    data = X,
    label = Y,
    nrounds = 4000,
    objective = "reg:linear",
    early_stopping_rounds = 50,
    nfold = 10,
    verbose = 0,
    params = list(
      eta = hyper_grid$eta[i],
      max_depth = hyper_grid$max_depth[i],
      min_child_weight = hyper_grid$min_child_weight[i],
      subsample = hyper_grid$subsample[i],
      colsample_bytree = hyper_grid$colsample_bytree[i],
      gamma = hyper_grid$gamma[i],
      lambda = hyper_grid$lambda[i],
      alpha = hyper_grid$alpha[i]
    )
  )
  hyper_grid$rmse[i] <- min(m$evaluation_log$test_rmse_mean)
  hyper_grid$trees[i] <- m$best_iteration
}

# results
hyper_grid %>%
  filter(rmse > 0) %>%
  arrange(rmse) %>%
  glimpse()
## Observations: 98
## Variables: 10
## $ eta              <dbl> 0.01, 0.01, 0.01, 0.01, 0.01, 0.0…
## $ max_depth        <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
## $ min_child_weight <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
## $ subsample        <dbl> 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5…
## $ colsample_bytree <dbl> 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5…
## $ gamma            <dbl> 0, 1, 10, 100, 1000, 0, 1, 10, 10…
## $ lambda           <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
## $ alpha            <dbl> 0.00, 0.00, 0.00, 0.00, 0.00, 0.1…
## $ rmse             <dbl> 20488, 20488, 20488, 20488, 20488…
## $ trees            <dbl> 3944, 3944, 3944, 3944, 3944, 381…
```

Once you’ve found the optimal hyperparameters, fit the final model with `xgb.train` or `xgboost`. Be sure to use the optimal number of trees found during cross validation. In our example, adding regularization provides no improvement so we exclude it from our final model.

```
# optimal parameter list
params <- list(
  eta = 0.01,
  max_depth = 3,
  min_child_weight = 3,
  subsample = 0.5,
  colsample_bytree = 0.5
)

# train final model
xgb.fit.final <- xgboost(
  params = params,
  data = X,
  label = Y,
  nrounds = 3944,
  objective = "reg:linear",
  verbose = 0
)
```
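The section stops at the final fit; below is a short sketch of scoring a holdout set with it, assuming an `ames_test` split exists alongside `ames_train` (the test data must be run through the same label-encoding blueprint used for training; the object names here are our own):

```
# apply the same preprocessing blueprint to the test set
xgb_blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>%
  step_integer(all_nominal()) %>%
  prep(training = ames_train)

xgb_test <- bake(xgb_blueprint, new_data = ames_test)
X_test <- as.matrix(xgb_test[setdiff(names(xgb_test), "Sale_Price")])

# predict and compute the test RMSE
pred <- predict(xgb.fit.final, X_test)
sqrt(mean((pred - xgb_test$Sale_Price)^2))
```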
12.6 Feature interpretation
----------------------------

Measuring GBM feature importance and effects follows the same construct as random forests. Similar to random forests, the **gbm** and **h2o** packages offer an impurity-based feature importance. **xgboost** actually provides three built-in measures for feature importance:

1. **Gain**: This is equivalent to the impurity measure in random forests (reference Section [11.6](random-forest.html#rf-vip)) and is the most common model-centric metric to use.
2. **Coverage**: The Coverage metric quantifies the relative number of observations influenced by this feature. For example, if you have 100 observations, 4 features and 3 trees, and suppose \\(x\_1\\) is used to decide the leaf node for 10, 5, and 2 observations in \\(tree\_1\\), \\(tree\_2\\) and \\(tree\_3\\) respectively; then the metric will count cover for this feature as \\(10\+5\+2 \= 17\\) observations. This will be calculated for all 4 features and expressed as a percentage.
3. **Frequency**: The percentage representing the relative number of times a particular feature occurs in the trees of the model. In the above example, if \\(x\_1\\) was used for 2 splits, 1 split and 3 splits in each of \\(tree\_1\\), \\(tree\_2\\) and \\(tree\_3\\) respectively, then the weight for \\(x\_1\\) will be \\(2\+1\+3\=6\\). The frequency for \\(x\_1\\) is calculated as its percentage weight over the weights of all \\(x\_p\\) features.

If we examine the top 10 influential features in our final model using the impurity (gain) metric, we see very similar results to those we saw with our random forest model (Section [11.6](random-forest.html#rf-vip)). The primary difference is we no longer see `Neighborhood` as a top influential feature, which is likely a result of how we label encoded the categorical features.

By default, `vip::vip()` uses the gain method for feature importance but you can assess the other types using the `type` argument. You can also use `xgboost::xgb.ggplot.importance()` to plot the various feature importance measures but you need to first run `xgb.importance()` on the final model.
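A short sketch of doing that directly with **xgboost**; the importance table returned by `xgb.importance()` contains the Gain, Cover, and Frequency measures described above:

```
# gain, cover, and frequency for each feature in the final model
importance_matrix <- xgb.importance(model = xgb.fit.final)
head(importance_matrix)

# plot the top 10 features by gain
xgb.ggplot.importance(importance_matrix, top_n = 10, measure = "Gain")
```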
```
# variable importance plot
vip::vip(xgb.fit.final)
```

Figure 12.8: Top 10 most important variables based on the impurity (gain) metric.

12.7 Final thoughts
--------------------

GBMs are one of the most powerful ensemble algorithms and are often first-in-class with predictive accuracy. Although they are less intuitive and more computationally demanding than many other machine learning algorithms, they are essential to have in your toolbox. Although we discussed the most popular GBM algorithms, realize there are alternative algorithms not covered here. For example, LightGBM (Ke et al. [2017](#ref-ke2017lightgbm)) is a gradient boosting framework that focuses on *leaf-wise* tree growth versus the traditional level-wise tree growth. This means as a tree is grown deeper, it focuses on extending a single branch versus growing multiple branches (reference Figure [9.2](DT.html#fig:decision-tree-terminology)). CatBoost (Dorogush, Ershov, and Gulin [2018](#ref-dorogush2018catboost)) is another gradient boosting framework that focuses on using efficient methods for encoding categorical features during the gradient boosting process. Both frameworks are available in R.
Chapter 13 Deep Learning
========================

Machine learning algorithms typically search for the optimal representation of data using a feedback signal in the form of an objective function. However, most machine learning algorithms only have the ability to use one or two layers of data transformation to learn the output representation. We call these *shallow* models[34](#fn34) since they only use 1–2 representations of the feature space. As data sets continue to grow in the dimensions of the feature space, finding the optimal output representation with a shallow model is not always possible. Deep learning provides a multi-layer approach to learn data representations, typically performed with a *multi-layer neural network*. Like other machine learning algorithms, *deep neural networks* (DNN) perform learning by mapping features to targets through a process of simple data transformations and feedback signals; however, DNNs place an emphasis on learning successive layers of meaningful representations. Although an intimidating subject, the overarching concept is rather simple and has proven highly successful across a wide range of problems (e.g., image classification, speech recognition, autonomous driving). This chapter will teach you the fundamentals of building a simple *feedforward* DNN, which is the foundation for the more advanced deep learning models. Our online resources will provide content covering additional deep learning models such as convolutional, recurrent, and long short-term memory neural networks. Moreover, Chollet and Allaire ([2018](#ref-chollet2018deep)) is an excellent, in-depth text on applying deep learning methods with R.

13.1 Prerequisites
-------------------

This tutorial will use a few supporting packages but the main emphasis will be on the **keras** package (Allaire and Chollet [2019](#ref-R-keras)). Additional content provided online illustrates how to execute the same procedures we cover here with the **h2o** package. For more information on installing both CPU and GPU-based Keras and TensorFlow software, visit [https://keras.rstudio.com](https://keras.rstudio.com/).

```
# Helper packages
library(dplyr)         # for basic data wrangling

# Modeling packages
library(keras)         # for fitting DNNs
library(tfruns)        # for additional grid search & model training functions

# Modeling helper package - not necessary for reproducibility
library(tfestimators)  # provides grid search & model training interface
```

We’ll use the MNIST data to illustrate various DNN concepts. With DNNs, it is important to note a few items:

1. Feedforward DNNs require all feature inputs to be numeric. Consequently, if your data contains categorical features they will need to be numerically encoded (e.g., one-hot encoded, integer label encoded, etc.).
2. Due to the data transformation process that DNNs perform, they are highly sensitive to the individual scale of the feature values. Consequently, we should standardize our features first. Although the MNIST features are measured on the same scale (0–255), they are not standardized (i.e., have mean zero and unit variance); the code chunk below standardizes the MNIST data to resolve this.[35](#fn35)
3. Since we are working with a multinomial response (0–9), **keras** requires our response to be a one-hot encoded matrix, which can be accomplished with the **keras** function `to_categorical()`.
```
# Import MNIST training data
mnist <- dslabs::read_mnist()
mnist_x <- mnist$train$images
mnist_y <- mnist$train$labels

# Rename columns and standardize feature values
colnames(mnist_x) <- paste0("V", 1:ncol(mnist_x))
mnist_x <- mnist_x / 255

# One-hot encode response
mnist_y <- to_categorical(mnist_y, 10)
```

13.2 Why deep learning
-----------------------

Neural networks originated in the computer science field to answer questions that normal statistical approaches were not designed to answer at the time. The MNIST data is one of the most common examples you will find, where the goal is to analyze hand-written digits and predict the numbers written. This problem was originally presented to AT&T Bell Labs to help build automatic mail-sorting machines for the USPS (LeCun et al. [1990](#ref-lecun1990handwritten)).

Figure 13.1: Sample images from the MNIST test dataset.

This problem is quite unique because many different features of the data can be represented. As humans, we look at these numbers and consider features such as angles, edges, thickness, completeness of circles, etc. We interpret these different representations of the features and combine them to recognize the digit. In essence, neural networks perform the same task albeit in a far simpler manner than our brains. At their most basic levels, neural networks have three layers: an *input layer*, a *hidden layer*, and an *output layer*. The input layer consists of all of the original input features. The majority of the *learning* takes place in the hidden layer, and the output layer outputs the final predictions.

Figure 13.2: Representation of a simple feedforward neural network.

Although simple on the surface, the computations being performed inside a network require lots of data to learn and are computationally intense, rendering them impractical to use in the earlier days. However, over the past several decades, advancements in computer hardware (off-the-shelf CPUs became faster and GPUs were created) made the computations more practical, the growth in data collection made them more relevant, and advancements in the underlying algorithms made the *depth* (number of hidden layers) of neural nets less of a constraint. These advancements have resulted in the ability to run very deep and highly parameterized neural networks (i.e., DNNs).

Figure 13.3: Representation of a deep feedforward neural network.

Such DNNs allow for very complex representations of data to be modeled, which has opened the door to analyzing high-dimensional data (e.g., images, videos, and sound bytes). In some machine learning approaches, features of the data need to be defined prior to modeling (e.g., ordinary linear regression). One can only imagine trying to create the features for the digit recognition problem above. However, with DNNs, the hidden layers provide the means to auto-identify useful features. A simple way to think of this is to go back to our digit recognition problem. The first hidden layer may learn about the angles of the line, the next hidden layer may learn about the thickness of the lines, the next may learn the location and completeness of the circles, etc. Aggregating these different attributes together by linking the layers allows the model to accurately predict what digit each image represents. This is the reason that DNNs are so popular for very complex problems where feature engineering is important, but rather difficult to do by hand (e.g., facial recognition).
However, at their core, DNNs perform successive non-linear transformations across each layer, allowing DNNs to model very complex and non-linear relationships. This can make DNNs suitable machine learning approaches for traditional regression and classification problems as well. But it is important to keep in mind that deep learning thrives when the dimensions of your data are sufficiently large (e.g., very large training sets). As the number of observations (\\(n\\)) and feature inputs (\\(p\\)) decrease, shallow machine learning approaches tend to perform just as well, if not better, and are more efficient.

13.3 Feedforward DNNs
----------------------

Multiple DNN architectures exist and, as interest and research in this area increases, the field will continue to flourish. For example, convolutional neural networks (CNNs or ConvNets) have widespread applications in image and video recognition, recurrent neural networks (RNNs) are often used with speech recognition, and long short-term memory neural networks (LSTMs) are advancing automated robotics and machine translation. However, fundamental to all these methods is the feedforward DNN (aka multilayer perceptron). Feedforward DNNs are densely connected layers where inputs influence each successive layer which then influences the final output layer.

Figure 13.4: Feedforward neural network.

To build a feedforward DNN we need four key components:

1. Input data ✔
2. A pre-defined network architecture;
3. A feedback mechanism to help the network learn;
4. A model training approach.

The next few sections will walk you through steps 2)–4) to build a feedforward DNN for the MNIST data.

13.4 Network architecture
--------------------------

When developing the network architecture for a feedforward DNN, you really only need to worry about two features: (1) layers and nodes, and (2) activation.

### 13.4.1 Layers and nodes

The layers and nodes are the building blocks of our DNN and they decide how complex the network will be. Layers are considered *dense* (fully connected) when all the nodes in each successive layer are connected. Consequently, the more layers and nodes you add, the more opportunities for new features to be learned (commonly referred to as the model’s *capacity*).[36](#fn36) Beyond the *input layer*, which is just our original predictor variables, there are two main types of layers to consider: *hidden layers* and an *output layer*.

#### 13.4.1.1 Hidden layers

There is no well-defined approach for selecting the number of hidden layers and nodes; rather, these are the first of many hyperparameters to tune. With regular tabular data, 2–5 hidden layers are often sufficient but your best bet is to err on the side of more layers rather than fewer. The number of nodes you incorporate in these hidden layers is largely determined by the number of features in your data. Often, the number of nodes in each layer is equal to or less than the number of features but this is not a hard requirement. It is important to note that the number of hidden layers and nodes in your network can affect its computational complexity (e.g., training time). When dealing with many features and, therefore, many nodes, training deep models with many hidden layers can be computationally more efficient than training a single-layer network with the same number of high-volume nodes (Goodfellow, Bengio, and Courville [2016](#ref-goodfellow2016deep)). Consequently, the goal is to find the simplest model with optimal performance.
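To make the idea of capacity concrete, here is a quick back-of-the-envelope sketch of how layers and nodes translate into trainable parameters for the 784 → 128 → 64 → 10 MNIST architecture used in the next sections (each dense layer has (inputs + 1 bias) × nodes weights); the helper function is ours, purely for illustration:

```
# parameters in a dense layer = (number of inputs + 1 bias) * number of nodes
dense_params <- function(n_in, n_out) (n_in + 1) * n_out

dense_params(784, 128) +   # input -> first hidden layer:   100480
  dense_params(128, 64) +  # first -> second hidden layer:    8256
  dense_params(64, 10)     # second hidden -> output layer:    650
## [1] 109386
```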
#### 13.4.1.2 Output layers

The choice of output layer is driven by the modeling task. For regression problems, your output layer will contain one node that outputs the final predicted value. Classification problems are different. If you are predicting a binary output (e.g., True/False, Win/Loss), your output layer will still contain only one node and that node will predict the probability of success (however you define success). However, if you are predicting a multinomial output, the output layer will contain the same number of nodes as the number of classes being predicted. For example, in our MNIST data, we are predicting 10 classes (0–9); therefore, the output layer will have 10 nodes and the output would provide the probability of each class.

#### 13.4.1.3 Implementation

The **keras** package allows us to develop our network with a layering approach. First, we initiate our sequential feedforward DNN architecture with `keras_model_sequential()` and then add some dense layers. This example creates two hidden layers, the first with 128 nodes and the second with 64, followed by an output layer with 10 nodes. One thing to point out is that the first layer needs the `input_shape` argument to equal the number of features in your data; however, the successive layers are able to dynamically interpret the number of expected inputs based on the previous layer.

```
model <- keras_model_sequential() %>%
  layer_dense(units = 128, input_shape = ncol(mnist_x)) %>%
  layer_dense(units = 64) %>%
  layer_dense(units = 10)
```

### 13.4.2 Activation

A key component with neural networks is what’s called *activation*. In the human brain, the biologic neuron receives inputs from many adjacent neurons. When these inputs accumulate beyond a certain threshold the neuron is *activated*, suggesting there is a signal. DNNs work in a similar fashion.

#### 13.4.2.1 Activation functions

As stated previously, each node is connected to all the nodes in the previous layer. Each connection gets a weight and then that node adds all the incoming inputs multiplied by its corresponding connection weight plus an extra *bias* parameter (\\(w\_0\\)). The summed total of these inputs becomes an input to an *activation function*; see Figure [13.5](deep-learning.html#fig:perceptron-node).

Figure 13.5: Flow of information in an artificial neuron.

The activation function is simply a mathematical function that determines whether or not there is enough informative input at a node to fire a signal to the next layer. There are multiple [activation functions](https://en.wikipedia.org/wiki/Activation_function) to choose from but the most common ones include:

\\[\\begin{equation} \\tag{13\.1} \\texttt{Linear (identity):} \\;\\; f\\left(x\\right) \= x \\end{equation}\\]

\\[\\begin{equation} \\tag{13\.2} \\texttt{Rectified linear unit (ReLU):} \\;\\; f\\left(x\\right) \= \\begin{cases} 0, \& \\text{for $x\<0$}.\\\\ x, \& \\text{for $x\\geq0$}. \\end{cases} \\end{equation}\\]

\\[\\begin{equation} \\tag{13\.3} \\texttt{Sigmoid:} \\;\\; f\\left(x\\right) \= \\frac{1}{1 \+ e^{\-x}} \\end{equation}\\]

\\[\\begin{equation} \\tag{13\.4} \\texttt{Softmax:} \\;\\; f\\left(x\\right) \= \\frac{e^{x\_i}}{\\sum\_j e^{x\_j}} \\end{equation}\\]

When using rectangular data, the most common approach is to use ReLU activation functions in the hidden layers. The ReLU activation function simply takes the summed weighted inputs and transforms them to \\(0\\) (not fire) or \\(\>0\\) (fire) if there is enough signal.
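Purely to illustrate Equations (13.1)–(13.4), the activations can be written as a few lines of plain R (keras provides its own implementations; this sketch is only for intuition):

```
# toy implementations of the common activation functions
linear  <- function(x) x
relu    <- function(x) pmax(0, x)
sigmoid <- function(x) 1 / (1 + exp(-x))
softmax <- function(x) exp(x) / sum(exp(x))

relu(c(-2, -0.5, 0, 1.5))   # negative inputs are zeroed out
sigmoid(0)                  # 0.5, the midpoint of the sigmoid
softmax(c(1, 2, 3))         # probabilities that sum to 1
```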
For the output layers we use the linear activation function for regression problems, the sigmoid activation function for binary classification problems, and softmax for multinomial classification problems.

#### 13\.4\.2\.2 Implementation

To control the activation functions used in our layers we specify the `activation` argument. For the two hidden layers we add the ReLU activation function and for the output layer we specify `activation = "softmax"` (since MNIST is a multinomial classification problem). As before, the first layer's `input_shape` equals the number of features in our data.

```
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")
```

Next, we need to incorporate a feedback mechanism to help our model learn.

13\.5 Backpropagation
---------------------

On the first run (or *forward pass*), the DNN will select a batch of observations, randomly assign weights across all the node connections, and predict the output. The engine of a neural network is how it assesses its own accuracy and automatically adjusts the weights across all the node connections to improve that accuracy. This process is called *backpropagation*. To perform backpropagation we need two things:

1. An objective function;
2. An optimizer.
``` model <- keras_model_sequential() %>% # Network architecture layer_dense(units = 128, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_dense(units = 64, activation = "relu") %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = 'categorical_crossentropy', optimizer = optimizer_rmsprop(), metrics = c('accuracy') ) ``` 13\.6 Model training -------------------- We’ve created a base model, now we just need to train it with some data. To do so we feed our model into a `fit()` function along with our training data. We also provide a few other arguments that are worth mentioning: * `batch_size`: As we mentioned in the last section, the DNN will take a batch of data to run through the mini\-batch SGD process. Batch sizes can be between one and several hundred. Small values will be more computationally burdensome while large values provide less feedback signal. Values are typically provided as a power of two that fit nicely into the memory requirements of the GPU or CPU hardware like 32, 64, 128, 256, and so on. * `epochs`: An *epoch* describes the number of times the algorithm sees the entire data set. So, each time the algorithm has seen all samples in the data set, an epoch has completed. In our training set, we have 60,000 observations so running batches of 128 will require 469 passes for one epoch. The more complex the features and relationships in your data, the more epochs you’ll require for your model to learn, adjust the weights, and minimize the loss function. * `validation_split`: The model will hold out XX% of the data so that we can compute a more accurate estimate of an out\-of\-sample error rate. * `verbose`: We set this to `FALSE` for brevity; however, when `TRUE` you will see a live update of the loss function in your RStudio IDE. Plotting the output shows how our loss function (and specified metrics) improve for each epoch. We see that our model’s performance is optimized at 5–10 epochs and then proceeds to overfit, which results in a flatlined accuracy rate. The training and validation below took \~30 seconds. ``` # Train the model fit1 <- model %>% fit( x = mnist_x, y = mnist_y, epochs = 25, batch_size = 128, validation_split = 0.2, verbose = FALSE ) # Display output fit1 ## Trained on 48,000 samples, validated on 12,000 samples (batch_size=128, epochs=25) ## Final epoch (plot to see history): ## val_loss: 0.1512 ## val_acc: 0.9773 ## loss: 0.002308 ## acc: 0.9994 plot(fit1) ``` Figure 13\.6: Training and validation performance over 25 epochs. 13\.7 Model tuning ------------------ Now that we have an understanding of producing and running a DNN model, the next task is to find an optimal one by tuning different hyperparameters. There are many ways to tune a DNN. Typically, the tuning process follows these general steps; however, there is often a lot of iteration among these: 1. Adjust model capacity (layers \& nodes); 2. Add batch normalization; 3. Add regularization; 4. Adjust learning rate. ### 13\.7\.1 Model capacity Typically, we start by maximizing predictive performance based on model capacity. Higher model capacity (i.e., more layers and nodes) results in more *memorization capacity* for the model. On one hand, this can be good as it allows the model to learn more features and patterns in the data. On the other hand, a model with too much capacity will overfit to the training data. Typically, we look to maximize validation error performance while minimizing model capacity. 
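To make the capacity trade\-off concrete, a deliberately low\-capacity network (a single hidden layer with 16 nodes, the smallest setting assessed below) could be specified as follows. This is just a sketch that reuses the same compile settings as before; the object name `model_small` is illustrative.

```
# A low-capacity network: one hidden layer with 16 nodes
model_small <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_rmsprop(),
    metrics = c('accuracy')
  )
```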
As an example, we assessed nine different model capacity settings that include the following number of layers and nodes while maintaining all other parameters the same as the models in the previous sections (i.e.. our medium sized 2\-hidden layer network contains 64 nodes in the first layer and 32 in the second.). Table 13\.1: Model capacities assessed represented as number of layers and nodes per layer. | | Hidden Layers | | | | --- | --- | --- | --- | | Size | 1 | 2 | 3 | | small | 16 | 16, 8 | 16, 8, 4 | | medium | 64 | 64, 32 | 64, 32, 16 | | large | 256 | 256, 128 | 256, 128, 64 | The models that performed best had 2–3 hidden layers with a medium to large number of nodes. All the “small” models underfit and would require more epochs to identify their minimum validation error. The large 3\-layer model overfits extremely fast. Preferably, we want a model that overfits more slowly such as the 1\- and 2\-layer medium and large models (Chollet and Allaire [2018](#ref-chollet2018deep)). If none of your models reach a flatlined validation error such as all the “small” models in Figure 13\.7, increase the number of epochs trained. Alternatively, if your epochs flatline early then there is no reason to run so many epochs as you are just wasting computational energy with no gain. We can add a `callback()` function inside of `fit()` to help with this. There are multiple callbacks to help automate certain tasks. One such callback is early stopping, which will stop training if the loss function does not improve for a specified number of epochs. Figure 13\.7: Training and validation performance for various model capacities. ### 13\.7\.2 Batch normalization We’ve normalized the data before feeding it into our model, but data normalization should be a concern after every transformation performed by the network. Batch normalization (Ioffe and Szegedy [2015](#ref-ioffe2015batch)) is a recent advancement that adaptively normalizes data even as the mean and variance change over time during training. The main effect of batch normalization is that it helps with gradient propagation, which allows for deeper networks. Consequently, as the depth of your networks increase, batch normalization becomes more important and can improve performance. We can add batch normalization by including `layer_batch_normalization()` after each middle layer within the network architecture section of our code: ``` model_w_norm <- keras_model_sequential() %>% # Network architecture with batch normalization layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_batch_normalization() %>% layer_dense(units = 128, activation = "relu") %>% layer_batch_normalization() %>% layer_dense(units = 64, activation = "relu") %>% layer_batch_normalization() %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` If we add batch normalization to each of the previously assessed models, we see a couple patterns emerge. One, batch normalization often helps to minimize the validation loss sooner, which increases efficiency of model training. Two, we see that for the larger, more complex models (3\-layer medium and 2\- and 3\-layer large), batch normalization helps to reduce the overall amount of overfitting. In fact, with batch normalization, our large 3\-layer network now has the best validation error. Figure 13\.8: The effect of batch normalization on validation loss for various model capacities. 
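As noted at the end of Section 13\.7\.1, a callback can also stop training once the validation loss stops improving. For example, to train the batch\-normalized model above with early stopping (the object name `fit_w_norm` and the `patience` value are illustrative):

```
# Stop training if the validation loss fails to improve for 3 consecutive epochs
fit_w_norm <- model_w_norm %>% fit(
  x = mnist_x,
  y = mnist_y,
  epochs = 35,
  batch_size = 128,
  validation_split = 0.2,
  callbacks = list(callback_early_stopping(patience = 3)),
  verbose = FALSE
)
```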
### 13\.7\.3 Regularization As we’ve discussed in Chapters [6](regularized-regression.html#regularized-regression) and [12](gbm.html#gbm), placing constraints on a model’s complexity with regularization is a common way to mitigate overfitting. DNNs models are no different and there are two common approaches to regularizing neural networks. We can use an \\(L\_1\\) or \\(L\_2\\) penalty to add a cost to the size of the node weights, although the most common penalizer is the \\(L\_2\\) *norm*, which is called *weight decay* in the context of neural networks.[39](#fn39) Regularizing the weights will force small signals (noise) to have weights nearly equal to zero and only allow consistently strong signals to have relatively larger weights. As you add more layers and nodes, regularization with \\(L\_1\\) or \\(L\_2\\) penalties tend to have a larger impact on performance. Since having too many hidden units runs the risk of overparameterization, \\(L\_1\\) or \\(L\_2\\) penalties can shrink the extra weights toward zero to reduce the risk of overfitting. We can add an \\(L\_1\\), \\(L\_2\\), or a combination of the two by adding `regularizer_XX()` within each layer. ``` model_w_reg <- keras_model_sequential() %>% # Network architecture with L1 regularization and batch normalization layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x), kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 128, activation = "relu", kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 64, activation = "relu", kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` Dropout (Srivastava et al. [2014](#ref-srivastava2014dropout)[b](#ref-srivastava2014dropout); Hinton et al. [2012](#ref-hinton2012improving)) is an additional regularization method that has become one of the most common and effectively used approaches to minimize overfitting in neural networks. Dropout in the context of neural networks randomly drops out (setting to zero) a number of output features in a layer during training. By randomly removing different nodes, we help prevent the model from latching onto happenstance patterns (noise) that are not significant. Typically, dropout rates range from 0\.2–0\.5 but can differ depending on the data (i.e., this is another tuning parameter). Similar to batch normalization, we can apply dropout by adding `layer_dropout()` in between the layers. ``` model_w_drop <- keras_model_sequential() %>% # Network architecture with 20% dropout layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 128, activation = "relu") %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 64, activation = "relu") %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` For our MNIST data, we find that adding an \\(L\_1\\) or \\(L\_2\\) cost does not improve our loss function. However, adding dropout does improve performance. For example, our large 3\-layer model with 256, 128, and 64 nodes per respective layer so far has the best performance with a cross\-entropy loss of 0\.0818\. 
However, as illustrated in Figure [13\.8](deep-learning.html#fig:model-capacity-with-batch-norm-plot), this network still suffers from overfitting. Figure [13\.9](deep-learning.html#fig:model-with-regularization-plot) illustrates the same 3\-layer model with 256, 128, and 64 nodes per respective layers, batch normalization, and dropout rates of 0\.4, 0\.3, and 0\.2 between each respective layer. We see a significant improvement in overfitting, which results in an improved loss score. Note that as you constrain overfitting, often you need to increase the number of epochs to allow the network enough iterations to find the global minimal loss. Figure 13\.9: The effect of regularization with dropout on validation loss. ### 13\.7\.4 Adjust learning rate Another issue to be concerned with is whether or not we are finding a global minimum versus a local minimum with our loss value. The mini\-batch SGD optimizer we use will take incremental steps down our loss gradient until it no longer experiences improvement. The size of the incremental steps (i.e., the learning rate) will determine whether or not we get stuck in a local minimum instead of making our way to the global minimum. Figure 13\.10: A local minimum and a global minimum. There are two ways to circumvent this problem: 1. The different optimizers (e.g., RMSProp, Adam, Adagrad) have different algorithmic approaches for deciding the learning rate. We can adjust the learning rate of a given optimizer or we can adjust the optimizer used. 2. We can automatically adjust the learning rate by a factor of 2–10 once the validation loss has stopped improving. The following builds onto our optimal model by changing the optimizer to Adam (Kingma and Ba [2014](#ref-kingma2014adam)) and reducing the learning rate by a factor of 0\.05 as our loss improvement begins to stall. We also add an early stopping argument to reduce unnecessary runtime. We see a slight improvement in performance and our loss curve in Figure [13\.11](deep-learning.html#fig:adj-lrn-rate) illustrates how we stop model training just as we begin to overfit. ``` model_w_adj_lrn <- keras_model_sequential() %>% layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_batch_normalization() %>% layer_dropout(rate = 0.4) %>% layer_dense(units = 128, activation = "relu") %>% layer_batch_normalization() %>% layer_dropout(rate = 0.3) %>% layer_dense(units = 64, activation = "relu") %>% layer_batch_normalization() %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 10, activation = "softmax") %>% compile( loss = 'categorical_crossentropy', optimizer = optimizer_adam(), metrics = c('accuracy') ) %>% fit( x = mnist_x, y = mnist_y, epochs = 35, batch_size = 128, validation_split = 0.2, callbacks = list( callback_early_stopping(patience = 5), callback_reduce_lr_on_plateau(factor = 0.05) ), verbose = FALSE ) model_w_adj_lrn ## Trained on 48,000 samples, validated on 12,000 samples (batch_size=128, epochs=20) ## Final epoch (plot to see history): ## val_loss: 0.07223 ## val_acc: 0.9808 ## loss: 0.05366 ## acc: 0.9832 ## lr: 0.001 # Optimal min(model_w_adj_lrn$metrics$val_loss) ## [1] 0.0699492 max(model_w_adj_lrn$metrics$val_acc) ## [1] 0.981 # Learning rate plot(model_w_adj_lrn) ``` Figure 13\.11: Training and validation performance on our 3\-layer large network with dropout, adjustable learning rate, and using an Adam mini\-batch SGD optimizer. 
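To illustrate option 1\) above, the learning rate of a given optimizer can also be set explicitly rather than relying on its default. A minimal sketch (depending on your **keras** version the argument is named `lr` or `learning_rate`, and the value shown is arbitrary):

```
# Explicitly set a smaller learning rate for the Adam optimizer
opt <- optimizer_adam(lr = 1e-4)  # smaller steps down the loss gradient

# ...then pass it to compile() as before, e.g., compile(optimizer = opt, ...)
```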
13\.8 Grid Search
-----------------

Hyperparameter tuning for DNNs tends to be a bit more involved than for other ML models due to the number of hyperparameters that can/should be assessed and the dependencies between these parameters. For most implementations you need to predetermine the number of layers you want and then establish your search grid. If using **h2o**'s `h2o.deeplearning()` function, then creating and executing the search grid follows the same approach illustrated in Sections [11\.5](random-forest.html#rf-tuning-strategy) and [12\.4\.2](gbm.html#stochastic-gbm-h2o). However, for **keras**, we use *flags* in a similar manner but their implementation provides added flexibility for tracking, visualizing, and managing training runs with the **tfruns** package (Allaire [2018](#ref-R-tfruns)). For a full discussion regarding flags see the <https://tensorflow.rstudio.com/tools/> online resource.

In this example we provide a training script [mnist\-grid\-search.R](http://bit.ly/mnist-grid-search) that will be sourced for the grid search. To create and perform a grid search, we first establish flags for the different hyperparameters of interest. These are considered the default flag values:

```
FLAGS <- flags(
  # Nodes
  flag_numeric("nodes1", 256),
  flag_numeric("nodes2", 128),
  flag_numeric("nodes3", 64),
  # Dropout
  flag_numeric("dropout1", 0.4),
  flag_numeric("dropout2", 0.3),
  flag_numeric("dropout3", 0.2),
  # Learning parameters
  flag_string("optimizer", "rmsprop"),
  flag_numeric("lr_annealing", 0.1)
)
```

Next, we incorporate the flag parameters within our model:

```
model <- keras_model_sequential() %>%
  layer_dense(units = FLAGS$nodes1, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_dense(units = FLAGS$nodes2, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout2) %>%
  layer_dense(units = FLAGS$nodes3, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout3) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = 'categorical_crossentropy',
    metrics = c('accuracy'),
    optimizer = FLAGS$optimizer
  ) %>%
  fit(
    x = mnist_x,
    y = mnist_y,
    epochs = 35,
    batch_size = 128,
    validation_split = 0.2,
    callbacks = list(
      callback_early_stopping(patience = 5),
      callback_reduce_lr_on_plateau(factor = FLAGS$lr_annealing)
    ),
    verbose = FALSE
  )
```

To execute the grid search we use `tfruns::tuning_run()`. Since our grid search assesses 2,916 combinations, we perform a random grid search and assess only 5% of the total models (`sample = 0.05`, which equates to 145 models). It becomes quite obvious that the hyperparameter search space explodes quickly with DNNs since there are so many model attributes that can be adjusted. Consequently, often a full Cartesian grid search is not possible due to time and computational constraints. The optimal model has a validation loss of 0\.0686 and a validation accuracy rate of 0\.9806, and the code chunk below shows the hyperparameter settings for this optimal model. The following grid search took us over 1\.5 hours to run!
```
# Assess a random sample of the hyperparameter combinations defined below
runs <- tuning_run("scripts/mnist-grid-search.R",
  flags = list(
    nodes1 = c(64, 128, 256),
    nodes2 = c(64, 128, 256),
    nodes3 = c(64, 128, 256),
    dropout1 = c(0.2, 0.3, 0.4),
    dropout2 = c(0.2, 0.3, 0.4),
    dropout3 = c(0.2, 0.3, 0.4),
    optimizer = c("rmsprop", "adam"),
    lr_annealing = c(0.1, 0.05)
  ),
  sample = 0.05
)

runs %>%
  filter(metric_val_loss == min(metric_val_loss)) %>%
  glimpse()
## Observations: 1
## Variables: 31
## $ run_dir            <chr> "runs/2019-04-27T14-44-38Z"
## $ metric_loss        <dbl> 0.0598
## $ metric_acc         <dbl> 0.9806
## $ metric_val_loss    <dbl> 0.0686
## $ metric_val_acc     <dbl> 0.9806
## $ flag_nodes1        <int> 256
## $ flag_nodes2        <int> 128
## $ flag_nodes3        <int> 256
## $ flag_dropout1      <dbl> 0.4
## $ flag_dropout2      <dbl> 0.2
## $ flag_dropout3      <dbl> 0.3
## $ flag_optimizer     <chr> "adam"
## $ flag_lr_annealing  <dbl> 0.05
## $ samples            <int> 48000
## $ validation_samples <int> 12000
## $ batch_size         <int> 128
## $ epochs             <int> 35
## $ epochs_completed   <int> 17
## $ metrics            <chr> "runs/2019-04-27T14-44-38Z/tfruns.d/metrics.json"
## $ model              <chr> "Model\n_______________________________________________________…
## $ loss_function      <chr> "categorical_crossentropy"
## $ optimizer          <chr> "<tensorflow.python.keras.optimizers.Adam>"
## $ learning_rate      <dbl> 0.001
## $ script             <chr> "mnist-grid-search.R"
## $ start              <dttm> 2019-04-27 14:44:38
## $ end                <dttm> 2019-04-27 14:45:39
## $ completed          <lgl> TRUE
## $ output             <chr> "\n> #' Trains a feedforward DL model on the MNIST dataset.\n> …
## $ source_code        <chr> "runs/2019-04-27T14-44-38Z/tfruns.d/source.tar.gz"
## $ context            <chr> "local"
## $ type               <chr> "training"
```

13\.9 Final thoughts
--------------------

Training DNNs often requires more time and attention than other ML algorithms. With many other algorithms, the search space for finding an optimal model is small enough that Cartesian grid searches can be executed rather quickly. With DNNs, more thought, time, and experimentation are often required up front to establish a basic network architecture to build a grid search around. However, even with prior experimentation to reduce the scope of a grid search, the large number of hyperparameters still results in an exploding search space that can usually only be efficiently searched at random.

Historically, training neural networks was quite slow since runtime requires \\(O\\left(NpML\\right)\\) operations where \\(N \=\\) \# observations, \\(p \=\\) \# features, \\(M \=\\) \# hidden nodes, and \\(L \=\\) \# epochs. Fortunately, software has advanced tremendously over the past decade to make execution fast and efficient. With open source software such as TensorFlow and Keras available via R APIs, performing state\-of\-the\-art deep learning methods is much more efficient, plus you get all the added benefits these open source tools provide (e.g., distributed computations across CPUs and GPUs, more advanced DNN architectures such as convolutional and recurrent neural nets, autoencoders, reinforcement learning, and more!).
Consequently, the more layers and nodes you add the more opportunities for new features to be learned (commonly referred to as the model’s *capacity*).[36](#fn36) Beyond the *input layer*, which is just our original predictor variables, there are two main types of layers to consider: *hidden layers* and an *output layer*. #### 13\.4\.1\.1 Hidden layers There is no well\-defined approach for selecting the number of hidden layers and nodes; rather, these are the first of many hyperparameters to tune. With regular tabular data, 2–5 hidden layers are often sufficient but your best bet is to err on the side of more layers rather than fewer. The number of nodes you incorporate in these hidden layers is largely determined by the number of features in your data. Often, the number of nodes in each layer is equal to or less than the number of features but this is not a hard requirement. It is important to note that the number of hidden layers and nodes in your network can affect its computational complexity (e.g., training time). When dealing with many features and, therefore, many nodes, training deep models with many hidden layers can be computationally more efficient than training a single layer network with the same number of high volume nodes (Goodfellow, Bengio, and Courville [2016](#ref-goodfellow2016deep)). Consequently, the goal is to find the simplest model with optimal performance. #### 13\.4\.1\.2 Output layers The choice of output layer is driven by the modeling task. For regression problems, your output layer will contain one node that outputs the final predicted value. Classification problems are different. If you are predicting a binary output (e.g., True/False, Win/Loss), your output layer will still contain only one node and that node will predict the probability of success (however you define success). However, if you are predicting a multinomial output, the output layer will contain the same number of nodes as the number of classes being predicted. For example, in our MNIST data, we are predicting 10 classes (0–9\); therefore, the output layer will have 10 nodes and the output would provide the probability of each class. #### 13\.4\.1\.3 Implementation The **keras** package allows us to develop our network with a layering approach. First, we initiate our sequential feedforward DNN architecture with `keras_model_sequential()` and then add some dense layers. This example creates two hidden layers, the first with 128 nodes and the second with 64, followed by an output layer with 10 nodes. One thing to point out is that the first layer needs the `input_shape` argument to equal the number of features in your data; however, the successive layers are able to dynamically interpret the number of expected inputs based on the previous layer. ``` model <- keras_model_sequential() %>% layer_dense(units = 128, input_shape = ncol(mnist_x)) %>% layer_dense(units = 64) %>% layer_dense(units = 10) ``` ### 13\.4\.2 Activation A key component with neural networks is what’s called *activation*. In the human brain, the biologic neuron receives inputs from many adjacent neurons. When these inputs accumulate beyond a certain threshold the neuron is *activated* suggesting there is a signal. DNNs work in a similar fashion. #### 13\.4\.2\.1 Activation functions As stated previously, each node is connected to all the nodes in the previous layer. 
Each connection gets a weight and then that node adds all the incoming inputs multiplied by its corresponding connection weight plus an extra *bias* parameter (\\(w\_0\\)). The summed total of these inputs become an input to an *activation function*; see [13\.5](deep-learning.html#fig:perceptron-node). Figure 13\.5: Flow of information in an artificial neuron. The activation function is simply a mathematical function that determines whether or not there is enough informative input at a node to fire a signal to the next layer. There are multiple [activation functions](https://en.wikipedia.org/wiki/Activation_function) to choose from but the most common ones include: \\\[\\begin{equation} \\tag{13\.1} \\texttt{Linear (identity):} \\;\\; f\\left(x\\right) \= x \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.2} \\texttt{Rectified linear unit (ReLU):} \\;\\; f\\left(x\\right) \= \\begin{cases} 0, \& \\text{for $x\<0$}.\\\\ x, \& \\text{for $x\\geq0$}. \\end{cases} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.3} \\texttt{Sigmoid:} \\;\\; f\\left(x\\right) \= \\frac{1}{1 \+ e^{\-x}} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.4} \\texttt{Softmax:} \\;\\; f\\left(x\\right) \= \\frac{e^{x\_i}}{\\sum\_j e^{x\_j}} \\end{equation}\\] When using rectangular data, the most common approach is to use ReLU activation functions in the hidden layers. The ReLU activation function is simply taking the summed weighted inputs and transforming them to a \\(0\\) (not fire) or \\(\>0\\) (fire) if there is enough signal. For the output layers we use the linear activation function for regression problems, the sigmoid activation function for binary classification problems, and softmax for multinomial classification problems. #### 13\.4\.2\.2 Implementation To control the activation functions used in our layers we specify the `activation` argument. For the two hidden layers we add the ReLU activation function and for the output layer we specify `activation = softmax` (since MNIST is a multinomial classification problem). ``` model <- keras_model_sequential() %>% layer_dense(units = 128, activation = "relu", input_shape = p) %>% layer_dense(units = 64, activation = "relu") %>% layer_dense(units = 10, activation = "softmax") ``` Next, we need to incorporate a feedback mechanism to help our model learn. ### 13\.4\.1 Layers and nodes The layers and nodes are the building blocks of our DNN and they decide how complex the network will be. Layers are considered *dense* (fully connected) when all the nodes in each successive layer are connected. Consequently, the more layers and nodes you add the more opportunities for new features to be learned (commonly referred to as the model’s *capacity*).[36](#fn36) Beyond the *input layer*, which is just our original predictor variables, there are two main types of layers to consider: *hidden layers* and an *output layer*. #### 13\.4\.1\.1 Hidden layers There is no well\-defined approach for selecting the number of hidden layers and nodes; rather, these are the first of many hyperparameters to tune. With regular tabular data, 2–5 hidden layers are often sufficient but your best bet is to err on the side of more layers rather than fewer. The number of nodes you incorporate in these hidden layers is largely determined by the number of features in your data. Often, the number of nodes in each layer is equal to or less than the number of features but this is not a hard requirement. 
It is important to note that the number of hidden layers and nodes in your network can affect its computational complexity (e.g., training time). When dealing with many features and, therefore, many nodes, training deep models with many hidden layers can be computationally more efficient than training a single layer network with the same number of high volume nodes (Goodfellow, Bengio, and Courville [2016](#ref-goodfellow2016deep)). Consequently, the goal is to find the simplest model with optimal performance. #### 13\.4\.1\.2 Output layers The choice of output layer is driven by the modeling task. For regression problems, your output layer will contain one node that outputs the final predicted value. Classification problems are different. If you are predicting a binary output (e.g., True/False, Win/Loss), your output layer will still contain only one node and that node will predict the probability of success (however you define success). However, if you are predicting a multinomial output, the output layer will contain the same number of nodes as the number of classes being predicted. For example, in our MNIST data, we are predicting 10 classes (0–9\); therefore, the output layer will have 10 nodes and the output would provide the probability of each class. #### 13\.4\.1\.3 Implementation The **keras** package allows us to develop our network with a layering approach. First, we initiate our sequential feedforward DNN architecture with `keras_model_sequential()` and then add some dense layers. This example creates two hidden layers, the first with 128 nodes and the second with 64, followed by an output layer with 10 nodes. One thing to point out is that the first layer needs the `input_shape` argument to equal the number of features in your data; however, the successive layers are able to dynamically interpret the number of expected inputs based on the previous layer. ``` model <- keras_model_sequential() %>% layer_dense(units = 128, input_shape = ncol(mnist_x)) %>% layer_dense(units = 64) %>% layer_dense(units = 10) ``` #### 13\.4\.1\.1 Hidden layers There is no well\-defined approach for selecting the number of hidden layers and nodes; rather, these are the first of many hyperparameters to tune. With regular tabular data, 2–5 hidden layers are often sufficient but your best bet is to err on the side of more layers rather than fewer. The number of nodes you incorporate in these hidden layers is largely determined by the number of features in your data. Often, the number of nodes in each layer is equal to or less than the number of features but this is not a hard requirement. It is important to note that the number of hidden layers and nodes in your network can affect its computational complexity (e.g., training time). When dealing with many features and, therefore, many nodes, training deep models with many hidden layers can be computationally more efficient than training a single layer network with the same number of high volume nodes (Goodfellow, Bengio, and Courville [2016](#ref-goodfellow2016deep)). Consequently, the goal is to find the simplest model with optimal performance. #### 13\.4\.1\.2 Output layers The choice of output layer is driven by the modeling task. For regression problems, your output layer will contain one node that outputs the final predicted value. Classification problems are different. 
If you are predicting a binary output (e.g., True/False, Win/Loss), your output layer will still contain only one node and that node will predict the probability of success (however you define success). However, if you are predicting a multinomial output, the output layer will contain the same number of nodes as the number of classes being predicted. For example, in our MNIST data, we are predicting 10 classes (0–9\); therefore, the output layer will have 10 nodes and the output would provide the probability of each class. #### 13\.4\.1\.3 Implementation The **keras** package allows us to develop our network with a layering approach. First, we initiate our sequential feedforward DNN architecture with `keras_model_sequential()` and then add some dense layers. This example creates two hidden layers, the first with 128 nodes and the second with 64, followed by an output layer with 10 nodes. One thing to point out is that the first layer needs the `input_shape` argument to equal the number of features in your data; however, the successive layers are able to dynamically interpret the number of expected inputs based on the previous layer. ``` model <- keras_model_sequential() %>% layer_dense(units = 128, input_shape = ncol(mnist_x)) %>% layer_dense(units = 64) %>% layer_dense(units = 10) ``` ### 13\.4\.2 Activation A key component with neural networks is what’s called *activation*. In the human brain, the biologic neuron receives inputs from many adjacent neurons. When these inputs accumulate beyond a certain threshold the neuron is *activated* suggesting there is a signal. DNNs work in a similar fashion. #### 13\.4\.2\.1 Activation functions As stated previously, each node is connected to all the nodes in the previous layer. Each connection gets a weight and then that node adds all the incoming inputs multiplied by its corresponding connection weight plus an extra *bias* parameter (\\(w\_0\\)). The summed total of these inputs become an input to an *activation function*; see [13\.5](deep-learning.html#fig:perceptron-node). Figure 13\.5: Flow of information in an artificial neuron. The activation function is simply a mathematical function that determines whether or not there is enough informative input at a node to fire a signal to the next layer. There are multiple [activation functions](https://en.wikipedia.org/wiki/Activation_function) to choose from but the most common ones include: \\\[\\begin{equation} \\tag{13\.1} \\texttt{Linear (identity):} \\;\\; f\\left(x\\right) \= x \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.2} \\texttt{Rectified linear unit (ReLU):} \\;\\; f\\left(x\\right) \= \\begin{cases} 0, \& \\text{for $x\<0$}.\\\\ x, \& \\text{for $x\\geq0$}. \\end{cases} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.3} \\texttt{Sigmoid:} \\;\\; f\\left(x\\right) \= \\frac{1}{1 \+ e^{\-x}} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.4} \\texttt{Softmax:} \\;\\; f\\left(x\\right) \= \\frac{e^{x\_i}}{\\sum\_j e^{x\_j}} \\end{equation}\\] When using rectangular data, the most common approach is to use ReLU activation functions in the hidden layers. The ReLU activation function is simply taking the summed weighted inputs and transforming them to a \\(0\\) (not fire) or \\(\>0\\) (fire) if there is enough signal. For the output layers we use the linear activation function for regression problems, the sigmoid activation function for binary classification problems, and softmax for multinomial classification problems. 
#### 13\.4\.2\.2 Implementation To control the activation functions used in our layers we specify the `activation` argument. For the two hidden layers we add the ReLU activation function and for the output layer we specify `activation = softmax` (since MNIST is a multinomial classification problem). ``` model <- keras_model_sequential() %>% layer_dense(units = 128, activation = "relu", input_shape = p) %>% layer_dense(units = 64, activation = "relu") %>% layer_dense(units = 10, activation = "softmax") ``` Next, we need to incorporate a feedback mechanism to help our model learn. #### 13\.4\.2\.1 Activation functions As stated previously, each node is connected to all the nodes in the previous layer. Each connection gets a weight and then that node adds all the incoming inputs multiplied by its corresponding connection weight plus an extra *bias* parameter (\\(w\_0\\)). The summed total of these inputs become an input to an *activation function*; see [13\.5](deep-learning.html#fig:perceptron-node). Figure 13\.5: Flow of information in an artificial neuron. The activation function is simply a mathematical function that determines whether or not there is enough informative input at a node to fire a signal to the next layer. There are multiple [activation functions](https://en.wikipedia.org/wiki/Activation_function) to choose from but the most common ones include: \\\[\\begin{equation} \\tag{13\.1} \\texttt{Linear (identity):} \\;\\; f\\left(x\\right) \= x \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.2} \\texttt{Rectified linear unit (ReLU):} \\;\\; f\\left(x\\right) \= \\begin{cases} 0, \& \\text{for $x\<0$}.\\\\ x, \& \\text{for $x\\geq0$}. \\end{cases} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.3} \\texttt{Sigmoid:} \\;\\; f\\left(x\\right) \= \\frac{1}{1 \+ e^{\-x}} \\end{equation}\\] \\\[\\begin{equation} \\tag{13\.4} \\texttt{Softmax:} \\;\\; f\\left(x\\right) \= \\frac{e^{x\_i}}{\\sum\_j e^{x\_j}} \\end{equation}\\] When using rectangular data, the most common approach is to use ReLU activation functions in the hidden layers. The ReLU activation function is simply taking the summed weighted inputs and transforming them to a \\(0\\) (not fire) or \\(\>0\\) (fire) if there is enough signal. For the output layers we use the linear activation function for regression problems, the sigmoid activation function for binary classification problems, and softmax for multinomial classification problems. #### 13\.4\.2\.2 Implementation To control the activation functions used in our layers we specify the `activation` argument. For the two hidden layers we add the ReLU activation function and for the output layer we specify `activation = softmax` (since MNIST is a multinomial classification problem). ``` model <- keras_model_sequential() %>% layer_dense(units = 128, activation = "relu", input_shape = p) %>% layer_dense(units = 64, activation = "relu") %>% layer_dense(units = 10, activation = "softmax") ``` Next, we need to incorporate a feedback mechanism to help our model learn. 13\.5 Backpropagation --------------------- On the first run (or *forward pass*), the DNN will select a batch of observations, randomly assign weights across all the node connections, and predict the output. The engine of neural networks is how it assesses its own accuracy and automatically adjusts the weights across all the node connections to improve that accuracy. This process is called *backpropagation*. To perform backpropagation we need two things: 1. An objective function; 2. An optimizer. 
First, you need to establish an objective (loss) function to measure performance. For regression problems this might be mean squared error (MSE) and for classification problems it is commonly binary and multi\-categorical cross entropy (reference Section [2\.6](process.html#model-eval)). DNNs can have multiple loss functions but we’ll just focus on using one. On each forward pass the DNN will measure its performance based on the loss function chosen. The DNN will then work backwards through the layers, compute the gradient[37](#fn37) of the loss with regards to the network weights, adjust the weights a little in the opposite direction of the gradient, grab another batch of observations to run through the model, …rinse and repeat until the loss function is minimized. This process is known as *mini\-batch stochastic gradient descent*[38](#fn38) (mini\-batch SGD). There are several variants of mini\-batch SGD algorithms; they primarily differ in how fast they descend the gradient (controlled by the *learning rate* as discussed in Section [12\.2\.2](gbm.html#gbm-gradient)). These different variations make up the different *optimizers* that can be used. Understanding the technical differences among the variants of gradient descent is beyond the intent of this book. An excellent source to learn more about these differences and appropriate scenarios to adjust this parameter is provided by Ruder ([2016](#ref-ruder2016overview)). For now, realize that sticking with the default optimizer (RMSProp) is often sufficient for most normal regression and classification problems; however, this is a tunable hyperparameter. To incorporate the backpropagation piece of our DNN we include `compile()` in our code sequence. In addition to the optimizer and loss function arguments, we can also identify one or more metrics in addition to our loss function to track and report. ``` model <- keras_model_sequential() %>% # Network architecture layer_dense(units = 128, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_dense(units = 64, activation = "relu") %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = 'categorical_crossentropy', optimizer = optimizer_rmsprop(), metrics = c('accuracy') ) ``` 13\.6 Model training -------------------- We’ve created a base model, now we just need to train it with some data. To do so we feed our model into a `fit()` function along with our training data. We also provide a few other arguments that are worth mentioning: * `batch_size`: As we mentioned in the last section, the DNN will take a batch of data to run through the mini\-batch SGD process. Batch sizes can be between one and several hundred. Small values will be more computationally burdensome while large values provide less feedback signal. Values are typically provided as a power of two that fit nicely into the memory requirements of the GPU or CPU hardware like 32, 64, 128, 256, and so on. * `epochs`: An *epoch* describes the number of times the algorithm sees the entire data set. So, each time the algorithm has seen all samples in the data set, an epoch has completed. In our training set, we have 60,000 observations so running batches of 128 will require 469 passes for one epoch. The more complex the features and relationships in your data, the more epochs you’ll require for your model to learn, adjust the weights, and minimize the loss function. 
* `validation_split`: The model will hold out XX% of the data so that we can compute a more accurate estimate of an out\-of\-sample error rate. * `verbose`: We set this to `FALSE` for brevity; however, when `TRUE` you will see a live update of the loss function in your RStudio IDE. Plotting the output shows how our loss function (and specified metrics) improve for each epoch. We see that our model’s performance is optimized at 5–10 epochs and then proceeds to overfit, which results in a flatlined accuracy rate. The training and validation below took \~30 seconds. ``` # Train the model fit1 <- model %>% fit( x = mnist_x, y = mnist_y, epochs = 25, batch_size = 128, validation_split = 0.2, verbose = FALSE ) # Display output fit1 ## Trained on 48,000 samples, validated on 12,000 samples (batch_size=128, epochs=25) ## Final epoch (plot to see history): ## val_loss: 0.1512 ## val_acc: 0.9773 ## loss: 0.002308 ## acc: 0.9994 plot(fit1) ``` Figure 13\.6: Training and validation performance over 25 epochs. 13\.7 Model tuning ------------------ Now that we have an understanding of producing and running a DNN model, the next task is to find an optimal one by tuning different hyperparameters. There are many ways to tune a DNN. Typically, the tuning process follows these general steps; however, there is often a lot of iteration among these: 1. Adjust model capacity (layers \& nodes); 2. Add batch normalization; 3. Add regularization; 4. Adjust learning rate. ### 13\.7\.1 Model capacity Typically, we start by maximizing predictive performance based on model capacity. Higher model capacity (i.e., more layers and nodes) results in more *memorization capacity* for the model. On one hand, this can be good as it allows the model to learn more features and patterns in the data. On the other hand, a model with too much capacity will overfit to the training data. Typically, we look to maximize validation error performance while minimizing model capacity. As an example, we assessed nine different model capacity settings that include the following number of layers and nodes while maintaining all other parameters the same as the models in the previous sections (i.e.. our medium sized 2\-hidden layer network contains 64 nodes in the first layer and 32 in the second.). Table 13\.1: Model capacities assessed represented as number of layers and nodes per layer. | | Hidden Layers | | | | --- | --- | --- | --- | | Size | 1 | 2 | 3 | | small | 16 | 16, 8 | 16, 8, 4 | | medium | 64 | 64, 32 | 64, 32, 16 | | large | 256 | 256, 128 | 256, 128, 64 | The models that performed best had 2–3 hidden layers with a medium to large number of nodes. All the “small” models underfit and would require more epochs to identify their minimum validation error. The large 3\-layer model overfits extremely fast. Preferably, we want a model that overfits more slowly such as the 1\- and 2\-layer medium and large models (Chollet and Allaire [2018](#ref-chollet2018deep)). If none of your models reach a flatlined validation error such as all the “small” models in Figure 13\.7, increase the number of epochs trained. Alternatively, if your epochs flatline early then there is no reason to run so many epochs as you are just wasting computational energy with no gain. We can add a `callback()` function inside of `fit()` to help with this. There are multiple callbacks to help automate certain tasks. One such callback is early stopping, which will stop training if the loss function does not improve for a specified number of epochs. 
Figure 13\.7: Training and validation performance for various model capacities. ### 13\.7\.2 Batch normalization We’ve normalized the data before feeding it into our model, but data normalization should be a concern after every transformation performed by the network. Batch normalization (Ioffe and Szegedy [2015](#ref-ioffe2015batch)) is a recent advancement that adaptively normalizes data even as the mean and variance change over time during training. The main effect of batch normalization is that it helps with gradient propagation, which allows for deeper networks. Consequently, as the depth of your networks increase, batch normalization becomes more important and can improve performance. We can add batch normalization by including `layer_batch_normalization()` after each middle layer within the network architecture section of our code: ``` model_w_norm <- keras_model_sequential() %>% # Network architecture with batch normalization layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_batch_normalization() %>% layer_dense(units = 128, activation = "relu") %>% layer_batch_normalization() %>% layer_dense(units = 64, activation = "relu") %>% layer_batch_normalization() %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` If we add batch normalization to each of the previously assessed models, we see a couple patterns emerge. One, batch normalization often helps to minimize the validation loss sooner, which increases efficiency of model training. Two, we see that for the larger, more complex models (3\-layer medium and 2\- and 3\-layer large), batch normalization helps to reduce the overall amount of overfitting. In fact, with batch normalization, our large 3\-layer network now has the best validation error. Figure 13\.8: The effect of batch normalization on validation loss for various model capacities. ### 13\.7\.3 Regularization As we’ve discussed in Chapters [6](regularized-regression.html#regularized-regression) and [12](gbm.html#gbm), placing constraints on a model’s complexity with regularization is a common way to mitigate overfitting. DNNs models are no different and there are two common approaches to regularizing neural networks. We can use an \\(L\_1\\) or \\(L\_2\\) penalty to add a cost to the size of the node weights, although the most common penalizer is the \\(L\_2\\) *norm*, which is called *weight decay* in the context of neural networks.[39](#fn39) Regularizing the weights will force small signals (noise) to have weights nearly equal to zero and only allow consistently strong signals to have relatively larger weights. As you add more layers and nodes, regularization with \\(L\_1\\) or \\(L\_2\\) penalties tend to have a larger impact on performance. Since having too many hidden units runs the risk of overparameterization, \\(L\_1\\) or \\(L\_2\\) penalties can shrink the extra weights toward zero to reduce the risk of overfitting. We can add an \\(L\_1\\), \\(L\_2\\), or a combination of the two by adding `regularizer_XX()` within each layer. 
``` model_w_reg <- keras_model_sequential() %>% # Network architecture with L1 regularization and batch normalization layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x), kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 128, activation = "relu", kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 64, activation = "relu", kernel_regularizer = regularizer_l2(0.001)) %>% layer_batch_normalization() %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` Dropout (Srivastava et al. [2014](#ref-srivastava2014dropout)[b](#ref-srivastava2014dropout); Hinton et al. [2012](#ref-hinton2012improving)) is an additional regularization method that has become one of the most common and effectively used approaches to minimize overfitting in neural networks. Dropout in the context of neural networks randomly drops out (setting to zero) a number of output features in a layer during training. By randomly removing different nodes, we help prevent the model from latching onto happenstance patterns (noise) that are not significant. Typically, dropout rates range from 0\.2–0\.5 but can differ depending on the data (i.e., this is another tuning parameter). Similar to batch normalization, we can apply dropout by adding `layer_dropout()` in between the layers. ``` model_w_drop <- keras_model_sequential() %>% # Network architecture with 20% dropout layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 128, activation = "relu") %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 64, activation = "relu") %>% layer_dropout(rate = 0.2) %>% layer_dense(units = 10, activation = "softmax") %>% # Backpropagation compile( loss = "categorical_crossentropy", optimizer = optimizer_rmsprop(), metrics = c("accuracy") ) ``` For our MNIST data, we find that adding an \\(L\_1\\) or \\(L\_2\\) cost does not improve our loss function. However, adding dropout does improve performance. For example, our large 3\-layer model with 256, 128, and 64 nodes per respective layer so far has the best performance with a cross\-entropy loss of 0\.0818\. However, as illustrated in Figure [13\.8](deep-learning.html#fig:model-capacity-with-batch-norm-plot), this network still suffers from overfitting. Figure [13\.9](deep-learning.html#fig:model-with-regularization-plot) illustrates the same 3\-layer model with 256, 128, and 64 nodes per respective layers, batch normalization, and dropout rates of 0\.4, 0\.3, and 0\.2 between each respective layer. We see a significant improvement in overfitting, which results in an improved loss score. Note that as you constrain overfitting, often you need to increase the number of epochs to allow the network enough iterations to find the global minimal loss. Figure 13\.9: The effect of regularization with dropout on validation loss. ### 13\.7\.4 Adjust learning rate Another issue to be concerned with is whether or not we are finding a global minimum versus a local minimum with our loss value. The mini\-batch SGD optimizer we use will take incremental steps down our loss gradient until it no longer experiences improvement. The size of the incremental steps (i.e., the learning rate) will determine whether or not we get stuck in a local minimum instead of making our way to the global minimum. 
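As a brief aside to the earlier note that an \\(L\_1\\), \\(L\_2\\), or a combination of the two can be added within each layer, here is a minimal sketch (with hypothetical penalty values) of an elastic\-net style penalty specified via `regularizer_l1_l2()`:

```
# A sketch only: combining L1 and L2 penalties (hypothetical values) on a hidden layer
model_w_l1l2 <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = ncol(mnist_x),
              kernel_regularizer = regularizer_l1_l2(l1 = 0.001, l2 = 0.001)) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = "categorical_crossentropy",
    optimizer = optimizer_rmsprop(),
    metrics = c("accuracy")
  )
```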
Figure 13\.10: A local minimum and a global minimum.

There are two ways to circumvent this problem:

1. The different optimizers (e.g., RMSProp, Adam, Adagrad) have different algorithmic approaches for deciding the learning rate. We can adjust the learning rate of a given optimizer or we can adjust the optimizer used.
2. We can automatically adjust the learning rate by a factor of 2–10 once the validation loss has stopped improving.

The following builds onto our optimal model by changing the optimizer to Adam (Kingma and Ba [2014](#ref-kingma2014adam)) and reducing the learning rate by a factor of 0\.05 as our loss improvement begins to stall. We also add an early stopping argument to reduce unnecessary runtime. We see a slight improvement in performance and our loss curve in Figure [13\.11](deep-learning.html#fig:adj-lrn-rate) illustrates how we stop model training just as we begin to overfit.

```
model_w_adj_lrn <- keras_model_sequential() %>%
  layer_dense(units = 256, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_adam(),
    metrics = c('accuracy')
  ) %>%
  fit(
    x = mnist_x,
    y = mnist_y,
    epochs = 35,
    batch_size = 128,
    validation_split = 0.2,
    callbacks = list(
      callback_early_stopping(patience = 5),
      callback_reduce_lr_on_plateau(factor = 0.05)
    ),
    verbose = FALSE
  )

model_w_adj_lrn
## Trained on 48,000 samples, validated on 12,000 samples (batch_size=128, epochs=20)
## Final epoch (plot to see history):
## val_loss: 0.07223
## val_acc: 0.9808
## loss: 0.05366
## acc: 0.9832
## lr: 0.001

# Optimal
min(model_w_adj_lrn$metrics$val_loss)
## [1] 0.0699492
max(model_w_adj_lrn$metrics$val_acc)
## [1] 0.981

# Learning rate
plot(model_w_adj_lrn)
```

Figure 13\.11: Training and validation performance on our 3\-layer large network with dropout, adjustable learning rate, and using an Adam mini\-batch SGD optimizer.
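As a sketch of the first option listed above (adjusting a given optimizer’s learning rate directly), you can pass an explicit rate when compiling. The values below are illustrative only, and the argument name is `lr` in older releases of **keras** and `learning_rate` in newer ones:

```
# A sketch only: compile an architecture with an explicitly chosen learning rate
model_custom_lr <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = "categorical_crossentropy",
    optimizer = optimizer_adam(lr = 0.001),  # or optimizer_rmsprop(lr = 1e-4), optimizer_sgd(lr = 0.01)
    metrics = c("accuracy")
  )
```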
13\.8 Grid Search
-----------------

Hyperparameter tuning for DNNs tends to be a bit more involved than other ML models due to the number of hyperparameters that can/should be assessed and the dependencies between these parameters. For most implementations you need to predetermine the number of layers you want and then establish your search grid. If using **h2o**’s `h2o.deeplearning()` function, then creating and executing the search grid follows the same approach illustrated in Sections [11\.5](random-forest.html#rf-tuning-strategy) and [12\.4\.2](gbm.html#stochastic-gbm-h2o). However, for **keras**, we use *flags* in a similar manner but their implementation provides added flexibility for tracking, visualizing, and managing training runs with the **tfruns** package (Allaire [2018](#ref-R-tfruns)).
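For instance, once a set of training runs has completed, you can list and compare them from R; a minimal sketch using functions from **tfruns** (the columns returned will depend on your recorded flags and metrics):

```
library(tfruns)

# List completed training runs along with their recorded metrics and flag values
runs <- ls_runs()
head(runs)

# view_run() opens an HTML report for a single run (by default, the most recent run)
# view_run()
```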
For a full discussion regarding flags see the <https://tensorflow.rstudio.com/tools/> online resource. In this example we provide a training script [mnist\-grid\-search.R](http://bit.ly/mnist-grid-search) that will be sourced for the grid search. To create and perform a grid search, we first establish flags for the different hyperparameters of interest. These are considered the default flag values:

```
FLAGS <- flags(
  # Nodes
  flag_numeric("nodes1", 256),
  flag_numeric("nodes2", 128),
  flag_numeric("nodes3", 64),
  # Dropout
  flag_numeric("dropout1", 0.4),
  flag_numeric("dropout2", 0.3),
  flag_numeric("dropout3", 0.2),
  # Learning parameters
  flag_string("optimizer", "rmsprop"),
  flag_numeric("lr_annealing", 0.1)
)
```

Next, we incorporate the flag parameters within our model:

```
model <- keras_model_sequential() %>%
  layer_dense(units = FLAGS$nodes1, activation = "relu", input_shape = ncol(mnist_x)) %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout1) %>%
  layer_dense(units = FLAGS$nodes2, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout2) %>%
  layer_dense(units = FLAGS$nodes3, activation = "relu") %>%
  layer_batch_normalization() %>%
  layer_dropout(rate = FLAGS$dropout3) %>%
  layer_dense(units = 10, activation = "softmax") %>%
  compile(
    loss = 'categorical_crossentropy',
    metrics = c('accuracy'),
    optimizer = FLAGS$optimizer
  ) %>%
  fit(
    x = mnist_x,
    y = mnist_y,
    epochs = 35,
    batch_size = 128,
    validation_split = 0.2,
    callbacks = list(
      callback_early_stopping(patience = 5),
      callback_reduce_lr_on_plateau(factor = FLAGS$lr_annealing)
    ),
    verbose = FALSE
  )
```

To execute the grid search we use `tfruns::tuning_run()`. Since our grid search assesses 2,916 combinations, we perform a random grid search and assess only 5% of the total models (`sample = 0.05`, which equates to 145 models). It becomes quite obvious that the hyperparameter search space explodes quickly with DNNs since there are so many model attributes that can be adjusted. Consequently, often a full Cartesian grid search is not possible due to time and computational constraints. The optimal model has a validation loss of 0\.0686 and a validation accuracy rate of 0\.9806, and the code chunk below shows the hyperparameter settings for this optimal model. The following grid search took us over 1\.5 hours to run!
``` # Run various combinations of dropout1 and dropout2 runs <- tuning_run("scripts/mnist-grid-search.R", flags = list( nodes1 = c(64, 128, 256), nodes2 = c(64, 128, 256), nodes3 = c(64, 128, 256), dropout1 = c(0.2, 0.3, 0.4), dropout2 = c(0.2, 0.3, 0.4), dropout3 = c(0.2, 0.3, 0.4), optimizer = c("rmsprop", "adam"), lr_annealing = c(0.1, 0.05) ), sample = 0.05 ) runs %>% filter(metric_val_loss == min(metric_val_loss)) %>% glimpse() ## Observations: 1 ## Variables: 31 ## $ run_dir <chr> "runs/2019-04-27T14-44-38Z" ## $ metric_loss <dbl> 0.0598 ## $ metric_acc <dbl> 0.9806 ## $ metric_val_loss <dbl> 0.0686 ## $ metric_val_acc <dbl> 0.9806 ## $ flag_nodes1 <int> 256 ## $ flag_nodes2 <int> 128 ## $ flag_nodes3 <int> 256 ## $ flag_dropout1 <dbl> 0.4 ## $ flag_dropout2 <dbl> 0.2 ## $ flag_dropout3 <dbl> 0.3 ## $ flag_optimizer <chr> "adam" ## $ flag_lr_annealing <dbl> 0.05 ## $ samples <int> 48000 ## $ validation_samples <int> 12000 ## $ batch_size <int> 128 ## $ epochs <int> 35 ## $ epochs_completed <int> 17 ## $ metrics <chr> "runs/2019-04-27T14-44-38Z/tfruns.d/metrics.json" ## $ model <chr> "Model\n_______________________________________________________… ## $ loss_function <chr> "categorical_crossentropy" ## $ optimizer <chr> "<tensorflow.python.keras.optimizers.Adam>" ## $ learning_rate <dbl> 0.001 ## $ script <chr> "mnist-grid-search.R" ## $ start <dttm> 2019-04-27 14:44:38 ## $ end <dttm> 2019-04-27 14:45:39 ## $ completed <lgl> TRUE ## $ output <chr> "\n> #' Trains a feedforward DL model on the MNIST dataset.\n> … ## $ source_code <chr> "runs/2019-04-27T14-44-38Z/tfruns.d/source.tar.gz" ## $ context <chr> "local" ## $ type <chr> "training" ``` 13\.9 Final thoughts -------------------- Training DNNs often requires more time and attention than other ML algorithms. With many other algorithms, the search space for finding an optimal model is small enough that Cartesian grid searches can be executed rather quickly. With DNNs, more thought, time, and experimentation is often required up front to establish a basic network architecture to build a grid search around. However, even with prior experimentation to reduce the scope of a grid search, the large number of hyperparameters still results in an exploding search space that can usually only be efficiently searched at random. Historically, training neural networks was quite slow since runtime requires \\(O\\left(NpML\\right)\\) operations where \\(N \=\\) \# observations, \\(p\=\\) \# features, \\(M\=\\) \# hidden nodes, and \\(L\=\\) \# epchos. Fortunately, software has advanced tremendously over the past decade to make execution fast and efficient. With open source software such as TensorFlow and Keras available via R APIs, performing state of the art deep learning methods is much more efficient, plus you get all the added benefits these open source tools provide (e.g., distributed computations across CPUs and GPUs, more advanced DNN architectures such as convolutional and recurrent neural nets, autoencoders, reinforcement learning, and more!).
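To put the \\(O\\left(NpML\\right)\\) operation count in perspective, here is a quick back\-of\-the\-envelope calculation for the large MNIST network used in this chapter (the node count assumes the 256, 128, and 64 hidden\-layer architecture; epochs and dimensions are taken from the examples above):

```
# Rough operation count O(N * p * M * L) for the large MNIST network in this chapter
N <- 60000            # training observations
p <- 784              # input features (28 x 28 pixel intensities)
M <- 256 + 128 + 64   # total hidden nodes across the three hidden layers
L <- 25               # epochs
format(N * p * M * L, big.mark = ",", scientific = FALSE)
```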
Chapter 14 Support Vector Machines ================================== Support vector machines (SVMs) offer a direct approach to binary classification: try to find a *hyperplane* in some feature space that “best” separates the two classes. In practice, however, it is difficult (if not impossible) to find a hyperplane to perfectly separate the classes using just the original features. SVMs overcome this by extending the idea of finding a separating hyperplane in two ways: (1\) loosen what we mean by “perfectly separates”, and (2\) use the so\-called *kernel trick* to enlarge the feature space to the point that perfect separation of classes is (more) likely. 14\.1 Prerequisites ------------------- Although there are a number of great packages that implement SVMs (e.g., **e1071** (Meyer et al. [2019](#ref-e1071-pkg)) and **svmpath** (Hastie [2016](#ref-svmpath-pkg))), we’ll focus on the most flexible implementation of SVMs in R: **kernlab** (Karatzoglou et al. [2004](#ref-kernlab-pkg)). We’ll also use **caret** for tuning SVMs and pre\-processing. In this chapter, we’ll explicitly load the following packages: ``` # Helper packages library(dplyr) # for data wrangling library(ggplot2) # for awesome graphics library(rsample) # for data splitting # Modeling packages library(caret) # for classification and regression training library(kernlab) # for fitting SVMs # Model interpretability packages library(pdp) # for partial dependence plots, etc. library(vip) # for variable importance plots ``` To illustrate the basic concepts of fitting SVMs we’ll use a mix of simulated data sets as well as the employee attrition data. The code for generating the simulated data sets and figures in this chapter are available on the book website. In the employee attrition example our intent is to predict on `Attrition` (coded as `"Yes"`/`"No"`). As in previous chapters, we’ll set aside 30% of the data for assessing generalizability. ``` # Load attrition data df <- attrition %>% mutate_if(is.ordered, factor, ordered = FALSE) # Create training (70%) and test (30%) sets set.seed(123) # for reproducibility churn_split <- initial_split(df, prop = 0.7, strata = "Attrition") churn_train <- training(churn_split) churn_test <- testing(churn_split) ``` 14\.2 Optimal separating hyperplanes ------------------------------------ Rather than diving right into SVMs we’ll build up to them using concepts from basic geometry, starting with hyperplanes. A hyperplane in \\(p\\)\-dimensional feature space is defined by the (linear) equation \\\[f\\left(X\\right) \= \\beta\_0 \+ \\beta\_1 X\_1 \+ \\dots \+ \\beta\_p X\_p \= 0\\] When \\(p \= 2\\), this defines a line in 2\-D space, and when \\(p \= 3\\), it defines a plane in 3\-D space (see Figure [14\.1](svm.html#fig:hyperplanes)). By definition, for points on one side of the hyperplane, \\(f\\left(X\\right) \> 0\\), and for points on the other side, \\(f\\left(X\\right) \< 0\\). For (mathematical) convenience, we’ll re\-encode the binary outcome \\(Y\_i\\) using {\-1, 1} so that \\(Y\_i \\times f\\left(X\_i\\right) \> 0\\) for points on the correct side of the hyperplane. In this context the hyperplane represents a *decision boundary* that partitions the feature space into two sets, one for each class. The SVM will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class. Figure 14\.1: Examples of hyperplanes in 2\-D and 3\-D feature space. 
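To make the decision rule concrete, here is a tiny sketch with made\-up coefficients (not estimated from any data) showing how the sign of \\(f\\left(X\\right)\\) assigns each point to one side of the hyperplane:

```
# Made-up coefficients for a hyperplane in 2-D feature space: f(X) = beta0 + X %*% beta
beta0 <- -1
beta  <- c(2, 3)
X     <- rbind(c(1.0, 0.5),    # three example points
               c(-0.5, 0.2),
               c(0.2, 0.1))
f_x   <- beta0 + X %*% beta    # signed score for each point
ifelse(f_x > 0, +1, -1)        # class assignment: which side of the hyperplane
```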
While SVMs may seem mathematically frightening at first, the fundamental ideas behind them are incredibly intuitive and easy to understand. We’ll illustrate these simple ideas using simulated binary classification data with two features. In this hypothetical example, we have two classes: (1\) households that own a riding lawn mower (\\(Y \= \+1\\)) and (2\) households that do not (\\(Y \= \-1\\)). We also have two features, household income (\\(X\_1\\)) and lot size (\\(X\_2\\)), that have been standardized (i.e., centered around zero with a standard deviation of one). Intuitively, we might expect households with a larger lot and a higher income to be more likely to own a riding mower. In fact, the two classes in the left side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes) are perfectly separable by a straight line (i.e., a hyperplane in 2\-D space).

### 14\.2\.1 The hard margin classifier

As you might imagine, for two separable classes, there are an infinite number of separating hyperplanes! This is illustrated in the right side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes) where we show the hyperplanes (i.e., decision boundaries) that result from a simple logistic regression model (GLM), a *linear discriminant analysis* (LDA; another popular classification tool), and an example of a *hard margin classifier* (HMC)—which we’ll define in a moment. So which decision boundary is “best”? Well, it depends on how we define “best”. If you were asked to draw a decision boundary with good generalization performance on the left side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes), how would it look to you? Naturally, you would probably draw a boundary that provides the maximum separation between the two classes, and that’s exactly what the HMC is doing!

Figure 14\.2: Simulated binary classification data with two separable classes. *Left:* Raw data. *Right:* Raw data with example decision boundaries (in this case, separating hyperplanes) from various machine learning algorithms.

Although we can draw an unlimited number of separating hyperplanes, what we want is a separating hyperplane with good generalization performance! The HMC is one such “optimal” separating hyperplane and the simplest type of SVM. The HMC is optimal in the sense that it separates the two classes while maximizing the distance to the closest points from either class; see Figure [14\.3](svm.html#fig:svm-hmc) below. The decision boundary (i.e., hyperplane) from the HMC separates the two classes by maximizing the distance between them. This maximized distance is referred to as the margin \\(M\\) (the shaded areas in Figure [14\.3](svm.html#fig:svm-hmc)). Finding this decision boundary can also be done with simple geometry. Geometrically, finding the HMC for two separable classes amounts to the following:

1. Draw the *convex hull*[40](#fn40) around each class (these are the polygons surrounding each class in Figure [14\.3](svm.html#fig:svm-hmc)).
2. Draw the shortest line segment that connects the two convex hulls (this is the dotted line segment in Figure [14\.3](svm.html#fig:svm-hmc)).
3. The perpendicular bisector of this line segment is the HMC!
4. The margin boundaries are formed by drawing lines that pass through the support vectors and are parallel to the separating hyperplane (these are the dashed line segments in Figure [14\.3](svm.html#fig:svm-hmc)).

Figure 14\.3: HMC for the simulated riding mower data.
The solid black line forms the decision boundary (in this case, a separating hyperplane), while the dashed lines form the boundaries of the margins (shaded regions) on each side of the hyperplane. The shortest distance between the two classes (i.e., the dotted line connecting the two convex hulls) has length \\(2M\\). Two of the training observations (solid red points) fall on the margin boundaries; in the context of SVMs (which we discuss later), these two points form the *support vectors*. This can also be formulated as an optimization problem. Mathematically speaking, the HMC estimates the coefficients of the hyperplane by solving a quadratic programming problem with linear inequality constraints, in particular:

\\\[\\begin{align} \&\\underset{\\beta\_0, \\beta\_1, \\dots, \\beta\_p}{\\text{maximize}} \\quad M \\\\ \&\\text{subject to} \\quad \\begin{cases} \\sum\_{j \= 1}^p \\beta\_j^2 \= 1,\\\\ y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right) \\ge M,\\quad i \= 1, 2, \\dots, n \\end{cases} \\end{align}\\]

Put differently, the HMC finds the separating hyperplane that provides the largest margin/gap between the two classes. The width of both margin boundaries is \\(M\\). With the constraint \\(\\sum\_{j \= 1}^p \\beta\_j^2 \= 1\\), the quantity \\(y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right)\\) represents the distance from the \\(i\\)\-th data point to the decision boundary. Note that the solution to the optimization problem above does not allow any points to be on the wrong side of the margin; hence the term hard margin classifier.

### 14\.2\.2 The soft margin classifier

Sometimes perfect separation is achievable, but not desirable! Take, for example, the data in Figure [14\.4](svm.html#fig:svm-noisy). Here we added a single outlier at the point \\(\\left(0\.5, 1\\right)\\). While the data are still perfectly separable, the decision boundaries obtained using logistic regression and the HMC will not generalize well to new data and accuracy will suffer (i.e., these models are not robust to outliers in the feature space). The LDA model seems to produce a more reasonable decision boundary.

Figure 14\.4: Simulated binary classification data with an outlier at the point \\(\\left(0\.5, 1\\right)\\).

In this situation, we can loosen the constraints (or *soften the margin*) by allowing some points to be on the wrong side of the margin; this is referred to as the *soft margin classifier* (SMC). The SMC, similar to the HMC, estimates the coefficients of the hyperplane by solving the slightly modified optimization problem:

\\\[\\begin{align} \&\\underset{\\beta\_0, \\beta\_1, \\dots, \\beta\_p}{\\text{maximize}} \\quad M \\\\ \&\\text{subject to} \\quad \\begin{cases} \\sum\_{j \= 1}^p \\beta\_j^2 \= 1,\\\\ y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right) \\ge M\\left(1 \- \\xi\_i\\right), \\quad i \= 1, 2, \\dots, n\\\\ \\xi\_i \\ge 0, \\\\ \\sum\_{i \= 1}^n \\xi\_i \\le C\\end{cases} \\end{align}\\]

Similar to before, the SMC finds the separating hyperplane that provides the largest margin/gap between the two classes, but allows for some of the points to cross over the margin boundaries. Here \\(C\\) is the allowable budget for the total amount of overlap and is our first tunable hyperparameter for the SVM. By varying \\(C\\), we allow points to violate the margin, which helps make the SVM robust to outliers.
For example, in Figure [14\.5](svm.html#fig:smc), we fit the SMC at both extremes: \\(C \= 0\\) (the HMC) and \\(C \= \\infty\\) (maximum overlap). Ideally, the hyperplane giving the decision boundary with the best generalization performance lies somewhere in between these two extremes and can be determined using, for example, *k*\-fold CV.

Figure 14\.5: Soft margin classifier. Left: Zero budget for overlap (i.e., the HMC). Right: Maximum allowable overlap. The solid black points represent the support vectors that define the margin boundaries.

14\.3 The support vector machine
--------------------------------

So far, we’ve only used linear decision boundaries. Such a classifier is likely too restrictive to be useful in practice, especially when compared to other algorithms that can adapt to nonlinear relationships. Fortunately, we can use a simple trick, called the *kernel trick*, to overcome this. A deep understanding of the kernel trick requires an understanding of *kernel functions* and *reproducing kernel Hilbert spaces* 😱. Fortunately, we can use a couple of illustrations in 2\-D/3\-D feature space to drive home the key idea.

Consider, for example, the circle data on the left side of Figure [14\.6](svm.html#fig:svm-circle). This is another binary classification problem. The first class forms a circle in the middle of a square; the remaining points form the second class. Although these two classes do not overlap (although they appear to overlap slightly due to the size of the plotted points), they are not perfectly separable by a hyperplane (i.e., a straight line). However, we can enlarge the feature space by adding a third feature, say \\(X\_3 \= X\_1^2 \+ X\_2^2\\)—this is akin to using the polynomial kernel function discussed below with \\(d \= 2\\). The data are plotted in the enlarged feature space in the middle of Figure [14\.6](svm.html#fig:svm-circle). In this new three\-dimensional feature space, the two classes are perfectly separable by a hyperplane (i.e., a flat plane); though it is hard to see (see the middle of Figure [14\.6](svm.html#fig:svm-circle)), the green points form the tip of the hyperboloid in 3\-D feature space (i.e., \\(X\_3\\) is smaller for all the green points leaving a small gap between the two classes). The resulting decision boundary is then projected back onto the original feature space resulting in a non\-linear decision boundary which perfectly separates the original data (see the right side of Figure [14\.6](svm.html#fig:svm-circle))!

Figure 14\.6: Simulated nested circle data. *Left:* The two classes in the original (2\-D) feature space. *Middle:* The two classes in the enlarged (3\-D) feature space. *Right:* The decision boundary from the HMC in the enlarged feature space projected back into the original feature space.

In essence, SVMs use the kernel trick to enlarge the feature space using basis functions (e.g., like in MARS or polynomial regression). In this enlarged (kernel\-induced) feature space, a hyperplane can often separate the two classes. The resulting decision boundary, which is linear in the enlarged feature space, will be nonlinear when transformed back onto the original feature space.
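A quick sketch of the enlargement just described, using simulated circle\-like data and the added feature \\(X\_3 \= X\_1^2 \+ X\_2^2\\) (the cutoff of 0\.25 is an arbitrary choice for illustration):

```
# A sketch only: in the enlarged space, a threshold on x3 (a flat plane) separates the classes
set.seed(1111)
n  <- 200
x1 <- runif(n, -1, 1)
x2 <- runif(n, -1, 1)
y  <- ifelse(x1^2 + x2^2 < 0.25, "inner circle", "outer")
x3 <- x1^2 + x2^2   # the enlarged (third) feature
boxplot(x3 ~ y, ylab = expression(x[3] == x[1]^2 + x[2]^2))
```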
Popular kernel functions used by SVMs include:

* *d*\-th degree polynomial: \\(K\\left(x, x'\\right) \= \\gamma\\left(1 \+ \\langle x, x' \\rangle\\right) ^ d\\)
* Radial basis function: \\(K\\left(x, x'\\right) \= \\exp\\left(\-\\gamma \\lVert x \- x'\\rVert ^ 2\\right)\\)
* Hyperbolic tangent: \\(K\\left(x, x'\\right) \= \\tanh\\left(k\_1 \\langle x, x' \\rangle \+ k\_2\\right)\\)

Here \\(\\langle x, x' \\rangle \= \\sum\_{j \= 1}^p x\_j x\_j'\\) is called an *inner product*. Notice how each of these kernel functions includes hyperparameters that need to be tuned. For example, the polynomial kernel includes a degree term \\(d\\) and a scale parameter \\(\\gamma\\). Similarly, the radial basis kernel includes a \\(\\gamma\\) parameter related to the inverse of the \\(\\sigma\\) parameter of a normal distribution. In R, you can use **caret**’s `getModelInfo()` to extract the hyperparameters from various SVM implementations with different kernel functions, for example:

```
# Linear (i.e., soft margin classifier)
caret::getModelInfo("svmLinear")$svmLinear$parameters
##   parameter   class label
## 1         C numeric  Cost

# Polynomial kernel
caret::getModelInfo("svmPoly")$svmPoly$parameters
##   parameter   class             label
## 1    degree numeric Polynomial Degree
## 2     scale numeric             Scale
## 3         C numeric              Cost

# Radial basis kernel
caret::getModelInfo("svmRadial")$svmRadial$parameters
##   parameter   class label
## 1     sigma numeric Sigma
## 2         C numeric  Cost
```

Through the use of various kernel functions, SVMs are extremely flexible and capable of estimating complex nonlinear decision boundaries. For example, the right side of Figure [14\.7](svm.html#fig:two-spirals) demonstrates the flexibility of an SVM using a radial basis kernel applied to the two spirals benchmark problem (see `?mlbench::mlbench.spirals` for details). As a reference, the left side of Figure [14\.7](svm.html#fig:two-spirals) shows the decision boundary from a default random forest fit using the **ranger** package. The random forest decision boundary, while flexible, has trouble capturing smooth decision boundaries (like a spiral). The SVM with a radial basis kernel, on the other hand, does a great job (and in this case is more accurate).

Figure 14\.7: Two spirals benchmark problem. *Left:* Decision boundary from a random forest. *Right:* Decision boundary from an SVM with radial basis kernel.

The radial basis kernel is extremely flexible and, as a rule of thumb, we generally start with this kernel when fitting SVMs in practice.

### 14\.3\.1 More than two classes

The SVM, as introduced, is applicable to only two classes! What do we do when we have more than two classes? There are two general approaches: *one\-versus\-all* (OVA) and *one\-versus\-one* (OVO). In OVA, we fit an SVM for each class (one class versus the rest) and classify to the class for which the margin is the largest. In OVO, we fit all \\(\\binom{\\\# \\ classes}{2}\\) pairwise SVMs and classify to the class that wins the most pairwise competitions. All the popular implementations of SVMs, including **kernlab**, provide such approaches to multinomial classification.

### 14\.3\.2 Support vector regression

SVMs can also be extended to regression problems (i.e., when the outcome is continuous). In essence, SVMs find a separating hyperplane in an enlarged feature space that generally results in a nonlinear decision boundary in the original feature space with good generalization performance. This enlarged feature space is constructed using special functions called kernel functions.
The idea behind support vector regression (SVR) is very similar: find a good\-fitting hyperplane in a kernel\-induced feature space that will have good generalization performance using the original features. Although there are many flavors of SVR, we’ll introduce the most common: *\\(\\epsilon\\)\-insensitive loss regression*.

Recall that the least squares (LS) approach to function estimation (Chapter [4](linear-regression.html#linear-regression)) minimizes the sum of the squared residuals, where in general we define the residual as \\(r\\left(x, y\\right) \= y \- f\\left(x\\right)\\). (In ordinary linear regression \\(f\\left(x\\right) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\dots \+ \\beta\_p x\_p\\)). The problem with LS is that it involves squaring the residuals, which gives outliers undue influence on the fitted regression function. Although we could rely on the MAE metric (which looks at the absolute value as opposed to the squared residuals), another intuitive loss metric, called *\\(\\epsilon\\)\-insensitive loss*, is more robust to outliers:

\\\[ L\_{\\epsilon} \= \\max\\left(0, \\left\|r\\left(x, y\\right)\\right\| \- \\epsilon\\right) \\]

Here \\(\\epsilon\\) is a threshold set by the analyst. In essence, we’re forming a margin around the regression curve of width \\(\\epsilon\\) (see Figure [14\.8](svm.html#fig:eps-band)), and trying to contain as many data points within the margin as possible with a minimal number of violations. The data points whose residuals satisfy \\(\\left\|r\\left(x, y\\right)\\right\| \\ge \\epsilon\\) form the support vectors that define the margin. The model is said to be *\\(\\epsilon\\)\-insensitive* because the points within the margin have no influence on the fitted regression line! Similar to SVMs, we can use kernel functions to capture nonlinear relationships (in this case, the support vectors are those points whose residuals satisfy \\(\\left\|r\\left(x, y\\right)\\right\| \\ge \\epsilon\\) in the kernel\-induced feature space).

Figure 14\.8: \\(\\epsilon\\)\-insensitive regression band. The solid black line represents the estimated regression curve \\(f\\left(x\\right)\\).

To illustrate, we simulated data from the sinc function \\(\\sin\\left(x\\right) / x\\) with added Gaussian noise (i.e., random errors from a normal distribution). The simulated data are shown in Figure [14\.9](svm.html#fig:sinc). This is a highly nonlinear, but smooth function of \\(x\\).

Figure 14\.9: Simulated data from a sinc function with added noise.

We fit three regression models to these data: a default MARS model (Chapter [7](mars.html#mars)), a default RF (Chapter [11](random-forest.html#random-forest)), and an SVR model using \\(\\epsilon\\)\-insensitive loss and a radial basis kernel with default tuning parameters (technically, we set `kpar = "automatic"` which tells `kernlab::ksvm()` to use `kernlab::sigest()` to find a reasonable estimate for the kernel’s scale parameter). To use \\(\\epsilon\\)\-insensitive loss regression, specify `type = "eps-svr"` in the call to `kernlab::ksvm()` (the default for \\(\\epsilon\\) is `epsilon = 0.1`). The results are displayed in Figure [14\.10](svm.html#fig:sinc-predictions). Although this is a simple one\-dimensional problem, the MARS and RF struggle to adapt to the smooth, but highly nonlinear function. The MARS model, while probably effective, is too rigid and fails to adequately capture the relationship towards the left\-hand side.
The RF is too wiggly and is indicative of slight overfitting (perhaps tuning the minimum observations per node could help here; this is left as an exercise on the book website). The SVR model, on the other hand, works quite well and provides a smooth fit to the data!

Figure 14\.10: Simulated sinc data with the fitted MARS, RF, and SVR regression curves.

Applying support vector regression to the Ames housing example is left as an exercise for the reader on the book’s website.

14\.4 Job attrition example
---------------------------

Returning to the employee attrition example, we tune and fit an SVM with a radial basis kernel (recall our earlier rule of thumb regarding kernel functions). Recall that the radial basis kernel has two hyperparameters: \\(\\sigma\\) and \\(C\\). While we can use *k*\-fold CV to find good estimates of both parameters, hyperparameter tuning can be time\-consuming for SVMs[41](#fn41). Fortunately, it is possible to use the training data to find a good estimate of \\(\\sigma\\). This is provided by the `kernlab::sigest()` function. This function estimates the range of \\(\\sigma\\) values which would return good results when used with a radial basis SVM. Ideally, any value within the range of estimates returned by this function should produce reasonable results. This is the approach taken by **caret**’s `train()` function when `method = "svmRadialSigma"`, which we use below. Also, note that a reasonable search grid for the cost parameter \\(C\\) is an exponentially growing series, for example \\(2^{\-2}, 2^{\-1}, 2^{0}, 2^{1}, 2^{2},\\) etc. See `caret::getModelInfo("svmRadialSigma")` for details. Next, we’ll use **caret**’s `train()` function to tune and train an SVM using the radial basis kernel function with autotuning for the \\(\\sigma\\) parameter (i.e., `"svmRadialSigma"`) and 10\-fold CV.

```
# Tune an SVM with radial basis kernel
set.seed(1854)  # for reproducibility
churn_svm <- train(
  Attrition ~ .,
  data = churn_train,
  method = "svmRadial",
  preProcess = c("center", "scale"),
  trControl = trainControl(method = "cv", number = 10),
  tuneLength = 10
)
```

Plotting the results, we see that smaller values of the cost parameter (\\(C \\approx\\) 2–8\) provide better cross\-validated accuracy scores for these training data:

```
# Plot results
ggplot(churn_svm) + theme_light()
```

```
# Print results
churn_svm$results
##          sigma      C  Accuracy     Kappa  AccuracySD   KappaSD
## 1  0.009590249   0.25 0.8388542 0.0000000 0.004089627 0.0000000
## 2  0.009590249   0.50 0.8388542 0.0000000 0.004089627 0.0000000
## 3  0.009590249   1.00 0.8515233 0.1300469 0.014427649 0.1013069
## 4  0.009590249   2.00 0.8708857 0.3526368 0.023749215 0.1449342
## 5  0.009590249   4.00 0.8709611 0.4172884 0.026640331 0.1302496
## 6  0.009590249   8.00 0.8660873 0.4242800 0.026271496 0.1206188
## 7  0.009590249  16.00 0.8563495 0.4012563 0.026866012 0.1298460
## 8  0.009590249  32.00 0.8515138 0.3831775 0.028717623 0.1338717
## 9  0.009590249  64.00 0.8515138 0.3831775 0.028717623 0.1338717
## 10 0.009590249 128.00 0.8515138 0.3831775 0.028717623 0.1338717
```

### 14\.4\.1 Class weights

By default, most classification algorithms treat misclassification costs equally. This is not ideal in situations where one type of misclassification is more important than another or there is a severe class imbalance (which is usually the case). SVMs (as well as most tree\-based methods) allow you to assign specific misclassification costs to the different outcomes.
In **caret** and **kernlab**, this is accomplished via the `class.weights` argument, which is just a named vector of weights for the different classes. In the employee attrition example, for instance, we might specify

```
class.weights = c("No" = 1, "Yes" = 10)
```

in the call to `caret::train()` or `kernlab::ksvm()` to make false negatives (i.e., predicting “No” when the truth is “Yes”) ten times more costly than false positives (i.e., predicting “Yes” when the truth is “No”). Cost\-sensitive training with SVMs is left as an exercise on the book website.

### 14\.4\.2 Class probabilities

SVMs classify new observations by determining which side of the decision boundary they fall on; consequently, they do not automatically provide predicted class probabilities! In order to obtain predicted class probabilities from an SVM, additional parameters need to be estimated as described in Platt ([1999](#ref-platt-1999-probabilistic)). In practice, predicted class probabilities are often more useful than the predicted class labels. For instance, we would need the predicted class probabilities if we were using an optimization metric like AUC (Chapter [2](process.html#process)), as opposed to classification accuracy. In that case, we can set `prob.model = TRUE` in the call to `kernlab::ksvm()` or `classProbs = TRUE` in the call to `caret::trainControl()` (for details, see `?kernlab::ksvm` and the references therein):

```
# Control params for SVM
ctrl <- trainControl(
  method = "cv",
  number = 10,
  classProbs = TRUE,
  summaryFunction = twoClassSummary  # also needed for AUC/ROC
)

# Tune an SVM
set.seed(5628)  # for reproducibility
churn_svm_auc <- train(
  Attrition ~ .,
  data = churn_train,
  method = "svmRadial",
  preProcess = c("center", "scale"),
  metric = "ROC",  # area under ROC curve (AUC)
  trControl = ctrl,
  tuneLength = 10
)

# Print results
churn_svm_auc$results
##          sigma      C       ROC      Sens      Spec      ROCSD      SensSD     SpecSD
## 1  0.009727585   0.25 0.8379109 0.9675488 0.3933824 0.06701067 0.012073306 0.11466031
## 2  0.009727585   0.50 0.8376397 0.9652767 0.3761029 0.06694554 0.010902039 0.14775214
## 3  0.009727585   1.00 0.8377081 0.9652633 0.4055147 0.06725101 0.007798768 0.09871169
## 4  0.009727585   2.00 0.8343294 0.9756750 0.3459559 0.06803483 0.012712528 0.14320366
## 5  0.009727585   4.00 0.8200427 0.9745255 0.3452206 0.07188838 0.013092221 0.12082675
## 6  0.009727585   8.00 0.8123546 0.9699278 0.3327206 0.07582032 0.013513393 0.11819788
## 7  0.009727585  16.00 0.7915612 0.9756883 0.2849265 0.07791598 0.010094292 0.10700782
## 8  0.009727585  32.00 0.7846566 0.9745255 0.2845588 0.07752526 0.010615423 0.08923723
## 9  0.009727585  64.00 0.7848594 0.9745255 0.2845588 0.07741087 0.010615423 0.09848550
## 10 0.009727585 128.00 0.7848594 0.9733895 0.2783088 0.07741087 0.010922892 0.10913126
```

Similar to before, we see that smaller values of the cost parameter \\(C\\) (\\(C \\approx\\) 0\.25–1\) provide better cross\-validated AUC scores on the training data. Also, notice how in addition to ROC we also get the corresponding sensitivity (true positive rate) and specificity (true negative rate). In this case, sensitivity (column `Sens`) refers to the proportion of `No`s correctly predicted as `No` and specificity (column `Spec`) refers to the proportion of `Yes`s correctly predicted as `Yes`.
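Since predicted class probabilities are often what you actually want downstream, here is a minimal sketch (assuming the `churn_svm_auc` model and `churn_test` holdout set created earlier) of scoring new data:

```
# Predicted class probabilities for the holdout set (columns "No" and "Yes")
test_probs <- predict(churn_svm_auc, newdata = churn_test, type = "prob")
head(test_probs)

# Hard class predictions (the default) for comparison with the observed labels
test_class <- predict(churn_svm_auc, newdata = churn_test)
table(predicted = test_class, actual = churn_test$Attrition)
```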
We can succinctly describe the different classification metrics using **caret**’s `confusionMatrix()` function (see `?caret::confusionMatrix` for details): ``` confusionMatrix(churn_svm_auc) ## Cross-Validated (10 fold) Confusion Matrix ## ## (entries are percentual average cell counts across resamples) ## ## Reference ## Prediction No Yes ## No 81.2 9.8 ## Yes 2.7 6.3 ## ## Accuracy (average) : 0.8748 ``` In this case it is clear that we do a far better job at predicting the `No`s. 14\.5 Feature interpretation ---------------------------- Like many other ML algorithms, SVMs do not emit any natural measures of feature importance; however, we can use the **vip** package to quantify the importance of each feature using the permutation approach described later on in Chapter [16](iml.html#iml) (the **iml** and **DALEX** packages could also be used). Our metric function should reflect the fact that we trained the model using AUC. Any custom metric function provided to `vip()` should have the arguments `actual` and `predicted` (in that order). We illustrate this below where we wrap the `auc()` function from the **ModelMetrics** package (Hunt [2018](#ref-R-ModelMetrics)): Since we are using AUC as our metric, our prediction wrapper function should return the predicted class probabilities for the reference class of interest. In this case, we’ll use `"Yes"` as the reference class (to do this we’ll specify `reference_class = "Yes"` in the call to `vip::vip()`). Our prediction function looks like: ``` prob_yes <- function(object, newdata) { predict(object, newdata = newdata, type = "prob")[, "Yes"] } ``` To compute the variable importance scores we just call `vip()` with `method = "permute"` and pass our previously defined predictions wrapper to the `pred_wrapper` argument: ``` # Variable importance plot set.seed(2827) # for reproducibility vip(churn_svm_auc, method = "permute", nsim = 5, train = churn_train, target = "Attrition", metric = "auc", reference_class = "Yes", pred_wrapper = prob_yes) ``` The results indicate that `OverTime` (Yes/No) is the most important feature in predicting attrition. Next, we use the **pdp** package to construct PDPs for the top four features according to the permutation\-based variable importance scores (notice we set `prob = TRUE` in the call to `pdp::partial()` so that the feature effect plots are on the probability scale; see `?pdp::partial` for details). Additionally, since the predicted probabilities from our model come in two columns (`No` and `Yes`), we specify `which.class = 2` so that our interpretation is in reference to predicting `Yes`: ``` features <- c("OverTime", "WorkLifeBalance", "JobSatisfaction", "JobRole") pdps <- lapply(features, function(x) { partial(churn_svm_auc, pred.var = x, which.class = 2, prob = TRUE, plot = TRUE, plot.engine = "ggplot2") + coord_flip() }) grid.arrange(grobs = pdps, ncol = 2) ``` For instance, we see that employees with a low job satisfaction level have the highest probability of attriting, while those with a very high level of satisfaction tend to have the lowest probability. 14\.6 Final thoughts -------------------- SVMs have a number of advantages compared to other ML algorithms described in this book. First off, they attempt to directly maximize generalizability (i.e., accuracy). Since SVMs are essentially just convex optimization problems, we’re always guaranteed to find a global optimum (as opposed to potentially getting stuck in local optima as with DNNs). 
By softening the margin using a budget (or cost) parameter (\\(C\\)), SVMs are relatively robust to outliers. And finally, using kernel functions, SVMs are flexible enough to adapt to complex nonlinear decision boundaries (i.e., they can flexibly model nonlinear relationships). However, SVMs do carry a few disadvantages as well. For starters, they can be slow to train on tall data (i.e., \\(n \>\> p\\)). This is because SVMs essentially have to estimate at least one parameter for each row in the training data! Secondly, SVMs only produce predicted class labels; obtaining predicted class probabilities requires additional adjustments and computations not covered in this chapter. Lastly, special procedures (e.g., OVA and OVO) have to be used to handle multinomial classification problems with SVMs.
The SVM will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class.

Figure 14\.1: Examples of hyperplanes in 2\-D and 3\-D feature space.

While SVMs may seem mathematically frightening at first, the fundamental ideas behind them are incredibly intuitive and easy to understand. We'll illustrate these simple ideas using simulated binary classification data with two features. In this hypothetical example, we have two classes: (1\) households that own a riding lawn mower (\\(Y \= \+1\\)) and (2\) households that do not (\\(Y \= \-1\\)). We also have two features, household income (\\(X\_1\\)) and lot size (\\(X\_2\\)), that have been standardized (i.e., centered around zero with a standard deviation of one). Intuitively, we might expect households with a larger lot and a higher income to be more likely to own a riding mower. In fact, the two classes in the left side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes) are perfectly separable by a straight line (i.e., a hyperplane in 2\-D space).

### 14\.2\.1 The hard margin classifier

As you might imagine, for two separable classes, there are an infinite number of separating hyperplanes! This is illustrated in the right side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes) where we show the hyperplanes (i.e., decision boundaries) that result from a simple logistic regression model (GLM), a *linear discriminant analysis* (LDA; another popular classification tool), and an example of a *hard margin classifier* (HMC)—which we'll define in a moment. So which decision boundary is "best"? Well, it depends on how we define "best". If you were asked to draw a decision boundary with good generalization performance on the left side of Figure [14\.2](svm.html#fig:svm-separating-hyperplanes), how would it look to you? Naturally, you would probably draw a boundary that provides the maximum separation between the two classes, and that's exactly what the HMC is doing!

Figure 14\.2: Simulated binary classification data with two separable classes. *Left:* Raw data. *Right:* Raw data with example decision boundaries (in this case, separating hyperplanes) from various machine learning algorithms.

Although we can draw an unlimited number of separating hyperplanes, what we want is a separating hyperplane with good generalization performance! The HMC is one such "optimal" separating hyperplane and the simplest type of SVM. The HMC is optimal in the sense that it separates the two classes while maximizing the distance to the closest points from either class; see Figure [14\.3](svm.html#fig:svm-hmc) below. The decision boundary (i.e., hyperplane) from the HMC separates the two classes by maximizing the distance between them. This maximized distance is referred to as the margin \\(M\\) (the shaded areas in Figure [14\.3](svm.html#fig:svm-hmc)). Finding this decision boundary can also be done with simple geometry. Geometrically, finding the HMC for two separable classes amounts to the following:

1. Draw the *convex hull*[40](#fn40) around each class (these are the polygons surrounding each class in Figure [14\.3](svm.html#fig:svm-hmc)).
2. Draw the shortest line segment that connects the two convex hulls (this is the dotted line segment in Figure [14\.3](svm.html#fig:svm-hmc)).
3. The perpendicular bisector of this line segment is the HMC!
4. The margin boundaries are formed by drawing lines that pass through the support vectors and are parallel to the separating hyperplane (these are the dashed line segments in Figure [14\.3](svm.html#fig:svm-hmc)).

Figure 14\.3: HMC for the simulated riding mower data. The solid black line forms the decision boundary (in this case, a separating hyperplane), while the dashed lines form the boundaries of the margins (shaded regions) on each side of the hyperplane. The shortest distance between the two classes (i.e., the dotted line connecting the two convex hulls) has length \\(2M\\). Two of the training observations (solid red points) fall on the margin boundaries; in the context of SVMs (which we discuss later), these two points form the *support vectors*.

This can also be formulated as an optimization problem. Mathematically speaking, the HMC estimates the coefficients of the hyperplane by solving a quadratic programming problem with linear inequality constraints, in particular:

\\\[\\begin{align} \&\\underset{\\beta\_0, \\beta\_1, \\dots, \\beta\_p}{\\text{maximize}} \\quad M \\\\ \&\\text{subject to} \\quad \\begin{cases} \\sum\_{j \= 1}^p \\beta\_j^2 \= 1,\\\\ y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right) \\ge M,\\quad i \= 1, 2, \\dots, n \\end{cases} \\end{align}\\]

Put differently, the HMC finds the separating hyperplane that provides the largest margin/gap between the two classes. The width of both margin boundaries is \\(M\\). With the constraint \\(\\sum\_{j \= 1}^p \\beta\_j^2 \= 1\\), the quantity \\(y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right)\\) represents the distance from the \\(i\\)\-th data point to the decision boundary. Note that the solution to the optimization problem above does not allow any points to be on the wrong side of the margin; hence the term hard margin classifier.

### 14\.2\.2 The soft margin classifier

Sometimes perfect separation is achievable, but not desirable! Take, for example, the data in Figure [14\.4](svm.html#fig:svm-noisy). Here we added a single outlier at the point \\(\\left(0\.5, 1\\right)\\). While the data are still perfectly separable, the decision boundaries obtained using logistic regression and the HMC will not generalize well to new data and accuracy will suffer (i.e., these models are not robust to outliers in the feature space). The LDA model seems to produce a more reasonable decision boundary.

Figure 14\.4: Simulated binary classification data with an outlier at the point \\(\\left(0\.5, 1\\right)\\).

In this situation, we can loosen the constraints (or *soften the margin*) by allowing some points to be on the wrong side of the margin; this is referred to as the *soft margin classifier* (SMC). The SMC, similar to the HMC, estimates the coefficients of the hyperplane by solving the slightly modified optimization problem:

\\\[\\begin{align} \&\\underset{\\beta\_0, \\beta\_1, \\dots, \\beta\_p}{\\text{maximize}} \\quad M \\\\ \&\\text{subject to} \\quad \\begin{cases} \\sum\_{j \= 1}^p \\beta\_j^2 \= 1,\\\\ y\_i\\left(\\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\dots \+ \\beta\_p x\_{ip}\\right) \\ge M\\left(1 \- \\xi\_i\\right), \\quad i \= 1, 2, \\dots, n\\\\ \\xi\_i \\ge 0, \\\\ \\sum\_{i \= 1}^n \\xi\_i \\le C\\end{cases} \\end{align}\\]

Similar to before, the SMC finds the separating hyperplane that provides the largest margin/gap between the two classes, but allows for some of the points to cross over the margin boundaries. Here \\(C\\) is the allowable budget for the total amount of overlap and is our first tunable hyperparameter for the SVM. By varying \\(C\\), we allow points to violate the margin, which helps make the SVM robust to outliers. For example, in Figure [14\.5](svm.html#fig:smc), we fit the SMC at both extremes: \\(C \= 0\\) (the HMC) and \\(C \= \\infty\\) (maximum overlap). Ideally, the hyperplane giving the decision boundary with the best generalization performance lies somewhere in between these two extremes and can be determined using, for example, *k*\-fold CV.

Figure 14\.5: Soft margin classifier. Left: Zero budget for overlap (i.e., the HMC). Right: Maximum allowable overlap. The solid black points represent the support vectors that define the margin boundaries.
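The following short sketch (not from the book; it uses its own simulated two\-class data) shows how this trade\-off surfaces when fitting a linear\-kernel SVM with **kernlab**. Note that `kernlab::ksvm()` parameterizes the trade\-off as a *cost* rather than a budget: a large `C` penalizes margin violations heavily (approaching the HMC), while a small `C` yields a softer margin with more support vectors.

```
# A minimal sketch (with simulated data) comparing a soft and a (nearly) hard
# margin fit. In ksvm(), C is the cost of a margin violation.
library(kernlab)

set.seed(123)
n <- 100
x <- rbind(
  matrix(rnorm(n, mean = 0), ncol = 2),   # class -1
  matrix(rnorm(n, mean = 2), ncol = 2)    # class +1
)
y <- factor(rep(c(-1, 1), each = n / 2))

fit_soft <- ksvm(x, y, kernel = "vanilladot", C = 0.01)  # soft margin
fit_hard <- ksvm(x, y, kernel = "vanilladot", C = 100)   # (nearly) hard margin

nSV(fit_soft)  # many support vectors (wide, soft margin)
nSV(fit_hard)  # far fewer support vectors (narrow margin)

plot(fit_hard, data = x)  # visualize the decision boundary and margin
```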
14\.3 The support vector machine
--------------------------------

So far, we've only used linear decision boundaries. Such a classifier is likely too restrictive to be useful in practice, especially when compared to other algorithms that can adapt to nonlinear relationships. Fortunately, we can use a simple trick, called the *kernel trick*, to overcome this. A deep understanding of the kernel trick requires an understanding of *kernel functions* and *reproducing kernel Hilbert spaces* 😱. Fortunately, we can use a couple of illustrations in 2\-D/3\-D feature space to drive home the key idea.

Consider, for example, the circle data on the left side of Figure [14\.6](svm.html#fig:svm-circle). This is another binary classification problem. The first class forms a circle in the middle of a square, while the remaining points form the second class. Although these two classes do not overlap (although they appear to overlap slightly due to the size of the plotted points), they are not perfectly separable by a hyperplane (i.e., a straight line). However, we can enlarge the feature space by adding a third feature, say \\(X\_3 \= X\_1^2 \+ X\_2^2\\)—this is akin to using the polynomial kernel function discussed below with \\(d \= 2\\). The data are plotted in the enlarged feature space in the middle of Figure [14\.6](svm.html#fig:svm-circle). In this new three\-dimensional feature space, the two classes are perfectly separable by a hyperplane (i.e., a flat plane); though it is hard to see (see the middle of Figure [14\.6](svm.html#fig:svm-circle)), the green points form the tip of the hyperboloid in 3\-D feature space (i.e., \\(X\_3\\) is smaller for all the green points leaving a small gap between the two classes). The resulting decision boundary is then projected back onto the original feature space resulting in a non\-linear decision boundary which perfectly separates the original data (see the right side of Figure [14\.6](svm.html#fig:svm-circle))!

Figure 14\.6: Simulated nested circle data. *Left:* The two classes in the original (2\-D) feature space. *Middle:* The two classes in the enlarged (3\-D) feature space. *Right:* The decision boundary from the HMC in the enlarged feature space projected back into the original feature space.

In essence, SVMs use the kernel trick to enlarge the feature space using basis functions (e.g., like in MARS or polynomial regression). In this enlarged (kernel\-induced) feature space, a hyperplane can often separate the two classes. The resulting decision boundary, which is linear in the enlarged feature space, will be nonlinear when transformed back onto the original feature space.
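A quick sketch (with our own simulated data, not the book's) makes the enlargement idea tangible: points inside a circle versus points outside it are not linearly separable in \\((X\_1, X\_2)\\), but a single threshold on \\(X\_3 \= X\_1^2 \+ X\_2^2\\) separates them, which is exactly a flat plane in the enlarged 3\-D space. In this toy construction the separation is exact by design, since the classes were defined by that same rule.

```
# Enlarging the feature space by hand for simulated nested-circle data
set.seed(123)
circle <- data.frame(
  X1 = runif(500, min = -1, max = 1),
  X2 = runif(500, min = -1, max = 1)
)
circle$class <- factor(ifelse(circle$X1^2 + circle$X2^2 < 0.25, "inner", "outer"))

# Add the third feature X3 = X1^2 + X2^2
circle$X3 <- circle$X1^2 + circle$X2^2

# In the enlarged space a single threshold on X3 separates the classes perfectly
table(circle$class, predicted = ifelse(circle$X3 < 0.25, "inner", "outer"))
```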
Popular kernel functions used by SVMs include:

* *d*\-th degree polynomial: \\(K\\left(x, x'\\right) \= \\gamma\\left(1 \+ \\langle x, x' \\rangle\\right) ^ d\\)
* Radial basis function: \\(K\\left(x, x'\\right) \= \\exp\\left(\-\\gamma \\lVert x \- x'\\rVert ^ 2\\right)\\)
* Hyperbolic tangent: \\(K\\left(x, x'\\right) \= \\tanh\\left(k\_1\\lVert x \- x'\\rVert \+ k\_2\\right)\\)

Here \\(\\langle x, x' \\rangle \= \\sum\_{i \= 1}^n x\_i x\_i'\\) is called an *inner product*. Notice how each of these kernel functions includes hyperparameters that need to be tuned. For example, the polynomial kernel includes a degree term \\(d\\) and a scale parameter \\(\\gamma\\). Similarly, the radial basis kernel includes a \\(\\gamma\\) parameter related to the inverse of the \\(\\sigma\\) parameter of a normal distribution. In R, you can use **caret**'s `getModelInfo()` to extract the hyperparameters from various SVM implementations with different kernel functions, for example:

```
# Linear (i.e., soft margin classifier)
caret::getModelInfo("svmLinear")$svmLinear$parameters
##   parameter   class label
## 1         C numeric  Cost

# Polynomial kernel
caret::getModelInfo("svmPoly")$svmPoly$parameters
##   parameter   class             label
## 1    degree numeric Polynomial Degree
## 2     scale numeric             Scale
## 3         C numeric              Cost

# Radial basis kernel
caret::getModelInfo("svmRadial")$svmRadial$parameters
##   parameter   class label
## 1     sigma numeric Sigma
## 2         C numeric  Cost
```

Through the use of various kernel functions, SVMs are extremely flexible and capable of estimating complex nonlinear decision boundaries. For example, the right side of Figure [14\.7](svm.html#fig:two-spirals) demonstrates the flexibility of an SVM using a radial basis kernel applied to the two spirals benchmark problem (see `?mlbench::mlbench.spirals` for details). As a reference, the left side of Figure [14\.7](svm.html#fig:two-spirals) shows the decision boundary from a default random forest fit using the **ranger** package. The random forest decision boundary, while flexible, has trouble capturing smooth decision boundaries (like a spiral). The SVM with a radial basis kernel, on the other hand, does a great job (and in this case is more accurate).

Figure 14\.7: Two spirals benchmark problem. *Left:* Decision boundary from a random forest. *Right:* Decision boundary from an SVM with radial basis kernel.

The radial basis kernel is extremely flexible and, as a rule of thumb, we generally start with this kernel when fitting SVMs in practice.

### 14\.3\.1 More than two classes

The SVM, as introduced, is applicable to only two classes! What do we do when we have more than two classes? There are two general approaches: *one\-versus\-all* (OVA) and *one\-versus\-one* (OVO). In OVA, we fit an SVM for each class (one class versus the rest) and classify to the class for which the margin is the largest. In OVO, we fit all \\(\\binom{\\\# \\ classes}{2}\\) pairwise SVMs and classify to the class that wins the most pairwise competitions. All the popular implementations of SVMs, including **kernlab**, provide such approaches to multinomial classification.

### 14\.3\.2 Support vector regression

SVMs can also be extended to regression problems (i.e., when the outcome is continuous). In essence, SVMs find a separating hyperplane in an enlarged feature space that generally results in a nonlinear decision boundary in the original feature space with good generalization performance. This enlarged feature space is constructed using special functions called kernel functions. The idea behind support vector regression (SVR) is very similar: find a good fitting hyperplane in a kernel\-induced feature space that will have good generalization performance using the original features. Although there are many flavors of SVR, we'll introduce the most common: *\\(\\epsilon\\)\-insensitive loss regression*.

Recall that the least squares (LS) approach to function estimation (Chapter [4](linear-regression.html#linear-regression)) minimizes the sum of the squared residuals, where in general we define the residual as \\(r\\left(x, y\\right) \= y \- f\\left(x\\right)\\). (In ordinary linear regression \\(f\\left(x\\right) \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\dots \+ \\beta\_p x\_p\\)). The problem with LS is that it involves squaring the residuals, which gives outliers undue influence on the fitted regression function. Although we could rely on the MAE metric (which looks at the absolute value as opposed to the squared residuals), another intuitive loss metric, called *\\(\\epsilon\\)\-insensitive loss*, is more robust to outliers:

\\\[ L\_{\\epsilon} \= \\max\\left(0, \\left\|r\\left(x, y\\right)\\right\| \- \\epsilon\\right) \\]

Here \\(\\epsilon\\) is a threshold set by the analyst. In essence, we're forming a margin around the regression curve of width \\(\\epsilon\\) (see Figure [14\.8](svm.html#fig:eps-band)), and trying to contain as many data points within the margin as possible with a minimal number of violations. The data points that satisfy \\(r\\left(x, y\\right) \\pm \\epsilon\\) form the support vectors that define the margin. The model is said to be *\\(\\epsilon\\)\-insensitive* because the points within the margin have no influence on the fitted regression line! Similar to SVMs, we can use kernel functions to capture nonlinear relationships (in this case, the support vectors are those points whose residuals satisfy \\(r\\left(x, y\\right) \\pm \\epsilon\\) in the kernel\-induced feature space).

Figure 14\.8: \\(\\epsilon\\)\-insensitive regression band. The solid black line represents the estimated regression curve \\(f\\left(x\\right)\\).

To illustrate, we simulated data from the sinc function \\(\\sin\\left(x\\right) / x\\) with added Gaussian noise (i.e., random errors from a normal distribution). The simulated data are shown in Figure [14\.9](svm.html#fig:sinc). This is a highly nonlinear, but smooth function of \\(x\\).

Figure 14\.9: Simulated data from a sinc function with added noise.

We fit three regression models to these data: a default MARS model (Chapter [7](mars.html#mars)), a default RF (Chapter [11](random-forest.html#random-forest)), and an SVR model using \\(\\epsilon\\)\-insensitive loss and a radial basis kernel with default tuning parameters (technically, we set `kpar = "automatic"` which tells `kernlab::ksvm()` to use `kernlab::sigest()` to find a reasonable estimate for the kernel's scale parameter). To use \\(\\epsilon\\)\-insensitive loss regression, specify `type = "eps-svr"` in the call to `kernlab::ksvm()` (the default for \\(\\epsilon\\) is `epsilon = 0.1`). The results are displayed in Figure [14\.10](svm.html#fig:sinc-predictions). Although this is a simple one\-dimensional problem, the MARS and RF struggle to adapt to the smooth, but highly nonlinear function. The MARS model, while probably effective, is too rigid and fails to adequately capture the relationship towards the left\-hand side. The RF is too wiggly and is indicative of slight overfitting (perhaps tuning the minimum observations per node can help here and is left as an exercise on the book website). The SVR model, on the other hand, works quite well and provides a smooth fit to the data!

Figure 14\.10: Model fits to the simulated sinc data.

Applying support vector regression to the Ames housing example is left as an exercise for the reader on the book's website.
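Since the chapter's simulation code is not shown, the following sketch (with our own simulated version of the sinc data) illustrates the SVR call just described: \\(\\epsilon\\)\-insensitive loss with a radial basis kernel and `kpar = "automatic"` so that `kernlab::sigest()` chooses the kernel scale.

```
# A minimal eps-insensitive SVR sketch on our own simulated sinc data
library(kernlab)

set.seed(123)
x <- seq(-20, 20, length.out = 200)
y <- sin(x) / x + rnorm(200, sd = 0.03)                    # sinc function plus Gaussian noise
y[is.nan(y)] <- 1                                          # sinc(0) = 1, in case the grid contains 0

svr_fit <- ksvm(as.matrix(x), y, type = "eps-svr", kernel = "rbfdot",
                kpar = "automatic", epsilon = 0.1)

plot(x, y, col = "grey60", pch = 16, main = "eps-insensitive SVR fit")
lines(x, predict(svr_fit, as.matrix(x)), lwd = 2)          # smooth fitted regression curve
```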
14\.4 Job attrition example
---------------------------

Returning to the employee attrition example, we tune and fit an SVM with a radial basis kernel (recall our earlier rule of thumb regarding kernel functions). Recall that the radial basis kernel has two hyperparameters: \\(\\sigma\\) and \\(C\\). While we can use *k*\-fold CV to find good estimates of both parameters, hyperparameter tuning can be time\-consuming for SVMs[41](#fn41). Fortunately, it is possible to use the training data to find a good estimate of \\(\\sigma\\). This is provided by the `kernlab::sigest()` function. This function estimates the range of \\(\\sigma\\) values which would return good results when used with a radial basis SVM. Ideally, any value within the range of estimates returned by this function should produce reasonable results. This is the approach taken by **caret**'s `train()` function for its radial basis SVM methods: `"svmRadial"` (which we use below) fixes \\(\\sigma\\) at a `sigest()`\-based estimate and tunes only \\(C\\), while `"svmRadialSigma"` additionally tunes over a small range of \\(\\sigma\\) values. Also, note that a reasonable search grid for the cost parameter \\(C\\) is an exponentially growing series, for example \\(2^{\-2}, 2^{\-1}, 2^{0}, 2^{1}, 2^{2},\\) etc. See `caret::getModelInfo("svmRadial")` for details. Next, we'll use **caret**'s `train()` function to tune and train an SVM using the radial basis kernel (`method = "svmRadial"`), with \\(\\sigma\\) estimated automatically from the training data, and 10\-fold CV.
```
# Tune an SVM with radial basis kernel
set.seed(1854)  # for reproducibility
churn_svm <- train(
  Attrition ~ ., 
  data = churn_train,
  method = "svmRadial",               
  preProcess = c("center", "scale"),  
  trControl = trainControl(method = "cv", number = 10),
  tuneLength = 10
)
```

Plotting the results, we see that smaller values of the cost parameter (\\(C \\approx\\) 2–8\) provide better cross\-validated accuracy scores for these training data:

```
# Plot results
ggplot(churn_svm) + theme_light()
```

```
# Print results
churn_svm$results
##          sigma      C  Accuracy     Kappa  AccuracySD   KappaSD
## 1  0.009590249   0.25 0.8388542 0.0000000 0.004089627 0.0000000
## 2  0.009590249   0.50 0.8388542 0.0000000 0.004089627 0.0000000
## 3  0.009590249   1.00 0.8515233 0.1300469 0.014427649 0.1013069
## 4  0.009590249   2.00 0.8708857 0.3526368 0.023749215 0.1449342
## 5  0.009590249   4.00 0.8709611 0.4172884 0.026640331 0.1302496
## 6  0.009590249   8.00 0.8660873 0.4242800 0.026271496 0.1206188
## 7  0.009590249  16.00 0.8563495 0.4012563 0.026866012 0.1298460
## 8  0.009590249  32.00 0.8515138 0.3831775 0.028717623 0.1338717
## 9  0.009590249  64.00 0.8515138 0.3831775 0.028717623 0.1338717
## 10 0.009590249 128.00 0.8515138 0.3831775 0.028717623 0.1338717
```

### 14\.4\.1 Class weights

By default, most classification algorithms treat misclassification costs equally. This is not ideal in situations where one type of misclassification is more important than another or there is a severe class imbalance (which is usually the case). SVMs (as well as most tree\-based methods) allow you to assign specific misclassification costs to the different outcomes. In **caret** and **kernlab**, this is accomplished via the `class.weights` argument, which is just a named vector of weights for the different classes. In the employee attrition example, for instance, we might specify

```
class.weights = c("No" = 1, "Yes" = 10)
```

in the call to `caret::train()` or `kernlab::ksvm()` to make false negatives (i.e., predicting "No" when the truth is "Yes") ten times more costly than false positives (i.e., predicting "Yes" when the truth is "No"). Cost\-sensitive training with SVMs is left as an exercise on the book website.

### 14\.4\.2 Class probabilities

SVMs classify new observations by determining which side of the decision boundary they fall on; consequently, they do not automatically provide predicted class probabilities! In order to obtain predicted class probabilities from an SVM, additional parameters need to be estimated as described in Platt ([1999](#ref-platt-1999-probabilistic)). In practice, predicted class probabilities are often more useful than the predicted class labels. For instance, we would need the predicted class probabilities if we were using an optimization metric like AUC (Chapter [2](process.html#process)), as opposed to classification accuracy.
In that case, we can set `prob.model = TRUE` in the call to `kernlab::ksvm()` or `classProbs = TRUE` in the call to `caret::trainControl()` (for details, see `?kernlab::ksvm` and the references therein):

```
# Control params for SVM
ctrl <- trainControl(
  method = "cv", 
  number = 10, 
  classProbs = TRUE,                 
  summaryFunction = twoClassSummary  # also needed for AUC/ROC
)

# Tune an SVM
set.seed(5628)  # for reproducibility
churn_svm_auc <- train(
  Attrition ~ ., 
  data = churn_train,
  method = "svmRadial",               
  preProcess = c("center", "scale"),  
  metric = "ROC",  # area under ROC curve (AUC)
  trControl = ctrl,
  tuneLength = 10
)

# Print results
churn_svm_auc$results
##          sigma      C       ROC      Sens      Spec      ROCSD      SensSD     SpecSD
## 1  0.009727585   0.25 0.8379109 0.9675488 0.3933824 0.06701067 0.012073306 0.11466031
## 2  0.009727585   0.50 0.8376397 0.9652767 0.3761029 0.06694554 0.010902039 0.14775214
## 3  0.009727585   1.00 0.8377081 0.9652633 0.4055147 0.06725101 0.007798768 0.09871169
## 4  0.009727585   2.00 0.8343294 0.9756750 0.3459559 0.06803483 0.012712528 0.14320366
## 5  0.009727585   4.00 0.8200427 0.9745255 0.3452206 0.07188838 0.013092221 0.12082675
## 6  0.009727585   8.00 0.8123546 0.9699278 0.3327206 0.07582032 0.013513393 0.11819788
## 7  0.009727585  16.00 0.7915612 0.9756883 0.2849265 0.07791598 0.010094292 0.10700782
## 8  0.009727585  32.00 0.7846566 0.9745255 0.2845588 0.07752526 0.010615423 0.08923723
## 9  0.009727585  64.00 0.7848594 0.9745255 0.2845588 0.07741087 0.010615423 0.09848550
## 10 0.009727585 128.00 0.7848594 0.9733895 0.2783088 0.07741087 0.010922892 0.10913126
```

Similar to before, we see that smaller values of the cost parameter \\(C\\) (here the best AUC values occur for \\(C \\le 1\\)) provide better cross\-validated AUC scores on the training data. Also, notice how in addition to ROC we also get the corresponding sensitivity (true positive rate) and specificity (true negative rate). In this case, sensitivity (column `Sens`) refers to the proportion of `No`s correctly predicted as `No` and specificity (column `Spec`) refers to the proportion of `Yes`s correctly predicted as `Yes`.

We can succinctly describe the different classification metrics using **caret**'s `confusionMatrix()` function (see `?caret::confusionMatrix` for details):

```
confusionMatrix(churn_svm_auc)
## Cross-Validated (10 fold) Confusion Matrix 
##
## (entries are percentual average cell counts across resamples)
##  
##           Reference
## Prediction   No  Yes
##        No  81.2  9.8
##        Yes  2.7  6.3
##                             
##  Accuracy (average) : 0.8748
```

In this case it is clear that we do a far better job at predicting the `No`s.
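As a short follow\-up (not shown in the chapter), we can also score the 30% holdout set created earlier. This sketch assumes `churn_svm_auc` and `churn_test` from the previous code chunks are still in the session; because the model was trained with `classProbs = TRUE`, we can request class probabilities with `type = "prob"` and compute a test\-set AUC.

```
# Holdout-set performance for the tuned SVM
test_prob  <- predict(churn_svm_auc, newdata = churn_test, type = "prob")[, "Yes"]
test_class <- predict(churn_svm_auc, newdata = churn_test)

# Confusion matrix on the holdout set
caret::confusionMatrix(test_class, churn_test$Attrition)

# Test-set AUC, treating "Yes" as the event of interest
ModelMetrics::auc(ifelse(churn_test$Attrition == "Yes", 1, 0), test_prob)
```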
14\.5 Feature interpretation
----------------------------

Like many other ML algorithms, SVMs do not emit any natural measures of feature importance; however, we can use the **vip** package to quantify the importance of each feature using the permutation approach described later on in Chapter [16](iml.html#iml) (the **iml** and **DALEX** packages could also be used). Our metric function should reflect the fact that we trained the model using AUC. Any custom metric function provided to `vip()` should have the arguments `actual` and `predicted` (in that order); a thin wrapper around the `auc()` function from the **ModelMetrics** package (Hunt [2018](#ref-R-ModelMetrics)) satisfies this. Since we are using AUC as our metric, our prediction wrapper function should return the predicted class probabilities for the reference class of interest. In this case, we'll use `"Yes"` as the reference class (to do this we'll specify `reference_class = "Yes"` in the call to `vip::vip()`). Our prediction function looks like:

```
prob_yes <- function(object, newdata) {
  predict(object, newdata = newdata, type = "prob")[, "Yes"]
}
```

To compute the variable importance scores we just call `vip()` with `method = "permute"` and pass our previously defined predictions wrapper to the `pred_wrapper` argument:

```
# Variable importance plot
set.seed(2827)  # for reproducibility
vip(churn_svm_auc, method = "permute", nsim = 5, train = churn_train, 
    target = "Attrition", metric = "auc", reference_class = "Yes", 
    pred_wrapper = prob_yes)
```

The results indicate that `OverTime` (Yes/No) is the most important feature in predicting attrition. Next, we use the **pdp** package to construct PDPs for the top four features according to the permutation\-based variable importance scores (notice we set `prob = TRUE` in the call to `pdp::partial()` so that the feature effect plots are on the probability scale; see `?pdp::partial` for details). Additionally, since the predicted probabilities from our model come in two columns (`No` and `Yes`), we specify `which.class = 2` so that our interpretation is in reference to predicting `Yes`:

```
features <- c("OverTime", "WorkLifeBalance", 
              "JobSatisfaction", "JobRole")
pdps <- lapply(features, function(x) {
  partial(churn_svm_auc, pred.var = x, which.class = 2,  
          prob = TRUE, plot = TRUE, plot.engine = "ggplot2") +
    coord_flip()
})
grid.arrange(grobs = pdps, ncol = 2)
```

For instance, we see that employees with a low job satisfaction level have the highest probability of attriting, while those with a very high level of satisfaction tend to have the lowest probability.

14\.6 Final thoughts
--------------------

SVMs have a number of advantages compared to other ML algorithms described in this book. First off, they attempt to directly maximize generalizability (i.e., accuracy). Since SVMs are essentially just convex optimization problems, we're always guaranteed to find a global optimum (as opposed to potentially getting stuck in local optima as with DNNs).
Secondly, SVMs only produce predicted class labels; obtaining predicted class probabilities requires additional adjustments and computations not covered in this chapter. Lastly, special procedures (e.g., OVA and OVO) have to be used to handle multinomial classification problems with SVMs.
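As a small illustration of that last point (using the classic `iris` data, which is not a data set from this chapter), `kernlab::ksvm()` handles more than two classes automatically by fitting pairwise (one\-versus\-one) SVMs under the hood, so no extra code is required from the user.

```
# Multiclass SVM sketch: kernlab fits all pairwise (OVO) classifiers internally
library(kernlab)

set.seed(123)
multi_fit <- ksvm(Species ~ ., data = iris, kernel = "rbfdot", C = 1)
multi_fit                                   # printing shows the SVM type and support vector count
table(predict(multi_fit, iris), iris$Species)  # training-set confusion matrix across three classes
```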
Chapter 15 Stacked Models ========================= In the previous chapters, you’ve learned how to train individual learners, which in the context of this chapter will be referred to as *base learners*. ***Stacking*** (sometimes called “stacked generalization”) involves training a new learning algorithm to combine the predictions of several base learners. First, the base learners are trained using the available training data, then a combiner or meta algorithm, called the *super learner*, is trained to make a final prediction based on the predictions of the base learners. Such stacked ensembles tend to outperform any of the individual base learners (e.g., a single RF or GBM) and have been shown to represent an asymptotically optimal system for learning (Laan, Polley, and Hubbard [2003](#ref-super-laan-2003)). 15\.1 Prerequisites ------------------- This chapter leverages the following packages, with the emphasis on **h2o**: ``` # Helper packages library(rsample) # for creating our train-test splits library(recipes) # for minor feature engineering tasks # Modeling packages library(h2o) # for fitting stacked models ``` To illustrate key concepts we continue with the Ames housing example from previous chapters: ``` # Load and split the Ames housing data ames <- AmesHousing::make_ames() set.seed(123) # for reproducibility split <- initial_split(ames, strata = "Sale_Price") ames_train <- training(split) ames_test <- testing(split) # Make sure we have consistent categorical levels blueprint <- recipe(Sale_Price ~ ., data = ames_train) %>% step_other(all_nominal(), threshold = 0.005) # Create training & test sets for h2o train_h2o <- prep(blueprint, training = ames_train, retain = TRUE) %>% juice() %>% as.h2o() test_h2o <- prep(blueprint, training = ames_train) %>% bake(new_data = ames_test) %>% as.h2o() # Get response and feature names Y <- "Sale_Price" X <- setdiff(names(ames_train), Y) ``` 15\.2 The Idea -------------- Leo Breiman, known for his work on classification and regression trees and random forests, formalized stacking in his 1996 paper on *Stacked Regressions* (Breiman [1996](#ref-breiman1996stacked)[b](#ref-breiman1996stacked)). Although the idea originated in (Wolpert [1992](#ref-stacked-wolpert-1992)) under the name “Stacked Generalizations”, the modern form of stacking that uses internal k\-fold CV was Breiman’s contribution. However, it wasn’t until 2007 that the theoretical background for stacking was developed, and also when the algorithm took on the cooler name, ***Super Learner*** (Van der Laan, Polley, and Hubbard [2007](#ref-van2007super)). Moreover, the authors illustrated that super learners will learn an optimal combination of the base learner predictions and will typically perform as well as or better than any of the individual models that make up the stacked ensemble. Until this time, the mathematical reasons for why stacking worked were unknown and stacking was considered a black art. ### 15\.2\.1 Common ensemble methods Ensemble machine learning methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms. The idea of combining multiple models rather than selecting the single best is well\-known and has been around for a long time. In fact, many of the popular modern machine learning algorithms (including ones in previous chapters) are actually ensemble methods. 
For example, bagging (Chapter [10](bagging.html#bagging)) and random forests (Chapter [11](random-forest.html#random-forest)) are ensemble approaches that average the predictions from many decision trees to reduce prediction variance and are robust to outliers and noisy data; ultimately leading to greater predictive accuracy. Boosted decision trees (Chapter [12](gbm.html#gbm)) are another ensemble approach that slowly learns unique patterns in the data by sequentially combining individual, shallow trees. Stacking, on the other hand, is designed to ensemble a *diverse group of strong learners*. ### 15\.2\.2 Super learner algorithm The super learner algorithm consists of three phases: 1. Set up the ensemble * Specify a list of \\(L\\) base learners (with a specific set of model parameters). * Specify a meta learning algorithm. This can be any one of the algorithms discussed in the previous chapters but most often is some form of regularized regression. 2. Train the ensemble * Train each of the \\(L\\) base learners on the training set. * Perform *k*\-fold CV on each of the base learners and collect the cross\-validated predictions from each (the same *k*\-folds must be used for each base learner). These predicted values represent \\(p\_1, \\dots, p\_L\\) in Eq. [(15\.1\)](stacking.html#eq:level1data). * The \\(N\\) cross\-validated predicted values from each of the \\(L\\) algorithms can be combined to form a new \\(N \\times L\\) feature matrix (represented by \\(Z\\) in Eq. [(15\.1\)](stacking.html#eq:level1data). This matrix, along with the original response vector (\\(y\\)), are called the “level\-one” data. (\\(N \=\\) number of rows in the training set.)\\\[\\begin{equation} \\tag{15\.1} n \\Bigg \\{ \\Bigg \[ p\_1 \\Bigg ] \\cdots \\Bigg \[ p\_L \\Bigg ] \\Bigg \[ y \\Bigg ] \\rightarrow n \\Bigg \\{ \\overbrace{\\Bigg \[ \\quad Z \\quad \\Bigg ]}^L \\Bigg \[ y \\Bigg ] \\end{equation}\\] * Train the meta learning algorithm on the level\-one data (\\(y \= f\\left(Z\\right)\\)). The “ensemble model” consists of the \\(L\\) base learning models and the meta learning model, which can then be used to generate predictions on new data. 3. Predict on new data. * To generate ensemble predictions, first generate predictions from the base learners. * Feed those predictions into the meta learner to generate the ensemble prediction. Stacking never does worse than selecting the single best base learner on the training data (but not necessarily the validation or test data). The biggest gains are usually produced when stacking base learners that have high variability, and uncorrelated, predicted values. The more similar the predicted values are between the base learners, the less advantage there is to combining them. ### 15\.2\.3 Available packages There are a few package implementations for model stacking in the R ecosystem. **SuperLearner** (Polley et al. [2019](#ref-R-SuperLearner)) provides the original Super Learner and includes a clean interface to 30\+ algorithms. Package **subsemble** (LeDell et al. [2014](#ref-R-subsemble)) also provides stacking via the super learner algorithm discussed above; however, it also offers improved parallelization over the **SuperLearner** package and implements the subsemble algorithm (Sapp, Laan, and Canny [2014](#ref-sapp2014subsemble)).[42](#fn42) Unfortunately, **subsemble** is currently only available via GitHub and is primarily maintained for backward compatibility rather than forward development. 
A third package, **caretEnsemble** (Deane\-Mayer and Knowles [2016](#ref-R-caretEnsemble)), also provides an approach for stacking, but it implements a bootstrapped (rather than cross\-validated) version of stacking. The bootstrapped version will train faster since bootstrapping (with a train/test set) requires a fraction of the work of *k*\-fold CV; however, the ensemble performance often suffers as a result of this shortcut. This chapter focuses on the use of **h2o** for model stacking. **h2o** provides an efficient implementation of stacking and allows you to stack existing base learners, stack a grid search, and run an automated machine learning search with stacked results. All three approaches will be discussed.

15\.3 Stacking existing models
------------------------------

The first approach to stacking is to train individual base learner models separately and then stack them together. For example, say we found the optimal hyperparameters that provided the best predictive accuracy for the following algorithms:

1. Regularized regression base learner.
2. Random forest base learner.
3. GBM base learner.
4. XGBoost base learner.

We can train each of these models individually (see the code chunk below). However, to stack them later we need to do a few specific things:

1. All models must be trained on the same training set.
2. All models must be trained with the same number of CV folds.
3. All models must use the same fold assignment to ensure the same observations are used (we can do this by using `fold_assignment = "Modulo"`).
4. The cross\-validated predictions from all of the models must be preserved by setting `keep_cross_validation_predictions = TRUE`. This is the data which is used to train the meta learner algorithm in the ensemble.

```
# Train & cross-validate a GLM model
best_glm <- h2o.glm(
  x = X, y = Y, training_frame = train_h2o, alpha = 0.1,
  remove_collinear_columns = TRUE, nfolds = 10, fold_assignment = "Modulo",
  keep_cross_validation_predictions = TRUE, seed = 123
)

# Train & cross-validate a RF model
best_rf <- h2o.randomForest(
  x = X, y = Y, training_frame = train_h2o, ntrees = 1000, mtries = 20,
  max_depth = 30, min_rows = 1, sample_rate = 0.8, nfolds = 10,
  fold_assignment = "Modulo", keep_cross_validation_predictions = TRUE,
  seed = 123, stopping_rounds = 50, stopping_metric = "RMSE",
  stopping_tolerance = 0
)

# Train & cross-validate a GBM model
best_gbm <- h2o.gbm(
  x = X, y = Y, training_frame = train_h2o, ntrees = 5000, learn_rate = 0.01,
  max_depth = 7, min_rows = 5, sample_rate = 0.8, nfolds = 10,
  fold_assignment = "Modulo", keep_cross_validation_predictions = TRUE,
  seed = 123, stopping_rounds = 50, stopping_metric = "RMSE",
  stopping_tolerance = 0
)

# Train & cross-validate an XGBoost model
best_xgb <- h2o.xgboost(
  x = X, y = Y, training_frame = train_h2o, ntrees = 5000, learn_rate = 0.05,
  max_depth = 3, min_rows = 3, sample_rate = 0.8, categorical_encoding = "Enum",
  nfolds = 10, fold_assignment = "Modulo",
  keep_cross_validation_predictions = TRUE, seed = 123, stopping_rounds = 50,
  stopping_metric = "RMSE", stopping_tolerance = 0
)
```

We can now use `h2o.stackedEnsemble()` to stack these models. Note how we feed the base learner models into the `base_models = list()` argument. Here, we apply a random forest model as the metalearning algorithm. However, you could also apply regularized regression, GBM, or a neural network as the metalearner (see `?h2o.stackedEnsemble` for details).

```
# Train a stacked tree ensemble
ensemble_tree <- h2o.stackedEnsemble(
  x = X, y = Y, training_frame = train_h2o, model_id = "my_tree_ensemble",
  base_models = list(best_glm, best_rf, best_gbm, best_xgb),
  metalearner_algorithm = "drf"
)
```

Since our ensemble is built on the CV results of the base learners, but has no cross\-validation results of its own, we'll use the test data to compare our results. If we assess the performance of our base learners on the test data we see that the stochastic GBM base learner has the lowest RMSE of 20859\.92\. The stacked model achieves a small 1% performance gain with an RMSE of 20664\.56\.

```
# Get results from base learners
get_rmse <- function(model) {
  results <- h2o.performance(model, newdata = test_h2o)
  results@metrics$RMSE
}
list(best_glm, best_rf, best_gbm, best_xgb) %>%
  purrr::map_dbl(get_rmse)
## [1] 30024.67 23075.24 20859.92 21391.20

# Stacked results
h2o.performance(ensemble_tree, newdata = test_h2o)@metrics$RMSE
## [1] 20664.56
```

We previously stated that the biggest gains are usually produced when we are stacking base learners that have high variability, and uncorrelated, predicted values. If we assess the correlation of the CV predictions we can see strong correlation across the base learners, especially with the three tree\-based learners. Consequently, stacking provides less advantage in this situation since the base learners have highly correlated predictions; however, a 1% performance improvement can still be a considerable improvement depending on the business context.

```
data.frame(
  GLM_pred = as.vector(h2o.getFrame(best_glm@model$cross_validation_holdout_predictions_frame_id$name)),
  RF_pred = as.vector(h2o.getFrame(best_rf@model$cross_validation_holdout_predictions_frame_id$name)),
  GBM_pred = as.vector(h2o.getFrame(best_gbm@model$cross_validation_holdout_predictions_frame_id$name)),
  XGB_pred = as.vector(h2o.getFrame(best_xgb@model$cross_validation_holdout_predictions_frame_id$name))
) %>% cor()
##           GLM_pred   RF_pred  GBM_pred  XGB_pred
## GLM_pred 1.0000000 0.9390229 0.9291982 0.9345048
## RF_pred  0.9390229 1.0000000 0.9920349 0.9821944
## GBM_pred 0.9291982 0.9920349 1.0000000 0.9854160
## XGB_pred 0.9345048 0.9821944 0.9854160 1.0000000
```

15\.4 Stacking a grid search
----------------------------

An alternative ensemble approach focuses on stacking multiple models generated from the same base learner. In each of the previous chapters, you learned how to perform grid searches to automate the tuning process. Often we simply select the best performing model in the grid search, but we can also apply the concept of stacking to this process. Many times, certain tuning parameters allow us to find unique patterns within the data. By stacking the results of a grid search, we can capitalize on the benefits of each of the models in our grid search to create a meta model. For example, the following performs a random grid search across a wide range of GBM hyperparameter settings. We set the search to stop after 25 models have run.
```
# Define GBM hyperparameter grid
hyper_grid <- list(
  max_depth = c(1, 3, 5),
  min_rows = c(1, 5, 10),
  learn_rate = c(0.01, 0.05, 0.1),
  learn_rate_annealing = c(0.99, 1),
  sample_rate = c(0.5, 0.75, 1),
  col_sample_rate = c(0.8, 0.9, 1)
)

# Define random grid search criteria
search_criteria <- list(
  strategy = "RandomDiscrete",
  max_models = 25
)

# Build random grid search
random_grid <- h2o.grid(
  algorithm = "gbm", grid_id = "gbm_grid", x = X, y = Y,
  training_frame = train_h2o, hyper_params = hyper_grid,
  search_criteria = search_criteria, ntrees = 5000, stopping_metric = "RMSE",
  stopping_rounds = 10, stopping_tolerance = 0, nfolds = 10,
  fold_assignment = "Modulo", keep_cross_validation_predictions = TRUE,
  seed = 123
)
```

If we look at the grid search models, we see that the cross\-validated RMSE ranges from 20756 to 57826\.

```
# Sort results by RMSE and save the sorted grid for later use
random_grid_perf <- h2o.getGrid(
  grid_id = "gbm_grid",
  sort_by = "rmse"
)
random_grid_perf
## H2O Grid Details
## ================
##
## Grid ID: gbm_grid
## Used hyper parameters:
##   - col_sample_rate
##   - learn_rate
##   - learn_rate_annealing
##   - max_depth
##   - min_rows
##   - sample_rate
## Number of models: 25
## Number of failed models: 0
##
## Hyper-Parameter Search Summary: ordered by increasing rmse
##    col_sample_rate learn_rate learn_rate_annealing max_depth min_rows sample_rate         model_ids               rmse
## 1              0.9       0.01                  1.0         3      1.0         1.0 gbm_grid_model_20  20756.16775065606
## 2              0.9       0.01                  1.0         5      1.0        0.75  gbm_grid_model_2 21188.696088824694
## 3              0.9        0.1                  1.0         3      1.0        0.75  gbm_grid_model_5 21203.753908665003
## 4              0.8       0.01                  1.0         5      5.0         1.0 gbm_grid_model_16 21704.257699437963
## 5              1.0        0.1                 0.99         3      1.0         1.0 gbm_grid_model_17 21710.275753497197
##
## ---
##    col_sample_rate learn_rate learn_rate_annealing max_depth min_rows sample_rate         model_ids               rmse
## 20             1.0       0.01                  1.0         1     10.0        0.75 gbm_grid_model_11 26164.879525289896
## 21             0.8       0.01                 0.99         3      1.0        0.75 gbm_grid_model_15  44805.63843296435
## 22             1.0       0.01                 0.99         3     10.0         1.0 gbm_grid_model_18 44854.611500840605
## 23             0.8       0.01                 0.99         1     10.0         1.0 gbm_grid_model_21 57797.874642563846
## 24             0.9       0.01                 0.99         1     10.0        0.75 gbm_grid_model_10  57809.60302408739
## 25             0.8       0.01                 0.99         1      5.0        0.75  gbm_grid_model_4  57826.30370545089
```

If we apply the best performing model to our test set, we achieve an RMSE of 21599\.8\.

```
# Grab the model_id for the top model, chosen by validation error
best_model_id <- random_grid_perf@model_ids[[1]]
best_model <- h2o.getModel(best_model_id)
h2o.performance(best_model, newdata = test_h2o)
## H2ORegressionMetrics: gbm
##
## MSE:  466551295
## RMSE:  21599.8
## MAE:  13697.78
## RMSLE:  0.1090604
## Mean Residual Deviance :  466551295
```

Rather than use the single best model, we can combine all the models in our grid search using a super learner. In this example, our super learner does not provide any performance gains because the hyperparameter settings of the leading models have low variance, which results in predictions that are highly correlated. However, in cases where you see high variability across hyperparameter settings for your leading models, stacking the grid search or even the leaders in the grid search can provide significant performance gains. Stacking a grid search provides the greatest benefit when leading models from the base learner have high variance in their hyperparameter settings.
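As a hedged illustration of the "leaders only" idea mentioned above (not part of the original example), one could pass just the top few model IDs from the sorted grid to `h2o.stackedEnsemble()`; the object name `ensemble_top5` and the choice of five models are arbitrary. The chapter's own example, shown next, stacks the entire grid instead.

```
# Hypothetical variant: stack only the five best models from the sorted grid
top_ids <- unlist(random_grid_perf@model_ids[1:5])
ensemble_top5 <- h2o.stackedEnsemble(
  x = X, y = Y, training_frame = train_h2o,
  base_models = top_ids,
  metalearner_algorithm = "gbm"
)
```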
```
# Train a stacked ensemble using the GBM grid
ensemble <- h2o.stackedEnsemble(
  x = X, y = Y, training_frame = train_h2o, model_id = "ensemble_gbm_grid",
  base_models = random_grid@model_ids, metalearner_algorithm = "gbm"
)

# Eval ensemble performance on a test set
h2o.performance(ensemble, newdata = test_h2o)
## H2ORegressionMetrics: stackedensemble
##
## MSE:  469579433
## RMSE:  21669.78
## MAE:  13499.93
## RMSLE:  0.1061244
## Mean Residual Deviance :  469579433
```

15\.5 Automated machine learning
--------------------------------

Our final topic to discuss involves performing an automated search across multiple base learners and then stacking the resulting models (this is sometimes referred to as *automated machine learning* or AutoML). This is very much like the grid searches that we have been performing for base learners and discussed in Chapters [4](linear-regression.html#linear-regression)\-[14](svm.html#svm); however, rather than search across a variety of parameters for a *single base learner*, we want to perform a search across a variety of hyperparameter settings for many *different base learners*.

There are several competitors that provide licensed software to help automate the end\-to\-end machine learning process, including feature engineering, model validation procedures, model selection, hyperparameter optimization, and more. Open source applications are more limited and tend to focus on automating the model building, hyperparameter configurations, and comparison of model performance.

Although AutoML has made it easy for non\-experts to experiment with machine learning, there is still a significant amount of knowledge and background in data science required to produce high\-performing machine learning models. AutoML is more about freeing up your time (which is quite valuable). The machine learning process is often long, iterative, and repetitive, and AutoML can also be a helpful tool for the advanced user by simplifying the process of performing a large number of modeling\-related tasks that would typically require hours or days of writing many lines of code. This frees up the user’s time to focus on other tasks in the data science pipeline such as data\-preprocessing, feature engineering, model interpretability, and model deployment.

**h2o** provides an open source implementation of AutoML with the `h2o.automl()` function. The current version of `h2o.automl()` trains and cross\-validates a random forest, an *extremely\-randomized forest*, a random grid of GBMs, a random grid of DNNs, and then trains a stacked ensemble using all of the models; see `?h2o::h2o.automl` for details.

By default, `h2o.automl()` will search for 1 hour, but you can control how long it searches by adjusting a variety of stopping arguments (e.g., `max_runtime_secs`, `max_models`, and `stopping_tolerance`). The following performs an automated search for two hours, which ended up assessing 80 models. `h2o.automl()` will automatically use the same folds for stacking so you do not need to specify `fold_assignment = "Modulo"`; this allows for consistent model comparison across the same CV sets. We see that most of the leading models are GBM variants and achieve an RMSE in the 22000–23000 range. As you probably noticed, this was not as good as some of the best models we found using our own GBM grid searches (reference Chapter [12](gbm.html#gbm)). However, we could start this AutoML procedure and then spend our two hours performing other tasks while **h2o** automatically assesses these 80 models.
The AutoML procedure then provides us direction for further analysis. In this case, we could start by further assessing the hyperparameter settings in the top five GBM models to see if there were common attributes that could point us to additional grid searches worth exploring.

```
# Use AutoML to find a list of candidate models (i.e., leaderboard)
auto_ml <- h2o.automl(
  x = X, y = Y, training_frame = train_h2o, nfolds = 5,
  max_runtime_secs = 60 * 120, max_models = 50,
  keep_cross_validation_predictions = TRUE, sort_metric = "RMSE", seed = 123,
  stopping_rounds = 50, stopping_metric = "RMSE", stopping_tolerance = 0
)

# Assess the leader board; the following truncates the results to show the top
# and bottom 15 models. You can get the top model with auto_ml@leader
auto_ml@leaderboard %>%
  as.data.frame() %>%
  dplyr::select(model_id, rmse) %>%
  dplyr::slice(1:25)
##                                              model_id      rmse
## 1                    XGBoost_1_AutoML_20190220_084553  22229.97
## 2           GBM_grid_1_AutoML_20190220_084553_model_1  22437.26
## 3           GBM_grid_1_AutoML_20190220_084553_model_3  22777.57
## 4                        GBM_2_AutoML_20190220_084553  22785.60
## 5                        GBM_3_AutoML_20190220_084553  23133.59
## 6                        GBM_4_AutoML_20190220_084553  23185.45
## 7                    XGBoost_2_AutoML_20190220_084553  23199.68
## 8                    XGBoost_1_AutoML_20190220_075753  23231.28
## 9                        GBM_1_AutoML_20190220_084553  23326.57
## 10          GBM_grid_1_AutoML_20190220_075753_model_2  23330.42
## 11                   XGBoost_3_AutoML_20190220_084553  23475.23
## 12      XGBoost_grid_1_AutoML_20190220_084553_model_3  23550.04
## 13     XGBoost_grid_1_AutoML_20190220_075753_model_15  23640.95
## 14      XGBoost_grid_1_AutoML_20190220_084553_model_8  23646.66
## 15      XGBoost_grid_1_AutoML_20190220_084553_model_6  23682.37
## ...                                               ...       ...
## 65          GBM_grid_1_AutoML_20190220_084553_model_5  33971.32
## 66          GBM_grid_1_AutoML_20190220_075753_model_8  34489.39
## 67 DeepLearning_grid_1_AutoML_20190220_084553_model_3  36591.73
## 68          GBM_grid_1_AutoML_20190220_075753_model_6  36667.56
## 69     XGBoost_grid_1_AutoML_20190220_084553_model_13  40416.32
## 70          GBM_grid_1_AutoML_20190220_075753_model_9  47744.43
## 71   StackedEnsemble_AllModels_AutoML_20190220_084553  49856.66
## 72   StackedEnsemble_AllModels_AutoML_20190220_075753  59127.09
## 73 StackedEnsemble_BestOfFamily_AutoML_20190220_084553  76714.90
## 74 StackedEnsemble_BestOfFamily_AutoML_20190220_075753  76748.40
## 75          GBM_grid_1_AutoML_20190220_075753_model_5  78465.26
## 76          GBM_grid_1_AutoML_20190220_075753_model_3  78535.34
## 77          GLM_grid_1_AutoML_20190220_075753_model_1  80284.34
## 78          GLM_grid_1_AutoML_20190220_084553_model_1  80284.34
## 79      XGBoost_grid_1_AutoML_20190220_075753_model_4  92559.44
## 80     XGBoost_grid_1_AutoML_20190220_075753_model_10 125384.88
```
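As a hedged follow\-up sketch (not from the original text), one way to begin that assessment is to pull a leading model off the leaderboard and inspect the non\-default hyperparameters **h2o** stored for it. The name `top_gbm_id` and the assumption that a GBM sits near the top of your leaderboard are purely illustrative.

```
# Inspect the hyperparameters of one of the leading models
leaderboard <- as.data.frame(auto_ml@leaderboard)

# e.g., the first GBM on the leaderboard (an assumption for illustration)
top_gbm_id <- as.character(leaderboard$model_id[grepl("^GBM", leaderboard$model_id)][1])
top_gbm    <- h2o.getModel(top_gbm_id)

top_gbm@parameters   # non-default parameter settings for this model
auto_ml@leader       # or simply grab the overall leader directly
```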
Chapter 16 Interpretable Machine Learning
=========================================

In the previous chapters you learned how to train several different forms of advanced ML models. Often, these models are considered “black boxes” due to their complex inner workings. However, because of their complexity, they are typically more accurate for predicting nonlinear, faint, or rare phenomena. Unfortunately, more accuracy often comes at the expense of interpretability, and interpretability is crucial for business adoption, model documentation, regulatory oversight, and human acceptance and trust. Luckily, several advancements have been made to aid in interpreting ML models over the years and this chapter demonstrates how you can use them to extract important insights. Interpreting ML models is an emerging field that has become known as *interpretable machine learning* (IML).

16\.1 Prerequisites
-------------------

There are multiple packages that provide robust machine learning interpretation capabilities. Unfortunately there is not one single package that is optimal for all IML applications; rather, when performing IML you will likely use a combination of packages. The following packages are used in this chapter.

```
# Helper packages
library(dplyr)    # for data wrangling
library(ggplot2)  # for awesome graphics

# Modeling packages
library(h2o)      # for interfacing with H2O
library(recipes)  # for ML recipes
library(rsample)  # for data splitting
library(xgboost)  # for fitting GBMs

# Model interpretability packages
library(pdp)      # for partial dependence plots (and ICE curves)
library(vip)      # for variable importance plots
library(iml)      # for general IML-related functions
library(DALEX)    # for general IML-related functions
library(lime)     # for local interpretable model-agnostic explanations
```

To illustrate various concepts we’ll continue working with the **h2o** version of the Ames housing example from Section [15\.1](stacking.html#h20-prereqs). We’ll also use the stacked ensemble model (`ensemble_tree`) created in Section [15\.3](stacking.html#stacking-existing).

16\.2 The idea
--------------

It is not enough to identify a machine learning model that optimizes predictive performance; understanding and trusting model results is a hallmark of good science and necessary for our model to be adopted. As we apply and embed ever\-more complex predictive modeling and machine learning algorithms, both we (the analysts) and the business stakeholders need methods to interpret and understand the modeling results so we can have trust in its application for business decisions (Doshi\-Velez and Kim [2017](#ref-doshi2017towards)).

Advancements in interpretability now allow us to extract key insights and actionable information from the most advanced ML models. These advancements allow us to answer questions such as:

* What are the most important customer attributes driving behavior?
* How are these attributes related to the behavior output?
* Do multiple attributes interact to drive different behavior among customers?
* Why do we expect a customer to make a particular decision?
* Are the decisions we are making based on predicted results fair and reliable?

Approaches to model interpretability that answer the example questions above can be broadly categorized as providing *global* or *local* explanations. It is important to understand the entire model that you’ve trained on a global scale, and also to zoom in on local regions of your data or your predictions and derive explanations.
Being able to answer such questions and provide both levels of explanation is key to any ML project becoming accepted, adopted, embedded, and properly utilized.

### 16\.2\.1 Global interpretation

*Global interpretability* is about understanding how the model makes predictions, based on a holistic view of its features and how they influence the underlying model structure. It answers questions regarding which features are relatively influential, how these features influence the response variable, and what kinds of potential interactions are occurring. Global model interpretability helps to understand the relationship between the response variable and the individual features (or subsets thereof). Arguably, comprehensive global model interpretability is very hard to achieve in practice. Any model that exceeds a handful of features will be hard to fully grasp as we will not be able to comprehend the whole model structure at once.

While global model interpretability is usually out of reach, there is a better chance to understand at least some models on a modular level. This typically revolves around gaining understanding of which features are the most influential (via *feature importance*) and then focusing on how the most influential variables drive the model output (via *feature effects*). Although you may not be able to fully grasp a model with a hundred features, typically only a dozen or so of these variables are really influential in driving the model’s performance. And it is possible to have a firm grasp of how a dozen variables are influencing a model.

### 16\.2\.2 Local interpretation

Global interpretability methods help us understand the inputs and their overall relationship with the response variable, but they can be highly deceptive in some cases (e.g., when strong interactions are occurring). Although a given feature may influence the predictive accuracy of our model as a whole, it does not mean that that feature has the largest influence on a predicted value for a given observation (e.g., a customer, house, or employee) or even a group of observations. Local interpretations help us understand what features are influencing the predicted response for a given observation (or small group of observations). These techniques help us to not only answer what we expect a customer to do, but also why our model is making a specific prediction for a given observation.

There are three primary approaches to local interpretation:

* Local interpretable model\-agnostic explanations (LIME)
* Shapley values
* Localized step\-wise procedures

These techniques have the same objective: to explain which variables are most influential in predicting the target for a set of observations. To illustrate, we’ll focus on two observations. The first is the observation that our ensemble produced the highest predicted `Sale_Price` for (i.e., observation 1825 which has a predicted `Sale_Price` of $663,136\), and the second is the observation with the lowest predicted `Sale_Price` (i.e., observation 139 which has a predicted `Sale_Price` of $47,245\.45\). Our goal with local interpretation is to explain what features are driving these two predictions.
```
# Compute predictions
predictions <- predict(ensemble_tree, train_h2o) %>% as.vector()

# Print the highest and lowest predicted sales price
paste("Observation", which.max(predictions),
      "has a predicted sale price of", scales::dollar(max(predictions)))
## [1] "Observation 1825 has a predicted sale price of $663,136"
paste("Observation", which.min(predictions),
      "has a predicted sale price of", scales::dollar(min(predictions)))
## [1] "Observation 139 has a predicted sale price of $47,245.45"

# Grab feature values for observations with min/max predicted sales price
high_ob <- as.data.frame(train_h2o)[which.max(predictions), ] %>% select(-Sale_Price)
low_ob  <- as.data.frame(train_h2o)[which.min(predictions), ] %>% select(-Sale_Price)
```

### 16\.2\.3 Model\-specific vs. model\-agnostic

It’s also important to understand that there are *model\-specific* and *model\-agnostic* approaches for interpreting your model. Many of the approaches you’ve seen in the previous chapters for understanding feature importance are model\-specific. For example, in linear models we can use the absolute value of the \\(t\\)–statistic as a measure of feature importance (though this becomes complicated when your linear model involves interaction terms and transformations). Random forests, on the other hand, can record the prediction accuracy on the OOB portion of the data, then do the same after permuting each predictor variable; the difference between the two accuracies is then averaged over all trees and normalized by the standard error. These model\-specific interpretation tools are limited to their respective model classes. There can be advantages to using model\-specific approaches as they are more closely tied to the model performance and they may be able to more accurately incorporate the correlation structure between the predictors (Kuhn and Johnson [2013](#ref-apm)). However, there are also some disadvantages. For example, many ML algorithms (e.g., stacked ensembles) have no natural way of measuring feature importance:

```
vip(ensemble_tree, method = "model")
## Error in vi_model.default(ensemble_tree, method = "model") :
##   model-specific variable importance scores are currently not available for objects of class "h2o.stackedEnsemble.summary".
```

Furthermore, comparing model\-specific feature importance across model classes is difficult since you are comparing different measurements (e.g., the magnitude of \\(t\\)\-statistics in linear models vs. degradation of prediction accuracy in random forests). In model\-agnostic approaches, the model is treated as a “black box”. The separation of interpretability from the specific model allows us to easily compare feature importance across different models.

Ultimately, there is no one best approach for model interpretability. Rather, only by applying multiple approaches (including comparing model\-specific and model\-agnostic results) can we really gain full trust in the interpretations we extract.

An important item to note is that when using model\-agnostic procedures, additional code preparation is often required. For example, the **iml** (Molnar [2019](#ref-R-iml)), **DALEX** (Biecek [2019](#ref-R-DALEX)), and **lime** (Pedersen and Benesty [2018](#ref-R-lime)) packages use purely model\-agnostic procedures. Consequently, we need to create a model\-agnostic object that contains three components:

1. A data frame with just the features (must be of class `"data.frame"`, cannot be an `"H2OFrame"` or other object).
2. A vector with the actual responses (must be numeric—0/1 for binary classification problems).
3. A custom function that will take the features from 1), apply the ML algorithm, and return the predicted values as a vector.

The following code extracts these items for the **h2o** example:

```
# 1) create a data frame with just the features
features <- as.data.frame(train_h2o) %>% select(-Sale_Price)

# 2) Create a vector with the actual responses
response <- as.data.frame(train_h2o) %>% pull(Sale_Price)

# 3) Create custom predict function that returns the predicted values as a vector
pred <- function(object, newdata) {
  results <- as.vector(h2o.predict(object, as.h2o(newdata)))
  return(results)
}

# Example of prediction output
pred(ensemble_tree, features) %>% head()
## [1] 207144.3 108958.2 164248.4 241984.2 190000.7 202795.8
```

Once we have these three components we can create our model\-agnostic objects for the **iml**[43](#fn43) and **DALEX** packages, which simply pass these components (along with the ML model) on to other downstream functions.

```
# iml model agnostic object
components_iml <- Predictor$new(
  model = ensemble_tree,
  data = features,
  y = response,
  predict.fun = pred
)

# DALEX model agnostic object
components_dalex <- DALEX::explain(
  model = ensemble_tree,
  data = features,
  y = response,
  predict_function = pred
)
```

16\.3 Permutation\-based feature importance
-------------------------------------------

In previous chapters we illustrated a few model\-specific approaches for measuring feature importance (e.g., for linear models we used the absolute value of the \\(t\\)\-statistic). For SVMs, on the other hand, we had to rely on a model\-agnostic approach which was based on the permutation feature importance measurement introduced for random forests by Breiman ([2001](#ref-breiman2001random)) (see Section [11\.6](random-forest.html#rf-vip)) and expanded on by Fisher, Rudin, and Dominici ([2018](#ref-fisher2018model)).

### 16\.3\.1 Concept

The permutation approach measures a feature’s importance by calculating the increase in the model’s prediction error after permuting the feature. The idea is that if we randomly permute the values of an important feature in the training data, the training performance would degrade (since permuting the values of a feature effectively destroys any relationship between that feature and the target variable). The permutation approach uses the difference (or ratio) between some baseline performance measure (e.g., RMSE) and the same performance measure obtained after permuting the values of a particular feature in the training data. From an algorithmic perspective, the approach follows these steps:

```
For any given loss function do the following:
1. Compute loss function for original model
2. For variable i in {1,...,p} do
     | randomize values
     | apply given ML model
     | estimate loss function
     | compute feature importance (some difference/ratio measure
       between permuted loss & original loss)
   End
3. Sort variables by descending feature importance
```

**Algorithm 1:** A simple algorithm for computing permutation\-based variable importance for the feature set \\(X\\).

A feature is “important” if permuting its values increases the model error relative to the other features, because the model relied on the feature for the prediction. A feature is “unimportant” if permuting its values keeps the model error relatively unchanged, because the model ignored the feature for the prediction. This type of variable importance is tied to the model’s performance.
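Before turning to the packaged implementations discussed below, here is a minimal hand\-rolled sketch of Algorithm 1 (an illustration, not the book's code), reusing the `pred()` wrapper, `features`, and `response` objects created above; the three feature names are chosen arbitrarily for the example.

```
# Baseline RMSE of the stacked model on the training data
baseline_rmse <- sqrt(mean((response - pred(ensemble_tree, features))^2))

# RMSE after shuffling a single feature (destroys its relationship with the target)
permuted_rmse <- function(feature) {
  shuffled <- features
  shuffled[[feature]] <- sample(shuffled[[feature]])
  sqrt(mean((response - pred(ensemble_tree, shuffled))^2))
}

# Importance = increase in RMSE after permutation (larger = more important)
vi <- sapply(c("Gr_Liv_Area", "Overall_Qual", "Year_Built"), permuted_rmse) - baseline_rmse
sort(vi, decreasing = TRUE)
```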
Because this measure is tied to the model’s performance, it is assumed that the model has been properly tuned (e.g., using cross\-validation) and is not overfitting.

### 16\.3\.2 Implementation

Permutation\-based feature importance is available with the **DALEX**, **iml**, and **vip** packages; each provides unique benefits.

The **iml** package provides the `FeatureImp()` function which computes feature importance for general prediction models using the permutation approach. It is written in R6 and allows the user to specify a generic loss function or select from a pre\-defined list (e.g., `loss = "mse"` for mean squared error). It also allows the user to specify whether importance is measured as the difference or as the ratio of the original model error and the model error after permutation. The user can also specify the number of repetitions used when permuting each feature to help stabilize the variability in the procedure.

The **DALEX** package also provides permutation\-based variable importance scores through the `variable_importance()` function. Similar to `iml::FeatureImp()`, this function allows the user to specify a loss function and how the importance scores are computed (e.g., using the difference or ratio). It also provides an option to sample the training data before shuffling the data to compute importance (the default is to use `n_sample = 1000`), which can help speed up computation.

The **vip** package specifically focuses on variable importance plots (VIPs) and provides both model\-specific and a number of model\-agnostic approaches to computing variable importance, including the permutation approach. With **vip** you can use customized loss functions (or select from a pre\-defined list), perform a Monte Carlo simulation to stabilize the procedure, sample observations prior to permuting features, perform the computations in parallel (which can speed up runtime on large data sets), and more.

The following executes a permutation\-based feature importance via **vip**. To speed up execution we sample 50% of the training data but repeat the simulations 5 times to increase the stability of our estimates (whenever `nsim > 1`, you also get an estimated standard deviation for each importance score). We see that many of the same features that have been influential in the model\-specific approaches illustrated in previous chapters (e.g., `Gr_Liv_Area`, `Overall_Qual`, `Total_Bsmt_SF`, and `Neighborhood`) are also considered influential in our stacked model using the permutation approach.

Permutation\-based approaches can become slow as the number of predictors grows. This implementation took 9 minutes. You can speed up execution by parallelizing, reducing the sample size, or reducing the number of simulations. However, note that the last two options also increase the variability of the feature importance estimates.

```
vip(
  ensemble_tree,
  train = as.data.frame(train_h2o),
  method = "permute",
  target = "Sale_Price",
  metric = "RMSE",
  nsim = 5,
  sample_frac = 0.5,
  pred_wrapper = pred
)
```

Figure 7\.5: Top 10 most influential variables for the stacked H2O model using permutation\-based feature importance.

16\.4 Partial dependence
------------------------

Partial dependence helps to understand the marginal effect of a feature (or subset thereof) on the predicted outcome. In essence, it allows us to understand how the response variable changes as we change the value of a feature while taking into account the average effect of all the other features in the model.
### 16\.4\.1 Concept

The procedure follows the traditional methodology documented in J. H. Friedman ([2001](#ref-friedman2001greedy)). The algorithm (illustrated below) will split the feature of interest into \\(j\\) equally spaced values. For example, the `Gr_Liv_Area` feature ranges from 334–5095 square feet. Say the user selects \\(j \= 20\\). The algorithm will first create an evenly spaced grid consisting of 20 values across the distribution of `Gr_Liv_Area` (e.g., \\(334\.00, 584\.58, \\dots, 5095\.00\\)). Then the algorithm will make 20 copies of the original training data (one copy for each value in the grid). The algorithm will then set `Gr_Liv_Area` for all observations in the first copy to 334, 585 in the second copy, 835 in the third copy, …, and finally to 5095 in the 20th copy (all other features remain unchanged). The algorithm then predicts the outcome for each observation in each of the 20 copies, and then averages the predicted values for each set. These averaged predicted values are known as partial dependence values and are plotted against the 20 evenly spaced values for `Gr_Liv_Area`.

```
For a selected predictor (x)
1. Construct a grid of j evenly spaced values across the distribution
   of x: {x1, x2, ..., xj}
2. For i in {1,...,j} do
     | Copy the training data and replace the original values of x
       with the constant xi
     | Apply given ML model (i.e., obtain vector of predictions)
     | Average predictions together
   End
3. Plot the averaged predictions against x1, x2, ..., xj
```

**Algorithm 2:** A simple algorithm for constructing the partial dependence of the response on a single predictor \\(x\\).

Algorithm 2 can be quite computationally intensive since it involves \\(j\\) passes over the training records (and therefore \\(j\\) calls to the prediction function). Fortunately, the algorithm can be parallelized quite easily (see Brandon Greenwell ([2018](#ref-R-pdp)) for an example). It can also be easily extended to larger subsets of two or more features as well (i.e., to visualize interaction effects).

If we plot the partial dependence values against the grid values we get what’s known as a *partial dependence plot* (PDP) (Figure [16\.1](iml.html#fig:pdp-illustration)) where the line represents the average predicted value across all observations at each of the \\(j\\) values of \\(x\\).

Figure 16\.1: Illustration of the partial dependence process.

### 16\.4\.2 Implementation

The **pdp** package (Brandon Greenwell [2018](#ref-R-pdp)) is a widely used, mature, and flexible package for constructing PDPs. The **iml** and **DALEX** packages also provide PDP capabilities.[44](#fn44) **pdp** has built\-in support for many packages, but for models that are not supported (such as **h2o** stacked models) we need to create a custom prediction function wrapper, as illustrated below.

First, we create a custom prediction function similar to the one we created in Section [16\.2\.3](iml.html#agnostic); however, here we return the mean of the predicted values. We then use `pdp::partial()` to compute the partial dependence values. We can use `autoplot()` to view PDPs using **ggplot2**. The `rug` argument provides markers for the decile distribution of `Gr_Liv_Area`, and when you include `rug = TRUE` you must also include the training data.
```
# Custom prediction function wrapper
pdp_pred <- function(object, newdata) {
  results <- mean(as.vector(h2o.predict(object, as.h2o(newdata))))
  return(results)
}

# Compute partial dependence values
pd_values <- partial(
  ensemble_tree,
  train = as.data.frame(train_h2o),
  pred.var = "Gr_Liv_Area",
  pred.fun = pdp_pred,
  grid.resolution = 20
)
head(pd_values)  # take a peek
##   Gr_Liv_Area     yhat
## 1         334 158858.2
## 2         584 159566.6
## 3         835 160878.2
## 4        1085 165896.7
## 5        1336 171665.9
## 6        1586 180505.1

# Partial dependence plot
autoplot(pd_values, rug = TRUE, train = as.data.frame(train_h2o))
```

Figure 7\.6: Partial dependence plot for `Gr_Liv_Area` illustrating the average increase in predicted `Sale_Price` as `Gr_Liv_Area` increases.

### 16\.4\.3 Alternative uses

PDPs have primarily been used to illustrate the marginal effect a feature has on the predicted response value. However, Brandon M Greenwell, Boehmke, and McCarthy ([2018](#ref-greenwell2018simple)) illustrate an approach that uses a measure of the relative “flatness” of the partial dependence function as a measure of variable importance. The idea is that those features with larger marginal effects on the response have greater importance. You can implement a PDP\-based measure of feature importance by using the **vip** package and setting `method = "pdp"`. The resulting variable importance scores also retain the computed partial dependence values (so you can easily view plots of both feature importance and feature effects).

16\.5 Individual conditional expectation
----------------------------------------

Individual conditional expectation (ICE) curves (Goldstein et al. [2015](#ref-goldstein2015peeking)) are very similar to PDPs; however, rather than averaging the predicted values across all observations we observe and plot the individual observation\-level predictions.

### 16\.5\.1 Concept

An ICE plot visualizes the dependence of the predicted response on a feature for *each* instance separately, resulting in multiple lines, one for each observation, compared to one line in partial dependence plots. A PDP is the average of the lines of an ICE plot. Note that the following algorithm is the same as the PDP algorithm except for the last step, where the PDP algorithm averaged the predicted values.

```
For a selected predictor (x)
1. Construct a grid of j evenly spaced values across the distribution
   of x: {x1, x2, ..., xj}
2. For i in {1,...,j} do
     | Copy the training data and replace the original values of x
       with the constant xi
     | Apply given ML model (i.e., obtain vector of predictions)
   End
3. Plot the predictions against x1, x2, ..., xj with lines connecting
   observations that correspond to the same row number in the original
   training data
```

**Algorithm 3:** A simple algorithm for constructing the individual conditional expectation of the response on a single predictor \\(x\\).

So, what do you gain by looking at individual expectations, instead of partial dependencies? PDPs can obfuscate heterogeneous relationships that result from strong interaction effects. PDPs can show you what the average relationship between feature \\(x\_s\\) and the predicted value (\\(\\widehat{y}\\)) looks like. This works well only in cases where the interactions between features are weak; in cases where interactions exist, ICE curves will help to highlight this.

One issue to be aware of is that differences in ICE curves can often only be identified by centering the feature. For example, Figure [16\.2](iml.html#fig:ice-illustration) below displays ICE curves for the `Gr_Liv_Area` feature.
The left plot makes it appear that all observations have very similar effects across `Gr_Liv_Area` values. However, the right plot shows centered ICE (c\-ICE) curves which helps to highlight heterogeneity more clearly and also draws more attention to those observations that deviate from the general pattern. You will typically see ICE curves centered at the minimum value of the feature. This allows you to see how effects change as the feature value increases. Figure 16\.2: Non\-centered (A) and centered (B) ICE curves for `Gr_Liv_Area` illustrating the observation\-level effects (black lines) in predicted `Sale_Price` as `Gr_Liv_Area` increases. The plot also illustrates the PDP line (red), representing the average values across all observations. ### 16\.5\.2 Implementation Similar to PDPs, the premier package to use for ICE curves is the **pdp** package; however, the **iml** package also provides ICE curves. To create ICE curves with the **pdp** package we follow the same procedure as with PDPs; however, we exclude the averaging component (applying `mean()`) in the custom prediction function. By default, `autoplot()` will plot all observations; we also include `center = TRUE` to center the curves at the first value. Note that we use `pred.fun = pred`. This is using the same custom prediction function created in Section 16\.2\.3\. ``` # Construct c-ICE curves partial( ensemble_tree, train = as.data.frame(train_h2o), pred.var = "Gr_Liv_Area", pred.fun = pred, grid.resolution = 20, plot = TRUE, center = TRUE, plot.engine = "ggplot2" ) ``` Figure 16\.3: Centered ICE curve for `Gr_Liv_Area` illustrating the observation\-level effects in predicted `Sale_Price` as `Gr_Liv_Area` increases. PDPs for classification models are typically plotted on a logit\-type scale, rather than on the probability scale (see Brandon Greenwell ([2018](#ref-R-pdp)) for details). This is more important for ICE curves and c\-ICE curves, which can be more difficult to interpret. For example, c\-ICE curves can result in negative probabilities. The ICE curves will also be more clumped together and harder to interpret when the predicted probabilities are close to zero or one. 16\.6 Feature interactions -------------------------- When features in a prediction model interact with each other, the influence of the features on the prediction surface is not additive but more complex. In real life, most relationships between features and some response variable are complex and include interactions. This is largely why more complex algorithms (especially tree\-based algorithms) tend to perform very well—the nature of their complexity often allows them to naturally capture complex interactions. However, identifying and understanding the nature of these interactions is difficult. One way to estimate the interaction strength is to measure how much of the variation of the predicted outcome depends on the interaction of the features. This measurement is called the \\(H\\)\-statistic and was introduced by Friedman, Popescu, and others ([2008](#ref-friedman2008predictive)). ### 16\.6\.1 Concept There are two main approaches to assessing interactions with the \\(H\\)\-statistic: 1. The interaction between two features, which tells us how strongly two specific features interact with each other in the model; 2. The interaction between a feature and all other features, which tells us how strongly (in total) the specific feature interacts in the model with all the other features. 
To measure both types of interactions, we leverage partial dependence values for the features of interest.

To measure how a feature (\\(x\_i\\)) interacts with all other features (the second type of interaction listed above), the algorithm performs the following steps:

```
1. For variable i in {1,...,p} do
  | f(x) = estimate predicted values with original model
  | pd(x) = partial dependence of variable i
  | pd(!x) = partial dependence of all features excluding i
  | upper = sum(f(x) - pd(x) - pd(!x))
  | lower = variance(f(x))
  | rho = upper / lower
  End
2. Sort variables by descending rho (interaction strength)
```

**Algorithm 4:** A simple algorithm for measuring the interaction strength between \\(x\_i\\) and all other features.

To measure the two\-way interaction strength between two specific features \\(x\_i\\) and \\(x\_j\\) (the first type of interaction listed above), the algorithm performs the following steps:

```
1. i = a selected variable of interest
2. For remaining variables j in {1,...,p} do
  | pd(ij) = interaction partial dependence of variables i and j
  | pd(i) = partial dependence of variable i
  | pd(j) = partial dependence of variable j
  | upper = sum(pd(ij) - pd(i) - pd(j))
  | lower = variance(pd(ij))
  | rho = upper / lower
  End
3. Sort interaction relationship by descending rho (interaction strength)
```

**Algorithm 5:** A simple algorithm for measuring the interaction strength between \\(x\_i\\) and \\(x\_j\\).

In essence, the \\(H\\)\-statistic measures how much of the variation of the predicted outcome depends on the interaction of the features. In both cases, \\(\\rho \= \\text{rho}\\) represents the interaction strength, which will be between 0 (when there is no interaction at all) and 1 (if all of the variation of the predicted outcome depends on a given interaction).

### 16\.6\.2 Implementation

Currently, the **iml** package provides the only viable implementation of the \\(H\\)\-statistic as a model\-agnostic application. We use `Interaction$new()` to compute the one\-way interaction \\(H\\)\-statistic, which assesses if and how strongly each feature interacts with all other features in the model. We find that `First_Flr_SF` has the strongest interaction (although it is a weak interaction since \\(\\rho \= 0\.139\\)).

Unfortunately, due to the algorithm complexity, the \\(H\\)\-statistic is very computationally demanding as it requires \\(2n^2\\) runs. This example of computing the one\-way interaction \\(H\\)\-statistic took two hours to complete! However, **iml** does allow you to speed up computation by reducing the `grid.size` or by parallelizing computation with `parallel = TRUE`. See `vignette("parallel", package = "iml")` for more info.

```
interact <- Interaction$new(components_iml)

interact$results %>% 
  arrange(desc(.interaction)) %>% 
  head()
##        .feature .interaction
## 1  First_Flr_SF   0.13917718
## 2  Overall_Qual   0.11077722
## 3  Kitchen_Qual   0.10531653
## 4 Second_Flr_SF   0.10461824
## 5      Lot_Area   0.10389242
## 6   Gr_Liv_Area   0.09833997

plot(interact)
```

Figure 16\.4: \\(H\\)\-statistics for the 80 predictors in the Ames Housing data based on the H2O ensemble model.

Once we’ve identified the variable(s) with the strongest interaction signal (`First_Flr_SF` in our case), we can then compute the \\(H\\)\-statistic to identify which features it mostly interacts with. This second iteration took over two hours and identified `Overall_Qual` as having the strongest interaction effect with `First_Flr_SF` (again, a weak interaction effect given \\(\\rho \= 0\.144\\)).
```
interact_2way <- Interaction$new(components_iml, feature = "First_Flr_SF")
interact_2way$results %>% 
  arrange(desc(.interaction)) %>% 
  top_n(10)
##                      .feature .interaction
## 1   Overall_Qual:First_Flr_SF   0.14385963
## 2     Year_Built:First_Flr_SF   0.09314573
## 3   Kitchen_Qual:First_Flr_SF   0.06567883
## 4      Bsmt_Qual:First_Flr_SF   0.06228321
## 5  Bsmt_Exposure:First_Flr_SF   0.05900530
## 6  Second_Flr_SF:First_Flr_SF   0.05747438
## 7  Kitchen_AbvGr:First_Flr_SF   0.05675684
## 8    Bsmt_Unf_SF:First_Flr_SF   0.05476509
## 9     Fireplaces:First_Flr_SF   0.05470992
## 10  Mas_Vnr_Area:First_Flr_SF   0.05439255
```

Identifying these interactions can help point us in the direction of assessing how the interactions relate to the response variable. We can use PDPs or ICE curves with interactions to see their effect on the predicted response. Since the above process pointed out that `First_Flr_SF` and `Overall_Qual` had the highest interaction effect, the following code plots this interaction relationship with predicted `Sale_Price`. We see that properties with “good” or lower `Overall_Qual` values tend to have their `Sale_Price`s level off as `First_Flr_SF` increases more so than properties with really strong `Overall_Qual` values. Also, you can see that properties with “very good” `Overall_Qual` tend to have a much larger increase in `Sale_Price` as `First_Flr_SF` increases from 1500–2000 than most other properties. (Although **pdp** allows more than one predictor, we take this opportunity to illustrate PDPs with the **iml** package.)

```
# Two-way PDP using iml
interaction_pdp <- Partial$new(
  components_iml, 
  c("First_Flr_SF", "Overall_Qual"), 
  ice = FALSE, 
  grid.size = 20
)
plot(interaction_pdp)
```

Figure 16\.5: Interaction PDP illustrating the joint effect of `First_Flr_SF` and `Overall_Qual` on `Sale_Price`.

### 16\.6\.3 Alternatives

Obviously computational time constraints are a major issue in identifying potential interaction effects. Although the \\(H\\)\-statistic is the most statistically sound approach to detecting interactions, there are alternatives. The PDP\-based variable importance measure discussed in Brandon M Greenwell, Boehmke, and McCarthy ([2018](#ref-greenwell2018simple)) can also be used to quantify the strength of potential interaction effects. A thorough discussion of this approach is provided by Greenwell, Brandon M. and Boehmke, Bradley C. ([2019](#ref-vint)) and can be implemented with `vip::vint()`. Also, Kuhn and Johnson ([2019](#ref-kuhn2019feature)) provide a fairly comprehensive chapter discussing alternative approaches for identifying interactions.

16\.7 Local interpretable model\-agnostic explanations
------------------------------------------------------

*Local Interpretable Model\-agnostic Explanations* (LIME) is an algorithm that helps explain individual predictions and was introduced by Ribeiro, Singh, and Guestrin ([2016](#ref-ribeiro2016should)). Behind the workings of LIME lies the assumption that every complex model is linear on a local scale (i.e., in a small neighborhood around an observation of interest) and the assertion that it is possible to fit a simple surrogate model around a single observation that will mimic how the global model behaves at that locality.

### 16\.7\.1 Concept

To do so, LIME samples the training data multiple times to identify observations that are similar to the individual record of interest. It then trains an interpretable model (often a LASSO model) weighted by the proximity of the sampled observations to the instance of interest.
The resulting model can then be used to explain the predictions of the more complex model at the locality of the observation of interest.

The general algorithm LIME applies is:

1. ***Permute*** your training data to create replicated feature data with slight value modifications.
2. Compute ***proximity measure*** (e.g., 1 \- distance) between the observation of interest and each of the permuted observations.
3. Apply selected machine learning model to ***predict outcomes*** of permuted data.
4. ***Select m number of features*** to best describe predicted outcomes.
5. ***Fit a simple model*** to the permuted data, explaining the complex model outcome with \\(m\\) features from the permuted data weighted by its similarity to the original observation.
6. Use the resulting ***feature weights to explain local behavior***.

**Algorithm 6:** The generalized LIME algorithm.

Each of these steps will be discussed in further detail as we proceed. Although the **iml** package implements the LIME algorithm, the **lime** package provides the most comprehensive implementation.

### 16\.7\.2 Implementation

The implementation of **Algorithm 6** via the **lime** package is split into two operations: `lime::lime()` and `lime::explain()`. The `lime::lime()` function creates an `"explainer"` object, which is just a list that contains the fitted machine learning model and the feature distributions for the training data. The feature distributions that it contains include distribution statistics for each categorical variable level and each continuous variable split into \\(n\\) bins (the current default is four bins). These feature attributes will be used to permute data.

```
# Create explainer object
components_lime <- lime(
  x = features,
  model = ensemble_tree, 
  n_bins = 10
)

class(components_lime)
## [1] "data_frame_explainer" "explainer"            "list"

summary(components_lime)
##                      Length Class              Mode     
## model                 1     H2ORegressionModel S4       
## preprocess            1     -none-             function 
## bin_continuous        1     -none-             logical  
## n_bins                1     -none-             numeric  
## quantile_bins         1     -none-             logical  
## use_density           1     -none-             logical  
## feature_type         80     -none-             character
## bin_cuts             80     -none-             list     
## feature_distribution 80     -none-             list
```

Once we’ve created our lime object (i.e., `components_lime`), we can now perform the LIME algorithm using the `lime::explain()` function on the observation(s) of interest. Recall that for local interpretation we are focusing on the two observations identified in Section 16\.2\.2 that contain the highest and lowest predicted sales prices.

This function has several options, each providing flexibility in how we perform **Algorithm 6**:

* `x`: Contains the observation(s) you want to create local explanations for. (See step 1 in **Algorithm 6**.)
* `explainer`: Takes the explainer object created by `lime::lime()`, which will be used to create permuted data. Permutations are sampled from the variable distributions created by the `lime::lime()` explainer object. (See step 1 in **Algorithm 6**.)
* `n_permutations`: The number of permutations to create for each observation in `x` (default is 5,000 for tabular data). (See step 1 in **Algorithm 6**.)
* `dist_fun`: The distance function to use. The default is Gower’s distance but you can also use Euclidean, Manhattan, or any other distance function allowed by the `dist()` function (see `?dist()` for details). To compute similarities, categorical features will be recoded based on whether or not they are equal to the actual observation.
If continuous features are binned (the default) these features will be recoded based on whether they are in the same bin as the observation to be explained. Using the recoded data the distance to the original observation is then calculated based on a user\-chosen distance measure. (See step 2 in **Algorithm 6**.)
* `kernel_width`: To convert the distance measure to a similarity score, an exponential kernel of a user\-defined width (defaults to 0\.75 times the square root of the number of features) is used. Smaller values restrict the size of the local region. (See step 2 in **Algorithm 6**.)
* `n_features`: The number of features to best describe the predicted outcomes. (See step 4 in **Algorithm 6**.)
* `feature_select`: `lime::explain()` can use forward selection, ridge regression, lasso, or a decision tree to select the “best” `n_features` features. In the next example we apply a ridge regression model and select the \\(m\\) features with highest absolute weights. (See step 4 in **Algorithm 6**.)

For classification models we need to specify a couple of additional arguments:

* `labels`: The specific labels (classes) to explain (e.g., 0/1, “Yes”/“No”).
* `n_labels`: The number of labels to explain (e.g., Do you want to explain both success and failure or just the reason for success?)

```
# Use LIME to explain previously defined instances: high_ob and low_ob
lime_explanation <- lime::explain(
  x = rbind(high_ob, low_ob), 
  explainer = components_lime, 
  n_permutations = 5000,
  dist_fun = "gower",
  kernel_width = 0.25,
  n_features = 10, 
  feature_select = "highest_weights"
)
```

If the original ML model is a regressor, the local model will predict the output of the complex model directly. If it is a classifier, the local model will predict the probability of the chosen class(es).

The output from `lime::explain()` is a data frame containing various information on the local model’s predictions. Most importantly, for each observation supplied it contains the fit of the local explainer model (`model_r2`) and the weighted importance (`feature_weight`) for each important feature (`feature_desc`) that best describes the local relationship.

```
glimpse(lime_explanation)
## Observations: 20
## Variables: 11
## $ model_type       <chr> "regression", "regression", "regression", "regr…
## $ case             <chr> "1825", "1825", "1825", "1825", "1825", "1825",…
## $ model_r2         <dbl> 0.41661172, 0.41661172, 0.41661172, 0.41661172,…
## $ model_intercept  <dbl> 186253.6, 186253.6, 186253.6, 186253.6, 186253.…
## $ model_prediction <dbl> 406033.5, 406033.5, 406033.5, 406033.5, 406033.…
## $ feature          <chr> "Gr_Liv_Area", "Overall_Qual", "Total_Bsmt_SF",…
## $ feature_value    <int> 3627, 8, 1930, 35760, 1796, 1831, 3, 14, 1, 3, …
## $ feature_weight   <dbl> 55254.859, 50069.347, 40261.324, 20430.128, 193…
## $ feature_desc     <chr> "2141 < Gr_Liv_Area", "Overall_Qual = Very_Exce…
## $ data             <list> [[Two_Story_1946_and_Newer, Residential_Low_De…
## $ prediction       <dbl> 663136.38, 663136.38, 663136.38, 663136.38, 663…
```

Visualizing the results in Figure [16\.6](iml.html#fig:first-lime-fit) we see that the size and quality of the home appear to be driving the predictions for both `high_ob` (high `Sale_Price` observation) and `low_ob` (low `Sale_Price` observation). However, it’s important to note the low \\(R^2\\) (“Explanation Fit”) of the models. The local model appears to have a fairly poor fit and, therefore, we shouldn’t put too much faith in these explanations.
```
plot_features(lime_explanation, ncol = 1)
```

Figure 16\.6: Local explanation for observations 1825 (`high_ob`) and 139 (`low_ob`) using LIME.

### 16\.7\.3 Tuning

Considering there are several knobs we can adjust when performing LIME, we can treat these as tuning parameters to try to tune the local model. This helps to maximize the amount of trust we can have in the local region explanation. As an example, the following code block changes the distance function to be Euclidean, increases the kernel width to create a larger local region, and changes the feature selection approach to a LARS\-based LASSO model. The result is a fairly substantial increase in our explanation fits, giving us much more confidence in their explanations.

```
# Tune the LIME algorithm a bit
lime_explanation2 <- explain(
  x = rbind(high_ob, low_ob), 
  explainer = components_lime, 
  n_permutations = 5000,
  dist_fun = "euclidean",
  kernel_width = 0.75,
  n_features = 10, 
  feature_select = "lasso_path"
)

# Plot the results
plot_features(lime_explanation2, ncol = 1)
```

Figure 16\.7: Local explanation for observations 1825 (case 1\) and 139 (case 2\) after tuning the LIME algorithm.

### 16\.7\.4 Alternative uses

The discussion above revolves around using LIME for tabular data sets. However, LIME can also be applied to non\-traditional data sets such as text and images. For text, LIME creates a new *document term matrix* with perturbed text (e.g., it generates new phrases and sentences based on existing text). It then follows a similar procedure of weighting the similarity of the generated text to the original. The localized model then helps to identify which words in the perturbed text are producing the strongest signal.

For images, variations of the images are created by replacing certain groupings of pixels with a constant color (e.g., gray). LIME then assesses the predicted labels for the given group of pixels not perturbed. For more details on such use cases see Molnar and others ([2018](#ref-molnar2018interpretable)).

16\.8 Shapley values
--------------------

Another method for explaining individual predictions borrows ideas from coalitional (or cooperative) game theory to produce what are known as Shapley values (Lundberg and Lee [2016](#ref-lundberg2016unexpected), [2017](#ref-lundberg2017unified)). By now you should realize that when a model gives a prediction for an observation, not all features play the same role: some of them may have a lot of influence on the model’s prediction, while others may be irrelevant. Consequently, one may think that the effect of each feature can be measured by checking what the prediction would have been if that feature was absent; the bigger the change in the model’s output, the more important that feature is. This is exactly what happens with permutation\-based variable importance (since LIME most often uses a ridge or lasso model, it also uses a similar approach to identify localized feature importance).

However, observing only single feature effects at a time implies that dependencies between features are not taken into account, which could produce inaccurate and misleading explanations of the model’s internal logic. Therefore, to avoid missing any interaction between features, we should observe how the prediction changes for each possible subset of features and then combine these changes to form a unique contribution for each feature value.
### 16\.8\.1 Concept

The concept of Shapley values is based on the idea that the feature values of an individual observation work together to cause a change in the model’s prediction with respect to the model’s expected output, and it divides this total change in prediction among the features in a way that is “fair” to their contributions across all possible subsets of features. To do so, Shapley values assess every combination of predictors to determine each predictor’s impact. Focusing on feature \\(x\_j\\), the approach will test the accuracy of every combination of features not including \\(x\_j\\) and then test how adding \\(x\_j\\) to each combination improves the accuracy.

Unfortunately, computing Shapley values is very computationally expensive. Consequently, the **iml** package implements an approximate Shapley value. To compute the approximate Shapley contribution of feature \\(x\_j\\) on \\(x\\) we need to construct two new “Frankenstein” instances and take the difference between their corresponding predictions. This is outlined in the brief algorithm below. Note that this is often repeated several times (e.g., 10–100\) for each feature/observation combination and the results are averaged together. See <http://bit.ly/fastshap> and Štrumbelj and Kononenko ([2014](#ref-strumbelj2014explaining)) for details.

```
ob = single observation of interest
1. For variables j in {1,...,p} do
  | m = draw random sample(s) from data set
  | randomly shuffle the feature names, perm <- sample(names(ob))
  | Create two new instances b1 and b2 as follows:
  |   b1 = ob, but all the features in perm that appear after
  |        feature xj get their values swapped with the
  |        corresponding values in m.
  |   b2 = ob, but feature xj, as well as all the features in perm
  |        that appear after xj, get their values swapped with the
  |        corresponding values in m.
  | f(b1) = compute predictions for b1
  | f(b2) = compute predictions for b2
  | shap_ind = f(b1) - f(b2)
  | phi = mean(shap_ind)
End
2. Sort phi in decreasing order
```

**Algorithm 7:** A simple algorithm for computing approximate Shapley values.

The aggregated Shapley values (\\(\\phi \=\\) `phi`) represent the contribution of each feature towards a predicted value compared to the average prediction for the data set.

Figure [16\.8](iml.html#fig:shapley-idea) represents the first iteration of our algorithm where we focus on the impact of feature \\(X\_1\\). In step (A) we sample the training data. In step (B) we create two copies of an individually sampled row and randomize the order of the features. Then in one copy we include the values from the observation of interest for the features from the first feature up to *and including* \\(X\_1\\). We then include the values from the sampled row for all the other features. In the second copy, we include the values from the observation of interest for the features from the first feature up to *but not including* \\(X\_1\\). We use values from the sampled row for \\(X\_1\\) and all the other features. Then in step (C), we apply our model to both copies of this row and in step (D) compute the difference between the predicted outputs. We follow this procedure for all the sampled rows and the average difference across all sampled rows is the Shapley value. It should be obvious that the more observations we include in our sampling procedure the closer our approximate Shapley computation will be to the true Shapley value.

Figure 16\.8: Generalized concept behind approximate Shapley value computation.
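To make Algorithm 7 more concrete, below is a minimal sketch of the sampling idea in base R. This is not the **iml** implementation; the function name `shap_approx`, the simple `lm()` fit on `mtcars`, and the choice of `nsim` are illustrative assumptions used only to show the two “Frankenstein” instances and the averaged difference in predictions.

```
# Toy sketch of Algorithm 7: approximate the Shapley contribution of one
# feature for one observation of a fitted model (illustrative only).
shap_approx <- function(model, data, ob, feature, nsim = 100) {
  diffs <- numeric(nsim)
  for (i in seq_len(nsim)) {
    m     <- data[sample(nrow(data), 1), ]   # randomly sampled instance
    perm  <- sample(names(data))             # random ordering of the features
    pos   <- match(feature, perm)
    after <- perm[seq_along(perm) > pos]     # features appearing after x_j
    b1 <- ob                                 # b1 keeps x_j from the observation
    if (length(after) > 0) b1[after] <- m[after]
    b2 <- b1
    b2[feature] <- m[[feature]]              # b2 additionally swaps x_j
    diffs[i] <- predict(model, b1) - predict(model, b2)
  }
  mean(diffs)                                # phi: average difference
}

# Illustrative example with a simple linear model
fit <- lm(mpg ~ wt + hp + disp, data = mtcars)
X   <- mtcars[, c("wt", "hp", "disp")]
shap_approx(fit, X, ob = X[1, ], feature = "wt")
```

Averaging over more sampled rows (a larger `nsim` here) moves the estimate closer to the true Shapley value, which is exactly the trade\-off that the sample size controls in the **iml** implementation below.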
### 16\.8\.2 Implementation The **iml** package provides one of the few Shapley value implementations in R. We use `Shapley$new()` to create a new Shapley object. The time to compute is largely driven by the number of predictors and the sample size drawn. By default, `Shapley$new()` will only use a sample size of 100 but you can control this to either reduce compute time or increase confidence in the estimated values. In this example we increased the sample size to 1000 for greater confidence in the estimated values; it took roughly 3\.5 minutes to compute. Looking at the results we see that the predicted sale price of $663,136\.38 is $481,797\.42 larger than the average predicted sale price of $181,338\.96; Figure [16\.9](iml.html#fig:shapley) displays the contribution each predictor played in this difference. We see that `Gr_Liv_Area`, `Overall_Qual`, and `Second_Flr_SF` are the top three features positively influencing the predicted sale price; all of which contributed close to, or over, $75,000 towards the $481\.8K difference. ``` # Compute (approximate) Shapley values (shapley <- Shapley$new(components_iml, x.interest = high_ob, sample.size = 1000)) ## Interpretation method: Shapley ## Predicted value: 663136.380000, Average prediction: 181338.963590 (diff = 481797.416410) ## ## Analysed predictor: ## Prediction task: unknown ## ## ## Analysed data: ## Sampling from data.frame with 2199 rows and 80 columns. ## ## Head of results: ## feature phi phi.var ## 1 MS_SubClass 1746.38653 4.269700e+07 ## 2 MS_Zoning -24.01968 3.640500e+06 ## 3 Lot_Frontage 1104.17628 7.420201e+07 ## 4 Lot_Area 15471.49017 3.994880e+08 ## 5 Street 1.03684 6.198064e+03 ## 6 Alley 41.81164 5.831185e+05 ## feature.value ## 1 MS_SubClass=Two_Story_1946_and_Newer ## 2 MS_Zoning=Residential_Low_Density ## 3 Lot_Frontage=118 ## 4 Lot_Area=35760 ## 5 Street=Pave ## 6 Alley=No_Alley_Access # Plot results plot(shapley) ``` Figure 16\.9: Local explanation for observation 1825 using the Shapley value algorithm. Since **iml** uses R6, we can reuse the Shapley object to identify the influential predictors that help explain the low `Sale_Price` observation. In Figure [16\.10](iml.html#fig:shapley2) we see similar results to LIME in that `Overall_Qual` and `Gr_Liv_Area` are the most influential predictors driving down the price of this home. ``` # Reuse existing object shapley$explain(x.interest = low_ob) # Plot results shapley$results %>% top_n(25, wt = abs(phi)) %>% ggplot(aes(phi, reorder(feature.value, phi), color = phi > 0)) + geom_point(show.legend = FALSE) ``` Figure 16\.10: Local explanation for observation 139 using the Shapley value algorithm. ### 16\.8\.3 XGBoost and built\-in Shapley values True Shapley values are considered theoretically optimal (Lundberg and Lee [2016](#ref-lundberg2016unexpected)); however, as previously discussed they are computationally challenging. The approximate Shapley values provided by **iml** are much more computationally feasible. Another common option is discussed by Lundberg and Lee ([2017](#ref-lundberg2017unified)) and, although not purely model\-agnostic, is applicable to tree\-based models and is fully integrated in most XGBoost implementations (including the **xgboost** package). Similar to **iml**’s approximation procedure, this tree\-based Shapley value procedure is also an approximation, but allows for polynomial runtime instead of exponential runtime. 
To demonstrate, we’ll use the features and the final XGBoost model created in Section [12\.5\.2](gbm.html#xgb-tuning-strategy).

```
# Compute tree SHAP for a previously obtained XGBoost model
X <- readr::read_rds("data/xgb-features.rds")
xgb.fit.final <- readr::read_rds("data/xgb-fit-final.rds")
```

The benefit of this expedient approach is we can reasonably compute Shapley values for every observation and every feature in one fell swoop. This allows us to use Shapley values for more than just local interpretation. For example, the following computes and plots the Shapley values for every feature and observation in our Ames housing example; see Figure [16\.11](iml.html#fig:shap-vip).

The left plot displays the individual Shapley contributions. Each dot represents a feature’s contribution to the predicted `Sale_Price` for an individual observation. This allows us to see the general magnitude and variation of each feature’s contributions across all observations. We can use this information to compute the average absolute Shapley value across all observations for each feature and use this as a global measure of feature importance (right plot).

There’s a fair amount of general data wrangling going on here but the key line of code is `predict(newdata = X, predcontrib = TRUE)`. This line computes the prediction contribution for each feature and observation in the data supplied via `newdata`.

```
# Try to re-scale features (low to high)
feature_values <- X %>%
  as.data.frame() %>%
  mutate_all(scale) %>%
  gather(feature, feature_value) %>% 
  pull(feature_value)

# Compute SHAP values, wrangle a bit, compute SHAP-based importance, etc.
shap_df <- xgb.fit.final %>%
  predict(newdata = X, predcontrib = TRUE) %>%
  as.data.frame() %>%
  select(-BIAS) %>%
  gather(feature, shap_value) %>%
  mutate(feature_value = feature_values) %>%
  group_by(feature) %>%
  mutate(shap_importance = mean(abs(shap_value)))

# SHAP contribution plot
p1 <- ggplot(shap_df, aes(x = shap_value, y = reorder(feature, shap_importance))) +
  ggbeeswarm::geom_quasirandom(groupOnX = FALSE, varwidth = TRUE, size = 0.4, alpha = 0.25) +
  xlab("SHAP value") +
  ylab(NULL)

# SHAP importance plot
p2 <- shap_df %>% 
  select(feature, shap_importance) %>%
  filter(row_number() == 1) %>%
  ggplot(aes(x = reorder(feature, shap_importance), y = shap_importance)) +
  geom_col() +
  coord_flip() +
  xlab(NULL) +
  ylab("mean(|SHAP value|)")

# Combine plots
gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 16\.11: Shapley contribution (left) and global importance (right) plots.

We can also use this information to create an alternative to PDPs. Shapley\-based dependence plots (Figure [16\.12](iml.html#fig:shap-pdp)) show the Shapley values of a feature on the \\(y\\)\-axis and the value of the feature on the \\(x\\)\-axis. By plotting these values for all observations in the data set we can see how the feature’s attributed importance changes as its value varies.

```
shap_df %>% 
  filter(feature %in% c("Overall_Qual", "Gr_Liv_Area")) %>%
  ggplot(aes(x = feature_value, y = shap_value)) +
  geom_point(aes(color = shap_value)) +
  scale_colour_viridis_c(name = "Feature value\n(standardized)", option = "C") +
  facet_wrap(~ feature, scales = "free") +
  scale_y_continuous('Shapley value', labels = scales::comma) +
  xlab('Normalized feature value')
```

Figure 16\.12: Shapley\-based dependence plot illustrating the variability in contribution across the range of `Gr_Liv_Area` and `Overall_Qual` values.
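One convenient property of these tree\-based SHAP values is local additivity: for each observation, the feature contributions plus the `BIAS` column sum to the model’s raw prediction. The following quick sanity check is a sketch that assumes the `xgb.fit.final` and `X` objects loaded above; it simply compares the row sums of the contribution matrix against the usual predictions.

```
# Sanity check: per-observation SHAP contributions (including BIAS) should
# add up to the model's predictions for this regression model
contrib  <- predict(xgb.fit.final, newdata = X, predcontrib = TRUE)
pred_xgb <- predict(xgb.fit.final, newdata = X)

# Largest absolute discrepancy across all observations (should be near zero)
max(abs(rowSums(contrib) - pred_xgb))
```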
16\.9 Localized step\-wise procedure
------------------------------------

An additional approach for localized explanation is a procedure that is loosely related to the partial dependence algorithm with an added step\-wise procedure. The procedure was introduced by Staniak and Biecek ([2018](#ref-staniak2018explanations)) and is known as the *Break Down* method, which uses a greedy strategy to identify and remove features iteratively based on their influence on the overall average predicted response.

### 16\.9\.1 Concept

The Break Down method provides two sequential approaches; the default is called *step up*. This procedure, essentially, takes the value of a given feature from the single observation of interest, replaces that feature’s values for all the observations in the training data set, and identifies how this affects the prediction error. It performs this process iteratively and independently for each feature, identifies the column with the largest difference score, and adds that variable to the list as the most important. This feature’s signal is then removed (via randomization), and the procedure sweeps through the remaining predictors and applies the same process until all variables have been assessed.

```
existing_data = validation data set used in explainer
new_ob = single observation to perform local interpretation on
p = number of predictors
l = list of predictors
baseline = mean predicted response of existing_data

for variable i in {1,...,p} do
  for variable j in l do
    | exchange variable j in existing_data with variable j value in new_ob
    | predicted_j = mean predicted response of altered existing_data
    | diff_j = absolute difference between baseline and predicted_j
    | reset existing_data
  end
  | t = variable j with largest diff value
  | contribution for variable t = diff value for variable t
  | remove variable t from l
end
```

**Algorithm 8:** A simple algorithm for computing Break Down values with the step up method.

An alternative approach is called the *step down* method which follows a similar algorithm but rather than remove the variable with the largest difference score on each sweep, it removes the variable with the smallest difference score. Both approaches are analogous to backward stepwise selection where *step up* removes variables with the largest impact and *step down* removes variables with the smallest impact.

### 16\.9\.2 Implementation

To perform the Break Down algorithm on a single observation, use the `DALEX::prediction_breakdown()` function. The output is a data frame with class `"prediction_breakdown_explainer"` that lists the contribution for each variable. Similar to Shapley values, the results display the contribution that each feature value for the given observation has on the difference between the overall average response (`Sale_Price` in this example) and the response for the given observation of interest. The default approach is ***step up*** but you can perform ***step down*** by specifying `direction = "down"`.

If you look at the contribution output, realize that the feature ordering is in terms of importance. Consequently, `Gr_Liv_Area` was identified as most influential followed by `Second_Flr_SF` and `Total_Bsmt_SF`. However, if you look at the contribution value, you will notice that `Second_Flr_SF` appears to have a larger contribution to the above average price than `Gr_Liv_Area`. Keep in mind, though, that the `Second_Flr_SF` contribution is based on having already taken `Gr_Liv_Area`’s contribution into account.
The break down algorithm is the most computationally intense of all methods discussed in this chapter. Since the number of required iterations increases by \\(p \\times \\left(p\-1\\right)\\) for every additional feature, wider data sets cause this algorithm to become burdensome. For example, this single application took over 6 hours to compute!

```
high_breakdown <- prediction_breakdown(components_dalex, observation = high_ob)

# class of prediction_breakdown output
class(high_breakdown)
## [1] "prediction_breakdown_explainer" "data.frame"

# check out the top 10 influential variables for this observation
high_breakdown[1:10, 1:5]
##                                                variable contribution variable_name variable_value cummulative
## 1                                           (Intercept)    181338.96     Intercept              1    181338.9
## Gr_Liv_Area                       + Gr_Liv_Area = 4316      46971.64   Gr_Liv_Area           4316    228310.5
## Second_Flr_SF                   + Second_Flr_SF = 1872      52997.40 Second_Flr_SF           1872    281307.9
## Total_Bsmt_SF                   + Total_Bsmt_SF = 2444      41339.89 Total_Bsmt_SF           2444    322647.8
## Overall_Qual         + Overall_Qual = Very_Excellent      47690.10  Overall_Qual Very_Excellent    370337.9
## First_Flr_SF                     + First_Flr_SF = 2444      56780.92  First_Flr_SF           2444    427118.8
## Bsmt_Qual                     + Bsmt_Qual = Excellent      49341.73     Bsmt_Qual      Excellent    476460.6
## Neighborhood               + Neighborhood = Northridge      54289.27  Neighborhood     Northridge    530749.8
## Garage_Cars                         + Garage_Cars = 3      41959.23   Garage_Cars              3    572709.1
## Kitchen_Qual               + Kitchen_Qual = Excellent      59805.57  Kitchen_Qual      Excellent    632514.6
```

We can plot the entire list of contributions for each variable using `plot(high_breakdown)`.

16\.10 Final thoughts
---------------------

Since this book focuses on hands\-on applications, we have focused on only a small sliver of IML. IML is a rapidly expanding research space that covers many more topics including moral and ethical considerations such as fairness, accountability, and transparency along with many more analytic procedures to interpret model performance, sensitivity, bias identification, and more. Moreover, the above discussion only provides a high\-level understanding of the methods. To gain a deeper understanding of these methods and to learn more about the other areas of IML (like those not discussed in this book) we highly recommend Molnar and others ([2018](#ref-molnar2018interpretable)) and Hall, Patrick ([2018](#ref-awesomeIML)).

16\.1 Prerequisites
-------------------

There are multiple packages that provide robust machine learning interpretation capabilities. Unfortunately there is not one single package that is optimal for all IML applications; rather, when performing IML you will likely use a combination of packages. The following packages are used in this chapter.

```
# Helper packages
library(dplyr)      # for data wrangling
library(ggplot2)    # for awesome graphics

# Modeling packages
library(h2o)       # for interfacing with H2O
library(recipes)   # for ML recipes
library(rsample)   # for data splitting
library(xgboost)   # for fitting GBMs

# Model interpretability packages
library(pdp)       # for partial dependence plots (and ICE curves)
library(vip)       # for variable importance plots
library(iml)       # for general IML-related functions
library(DALEX)     # for general IML-related functions
library(lime)      # for local interpretable model-agnostic explanations
```

To illustrate various concepts we’ll continue working with the **h2o** version of the Ames housing example from Section [15\.1](stacking.html#h20-prereqs). We’ll also use the stacked ensemble model (`ensemble_tree`) created in Section [15\.3](stacking.html#stacking-existing).
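If you are jumping straight into this chapter, note that the examples assume the Ames housing data have already been loaded into an H2O cluster as `train_h2o` and that `ensemble_tree` is the stacked ensemble built in Section 15\.3. A minimal sketch of that setup is shown below; the seed and 70/30 split are assumptions for illustration, and the ensemble itself is not reproduced here (see Chapter 15 for those details).

```
# Minimal sketch of the assumed setup (see Sections 15.1 and 15.3 for the
# full details, including how ensemble_tree is built)
library(rsample)
library(h2o)

h2o.init()

# Ames housing data
ames <- AmesHousing::make_ames()

# Stratified train/test split (proportion and seed assumed for illustration)
set.seed(123)
split      <- initial_split(ames, prop = 0.7, strata = "Sale_Price")
ames_train <- training(split)

# H2O training frame used throughout this chapter
train_h2o <- as.h2o(ames_train)
```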
16\.2 The idea -------------- It is not enough to identify a machine learning model that optimizes predictive performance; understanding and trusting model results is a hallmark of good science and necessary for our model to be adopted. As we apply and embed ever\-more complex predictive modeling and machine learning algorithms, both we (the analysts) and the business stakeholders need methods to interpret and understand the modeling results so we can have trust in its application for business decisions (Doshi\-Velez and Kim [2017](#ref-doshi2017towards)). Advancements in interpretability now allow us to extract key insights and actionable information from the most advanced ML models. These advancements allow us to answer questions such as: * What are the most important customer attributes driving behavior? * How are these attributes related to the behavior output? * Do multiple attributes interact to drive different behavior among customers? * Why do we expect a customer to make a particular decision? * Are the decisions we are making based on predicted results fair and reliable? Approaches to model interpretability to answer the exemplar questions above can be broadly categorized as providing *global* or *local* explanations. It is important to understand the entire model that you’ve trained on a global scale, and also to zoom in on local regions of your data or your predictions and derive explanations. Being able to answer such questions and provide both levels of explanation is key to any ML project becoming accepted, adopted, embedded, and properly utilized. ### 16\.2\.1 Global interpretation *Global interpretability* is about understanding how the model makes predictions, based on a holistic view of its features and how they influence the underlying model structure. It answers questions regarding which features are relatively influential, how these features influence the response variable, and what kinds of potential interactions are occurring. Global model interpretability helps to understand the relationship between the response variable and the individual features (or subsets thereof). Arguably, comprehensive global model interpretability is very hard to achieve in practice. Any model that exceeds a handful of features will be hard to fully grasp as we will not be able to comprehend the whole model structure at once. While global model interpretability is usually out of reach, there is a better chance to understand at least some models on a modular level. This typically revolves around gaining understanding of which features are the most influential (via *feature importance*) and then focusing on how the most influential variables drive the model output (via *feature effects*). Although you may not be able to fully grasp a model with a hundred features, typically only a dozen or so of these variables are really influential in driving the model’s performance. And it is possible to have a firm grasp of how a dozen variables are influencing a model. ### 16\.2\.2 Local interpretation Global interpretability methods help us understand the inputs and their overall relationship with the response variable, but they can be highly deceptive in some cases (e.g., when strong interactions are occurring). Although a given feature may influence the predictive accuracy of our model as a whole, it does not mean that that feature has the largest influence on a predicted value for a given observation (e.g., a customer, house, or employee) or even a group of observations. 
Local interpretations help us understand what features are influencing the predicted response for a given observation (or small group of observations). These techniques help us to not only answer what we expect a customer to do, but also why our model is making a specific prediction for a given observation.

There are three primary approaches to local interpretation:

* Local interpretable model\-agnostic explanations (LIME)
* Shapley values
* Localized step\-wise procedures

These techniques have the same objective: to explain which variables are most influential in predicting the target for a set of observations. To illustrate, we’ll focus on two observations. The first is the observation that our ensemble produced the highest predicted `Sale_Price` for (i.e., observation 1825 which has a predicted `Sale_Price` of $663,136\), and the second is the observation with the lowest predicted `Sale_Price` (i.e., observation 139 which has a predicted `Sale_Price` of $47,245\.45\). Our goal with local interpretation is to explain what features are driving these two predictions.

```
# Compute predictions
predictions <- predict(ensemble_tree, train_h2o) %>% as.vector()

# Print the highest and lowest predicted sales price
paste("Observation", which.max(predictions), 
      "has a predicted sale price of", scales::dollar(max(predictions)))
## [1] "Observation 1825 has a predicted sale price of $663,136"
paste("Observation", which.min(predictions), 
      "has a predicted sale price of", scales::dollar(min(predictions)))
## [1] "Observation 139 has a predicted sale price of $47,245.45"

# Grab feature values for observations with min/max predicted sales price
high_ob <- as.data.frame(train_h2o)[which.max(predictions), ] %>% select(-Sale_Price)
low_ob  <- as.data.frame(train_h2o)[which.min(predictions), ] %>% select(-Sale_Price)
```

### 16\.2\.3 Model\-specific vs. model\-agnostic

It’s also important to understand that there are *model\-specific* and *model\-agnostic* approaches for interpreting your model. Many of the approaches you’ve seen in the previous chapters for understanding feature importance are model\-specific. For example, in linear models we can use the absolute value of the \\(t\\)–statistic as a measure of feature importance (though this becomes complicated when your linear model involves interaction terms and transformations). Random forests, on the other hand, can record the prediction accuracy on the OOB portion of the data, then the same is done after permuting each predictor variable, and the difference between the two accuracies is then averaged over all trees and normalized by the standard error. These model\-specific interpretation tools are limited to their respective model classes. There can be advantages to using model\-specific approaches as they are more closely tied to the model performance and they may be able to more accurately incorporate the correlation structure between the predictors (Kuhn and Johnson [2013](#ref-apm)). However, there are also some disadvantages. For example, many ML algorithms (e.g., stacked ensembles) have no natural way of measuring feature importance:

```
vip(ensemble_tree, method = "model")
## Error in vi_model.default(ensemble_tree, method = "model") : 
## model-specific variable importance scores are currently not available for objects of class "h2o.stackedEnsemble.summary".
```

Furthermore, comparing model\-specific feature importance across model classes is difficult since you are comparing different measurements (e.g., the magnitude of \\(t\\)\-statistics in linear models vs. degradation of prediction accuracy in random forests). In model\-agnostic approaches, the model is treated as a “black box”. The separation of interpretability from the specific model allows us to easily compare feature importance across different models.

Ultimately, there is no one best approach for model interpretability. Rather, only by applying multiple approaches (to include comparing model specific and model agnostic results) can we really gain full trust in the interpretations we extract.

An important item to note is that when using model agnostic procedures, additional code preparation is often required. For example, the **iml** (Molnar [2019](#ref-R-iml)), **DALEX** (Biecek [2019](#ref-R-DALEX)), and **LIME** (Pedersen and Benesty [2018](#ref-R-lime)) packages use purely model agnostic procedures. Consequently, we need to create a model agnostic object that contains three components:

1. A data frame with just the features (must be of class `"data.frame"`, cannot be an `"H2OFrame"` or other object).
2. A vector with the actual responses (must be numeric—0/1 for binary classification problems).
3. A custom function that will take the features from 1\), apply the ML algorithm, and return the predicted values as a vector.

The following code extracts these items for the **h2o** example:

```
# 1) create a data frame with just the features
features <- as.data.frame(train_h2o) %>% select(-Sale_Price)

# 2) Create a vector with the actual responses
response <- as.data.frame(train_h2o) %>% pull(Sale_Price)

# 3) Create custom predict function that returns the predicted values as a vector
pred <- function(object, newdata) {
  results <- as.vector(h2o.predict(object, as.h2o(newdata)))
  return(results)
}

# Example of prediction output
pred(ensemble_tree, features) %>% head()
## [1] 207144.3 108958.2 164248.4 241984.2 190000.7 202795.8
```

Once we have these three components we can create our model agnostic objects for the **iml**[43](#fn43) and **DALEX** packages, which will just pass these downstream components (along with the ML model) to other functions.

```
# iml model agnostic object
components_iml <- Predictor$new(
  model = ensemble_tree, 
  data = features, 
  y = response, 
  predict.fun = pred
)

# DALEX model agnostic object
components_dalex <- DALEX::explain(
  model = ensemble_tree,
  data = features,
  y = response,
  predict_function = pred
)
```
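To see how these model\-agnostic objects get passed downstream, here is a brief sketch of handing them to the permutation\-based importance functions described in Section 16\.3\.2. The specific loss choices are just examples from each package’s pre\-defined options, and the code assumes the `components_iml` and `components_dalex` objects above were created successfully.

```
# Pass the model-agnostic objects downstream (sketch)

# iml: permutation-based feature importance with RMSE as the loss
imp_iml <- FeatureImp$new(components_iml, loss = "rmse")
plot(imp_iml)

# DALEX: permutation-based variable importance with an RMSE-type loss
imp_dalex <- variable_importance(
  components_dalex,
  loss_function = loss_root_mean_square
)
plot(imp_dalex)
```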
```
# iml model agnostic object
components_iml <- Predictor$new(
  model = ensemble_tree,
  data = features,
  y = response,
  predict.fun = pred
)

# DALEX model agnostic object
components_dalex <- DALEX::explain(
  model = ensemble_tree,
  data = features,
  y = response,
  predict_function = pred
)
```

16\.3 Permutation\-based feature importance
-------------------------------------------

In previous chapters we illustrated a few model\-specific approaches for measuring feature importance (e.g., for linear models we used the absolute value of the \\(t\\)\-statistic). For SVMs, on the other hand, we had to rely on a model\-agnostic approach which was based on the permutation feature importance measurement introduced for random forests by Breiman ([2001](#ref-breiman2001random)) (see Section [11\.6](random-forest.html#rf-vip)) and expanded on by Fisher, Rudin, and Dominici ([2018](#ref-fisher2018model)).

### 16\.3\.1 Concept

The permutation approach measures a feature’s importance by calculating the increase of the model’s prediction error after permuting the feature. The idea is that if we randomly permute the values of an important feature in the training data, the training performance would degrade (since permuting the values of a feature effectively destroys any relationship between that feature and the target variable). The permutation approach uses the difference (or ratio) between some baseline performance measure (e.g., RMSE) and the same performance measure obtained after permuting the values of a particular feature in the training data. From an algorithmic perspective, the approach follows these steps:

```
For any given loss function do the following:
1. Compute loss function for original model
2. For variable i in {1,...,p} do
     | randomize values
     | apply given ML model
     | estimate loss function
     | compute feature importance (some difference/ratio measure
       between permuted loss & original loss)
   End
3. Sort variables by descending feature importance
```

**Algorithm 1:** A simple algorithm for computing permutation\-based variable importance for the feature set \\(X\\).

A feature is “important” if permuting its values increases the model error relative to the other features, because the model relied on the feature for the prediction. A feature is “unimportant” if permuting its values keeps the model error relatively unchanged, because the model ignored the feature for the prediction. This type of variable importance is tied to the model’s performance. Therefore, it is assumed that the model has been properly tuned (e.g., using cross\-validation) and is not overfitting.

### 16\.3\.2 Implementation

Permutation\-based feature importance is available with the **DALEX**, **iml**, and **vip** packages, each providing unique benefits.

The **iml** package provides the `FeatureImp()` function which computes feature importance for general prediction models using the permutation approach. It is written in R6 and allows the user to specify a generic loss function or select from a pre\-defined list (e.g., `loss = "mse"` for mean squared error). It also allows the user to specify whether importance is measured as the difference or as the ratio of the original model error and the model error after permutation. The user can also specify the number of repetitions used when permuting each feature to help stabilize the variability in the procedure.

The **DALEX** package also provides permutation\-based variable importance scores through the `variable_importance()` function.
Similar to `iml::FeatureImp()`, this function allows the user to specify a loss function and how the importance scores are computed (e.g., using the difference or ratio). It also provides an option to sample the training data before shuffling the data to compute importance (the default is to use `n_sample = 1000`), which can help speed up computation.

The **vip** package specifically focuses on variable importance plots (VIPs) and provides both model\-specific and a number of model\-agnostic approaches to computing variable importance, including the permutation approach. With **vip** you can use customized loss functions (or select from a pre\-defined list), perform a Monte Carlo simulation to stabilize the procedure, sample observations prior to permuting features, perform the computations in parallel which can speed up runtime on large data sets, and more.

The following executes a permutation\-based feature importance via **vip**. To speed up execution we sample 50% of the training data but repeat the simulations 5 times to increase stability of our estimates (whenever `nsim > 1`, you also get an estimated standard deviation for each importance score). We see that many of the same features that have been influential in model\-specific approaches illustrated in previous chapters (e.g., `Gr_Liv_Area`, `Overall_Qual`, `Total_Bsmt_SF`, and `Neighborhood`) are also considered influential in our stacked model using the permutation approach.

Permutation\-based approaches can become slow as the number of predictors grows. This implementation took 9 minutes. You can speed up execution by parallelizing, reducing the sample size, or reducing the number of simulations. However, note that the last two options also increase the variability of the feature importance estimates.

```
vip(
  ensemble_tree,
  train = as.data.frame(train_h2o),
  method = "permute",
  target = "Sale_Price",
  metric = "RMSE",
  nsim = 5,
  sample_frac = 0.5,
  pred_wrapper = pred
)
```

Figure 7\.5: Top 10 most influential variables for the stacked H2O model using permutation\-based feature importance.
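To make Algorithm 1 concrete, here is a minimal, package\-free sketch of the permutation idea (a rough illustration, not the **vip** implementation). It assumes a generic fitted model `fit` with a standard `predict()` method and a data frame `train_df` containing the response; these names are placeholders rather than objects created earlier in the chapter, and only a single permutation per feature is performed (the equivalent of `nsim = 1`):

```
# Minimal sketch of Algorithm 1 (assumes a generic `fit` with a predict() method
# and a data frame `train_df` whose response column is named by `y`)
permute_importance <- function(fit, train_df, y,
                               metric = function(obs, pred) sqrt(mean((obs - pred)^2))) {
  baseline <- metric(train_df[[y]], predict(fit, train_df))
  feats <- setdiff(names(train_df), y)
  imp <- sapply(feats, function(f) {
    shuffled <- train_df
    shuffled[[f]] <- sample(shuffled[[f]])  # destroy the feature-target relationship
    metric(train_df[[y]], predict(fit, shuffled)) - baseline  # increase in error
  })
  sort(imp, decreasing = TRUE)
}

# Toy example with a simple linear model
fit <- lm(mpg ~ ., data = mtcars)
permute_importance(fit, mtcars, y = "mpg")
```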
16\.4 Partial dependence
------------------------

Partial dependence helps to understand the marginal effect of a feature (or subset thereof) on the predicted outcome.
In essence, it allows us to understand how the response variable changes as we change the value of a feature while taking into account the average effect of all the other features in the model.

### 16\.4\.1 Concept

The procedure follows the traditional methodology documented in J. H. Friedman ([2001](#ref-friedman2001greedy)). The algorithm (illustrated below) will split the feature of interest into \\(j\\) equally spaced values. For example, the `Gr_Liv_Area` feature ranges from 334–5095 square feet. Say the user selects \\(j \= 20\\). The algorithm will first create an evenly spaced grid consisting of 20 values across the distribution of `Gr_Liv_Area` (e.g., \\(334\.00, 584\.58, \\dots, 5095\.00\\)). Then the algorithm will make 20 copies of the original training data (one copy for each value in the grid). The algorithm will then set `Gr_Liv_Area` for all observations in the first copy to 334, 585 in the second copy, 835 in the third copy, …, and finally to 5095 in the 20\-th copy (all other features remain unchanged). The algorithm then predicts the outcome for each observation in each of the 20 copies, and then averages the predicted values for each set. These averaged predicted values are known as partial dependence values and are plotted against the 20 evenly spaced values for `Gr_Liv_Area`.

```
For a selected predictor (x)
1. Construct a grid of j evenly spaced values across the distribution
   of x: {x1, x2, ..., xj}
2. For i in {1,...,j} do
     | Copy the training data and replace the original values of x
       with the constant xi
     | Apply given ML model (i.e., obtain vector of predictions)
     | Average predictions together
   End
3. Plot the averaged predictions against x1, x2, ..., xj
```

**Algorithm 2:** A simple algorithm for constructing the partial dependence of the response on a single predictor \\(x\\).

Algorithm 2 can be quite computationally intensive since it involves \\(j\\) passes over the training records (and therefore \\(j\\) calls to the prediction function). Fortunately, the algorithm can be parallelized quite easily (see Brandon Greenwell ([2018](#ref-R-pdp)) for an example). It can also be easily extended to larger subsets of two or more features as well (i.e., to visualize interaction effects).

If we plot the partial dependence values against the grid values we get what’s known as a *partial dependence plot* (PDP) (Figure [16\.1](iml.html#fig:pdp-illustration)) where the line represents the average predicted value across all observations at each of the \\(j\\) values of \\(x\\).

Figure 16\.1: Illustration of the partial dependence process.

### 16\.4\.2 Implementation

The **pdp** package (Brandon Greenwell [2018](#ref-R-pdp)) is a widely used, mature, and flexible package for constructing PDPs. The **iml** and **DALEX** packages also provide PDP capabilities.[44](#fn44) **pdp** has built\-in support for many packages but for models that are not supported (such as **h2o** stacked models) we need to create a custom prediction function wrapper, as illustrated below. First, we create a custom prediction function similar to that which we created in Section [16\.2\.3](iml.html#agnostic); however, here we return the mean of the predicted values. We then use `pdp::partial()` to compute the partial dependence values. We can use `autoplot()` to view PDPs using **ggplot2**. The `rug` argument provides markers for the decile distribution of `Gr_Liv_Area` and when you include `rug = TRUE` you must also include the training data.
```
# Custom prediction function wrapper
pdp_pred <- function(object, newdata) {
  results <- mean(as.vector(h2o.predict(object, as.h2o(newdata))))
  return(results)
}

# Compute partial dependence values
pd_values <- partial(
  ensemble_tree,
  train = as.data.frame(train_h2o),
  pred.var = "Gr_Liv_Area",
  pred.fun = pdp_pred,
  grid.resolution = 20
)
head(pd_values)  # take a peek
##   Gr_Liv_Area     yhat
## 1         334 158858.2
## 2         584 159566.6
## 3         835 160878.2
## 4        1085 165896.7
## 5        1336 171665.9
## 6        1586 180505.1

# Partial dependence plot
autoplot(pd_values, rug = TRUE, train = as.data.frame(train_h2o))
```

Figure 7\.6: Partial dependence plot for `Gr_Liv_Area` illustrating the average increase in predicted `Sale_Price` as `Gr_Liv_Area` increases.

### 16\.4\.3 Alternative uses

PDPs have primarily been used to illustrate the marginal effect a feature has on the predicted response value. However, Brandon M Greenwell, Boehmke, and McCarthy ([2018](#ref-greenwell2018simple)) illustrate an approach that uses a measure of the relative “flatness” of the partial dependence function as a measure of variable importance. The idea is that those features with larger marginal effects on the response have greater importance. You can implement a PDP\-based measure of feature importance by using the **vip** package and setting `method = "pdp"`. The resulting variable importance scores also retain the computed partial dependence values (so you can easily view plots of both feature importance and feature effects).
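Before moving on, if you want to see Algorithm 2 without any helper packages, the following rough sketch computes partial dependence values by hand. It assumes a generic fitted model `fit` with a `predict()` method and a data frame `train_df`; these are placeholder names, not the **h2o** objects used above:

```
# Hand-rolled partial dependence (Algorithm 2); assumes a generic `fit` and `train_df`
manual_pdp <- function(fit, train_df, feature, grid.resolution = 20) {
  grid <- seq(min(train_df[[feature]]), max(train_df[[feature]]),
              length.out = grid.resolution)
  pd <- sapply(grid, function(value) {
    copy <- train_df
    copy[[feature]] <- value   # hold the feature constant for every observation
    mean(predict(fit, copy))   # average prediction = partial dependence value
  })
  data.frame(feature_value = grid, yhat = pd)
}

# Toy illustration
fit <- lm(mpg ~ wt + hp, data = mtcars)
head(manual_pdp(fit, mtcars, "wt"))
```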
16\.5 Individual conditional expectation
----------------------------------------

Individual conditional expectation (ICE) curves (Goldstein et al. [2015](#ref-goldstein2015peeking)) are very similar to PDPs; however, rather than averaging the predicted values across all observations we observe and plot the individual observation\-level predictions.

### 16\.5\.1 Concept

An ICE plot visualizes the dependence of the predicted response on a feature for *each* instance separately, resulting in multiple lines, one for each observation, compared to one line in partial dependence plots. A PDP is the average of the lines of an ICE plot.
Note that the following algorithm is the same as the PDP algorithm except for the last line, where PDPs average the predicted values.

```
For a selected predictor (x)
1. Construct a grid of j evenly spaced values across the distribution
   of x: {x1, x2, ..., xj}
2. For i in {1,...,j} do
     | Copy the training data and replace the original values of x
       with the constant xi
     | Apply given ML model (i.e., obtain vector of predictions)
   End
3. Plot the predictions against x1, x2, ..., xj with lines connecting
   observations that correspond to the same row number in the original
   training data
```

**Algorithm 3:** A simple algorithm for constructing the individual conditional expectation of the response on a single predictor \\(x\\).

So, what do you gain by looking at individual expectations, instead of partial dependencies? PDPs can obfuscate heterogeneous relationships that result from strong interaction effects. PDPs can show you what the average relationship between feature \\(x\_s\\) and the predicted value (\\(\\widehat{y}\\)) looks like. This works well only in cases where the interactions between features are weak; in cases where interactions exist, ICE curves will help to highlight this.

One issue to be aware of is that differences in ICE curves can often only be identified by centering the feature. For example, Figure [16\.2](iml.html#fig:ice-illustration) below displays ICE curves for the `Gr_Liv_Area` feature. The left plot makes it appear that all observations have very similar effects across `Gr_Liv_Area` values. However, the right plot shows centered ICE (c\-ICE) curves, which help to highlight heterogeneity more clearly and also draw more attention to those observations that deviate from the general pattern. You will typically see ICE curves centered at the minimum value of the feature. This allows you to see how effects change as the feature value increases.

Figure 16\.2: Non\-centered (A) and centered (B) ICE curves for `Gr_Liv_Area` illustrating the observation\-level effects (black lines) in predicted `Sale_Price` as `Gr_Liv_Area` increases. The plot also illustrates the PDP line (red), representing the average values across all observations.

### 16\.5\.2 Implementation

Similar to PDPs, the premier package to use for ICE curves is the **pdp** package; however, the **iml** package also provides ICE curves. To create ICE curves with the **pdp** package we follow the same procedure as with PDPs; however, we exclude the averaging component (applying `mean()`) in the custom prediction function. By default, `autoplot()` will plot all observations; we also include `center = TRUE` to center the curves at the first value.

Note that we use `pred.fun = pred`. This is the same custom prediction function created in Section 16\.2\.3\.

```
# Construct c-ICE curves
partial(
  ensemble_tree,
  train = as.data.frame(train_h2o),
  pred.var = "Gr_Liv_Area",
  pred.fun = pred,
  grid.resolution = 20,
  plot = TRUE,
  center = TRUE,
  plot.engine = "ggplot2"
)
```

Figure 16\.3: Centered ICE curve for `Gr_Liv_Area` illustrating the observation\-level effects in predicted `Sale_Price` as `Gr_Liv_Area` increases.

PDPs for classification models are typically plotted on a logit\-type scale, rather than on the probability scale (see Brandon Greenwell ([2018](#ref-R-pdp)) for details). This is more important for ICE curves and c\-ICE curves, which can be more difficult to interpret. For example, c\-ICE curves can result in negative probabilities.
The ICE curves will also be more clumped together and harder to interpret when the predicted probabilities are close to zero or one.
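Returning to the regression setting, the relationship between ICE curves and PDPs can also be seen directly in code. The following rough sketch of Algorithm 3 keeps the observation\-level predictions: averaging those curves recovers the PDP, while subtracting each curve's first value gives the c\-ICE version. It assumes a generic fitted model `fit` and a data frame `train_df` (placeholder names, not the **h2o** objects above):

```
# Hand-rolled ICE curves (Algorithm 3); assumes a generic `fit` and `train_df`
manual_ice <- function(fit, train_df, feature, grid.resolution = 20) {
  grid <- seq(min(train_df[[feature]]), max(train_df[[feature]]),
              length.out = grid.resolution)
  # one column per observation, one row per grid value
  ice <- sapply(seq_len(nrow(train_df)), function(i) {
    copies <- train_df[rep(i, grid.resolution), ]
    copies[[feature]] <- grid
    predict(fit, copies)
  })
  list(grid = grid,
       ice = ice,                            # ICE curves
       centered = sweep(ice, 2, ice[1, ]),   # c-ICE: subtract each curve's first value
       pdp = rowMeans(ice))                  # the PDP is the average of the ICE curves
}

# Toy illustration
fit <- lm(mpg ~ wt + hp, data = mtcars)
str(manual_ice(fit, mtcars, "wt"))
```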
16\.6 Feature interactions
--------------------------

When features in a prediction model interact with each other, the influence of the features on the prediction surface is not additive but more complex. In real life, most relationships between features and some response variable are complex and include interactions. This is largely why more complex algorithms (especially tree\-based algorithms) tend to perform very well—the nature of their complexity often allows them to naturally capture complex interactions. However, identifying and understanding the nature of these interactions is difficult.

One way to estimate the interaction strength is to measure how much of the variation of the predicted outcome depends on the interaction of the features. This measurement is called the \\(H\\)\-statistic and was introduced by Friedman, Popescu, and others ([2008](#ref-friedman2008predictive)).

### 16\.6\.1 Concept

There are two main approaches to assessing interactions with the \\(H\\)\-statistic:

1. The interaction between a feature and all other features, which tells us how strongly (in total) the specific feature interacts in the model with all the other features;
2. The interaction between two features, which tells us how strongly two specific features interact with each other in the model.

To measure both types of interactions, we leverage partial dependence values for the features of interest. For the first approach, which measures how a feature (\\(x\_i\\)) interacts with all other features, the algorithm performs the following steps:

```
1. For variable i in {1,...,p} do
     | f(x) = estimate predicted values with original model
     | pd(x) = partial dependence of variable i
     | pd(!x) = partial dependence of all features excluding i
     | upper = sum(f(x) - pd(x) - pd(!x))
     | lower = variance(f(x))
     | rho = upper / lower
   End
2. Sort variables by descending rho (interaction strength)
```

**Algorithm 4:** A simple algorithm for measuring the interaction strength between \\(x\_i\\) and all other features.

For the second approach, which measures the two\-way interaction strength of feature \\(x\_i\\) and \\(x\_j\\), the algorithm performs the following steps:

```
1. i = a selected variable of interest
2. For remaining variables j in {1,...,p} do
     | pd(ij) = interaction partial dependence of variables i and j
     | pd(i) = partial dependence of variable i
     | pd(j) = partial dependence of variable j
     | upper = sum(pd(ij) - pd(i) - pd(j))
     | lower = variance(pd(ij))
     | rho = upper / lower
   End
3. Sort interaction relationship by descending rho (interaction strength)
```

**Algorithm 5:** A simple algorithm for measuring the interaction strength between \\(x\_i\\) and \\(x\_j\\).

In essence, the \\(H\\)\-statistic measures how much of the variation of the predicted outcome depends on the interaction of the features. In both cases, \\(\\rho \= \\text{rho}\\) represents the interaction strength, which will be between 0 (when there is no interaction at all) and 1 (if all of the variation of the predicted outcome depends on a given interaction).
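Before turning to the **iml** implementation, the following rough sketch conveys the spirit of Algorithm 5 for a pair of features. It uses squared deviations (as in Friedman's original definition) rather than the simplified sums shown above, and it assumes a generic fitted model `fit` with a `predict()` method and a data frame `train_df` (placeholder names). With \\(n\\) observations it makes on the order of \\(n^2\\) predictions, which is why the real implementations are so slow:

```
# Partial dependence evaluated at each observed value of the features in `cols`
pd_at_obs <- function(fit, train_df, cols) {
  sapply(seq_len(nrow(train_df)), function(k) {
    copy <- train_df
    copy[cols] <- train_df[rep(k, nrow(train_df)), cols, drop = FALSE]
    mean(predict(fit, copy))   # average over the remaining features
  })
}

# Rough two-way H-statistic (Algorithm 5) for features i and j
h_stat_2way <- function(fit, train_df, i, j) {
  center <- function(v) v - mean(v)
  pd_ij <- center(pd_at_obs(fit, train_df, c(i, j)))
  pd_i  <- center(pd_at_obs(fit, train_df, i))
  pd_j  <- center(pd_at_obs(fit, train_df, j))
  sum((pd_ij - pd_i - pd_j)^2) / sum(pd_ij^2)
}

# A model with an explicit interaction term should score higher than an additive one
fit <- lm(mpg ~ wt * hp, data = mtcars)
h_stat_2way(fit, mtcars, "wt", "hp")
```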
### 16\.6\.2 Implementation

Currently, the **iml** package provides the only viable implementation of the \\(H\\)\-statistic as a model\-agnostic application. We use `Interaction$new()` to compute the one\-way interaction \\(H\\)\-statistic, which assesses if and how strongly each feature interacts with all other features in the model. We find that `First_Flr_SF` has the strongest interaction (although it is a weak interaction since \\(\\rho \= 0\.139\\)).

Unfortunately, due to the algorithm complexity, the \\(H\\)\-statistic is very computationally demanding as it requires \\(2n^2\\) runs. This example of computing the one\-way interaction \\(H\\)\-statistic took two hours to complete! However, **iml** does allow you to speed up computation by reducing the `grid.size` or by parallelizing computation with `parallel = TRUE`. See `vignette("parallel", package = "iml")` for more info.

```
interact <- Interaction$new(components_iml)

interact$results %>%
  arrange(desc(.interaction)) %>%
  head()
##        .feature .interaction
## 1  First_Flr_SF   0.13917718
## 2  Overall_Qual   0.11077722
## 3  Kitchen_Qual   0.10531653
## 4 Second_Flr_SF   0.10461824
## 5      Lot_Area   0.10389242
## 6   Gr_Liv_Area   0.09833997

plot(interact)
```

Figure 16\.4: \\(H\\)\-statistics for the 80 predictors in the Ames Housing data based on the H2O ensemble model.

Once we’ve identified the variable(s) with the strongest interaction signal (`First_Flr_SF` in our case), we can then compute the \\(H\\)\-statistic to identify which features it interacts with the most. This second iteration took over two hours and identified `Overall_Qual` as having the strongest interaction effect with `First_Flr_SF` (again, a weak interaction effect given \\(\\rho \= 0\.144\\)).

```
interact_2way <- Interaction$new(components_iml, feature = "First_Flr_SF")
interact_2way$results %>%
  arrange(desc(.interaction)) %>%
  top_n(10)
##                        .feature .interaction
## 1     Overall_Qual:First_Flr_SF   0.14385963
## 2       Year_Built:First_Flr_SF   0.09314573
## 3     Kitchen_Qual:First_Flr_SF   0.06567883
## 4        Bsmt_Qual:First_Flr_SF   0.06228321
## 5    Bsmt_Exposure:First_Flr_SF   0.05900530
## 6    Second_Flr_SF:First_Flr_SF   0.05747438
## 7    Kitchen_AbvGr:First_Flr_SF   0.05675684
## 8      Bsmt_Unf_SF:First_Flr_SF   0.05476509
## 9       Fireplaces:First_Flr_SF   0.05470992
## 10    Mas_Vnr_Area:First_Flr_SF   0.05439255
```

Identifying these interactions can help point us in the direction of assessing how the interactions relate to the response variable. We can use PDPs or ICE curves with interactions to see their effect on the predicted response. Since the above process pointed out that `First_Flr_SF` and `Overall_Qual` had the highest interaction effect, the code below plots this interaction relationship with predicted `Sale_Price`. We see that properties with “good” or lower `Overall_Qual` values tend to have their `Sale_Price`s level off as `First_Flr_SF` increases more so than properties with really strong `Overall_Qual` values. Also, you can see that properties with “very good” `Overall_Qual` tend to have a much larger increase in `Sale_Price` as `First_Flr_SF` increases from 1500–2000 square feet than most other properties. (Although **pdp** allows more than one predictor, we take this opportunity to illustrate PDPs with the **iml** package.)

```
# Two-way PDP using iml
interaction_pdp <- Partial$new(
  components_iml,
  c("First_Flr_SF", "Overall_Qual"),
  ice = FALSE,
  grid.size = 20
)
plot(interaction_pdp)
```

Figure 16\.5: Interaction PDP illustrating the joint effect of `First_Flr_SF` and `Overall_Qual` on `Sale_Price`.
### 16\.6\.3 Alternatives

Obviously computational time constraints are a major issue in identifying potential interaction effects. Although the \\(H\\)\-statistic is the most statistically sound approach to detecting interactions, there are alternatives. The PDP\-based variable importance measure discussed in Brandon M Greenwell, Boehmke, and McCarthy ([2018](#ref-greenwell2018simple)) can also be used to quantify the strength of potential interaction effects. A thorough discussion of this approach is provided by Greenwell, Brandon M. and Boehmke, Bradley C. ([2019](#ref-vint)) and can be implemented with `vip::vint()`. Also, Kuhn and Johnson ([2019](#ref-kuhn2019feature)) provide a fairly comprehensive chapter discussing alternative approaches for identifying interactions.
16\.7 Local interpretable model\-agnostic explanations
------------------------------------------------------

*Local Interpretable Model\-agnostic Explanations* (LIME) is an algorithm that helps explain individual predictions and was introduced by Ribeiro, Singh, and Guestrin ([2016](#ref-ribeiro2016should)). Behind the workings of LIME lies the assumption that every complex model is linear on a local scale (i.e., in a small neighborhood around an observation of interest) and that it is therefore possible to fit a simple surrogate model around a single observation that will mimic how the global model behaves at that locality.

### 16\.7\.1 Concept

To do so, LIME samples the training data multiple times to identify observations that are similar to the individual record of interest. It then trains an interpretable model (often a LASSO model) weighted by the proximity of the sampled observations to the instance of interest. The resulting model can then be used to explain the predictions of the more complex model at the locality of the observation of interest. The general algorithm LIME applies is:

1. ***Permute*** your training data to create replicated feature data with slight value modifications.
2. Compute ***proximity measure*** (e.g., 1 \- distance) between the observation of interest and each of the permuted observations.
3. Apply selected machine learning model to ***predict outcomes*** of permuted data.
4. ***Select m number of features*** to best describe predicted outcomes.
5. ***Fit a simple model*** to the permuted data, explaining the complex model outcome with \\(m\\) features from the permuted data weighted by its similarity to the original observation.
6. Use the resulting ***feature weights to explain local behavior***.

**Algorithm 6:** The generalized LIME algorithm.

Each of these steps will be discussed in further detail as we proceed. Although the **iml** package implements the LIME algorithm, the **lime** package provides the most comprehensive implementation.

### 16\.7\.2 Implementation

The implementation of **Algorithm 6** via the **lime** package is split into two operations: `lime::lime()` and `lime::explain()`. The `lime::lime()` function creates an `"explainer"` object, which is just a list that contains the fitted machine learning model and the feature distributions for the training data. The feature distributions that it contains include distribution statistics for each categorical variable level and each continuous variable split into \\(n\\) bins (the current default is four bins). These feature attributes will be used to permute data.

```
# Create explainer object
components_lime <- lime(
  x = features,
  model = ensemble_tree,
  n_bins = 10
)

class(components_lime)
## [1] "data_frame_explainer" "explainer"            "list"

summary(components_lime)
##                      Length Class              Mode
## model                 1     H2ORegressionModel S4
## preprocess            1     -none-             function
## bin_continuous        1     -none-             logical
## n_bins                1     -none-             numeric
## quantile_bins         1     -none-             logical
## use_density           1     -none-             logical
## feature_type         80     -none-             character
## bin_cuts             80     -none-             list
## feature_distribution 80     -none-             list
```

Once we’ve created our lime object (i.e., `components_lime`), we can now perform the LIME algorithm using the `lime::explain()` function on the observation(s) of interest.
Recall that for local interpretation we are focusing on the two observations identified in Section 16\.2\.2 that contain the highest and lowest predicted sales prices. This function has several options, each providing flexibility in how we perform **Algorithm 6**: * `x`: Contains the observation(s) you want to create local explanations for. (See step 1 in **Algorithm 6**.) * `explainer`: Takes the explainer object created by `lime::lime()`, which will be used to create permuted data. Permutations are sampled from the variable distributions created by the `lime::lime()` explainer object. (See step 1 in **Algorithm 6**.) * `n_permutations`: The number of permutations to create for each observation in `x` (default is 5,000 for tabular data). (See step 1 in **Algorithm 6**.) * `dist_fun`: The distance function to use. The default is Gower’s distance but can also use Euclidean, Manhattan, or any other distance function allowed by the `dist()` function (see `?dist()` for details). To compute similarities, categorical features will be recoded based on whether or not they are equal to the actual observation. If continuous features are binned (the default) these features will be recoded based on whether they are in the same bin as the observation to be explained. Using the recoded data the distance to the original observation is then calculated based on a user\-chosen distance measure. (See step 2 in **Algorithm 6**.) * `kernel_width`: To convert the distance measure to a similarity score, an exponential kernel of a user defined width (defaults to 0\.75 times the square root of the number of features) is used. Smaller values restrict the size of the local region. (See step 2 in **Algorithm 6**.) * `n_features`: The number of features to best describe the predicted outcomes. (See step 4 in **Algorithm 6**.) * `feature_select`: `lime::lime()` can use forward selection, ridge regression, lasso, or a decision tree to select the “best” `n_features` features. In the next example we apply a ridge regression model and select the \\(m\\) features with highest absolute weights. (See step 4 in **Algorithm 6**.) For classification models we need to specify a couple of additional arguments: * `labels`: The specific labels (classes) to explain (e.g., 0/1, “Yes”/“No”)? * `n_labels`: The number of labels to explain (e.g., Do you want to explain both success and failure or just the reason for success?) ``` # Use LIME to explain previously defined instances: high_ob and low_ob lime_explanation <- lime::explain( x = rbind(high_ob, low_ob), explainer = components_lime, n_permutations = 5000, dist_fun = "gower", kernel_width = 0.25, n_features = 10, feature_select = "highest_weights" ) ``` If the original ML model is a regressor, the local model will predict the output of the complex model directly. If it is a classifier, the local model will predict the probability of the chosen class(es). The output from `lime::explain()` is a data frame containing various information on the local model’s predictions. Most importantly, for each observation supplied it contains the fitted explainer model (`model_r2`) and the weighted importance (`feature_weight`) for each important feature (`feature_desc`) that best describes the local relationship. 
```
glimpse(lime_explanation)
## Observations: 20
## Variables: 11
## $ model_type       <chr> "regression", "regression", "regression", "regr…
## $ case             <chr> "1825", "1825", "1825", "1825", "1825", "1825",…
## $ model_r2         <dbl> 0.41661172, 0.41661172, 0.41661172, 0.41661172,…
## $ model_intercept  <dbl> 186253.6, 186253.6, 186253.6, 186253.6, 186253.…
## $ model_prediction <dbl> 406033.5, 406033.5, 406033.5, 406033.5, 406033.…
## $ feature          <chr> "Gr_Liv_Area", "Overall_Qual", "Total_Bsmt_SF",…
## $ feature_value    <int> 3627, 8, 1930, 35760, 1796, 1831, 3, 14, 1, 3, …
## $ feature_weight   <dbl> 55254.859, 50069.347, 40261.324, 20430.128, 193…
## $ feature_desc     <chr> "2141 < Gr_Liv_Area", "Overall_Qual = Very_Exce…
## $ data             <list> [[Two_Story_1946_and_Newer, Residential_Low_De…
## $ prediction       <dbl> 663136.38, 663136.38, 663136.38, 663136.38, 663…
```

Visualizing the results in Figure [16\.6](iml.html#fig:first-lime-fit) we see that the size and quality of the home appear to be driving the predictions for both `high_ob` (the high `Sale_Price` observation) and `low_ob` (the low `Sale_Price` observation). However, it’s important to note the low \\(R^2\\) (“Explanation Fit”) of the models. The local model appears to have a fairly poor fit and, therefore, we shouldn’t put too much faith in these explanations.

```
plot_features(lime_explanation, ncol = 1)
```

Figure 16\.6: Local explanation for observations 1825 (`high_ob`) and 139 (`low_ob`) using LIME.

### 16\.7\.3 Tuning

Considering there are several knobs we can adjust when performing LIME, we can treat these as tuning parameters to try to tune the local model. This helps to maximize the amount of trust we can have in the local region explanation. As an example, the following code block changes the distance function to be Euclidean, increases the kernel width to create a larger local region, and changes the feature selection approach to a LARS\-based LASSO model. The result is a fairly substantial increase in our explanation fits, giving us much more confidence in their explanations.

```
# Tune the LIME algorithm a bit
lime_explanation2 <- explain(
  x = rbind(high_ob, low_ob),
  explainer = components_lime,
  n_permutations = 5000,
  dist_fun = "euclidean",
  kernel_width = 0.75,
  n_features = 10,
  feature_select = "lasso_path"
)

# Plot the results
plot_features(lime_explanation2, ncol = 1)
```

Figure 16\.7: Local explanation for observations 1825 (case 1\) and 139 (case 2\) after tuning the LIME algorithm.

### 16\.7\.4 Alternative uses

The discussion above revolves around using LIME for tabular data sets. However, LIME can also be applied to non\-traditional data sets such as text and images. For text, LIME creates a new *document term matrix* with perturbed text (e.g., it generates new phrases and sentences based on existing text). It then follows a similar procedure of weighting the similarity of the generated text to the original. The localized model then helps to identify which words in the perturbed text are producing the strongest signal. For images, variations of the images are created by replacing certain groupings of pixels with a constant color (e.g., gray). LIME then assesses the predicted labels for the given group of pixels not perturbed. For more details on such use cases see Molnar and others ([2018](#ref-molnar2018interpretable)).
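Before moving on, to tie the tabular workflow back to Algorithm 6, here is a deliberately stripped\-down sketch of the core idea for numeric features only: perturb the data, weight each perturbed point by its proximity to the observation of interest, and fit a weighted linear surrogate. The names `fit`, `features_df`, and `obs` are placeholders, and the real **lime** package does considerably more (binning, Gower distance, feature selection):

```
# Toy LIME-style local surrogate for numeric features (a rough sketch only)
local_surrogate <- function(fit, features_df, obs, n_permutations = 5000,
                            kernel_width = 0.75 * sqrt(ncol(features_df))) {
  # 1. Permute: sample each feature independently from its empirical distribution
  perturbed <- as.data.frame(lapply(features_df, sample,
                                    size = n_permutations, replace = TRUE))
  # 2. Proximity: exponential kernel on the scaled Euclidean distance to `obs`
  scaled  <- scale(rbind(obs, perturbed))
  dists   <- sqrt(rowSums(sweep(scaled[-1, , drop = FALSE], 2, scaled[1, ])^2))
  weights <- exp(-dists^2 / kernel_width^2)
  # 3. Predict outcomes of the perturbed data with the complex model
  perturbed$pred_y <- predict(fit, perturbed)
  # 4/5. Fit a simple weighted linear surrogate and return its coefficients
  coef(lm(pred_y ~ ., data = perturbed, weights = weights))
}

# Toy illustration
fit <- lm(mpg ~ wt + hp + qsec, data = mtcars)
local_surrogate(fit, mtcars[, c("wt", "hp", "qsec")],
                obs = mtcars[1, c("wt", "hp", "qsec")])
```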
The local model appears to have a fairly poor fit and, therefore, we shouldn’t put too much faith in these explanations.

```
plot_features(lime_explanation, ncol = 1)
```

Figure 16\.6: Local explanation for observations 1825 (`high_ob`) and 139 (`low_ob`) using LIME.

### 16\.7\.3 Tuning

Considering there are several knobs we can adjust when performing LIME, we can treat these as tuning parameters to try to tune the local model. This helps to maximize the amount of trust we can have in the local region explanation. As an example, the following code block changes the distance function to be Euclidean, increases the kernel width to create a larger local region, and changes the feature selection approach to a LARS\-based LASSO model. The result is a fairly substantial increase in our explanation fits, giving us much more confidence in their explanations.

```
# Tune the LIME algorithm a bit
lime_explanation2 <- explain(
  x = rbind(high_ob, low_ob), 
  explainer = components_lime, 
  n_permutations = 5000,
  dist_fun = "euclidean",
  kernel_width = 0.75,
  n_features = 10, 
  feature_select = "lasso_path"
)

# Plot the results
plot_features(lime_explanation2, ncol = 1)
```

Figure 16\.7: Local explanation for observations 1825 (case 1\) and 139 (case 2\) after tuning the LIME algorithm.

### 16\.7\.4 Alternative uses

The discussion above revolves around using LIME for tabular data sets. However, LIME can also be applied to non\-traditional data sets such as text and images. For text, LIME creates a new *document term matrix* with perturbed text (e.g., it generates new phrases and sentences based on existing text). It then follows a similar procedure of weighting the similarity of the generated text to the original. The localized model then helps to identify which words in the perturbed text are producing the strongest signal. For images, variations of the images are created by replacing certain groupings of pixels with a constant color (e.g., gray). LIME then assesses how the predicted label depends on which groups of pixels are left unperturbed. For more details on such use cases see Molnar and others ([2018](#ref-molnar2018interpretable)).

16\.8 Shapley values
--------------------

Another method for explaining individual predictions borrows ideas from coalitional (or cooperative) game theory to produce what’s called Shapley values (Lundberg and Lee [2016](#ref-lundberg2016unexpected), [2017](#ref-lundberg2017unified)). By now you should realize that when a model gives a prediction for an observation, not all features play the same role: some of them may have a lot of influence on the model’s prediction, while others may be irrelevant. Consequently, one may think that the effect of each feature can be measured by checking what the prediction would have been if that feature were absent; the bigger the change in the model’s output, the more important that feature is. This is exactly what happens with permutation\-based variable importance (and since LIME most often uses a ridge or lasso model, it uses a similar approach to identify localized feature importance). However, observing only single feature effects at a time implies that dependencies between features are not taken into account, which could produce inaccurate and misleading explanations of the model’s internal logic. Therefore, to avoid missing any interaction between features, we should observe how the prediction changes for each possible subset of features and then combine these changes to form a unique contribution for each feature value.
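To make the subset idea concrete before formalizing it, below is a small, self\-contained toy example (not part of the Ames workflow; the prediction function, the observation, and the baseline values for "absent" features are invented purely for illustration). It enumerates every subset of features, weights the marginal contributions by the standard Shapley weights, and recovers exact Shapley values for a three\-feature model.

```
# Toy exact Shapley values by enumerating all feature subsets.
# The model f(), the observation x_star, and the baseline x_base are
# hypothetical values used only to illustrate the subset logic.
f <- function(x1, x2, x3) 10 * x1 + 5 * x2 * x3   # toy prediction function
x_star <- c(x1 = 2, x2 = 1, x3 = 3)               # observation to explain
x_base <- c(x1 = 0, x2 = 0, x3 = 0)               # values used for "absent" features

predict_subset <- function(S) {                   # S = names of "present" features
  z <- x_base
  z[S] <- x_star[S]
  f(z["x1"], z["x2"], z["x3"])
}

all_subsets <- function(v) {                      # all subsets, including the empty one
  out <- list(character(0))
  for (k in seq_along(v)) out <- c(out, combn(v, k, simplify = FALSE))
  out
}

p <- length(x_star)
shapley <- sapply(names(x_star), function(j) {
  others <- setdiff(names(x_star), j)
  contribs <- sapply(all_subsets(others), function(S) {
    w <- factorial(length(S)) * factorial(p - length(S) - 1) / factorial(p)
    w * (predict_subset(c(S, j)) - predict_subset(S))
  })
  sum(contribs)
})

shapley        # x1 = 20.0, x2 = 7.5, x3 = 7.5
sum(shapley)   # equals f(x_star) - f(x_base) = 35
```

With only three features there are just eight subsets to enumerate; with the 80 features used in the Ames example the number of subsets is astronomically large, which is why the approximation described next is needed.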
### 16\.8\.1 Concept

The concept of Shapley values is based on the idea that the feature values of an individual observation work together to cause a change in the model’s prediction with respect to the model’s expected output, and it divides this total change in prediction among the features in a way that is “fair” to their contributions across all possible subsets of features. To do so, Shapley values assess every combination of predictors to determine each predictor’s impact. Focusing on feature \\(x\_j\\), the approach will test the accuracy of every combination of features not including \\(x\_j\\) and then test how adding \\(x\_j\\) to each combination improves the accuracy.

Unfortunately, computing Shapley values is very computationally expensive. Consequently, the **iml** package implements an approximate Shapley value. To compute the approximate Shapley contribution of feature \\(x\_j\\) for observation \\(x\\) we need to construct two new “Frankenstein” instances and take the difference between their corresponding predictions. This is outlined in the brief algorithm below. Note that this is often repeated several times (e.g., 10–100\) for each feature/observation combination and the results are averaged together. See <http://bit.ly/fastshap> and Štrumbelj and Kononenko ([2014](#ref-strumbelj2014explaining)) for details.

```
ob = single observation of interest
1. For variables j in {1,...,p} do
  | z = draw a random sample (row) from the data set
  | randomly shuffle the feature names, perm <- sample(names(ob))
  | Create two new instances b1 and b2 as follows:
  | b1 = ob, but all the features in perm that appear after
  |      feature xj get their values swapped with the
  |      corresponding values in z.
  | b2 = ob, but feature xj, as well as all the features in perm
  |      that appear after xj, get their values swapped with the
  |      corresponding values in z.
  | f(b1) = compute prediction for b1
  | f(b2) = compute prediction for b2
  | shap_ind = f(b1) - f(b2)
  | phi = mean(shap_ind)
End
2. Sort phi in decreasing order
```

**Algorithm 7:** A simple algorithm for computing approximate Shapley values.

The aggregated Shapley values (\\(\\phi \=\\) `phi`) represent the contribution of each feature towards a predicted value compared to the average prediction for the data set. Figure [16\.8](iml.html#fig:shapley-idea) represents the first iteration of our algorithm, where we focus on the impact of feature \\(X\_1\\). In step (A) we sample the training data. In step (B) we create two copies of an individually sampled row and randomize the order of the features. Then in one copy we include the observation of interest’s values for every feature up to *and including* \\(X\_1\\), and the sampled row’s values for all the remaining features. In the second copy, we include the observation of interest’s values for every feature up to *but not including* \\(X\_1\\), and the sampled row’s values for \\(X\_1\\) and all the remaining features. Then in step (C), we apply our model to both copies of this row and in step (D) compute the difference between the predicted outputs. We follow this procedure for all the sampled rows, and the average difference across all sampled rows is the Shapley value. It should be obvious that the more observations we include in our sampling procedure, the closer our approximate Shapley computation will be to the true Shapley value.

Figure 16\.8: Generalized concept behind approximate Shapley value computation.
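For readers who prefer code to pseudocode, the following is a minimal, hedged R sketch of Algorithm 7\. It is not the **iml** implementation; `predict_fun`, `train`, and `ob` are placeholder names for a generic prediction function, the training data, and the single observation of interest (a one\-row data frame with the same columns as `train`).

```
# Rough sketch of the Monte Carlo approximation in Algorithm 7 (illustrative
# only; iml's internal implementation differs in the details).
approx_shapley <- function(predict_fun, train, ob, n_samples = 100) {
  p <- ncol(train)
  feats <- names(train)
  sapply(feats, function(j) {
    diffs <- replicate(n_samples, {
      z <- train[sample(nrow(train), 1), ]   # randomly sampled row
      perm <- sample(feats)                  # random feature ordering
      pos <- which(perm == j)
      after_j <- if (pos < p) perm[(pos + 1):p] else character(0)
      b1 <- ob                               # keeps x_j from the observation
      if (length(after_j) > 0) b1[after_j] <- z[after_j]
      b2 <- b1
      b2[j] <- z[j]                          # additionally swaps x_j
      predict_fun(b1) - predict_fun(b2)
    })
    mean(diffs)                              # approximate phi_j
  })
}

# With the h2o model used earlier one might wrap predictions as follows
# (hypothetical helper, shown only to illustrate the interface):
# predict_fun <- function(d) {
#   as.data.frame(h2o.predict(ensemble_tree, as.h2o(d)))[, 1]
# }
```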
### 16\.8\.2 Implementation The **iml** package provides one of the few Shapley value implementations in R. We use `Shapley$new()` to create a new Shapley object. The time to compute is largely driven by the number of predictors and the sample size drawn. By default, `Shapley$new()` will only use a sample size of 100 but you can control this to either reduce compute time or increase confidence in the estimated values. In this example we increased the sample size to 1000 for greater confidence in the estimated values; it took roughly 3\.5 minutes to compute. Looking at the results we see that the predicted sale price of $663,136\.38 is $481,797\.42 larger than the average predicted sale price of $181,338\.96; Figure [16\.9](iml.html#fig:shapley) displays the contribution each predictor played in this difference. We see that `Gr_Liv_Area`, `Overall_Qual`, and `Second_Flr_SF` are the top three features positively influencing the predicted sale price; all of which contributed close to, or over, $75,000 towards the $481\.8K difference. ``` # Compute (approximate) Shapley values (shapley <- Shapley$new(components_iml, x.interest = high_ob, sample.size = 1000)) ## Interpretation method: Shapley ## Predicted value: 663136.380000, Average prediction: 181338.963590 (diff = 481797.416410) ## ## Analysed predictor: ## Prediction task: unknown ## ## ## Analysed data: ## Sampling from data.frame with 2199 rows and 80 columns. ## ## Head of results: ## feature phi phi.var ## 1 MS_SubClass 1746.38653 4.269700e+07 ## 2 MS_Zoning -24.01968 3.640500e+06 ## 3 Lot_Frontage 1104.17628 7.420201e+07 ## 4 Lot_Area 15471.49017 3.994880e+08 ## 5 Street 1.03684 6.198064e+03 ## 6 Alley 41.81164 5.831185e+05 ## feature.value ## 1 MS_SubClass=Two_Story_1946_and_Newer ## 2 MS_Zoning=Residential_Low_Density ## 3 Lot_Frontage=118 ## 4 Lot_Area=35760 ## 5 Street=Pave ## 6 Alley=No_Alley_Access # Plot results plot(shapley) ``` Figure 16\.9: Local explanation for observation 1825 using the Shapley value algorithm. Since **iml** uses R6, we can reuse the Shapley object to identify the influential predictors that help explain the low `Sale_Price` observation. In Figure [16\.10](iml.html#fig:shapley2) we see similar results to LIME in that `Overall_Qual` and `Gr_Liv_Area` are the most influential predictors driving down the price of this home. ``` # Reuse existing object shapley$explain(x.interest = low_ob) # Plot results shapley$results %>% top_n(25, wt = abs(phi)) %>% ggplot(aes(phi, reorder(feature.value, phi), color = phi > 0)) + geom_point(show.legend = FALSE) ``` Figure 16\.10: Local explanation for observation 139 using the Shapley value algorithm. ### 16\.8\.3 XGBoost and built\-in Shapley values True Shapley values are considered theoretically optimal (Lundberg and Lee [2016](#ref-lundberg2016unexpected)); however, as previously discussed they are computationally challenging. The approximate Shapley values provided by **iml** are much more computationally feasible. Another common option is discussed by Lundberg and Lee ([2017](#ref-lundberg2017unified)) and, although not purely model\-agnostic, is applicable to tree\-based models and is fully integrated in most XGBoost implementations (including the **xgboost** package). Similar to **iml**’s approximation procedure, this tree\-based Shapley value procedure is also an approximation, but allows for polynomial runtime instead of exponential runtime. 
To demonstrate, we’ll use the features used and the final XGBoost model created in Section [12\.5\.2](gbm.html#xgb-tuning-strategy). ``` # Compute tree SHAP for a previously obtained XGBoost model X <- readr::read_rds("data/xgb-features.rds") xgb.fit.final <- readr::read_rds("data/xgb-fit-final.rds") ``` The benefit of this expedient approach is we can reasonably compute Shapley values for every observation and every feature in one fell swoop. This allows us to use Shapley values for more than just local interpretation. For example, the following computes and plots the Shapley values for every feature and observation in our Ames housing example; see Figure [16\.11](iml.html#fig:shap-vip). The left plot displays the individual Shapley contributions. Each dot represents a feature’s contribution to the predicted `Sale_Price` for an individual observation. This allows us to see the general magnitude and variation of each feature’s contributions across all observations. We can use this information to compute the average absolute Shapley value across all observations for each features and use this as a global measure of feature importance (right plot). There’s a fair amount of general data wrangling going on here but the key line of code is `predict(newdata = X, predcontrib = TRUE)`. This line computes the prediction contribution for each feature and observation in the data supplied via `newdata`. ``` # Try to re-scale features (low to high) feature_values <- X %>% as.data.frame() %>% mutate_all(scale) %>% gather(feature, feature_value) %>% pull(feature_value) # Compute SHAP values, wrangle a bit, compute SHAP-based importance, etc. shap_df <- xgb.fit.final %>% predict(newdata = X, predcontrib = TRUE) %>% as.data.frame() %>% select(-BIAS) %>% gather(feature, shap_value) %>% mutate(feature_value = feature_values) %>% group_by(feature) %>% mutate(shap_importance = mean(abs(shap_value))) # SHAP contribution plot p1 <- ggplot(shap_df, aes(x = shap_value, y = reorder(feature, shap_importance))) + ggbeeswarm::geom_quasirandom(groupOnX = FALSE, varwidth = TRUE, size = 0.4, alpha = 0.25) + xlab("SHAP value") + ylab(NULL) # SHAP importance plot p2 <- shap_df %>% select(feature, shap_importance) %>% filter(row_number() == 1) %>% ggplot(aes(x = reorder(feature, shap_importance), y = shap_importance)) + geom_col() + coord_flip() + xlab(NULL) + ylab("mean(|SHAP value|)") # Combine plots gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 16\.11: Shapley contribution (left) and global importance (right) plots. We can also use this information to create an alternative to PDPs. Shapley\-based dependence plots (Figure [16\.12](iml.html#fig:shap-pdp)) show the Shapley values of a feature on the \\(y\\)\-axis and the value of the feature for the \\(x\\)\-axis. By plotting these values for all observations in the data set we can see how the feature’s attributed importance changes as its value varies. ``` shap_df %>% filter(feature %in% c("Overall_Qual", "Gr_Liv_Area")) %>% ggplot(aes(x = feature_value, y = shap_value)) + geom_point(aes(color = shap_value)) + scale_colour_viridis_c(name = "Feature value\n(standardized)", option = "C") + facet_wrap(~ feature, scales = "free") + scale_y_continuous('Shapley value', labels = scales::comma) + xlab('Normalized feature value') ``` Figure 16\.12: Shapley\-based dependence plot illustrating the variability in contribution across the range of `Gr_Liv_Area` and `Overall_Qual` values. 
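A useful property of these contributions is that, for a given observation, the per\-feature values plus the `BIAS` column reconstruct the model’s raw prediction (which, for a regression model like this one, is the predicted sale price itself). Assuming the `xgb.fit.final` and `X` objects from the chunk above are still in memory, a quick sanity check might look like this; it is illustrative, not a required step.

```
# Verify that SHAP contributions (including BIAS) sum to the predictions
contrib <- predict(xgb.fit.final, newdata = X, predcontrib = TRUE)
pred    <- predict(xgb.fit.final, newdata = X)

# Largest reconstruction error across all observations; should be ~0
max(abs(rowSums(contrib) - pred))
```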
16\.9 Localized step\-wise procedure
------------------------------------

An additional approach for localized explanation is a procedure that is loosely related to the partial dependence algorithm with an added step\-wise procedure. The procedure was introduced by Staniak and Biecek ([2018](#ref-staniak2018explanations)) and is known as the *Break Down* method, which uses a greedy strategy to identify and remove features iteratively based on their influence on the overall average predicted response.

### 16\.9\.1 Concept

The Break Down method provides two sequential approaches; the default is called *step up*. This procedure, essentially, takes the value for a given feature in the single observation of interest, replaces that feature’s values across all the observations in the training data set, and identifies how this affects the average predicted response. It performs this process iteratively and independently for each feature, identifies the column with the largest difference score, and adds that variable to the list as the most important. This feature’s signal is then removed (via randomization), and the procedure sweeps through the remaining predictors and applies the same process until all variables have been assessed.

```
existing_data = validation data set used in explainer
new_ob = single observation to perform local interpretation on
p = number of predictors
l = list of predictors
baseline = mean predicted response of existing_data

for variable i in {1,...,p} do
  for each variable j in l do
    | exchange variable j in existing_data with variable j value in new_ob
    | predicted_j = mean predicted response of altered existing_data
    | diff_j = absolute difference between baseline and predicted_j
    | reset existing_data
  end
  | t = variable j with largest diff value
  | contribution for variable t = diff value for variable t
  | remove variable t from l
end
```

**Algorithm 8:** A simple algorithm for computing Break Down values with the step up method.

An alternative approach is called the *step down* method, which follows a similar algorithm but rather than removing the variable with the largest difference score on each sweep, it removes the variable with the smallest difference score. Both approaches are analogous to backward stepwise selection, where *step up* removes variables with the largest impact and *step down* removes variables with the smallest impact.

### 16\.9\.2 Implementation

To perform the Break Down algorithm on a single observation, use the `DALEX::prediction_breakdown()` function. The output is a data frame with class `"prediction_breakdown_explainer"` that lists the contribution for each variable. Similar to Shapley values, the results display the contribution that each feature value for the given observation has on the difference between the overall average response (`Sale_Price` in this example) and the response for the given observation of interest. The default approach is ***step up*** but you can perform ***step down*** by specifying `direction = "down"`. If you look at the contribution output, note that the features are ordered by importance. Consequently, `Gr_Liv_Area` was identified as most influential followed by `Second_Flr_SF` and `Total_Bsmt_SF`. However, if you look at the contribution value, you will notice that `Second_Flr_SF` appears to have a larger contribution to the above average price than `Gr_Liv_Area`. This is because the `Second_Flr_SF` contribution is computed after `Gr_Liv_Area`’s contribution has already been accounted for.
The break down algorithm is the most computationally intense of all methods discussed in this chapter. Since the number of required iterations increases by \\(p \\times \\left(p\-1\\right)\\) for every additional feature, wider data sets cause this algorithm to become burdensome. For example, this single application took over 6 hours to compute!

```
high_breakdown <- prediction_breakdown(components_dalex, observation = high_ob)

# class of prediction_breakdown output
class(high_breakdown)
## [1] "prediction_breakdown_explainer" "data.frame"

# check out the top 10 influential variables for this observation
high_breakdown[1:10, 1:5]
##                                        variable contribution variable_name variable_value cummulative
## 1                                   (Intercept)    181338.96     Intercept              1    181338.9
## Gr_Liv_Area              + Gr_Liv_Area = 4316       46971.64   Gr_Liv_Area           4316    228310.5
## Second_Flr_SF          + Second_Flr_SF = 1872       52997.40 Second_Flr_SF           1872    281307.9
## Total_Bsmt_SF          + Total_Bsmt_SF = 2444       41339.89 Total_Bsmt_SF           2444    322647.8
## Overall_Qual  + Overall_Qual = Very_Excellent       47690.10  Overall_Qual Very_Excellent    370337.9
## First_Flr_SF            + First_Flr_SF = 2444       56780.92  First_Flr_SF           2444    427118.8
## Bsmt_Qual            + Bsmt_Qual = Excellent        49341.73     Bsmt_Qual      Excellent    476460.6
## Neighborhood      + Neighborhood = Northridge       54289.27  Neighborhood     Northridge    530749.8
## Garage_Cars                + Garage_Cars = 3        41959.23   Garage_Cars              3    572709.1
## Kitchen_Qual      + Kitchen_Qual = Excellent        59805.57  Kitchen_Qual      Excellent    632514.6
```

We can plot the entire list of contributions for each variable using `plot(high_breakdown)`.
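For intuition about what `prediction_breakdown()` is doing under the hood, here is a minimal, hedged sketch of the step up logic from Algorithm 8 written against a generic prediction function. The names `predict_fun`, `train`, and `new_ob` are placeholders, and the sketch fixes each chosen feature before the next sweep, which is one way to realize the sequential (cumulative) contributions described above; it is not the DALEX implementation.

```
# Illustrative sketch of the "step up" Break Down procedure (Algorithm 8).
# predict_fun(data) returns numeric predictions, train is the reference data,
# and new_ob is a single-row data frame for the observation being explained.
break_down_step_up <- function(predict_fun, train, new_ob) {
  baseline  <- mean(predict_fun(train))
  remaining <- names(train)
  current   <- train
  contribution <- numeric(0)

  while (length(remaining) > 0) {
    diffs <- sapply(remaining, function(j) {
      altered <- current
      altered[[j]] <- new_ob[[j]]          # impose the observation's value for j
      mean(predict_fun(altered)) - baseline
    })
    best <- names(which.max(abs(diffs)))   # feature with the largest effect
    contribution[best] <- diffs[best]
    current[[best]] <- new_ob[[best]]      # keep that feature fixed from now on
    baseline  <- mean(predict_fun(current))
    remaining <- setdiff(remaining, best)
  }
  contribution
}
```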
16\.10 Final thoughts
---------------------

Since this book focuses on hands\-on applications, we have covered only a small sliver of IML. IML is a rapidly expanding research space that covers many more topics, including moral and ethical considerations such as fairness, accountability, and transparency, along with many more analytic procedures to interpret model performance, sensitivity, bias identification, and more. Moreover, the above discussion only provides a high\-level understanding of the methods. To gain a deeper understanding of these methods and to learn more about the other areas of IML (like those not discussed in this book) we highly recommend Molnar and others ([2018](#ref-molnar2018interpretable)) and Hall, Patrick ([2018](#ref-awesomeIML)).
Chapter 17 Principal Components Analysis
========================================

Principal components analysis (PCA) is a method for finding low\-dimensional representations of a data set that retain as much of the original variation as possible. The idea is that each of the *n* observations lives in *p*\-dimensional space, but not all of these dimensions are equally interesting. In PCA we look for a smaller number of dimensions that are as interesting as possible, where the concept of *interesting* is measured by the amount that the observations vary along each dimension. Each of the new dimensions found in PCA is a linear combination of the original *p* features. The hope is to use a small subset of these linear feature combinations in further analysis while retaining most of the information present in the original data.

17\.1 Prerequisites
-------------------

This chapter leverages the following packages.

```
library(dplyr)    # basic data manipulation and plotting
library(ggplot2)  # data visualization
library(h2o)      # performing dimension reduction
```

To illustrate dimension reduction techniques, we’ll use the `my_basket` data set (Section [1\.4](intro.html#data)). This data set identifies items and quantities purchased for 2,000 transactions from a grocery store. The objective is to identify common groupings of items purchased together.

```
url <- "https://koalaverse.github.io/homlr/data/my_basket.csv"
my_basket <- readr::read_csv(url)
dim(my_basket)
## [1] 2000   42
```

To perform dimension reduction techniques in R, generally, the data should be prepared as follows:

1. Data are in tidy format per Wickham and others ([2014](#ref-wickham2014tidy));
2. Any missing values in the data must be removed or imputed;
3. Typically, the data must all be numeric values (e.g., one\-hot, label, or ordinal encoding of categorical features);
4. Numeric data should be standardized (e.g., centered and scaled) to make features comparable.

The `my_basket` data already fulfills these requirements. However, some of the packages we’ll use to perform dimension reduction tasks have built\-in capabilities to impute missing data, numerically encode categorical features (typically one\-hot encode), and standardize the features.

17\.2 The idea
--------------

Dimension reduction methods, such as PCA, focus on reducing the feature space, allowing most of the information or variability in the data set to be explained using fewer features; in the case of PCA, these new features will also be uncorrelated. For example, among the 42 variables within the `my_basket` data set, 23 combinations of variables have moderate correlation (\\(\\geq 0\.25\\)) with each other. Looking at the table below, we see that some of these combinations may be represented with smaller dimension categories (e.g., soda, candy, breakfast, and Italian food).

Table 17\.1: Various items in our my basket data that are correlated.
| Item 1 | Item 2 | Correlation | | --- | --- | --- | | cheese | mayonnaise | 0\.345 | | bulmers | fosters | 0\.335 | | cheese | bread | 0\.320 | | lasagna | pizza | 0\.316 | | pepsi | coke | 0\.309 | | red.wine | fosters | 0\.308 | | milk | muesli | 0\.302 | | mars | twix | 0\.301 | | red.wine | bulmers | 0\.298 | | bulmers | kronenbourg | 0\.289 | | milk | tea | 0\.288 | | red.wine | kronenbourg | 0\.286 | | 7up | coke | 0\.282 | | spinach | broccoli | 0\.282 | | mayonnaise | bread | 0\.278 | | peas | potatoes | 0\.271 | | peas | carrots | 0\.270 | | tea | instant.coffee | 0\.270 | | milk | instant.coffee | 0\.267 | | bread | lettuce | 0\.264 | | twix | kitkat | 0\.259 | | mars | kitkat | 0\.255 | | muesli | instant.coffee | 0\.251 | We often want to explain common attributes such as these in a lower dimensionality than the original data. For example, when we purchase soda we may often buy multiple types at the same time (e.g., Coke, Pepsi, and 7UP). We could reduce these variables to one *latent variable* (i.e., unobserved feature) called “soda”. This can help in describing many features in our data set and it can also remove multicollinearity, which can often improve predictive accuracy in downstream supervised models. So how do we identify variables that could be grouped into a lower dimension? One option includes examining pairwise scatterplots of each variable against every other variable and identifying co\-variation. Unfortunately, this is tedious and becomes excessive quickly even with a small number of variables (given \\(p\\) variables there are \\(p(p\-1\)/2\\) possible scatterplot combinations). For example, since the `my_basket` data has 42 numeric variables, we would need to examine \\(42(42\-1\)/2 \= 861\\) scatterplots! Fortunately, better approaches exist to help represent our data using a smaller dimension. The PCA method was first published in 1901 (Pearson [1901](#ref-pearson1901liii)) and has been a staple procedure for dimension reduction for decades. PCA examines the covariance among features and combines multiple features into a smaller set of uncorrelated variables. These new features, which are weighted combinations of the original predictor set, are called *principal components* (PCs) and hopefully a small subset of them explain most of the variability of the full feature set. The weights used to form the PCs reveal the relative contributions of the original features to the new PCs. 17\.3 Finding principal components ---------------------------------- The *first principal component* of a set of features \\(X\_1\\), \\(X\_2\\), …, \\(X\_p\\) is the linear combination of the features \\\[\\begin{equation} \\tag{17\.1} Z\_{1} \= \\phi\_{11}X\_{1} \+ \\phi\_{21}X\_{2} \+ ... \+ \\phi\_{p1}X\_{p}, \\end{equation}\\] that has the largest variance. Here \\(\\phi\_1 \= \\left(\\phi\_{11}, \\phi\_{21}, \\dots, \\phi\_{p1}\\right)\\) is the *loading vector* for the first principal component. The \\(\\phi\\) are *normalized* so that \\(\\sum\_{j\=1}^{p}{\\phi\_{j1}^{2}} \= 1\\). After the first principal component \\(Z\_1\\) has been determined, we can find the second principal component \\(Z\_2\\). The second principal component is the linear combination of \\(X\_1, \\dots , X\_p\\) that has maximal variance out of all linear combinations that are ***uncorrelated*** with \\(Z\_1\\): \\\[\\begin{equation} \\tag{17\.2} Z\_{2} \= \\phi\_{12}X\_{1} \+ \\phi\_{22}X\_{2} \+ ... 
\+ \\phi\_{p2}X\_{p} \\end{equation}\\] where again we define \\(\\phi\_2 \= \\left(\\phi\_{12}, \\phi\_{22}, \\dots, \\phi\_{p2}\\right)\\) as the loading vector for the second principal component. This process proceeds until all *p* principal components are computed.

So how do we calculate \\(\\phi\_1, \\phi\_2, \\dots, \\phi\_p\\) in practice? It can be shown, using techniques from linear algebra[45](#fn45), that the *eigenvector* corresponding to the largest *eigenvalue* of the feature covariance matrix is the set of loadings that explains the greatest proportion of feature variability.[46](#fn46)

An illustration provides a more intuitive grasp on principal components. Assume we have two features that have moderate (0\.56, say) correlation. We can explain the covariation of these variables in two dimensions (i.e., using PC 1 and PC 2\). We see that the greatest covariation falls along the first PC, which is simply the line that minimizes the total squared distance from each point to its *orthogonal projection* onto the line. Consequently, we can explain the vast majority (93% to be exact) of the variability between feature 1 and feature 2 using just the first PC.

Figure 17\.1: Principal components of two features that have 0\.56 correlation.

We can extend this to three variables, assessing the relationship among features 1, 2, and 3\. The first two PC directions span the plane that best fits the variability in the data. It minimizes the sum of squared distances from each point to the plane. As more dimensions are added, these visuals are not as intuitive, but we’ll see shortly how we can still use PCA to extract and visualize important information.

Figure 17\.2: Principal components of three features.

17\.4 Performing PCA in R
-------------------------

There are several built\-in and external packages to perform PCA in R. We recommend using **h2o** as it provides consistency across the dimension reduction methods we’ll discuss later and it also automates many of the data preparation steps previously discussed (i.e., standardizing numeric features, imputing missing values, and encoding categorical features). Let’s go ahead and start up **h2o**:

```
h2o.no_progress()  # turn off progress bars for brevity
h2o.init(max_mem_size = "5g")  # connect to H2O instance
```

First, we convert our `my_basket` data frame to an appropriate **h2o** object and then use `h2o.prcomp()` to perform PCA. A few of the important arguments you can specify in `h2o.prcomp()` include:

* `pca_method`: Character string specifying which PC method to use. There are actually a few different approaches to calculating principal components (PCs). When your data contain mostly numeric values (such as `my_basket`), it’s best to use `pca_method = "GramSVD"`. When your data contain many categorical variables (or just a few categorical variables with high cardinality) we recommend you use `pca_method = "GLRM"`.
* `k`: Integer specifying how many PCs to compute. It’s best to create the same number of PCs as there are features; we will see shortly how to identify a smaller number of PCs to retain.
* `transform`: Character string specifying how (if at all) your data should be standardized.
* `impute_missing`: Logical specifying whether or not to impute missing values; if your data have missing values, this will impute them with the corresponding column mean.
* `max_runtime_secs`: Number specifying the max run time (in seconds); when working with large data sets this will limit the runtime for model training. When your data contains mostly numeric data (such as `my_basket`), its best to use `pca_method = “GramSVD”`. When your data contain many categorical variables (or just a few categorical variables with high cardinality) we recommend to use `pca_method = “GLRM”`. ``` # convert data to h2o object my_basket.h2o <- as.h2o(my_basket) # run PCA my_pca <- h2o.prcomp( training_frame = my_basket.h2o, pca_method = "GramSVD", k = ncol(my_basket.h2o), transform = "STANDARDIZE", impute_missing = TRUE, max_runtime_secs = 1000 ) ``` Our model object (`my_pca`) contains several pieces of information that we can extract (you can view all information with `glimpse(my_pca)`). The most important information is stored in `my_pca@model$importance` (which is the same output that gets printed when looking at our object’s printed output). This information includes each PC, the standard deviation of each PC, as well as the proportion and cumulative proportion of variance explained with each PC. ``` my_pca ## Model Details: ## ============== ## ## H2ODimReductionModel: pca ## Model ID: PCA_model_R_1536152543598_1 ## Importance of components: ## pc1 pc2 pc3 pc4 pc5 pc6 pc7 pc8 pc9 ## Standard deviation 1.513919 1.473768 1.459114 1.440635 1.435279 1.411544 1.253307 1.026387 1.010238 ## Proportion of Variance 0.054570 0.051714 0.050691 0.049415 0.049048 0.047439 0.037400 0.025083 0.024300 ## Cumulative Proportion 0.054570 0.106284 0.156975 0.206390 0.255438 0.302878 0.340277 0.365360 0.389659 ## pc10 pc11 pc12 pc13 pc14 pc15 pc16 pc17 pc18 ## Standard deviation 1.007253 0.988724 0.985320 0.970453 0.964303 0.951610 0.947978 0.944826 0.932943 ## Proportion of Variance 0.024156 0.023276 0.023116 0.022423 0.022140 0.021561 0.021397 0.021255 0.020723 ## Cumulative Proportion 0.413816 0.437091 0.460207 0.482630 0.504770 0.526331 0.547728 0.568982 0.589706 ## pc19 pc20 pc21 pc22 pc23 pc24 pc25 pc26 pc27 ## Standard deviation 0.931745 0.924207 0.917106 0.908494 0.903247 0.898109 0.894277 0.876167 0.871809 ## Proportion of Variance 0.020670 0.020337 0.020026 0.019651 0.019425 0.019205 0.019041 0.018278 0.018096 ## Cumulative Proportion 0.610376 0.630713 0.650739 0.670390 0.689815 0.709020 0.728061 0.746339 0.764436 ## pc28 pc29 pc30 pc31 pc32 pc33 pc34 pc35 pc36 ## Standard deviation 0.865912 0.855036 0.845130 0.842818 0.837655 0.826422 0.818532 0.813796 0.804380 ## Proportion of Variance 0.017852 0.017407 0.017006 0.016913 0.016706 0.016261 0.015952 0.015768 0.015405 ## Cumulative Proportion 0.782288 0.799695 0.816701 0.833614 0.850320 0.866581 0.882534 0.898302 0.913707 ## pc37 pc38 pc39 pc40 pc41 pc42 ## Standard deviation 0.796073 0.793781 0.780615 0.778612 0.763433 0.749696 ## Proportion of Variance 0.015089 0.015002 0.014509 0.014434 0.013877 0.013382 ## Cumulative Proportion 0.928796 0.943798 0.958307 0.972741 0.986618 1.000000 ## ## ## H2ODimReductionMetrics: pca ## ## No model metrics available for PCA ``` Naturally, the first PC (PC1\) captures the most variance followed by PC2, then PC3, etc. We can identify which of our original features contribute to the PCs by assessing the loadings. The loadings for the first PC represent \\(\\phi\_{11}, \\phi\_{21}, \\dots, \\phi\_{p1}\\) in Equation [(17\.1\)](pca.html#eq:pca1). Thus, these loadings represent each features ***influence*** on the associated PC. 
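To connect the loadings back to Equation (17\.1\), the score of an observation on PC1 is just its standardized feature vector multiplied by the first loading vector. A small base\-R cross\-check with `prcomp()` is shown below; it is illustrative only, assumes `my_basket` is still in memory, and the signs of the loadings may be flipped relative to the **h2o** output.

```
# Base-R cross-check of the loading/score relationship (not the h2o workflow)
pca_base <- prcomp(my_basket, center = TRUE, scale. = TRUE)

phi1   <- pca_base$rotation[, 1]   # loading vector for the first PC
x1_std <- scale(my_basket)[1, ]    # first observation, standardized

sum(x1_std * phi1)                 # manual projection per Equation (17.1)
pca_base$x[1, 1]                   # matches the PC1 score stored by prcomp()

# Proportion of variance explained, comparable to my_pca@model$importance
head(pca_base$sdev^2 / sum(pca_base$sdev^2))
```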
If we plot the loadings for PC1 we see that the largest contributing features are mostly adult beverages (and apparently eating candy bars, smoking, and playing the lottery are also associated with drinking!).

```
my_pca@model$eigenvectors %>% 
  as.data.frame() %>% 
  mutate(feature = row.names(.)) %>%
  ggplot(aes(pc1, reorder(feature, pc1))) +
  geom_point()
```

Figure 17\.3: Feature loadings illustrating the influence that each variable has on the first principal component.

We can also compare PCs against one another. For example, Figure [17\.4](pca.html#fig:pc1-pc2-contributions) shows how the different features contribute to PC1 and PC2\. We can see distinct groupings of features and how they contribute to both PCs. For example, adult beverages (e.g., whiskey and wine) have a positive contribution to PC1 but have a smaller and negative contribution to PC2\. This means that transactions that include purchases of adult beverages tend to have larger than average values for PC1 but smaller than average for PC2\.

```
my_pca@model$eigenvectors %>% 
  as.data.frame() %>% 
  mutate(feature = row.names(.)) %>%
  ggplot(aes(pc1, pc2, label = feature)) +
  geom_text()
```

Figure 17\.4: Feature contribution for principal components one and two.

17\.5 Selecting the number of principal components
--------------------------------------------------

So far we have computed PCs and gained a little understanding of what the results initially tell us. However, a primary goal in PCA is dimension reduction (in this case, feature reduction). In essence, we want to come out of PCA with fewer components than original features, with the caveat that these components explain as much of the variation in our data as possible. But how do we decide how many PCs to keep? Do we keep the first 10, 20, or 40 PCs? There are three common approaches to help make this decision:

1. Eigenvalue criterion
2. Proportion of variance explained criterion
3. Scree plot criterion

### 17\.5\.1 Eigenvalue criterion

The sum of the eigenvalues is equal to the number of variables entered into the PCA; however, the eigenvalues will range from greater than one to near zero. An eigenvalue of 1 means that the principal component would explain about one variable’s worth of the variability. The rationale for using the eigenvalue criterion is that each component should explain at least one variable’s worth of the variability, and therefore, the eigenvalue criterion states that only components with eigenvalues greater than 1 should be retained.

`h2o.prcomp()` automatically computes the standard deviations of the PCs, which are equal to the square roots of the eigenvalues. Therefore, we can compute the eigenvalues easily and identify the PCs whose eigenvalues are greater than or equal to 1\. Consequently, using this criterion would have us retain the first 10 PCs in `my_basket` (see Figure [17\.5](pca.html#fig:eigen-criterion-plot)).

```
# Compute eigenvalues
eigen <- my_pca@model$importance["Standard deviation", ] %>%
  as.vector() %>%
  .^2
  
# Sum of all eigenvalues equals number of variables
sum(eigen)
## [1] 42

# Find PCs whose eigenvalue is greater than or equal to 1
which(eigen >= 1)
## [1]  1  2  3  4  5  6  7  8  9 10
```

Figure 17\.5: Eigenvalue criterion keeps all principal components whose eigenvalue is greater than or equal to one.
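Because the data were standardized, the eigenvalues above are simply the eigenvalues of the correlation matrix, so the criterion can be cross\-checked with base R (illustrative only, assuming `my_basket` is in memory):

```
# Eigenvalue criterion computed directly from the correlation matrix; with
# standardized features these match the squared standard deviations from
# h2o.prcomp() up to numerical precision.
lambda <- eigen(cor(my_basket))$values

sum(lambda)       # equals the number of variables (42)
sum(lambda >= 1)  # number of PCs retained under the eigenvalue criterion
```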
### 17\.5\.2 Proportion of variance explained criterion

The *proportion of variance explained* (PVE) identifies the optimal number of PCs to keep based on the total variability that we would like to account for. Mathematically, the PVE for the *m*\-th PC is calculated as:

\\[\\begin{equation} \\tag{17\.3} PVE \= \\frac{\\sum\_{i\=1}^{n}(\\sum\_{j\=1}^{p}{\\phi\_{jm}x\_{ij}})^{2}}{\\sum\_{j\=1}^{p}\\sum\_{i\=1}^{n}{x\_{ij}^{2}}} \\end{equation}\\]

`h2o.prcomp()` provides us with the PVE and also the cumulative variance explained (CVE), so we just need to extract this information and plot it (see Figure [17\.6](pca.html#fig:pve-cve-plot)).

```
# Extract and plot PVE and CVE
ve <- data.frame(
  PC  = my_pca@model$importance %>% seq_along(),
  PVE = my_pca@model$importance %>% .[2,] %>% unlist(),
  CVE = my_pca@model$importance %>% .[3,] %>% unlist()
)

ve %>%
  tidyr::gather(metric, variance_explained, -PC) %>%
  ggplot(aes(PC, variance_explained)) +
  geom_point() +
  facet_wrap(~ metric, ncol = 1, scales = "free")
```

Figure 17\.6: PVE criterion keeps all principal components that are above or equal to a pre\-specified threshold of total variability explained.

The first PC in our example explains 5\.46% of the feature variability, and the second principal component explains 5\.17%. Together, the first two PCs explain 10\.63% of the variability. Thus, if an analyst desires to choose the number of PCs required to explain at least 75% of the variability in our original data, then they would choose the first 27 components.

```
# How many PCs required to explain at least 75% of total variability
min(which(ve$CVE >= 0.75))
## [1] 27
```

What amount of variability is reasonable? This varies by application and the data being used. However, when the PCs are being used for descriptive purposes only, such as customer profiling, then the proportion of variability explained may be lower than otherwise. When the PCs are to be used as derived features for models downstream, then the PVE should be as much as can conveniently be achieved, given any constraints.

### 17\.5\.3 Scree plot criterion

A *scree plot* shows the eigenvalues or PVE for each individual PC. Most scree plots look broadly similar in shape, starting high on the left, falling rather quickly, and then flattening out at some point. This is because the first component usually explains much of the variability, the next few components explain a moderate amount, and the latter components only explain a small fraction of the overall variability. The scree plot criterion looks for the “elbow” in the curve and selects all components just before the line flattens out, which looks like eight in our example (see Figure [17\.7](pca.html#fig:pca-scree-plot-criterion)).

```
data.frame(
  PC  = my_pca@model$importance %>% seq_along,
  PVE = my_pca@model$importance %>% .[2,] %>% unlist()
) %>%
  ggplot(aes(PC, PVE, group = 1, label = PC)) +
  geom_point() +
  geom_line() +
  geom_text(nudge_y = -.002)
```

Figure 17\.7: Scree plot criterion looks for the ‘elbow’ in the curve and keeps all principal components before the line flattens out.

17\.6 Final thoughts
--------------------

So how many PCs should we use in the `my_basket` example? The frank answer is that there is no one best method for determining how many components to use. In this case, differing criteria suggest retaining 8 (scree plot criterion), 10 (eigenvalue criterion), or 27 (based on a 75% of variance explained requirement) components. The number you go with depends on your end objective and analytic workflow.
If we were merely trying to profile customers we would probably use 8 or 10; if we were performing dimension reduction to feed into a downstream predictive model we would likely retain 27 or more (the exact number being based on, for example, the CV results in the supervised modeling process). This is part of the challenge with unsupervised modeling: there is more subjectivity in modeling results and interpretation.

Traditional PCA has a few disadvantages worth keeping in mind. First, PCA can be highly affected by outliers. There have been many robust variants of PCA that act to iteratively discard data points that are poorly described by the initial components (see, for example, Luu, Blum, and Privé ([2019](#ref-R-pcadapt)) and Erichson, Zheng, and Aravkin ([2018](#ref-R-sparsepca))). In Chapter [18](GLRM.html#GLRM) we discuss an alternative dimension reduction procedure that takes outliers into consideration, and in Chapter [19](autoencoders.html#autoencoders) we illustrate a procedure to help identify outliers.

Also, note in Figures [17\.1](pca.html#fig:create-pca-image) and [17\.2](pca.html#fig:pca-3d-plot) that our PC directions are linear. Consequently, traditional PCA does not perform as well in very high dimensional space where complex nonlinear patterns often exist. Kernel PCA implements the kernel trick discussed in Chapter [14](svm.html#svm) and makes it possible to perform complex nonlinear projections for dimensionality reduction. See Karatzoglou, Smola, and Hornik ([2018](#ref-R-kernlab)) for an implementation of kernel PCA in R. Chapters [18](GLRM.html#GLRM) and [19](autoencoders.html#autoencoders) discuss two methods that allow us to reduce the feature space while also capturing nonlinearity.
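As a pointer for the kernel PCA option mentioned above, a minimal sketch with the **kernlab** package might look like the following. The RBF kernel and the `sigma` and `features` values are illustrative assumptions rather than tuned recommendations.

```
# Illustrative kernel PCA sketch with kernlab (not part of the h2o workflow)
library(kernlab)

kpc <- kpca(
  ~ .,                                   # use all columns of my_basket
  data     = as.data.frame(scale(my_basket)),
  kernel   = "rbfdot",                   # radial basis (Gaussian) kernel
  kpar     = list(sigma = 0.01),         # arbitrary placeholder bandwidth
  features = 10                          # number of nonlinear components
)

# Observations projected onto the nonlinear components
head(rotated(kpc)[, 1:3])
```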
Chapter 18 Generalized Low Rank Models ====================================== The PCs constructed in PCA are linear in nature, which can cause deficiencies in its performance. This is much like the deficiency that linear regression has in capturing nonlinear relationships. Alternative approaches, known as matrix factorization methods, have helped address this issue. More recently, however, a generalization of PCA and matrix factorization, called *generalized low rank models* (GLRMs) (Udell et al. [2016](#ref-udell2016generalized)), has become a popular approach to dimension reduction. 18\.1 Prerequisites ------------------- This chapter leverages the following packages: ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for data visualization library(tidyr) # for data reshaping # Modeling packages library(h2o) # for fitting GLRMs ``` To illustrate GLRM concepts, we’ll continue using the `my_basket` data set created in the previous chapter: ``` url <- "https://koalaverse.github.io/homlr/data/my_basket.csv" my_basket <- readr::read_csv(url) ``` 18\.2 The idea -------------- GLRMs reduce the dimension of a data set by producing a condensed vector representation for every row and column in the original data. Specifically, given a data set *A* with *m* rows and *n* columns, a GLRM consists of a decomposition of *A* into numeric matrices *X* and *Y*. The matrix *X* has the same number of rows as *A*, but only a small, *user\-specified* number of columns *k*. The matrix *Y* has *k* rows and *n* columns, where *n* is equal to the total dimension of the embedded features in *A*. For example, if *A* has 4 numeric columns and 1 categorical column with 3 distinct levels (e.g., red, blue, and green), then *Y* will have 7 columns (due to one\-hot encoding). When *A* contains only numeric features, the number of columns in *A* and *Y* is identical, as shown in Eq. [(18\.1\)](GLRM.html#eq:glrm). \\\[\\begin{equation} \\tag{18\.1} m \\Bigg \\{ \\overbrace{\\Bigg \[ \\quad A \\quad \\Bigg ]}^n \\hspace{0\.5em} \\approx \\hspace{0\.5em} m \\Bigg \\{ \\overbrace{\\Bigg \[ X \\Bigg ]}^k \\hspace{0\.5em} \\overbrace{\\big \[ \\quad Y \\quad \\big ]}^n \\big \\}k \\end{equation}\\] Both *X* and *Y* have practical interpretations. Each row of *Y* is an archetypal feature formed from the columns of *A*, and each row of *X* corresponds to a row of *A* projected onto this smaller dimensional feature space. We can approximately reconstruct *A* from the matrix product \\(X \\times Y\\), which has rank *k*. The number *k* is chosen to be much less than both *m* and *n* (e.g., for 1 million rows and 2,000 columns of numeric data, *k* could equal 15\). The smaller *k* is, the more compression we gain from our low rank representation. To make this more concrete, let’s look at an example using the `mtcars` data set (available from the built\-in **datasets** package) where we have 32 rows and 11 features (see `?datasets::mtcars` for details): ``` head(mtcars) ## mpg cyl disp hp drat wt qsec vs am gear carb ## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 ## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 ## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 ## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 ## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 ## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 ``` `mtcars` represents our original matrix *A*.
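Before walking through how a GLRM finds *X* and *Y*, here is a small supplementary sketch (an addition, not from the original text) that builds a rank\-3 approximation of `mtcars` with base R’s `svd()`; GLRMs generalize this same \\(A \\approx XY\\) idea to other loss functions, regularizers, and mixed data types.

```
# Rank-3 approximation of mtcars via SVD (illustrative sketch only)
A <- scale(as.matrix(mtcars))        # standardize the 32 x 11 matrix
s <- svd(A)                          # A = U D V'
k <- 3

X <- s$u[, 1:k] %*% diag(s$d[1:k])   # 32 x 3 matrix of "row archetypes"
Y <- t(s$v[, 1:k])                   # 3 x 11 matrix of "feature archetypes"
A_hat <- X %*% Y                     # rank-3 reconstruction of A

# Total squared reconstruction error; smaller k = more compression, more error
sum((A - A_hat)^2)
```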
If we want to reduce matrix *A* to a rank of \\(k \= 3\\), then our objective is to produce two matrices *X* and *Y* that, when multiplied together, produce a near approximation to the original values in *A*. We call the condensed columns and rows in matrices *X* and *Y*, respectively, “archetypes” because they are a representation of the original features and observations. The archetypes in *X* represent each observation projected onto the smaller dimensional space, and the archetypes in *Y* represent each feature projected onto the smaller dimensional space. Figure 18\.1: Example GLRM where we reduce the mtcars data set down to a rank of 3\. The resulting archetypes are similar in spirit to the PCs in PCA: they are a reduced feature set that represents our original features. In fact, if our features truly behave in a linear and orthogonal manner, then the archetypes produced by a GLRM will match the reduced feature set produced by PCA. However, if they are not linear, then GLRM will provide archetypes that are not necessarily orthogonal. Still, a few questions remain: 1. How does GLRM produce the archetype values? 2. How do you select the appropriate value for *k*? We’ll address these questions next. 18\.3 Finding the lower ranks ----------------------------- ### 18\.3\.1 Alternating minimization There are a number of methods available to identify the optimal archetype values for each element in *X* and *Y*; however, the most common is based on *alternating minimization*. Alternating minimization simply alternates between minimizing some loss function for each feature in *X* and *Y*. In essence, random values are initially set for the archetype values in *X* and *Y*. The loss function is computed (more on this shortly), and then the archetype values in *X* are slightly adjusted via gradient descent (Section [12\.2\.2](gbm.html#gbm-gradient)) and the improvement in the loss function is recorded. The archetype values in *Y* are then slightly adjusted and the improvement in the loss function is recorded. This process is continued until the loss function is optimized or some suitable stopping condition is reached. ### 18\.3\.2 Loss functions As stated above, the optimal archetype values are selected based on minimizing some loss function. The loss function should reflect the intuitive notion of what it means to “fit the data well”. The most common loss function is the *quadratic loss*. The quadratic loss is very similar to the SSE criterion (Section [2\.6](process.html#model-eval)) for supervised learning models where we seek to minimize the squared difference between the actual value in our original data (matrix *A*) and the predicted value based on our archetypal matrices (\\(X \\times Y\\)) (i.e., minimizing the squared residuals). \\\[\\begin{equation} \\tag{18\.2} \\text{quadratic loss} \= \\text{minimize} \\bigg\\{ \\sum^m\_{i\=1}\\sum^{n}\_{j\=1}\\left(A\_{i,j} \- X\_iY\_j\\right)^2 \\bigg\\} \\end{equation}\\] However, note that some loss functions are preferred over others in certain scenarios. For example, quadratic loss, similar to SSE, can be heavily influenced by outliers. If you do not want to emphasize outliers in your data set, or if you just want to try to minimize errors for lower values in addition to higher values (e.g., trying to treat low\-cost products equally as important as high\-cost products), then you can use the Huber loss function.
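The text does not write out the Huber loss; for reference, a standard form (added here as a point of comparison, with a threshold parameter \\(\\delta\\), commonly set to 1) applied to a single residual \\(r \= A\_{i,j} \- X\_iY\_j\\) is: \\\[ L\_{\\delta}(r) \= \\tfrac{1}{2}r^2 \\text{ if } |r| \\le \\delta, \\quad \\text{and} \\quad L\_{\\delta}(r) \= \\delta\\left(|r| \- \\tfrac{1}{2}\\delta\\right) \\text{ otherwise.} \\]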
In essence, the Huber loss applies quadratic loss to small errors and an absolute (linear) loss to errors with larger values. Figure [18\.2](GLRM.html#fig:quadratic-vs-huber) illustrates how the quadratic and Huber loss functions differ. Figure 18\.2: Huber loss (green) compared to quadratic loss (blue). The \\(x\\)\-axis represents a particular value at \\(A\_{i,j}\\) and the \\(y\\)\-axis represents the predicted value produced by \\(X\_iY\_j\\). Note how the Huber loss produces a linear loss while the quadratic loss produces much larger loss values as the residual value increases. As with supervised learning, the choice of loss function should be driven by the business problem. ### 18\.3\.3 Regularization Another important component to fitting GLRMs that you, the analyst, should consider is regularization. Much like the regularization discussed in Chapter [6](regularized-regression.html#regularized-regression), regularization applied to GLRMs can be used to constrain the size of the archetypal values in *X* (with \\(r\_x\\left(X\\right)\\) in the equation below) and/or *Y* (with \\(r\_y\\left(Y\\right)\\) in the equation below). This can help to create *sparse* *X* and/or *Y* matrices to mitigate the effect of problematic features in the data (e.g., multicollinearity or excessive noise), which can help prevent overfitting. If you’re using GLRMs to merely describe your data and gain a better understanding of how observations and/or features are similar, then you do not need to use regularization. If you are creating a model that will be used to assign new observations and/or features to these dimensions, or you want to use GLRMs for imputation, then you should use regularization as it can make your model generalize better to unseen data. \\\[\\begin{equation} \\tag{18\.3} \\text{regularization} \= \\text{minimize} \\bigg\\{ \\sum^m\_{i\=1}\\sum^{n}\_{j\=1}\\left(A\_{i,j} \- X\_iY\_j\\right)^2 \+ r\_x\\left(X\\right) \+ r\_y\\left(Y\\right) \\bigg\\} \\end{equation}\\] As the above equation illustrates, we can regularize both matrices *X* and *Y*. However, when performing dimension reduction we are mainly concerned with finding a condensed representation of the features, or columns. Consequently, we’ll be more concerned with regularizing the *Y* matrix (\\(r\_y\\left(Y\\right)\\)). This regularizer encourages the *Y* matrix to be column\-sparse so that many of the columns are all zero. Columns in *Y* that are all zero indicate that those features are likely uninformative in reproducing the original matrix *A*. Even when we are focusing on dimension reduction, applying regularization to the *X* matrix can still improve performance. Consequently, it is good practice to compare different approaches. There are several regularizers to choose from. You can use a ridge regularizer to retain all columns but force many of the values to be near zero. You can also use a LASSO regularizer, which will help zero out many of the columns; the LASSO helps you perform automated feature selection. The non\-negative regularizer can be used when your feature values should always be zero or positive (e.g., when performing market basket analysis). The primary purpose of the regularizer is to minimize overfitting. Consequently, a GLRM fit without a regularizer will nearly always achieve a lower loss than a regularized fit if you are only evaluating performance on the single data set used to train it.
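To make the regularizer options above concrete (these expressions are an addition, following common convention rather than the text), the ridge and LASSO penalties on \\(Y\\) are typically written with a magnitude parameter \\(\\gamma\\) (the role played by `gamma_y` in the code later in this chapter), while the non\-negative regularizer acts as a hard constraint that is zero when every entry of \\(Y\\) is non\-negative and infinite otherwise: \\\[ r\_y(Y) \= \\gamma \\sum\_{k,j} Y\_{k,j}^2 \\;\\;\\text{(ridge)}, \\qquad r\_y(Y) \= \\gamma \\sum\_{k,j} |Y\_{k,j}| \\;\\;\\text{(LASSO)}. \\] Analogous penalties \\(r\_x(X)\\) can be placed on *X*, with their magnitude controlled by `gamma_x`.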
The choice of regularization should be led by statistical considerations, so that the model generalizes well to unseen data. This means you should always incorporate some form of CV to assess the performance of regularization on unseen data. ### 18\.3\.4 Selecting *k* Lastly, how do we select the appropriate value for *k*? There are two main approaches, both of which will be illustrated in the section that follows. First, if you’re using GLRMs to describe your data, then you can use many of the same approaches we discussed in Section [17\.5](pca.html#pca-selecting-pcs) where we assess how different values of *k* minimize our loss function. If you are using GLRMs to produce a model that will be used to assign future observations to the reduced dimensions, then you should use some form of CV. 18\.4 Fitting GLRMs in R ------------------------ **h2o** is the preferred package for fitting GLRMs in R. In fact, a few of the key researchers that developed the GLRM methodology helped develop the **h2o** implementation as well. Let’s go ahead and start up **h2o**: ``` h2o.no_progress() # turn off progress bars h2o.init(max_mem_size = "5g") # connect to H2O instance ``` ### 18\.4\.1 Basic GLRM model First, we convert our `my_basket` data frame to an appropriate **h2o** object before calling `h2o.glrm()`. The following performs a basic GLRM analysis with a quadratic loss function. A few arguments that `h2o.glrm()` provides include: * `k`: the desired rank, which sets the reduced dimension size of the features. This is specified by you, the analyst, but is worth tuning to see which size `k` performs best. * `loss`: there are multiple loss functions to apply. The default is “quadratic”. * `regularization_x`: type of regularizer to apply to the *X* matrix. * `regularization_y`: type of regularizer to apply to the *Y* matrix. * `transform`: if your data are not already standardized, this will automate this process for you. You can also normalize, demean, and descale. * `max_iterations`: number of iterations to apply for the loss function to converge. Your goal should be to increase `max_iterations` until your loss function plot flatlines. * `seed`: allows for reproducibility. * `max_runtime_secs`: when working with large data sets this will limit the runtime for model training. There are additional arguments that are worth exploring as you become more comfortable with `h2o.glrm()`. Some of the more useful ones include the magnitude of the regularizer applied (`gamma_x`, `gamma_y`). If you’re working with ordinal features, then `multi_loss = "Ordinal"` may be more appropriate. If you’re working with very large data sets, then `min_step_size` can be adjusted to speed up the learning process. ``` # convert data to h2o object my_basket.h2o <- as.h2o(my_basket) # run basic GLRM basic_glrm <- h2o.glrm( training_frame = my_basket.h2o, k = 20, loss = "Quadratic", regularization_x = "None", regularization_y = "None", transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) ``` We can check the results with `summary()`. Here, we see that our model converged at 901 iterations and the final quadratic loss value (SSE) is 31,004\.59\.
We can also see how many iterations it took for our loss function to converge to its minimum: ``` # get top level summary information on our model summary(basic_glrm) ## Model Details: ## ============== ## ## H2ODimReductionModel: glrm ## Model Key: GLRM_model_R_1538746363268_1 ## Model Summary: ## number_of_iterations final_step_size final_objective_value ## 1 901 0.36373 31004.59190 ## ## H2ODimReductionMetrics: glrm ## ** Reported on training data. ** ## ## Sum of Squared Error (Numeric): 31004.59 ## Misclassification Error (Categorical): 0 ## Number of Numeric Entries: 84000 ## Number of Categorical Entries: 0 ## ## ## ## Scoring History: ## timestamp duration iterations step_size objective ## 1 2018-10-05 09:32:54 1.106 sec 0 0.66667 67533.03413 ## 2 2018-10-05 09:32:54 1.149 sec 1 0.70000 49462.95972 ## 3 2018-10-05 09:32:55 1.226 sec 2 0.46667 49462.95972 ## 4 2018-10-05 09:32:55 1.257 sec 3 0.31111 49462.95972 ## 5 2018-10-05 09:32:55 1.289 sec 4 0.32667 41215.38164 ## ## --- ## timestamp duration iterations step_size objective ## 896 2018-10-05 09:33:22 28.535 sec 895 0.28499 31004.59207 ## 897 2018-10-05 09:33:22 28.566 sec 896 0.29924 31004.59202 ## 898 2018-10-05 09:33:22 28.597 sec 897 0.31421 31004.59197 ## 899 2018-10-05 09:33:22 28.626 sec 898 0.32992 31004.59193 ## 900 2018-10-05 09:33:22 28.655 sec 899 0.34641 31004.59190 ## 901 2018-10-05 09:33:22 28.685 sec 900 0.36373 31004.59190 # Create plot to see if results converged - if it did not converge, # consider increasing iterations or using different algorithm plot(basic_glrm) ``` Figure 18\.3: Loss curve for our GLRM model. The model converged at 901 iterations. Our model object (`basic_glrm`) contains a lot of information (see everything it contains with `str(basic_glrm)`). Similar to `h2o.prcomp()` in the previous chapter, we can see how much variance each archetype (aka principal component) explains by looking at the `model$importance` component: ``` # amount of variance explained by each archetype (aka "pc") basic_glrm@model$importance ## Importance of components: ## pc1 pc2 pc3 pc4 pc5 pc6 pc7 ## Standard deviation 1.513919 1.473768 1.459114 1.440635 1.435279 1.411544 1.253307 ## Proportion of Variance 0.054570 0.051714 0.050691 0.049415 0.049048 0.047439 0.037400 ## Cumulative Proportion 0.054570 0.106284 0.156975 0.206390 0.255438 0.302878 0.340277 ## pc8 pc9 pc10 pc11 pc12 pc13 pc14 ## Standard deviation 1.026387 1.010238 1.007253 0.988724 0.985320 0.970453 0.964303 ## Proportion of Variance 0.025083 0.024300 0.024156 0.023276 0.023116 0.022423 0.022140 ## Cumulative Proportion 0.365360 0.389659 0.413816 0.437091 0.460207 0.482630 0.504770 ## pc15 pc16 pc17 pc18 pc19 pc20 ## Standard deviation 0.951610 0.947978 0.944826 0.932943 0.931745 0.924206 ## Proportion of Variance 0.021561 0.021397 0.021255 0.020723 0.020670 0.020337 ## Cumulative Proportion 0.526331 0.547728 0.568982 0.589706 0.610376 0.630713 ``` Consequently, we can use this information just like we did in the PCA chapter to determine how many components to keep (aka how large our *k* should be). For example, the following provides nearly the same results as we saw in Section [17\.5\.2](pca.html#PVE). When your data align with the linearity and orthogonality assumptions made by PCA, the default GLRM model will produce nearly the exact same results regarding variance explained. However, how features align to the archetypes will be different from how features align to the PCs in PCA.
``` data.frame( PC = basic_glrm@model$importance %>% seq_along(), PVE = basic_glrm@model$importance %>% .[2,] %>% unlist(), CVE = basic_glrm@model$importance %>% .[3,] %>% unlist() ) %>% gather(metric, variance_explained, -PC) %>% ggplot(aes(PC, variance_explained)) + geom_point() + facet_wrap(~ metric, ncol = 1, scales = "free") ``` Figure 18\.4: Variance explained by the first 20 archetypes in our GLRM model. We can also extract how each feature aligns to the different archetypes by looking at the `model$archetypes` component: ``` t(basic_glrm@model$archetypes)[1:5, 1:5] ## Arch1 Arch2 Arch3 Arch4 Arch5 ## 7up -0.5783538 -1.5705325 0.9906612 -0.9306704 0.17552643 ## lasagna 0.2196728 0.1213954 -0.7068851 0.8436524 3.56206178 ## pepsi -0.2504310 -0.8156136 -0.7669562 -1.2551630 -0.47632696 ## yop -0.1856632 0.4000083 -0.4855958 1.1598919 -0.26142763 ## redwine -0.1372589 -0.1059148 -0.9579530 0.4641668 -0.08539977 ``` We can use this information to see how the different features contribute to Archetype 1 or compare how features map to multiple Archetypes (similar to how we did this in the PCA chapter). The following shows that many liquid refreshments (e.g., instant coffee, tea, horlics, and milk) contribute positively to archetype 1\. We also see that some candy bars contribute strongly to archetype 2 but minimally, or negatively, to archetype 1\. The results are displayed in Figure [18\.5](GLRM.html#fig:glrm-plot-archetypes). ``` p1 <- t(basic_glrm@model$archetypes) %>% as.data.frame() %>% mutate(feature = row.names(.)) %>% ggplot(aes(Arch1, reorder(feature, Arch1))) + geom_point() p2 <- t(basic_glrm@model$archetypes) %>% as.data.frame() %>% mutate(feature = row.names(.)) %>% ggplot(aes(Arch1, Arch2, label = feature)) + geom_text() gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 18\.5: Feature contribution for archetype 1 and 2\. If we were to use the scree plot approach (Section [17\.5\.3](pca.html#scree)) to determine \\(k\\), we would decide on \\(k \= 8\\). Consequently, we would want to re\-run our model with \\(k \= 8\\). We could then use `h2o.reconstruct()` and apply our model to a data set to see the predicted values. Below we see that our predicted values include negative numbers and non\-integers. 
Considering our original data measures the counts of each product purchased, we would need to apply some additional rounding logic to convert values to integers: ``` # Re-run model with k = 8 k8_glrm <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "None", regularization_y = "None", transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # Reconstruct to see how well the model did my_reconstruction <- h2o.reconstruct(k8_glrm, my_basket.h2o, reverse_transform = TRUE) # Raw predicted values my_reconstruction[1:5, 1:5] ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0.025595726 -0.06657864 -0.03813350 -0.012225807 0.03814142 ## 2 -0.041778553 0.02401056 -0.05225379 -0.052248809 -0.05487031 ## 3 0.012373600 0.04849545 0.05760424 -0.009878976 0.02492625 ## 4 0.338875544 0.00577020 0.48763580 0.187669229 0.53358405 ## 5 0.003869531 0.05394523 0.07655745 -0.010977765 0.51779314 ## ## [5 rows x 5 columns] # Round values to whole integers my_reconstruction[1:5, 1:5] %>% round(0) ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0 0 0 0 0 ## 2 0 0 0 0 0 ## 3 0 0 0 0 0 ## 4 0 0 0 0 1 ## 5 0 0 0 0 1 ## ## [5 rows x 5 columns] ``` ### 18\.4\.2 Tuning to optimize for unseen data A more sophisticated use of GLRMs is to create a model where the reduced archetypes will be used on future, unseen data. When you are going to use a GLRM to score future observations, the preferred approach to deciding on a final model is to perform a validation process to select the optimally tuned model. This will help your final model generalize better to unseen data. As previously mentioned, when applying a GLRM model to unseen data, using a regularizer can help to reduce overfitting and help the model generalize better. Since our data represent all positive values (item purchases, which can be 0 or any positive integer), we apply the non\-negative regularizer. This will force all predicted values to at least be non\-negative. We see this when we use `predict()` on the results. If we compare the non\-regularized GLRM model (`k8_glrm`) to our regularized model (`k8_glrm_regularized`), you will notice that the non\-regularized model will almost always have a lower loss value. However, this is because the regularized model generalizes more and does not overfit our training data as much, which should help it improve on unseen data. ``` # Use non-negative regularization k8_glrm_regularized <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "NonNegative", regularization_y = "NonNegative", gamma_x = 0.5, gamma_y = 0.5, transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # Show predicted values predict(k8_glrm_regularized, my_basket.h2o)[1:5, 1:5] ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0.000000 0 0.0000000 0.0000000 0.0000000 ## 2 0.000000 0 0.0000000 0.0000000 0.0000000 ## 3 0.000000 0 0.0000000 0.0000000 0.0000000 ## 4 0.609656 0 0.6311428 0.4565658 0.6697422 ## 5 0.000000 0 0.0000000 0.0000000 0.8257210 ## ## [5 rows x 5 columns] # Compare regularized versus non-regularized loss par(mfrow = c(1, 2)) plot(k8_glrm) plot(k8_glrm_regularized) ``` Figure 18\.6: Loss curve for original GLRM model that does not include regularization (left) compared to a GLRM model with regularization (right). GLRM models behave much like supervised models in that there are several hyperparameters that can be tuned to optimize performance.
For example, we can choose from a combination of multiple regularizers, we can adjust the magnitude of the regularization (i.e., the `gamma_*` parameters), and we can even tune the rank \\(k\\). Unfortunately, **h2o** does not currently provide an automated tuning grid option, such as `h2o.grid()`, which can be applied to supervised learning models. To perform a grid search with GLRMs, we need to create our own custom process. First, we create training and validation sets so that we can use the validation data to see how well each hyperparameter setting does on unseen data. Next, we create a tuning grid that contains 225 combinations of hyperparameters. For this example, we’re going to assume we want \\(k \= 8\\) and we only want to tune the type and magnitude of the regularizers. Lastly, we create a `for` loop to go through each hyperparameter combination, apply the given model, assess the model’s performance on the hold\-out validation set, and extract the error metric. The squared error loss ranges from as high as 58,908 down to 13,371\. This is a significant reduction in error. We see that the best models all have errors in the 13,700\+ range and the majority of them have a large (signaled by `gamma_x`) L1 (LASSO) regularizer on the *X* matrix and also a non\-negative regularizer on the *Y* matrix. However, the magnitude of the *Y* matrix regularizers (signaled by `gamma_y`) has little to no impact. The following tuning and validation process took roughly 35 minutes to complete. ``` # Split data into train & validation split <- h2o.splitFrame(my_basket.h2o, ratios = 0.75, seed = 123) train <- split[[1]] valid <- split[[2]] # Create hyperparameter search grid params <- expand.grid( regularization_x = c("None", "NonNegative", "L1"), regularization_y = c("None", "NonNegative", "L1"), gamma_x = seq(0, 1, by = .25), gamma_y = seq(0, 1, by = .25), error = 0, stringsAsFactors = FALSE ) # Perform grid search for(i in seq_len(nrow(params))) { # Create model glrm_model <- h2o.glrm( training_frame = train, k = 8, loss = "Quadratic", regularization_x = params$regularization_x[i], regularization_y = params$regularization_y[i], gamma_x = params$gamma_x[i], gamma_y = params$gamma_y[i], transform = "STANDARDIZE", max_runtime_secs = 1000, seed = 123 ) # Predict on validation set and extract error validate <- h2o.performance(glrm_model, valid) params$error[i] <- validate@metrics$numerr } # Look at the top 10 models with the lowest error rate params %>% arrange(error) %>% head(10) ## regularization_x regularization_y gamma_x gamma_y error ## 1 L1 NonNegative 1.00 0.25 13731.81 ## 2 L1 NonNegative 1.00 0.50 13731.81 ## 3 L1 NonNegative 1.00 0.75 13731.81 ## 4 L1 NonNegative 1.00 1.00 13731.81 ## 5 L1 NonNegative 0.75 0.25 13746.77 ## 6 L1 NonNegative 0.75 0.50 13746.77 ## 7 L1 NonNegative 0.75 0.75 13746.77 ## 8 L1 NonNegative 0.75 1.00 13746.77 ## 9 L1 None 0.75 0.00 13750.79 ## 10 L1 L1 0.75 0.00 13750.79 ``` Once we identify the optimal model, we’ll want to re\-run this on the entire training data set. We can then score new, unseen observations with this model, which tells us, based on how their buying behavior aligns to the \\(k \= 8\\) dimensions in our model, what products they’re likely to buy and which would be good opportunities to market to them.
``` # Apply final model with optimal hyperparameters final_glrm_model <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "L1", regularization_y = "NonNegative", gamma_x = 1, gamma_y = 0.25, transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # New observations to score new_observations <- as.h2o(sample_n(my_basket, 2)) # Basic scoring predict(final_glrm_model, new_observations) %>% round(0) ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine reconstr_cheese reconstr_bbq reconstr_bulmers reconstr_mayonnaise reconstr_horlics reconstr_chicken-tikka reconstr_milk reconstr_mars reconstr_coke ## 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 0 -1 1 0 0 0 0 0 0 1 0 1 0 1 ## reconstr_lottery reconstr_bread reconstr_pizza reconstr_sunny-delight reconstr_ham reconstr_lettuce reconstr_kronenbourg reconstr_leeks reconstr_fanta reconstr_tea reconstr_whiskey reconstr_peas reconstr_newspaper ## 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 0 0 -1 0 0 0 0 0 0 1 0 0 0 ## reconstr_muesli reconstr_white-wine reconstr_carrots reconstr_spinach reconstr_pate reconstr_instant-coffee reconstr_twix reconstr_potatoes reconstr_fosters reconstr_soup reconstr_toad-in-hole reconstr_coco-pops ## 1 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 1 0 0 0 0 1 0 0 0 0 0 1 ## reconstr_kitkat reconstr_broccoli reconstr_cigarettes ## 1 0 0 0 ## 2 0 0 0 ``` 18\.5 Final thoughts -------------------- GLRMs are an extension of the well\-known matrix factorization methods such as PCA. While PCA is limited to numeric data, GLRMs can handle mixed numeric, categorical, ordinal, and boolean data with an arbitrary number of missing values. They allow the user to apply regularization to \\(X\\) and \\(Y\\), imposing restrictions like non\-negativity appropriate to a particular data science context. Thus, they are an extremely flexible approach for analyzing and interpreting heterogeneous data sets. Although this chapter focused on using GLRMs for dimension/feature reduction, GLRMs can also be used for clustering, missing data imputation, compute memory reduction, and speed improvements.
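As a brief, hedged illustration of the missing data imputation use case just mentioned (this example is an addition; it assumes the H2O GLRM implementation fits around missing entries, since the loss is only evaluated on observed cells, and that the reconstructed column keeps the `reconstr_` prefix seen above):

```
# Sketch: impute missing purchase counts with a GLRM
basket_missing <- my_basket
basket_missing[1:10, "coke"] <- NA      # artificially blank out a few entries
basket_missing.h2o <- as.h2o(basket_missing)

impute_glrm <- h2o.glrm(
  training_frame = basket_missing.h2o,
  k = 8, loss = "Quadratic",
  regularization_x = "None", regularization_y = "None",
  transform = "STANDARDIZE", max_iterations = 2000, seed = 123
)

# Reconstructed values can stand in for the blanked-out cells
filled <- h2o.reconstruct(impute_glrm, basket_missing.h2o, reverse_transform = TRUE)
filled[1:10, "reconstr_coke"]
```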
\\\[\\begin{equation} \\tag{18\.1} m \\Bigg \\{ \\overbrace{\\Bigg \[ \\quad A \\quad \\Bigg ]}^n \\hspace{0\.5em} \\approx \\hspace{0\.5em} m \\Bigg \\{ \\overbrace{\\Bigg \[ X \\Bigg ]}^k \\hspace{0\.5em} \\overbrace{\\big \[ \\quad Y \\quad \\big ]}^n \\big \\}k \\end{equation}\\] Both *X* and *Y* have practical interpretations. Each row of *Y* is an archetypal feature formed from the columns of *A*, and each row of *X* corresponds to a row of *A* projected onto this smaller dimensional feature space. We can approximately reconstruct *A* from the matrix product \\(X \\times Y\\), which has rank *k*. The number *k* is chosen to be much less than both *m* and *n* (e.g., for 1 million rows and 2,000 columns of numeric data, *k* could equal 15\). The smaller *k* is, the more compression we gain from our low rank representation. To make this more concrete, lets look at an example using the `mtcars` data set (available from the built\-in **datasets** package) where we have 32 rows and 11 features (see `?datasets::mtcars` for details): ``` head(mtcars) ## mpg cyl disp hp drat wt qsec vs am gear carb ## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 ## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4 ## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 ## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 ## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 ## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 ``` `mtcars` represents our original matrix *A*. If we want to reduce matrix *A* to a rank of \\(k \= 3\\) then our objective is to produce two matrices *X* and *Y* that when we multiply them together produce a near approximation to the original values in *A*. We call the condensed columns and rows in matrices *X* and *X*, respectively, “archetypes” because they are a representation of the original features and observations. The archetypes in *X* represent each observation projected onto the smaller dimensional space, and the archetypes in *Y* represent each feature projeced onto the smaller dimensional space. Figure 18\.1: Example GLRM where we reduce the mtcars data set down to a rank of 3\. The resulting archetypes are similar in spirit to the PCs in PCA; as they are a reduced feature set that represents our original features. In fact, if our features truly behave in a linear and orthogonal manner than our archetypes produced by a GLRM will produce the same reduced feature set as PCA. However, if they are not linear, then GLRM will provide archetypes that are not necessarily orthogonal. However, a few questions remain: 1. How does GLRM produce the archetype values? 2. How do you select the appropriate value for *k*? We’ll address these questions next. 18\.3 Finding the lower ranks ----------------------------- ### 18\.3\.1 Alternating minimization There are a number of methods available to identify the optimal archetype values for each element in *X* and *Y*; however, the most common is based on *alternating minimization*. Alternating minimization simply alternates between minimizing some loss function for each feature in *X* and *Y*. In essence, random values are initially set for the archetype values in *X* and *Y*. The loss function is computed (more on this shortly), and then the archetype values in *X* are slightly adjusted via gradient descent (Section [12\.2\.2](gbm.html#gbm-gradient)) and the improvement in the loss function is recorded. The archetype values in *Y* are then slightly adjusted and the improvement in the loss function is recorded. 
This process is continued until the loss function is optimized or some suitable stopping condition is reached. ### 18\.3\.2 Loss functions As stated above, the optimal archetype values are selected based on minimizing some loss function. The loss function should reflect the intuitive notion of what it means to “fit the data well”. The most common loss function is the *quadratic loss*. The quadratic loss is very similar to the SSE criterion (Section [2\.6](process.html#model-eval)) for supervised learning models where we seek to minimize the squared difference between the actual value in our original data (matrix *A*) and the predicted value based on our archetypal matrices (\\(X \\times Y\\)) (i.e., minimizing the squared residuals). \\\[\\begin{equation} \\tag{18\.2} \\text{quadratic loss} \= \\text{minimize} \\bigg\\{ \\sum^m\_{i\=1}\\sum^{n}\_{j\=1}\\left(A\_{i,j} \- X\_iY\_j\\right)^2 \\bigg\\} \\end{equation}\\] However, note that some loss functions are preferred over others in certain scenarios. For example, quadratic loss, similar to SSE, can be heavily influenced by outliers. If you do not want to emphasize outliers in your data set, or if you just want to try to minimize errors for lower values in addition to higher values (e.g., trying to treat low\-cost products equally as important as high\-cost products) then you can use the Huber loss function. For brevity we do not show the Huber loss equation but it essentially applies quadratic loss to small errors and uses the absolute value for errors with larger values. Figure [18\.2](GLRM.html#fig:quadratic-vs-huber) illustrates how the quadratic and Huber loss functions differ. Figure 18\.2: Huber loss (green) compared to quadratic loss (blue). The \\(x\\)\-axis represents a particular value at \\(A\_{i,j}\\) and the \\(y\\)\-axis represents the predicted value produced by \\(X\_iY\_j\\). Note how the Huber loss produces a linear loss while the quadratic loss produces much larger loss values as the residual value increases. As with supervised learning, the choice of loss function should be driven by the business problem. ### 18\.3\.3 Regularization Another important component to fitting GLRMs that you, the analyst, should consider is regularization. Much like the regularization discussed in Chapter [6](regularized-regression.html#regularized-regression), regularization applied to GLRMs can be used to constrain the size of the archetypal values in *X* (with \\(r\_x\\left(X\\right)\\) in the equation below) and/or *Y* (with \\(r\_y\\left(Y\\right)\\) in the equation below). This can help to create *sparse* *X* and/or *Y* matrices to mitigate the effect of negative features in the data (e.g., multicollinearity or excessive noise) which can help prevent overfitting. If you’re using GLRMs to merely describe your data and gain a better understanding of how observations and/or features are similar then you do not need to use regularization. If you are creating a model that will be used to assign new observations and/or features to these dimensions, or you want to use GLRMs for imputation then you should use regularization as it can make your model generalize better to unseen data. \\\[\\begin{equation} \\tag{18\.3} \\text{regularization} \= \\text{minimize} \\bigg\\{ \\sum^m\_{i\=1}\\sum^{n}\_{j\=1}\\left(A\_{i,j} \- X\_iY\_j\\right)^2 \+ r\_x\\left(X\\right) \+ r\_y\\left(Y\\right) \\bigg\\} \\end{equation}\\] As the above equation illustrates, we can regularize both matrices *X* and *Y*. 
However, when performing dimension reduction we are mainly concerned with finding a condensed representation of the features, or columns. Consequently, we’ll be more concerned with regularizing the *Y* matrix (\\(r\_y\\left(Y\\right)\\)). This regularizer encourages the *Y* matrix to be column\-sparse so that many of the columns are all zero. Columns in *Y* that are zero mean that those features are likely uninformative in reproducing the original matrix *A*. Even when we are focusing on dimension reduction, applying regularization to the *X* matrix can still improve performance. Consequently, it is good practice to compare different approaches. There are several regularizers to choose from. You can use a ridge regularizer to retain all columns but force many of the values to be near zero. You can also use a LASSO regularizer which will help zero out many of the columns; the LASSO helps you perform automated feature selection. The non\-negative regularizer can be used when your feature values should always be zero or positive (e.g., when performing market basket analysis). The primary purpose of the regularizer is to minimize overfitting. Consequently, performing GLRMs without a regularizer will nearly always perform better than when using a regularizer if you are only focusing on a single data set. The choice of regularization should be led by statistical considerations, so that the model generalizes well to unseen data. This means you should always incorporate some form of CV to assess the performance of regularization on unseen data. ### 18\.3\.4 Selecting *k* Lastly, how do we select the appropriate value for *k*? There are two main approaches, both of which will be illustrated in the section that follows. First, if you’re using GLRMs to describe your data, then you can use many of the same approaches we discussed in Section [17\.5](pca.html#pca-selecting-pcs) where we assess how different values of *k* minimize our loss function. If you are using GLRMs to produce a model that will be used to assign future observations to the reduced dimensions then you should use some form of CV. 
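Before turning to the **h2o** implementation, the following is a minimal base R sketch of the alternating minimization idea applied to the standardized `mtcars` data from Section 18\.2\. It is illustrative only: it uses closed\-form least squares updates rather than the small gradient steps described above, it applies no regularization, and the `quadratic_glrm()` helper is a hypothetical function written for this example rather than anything provided by a package.

```
# Minimal sketch: alternate between updating X (holding Y fixed) and
# Y (holding X fixed), each update minimizing the quadratic loss in Eq. (18.2)
quadratic_glrm <- function(A, k, iters = 100) {
  X <- matrix(rnorm(nrow(A) * k), ncol = k)   # random starting values
  Y <- matrix(rnorm(k * ncol(A)), nrow = k)
  for (i in seq_len(iters)) {
    X <- t(solve(Y %*% t(Y), Y %*% t(A)))     # least squares update of X
    Y <- solve(t(X) %*% X, t(X) %*% A)        # least squares update of Y
  }
  sum((A - X %*% Y)^2)                        # final quadratic loss
}

set.seed(123)
A <- scale(as.matrix(mtcars))
sapply(1:5, function(k) quadratic_glrm(A, k)) # loss shrinks as rank k grows
```

Because this sketch uses quadratic loss with no regularization, at convergence the product \\(X \\times Y\\) spans the same subspace a truncated PCA/SVD of the standardized data would give; the benefits of alternative losses and regularizers only appear once we move beyond this special case.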
18\.4 Fitting GLRMs in R ------------------------ **h2o** is the preferred package for fitting GLRMs in R. In fact, a few of the key researchers that developed the GLRM methodology helped develop the **h2o** implementation as well. Let’s go ahead and start up **h2o**: ``` h2o.no_progress() # turn off progress bars h2o.init(max_mem_size = "5g") # connect to H2O instance ``` ### 18\.4\.1 Basic GLRM model First, we convert our `my_basket` data frame to an appropriate **h2o** object before calling `h2o.glrm()`. The following performs a basic GLRM analysis with a quadratic loss function. A few arguments that `h2o.glrm()` provides include: * `k`: rank size desired, which declares the desired reduced dimension size of the features. This is specified by you, the analyst, but is worth tuning to see which size `k` performs best. * `loss`: there are multiple loss functions to apply. The default is “quadratic”. * `regularization_x`: type of regularizer to apply to the *X* matrix. * `regularization_y`: type of regularizer to apply to the *Y* matrix. * `transform`: if your data are not already standardized this will automate this process for you. You can also normalize, demean, and descale. * `max_iterations`: number of iterations to apply for the loss function to converge. Your goal should be to increase `max_iterations` until your loss function plot flatlines. * `seed`: allows for reproducibility. * `max_runtime_secs`: when working with large data sets this will limit the runtime for model training. There are additional arguments that are worth exploring as you become more comfortable with `h2o.glrm()`. Some of the more useful ones include the magnitude of the regularizer applied (`gamma_x`, `gamma_y`). 
If you’re working with ordinal features then `multi_loss = “Ordinal”` may be more appropriate. If you’re working with very large data sets than `min_step_size` can be adjusted to speed up the learning process. ``` # convert data to h2o object my_basket.h2o <- as.h2o(my_basket) # run basic GLRM basic_glrm <- h2o.glrm( training_frame = my_basket.h2o, k = 20, loss = "Quadratic", regularization_x = "None", regularization_y = "None", transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) ``` We can check the results with `summary()`. Here, we see that our model converged at 901 iterations and the final quadratic loss value (SSE) is 31,004\.59\. We can also see how many iterations it took for our loss function to converge to its minimum: ``` # get top level summary information on our model summary(basic_glrm) ## Model Details: ## ============== ## ## H2ODimReductionModel: glrm ## Model Key: GLRM_model_R_1538746363268_1 ## Model Summary: ## number_of_iterations final_step_size final_objective_value ## 1 901 0.36373 31004.59190 ## ## H2ODimReductionMetrics: glrm ## ** Reported on training data. ** ## ## Sum of Squared Error (Numeric): 31004.59 ## Misclassification Error (Categorical): 0 ## Number of Numeric Entries: 84000 ## Number of Categorical Entries: 0 ## ## ## ## Scoring History: ## timestamp duration iterations step_size objective ## 1 2018-10-05 09:32:54 1.106 sec 0 0.66667 67533.03413 ## 2 2018-10-05 09:32:54 1.149 sec 1 0.70000 49462.95972 ## 3 2018-10-05 09:32:55 1.226 sec 2 0.46667 49462.95972 ## 4 2018-10-05 09:32:55 1.257 sec 3 0.31111 49462.95972 ## 5 2018-10-05 09:32:55 1.289 sec 4 0.32667 41215.38164 ## ## --- ## timestamp duration iterations step_size objective ## 896 2018-10-05 09:33:22 28.535 sec 895 0.28499 31004.59207 ## 897 2018-10-05 09:33:22 28.566 sec 896 0.29924 31004.59202 ## 898 2018-10-05 09:33:22 28.597 sec 897 0.31421 31004.59197 ## 899 2018-10-05 09:33:22 28.626 sec 898 0.32992 31004.59193 ## 900 2018-10-05 09:33:22 28.655 sec 899 0.34641 31004.59190 ## 901 2018-10-05 09:33:22 28.685 sec 900 0.36373 31004.59190 # Create plot to see if results converged - if it did not converge, # consider increasing iterations or using different algorithm plot(basic_glrm) ``` Figure 18\.3: Loss curve for our GLRM model. The model converged at 901 iterations. Our model object (`basic_glrm`) contains a lot of information (see everything it contains with `str(basic_glrm)`). 
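For example, the scoring history behind the convergence plot above can be pulled out as a data frame with `h2o.scoreHistory()`; assuming the same column names shown in the `summary()` output, a quick numeric check that the objective has flattened out looks like this:

```
# Scoring history as a data frame (column names as in the summary output)
sh <- as.data.frame(h2o.scoreHistory(basic_glrm))
tail(sh[, c("iterations", "objective")])
```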
Similar to `h2o.pca()`, we can see how much variance each archetype (aka principal component) explains by looking at the `model$importance` component: ``` # amount of variance explained by each archetype (aka "pc") basic_glrm@model$importance ## Importance of components: ## pc1 pc2 pc3 pc4 pc5 pc6 pc7 ## Standard deviation 1.513919 1.473768 1.459114 1.440635 1.435279 1.411544 1.253307 ## Proportion of Variance 0.054570 0.051714 0.050691 0.049415 0.049048 0.047439 0.037400 ## Cumulative Proportion 0.054570 0.106284 0.156975 0.206390 0.255438 0.302878 0.340277 ## pc8 pc9 pc10 pc11 pc12 pc13 pc14 ## Standard deviation 1.026387 1.010238 1.007253 0.988724 0.985320 0.970453 0.964303 ## Proportion of Variance 0.025083 0.024300 0.024156 0.023276 0.023116 0.022423 0.022140 ## Cumulative Proportion 0.365360 0.389659 0.413816 0.437091 0.460207 0.482630 0.504770 ## pc15 pc16 pc17 pc18 pc19 pc20 ## Standard deviation 0.951610 0.947978 0.944826 0.932943 0.931745 0.924206 ## Proportion of Variance 0.021561 0.021397 0.021255 0.020723 0.020670 0.020337 ## Cumulative Proportion 0.526331 0.547728 0.568982 0.589706 0.610376 0.630713 ``` Consequently, we can use this information just like we did in the PCA chapter to determine how many components to keep (aka how large should our *k* be). For example, the following provides nearly the same results as we saw in Section [17\.5\.2](pca.html#PVE). When your data aligns to the linearity and orthogonal assumptions made by PCA, the default GLRM model will produce nearly the exact same results regarding variance explained. However, how features align to the archetypes will be different than how features align to the PCs in PCA. ``` data.frame( PC = basic_glrm@model$importance %>% seq_along(), PVE = basic_glrm@model$importance %>% .[2,] %>% unlist(), CVE = basic_glrm@model$importance %>% .[3,] %>% unlist() ) %>% gather(metric, variance_explained, -PC) %>% ggplot(aes(PC, variance_explained)) + geom_point() + facet_wrap(~ metric, ncol = 1, scales = "free") ``` Figure 18\.4: Variance explained by the first 20 archetypes in our GLRM model. We can also extract how each feature aligns to the different archetypes by looking at the `model$archetypes` component: ``` t(basic_glrm@model$archetypes)[1:5, 1:5] ## Arch1 Arch2 Arch3 Arch4 Arch5 ## 7up -0.5783538 -1.5705325 0.9906612 -0.9306704 0.17552643 ## lasagna 0.2196728 0.1213954 -0.7068851 0.8436524 3.56206178 ## pepsi -0.2504310 -0.8156136 -0.7669562 -1.2551630 -0.47632696 ## yop -0.1856632 0.4000083 -0.4855958 1.1598919 -0.26142763 ## redwine -0.1372589 -0.1059148 -0.9579530 0.4641668 -0.08539977 ``` We can use this information to see how the different features contribute to Archetype 1 or compare how features map to multiple Archetypes (similar to how we did this in the PCA chapter). The following shows that many liquid refreshments (e.g., instant coffee, tea, horlics, and milk) contribute positively to archetype 1\. We also see that some candy bars contribute strongly to archetype 2 but minimally, or negatively, to archetype 1\. The results are displayed in Figure [18\.5](GLRM.html#fig:glrm-plot-archetypes). ``` p1 <- t(basic_glrm@model$archetypes) %>% as.data.frame() %>% mutate(feature = row.names(.)) %>% ggplot(aes(Arch1, reorder(feature, Arch1))) + geom_point() p2 <- t(basic_glrm@model$archetypes) %>% as.data.frame() %>% mutate(feature = row.names(.)) %>% ggplot(aes(Arch1, Arch2, label = feature)) + geom_text() gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 18\.5: Feature contribution for archetype 1 and 2\. 
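If you prefer a ranked table to the dot plot, a small **dplyr** pipeline over the same `archetypes` component lists the features with the largest weights on the first archetype (showing ten rows is an arbitrary choice):

```
# Top 10 features contributing to archetype 1
t(basic_glrm@model$archetypes) %>%
  as.data.frame() %>%
  mutate(feature = row.names(.)) %>%
  arrange(desc(Arch1)) %>%
  select(feature, Arch1) %>%
  head(10)
```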
If we were to use the scree plot approach (Section [17\.5\.3](pca.html#scree)) to determine \\(k\\), we would decide on \\(k \= 8\\). Consequently, we would want to re\-run our model with \\(k \= 8\\). We could then use `h2o.reconstruct()` and apply our model to a data set to see the predicted values. Below we see that our predicted values include negative numbers and non\-integers. Considering our original data measures the counts of each product purchased we would need to apply some additional rounding logic to convert values to integers: ``` # Re-run model with k = 8 k8_glrm <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "None", regularization_y = "None", transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # Reconstruct to see how well the model did my_reconstruction <- h2o.reconstruct(k8_glrm, my_basket.h2o, reverse_transform = TRUE) # Raw predicted values my_reconstruction[1:5, 1:5] ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0.025595726 -0.06657864 -0.03813350 -0.012225807 0.03814142 ## 2 -0.041778553 0.02401056 -0.05225379 -0.052248809 -0.05487031 ## 3 0.012373600 0.04849545 0.05760424 -0.009878976 0.02492625 ## 4 0.338875544 0.00577020 0.48763580 0.187669229 0.53358405 ## 5 0.003869531 0.05394523 0.07655745 -0.010977765 0.51779314 ## ## [5 rows x 5 columns] # Round values to whole integers my_reconstruction[1:5, 1:5] %>% round(0) ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0 0 0 0 0 ## 2 0 0 0 0 0 ## 3 0 0 0 0 0 ## 4 0 0 0 0 1 ## 5 0 0 0 0 1 ## ## [5 rows x 5 columns] ``` ### 18\.4\.2 Tuning to optimize for unseen data A more sophisticated use of GLRMs is to create a model where the reduced archetypes will be used on future, unseen data. The preferred approach to deciding on a final model when you are going to use a GLRM to score future observations, is to perform a validation process to select the optimally tuned model. This will help your final model generalize better to unseen data. As previously mentioned, when applying a GLRM model to unseen data, using a regularizer can help to reduce overfitting and help the model generalize better. Since our data represents all positive values (items purchases which can be 0 or any positive integer), we apply the non\-negative regularizer. This will force all predicted values to at least be non\-negative. We see this when we use `predict()` on the results. If we compare the non\-regularized GLRM model (`k8_glrm`) to our regularized model (`k8_glrm_regularized`), you will notice that the non\-regularized model will almost always have a lower loss value. However, this is because the regularized model is being generalized more and is not overfitting to our training data, which should help improve on unseen data. 
``` # Use non-negative regularization k8_glrm_regularized <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "NonNegative", regularization_y = "NonNegative", gamma_x = 0.5, gamma_y = 0.5, transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # Show predicted values predict(k8_glrm_regularized, my_basket.h2o)[1:5, 1:5] ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine ## 1 0.000000 0 0.0000000 0.0000000 0.0000000 ## 2 0.000000 0 0.0000000 0.0000000 0.0000000 ## 3 0.000000 0 0.0000000 0.0000000 0.0000000 ## 4 0.609656 0 0.6311428 0.4565658 0.6697422 ## 5 0.000000 0 0.0000000 0.0000000 0.8257210 ## ## [5 rows x 5 columns] # Compare regularized versus non-regularized loss par(mfrow = c(1, 2)) plot(k8_glrm) plot(k8_glrm_regularized) ``` Figure 18\.6: Loss curve for original GLRM model that does not include regularization (left) compared to a GLRM model with regularization (right). GLRM models behave much like supervised models where there are several hyperparameters that can be tuned to optimize performance. For example, we can choose from a combination of multiple regularizers, we can adjust the magnitude of the regularization (i.e., the `gamma_*` parameters), and we can even tune the rank \\(k\\). Unfortunately, **h2o** does not currently provide an automated tuning grid option, such as `h2o.grid()` which can be applied to supervised learning models. To perform a grid search with GLRMs, we need to create our own custom process. First, we create training and validation sets so that we can use the validation data to see how well each hyperparameter setting does on unseen data. Next, we create a tuning grid that contains 225 combinations of hyperparameters. For this example, we’re going to assume we want \\(k \= 8\\) and we only want to tune the type and magnitude of the regularizers. Lastly, we create a `for` loop to go through each hyperparameter combination, apply the given model, assess on the model’s performance on the hold out validation set, and extract the error metric. The squared error loss ranges from as high as 58,908 down to 13,371\. This is a significant reduction in error. We see that the best models all have errors in the 13,700\+ range and the majority of them have a large (signaled by `gamma_x`) L1 (LASSO) regularizer on the *X* matrix and also a non\-negative regularizer on the *Y* matrix. However, the magnitude of the *Y* matrix regularizers (signaled by `gamma_y`) has little to no impact. The following tuning and validation process took roughly 35 minutes to complete. 
``` # Split data into train & validation split <- h2o.splitFrame(my_basket.h2o, ratios = 0.75, seed = 123) train <- split[[1]] valid <- split[[2]] # Create hyperparameter search grid params <- expand.grid( regularization_x = c("None", "NonNegative", "L1"), regularization_y = c("None", "NonNegative", "L1"), gamma_x = seq(0, 1, by = .25), gamma_y = seq(0, 1, by = .25), error = 0, stringsAsFactors = FALSE ) # Perform grid search for(i in seq_len(nrow(params))) { # Create model glrm_model <- h2o.glrm( training_frame = train, k = 8, loss = "Quadratic", regularization_x = params$regularization_x[i], regularization_y = params$regularization_y[i], gamma_x = params$gamma_x[i], gamma_y = params$gamma_y[i], transform = "STANDARDIZE", max_runtime_secs = 1000, seed = 123 ) # Predict on validation set and extract error validate <- h2o.performance(glrm_model, valid) params$error[i] <- validate@metrics$numerr } # Look at the top 10 models with the lowest error rate params %>% arrange(error) %>% head(10) ## regularization_x regularization_y gamma_x gamma_y error ## 1 L1 NonNegative 1.00 0.25 13731.81 ## 2 L1 NonNegative 1.00 0.50 13731.81 ## 3 L1 NonNegative 1.00 0.75 13731.81 ## 4 L1 NonNegative 1.00 1.00 13731.81 ## 5 L1 NonNegative 0.75 0.25 13746.77 ## 6 L1 NonNegative 0.75 0.50 13746.77 ## 7 L1 NonNegative 0.75 0.75 13746.77 ## 8 L1 NonNegative 0.75 1.00 13746.77 ## 9 L1 None 0.75 0.00 13750.79 ## 10 L1 L1 0.75 0.00 13750.79 ``` Once we identify the optimal model, we’ll want to re\-run this on the entire training data set. We can then score new unseen observations with this model, which tells us based on their buying behavior and how this behavior aligns to the \\(k \= 8\\) dimensions in our model, what products they’re likely to buy and would be good opportunities to market to them. ``` # Apply final model with optimal hyperparameters final_glrm_model <- h2o.glrm( training_frame = my_basket.h2o, k = 8, loss = "Quadratic", regularization_x = "L1", regularization_y = "NonNegative", gamma_x = 1, gamma_y = 0.25, transform = "STANDARDIZE", max_iterations = 2000, seed = 123 ) # New observations to score new_observations <- as.h2o(sample_n(my_basket, 2)) # Basic scoring predict(final_glrm_model, new_observations) %>% round(0) ## reconstr_7up reconstr_lasagna reconstr_pepsi reconstr_yop reconstr_red-wine reconstr_cheese reconstr_bbq reconstr_bulmers reconstr_mayonnaise reconstr_horlics reconstr_chicken-tikka reconstr_milk reconstr_mars reconstr_coke ## 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 0 -1 1 0 0 0 0 0 0 1 0 1 0 1 ## reconstr_lottery reconstr_bread reconstr_pizza reconstr_sunny-delight reconstr_ham reconstr_lettuce reconstr_kronenbourg reconstr_leeks reconstr_fanta reconstr_tea reconstr_whiskey reconstr_peas reconstr_newspaper ## 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 0 0 -1 0 0 0 0 0 0 1 0 0 0 ## reconstr_muesli reconstr_white-wine reconstr_carrots reconstr_spinach reconstr_pate reconstr_instant-coffee reconstr_twix reconstr_potatoes reconstr_fosters reconstr_soup reconstr_toad-in-hole reconstr_coco-pops ## 1 0 0 0 0 0 0 0 0 0 0 0 0 ## 2 1 0 0 0 0 1 0 0 0 0 0 1 ## reconstr_kitkat reconstr_broccoli reconstr_cigarettes ## 1 0 0 0 ## 2 0 0 0 ``` 
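One possible way to act on these scores (a sketch, not code from the chapter): line up the reconstructed values with the customer’s actual basket and surface the highest\-scoring products they have not already purchased. This assumes the `reconstr_*` columns come back in the same order as the input columns.

```
# Hypothetical follow-up: highest-scoring products not already purchased
scores <- as.data.frame(predict(final_glrm_model, new_observations))
basket <- as.data.frame(new_observations)
names(scores) <- names(basket)  # assumes identical column order

customer <- 1
data.frame(
  product = names(basket),
  owned   = unlist(basket[customer, ]),
  score   = unlist(scores[customer, ])
) %>%
  filter(owned == 0) %>%
  arrange(desc(score)) %>%
  head(5)
```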
18\.5 Final thoughts -------------------- GLRMs are an extension of the well\-known matrix factorization methods such as PCA. While PCA is limited to numeric data, GLRMs can handle mixed numeric, categorical, ordinal, and boolean data with an arbitrary number of missing values. 
They allow the user to apply regularization to \\(X\\) and \\(Y\\), imposing restrictions like non\-negativity appropriate to a particular data science context. Thus, GLRMs are an extremely flexible approach for analyzing and interpreting heterogeneous data sets. Although this chapter focused on using GLRMs for dimension/feature reduction, GLRMs can also be used for clustering, missing data imputation, compute memory reduction, and speed improvements.
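As a parting illustration of the imputation use case, the sketch below blanks out a random 5% of the `my_basket` entries, refits the final model specification from above, and reconstructs the missing cells. The 5% missingness rate and the reuse of the tuned hyperparameters are arbitrary choices for this example, not recommendations from the chapter.

```
# Sketch: GLRM-based reconstruction of artificially created missing values
set.seed(123)
basket_na <- as.matrix(my_basket)
holes <- sample(length(basket_na), size = floor(0.05 * length(basket_na)))
basket_na[holes] <- NA
basket_na.h2o <- as.h2o(as.data.frame(basket_na))

impute_glrm <- h2o.glrm(
  training_frame = basket_na.h2o,
  k = 8,
  loss = "Quadratic",
  regularization_x = "L1",
  regularization_y = "NonNegative",
  gamma_x = 1,
  gamma_y = 0.25,
  transform = "STANDARDIZE",
  max_iterations = 2000,
  seed = 123
)

# Reconstructed values (returned with a reconstr_ prefix) provide estimates
# for the cells we set to NA
imputed <- as.data.frame(
  h2o.reconstruct(impute_glrm, basket_na.h2o, reverse_transform = TRUE)
)
imputed[1:5, 1:5]
```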
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/autoencoders.html
Chapter 19 Autoencoders ======================= An autoencoder is a neural network that is trained to learn efficient representations of the input data (i.e., the features). Although a simple concept, these representations, called *codings*, can be used for a variety of dimension reduction needs, along with additional uses such as *anomaly detection* and *generative modeling*. Moreover, since autoencoders are, fundamentally, feedforward deep learning models (Chapter [13](deep-learning.html#deep-learning)), they come with all the benefits and flexibility that deep learning models provide. Autoencoders have been around for decades (e.g., LeCun ([1987](#ref-lecun1987modeles)); Bourlard and Kamp ([1988](#ref-bourlard1988auto)); Hinton and Zemel ([1994](#ref-hinton1994autoencoders))) and this chapter will discuss the most popular autoencoder architectures; however, this domain continues to expand quickly so we conclude the chapter by highlighting alternative autoencoder architectures that are worth exploring on your own. 19\.1 Prerequisites ------------------- For this chapter we’ll use the following packages: ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for data visualization # Modeling packages library(h2o) # for fitting autoencoders ``` To illustrate autoencoder concepts we’ll continue with the `mnist` data set from previous chapters: ``` mnist <- dslabs::read_mnist() names(mnist) ## [1] "train" "test" ``` Since we will be using **h2o** we’ll also go ahead and initialize our H2O session: ``` h2o.no_progress() # turn off progress bars h2o.init(max_mem_size = "5g") # initialize H2O instance ``` 19\.2 Undercomplete autoencoders -------------------------------- An autoencoder has a structure very similar to a feedforward neural network (aka multi\-layer perceptron—MLP); however, the primary difference when using in an unsupervised context is that the number of neurons in the output layer are equal to the number of inputs. Consequently, in its simplest form, an autoencoder is using hidden layers to try to re\-create the inputs. We can describe this algorithm in two parts: (1\) an *encoder* function (\\(Z\=f\\left(X\\right)\\)) that converts \\(X\\) inputs to \\(Z\\) codings and (2\) a *decoder* function (\\(X'\=g\\left(Z\\right)\\)) that produces a reconstruction of the inputs (\\(X'\\)). For dimension reduction purposes, the goal is to create a reduced set of codings that adequately represents \\(X\\). Consequently, we constrain the hidden layers so that the number of neurons is less than the number of inputs. An autoencoder whose internal representation has a smaller dimensionality than the input data is known as an *undercomplete autoencoder*, represented in Figure [19\.1](autoencoders.html#fig:undercomplete-architecture). This compression of the hidden layers forces the autoencoder to capture the most dominant features of the input data and the representation of these signals are captured in the codings. Figure 19\.1: Schematic structure of an undercomplete autoencoder with three fully connected hidden layers . 
To learn the neuron weights and, thus, the codings, the autoencoder seeks to minimize some loss function, such as mean squared error (MSE), that penalizes \\(X'\\) for being dissimilar from \\(X\\): \\\[\\begin{equation} \\tag{19\.1} \\text{minimize}\\enspace L \= f\\left(X, X'\\right) \\end{equation}\\] ### 19\.2\.1 Comparing PCA to an autoencoder When the autoencoder uses only linear activation functions (reference Section [13\.4\.2\.1](deep-learning.html#activations)) and the loss function is MSE, then it can be shown that the autoencoder reduces to PCA. When nonlinear activation functions are used, autoencoders provide nonlinear generalizations of PCA. The following demonstrates our first implementation of a basic autoencoder. When using **h2o** you use the same `h2o.deeplearning()` function that you would use to train a neural network; however, you need to set `autoencoder = TRUE`. We use a single hidden layer with only two codings. This is reducing 784 features down to two dimensions; although not very realistic, it allows us to visualize the results and gain some intuition on the algorithm. In this example we use a hyperbolic tangent activation function which has a nonlinear sigmoidal shape. To extract the reduced dimension codings, we use `h2o.deepfeatures()` and specify the layer of codings to extract. The MNIST data set is very sparse; in fact, over 80% of the elements in the MNIST data set are zeros. When you have sparse data such as this, using `sparse = TRUE` enables **h2o** to more efficiently handle the input data and speed up computation. ``` # Convert mnist features to an h2o input data set features <- as.h2o(mnist$train$images) # Train an autoencoder ae1 <- h2o.deeplearning( x = seq_along(features), training_frame = features, autoencoder = TRUE, hidden = 2, activation = 'Tanh', sparse = TRUE ) # Extract the deep features ae1_codings <- h2o.deepfeatures(ae1, features, layer = 1) ae1_codings ## DF.L1.C1 DF.L1.C2 ## 1 -0.1558956 -0.06456967 ## 2 0.3778544 -0.61518649 ## 3 0.2002303 0.31214266 ## 4 -0.6955515 0.13225607 ## 5 0.1912538 0.59865392 ## 6 0.2310982 0.20322605 ## ## [60000 rows x 2 columns] ``` The reduced codings we extract are sometimes referred to as deep features (DF) and they are similar in nature to the principal components for PCA and archetypes for GLRMs. In fact, we can project the MNIST response variable onto the reduced feature space and compare our autoencoder to PCA. Figure [19\.2](autoencoders.html#fig:pca-autoencoder-projection) illustrates how the nonlinearity of autoencoders can help to isolate the signals in the features better than PCA. Figure 19\.2: MNIST response variable projected onto a reduced feature space containing only two dimensions. PCA (left) forces a linear projection whereas an autoencoder with non\-linear activation functions allows a non\-linear projection. ### 19\.2\.2 Stacked autoencoders Autoencoders are often trained with only a single hidden layer; however, this is not a requirement. Just as we illustrated with feedforward neural networks, autoencoders can have multiple hidden layers. We refer to autoencoders with more than one layer as *stacked autoencoders* (or *deep autoencoders*). Adding additional layers to autoencoders can have advantages: the added depth can allow the codings to represent more complex, nonlinear relationships at a reduced computational cost. 
In fact, Hinton and Salakhutdinov ([2006](#ref-hinton2006reducing)) show that deeper autoencoders often yield better data compression than shallower or linear autoencoders. However, this is not always the case as we’ll see shortly. One must be careful not to make the autoencoder too complex and powerful, as you run the risk of nearly reconstructing the inputs perfectly while not identifying the salient features that generalize well. As you increase the depth of an autoencoder, the architecture typically follows a symmetrical pattern.[47](#fn47) For example, Figure [19\.3](autoencoders.html#fig:autoencodersymmetry) illustrates three different undercomplete autoencoder architectures exhibiting symmetric hidden layers. Figure 19\.3: As you add hidden layers to autoencoders, it is common practice to have symmetric hidden layer sizes between the encoder and decoder layers. So how does one find the right autoencoder architecture? We can use the same grid search procedures we’ve discussed throughout the supervised learning section of the book. To illustrate, the following code examines five undercomplete autoencoder architectures. In this example we find that less depth provides the optimal MSE as a single hidden layer with 100 deep features has the lowest MSE of 0\.007\. The following grid search took a little over 9 minutes. ``` # Hyperparameter search grid hyper_grid <- list(hidden = list( c(50), c(100), c(300, 100, 300), c(100, 50, 100), c(250, 100, 50, 100, 250) )) # Execute grid search ae_grid <- h2o.grid( algorithm = 'deeplearning', x = seq_along(features), training_frame = features, grid_id = 'autoencoder_grid', autoencoder = TRUE, activation = 'Tanh', hyper_params = hyper_grid, sparse = TRUE, ignore_const_cols = FALSE, seed = 123 ) # Print grid details h2o.getGrid('autoencoder_grid', sort_by = 'mse', decreasing = FALSE) ## H2O Grid Details ## ================ ## ## Grid ID: autoencoder_grid ## Used hyper parameters: ## - hidden ## Number of models: 5 ## Number of failed models: 0 ## ## Hyper-Parameter Search Summary: ordered by increasing mse ## hidden model_ids mse ## 1 [100] autoencoder_grid3_model_2 0.00674637890553651 ## 2 [300, 100, 300] autoencoder_grid3_model_3 0.00830502966843272 ## 3 [100, 50, 100] autoencoder_grid3_model_4 0.011215307972822733 ## 4 [50] autoencoder_grid3_model_1 0.012450109189122541 ## 5 [250, 100, 50, 100, 250] autoencoder_grid3_model_5 0.014410280145600972 ``` ### 19\.2\.3 Visualizing the reconstruction So how well does our autoencoder reconstruct the original inputs? The MSE provides us an overall error assessment but we can also directly compare the inputs and reconstructed outputs. Figure [19\.4](autoencoders.html#fig:reconstructed-images-plot) illustrates this comparison by sampling a few test images, predicting the reconstructed pixel values based on our optimal autoencoder, and plotting the original versus reconstructed digits. The objective of the autoencoder is to capture the salient features of the images where any differences should be negligible; Figure [19\.4](autoencoders.html#fig:reconstructed-images-plot) illustrates that our autoencoder does a pretty good job of this. 
``` # Get sampled test images index <- sample(1:nrow(mnist$test$images), 4) sampled_digits <- mnist$test$images[index, ] colnames(sampled_digits) <- paste0("V", seq_len(ncol(sampled_digits))) # Predict reconstructed pixel values best_model_id <- ae_grid@model_ids[[1]] best_model <- h2o.getModel(best_model_id) reconstructed_digits <- predict(best_model, as.h2o(sampled_digits)) names(reconstructed_digits) <- paste0("V", seq_len(ncol(reconstructed_digits))) combine <- rbind(sampled_digits, as.matrix(reconstructed_digits)) # Plot original versus reconstructed par(mfrow = c(1, 3), mar=c(1, 1, 1, 1)) layout(matrix(seq_len(nrow(combine)), 4, 2, byrow = FALSE)) for(i in seq_len(nrow(combine))) { image(matrix(combine[i, ], 28, 28)[, 28:1], xaxt="n", yaxt="n") } ``` Figure 19\.4: Original digits (left) and their reconstructions (right). 19\.3 Sparse autoencoders ------------------------- Sparse autoencoders are used to pull out the most influential feature representations. This is beneficial when trying to understand what are the most unique features of a data set. It’s useful when using autoencoders as inputs to downstream supervised models as it helps to highlight the unique signals across the features. Recall that neurons in a network are considered active if the threshold exceeds certain capacity. Since a Tanh activation function is S\-curved from \-1 to 1, we consider a neuron active if the output value is closer to 1 and inactive if its output is closer to \-1\.[48](#fn48) Incorporating *sparsity* forces more neurons to be inactive. This requires the autoencoder to represent each input as a combination of a smaller number of activations. To incorporate sparsity, we must first understand the actual sparsity of the coding layer. This is simply the average activation of the coding layer as a function of the activation used (\\(A\\)) and the inputs supplied (\\(X\\)) as illustrated in Equation [(19\.2\)](autoencoders.html#eq:autoencoder-sparsity). \\\[\\begin{equation} \\tag{19\.2} \\hat{\\rho} \= \\frac{1}{m}\\sum^m\_{i\=1}A(X) \\end{equation}\\] For our current `best_model` with 100 codings, the sparsity level is approximately zero: ``` ae100_codings <- h2o.deepfeatures(best_model, features, layer = 1) ae100_codings %>% as.data.frame() %>% tidyr::gather() %>% summarize(average_activation = mean(value)) ## average_activation ## 1 -0.00677801 ``` This means, on average, the coding neurons are active half the time which is illustrated in Figure [19\.5](autoencoders.html#fig:average-activation). Figure 19\.5: The average activation of the coding neurons in our default autoencoder using a Tanh activation function. Sparse autoencoders attempt to enforce the constraint \\(\\widehat{\\rho} \= \\rho\\) where \\(\\rho\\) is a *sparsity parameter*. This penalizes the neurons that are too active, forcing them to activate less. To achieve this we add an extra penalty term to our objective function in Equation [(19\.1\)](autoencoders.html#eq:autoencoder-objective). The most commonly used penalty is known as the *Kullback\-Leibler divergence* (KL divergence), which will measure the divergence between the target probability \\(\\rho\\) that a neuron in the coding layer will activate, and the actual probability as illustrated in Equation [(19\.3\)](autoencoders.html#eq:kl-divergence). 
\\\[\\begin{equation} \\tag{19\.3} \\sum \\rho \\log \\frac{\\rho}{\\hat\\rho} \+ (1\-\\rho) \\log \\frac{1 \- \\rho}{1 \- \\hat \\rho} \\end{equation}\\] This penalty term is commonly written as Equation [(19\.4\)](autoencoders.html#eq:kl-shorthand) \\\[\\begin{equation} \\tag{19\.4} \\sum \\sum \\text{KL} (\\rho \|\| \\hat{\\rho}). \\end{equation}\\] Similar to the ridge and LASSO penalties discussed in Section [6\.2](regularized-regression.html#why), we add this penalty to our objective function and incorporate a parameter (\\(\\beta\\)) to control the weight of the penalty. Consequently, our revised loss function with sparsity induced is \\\[\\begin{equation} \\tag{19\.5} \\text{minimize} \\left(L \= f(X, X') \+ \\beta \\sum \\text{KL} (\\rho \|\| \\hat{\\rho}) \\right). \\end{equation}\\] Assume we want to induce sparsity with our current autoencoder that contains 100 codings. We need to specify two parameters: \\(\\rho\\) and \\(\\beta\\). In this example, we’ll just induce a little sparsity and specify \\(\\rho \= \-0\.1\\) by including `average_activation = -0.1`. And since \\(\\beta\\) could take on multiple values we’ll do a grid search across different `sparsity_beta` values. Our results indicate that \\(\\beta \= 0\.01\\) performs best in reconstructing the original inputs. The weight that controls the relative importance of the sparsity loss (\\(\\beta\\)) is a hyperparameter that needs to be tuned. If this weight is too high, the model will stick closely to the target sparsity but suboptimally reconstruct the inputs. If the weight is too low, the model will mostly ignore the sparsity objective. A grid search helps to find the right balance. ``` # Hyperparameter search grid hyper_grid <- list(sparsity_beta = c(0.01, 0.05, 0.1, 0.2)) # Execute grid search ae_sparsity_grid <- h2o.grid( algorithm = 'deeplearning', x = seq_along(features), training_frame = features, grid_id = 'sparsity_grid', autoencoder = TRUE, hidden = 100, activation = 'Tanh', hyper_params = hyper_grid, sparse = TRUE, average_activation = -0.1, ignore_const_cols = FALSE, seed = 123 ) # Print grid details h2o.getGrid('sparsity_grid', sort_by = 'mse', decreasing = FALSE) ## H2O Grid Details ## ================ ## ## Grid ID: sparsity_grid ## Used hyper parameters: ## - sparsity_beta ## Number of models: 4 ## Number of failed models: 0 ## ## Hyper-Parameter Search Summary: ordered by increasing mse ## sparsity_beta model_ids mse ## 1 0.01 sparsity_grid_model_1 0.012982916169006953 ## 2 0.2 sparsity_grid_model_4 0.01321464889160263 ## 3 0.05 sparsity_grid_model_2 0.01337749148043942 ## 4 0.1 sparsity_grid_model_3 0.013516631653257992 ``` If we look at the average activation across our neurons now we see that it shifted to the left compared to Figure [19\.5](autoencoders.html#fig:average-activation); it is now \-0\.108 as illustrated in Figure [19\.6](autoencoders.html#fig:sparse-codings-images-plot). Figure 19\.6: The average activation of the coding neurons in our sparse autoencoder is now \-0\.108\. The amount of sparsity you apply is dependent on multiple factors. When using autoencoders for descriptive dimension reduction, the level of sparsity is dependent on the level of insight you want to gain behind the most unique statistical features. If you’re trying to understand the most essential characteristics that explain the features or images then a lower sparsity value is preferred. 
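As an aside, the KL divergence penalty in Equation [(19\.3\)](autoencoders.html#eq:kl-divergence) is straightforward to compute directly. The short sketch below is purely illustrative (it is not part of **h2o**'s implementation) and assumes the target sparsity \\(\\rho\\) and the observed average activation \\(\\hat{\\rho}\\) have been rescaled to lie strictly between 0 and 1, since the raw Tanh codings used in this chapter range from \-1 to 1.

```
# Illustrative helper (not part of h2o): the KL-divergence sparsity penalty of
# Equation (19.3) for a single coding neuron. Both rho and rho_hat are assumed
# to have been rescaled to the open interval (0, 1).
kl_sparsity_penalty <- function(rho, rho_hat) {
  rho * log(rho / rho_hat) + (1 - rho) * log((1 - rho) / (1 - rho_hat))
}

# The penalty is zero when the observed sparsity matches the target and grows
# as the coding layer drifts away from it, which is what nudges neurons toward
# deactivating during training.
kl_sparsity_penalty(rho = 0.1, rho_hat = c(0.1, 0.3, 0.5))
```

With that behavior in mind, the next comparison shows how different sparsity levels change the reconstructions themselves.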
For example, Figure [19\.7](autoencoders.html#fig:plot-sparsity-comparisons) compares the four sampled MNIST test digits with their reconstructions from a non\-sparse autoencoder (a single hidden layer of 100 codings with Tanh activation functions) and from a sparse autoencoder that constrains \\(\\rho \= \-0\.75\\). Adding sparsity helps to highlight the features that are driving the uniqueness of these sampled digits. This is most pronounced with the number 5, where the sparse autoencoder reveals that the primary focus is on the upper portion of the glyph. If you are using autoencoders as a feature engineering step prior to downstream supervised modeling, then the level of sparsity can be considered a hyperparameter that can be optimized with a search grid. Figure 19\.7: Original digits sampled from the MNIST test set (left), reconstruction of sampled digits with a non\-sparse autoencoder (middle), and reconstruction with a sparse autoencoder (right). In Section [19\.2](autoencoders.html#undercomplete-autoencoders), we discussed how an undercomplete autoencoder is used to constrain the number of codings to be less than the number of inputs. This constraint prevents the autoencoder from learning the identity function, which would just create a perfect mapping of inputs to outputs and not learn anything about the features’ salient characteristics. However, there are ways to prevent an autoencoder with more hidden units than inputs (known as an *overcomplete autoencoder*) from learning the identity function. Adding sparsity is one such approach (Poultney et al. [2007](#ref-poultney2007efficient); Lee, Ekanadham, and Ng [2008](#ref-lee2008sparse)), and another is to add randomness in the transformation from input to reconstruction, which we discuss next. 19\.4 Denoising autoencoders ---------------------------- The denoising autoencoder is a stochastic version of the autoencoder in which we train the autoencoder to reconstruct the input from a *corrupted* copy of the inputs. This forces the codings to learn more robust features of the inputs and prevents them from merely learning the identity function, even if the number of codings is greater than the number of inputs. We can think of a denoising autoencoder as having two objectives: (i) try to encode the inputs to preserve the essential signals, and (ii) try to undo the effects of a corruption process stochastically applied to the inputs of the autoencoder. The latter can only be done by capturing the statistical dependencies between the inputs. Combined, this denoising procedure allows us to implicitly learn useful properties of the inputs (Bengio et al. [2013](#ref-bengio2013generalized)). The corruption process typically follows one of two approaches. We can randomly set some of the inputs (as many as half of them) to zero or one; most commonly it is setting random values to zero to imply missing values (Vincent et al. [2008](#ref-vincent2008extracting)). This can be done by manually imputing zeros or ones into the inputs or by adding a dropout layer (reference Section [13\.7\.3](deep-learning.html#dl-regularization)) between the inputs and the first hidden layer. Alternatively, for continuous\-valued inputs, we can add pure Gaussian noise (Vincent [2011](#ref-vincent2011connection)). Figure [19\.8](autoencoders.html#fig:plot-corrupted-inputs) illustrates the differences between these two corruption options for a sampled input where 30% of the inputs were corrupted. 
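The corruption code itself is not shown in this chapter, but a minimal sketch of the two strategies just described might look like the following. The helper name, the 30% corruption proportion, and the Gaussian noise standard deviation are all assumptions made purely for illustration; the preprocessing used to build the figures may differ.

```
# Illustrative sketch only: corrupt a share of the pixel values either by
# zeroing them out ("on/off" imputation) or by adding Gaussian noise. The sd
# below is an arbitrary choice for 0-255 pixel intensities.
corrupt_inputs <- function(X, prop = 0.3, type = c("zero", "gaussian"), sd = 50) {
  type <- match.arg(type)
  idx <- sample(length(X), size = floor(prop * length(X)))
  if (type == "zero") {
    X[idx] <- 0
  } else {
    X[idx] <- X[idx] + rnorm(length(idx), mean = 0, sd = sd)
  }
  X
}

# A Gaussian-corrupted copy of the training images could then be converted
# with as.h2o() and supplied as the training frame for a denoising autoencoder.
corrupted_train <- corrupt_inputs(mnist$train$images, type = "gaussian")
```

An **h2o** frame built from a corrupted copy such as this is what the `inputs_currupted_gaussian` object used in the next code chunk represents.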
Figure 19\.8: Original digit sampled from the MNIST test set (left), corrupted data with on/off imputation (middle), and corrupted data with Gaussian imputation (right). Training a denoising autoencoder is nearly the same process as training a regular autoencoder. The only difference is we supply our corrupted inputs to `training_frame` and supply the non\-corrupted inputs to `validation_frame`. The following code illustrates where we supply `training_frame` with inputs that have been corrupted with Gaussian noise (`inputs_currupted_gaussian`) and supply the original input data frame (`features`) to `validation_frame`. The remaining process stays, essentially, the same. We see that the validation MSE is 0\.02 where in comparison our MSE of the same model without corrupted inputs was 0\.006\. ``` # Train a denoise autoencoder denoise_ae <- h2o.deeplearning( x = seq_along(features), training_frame = inputs_currupted_gaussian, validation_frame = features, autoencoder = TRUE, hidden = 100, activation = 'Tanh', sparse = TRUE ) # Print performance h2o.performance(denoise_ae, valid = TRUE) ## H2OAutoEncoderMetrics: deeplearning ## ** Reported on validation data. ** ## ## Validation Set Metrics: ## ===================== ## ## MSE: (Extract with `h2o.mse`) 0.02048465 ## RMSE: (Extract with `h2o.rmse`) 0.1431246 ``` Figure [19\.9](autoencoders.html#fig:plot-denoise-results) visualizes the effect of a denoising autoencoder. The left column shows a sample of the original digits, which are used as the validation data set. The middle column shows the Gaussian corrupted inputs used to train the model, and the right column shows the reconstructed digits after denoising. As expected, the denoising autoencoder does a pretty good job of mapping the corrupted data back to the original input. Figure 19\.9: Original digits sampled from the MNIST test set (left), corrupted input digits (middle), and reconstructed outputs (right). 19\.5 Anomaly detection ----------------------- We can also use autoencoders for anomaly detection (Sakurada and Yairi [2014](#ref-sakurada2014anomaly); Zhou and Paffenroth [2017](#ref-zhou2017anomaly)). Since the loss function of an autoencoder measures the reconstruction error, we can extract this information to identify those observations that have larger error rates. These observations have feature attributes that differ significantly from the other features. We might consider such features as anomalous, or outliers. To extract the reconstruction error with **h2o**, we use `h2o.anomaly()`. The following uses our undercomplete autoencoder with 100 codings from Section [19\.2\.2](autoencoders.html#stacked-autoencoders). We can see that the distribution of reconstruction errors range from near zero to over 0\.03 with the average error being 0\.006\. ``` # Extract reconstruction errors (reconstruction_errors <- h2o.anomaly(best_model, features)) ## Reconstruction.MSE ## 1 0.009879666 ## 2 0.006485201 ## 3 0.017470110 ## 4 0.002339352 ## 5 0.006077669 ## 6 0.007171287 ## ## [60000 rows x 1 column] # Plot distribution reconstruction_errors <- as.data.frame(reconstruction_errors) ggplot(reconstruction_errors, aes(Reconstruction.MSE)) + geom_histogram() ``` Figure 19\.10: Distribution of reconstruction errors. Figure [19\.11](autoencoders.html#fig:bad-reconstruction-errors-plot) illustrates the actual and reconstructed digits for the observations with the five worst reconstruction errors. 
It is fairly intuitive why these observations have such large reconstruction errors as the corresponding input digits are poorly written. Figure 19\.11: Original digits (left) and their reconstructions (right) for the observations with the five largest reconstruction errors. In addition to identifying outliers, we can also use anomaly detection to identify unusual inputs such as fraudulent credit card transactions and manufacturing defects. Often, when performing anomaly detection, we retrain the autoencoder on a subset of the inputs that we’ve determined are a good representation of high quality inputs. For example, we may include all inputs that achieved a reconstruction error within the 75\-th percentile and exclude the rest. We would then retrain an autoencoder, use that autoencoder on new input data, and if it exceeds a certain percentile declare the inputs as anomalous. However, deciding on the threshold that determines an input as anomalous is subjective and often relies on the business purpose. 19\.6 Final thoughts -------------------- As we mentioned at the beginning of this chapter, autoencoders are receiving a lot of attention and many advancements have been made over the past decade. We discussed a few of the fundamental implementations of autoencoders; however, more exist. The following is an incomplete list of alternative autoencoders that are worthy of your attention. * *Variational autoencoders* are a form of generative autoencoders, which means they can be used to create new instances that closely resemble the input data but are completely generated from the coding distributions (Doersch [2016](#ref-doersch2016tutorial)). * *Contractive autoencoders* constrain the derivative of the hidden layer(s) activations to be small with respect to the inputs. This has a similar effect as denoising autoencoders in the sense that small perturbations to the input are essentially considered noise, which makes our codings more robust (Rifai et al. [2011](#ref-rifai2011contractive)). * *Stacked convolutional autoencoders* are designed to reconstruct visual features processed through convolutional layers (Masci et al. [2011](#ref-masci2011stacked)). They do not require manual vectorization of the image so they work well if you need to do dimension reduction or feature extraction on realistic\-sized high\-dimensional images. * *Winner\-take\-all autoencoders* leverage only the top X% activations for each neuron, while the rest are set to zero (Makhzani and Frey [2015](#ref-makhzani2015winner)). This leads to sparse codings. This approach has also been adapted to work with convolutional autoencoders (Makhzani and Frey [2014](#ref-makhzani2014winner)). * *Adversarial autoencoders* train two networks \- a generator network to reconstruct the inputs similar to a regular autoencoder and then a discriminator network to compute where the inputs lie on a probabilistic distribution. Similar to variational autoencoders, adversarial autoencoders are often used to generate new data and have also been used for semi\-supervised and supervised tasks (Makhzani et al. [2015](#ref-makhzani2015adversarial)). 
Machine Learning
smithjd.github.io
https://smithjd.github.io/sql-pet/chapter-appendix-postresql-authentication.html
E Appendix C \- PostgreSQL Authentication ========================================= E.1 Introduction ---------------- PostgreSQL has a very robust and flexible set of authentication methods (PostgreSQL Global Development Group [2018](#ref-PGDG2018a)[a](#ref-PGDG2018a)). In most production environments, these will be managed by the database administrator (DBA) on a need\-to\-access basis. People and programs will be granted access only to a minimum set of capabilities required to function, and nothing more. In this book, we are using a PostgreSQL Docker image (Docker [2018](#ref-Docker2018)[d](#ref-Docker2018)). When we create a container from that image, we use its native mechanism to create the `postgres` database superuser with a password specified in an R environment file `~/.Renviron`. See [Securing and using your dbms log\-in credentials](chapter-dbms-login-credentials.html#chapter_dbms-login-credentials) for how we do this. What that means is that you are the DBA \- the database superuser \- for the PostgreSQL database cluster running in the container! You can create and destroy databases, schemas, tables, views, etc. You can also create and destroy users (called `roles` in PostgreSQL) and `GRANT` or `REVOKE` their privileges with great precision. You don’t have to do that to use this book. But if you want to experiment with it, feel free! E.2 Password authentication on the PostgreSQL Docker image ---------------------------------------------------------- Of the many PostgreSQL authentication mechanisms, the simplest that’s universally available is `password authentication` (PostgreSQL Global Development Group [2018](#ref-PGDG2018b)[c](#ref-PGDG2018b)). That’s what we use for the `postgres` database superuser, and what we recommend for any roles you may create. Once a role has been created, you need five items to open a connection to the PostgreSQL database cluster: 1. The `host`. This is a name or IP address that your network can access. In this book, with the database running in a Docker container, that’s usually `localhost`. 2. The `port`. This is the port the server is listening on. PostgreSQL’s default is `5432`; in this book we use `5439`. But in a secure environment, it will often be some random number to lower the chances that an attacker can find the database server. And if you have more than one server on the network, you’ll need to use different ports for each of them. 3. The `dbname` to connect to. This database must exist or the connection attempt will fail. 4. The `user`. This user must exist in the database cluster and be allowed to access the database. We are using the database superuser `postgres` in this book. 5. The `password`. This is set by the DBA for the user. In this book we use the password defined in [Securing and using your dbms log\-in credentials](chapter-dbms-login-credentials.html#chapter_dbms-login-credentials). E.3 Adding roles ---------------- As noted above, PostgreSQL has a very flexible, fine\-grained access permissions system. We can’t cover all of it; see PostgreSQL Global Development Group ([2018](#ref-PGDG2018c)[b](#ref-PGDG2018c)) for the full details. But we can give an example. ### E.3\.1 Setting up Docker First, we need to make sure we don’t have any other databases listening on port `5439`. 
``` library(tidyverse) ``` ``` ## ── Attaching packages ─────────────────────────── tidyverse 1.3.0 ── ``` ``` ## ✓ ggplot2 3.2.1 ✓ purrr 0.3.3 ## ✓ tibble 2.1.3 ✓ dplyr 0.8.3 ## ✓ tidyr 1.0.2 ✓ stringr 1.4.0 ## ✓ readr 1.3.1 ✓ forcats 0.4.0 ``` ``` ## ── Conflicts ────────────────────────────── tidyverse_conflicts() ── ## x dplyr::filter() masks stats::filter() ## x dplyr::lag() masks stats::lag() ``` ``` library(DBI) library(RPostgres) library(connections) ``` ``` sqlpetr::sp_check_that_docker_is_up() ``` ``` ## [1] "Docker is up but running no containers" ``` ``` sqlpetr::sp_docker_remove_container("cattle") ``` ``` ## [1] 0 ``` ``` # in case you've been doing things out of order, stop a container named 'adventureworks' if it exists: # sqlpetr::sp_docker_stop("adventureworks") ``` ### E.3\.2 Creating a new container We’ll create a “cattle” container with a default PostgreSQL 10 database cluster. ``` sqlpetr::sp_make_simple_pg("cattle") # con <- connection_open( # use in an interactive session con <- dbConnect( # use in other settings RPostgres::Postgres(), # without the following (and preceding) lines, # bigint become int64 which is a problem for ggplot bigint = "integer", host = "localhost", port = 5439, dbname = "postgres", user = "postgres", password = "postgres") ``` ### E.3\.3 Adding a role Now, let’s add a role. We’ll add a role that can log in and create databases, but isn’t a superuser. Since this is a demo and not a real production database cluster, we’ll specify a password in plaintext. And we’ll create a database for our new user. Create the role: ``` DBI::dbExecute( con, "CREATE ROLE charlie LOGIN CREATEDB PASSWORD 'chaplin';" ) ``` ``` ## [1] 0 ``` Create the database: ``` DBI::dbExecute( con, "CREATE DATABASE charlie OWNER = charlie") ``` ``` ## [1] 0 ``` ### E.3\.4 Did it work? ``` DBI::dbDisconnect(con) con <- sqlpetr::sp_get_postgres_connection( host = "localhost", port = 5439, dbname = "postgres", user = "charlie", password = "chaplin", seconds_to_test = 30 ) ``` OK, we can connect. Let’s do some stuff! ``` data("iris") ``` `dbCreateTable` creates the table with columns matching the data frame. But it does not send data to the table. ``` DBI::dbCreateTable(con, "iris", iris) ``` To send data, we use `dbAppendTable`. ``` DBI::dbAppendTable(con, "iris", iris) ``` ``` ## Warning: Factors converted to character ``` ``` ## [1] 150 ``` ``` DBI::dbListTables(con) ``` ``` ## [1] "iris" ``` ``` head(DBI::dbReadTable(con, "iris")) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ### E.3\.5 Disconnect and remove the container ``` DBI::dbDisconnect(con) # or if using the connections package, use: # connection_close(con) sqlpetr::sp_docker_remove_container("cattle") ``` ``` ## [1] 0 ``` 
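If you would rather clean up inside the database than discard the whole container, the sketch below shows what that could look like. It is not part of the workflow above (the container has already been removed at this point); it assumes the “cattle” container is still running and simply reuses the superuser connection helper shown earlier to drop the role and database we created.

```
# Sketch only: reconnect as the superuser and drop the role and its database
# instead of removing the container. Assumes the "cattle" container is still up.
con <- sqlpetr::sp_get_postgres_connection(
  host = "localhost", port = 5439, dbname = "postgres",
  user = "postgres", password = "postgres", seconds_to_test = 30
)
DBI::dbExecute(con, "DROP DATABASE IF EXISTS charlie;")
DBI::dbExecute(con, "DROP ROLE IF EXISTS charlie;")
DBI::dbDisconnect(con)
```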
When we create a container from that image, we use its native mechanism to create the `postgres` database superuser with a password specified in an R environment file `~/.Renviron`. See [Securing and using your dbms log\-in credentials](chapter-dbms-login-credentials.html#chapter_dbms-login-credentials) for how we do this. What that means is that you are the DBA \- the database superuser \- for the PostgreSQL database cluster running in the container! You can create and destroy databases, schemas, tables, views, etc. You can also create and destroy users \- called `roles` in PostgreSQL, and `GRANT` or `REVOKE` their privileges with great precision. You don’t have to do that to use this book. But if you want to experiment with it, feel free! E.2 Password authentication on the PostgreSQL Docker image ---------------------------------------------------------- Of the many PostgreSQL authentication mechanisms, the simplest that’s universallly available is `password authentication` (PostgreSQL Global Development Group [2018](#ref-PGDG2018b)[c](#ref-PGDG2018b)). That’s what we use for the `postgres` database superuser, and what we recommend for any roles you may create. Once a role has been created, you need five items to open a connection to the PostgreSQL database cluster: 1. The `host`. This is a name or IP address that your network can access. In this book, with the database running in a Docker container, that’s usually `localhost`. 2. The `port`. This is the port the server is listening on. It’s usually the default, `5439`, and that’s what we use. But in a secure environment, it will often be some random number to lower the chances that an attacker can find the database server. And if you have more than one server on the network, you’ll need to use different ports for each of them. 3. The `dbname` to connect to. This database must exist or the connection attempt will fail. 4. The `user`. This user must exist in the database cluster and be allowed to access the database. We are using the database superuser `postgres` in this book. 5. The `password`. This is set by the DBA for the user. In this book we use the password defined in [Securing and using your dbms log\-in credentials](chapter-dbms-login-credentials.html#chapter_dbms-login-credentials). E.3 Adding roles ---------------- As noted above, PostgreSQL has a very flexible fine\-grained access permissions system. We can’t cover all of it; see PostgreSQL Global Development Group ([2018](#ref-PGDG2018c)[b](#ref-PGDG2018c)) for the full details. But we can give an example. ### E.3\.1 Setting up Docker First, we need to make sure we don’t have any other databases listening on the default port `5439`. 
``` library(tidyverse) ``` ``` ## ── Attaching packages ─────────────────────────── tidyverse 1.3.0 ── ``` ``` ## ✓ ggplot2 3.2.1 ✓ purrr 0.3.3 ## ✓ tibble 2.1.3 ✓ dplyr 0.8.3 ## ✓ tidyr 1.0.2 ✓ stringr 1.4.0 ## ✓ readr 1.3.1 ✓ forcats 0.4.0 ``` ``` ## ── Conflicts ────────────────────────────── tidyverse_conflicts() ── ## x dplyr::filter() masks stats::filter() ## x dplyr::lag() masks stats::lag() ``` ``` library(DBI) library(RPostgres) library(connections) ``` ``` sqlpetr::sp_check_that_docker_is_up() ``` ``` ## [1] "Docker is up but running no containers" ``` ``` sqlpetr::sp_docker_remove_container("cattle") ``` ``` ## [1] 0 ``` ``` # in case you've been doing things out of order, stop a container named 'adventureworks' if it exists: # sqlpetr::sp_docker_stop("adventureworks") ``` ### E.3\.2 Creating a new container We’ll create a “cattle” container with a default PostgreSQL 10 database cluster. ``` sqlpetr::sp_make_simple_pg("cattle") # con <- connection_open( # use in an interactive session con <- dbConnect( # use in other settings RPostgres::Postgres(), # without the following (and preceding) lines, # bigint become int64 which is a problem for ggplot bigint = "integer", host = "localhost", port = 5439, dbname = "postgres", user = "postgres", password = "postgres") ``` ### E.3\.3 Adding a role Now, let’s add a role. We’ll add a role that can log in and create databases, but isn’t a superuser. Since this is a demo and not a real production database cluster, we’ll specify a password in plaintext. And we’ll create a database for our new user. Create the role: ``` DBI::dbExecute( con, "CREATE ROLE charlie LOGIN CREATEDB PASSWORD 'chaplin';" ) ``` ``` ## [1] 0 ``` Create the database: ``` DBI::dbExecute( con, "CREATE DATABASE charlie OWNER = charlie") ``` ``` ## [1] 0 ``` ### E.3\.4 Did it work? ``` DBI::dbDisconnect(con) con <- sqlpetr::sp_get_postgres_connection( host = "localhost", port = 5439, dbname = "postgres", user = "charlie", password = "chaplin", seconds_to_test = 30 ) ``` OK, we can connect. Let’s do some stuff! ``` data("iris") ``` `dbCreateTable` creates the table with columns matching the data frame. But it does not send data to the table. ``` DBI::dbCreateTable(con, "iris", iris) ``` To send data, we use `dbAppendTable`. ``` DBI::dbAppendTable(con, "iris", iris) ``` ``` ## Warning: Factors converted to character ``` ``` ## [1] 150 ``` ``` DBI::dbListTables(con) ``` ``` ## [1] "iris" ``` ``` head(DBI::dbReadTable(con, "iris")) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 1 5.1 3.5 1.4 0.2 setosa ## 2 4.9 3.0 1.4 0.2 setosa ## 3 4.7 3.2 1.3 0.2 setosa ## 4 4.6 3.1 1.5 0.2 setosa ## 5 5.0 3.6 1.4 0.2 setosa ## 6 5.4 3.9 1.7 0.4 setosa ``` ### E.3\.5 Disconnect and remove the container ``` DBI::dbDisconnect(con) # or if using the connections package, use: # connection_close(con) sqlpetr::sp_docker_remove_container("cattle") ``` ``` ## [1] 0 ```
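For reference, a plain `DBI` connection that spells out the five items from section E.2 looks roughly like the sketch below. It assumes a PostgreSQL container is still listening on port `5439` and that you keep the password in your `~/.Renviron` file as described in the credentials chapter; the environment variable name `SQLPET_PASSWORD` is only an illustration, so use whatever name you chose there.

```
library(DBI)
library(RPostgres)

# A minimal sketch of password authentication using the five items from E.2.
# The environment variable name is illustrative, not something the book's
# packages define -- use the entry you created in ~/.Renviron.
con <- DBI::dbConnect(
  RPostgres::Postgres(),
  host     = "localhost",                    # 1. host
  port     = 5439,                           # 2. port the container is mapped to
  dbname   = "postgres",                     # 3. an existing database
  user     = "postgres",                     # 4. an existing role
  password = Sys.getenv("SQLPET_PASSWORD")   # 5. password set by the DBA
)

DBI::dbDisconnect(con)
```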
Data Databases and Engineering
smithjd.github.io
https://smithjd.github.io/sql-pet/chapter-appendix-dplyr-to-postres-translation.html
G Appendix \_ Dplyr to SQL translations ======================================= > You may be interested in exactly how the DBI package translates R functions into their SQL quivalents – and in which functions are translated and which are not. > > This Appendix answers those questions. It is based on the work of Dewey Dunnington ([@paleolimbot](http://twitter.com/paleolimbot)) which he published here: > > > <https://apps.fishandwhistle.net/archives/1503> > > > [https://rud.is/b/2019/04/10/lost\-in\-sql\-translation\-charting\-dbplyr\-mapped\-sql\-function\-support\-across\-all\-backends/](https://rud.is/b/2019/04/10/lost-in-sql-translation-charting-dbplyr-mapped-sql-function-support-across-all-backends/) G.1 Overview ------------ These packages are called below: ``` library(tidyverse) library(dbplyr) library(gt) library(here) library(sqlpetr) ``` list the DBI functions that are available: ``` names(sql_translate_env(simulate_dbi())) ``` ``` ## [1] "-" ":" "!" "!=" ## [5] "(" "[" "[[" "{" ## [9] "*" "/" "&" "&&" ## [13] "%%" "%>%" "%in%" "^" ## [17] "+" "<" "<=" "==" ## [21] ">" ">=" "|" "||" ## [25] "$" "abs" "acos" "as_date" ## [29] "as_datetime" "as.character" "as.Date" "as.double" ## [33] "as.integer" "as.integer64" "as.logical" "as.numeric" ## [37] "as.POSIXct" "asin" "atan" "atan2" ## [41] "between" "bitwAnd" "bitwNot" "bitwOr" ## [45] "bitwShiftL" "bitwShiftR" "bitwXor" "c" ## [49] "case_when" "ceil" "ceiling" "coalesce" ## [53] "cos" "cosh" "cot" "coth" ## [57] "day" "desc" "exp" "floor" ## [61] "hour" "if" "if_else" "ifelse" ## [65] "is.na" "is.null" "log" "log10" ## [69] "mday" "minute" "month" "na_if" ## [73] "nchar" "now" "paste" "paste0" ## [77] "pmax" "pmin" "qday" "round" ## [81] "second" "sign" "sin" "sinh" ## [85] "sql" "sqrt" "str_c" "str_conv" ## [89] "str_count" "str_detect" "str_dup" "str_extract" ## [93] "str_extract_all" "str_flatten" "str_glue" "str_glue_data" ## [97] "str_interp" "str_length" "str_locate" "str_locate_all" ## [101] "str_match" "str_match_all" "str_order" "str_pad" ## [105] "str_remove" "str_remove_all" "str_replace" "str_replace_all" ## [109] "str_replace_na" "str_sort" "str_split" "str_split_fixed" ## [113] "str_squish" "str_sub" "str_subset" "str_to_lower" ## [117] "str_to_title" "str_to_upper" "str_trim" "str_trunc" ## [121] "str_view" "str_view_all" "str_which" "str_wrap" ## [125] "substr" "switch" "tan" "tanh" ## [129] "today" "tolower" "toupper" "trimws" ## [133] "wday" "xor" "yday" "year" ## [137] "cume_dist" "cummax" "cummean" "cummin" ## [141] "cumsum" "dense_rank" "first" "lag" ## [145] "last" "lead" "max" "mean" ## [149] "median" "min" "min_rank" "n" ## [153] "n_distinct" "nth" "ntile" "order_by" ## [157] "percent_rank" "quantile" "rank" "row_number" ## [161] "sum" "var" "cume_dist" "cummax" ## [165] "cummean" "cummin" "cumsum" "dense_rank" ## [169] "first" "lag" "last" "lead" ## [173] "max" "mean" "median" "min" ## [177] "min_rank" "n" "n_distinct" "nth" ## [181] "ntile" "order_by" "percent_rank" "quantile" ## [185] "rank" "row_number" "sum" "var" ``` ``` sql_translate_env(simulate_dbi()) ``` ``` ## <sql_variant> ## scalar: -, :, !, !=, (, [, [[, {, *, /, &, &&, %%, %>%, %in%, ^, +, ## scalar: <, <=, ==, >, >=, |, ||, $, abs, acos, as_date, as_datetime, ## scalar: as.character, as.Date, as.double, as.integer, as.integer64, ## scalar: as.logical, as.numeric, as.POSIXct, asin, atan, atan2, ## scalar: between, bitwAnd, bitwNot, bitwOr, bitwShiftL, bitwShiftR, ## scalar: bitwXor, c, case_when, ceil, ceiling, coalesce, cos, cosh, ## scalar: 
cot, coth, day, desc, exp, floor, hour, if, if_else, ifelse, ## scalar: is.na, is.null, log, log10, mday, minute, month, na_if, ## scalar: nchar, now, paste, paste0, pmax, pmin, qday, round, second, ## scalar: sign, sin, sinh, sql, sqrt, str_c, str_conv, str_count, ## scalar: str_detect, str_dup, str_extract, str_extract_all, ## scalar: str_flatten, str_glue, str_glue_data, str_interp, ## scalar: str_length, str_locate, str_locate_all, str_match, ## scalar: str_match_all, str_order, str_pad, str_remove, ## scalar: str_remove_all, str_replace, str_replace_all, ## scalar: str_replace_na, str_sort, str_split, str_split_fixed, ## scalar: str_squish, str_sub, str_subset, str_to_lower, str_to_title, ## scalar: str_to_upper, str_trim, str_trunc, str_view, str_view_all, ## scalar: str_which, str_wrap, substr, switch, tan, tanh, today, ## scalar: tolower, toupper, trimws, wday, xor, yday, year ## aggregate: cume_dist, cummax, cummean, cummin, cumsum, dense_rank, ## aggregate: first, lag, last, lead, max, mean, median, min, min_rank, n, ## aggregate: n_distinct, nth, ntile, order_by, percent_rank, quantile, ## aggregate: rank, row_number, sum, var ## window: cume_dist, cummax, cummean, cummin, cumsum, dense_rank, ## window: first, lag, last, lead, max, mean, median, min, min_rank, n, ## window: n_distinct, nth, ntile, order_by, percent_rank, quantile, ## window: rank, row_number, sum, var ``` ``` source(here("book-src", "dbplyr-sql-function-translation.R")) ``` ``` ## Warning: The `.drop` argument of `unnest()` is deprecated as of tidyr 1.0.0. ## All list-columns are now preserved. ## This warning is displayed once per session. ## Call `lifecycle::last_warnings()` to see where this warning was generated. ``` Each of the following dbplyr back ends may have a slightly different translation: ``` translations %>% filter(!is.na(sql)) %>% count(variant) ``` ``` ## # A tibble: 11 x 2 ## variant n ## <chr> <int> ## 1 access 193 ## 2 dbi 183 ## 3 hive 187 ## 4 impala 190 ## 5 mssql 196 ## 6 mysql 194 ## 7 odbc 186 ## 8 oracle 184 ## 9 postgres 204 ## 10 sqlite 183 ## 11 teradata 196 ``` Only one postgres translation produces an output: ``` psql <- translations %>% filter(!is.na(sql), variant == "postgres") %>% select(r, n_args, sql) %>% arrange(r) # sp_print_df(head(psql, n = 40)) sp_print_df(psql) ```
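If you want to spot-check how a single R expression will be translated for a particular back end, dbplyr can simulate a connection without any live database. This is a small illustrative sketch, not part of the book's pipeline; the column names (`title`, `first_name`, `last_name`) are made up.

```
library(dbplyr)

# Simulate a PostgreSQL backend and ask for one translation at a time.
# The column names here are purely illustrative.
translate_sql(substr(title, 1, 3), con = simulate_postgres())
translate_sql(paste0(first_name, " ", last_name), con = simulate_postgres())
```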
Data Databases and Engineering
ukgovdatascience.github.io
https://ukgovdatascience.github.io/rap_companion/exemplar.html
Chapter 4 Exemplar RAP ====================== Chapter [3](why.html#why) considered why RAP is a useful paradigm. In this Chapter we demonstrate a RAP package developed in collaboration with the Department for Culture Media and Sport (DCMS). This package enshrines all the pertinent business knowledge in one corpus. 4\.1 Package Purpose -------------------- In this exemplar project Matt Upson aimed for a high level of automation, both to demonstrate what is possible and because DCMS had a skilled data scientist on hand to maintain and develop the project. Nonetheless, in the course of the work, statisticians at DCMS continue to undertake training in R, and the [Better Use of Data Team](https://data.blog.gov.uk/) spent time ensuring that software development practices such as managing [software dependencies](https://www.gov.uk/service-manual/technology/managing-software-dependencies), [version control](https://www.gov.uk/service-manual/technology/maintaining-version-control-in-coding), [package development](http://r-pkgs.had.co.nz/), [unit testing](http://r-pkgs.had.co.nz/tests.html), style [guide](http://adv-r.had.co.nz/Style.html), [open by default](https://www.gov.uk/service-manual/technology/making-source-code-open-and-reusable) and [continuous integration](https://www.r-bloggers.com/continuous-integration-for-r-packages/) are embedded within the team that owns the publication. We’re continuing to support DCMS in the development of this prototype pipeline, with the expectation that it will be used operationally in 2017\. If you want to learn more about this project, the source code for the eesectors R package is maintained on [GitHub.com](https://github.com/ukgovdatascience/eesectors). The README provides instructions on how to test the package using the openly published data from the 2016 publication. 4\.2 Tidy data -------------- > Tidy data are all alike; every messy data is messy in its own way. \- Hadley Tolstoy What is the [simplest representation](http://vita.had.co.nz/papers/tidy-data.html) of the data possible? Prior to any analysis we must tidy our data: structuring our data to facilitate analysis. Tidy datasets are easy to manipulate, model and visualize, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table. You and your team trying to RAP should spend time reading this [paper](http://vita.had.co.nz/papers/tidy-data.pdf) and hold a seminar discussing it. It’s important to involve the analysts responsible for the traditional production of this report as they will be familiar with its inputs and outputs. With the heuristic of a tidy dataset in your mind, proceed, as a team, to look through the chapter or report you are attempting to produce using RAP. As you work through, note down what variables you would need to produce each table or figure: what would the input dataframe look like? (Say what you see.) After looking at all the figures and tables, is there one tidy dataset that could be used as input? Sketch out what it looks like. ### 4\.2\.1 eesectors tidy data We demonstrate this process using the DCMS publication; refer to [Chapter 3 \- GVA](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/544103/DCMS_Sectors_Economic_Estimates_-_August_2016.pdf). What data do you need to produce this table? Variables: Year, Sector, GVA What data do you need to produce this figure? The GVA of each Sector by Year.
Variables: Year, Sector, GVA What data do you need to produce this figure? Total GVA across all sectors. Variables: Year, Sector, GVA What data do you need to produce this figure? For each Year by Sector we need the GVA. Variables: Year, Sector, GVA ### 4\.2\.2 What does our eesectors tidy data look like? Remember, for tidy data: 1\. Each variable forms a column. 2\. Each observation forms a row. 3\. Each type of observational unit forms a table. Our tidy data is of the form **Year \- Sector \- GVA**: | Year | Sector | GVA | | --- | --- | --- | | 2010 | creative | 65188 | | 2010 | culture | 20291 | | 2010 | digital | 97303 | | 2011 | creative | 69398 | | 2011 | culture | 20954 | | 2011 | digital | 107303 | *This data is for demonstration purposes only.* #### 4\.2\.2\.1 Another worked example \- what does our SEN tidy data look like? We repeat the process above for a different publication related to [Special Educational Needs data](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/633031/SFR37_2017_Main_Text.pdf) to demonstrate the thought process. We suggest you attempt to do this independently without peeking at the solution below; that way you can test your understanding. Look at the final report; work through and think about what data you need to produce each figure or table (write out the variables then sketch the minimal tidy data set required to build it). Ideally there will be one minimal tidy data set that we can build as input for our functions to produce these figures, tables or statistics. If a report covers a broad topic it might not be possible to have one minimal tidy data set (it’s OK to have more than one). We can create our own [custom class](http://adv-r.had.co.nz/OO-essentials.html) of object to cope and keep things simple for the user of our package. Here we draw our tables in a pseudo csv format to digitise for sharing. Sketching with pencil and paper is also fine and much clearer! I also use shorthand for some of the variable names given in the publication. ##### 4\.2\.2\.1\.1 Figure A year, all students, total sen, sen without statement or EHC plan, sen with statement or EHC plan … ##### 4\.2\.2\.1\.2 Figure B This digs deeper than Fig A by counting and categorising students (converted into percentages) by their primary type of need. Thus our minimal table above will not meet the needs for Figure B. We’ll add in some example made\-up data to check I understand the data correctly (the type of the data is the important thing e.g. date, integer, string). It’s important here to have expert domain knowledge as one might misunderstand due to esoteric language use. year, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, 0, NA, NA, 3e6 2016, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 2017, … \*\* Question: using the data table above can you produce both Figure A and B? \*\* With our data structured like this we have all the data we need to produce Figure B and Figure A. ##### 4\.2\.2\.1\.3 Figure C Again we dig deeper and ask: what’s their school type? We don’t have this in our previous minimal data table so we need to include this variable in our dataframe.
year, school\_type, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, “State\-funded primary”, 0, NA, NA, 3e6 2016, “State\-funded primary”, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, “State\-funded primary”, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, “State\-funded primary”, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 … As you can imagine the table can end up being quite long! \*\* Question: using the data table above can you produce Figures A, B and C? \*\* Yes. Continue this thought process for the rest of the document. However, bear in mind that you have the added insight of where the data comes from and in what format; this might lead you to use more than one data class for the package. For example you could call the one we described above your “year\-sch\-sen” class, and have another data class dedicated to being the input for some of the other figures in the chapter. The data might come from an SQL query or a bunch of disparate spreadsheets. In the latter case we can write some functions to extract and combine the data into a minimal tidy data table for use in our package. See the eesectors [README](https://github.com/DCMSstats/eesectors/blob/master/README.md) for an example. ### 4\.2\.3 How to build your tidy data? With the minimal tidy dataset idea in place, you can begin to think about how you might construct this tidy dataset from the data stores you have available. As we are working in R we can formalise this minimal tidy dataset as a [class](http://adv-r.had.co.nz/OO-essentials.html). For our `eesectors` package we create our long data `year_sector_data` class as the fundamental input to create all our figures and tables for the output report. 4\.3 `eesectors` Package Exploration ------------------------------------ The following is an exploration of the `eesectors` package to help familiarise users with the key principles so that they can automate report production through package development in R using `knitr`. This examines the package in more detail than the README so that data scientists looking to implement RAP can note some of the characteristics of the code employed. ### 4\.3\.1 Installation The package can be installed using `devtools::install_github('ukgovdatascience/eesectors')`. Some users may not be able to use the `devtools::install_github()` commands as a result of network security settings. If this is the case, `eesectors` can be installed by downloading the [zip of the repository](https://github.com/ukgovdatascience/govstyle/archive/master.zip) and installing the package locally using `devtools::install_local(<path to zip file>)`. #### 4\.3\.1\.1 Version control As the code is stored on GitHub we can access the current master version as well as all [historic versions](https://github.com/ukgovdatascience/eesectors/releases). This allows me to reproduce a report from last year if required. I can look at what release version was used and install that accordingly using the [additional arguments](ftp://cran.r-project.org/pub/R/web/packages/githubinstall/vignettes/githubinstall.html) for `install_github` (a short sketch is given below). ### 4\.3\.2 Loading the package Installation means the package is on our computer but it is not loaded into the computer’s working memory. We also load any additional packages that might be useful for exploring the package or data therein.
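Picking up the version control point above: to reproduce an earlier publication you can install the exact release that was used by passing a `ref` to `install_github()`. This is a sketch only; the tag name `v1.0.0` is a placeholder, so check the repository's releases page for the real tags.

```
# Install a specific historic release rather than the current master.
# "v1.0.0" is a placeholder tag -- look up the real tag on the releases page.
devtools::install_github("ukgovdatascience/eesectors", ref = "v1.0.0")
```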
``` devtools::install_github('ukgovdatascience/eesectors') ``` ``` ## eesectors: Reproducible Analytical Pipeline (RAP) for the ## Economic Estimates for DCMS Sectors Statistical First Release ## (SFR). For more information visit: ## https://github.com/ukgovdatascience/eesectors ``` This makes all the functions within the package available for use. It also provides us with some R [data objects](https://github.com/ukgovdatascience/eesectors/tree/master/data), such as aggregated data sets ready for visualisations or analysis within the report. > Packages are the fundamental units of reproducible R code. They include reusable R functions, the documentation that describes how to use them, and sample data. \- Hadley Wickham ### 4\.3\.3 Explore the package A good place to start is the package [README](https://github.com/ukgovdatascience/eesectors). #### 4\.3\.3\.1 Status badges The [status badges](https://stackoverflow.com/questions/35563012/what-are-the-status-tags-like-build-passing) provide useful information. They are found in the top left of the README and should be green and say passing. This indicates that this package will run OK on Windows, Linux or Mac. Essentially the package is likely to build correctly on your machine when you install it. You can carry out these build tests locally using the [`devtools` package](https://github.com/hadley/devtools). #### 4\.3\.3\.2 Look at the output first If you go to Chapter 3 of the [DCMS publication](https://www.gov.uk/government/statistics/dcms-sectors-economic-estimates-2016) it is apparent that most of the content is either data tables of summary statistics or visualisations of the data. This makes automation particularly useful here and likely to yield time savings. Chapter 3 seems to be fairly typical in its length (if not a bit shorter than other chapters). This package seems to work by taking the necessary data inputs as arguments to a function and then outputting the relevant figures. The names of the functions match the figures they produce. Prior to this step we have to get the data in the correct format. If you look at the functions within the package in RStudio using the package navigator it is evident that there is a family of functions dedicated to reading Excel spreadsheets and collecting the data in a tidy .Rds format. These are given the function name\-prefix of `extract_` (try to give your functions [good names](http://adv-r.had.co.nz/Style.html)). The `GVA_by_sector_2016` object provides test data to work with during development. This will be important for the development of other packages for different reports. You need a precise understanding of how you go from raw data, to aggregated data (such as `GVA_by_sector_2016`), to the final figure. What are your inputs (arguments) and outputs? In some cases, where your master data is stored in a format that is particularly difficult for a machine to read, you may prefer to have a human do this extraction step. ``` dplyr::glimpse(GVA_by_sector_2016) ``` ``` ## Observations: 54 ## Variables: 3 ## $ sector <fctr> creative, culture, digital, gambling, sport, telecoms,... ## $ year <int> 2010, 2010, 2010, 2010, 2010, 2010, 2010, 2011, 2011, 2... ## $ GVA <dbl> 65188, 20291, 97303, 8407, 7016, 24738, 49150, 69398, 2... ``` ``` x <- GVA_by_sector_2016 ``` #### 4\.3\.3\.3 Automating QA Humans are not particularly good at Quality Assurance (QA); especially when working with massive spreadsheets, it’s easy for errors to creep in.
We can automate a lot of the sense checking and update this if things change or a human provides another creative test to use for sense checking. If you can describe the test to a colleague then you can code it. The author uses messages to tell us what checks are being conducted, or we can look at the body of the function if we are interested. This is useful if you are considering developing your own package; it will help you structure the messages that are useful for the user. ``` gva <- year_sector_data(GVA_by_sector_2016) ``` ``` ## Initiating year_sector_data class. ## ## ## Expects a data.frame with three columns: sector, year, and measure, where ## measure is one of GVA, exports, or enterprises. The data.frame should include ## historical data, which is used for checks on the quality of this year's data, ## and for producing tables and plots. More information on the format expected by ## this class is given by ?year_sector_data(). ``` ``` ## ## *** Running integrity checks on input dataframe (x): ``` ``` ## ## Checking input is properly formatted... ``` ``` ## Checking x is a data.frame... ``` ``` ## Checking x has correct columns... ``` ``` ## Checking x contains a year column... ``` ``` ## Checking x contains a sector column... ``` ``` ## Checking x does not contain missing values... ``` ``` ## Checking for the correct number of rows... ``` ``` ## ...passed ``` ``` ## ## ***Running statistical checks on input dataframe (x)... ## ## These tests are implemented using the package assertr see: ## https://cran.r-project.org/web/packages/assertr for more details. ``` ``` ## Checking years in a sensible range (2000:2020)... ``` ``` ## Checking sectors are correct... ``` ``` ## Checking for outliers (x_i > median(x) + 3 * mad(x)) in each sector timeseries... ``` ``` ## Checking sector timeseries: all_dcms ``` ``` ## Checking sector timeseries: creative ``` ``` ## Checking sector timeseries: culture ``` ``` ## Checking sector timeseries: digital ``` ``` ## Checking sector timeseries: gambling ``` ``` ## Checking sector timeseries: sport ``` ``` ## Checking sector timeseries: telecoms ``` ``` ## Checking sector timeseries: tourism ``` ``` ## Checking sector timeseries: UK ``` ``` ## ...passed ``` ``` ## Checking for outliers on a row by row basis using mahalanobis distance... ``` ``` ## Checking sector timeseries: all_dcms ``` ``` ## Checking sector timeseries: creative ``` ``` ## Checking sector timeseries: culture ``` ``` ## Checking sector timeseries: digital ``` ``` ## Checking sector timeseries: gambling ``` ``` ## Checking sector timeseries: sport ``` ``` ## Checking sector timeseries: telecoms ``` ``` ## Checking sector timeseries: tourism ``` ``` ## Checking sector timeseries: UK ``` ``` ## ...passed ``` This is a semi\-automated process, so the user should review the checks and ensure they cover the usual checks that would be conducted manually. If a new check or test becomes necessary then it should be implemented by changing the code. ``` body(year_sector_data) ``` ``` ## { ## message("Initiating year_sector_data class.\n\n\nExpects a data.frame with three columns: sector, year, and measure, where\nmeasure is one of GVA, exports, or enterprises. The data.frame should include\nhistorical data, which is used for checks on the quality of this year's data,\nand for producing tables and plots.
More information on the format expected by\nthis class is given by ?year_sector_data().") ## message("\n*** Running integrity checks on input dataframe (x):") ## message("\nChecking input is properly formatted...") ## message("Checking x is a data.frame...") ## if (!is.data.frame(x)) ## stop("x must be a data.frame") ## message("Checking x has correct columns...") ## if (length(colnames(x)) != 3) ## stop("x must have three columns: sector, year, and one of GVA, export, or x") ## message("Checking x contains a year column...") ## if (!"year" %in% colnames(x)) ## stop("x must contain year column") ## message("Checking x contains a sector column...") ## if (!"sector" %in% colnames(x)) ## stop("x must contain sector column") ## message("Checking x does not contain missing values...") ## if (anyNA(x)) ## stop("x cannot contain any missing values") ## message("Checking for the correct number of rows...") ## if (nrow(x) != length(unique(x$sector)) * length(unique(x$year))) { ## warning("x does not appear to be well formed. nrow(x) should equal\nlength(unique(x$sector)) * length(unique(x$year)). Check the of x.") ## } ## message("...passed") ## message("\n***Running statistical checks on input dataframe (x)...\n\n These tests are implemented using the package assertr see:\n https://cran.r-project.org/web/packages/assertr for more details.") ## value <- colnames(x)[(!colnames(x) %in% c("sector", "year"))] ## message("Checking years in a sensible range (2000:2020)...") ## assertr::assert_(x, assertr::in_set(2000:2020), ~year) ## message("Checking sectors are correct...") ## sectors_set <- c(creative = "Creative Industries", culture = "Cultural Sector", ## digital = "Digital Sector", gambling = "Gambling", sport = "Sport", ## telecoms = "Telecoms", tourism = "Tourism", all_dcms = "All DCMS sectors", ## perc_of_UK = "% of UK GVA", UK = "UK") ## assertr::assert_(x, assertr::in_set(names(sectors_set)), ## ~sector, error_fun = raise_issue) ## message("Checking for outliers (x_i > median(x) + 3 * mad(x)) in each sector timeseries...") ## series_split <- split(x, x$sector) ## lapply(X = series_split, FUN = function(x) { ## message("Checking sector timeseries: ", unique(x[["sector"]])) ## assertr::insist_(x, assertr::within_n_mads(3), lazyeval::interp(~value, ## value = as.name(value)), error_fun = raise_issue) ## }) ## message("...passed") ## message("Checking for outliers on a row by row basis using mahalanobis distance...") ## lapply(X = series_split, FUN = maha_check) ## message("...passed") ## structure(list(df = x, colnames = colnames(x), type = colnames(x)[!colnames(x) %in% ## c("year", "sector")], sector_levels = levels(x$sector), ## sectors_set = sectors_set, years = unique(x$year)), class = "year_sector_data") ## } ``` The function is structured to tell the user what check is being made and then running that check given the input `x`. If the input fails a check the function is stopped with a useful diagnostic message for the user. This is achieved using `if` and the opposite of the desired feature of `x`. ``` message("Checking x has correct columns...") if (length(colnames(x)) != 3) stop("x must have three columns: sector, year, and one of GVA, export, or x") ``` For example, if `x` does not have exactly three columns we `stop`. #### 4\.3\.3\.4 Output of this function The output object is different to the input as expected, yet it does contain the initial data. 
``` identical(gva$df, x) ``` ``` ## [1] TRUE ``` The rest of the list contains other details that could be changed at a later date if required, demonstrating defensive programming. For example, the sectors that are of interest to DCMS have changed and may change again. ``` ?year_sector_data ``` Let’s take a closer look at this function using the help and other standard function exploration functions. The help says it produces a custom class of object with five slots. ``` isS4(gva) ``` ``` ## [1] FALSE ``` ``` class(gva) ``` ``` ## [1] "year_sector_data" ``` It’s not actually an S4 object; by slots the author means a list of objects. This approach is sensible and easy to work with, as most users are familiar with [S3](http://adv-r.had.co.nz/S3.html). #### 4\.3\.3\.5 The input The input, which is likely a bunch of [not tidy or messy](https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html) spreadsheets, needs to be wrangled and aggregated (if necessary) for input into the functions prefixed by `figure`. ``` dplyr::glimpse(GVA_by_sector_2016) ``` ``` ## Observations: 54 ## Variables: 3 ## $ sector <fctr> creative, culture, digital, gambling, sport, telecoms,... ## $ year <int> 2010, 2010, 2010, 2010, 2010, 2010, 2010, 2011, 2011, 2... ## $ GVA <dbl> 65188, 20291, 97303, 8407, 7016, 24738, 49150, 69398, 2... ``` #### 4\.3\.3\.6 The R output > We build our functions to use the same simple, tidy, data. \- Matt Upson With the data in the appropriate form to be received as an argument or input for the `figure` family of functions, we can proceed to plot. ``` figure3.1(x = gva) ``` Again we can look at the details of the plot. We could change the body of the function to effect changes to the default plot, or we can pass additional `ggplot` arguments to it. Reading the code we see it filters the data, makes the variables it needs, refactors the `sector` variable and then plots it. ``` body(figure3.1) ``` ``` ## { ## out <- tryCatch(expr = { ## sectors_set <- x$sectors_set ## x <- dplyr::filter_(x$df, ~sector != "UK") ## x <- dplyr::mutate_(x, year = ~factor(year, levels = c(2016:2010))) ## x$sector <- factor(x = unname(sectors_set[as.character(x$sector)]), ## levels = rev(as.character(unname(sectors_set[levels(x$sector)])))) ## p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~GVA, x = ~sector, ## fill = ~year) + ggplot2::geom_bar(colour = "slategray", ## position = "dodge", stat = "identity") + ggplot2::coord_flip() + ## govstyle::theme_gov(base_colour = "black") + ggplot2::scale_fill_brewer(palette = "Blues") + ## ggplot2::ylab("Gross Value Added (£bn)") + ggplot2::theme(legend.position = "right", ## legend.key = ggplot2::element_blank()) + ggplot2::scale_y_continuous(labels = scales::comma) ## return(p) ## }, warning = function() { ## w <- warnings() ## warning("Warning produced running figure3.1():", w) ## }, error = function(e) { ## stop("Error produced running figure3.1():", e) ## }, finally = { ## }) ## } ``` We can inspect and change an argument if we feel inclined or if a new colour scheme becomes preferred, for example (a small sketch of customising the returned plot is given below). However, there is no `...` in the body of the function itself, so where does this argument get passed to? This all looks straightforward and we can inspect the other functions for generating the figures or plot output.
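Because `figure3.1()` returns a `ggplot` object, one way to tweak its appearance without editing the package at all is to add layers to the returned plot. A minimal sketch follows; the title text and legend position are arbitrary illustrations.

```
# figure3.1() returns a ggplot object, so layers added afterwards
# extend or override the defaults set inside the package.
p <- figure3.1(x = gva)
p +
  ggplot2::ggtitle("GVA by sector (demonstration data)") +
  ggplot2::theme(legend.position = "top")
```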
``` body(figure3.2) ``` ``` ## { ## out <- tryCatch(expr = { ## sectors_set <- x$sectors_set ## x <- dplyr::filter_(x$df, ~sector %in% c("UK", "all_dcms")) ## x$sector <- factor(x = unname(sectors_set[as.character(x$sector)])) ## x <- dplyr::group_by_(x, ~sector) ## x <- dplyr::mutate_(x, index = ~max(ifelse(year == 2010, ## GVA, 0)), indexGVA = ~GVA/index * 100) ## p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~indexGVA, ## x = ~year, colour = ~sector, linetype = ~sector) + ## ggplot2::geom_path(size = 1.5) + govstyle::theme_gov(base_colour = "black") + ## ggplot2::scale_colour_manual(values = unname(govstyle::gov_cols[c("red", ## "purple")])) + ggplot2::ylab("GVA Index: 2010=100") + ## ggplot2::theme(legend.position = "bottom", legend.key = ggplot2::element_blank()) + ## ggplot2::ylim(c(80, 130)) ## return(p) ## }, warning = function() { ## w <- warnings() ## warning("Warning produced running figure3.2():", w) ## }, error = function(e) { ## stop("Error produced running figure3.2():", e) ## }, finally = { ## }) ## } ``` ``` body(figure3.3) ``` ``` ## { ## out <- tryCatch(expr = { ## sectors_set <- x$sectors_set ## x <- dplyr::filter_(x$df, ~!sector %in% c("UK", "all_dcms")) ## x$sector <- factor(x = unname(sectors_set[as.character(x$sector)])) ## x <- dplyr::group_by_(x, ~sector) ## x <- dplyr::mutate_(x, index = ~max(ifelse(year == 2010, ## GVA, 0)), indexGVA = ~GVA/index * 100) ## p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~indexGVA, ## x = ~year, colour = ~sector, linetype = ~sector) + ## ggplot2::geom_path(size = 1.5) + govstyle::theme_gov(base_colour = "black") + ## ggplot2::scale_colour_brewer(palette = "Set1") + ## ggplot2::ylab("GVA Index: 2010=100") + ggplot2::theme(legend.position = "right", ## legend.key = ggplot2::element_blank()) + ggplot2::ylim(c(80, ## 150)) ## return(p) ## }, warning = function() { ## w <- warnings() ## warning("Warning produced running figure3.2():", w) ## }, error = function(e) { ## stop("Error produced running figure3.2():", e) ## }, finally = { ## }) ## } ``` #### 4\.3\.3\.7 Error handling A point of interest in the code with which some users may be unfamiliar is `tryCatch`, a function that allows the figure functions to catch conditions such as warnings, errors and messages. We see this towards the end of the function: if any of these conditions is produced, an informative message is generated (telling you in which function there was a problem). The structure here is simple and could be copied and pasted for use in automating other figures of other chapters or statistical reports (a stripped\-down skeleton of this pattern is sketched at the end of this chapter). For a comprehensive introduction see [Hadley’s Chapter](http://adv-r.had.co.nz/Exceptions-Debugging.html#condition-handling). 4\.4 Chapter plenary -------------------- We have explored the `eesectors` package from the perspective of someone wishing to develop their own semi\-automated chapter production through the development of a package in R. This package provides a useful template where one could copy the foundations of the package and workflow.
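For anyone writing an additional figure function, the `tryCatch` pattern discussed in section 4.3.3.7 can be reduced to the skeleton below. This is a sketch of the pattern only: `figure_new()` and the plotting code inside it are placeholders, not part of the `eesectors` package.

```
# A stripped-down skeleton of the tryCatch structure used by the figure
# functions. figure_new() and the plot it draws are placeholders.
figure_new <- function(x) {
  tryCatch(
    expr = {
      df <- dplyr::filter(x$df, sector != "UK")   # prepare the data
      ggplot2::ggplot(df, ggplot2::aes(x = year, y = GVA, colour = sector)) +
        ggplot2::geom_line()                      # build and return the plot
    },
    warning = function(w) {
      warning("Warning produced running figure_new(): ", conditionMessage(w))
    },
    error = function(e) {
      stop("Error produced running figure_new(): ", conditionMessage(e))
    }
  )
}
```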
Nonetheless, in the course of the work, statisticians at DCMS continue to undertake training in R, and the [Better Use of Data Team](https://data.blog.gov.uk/) spent time to ensure that the software development practices such as managing [software dependencies](https://www.gov.uk/service-manual/technology/managing-software-dependencies), [version control](https://www.gov.uk/service-manual/technology/maintaining-version-control-in-coding), [package development](http://r-pkgs.had.co.nz/), [unit testing](http://r-pkgs.had.co.nz/tests.html), style [guide](http://adv-r.had.co.nz/Style.html), [open by default](https://www.gov.uk/service-manual/technology/making-source-code-open-and-reusable) and [continuous integration](https://www.r-bloggers.com/continuous-integration-for-r-packages/) are embedded within the team that owns the publication. We’re continuing to support DCMS in the development of this prototype pipeline, with the expectation that it will be used operationally in 2017\. If you want to learn more about this project, the source code for the eesectors R package is maintained on [GitHub.com](https://github.com/ukgovdatascience/eesectors). The README provides instructions on how to test the package using the openly published data from the 2016 publication. 4\.2 Tidy data -------------- > Tidy data are all alike; every messy data is messy in its own way. \- Hadley Tolstoy What is the [simplest representation](http://vita.had.co.nz/papers/tidy-data.html) of the data possible? Prior to any analysis we must tidy our data: structuring our data to facilitate analysis. Tidy datasets are easy to manipulate, model and visualize, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table. You and your team trying to RAP should spend time reading this [paper](http://vita.had.co.nz/papers/tidy-data.pdf) and hold a seminar discussing it. It’s important to involve the analysts involved in the traditional production of this report as they will be familiar with the inputs and outputs of the report. With the heuristic of a tidy dataset in your mind, proceed, as a team, to look through the chapter or report you are attempting to produce using RAP. As you work through, note down what variables you would need to produce each table or figure, what would the input dataframe look like? (Say what you see.) After looking at all the figures and tables, is there one tidy daaset that could be used as input? Sketch out what it looks like. ### 4\.2\.1 eesectors tidy data We demonstrate this process using the DCMS publication, refer to [Chapter 3 \- GVA](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/544103/DCMS_Sectors_Economic_Estimates_-_August_2016.pdf). What data do you need to produce this table? [ Variables: Year, Sector, GVA What data do you need to produce this figure? [ The GVA of each Sector by Year. Variables: Year, Sector, GVA What data do you need to produce this figure? [ Total GVA across all sectors. Variables: Year, Sector, GVA What data do you need to produce this figure? [ For each Year by Sector we need the GVA. Variables: Year, Sector, GVA ### 4\.2\.2 What does our eesectors tidy data look like? Remember, for tidy data: 1\. Each variable forms a column. 2\. Each observation forms a row. 3\. Each type of observational unit forms a table. 
Our tidy data is of the form **Year \- Sector \- GVA**: | Year | Sector | GVA | | --- | --- | --- | | 2010 | creative | 65188 | | 2010 | culture | 20291 | | 2010 | digital | 97303 | | 2011 | creative | 69398 | | 2011 | culture | 20954 | | 2011 | digital | 107303 | *This data is for demonstration purposes only.* #### 4\.2\.2\.1 Another worked example \- what does our SEN tidy data look like? We repeat the process above for a different publication related to [Special Educational Needs data](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/633031/SFR37_2017_Main_Text.pdf) to demonstrate the thought process. We suggest you attempt to do this independently without peeking at the solution below, that way you can test your understanding. Look at the final report; work through and think about what data you need to produce each figure or table (write out the variables then sketch the minimal tidy data set required to build it). Ideally there will be one minimal tidy data set that we can build as input for our functions to produce these figures, tables or statistics. If a report covers a broad topic it might not be possible to have one minimal tidy data set (it’s OK to have more than one). We can create our own [custom class](http://adv-r.had.co.nz/OO-essentials.html) of object to cope and keep things simple for the user of our package. Here we draw our tables in a pseudo csv format to digitise for sharing. Sketching with pencil and paper is also fine and much clearer! I also use shorthand for some of the variable names given in the publication. ##### 4\.2\.2\.1\.1 Figure A year, all students, total sen, sen without statement or EHC plan, sen with statement or EHC plan … ##### 4\.2\.2\.1\.2 Figure B This digs deeper than Fig A by counting and categorising students (converted into percentage) by their primary type of need. Thus our minimal table above will not meet the needs for Figure B. We’ll add in some example made\-up data to check I understand the data correctly (the type of the data is the important thing e.g. date, integer, string). It’s important here to have expert domain knowledge as one might misunderstand due to esoteric language use. year, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, 0, NA, NA, 3e6 2016, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 2017, … \*\* Question: using the data table above can you produce both Figure A and B? \*\* With our data structured like this we have all the data we need to produce Figure B and Figure A. ##### 4\.2\.2\.1\.3 Figure C Again we dig deeper and ask what’s their school type? We don’t have this in our previous minimal data table so we need to include this variable in our dataframe. year, school\_type, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, “State\-funded primary”, 0, NA, NA, 3e6 2016, “State\-funded primary”, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, “State\-funded primary”, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, “State\-funded primary”, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 … As you can imagine the table can end up being quite long! \*\* Question: using the data table above can you produce both Figure A, B and C? \*\* Yes. Continue this thought process for the rest of the document. 
However, bear in mind that you have the added insight of where the data comes from and in what format, this might affect your using more than one data class for the package. For example you could call the one we described above as your “year\-sch\-sen” class, and have another data class dedicated to being the input for some of the other figures in the chapter. The data might come from an SQL query or a bunch of disparate spreadsheets. In the later case we can write some functions to extract and combine the data into a minimal tidy data table for use in our package. See eesectors [README](https://github.com/DCMSstats/eesectors/blob/master/README.md) for an example. ### 4\.2\.3 How to build your tidy data? With the minimal tidy dataset idea in place, you can begin to think about how you might construct this tidy dataset from the data stores you have availiable. As we are working in R we can formalise this minimal tidy dataset as a [class](http://adv-r.had.co.nz/OO-essentials.html). For our `eesectors` package we create our long data `year_sector_data` class as the fundamental input to create all our figures and tables for the output report. ### 4\.2\.1 eesectors tidy data We demonstrate this process using the DCMS publication, refer to [Chapter 3 \- GVA](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/544103/DCMS_Sectors_Economic_Estimates_-_August_2016.pdf). What data do you need to produce this table? [ Variables: Year, Sector, GVA What data do you need to produce this figure? [ The GVA of each Sector by Year. Variables: Year, Sector, GVA What data do you need to produce this figure? [ Total GVA across all sectors. Variables: Year, Sector, GVA What data do you need to produce this figure? [ For each Year by Sector we need the GVA. Variables: Year, Sector, GVA ### 4\.2\.2 What does our eesectors tidy data look like? Remember, for tidy data: 1\. Each variable forms a column. 2\. Each observation forms a row. 3\. Each type of observational unit forms a table. Our tidy data is of the form **Year \- Sector \- GVA**: | Year | Sector | GVA | | --- | --- | --- | | 2010 | creative | 65188 | | 2010 | culture | 20291 | | 2010 | digital | 97303 | | 2011 | creative | 69398 | | 2011 | culture | 20954 | | 2011 | digital | 107303 | *This data is for demonstration purposes only.* #### 4\.2\.2\.1 Another worked example \- what does our SEN tidy data look like? We repeat the process above for a different publication related to [Special Educational Needs data](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/633031/SFR37_2017_Main_Text.pdf) to demonstrate the thought process. We suggest you attempt to do this independently without peeking at the solution below, that way you can test your understanding. Look at the final report; work through and think about what data you need to produce each figure or table (write out the variables then sketch the minimal tidy data set required to build it). Ideally there will be one minimal tidy data set that we can build as input for our functions to produce these figures, tables or statistics. If a report covers a broad topic it might not be possible to have one minimal tidy data set (it’s OK to have more than one). We can create our own [custom class](http://adv-r.had.co.nz/OO-essentials.html) of object to cope and keep things simple for the user of our package. Here we draw our tables in a pseudo csv format to digitise for sharing. Sketching with pencil and paper is also fine and much clearer! 
I also use shorthand for some of the variable names given in the publication. ##### 4\.2\.2\.1\.1 Figure A year, all students, total sen, sen without statement or EHC plan, sen with statement or EHC plan … ##### 4\.2\.2\.1\.2 Figure B This digs deeper than Fig A by counting and categorising students (converted into percentage) by their primary type of need. Thus our minimal table above will not meet the needs for Figure B. We’ll add in some example made\-up data to check I understand the data correctly (the type of the data is the important thing e.g. date, integer, string). It’s important here to have expert domain knowledge as one might misunderstand due to esoteric language use. year, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, 0, NA, NA, 3e6 2016, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 2017, … \*\* Question: using the data table above can you produce both Figure A and B? \*\* With our data structured like this we have all the data we need to produce Figure B and Figure A. ##### 4\.2\.2\.1\.3 Figure C Again we dig deeper and ask what’s their school type? We don’t have this in our previous minimal data table so we need to include this variable in our dataframe. year, school\_type, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, “State\-funded primary”, 0, NA, NA, 3e6 2016, “State\-funded primary”, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, “State\-funded primary”, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, “State\-funded primary”, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 … As you can imagine the table can end up being quite long! \*\* Question: using the data table above can you produce both Figure A, B and C? \*\* Yes. Continue this thought process for the rest of the document. However, bear in mind that you have the added insight of where the data comes from and in what format, this might affect your using more than one data class for the package. For example you could call the one we described above as your “year\-sch\-sen” class, and have another data class dedicated to being the input for some of the other figures in the chapter. The data might come from an SQL query or a bunch of disparate spreadsheets. In the later case we can write some functions to extract and combine the data into a minimal tidy data table for use in our package. See eesectors [README](https://github.com/DCMSstats/eesectors/blob/master/README.md) for an example. #### 4\.2\.2\.1 Another worked example \- what does our SEN tidy data look like? We repeat the process above for a different publication related to [Special Educational Needs data](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/633031/SFR37_2017_Main_Text.pdf) to demonstrate the thought process. We suggest you attempt to do this independently without peeking at the solution below, that way you can test your understanding. Look at the final report; work through and think about what data you need to produce each figure or table (write out the variables then sketch the minimal tidy data set required to build it). Ideally there will be one minimal tidy data set that we can build as input for our functions to produce these figures, tables or statistics. If a report covers a broad topic it might not be possible to have one minimal tidy data set (it’s OK to have more than one). 
We can create our own [custom class](http://adv-r.had.co.nz/OO-essentials.html) of object to cope and keep things simple for the user of our package. Here we draw our tables in a pseudo csv format to digitise for sharing. Sketching with pencil and paper is also fine and much clearer! I also use shorthand for some of the variable names given in the publication. ##### 4\.2\.2\.1\.1 Figure A year, all students, total sen, sen without statement or EHC plan, sen with statement or EHC plan … ##### 4\.2\.2\.1\.2 Figure B This digs deeper than Fig A by counting and categorising students (converted into percentage) by their primary type of need. Thus our minimal table above will not meet the needs for Figure B. We’ll add in some example made\-up data to check I understand the data correctly (the type of the data is the important thing e.g. date, integer, string). It’s important here to have expert domain knowledge as one might misunderstand due to esoteric language use. year, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, 0, NA, NA, 3e6 2016, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 2017, … \*\* Question: using the data table above can you produce both Figure A and B? \*\* With our data structured like this we have all the data we need to produce Figure B and Figure A. ##### 4\.2\.2\.1\.3 Figure C Again we dig deeper and ask what’s their school type? We don’t have this in our previous minimal data table so we need to include this variable in our dataframe. year, school\_type, sen\_status, sen\_category, sen\_primary\_type\_need, count 2016, “State\-funded primary”, 0, NA, NA, 3e6 2016, “State\-funded primary”, 1, “SEN Support”, “Specific Learning Difficulty”, 5000 2016, “State\-funded primary”, 1, “Statement or EHC Plan”, “Specific Learning Difficulty”, 1500 2016, “State\-funded primary”, 1, “SEN Support”, “Moderate Learning Difficulty”, 5000 … As you can imagine the table can end up being quite long! \*\* Question: using the data table above can you produce both Figure A, B and C? \*\* Yes. Continue this thought process for the rest of the document. However, bear in mind that you have the added insight of where the data comes from and in what format, this might affect your using more than one data class for the package. For example you could call the one we described above as your “year\-sch\-sen” class, and have another data class dedicated to being the input for some of the other figures in the chapter. The data might come from an SQL query or a bunch of disparate spreadsheets. In the later case we can write some functions to extract and combine the data into a minimal tidy data table for use in our package. See eesectors [README](https://github.com/DCMSstats/eesectors/blob/master/README.md) for an example. ##### 4\.2\.2\.1\.1 Figure A year, all students, total sen, sen without statement or EHC plan, sen with statement or EHC plan … ##### 4\.2\.2\.1\.2 Figure B This digs deeper than Fig A by counting and categorising students (converted into percentage) by their primary type of need. Thus our minimal table above will not meet the needs for Figure B. We’ll add in some example made\-up data to check I understand the data correctly (the type of the data is the important thing e.g. date, integer, string). It’s important here to have expert domain knowledge as one might misunderstand due to esoteric language use. 
### 4\.2\.3 How to build your tidy data?

With the minimal tidy dataset idea in place, you can begin to think about how you might construct this tidy dataset from the data stores you have available. As we are working in R, we can formalise this minimal tidy dataset as a [class](http://adv-r.had.co.nz/OO-essentials.html). For our `eesectors` package we create our long data `year_sector_data` class as the fundamental input to create all our figures and tables for the output report.

4\.3 `eesectors` Package Exploration
------------------------------------

The following is an exploration of the `eesectors` package to help familiarise users with the key principles, so that they can automate report production through package development in R using `knitr`. This examines the package in more detail than the README, so that data scientists looking to implement RAP can note some of the characteristics of the code employed.

### 4\.3\.1 Installation

The package can be installed using `devtools::install_github('ukgovdatascience/eesectors')`. Some users may not be able to use the `devtools::install_github()` command as a result of network security settings. If this is the case, `eesectors` can be installed by downloading the [zip of the repository](https://github.com/ukgovdatascience/govstyle/archive/master.zip) and installing the package locally using `devtools::install_local(<path to zip file>)`.

#### 4\.3\.1\.1 Version control

As the code is stored on GitHub we can access the current master version as well as all [historic versions](https://github.com/ukgovdatascience/eesectors/releases).
This allows me to reproduce a report from last year if required. I can look at which release version was used and install that version accordingly, using the [additional arguments](ftp://cran.r-project.org/pub/R/web/packages/githubinstall/vignettes/githubinstall.html) for `install_github`.

### 4\.3\.2 Loading the package

Installation means the package is on our computer, but it is not loaded into the computer's working memory. We also load any additional packages that might be useful for exploring the package or the data therein.

```
library(eesectors)
```

```
## eesectors: Reproducible Analytical Pipeline (RAP) for the
## Economic Estimates for DCMS Sectors Statistical First Release
## (SFR). For more information visit:
## https://github.com/ukgovdatascience/eesectors
```

This makes all the functions within the package available for use. It also provides us with some R [data objects](https://github.com/ukgovdatascience/eesectors/tree/master/data), such as aggregated data sets ready for visualisation or analysis within the report.

> Packages are the fundamental units of reproducible R code. They include reusable R functions, the documentation that describes how to use them, and sample data. \- Hadley Wickham

### 4\.3\.3 Explore the package

A good place to start is the package [README](https://github.com/ukgovdatascience/eesectors).

#### 4\.3\.3\.1 Status badges

The [status badges](https://stackoverflow.com/questions/35563012/what-are-the-status-tags-like-build-passing) provide useful information. They are found in the top left of the README and should be green and say “passing”. This indicates that the package builds correctly on Windows and on Linux or macOS, so it is likely to build correctly on your machine when you install it. You can carry out these build tests locally using the [`devtools` package](https://github.com/hadley/devtools).

#### 4\.3\.3\.2 Look at the output first

If you go to Chapter 3 of the [DCMS publication](https://www.gov.uk/government/statistics/dcms-sectors-economic-estimates-2016) it is apparent that most of the content is either data tables of summary statistics or visualisations of the data. This makes automation particularly useful here and likely to produce time savings. Chapter 3 seems to be fairly typical in its length (if a little shorter than other chapters).

This package seems to work by taking the necessary data inputs as arguments to a function and then outputting the relevant figures. The names of the functions match the figures they produce. Prior to this step we have to get the data into the correct format. If you look at the functions within the package in RStudio, using the package navigator, it is evident that there are families of functions dedicated to reading Excel spreadsheets and collecting the data in a tidy .Rds format. These are given the function name\-prefix `extract_` (try to give your functions [good names](http://adv-r.had.co.nz/Style.html)).

The `GVA_by_sector_2016` object provides test data to work with during development. This will be important for the development of other packages for different reports. You need a precise understanding of how you go from raw data, to aggregated data (such as `GVA_by_sector_2016`), to the final figure. What are your inputs (arguments) and outputs? In some cases, where your master data is stored in a format that is particularly difficult for a machine to read, you may prefer to have a human carry out this extraction step.
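As a purely illustrative sketch of what an `extract_` style helper might look like (the function name, file path and sheet layout here are hypothetical, not the real `eesectors` implementation), reading a single spreadsheet into a tidy data frame could be as simple as:

```
# Hypothetical sketch of an `extract_` style helper (not the real eesectors code).
# Assumes a rectangular sheet with a `sector` column and one column per year.
extract_gva_sheet <- function(path, sheet = "GVA") {
  raw  <- readxl::read_excel(path, sheet = sheet)
  long <- tidyr::gather(raw, key = "year", value = "GVA", -sector)  # wide years -> long format
  long$year <- as.integer(long$year)
  long
}

# Usage (the file name is made up):
# gva_long <- extract_gva_sheet("OFFICIAL_master_data.xlsx")
# saveRDS(gva_long, "gva_long.Rds")
```

The real `extract_` functions in the package will differ in detail, but the principle is the same: each messy source gets its own small, testable extraction function whose output is the minimal tidy data, such as the `GVA_by_sector_2016` object glimpsed below.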
```
dplyr::glimpse(GVA_by_sector_2016)
```

```
## Observations: 54
## Variables: 3
## $ sector <fctr> creative, culture, digital, gambling, sport, telecoms,...
## $ year   <int> 2010, 2010, 2010, 2010, 2010, 2010, 2010, 2011, 2011, 2...
## $ GVA    <dbl> 65188, 20291, 97303, 8407, 7016, 24738, 49150, 69398, 2...
```

```
x <- GVA_by_sector_2016
```

#### 4\.3\.3\.3 Automating QA

Humans are not particularly good at Quality Assurance (QA); especially when working with massive spreadsheets, it is easy for errors to creep in. We can automate a lot of the sense checking, and update it if things change or a human provides another creative test to use for sense checking. If you can describe the test to a colleague then you can code it.

The author uses messages to tell us what checks are being conducted, or we can look at the body of the function if we are interested. This is useful if you are considering developing your own package: it will help you structure the messages that are useful for the user.

```
gva <- year_sector_data(GVA_by_sector_2016)
```

```
## Initiating year_sector_data class.
## 
## 
## Expects a data.frame with three columns: sector, year, and measure, where
## measure is one of GVA, exports, or enterprises. The data.frame should include
## historical data, which is used for checks on the quality of this year's data,
## and for producing tables and plots. More information on the format expected by
## this class is given by ?year_sector_data().
```

```
## 
## *** Running integrity checks on input dataframe (x):
```

```
## 
## Checking input is properly formatted...
```

```
## Checking x is a data.frame...
```

```
## Checking x has correct columns...
```

```
## Checking x contains a year column...
```

```
## Checking x contains a sector column...
```

```
## Checking x does not contain missing values...
```

```
## Checking for the correct number of rows...
```

```
## ...passed
```

```
## 
## ***Running statistical checks on input dataframe (x)...
## 
##  These tests are implemented using the package assertr see:
##  https://cran.r-project.org/web/packages/assertr for more details.
```

```
## Checking years in a sensible range (2000:2020)...
```

```
## Checking sectors are correct...
```

```
## Checking for outliers (x_i > median(x) + 3 * mad(x)) in each sector timeseries...
```

```
## Checking sector timeseries: all_dcms
```

```
## Checking sector timeseries: creative
```

```
## Checking sector timeseries: culture
```

```
## Checking sector timeseries: digital
```

```
## Checking sector timeseries: gambling
```

```
## Checking sector timeseries: sport
```

```
## Checking sector timeseries: telecoms
```

```
## Checking sector timeseries: tourism
```

```
## Checking sector timeseries: UK
```

```
## ...passed
```

```
## Checking for outliers on a row by row basis using mahalanobis distance...
```

```
## Checking sector timeseries: all_dcms
```

```
## Checking sector timeseries: creative
```

```
## Checking sector timeseries: culture
```

```
## Checking sector timeseries: digital
```

```
## Checking sector timeseries: gambling
```

```
## Checking sector timeseries: sport
```

```
## Checking sector timeseries: telecoms
```

```
## Checking sector timeseries: tourism
```

```
## Checking sector timeseries: UK
```

```
## ...passed
```

This is a semi\-automated process, so the user should review the checks and ensure they cover the checks that would usually be conducted manually. If a new check or test becomes necessary then it should be implemented by changing the code.
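If you can describe a check in words, you can usually code it in a few lines. As a small, hypothetical illustration (the function name is made up; this is not part of `eesectors`), a check written in the same message/`if`/`stop()` style used by `year_sector_data()` might look like:

```
# Hypothetical stand-alone check in the same style as year_sector_data() (not eesectors code)
check_no_missing_years <- function(x, expected_years = 2010:2016) {
  message("Checking every expected year is present...")
  missing_years <- setdiff(expected_years, unique(x$year))
  if (length(missing_years) > 0) {
    stop("x is missing data for year(s): ", paste(missing_years, collapse = ", "))
  }
  message("...passed")
  invisible(TRUE)
}

# check_no_missing_years(GVA_by_sector_2016)
```

Collecting checks like this in one place means the QA runs every time the data is loaded, rather than relying on someone remembering to eyeball the spreadsheet. The full set of checks the package actually runs can be inspected directly in the function body, shown next.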
```
body(year_sector_data)
```

```
## {
##     message("Initiating year_sector_data class.\n\n\nExpects a data.frame with three columns: sector, year, and measure, where\nmeasure is one of GVA, exports, or enterprises. The data.frame should include\nhistorical data, which is used for checks on the quality of this year's data,\nand for producing tables and plots. More information on the format expected by\nthis class is given by ?year_sector_data().")
##     message("\n*** Running integrity checks on input dataframe (x):")
##     message("\nChecking input is properly formatted...")
##     message("Checking x is a data.frame...")
##     if (!is.data.frame(x)) 
##         stop("x must be a data.frame")
##     message("Checking x has correct columns...")
##     if (length(colnames(x)) != 3) 
##         stop("x must have three columns: sector, year, and one of GVA, export, or x")
##     message("Checking x contains a year column...")
##     if (!"year" %in% colnames(x)) 
##         stop("x must contain year column")
##     message("Checking x contains a sector column...")
##     if (!"sector" %in% colnames(x)) 
##         stop("x must contain sector column")
##     message("Checking x does not contain missing values...")
##     if (anyNA(x)) 
##         stop("x cannot contain any missing values")
##     message("Checking for the correct number of rows...")
##     if (nrow(x) != length(unique(x$sector)) * length(unique(x$year))) {
##         warning("x does not appear to be well formed. nrow(x) should equal\nlength(unique(x$sector)) * length(unique(x$year)). Check the of x.")
##     }
##     message("...passed")
##     message("\n***Running statistical checks on input dataframe (x)...\n\n These tests are implemented using the package assertr see:\n https://cran.r-project.org/web/packages/assertr for more details.")
##     value <- colnames(x)[(!colnames(x) %in% c("sector", "year"))]
##     message("Checking years in a sensible range (2000:2020)...")
##     assertr::assert_(x, assertr::in_set(2000:2020), ~year)
##     message("Checking sectors are correct...")
##     sectors_set <- c(creative = "Creative Industries", culture = "Cultural Sector", 
##         digital = "Digital Sector", gambling = "Gambling", sport = "Sport", 
##         telecoms = "Telecoms", tourism = "Tourism", all_dcms = "All DCMS sectors", 
##         perc_of_UK = "% of UK GVA", UK = "UK")
##     assertr::assert_(x, assertr::in_set(names(sectors_set)), 
##         ~sector, error_fun = raise_issue)
##     message("Checking for outliers (x_i > median(x) + 3 * mad(x)) in each sector timeseries...")
##     series_split <- split(x, x$sector)
##     lapply(X = series_split, FUN = function(x) {
##         message("Checking sector timeseries: ", unique(x[["sector"]]))
##         assertr::insist_(x, assertr::within_n_mads(3), lazyeval::interp(~value, 
##             value = as.name(value)), error_fun = raise_issue)
##     })
##     message("...passed")
##     message("Checking for outliers on a row by row basis using mahalanobis distance...")
##     lapply(X = series_split, FUN = maha_check)
##     message("...passed")
##     structure(list(df = x, colnames = colnames(x), type = colnames(x)[!colnames(x) %in% 
##         c("year", "sector")], sector_levels = levels(x$sector), 
##         sectors_set = sectors_set, years = unique(x$year)), class = "year_sector_data")
## }
```

The function is structured to tell the user what check is being made and then to run that check on the input `x`. If the input fails a check, the function stops with a useful diagnostic message for the user. This is achieved using `if` together with the negation of the desired property of `x`.
```
message("Checking x has correct columns...")
if (length(colnames(x)) != 3) 
    stop("x must have three columns: sector, year, and one of GVA, export, or x")
```

For example, if `x` does not have exactly three columns, we `stop`.

#### 4\.3\.3\.4 Output of this function

The output object is different from the input, as expected, yet it does contain the initial data.

```
identical(gva$df, x)
```

```
## [1] TRUE
```

The rest of the list contains other details that could be changed at a later date if required, demonstrating defensive programming. For example, the sectors that are of interest to DCMS have changed and may change again.

```
?year_sector_data
```

Let's take a closer look at this function using the help file and the other standard function\-exploration tools. The help says it produces a custom class of object with five slots.

```
isS4(gva)
```

```
## [1] FALSE
```

```
class(gva)
```

```
## [1] "year_sector_data"
```

It's not actually an S4 object; by “slots” the author means the elements of a list. This approach is sensible and easy to work with, as most users are familiar with [S3](http://adv-r.had.co.nz/S3.html).

#### 4\.3\.3\.5 The input

The input, which is likely a bunch of [not tidy or messy](https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html) spreadsheets, needs to be wrangled and aggregated (if necessary) for input into the functions prefixed by `figure`.

```
dplyr::glimpse(GVA_by_sector_2016)
```

```
## Observations: 54
## Variables: 3
## $ sector <fctr> creative, culture, digital, gambling, sport, telecoms,...
## $ year   <int> 2010, 2010, 2010, 2010, 2010, 2010, 2010, 2011, 2011, 2...
## $ GVA    <dbl> 65188, 20291, 97303, 8407, 7016, 24738, 49150, 69398, 2...
```

#### 4\.3\.3\.6 The R output

> We build our functions to use the same simple, tidy, data. \- Matt Upson

With the data in the appropriate form to be received as an argument or input for the `figure` family of functions, we can proceed to plot.

```
figure3.1(x = gva)
```

Again we can look at the details of the plot. We could change the body of the function to effect changes to the default plot, or we can pass additional `ggplot2` arguments to it. Reading the code we see it filters the data, makes the variables it needs, recodes the `sector` factor and then plots it.

```
body(figure3.1)
```

```
## {
##     out <- tryCatch(expr = {
##         sectors_set <- x$sectors_set
##         x <- dplyr::filter_(x$df, ~sector != "UK")
##         x <- dplyr::mutate_(x, year = ~factor(year, levels = c(2016:2010)))
##         x$sector <- factor(x = unname(sectors_set[as.character(x$sector)]), 
##             levels = rev(as.character(unname(sectors_set[levels(x$sector)]))))
##         p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~GVA, x = ~sector, 
##             fill = ~year) + ggplot2::geom_bar(colour = "slategray", 
##             position = "dodge", stat = "identity") + ggplot2::coord_flip() + 
##             govstyle::theme_gov(base_colour = "black") + ggplot2::scale_fill_brewer(palette = "Blues") + 
##             ggplot2::ylab("Gross Value Added (£bn)") + ggplot2::theme(legend.position = "right", 
##             legend.key = ggplot2::element_blank()) + ggplot2::scale_y_continuous(labels = scales::comma)
##         return(p)
##     }, warning = function() {
##         w <- warnings()
##         warning("Warning produced running figure3.1():", w)
##     }, error = function(e) {
##         stop("Error produced running figure3.1():", e)
##     }, finally = {
##     })
## }
```

We can inspect and change an argument if we feel inclined, or if a new colour scheme becomes preferred, for example. However, there is no `...` in the body of the function itself, so where does this argument get passed to?
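Since `figure3.1()` returns a `ggplot` object (note the `return(p)` above), one way to tweak the output without touching the package is simply to add layers to the returned plot. A small sketch, with arbitrary tweaks chosen purely for illustration:

```
# figure3.1() returns a ggplot object, so we can modify it after the fact
p <- figure3.1(x = gva)

p +
  ggplot2::scale_fill_brewer(palette = "Greens") +  # ggplot2 will report it is replacing the fill scale
  ggplot2::theme(legend.position = "bottom")        # move the legend below the plot
```

This may be why no `...` argument is exposed: post-hoc modification of the returned object covers most presentational changes.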
This all looks straightforward, and we can inspect the other functions for generating the figures or plot output.

```
body(figure3.2)
```

```
## {
##     out <- tryCatch(expr = {
##         sectors_set <- x$sectors_set
##         x <- dplyr::filter_(x$df, ~sector %in% c("UK", "all_dcms"))
##         x$sector <- factor(x = unname(sectors_set[as.character(x$sector)]))
##         x <- dplyr::group_by_(x, ~sector)
##         x <- dplyr::mutate_(x, index = ~max(ifelse(year == 2010, 
##             GVA, 0)), indexGVA = ~GVA/index * 100)
##         p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~indexGVA, 
##             x = ~year, colour = ~sector, linetype = ~sector) + 
##             ggplot2::geom_path(size = 1.5) + govstyle::theme_gov(base_colour = "black") + 
##             ggplot2::scale_colour_manual(values = unname(govstyle::gov_cols[c("red", 
##                 "purple")])) + ggplot2::ylab("GVA Index: 2010=100") + 
##             ggplot2::theme(legend.position = "bottom", legend.key = ggplot2::element_blank()) + 
##             ggplot2::ylim(c(80, 130))
##         return(p)
##     }, warning = function() {
##         w <- warnings()
##         warning("Warning produced running figure3.2():", w)
##     }, error = function(e) {
##         stop("Error produced running figure3.2():", e)
##     }, finally = {
##     })
## }
```

```
body(figure3.3)
```

```
## {
##     out <- tryCatch(expr = {
##         sectors_set <- x$sectors_set
##         x <- dplyr::filter_(x$df, ~!sector %in% c("UK", "all_dcms"))
##         x$sector <- factor(x = unname(sectors_set[as.character(x$sector)]))
##         x <- dplyr::group_by_(x, ~sector)
##         x <- dplyr::mutate_(x, index = ~max(ifelse(year == 2010, 
##             GVA, 0)), indexGVA = ~GVA/index * 100)
##         p <- ggplot2::ggplot(x) + ggplot2::aes_(y = ~indexGVA, 
##             x = ~year, colour = ~sector, linetype = ~sector) + 
##             ggplot2::geom_path(size = 1.5) + govstyle::theme_gov(base_colour = "black") + 
##             ggplot2::scale_colour_brewer(palette = "Set1") + 
##             ggplot2::ylab("GVA Index: 2010=100") + ggplot2::theme(legend.position = "right", 
##             legend.key = ggplot2::element_blank()) + ggplot2::ylim(c(80, 
##             150))
##         return(p)
##     }, warning = function() {
##         w <- warnings()
##         warning("Warning produced running figure3.2():", w)
##     }, error = function(e) {
##         stop("Error produced running figure3.2():", e)
##     }, finally = {
##     })
## }
```

#### 4\.3\.3\.7 Error handling

A point of interest in the code, with which some users may be unfamiliar, is `tryCatch`: a function that allows us to catch conditions such as warnings, errors and messages. We see this towards the end of each `figure` function: if one of these conditions is produced, an informative message is generated (in that it tells you in which function there was a problem). The structure here is simple and could be copied and pasted for use in automating other figures of other chapters or statistical reports. For a comprehensive introduction see [Hadley's Chapter](http://adv-r.had.co.nz/Exceptions-Debugging.html#condition-handling).
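The skeleton that the `figure` functions share can be lifted out and reused for any new figure function. Below is a minimal, generic sketch of that pattern; the function and figure names are placeholders, not part of `eesectors`, and, unlike the package code above, the sketch passes the caught condition into each handler, which is the more conventional approach.

```
# Generic skeleton of the tryCatch pattern used by the figure functions (placeholder names)
figureX.Y <- function(x) {
  out <- tryCatch(
    expr = {
      # ... filter/mutate x$df and build the ggplot object here ...
      p <- ggplot2::ggplot(x$df)  # placeholder plotting code
      return(p)
    },
    warning = function(w) {
      warning("Warning produced running figureX.Y(): ", conditionMessage(w))
    },
    error = function(e) {
      stop("Error produced running figureX.Y(): ", conditionMessage(e))
    },
    finally = {}
  )
}
```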
4\.4 Chapter plenary
--------------------

We have explored the `eesectors` package from the perspective of someone wishing to develop their own semi\-automated chapter production through the development of a package in R. This package provides a useful template: one could copy the foundations of the package and its workflow.
Chapter 6 Version Control
=========================

6\.1 Introduction
-----------------

Few software engineers would embark on a new project without using some sort of [version control software](https://en.wikipedia.org/wiki/Version_control). Version control software allows us to track the three Ws: **Who made Which change, and Why?** Tools like [git](https://git-scm.com/) can be used to track files of any type, but they are particularly useful for code in text files, for example R or Python code.

Whilst git can be used locally on a single machine, or on many networked machines, git can also be hooked up to free cloud services such as [GitHub](https://github.com/), [GitLab](https://about.gitlab.com/), or [Bitbucket](https://bitbucket.org/). Each of these services provides hosting for your version control repository, and makes the code open and easy to share. The entire project we are working on with DCMS can be seen on [GitHub](https://github.com/ukgovdatascience/eesectors).

Obviously this won't be appropriate for all Government projects (and solutions do exist to allow these services to be run within secure systems), but in our work with DCMS, we were able to publish all of our code openly. You can use [our code](https://github.com/ukgovdatascience/eesectors) to run an example based on the 2016 publication, but producing the entire publication from end to end would require access to data which is not published openly.

Below is a screenshot from the commit history showing collaboration between data scientists in DCMS and GDS. The full page can be seen on [GitHub](https://github.com/ukgovdatascience/eesectors/commits/master).

Using a service like GitHub allows us to formalise the system of quality assurance (QA) in an auditable way. We can configure GitHub to require a code review by another person before the update to the code (this is called a [pull request](https://help.github.com/articles/about-pull-requests/)) is accepted into the main workstream of the project. You can see this in the screenshot below, which relates to a pull request which fixed a [minor bug in the prototype](https://github.com/ukgovdatascience/eesectors/pull/71). The work to fix it was done by a data scientist at DCMS, and reviewed by a data scientist from GDS.

The open nature of the code is great for transparency and facilitates review. The entire community can contribute to helping QA your code and identify [issues or bugs](https://github.com/ukgovdatascience/eesectors/issues). If you are lucky, they will not only report the bug or issue, but may also offer a fix for your code in the form of a pull request.

6\.2 Useful resources
---------------------

### 6\.2\.1 Graphical user interface focus

For those who are uncomfortable working in a command line interface, this [useful book](http://happygitwithr.com/) on Git and GitHub should cover most of your Git and GitHub workflow needs for collaborating in a team. However, we recommend putting the effort into learning git without a GUI, so that you benefit from the full functionality on offer.

### 6\.2\.2 Git and RStudio

You can also use [git and GitHub within RStudio](http://r-pkgs.had.co.nz/git.html).

### 6\.2\.3 Command line focus

However, the terminal isn't that scary really, and we recommend using it from the outset. Here's a [video tutorial](https://swcarpentry.github.io/git-novice/) that provides a good introduction and does not expect any experience of using the Unix shell (the terminal or command line).
For a comprehensive tome try the [Pro Git book](https://git-scm.com/book/en/v2).

6\.3 Typical workflow
---------------------

When you first start using git it can be difficult to remember all the commonly used commands (you might find it useful to keep a list of them in a text editor). We give a simple workflow here, assuming you are collaborating on Github with a small team and have [set up a repo](https://help.github.com/articles/creating-a-new-repository/) called `my_repo` with the origin and remote set (try to avoid hyphens in names). Remember to remove the comments (the \#) when copying and pasting into the terminal. You will also need to give your new feature branch a good name.

1. Open your terminal (command line tool).
2. Navigate to `my_repo` using `cd`.
3. Check you are up\-to\-date:

```
# git checkout master
# git pull
```

4. Create your new feature branch to work on and get to work (track changes by adding and committing locally as usual):

```
# git checkout -b feature/post_name
```

5. [Squash your commits](https://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git) if appropriate, then push your new branch to Github. You will want to squash together all the commits associated with one discrete piece of work (e.g. coding one function).

```
# git push origin feature/post_name -u
```

6. On Github create a [pull request](https://help.github.com/articles/about-pull-requests/) and ask a colleague to review your changes before merging with the `master` branch (you can assign a reviewer in the PR page on Github).
7. If accepted (and it passes all necessary checks) your new feature will have been merged on Github. Fetch these changes locally:

```
# git checkout master
# git pull
```

8. You have a new master on Github. Pull it to your local machine and the development cycle starts again!

CAVEAT: this workflow is not appropriate for large open collaborations, where fork and pull is preferred.

6\.4 Branch naming etiquette
----------------------------

Generally I will start a new branch either to add a new feature, such as a test or a new function (`git branch feature/cool_new_feature`, `git branch feature/sen_function_name`), or to fix a bug or problem. We should be recording bugs and problems on the Github [issues page for the package](https://github.com/ukgovdatascience/rap_companion/issues). We can then title our branches to tackle specific issues, e.g. `git branch fix/issue_number`.

It’s good to push your branches to Github while you’re still working on them, prior to them being finished, so we all know what everyone is working on. You can simply put \[WIP] (work in progress) in the title of the PR on Github to let people know it’s not ready for review yet.

6\.5 Watermarking
-----------------

Imagine if someone asks you to reproduce some historic analysis further down the line. This will be easy if you’ve used git, as long as you know which version of your code was used to produce the report (packrat also facilitates this). You can then load that [version](http://r-pkgs.had.co.nz/description.html#version) of the code to repeat the analysis and reproduce the report.

As an additional measure, or if you find versioning intimidating, you could watermark your report by citing the git commit used to generate it, as demonstrated below and in the Stack Overflow answer by [Wander Nauta, 2015](https://stackoverflow.com/questions/32260956/show-git-version-in-r-code).
```
print(system("git rev-parse --short HEAD", intern = TRUE))
```

```
## [1] "ca874b6"
```

This commit hash can be used to “revert” back to the code at the time the report was produced, fool around and reproduce the original report. You also have the flexibility to do other things, which are explored in this [Stack Overflow answer](https://stackoverflow.com/questions/4114095/how-to-revert-git-repository-to-a-previous-commit). This feature of version control is what makes our analytical pipelines reproducible.
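If your report is written in R Markdown, one lightweight way to apply this watermark is to capture the hash in a chunk and print it, for example in the report footer. This is a minimal sketch rather than part of the DCMS pipeline; it assumes the report is knitted from inside the git repository, with git available on the PATH.

```
# Capture the short hash of the commit the report was built from;
# this shells out to git, so it must run inside the repository
git_hash <- system("git rev-parse --short HEAD", intern = TRUE)

# Print a watermark line, for example at the bottom of the report
cat(paste0("This report was generated from commit ", git_hash, "."))
```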
Data Databases and Engineering
ukgovdatascience.github.io
https://ukgovdatascience.github.io/rap_companion/package.html
Chapter 7 Packaging Code
========================

A package enshrines all the business knowledge used to create a corpus of work in one place, including the code and its relevant documentation. One of the difficulties that can arise in the more manual methods of statistics production is that we have many different files relating to many different stages of the process, each of which needs to be documented and kept up to date. Part of the heavy lifting can be done here with version control as described in Chapter [6](vs.html#vs), but we can go a step further: we can create a package of code.

As Hadley Wickham (author of a number of essential packages for package development) puts it for R:

> Packages are the fundamental units of reproducible R code. They include reusable R functions, the documentation that describes how to use them, and sample data. \- Hadley Wickham

Since it is a matter of statute that we produce our statistical publications, it is essential that our publications are as reproducible as possible. Packaging up the code can also help with institutional knowledge transfer. This was exemplified in Chapter [4](exemplar.html#exemplar), where we explored help files associated with code using the R `?` function.

```
library(eesectors)
?clean_sic()
```

Linking the documentation to the code makes everything much easier to understand, and can help to minimise the time taken to bring new team members up to speed. This all meets the requirements of the [AQUA](https://www.gov.uk/government/publications/the-aqua-book-guidance-on-producing-quality-analysis-for-government) book in that all assumptions and constraints can be described in the package documentation tied to the relevant code.

7\.1 Essential reading
----------------------

Hadley Wickham’s [R Packages](http://r-pkgs.had.co.nz/) book is an excellent and comprehensive introduction to developing your own package in R. It encourages you to start with the basics and improve over time; good advice.

7\.2 Development best practices for your package
------------------------------------------------

### 7\.2\.1 Licensing your code

Developing your code as an R package will require you to specify a license for your code in the DESCRIPTION file (for example the [eesectors](https://github.com/DCMSstats/eesectors/blob/master/DESCRIPTION) package uses the GPL\-3 license). We echo the [GDS Service Manual](https://www.gov.uk/service-manual/technology/making-source-code-open-and-reusable#licensing-your-code) in encouraging the use of an [Open Source Initiative](https://opensource.org/licenses) compatible licence. For example, GDS uses the [MIT licence](https://github.com/alphagov/styleguides/blob/master/licensing.md). It is also of note that all code produced by civil servants is automatically covered by [Crown Copyright](http://www.nationalarchives.gov.uk/information-management/re-using-public-sector-information/uk-government-licensing-framework/crown-copyright/).

### 7\.2\.2 Acting as the custodian for your code

When you make your code open, you should:

* use [Semantic Versioning](https://semver.org/) to make it clear when you release an update to your code
* be clear about how you’ll communicate with users of your code, for example on support channels and email lists

Encouraging contributions from people who use your code can help make your code more robust, as people will spot bugs and suggest new features.
If you would like to encourage contributions, you can create a [CONTRIBUTING.md](https://github.com/blog/1184-contributing-guidelines) file on Github, like we [demonstrate for this book](https://github.com/ukgovdatascience/rap_companion/blob/master/CONTRIBUTING.md).
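As a rough end\-to\-end illustration of the ideas in this chapter (creating a package, licensing it, and tying documentation to the code), here is a minimal sketch using the `usethis` and `devtools` packages. The package name, the function and the file layout are invented for the example; they are not part of eesectors, and the exact arguments of the `usethis` helpers vary a little between versions.

```
# One-off setup: create the package skeleton and declare a licence.
# create_package() writes the DESCRIPTION, NAMESPACE and R/ directory.
usethis::create_package("mypublication")
usethis::use_mit_license("Crown Copyright")  # or use_gpl3_license(), as in eesectors

# R/add_vat.R: a function stored alongside its roxygen2 documentation,
# so that ?add_vat works once the package is documented and installed.
#' Add VAT to a net amount
#'
#' @param net A numeric vector of net amounts in pounds.
#' @param rate The VAT rate; defaults to 20%.
#' @return A numeric vector of gross amounts.
#' @examples
#' add_vat(100)
#' @export
add_vat <- function(net, rate = 0.2) {
  net * (1 + rate)
}

# Regenerate the help files from the roxygen2 comments and run the
# standard package checks.
devtools::document()
devtools::check()
```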
Data Databases and Engineering
ukgovdatascience.github.io
https://ukgovdatascience.github.io/rap_companion/dep.html
Chapter 11 Dependency and reproducibility
=========================================

*This section is in development. Please [contribute to the discussion](https://github.com/ukgovdatascience/rap_companion/issues/89).*

11\.1 Other people’s code
-------------------------

You’re likely to make use of other people’s code when you develop your RAP project. Maybe you’ve imported packages to perform a statistical test, for example. These dependencies can be extremely helpful. They can:

* prevent you from recreating code that already exists
* save you time trying to solve a problem and optimise its solution
* give you access to code and solutions from experts in the field
* help to reduce the size of your scripts and make them more human\-readable
* limit the need for you to update and fix problems yourself

### 11\.1\.1 Limitations

Most statistical publications are [updated on a scheduled basis](https://www.gov.uk/government/statistics/announcements) when new data become available. It’s possible that you’ll get some errors when you reuse your RAP project code for the next update, even if everything worked perfectly last time. Why? The maintainers of your dependencies may have changed the code and it no longer works as you expect.

Changes might not impact your publication. If changes do have an impact, the best case is that you’ll get a helpful error message. At worst, your code will execute with an imperceptible but impactful error. Maybe a rounding function now rounds to the nearest 10 instead of the nearest 1000.

The problem is compounded if your dependencies depend on other dependencies, or if two of your dependencies require conflicting versions of a third dependency. You could get stuck in so\-called [dependency hell](https://en.wikipedia.org/wiki/Dependency_hell).

The bottom line: your publication is dependent on particular software *and* its state at a given time. How can you deal with this? This chapter considers a few possibilities given variation in tools, techniques and IT restrictions.

### 11\.1\.2 Think lightweight

First, think about what you can do to reduce the chance of problems. To put it succinctly, [the tinyverse philosophy of dependency management](http://www.tinyverse.org/) suggests that:

> Lightweight is the right weight

To achieve this you could:

* minimise the number of dependencies and remove redundancy where possible
* avoid depending on packages that in turn have many dependencies
* restrict yourself to ‘stable’ packages for which recent changes were restricted to minor updates and bug\-fixes
* regularly review your dependencies to establish if better alternatives exist

### 11\.1\.3 Record the packages and versions

It’s not enough to simply minimise your dependencies. You need to think about how this impacts the reproducibility of your project. To ensure that your scripts are executed in the same way next time, you need to record the packages and their versions in some way. Then you or a colleague can recreate the environment in which the outputs were produced the first time round.

11\.2 Approaches
----------------

### 11\.2\.1 Version numbers

Maintainers signal updates by increasing the version number of their software. This could be a simple patch of an earlier version’s bug (e.g. version 3\.2\.7 replaces 3\.2\.6\), or perhaps a major *breaking* change (e.g. version 3\.2\.6 is updated to version 4\.0\.0\). There are many ways to record each of the packages used in our analysis and their version numbers.
In R you could, for example, use the `session_info()` function from the `devtools` package. This prints details about the current state of the working environment.

```
devtools::session_info()
```

```
─ Session info ─────────────────────────────────────────
 setting  value                       
 version  R version 3.5.2 (2018-12-20)
 os       macOS High Sierra 10.13.6   
 system   x86_64, darwin15.6.0        
 ui       RStudio                     
 language (EN)                        
 collate  en_GB.UTF-8                 
 ctype    en_GB.UTF-8                 
 tz       Europe/London               
 date     2019-03-01                  

─ Packages ────────────────────────────────────────────
 package    * version date       lib source        
 assertthat   0.2.0   2017-04-11 [1] CRAN (R 3.5.0)
 backports    1.1.3   2018-12-14 [1] CRAN (R 3.5.0)
 bookdown     0.7     2018-02-18 [1] CRAN (R 3.5.0)
 callr        3.1.1   2018-12-21 [1] CRAN (R 3.5.0)
 ...
```

You could do something like `pkgs <- devtools::session_info()$packages` to save a dataframe of the packages and versions.

You can achieve a similar thing for Python with `pip freeze` in a shell script.

```
pip freeze
```

```
## alabaster==0.7.10
## anaconda-client==1.6.14
## anaconda-navigator==1.8.7
## anaconda-project==0.8.2
## appnope==0.1.0
## appscript==1.0.1
...
```

You can save this information with something like `pip freeze > requirements.txt` in the shell. The packages should be ‘pinned’ to specific versions, meaning that they’re in the form `packageName==1.3.2` rather than `packageName>=1.3.2`. We’re interested in storing *specific versions*, not *specific versions or newer*.

But simply saving this information in your project folder isn’t good dependency control. It:

* would be tedious for analysts to read these reports and download each recorded package version one\-by\-one
* records *every* package and its version *on your whole system*, not just the ones relevant to your project
* isn’t a reproducible or automated process

### 11\.2\.2 Environments for dependency management

Ideally we want to automate the process of recording packages and their version numbers and have them installed in an isolated environment that’s specific to our project. Doing this makes the project more portable – you could run it easily from another machine that’s configured differently to your own – and it would therefore be more reproducible.

#### 11\.2\.2\.1 Package managers in R

There is currently no consensus approach for package management in R. Below are a few options, but this is a non\-exhaustive list.

The [`packrat` package](https://rstudio.github.io/packrat/) is commonly used but [has known issues](https://rstudio.github.io/packrat/limitations.html). The RAP community has noted in particular that it has a problem compiling older package versions on Windows. [Join the discussion for more information](https://github.com/ukgovdatascience/rap_companion/issues/86).

[A `packrat` walkthrough is available](https://rstudio.github.io/packrat/walkthrough.html), but the basic process is as follows (see the sketch after this list):

1. Activate ‘packrat mode’ in your project folder with `init()`, which records and snapshots the packages you’ve called in your scripts.
2. Install new packages as usual, except they’re now saved to a *private package repository* within the project, rather than your local machine.
3. By default, regular snapshots are taken to record the state of dependencies, but you can force one with `snapshot()`.
4. When opening the project fresh on a new machine, Packrat automates the process of fetching the packages – with their recorded version numbers – and storing them in a private package library created on the collaborator’s machine.
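A minimal sketch of that cycle, run from the R console at the project root, might look like the following. The exact behaviour depends on your version of `packrat` and on your project setup.

```
# Put the project into packrat mode: this creates a private library and
# takes an initial snapshot of the packages used in your scripts
packrat::init()

# Work as normal; install.packages() now installs into the project library
install.packages("dplyr")

# Record the current packages and versions in the packrat lockfile
packrat::snapshot()

# On a fresh clone of the project, rebuild the private library
# from the lockfile
packrat::restore()
```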
As for other options, [the `checkpoint` package](https://github.com/RevolutionAnalytics/checkpoint/wiki) from Microsoft’s Revolution Analytics works like `packrat` but you simply `checkpoint()` your project for a given *date*. This allows you to call the packages from that date into a private library for that project. It works by fetching the packages from the [Microsoft R Application Network (MRAN)](https://mran.microsoft.com/), which is a daily snapshot of [CRAN](https://cran.r-project.org/). Note that this doesn’t permit control of packages that are hosted anywhere other than CRAN, such as Bioconductor or GitHub, and relies on Microsoft continuing to snapshot and store CRAN copies in MRAN.

Another option is `jetpack`, which is different to `packrat` because it uses a DESCRIPTION file to list your dependencies. [DESCRIPTION files are used in package development](http://r-pkgs.had.co.nz/description.html) to store information, including that package’s dependencies. This is a lightweight option and can be run from the command line.

Paid options also exist, but are obviously less accessible and require maintenance. One example is [RStudio’s Package Manager](https://www.rstudio.com/products/package-manager/).

#### 11\.2\.2\.2 Virtual environments in Python

In Python we can easily create an isolated environment for our project and load packages into it. This is possible with tools like [`virtualenv` and `Pipenv`](https://docs.python-guide.org/dev/virtualenvs/). You can set up a virtual environment in your project folder, activate it, install any packages you need and then record them in a file for use in future.

One way to do this is with `virtualenv`. After installation and having navigated to your project’s home folder, you can follow something like this from the command line:

```
virtualenv venv # create virtual environment folder
source venv/bin/activate # activate the environment
pip install packageName # install packages you need
pip freeze > requirements.txt # save package-version list
deactivate # deactivate the environment when done
```

When another user downloads your version\-controlled project folder, the requirements.txt file will be there. Now they can create a virtual environment on their machine following the first three commands above, but rather than running `pip install packageName` for each package they need, they can automate the process by installing everything from the requirements.txt file with:

```
pip install -r requirements.txt
```

This will download the packages one by one into the virtual environment in their copy of the project. Now they’ll be using the same packages you were when you developed your project.

11\.3 Containers
----------------

### 11\.3\.1 Theory

Good package management deals with one of the major problems of dependency hell. But the problem is bigger. Collaborators could still encounter errors if they:

* try to run your code in a later version of the language you used during development
* use a different or updated [Integrated Development Environment](https://en.wikipedia.org/wiki/Integrated_development_environment) (IDE, like RStudio or Jupyter Notebooks)
* try to re\-run the analysis on a different system, for example if they run the code on a Linux machine but the original was built on a Windows machine

What you really want to do is create a virtual computer inside your computer – a *container* – with everything you need to recreate the analysis under consistent conditions, regardless of who you are and what equipment you’re using.
Imagine one of those ubiquitous [shipping containers](https://en.wikipedia.org/wiki/Intermodal_container). They:

* can hold different cargo
* provide an environment isolated from the outside world
* can be transported by various methods

This is what we want for our project as well. We want to put whatever we want inside, we want it to be isolated, and we want to be able to run it from anywhere.

### 11\.3\.2 Docker

Docker can seem daunting at first. It works like this:

1. Create a ‘dockerfile’. This is like a plain\-text recipe that will build from scratch everything you need to recreate a project. It’s just a text file that you can put under version control.
2. Run the dockerfile to generate a Docker ‘image’. The image is an instance of the environment and everything you need to recreate your analysis. It’s a delicious cake you made following the recipe.
3. Other people can follow the dockerfile recipe to make their own copies of the delicious image cake. Each running instance of an image is called a container.

You can learn more about this process by [following the curriculum on Docker’s website](https://docker-curriculum.com/). You can also read about the use of Docker [in the Department for Work and Pensions](https://dwpdigital.blog.gov.uk/2018/05/18/using-containers-to-deliver-our-data-projects/) (DWP). Phil Chapman [wrote more about the technical side of this process](https://chapmandu2.github.io/post/2018/05/26/reproducible-data-science-environments-with-docker/).

### 11\.3\.3 Making it easier

You don’t have to build everything from scratch. [Docker hub](https://hub.docker.com/) is a big library of pre\-prepared container images. For example, the [rocker project on Docker hub](https://hub.docker.com/u/rocker) lists a number of images containing R\-specific tools, like [rocker/tidyverse](https://hub.docker.com/r/rocker/tidyverse), which contains R, RStudio and [the tidyverse packages](https://tidyverse.tidyverse.org/). You can specify a rocker image in your dockerfile to make your life easier. You can learn more in the [rOpenSci labs tutorial](http://ropenscilabs.github.io/r-docker-tutorial/).

As well as rocker, R users can set up Docker from within an interactive R session: [the `containerit` package](https://o2r.info/containerit/index.html) lets you create a dockerfile given the current state of your session. This simplifies the process a great deal. R users can also read [Docker for the useR](https://github.com/noamross/nyhackr-docker-talk) by Noam Ross and [an introduction to Docker for R users](https://colinfay.me/docker-r-reproducibility/) by Colin Fay.
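As a rough sketch of that `containerit` route, assuming the `dockerfile()` and `write()` interface described in the package documentation, you could generate a dockerfile for your current session like this:

```
library(containerit)

# Build a dockerfile object describing the current R session:
# a base image plus instructions to install the attached packages
df <- dockerfile(from = utils::sessionInfo())

# Write it out so it can be version controlled and built with Docker
write(df, file = "Dockerfile")
```

The resulting dockerfile can then be built and run with the usual Docker tooling.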
The maintainers of your dependencies may have changed the code and it no longer works as you expect. Changes might not impact your publication. If changes do have an impact, the best case is that you’ll get a helpful error message. At worst, your code will execute with an imperceptible but impactful error. Maybe a rounding function now rounds to the nearest 10 instead of the nearest 1000\. The problem is compounded if your dependencies depend on other dependencies, or if two of your dependencies require conflicting versions of a third dependency. You could get stuck in so\-called [dependency hell](https://en.wikipedia.org/wiki/Dependency_hell). The bottom line: your publication is dependent on particular software *and* its state at a given time. How can you deal with this? This chapter considers a few possibilities given variation in tools, techniques and IT restrictions. ### 11\.1\.2 Think lightweight First, think about what you can do to reduce the chance of problems. To put it succinctly, [the tinyverse philosophy of dependency management](http://www.tinyverse.org/) suggests that: > Lightweight is the right weight To achieve this you could: * minimise the number of dependencies and remove redundancy where possible * avoid depending on packages that in turn have many dependencies * restrict yourself to ‘stable’ packages for which recent changes were restricted to minor updates and bug\-fixes * review regularly your dependencies to establish if better alternatives exist ### 11\.1\.3 Record the packages and versions It’s not enough to simply minimise your dependencies. You need to think about how this impacts the reproducibility of your project. To ensure that your scripts are executed in the same way next time, you need to record the packages and their versions in some way. Then you or a colleague can recreate the environment in which the outputs were produced the first time round. ### 11\.1\.1 Limitations Most statistical publications are [updated on a scheduled basis](https://www.gov.uk/government/statistics/announcements) when new data become available. It’s possible that you’ll get some errors when you reuse your RAP project code for the next update, even if everything worked perfectly last time. Why? The maintainers of your dependencies may have changed the code and it no longer works as you expect. Changes might not impact your publication. If changes do have an impact, the best case is that you’ll get a helpful error message. At worst, your code will execute with an imperceptible but impactful error. Maybe a rounding function now rounds to the nearest 10 instead of the nearest 1000\. The problem is compounded if your dependencies depend on other dependencies, or if two of your dependencies require conflicting versions of a third dependency. You could get stuck in so\-called [dependency hell](https://en.wikipedia.org/wiki/Dependency_hell). The bottom line: your publication is dependent on particular software *and* its state at a given time. How can you deal with this? This chapter considers a few possibilities given variation in tools, techniques and IT restrictions. ### 11\.1\.2 Think lightweight First, think about what you can do to reduce the chance of problems. 
To put it succinctly, [the tinyverse philosophy of dependency management](http://www.tinyverse.org/) suggests that: > Lightweight is the right weight To achieve this you could: * minimise the number of dependencies and remove redundancy where possible * avoid depending on packages that in turn have many dependencies * restrict yourself to ‘stable’ packages for which recent changes were restricted to minor updates and bug\-fixes * review regularly your dependencies to establish if better alternatives exist ### 11\.1\.3 Record the packages and versions It’s not enough to simply minimise your dependencies. You need to think about how this impacts the reproducibility of your project. To ensure that your scripts are executed in the same way next time, you need to record the packages and their versions in some way. Then you or a colleague can recreate the environment in which the outputs were produced the first time round. 11\.2 Approaches ---------------- ### 11\.2\.1 Version numbers Maintainers signal updates by increasing the version number of their software. This could be a simple patch of an earlier version’s bug (e.g. version 3\.2\.7 replaces 3\.2\.6\), or perhaps a major *breaking* change (e.g. version 3\.2\.6 is update to version 4\.0\.0\). There are many ways to record each of the packages used in our analysis and their version numbers. In R you could, for example, use the `session_info()` function from `devtools` package. This prints details about the current state of the working environment. ``` devtools::session_info() ``` ``` ─ Session info ───────────────────────────────────────── setting value version R version 3.5.2 (2018-12-20) os macOS High Sierra 10.13.6 system x86_64, darwin15.6.0 ui RStudio language (EN) collate en_GB.UTF-8 ctype en_GB.UTF-8 tz Europe/London date 2019-03-01 ─ Packages ──────────────────────────────────────────── package * version date lib source assertthat 0.2.0 2017-04-11 [1] CRAN (R 3.5.0) backports 1.1.3 2018-12-14 [1] CRAN (R 3.5.0) bookdown 0.7 2018-02-18 [1] CRAN (R 3.5.0) callr 3.1.1 2018-12-21 [1] CRAN (R 3.5.0) ... ``` You could do something like `pkgs <- devtools::session_info()$packages` to save a dataframe of the packages and versions. You can achieve a similar thing for Python with `pip freeze` in a shell script. ``` pip freeze ``` ``` ## alabaster==0.7.10 ## anaconda-client==1.6.14 ## anaconda-navigator==1.8.7 ## anaconda-project==0.8.2 ## appnope==0.1.0 ## appscript==1.0.1 ... ``` You can save this information with something like `pip freeze > requirements.txt` in the shell. The packages should be ‘pinned’ to specific versions, meaning that they’re in the form `packageName==1.3.2` rather than `packageName>=1.3.2`. We’re interested in storing *specific versions*, not *specific versions or newer*. But simply saving this information in your project folder isn’t good dependency control. It: * would be tedious for analysts to read these reports and download each recorded package version one\-by\-one * records *every* package and its version *on your whole system*, not just the ones relevant to your project * isn’t a reproducible or automated process ### 11\.2\.2 Environments for dependency management Ideally we want to automate the process of recording packages and their version numbers and have them installed in an isolated environment that’s specific to our project. Doing this makes the project more portable – you could run it easily from another machine that’s configured differently to your own – and it would therefore be more reproducible. 
#### 11\.2\.2\.1 Package managers in R There is currently no consensus approach for package management in R. Below are a few options, but this is a non\-exhaustive list. The [`packrat` package](https://rstudio.github.io/packrat/) is commonly used but [has known issues](https://rstudio.github.io/packrat/limitations.html). The RAP community has noted in particular that it has a problem compiling older package versions on Windows. [Join the discussion for more information](https://github.com/ukgovdatascience/rap_companion/issues/86). [A `packrat` walkthrough is available](https://rstudio.github.io/packrat/walkthrough.html), but the basic process is: 1. Activate ‘packrat mode’ in your project folder with `init()`, which records and snapshots the packages you’ve called in your scripts. 2. Install new packages as usual, except they’re now saved to a *private package repository* within the project, rather than your local machine. 3. By default, regular snapshots are taken to record the state of dependencies, but you can force one with `snapshot()`. 4. When opening the project fresh on a new machine, Packrat automates the process of fetching the packages – with their recorded version numbers – and storing them in a private package library created on the collaborator’s machine. As for other options, [the `checkpoint` package](https://github.com/RevolutionAnalytics/checkpoint/wiki) from Microsoft’s Revolution Analytics works like `packrat` but you simply `checkpoint()` your project for a given *date*. This allows you to call the packages from that date into a private library for that project. It works by fetching the packages from the [Microsoft R Application Network (MRAN)](https://mran.microsoft.com/), which is a daily snapshot of [CRAN](https://cran.r-project.org/). Note that this doesn’t permit control of packages that are hosted anywhere other than CRAN, such as Bioconductor or GitHub, and relies on Microsoft continuing to snapshot and store CRAN copies in MRAN. Another option is `jetpack`, which is different to `packrat` because it uses a DESCRIPTION file to list your dependencies. [DESCRIPTION files are used in package development](http://r-pkgs.had.co.nz/description.html) to store information, including that package’s dependencies. This is a lightweight option and can be run from the command line. Paid options also exist, but are obviously less accessible and require maintenance. One example is [RStudio’s Package Manager](https://www.rstudio.com/products/package-manager/). #### 11\.2\.2\.2 Virtual environments in Python In Python we can easily create an isolated environment for our project and load packages into it. This is possible with tools like [`virtualenv` and `Pipenv`](https://docs.python-guide.org/dev/virtualenvs/). You can set up a virtual environment in your project folder, activate it, install any packages you need and then record them in a file for use in future. One way to do this is with `virtualenv`. After installation and having navigated to your project’s home folder, you can follow something like this from the command line: ``` virtualenv venv # create virtual environment folder source venv/bin/activate # activate the environment pip install packageName # install packages you need pip freeze > requirements.txt # save package-version list deactivate # deactivate the environment when done ``` When another user downloads your version\-controlled project folder, the requirements.txt file will be there. 
Now they can create a virtual environment on their machine following steps 1 to 3 above, but rather than `pip install packageName` for each package they need, they can automate the process by installing everything from the requirements.txt file with: ``` pip install -r requirements.txt ``` This will download the packages one by one into the virtual environment in their copy of the project virtual environment. Now they’ll be using the same packages you were when you developed your project. ### 11\.2\.1 Version numbers Maintainers signal updates by increasing the version number of their software. This could be a simple patch of an earlier version’s bug (e.g. version 3\.2\.7 replaces 3\.2\.6\), or perhaps a major *breaking* change (e.g. version 3\.2\.6 is update to version 4\.0\.0\). There are many ways to record each of the packages used in our analysis and their version numbers. In R you could, for example, use the `session_info()` function from `devtools` package. This prints details about the current state of the working environment. ``` devtools::session_info() ``` ``` ─ Session info ───────────────────────────────────────── setting value version R version 3.5.2 (2018-12-20) os macOS High Sierra 10.13.6 system x86_64, darwin15.6.0 ui RStudio language (EN) collate en_GB.UTF-8 ctype en_GB.UTF-8 tz Europe/London date 2019-03-01 ─ Packages ──────────────────────────────────────────── package * version date lib source assertthat 0.2.0 2017-04-11 [1] CRAN (R 3.5.0) backports 1.1.3 2018-12-14 [1] CRAN (R 3.5.0) bookdown 0.7 2018-02-18 [1] CRAN (R 3.5.0) callr 3.1.1 2018-12-21 [1] CRAN (R 3.5.0) ... ``` You could do something like `pkgs <- devtools::session_info()$packages` to save a dataframe of the packages and versions. You can achieve a similar thing for Python with `pip freeze` in a shell script. ``` pip freeze ``` ``` ## alabaster==0.7.10 ## anaconda-client==1.6.14 ## anaconda-navigator==1.8.7 ## anaconda-project==0.8.2 ## appnope==0.1.0 ## appscript==1.0.1 ... ``` You can save this information with something like `pip freeze > requirements.txt` in the shell. The packages should be ‘pinned’ to specific versions, meaning that they’re in the form `packageName==1.3.2` rather than `packageName>=1.3.2`. We’re interested in storing *specific versions*, not *specific versions or newer*. But simply saving this information in your project folder isn’t good dependency control. It: * would be tedious for analysts to read these reports and download each recorded package version one\-by\-one * records *every* package and its version *on your whole system*, not just the ones relevant to your project * isn’t a reproducible or automated process ### 11\.2\.2 Environments for dependency management Ideally we want to automate the process of recording packages and their version numbers and have them installed in an isolated environment that’s specific to our project. Doing this makes the project more portable – you could run it easily from another machine that’s configured differently to your own – and it would therefore be more reproducible. #### 11\.2\.2\.1 Package managers in R There is currently no consensus approach for package management in R. Below are a few options, but this is a non\-exhaustive list. The [`packrat` package](https://rstudio.github.io/packrat/) is commonly used but [has known issues](https://rstudio.github.io/packrat/limitations.html). The RAP community has noted in particular that it has a problem compiling older package versions on Windows. 
[Join the discussion for more information](https://github.com/ukgovdatascience/rap_companion/issues/86). [A `packrat` walkthrough is available](https://rstudio.github.io/packrat/walkthrough.html), but the basic process is: 1. Activate ‘packrat mode’ in your project folder with `init()`, which records and snapshots the packages you’ve called in your scripts. 2. Install new packages as usual, except they’re now saved to a *private package repository* within the project, rather than your local machine. 3. By default, regular snapshots are taken to record the state of dependencies, but you can force one with `snapshot()`. 4. When opening the project fresh on a new machine, Packrat automates the process of fetching the packages – with their recorded version numbers – and storing them in a private package library created on the collaborator’s machine. As for other options, [the `checkpoint` package](https://github.com/RevolutionAnalytics/checkpoint/wiki) from Microsoft’s Revolution Analytics works like `packrat` but you simply `checkpoint()` your project for a given *date*. This allows you to call the packages from that date into a private library for that project. It works by fetching the packages from the [Microsoft R Application Network (MRAN)](https://mran.microsoft.com/), which is a daily snapshot of [CRAN](https://cran.r-project.org/). Note that this doesn’t permit control of packages that are hosted anywhere other than CRAN, such as Bioconductor or GitHub, and relies on Microsoft continuing to snapshot and store CRAN copies in MRAN. Another option is `jetpack`, which is different to `packrat` because it uses a DESCRIPTION file to list your dependencies. [DESCRIPTION files are used in package development](http://r-pkgs.had.co.nz/description.html) to store information, including that package’s dependencies. This is a lightweight option and can be run from the command line. Paid options also exist, but are obviously less accessible and require maintenance. One example is [RStudio’s Package Manager](https://www.rstudio.com/products/package-manager/). #### 11\.2\.2\.2 Virtual environments in Python In Python we can easily create an isolated environment for our project and load packages into it. This is possible with tools like [`virtualenv` and `Pipenv`](https://docs.python-guide.org/dev/virtualenvs/). You can set up a virtual environment in your project folder, activate it, install any packages you need and then record them in a file for use in future. One way to do this is with `virtualenv`. After installation and having navigated to your project’s home folder, you can follow something like this from the command line: ``` virtualenv venv # create virtual environment folder source venv/bin/activate # activate the environment pip install packageName # install packages you need pip freeze > requirements.txt # save package-version list deactivate # deactivate the environment when done ``` When another user downloads your version\-controlled project folder, the requirements.txt file will be there. Now they can create a virtual environment on their machine following steps 1 to 3 above, but rather than `pip install packageName` for each package they need, they can automate the process by installing everything from the requirements.txt file with: ``` pip install -r requirements.txt ``` This will download the packages one by one into the virtual environment in their copy of the project virtual environment. Now they’ll be using the same packages you were when you developed your project. 
#### 11\.2\.2\.1 Package managers in R There is currently no consensus approach for package management in R. Below are a few options, but this is a non\-exhaustive list. The [`packrat` package](https://rstudio.github.io/packrat/) is commonly used but [has known issues](https://rstudio.github.io/packrat/limitations.html). The RAP community has noted in particular that it has a problem compiling older package versions on Windows. [Join the discussion for more information](https://github.com/ukgovdatascience/rap_companion/issues/86). [A `packrat` walkthrough is available](https://rstudio.github.io/packrat/walkthrough.html), but the basic process is: 1. Activate ‘packrat mode’ in your project folder with `init()`, which records and snapshots the packages you’ve called in your scripts. 2. Install new packages as usual, except they’re now saved to a *private package repository* within the project, rather than your local machine. 3. By default, regular snapshots are taken to record the state of dependencies, but you can force one with `snapshot()`. 4. When opening the project fresh on a new machine, Packrat automates the process of fetching the packages – with their recorded version numbers – and storing them in a private package library created on the collaborator’s machine. As for other options, [the `checkpoint` package](https://github.com/RevolutionAnalytics/checkpoint/wiki) from Microsoft’s Revolution Analytics works like `packrat` but you simply `checkpoint()` your project for a given *date*. This allows you to call the packages from that date into a private library for that project. It works by fetching the packages from the [Microsoft R Application Network (MRAN)](https://mran.microsoft.com/), which is a daily snapshot of [CRAN](https://cran.r-project.org/). Note that this doesn’t permit control of packages that are hosted anywhere other than CRAN, such as Bioconductor or GitHub, and relies on Microsoft continuing to snapshot and store CRAN copies in MRAN. Another option is `jetpack`, which is different to `packrat` because it uses a DESCRIPTION file to list your dependencies. [DESCRIPTION files are used in package development](http://r-pkgs.had.co.nz/description.html) to store information, including that package’s dependencies. This is a lightweight option and can be run from the command line. Paid options also exist, but are obviously less accessible and require maintenance. One example is [RStudio’s Package Manager](https://www.rstudio.com/products/package-manager/). #### 11\.2\.2\.2 Virtual environments in Python In Python we can easily create an isolated environment for our project and load packages into it. This is possible with tools like [`virtualenv` and `Pipenv`](https://docs.python-guide.org/dev/virtualenvs/). You can set up a virtual environment in your project folder, activate it, install any packages you need and then record them in a file for use in future. One way to do this is with `virtualenv`. After installation and having navigated to your project’s home folder, you can follow something like this from the command line: ``` virtualenv venv # create virtual environment folder source venv/bin/activate # activate the environment pip install packageName # install packages you need pip freeze > requirements.txt # save package-version list deactivate # deactivate the environment when done ``` When another user downloads your version\-controlled project folder, the requirements.txt file will be there. 
Now they can create a virtual environment on their machine following steps 1 to 3 above, but rather than `pip install packageName` for each package they need, they can automate the process by installing everything from the requirements.txt file with: ``` pip install -r requirements.txt ``` This will download the packages one by one into the virtual environment in their copy of the project virtual environment. Now they’ll be using the same packages you were when you developed your project. 11\.3 Containers ---------------- ### 11\.3\.1 Theory Good package management deals with one of the major problems of dependency hell. But the problem is bigger. Collaborators could still encounter errors if they: * try to run your code in a later version of the language you used during development * use a different or updated [Integrated Development Environment](https://en.wikipedia.org/wiki/Integrated_development_environment) (IDE, like RStudio or Jupyter Notebooks) * try to re\-run the analysis on a different system, like if they try to run code on a Linux machine but the original was built on a Microsoft machine What you really want to do is create a virtual computer inside your computer – a *container* – with everything you need to recreate the analysis under consistent conditions, regardless of who you are and what equipment you’re using. Imagine one of those ubiquitous [shipping containers](https://en.wikipedia.org/wiki/Intermodal_container). They are: * capable of holding different cargo * can provide an isolated environment from the outside world * can be transported by various transport methods This is what we want for our project as well. We want to put whatever we want inside, we want it to be isolated and we want to be able to run it from anywhere. ### 11\.3\.2 Docker Docker can seem daunting at first. It works like this: 1. Create a ‘dockerfile’. This is like a plain\-text recipe that will build from scratch everything you need to recreate a project. It’s just a textfile that you can put under version control. 2. Run the dockerfile to generate a Docker ‘image’. The image is an instance of the environment and everything you need to recreate your analysis. It’s a delicious cake you made following the recipe. 3. Other people can follow the dockerfile recipe to make their own copies of the delicious image cake. Each running instance of an image is called a container. You can learn more about this process by [following the curriculum on Docker’s website](https://docker-curriculum.com/). You can also read about the use of Docker [in the Department for Work and Pensions](https://dwpdigital.blog.gov.uk/2018/05/18/using-containers-to-deliver-our-data-projects/) (DWP). Phil Chapman [wrote more about the technical side of this process](https://chapmandu2.github.io/post/2018/05/26/reproducible-data-science-environments-with-docker/). ### 11\.3\.3 Making it easier You don’t have to build everything from scratch. [Docker hub](https://hub.docker.com/) is a big library of pre\-prepared container images. For example, the [rocker project on Docker hub](https://hub.docker.com/u/rocker) lists a number of images containing R\-specific tools like [rocker/tidyverse](https://hub.docker.com/r/rocker/tidyverse) that contains R, RStudio and [the tidyverse packages](https://tidyverse.tidyverse.org/). You can specify a rocker image in your dockerfile to make your life easier. 
Learn more from the [rOpenSci Labs Docker tutorial for R users](http://ropenscilabs.github.io/r-docker-tutorial/).

As well as rocker, R users can set up Docker from within an interactive R session: [the `containerit` package](https://o2r.info/containerit/index.html) lets you create a dockerfile given the current state of your session. This simplifies the process a great deal. R users can also read [Docker for the useR](https://github.com/noamross/nyhackr-docker-talk) by Noam Ross and [an introduction to Docker for R users](https://colinfay.me/docker-r-reproducibility/) by Colin Fay.
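As a rough sketch of that `containerit` workflow (our own illustration rather than an official example, so check the package documentation for the exact interface), you might generate and save a dockerfile describing your current session like this:

```
library(containerit)

# Describe the current session (R version and loaded packages) as a dockerfile object
df <- dockerfile(from = utils::sessionInfo())

# Save it so it can be version controlled and built with `docker build`
write(df, file = "Dockerfile")
```

The resulting Dockerfile can then live alongside your project code in version control.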
Chapter 12 Quality Assurance of the pipeline
============================================

All the testing we have described so far is to do with the code, and ensuring that the code does what we expect it to, but because we have written an [R package](https://github.com/ukgovdatascience/eesectors), it's also very easy for us to institute tests for the consistency of the data at the time the data is loaded. We may also wish to employ defensive programming against potential errors, and consider how we might flag these for the user and/or how our pipeline might recover from such errors.

12.1 Testing the input data
----------------------------

If our RAP were a sausage factory, the data would be the input meat. Given that we do not own the input data, nor are we responsible for its preparation, we should plan for how we can protect our pipeline against a change in input data format or any anomalous data therein.

The list of tests that we might want to run is endless, and the scope of the tests should very much be dictated by the team which has the expert knowledge of the data. In the [eesectors](https://github.com/ukgovdatascience/eesectors) package we implemented two very simple checks, but these could easily be expanded. The simplest of these is a test for outliers: since the data for the economic estimates are longitudinal, i.e. stretching back several years, we are able to compare the most recent values with the values from previous years. If the latest values lie within a threshold determined statistically from the other values then the data passes; if not, a warning is raised.

These kinds of automated tests are repeated every time the data are loaded, reducing the burden of QA and the scope for human error, and freeing up statistician time for identifying more subtle data quality issues which might otherwise go unnoticed.

12.2 Murphy's Law and errors in your pipeline
----------------------------------------------

Paraphrasing from [Advanced R](http://adv-r.had.co.nz/beyond-exception-handling.html) by Hadley Wickham:

If something can go wrong, it will: the format of the spreadsheet you normally receive your raw data in changes, the server you're talking to may be down, your Wi-Fi drops. Any such problem may stop a piece of your pipeline (code) from doing what it is intended to do. This is not a bug; the problem did not originate from within the code itself. However, if the pipeline downstream is dependent on this code acting as intended then you have a problem: you need to deal with the error somehow. Errors aren't caused by bugs *per se*, but neglecting to handle an error appropriately is a bug.

12.3 Error handling
--------------------

Not all problems are unexpected. For example, an input data file may be missing. We can often anticipate some of these likely problems by thinking about our users and how the code we write might be used or misunderstood. If something goes wrong, we want the user to know about it. We can communicate with the user using a variety of conditions: messages, warnings and errors. If we want to let the user know about something fairly innocuous, or simply keep them informed, we can use the `message()` function. Sometimes we might want to draw the user's attention to something that might be problematic without stopping the code from running, using `warning()`. If there's no way for the code to execute sensibly then a fatal error, produced with `stop()`, may be preferred.
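As a minimal, self-contained illustration of these three condition functions (a sketch of ours, not code from `eesectors`):

```
check_input <- function(x) {
  message("Checking the input data...")      # keep the user informed

  if (any(x < 0, na.rm = TRUE)) {
    # something looks odd but we can carry on: raise a warning
    warning("x contains negative values; check the source data")
  }

  if (anyNA(x)) {
    # we cannot sensibly continue: raise a fatal error
    stop("x cannot contain any missing values")
  }

  invisible(x)
}

check_input(c(10, -2, 7))   # emits the message, then a warning, and returns invisibly
```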
As an example we look at code from the RAP `eesectors` [package](https://github.com/DCMSstats/eesectors/) produced in collaboration with DCMS. Specifically, we look at a snippet of code from the `year_sector_data` [function](https://github.com/DCMSstats/eesectors/blob/master/R/year_sector_data.R).

```
message('Checking x does not contain missing values...')
if (anyNA(x)) stop("x cannot contain any missing values")

message('Checking for the correct number of rows...')
if (nrow(x) != length(unique(x$sector)) * length(unique(x$year))) {
  warning("x does not appear to be well formed. nrow(x) should equal length(unique(x$sector)) * length(unique(x$year)). Check the structure of x.")
}

message('...passed')
```

We'll work our way through the code above, step by step. The first line informs the user of the quality assurance that is being automatically conducted, i.e. that no data are missing (`NA`). The second line uses a logical `if` statement to assess this on `x`; if the condition is true, we use `stop()` to produce a fatal error with a descriptive message, as below.

```
x <- c("culture", "sport", NA)
message('Checking x does not contain missing values...')
```

```
## Checking x does not contain missing values...
```

```
if (anyNA(x)) stop("x cannot contain any missing values")
```

```
## Error in eval(expr, envir, enclos): x cannot contain any missing values
```

The third check (indirectly) confirms that the data contain a value for each sector in each year. If not, it raises a warning rather than an error, as this could happen for a valid reason (e.g. a change of name of a factor level), but it's better to let the user know than to let it quietly pass. Thus all the expert domain knowledge can be incorporated into the code through condition handling, providing transparent quality assurance.

These informative messages are useful, but when used in conjunction with `tryCatch` we can implement our own custom responses to a message, warning or error (this is explained in the [relevant Advanced R chapter](http://adv-r.had.co.nz/Exceptions-Debugging.html)). We demonstrate a simplified example from the `eesectors` package where an informative message is provided. This is achieved by wrapping the main body of the function within the `tryCatch` function.

```
# Define as a method
figure3.1 <- function(x, ...) {

  out <- tryCatch(
    expr = {
      # the main body of the function: draw the plot and return it
      p <- plot(x, ...)
      return(p)
    },
    warning = function(w) {
      # re-raise any warning with context about where it came from
      warning('Warning produced running figure3.1(): ', w)
    },
    error = function(e) {
      # re-raise any error with context about where it came from
      stop('Error produced running figure3.1(): ', e)
    },
    finally = {}
  )

}
```

The body of a `tryCatch()` must be a single expression; using `{` we combine several expressions into a single form. This is a simple addition to our function, but it's powerful in that it provides the user with more information for anticipated problems.

Relatedly, wrapping a call in `try()` ensures that evaluation continues even if that call fails: normally an error would cause evaluation to stop, but here we get the error and the next line of code still runs.

```
try(print(this_object_does_not_exist)); print("What happens if this is not wrapped in try?")
```

```
## [1] "What happens if this is not wrapped in try?"
```

12.4 Error logging
-------------------

We are increasingly using R and software packages like our RAP in "operational" settings that require robust error handling and logging.
In this section we describe a quick-and-dirty way to get Python-style multi-level log files by wrapping the `futile.logger` package, inspired by this [blog post](https://www.r-bloggers.com/python-style-logging-in-r/).

### 12.4.1 Pipeline pitfalls

Our real-world scenario involves an R package that processes raw data (typically from a spreadsheet or SQL table) that is updated periodically. The raw data can come from various different sources, set up by different agencies and usually manually procured or copy-and-pasted together, prioritising human readability over machine readability. Data could be missing or in a different layout from that which we are used to, for a variety of reasons, before it even gets to us. Our R functions within the package then extract these data and perform various checks on them. From there a user (analyst / statistician) may put together a statistical report using Rmarkdown, ultimately resulting in an html document.

This pipeline has lots of steps and the potential to encounter problems throughout. As we develop our RAP for our intended bespoke problem and start to use it in an operational setting, we must ensure that in this chaotic environment we protect ourselves against things going wrong without our realising. One of the methods for working with chaotic situations in operational software is to have lots and lots (and LOTS) of logging.

We take our inspiration from Python, which has the brilliant "logging" module that allows us to quickly set up separate output files for log statements at different levels. This is important as we have different users who may have different needs from the log files. For example, the data scientist / analyst who did the programming for the RAP package may want the detail needed to debug the code when an error is logged, whereas a statistician may prefer to be notified only when an ERROR or something FATAL occurred.

### 12.4.2 Error logging using `futile.logger`

Fortunately there's a package available on CRAN that makes this process easy in R, called `futile.logger`. There are a few concepts that it's helpful to be familiar with before proceeding, which are introduced in this [blog post](https://www.r-bloggers.com/better-logging-in-r-aka-futile-logger-1-3-0-released/) by the package author.

One approach is to replace `tryCatch` with the `ftry` function, which also accepts a `finally` argument. This function integrates `futile.logger` with the error and warning system, so problems are caught by the standard R warning system while also being emitted via `futile.logger`. It is worth thinking about how our earlier code could be adapted to use this function.

The primary use case for `futile.logger` is to write out log messages. There are log writers associated with all the predefined log levels: TRACE, DEBUG, INFO, WARN, ERROR, FATAL. Log messages will only be written if the log level is equal to or more urgent than the current threshold. By default the ROOT logger is set to INFO, but this can be adjusted (using the `flog.threshold` function), facilitating customisation of the error logging to meet the needs of the current user. We demonstrate this hierarchy below by evaluating this code.
```
# library(futile.logger)
futile.logger::flog.debug("This won't print")
futile.logger::flog.info("But this %s", 'will')
```

```
## INFO [2019-03-01 14:51:32] But this will
```

```
futile.logger::flog.warn("As will %s", 'this')
```

```
## WARN [2019-03-01 14:51:32] As will this
```

Recall the checks we wrote earlier with `message()` and `stop()`:

```
x <- c("culture", "sport", NA)
message('Checking x does not contain missing values...')
if (anyNA(x)) stop("x cannot contain any missing values")
```

We start by re-writing the above code using the `futile.logger` high-level interface. As the default setting is the INFO log level, we can use `flog.trace` to hide most of the checks and messages from a typical user.

```
# Data from raw has an error
x <- c("culture", "sport", NA)

### Non-urgent log level
futile.logger::flog.trace("Checking x does not contain missing values...")

### Urgent log level, use capture to print out the data structure
if (anyNA(x)) {
  futile.logger::flog.error("x cannot contain any missing values.", x, capture = TRUE)
}
```

```
## ERROR [2019-03-01 14:51:33] x cannot contain any missing values.
## 
## [1] "culture" "sport" NA
```

```
futile.logger::flog.info("Finished checks.")
```

```
## INFO [2019-03-01 14:51:33] Finished checks.
```

The above example can help the user identify where the pipeline is going wrong by logging the error and capturing the object `x` in which the data are missing. This allows us to track down what's going wrong more quickly.

### 12.4.3 Logging to file

At the moment we write our log to the console by default. If we want to write to a file instead (or as well), we can pass one of the `appender` family of functions to `flog.appender()`:

```
# Print log messages to the console (the default)
futile.logger::flog.appender(futile.logger::appender.console())

# Write log messages to a file
futile.logger::flog.appender(futile.logger::appender.file("rap_companion.log"))

# Write log messages to both the console and a file
futile.logger::flog.appender(futile.logger::appender.tee("rap_companion.log"))
```

12.5 Proof calculation
-----------------------

Regardless of how the results are published or the methodology used, the results need to be checked for correctness. Here we explore how we can use statistics to help us validate the correctness of results in a RAP.

The scientific method of choice to address validity is peer review. This can go as far as having the reviewer implement the analysis as a completely separate and independent process in order to check that the results agree. Such a co-pilot approach fits nicely with the fact that real-life statistical analysis is rarely a one-person activity anymore. In practice, there might be neither the need nor the resources to rebuild entire analyses, but critical parts need to be double-checked. There are a variety of approaches you could try that will suit different problems:

* Pair [programming](https://en.wikipedia.org/wiki/Pair_programming) is one technique from the agile programming world to accommodate this.
* Single programmers coding independently and then comparing results.
* Peer [review](https://help.github.com/articles/about-pull-request-reviews/) of code and tests throughout the development process using GitHub.

In our RAP projects to date we have opted for the third choice, as often our aim is to build programming capability as well as correct and reproducible results through code. We also use [unit tests](test.html#test) to check the critical areas of code by providing an expectation. However, unit tests are more useful for detecting errors in code during development, as they are a manifestation of our expert domain knowledge.
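For example, a unit test providing such an expectation might look something like this minimal sketch (the values and the checks are hypothetical, using the `testthat` package):

```
library(testthat)

test_that("sector GVA values are well formed", {
  gva <- c(culture = 10.2, sport = 5.1)   # hypothetical input values

  expect_false(anyNA(gva))                # no missing values allowed
  expect_true(all(gva >= 0))              # estimates cannot be negative
})
```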
Unit tests are only as comprehensive as the work invested in writing them; conversely, one does not need an infinite number of tests. If you are interested in taking these ideas further and using statistics to help you estimate the number of wrong results in your report as part of your QA process, then read this [blog](https://www.r-bloggers.com/right-or-wrong-validate-numbers-like-a-boss/).

12.6 When should one stop testing software?
--------------------------------------------

Imagine that the team developing a new RAP R package needs to structure a test plan before the publication of their report. There is an (unknown) number of bugs in the package. The team starts testing at time zero and subsequently finds an increasing number of bugs as the test period goes on. Such a testing process mimics the example of [Dalal and Mallows](http://www.jstor.org/stable/2289319) (1988) from the testing of a large software system at a telecommunications research company (the accompanying figure, a plot of the cumulative number of bugs found over the test period, is not reproduced here). We see that the number of bugs found appears to level off. The question now is: how long should we continue testing before releasing? For a discussion of this problem, see this [blog](http://staff.math.su.se/hoehle/blog/2016/05/06/when2stop.html), from which we have paraphrased.
Chapter 13 Producing the publication
====================================

> R Markdown provides an unified authoring framework for data science, combining your code, its results, and your prose commentary. R Markdown documents are fully reproducible and support dozens of output formats, like PDFs, Word files, slideshows, and more.
>
> - Hadley Wickham, R for Data Science

Everything I have talked about so far is to do with the production of the statistics themselves, not preparation of the final publication, but there are tools that can help with this too. In our project with DCMS we plan to use [Rmarkdown](http://rmarkdown.rstudio.com/) (a flavour of [markdown](https://en.wikipedia.org/wiki/Markdown)) to incorporate the R code into the same document as the text of the publication. Working in this way means that we can do all of the operations in a single file, so we have no problems with ensuring that our tables or figures are synced with the latest version of the text: everything is produced from a single source. We can even produce templates with boilerplate text like 'this measure increased by X%', and then automatically populate the X with the correct values when we run the code.

13.1 R Markdown overview
-------------------------

Copied and paraphrased from [Hadley Wickham's R for Data Science](https://github.com/hadley/r4ds):

R Markdown provides a unified authoring framework for analytical reporting, combining your code, its results, and your prose commentary. R Markdown documents are fully reproducible and support dozens of output formats, like PDFs, Word files, slideshows, and more.

R Markdown files, as a data product of RAP, are designed primarily for communicating to decision makers or users, who want to focus on the conclusions, not the code behind the analysis.

R Markdown integrates a number of R packages and external tools. This means that help is, by-and-large, not available through `?`. Instead you can rely on the Help within RStudio:

* R Markdown Cheat Sheet: *Help > Cheatsheets > R Markdown Cheat Sheet*,
* R Markdown Reference Guide: *Help > Cheatsheets > R Markdown Reference Guide*.

Both cheatsheets are also available at <http://rstudio.com/cheatsheets>.

### 13.1.1 Prerequisites

You need the **rmarkdown** package, but you don't need to explicitly install it or load it, as RStudio automatically does both when needed.

13.2 R Markdown basics
-----------------------

This file itself is an R Markdown file: a plain text file that has the extension `.Rmd`. (The screenshot of an example `.Rmd` file shown in R for Data Science is not reproduced here.) It contains three important types of content:

1. An (optional) **YAML header** surrounded by `---`s.
2. **Chunks** of R code, each surrounded by a pair of ```` ``` ```` fences.
3. Text mixed with simple text formatting like `# heading` and `_italics_`.

13.3 Text formatting with Markdown
-----------------------------------

Prose in `.Rmd` files is written in Markdown, a lightweight set of conventions for formatting plain text files. Markdown is designed to be easy to read and easy to write. It is also very easy to learn, so we leave it to the reader to pick up through practice.

13.4 Code chunks
-----------------

To run code inside an R Markdown document, you need to insert a chunk. There are three ways to do so:

1. The keyboard shortcut Cmd/Ctrl + Alt + I
2. The "Insert" button icon in the editor toolbar.
3. By manually typing the chunk delimiters ```` ```{r} ```` and ```` ``` ````.

Obviously, Hadley Wickham recommends you learn the keyboard shortcut.
It will save you a lot of time in the long run! You can continue to run the code using the keyboard shortcut that by now you know and love: Cmd/Ctrl + Enter. However, chunks get a new keyboard shortcut: Cmd/Ctrl + Shift + Enter, which runs all the code in the chunk (Cmd/Ctrl + Shift + N runs the next chunk). Think of a chunk like a function. A chunk should be relatively self-contained, and focussed around a single task.

### 13.4.1 Chunking code in RAP

Ask your users what they might prefer: all the code in one chunk at the start, specifying all the variables needed for the rest of the document, to keep the code "out of the way"; or each code chunk occurring adjacent to the relevant statistic, figure or table it generates.

### 13.4.2 Chunk name

Chunks can be given an optional name: ```` ```{r by-name} ````. This has three advantages:

1. You can more easily navigate to specific chunks using the drop-down code navigator in the bottom-left of the script editor.
2. Graphics produced by the chunks will have useful names that make them easier to use elsewhere.
3. You can set up networks of cached chunks to avoid re-performing expensive computations on every run.

There is one chunk name that imbues special behaviour: `setup`. When you're in notebook mode, the chunk named setup will be run automatically once, before any other code is run. This can be used to set default behaviour for all of your chunks as well as a few other special things.

### 13.4.3 Chunk options

Chunk output can be customised with **options**, arguments supplied to the chunk header. Knitr provides almost 60 options that you can use to customize your code chunks. Here we'll cover the most important chunk options that you'll use frequently. You can see the full list at <http://yihui.name/knitr/options/>.

The most important set of options controls whether your code block is executed and what results are inserted in the finished report (a sketch of a chunk header using some of these options follows this list):

* `eval = FALSE` prevents code from being evaluated. (And obviously if the code is not run, no results will be generated.) This is useful for displaying example code, or for disabling a large block of code without commenting each line.
* `include = FALSE` runs the code, but doesn't show the code or results in the final document. Use this for setup code that you don't want cluttering your report.
* `echo = FALSE` prevents code, but not the results, from appearing in the finished file. Use this when writing reports aimed at people who don't want to see the underlying R code.
* `message = FALSE` or `warning = FALSE` prevents messages or warnings from appearing in the finished file.
* `results = 'hide'` hides printed output; `fig.show = 'hide'` hides plots.
* `error = TRUE` causes the render to continue even if code returns an error. This is rarely something you'll want to include in the final version of your report, but can be very useful if you need to debug exactly what is going on inside your `.Rmd`. It's also useful if you're teaching R and want to deliberately include an error. The default, `error = FALSE`, causes knitting to fail if there is a single error in the document.
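For example, a chunk that runs its code but shows only the resulting plot, hiding the code and silencing messages and warnings, might be declared like this (the chunk name and plot are purely illustrative):

````
```{r gva-plot, echo=FALSE, message=FALSE, warning=FALSE}
plot(pressure)  # `pressure` is one of R's built-in example datasets
```
````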
The following table summarises which types of output each option suppresses (a dash indicates that the option suppresses that element):

| Option | Run code | Show code | Output | Plots | Messages | Warnings |
| --- | --- | --- | --- | --- | --- | --- |
| `eval = FALSE` | - | | - | - | - | - |
| `include = FALSE` | | - | - | - | - | - |
| `echo = FALSE` | | - | | | | |
| `results = "hide"` | | | - | | | |
| `fig.show = "hide"` | | | | - | | |
| `message = FALSE` | | | | | - | |
| `warning = FALSE` | | | | | | - |

### 13.4.4 Global options

As you work more with knitr, you will discover that some of the default chunk options don't fit your needs and you want to change them. You can do this by calling `knitr::opts_chunk$set()` in a code chunk. For example, when writing business reports and statistical publications, try:

```
knitr::opts_chunk$set(
  echo = FALSE
)
```

That will hide the code by default, showing only the chunks you deliberately choose to show (with `echo = TRUE`). You might consider setting `message = FALSE` and `warning = FALSE` too, but that would make it harder to debug problems because you wouldn't see any messages in the final document.

### 13.4.5 Inline code

Often a report might have statistics within a sentence of prose. There is a way to embed R code into prose, with [inline code](https://rmarkdown.rstudio.com/lesson-4.html): a snippet written as `` `r some_expression` `` in the text is evaluated and its result inserted in its place. This can be very useful if you mention properties of your data in the text. You might want to write small functions that prettify values into the desired kind of output (e.g. rounding to two digits and suffixing a %). Inline output is indistinguishable from the surrounding text. Inline expressions do not take knitr options.

13.5 YAML header
-----------------

You can control many other "whole document" settings by tweaking the parameters of the YAML header. You might wonder what YAML stands for: it's "yet another markup language", which is designed for representing hierarchical data in a way that's easy for humans to read and write. R Markdown uses it to control many details of the output.

### 13.5.1 Output style

You can output in a variety of file formats and customise the [appearance of your output](https://rmarkdown.rstudio.com/gallery.html).

### 13.5.2 Parameters

R Markdown documents can include one or more parameters whose values can be set when you render the report. Parameters are useful when you want to re-render the same report with distinct values for various key inputs. For example, you might be producing sales reports per branch, exam results by student, or demographic summaries by country. To declare one or more parameters, use the `params` field. In RStudio, you can click the "Knit with Parameters" option in the Knit dropdown menu to set parameters, render, and preview the report in a single user-friendly step. You can customise the dialog by setting other options in the header. See [http://rmarkdown.rstudio.com/developer_parameterized_reports.html#parameter_user_interfaces](http://rmarkdown.rstudio.com/developer_parameterized_reports.html#parameter_user_interfaces) for more details.
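As a minimal sketch (the file name and parameter are hypothetical), a report that declares a `year` entry under `params` in its YAML header could be re-rendered for a different year from an R script:

```
# Re-render the same parameterised report with a different value for `year`
rmarkdown::render(
  "sales_report.Rmd",                 # hypothetical parameterised report
  params = list(year = 2019),         # overrides the default declared in the YAML header
  output_file = "sales_report_2019.html"
)
```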
13.6 RMarkdown exercises
-------------------------

MoJ have created this [useful resource](https://github.com/moj-analytical-services/rmarkdown_training) for practising your RMarkdown. It should be relevant to Civil Servants seeking a "real" example to reproduce.

13.7 Further Reading
---------------------

R Markdown is still relatively young, and is still growing rapidly. The best place to stay on top of innovations is the official R Markdown website: <http://rmarkdown.rstudio.com> and <https://rmarkdown.rstudio.com/articles.html>.

You can write individual chapters using [R markdown](http://r4ds.had.co.nz/r-markdown.html), as one file per chapter. Alternatively you can write the whole publication using [bookdown](https://bookdown.org/yihui/bookdown/). For a basic start in bookdown try this [blog post](http://seankross.com/2016/11/17/How-to-Start-a-Bookdown-Book.html) (we used it to kick off this book).
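As a final sketch (assuming a bookdown project with an `index.Rmd` and the usual configuration files is already in place), the whole publication can then be rebuilt with a single call:

```
# Rebuild the whole book as a gitbook-style website
bookdown::render_book("index.Rmd", output_format = "bookdown::gitbook")
```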
### 13\.4\.1 Chunking code in RAP Ask your users what they might prefer; all the code in one chunk at the start, specifying all the variables needed for the rest of the document, to keep the code “out of the way”, or each code chunk occuring adjacent to the relevant statistic, figure or table generated. ### 13\.4\.2 Chunk name Chunks can be given an optional name: ````{r by-name}`. This has three advantages: 1. You can more easily navigate to specific chunks using the drop\-down code navigator in the bottom\-left of the script editor: 2. Graphics produced by the chunks will have useful names that make them easier to use elsewhere. 3. You can set up networks of cached chunks to avoid re\-performing expensive computations on every run. There is one chunk name that imbues special behaviour: `setup`. When you’re in a notebook mode, the chunk named setup will be run automatically once, before any other code is run. This can be used to set defaut behaviour for all of your chunks as well as a few other special things. ### 13\.4\.3 Chunk options Chunk output can be customised with **options**, arguments supplied to chunk header. Knitr provides almost 60 options that you can use to customize your code chunks. Here we’ll cover the most important chunk options that you’ll use frequently. You can see the full list at <http://yihui.name/knitr/options/>. The most important set of options controls if your code block is executed and what results are inserted in the finished report: * `eval = FALSE` prevents code from being evaluated. (And obviously if the code is not run, no results will be generated). This is useful for displaying example code, or for disabling a large block of code without commenting each line. * `include = FALSE` runs the code, but doesn’t show the code or results in the final document. Use this for setup code that you don’t want cluttering your report. * `echo = FALSE` prevents code, but not the results from appearing in the finished file. Use this when writing reports aimed at people who don’t want to see the underlying R code. * `message = FALSE` or `warning = FALSE` prevents messages or warnings from appearing in the finished file. * `results = 'hide'` hides printed output; `fig.show = 'hide'` hides plots. * `error = TRUE` causes the render to continue even if code returns an error. This is rarely something you’ll want to include in the final version of your report, but can be very useful if you need to debug exactly what is going on inside your `.Rmd`. It’s also useful if you’re teaching R and want to deliberately include an error. The default, `error = FALSE` causes knitting to fail if there is a single error in the document. The following table summarises which types of output each option supressess: | Option | Run code | Show code | Output | Plots | Messages | Warnings | | --- | --- | --- | --- | --- | --- | --- | | `eval = FALSE` | \- | | \- | \- | \- | \- | | `include = FALSE` | | \- | \- | \- | \- | \- | | `echo = FALSE` | | \- | | | | | | `results = "hide"` | | | \- | | | | | `fig.show = "hide"` | | | | \- | | | | `message = FALSE` | | | | | \- | | | `warning = FALSE` | | | | | | \- | ### 13\.4\.4 Global options As you work more with knitr, you will discover that some of the default chunk options don’t fit your needs and you want to change them. You can do this by calling `knitr::opts_chunk$set()` in a code chunk. 
For example, when business reports and statistical publications try: ``` knitr::opts_chunk$set( echo = FALSE ) ``` That will hide the code by default, so only showing the chunks you deliberately choose to show (with `echo = TRUE`). You might consider setting `message = FALSE` and `warning = FALSE`, but that would make it harder to debug problems because you wouldn’t see any messages in the final document. ### 13\.4\.5 Inline code Often a report might have statistics within a sentence of prose. There is a way to embed R code into prose, with [inline code](https://rmarkdown.rstudio.com/lesson-4.html). This can be very useful if you mention properties of your data in the text. You might want to write functions that prettify to create the desired kind of output (i.e. rounding to two digits and a % suffixed). Inline output is indistinguishable from the surrounding text. Inline expressions do not take knitr options. ### 13\.4\.1 Chunking code in RAP Ask your users what they might prefer; all the code in one chunk at the start, specifying all the variables needed for the rest of the document, to keep the code “out of the way”, or each code chunk occuring adjacent to the relevant statistic, figure or table generated. ### 13\.4\.2 Chunk name Chunks can be given an optional name: ````{r by-name}`. This has three advantages: 1. You can more easily navigate to specific chunks using the drop\-down code navigator in the bottom\-left of the script editor: 2. Graphics produced by the chunks will have useful names that make them easier to use elsewhere. 3. You can set up networks of cached chunks to avoid re\-performing expensive computations on every run. There is one chunk name that imbues special behaviour: `setup`. When you’re in a notebook mode, the chunk named setup will be run automatically once, before any other code is run. This can be used to set defaut behaviour for all of your chunks as well as a few other special things. ### 13\.4\.3 Chunk options Chunk output can be customised with **options**, arguments supplied to chunk header. Knitr provides almost 60 options that you can use to customize your code chunks. Here we’ll cover the most important chunk options that you’ll use frequently. You can see the full list at <http://yihui.name/knitr/options/>. The most important set of options controls if your code block is executed and what results are inserted in the finished report: * `eval = FALSE` prevents code from being evaluated. (And obviously if the code is not run, no results will be generated). This is useful for displaying example code, or for disabling a large block of code without commenting each line. * `include = FALSE` runs the code, but doesn’t show the code or results in the final document. Use this for setup code that you don’t want cluttering your report. * `echo = FALSE` prevents code, but not the results from appearing in the finished file. Use this when writing reports aimed at people who don’t want to see the underlying R code. * `message = FALSE` or `warning = FALSE` prevents messages or warnings from appearing in the finished file. * `results = 'hide'` hides printed output; `fig.show = 'hide'` hides plots. * `error = TRUE` causes the render to continue even if code returns an error. This is rarely something you’ll want to include in the final version of your report, but can be very useful if you need to debug exactly what is going on inside your `.Rmd`. It’s also useful if you’re teaching R and want to deliberately include an error. 
13\.5 YAML header
-----------------

You can control many other “whole document” settings by tweaking the parameters of the YAML header. You might wonder what YAML stands for: it’s “yet another markup language”, which is designed for representing hierarchical data in a way that’s easy for humans to read and write. R Markdown uses it to control many details of the output.

### 13\.5\.1 Output style

You can output in a variety of file formats and customise the [appearance of your output](https://rmarkdown.rstudio.com/gallery.html).

### 13\.5\.2 Parameters

R Markdown documents can include one or more parameters whose values can be set when you render the report. Parameters are useful when you want to re\-render the same report with distinct values for various key inputs. For example, you might be producing sales reports per branch, exam results by student, or demographic summaries by country. To declare one or more parameters, use the `params` field. In RStudio, you can click the “Knit with Parameters” option in the Knit dropdown menu to set parameters, render, and preview the report in a single user\-friendly step. You can customise the dialog by setting other options in the header. See [http://rmarkdown.rstudio.com/developer\_parameterized\_reports.html\#parameter\_user\_interfaces](http://rmarkdown.rstudio.com/developer_parameterized_reports.html#parameter_user_interfaces) for more details.
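As a minimal sketch (the report title, file name and parameter names here are illustrative, not taken from any official example), a parameterised report declares its defaults in the YAML header:

```
---
title: "Branch sales report"
output: html_document
params:
  branch: "North"
  year: 2019
---
```

Inside the document the values are then available as a read\-only list, e.g. `params$branch`, and the same report can be re\-rendered for another branch from R with `rmarkdown::render("sales_report.Rmd", params = list(branch = "South"))`.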
13\.6 RMarkdown exercises
-------------------------

MoJ have created this [useful resource](https://github.com/moj-analytical-services/rmarkdown_training) for practising your RMarkdown. It should be particularly relevant to Civil Servants seeking a “real” example to reproduce.

13\.7 Further Reading
---------------------

R Markdown is still relatively young, and is still growing rapidly. The best place to stay on top of innovations is the official R Markdown website: <http://rmarkdown.rstudio.com> and <https://rmarkdown.rstudio.com/articles.html>.

You can write individual chapters using [R markdown](http://r4ds.had.co.nz/r-markdown.html), as one file per chapter. Alternatively you can write the whole publication using [bookdown](https://bookdown.org/yihui/bookdown/). For a basic start in bookdown try this [blog post](http://seankross.com/2016/11/17/How-to-Start-a-Bookdown-Book.html) (we used it to kick off the writing of this book).
Data Databases and Engineering
compgenomr.github.io
http://compgenomr.github.io/book/clustering-grouping-samples-based-on-their-similarity.html
4\.1 Clustering: Grouping samples based on their similarity
------------------------------------------------------------

In genomics, we would very frequently want to assess how our samples relate to each other. Are our replicates similar to each other? Do the samples from the same treatment group have similar genome\-wide signals? Do the patients with similar diseases have similar gene expression profiles? Take the last question for example. We need to define a distance or similarity metric between patients’ expression profiles and use that metric to find groups of patients that are more similar to each other than the rest of the patients. This, in essence, is the general idea behind clustering. We need a distance metric and a method to utilize that distance metric to find self\-similar groups. Clustering is a ubiquitous procedure in bioinformatics as well as any field that deals with high\-dimensional data. It is very likely that every genomics paper containing multiple samples has some sort of clustering. Due to this ubiquity and general usefulness, it is an essential technique to learn.

### 4\.1\.1 Distance metrics

The first required step for clustering is the distance metric. This is simply a measurement of how similar gene expressions are to each other. There are many options for distance metrics and the choice of the metric is quite important for clustering. Consider a simple example where we have four patients and the expression of three genes measured in Table [4\.1](clustering-grouping-samples-based-on-their-similarity.html#tab:expTable). Which patients look similar to each other based on their gene expression profiles?

TABLE 4\.1: Gene expressions from patients

| | IRX4 | OCT4 | PAX6 |
| --- | --- | --- | --- |
| patient1 | 11 | 10 | 1 |
| patient2 | 13 | 13 | 3 |
| patient3 | 2 | 4 | 10 |
| patient4 | 1 | 3 | 9 |

It may not be obvious from the table at first sight, but if we plot the gene expression profile for each patient (shown in Figure [4\.1](clustering-grouping-samples-based-on-their-similarity.html#fig:expPlot)), we will see that the expression profiles of patient 1 and patient 2 are more similar to each other than to those of patient 3 or patient 4\.

FIGURE 4\.1: Gene expression values for different patients. Certain patients have gene expression values that are similar to each other.

But how can we quantify what we see? A simple distance metric between the gene expression vectors of a given patient pair is the sum of the absolute differences between their gene expression values. This can be formulated as follows: \\(d\_{AB}\={\\sum \_{i\=1}^{n}\|e\_{Ai}\-e\_{Bi}\|}\\), where \\(d\_{AB}\\) is the distance between patients A and B, and the \\(e\_{Ai}\\) and \\(e\_{Bi}\\) are expression values of the \\(i\\)th gene for patients A and B. This distance metric is called the **“Manhattan distance”** or **“L1 norm”**.

Another distance metric uses the sum of squared distances and takes the square root of the resulting value; this metric can be formulated as: \\(d\_{AB}\={{\\sqrt {\\sum \_{i\=1}^{n}(e\_{Ai}\-e\_{Bi})^{2}}}}\\). This distance is called the **“Euclidean Distance”** or **“L2 norm”**. This is usually the default distance metric for many clustering algorithms. Due to the squaring operation, values that are very different get a higher contribution to the distance. Because of this, compared to the Manhattan distance, it can be affected more by outliers. But, generally, if the outliers are rare, this distance metric works well.

The last metric we will introduce is the **“correlation distance”**.
This is simply \\(d\_{AB}\=1\-\\rho\\), where \\(\\rho\\) is the Pearson correlation coefficient between two vectors; in our case those vectors are gene expression profiles of patients. Using this distance, gene expression vectors that have a similar pattern will have a small distance, whereas vectors with different patterns will have a large distance. In this case, the linear correlation between vectors matters, although the scale of the vectors might be different.

Now let’s see how we can calculate these distances in R. First, we have our gene expression per patient table.

```
df
```

```
##          IRX4 OCT4 PAX6
## patient1   11   10    1
## patient2   13   13    3
## patient3    2    4   10
## patient4    1    3    9
```

Next, we calculate the distance metrics using the `dist()` function and the `1-cor()` expression.

```
dist(df,method="manhattan")
```

```
##          patient1 patient2 patient3
## patient2        7                  
## patient3       24       27         
## patient4       25       28        3
```

```
dist(df,method="euclidean")
```

```
##           patient1  patient2  patient3
## patient2  4.123106                    
## patient3 14.071247 15.842980          
## patient4 14.594520 16.733201  1.732051
```

```
as.dist(1-cor(t(df))) # correlation distance
```

```
##             patient1    patient2    patient3
## patient2 0.004129405                        
## patient3 1.988522468 1.970725343            
## patient4 1.988522468 1.970725343 0.000000000
```

#### 4\.1\.1\.1 Scaling before calculating the distance

Before we proceed to the clustering, there is one more thing we need to take care of. Should we normalize our data? The scale of the vectors in our expression matrix can affect the distance calculation. Gene expression tables might have some sort of normalization, so the values are in comparable scales. However, if a gene’s expression values are on a much higher scale than the other genes, that gene will affect the distance more than the others when using Euclidean or Manhattan distance. If that is the case, we can scale the variables. The traditional way of scaling variables is to subtract their mean and divide by their standard deviation; this operation is also called “standardization”. If this is done on all genes, each gene will have the same effect on distance measures. The decision to apply scaling ultimately depends on your data and what you want to achieve. If the gene expression values were previously normalized between patients, having genes that dominate the distance metric could have a biological meaning and therefore it may not be desirable to further scale the variables. In R, the standardization is done via the `scale()` function. Here we scale the gene expression values.

```
df
```

```
##          IRX4 OCT4 PAX6
## patient1   11   10    1
## patient2   13   13    3
## patient3    2    4   10
## patient4    1    3    9
```

```
scale(df)
```

```
##                IRX4       OCT4       PAX6
## patient1  0.6932522  0.5212860 -1.0733721
## patient2  1.0194886  1.1468293 -0.6214260
## patient3 -0.7748113 -0.7298004  0.9603856
## patient4 -0.9379295 -0.9383149  0.7344125
## attr(,"scaled:center")
## IRX4 OCT4 PAX6 
## 6.75 7.50 5.75 
## attr(,"scaled:scale")
##     IRX4     OCT4     PAX6 
## 6.130525 4.795832 4.425306
```

### 4\.1\.2 Hierarchical clustering

This is one of the most ubiquitous clustering algorithms. Using this algorithm you can see the relationship of individual data points and the relationships of clusters. This is achieved by successively joining small clusters to each other based on the inter\-cluster distance. Eventually, you get a tree structure or a dendrogram that shows the relationship between the individual data points and clusters. The height at which two clusters are joined in the dendrogram is the distance between those clusters.
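To make the idea of successive merging concrete, here is a toy sketch (not from the book) of single\-linkage agglomeration on the four\-patient table `df` used above; it only prints the merge order and merge heights, which is exactly the information a dendrogram encodes.

```
# Toy sketch of agglomerative clustering (single linkage for simplicity).
# Assumes the 4-patient data frame `df` from the distance examples above.
d <- as.matrix(dist(df))           # pairwise Euclidean distances
clusters <- as.list(rownames(df))  # start with one cluster per patient

while (length(clusters) > 1) {
  # distance between two clusters = smallest pairwise patient distance
  best <- c(Inf, NA, NA)
  for (i in 1:(length(clusters) - 1)) {
    for (j in (i + 1):length(clusters)) {
      dij <- min(d[clusters[[i]], clusters[[j]]])
      if (dij < best[1]) best <- c(dij, i, j)
    }
  }
  cat("merge {", paste(clusters[[best[2]]], collapse = ","), "} and {",
      paste(clusters[[best[3]]], collapse = ","), "} at height", best[1], "\n")
  clusters[[best[2]]] <- c(clusters[[best[2]]], clusters[[best[3]]])
  clusters <- clusters[-best[3]]
}
```

Real implementations are of course far more efficient and support the other linkage rules discussed below.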
Here we can show how to use this on our toy data set from four patients. The base function in R to do hierarchical clustering is `hclust()`. Below, we apply that function on the Euclidean distances between patients. The resulting clustering tree or dendrogram is shown in Figure 4\.2\.

```
d=dist(df)
hc=hclust(d,method="complete")
plot(hc)
```

FIGURE 4\.2: Dendrogram of distance matrix.

In the above code snippet, we have used the `method="complete"` argument without explaining it. The `method` argument defines the criterion that directs how the sub\-clusters are merged. During clustering, starting with single\-member clusters, the clusters are merged based on the distance between them. There are many different ways to define distance between clusters, and based on which definition you use, the hierarchical clustering results change. So the `method` argument controls that. There are a couple of values this argument can take; we list them and their descriptions below:

* **“complete”** stands for “Complete Linkage” and the distance between two clusters is defined as the largest distance between any members of the two clusters.
* **“single”** stands for “Single Linkage” and the distance between two clusters is defined as the smallest distance between any members of the two clusters.
* **“average”** stands for “Average Linkage” or more precisely the UPGMA (Unweighted Pair Group Method with Arithmetic Mean) method. In this case, the distance between two clusters is defined as the average distance between any members of the two clusters.
* **“ward.D2”** and **“ward.D”** stand for different implementations of Ward’s minimum variance method. This method aims to find compact, spherical clusters by selecting clusters to merge based on the change in the cluster variances. The clusters are merged if the increase in the combined variance over the sum of the cluster\-specific variances is the minimum compared to alternative merging operations.

In real life, we would get expression profiles from thousands of genes and we would typically have many more patients than in our toy example. One such data set is gene expression values from 60 bone marrow samples of patients with one of the four main types of leukemia (ALL, AML, CLL, CML) or no\-leukemia controls. We trimmed that data set down to the top 1000 most variable genes to be able to work with it more easily, since genes that are not very variable do not contribute much to the distances between patients. We will now use this data set to cluster the patients and display the values as a heatmap and a dendrogram. The heatmap shows the expression values of genes across patients in a color coded manner. The heatmap function, `pheatmap()`, that we will use performs the clustering as well. The matrix that contains the gene expressions has the genes in the rows and the patients in the columns. Therefore, we will also use a column\-side color code to mark the patients based on their leukemia type. For the hierarchical clustering, we will use Ward’s method designated by the `clustering_method` argument to the `pheatmap()` function. The resulting heatmap is shown in Figure [4\.3](clustering-grouping-samples-based-on-their-similarity.html#fig:heatmap1).
```
library(pheatmap)
expFile=system.file("extdata","leukemiaExpressionSubset.rds",
                    package="compGenomRData")
mat=readRDS(expFile)

# set the leukemia type annotation for each sample
annotation_col = data.frame(
  LeukemiaType =substr(colnames(mat),1,3))
rownames(annotation_col)=colnames(mat)

pheatmap(mat,show_rownames=FALSE,show_colnames=FALSE,
         annotation_col=annotation_col,
         scale = "none",clustering_method="ward.D2",
         clustering_distance_cols="euclidean")
```

FIGURE 4\.3: Heatmap of gene expression values from leukemia patients. Each column represents a patient. Columns are clustered using gene expression and color coded by disease type: ALL, AML, CLL, CML or no\-leukemia.

As we can observe in the heatmap, each cluster has a distinct set of expression values. The main clusters almost perfectly distinguish the leukemia types. Only one CML patient is clustered as a non\-leukemia sample. This could mean that gene expression profiles are enough to classify leukemia type. More detailed analysis and experiments are needed to verify that, but by looking at this exploratory analysis we can decide where to focus our efforts next.

#### 4\.1\.2\.1 Where to cut the tree?

The example above seems like a clear\-cut example where we can pick clusters from the dendrogram by eye. This is mostly due to Ward’s method, where compact clusters are preferred. However, as is usually the case, we do not have patient labels and it would be difficult to tell which leaves (patients) in the dendrogram we should consider as part of the same cluster. In other words, how deep should we cut the dendrogram so that the patient samples still connected via the remaining sub\-dendrograms constitute clusters? The `cutree()` function provides the functionality to output either a desired number of clusters or the clusters obtained from cutting the dendrogram at a certain height. Below, we will cluster the patients with hierarchical clustering using the default method “complete linkage” and cut the dendrogram at a certain height. In this case, you will also observe that changing from Ward’s distance to complete linkage had an effect on the clustering. Now the two clusters that are defined by Ward’s distance are closer to each other and harder to separate from each other, as shown in Figure [4\.4](clustering-grouping-samples-based-on-their-similarity.html#fig:hclustNcut).

```
hcl=hclust(dist(t(mat)))
plot(hcl,labels = FALSE, hang= -1)
rect.hclust(hcl, h = 80, border = "red")
```

FIGURE 4\.4: Dendrogram of Leukemia patients clustered by hierarchical clustering. Rectangles show the clusters we will get if we cut the tree at `height=80`.

```
clu.k5=cutree(hcl,k=5) # cut tree so that there are 5 clusters
clu.h80=cutree(hcl,h=80) # cut tree/dendrogram from height 80
table(clu.k5) # number of samples for each cluster
```

```
## clu.k5
##  1  2  3  4  5 
## 12  3  9 12 24
```

Apart from using arbitrary values for the height or the number of clusters, how can we define clusters more systematically? As this is a general question, we will show how to decide the optimal number of clusters later in this chapter.

### 4\.1\.3 K\-means clustering

Another very common clustering algorithm is k\-means. This method divides or partitions the data points, in our working example the patients, into a pre\-determined number of clusters, “k” (Hartigan and Wong [1979](#ref-hartigan1979algorithm)). Hence, these types of methods are generally called “partitioning” methods. The algorithm is initialized with randomly chosen \\(k\\) centers or centroids.
In a sense, a centroid is a data point with multiple values. In our working example, it is a hypothetical patient with gene expression values. But in the initialization phase, those gene expression values are chosen randomly within the boundaries of the gene expression distributions from real patients. As the next step in the algorithm, each patient is assigned to the closest centroid, and in the next iteration, the centroids are set to the mean of the values of the genes in the cluster. This process of setting centroids and assigning patients to the clusters repeats itself until the sum of squared distances to the cluster centroids is minimized. As you might see, the clustering algorithm starts with random initial centroids. This feature might yield different results for each run of the algorithm. We will now show how to use the k\-means method on the gene expression data set. We will use `set.seed()` for reproducibility. In the wild, you might want to run this algorithm multiple times to see if your clustering results are stable.

```
set.seed(101)

# we have to transpose the matrix with t()
# so that we calculate distances between patients
kclu=kmeans(t(mat),centers=5)

# number of data points in each cluster
table(kclu$cluster)
```

```
## 
##  1  2  3  4  5 
## 12 14 11 12 11
```

Now let us check the percentage of each leukemia type in each cluster. We can visualize this as a table. Looking at the table below, we see that each of the 5 clusters predominantly represents one of the 4 leukemia types or the control patients without leukemia.

```
type2kclu = data.frame(
    LeukemiaType =substr(colnames(mat),1,3),
    cluster=kclu$cluster)

table(type2kclu)
```

```
##             cluster
## LeukemiaType  1  2  3  4  5
##          ALL 12  0  0  0  0
##          AML  0  1  0  0 11
##          CLL  0  0  0 12  0
##          CML  0  1 11  0  0
##          NoL  0 12  0  0  0
```

Another related and maybe more robust algorithm is called **“k\-medoids”** clustering (Reynolds, Richards, Iglesia, et al. [2006](#ref-reynolds2006clustering)). The procedure is almost identical to k\-means clustering, with a couple of differences. In this case, the centroids chosen are real data points, in our case patients, and the metric we are trying to optimize in each iteration is based on the Manhattan distance to the centroid. In k\-means this was based on the sum of squared distances, so Euclidean distance. Below we show how to use the k\-medoids clustering function `pam()` from the `cluster` package.

```
kmclu=cluster::pam(t(mat),k=5) # cluster using k-medoids

# make a data frame with Leukemia type and cluster id
type2kmclu = data.frame(
    LeukemiaType =substr(colnames(mat),1,3),
    cluster=kmclu$cluster)

table(type2kmclu)
```

```
##             cluster
## LeukemiaType  1  2  3  4  5
##          ALL 12  0  0  0  0
##          AML  0 10  1  1  0
##          CLL  0  0  0  0 12
##          CML  0  0  0 12  0
##          NoL  0  0 12  0  0
```

We cannot visualize the clustering from partitioning methods with a tree like we did for hierarchical clustering. Even if we can get the distances between patients, the algorithm does not return the distances between clusters out of the box. However, if we had a way to visualize the distances between patients in 2 dimensions, we could see how patients and clusters relate to each other. It turns out that there is a way to compress the between\-patient distances into a 2\-dimensional plot. There are many ways to do this, and we introduce these dimension\-reduction methods, including the one we will use here, later in this chapter.

For now, we are going to use a method called “multi\-dimensional scaling” and plot the patients in a 2D plot, color coded by their cluster assignments, as shown in Figure [4\.5](clustering-grouping-samples-based-on-their-similarity.html#fig:kmeansmds). We will explain this method in more detail in the [Multi\-dimensional scaling](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#multi-dimensional-scaling) section below.

```
# Calculate distances
dists=dist(t(mat))

# calculate MDS
mds=cmdscale(dists)

# plot the patients in the 2D space
plot(mds,pch=19,col=rainbow(5)[kclu$cluster])

# set the legend for cluster colors
legend("bottomright",
       legend=paste("clu",unique(kclu$cluster)),
       fill=rainbow(5)[unique(kclu$cluster)],
       border=NA,box.col=NA)
```

FIGURE 4\.5: K\-means cluster memberships are shown in a multi\-dimensional scaling plot.

The plot we obtained shows the separation between clusters. However, it does not do a great job of showing the separation between clusters 3 and 4, which represent the CML and “no leukemia” patients. We might need another dimension to properly visualize that separation. In addition, those two clusters were closely related in the hierarchical clustering as well.

### 4\.1\.4 How to choose “k”, the number of clusters

Up to this point, we have avoided the question of selecting the optimal number of clusters. How do we know where to cut our dendrogram, or which k to choose? First of all, this is a difficult question. Usually, clusters have different granularity. Some clusters are tight and compact and some are wide, and both these types of clusters can be in the same data set. When visualized, some large clusters may look like they have sub\-clusters. So should we consider the large cluster as one cluster or should we consider the sub\-clusters as individual clusters? There are some metrics to help, but there is no definite answer. We will show a couple of them below.

#### 4\.1\.4\.1 Silhouette

One way to determine the quality of the clustering is to measure the expected self\-similar nature of the points in a set of clusters. The silhouette value does just that and it is a measure of how similar a data point is to its own cluster compared to other clusters (Rousseeuw [1987](#ref-rousseeuw1987silhouettes)). The silhouette value ranges from \-1 to \+1: positive values indicate that the data point is well matched to its own cluster, a value of zero is a borderline case, and a negative value means that the data point might be mis\-clustered because it is more similar to a neighboring cluster. If most data points have a high value, then the clustering is appropriate. Ideally, one can create many different clusterings, each with a different \\(k\\) parameter indicating the number of clusters, and assess their appropriateness using the average silhouette values. In R, silhouette values are referred to as silhouette widths in the documentation.

A silhouette value is calculated for each data point. In our working example, each patient will get silhouette values showing how well they are matched to their assigned clusters. Formally, this is calculated as follows. For each data point \\(i\\), we calculate \\({\\displaystyle a(i)}\\), which denotes the average distance between \\(i\\) and all other data points within the same cluster. This shows how well the point fits into that cluster.
For the same data point, we also calculate \\({\\displaystyle b(i)}\\), which denotes the lowest average distance of \\({\\displaystyle i}\\) to all points in any other cluster, of which \\({\\displaystyle i}\\) is not a member. The cluster with this lowest average \\(b(i)\\) is the “neighboring cluster” of data point \\({\\displaystyle i}\\), since it is the next best fit cluster for that data point. Then, the silhouette value for a given data point is \\(s(i) \= \\frac{b(i) \- a(i)}{\\max\\{a(i),b(i)\\}}\\). As described, this quantity is positive when \\(b(i)\\) is high and \\(a(i)\\) is low, meaning that the data point \\(i\\) is self\-similar to its cluster. And the silhouette value, \\(s(i)\\), is negative if it is more similar to its neighbors than to its assigned cluster.

In R, we can calculate silhouette values using the `cluster::silhouette()` function. Below, we calculate the silhouette values for k\-medoids clustering with the `pam()` function with `k=5`. The resulting silhouette values are shown in Figure [4\.6](clustering-grouping-samples-based-on-their-similarity.html#fig:sill).

```
library(cluster)
set.seed(101)
pamclu=cluster::pam(t(mat),k=5)
plot(silhouette(pamclu),main=NULL)
```

FIGURE 4\.6: Silhouette values for k\-medoids with `k=5`.

Now, let us calculate the average silhouette value for different \\(k\\) values and compare them. We will use the `sapply()` function to get average silhouette values across \\(k\\) values between 2 and 7\. Within `sapply()` there is an anonymous function that does the clustering and calculates the average silhouette value for each \\(k\\). The plot showing average silhouette values for different \\(k\\) values is shown in Figure [4\.7](clustering-grouping-samples-based-on-their-similarity.html#fig:sillav).

```
Ks=sapply(2:7,
    function(i)
      summary(silhouette(pam(t(mat),k=i)))$avg.width)
plot(2:7,Ks,xlab="k",ylab="av. silhouette",type="b",
     pch=19)
```

FIGURE 4\.7: Average silhouette values for k\-medoids clustering for `k` values between 2 and 7\.

In this case, it seems the best value for \\(k\\) is 4\. The k\-medoids function `pam()` will usually cluster CML and “no Leukemia” cases together when `k=4`, and these are also related clusters according to the hierarchical clustering we did earlier.

#### 4\.1\.4\.2 Gap statistic

As clustering aims to find self\-similar data points, it would be reasonable to expect that, with the correct number of clusters, the total within\-cluster variation is minimized. Within\-cluster variation for a single cluster can simply be defined as the sum of squares from the cluster mean, which in this case is the centroid we defined in the k\-means algorithm. The total within\-cluster variation is then the sum of the within\-cluster variations of each cluster. This can be formally defined as follows:

\\(\\displaystyle W\_k \= \\sum\_{r\=1}^{k} \\sum\_{\\mathrm{x}\_i \\in C\_r} (\\mathrm{x}\_i \- \\mu\_r )^2\\)

where \\(\\mathrm{x}\_i\\) is a data point in cluster \\(r\\), \\(\\mu\_r\\) is the mean of cluster \\(r\\), and \\(W\_k\\) is the total within\-cluster variation for a clustering with \\(k\\) clusters. However, the problem is that this variation quantity decreases with the number of clusters. The more centroids we have, the smaller the distances to the centroids become. A more reliable approach would be to somehow calculate the expected variation from a reference null distribution and compare that to the observed variation for each \\(k\\).
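A quick check (not from the book, and assuming `mat` from the examples above is still loaded) makes the problem visible: the total within\-cluster sum of squares reported by `kmeans()` keeps shrinking as `k` grows, so on its own it cannot tell us when to stop.

```
# Illustration: total within-cluster sum of squares decreases (in expectation)
# as k grows, so it cannot pick k on its own.
set.seed(101)
wss=sapply(2:7,
    function(k) kmeans(t(mat),centers=k,nstart=10)$tot.withinss)
plot(2:7,wss,xlab="k",ylab="total within-cluster SS",type="b",pch=19)
```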
In the gap statistic approach, the expected distribution is calculated via sampling points from the boundaries of the original data and calculating the within\-cluster variation quantity for multiple rounds of sampling (Tibshirani, Walther, and Hastie [2001](#ref-tibshirani2001estimating)). This way we have an expectation about the variability when there is no clustering, and we can then compare that expected variation to the observed within\-cluster variation. The expected variation should also go down with the increasing number of clusters, but for the optimal number of clusters, the expected variation will be furthest away from the observed variation. This distance is called the **“gap statistic”** and is defined as follows: \\(\\displaystyle \\mathrm{Gap}\_n(k) \= E\_n^\*\\{\\log W\_k\\} \- \\log W\_k\\), where \\(E\_n^\*\\{\\log W\_k\\}\\) is the expected variation in log\-scale under a sample of size \\(n\\) from the reference distribution and \\(\\log W\_k\\) is the observed variation. Our aim is to choose the number of clusters \\(k\\) that maximizes \\(\\mathrm{Gap}\_n(k)\\).

We can easily calculate the gap statistic with the `cluster::clusGap()` function. We will now use that function to calculate the gap statistic for our patient gene expression data. The resulting gap statistics are shown in Figure [4\.8](clustering-grouping-samples-based-on-their-similarity.html#fig:clusGap).

```
library(cluster)
set.seed(101)

# define the clustering function
pam1 <- function(x,k)
  list(cluster = pam(x,k, cluster.only=TRUE))

# calculate the gap statistic
pam.gap= clusGap(t(mat), FUN = pam1, K.max = 8,B=50)

# plot the gap statistic across k values
plot(pam.gap, main = "Gap statistic for the 'Leukemia' data")
```

FIGURE 4\.8: Gap statistic for clustering the leukemia dataset with the k\-medoids (pam) algorithm.

In this case, the gap statistic shows that \\(k\=7\\) is the best if we simply take the maximum value as the best. However, after \\(k\=6\\), the curve is more or less stable. This observation is incorporated into algorithms that can select the best \\(k\\) value based on the gap statistic. A reasonable way is to take the simulation error (the error bars in Figure [4\.8](clustering-grouping-samples-based-on-their-similarity.html#fig:clusGap)) into account, and take the smallest \\(k\\) whose gap statistic is larger than or equal to that of \\(k\+1\\) minus the simulation error. Formally written, we would pick the smallest \\(k\\) satisfying the following condition: \\(\\mathrm{Gap}(k) \\geq \\mathrm{Gap}(k\+1\) \- s\_{k\+1}\\), where \\(s\_{k\+1}\\) is the simulation error for \\(\\mathrm{Gap}(k\+1\)\\).

Using this procedure gives us \\(k\=6\\) as the optimum number of clusters. Biologically, we know that there are 5 main patient categories, but this does not mean there are no sub\-categories or sub\-types for the cancers we are looking at.

#### 4\.1\.4\.3 Other methods

There are several other methods that provide insight into how many clusters there are. In fact, the package `NbClust` provides 30 different ways to determine the number of optimal clusters and can offer a voting mechanism to pick the best number. Below, we show how to use this function for some of the optimal number of cluster detection methods.
```
library(NbClust)
nb = NbClust(data=t(mat),
             distance = "euclidean",
             min.nc = 2, max.nc = 7,
             method = "kmeans",
             index=c("kl","ch","cindex","db","silhouette",
                     "duda","pseudot2","beale","ratkowsky",
                     "gap","gamma","mcclain","gplus",
                     "tau","sdindex","sdbw"))

table(nb$Best.nc[1,]) # consensus seems to be 3 clusters
```

However, readers should keep in mind that clustering is an exploratory technique. If you have solid labels for your data points, maybe clustering is just a sanity check, and you should do predictive modeling instead. However, in biology there are rarely solid labels, and things have different granularity. Take the leukemia patients case we have been using, for example: it is known that leukemia types have subtypes, and those sub\-types have different mutation profiles and consequently different molecular signatures. Because of this, it is not surprising that some optimal cluster number techniques will find more clusters to be appropriate. On the other hand, CML (chronic myeloid leukemia) is a slowly progressing disease and its molecular signatures may be closer to those of the “no leukemia” patients, so clustering algorithms may confuse the two depending on what granularity they are operating with. It is always good to look at the heatmaps after clustering: if you have meaningful self\-similar data points, then even if the labels you have do not agree with the clusters, you can perform downstream analysis to understand the sub\-clusters better. As we have seen, we can estimate the optimal number of clusters, but we cannot take that estimation as the absolute truth. Given more data points or a different set of expression signatures, you may have different optimal clusterings, or the supposed optimal clustering might overlook previously known sub\-groups of your data.
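When labels do exist, a quick hedged sketch like the one below (not from the book; it assumes `mat` and `kclu` from the k\-means example above are still in the workspace) can turn the cross\-tabulation into a single agreement score, the adjusted Rand index, which is 1 for perfect agreement and close to 0 for random assignments.

```
# Adjusted Rand index between known leukemia types and k-means clusters,
# computed from the contingency table (Hubert & Arabie formulation).
adjusted_rand <- function(labels, clusters) {
  tab <- table(labels, clusters)
  a <- rowSums(tab); b <- colSums(tab); n <- sum(tab)
  sum_ij <- sum(choose(tab, 2))          # agreeing pairs within cells
  sum_a  <- sum(choose(a, 2))            # pairs sharing a label
  sum_b  <- sum(choose(b, 2))            # pairs sharing a cluster
  expected  <- sum_a * sum_b / choose(n, 2)
  max_index <- (sum_a + sum_b) / 2
  (sum_ij - expected) / (max_index - expected)
}

leukemia_type <- substr(colnames(mat), 1, 3)
adjusted_rand(leukemia_type, kclu$cluster)
```

This is the same comparison as the `table(type2kclu)` cross\-tabulation shown earlier, just summarized as one number.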
This distance metric is called the **“Manhattan distance”** or **“L1 norm”**. Another distance metric uses the sum of squared distances and takes the square root of resulting value; this metric can be formulated as: \\(d\_{AB}\={{\\sqrt {\\sum \_{i\=1}^{n}(e\_{Ai}\-e\_{Bi})^{2}}}}\\). This distance is called **“Euclidean Distance”** or **“L2 norm”**. This is usually the default distance metric for many clustering algorithms. Due to the squaring operation, values that are very different get higher contribution to the distance. Due to this, compared to the Manhattan distance, it can be affected more by outliers. But, generally if the outliers are rare, this distance metric works well. The last metric we will introduce is the **“correlation distance”**. This is simply \\(d\_{AB}\=1\-\\rho\\), where \\(\\rho\\) is the Pearson correlation coefficient between two vectors; in our case those vectors are gene expression profiles of patients. Using this distance the gene expression vectors that have a similar pattern will have a small distance, whereas when the vectors have different patterns they will have a large distance. In this case, the linear correlation between vectors matters, although the scale of the vectors might be different. Now let’s see how we can calculate these distances in R. First, we have our gene expression per patient table. ``` df ``` ``` ## IRX4 OCT4 PAX6 ## patient1 11 10 1 ## patient2 13 13 3 ## patient3 2 4 10 ## patient4 1 3 9 ``` Next, we calculate the distance metrics using the `dist()` function and `1-cor()` expression. ``` dist(df,method="manhattan") ``` ``` ## patient1 patient2 patient3 ## patient2 7 ## patient3 24 27 ## patient4 25 28 3 ``` ``` dist(df,method="euclidean") ``` ``` ## patient1 patient2 patient3 ## patient2 4.123106 ## patient3 14.071247 15.842980 ## patient4 14.594520 16.733201 1.732051 ``` ``` as.dist(1-cor(t(df))) # correlation distance ``` ``` ## patient1 patient2 patient3 ## patient2 0.004129405 ## patient3 1.988522468 1.970725343 ## patient4 1.988522468 1.970725343 0.000000000 ``` #### 4\.1\.1\.1 Scaling before calculating the distance Before we proceed to the clustering, there is one more thing we need to take care of. Should we normalize our data? The scale of the vectors in our expression matrix can affect the distance calculation. Gene expression tables might have some sort of normalization, so the values are in comparable scales. But somehow, if a gene’s expression values are on a much higher scale than the other genes, that gene will affect the distance more than others when using Euclidean or Manhattan distance. If that is the case we can scale the variables. The traditional way of scaling variables is to subtract their mean, and divide by their standard deviation, this operation is also called “standardization”. If this is done on all genes, each gene will have the same effect on distance measures. The decision to apply scaling ultimately depends on our data and what you want to achieve. If the gene expression values are previously normalized between patients, having genes that dominate the distance metric could have a biological meaning and therefore it may not be desirable to further scale variables. In R, the standardization is done via the `scale()` function. Here we scale the gene expression values. 
``` df ``` ``` ## IRX4 OCT4 PAX6 ## patient1 11 10 1 ## patient2 13 13 3 ## patient3 2 4 10 ## patient4 1 3 9 ``` ``` scale(df) ``` ``` ## IRX4 OCT4 PAX6 ## patient1 0.6932522 0.5212860 -1.0733721 ## patient2 1.0194886 1.1468293 -0.6214260 ## patient3 -0.7748113 -0.7298004 0.9603856 ## patient4 -0.9379295 -0.9383149 0.7344125 ## attr(,"scaled:center") ## IRX4 OCT4 PAX6 ## 6.75 7.50 5.75 ## attr(,"scaled:scale") ## IRX4 OCT4 PAX6 ## 6.130525 4.795832 4.425306 ``` #### 4\.1\.1\.1 Scaling before calculating the distance Before we proceed to the clustering, there is one more thing we need to take care of. Should we normalize our data? The scale of the vectors in our expression matrix can affect the distance calculation. Gene expression tables might have some sort of normalization, so the values are in comparable scales. But somehow, if a gene’s expression values are on a much higher scale than the other genes, that gene will affect the distance more than others when using Euclidean or Manhattan distance. If that is the case we can scale the variables. The traditional way of scaling variables is to subtract their mean, and divide by their standard deviation, this operation is also called “standardization”. If this is done on all genes, each gene will have the same effect on distance measures. The decision to apply scaling ultimately depends on our data and what you want to achieve. If the gene expression values are previously normalized between patients, having genes that dominate the distance metric could have a biological meaning and therefore it may not be desirable to further scale variables. In R, the standardization is done via the `scale()` function. Here we scale the gene expression values. ``` df ``` ``` ## IRX4 OCT4 PAX6 ## patient1 11 10 1 ## patient2 13 13 3 ## patient3 2 4 10 ## patient4 1 3 9 ``` ``` scale(df) ``` ``` ## IRX4 OCT4 PAX6 ## patient1 0.6932522 0.5212860 -1.0733721 ## patient2 1.0194886 1.1468293 -0.6214260 ## patient3 -0.7748113 -0.7298004 0.9603856 ## patient4 -0.9379295 -0.9383149 0.7344125 ## attr(,"scaled:center") ## IRX4 OCT4 PAX6 ## 6.75 7.50 5.75 ## attr(,"scaled:scale") ## IRX4 OCT4 PAX6 ## 6.130525 4.795832 4.425306 ``` ### 4\.1\.2 Hiearchical clustering This is one of the most ubiquitous clustering algorithms. Using this algorithm you can see the relationship of individual data points and relationships of clusters. This is achieved by successively joining small clusters to each other based on the inter\-cluster distance. Eventually, you get a tree structure or a dendrogram that shows the relationship between the individual data points and clusters. The height of the dendrogram is the distance between clusters. Here we can show how to use this on our toy data set from four patients. The base function in R to do hierarchical clustering in `hclust()`. Below, we apply that function on Euclidean distances between patients. The resulting clustering tree or dendrogram is shown in Figure [4\.1](clustering-grouping-samples-based-on-their-similarity.html#fig:expPlot). ``` d=dist(df) hc=hclust(d,method="complete") plot(hc) ``` FIGURE 4\.2: Dendrogram of distance matrix In the above code snippet, we have used the `method="complete"` argument without explaining it. The `method` argument defines the criteria that directs how the sub\-clusters are merged. During clustering, starting with single\-member clusters, the clusters are merged based on the distance between them. 
There are many different ways to define distance between clusters, and based on which definition you use, the hierarchical clustering results change. So the `method` argument controls that. There are a couple of values this argument can take; we list them and their description below: * **“complete”** stands for “Complete Linkage” and the distance between two clusters is defined as the largest distance between any members of the two clusters. * **“single”** stands for “Single Linkage” and the distance between two clusters is defined as the smallest distance between any members of the two clusters. * **“average”** stands for “Average Linkage” or more precisely the UPGMA (Unweighted Pair Group Method with Arithmetic Mean) method. In this case, the distance between two clusters is defined as the average distance between any members of the two clusters. * **“ward.D2”** and **“ward.D”** stands for different implementations of Ward’s minimum variance method. This method aims to find compact, spherical clusters by selecting clusters to merge based on the change in the cluster variances. The clusters are merged if the increase in the combined variance over the sum of the cluster\-specific variances is the minimum compared to alternative merging operations. In real life, we would get expression profiles from thousands of genes and we will typically have many more patients than our toy example. One such data set is gene expression values from 60 bone marrow samples of patients with one of the four main types of leukemia (ALL, AML, CLL, CML) or no\-leukemia controls. We trimmed that data set down to the top 1000 most variable genes to be able to work with it more easily, since genes that are not very variable do not contribute much to the distances between patients. We will now use this data set to cluster the patients and display the values as a heatmap and a dendrogram. The heatmap shows the expression values of genes across patients in a color coded manner. The heatmap function, `pheatmap()`, that we will use performs the clustering as well. The matrix that contains gene expressions has the genes in the rows and the patients in the columns. Therefore, we will also use a column\-side color code to mark the patients based on their leukemia type. For the hierarchical clustering, we will use Ward’s method designated by the `clustering_method` argument to the `pheatmap()` function. The resulting heatmap is shown in Figure [4\.3](clustering-grouping-samples-based-on-their-similarity.html#fig:heatmap1). ``` library(pheatmap) expFile=system.file("extdata","leukemiaExpressionSubset.rds", package="compGenomRData") mat=readRDS(expFile) # set the leukemia type annotation for each sample annotation_col = data.frame( LeukemiaType =substr(colnames(mat),1,3)) rownames(annotation_col)=colnames(mat) pheatmap(mat,show_rownames=FALSE,show_colnames=FALSE, annotation_col=annotation_col, scale = "none",clustering_method="ward.D2", clustering_distance_cols="euclidean") ``` FIGURE 4\.3: Heatmap of gene expression values from leukemia patients. Each column represents a patient. Columns are clustered using gene expression and color coded by disease type: ALL, AML, CLL, CML or no\-leukemia As we can observe in the heatmap, each cluster has a distinct set of expression values. The main clusters almost perfectly distinguish the leukemia types. Only one CML patient is clustered as a non\-leukemia sample. This could mean that gene expression profiles are enough to classify leukemia type. 
More detailed analysis and experiments are needed to verify that, but by looking at this exploratory analysis we can decide where to focus our efforts next. #### 4\.1\.2\.1 Where to cut the tree ? The example above seems like a clear\-cut example where we can pick clusters from the dendrogram by eye. This is mostly due to Ward’s method, where compact clusters are preferred. However, as is usually the case, we do not have patient labels and it would be difficult to tell which leaves (patients) in the dendrogram we should consider as part of the same cluster. In other words, how deep we should cut the dendrogram so that every patient sample still connected via the remaining sub\-dendrograms constitute clusters. The `cutree()` function provides the functionality to output either desired number of clusters or clusters obtained from cutting the dendrogram at a certain height. Below, we will cluster the patients with hierarchical clustering using the default method “complete linkage” and cut the dendrogram at a certain height. In this case, you will also observe that, changing from Ward’s distance to complete linkage had an effect on clustering. Now the two clusters that are defined by Ward’s distance are closer to each other and harder to separate from each other, shown in Figure [4\.4](clustering-grouping-samples-based-on-their-similarity.html#fig:hclustNcut). ``` hcl=hclust(dist(t(mat))) plot(hcl,labels = FALSE, hang= -1) rect.hclust(hcl, h = 80, border = "red") ``` FIGURE 4\.4: Dendrogram of Leukemia patients clustered by hierarchical clustering. Rectangles show the cluster we will get if we cut the tree at `height=80`. ``` clu.k5=cutree(hcl,k=5) # cut tree so that there are 5 clusters clu.h80=cutree(hcl,h=80) # cut tree/dendrogram from height 80 table(clu.k5) # number of samples for each cluster ``` ``` ## clu.k5 ## 1 2 3 4 5 ## 12 3 9 12 24 ``` Apart from the arbitrary values for the height or the number of clusters, how can we define clusters more systematically? As this is a general question, we will show how to decide the optimal number of clusters later in this chapter. #### 4\.1\.2\.1 Where to cut the tree ? The example above seems like a clear\-cut example where we can pick clusters from the dendrogram by eye. This is mostly due to Ward’s method, where compact clusters are preferred. However, as is usually the case, we do not have patient labels and it would be difficult to tell which leaves (patients) in the dendrogram we should consider as part of the same cluster. In other words, how deep we should cut the dendrogram so that every patient sample still connected via the remaining sub\-dendrograms constitute clusters. The `cutree()` function provides the functionality to output either desired number of clusters or clusters obtained from cutting the dendrogram at a certain height. Below, we will cluster the patients with hierarchical clustering using the default method “complete linkage” and cut the dendrogram at a certain height. In this case, you will also observe that, changing from Ward’s distance to complete linkage had an effect on clustering. Now the two clusters that are defined by Ward’s distance are closer to each other and harder to separate from each other, shown in Figure [4\.4](clustering-grouping-samples-based-on-their-similarity.html#fig:hclustNcut). ``` hcl=hclust(dist(t(mat))) plot(hcl,labels = FALSE, hang= -1) rect.hclust(hcl, h = 80, border = "red") ``` FIGURE 4\.4: Dendrogram of Leukemia patients clustered by hierarchical clustering. 
Rectangles show the cluster we will get if we cut the tree at `height=80`. ``` clu.k5=cutree(hcl,k=5) # cut tree so that there are 5 clusters clu.h80=cutree(hcl,h=80) # cut tree/dendrogram from height 80 table(clu.k5) # number of samples for each cluster ``` ``` ## clu.k5 ## 1 2 3 4 5 ## 12 3 9 12 24 ``` Apart from the arbitrary values for the height or the number of clusters, how can we define clusters more systematically? As this is a general question, we will show how to decide the optimal number of clusters later in this chapter. ### 4\.1\.3 K\-means clustering Another very common clustering algorithm is k\-means. This method divides or partitions the data points, our working example patients, into a pre\-determined, “k” number of clusters (Hartigan and Wong [1979](#ref-hartigan1979algorithm)). Hence, these types of methods are generally called “partitioning” methods. The algorithm is initialized with randomly chosen \\(k\\) centers or centroids. In a sense, a centroid is a data point with multiple values. In our working example, it is a hypothetical patient with gene expression values. But in the initialization phase, those gene expression values are chosen randomly within the boundaries of the gene expression distributions from real patients. As the next step in the algorithm, each patient is assigned to the closest centroid, and in the next iteration, centroids are set to the mean of values of the genes in the cluster. This process of setting centroids and assigning patients to the clusters repeats itself until the sum of squared distances to cluster centroids is minimized. As you might see, the cluster algorithm starts with random initial centroids. This feature might yield different results for each run of the algorithm. We will now show how to use the k\-means method on the gene expression data set. We will use `set.seed()` for reproducibility. In the wild, you might want to run this algorithm multiple times to see if your clustering results are stable. ``` set.seed(101) # we have to transpore the matrix t() # so that we calculate distances between patients kclu=kmeans(t(mat),centers=5) # number of data points in each cluster table(kclu$cluster) ``` ``` ## ## 1 2 3 4 5 ## 12 14 11 12 11 ``` Now let us check the percentage of each leukemia type in each cluster. We can visualize this as a table. Looking at the table below, we see that each of the 5 clusters predominantly represents one of the 4 leukemia types or the control patients without leukemia. ``` type2kclu = data.frame( LeukemiaType =substr(colnames(mat),1,3), cluster=kclu$cluster) table(type2kclu) ``` ``` ## cluster ## LeukemiaType 1 2 3 4 5 ## ALL 12 0 0 0 0 ## AML 0 1 0 0 11 ## CLL 0 0 0 12 0 ## CML 0 1 11 0 0 ## NoL 0 12 0 0 0 ``` Another related and maybe more robust algorithm is called **“k\-medoids”** clustering (Reynolds, Richards, Iglesia, et al. [2006](#ref-reynolds2006clustering)). The procedure is almost identical to k\-means clustering with a couple of differences. In this case, centroids chosen are real data points in our case patients, and the metric we are trying to optimize in each iteration is based on the Manhattan distance to the centroid. In k\-means this was based on the sum of squared distances, so Euclidean distance. Below we show how to use the k\-medoids clustering function `pam()` from the `cluster` package. 
``` kmclu=cluster::pam(t(mat),k=5) # cluster using k-medoids # make a data frame with Leukemia type and cluster id type2kmclu = data.frame( LeukemiaType =substr(colnames(mat),1,3), cluster=kmclu$cluster) table(type2kmclu) ``` ``` ## cluster ## LeukemiaType 1 2 3 4 5 ## ALL 12 0 0 0 0 ## AML 0 10 1 1 0 ## CLL 0 0 0 0 12 ## CML 0 0 0 12 0 ## NoL 0 0 12 0 0 ``` We cannot visualize the clustering from partitioning methods with a tree like we did for hierarchical clustering. Even if we can get the distances between patients the algorithm does not return the distances between clusters out of the box. However, if we had a way to visualize the distances between patients in 2 dimensions we could see the how patients and clusters relate to each other. It turns out that there is a way to compress between patient distances to a 2\-dimensional plot. There are many ways to do this, and we introduce these dimension\-reduction methods including the one we will use later in this chapter. For now, we are going to use a method called “multi\-dimensional scaling” and plot the patients in a 2D plot color coded by their cluster assignments shown in Figure [4\.5](clustering-grouping-samples-based-on-their-similarity.html#fig:kmeansmds). We will explain this method in more detail in the [Multi\-dimensional scaling](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#multi-dimensional-scaling) section below. ``` # Calculate distances dists=dist(t(mat)) # calculate MDS mds=cmdscale(dists) # plot the patients in the 2D space plot(mds,pch=19,col=rainbow(5)[kclu$cluster]) # set the legend for cluster colors legend("bottomright", legend=paste("clu",unique(kclu$cluster)), fill=rainbow(5)[unique(kclu$cluster)], border=NA,box.col=NA) ``` FIGURE 4\.5: K\-means cluster memberships are shown in a multi\-dimensional scaling plot The plot we obtained shows the separation between clusters. However, it does not do a great job showing the separation between clusters 3 and 4, which represent CML and “no leukemia” patients. We might need another dimension to properly visualize that separation. In addition, those two clusters were closely related in the hierarchical clustering as well. ### 4\.1\.4 How to choose “k”, the number of clusters Up to this point, we have avoided the question of selecting optimal number clusters. How do we know where to cut our dendrogram or which k to choose ? First of all, this is a difficult question. Usually, clusters have different granularity. Some clusters are tight and compact and some are wide, and both these types of clusters can be in the same data set. When visualized, some large clusters may look like they may have sub\-clusters. So should we consider the large cluster as one cluster or should we consider the sub\-clusters as individual clusters? There are some metrics to help but there is no definite answer. We will show a couple of them below. #### 4\.1\.4\.1 Silhouette One way to determine the quality of the clustering is to measure the expected self\-similar nature of the points in a set of clusters. The silhouette value does just that and it is a measure of how similar a data point is to its own cluster compared to other clusters (Rousseeuw [1987](#ref-rousseeuw1987silhouettes)). 
The silhouette value ranges from \-1 to \+1, where values that are positive indicate that the data point is well matched to its own cluster, if the value is zero it is a borderline case, and if the value is minus it means that the data point might be mis\-clustered because it is more similar to a neighboring cluster. If most data points have a high value, then the clustering is appropriate. Ideally, one can create many different clusterings with each with a different \\(k\\) parameter indicating the number of clusters, and assess their appropriateness using the average silhouette values. In R, silhouette values are referred to as silhouette widths in the documentation. A silhouette value is calculated for each data point. In our working example, each patient will get silhouette values showing how well they are matched to their assigned clusters. Formally this calculated as follows. For each data point \\(i\\), we calculate \\({\\displaystyle a(i)}\\), which denotes the average distance between \\(i\\) and all other data points within the same cluster. This shows how well the point fits into that cluster. For the same data point, we also calculate \\({\\displaystyle b(i)}\\), which denotes the lowest average distance of \\({\\displaystyle i}\\) to all points in any other cluster, of which \\({\\displaystyle i}\\) is not a member. The cluster with this lowest average \\(b(i)\\) is the “neighboring cluster” of data point \\({\\displaystyle i}\\) since it is the next best fit cluster for that data point. Then, the silhouette value for a given data point is \\(s(i) \= \\frac{b(i) \- a(i)}{\\max\\{a(i),b(i)\\}}\\). As described, this quantity is positive when \\(b(i)\\) is high and \\(a(i)\\) is low, meaning that the data point \\(i\\) is self\-similar to its cluster. And the silhouette value, \\(s(i)\\), is negative if it is more similar to its neighbors than its assigned cluster. In R, we can calculate silhouette values using the `cluster::silhouette()` function. Below, we calculate the silhouette values for k\-medoids clustering with the `pam()` function with `k=5`. The resulting silhouette values are shown in Figure [4\.6](clustering-grouping-samples-based-on-their-similarity.html#fig:sill). ``` library(cluster) set.seed(101) pamclu=cluster::pam(t(mat),k=5) plot(silhouette(pamclu),main=NULL) ``` FIGURE 4\.6: Silhouette values for k\-medoids with `k=5` Now, let us calculate the average silhouette value for different \\(k\\) values and compare. We will use `sapply()` function to get average silhouette values across \\(k\\) values between 2 and 7\. Within `sapply()` there is an anonymous function that that does the clustering and calculates average silhouette values for each \\(k\\). The plot showing average silhouette values for different \\(k\\) values is shown in Figure [4\.7](clustering-grouping-samples-based-on-their-similarity.html#fig:sillav). ``` Ks=sapply(2:7, function(i) summary(silhouette(pam(t(mat),k=i)))$avg.width) plot(2:7,Ks,xlab="k",ylab="av. silhouette",type="b", pch=19) ``` FIGURE 4\.7: Average silhouette values for k\-medoids clustering for `k` values between 2 and 7 In this case, it seems the best value for \\(k\\) is 4\. The k\-medoids function `pam()` will usually cluster CML and “no Leukemia” cases together when `k=4`, which are also related clusters according to the hierarchical clustering we did earlier. 
#### 4\.1\.4\.2 Gap statistic

As clustering aims to find self\-similar data points, it would be reasonable to expect that with the correct number of clusters the total within\-cluster variation is minimized. Within\-cluster variation for a single cluster can simply be defined as the sum of squares from the cluster mean, which in this case is the centroid we defined in the k\-means algorithm. The total within\-cluster variation is then the sum of within\-cluster variations for each cluster. This can be formally defined as follows:

\\(\\displaystyle W\_k \= \\sum\_{k\=1}^K \\sum\_{\\mathrm{x}\_i \\in C\_k} (\\mathrm{x}\_i \- \\mu\_k )^2\\)

where \\(\\mathrm{x}\_i\\) is a data point in cluster \\(k\\), \\(\\mu\_k\\) is the cluster mean, and \\(W\_k\\) is the total within\-cluster variation quantity we described. However, the problem is that this variation quantity decreases with the number of clusters. The more centroids we have, the smaller the distances to the centroids become.

A more reliable approach would be to somehow calculate the expected variation from a reference null distribution and compare that to the observed variation for each \\(k\\). In the gap statistic approach, the expected distribution is calculated by sampling points from the boundaries of the original data and calculating the within\-cluster variation quantity for multiple rounds of sampling (Tibshirani, Walther, and Hastie [2001](#ref-tibshirani2001estimating)). This way, we have an expectation about the variability when there is no clustering, and we can then compare that expected variation to the observed within\-cluster variation. The expected variation should also go down with the increasing number of clusters, but for the optimal number of clusters, the expected variation will be furthest away from the observed variation. This distance is called the **“gap statistic”** and defined as follows: \\(\\displaystyle \\mathrm{Gap}\_n(k) \= E\_n^\*\\{\\log W\_k\\} \- \\log W\_k\\), where \\(E\_n^\*\\{\\log W\_k\\}\\) is the expected variation in log\-scale under a sample of size \\(n\\) from the reference distribution and \\(\\log W\_k\\) is the observed variation. Our aim is to choose the number of clusters \\(k\\) that maximizes \\(\\mathrm{Gap}\_n(k)\\).

We can easily calculate the gap statistic with the `cluster::clusGap()` function. We will now use that function to calculate the gap statistic for our patient gene expression data. The resulting gap statistics are shown in Figure [4\.8](clustering-grouping-samples-based-on-their-similarity.html#fig:clusGap).

```
library(cluster)
set.seed(101)
# define the clustering function
pam1 <- function(x,k)
  list(cluster = pam(x,k, cluster.only=TRUE))

# calculate the gap statistic
pam.gap= clusGap(t(mat), FUN = pam1, K.max = 8,B=50)

# plot the gap statistic across k values
plot(pam.gap, main = "Gap statistic for the 'Leukemia' data")
```

FIGURE 4\.8: Gap statistic for clustering the leukemia dataset with the k\-medoids (pam) algorithm.

In this case, the gap statistic shows that \\(k\=7\\) is the best if we take the maximum value as the best. However, after \\(k\=6\\), the statistic has a more or less stable curve. This observation is incorporated into algorithms that can select the best \\(k\\) value based on the gap statistic. A reasonable way is to take the simulation error (error bars in Figure [4\.8](clustering-grouping-samples-based-on-their-similarity.html#fig:clusGap)) into account, and take the smallest \\(k\\) whose gap statistic is larger than or equal to that of \\(k\+1\\) minus the simulation error.
Formally written, we would pick the smallest \\(k\\) satisfying the following condition: \\(\\mathrm{Gap}(k) \\geq \\mathrm{Gap}(k\+1\) \- s\_{k\+1}\\), where \\(s\_{k\+1}\\) is the simulation error for \\(\\mathrm{Gap}(k\+1\)\\). Using this procedure gives us \\(k\=6\\) as the optimum number of clusters (a short sketch applying this rule programmatically is shown at the end of this section). Biologically, we know that there are 5 main patient categories, but this does not mean there are no sub\-categories or sub\-types for the cancers we are looking at.

#### 4\.1\.4\.3 Other methods

There are several other methods that provide insight into how many clusters to choose. In fact, the package `NbClust` provides 30 different ways to determine the optimal number of clusters and can offer a voting mechanism to pick the best number. Below, we show how to use this function for some of the optimal number of cluster detection methods.

```
library(NbClust)
nb = NbClust(data=t(mat),
             distance = "euclidean",
             min.nc = 2, max.nc = 7,
             method = "kmeans",
             index=c("kl","ch","cindex","db","silhouette",
                     "duda","pseudot2","beale","ratkowsky",
                     "gap","gamma","mcclain","gplus",
                     "tau","sdindex","sdbw"))

table(nb$Best.nc[1,]) # consensus seems to be 3 clusters
```

However, readers should keep in mind that clustering is an exploratory technique. If you have solid labels for your data points, maybe clustering is just a sanity check, and you should just do predictive modeling instead. However, in biology there are rarely solid labels and things have different granularity. Take the leukemia patients case we have been using, for example: it is known that leukemia types have subtypes, and those sub\-types have different mutation profiles and consequently different molecular signatures. Because of this, it is not surprising that some optimal cluster number techniques will find more clusters to be appropriate. On the other hand, CML (chronic myeloid leukemia) is a slow\-progressing disease, and the molecular signatures of CML patients may be closer to those of “no leukemia” patients, so clustering algorithms may confuse the two depending on what granularity they are operating with. It is always good to look at the heatmaps after clustering: if you have meaningful self\-similar data points, even if the labels you have do not agree with the clusters, you can perform downstream analysis to understand the sub\-clusters better. As we have seen, we can estimate the optimal number of clusters, but we cannot take that estimation as the absolute truth. Given more data points or a different set of expression signatures, you may have different optimal clusterings, or the supposed optimal clustering might overlook previously known sub\-groups of your data.
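As promised above, here is a brief sketch of how the gap\-statistic selection rule \\(\\mathrm{Gap}(k) \\geq \\mathrm{Gap}(k\+1\) \- s\_{k\+1}\\) can be applied programmatically. It assumes the `pam.gap` object computed earlier; the `cluster::maxSE()` function implements this rule (and a few related ones) on the table returned by `clusGap()`.

```
# pick k with the Tibshirani et al. (2001) rule on the clusGap() output
gap.tab = pam.gap$Tab   # columns: logW, E.logW, gap, SE.sim
maxSE(f = gap.tab[,"gap"], SE.f = gap.tab[,"SE.sim"],
      method = "Tibs2001SEmax")  # returns the selected number of clusters
```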
4\.2 Dimensionality reduction techniques: Visualizing complex data sets in 2D
-----------------------------------------------------------------------------

In statistics, dimension reduction techniques are a set of processes for reducing the number of random variables by obtaining a set of principal variables. For example, in the context of a gene expression matrix across different patient samples, this might mean getting a set of new variables that cover the variation in sets of genes. This way, samples can be represented by a couple of principal variables instead of thousands of genes. This is useful for visualization, clustering and predictive modeling.

### 4\.2\.1 Principal component analysis

Principal component analysis (PCA) is maybe the most popular technique to examine high\-dimensional data. There are multiple interpretations of how PCA reduces dimensionality. We will first focus on the geometrical interpretation, where this operation can be interpreted as rotating the original dimensions of the data. For this, we go back to our example gene expression data set. In this example, we will represent our patients with expression profiles of just two genes, CD33 (ENSG00000105383\) and PYGL (ENSG00000100504\). This way we can visualize them in a scatter plot (see Figure [4\.9](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:scatterb4PCA)).

```
plot(mat[rownames(mat)=="ENSG00000100504",],
     mat[rownames(mat)=="ENSG00000105383",],pch=19,
     ylab="CD33 (ENSG00000105383)",
     xlab="PYGL (ENSG00000100504)")
```

FIGURE 4\.9: Gene expression values of CD33 and PYGL genes across leukemia patients.

PCA rotates the original data space such that the axes of the new coordinate system point to the directions of highest variance of the data. The axes or new variables are termed principal components (PCs) and are ordered by variance: The first component, PC 1, represents the direction of the highest variance of the data. The second component, PC 2, represents the direction of the highest remaining variance orthogonal to the first component. This can be naturally extended to obtain the required number of components, which together span a component space covering the desired amount of variance.

In our toy example with only two genes, the principal components are drawn over the original scatter plot, and in the next plot we show the new coordinate system based on the principal components. We will calculate the PCA with the `princomp()` function; this function returns the new coordinates as well. These new coordinates are simply the projection of the data onto the new axes. We will decorate the scatter plots with eigenvectors showing the direction of greatest variation. Then, we will plot the new coordinates (the resulting plot is shown in Figure [4\.10](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:pcaRot)). These are automatically calculated by the `princomp()` function. Notice that we are using the `scale()` function when plotting coordinates and also before calculating the PCA. This function centers the data, meaning it subtracts the mean of each column vector from the elements in the vector. This essentially gives the columns a zero mean. It also divides the data by the standard deviation of the centered columns. These two operations help bring the data to a common scale, which is important so that PCA is not affected by the different scales in the data.
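As a quick sanity check on what `scale()` does (a minimal sketch using the expression matrix `mat` from the earlier chunks), the scaled columns should end up with a mean of approximately zero and a standard deviation of one.

```
sc = scale(t(mat))                # samples on rows, genes on columns
round(colMeans(sc)[1:3], 10)      # ~0 for every gene column
round(apply(sc, 2, sd)[1:3], 10)  # 1 for every gene column
```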
```
par(mfrow=c(1,2))

# create the subset of the data with two genes only
# notice that we transpose the matrix so samples are
# on the columns
sub.mat=t(mat[rownames(mat) %in% c("ENSG00000100504","ENSG00000105383"),])

# plotting our genes of interest as scatter plots
plot(scale(mat[rownames(mat)=="ENSG00000100504",]),
     scale(mat[rownames(mat)=="ENSG00000105383",]),
     pch=19,
     ylab="CD33 (ENSG00000105383)",
     xlab="PYGL (ENSG00000100504)",
     col=as.factor(annotation_col$LeukemiaType),
     xlim=c(-2,2),ylim=c(-2,2))

# create the legend for the Leukemia types
legend("bottomright",
       legend=unique(annotation_col$LeukemiaType),
       fill =palette("default"),
       border=NA,box.col=NA)

# calculate the PCA only for our genes and all the samples
pr=princomp(scale(sub.mat))

# plot the direction of eigenvectors
# pr$loadings returned by princomp has the eigenvectors
arrows(x0=0, y0=0, x1 = pr$loadings[1,1],
       y1 = pr$loadings[2,1],col="pink",lwd=3)
arrows(x0=0, y0=0, x1 = pr$loadings[1,2],
       y1 = pr$loadings[2,2],col="gray",lwd=3)

# plot the samples in the new coordinate system
plot(-pr$scores,pch=19,
     col=as.factor(annotation_col$LeukemiaType),
     ylim=c(-2,2),xlim=c(-4,4))

# plot the new coordinate basis vectors
arrows(x0=0, y0=0, x1 =-2,
       y1 = 0,col="pink",lwd=3)
arrows(x0=0, y0=0, x1 = 0,
       y1 = -1,col="gray",lwd=3)
```

FIGURE 4\.10: Geometric interpretation of PCA finding eigenvectors that point to the direction of highest variance. Eigenvectors can be used as a new coordinate system.

As you can see, the new coordinate system is useful by itself. The X\-axis, which represents the first component, separates the data along the lymphoblastic and myeloid leukemias.

PCA, in this case, is obtained by calculating the eigenvectors of the covariance matrix via an operation called eigen decomposition. The covariance matrix is obtained from the covariances of pairwise variables of our expression matrix, which is simply \\({ \\operatorname{cov} (X,Y)\={\\frac {1}{n}}\\sum \_{i\=1}^{n}(x\_{i}\-\\mu\_X)(y\_{i}\-\\mu\_Y)}\\), where \\(X\\) and \\(Y\\) are expression values of genes in a sample in our example. This is a measure of how things vary together: if genes that are highly expressed in sample A are also highly expressed in sample B, and genes that are lowly expressed in sample A are also lowly expressed in sample B, then samples A and B will have positive covariance. If the opposite is true, then they will have negative covariance. This quantity is related to correlation; as we saw in the previous chapter, correlation is standardized covariance.

Covariance of variables can be obtained with the `cov()` function, and eigen decomposition of such a matrix will produce a set of orthogonal vectors that span the directions of highest variation. In 2D, you can think of this operation as rotating two perpendicular lines together until they point to the directions where most of the variation in the data lies, similar to Figure [4\.10](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:pcaRot). An important intuition is that, after the rotation prescribed by the eigenvectors is complete, the covariance between variables in this rotated dataset will be zero. There is a proper mathematical relationship between the covariances of the rotated dataset and the original dataset. That’s why operating on the covariance matrix is related to the rotation of the original dataset.
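As a quick check of the zero\-covariance claim above (a minimal sketch assuming the `pr` object from the chunk we just ran), the covariance matrix of the rotated coordinates should be essentially diagonal.

```
# covariance between the principal component scores is ~0 off the diagonal
round(cov(pr$scores), 10)
```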
```
cov.mat=cov(sub.mat) # calculate covariance matrix
cov.mat
eigen(cov.mat) # obtain eigen decomposition for eigen values and vectors
```

Eigenvectors and eigenvalues of the covariance matrix indicate the direction and the magnitude of variation of the data. In our visual example, the eigenvectors are the so\-called principal components. The eigenvector indicates the direction and the eigenvalue indicates the variation in that direction. Eigenvectors and values exist in pairs: every eigenvector has a corresponding eigenvalue, and the eigenvectors are linearly independent from each other, which means they are orthogonal or uncorrelated, as in our working example above. The eigenvectors are ranked by their corresponding eigenvalues: the higher the eigenvalue, the more important the eigenvector, because it explains more of the variation compared to the other eigenvectors. This feature of PCA makes the dimension reduction possible. We can sometimes display data sets that have many variables only in 2D or 3D because these top eigenvectors are sometimes enough to capture most of the variation in the data. The `screeplot()` function takes the output of the `princomp()` or `prcomp()` functions as input and plots the variance explained by the eigenvectors.

#### 4\.2\.1\.1 Singular value decomposition and principal component analysis

A more common way to calculate PCA is through something called singular value decomposition (SVD). This results in another interpretation of PCA, which is called the “latent factor” or “latent component” interpretation. In a moment, it will be clearer what we mean by “latent factors”. SVD is a matrix factorization or decomposition algorithm that decomposes an input matrix, \\(X\\), into three matrices as follows: \\(\\displaystyle \\mathrm{X} \= USV^T\\). In essence, many matrices can be decomposed as a product of multiple matrices, and we will come to other techniques later in this chapter. Singular value decomposition is shown in Figure [4\.11](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDcartoon). \\(U\\) is the matrix with the eigenarrays on its columns, and this has the same dimensions as the input matrix; you might see elsewhere that these columns are called eigenassays. \\(S\\) is the matrix that contains the singular values on the diagonal. The singular values are also known as eigenvalues, and their square is proportional to the explained variation by each eigenvector. Finally, the matrix \\(V^T\\) contains the eigenvectors on its rows. Their interpretation is still the same: geometrically, eigenvectors point to the directions of highest variance in the data. They are uncorrelated, or geometrically orthogonal, to each other. These interpretations are identical to the ones we made before. The slight difference is that the decomposition seems to output \\(V^T\\), which is just the transpose of the matrix \\(V\\). However, the SVD algorithms in R usually return the matrix \\(V\\). If you want the eigenvectors, you can simply use the columns of matrix \\(V\\) or the rows of \\(V^T\\).

FIGURE 4\.11: Singular value decomposition (SVD) explained in a diagram.

One thing that is new in Figure [4\.11](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDcartoon) is the concept of eigenarrays. The eigenarrays, sometimes called eigenassays, represent the sample space and can be used to plot the relationship between samples rather than genes. In this way, SVD offers additional information compared to PCA using the covariance matrix.
It offers us a way to summarize both genes and samples. Just as we can project the gene expression profiles over the top two eigengenes and get a 2D representation of genes, with the SVD we can also project the samples over the top two eigenarrays and get a representation of samples in a 2D scatter plot. The eigenvectors could represent independent expression programs across samples, such as the cell cycle, if we had time\-based expression profiles. However, there is no guarantee that each eigenvector will be biologically meaningful. Similarly, each eigenarray represents samples with specific expression characteristics. For example, the samples that have a particular pathway activated might be correlated to an eigenarray returned by SVD.

Previously, in order to map samples to the reduced 2D space we had to transpose the genes\-by\-samples matrix before using the `princomp()` function. We will now first use SVD on the genes\-by\-samples matrix to get eigenarrays and use that to plot samples on the reduced dimensions. We will project the columns of our original expression data on the eigenarrays and use the first two dimensions in the scatter plot. If you look at the code, you will see that for the projection we use the \\(U^T X\\) operation, which is just \\(S V^T\\) if you follow the linear algebra. We will also perform the PCA this time with the `prcomp()` function on the transposed genes\-by\-samples matrix to get similar information, and plot the samples on the reduced coordinates.

```
par(mfrow=c(1,2))
d=svd(scale(mat)) # apply SVD
assays=t(d$u) %*% scale(mat) # projection on eigenassays
plot(assays[1,],assays[2,],pch=19,
     col=as.factor(annotation_col$LeukemiaType))
#plot(d$v[,1],d$v[,2],pch=19,
#     col=annotation_col$LeukemiaType)

pr=prcomp(t(mat),center=TRUE,scale=TRUE) # apply PCA on transposed matrix

# plot new coordinates from PCA, projections on eigenvectors
# since the matrix is transposed, the projections (scores) are for samples
plot(pr$x[,1],pr$x[,2],col=as.factor(annotation_col$LeukemiaType))
```

FIGURE 4\.12: SVD on the matrix and its transpose

As you can see in Figure [4\.12](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:svd), the two approaches yield separation of samples, although they are slightly different. The difference comes from the centering and scaling. In the first case, we scale and center columns, and in the second case we scale and center rows, since the matrix is transposed. If we did not do any scaling or centering, we would get identical plots.

##### 4\.2\.1\.1\.1 Eigenvectors as latent factors/variables

Finally, we can introduce the latent factor interpretation of PCA via SVD. As we have already mentioned, eigenvectors can also be interpreted as expression programs that are shared by several genes, such as a cell cycle expression program when measuring gene expression across samples taken at different time points. In this interpretation, a linear combination of expression programs makes up the expression profile of the genes. Linear combination simply means multiplying each expression program with a weight and adding them up. Our \\(USV^T\\) matrix multiplication can be rearranged to yield such an understanding: we can multiply the eigenarrays \\(U\\) with the diagonal eigenvalues \\(S\\) to produce an m\-by\-n weights matrix called \\(W\\), so \\(W\=US\\), and we can re\-write the equation as just a weights matrix times the eigenvectors matrix, \\(X\=WV^T\\), as shown in Figure [4\.13](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDasWeigths).
FIGURE 4\.13: Singular value decomposition (SVD) reorganized as multiplication of an m\-by\-n weights matrix and the eigenvectors

This simple transformation now makes it clear that indeed, if eigenvectors represent expression programs, their linear combination makes up individual gene expression profiles. As an example, we can show that a linear combination of the first two eigenvectors can approximate the expression profile of a hypothetical gene in the gene expression matrix. Figure [4\.14](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDlatentExample) shows how eigenvector 1 and eigenvector 2, combined with certain weights in the \\(W\\) matrix, can approximate the gene expression pattern of our example gene.

FIGURE 4\.14: Gene expression of a gene can be regarded as a linear combination of eigenvectors.

However, SVD does not care about biology. The eigenvectors are just obtained from the data with constraints of orthogonality and the direction of variation. There are examples of eigenvectors representing real expression programs, but that does not mean eigenvectors will always be biologically meaningful. Sometimes a combination of them might make more sense in biology than single eigenvectors. This is also the same for the other matrix factorization techniques we describe below.

### 4\.2\.2 Other matrix factorization methods for dimensionality reduction

We must mention a few other techniques that are similar to SVD in spirit. Remember, we mentioned that every matrix can be decomposed into other matrices, where matrix multiplication operations reconstruct the original matrix; this is in general called “matrix factorization”. In the case of SVD/PCA, the constraint is that the eigenvectors/arrays are orthogonal; however, there are other decomposition algorithms with other constraints.

#### 4\.2\.2\.1 Independent component analysis (ICA)

We will first start with independent component analysis (ICA), which is an extension of PCA. The ICA algorithm decomposes a given matrix \\(X\\) as follows: \\(X\=SA\\) (Hyvärinen [2013](#ref-hyvarinen2013independent)). The rows of \\(A\\) could be interpreted similarly to the eigengenes and the columns of \\(S\\) could be interpreted as eigenarrays. These components are sometimes called metagenes and metasamples in the literature. Traditionally, \\(S\\) is called the source matrix and \\(A\\) is called the mixing matrix. ICA was developed for a problem called “blind\-source separation”. In this problem, multiple microphones record sound from multiple instruments, and the task is to disentangle the sounds of the original instruments, since each microphone records a combination of sounds. In this respect, the matrix \\(S\\) contains the original signals (sounds from different instruments) and their linear combinations are identified by the weights in \\(A\\); the product of \\(A\\) and \\(S\\) makes up the matrix \\(X\\), which is the observed signal from the different microphones. With this interpretation in mind, if the interest is strictly in expression patterns that represent the hidden expression programs, the genes\-by\-samples matrix is transposed to a samples\-by\-genes matrix, so that the columns of \\(S\\) represent these expression patterns, here referred to as “metagenes”, hopefully representing distinct expression programs (Figure [4\.15](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:ICAcartoon) ).
FIGURE 4\.15: Independent Component Analysis (ICA)

ICA requires that the columns of the \\(S\\) matrix, the “metagenes” in our example above, are statistically independent. This is a stronger constraint than uncorrelatedness. In this case, there should be no relationship between non\-linear transformations of the data either. There are different ways of ensuring this statistical independence, and this is the main constraint when finding the optimal \\(A\\) and \\(S\\) matrices. The various ICA algorithms use different proxies for statistical independence, and the definition of that proxy is the main difference between many ICA algorithms. The algorithm we are going to use requires that the metagenes, or sources in the \\(S\\) matrix, are as non\-Gaussian (non\-normal) as possible. Non\-Gaussianity is shown to be related to statistical independence (Hyvärinen [2013](#ref-hyvarinen2013independent)). Below, we are using the `fastICA::fastICA()` function to extract 2 components and plot the metagenes, i.e. the columns of matrix \\(S\\), shown in Figure [4\.16](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:fastICAex). This way, we can visualize samples in a 2D plot. If we wanted to plot the relationship between genes, we would use the rows of matrix \\(A\\).

```
library(fastICA)
ica.res=fastICA(t(mat),n.comp=2) # apply ICA

# plot reduced dimensions
plot(ica.res$S[,1],ica.res$S[,2],col=as.factor(annotation_col$LeukemiaType))
```

FIGURE 4\.16: Leukemia gene expression values per patient on reduced dimensions by ICA.

#### 4\.2\.2\.2 Non\-negative matrix factorization (NMF)

Non\-negative matrix factorization algorithms are a series of algorithms that aim to decompose the matrix \\(X\\) into the product of matrices \\(W\\) and \\(H\\), \\(X\=WH\\) (Figure [4\.17](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:NMFcartoon)) (Lee and Seung [2001](#ref-lee2001algorithms)). The constraint is that \\(W\\) and \\(H\\) must contain non\-negative values, and so must \\(X\\). This is well suited for data sets that cannot contain negative values, such as gene expression. This also implies additivity of components or latent factors. This is in line with the idea that the expression pattern of a gene across samples is the weighted sum of multiple metagenes. Unlike ICA and SVD/PCA, the metagenes can never be combined in a subtractive way. In this sense, expression programs potentially captured by metagenes are combined additively.

FIGURE 4\.17: Non\-negative matrix factorization summary

The algorithms that compute NMF try to minimize the cost function \\(D(X,WH)\\), which is the distance between \\(X\\) and \\(WH\\). The early algorithms just use the Euclidean distance, which translates to \\(\\sum(X\-WH)^2\\); this is the square of the Frobenius norm, which you will see written in the literature as \\(\\\|X\-WH\\\|\_{F}\\). However, this is not the only distance metric; other distance metrics are also used in NMF algorithms. In addition, there could be other parameters to optimize that relate to the sparseness of the \\(W\\) and \\(H\\) matrices. With sparse \\(W\\) and \\(H\\), each entry in the \\(X\\) matrix is expressed as the sum of a small number of components. This makes the interpretation easier: if the weights are \\(0\\), then there is no contribution from the corresponding factors.
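To make the Frobenius\-norm cost above concrete, here is a tiny sketch with hypothetical small non\-negative matrices (not the expression data): \\(\\\|X\-WH\\\|\_{F}\\) is just the square root of the summed squared entries of the residual.

```
set.seed(1)
W = matrix(runif(6), nrow = 3)                       # 3 x 2, non-negative
H = matrix(runif(8), nrow = 2)                       # 2 x 4, non-negative
X = W %*% H + matrix(runif(12, 0, 0.05), nrow = 3)   # noisy non-negative "data"
sqrt(sum((X - W %*% H)^2))                           # Frobenius norm of the residual
norm(X - W %*% H, type = "F")                        # same value via base R norm()
```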
Below, we are plotting the values of the metagenes (rows of \\(H\\)) for components 1 and 3, shown in Figure [4\.18](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:nmfCode). In this context, these values can also be interpreted as the relationship between samples. If we wanted to plot the relationship between genes, we would plot the columns of the \\(W\\) matrix.

```
library(NMF)
res=NMF::nmf(mat,rank=3,seed="nndsvd") # nmf with 3 components/factors
w <- basis(res) # get W
h <- coef(res)  # get H

# plot 1st factor against 3rd factor
plot(h[1,],h[3,],col=as.factor(annotation_col$LeukemiaType),pch=19)
```

FIGURE 4\.18: Leukemia gene expression values per patient on reduced dimensions by NMF. Components 1 and 3 are used for the plot.

We should add the note that, due to the random starting points of the optimization algorithm, NMF is usually run multiple times and a consensus clustering approach is used when clustering samples. This simply means that samples are clustered together if they cluster together in multiple runs of the NMF. The NMF package we used above has built\-in ways to achieve this. In addition, NMF is a family of algorithms. The choice of cost function to optimize the difference between \\(X\\) and \\(WH\\), and the methods used for optimization, create multiple variants of NMF. The “method” parameter in the above `nmf()` function controls the algorithm choice for NMF.

#### 4\.2\.2\.3 Choosing the number of components and ranking components in importance

In both ICA and NMF, there is no well\-defined way to rank components or to select the number of components. There are a couple of approaches that might suit both ICA and NMF for ranking components. One can use the norms of the columns/rows in the mixing matrices. This could simply mean taking the sum of absolute values in the mixing matrices. For our ICA example above, we would take the sum of the absolute values of the rows of \\(A\\), since we transposed the input matrix \\(X\\) before ICA. And for NMF, we would use the columns of \\(W\\). These ideas assume that larger coefficients in the weight or mixing matrices indicate more important components.

For selecting the optimal number of components, the NMF package provides different strategies. One way is to calculate the RSS for each \\(k\\), the number of components, and take the \\(k\\) where the RSS curve starts to stabilize. However, these strategies require that you run the algorithm with multiple possible component numbers. The `nmf` function will run these automatically when the `rank` argument is a vector of numbers. For ICA, there is no straightforward way to choose the right number of components. A common strategy is to start with as many components as variables and try to rank them by their usefulness.

**Want to know more ?**

The NMF package vignette has extensive information on how to run NMF to get stable results and an estimate of components: [https://cran.r\-project.org/web/packages/NMF/vignettes/NMF\-vignette.pdf](https://cran.r-project.org/web/packages/NMF/vignettes/NMF-vignette.pdf)

### 4\.2\.3 Multi\-dimensional scaling

MDS is a set of data analysis techniques that display the structure of distance data from a high\-dimensional space in a lower\-dimensional space without much loss of information (Cox and Cox [2000](#ref-cox2000multidimensional)). The overall goal of MDS is to faithfully represent these distances with the lowest possible dimensions.
The so\-called “classical multi\-dimensional scaling” algorithm tries to minimize the following function:

\\({\\displaystyle Stress\_{D}(z\_{1},z\_{2},...,z\_{N})\={\\Biggl (}{\\frac {\\sum \_{i,j}{\\bigl (}d\_{ij}\-\\\|z\_{i}\-z\_{j}\\\|{\\bigr )}^{2}}{\\sum \_{i,j}d\_{ij}^{2}}}{\\Biggr )}^{1/2}}\\)

Here the function compares the distances between the new data points in the lower dimension, \\((z\_{1},z\_{2},...,z\_{N})\\), to the input distances between the original data points, which are the distances between samples in our gene expression example. It turns out that this problem can be efficiently solved with SVD/PCA on the scaled distance matrix; the projection on the eigenvectors will be the optimal solution for the equation above. Therefore, classical MDS is sometimes called Principal Coordinates Analysis in the literature. However, later variants improve on classical MDS by using this as a starting point and optimizing a slightly different cost function that again measures how well the low\-dimensional distances correspond to the high\-dimensional distances. This variant is called non\-metric MDS and, due to the nature of the cost function, it assumes a less stringent relationship between the low\-dimensional distances \\(\\\|z\_{i}\-z\_{j}\\\|\\) and the input distances \\(d\_{ij}\\). Formally, this procedure tries to optimize the following function.

\\({\\displaystyle Stress\_{D}(z\_{1},z\_{2},...,z\_{N})\={\\Biggl (}{\\frac {\\sum \_{i,j}{\\bigl (}\\\|z\_{i}\-z\_{j}\\\|\-\\theta(d\_{ij}){\\bigr )}^{2}}{\\sum \_{i,j}\\\|z\_{i}\-z\_{j}\\\|^{2}}}{\\Biggr )}^{1/2}}\\)

The core of a non\-metric MDS algorithm is a two\-fold optimization process. First, the optimal monotonic transformation of the distances has to be found, which is shown in the above formula as \\(\\theta(d\_{ij})\\). Second, the points of a low\-dimensional configuration have to be optimally arranged, so that their distances match the scaled distances as closely as possible. These two steps are repeated until some convergence criterion is reached. This usually means that the cost function does not improve much after a certain number of iterations. The basic steps in a non\-metric MDS algorithm are:

1. Find a random low\-dimensional configuration of points, or in the variant we will be using below, start with the configuration returned by classical MDS.
2. Calculate the distances between the points in the low dimension, \\(\\\|z\_{i}\-z\_{j}\\\|\\), where \\(z\_{i}\\) and \\(z\_{j}\\) are the vectors of positions for samples \\(i\\) and \\(j\\).
3. Find the optimal monotonic transformation of the input distances, \\({\\textstyle \\theta(d\_{ij})}\\), to approximate the input distances by the low\-dimensional distances. This is achieved by isotonic regression, where a monotonically increasing free\-form function is fit. This step practically ensures that the ranking of the low\-dimensional distances is similar to the ranking of the input distances.
4. Minimize the stress function by re\-configuring the low\-dimensional space while keeping the \\(\\theta\\) function constant.
5. Repeat from Step 2 until convergence.

We will now demonstrate both classical MDS and Kruskal’s isometric MDS.
```
mds=cmdscale(dist(t(mat)))
isomds=MASS::isoMDS(dist(t(mat)))
```

```
## initial value 15.907414 
## final value 13.462986 
## converged
```

```
# plot the patients in the 2D space
par(mfrow=c(1,2))
plot(mds,pch=19,col=as.factor(annotation_col$LeukemiaType),
     main="classical MDS")
plot(isomds$points,pch=19,col=as.factor(annotation_col$LeukemiaType),
     main="isotonic MDS")
```

FIGURE 4\.19: Leukemia gene expression values per patient on reduced dimensions by classical MDS and isometric MDS.

The resulting plot is shown in Figure [4\.19](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:mds2). In this example, there is not much difference between isotonic MDS and classical MDS. However, there might be cases where different MDS methods provide visible changes in the scatter plots.

### 4\.2\.4 t\-Distributed Stochastic Neighbor Embedding (t\-SNE)

t\-SNE maps the distances in high\-dimensional space to lower dimensions and it is similar to the MDS method in this respect. But the benefit of this particular method is that it tries to preserve the local structure of the data, so that the distances and grouping of the points we observe in lower dimensions, such as a 2D scatter plot, are as close as possible to the distances we observe in the high\-dimensional space (Maaten and Hinton [2008](#ref-maaten2008visualizing)). As with other dimension reduction methods, you can choose how many lower dimensions you need. The main difference of t\-SNE, as mentioned above, is that it tries to preserve the local structure of the data. This kind of local structure embedding is missing in the MDS algorithm, which otherwise has a similar goal. MDS tries to optimize the distances as a whole, whereas t\-SNE optimizes the distances with the local structure in mind. This behavior is controlled by the “perplexity” parameter in the arguments. This parameter controls how much the local structure influences the distance calculation. The lower the value, the more the local structure is taken into account. Similar to MDS, the process is an optimization algorithm. Here, we also try to minimize the divergence between observed distances and lower\-dimensional distances. However, in the case of t\-SNE, the observed distances and lower\-dimensional distances are transformed using a probabilistic framework with their local variance in mind.

From here on, we will provide a bit more detail on how the algorithm works, in case the conceptual description above is too shallow. In t\-SNE, the Euclidean distances between data points are transformed into conditional similarities between points. This is done by assuming a normal distribution on each data point with a variance calculated ultimately by the use of the “perplexity” parameter. The perplexity parameter is, in a sense, a guess about the number of the closest neighbors each point has. Setting it to higher values gives more weight to global structure. Given that \\(d\_{ij}\\) is the Euclidean distance between points \\(i\\) and \\(j\\), the conditional similarity score \\(p\_{j \| i}\\) is calculated as shown below.

\\\[p\_{j \| i} \= \\frac{\\exp(\-d\_{ij}^2 / 2 \\sigma\_i^2)}{\\sum\_{k \\neq i} \\exp(\-d\_{ik}^2 / 2 \\sigma\_i^2)}\\\]

This similarity is symmetrized by incorporating \\(p\_{i \| j}\\) as shown below.

\\\[p\_{i j}\=\\frac{p\_{j\|i} \+ p\_{i\|j}}{2n}\\\]

For the distances in the reduced dimension, we use a t\-distribution with one degree of freedom. In the formula below, \\(\\\|y\_i\-y\_j\\\|^2\\) is the squared Euclidean distance between points \\(i\\) and \\(j\\) in the reduced dimensions.
\\\[ q\_{i j} \= \\frac{(1\+ \\\|y\_i\-y\_j\\\|^2)^{\-1}}{\\sum\_{k \\neq l} (1\+ \\\|y\_k\-y\_l\\\|^2)^{\-1}} \\\]

As with most of the algorithms we have seen in this section, t\-SNE is an optimization process in essence. In every iteration, the points along the lower dimensions are re\-arranged to minimize the formulated difference between the observed joint probabilities (\\(p\_{i j}\\)) and the low\-dimensional joint probabilities (\\(q\_{i j}\\)). Here we are trying to compare probability distributions. In this case, this is done using a method called Kullback\-Leibler divergence, or KL\-divergence. In the formula below, since the \\(p\_{i j}\\) is pre\-defined using the original distances, the only way to optimize is to play with \\(q\_{i j}\\), because it depends on the configuration of points in the lower\-dimensional space. This configuration is optimized to minimize the KL\-divergence between \\(p\_{i j}\\) and \\(q\_{i j}\\).

\\\[ KL(P\|\|Q) \= \\sum\_{i, j} p\_{ij} \\, \\log \\frac{p\_{ij}}{q\_{ij}}. \\\]

Strictly speaking, the KL\-divergence measures how well the distribution \\(P\\), which is observed using the original data points, can be approximated by the distribution \\(Q\\), which is modeled using the points in the lower dimension. If the distributions are identical, the KL\-divergence would be \\(0\\). Naturally, the more divergent the distributions are, the higher the KL\-divergence will be.

We will now show how to use t\-SNE on our gene expression data set using the `Rtsne` package. We are setting the random seed because, again, the t\-SNE optimization algorithm has random starting points and this might create non\-identical results in every run. After calculating the t\-SNE lower dimension embeddings, we plot the points in a 2D scatter plot, shown in Figure [4\.20](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:tsne).

```
library("Rtsne")
set.seed(42) # Set a seed if you want reproducible results
tsne_out <- Rtsne(t(mat),perplexity = 10) # Run TSNE

#image(t(as.matrix(dist(tsne_out$Y))))
# Show the objects in the 2D tsne representation
plot(tsne_out$Y,col=as.factor(annotation_col$LeukemiaType),
     pch=19)

# create the legend for the Leukemia types
legend("bottomleft",
       legend=unique(annotation_col$LeukemiaType),
       fill =palette("default"),
       border=NA,box.col=NA)
```

FIGURE 4\.20: t\-SNE of leukemia expression dataset

As you might have noticed, we set a random seed again with the `set.seed()` function. The optimization algorithm starts with a random configuration of points in the lower\-dimensional space, and in each iteration it tries to improve on the previous lower\-dimensional configuration, which is why different starting points can result in different final outcomes.

**Want to know more ?**

* How perplexity affects t\-sne, interactive examples: [https://distill.pub/2016/misread\-tsne/](https://distill.pub/2016/misread-tsne/)
* More on perplexity: [https://blog.paperspace.com/dimension\-reduction\-with\-t\-sne/](https://blog.paperspace.com/dimension-reduction-with-t-sne/)
* Intro to t\-SNE: [https://www.oreilly.com/learning/an\-illustrated\-introduction\-to\-the\-t\-sne\-algorithm](https://www.oreilly.com/learning/an-illustrated-introduction-to-the-t-sne-algorithm)
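As a tiny numeric aside on the KL\-divergence used by t\-SNE above, here is a sketch with hypothetical discrete probabilities (not taken from the t\-SNE fit): identical distributions give a divergence of 0, and the divergence grows as the distributions drift apart.

```
kl = function(p, q) sum(p * log(p / q))  # discrete KL-divergence
p = c(0.5, 0.3, 0.2)
kl(p, p)                  # 0: identical distributions
kl(p, c(0.2, 0.3, 0.5))   # > 0: Q approximates P poorly
```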
We will first focus on geometrical interpretation, where this operation can be interpreted as rotating the original dimensions of the data. For this, we go back to our example gene expression data set. In this example, we will represent our patients with expression profiles of just two genes, CD33 (ENSG00000105383\) and PYGL (ENSG00000100504\). This way we can visualize them in a scatter plot (see Figure [4\.9](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:scatterb4PCA)). ``` plot(mat[rownames(mat)=="ENSG00000100504",], mat[rownames(mat)=="ENSG00000105383",],pch=19, ylab="CD33 (ENSG00000105383)", xlab="PYGL (ENSG00000100504)") ``` FIGURE 4\.9: Gene expression values of CD33 and PYGL genes across leukemia patients. PCA rotates the original data space such that the axes of the new coordinate system point to the directions of highest variance of the data. The axes or new variables are termed principal components (PCs) and are ordered by variance: The first component, PC 1, represents the direction of the highest variance of the data. The direction of the second component, PC 2, represents the highest of the remaining variance orthogonal to the first component. This can be naturally extended to obtain the required number of components, which together span a component space covering the desired amount of variance. In our toy example with only two genes, the principal components are drawn over the original scatter plot and in the next plot we show the new coordinate system based on the principal components. We will calculate the PCA with the `princomp()` function; this function returns the new coordinates as well. These new coordinates are simply a projection of data over the new coordinates. We will decorate the scatter plots with eigenvectors showing the direction of greatest variation. Then, we will plot the new coordinates (the resulting plot is shown in Figure [4\.10](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:pcaRot)). These are automatically calculated by the `princomp()` function. Notice that we are using the `scale()` function when plotting coordinates and also before calculating the PCA. This function centers the data, meaning it subtracts the mean of each column vector from the elements in the vector. This essentially gives the columns a zero mean. It also divides the data by the standard deviation of the centered columns. These two operations help bring the data to a common scale, which is important for PCA not to be affected by different scales in the data. 
``` par(mfrow=c(1,2)) # create the subset of the data with two genes only # notice that we transpose the matrix so samples are # on the columns sub.mat=t(mat[rownames(mat) %in% c("ENSG00000100504","ENSG00000105383"),]) # ploting our genes of interest as scatter plots plot(scale(mat[rownames(mat)=="ENSG00000100504",]), scale(mat[rownames(mat)=="ENSG00000105383",]), pch=19, ylab="CD33 (ENSG00000105383)", xlab="PYGL (ENSG00000100504)", col=as.factor(annotation_col$LeukemiaType), xlim=c(-2,2),ylim=c(-2,2)) # create the legend for the Leukemia types legend("bottomright", legend=unique(annotation_col$LeukemiaType), fill =palette("default"), border=NA,box.col=NA) # calculate the PCA only for our genes and all the samples pr=princomp(scale(sub.mat)) # plot the direction of eigenvectors # pr$loadings returned by princomp has the eigenvectors arrows(x0=0, y0=0, x1 = pr$loadings[1,1], y1 = pr$loadings[2,1],col="pink",lwd=3) arrows(x0=0, y0=0, x1 = pr$loadings[1,2], y1 = pr$loadings[2,2],col="gray",lwd=3) # plot the samples in the new coordinate system plot(-pr$scores,pch=19, col=as.factor(annotation_col$LeukemiaType), ylim=c(-2,2),xlim=c(-4,4)) # plot the new coordinate basis vectors arrows(x0=0, y0=0, x1 =-2, y1 = 0,col="pink",lwd=3) arrows(x0=0, y0=0, x1 = 0, y1 = -1,col="gray",lwd=3) ``` FIGURE 4\.10: Geometric interpretation of PCA finding eigenvectors that point to the direction of highest variance. Eigenvectors can be used as a new coordinate system. As you can see, the new coordinate system is useful by itself. The X\-axis, which represents the first component, separates the data along the lymphoblastic and myeloid leukemias. PCA in this case, is obtained by calculating eigenvectors of the covariance matrix via an operation called eigen decomposition. The covariance matrix is obtained by covariance of pairwise variables of our expression matrix, which is simply \\({ \\operatorname{cov} (X,Y)\={\\frac {1}{n}}\\sum \_{i\=1}^{n}(x\_{i}\-\\mu\_X)(y\_{i}\-\\mu\_Y)}\\), where \\(X\\) and \\(Y\\) are expression values of genes in a sample in our example. This is a measure of how things vary together, if highly expressed genes in sample A are also highly expressed in sample B and lowly expressed in sample A are also lowly expressed in sample B, then sample A and B will have positive covariance. If the opposite is true, then they will have negative covariance. This quantity is related to correlation, and as we saw in the previous chapter, correlation is standardized covariance. Covariance of variables can be obtained with the `cov()` function, and eigen decomposition of such a matrix will produce a set of orthogonal vectors that span the directions of highest variation. In 2D, you can think of this operation as rotating two perpendicular lines together until they point to the directions where most of the variation in the data lies, similar to Figure [4\.10](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:pcaRot). An important intuition is that, after the rotation prescribed by eigenvectors is complete, the covariance between variables in this rotated dataset will be zero. There is a proper mathematical relationship between covariances of the rotated dataset and the original dataset. That’s why operating on the covariance matrix is related to the rotation of the original dataset. 
``` cov.mat=cov(sub.mat) # calculate covariance matrix cov.mat eigen(cov.mat) # obtain eigen decomposition for eigen values and vectors ``` Eigenvectors and eigenvalues of the covariance matrix indicate the direction and the magnitude of variation of the data. In our visual example, the eigenvectors are so\-called principal components. The eigenvector indicates the direction and the eigenvalues indicate the variation in that direction. Eigenvectors and values exist in pairs: every eigenvector has a corresponding eigenvalue and the eigenvectors are linearly independent from each other, which means they are orthogonal or uncorrelated as in our working example above. The eigenvectors are ranked by their corresponding eigenvalue, the higher the eigenvalue the more important the eigenvector is, because it explains more of the variation compared to the other eigenvectors. This feature of PCA makes the dimension reduction possible. We can sometimes display data sets that have many variables only in 2D or 3D because these top eigenvectors are sometimes enough to capture most of variation in the data. The `screeplot()` function takes the output of the `princomp()` or `prcomp()` functions as input and plots the variance explained by eigenvectors. #### 4\.2\.1\.1 Singular value decomposition and principal component analysis A more common way to calculate PCA is through something called singular value decomposition (SVD). This results in another interpretation of PCA, which is called “latent factor” or “latent component” interpretation. In a moment, it will be clearer what we mean by “latent factors”. SVD is a matrix factorization or decomposition algorithm that decomposes an input matrix,\\(X\\), to three matrices as follows: \\(\\displaystyle \\mathrm{X} \= USV^T\\). In essence, many matrices can be decomposed as a product of multiple matrices and we will come to other techniques later in this chapter. Singular value decomposition is shown in Figure [4\.11](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDcartoon). \\(U\\) is the matrix with eigenarrays on the columns and this has the same dimensions as the input matrix; you might see elsewhere the columns are called eigenassays. \\(S\\) is the matrix that contains the singular values on the diagonal. The singular values are also known as eigenvalues and their square is proportional to explained variation by each eigenvector. Finally, the matrix \\(V^T\\) contains the eigenvectors on its rows. Its interpretation is still the same. Geometrically, eigenvectors point to the direction of highest variance in the data. They are uncorrelated or geometrically orthogonal to each other. These interpretations are identical to the ones we made before. The slight difference is that the decomposition seems to output \\(V^T\\), which is just the transpose of the matrix \\(V\\). However, the SVD algorithms in R usually return the matrix \\(V\\). If you want the eigenvectors, you either simply use the columns of matrix \\(V\\) or rows of \\(V^T\\). FIGURE 4\.11: Singular value decomposition (SVD) explained in a diagram. One thing that is new in Figure [4\.11](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDcartoon) is the concept of eigenarrays. The eigenarrays, sometimes called eigenassays, represent the sample space and can be used to plot the relationship between samples rather than genes. In this way, SVD offers additional information than the PCA using the covariance matrix. 
It offers us a way to summarize both genes and samples. Just as we can project the gene expression profiles on the top two eigengenes and get a 2D representation of genes, with the SVD we can also project the samples on the top two eigenarrays and get a representation of samples in a 2D scatter plot. The eigenvectors could represent independent expression programs across samples, such as cell\-cycle, if we had time\-based expression profiles. However, there is no guarantee that each eigenvector will be biologically meaningful. Similarly, each eigenarray represents samples with specific expression characteristics. For example, the samples that have a particular pathway activated might be correlated to an eigenarray returned by SVD.

Previously, in order to map samples to the reduced 2D space we had to transpose the genes\-by\-samples matrix before using the `princomp()` function. We will now first use SVD on the genes\-by\-samples matrix to get eigenarrays and use that to plot samples on the reduced dimensions. We will project the columns in our original expression data on the eigenarrays and use the first two dimensions in the scatter plot. If you look at the code you will see that for the projection we use the \\(U^T X\\) operation, which is just \\(S V^T\\) if you follow the linear algebra. We will also perform the PCA this time with the `prcomp()` function on the transposed genes\-by\-samples matrix to get similar information, and plot the samples on the reduced coordinates.

```
par(mfrow=c(1,2))
d=svd(scale(mat)) # apply SVD
assays=t(d$u) %*% scale(mat) # projection on eigenassays
plot(assays[1,],assays[2,],pch=19,
     col=as.factor(annotation_col$LeukemiaType))
#plot(d$v[,1],d$v[,2],pch=19,
#     col=annotation_col$LeukemiaType)

pr=prcomp(t(mat),center=TRUE,scale=TRUE) # apply PCA on transposed matrix
# plot new coordinates from PCA, projections on eigenvectors
# since the matrix is transposed eigenvectors represent
plot(pr$x[,1],pr$x[,2],col=as.factor(annotation_col$LeukemiaType))
```

FIGURE 4\.12: SVD on the matrix and its transpose

As you can see in Figure [4\.12](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:svd), the two approaches yield separation of samples, although they are slightly different. The difference comes from the centering and scaling. In the first case, we scale and center columns and in the second case we scale and center rows since the matrix is transposed. If we did not do any scaling or centering, we would get identical plots.

##### 4\.2\.1\.1\.1 Eigenvectors as latent factors/variables

Finally, we can introduce the latent factor interpretation of PCA via SVD. As we have already mentioned, eigenvectors can also be interpreted as expression programs that are shared by several genes, such as a cell cycle expression program when measuring gene expression across samples taken at different time points. In this interpretation, a linear combination of expression programs makes up the expression profile of the genes. Linear combination simply means multiplying each expression program with a weight and adding them up. Our \\(USV^T\\) matrix multiplication can be rearranged to yield such an understanding: we can multiply the eigenarrays \\(U\\) with the diagonal eigenvalues \\(S\\) to produce an m\-by\-n weights matrix called \\(W\\), so \\(W\=US\\), and we can re\-write the equation as just the weights matrix multiplied by the eigenvectors, \\(X\=WV^T\\), as shown in Figure [4\.13](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDasWeigths).
FIGURE 4\.13: Singular value decomposition (SVD) reorganized as multiplication of m\-by\-n weights matrix and eigenvectors

This simple transformation now makes it clear that indeed, if eigenvectors represent expression programs, their linear combination makes up individual gene expression profiles. As an example, we can show that a linear combination of the first two eigenvectors can approximate the expression profile of a hypothetical gene in the gene expression matrix. Figure [4\.14](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:SVDlatentExample) shows how eigenvector 1 and eigenvector 2, combined with certain weights in the \\(W\\) matrix, can approximate the gene expression pattern of our example gene.

FIGURE 4\.14: Gene expression of a gene can be regarded as a linear combination of eigenvectors.

However, SVD does not care about biology. The eigenvectors are just obtained from the data with constraints of orthogonality and the direction of variation. There are examples of eigenvectors representing real expression programs, but that does not mean eigenvectors will always be biologically meaningful. Sometimes a combination of them might make more sense in biology than single eigenvectors. This is also the same for the other matrix factorization techniques we describe below.
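To see the \\(X\=WV^T\\) relationship in code, here is a small sketch that reuses the `d` object returned by `svd()` in the chunk above. It rebuilds the scaled matrix from the full decomposition and then forms a rank\-2 approximation from the top two components, which is the kind of approximation the latent factor interpretation relies on. This is only an illustration, not part of the original analysis.

```
# weights matrix W = U %*% S, with S as a diagonal matrix
W = d$u %*% diag(d$d)

# full reconstruction: should match the scaled matrix up to
# numerical precision
recon = W %*% t(d$v)
max(abs(recon - scale(mat)))

# rank-2 approximation using only the top two eigenvectors
recon2 = W[,1:2] %*% t(d$v[,1:2])
dim(recon2) # same dimensions as the input matrix
```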
### 4\.2\.2 Other matrix factorization methods for dimensionality reduction

We must mention a few other techniques that are similar to SVD in spirit. Remember, we mentioned that every matrix can be decomposed to other matrices where matrix multiplication operations reconstruct the original matrix, which is in general called “matrix factorization”. In the case of SVD/PCA, the constraint is that eigenvectors/arrays are orthogonal; however, there are other decomposition algorithms with other constraints.

#### 4\.2\.2\.1 Independent component analysis (ICA)

We will first start with independent component analysis (ICA), which is an extension of PCA. The ICA algorithm decomposes a given matrix \\(X\\) as follows: \\(X\=SA\\) (Hyvärinen [2013](#ref-hyvarinen2013independent)). The rows of \\(A\\) could be interpreted similarly to the eigengenes and the columns of \\(S\\) could be interpreted as eigenarrays. These components are sometimes called metagenes and metasamples in the literature. Traditionally, \\(S\\) is called the source matrix and \\(A\\) is called the mixing matrix. ICA was developed for a problem called “blind\-source separation”. In this problem, multiple microphones record sound from multiple instruments, and the task is to disentangle sounds from the original instruments since each microphone is recording a combination of sounds. In this respect, the matrix \\(S\\) contains the original signals (sounds from different instruments) and their linear combinations identified by the weights in \\(A\\), and the product of \\(A\\) and \\(S\\) makes up the matrix \\(X\\), which is the observed signal from different microphones. With this interpretation in mind, if the interest is strictly expression patterns that represent the hidden expression programs, we see that the genes\-by\-samples matrix is transposed to a samples\-by\-genes matrix, so that the columns of \\(S\\) represent these expression patterns, here referred to as “metagenes”, hopefully representing distinct expression programs (Figure [4\.15](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:ICAcartoon) ).

FIGURE 4\.15: Independent Component Analysis (ICA)

ICA requires that the columns of the \\(S\\) matrix, the “metagenes” in our example above, are statistically independent. This is a stronger constraint than uncorrelatedness. In this case, there should be no relationship between non\-linear transformations of the data either. There are different ways of ensuring this statistical independence and this is the main constraint when finding the optimal \\(A\\) and \\(S\\) matrices. The various ICA algorithms use different proxies for statistical independence, and the definition of that proxy is the main difference between many ICA algorithms. The algorithm we are going to use requires that the metagenes or sources in the \\(S\\) matrix are as non\-Gaussian (non\-normal) as possible.
Non\-Gaussianity has been shown to be related to statistical independence (Hyvärinen [2013](#ref-hyvarinen2013independent)). Below, we are using the `fastICA::fastICA()` function to extract 2 components and plot the rows of matrix \\(A\\) which represents metagenes, shown in Figure [4\.16](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:fastICAex). This way, we can visualize samples in a 2D plot. If we wanted to plot the relationship between genes we would use the columns of matrix \\(S\\).

```
library(fastICA)
ica.res=fastICA(t(mat),n.comp=2) # apply ICA

# plot reduced dimensions
plot(ica.res$S[,1],ica.res$S[,2],col=as.factor(annotation_col$LeukemiaType))
```

FIGURE 4\.16: Leukemia gene expression values per patient on reduced dimensions by ICA.

#### 4\.2\.2\.2 Non\-negative matrix factorization (NMF)

Non\-negative matrix factorization algorithms are a family of algorithms that aim to decompose the matrix \\(X\\) into the product of matrices \\(W\\) and \\(H\\), \\(X\=WH\\) (Figure [4\.17](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:NMFcartoon)) (Lee and Seung [2001](#ref-lee2001algorithms)). The constraint is that \\(W\\) and \\(H\\) must contain non\-negative values, and so must \\(X\\). This is well suited for data sets that cannot contain negative values, such as gene expression. This also implies additivity of components or latent factors. This is in line with the idea that the expression pattern of a gene across samples is the weighted sum of multiple metagenes. Unlike ICA and SVD/PCA, the metagenes can never be combined in a subtractive way. In this sense, expression programs potentially captured by metagenes are combined additively.

FIGURE 4\.17: Non\-negative matrix factorization summary

The algorithms that compute NMF try to minimize the cost function \\(D(X,WH)\\), which is the distance between \\(X\\) and \\(WH\\). The early algorithms just use the Euclidean distance, which translates to \\(\\sum(X\-WH)^2\\); this is also known as the Frobenius norm and you will see in the literature it is written as \\(\\\|X\-WH\\\|\_{F}\\). However, this is not the only distance metric; other distance metrics are also used in NMF algorithms. In addition, there could be other parameters to optimize that relate to the sparseness of the \\(W\\) and \\(H\\) matrices. With sparse \\(W\\) and \\(H\\), each entry in the \\(X\\) matrix is expressed as the sum of a small number of components. This makes the interpretation easier: if the weights are \\(0\\), then there is no contribution from the corresponding factors. Below, we are plotting the values of the metagenes (rows of \\(H\\)) for components 1 and 3, shown in Figure [4\.18](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:nmfCode). In this context, these values can also be interpreted as the relationship between samples. If we wanted to plot the relationship between genes we would plot the columns of the \\(W\\) matrix.

```
library(NMF)
res=NMF::nmf(mat,rank=3,seed="nndsvd") # nmf with 3 components/factors
w <- basis(res) # get W
h <- coef(res)  # get H

# plot 1st factor against 3rd factor
plot(h[1,],h[3,],col=as.factor(annotation_col$LeukemiaType),pch=19)
```

FIGURE 4\.18: Leukemia gene expression values per patient on reduced dimensions by NMF. Components 1 and 3 are used for the plot.
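Since the NMF cost function above is just a distance between \\(X\\) and \\(WH\\), we can directly check how well the fitted factorization reconstructs the expression matrix. The sketch below reuses the `w` and `h` objects from the chunk above and computes the Frobenius norm of the residuals; treat it as an illustration of the cost function rather than part of the original workflow.

```
# reconstruct the matrix from the fitted factors, i.e. WH
recon.nmf = w %*% h

# Frobenius norm of the residuals, ||X - WH||_F
sqrt(sum((mat - recon.nmf)^2))
```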
We should add the note that, due to random starting points of the optimization algorithm, NMF is usually run multiple times and a consensus clustering approach is used when clustering samples. This simply means that samples are clustered together if they cluster together in multiple runs of the NMF. The NMF package we used above has built\-in ways to achieve this. In addition, NMF is a family of algorithms. The choice of cost function to optimize the difference between \\(X\\) and \\(WH\\), and the methods used for optimization, create multiple variants of NMF. The “method” parameter in the above `nmf()` function controls the algorithm choice for NMF.

#### 4\.2\.2\.3 Choosing the number of components and ranking components in importance

In both ICA and NMF, there is no well\-defined way to rank components or to select the number of components. There are a couple of approaches that might suit both ICA and NMF for ranking components. One can use the norms of columns/rows in the mixing matrices. This could simply mean taking the sum of the absolute values in the mixing matrices. For our ICA example above, we would take the sum of the absolute values of the rows of \\(A\\) since we transposed the input matrix \\(X\\) before ICA. And for the NMF, we would use the columns of \\(W\\). These ideas assume that the larger coefficients in the weight or mixing matrices indicate more important components.

For selecting the optimal number of components, the NMF package provides different strategies. One way is to calculate the RSS for each \\(k\\), the number of components, and take the \\(k\\) where the RSS curve starts to stabilize. However, these strategies require that you run the algorithm with multiple possible component numbers. The `nmf` function will run these automatically when the `rank` argument is a vector of numbers; a short sketch of this appears after the information box below. For ICA there is no straightforward way to choose the right number of components. A common strategy is to start with as many components as variables and try to rank them by their usefulness.

**Want to know more ?**

The NMF package vignette has extensive information on how to run NMF to get stable results and an estimate of components: [https://cran.r\-project.org/web/packages/NMF/vignettes/NMF\-vignette.pdf](https://cran.r-project.org/web/packages/NMF/vignettes/NMF-vignette.pdf)
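As a concrete illustration of the rank survey strategy mentioned above, the sketch below asks `nmf()` to fit a range of candidate ranks and plots the quality measures (including the RSS curve) returned by the package, so you can look for the point where they stabilize. The candidate ranks, the number of runs and the seed are arbitrary choices made here for illustration, and this can take a while on a large matrix.

```
library(NMF)

# fit NMF for several candidate ranks; nrun repeats each fit
# from different random starting points
estim.r = nmf(mat, rank = 2:5, nrun = 10, seed = 123)

# plot quality measures (RSS, cophenetic correlation, etc.)
# against the candidate ranks
plot(estim.r)
```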
### 4\.2\.3 Multi\-dimensional scaling

MDS is a set of data analysis techniques that displays the structure of distance data from a high\-dimensional space in a lower\-dimensional space without much loss of information (Cox and Cox [2000](#ref-cox2000multidimensional)). The overall goal of MDS is to faithfully represent these distances with the lowest possible dimensions. The so\-called “classical multi\-dimensional scaling” algorithm tries to minimize the following function:

\\({\\displaystyle Stress\_{D}(z\_{1},z\_{2},...,z\_{N})\={\\Biggl (}{\\frac {\\sum \_{i,j}{\\bigl (}d\_{ij}\-\\\|z\_{i}\-z\_{j}\\\|{\\bigr )}^{2}}{\\sum \_{i,j}d\_{ij}^{2}}}{\\Biggr )}^{1/2}}\\)

Here the function compares the new data points on the lower dimension \\((z\_{1},z\_{2},...,z\_{N})\\) to the input distances between data points, or the distances between samples in our gene expression example. It turns out that this problem can be efficiently solved with SVD/PCA on the scaled distance matrix; the projection on the eigenvectors is the optimal solution for the equation above. Therefore, classical MDS is sometimes called Principal Coordinates Analysis in the literature. However, later variants improve on classical MDS by using this as a starting point and optimizing a slightly different cost function that again measures how well the low\-dimensional distances correspond to the high\-dimensional distances. This variant is called non\-metric MDS and, due to the nature of the cost function, it assumes a less stringent relationship between the low\-dimensional distances \\(\\\|z\_{i}\-z\_{j}\\\|\\) and the input distances \\(d\_{ij}\\). Formally, this procedure tries to optimize the following function.

\\({\\displaystyle Stress\_{D}(z\_{1},z\_{2},...,z\_{N})\={\\Biggl (}{\\frac {\\sum \_{i,j}{\\bigl (}\\\|z\_{i}\-z\_{j}\\\|\-\\theta(d\_{ij}){\\bigr )}^{2}}{\\sum \_{i,j}\\\|z\_{i}\-z\_{j}\\\|^{2}}}{\\Biggr )}^{1/2}}\\)

The core of a non\-metric MDS algorithm is a two\-fold optimization process. First the optimal monotonic transformation of the distances has to be found, which is shown in the above formula as \\(\\theta(d\_{ij})\\). Secondly, the points in a low\-dimensional configuration have to be optimally arranged, so that their distances match the scaled distances as closely as possible. These two steps are repeated until some convergence criterion is reached. This usually means that the cost function does not improve much after a certain number of iterations. The basic steps in a non\-metric MDS algorithm are:

1. Find a random low\-dimensional configuration of points, or in the variant we will be using below, start with the configuration returned by classical MDS.
2. Calculate the distances between the points in the low dimension \\(\\\|z\_{i}\-z\_{j}\\\|\\); \\(z\_{i}\\) and \\(z\_{j}\\) are vectors of positions for samples \\(i\\) and \\(j\\).
3. Find the optimal monotonic transformation of the input distances, \\({\\textstyle \\theta(d\_{ij})}\\), to approximate the input distances with the low\-dimensional distances. This is achieved by isotonic regression, where a monotonically increasing free\-form function is fit. This step practically ensures that the ranking of the low\-dimensional distances is similar to the ranking of the input distances.
4. Minimize the stress function by re\-configuring the low\-dimensional space while keeping the \\(\\theta\\) function constant.
5. Repeat from Step 2 until convergence.

We will now demonstrate both classical MDS and Kruskal’s isometric MDS.

```
mds=cmdscale(dist(t(mat)))
isomds=MASS::isoMDS(dist(t(mat)))
```

```
## initial value 15.907414
## final value 13.462986
## converged
```

```
# plot the patients in the 2D space
par(mfrow=c(1,2))
plot(mds,pch=19,col=as.factor(annotation_col$LeukemiaType),
     main="classical MDS")
plot(isomds$points,pch=19,col=as.factor(annotation_col$LeukemiaType),
     main="isotonic MDS")
```

FIGURE 4\.19: Leukemia gene expression values per patient on reduced dimensions by classical MDS and isometric MDS.

The resulting plot is shown in Figure [4\.19](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:mds2). In this example, there is not much difference between isotonic MDS and classical MDS. However, there might be cases where different MDS methods provide visible changes in the scatter plots.

### 4\.2\.4 t\-Distributed Stochastic Neighbor Embedding (t\-SNE)

t\-SNE maps the distances in high\-dimensional space to lower dimensions and it is similar to the MDS method in this respect. But the benefit of this particular method is that it tries to preserve the local structure of the data, so that the distances and grouping of the points we observe in lower dimensions, such as a 2D scatter plot, are as close as possible to the distances we observe in the high\-dimensional space (Maaten and Hinton [2008](#ref-maaten2008visualizing)). As with other dimension reduction methods, you can choose how many lower dimensions you need. The main difference of t\-SNE, as mentioned above, is that it tries to preserve the local structure of the data. This kind of local structure embedding is missing in the MDS algorithm, which also has a similar goal. MDS tries to optimize the distances as a whole, whereas t\-SNE optimizes the distances with the local structure in mind. This is defined by the “perplexity” parameter in the arguments. This parameter controls how much the local structure influences the distance calculation. The lower the value, the more the local structure is taken into account. Similar to MDS, the process is an optimization algorithm. Here, we also try to minimize the divergence between the observed distances and the lower\-dimensional distances. However, in the case of t\-SNE, the observed distances and lower\-dimensional distances are transformed using a probabilistic framework with their local variance in mind.

From here on, we will provide a bit more detail on how the algorithm works in case the conceptual description above is too shallow. In t\-SNE the Euclidean distances between data points are transformed into a conditional similarity between points. This is done by assuming a normal distribution on each data point with a variance calculated ultimately by the use of the “perplexity” parameter. The perplexity parameter is, in a sense, a guess about the number of the closest neighbors each point has. Setting it to higher values gives more weight to global structure. Given \\(d\_{ij}\\) is the Euclidean distance between points \\(i\\) and \\(j\\), the similarity score \\(p\_{ij}\\) is calculated as shown below.

\\\[p\_{j \| i} \= \\frac{\\exp(\-\\\|d\_{ij}\\\|^2 / 2 \\sigma\_i^2)}{\\sum\_{k \\neq i} \\exp(\-\\\|d\_{ik}\\\|^2 / 2 \\sigma\_i^2)}\\\]

This distance is symmetrized by incorporating \\(p\_{i \| j}\\) as shown below.
\\\[p\_{i j}\=\\frac{p\_{j\|i} \+ p\_{i\|j}}{2n}\\\]

For the distances in the reduced dimension, we use the t\-distribution with one degree of freedom. In the formula below, \\(\\\|y\_i\-y\_j\\\|^2\\) is the Euclidean distance between points \\(i\\) and \\(j\\) in the reduced dimensions.

\\\[ q\_{i j} \= \\frac{(1\+ \\\|y\_i\-y\_j\\\|^2)^{\-1}}{\\sum\_{k \\neq l} (1\+ \\\|y\_k\-y\_l\\\|^2)^{\-1}} \\\]

Like most of the algorithms we have seen in this section, t\-SNE is in essence an optimization process. In every iteration the points along the lower dimensions are re\-arranged to minimize the formulated difference between the observed joint probabilities (\\(p\_{i j}\\)) and the low\-dimensional joint probabilities (\\(q\_{i j}\\)). Here we are trying to compare probability distributions. In this case, this is done using a method called Kullback\-Leibler divergence, or KL\-divergence. In the formula below, since \\(p\_{i j}\\) is pre\-defined using the original distances, the only way to optimize is to play with \\(q\_{i j}\\) because it depends on the configuration of points in the lower\-dimensional space. This configuration is optimized to minimize the KL\-divergence between \\(p\_{i j}\\) and \\(q\_{i j}\\).

\\\[ KL(P\|\|Q) \= \\sum\_{i, j} p\_{ij} \\, \\log \\frac{p\_{ij}}{q\_{ij}}. \\\]

Strictly speaking, KL\-divergence measures how well the distribution \\(P\\), which is observed using the original data points, can be approximated by distribution \\(Q\\), which is modeled using points on the lower dimension. If the distributions are identical, the KL\-divergence would be \\(0\\). Naturally, the more divergent the distributions are, the higher the KL\-divergence will be.

We will now show how to use t\-SNE on our gene expression data set using the `Rtsne` package. We are setting the random seed because, again, the t\-SNE optimization algorithm has random starting points and this might create non\-identical results in every run. After calculating the t\-SNE lower dimension embeddings we plot the points in a 2D scatter plot, shown in Figure [4\.20](dimensionality-reduction-techniques-visualizing-complex-data-sets-in-2d.html#fig:tsne).

```
library("Rtsne")
set.seed(42) # Set a seed if you want reproducible results
tsne_out <- Rtsne(t(mat),perplexity = 10) # Run TSNE

#image(t(as.matrix(dist(tsne_out$Y))))
# Show the objects in the 2D tsne representation
plot(tsne_out$Y,col=as.factor(annotation_col$LeukemiaType),
     pch=19)

# create the legend for the Leukemia types
legend("bottomleft",
       legend=unique(annotation_col$LeukemiaType),
       fill =palette("default"),
       border=NA,box.col=NA)
```

FIGURE 4\.20: t\-SNE of leukemia expression dataset

As you might have noticed, we again set a random seed with the `set.seed()` function. The optimization algorithm starts with a random configuration of points in the lower\-dimensional space, and in each iteration it tries to improve on the previous lower\-dimensional configuration, which is why different starting points can result in different final outcomes.

**Want to know more ?**

* How perplexity affects t\-sne, interactive examples: [https://distill.pub/2016/misread\-tsne/](https://distill.pub/2016/misread-tsne/)
* More on perplexity: [https://blog.paperspace.com/dimension\-reduction\-with\-t\-sne/](https://blog.paperspace.com/dimension-reduction-with-t-sne/)
* Intro to t\-SNE: [https://www.oreilly.com/learning/an\-illustrated\-introduction\-to\-the\-t\-sne\-algorithm](https://www.oreilly.com/learning/an-illustrated-introduction-to-the-t-sne-algorithm)
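The effect of the perplexity parameter discussed above can be seen directly by running t\-SNE with a few different values and comparing the resulting maps. Below is a small sketch along those lines; the perplexity values are arbitrary choices for illustration (they must stay well below the number of samples), and the layouts will differ between values even with a fixed seed.

```
library("Rtsne")
par(mfrow=c(1,3))
for (p in c(2, 5, 15)) {
  set.seed(42) # fix the seed so runs differ only by perplexity
  tsne_p <- Rtsne(t(mat), perplexity = p)
  plot(tsne_p$Y, col = as.factor(annotation_col$LeukemiaType),
       pch = 19, main = paste("perplexity =", p))
}
```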
4\.3 Exercises
--------------

For this set of exercises we will be using the expression data shown below:

```
expFile=system.file("extdata",
                    "leukemiaExpressionSubset.rds",
                    package="compGenomRData")
mat=readRDS(expFile)
```

### 4\.3\.1 Clustering

1. We want to observe the effect of data transformation in this exercise. Scale the expression matrix with the `scale()` function. In addition, try taking the logarithm of the data with the `log2()` function prior to scaling. Make box plots of the unscaled and scaled data sets using the `boxplot()` function. \[Difficulty: **Beginner/Intermediate**]
2. For the same problem above using the unscaled data and different data transformation strategies, use the `ward.d` distance in hierarchical clustering and plot multiple heatmaps. You can try to use the `pheatmap` library or any other library that can plot a heatmap with a dendrogram. Which data\-scaling strategy provides more homogeneous clusters with respect to disease types? \[Difficulty: **Beginner/Intermediate**]
3. For the transformed and untransformed data sets used in the exercise above, use the silhouette for deciding the number of clusters using hierarchical clustering. \[Difficulty: **Intermediate/Advanced**]
4. Now, use the Gap Statistic for deciding the number of clusters in hierarchical clustering. Is it the same number of clusters identified by the two methods? Is it similar to the number of clusters obtained using the k\-means algorithm in the chapter? \[Difficulty: **Intermediate/Advanced**]

### 4\.3\.2 Dimension reduction

We will be using the leukemia expression data set again. You can use it as shown in the clustering exercises.

1. Do PCA on the expression matrix using the `princomp()` function and then use the `screeplot()` function to visualize the explained variation by eigenvectors. How many top components explain 95% of the variation? \[Difficulty: **Beginner**]
2. Our next tasks are to remove eigenvectors and reconstruct the matrix using SVD, then calculate the reconstruction error as the difference between the original and the reconstructed matrix. HINT: You have to use the `svd()` function and set the eigenvalue to \\(0\\) for the component you want to remove. \[Difficulty: **Intermediate/Advanced**]
3. Produce a 10\-component ICA from the expression data set. Remove each component and measure the reconstruction error without that component. Rank the components by decreasing reconstruction\-error. \[Difficulty: **Advanced**]
4. In this exercise we use the `Rtsne()` function on the leukemia expression data set. Try to increase and decrease the perplexity parameter of t\-SNE, and describe the observed changes in the 2D plots. \[Difficulty: **Beginner**]
5\.3 Use case: Disease subtype from genomics data
-------------------------------------------------

We will start our illustration of machine learning using a real dataset from tumor biopsies. We will use the gene expression data of glioblastoma tumor samples from The Cancer Genome Atlas project. We will try to predict the subtype of this disease using molecular markers. This subtype is characterized by large\-scale epigenetic alterations called the “CpG island methylator phenotype” or “CIMP” (Noushmehr, Weisenberger, Diefes, et al. [2010](#ref-pmid20399149)); half of the patients in our data set have this subtype and the rest do not, and we will try to predict which ones have the CIMP subtype. There are two data objects we need for this exercise: one for the gene expression values per tumor sample, and the other one for the subtype annotation per patient. In the expression data set, every row is a gene, every column is a tumor sample, and the values are gene expression levels. There are 184 tumor samples. This data set might be a bit small for real\-world applications; however, it is very relevant for the genomics focus of this book, and small datasets take less time to train, which is useful for reproducibility purposes. We will read these data sets from the **compGenomRData** package now with the `readRDS()` function.

```
# get file paths
fileLGGexp=system.file("extdata",
                      "LGGrnaseq.rds",
                      package="compGenomRData")
fileLGGann=system.file("extdata",
                      "patient2LGGsubtypes.rds",
                      package="compGenomRData")

# gene expression values
gexp=readRDS(fileLGGexp)
head(gexp[,1:5])
```

```
##       TCGA-CS-4941 TCGA-CS-4944 TCGA-CS-5393 TCGA-CS-5394 TCGA-CS-5395
## A1BG       72.2326      24.7132      46.3789      37.9659      19.5162
## A1CF        0.0000       0.0000       0.0000       0.0000       0.0000
## A2BP1     524.4997     105.4092     323.5828      19.7390     299.5375
## A2LD1     144.0856      18.0154      29.0942       7.5945     202.1231
## A2ML1     521.3941     159.3746     164.6157      63.5664     953.4106
## A2M     17944.7205   10894.9590   16480.1130    9217.7919   10801.8461
```

```
dim(gexp)
```

```
## [1] 20501 184
```

```
# patient annotation
patient=readRDS(fileLGGann)
head(patient)
```

```
##              subtype
## TCGA-FG-8185    CIMP
## TCGA-DB-5276    CIMP
## TCGA-P5-A77X    CIMP
## TCGA-IK-8125    CIMP
## TCGA-DU-A5TR    CIMP
## TCGA-E1-5311    CIMP
```

```
dim(patient)
```

```
## [1] 184 1
```
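Since the text above says about half of the patients carry the CIMP subtype, a quick sanity check of the class balance is useful before any modeling, because unbalanced classes would change how we assess prediction accuracy later. A minimal sketch using the `patient` object we just read:

```
# count how many patients fall into each subtype class
table(patient$subtype)

# the same counts as proportions
round(prop.table(table(patient$subtype)),2)
```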
5\.4 Data preprocessing ----------------------- We will have to preprocess the data before we start training. This might include exploratory data analysis to see how variables and samples relate to each other. For example, we might want to check the correlation between predictor variables and keep only one variable from that group. In addition, some training algorithms might be sensitive to data scales or outliers. We should deal with those issues in this step. In some cases, the data might have missing values. We can choose to remove the samples that have missing values or try to impute them. Many machine learning algorithms will not be able to deal with missing values. We will show how to do this in practice using the `caret::preProcess()` function and base R functions. Please note that there are more preprocessing options available than we will show here. There are more possibilities in `caret::preProcess()`function and base R functions, we are just going to cover a few basics in this section. ### 5\.4\.1 Data transformation The first thing we will do is data normalization and transformation. We have to take care of data scale issues that might come from how the experiments are performed and the potential problems that might occur during data collection. Ideally, each tumor sample has a similar distribution of gene expression values. Systematic differences between tumor samples must be corrected. We check if there are such differences using box plots. We will only plot the first 50 tumor samples so that the figure is not too squished. The resulting boxplot is shown in Figure [5\.1](data-preprocessing.html#fig:boxML). ``` boxplot(gexp[,1:50],outline=FALSE,col="cornflowerblue") ``` FIGURE 5\.1: Boxplots for gene expression values. It seems there was some normalization done on this data. Gene expression values per sample seem to have the same scale. However, it looks like they have long\-tailed distributions, so a log transformation may fix that. These long\-tailed distributions have outliers and this might adversely affect the models. Below, we show the effect of log transformation on the gene expression profile of a patient. We add a pseudo count of 1 to avoid `log(0)`. The resulting histograms are shown in Figure [5\.2](data-preprocessing.html#fig:logTransform). ``` par(mfrow=c(1,2)) hist(gexp[,5],xlab="gene expression",main="",border="blue4", col="cornflowerblue") hist(log10(gexp+1)[,5], xlab="gene expression log scale",main="", border="blue4",col="cornflowerblue") ``` FIGURE 5\.2: Gene expression distribution for the 5th patient (left). Log transformed gene expression distribution for the same patient (right). Since taking a log seems to work to tame the extreme values, we do that below and also add \\(1\\) pseudo\-count to be able to deal with \\(0\\) values: ``` gexp=log10(gexp+1) ``` Another thing we can do in combination with this is to winsorize the data, which caps extreme values to the 1st and 99th percentiles or to other user\-defined percentiles. But before we go forward, we should transpose our data. In this case, the predictor variables are gene expression values and they should be on the column side. It was OK to leave them on the row side, to check systematic errors with box plots, but machine learning algorithms require that predictor variables are on the column side. ``` # transpose the data set tgexp <- t(gexp) ``` ### 5\.4\.2 Filtering data and scaling We can filter predictor variables which have low variation. 
They are not likely to have any predictive importance since there is not much variation and they will just slow our algorithms. The more variables, the slower the algorithms will be generally. The `caret::preProcess()` function can help filter the predictor variables with near zero variance. ``` library(caret) # remove near zero variation for the columns at least # 85% of the values are the same # this function creates the filter but doesn't apply it yet nzv=preProcess(tgexp,method="nzv",uniqueCut = 15) # apply the filter using "predict" function # return the filtered dataset and assign it to nzv_tgexp # variable nzv_tgexp=predict(nzv,tgexp) ``` In addition, we can also choose arbitrary cutoffs for variability. For example, we can choose to take the top 1000 variable predictors. ``` SDs=apply(tgexp,2,sd ) topPreds=order(SDs,decreasing = TRUE)[1:1000] tgexp=tgexp[,topPreds] ``` We can also center the data, which as we have seen in Chapter 4, is subtracting the mean. Following this, the predictor variables will have zero means. In addition, we can scale the data. When we scale, each value of the predictor variable is divided by its standard deviation. Therefore predictor variables will have the same standard deviation. These manipulations are generally used to improve the numerical stability of some calculations. In distance\-based metrics, it could be beneficial to at least center the data. We will now center the data using the `preProcess()` function. This is more practical than the `scale()` function because when we get a new data point, we can use the `predict()` function and `processCenter` object to process it just like we did for the training samples. ``` library(caret) processCenter=preProcess(tgexp, method = c("center")) tgexp=predict(processCenter,tgexp) ``` We will next filter the predictor variables that are highly correlated. You may choose not to do this as some methods can handle correlation between predictor variables. However, the fewer predictor variables we have, the faster the model fitting can be done. ``` # create a filter for removing higly correlated variables # if two variables are highly correlated only one of them # is removed corrFilt=preProcess(tgexp, method = "corr",cutoff = 0.9) tgexp=predict(corrFilt,tgexp) ``` ### 5\.4\.3 Dealing with missing values In real\-life situations, there will be missing values in our data. In genomics, we might not have values for certain genes or genomic locations due to technical problems during experiments. We have to be able to deal with these missing values. For demonstration purposes, we will now introduce NA values in our data, the “NA” value is normally used to encode missing values in R. We then show how to check and deal with those. One way is to impute them; here, we again use a machine learning algorithm to guess the missing values. Another option is to discard the samples with missing values or discard the predictor variables with missing values. First, we replace one of the values as NA and check if it is there. ``` missing_tgexp=tgexp missing_tgexp[1,1]=NA anyNA(missing_tgexp) # check if there are NA values ``` ``` ## [1] TRUE ``` Next, we will try to remove that gene from the set. Removing genes or samples both have downsides. You might be removing a predictor variable that could be important for the prediction. Removing samples with missing values will decrease the number of samples in the training set. 
The code below checks which values are NA in the matrix, then runs a column sum and keeps the columns where the column sum is equal to 0\. The column sums for columns with NA values will be higher than 0, depending on how many NA values there are in a column.

```
gexpnoNA=missing_tgexp[ , colSums(is.na(missing_tgexp)) == 0]
```

We will next try to impute the missing value(s). Imputation can be as simple as assigning missing values to the mean or median value of the variable, or assigning the mean/median of values from nearest neighbors of the sample having the missing value. We will show both using the `caret::preProcess()` function. First, let us run the median imputation.

```
library(caret)
mImpute=preProcess(missing_tgexp,method="medianImpute")
imputedGexp=predict(mImpute,missing_tgexp)
```

Another imputation method that is more precise than the median imputation is to impute the missing values based on the nearest neighbors of the samples. In this case, the algorithm finds samples that are most similar to the sample vector with NA values. Next, the algorithm averages the non\-missing values from those neighbors and replaces the missing value with that value.

```
library(RANN)
knnImpute=preProcess(missing_tgexp,method="knnImpute")
knnimputedGexp=predict(knnImpute,missing_tgexp)
```
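As a quick check, we can confirm that neither imputation strategy left any missing values behind, and look at what was filled in for the entry we had set to NA. This is just a small illustrative sketch using the objects created above; note that the two values are not directly comparable, since the `knnImpute` method in `caret` also centers and scales the data.

```
# no NA values should remain after either imputation
anyNA(imputedGexp)
anyNA(knnimputedGexp)

# the imputed values for the entry we had set to NA
imputedGexp[1,1]
knnimputedGexp[1,1]
```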
It was OK to leave them on the row side to check systematic errors with box plots, but machine learning algorithms require that predictor variables are on the column side.

```
# transpose the data set
tgexp <- t(gexp)
```
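Winsorization was mentioned above as an option but not demonstrated. Below is a minimal sketch of how it could be done on the transposed matrix, capping each gene at its 1st and 99th percentiles with base R functions; the helper and object names are made up for illustration and this step is not applied in the workflow above.

```
# cap each predictor (column) at its 1st and 99th percentiles
winsorize_column <- function(x, probs = c(0.01, 0.99)) {
  q <- quantile(x, probs = probs, na.rm = TRUE)
  pmin(pmax(x, q[1]), q[2]) # values below q[1] or above q[2] are clipped
}
tgexp_wins <- apply(tgexp, 2, winsorize_column)
```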
5\.5 Splitting the data
-----------------------

At this point we might choose to split the data into the test and the training partitions. The reason for this is that we need an independent test set we did not train on. This will become clearer in the following sections, but without a separate test set, we cannot assess the performance of our model or tune it properly.

### 5\.5\.1 Holdout test dataset

There are multiple data split strategies. For starters, we will split off 30% of the data as the test set. This method is the gold standard for testing the performance of our model: by doing this, we have a separate data set that the model has never seen. First, we create a single data frame with predictors and response variables.

```
tgexp=merge(patient,tgexp,by="row.names")

# push sample ids back to the row names
rownames(tgexp)=tgexp[,1]
tgexp=tgexp[,-1]
```

Now that the response variable, or class label, is merged with our dataset, we can split it into test and training sets with the `caret::createDataPartition()` function.

```
set.seed(3031) # set the random number seed for reproducibility

# get indices for 70% of the data set
intrain <- createDataPartition(y = tgexp[,1], p= 0.7)[[1]]

# separate test and training sets
training <- tgexp[intrain,]
testing <- tgexp[-intrain,]
```

### 5\.5\.2 Cross\-validation

In some cases, we might have too few data points and it might be too costly to set aside a significant portion of the data set as a holdout test set. In these cases a resampling\-based technique such as cross\-validation may be useful.

Cross\-validation works by splitting the data into \\(k\\) randomly sampled subsets, called k\-folds. So, for example, in the case of 5\-fold cross\-validation with 100 data points, we would create 5 folds, each containing 20 data points. We would then build models and estimate errors 5 times. Each time, four of the groups are combined (resulting in 80 data points) and used to train a model; the 5th group of 20 points that was not used to construct the model is used to estimate the test error. In the case of 5\-fold cross\-validation, we end up with 5 error estimates that can be averaged to obtain a more robust estimate of the test error.

An extreme case of k\-fold cross\-validation is to set \\(k\\) equal to the number of data points, or in our case, the number of tumor samples. This is called leave\-one\-out cross\-validation (LOOCV). This could be better than k\-fold cross\-validation, but it takes too much time to train that many models if the number of data points is large.

The `caret` package has built\-in cross\-validation functionality for all the machine learning methods and we will be using that in the later sections.

### 5\.5\.3 Bootstrap resampling

Another method to estimate the prediction error is to use bootstrap resampling. This is a general method we have already introduced in Chapter [3](stats.html#stats). It can be used to estimate the variability of any statistical parameter; in this case, that parameter is the test error or test accuracy.

The training set is drawn from the original set with replacement (same size as the original set), and we then build a model with this bootstrap\-resampled set. Next, we take the data points that were not selected for the random sample and predict labels for them. These data points are called the “out\-of\-the\-bag (OOB) sample”. We repeat this process many times and record the error for the OOB samples. We can take the average of the OOB errors to estimate the real test error.
This is a powerful method that is not only used to estimate the test error but is also incorporated into the training phase of some machine learning methods, such as random forests. Normally, we should repeat the process hundreds or up to a thousand times to get good estimates. However, the limiting factor is the time it takes to construct and test that many models; twenty to thirty repetitions might be enough if the time cost of training is too high. Again, the `caret` package provides the bootstrap interface for many machine learning models, sampling before training and estimating the error on the OOB samples.
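To make the two resampling schemes concrete, here is a small sketch that only builds the index sets, using `caret::createFolds()` for cross\-validation folds and a single bootstrap draw with its OOB indices; it does not train any models and assumes the `training` data frame from the previous section.

```
set.seed(42)

# 5-fold cross-validation: each list element holds the row indices of one fold
folds <- createFolds(training[,1], k = 5)
sapply(folds, length)  # roughly equal fold sizes

# one bootstrap draw: sample rows with replacement;
# the rows never drawn form the out-of-the-bag (OOB) set
boot_idx <- sample(nrow(training), replace = TRUE)
oob_idx  <- setdiff(seq_len(nrow(training)), boot_idx)
length(oob_idx)  # typically about a third of the samples end up OOB
```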
5\.6 Predicting the subtype with k\-nearest neighbors
-----------------------------------------------------

One of the easiest things to wrap our heads around when we are trying to predict a label such as disease subtype is to look for similar samples and assign the labels of those similar samples to our sample.

Conceptually, k\-nearest neighbors (k\-NN) is very similar to the clustering algorithms we have seen earlier. If we have a measure of distance between the samples, we can find the nearest \\(k\\) samples to our new sample and use a voting method to decide on the label of our new sample.

Let us run the k\-NN algorithm with our cancer data. For illustrative purposes, we provide the same data set as both training and test data. Providing the training data as the test data shows us the training error or accuracy, which is how well the model is doing on the data it was trained with. Below we are running k\-NN with the `caret::knn3()` function. The most important argument is `k`, which is the number of nearest neighbors to consider. In this case, we set it to 5\. We will later discuss how to find the best `k`.

```
library(caret)
knnFit=knn3(x=training[,-1], # training set
            y=training[,1], # training set class labels
            k=5)

# predictions on the training set (used as the "test" set here for illustration)
trainPred=predict(knnFit,training[,-1])
```
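To make the idea concrete, here is a conceptual sketch of a single k\-NN prediction using plain Euclidean distance and a majority vote. This is not how `knn3()` is implemented internally; the function name `predict_one_knn` and the coercion to a numeric matrix are assumptions made for illustration only.

```
predict_one_knn <- function(x_new, X_train, y_train, k = 5) {
  X_train <- as.matrix(X_train)                     # assume all predictors are numeric
  d  <- sqrt(rowSums(sweep(X_train, 2, x_new)^2))   # Euclidean distance to each training sample
  nn <- order(d)[1:k]                               # indices of the k nearest samples
  names(which.max(table(y_train[nn])))              # majority vote among their labels
}

# example call on the first training sample (it should usually recover its own label)
predict_one_knn(unlist(training[1,-1]), training[,-1], training[,1], k = 5)
```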
5\.7 Assessing the performance of our model
-------------------------------------------

We have to define some metrics to see if our model worked. The algorithm is trying to reduce the classification error, or in other words it is trying to increase the training accuracy. For the assessment of performance, there are other metrics to consider as well. All the metrics for 2\-class classification depend on the table below, which shows the number of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), similar to a table we used in the hypothesis testing section in the statistics chapter previously.

|  | Actual CIMP | Actual noCIMP |
| --- | --- | --- |
| Predicted as CIMP | True Positives (TP) | False Positives (FP) |
| Predicted as noCIMP | False Negatives (FN) | True Negatives (TN) |

Accuracy is the first metric to look at. This metric is simply \\((TP\+TN)/(TP\+TN\+FP\+FN)\\) and shows the proportion of times we were right. There are other accuracy metrics that are important and output by `caret` functions. We will go over some of them here.

Precision, \\(TP/(TP\+FP)\\), is about the confidence we have in our CIMP calls. If our method is very precise, we will have few false positives. That means every time we call a CIMP event, we will be relatively certain it is not a false positive.

Sensitivity, \\(TP/(TP\+FN)\\), measures how well we capture the actual CIMP cases, i.e., how rarely we miss CIMP cases and call them noCIMP. Making fewer mistakes on the true CIMP cases will increase our sensitivity. You can think of sensitivity also in a sick/healthy context: a highly sensitive method will be good at classifying sick people as sick.

Specificity, \\(TN/(TN\+FP)\\), is about how sure we are when we call something noCIMP. If our method is not very specific, we would call many patients CIMP while, in fact, they do not have the subtype. In the sick/healthy context, a highly specific method will be good at not calling healthy people sick.

An alternative to the accuracy we showed earlier is “balanced accuracy”. Accuracy does not perform well when classes have very different numbers of samples (class imbalance). For example, if you have 90 CIMP cases and 10 noCIMP cases, classifying all the samples as CIMP gives an accuracy of 0\.9 by default. Using the “balanced accuracy” metric can help in such situations. This is simply \\((Sensitivity\+Specificity)/2\\). In the class imbalance scenario above, the “balanced accuracy” would be 0\.5\. Another metric that takes into account the accuracy that could be generated by chance is the “Kappa statistic” or “Cohen’s Kappa”. This metric includes expected accuracy, which is affected by class imbalance in the training set, and provides a metric corrected for that.

In the k\-NN example above, we trained and tested on the same data. The model returned the predicted labels for our training data. We can calculate the accuracy metrics using the `caret::confusionMatrix()` function. This is sometimes called training accuracy. If you take \\(1\-accuracy\\), it will be the “training error”.
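As a quick illustration of these formulas, here is a short sketch computing them directly from the four counts of a 2x2 table; the counts used below match the training\-set confusion matrix shown next, but any counts would do.

```
# counts from a 2x2 confusion table (TP, FP, FN, TN)
TP <- 65; FP <- 0; FN <- 2; TN <- 63

accuracy    <- (TP + TN) / (TP + TN + FP + FN)
precision   <- TP / (TP + FP)
sensitivity <- TP / (TP + FN)
specificity <- TN / (TN + FP)
balanced_accuracy <- (sensitivity + specificity) / 2

round(c(accuracy = accuracy, precision = precision,
        sensitivity = sensitivity, specificity = specificity,
        balanced = balanced_accuracy), 4)
```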
```
# get k-NN prediction on the training data itself, with k=5
knnFit=knn3(x=training[,-1], # training set
            y=training[,1], # training set class labels
            k=5)

# predictions on the training set
trainPred=predict(knnFit,training[,-1],type="class")

# compare the predicted labels to real labels
# get different performance metrics
confusionMatrix(data=training[,1],reference=trainPred)
```

```
## Confusion Matrix and Statistics
##
##           Reference
## Prediction CIMP noCIMP
##     CIMP     65      0
##     noCIMP    2     63
##
##                Accuracy : 0.9846
##                  95% CI : (0.9455, 0.9981)
##     No Information Rate : 0.5154
##     P-Value [Acc > NIR] : <2e-16
##
##                   Kappa : 0.9692
##
##  Mcnemar's Test P-Value : 0.4795
##
##             Sensitivity : 0.9701
##             Specificity : 1.0000
##          Pos Pred Value : 1.0000
##          Neg Pred Value : 0.9692
##              Prevalence : 0.5154
##          Detection Rate : 0.5000
##    Detection Prevalence : 0.5000
##       Balanced Accuracy : 0.9851
##
##        'Positive' Class : CIMP
##
```

Now, let us see what our test set accuracy looks like, again using the fitted `knn3` model and the `confusionMatrix()` function on the predicted and real classes.

```
# predictions on the test set, return class labels
testPred=predict(knnFit,testing[,-1],type="class")

# compare the predicted labels to real labels
# get different performance metrics
confusionMatrix(data=testing[,1],reference=testPred)
```

```
## Confusion Matrix and Statistics
##
##           Reference
## Prediction CIMP noCIMP
##     CIMP     27      0
##     noCIMP    2     25
##
##                Accuracy : 0.963
##                  95% CI : (0.8725, 0.9955)
##     No Information Rate : 0.537
##     P-Value [Acc > NIR] : 2.924e-12
##
##                   Kappa : 0.9259
##
##  Mcnemar's Test P-Value : 0.4795
##
##             Sensitivity : 0.9310
##             Specificity : 1.0000
##          Pos Pred Value : 1.0000
##          Neg Pred Value : 0.9259
##              Prevalence : 0.5370
##          Detection Rate : 0.5000
##    Detection Prevalence : 0.5000
##       Balanced Accuracy : 0.9655
##
##        'Positive' Class : CIMP
##
```

Test set accuracy is not as good as the training accuracy, which is usually the case. That is why the best way to evaluate performance is to use test data that was not used by the model for training. That gives you an idea about real\-world performance, where the model will be used to predict data it has not previously seen.

### 5\.7\.1 Receiver Operating Characteristic (ROC) curves

One important and popular metric when evaluating performance is the receiver operating characteristic (ROC) curve. The ROC curve is created by evaluating the class probabilities of the model across a continuum of thresholds. Typically, in the case of two\-class classification, the methods return a probability for one of the classes. If that probability is higher than \\(0\.5\\), we call the label, for example, class A; if it is less than \\(0\.5\\), we call the label class B. However, we can move that threshold and change what we call class A or B. For each candidate threshold, the resulting sensitivity and 1\-specificity are plotted against each other. The best possible prediction would result in a point in the upper left corner, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). For the best model, the curve will be almost like a square. Since this is important information, the area under the curve (AUC) is calculated. This is a quantity between 0 and 1, and the closer to 1, the better the performance of your classifier in terms of sensitivity and specificity. For an uninformative classification model, the AUC will be \\(0\.5\\).
Although ROC curves were initially designed for two\-class problems, later extensions made it possible to use ROC curves for multi\-class problems as well.

ROC curves can also be used to determine alternate cutoffs for class probabilities in two\-class problems. However, this will always result in a trade\-off between sensitivity and specificity. Sometimes it might be desirable to limit the number of false positives because making such mistakes would be too costly for the individual cases. For example, if predicted to have a certain disease, you might be recommended to have surgery. However, if your classifier has a relatively high false positive rate (low specificity), you might have surgery for no reason. Typically, you want your classification model to have high specificity and high sensitivity, which may not always be possible in the real world. You might have to choose what is more important for a specific problem and try to increase that.

Next, we will show how to use ROC curves for our k\-NN application. The method requires classification probabilities in the format where probability 0 denotes class “noCIMP” and probability 1 denotes class “CIMP”. This way the ROC curve can be drawn by varying the probability cutoff for calling a class “noCIMP” or “CIMP”. Below we get such probabilities from k\-NN, but we have to transform them to the format we described above. Then, we feed those class probabilities to the `pROC::roc()` function to calculate the ROC curve and the area under the curve. The resulting ROC curve is shown in Figure [5\.3](assessing-the-performance-of-our-model.html#fig:ROC).

```
library(pROC)

# get k-NN class probabilities
# prediction probabilities on the test set
testProbs=predict(knnFit,testing[,-1])

# get the roc curve
rocCurve <- pROC::roc(response = testing[,1],
                      predictor = testProbs[,1],
                      ## This function assumes that the second
                      ## class is the class of interest, so we
                      ## reverse the labels.
                      levels = rev(levels(testing[,1])))
# plot the curve
plot(rocCurve, legacy.axes = TRUE)
```

FIGURE 5\.3: ROC curve for k\-NN.

```
# return area under the curve
pROC::auc(rocCurve)
```

```
## Area under the curve: 0.976
```
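To make the construction of the curve more concrete, here is a small sketch that sweeps a probability cutoff by hand, records sensitivity and 1\-specificity at each step, and approximates the AUC with the trapezoid rule. It is for illustration only and is not how `pROC` computes its results; the helper name `roc_points` is made up and we assume the probability matrix has a column named "CIMP".

```
# sweep probability cutoffs and record (1 - specificity, sensitivity) pairs
roc_points <- function(probs, labels, positive = "CIMP") {
  cutoffs <- sort(unique(c(0, probs, 1)))
  t(sapply(cutoffs, function(ct) {
    pred_pos <- probs >= ct                          # call "positive" above the cutoff
    is_pos   <- labels == positive
    sens <- sum(pred_pos & is_pos)   / sum(is_pos)   # TP / (TP + FN)
    spec <- sum(!pred_pos & !is_pos) / sum(!is_pos)  # TN / (TN + FP)
    c(fpr = 1 - spec, tpr = sens)
  }))
}

pts <- roc_points(testProbs[,"CIMP"], testing[,1])
pts <- pts[order(pts[,"fpr"], pts[,"tpr"]), ]        # order along the x-axis

# trapezoid-rule approximation of the area under the curve
sum(diff(pts[,"fpr"]) * (head(pts[,"tpr"], -1) + tail(pts[,"tpr"], -1)) / 2)
```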
5\.8 Model tuning and avoiding overfitting
------------------------------------------

How can we know that we picked the best \\(k\\)? One straightforward way is to try many different \\(k\\) values and check the accuracy of our model. We will first check the effect of different \\(k\\) values on training accuracy. Below, we go through many \\(k\\) values and calculate the training error for each.

```
set.seed(101)
k=1:12 # set k values
trainErr=c() # set vector for training errors

for( i in k){

  knnFit=knn3(x=training[,-1], # training set
              y=training[,1], # training set class labels
              k=i)

  # predictions on the training set
  class.res=predict(knnFit,training[,-1],type="class")

  # training error
  err=1-confusionMatrix(training[,1],class.res)$overall[1]
  trainErr[i]=err
}

# plot training error vs k
plot(k,trainErr,type="p",col="#CC0000",pch=20)

# add a smooth line for the trend
lines(loess.smooth(x=k, trainErr,degree=2),col="#CC0000")
```

FIGURE 5\.4: Training error for k\-NN classification of glioma tumor samples.

The resulting training error plot is shown in Figure [5\.4](model-tuning-and-avoiding-overfitting.html#fig:trainingErrork). We can see the effect of \\(k\\) on the training error: as \\(k\\) increases, the model tends to do a bit worse on the training data. This makes sense because with large \\(k\\) we take into account more and more neighbors, and at some point we start considering data points from the other classes as well, which decreases our accuracy.

However, looking at the training accuracy is not the right way to test the model, as we have mentioned. Models are generally tested on datasets that were not used when building the model. There are different strategies to do this. We have already split part of our dataset off as a test set, so let us see how we do on the test data using the code below. The resulting plot is shown in Figure [5\.5](model-tuning-and-avoiding-overfitting.html#fig:testTrainErr).

```
set.seed(31)
k=1:12
testErr=c()

for( i in k){

  knnFit=knn3(x=training[,-1], # training set
              y=training[,1], # training set class labels
              k=i)

  # predictions on the test set
  class.res=predict(knnFit,testing[,-1],type="class")
  testErr[i]=1-confusionMatrix(testing[,1],
                               class.res)$overall[1]
}

# plot training error
plot(k,trainErr,type="p",col="#CC0000",
     ylim=c(0.000,0.08),
     ylab="prediction error (1-accuracy)",pch=19)

# add a smooth line for the trend
lines(loess.smooth(x=k, trainErr,degree=2), col="#CC0000")

# plot test error
points(k,testErr,col="#00CC66",pch=19)
lines(loess.smooth(x=k,testErr,degree=2), col="#00CC66")

# add legend
legend("bottomright",fill=c("#CC0000","#00CC66"),
       legend=c("training","test"),bty="n")
```

FIGURE 5\.5: Training and test error for k\-NN classification of glioma tumor samples.

The test data, of course, show a different picture. It is not the best strategy to increase \\(k\\) indefinitely: the test error rate increases after a while. Increasing \\(k\\) means too many data points influence the decision about the class of the new sample, which may not be desirable since this strategy can eventually include points from other classes. On the other hand, if we set \\(k\\) too low, we restrict the model to only look at a few neighbors. In addition, the \\(k\\) values that give the best performance on the training set are not the best \\(k\\) for the test set. In fact, if we stuck with \\(k\=1\\) as the best \\(k\\) obtained from the training set, we would obtain a worse performance on the test set.
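As a small follow\-up, assuming the `k`, `trainErr` and `testErr` vectors created above are still in the workspace, we can tabulate the two error estimates and pick the \\(k\\) that minimizes the estimated test error; this is only a convenience sketch.

```
# side-by-side view of training and test error, plus the gap between them
data.frame(k = k,
           train = round(trainErr, 3),
           test  = round(testErr, 3),
           gap   = round(testErr - trainErr, 3))

# k with the smallest estimated test error
k[which.min(testErr)]
```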
In this case, we can talk about the concept of overfitting. This happens when our models fit the data in the training set extremely well but cannot perform well on the test data; in other words, they cannot generalize. Similarly, underfitting can occur when our models do not learn well from the training data and are overly simplistic. Ideally, we should use methods that help us estimate the real test error when tuning the models, such as cross\-validation, bootstrap or a holdout test set.

### 5\.8\.1 Model complexity and bias variance trade\-off

The case of over\- and underfitting is closely related to model complexity and the related bias\-variance trade\-off. We will introduce these concepts now. First, let us point out that prediction error depends on the real value of the class label of the test case and the predicted value. The test case label or value does not depend on the prediction; the only thing that is variable here is the model. Therefore, if we could train multiple models with different data sets for the same problem, our predictions for the test set would vary. That means our prediction error would also vary.

With this setting, we can talk about the expected prediction error for a given machine learning model. This is the average error you would get over a test set if you were able to train multiple models. This expected prediction error can largely be decomposed into the variability of the predictions due to the model variability (variance) and the difference between the expected prediction values and the correct value of the response (bias). Formally, the expected prediction error, \\(E\[Error]\\), is decomposed as follows:

\\\[ E\[Error]\=Bias^2 \+ Variance \+ \\sigma\_e^2 \\]

Note that in the above equation \\(\\sigma\_e^2\\) is the irreducible error. This is the noise term that cannot fundamentally be accounted for by any model. The bias is formally the difference between the correct response value, \\(Y\\), and the expected prediction value: \\(Bias\=(Y\-E\[PredictedValue])\\). The variance is simply the variability of the prediction values when we construct models multiple times with different training sets for the same problem: \\(Variance\=E\[(PredictedValue\-E\[PredictedValue])^2]\\). Note that this value of the variance does not depend on the correct value of the test cases.

The models that have high variance are generally more complex models with many knobs or parameters that can fit the training data well. Due to their flexibility, these models can fit the training data so well that it leads to poor prediction performance on a new data set. On the other hand, simple, less complex models do not have the flexibility to fit every data set that well, so they can avoid overfitting. However, they can underfit if they are not flexible enough to model, or at least approximate, the true relationship between the predictors and the response variable. The bias term is mostly about the general model performance (expected or average value of predictions) that can be attributed to approximating a real\-life problem with simpler models. These simple models have less variability in their predictions, so their prediction error will be mostly composed of the bias term.

In reality, there is always a trade\-off between bias and variance (see Figure [5\.6](model-tuning-and-avoiding-overfitting.html#fig:varBias)). Increasing the variance with complex models will decrease the bias, but that might overfit.
Conversely, simple models will increase the bias at the expense of model variance, and that might underfit. There is an optimal point for model complexity, a balance between overfitting and underfitting. In practice, there is no analytical way to find this optimal complexity. Instead, we must use an accurate measure of prediction error, explore different levels of model complexity and choose the complexity level that minimizes the overall error.

Another approach is to use “the one standard error rule”. Instead of choosing the parameter that minimizes the error estimate, we can choose the simplest model whose error estimate is within one standard error of the best model (see Chapter 7 of (J. Friedman, Hastie, and Tibshirani [2001](#ref-friedman2001elements))). The rationale behind this is to choose a simple model in the hope that it will perform better on unseen data, since its performance is not different from the best model in a statistically significant way. You might see the option to choose the “one\-standard\-error” model in some machine learning packages.

FIGURE 5\.6: Variance\-bias trade\-off visualized as components of total prediction error in relation to model complexity.

In our k\-NN example, lower \\(k\\) values create a more flexible model. This might be counterintuitive, but as we have explained before, small \\(k\\) values fit the data in a very data\-specific manner and will probably not generalize well. Therefore, in this respect, lower \\(k\\) values result in more complex models with high variance. On the other hand, higher \\(k\\) values result in less variance but higher bias. Figure [5\.7](model-tuning-and-avoiding-overfitting.html#fig:kNNboundary) shows the decision boundary for two different k\-NN models with \\(k\=2\\) and \\(k\=12\\). To be able to plot this in 2D, we ran the model on principal components 1 and 2 of the training data set, and predicted the class label of many points in this 2D space. As you can see, \\(k\=2\\) creates a more variable model which tries aggressively to include all training samples in the correct class. This creates a high\-variance model because the model could change drastically from dataset to dataset. On the other hand, setting \\(k\=12\\) creates a model with a smoother decision boundary. This model will have less variance since it considers many points for a decision, and therefore the decision boundary is smoother.

FIGURE 5\.7: Decision boundary for different k values in k\-NN models. k\=12 creates a smooth decision boundary and ignores certain data points on either side of the boundary. k\=2 is less smooth and more variable.

### 5\.8\.2 Data split strategies for model tuning and testing

The data split strategy is essential for accurate prediction of the test error. As we have seen in the model complexity/bias\-variance discussion, estimating the prediction error is central for model tuning in order to find the model with the right complexity. Therefore, we will revisit this and show how to build and test models, and measure their prediction error, in practice.

#### 5\.8\.2\.1 Training\-validation\-test

This data split strategy creates three partitions of the dataset: training, validation and test sets. In this strategy, the training set is used to train the models and the validation set is used to tune the model to the best possible model. The final partition, “test”, is only used for the final test and should not be used to tune the model.
This is regarded as the real\-world prediction error for your model. This strategy works when you have a lot of data to do a three\-way split. The test set we used above is most likely too small to measure the prediction error reliably on its own. In such cases, bootstrap or cross\-validation should yield more stable results.

#### 5\.8\.2\.2 Cross\-validation

A more realistic approach when you do not have a lot of data to do the three\-way split is cross\-validation. You can use cross\-validation in the model\-tuning phase as well, instead of going with a single train\-validation split. As with the three\-way split, the final prediction error can be estimated with the test set. In other words, we can separate 80% of the data for model building with cross\-validation, and the final model performance will be measured on the test set.

We have already split our glioma dataset into training and test sets. Now, we will show how to run a k\-NN model with cross\-validation using the `caret::train()` function. This function will use cross\-validation to train models for different \\(k\\) values. Every \\(k\\) value will be trained and tested with cross\-validation to estimate prediction performance for each \\(k\\). We will then plot the cross\-validation error; the resulting plot is shown in Figure [5\.8](model-tuning-and-avoiding-overfitting.html#fig:kknCv).

```
set.seed(17)

# this method controls everything about training
# we will just set up 10-fold cross validation
trctrl <- trainControl(method = "cv",number=10)

# we will now train k-NN model
knn_fit <- train(subtype~., data = training,
                 method = "knn",
                 trControl=trctrl,
                 tuneGrid = data.frame(k=1:12))

# best k value by cross-validation accuracy
knn_fit$bestTune
```

```
##   k
## 4 4
```

```
# plot k vs prediction error
plot(x=1:12,1-knn_fit$results[,2],pch=19,
     ylab="prediction error",xlab="k")
lines(loess.smooth(x=1:12,1-knn_fit$results[,2],degree=2),
      col="#CC0000")
```

FIGURE 5\.8: Cross\-validated estimate of prediction error of k in k\-NN models.

Based on Figure [5\.8](model-tuning-and-avoiding-overfitting.html#fig:kknCv), the cross\-validation accuracy reveals that \\(k\=4\\) is the best \\(k\\) value here, which is also what `knn_fit$bestTune` returns above. On the other hand, we can also try bootstrap resampling and check the prediction error that way. We will again use the `caret::trainControl()` function to do the bootstrap sampling and estimate the OOB\-based error. However, for a small number of samples like we have in our example, the difference between the estimated and the true value of the prediction error can be large. Below we show how to use bootstrapping for the k\-NN model.

```
set.seed(17)

# this method controls everything about training
# we will set up 20 bootstrap samples and for each
# bootstrap use the OOB samples to test the error
trctrl <- trainControl(method = "boot",number=20,
                       returnResamp="all")

# we will now train k-NN model
knn_fit <- train(subtype~., data = training,
                 method = "knn",
                 trControl=trctrl,
                 tuneGrid = data.frame(k=1:12))
```
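The “one\-standard\-error rule” discussed in the previous subsection can also be applied at this stage: `caret::trainControl()` exposes it through its `selectionFunction` argument. The sketch below is an illustration under assumptions (the object names are made up, and how "simplest" maps onto \\(k\\) depends on the ordering of the tuning grid), not a step of the analysis above.

```
set.seed(17)

# apply caret's one-standard-error selection rule when picking k:
# choose a model whose resampled accuracy is within one standard error
# of the best model, rather than the single best accuracy
trctrl_oneSE <- trainControl(method = "cv", number = 10,
                             selectionFunction = "oneSE")

knn_fit_oneSE <- train(subtype~., data = training,
                       method = "knn",
                       trControl = trctrl_oneSE,
                       tuneGrid = data.frame(k = 1:12))

# the chosen k may differ from the best-accuracy k above
knn_fit_oneSE$bestTune
```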
5\.9 Variable importance
------------------------

Another important purpose of machine learning models can be to learn which variables are more important for the prediction. This information could lead to potential biological insights, or could help design better data collection methods or experiments.

Variable importance metrics can be separated into two groups: those that are model dependent and those that are not. Many machine\-learning methods come with built\-in variable importance measures. These may be able to incorporate the correlation structure between the predictors into the importance calculation. Model\-independent methods, in contrast, are not able to use any internal model data. We will go over some model\-independent strategies below; the model\-dependent importance measures will be mentioned when we introduce machine learning methods that have them built in.

One simple method for variable importance is to correlate the predictor variable with the response variable, or to apply statistical tests of their association. Variables can be ranked based on the strength of those associations. For classification problems, ROC curves can be computed by thresholding the predictor variable, and for each variable an AUC can be computed; the variables can then be ranked based on these values. However, these methods completely ignore how variables would behave in the presence of other variables. The `caret::filterVarImp()` function implements some of these strategies.

If a variable is important for prediction, removing that variable before model training will cause a drop in performance. With this understanding, we can remove the variables one by one, train models without them and rank them by the loss of performance; the most important variables must cause the largest loss of performance. However, this strategy requires training and testing a model as many times as there are predictor variables, which consumes a lot of time. A related but more practical approach has been put forward to measure variable importance in a model\-independent manner but without re\-training (Biecek [2018](#ref-dalex); Fisher, Rudin, and Dominici [2018](#ref-mcr)). In this case, instead of removing the variables at training, variables are permuted at the test phase. The loss in prediction performance is calculated by comparing the labels/values from the original response variable to the labels/values obtained by running the permuted test data through the model. This is called “variable dropout loss”. In this case, we are not really dropping out variables, but by permuting them, we destroy their relationship to the response variable. The dropout loss is compared to the “worst case” scenario where the response variable is permuted and compared against the original response variables, which is called the “baseline loss”. The algorithm ranks the variables by their variable dropout loss or by their ratio of variable dropout loss to baseline loss. Both quantities are proportional, but the second one also contains information about the baseline loss.

Below, we run the `DALEX::explain()` function to set up the permutation drop\-out strategy for the variables. The function needs the machine learning model, and new data and its labels, to do the permutation\-based dropout strategy. In this case, we are feeding the function the data we used for training. For visualization we can use the `DALEX::feature_importance()` function, which plots the loss; although, in this case, we are not plotting the results.
```
library(DALEX)
set.seed(102)

# do permutation drop-out
explainer_knn<- DALEX::explain(knn_fit,
                               label="knn",
                               data =training[,-1],
                               y = as.numeric(training[,1]))

viknn=feature_importance(explainer_knn,n_sample=50,type="difference")
plot(viknn)
```

Although the variable drop\-out strategy will still be slow if you have a lot of variables, the upside is that you can use any black\-box model as long as you have access to the model to run new predictions. Later sections in this chapter will show methods with built\-in variable importance metrics; since these are calculated during training, they come with little additional computational cost. We will discuss such method\-specific variable importance measures in the following sections.
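As a complement to the permutation approach, the simple filter\-based strategy mentioned earlier (`caret::filterVarImp()`) can be sketched as follows. This is only an illustration; ordering by the first AUC column is an assumption about the returned object, so inspect the result before relying on the ranking.

```
# per-predictor AUC-based importance, ignoring interactions between predictors
fVarImp <- filterVarImp(x = training[,-1], y = training[,1])

# look at the top-ranked predictors (ranked by the first class's AUC column)
head(fVarImp[order(fVarImp[,1], decreasing = TRUE), , drop = FALSE])
```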