Using a little bit of algebra, prove that (4.2) is equivalent to (4.3). In other words, the logistic function representation and logit representation for the logistic regression model are equivalent.
Answer:
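A minimal sketch of the algebra, in the book's notation for (4.2) and (4.3): starting from the logistic representation,
\[
p(X) = \frac{e^{\beta_0 + \beta_1 X}}{1 + e^{\beta_0 + \beta_1 X}}
\quad\Longrightarrow\quad
1 - p(X) = \frac{1}{1 + e^{\beta_0 + \beta_1 X}}
\quad\Longrightarrow\quad
\frac{p(X)}{1 - p(X)} = e^{\beta_0 + \beta_1 X},
\]
which is exactly the odds form (4.3). Every step is reversible, so the two representations are equivalent.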
It was stated in the text that classifying an observation to the class for which (4.12) is largest is equivalent to classifying an observation to the class for which (4.13) is largest. Prove that this is the case. In other words, under the assumption that the observations in the kth class are drawn from a N(\(\mu_k\), \(\sigma^2\)) distribution, the Bayes’ classifier assigns an observation to the class for which the discriminant function is maximized.
Answer:
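A sketch of the argument, assuming the setup of Section 4.4 (prior \(\pi_k\) for class \(k\), shared variance \(\sigma^2\)): by Bayes' theorem, (4.12) is
\[
p_k(x) = \frac{\pi_k \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\big(-\frac{1}{2\sigma^2}(x-\mu_k)^2\big)}{\sum_{l=1}^{K} \pi_l \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\big(-\frac{1}{2\sigma^2}(x-\mu_l)^2\big)}.
\]
The denominator and the constant \(1/(\sqrt{2\pi}\,\sigma)\) do not depend on \(k\), so maximizing \(p_k(x)\) over \(k\) is equivalent to maximizing \(\log \pi_k - \frac{(x-\mu_k)^2}{2\sigma^2}\). Expanding the square and dropping \(-x^2/(2\sigma^2)\), which again is the same for every \(k\), leaves the discriminant (4.13):
\[
\delta_k(x) = x \cdot \frac{\mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2} + \log \pi_k .
\]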
This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class specific mean vector and a class specific covariance matrix. We consider the simple case where p = 1; i.e. there is only one feature.
Suppose that we have K classes, and that if an observation belongs to the kth class then X comes from a one-dimensional normal distribution, X ~ N(\(\mu_k\), \(\sigma^2_k\)). Recall that the density function for the one-dimensional normal distribution is given in (4.11). Prove that in this case, the Bayes’ classifier is not linear. Argue that it is in fact quadratic.
Answer:
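A sketch, repeating the previous derivation with class-specific variances \(\sigma_k^2\): the factor \(1/(\sqrt{2\pi}\,\sigma_k)\) now depends on \(k\), so maximizing the posterior is equivalent to maximizing
\[
\delta_k(x) = -\frac{x^2}{2\sigma_k^2} + x \cdot \frac{\mu_k}{\sigma_k^2} - \frac{\mu_k^2}{2\sigma_k^2} - \log \sigma_k + \log \pi_k .
\]
Because \(\sigma_k^2\) varies across classes, the \(-x^2/(2\sigma_k^2)\) term no longer cancels between classes, so \(\delta_k(x)\) is quadratic in \(x\) and the Bayes decision boundary is quadratic rather than linear.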
When the number of features p is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when p is large. We will now investigate this curse.
(a) Suppose that we have a set of observations, each with measurements on p = 1 feature, X. We assume that X is uniformly (evenly) distributed on [0, 1]. Associated with each observation is a response value. Suppose that we wish to predict a test observation’s response using only observations that are within 10% of the range of X closest to that test observation. For instance, in order to predict the response for a test observation with X = 0.6, we will use observations in the range [0.55, 0.65]. On average, what fraction of the available observations will we use to make the prediction?
Answer: Ignoring edge effects near 0 and 1, \(avg.\ fraction = 0.1\); on average we use about 10% of the available observations.
(b) Now suppose that we have a set of observations, each with measurements on p = 2 features, X1 and X2. We assume that (X1,X2) are uniformly distributed on [0, 1] × [0, 1]. We wish to predict a test observation’s response using only observations that are within 10% of the range of X1 and within 10% of the range of X2 closest to that test observation. For instance, in order to predict the response for a test observation with X1 = 0.6 and X2 = 0.35, we will use observations in the range [0.55, 0.65] for X1 and in the range [0.3, 0.4] for X2. On average, what fraction of the available observations will we use to make the prediction?
Answer: \(avg. fraction = 0.1^2 = 0.01\)
(c) Now suppose that we have a set of observations on p = 100 features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the 10% of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
Answer: \(avg.\ fraction = 0.1^{100} = 10^{-100}\), which is essentially zero: effectively no observations will be "near" the test observation.
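A quick Monte-Carlo check of (a) and (b), as a sketch; the test point is fixed at the centre of the cube, so edge effects do not arise:
set.seed(42)
frac_near <- function(p, n = 1e5) {
  X <- matrix(runif(n * p), ncol = p)         # n observations, p features on [0, 1]
  mean(apply(abs(X - 0.5) <= 0.05, 1, all))   # within the central 10% band on every axis
}
sapply(c(1, 2, 5), frac_near)  # roughly 0.1, 0.01, 1e-05; for p = 100, 0.1^100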
(d) Using your answers to parts (a)–(c), argue that a drawback of KNN when p is large is that there are very few training observations “near” any given test observation.
Answer: K observations that are nearest to a given test observation x0 may be very far away from x0 in p-dimensional space when p is large, leading to a very poor prediction of f(x0) and hence a poor KNN fit. (p.109)
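To make this concrete, a small illustration (assuming, for the sake of example, n = 10,000 training observations):
n <- 1e4
p <- c(1, 2, 10, 100)
setNames(n * 0.1^p, paste0("p=", p))  # on average 1000, 100, then 1e-06 and ~0 nearby points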
(e) Now suppose that we wish to make a prediction for a test observation by creating a p-dimensional hypercube centered around the test observation that contains, on average, 10% of the training observations. For p = 1, 2, and 100, what is the length of each side of the hypercube? Comment on your answer.
Answer: For the hypercube to contain 10% of the observations on average, its volume must be 0.1, so each side has length \(\ell = 0.1^{1/p}\). The higher the dimension, the longer the sides must be to keep capturing 10%; for \(p = 100\) the hypercube spans nearly the entire range of every feature, so the "neighborhood" is no longer local at all.
\(p = 1: \ell = 0.1^{1/1} = 0.1\)
\(p = 2: \ell = 0.1^{1/2} \approx 0.3162278\)
\(p = 100: \ell = 0.1^{1/100} \approx 0.9772372\)
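The same computation in R, as a quick sketch:
# side length of a p-dimensional hypercube whose volume is 0.1
p <- c(1, 2, 100)
0.1^(1 / p)
## [1] 0.1000000 0.3162278 0.9772372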
We now examine the differences between LDA and QDA.
(a) If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?
Answer: We expect QDA to perform better on the training set, as it is the more flexible method, but worse on the test set: since the true boundary is linear, QDA's extra flexibility yields higher variance with no offsetting reduction in bias.
(b) If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?
Answer: We expect QDA to perform better on both the training and test set.
(c) In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?
Answer: As n increases we expect the test prediction accuracy of QDA, relative to LDA, to improve: QDA is the more flexible method, so it has lower bias, and its higher variance is increasingly compensated by the larger sample size n.
(d) True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.
Answer: False. QDA will suffer from higher variance without a corresponding decrease in bias.
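A small simulation sketch of (d), under assumptions of our own choosing: two Gaussian classes with a common identity covariance, so the Bayes boundary is truly linear, and modest training sets of 50 observations per class:
library(MASS)  # lda() and qda(); presumably already loaded in the setup chunk
set.seed(1)
gen <- function(n) {  # class 0 ~ N(0, I), class 1 ~ N((1, 1), I): linear Bayes boundary
  y <- rep(c(0, 1), each = n)
  data.frame(x1 = rnorm(2 * n) + y, x2 = rnorm(2 * n) + y, y = factor(y))
}
sim_once <- function(n = 50) {
  train <- gen(n)
  test <- gen(1000)
  c(LDA = mean(predict(lda(y ~ x1 + x2, data = train), test)$class != test$y),
    QDA = mean(predict(qda(y ~ x1 + x2, data = train), test)$class != test$y))
}
rowMeans(replicate(100, sim_once()))  # QDA's average test error is typically slightly higher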
Suppose we collect data for a group of students in a statistics class with variables X1 = hours studied, X2 = undergrad GPA, and Y = receive an A. We fit a logistic regression and produce estimated coefficients \(\hat\beta_0 = -6\), \(\hat\beta_1 = 0.05\), \(\hat\beta_2 = 1\).
(a) Estimate the probability that a student who studies for 40 h and has an undergrad GPA of 3.5 gets an A in the class.
Answer:
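# percent() is assumed to come from a formatting helper loaded in the setup chunk (e.g. scales::percent)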
x_1 <- 40
x_2 <- 3.5
beta_0 <- -6
beta_1 <- 0.05
beta_2 <- 1
Prob_Get_A <- exp(beta_0 + beta_1*x_1 + beta_2*x_2)/(1 + exp(beta_0 + beta_1*x_1 + beta_2*x_2))
percent(Prob_Get_A)
## [1] "37.8%"
(b) How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?
Answer:
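# At p = 0.5 the log-odds are log(0.5/0.5) = 0, so we solve beta_0 + beta_1*x_1 + beta_2*x_2 = 0 for x_1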
x_1 <- (log(0.5/(1-0.5)) - beta_0 - beta_2*x_2)/beta_1
x_1
## [1] 50
rm(list = ls())
Suppose that we wish to predict whether a given stock will issue a dividend this year (“Yes” or “No”) based on X, last year’s percent profit. We examine a large number of companies and discover that the mean value of X for companies that issued a dividend was \(\overline{X} = 10\), while the mean for those that didn’t was \(\overline{X} = 0\). In addition, the variance of X for these two sets of companies was \(\hat\sigma^2 = 36\). Finally, 80% of companies issued dividends. Assuming that X follows a normal distribution, predict the probability that a company will issue a dividend this year given that its percentage profit was X = 4 last year.
Answer:
mu_1 <- 10
mu_2 <- 0
sigma <- sqrt(36)
pi_1 <- 0.8
x <- 4
# one-dimensional normal density (4.11); the constant 1/(sqrt(2*pi)*sigma)
# cancels in the posterior ratio below
f_1 <- (1/(sqrt(2*pi)*sigma))*exp((-1/(2*sigma^2))*(x - mu_1)^2)
f_2 <- (1/(sqrt(2*pi)*sigma))*exp((-1/(2*sigma^2))*(x - mu_2)^2)
Prob_Get_Dividend <- (pi_1*f_1)/(pi_1*f_1 + (1 - pi_1)*f_2)
percent(Prob_Get_Dividend)
## [1] "75.2%"
rm(list = ls())
Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First we use logistic regression and get an error rate of 20% on the training data and 30% on the test data. Next we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data sets) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?
Answer: With K = 1 the KNN fit interpolates the training observations perfectly, so its training error is 0%; for the average over both sets to equal 18%, the test error must be 2 × 18% = 36%. We should therefore prefer logistic regression, whose test error rate is 30%.
This problem has to do with odds.
(a) On average, what fraction of people with an odds of 0.37 of defaulting on their credit card payment will in fact default?
Answer: We solve \(0.37 = \frac{x}{1-x}\).
odds <- 0.37
x <- odds/(1+odds)
On average 27.0% of people with an odds of 0.37 will default.
(b) Suppose that an individual has a 16% chance of defaulting on her credit card payment. What are the odds that she will default?
Answer: We solve \(odds = \frac{0.16}{1-0.16}\)
rm(list = ls())
x <- 0.16
odds <- x/(1-x)
The odds for an individual with a 16% chance of default are \(0.16/0.84 \approx 0.19\).
This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
Answer: What follows are numerical and graphical summaries with comments:
rm(list = ls())
head(as_tibble(Weekly))
## # A tibble: 6 x 9
## Year Lag1 Lag2 Lag3 Lag4 Lag5 Volume Today Direction
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <fct>
## 1 1990 0.816 1.57 -3.94 -0.229 -3.48 0.155 -0.27 Down
## 2 1990 -0.27 0.816 1.57 -3.94 -0.229 0.149 -2.58 Down
## 3 1990 -2.58 -0.27 0.816 1.57 -3.94 0.160 3.51 Up
## 4 1990 3.51 -2.58 -0.27 0.816 1.57 0.162 0.712 Up
## 5 1990 0.712 3.51 -2.58 -0.27 0.816 0.154 1.18 Up
## 6 1990 1.18 0.712 3.51 -2.58 -0.27 0.154 -1.37 Down
str(Weekly)
## 'data.frame': 1089 obs. of 9 variables:
## $ Year : num 1990 1990 1990 1990 1990 1990 1990 1990 1990 1990 ...
## $ Lag1 : num 0.816 -0.27 -2.576 3.514 0.712 ...
## $ Lag2 : num 1.572 0.816 -0.27 -2.576 3.514 ...
## $ Lag3 : num -3.936 1.572 0.816 -0.27 -2.576 ...
## $ Lag4 : num -0.229 -3.936 1.572 0.816 -0.27 ...
## $ Lag5 : num -3.484 -0.229 -3.936 1.572 0.816 ...
## $ Volume : num 0.155 0.149 0.16 0.162 0.154 ...
## $ Today : num -0.27 -2.576 3.514 0.712 1.178 ...
## $ Direction: Factor w/ 2 levels "Down","Up": 1 1 2 2 2 1 2 2 2 1 ...
summary(Weekly)
## Year Lag1 Lag2 Lag3
## Min. :1990 Min. :-18.1950 Min. :-18.1950 Min. :-18.1950
## 1st Qu.:1995 1st Qu.: -1.1540 1st Qu.: -1.1540 1st Qu.: -1.1580
## Median :2000 Median : 0.2410 Median : 0.2410 Median : 0.2410
## Mean :2000 Mean : 0.1506 Mean : 0.1511 Mean : 0.1472
## 3rd Qu.:2005 3rd Qu.: 1.4050 3rd Qu.: 1.4090 3rd Qu.: 1.4090
## Max. :2010 Max. : 12.0260 Max. : 12.0260 Max. : 12.0260
## Lag4 Lag5 Volume
## Min. :-18.1950 Min. :-18.1950 Min. :0.08747
## 1st Qu.: -1.1580 1st Qu.: -1.1660 1st Qu.:0.33202
## Median : 0.2380 Median : 0.2340 Median :1.00268
## Mean : 0.1458 Mean : 0.1399 Mean :1.57462
## 3rd Qu.: 1.4090 3rd Qu.: 1.4050 3rd Qu.:2.05373
## Max. : 12.0260 Max. : 12.0260 Max. :9.32821
## Today Direction
## Min. :-18.1950 Down:484
## 1st Qu.: -1.1540 Up :605
## Median : 0.2410
## Mean : 0.1499
## 3rd Qu.: 1.4050
## Max. : 12.0260
We see that Ups and Downs occur in roughly a 55/45 ratio (605 vs. 484) in the Direction column.
ggpairs(Weekly)
A strong correlation is observed only between Volume and Year.
(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors.
lr_mod <- logistic_reg()
lr_fit <-
lr_mod %>%
set_engine("glm") %>%
fit(Direction ~ Lag1+Lag2+Lag3+Lag4+Lag5+Volume, data = Weekly)
lr_fit
## parsnip model object
##
##
## Call: stats::glm(formula = formula, family = stats::binomial, data = data)
##
## Coefficients:
## (Intercept) Lag1 Lag2 Lag3 Lag4
## 0.26686 -0.04127 0.05844 -0.01606 -0.02779
## Lag5 Volume
## -0.01447 -0.02274
##
## Degrees of Freedom: 1088 Total (i.e. Null); 1082 Residual
## Null Deviance: 1496
## Residual Deviance: 1486 AIC: 1500
Use the summary function to print the results.
lr_fit$fit %>% broom::tidy()
## # A tibble: 7 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.267 0.0859 3.11 0.00190
## 2 Lag1 -0.0413 0.0264 -1.56 0.118
## 3 Lag2 0.0584 0.0269 2.18 0.0296
## 4 Lag3 -0.0161 0.0267 -0.602 0.547
## 5 Lag4 -0.0278 0.0265 -1.05 0.294
## 6 Lag5 -0.0145 0.0264 -0.549 0.583
## 7 Volume -0.0227 0.0369 -0.616 0.538
Do any of the predictors appear to be statistically significant? If so, which ones?
Answer: Only Lag2 appears to be statistically significant (p ≈ 0.03).
(c) Compute the confusion matrix and overall fraction of correct predictions.
Weekly_pred <- predict(lr_fit, Weekly) %>%
bind_cols(Weekly)
conf_mat(Weekly_pred, Direction, .pred_class)
## Truth
## Prediction Down Up
## Down 54 48
## Up 430 557
Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
lr_summary <- summary(conf_mat(Weekly_pred, Direction, .pred_class))
lr_summary
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.561
## 2 kap binary 0.0350
## 3 sens binary 0.112
## 4 spec binary 0.921
## 5 ppv binary 0.529
## 6 npv binary 0.564
## 7 mcc binary 0.0550
## 8 j_index binary 0.0322
## 9 bal_accuracy binary 0.516
## 10 detection_prevalence binary 0.0937
## 11 precision binary 0.529
## 12 recall binary 0.112
## 13 f_meas binary 0.184
Answer: The model correctly predicted 56.1% of Directions, which is the model’s accuracy. It identifies Downs poorly and Ups very well (sensitivity 11.2% and specificity 92.1%, respectively): almost all of its errors come from predicting Up when the market in fact went Down.
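As a quick sketch, the two error rates can also be read directly off the confusion matrix above:
# of the 484 true Downs only 54 are identified; of the 605 true Ups, 557 are
c(down_error = 430 / (54 + 430), up_error = 48 / (48 + 557))  # ~0.888 and ~0.079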
(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor.
Weekly_train_10d <- Weekly %>%
filter(Year >= 1990 & Year <=2008)
lr_fit_10d <-
lr_mod %>%
set_engine("glm") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
lr_fit_10d
## parsnip model object
##
##
## Call: stats::glm(formula = formula, family = stats::binomial, data = data)
##
## Coefficients:
## (Intercept) Lag2
## 0.2033 0.0581
##
## Degrees of Freedom: 984 Total (i.e. Null); 983 Residual
## Null Deviance: 1355
## Residual Deviance: 1351 AIC: 1355
Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).
Weekly_test_10d <- Weekly %>%
filter(Year >= 2009 & Year <=2010)
Weekly_pred_10d <- predict(lr_fit_10d, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
conf_mat(Weekly_pred_10d, Direction, .pred_class)
## Truth
## Prediction Down Up
## Down 9 5
## Up 34 56
Answer: We get the following metrics for the model:
lr_summary_10d <- summary(conf_mat(Weekly_pred_10d, Direction, .pred_class))
lr_summary_10d
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.625
## 2 kap binary 0.141
## 3 sens binary 0.209
## 4 spec binary 0.918
## 5 ppv binary 0.643
## 6 npv binary 0.622
## 7 mcc binary 0.184
## 8 j_index binary 0.127
## 9 bal_accuracy binary 0.564
## 10 detection_prevalence binary 0.135
## 11 precision binary 0.643
## 12 recall binary 0.209
## 13 f_meas binary 0.316
Observed accuracy is 62.5%
(e) Repeat (d) using LDA.
Answer: Metrics for the LDA model are the same as the ones for the logistic regression.
lr_fit_10e <- lda(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_test_10e <- Weekly %>%
filter(Year >= 2009 & Year <=2010)
Weekly_pred_10e <- Weekly_test_10e %>%
mutate(.pred_class = predict(lr_fit_10e, Weekly_test_10e)$class)
lr_summary_10e <- summary(conf_mat(Weekly_pred_10e, Direction, .pred_class))
lr_summary_10e
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.625
## 2 kap binary 0.141
## 3 sens binary 0.209
## 4 spec binary 0.918
## 5 ppv binary 0.643
## 6 npv binary 0.622
## 7 mcc binary 0.184
## 8 j_index binary 0.127
## 9 bal_accuracy binary 0.564
## 10 detection_prevalence binary 0.135
## 11 precision binary 0.643
## 12 recall binary 0.209
## 13 f_meas binary 0.316
Observed accuracy is 62.5%
(f) Repeat (d) using QDA.
Answer: We get the following result for the QDA model:
lr_fit_10f <- qda(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_test_10f <- Weekly %>%
filter(Year >= 2009 & Year <=2010)
Weekly_pred_10f <- Weekly_test_10f %>%
mutate(.pred_class = predict(lr_fit_10f, Weekly_test_10f)$class)
lr_summary_10f <- accuracy(conf_mat(Weekly_pred_10f, Direction, .pred_class)$table)
lr_summary_10f
## # A tibble: 1 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.587
Observed accuracy is 58.7%
(g) Repeat (d) using KNN with K = 1.
Answer: We get the following result for the KNN model:
lr_mod_10g <- nearest_neighbor(neighbors = 1)
lr_fit_10g <-
lr_mod_10g %>%
set_engine("kknn") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_test_10g <- Weekly %>%
filter(Year >= 2009 & Year <=2010)
Weekly_pred_10g <- predict(lr_fit_10g, Weekly_test_10g) %>%
bind_cols(Weekly_test_10g)
lr_summary_10g <- summary(conf_mat(Weekly_pred_10g, Direction, .pred_class))
lr_summary_10g
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.5
## 2 kap binary 0.00332
## 3 sens binary 0.512
## 4 spec binary 0.492
## 5 ppv binary 0.415
## 6 npv binary 0.588
## 7 mcc binary 0.00338
## 8 j_index binary 0.00343
## 9 bal_accuracy binary 0.502
## 10 detection_prevalence binary 0.510
## 11 precision binary 0.415
## 12 recall binary 0.512
## 13 f_meas binary 0.458
Observed accuracy is 50.0%
(h) Which of these methods appears to provide the best results on this data?
Answer: In terms of accuracy we observe the following:
tbl_10h <- tibble(
model = c("LogReg",
"LDA",
"QDA",
"KNN"),
accuracy = c(percent(lr_summary_10d$.estimate[1]),
percent(lr_summary_10e$.estimate[1]),
percent(lr_summary_10f$.estimate[1]),
percent(lr_summary_10g$.estimate[1])
)
)
tbl_10h
## # A tibble: 4 x 2
## model accuracy
## <chr> <chr>
## 1 LogReg 62.5%
## 2 LDA 62.5%
## 3 QDA 58.7%
## 4 KNN 50.0%
LogReg and LDA exhibit the same highest accuracy.
(i) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.
Answer: We experiment with Lag1 and Lag2 as predictors, since they exhibit the lowest p-values in (b).
#LogReg
logReg_fit_10i <-
lr_mod %>%
set_engine("glm") %>%
fit(Direction ~ Lag1 + Lag2, data = Weekly_train_10d)
Weekly_logReg_pred_10i <- predict(logReg_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
logReg_summary_10i <- summary(conf_mat(Weekly_logReg_pred_10i, Direction, .pred_class))
#LDA
lda_fit_10i <- lda(Direction ~ Lag1 + Lag2, data = Weekly_train_10d)
Weekly_lda_pred_10i <- Weekly_test_10d %>%
mutate(.pred_class = predict(lda_fit_10i, Weekly_test_10d)$class)
lda_summary_10i <- summary(conf_mat(Weekly_lda_pred_10i, Direction, .pred_class))
#QDA
qda_fit_10i <- qda(Direction ~ Lag1 + Lag2, data = Weekly_train_10d)
Weekly_qda_pred_10i <- Weekly_test_10d %>%
mutate(.pred_class = predict(qda_fit_10i, Weekly_test_10d)$class)
qda_summary_10i <- summary(conf_mat(Weekly_qda_pred_10i, Direction, .pred_class))
#KNN
knn_mod_10i <- nearest_neighbor(neighbors = 1)
knn_fit_10i <-
knn_mod_10i %>%
set_engine("kknn") %>%
fit(Direction ~ Lag1 + Lag2, data = Weekly_train_10d)
Weekly_knn_pred_10i <- predict(knn_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
knn_summary_10i <- summary(conf_mat(Weekly_knn_pred_10i, Direction, .pred_class))
#Comparison table
tbl_10i <- tibble(
model = c("LogReg",
"LDA",
"QDA",
"KNN"),
accuracy_Lag2 = c(percent(lr_summary_10d$.estimate[1]),
percent(lr_summary_10e$.estimate[1]),
percent(lr_summary_10f$.estimate[1]),
percent(lr_summary_10g$.estimate[1])
),
accuracy_Lag1_Lag2 = c(percent(logReg_summary_10i$.estimate[1]),
percent(lda_summary_10i$.estimate[1]),
percent(qda_summary_10i$.estimate[1]),
percent(knn_summary_10i$.estimate[1])
)
)
tbl_10i
## # A tibble: 4 x 3
## model accuracy_Lag2 accuracy_Lag1_Lag2
## <chr> <chr> <chr>
## 1 LogReg 62.5% 57.7%
## 2 LDA 62.5% 57.7%
## 3 QDA 58.7% 55.8%
## 4 KNN 50.0% 48.1%
We observe no improvement in terms of accuracy.
#knn 3
knn3_mod_10i <- nearest_neighbor(neighbors = 3)
knn3_fit_10i <-
knn3_mod_10i %>%
set_engine("kknn") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_knn3_pred_10i <- predict(knn3_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
knn3_summary_10i <- summary(conf_mat(Weekly_knn3_pred_10i, Direction, .pred_class))
#knn 5
knn5_mod_10i <- nearest_neighbor(neighbors = 5)
knn5_fit_10i <-
knn5_mod_10i %>%
set_engine("kknn") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_knn5_pred_10i <- predict(knn5_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
knn5_summary_10i <- summary(conf_mat(Weekly_knn5_pred_10i, Direction, .pred_class))
#knn 10
knn10_mod_10i <- nearest_neighbor(neighbors = 10)
knn10_fit_10i <-
knn10_mod_10i %>%
set_engine("kknn") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_knn10_pred_10i <- predict(knn10_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
knn10_summary_10i <- summary(conf_mat(Weekly_knn10_pred_10i, Direction, .pred_class))
#knn 100
knn100_mod_10i <- nearest_neighbor(neighbors = 100)
knn100_fit_10i <-
knn100_mod_10i %>%
set_engine("kknn") %>%
fit(Direction ~ Lag2, data = Weekly_train_10d)
Weekly_knn100_pred_10i <- predict(knn100_fit_10i, Weekly_test_10d) %>%
bind_cols(Weekly_test_10d)
knn100_summary_10i <- summary(conf_mat(Weekly_knn100_pred_10i, Direction, .pred_class))
tbl_Ks_10i <- tibble(
accuracy = c(lr_summary_10g$.estimate[1],
knn3_summary_10i$.estimate[1],
knn5_summary_10i$.estimate[1],
knn10_summary_10i$.estimate[1],
knn100_summary_10i$.estimate[1]
),
`1/K` = c(1/1,
1/3,
1/5,
1/10,
1/100
)
)
ggplot(data = tbl_Ks_10i, aes(x = `1/K`, y = accuracy)) +
geom_point() +
geom_smooth()
We observe the highest accuracy for K = 100.
In conclusion, none of these combinations improves, in terms of accuracy, on LogReg and LDA with Lag2 as the only predictor.
In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set.
(a) Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.
Answer: The binary variable is created in the following code:
# Clearing the environment
rm(list = ls())
Auto_11a <- Auto %>%
mutate(mpg01 = ifelse(mpg > median(mpg), 1, 0)) %>%
select(mpg01, everything()) %>%
mutate(mpg01 = as.factor(mpg01)) %>%
mutate(cylinders = as.factor(cylinders)) %>%
mutate(origin = as.factor(origin)) %>%
mutate(year = as.integer(year))
head(Auto_11a)
## mpg01 mpg cylinders displacement horsepower weight acceleration year
## 1 0 18 8 307 130 3504 12.0 70
## 2 0 15 8 350 165 3693 11.5 70
## 3 0 18 8 318 150 3436 11.0 70
## 4 0 16 8 304 150 3433 12.0 70
## 5 0 17 8 302 140 3449 10.5 70
## 6 0 15 8 429 198 4341 10.0 70
## origin name
## 1 1 chevrolet chevelle malibu
## 2 1 buick skylark 320
## 3 1 plymouth satellite
## 4 1 amc rebel sst
## 5 1 ford torino
## 6 1 ford galaxie 500
(b) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
Answer: First we generate plots for the discrete variables, cylinders and origin. After that, for the continuous ones: displacement, horsepower, weight, acceleration, and year. We conclude with comments on which variables look useful for predicting mpg01.
cylinders
Auto_11b <- Auto_11a %>%
select(-mpg, -name)
ggplot(Auto_11b) +
geom_bar(aes(cylinders, fill = mpg01)) +
facet_grid(.~mpg01)
We see that low mpg01 (mpg below the median) is associated with higher numbers of cylinders.
origin
ggplot(Auto_11b) +
geom_bar(aes(origin, fill = origin)) +
facet_grid(.~mpg01)
We see that origin 1 is strongly associated with lower mpg.
displacement, horsepower, weight
displ_11b <- ggplot(Auto_11b) +
geom_boxplot(aes(x = mpg01, y = displacement, fill = mpg01)) +
theme(legend.position="none")
horse_11b <- ggplot(Auto_11b) +
geom_boxplot(aes(x = mpg01, y = horsepower, fill = mpg01)) +
theme(legend.position="none")
weight_11b <- ggplot(Auto_11b) +
geom_boxplot(aes(x = mpg01, y = weight, fill = mpg01)) +
theme(legend.position="none")
grid.arrange(displ_11b, horse_11b, weight_11b, ncol = 3)
We see a clear relation between mpg01 and all three variables.
acceleration and year
acc_11b <- ggplot(Auto_11b) +
geom_boxplot(aes(x = mpg01, y = acceleration, fill = mpg01)) +
theme(legend.position="none")
year_11b <- ggplot(Auto_11b) +
geom_boxplot(aes(x = mpg01, y = year, fill = mpg01)) +
theme(legend.position="none")
grid.arrange(acc_11b, year_11b, ncol = 2)
We see a clear relation between mpg01 and these two variables as well.
Overall, all of the variables exhibit good predictive capability for mpg01.
(c) Split the data into a training set and a test set.
set.seed(123)
auto_split_11c <- initial_split(Auto_11b, prop = 3/4)
head(training(auto_split_11c))
## mpg01 cylinders displacement horsepower weight acceleration year origin
## 2 0 8 350 165 3693 11.5 70 1
## 3 0 8 318 150 3436 11.0 70 1
## 4 0 8 304 150 3433 12.0 70 1
## 5 0 8 302 140 3449 10.5 70 1
## 6 0 8 429 198 4341 10.0 70 1
## 7 0 8 454 220 4354 9.0 70 1
(d) Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b).
lda_fit_11d <- lda(mpg01 ~ ., data = training(auto_split_11c))
Auto_11d_pred <- testing(auto_split_11c) %>%
mutate(.pred_class = predict(lda_fit_11d, testing(auto_split_11c))$class)
lda_summary_11d <- summary(conf_mat(Auto_11d_pred, mpg01, .pred_class))
lda_summary_11d
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.939
## 2 kap binary 0.877
## 3 sens binary 0.933
## 4 spec binary 0.943
## 5 ppv binary 0.933
## 6 npv binary 0.943
## 7 mcc binary 0.877
## 8 j_index binary 0.877
## 9 bal_accuracy binary 0.938
## 10 detection_prevalence binary 0.459
## 11 precision binary 0.933
## 12 recall binary 0.933
## 13 f_meas binary 0.933
What is the test error of the model obtained?
Answer: We get 6.12%
(e) Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b).
We have to remove cylinders as it is highly collinear.
qda_fit_11e <- qda(mpg01 ~ ., data = training(auto_split_11c) %>% select(-cylinders))
Auto_11e_pred <- testing(auto_split_11c) %>%
mutate(.pred_class = predict(qda_fit_11e, testing(auto_split_11c))$class)
qda_summary_11e <- summary(conf_mat(Auto_11e_pred, mpg01, .pred_class))
qda_summary_11e
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.908
## 2 kap binary 0.812
## 3 sens binary 0.8
## 4 spec binary 1
## 5 ppv binary 1
## 6 npv binary 0.855
## 7 mcc binary 0.827
## 8 j_index binary 0.8
## 9 bal_accuracy binary 0.9
## 10 detection_prevalence binary 0.367
## 11 precision binary 1
## 12 recall binary 0.8
## 13 f_meas binary 0.889
What is the test error of the model obtained?
Answer: We get 9.18%
(f) Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b).
lr_mod_11f <- logistic_reg()
logReg_fit_11f <-
lr_mod_11f %>%
set_engine("glm") %>%
fit(mpg01 ~ ., data = training(auto_split_11c))
logReg_pred_11f <- predict(logReg_fit_11f, testing(auto_split_11c)) %>%
bind_cols(testing(auto_split_11c))
logReg_summary_11f <- summary(conf_mat(logReg_pred_11f, mpg01, .pred_class))
logReg_summary_11f
## # A tibble: 13 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.939
## 2 kap binary 0.876
## 3 sens binary 0.889
## 4 spec binary 0.981
## 5 ppv binary 0.976
## 6 npv binary 0.912
## 7 mcc binary 0.879
## 8 j_index binary 0.870
## 9 bal_accuracy binary 0.935
## 10 detection_prevalence binary 0.418
## 11 precision binary 0.976
## 12 recall binary 0.889
## 13 f_meas binary 0.930
What is the test error of the model obtained?
Answer: We get 6.12%
(g) Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b).
We remove the cylinders and origin factor variables, after which we no longer get the error: Error in get(ctr, mode = "function", envir = parent.frame()) : object 'contr.dummy' of mode 'function' was not found
#knn 3
knn3_mod_11g <- nearest_neighbor(neighbors = 3)
knn3_fit_11g <-
knn3_mod_11g %>%
set_engine("kknn") %>%
fit(mpg01 ~ ., data = training(auto_split_11c) %>% select(-cylinders, -origin))
knn3_pred_11g <- predict(knn3_fit_11g, testing(auto_split_11c)) %>%
bind_cols(testing(auto_split_11c))
knn3_summary_11g <- summary(conf_mat(knn3_pred_11g, mpg01, .pred_class))
#knn 5
knn5_mod_11g <- nearest_neighbor(neighbors = 5)
knn5_fit_11g <-
knn5_mod_11g %>%
set_engine("kknn") %>%
fit(mpg01 ~ ., data = training(auto_split_11c) %>% select(-cylinders, -origin))
knn5_pred_11g <- predict(knn5_fit_11g, testing(auto_split_11c)) %>%
bind_cols(testing(auto_split_11c))
knn5_summary_11g <- summary(conf_mat(knn5_pred_11g, mpg01, .pred_class))
#knn 10
knn10_mod_11g <- nearest_neighbor(neighbors = 10)
knn10_fit_11g <-
knn10_mod_11g %>%
set_engine("kknn") %>%
fit(mpg01 ~ ., data = training(auto_split_11c) %>% select(-cylinders, -origin))
knn10_pred_11g <- predict(knn10_fit_11g, testing(auto_split_11c)) %>%
bind_cols(testing(auto_split_11c))
knn10_summary_11g <- summary(conf_mat(knn10_pred_11g, mpg01, .pred_class))
#knn 100
knn100_mod_11g <- nearest_neighbor(neighbors = 100)
knn100_fit_11g <-
knn100_mod_11g %>%
set_engine("kknn") %>%
fit(mpg01 ~ ., data = training(auto_split_11c) %>% select(-cylinders, -origin))
knn100_pred_11g <- predict(knn100_fit_11g, testing(auto_split_11c)) %>%
bind_cols(testing(auto_split_11c))
knn100_summary_11g <- summary(conf_mat(knn100_pred_11g, mpg01, .pred_class))
tbl_Ks_11g <- tibble(
accuracy = c(
knn3_summary_11g$.estimate[1],
knn5_summary_11g$.estimate[1],
knn10_summary_11g$.estimate[1],
knn100_summary_11g$.estimate[1]
),
`1/K` = c(
1/3,
1/5,
1/10,
1/100
),
K = c(
3,
5,
10,
100
)
)
What test errors do you obtain?
tbl_Ks_11g %>%
select(-`1/K`) %>%
arrange(desc(accuracy))
## # A tibble: 4 x 2
## accuracy K
## <dbl> <dbl>
## 1 0.918 5
## 2 0.918 100
## 3 0.908 3
## 4 0.908 10
Which value of K seems to perform the best on this data set?
ggplot(data = tbl_Ks_11g, aes(x = `1/K`, y = accuracy)) +
geom_point() +
geom_smooth()
Answer: The best test errors are observed with \(K = 5\) and \(K = 100\).
# clear the environment
rm(list = ls())
This problem involves writing functions.
(a) Write a function, Power(), that prints out the result of raising 2 to the 3rd power. In other words, your function should compute \(2^3\) and print out the results.
Answer: Following is the implementation:
Power <- function(){
2^3
}
Power()
## [1] 8
(b) Create a new function, Power2(), that allows you to pass any two numbers, x and a, and prints out the value of x^a. You can do this by beginning your function with the line
Power2 =function (x,a){
You should be able to call your function by entering, for instance,
Power2 (3,8)
on the command line. This should output the value of \(3^8\), namely, 6,561.
Answer: Following is the implementation:
Power2 <- function(x, a){
x^a
}
Power2(3, 8)
## [1] 6561
(c) Using the Power2() function that you just wrote, compute \(10^3\), \(8^{17}\), and \(131^3\).
Answer: Following is the implementation:
Power2(10, 3)
## [1] 1000
Power2(8, 17)
## [1] 2.2518e+15
Power2(131, 3)
## [1] 2248091
(d) Now create a new function, Power3(), that actually returns the result x^a as an R object, rather than simply printing it to the screen. That is, if you store the value x^a in an object called result within your function, then you can simply return() this result, using the following line:
return (result)
The line above should be the last line in your function, before the } symbol.
Answer: Following is the implementation:
Power3 <- function(x, a){
result <- x^a
return(result)
}
(e) Now using the Power3() function, create a plot of \(f(x) = x^2\). The x-axis should display a range of integers from 1 to 10, and the y-axis should display \(x^2\). Label the axes appropriately, and use an appropriate title for the figure. Consider displaying either the x-axis, the y-axis, or both on the log-scale. You can do this by using log="x", log="y", or log="xy" as arguments to the plot() function.
Answer: Following is the implementation:
x <- 1:10
y <- Power3(x, 2)
ggplot() +
geom_point(aes(x = log(x), y = log(y))) +
labs(x = "log(x)", y = "log(y)", title = "f(x) = x^2 on the log-log scale")
(f) Create a function, PlotPower(), that allows you to create a plot of x against x^a for a fixed a and for a range of values of x. For instance, if you call
PlotPower (1:10 ,3)
then a plot should be created with an x-axis taking on values 1, 2, . . . , 10, and a y-axis taking on values \(1^3\), \(2^3\), . . . , \(10^3\).
Answer: Following is the code:
PlotPower <- function(x, a){
y <- x^a
ggplot() +
geom_point(aes(x = x, y = y))
}
PlotPower(1:10, 3)
# clear the environment
rm(list = ls())
Using the Boston data set, fit classification models in order to predict whether a given suburb has a crime rate above or below the median. Explore logistic regression, LDA, and KNN models using various subsets of the predictors. Describe your findings.
Answer: Following is the exploration of LogReg, LDA, and KNN models using various subsets of predictors:
head(Boston)
## crim zn indus chas nox rm age dis rad tax ptratio black
## 1 0.00632 18 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3 396.90
## 2 0.02731 0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8 396.90
## 3 0.02729 0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8 392.83
## 4 0.03237 0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7 394.63
## 5 0.06905 0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7 396.90
## 6 0.02985 0 2.18 0 0.458 6.430 58.7 6.0622 3 222 18.7 394.12
## lstat medv
## 1 4.98 24.0
## 2 9.14 21.6
## 3 4.03 34.7
## 4 2.94 33.4
## 5 5.33 36.2
## 6 5.21 28.7
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Boston_13 <- Boston %>%
mutate(crimrate = as.factor(ifelse(crim>median(crim), "high", "low"))) %>%
select(crimrate, everything())
head(Boston_13)
## crimrate crim zn indus chas nox rm age dis rad tax ptratio
## 1 low 0.00632 18 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3
## 2 low 0.02731 0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8
## 3 low 0.02729 0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8
## 4 low 0.03237 0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7
## 5 low 0.06905 0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7
## 6 low 0.02985 0 2.18 0 0.458 6.430 58.7 6.0622 3 222 18.7
## black lstat medv
## 1 396.90 4.98 24.0
## 2 396.90 9.14 21.6
## 3 392.83 4.03 34.7
## 4 394.63 2.94 33.4
## 5 396.90 5.33 36.2
## 6 394.12 5.21 28.7
attr(Boston_13, "factor") <- "crimrate"
attr(Boston_13, "relg1") <- c("zn", "indus", "nox")
attr(Boston_13, "relg2") <- c("rm", "age", "dis")
attr(Boston_13, "relg3") <- c("rad", "tax", "ptratio")
attr(Boston_13, "relg4") <- c("black", "lstat", "medv")
fac_var <- attr(Boston_13, "factor")
rel_vars_1 <- attr(Boston_13, "relg1")
rel_vars_2 <- attr(Boston_13, "relg2")
rel_vars_3 <- attr(Boston_13, "relg3")
rel_vars_4 <- attr(Boston_13, "relg4")
grob1_13 <- ggduo(
Boston_13, fac_var, rel_vars_1,
mapping = aes(color = crimrate)
)
grob2_13 <- ggduo(
Boston_13, fac_var, rel_vars_2,
mapping = aes(color = crimrate)
)
grob3_13 <- ggduo(
Boston_13, fac_var, rel_vars_3,
mapping = aes(color = crimrate)
)
grob4_13 <- ggduo(
Boston_13, fac_var, rel_vars_4,
mapping = aes(color = crimrate)
)
plot_grid(
ggmatrix_gtable(grob1_13),
ggmatrix_gtable(grob2_13),
ggmatrix_gtable(grob3_13),
ggmatrix_gtable(grob4_13),
nrow = 1
)
We observe a very good split for indus, nox, age, dis, rad, and tax. Single-variable overlap is observed for zn and black. In between, in terms of clearness of split, are rm, ptratio, lstat, and medv.
ggpairs(Boston_13,
mapping = aes(color = crimrate),
columns = c(3,4,5,6,7,8,9,10,11,12,13,14,15)
)
We observe that nox is highly correlated with indus, age, and dis, hence we keep only nox out of those four variables for prediction purposes. We observe that rm is highly correlated with lstat and medv, hence we keep only rm out of those three. We also choose ptratio for prediction purposes. Furthermore, we observe that the nox and rm, as well as the nox and ptratio combinations achieve a good split in terms of crime rate: high and low rates separate with a clear boundary between them. The same is not true of the rm and ptratio combination.
set.seed(123)
Boston_13_split <- initial_split(Boston_13)
Boston_13_training <- training(Boston_13_split)
Boston_13_testing <- testing(Boston_13_split)
#LogRegError function
LogRegError <- function(formula,
outc,
df_training,
df_testing
){
resp <- enquo(outc)
logReg_mod <- logistic_reg()
logReg_fit <-
logReg_mod %>%
set_engine("glm", control = list(maxit = 50)) %>%
fit(formula, data = df_training)
logReg_pred <- predict(logReg_fit, df_testing) %>%
bind_cols(df_testing)
logReg_summary <- summary(conf_mat(logReg_pred, !!resp, .pred_class))
1 - logReg_summary$.estimate[1]
}
tbl_logReg_errors <-
tibble(Error = c(LogRegError(crimrate ~ .,
crimrate,
Boston_13_training,
Boston_13_testing),
LogRegError(crimrate ~ nox + rm + ptratio,
crimrate,
Boston_13_training,
Boston_13_testing),
LogRegError(crimrate ~ nox + ptratio,
crimrate,
Boston_13_training,
Boston_13_testing),
LogRegError(crimrate ~ nox + rm,
crimrate,
Boston_13_training,
Boston_13_testing)
),
Formula = c(format(crimrate ~ .),
format(crimrate ~ nox + rm + ptratio),
format(crimrate ~ nox + ptratio),
format(crimrate ~ nox + rm)
),
Model = as_factor("LogReg")
) %>%
arrange(Error)
tbl_logReg_errors
## # A tibble: 4 x 3
## Error Formula Model
## <dbl> <chr> <fct>
## 1 0.103 crimrate ~ nox + ptratio LogReg
## 2 0.111 crimrate ~ nox + rm + ptratio LogReg
## 3 0.135 crimrate ~ nox + rm LogReg
## 4 0.167 crimrate ~ . LogReg
As the graphics suggested, nox and ptratio achieve the lowest error.
# LDA Error function
LDAError <- function(formula,
outc,
df_training,
df_testing){
resp <- enquo(outc)
lda_fit <- lda(formula, data = df_training)
df_pred <- df_testing %>%
mutate(.pred_class = predict(lda_fit, df_testing)$class)
lda_summary <- summary(conf_mat(df_pred, !!resp, .pred_class))
1 - lda_summary$.estimate[1]
}
tbl_LDA_errors <-
tibble(Error = c(LDAError(crimrate ~ .,
crimrate,
Boston_13_training,
Boston_13_testing),
LDAError(crimrate ~ nox + rm + ptratio,
crimrate,
Boston_13_training,
Boston_13_testing),
LDAError(crimrate ~ nox + ptratio,
crimrate,
Boston_13_training,
Boston_13_testing),
LDAError(crimrate ~ nox + rm,
crimrate,
Boston_13_training,
Boston_13_testing)
),
Formula = c(format(crimrate ~ .),
format(crimrate ~ nox + rm + ptratio),
format(crimrate ~ nox + ptratio),
format(crimrate ~ nox + rm)
),
Model = as_factor("LDA")
) %>%
arrange(Error)
tbl_LDA_errors
## # A tibble: 4 x 3
## Error Formula Model
## <dbl> <chr> <fct>
## 1 0.119 crimrate ~ . LDA
## 2 0.127 crimrate ~ nox + ptratio LDA
## 3 0.151 crimrate ~ nox + rm + ptratio LDA
## 4 0.151 crimrate ~ nox + rm LDA
For this method we achieve the lowest error when all predictors are used.
# KNN Error function
KNNError <- function(formula,
outc,
k,
df_training,
df_testing
){
resp <- enquo(outc)
knn_mod <- nearest_neighbor(neighbors = k)
knn_fit <-
knn_mod %>%
set_engine("kknn") %>%
fit(formula, data = df_training)
knn_pred <- predict(knn_fit, df_testing) %>%
bind_cols(df_testing)
knn_summary <- summary(conf_mat(knn_pred, !!resp, .pred_class))
1 - knn_summary$.estimate[1]
}
tbl_knn_errors <-
tibble(
Ks = 1:100,
Error = map_dbl(1:100,
function(x)
KNNError(
formula = crimrate ~ .,
outc = crimrate,
k = x,
df_training = Boston_13_training,
df_testing = Boston_13_testing
)
),
Formula = format(crimrate ~ .)
) %>%
bind_rows(
tibble(
Ks = 1:100,
Error = map_dbl(1:100,
function(x)
KNNError(
formula = crimrate ~ nox + ptratio,
outc = crimrate,
k = x,
df_training = Boston_13_training,
df_testing = Boston_13_testing
)
),
Formula = format(crimrate ~ nox + ptratio)
)
) %>%
bind_rows(
tibble(
Ks = 1:100,
Error = map_dbl(1:100,
function(x)
KNNError(
formula = crimrate ~ nox + rm,
outc = crimrate,
k = x,
df_training = Boston_13_training,
df_testing = Boston_13_testing
)
),
Formula = format(crimrate ~ nox + rm)
)
) %>%
bind_rows(
tibble(
Ks = 1:100,
Error = map_dbl(1:100,
function(x)
KNNError(
formula = crimrate ~ nox + ptratio + rm,
outc = crimrate,
k = x,
df_training = Boston_13_training,
df_testing = Boston_13_testing
)
),
Formula = format(crimrate ~ nox + ptratio + rm)
)
)
ggplot(tbl_knn_errors) +
geom_line(aes(x = Ks, y = Error)) +
facet_wrap( ~ Formula)
We observe that lowest error is achieved for the case below:
tbl_knn_errors %>%
select(Formula, Ks, Error) %>%
filter(Error == min(Error)) %>%
filter(Ks == min(Ks))
## # A tibble: 1 x 3
## Formula Ks Error
## <chr> <int> <dbl>
## 1 crimrate ~ nox + ptratio 4 0.0397
tbl_knn_errors %>%
select(Formula, Ks, Error) %>%
filter(Error == min(Error)) %>%
filter(Ks == min(Ks)) %>%
mutate(Model = as_factor("KNN")) %>%
select(-Ks) %>%
bind_rows(
tbl_LDA_errors %>%
select(Model, Error, Formula) %>%
filter(Error == min(Error))
) %>%
bind_rows(
tbl_logReg_errors %>%
select(Model, Error, Formula) %>%
filter(Error == min(Error))
) %>%
arrange(Error) %>%
ggplot() +
geom_col(aes(x = fct_reorder(Model, desc(Error)), y = Error, fill = Formula)) +
labs(x = "Model") +
coord_flip()