Supervised Machine Learning: Classification Quiz Answers

Supervised Machine Learning: Classification Week 01 Quiz Answers

Quiz: End of Module

Q1. (True/False) One of the requirements of logistic regression is that you need a variable with two classes.

  • True
  • False

Q2. (True/False) The shape of the ROC curve is the leading indicator of an overfitted logistic regression.

  • True
  • False

Q3. Consider this scenario for Questions 3 to 7.

You are evaluating a binary classifier. There are 50 positive outcomes in the test data, and 100 observations. Using a 50% threshold, the classifier predicts 40 positive outcomes, of which 10 are incorrect.

What is the classifier’s Precision on the test sample?

  • 25%
  • 60%
  • 75%
  • 80%
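As a quick sanity check on the scenario's numbers (a worked calculation, not an official answer key): 40 positive predictions with 10 incorrect leaves 30 true positives, and Precision is true positives over predicted positives.

```python
predicted_positive = 40   # positive predictions at the 50% threshold
false_positive = 10       # incorrect positive predictions
true_positive = predicted_positive - false_positive  # 30

precision = true_positive / predicted_positive
print(f"Precision = {precision:.0%}")  # 75%
```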

Q4. (Same scenario as Question 3.)

What is the classifier’s Recall on the test sample?

  • 25%
  • 60%
  • 75%
  • 80%
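Continuing the worked arithmetic for the shared scenario: Recall is true positives over actual positives, and the test data contains 50 positives.

```python
true_positive = 30        # 40 positive predictions minus the 10 incorrect ones
actual_positive = 50      # positive outcomes in the test data

recall = true_positive / actual_positive
print(f"Recall = {recall:.0%}")  # 60%
```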

Q5. (Same scenario as Question 3.)

What is the classifier’s F1 score on the test sample?

  • 50%
  • 66.7%
  • 67.5%
  • 70%
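The F1 score is the harmonic mean of the Precision and Recall computed above; a short check:

```python
precision = 30 / 40   # true positives / predicted positives
recall = 30 / 50      # true positives / actual positives

# Harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.1%}")  # 66.7%
```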

Q6. (Same scenario as Question 3.)

Increasing the threshold to 60% results in 5 additional positive predictions, all of which are correct. Which of the following statements about this new model (compared with the original model that had a 50% threshold) is TRUE?

  • The F1 score of the classifier would decrease.
  • The area under the ROC curve would decrease.
  • The F1 score of the classifier would remain the same.
  • The area under the ROC curve would remain the same.
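Taking the question's statement at face value (the 60% threshold yields 45 positive predictions, the 5 new ones all correct), the F1 score of both models can be recomputed directly. This is a worked check under that assumption, not an official answer key:

```python
def f1(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Original model: 30 true positives, 10 false positives, 20 missed positives.
f1_old = f1(tp=30, fp=10, fn=20)
# New model (assumed): 5 additional correct positive predictions.
f1_new = f1(tp=35, fp=10, fn=15)
print(round(f1_old, 3), round(f1_new, 3))  # 0.667 0.737
```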

Q7. (Same scenario as Question 3.)

The threshold is now increased further, to 70%. Which of the following statements is TRUE?

  • The Recall of the classifier would decrease.
  • The Precision of the classifier would decrease.
  • The Recall of the classifier would increase or remain the same.
  • The Precision of the classifier would increase or remain the same.
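The general effect of raising the threshold can be seen on a toy example (hypothetical scores and labels): fewer observations clear a higher bar, so Recall can only fall or stay the same, while Precision tends to rise as the least-confident predictions drop out.

```python
scores = [0.95, 0.9, 0.8, 0.75, 0.65, 0.55]   # hypothetical P(positive) scores
labels = [1,    1,   1,   0,    1,    0]       # true classes

def precision_recall(threshold):
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(0.5))   # lower precision, full recall
print(precision_recall(0.7))   # higher precision, lower recall
```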

Supervised Machine Learning: Classification Week 02 Quiz Answers

Quiz 01: End of Module

Q1. (True/False) K Nearest Neighbors classifiers with a large k tend to be the best classifiers.

  • True
  • False

Q2. (True/False) When building a KNN classifier for a variable with 2 classes, it is advantageous to set the neighbor count k to an odd number.

  • True
  • False

Q3. (True/False) The Euclidean distance between two points will always be shorter than the Manhattan distance.

  • True
  • False
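A quick numeric check clarifies the "always" in this question: the Euclidean distance is never longer than the Manhattan distance, but for axis-aligned moves the two are equal, so it is not always strictly shorter.

```python
import math

def euclidean(p, q):
    """Straight-line (L2) distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    """Grid (L1) distance between two points."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Diagonal move: Euclidean is strictly shorter.
print(euclidean((0, 0), (3, 4)), manhattan((0, 0), (3, 4)))  # 5.0 7

# Axis-aligned move: the two distances are equal.
print(euclidean((0, 0), (5, 0)), manhattan((0, 0), (5, 0)))  # 5.0 5
```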

Q4. The main purpose of scaling features before fitting a k nearest neighbor model is to:

  • Help find the appropriate value of k
  • Ensure decision boundaries have roughly the same size for all classes
  • Ensure that features have similar influence on the distance calculation
  • Break ties in case there is the same number of neighbors of different classes next to a given observation
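A minimal sketch of why scaling matters for KNN, using hypothetical features and spreads: without scaling, the feature with the larger numeric range dominates the Euclidean distance entirely.

```python
import math

# Two customers described by (age in years, income in dollars) -- toy values.
a = (25, 50_000)
b = (60, 52_000)

# Unscaled: the income axis swamps the 35-year age difference.
raw = math.dist(a, b)

# Divide each feature by a typical spread (assumed, std-dev-like values);
# now both features contribute comparably to the distance.
spreads = (15, 20_000)
a_scaled = tuple(v / s for v, s in zip(a, spreads))
b_scaled = tuple(v / s for v, s in zip(b, spreads))
scaled = math.dist(a_scaled, b_scaled)

print(round(raw, 1), round(scaled, 2))
```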

Q5. These are all pros of the k nearest neighbor algorithm EXCEPT:

  • It is sensitive to the curse of dimensionality
  • It is easy to interpret
  • It adapts well to new training data
  • It is simple to implement as it does not require parameter estimation

Quiz 02: End of Module

Q1. Select the TRUE statement regarding the cost function for SVMs:

  • SVMs do not use a cost function. They use regularization instead of a cost function.
  • SVMs use the same loss function as logistic regression
  • SVMs use a loss function that penalizes vectors prone to misclassification
  • SVMs use the Hinge Loss function as a cost function
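For reference, the hinge loss named in the last option has a simple closed form: zero for points classified correctly and outside the margin, and growing linearly otherwise. A minimal sketch:

```python
def hinge_loss(y, score):
    """Hinge loss for a label y in {-1, +1} and a raw decision score."""
    return max(0.0, 1.0 - y * score)

print(hinge_loss(+1, 2.5))   # correct and outside the margin -> 0 loss
print(hinge_loss(+1, 0.3))   # correct but inside the margin -> small loss
print(hinge_loss(+1, -1.0))  # misclassified -> large loss
```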

Q2. Which statement about Support Vector Machines is TRUE?

  • Support Vector Machine models are non-linear.
  • Support Vector Machine models rarely overfit on training data.
  • Support Vector Machine models can be used for classification but not for regression.
  • Support Vector Machine models can be used for regression but not for classification.

Q3. (True/False) A large C term will penalize the SVM coefficients more heavily.

  • False
  • True

Supervised Machine Learning: Classification Week 03 Quiz Answers

Quiz 01: End of Module

Q1. These are all characteristics of decision trees, EXCEPT:

  • They segment data based on features to predict results
  • They split nodes into leaves
  • They can be used for either classification or regression
  • They have well rounded decision boundaries

Q2. Decision trees used as classifiers compute the value assigned to a leaf as a ratio: the number of observations of one class divided by the total number of observations in that leaf.

E.g., the number of customers younger than 50 years old divided by the total number of customers in that leaf.

How are leaf values calculated for regression decision trees?

  • average value of the predicted variable
  • median value of the predicted variable
  • mode value of the predicted variable
  • weighted average value of the predicted variable
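The regression case can be illustrated in two lines: whichever observations land in a leaf, the tree predicts the average of their target values (a toy example with made-up targets).

```python
# Targets of the training observations that fall into one leaf (toy values).
leaf_targets = [12.0, 15.0, 11.0, 14.0]

# A regression tree predicts the average target of the leaf.
leaf_value = sum(leaf_targets) / len(leaf_targets)
print(leaf_value)  # 13.0
```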

Q3. These are two main advantages of decision trees:

  • They do not tend to overfit and are not sensitive to changes in data
  • They are very visual and easy to interpret
  • They are resistant to outliers and output scaled features
  • They output both parameters and significance levels

Quiz 02: End of Module

Q1. (True/False) The term Bagging stands for bootstrap aggregating.

  • True
  • False
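The two halves of the name can be sketched directly: "bootstrap" means drawing each training set with replacement, and "aggregating" means averaging the resulting models' predictions. A minimal illustration, with each "model" standing in as the mean of its sample:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) observations with replacement: the 'bootstrap' step."""
    return [rng.choice(data) for _ in data]

rng = random.Random(0)
data = list(range(10))
samples = [bootstrap_sample(data, rng) for _ in range(3)]

# 'Aggregating': average the predictions of the models fit on each sample.
prediction = sum(sum(s) / len(s) for s in samples) / len(samples)
print(prediction)
```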

Q2. This is the best way to choose the number of trees to build in a Bagging ensemble.

  • Choose a large number of trees, typically above 100
  • Prioritize training error metrics over the out-of-bag sample
  • Tune the number of trees as a hyperparameter that needs to be optimized
  • Choose a number of trees past the point of diminishing returns

Q3. Which type of Ensemble modeling approach is NOT a special case of model averaging?

  • Random Forest methods
  • Boosting methods
  • The Pasting method of Bootstrap aggregation
  • The Bagging method of Bootstrap aggregation

Q4. Which ensemble model requires you to look at out-of-bag error?

  • Logistic Regression.
  • Out of Bag Regression
  • Random Forest
  • Stacking
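Out-of-bag error exists because each bootstrap sample leaves some observations out: with n draws with replacement, roughly a (1 − 1/n)ⁿ ≈ 1/e ≈ 36.8% fraction of observations never appear in a given sample, and those can serve as a built-in validation set. A quick simulation of that fraction:

```python
import random

rng = random.Random(42)
n = 10_000
# One bootstrap draw: the set of distinct indices that were sampled.
in_bag = {rng.randrange(n) for _ in range(n)}

oob_fraction = 1 - len(in_bag) / n
print(round(oob_fraction, 3))  # close to 1/e ~ 0.368
```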

Q5. What is the main condition for using stacking as an ensemble method?

  • Models need to be parametric
  • Models need to be nonparametric
  • Models need to output residual values for each class
  • Models need to output predicted probabilities

Q6. This tree ensemble method only uses a subset of the features for each tree:

  • Random Forest
  • Adaboost
  • Bagging
  • Stacking

Q7. Order these tree ensembles from most randomness to least randomness:

  • Random Forest, Random Trees, Bagging
  • Bagging, Random Forest, Random Trees
  • Random Trees, Random Forest, Bagging
  • Random Forest, Bagging, Random Trees

Q8. This is an ensemble model that does not use bootstrapped samples to fit the base trees, takes residuals into account, and fits the base trees iteratively:

  • Random Trees
  • Boosting
  • Bagging
  • Random Forest

Supervised Machine Learning: Classification Week 04 Quiz Answers

Quiz: End of Module

Q1. Which of the following statements about Downsampling is TRUE?

  • Downsampling is likely to decrease Recall.
  • Downsampling is likely to decrease Precision.
  • Downsampling preserves all the original observations.
  • Downsampling results in excessive focus on the more frequently-occurring class.

Q2. Which of the following statements about Random Upsampling is TRUE?

  • Random Upsampling preserves all original observations.
  • Random Upsampling will generally lead to a higher F1 score.
  • Random Upsampling generates observations that were not part of the original data.
  • Random Upsampling results in excessive focus on the more frequently-occurring class.

Q3. Which of the following statements about Synthetic Upsampling is TRUE?

  • Synthetic Upsampling will generally lead to a higher F1 score.
  • Synthetic Upsampling uses fewer hyperparameters than Random Upsampling.
  • Synthetic Upsampling generates observations that were not part of the original data.
  • Synthetic Upsampling results in excessive focus on the more frequently-occurring class.
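The key distinction between random and synthetic upsampling can be sketched in a few lines: random upsampling duplicates existing minority observations, while synthetic upsampling (SMOTE-style interpolation between a minority point and one of its neighbors) creates points that were never observed. A minimal sketch with toy minority points:

```python
import random

rng = random.Random(0)

def synthetic_point(x, neighbor):
    """SMOTE-style interpolation between a minority observation and a
    neighbor; the result is a brand-new, never-observed point."""
    lam = rng.random()  # random position along the segment
    return tuple(a + lam * (b - a) for a, b in zip(x, neighbor))

minority = [(1.0, 2.0), (1.5, 2.5)]
new_obs = synthetic_point(minority[0], minority[1])
print(new_obs, new_obs in minority)  # a new point; False
```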
