
Supervised Machine Learning: Regression Quiz Answers

Supervised Machine Learning: Regression Week 01 Quiz Answers

Quiz 01: Check for Understanding

Q1. Select the most INACCURATE statement regarding the definition of Machine Learning:

  • Machine Learning allows computers to learn from data
  • Machine Learning allows computers to infer predictions for new data
  • Machine Learning is a subset of Artificial Intelligence
  • Machine Learning is automated and requires no programming

Q2. This is the type of Machine Learning that uses both data with labeled outcomes and data without labeled outcomes:

  • Supervised Machine Learning
  • Unsupervised Machine Learning
  • Mixed Machine Learning
  • Semi-Supervised Machine Learning

Q3. Predicting total revenue, number of customers, and percentage of returning customers are examples of:

  • classification
  • regression

Q4. Predicting payment default, whether a transaction is fraudulent, and whether a customer will be part of the top 5% of spenders in a given year, are examples of:

  • classification
  • regression

Quiz 02: Check for Understanding

Q1. Which statement about evaluating a Machine Learning model is the most accurate?

  • Model selection involves choosing a model that minimizes the cost function.
  • Model estimation involves choosing parameters that minimize the cost function.
  • Model estimation involves choosing a cost function that can be compared across models.
  • Model selection involves choosing modeling parameters that minimize in-sample validation error.

Q2. (True/False) The unadjusted R² value from estimating a linear regression model will almost always increase if more features are added.

  • True
  • False
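
For intuition, here is a minimal sketch (assuming scikit-learn and NumPy; the data and seed are illustrative) of how the unadjusted training R² behaves as noise features are added to a fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 10 candidate features, mostly noise
y = 3 * X[:, 0] + rng.normal(size=100)  # the outcome depends on one feature only

for n_features in (1, 3, 10):
    lr = LinearRegression().fit(X[:, :n_features], y)
    print(n_features, round(lr.score(X[:, :n_features], y), 4))
# The unadjusted R^2 on the training data never decreases as columns are added.
```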

Q3. (True/False) The Total Sum of Squares (TSS) can be used to select the best-fitting regression model.

  • True
  • False

Q4. (True/False) The Sum of Squared Errors (SSE) can be used to select the best-fitting regression model.

  • True
  • False
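
The two quantities differ in one key way: TSS depends only on the observed outcomes, so it is identical for every model fit to the same data, while SSE depends on the model's predictions. A small illustrative sketch with hypothetical numbers:

```python
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])       # observed outcomes
y_hat = np.array([2.5, 5.5, 6.5, 9.5])   # hypothetical model predictions

sse = np.sum((y - y_hat) ** 2)           # SSE: changes with the model's fit
tss = np.sum((y - y.mean()) ** 2)        # TSS: fixed for a given y
r2 = 1 - sse / tss                       # R^2 links the two
print(sse, tss, r2)                      # -> 1.0 20.0 0.95
```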

End of Module Quiz

Q1. You can use supervised machine learning for all of the following examples, EXCEPT:

  • Segment customers by their demographics.
  • Predict the number of customers that will visit a store on a given week.
  • Predict the probability of a customer returning to a store.
  • Interpret the main drivers that determine if a customer will return to a store.

Q2. The autocorrect on your phone is an example of:

  • Unsupervised learning
  • Supervised learning
  • Semi-supervised learning
  • Reinforcement learning

Q3. This is the type of Machine Learning that uses both data with labeled outcomes and data without labeled outcomes:

  • Supervised Machine Learning
  • Unsupervised Machine Learning
  • Mixed Machine Learning
  • Semi-Supervised Machine Learning

Q4. This option describes a way of turning a regression problem into a classification problem:

  • Create a new variable that flags 1 for above a certain value and 0 otherwise
  • Use outlier treatment
  • Use missing value handling
  • Create a new variable that uses autoencoding to transform a continuous outcome into categorical
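
A minimal sketch of the flagging approach described in the first option (the threshold and spend values here are hypothetical):

```python
import numpy as np

spend = np.array([120.0, 980.0, 450.0, 610.0])  # hypothetical yearly spend
threshold = 500.0                               # arbitrary cutoff for the flag
top_spender = (spend > threshold).astype(int)   # 1 above the cutoff, 0 otherwise
print(top_spender)                              # -> [0 1 0 1]
```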

Q5. This is the syntax you need to predict new data after you have trained a linear regression called LR:

  • LR=predict(X_test)
  • LR.predict(X_test)
  • LR.predict(LR, X_test)
  • predict(LR, X_test)
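
For reference, a minimal sketch of scikit-learn's fit-then-predict pattern on toy data, with names mirroring the question:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # toy training features
y = np.array([2.0, 4.0, 6.0, 8.0])          # toy training outcomes
X_test = np.array([[5.0], [6.0]])           # stands in for new, unseen data

LR = LinearRegression().fit(X, y)  # train the model first
print(LR.predict(X_test))          # -> [10. 12.]
```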

Q6. All of these options are useful error measures to compare regressions:

  • SSE
  • R squared
  • TSS
  • ROC index

Q7. (True/False) It is less concerning to treat a Machine Learning model as a black box for prediction purposes, compared to interpretation purposes.

  • True
  • False

Supervised Machine Learning: Regression Week 02 Quiz Answers

Quiz 01: Check for Understanding

Q1. Another common term for the testing split is:

  • Training split
  • Validation split
  • Corroboration split
  • Cross validation split

Q2. Complete the following sentence: The training data is used to fit the model, while the test data is used to:

  • measure the parameters and hyperparameters of the model
  • tweak the model hyperparameters
  • tweak the model parameters
  • measure error and performance of the model

Q3. Select the option with the correct syntax to obtain the data splits needed to train a model whose test split is a third of your available data.

  • X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
  • X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
  • X_train, y_test = train_test_split(X, y, test_size=0.33)
  • X_train, y_test = train_test_split(X, y, test_size=0.5)
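
A quick sketch of what the call returns; the 30-row dataset is hypothetical, but it shows test_size=0.33 reserving roughly a third of the rows:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(90).reshape(30, 3), np.arange(30)  # 30 toy observations
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
print(len(X_train), len(X_test))  # -> 20 10
```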

Quiz 02: Check for Understanding

Q1. Which statement about K-fold cross-validation below is TRUE?

  • Each subsample in K-fold cross-validation has at least k observations.
  • Each subsample in K-fold cross-validation has at least k-1 observations.
  • Each of the k subsamples in K-fold cross-validation is used as a training set.
  • Each of the k subsamples in K-fold cross-validation is used as a test set.
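
A short sketch of the mechanics with scikit-learn's KFold on toy data: each of the k subsamples is held out exactly once, with the remaining k-1 folds used for training.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy observations
kf = KFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    print(f"fold {fold}: train rows={len(train_idx)}, test rows={len(test_idx)}")
# Each of the 5 subsamples serves as the held-out test set exactly once.
```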

Q2. (True/False) For a dataset with M observations and N features, Leave-one-out cross-validation is equivalent to k-fold cross-validation with k = M - 1.

  • True
  • False

Q3. If a low-complexity model is underfitting during estimation, which of the following is MOST LIKELY true (holding the model constant)?

  • K-fold cross-validation will still lead to underfitting, for any k.
  • K-fold cross-validation with a small k will reduce or eliminate underfitting.
  • K-fold cross-validation with a large k will reduce or eliminate underfitting.
  • None of the above.

Q4. Which of the following statements about a high-complexity model in a linear regression setting is TRUE?

  • Cross-validation with a small k will reduce or eliminate overfitting.
  • A high variance of parameter estimates across cross-validation subsamples indicates likely overfitting.
  • A low variance of parameter estimates across cross-validation subsamples indicates likely overfitting.
  • Cross-validation with a large k will reduce or eliminate overfitting.

Quiz 03: Check for Understanding

Q1. What is the main goal of adding polynomial features to a linear regression?

  • Remove the linearity of the regression and turn it into a polynomial model.
  • Capture the relation of the outcome with features of higher order.
  • Increase the interpretability of a black box model.
  • Ensure similar results across all folds when using K-fold cross-validation.

Q2. Which are the most common sklearn methods to add polynomial features to your data?

  • polyFeat.add and polyFeat.transform
  • polyFeat.add and polyFeat.fit
  • polyFeat.fit and polyFeat.transform
  • polyFeat.transform
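
A minimal sketch of the fit/transform pair, reusing the quiz's polyFeat name (the degree and data are illustrative):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])
polyFeat = PolynomialFeatures(degree=2, include_bias=False)
polyFeat.fit(X)                 # learn the output feature layout
X_poly = polyFeat.transform(X)  # -> [x1, x2, x1^2, x1*x2, x2^2]
print(X_poly)                   # [[2. 3. 4. 6. 9.]]
```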

End of Module Quiz

Q1. The main purpose of splitting your data into training and test sets is:

  • To improve accuracy
  • To avoid overfitting
  • To improve regularization
  • To improve cross-validation and overfitting

Q2. (True/False) For a dataset with M observations and N features, Stratified cross-validation is equivalent to k-fold cross-validation, where k = N - 1.

  • True
  • False

Q3. (True/False) A linear regression model is being tested by cross-validation. Relative to K-fold cross-validation, stratified cross-validation (with the same k) will likely increase the variance of estimated parameters.

  • True
  • False

Q4. In K-fold cross-validation, how will increasing k affect the variance (across subsamples) of estimated model parameters?

  • Increasing k will not affect the variance of estimated parameters.
  • Increasing k will usually reduce the variance of estimated parameters.
  • Increasing k will usually increase the variance of estimated parameters.
  • Increasing k will increase the variance of estimated parameters if models are underfit, but reduce it if models are overfit.
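
An illustrative sketch on synthetic data (not a proof): larger k means each training fold is bigger, so coefficient estimates across folds tend to vary less.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=200)

for k in (3, 10):
    coefs = []
    for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=1).split(X):
        coefs.append(LinearRegression().fit(X[train_idx], y[train_idx]).coef_)
    # Spread of each coefficient estimate across the k training folds:
    print(k, np.std(coefs, axis=0).round(4))
```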

Supervised Machine Learning: Regression Week 03 Quiz Answers

Quiz 01: Check for Understanding

Q1. Which of the following statements about model complexity is TRUE?

  • Higher model complexity leads to a lower chance of overfitting.
  • Higher model complexity leads to a higher chance of overfitting.
  • Reducing the number of features while adding feature interactions leads to a lower chance of overfitting.
  • Reducing the number of features while adding feature interactions leads to a higher chance of overfitting.

Q2. Which of the following statements about model errors is TRUE?

  • Underfitting is characterized by lower errors in both training and test samples.
  • Underfitting is characterized by higher errors in both training and test samples.
  • Underfitting is characterized by higher errors in training samples and lower errors in test samples.
  • Underfitting is characterized by lower errors in training samples and higher errors in test samples.

Q3. Which of the following statements about regularization is TRUE?

  • Regularization always reduces the number of selected features.
  • Regularization increases the likelihood of overfitting relative to training data.
  • Regularization decreases the likelihood of overfitting relative to training data.
  • Regularization performs feature selection without a negative impact on the likelihood of overfitting relative to the training data.

Q4. BOTH Ridge regression and Lasso regression:

  • do not adjust the cost function used to estimate a model.
  • add a term to the loss function proportional to a regularization parameter.
  • add a term to the loss function proportional to the square of parameter coefficients.
  • add a term to the loss function proportional to the absolute value of parameter coefficients.

Q5. Compared with Lasso regression (assuming similar implementation), Ridge regression is:

  • less likely to overfit to training data.
  • more likely to overfit to training data.
  • less likely to set feature coefficients to zero.
  • more likely to set feature coefficients to zero.
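
A small sketch contrasting the two penalties on the same synthetic data. Roughly, Ridge adds alpha · Σ(coefᵢ²) (L2) to the loss and shrinks coefficients, while Lasso adds alpha · Σ|coefᵢ| (L1) and can zero them out entirely; the alpha values below are arbitrary, and scikit-learn's exact loss scaling differs slightly between the two estimators.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] + rng.normal(size=100)  # only the first feature matters

print(Ridge(alpha=1.0).fit(X, y).coef_.round(3))  # small but nonzero weights
print(Lasso(alpha=0.5).fit(X, y).coef_.round(3))  # irrelevant weights hit exactly 0
```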

End of Module Quiz

Q1. (True/False) The variance of a model is determined by the degree of irreducible error.

  • True
  • False

Q2. (True/False) As more variables are added to a model, both its complexity and its variance generally increase.

  • True
  • False

Q3. (True/False) Model adjustments that decrease bias also decrease variance, leading to a bias-variance tradeoff.

  • True
  • False

Q4. Which of the following statements about scaling features prior to regularization is TRUE?

  • The scale of features must be the same to implement L1 or L2 regularization.
  • Features should rarely or never be scaled prior to implementing regularization.
  • The larger a feature’s scale, the more likely its estimated impact will be influenced by regularization.
  • The smaller a feature’s scale, the more likely its estimated impact will be influenced by regularization.
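
A common pattern is to standardize features before regularizing, so the penalty weighs each coefficient on a comparable footing. A minimal sketch with synthetic data of wildly mixed scales:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([1.0, 100.0, 0.01])  # mixed scales
y = X.sum(axis=1) + rng.normal(size=100)

# Scaling happens inside the pipeline, before the penalized fit.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
print(model.named_steps["ridge"].coef_.round(3))
```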

Q5. Which of the following statements about model complexity is TRUE?

  • Higher model complexity leads to a lower chance of overfitting.
  • Higher model complexity leads to a higher chance of overfitting.
  • Reducing the number of features while adding feature interactions leads to a lower chance of overfitting.
  • Reducing the number of features while adding feature interactions leads to a higher chance of overfitting.

Q6. (True/False) A model with high variance is characterized by sensitivity to small changes in input data.

  • True
  • False

Q7. Which of the following statements about Elastic Net regression is TRUE?

  • Elastic Net combines L1 and L2 regularization.
  • Elastic Net does not use L1 or L2 regularization.
  • Elastic Net uses L2 regularization, as with Ridge regression.
  • Elastic Net uses L1 regularization, as with Ridge regression.
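
A minimal sketch of scikit-learn's l1_ratio knob, which mixes the two penalties: 1.0 is pure L1 (Lasso-like), 0.0 is pure L2 (Ridge-like), and values in between combine both (the alpha and ratio below are illustrative).

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] + rng.normal(size=100)

enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)
print(enet.coef_.round(3))  # the L1 part can still zero out weak features
```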
