## Supervised Machine Learning: Regression and Classification Quiz Answers (All Weeks)

### Supervised Machine Learning: Regression and Classification Week 01 Quiz Answers

#### Quiz 1: Supervised vs. unsupervised learning Quiz Answers

Q1. Which are the two common types of supervised learning? (Choose two)

- **Regression**
- **Classification**
- Clustering

Q2. Which of these is a type of unsupervised learning?

- **Clustering**
- Classification
- Regression

#### Quiz 2: Regression Quiz Answers

Q1. For linear regression, the model is f_{w,b}(x) = wx + b.

Which of the following are the inputs, or features, that are fed into the model and with which the model is expected to make a prediction?

- **x**
- m
- w and b
- (x, y)

Q2. For linear regression, if you find parameters w and b so that J(w,b) is very close to zero, what can you conclude?

- This is never possible; there must be a bug in the code.
- **The selected values of the parameters w and b cause the algorithm to fit the training set really well.**
- The selected values of the parameters w and b cause the algorithm to fit the training set really poorly.
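To see why a near-zero cost means a good fit, here is a minimal NumPy sketch of the squared-error cost J(w,b); the training data is made up for illustration:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared-error cost J(w, b) = (1 / (2m)) * sum((f_{w,b}(x) - y)^2)."""
    m = len(x)
    return np.sum((w * x + b - y) ** 2) / (2 * m)

# Made-up training data that lies exactly on the line y = 2x + 1.
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = 2 * x_train + 1

print(compute_cost(x_train, y_train, w=2.0, b=1.0))  # 0.0: a perfect fit
print(compute_cost(x_train, y_train, w=0.5, b=0.0))  # far from zero: poor fit
```

When the parameters reproduce every training label exactly, every residual is zero and so is J; poorly chosen parameters give a large cost.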

#### Quiz 3: Train the model with gradient descent Quiz Answers

Q1. Gradient descent is an algorithm for finding values of parameters w and b that minimize the cost function J.

When \frac{\partial J(w,b)}{\partial w} is a negative number (less than zero), what happens to w after one update step?

- It is not possible to tell if w will increase or decrease.
- **w increases.**
- w stays the same.
- w decreases.

Q2. For linear regression, what is the update step for parameter b?

- b = b - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (f_{w,b}(x^{(i)}) - y^{(i)})x^{(i)}
- **b = b - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (f_{w,b}(x^{(i)}) - y^{(i)})**
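The two gradients can be sketched side by side in NumPy; note that the x^{(i)} factor appears only in the gradient for w, not for b. The data here is made up so that gradient descent should recover the line y = 2x:

```python
import numpy as np

def gradient_step(x, y, w, b, alpha):
    """One simultaneous gradient-descent update of w and b for linear regression."""
    m = len(x)
    error = (w * x + b) - y                      # f_{w,b}(x^(i)) - y^(i)
    dj_dw = np.sum(error * x) / m                # gradient for w: has the x^(i) factor
    dj_db = np.sum(error) / m                    # gradient for b: no x^(i) factor
    return w - alpha * dj_dw, b - alpha * dj_db

# Made-up data lying on y = 2x, so descent should find w near 2 and b near 0.
x_train = np.array([1.0, 2.0, 3.0])
y_train = np.array([2.0, 4.0, 6.0])
w, b = 0.0, 0.0
for _ in range(1000):
    w, b = gradient_step(x_train, y_train, w, b, alpha=0.1)
print(w, b)  # close to 2 and 0
```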

### Week 2 Quiz Answers

#### Quiz 1: Multiple Linear Regression Quiz Answers

Q1. In the training set below, what is x_4^{(3)}? Please type in the number below (this is an integer such as 123, no decimal points).

Answer: 125

Q2. Which of the following are the potential benefits of vectorization? Please choose the best option.

- It makes your code run faster
- It can make your code shorter
- It allows your code to run more easily on parallel computing hardware
- **All of the above**
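A quick way to see the speed benefit is to time a plain Python loop against a single `np.dot` call on the same dot product. Timings vary by machine; this is only an illustrative sketch:

```python
import time

import numpy as np

n = 1_000_000
rng = np.random.default_rng(42)
w = rng.random(n)
x = rng.random(n)

# Unvectorized: one multiply-add per Python-level iteration.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += w[i] * x[i]
loop_time = time.perf_counter() - start

# Vectorized: a single np.dot call that can exploit SIMD and parallel hardware.
start = time.perf_counter()
total_vec = np.dot(w, x)
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.5f}s")
```

Both compute the same number; the vectorized call is also shorter to write, which covers all three listed benefits.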

Q3. True/False? To make gradient descent converge about twice as fast, a technique that almost always works is to double the learning rate \alpha.

- **False**
- True

#### Quiz 2: Gradient descent in practice Quiz Answers

Q1. Which of the following is a valid step used during feature scaling?

- **Subtract the mean (average) from each value and then divide by the (max - min).**
- Add the mean (average) to each value and then divide by the (max - min).

Q2. Suppose a friend ran gradient descent three separate times with three choices of the learning rate \alpha and plotted the learning curves for each (cost J for each iteration).

For which case, A or B, was the learning rate \alpha likely too large?

- Both Cases A and B
- **Case B only**
- Case A only
- Neither Case A nor B

Q3. Of the circumstances below, for which is feature scaling particularly helpful?

- **Feature scaling is helpful when one feature is much larger (or smaller) than another feature.**
- Feature scaling is helpful when all the features in the original data (before scaling is applied) range from 0 to 1.

Q4. You are helping a grocery store predict its revenue, and have data on its items sold per week and price per item. What could be a useful engineered feature?

- **For each product, calculate the number of items sold times the price per item.**
- For each product, calculate the number of items sold divided by the price per item.

Q5. True/False? With polynomial regression, the predicted values f_{w,b}(x) do not necessarily have to be a straight line (or linear) function of the input feature x.

- **True**
- False
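One way to see this: engineer an x^2 feature, so the model stays linear in its parameters but traces a curve in x. A sketch using NumPy's least-squares solver on a noise-free quadratic (all constants are illustrative):

```python
import numpy as np

# A model that is linear in its parameters can still be a curve in x
# once we engineer an x^2 feature.
x = np.linspace(0.0, 4.0, 50)
y = 1.0 + 0.5 * x ** 2                    # noise-free quadratic target

# Design matrix with columns [1, x, x^2]; fit by ordinary least squares.
X = np.column_stack([np.ones_like(x), x, x ** 2])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # approximately [1.0, 0.0, 0.5], recovering the curve
```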

### Week 3 Quiz Answers

#### Quiz 1: Classification with Logistic Regression Quiz Answers

Q1. Which is an example of a classification task?

- Based on a patient’s blood pressure, determine how much blood pressure medication (a dosage measured in milligrams) the patient should be prescribed.
- **Based on the size of each tumor, determine if each tumor is malignant (cancerous) or not.**
- Based on a patient’s age and blood pressure, determine how much blood pressure medication (measured in milligrams) the patient should be prescribed.

Q2. Recall the sigmoid function is g(z) = \frac{1}{1+e^{-z}}. If z is a large positive number, then:

- g(z) will be near zero (0)
- g(z) is near negative one (-1)
- g(z) will be near 0.5
- **g(z) is near one (1)**
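A quick numeric check of the sigmoid's limiting behavior, in plain NumPy:

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^(-z))"""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(100.0))   # large positive z: g(z) is very close to 1
print(sigmoid(-100.0))  # large negative z: g(z) is very close to 0
print(sigmoid(0.0))     # exactly 0.5 at z = 0
```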

Q3. A cat photo classification model predicts 1 if it’s a cat, and 0 if it’s not a cat. For a particular photograph, the logistic regression model outputs g(z)g(z) (a number between 0 and 1). Which of these would be a reasonable criteria to decide whether to predict if it’s a cat?

- Predict it is a cat if g(z) < 0.7
- Predict it is a cat if g(z) = 0.5
- Predict it is a cat if g(z) < 0.5
- **Predict it is a cat if g(z) >= 0.5**

Q4. True/False? No matter what features you use (including if you use polynomial features), the decision boundary learned by logistic regression will be a linear decision boundary.

- **False**
- True
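With engineered polynomial features such as x_1^2 and x_2^2, the boundary w_1 x_1^2 + w_2 x_2^2 + b = 0 is an ellipse in the original feature space, not a straight line. A sketch with synthetic circular data and hand-rolled gradient descent (all constants here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: points inside the unit circle are class 1, outside are class 0.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float)

# Engineered polynomial features [x1^2, x2^2]: the learned boundary
# w1*x1^2 + w2*x2^2 + b = 0 is an ellipse in (x1, x2), not a straight line.
P = X ** 2
w = np.zeros(2)
b = 0.0
alpha = 0.5
for _ in range(5000):
    err = sigmoid(P @ w + b) - y          # f_{w,b}(x) - y
    w -= alpha * (P.T @ err) / len(y)
    b -= alpha * err.mean()

accuracy = ((sigmoid(P @ w + b) >= 0.5) == (y == 1.0)).mean()
print(accuracy)  # no straight line in (x1, x2) could separate these classes
```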

#### Quiz 2: Cost function for logistic regression Quiz Answers

Q1. In this lecture series, “cost” and “loss” have distinct meanings. Which one applies to a single training example?

- **Loss**
- Cost
- Both Loss and Cost
- Neither Loss nor Cost

#### Quiz 3: Gradient descent for logistic regression

Q1. Which of the following two statements is a more accurate statement about gradient descent for logistic regression?

- **The update steps look like the update steps for linear regression, but the definition of f_{\vec{w},b}(\mathbf{x}^{(i)}) is different.**
- The update steps are identical to the update steps for linear regression.
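The shared update form can be sketched by passing the model function f in as a parameter; only the definition of f changes between the two algorithms. The 1-D data below is made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(x, y, w, b, alpha, f):
    """One gradient-descent update; the formula is the same for both models."""
    m = len(x)
    error = f(x, w, b) - y                       # only f differs between the two
    w_new = w - alpha * np.sum(error * x) / m
    b_new = b - alpha * np.sum(error) / m
    return w_new, b_new

linear_f = lambda x, w, b: w * x + b             # linear regression prediction
logistic_f = lambda x, w, b: sigmoid(w * x + b)  # logistic regression prediction

# Tiny 1-D classification set: the label flips between x = 1 and x = 2.
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.0, 0.0, 1.0, 1.0])
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = gradient_step(x_train, y_train, w, b, alpha=0.1, f=logistic_f)
print(sigmoid(w * 0.0 + b), sigmoid(w * 3.0 + b))  # low prob at x=0, high at x=3
```

Swapping in `linear_f` would run linear regression with the exact same `gradient_step` code.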

#### Quiz 4: The problem of overfitting

Q1. Which of the following can address overfitting?

- Remove a random set of training examples
- **Collect more training data**
- **Select a subset of the more relevant features.**
- **Apply regularization**

Q2. You fit logistic regression with polynomial features to a dataset, and your model looks like this.

What would you conclude? (Pick one)

- The model has a high bias (underfit). Thus, adding data is likely to help.
- **The model has a high variance (overfit). Thus, adding data is likely to help.**
- The model has a high variance (overfit). Thus, adding data is, by itself, unlikely to help much.
- The model has a high bias (underfit). Thus, adding data is, by itself, unlikely to help much.

Q3. Suppose you have a regularized linear regression model. If you increase the regularization parameter \lambda, what do you expect to happen to the parameters w_1, w_2, \dots, w_n?

- This will increase the size of the parameters w_1, w_2, \dots, w_n
- **This will reduce the size of the parameters w_1, w_2, \dots, w_n**
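The shrinking effect of \lambda can be checked numerically with the closed-form ridge (L2-regularized) solution. This sketch regularizes all weights (the course leaves b unregularized) and uses synthetic data:

```python
import numpy as np

# Closed-form ridge (L2-regularized) regression:
#   w = (X^T X + lambda * I)^(-1) X^T y
# Larger lambda shrinks the learned parameters toward zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([3.0, -2.0, 1.0]) + rng.normal(scale=0.1, size=50)

def ridge_weights(X, y, lam):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

lams = (0.0, 10.0, 1000.0)
norms = [np.linalg.norm(ridge_weights(X, y, lam)) for lam in lams]
for lam, norm in zip(lams, norms):
    print(f"lambda={lam:7.1f}  ||w|| = {norm:.4f}")  # ||w|| shrinks as lambda grows
```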

#### Get All Course Quiz Answers of Machine Learning Specialization

Supervised Machine Learning: Regression and Classification Quiz Answers

Advanced Learning Algorithms Coursera Quiz Answers

Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers