Neural Networks and Deep Learning Coursera Quiz Answers

Get All Weeks Neural Networks and Deep Learning Coursera Quiz Answers

Neural Networks and Deep Learning Week 01 Quiz Answers

Q1. What does the analogy “AI is the new electricity” refer to?

[expand title=View Answer] Similar to electricity starting about 100 years ago, AI is transforming multiple industries. [/expand]

Q2. Which of these are reasons for Deep Learning recently taking off? (Check the three options that apply.)

[expand title=View Answer]

1. We have access to a lot more computational power.
2. We have access to a lot more data.
3. Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition.

[/expand]

Q3. Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that apply.)

[expand title=View Answer]

Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.

Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware).

Faster computation can help speed up how long a team takes to iterate to a good idea.

[/expand]

Q4. When an experienced deep learning engineer works on a new problem, they can usually use insight from previous problems to train a good model on the first try, without needing to iterate multiple times through different models. True/False?

[expand title=View Answer] False [/expand]

Q5. Which one of these plots represents a ReLU activation function?

[expand title=View Answer] Figure 1: the plot that is 0 for all negative inputs and increases linearly for positive inputs, i.e. ReLU(x) = max(0, x). [/expand]

Q6. Images for cat recognition are an example of “structured” data because they are represented as a structured array in a computer. True/False?

[expand title=View Answer] False [/expand]

Q7. A demographic dataset with statistics on different cities’ populations, GDP per capita, and economic growth is an example of “unstructured” data because it contains data coming from different sources. True/False?

[expand title=View Answer] False [/expand]

Q8. Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French? (Check all that apply.)

[expand title=View Answer] It can be trained as a supervised learning problem.
It is applicable when the input/output is a sequence (e.g., a sequence of words).[/expand]

Q9. In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis) represent?

[expand title=View Answer] The x-axis (horizontal axis) is the amount of data.

The y-axis (vertical axis) is the performance of the algorithm.
[/expand]

Q10. Assuming the trends described in the previous question’s figure are accurate (and I hope you got the axis labels right), which of the following are true? (Check all that apply.)

[expand title=View Answer] Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly.

Increasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly.
[/expand]

Neural Networks and Deep Learning Week 02 Quiz Answers

Q1. What does a neuron compute?

[expand title=View Answer] A neuron computes a linear function (z = Wx + b) followed by an activation function [/expand]
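
To make the answer concrete, here is a minimal numpy sketch of a single neuron; the sigmoid choice and the 3-dimensional input are illustrative assumptions, not part of the question:

import numpy as np

def neuron(x, w, b):
    # Linear step z = w.T x + b, followed by a sigmoid activation.
    z = np.dot(w.T, x) + b
    return 1 / (1 + np.exp(-z))

x = np.random.randn(3, 1)  # hypothetical 3-dimensional input
w = np.random.randn(3, 1)  # weights
b = 0.5                    # bias
print(neuron(x, w, b))     # a (1, 1) activation strictly between 0 and 1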

Q3. Suppose img is a (32,32,3) array, representing a 32×32 image with 3 color channels red, green and blue. How do you reshape this into a column vector?

[expand title=View Answer] x = img.reshape((32*32*3, 1)) [/expand]
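
As a sanity check, the reshape below produces a (3072, 1) column vector:

import numpy as np

img = np.random.randn(32, 32, 3)   # stand-in for the image in the question
x = img.reshape((32 * 32 * 3, 1))  # flatten all pixels into one column
print(x.shape)                     # (3072, 1)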

Q4. Consider the two following random arrays a and b:

a = np.random.randn(2, 3) # a.shape = (2, 3)

b = np.random.randn(2, 1) # b.shape = (2, 1)

c = a + b

What will be the shape of c?

[expand title=View Answer] c.shape = (2, 3) [/expand]
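
Running the snippet confirms the broadcasting behavior:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b       # b is broadcast across the 3 columns of a
print(c.shape)  # (2, 3)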

Q5. Consider the two following random arrays a and b:

a = np.random.randn(4, 3) # a.shape = (4, 3)

b = np.random.randn(3, 2) # b.shape = (3, 2)

c = a*b

What will be the shape of c?

[expand title=View Answer] The computation cannot happen because the shapes (4, 3) and (3, 2) do not match, so the element-wise multiplication raises an error. [/expand]
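
Running it shows the failure directly:

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b  # element-wise multiply: (4, 3) and (3, 2) cannot broadcast
except ValueError as e:
    print(e)   # operands could not be broadcast together with shapes (4,3) (3,2)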

Q6. Recall that X = [x^{(1)} x^{(2)} … x^{(m)}]. What is the dimension of X?

[expand title=View Answer] (n_x, m) [/expand]

Q7. Recall that np.dot(a,b) performs a matrix multiplication on a and b, whereas a*b performs an element-wise multiplication.

Consider the two following random arrays a and b:

a = np.random.randn(12288, 150) # a.shape = (12288, 150)

b = np.random.randn(150, 45) # b.shape = (150, 45)

c = np.dot(a, b)

What is the shape of c?

[expand title=View Answer] c.shape = (12288, 45) [/expand]
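
A quick check of the matrix product’s shape:

import numpy as np

a = np.random.randn(12288, 150)
b = np.random.randn(150, 45)
c = np.dot(a, b)  # inner dimensions (150) contract, leaving (12288, 45)
print(c.shape)    # (12288, 45)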

Q8. Consider the following code snippet:

# a.shape = (3, 4)

# b.shape = (4, 1)

for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] + b[j]

How do you vectorize this?

[expand title=View Answer] c = a + b.T [/expand]
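
Here is a runnable comparison of the loop and the vectorized form (b[j, 0] is used to pull out the scalar explicitly):

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i, j] = a[i, j] + b[j, 0]

c_vec = a + b.T  # b.T has shape (1, 4) and broadcasts over a's 3 rows
print(np.allclose(c_loop, c_vec))  # True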

Q9. Consider the following code:

a = np.random.randn(3, 3)

b = np.random.randn(3, 1)

c = a*b

What will be c? (If you’re not sure, feel free to run this in Python to find out.)

  1. This will multiply a 3×3 matrix a with a 3×1 vector, thus resulting in a 3×1 vector. That is, c.shape = (3, 1).
  2. This will invoke broadcasting, so b is copied three times to become (3, 3), and * is an element-wise product, so c.shape = (3, 3).
  3. This will invoke broadcasting, so b is copied three times to become (3, 3), and * invokes a matrix multiplication of two 3×3 matrices, so c.shape = (3, 3).
  4. It will lead to an error since you cannot use “*” to operate on these two matrices. You need to use np.dot(a,b) instead.

[expand title=View Answer] Option 2: broadcasting copies b across the columns to become (3, 3), and * is an element-wise product, so c.shape = (3, 3). [/expand]
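
A quick check in numpy confirms option 2:

import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b       # b broadcasts across the columns; * is element-wise, not matrix, multiplication
print(c.shape)  # (3, 3)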

Q10. Consider the following computation graph.

What is the output J?

[expand title=View Answer] J = (a – 1) * (b + c) [/expand]
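
The diagram is not reproduced here, but in the usual version of this graph (assuming intermediate nodes u = a*b, v = a*c, w = b + c, and output J = u + v - w), the output factors as stated:

import numpy as np

a, b, c = np.random.randn(3)  # three arbitrary scalar inputs

# Assumed computation graph: u = a*b, v = a*c, w = b + c, J = u + v - w.
u = a * b
v = a * c
w = b + c
J = u + v - w  # ab + ac - (b + c) = (a - 1)(b + c)
print(np.isclose(J, (a - 1) * (b + c)))  # True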

Neural Networks and Deep Learning Week 03 Quiz Answers

Q1. Which of the following are true? (Check all that apply.)

[expand title=View Answer]
1. X is a matrix in which each column is one training example.
2. a^{[2](12)} denotes the activation vector of the 2nd layer for the 12th training example.
3. a^{[2]}_4 is the activation output by the 4th neuron of the 2nd layer.
4. a^{[2]} denotes the activation vector of the 2nd layer.

[/expand]

Q2. The tanh activation is not always better than the sigmoid activation function for hidden units because the mean of its output is closer to zero, and so it centers the data, making learning complex for the next layer. True/False?

[expand title=View Answer] False (centering the data around zero makes learning simpler, not more complex, for the next layer) [/expand]

Q3. Which of these is a correct vectorized implementation of forward propagation for layer l, where 1 ≤ l ≤ L?

[expand title=View Answer]
1. Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}
2. A^{[l]} = g^{[l]}(Z^{[l]})

[/expand]

Q4. You are building a binary classifier for recognizing cucumbers (y=1) vs. watermelons (y=0). Which one of these activation functions would you recommend using for the output layer?

[expand title=View Answer]sigmoid [/expand]

Q5. Consider the following code:

A = np.random.randn(4, 3)

B = np.sum(A, axis = 1, keepdims = True)

What will be B.shape? (If you’re not sure, feel free to run this in Python to find out).

[expand title=View Answer] B.shape = (4, 1) [/expand]
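
Running it shows the effect of keepdims:

import numpy as np

A = np.random.randn(4, 3)
B = np.sum(A, axis=1, keepdims=True)  # collapse the columns but keep the axis as size 1
print(B.shape)                        # (4, 1)
print(np.sum(A, axis=1).shape)        # (4,) -- a rank-1 array without keepdims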

Q6. Suppose you have built a neural network. You decide to initialize the weights and biases to be zero. Which of the following statements is true?

[expand title=View Answer]
Each neuron in the first hidden layer will perform the same computation. So even after multiple iterations of gradient descent, each neuron in the layer will be computing the same thing as the other neurons.
[/expand]
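
A minimal sketch of the symmetry problem, assuming a hypothetical network with 2 inputs and 3 tanh hidden units, all parameters initialized to zero:

import numpy as np

W1 = np.zeros((3, 2))  # 3 hidden units, 2 inputs, all-zero weights
b1 = np.zeros((3, 1))
x = np.random.randn(2, 1)

a1 = np.tanh(W1 @ x + b1)
# All three hidden activations are identical (here all 0), and because their
# gradients are also identical, the symmetry is never broken during training.
print(a1.ravel())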

Q7. Logistic regression’s weights w should be initialized randomly rather than to all zeros, because if you initialize to all zeros, then logistic regression will fail to learn a useful decision boundary because it will fail to “break symmetry”, True/False?

[expand title=View Answer] False [/expand]

Q8. You have built a network using the tanh activation for all the hidden units. You initialize the weights to relatively large values, using np.random.randn(..,..)*1000. What will happen?

[expand title=View Answer] This will cause the inputs of the tanh to also be very large, thus causing gradients to be close to zero. The optimization algorithm will thus become slow. [/expand]
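
The saturation effect is easy to see numerically: with weights scaled by 1000, the pre-activations are huge, tanh saturates at ±1, and its derivative 1 - tanh(z)^2 is essentially zero:

import numpy as np

W = np.random.randn(4, 3) * 1000  # deliberately huge weights
x = np.random.randn(3, 1)
z = W @ x
grad = 1 - np.tanh(z) ** 2  # derivative of tanh evaluated at z
print(grad.ravel())         # values vanishingly close to 0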

Q9. Consider the following 1-hidden-layer neural network:

Which of the following statements is True? (Check all that apply).

[expand title=View Answer]
1. b^{[1]} will have shape (4, 1)
2. W^{[1]} will have shape (4, 2)
3. b^{[2]} will have shape (1, 1)
4. W^{[2]} will have shape (1, 4)
[/expand]

Q10. In the same network as the previous question, what are the dimensions of Z^{[1]} and A^{[1]}?

[expand title=View Answer] Z^{[1]} and A^{[1]} are (4, m) [/expand]

Neural Networks and Deep Learning Week 04 Quiz Answers

Q1. What is the “cache” used for in our implementation of forward propagation and backward propagation?

[expand title=View Answer] We use it to pass variables computed during forward propagation to the corresponding backward propagation step. It contains useful values for backward propagation to compute derivatives. [/expand]
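
As a minimal sketch (the function name and the ReLU choice are illustrative assumptions, not the course’s exact assignment code), one forward step might stash its inputs like this:

import numpy as np

def forward_step(A_prev, W, b):
    Z = W @ A_prev + b
    A = np.maximum(0, Z)       # ReLU activation, as an example
    cache = (A_prev, W, b, Z)  # saved so the backward pass can compute dW, db, dA_prev
    return A, cache

A, cache = forward_step(np.random.randn(3, 1), np.random.randn(4, 3), np.zeros((4, 1)))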


Q2. Among the following, which ones are “hyperparameters”? (Check all that apply.)

[expand title=View Answer]
1.learning rate α
2.number of layers L in the neural network
3.size of the hidden layers n^{[l]}
4.number of iterations
[/expand]

Q3. Which of the following statements is true?

[expand title=View Answer]The deeper layers of a neural network are typically computing more complex features of the input than the earlier layers. [/expand]

Q4. Vectorization allows you to compute forward propagation in an L-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l = 1, 2, …, L. True/False?

[expand title=View Answer] False [/expand]

Q5. Assume we store the values for n^{[l]} in an array called layer_dims, as follows: layer_dims = [n_x, 4, 3, 2, 1]. So layer 1 has 4 hidden units, layer 2 has 3 hidden units, and so on. Which of the following for-loops will allow you to initialize the parameters for the model?

[expand title=View Answer]
for i in range(1, len(layer_dims)):
    parameter['W' + str(i)] = np.random.randn(layer_dims[i], layer_dims[i-1]) * 0.01
    parameter['b' + str(i)] = np.random.randn(layer_dims[i], 1) * 0.01
[/expand]
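
For completeness, a runnable version of that loop, with an assumed input size n_x = 5 purely for illustration:

import numpy as np

n_x = 5  # assumed input size, just for illustration
layer_dims = [n_x, 4, 3, 2, 1]

parameter = {}
for i in range(1, len(layer_dims)):
    # W[i] has shape (n^[i], n^[i-1]); small random values break symmetry.
    parameter['W' + str(i)] = np.random.randn(layer_dims[i], layer_dims[i - 1]) * 0.01
    parameter['b' + str(i)] = np.random.randn(layer_dims[i], 1) * 0.01

print(parameter['W1'].shape, parameter['b3'].shape)  # (4, 5) (2, 1)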

Q6. Consider the following neural network.

[expand title=View Answer]The number of layers L is 4. The number of hidden layers is 3. [/expand]

Q7. During forward propagation, in the forward function for a layer l you need to know what the activation function in that layer is (sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what the activation function for layer l is, since the gradient depends on it. True/False?

[expand title=View Answer]True [/expand]

Q8. There are certain functions with the following properties:

(i) To compute the function using a shallow network circuit, you will need a large network (where we measure size by the number of logic gates in the network), but (ii) to compute it using a deep network circuit, you need only an exponentially smaller network. True/False?

[expand title=View Answer] True [/expand]

Q9. Consider the following 2-hidden-layer neural network:

Which of the following statements is True? (Check all that apply).

[expand title=View Answer]
1. W^{[2]} will have shape (3, 4)
2. b^{[2]} will have shape (3, 1)
3. W^{[3]} will have shape (1, 3)
4. b^{[3]} will have shape (1, 1)
5. W^{[1]} will have shape (4, 4)
6. b^{[1]} will have shape (4, 1)
[/expand]

Q10. Whereas the previous question used a specific network, in the general case what is the dimension of W^{[l]}, the weight matrix associated with layer l?

[expand title=View Answer] W^{[l]} has shape (n^{[l]}, n^{[l-1]}) [/expand]

Conclusion:

In conclusion, the Neural Networks and Deep Learning Coursera Quiz Answers provide a comprehensive understanding of key concepts and principles in the field of neural networks and deep learning. These answers not only serve as a valuable resource for learners seeking to solidify their knowledge but also offer insights into solving practical problems using deep learning techniques.

Get all Course Quiz Answers of Deep Learning Specialization

Course 01: Neural Networks and Deep Learning Coursera Quiz Answers

Course 02: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization Quiz Answers

Course 03: Structuring Machine Learning Projects Coursera Quiz Answers

Course 04: Convolutional Neural Networks Coursera Quiz Answers

Course 05: Sequence Models Coursera Quiz Answers
