Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Get All Weeks Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Week 01: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Clustering Quiz Answers

Q1. Which of these best describes unsupervised learning?

Answer:
A form of machine learning that finds patterns using unlabeled data (x).

Q2. Which of these statements are true about K-means? Check all that apply.

Answer:
1. If you are running K-means with K = 3 clusters, then each c^(i) should be 1, 2, or 3.
2. The number of cluster assignment variables c^(i) is equal to the number of training examples.

Q3. You run K-means 100 times with different initializations. How should you pick from the 100 resulting solutions?

Answer:
Pick the one with the lowest cost J.

Q4. You run K-means and compute the value of the cost function J(c^(1), …, c^(m), μ_1, …, μ_K) after each iteration. Which of these statements should be true?

Answer:
The cost will either decrease or stay the same after each iteration.

Q5. In K-means, the elbow method is a method to

Answer:
Choose the number of clusters K.
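As a quick illustration of Q2–Q5, here is a minimal sketch (assuming scikit-learn and a hypothetical 2-D dataset) that assigns each example a cluster label c^(i), runs K-means with many random initializations and keeps the solution with the lowest cost J (called inertia_ in scikit-learn), and sweeps K for the elbow method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical unlabeled data: 300 points in 2-D (no labels y).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))

# Q2/Q3: run K-means 100 times with different random initializations and
# keep the solution with the lowest cost J (scikit-learn calls it inertia_).
runs = [KMeans(n_clusters=3, init="random", n_init=1, random_state=seed).fit(X)
        for seed in range(100)]
best = min(runs, key=lambda km: km.inertia_)
print("lowest cost J:", best.inertia_)
print("cluster assignments c^(i):", best.labels_[:10])  # one label in {0, 1, 2} per example

# Q5: elbow method — compute J for several values of K and look for the "elbow" in the curve.
costs = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
         for k in range(1, 9)}
print(costs)
```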

Anomaly Detection Quiz Answers

Q1. You are building a system to detect if computers in a data center are malfunctioning. You have 10,000 data points of computers functioning well, and no data from computers malfunctioning. What type of algorithm should you use?

Answer:
Anomaly detection

Q2. You are building a system to detect if computers in a data center are malfunctioning. You have 10,000 data points of computers functioning well and 10,000 data points of computers malfunctioning. What type of algorithm should you use?

Answer:
Supervised learning

Q3. Say you have 5,000 examples of normal airplane engines and 15 examples of anomalous engines. How would you use the 15 examples of anomalous engines to evaluate your anomaly detection algorithm?

Answer:
Put the anomalous engine examples in the cross-validation and/or test sets, and use them to measure how well the learned model p(x) flags them as anomalies. (The anomaly detection model itself is fit only on the normal examples.)

Q4. Anomaly detection flags a new input x as an anomaly if p(x) < ε. If we reduce the value of ε, what happens?

Answer:
The algorithm is less likely to classify new examples as an anomaly (a smaller ε makes the condition p(x) < ε harder to satisfy).
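To make Q4 concrete, here is a minimal sketch (assuming NumPy/SciPy and a hypothetical one-feature dataset) that fits a Gaussian to the normal examples and flags a new x as an anomaly when p(x) < ε; note that reducing ε flags fewer points, not more.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical single feature (e.g. temperature) from machines working well.
rng = np.random.default_rng(1)
x_train = rng.normal(loc=20.0, scale=2.0, size=10_000)

# Fit the Gaussian model: estimate the mean and standard deviation from normal data.
mu, sigma = x_train.mean(), x_train.std()

# Score new examples and flag anomalies where p(x) < epsilon.
x_new = np.array([19.5, 27.0, 35.0])
p = norm.pdf(x_new, loc=mu, scale=sigma)

for eps in (1e-2, 1e-4):            # reducing epsilon ...
    print(eps, x_new[p < eps])      # ... flags fewer (or the same number of) examples
```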

Q5. You are monitoring the temperature and vibration intensity on newly manufactured aircraft engines. You have measured 100 engines and fit the Gaussian model described in the video lectures to the data. The 100 examples and the resulting distributions are shown in the figure below.

Answer:
1. 17.5 + 48 = 65.5
2. 17.5 * 48 = 840

Week 02: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Collaborative Filtering Quiz Answers

Q1. You have the following table of movie ratings:

Movie               | Elissa | Zach | Barry | Terry
Football Forever    |   5    |  4   |   3   |   ?
Pies, Pies, Pies    |   1    |  ?   |   5   |   4
Linear Algebra Live |   4    |  5   |   ?   |   1

Refer to the table above for questions 1 and 2. Assume numbering starts at 1 for this quiz, so the rating for Football Forever by Elissa is at (1,1).

What is the value of n_u (the number of users)?

Answer:
4 (there are four users: Elissa, Zach, Barry, and Terry).

Q2. What is the value of r(2,2)?

Answer:
0 (user 2, Zach, has not rated movie 2, “Pies, Pies, Pies”, so r(2,2) = 0).
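For Q1 and Q2, a minimal NumPy sketch of the ratings table above (Y holds the ratings, R marks which entries exist) makes both values easy to read off:

```python
import numpy as np

# Ratings table from above: rows are movies, columns are users
# (Elissa, Zach, Barry, Terry); np.nan marks a missing rating "?".
Y = np.array([
    [5, 4, 3, np.nan],       # Football Forever
    [1, np.nan, 5, 4],       # Pies, Pies, Pies
    [4, 5, np.nan, 1],       # Linear Algebra Live
])

R = (~np.isnan(Y)).astype(int)   # r(i, j) = 1 if user j rated movie i

n_m, n_u = Y.shape
print("n_u =", n_u)              # 4 users
print("r(2,2) =", R[1, 1])       # 0 — Zach did not rate "Pies, Pies, Pies"
```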

Q3. In which of the following situations will a collaborative filtering system be the most appropriate learning algorithm (compared to linear or logistic regression)?

Answer:
You run an online bookstore and collect the ratings of many users. You want to use this to identify what books are “similar” to each other (i.e., if a user likes a certain book, what are other books that they might also like?).

Q4. For recommender systems with binary labels y, which of these are reasonable ways for defining when y should be 1 for a given user j and item i? (Check all that apply.)

Answer:
1. y is 1 if user j purchases item i (after being shown the item).
2. y is 1 if user j fav/likes/clicks on item i (after being shown the item).

Recommender systems implementation Quiz Answers

Q1. The lecture described using ‘mean normalization’ to do feature scaling of the ratings. What equation below best describes this algorithm?

Answer:
Comment your answer below.
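For reference, here is a minimal NumPy sketch of mean normalization as described in the lecture, assuming a ratings matrix Y and an indicator matrix R like those above: each movie's rated entries are shifted by that movie's mean μ_i, and μ_i is added back when making predictions.

```python
import numpy as np

def mean_normalize(Y, R):
    """Subtract each movie's mean rating, computed over rated entries only."""
    mu = (Y * R).sum(axis=1) / np.maximum(R.sum(axis=1), 1)  # per-movie mean
    Y_norm = (Y - mu.reshape(-1, 1)) * R                      # only where r(i, j) = 1
    return Y_norm, mu

# Toy example: 2 movies, 3 users; entries with R = 0 are "not rated".
Y = np.array([[5.0, 4.0, 0.0],
              [1.0, 0.0, 3.0]])
R = np.array([[1, 1, 0],
              [1, 0, 1]])

Y_norm, mu = mean_normalize(Y, R)
print(mu)       # [4.5, 2.0]
print(Y_norm)   # [[ 0.5, -0.5, 0.0], [-1.0, 0.0, 1.0]]
```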

Q2. The implementation of collaborative filtering utilized a custom training loop in TensorFlow. Is it true that TensorFlow always requires a custom training loop?

Answer:
No: TensorFlow provides simplified training operations for some applications.

Q3. Once a model is trained, the ‘distance’ between feature vectors gives an indication of how similar items are.

The squared distance between the two vectors x^(k) and x^(i) is ||x^(k) − x^(i)||² = Σ_l (x_l^(k) − x_l^(i))².

Using the table below, find the closest item to the movie “Pies, Pies, Pies”.

Movie               | x_0 | x_1 | x_2
Pastries for Supper | 2.0 | 2.0 | 1.0
Pies, Pies, Pies    | 2.0 | 3.0 | 4.0
Pies and You        | 5.0 | 3.0 | 4.0
Answer:
Pies and You (squared distance 9, versus 10 for Pastries for Supper).
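A minimal sketch of Q3's computation with NumPy, using the feature vectors from the table above:

```python
import numpy as np

# Learned feature vectors (x_0, x_1, x_2) from the table above.
items = {
    "Pastries for Supper": np.array([2.0, 2.0, 1.0]),
    "Pies, Pies, Pies":    np.array([2.0, 3.0, 4.0]),
    "Pies and You":        np.array([5.0, 3.0, 4.0]),
}

query = items["Pies, Pies, Pies"]
for name, x in items.items():
    if name != "Pies, Pies, Pies":
        d2 = np.sum((x - query) ** 2)   # squared distance
        print(name, d2)                 # Pastries for Supper: 10.0, Pies and You: 9.0
```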

Q4. Which of these is an example of the cold start problem? (Check all that apply.)

Answer:
1. A recommendation system is unable to give accurate rating predictions for a new user that has rated few products.
2. A recommendation system is unable to give accurate rating predictions for a new product that no users have rated.

Content-based filtering Quiz Answers

Q1. Vector x_u and vector x_m must be of the same dimension, where x_u is the input features vector for a user (age, gender, etc.) and x_m is the input features vector for a movie (year, genre, etc.). True or false?

Answer:
False. The input vectors x_u and x_m can have different dimensions; it is the output vectors v_u and v_m that must be the same size.

Q2. If we find that two movies, i and k, have vectors v_m^(i) and v_m^(k) that are similar to each other (i.e., ||v_m^(i) − v_m^(k)|| is small), then which of the following is likely to be true? Pick the best answer.

Answer:
The two movies are similar to each other and will be liked by similar users.

Q3. Which of the following neural network configurations are valid for a content-based filtering application? Please note carefully the dimensions of the neural network indicated in the diagram. Check all the options that apply:

Answer:
The user and item networks have 64-dimensional v_u and v_m vectors, respectively.
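As an illustration of Q3 (and of pre-computing v_m in Q5 below), here is a minimal sketch assuming TensorFlow/Keras and made-up feature sizes: the user and item networks can take inputs of different dimensions, but both must output embedding vectors v_u and v_m of the same size (32 here; the quiz option uses 64).

```python
import tensorflow as tf

NUM_USER_FEATURES = 17    # made-up x_u size
NUM_ITEM_FEATURES = 23    # made-up x_m size; need not equal the user size
EMBEDDING_DIM = 32        # v_u and v_m MUST share this dimension

def tower(input_dim, output_dim):
    """A small feed-forward network mapping raw features to an embedding."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(output_dim),
    ])

user_net = tower(NUM_USER_FEATURES, EMBEDDING_DIM)   # x_u -> v_u
item_net = tower(NUM_ITEM_FEATURES, EMBEDDING_DIM)   # x_m -> v_m

# The predicted match score is the dot product of the two embeddings,
# which is why v_u and v_m must have the same dimension.
x_u = tf.random.normal((1, NUM_USER_FEATURES))
x_m = tf.random.normal((1, NUM_ITEM_FEATURES))
v_u = tf.linalg.l2_normalize(user_net(x_u), axis=1)
v_m = tf.linalg.l2_normalize(item_net(x_m), axis=1)
score = tf.reduce_sum(v_u * v_m, axis=1)
print(score.shape)   # (1,)
```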

Q4. You have built a recommendation system to retrieve musical pieces from a large database of music and have an algorithm that uses separate retrieval and ranking steps. If you modify the algorithm to add more musical pieces to the retrieved list (i.e., the retrieval step returns more items), which of these are likely to happen? Check all that apply.

Answer:
1. The quality of the system's recommendations should stay the same or improve, since the ranking step sees more candidates.
2. The system's response time might increase (i.e., users wait longer for recommendations), since more retrieved items must be ranked.

Q5. To speed up the response time of your recommendation system, you can pre-compute the vectors v_m for all the items you might recommend. This can be done even before a user logs in to your website and even before you know the x_u or v_u vector. True/False?

Answer:
True

Week 03: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Reinforcement Learning Introduction Quiz Answers

Q1. You are using reinforcement learning to control a four-legged robot. The position of the robot would be its _____.

Answer:
state

Q2. You are controlling a Mars rover. You will be very very happy if it gets to state 1 (significant scientific discovery), slightly happy if it gets to state 2 (small scientific discovery), and unhappy if it gets to state 3 (rover is permanently damaged). To reflect this, choose a reward function so that:

Answer:
R(1) > R(2) > R(3), where R(1) and R(2) are positive and R(3) is negative.

Q3. You are using reinforcement learning to fly a helicopter. Using a discount factor of 0.75, your helicopter starts in some state and receives rewards -100 on the first step, -100 on the second step, and 1000 on the third and final step (where it has reached a terminal state). What is the return?

Answer:
-100 – 0.75*100 + 0.75^2*1000 = 387.5
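A minimal sketch in plain Python to check Q3's arithmetic (and reusable for Q4 once the rewards are read off its diagram):

```python
def discounted_return(rewards, gamma):
    """Return sum_k gamma**k * rewards[k], the return from the starting state."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Q3: rewards of -100, -100, 1000 with a discount factor of 0.75.
print(discounted_return([-100, -100, 1000], 0.75))   # 387.5
```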

Q4. Given the rewards and actions below, compute the return from state 3 with a discount factor of γ = 0.25.

[Diagram of rewards and actions not shown.]

Answer:
0.39

State-action value function Quiz Answers

Q1. Which of the following accurately describes the state-action value function Q(s, a)?

Answer:
It is the return if you start from state s, take action a (once), then behave optimally after that.

Q2. You are controlling a robot that has 3 actions: ← (left), → (right) and STOP. From a given state s, you have computed Q(s, ←) = -10, Q(s, →) = -20, Q(s, STOP) = 0.

What is the optimal action to take in state s?

Answer:
STOP
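A tiny sketch of Q2's reasoning in Python: the optimal policy simply picks the action with the largest Q(s, a).

```python
# Q-values computed for the three actions in state s.
q_values = {"left": -10, "right": -20, "STOP": 0}

# The optimal action maximizes Q(s, a).
best_action = max(q_values, key=q_values.get)
print(best_action)   # STOP
```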

Q3. For this problem, γ = 0.25. The diagram below shows the return and the optimal action from each state. Please compute Q(5, ←).

[State diagram not shown.]

Answer:
0.625

Continuous state spaces Quiz Answers

Q1. The Lunar Lander is a continuous state Markov Decision Process (MDP) because:

Answer:
The state contains numbers such as position and velocity that are continuous-valued.

Q2. In the learning algorithm described in the videos, we repeatedly create an artificial training set to which we apply supervised learning, where the input x = (s, a) and the target, constructed using Bellman’s equations, is y = _____?

Answer:
y = R(s) + γ max_{a'} Q(s', a'), where s' is the state you get to after taking action a in state s.
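A minimal sketch of how those training targets are built, assuming plain NumPy, a made-up mini-batch, and Q(s', a') estimates that would come from the (target) Q-network in the full algorithm:

```python
import numpy as np

GAMMA = 0.995   # example discount factor

def bellman_targets(rewards, q_next, done):
    """y = R(s) + gamma * max_a' Q(s', a'), with no bootstrap at terminal states."""
    return rewards + GAMMA * np.max(q_next, axis=1) * (1.0 - done)

# Made-up mini-batch: 3 transitions, 4 possible actions in the next state.
rewards = np.array([1.0, -0.5, 100.0])
q_next  = np.array([[0.2, 0.7, 0.1, 0.0],     # Q(s', a') estimates per action
                    [1.0, 0.3, 0.6, 0.9],
                    [0.0, 0.0, 0.0, 0.0]])
done    = np.array([0.0, 0.0, 1.0])           # last transition reached a terminal state

print(bellman_targets(rewards, q_next, done))  # [1.6965, 0.495, 100.0]
```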

Q3. You have reached the final practice quiz of this class! What does that mean? (Please check all the answers, because all of them are correct!)

Answer:
1. Andrew sends his heartfelt congratulations to you!
2. The DeepLearning.AI and Stanford Online teams would like to give you a round of applause!
3. You deserve to celebrate!
4. What an accomplishment — you made it!

Conclusion:

In conclusion, our journey through the realms of Unsupervised Learning, Recommenders, and Reinforcement Learning has been a remarkable voyage into the fascinating world of machine learning. We’ve explored diverse and powerful techniques that enable machines to uncover patterns, make recommendations, and learn through interaction.

Get All Course Quiz Answers of Machine Learning Specialization

Supervised Machine Learning: Regression and Classification Quiz Answers

Advanced Learning Algorithms Coursera Quiz Answers

Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

Team Networking Funda

We are Team Networking Funda, a group of passionate authors and networking enthusiasts committed to sharing our expertise and experiences in networking and team building. With backgrounds in Data Science, Information Technology, Health, and Business Marketing, we bring diverse perspectives and insights to help you navigate the challenges and opportunities of professional networking and teamwork.
