# Fundamentals of Reinforcement Learning Quiz Answers

## Get All Weeks Fundamentals of Reinforcement Learning Quiz Answers

### Fundamentals of Reinforcement Learning Week 01 Quiz Answers

#### Quiz 1: Sequential Decision-Making

Q1. What is the incremental rule (sample average) for action values?

• Q_{n+1} = Q_n + \frac{1}{n}[R_n + Q_n]
• Q_{n+1} = Q_n - \frac{1}{n}[R_n - Q_n]
• Q_{n+1} = Q_n + \frac{1}{n}[R_n - Q_n]
• Q_{n+1} = Q_n + \frac{1}{n}[Q_n]
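As a sanity check on the sample-average rule Q_{n+1} = Q_n + \frac{1}{n}[R_n - Q_n], the sketch below (illustrative code of our own, not from the course materials) shows that applying the rule incrementally reproduces the plain average of the rewards seen so far:

```python
# The incremental sample-average rule Q_{n+1} = Q_n + (1/n)(R_n - Q_n)
# computes the running mean of the rewards without storing them all.
def incremental_average(rewards):
    q = 0.0
    for n, r in enumerate(rewards, start=1):
        q = q + (r - q) / n   # step size 1/n
    return q

rewards = [4.0, 2.0, 6.0, 0.0]
print(incremental_average(rewards))  # 3.0
print(sum(rewards) / len(rewards))   # 3.0, the same value
```

The two printed values agree because the rule is an algebraic rearrangement of the batch mean, not an approximation.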

Q2. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1}=q_n+\alpha_n[R_n -q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

• 1.0
• 1/2
• 1/8
• 1 / (t – 1)

Q3. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1}=q_n+\alpha_n[R_n -q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

• 1 / (t – 1)
• 1/2
• 1/8
• 1.0

Q4. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1}=q_n+\alpha_n[R_n -q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

• 1.0
• 1/8
• 1/2
• 1 / (t – 1)

Q5. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1}=q_n+\alpha_n[R_n -q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

• 1.0
• 1/2
• 1/8
• 1 / (t – 1)
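For questions 2 through 5, the effect of the step-size parameter in equation 2.5 can be explored numerically even without the plots. The sketch below uses a hypothetical constant target (our own choice, not the course's actual signal) to show how alpha = 1.0, 1/2, and 1/8 trade off speed against smoothness:

```python
# Track a target with q <- q + alpha * (r - q) for different step sizes.
# alpha = 1.0 jumps straight to the latest target, 1/2 closes half the
# gap each step, and 1/8 approaches slowly.
def track(alpha, targets, q0=0.0):
    q = q0
    history = []
    for r in targets:
        q += alpha * (r - q)
        history.append(q)
    return history

targets = [10.0] * 5  # hypothetical constant target
print(track(1.0, targets))    # reaches 10 immediately
print(track(0.5, targets))    # halves the remaining gap each step
print(track(0.125, targets))  # approaches 10 slowly
```

Matching the shape of the red estimate against these behaviours is how one reads off the step size in each figure.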

Q6. What is the exploration/exploitation tradeoff?

• The agent wants to explore to get more accurate estimates of its values. The agent also wants to exploit to get more rewards. The agent cannot, however, choose to do both simultaneously.
• The agent wants to explore the environment to learn as much as possible about the various actions. That way, once it knows every arm’s true value, it can choose the best one for the rest of the time.
• The agent wants to maximize the amount of reward it receives over its lifetime. To do so it needs to avoid the action it believes is worst to exploit what it knows about the environment. However, to discover which arm is truly worst it needs to explore different actions which potentially will lead it to take the worst action at times.

Q7. Why did an epsilon of 0.1 perform better over 1000 steps than an epsilon of 0.01?

• The 0.01 agent did not explore enough. Thus it ended up selecting a suboptimal arm for longer.
• The 0.01 agent explored too much, causing the agent to choose a bad action too often.
• Epsilon of 0.1 is the optimal value for epsilon in general.

Q8. If exploration is so great why did an epsilon of 0.0 (a greedy agent) perform better than an epsilon of 0.4?

• Epsilon of 0.0 is greedy, thus it will always choose the optimal arm.
• Epsilon of 0.4 doesn’t explore often enough to find the optimal action.
• Epsilon of 0.4 explores so often that it takes many sub-optimal actions, causing it to do worse over the long term.
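Questions 6 through 8 all concern epsilon-greedy action selection. A minimal sketch of the rule (the function name and interface are our own, not the course's):

```python
import random

# Epsilon-greedy: with probability epsilon pick an action uniformly at
# random (explore); otherwise pick the action with the highest current
# value estimate (exploit).
def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

q = [0.2, 1.5, 0.7]
print(epsilon_greedy(q, 0.0))  # epsilon = 0 is purely greedy: action 1
```

With epsilon = 0.4 the agent takes a random action 40% of the time, which is why it underperforms even a greedy agent over the long run.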

### Fundamentals of Reinforcement Learning Week 02 Quiz Answers

#### Quiz 1: MDPs Quiz Answers

Q1. The learner and decision maker is the _.

• Environment
• Reward
• State
• Agent

Q2. At each time step the agent takes an _.

• Action
• State
• Environment
• Reward

Q3. Imagine the agent is learning in an episodic problem. Which of the following is true?

• The number of steps in an episode is always the same.
• The number of steps in an episode is stochastic: each episode can have a different number of steps.
• The agent takes the same action at each step during an episode.

Q4. If the reward is always +1, what is the sum of the discounted infinite return when \gamma < 1?

G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}

• G_t = \frac{1}{1-\gamma}
• G_t = \frac{\gamma}{1-\gamma}
• Infinity.
• G_t = 1 \cdot \gamma^k
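With a reward of +1 on every step, the return is the geometric series \sum_{k=0}^{\infty} \gamma^k = \frac{1}{1-\gamma}. A quick numerical check (illustrative, with \gamma = 0.9 chosen by us):

```python
# Truncating the infinite series at 1000 terms already matches the
# closed form 1 / (1 - gamma) to high precision when gamma < 1.
gamma = 0.9
approx = sum(gamma ** k for k in range(1000))  # truncated series
exact = 1 / (1 - gamma)
print(approx, exact)  # both approximately 10
```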

Q5. How does the magnitude of the discount factor (\gamma) affect learning?

• With a larger discount factor the agent is more far-sighted and considers rewards farther into the future.
• The magnitude of the discount factor has no effect on the agent.
• With a smaller discount factor the agent is more far-sighted and considers rewards farther into the future.

Q6. Suppose \gamma = 0.8 and we observe the following sequence of rewards: R_1 = -3, R_2 = 5, R_3 = 2, R_4 = 7, and R_5 = 1, with T = 5. What is G_0? Hint: Work backwards and recall that G_t = R_{t+1} + \gamma G_{t+1}.

• 12
• -3
• 8.24
• 11.592
• 6.2736
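The hint in question 6 can be followed directly in code: start from G_T = 0 and apply G_t = R_{t+1} + \gamma G_{t+1} backwards over the reward sequence (a small sketch, not course code):

```python
# Compute G_0 by working backwards: G_T = 0, then
# G_t = R_{t+1} + gamma * G_{t+1} for t = T-1 down to 0.
gamma = 0.8
rewards = [-3, 5, 2, 7, 1]  # R_1 .. R_5, with T = 5
G = 0.0
for r in reversed(rewards):
    G = r + gamma * G
print(round(G, 4))  # 6.2736
```

The intermediate values are G_4 = 1, G_3 = 7.8, G_2 = 8.24, G_1 = 11.592, and finally G_0 = 6.2736.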

Q7. What does MDP stand for?

• Markov Decision Protocol
• Markov Decision Process
• Markov Deterministic Policy
• Meaningful Decision Process

Q8. Suppose reinforcement learning is being applied to determine moment-by-moment temperatures and stirring rates for a bioreactor (a large vat of nutrients and bacteria used to produce useful chemicals). The actions in such an application might be target temperatures and target stirring rates that are passed to lower-level control systems that, in turn, directly activate heating elements and motors to attain the targets. The states are likely to be thermocouples and other sensory readings, perhaps filtered and delayed, plus symbolic inputs representing the ingredients in the vat and the target chemical. The rewards might be moment-by-moment measures of the rate at which the useful chemical is produced by the bioreactor.

Notice that here each state is a list, or vector, of sensor readings and symbolic inputs, and each action is a vector consisting of a target temperature and a stirring rate.

Is this a valid MDP?

• Yes. Assuming the state captures the relevant sensory information (including historical values to account for sensor delays). It is typical of reinforcement learning tasks to have states and actions with such structured representations; the states might be constructed by processing the raw sensor information in a variety of ways.
• No. If the instantaneous sensor readings are non-Markov it is not an MDP: we cannot construct a state different from the sensor readings available on the current time-step.

Q9. Case 1: Imagine that you are a vision system. When you are first turned on for the day, an image floods into your camera. You can see lots of things, but not all things. You can’t see objects that are occluded, and of course, you can’t see objects that are behind you. After seeing that first scene, do you have access to the Markov state of the environment?

Case 2: Imagine that the vision system never worked properly: it always returned the same static image, forever. Would you have access to the Markov state then? (Hint: Reason about P(S_{t+1} | S_t, …, S_0) when S_t = AllWhitePixels.)

• You have access to the Markov state in both Cases 1 and 2.
• You have access to the Markov state in Case 1, but you don’t have access to the Markov state in Case 2.
• You don’t have access to the Markov state in Case 1, but you do have access to the Markov state in Case 2.
• You don’t have access to the Markov state in both Cases 1 and 2.

Q10. What is the reward hypothesis?

• That all of what we mean by goals and purposes can be well thought of as the minimization of the expected value of the cumulative sum of a received scalar signal (called reward)
• That all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)
• Ignore rewards and find other signals.
• Always take the action that gives you the best reward at that point.

Q11. Imagine, an agent is in a maze-like grid world. You would like the agent to find the goal, as quickly as possible. You give the agent a reward of +1 when it reaches the goal and the discount rate is 1.0 because this is an episodic task. When you run the agent it finds the goal but does not seem to care how long it takes to complete each episode. How could you fix this? (Select all that apply)

• Give the agent a reward of 0 at every time step so it wants to leave.
• Set a discount rate less than 1 and greater than 0, like 0.9.
• Give the agent -1 at each time step.
• Give the agent a reward of +1 at every time step.

Q12. When may you want to formulate a problem as episodic?

• When the agent-environment interaction does not naturally break into sequences. Each new episode begins independently of how the previous episode ended.
• When the agent-environment interaction naturally breaks into sequences. Each sequence begins independently of how the previous episode ended.

### Fundamentals of Reinforcement Learning Week 03 Quiz Answers

#### Quiz 1: [Practice] Value Functions and Bellman Equations Quiz Answers

Q1. A policy is a function which maps _ to _.

• Actions to probability distributions over values.
• Actions to probabilities.
• States to values.
• States to probability distributions over actions.
• States to actions.

Q2. The term “backup” most closely resembles the term _ in meaning.

• Value
• Update
• Diagram

Q3. At least one deterministic optimal policy exists in every Markov decision process.

• False
• True

Q4. The optimal state-value function:

• Is not guaranteed to be unique, even in finite Markov decision processes.
• Is unique in every finite Markov decision process.

Q5. Does adding a constant to all rewards change the set of optimal policies in episodic tasks?

• Yes, adding a constant to all rewards changes the set of optimal policies.
• No, as long as the relative differences between rewards remain the same, the set of optimal policies is the same.
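The episodic case can be checked with a toy example (the reward sequences below are hypothetical, chosen by us): adding a constant c to every reward adds c times the episode length to the return, so it can change which episode length, and hence which policy, is preferred:

```python
# In an episodic task (gamma = 1 here), adding a constant c to every
# reward adds c * (episode length) to the return, which favours longer
# or shorter episodes and can flip the optimal policy.
def episode_return(rewards, c=0.0):
    return sum(r + c for r in rewards)

short, long_ = [10], [1] * 8  # hypothetical short vs long episodes
print(episode_return(short), episode_return(long_))        # 10 8
print(episode_return(short, 2), episode_return(long_, 2))  # 12 24
```

With c = 0 the short episode is better (10 vs 8); with c = 2 the long one wins (12 vs 24). In continuing tasks, by contrast, the constant adds the same amount \frac{c}{1-\gamma} to every state's value, leaving the ordering of policies unchanged.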

Q6. Does adding a constant to all rewards change the set of optimal policies in continuing tasks?

• Yes, adding a constant to all rewards changes the set of optimal policies.
• No, as long as the relative differences between rewards remain the same, the set of optimal policies is the same.

#### Quiz 2: Value Functions and Bellman Equations Quiz Answers

Q1. A function which maps _ to _ is a value function. [Select all that apply]

• Values to states.
• State-action pairs to expected returns.
• States to expected returns.
• Values to actions.

Q2. Every finite Markov decision process has __. [Select all that apply]

• A stochastic optimal policy
• A unique optimal policy
• A deterministic optimal policy
• A unique optimal value function

Q3. The Bellman equation for a given policy \pi: [Select all that apply]

• Holds only when the policy is greedy with respect to the value function.
• Expresses the improved policy in terms of the existing policy.
• Expresses state values v(s) in terms of the state values of successor states.

Q6. An optimal policy:

• Is not guaranteed to be unique, even in finite Markov decision processes.
• Is unique in every Markov decision process.
• Is unique in every finite Markov decision process.

### Fundamentals of Reinforcement Learning Week 04 Quiz Answers

#### Quiz 1: Dynamic Programming Quiz Answers

Q1. The value of any state under an optimal policy is _ the value of that state under a non-optimal policy. [Select all that apply]

• Strictly greater than
• Greater than or equal to
• Strictly less than
• Less than or equal to

Q2. If a policy is greedy with respect to the value function for the equiprobable random policy, then it is guaranteed to be an optimal policy.

• True
• False

Q3. Let v_{\pi} …

• True
• False

Q4. What is the relationship between value iteration and policy iteration? [Select all that apply]

• Value iteration is a special case of policy iteration.
• Policy iteration is a special case of value iteration.
• Value iteration and policy iteration are both special cases of generalized policy iteration.
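To see why value iteration is a special case of (generalized) policy iteration, note that its update folds a single sweep of policy evaluation and a greedy improvement into one step: V(s) \leftarrow \max_a \sum_{s', r} p(s', r \mid s, a)[r + \gamma v(s')]. A sketch on a made-up two-state MDP with deterministic transitions (the transition table is our own, not from the course):

```python
# Value iteration on a tiny deterministic MDP.
# transitions[s][a] = (next_state, reward)
transitions = {
    0: {"stay": (0, 0.0), "go": (1, 5.0)},
    1: {"stay": (1, 1.0), "go": (0, 0.0)},
}
gamma = 0.9
V = {0: 0.0, 1: 0.0}
for _ in range(200):  # plenty of sweeps for this tiny MDP to converge
    for s in transitions:
        V[s] = max(r + gamma * V[s2] for (s2, r) in transitions[s].values())

# Extract the greedy policy from the converged values.
greedy = {
    s: max(transitions[s],
           key=lambda a: transitions[s][a][1] + gamma * V[transitions[s][a][0]])
    for s in transitions
}
print(greedy)  # {0: 'go', 1: 'go'}
```

Here the optimal policy cycles between the two states to collect the reward of 5 every other step, so V(0) converges to 5 / (1 - \gamma^2) ≈ 26.32.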

Q5. The word synchronous means “at the same time”. The word asynchronous means “not at the same time”. A dynamic programming algorithm is: [Select all that apply]

• Asynchronous, if it does not update all states at each iteration.
• Synchronous, if it systematically sweeps the entire state space at each iteration.
• Asynchronous, if it updates some states more than others.

Q6. All Generalized Policy Iteration algorithms are synchronous.

• True
• False

Q7. Which of the following is true?

• Synchronous methods generally scale to large state spaces better than asynchronous methods.
• Asynchronous methods generally scale to large state spaces better than synchronous methods.

Q8. Why are dynamic programming algorithms considered planning methods? [Select all that apply]

• They compute optimal value functions.
• They learn from trial and error interaction.
• They use a model to improve the policy.

Q9. Consider the undiscounted, episodic MDP below. There are four actions possible in each state, A = {up, down, right, left}, which deterministically cause the corresponding state transitions, except that actions that would take the agent off the grid in fact leave the state unchanged. The right half of the figure shows the value of each state under the equiprobable random policy. If \pi is the equiprobable random policy, what is q(7, down)?

• q(7,down)=−14
• q(7,down)=−20
• q(7,down)=−21
• q(7,down)=−15
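The gridworld here appears to be the 4x4 example from the SB textbook (reward -1 per step, terminal states in two opposite corners, \gamma = 1). Assuming that layout, iterative policy evaluation under the equiprobable random policy recovers the state values, and q_\pi(7, down) follows from a one-step look-ahead:

```python
# Iterative policy evaluation for the 4x4 gridworld (assumed to match
# the textbook example: terminals in two corners, reward -1 per step,
# equiprobable random policy, gamma = 1).
TERMINALS = {0, 15}
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(s, a):
    """Deterministic transition; off-grid moves leave the state unchanged."""
    r, c = divmod(s, 4)
    dr, dc = MOVES[a]
    nr, nc = r + dr, c + dc
    return 4 * nr + nc if 0 <= nr < 4 and 0 <= nc < 4 else s

def evaluate_policy(theta=1e-6):
    V = [0.0] * 16
    while True:
        delta = 0.0
        for s in range(16):
            if s in TERMINALS:
                continue
            v_new = sum(0.25 * (-1 + V[step(s, a)]) for a in MOVES)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

V = evaluate_policy()
q_7_down = -1 + V[step(7, "down")]  # one-step look-ahead for q_pi(7, down)
print(round(V[11]), round(q_7_down))  # -14 -15
```

Taking "down" from state 7 leads to state 11, so q_\pi(7, down) = -1 + v_\pi(11); the same one-step look-ahead technique answers question 10 once v_\pi at the successors of state 15 is known.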

Q10. Consider the undiscounted, episodic MDP below. There are four actions possible in each state, A = {up, down, right, left}, which deterministically cause the corresponding state transitions, except that actions that would take the agent off the grid in fact leave the state unchanged. The right half of the figure shows the value of each state under the equiprobable random policy. If \pi is the equiprobable random policy, what is v(15)? Hint: Recall the Bellman equation

v(s) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) [r + \gamma v(s')].

• v(15) = -25
• v(15) = -22
• v(15) = -24
• v(15) = -23
• v(15) = -21
