Fundamentals of Reinforcement Learning Quiz Answers

Get All Weeks Fundamentals of Reinforcement Learning Quiz Answers

Fundamentals of Reinforcement Learning Week 01 Quiz Answers

Quiz 1: Sequential Decision-Making

Q1. What is the incremental rule (sample average) for action values?

Q_{n+1} = Q_n + \frac{1}{n} [R_n - Q_n]
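As a quick illustration, here is a minimal Python sketch of this incremental sample-average update (the class and variable names are my own, not from the course code):

```python
class IncrementalAverage:
    """Incrementally tracks the sample average of observed rewards,
    using Q_{n+1} = Q_n + (1/n) * (R_n - Q_n)."""

    def __init__(self):
        self.q = 0.0  # current estimate Q_n
        self.n = 0    # number of rewards seen so far

    def update(self, reward):
        self.n += 1
        # step size 1/n makes this the exact running average
        self.q += (reward - self.q) / self.n
        return self.q


est = IncrementalAverage()
for r in [1.0, 0.0, 2.0, 1.0]:
    est.update(r)
print(est.q)  # 1.0, identical to the batch average of the four rewards
```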

Q2. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1} = q_n + \alpha_n [R_n - q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

1/2

Q3. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1} = q_n + \alpha_n [R_n - q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

1/8

Q4. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1} = q_n + \alpha_n [R_n - q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

1.0

Q5. Equation 2.5 (from the SB textbook, 2nd edition) is a key update rule we will use throughout the Specialization. We discussed this equation extensively in video. This exercise will give you a better hands-on feel for how it works. The blue line is the target that we might estimate with equation 2.5. The red line is our estimate plotted over time.

q_{n+1} = q_n + \alpha_n [R_n - q_n]

Given the estimated update in red, what do you think was the value of the step size parameter we used to update the estimate on each time step?

1/(t - 1)
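To build intuition for how these step-size choices shape the estimate curve, here is a small illustrative Python sketch (my own, not from the course materials) that runs equation 2.5 with several step sizes against the same noisy reward stream. A large constant step size tracks the target quickly but noisily, a small one responds slowly, and the sample-average step size 1/n settles down over time.

```python
import random

def run_estimate(rewards, step_size):
    """Apply q <- q + alpha * (R - q) for each reward.

    step_size is either a constant float or the string "sample_average",
    in which case alpha_n = 1/n.
    """
    q, history = 0.0, []
    for n, r in enumerate(rewards, start=1):
        alpha = 1.0 / n if step_size == "sample_average" else step_size
        q += alpha * (r - q)
        history.append(q)
    return history

random.seed(0)
# noisy rewards around a fixed target of 1.0
rewards = [1.0 + random.gauss(0, 0.5) for _ in range(50)]

for alpha in [1.0, 0.5, 0.125, "sample_average"]:
    final = run_estimate(rewards, alpha)[-1]
    print(f"step size {alpha}: final estimate {final:.3f}")
```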

Q6. What is the exploration/exploitation tradeoff?

The agent wants to explore to get more accurate estimates of its values. The agent also wants to exploit to get more rewards. The agent cannot, however, choose to do both simultaneously.

Q7. Why did an epsilon of 0.1 perform better over 1000 steps than an epsilon of 0.01?

The epsilon = 0.01 agent did not explore enough, so it ended up selecting a suboptimal arm for longer.

Q8. If exploration is so great why did an epsilon of 0.0 (a greedy agent) perform better than an epsilon of 0.4?

An epsilon of 0.4 explores so often that it takes many sub-optimal actions, causing it to do worse over the long term.
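These three questions all describe epsilon-greedy action selection. Below is a minimal, illustrative epsilon-greedy bandit agent with sample-average updates; it is a sketch with made-up names (EpsilonGreedyAgent, true_means), not the course's agent code.

```python
import random

class EpsilonGreedyAgent:
    """Illustrative k-armed bandit agent: epsilon-greedy action selection
    with incremental sample-average action-value estimates."""

    def __init__(self, num_arms, epsilon):
        self.epsilon = epsilon
        self.q = [0.0] * num_arms   # action-value estimates
        self.n = [0] * num_arms     # pull counts per arm

    def select_action(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))                      # explore
        return max(range(len(self.q)), key=lambda a: self.q[a])       # exploit

    def update(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]

# toy 3-armed bandit with true means 0.1, 0.5, 0.9
random.seed(0)
true_means = [0.1, 0.5, 0.9]
agent = EpsilonGreedyAgent(num_arms=3, epsilon=0.1)
total = 0.0
for _ in range(1000):
    a = agent.select_action()
    r = random.gauss(true_means[a], 1.0)
    agent.update(a, r)
    total += r
print("average reward:", total / 1000)
```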

Fundamentals of Reinforcement Learning Week 02 Quiz Answers

Quiz 1: MDPs Quiz Answers

Q1. The learner and decision maker is the _.

Agent

Q2. At each time step the agent takes an _.

Action

Q3. Imagine the agent is learning in an episodic problem. Which of the following is true?

The number of steps in an episode is stochastic: each episode can have a different number of steps.

Q4. If the reward is always +1, what is the sum of the discounted infinite return when \gamma < 1?

G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}

G_t = \frac{1}{1-\gamma}
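For completeness, here is the geometric-series step behind this answer, with every reward equal to +1:

G_t = \sum_{k=0}^{\infty} \gamma^{k} \cdot 1 = 1 + \gamma + \gamma^{2} + \cdots = \frac{1}{1-\gamma}, \quad 0 \le \gamma < 1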

Q5. How does the magnitude of the discount factor \gamma affect learning?

With a larger discount factor the agent is more far-sighted and considers rewards farther into the future.

Q6. Suppose \gamma = 0.8 and we observe the following sequence of rewards: R_1 = -3, R_2 = 5, R_3 = 2, R_4 = 7, and R_5 = 1, with T = 5. What is G_0? Hint: Work backwards and recall that G_t = R_{t+1} + \gamma G_{t+1}.

6.2736
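Working backwards from G_5 = 0, a few lines of Python reproduce this number (a quick check, not course code):

```python
gamma = 0.8
rewards = {1: -3, 2: 5, 3: 2, 4: 7, 5: 1}  # R_1 ... R_5, with T = 5

# G_T = 0; apply G_t = R_{t+1} + gamma * G_{t+1} from t = T-1 down to t = 0
G = 0.0
for t in range(4, -1, -1):
    G = rewards[t + 1] + gamma * G
print(round(G, 4))  # 6.2736
```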

Q7. What does MDP stand for?

Markov Decision Process

Q8. Suppose reinforcement learning is being applied to determine moment-by-moment temperatures and stirring rates for a bioreactor (a large vat of nutrients and bacteria used to produce useful chemicals). The actions in such an application might be target temperatures and target stirring rates that are passed to lower-level control systems that, in turn, directly activate heating elements and motors to attain the targets. The states are likely to be thermocouples and other sensory readings, perhaps filtered and delayed, plus symbolic inputs representing the ingredients in the vat and the target chemical. The rewards might be moment-by-moment measures of the rate at which the useful chemical is produced by the bioreactor.

Notice that here each state is a list, or vector, of sensor readings and symbolic inputs, and each action is a vector consisting of a target temperature and a stirring rate.

Is this a valid MDP?

Yes, assuming the state captures the relevant sensory information (including historical values to account for sensor delays). It is typical of reinforcement learning tasks to have states and actions with such structured representations; the states might be constructed by processing the raw sensor information in a variety of ways.

Q9. Case 1: Imagine that you are a vision system. When you are first turned on for the day, an image floods into your camera. You can see lots of things, but not all things. You can’t see objects that are occluded, and of course, you can’t see objects that are behind you. After seeing that first scene, do you have access to the Markov state of the environment?

Case 2: Imagine that the vision system never worked properly: it always returned the same static image, forever. Would you have access to the Markov state then? (Hint: Reason about P(S_{t+1} | S_t, …, S_0) when S_{t+1} = AllWhitePixels.)

You have access to the Markov state in Case 1, but you don’t have access to the Markov state in Case 2.

Q10. What is the reward hypothesis?

That all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward).

Q11. Imagine an agent is in a maze-like grid world. You would like the agent to find the goal as quickly as possible. You give the agent a reward of +1 when it reaches the goal, and the discount rate is 1.0 because this is an episodic task. When you run the agent, it finds the goal but does not seem to care how long it takes to complete each episode. How could you fix this? (Select all that apply)

Give the agent a reward of 0 at every time step so it wants to leave.

Set a discount rate less than 1 and greater than 0, like 0.9.

Give the agent -1 at each time step.

Q12. When may you want to formulate a problem as episodic?

When the agent-environment interaction naturally breaks into sequences. Each sequence begins independently of how the episode ended.

Fundamentals of Reinforcement Learning Week 03 Quiz Answers

Quiz 1: [Practice] Value Functions and Bellman Equations Quiz Answers

Q1. A policy is a function which maps _ to _.

States to actions.

Q2. The term “backup” most closely resembles the term _ in meaning.

Update

Q3. At least one deterministic optimal policy exists in every Markov decision process.

False

Q4. The optimal state-value function:

Is unique in every finite Markov decision process.

Q5. Does adding a constant to all rewards change the set of optimal policies in episodic tasks?

Yes. In an episodic task, adding a constant c to every reward adds c times the number of remaining steps to each return, so it can make policies that produce longer (or shorter) episodes look better and thereby change the set of optimal policies.

Q6. Does adding a constant to all rewards change the set of optimal policies in continuing tasks?

No. In a continuing (discounted) task, adding a constant c to every reward adds the same amount, c / (1 - \gamma), to every return, so the relative ordering of policies, and hence the set of optimal policies, is unchanged.
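A small numerical check of these two answers (a toy example of my own, not from the quiz): in an undiscounted episodic task a constant bonus scales with episode length, while in a discounted continuing task it shifts every policy's return by the same amount.

```python
# Episodic, undiscounted: reward -1 per step, two policies reaching the goal
# in 2 vs 10 steps. The shorter policy A is better.
def episodic_return(steps, step_reward, bonus=0.0):
    return steps * (step_reward + bonus)

print(episodic_return(2, -1), episodic_return(10, -1))        # -2 -10: A wins
print(episodic_return(2, -1, 2), episodic_return(10, -1, 2))  #  2  10: now B wins

# Continuing, discounted: adding a constant c adds c / (1 - gamma) to every
# return, so the ordering of policies (and the optimal set) is unchanged.
gamma, c = 0.9, 2.0
print(round(c / (1 - gamma), 2))  # 20.0 added uniformly to every policy's return
```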

Quiz 2: Value Functions and Bellman Equations Quiz Answers

Q1. A function which maps _ to _ is a value function. [Select all that apply]

State-action pairs to expected returns.

States to expected returns.

Q2. Every finite Markov decision process has __. [Select all that apply]

A unique optimal value function

Q3. The Bellman equation for a given policy \pi: [Select all that apply]

Expresses state values v(s) in terms of state values of successor states.

Holds only when the policy is greedy with respect to the value function.
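For reference, the Bellman equation for the state-value function of a given policy \pi, in the textbook's notation, is:

v_{\pi}(s) = \sum_a \pi(a | s) \sum_{s', r} p(s', r | s, a) [r + \gamma v_{\pi}(s')]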

Q6. An optimal policy:

Is not guaranteed to be unique, even in finite Markov decision processes.


Fundamentals of Reinforcement Learning Week 04 Quiz Answers

Quiz 1: Dynamic Programming Quiz Answers

Q1. The value of any state under an optimal policy is _ the value of that state under a non-optimal policy. [Select all that apply]

Greater than or equal to

Q2. If a policy is greedy with respect to the value function for the equiprobable random policy, then it is guaranteed to be an optimal policy.

False

Q3. Let v_{\pi} …

False

Q4. What is the relationship between value iteration and policy iteration? [Select all that apply]

Value iteration is a special case of policy iteration.

Policy iteration is a special case of value iteration.

Value iteration and policy iteration are both special cases of generalized policy iteration.

Q5. The word synchronous means “at the same time”. The word asynchronous means “not at the same time”. A dynamic programming algorithm is: [Select all that apply]

Asynchronous, if it does not update all states at each iteration.

Synchronous, if it systematically sweeps the entire state space at each iteration.

Asynchronous, if it updates some states more than others.
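As an illustration of this distinction, here is a small, self-contained Python sketch (a toy 3-state chain MDP of my own, not from the course): the synchronous version sweeps all states using a frozen copy of the old values, while the asynchronous version updates states one at a time, in place, using whatever values are currently available.

```python
# Toy chain MDP: states 0 -> 1 -> 2 (terminal), reward -1 per transition,
# a single deterministic action, gamma = 1. True values: v(0) = -2, v(1) = -1.
GAMMA = 1.0
NONTERMINAL = [0, 1]

def next_state_and_reward(s):
    return s + 1, -1.0

def synchronous_sweep(v):
    """Compute all new values from a frozen copy of the old values."""
    new_v = dict(v)
    for s in NONTERMINAL:
        s_next, r = next_state_and_reward(s)
        new_v[s] = r + GAMMA * v[s_next]
    return new_v

def asynchronous_updates(v, order):
    """Update states one at a time, in place, in the given order."""
    for s in order:
        s_next, r = next_state_and_reward(s)
        v[s] = r + GAMMA * v[s_next]
    return v

v_sync = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(3):
    v_sync = synchronous_sweep(v_sync)
print(v_sync)   # {0: -2.0, 1: -1.0, 2: 0.0} after three full sweeps

# In-place updates ordered from the terminal state backwards converge in one pass.
v_async = asynchronous_updates({0: 0.0, 1: 0.0, 2: 0.0}, order=[1, 0])
print(v_async)  # {0: -2.0, 1: -1.0, 2: 0.0}
```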

Q6. All Generalized Policy Iteration algorithms are synchronous.

False

Q7. Which of the following is true?

Asynchronous methods generally scale to large state spaces better than synchronous methods.

Q8. Why are dynamic programming algorithms considered planning methods? [Select all that apply]

They use a model to improve the policy.

They compute optimal value functions.

Q9. Consider the undiscounted, episodic MDP below. There are four actions possible in each state, A = {up, down, right, left}, which deterministically cause the corresponding state transitions, except that actions that would take the agent off the grid in fact leave the state unchanged. The right half of the figure shows the value of each state under the equiprobable random policy. If \pi is the equiprobable random policy, what is q_{\pi}(7, down)?

q_{\pi}(7, down) = -15
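This is a one-step Bellman backup using the converged state values shown in the figure (assuming it matches Figure 4.1 of the textbook, where v(11) = -14 under the random policy and every transition yields reward -1):

```python
# q_pi(7, down): moving down from state 7 lands in state 11.
gamma = 1.0       # undiscounted episodic task
reward = -1.0     # reward on every transition
v_11 = -14.0      # value of state 11 under the equiprobable random policy
q_7_down = reward + gamma * v_11
print(q_7_down)   # -15.0
```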

Q10. Consider the undiscounted, episodic MDP below. There are four actions possible in each state, A = {up, down, right, left}, which deterministically cause the corresponding state transitions, except that actions that would take the agent off the grid in fact leave the state unchanged. The right half of the figure shows the value of each state under the equiprobable random policy. If \pi is the equiprobable random policy, what is v(15)? Hint: Recall the Bellman equation v(s) = \sum_a \pi(a | s) \sum_{s', r} p(s', r | s, a) [r + \gamma v(s')].

v(15) = -23

