Markov Decision Processes II
PPTs by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
Example: Grid World
▪ A maze-like problem
▪ The agent lives in a grid
▪ Walls block the agent’s path
▪ Noisy movement: actions do not always go as planned (a small sketch of this transition model follows the slide)
▪ 80% of the time, the action North takes the agent North
▪ 10% of the time, North takes the agent West; 10% East
▪ If there is a wall in the direction the agent would have been taken, the agent stays put
▪ The agent receives rewards each time step
▪ Small “living” reward each step (can be negative)
▪ Big rewards come at the end (good or bad)
▪ Goal: maximize sum of (discounted) rewards
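The 80/10/10 noise model above can be made concrete in a few lines of code. This is a minimal sketch under assumed conventions (states as (row, column) tuples, a set of wall cells, and the helper names used here); it is not the course's actual Gridworld code.

```python
# Hypothetical Grid World transition model sketch (not the official CS188 code).
# The intended action succeeds 80% of the time; 10% each it slips to the two
# perpendicular directions. Moves into walls or off the grid leave the agent in place.

DELTAS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
PERPENDICULAR = {"N": ("W", "E"), "S": ("E", "W"), "E": ("N", "S"), "W": ("S", "N")}

def transition_probs(state, action, walls, rows, cols):
    """Return {next_state: probability} for a noisy move in a grid with walls."""
    def move(s, a):
        dr, dc = DELTAS[a]
        r, c = s[0] + dr, s[1] + dc
        # Stay put if the move would leave the grid or hit a wall.
        if not (0 <= r < rows and 0 <= c < cols) or (r, c) in walls:
            return s
        return (r, c)

    side1, side2 = PERPENDICULAR[action]
    probs = {}
    for a, p in [(action, 0.8), (side1, 0.1), (side2, 0.1)]:
        s_next = move(state, a)
        probs[s_next] = probs.get(s_next, 0.0) + p
    return probs

# Example: from (1, 1) in a 3x4 grid with a wall at (1, 2), try to go North.
print(transition_probs((1, 1), "N", walls={(1, 2)}, rows=3, cols=4))
```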
Recap: MDPs
▪ Markov decision processes (a minimal encoding sketch follows this slide):
▪ States S
▪ Actions A
▪ Transitions P(s’|s,a) (or T(s,a,s’))
▪ Rewards R(s,a,s’) (and discount γ)
▪ Start state s0
▪ Quantities:
▪ Policy = map of states to actions
▪ Utility = sum of discounted rewards
▪ Values = expected future utility from a state (max node)
▪ Q-Values = expected future utility from a q-state (chance node)
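As a concrete companion to the recap, here is one possible way to package these ingredients in Python. The field names and the dictionary encoding (T[(s, a)] as a list of (next state, probability) pairs, R[(s, a, s')] as a number) are assumptions made for illustration, and the toy numbers are made up.

```python
# A minimal, hypothetical MDP container matching the recap ingredients:
# states S, actions A, transitions T(s,a,s'), rewards R(s,a,s'), discount γ, start s0.
from dataclasses import dataclass

@dataclass
class MDP:
    states: list          # S
    actions: dict         # actions[s] = list of actions available in s
    T: dict               # T[(s, a)] = list of (s_next, prob) pairs
    R: dict               # R[(s, a, s_next)] = reward
    gamma: float = 0.9    # discount γ
    start: object = None  # start state s0

# A tiny two-state toy just to show the encoding (the numbers are made up).
toy = MDP(
    states=["s0", "s1"],
    actions={"s0": ["go"], "s1": ["stay"]},
    T={("s0", "go"): [("s1", 1.0)], ("s1", "stay"): [("s1", 1.0)]},
    R={("s0", "go", "s1"): 1.0, ("s1", "stay", "s1"): 0.0},
    gamma=0.9,
    start="s0",
)
```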
Optimal Quantities
▪ The value (utility) of a state s:
V*(s) = expected utility starting in s and acting optimally
▪ The value (utility) of a q-state (s,a):
Q*(s,a) = expected utility starting out having taken action a from state s and (thereafter) acting optimally
▪ The optimal policy:
π*(s) = optimal action from state s
[Diagram: expectimax tree, where s is a state, (s, a) is a q-state, and (s,a,s’) is a transition]
[Demo: gridworld values (L9D1)]
Gridworld Values V*
Gridworld: Q*
The Bellman Equations
How to be optimal:
Step 1: Take correct first action
Step 2: Keep being optimal
The Bellman Equations
▪ Definition of “optimal utility” via the expectimax recurrence gives a simple one-step lookahead relationship amongst optimal utility values:
V*(s) = max_a Q*(s,a)
Q*(s,a) = Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V*(s’) ]
so that V*(s) = max_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V*(s’) ]
▪ These are the Bellman equations, and they characterize
optimal values in a way we’ll use over and over
Value Iteration
▪ Bellman equations characterize the optimal values:
V*(s) = max_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V*(s’) ]
▪ Value iteration computes them by repeatedly applying the update (a code sketch follows this slide):
V_{k+1}(s) ← max_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V_k(s’) ]
▪ Value iteration is just a fixed point solution method
▪ … though the Vk vectors are also interpretable as time-limited values
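Below is a minimal sketch of the value iteration update V_{k+1}(s) = max_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V_k(s’) ], assuming the same hypothetical dictionary encoding as before (T[(s, a)] as a list of (s’, prob) pairs, R[(s, a, s’)] as a number); it is not the course's reference implementation.

```python
# Sketch of value iteration, assuming the dictionary encoding used earlier:
# T[(s, a)] = [(s_next, prob), ...], R[(s, a, s_next)] = reward.

def q_value(s, a, V, T, R, gamma):
    """One-step lookahead: Q(s,a) = Σ_s' T(s,a,s') [R(s,a,s') + γ V(s')]."""
    return sum(p * (R[(s, a, s2)] + gamma * V[s2]) for s2, p in T[(s, a)])

def value_iteration(states, actions, T, R, gamma, tol=1e-6):
    """Iterate V_{k+1}(s) = max_a Q_k(s, a) from V_0 = 0 until the values stop changing."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(q_value(s, a, V, T, R, gamma) for a in actions[s])
                 if actions[s] else 0.0
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```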
Convergence*
▪ How do we know the Vk vectors are going to converge?
▪ Case 1: If the tree has maximum depth M, then VM holds the actual untruncated values
▪ Case 2: If the discount is less than 1
▪ Sketch: For any state, Vk and Vk+1 can be viewed as depth-(k+1) expectimax results in nearly identical search trees
▪ The difference is that on the bottom layer, Vk+1 has actual rewards while Vk has zeros
▪ That last layer is at best all RMAX and at worst all RMIN
▪ But everything that far out is discounted by γ^k
▪ So Vk and Vk+1 differ by at most γ^k max|R|
▪ So as k increases, the values converge
Policy Methods
Policy Evaluation
Fixed Policies
[Diagram: two expectimax trees. Left, “Do the optimal action”: max over all actions a from state s. Right, “Do what π says to do”: only the action π(s) is taken from s.]
▪ Expectimax trees max over all actions to compute the optimal values
▪ If we fixed some policy π(s), then the tree would be simpler – only one action per state
▪ … though the tree’s value would depend on which policy we fixed
Utilities for a Fixed Policy
▪ Another basic operation: compute the utility of a state s under a fixed (generally non-optimal) policy
▪ Define the utility of a state s, under a fixed policy π:
Vπ(s) = expected total discounted rewards starting in s and following π
▪ Recursive relation (one-step look-ahead / Bellman equation):
Vπ(s) = Σ_s’ T(s,π(s),s’) [ R(s,π(s),s’) + γ Vπ(s’) ]
Example: Policy Evaluation
[Two fixed policies compared on Gridworld: “Always Go Right” vs. “Always Go Forward”, and the resulting Vπ values]
Policy Evaluation
▪ How do we calculate the V’s for a fixed policy π?
▪ Idea 1: Turn the recursive Bellman equations into updates (like value iteration):
Vπ_{k+1}(s) ← Σ_s’ T(s,π(s),s’) [ R(s,π(s),s’) + γ Vπ_k(s’) ]
▪ Efficiency: O(S²) per iteration
▪ Idea 2: Without the maxes, the Bellman equations are just a linear system
▪ Solve with Matlab (or your favorite linear system solver); a code sketch of both ideas follows this slide
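Here is a short sketch of both ideas under the same assumed dictionary encoding: Idea 1 as repeated fixed-policy Bellman updates, and Idea 2 as a direct solve of the linear system (I − γ P_π) V = r_π, with numpy standing in for Matlab. The representation and function names are illustrative assumptions.

```python
import numpy as np

# Sketch of both policy evaluation ideas, assuming the dictionary encoding used earlier:
# T[(s, a)] = [(s_next, prob), ...], R[(s, a, s_next)] = reward, pi[s] = chosen action.

def policy_evaluation_iterative(states, pi, T, R, gamma, tol=1e-8):
    """Idea 1: iterate V(s) <- Σ_s' T(s,π(s),s') [R(s,π(s),s') + γ V(s')]."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: sum(p * (R[(s, pi[s], s2)] + gamma * V[s2])
                        for s2, p in T[(s, pi[s])])
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

def policy_evaluation_linear(states, pi, T, R, gamma):
    """Idea 2: with no max, solve the linear system (I - γ P_π) V = r_π directly."""
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    P = np.zeros((n, n))   # P_π[i, j] = T(s_i, π(s_i), s_j)
    r = np.zeros(n)        # r_π[i]    = expected one-step reward from s_i under π
    for s in states:
        for s2, p in T[(s, pi[s])]:
            P[idx[s], idx[s2]] += p
            r[idx[s]] += p * R[(s, pi[s], s2)]
    V = np.linalg.solve(np.eye(n) - gamma * P, r)
    return {s: float(V[idx[s]]) for s in states}
```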
Policy Extraction
Computing Actions from Values
▪ Let’s imagine we have the optimal values V*(s)
▪ How should we act?
▪ It’s not obvious!
▪ We need to do a mini-expectimax (one step):
π*(s) = argmax_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V*(s’) ]
▪ This is called policy extraction, since it gets the policy implied by the values
Computing Actions from Q-Values
▪ Let’s imagine we have the optimal q-values Q*(s,a)
▪ How should we act?
▪ Completely trivial to decide:
π*(s) = argmax_a Q*(s,a)
▪ Important lesson: actions are easier to select from q-values than values! (see the sketch below)
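A minimal sketch of both cases under the same assumed encoding: extracting a policy from values requires the one-step expectimax (and hence the model T and R), while extracting it from q-values is a plain argmax.

```python
# Sketch of policy extraction, assuming the dictionary encoding used earlier.
# From values we need a one-step expectimax (and the model T, R);
# from q-values a plain argmax suffices.

def policy_from_values(states, actions, T, R, gamma, V):
    """π(s) = argmax_a Σ_s' T(s,a,s') [R(s,a,s') + γ V(s')]."""
    def q(s, a):
        return sum(p * (R[(s, a, s2)] + gamma * V[s2]) for s2, p in T[(s, a)])
    return {s: max(actions[s], key=lambda a: q(s, a)) for s in states}

def policy_from_q_values(states, actions, Q):
    """π(s) = argmax_a Q(s, a) -- no model or lookahead needed."""
    return {s: max(actions[s], key=lambda a: Q[(s, a)]) for s in states}
```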
Policy Iteration
Problems with Value Iteration
▪ Value iteration repeats the Bellman updates:
V_{k+1}(s) ← max_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ V_k(s’) ]
▪ Problem 1: It’s slow – O(S²A) per iteration
▪ Problem 2: The “max” at each state rarely changes
▪ Problem 3: The policy often converges long before the values
[Demo: value iteration (L9D2)]
[Demo slides: value iteration values on Gridworld after k = 0, 1, 2, …, 12, and 100 iterations; Noise = 0.2, Discount = 0.9, Living reward = 0]
Policy Iteration
▪ Alternative approach for optimal values:
▪ Step 1: Policy evaluation: calculate utilities for some fixed policy (not optimal utilities!) until convergence
▪ Step 2: Policy improvement: update policy using one-step look-ahead with resulting converged (but not optimal!) utilities as future values
▪ Repeat steps until policy converges
▪ This is policy iteration
▪ It’s still optimal!
▪ Can converge (much) faster under some conditions
Policy Iteration
▪ Evaluation: For fixed current policy π, find values with policy evaluation:
▪ Iterate until values converge:
Vπ_{k+1}(s) ← Σ_s’ T(s,π(s),s’) [ R(s,π(s),s’) + γ Vπ_k(s’) ]
▪ Improvement: For fixed values, get a better policy using policy extraction
▪ One-step look-ahead (a code sketch follows this slide):
π_new(s) = argmax_a Σ_s’ T(s,a,s’) [ R(s,a,s’) + γ Vπ(s’) ]
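Putting the two steps together, here is a hedged sketch of policy iteration with the same assumed dictionary encoding; the evaluation step iterates the fixed-policy update to convergence, and the improvement step is one round of policy extraction.

```python
# Sketch of policy iteration, assuming the dictionary encoding used earlier:
# T[(s, a)] = [(s_next, prob), ...], R[(s, a, s_next)] = reward.

def q_value(s, a, V, T, R, gamma):
    return sum(p * (R[(s, a, s2)] + gamma * V[s2]) for s2, p in T[(s, a)])

def policy_iteration(states, actions, T, R, gamma, tol=1e-8):
    pi = {s: actions[s][0] for s in states}   # start from an arbitrary policy
    while True:
        # Evaluation: iterate the fixed-policy Bellman update until convergence.
        V = {s: 0.0 for s in states}
        while True:
            V_new = {s: q_value(s, pi[s], V, T, R, gamma) for s in states}
            delta = max(abs(V_new[s] - V[s]) for s in states)
            V = V_new
            if delta < tol:
                break
        # Improvement: one-step look-ahead with the converged (not optimal!) values.
        pi_new = {s: max(actions[s], key=lambda a: q_value(s, a, V, T, R, gamma))
                  for s in states}
        if pi_new == pi:                      # policy stable, so it is optimal
            return pi, V
        pi = pi_new
```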
Comparison
▪ Both value iteration and policy iteration compute the same thing (all optimal values)
▪ In value iteration:
▪ Every iteration updates both the values and (implicitly) the policy
▪ We don’t track the policy, but taking the max over actions implicitly recomputes it
▪ In policy iteration:
▪ We do several passes that update utilities with fixed policy (each pass is fast because we
consider only one action, not all of them)
▪ After the policy is evaluated, a new policy is chosen (slow like a value iteration pass)
▪ The new policy will be better (or we’re done)
▪ Both are dynamic programs for solving MDPs
Summary: MDP Algorithms
▪ So you want to….
▪ Compute optimal values: use value iteration or policy iteration
▪ Compute values for a particular policy: use policy evaluation
▪ Turn your values into a policy: use policy extraction (one-step lookahead)
▪ These all look the same!
▪ They basically are – they are all variations of Bellman updates
▪ They all use one-step lookahead expectimax fragments
▪ They differ only in whether we plug in a fixed policy or max over actions
Double Bandits
Double-Bandit MDP
▪ Actions: Blue, Red
▪ States: Win (W), Lose (L)
▪ No discount, 100 time steps
▪ Both states have the same value
[Diagram: from either state, Blue pays $1 with probability 1.0; Red pays $2 with probability 0.75 (landing in W) and $0 with probability 0.25 (landing in L)]
Offline Planning
▪ Solving MDPs is offline planning
▪ You determine all quantities through computation
▪ You need to know the details of the MDP
▪ You do not actually play the game!
▪ Same setup: no discount, 100 time steps, both states have the same value
▪ Value of playing Red: 150
▪ Value of playing Blue: 100
(a quick numerical check follows this slide)
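As a quick check of the offline computation, a few lines of Python reproduce the two values above from the known model (no discount, 100 steps).

```python
# Quick offline check of the double-bandit values (no discount, 100 time steps).
steps = 100
blue = steps * 1.0                        # Blue always pays $1
red = steps * (0.75 * 2.0 + 0.25 * 0.0)   # Red pays $2 w.p. 0.75, $0 w.p. 0.25
print(blue, red)                          # 100.0 150.0
```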
Let’s Play!
$2 $2 $0 $2 $2
$2 $2 $0 $0 $0
Online Planning
▪ Rules changed! Red’s win chance is different.
[Diagram: same double-bandit MDP, but Red’s payout probabilities are now unknown (“??”); Blue still pays $1 with probability 1.0]
Let’s Play!
$0 $0 $0 $2 $0
$2 $0 $0 $0 $0
What Just Happened?
▪ That wasn’t planning, it was learning!
▪ Specifically, reinforcement learning
▪ There was an MDP, but you couldn’t solve it with just computation
▪ You needed to actually act to figure it out
▪ Important ideas in reinforcement learning that came up
▪ Exploration: you have to try unknown actions to get information
▪ Exploitation: eventually, you have to use what you know
▪ Regret: even if you learn intelligently, you make mistakes
▪ Sampling: because of chance, you have to try things repeatedly
▪ Difficulty: learning can be much harder than solving a known MDP
Next Time: Reinforcement Learning!
Asynchronous Value Iteration*
▪ In value iteration, we update every state in each iteration
▪ Actually, any sequence of Bellman updates will converge if every state is visited infinitely often
▪ In fact, we can update the policy as seldom or often as we like, and we will still converge
▪ Idea: Update states whose value we expect to change:
If |V_{k+1}(s) − V_k(s)| is large, then update the predecessors of s (see the sketch below)
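A minimal sketch of this asynchronous, prioritized flavor of value iteration, again under the assumed dictionary encoding; the threshold, the priority rule, and the predecessor bookkeeping are illustrative choices rather than anything prescribed by the slides.

```python
import heapq
import itertools

# Sketch of asynchronous, prioritized value iteration under the assumed encoding:
# T[(s, a)] = [(s_next, prob), ...], R[(s, a, s_next)] = reward.

def q_value(s, a, V, T, R, gamma):
    return sum(p * (R[(s, a, s2)] + gamma * V[s2]) for s2, p in T[(s, a)])

def prioritized_value_iteration(states, actions, T, R, gamma, theta=1e-5):
    """Update one state at a time; when its value moves a lot, requeue its predecessors."""
    # Predecessors: states s that can reach s2 under some action.
    preds = {s: set() for s in states}
    for (s, a), outcomes in T.items():
        for s2, _ in outcomes:
            preds.setdefault(s2, set()).add(s)

    V = {s: 0.0 for s in states}
    tie = itertools.count()                        # tie-breaker so the heap never compares states
    queue = [(0.0, next(tie), s) for s in states]  # start with every state queued
    heapq.heapify(queue)
    while queue:
        _, _, s = heapq.heappop(queue)
        new_v = max(q_value(s, a, V, T, R, gamma) for a in actions[s])
        change = abs(new_v - V[s])
        V[s] = new_v
        if change > theta:
            # s moved enough that its predecessors' backups are now stale.
            for p in preds[s]:
                heapq.heappush(queue, (-change, next(tie), p))
    return V
```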