"cell_type": "raw",
"metadata": {},
"source": [
"# Markov decision processes (MDPs)\n",
"\n",
"This IPy notebook acts as supporting material for topics covered in **Chapter 17 Making Complex Decisions** of the book* Artificial Intelligence: A Modern Approach*. We makes use of the implementations in mdp.py module. This notebook also includes a brief summary of the main topics as a review. Let us import everything from the mdp module to get started."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from mdp import *\n",
"from notebook import psource, pseudocode"
"cell_type": "raw",
"## CONTENTS\n",
"\n",
"* Overview\n",
"* MDP\n",
"* Grid MDP\n",
"* Value Iteration\n",
" * Value Iteration Visualization\n",
"* Policy Iteration"
"cell_type": "raw",
"metadata": {},
"source": [
"## OVERVIEW\n",
"\n",
"Before we start playing with the actual implementations let us review a couple of things about MDPs.\n",
"\n",
"- A stochastic process has the **Markov property** if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it.\n",
"\n",
" -- Source: [Wikipedia](https://en.wikipedia.org/wiki/Markov_property)\n",
"\n",
"Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state.\n",
" \n",
"\n",
"- MDPs help us deal with fully-observable and non-deterministic/stochastic environments. For dealing with partially-observable and stochastic cases we make use of generalization of MDPs named POMDPs (partially observable Markov decision process).\n",
"\n",
"Our overall goal to solve a MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"## MDP\n",
"\n",
"To begin with let us look at the implementation of MDP class defined in mdp.py The docstring tells us what all is required to define a MDP namely - set of states, actions, initial state, transition model, and a reward function. Each of these are implemented as methods. Do not close the popup so that you can follow along the description of code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(MDP)"
"cell_type": "raw",
"metadata": {},
"source": [
"The **_ _init_ _** method takes in the following parameters:\n",
"\n",
"- init: the initial state.\n",
"- actlist: List of actions possible in each state.\n",
"- terminals: List of terminal states where only possible action is exit\n",
"- gamma: Discounting factor. This makes sure that delayed rewards have less value compared to immediate ones.\n",
"\n",
"**R** method returns the reward for each state by using the self.reward dict.\n",
"\n",
"**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n",
"\n",
"**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"Now let us implement the simple MDP in the image below. States A, B have actions X, Y available in them. Their probabilities are shown just above the arrows. We start with using MDP as base class for our CustomMDP. Obviously we need to make a few changes to suit our case. We make use of a transition matrix as our transitions are not very simple.\n",
"<img src=\"files/images/mdp-a.png\">"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"metadata": {
"collapsed": true
},
"source": [
"# Transition Matrix as nested dict. State -> Actions in state -> List of (Probability, State) tuples\n",
" \"X\": [(0.3, \"A\"), (0.7, \"B\")],\n",
" \"Y\": [(1.0, \"A\")]\n",
" \"X\": {(0.8, \"End\"), (0.2, \"B\")},\n",
" \"Y\": {(1.0, \"A\")}\n",
" },\n",
" \"End\": {}\n",
"}\n",
"\n",
"init = \"A\"\n",
"\n",
"terminals = [\"End\"]\n",
"\n",
"rewards = {\n",
" \"A\": 5,\n",
" \"B\": -10,\n",
" \"End\": 100\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class CustomMDP(MDP):\n",
" def __init__(self, init, terminals, transition_matrix, reward = None, gamma=.9):\n",
" # All possible actions.\n",
" actlist = []\n",
" for state in transition_matrix.keys():\n",
" actlist.extend(transition_matrix[state])\n",
" MDP.__init__(self, init, actlist, terminals, transition_matrix, reward, gamma=gamma)\n",
"\n",
" def T(self, state, action):\n",
" if action is None:\n",
" return [(0.0, state)]\n",
" else: \n",
" return self.t[state][action]"
"cell_type": "raw",
"metadata": {},
"source": [
"Finally we instantize the class with the parameters for our MDP in the picture."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"our_mdp = CustomMDP(init, terminals, t, rewards, gamma=.9)"
"cell_type": "raw",
"With this we have successfully represented our MDP. Later we will look at ways to solve this MDP."
"cell_type": "raw",
"Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in in **Fig 17.1** of the AIMA Book. We assume for now that the environment is _fully observable_, so that the agent always knows where it is. The code should be easy to understand if you have gone through the CustomMDP example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(GridMDP)"
"cell_type": "raw",
"metadata": {},
"source": [
"The **_ _init_ _** method takes **grid** as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states.\n",
"\n",
"**go** method returns the state by going in particular direction by using vector_add.\n",
"\n",
"**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n",
"\n",
"**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n",
"\n",
"**to_arrows** are used for representing the policy in a grid like format."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"We can create a GridMDP like the one in **Fig 17.1** as follows: \n",
"\n",
" GridMDP([[-0.04, -0.04, -0.04, +1],\n",
" [-0.04, None, -0.04, -1],\n",
" [-0.04, -0.04, -0.04, -0.04]],\n",
" terminals=[(3, 2), (3, 1)])\n",
" \n",
"In fact the **sequential_decision_environment** in mdp module has been instantized using the exact same code."
]
},
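{
"cell_type": "raw",
"metadata": {},
"source": [
"As a quick, illustrative check (not part of the original notebook), we can peek at a couple of attributes of this object. The calls below assume the reward and terminal states are exposed as the R method and the terminals attribute of the MDP base class, as described earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check: reward of a non-terminal state and the list of terminal states\n",
"print(sequential_decision_environment.R((0, 0)))   # expected: -0.04\n",
"print(sequential_decision_environment.terminals)   # expected: [(3, 2), (3, 1)]"
]
},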
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sequential_decision_environment"
"cell_type": "raw",
"metadata": {
"collapsed": true
},
"source": [
"\n",
"Now that we have looked how to represent MDPs. Let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understanding it better.\n",
"\n",
<<<<<<< HEAD
"We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy $pi$. The value or the utility of a state is given by\n",
=======
"We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy $\\pi$. The value or the utility of a state is given by\n",
>>>>>>> 9d5ec3c0e1d0c03cd1333afcbd6bbc35daf30c21
"\n",
"$$U(s)=R(s)+\\gamma\\max_{a\\epsilon A(s)}\\sum_{s'} P(s'\\ |\\ s,a)U(s')$$\n",
"\n",
"This is called the Bellman equation. The algorithm Value Iteration (**Fig. 17.4** in the book) relies on finding solutions of this Equation. The intuition Value Iteration works is because values propagate through the state space by means of local updates. This point will we more clear after we encounter the visualisation. For more information you can refer to **Section 17.2** of the book. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(value_iteration)"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"It takes as inputs two parameters, an MDP to solve and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities. <br> Value Iteration starts with arbitrary initial values for the utilities, calculates the right side of the Bellman equation and plugs it into the left hand side, thereby updating the utility of each state from the utilities of its neighbors. \n",
"This is repeated until equilibrium is reached. \n",
"It works on the principle of _Dynamic Programming_ - using precomputed information to simplify the subsequent computation. \n",
"If $U_i(s)$ is the utility value for state $s$ at the $i$ th iteration, the iteration step, called Bellman update, looks like this:\n",
"\n",
"$$ U_{i+1}(s) \\leftarrow R(s) + \\gamma \\max_{a \\epsilon A(s)} \\sum_{s'} P(s'\\ |\\ s,a)U_{i}(s') $$\n",
"\n",
"As you might have noticed, `value_iteration` has an infinite loop. How do we decide when to stop iterating? \n",
"The concept of _contraction_ successfully explains the convergence of value iteration. \n",
"Refer to **Section 17.2.3** of the book for a detailed explanation. \n",
"In the algorithm, we calculate a value $\\delta$ that measures the difference in the utilities of the current time step and the previous time step. \n",
"\n",
"$$\\delta = \\max{(\\delta, \\begin{vmatrix}U_{i + 1}(s) - U_i(s)\\end{vmatrix})}$$\n",
"\n",
"This value of delta decreases as the values of $U_i$ converge.\n",
"We terminate the algorithm if the $\\delta$ value is less than a threshold value determined by the hyperparameter _epsilon_.\n",
"\n",
"$$\\delta \\lt \\epsilon \\frac{(1 - \\gamma)}{\\gamma}$$\n",
"\n",
"To summarize, the Bellman update is a _contraction_ by a factor of $\\gamma$ on the space of utility vectors. \n",
"Hence, from the properties of contractions in general, it follows that `value_iteration` always converges to a unique solution of the Bellman equations whenever $\\gamma$ is less than 1.\n",
"We then terminate the algorithm when a reasonable approximation is achieved.\n",
"In practice, it often occurs that the policy $\\pi$ becomes optimal long before the utility function converges. For the given 4 x 3 environment with $\\gamma = 0.9$, the policy $\\pi$ is optimal when $i = 4$ (at the 4th iteration), even though the maximum error in the utility function is still 0.46. This can be seen in **Figure 17.6** in the book. Hence, to increase computational efficiency, we often use another method for solving MDPs called Policy Iteration, which we will see in the later part of this notebook. \n",
"<br>For now, let us solve the **sequential_decision_environment** GridMDP using `value_iteration`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"value_iteration(sequential_decision_environment)"
]
},
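{
"cell_type": "raw",
"metadata": {},
"source": [
"As an illustrative aside (not part of the original notebook): with $\\gamma = 0.9$ and $\\epsilon = 0.001$, the termination threshold is $\\epsilon(1-\\gamma)/\\gamma \\approx 0.00011$, so the returned utilities are already very close to the fixed point. The sketch below runs `value_iteration` with a loose and a tight epsilon (passed positionally, as in the call later in this notebook) and compares the results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: a looser epsilon stops earlier, yet the utilities\n",
"# differ by well under the loose epsilon of 0.1\n",
"u_loose = value_iteration(sequential_decision_environment, .1)\n",
"u_tight = value_iteration(sequential_decision_environment, .0001)\n",
"print(max(abs(u_loose[s] - u_tight[s]) for s in u_loose))"
]
},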
"cell_type": "raw",
"metadata": {},
"source": [
"The pseudocode for the algorithm:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pseudocode(\"Value-Iteration\")"
]
},
"cell_type": "raw",
"metadata": {},
"source": [
"### AIMA3e\n",
"__function__ VALUE-ITERATION(_mdp_, _ε_) __returns__ a utility function \n",
" __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n",
"      rewards _R_(_s_), discount _γ_ \n",
"   _ε_, the maximum error allowed in the utility of any state \n",
" __local variables__: _U_, _U′_, vectors of utilities for states in _S_, initially zero \n",
"        _δ_, the maximum change in the utility of any state in an iteration \n",
"\n",
" __repeat__ \n",
"   _U_ ← _U′_; _δ_ ← 0 \n",
"   __for each__ state _s_ in _S_ __do__ \n",
"     _U′_\\[_s_\\] ← _R_(_s_) + _γ_ max<sub>_a_ ∈ _A_(_s_)</sub> Σ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n",
"     __if__ | _U′_\\[_s_\\] − _U_\\[_s_\\] | > _δ_ __then__ _δ_ ← | _U′_\\[_s_\\] − _U_\\[_s_\\] | \n",
" __until__ _δ_ < _ε_(1 − _γ_)/_γ_ \n",
" __return__ _U_ \n",
"\n",
"---\n",
"__Figure ??__ The value iteration algorithm for calculating utilities of states. The termination condition is from Equation (__??__)."
"cell_type": "raw",
"metadata": {},
"source": [
"## VALUE ITERATION VISUALIZATION\n",
"\n",
"To illustrate that values propagate out of states let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"source": [
"def value_iteration_instru(mdp, iterations=20):\n",
" U_over_time = []\n",
" U1 = {s: 0 for s in mdp.states}\n",
" R, T, gamma = mdp.R, mdp.T, mdp.gamma\n",
" for _ in range(iterations):\n",
" U = U1.copy()\n",
" for s in mdp.states:\n",
" U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])\n",
" for a in mdp.actions(s)])\n",
" U_over_time.append(U)\n",
" return U_over_time"
]
},
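{
"cell_type": "raw",
"metadata": {},
"source": [
"A quick, illustrative check (not part of the original notebook): after enough iterations, the final stored estimate produced by the instrumented version should be very close to the converged utilities returned by `value_iteration`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check: the final stored estimate is close to the converged utilities\n",
"U_last = value_iteration_instru(sequential_decision_environment, iterations=30)[-1]\n",
"U_converged = value_iteration(sequential_decision_environment)\n",
"print(max(abs(U_last[s] - U_converged[s]) for s in U_converged))"
]
},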
{
"cell_type": "raw",
"metadata": {},
"source": [
"Next, we define a function to create the visualisation from the utilities returned by **value_iteration_instru**. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit [ipywidgets.readthedocs.io](http://ipywidgets.readthedocs.io)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"columns = 4\n",
"rows = 3\n",
"U_over_time = value_iteration_instru(sequential_decision_environment)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"from notebook import make_plot_grid_step_function\n",
"\n",
"plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import ipywidgets as widgets\n",
"from IPython.display import display\n",
"iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0)\n",
"w=widgets.interactive(plot_grid_step,iteration=iteration_slider)\n",
"\n",
"visualize_callback = make_visualize(iteration_slider)\n",
"\n",
"visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n",
"time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n",
"a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n",
"display(a)"
"cell_type": "raw",
"metadata": {},
"source": [
"Move the slider above to observe how the utility changes across iterations. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The **Visualize Button** will automatically animate the slider for you. The **Extra Delay Box** allows you to set time delay in seconds upto one second for each time step. There is also an interactive editor for grid-world problems `grid_mdp.py` in the gui folder for you to play around with."
"cell_type": "raw",
"metadata": {
"collapsed": true
},
"source": [
"# POLICY ITERATION\n",
"\n",
"We have already seen that value iteration converges to the optimal policy long before it accurately estimates the utility function. \n",
"If one action is clearly better than all the others, then the exact magnitude of the utilities in the states involved need not be precise. \n",
"The policy iteration algorithm works on this insight. \n",
"The algorithm executes two fundamental steps:\n",
"* **Policy evaluation**: Given a policy _πᵢ_, calculate _Uᵢ = U(πᵢ)_, the utility of each state if _πᵢ_ were to be executed.\n",
"* **Policy improvement**: Calculate a new policy _πᵢ₊₁_ using one-step look-ahead based on the utility values calculated.\n",
"\n",
"The algorithm terminates when the policy improvement step yields no change in the utilities. \n",
"Refer to **Figure 17.6** in the book to see how this is an improvement over value iteration.\n",
"We now have a simplified version of the Bellman equation\n",
"\n",
"$$U_i(s) = R(s) + \\gamma \\sum_{s'}P(s'\\ |\\ s, \\pi_i(s))U_i(s')$$\n",
"\n",
"An important observation in this equation is that this equation doesn't have the `max` operator, which makes it linear.\n",
"For _n_ states, we have _n_ linear equations with _n_ unknowns, which can be solved exactly in time _**O(n³)**_.\n",
"For more implementational details, have a look at **Section 17.3**.\n",
"Let us now look at how the expected utility is found and how `policy_iteration` is implemented."
]
},
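{
"cell_type": "raw",
"metadata": {},
"source": [
"Before diving into the implementation, here is a minimal sketch (not part of mdp.py) of what exact policy evaluation looks like as a linear system, using the small custom MDP from earlier in this notebook. The fixed policy `pi` below is chosen arbitrarily for illustration, and terminal states are treated as having no future reward."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Exact policy evaluation: solve (I - gamma * T_pi) U = R for a fixed policy pi\n",
"states = ['A', 'B', 'End']\n",
"pi = {'A': 'X', 'B': 'X', 'End': None}   # an arbitrary fixed policy for illustration\n",
"gamma = 0.9\n",
"R_vec = np.array([rewards[s] for s in states])\n",
"\n",
"T_pi = np.zeros((len(states), len(states)))\n",
"for i, s in enumerate(states):\n",
"    if s in terminals:\n",
"        continue   # terminal state: no reward beyond R(s)\n",
"    for p, s1 in t[s][pi[s]]:\n",
"        T_pi[i, states.index(s1)] += p\n",
"\n",
"U = np.linalg.solve(np.eye(len(states)) - gamma * T_pi, R_vec)\n",
"print(dict(zip(states, U)))"
]
},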
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(expected_utility)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(policy_iteration)"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"<br>Fortunately, it is not necessary to do _exact_ policy evaluation. \n",
"The utilities can instead be reasonably approximated by performing some number of simplified value iteration steps.\n",
"The simplified Bellman update equation for the process is\n",
"\n",
"$$U_{i+1}(s) \\leftarrow R(s) + \\gamma\\sum_{s'}P(s'\\ |\\ s,\\pi_i(s))U_{i}(s')$$\n",
"\n",
"and this is repeated _k_ times to produce the next utility estimate. This is called _modified policy iteration_."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(policy_evaluation)"
]
},
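{
"cell_type": "raw",
"metadata": {},
"source": [
"As an illustrative usage sketch (not part of the original notebook), assuming the (pi, U, mdp, k) parameter order shown above: starting from all-zero utilities and a fixed policy, a handful of simplified backups already gives a reasonable utility estimate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: approximate evaluation of a fixed policy with k simplified backups\n",
"fixed_pi = policy_iteration(sequential_decision_environment)  # also called below\n",
"U0 = {s: 0 for s in sequential_decision_environment.states}\n",
"U_approx = policy_evaluation(fixed_pi, U0, sequential_decision_environment, 20)\n",
"print(U_approx[(0, 0)])"
]
},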
{
"cell_type": "raw",
"metadata": {},
"source": [
"Let us now solve **`sequential_decision_environment`** using `policy_iteration`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"policy_iteration(sequential_decision_environment)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pseudocode('Policy-Iteration')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AIMA3e\n",
"__function__ POLICY-ITERATION(_mdp_) __returns__ a policy \n",
" __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_) \n",
" __local variables__: _U_, a vector of utilities for states in _S_, initially zero \n",
"        _π_, a policy vector indexed by state, initially random \n",
"\n",
" __repeat__ \n",
"   _U_ ← POLICY\\-EVALUATION(_π_, _U_, _mdp_) \n",
"   _unchanged?_ ← true \n",
"   __for each__ state _s_ __in__ _S_ __do__ \n",
"     __if__ max<sub>_a_ ∈ _A_(_s_)</sub> Σ<sub>_s′_</sub> _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] > Σ<sub>_s′_</sub> _P_(_s′_ | _s_, _π_\\[_s_\\]) _U_\\[_s′_\\] __then do__ \n",
"       _π_\\[_s_\\] ← argmax<sub>_a_ ∈ _A_(_s_)</sub> Σ<sub>_s′_</sub> _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n",
"       _unchanged?_ ← false \n",
" __until__ _unchanged?_ \n",
" __return__ _π_ \n",
"\n",
"---\n",
"__Figure ??__ The policy iteration algorithm for calculating an optimal policy."
"cell_type": "raw",
"metadata": {
"collapsed": true
},
"source": [
"## Sequential Decision Problems\n",
"\n",
"Now that we have the tools required to solve MDPs, let us see how Sequential Decision Problems can be solved step by step and how a few built-in tools in the GridMDP class help us better analyse the problem at hand. \n",
"As always, we will work with the grid world from **Figure 17.1** from the book.\n",
"\n",
"<br>This is the environment for our agent.\n",
"We assume for now that the environment is _fully observable_, so that the agent always knows where it is.\n",
"We also assume that the transitions are **Markovian**, that is, the probability of reaching state $s'$ from state $s$ depends only on $s$ and not on the history of earlier states.\n",
"Almost all stochastic decision problems can be reframed as a Markov Decision Process just by tweaking the definition of a _state_ for that particular problem.\n",
"<br>\n",
"However, the actions of our agent in this environment are unreliable. In other words, the motion of our agent is stochastic. \n",
"<br><br>\n",
"More specifically, the agent may - \n",
"* move correctly in the intended direction with a probability of _0.8_, \n",
"* move $90^\\circ$ to the right of the intended direction with a probability 0.1\n",
"* move $90^\\circ$ to the left of the intended direction with a probability 0.1\n",
"<br><br>\n",
"The agent stays put if it bumps into a wall.\n",
""
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"These properties of the agent are called the transition properties and are hardcoded into the GridMDP class as you can see below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(GridMDP.T)"
]
},
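{
"cell_type": "raw",
"metadata": {},
"source": [
"As a quick, illustrative check (not part of the original notebook), we can query this transition model directly. States are (x, y) coordinates in the grid, and the action (0, 1) means moving up (as encoded in GridMDP), so from the state (1, 0) we expect three possible outcomes with probabilities 0.8, 0.1 and 0.1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check: outcomes of trying to move up from the state (1, 0)\n",
"sequential_decision_environment.T((1, 0), (0, 1))"
]
},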
{
"cell_type": "raw",
"metadata": {},
"source": [
"To completely define our task environment, we need to specify the utility function for the agent. \n",
"This is the function that gives the agent a rough estimate of how good being in a particular state is, or how much _reward_ an agent receives by being in that state.\n",
"The agent then tries to maximize the reward it gets.\n",
"As the decision problem is sequential, the utility function will depend on a sequence of states rather than on a single state.\n",
"For now, we simply stipulate that in each state $s$, the agent receives a finite reward $R(s)$.\n",
"\n",
"For any given state, the actions the agent can take are encoded as given below:\n",
"- Move Up: (0, 1)\n",
"- Move Down: (0, -1)\n",
"- Move Left: (-1, 0)\n",
"- Move Right: (1, 0)\n",
"- Do nothing: `None`\n",
"\n",
"We now wonder what a valid solution to the problem might look like. \n",
"We cannot have fixed action sequences as the environment is stochastic and we can eventually end up in an undesirable state.\n",
"Therefore, a solution must specify what the agent shoulddo for _any_ state the agent might reach.\n",
"<br>\n",
"Such a solution is known as a **policy** and is usually denoted by $\\pi$.\n",
"The **optimal policy** is the policy that yields the highest expected utility an is usually denoted by $\\pi^*$.\n",
"<br>\n",
"The `GridMDP` class has a useful method `to_arrows` that outputs a grid showing the direction the agent should move, given a policy.\n",
"We will use this later to better understand the properties of the environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(GridMDP.to_arrows)"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"This method directly encodes the actions that the agent can take (described above) to characters representing arrows and shows it in a grid format for human visalization purposes. \n",
"It converts the received policy from a `dictionary` to a grid using the `to_grid` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(GridMDP.to_grid)"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"Now that we have all the tools required and a good understanding of the agent and the environment, we consider some cases and see how the agent should behave for each case."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"### Case 1\n",
"---\n",
"R(s) = -0.04 in all states except terminal states"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"source": [
"# Note that this environment is also initialized in mdp.py by default\n",
"sequential_decision_environment = GridMDP([[-0.04, -0.04, -0.04, +1],\n",
" [-0.04, None, -0.04, -1],\n",
" [-0.04, -0.04, -0.04, -0.04]],\n",
" terminals=[(3, 2), (3, 1)])"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"We will use the `best_policy` function to find the best policy for this environment.\n",
"But, as you can see, `best_policy` requires a utility function as well.\n",
"We already know that the utility function can be found by `value_iteration`.\n",
"Hence, our best policy is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"source": [
"pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"We can now use the `to_arrows` method to see how our agent should pick its actions in the environment."
]
},
{
"cell_type": "code",
"execution_count": null,