{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Markov decision processes (MDPs)\n", "\n", "This IPy notebook acts as supporting material for topics covered in **Chapter 17 Making Complex Decisions** of the book* Artificial Intelligence: A Modern Approach*. We makes use of the implementations in mdp.py module. This notebook also includes a brief summary of the main topics as a review. Let us import everything from the mdp module to get started." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from mdp import *\n", "from notebook import psource, pseudocode" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CONTENTS\n", "\n", "* Overview\n", "* MDP\n", "* Grid MDP\n", "* Value Iteration\n", " * Value Iteration Visualization\n", "* Policy Iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OVERVIEW\n", "\n", "Before we start playing with the actual implementations let us review a couple of things about MDPs.\n", "\n", "- A stochastic process has the **Markov property** if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it.\n", "\n", " -- Source: [Wikipedia](https://en.wikipedia.org/wiki/Markov_property)\n", "\n", "Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state.\n", " \n", "\n", "- MDPs help us deal with fully-observable and non-deterministic/stochastic environments. For dealing with partially-observable and stochastic cases we make use of generalization of MDPs named POMDPs (partially observable Markov decision process).\n", "\n", "Our overall goal to solve a MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MDP\n", "\n", "To begin with let us look at the implementation of MDP class defined in mdp.py The docstring tells us what all is required to define a MDP namely - set of states, actions, initial state, transition model, and a reward function. Each of these are implemented as methods. Do not close the popup so that you can follow along the description of code below." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
class MDP:\n",
       "\n",
       "    """A Markov Decision Process, defined by an initial state, transition model,\n",
       "    and reward function. We also keep track of a gamma value, for use by\n",
       "    algorithms. The transition model is represented somewhat differently from\n",
       "    the text. Instead of P(s' | s, a) being a probability number for each\n",
       "    state/state/action triplet, we instead have T(s, a) return a\n",
       "    list of (p, s') pairs. We also keep track of the possible states,\n",
       "    terminal states, and actions for each state. [page 646]"""\n",
       "\n",
       "    def __init__(self, init, actlist, terminals, transitions = {}, reward = None, states=None, gamma=.9):\n",
       "        if not (0 < gamma <= 1):\n",
       "            raise ValueError("An MDP must have 0 < gamma <= 1")\n",
       "\n",
       "        if states:\n",
       "            self.states = states\n",
       "        else:\n",
       "            ## collect states from transitions table\n",
       "            self.states = self.get_states_from_transitions(transitions)\n",
       "            \n",
       "        \n",
       "        self.init = init\n",
       "        \n",
       "        if isinstance(actlist, list):\n",
       "            ## if actlist is a list, all states have the same actions\n",
       "            self.actlist = actlist\n",
       "        elif isinstance(actlist, dict):\n",
       "            ## if actlist is a dict, different actions for each state\n",
       "            self.actlist = actlist\n",
       "        \n",
       "        self.terminals = terminals\n",
       "        self.transitions = transitions\n",
       "        if self.transitions == {}:\n",
       "            print("Warning: Transition table is empty.")\n",
       "        self.gamma = gamma\n",
       "        if reward:\n",
       "            self.reward = reward\n",
       "        else:\n",
       "            self.reward = {s : 0 for s in self.states}\n",
       "        #self.check_consistency()\n",
       "\n",
       "    def R(self, state):\n",
       "        """Return a numeric reward for this state."""\n",
       "        return self.reward[state]\n",
       "\n",
       "    def T(self, state, action):\n",
       "        """Transition model. From a state and an action, return a list\n",
       "        of (probability, result-state) pairs."""\n",
       "        if(self.transitions == {}):\n",
       "            raise ValueError("Transition model is missing")\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       "\n",
       "    def actions(self, state):\n",
       "        """Set of actions that can be performed in this state. By default, a\n",
       "        fixed list of actions, except for terminal states. Override this\n",
       "        method if you need to specialize by state."""\n",
       "        if state in self.terminals:\n",
       "            return [None]\n",
       "        else:\n",
       "            return self.actlist\n",
       "\n",
       "    def get_states_from_transitions(self, transitions):\n",
       "        if isinstance(transitions, dict):\n",
       "            s1 = set(transitions.keys())\n",
       "            s2 = set([tr[1] for actions in transitions.values() \n",
       "                              for effects in actions.values() for tr in effects])\n",
       "            return s1.union(s2)\n",
       "        else:\n",
       "            print('Could not retrieve states from transitions')\n",
       "            return None\n",
       "\n",
       "    def check_consistency(self):\n",
       "        # check that all states in transitions are valid\n",
       "        assert set(self.states) == self.get_states_from_transitions(self.transitions)\n",
       "        # check that init is a valid state\n",
       "        assert self.init in self.states\n",
       "        # check reward for each state\n",
       "        #assert set(self.reward.keys()) == set(self.states)\n",
       "        assert set(self.reward.keys()) == set(self.states)\n",
       "        # check that all terminals are valid states\n",
       "        assert all([t in self.states for t in self.terminals])\n",
       "        # check that probability distributions for all actions sum to 1\n",
       "        for s1, actions in self.transitions.items():\n",
       "            for a in actions.keys():\n",
       "                s = 0\n",
       "                for o in actions[a]:\n",
       "                    s += o[0]\n",
       "                assert abs(s - 1) < 0.001\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(MDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **_ _init_ _** method takes in the following parameters:\n", "\n", "- init: the initial state.\n", "- actlist: List of actions possible in each state.\n", "- terminals: List of terminal states where only possible action is exit\n", "- gamma: Discounting factor. This makes sure that delayed rewards have less value compared to immediate ones.\n", "\n", "**R** method returns the reward for each state by using the self.reward dict.\n", "\n", "**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n", "\n", "**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us implement the simple MDP in the image below. States A, B have actions X, Y available in them. Their probabilities are shown just above the arrows. We start with using MDP as base class for our CustomMDP. Obviously we need to make a few changes to suit our case. We make use of a transition matrix as our transitions are not very simple.\n", "" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Transition Matrix as nested dict. State -> Actions in state -> List of (Probability, State) tuples\n", "t = {\n", " \"A\": {\n", " \"X\": [(0.3, \"A\"), (0.7, \"B\")],\n", " \"Y\": [(1.0, \"A\")]\n", " },\n", " \"B\": {\n", " \"X\": {(0.8, \"End\"), (0.2, \"B\")},\n", " \"Y\": {(1.0, \"A\")}\n", " },\n", " \"End\": {}\n", "}\n", "\n", "init = \"A\"\n", "\n", "terminals = [\"End\"]\n", "\n", "rewards = {\n", " \"A\": 5,\n", " \"B\": -10,\n", " \"End\": 100\n", "}" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "class CustomMDP(MDP):\n", " def __init__(self, init, terminals, transition_matrix, reward = None, gamma=.9):\n", " # All possible actions.\n", " actlist = []\n", " for state in transition_matrix.keys():\n", " actlist.extend(transition_matrix[state])\n", " actlist = list(set(actlist))\n", " MDP.__init__(self, init, actlist, terminals, transition_matrix, reward, gamma=gamma)\n", "\n", " def T(self, state, action):\n", " if action is None:\n", " return [(0.0, state)]\n", " else: \n", " return self.t[state][action]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we instantize the class with the parameters for our MDP in the picture." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "our_mdp = CustomMDP(init, terminals, t, rewards, gamma=.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With this we have successfully represented our MDP. Later we will look at ways to solve this MDP." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## GRID MDP\n", "\n", "Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in in **Fig 17.1** of the AIMA Book. We assume for now that the environment is _fully observable_, so that the agent always knows where it is. 
The code should be easy to understand if you have gone through the CustomMDP example." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
class GridMDP(MDP):\n",
       "\n",
       "    """A two-dimensional grid MDP, as in [Figure 17.1]. All you have to do is\n",
       "    specify the grid as a list of lists of rewards; use None for an obstacle\n",
       "    (unreachable state). Also, you should specify the terminal states.\n",
       "    An action is an (x, y) unit vector; e.g. (1, 0) means move east."""\n",
       "\n",
       "    def __init__(self, grid, terminals, init=(0, 0), gamma=.9):\n",
       "        grid.reverse()  # because we want row 0 on bottom, not on top\n",
       "        reward = {}\n",
       "        states = set()\n",
       "        self.rows = len(grid)\n",
       "        self.cols = len(grid[0])\n",
       "        self.grid = grid\n",
       "        for x in range(self.cols):\n",
       "            for y in range(self.rows):\n",
       "                if grid[y][x] is not None:\n",
       "                    states.add((x, y))\n",
       "                    reward[(x, y)] = grid[y][x]\n",
       "        self.states = states\n",
       "        actlist = orientations\n",
       "        transitions = {}\n",
       "        for s in states:\n",
       "            transitions[s] = {}\n",
       "            for a in actlist:\n",
       "                transitions[s][a] = self.calculate_T(s, a)\n",
       "        MDP.__init__(self, init, actlist=actlist,\n",
       "                     terminals=terminals, transitions = transitions, \n",
       "                     reward = reward, states = states, gamma=gamma)\n",
       "\n",
       "    def calculate_T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return [(0.8, self.go(state, action)),\n",
       "                    (0.1, self.go(state, turn_right(action))),\n",
       "                    (0.1, self.go(state, turn_left(action)))]\n",
       "    \n",
       "    def T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       " \n",
       "    def go(self, state, direction):\n",
       "        """Return the state that results from going in this direction."""\n",
       "        state1 = vector_add(state, direction)\n",
       "        return state1 if state1 in self.states else state\n",
       "\n",
       "    def to_grid(self, mapping):\n",
       "        """Convert a mapping from (x, y) to v into a [[..., v, ...]] grid."""\n",
       "        return list(reversed([[mapping.get((x, y), None)\n",
       "                               for x in range(self.cols)]\n",
       "                              for y in range(self.rows)]))\n",
       "\n",
       "    def to_arrows(self, policy):\n",
       "        chars = {\n",
       "            (1, 0): '>', (0, 1): '^', (-1, 0): '<', (0, -1): 'v', None: '.'}\n",
       "        return self.to_grid({s: chars[a] for (s, a) in policy.items()})\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **_ _init_ _** method takes **grid** as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states.\n", "\n", "**go** method returns the state by going in particular direction by using vector_add.\n", "\n", "**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n", "\n", "**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n", "\n", "**to_arrows** are used for representing the policy in a grid like format." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create a GridMDP like the one in **Fig 17.1** as follows: \n", "\n", " GridMDP([[-0.04, -0.04, -0.04, +1],\n", " [-0.04, None, -0.04, -1],\n", " [-0.04, -0.04, -0.04, -0.04]],\n", " terminals=[(3, 2), (3, 1)])\n", " \n", "In fact the **sequential_decision_environment** in mdp module has been instantized using the exact same code." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sequential_decision_environment" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# VALUE ITERATION\n", "\n", "Now that we have looked how to represent MDPs. Let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understanding it better.\n", "\n", "We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy $\\pi$. The value or the utility of a state is given by\n", "\n", "$$U(s)=R(s)+\\gamma\\max_{a\\epsilon A(s)}\\sum_{s'} P(s'\\ |\\ s,a)U(s')$$\n", "\n", "This is called the Bellman equation. The algorithm Value Iteration (**Fig. 17.4** in the book) relies on finding solutions of this Equation. The intuition Value Iteration works is because values propagate through the state space by means of local updates. This point will we more clear after we encounter the visualisation. For more information you can refer to **Section 17.2** of the book. \n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def value_iteration(mdp, epsilon=0.001):\n",
       "    """Solving an MDP by value iteration. [Figure 17.4]"""\n",
       "    U1 = {s: 0 for s in mdp.states}\n",
       "    R, T, gamma = mdp.R, mdp.T, mdp.gamma\n",
       "    while True:\n",
       "        U = U1.copy()\n",
       "        delta = 0\n",
       "        for s in mdp.states:\n",
       "            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])\n",
       "                                        for a in mdp.actions(s)])\n",
       "            delta = max(delta, abs(U1[s] - U[s]))\n",
       "        if delta < epsilon * (1 - gamma) / gamma:\n",
       "            return U\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(value_iteration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It takes as inputs two parameters, an MDP to solve and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities.
Value Iteration starts with arbitrary initial values for the utilities, calculates the right-hand side of the Bellman equation and plugs it into the left-hand side, thereby updating the utility of each state from the utilities of its neighbors. \n", "This is repeated until equilibrium is reached. \n", "It works on the principle of _Dynamic Programming_: using precomputed information to simplify subsequent computation. \n", "If $U_i(s)$ is the utility value for state $s$ at the $i$-th iteration, the iteration step, called the Bellman update, looks like this:\n", "\n", "$$ U_{i+1}(s) \\leftarrow R(s) + \\gamma \\max_{a \\in A(s)} \\sum_{s'} P(s'\\ |\\ s,a)U_{i}(s') $$\n", "\n", "As you might have noticed, `value_iteration` has an infinite loop. How do we decide when to stop iterating? \n", "The concept of _contraction_ explains the convergence of value iteration. \n", "Refer to **Section 17.2.3** of the book for a detailed explanation. \n", "In the algorithm, we calculate a value $\\delta$ that measures the largest difference between the utilities of the current time step and those of the previous time step:\n", "\n", "$$\\delta = \\max(\\delta, \\left| U_{i+1}(s) - U_i(s) \\right|)$$\n", "\n", "This value of $\\delta$ decreases as the values of $U_i$ converge.\n", "We terminate the algorithm when $\\delta$ becomes less than a threshold determined by the hyperparameter _epsilon_:\n", "\n", "$$\\delta \\lt \\epsilon \\frac{(1 - \\gamma)}{\\gamma}$$\n", "\n", "To summarize, the Bellman update is a _contraction_ by a factor of $\\gamma$ on the space of utility vectors. \n", "Hence, from the properties of contractions in general, it follows that `value_iteration` always converges to a unique solution of the Bellman equations whenever $\\gamma$ is less than 1.\n", "We then terminate the algorithm when a reasonable approximation is achieved.\n", "In practice, it often happens that the policy $\\pi$ becomes optimal long before the utility function converges. For the given 4 x 3 environment with $\\gamma = 0.9$, the policy $\\pi$ is optimal when $i = 4$ (at the 4th iteration), even though the maximum error in the utility function is still 0.46. This can be seen in **Figure 17.6** of the book. Hence, to increase computational efficiency, we often use another method for solving MDPs called Policy Iteration, which we will see in the later part of this notebook. \n", "
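To see this contraction at work, here is a minimal sketch (assuming only the `from mdp import *` performed at the top of this notebook) that runs the same update loop by hand and prints how $\\delta$ shrinks from one sweep to the next; the roughly geometric decay, by a factor of about $\\gamma$, is what the termination test relies on:\n", "\n", "    env = sequential_decision_environment\n", "    U1 = {s: 0 for s in env.states}\n", "    for i in range(6):\n", "        U = U1.copy()\n", "        delta = 0\n", "        for s in env.states:\n", "            U1[s] = env.R(s) + env.gamma * max(sum(p * U[s1] for (p, s1) in env.T(s, a))\n", "                                               for a in env.actions(s))\n", "            delta = max(delta, abs(U1[s] - U[s]))\n", "        print(i, round(delta, 4))  # delta shrinks sweep after sweep\n", "\n", "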
For now, let us solve the **sequential_decision_environment** GridMDP using `value_iteration`." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{(0, 0): 0.2962883154554812,\n", " (0, 1): 0.3984432178350045,\n", " (0, 2): 0.5093943765842497,\n", " (1, 0): 0.25386699846479516,\n", " (1, 2): 0.649585681261095,\n", " (2, 0): 0.3447542300124158,\n", " (2, 1): 0.48644001739269643,\n", " (2, 2): 0.7953620878466678,\n", " (3, 0): 0.12987274656746342,\n", " (3, 1): -1.0,\n", " (3, 2): 1.0}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "value_iteration(sequential_decision_environment)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The pseudocode for the algorithm:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ VALUE-ITERATION(_mdp_, _ε_) __returns__ a utility function \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n", "      rewards _R_(_s_), discount _γ_ \n", "   _ε_, the maximum error allowed in the utility of any state \n", " __local variables__: _U_, _U′_, vectors of utilities for states in _S_, initially zero \n", "        _δ_, the maximum change in the utility of any state in an iteration \n", "\n", " __repeat__ \n", "   _U_ ← _U′_; _δ_ ← 0 \n", "   __for each__ state _s_ in _S_ __do__ \n", "     _U′_\\[_s_\\] ← _R_(_s_) + _γ_ max_a_ ∈ _A_(_s_) Σ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "     __if__ | _U′_\\[_s_\\] − _U_\\[_s_\\] | > _δ_ __then__ _δ_ ← | _U′_\\[_s_\\] − _U_\\[_s_\\] | \n", " __until__ _δ_ < _ε_(1 − _γ_)/_γ_ \n", " __return__ _U_ \n", "\n", "---\n", "__Figure ??__ The value iteration algorithm for calculating utilities of states. The termination condition is from Equation (__??__)." ], "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode(\"Value-Iteration\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AIMA3e\n", "__function__ VALUE-ITERATION(_mdp_, _ε_) __returns__ a utility function \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n", "      rewards _R_(_s_), discount _γ_ \n", "   _ε_, the maximum error allowed in the utility of any state \n", " __local variables__: _U_, _U′_, vectors of utilities for states in _S_, initially zero \n", "        _δ_, the maximum change in the utility of any state in an iteration \n", "\n", " __repeat__ \n", "   _U_ ← _U′_; _δ_ ← 0 \n", "   __for each__ state _s_ in _S_ __do__ \n", "     _U′_\\[_s_\\] ← _R_(_s_) + _γ_ max_a_ ∈ _A_(_s_) Σ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "     __if__ | _U′_\\[_s_\\] − _U_\\[_s_\\] | > _δ_ __then__ _δ_ ← | _U′_\\[_s_\\] − _U_\\[_s_\\] | \n", " __until__ _δ_ < _ε_(1 − _γ_)/_γ_ \n", " __return__ _U_ \n", "\n", "---\n", "__Figure ??__ The value iteration algorithm for calculating utilities of states. The termination condition is from Equation (__??__)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## VALUE ITERATION VISUALIZATION\n", "\n", "To illustrate that values propagate out of states let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def value_iteration_instru(mdp, iterations=20):\n", " U_over_time = []\n", " U1 = {s: 0 for s in mdp.states}\n", " R, T, gamma = mdp.R, mdp.T, mdp.gamma\n", " for _ in range(iterations):\n", " U = U1.copy()\n", " for s in mdp.states:\n", " U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])\n", " for a in mdp.actions(s)])\n", " U_over_time.append(U)\n", " return U_over_time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we define a function to create the visualisation from the utilities returned by **value_iteration_instru**. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit [ipywidgets.readthedocs.io](http://ipywidgets.readthedocs.io)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "columns = 4\n", "rows = 3\n", "U_over_time = value_iteration_instru(sequential_decision_environment)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "from notebook import make_plot_grid_step_function\n", "\n", "plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAATcAAADuCAYAAABcZEBhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAADYxJREFUeJzt211oW2eex/Hf2Xpb0onWrVkm1otL\nW2SmrNaVtzS2K8jCFhJPXsbtRWcTX4zbmUBINkMYw5jmYrYwhNJuMWTjaTCYDSW5cQK9iEOcpDad\nLAREVtBEF+OwoDEyWEdxirvjelw36cScubCi1PWLvK0lnfnP9wMGHz2P4dEf8fWRnDie5wkArPmb\nah8AAMqBuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMKnm/7N5bk78dwagjDYHnGofwf88\nb11D4s4NgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnE\nDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQN\ngEnEDYBJxA2AScQNgEm+jZvneerpOaJ4PKq2tueVTt9Ycd/Nm5+otbVJ8XhUPT1H5HnekvUTJ3oV\nCDianp6uxLErhvmUxoxW9zNJ35f0j6use5KOSIpKel7S1yd3WlJj4et0Gc/4Xfk2biMjlzU+nlE6\nnVFf34C6uw+tuK+7+5D6+gaUTmc0Pp7R6OiV4louN6mrV0fV0PBUpY5dMcynNGa0ujckXVlj/bKk\nTOFrQNKDyf2fpF9L+h9JqcL3fyjbKb8b38ZteHhInZ1dchxHLS1tmpmZ0dTU7SV7pqZua3Z2Vq2t\nL8lxHHV2dunixfPF9aNHu3Xs2HtyHKfSxy875lMaM1rdP0uqW2N9SFKXJEdSm6QZSbclfSRpe+Fn\nnyx8v1Ykq8m3ccvnXYXDDcXrcDiifN5dYU+keB0KPdwzPHxBoVBYTU3xyhy4wphPaczo23MlNXzt\nOlJ4bLXH/aim2gdYzTc/95C07Lfnanvm5+fV2/u2zp8fKdv5qo35lMaMvr3lU1m8i1vtcT/y1Z3b\nwMBJJRLNSiSaFQyG5LqTxTXXzSkYDC3ZHw5H5Lq54nU+v7gnmx3XxERWiURcsdjTct2ctm17QXfu\nTFXsuZQD8ymNGW2MiKTJr13nJIXWeNyPfBW3AwcOK5lMK5lMa8+eVzU4eEae5ymVuq7a2lrV1weX\n7K+vDyoQCCiVui7P8zQ4eEa7d7+iWKxJ2eynGhub0NjYhMLhiK5du6EtW+qr9Mw2BvMpjRltjA5J\nZ7R4p3ZdUq2koKR2SSNa/CPCHwrft1fpjKX49m1pe/sujYxcUjwe1aZNj6u//4PiWiLRrGQyLUk6\nfrxfBw++obt3v9T27Tu1Y8fOah25ophPacxodZ2S/lvStBbvxn4t6U+FtYOSdkm6pMV/CvK4pAeT\nq5P075K2Fq7f0tp/mKgmZ6XPHFYzN7fiW24AG2RzwK+fYPmI561rSL56WwoAG4W4ATCJuAEwibgB\nMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEw\nibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMKmm2gew\nZPP3vGofwffmvnCqfQRfc8RrqJT1Tog7NwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3\nACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcA\nJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAm+TZunuepp+eI4vGo
2tqeVzp9Y8V9N29+\notbWJsXjUfX0HJHneUvWT5zoVSDgaHp6uhLHrpgrV67oB889p2hjo959991l6/fu3dPeffsUbWxU\na1ubJiYmimvvvPOOoo2N+sFzz+mjjz6q4Kkri9dQKf8r6SVJj0nqXWNfVlKrpEZJeyV9VXj8XuE6\nWlifKNdBvxXfxm1k5LLGxzNKpzPq6xtQd/ehFfd1dx9SX9+A0umMxsczGh29UlzL5SZ19eqoGhqe\nqtSxK2JhYUGHf/5zXb50SbfGxjR49qxu3bq1ZM+pU6f05BNP6PeZjLp/8Qu9efSoJOnWrVs6e+6c\nxn73O125fFn/dviwFhYWqvE0yo7XUCl1kvok/bLEvjcldUvKSHpS0qnC46cK178vrL9ZnmN+S76N\n2/DwkDo7u+Q4jlpa2jQzM6OpqdtL9kxN3dbs7KxaW1+S4zjq7OzSxYvni+tHj3br2LH35DhOpY9f\nVqlUStFoVM8++6weffRR7du7V0NDQ0v2DF24oNdff12S9Nprr+njjz+W53kaGhrSvr179dhjj+mZ\nZ55RNBpVKpWqxtMoO15DpXxf0lZJf7vGHk/SbyW9Vrh+XdKD+QwVrlVY/7iw3x98G7d83lU43FC8\nDocjyufdFfZEiteh0MM9w8MXFAqF1dQUr8yBK8h1XTVEHj7vSCQi13WX72lYnF9NTY1qa2v12Wef\nLXlckiLh8LKftYLX0Eb4TNITkmoK1xFJD2boSnow3xpJtYX9/lBTekt1fPNzD0nLfnuutmd+fl69\nvW/r/PmRsp2vmr7LbNbzs1bwGtoIK92JOetYqz5f3bkNDJxUItGsRKJZwWBIrjtZXHPdnILB0JL9\n4XBErpsrXufzi3uy2XFNTGSVSMQViz0t181p27YXdOfOVMWeSzlFIhFN5h4+71wup1AotHzP5OL8\n7t+/r88//1x1dXVLHpeknOsu+9m/ZLyGSjkpqbnwlV/H/r+XNCPpfuE6J+nBDCOSHsz3vqTPtfg5\nnj/4Km4HDhxWMplWMpnWnj2vanDwjDzPUyp1XbW1taqvDy7ZX18fVCAQUCp1XZ7naXDwjHbvfkWx\nWJOy2U81NjahsbEJhcMRXbt2Q1u21FfpmW2srVu3KpPJKJvN6quvvtLZc+fU0dGxZE/Hj36k06dP\nS5I+/PBDvfzyy3IcRx0dHTp77pzu3bunbDarTCajlpaWajyNsuA1VMphSenC13p+qTmS/kXSh4Xr\n05JeKXzfUbhWYf1l+enOzbdvS9vbd2lk5JLi8ag2bXpc/f0fFNcSiWYlk2lJ0vHj/Tp48A3dvful\ntm/fqR07dlbryBVTU1Oj93/zG7X/8IdaWFjQz376U8ViMb311lt68cUX1dHRof379+snXV2KNjaq\nrq5OZwcHJUmxWEz/+uMf6x9iMdXU1Ojk++/rkUceqfIzKg9eQ6VMSXpR0qwW73P+U9ItSX8naZek\n/9JiAP9D0j5Jv5L0T5L2F35+v6SfaPGfgtRJOlvBs5fmrPSZw2rm5nz0pxAf2vw9xlPK3Bf++c3u\nR4FAtU/gf563vttDX70tBYCNQtwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwA\nmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACY\nRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYVFPtA1gy94VT7SPgL9wf/1jtE9jBnRsAk4gbAJOIGwCT\niBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOI\nGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gb\nAJN8GzfP89TTc0TxeFRtbc8rnb6x4r6bNz9Ra2uT4vGoenqOyPO8JesnTvQqEHA0PT1diWNXDPMp\njRmtzfp8fBu3kZHLGh/PKJ3OqK9vQN3dh1bc1919SH19A0qnMxofz2h09EpxLZeb1NWro2poeKpS\nx64Y5lMaM1qb9fn4Nm7Dw0Pq7OyS4zhqaWnTzMyMpqZuL9kzNXVbs7Ozam19SY7jqLOzSxcvni+u\nHz3arWPH3pPjOJU+ftkxn9KY0dqsz8e3ccvnXYXDDcXrcDiifN5dYU+keB0KPdwzPHxBoVBYTU3x\nyhy4wphPacxobdbnU1PtA6zmm+/rJS377bDanvn5efX2vq3z50fKdr5qYz6lMaO1WZ+Pr+7cBgZO\nKpFoViLRrGAwJNedLK65bk7BYGjJ/nA4ItfNFa/z+cU92ey4JiaySiTiisWeluvmtG3bC7pzZ6pi\nz6UcmE9pzGhtf03z8VXcDhw4rGQyrWQyrT17XtXg4Bl5nqdU6rpqa2tVXx9csr++PqhAIKBU6ro8\nz9Pg4Bnt3v2KYrEmZbOfamxsQmNjEwqHI7p27Ya2bKmv0jPbGMynNGa0tr+m+fj2bWl7+y6NjFxS\nPB7Vpk2Pq7//g+JaItGsZDItSTp+vF8HD76hu3e/1PbtO7Vjx85qHbmimE9pzGht1ufjrPSeejVz\nc1r/ZgAog82bta4/zfrqbSkAbBTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTi\nBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIG\nwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTH87xqnwEANhx3bgBMIm4ATCJuAEwibgBMIm4ATCJu\nAEwibgBMIm4ATCJuAEwibgBM+jPdN0cNjYpeKAAAAABJRU5ErkJggg==\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "The installed widget Javascript is the wrong version. 
It must satisfy the semver range ~2.1.4.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "77e9849e074841e49d8b0ebc8191507c" } }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import ipywidgets as widgets\n", "from IPython.display import display\n", "from notebook import make_visualize\n", "\n", "iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=1)\n", "w = widgets.interactive(plot_grid_step, iteration=iteration_slider)\n", "display(w)\n", "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", "visualize_button = widgets.ToggleButton(description=\"Visualize\", value=False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:', options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "a = widgets.interactive(visualize_callback, Visualize=visualize_button, time_step=time_select)\n", "display(a)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Move the slider above to observe how the utility changes across iterations. It is also possible to move the slider using the arrow keys, or to jump to a value by directly editing the number with a double click. The **Visualize Button** will automatically animate the slider for you. The **Extra Delay Box** allows you to set an extra time delay, of up to one second, for each time step. There is also an interactive editor for grid-world problems, `grid_mdp.py`, in the gui folder for you to play around with." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# POLICY ITERATION\n", "\n", "We have already seen that value iteration converges to the optimal policy long before it accurately estimates the utility function. \n", "If one action is clearly better than all the others, then the exact magnitude of the utilities in the states involved need not be precise. \n", "The policy iteration algorithm works on this insight. \n", "The algorithm executes two fundamental steps:\n", "* **Policy evaluation**: Given a policy _πᵢ_, calculate _Uᵢ = U(πᵢ)_, the utility of each state if _πᵢ_ were to be executed.\n", "* **Policy improvement**: Calculate a new policy _πᵢ₊₁_ using one-step look-ahead based on the utility values calculated.\n", "\n", "The algorithm terminates when the policy improvement step yields no change in the utilities. \n", "Refer to **Figure 17.6** in the book to see how this is an improvement over value iteration.\n", "We now have a simplified version of the Bellman equation:\n", "\n", "$$U_i(s) = R(s) + \\gamma \\sum_{s'}P(s'\\ |\\ s, \\pi_i(s))U_i(s')$$\n", "\n", "An important observation here is that this equation doesn't have the `max` operator, which makes it linear.\n", "For _n_ states, we have _n_ linear equations with _n_ unknowns, which can be solved exactly in time _**O(n³)**_ (a short sketch of such an exact solve appears right after the `policy_iteration` source below).\n", "For more implementation details, have a look at **Section 17.3**.\n", "Let us now look at how the expected utility is found and how `policy_iteration` is implemented." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def expected_utility(a, s, U, mdp):\n",
       "    """The expected utility of doing a in state s, according to the MDP and U."""\n",
       "    return sum([p * U[s1] for (p, s1) in mdp.T(s, a)])\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(expected_utility)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def policy_iteration(mdp):\n",
       "    """Solve an MDP by policy iteration [Figure 17.7]"""\n",
       "    U = {s: 0 for s in mdp.states}\n",
       "    pi = {s: random.choice(mdp.actions(s)) for s in mdp.states}\n",
       "    while True:\n",
       "        U = policy_evaluation(pi, U, mdp)\n",
       "        unchanged = True\n",
       "        for s in mdp.states:\n",
       "            a = argmax(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))\n",
       "            if a != pi[s]:\n",
       "                pi[s] = a\n",
       "                unchanged = False\n",
       "        if unchanged:\n",
       "            return pi\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(policy_iteration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Fortunately, it is not necessary to do _exact_ policy evaluation. \n", "The utilities can instead be reasonably approximated by performing some number of simplified value iteration steps.\n", "The simplified Bellman update equation for the process is\n", "\n", "$$U_{i+1}(s) \\leftarrow R(s) + \\gamma\\sum_{s'}P(s'\\ |\\ s,\\pi_i(s))U_{i}(s')$$\n", "\n", "and this is repeated _k_ times to produce the next utility estimate. This is called _modified policy iteration_." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def policy_evaluation(pi, U, mdp, k=20):\n",
       "    """Return an updated utility mapping U from each state in the MDP to its\n",
       "    utility, using an approximation (modified policy iteration)."""\n",
       "    R, T, gamma = mdp.R, mdp.T, mdp.gamma\n",
       "    for i in range(k):\n",
       "        for s in mdp.states:\n",
       "            U[s] = R(s) + gamma * sum([p * U[s1] for (p, s1) in T(s, pi[s])])\n",
       "    return U\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(policy_evaluation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now solve **`sequential_decision_environment`** using `policy_iteration`." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{(0, 0): (0, 1),\n", " (0, 1): (0, 1),\n", " (0, 2): (1, 0),\n", " (1, 0): (1, 0),\n", " (1, 2): (1, 0),\n", " (2, 0): (0, 1),\n", " (2, 1): (0, 1),\n", " (2, 2): (1, 0),\n", " (3, 0): (-1, 0),\n", " (3, 1): None,\n", " (3, 2): None}" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "policy_iteration(sequential_decision_environment)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ POLICY-ITERATION(_mdp_) __returns__ a policy \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_) \n", " __local variables__: _U_, a vector of utilities for states in _S_, initially zero \n", "        _π_, a policy vector indexed by state, initially random \n", "\n", " __repeat__ \n", "   _U_ ← POLICY\\-EVALUATION(_π_, _U_, _mdp_) \n", "   _unchanged?_ ← true \n", "   __for each__ state _s_ __in__ _S_ __do__ \n", "     __if__ max_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] > Σ_s′_ _P_(_s′_ | _s_, _π_\\[_s_\\]) _U_\\[_s′_\\] __then do__ \n", "       _π_\\[_s_\\] ← argmax_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "       _unchanged?_ ← false \n", " __until__ _unchanged?_ \n", " __return__ _π_ \n", "\n", "---\n", "__Figure ??__ The policy iteration algorithm for calculating an optimal policy." ], "text/plain": [ "" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('Policy-Iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AIMA3e\n", "__function__ POLICY-ITERATION(_mdp_) __returns__ a policy \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_) \n", " __local variables__: _U_, a vector of utilities for states in _S_, initially zero \n", "        _π_, a policy vector indexed by state, initially random \n", "\n", " __repeat__ \n", "   _U_ ← POLICY\\-EVALUATION(_π_, _U_, _mdp_) \n", "   _unchanged?_ ← true \n", "   __for each__ state _s_ __in__ _S_ __do__ \n", "     __if__ max_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] > Σ_s′_ _P_(_s′_ | _s_, _π_\\[_s_\\]) _U_\\[_s′_\\] __then do__ \n", "       _π_\\[_s_\\] ← argmax_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "       _unchanged?_ ← false \n", " __until__ _unchanged?_ \n", " __return__ _π_ \n", "\n", "---\n", "__Figure ??__ The policy iteration algorithm for calculating an optimal policy." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Sequential Decision Problems\n", "\n", "Now that we have the tools required to solve MDPs, let us see how Sequential Decision Problems can be solved step by step and how a few built-in tools in the GridMDP class help us better analyse the problem at hand. \n", "As always, we will work with the grid world from **Figure 17.1** from the book.\n", "![title](images/grid_mdp.jpg)\n", "
This is the environment for our agent.\n", "We assume for now that the environment is _fully observable_, so that the agent always knows where it is.\n", "We also assume that the transitions are **Markovian**, that is, the probability of reaching state $s'$ from state $s$ depends only on $s$ and not on the history of earlier states.\n", "Almost all stochastic decision problems can be reframed as a Markov Decision Process just by tweaking the definition of a _state_ for that particular problem.\n", "
\n", "However, the actions of our agent in this environment are unreliable. In other words, the motion of our agent is stochastic. \n", "

\n", "More specifically, the agent may - \n", "* move correctly in the intended direction with a probability of _0.8_, \n", "* move $90^\\circ$ to the right of the intended direction with a probability 0.1\n", "* move $90^\\circ$ to the left of the intended direction with a probability 0.1\n", "

\n", "The agent stays put if it bumps into a wall.\n", "![title](images/grid_mdp_agent.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These properties of the agent are called the transition properties and are hardcoded into the GridMDP class as you can see below." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.T)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To completely define our task environment, we need to specify the utility function for the agent. \n", "This is the function that gives the agent a rough estimate of how good being in a particular state is, or how much _reward_ an agent receives by being in that state.\n", "The agent then tries to maximize the reward it gets.\n", "As the decision problem is sequential, the utility function will depend on a sequence of states rather than on a single state.\n", "For now, we simply stipulate that in each state $s$, the agent receives a finite reward $R(s)$.\n", "\n", "For any given state, the actions the agent can take are encoded as given below:\n", "- Move Up: (0, 1)\n", "- Move Down: (0, -1)\n", "- Move Left: (-1, 0)\n", "- Move Right: (1, 0)\n", "- Do nothing: `None`\n", "\n", "We now wonder what a valid solution to the problem might look like. \n", "We cannot have fixed action sequences as the environment is stochastic and we can eventually end up in an undesirable state.\n", "Therefore, a solution must specify what the agent shoulddo for _any_ state the agent might reach.\n", "
\n", "Such a solution is known as a **policy** and is usually denoted by $\\pi$.\n", "
\n", "The **optimal policy** is the policy that yields the highest expected utility an is usually denoted by $\\pi^*$.\n", "
\n", "The `GridMDP` class has a useful method `to_arrows` that outputs a grid showing the direction the agent should move, given a policy.\n", "We will use this later to better understand the properties of the environment." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def to_arrows(self, policy):\n",
       "        chars = {\n",
       "            (1, 0): '>', (0, 1): '^', (-1, 0): '<', (0, -1): 'v', None: '.'}\n",
       "        return self.to_grid({s: chars[a] for (s, a) in policy.items()})\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.to_arrows)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method directly encodes the actions that the agent can take (described above) to characters representing arrows and shows it in a grid format for human visalization purposes. \n", "It converts the received policy from a `dictionary` to a grid using the `to_grid` method." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def to_grid(self, mapping):\n",
       "        """Convert a mapping from (x, y) to v into a [[..., v, ...]] grid."""\n",
       "        return list(reversed([[mapping.get((x, y), None)\n",
       "                               for x in range(self.cols)]\n",
       "                              for y in range(self.rows)]))\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.to_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have all the tools required and a good understanding of the agent and the environment, we consider some cases and see how the agent should behave for each case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 1\n", "---\n", "R(s) = -0.04 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Note that this environment is also initialized in mdp.py by default\n", "sequential_decision_environment = GridMDP([[-0.04, -0.04, -0.04, +1],\n", " [-0.04, None, -0.04, -1],\n", " [-0.04, -0.04, -0.04, -0.04]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the `best_policy` function to find the best policy for this environment.\n", "But, as you can see, `best_policy` requires a utility function as well.\n", "We already know that the utility function can be found by `value_iteration`.\n", "Hence, our best policy is:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true }, "outputs": [], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now use the `to_arrows` method to see how our agent should pick its actions in the environment." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None ^ .\n", "^ > ^ <\n" ] } ], "source": [ "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "
\n", "![title](images/-0.04.jpg)\n", "
\n", "Notice that, because the cost of taking a step is fairly small compared with the penalty for ending up in `(4, 2)` by accident, the optimal policy is conservative. \n", "In state `(3, 1)` it recommends taking the long way round, rather than taking the shorter way and risking getting a large negative reward of -1 in `(4, 2)`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 2\n", "---\n", "R(s) = -0.4 in all states except in terminal states" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[-0.4, -0.4, -0.4, +1],\n", " [-0.4, None, -0.4, -1],\n", " [-0.4, -0.4, -0.4, -0.4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None ^ .\n", "^ > ^ <\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "![title](images/-0.4.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the reward for each state is now more negative, life is certainly more unpleasant.\n", "The agent takes the shortest route to the +1 state and is willing to risk falling into the -1 state by accident." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 3\n", "---\n", "R(s) = -4 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[-4, -4, -4, +1],\n", " [-4, None, -4, -1],\n", " [-4, -4, -4, -4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None > .\n", "> > > ^\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "![title](images/-4.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The living reward for each state is now lower than the least rewarding terminal. Life is so _painful_ that the agent heads for the nearest exit as even the worst exit is less painful than any living state." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 4\n", "---\n", "R(s) = 4 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[4, 4, 4, +1],\n", " [4, None, 4, -1],\n", " [4, 4, 4, 4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > < .\n", "> None < .\n", "> > > v\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, the output we expect is\n", "![title](images/4.jpg)\n", "
\n", "As life is positively enjoyable and the agent avoids _both_ exits.\n", "Even though the output we get is not exactly what we want, it is definitely not wrong.\n", "The scenario here requires the agent to anything but reach a terminal state, as this is the only way the agent can maximize its reward (total reward tends to infinity), and the program does just that.\n", "
\n", "Currently, the GridMDP class doesn't support an explicit marker for a \"do whatever you like\" action or a \"don't care\" condition.\n", "You can however, extend the class to do so.\n", "
\n", "For in-depth knowledge about sequential decision problems, refer **Section 17.1** in the AIMA book." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## Appendix\n", "\n", "Surprisingly, it turns out that there are six other optimal policies for various ranges of R(s). \n", "You can try to find them out for yourself.\n", "See **Exercise 17.5**.\n", "To help you with this, we have a GridMDP editor in `grid_mdp.py` in the GUI folder. \n", "
\n", "Here's a brief tutorial about how to use it\n", "
\n", "Let us use it to solve `Case 2` above\n", "1. Run `python gui/grid_mdp.py` from the master directory.\n", "2. Enter the dimensions of the grid (3 x 4 in this case), and click on `'Build a GridMDP'`\n", "3. Click on `Initialize` in the `Edit` menu.\n", "4. Set the reward as -0.4 and click `Apply`. Exit the dialog. \n", "![title](images/ge0.jpg)\n", "
\n", "5. Select cell (1, 1) and check the `Wall` radio button. `Apply` and exit the dialog.\n", "![title](images/ge1.jpg)\n", "
\n", "6. Select cells (4, 1) and (4, 2) and check the `Terminal` radio button for both. Set the rewards appropriately and click on `Apply`. Exit the dialog. Your window should look something like this.\n", "![title](images/ge2.jpg)\n", "
\n", "7. You are all set up now. Click on `Build and Run` in the `Build` menu and watch the heatmap calculate the utility function.\n", "![title](images/ge4.jpg)\n", "
\n", "Green shades indicate positive utilities and brown shades indicate negative utilities. \n", "The values of the utility function and arrow diagram will pop up in separate dialogs after the algorithm converges." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" }, "widgets": { "state": { "001e6c8ed3fc4eeeb6ab7901992314dd": { "views": [] }, "00f29880456846a8854ab515146ec55b": { "views": [] }, "010f52f7cde545cba25593839002049b": { "views": [] }, "01473ad99aa94acbaca856a7d980f2b9": { "views": [] }, "021a4a4f35da484db5c37c5c8d0dbcc2": { "views": [] }, "02229be5d3bc401fad55a0378977324a": { "views": [] }, "022a5fdfc8e44fb09b21c4bd5b67a0db": { "views": [ { "cell_index": 27 } ] }, "025c3b0250b94d4c8d9b33adfdba4c15": { "views": [] }, "028f96abfed644b8b042be1e4b16014d": { "views": [] }, "0303bad44d404a1b9ad2cc167e42fcb7": { "views": [] }, "031d2d17f32347ec83c43798e05418fe": { "views": [] }, "03de64f0c2fd43f1b3b5d84aa265aeb7": { "views": [] }, "03fdd484675b42ad84448f64c459b0e0": { "views": [] }, "044cf74f03fd44fd840e450e5ee0c161": { "views": [] }, "054ae5ba0a014a758de446f1980f1ba5": { "views": [] }, "0675230fb92f4539bc257b768fb4cd10": { "views": [ { "cell_index": 27 } ] }, "06c93b34e1f4424aba9a0b172c428260": { "views": [] }, "077a5ea324be46c3ad0110671a0c6a12": { "views": [] }, "0781138d150142a08775861a69beaec9": { "views": [] }, "0783e74a8c2b40cc9b0f5706271192f4": { "views": [ { "cell_index": 27 } ] }, "07c7678b73634e728085f19d7b5b84f7": { "views": [] }, "07febf1d15a140d8adb708847dd478ec": { "views": [] }, "08299b681cd9477f9b19a125e186ce44": { "views": [] }, "083af89d82e445aab4abddfece61d700": { "views": [] }, "08a1129a8bd8486bbfe2c9e49226f618": { "views": [] }, "08a2f800c0d540fdb24015156c7ffc15": { "views": [] }, "097d8d0feccc4c76b87bbcb3f1ecece7": { "views": [] }, "098f12158d844cdf89b29a4cd568fda0": { "views": [ { "cell_index": 27 } ] }, "09e96f9d5d32453290af60fbd29ca155": { "views": [] }, "0a2ec7c49dcd4f768194483c4f2e8813": { "views": [] }, "0b1d6ed8fe4144b8a24228e1befe2084": { "views": [] }, "0b299f8157d24fa9830653a394ef806a": { "views": [] }, "0b2a4ac81a244ff1a7b313290465f8f4": { "views": [] }, "0b52cfc02d604bc2ae42f4ba8c7bca4f": { "views": [] }, "0b65fb781274495ab498ad518bc274d4": { "views": [ { "cell_index": 27 } ] }, "0b865813de0841c49b41f6ad5fb85c6a": { "views": [] }, "0c2070d20fb04864aeb2008a6f2b8b30": { "views": [] }, "0cf5319bcde84f65a1a91c5f9be3aa28": { "views": [] }, "0d721b5be85f4f8aafe26b3597242d60": { "views": [] }, "0d9f29e197ad45d6a04bbb6864d3be6d": { "views": [] }, "0e03c7e2c0414936b206ed055e19acba": { "views": [] }, "0e2265aa506a4778bfc480d5e48c388b": { "views": [] }, "0e4e3d0b6afc413e86970ec4250df678": { "views": [] }, "0e6a5fe6423542e6a13e30f8929a8b02": { "views": [] }, "0e7b2f39c94343c3b0d3b6611351886e": { "views": [] }, "0eb5005fa34440988bcf3be231d31511": { "views": [] }, "104703ad808e41bc9106829bb0396ece": { "views": [] }, "109c376b28774a78bf90d3da4587d834": { "views": [] }, "10b24041718843da976ac616e77ea522": { "views": [] }, "11516bb6db8b45ef866bd9be8bb59312": { "views": [] }, "1203903354fa467a8f38dbbad79cbc81": { "views": [] }, "124ecbe68ada40f68d6a1807ad6bcdf9": { "views": [] }, "1264becdbb63455183aa75f236a3413e": { "views": [] }, "13061cc21693480a8380346277c1b877": { "views": [] }, 
"130dd4d2c9f04ad28d9a6ac40045a329": { "views": [] }, "1350a087b5a9422386c3c5f04dd5d1c9": { "views": [] }, "139bd19be4a4427a9e08f0be6080188e": { "views": [] }, "13f9f589d36c477f9b597dda459efd16": { "views": [] }, "140917b5c77348ec82ea45da139a3045": { "views": [] }, "145419657bb1401ba934e6cea43d5fd1": { "views": [] }, "15d748f1629d4da1982cd62cfbcb1725": { "views": [] }, "17ad015dbc744ac6952d2a6da89f0289": { "views": [] }, "17b6508f32e4425e9f43e5407eb55ed3": { "views": [] }, "185598d8e5fc4dffae293f270a6e7328": { "views": [] }, "196473b25f384f3895ee245e8b7874e9": { "views": [] }, "19c0f87663a0431285a62d4ad6748046": { "views": [] }, "1a00a7b7446d4ad8b08c9a2a9ea9c852": { "views": [] }, "1a97f5b88cdc4ae0871578c06bbb9965": { "views": [] }, "1a9a07777b0c4a45b33e25a70ebdc290": { "views": [] }, "1af711fe8e4f43f084cef6c89eec40ae": { "views": [ { "cell_index": 27 } ] }, "1aff6a6e15b34bb89d7579d445071230": { "views": [] }, "1b1ea7e915d846aea9efeae4381b2c48": { "views": [] }, "1ba02ae1967740b0a69e07dbe95635cb": { "views": [] }, "1c5c913acbde4e87a163abb2e24e6e38": { "views": [ { "cell_index": 27 } ] }, "1cfca0b7ef754c459e1ad97c1f0ceb3b": { "views": [] }, "1d8f6a4910e649589863b781aab4c4d4": { "views": [] }, "1e64b8f5a1554a22992693c194f7b971": { "views": [] }, "1e8f0a2bf7614443a380e53ed27b48c0": { "views": [] }, "1f4e6fa4bacc479e8cd997b26a5af733": { "views": [] }, "1fdf09158eb44415a946f07c6aaba620": { "views": [] }, "200e3ebead3d4858a47e2f6d345ca395": { "views": [ { "cell_index": 27 } ] }, "2050d4b462474a059f9e6493ba06ac58": { "views": [] }, "20b5c21a6e6a427ba3b9b55a0214f75e": { "views": [] }, "20b99631feba4a9c98c9d5f74c620273": { "views": [] }, "20bcff5082854ab89a7977ae56983e30": { "views": [] }, "20d708bf9b7845fa946f5f37c7733fee": { "views": [] }, "210b36ea9edf4ee49ae1ae3fe5005282": { "views": [] }, "21415393cb2d4f72b5c3f5c058aeaf66": { "views": [] }, "2186a18b6ed8405a8a720bae59de2ace": { "views": [] }, "220dc13e9b6942a7b9ed9e37d5ede7ba": { "views": [] }, "221a735fa6014a288543e6f8c7e4e2ef": { "views": [] }, "2288929cec4d4c8faad411029f5e21fa": { "views": [] }, "22b86e207ea6469d85d8333870851a86": { "views": [] }, "23283ad662a140e3b5e8677499e91d64": { "views": [] }, "23a7cc820b63454ca6be3dcfd2538ac1": { "views": [] }, "240ed02d576546028af3edfab9ea8558": { "views": [] }, "24678e52a0334cb9a9a56f92c29750be": { "views": [] }, "247820f6d83f4dd9b68f5df77dbda4b7": { "views": [] }, "24b6a837fbd942c9a68218fb8910dcd5": { "views": [] }, "24ee3204f26348bca5e6a264973e5b56": { "views": [] }, "262c7bb5bd7447f791509571fe74ae44": { "views": [] }, "263595f22d0d45e2a850854bcefe4731": { "views": [] }, "2640720aa6684c5da6d7870abcbc950b": { "views": [] }, "265ca1ec7ad742f096bb8104d0cf1550": { "views": [] }, "26bf66fba453464fac2f5cd362655083": { "views": [] }, "29769879478f49e8b4afd5c0b4662e87": { "views": [] }, "29a13bd6bc8d486ca648bf30c9e4c2a6": { "views": [] }, "29c5df6267584654b76205fc5559c553": { "views": [] }, "29ce25045e7248e5892e8aafc635c416": { "views": [] }, "2a17207c43c9424394299a7b52461794": { "views": [] }, "2a777941580945bc83ddb0c817ed4122": { "views": [] }, "2ae1844e2afe416183658d7a602e5963": { "views": [] }, "2afa2938b41944cf8c14e41a431e3969": { "views": [] }, "2bdc5f9b161548e3aab8ea392b5af1a1": { "views": [] }, "2c26b2bcfc96473584930a4b622d268e": { "views": [] }, "2ca2a914a5f940b18df0b5cde2b79e4b": { "views": [] }, "2ca2c532840548a9968d1c6b2f0acdd8": { "views": [] }, "2d17c32bfea143babe2b114d8777b15d": { "views": [] }, "2d3acd8872c342eab3484302cac2cb05": { "views": [ { "cell_index": 27 } ] }, 
"2dc514cc2f5547aeb97059a5070dc9e3": { "views": [] }, "2e1351ad05384d058c90e594bc6143c1": { "views": [ { "cell_index": 27 } ] }, "2e9b80fa18984615933e41c1c1db2171": { "views": [] }, "2ef17ee6b7c74a4bbbbbe9b1a93e4fb6": { "views": [] }, "2f5438f1b34046a597a467effd43df11": { "views": [ { "cell_index": 27 } ] }, "2f8d22417f3e421f96027fca40e1554f": { "views": [] }, "2fb0409cfb49469d89a32597dc3edba9": { "views": [] }, "303ccef837984c97b7e71f2988c737a4": { "views": [] }, "3058b0808dca48a0bba9a93682260491": { "views": [] }, "306b65493c28411eb10ad786bbf85dc5": { "views": [] }, "30f5d30cf2d84530b3199015c5ff00eb": { "views": [] }, "310b1ac518bd4079bdb7ecaf523a6809": { "views": [] }, "313eca81d9d24664bcc837db54d59618": { "views": [] }, "31413caf78c14548baa61e3e3c9edc55": { "views": [] }, "317fbd3cb6324b2fbdfd6aa46a8d1192": { "views": [] }, "319425ba805346f5ba366c42e220f9c6": { "views": [ { "cell_index": 27 } ] }, "31fc8165275e473f8f75c6215b5184ff": { "views": [] }, "329f12edaa0c44d2a619450f188e8777": { "views": [] }, "32edf057582f4a6ca30ce3cb685bf971": { "views": [] }, "330e74773ba148e18674cfa3e63cd6cc": { "views": [] }, "332a89c03bfb49c2bb291051d172b735": { "views": [ { "cell_index": 27 } ] }, "3347dfda0aca450f89dd9b39ca1bec7d": { "views": [] }, "336e8bcfd7cc4a85956674b0c7bffff2": { "views": [] }, "3376228b3b614d4ab2a10b2fd0f484fd": { "views": [] }, "3380a22bc67c4be99c61050800f93395": { "views": [] }, "34b5c16cbea448809c2ccbce56f8d5a5": { "views": [] }, "34bb050223504afc8053ce931103f52c": { "views": [] }, "34c28187175d49198b536a1ab13668c4": { "views": [] }, "3521f32644514ecf9a96ddfa5d80fb9b": { "views": [] }, "36511bd77ed74f668053df749cc735d4": { "views": [] }, "36541c3490bd4268b64daf20d8c24124": { "views": [] }, "37aa1dd4d76a4bac98857b519b7b523a": { "views": [] }, "37aa3cfa3f8f48989091ec46ac17ae48": { "views": [] }, "386991b0b1424a9c816dac6a29e1206b": { "views": [] }, "386cf43742234dda994e35b41890b4d8": { "views": [] }, "388571e8e0314dfab8e935b7578ba7f9": { "views": [ { "cell_index": 27 } ] }, "3974e38e718547efaf0445da2be6a739": { "views": [] }, "398490e0cc004d22ac9c4486abec61e1": { "views": [] }, "399875994aba4c53afa8c49fae8d369e": { "views": [] }, "39b64aa04b1d4a81953e43def0ef6e10": { "views": [] }, "39ffc3dd42d94a27ba7240d10c11b565": { "views": [] }, "3a21291c8e7249e3b04417d31b0447cf": { "views": [ { "cell_index": 27 } ] }, "3a377d9f46704d749c6879383c89f5d3": { "views": [] }, "3a44a6f1f62742849e96d957033a0039": { "views": [] }, "3b22d68709b046e09fe70f381a3944cd": { "views": [ { "cell_index": 27 } ] }, "3b329209c8f547acae1925dc3eb4af77": { "views": [] }, "3c1b2ec10a9041be8a3fad9da78ff9f6": { "views": [ { "cell_index": 27 } ] }, "3c2be3c85c6d41268bb4f9d63a43e196": { "views": [] }, "3c6796eff7c54238a7b7776e88721b08": { "views": [] }, "3cbca3e11edf439fb7f8ba41693b4824": { "views": [] }, "3d4b6b7c0b0c48ff8c4b8d78f58e0f1c": { "views": [] }, "3de1faf0d2514f49a99b3d60ea211495": { "views": [] }, "3df60d9ac82b42d9b885d895629e372e": { "views": [] }, "3e5b9fd779574270bf58101002c152ce": { "views": [ { "cell_index": 27 } ] }, "3e80f34623c94659bfab5b3b56072d9a": { "views": [] }, "3e8bb05434cb4a0291383144e4523840": { "views": [ { "cell_index": 27 } ] }, "3ea1c8e4f9b34161928260e1274ee048": { "views": [] }, "3f32f0915bc6469aaaf7170eff1111e3": { "views": [] }, "3fe69a26ae7a46fda78ae0cb519a0f8b": { "views": [] }, "4000ecdd75d9467e9dffd457b35aa65f": { "views": [] }, "402d346f8b68408faed2fd79395cf3fb": { "views": [] }, "402f4116244242148fdc009bb399c3bd": { "views": [] }, "4049e0d7c0d24668b7eae2bb7169376e": { "views": 
[] }, "4088c9ed71b0467b9b9417d5b04eda0e": { "views": [] }, "40d70faa07654b6cb13496c32ba274b3": { "views": [] }, "4146be21b7614abe827976787ec570f1": { "views": [] }, "4198c08edda440dd93d1f6ce3e4efa62": { "views": [] }, "42023d7d3c264f9d933d4cee4362852b": { "views": [] }, "421ad8c67f754ce2b24c4fa3a8e951cf": { "views": [] }, "4263fe0cef42416f8d344c1672f591f9": { "views": [] }, "428e42f04a1e4347a1f548379c68f91b": { "views": [ { "cell_index": 27 } ] }, "42a47243baf34773943a25df9cf23854": { "views": [] }, "4343b72c91d04a7c9a6080f30fc63d7d": { "views": [] }, "43488264fc924c01a30fa58604074b07": { "views": [] }, "4379175239b34553bf45c8ef9443ac55": { "views": [ { "cell_index": 27 } ] }, "43859798809a4a289c58b4bd5e49d357": { "views": [] }, "43ad406a61a34249b5622aba9450b23d": { "views": [] }, "4421c121414d464bb3bf1b5f0e86c37b": { "views": [ { "cell_index": 27 } ] }, "445cc08b4da44c2386ac9379793e3506": { "views": [] }, "447cff7e256c434e859bb7ce9e5d71c8": { "views": [] }, "44af7da9d8304f07890ef7d11a9f95fe": { "views": [] }, "45021b6f05db4c028a3b5572bc85217f": { "views": [] }, "457768a474844556bf9b215439a2f2e9": { "views": [] }, "45d5689de53646fe9042f3ce9e281acc": { "views": [] }, "461aa21d57824526a6b61e3f9b5af523": { "views": [] }, "472ca253aab34b098f53ed4854d35f23": { "views": [] }, "4731208453424514b471f862804d9bb8": { "views": [ { "cell_index": 27 } ] }, "47dfef9eaf0e433cb4b3359575f39480": { "views": [] }, "48220a877d494a3ea0cc9dae19783a13": { "views": [] }, "4882c417949b4b6788a1c3ec208fb1ac": { "views": [] }, "49f5c38281984e3bad67fe3ea3eb6470": { "views": [] }, "4a0d39b43eee4e818d47d382d87d86d1": { "views": [] }, "4a470bf3037047f48f4547b594ac65fa": { "views": [] }, "4abab5bca8334dfbb0434be39eb550db": { "views": [] }, "4b48e08fd383489faa72fc76921eac4e": { "views": [] }, "4b9439e6445c4884bd1cde0e9fd2405e": { "views": [] }, "4b9fa014f9904fcf9aceff00cc1ebf44": { "views": [] }, "4bdc63256c3f4e31a8fa1d121f430518": { "views": [] }, "4bebb097ddc64bbda2c475c3a0e92ab5": { "views": [] }, "4c201df21ca34108a6e7b051aa58b7f6": { "views": [] }, "4ced8c156fd941eca391016fc256ce40": { "views": [] }, "4d281cda33fa489d86228370e627a5b0": { "views": [ { "cell_index": 27 } ] }, "4d85e68205d94965bdb437e5441b10a1": { "views": [] }, "4e0e6dd34ba7487ba2072d352fe91bf5": { "views": [] }, "4e82b1d731dd419480e865494f932f80": { "views": [] }, "4e9f52dea051415a83c4597c4f7a6c00": { "views": [] }, "4ec035cba73647358d416615cf4096ee": { "views": [ { "cell_index": 27 } ] }, "4f09442f99aa4a9e9f460f82a50317c4": { "views": [] }, "4f80b4e6b074475698efbec6062e3548": { "views": [] }, "4f905a287b4f4f0db64b9572432b0139": { "views": [] }, "50a339306cd549de86fbe5fa2a0a3503": { "views": [] }, "51068697643243e18621c888a6504434": { "views": [] }, "51333b89f44b41aba813aef099bdbb42": { "views": [] }, "5141ae07149b46909426208a30e2861e": { "views": [ { "cell_index": 27 } ] }, "515606cb3b3a4fccad5056d55b262db4": { "views": [] }, "51aa6d9f5a90481db7e3dd00d77d4f09": { "views": [] }, "524091ea717d427db2383b46c33ef204": { "views": [] }, "524d1132c88f4d91b15344cc427a9565": { "views": [] }, "52f70e249adc4edb8dca28b883a5d4f4": { "views": [] }, "531c080221f64b8ca50d792bbaa6f31e": { "views": [] }, "53349c544b54450f8e2af9b8ba176d78": { "views": [] }, "53a8b8e7b7494d02852a0dc5ccca51a2": { "views": [] }, "53c963469eee41b59479753201626f18": { "views": [] }, "5436516c280a49828c1c2f4783d9cf0e": { "views": [] }, "55a1b0b794f44ac796bc75616f65a2a1": { "views": [ { "cell_index": 27 } ] }, "55ebf735de4c4b5ba2f09bc51d3593fd": { "views": [] }, 
"56007830e925480e94a12356ff4fb6a4": { "views": [] }, "56def8b3867843f990439b33dab3da58": { "views": [] }, "5719bb596a5649f6af38c11c3daae6e9": { "views": [] }, "572245b145014b6e91a3b5fe55e4cf78": { "views": [] }, "5728da2e2d5a4c5595e1f49723151dca": { "views": [] }, "579673c076da4626bc34a34370702bd4": { "views": [] }, "57c2148f18314c3789c3eb9122a85c86": { "views": [] }, "58066439757048b98709d3b3f99efdf8": { "views": [] }, "58108da85e9443ea8ba884e8adda699e": { "views": [] }, "583f252174d9450196cdc7c1ebab744f": { "views": [] }, "58b92095873e4d22895ee7dde1f8e09a": { "views": [] }, "58be1833a5b344fb80ec86e08e8326da": { "views": [] }, "58ee0f251d7c4aca82fdace15ff52414": { "views": [] }, "590f2f9f8dc342b594dc9e79990e641f": { "views": [] }, "593c6f6b541e49be95095be63970f335": { "views": [] }, "593d3f780c1a4180b83389afdb9fecfe": { "views": [] }, "5945f05889be40019f93a90ecd681125": { "views": [] }, "595c537ed2514006ac823b4090cf3b4b": { "views": [ { "cell_index": 27 } ] }, "599cfb7471ec4fd29d835d2798145a54": { "views": [] }, "5a8d17dc45d54463a6a49bad7a7d87ac": { "views": [] }, "5bb323bde7e4454e85aa18fda291e038": { "views": [] }, "5bc5e0429c1e4863adc6bd1ff2225b6d": { "views": [] }, "5bd0fafc4ced48a5889bbcebc9275e40": { "views": [] }, "5ccf965356804bc38c94b06698a2c254": { "views": [] }, "5d1f96bedebf489cac8f820c783f7a14": { "views": [] }, "5d3fc58b96804b57aad1d67feb26c70a": { "views": [] }, "5d41872e720049198a319adc2f476276": { "views": [] }, "5d7a630da5f14cd4969b520c77bc5bc5": { "views": [] }, "5da153e0261e43af8fd1c3c5453cace0": { "views": [] }, "5dde90afb01e44888d3c92c32641d4e2": { "views": [] }, "5de2611543ff4475869ac16e9bf406fd": { "views": [] }, "5e03db9b91124e79b082f7e3e031a7d3": { "views": [] }, "5e576992ccfe4bb383c88f80d9746c1d": { "views": [] }, "5e91029c26c642a9a8c90186f3acba8e": { "views": [] }, "5ea2a6c21b9845d18f72757ca5af8340": { "views": [] }, "5ef08dc24584438c8bc6c618763f0bc8": { "views": [] }, "5f823979d2ce4c34ba18b4ca674724e4": { "views": [ { "cell_index": 27 } ] }, "5fc7b070fc1a4e809da4cda3a40fc6d9": { "views": [] }, "601ca9a27da94a6489d62ac26f2805a9": { "views": [] }, "605cbb1049a4462e9292961e62e55cee": { "views": [] }, "60addd9bec3f4397b20464fdbcf66340": { "views": [] }, "60e17d6811c64dc8a69b342abe20810a": { "views": [] }, "611840434d9046488a028618769e4b86": { "views": [] }, "627ab7014bbf404ba8190be17c22e79d": { "views": [] }, "633aa1edce474560956be527039800e7": { "views": [] }, "63b6e287d1aa48efad7c8154ddd8f9c4": { "views": [] }, "63dcfdb9749345bab675db257bda4b81": { "views": [] }, "640ba8cc905a4b47ad709398cc41c4e3": { "views": [] }, "644dcff39d7c47b7b8b729d01f59bee5": { "views": [ { "cell_index": 27 } ] }, "6455faf9dbc6477f8692528e6eb90c9a": { "views": [ { "cell_index": 27 } ] }, "64ca99573d5b48d2ba4d5815a50e6ffe": { "views": [] }, "65d7924ba8c44d3f98a1d2f02dc883f1": { "views": [] }, "665ed2b201144d78a5a1f57894c2267c": { "views": [ { "cell_index": 27 } ] }, "66742844c1cd47ddbbe9aacf2e805f36": { "views": [] }, "6678811915f14d0f86660fe90f63bd60": { "views": [] }, "66a04a5cf76e429cadbebfc527592195": { "views": [] }, "66e5c563ffe94e29bab82fdecbd1befa": { "views": [] }, "673066e0bb0b40e288e6750452c52bf6": { "views": [] }, "67ae0fb9621d488f879d0e3c458e88e9": { "views": [] }, "687702eca5f74e458c8d43447b3b9ed5": { "views": [] }, "68a4135d6f0a4bae95130539a2a44b3c": { "views": [] }, "68c3a74e9ea74718b901c812ed179f47": { "views": [] }, "694bd01e350449c2a40cd4ffc5d5a873": { "views": [] }, "6981c38c44ad4b42bfb453b36d79a0e6": { "views": [] }, "69e08ffffce9464589911cc4d2217df2": { "views": [] }, 
"6a28f605a5d14589907dba7440ede2fc": { "views": [ { "cell_index": 27 } ] }, "6a74dc52c2a54837a64ad461e174d4e0": { "views": [] }, "6ad1e0bf705141b3b6e6ab7bd6f842ea": { "views": [] }, "6b37935db9f44e6087d1d262a61d54ac": { "views": [] }, "6b402f0f3afb4d0dad0e2fa8b71aa890": { "views": [] }, "6bc95be59a054979b142d2d4a8900cf2": { "views": [] }, "6ce0ea52c2fc4a18b1cce33933df2be4": { "views": [] }, "6d7effd6bc4c40a4b17bf9e136c5814c": { "views": [ { "cell_index": 27 } ] }, "6d9a639e949c4d1d8a7826bdb9e67bb5": { "views": [] }, "6e18fafd95744f689c06c388368f1d21": { "views": [] }, "6e2bc4a1e3424e2085d0363b7f937884": { "views": [] }, "6e30c494930c439a996ba7c77bf0f721": { "views": [] }, "6e682d58cc384145adb151652f0e3d15": { "views": [] }, "6f08def65d27471b88fb14e9b63f9616": { "views": [] }, "6f20c1dc00ef4a549cd9659a532046bf": { "views": [] }, "6f605585550d4879b2f27e2fda0192be": { "views": [] }, "706dd4e39f194fbbba6e34acd320d1c3": { "views": [] }, "70f21ab685dc4c189f00a17a1810bbad": { "views": [] }, "7101b67c47a546c881fdaf9c934c0264": { "views": [] }, "71b0137b5ed741be979d1896762e5c75": { "views": [] }, "7223df458fdf4178af0b9596e231c09c": { "views": [] }, "7262519db6f94e2a9006c68c20b79d29": { "views": [] }, "72dfe79a3e52429da1cf4382e78b2144": { "views": [ { "cell_index": 27 } ] }, "72e8d31709eb4e3ea28af5cb6d072ab2": { "views": [] }, "73647a1287424ee28d2fb3c4471d720c": { "views": [] }, "739c5dde541a41e1afae5ba38e4b8ee3": { "views": [] }, "74187cc424a347a5aa73b8140772ec68": { "views": [] }, "7418edf751a6486c9fae373cde30cb74": { "views": [] }, "744302ec305b4405894ed1459b9d41d0": { "views": [] }, "74dfbaa15be44021860f7ba407810255": { "views": [] }, "750a30d80fd740aaabc562c0564f02a7": { "views": [] }, "75e344508b0b45d1a9ae440549d95b1a": { "views": [ { "cell_index": 27 } ] }, "766efd1cfee542d3ba068dfa1705c4eb": { "views": [] }, "7738084e8820466f9f763d49b4bf7466": { "views": [] }, "781855043f1147679745947ff30308fa": { "views": [] }, "78e2cfb79878452fa4f6e8baea88f822": { "views": [] }, "796027b3dd6b4b888553590fecd69b29": { "views": [] }, "7a302f58080c4420b138db1a9ed8103e": { "views": [] }, "7a3c362499f54884b68e951a1bcfc505": { "views": [] }, "7a4ee63f5f674454adf660bfcec97162": { "views": [] }, "7ac2c18126414013a1b2096233c88675": { "views": [] }, "7b1e3c457efa4f92ab8ff225a1a2c45e": { "views": [] }, "7b8897b4f8094eef98284f5bb1ed5d51": { "views": [] }, "7bbfd7b13dd242f0ac15b36bb437eb22": { "views": [] }, "7d3c88bc5a0f4b428174ff33d5979cfd": { "views": [] }, "7d4f53bd14d44f3f80342925f5b0b111": { "views": [] }, "7d95ca693f624336a91c3069e586ef1b": { "views": [] }, "7dcdc07b114e4ca69f75429ec042fabf": { "views": [] }, "7e79b941d7264d27a82194c322f53b80": { "views": [] }, "7f2f98bbffc0412dbb31c387407a9fed": { "views": [ { "cell_index": 27 } ] }, "7f4688756da74b369366c22fd99657f4": { "views": [] }, "7f7ed281359f4a55bbe75ce841dd1453": { "views": [] }, "7fdf429182a740a097331bddad58f075": { "views": [] }, "81b312df679f4b0d8944bc680a0f517e": { "views": [] }, "82036e8fa76544ae847f2c2fc3cf72c2": { "views": [] }, "821f1041188a43a4be4bdaeb7fa2f201": { "views": [] }, "827358a9b4ce49de802df37b7b673aea": { "views": [] }, "82db288a0693422cbd846cc3cb5f0415": { "views": [] }, "82e2820c147a4dff85a01bcddbad8645": { "views": [ { "cell_index": 27 } ] }, "82f795491023435e8429ea04ff4dc60a": { "views": [] }, "8317620833b84ccebc4020d90382e134": { "views": [] }, "8346e26975524082af27967748792444": { "views": [] }, "83f8ed39d0c34dce87f53f402d6ee276": { "views": [] }, "844ac22a0ebe46db84a6de7472fe9175": { "views": [] }, 
"849948fe6e3144e1b05c8df882534d5a": { "views": [] }, "85058c7c057043b185870da998e4be61": { "views": [] }, "85443822f3714824bec4a56d4cfed631": { "views": [] }, "8566379c7ff943b0bb0f9834ed4f0223": { "views": [] }, "85a3c6f9a0464390be7309edd36c323c": { "views": [] }, "85d7a90fbac640c9be576f338fa25c81": { "views": [] }, "85f31444b4e44e11973fd36968bf9997": { "views": [] }, "867875243ad24ff6ae39b311efb875d3": { "views": [] }, "8698bede085142a29e9284777f039c93": { "views": [] }, "86bf40f5107b4cb6942800f3930fdd41": { "views": [] }, "874c486c4ebb445583bd97369be91d9b": { "views": [] }, "87c469625bda412185f8a6c803408064": { "views": [] }, "87d4bd76591f4a9f991232ffcff3f73b": { "views": [] }, "87df3737c0fc4e848fe4100b97d193df": { "views": [] }, "886b599c537b467ab49684d2c2f8fb78": { "views": [] }, "889e19694e8043e289d8efc269eba934": { "views": [] }, "88c628983ad1475ea3a9403f6fea891c": { "views": [] }, "88c807c411d34103ba2e31b2df28b947": { "views": [] }, "895ddca8886b4c06ad1d71326ca2f0af": { "views": [] }, "899cc011a1bd4046ac798bc5838c2150": { "views": [] }, "89d0e7a3090c47df9689d8ca28914612": { "views": [] }, "89ea859f8bbd48bb94b8fa899ab69463": { "views": [] }, "8a600988321e4e489450d26dedaa061f": { "views": [] }, "8adcca252aff41a18cca5d856c17e42f": { "views": [] }, "8b2fe9e4ea1a481089f73365c5e93d8b": { "views": [] }, "8b5acd50710c4ca185037a73b7c9b25c": { "views": [] }, "8bbdba73a1454cac954103a7b1789f75": { "views": [] }, "8cffde5bdb3d4f7597131b048a013929": { "views": [ { "cell_index": 27 } ] }, "8db2abcad8bc44df812d6ccf2d2d713c": { "views": [ { "cell_index": 27 } ] }, "8dd5216b361c44359ba1233ee93683a4": { "views": [ { "cell_index": 27 } ] }, "8e13719438804be4a0b74f73e25998cd": { "views": [] }, "8eb4ff3279fe4d43a9d8ee752c78a956": { "views": [] }, "8f577d437d4743fd9399fefcd8efc8cb": { "views": [] }, "8f8fbe8fd1914eae929069aeeac16b6d": { "views": [] }, "8f9b8b5f7dd6425a9e8e923464ab9528": { "views": [] }, "8f9e3422db114095a72948c37e98dd3e": { "views": [] }, "8fd325068289448d990b045520bad521": { "views": [] }, "9039bc40a5ad4a1c87272d82d74004e2": { "views": [] }, "90bf5e50acbb4bccad380a6e33df7e40": { "views": [] }, "91028fc3e4bc4f6c8ec752b89bcf3139": { "views": [] }, "9274175be7fb47f4945e78f96d39a7a6": { "views": [] }, "929245675b174fe5bfa102102b8db897": { "views": [] }, "92be1f7fb2794c9fb25d7bbb5cbc313d": { "views": [] }, "933904217b6045c1b654b7e5749203f5": { "views": [ { "cell_index": 27 } ] }, "936bc7eb12e244c196129358a16e14bb": { "views": [] }, "936c09f4dde8440b91e9730a0212497c": { "views": [] }, "9406b6ae7f944405a0e8a22f745a39b2": { "views": [] }, "942a96eea03740719b28fcc1544284d4": { "views": [] }, "94840e902ffe4bbba5b374ff4d26f19f": { "views": [] }, "948d01f0901545d38e05f070ce4396e4": { "views": [] }, "94e2a0bc2d724f7793bb5b6d25fc7088": { "views": [] }, "94f2b877a79142839622a61a3a081c03": { "views": [ { "cell_index": 27 } ] }, "94f30801a94344129363c8266bf2e1f8": { "views": [] }, "95b127e8aff34a76a813783a6a3c6369": { "views": [] }, "95d44119bf714e42b163512d9a15bbc5": { "views": [] }, "95f016e9ea9148a4a3e9f04cb8f5132d": { "views": [] }, "968e9e9de47646409744df3723e87845": { "views": [] }, "97207358fc65430aa196a7ed78b252f0": { "views": [ { "cell_index": 27 } ] }, "9768d539ee4044dc94c0bd5cfb827a18": { "views": [] }, "98587702cc55456aa881daf879d2dc8d": { "views": [] }, "986c6c4e92964759903d6eb7f153df8a": { "views": [ { "cell_index": 27 } ] }, "987d808edd63404f8d6f2ce42efff33a": { "views": [] }, "9895c26dfb084d509adc8abc3178bad3": { "views": [] }, "994bc7678f284a24a8700b2a69f09f8d": { "views": [] }, 
"99eee4e3d9c34459b12fe14cee543c28": { "views": [] }, "9a5c0b0805034141a1c96ddd57995a3c": { "views": [] }, "9a7862bb66a84b4f897924278a809ef3": { "views": [] }, "9b812f733f6a4b60ba4bf725959f7913": { "views": [] }, "9bb5ae9ff9c94fe7beece9ce43f519af": { "views": [] }, "9bfde7b437fb4e76a16a49574ea5b7ec": { "views": [] }, "9c1d14484b6d4ab3b059731f17878d14": { "views": [] }, "9c7a66ead55e48c8b92ef250a5a464b7": { "views": [] }, "9ce50a53aafe439ebb19fff363c1bfe2": { "views": [] }, "9d5e9658af264ad795f6a5f3d8c3c30f": { "views": [ { "cell_index": 27 } ] }, "9d7aa65511b6482d9587609ad7898f54": { "views": [ { "cell_index": 27 } ] }, "9d87f94baf454bd4b529e55e0792a696": { "views": [] }, "9de4bd9c6a7b4f3dbd401df15f0b9984": { "views": [] }, "9dfd6b08a2574ed89f0eb084dae93f73": { "views": [] }, "9e1dffcb1d9d48aaafa031da2fb5fed9": { "views": [] }, "9efb46d2bb0648f6b109189986f4f102": { "views": [ { "cell_index": 27 } ] }, "9f1439500d624f769dd5e5c353c46866": { "views": [] }, "9f27ba31ccc947b598dc61aefca16a7f": { "views": [] }, "9f31a58b6e8e4c79a92cf65c497ee000": { "views": [] }, "9f43f85a0fb9464e9b7a25a85f6dba9c": { "views": [ { "cell_index": 27 } ] }, "9f4970dc472946d48c14e93e7f4d4b70": { "views": [] }, "9f5dd25217a84799b72724b2a37281ea": { "views": [] }, "9faa50b44e1842e0acac301f93a129c4": { "views": [ { "cell_index": 27 } ] }, "a0202917348d4c41a176d9871b65b168": { "views": [] }, "a058f021f4ca4daf8ab830d8542bf90b": { "views": [] }, "a0a2dded995543a6b68a67cd91baa252": { "views": [] }, "a0e170b3ea484fd984985d2607f90ef3": { "views": [] }, "a168e79f4cbb44c8ac7214db964de5f2": { "views": [] }, "a182b774272b48238b55e3c4d40e6152": { "views": [] }, "a1840ca22d834df2b145151baf6d8241": { "views": [ { "cell_index": 27 } ] }, "a1bb2982e88e4bb1a2729cc08862a859": { "views": [] }, "a1d897a6094f483d8fc9a3638fbc179d": { "views": [] }, "a231ee00d2b7404bb0ff4e303c6b04ee": { "views": [] }, "a29fdc2987f44e69a0343a90d80c692c": { "views": [] }, "a2de3ac1f4fe423997c5612b2b21c12f": { "views": [] }, "a30ba623acec4b03923a2576bcfcbdf5": { "views": [] }, "a3357d5460c5446196229eae087bb19e": { "views": [] }, "a358d9ecd754457db178272315151fa3": { "views": [] }, "a35aec268ac3406daa7fe4563f83f948": { "views": [] }, "a38c5ed35b9945008341c2d3c0ef1470": { "views": [] }, "a39cfb47679c4d2895cda12c6d9d2975": { "views": [ { "cell_index": 27 } ] }, "a55227f2fd5d42729fc4fd39a8c11914": { "views": [] }, "a65af2c8506d47ec803c15815e2ab445": { "views": [] }, "a6d2366540004eeaab760c8be196f10a": { "views": [] }, "a709f15a981a468b9471a0f672f961a7": { "views": [] }, "a7258472ad944d038cd227de28d9155f": { "views": [] }, "a72eb43242c34ef19399c52a77da8830": { "views": [] }, "a7568aed621548649e37cfa6423ca198": { "views": [] }, "a83f7f5c09a845ecb3f5823c1d178a54": { "views": [] }, "a87c651448f14ce4958d73c2f1e413e1": { "views": [ { "cell_index": 27 } ] }, "a8e78f5bc64e412ab44eb9c293a7e63b": { "views": [] }, "a996d507452241e0b99aabe24eecbdd9": { "views": [] }, "a9a4b7a2159e40f8aa93a50f11048342": { "views": [] }, "a9cc48370b964a888f8414e1742d6ff2": { "views": [] }, "a9dcbe9e9a4445bf9cf8961d4c1214a6": { "views": [] }, "aab29dfddb98416ea815475d6c6a3eed": { "views": [] }, "ab89783a86bc4939a5f78957f4019553": { "views": [] }, "abaee5bb577d4a68b6898d637a4c7898": { "views": [] }, "abecb04251e04260860074b8bdad088a": { "views": [] }, "acc07b8cf2cf4d50ae1bceef2254637f": { "views": [] }, "ae3ee1ee05a2443c8bf2f79cd9e86e56": { "views": [] }, "ae4e85e2bceb4ec783dbfaaf3a174ea7": { "views": [] }, "aec1a51db98f470cb0854466f3461fc1": { "views": [] }, "afc5dccd3db64a1592ee0b2fd516b71d": { 
"views": [] }, "afe28f5bae8941b19717e3d7285ddc61": { "views": [] }, "b00516b171544bca9113adc99ed528a1": { "views": [] }, "b005d7f2afbe479eb02678447a079a1a": { "views": [] }, "b020ad1a7750461bb79fe4e74b9384f6": { "views": [] }, "b07d0aab375142978e1261a6a4c94b10": { "views": [] }, "b2c18df5c51649cdbdaf64092fc945b3": { "views": [] }, "b410c14ee52d4af49c08da115db85ac7": { "views": [] }, "b41220079b2b49c2ba6f59dcfe9e7757": { "views": [] }, "b445a187ca6943bbb465782a67288ce5": { "views": [] }, "b4dfb435038645dc9673ea4257fc26f3": { "views": [] }, "b5633708bd8b4abdaec77a96aca519bb": { "views": [] }, "b59b2622026d4ec582354d919e16f658": { "views": [] }, "b635f31747e14f989c7dee2ba5d5caa5": { "views": [] }, "b63dfdde813a4f019998e118b5168943": { "views": [] }, "b6c3d440986d44ed88a9471a69b70e05": { "views": [] }, "b6ee195c9bfd48ee8526b8cf0f3322b9": { "views": [] }, "b7064dd21c9949d79f40c73fee431dff": { "views": [] }, "b7537298609f4d64b8e36692b84f376c": { "views": [] }, "b755013f41fa4dce8e2bab356d85d26d": { "views": [] }, "b7cd4bfabc2e40fe9f30de702ae63716": { "views": [] }, "b7e4c497ff5c4173961ffdc3bd3821a9": { "views": [ { "cell_index": 27 } ] }, "b821a13ce3e8453d85f07faccc95fee1": { "views": [] }, "b86ea9c1f1ee45a380e35485ad4e2fac": { "views": [] }, "b87f4d4805944698a0011c10d626726c": { "views": [] }, "b8e173c7c8be41df9161cbbe2c4c6c86": { "views": [] }, "b9322adcd8a241478e096aa1df086c78": { "views": [] }, "b9ad471398784b6889ce7a1d2ef5c4c0": { "views": [] }, "b9c138598fce460692cc12650375ee52": { "views": [ { "cell_index": 27 } ] }, "ba146eb955754db88ba6c720e14ea030": { "views": [] }, "ba48cba009e8411ea85c7e566a47a934": { "views": [] }, "bb2793de83a64688b61a2007573a8110": { "views": [] }, "bb53891d7f514a17b497f699484c9aed": { "views": [] }, "bbe5dea9d57d466ba4e964fce9af13cf": { "views": [ { "cell_index": 27 } ] }, "bbe88faf528d44a0a9083377d733d66a": { "views": [] }, "bc0525d022404722a921132e61319e46": { "views": [] }, "bc320fb35f5744cc82486b85f7a53b6f": { "views": [] }, "bc900e9562c546f9ae3630d5110080ec": { "views": [] }, "bcbf6b3ff19d4eb5aa1b8a57672d7f6f": { "views": [] }, "bccf183ccb0041e380732005f2ca2d0a": { "views": [] }, "bd0d18e3441340a7a56403c884c87a8e": { "views": [] }, "bd21e4fe92614c22a76ae515077d2d11": { "views": [] }, "bd5b05203cfd402596a6b7f076c4a8f8": { "views": [] }, "beb0c9b29d8d4d69b3147af666fa298b": { "views": [ { "cell_index": 27 } ] }, "bf0d147a6a1346799c33807404fa1d46": { "views": [] }, "c03d4477fa2a423dba6311b003203f62": { "views": [] }, "c05697bcb0a247f78483e067a93f3468": { "views": [] }, "c09c3d0e94ca4e71b43352ca91b1a88a": { "views": [] }, "c0d015a0930e4ddf8f10bbace07c0b24": { "views": [] }, "c15edd79a0fd4e24b06d1aae708a38c4": { "views": [] }, "c20b6537360f4a70b923e6c5c2ba7d9b": { "views": [] }, "c21fff9912924563b28470d32f62cd44": { "views": [] }, "c2482621d28542268a2b0cbf4596da37": { "views": [] }, "c25bd0d8054b4508a6b427447b7f4576": { "views": [] }, "c301650ac4234491af84937a8633ad76": { "views": [] }, "c333a0964b1e43d0817e73cb47cf0317": { "views": [] }, "c36213b1566843ceb05b8545f7d3325c": { "views": [] }, "c37d0add29fa4f41a47caf6538ec6685": { "views": [] }, "c409a01effb945c187e08747e383463c": { "views": [] }, "c4e104a7b731463688e0a8f25cf50246": { "views": [] }, "c54f609af4e94e93b57304bc55e02eba": { "views": [] }, "c576bf6d24184f3a9f31d4f40231ce87": { "views": [] }, "c58ab80a895344008b5aadd8b8c628a4": { "views": [] }, "c5d28bea41da447e88f4cec9cfaaf197": { "views": [] }, "c74bbd55a8644defa3fcef473002a626": { "views": [ { "cell_index": 27 } ] }, "c856e77b213b400599b6e026baaa4c85": { 
"views": [] }, "c894f9e350a1473abb28ff651443ae6f": { "views": [] }, "c8e3827ae28b45bc9768a8c3e35cc8b1": { "views": [] }, "c95bf1935b71400e98c63722b77caa08": { "views": [] }, "c9e5129d30ea4b78b846e8e92651b0e9": { "views": [] }, "ca2123c7b103485c851815cbcb4a6c17": { "views": [] }, "ca34917db02148168daf0c30ceed7466": { "views": [] }, "caa6adf7b0d243da8229c317c7482fe3": { "views": [] }, "cb924475ebb64e76964f88e830979d38": { "views": [] }, "cba1473ccaee4b2a89aba4d2b4b1e648": { "views": [] }, "cbd735eb8eb446069ee912d795ccaf14": { "views": [] }, "cc0ee37900ef40069515c79e99a9a875": { "views": [] }, "cc564bca35c743b89697f5cfd4ecccc2": { "views": [] }, "cc5a47588e2b4c8eb5deff560a0256c2": { "views": [] }, "ccc64ac3a8a84ae9815ff9e8bdc3279d": { "views": [] }, "cd02a06cec7342438f8585af6227db96": { "views": [] }, "cd236465e91d4a90a2347e6baab6ab71": { "views": [] }, "cd9a0aa1700a4407ab445053029dca18": { "views": [] }, "cdd6c6a945a74c568d611b42e4ba8a1a": { "views": [] }, "cdf0323ea1324c0b969f49176ecee1c2": { "views": [] }, "ce3a0e82e80d48b9b2658e0c52196644": { "views": [ { "cell_index": 27 } ] }, "ce6ad0459f654b6785b3a71ccdf05063": { "views": [] }, "ce8d3cd3535b459c823da2f49f3cc526": { "views": [ { "cell_index": 27 } ] }, "cf8c8f791d0541ffa4f635bb07389292": { "views": [] }, "cfed29ab68f244e996b0d571c31020ec": { "views": [] }, "d034cbd7b06a448f98b3f11b68520c08": { "views": [] }, "d13135f5facc4c5996549a85974145a1": { "views": [] }, "d18c7c17fa93493ebc622fe3d2c0d44e": { "views": [] }, "d23b743d7d0342aca257780f2df758d6": { "views": [] }, "d2fe43f4a2064078a6c8da47f8afb903": { "views": [] }, "d34f626ca035456bb9e0c9ad2a9dced1": { "views": [] }, "d359911be08f4342b20e86a954cd060f": { "views": [] }, "d4d76a1c09a342e79cd6733886626459": { "views": [] }, "d58d12f54e2b426fba4ca611b0ffc68f": { "views": [] }, "d5e2a77d429d4ca0969e1edec5dc2690": { "views": [] }, "d5f4bbe3242245f0a2c3b18a284e55f8": { "views": [] }, "d6c325f3069a4186b3022619f4280c37": { "views": [] }, "d6d46520bbcf495bad20bcd266fe1357": { "views": [] }, "d72b7c8058324d1bb56b6574090ccda6": { "views": [] }, "d73bbb49a33d49e187200fa7c8f23aaa": { "views": [] }, "d80e4f8eb9a54aef8b746e38d8c3ef1b": { "views": [] }, "d819255bc7104ee8b9466b149dba5bff": { "views": [] }, "d819fcff913441d39a41982518127af5": { "views": [] }, "d8295021db704345a63c9ff9d692b761": { "views": [] }, "d83329fe36014f85bb5d0247d3ae4472": { "views": [ { "cell_index": 27 } ] }, "d88a0305cc224037a14e5040ed8e13af": { "views": [] }, "d89b81d63c6048ff800d3380bf921ac0": { "views": [] }, "d8d8667ab50944e4b066d648aa3c8e2a": { "views": [] }, "d8fd2b5ef6e24628b2b5102d3cd375f3": { "views": [] }, "d9579a126d5f44a3bc0a731e0ad55f24": { "views": [] }, "da51bd4d4fd848699919e3973b2fabc2": { "views": [] }, "dba5a5a8fec346b2bcdc88f4ce294550": { "views": [] }, "dc201c38ac434cb8a424553f1fa5a791": { "views": [] }, "dc631df85ae84ffc964acd7a76e399ce": { "views": [] }, "dc7376a2272e44179f237e5a1c7f6a49": { "views": [ { "cell_index": 27 } ] }, "dc8a45203a0a457c927f582f9d576e5d": { "views": [] }, "dcc0e1ea9e994fc0827d9d7f648e4ad9": { "views": [] }, "dce6f4cb98094ee1b06c0dd0ff8f488a": { "views": [] }, "dcfc688de41b4ed7a8f89ae84089d5c0": { "views": [] }, "dd486b2cbda84c83ace5ceaee8a30ff8": { "views": [] }, "ddcfbf7b97714357920ba9705e8d4ab0": { "views": [] }, "ddd4485714564c65b70bd865783076af": { "views": [] }, "de7738417f1040b1a06ad25e485eb91d": { "views": [] }, "df4cada92e484fd4ae75026eaf1845e2": { "views": [] }, "dfb3707b4a01441c8a0a1751425b8e1c": { "views": [] }, "e03b701a52d948aab86117c928cbe275": { "views": [] }, 
"e0a614fe085c4d3c835c78d6ada60a40": { "views": [] }, "e138e0c7d5a4471d99bbdac50de00fe1": { "views": [] }, "e154289ce1774450a9a51ac45a1d5725": { "views": [] }, "e25c1d2c78c94c9a805920df36268508": { "views": [] }, "e281172ebc7f48b5ae6545b16da79477": { "views": [] }, "e2862bd7efac4bc0b23532705f5e46c4": { "views": [] }, "e2cd9bb21f254e08885f43fd6e968879": { "views": [] }, "e2f4acecaf194351b8e67439440a9966": { "views": [] }, "e3198c124ac841a79db062efa81f6812": { "views": [] }, "e36f3009f61a4f5ba047562e70330add": { "views": [] }, "e3765274f28b4a55a82d9115ded151de": { "views": [] }, "e37e3fba3b40413180cd30e594bf62bd": { "views": [] }, "e3f9760867fa410fbdc4611aef1cee18": { "views": [] }, "e4331c134ab24f9cae99d476dfa04c89": { "views": [] }, "e46db59e121045169a1ea5313b1748b7": { "views": [] }, "e475d1e00f9d48edadac886fb53c2a20": { "views": [] }, "e48449d21c2d4360b851169468066470": { "views": [] }, "e4c26b8a42b54e959b276a174f2c2795": { "views": [] }, "e4e55dabd92f4c17b78ed4b6881842e8": { "views": [] }, "e4e5dd3dc28d4aa3ab8f8f7c4a475115": { "views": [ { "cell_index": 27 } ] }, "e516fd8ebfc6478c95130d6edec77c88": { "views": [] }, "e5afb8d0e8a94c4dac18f2bbf1d042ce": { "views": [] }, "e5bcb13bf2e94afc857bcbb37f6d4d87": { "views": [] }, "e64ab85e80184b70b69d01a9c6851943": { "views": [ { "cell_index": 27 } ] }, "e66b26fb788944ba83b7511d79b85dc5": { "views": [] }, "e73434cfcc854429ac27ddc9c9b07f5e": { "views": [] }, "e7a8244ea5a84493b3b5bdeaf92a50b4": { "views": [] }, "e81ed2c281df4f06bc1d4e6b67c574b4": { "views": [] }, "e85ff7ccdc034c268df9cb0e95e9b850": { "views": [] }, "e8a198bff55a437eab56887563cd9a6e": { "views": [] }, "e92ede4cfc96436b84e63809bcb22385": { "views": [] }, "e949474f6aa64c5dada603476ea6cabd": { "views": [] }, "e98e59c3156c49c1bb27be7a478c3654": { "views": [] }, "e9ea6f88d1334fbcab7f9c9a11cf4a50": { "views": [] }, "ea09e5da878c42f2b533856dc3149e3e": { "views": [] }, "ea74036074054593b1cc31fec030d2a2": { "views": [] }, "ea8d97fb8c0d499095cceb133e4d7d9c": { "views": [] }, "eafbea5bce1f4ab4bcbb0aa08598af0f": { "views": [] }, "ec01e6cdc5a54f068f1bb033415b4a06": { "views": [] }, "ec2d1f18f2e841b184f5d4cd15979d46": { "views": [] }, "ec923af478b94ad99bdfd3257f48cb06": { "views": [] }, "ed02e2272e844678979bd6a3c00f5cb3": { "views": [] }, "ed80296f5f5e42e694dfc5cc7fd3acee": { "views": [] }, "ee4df451ca9d4ed48044b25b19dc3f3f": { "views": [] }, "ee77219007884e089fc3c1479855c469": { "views": [] }, "ef372681937b4e90a04b0d530b217edb": { "views": [] }, "ef452efe39d34db6b4785cb816865ca3": { "views": [] }, "efcb07343f244ff084ea49dbc7e3d811": { "views": [] }, "f083a8e4c8574fe08f5eb0aac66c1e71": { "views": [] }, "f09d7c07bec64811805db588515af7f6": { "views": [] }, "f0ef654c93974add9410a6e243e0fbf2": { "views": [] }, "f20d7c2fcf144f5da875c6af5ffd35cb": { "views": [] }, "f234eb38076146b9a640f44b7ef30892": { "views": [] }, "f24d087598434ed1bb7f5ae3b0b4647a": { "views": [] }, "f262055f3f1b48029f9e2089f752b0b8": { "views": [ { "cell_index": 27 } ] }, "f2d40a380f884b1b95992ccc7c3df04e": { "views": [] }, "f2e2e2e5177542aa9e5ca3d69508fb89": { "views": [] }, "f31914f694384908bec466fc2945f1c7": { "views": [] }, "f31cbea99df94f2281044c369ef1962d": { "views": [] }, "f32c6c5551f540709f7c7cd9078f1aad": { "views": [] }, "f337eb824d654f0fbd688e2db3c5bf7b": { "views": [] }, "f36f776a7767495cbda2f649c2b3dd48": { "views": [] }, "f3cef080253c46989413aad84b478199": { "views": [] }, "f3df35ce53e0466e81a48234b36a1430": { "views": [ { "cell_index": 27 } ] }, "f3fa0f8a41ab4ede9c4e20f16e35237d": { "views": [] }, 
"f42e4f996f254a1bb7fe6f4dfc49aba3": { "views": [] }, "f437babcddc64a8aa238fc7013619fbb": { "views": [] }, "f44a5661ed1f4b5d97849cf4bb5e862e": { "views": [] }, "f44d24e28afa475da40628b4fd936922": { "views": [] }, "f44d5e6e993745b8b12891d1f3af3dc3": { "views": [] }, "f457cb5e76be46a29d9f49ba0dc135f1": { "views": [] }, "f4691cbe84534ef6b7d3fca530cf1704": { "views": [] }, "f4ca26fbbdbf49dda5d1b8affdecfa3e": { "views": [] }, "f54998361fe84a8a95b2607fbe367d52": { "views": [] }, "f54bdb1d3bfb47af9e7aaabb4ed12eff": { "views": [] }, "f54c28b82f7d498b83bf6908e19b6d1b": { "views": [] }, "f5cc05fcee4d4c3e80163c6e9c072b6e": { "views": [] }, "f621b91a209e4997a47cf458f8a5027f": { "views": [] }, "f665bf176eb443f6867cef8fdd79b4e5": { "views": [] }, "f6e27824f5e84bd8b4671e9eb030b20f": { "views": [] }, "f6f162ac0811434ea95875f6335bd484": { "views": [] }, "f6f629e6fb164c97acdc50c25d1354ee": { "views": [] }, "f71adee125f74ddd8302aa2796646d67": { "views": [] }, "f731d66445aa4543800a6bb3e9267936": { "views": [] }, "f8f8e8c27fff45afa309a849d1655e29": { "views": [] }, "f913752b9e86487cb197f894d667d432": { "views": [] }, "f92cde8d24064ae5afd4cd577eaa895a": { "views": [] }, "f944674b7ca345a582de627055614499": { "views": [] }, "f9458080ed534d25856c67ce8f93d5a1": { "views": [ { "cell_index": 27 } ] }, "f986f98d05dd4b9fa8a3c1111c1cea9b": { "views": [] }, "f9f7bc097f654e41b68f2d849c99a1a1": { "views": [] }, "fa00693458bc45669e2ed4ee536e98d6": { "views": [] }, "fa2f219e60ff453da3842df62a371813": { "views": [] }, "fa6cbfe76fff48848dc08a9344de84ff": { "views": [] }, "fb3b6d5e405d4e1b87e82bcc8ae3df0f": { "views": [] }, "fbe27ee7dc93467292b67f68935ae6f0": { "views": [] }, "fc494b2bcade4c3a890f08386dd8aab0": { "views": [] }, "fd98ac9b76cc44f09bc3b684caf1882d": { "views": [] }, "feb9bf5d951c40d4a87d57a4de5e819a": { "views": [] }, "fedfd679505d409fa74ccaa52b87fcce": { "views": [] }, "fef0278d4386407f96c44b4affe437b8": { "views": [] }, "ff29b06d50b048d6bbcbdb5a8665dcde": { "views": [] }, "ff3c868e31c0430dbf5b85415da9a24b": { "views": [] }, "ff8a91a101044f4fba19cdfffc39e0d3": { "views": [] }, "ffbca26ec77b492bbbda1be40b044d8e": { "views": [] }, "fff5f5bc334942bd851ac24f782f4f3c": { "views": [] } }, "version": "1.1.1" } }, "nbformat": 4, "nbformat_minor": 1 }