       "        <span class=\"k\">return</span> <span class=\"bp\">self</span><span class=\"o\">.</span><span class=\"n\">seq</span><span class=\"o\">.</span><span class=\"n\">pop</span><span class=\"p\">(</span><span class=\"mi\">0</span><span class=\"p\">)</span>\n",
       "\n",
       "    <span class=\"k\">def</span> <span class=\"nf\">update_state</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">percept</span><span class=\"p\">):</span>\n",
       "        <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
       "\n",
       "    <span class=\"k\">def</span> <span class=\"nf\">formulate_goal</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">state</span><span class=\"p\">):</span>\n",
       "        <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
       "\n",
       "    <span class=\"k\">def</span> <span class=\"nf\">formulate_problem</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">state</span><span class=\"p\">,</span> <span class=\"n\">goal</span><span class=\"p\">):</span>\n",
       "        <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
       "\n",
       "    <span class=\"k\">def</span> <span class=\"nf\">search</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">problem</span><span class=\"p\">):</span>\n",
       "        <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
       "</pre></div>\n",
       "</body>\n",
       "</html>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
    "psource(SimpleProblemSolvingAgentProgram)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The SimpleProblemSolvingAgentProgram class has six methods:  \n",
    "\n",
    "* `__init__(self, initial_state=None)`: This is the `constructor` of the class and is the first method to be called when the class is instantiated. It takes in a keyword argument, `initial_state`, which defaults to `None`. The argument `initial_state` represents the state from which the agent starts.\n",
    "\n",
    "* `__call__(self, percept)`: This method updates the `state` of the agent based on its `percept` using the `update_state` method. It then formulates a `goal` with the help of `formulate_goal` method and a `problem` using the `formulate_problem` method and returns a sequence of actions to solve it (using the `search` method).\n",
    "\n",
    "* `update_state(self, percept)`: This method updates the `state` of the agent based on its `percept`.\n",
    "\n",
    "* `formulate_goal(self, state)`: Given a `state` of the agent, this method formulates the `goal` for it.\n",
    "\n",
    "* `formulate_problem(self, state, goal)`: It is used in problem formulation given a `state` and a `goal` for the `agent`.\n",
    "\n",
    "* `search(self, problem)`: This method is used to search a sequence of `actions` to solve a `problem`."
   ]
  },
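  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this interface concrete, here is a minimal, hypothetical subclass for a toy world in which the agent simply walks right along a line of integer positions. The class name `LineWorldAgent`, the fixed goal position and the trivial `search` strategy are illustrative assumptions and are not part of the search module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only: LineWorldAgent and its fixed goal are assumptions, not part of the module.\n",
    "class LineWorldAgent(SimpleProblemSolvingAgentProgram):\n",
    "    def update_state(self, percept):\n",
    "        # assume the percept is simply the agent's current position\n",
    "        return percept\n",
    "\n",
    "    def formulate_goal(self, state):\n",
    "        # assumed goal: reach position 3\n",
    "        return 3\n",
    "\n",
    "    def formulate_problem(self, state, goal):\n",
    "        # the problem is just the pair (current position, goal position)\n",
    "        return (state, goal)\n",
    "\n",
    "    def search(self, problem):\n",
    "        # a trivial 'search': move right once for each step between state and goal\n",
    "        state, goal = problem\n",
    "        return ['Right'] * max(goal - state, 0)\n",
    "\n",
    "# agent = LineWorldAgent(0)\n",
    "# agent(0)  # with the percept being the current position, this should return 'Right'"
   ]
  },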
  {
   "cell_type": "markdown",
    "## SEARCHING ALGORITHMS VISUALIZATION\n",
    "In this section, we have visualizations of the following searching algorithms:\n",
    "1. Breadth First Tree Search - Implemented\n",
    "2. Depth First Tree Search - Implemented\n",
    "3. Depth First Graph Search - Implemented\n",
    "4. Breadth First Search - Implemented\n",
    "5. Best First Graph Search - Implemented\n",
    "6. Uniform Cost Search - Implemented\n",
    "7. Depth Limited Search\n",
    "8. Iterative Deepening Search\n",
    "9. A\\*-Search - Implemented\n",
    "10. Recursive Best First Search\n",
    "\n",
    "We add the colors to the nodes to have a nice visualisation when displaying. So, these are the different colors we are using in these visuals:\n",
    "* Un-explored nodes - <font color='black'>white</font>\n",
    "* Frontier nodes - <font color='orange'>orange</font>\n",
    "* Currently exploring node - <font color='red'>red</font>\n",
    "* Already explored nodes - <font color='gray'>gray</font>\n",
    "Now, we will define some helper methods to display interactive buttons and sliders when visualising search algorithms."
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def final_path_colors(problem, solution):\n",
    "    \"returns a node_colors dict of the final path provided the problem and solution\"\n",
    "    \n",
    "    # get initial node colors\n",
    "    final_colors = dict(initial_node_colors)\n",
    "    # color all the nodes in solution and starting node to green\n",
    "    final_colors[problem.initial] = \"green\"\n",
    "    for node in solution:\n",
    "        final_colors[node] = \"green\"  \n",
    "    return final_colors\n",
    "\n",
    "\n",
    "def display_visual(user_input, algorithm=None, problem=None):\n",
    "    if user_input == False:\n",
    "        def slider_callback(iteration):\n",
    "            # don't show graph for the first time running the cell calling this function\n",
    "            try:\n",
    "                show_map(all_node_colors[iteration])\n",
    "            except:\n",
    "                pass\n",
    "        def visualize_callback(Visualize):\n",
    "            if Visualize is True:\n",
    "                button.value = False\n",
    "                \n",
    "                global all_node_colors\n",
    "                \n",
    "                iterations, all_node_colors, node = algorithm(problem)\n",
    "                solution = node.solution()\n",
    "                all_node_colors.append(final_path_colors(problem, solution))\n",
    "                \n",
    "                slider.max = len(all_node_colors) - 1\n",
    "                \n",
    "                for i in range(slider.max + 1):\n",
    "                    slider.value = i\n",
    "                     #time.sleep(.5)\n",
    "        \n",
    "        slider = widgets.IntSlider(min=0, max=1, step=1, value=0)\n",
    "        slider_visual = widgets.interactive(slider_callback, iteration = slider)\n",
    "        display(slider_visual)\n",
    "        button = widgets.ToggleButton(value = False)\n",
    "        button_visual = widgets.interactive(visualize_callback, Visualize = button)\n",
    "        display(button_visual)\n",
    "    \n",
    "    if user_input == True:\n",
    "        node_colors = dict(initial_node_colors)\n",
    "        if algorithm == None:\n",
    "            algorithms = {\"Breadth First Tree Search\": breadth_first_tree_search,\n",
    "                          \"Depth First Tree Search\": depth_first_tree_search,\n",
    "                          \"Breadth First Search\": breadth_first_search,\n",
    "                          \"Depth First Graph Search\": depth_first_graph_search,\n",
    "                          \"Uniform Cost Search\": uniform_cost_search,\n",
    "                          \"A-star Search\": astar_search}\n",
    "            algo_dropdown = widgets.Dropdown(description = \"Search algorithm: \",\n",
    "                                             options = sorted(list(algorithms.keys())),\n",
    "                                             value = \"Breadth First Tree Search\")\n",
    "            display(algo_dropdown)\n",
    "        \n",
    "        def slider_callback(iteration):\n",
    "            # don't show graph for the first time running the cell calling this function\n",
    "            try:\n",
    "                show_map(all_node_colors[iteration])\n",
    "            except:\n",
    "                pass\n",
    "        def visualize_callback(Visualize):\n",
    "            if Visualize is True:\n",
    "                button.value = False\n",
    "                \n",
    "                problem = GraphProblem(start_dropdown.value, end_dropdown.value, romania_map)\n",
    "                global all_node_colors\n",
    "                \n",
    "                if algorithm == None:\n",
    "                    user_algorithm = algorithms[algo_dropdown.value]\n",
    "                \n",
    "#                 print(user_algorithm)\n",
    "#                 print(problem)\n",
    "                \n",
    "                iterations, all_node_colors, node = user_algorithm(problem)\n",
    "                solution = node.solution()\n",
    "                all_node_colors.append(final_path_colors(problem, solution))\n",
    "                slider.max = len(all_node_colors) - 1\n",
    "                \n",
    "                for i in range(slider.max + 1):\n",
    "                    slider.value = i\n",
    "#                     time.sleep(.5)\n",
    "        start_dropdown = widgets.Dropdown(description = \"Start city: \",\n",
    "                                          options = sorted(list(node_colors.keys())), value = \"Arad\")\n",
    "        display(start_dropdown)\n",
    "        end_dropdown = widgets.Dropdown(description = \"Goal city: \",\n",
    "                                        options = sorted(list(node_colors.keys())), value = \"Fagaras\")\n",
    "        display(end_dropdown)\n",
    "        \n",
    "        button = widgets.ToggleButton(value = False)\n",
    "        button_visual = widgets.interactive(visualize_callback, Visualize = button)\n",
    "        display(button_visual)\n",
    "        \n",
    "        slider = widgets.IntSlider(min=0, max=1, step=1, value=0)\n",
    "        slider_visual = widgets.interactive(slider_callback, iteration = slider)\n",
    "        display(slider_visual)"
   ]
  },
  {
   "cell_type": "markdown",
    "## BREADTH-FIRST TREE SEARCH\n",
    "We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search."
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def tree_search(problem, frontier):\n",
    "    \"\"\"Search through the successors of a problem to find a goal.\n",
    "    The argument frontier should be an empty queue.\n",
    "    Don't worry about repeated paths to a state. [Figure 3.7]\"\"\"\n",
    "    \n",
    "    # we use these two variables at the time of visualisations\n",
    "    iterations = 0\n",
    "    all_node_colors = []\n",
    "    node_colors = dict(initial_node_colors)\n",
    "    #Adding first node to the queue\n",
    "    frontier.append(Node(problem.initial))\n",
    "    node_colors[Node(problem.initial).state] = \"orange\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "        #Popping first node of queue\n",
    "        # modify the currently searching node to red\n",
    "        node_colors[node.state] = \"red\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "        if problem.goal_test(node.state):\n",
    "            # modify goal node to green after reaching the goal\n",
    "            node_colors[node.state] = \"green\"\n",
    "            iterations += 1\n",
    "            all_node_colors.append(dict(node_colors))\n",
    "            return(iterations, all_node_colors, node)\n",
    "        frontier.extend(node.expand(problem))\n",
    "           \n",
    "        for n in node.expand(problem):\n",
    "            node_colors[n.state] = \"orange\"\n",
    "            iterations += 1\n",
    "            all_node_colors.append(dict(node_colors))\n",
    "        # modify the color of explored nodes to gray\n",
    "        node_colors[node.state] = \"gray\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "    return None\n",
    "\n",
    "def breadth_first_tree_search(problem):\n",
    "    \"Search the shallowest nodes in the search tree first.\"\n",
    "    iterations, all_node_colors, node = tree_search(problem, FIFOQueue())\n",
    "    return(iterations, all_node_colors, node)"
    "Now, we use `ipywidgets` to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button **Visualize**, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button."
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d55324f7343a4c71a9a2d4da6d037037"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b07a3813dd724c51a9b37f646cf2be25"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Fagaras', romania_map)\n",
    "display_visual(user_input = False, algorithm = breadth_first_tree_search, problem = romania_problem)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Depth-First Tree Search:\n",
    "Now let's discuss another searching algorithm, Depth-First Tree Search."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "source": [
    "def depth_first_tree_search(problem):\n",
    "    \"Search the deepest nodes in the search tree first.\"\n",
    "    # This algorithm might not work in case of repeated paths\n",
    "    # and may run into an infinite while loop.\n",
    "    iterations, all_node_colors, node = tree_search(problem, Stack())\n",
    "    return(iterations, all_node_colors, node)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "523b10cf84e54798a044ee714b864b52"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "aecea953f6a448c192ac8e173cf46e35"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Oradea', romania_map)\n",
    "display_visual(user_input = False, algorithm = depth_first_tree_search, problem = romania_problem)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
    "## BREADTH-FIRST SEARCH\n",
    "Let's reset all the `node_colors` to their starting values and define a different problem statement."
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def breadth_first_search(problem):\n",
    "    \"[Figure 3.11]\"\n",
    "    \n",
    "    # we use these two variables at the time of visualisations\n",
    "    iterations = 0\n",
    "    all_node_colors = []\n",
    "    node_colors = dict(initial_node_colors)\n",
    "    \n",
    "    node = Node(problem.initial)\n",
    "    \n",
    "    node_colors[node.state] = \"red\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "      \n",
    "    if problem.goal_test(node.state):\n",
    "        node_colors[node.state] = \"green\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        return(iterations, all_node_colors, node)\n",
    "    \n",
    "    frontier = FIFOQueue()\n",
    "    frontier.append(node)\n",
    "    \n",
    "    # modify the color of frontier nodes to blue\n",
    "    node_colors[node.state] = \"orange\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "    explored = set()\n",
    "    while frontier:\n",
    "        node = frontier.pop()\n",
    "        node_colors[node.state] = \"red\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "        explored.add(node.state)     \n",
    "        \n",
    "        for child in node.expand(problem):\n",
    "            if child.state not in explored and child not in frontier:\n",
    "                if problem.goal_test(child.state):\n",
    "                    node_colors[child.state] = \"green\"\n",
    "                    iterations += 1\n",
    "                    all_node_colors.append(dict(node_colors))\n",
    "                    return(iterations, all_node_colors, child)\n",
    "                frontier.append(child)\n",
    "\n",
    "                node_colors[child.state] = \"orange\"\n",
    "                iterations += 1\n",
    "                all_node_colors.append(dict(node_colors))\n",
    "                    \n",
    "        node_colors[node.state] = \"gray\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "    return None"
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "735a3dea191a42b6bd97fdfd337ea3e7"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ef445770d70a4b7c9d1544b98a55ca4d"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
    "display_visual(user_input = False, algorithm = breadth_first_search, problem = romania_problem)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Depth-First Graph Search:  \n",
    "Although we have a working implementation in search module, we have to make a few changes in the algorithm to make it suitable for visualization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": true
   },
   "source": [
    "def graph_search(problem, frontier):\n",
    "    \"\"\"Search through the successors of a problem to find a goal.\n",
    "    The argument frontier should be an empty queue.\n",
    "    If two paths reach a state, only use the first one. [Figure 3.7]\"\"\"\n",
    "    # we use these two variables at the time of visualisations\n",
    "    iterations = 0\n",
    "    all_node_colors = []\n",
    "    node_colors = dict(initial_node_colors)\n",
    "    \n",
    "    frontier.append(Node(problem.initial))\n",
    "    explored = set()\n",
    "    \n",
    "    # modify the color of frontier nodes to orange\n",
    "    node_colors[Node(problem.initial).state] = \"orange\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "      \n",
    "    while frontier:\n",
    "        # Popping first node of queue\n",
    "        node = frontier.pop()\n",
    "        \n",
    "        # modify the currently searching node to red\n",
    "        node_colors[node.state] = \"red\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "        if problem.goal_test(node.state):\n",
    "            # modify goal node to green after reaching the goal\n",
    "            node_colors[node.state] = \"green\"\n",
    "            iterations += 1\n",
    "            all_node_colors.append(dict(node_colors))\n",
    "            return(iterations, all_node_colors, node)\n",
    "        \n",
    "        explored.add(node.state)\n",
    "        frontier.extend(child for child in node.expand(problem)\n",
    "                        if child.state not in explored and\n",
    "                        child not in frontier)\n",
    "        \n",
    "        for n in frontier:\n",
    "            # modify the color of frontier nodes to orange\n",
    "            node_colors[n.state] = \"orange\"\n",
    "            iterations += 1\n",
    "            all_node_colors.append(dict(node_colors))\n",
    "\n",
    "        # modify the color of explored nodes to gray\n",
    "        node_colors[node.state] = \"gray\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "    return None\n",
    "\n",
    "\n",
    "def depth_first_graph_search(problem):\n",
    "    \"\"\"Search the deepest nodes in the search tree first.\"\"\"\n",
    "    iterations, all_node_colors, node = graph_search(problem, Stack())\n",
    "    return(iterations, all_node_colors, node)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "61149ffbc02846af97170f8975d4f11d"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "90b1f8f77fdb4207a3570fbe88a0bdf6"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
    "display_visual(user_input = False, algorithm = depth_first_graph_search, problem = romania_problem)"
   "cell_type": "markdown",
    "## BEST FIRST SEARCH\n",
    "\n",
    "Let's reset all the `node_colors` to their starting values and define a different problem statement."
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def best_first_graph_search(problem, f):\n",
    "    \"\"\"Search the nodes with the lowest f scores first.\n",
    "    You specify the function f(node) that you want to minimize; for example,\n",
    "    if f is a heuristic estimate to the goal, then we have greedy best\n",
    "    first search; if f is node.depth then we have breadth-first search.\n",
    "    There is a subtlety: the line \"f = memoize(f, 'f')\" means that the f\n",
    "    values will be cached on the nodes as they are computed. So after doing\n",
    "    a best first search you can examine the f values of the path returned.\"\"\"\n",
    "    \n",
    "    # we use these two variables at the time of visualisations\n",
    "    iterations = 0\n",
    "    all_node_colors = []\n",
    "    node_colors = dict(initial_node_colors)\n",
    "    \n",
    "    f = memoize(f, 'f')\n",
    "    node = Node(problem.initial)\n",
    "    \n",
    "    node_colors[node.state] = \"red\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "    \n",
    "    if problem.goal_test(node.state):\n",
    "        node_colors[node.state] = \"green\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        return(iterations, all_node_colors, node)\n",
    "    \n",
    "    frontier = PriorityQueue(min, f)\n",
    "    frontier.append(node)\n",
    "    \n",
    "    node_colors[node.state] = \"orange\"\n",
    "    iterations += 1\n",
    "    all_node_colors.append(dict(node_colors))\n",
    "    \n",
    "    explored = set()\n",
    "    while frontier:\n",
    "        node = frontier.pop()\n",
    "        \n",
    "        node_colors[node.state] = \"red\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "        \n",
    "        if problem.goal_test(node.state):\n",
    "            node_colors[node.state] = \"green\"\n",
    "            iterations += 1\n",
    "            all_node_colors.append(dict(node_colors))\n",
    "            return(iterations, all_node_colors, node)\n",
    "        \n",
    "        explored.add(node.state)\n",
    "        for child in node.expand(problem):\n",
    "            if child.state not in explored and child not in frontier:\n",
    "                frontier.append(child)\n",
    "                node_colors[child.state] = \"orange\"\n",
    "                iterations += 1\n",
    "                all_node_colors.append(dict(node_colors))\n",
    "            elif child in frontier:\n",
    "                incumbent = frontier[child]\n",
    "                if f(child) < f(incumbent):\n",
    "                    del frontier[incumbent]\n",
    "                    frontier.append(child)\n",
    "                    node_colors[child.state] = \"orange\"\n",
    "                    iterations += 1\n",
    "                    all_node_colors.append(dict(node_colors))\n",
    "\n",
    "        node_colors[node.state] = \"gray\"\n",
    "        iterations += 1\n",
    "        all_node_colors.append(dict(node_colors))\n",
    "    return None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## UNIFORM COST SEARCH\n",
    "Let's reset all the `node_colors` to their starting values and define a different problem statement."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def uniform_cost_search(problem):\n",
    "    \"[Figure 3.14]\"\n",
    "    #Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)\n",
    "    iterations, all_node_colors, node = best_first_graph_search(problem, lambda node: node.path_cost)\n",
    "    return(iterations, all_node_colors, node)"
   "cell_type": "code",
   "execution_count": 23,
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "46b8200b4a8f47e7b18145234a8469da"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ca9b2d01bbd5458bb037585c719d73fc"
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
    "display_visual(user_input = False, algorithm = uniform_cost_search, problem = romania_problem)"
   "cell_type": "markdown",
   "metadata": {},
    "## GREEDY BEST FIRST SEARCH\n",
    "Let's reset all the `node_colors` to their starting values and define a different problem statement."
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def greedy_best_first_search(problem, h=None):\n",
    "    \"\"\"Greedy best-first graph search is an informed search algorithm with f(n) = h(n).\n",
    "    You need to specify the h function when you call greedy_best_first_search, or\n",
    "    else in your Problem subclass.\"\"\"\n",
    "    h = memoize(h or problem.h, 'h')\n",
    "    iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: h(n))\n",
    "    return(iterations, all_node_colors, node)"
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e3ddd0260d7d4a8aa62d610976b9568a"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "dae485b1f4224c34a88de42d252da76c"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
    "display_visual(user_input = False, algorithm = greedy_best_first_search, problem = romania_problem)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A\\* SEARCH\n",
    "\n",
    "Let's reset all the `node_colors` to their starting values and define a different problem statement."
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def astar_search(problem, h=None):\n",
    "    \"\"\"A* search is best-first graph search with f(n) = g(n)+h(n).\n",
    "    You need to specify the h function when you call astar_search, or\n",
    "    else in your Problem subclass.\"\"\"\n",
    "    h = memoize(h or problem.h, 'h')\n",
    "    iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: n.path_cost + h(n))\n",
    "    return(iterations, all_node_colors, node)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "15a78d815f0c4ea589cdd5ad40bc8794"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "10450687dd574be2a380e9e40403fa83"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "all_node_colors = []\n",
    "romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
    "display_visual(user_input = False, algorithm = astar_search, problem = romania_problem)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9019790cf8324d73966373bb3f5373a8"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b8a3195598da472d996e4e8b81595cb7"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "aabe167a0d6440f0a020df8a85a9206c"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "25d146d187004f4f9db6a7dccdbc7e93"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "68d532810a9e46309415fd353c474a4d"
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "all_node_colors = []\n",
    "# display_visual(user_input = True, algorithm = breadth_first_tree_search)\n",
    "display_visual(user_input = True)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A* Heuristics\n",
    "\n",
    "Different heuristics lead to different efficiency when solving a problem with A*, usually measured by the number of explored nodes and the effective branching factor. With the classic 8 puzzle we can compare heuristics through the number of nodes each one explores.\n",
    "\n",
    "### 8 Puzzle Problem\n",
    "\n",
    "The *8 Puzzle Problem* consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.\n",
    "\n",
    "For example:\n",
    "\n",
    "              Initial State                        Goal State\n",
    "              | 7 | 2 | 4 |                       | 1 | 2 | 3 |\n",
    "              | 5 | 0 | 6 |                       | 4 | 5 | 6 |\n",
    "              | 8 | 3 | 1 |                       | 7 | 8 | 0 |\n",
    "              \n",
    "The board has a total of 9 tiles (8 numbered tiles and one blank), giving 9! initial configurations, but not all of these are solvable. The solvability of a configuration can be checked by counting inversions: if the number of inversions is even, the initial configuration is solvable, otherwise it is not, which means that only 9!/2 initial states lead to a solution. (A sketch of such a check is given in the cell after this one.)\n",
    "\n",
    "#### Heuristics\n",
    "\n",
    "1) Manhattan Distance: For the 8 puzzle problem, the Manhattan distance of a tile is the distance between its current position and its goal position (for the tile numbered '1' in the initial configuration above, the Manhattan distance is 4: 2 for the leftward and 2 for the upward displacement); the heuristic sums this distance over the tiles.\n",
    "\n",
    "2) No. of Misplaced Tiles: This heuristic counts the number of tiles that are not in their goal position in the current state.\n",
    "\n",
    "3) Sqrt of Manhattan Distance: It calculates the square root of the Manhattan distance.\n",
    "\n",
    "4) Max Heuristic: It assigns the score as the maximum of \"Manhattan Distance\" and \"No. of Misplaced Tiles\"."
   ]
  },
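  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The inversion-count check mentioned above can be written as a small helper. This is only an illustrative sketch: the function `is_solvable` is not part of the search module, and the even-inversions rule stated here assumes the goal state `[1, 2, 3, 4, 5, 6, 7, 8, 0]` used by the heuristics below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: is_solvable is an assumed helper, not part of the search module.\n",
    "def is_solvable(state):\n",
    "    \"\"\"Return True if the 8-puzzle state (a list of 9 numbers, 0 = blank) is solvable.\n",
    "    A pair of tiles forms an inversion when a higher-numbered tile appears before a\n",
    "    lower-numbered one in the flattened state (the blank is ignored). For a 3x3 board\n",
    "    with goal [1, 2, 3, 4, 5, 6, 7, 8, 0], a state is solvable exactly when the\n",
    "    number of inversions is even.\"\"\"\n",
    "    tiles = [tile for tile in state if tile != 0]\n",
    "    inversions = sum(1 for i in range(len(tiles))\n",
    "                     for j in range(i + 1, len(tiles))\n",
    "                     if tiles[i] > tiles[j])\n",
    "    return inversions % 2 == 0\n",
    "\n",
    "print(is_solvable([2, 4, 3, 1, 5, 6, 7, 8, 0]))  # True: 4 inversions, the state solved in the output below\n",
    "print(is_solvable([1, 2, 3, 4, 5, 6, 8, 7, 0]))  # False: swapping two adjacent tiles gives 1 inversion"
   ]
  },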
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Heuristics for 8 Puzzle Problem\n",
    "def linear(state, goal):\n",
    "    # number of misplaced tiles, counted over positions 0-7 (if those all match, position 8 must match too)\n",
    "    return sum([1 if state[i] != goal[i] else 0 for i in range(8)])\n",
    "\n",
    "def manhanttan(state,goal):\n",
Anthony Marakis's avatar
Anthony Marakis a validé
    "    index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}\n",
    "    index_state = {}\n",
    "    index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]\n",
    "    x, y = 0, 0\n",
    "    \n",
    "    for i in range(len(state)):\n",
    "        index_state[state[i]] = index[i]\n",
    "    \n",
    "    mhd = 0\n",
    "    \n",
    "    for i in range(8):\n",
    "        for j in range(2):\n",
    "            mhd = abs(index_goal[i][j] - index_state[i][j]) + mhd\n",
    "    \n",
    "    return mhd\n",
    "\n",
    "def sqrt_manhanttan(state,goal):\n",
    "    index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}\n",
    "    index_state = {}\n",
    "    index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]\n",
    "    x, y = 0, 0\n",
    "    \n",
    "    for i in range(len(state)):\n",
    "        index_state[state[i]] = index[i]\n",
    "    \n",
    "    mhd = 0\n",
    "    \n",
    "    for i in range(8):\n",
    "        for j in range(2):\n",
    "            mhd = (index_goal[i][j] - index_state[i][j])**2 + mhd\n",
    "    \n",
    "    return math.sqrt(mhd)\n",
    "\n",
    "def max_heuristic(state,goal):\n",
    "    score1 = manhanttan(state, goal)\n",
    "    score2 = linear(state, goal)\n",
    "    return max(score1, score2)"
   ]
  },
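  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, illustrative sanity check (not part of the original notebook flow; `example_state` and `example_goal` are names introduced here), we can evaluate the four heuristics on the initial configuration from the figure above against the goal `[1, 2, 3, 4, 5, 6, 7, 8, 0]`. Note that, as written, the two Manhattan-based heuristics loop over tiles 0-7, so they include the blank and skip tile 8."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math  # used by sqrt_manhanttan, in case it was not already imported by an earlier cell\n",
    "\n",
    "# the initial configuration from the figure above and the goal used by the heuristics\n",
    "example_state = [7, 2, 4, 5, 0, 6, 8, 3, 1]\n",
    "example_goal = [1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
    "\n",
    "print(linear(example_state, example_goal))           # 6 misplaced tiles among positions 0-7\n",
    "print(manhanttan(example_state, example_goal))       # 15\n",
    "print(sqrt_manhanttan(example_state, example_goal))  # 5.0\n",
    "print(max_heuristic(example_state, example_goal))    # max(15, 6) = 15"
   ]
  },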
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n",
      "Number of explored nodes by the following heuristic are:  145\n",
      "[2, 4, 3, 1, 5, 6, 7, 8, 0]\n",
      "[2, 4, 3, 1, 5, 6, 7, 0, 8]\n",
      "[2, 4, 3, 1, 0, 6, 7, 5, 8]\n",
      "[2, 0, 3, 1, 4, 6, 7, 5, 8]\n",
      "[0, 2, 3, 1, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 0, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 0, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 0, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
      "Number of explored nodes by the following heuristic are:  153\n",
      "[2, 4, 3, 1, 5, 6, 7, 8, 0]\n",
      "[2, 4, 3, 1, 5, 6, 7, 0, 8]\n",
      "[2, 4, 3, 1, 0, 6, 7, 5, 8]\n",
      "[2, 0, 3, 1, 4, 6, 7, 5, 8]\n",
      "[0, 2, 3, 1, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 0, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 0, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 0, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
      "Number of explored nodes by the following heuristic are:  145\n",
      "[2, 4, 3, 1, 5, 6, 7, 8, 0]\n",
      "[2, 4, 3, 1, 5, 6, 7, 0, 8]\n",
      "[2, 4, 3, 1, 0, 6, 7, 5, 8]\n",
      "[2, 0, 3, 1, 4, 6, 7, 5, 8]\n",
      "[0, 2, 3, 1, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 0, 4, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 0, 6, 7, 5, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 0, 8]\n",
      "[1, 2, 3, 4, 5, 6, 7, 8, 0]\n",
      "Number of explored nodes by the following heuristic are:  169\n",
      "[2, 4, 3, 1, 5, 6, 7, 8, 0]\n",
      "[2, 4, 3, 1, 5, 6, 7, 0, 8]\n",
      "[2, 4, 3, 1, 0, 6, 7, 5, 8]\n",
      "[2, 0, 3, 1, 4, 6, 7, 5, 8]\n",