"* `formulate_goal(self, state)`: Given a `state` of the agent, this method formulates a `goal` for it.\n",
"\n",
"* `formulate_problem(self, state, goal)`: Formulates a `problem` for the agent from the given `state` and `goal`.\n",
"\n",
"* `search(self, problem)`: This method searches for a sequence of `actions` that solves the `problem`."
]
},
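The three methods above plug into the agent's control loop. As a hedged, standalone sketch (independent of the aima-python module — the class and the toy `CountdownAgent` subclass here are illustrative, not the library's code), the loop can look like this:

```python
from collections import deque

class SimpleAgentSketch:
    """Illustrative sketch of the simple problem-solving agent loop."""
    def __init__(self, initial_state=None):
        self.state = initial_state
        self.seq = deque()  # actions remaining in the current plan

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:  # plan exhausted: formulate goal and problem, then search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq.extend(self.search(problem) or [])
            if not self.seq:
                return None
        return self.seq.popleft()  # execute the next action of the plan

class CountdownAgent(SimpleAgentSketch):
    """Toy subclass: the 'plan' is just n copies of one action."""
    def update_state(self, state, percept):
        return percept
    def formulate_goal(self, state):
        return 0
    def formulate_problem(self, state, goal):
        return (state, goal)
    def search(self, problem):
        return ["decrement"] * problem[0]

agent = CountdownAgent()
print(agent(3))  # prints "decrement": first action of the freshly formulated plan
```

Note that the agent only re-plans when its current action sequence is exhausted; subsequent calls simply pop the next planned action.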
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us now define a simple problem-solving agent program. We will create a `vacuumAgent` class that inherits from the abstract class `SimpleProblemSolvingAgentProgram` and overrides its methods. This simple intelligent vacuum agent can be in any one of the following states, and it moves between states as shown by the arrows in the picture:\n",
"\n",
""
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"class vacuumAgent(SimpleProblemSolvingAgentProgram):\n",
" def update_state(self, state, percept):\n",
" return percept\n",
"\n",
" def formulate_goal(self, state):\n",
" goal = [state7, state8]\n",
" return goal \n",
"\n",
" def formulate_problem(self, state, goal):\n",
" problem = state\n",
" return problem \n",
" \n",
" def search(self, problem):\n",
" if problem == state1:\n",
" seq = [\"Suck\", \"Right\", \"Suck\"]\n",
" elif problem == state2:\n",
" seq = [\"Suck\", \"Left\", \"Suck\"]\n",
" elif problem == state3:\n",
" seq = [\"Right\", \"Suck\"]\n",
" elif problem == state4:\n",
" seq = [\"Suck\"]\n",
" elif problem == state5:\n",
" seq = [\"Suck\"]\n",
" elif problem == state6:\n",
" seq = [\"Left\", \"Suck\"]\n",
" return seq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we will define all 8 states and create an object of the above class. Then, we will pass it different states and check the output:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Left\n",
"Suck\n",
"Right\n"
]
}
],
"source": [
"state1 = [(0, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Dirty\"]]]\n",
"state2 = [(1, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Dirty\"]]]\n",
"state3 = [(0, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Dirty\"]]]\n",
"state4 = [(1, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Dirty\"]]]\n",
"state5 = [(0, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Clean\"]]]\n",
"state6 = [(1, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Clean\"]]]\n",
"state7 = [(0, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Clean\"]]]\n",
"state8 = [(1, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Clean\"]]]\n",
"a = vacuumAgent(state1)\n",
"print(a(state6)) \n",
"print(a(state1))\n",
"print(a(state3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## SEARCHING ALGORITHMS VISUALIZATION\n",
"In this section, we have visualizations of the following searching algorithms:\n",
"1. Breadth First Tree Search\n",
"2. Depth First Tree Search\n",
"3. Breadth First Search\n",
"4. Depth First Graph Search\n",
"5. Best First Graph Search\n",
"6. Uniform Cost Search\n",
"7. Depth Limited Search\n",
"8. Iterative Deepening Search\n",
"9. Greedy Best First Search\n",
"10. A*-Search\n",
"11. Recursive Best First Search\n",
"\n",
"We assign colors to the nodes to make the visualization easier to follow. These are the colors used in these visuals:\n",
"* Un-explored nodes - <font color='black'>white</font>\n",
"* Frontier nodes - <font color='orange'>orange</font>\n",
"* Currently exploring node - <font color='red'>red</font>\n",
"* Already explored nodes - <font color='gray'>gray</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. BREADTH-FIRST TREE SEARCH\n",
"We have a working implementation in the search module, but since we want to interact with the graph while it is searching, we need to modify that implementation. Here is the modified breadth-first tree search."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"source": [
"def tree_search_for_vis(problem, frontier):\n",
" \"\"\"Search through the successors of a problem to find a goal.\n",
" The argument frontier should be an empty queue.\n",
" Don't worry about repeated paths to a state. [Figure 3.7]\"\"\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = {k : 'white' for k in problem.graph.nodes()}\n",
" \n",
" frontier.append(Node(problem.initial))\n",
" node_colors[Node(problem.initial).state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" while frontier:\n",
" node = frontier.pop()\n",
" \n",
" # modify the currently searching node to red\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" # modify goal node to green after reaching the goal\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
"        children = node.expand(problem)\n",
"        frontier.extend(children)\n",
"        \n",
"        for n in children:\n",
" node_colors[n.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" # modify the color of explored nodes to gray\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None\n",
"\n",
"def breadth_first_tree_search_(problem):\n",
" \"Search the shallowest nodes in the search tree first.\"\n",
" iterations, all_node_colors, node = tree_search_for_vis(problem, FIFOQueue())\n",
" return(iterations, all_node_colors, node)"
]
},
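The only thing that distinguishes breadth-first from depth-first tree search above is the frontier data structure handed to `tree_search_for_vis` (the aima `FIFOQueue` versus `Stack`). A hedged, standalone sketch on a small hand-built tree (a hypothetical example, not the Romania map) shows how the frontier discipline alone changes the visit order:

```python
from collections import deque

tree = {            # a small hypothetical tree: node -> children
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [], 'E': [], 'F': [],
}

def visit_order(root, fifo):
    """Pop from a FIFO queue (breadth-first) or a LIFO stack (depth-first)."""
    frontier = deque([root])
    order = []
    while frontier:
        # popleft() takes the oldest entry (queue); pop() takes the newest (stack)
        node = frontier.popleft() if fifo else frontier.pop()
        order.append(node)
        frontier.extend(tree[node])
    return order

print(visit_order('A', fifo=True))   # breadth-first: level by level
print(visit_order('A', fifo=False))  # depth-first: deepest branch first
```

With `fifo=True` the nodes come out level by level (`A B C D E F`); with `fifo=False` the search dives down one branch before backtracking.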
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we use `ipywidgets` to display a slider, a button and our Romania map. By sliding the slider we can look at all the intermediate steps of a particular search algorithm. By pressing the button **Visualize**, you can see all the steps without interacting with the slider. Two helper callback functions handle interaction with the slider and the button."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Fagaras', romania_map)\n",
"a, b, c = breadth_first_tree_search_(romania_problem)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=breadth_first_tree_search_, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. DEPTH-FIRST TREE SEARCH\n",
"Now let's discuss another searching algorithm, Depth-First Tree Search."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def depth_first_tree_search_graph(problem):\n",
" \"Search the deepest nodes in the search tree first.\"\n",
" # This algorithm might not work in case of repeated paths\n",
" # and may run into an infinite while loop.\n",
" iterations, all_node_colors, node = tree_search_for_vis(problem, Stack())\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Oradea', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=depth_first_tree_search_graph, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. BREADTH-FIRST GRAPH SEARCH\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"def breadth_first_search_graph(problem):\n",
" \"[Figure 3.11]\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = {k : 'white' for k in problem.graph.nodes()}\n",
" \n",
" node = Node(problem.initial)\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" frontier = FIFOQueue()\n",
" frontier.append(node)\n",
" \n",
"    # modify the color of frontier nodes to orange\n",
" node_colors[node.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored = set()\n",
" while frontier:\n",
" node = frontier.pop()\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored.add(node.state) \n",
" \n",
" for child in node.expand(problem):\n",
" if child.state not in explored and child not in frontier:\n",
" if problem.goal_test(child.state):\n",
" node_colors[child.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, child)\n",
" frontier.append(child)\n",
"\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=breadth_first_search_graph, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. DEPTH-FIRST GRAPH SEARCH\n",
"Although we have a working implementation in the search module, we have to make a few changes to the algorithm to make it suitable for visualization."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def graph_search_for_vis(problem, frontier):\n",
" \"\"\"Search through the successors of a problem to find a goal.\n",
" The argument frontier should be an empty queue.\n",
" If two paths reach a state, only use the first one. [Figure 3.7]\"\"\"\n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = {k : 'white' for k in problem.graph.nodes()}\n",
" \n",
" frontier.append(Node(problem.initial))\n",
" explored = set()\n",
" \n",
" # modify the color of frontier nodes to orange\n",
" node_colors[Node(problem.initial).state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" while frontier:\n",
" # Popping first node of queue\n",
" node = frontier.pop()\n",
" \n",
" # modify the currently searching node to red\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" # modify goal node to green after reaching the goal\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" explored.add(node.state)\n",
" frontier.extend(child for child in node.expand(problem)\n",
" if child.state not in explored and\n",
" child not in frontier)\n",
" \n",
" for n in frontier:\n",
" # modify the color of frontier nodes to orange\n",
" node_colors[n.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
"\n",
" # modify the color of explored nodes to gray\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" return None\n",
"\n",
"\n",
"def depth_first_graph_search(problem):\n",
" \"\"\"Search the deepest nodes in the search tree first.\"\"\"\n",
" iterations, all_node_colors, node = graph_search_for_vis(problem, Stack())\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=depth_first_graph_search, \n",
"                   problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. BEST FIRST SEARCH\n",
"\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def best_first_graph_search_for_vis(problem, f):\n",
" \"\"\"Search the nodes with the lowest f scores first.\n",
" You specify the function f(node) that you want to minimize; for example,\n",
" if f is a heuristic estimate to the goal, then we have greedy best\n",
" first search; if f is node.depth then we have breadth-first search.\n",
" There is a subtlety: the line \"f = memoize(f, 'f')\" means that the f\n",
" values will be cached on the nodes as they are computed. So after doing\n",
" a best first search you can examine the f values of the path returned.\"\"\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = {k : 'white' for k in problem.graph.nodes()}\n",
" \n",
" f = memoize(f, 'f')\n",
" node = Node(problem.initial)\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" frontier = PriorityQueue(min, f)\n",
" frontier.append(node)\n",
" \n",
" node_colors[node.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored = set()\n",
" while frontier:\n",
" node = frontier.pop()\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" explored.add(node.state)\n",
" for child in node.expand(problem):\n",
" if child.state not in explored and child not in frontier:\n",
" frontier.append(child)\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" elif child in frontier:\n",
" incumbent = frontier[child]\n",
" if f(child) < f(incumbent):\n",
" del frontier[incumbent]\n",
" frontier.append(child)\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
"\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None"
]
},
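The priority-queue pattern above can also be sketched with the standard library's `heapq` instead of the notebook's aima `PriorityQueue`; the weighted graph below is a made-up example, not the Romania data. With f(n) equal to the path cost g(n), this is exactly uniform cost search, which the next section builds the same way:

```python
import heapq

graph = {  # hypothetical weighted graph: node -> [(neighbor, step_cost)]
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 1)],
    'G': [],
}

def best_first(start, goal, f):
    """Repeatedly expand the frontier entry minimizing f(state, path_cost)."""
    frontier = [(f(start, 0), 0, start, [start])]   # (f, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue                                # stale duplicate entry
        explored.add(state)
        for nbr, step in graph[state]:
            if nbr not in explored:
                heapq.heappush(frontier,
                               (f(nbr, g + step), g + step, nbr, path + [nbr]))
    return None

# uniform cost search: f is just the accumulated path cost g
path, cost = best_first('S', 'G', lambda state, g: g)
print(path, cost)
```

Here the cheapest route `S -> A -> B -> G` (cost 4) beats the shorter-looking direct hops, which is precisely what ordering the frontier by g(n) buys.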
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. UNIFORM COST SEARCH\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def uniform_cost_search_graph(problem):\n",
" \"[Figure 3.14]\"\n",
" #Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)\n",
" iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda node: node.path_cost)\n",
"    return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=uniform_cost_search_graph, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Depth Limited Search\n",
"\n",
"Let's change all the `node_colors` to starting position and define a different problem statement. \n",
"Although we have a working implementation, we need to make a few changes for the visualization."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def depth_limited_search(problem, frontier, limit = -1):\n",
" '''\n",
"    Perform depth-first search on the given problem's graph.\n",
"    If limit >= 0, it is the maximum depth of the search.\n",
" '''\n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = {k : 'white' for k in problem.graph.nodes()}\n",
" \n",
" frontier.append(Node(problem.initial))\n",
" explored = set()\n",
" \n",
" cutoff_occurred = False\n",
" node_colors[Node(problem.initial).state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" while frontier:\n",
" # Popping first node of queue\n",
" node = frontier.pop()\n",
" \n",
" # modify the currently searching node to red\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" # modify goal node to green after reaching the goal\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
"\n",
" elif limit >= 0:\n",
" cutoff_occurred = True\n",
" limit += 1\n",
"              all_node_colors.pop()\n",
" iterations -= 1\n",
" node_colors[node.state] = \"gray\"\n",
"\n",
" \n",
" explored.add(node.state)\n",
" frontier.extend(child for child in node.expand(problem)\n",
" if child.state not in explored and\n",
" child not in frontier)\n",
" \n",
" for n in frontier:\n",
" limit -= 1\n",
" # modify the color of frontier nodes to orange\n",
" node_colors[n.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
"\n",
" # modify the color of explored nodes to gray\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" return 'cutoff' if cutoff_occurred else None\n",
"\n",
"\n",
"def depth_limited_search_for_vis(problem):\n",
" \"\"\"Search the deepest nodes in the search tree first.\"\"\"\n",
" iterations, all_node_colors, node = depth_limited_search(problem, Stack())\n",
" return(iterations, all_node_colors, node) "
]
},
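The frontier-based version above is shaped for visualization; the textbook's depth-limited search is often written recursively, returning the sentinel `'cutoff'` when the depth bound stops the search before the goal is found. A hedged standalone sketch (the tiny chain graph is a hypothetical example, not the Romania problem):

```python
def recursive_dls(graph, node, goal, limit):
    """Depth-limited DFS: returns a path, 'cutoff', or None (no cycle checks)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                     # depth bound reached
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = recursive_dls(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result          # goal found below this child
    return 'cutoff' if cutoff_occurred else None

g = {'A': ['B'], 'B': ['C'], 'C': []}       # hypothetical chain A -> B -> C
print(recursive_dls(g, 'A', 'C', limit=1))  # 'cutoff': C sits at depth 2
print(recursive_dls(g, 'A', 'C', limit=2))
```

Iterative deepening search (item 8 in the list above) simply calls a depth-limited search with limit = 0, 1, 2, ... until the result is something other than `'cutoff'`.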
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=depth_limited_search_for_vis, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def greedy_best_first_search(problem, h=None):\n",
"    \"\"\"Greedy best-first graph search is an informed search algorithm with f(n) = h(n).\n",
" You need to specify the h function when you call best_first_search, or\n",
" else in your Problem subclass.\"\"\"\n",
" h = memoize(h or problem.h, 'h')\n",
" iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda n: h(n))\n",
" return(iterations, all_node_colors, node)\n"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=greedy_best_first_search, \n",
" problem=romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
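A* combines the two ideas just seen: it orders the frontier by f(n) = g(n) + h(n). Before the notebook's `astar_search_graph` wrapper below, here is a hedged standalone sketch with `heapq` on a tiny hypothetical weighted graph and heuristic table (made-up values chosen to be admissible, not the Romania data):

```python
import heapq

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 4, 'B': 1, 'G': 0}   # hypothetical admissible estimates to G

def astar(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f = g + h, g, state, path)
    best_g = {start: 0}                          # cheapest known cost per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nbr, step in graph[state]:
            g2 = g + step
            if g2 < best_g.get(nbr, float('inf')):   # found a better path to nbr
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

print(astar('S', 'G'))   # cheapest path goes via B
```

The heuristic steers the search toward B (h = 1) even though S -> A looks cheaper at first, and the optimal path `S -> B -> G` of cost 5 is returned.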
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def astar_search_graph(problem, h=None):\n",
" \"\"\"A* search is best-first graph search with f(n) = g(n)+h(n).\n",
" You need to specify the h function when you call astar_search, or\n",
" else in your Problem subclass.\"\"\"\n",
" h = memoize(h or problem.h, 'h')\n",
" iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, \n",
" lambda n: n.path_cost + h(n))\n",
"    return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(romania_graph_data, user_input=False, \n",
" algorithm=astar_search_graph, \n",
"                   problem=romania_problem)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"all_node_colors = []\n",
"# display_visual(romania_graph_data, user_input=True, algorithm=breadth_first_tree_search)\n",
"algorithms = { \"Breadth First Tree Search\": breadth_first_tree_search,\n",
" \"Depth First Tree Search\": depth_first_tree_search,\n",
" \"Breadth First Search\": breadth_first_search,\n",
" \"Depth First Graph Search\": depth_first_graph_search,\n",
" \"Uniform Cost Search\": uniform_cost_search,\n",
" \"A-star Search\": astar_search}\n",
"display_visual(romania_graph_data, algorithm=algorithms, user_input=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Different heuristics differ in efficiency when solving problems with A*; efficiency is generally measured by the number of explored nodes and the effective branching factor. With the classic 8 puzzle we can compare heuristics by the number of nodes they explore.\n",
"The *8 Puzzle Problem* consists of a 3x3 tray in which the goal is to get from the initial configuration (left) to the goal state (right) by shifting the numbered tiles into the blank space:\n",
"\n",
"    | 7 | 2 | 4 |          | 1 | 2 | 3 |\n",
"    | 5 | 0 | 6 |          | 4 | 5 | 6 |\n",
"    | 8 | 3 | 1 |          | 7 | 8 | 0 |\n",
"\n",
"With 9 tiles (8 numbered tiles plus the blank) there are 9! possible configurations, but not all of them are solvable. Solvability can be checked by counting inversions: if the number of inversions is even, the configuration is solvable; otherwise it is not. So only 9!/2 initial states lead to a solution.\n",
"<br>\n",
"Let's define our goal state."
]
},
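The inversion-count test described above can be written directly. This is a hedged standalone sketch of the idea, not the `EightPuzzle.check_solvability` method used later in the notebook:

```python
def is_solvable(state):
    """A 3x3 sliding-puzzle state (sequence of 9 ints, 0 = blank) is solvable
    iff the number of inversions among its numbered tiles is even."""
    tiles = [t for t in state if t != 0]            # ignore the blank
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])      # larger tile before smaller
    return inversions % 2 == 0

print(is_solvable((2, 4, 3, 1, 5, 6, 7, 8, 0)))  # the configuration used below
```

For `(2, 4, 3, 1, 5, 6, 7, 8, 0)` the inversions are (2,1), (4,3), (4,1) and (3,1), an even count, so the state is solvable, matching the `check_solvability` result shown below.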
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"goal = [1, 2, 3, 4, 5, 6, 7, 8, 0]"
]
},
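With the goal state defined, the Manhattan-distance idea used by the heuristics below can be checked standalone on the pictured initial configuration. This helper is an illustrative sketch, not the notebook's `manhattan(node)` function:

```python
def manhattan_distance(state, goal):
    """Sum of |row diff| + |col diff| over the numbered tiles
    (3x3 board in row-major order, 0 = blank, which is skipped)."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    return sum(abs(pos[t][0] - goal_pos[t][0]) +
               abs(pos[t][1] - goal_pos[t][1])
               for t in range(1, 9))

initial = (7, 2, 4, 5, 0, 6, 8, 3, 1)       # the pictured start configuration
goal_state = (1, 2, 3, 4, 5, 6, 7, 8, 0)
# tile '1' alone contributes 4 (two moves left, two moves up)
print(manhattan_distance(initial, goal_state))
```

Summing over all eight numbered tiles gives 14 for this configuration; a state equal to the goal scores 0.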
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These are the heuristics we will compare:\n",
"1. Manhattan Distance: the distance of each tile from its goal position (for the tile numbered '1' in the initial configuration above, the Manhattan distance is 4: two moves left and two moves up).\n",
"2. No. of Misplaced Tiles: the number of tiles that are not in their goal position.\n",
"3. Sqrt of Manhattan Distance: the square root of the sum of squared tile displacements.\n",
"4. Max Heuristic: the maximum of the \"Manhattan Distance\" and \"No. of Misplaced Tiles\" scores."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def linear(node):\n",
" return sum([1 if node.state[i] != goal[i] else 0 for i in range(8)])\n",
"def manhattan(node):\n",
" state = node.state\n",
" index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}\n",
" index_state = {}\n",
" index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]\n",
" x, y = 0, 0\n",
" \n",
" for i in range(len(state)):\n",
" index_state[state[i]] = index[i]\n",
" \n",
" mhd = 0\n",
" \n",
" for i in range(8):\n",
" for j in range(2):\n",
" mhd = abs(index_goal[i][j] - index_state[i][j]) + mhd\n",
" \n",
" return mhd\n",
"def sqrt_manhattan(node):\n",
" state = node.state\n",
" index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}\n",
" index_state = {}\n",
" index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]\n",
" x, y = 0, 0\n",
" \n",
" for i in range(len(state)):\n",
" index_state[state[i]] = index[i]\n",
" \n",
" mhd = 0\n",
" \n",
" for i in range(8):\n",
" for j in range(2):\n",
" mhd = (index_goal[i][j] - index_state[i][j])**2 + mhd\n",
" \n",
" return math.sqrt(mhd)\n",
"def max_heuristic(node):\n",
" score1 = manhattan(node)\n",
"    score2 = linear(node)\n",
"    return max(score1, score2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can solve the puzzle using the `astar_search` method."
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))\n",
"puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0)) # checks whether the initialized configuration is solvable or not"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This configuration is solvable; let's proceed.\n",
"<br>\n",
"The default heuristic function returns the number of misplaced tiles."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['UP', 'LEFT', 'UP', 'LEFT', 'DOWN', 'RIGHT', 'RIGHT', 'DOWN']"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astar_search(puzzle).solution()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following cells, we use different heuristic functions.\n",
"<br>"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['UP', 'LEFT', 'UP', 'LEFT', 'DOWN', 'RIGHT', 'RIGHT', 'DOWN']"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astar_search(puzzle, linear).solution()"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['LEFT', 'UP', 'UP', 'LEFT', 'DOWN', 'RIGHT', 'DOWN', 'RIGHT']"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astar_search(puzzle, manhattan).solution()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['LEFT', 'UP', 'UP', 'LEFT', 'DOWN', 'RIGHT', 'DOWN', 'RIGHT']"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astar_search(puzzle, sqrt_manhattan).solution()"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['LEFT', 'UP', 'UP', 'LEFT', 'DOWN', 'RIGHT', 'DOWN', 'RIGHT']"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astar_search(puzzle, max_heuristic).solution()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even though these heuristic functions find solutions of the same length, they differ in computation time.\n",
"<br>\n",
"This might make all the difference in a scenario where high computational efficiency is required.\n",
"<br>\n",
"Let's define a few puzzle states and time `astar_search` for every heuristic function.\n",
"We will use the %%timeit magic for this."
]
},