" <span class=\"k\">def</span> <span class=\"nf\">update_state</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">percept</span><span class=\"p\">):</span>\n",
" <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
"\n",
" <span class=\"k\">def</span> <span class=\"nf\">formulate_goal</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">state</span><span class=\"p\">):</span>\n",
" <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
"\n",
" <span class=\"k\">def</span> <span class=\"nf\">formulate_problem</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">state</span><span class=\"p\">,</span> <span class=\"n\">goal</span><span class=\"p\">):</span>\n",
" <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
"\n",
" <span class=\"k\">def</span> <span class=\"nf\">search</span><span class=\"p\">(</span><span class=\"bp\">self</span><span class=\"p\">,</span> <span class=\"n\">problem</span><span class=\"p\">):</span>\n",
" <span class=\"k\">raise</span> <span class=\"ne\">NotImplementedError</span>\n",
"</pre></div>\n",
"</body>\n",
"</html>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"psource(SimpleProblemSolvingAgentProgram)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SimpleProblemSolvingAgentProgram class has six methods: \n",
"\n",
"* `__init__(self, intial_state=None)`: This is the `contructor` of the class and is the first method to be called when the class is instantiated. It takes in a keyword argument, `initial_state` which is initially `None`. The argument `intial_state` represents the state from which the agent starts.\n",
"\n",
"* `__call__(self, percept)`: This method updates the `state` of the agent based on its `percept` using the `update_state` method. It then formulates a `goal` with the help of `formulate_goal` method and a `problem` using the `formulate_problem` method and returns a sequence of actions to solve it (using the `search` method).\n",
"\n",
"* `update_state(self, percept)`: This method updates the `state` of the agent based on its `percept`.\n",
"\n",
"* `formulate_goal(self, state)`: Given a `state` of the agent, this method formulates the `goal` for it.\n",
"\n",
"* `formulate_problem(self, state, goal)`: It is used in problem formulation given a `state` and a `goal` for the `agent`.\n",
"\n",
"* `search(self, problem)`: This method is used to search a sequence of `actions` to solve a `problem`."
]
},
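{
"cell_type": "markdown",
"metadata": {},
"source": [
"The bodies of `__init__` and `__call__` are not visible in the truncated output above, so here is a minimal sketch, based purely on the method descriptions above, of how `__call__` chains the four abstract methods together. The class name `SketchAgentProgram` and the `seq` attribute are illustrative assumptions rather than the module's exact source:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: a plausible reading of SimpleProblemSolvingAgentProgram's control flow.\n",
"# The four abstract methods would be supplied by a subclass, as in vacuumAgent below.\n",
"class SketchAgentProgram:\n",
"    def __init__(self, initial_state=None):\n",
"        self.state = initial_state\n",
"        self.seq = []  # actions still left to execute\n",
"\n",
"    def __call__(self, percept):\n",
"        self.state = self.update_state(self.state, percept)\n",
"        if not self.seq:\n",
"            goal = self.formulate_goal(self.state)\n",
"            problem = self.formulate_problem(self.state, goal)\n",
"            self.seq = self.search(problem)\n",
"            if not self.seq:\n",
"                return None\n",
"        return self.seq.pop(0)  # hand back one action per percept"
]
},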
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us now define a Simple Problem Solving Agent Program. We will create a simple `vacuumAgent` class which will inherit from the abstract class `SimpleProblemSolvingAgentProgram` and overrides its methods. We will create a simple intelligent vacuum agent which can be in any one of the following states. It will move to any other state depending upon the current state as shown in the picture by arrows:\n",
"\n",
""
]
},
{
"cell_type": "code",
"execution_count": 146,
"metadata": {},
"outputs": [],
"source": [
"class vacuumAgent(SimpleProblemSolvingAgentProgram):\n",
" def update_state(self, state, percept):\n",
" return percept\n",
"\n",
" def formulate_goal(self, state):\n",
" goal = [state7, state8]\n",
" return goal \n",
"\n",
" def formulate_problem(self, state, goal):\n",
" problem = state\n",
" return problem \n",
" \n",
" def search(self, problem):\n",
" if problem == state1:\n",
" seq = [\"Suck\", \"Right\", \"Suck\"]\n",
" elif problem == state2:\n",
" seq = [\"Suck\", \"Left\", \"Suck\"]\n",
" elif problem == state3:\n",
" seq = [\"Right\", \"Suck\"]\n",
" elif problem == state4:\n",
" seq = [\"Suck\"]\n",
" elif problem == state5:\n",
" seq = [\"Suck\"]\n",
" elif problem == state6:\n",
" seq = [\"Left\", \"Suck\"]\n",
" return seq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Left\n",
"Suck\n",
"Right\n"
]
}
],
"source": [
" state1 = [(0, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Dirty\"]]]\n",
" state2 = [(1, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Dirty\"]]]\n",
" state3 = [(0, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Dirty\"]]]\n",
" state4 = [(1, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Dirty\"]]]\n",
" state5 = [(0, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Clean\"]]]\n",
" state6 = [(1, 0), [(0, 0), \"Dirty\"], [(1, 0), [\"Clean\"]]]\n",
" state7 = [(0, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Clean\"]]]\n",
" state8 = [(1, 0), [(0, 0), \"Clean\"], [(1, 0), [\"Clean\"]]]\n",
"\n",
" a = vacuumAgent(state1)\n",
"\n",
" print(a(state6)) \n",
" print(a(state1))\n",
" print(a(state3))"
]
},
{
"cell_type": "markdown",
"## SEARCHING ALGORITHMS VISUALIZATION\n",
"In this section, we have visualizations of the following searching algorithms:\n",
"1. Breadth First Tree Search - Implemented\n",
"2. Depth First Tree Search - Implemented\n",
"3. Depth First Graph Search - Implemented\n",
"4. Breadth First Search - Implemented\n",
"5. Best First Graph Search - Implemented\n",
"6. Uniform Cost Search - Implemented\n",
"7. Depth Limited Search\n",
"8. Iterative Deepening Search\n",
"9. A\\*-Search - Implemented\n",
"10. Recursive Best First Search\n",
"\n",
"We add the colors to the nodes to have a nice visualisation when displaying. So, these are the different colors we are using in these visuals:\n",
"* Un-explored nodes - <font color='black'>white</font>\n",
"* Frontier nodes - <font color='orange'>orange</font>\n",
"* Currently exploring node - <font color='red'>red</font>\n",
"* Already explored nodes - <font color='gray'>gray</font>\n",
"Now, we will define some helper methods to display interactive buttons and sliders when visualising search algorithms."
]
},
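{
"cell_type": "markdown",
"metadata": {},
"source": [
"The helper functions below rely on a dictionary called `initial_node_colors` that maps every city in `romania_map` to its starting colour. Its definition is not shown in this excerpt; as a rough sketch, it presumably looks something like the following, with every node starting out white (un-explored):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch for reference only -- every node of the Romania map starts out \"white\" (un-explored).\n",
"initial_node_colors = {node: \"white\" for node in romania_map.locations.keys()}"
]
},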
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def final_path_colors(problem, solution):\n",
" \"returns a node_colors dict of the final path provided the problem and solution\"\n",
" \n",
" # get initial node colors\n",
" final_colors = dict(initial_node_colors)\n",
" # color all the nodes in solution and starting node to green\n",
" final_colors[problem.initial] = \"green\"\n",
" for node in solution:\n",
" final_colors[node] = \"green\" \n",
" return final_colors\n",
"\n",
"\n",
"def display_visual(user_input, algorithm=None, problem=None):\n",
" if user_input == False:\n",
" def slider_callback(iteration):\n",
" # don't show graph for the first time running the cell calling this function\n",
" try:\n",
" show_map(all_node_colors[iteration])\n",
" except:\n",
" pass\n",
" def visualize_callback(Visualize):\n",
" if Visualize is True:\n",
" button.value = False\n",
" \n",
" global all_node_colors\n",
" \n",
" iterations, all_node_colors, node = algorithm(problem)\n",
" solution = node.solution()\n",
" all_node_colors.append(final_path_colors(problem, solution))\n",
" \n",
" slider.max = len(all_node_colors) - 1\n",
" \n",
" for i in range(slider.max + 1):\n",
" slider.value = i\n",
" \n",
" slider = widgets.IntSlider(min=0, max=1, step=1, value=0)\n",
" slider_visual = widgets.interactive(slider_callback, iteration = slider)\n",
" display(slider_visual)\n",
" button = widgets.ToggleButton(value = False)\n",
" button_visual = widgets.interactive(visualize_callback, Visualize = button)\n",
" display(button_visual)\n",
" \n",
" if user_input == True:\n",
" node_colors = dict(initial_node_colors)\n",
" if algorithm == None:\n",
" algorithms = {\"Breadth First Tree Search\": breadth_first_tree_search,\n",
" \"Depth First Tree Search\": depth_first_tree_search,\n",
" \"Breadth First Search\": breadth_first_search,\n",
" \"Depth First Graph Search\": depth_first_graph_search,\n",
" \"Uniform Cost Search\": uniform_cost_search,\n",
" \"A-star Search\": astar_search}\n",
" algo_dropdown = widgets.Dropdown(description = \"Search algorithm: \",\n",
" options = sorted(list(algorithms.keys())),\n",
" value = \"Breadth First Tree Search\")\n",
" display(algo_dropdown)\n",
" \n",
" def slider_callback(iteration):\n",
" # don't show graph for the first time running the cell calling this function\n",
" try:\n",
" show_map(all_node_colors[iteration])\n",
" except:\n",
" pass\n",
" def visualize_callback(Visualize):\n",
" if Visualize is True:\n",
" button.value = False\n",
" \n",
" problem = GraphProblem(start_dropdown.value, end_dropdown.value, romania_map)\n",
" global all_node_colors\n",
" \n",
" if algorithm == None:\n",
" user_algorithm = algorithms[algo_dropdown.value]\n",
" \n",
"# print(user_algorithm)\n",
"# print(problem)\n",
" \n",
" iterations, all_node_colors, node = user_algorithm(problem)\n",
" solution = node.solution()\n",
" all_node_colors.append(final_path_colors(problem, solution))\n",
" slider.max = len(all_node_colors) - 1\n",
" \n",
" for i in range(slider.max + 1):\n",
" slider.value = i\n",
"# time.sleep(.5)\n",
" start_dropdown = widgets.Dropdown(description = \"Start city: \",\n",
" options = sorted(list(node_colors.keys())), value = \"Arad\")\n",
" display(start_dropdown)\n",
" end_dropdown = widgets.Dropdown(description = \"Goal city: \",\n",
" options = sorted(list(node_colors.keys())), value = \"Fagaras\")\n",
" display(end_dropdown)\n",
" \n",
" button = widgets.ToggleButton(value = False)\n",
" button_visual = widgets.interactive(visualize_callback, Visualize = button)\n",
" display(button_visual)\n",
" \n",
" slider = widgets.IntSlider(min=0, max=1, step=1, value=0)\n",
" slider_visual = widgets.interactive(slider_callback, iteration = slider)\n",
" display(slider_visual)"
]
},
{
"cell_type": "markdown",
"## BREADTH-FIRST TREE SEARCH\n",
"We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": true
},
"source": [
"def tree_search(problem, frontier):\n",
" \"\"\"Search through the successors of a problem to find a goal.\n",
" The argument frontier should be an empty queue.\n",
" Don't worry about repeated paths to a state. [Figure 3.7]\"\"\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = dict(initial_node_colors)\n",
" \n",
" frontier.append(Node(problem.initial))\n",
" node_colors[Node(problem.initial).state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" while frontier:\n",
" node = frontier.pop()\n",
" \n",
" # modify the currently searching node to red\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" # modify goal node to green after reaching the goal\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" frontier.extend(node.expand(problem))\n",
" \n",
" for n in node.expand(problem):\n",
" node_colors[n.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" # modify the color of explored nodes to gray\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None\n",
"\n",
"def breadth_first_tree_search(problem):\n",
" \"Search the shallowest nodes in the search tree first.\"\n",
" iterations, all_node_colors, node = tree_search(problem, FIFOQueue())\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "markdown",
"Now, we use `ipywidgets` to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button **Visualize**, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button."
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d55324f7343a4c71a9a2d4da6d037037"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b07a3813dd724c51a9b37f646cf2be25"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Fagaras', romania_map)\n",
"display_visual(user_input = False, algorithm = breadth_first_tree_search, problem = romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Depth-First Tree Search:\n",
"Now let's discuss another searching algorithm, Depth-First Tree Search."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def depth_first_tree_search(problem):\n",
" \"Search the deepest nodes in the search tree first.\"\n",
" # This algorithm might not work in case of repeated paths\n",
" # and may run into an infinite while loop.\n",
" iterations, all_node_colors, node = tree_search(problem, Stack())\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "523b10cf84e54798a044ee714b864b52"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "aecea953f6a448c192ac8e173cf46e35"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Oradea', romania_map)\n",
"display_visual(user_input = False, algorithm = depth_first_tree_search, problem = romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {
"## BREADTH-FIRST SEARCH\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def breadth_first_search(problem):\n",
" \"[Figure 3.11]\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = dict(initial_node_colors)\n",
" \n",
" node = Node(problem.initial)\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" frontier = FIFOQueue()\n",
" frontier.append(node)\n",
" \n",
" # modify the color of frontier nodes to blue\n",
" node_colors[node.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored = set()\n",
" while frontier:\n",
" node = frontier.pop()\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored.add(node.state) \n",
" \n",
" for child in node.expand(problem):\n",
" if child.state not in explored and child not in frontier:\n",
" if problem.goal_test(child.state):\n",
" node_colors[child.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, child)\n",
" frontier.append(child)\n",
"\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "735a3dea191a42b6bd97fdfd337ea3e7"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "ef445770d70a4b7c9d1544b98a55ca4d"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(user_input = False, algorithm = breadth_first_search, problem = romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Depth-First Graph Search: \n",
"Although we have a working implementation in search module, we have to make a few changes in the algorithm to make it suitable for visualization."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def graph_search(problem, frontier):\n",
" \"\"\"Search through the successors of a problem to find a goal.\n",
" The argument frontier should be an empty queue.\n",
" If two paths reach a state, only use the first one. [Figure 3.7]\"\"\"\n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = dict(initial_node_colors)\n",
" \n",
" frontier.append(Node(problem.initial))\n",
" explored = set()\n",
" \n",
" # modify the color of frontier nodes to orange\n",
" node_colors[Node(problem.initial).state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" while frontier:\n",
" # Popping first node of queue\n",
" node = frontier.pop()\n",
" \n",
" # modify the currently searching node to red\n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" # modify goal node to green after reaching the goal\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" explored.add(node.state)\n",
" frontier.extend(child for child in node.expand(problem)\n",
" if child.state not in explored and\n",
" child not in frontier)\n",
" \n",
" for n in frontier:\n",
" # modify the color of frontier nodes to orange\n",
" node_colors[n.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
"\n",
" # modify the color of explored nodes to gray\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" return None\n",
"\n",
"\n",
"def depth_first_graph_search(problem):\n",
" \"\"\"Search the deepest nodes in the search tree first.\"\"\"\n",
" iterations, all_node_colors, node = graph_search(problem, Stack())\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "61149ffbc02846af97170f8975d4f11d"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "90b1f8f77fdb4207a3570fbe88a0bdf6"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(user_input = False, algorithm = depth_first_graph_search, problem = romania_problem)"
"cell_type": "markdown",
"\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def best_first_graph_search(problem, f):\n",
" \"\"\"Search the nodes with the lowest f scores first.\n",
" You specify the function f(node) that you want to minimize; for example,\n",
" if f is a heuristic estimate to the goal, then we have greedy best\n",
" first search; if f is node.depth then we have breadth-first search.\n",
" There is a subtlety: the line \"f = memoize(f, 'f')\" means that the f\n",
" values will be cached on the nodes as they are computed. So after doing\n",
" a best first search you can examine the f values of the path returned.\"\"\"\n",
" \n",
" # we use these two variables at the time of visualisations\n",
" iterations = 0\n",
" all_node_colors = []\n",
" node_colors = dict(initial_node_colors)\n",
" \n",
" f = memoize(f, 'f')\n",
" node = Node(problem.initial)\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" frontier = PriorityQueue(min, f)\n",
" frontier.append(node)\n",
" \n",
" node_colors[node.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" explored = set()\n",
" while frontier:\n",
" node = frontier.pop()\n",
" \n",
" node_colors[node.state] = \"red\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" \n",
" if problem.goal_test(node.state):\n",
" node_colors[node.state] = \"green\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return(iterations, all_node_colors, node)\n",
" \n",
" explored.add(node.state)\n",
" for child in node.expand(problem):\n",
" if child.state not in explored and child not in frontier:\n",
" frontier.append(child)\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" elif child in frontier:\n",
" incumbent = frontier[child]\n",
" if f(child) < f(incumbent):\n",
" del frontier[incumbent]\n",
" frontier.append(child)\n",
" node_colors[child.state] = \"orange\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
"\n",
" node_colors[node.state] = \"gray\"\n",
" iterations += 1\n",
" all_node_colors.append(dict(node_colors))\n",
" return None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## UNIFORM COST SEARCH\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": true
},
"def uniform_cost_search(problem):\n",
" \"[Figure 3.14]\"\n",
" #Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)\n",
" iterations, all_node_colors, node = best_first_graph_search(problem, lambda node: node.path_cost)\n",
" return(iterations, all_node_colors, node)"
"cell_type": "code",
"execution_count": 23,
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "46b8200b4a8f47e7b18145234a8469da"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "ca9b2d01bbd5458bb037585c719d73fc"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(user_input = False, algorithm = uniform_cost_search, problem = romania_problem)"
"cell_type": "markdown",
"metadata": {},
"Let's change all the node_colors to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def greedy_best_first_search(problem, h=None):\n",
" \"\"\"Greedy Best-first graph search is an informative searching algorithm with f(n) = h(n).\n",
" You need to specify the h function when you call best_first_search, or\n",
" else in your Problem subclass.\"\"\"\n",
" h = memoize(h or problem.h, 'h')\n",
" iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: h(n))\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "e3ddd0260d7d4a8aa62d610976b9568a"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "dae485b1f4224c34a88de42d252da76c"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(user_input = False, algorithm = greedy_best_first_search, problem = romania_problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A\\* SEARCH\n",
"\n",
"Let's change all the `node_colors` to starting position and define a different problem statement."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def astar_search(problem, h=None):\n",
" \"\"\"A* search is best-first graph search with f(n) = g(n)+h(n).\n",
" You need to specify the h function when you call astar_search, or\n",
" else in your Problem subclass.\"\"\"\n",
" h = memoize(h or problem.h, 'h')\n",
" iterations, all_node_colors, node = best_first_graph_search(problem, lambda n: n.path_cost + h(n))\n",
" return(iterations, all_node_colors, node)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "15a78d815f0c4ea589cdd5ad40bc8794"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "10450687dd574be2a380e9e40403fa83"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"all_node_colors = []\n",
"romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)\n",
"display_visual(user_input = False, algorithm = astar_search, problem = romania_problem)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "9019790cf8324d73966373bb3f5373a8"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b8a3195598da472d996e4e8b81595cb7"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "aabe167a0d6440f0a020df8a85a9206c"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "25d146d187004f4f9db6a7dccdbc7e93"
}
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "68d532810a9e46309415fd353c474a4d"
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"all_node_colors = []\n",
"# display_visual(user_input = True, algorithm = breadth_first_tree_search)\n",
"display_visual(user_input = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Different heuristics provide different efficiency in solving A* problems which are generally defined by the number of explored nodes as well as the branching factor. With the classic 8 puzzle we can show the efficiency of different heuristics through the number of explored nodes.\n",
"The *8 Puzzle Problem* consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.\n",
" Initial State Goal State\n",
" | 7 | 2 | 4 | | 0 | 1 | 2 |\n",
" | 5 | 0 | 6 | | 3 | 4 | 5 |\n",
" | 8 | 3 | 1 | | 6 | 7 | 8 |\n",
" \n",
"We have a total of 9 blank tiles giving us a total of 9! initial configuration but not all of these are solvable. The solvability of a configuration can be checked by calculating the Inversion Permutation. If the total Inversion Permutation is even then the initial configuration is solvable else the initial configuration is not solvable which means that only 9!/2 initial states lead to a solution.\n",
"1) Manhattan Distance:- For the 8 puzzle problem Manhattan distance is defined as the distance of a tile from its goal state( for the tile numbered '1' in the initial configuration Manhattan distance is 4 \"2 for left and 2 for upward displacement\").\n",
"2) No. of Misplaced Tiles:- The heuristic calculates the number of misplaced tiles between the current state and goal state.\n",
"3) Sqrt of Manhattan Distance:- It calculates the square root of Manhattan distance.\n",
"4) Max Heuristic:- It assign the score as the maximum between \"Manhattan Distance\" and \"No. of Misplaced Tiles\". "
"metadata": {
"collapsed": true
},
"def linear(state,goal):\n",
" return sum([1 if state[i] != goal[i] else 0 for i in range(8)])\n",
"\n",
"def manhanttan(state,goal):\n",
" index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}\n",
" index_state = {}\n",
" index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]\n",
" x, y = 0, 0\n",