{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# Logic: `logic.py`; Chapters 6-8"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents),  7 (First-Order Logic) and  8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.\n",
    "We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on `crime_kb` knowledge base.\n",
    "But the first step is to load the code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
    "from utils import *\n",
    "from logic import *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Logical Sentences"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "x"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Symbol('x')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or we can define multiple symbols at the same time with the function `symbols`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "(x, y, P, Q, f) = symbols('x, y, P, Q, f')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence \"P and not Q\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(P & ~Q)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "P & ~Q"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This works because the `Expr` class overloads the `&` operator with this definition:\n",
    "\n",
    "```python\n",
    "def __and__(self, other): return Expr('&',  self, other)```\n",
    "     \n",
    "and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By \"expression,\" I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:"
   ]
  },
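  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the operator form and the explicit constructor form should build the same tree (a minimal sketch; it assumes `Expr` equality compares `op` and `args` structurally):\n",
    "\n",
    "```python\n",
    "# Hedged sketch: both forms are assumed to build the same expression tree.\n",
    "built_by_operators = P & ~Q\n",
    "built_by_constructor = Expr('&', P, Expr('~', Q))\n",
    "built_by_operators == built_by_constructor  # expected: True\n",
    "```"
   ]
  },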
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'&'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sentence = P & ~Q\n",
    "\n",
    "sentence.op"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(P, ~Q)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sentence.args"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'P'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "P.op"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "()"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "P.args"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'P'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Pxy = P(x, y)\n",
    "\n",
    "Pxy.op"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(x, y)"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Pxy.args"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree).  Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deply nested `Expr`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(((3 * f(x, y)) + (P(y) / 2)) + 1)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "3 * f(x, y) + P(y) / 2 + 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Operators for Constructing Logical Sentences\n",
    "Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:\n",
    "| Operation                | Book | Python Infix Input | Python Output | Python `Expr` Input\n",
    "|--------------------------|----------------------|-------------------------|---|---|\n",
    "| Negation                 | ¬ P      | `~P`                       | `~P` | `Expr('~', P)`\n",
    "| And                      | P ∧ Q       | `P & Q`                     | `P & Q` | `Expr('&', P, Q)`\n",
    "| Or                       | P &or; Q | `P`<tt> &#124; </tt>`Q`| `P`<tt> &#124; </tt>`Q` | `Expr('`&#124;`', P, Q)`\n",
    "| Inequality (Xor)         | P &ne; Q     | `P ^ Q`                | `P ^ Q`  | `Expr('^', P, Q)`\n",
    "| Implication                  | P &rarr; Q    | `P` <tt>&#124;</tt>`'==>'`<tt>&#124;</tt> `Q`   | `P ==> Q` | `Expr('==>', P, Q)`\n",
    "| Reverse Implication      | Q &larr; P     | `Q` <tt>&#124;</tt>`'<=='`<tt>&#124;</tt> `P`  |`Q <== P` | `Expr('<==', Q, P)`\n",
    "| Equivalence            | P &harr; Q   | `P` <tt>&#124;</tt>`'<=>'`<tt>&#124;</tt> `Q`   |`P <=> Q` | `Expr('<=>', P, Q)`\n",
    "Here's an example of defining a sentence with an implication arrow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(~(P & Q) ==> (~P | ~Q))"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "~(P & Q)  |'==>'|  (~P | ~Q)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `expr`: a Shortcut for Constructing Sentences\n",
    "If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(~(P & Q) ==> (~P | ~Q))"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "expr('~(P & Q)  ==>  (~P | ~Q)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:"
   ]
  },
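  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of that behavior (hedged; it assumes the parsed arrow ends up as the `op` of the resulting `Expr`):\n",
    "\n",
    "```python\n",
    "# Hedged sketch: expr is assumed to turn the arrow string into the op\n",
    "# of the resulting Expr, creating the symbols P and Q on the fly.\n",
    "imp = expr('P ==> Q')\n",
    "imp.op    # expected: '==>'\n",
    "imp.args  # expected: (P, Q)\n",
    "```"
   ]
  },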
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "sqrt(((b ** 2) - ((4 * a) * c)))"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "expr('sqrt(b ** 2 - 4 * a * c)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Propositional Knowledge Bases: `PropKB`\n",
    "The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.\n",
    "We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented and what you'll have to actually implement when you create your own knowledge base class (if you want to, though I doubt you'll ever need to; just use the ones we've created for you), will be the `ask_generator` function and not the `ask` function itself.\n",
    "\n",
    "The class `PropKB` now.\n",
    "* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.\n",
    "* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.\n",
    "* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails query and `None` otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a `True` or a `False` instead of the `{}` or `None` But this is done to maintain consistency with the way things are in First-Order Logic, where, an `ask_generator` function, is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will be mostly be using the `ask` function which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function which returns a `True` or a `False`.\n",
    "* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those."
   ]
  },
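  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is a small, hedged usage sketch of these methods on a throwaway knowledge base (`toy_kb`, `A`, and `B` are illustrative names only):\n",
    "\n",
    "```python\n",
    "# Hedged sketch of the PropKB interface described above.\n",
    "toy_kb = PropKB()\n",
    "toy_kb.tell(expr('A ==> B'))    # stored internally as the clause (~A | B)\n",
    "toy_kb.tell(expr('A'))\n",
    "toy_kb.ask_if_true(expr('B'))   # expected: True, the KB entails B\n",
    "toy_kb.retract(expr('A'))       # remove the clause A again\n",
    "toy_kb.ask_if_true(expr('B'))   # expected: False once A is gone\n",
    "```"
   ]
  },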
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Wumpus World KB\n",
    "Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "wumpus_kb = PropKB()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We define the symbols we use in our clauses.<br/>\n",
    "$P_{x, y}$ is true if there is a pit in `[x, y]`.<br/>\n",
    "$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.<br/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we tell sentences based on `section 7.4.3`.<br/>\n",
    "There is no pit in `[1,1]`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "wumpus_kb.tell(~P11)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))\n",
    "wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "wumpus_kb.tell(~B11)\n",
    "wumpus_kb.tell(B21)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can check the clauses stored in a `KB` by accessing its `clauses` variable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[~P11,\n",
       " (~P12 | B11),\n",
       " (~P21 | B11),\n",
       " (P12 | P21 | ~B11),\n",
       " (~P11 | B21),\n",
       " (~P22 | B21),\n",
       " (~P31 | B21),\n",
       " (P11 | P22 | P31 | ~B21),\n",
       " ~B11,\n",
       " B21]"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wumpus_kb.clauses"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We see that the equivalence $B_{1, 1} \\iff (P_{1, 2} \\lor P_{2, 1})$ was automatically converted to two implications which were inturn converted to CNF which is stored in the `KB`.<br/>\n",
    "$B_{1, 1} \\iff (P_{1, 2} \\lor P_{2, 1})$ was split into $B_{1, 1} \\implies (P_{1, 2} \\lor P_{2, 1})$ and $B_{1, 1} \\Longleftarrow (P_{1, 2} \\lor P_{2, 1})$.<br/>\n",
    "$B_{1, 1} \\implies (P_{1, 2} \\lor P_{2, 1})$ was converted to $P_{1, 2} \\lor P_{2, 1} \\lor \\neg B_{1, 1}$.<br/>\n",
    "$B_{1, 1} \\Longleftarrow (P_{1, 2} \\lor P_{2, 1})$ was converted to $\\neg (P_{1, 2} \\lor P_{2, 1}) \\lor B_{1, 1}$ which becomes $(\\neg P_{1, 2} \\lor B_{1, 1}) \\land (\\neg P_{2, 1} \\lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.<br/>\n",
    "$B_{2, 1} \\iff (P_{1, 1} \\lor P_{2, 2} \\lor P_{3, 2})$ is converted in similar manner."
   ]
  },
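  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to see this conversion on its own, the `to_cnf` helper from `logic.py` can be applied directly (a hedged check; the clause ordering in the printed result may differ):\n",
    "\n",
    "```python\n",
    "# Hedged check: to_cnf is assumed to perform the biconditional elimination,\n",
    "# De Morgan, and distribution steps described above.\n",
    "to_cnf(B11 |'<=>'| (P12 | P21))\n",
    "# expected: a conjunction equivalent to\n",
    "# (~P12 | B11) & (~P21 | B11) & (P12 | P21 | ~B11)\n",
    "```"
   ]
  },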
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Inference in Propositional Knowlwdge Base\n",
    "In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\\text{KB} \\vDash \\alpha$ for some sentence $\\alpha$.\n",
    "### Truth Table Enumeration\n",
    "It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\\alpha$."
   ]
  },
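  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before looking at the module's implementation, here is a tiny standalone sketch of the idea in plain Python (independent of `logic.py`; `toy_tt_entails` and the lambda-based sentences are illustrative only):\n",
    "\n",
    "```python\n",
    "from itertools import product\n",
    "\n",
    "def toy_tt_entails(kb_sentences, alpha, symbols_):\n",
    "    # Standalone illustration, not the module's tt_entails: enumerate all\n",
    "    # 2**n assignments; KB entails alpha iff alpha holds in every model\n",
    "    # in which every KB sentence holds.\n",
    "    for values in product([True, False], repeat=len(symbols_)):\n",
    "        model = dict(zip(symbols_, values))\n",
    "        if all(s(model) for s in kb_sentences) and not alpha(model):\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "# KB = {P, P ==> Q}; query Q.  Sentences are plain Python predicates here.\n",
    "kb_sentences = [lambda m: m['P'], lambda m: (not m['P']) or m['Q']]\n",
    "toy_tt_entails(kb_sentences, lambda m: m['Q'], ['P', 'Q'])  # expected: True\n",
    "```"
   ]
  },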
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%psource tt_check_all"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$."
   ]
  },
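  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, you can build that conjunction yourself and pass it to `tt_entails` (a hedged sketch; `conjoin` is assumed to be the helper that joins a list of clauses with `&`):\n",
    "\n",
    "```python\n",
    "# Hedged sketch: tt_entails(kb_expr, query) with an explicit conjunction.\n",
    "tt_entails(expr('P & Q'), expr('Q'))          # expected: True\n",
    "tt_entails(conjoin(wumpus_kb.clauses), ~P11)  # expected: True, same as ask_if_true\n",
    "```"
   ]
  },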
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(True, False)"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\\alpha = \\neg P_{1, 1}$ and `False` for $\\alpha = P_{1, 1}$. This begs the question, what if $\\alpha$ is `True` in only a portion of all models. Do we return `True` or `False`? This doesn't rule out the possibility of $\\alpha$ being `True` but it is not entailed by the `KB` so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(False, False)"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Proof by Resolution\n",
    "Recall that our goal is to check whether $\\text{KB} \\vDash \\alpha$ i.e. is $\\text{KB} \\implies \\alpha$ true in every model. Suppose we wanted to check if $P \\implies Q$ is valid. We check the satisfiability of $\\neg (P \\implies Q)$, which can be rewritten as $P \\land \\neg Q$. If $P \\land \\neg Q$ is unsatisfiable, then $P \\implies Q$ must be true in all models. This gives us the result \"$\\text{KB} \\vDash \\alpha$ <em>if and only if</em> $\\text{KB} \\land \\neg \\alpha$ is unsatisfiable\".<br/>\n",
    "This technique corresponds to <em>proof by <strong>contradiction</strong></em>, a standard mathematical proof technique. We assume $\\alpha$ to be false and show that this leads to a contradiction with known axioms in $\\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, <strong>resolution</strong> which states $(l_1 \\lor \\dots \\lor l_k) \\land (m_1 \\lor \\dots \\lor m_n) \\land (l_i \\iff \\neg m_j) \\implies l_1 \\lor \\dots \\lor l_{i - 1} \\lor l_{i + 1} \\lor \\dots \\lor l_k \\lor m_1 \\lor \\dots \\lor m_{j - 1} \\lor m_{j + 1} \\lor \\dots \\lor m_n$. Applying the resolution yeilds us a clause which we add to the KB. We keep doing this until:\n",
    "<ul>\n",
    "<li>There are no new clauses that can be added, in which case $\\text{KB} \\nvDash \\alpha$.</li>\n",
    "<li>Two clauses resolve to yield the <em>empty clause</em>, in which case $\\text{KB} \\vDash \\alpha$.</li>\n",
    "</ul>\n",
    "The <em>empty clause</em> is equivalent to <em>False</em> because it arises only from resolving two complementary\n",
    "unit clauses such as $P$ and $\\neg P$ which is a contradiction as both $P$ and $\\neg P$ can't be <em>True</em> at the same time."
   ]
  },
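  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the refutation setup concretely, you can negate a candidate sentence and look at its clause form (a hedged sketch using `to_cnf`; only the shape of the result matters here):\n",
    "\n",
    "```python\n",
    "# Hedged sketch: KB |= alpha iff KB & ~alpha is unsatisfiable.\n",
    "# Negating an implication and converting to CNF gives the clauses\n",
    "# that pl_resolution would try to refute.\n",
    "alpha = expr('P ==> Q')\n",
    "to_cnf(~alpha)  # expected: (P & ~Q)\n",
    "```"
   ]
  },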
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%psource pl_resolution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(True, False)"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(False, False)"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## First-Order Logic Knowledge Bases: `FolKB`\n",
    "\n",
    "The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Criminal KB\n",
    "In this section we create a `FolKB` based on the following paragraph.<br/>\n",
    "<em>The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.</em><br/>\n",
    "The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortnately we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in list aptly named `clauses`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses = []"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<em>“... it is a crime for an American to sell weapons to hostile nations”</em><br/>\n",
    "The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.\n",
    "<ul>\n",
    "<li>`Criminal(x)`: `x` is a criminal</li>\n",
    "<li>`American(x)`: `x` is an American</li>\n",
    "<li>`Sells(x ,y, z)`: `x` sells `y` to `z`</li>\n",
    "<li>`Weapon(x)`: `x` is a weapon</li>\n",
    "<li>`Hostile(x)`: `x` is a hostile nation</li>\n",
    "</ul>\n",
    "Let us now combine them with appropriate variable naming depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.\n",
    "\n",
    "$\\text{American}(x) \\land \\text{Weapon}(y) \\land \\text{Sells}(x, y, z) \\land \\text{Hostile}(z) \\implies \\text{Criminal} (x)$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<em>\"The country Nono, an enemy of America\"</em><br/>\n",
    "We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. the enemy relation is show using the predicate symbol `Enemy`.\n",
    "\n",
    "$\\text{Enemy}(\\text{Nono}, \\text{America})$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"Enemy(Nono, America)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<em>\"Nono ... has some missiles\"</em><br/>\n",
    "This states the existance of some missile which is owned by Nono. $\\exists x \\text{Owns}(\\text{Nono}, x) \\land \\text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.\n",
    "\n",
    "$\\text{Owns}(\\text{Nono}, \\text{M1}), \\text{Missile}(\\text{M1})$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"Owns(Nono, M1)\"))\n",
    "clauses.append(expr(\"Missile(M1)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "<em>\"All of its missiles were sold to it by Colonel West\"</em><br/>\n",
    "If Nono owns something and it classifies as a missile, then it was sold to Nono by West.\n",
    "\n",
    "$\\text{Missile}(x) \\land \\text{Owns}(\\text{Nono}, x) \\implies \\text{Sells}(\\text{West}, x, \\text{Nono})$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<em>\"West, who is American\"</em><br/>\n",
    "West is an American.\n",
    "\n",
    "$\\text{American}(\\text{West})$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"American(West)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.\n",
    "\n",
    "$\\text{Missile}(x) \\implies \\text{Weapon}(x), \\text{Enemy}(x, \\text{America}) \\implies \\text{Hostile}(x)$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clauses.append(expr(\"Missile(x) ==> Weapon(x)\"))\n",
    "clauses.append(expr(\"Enemy(x, America) ==> Hostile(x)\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "crime_kb = FolKB(clauses)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Inference in First-Order Logic\n",
    "In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both the aforementioned algorithms rely on a process called <strong>unification</strong>, a key component of all first-order inference algorithms."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Unification\n",
    "We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a <em>unifier</em> for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{x: 3}"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "unify(expr('x'), 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{x: B}"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "unify(expr('A(x)'), expr('A(B)'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{x: Bella, y: Dobby}"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In cases where there is no possible substitution that unifies the two sentences the function return `None`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We also need to take care we do not unintentionally use same variable name. Unify treats them as a single variable which prevents it from taking multiple value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Forward Chaining Algorithm\n",
    "We consider the simple forward-chaining algorithm presented in <em>Figure 9.3</em>. We look at each rule in the knoweldge base and see if the premises can be satisfied. This is done by finding a substitution which unifies the each of the premise with a clause in the `KB`. If we are able to unify the premises the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or till no new sentences can be aded. We test if the newly added clause unifies with the query in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, this means the query was a failure.\n",
    "\n",
    "The function `fol_fc_ask` is a generator which yields all substitutions which validate the query."