]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{(False,): 0.30000000000000004, (True,): 0.7}"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"new_factor.cpt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here the **cpt** is for **P(MaryCalls | Alarm = True)**, so the probabilities for True and False sum to one. Note the difference between the two cases: again, the only rows included are those consistent with the evidence.\n",
"\n",
"#### Operations on Factors\n",
"\n",
"We are interested in two kinds of operations on factors. **Pointwise Product**, which is used to create joint distributions, and **Summing Out**, which is used for marginalization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"psource(Factor.pointwise_product)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Factor.pointwise_product** implements a method of creating a joint distribution by combining two factors. We take the union of the **variables** of both factors and then generate the **cpt** for the new factor using the **all_events** function. Note that we have already eliminated the rows that are not consistent with the evidence. Pointwise product assigns new probabilities by multiplying rows, similar to a database join."
]
},
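{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the idea concrete, here is a minimal sketch of a pointwise product over factors stored as plain dicts mapping assignment tuples to probabilities. The names and representation below are hypothetical for illustration, not the aima-python API:\n",
"\n",
"```python\n",
"from itertools import product\n",
"\n",
"def pointwise_product_sketch(vars1, cpt1, vars2, cpt2):\n",
"    # Union of the two variable lists, preserving order\n",
"    variables = vars1 + [v for v in vars2 if v not in vars1]\n",
"    cpt = {}\n",
"    for values in product([True, False], repeat=len(variables)):\n",
"        event = dict(zip(variables, values))\n",
"        row1 = tuple(event[v] for v in vars1)\n",
"        row2 = tuple(event[v] for v in vars2)\n",
"        # Multiply the matching rows, like a database join\n",
"        cpt[values] = cpt1[row1] * cpt2[row2]\n",
"    return variables, cpt\n",
"\n",
"# Joint of P(A) and P(B | A)\n",
"vars_j, cpt_j = pointwise_product_sketch(\n",
"    ['A'], {(True,): 0.3, (False,): 0.7},\n",
"    ['A', 'B'], {(True, True): 0.9, (True, False): 0.1,\n",
"                 (False, True): 0.2, (False, False): 0.8})\n",
"```"
]
},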
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(pointwise_product)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**pointwise_product** extends this operation to more than two operands, applying it sequentially in pairs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(Factor.sum_out)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Factor.sum_out** makes a new factor that eliminates a variable by summing over its values. Again, **all_events** is used to generate combinations for the rest of the variables."
]
},
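{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration, summing out can be sketched on factors stored as dicts mapping assignment tuples to probabilities; the helper name below is hypothetical, not the aima-python implementation:\n",
"\n",
"```python\n",
"from itertools import product\n",
"\n",
"def sum_out_sketch(var, variables, cpt):\n",
"    # Eliminate var by summing the rows over its values\n",
"    rest = [v for v in variables if v != var]\n",
"    i = variables.index(var)\n",
"    new_cpt = {}\n",
"    for values in product([True, False], repeat=len(rest)):\n",
"        total = 0.0\n",
"        for val in (True, False):\n",
"            full = list(values)\n",
"            full.insert(i, val)  # put var back at its position\n",
"            total += cpt[tuple(full)]\n",
"        new_cpt[values] = total\n",
"    return rest, new_cpt\n",
"\n",
"# Marginalize B out of a joint over (A, B)\n",
"joint = {(True, True): 0.27, (True, False): 0.03,\n",
"         (False, True): 0.14, (False, False): 0.56}\n",
"rest, marginal = sum_out_sketch('B', ['A', 'B'], joint)\n",
"```"
]
},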
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(sum_out)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**sum_out** uses both **Factor.sum_out** and **pointwise_product** to finally eliminate a particular variable from all factors by summing over its values."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Elimination Ask\n",
"\n",
"The algorithm described in **Figure 14.11** of the book is implemented by the function **elimination_ask**. We use this for inference. The key idea is that we eliminate the hidden variables by interleaving joining and marginalization. It takes in three arguments: **X**, the query variable; **e**, the evidence; and **bn**, the Bayes net. \n",
"\n",
"The algorithm creates factors out of Bayes nodes in reverse order and eliminates hidden variables using **sum_out**. Finally, it takes a pointwise product of all the factors and normalizes. Let us now solve the problem of inferring \n",
"\n",
"**P(Burglary=True | JohnCalls=True, MaryCalls=True)** using variable elimination."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(elimination_ask)"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'False: 0.716, True: 0.284'"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Approximate Inference in Bayesian Networks\n",
"\n",
"Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
},
"outputs": [],
"source": [
"psource(BayesNode.sample)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we consider the different algorithms in this section, let us look at the **BayesNode.sample** method. It samples from the distribution for this variable, conditioned on the event's values for parent_variables. That is, it returns True/False at random according to the conditional probability given the parents. The **probability** function is a simple helper from the **utils** module which returns True with the probability passed to it.\n",
"\n",
"### Prior Sampling\n",
"\n",
"The idea of Prior Sampling is to sample from the Bayesian Network in topological order. We start at the top of the network and sample as per **P(X<sub>i</sub> | parents(X<sub>i</sub>))**, i.e. the probability distribution from which the value is sampled is conditioned on the values already assigned to the variable's parents. This can be thought of as a simulation."
]
},
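{
"cell_type": "markdown",
"metadata": {},
"source": [
"The procedure can be sketched on a toy two-node network **Cloudy -> Rain**; the probabilities below are illustrative and this is not the sprinkler network from the book:\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def prior_sample_sketch():\n",
"    event = {}\n",
"    # Sample in topological order, conditioning on sampled parents\n",
"    event['Cloudy'] = random.random() < 0.5\n",
"    event['Rain'] = random.random() < (0.8 if event['Cloudy'] else 0.2)\n",
"    return event\n",
"\n",
"samples = [prior_sample_sketch() for _ in range(10000)]\n",
"estimate = sum(s['Rain'] for s in samples) / len(samples)\n",
"# True value is 0.5 * 0.8 + 0.5 * 0.2 = 0.5\n",
"```"
]
},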
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(prior_sample)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function **prior_sample** implements the algorithm described in **Figure 14.13** of the book. Nodes are sampled in topological order; the event built so far is passed in as evidence for the parents' values. We will use the Bayesian Network in **Figure 14.12** to try out **prior_sample**.\n",
"\n",
"<img src=\"files/images/sprinklernet.jpg\" height=\"500\" width=\"500\">\n",
"\n",
"We store the samples as observations. Let us find **P(Rain=True)**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
},
"outputs": [],
"source": [
"N = 1000\n",
"all_observations = [prior_sample(sprinkler) for x in range(N)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we filter to get the observations where Rain = True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"rain_true = [observation for observation in all_observations if observation['Rain'] == True]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can find **P(Rain=True)**"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.508\n"
]
}
],
"source": [
"answer = len(rain_true) / N\n",
"print(answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To evaluate a conditional distribution we can use a two-step filtering process. We first separate out the observations that are consistent with the evidence; then, for each value of the query variable, we can find probabilities. For example, to find **P(Cloudy=True | Rain=True)**: we have already filtered out the observations consistent with our evidence in **rain_true**. Now we apply a second filtering step on **rain_true** to count the observations that also have **Cloudy=True**; the ratio gives **P(Cloudy=True | Rain=True)**."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.7755905511811023\n"
]
}
],
"source": [
"rain_and_cloudy = [observation for observation in rain_true if observation['Cloudy'] == True]\n",
"answer = len(rain_and_cloudy) / len(rain_true)\n",
"print(answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Rejection Sampling\n",
"\n",
"Rejection Sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. The function **rejection_sampling** implements the algorithm described in **Figure 14.14**."
]
},
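{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this on a toy **Cloudy -> Rain** network, with illustrative probabilities (hypothetical helper names, not the aima-python functions):\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def sample():\n",
"    cloudy = random.random() < 0.5\n",
"    rain = random.random() < (0.8 if cloudy else 0.2)\n",
"    return {'Cloudy': cloudy, 'Rain': rain}\n",
"\n",
"def rejection_sampling_sketch(n):\n",
"    counts = {True: 0, False: 0}\n",
"    for _ in range(n):\n",
"        s = sample()\n",
"        if s['Rain']:  # keep only samples consistent with the evidence\n",
"            counts[s['Cloudy']] += 1\n",
"    return counts[True] / (counts[True] + counts[False])\n",
"\n",
"p_est = rejection_sampling_sketch(20000)\n",
"# True value: P(Cloudy=True | Rain=True) = 0.4 / 0.5 = 0.8\n",
"```"
]
},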
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(rejection_sampling)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function keeps counts for each of the possible values of the query variable and increments the count when we see an observation consistent with the evidence. It takes the input parameters **X**, the query variable; **e**, the evidence; **bn**, the Bayes net; and **N**, the number of prior samples to generate.\n",
"\n",
"**consistent_with** is used to check consistency."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(consistent_with)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To answer **P(Cloudy=True | Rain=True)**"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.7835249042145593"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"p = rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)\n",
"p[True]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Likelihood Weighting\n",
"\n",
"Rejection sampling tends to reject a lot of samples if our evidence consists of a large number of variables. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent.\n",
"\n",
"The pseudocode in **Figure 14.15** is implemented as **likelihood_weighting** and **weighted_sample**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(weighted_sample)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**weighted_sample** samples an event from the Bayesian Network that is consistent with the evidence **e** and returns the event together with its weight, the likelihood that the event accords with the evidence. It takes in two parameters: **bn**, the Bayesian Network, and **e**, the evidence.\n",
"\n",
"The weight is obtained by multiplying **P(x<sub>i</sub> | parents(x<sub>i</sub>))** for each node in evidence. We set the values of **event = evidence** at the start of the function."
]
},
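{
"cell_type": "markdown",
"metadata": {},
"source": [
"The weight computation can be sketched on a toy **Cloudy -> Rain** network with evidence **Rain=True**; the probabilities and helper name are illustrative, not the aima-python implementation:\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def weighted_sample_sketch():\n",
"    # Evidence Rain=True is fixed rather than sampled\n",
"    event, weight = {'Rain': True}, 1.0\n",
"    # Cloudy is not evidence, so sample it as usual\n",
"    event['Cloudy'] = random.random() < 0.5\n",
"    # Multiply the weight by P(Rain=True | Cloudy)\n",
"    weight *= 0.8 if event['Cloudy'] else 0.2\n",
"    return event, weight\n",
"\n",
"ev, w = weighted_sample_sketch()\n",
"```"
]
},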
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"({'Cloudy': True, 'Rain': True, 'Sprinkler': False, 'WetGrass': True}, 0.8)"
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"weighted_sample(sprinkler, dict(Rain=True))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(likelihood_weighting)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**likelihood_weighting** implements the algorithm to solve our inference problem. The code is similar to **rejection_sampling**, but instead of adding one for each sample we add the weight obtained from **weighted_sample**."
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'False: 0.184, True: 0.816'"
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Gibbs Sampling\n",
"\n",
"In likelihood weighting, it is possible to obtain low weights in cases where the evidence variables reside at the bottom of the Bayesian Network. This can happen because influence only propagates downwards in likelihood weighting.\n",
"\n",
"Gibbs Sampling solves this. The implementation of **Figure 14.16** is provided in the function **gibbs_ask** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"psource(gibbs_ask)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In **gibbs_ask** we initialize the non-evidence variables to random values, and then repeatedly pick a non-evidence variable and sample it from **P(Variable | values of all other variables in the current state)**. In practice, we speed this up by using **markov_blanket_sample** instead; this works because the terms not involving the variable cancel in the calculation. The arguments for **gibbs_ask** are similar to those of **likelihood_weighting**."
]
},
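{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy sketch of the idea on a two-node **Cloudy -> Rain** network with evidence **Rain=True** and illustrative probabilities (P(Cloudy)=0.5, P(Rain|Cloudy)=0.8, P(Rain|not Cloudy)=0.2). Here Cloudy's Markov blanket is just Rain, so each step resamples Cloudy from its conditional given the evidence; this is not the aima-python **gibbs_ask**:\n",
"\n",
"```python\n",
"import random\n",
"\n",
"def gibbs_sketch(n):\n",
"    # P(Cloudy=True | Rain=True) = 0.5*0.8 / (0.5*0.8 + 0.5*0.2) = 0.8\n",
"    state = {'Rain': True, 'Cloudy': random.random() < 0.5}\n",
"    count = 0\n",
"    for _ in range(n):\n",
"        state['Cloudy'] = random.random() < 0.8  # resample from the blanket\n",
"        count += state['Cloudy']\n",
"    return count / n\n",
"\n",
"gibbs_est = gibbs_sketch(20000)\n",
"```"
]
},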
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'False: 0.17, True: 0.83'"
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
},
"widgets": {
"state": {},
"version": "1.1.1"
}
},
"nbformat": 4,