# Relevance Reasoning with pyAgrum

Relevance reasoning is the analysis of the influence of evidence on a Bayesian network.

In this notebook, we explain what relevance reasoning is and how to use it with pyAgrum.

In [1]:

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

import time
import os
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt


## Multiple inference

In the well-known 'alarm' BN, how can we analyze the influence on 'VENTALV' of a soft evidence on 'MINVOLSET'?

In [2]:

bn=gum.loadBN("res/alarm.dsl")
gnb.showBN(bn,size="6")


We propose to plot the posterior of 'VENTALV' for the evidence:

$\forall x \in [0,1], e_{MINVOLSET}=[0,x,0.5]$

To do so, we perform a large number of inferences and plot the posteriors.

In [3]:

K=1000
r=range(0,K)
xs=[x/K for x in r]

def getPlot(xs,ys,K,duration):
    p=plot(xs,ys)
    legend(p,[bn['VENTALV'].label(i) for i in range(bn['VENTALV'].domainSize())],loc=7);
    title('VENTALV ({} inferences in {:5.3} s)'.format(K,duration));
    ylabel('posterior Probability');
    xlabel('Evidence on MINVOLSET : [0,x,0.5]');


## First try : classical lazy propagation

In [4]:

tf=time.time()
ys=[]
for x in r:
    ie=gum.LazyPropagation(bn)
    ie.addEvidence('MINVOLSET',[0,x/K,0.5])  # the evidence to analyze
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta1=time.time()-tf
getPlot(xs,ys,K,delta1)


The title of the figure above gives the time taken for those 1000 inferences.

## Second try : classical variable elimination

One can note that we only need one posterior. This is a case where VariableElimination, which computes only the requested posterior, should give better results.

In [5]:

tf=time.time()
ys=[]
for x in r:
    ie=gum.VariableElimination(bn)
    ie.addEvidence('MINVOLSET',[0,x/K,0.5])  # the evidence to analyze
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta2=time.time()-tf
getPlot(xs,ys,K,delta2)


pyAgrum provides the function gum.getPosterior to do the same job more easily.

In [6]:

tf=time.time()
ys=[gum.getPosterior(bn,evs={'MINVOLSET':[0,x/K,0.5]},target='VENTALV').tolist()
    for x in r]
getPlot(xs,ys,K,time.time()-tf)


## Last try : optimized Lazy propagation with relevance reasoning and incremental inference

Optimized inference in aGrUM can use the targets and the evidence to optimize the computations. This is called relevance reasoning.

Moreover, if the values of the evidence change but the structure of the query does not (same target nodes, same hard-evidence nodes, same soft-evidence nodes), inference in aGrUM may re-use some computations from one query to the next. This is called incremental inference.

In [7]:

tf=time.time()
ie=gum.LazyPropagation(bn)
ie.addEvidence('MINVOLSET',[1,1,1])  # register the evidence once; only its value changes below
ys=[]
for x in r:
    ie.chgEvidence('MINVOLSET',[0,x/K,0.5])
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta3=time.time()-tf
getPlot(xs,ys,K,delta3)

In [8]:

print("Mean duration of a lazy propagation            : {:5.3f}ms".format(1000*delta1/K))
print("Mean duration of a variable elimination        : {:5.3f}ms".format(1000*delta2/K))
print("Mean duration of an optimized lazy propagation : {:5.3f}ms".format(1000*delta3/K))

Mean duration of a lazy propagation            : 18.409ms
Mean duration of a variable elimination        : 1.743ms
Mean duration of an optimized lazy propagation : 1.603ms


## How it works

In [9]:

bn=gum.fastBN("Y->X->T1;Z2->X;Z1->X;Z1->T1;Z1->Z3->T2")
ie=gum.LazyPropagation(bn)

gnb.flow.row(bn,bn.cpt("X"),gnb.getJunctionTree(bn),gnb.getJunctionTreeMap(bn,size="3!"),
             captions=["BN","potential","Junction Tree","The map"])


BN: (figure)

potential — the CPT of X, rows indexed by Z1, Z2, Y and columns X=0, X=1 (values come from fastBN's random draw):

| Z1 | Z2 | Y | X=0    | X=1    |
|----|----|---|--------|--------|
| 0  | 0  | 0 | 0.6633 | 0.3367 |
| 0  | 0  | 1 | 0.0150 | 0.9850 |
| 0  | 1  | 0 | 0.9817 | 0.0183 |
| 0  | 1  | 1 | 0.5263 | 0.4737 |
| 1  | 0  | 0 | 0.5559 | 0.4441 |
| 1  | 0  | 1 | 0.2349 | 0.7651 |
| 1  | 1  | 0 | 0.5959 | 0.4041 |
| 1  | 1  | 1 | 0.6790 | 0.3210 |

Junction Tree: (figure)

The map: (figure)

### aGrUM/pyAgrum uses relevance reasoning techniques as much as possible to reduce the complexity of the inference

In [10]:

ie.setEvidence({"X":0})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X","the map"])


Join tree optimized for hard evidence on X

the map
In [11]:

ie.updateEvidence({"X":[0.1,0.9]})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for soft evidence on X","the map"])


Join tree optimized for soft evidence on X

the map
In [12]:

ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})  # 3 is the node id of Z2
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1","the map"])


Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1

the map
In [13]:

ie.setEvidence({"X":0})
ie.setTargets({"T1","Z1"})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X and targets T1,Z1","the map"])


Join tree optimized for hard evidence on X and targets T1,Z1

the map
In [14]:

ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})  # 3 is the node id of Z2

gnb.sideBySide(ie,
               gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X and targets T1,Z1","the map"])


Join tree optimized for hard evidence on X and targets T1,Z1

the map
In [15]:

ie.makeInference()
ie.jointPosterior({"Z2","Z1","T1"})

Out[15]:

| Z2 | T1 | Z1=0   | Z1=1   |
|----|----|--------|--------|
| 0  | 0  | 0.0108 | 0.0222 |
| 0  | 1  | 0.0125 | 0.0033 |
| 1  | 0  | 0.2450 | 0.3668 |
| 1  | 1  | 0.2851 | 0.0544 |
In [16]:

ie.jointPosterior({"Z2","Z1"})

Out[16]:

| Z2 | Z1=0   | Z1=1   |
|----|--------|--------|
| 0  | 0.0233 | 0.0255 |
| 1  | 0.5300 | 0.4212 |
In [17]:

# this will not work: {"Z3","Z1"} is not included in any registered joint target
try:
    ie.jointPosterior({"Z3","Z1"})
except gum.UndefinedElement:
    print("Indeed, there is no joint target which contains {4,5} !")  # 4 and 5 are the node ids of Z1 and Z3

Indeed, there is no joint target which contains {4,5} !

In [18]:

ie.addJointTarget({"Z2","Z1"})
gnb.sideBySide(ie,
               gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=['','JoinTree'])


JoinTree