Inference

Inference is the process of computing new probabilistic information from a Bayesian network and some evidence. aGrUM/pyAgrum mainly focuses on the computation of (joint) posteriors for some variables of the Bayesian network, given soft or hard evidence expressed as likelihoods on some variables. Inference is a hard task (NP-complete). aGrUM/pyAgrum implements exact inference as well as approximate inference, which may converge slowly or even inexactly but which can still be useful in many applications.

Exact Inference

Lazy Propagation

Lazy Propagation is the main exact inference algorithm for classical Bayesian networks in aGrUM/pyAgrum.

class pyAgrum.LazyPropagation(*args)

Class used for Lazy Propagation

LazyPropagation(bn) -> LazyPropagation
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LazyPropagation self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LazyPropagation self, int X)

H(LazyPropagation self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon's entropy of a node given the observation

Return type:

double

I(LazyPropagation self, int X, int Y)
Parameters:
  • X (int) – a node Id
  • Y (int) – another node Id
Returns:

the computed mutual information between X and Y given the observation

Return type:

double

VI(LazyPropagation self, int X, int Y)
Parameters:
  • X (int) – a node Id
  • Y (int) – another node Id
Returns:

variation of information between X and Y

Return type:

double

addAllTargets(LazyPropagation self)

Add all the nodes as targets.

addEvidence(LazyPropagation self, int id, int val)

addEvidence(LazyPropagation self, str nodeName, int val) addEvidence(LazyPropagation self, int id, str val) addEvidence(LazyPropagation self, str nodeName, str val) addEvidence(LazyPropagation self, int id, Vector vals) addEvidence(LazyPropagation self, str nodeName, Vector vals)

Adds new evidence (soft or hard) on a node.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addJointTarget(LazyPropagation self, PyObject * targets)

Add a list of nodes as a new joint target. As a side effect, every node in the list is also added as a marginal target.

Parameters:list – a list of names of nodes
Raises:gum.UndefinedElement – If some node(s) do not belong to the Bayesian network
addTarget(LazyPropagation self, int target)

addTarget(LazyPropagation self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(LazyPropagation self, int id, int val)

chgEvidence(LazyPropagation self, str nodeName, int val) chgEvidence(LazyPropagation self, int id, str val) chgEvidence(LazyPropagation self, str nodeName, str val) chgEvidence(LazyPropagation self, int id, Vector vals) chgEvidence(LazyPropagation self, str nodeName, Vector vals)

Change the value of the evidence already entered on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
eraseAllEvidence(LazyPropagation self)

Removes all the evidence entered into the network.

eraseAllJointTargets(LazyPropagation self)

Clear all previously defined joint targets.

eraseAllMarginalTargets(LazyPropagation self)

Clear all the previously defined marginal targets.

eraseAllTargets(LazyPropagation self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LazyPropagation self, int id)

eraseEvidence(LazyPropagation self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseJointTarget(LazyPropagation self, PyObject * targets)

Remove, if existing, the joint target.

Parameters:

list – a list of names or Ids of nodes

Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
eraseTarget(LazyPropagation self, int target)

eraseTarget(LazyPropagation self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LazyPropagation self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
evidenceJointImpact(LazyPropagation self, PyObject * targets, PyObject * evs)

evidenceJointImpact(LazyPropagation self, Vector_string targets, Vector_string evs) -> Potential

Create a pyAgrum.Potential for P(joint targets|evs) (for all instantiations of targets and evs)

Parameters:
  • targets (list) – a list of node Ids or names
  • evs (set) – a set of node ids or names.
Returns:

a Potential for P(target|evs)

Return type:

pyAgrum.Potential

Raises:

gum.Exception – If the evidence entered into the Bayes net is incompatible (its joint probability is 0)

evidenceProbability(LazyPropagation self)
Returns:the probability of evidence
Return type:double
hardEvidenceNodes(LazyPropagation self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LazyPropagation self, int id)

hasEvidence(LazyPropagation self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node (or, when no node is given, some node) has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LazyPropagation self, int id)

hasHardEvidence(LazyPropagation self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LazyPropagation self, int id)

hasSoftEvidence(LazyPropagation self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

isJointTarget(LazyPropagation self, PyObject * targets)
Parameters:

list – a list of nodes ids or names.

Returns:

True if target is a joint target.

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
isTarget(LazyPropagation self, int variable)

isTarget(LazyPropagation self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
joinTree(LazyPropagation self)
Returns:the current join tree used
Return type:pyAgrum.CliqueGraph
jointMutualInformation(LazyPropagation self, PyObject * targets)

Compute the mutual information of a set of (at least two) variables.

Parameters:list – a list of node ids or names
Returns:the computed joint mutual information
Return type:double

jointPosterior(LazyPropagation self, PyObject * targets)

Compute the joint posterior of a set of nodes.

Parameters:list – the list of nodes whose posterior joint probability is wanted

Warning

The order of the variables in the list (or in the declared joint target) cannot be assumed to be the order used by the Potential.

Returns:a ref to the posterior joint probability of the set of nodes.
Return type:pyAgrum.Potential
Raises:gum.UndefinedElement – If an element of nodes is not in targets
jointTargets(LazyPropagation self)
Returns:the list of target sets
Return type:list
junctionTree(LazyPropagation self)
Returns:the current junction tree
Return type:pyAgrum.CliqueGraph
makeInference(LazyPropagation self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

nbrEvidence(LazyPropagation self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LazyPropagation self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrJointTargets(LazyPropagation self)
Returns:the number of joint targets
Return type:int
nbrSoftEvidence(LazyPropagation self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LazyPropagation self)
Returns:the number of marginal targets
Return type:int
posterior(LazyPropagation self, int var)

posterior(LazyPropagation self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setFindBarrenNodesType(LazyPropagation self, pyAgrum.FindBarrenNodesType type)

Sets how barren nodes are determined.

Barren nodes are unnecessary for probability inference, so they can safely be discarded in this case (type = FIND_BARREN_NODES), which speeds up inference. However, in some cases we do not want to remove barren nodes, typically when answering queries such as Most Probable Explanations (MPE).

0 = FIND_NO_BARREN_NODES 1 = FIND_BARREN_NODES

Parameters:type (int) – the finder type
Raises:gum.InvalidArgument – If type is not implemented
setRelevantPotentialsFinderType(LazyPropagation self, pyAgrum.RelevantPotentialsFinderType type)

Sets how the relevant potentials to combine are determined.

When a clique sends a message to a separator, it first constitutes the set of the potentials it contains and of the potentials contained in the messages it received. If RelevantPotentialsFinderType = FIND_ALL, all these potentials are combined and projected to produce the message sent to the separator. If RelevantPotentialsFinderType = DSEP_BAYESBALL_NODES, only the potentials d-connected to the variables of the separator are kept for combination and projection.

0 = FIND_ALL 1 = DSEP_BAYESBALL_NODES 2 = DSEP_BAYESBALL_POTENTIALS 3 = DSEP_KOLLER_FRIEDMAN_2009

Parameters:type (int) – the finder type
Raises:gum.InvalidArgument – If type is not implemented
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setTriangulation(LazyPropagation self, Triangulation new_triangulation)

Use the given triangulation for building the join tree.

Parameters:new_triangulation (pyAgrum.Triangulation) – the triangulation to use

softEvidenceNodes(LazyPropagation self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LazyPropagation self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network

Shafer Shenoy Inference

class pyAgrum.ShaferShenoyInference(*args)

Class used for Shafer-Shenoy inferences.

ShaferShenoyInference(bn) -> ShaferShenoyInference
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(ShaferShenoyInference self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(ShaferShenoyInference self, int X)

H(ShaferShenoyInference self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon's entropy of a node given the observation

Return type:

double

I(ShaferShenoyInference self, int X, int Y)
Parameters:
  • X (int) – a node Id
  • Y (int) – another node Id
Returns:

the computed mutual information between X and Y given the observation

Return type:

double

VI(ShaferShenoyInference self, int X, int Y)
Parameters:
  • X (int) – a node Id
  • Y (int) – another node Id
Returns:

variation of information between X and Y

Return type:

double

addAllTargets(ShaferShenoyInference self)

Add all the nodes as targets.

addEvidence(ShaferShenoyInference self, int id, int val)

addEvidence(ShaferShenoyInference self, str nodeName, int val) addEvidence(ShaferShenoyInference self, int id, str val) addEvidence(ShaferShenoyInference self, str nodeName, str val) addEvidence(ShaferShenoyInference self, int id, Vector vals) addEvidence(ShaferShenoyInference self, str nodeName, Vector vals)

Adds new evidence (soft or hard) on a node.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addJointTarget(ShaferShenoyInference self, PyObject * targets)

Add a list of nodes as a new joint target. As a side effect, every node in the list is also added as a marginal target.

Parameters:list – a list of names of nodes
Raises:gum.UndefinedElement – If some node(s) do not belong to the Bayesian network
addTarget(ShaferShenoyInference self, int target)

addTarget(ShaferShenoyInference self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(ShaferShenoyInference self, int id, int val)

chgEvidence(ShaferShenoyInference self, str nodeName, int val) chgEvidence(ShaferShenoyInference self, int id, str val) chgEvidence(ShaferShenoyInference self, str nodeName, str val) chgEvidence(ShaferShenoyInference self, int id, Vector vals) chgEvidence(ShaferShenoyInference self, str nodeName, Vector vals)

Change the value of the evidence already entered on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
eraseAllEvidence(ShaferShenoyInference self)

Removes all the evidence entered into the network.

eraseAllJointTargets(ShaferShenoyInference self)

Clear all previously defined joint targets.

eraseAllMarginalTargets(ShaferShenoyInference self)

Clear all the previously defined marginal targets.

eraseAllTargets(ShaferShenoyInference self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(ShaferShenoyInference self, int id)

eraseEvidence(ShaferShenoyInference self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseJointTarget(ShaferShenoyInference self, PyObject * targets)

Remove, if existing, the joint target.

Parameters:

list – a list of names or Ids of nodes

Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
eraseTarget(ShaferShenoyInference self, int target)

eraseTarget(ShaferShenoyInference self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(ShaferShenoyInference self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
evidenceJointImpact(ShaferShenoyInference self, PyObject * targets, PyObject * evs)

evidenceJointImpact(ShaferShenoyInference self, Vector_string targets, Vector_string evs) -> Potential

Create a pyAgrum.Potential for P(joint targets|evs) (for all instantiations of targets and evs)

Parameters:
  • targets (list) – a list of node Ids or names
  • evs (set) – a set of node ids or names.
Returns:

a Potential for P(target|evs)

Return type:

pyAgrum.Potential

Raises:

gum.Exception – If the evidence entered into the Bayes net is incompatible (its joint probability is 0)

evidenceProbability(ShaferShenoyInference self)
Returns:the probability of evidence
Return type:double
hardEvidenceNodes(ShaferShenoyInference self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(ShaferShenoyInference self, int id)

hasEvidence(ShaferShenoyInference self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node (or, when no node is given, some node) has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(ShaferShenoyInference self, int id)

hasHardEvidence(ShaferShenoyInference self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(ShaferShenoyInference self, int id)

hasSoftEvidence(ShaferShenoyInference self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

isJointTarget(ShaferShenoyInference self, PyObject * targets)
Parameters:

list – a list of nodes ids or names.

Returns:

True if target is a joint target.

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
isTarget(ShaferShenoyInference self, int variable)

isTarget(ShaferShenoyInference self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
joinTree(ShaferShenoyInference self)
Returns:the current join tree used
Return type:pyAgrum.CliqueGraph
jointMutualInformation(ShaferShenoyInference self, PyObject * targets)

Compute the mutual information of a set of (at least two) variables.

Parameters:list – a list of node ids or names
Returns:the computed joint mutual information
Return type:double

jointPosterior(ShaferShenoyInference self, PyObject * targets)

Compute the joint posterior of a set of nodes.

Parameters:list – the list of nodes whose posterior joint probability is wanted

Warning

The order of the variables in the list (or in the declared joint target) cannot be assumed to be the order used by the Potential.

Returns:a ref to the posterior joint probability of the set of nodes.
Return type:pyAgrum.Potential
Raises:gum.UndefinedElement – If an element of nodes is not in targets
jointTargets(ShaferShenoyInference self)
Returns:the list of target sets
Return type:list
junctionTree(ShaferShenoyInference self)
Returns:the current junction tree
Return type:pyAgrum.CliqueGraph
makeInference(ShaferShenoyInference self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

nbrEvidence(ShaferShenoyInference self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(ShaferShenoyInference self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrJointTargets(ShaferShenoyInference self)
Returns:the number of joint targets
Return type:int
nbrSoftEvidence(ShaferShenoyInference self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(ShaferShenoyInference self)
Returns:the number of marginal targets
Return type:int
posterior(ShaferShenoyInference self, int var)

posterior(ShaferShenoyInference self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setFindBarrenNodesType(ShaferShenoyInference self, pyAgrum.FindBarrenNodesType type)

Sets how barren nodes are determined.

Barren nodes are unnecessary for probability inference, so they can safely be discarded in this case (type = FIND_BARREN_NODES), which speeds up inference. However, in some cases we do not want to remove barren nodes, typically when answering queries such as Most Probable Explanations (MPE).

0 = FIND_NO_BARREN_NODES 1 = FIND_BARREN_NODES

Parameters:type (int) – the finder type
Raises:gum.InvalidArgument – If type is not implemented
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setTriangulation(ShaferShenoyInference self, Triangulation new_triangulation)

Use the given triangulation for building the join tree.

Parameters:new_triangulation (pyAgrum.Triangulation) – the triangulation to use

softEvidenceNodes(ShaferShenoyInference self)
Returns:the set of nodes with soft evidence
Return type:set
targets(ShaferShenoyInference self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network

Variable Elimination

class pyAgrum.VariableElimination(*args)

Class used for Variable Elimination inference algorithm.

VariableElimination(bn) -> VariableElimination
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(VariableElimination self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(VariableElimination self, int X)

H(VariableElimination self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon's entropy of a node given the observation

Return type:

double

addAllTargets(VariableElimination self)

Add all the nodes as targets.

addEvidence(VariableElimination self, int id, int val)

addEvidence(VariableElimination self, str nodeName, int val) addEvidence(VariableElimination self, int id, str val) addEvidence(VariableElimination self, str nodeName, str val) addEvidence(VariableElimination self, int id, Vector vals) addEvidence(VariableElimination self, str nodeName, Vector vals)

Adds new evidence (soft or hard) on a node.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addJointTarget(VariableElimination self, PyObject * targets)

Add a list of nodes as a new joint target. As a side effect, every node in the list is also added as a marginal target.

Parameters:list – a list of names of nodes
Raises:gum.UndefinedElement – If some node(s) do not belong to the Bayesian network
addTarget(VariableElimination self, int target)

addTarget(VariableElimination self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(VariableElimination self, int id, int val)

chgEvidence(VariableElimination self, str nodeName, int val) chgEvidence(VariableElimination self, int id, str val) chgEvidence(VariableElimination self, str nodeName, str val) chgEvidence(VariableElimination self, int id, Vector vals) chgEvidence(VariableElimination self, str nodeName, Vector vals)

Change the value of the evidence already entered on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
eraseAllEvidence(VariableElimination self)

Removes all the evidence entered into the network.

eraseAllTargets(VariableElimination self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(VariableElimination self, int id)

eraseEvidence(VariableElimination self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseJointTarget(VariableElimination self, PyObject * targets)

Remove, if existing, the joint target.

Parameters:

list – a list of names or Ids of nodes

Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
eraseTarget(VariableElimination self, int target)

eraseTarget(VariableElimination self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(VariableElimination self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
evidenceJointImpact(VariableElimination self, PyObject * targets, PyObject * evs)

Create a pyAgrum.Potential for P(joint targets|evs) (for all instantiations of targets and evs)

Parameters:
  • targets (list) – a list of node Ids or names
  • evs (set) – a set of node ids or names.
Returns:

a Potential for P(target|evs)

Return type:

pyAgrum.Potential

Raises:

gum.Exception – If the evidence entered into the Bayes net is incompatible (its joint probability is 0)

hardEvidenceNodes(VariableElimination self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(VariableElimination self, int id)

hasEvidence(VariableElimination self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(VariableElimination self, int id)

hasHardEvidence(VariableElimination self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(VariableElimination self, int id)

hasSoftEvidence(VariableElimination self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

isJointTarget(VariableElimination self, PyObject * targets)
Parameters:

list – a list of node ids or names.

Returns:

True if target is a joint target.

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
isTarget(VariableElimination self, int variable)

isTarget(VariableElimination self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
jointMutualInformation(VariableElimination self, PyObject * targets)
jointPosterior(VariableElimination self, PyObject * targets)

Compute the joint posterior of a set of nodes.

Parameters:list – the list of nodes whose posterior joint probability is wanted

Warning

The order of the variables given by the list here, or when the joint target is declared, cannot be assumed to be the order used by the Potential.

Returns:a ref to the posterior joint probability of the set of nodes.
Return type:pyAgrum.Potential
Raises:gum.UndefinedElement – If an element of nodes is not in targets
jointTargets(VariableElimination self)
Returns:the list of target sets
Return type:list
junctionTree(VariableElimination self, int id)
Returns:the current junction tree
Return type:pyAgrum.CliqueGraph
makeInference(VariableElimination self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

nbrEvidence(VariableElimination self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(VariableElimination self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrSoftEvidence(VariableElimination self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(VariableElimination self)
Returns:the number of marginal targets
Return type:int
posterior(VariableElimination self, int var)

posterior(VariableElimination self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setFindBarrenNodesType(VariableElimination self, pyAgrum.FindBarrenNodesType type)

Sets how barren nodes are determined.

Barren nodes are unnecessary for probability inference, so they can be safely discarded in this case (type = FIND_BARREN_NODES). This speeds up inference. However, there are some cases in which we do not want to remove barren nodes, typically when we want to answer queries such as Most Probable Explanations (MPE).

0 = FIND_NO_BARREN_NODES
1 = FIND_BARREN_NODES

Parameters:type (int) – the finder type
Raises:gum.InvalidArgument – If type is not implemented
setRelevantPotentialsFinderType(VariableElimination self, pyAgrum.RelevantPotentialsFinderType type)

Sets how the relevant potentials to combine are determined.

When a clique sends a message to a separator, it first constitutes the set of the potentials it contains and of the potentials contained in the messages it has received. If RelevantPotentialsFinderType = FIND_ALL, all these potentials are combined and projected to produce the message sent to the separator. If RelevantPotentialsFinderType = DSEP_BAYESBALL_NODES, then only the potentials d-connected to the variables of the separator are kept for combination and projection.

0 = FIND_ALL
1 = DSEP_BAYESBALL_NODES
2 = DSEP_BAYESBALL_POTENTIALS
3 = DSEP_KOLLER_FRIEDMAN_2009

Parameters:type (int) – the finder type
Raises:gum.InvalidArgument – If type is not implemented
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setTriangulation(VariableElimination self, Triangulation new_triangulation)
softEvidenceNodes(VariableElimination self)
Returns:the set of nodes with soft evidence
Return type:set
targets(VariableElimination self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network

Approximated Inference

Loopy Belief Propagation

class pyAgrum.LoopyBeliefPropagation(bn: pyAgrum.IBayesNet)

Class used for inference using the loopy belief propagation algorithm.

LoopyBeliefPropagation(bn) -> LoopyBeliefPropagation
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LoopyBeliefPropagation self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LoopyBeliefPropagation self, int X)

H(LoopyBeliefPropagation self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(LoopyBeliefPropagation self)

Add all the nodes as targets.

addEvidence(LoopyBeliefPropagation self, int id, int val)

addEvidence(LoopyBeliefPropagation self, str nodeName, int val) addEvidence(LoopyBeliefPropagation self, int id, str val) addEvidence(LoopyBeliefPropagation self, str nodeName, str val) addEvidence(LoopyBeliefPropagation self, int id, Vector vals) addEvidence(LoopyBeliefPropagation self, str nodeName, Vector vals)

Adds new evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(LoopyBeliefPropagation self, int target)

addTarget(LoopyBeliefPropagation self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(LoopyBeliefPropagation self, int id, int val)

chgEvidence(LoopyBeliefPropagation self, str nodeName, int val) chgEvidence(LoopyBeliefPropagation self, int id, str val) chgEvidence(LoopyBeliefPropagation self, str nodeName, str val) chgEvidence(LoopyBeliefPropagation self, int id, Vector vals) chgEvidence(LoopyBeliefPropagation self, str nodeName, Vector vals)

Changes the value of already existing evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentTime(LoopyBeliefPropagation self)
Returns:the current running time in seconds (double)
Return type:double
epsilon(LoopyBeliefPropagation self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(LoopyBeliefPropagation self)

Removes all the evidence entered into the network.

eraseAllTargets(LoopyBeliefPropagation self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LoopyBeliefPropagation self, int id)

eraseEvidence(LoopyBeliefPropagation self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(LoopyBeliefPropagation self, int target)

eraseTarget(LoopyBeliefPropagation self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LoopyBeliefPropagation self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated from the targets, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(LoopyBeliefPropagation self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LoopyBeliefPropagation self, int id)

hasEvidence(LoopyBeliefPropagation self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LoopyBeliefPropagation self, int id)

hasHardEvidence(LoopyBeliefPropagation self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LoopyBeliefPropagation self, int id)

hasSoftEvidence(LoopyBeliefPropagation self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(LoopyBeliefPropagation self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to false
isTarget(LoopyBeliefPropagation self, int variable)

isTarget(LoopyBeliefPropagation self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(LoopyBeliefPropagation self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

maxIter(LoopyBeliefPropagation self)
Returns:the criterion on number of iterations
Return type:int
maxTime(LoopyBeliefPropagation self)
Returns:the timeout (in seconds)
Return type:double
messageApproximationScheme(LoopyBeliefPropagation self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(LoopyBeliefPropagation self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(LoopyBeliefPropagation self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LoopyBeliefPropagation self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(LoopyBeliefPropagation self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(LoopyBeliefPropagation self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LoopyBeliefPropagation self)
Returns:the number of marginal targets
Return type:int
periodSize(LoopyBeliefPropagation self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(LoopyBeliefPropagation self, int var)

posterior(LoopyBeliefPropagation self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(LoopyBeliefPropagation self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(LoopyBeliefPropagation self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(LoopyBeliefPropagation self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(LoopyBeliefPropagation self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(LoopyBeliefPropagation self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(LoopyBeliefPropagation self, bool v)
Parameters:v (bool) – verbosity
softEvidenceNodes(LoopyBeliefPropagation self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LoopyBeliefPropagation self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(LoopyBeliefPropagation self)
Returns:True if the verbosity is enabled
Return type:bool

Sampling

Gibbs Sampling

class pyAgrum.GibbsSampling(bn: pyAgrum.IBayesNet)

Class for Gibbs sampling inference in Bayesian networks.

GibbsSampling(bn) -> GibbsSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(GibbsSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(GibbsSampling self, int X)

H(GibbsSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(GibbsSampling self)

Add all the nodes as targets.

addEvidence(GibbsSampling self, int id, int val)

addEvidence(GibbsSampling self, str nodeName, int val) addEvidence(GibbsSampling self, int id, str val) addEvidence(GibbsSampling self, str nodeName, str val) addEvidence(GibbsSampling self, int id, Vector vals) addEvidence(GibbsSampling self, str nodeName, Vector vals)

Adds new evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(GibbsSampling self, int target)

addTarget(GibbsSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

burnIn(GibbsSampling self)
Returns:the burn-in size, in number of iterations
Return type:int
chgEvidence(GibbsSampling self, int id, int val)

chgEvidence(GibbsSampling self, str nodeName, int val) chgEvidence(GibbsSampling self, int id, str val) chgEvidence(GibbsSampling self, str nodeName, str val) chgEvidence(GibbsSampling self, int id, Vector vals) chgEvidence(GibbsSampling self, str nodeName, Vector vals)

Changes the value of already existing evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(GibbsSampling self, int id)

currentPosterior(GibbsSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • id (int) – the node Id of the node for which we need a posterior probability
  • name (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(GibbsSampling self)
Returns:the current running time in seconds (double)
Return type:double
epsilon(GibbsSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(GibbsSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(GibbsSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(GibbsSampling self, int id)

eraseEvidence(GibbsSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(GibbsSampling self, int target)

eraseTarget(GibbsSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(GibbsSampling self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated from the targets, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(GibbsSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(GibbsSampling self, int id)

hasEvidence(GibbsSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(GibbsSampling self, int id)

hasHardEvidence(GibbsSampling self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(GibbsSampling self, int id)

hasSoftEvidence(GibbsSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(GibbsSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to false
isDrawnAtRandom(GibbsSampling self)
Returns:True if variables are drawn at random
Return type:bool
isTarget(GibbsSampling self, int variable)

isTarget(GibbsSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(GibbsSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

maxIter(GibbsSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(GibbsSampling self)
Returns:the timeout(in seconds)
Return type:double
messageApproximationScheme(GibbsSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(GibbsSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrDrawnVar(GibbsSampling self)
Returns:the number of variables drawn at each iteration
Return type:int
nbrEvidence(GibbsSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(GibbsSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(GibbsSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(GibbsSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(GibbsSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(GibbsSampling self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(GibbsSampling self, int var)

posterior(GibbsSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setBurnIn(GibbsSampling self, int b)
Parameters:b (int) – the burn-in size, in number of iterations
setDrawnAtRandom(GibbsSampling self, bool _atRandom)
Parameters:_atRandom (bool) – indicates if variables should be drawn at random
setEpsilon(GibbsSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(GibbsSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(GibbsSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(GibbsSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setNbrDrawnVar(GibbsSampling self, int _nbr)
Parameters:_nbr (int) – the number of variables to be drawn at each iteration
setPeriodSize(GibbsSampling self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(GibbsSampling self, bool v)
Parameters:v (bool) – verbosity
softEvidenceNodes(GibbsSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(GibbsSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(GibbsSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Monte Carlo Sampling

class pyAgrum.MonteCarloSampling(bn: pyAgrum.IBayesNet)

Class used for the Monte Carlo sampling inference algorithm.

MonteCarloSampling(bn) -> MonteCarloSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(MonteCarloSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(MonteCarloSampling self, int X)

H(MonteCarloSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(MonteCarloSampling self)

Add all the nodes as targets.

addEvidence(MonteCarloSampling self, int id, int val)

addEvidence(MonteCarloSampling self, str nodeName, int val) addEvidence(MonteCarloSampling self, int id, str val) addEvidence(MonteCarloSampling self, str nodeName, str val) addEvidence(MonteCarloSampling self, int id, Vector vals) addEvidence(MonteCarloSampling self, str nodeName, Vector vals)

Adds new evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(MonteCarloSampling self, int target)

addTarget(MonteCarloSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(MonteCarloSampling self, int id, int val)

chgEvidence(MonteCarloSampling self, str nodeName, int val) chgEvidence(MonteCarloSampling self, int id, str val) chgEvidence(MonteCarloSampling self, str nodeName, str val) chgEvidence(MonteCarloSampling self, int id, Vector vals) chgEvidence(MonteCarloSampling self, str nodeName, Vector vals)

Changes the value of already existing evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(MonteCarloSampling self, int id)

currentPosterior(MonteCarloSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • id (int) – the node Id of the node for which we need a posterior probability
  • name (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(MonteCarloSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(MonteCarloSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(MonteCarloSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(MonteCarloSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(MonteCarloSampling self, int id)

eraseEvidence(MonteCarloSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(MonteCarloSampling self, int target)

eraseTarget(MonteCarloSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(MonteCarloSampling self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some nodes of evs are d-separated from the targets, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(MonteCarloSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(MonteCarloSampling self, int id)

hasEvidence(MonteCarloSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(MonteCarloSampling self, int id)

hasHardEvidence(MonteCarloSampling self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(MonteCarloSampling self, int id)

hasSoftEvidence(MonteCarloSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(MonteCarloSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to false
isTarget(MonteCarloSampling self, int variable)

isTarget(MonteCarloSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(MonteCarloSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

maxIter(MonteCarloSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(MonteCarloSampling self)
Returns:the timeout(in seconds)
Return type:double
messageApproximationScheme(MonteCarloSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(MonteCarloSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(MonteCarloSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(MonteCarloSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(MonteCarloSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(MonteCarloSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(MonteCarloSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(MonteCarloSampling self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(MonteCarloSampling self, int var)

posterior(MonteCarloSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(MonteCarloSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(MonteCarloSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(MonteCarloSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(MonteCarloSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(MonteCarloSampling self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(MonteCarloSampling self, bool v)
Parameters:v (bool) – verbosity
softEvidenceNodes(MonteCarloSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(MonteCarloSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) when the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(MonteCarloSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Weighted Sampling

class pyAgrum.WeightedSampling(bn: pyAgrum.IBayesNet)

Class used for the Weighted Sampling inference algorithm.

WeightedSampling(bn) -> WeightedSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(WeightedSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(WeightedSampling self, int X)

H(WeightedSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(WeightedSampling self)

Add all the nodes as targets.

addEvidence(WeightedSampling self, int id, int val)

addEvidence(WeightedSampling self, str nodeName, int val)
addEvidence(WeightedSampling self, int id, str val)
addEvidence(WeightedSampling self, str nodeName, str val)
addEvidence(WeightedSampling self, int id, Vector vals)
addEvidence(WeightedSampling self, str nodeName, Vector vals)

Adds new evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(WeightedSampling self, int target)

addTarget(WeightedSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(WeightedSampling self, int id, int val)

chgEvidence(WeightedSampling self, str nodeName, int val)
chgEvidence(WeightedSampling self, int id, str val)
chgEvidence(WeightedSampling self, str nodeName, str val)
chgEvidence(WeightedSampling self, int id, Vector vals)
chgEvidence(WeightedSampling self, str nodeName, Vector vals)

Change the value of an already existing evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(WeightedSampling self, int id)

currentPosterior(WeightedSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(WeightedSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(WeightedSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(WeightedSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(WeightedSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(WeightedSampling self, int id)

eraseEvidence(WeightedSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(WeightedSampling self, int target)

eraseTarget(WeightedSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(WeightedSampling self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some nodes of evs are d-separated from the targets, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(WeightedSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(WeightedSampling self, int id)

hasEvidence(WeightedSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(WeightedSampling self, int id)

hasHardEvidence(WeightedSampling self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(WeightedSampling self, int id)

hasSoftEvidence(WeightedSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(WeightedSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to false
isTarget(WeightedSampling self, int variable)

isTarget(WeightedSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(WeightedSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

maxIter(WeightedSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(WeightedSampling self)
Returns:the timeout(in seconds)
Return type:double
messageApproximationScheme(WeightedSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(WeightedSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(WeightedSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(WeightedSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(WeightedSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(WeightedSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(WeightedSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(WeightedSampling self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(WeightedSampling self, int var)

posterior(WeightedSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(WeightedSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(WeightedSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(WeightedSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(WeightedSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(WeightedSampling self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(WeightedSampling self, bool v)
Parameters:v (bool) – verbosity
softEvidenceNodes(WeightedSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(WeightedSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) when the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(WeightedSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Importance Sampling

class pyAgrum.ImportanceSampling(bn: pyAgrum.IBayesNet)

Class used for inferences using the Importance Sampling algorithm.

ImportanceSampling(bn) -> ImportanceSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(ImportanceSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(ImportanceSampling self, int X)

H(ImportanceSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(ImportanceSampling self)

Add all the nodes as targets.

addEvidence(ImportanceSampling self, int id, int val)

addEvidence(ImportanceSampling self, str nodeName, int val)
addEvidence(ImportanceSampling self, int id, str val)
addEvidence(ImportanceSampling self, str nodeName, str val)
addEvidence(ImportanceSampling self, int id, Vector vals)
addEvidence(ImportanceSampling self, str nodeName, Vector vals)

Adds new evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(ImportanceSampling self, int target)

addTarget(ImportanceSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(ImportanceSampling self, int id, int val)

chgEvidence(ImportanceSampling self, str nodeName, int val)
chgEvidence(ImportanceSampling self, int id, str val)
chgEvidence(ImportanceSampling self, str nodeName, str val)
chgEvidence(ImportanceSampling self, int id, Vector vals)
chgEvidence(ImportanceSampling self, str nodeName, Vector vals)

Change the value of an already existing evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(ImportanceSampling self, int id)

currentPosterior(ImportanceSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(ImportanceSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(ImportanceSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(ImportanceSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(ImportanceSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(ImportanceSampling self, int id)

eraseEvidence(ImportanceSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(ImportanceSampling self, int target)

eraseTarget(ImportanceSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(ImportanceSampling self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some nodes of evs are d-separated from the targets, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(ImportanceSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(ImportanceSampling self, int id)

hasEvidence(ImportanceSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the given node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(ImportanceSampling self, int id)

hasHardEvidence(ImportanceSampling self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(ImportanceSampling self, int id)

hasSoftEvidence(ImportanceSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(ImportanceSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to false
isTarget(ImportanceSampling self, int variable)

isTarget(ImportanceSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(ImportanceSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

maxIter(ImportanceSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(ImportanceSampling self)
Returns:the timeout(in seconds)
Return type:double
messageApproximationScheme(ImportanceSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(ImportanceSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(ImportanceSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(ImportanceSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(ImportanceSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(ImportanceSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(ImportanceSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(ImportanceSampling self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(ImportanceSampling self, int var)

posterior(ImportanceSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(ImportanceSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(ImportanceSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(ImportanceSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(ImportanceSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(ImportanceSampling self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(ImportanceSampling self, bool v)
Parameters:v (bool) – verbosity
softEvidenceNodes(ImportanceSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(ImportanceSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) when the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(ImportanceSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Loopy sampling

Loopy Gibbs Sampling

class pyAgrum.LoopyGibbsSampling(bn: pyAgrum.IBayesNet)

Class used for inferences using a loopy version of Gibbs sampling.

LoopyGibbsSampling(bn) -> LoopyGibbsSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LoopyGibbsSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LoopyGibbsSampling self, int X)

H(LoopyGibbsSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(LoopyGibbsSampling self)

Add all the nodes as targets.

addEvidence(LoopyGibbsSampling self, int id, int val)

addEvidence(LoopyGibbsSampling self, str nodeName, int val)
addEvidence(LoopyGibbsSampling self, int id, str val)
addEvidence(LoopyGibbsSampling self, str nodeName, str val)
addEvidence(LoopyGibbsSampling self, int id, Vector vals)
addEvidence(LoopyGibbsSampling self, str nodeName, Vector vals)

Adds new evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(LoopyGibbsSampling self, int target)

addTarget(LoopyGibbsSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

burnIn(LoopyGibbsSampling self)
Returns:the size of the burn-in, in number of iterations
Return type:int
chgEvidence(LoopyGibbsSampling self, int id, int val)

chgEvidence(LoopyGibbsSampling self, str nodeName, int val)
chgEvidence(LoopyGibbsSampling self, int id, str val)
chgEvidence(LoopyGibbsSampling self, str nodeName, str val)
chgEvidence(LoopyGibbsSampling self, int id, Vector vals)
chgEvidence(LoopyGibbsSampling self, str nodeName, Vector vals)

Change the value of an already existing evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val (int) – a node value
  • val (str) – the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(LoopyGibbsSampling self, int id)

currentPosterior(LoopyGibbsSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(LoopyGibbsSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(LoopyGibbsSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(LoopyGibbsSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(LoopyGibbsSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LoopyGibbsSampling self, int id)

eraseEvidence(LoopyGibbsSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(LoopyGibbsSampling self, int target)

eraseTarget(LoopyGibbsSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LoopyGibbsSampling self, PyObject * target, PyObject * evs)

Creates a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(LoopyGibbsSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LoopyGibbsSampling self, int id)

hasEvidence(LoopyGibbsSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LoopyGibbsSampling self, str nodeName)
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LoopyGibbsSampling self, int id)

hasSoftEvidence(LoopyGibbsSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(LoopyGibbsSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to False
isDrawnAtRandom(LoopyGibbsSampling self)
Returns:True if variables are drawn at random
Return type:bool
isTarget(LoopyGibbsSampling self, int variable)

isTarget(LoopyGibbsSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(LoopyGibbsSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

makeInference_(LoopyGibbsSampling self)
maxIter(LoopyGibbsSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(LoopyGibbsSampling self)
Returns:the timeout (in seconds)
Return type:double
messageApproximationScheme(LoopyGibbsSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(LoopyGibbsSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrDrawnVar(LoopyGibbsSampling self)
Returns:the number of variables drawn at each iteration
Return type:int
nbrEvidence(LoopyGibbsSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LoopyGibbsSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(LoopyGibbsSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(LoopyGibbsSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LoopyGibbsSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(LoopyGibbsSampling self)
Returns:the number of samples between two checks of the stopping criteria
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(LoopyGibbsSampling self, int var)

posterior(LoopyGibbsSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setBurnIn(LoopyGibbsSampling self, int b)
Parameters:b (int) – the size of the burn-in, in number of iterations
setDrawnAtRandom(LoopyGibbsSampling self, bool _atRandom)
Parameters:_atRandom (bool) – indicates if variables should be drawn at random
setEpsilon(LoopyGibbsSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erases all evidence and applies addEvidence(key,value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(LoopyGibbsSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(LoopyGibbsSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(LoopyGibbsSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setNbrDrawnVar(LoopyGibbsSampling self, int _nbr)
Parameters:_nbr (int) – the number of variables to be drawn at each iteration
setPeriodSize(LoopyGibbsSampling self, int p)
Parameters:p (int) – the number of samples between two checks of the stopping criteria
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(LoopyGibbsSampling self, bool v)
Parameters:v (bool) – verbosity
setVirtualLBPSize(LoopyGibbsSampling self, double vlbpsize)
Parameters:vlbpsize (double) – the size of the virtual LBP
softEvidenceNodes(LoopyGibbsSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LoopyGibbsSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Applies chgEvidence(key,value) for every pair in evidces (or addEvidence(key,value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(LoopyGibbsSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Loopy Monte Carlo Sampling

class pyAgrum.LoopyMonteCarloSampling(bn: pyAgrum.IBayesNet)

Class used for inferences using a loopy version of Monte Carlo sampling.

LoopyMonteCarloSampling(bn) -> LoopyMonteCarloSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LoopyMonteCarloSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LoopyMonteCarloSampling self, int X)

H(LoopyMonteCarloSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon's entropy of a node given the observation

Return type:

double

addAllTargets(LoopyMonteCarloSampling self)

Add all the nodes as targets.

addEvidence(LoopyMonteCarloSampling self, int id, int val)

addEvidence(LoopyMonteCarloSampling self, str nodeName, int val) addEvidence(LoopyMonteCarloSampling self, int id, str val) addEvidence(LoopyMonteCarloSampling self, str nodeName, str val) addEvidence(LoopyMonteCarloSampling self, int id, Vector vals) addEvidence(LoopyMonteCarloSampling self, str nodeName, Vector vals)

Adds new evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(LoopyMonteCarloSampling self, int target)

addTarget(LoopyMonteCarloSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(LoopyMonteCarloSampling self, int id, int val)

chgEvidence(LoopyMonteCarloSampling self, str nodeName, int val) chgEvidence(LoopyMonteCarloSampling self, int id, str val) chgEvidence(LoopyMonteCarloSampling self, str nodeName, str val) chgEvidence(LoopyMonteCarloSampling self, int id, Vector vals) chgEvidence(LoopyMonteCarloSampling self, str nodeName, Vector vals)

Changes the value of already existing evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(LoopyMonteCarloSampling self, int id)

currentPosterior(LoopyMonteCarloSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(LoopyMonteCarloSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(LoopyMonteCarloSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(LoopyMonteCarloSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(LoopyMonteCarloSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LoopyMonteCarloSampling self, int id)

eraseEvidence(LoopyMonteCarloSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(LoopyMonteCarloSampling self, int target)

eraseTarget(LoopyMonteCarloSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LoopyMonteCarloSampling self, PyObject * target, PyObject * evs)

Creates a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(LoopyMonteCarloSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LoopyMonteCarloSampling self, int id)

hasEvidence(LoopyMonteCarloSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LoopyMonteCarloSampling self, str nodeName)
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LoopyMonteCarloSampling self, int id)

hasSoftEvidence(LoopyMonteCarloSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(LoopyMonteCarloSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to False
isTarget(LoopyMonteCarloSampling self, int variable)

isTarget(LoopyMonteCarloSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(LoopyMonteCarloSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

makeInference_(LoopyMonteCarloSampling self)
maxIter(LoopyMonteCarloSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(LoopyMonteCarloSampling self)
Returns:the timeout (in seconds)
Return type:double
messageApproximationScheme(LoopyMonteCarloSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(LoopyMonteCarloSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(LoopyMonteCarloSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LoopyMonteCarloSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(LoopyMonteCarloSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(LoopyMonteCarloSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LoopyMonteCarloSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(LoopyMonteCarloSampling self)
Returns:the number of samples between two checks of the stopping criteria
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(LoopyMonteCarloSampling self, int var)

posterior(LoopyMonteCarloSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(LoopyMonteCarloSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erases all evidence and applies addEvidence(key,value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(LoopyMonteCarloSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(LoopyMonteCarloSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(LoopyMonteCarloSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(LoopyMonteCarloSampling self, int p)
Parameters:p (int) – the number of samples between two checks of the stopping criteria
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(LoopyMonteCarloSampling self, bool v)
Parameters:v (bool) – verbosity
setVirtualLBPSize(LoopyMonteCarloSampling self, double vlbpsize)
Parameters:vlbpsize (double) – the size of the virtual LBP
softEvidenceNodes(LoopyMonteCarloSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LoopyMonteCarloSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Applies chgEvidence(key,value) for every pair in evidces (or addEvidence(key,value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(LoopyMonteCarloSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Loopy Weighted Sampling

class pyAgrum.LoopyWeightedSampling(bn: pyAgrum.IBayesNet)

Class used for inferences using a loopy version of weighted sampling.

LoopyWeightedSampling(bn) -> LoopyWeightedSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LoopyWeightedSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LoopyWeightedSampling self, int X)

H(LoopyWeightedSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon's entropy of a node given the observation

Return type:

double

addAllTargets(LoopyWeightedSampling self)

Add all the nodes as targets.

addEvidence(LoopyWeightedSampling self, int id, int val)

addEvidence(LoopyWeightedSampling self, str nodeName, int val) addEvidence(LoopyWeightedSampling self, int id, str val) addEvidence(LoopyWeightedSampling self, str nodeName, str val) addEvidence(LoopyWeightedSampling self, int id, Vector vals) addEvidence(LoopyWeightedSampling self, str nodeName, Vector vals)

Adds new evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(LoopyWeightedSampling self, int target)

addTarget(LoopyWeightedSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(LoopyWeightedSampling self, int id, int val)

chgEvidence(LoopyWeightedSampling self, str nodeName, int val) chgEvidence(LoopyWeightedSampling self, int id, str val) chgEvidence(LoopyWeightedSampling self, str nodeName, str val) chgEvidence(LoopyWeightedSampling self, int id, Vector vals) chgEvidence(LoopyWeightedSampling self, str nodeName, Vector vals)

Changes the value of already existing evidence on a node (may be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(LoopyWeightedSampling self, int id)

currentPosterior(LoopyWeightedSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

currentTime(LoopyWeightedSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(LoopyWeightedSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(LoopyWeightedSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(LoopyWeightedSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LoopyWeightedSampling self, int id)

eraseEvidence(LoopyWeightedSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(LoopyWeightedSampling self, int target)

eraseTarget(LoopyWeightedSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LoopyWeightedSampling self, PyObject * target, PyObject * evs)

Creates a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of target ids or names.
  • evs (set) – a set of node ids or names.

Warning

If some evs are d-separated, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(LoopyWeightedSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LoopyWeightedSampling self, int id)

hasEvidence(LoopyWeightedSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LoopyWeightedSampling self, str nodeName)
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LoopyWeightedSampling self, int id)

hasSoftEvidence(LoopyWeightedSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if the node has received soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(LoopyWeightedSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme has not been performed or if verbosity is set to False
isTarget(LoopyWeightedSampling self, int variable)

isTarget(LoopyWeightedSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(LoopyWeightedSampling self)

Perform the heavy computations needed to compute the targets’ posteriors

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

makeInference_(LoopyWeightedSampling self)
maxIter(LoopyWeightedSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(LoopyWeightedSampling self)
Returns:the timeout (in seconds)
Return type:double
messageApproximationScheme(LoopyWeightedSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(LoopyWeightedSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(LoopyWeightedSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LoopyWeightedSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(LoopyWeightedSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(LoopyWeightedSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LoopyWeightedSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(LoopyWeightedSampling self)
Returns:the number of samples between two checks of the stopping criteria
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(LoopyWeightedSampling self, int var)

posterior(LoopyWeightedSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a ref to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(LoopyWeightedSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erases all evidence and applies addEvidence(key,value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(LoopyWeightedSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(LoopyWeightedSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(LoopyWeightedSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(LoopyWeightedSampling self, int p)
Parameters:p (int) – the number of samples between two checks of the stopping criteria
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add the ones in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(LoopyWeightedSampling self, bool v)
Parameters:v (bool) – verbosity
setVirtualLBPSize(LoopyWeightedSampling self, double vlbpsize)
Parameters:vlbpsize (double) – the size of the virtual LBP
softEvidenceNodes(LoopyWeightedSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LoopyWeightedSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Applies chgEvidence(key,value) for every pair in evidces (or addEvidence(key,value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidence

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(LoopyWeightedSampling self)
Returns:True if the verbosity is enabled
Return type:bool

Loopy Importance Sampling

class pyAgrum.LoopyImportanceSampling(bn: pyAgrum.IBayesNet)

Class used for inferences using a loopy version of importance sampling.

LoopyImportanceSampling(bn) -> LoopyImportanceSampling
Parameters:
  • bn (pyAgrum.BayesNet) – a Bayesian network
BN(LoopyImportanceSampling self)
Returns:A constant reference over the IBayesNet referenced by this class.
Return type:pyAgrum.IBayesNet
Raises:gum.UndefinedElement – If no Bayes net has been assigned to the inference.
H(LoopyImportanceSampling self, int X)

H(LoopyImportanceSampling self, str nodeName) -> double

Parameters:
  • X (int) – a node Id
  • nodeName (str) – a node name
Returns:

the computed Shannon’s entropy of a node given the observation

Return type:

double

addAllTargets(LoopyImportanceSampling self)

Add all the nodes as targets.

addEvidence(LoopyImportanceSampling self, int id, int val)

addEvidence(LoopyImportanceSampling self, str nodeName, int val) addEvidence(LoopyImportanceSampling self, int id, str val) addEvidence(LoopyImportanceSampling self, str nodeName, str val) addEvidence(LoopyImportanceSampling self, int id, Vector vals) addEvidence(LoopyImportanceSampling self, str nodeName, Vector vals)

Adds a new evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node already has an evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
addTarget(LoopyImportanceSampling self, int target)

addTarget(LoopyImportanceSampling self, str nodeName)

Add a marginal target to the list of targets.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.UndefinedElement – If target is not a NodeId in the Bayes net

chgEvidence(LoopyImportanceSampling self, int id, int val)

chgEvidence(LoopyImportanceSampling self, str nodeName, int val) chgEvidence(LoopyImportanceSampling self, int id, str val) chgEvidence(LoopyImportanceSampling self, str nodeName, str val) chgEvidence(LoopyImportanceSampling self, int id, Vector vals) chgEvidence(LoopyImportanceSampling self, str nodeName, Vector vals)

Change the value of an already existing evidence on a node (might be soft or hard).

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
  • val – (int) a node value
  • val – (str) the label of the node value
  • vals (list) – a list of values
Raises:
  • gum.InvalidArgument – If the node does not already have an evidence
  • gum.InvalidArgument – If val is not a value for the node
  • gum.InvalidArgument – If the size of vals is different from the domain size of the node
  • gum.FatalError – If vals is a vector of 0s
  • gum.UndefinedElement – If the node does not belong to the Bayesian network
currentPosterior(LoopyImportanceSampling self, int id)

currentPosterior(LoopyImportanceSampling self, str name) -> Potential

Computes and returns the current posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a reference to the current posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If the node is not in the set of targets

currentTime(LoopyImportanceSampling self)
Returns:the current running time in seconds
Return type:double
epsilon(LoopyImportanceSampling self)
Returns:the value of epsilon
Return type:double
eraseAllEvidence(LoopyImportanceSampling self)

Removes all the evidence entered into the network.

eraseAllTargets(LoopyImportanceSampling self)

Clear all previously defined targets (marginal and joint targets).

As a result, no posterior can be computed (since we can only compute the posteriors of the marginal or joint targets that have been added by the user).

eraseEvidence(LoopyImportanceSampling self, int id)

eraseEvidence(LoopyImportanceSampling self, str nodeName)

Remove the evidence, if any, corresponding to the node Id or name.

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Raises:

gum.IndexError – If the node does not belong to the Bayesian network

eraseTarget(LoopyImportanceSampling self, int target)

eraseTarget(LoopyImportanceSampling self, str nodeName)

Remove, if existing, the marginal target.

Parameters:
  • target (int) – a node Id
  • nodeName (str) – a node name
Raises:
  • gum.IndexError – If one of the nodes does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
evidenceImpact(LoopyImportanceSampling self, PyObject * target, PyObject * evs)

Create a pyAgrum.Potential for P(target|evs) (for all instantiations of target and evs)

Parameters:
  • target (set) – a set of targets ids or names.
  • evs (set) – a set of nodes ids or names.

Warning

If some evs are d-separated from the target, they are not included in the Potential.

Returns:a Potential for P(targets|evs)
Return type:pyAgrum.Potential
hardEvidenceNodes(LoopyImportanceSampling self)
Returns:the set of nodes with hard evidence
Return type:set
hasEvidence(LoopyImportanceSampling self, int id)

hasEvidence(LoopyImportanceSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if some node (or the one given in parameter) has received evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasHardEvidence(LoopyImportanceSampling self, int id)

hasHardEvidence(LoopyImportanceSampling self, str nodeName) -> bool
Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if node has received a hard evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

hasSoftEvidence(LoopyImportanceSampling self, int id)

hasSoftEvidence(LoopyImportanceSampling self, str nodeName) -> bool

Parameters:
  • id (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if node has received a soft evidence

Return type:

bool

Raises:

gum.IndexError – If the node does not belong to the Bayesian network

history(LoopyImportanceSampling self)
Returns:the scheme history
Return type:tuple
Raises:gum.OperationNotAllowed – If the scheme was not performed or if verbosity is set to false
isTarget(LoopyImportanceSampling self, int variable)

isTarget(LoopyImportanceSampling self, str nodeName) -> bool

Parameters:
  • variable (int) – a node Id
  • nodeName (str) – a node name
Returns:

True if variable is a (marginal) target

Return type:

bool

Raises:
  • gum.IndexError – If the node does not belong to the Bayesian network
  • gum.UndefinedElement – If node Id is not in the Bayesian network
makeInference(LoopyImportanceSampling self)

Perform the heavy computations needed to compute the targets’ posteriors.

In a Junction tree propagation scheme, for instance, the heavy computations are those of the messages sent in the JT. This is precisely what makeInference should compute. Later, the computations of the posteriors can be done ‘lightly’ by multiplying and projecting those messages.

makeInference_(LoopyImportanceSampling self)
maxIter(LoopyImportanceSampling self)
Returns:the criterion on number of iterations
Return type:int
maxTime(LoopyImportanceSampling self)
Returns:the timeout(in seconds)
Return type:double
messageApproximationScheme(LoopyImportanceSampling self)
Returns:the approximation scheme message
Return type:str
minEpsilonRate(LoopyImportanceSampling self)
Returns:the value of the minimal epsilon rate
Return type:double
nbrEvidence(LoopyImportanceSampling self)
Returns:the number of evidence entered into the Bayesian network
Return type:int
nbrHardEvidence(LoopyImportanceSampling self)
Returns:the number of hard evidence entered into the Bayesian network
Return type:int
nbrIterations(LoopyImportanceSampling self)
Returns:the number of iterations
Return type:int
nbrSoftEvidence(LoopyImportanceSampling self)
Returns:the number of soft evidence entered into the Bayesian network
Return type:int
nbrTargets(LoopyImportanceSampling self)
Returns:the number of marginal targets
Return type:int
periodSize(LoopyImportanceSampling self)
Returns:the number of samples between two stopping tests
Return type:int
Raises:gum.OutOfLowerBound – If p<1
posterior(LoopyImportanceSampling self, int var)

posterior(LoopyImportanceSampling self, str nodeName) -> Potential

Computes and returns the posterior of a node.

Parameters:
  • var (int) – the node Id of the node for which we need a posterior probability
  • nodeName (str) – the node name of the node for which we need a posterior probability
Returns:

a reference to the posterior probability of the node

Return type:

pyAgrum.Potential

Raises:

gum.UndefinedElement – If an element of nodes is not in targets

setEpsilon(LoopyImportanceSampling self, double eps)
Parameters:eps (double) – the epsilon we want to use
Raises:gum.OutOfLowerBound – If eps<0
setEvidence(evidces)

Erase all the evidence and apply addEvidence(key, value) for every pair in evidces.

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
setMaxIter(LoopyImportanceSampling self, int max)
Parameters:max (int) – the maximum number of iterations
Raises:gum.OutOfLowerBound – If max <= 1
setMaxTime(LoopyImportanceSampling self, double timeout)
Parameters:timeout (double) – stopping criterion on timeout (in seconds)
Raises:gum.OutOfLowerBound – If timeout<=0.0
setMinEpsilonRate(LoopyImportanceSampling self, double rate)
Parameters:rate (double) – the minimal epsilon rate
setPeriodSize(LoopyImportanceSampling self, int p)
Parameters:p (int) – number of samples between two stopping tests
Raises:gum.OutOfLowerBound – If p<1
setTargets(targets)

Remove all the targets and add those given in parameter.

Parameters:targets (set) – a set of targets
Raises:gum.UndefinedElement – If one target is not in the Bayes net
setVerbosity(LoopyImportanceSampling self, bool v)
Parameters:v (bool) – verbosity
setVirtualLBPSize(LoopyImportanceSampling self, double vlbpsize)
Parameters:vlbpsize (double) – the size of the virtual LBP
softEvidenceNodes(LoopyImportanceSampling self)
Returns:the set of nodes with soft evidence
Return type:set
targets(LoopyImportanceSampling self)
Returns:the list of marginal targets
Return type:list
updateEvidence(evidces)

Apply chgEvidence(key, value) for every pair in evidces (or addEvidence(key, value) if the node has no evidence yet).

Parameters:

evidces (dict) – a dict of evidences

Raises:
  • gum.InvalidArgument – If one value is not a value for the node
  • gum.InvalidArgument – If the size of a value is different from the domain size of the node
  • gum.FatalError – If one value is a vector of 0s
  • gum.UndefinedElement – If one node does not belong to the Bayesian network
verbosity(LoopyImportanceSampling self)
Returns:True if the verbosity is enabled
Return type:bool