
Fundamental components

  • Graph manipulation
    • Edges and Arcs
      • Arc
      • Edge
    • Directed Graphs
      • Digraph
      • Directed Acyclic Graph
    • Undirected Graphs
      • UndiGraph
      • Clique Graph
    • Mixed Graph
  • Random Variables
    • Common API for Random Discrete Variables
    • Concrete classes for Random Discrete Variables
      • LabelizedVariable
      • DiscretizedVariable
      • IntegerVariable
      • RangeVariable
  • Potential and Instantiation
    • Instantiation
    • Potential

Graphical Models

  • Bayesian network
    • Model
    • Tools for Bayesian networks
      • Generation of databases
      • Comparison of Bayesian networks
      • Explanation and analysis
      • Fragment of Bayesian networks
    • Inference
    • Exact Inference
      • Lazy Propagation
      • Shafer Shenoy Inference
      • Variable Elimination
    • Approximate Inference
      • Loopy Belief Propagation
      • Sampling
        • Gibbs Sampling
        • Monte Carlo Sampling
        • Weighted Sampling
        • Importance Sampling
      • Loopy sampling
        • Loopy Gibbs Sampling
        • Loopy Monte Carlo Sampling
        • Loopy Weighted Sampling
        • Loopy Importance Sampling
    • Learning
  • Influence Diagram
    • Model for Decision in PGM
    • Inference for Influence Diagram
  • Credal Network
    • CN Model
    • CN Inference
  • Markov Network
    • Undirected Graphical Model
    • Inference in Markov Networks
      • Shafer Shenoy Inference in Markov Network
  • Probabilistic Relational Models

Causality

  • pyAgrum.causal documentation
    • Causal Model
    • Causal Formula
    • Causal Inference
    • Other functions
    • Abstract Syntax Tree for Do-Calculus
    • Exceptions
    • Notebook’s tools for causality

scikit-learn-like BN Classifiers

  • pyAgrum.skbn documentation
    • Classifier using Bayesian networks
    • Discretizer for Bayesian networks

pyAgrum.lib modules

  • pyAgrum.lib.notebook
  • pyAgrum.lib.image
  • pyAgrum.lib.explain
  • pyAgrum.lib.dynamicBN
  • other pyAgrum.lib modules

Miscellaneous

  • Functions from pyAgrum
    • Useful functions in pyAgrum
    • Quick specification of (randomly parameterized) graphical models
    • Input/Output for Bayesian networks
    • Input/Output for Markov networks
    • Input for influence diagram
  • Other functions from aGrUM
    • Listeners
      • LoadListener
      • StructuralListener
      • ApproximationSchemeListener
      • DatabaseGenerationListener
    • Random functions
    • OMP functions
  • Exceptions from aGrUM

Customizing pyAgrum

  • Configuration for pyAgrum

Notebooks

  • Tutorials on pyAgrum
    • Tutorial pyAgrum
      • Creating your first Bayesian network with pyAgrum
        • Import the pyAgrum package
        • Create the network topology
        • Create the probability tables
        • Input/output
      • Inference in Bayesian networks
        • Inference without evidence
        • Inference with evidence
        • Inference in the whole Bayes net
      • Testing independence in Bayesian networks
      • Conditional Independence
        • Directly
        • Markov Blanket
        • Minimal conditioning set and evidence impact using probabilistic inference
        • PS- the complete code to create the first image
        • PS2- a second glimpse of gum.config
    • Using pyAgrum
      • Initialisation
      • Visualisation and inspection
      • Results of inference
      • Using inference as a function
      • BN as a classifier
        • Generation of databases
        • Probabilistic classifier using a BN
      • Fast prototyping for BNs
      • Joint posterior, impact of multiple evidence
  • Inference in Bayesian networks
    • Probabilistic Inference with pyAgrum
      • Basic inference and display
      • Showing the information graph
      • Exploring the junction tree
        • Introspection in junction trees
      • Introspecting junction trees and friends
        • Junction tree from graphs (using uniform domainSize)
        • Using partial order to specify the elimination order
    • Relevance Reasoning with pyAgrum
      • Multiple inference
      • First try: classical Lazy Propagation
      • Second try: classical Variable Elimination
      • Last try: optimized Lazy Propagation with relevance reasoning and incremental inference
      • How it works
    • Some other features in Bayesian inference
      • Evidence impact
      • Evidence Joint Impact
    • Approximate inference in aGrUM (pyAgrum)
      • First, an exact inference.
        • Gibbs Inference
      • Gibbs inference with default parameters
      • Changing parameters
      • Looking at the convergence
        • Importance Sampling
        • Loopy Gibbs Sampling
        • Comparison of approximate inference
      • Inference stopped by epsilon
      • Inference stopped by time
      • Inference with Evidence
    • Different sampling inference
      • First, some helpers
      • Exact inference.
  • Learning Bayesian networks
    • Learning the structure of a Bayesian network
      • Generating the database from a BN
        • Type induction
      • Parameters learning from the database
      • Structural learning of a BN from the database
        • Different learning algorithms
      • Following the learning curve
      • Customizing the learning algorithms
        • 1. Learn a tree?
        • 2. With prior structural knowledge
        • 3. Changing the scores
        • 4. Comparing BNs
        • 5. Mixing algorithms
      • Impact of the size of the database on the learning
    • Learning BN as probabilistic classifier
      • Learning a BN from learn.csv
      • Two classifiers from the learned BN
    • Learning essential graphs
      • Compare learning algorithms
    • Dirichlet prior
      • Dirichlet prior as database
        • Generating databases for Dirichlet prior and for the learning
        • Learning databases
        • Learning with Dirichlet prior
        • Weighted database and records
    • Parametric EM (missing data)
      • Generating data with missing values (at random)
      • Learning with missing data
      • Learning with smaller error (and no smoothing)
    • Scores, Chi2, etc. with BNLearner
      • Generating the database for scoring
      • Testing d-separations using chi2 in the database
      • Evolution of chi2 p-values w.r.t. the size of the database (in Asia)
      • Testing d-separations using G2 in the database
      • Evolution of G2 p-values w.r.t. the size of the database (in Asia)
      • Conditional joint log-likelihood
      • Evolution of conditional log-likelihood w.r.t. the size of the database (in Asia)
      • Comparing the scores
  • Different Graphical Models
    • Influence diagram
      • Build an influence diagram
        • Fast build with a string
        • BIFXML format file
        • The hard way :-)
      • Optimization in an influence diagram (actually LIMID)
      • Graphical inference with evidence and targets (developed nodes)
      • Soft evidence on chance node
      • Forced decision
      • LIMID versus Influence Diagram
      • Customizing visualization of the results
    • Dynamic Bayesian networks
      • Building a 2TBN
        • 2TBN
        • Unrolling a 2TBN
        • Dynamic inference: following variables
        • nsDBN (Non-Stationary Dynamic Bayesian Network)
    • Markov networks
      • Building a Markov Network
      • Accessors for Markov Networks
      • Manipulating factors
      • Customizing graphical representation
      • From BayesNet to MarkovNet
      • Inference in Markov networks
      • Graphical inference in Markov networks
    • Credal Networks
      • Credal Net from BN
        • We can use LBP on a CN (L2U) only for binary credal networks (here B is not binary); we therefore apply the classical binarization (warning the user that this introduces an approximation in the inference)
      • Credal Net from bif files
      • Comparing inference in credal networks
        • The two inferences give quite the same results
        • but not when evidence is inserted
      • Dynamic Credal Net
    • Object-Oriented Probabilistic Relational Model
      • O3PRM syntax
      • Using o3prm syntax for creating BayesNet
      • Exploring Probabilistic Relational Model
  • Bayesian networks as scikit-learn compliant classifiers
    • Learning classifiers
      • Learn from csv file
      • Learn from array-likes
      • Create a classifier from a Bayesian network
      • Prediction for classifier
        • Prediction with csv file
        • Prediction with array-like
      • ROC and Precision-Recall curves with all methods
    • The BNDiscretizer Class
      • Creation of an instance and setting parameters
      • Auditing data
      • Creating variables from data
      • Using Discretizer with BNLearner
    • Comparing classifiers (including Bayesian networks) with scikit-learn
      • Binary classifiers
      • n-ary classifiers on IRIS dataset
      • Recognizing hand-written digits with Bayesian Networks
        • Focus on the pixels needed for the classification
    • Using sklearn to cross-validate Bayesian network classifiers
    • From a Bayesian network to a Classifier
      • From a Bayesian network to a Binary classifier
  • Causal Bayesian Networks
    • Smoking, Cancer and causality
      • Direct causality between Smoking and Cancer
      • Latent confounder between Smoking and Cancer
      • Confounder and direct causality
      • An intermediary observed variable
      • Other causal impacts for this last model
    • Simpson’s Paradox
      • How to compute causal impacts on the patient’s health?
        • Computing \(P(Patient = Healed \mid \hookrightarrow Drug = Without)\)
        • Computing \(P(Patient = Healed \mid \hookrightarrow Drug = With)\)
      • Simpson paradox solved by interventions
    • Multinomial Simpson Paradox
      • Building the models
      • The observational model and its paradoxical structure (exactly the same for the second, Markov-equivalent model)
      • The paradox is revealed in the trend of the inferred means: the means increase with the value of \(A\) except for any value of \(C\) …
      • Now that the paradoxical structure is understood and the paradox is revealed, will we choose to observe \(C\) (or not) before deciding to increase or decrease \(A\) (with the goal of maximizing \(B\))?
      • If \(C\) is a cause of \(A\), observing \(C\) really gives new information about \(B\).
      • If \(A\) is a cause of \(C\), observing \(C\) may lead to misinterpretations about the causal role of \(A\).
    • Some examples of do-calculus
      • S. Tikka and J. Karvanen, 2016 [CRAN]
      • Front door
      • Unidentifiability
      • Another one
      • Other example
      • Example f
      • Example [Pearl,2009] Causality, p66
    • Counterfactual : the Effect of Education and Experience on Salary
      • Counterfactuals
        • We create the causal diagram
      • Step 1 : Abduction
      • Step 2 & 3 : Action And Prediction
        • Alice’s salary would be \(\$81.843\) if she had attended college!
      • pyAgrum.causal.counterfactual
        • Let’s try with the previous query:
        • If we omit values:
        • What would Alice’s salary be if she had attended college and had 8 years of experience?
        • If she had attended college and had 8 years of experience, Alice’s salary would be 91k!
        • If she had more experience:
        • Let’s try with the previous queries:
        • Latent variable between \(U_x\) and \(experience\):
  • Examples
    • Asthma
      • The model
      • Some inference
    • Kaggle Titanic
      • Titanic: Machine Learning from Disaster
        • Pretreatment
        • Modeling without learning
        • Pre-learning
        • Learning a probabilistic model
        • Exploring the data
      • Using BNClassifier
      • Making a BN without learning data
      • Conclusion
    • Naive modeling of credit defaults using a Markov Random Field
      • Constructing the model
        • Making inferences
      • Example 1 & 2
      • Example 3 & 4
      • Impact of defaults on a credit portfolio and the distribution of losses
        • Impact of a belief of stress on a sector for the number of defaults in the portfolio
    • Learning and causality
      • Model
      • Simulation of the data
      • Statistical learning
      • Evaluating the impact of \(X2\) on \(Y1\)
      • Evaluating the causal impact of \(X2\) on \(Y1\) with the learned model
      • Causal learning and causal impact
    • Sensitivity analysis for Bayesian networks using credal networks
      • Creating a Bayesian network
      • Building a credal network from a BN
      • Testing different hypotheses about the global precision of the parameters
    • Quasi-continuous BN
      • CPT for quasi-continuous variables (with parents)
      • Quasi-continuous inference
      • Quasi-continuous variable with quasi-continuous parent
        • Inference in quasi-continuous BN
      • Changing prior
      • Inference with evidence in quasi-continuous BN
      • Multiple inference: MAP decision between Gaussian and Maxwell-Boltzmann distributions
        • Changing the prior \(P(A)\)
    • Parameter learning with Pandas
      • Importing pyAgrum
      • Loading two BNs
      • Randomizing the parameters
      • Direct comparison of parameters
      • Exact KL-divergence
      • Generate a database from the original BN
      • Using pandas for counting
        • Now, let’s try to learn the parameters with pandas
      • A global method for estimating Bayesian network parameters from a CSV file using pandas
      • Influence of the size of the database on the quality of learned parameters
  • pyAgrum’s specific features
    • Potentials : named tensors
      • Potential algebra
      • Bayes’ theorem
      • Joint, marginal probability, likelihood
        • Computing \(p(A)\)
        • Computing \(p(A |C=1)\)
        • Computing \(P(A|C)\)
        • Likelihood \(P(A=2|C)\)
      • Entropy of a potential
    • Aggregators
    • Explaining a model
      • Building the model
      • 1-independence list (w.r.t. the class Y)
      • 2-ShapValues
        • Compute Conditional in a Bayesian Network
        • Causal Shap Values
        • Marginal Shap Values
        • Visualizing Shap values directly on a BN
      • Visualizing information
    • Kullback-Leibler for Bayesian networks
      • Initialisation
      • Create a first BN : bn
      • Create a second BN : bn2
      • bn vs bn2: different parameters
      • Exact and (Gibbs) approximated KL-divergence
        • Animation of Gibbs KL
    • Comparing BNs
      • How to compare two BNs
        • Between two different structures
        • Same structure, different parameters
        • Identical BNs
    • Coloring and exporting graphical models as image (pdf, png)
      • Customizing colours and widths for models and inference
      • Exporting model and inference as image
        • Exporting inference with evidence
        • Other models
        • Exporting any object with toDot() method
        • … or even a string in dot syntax
        • Exporting to pyplot
    • gum.config: the configuration object for pyAgrum
      • gum.config as singleton
      • pyagrum.ini
      • config constantly keeps track of the differences between defaults and actual values:
        • Accessors
      • Getter
      • Setter
      • Constant structure, mutable content
      • Properties as strings
      • Reset, reload, save
        • Using configuration
        • Finding a specific property
        • Saving current configuration in pyagrum.ini
      • From PyAgrumConfiguration to ConfigParser


© Copyright 2018-22, aGrUM/pyAgrum Team <info_at_agrum_dot_org> Revision 1a905d0e.
