The BNDiscretizer Class


Most of the functionality of pyAgrum works only on discrete data. However, real-world data are often continuous. This class can be used to create discretized variables from continuous data. Since this class was made for the purposes of the BNClassifier class, it accepts data in the form of ndarrays. To transform data from a csv file to an ndarray, we can use the XYfromCSV function in BNClassifier.
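As a rough, generic sketch (plain numpy, not the XYfromCSV helper itself, which also takes the target column's name and returns the matching label vector), loading a numeric csv into ndarrays might look like:

```python
# Generic sketch: turning a small numeric CSV into feature/label ndarrays.
# This is NOT XYfromCSV itself -- just plain numpy, for illustration.
from io import StringIO
import numpy as np

csv_text = """x1,x2,y
1.0,0.5,1
2.0,1.5,0
3.0,2.5,1
"""

data = np.genfromtxt(StringIO(csv_text), delimiter=",", skip_header=1)
X = data[:, :2]   # feature columns as an ndarray
y = data[:, 2]    # class column
```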

Creation of an instance and setting parameters

To create an instance of this class we need to specify the default parameters for discretizing data (the discretization method and the number of bins). Here we create a discretizer which uses the EWD (Equal Width Discretization) method with 5 bins. The threshold determines whether a variable is already discrete: in this case, if a variable has more than 10 unique values we treat it as continuous. We can use the setDiscretizationParameters method to set the discretization parameters for a specific variable.

In [1]:

import pyAgrum.skbn as skbn

discretizer=skbn.BNDiscretizer(defaultDiscretizationMethod='uniform',defaultNumberOfBins=5,discretizationThreshold=10)
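To make the threshold's role concrete, here is a simplified, purely illustrative version of the decision rule (the real audit logic lives inside BNDiscretizer and handles more cases):

```python
import numpy as np

def treated_as_continuous(column, threshold=10):
    """Simplified sketch of the audit rule: numeric columns with more
    unique values than the threshold are considered continuous."""
    column = np.asarray(column)
    if column.dtype.kind not in "iuf":   # strings etc. are never discretized
        return False
    return len(np.unique(column)) > threshold

# 11 unique numeric values > threshold of 10 -> continuous;
# 3 unique numeric values -> discrete; strings -> always discrete
```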

Auditing data

To see how certain data will be treated by the discretizer we can use the audit method.

In [2]:
import pandas
X = pandas.DataFrame.from_dict({
  'var1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 1, 2, 3],
  'var2': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'],
  'var3': [1, 2, 5, 1, 2, 5, 1, 2, 5, 1, 2, 5, 1, 2],
  'var4': [1.11, 2.213, 3.33, 4.23, 5.42, 6.6, 7.5, 8.9, 9.19, 10.11, 11.12, 12.21, 13.3, 14.5],
  'var5': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1]
})

print(X)

auditDict=discretizer.audit(X)

print()
print("** audit **")
for var in auditDict:
    print(f"- {var} : ")
    for k,v in auditDict[var].items():
        print(f"    + {k} : {v}")
    var1 var2  var3    var4  var5
0      1    a     1   1.110     1
1      2    b     2   2.213     2
2      3    c     5   3.330     3
3      4    d     1   4.230     4
4      5    e     2   5.420     5
5      6    f     5   6.600     6
6      7    g     1   7.500     7
7      8    h     2   8.900     8
8      9    i     5   9.190     9
9     10    j     1  10.110    10
10    11    k     2  11.120    11
11     1    l     5  12.210    12
12     2    m     1  13.300    13
13     3    n     2  14.500     1

** audit **
- var1 :
    + method : uniform
    + nbBins : 5
    + type : Continuous
    + minInData : 1
    + maxInData : 11
- var2 :
    + method : NoDiscretization
    + values : ['a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' 'k' 'l' 'm' 'n']
    + type : Discrete
- var3 :
    + method : NoDiscretization
    + values : [1 2 5]
    + type : Discrete
- var4 :
    + method : uniform
    + nbBins : 5
    + type : Continuous
    + minInData : 1.11
    + maxInData : 14.5
- var5 :
    + method : uniform
    + nbBins : 5
    + type : Continuous
    + minInData : 1
    + maxInData : 13

We can see that even though var2 has more unique values than var1, it is treated as a discrete variable. This is because the values of var2 are strings and therefore cannot be discretized.

Now we would like to discretize var1 using k-means and var4 using deciles (quantiles with 10 bins), and we would like var3 to stay undiscretized but with all the values from 1 to 5.

In [3]:
discretizer=skbn.BNDiscretizer(defaultDiscretizationMethod='uniform',defaultNumberOfBins=5,discretizationThreshold=10)

discretizer.setDiscretizationParameters('var1','kmeans')
discretizer.setDiscretizationParameters('var4','quantile',10)
discretizer.setDiscretizationParameters('var3','NoDiscretization',[1,2,3,4,5])

auditDict=discretizer.audit(X)

print()
print("** audit **")
for var in auditDict:
    print(f"- {var} : ")
    for k,v in auditDict[var].items():
        print(f"    + {k} : {v}")

** audit **
- var1 :
    + method : kmeans
    + param : 5
    + type : Continuous
    + minInData : 1
    + maxInData : 11
- var2 :
    + method : NoDiscretization
    + values : ['a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' 'k' 'l' 'm' 'n']
    + type : Discrete
- var3 :
    + method : NoDiscretization
    + param : [1, 2, 3, 4, 5]
    + type : Discrete
- var4 :
    + method : quantile
    + param : 10
    + type : Continuous
    + minInData : 1.11
    + maxInData : 14.5
- var5 :
    + method : uniform
    + nbBins : 5
    + type : Continuous
    + minInData : 1
    + maxInData : 13

Creating a template BN from data

To create a template BN (and its variables) from data we can use the discretizedBN method, which applies createVariable to each column of our data matrix. This uses the parameters that we have already set to create discrete (or discretized) variables from our data.

In [4]:
template_bn=discretizer.discretizedBN(X)

print(template_bn)
print(template_bn["var1"])
print(template_bn["var2"])
print(template_bn["var3"])
print(template_bn["var4"])
print(template_bn["var5"])
BN{nodes: 5, arcs: 0, domainSize: 17500, dim: 34, mem: 312o}
var1:Discretized(<(1;2.625[,[2.625;5.125[,[5.125;7.5[,[7.5;9.5[,[9.5;11)>)
var2:Labelized({a|b|c|d|e|f|g|h|i|j|k|l|m|n})
var3:Range([1,5])
var4:Discretized(<(1.11;2.5481[,[2.5481;3.87[,[3.87;5.301[,[5.301;6.78[,[6.78;8.2[,[8.2;9.132[,[9.132;10.211[,[10.211;11.556[,[11.556;12.973[,[12.973;14.5)>)
var5:Discretized(<(1;3.4[,[3.4;5.8[,[5.8;8.2[,[8.2;10.6[,[10.6;13)>)
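These edges can be checked by hand. Assuming 'uniform' corresponds to np.linspace over [min, max] and 'quantile' to np.quantile (the printed edges for var5 and var4 above are consistent with this), a quick verification:

```python
import numpy as np

var4 = [1.11, 2.213, 3.33, 4.23, 5.42, 6.6, 7.5,
        8.9, 9.19, 10.11, 11.12, 12.21, 13.3, 14.5]
var5 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1]

# 5 equal-width bins on [1, 13] -> edges 1, 3.4, 5.8, 8.2, 10.6, 13
uniform_edges = np.linspace(min(var5), max(var5), 5 + 1)

# 10 quantile bins -> inner edges at the deciles of var4
quantile_edges = np.quantile(var4, np.linspace(0, 1, 10 + 1))
```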

For supervised discretization algorithms (MDLP and CAIM), the list of class labels for each data point is also needed.

In [5]:
y=[True,False,False,True,False,False,True,True,False,False,True,True,False,True]

discretizer.setDiscretizationParameters('var4','CAIM')
template_bn=discretizer.discretizedBN(X,y)
print(template_bn["var4"])
discretizer.setDiscretizationParameters('var4','MDLP')
template_bn=discretizer.discretizedBN(X,y)
print(template_bn["var4"])

var4:Discretized(<(1.11;10.615[,[10.615;14.5)>)
var4:Discretized(<(1.11;1.6615[,[1.6615;14.5)>)

The discretizer keeps track of the number of discretized variables it has created and the total number of bins used to discretize them. To reset these two counters to 0 we can use the clear method. We can also use it to clear the specific parameters we have set for each variable.

In [6]:
print(f"numberOfContinuous : {discretizer.numberOfContinuous}")
print(f"totalNumberOfBins : {discretizer.totalNumberOfBins}")

discretizer.clear()
print("\n")

print(f"numberOfContinuous : {discretizer.numberOfContinuous}")
print(f"totalNumberOfBins : {discretizer.totalNumberOfBins}")

discretizer.audit(X)
numberOfContinuous : 9
totalNumberOfBins : 44


numberOfContinuous : 0
totalNumberOfBins : 0
Out[6]:
{'var1': {'method': 'kmeans',
  'param': 5,
  'type': 'Continuous',
  'minInData': 1,
  'maxInData': 11},
 'var2': {'method': 'NoDiscretization',
  'values': array(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
         'n'], dtype=object),
  'type': 'Discrete'},
 'var3': {'method': 'NoDiscretization',
  'param': [1, 2, 3, 4, 5],
  'type': 'Discrete'},
 'var4': {'method': 'MDLP',
  'param': 10,
  'type': 'Continuous',
  'minInData': 1.11,
  'maxInData': 14.5},
 'var5': {'method': 'uniform',
  'nbBins': 5,
  'type': 'Continuous',
  'minInData': 1,
  'maxInData': 13}}
In [7]:
discretizer.clear(True)
discretizer.audit(X)
Out[7]:
{'var1': {'method': 'uniform',
  'nbBins': 5,
  'type': 'Continuous',
  'minInData': 1,
  'maxInData': 11},
 'var2': {'method': 'NoDiscretization',
  'values': array(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
         'n'], dtype=object),
  'type': 'Discrete'},
 'var3': {'method': 'NoDiscretization',
  'values': array([1, 2, 5], dtype=object),
  'type': 'Discrete'},
 'var4': {'method': 'uniform',
  'nbBins': 5,
  'type': 'Continuous',
  'minInData': 1.11,
  'maxInData': 14.5},
 'var5': {'method': 'uniform',
  'nbBins': 5,
  'type': 'Continuous',
  'minInData': 1,
  'maxInData': 13}}

Using Discretizer with BNClassifier

In [8]:
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

import pyAgrum.skbn as skbn
import pandas as pd

X = pd.DataFrame.from_dict({
  'var1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 1, 2, 3],
  'var2': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'],
  'var3': [1, 2, 5, 1, 2, 5, 1, 2, 5, 1, 2, 5, 1, 2],
  'var4': [1.11, 2.213, 3.33, 4.23, 5.42, 6.6, 7.5, 8.9, 9.19, 10.11, 11.12, 12.21, 13.3, 14.5],
  'var5': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1]
})
Y= [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0]

classif=skbn.BNClassifier(learningMethod="TAN")
# by default number of bins is 5
classif.discretizer.setDiscretizationParameters('var1','kmeans')
# ... but 10 for var4
classif.discretizer.setDiscretizationParameters('var4','quantile',10)
# in the database, var3 only takes values 1,2 or 5 but 3 and 4 are also possible
classif.discretizer.setDiscretizationParameters('var3','NoDiscretization',[1,2,3,4,5])

classif.fit(X,Y)

gnb.showInference(classif.bn)
(figure: inference on the classifier's Bayesian network)

Using Discretizer with BNLearner

In [9]:
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

import pyAgrum.skbn as skbn
import pandas as pd

file_name = 'res/discretizable.csv'
data = pd.read_csv(file_name)

discretizer = skbn.BNDiscretizer(defaultDiscretizationMethod='quantile',
                                 defaultNumberOfBins=10,
                                 discretizationThreshold=25)
In [10]:
# creating a template describing the variables proposed by the discretizer. These variables will be used by the learner
template = discretizer.discretizedBN(data)
In [11]:
learner = gum.BNLearner(file_name, template)
learner.useMIIC()
learner.useNMLCorrection()

bn = learner.learnBN()
gnb.showInference(bn,size="10!")
(figure: inference on the learned BN)

Comparing discretization methods

Different discretizations of the same mixture of two Gaussians.

In [12]:
import numpy as np
import pandas

N=20000
N1=2*N//3
N2=N-N1

classY=np.array([1]*N1+[0]*N2)
data=pandas.DataFrame(data={"y":classY,
                            # discretization using quantile (15 bins)
                            "q15":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),
                            # discretization using uniform (15 bins)
                            "u15":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),
                            # discretization using kmeans (15 bins)
                            "k15":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),

                            # discretization using quantile (5 bins)
                            "q5":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),
                            # discretization using kmeans (5 bins)
                            "k5":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),

                            # other discretization methods
                            "caim":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),
                            "mdlp":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )),

                            "expert":np.concatenate((np.random.normal(0, 2, N1),np.random.normal(10, 2, N2) )) })

discretizer=skbn.BNDiscretizer(defaultDiscretizationMethod='quantile',
                               defaultNumberOfBins=15,discretizationThreshold=10)

discretizer.setDiscretizationParameters("u15",method="uniform")
discretizer.setDiscretizationParameters("k15",method="kmeans")
discretizer.setDiscretizationParameters("q5",method="quantile",paramDiscretizationMethod=5)
discretizer.setDiscretizationParameters("k5",method="kmeans",paramDiscretizationMethod=5)
discretizer.setDiscretizationParameters("caim",method="CAIM",paramDiscretizationMethod=5)
discretizer.setDiscretizationParameters("mdlp",method="MDLP",paramDiscretizationMethod=5)
discretizer.setDiscretizationParameters("expert",method="expert",paramDiscretizationMethod=[-30.0,-2,0.2,1,30.0])
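The 'expert' method takes the supplied edges as-is; bin membership for a value is then just an np.digitize lookup (a sketch, not the discretizer's internal code):

```python
import numpy as np

expert_edges = [-30.0, -2, 0.2, 1, 30.0]   # the edges passed above

x = np.array([-5.0, 0.0, 0.5, 12.0])
bins = np.digitize(x, expert_edges) - 1    # zero-based interval index
# -5.0 -> [-30;-2[ (0), 0.0 -> [-2;0.2[ (1),
# 0.5 -> [0.2;1[ (2), 12.0 -> [1;30[ (3)
```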

By default, the distributions for discretized variables are represented as "histogram" (the area of the bars is proportional to the probabilities).

In [13]:
template=discretizer.discretizedBN(data,y=classY,possibleValuesY=[0,1])
for i,n in template:
    print(f"{n:7} : {template.variable(i)}")
y       : y:Range([0,1])
q15     : q15:Discretized(<(-7.81747;-2.55044[,[-2.55044;-1.65004[,[-1.65004;-1.03189[,[-1.03189;-0.504739[,[-0.504739;0.0169877[,[0.0169877;0.533415[,[0.533415;1.07102[,[1.07102;1.70217[,[1.70217;2.60999[,[2.60999;5.33646[,[5.33646;8.34561[,[8.34561;9.50159[,[9.50159;10.5151[,[10.5151;11.6898[,[11.6898;18.2675)>)
u15     : u15:Discretized(<(-7.19953;-5.55223[,[-5.55223;-3.90492[,[-3.90492;-2.25761[,[-2.25761;-0.610305[,[-0.610305;1.037[,[1.037;2.68431[,[2.68431;4.33162[,[4.33162;5.97892[,[5.97892;7.62623[,[7.62623;9.27354[,[9.27354;10.9208[,[10.9208;12.5682[,[12.5682;14.2155[,[14.2155;15.8628[,[15.8628;17.5101)>)
k15     : k15:Discretized(<(-8.84491;-4.02554[,[-4.02554;-2.6833[,[-2.6833;-1.60386[,[-1.60386;-0.592431[,[-0.592431;0.420574[,[0.420574;1.50559[,[1.50559;2.82143[,[2.82143;4.84299[,[4.84299;6.98609[,[6.98609;8.50211[,[8.50211;9.70054[,[9.70054;10.8129[,[10.8129;12.0069[,[12.0069;13.5218[,[13.5218;17.4437)>)
q5      : q5:Discretized(<(-6.88905;-0.998628[,[-0.998628;0.541285[,[0.541285;2.57268[,[2.57268;9.50987[,[9.50987;17.3454)>)
k5      : k5:Discretized(<(-7.73058;-1.20791[,[-1.20791;1.27046[,[1.27046;5.54745[,[5.54745;10.1737[,[10.1737;17.01)>)
caim    : caim:Discretized(<(-8.21937;5.25927[,[5.25927;17.2989)>)
mdlp    : mdlp:Discretized(<(-9.3718;2.93132[,[2.93132;4.28044[,[4.28044;5.38012[,[5.38012;5.86724[,[5.86724;7.10095[,[7.10095;18.0946)>)
expert  : expert:Discretized(<(-30;-2[,[-2;0.2[,[0.2;1[,[1;30)>)
In [14]:
learner=gum.BNLearner(data,template)
bn=gum.BayesNet(template)
for i,n in bn:
    if n!="y":
        bn.addArc("y",n)
bn
Out[14]:
(graph: y with arcs to q15, u15, k15, q5, k5, caim, mdlp, expert)
In [15]:
#learner.useMIIC()
#bn=learner.learnBN()
learner.fitParameters(bn)

# dot | neato | fdp | sfdp | twopi | circo | osage | patchwork
gum.config.push()
gum.config["notebook", "graph_layout"]="fdp"
gnb.showInference(bn,size="8!")
gum.config.pop()
(figure: inference with fdp layout, histogram visualisation)

But you can always choose to show them as "bar" (the height of the bars is proportional to the probabilities) instead of "histogram" (the area of the bars is proportional to the probabilities).

In [16]:
# changing how discretized variables are visualized
gum.config.push()
gum.config['notebook','histogram_discretized_visualisation']="bar"
gnb.showInference(bn,size="13!")
gum.config.pop() # default (above) is "histogram"
gnb.showInference(bn,size="13!")
(figures: inference with "bar" visualisation, then with the default "histogram" visualisation)