Probabilistic and Bayesian Analytics: Probability
Copyright Andrew W. Moore Slide 20
Another fact about Multivalued
Random Variables:
Using the axioms of probability
0 <= P(A) <= 1, P(True) = 1, P(False) = 0
P(A or B) = P(A) + P(B) - P(A and B)
and assuming that A obeys
P(A=v_i ^ A=v_j) = 0 if i ≠ j
P(A=v_1 or A=v_2 or ... or A=v_k) = 1
it's easy to prove that
P(B ^ [A=v_1 or A=v_2 or ... or A=v_i]) = Σ_{j=1}^{i} P(B ^ A=v_j)
And thus we can prove
P(B) = Σ_{j=1}^{k} P(B ^ A=v_j)
Copyright Andrew W. Moore Slide 22
Elementary Probability in Pictures
P(~A) + P(A) = 1
Copyright Andrew W. Moore Slide 23
Elementary Probability in Pictures
P(B) = P(B ^ A) + P(B ^ ~A)
Copyright Andrew W. Moore Slide 24
Elementary Probability in Pictures
Σ_{j=1}^{k} P(A=v_j) = 1
Copyright Andrew W. Moore Slide 25
Elementary Probability in Pictures
P(B) = Σ_{j=1}^{k} P(B ^ A=v_j)
Copyright Andrew W. Moore Slide 26
Conditional Probability
P(A|B) = Fraction of worlds in which B is
true that also have A true
H = Have a headache
F = Coming down with Flu
P(H) = 1/10
P(F) = 1/40
P(H|F) = 1/2
Headaches are rare and flu
is rarer, but if you're
coming down with flu
there's a 50-50 chance
you'll have a headache.
Copyright Andrew W. Moore Slide 27
Conditional Probability
H = Have a headache
F = Coming down with Flu
P(H) = 1/10
P(F) = 1/40
P(H|F) = 1/2
P(H|F) = Fraction of flu-inflicted
worlds in which you have a
headache
= #worlds with flu and headache
------------------------------------
#worlds with flu
= Area of H and F region
------------------------------
Area of F region
= P(H ^ F)
-----------
P(F)
Copyright Andrew W. Moore Slide 28
Definition of Conditional Probability
P(A ^ B)
P(A|B) = -----------
P(B)
Corollary: The Chain Rule
P(A ^ B) = P(A|B) P(B)
Copyright Andrew W. Moore Slide 29
Probabilistic Inference
H = Have a headache
F = Coming down with Flu
P(H) = 1/10
P(F) = 1/40
P(H|F) = 1/2
One day you wake up with a headache. You think: "Drat!
50% of flus are associated with headaches so I must have a
50-50 chance of coming down with flu."
Is this reasoning good?
Copyright Andrew W. Moore Slide 30
Probabilistic Inference
H = Have a headache
F = Coming down with Flu
P(H) = 1/10
P(F) = 1/40
P(H|F) = 1/2
P(F ^ H) = P(H|F) P(F) = 1/2 × 1/40 = 1/80
P(F|H) = P(F ^ H) / P(H) = (1/80) / (1/10) = 1/8
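Filling in the blanks is one application of the chain rule and one of the definition of conditional probability; a quick Python check, using only the three numbers from the slide:

```python
# Probabilities from the slide.
P_H = 1 / 10          # P(H): probability of a headache
P_F = 1 / 40          # P(F): probability of flu
P_H_given_F = 1 / 2   # P(H|F): headache given flu

# Chain rule: P(F ^ H) = P(H|F) * P(F)
P_F_and_H = P_H_given_F * P_F      # = 1/80

# Definition of conditional probability: P(F|H) = P(F ^ H) / P(H)
P_F_given_H = P_F_and_H / P_H      # = 1/8

print(P_F_and_H, P_F_given_H)      # 0.0125 0.125
```

So the answer to the slide's question is no: the chance of flu given a headache is 1/8, not 1/2.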
Copyright Andrew W. Moore Slide 31
Another way to understand the
intuition
Thanks to Jahanzeb Sherwani for contributing this explanation:
Copyright Andrew W. Moore Slide 32
What we just did
P(A ^ B) P(A|B) P(B)
P(B|A) = ----------- = ---------------
P(A) P(A)
This is Bayes' Rule
Bayes, Thomas (1763) An essay
towards solving a problem in the
doctrine of chances. Philosophical
Transactions of the Royal Society of
London, 53:370-418
Copyright Andrew W. Moore Slide 33
Using Bayes Rule to Gamble
The Win envelope
has a dollar and four
beads in it
$1.00
The Lose envelope
has three beads and
no money
Trivial question: someone draws an envelope at random and offers to
sell it to you. How much should you pay?
Copyright Andrew W. Moore Slide 34
Using Bayes Rule to Gamble
The Win envelope
has a dollar and four
beads in it
$1.00
The Lose envelope
has three beads and
no money
Interesting question: before deciding, you are allowed to see one bead
drawn from the envelope.
Suppose it's black: How much should you pay?
Suppose it's red: How much should you pay?
Copyright Andrew W. Moore Slide 35
Calculation
$1.00
Copyright Andrew W. Moore Slide 36
More General Forms of Bayes Rule
P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|~A) P(~A) ]

P(A|B ^ X) = P(B|A ^ X) P(A|X) / P(B|X)
Copyright Andrew W. Moore Slide 37
More General Forms of Bayes Rule
P(A=v_i | B) = P(B|A=v_i) P(A=v_i) / Σ_{k=1}^{n_A} P(B|A=v_k) P(A=v_k)
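This form is easy to mechanize. A minimal Python sketch, where `prior` and `likelihood` are hypothetical dictionaries mapping each value v of A to P(A=v) and P(B|A=v):

```python
def posterior(prior, likelihood):
    """Compute P(A=v | B) for every value v via Bayes' rule.

    prior[v]      = P(A=v)
    likelihood[v] = P(B | A=v)
    """
    # Denominator: P(B) = sum over k of P(B|A=v_k) P(A=v_k)
    p_b = sum(likelihood[v] * prior[v] for v in prior)
    return {v: likelihood[v] * prior[v] / p_b for v in prior}

# Hypothetical three-valued A
prior = {"v1": 0.5, "v2": 0.3, "v3": 0.2}
likelihood = {"v1": 0.9, "v2": 0.1, "v3": 0.5}
post = posterior(prior, likelihood)
assert abs(sum(post.values()) - 1.0) < 1e-12  # posteriors sum to 1
```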
Copyright Andrew W. Moore Slide 38
Useful Easy-to-prove facts
P(A|B) + P(~A|B) = 1

Σ_{k=1}^{n_A} P(A=v_k | B) = 1
Copyright Andrew W. Moore Slide 39
The Joint Distribution
Recipe for making a joint distribution
of M variables:
1. Make a truth table listing all
combinations of values of your
variables (if there are M Boolean
variables then the table will have
2^M rows).
2. For each combination of values,
say how probable it is.
3. If you subscribe to the axioms of
probability, those numbers must
sum to 1.
Example: Boolean variables A, B, C
A B C Prob
1 1 1 0.10
1 1 0 0.25
1 0 1 0.10
1 0 0 0.05
0 1 1 0.05
0 1 0 0.10
0 0 1 0.05
0 0 0 0.30
[Venn diagram of A, B, C showing the eight region probabilities]
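One convenient representation is a dictionary keyed by (A, B, C) tuples; the assertion checks step 3 of the recipe (the numbers are the ones in the example table):

```python
# Joint distribution over Boolean A, B, C, keyed by (A, B, C).
joint = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.25,
    (1, 0, 1): 0.10, (1, 0, 0): 0.05,
    (0, 1, 1): 0.05, (0, 1, 0): 0.10,
    (0, 0, 1): 0.05, (0, 0, 0): 0.30,
}
# If you subscribe to the axioms of probability, these must sum to 1.
assert abs(sum(joint.values()) - 1.0) < 1e-12
```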
Copyright Andrew W. Moore Slide 43
Using the Joint
Once you have the JD you can
ask for the probability of any
logical expression involving
your attributes:
P(E) = Σ_{rows matching E} P(row)
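The recipe "sum P(row) over the rows matching E" is a few lines of Python; `joint` reuses the Boolean A, B, C example table and the predicate stands for a hypothetical expression E = (A ^ B):

```python
def prob(joint, matches):
    """P(E) = sum of P(row) over the rows where the predicate `matches` is true."""
    return sum(p for row, p in joint.items() if matches(row))

joint = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.25, (1, 0, 1): 0.10, (1, 0, 0): 0.05,
    (0, 1, 1): 0.05, (0, 1, 0): 0.10, (0, 0, 1): 0.05, (0, 0, 0): 0.30,
}
# P(A ^ B): rows where A=1 and B=1  ->  0.10 + 0.25 = 0.35
p_a_and_b = prob(joint, lambda r: r[0] == 1 and r[1] == 1)
```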
Copyright Andrew W. Moore Slide 44
Using the Joint
P(Poor Male) = 0.4654
P(E) = Σ_{rows matching E} P(row)
Copyright Andrew W. Moore Slide 45
Using the Joint
P(Poor) = 0.7604
P(E) = Σ_{rows matching E} P(row)
Copyright Andrew W. Moore Slide 46
Inference with the Joint
P(E1 | E2) = P(E1 ^ E2) / P(E2)
= Σ_{rows matching E1 and E2} P(row) / Σ_{rows matching E2} P(row)
P(Male | Poor) = 0.4654 / 0.7604 = 0.612
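The same two-sum recipe, as a function. The example query mirrors P(Male | Poor) but uses the earlier Boolean A, B, C joint, since the census table itself is not reproduced in these notes:

```python
def cond_prob(joint, e1, e2):
    """P(E1|E2) = P(E1 ^ E2) / P(E2), both computed by summing matching rows."""
    num = sum(p for row, p in joint.items() if e1(row) and e2(row))
    den = sum(p for row, p in joint.items() if e2(row))
    return num / den

joint = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.25, (1, 0, 1): 0.10, (1, 0, 0): 0.05,
    (0, 1, 1): 0.05, (0, 1, 0): 0.10, (0, 0, 1): 0.05, (0, 0, 0): 0.30,
}
# P(A | B) = P(A ^ B) / P(B) = 0.35 / 0.50 = 0.70
p = cond_prob(joint, lambda r: r[0] == 1, lambda r: r[1] == 1)
```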
Copyright Andrew W. Moore Slide 48
Inference is a big deal
"I've got this evidence. What's the chance
that this conclusion is true?"
"I've got a sore neck: how likely am I to have meningitis?"
"I see my lights are out and it's 9pm. What's the chance
my spouse is already asleep?"
There's a thriving set of industries growing based
around Bayesian Inference. Highlights are:
Medicine, Pharma, Help Desk Support, Engine
Fault Diagnosis
Copyright Andrew W. Moore Slide 51
Where do Joint Distributions
come from?
Idea One: Expert Humans
Idea Two: Simpler probabilistic facts and
some algebra
Example: Suppose you knew
P(A) = 0.7
P(B|A) = 0.2
P(B|~A) = 0.1
P(C|A^B) = 0.1
P(C|A^~B) = 0.8
P(C|~A^B) = 0.3
P(C|~A^~B) = 0.1
Then you can automatically
compute the JD using the
chain rule
P(A=x ^ B=y ^ C=z) =
P(C=z|A=x^ B=y) P(B=y|A=x) P(A=x)
In another lecture:
Bayes Nets, a
systematic way to
do this.
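The chain-rule construction above can be sketched directly; the seven numbers are the ones given on the slide:

```python
# The seven probabilistic facts from the slide.
P_A = 0.7
P_B_given = {True: 0.2, False: 0.1}                      # P(B|A), P(B|~A)
P_C_given = {(True, True): 0.1, (True, False): 0.8,
             (False, True): 0.3, (False, False): 0.1}    # P(C|A^B) etc.

# Chain rule: P(A=x ^ B=y ^ C=z) = P(C=z|A=x ^ B=y) P(B=y|A=x) P(A=x)
joint = {}
for a in (True, False):
    pa = P_A if a else 1 - P_A
    for b in (True, False):
        pb = P_B_given[a] if b else 1 - P_B_given[a]
        for c in (True, False):
            pc = P_C_given[(a, b)] if c else 1 - P_C_given[(a, b)]
            joint[(a, b, c)] = pc * pb * pa

assert abs(sum(joint.values()) - 1.0) < 1e-12  # the 8 rows form a distribution
```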
Copyright Andrew W. Moore Slide 52
Where do Joint Distributions
come from?
Idea Three: Learn them from data!
Prepare to see one of the most impressive learning
algorithms you'll come across in the entire course.
Copyright Andrew W. Moore Slide 53
Learning a joint distribution
Build a JD table for your
attributes in which the
probabilities are unspecified.
Then fill in each row with
P(row) = (# records matching row) / (total number of records)
Before learning:
A B C Prob
1 1 1 ?
1 1 0 ?
1 0 1 ?
1 0 0 ?
0 1 1 ?
0 1 0 ?
0 0 1 ?
0 0 0 ?
After learning:
A B C Prob
1 1 1 0.10
1 1 0 0.25
1 0 1 0.10
1 0 0 0.05
0 1 1 0.05
0 1 0 0.10
0 0 1 0.05
0 0 0 0.30
(e.g. 0.25 is the fraction of all records in which
A and B are True but C is False)
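The counting recipe in Python; `data` is a hypothetical list of (A, B, C) records:

```python
from collections import Counter

def learn_joint(records):
    """Learn a joint distribution: P(row) = (#records matching row) / (#records)."""
    counts = Counter(records)
    n = len(records)
    return {row: c / n for row, c in counts.items()}

# Hypothetical dataset of (A, B, C) records
data = [(1, 1, 0)] * 5 + [(0, 0, 0)] * 3 + [(1, 0, 1)] * 2
jd = learn_joint(data)
# P(A=1 ^ B=1 ^ C=0) = 5/10 = 0.5
```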
Copyright Andrew W. Moore Slide 54
Example of Learning a Joint
This Joint was
obtained by
learning from
three
attributes in
the UCI
Adult
Census
Database
[Kohavi 1995]
Copyright Andrew W. Moore Slide 55
Where are we?
We have recalled the fundamentals of
probability
We have become content with what JDs are
and how to use them
And we even know how to learn JDs from
data.
Copyright Andrew W. Moore Slide 56
Density Estimation
Our Joint Distribution learner is our first
example of something called Density
Estimation
A Density Estimator learns a mapping from
a set of attributes to a probability:
Input Attributes → Density Estimator → Probability
Copyright Andrew W. Moore Slide 57
Density Estimation
Compare it against the two other major
kinds of models:
Input Attributes → Density Estimator → Probability
Input Attributes → Regressor → Prediction of real-valued output
Input Attributes → Classifier → Prediction of categorical output
Copyright Andrew W. Moore Slide 58
Evaluating Density Estimation
Input Attributes → Classifier → Prediction of categorical output (Test set Accuracy)
Input Attributes → Regressor → Prediction of real-valued output (Test set Accuracy)
Input Attributes → Density Estimator → Probability (?)
Test-set criterion for estimating performance
on future data*
* See the Decision Tree or Cross Validation lecture for more detail
Copyright Andrew W. Moore Slide 59
Evaluating a density estimator
Given a record x, a density estimator M can
tell you how likely the record is:
P(x|M)
Given a dataset with R records, a density
estimator can tell you how likely the dataset
is:
P(dataset|M) = P(x_1 ^ x_2 ^ ... ^ x_R | M) = Π_{k=1}^{R} P(x_k|M)
(Under the assumption that all records were independently
generated from the Density Estimator's JD)
Copyright Andrew W. Moore Slide 60
A small dataset: Miles Per Gallon
From the UCI repository (thanks to Ross Quinlan)
192
Training
Set
Records
mpg modelyear maker
good 75to78 asia
bad 70to74 america
bad 75to78 europe
bad 70to74 america
bad 70to74 america
bad 70to74 asia
bad 70to74 asia
bad 75to78 america
: : :
: : :
: : :
bad 70to74 america
good 79to83 america
bad 75to78 america
good 79to83 america
bad 75to78 america
good 79to83 america
good 79to83 america
bad 70to74 america
good 75to78 europe
bad 75to78 europe
Copyright Andrew W. Moore Slide 61
A small dataset: Miles Per Gallon
192 Training Set Records (the table shown earlier)
P(dataset|M) = Π_{k=1}^{R} P(x_k|M) = (in this case) ≈ 3.4 × 10^-203
Copyright Andrew W. Moore Slide 63
Log Probabilities
Since probabilities of datasets get so
small we usually use log probabilities
log P(dataset|M) = log Π_{k=1}^{R} P(x_k|M) = Σ_{k=1}^{R} log P(x_k|M)
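A sketch of log-likelihood evaluation, where `model` is a hypothetical density estimator stored as a row → probability dictionary:

```python
import math

def log_likelihood(model, records):
    """log P(dataset|M) = sum over k of log P(x_k|M)."""
    total = 0.0
    for x in records:
        p = model.get(x, 0.0)
        if p == 0.0:
            return float("-inf")   # one zero-probability record sinks the dataset
        total += math.log(p)
    return total

model = {(1, 1): 0.5, (1, 0): 0.25, (0, 1): 0.25}   # a toy density estimator
assert log_likelihood(model, [(1, 1), (1, 0)]) == math.log(0.5) + math.log(0.25)
```

The early return for zero probabilities foreshadows the overfitting discussion a few slides below: a single unseen record drives the test-set log likelihood to minus infinity.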
Copyright Andrew W. Moore Slide 64
A small dataset: Miles Per Gallon
192 Training Set Records (the table shown earlier)
log P(dataset|M) = Σ_{k=1}^{R} log P(x_k|M) = (in this case) = -466.19
Copyright Andrew W. Moore Slide 65
Summary: The Good News
We have a way to learn a Density Estimator
from data.
Density estimators can do many good
things
Can sort the records by probability, and thus
spot weird records (anomaly detection)
Can do inference: P(E1|E2)
Automatic Doctor / Help Desk etc
Ingredient for Bayes Classifiers (see later)
Copyright Andrew W. Moore Slide 66
Summary: The Bad News
Density estimation by directly learning the
joint is trivial, mindless and dangerous
Copyright Andrew W. Moore Slide 67
Using a test set
An independent test set with 196 cars has a worse log likelihood
(actually it's a billion quintillion quintillion quintillion quintillion
times less likely)
…Density estimators can overfit. And the full joint density
estimator is the overfittiest of them all!
Copyright Andrew W. Moore Slide 68
Overfitting Density Estimators
If this ever happens, it means
there are certain combinations
that we learn are impossible:
log P(testset|M) = Σ_{k=1}^{R} log P(x_k|M) = -∞ if P(x_k|M) = 0 for any k
Copyright Andrew W. Moore Slide 69
Using a test set
The only reason that our test set didn't score -infinity is that my
code is hard-wired to always predict a probability of at least one
in 10^20
We need Density Estimators that are less prone
to overfitting
Copyright Andrew W. Moore Slide 70
Naïve Density Estimation
The problem with the Joint Estimator is that it just
mirrors the training data.
We need something which generalizes more usefully.
The naïve model generalizes strongly:
Assume that each attribute is distributed
independently of any of the other attributes.
Copyright Andrew W. Moore Slide 71
Independently Distributed Data
Let x[i] denote the ith field of record x.
The independently distributed assumption
says that for any i, v, u_1, u_2, ..., u_{i-1}, u_{i+1}, ..., u_M:
P(x[i]=v | x[1]=u_1, x[2]=u_2, ..., x[i-1]=u_{i-1}, x[i+1]=u_{i+1}, ..., x[M]=u_M)
= P(x[i]=v)
Or in other words, x[i] is independent of
{x[1], x[2], ..., x[i-1], x[i+1], ..., x[M]}
This is often written as
x[i] ⊥ {x[1], x[2], ..., x[i-1], x[i+1], ..., x[M]}
Copyright Andrew W. Moore Slide 72
A note about independence
Assume A and B are Boolean Random
Variables. Then
A and B are independent
if and only if
P(A|B) = P(A)
"A and B are independent" is often notated
as
A ⊥ B
Copyright Andrew W. Moore Slide 73
Independence Theorems
Assume P(A|B) = P(A)
Then P(A ^ B) = P(A|B) P(B)
= P(A) P(B)
Assume P(A|B) = P(A)
Then P(B|A) = P(A|B) P(B) / P(A)
= P(B)
Copyright Andrew W. Moore Slide 74
Independence Theorems
Assume P(A|B) = P(A)
Then P(~A|B) = 1 - P(A|B) = 1 - P(A)
= P(~A)
Assume P(A|B) = P(A)
Then P(A|~B) = P(A ^ ~B) / P(~B) = P(A)(1 - P(B)) / P(~B)
= P(A)
Copyright Andrew W. Moore Slide 75
Multivalued Independence
For multivalued Random Variables A and B,
A ⊥ B
if and only if
∀u,v : P(A=u | B=v) = P(A=u)
from which you can then prove things like
∀u,v : P(A=u ^ B=v) = P(A=u) P(B=v)
∀u,v : P(B=v | A=u) = P(B=v)
Copyright Andrew W. Moore Slide 76
Back to Naïve Density Estimation
Let x[i] denote the ith field of record x:
Naïve DE assumes x[i] is independent of {x[1], x[2], ..., x[i-1], x[i+1], ..., x[M]}
Example:
Suppose that each record is generated by randomly shaking a green die
and a red die
Dataset 1: A = red value, B = green value
Dataset 2: A = red value, B = sum of values
Dataset 3: A = sum of values, B = difference of values
Which of these datasets violates the naïve assumption?
Copyright Andrew W. Moore Slide 77
Using the Naïve Distribution
Once you have a Naïve Distribution you can easily
compute any row of the joint distribution.
Suppose A, B, C and D are independently
distributed. What is P(A ^ ~B ^ C ^ ~D)?
= P(A|~B^C^~D) P(~B^C^~D)
= P(A) P(~B^C^~D)
= P(A) P(~B|C^~D) P(C^~D)
= P(A) P(~B) P(C^~D)
= P(A) P(~B) P(C|~D) P(~D)
= P(A) P(~B) P(C) P(~D)
Copyright Andrew W. Moore Slide 79
Naïve Distribution General Case
Suppose x[1], x[2], ..., x[M] are independently
distributed:
P(x[1]=u_1, x[2]=u_2, ..., x[M]=u_M) = Π_{k=1}^{M} P(x[k]=u_k)
So if we have a Naïve Distribution we can
construct any row of the implied Joint Distribution
on demand.
So we can do any inference
But how do we learn a Nave Density Estimator?
Copyright Andrew W. Moore Slide 80
Learning a Naïve Density
Estimator
P(x[i]=u) = (# records in which x[i] = u) / (total number of records)
Another trivial learning algorithm!
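Both the learner and the row reconstruction of the previous slide fit in a few lines; `data` is a hypothetical set of two-field records:

```python
def learn_naive(records, M):
    """For each field i and value u: P(x[i]=u) = (#records with x[i]=u) / (#records)."""
    n = len(records)
    est = [{} for _ in range(M)]
    for x in records:
        for i in range(M):
            est[i][x[i]] = est[i].get(x[i], 0) + 1
    return [{u: c / n for u, c in field.items()} for field in est]

def naive_joint_row(est, row):
    """P(x[1]=u_1 ^ ... ^ x[M]=u_M) = product of the per-field probabilities."""
    p = 1.0
    for i, u in enumerate(row):
        p *= est[i].get(u, 0.0)
    return p

data = [(1, 0), (1, 1), (0, 1), (1, 1)]
est = learn_naive(data, 2)
# P(x[0]=1) = 3/4 and P(x[1]=1) = 3/4, so the naive P(1, 1) = 9/16
```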
Copyright Andrew W. Moore Slide 81
Contrast
Joint DE | Naïve DE
Can model anything | Can model only very boring distributions
Given 100 records and more than 6 Boolean attributes, will screw up badly | Given 100 records and 10,000 multivalued attributes, will be fine
No problem to model "C is a noisy copy of A" | Outside Naïve's scope
Copyright Andrew W. Moore Slide 82
Empirical Results: "Hopeless"
The "hopeless" dataset consists of 40,000 records and 21 Boolean
attributes called a, b, c, ..., u. Each attribute in each record is generated
50-50 randomly as 0 or 1.
Despite the vast amount of data, Joint overfits hopelessly and
does much worse
Average test set log
probability during
10 folds of k-fold
cross-validation*
*Described in a future Andrew lecture
Copyright Andrew W. Moore Slide 83
Empirical Results: "Logical"
The "logical" dataset consists of 40,000 records and 4 Boolean
attributes called a, b, c, d where a, b, c are generated 50-50 randomly as 0
or 1. D = A ^ ~C, except that in 10% of records it is flipped
The DE
learned by
Joint
The DE
learned by
Naive
Copyright Andrew W. Moore Slide 84
Copyright Andrew W. Moore Slide 85
A tiny part of
the DE
learned by
Joint
Empirical Results: MPG
The MPG dataset consists of 392 records and 8 attributes
The DE
learned by
Naive
Copyright Andrew W. Moore Slide 86
Copyright Andrew W. Moore Slide 87
The DE
learned by
Joint
Empirical Results: Weight vs. MPG
Suppose we train only from the Weight and MPG attributes
The DE
learned by
Naive
Copyright Andrew W. Moore Slide 88
Copyright Andrew W. Moore Slide 89
The DE
learned by
Joint
Weight vs. MPG: The best that Naïve can do
The DE
learned by
Naive
Copyright Andrew W. Moore Slide 90
Reminder: The Good News
We have two ways to learn a Density
Estimator from data.
*In other lectures we'll see vastly more
impressive Density Estimators (Mixture Models,
Bayesian Networks, Density Trees, Kernel Densities and many more)
Density estimators can do many good
things
Anomaly detection
Can do inference: P(E1|E2) Automatic Doctor / Help Desk etc
Ingredient for Bayes Classifiers
Copyright Andrew W. Moore Slide 91
Bayes Classifiers
A formidable and sworn enemy of decision
trees
Classifier
Prediction of
categorical output
Input
Attributes
Copyright Andrew W. Moore Slide 92
How to build a Bayes Classifier
Assume you want to predict output Y which has arity n_Y and values
v_1, v_2, ..., v_{n_Y}.
Assume there are m input attributes called X_1, X_2, ..., X_m
Break dataset into n_Y smaller datasets called DS_1, DS_2, ..., DS_{n_Y}.
Define DS_i = Records in which Y = v_i
For each DS_i, learn Density Estimator M_i to model the input
distribution among the Y = v_i records.
M_i estimates P(X_1, X_2, ..., X_m | Y = v_i)
Idea: When a new set of input values (X_1 = u_1, X_2 = u_2, ..., X_m = u_m)
comes along to be evaluated, predict the value of Y that
makes P(X_1, X_2, ..., X_m | Y = v_i) most likely:
Y^predict = argmax_v P(X_1 = u_1, ..., X_m = u_m | Y = v)
Is this a good idea?
This is a Maximum Likelihood
classifier.
It can get silly if some Ys are
very unlikely
Copyright Andrew W. Moore Slide 96
How to build a Bayes Classifier
Assume you want to predict output Y which has arity n_Y and values
v_1, v_2, ..., v_{n_Y}.
Assume there are m input attributes called X_1, X_2, ..., X_m
Break dataset into n_Y smaller datasets called DS_1, DS_2, ..., DS_{n_Y}.
Define DS_i = Records in which Y = v_i
For each DS_i, learn Density Estimator M_i to model the input
distribution among the Y = v_i records.
M_i estimates P(X_1, X_2, ..., X_m | Y = v_i)
Idea: When a new set of input values (X_1 = u_1, X_2 = u_2, ..., X_m = u_m)
comes along to be evaluated, predict the value of Y that
makes P(Y = v_i | X_1, X_2, ..., X_m) most likely:
Y^predict = argmax_v P(Y = v | X_1 = u_1, ..., X_m = u_m)
Is this a good idea?
Much Better Idea
Copyright Andrew W. Moore Slide 97
Terminology
MLE (Maximum Likelihood Estimator):
Y^predict = argmax_v P(X_1 = u_1, ..., X_m = u_m | Y = v)
MAP (Maximum A-Posteriori Estimator):
Y^predict = argmax_v P(Y = v | X_1 = u_1, ..., X_m = u_m)
Copyright Andrew W. Moore Slide 98
Getting what we need
Y^predict = argmax_v P(Y = v | X_1 = u_1, ..., X_m = u_m)
Copyright Andrew W. Moore Slide 99
Getting a posterior probability
P(Y = v | X_1 = u_1, ..., X_m = u_m)
= P(X_1 = u_1, ..., X_m = u_m | Y = v) P(Y = v) / P(X_1 = u_1, ..., X_m = u_m)
= P(X_1 = u_1, ..., X_m = u_m | Y = v) P(Y = v)
/ Σ_{j=1}^{n_Y} P(X_1 = u_1, ..., X_m = u_m | Y = v_j) P(Y = v_j)
Copyright Andrew W. Moore Slide 100
Bayes Classifiers in a nutshell
1. Learn the distribution over inputs for each value of Y.
2. This gives P(X_1, X_2, ..., X_m | Y = v_i).
3. Estimate P(Y = v_i) as the fraction of records with Y = v_i.
4. For a new prediction:
Y^predict = argmax_v P(Y = v | X_1 = u_1, ..., X_m = u_m)
= argmax_v P(X_1 = u_1, ..., X_m = u_m | Y = v) P(Y = v)
We can use our favorite
Density Estimator here.
Right now we have two
options:
Joint Density Estimator
Naïve Density Estimator
Copyright Andrew W. Moore Slide 102
Joint Density Bayes Classifier
Y^predict = argmax_v P(X_1 = u_1, ..., X_m = u_m | Y = v) P(Y = v)
In the case of the joint Bayes Classifier this
degenerates to a very simple rule:
Y^predict = the most common value of Y among records
in which X_1 = u_1, X_2 = u_2, ..., X_m = u_m.
Note that if no records have the exact set of inputs X_1 = u_1,
X_2 = u_2, ..., X_m = u_m, then P(X_1, X_2, ..., X_m | Y = v_i)
= 0 for all values of Y.
In that case we just have to guess Y's value
Copyright Andrew W. Moore Slide 103
Joint BC Results: "Logical"
The "logical" dataset consists of 40,000 records and 4 Boolean
attributes called a, b, c, d where a, b, c are generated 50-50 randomly as 0
or 1. D = A ^ ~C, except that in 10% of records it is flipped
The Classifier
learned by
Joint BC
Copyright Andrew W. Moore Slide 104
Joint BC Results: "All Irrelevant"
The "all irrelevant" dataset consists of 40,000 records and 15 Boolean
attributes called a, b, c, d, ..., o where a, b, c are generated 50-50 randomly
as 0 or 1. v (output) = 1 with probability 0.75, 0 with prob 0.25
Copyright Andrew W. Moore Slide 105
Naïve Bayes Classifier
Y^predict = argmax_v P(X_1 = u_1, ..., X_m = u_m | Y = v) P(Y = v)
In the case of the naive Bayes Classifier this can be
simplified:
Y^predict = argmax_v P(Y = v) Π_{j=1}^{m} P(X_j = u_j | Y = v)
Technical Hint:
If you have 10,000 input attributes that product will
underflow in floating point math. You should use logs:
Y^predict = argmax_v [ log P(Y = v) + Σ_{j=1}^{m} log P(X_j = u_j | Y = v) ]
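The log-space argmax can be sketched as follows; `prior` and `cond` are hypothetical learned estimates, with cond[v][j][u] standing for P(X_j = u | Y = v):

```python
import math

def naive_bayes_predict(prior, cond, x):
    """Return argmax_v [ log P(Y=v) + sum_j log P(X_j = u_j | Y=v) ].

    prior[v]      = P(Y=v)
    cond[v][j][u] = P(X_j = u | Y=v)
    """
    best_v, best_score = None, float("-inf")
    for v in prior:
        score = math.log(prior[v])
        for j, u in enumerate(x):
            p = cond[v][j].get(u, 0.0)
            if p == 0.0:
                score = float("-inf")   # a zero factor rules this class out
                break
            score += math.log(p)
        if score > best_score:
            best_v, best_score = v, score
    return best_v

# Hypothetical two-class problem with two Boolean attributes
prior = {"yes": 0.5, "no": 0.5}
cond = {"yes": [{1: 0.9, 0: 0.1}, {1: 0.8, 0: 0.2}],
        "no":  [{1: 0.2, 0: 0.8}, {1: 0.3, 0: 0.7}]}
assert naive_bayes_predict(prior, cond, (1, 1)) == "yes"
```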
Copyright Andrew W. Moore Slide 107
BC Results: XOR
The XOR dataset consists of 40,000 records and 2 Boolean inputs called a
and b, generated 50-50 randomly as 0 or 1. c (output) = a XOR b
The Classifier
learned by
Naive BC
The Classifier
learned by
Joint BC
Copyright Andrew W. Moore Slide 108
Naive BC Results: Logical
The logical dataset consists of 40,000 records and 4 Boolean
attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0
or 1. D = A^~C, except that in 10% of records it is flipped
The Classifier
learned by
Naive BC
Copyright Andrew W. Moore Slide 109
Naive BC Results: Logical
The logical dataset consists of 40,000 records and 4 Boolean
attributes called a,b,c,d where a,b,c are generated 50-50 randomly as 0
or 1. D = A^~C, except that in 10% of records it is flipped
The Classifier
learned by
Joint BC
This result surprised Andrew until he
had thought about it a little
Copyright Andrew W. Moore Slide 110
Naïve BC Results: All Irrelevant
The all irrelevant dataset consists
of 40,000 records and 15 Boolean
attributes called a,b,c,d..o where
a,b,c are generated 50-50 randomly
as 0 or 1. v (output) = 1 with
probability 0.75, 0 with prob 0.25
The Classifier
learned by
Naive BC
Copyright Andrew W. Moore Slide 111
BC Results:
MPG: 392
records
The Classifier
learned by
Naive BC
Copyright Andrew W. Moore Slide 112
BC Results:
MPG: 40
records
Copyright Andrew W. Moore Slide 113
More Facts About Bayes
Classifiers
Many other density estimators can be slotted in*.
Density estimation can be performed with real-valued
inputs*
Bayes Classifiers can be built with real-valued inputs*
Rather Technical Complaint: Bayes Classifiers don't try to
be maximally discriminative---they merely try to honestly
model what's going on*
Zero probabilities are painful for Joint and Naïve. A hack
(justifiable with the magic words "Dirichlet Prior") can
help*.
Naïve Bayes is wonderfully cheap. And survives 10,000
attributes cheerfully!
*See future Andrew Lectures
Copyright Andrew W. Moore Slide 114
What you should know
Probability
Fundamentals of Probability and Bayes' Rule
What's a Joint Distribution
How to do inference (i.e. P(E1|E2)) once you
have a JD
Density Estimation
What is DE and what is it good for
How to learn a Joint DE
How to learn a naïve DE
Copyright Andrew W. Moore Slide 115
What you should know
Bayes Classifiers
How to build one
How to predict with a BC
Contrast between naïve and joint BCs
Copyright Andrew W. Moore Slide 116
Interesting Questions
Suppose you were evaluating NaiveBC,
JointBC, and Decision Trees
Invent a problem where only NaiveBC would do well
Invent a problem where only Dtree would do well
Invent a problem where only JointBC would do well
Invent a problem where only NaiveBC would do poorly
Invent a problem where only Dtree would do poorly
Invent a problem where only JointBC would do poorly