Understanding Bayes' Theorem
Introduction to Probability
Probability measures how likely an event is to occur.
Key terms:
Experiment: An action with an uncertain outcome.
Sample Space (S): All possible outcomes.
Event (E): A subset of outcomes from the sample space.
Example: Probability of rolling a 3 on a fair die = 1/6
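As a quick check of this example in Python (our own sketch, not part of the original slides):

favorable = 1        # ways to roll a 3
total_outcomes = 6   # faces of a fair die
print(favorable / total_outcomes)  # 0.1666... = 1/6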
What is Bayes’ Theorem?
Bayes' Theorem helps us update our beliefs based on new evidence.
Formula:
P(A|B) = [P(B|A) × P(A)] / P(B)
Understanding the Terms
Prior Probability (P(A)): Belief about A before evidence.
Likelihood (P(B|A)): Probability of evidence B assuming A is true.
Marginal Probability (P(B)): Total probability of evidence B.
Posterior Probability (P(A|B)): Updated belief about A after seeing B.
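To show how these four quantities fit together, here is a minimal Python sketch; the function name and the numbers are illustrative assumptions, not values from the slides.

def posterior(prior, likelihood, marginal):
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    return likelihood * prior / marginal

# Assumed illustrative values: P(A) = 0.3, P(B|A) = 0.8, P(B) = 0.5
print(posterior(prior=0.3, likelihood=0.8, marginal=0.5))  # 0.48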
Formula Breakdown
Step-by-step:
1. Start with prior belief: P(A)
2. Multiply by likelihood: P(B|A)
3. Divide by total probability of B: P(B)
Visual Aid Idea: Use a flowchart or Venn diagram to
show relationships.
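A worked instance of these three steps, reusing the assumed numbers from the sketch above (P(A) = 0.3, P(B|A) = 0.8, P(B) = 0.5):

\[
P(A) = 0.3, \qquad P(B \mid A)\,P(A) = 0.8 \times 0.3 = 0.24, \qquad P(A \mid B) = \frac{0.24}{P(B)} = \frac{0.24}{0.5} = 0.48
\]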
Why is Bayes’ Theorem Important?
Helps make decisions under uncertainty
Useful in:
Diagnosing diseases
Filtering spam
Machine learning algorithms
Legal judgments
Advanced Example / Case Study
Medical Testing Example:
Disease prevalence = 1%
Test sensitivity = 99% (P(positive | disease)); false positive rate = 5% (P(positive | no disease))
Someone tests positive. What’s the actual chance they have the disease?
Use the formula:
P(Disease | Positive) = (0.99 × 0.01) / (0.99 × 0.01 + 0.05 × 0.99) = 0.0099 / 0.0594 ≈ 0.167
Conclusion: Even with a positive result, there's only a 16.7% chance of
having the disease.
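A small Python check of this calculation (a sketch using the numbers given above; the variable names are ours):

prevalence = 0.01            # P(Disease)
sensitivity = 0.99           # P(Positive | Disease)
false_positive_rate = 0.05   # P(Positive | No Disease)

# Law of total probability for the marginal P(Positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # 0.167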
Applications of Bayes’ Theorem
Medicine: Diagnosis and disease prediction
AI & Machine Learning: Naive Bayes classifiers (see the sketch after this list)
Finance: Risk assessment and market predictions
Law: Assessing guilt based on evidence
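To make the spam-filtering and Naive Bayes entries concrete, here is a minimal Python sketch of the Bayesian update a spam filter performs for a single word; the word and every probability are assumptions made for illustration, not figures from the slides.

def spam_posterior(p_spam, p_word_given_spam, p_word_given_ham):
    # P(spam | word), with the marginal P(word) expanded over the two classes
    numerator = p_word_given_spam * p_spam
    marginal = numerator + p_word_given_ham * (1 - p_spam)
    return numerator / marginal

# Assumed: 40% of mail is spam; "offer" appears in 60% of spam messages
# and in 5% of legitimate ones.
print(round(spam_posterior(0.4, 0.6, 0.05), 3))  # 0.889

A full Naive Bayes classifier extends this idea by combining such likelihoods over many words under an independence assumption.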
Bayesian vs Frequentist (Optional)
Aspect           Bayesian                         Frequentist
Interpretation   Probability as belief            Probability as long-term frequency
Uses Prior?      Yes, includes prior knowledge    No, relies only on data from the current study
Output           Probability of a hypothesis      Probability of data given a hypothesis
Limitations of Bayes’ Theorem
Requires accurate prior probabilities
Can be sensitive to bias in prior or evidence
Not always easy to compute P(B) in complex cases
Misuse can lead to wrong conclusions
Advantages of Bayes’ Theorem:
Incorporates Prior Knowledge:
Allows the use of existing knowledge or beliefs (prior probabilities) in calculations.
Works Well with New Evidence:
Continuously updates probabilities as new data becomes available, which makes it well suited to dynamic systems (see the sketch after this list).
Clear Probabilistic Interpretation:
Provides an intuitive framework for understanding uncertainty and making decisions.
Useful in Real-World Applications:
Widely used in medicine, machine learning (e.g., Naive Bayes), spam filters, and legal reasoning.
Handles Incomplete Data:
Can give meaningful results even with partial or indirect information.
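As a small illustration of the updating advantage above, the sketch below repeatedly applies Bayes' theorem so that each posterior becomes the prior for the next piece of evidence; the starting prior and the likelihoods are assumed values, not from the slides.

def update(prior, likelihood_if_true, likelihood_if_false):
    # One Bayesian update; returns the posterior probability that the hypothesis is true
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.5  # assumed starting prior
for _ in range(3):  # three pieces of evidence, each with assumed likelihoods 0.9 vs 0.3
    belief = update(belief, 0.9, 0.3)
print(round(belief, 3))  # 0.964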
Disadvantages of Bayes’ Theorem:
Requires Accurate Prior Probabilities:
If priors are wrong or biased, the final results may be misleading.
Computationally Intensive:
In complex models or large datasets, calculating probabilities (especially marginal probability)
can be difficult.
Not Always Intuitive:
Understanding or choosing prior probabilities and interpreting posterior probabilities can be
confusing.
Sensitive to Input Data:
A small error in likelihood or prior can significantly affect the output.
May Seem Subjective:
Critics argue that using prior beliefs can make Bayesian analysis less "objective" than
frequentist methods.