Posterior probability is a key concept in Bayesian statistics that represents the updated probability of a hypothesis given new evidence. It combines prior beliefs with new data to provide a revised probability that incorporates the new information.
In this article, we have covered the definition of posterior probability, its formula, applications, and worked examples in detail.
What Is a Posterior Probability?
In Bayesian statistics, posterior probability is the revised or updated probability of an event after taking into account new information. The posterior probability is calculated by updating the prior probability using the Bayes Theorem.
Posterior probabilities are conditional probabilities obtained by using Bayes' rule to update prior probabilities with newly observed information. From an epistemological point of view, the posterior combines everything known about a hypothesis (such as a scientific hypothesis or a parameter value) at a given time: the background knowledge encoded in the prior and the mathematical model that describes the available observations. Once new information is received, the current posterior probability can serve as the prior for another round of Bayesian updating.
By the definition of conditional probability, the posterior probability of A occurring given that B occurred is:
P(A∣B) = P(A∩B)/P(B)
where:
- A, B = Events
- P(A∩B) = Probability of both A and B occurring
- P(B) = Probability of B occurring
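As a quick illustration of this definition, the sketch below estimates P(A|B) from joint event counts; the counts and variable names are purely illustrative assumptions, not values from this article.

```python
# Minimal sketch: estimating P(A|B) = P(A ∩ B) / P(B) from made-up counts.
# All numbers below are purely illustrative.

total = 1000           # total number of trials
count_B = 400          # trials where B occurred
count_A_and_B = 120    # trials where both A and B occurred

p_B = count_B / total              # P(B)
p_A_and_B = count_A_and_B / total  # P(A ∩ B)

p_A_given_B = p_A_and_B / p_B      # P(A|B)
print(f"P(A|B) = {p_A_given_B:.3f}")  # 0.300
```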
Posterior probability is the probability of an event occurring after taking into account new information or evidence. It is calculated using Bayes' theorem, which relates the conditional probabilities of two events.
Posterior Probability Formula
The formula for posterior probability is:
P(A|B) = (P(B|A) × P(A)) / P(B)
where:
- P(A|B) is Posterior Probability of event A occurring given that event B is observed
- P(B|A) is Probability of observing event B given that event A has occurred
- P(A) is Prior Probability of event A occurring before observing event B
- P(B) is Probability of observing event B, which can be computed with the law of total probability: P(B) = P(B|A)P(A) + P(B|¬A)P(¬A)
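The short Python sketch below applies this formula directly. The helper name bayes_posterior and the input numbers are our own illustrative choices; P(B) is expanded using the law of total probability.

```python
# Minimal sketch: Bayes' theorem P(A|B) = P(B|A) * P(A) / P(B),
# with P(B) expanded via the law of total probability.

def bayes_posterior(prior_A, likelihood_B_given_A, likelihood_B_given_not_A):
    """Return P(A|B) from the prior P(A) and the two likelihoods."""
    p_not_A = 1 - prior_A
    evidence = (likelihood_B_given_A * prior_A
                + likelihood_B_given_not_A * p_not_A)  # P(B)
    return likelihood_B_given_A * prior_A / evidence

# Illustrative numbers only: prior 30%, likelihoods 0.8 and 0.2.
print(bayes_posterior(0.30, 0.80, 0.20))  # ≈ 0.632
```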
Applications of Posterior Probability
Posterior probability has wide applications; some areas where it is used are:
- Medical Diagnosis: Updating the likelihood of diseases based on test results.
- Machine Learning: In Bayesian classifiers and models.
- Economics: Updating predictions based on new market data.
- Engineering: Risk assessment and reliability analysis.
Conclusion
A posterior probability, in Bayesian statistics, is the revised or updated probability of an event occurring after taking into account new information. It is calculated by updating the prior probability using Bayes' theorem.
Here, we have discussed the formula for calculating the posterior probability and illustrated it with worked examples for a better understanding of the topic, including how the rules of probability (conditional probability and the law of total probability) enter the calculation.
Examples on Posterior Probability
Example 1: Suppose a medical test for a rare disease has a false positive rate of 5% and a false negative rate of 2%. The prevalence of the disease in the general population is 0.1%. If a person tests positive for the disease, what is the probability that the person actually has the disease (posterior probability)?
Solution:
Let,
- A = Person has the disease
- B = Positive Test Result
We need to calculate P(A|B), the posterior probability of having the disease given a positive test result.
Given,
- P(B|A) = 0.98 (true positive rate, or 1 - false negative rate)
- P(¬B|¬A) = 0.95 (true negative rate, or 1 - false positive rate)
- P(A) = 0.001 (prevalence of the disease)
Using Bayes' theorem:
P(A|B) = (P(B|A) × P(A)) / P(B)
We need to calculate P(B), the probability of a positive test result:
P(B) = P(B|A) × P(A) + P(B|¬A) × P(¬A)
= (0.98 × 0.001) + (0.05 × 0.999)
= 0.00098 + 0.04995
= 0.05093
Substituting the values in Bayes' theorem:
P(A|B) = (0.98 × 0.001) / 0.05093
= 0.0192 or about 1.9%
Therefore, the posterior probability that the person has the disease given a positive test result is 1.9%.
This example shows how the posterior probability revises the initial probability (prior probability) based on new evidence (the positive test result) using Bayes' theorem.
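As a quick check of the arithmetic, the following sketch reproduces Example 1 with the same inputs (the variable names are our own):

```python
# Reproducing Example 1: P(disease | positive test) via Bayes' theorem.
p_disease = 0.001            # prior P(A): prevalence
p_pos_given_disease = 0.98   # P(B|A): 1 - false negative rate
p_pos_given_healthy = 0.05   # P(B|¬A): false positive rate

# Law of total probability: P(B)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

posterior = p_pos_given_disease * p_disease / p_pos
print(f"P(B) = {p_pos:.5f}")        # 0.05093
print(f"P(A|B) = {posterior:.4f}")  # ≈ 0.0192, i.e. about 1.9%
```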
Example 2: A manufacturer produces two types of light bulbs, A and B. Past records show that 60% of the bulbs produced are type A and the remaining 40% are type B. The probability that a type A bulb will be defective is 0.03, and the probability that a type B bulb will be defective is 0.05. If a bulb is selected at random and found to be defective, what is the probability that it is a type A bulb?
Solution:
- Let A = Event that the bulb is type A
- Let B = Event that the bulb is type B
- Let D = Event that the bulb is defective
We want to find P(A|D), the posterior probability that the bulb is type A given that it is defective.
Using Bayes' theorem:
P(A|D) = (P(D|A) × P(A)) / P(D)
We know:
P(A) = 0.6 (60% of bulbs are type A)
P(B) = 0.4 (40% of bulbs are type B)
P(D|A) = 0.03 (probability a type A bulb is defective)
P(D|B) = 0.05 (probability a type B bulb is defective)
P(D) = P(D|A)P(A) + P(D|B)P(B)
= (0.03 × 0.6) + (0.05 × 0.4)
= 0.038
Substituting in the values:
P(A|D) = (0.03 × 0.6) / 0.038
= 0.474 or 47.4%
Therefore, the posterior probability that a defective bulb is type A is 47.4%.
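Similarly, the sketch below reproduces Example 2 and confirms the 47.4% result (the variable names are our own):

```python
# Reproducing Example 2: P(type A | defective) via Bayes' theorem.
p_A = 0.6              # prior: 60% of bulbs are type A
p_B = 0.4              # prior: 40% of bulbs are type B
p_def_given_A = 0.03   # P(D|A)
p_def_given_B = 0.05   # P(D|B)

p_def = p_def_given_A * p_A + p_def_given_B * p_B  # P(D)
posterior_A = p_def_given_A * p_A / p_def
print(f"P(D) = {p_def:.3f}")          # 0.038
print(f"P(A|D) = {posterior_A:.3f}")  # ≈ 0.474, i.e. about 47.4%
```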