Likelihood Weighting in Artificial Intelligence

Probabilistic inference involves reasoning under uncertainty and making predictions or decisions based on probabilistic models. One of the key challenges in probabilistic inference is efficiently estimating the probabilities of certain events or the expectations of random variables, especially in complex models with many variables.

Likelihood weighting is a Monte Carlo sampling technique that addresses this challenge by providing a way to estimate probabilities and expectations in Bayesian networks or other probabilistic graphical models. Unlike naive sampling methods, likelihood weighting incorporates evidence directly into the sampling process, leading to more accurate and efficient estimates.

Bayesian Networks and Probabilistic Inference

Before diving into likelihood weighting, it’s important to understand the context in which it is used.  

A Bayesian network is a graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG).

  • In the DAG, each node represents a random variable, and each directed edge represents a conditional dependency.
  • The joint probability distribution of the variables can be factored into a product of conditional probabilities, as specified by the network structure.
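
For a network over variables X_1, X_2, ..., X_n, this factorization reads:

P(X_1, X_2, ..., X_n) = P(X_1 | Parents(X_1)) × P(X_2 | Parents(X_2)) × ... × P(X_n | Parents(X_n))

where Parents(X_i) denotes the set of parents of X_i in the DAG.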

Probabilistic inference in Bayesian networks involves computing the posterior probability of certain variables given observed evidence.

For example, given a Bayesian network representing a medical diagnosis system, we might want to compute the probability of a disease given observed symptoms. Exact inference methods, such as variable elimination or belief propagation, can be computationally expensive, especially for large networks. This is where approximate inference methods, like likelihood weighting, come into play.

Challenge of Sampling with Evidence

Monte Carlo methods are a class of algorithms that rely on random sampling to approximate complex probabilities or expectations.

In the context of Bayesian networks, a straightforward Monte Carlo approach might involve generating random samples from the joint distribution and using them to estimate probabilities. However, this approach becomes inefficient when dealing with evidence (observed variables), as most samples may be inconsistent with the evidence and thus contribute little to the estimate.

For example, suppose we have a Bayesian network with variables X_1, X_2, ..., X_n, and we observe evidence X_k = x_k. A naive sampling approach (rejection sampling) would generate samples from the joint distribution and discard those that do not match the evidence. This can be highly inefficient, especially if the evidence is unlikely under the prior distribution, as the sketch below illustrates.
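
To make this inefficiency concrete, here is a minimal rejection sampling sketch in Python for a hypothetical two-variable network A → B; the CPT values are illustrative assumptions, not taken from any particular model:

```python
import random

# Hypothetical network A -> B with assumed CPTs (illustrative values only).
P_A = {1: 0.1, 0: 0.9}            # prior P(A)
P_B_GIVEN_A = {1: 0.8, 0: 0.05}   # P(B=1 | A)

def rejection_sample(n_samples, evidence_b=1):
    """Estimate P(A=1 | B=evidence_b) by discarding inconsistent samples."""
    kept = a_matches = 0
    for _ in range(n_samples):
        a = 1 if random.random() < P_A[1] else 0
        b = 1 if random.random() < P_B_GIVEN_A[a] else 0
        if b != evidence_b:
            continue              # sample is wasted: it contradicts the evidence
        kept += 1
        a_matches += (a == 1)
    return (a_matches / kept if kept else float("nan")), kept

estimate, kept = rejection_sample(100_000)
print(f"P(A=1 | B=1) ~ {estimate:.3f}; samples kept: {kept} of 100000")
```

With these assumed numbers only about 12.5% of the samples survive (P(B=1) = 0.1 × 0.8 + 0.9 × 0.05 = 0.125), so most of the sampling work is thrown away.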

Likelihood Weighting

Likelihood weighting is a technique that addresses this inefficiency by incorporating evidence directly into the sampling process. Instead of discarding samples that do not match the evidence, likelihood weighting assigns a weight to each sample based on how well it matches the evidence. This weight reflects the likelihood of the evidence given the sample, hence the name "likelihood weighting."

Likelihood Weighting Algorithm

The likelihood weighting algorithm proceeds as follows (a Python sketch of the full procedure appears after the list):

  1. Initialize: Start with an empty sample and set its weight to 1.
  2. Sample Variables: For each variable in the Bayesian network, in topological order (parents before children):
    • If the variable is observed (part of the evidence), set its value to the observed value and update the weight by multiplying it with the conditional probability of the observed value given its parents.
    • If the variable is not observed, sample its value from its conditional distribution given its parents.
  3. Store the Sample: Add the sampled values and the computed weight to the set of weighted samples.
  4. Repeat: Repeat the process to generate a large number of weighted samples.
  5. Estimate Probabilities: Use the weighted samples to estimate probabilities or expectations. For example, the probability of a query variable X_q taking a specific value can be estimated by summing the weights of samples where X_q takes that value and dividing by the total weight of all samples.
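
The following Python sketch implements these steps for discrete (binary) networks. The representation, a topologically ordered list of (name, parents, CPT) triples, is an assumption made for illustration rather than a standard API:

```python
import random

def weighted_sample(nodes, evidence):
    """Generate one weighted sample via likelihood weighting.

    nodes: list of (name, parents, cpt) in topological order, where cpt
           maps a tuple of parent values to P(name = 1 | parents).
    evidence: dict mapping observed variable names to their values.
    """
    sample, weight = {}, 1.0
    for name, parents, cpt in nodes:
        p_true = cpt[tuple(sample[p] for p in parents)]
        if name in evidence:
            value = evidence[name]
            # Fix the observed value and weight by its likelihood given the parents.
            weight *= p_true if value == 1 else 1.0 - p_true
        else:
            # Unobserved: sample from the conditional distribution given the parents.
            value = 1 if random.random() < p_true else 0
        sample[name] = value
    return sample, weight

def likelihood_weighting(nodes, evidence, query, value, n_samples=100_000):
    """Estimate P(query = value | evidence) from n_samples weighted samples."""
    num = den = 0.0
    for _ in range(n_samples):
        sample, w = weighted_sample(nodes, evidence)
        den += w
        if sample[query] == value:
            num += w
    return num / den if den else float("nan")
```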

Example of Likelihood Weighting in Bayesian Networks

Consider a simple Bayesian network with three binary variables: A, B, and C, where A is the parent of B and B is the parent of C. Suppose we observe evidence C=1 and want to estimate the probability P(A=1∣C=1); a runnable encoding of this example follows the steps below.

  1. Start with an empty sample and set its weight to 1.
  2. Sample A from its prior distribution P(A). Suppose we sample A=1.
  3. Since B depends on A, sample B from P(B∣A=1). Suppose we sample B=0.
  4. Since C is observed, set C=1 and update the weight by multiplying it with P(C=1∣B=0). Suppose P(C=1∣B=0)=0.3, so the weight becomes 1 × 0.3 = 0.3.
  5. Store the sample (A=1,B=0,C=1) with weight 0.3.
  6. Repeat the process to generate many weighted samples.
  7. Estimate P(A=1∣C=1) by summing the weights of samples where A=1 and dividing by the total weight of all samples.
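
Using the likelihood_weighting routine sketched above, this chain A → B → C can be encoded as follows. Only P(C=1∣B=0) = 0.3 comes from the walkthrough; the remaining CPT values are assumptions made so the example runs:

```python
# Chain network A -> B -> C. P(C=1 | B=0) = 0.3 is from the walkthrough;
# all other probabilities are illustrative assumptions.
nodes = [
    ("A", (), {(): 0.4}),                    # assumed prior P(A=1)
    ("B", ("A",), {(1,): 0.7, (0,): 0.2}),   # assumed P(B=1 | A)
    ("C", ("B",), {(1,): 0.9, (0,): 0.3}),   # P(C=1 | B=0) = 0.3 as in step 4
]

estimate = likelihood_weighting(nodes, evidence={"C": 1}, query="A", value=1)
print(f"P(A=1 | C=1) ~ {estimate:.3f}")
```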

Advantages of Likelihood Weighting

  • Efficiency: Likelihood weighting avoids the inefficiency of discarding samples by incorporating evidence directly into the sampling process.
  • Flexibility: It can be applied to any Bayesian network, regardless of its structure.
  • Simplicity: The algorithm is straightforward to implement and does not require complex data structures or computations.

Limitations of Likelihood Weighting

  • Weight Degeneration: In some cases, the weights of the samples can become very small or highly uneven, leading to poor estimates. This is especially problematic when the evidence is unlikely under the prior distribution (a common diagnostic is given after this list).
  • Bias: The normalized likelihood weighting estimator is biased for finite sample sizes, although it is consistent: the bias decreases as the number of samples increases.
  • Dependence on Evidence: The quality of the estimates depends on the strength and nature of the evidence. In some cases, other sampling methods, such as Markov Chain Monte Carlo (MCMC), may be more effective.
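
A common diagnostic for weight degeneration is the effective sample size of the weights w_1, ..., w_N:

ESS = (w_1 + ... + w_N)^2 / (w_1^2 + ... + w_N^2)

An ESS much smaller than N indicates that a few samples dominate the estimate, and that more samples or an alternative method such as MCMC may be needed.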
