Probability and Random Variables

Probability and Random Process
Review of Probability
⚫ Some important properties of probability are
a) 0 ≤ P(A) ≤ 1, with P(S) = 1 for the sample space S, and P(Ā) = 1 − P(A)
b) P(A ∪ B) = P(A) + P(B) − P(A ∩ B); e.g., for a fair die, the event "an even number" has probability 3/6 = 1/2
c) P(A|B) = P(A ∩ B)/P(B), the conditional probability of A given B (for P(B) > 0)
⚫ Bayes' Theorem
If E1, E2, . . . , En are mutually exclusive events whose union is the sample space, Bayes' rule gives the conditional probabilities P(Ei | A) by the relation
P(Ei | A) = P(A | Ei) P(Ei) / Σj P(A | Ej) P(Ej)
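A minimal numeric sketch of the rule in Python (the prior and likelihood values below are illustrative, not from the text):

# P(Ei | A) = P(A | Ei) P(Ei) / sum_j P(A | Ej) P(Ej)
priors = [0.5, 0.5]         # P(E0), P(E1): illustrative values
likelihoods = [0.9, 0.2]    # P(A | E0), P(A | E1): illustrative values

evidence = sum(p * l for p, l in zip(priors, likelihoods))            # P(A)
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print(posteriors)           # [0.818..., 0.181...] = P(E0|A), P(E1|A)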
Binomial random variable
⚫ This is a discrete random variable giving the number of 1's in a sequence of n independent Bernoulli trials. The PMF is given by
P(X = k) = C(n, k) p^k (1 − p)^(n−k),  k = 0, 1, . . . , n
where p is the probability of a 1 on each trial; the mean is E(X) = np and the variance is np(1 − p).
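A short numeric sketch of this PMF (using scipy.stats.binom; the values n = 16, p = 0.01 anticipate the block-error example solved later in these notes):

from scipy.stats import binom

n, p = 16, 0.01              # e.g., a 16-bit block with bit-error probability 0.01
X = binom(n, p)

print(X.pmf(0))              # P(X = 0) = (1 - p)^16 ≈ 0.851
print(X.mean(), X.var())     # np = 0.16, np(1 - p) ≈ 0.158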
Q-function
⚫ This function represents the area under the tail of a standard normal random variable:
Q(x) = P(X > x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt
⚫ The Q-function is a decreasing function of x
⚫ This function is well tabulated and frequently used in analyzing the performance of communication systems
Q(x) using a calculator (fx-991ES):
Mode 3 → AC
Shift(1) → 5 → 3: R(x) = Q(x)
[Figure 5.9: The Q-function as the area under the tail of a standard normal random variable]
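Q(x) can also be evaluated in software (a sketch; Q(x) equals 0.5·erfc(x/√2), which matches scipy's standard normal survival function):

from math import erfc, sqrt
from scipy.stats import norm

def Q(x):
    # Q(x) = P(X > x) for X ~ N(0, 1)
    return 0.5 * erfc(x / sqrt(2))

print(Q(1.0))          # ≈ 0.1587
print(norm.sf(1.0))    # same value via the survival function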
Q(x) satisfies the relations
Q(−x) = 1 − Q(x),  Q(0) = 1/2,  Q(∞) = 0
Two important upper bounds on the Q-function are widely used to find bounds on the error probability of various communication systems:
Q(x) ≤ (1/2) e^(−x²/2),  x ≥ 0
Q(x) ≤ (1/(x√(2π))) e^(−x²/2),  x > 0
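A quick numeric check of these bounds (Python sketch):

from math import exp, pi, sqrt
from scipy.stats import norm

for x in (0.5, 1.0, 2.0, 3.0):
    q = norm.sf(x)                              # Q(x)
    b1 = 0.5 * exp(-x**2 / 2)                   # first bound
    b2 = exp(-x**2 / 2) / (x * sqrt(2 * pi))    # second bound
    print(x, q <= b1, q <= b2)                  # both bounds hold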
For an N(m, σ²) random variable, P(X > x) = Q((x − m)/σ)
Example 5.1.6
⚫ X is a Gaussian random variable with mean 1 and variance 4. Find the
probability that X is between 5 and 7
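Solution: Using the scaling relation P(X > x) = Q((x − m)/σ) with m = 1 and σ = 2,
P(5 ≤ X ≤ 7) = Q((5 − 1)/2) − Q((7 − 1)/2) = Q(2) − Q(3) ≈ 0.0228 − 0.0013 ≈ 0.0214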
If Y = g(X), then
E(Y) = E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
and the PDF of Y is
f_Y(y) = Σi f_X(xi) / |g′(xi)|
where the xi are the roots of g(x) = y.

Two random variables X and Y are jointly Gaussian (binormal) if their joint PDF is
f_{X,Y}(x, y) = (1 / (2π σ1 σ2 √(1 − ρ²))) exp{ −[ (x − m1)²/σ1² − 2ρ(x − m1)(y − m2)/(σ1 σ2) + (y − m2)²/σ2² ] / (2(1 − ρ²)) }
where m1, m2, σ1², and σ2² are the means and variances of X and Y, respectively, and ρ is their correlation coefficient.
When two random variables X and Y are distributed according to a binormal distribution, X and Y are themselves normal random variables, and the conditional densities f(x|y) and f(y|x) are also Gaussian.
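In particular, f(x|y) is a Gaussian density with
mean m1 + ρ (σ1/σ2)(y − m2) and variance σ1²(1 − ρ²)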
The definition of two jointly Gaussian random variables can be extended to more random variables.
Properties of Jointly Gaussian Random Variables
⚫ If n random variables are jointly Gaussian, any subset of them is also
distributed according to a jointly Gaussian distribution of the appropriate
size. In particular, all individual random variables are Gaussian.
⚫ Jointly Gaussian random variables are completely characterized by the means of all the random variables m1, m2, . . . , mn and the set of all covariances COV(Xi, Xj) for 1 ≤ i ≤ n and 1 ≤ j ≤ n. These so-called second-order properties completely describe the random variables.
⚫ Any set of linear combinations of (X1, X2, . . . , Xn) is itself jointly Gaussian. In particular, any linear combination of the Xi's is a Gaussian random variable.
⚫ Two uncorrelated jointly Gaussian random variables are independent. Therefore, for jointly Gaussian random variables, independence and uncorrelatedness are equivalent. As previously stated, this is not true for general random variables (a small numerical sketch follows this list).
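A small numerical sketch of these properties (NumPy; the means and covariance matrix below are illustrative choices, not values from the text):

import numpy as np

rng = np.random.default_rng(0)
m = [1.0, 2.0]                      # illustrative means m1, m2
cov = [[4.0, 1.2], [1.2, 1.0]]      # illustrative covariances (ρ = 0.6)

xy = rng.multivariate_normal(m, cov, size=200_000)
x, y = xy[:, 0], xy[:, 1]

# A linear combination of jointly Gaussian RVs is Gaussian, with mean and
# variance fixed by the second-order properties:
z = 3 * x - 2 * y
print(z.mean())   # ≈ 3*1 - 2*2 = -1
print(z.var())    # ≈ 9*4 + 4*1 - 12*1.2 = 25.6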
5.1.6 Sums of Random Variables
Law of large numbers and central limit theorem
⚫ Law of large numbers: It states that if the random variables X1, X2, . . . , Xn are uncorrelated with the same mean mx and variance σ² < ∞, or the Xi's are i.i.d. (independent and identically distributed) random variables, then for any ε > 0,
lim(n→∞) P( |Y − mx| ≥ ε ) = 0
where
Y = (1/n) (X1 + X2 + · · · + Xn)
This means that the sample average converges (in probability) to the expected value.
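A quick simulation of this convergence (a sketch; i.i.d. uniform samples with mean 0.5 are used purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)   # i.i.d. samples, E[X] = 0.5

for n in (10, 1_000, 1_000_000):
    print(n, x[:n].mean())                  # sample average approaches 0.5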
⚫ Central limit theorem: This theorem states that if the Xi's are i.i.d. (independent and identically distributed) random variables, each with mean m and variance σ², then Y = (1/n) (X1 + · · · + Xn) converges (in distribution) to N(m, σ²/n). In other words, the (suitably normalized) sum of many i.i.d. random variables converges to a Gaussian random variable. This theorem explains why thermal noise follows a Gaussian distribution.
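A sketch illustrating the theorem with averages of i.i.d. uniform samples (mean 0.5, variance 1/12):

import numpy as np

rng = np.random.default_rng(2)
n, trials = 30, 100_000
y = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)   # averages of n samples

# CLT prediction: Y is approximately N(0.5, (1/12)/n)
print(y.mean())   # ≈ 0.5
print(y.std())    # ≈ sqrt(1/(12*30)) ≈ 0.053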
Example: A block of 16 bits is transmitted over a channel with bit-error probability p = 0.01.
(a) Find the average number of errors per block.
(b) Find the variance of the number of errors per block.
(c) Find the probability that the number of errors per block is greater than or equal to 4.
Solution:
(a) Let X represent the number of errors per block. X has a binomial distribution with n = 16 and p = 0.01. The average number of errors per block is E(X) = np = 0.16.
(b) Variance: σX² = np(1 − p) = 0.158
(c) P(X ≥ 4) = 1 − P(X ≤ 3)
= 1 − [P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)]
= 1 − [0.8515 + 0.1376 + 0.0104 + 0.0005]
≈ 1.4 × 10⁻⁵
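The tail probability can also be verified directly (scipy's survival function gives P(X > k)):

from scipy.stats import binom

print(binom.sf(3, 16, 0.01))   # P(X >= 4) ≈ 1.4e-05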
Solution:
(a) The problem gives f_X(x) = k for a ≤ x ≤ b (a uniform density). We know that
∫_{−∞}^{∞} f_X(x) dx = ∫_a^b k dx = 1, or k = 1/(b − a)
Hence
f_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise
(b) Here the interval has length b − a = 3, so f_X(x) = 1/3 on the interval, and
P(|X| ≤ 1/2) = P(−1/2 ≤ X ≤ 1/2) = ∫_{−1/2}^{1/2} f_X(x) dx = ∫_{−1/2}^{1/2} (1/3) dx = 1/3
For an N(4, 9) random variable (m = 4, σ = 3):
1. P(X > 7) = Q((7 − 4)/3) = Q(1) = 0.158
2. P(0 < X < 9) = Q((0 − 4)/3) − Q((9 − 4)/3)
= Q(−4/3) − Q(5/3)
= [1 − Q(4/3)] − Q(5/3)
= 0.858
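These values can be checked in software (norm.sf is the Q-function of a standard normal):

from scipy.stats import norm

print(norm.sf(1.0))                    # Q(1) ≈ 0.159
print(norm.sf(-4/3) - norm.sf(5/3))    # ≈ 0.861 (table lookup above gives 0.858)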
The joint density is f_{X,Y}(x, y) = K e^(−x−y) for 0 ≤ y ≤ x, and 0 otherwise. To find K, apply the normalization condition:
∫∫ f_{X,Y}(x, y) dx dy = 1
∫_0^∞ ∫_y^∞ K e^(−x−y) dx dy = K ∫_0^∞ e^(−y) [ ∫_y^∞ e^(−x) dx ] dy
= K ∫_0^∞ e^(−y) e^(−y) dy = K ∫_0^∞ e^(−2y) dy
= K [ −e^(−2y)/2 ]_0^∞
= K/2 = 1
Hence K = 2.
The marginal densities are
f_X(x) = ∫_0^x 2 e^(−x−y) dy = 2 e^(−x) (1 − e^(−x)),  x ≥ 0
f_Y(y) = ∫_y^∞ 2 e^(−x−y) dx = 2 e^(−2y),  y ≥ 0
Since f_X(x) f_Y(y) ≠ f_{X,Y}(x, y), LHS ≠ RHS.
Hence X and Y are not independent.
4. Conditional density function
If x < y, then fX|Y(x|y) = 0.
If x ≥ y, then
fX|Y(x|y) = f_{X,Y}(x, y) / f_Y(y) = 2 e^(−x−y) / (2 e^(−2y)) = e^(−(x−y))
5. Means: E(X) = 3/2 and E(Y) = 1/2, computed from the marginal densities above.
6. Covariance of X and Y
COV(X, Y) = E(X Y) - E(X) E(Y)
Hence E(XY) = ∫_0^∞ ∫_y^∞ 2 x y e^(−x−y) dx dy = 1,
and COV(X, Y) = E(XY) − E(X) E(Y) = 1 − (3/2)(1/2) = 1/4.
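These values can be cross-checked numerically (a sketch using scipy.integrate.dblquad; the region 0 ≤ y ≤ x is encoded in the integration limits):

import numpy as np
from scipy.integrate import dblquad

# Joint density f(x, y) = 2 e^(-x-y) on the region 0 <= y <= x.
f = lambda x, y: 2 * np.exp(-x - y)

# dblquad integrates func(inner, outer): here the outer variable is y in (0, inf)
# and the inner variable is x in (y, inf).
norm_, _ = dblquad(f, 0, np.inf, lambda y: y, lambda y: np.inf)
e_xy, _ = dblquad(lambda x, y: x * y * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)
e_x, _ = dblquad(lambda x, y: x * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)
e_y, _ = dblquad(lambda x, y: y * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)

print(norm_)              # ≈ 1.0 (confirms K = 2)
print(e_x, e_y, e_xy)     # ≈ 1.5, 0.5, 1.0
print(e_xy - e_x * e_y)   # COV(X, Y) ≈ 0.25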