Chapter 5. Probability and Random Process
Review of Probability
⚫ Some important properties of probability are:
P(Aᶜ) = 1 − P(A)
P(∅) = 0
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
If A ⊂ B, then P(A) ≤ P(B)
⚫ Conditional probability: P(A|B) = P(A ∩ B) / P(B), provided P(B) > 0
⚫ Bayes' Theorem
Bayes' rule gives the conditional probabilities P(Ei | A) by the relation
P(Ei | A) = P(A | Ei) P(Ei) / Σj P(A | Ej) P(Ej)
Binomial random variable
⚫ This is a discrete random variable giving the number of 1's in a sequence of n independent Bernoulli trials. The PMF is given by
P(X = k) = C(n, k) p^k (1 − p)^(n−k), k = 0, 1, . . . , n
where p is the probability of a 1 on each trial.
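As a quick numerical illustration, the PMF can be evaluated directly; the following is a minimal Python sketch (the parameter values n = 10, p = 0.3, k = 2 are arbitrary):

# Evaluate the binomial PMF P(X = k) = C(n, k) p^k (1-p)^(n-k)
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k ones in n independent Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(2, 10, 0.3))  # ~0.2335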
⚫ This function represents the area under the tail of a standard normal random variable:
Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt
⚫ The Q-function is a decreasing function of x
⚫ It is well tabulated and frequently used in analyzing the performance of communication systems
Q(x) using a calculator (fx-991ES):
Mode 3 → AC
Shift(1) → 5 → 3: R(x) = Q(x)
Figure 5.9 The Q-function as the area under the tail of a standard normal random variable
Q(x) satisfies the relations
Q(−x) = 1 − Q(x), Q(0) = 1/2, Q(∞) = 0
Two important upper bounds on the Q-function are widely used to find bounds on the error probability of various communication systems:
Q(x) ≤ (1/2) e^(−x²/2) for x ≥ 0
Q(x) < (1/(x√(2π))) e^(−x²/2) for x > 0
For an N(m, σ²) random variable, P(X > x) = Q((x − m)/σ)
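In practice, Q(x) is commonly computed through the complementary error function, using the standard identity Q(x) = (1/2) erfc(x/√2). A minimal Python sketch:

# Q-function and the tail probability of a general Gaussian random variable
from math import erfc, sqrt

def Q(x: float) -> float:
    """Tail probability P(X > x) for a standard normal X."""
    return 0.5 * erfc(x / sqrt(2))

def gaussian_tail(x: float, m: float, sigma: float) -> float:
    """P(X > x) for X ~ N(m, sigma^2), via Q((x - m) / sigma)."""
    return Q((x - m) / sigma)

print(Q(1.0))                   # ~0.1587
print(gaussian_tail(7, 4, 3))   # same value, Q(1)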
Example 5.1.6
⚫ X is a Gaussian random variable with mean 1 and variance 4. Find the probability that X is between 5 and 7.
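Solution (using the relation P(X > x) = Q((x − m)/σ) with m = 1 and σ = 2, and standard Q-table values):
P(5 < X < 7) = P(X > 5) − P(X > 7)
= Q((5 − 1)/2) − Q((7 − 1)/2)
= Q(2) − Q(3) ≈ 0.0228 − 0.0013 ≈ 0.0214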
If Y = g(X), then
E(Y) = E(g(X)) = ∫_{−∞}^{∞} g(x) f_X(x) dx
Variance of X
⚫ In the special case where g(X) = (X − E(X))², E(g(X)) is called the variance of X
⚫ E(g(X)) = E[(X − E(X))²]
= E[X² + (E(X))² − 2X E(X)]
= E(X²) + (E(X))² − 2 E(X) E(X)
= E(X²) + (E(X))² − 2(E(X))²
E(g(X)) = E(X²) − (E(X))²
⚫ The variance is denoted by σ², and its square root σ is called the standard deviation
σ² = E(X²) − (E(X))², or simply σ² = E(X²) − mX², where mX = E(X)
Two random variables X and Y are jointly Gaussian (binormal) if their joint density is
f_{X,Y}(x, y) = 1/(2π σ1 σ2 √(1 − ρ²)) · exp{ −[(x − m1)²/σ1² − 2ρ(x − m1)(y − m2)/(σ1σ2) + (y − m2)²/σ2²] / (2(1 − ρ²)) }
where m1, m2, σ1², and σ2² are the means and variances of X and Y, respectively, and ρ is their correlation coefficient.
When two random variables X and Y are distributed according to a binormal distribution, then X and Y are normal random variables, and the conditional densities f(x|y) and f(y|x) are also Gaussian.
The definition of two jointly Gaussian random variables can be extended to more random variables.
Properties of Jointly Gaussian Random Variables
⚫ If n random variables are jointly Gaussian, any subset of them is also
distributed according to a jointly Gaussian distribution of the appropriate
size. In particular, all individual random variables are Gaussian.
⚫ Jointly Gaussian random variables are completely characterized by the means of all the random variables m1, m2, . . . , mn and the set of all covariances COV(Xi, Xj) for 1 ≤ i ≤ n and 1 ≤ j ≤ n. These so-called second-order properties completely describe the random variables.
⚫ Any set of linear combinations of (X1, X2, . . . , Xn) is itself jointly Gaussian. In particular, any linear combination of the Xi's is a Gaussian random variable.
⚫ Two uncorrelated jointly Gaussian random variables are independent. Therefore, for jointly Gaussian random variables, independence and uncorrelatedness are equivalent. As previously stated, this is not true for general random variables. (A short numerical check of these properties follows.)
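These properties are easy to check by simulation. A minimal Python sketch, with an arbitrary illustrative mean vector and covariance matrix (not values from the slides):

# Check that a linear combination of jointly Gaussian variables is Gaussian
import numpy as np

rng = np.random.default_rng(0)
mean = [0.0, 0.0]
cov = [[1.0, 0.6], [0.6, 2.0]]      # second-order properties fully specify the pair
x = rng.multivariate_normal(mean, cov, size=100_000)

z = 3.0 * x[:, 0] - 2.0 * x[:, 1]   # a linear combination a*X1 + b*X2
# Theory: Var(z) = a^2 C11 + b^2 C22 + 2ab C12 = 9(1) + 4(2) + 2(3)(-2)(0.6) = 9.8
print(z.mean(), z.var())            # ~0 and ~9.8, consistent with N(0, 9.8)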
5.1.6 Sums of Random Variables
Law of large numbers and central limit theorem
⚫ Law of large numbers: If the random variables X1, X2, . . . , Xn are uncorrelated with the same mean mX and variance σ² < ∞, or the Xi's are i.i.d. (independent and identically distributed) random variables, then for any ε > 0,
lim_{n→∞} P(|Y − mX| ≥ ε) = 0
where
Y = (1/n) Σ_{i=1}^{n} Xi
This means that the sample average converges (in probability) to the expected value.
⚫ Central limit theorem: This theorem states that if the Xi's are i.i.d. (independent and identically distributed) random variables, each with mean m and variance σ², then Y = (1/n) Σ_{i=1}^{n} Xi converges (in distribution) to N(m, σ²/n). In other words, the sum of many i.i.d. random variables, suitably normalized, converges to a Gaussian random variable. This theorem explains why thermal noise follows a Gaussian distribution.
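Both theorems can be illustrated by simulation. A minimal Python sketch using i.i.d. Uniform(0, 1) samples (an assumed choice; any finite-variance distribution behaves the same way):

# LLN: sample means concentrate around m; CLT: they are ~N(m, var/n)
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 5000
m, var = 0.5, 1.0 / 12.0                 # mean and variance of Uniform(0, 1)

samples = rng.uniform(0.0, 1.0, size=(trials, n))
averages = samples.mean(axis=1)          # one sample mean per trial

print(abs(averages.mean() - m))          # LLN: close to 0
print(averages.var(), var / n)           # CLT: both ~8.3e-5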
Example
Data are transmitted in blocks of 16 bits, and each bit is received in error with probability p = 0.01, independently of the other bits.
(a) Find the average number of errors per block.
(b) Find the variance of the number of errors per block.
(c) Find the probability that the number of errors per block is greater than or equal to 4.
Solution:
(a) Let X represent the number of errors per block. X has a binomial distribution with n = 16 and p = 0.01. The average number of errors per block is E(X) = np = 0.16.
(b) The variance is σX² = np(1 − p) = 0.1584.
(c) P(X ≥ 4) = 1 − P(X ≤ 3)
= 1 − [P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)] ≈ 1.6 × 10⁻⁵
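All three parts can be verified numerically; a short sketch using SciPy's binomial distribution (a check, not part of the original solution):

# Verify E(X), Var(X), and P(X >= 4) for X ~ Binomial(16, 0.01)
from scipy.stats import binom

n, p = 16, 0.01
print(binom.mean(n, p))         # (a) np = 0.16
print(binom.var(n, p))          # (b) np(1-p) = 0.1584
print(1 - binom.cdf(3, n, p))   # (c) P(X >= 4) ~ 1.6e-5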
Solution:
(a) We know that ∫_{−∞}^{∞} f_X(x) dx = ∫_a^b k dx = 1, so k = 1/(b − a). Hence
f_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise
(b) Here b − a = 3, so f_X(x) = 1/3 on [a, b], and
P(|X| ≤ 1/2) = P(−1/2 ≤ X ≤ 1/2) = ∫_{−1/2}^{1/2} f_X(x) dx = ∫_{−1/2}^{1/2} (1/3) dx = 1/3
For X ~ N(4, 9), that is, m = 4 and σ = 3:
1. P(X > 7) = Q((7 − 4)/3) = Q(1) = 0.158
2. P(0 < X < 9) = Q((0 − 4)/3) − Q((9 − 4)/3)
= Q(−4/3) − Q(5/3)
= 1 − Q(4/3) − Q(5/3)
≈ 0.861
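Both values can be verified with SciPy (a quick check, assuming the N(4, 9) model used above):

# Verify the two Gaussian probabilities with scipy.stats.norm
from scipy.stats import norm

m, sigma = 4.0, 3.0
print(norm.sf(7, loc=m, scale=sigma))                 # P(X > 7) = Q(1) ~ 0.1587
print(norm.cdf(9, m, sigma) - norm.cdf(0, m, sigma))  # P(0 < X < 9) ~ 0.861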
To find K, normalize the joint density f_{X,Y}(x, y) = K e^(−x−y), 0 ≤ y ≤ x < ∞:
∫∫ f_{X,Y}(x, y) dx dy = ∫_0^∞ ∫_y^∞ K e^(−x−y) dx dy
= ∫_0^∞ K e^(−y) [−e^(−x)]_{x=y}^{∞} dy
= ∫_0^∞ K e^(−y) e^(−y) dy
= K ∫_0^∞ e^(−2y) dy = K [−e^(−2y)/2]_0^∞ = K/2
Setting K/2 = 1 gives K = 2.
Since f_{X,Y}(x, y) ≠ f_X(x) f_Y(y) (LHS ≠ RHS), X and Y are not independent.
4. Conditional density function
If x < y, then f_{X|Y}(x|y) = 0.
If x ≥ y, then
f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y) = 2e^(−x−y) / (2e^(−2y)) = e^(−(x−y))
using the marginal density f_Y(y) = ∫_y^∞ 2e^(−x−y) dx = 2e^(−2y), y ≥ 0.
6. Covariance of X and Y
COV(X, Y) = E(XY) − E(X) E(Y)
(A numerical check of these quantities follows.)
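The normalization constant and the covariance can be checked by numerical integration of the joint density found above; a Python sketch using scipy.integrate.dblquad (the first argument of the integrand is the inner variable, here x running from y to ∞):

# Numerically check K = 2 and COV(X, Y) for f(x, y) = 2 e^(-x-y), x >= y >= 0
import numpy as np
from scipy.integrate import dblquad

f = lambda x, y: 2.0 * np.exp(-x - y)   # joint density on x >= y >= 0

total, _ = dblquad(f, 0, np.inf, lambda y: y, lambda y: np.inf)
ex, _ = dblquad(lambda x, y: x * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)
ey, _ = dblquad(lambda x, y: y * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)
exy, _ = dblquad(lambda x, y: x * y * f(x, y), 0, np.inf, lambda y: y, lambda y: np.inf)

print(total)           # ~1.0, confirming K = 2
print(exy - ex * ey)   # COV(X, Y) = E(XY) - E(X)E(Y), ~0.25 here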
The process X(t) is defined by X(t) = X, where X is a random variable uniformly distributed on [−1, 1]. In this case, an analytic description of the random process is given. For this random process, each sample function is a constant signal.
⚫ We see that corresponding to each outcome ωi in a sample space Ω, there exists a signal x(t; ωi).
⚫ This description is similar to the description of random variables, in which a real number is assigned to each outcome ωi.
⚫ For each ωi, there exists a deterministic time function x(t; ωi), which is called a sample function or a realization of the random process.
⚫ At each time instant t0 and for each ωi ∈ Ω, we have the number x(t0; ωi).
⚫ For the different outcomes (ωi's) at a fixed time t0, the numbers x(t0; ωi) constitute a random variable denoted by X(t0).
⚫ A process is stationary when its statistical properties are independent of time. Depending on which properties are independent of time, different notions of stationarity can be defined.
As the mean mX(t) is zero and the autocorrelation depends only on (t1 − t2), the process X(t) is WSS.
Example 5.2.10
We know that the independence of random processes implies that they are uncorrelated, whereas uncorrelatedness generally does not imply independence, except for the important class of Gaussian processes, for which the two properties are equivalent.
Assuming that the two random processes X(t) and Y(t) are jointly stationary, determine the autocorrelation of the process Z(t) = X(t) + Y(t).
Solution:
R_Z(τ) = E[Z(t + τ) Z(t)] = E[(X(t + τ) + Y(t + τ))(X(t) + Y(t))]
In general, R_Z(τ) = R_X(τ) + R_Y(τ) + R_XY(τ) + R_YX(τ).
[Figure: a stationary process X(t) applied to an LTI system with impulse response h(t), producing the output Y(t).]
We next demonstrate that the input and output processes X(t) and Y(t) will be jointly stationary. The mean of the output is
mY = E[Y(t)] = mX ∫_{−∞}^{∞} h(t) dt
Here mY is independent of t.
Cross-correlation between X(t) and Y(t)
The cross-correlation function between the output and the input is
R_XY(τ) = R_X(τ) ⋆ h(−τ)
and the autocorrelation of the output is
R_Y(τ) = R_X(τ) ⋆ h(τ) ⋆ h(−τ)
This shows that both R_Y(τ) and R_XY(τ) depend only on τ = t1 − t2; hence, the output process is stationary. Therefore, the input and output processes are jointly stationary.
5.2.5 Power Spectral Density of Stationary Processes
⚫ A random process is a collection of signals.
⚫ The spectral characteristics of these signals determine the spectral characteristics of the random process.
⚫ If the signals of the random process are slowly varying, then the random process will mainly contain low frequencies, and its power will be mostly concentrated at low frequencies.
⚫ On the other hand, if the signals change very fast, then most of the power in the random process will be at the high-frequency components.
⚫ The power spectral density (or power spectrum) of a random process is a function that determines the distribution of the power of the random process over frequency.
⚫ The power spectral density of a random process X(t) is denoted by SX(f).
⚫ Meaning of the power spectral density: SX(f) df is the power concentrated in the frequency interval [f, f + df].
⚫ The unit of power spectral density is W/Hz.
⚫ Wiener-Khinchin theorem: For a stationary random process X(t), the power spectral density is the Fourier transform of the autocorrelation function:
SX(f) = FT{RX(τ)}
⚫ For a cyclostationary process, the power spectral density is the Fourier transform of the average autocorrelation function:
SX(f) = FT{avg(RX(τ))}
⚫ The total power is PX = RX(0) = ∫_{−∞}^{∞} SX(f) df (a short numerical illustration follows).
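A discrete-time sketch of this power relation: the average power computed in the time domain equals the sum (a Riemann approximation of the integral) of a periodogram PSD estimate. White Gaussian noise is an assumed test signal; by Parseval's relation the two numbers agree exactly:

# Check P_X = R_X(0) = "integral" of S_X(f) for one noise realization
import numpy as np

rng = np.random.default_rng(2)
n = 4096
x = rng.standard_normal(n)        # one realization of a white process

power_time = np.mean(x**2)        # R_X(0) estimated as the average power
X = np.fft.fft(x)
S = (np.abs(X)**2) / n            # periodogram: PSD estimate on the FFT grid
power_freq = S.sum() / n          # Riemann sum of S_X(f) over f in [0, 1)

print(power_time, power_freq)     # equal by Parseval's relation, both ~1.0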
It follows that at any time instant t0, the random variable X(t0) is Gaussian, and at any two points t1, t2, the random variables (X(t1), X(t2)) are distributed according to a two-dimensional jointly Gaussian distribution.
Since for a stationary process E[X(t1) X(t2)] = RX(t1 − t2), with t1 = t2 = 3 this gives RX(3 − 3) = RX(0).
We know that
PX = RX(0) and PX = ∫_{−∞}^{∞} SX(f) df
From this definition, it is obvious that if X(t) and Y(t) are jointly Gaussian, then each of them is individually Gaussian; however, two individually Gaussian random processes are not always jointly Gaussian.
⚫ Here both the in-phase and quadrature components of a bandpass signal are slowly varying signals; therefore, they are both lowpass signals.
⚫ Basically, a bandpass signal can be represented in terms of two lowpass signals, namely, its in-phase and quadrature components.
⚫ In this case, the complex lowpass signal is x_l(t) = x_c(t) + j x_s(t).
5.3.3 Filtered Noise Processes
⚫ Let X(t) be the output of an ideal bandpass filter of bandwidth W centered at the frequency fc.
⚫ Since thermal noise is white and Gaussian, the filtered thermal noise will be Gaussian but not white.
⚫ The power spectral density of the filtered noise will be
SX(f) = N0/2 for |f − fc| ≤ W/2 or |f + fc| ≤ W/2, and 0 otherwise
⚫ All bandpass filtered noise signals have in-phase and quadrature components that are lowpass signals.
⚫ The bandpass random process X(t) can be expressed as
X(t) = Xc(t) cos(2πfc t) − Xs(t) sin(2πfc t)
3. The processes Xc(t) and Xs(t) have a common power spectral density. This power spectral density is obtained by shifting the positive frequencies in SX(f) to the left by fc, shifting the negative frequencies of SX(f) to the right by fc, and adding the two shifted spectra:
S_{Xc}(f) = S_{Xs}(f) = SX(f − fc) + SX(f + fc), |f| ≤ W/2
Hence
RX(t1, t2) = E[X(t1) X(t2)] = E[(A + B t1)(A + B t2)]
= E[A² + AB t2 + BA t1 + B² t1 t2]
= E[A²] + E[AB] t2 + E[BA] t1 + E[B²] t1 t2
The random variables A and B are independent (and zero-mean); therefore,
E[AB] = E[A] E[B] = 0
Furthermore, E[A²] and E[B²] are the second moments of A and B. Therefore,
RX(t1, t2) = E[A²] + E[B²] t1 t2
Impulse response:
h(t) = d/dt [δ(t) + δ(t − T)] = δ′(t) + δ′(t − T)
Frequency response:
H(f) = j2πf + j2πf e^(−j2πfT)
1. The system with impulse response h(t) = δ′(t) + δ′(t − T) is an LTI system. If the input to an LTI system is a stationary process, then the output is also a stationary process. Hence Y(t) is a stationary process.
3. The power spectral density of the output process is
SY(f) = SX(f) · 8π²f² (1 + cos(2πfT))
The following frequencies are not present in the output process:
1. f = 0, since the factor f² makes SY(f) = 0 at f = 0
2. f = k/T + 1/(2T), k = 0, ±1, ±2, . . ., since 1 + cos(2πfT) = 0 at these frequencies
h(t) = δ(t) − δ(t − T)
H(f) = 1 − e^(−j2πfT)
|H(f)|² = |1 − e^(−j2πfT)|² = |1 − (cos(2πfT) − j sin(2πfT))|²
= 2(1 − cos(2πfT))
SY(f) = SX(f) |H(f)|² = SX(f) · 2(1 − cos(2πfT))
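A short numerical check of the two magnitude-squared responses and their nulls; T = 1 ms is an assumed illustrative value:

# Evaluate |H(f)|^2 for both systems on a grid that hits the predicted nulls
import numpy as np

T = 1e-3
f = np.linspace(0.0, 3.0 / T, 7)   # 0, 500, 1000, ..., 3000 Hz

# h(t) = delta'(t) + delta'(t - T): |H(f)|^2 = 8 pi^2 f^2 (1 + cos(2 pi f T))
H2_derivative_pair = 8 * np.pi**2 * f**2 * (1 + np.cos(2 * np.pi * f * T))

# h(t) = delta(t) - delta(t - T): |H(f)|^2 = 2 (1 - cos(2 pi f T))
H2_difference = 2 * (1 - np.cos(2 * np.pi * f * T))

print(H2_derivative_pair)   # zeros at f = 0 and f = k/T + 1/(2T) (500, 1500, 2500 Hz)
print(H2_difference)        # zeros at f = k/T (0, 1000, 2000, 3000 Hz)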