
DISCRETE-TIME RANDOM PROCESS (2)

RANDOM PROCESSES

Random Processes
• Discrete-time random processes:
• Mean and variance:
• Autocorrelation and autocovariance:
• Relationship between random variables in a single random process:
• Cross-covariance and cross-correlation of two random processes:
• Relationship between multiple random processes:

Stationary Random Processes


• Strict-sense stationarity − stationarity in terms of density functions of different order:
• Wide sense stationarity − stationarity in terms of ensemble average:
• Properties of the autocorrelation sequence of a WSS process:
Symmetry, Mean-square value, Maximum value, Periodicity
• Joint wide sense stationarity of two random processes:

Autocorrelation and Autocovariance Matrices


• Definition of autocorrelation and autocovariance matrices:
• Properties of autocorrelation matrix:

Ergodicity of Random Processes


• Mean ergodicity
• Autocorrelation ergodicity

White Noise

The Power Spectrum


Definition:
Properties of the power spectrum of a WSS random process:
Symmetry, Positivity, Total power

Some useful MATLAB functions for studying random processes

RANDOM PROCESSES

Discrete-time random processes:


• A discrete-time random process x(n) is a collection, or ensemble, of discrete-time signals $x_k(n)$, where k is an integer.
• A discrete-time random process x(n) is an indexed sequence of random variables if we look at the process at a certain 'fixed' time instant n (e.g., n = n_0).
It should be pointed out that Hayes' textbook uses the term 'discrete-time signal' instead of 'discrete-time single realization'; we therefore keep using 'discrete-time signal' and treat it as equivalent to 'single realization' (as used in the previous section) or 'single observation'. We should also note that the term 'discrete-time signal' is different from 'discrete-time random signal' in the present context: a discrete-time random signal x(n) is associated with an ensemble of discrete-time signals $x_k(n)$.

Example 10. A random process has the form of a sinusoid $x(n) = A\cos(n\omega_0)$, where the amplitude $A \in \Omega = \{1, 2, \ldots, 6\}$ is a random variable that takes any integer value between one and six, each with equal probability Pr(A = k) = 1/6 (k = 1, 2, ..., 6). This random process consists of an ensemble of six different discrete-time signals $x_k(n)$,
$x_1(n) = \cos(n\omega_0)$, $x_2(n) = 2\cos(n\omega_0)$, ..., $x_6(n) = 6\cos(n\omega_0)$,
each of which shows up with equal probability.

Question: Given a random process $x(n) = A(n)\cos(n\omega_0)$, where the amplitude A(n) is a random variable (at each instant n) that takes any integer value between one and six, each with equal probability, how many equally probable discrete-time signals are there in the ensemble?

Example 11. A random process shown in Fig. 1 has an ensemble of different discrete-time signals, each occurring with a certain probability. From a sample-space point of view, to each experimental outcome $\omega_i$ in the sample space there corresponds a discrete-time signal $x_i(n)$. If we look at the random process at a certain 'fixed' time instant n, e.g., n = n_0, the signal value $x(n_0)$ is a random variable defined on the sample space, with underlying probability distribution and density functions
$F_{x(n_0)}(\alpha) = \Pr\{x(n_0) \le \alpha\}$, and $f_{x(n_0)}(\alpha) = dF_{x(n_0)}(\alpha)/d\alpha$
For a different $n_0$, $x(n_0)$ is a random variable at a different time instant. Therefore, a discrete-time random process is an indexed sequence of random variables x(n), where the value at each n is a random variable defined over the ensemble of elementary events $x_i(n)$.

Since a discrete-time random process is an indexed sequence of random variables, the statistical quantities (mean, variance, correlation, covariance, etc.) and properties (independence, uncorrelatedness, orthogonality, etc.) of random variables studied in the previous section apply to random processes. For a random process, therefore, we will have a sequence of mean values and variances of these indexed random variables, and the auto-relationship between the random variables. For multiple random processes we have the cross-relationship between the processes.

Fig. 1. A random process consisting of an ensemble of different discrete-time signals, each occurring according to a certain probability.

Mean and Variance:


The mean of a random process x(n), defined as
$m_x(n) = E\{x(n)\} = \int_{-\infty}^{\infty} \alpha f_{x(n)}(\alpha)\,d\alpha$    (44)
is a deterministic sequence with the same index as the sequence of random variables x(n).
If x(n) is a function of another random process $\zeta(n)$ with probability density function $f_{\zeta(n)}(\alpha)$, i.e., $x(n) = g[\zeta(n)]$, then the expected value of x(n) is
$m_x(n) = E\{x(n)\} = E\{g[\zeta(n)]\} = \int_{-\infty}^{\infty} g(\alpha) f_{\zeta(n)}(\alpha)\,d\alpha$    (44')

The variance of each random variable x(n) in the sequence,
$\sigma_x^2(n) = E\{|x(n) - m_x(n)|^2\}$    (45)
defines the variance of the random process; $\sigma_x^2(n)$ is also a deterministic sequence.
Note that 'deterministic' is a consequence of the ensemble average; thus $E\{m_x(n)\} = m_x(n)$ and $E\{\sigma_x^2(n)\} = \sigma_x^2(n)$. The mean and variance of a random process are first-order statistics and both, in general, depend on n.
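
As an illustration (a sketch, not part of the notes), the ensemble averages above can be estimated in MATLAB by averaging across many realizations. The process is the dice-amplitude sinusoid of Example 10; the values of K, N and $\omega_0$ are assumed for the sketch.

% Sketch: estimate m_x(n) and sigma_x^2(n) by averaging over an ensemble of realizations.
K  = 100000;                          % number of realizations (assumed)
N  = 32;                              % time samples per realization (assumed)
w0 = 0.2*pi;                          % assumed value of omega_0
n  = 0:N-1;
A  = randi(6, K, 1);                  % random amplitude, uniform on {1,...,6}
X  = A .* cos(n*w0);                  % K-by-N ensemble, one realization per row
m_hat   = mean(X, 1);                 % ensemble-average estimate of m_x(n), Eq. (44)
var_hat = mean(abs(X - m_hat).^2, 1); % ensemble-average estimate of sigma_x^2(n), Eq. (45)
% Theory: m_x(n) = E{A}cos(n*w0) = 3.5*cos(n*w0); both sequences depend on n.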

Autocorrelation and Autocovariance:


The autocorrelation is defined as
$r_x(k,l) = E\{x(k)x^*(l)\}$    (46)
The autocovariance is expressed as
$c_x(k,l) = E\{[x(k) - m_x(k)][x(l) - m_x(l)]^*\}$, or $c_x(k,l) = r_x(k,l) - m_x(k)m_x^*(l)$    (47)
Note that if k = l, the autocovariance reduces to the variance,
$c_x(k,k) = \sigma_x^2(k)$    (48)

They are called auto-correlation and auto-covariance because the correlation and the covariance are between two random variables, x(k) and x(l), drawn from the same random process x(n). The autocorrelation and autocovariance sequences provide information about the degree of linear dependence between two variables in the same process.
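
A minimal MATLAB sketch (assumed parameters, not from the notes) of estimating $r_x(k,l)$ and $c_x(k,l)$ of Eqs. (46)-(48) by ensemble averaging, again using the Example 10 process:

% Sketch: ensemble estimates of the autocorrelation and autocovariance.
K = 50000; N = 16; w0 = 0.2*pi; n = 0:N-1;    % assumed sizes and frequency
X = randi(6, K, 1) .* cos(n*w0);              % K realizations of the Example 10 process
m_hat = mean(X, 1);                           % estimate of m_x(n), 1-by-N
R_hat = (X.' * conj(X)) / K;                  % R_hat(k+1,l+1) ~ r_x(k,l) = E{x(k)x*(l)}
C_hat = R_hat - m_hat.' * conj(m_hat);        % Eq. (47): c_x(k,l) = r_x(k,l) - m_x(k)m_x*(l)
% diag(C_hat) approximates the variance sequence sigma_x^2(n), cf. Eq. (48).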

Relationship between random variables in a single random process:


As in the case of multiple random variables, two random variables in a single random process may be related in terms of independence, uncorrelatedness, and orthogonality, so the corresponding relations for multiple random variables apply. For example, if $c_x(k,l) = 0$ for k ≠ l, then the random variables x(k) and x(l) are uncorrelated, and knowledge of one does not help in the estimation of the other using a linear estimator.

Example 12. The mean and autocorrelation of a harmonic process with random phase (Example 3.3.1)
(i) A real-valued harmonic random process is a random process of the form of a sinusoid
$x(n) = A\sin(n\omega_0 + \phi)$,    (49)
where $\omega_0$ is a fixed constant. Consider the case where the amplitude A is a fixed constant but the phase $\phi$ is a random variable, uniformly distributed over the interval $[-\pi, \pi)$, i.e.,
$f_\phi(\alpha) = \begin{cases} 1/(2\pi), & -\pi \le \alpha < \pi \\ 0, & \text{otherwise} \end{cases}$    (50)
Find the mean and the autocorrelation of the harmonic process.
Solution. The mean of the process is, by definition,
$m_x(n) = E\{x(n)\} = \int_{-\infty}^{\infty} A\sin(n\omega_0 + \alpha) f_\phi(\alpha)\,d\alpha = \int_{-\pi}^{\pi} A\sin(n\omega_0 + \alpha)\,\frac{1}{2\pi}\,d\alpha = 0$    (51)
Thus, the random-phase harmonic process is a zero-mean process.

The autocorrelation of the process is determined by
$r_x(k,l) = E\{x(k)x^*(l)\} = E\{A\sin(k\omega_0 + \phi)\,A\sin(l\omega_0 + \phi)\}$
$\qquad = \tfrac{1}{2}A^2 E\{\cos[(k-l)\omega_0]\} - \tfrac{1}{2}A^2 E\{\cos[(k+l)\omega_0 + 2\phi]\}$    (52)
$\qquad = \tfrac{1}{2}A^2 \cos[(k-l)\omega_0]$
where the trigonometric identity $\sin(A)\sin(B) = [\cos(A-B) - \cos(A+B)]/2$ and the integral relation $E\{\cos[(k+l)\omega_0 + 2\phi]\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \cos[(k+l)\omega_0 + 2\alpha]\,d\alpha = 0$ are used.

(ii) A complex-valued harmonic random process is of the form
$x(n) = A e^{j(n\omega_0 + \phi)}$,    (53)
where $\omega_0$ and A are fixed constants and the phase $\phi$ is a uniformly distributed random variable defined in Eq. (50). Find the mean and the autocorrelation of the complex harmonic process.
Solution. The mean is
$m_x(n) = E\{x(n)\} = E\{A e^{j(n\omega_0 + \phi)}\} = E\{A\cos(n\omega_0 + \phi) + jA\sin(n\omega_0 + \phi)\} = 0$    (54)
The autocorrelation is
$r_x(k,l) = E\{x(k)x^*(l)\} = E\{A e^{j(k\omega_0 + \phi)}\,A^* e^{-j(l\omega_0 + \phi)}\} = AA^* E\{e^{j(k-l)\omega_0}\} = |A|^2 e^{j(k-l)\omega_0}$    (55)

From Eqs. (51), (52), (54) and (55), it follows that both the real- and complex-valued harmonic processes
have a zero mean and an autocorrelation that only depends on the difference between k and l.
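
A Monte Carlo sketch in MATLAB (the values of A, $\omega_0$, k and l are assumed, not from the notes) that checks Eqs. (51) and (52) numerically:

% Sketch: the random-phase harmonic process has zero mean and r_x(k,l) = (A^2/2)cos[(k-l)w0].
K = 200000; A = 2; w0 = 0.4*pi;       % assumed number of trials, amplitude and frequency
phi = -pi + 2*pi*rand(K, 1);          % phase uniform on [-pi, pi)
k = 3; l = 1;                         % two arbitrary time indices
xk = A*sin(k*w0 + phi);  xl = A*sin(l*w0 + phi);
mean_est = mean(xk)                   % should be close to 0, Eq. (51)
r_est    = mean(xk .* conj(xl))       % estimate of r_x(k,l)
r_theory = 0.5*A^2*cos((k-l)*w0)      % Eq. (52)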

Cross-covariance and Cross-correlation of two random processes:


The cross-correlation of two random processes x(n) and y(n) is
$r_{xy}(k,l) = E\{x(k)y^*(l)\}$    (56)
The cross-covariance is
$c_{xy}(k,l) = E\{[x(k) - m_x(k)][y(l) - m_y(l)]^*\}$, or $c_{xy}(k,l) = r_{xy}(k,l) - m_x(k)m_y^*(l)$    (57)

They are called cross-correlation and cross-covariance because the correlation and the covariance are between random variables drawn from two different random processes. Obviously, the autocorrelation and autocovariance are special cases of the cross-correlation and cross-covariance, respectively, for x(n) = y(n). Cross-correlation is very useful in signal detection, where the issue of interest is to determine whether or not a desired signal is present in an observed (noisy) signal.

Example 13. Cross-correlation


Consider two random processes x(n) and y(n), where x(n) is known, with mean $m_x(n)$ and autocorrelation $r_x(k,l)$, and y(n) is the convolution of x(n) with a deterministic sequence h(n):
$y(n) = \sum_{m=-\infty}^{\infty} h(m)\,x(n-m)$    (58)
Find (i) the cross-correlation between x(n) and y(n) and (ii) the cross-correlation between y(n) and x(n).
$r_{xy}(k,l) = E\{x(k)y^*(l)\} = E\Big\{x(k)\sum_{m=-\infty}^{\infty} h^*(m)x^*(l-m)\Big\} = \sum_{m=-\infty}^{\infty} h^*(m)E\{x(k)x^*(l-m)\} = \sum_{m=-\infty}^{\infty} h^*(m)r_x(k, l-m)$    (59)
$r_{yx}(k,l) = E\{y(k)x^*(l)\} = E\Big\{\sum_{m=-\infty}^{\infty} h(m)x(k-m)\,x^*(l)\Big\} = \sum_{m=-\infty}^{\infty} h(m)E\{x(k-m)x^*(l)\} = \sum_{m=-\infty}^{\infty} h(m)r_x(k-m, l)$    (60)
$r_{xy}(k,l) \ne r_{yx}(k,l)$    (61)
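
Eq. (59) can be checked numerically in a simple special case. The sketch below (assumed FIR h(n) and white-noise input, not from the notes) uses the fact that for zero-mean white noise with variance s2, $r_x(k,l) = s2\,\delta(k-l)$, so Eq. (59) reduces to $r_{xy}(k,l) = s2\,h^*(l-k)$:

% Sketch: Monte Carlo check of Eq. (59) for a white-noise input and an assumed FIR filter.
K = 200000;  s2 = 1.5;  h = [1 0.5 -0.25];   % assumed variance and filter coefficients
N = 20;
X = sqrt(s2) * randn(K, N);                  % K realizations of zero-mean white noise
Y = filter(h, 1, X, [], 2);                  % y(n) = sum_m h(m) x(n-m), filtered along rows
k = 5; l = 7;                                % l - k = 2, within the length of h
rxy_est    = mean(X(:,k+1) .* conj(Y(:,l+1)))% estimate of r_xy(k,l) = E{x(k)y*(l)}
rxy_theory = s2 * conj(h(l-k+1))             % s2 * h*(l-k), with MATLAB's 1-based indexing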

Relationship between multiple random processes:


Two random processes x(n) and y(n) are said to be uncorrelated if
$c_{xy}(k,l) = 0$    (62)
for all k and l or, equivalently, if
$r_{xy}(k,l) = m_x(k)m_y^*(l)$    (63)

Two random processes x(n) and y(n) are said to be orthogonal if
$r_{xy}(k,l) = 0$    (64)

Note that two orthogonal random processes are not necessarily uncorrelated, but two uncorrelated processes, at least one of which has zero mean, are orthogonal, since then $r_{xy}(k,l) = m_x(k)m_y^*(l) = 0$.

If two random processes x(n) and y(n) are uncorrelated and one or both of them have zero mean, then the autocorrelation of the sum z(n) = x(n) + y(n) is
$r_z(k,l) = r_x(k,l) + r_y(k,l)$    (65)

Stationary Random Processes

From Example 12 it is known that, for a real-valued harmonic process $x(n) = A\sin(n\omega_0 + \phi)$ with random phase $\phi$ uniformly distributed over the interval $[-\pi, \pi)$, the mean $m_x(n) = 0$ is independent of time, and the autocorrelation $r_x(k,l) = \frac{1}{2}A^2\cos[(k-l)\omega_0]$ depends only on the difference between k and l. This brings up a class of commonly encountered random processes, the wide-sense stationary process. That a random process is stationary means that the statistics, or ensemble averages, of the process are independent of time, i.e., 'statistically time-invariant'. Several different types of stationarity are defined, either in terms of density functions of different order or in terms of ensemble average operations.

Stationarity in terms of density functions of different order and strict-sense stationarity:


A random process x(n) is said to be first-order stationary if the first-order density function of the process is independent of time, i.e., $f_{x(n)}(\alpha) = f_{x(n+k)}(\alpha)$ for all k.

A first-order stationary process therefore has time-independent first-order statistics. For example, the mean of the process is constant, $m_x(n) = m_x$, because
$m_x(n) = \int_{-\infty}^{\infty} \alpha f_{x(n)}(\alpha)\,d\alpha = \int_{-\infty}^{\infty} \alpha f_{x(n+k)}(\alpha)\,d\alpha = m_x(n+k)$
for all k, and the same is true for the variance, i.e., $\sigma_x^2(n) = \sigma_x^2$.

A random process x(n) is said to be second-order stationary if the second-order joint density function $f_{x(n_1),x(n_2)}(\alpha_1,\alpha_2)$ depends only on the difference $n_1 - n_2$, and not on the individual times $n_1$ and $n_2$, which is equivalent to $f_{x(n_1),x(n_2)}(\alpha_1,\alpha_2) = f_{x(n_1+k),x(n_2+k)}(\alpha_1,\alpha_2)$.

If a random process is second-order stationary, then it is also first-order stationary.

A second-order stationary process has time-shift-invariant second-order statistics; e.g., the autocorrelation sequence has the property $r_x(k,l) = r_x(k+n, l+n)$, so it depends only on the difference k − l separating the two variables x(k) and x(l) in time: $r_x(k,l) = r_x(k-l, l-l) = r_x(k-l, 0)$. Thus $r_x(k,l)$ is often simply written as $r_x(k-l)$ in this case. The difference k − l is called the lag.
A random process x(n) is said to be stationary of order L if the random processes x(n) and x(n+k) have the same Lth-order joint density functions. A random process is said to be stationary in the strict sense (or strict-sense stationary) if it is stationary for all orders L.

Stationarity in terms of ensemble average and wide sense stationarity:


Stationarity can also be defined in terms of ensemble average operations, such as the mean, the autocorrelation, and the autocovariance, since these are often what is given in practice.
Wide Sense Stationarity. A random process x(n) is said to be wide-sense stationary (WSS) if the following three conditions are satisfied:
1. The mean of the process is a constant, $m_x(n) = m_x$.
2. The autocorrelation $r_x(k,l)$ depends only on the difference k − l, i.e., $r_x(k,l) = r_x(k-l)$.
3. The variance of the process is finite, $c_x(0) < \infty$.
The harmonic process with random phase (see Example 12) is a WSS random process because $m_x(n) = 0$, $r_x(k,l) = \frac{1}{2}A^2\cos[(k-l)\omega_0]$ depends only on k − l, and $c_x(0) = r_x(0) = \frac{1}{2}A^2$ is bounded, noting that $c_x(k,l) = r_x(k,l) - m_x(k)m_x^*(l) = r_x(k,l)$.
Wide-sense stationarity is a weaker condition than second-order stationarity because the constraints are placed on ensemble averages rather than on density functions. For a Gaussian process, wide-sense stationarity is equivalent to strict-sense stationarity, because a Gaussian process is completely defined in terms of its mean and covariance.
Note that if the autocorrelation of a random process is of the form $r_x(k-l)$ or $r_x(k)$, the process is not necessarily WSS. For example, $r_x(k) = 2^k$ is not a valid autocorrelation for a WSS random process. Why?

Example 14. Wide sense stationarity of a harmonic process with random amplitude
Consider a real-valued harmonic random process
$x(n) = A\sin(n\omega_0 + \phi)$,    (66)
where the frequency $\omega_0$ and the phase $\phi$ are fixed constants, but the amplitude A is a random variable uniformly distributed over the interval [b, c] with c > b. Determine the stationarity of the random process.
Solution. The mean of the process is, by definition,
$m_x(n) = E\{x(n)\} = E\{A\sin(n\omega_0 + \phi)\} = E\{A\}\sin(n\omega_0 + \phi) = \frac{b+c}{2}\sin(n\omega_0 + \phi)$    (67)
which depends on n. Therefore, a harmonic process with random amplitude is not WSS.

Properties of the autocorrelation sequence of a WSS process:


Property 1 – Symmetry. The autocorrelation sequence of a WSS random process x(n) is a conjugate symmetric function of k, $r_x(k) = r_x^*(-k)$.
For a real process, the autocorrelation sequence is symmetric, $r_x(k) = r_x(-k)$.
This property follows from the definition: $r_x(k) = E\{x(n+k)x^*(n)\} = \big(E\{x(n)x^*(n+k)\}\big)^* = r_x^*(-k)$.

Property 2 – Mean-square value. The autocorrelation sequence of a WSS random process x(n) at lag k = 0 is equal to the mean-square value of the process, $r_x(0) = E\{|x(n)|^2\} \ge 0$.

Property 3 – Maximum value. The magnitude of the autocorrelation sequence of a WSS random process x(n) at lag k is upper bounded by its value at k = 0, $r_x(0) \ge |r_x(k)|$.

This property may be explained by noting that the correlation of the variable x(n) with itself is always at least as large in magnitude as its correlation with a different variable x(n+k) for k ≠ 0.

Property 4 – Periodicity. If the autocorrelation sequence of a WSS random process x(n) is such that $r_x(k_0) = r_x(0)$ for some $k_0$, then $r_x(k)$ is periodic with period $k_0$. Furthermore, $E\{|x(n) - x(n-k_0)|^2\} = 0$, and x(n) is said to be mean-square periodic.

For example, $r_x(k) = 0.5A^2\cos(k\pi)$ is periodic with a period of 2.

Questions (1): Which one(s) of the following autocorrelations is (are) valid for WSS random processes?
(i) $r_x(k) = 2^{|k|}$; (ii) $r_x(k) = (1/2)^{|k|}$; (iii) $r_x(k) = (1/2)^k$; (iv) $r_x(k) = (1/2)^{|k+1|} + (1/2)^{|k-1|}$;
(v) $r_x(k) = -2\delta(k) + \delta(k-1) + \delta(k+1)$; (vi) $r_x(k) = 1.2\delta(k) + \delta(k-1) + \delta(k+1)$.

Joint wide sense stationarity of two random processes:


Two random processes x(n) and y(n) are said to be jointly wide-sense stationary if x(n) and y(n) are each wide-sense stationary (i.e., $r_x(k,l) = r_x(k-l)$ and $r_y(k,l) = r_y(k-l)$) and if the cross-correlation $r_{xy}(k,l)$ depends only on the difference k − l, i.e.,
$r_{xy}(k,l) = r_{xy}(k+n, l+n) = r_{xy}(k-l)$    (68)
which is a function only of the lag k − l. This implies that two WSS random processes x(n) and y(n) are not jointly WSS if $r_{xy}(k,l) \ne r_{xy}(k-l)$.

The Autocorrelation and Autocovariance Matrices

Definition of autocorrelation and autocovariance matrices:


An autocorrelation matrix $\mathbf{R}_x$ is the matrix form of the autocorrelation sequence of a random process x(n), defined as $\mathbf{R}_x = E\{\mathbf{x}\mathbf{x}^H\}$, where $\mathbf{x} = [x(0), x(1), \ldots, x(p)]^T$ is a vector of p + 1 values of the random process x(n), and $\mathbf{x}^H = (\mathbf{x}^*)^T = [x^*(0), x^*(1), \ldots, x^*(p)]$ is the Hermitian transpose (the Hermitian transpose of A, denoted by $A^H$, is the complex conjugate of the transpose of A, i.e., $A^H = (A^*)^T = (A^T)^*$).
The autocorrelation matrix $\mathbf{R}_x$ is a compact, convenient form that is often used in the following chapters as well as in MATLAB programming. It is explicitly written as
$\mathbf{R}_x = E\{\mathbf{x}\mathbf{x}^H\} = E\begin{bmatrix} x(0)x^*(0) & x(0)x^*(1) & \cdots & x(0)x^*(p) \\ x(1)x^*(0) & x(1)x^*(1) & \cdots & x(1)x^*(p) \\ \vdots & \vdots & & \vdots \\ x(p)x^*(0) & x(p)x^*(1) & \cdots & x(p)x^*(p) \end{bmatrix} = \begin{bmatrix} r_x(0,0) & r_x(0,1) & \cdots & r_x(0,p) \\ r_x(1,0) & r_x(1,1) & \cdots & r_x(1,p) \\ \vdots & \vdots & & \vdots \\ r_x(p,0) & r_x(p,1) & \cdots & r_x(p,p) \end{bmatrix}$    (69)
where $r_x(k,l) = E\{x(k)x^*(l)\}$ is the autocorrelation. Note that the diagonal entries $r_x(k,k) = E\{|x(k)|^2\}$ are always real!
If the random process x(n) is WSS, then $r_x(k) = r_x^*(-k)$, and the autocorrelation matrix $\mathbf{R}_x$ becomes
$\mathbf{R}_x = E\{\mathbf{x}\mathbf{x}^H\} = \begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-1) \\ r_x(2) & r_x(1) & \cdots & r_x^*(p-2) \\ \vdots & \vdots & & \vdots \\ r_x(p) & r_x(p-1) & \cdots & r_x(0) \end{bmatrix}$    (70)
The autocorrelation matrix $\mathbf{R}_x$ is a $(p+1)\times(p+1)$ square matrix.
Similarly, the autocovariance matrix of a random process x(n) is defined as
$\mathbf{C}_x = E\{(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^H\}$    (71)
and the relationship between $\mathbf{R}_x$ and $\mathbf{C}_x$ is
$\mathbf{C}_x = \mathbf{R}_x - \mathbf{m}_x\mathbf{m}_x^H$    (72)
where $\mathbf{m}_x = [m_x, m_x, \ldots, m_x]^T$ is a vector of length p + 1 containing the mean value of the WSS process. For $\mathbf{m}_x = \mathbf{0}$, $\mathbf{C}_x = \mathbf{R}_x$.
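
A MATLAB sketch (assumed parameters, not from the notes) of building the matrix in Eq. (70) with toeplitz(), using the autocorrelation of the random-phase harmonic process of Example 12:

% Sketch: form the (p+1)x(p+1) WSS autocorrelation matrix from r_x(0), ..., r_x(p).
p = 4;  A = 2;  w0 = 0.4*pi;                 % assumed order, amplitude and frequency
rx = 0.5*A^2*cos((0:p)*w0);                  % r_x(k) = 0.5*A^2*cos(k*w0), Example 12
Rx = toeplitz(rx, conj(rx));                 % first column r_x(k), first row r_x*(k), Eq. (70)
% Rx is Hermitian Toeplitz (here real symmetric); eig(Rx) is real and, up to numerical
% precision, nonnegative, consistent with Properties 1-3 below.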

Properties of autocorrelation matrix:

Property 1. The autocorrelation matrix of a WSS random process x(n) is a Hermitian Toeplitz matrix, $\mathbf{R}_x = \text{Toep}\{r_x(0), r_x(1), \ldots, r_x(p)\}$.
Note that not every Hermitian Toeplitz matrix (see p. 38) is a valid autocorrelation matrix.

Property 2. The autocorrelation matrix of a WSS random process x(n) is nonnegative definite, $\mathbf{R}_x \ge 0$.
Property 2 is a necessary condition for a given sequence $r_x(k)$, k = 0, 1, ..., p, to represent the autocorrelation values of a WSS random process (see p. 40 for 'nonnegative definite').

Property 3. The eigenvalues, $\lambda_k$, of the autocorrelation matrix of a WSS random process x(n) are real-valued and nonnegative.
This property is a consequence of the fact that the autocorrelation matrix is Hermitian and nonnegative definite.

Example 15. Determine whether or not the following matrices are valid autocorrelation matrices:
(i) $\mathbf{R}_1 = \begin{bmatrix} 3 & -1 & 1 \\ 1 & 5 & -1 \\ -1 & 1 & 3 \end{bmatrix}$  (ii) $\mathbf{R}_2 = \begin{bmatrix} 4j & 1 & 1 \\ 1 & 5 & 1 \\ 1 & 1 & 3j \end{bmatrix}$  (iii) $\mathbf{R}_3 = \begin{bmatrix} 4 & 1-j & j \\ 1+j & 4 & 1-j \\ -j & 1+j & 4 \end{bmatrix}$
(i) $\mathbf{R}_1$ is not a valid autocorrelation matrix since it is real-valued but not symmetric.
(ii) $\mathbf{R}_2$ is not a valid autocorrelation matrix either, since the entries along its diagonal are not all real-valued.
(iii) $\mathbf{R}_3$ is a valid autocorrelation matrix since it is a Hermitian Toeplitz matrix, $\mathbf{R}_3 = \text{Toep}\{4, 1+j, -j\}$, and nonnegative definite.
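
These conclusions can be checked numerically; a short MATLAB sketch (not from the notes) tests $\mathbf{R}_3$ for the Hermitian and nonnegative-definite conditions:

% Sketch: verify that R3 is Hermitian and that its eigenvalues are real and nonnegative.
R3 = [ 4     1-1j   1j ;
       1+1j  4      1-1j;
      -1j    1+1j   4  ];
isHermitian = isequal(R3, R3')         % conjugate transpose equals the matrix itself
eigsR3 = eig(R3)                       % all eigenvalues real and >= 0 (Property 3)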

Ergodicity

It has been seen that the mean and autocorrelation of a random process are determined from ensemble averages over all possible discrete-time signals in the ensemble. In practice, however, often only a single realization of a random process is available, and we want to determine the mean and the autocorrelation from that single realization; we then need to consider the ergodicity of the process. When the mean or the autocorrelation of a random process can be found from appropriate time averages of one single realization of the process, the process is said to be mean ergodic or autocorrelation ergodic, respectively. The ergodicity of a random process, as will be seen later, is important when estimating its autocorrelation and power spectrum in practice.

Mean Ergodicity

Definition. If the sample mean $\hat{m}_x(N) = \frac{1}{N}\sum_{n=0}^{N-1} x(n)$ of a wide-sense stationary process converges to $m_x$ in the mean-square sense,
$\lim_{N\to\infty} E\{|\hat{m}_x(N) - m_x|^2\} = 0$,
then the process is said to be ergodic in the mean and we write $\lim_{N\to\infty} \hat{m}_x(N) = m_x$.

In order for the sample mean to converge in the mean-square sense, it is necessary and sufficient that
the sample mean be asymptotically unbiased, $\lim_{N\to\infty} E\{\hat{m}_x(N)\} = m_x$, and
the variance of the sample mean go to zero as $N \to \infty$, i.e., $\lim_{N\to\infty} \mathrm{Var}\{\hat{m}_x(N)\} = 0$.

Mean Ergodic Theorem 1. Let x(n) be a WSS random process with autocovariance sequence $c_x(k)$. A necessary and sufficient condition for x(n) to be ergodic in the mean is
$\lim_{N\to\infty} \frac{1}{N}\sum_{k=0}^{N-1} c_x(k) = 0$.

Mean Ergodic Theorem 2. Let x(n) be a WSS random process with autocovariance sequence $c_x(k)$. Sufficient conditions for x(n) to be ergodic in the mean are that $c_x(0) < \infty$ and
$\lim_{k\to\infty} c_x(k) = 0$.

Example 16. Stationarity and ergodicity of a random process

Consider a random process x(n) = A, where A is a random variable uniformly distributed over the interval [b, c] with c > b. Determine the stationarity and the mean ergodicity of the random process.
Solution. The mean of the process is
$m_x(n) = E\{x(n)\} = E\{A\} = \frac{b+c}{2}$    (73)
which is a constant. The autocorrelation is
$r_x(k,l) = E\{x(k)x^*(l)\} = E\{A^2\} = \int \alpha^2 f_A(\alpha)\,d\alpha = \frac{1}{c-b}\int_b^c \alpha^2\,d\alpha = \frac{c^3 - b^3}{3(c-b)} = \frac{c^2 + cb + b^2}{3}$    (74)
which is also a constant. The autocovariance is
$c_x(k) = r_x(k) - m_x(k)m_x^*(k) = E\{A^2\} - E^2\{A\} = \mathrm{Var}\{A\} = \frac{(c-b)^2}{12}$    (75)
Eq. (75) shows that $c_x(k)$, which is equal to the variance, is a constant for all k, and $c_x(0) < \infty$. Thus, the process is WSS. From Eq. (75), it follows that
$\lim_{N\to\infty} \frac{1}{N}\sum_{k=0}^{N-1} c_x(k) = \frac{1}{12}(c-b)^2$
is not zero since b ≠ c. Thus, the process is not ergodic in the mean.
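
The contrast between Example 16 and a mean-ergodic process can be seen from a single realization. The MATLAB sketch below (assumed values of b, c and N, not from the notes) compares the time average of x(n) = A with that of white Gaussian noise:

% Sketch: the time average of x(n) = A sticks at that realization's A, while the time
% average of zero-mean white noise converges to the ensemble mean.
N = 100000;  b = 0;  c = 6;              % assumed record length and interval [b, c]
A = b + (c-b)*rand;                      % one draw of A for one realization
x1 = A*ones(1, N);                       % single realization of x(n) = A
x2 = randn(1, N);                        % single realization of zero-mean white noise
m1 = cumsum(x1) ./ (1:N);                % sample mean m_hat_x(N) versus N
m2 = cumsum(x2) ./ (1:N);
m1(end)     % stays at A, not at the ensemble mean (b+c)/2 = 3: not mean ergodic
m2(end)     % close to 0: white noise is mean ergodic (c_x(k) = 0 for k ~= 0)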

Autocorrelation Ergodicity

Definition. If the sample autocorrelation $\hat{r}_x(k,N) = \frac{1}{N}\sum_{n=0}^{N-1} x(n)x^*(n-k)$ of a wide-sense stationary process converges to $r_x(k)$ in the mean-square sense,
$\lim_{N\to\infty} E\{|\hat{r}_x(k,N) - r_x(k)|^2\} = 0$,
then the process is said to be autocorrelation ergodic and we write $\lim_{N\to\infty} \hat{r}_x(k,N) = r_x(k)$.

Autocorrelation ergodicity will be useful in the following chapter on spectrum estimation.

White Noise

White noise v(n) is a WSS process with the autocovariance function
$c_v(k) = \sigma_v^2\,\delta(k)$.    (76)
It is simply a sequence of uncorrelated random variables, each having a variance of $\sigma_v^2$. In other words, v(n) and v(n+k) are uncorrelated for k ≠ 0, since $c_v(k) = 0$, or equivalently $r_v(k) = |m_v|^2$, for k ≠ 0. Thus, knowledge of one does not help in the estimation of the other using a linear estimator (refer to the section Linear Mean-Square Estimation in the first lecture).
Note that white noise is often assumed to have zero mean, and thus $c_v(k) = r_v(k)$. Also note that there is an infinite variety of white noise random processes, because the uncorrelated random variables may have any of an infinite variety of distribution and density functions, e.g., white Gaussian noise and white Bernoulli noise.
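
A short MATLAB sketch (assumed $\sigma_v^2$ and N, not from the notes) that generates white Gaussian noise and checks Eq. (76) with the sample autocovariance:

% Sketch: the sample autocovariance of white noise is close to sigma_v^2 * delta(k).
N = 50000;  sigma2 = 2;                  % assumed record length and variance
v = sqrt(sigma2) * randn(1, N);          % one realization of zero-mean white Gaussian noise
[cv, lags] = xcov(v, 20, 'unbiased');    % sample autocovariance for |k| <= 20
stem(lags, cv); xlabel('lag k'); ylabel('c_v(k)');
% cv is ~sigma2 at k = 0 and ~0 elsewhere; any zero-mean i.i.d. sequence (Gaussian,
% Bernoulli, ...) gives the same second-order picture.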

The Power Spectrum

Definition:
The power spectrum, or power spectral density, of a random process x(n) is the discrete-time Fourier transform of the autocorrelation sequence $r_x(k)$ of the process (which is a deterministic sequence),
$P_x(e^{j\omega}) = \sum_{k=-\infty}^{\infty} r_x(k)\,e^{-jk\omega}$    (77)
The autocorrelation sequence may be recovered by taking the inverse discrete-time Fourier transform of the power spectrum $P_x(e^{j\omega})$,
$r_x(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_x(e^{j\omega})\,e^{jk\omega}\,d\omega$    (78)
The power spectrum can also be obtained from the z-transform of $r_x(k)$,
$P_x(z) = \mathcal{Z}\{r_x(k)\} = \sum_{k=-\infty}^{\infty} r_x(k)\,z^{-k}$    (79)
which, in some cases, is more convenient.

The power spectrum describes the distribution of power and is related to the mean-square values of the random signal, e.g., $r_x(0) = E\{|x(n)|^2\}$.
Note that since a random process is an ensemble of discrete-time signals, we can compute the Fourier (or z-) transform of each discrete-time signal in the ensemble, but we cannot compute the Fourier (or z-) transform of the random process itself. Instead, we calculate the power spectrum of the random process x(n), $P_x(e^{j\omega})$, which is the discrete-time Fourier transform of the deterministic autocorrelation sequence $r_x(k)$.

Properties of the power spectrum of a WSS random process:

Property 1 – Symmetry. The power spectrum of a WSS random process x(n) is real-valued, $P_x(e^{j\omega}) = P_x^*(e^{j\omega})$, and $P_x(z)$ satisfies the symmetry condition
$P_x(z) = P_x^*(1/z^*)$
In addition, if x(n) is real, then the power spectrum is even, $P_x(e^{j\omega}) = P_x(e^{-j\omega})$, which implies that
$P_x(z) = P_x^*(z^*)$

Property 2 – Positivity. The power spectrum of a WSS random process x(n) is nonnegative,
$P_x(e^{j\omega}) \ge 0$
Property 3 – Total power. The power in a zero-mean WSS random process x(n) is proportional to the area under the power spectral density curve,
$E\{|x(n)|^2\} = r_x(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_x(e^{j\omega})\,d\omega$

Property 4 – Eigenvalue Extremal Property. The eigenvalues of the n × n autocorrelation matrix of a zero-mean WSS random process are upper and lower bounded by the maximum and minimum values, respectively, of the power spectrum,
$\min_\omega P_x(e^{j\omega}) \le \lambda_i \le \max_\omega P_x(e^{j\omega})$

Questions (2): Which one(s) of the following power spectra is (are) valid for WSS random processes?
(i) $P_x(z) = 5 + (z^{-1} + z)$, (ii) $P_x(z) = \dfrac{2+z}{2+z^{-1}}$, (iii) $P_x(z) = \dfrac{1}{(3+z)(3+z^{-1})}$,
(iv) $P_x(e^{j\omega}) = \dfrac{1}{1 + 2\cos\omega}$, (v) $P_x(e^{j\omega}) = \dfrac{1}{1 - 0.8\cos\omega}$, and (vi) $P_x(e^{j\omega}) = \dfrac{1}{1 - 0.8\sin\omega}$.

Example 17. The power spectrum


(i) The power spectrum of the harmonic process with random phase in Example 12.
Solution. From Example 12, the autocorrelation of the random-phase harmonic process is
$r_x(k) = \tfrac{1}{2}A^2\cos(k\omega_0)$    (80)
and the power spectrum is
$P_x(e^{j\omega}) = \sum_{k=-\infty}^{\infty} r_x(k)e^{-jk\omega} = \frac{A^2}{2}\sum_{k=-\infty}^{\infty} \cos(k\omega_0)e^{-jk\omega} = \frac{A^2}{2}\sum_{k=-\infty}^{\infty} \frac{e^{jk\omega_0} + e^{-jk\omega_0}}{2}\,e^{-jk\omega} = \frac{\pi A^2}{2}\big[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\big]$    (81)
where the DTFT relation $x(n) = e^{jn\omega_0} \leftrightarrow X(e^{j\omega}) = 2\pi\delta(\omega - \omega_0)$ is used. Obviously, $P_x(e^{j\omega})$ is real, even, and nonnegative.

(ii) The power spectrum of a random process with autocorrelation sequence $r_x(k) = \alpha^{|k|}$, where $|\alpha| < 1$.
Solution. From the definition of the power spectrum in Eq. (77), it follows that
$P_x(e^{j\omega}) = \sum_{k=-\infty}^{\infty} r_x(k)e^{-jk\omega} = \sum_{k=-\infty}^{-1} \alpha^{-k}e^{-jk\omega} + \sum_{k=0}^{\infty} \alpha^k e^{-jk\omega} = \sum_{k=0}^{\infty} \alpha^k e^{jk\omega} - 1 + \sum_{k=0}^{\infty} \alpha^k e^{-jk\omega}$
$\qquad = \dfrac{1}{1 - \alpha e^{j\omega}} + \dfrac{1}{1 - \alpha e^{-j\omega}} - 1 = \dfrac{1 - \alpha^2}{1 - 2\alpha\cos\omega + \alpha^2}$    (82)
Obviously, $P_x(e^{j\omega})$ is real and nonnegative, and it is even since $P_x(e^{j\omega}) = P_x(e^{-j\omega})$.
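
A numerical check of Eq. (82): the MATLAB sketch below (assumed $\alpha$ and truncation length, not from the notes) compares the DTFT of a truncated $r_x(k) = \alpha^{|k|}$ with the closed-form expression:

% Sketch: DTFT of the truncated autocorrelation versus the closed form of Eq. (82).
alpha = 0.7;  L = 200;                      % assumed decay and truncation (alpha^L negligible)
k = -L:L;
rx = alpha.^abs(k);
w = linspace(-pi, pi, 1001);
Px_dtft   = real(rx * exp(-1j*k.'*w));      % sum_k r_x(k) e^{-jkw}, Eq. (77)
Px_closed = (1 - alpha^2) ./ (1 - 2*alpha*cos(w) + alpha^2);
max(abs(Px_dtft - Px_closed))               % should be tiny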

Answers to Questions (1): (i) No, since $r_x(0) < r_x(k)$ for |k| > 0; (ii) Yes; (iii) No, since it is not symmetric, i.e., $r_x(k) \ne r_x(-k)$; (iv) No, since $r_x(0) = 1 < r_x(1) = r_x(-1) = 1.25$, although $r_x(k)$ is symmetric; (v) No, since $r_x(0) = -2$ is negative and $r_x(0) = -2 < r_x(1) = r_x(-1) = 1$; (vi) Yes.
Answers to Questions (2): (i) Yes; (ii) No, since $P_x(z)$ is not symmetric, i.e., $P_x(z) \ne P_x(z^{-1})$; (iii) Yes; (iv) No, since $P_x(e^{j\omega}) < 0$ for $\omega > 2\pi/3$; (v) Yes; (vi) No, since $P_x(e^{j\omega})$ is not symmetric, $P_x(e^{j\omega}) \ne P_x(e^{-j\omega})$.

Some useful MATLAB functions for studying random processes

>> rand()      % creates uniformly distributed random processes.
>> randn()     % creates Gaussian (normal) random processes.
>> toeplitz(R) % produces a Toeplitz matrix.
>> xcorr()     % auto- and cross-correlation function estimates.
>> xcov()      % auto- and cross-covariance function estimates.

The MATLAB functions for studying random variables are also useful for random processes.
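
For instance, a typical combination of these functions (a sketch with assumed N and p, not from the notes) estimates $r_x(k)$ from one realization and forms the autocorrelation matrix of Eq. (70):

% Sketch: estimate the autocorrelation of white Gaussian noise and build its matrix form.
N = 10000;  p = 4;                            % assumed record length and matrix order
x = randn(1, N);                              % one realization of unit-variance white noise
[r, lags] = xcorr(x, p, 'unbiased');          % r_x(k) estimates for |k| <= p
rx = r(lags >= 0);                            % keep r_x(0), ..., r_x(p)
Rx = toeplitz(rx, conj(rx));                  % (p+1)x(p+1) autocorrelation matrix, Eq. (70)
% For unit-variance white noise, Rx should be close to the identity matrix.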
