Additional Problems
Ali H. Sayed
This file contains additional problems that instructors may find useful. The file is updated on
a regular basis. Solutions are available to instructors only. Please contact the author at
[email protected].
Last updated February 2010
$$y_1 = x_1 + v_1 \qquad\text{and}\qquad y_2 = x_1 + x_2 + v_2$$
(a) Express the pdfs of the individual random variables x1 and x2 in terms of delta
functions.
(b) Find the joint pdf of (x1 , x2 ).
(c) Find the joint pdf of (y 1 , y 2 ).
(d) Find the joint pdf of (x1 , x2 , y 1 , y 2 ).
(e) Find the conditional pdf of (x1 , x2 ) given (y 1 , y 2 ).
(f) Find the minimum mean-square error estimator of x2 given {y 1 , y 2 }.
(g) Find the minimum mean-square error estimator of x2 given {y 1 , y 2 , x1 }.
(h) Find the linear least-mean-squares error estimator of x2 given {y 1 , y 2 , x1 }.
$$w_i = w_{i-1} + \frac{\mu\, u_i^*}{\|u_i\|^2}\,\bigl[d(i) - u_i w_{i-1}\bigr]$$
with 1 × M regression vectors ui and step-size µ. Each entry of ui has the form
re^{jθ}, where θ is uniformly distributed over [0, 2π] and r > 0. In other words, the
entries of ui lie on a circle of radius r. Assume the data d(i) satisfy the stationary
data model of Section 6.2.
(a) Find an exact expression for the EMSE of NLMS under such conditions.
(b) Does the value of r have an influence on the EMSE? Is there an optimal choice
for r?
(c) The entries of the regression vectors are further assumed to be independent of
each other. Find an exact condition on the step-size µ to ensure mean-square
convergence.
(d) Which algorithm will have the lower MSE in steady-state for the same step-size: LMS or NLMS?
(e) Determine the number of iterations needed for LMS to come within 5% of its EMSE. What about NLMS? Which algorithm converges faster? Assume the same step-size for both algorithms.
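
For concreteness, here is a minimal NumPy simulation sketch of the NLMS recursion above with regressor entries of the form re^{jθ}. The filter length, step-size, noise level, and the model generating d(i) are illustrative assumptions chosen only to make the sketch runnable; they are not part of the problem statement.

```python
import numpy as np

# Minimal simulation of the NLMS recursion above; all numerical values and the
# regression model generating d(i) are illustrative assumptions.
rng = np.random.default_rng(0)
M, mu, r, sigma_v, n_iter = 8, 0.5, 2.0, 1e-2, 5000

w_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # assumed unknown model
w = np.zeros(M, dtype=complex)                               # initial condition w_{-1} = 0

for i in range(n_iter):
    theta = rng.uniform(0.0, 2.0 * np.pi, M)
    u = r * np.exp(1j * theta)                # 1 x M regressor; entries lie on a circle of radius r
    v = sigma_v * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    d = u @ w_o + v                           # assumed stationary data model d(i) = u_i w^o + v(i)
    e = d - u @ w                             # a priori error d(i) - u_i w_{i-1}
    w = w + (mu / np.linalg.norm(u) ** 2) * np.conj(u) * e    # NLMS update
```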
where {x0,i , xM,i } denote the leading and trailing columns of HM+1,i . It is assumed
that the regression data satisfy the following structural relation:
$$\bar{H}_{M,i} = \begin{bmatrix} 0 \\ H_{M,i-1}\,\Phi_M^{-1} \end{bmatrix}$$
Let wN denote the solution to the above least-squares problem. Can you derive a
recursive least-squares solution that updates wN to wN+1?
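
As a generic reference point only, the sketch below implements the standard growing-memory recursive least-squares rank-one update from wN to wN+1; whether and how it can be adapted to exploit the structural relation above is precisely what the problem asks you to work out. The data conventions assumed here (wN = PN HN* yN with PN = (HN* HN)^{-1}) are illustrative assumptions, not taken from the problem.

```python
import numpy as np

# Generic growing-memory RLS rank-one update w_N -> w_{N+1}; shown only as a reference,
# since exploiting the structural relation above is the actual task.
def rls_time_update(w, P, u, d):
    """Given w_N, P_N, a new 1 x M regressor row u and a scalar datum d, return w_{N+1}, P_{N+1}."""
    Pu = P @ u.conj()                 # P_N u^*
    g = Pu / (1.0 + u @ Pu)           # gain vector, equals P_{N+1} u^*
    w_new = w + g * (d - u @ w)       # w_{N+1}
    P_new = P - np.outer(g, u @ P)    # rank-one update of P_N
    return w_new, P_new
```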
5. (Chapters 2, 4) Consider noisy observations y(i) = x + v(i), where x and v(i) are
independent random variables, and v(i) is a zero-mean white random process
distributed as follows:
Moreover, x assumes the values {1+j, 1−j, −1+j, −1−j} with equal probability.
The value of x is the same for all measurements {y(i)}.
(b) Find the linear least-mean-squares estimate of x given the combined observa-
tions {y(0), y(1), . . . , y(N − 1)} and {y 2 (0), y 2 (1), . . . , y 2 (N − 1)}.
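
For experimenting with part (b), the following sketch forms the stacked data {y(i)} and {y²(i)} and computes a linear least-mean-squares estimate of x from sample moments. The noise sequence is supplied by the caller, so the sketch does not commit to any particular noise distribution; the function name and the sample-based approach are illustrative only.

```python
import numpy as np

# Sketch: sample-moment linear least-mean-squares estimate of x from the stacked
# observations {y(i)} and {y^2(i)}; illustrative only, not part of the problem.
def llmse_of_x(x_samples, v_samples):
    """x_samples: (K,) draws of x;  v_samples: (K, N) draws of the noise sequence."""
    y = x_samples[:, None] + v_samples            # y(i) = x + v(i), shape (K, N)
    z = np.hstack([y, y ** 2])                    # stacked observations, shape (K, 2N)
    zc = z - z.mean(axis=0)                       # center the data (y^2 need not be zero-mean)
    xc = x_samples - x_samples.mean()
    Rz = (zc.conj().T @ zc) / len(zc)             # sample covariance of z
    rzx = (zc.conj().T @ xc) / len(zc)            # sample cross-correlation of z with x
    w = np.linalg.lstsq(Rz, rzx, rcond=None)[0]   # solve the normal equations
    return x_samples.mean() + zc @ w              # linear estimates of x
```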
7. (Chapters 8, 10) Let d denote a scalar zero-mean random variable with variance
σd2 , and let u denote a 1 × M zero-mean random vector with covariance matrix
Ru = E u∗ u > 0. Consider the optimization problem
$$\min_w\; E\,|d - uw|^2 \qquad\text{subject to}\qquad \sum_{k=1}^{M} c(k)\,w(k) = 1$$
where the {w(k)} denote the individual entries of w and the {c(k)} are scaling
coefficients.
(a) Derive a stochastic-gradient algorithm for approximating the optimal solution
wo in terms of realizations {d(i), ui } for {d, u}, and starting from an initial
condition w−1 that satisfies the constraint.
(b) Derive an approximate expression for the EMSE of the filter for sufficiently
small step-sizes.
(c) Derive an optimal choice for the coefficients {c(k)} in order to result in the
smallest EMSE.
(d) (Bonus) Can you repeat parts (a)-(c) when the {c(k)} are required to be non-
negative scalars?
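
The sketch below is not required by the problem; it only computes a numerical baseline for the constrained optimum wo by solving the KKT linear system of the optimization above, which you can use to check the stochastic-gradient algorithm derived in part (a). The moments Ru, rdu and the coefficients c(k) are illustrative assumptions, taken real for simplicity.

```python
import numpy as np

# Numerical baseline for the constrained optimum of  min E|d - uw|^2  s.t.  sum_k c(k) w(k) = 1,
# obtained from the KKT system  [ R_u  c ; c^T  0 ] [ w ; mu ] = [ r_du ; 1 ].
# R_u, r_du, and c below are illustrative assumptions.
rng = np.random.default_rng(1)
M = 4
A = rng.standard_normal((M, M))
Ru = A @ A.T + np.eye(M)                 # a positive-definite covariance E u* u
rdu = rng.standard_normal(M)             # cross-correlation vector E u* d
c = np.ones(M)                           # scaling coefficients c(k)

kkt = np.block([[Ru, c[:, None]], [c[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([rdu, [1.0]]))
w_o = sol[:M]                            # constrained optimum w^o
print(np.dot(c, w_o))                    # constraint value; equals 1 up to numerical precision
```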
8. (Chapters 15, 16, 23) Consider the following constrained LMS recursion
$$w_i = w_{i-1} + \mu\left[I - \frac{cc^*}{\|c\|^2}\right] u_i^*\,\bigl[d(i) - u_i w_{i-1}\bigr], \qquad c^* w_{-1} = 1$$
where the {w(k)} denote the individual entries of w and the {c(k)} are the scalar
entries of the column vector c. Moreover, d denotes a scalar zero-mean random
variable with variance σd2 , and u denotes a 1 × M zero-mean random vector with
covariance matrix Ru = E u∗ u > 0. Assume all data are circular Gaussian.
(a) Perform a transient mean-square-error analysis of the adaptive filter and pro-
vide conditions on the step-size µ in order to ensure that the filter is mean-
square stable. Specify clearly the conditions on the data that you are assuming
for your analysis.
(b) Derive expressions for the EMSE and the MSD of the filter.
(c) Derive an expression for the learning curve of the filter.
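
A minimal simulation sketch of the constrained LMS recursion above follows. The dimensions, step-size, constraint vector c, and the linear regression model generating d(i) are illustrative assumptions; the regressors and noise are drawn circular Gaussian, as stated in the problem.

```python
import numpy as np

# Simulation sketch of the constrained LMS recursion above; all numerical values and
# the model generating d(i) are illustrative assumptions.
rng = np.random.default_rng(2)
M, mu, sigma_v, n_iter = 4, 0.01, 0.1, 20000
c = rng.standard_normal(M) + 1j * rng.standard_normal(M)
P = np.eye(M) - np.outer(c, c.conj()) / np.linalg.norm(c) ** 2   # I - cc*/||c||^2

w = c / np.linalg.norm(c) ** 2           # initial condition satisfying c* w_{-1} = 1
w_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)       # assumed model for d(i)

for i in range(n_iter):
    u = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)   # circular Gaussian
    v = sigma_v * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    d = u @ w_o + v
    w = w + mu * P @ (np.conj(u) * (d - u @ w))   # constrained LMS update

print(np.vdot(c, w))    # c* w_i remains equal to 1 at every iteration
```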
where W > 0 and Π > 0. Let ŷ = H ŵ denote the resulting estimate of y and let ξ
denote the corresponding minimum cost. Now consider the extended problem
where a and b are positive scalars, ha and hb are column vectors, αa and αb are
scalars, d is a scalar, u is a row vector, and
$$\Pi_z = \begin{bmatrix} a & & \\ & \Pi & \\ & & b \end{bmatrix}, \qquad W_z = \begin{bmatrix} W & \\ & 1 \end{bmatrix}$$
Let
$$\hat{y}_z = \begin{bmatrix} h_a & H & h_b \\ \alpha_a & u & \alpha_b \end{bmatrix}\hat{w}_z$$
and let ξz denote the corresponding minimum cost of the extended problem.
(a) Relate {ŵz , ŷz , ξz } to {ŵ, ŷ, ξ}.
(b) Can you motivate and derive an array algorithm to update the solution from ŵ to ŵz ?
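
For numerical experimentation with this problem, the sketch below forms the base and extended least-squares problems and solves both directly. It assumes the base cost is the standard regularized weighted least-squares criterion w*Πw + (y − Hw)*W(y − Hw), consistent with W > 0 and Π > 0, and that the extended data vector stacks y and the scalar d; these modeling choices and all numerical values are assumptions for illustration only.

```python
import numpy as np

# Sketch for experimenting with the base and extended weighted least-squares problems;
# the assumed cost, the stacking of the extended data, and all values are illustrative.
rng = np.random.default_rng(3)
N, M = 6, 3
H = rng.standard_normal((N, M))
y = rng.standard_normal(N)
Pi = np.eye(M)
W = np.diag(rng.uniform(0.5, 2.0, N))
a, b = 2.0, 3.0
h_a, h_b = rng.standard_normal(N), rng.standard_normal(N)
alpha_a, alpha_b = rng.standard_normal(), rng.standard_normal()
u = rng.standard_normal(M)
d = rng.standard_normal()

def solve_wls(Pi_, W_, H_, y_):
    """Minimizer, estimate H w, and minimum cost of w*Pi*w + (y - Hw)*W*(y - Hw)."""
    w = np.linalg.solve(Pi_ + H_.T @ W_ @ H_, H_.T @ W_ @ y_)
    r = y_ - H_ @ w
    return w, H_ @ w, w @ Pi_ @ w + r @ W_ @ r

w_hat, y_hat, xi = solve_wls(Pi, W, H, y)           # base problem: {w_hat, y_hat, xi}

# Extended quantities: Pi_z = diag{a, Pi, b}, W_z = diag{W, 1}, H_z = [h_a H h_b; alpha_a u alpha_b].
Pi_z = np.zeros((M + 2, M + 2))
Pi_z[0, 0], Pi_z[1:M + 1, 1:M + 1], Pi_z[M + 1, M + 1] = a, Pi, b
W_z = np.zeros((N + 1, N + 1))
W_z[:N, :N], W_z[N, N] = W, 1.0
H_z = np.vstack([np.hstack([h_a[:, None], H, h_b[:, None]]),
                 np.hstack([[alpha_a], u, [alpha_b]])])
y_z = np.concatenate([y, [d]])
w_z, yhat_z, xi_z = solve_wls(Pi_z, W_z, H_z, y_z)  # extended problem: {w_z, yhat_z, xi_z}
```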
10. (Chapter 30) Let wi denote the solution to the following regularized least-squares
problem:
$$\min_w\;\left[\lambda^{i+1}\, w^*\Pi w + \|y_i - H_i w\|^2\right]$$
with d(i) denoting a scalar and ui denoting a row vector. We also define the matrix
$$P_i = \left[\lambda^{i+1}\,\Pi + H_i^* H_i\right]^{-1}$$
(a) Assume that at time i, it holds that Hi∗ Hi > 0. Let wu,i denote the solution to
the following un-regularized least-squares problem:
$$\min_w\; \|y_i - H_i w\|^2$$
and let Pu,i = [Hi∗ Hi ]−1 . Provide a recursive algorithm to compute the un-
regularized quantities {wu,i , Pu,i } from the regularized quantities {wi , Pi }.
(b) Derive a recursive algorithm to update the regularized solution, i.e., to compute
{wi , Pi } from {wi−1 , Pi−1 }.
In both parts (a) and (b), your algorithm should use a series of rank-1 updates, and
direct matrix inversions are not allowed.
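
The following sketch evaluates {wi, Pi} directly from their definitions, and can serve as a reference when checking the recursions derived in parts (a) and (b). The assumed stacking of yi and Hi from the scalars d(j) and rows uj is an illustrative assumption, and the sketch deliberately uses an explicit matrix inverse, which the recursive algorithms you derive should avoid.

```python
import numpy as np

# Direct (batch) evaluation of the regularized quantities {w_i, P_i} from their
# definitions; illustrative reference only.  It assumes y_i and H_i stack the scalars
# d(j) and row vectors u_j for j = 0..i, and uses an explicit inverse on purpose.
rng = np.random.default_rng(4)
M, lam, n = 3, 0.95, 50
Pi0 = np.eye(M)                                   # regularization matrix Pi > 0
U = rng.standard_normal((n, M))                   # rows u_j
d = U @ rng.standard_normal(M) + 0.1 * rng.standard_normal(n)   # scalars d(j)

def regularized_ls(i):
    """Return {w_i, P_i} computed directly from the definitions above."""
    Hi, yi = U[: i + 1], d[: i + 1]
    Pi_mat = np.linalg.inv(lam ** (i + 1) * Pi0 + Hi.T @ Hi)     # P_i
    wi = Pi_mat @ (Hi.T @ yi)                                     # regularized solution w_i
    return wi, Pi_mat

w10, P10 = regularized_ls(10)
```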