Exercises 02
2.78. (a) Is it possible, in general, to solve a symmetric indefinite linear system at a cost similar to that for using Cholesky factorization to solve a symmetric positive definite linear system?
(b) If so, what is an algorithm for accomplishing this? If not, why?

2.79. Give two reasons why iterative improvement for solutions of linear systems is often impractical to implement.

2.80. Suppose you have already solved the n × n linear system Ax = b by LU factorization and back-substitution. What is the further cost (order of magnitude will suffice) of solving a new system
(a) With the same matrix but a different right-hand-side vector?
(b) With the matrix changed by adding a matrix of rank one?
(c) With the matrix changed completely?
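Review question 2.80(a) hinges on reusing the factorization: once A = LU has been computed at O(n^3) cost, each additional right-hand side needs only O(n^2) work for the two triangular solves. A minimal sketch using SciPy's lu_factor/lu_solve (the library and the random test matrix are my own choices, not the book's):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 5
# a well-conditioned test matrix (diagonally dominated)
A = rng.standard_normal((n, n)) + n * np.eye(n)

# Factor A once: O(n^3) work.
lu, piv = lu_factor(A)

# Each new right-hand side reuses the factors: only O(n^2)
# work per solve (part (a) of review question 2.80).
b1 = rng.standard_normal(n)
b2 = rng.standard_normal(n)
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)
```

For part (b), a rank-one change can also be handled in O(n^2) work via the Sherman-Morrison formula of exercise 2.27, again reusing the existing factors.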
Exercises
2.1. In Section 2.2, four defining properties are given for a singular matrix. Show that these four properties are indeed equivalent.

2.2. Suppose that each of the row sums of an n × n matrix A is equal to zero. Show that A must be singular.

2.3. Suppose that A is a singular n × n matrix. Prove that if the linear system Ax = b has at least one solution x, then it has infinitely many solutions.

2.4. (a) Show that the following matrix is singular.

    A = [ 1  1  0 ]
        [ 1  2  1 ]
        [ 1  3  2 ]

(b) If b = [ 2  4  6 ]^T, how many solutions are there to the system Ax = b?

2.5. What is the inverse of the following matrix?

    A = [ 1   0  0 ]
        [ 1  -1  0 ]
        [ 1  -2  1 ]

2.6. Let A be an n × n matrix such that A^2 = O, the zero matrix. Show that A must be singular.

2.7. Let

    A = [ 1      1 + ε ]
        [ 1 - ε  1     ].

(a) What is the determinant of A?
(b) In floating-point arithmetic, for what range of values of ε will the computed value of the determinant be zero?
(c) What is the LU factorization of A?
(d) In floating-point arithmetic, for what range of values of ε will the computed value of U be singular?

2.8. Let A and B be any two n × n matrices.
(a) Prove that (AB)^T = B^T A^T.
(b) If A and B are both nonsingular, prove that (AB)^-1 = B^-1 A^-1.

2.9. If A is any nonsingular real matrix, show that (A^-1)^T = (A^T)^-1. Consequently, the notation A^-T can be used unambiguously to denote this matrix. Similarly, if A is any nonsingular complex matrix, then (A^-1)^H = (A^H)^-1, and the notation A^-H can be used unambiguously to denote this matrix.

2.10. Let P be any permutation matrix.
(a) Prove that P^-1 = P^T.
(b) Prove that P can be expressed as a product of pairwise interchanges.

2.11. Write out a detailed algorithm for solving a lower triangular linear system Lx = b by forward-substitution.

2.12. Verify that the dominant term in the operation count (number of multiplications or number of additions) for solving a lower triangular system of order n by forward substitution is n^2/2.

2.13. How would you solve a partitioned linear system of the form

    [ L1  O  ] [ x ]   [ b ]
    [ B   L2 ] [ y ] = [ c ],

where L1 and L2 are nonsingular lower triangular matrices, and the solution and right-hand-side vectors are partitioned accordingly? Show the specific steps you would perform in terms of the given submatrices and vectors.
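For the block system of exercise 2.13, the natural steps are a triangular solve for x, then one for y. A sketch using SciPy's solve_triangular (the library choice and random test data are mine, not the text's):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
n = 4
# nonsingular lower triangular diagonal blocks and an arbitrary B
L1 = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
L2 = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
B = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Step 1: solve L1 x = b by forward substitution.
x = solve_triangular(L1, b, lower=True)
# Step 2: solve L2 y = c - B x, using the x just computed.
y = solve_triangular(L2, c - B @ x, lower=True)
```

Only triangular solves and one matrix-vector product are needed; the partitioned matrix is never formed explicitly.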
2.14. Prove each of the four properties of elementary elimination matrices enumerated in Section 2.4.3.

2.15. (a) Prove that the product of two lower triangular matrices is lower triangular.
(b) Prove that the inverse of a nonsingular lower triangular matrix is lower triangular.

2.16. (a) What is the LU factorization of the following matrix?

    [ 1  a ]
    [ c  b ]

(b) Under what condition is this matrix singular?

2.17. Write out the LU factorization of the following matrix (show both the L and U matrices explicitly):

    [  1  -1   0 ]
    [ -1   2  -1 ]
    [  0  -1   1 ].

2.18. Prove that the matrix

    A = [ 0  1 ]
        [ 1  0 ]

has no LU factorization, i.e., no lower triangular matrix L and upper triangular matrix U exist such that A = LU.

2.19. Let A be an n × n nonsingular matrix. Consider the following algorithm:
1. Scan columns 1 through n of A in succession, and permute rows, if necessary, so that the diagonal entry is the largest entry in magnitude on or below the diagonal in each column. The result is P A for some permutation matrix P.
2. Now carry out Gaussian elimination without pivoting to compute the LU factorization of P A.
(a) Is this algorithm numerically stable?
(b) If so, explain why. If not, give a counterexample to illustrate.

2.20. Prove that if Gaussian elimination with partial pivoting is applied to a matrix A that is diagonally dominant by columns, then no row interchanges will occur.

2.21. If A, B, and C are n × n matrices, with B and C nonsingular, and b is an n-vector, how would you implement the formula

    x = B^-1 (2A + I)(C^-1 + A) b

without computing any matrix inverses?

2.22. Verify that the dominant term in the operation count (number of multiplications or number of additions) for LU factorization of a matrix of order n by Gaussian elimination is n^3/3.

2.23. Verify that the dominant term in the operation count (number of multiplications or number of additions) for computing the inverse of a matrix of order n by Gaussian elimination is n^3.

2.24. Verify that the dominant term in the operation count (number of multiplications or number of additions) for Gauss-Jordan elimination for a matrix of order n is n^3/2.

2.25. (a) If u and v are nonzero n-vectors, prove that the n × n outer product matrix uv^T has rank one.
(b) If A is an n × n matrix such that rank(A) = 1, prove that there exist nonzero n-vectors u and v such that A = uv^T.

2.26. An n × n matrix A is said to be elementary if it differs from the identity matrix by a matrix of rank one, i.e., if A = I - uv^T for some n-vectors u and v.
(a) If A is elementary, what condition on u and v ensures that A is nonsingular?
(b) If A is elementary and nonsingular, prove that A^-1 is also elementary by showing that A^-1 = I - σuv^T for some scalar σ. What is the specific value for σ, in terms of u and v?
(c) Is an elementary elimination matrix, as defined in Section 2.4.3, elementary? If so, what are u, v, and σ in this case?

2.27. Prove that the Sherman-Morrison formula

    (A - uv^T)^-1 = A^-1 + A^-1 u (1 - v^T A^-1 u)^-1 v^T A^-1

given in Section 2.4.9 is correct. (Hint: Multiply both sides by A - uv^T.)

2.28. Prove that the Woodbury formula

    (A - UV^T)^-1 = A^-1 + A^-1 U (I - V^T A^-1 U)^-1 V^T A^-1

given in Section 2.4.9 is correct. (Hint: Multiply both sides by A - UV^T.)

2.29. For p = 1, 2, and ∞, prove that the vector p-norms satisfy the three properties given near the end of Section 2.3.1.
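The Sherman-Morrison identity of exercise 2.27 can be spot-checked numerically before attempting the proof. A sketch (NumPy, with random well-conditioned data of my choosing; a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well conditioned
u = rng.standard_normal(n) / 10  # small, so 1 - v^T A^-1 u stays away from 0
v = rng.standard_normal(n)

Ainv = np.linalg.inv(A)
# right-hand side of the formula:
# A^-1 + A^-1 u (1 - v^T A^-1 u)^-1 v^T A^-1
rhs = Ainv + np.outer(Ainv @ u, v @ Ainv) / (1.0 - v @ Ainv @ u)
# left-hand side, computed directly (only for checking; in practice
# the whole point is to avoid forming this inverse)
lhs = np.linalg.inv(A - np.outer(u, v))
```

The same pattern, with U and V as n × k matrices and the scalar replaced by a k × k solve, checks the Woodbury formula of exercise 2.28.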
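The norm properties in exercise 2.29 can be sanity-checked numerically for particular vectors before proving them in general (the vectors are arbitrary choices of mine, and a finite check is of course no substitute for a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(6)
y = rng.standard_normal(6)
alpha = -2.5

for p in (1, 2, np.inf):
    nx = np.linalg.norm(x, p)
    # positivity for nonzero x
    assert nx > 0
    # homogeneity: ||alpha x|| = |alpha| ||x||
    assert np.isclose(np.linalg.norm(alpha * x, p), abs(alpha) * nx)
    # triangle inequality: ||x + y|| <= ||x|| + ||y||
    assert np.linalg.norm(x + y, p) <= nx + np.linalg.norm(y, p) + 1e-12
```

The small 1e-12 slack guards against rounding in the borderline case; the mathematical inequality itself is exact.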
2.30. For p = 1 and ∞, prove that the matrix p-norms satisfy the five properties given near the end of Section 2.3.2.

2.31. Let A be a symmetric positive definite matrix. Show that the function

    ||x||_A = (x^T A x)^(1/2)

satisfies the three properties of a vector norm given near the end of Section 2.3.1. This vector norm is said to be induced by the matrix A.

2.32. Show that the following functions of an m × n matrix A satisfy the first three properties of a matrix norm given near the end of Section 2.3.2 and hence are matrix norms in the more general sense mentioned there.
(a)

    ||A||_max = max_{i,j} |a_ij|

Note that this is simply the ∞-norm of A considered as a vector in R^mn.
(b)

    ||A||_F = ( Σ_{i,j} |a_ij|^2 )^(1/2)

Note that this is simply the 2-norm of A considered as a vector in R^mn. It is called the Frobenius matrix norm.

2.33. Prove or give a counterexample: If A is a nonsingular matrix, then ||A^-1|| = ||A||^-1.

2.34. Suppose that A is a positive definite matrix.
(a) Show that A must be nonsingular.
(b) Show that A^-1 must be positive definite.

2.35. Suppose that the matrix A has a factorization of the form A = BB^T, with B nonsingular. Show that A must be symmetric and positive definite.

2.36. Derive an algorithm for computing the Cholesky factorization LL^T of an n × n symmetric positive definite matrix A by equating the corresponding entries of A and LL^T.

2.37. Suppose that the symmetric matrix

    B = [ α  a^T ]
        [ a  A   ]

of order n + 1 is positive definite.
(a) Show that the scalar α must be positive and the n × n matrix A must be positive definite.
(b) What is the Cholesky factorization of B in terms of the constituent submatrices?

2.38. Suppose that the symmetric matrix

    B = [ A    a ]
        [ a^T  α ]

of order n + 1 is positive definite.
(a) Show that the scalar α must be positive and the n × n matrix A must be positive definite.
(b) What is the Cholesky factorization of B in terms of the constituent submatrices?

2.39. Verify that the dominant term in the operation count (number of multiplications or number of additions) for Cholesky factorization of a symmetric positive definite matrix of order n is n^3/6.

2.40. Let A be a band matrix with bandwidth β, and suppose that the LU factorization P A = LU is computed using Gaussian elimination with partial pivoting. Show that the bandwidth of the upper triangular factor U is at most 2β.

2.41. Let A be a nonsingular tridiagonal matrix.
(a) Show that in general A^-1 is dense.
(b) Compare the work and storage required in this case to solve the linear system Ax = b by Gaussian elimination and back-substitution with those required to solve the system by explicit matrix inversion.
This example illustrates yet another reason why explicit matrix inversion is usually a bad idea.

2.42. (a) Devise an algorithm for computing the inverse of a nonsingular n × n triangular matrix in place, i.e., with no additional array storage.
(b) Is it possible to compute the inverse of a general nonsingular n × n matrix in place? If so, sketch an algorithm for doing so, and if not, explain why. For purposes of this exercise, you may assume that pivoting is not required.

2.43. Suppose you need to solve the linear system Cz = d, where C is a complex n × n matrix and d and z are complex n-vectors, but your linear equation solver handles only real systems. Let C = A + iB and d = b + ic, where A, B, b, and c are real and i = √-1. Show that the solution z = x + iy is given by the 2n × 2n real linear system

    [ A  -B ] [ x ]   [ b ]
    [ B   A ] [ y ] = [ c ].

Is this a good way to solve this problem? Why?
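The real reformulation in exercise 2.43 is easy to verify numerically. A sketch (NumPy; the random test data is mine, and checking the equivalence says nothing about whether the approach is advisable, which is the point of the exercise's final question):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# complex system C z = d with C = A + iB, d = b + ic
C = A + 1j * B
d = b + 1j * c
z = np.linalg.solve(C, d)

# equivalent 2n x 2n real system [[A, -B], [B, A]] [x; y] = [b; c]
M = np.block([[A, -B], [B, A]])
xy = np.linalg.solve(M, np.concatenate([b, c]))
x, y = xy[:n], xy[n:]
```

Doubling the system order roughly doubles the storage and substantially increases the arithmetic relative to working in complex arithmetic directly, which is worth weighing in answering the exercise's last question.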