Handbook of Matrices

H. Lütkepohl
Humboldt-Universität zu Berlin, Germany

JOHN WILEY & SONS
Chichester · New York · Brisbane · Toronto · Singapore

Copyright © 1996 by John Wiley & Sons Ltd,
Baffins Lane, Chichester,
West Sussex PO19 1UD, England

National 01243 779777
International (+44) 1243 779777
e-mail (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on http://www.wiley.co.uk or http://www.wiley.com

All Rights Reserved. No part of this book may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording or otherwise, except under the terms of
the Copyright, Designs and Patents Act 1988 or under the terms of a licence
issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London
W1P 9HE, UK, without the permission in writing of the publisher.

Other Wiley Editorial Offices

John Wiley & Sons, Inc., 605 Third Avenue,
New York, NY 10158-0012, USA

Jacaranda Wiley Ltd, 33 Park Road, Milton,
Queensland 4064, Australia

John Wiley & Sons (Canada) Ltd, 22 Worcester Road,
Rexdale, Ontario M9W 1L1, Canada

John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01,
Jin Xing Distripark, Singapore 0512

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN 0 471 96688 6; 0 471 97015 8 (pbk.)

Produced from camera-ready copy supplied by the authors using LaTeX
Printed and bound in Great Britain by Biddles Ltd, Guildford and King's Lynn
This book is printed on acid-free paper responsibly manufactured from sustainable forestation,
for which at least two trees are planted for each one used for paper production.

Contents
Preface
List of Symbols

1 Definitions, Notation, Terminology
   1.1 Basic Notation and Terminology
   1.2 Operations Relating Matrices
   1.3 Inequality Relations Between Matrices
   1.4 Operations Related to Individual Matrices
   1.5 Some Special Matrices
   1.6 Some Terms and Quantities Related to Matrices

2 Rules for Matrix Operations
   2.1 Rules Related to Matrix Sums and Differences
   2.2 Rules Related to Matrix Multiplication
   2.3 Rules Related to Multiplication by a Scalar
   2.4 Rules for the Kronecker Product
   2.5 Rules for the Hadamard Product
   2.6 Rules for Direct Sums

3 Matrix Valued Functions of a Matrix
   3.1 The Transpose
   3.2 The Conjugate
   3.3 The Conjugate Transpose
   3.4 The Adjoint of a Square Matrix
   3.5 The Inverse of a Square Matrix
      3.5.1 General Results
      3.5.2 Inverses Involving Sums and Differences
      3.5.3 Partitioned Inverses
      3.5.4 Inverses Involving Commutation, Duplication and Elimination Matrices
   3.6 Generalized Inverses
      3.6.1 General Results
      3.6.2 The Moore-Penrose Inverse
   3.7 Matrix Powers
   3.8 The Absolute Value

4 Trace, Determinant and Rank of a Matrix
   4.1 The Trace
      4.1.1 General Results
      4.1.2 Inequalities Involving the Trace
      4.1.3 Optimization of Functions Involving the Trace
   4.2 The Determinant
      4.2.1 General Results
      4.2.2 Determinants of Partitioned Matrices
      4.2.3 Determinants Involving Duplication Matrices
      4.2.4 Determinants Involving Elimination Matrices
      4.2.5 Determinants Involving Both Duplication and Elimination Matrices
      4.2.6 Inequalities Related to Determinants
      4.2.7 Optimization of Functions Involving a Determinant
   4.3 The Rank of a Matrix
      4.3.1 General Results
      4.3.2 Matrix Decompositions Related to the Rank
      4.3.3 Inequalities Related to the Rank

5 Eigenvalues and Singular Values
   5.1 Definitions
   5.2 Properties of Eigenvalues and Eigenvectors
      5.2.1 General Results
      5.2.2 Optimization Properties of Eigenvalues
      5.2.3 Matrix Decompositions Involving Eigenvalues
   5.3 Eigenvalue Inequalities
      5.3.1 Inequalities for the Eigenvalues of a Single Matrix
      5.3.2 Relations Between Eigenvalues of More Than One Matrix
   5.4 Results for the Spectral Radius
   5.5 Singular Values
      5.5.1 General Results
      5.5.2 Inequalities

6 Matrix Decompositions and Canonical Forms
   6.1 Complex Matrix Decompositions
      6.1.1 Jordan Type Decompositions
      6.1.2 Diagonal Decompositions
      6.1.3 Other Triangular Decompositions and Factorizations
      6.1.4 Miscellaneous Decompositions
   6.2 Real Matrix Decompositions
      6.2.1 Jordan Decompositions
      6.2.2 Other Real Block Diagonal and Diagonal Decompositions
      6.2.3 Other Triangular and Miscellaneous Reductions

7 Vectorization Operators
   7.1 Definitions
   7.2 Rules for the vec Operator
   7.3 Rules for the vech Operator

8 Vector and Matrix Norms
   8.1 General Definitions
   8.2 Specific Norms and Inner Products
   8.3 Results for General Norms and Inner Products
   8.4 Results for Matrix Norms
      8.4.1 General Matrix Norms
      8.4.2 Induced Matrix Norms
   8.5 Properties of Special Norms
      8.5.1 General Results
      8.5.2 Inequalities

9 Properties of Special Matrices
   9.1 Circulant Matrices
   9.2 Commutation Matrices
      9.2.1 General Properties
      9.2.2 Kronecker Products
      9.2.3 Relations With Duplication and Elimination Matrices
   9.3 Convergent Matrices
   9.4 Diagonal Matrices
   9.5 Duplication Matrices
      9.5.1 General Properties
      9.5.2 Relations With Commutation and Elimination Matrices
      9.5.3 Expressions With vec and vech Operators
      9.5.4 Duplication Matrices and Kronecker Products
      9.5.5 Duplication Matrices, Elimination Matrices and Kronecker Products
   9.6 Elimination Matrices
      9.6.1 General Properties
      9.6.2 Relations With Commutation and Duplication Matrices
      9.6.3 Expressions With vec and vech Operators
      9.6.4 Elimination Matrices and Kronecker Products
      9.6.5 Elimination Matrices, Duplication Matrices and Kronecker Products
   9.7 Hermitian Matrices
      9.7.1 General Results
      9.7.2 Eigenvalues of Hermitian Matrices
      9.7.3 Eigenvalue Inequalities
      9.7.4 Decompositions of Hermitian Matrices
   9.8 Idempotent Matrices
   9.9 Nonnegative, Positive and Stochastic Matrices
      9.9.1 Definitions
      9.9.2 General Results
      9.9.3 Results Related to the Spectral Radius
   9.10 Orthogonal Matrices
      9.10.1 General Results
      9.10.2 Decompositions of Orthogonal Matrices
   9.11 Partitioned Matrices
      9.11.1 General Results
      9.11.2 Determinants of Partitioned Matrices
      9.11.3 Partitioned Inverses
      9.11.4 Partitioned Generalized Inverses
      9.11.5 Partitioned Matrices Related to Duplication Matrices
   9.12 Positive Definite, Negative Definite and Semidefinite Matrices
      9.12.1 General Properties
      9.12.2 Eigenvalue Results
      9.12.3 Decomposition Theorems for Definite Matrices
   9.13 Symmetric Matrices
      9.13.1 General Properties
      9.13.2 Symmetry and Duplication Matrices
      9.13.3 Eigenvalues of Symmetric Matrices
      9.13.4 Eigenvalue Inequalities
      9.13.5 Decompositions of Symmetric and Skew-symmetric Matrices
   9.14 Triangular Matrices
      9.14.1 Properties of General Triangular Matrices
      9.14.2 Triangularity, Elimination and Duplication Matrices
      9.14.3 Properties of Strictly Triangular Matrices
   9.15 Unitary Matrices

10 Vector and Matrix Derivatives
   10.1 Notation
   10.2 Gradients and Hessian Matrices of Real Valued Functions with Vector Arguments
      10.2.1 Gradients
      10.2.2 Hessian Matrices
   10.3 Derivatives of Real Valued Functions with Matrix Arguments
      10.3.1 General and Miscellaneous Rules
      10.3.2 Derivatives of the Trace
      10.3.3 Derivatives of Determinants
   10.4 Jacobian Matrices of Linear Functions
      10.4.1 Linear Functions with General Matrix Arguments
      10.4.2 Linear Functions with Symmetric Matrix Arguments
      10.4.3 Linear Functions with Triangular Matrix Arguments
      10.4.4 Linear Functions of Vector and Matrix Valued Functions with Vector Arguments
   10.5 Product Rules
      10.5.1 Matrix Products
      10.5.2 Kronecker and Hadamard Products
      10.5.3 Functions with Symmetric Matrix Arguments
      10.5.4 Functions with Lower Triangular Matrix Arguments
      10.5.5 Products of Matrix Valued Functions with Vector Arguments
   10.6 Jacobian Matrices of Functions Involving Inverse Matrices
      10.6.1 Matrix Products
      10.6.2 Kronecker and Hadamard Products
      10.6.3 Matrix Valued Functions with Vector Arguments
   10.7 Chain Rules and Miscellaneous Jacobian Matrices
   10.8 Jacobian Determinants
      10.8.1 Linear Transformations
      10.8.2 Nonlinear Transformations
   10.9 Matrix Valued Functions of a Scalar Variable

11 Polynomials, Power Series and Matrices
   11.1 Definitions and Notations
      11.1.1 Definitions and Notation Related to Polynomials
      11.1.2 Matrices Related to Polynomials
      11.1.3 Polynomials and Power Series Related to Matrices
   11.2 Results Relating Polynomials and Matrices
   11.3 Polynomial Matrices
      11.3.1 Definitions
      11.3.2 Results for Polynomial Matrices

Appendix A Dictionary of Matrices and Related Terms
References
Index

Preface
Nowadays matrices are used in many fields of science. Accordingly they have
become standard tools in statistics, econometrics, mathematics, engineering
and natural sciences textbooks. In fact, many textbooks from these fields have
chapters, appendices or sections on matrices. Often collections of those results
used in a book are included. Of course, there are also numerous books and
even journals on matrices. For my own work I have found it useful, however, to
have a collection of important matrix results handy for quick reference in one
source. Therefore I have started collecting matrix results which are important
for my own work. Over the years, this collection has grown to an extent that
it may now be a valuable source of results for other researchers and students
as well. To make it useful for less advanced students I have also included many
elementary results.
The idea is to provide a collection where special matrix results are easy
to locate. Therefore there is some repetition because some results fit under
different headings and are consequently listed more than once. For example,
for suitable matrices A, B and C, A(B+C) = AB + AC is a result on matrix
multiplication as well as on matrix sums. It is therefore listed under both
headings. Although reducing search costs has been an important objective in
putting together this collection of results, it is still possible that a specific
result is listed in a place where I would look for it but where not everyone
else would, because too much repetition turned out to be counterproductive.
Of course, this collection very much reflects my own personal preferences in
this respect. Also, it is, of course, not a complete collection of results. In fact,
in this respect it is again very subjective. Therefore, to make the volume more
useful to others in the future, I would like to hear of any readers' pet results
that I have left out. Moreover, I would be grateful to learn about errors that
readers discover. It seems unavoidable that flaws sneak in somewhere.
In this book, definitions and results are given only and no proofs. At the end
of most sections there is a note regarding proofs. Often a reference is made
to one or more textbooks where proofs can be found. No attempt has been
made to reference the origins of the results. Also, computational algorithms
are not given. These may again be found in the references.
As mentioned earlier, it is hoped that this book is useful as a reference
for researchers and students. I should perhaps give a warning, however, that
it requires some basic knowledge and understanding of matrix algebra. It
is not meant to be a tool for self teaching at an introductory level. Before
searching for specific matrix results readers may want to go over Chapter 1
to familiarize themselves with my notation and terminology. Definitions and
brief explanations of many terms related to matrices are given in an appendix.
Generally the results are stated for complex matrices (matrices of complex
numbers). Of course, they also hold for real matrices because the latter may
be regarded as special complex matrices. In some sections and chapters many
results are formulated for real matrices only. In those instances a note to that
effect appears at the beginning of the section or chapter. To make sure that a
specific result holds indeed for complex matrices it is therefore recommended
to check the beginning of the section and chapter where the result is found to
determine possible qualifications.
Finally, I would like to acknowledge the help of many people who com-
mented on this volume and helped me avoid errors. In particular, I am
grateful to Alexander Benkwitz, Jörg Breitung, Maike Burda, Helmut Her-
wartz, Kirstin Hubrich, Martin Moryson, and Rolf Tschernig for their care-
ful scrutiny. Of course, none of them bears responsibility for any remaining
errors. Financial support was provided by the Deutsche Forschungsgemein-
schaft, Sonderforschungsbereich 373.
Berlin, April 1996                                        Helmut Lütkepohl

List of Symbols
General Notation

Σ          summation
Π          product
≡          equivalent to, by definition equal to
⟹          implies that, only if
⟺          if and only if
π          3.14159...
i          √−1, imaginary unit
c̄          complex conjugate of a complex number c
ln         natural logarithm
exp        exponential function
sin        sine function
cos        cosine function
max        maximum
min        minimum
sup        supremum, least upper bound
inf        infimum, greatest lower bound
df/dx      derivative of the function f(·) with respect to x
∂f/∂x      partial derivative of the function f(·) with respect to x
n!         = 1 · 2 · · · n
(n over k) = n!/[k!(n − k)!], binomial coefficient

Sets and Spaces

{ · }      set
IN         positive integers
Z          integers
IR         real numbers
C          complex numbers
IR^m       m-dimensional Euclidean space, space of real (m × 1) vectors
IR^{m×n}   real (m × n) matrices
C^m        m-dimensional complex space, space of complex (m × 1) vectors
C^{m×n}    complex (m × n) matrices

General Matrices

A = [a_ij]                matrix with typical element a_ij
A (m × n)                 matrix with m rows and n columns
[A_1, A_2]                matrix consisting of submatrices A_1 and A_2
[A_1, . . . , A_n]        matrix consisting of submatrices A_1, . . . , A_n
[A_1 : A_2]               matrix consisting of submatrices A_1 and A_2
[A_1 : · · · : A_n]       matrix consisting of submatrices A_1, . . . , A_n
[A_11 A_12; A_21 A_22]    partitioned matrix consisting of submatrices A_11, A_12, A_21, A_22
[A_ij]                    partitioned matrix consisting of submatrices A_ij

Special Matrices

D_m                 (m² × ½m(m + 1)) duplication matrix, 9
I_m                 (m × m) identity matrix, 10
K_{m,n} or K_{mn}   (mn × mn) commutation matrix, 9
L_m                 (½m(m + 1) × m²) elimination matrix, 9
O_{m×n}             (m × n) zero or null matrix, 2
0                   zero, null matrix or zero vector

Matrix Operations

A + B      sum of matrices A and B, 3
A − B      difference between matrices A and B, 3
AB         product of matrices A and B, 3
cA, Ac     product of a scalar c and a matrix A, 3
A ⊗ B      Kronecker product of matrices A and B, 3
A ⊙ B      Hadamard or elementwise product of matrices A and B, 3
A ⊕ B      direct sum of matrices A and B, 4

Matrix Transformations and Functions

det A, det(A)             determinant of a matrix A, 6
tr A, tr(A)               trace of a matrix A, 4
‖A‖                       norm of a matrix A, 101
‖A‖_E                     Euclidean norm of a matrix A, 103
rk A, rk(A)               rank of a matrix A, 12
|A|_abs                   absolute value or modulus of a matrix A, 5
A'                        transpose of a matrix A, 5
Ā                         conjugate of a matrix A, 5
A^H                       conjugate transpose or Hermitian adjoint of a matrix A, 5
A^ad                      adjoint of a matrix A, 6
A^{-1}                    inverse of a matrix A, 7
A^-                       generalized inverse of a matrix A, 7
A^+                       Moore-Penrose inverse of a matrix A, 7
A^i                       ith power of a matrix A, 7
A^{1/2}                   square root of a matrix A, 8
dg(A), dg([a_ij])         (m × m) diagonal matrix with a_11, . . . , a_mm on its principal diagonal
diag(a_11, . . . , a_mm)  (m × m) diagonal matrix with a_11, . . . , a_mm on its principal diagonal
vec, col                  column stacking operator, 8
rvec                      row stacking operator, 8
row                       operator that stacks the rows of a matrix in a column vector, 8
vech                      half-vectorization operator, 8

Matrix Inequalities

A > 0      all elements of A are positive real numbers, 4
A ≥ 0      all elements of A are nonnegative real numbers, 4
A < 0      all elements of A are negative real numbers, 4
A ≤ 0      all elements of A are nonpositive real numbers, 4
A > B      each element of A is greater than the corresponding element of B, 4
A ≥ B      each element of A is greater than or equal to the corresponding element of B, 4

Other Symbols Related to Matrices

minor(a_ij)               minor of the element a_ij of a matrix A, 6
cof(a_ij)                 cofactor of the element a_ij of a matrix A, 6
λ(A)                      eigenvalue of a matrix A, 63
λ_max or λ_max(A)         maximum eigenvalue of a matrix A, 63
λ_min or λ_min(A)         minimum eigenvalue of a matrix A, 63
σ(A)                      singular value of a matrix A, 64
σ_max or σ_max(A)         maximum singular value of a matrix A, 64
σ_min or σ_min(A)         minimum singular value of a matrix A, 64
ρ(A)                      spectral radius of the matrix A, 64
P_A(λ)                    characteristic polynomial of a matrix A, 63
m_A(λ)                    minimal polynomial of a matrix A, 215

1 Definitions, Notation, Terminology
1.1 Basic Notation and Terminology
An (m × n) matrix A is an array of numbers:

        [ a_11  a_12  · · ·  a_1n ]
    A = [ a_21  a_22  · · ·  a_2n ]
        [   ⋮     ⋮            ⋮  ]
        [ a_m1  a_m2  · · ·  a_mn ]

Alternative notations are:

    A (m × n),        A        ,        A = [a_ij]  (m × n),
               (m × n)

where the dimension may also be indicated underneath the symbol.
m and n are positive integers denoting the row dimension and the column
dimension, respectively. (m x n) is the dimension or order of A. The
a_ij, i = 1, . . . , m, j = 1, . . . , n, are real (elements of IR) or complex numbers
(elements of C). They are the elements or entries of the matrix A and a_ij
is the ijth element or entry of A. The matrix A is sometimes called a real
matrix if all its elements are real numbers. If some of its elements are complex
numbers the matrix A is said to be a complex matrix. The set of all complex
(m × n) matrices is sometimes denoted by C^{m×n} and the set of all real (m × n)
matrices is denoted by IR^{m×n}.
Further notations:

        [ A_11  A_12  · · ·  A_1q ]
    A = [ A_21  A_22  · · ·  A_2q ]  =  [A_ij].
        [   ⋮     ⋮            ⋮  ]
        [ A_p1  A_p2  · · ·  A_pq ]

Here the A_ij are (m_i × n_j) submatrices of A with Σ_{i=1}^{p} m_i = m and
Σ_{j=1}^{q} n_j = n. A matrix written in terms of submatrices rather than individual
elements is often called a partitioned matrix or a block matrix. Special
cases are

    A = [A_1 : · · · : A_n] = [A_1, . . . , A_n],

where the A_i are blocks of columns, and

        [ A_1 ]
    A = [  ⋮  ] ,
        [ A_m ]

where the A_i are blocks of rows of A.
A matrix

        [ a_11  · · ·  a_1m ]
    A = [   ⋮           ⋮   ]  = [a_ij]   (m × m)
        [ a_m1  · · ·  a_mm ]

having the same number of rows and columns is a square or quadratic
matrix. The elements a_11, a_22, . . . , a_mm (a_ii, i = 1, . . . , m) constitute its
principal diagonal.

          [ 1      0 ]
    I_m = [    ⋱     ]  = [a_ij]   (m × m)
          [ 0      1 ]

with a_ii = 1 for i = 1, . . . , m, and a_ij = 0 for i ≠ j is an (m × m) identity or
unit matrix and

              [ 0  · · ·  0 ]
    O_{m×n} = [ ⋮          ⋮ ]  = [a_ij]   (m × n)
              [ 0  · · ·  0 ]

with a_ij = 0 for all i, j is an (m × n) zero or null matrix. It is sometimes
simply denoted by 0. A (1 × n) matrix

    a = [a_1, . . . , a_n]

is an n-dimensional row vector or (1 × n) vector. An (m × 1) matrix

        [ b_1 ]
    b = [  ⋮  ]
        [ b_m ]

is an m-vector, m-dimensional vector or m-dimensional column
vector. The set of all real m-vectors is often denoted by IR^m and the set
of all complex m-vectors is denoted by C^m. Many more special matrices are
listed in Section 1.5 and the Appendix.
1.2 Operations Relating Matrices

Addition: A = [a_ij] (m × n), B = [b_ij] (m × n)

    A + B := [a_ij + b_ij]   (m × n)

(for the rules see Section 2.1).

Subtraction: A = [a_ij] (m × n), B = [b_ij] (m × n)

    A − B := [a_ij − b_ij]   (m × n)

(for the rules see Section 2.1).

Multiplication by a scalar: A = [a_ij] (m × n), c a number (a scalar)

    cA := [c a_ij]   (m × n),     Ac := [a_ij c]   (m × n)

(for the rules see Section 2.3).

Matrix multiplication or matrix product:
A = [a_ij] (m × n), B = [b_ij] (n × p)

    AB := [ Σ_{k=1}^{n} a_ik b_kj ]   (m × p)

(for the rules see Section 2.2).

Kronecker product, tensor product or direct product:
A = [a_ij] (m × n), B = [b_ij] (p × q)

             [ a_11 B  · · ·  a_1n B ]
    A ⊗ B := [    ⋮              ⋮   ]   (mp × nq)
             [ a_m1 B  · · ·  a_mn B ]

(for the rules see Section 2.4).

Hadamard product, Schur product or elementwise product:
A = [a_ij] (m × n), B = [b_ij] (m × n)

    A ⊙ B := [a_ij b_ij]   (m × n)

(for the rules see Section 2.5).

Direct sum: A (m × m), B (n × n)

              [ A  0 ]
    A ⊕ B :=  [ 0  B ]   ((m + n) × (m + n))

(for the rules see Section 2.6).
Hierarchy of operations: If more than two matrices are related by the
foregoing operations, they are performed in the following order
(i) operations in parentheses or brackets,
(ii) matrix multiplication and multiplication by a scalar,
(iii) Kronecker product and Hadamard product,
(iv) addition and subtraction,
(v) direct sum.
Operations of the same hierarchical level are performed from left to
right.
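As a concrete illustration of the products defined above, the following short NumPy/SciPy sketch (not part of the original handbook; the library calls and the example matrices are illustrative assumptions) evaluates them for two (2 × 2) matrices.

    import numpy as np
    from scipy.linalg import block_diag

    A = np.array([[1., 2.],
                  [3., 4.]])
    B = np.array([[0., 1.],
                  [5., 6.]])

    print(A + B)             # sum
    print(A @ B)             # matrix product
    print(np.kron(A, B))     # Kronecker product A (x) B, a (4 x 4) matrix
    print(A * B)             # Hadamard (elementwise) product
    print(block_diag(A, B))  # direct sum A (+) B, a (4 x 4) block diagonal matrix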
1.3 Inequality Relations Between Matrices
A = [a_ij], B = [b_ij] (m × n) real:

    A > 0  ⟺  a_ij > 0,    i = 1, . . . , m,  j = 1, . . . , n,
    A ≥ 0  ⟺  a_ij ≥ 0,    i = 1, . . . , m,  j = 1, . . . , n,
    A > B  ⟺  a_ij > b_ij,  i = 1, . . . , m,  j = 1, . . . , n,
    A ≥ B  ⟺  a_ij ≥ b_ij,  i = 1, . . . , m,  j = 1, . . . , n.
Warning: In other literature inequality signs between matrices are
sometimes used to denote different relations between positive definite and
semidefinite matrices.
1.4 Operations Related to Individual Matrices
Trace: A = [a_ij] (m × m)

    tr A := tr(A) := a_11 + · · · + a_mm = Σ_{i=1}^{m} a_ii

    (for its properties see Section 4.1).
Absolute value or modulus: A = [a_ij] (m × n)

    |A|_abs := [ |a_ij|_abs ]   (m × n),

    where |c|_abs denotes the modulus of a complex number c = c_1 + ic_2,
    which is defined as |c|_abs = √(c_1² + c_2²) = √(c c̄). Here c̄ is the complex
    conjugate of c. Properties of the absolute value of a matrix are given
    in Section 3.8.

    Warning: In other literature |A| sometimes denotes the determinant
    of the matrix A.
Transpose: A = [a_ij] (m × n)

          [ a_11  a_21  · · ·  a_m1 ]
    A' := [ a_12  a_22  · · ·  a_m2 ]   (n × m),
          [   ⋮     ⋮            ⋮  ]
          [ a_1n  a_2n  · · ·  a_mn ]

    that is, the rows of A are the columns of A' (see Section 3.1 for its
    properties). In some other literature the transpose of a matrix A is
    denoted by A^T.

Conjugate: A = [a_ij] (m × n)

    Ā := [ā_ij],

    where ā_ij is the complex conjugate of a_ij (see Section 3.2 for its
    properties).

Conjugate transpose or Hermitian adjoint: A = [a_ij] (m × n)

    A^H := Ā' = [ā_ji]   (n × m)

    (see Section 3.3 for its properties).
Diagonal matrices: A = [a_ij] (m × m)

              [ a_11        0  ]
    dg(A) :=  [       ⋱        ]   (m × m)
              [ 0        a_mm  ]

    and diag(a_11, . . . , a_mm) denotes the (m × m) diagonal matrix with
    a_11, . . . , a_mm on its principal diagonal, so that dg(A) = diag(a_11, . . . , a_mm)
    (see Section 9.4 for further details).
Determinant: A = [a_ij] (m × m)

    det A = det(A) := Σ ± a_{1 i_1} a_{2 i_2} · · · a_{m i_m},

    where the sum is taken over all products consisting of precisely one
    element from each row and each column of A, multiplied by −1 or 1
    if the permutation i_1, . . . , i_m is odd or even, respectively (see Section
    4.2 for the properties of determinants).

Minor of a matrix: The determinant of a submatrix of an (m × m)
    matrix A is a minor of A.

Minor of an element of a matrix: A = [a_ij] (m × m). The minor of
    a_ij is the determinant of the matrix obtained by deleting the ith row
    and jth column from A,

                       [ a_{1,1}    · · ·  a_{1,j−1}    a_{1,j+1}    · · ·  a_{1,m}   ]
                       [    ⋮                  ⋮             ⋮                  ⋮     ]
    minor(a_ij) := det [ a_{i−1,1}  · · ·  a_{i−1,j−1}  a_{i−1,j+1}  · · ·  a_{i−1,m} ]
                       [ a_{i+1,1}  · · ·  a_{i+1,j−1}  a_{i+1,j+1}  · · ·  a_{i+1,m} ]
                       [    ⋮                  ⋮             ⋮                  ⋮     ]
                       [ a_{m,1}    · · ·  a_{m,j−1}    a_{m,j+1}    · · ·  a_{m,m}   ].

Principal minor: A = [a_ij] (m × m)

          [ a_11  · · ·  a_1k ]
    det   [   ⋮           ⋮   ]
          [ a_k1  · · ·  a_kk ]

    is a principal minor of A for k = 1, . . . , m − 1.
Cofactor of an element of a matrix: A = [a_ij] (m × m). The cofactor
    of a_ij is

    cof(a_ij) := (−1)^{i+j} minor(a_ij).

Adjoint: A = [a_ij] (m × m), m > 1,

              [ cof(a_11)  · · ·  cof(a_1m) ]'
    A^ad :=   [     ⋮                 ⋮     ]    = [cof(a_ij)]'.
              [ cof(a_m1)  · · ·  cof(a_mm) ]

    A^ad := 1 if m = 1 (see Section 3.4 for the properties of the adjoint).

Inverse: A = [a_ij] (m × m) with det(A) ≠ 0. The inverse of A is the unique
    (m × m) matrix A^{-1} satisfying AA^{-1} = A^{-1}A = I_m (see Section 3.5
    for its properties).
Generalized inverse: An (n × m) matrix A^- is a generalized inverse of
    the (m × n) matrix A if it satisfies AA^-A = A (see Section 3.6 for its
    properties).

Moore-Penrose (generalized) inverse: The (n × m) matrix A^+ is the
    Moore-Penrose (generalized) inverse of the (m × n) matrix A if it
    satisfies

    (i) AA^+A = A,
    (ii) A^+AA^+ = A^+,
    (iii) (AA^+)^H = AA^+,
    (iv) (A^+A)^H = A^+A

    (see Section 3.6.2 for its properties).
Power of a matrix: A (m × m)

    A^i := A · · · A  (i times)      for positive integers i,
    A^0 := I_m,
    A^i := (A^{−i})^{−1}             for negative integers i, if det(A) ≠ 0.

    If A can be written as

    A = U diag(λ_1, . . . , λ_m) U^H

    for some unitary matrix U (see Section 1.5 and Chapter 6) then the
    power of A is defined for any α ∈ IR, α > 0, as follows:

    A^α := U diag(λ_1^α, . . . , λ_m^α) U^H.

    This definition applies for instance for Hermitian and real symmetric
    matrices (see Section 1.5 for the definitions of Hermitian and
    symmetric matrices and Section 3.7 for the properties of powers of
    matrices).

Square root of a matrix: The (m × m) matrix A^{1/2} is a square root of
    the (m × m) matrix A if A^{1/2} A^{1/2} = A. Elsewhere in the literature a
    matrix B satisfying B'B = A or BB' = A is sometimes regarded as
    a square root of A.
Vectorization: A = [a_ij] (m × n)

    vec A = vec(A) = col(A) := (a_11, a_21, . . . , a_m1, a_12, a_22, . . . , a_m2, . . . , a_1n, . . . , a_mn)'   (mn × 1),

    that is, vec stacks the columns of A in a column vector. Moreover,

    rvec(A) := [vec(A')]',

    that is, rvec stacks the rows of A in a row vector, and

    row(A) := vec(A') = rvec(A)',

    that is, row stacks the rows of A in a column vector. (See Chapter 7
    for the properties of vectorization operators.)

Half-vectorization: A = [a_ij] (m × m)

    vech A = vech(A) := (a_11, a_21, . . . , a_m1, a_22, a_32, . . . , a_m2, a_33, . . . , a_mm)'   (½m(m + 1) × 1),

    that is, vech stacks the columns of A from the principal diagonal
    downwards in a column vector (see Chapter 7 for the properties of
    the half-vectorization operator and more details).
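For instance, for an arbitrary (2 × 2) matrix A = [a_ij], these definitions give

    vec(A) = (a_11, a_21, a_12, a_22)',        rvec(A) = (a_11, a_12, a_21, a_22),
    row(A) = (a_11, a_12, a_21, a_22)',        vech(A) = (a_11, a_21, a_22)'.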
1.5 Some Special Matrices
Commutation matrix: The (mn × mn) matrix K_{m,n} is a commutation
    matrix if vec(A') = K_{m,n} vec(A) for any (m × n) matrix A. It is
    sometimes denoted by K_{mn}. For example,

                       [ 1 0 0 0 0 0 ]
                       [ 0 0 1 0 0 0 ]
    K_{2,3} = K_{23} = [ 0 0 0 0 1 0 ]
                       [ 0 1 0 0 0 0 ]
                       [ 0 0 0 1 0 0 ]
                       [ 0 0 0 0 0 1 ]

    is a commutation matrix. (For the properties of commutation matrices
    see Section 9.2.)
Diagonal matrix: An (m × m) matrix

        [ a_11        0  ]
    A = [       ⋱        ]  = [a_ij]
        [ 0        a_mm  ]

    with a_ij = 0 for i ≠ j is a diagonal matrix. (For the properties of
    diagonal matrices see Section 9.4.)
Duplication matrix: An (m² × ½m(m + 1)) matrix D_m is a duplication
    matrix if vec(A) = D_m vech(A) for any symmetric (m × m) matrix A.
    For example,

          [ 1 0 0 0 0 0 ]
          [ 0 1 0 0 0 0 ]
          [ 0 0 1 0 0 0 ]
          [ 0 1 0 0 0 0 ]
    D_3 = [ 0 0 0 1 0 0 ]
          [ 0 0 0 0 1 0 ]
          [ 0 0 1 0 0 0 ]
          [ 0 0 0 0 1 0 ]
          [ 0 0 0 0 0 1 ]
is a duplication matrix. (For the properties of duplication matrices
see Section 9.5.)
Elimination matrix: A (½m(m + 1) × m²) elimination matrix L_m is
    defined such that vech(A) = L_m vec(A) for any (m × m) matrix A.
    For example,

          [ 1 0 0 0 0 0 0 0 0 ]
          [ 0 1 0 0 0 0 0 0 0 ]
    L_3 = [ 0 0 1 0 0 0 0 0 0 ]
          [ 0 0 0 0 1 0 0 0 0 ]
          [ 0 0 0 0 0 1 0 0 0 ]
          [ 0 0 0 0 0 0 0 0 1 ]

    is an elimination matrix. (For the properties of elimination matrices
    see Section 9.6.)
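The defining identities of these three matrices are easy to check numerically. The following NumPy sketch (not from the handbook; the helper functions vec, vech, commutation, duplication and elimination are illustrative constructions) builds K_{2,3}, D_3 and L_3 and verifies the definitions above.

    import numpy as np

    def vec(A):
        # stack the columns of A in a column vector (column-major order)
        return A.reshape(-1, order='F')

    def vech(A):
        # stack the columns of a square A from the principal diagonal downwards
        m = A.shape[0]
        return np.concatenate([A[j:, j] for j in range(m)])

    def commutation(m, n):
        # K_{m,n} satisfies K_{m,n} vec(A) = vec(A') for every (m x n) matrix A
        K = np.zeros((m * n, m * n))
        for i in range(m):
            for j in range(n):
                K[i * n + j, j * m + i] = 1.0
        return K

    def h(i, j, m):
        # position of a_ij (i >= j, zero-based) within vech of an (m x m) matrix
        return sum(m - k for k in range(j)) + (i - j)

    def duplication(m):
        # D_m satisfies D_m vech(A) = vec(A) for every symmetric (m x m) matrix A
        D = np.zeros((m * m, m * (m + 1) // 2))
        for j in range(m):
            for i in range(m):
                D[j * m + i, h(max(i, j), min(i, j), m)] = 1.0
        return D

    def elimination(m):
        # L_m satisfies L_m vec(A) = vech(A) for every (m x m) matrix A
        L = np.zeros((m * (m + 1) // 2, m * m))
        for j in range(m):
            for i in range(j, m):
                L[h(i, j, m), j * m + i] = 1.0
        return L

    A = np.arange(6.0).reshape(2, 3)                          # an arbitrary (2 x 3) matrix
    S = np.array([[4., 1., 2.], [1., 5., 3.], [2., 3., 6.]])  # a symmetric (3 x 3) matrix
    assert np.allclose(commutation(2, 3) @ vec(A), vec(A.T))
    assert np.allclose(duplication(3) @ vech(S), vec(S))
    assert np.allclose(elimination(3) @ vec(S), vech(S))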
Hermitian matrix: An (m × m) matrix A is Hermitian if A^H = A.
    (For its properties see Section 9.7.)

Idempotent matrix: An (m × m) matrix A is idempotent if A² = A. (For
    the properties see Section 9.8.)
Identity matrix: An (m × m) matrix

          [ 1      0 ]
    I_m = [    ⋱     ]  = [a_ij]
          [ 0      1 ]

    with a_ii = 1 for i = 1, . . . , m and a_ij = 0 for i ≠ j is an identity or
    unit matrix.
Nonnegative matrix: A real (m × n) matrix A = [a_ij] is nonnegative if
    a_ij ≥ 0 for i = 1, . . . , m, j = 1, . . . , n. (For the properties see Section
    9.9.)

Nonsingular matrix: An (m × m) matrix A is said to be nonsingular or
    invertible or regular if det(A) ≠ 0 and thus A^{-1} exists. (For the rules
    for matrix inversion see Section 3.5.)

Normal matrix: An (m × m) matrix A is normal if A^H A = AA^H.

Null matrix: An (m × n) matrix is a null matrix or zero matrix, denoted
    by O_{m×n} or simply by 0, if all its elements are zero.

Orthogonal matrix: An (m × m) matrix A is orthogonal if A is
    nonsingular and A' = A^{-1}. (For the properties see Section 9.10.)
Positive and negative definite and semidefinite matrices: A
    Hermitian or real symmetric (m × m) matrix A is positive definite if
    x^H A x > 0 for all (m × 1) vectors x ≠ 0; it is positive semidefinite
    if x^H A x ≥ 0 for all (m × 1) vectors x; it is negative definite if
    x^H A x < 0 for all (m × 1) vectors x ≠ 0; it is negative semidefinite
    if x^H A x ≤ 0 for all (m × 1) vectors x; it is indefinite if (m × 1)
    vectors x and y exist such that x^H A x > 0 and y^H A y < 0 (see Section
    9.12).

Positive matrix: A real (m × n) matrix A = [a_ij] is positive if a_ij > 0 for
    i = 1, . . . , m, j = 1, . . . , n (see Section 9.9).

Symmetric matrix: An (m × m) matrix A = [a_ij] with a_ij = a_ji, i, j =
    1, . . . , m, is symmetric. In other words, A is symmetric if A' = A (see
    Section 9.13).
Triangular matrices: An (m × m) matrix

        [ a_11    0    · · ·    0   ]
    A = [ a_21  a_22            ⋮   ]  = [a_ij]
        [   ⋮            ⋱      0   ]
        [ a_m1   · · ·   · · ·  a_mm ]

    with a_ij = 0 for j > i is lower triangular. An (m × m) matrix

        [ a_11  a_12  · · ·  a_1m ]
    A = [   0   a_22          ⋮   ]  = [a_ij]
        [   ⋮           ⋱         ]
        [   0   · · ·    0   a_mm ]

    with a_ij = 0 for i > j is upper triangular (see Section 9.14).

Unitary matrix: An (m × m) matrix A is unitary if it is nonsingular and
    A^H = A^{-1} (see Section 9.15).
Note: Many more special matrices are listed in the Appendix
1.6 Some Terms and Quantities Related to Matrices
Linear independence of vectors: The m-dimensional row or column
    vectors x_1, . . . , x_k are linearly independent if, for complex numbers
    c_1, . . . , c_k, c_1 x_1 + · · · + c_k x_k = 0 implies c_1 = · · · = c_k = 0. They are
    linearly dependent if c_1 x_1 + · · · + c_k x_k = 0 holds with at least one
    c_i ≠ 0. In other words, x_1, . . . , x_k are linearly dependent if α_j ∈ C
    exist such that for some i ∈ {1, . . . , k}, x_i = α_1 x_1 + · · · + α_{i−1} x_{i−1} +
    α_{i+1} x_{i+1} + · · · + α_k x_k.
Rank: A = [a_ij] (m × n)

    rk A = rk(A)
         = maximum number of linearly independent rows or columns of A,
    row rk A = row rk(A)
         = maximum number of linearly independent rows of A,
    col rk A = col rk(A)
         = maximum number of linearly independent columns of A

    (for rules related to the rank of a matrix see Section 4.3).
Elementary operations: The following changes to a matrix are called
    elementary operations:

    (i) interchanging two rows or two columns,
    (ii) multiplying any row or column by a nonzero number,
    (iii) adding a multiple of one row to another row,
    (iv) adding a multiple of one column to another column.
Quadratic form: Given a real symmetric (m × m) matrix A, the function
    Q : IR^m → IR defined by Q(x) = x'Ax is called a quadratic form. The
    quadratic form is called positive (semi)definite if A is positive
    (semi)definite. It is called negative (semi)definite if A is negative
    (semi)definite. It is indefinite if A is indefinite.

Hermitian form: Given a Hermitian (m × m) matrix A, the function
    Q : C^m → IR defined by Q(x) = x^H A x is called a Hermitian form.
    The Hermitian form is called positive (semi)definite if A is positive
    (semi)definite. It is called negative (semi)definite if A is negative
    (semi)definite. It is indefinite if A is indefinite.
Characteristic polynomial: The polynomial in λ given by det(λI_m − A)
    is the characteristic polynomial of the (m × m) matrix A (see Section
    5.1).

Characteristic determinant: The determinant det(λI_m − A) is the
    characteristic determinant of the (m × m) matrix A.

Characteristic equation: The equation det(λI_m − A) = 0 is the
    characteristic equation of the (m × m) matrix A.

Eigenvalue, characteristic value, characteristic root or latent root:
    The roots of the characteristic polynomial of an (m × m) matrix A
    are the eigenvalues, the characteristic values, the characteristic roots
    or the latent roots of A (see Chapter 5).

Eigenvector or characteristic vector: An (m × 1) vector v ≠ 0 satisfying
    Av = λv, where λ is an eigenvalue of the (m × m) matrix A,
    is an eigenvector or characteristic vector of A corresponding to or
    associated with the eigenvalue λ (see Chapter 5).
Singular value: The singular values of an (m × n) matrix A are the
    nonnegative square roots of the eigenvalues of AA^H if m ≤ n (see
    Chapter 5).
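The following brief NumPy sketch (illustrative only, not part of the handbook; the example matrices are arbitrary) checks these definitions: eigenpairs satisfy Av = λv, and for a matrix with m ≤ n the singular values are the square roots of the eigenvalues of AA^H.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    lam, V = np.linalg.eig(A)
    for k in range(len(lam)):
        assert np.allclose(A @ V[:, k], lam[k] * V[:, k])    # A v = lambda v

    B = np.array([[1., 0., 2.],
                  [0., 3., 1.]])                              # (2 x 3), so m <= n
    sv = np.linalg.svd(B, compute_uv=False)                   # singular values of B
    eig_BBH = np.linalg.eigvalsh(B @ B.conj().T)              # eigenvalues of B B^H
    assert np.allclose(np.sort(sv ** 2), np.sort(eig_BBH))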
2 Rules for Matrix Operations
In the following all matrices are assumed to be complex matrices unless
otherwise stated. All rules for complex matrices also hold for real matrices
because the latter may be regarded as special complex matrices.
2.1 Rules Related to Matrix Sums and Differences
(1) A, B (m × n), c ∈ C:
    (a) A + B = B + A.
    (b) c(A + B) = (A + B)c = cA + cB.

(2) A, B, C (m × n):  (A ± B) ± C = A ± (B ± C) = A ± B ± C.

(3) A (m × n), B, C (n × r):  A(B ± C) = AB ± AC.

(4) A, B (m × n), C (n × r):  (A ± B)C = AC ± BC.

(5) A (m × n), B, C (p × q):
    (a) A ⊗ (B ± C) = A ⊗ B ± A ⊗ C.
    (b) (B ± C) ⊗ A = B ⊗ A ± C ⊗ A.

(6) A, B, C (m × n):
    (a) A ⊙ (B ± C) = A ⊙ B ± A ⊙ C.
    (b) (A ± B) ⊙ C = A ⊙ C ± B ⊙ C.

(7) A, C (m × m), B, D (n × n):  (A ⊕ B) ± (C ⊕ D) = (A ± C) ⊕ (B ± D).

(8) A, B (m × n):
    (a) |A + B|_abs ≤ |A|_abs + |B|_abs.
    (b) (A ± B)' = A' ± B'.
    (c) (A ± B)^H = A^H ± B^H.
    (d) The conjugate of A ± B equals Ā ± B̄.

(9) A, B (m × m):
    (a) tr(A + B) = tr(A) + tr(B).
    (b) dg(A + B) = dg(A) + dg(B).
    (c) vech(A + B) = vech(A) + vech(B).

(10) A, B (m × n):
    (a) rk(A + B) ≤ rk(A) + rk(B).
    (b) vec(A + B) = vec(A) + vec(B).

(11) A (m × m):
    (a) A + A' is a symmetric (m × m) matrix.
    (b) A − A' is a skew-symmetric (m × m) matrix.
    (c) A + A^H is a Hermitian (m × m) matrix.
    (d) A − A^H is a skew-Hermitian (m × m) matrix.
Note: All the rules of this section follow easily from basic principles by
writing down the typical elements of the matrices involved. For further details
consult introductory matrix books such as Bronson (1989), Barnett (1990)
Horn & Johnson (1985) and Searle (1982).
2.2 Rules Related to Matrix Multiplication
(1) A(m xn), Bn p).C(p xq): (AB)C = A(BC) = ABC.
(2) A.B(mxm): AB BA in general.
(3) A(mxn), B.C (nx p): A(BEC) = ABEAC
(4) A,B(mxn),C (nx p): (At B)C = AC EBC.
(5) A(m xn), B(pxq).€ (mx 7), D(q xs):
(AS BY(C OD) = ACS BD.
(6) A,C (mx m), B,D (nxn): (AG BYC@D) = ACH BD
(7) A(mx m), B(mxn): |ABlavs < |Alats|Blavs-
(8) A(m xn), B(n x p):
(a) (ABY = Bia
(b) (AB)4 = BAH,
(c) AB = AB.
(9) A,B (mx m) nonsingular: (AB)"? = BOAT.RULES FOR MATRIX OPERATIONS 17
(10) 4 = [Ay] Qn x n) with Ai (mj x ny), B = [By] (2 x p) with
Bi; (m x pj)
AB= [= Ano] .
E
(11) A(mxn),B(n xm): tr(AB) = tr(BA).
(12) A,B (m xm): det(AB) = det( A) det(B).
(13) A(mxn),B(n xp): rk(AB) < min{rk(A), rk(B)}.
(14) A(m xn): rk(AA") = rk(A‘A) = rk(A)
(15) A (m × n), B (n × p):
     vec(AB) = (I_p ⊗ A) vec(B) = (B' ⊗ I_m) vec(A) = (B' ⊗ A) vec(I_n).

(16) A (m × n), B (n × p), C (p × q):  vec(ABC) = (C' ⊗ A) vec(B).

(17) A (m × n), B (n × m):  tr(AB) = vec(A')' vec(B) = vec(B')' vec(A).

(18) A (m × n), B (n × p), C (p × m):  tr(ABC) = vec(A')'(C' ⊗ I_n) vec(B).

(19) A (m × n), B (n × p), C (p × q), D (q × m):
     tr(ABCD) = vec(D')'(C' ⊗ A) vec(B) = vec(D)'(A ⊗ C') vec(B').
(20) A(m xm). B(m xn): A nonsingular = rk(AB) = rk(B)
(21) A(m xn), BOrxn): Bvonsingular = rk(4B) = rk(A)
(22) A (Qn xm), Bn xn). C (nxn):
A,C nonsingular => tk(ABC) = rk(B).
(23) A (Qn x m) nonsingulas: = AA7? = A7!A = Im
(24) A(Qn xn):
(a) Aln =ImA=A
(b) AOnxp = Omxp» OpxmA = Opxn
(25) A (m x n) real
(a) AA’ and AA are symmetric positive semidefinite.
(b) rk(A) =m = AA! is positive definite.
(c) tk(A) = => A°A is positive definite
(26) A(m xn):
(a) Aa and 444 are Hermitian positive semidefinite
(b) rk(4) =m => AAA is positive definite
(c) rk(A) =n => AM A is positive definite.18
Note: Most of the results of this section can be proven by considering
typical elements of the matrices involved or follow directly from the definitions
(see also introductory books such as Bronson (1989), Barnett (1990), Horn
& Johnson (1985), Lancaster & Tismenetsky (1985) and Searle (1982)). The
rules involving the vec operator can be found, e.g., in Magnus & Neudecker
(1988).
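A quick NumPy check (illustrative, not part of the handbook; matrices generated at random) of rules (16) and (19) above:

    import numpy as np

    def vec(M):
        return M.reshape(-1, 1, order='F')   # stack the columns of M

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((3, 4))
    C = rng.standard_normal((4, 5))
    D = rng.standard_normal((5, 2))

    # rule (16): vec(ABC) = (C' kron A) vec(B)
    assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
    # rule (19): tr(ABCD) = vec(D')' (C' kron A) vec(B)
    assert np.allclose(np.trace(A @ B @ C @ D),
                       (vec(D.T).T @ np.kron(C.T, A) @ vec(B)).item())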
2.3
(1)
(2)
(3)
(4
(5)
(6)
(7)
(8)
(9)
Rules Related to Multiplication by a Scalar
A(mxn), e1,c2€C:
(a) er(c2A) = e1( Ace) = (e1e2)A = A(eriee)
(b) (cp $e2)A =A £02.
A,B(mxn),c€€: cAtB)=cAtcB.
A(mxn),B(n xp), c1,c2€C€: (cr A(erB) = c1c2AB.
A(mxn),B(pxq), c1,c2 EC:
(a) (A ® B) = (14) @ B = A@ (cB).
(b) c1A ® coB = (e1¢2)(A © B).
A,B(mxn), c1.c2€€:
(a) (A ® B) = (4, A)@ B= AO (eB)
(b) cA @ c2B = (e1e2)(A @ B).
A(mxm),B(nxn)c€C: cA®B)=cAGcB.
A(mxn): 04=A0= Onn
A(mxn)ceC:
(a) leAlabs = [clabsl Alabs-
(b) (cA)! = cA’
(c) (cA)4 = EAM.
A(mxm),ceC:
(a) tr(eA) =e tr(A).
(b) dg(cA) = cdg( A).
(c) Ais nonsingular, ¢ #0 = (cA)7? = LAu!.
(d) det(cA) = c™det( A).
(e) vech(cA) = ¢ vech( A).
(f) is eigenvalue of A = cd is eigenvalue of cA.RULES FOR MATRIX OPERATIONS 19
(10) A(mxn),ce€:
(a) ¢ #0 => rk(eA) = rk(A).
(b) © #0 = (cA)t = LAP.
(c) vec(cA) = ¢ vec(A).
Note: All these rules are elementary and can be found in introductory
matrix books such as Bronson (1989), Barnett (1990), Horn & Johnson (1985),
Lancaster & Tismenetsky (1985) or follow directly from definitions,
2.4 Rules for the Kronecker Product
(1) A(mxn),B(pxq): ASB#BOA in general.
(2) A(mxn), B,C (pxq): A@(BEC)=A@BLAOC.
(3) A(mxn), B(pxq),C (rx 8):
A@(BOC)=(ASB)@CH=AQBOC.
(4) A, B(mxn), C,D(pxq):
(A+ B)@(C+D)=ASC4+AOD4+BOC+BOD.
(5) A(mxn), B (px), C (nx 1), D(qxs):
(A@ BC @ D) = AC@ BD.
(6) A(mxn),B(pxq),C (rx 8),D (nx k), EB (qx), F (xt):
(AS B@C\(D® EF) = ADS BE@CF.
(7) A(mxn)c€EC: e@A=ch=AGe
(8) A(mxn),B(pxq), .de€:
cA @ dB = cd(A@ B) = (cd A) @ B = AW (cdB)
(9) A(mxm),B(nxn),C(pxp): (A®@B)@C = (ASC) O(BSC).
(10) A(mxn),B(p xq):
(a) (A@ BY = A'@ B.
(b) (A@ B)! = AM @ BH,
(c) A®@B=AOB
(4) (A@ B)t = At @ Bt20 HANDBOOK OF MATRICES
{e) |A © Blavs = |Alats © |Blabs.
(f) rk(A @ B) = rk(A) rk(B).
(11) A(mx m), B (nxn):
(a) A,B nonsingular 3 (A@ B)-) = A~'@ B.
(b) tr(A@ B) = tr( A) (B).
(c) dg(A@ B) = dg( A) @ dg(B)
(d) det(A @ B) = (det A)"(det BY”.
    (e) λ(A) and λ(B) are eigenvalues of A and B, respectively, with
        associated eigenvectors v(A) and v(B) ⟹ λ(A) · λ(B) is an
        eigenvalue of A ⊗ B with eigenvector v(A) ⊗ v(B).

(12) x, y (m × 1):  x' ⊗ y = y x' = y ⊗ x'.

(13) x (m × 1), y (n × 1):  vec(x y') = y ⊗ x.
(14) A(mxn),B(nx p),.C(pxq): (C'@ A)vec(B) = vec(ABC).
(15) A (mx n), B (nx p),C (px q), D (qx m) :
tr(ABCD) = vec(D')'(C'@ A)vec( B)
= vec(D)'(A@ C’)vec( B’),
vee(D')'(C’ & A)vec(B) = vee(A")'(D' © B)vee(C)
vec(B')'(A" @ C)vee( D)
vec(C")'(B’ © Dyvec( A).
Note: A substantial collection of results on Kronecker products including
many of those given here cau be found in Magnus (1988). Some results are
also given in Magnus & Neudecker (1988) and Lancaster & Tismenetsky
(1985). There are many more rules for Kronecker products related to special
matrices, in particular to commutation, duplication and elimination matrices
(see Sections 9.2, 9.5 and 9.6)
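The mixed product rule (5) and rule (13) can be illustrated numerically; the following NumPy sketch (not part of the handbook; the random matrices are arbitrary) checks both.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 2))
    B = rng.standard_normal((4, 2)); D = rng.standard_normal((2, 3))
    # rule (5): (A kron B)(C kron D) = AC kron BD
    assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

    x = rng.standard_normal(3)
    y = rng.standard_normal(4)
    # rule (13): vec(x y') = y kron x
    assert np.allclose(np.outer(x, y).reshape(-1, order='F'), np.kron(y, x))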
2.5 Rules for the Hadamard Product
(1) A B(mxnjce€
(a) AB B= BWA
(b) eA B) = (ed) B= A@ (cB)
(2) A.B.C(m xn):
(a) AS(Be C)= (AP B)EC=AE BECRULES FOR MATRIX OPERATIONS. 21
(b) (AF B)OC=AOCHBOC.
(3) A,B,C, D(m x n):
(A+ B)O(C+D)=AOC+AOQOD+BOC+BOD.
(4) 4,C (mxm), B,D(nxn): (A®B)O(C@D) = (A@C)S(BOD).
(5) A,B (mx n)
(a) (A@ BY =A’ OB.
(b) (A@ B)¥ = AN @ BE,
(c) AD B=AOB.
(d) |A® Blavs = |Alabs © |Blats
(6) A(mxn): AGOmxn = Omen © A= Omen
(7) A(mxm): AG Im = Im @ A= dg(A)
1 i
A(mxn)J=]i 0. i] (mxm): A@J=A=IGA
1 al
(9) A,B,C (mxn): tfA(BOC)] = trf(A’@ BC].
(10) A,B(mxm), j= (,...,D'(mx 1): tr(AB’) = j(A® BD).
(11) 4,B, D(m x m), §=(1,...,1) (mx 1)
(8
D diagonal > tr(ADB’D) = j'D(A® B)Dj.
(12) 4,B,D(mxm): D diagonal > (DA) @(BD) = D(A®B)D.
(13) A,B (mx n)
(a) vec(A & B) = diag( vec A)vec( B) = diag( vec B)vec( A)
(b) vec(A > B) = (vee A) vec(B).
(14) A, B(m xm):
(a) vech(A © B) = diag(vech A)vech( B) = diag(vech B)vech( A).
(b) vech(A@ B) = (vech A) ® vech(B).
(15) (Schur product theorem)
A,B (m xm): A,B positive (semi) definite = 43, B positive
(semi) definite.
Note: Most results of this section follow directly from definitions. The
remaining ones can be found in Magnus & Neudecker (1988). The Schur
product theorem is, for example, given in Horn & Johnson (1985).
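The Schur product theorem (rule (15)) can also be checked numerically (NumPy, illustrative only): the Hadamard product of two positive semidefinite matrices is again positive semidefinite.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((4, 4))
    Y = rng.standard_normal((4, 4))
    A = X @ X.T                      # positive semidefinite
    B = Y @ Y.T                      # positive semidefinite
    # all eigenvalues of A (.) B are (numerically) nonnegative
    assert np.linalg.eigvalsh(A * B).min() >= -1e-10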
2.6 Rules for Direct Sums
(1) A(mx m), B(n xn), c€C:
(a) (A® B) = cA @cB
(b) AA B => ADB BOA.
(2) 4,B(mxm), C,D(nxn):
(a) (At B)@(C 4 D) = (AGC) + (BOD).
(b) (A®C)\(B® D) = ABOCD.
(3) A(mx m), B (nx n),C (px p):
(a) A®(BSC) = (AGB) GC=AGBEC.
(b) (ABB) @C=(AQC)O(BOC).
(4) 4,D (mx m), B,E (nxn), C,F (px p)
(A® BOC\(DOESF) = ADS BE SCF.
(5) A (mx m), B(n xn):
(a) [A @ Blabs = |Alabs ® |Blabs.
(b) (ADB) = A'@B
(c) (A® BF = AP BF,
(d) ABB=AQB
{e) (A@ B)-) = A) @ Bf A and B are nonsingular
(f) (A@® B)t = Atm Bt.
(6) A(mx m), B (nxn):
(a) tr(A ® B) = tr(A) + tr(B)
(b) dg(A ® B) = dg( A) © dg(B)
(c) rk(A ® B) = rk( A) + rk(B).
(d) det(A © B) = det(A)det(B).
    (e) λ(A) and λ(B) are eigenvalues of A and B, respectively, with
        associated eigenvectors v(A) and v(B) ⟹ λ(A) and λ(B) are
        eigenvalues of A ⊕ B with eigenvectors

            [ v(A) ]        [  0   ]
            [  0   ]  and   [ v(B) ] ,

        respectively.

Note: All results of this section follow easily from the definitions (see also
Horn & Johnson (1985)).

3 Matrix Valued Functions of a Matrix
In this chapter all matrices are assumed to be complex matrices unless
otherwise stated. All rules for complex matrices also hold for real matrices
as the latter may be regarded as special complex matrices.
3.1. The Transpose
Definition: The (n x m) matrix
ay a2 vee Gin ay
a2; 22 +1 Gn a2
A= : =
ami Om2 . amn in
a2
a2
Gon
is the transpose of the (mm x n) matrix A = (a,j).
(1) A,B(mxn): (A+B) = ATSB.
(2) A(mxn),c€€: (cA) =cA’,
(3) A(mxn),B(nx p): (AB)! = BA’.
(4) A(mxn),B(pxq): (A@BY =A‘ OB
(5) A,B(mxn): (AOBY=A'OB'.
(6) A(mxm),B(nxn): (A®B) = A'@ BY
(7) A(mxn):
(a) [Aleve = lls
(b) rk(A’) = rk(A).
(c) (A)! = A.
Omi
(nx m)
@mn24 HANDBOOK OF MATRIC
(a) (AH = (AM) = A,
(e) Al = AH.
(8) A (mx m)
(a) dg(4’) = dg( A)
(b) tr(A’) = tr( A).
(c) det(A’) = det(A)
(d) (A)! = (47!) if A is nonsingular.
(e) (AQ*4 = (asey.
(f) A is eigenvalue of A + 2 is eigenvalues of A’.
(9) A(m xn) real: (A’)* = (At)!
(10) A (m x n), Kin (mn x mn) commutation matrix:
vec(A’) = Kn vec( A).
(11) A.B (mxn): — vec(B’)'vec( A) = tr(AB).
(12) A (m x n), B(m x p),C (qx n),D (q xp):
A B)'_[a co
cp\"|s po
(13) A (m x m)
(a) Ais diagonal > A’ =A
(b) Ais symmetric => A’ = A.
(14) A(mx m) nonsingular: 4 orthogonal <= A’ = Aq?!
(15) A(mxn): A’A and AA’ are symmetric matrices.
Note: All these results are elementary, They follow either directly from
definitions or can be obtained by considering the individual elements of the
uiatrices involved (see, ¢.g., Lancaster & Tismenetsky (1985))
3.2 The Conjugate
Definition: The (m x n) matrix A = {4j)] is the conjugate of the (m x n)
matrix A = [aij]. Here aj; denotes the complex conjugate of aij
(1) A,B(mxn): AEB=A+B.
(2) A(mxn),B(n xp): AB= AB
(3) A(mxn) cet =cA,MATRIX VALUED FUNCTIONS OF A MATRIX 25
(4) A(mxn), B(pxq): A®@B=AOB
(5) A,B(mxn): A®B=AOB.
(6) A(mxm),B(nxn): ABB=AB.
(7) A(mxn):
(a) |Alabs = |Alabs-
(b) rk(A) = rk(A)
(c) ASA
(d) A'= A= AP,
(ce) A# = AN = Al.
(f) vec(A) = vee(A).
(g) Ais real > A= A.
(8) A(mx m):
(a) tr(A) = (A).
(b) det(A) = det A.
(c) Att = Aad.
(4) A-
(e) vech(A) = vech(A).
(f) dg(A) = dg(A).
(g) ) is eigenvalue of A with eigenvector v = is eigenvalue of A
with eigenvector 6
(9) A(mx n), B (mx p),C (qx n), D(q xp):
A B)_[A B
ec D\| "le bl
am EC: diag(ay,...,Gm) = diag(@y,..., am).
F-|, if A is nonsingular
(10) a,..
Note: The results in this section are easily obtained from basic definitions
or elementwise considerations and the fact that for ¢1,c2 € C, Wey = 4162.
3.3 The Conjugate Transpose
Definition: The conjugate transpose of the (m x n) matrix A = [a,,] is the
(nx m) matrix AY = At = [a;)!
(1) A,B(mxn): (At B)¥ = AM + BE26 HANDBOOK OF MATRICES
(2) A(mxn),B(nx p): (AB)H = BY AH,
(3) A(mxn),ceC: (cA)# = cAt
(4) A(mxn), B(pxq): (A@B)! = AX @ BH,
(5) A,B(mxn): (A@ By! = Al a BY,
(6) A(mxm),B(nxn): (A@B)Y = At ae BH.
(7) AQ xn):
(a) [AM lave = LAloog
(b) rk(A#) = rk(A).
(c) (AM)H = AL
(d) (A)4 = A= (Ay,
(e) 44 = Al = (AF),
(f) (AM) = (ate,
(g) vee(A") = vec()
(h) Aisreal > 4” = A’
(8) A (mx m)
(a) tr(A”) = tr(A) = UA).
(b) det(4¥) = det(.4) = deta
(c) (AM jad = (aod)
(d) (A# 71) = (A7!)4, if A is nonsingular.
(e) dg(A") = dg(A) = (dg A)"
(9) A(m x n), Bm x p).C (qx n).D(q x p) :
AB)" _[ 4H cH
c Dy >| Bt pe
(10) ay,..-,am EC: [diag(ay,...,am))4 = diag(ay,...,@m).
(11) AQm xm): A Hermitian <=> A” = 4,
(12) A (mx m) nonsingular: A unitary <= A¥ = Am?!
(13) AQ xn):
(a) AMA and AA" are Hermitian positive semidefinite.
(b) rk(A)
(c) rk(A) =
m => AA" is Hermitian positive definite.
=> A" 4 is Hermitian positive definite.
Note: The results in this section are easily obtained from basic definitions
or clementwise considerations (see, ¢.g., Barnett (1990)).MATRIX VALUED FUNCTIONS OF A MATRIX 27
3.4 The Adjoint of a Square Matrix
Definition: For m > 1, the (m x m) matrix A°% = [cof(a;,)]’ is the adjoint
of the (m x m) matrix A = [a;;]. Here cof(ai;) is the cofactor of aij. For
m = 1, A^ad := 1. For instance, for m = 3,

    A^ad =
      [  det[[a_22, a_23], [a_32, a_33]]   −det[[a_21, a_23], [a_31, a_33]]    det[[a_21, a_22], [a_31, a_32]]
        −det[[a_12, a_13], [a_32, a_33]]    det[[a_11, a_13], [a_31, a_33]]   −det[[a_11, a_12], [a_31, a_32]]
         det[[a_12, a_13], [a_22, a_23]]   −det[[a_11, a_13], [a_21, a_23]]    det[[a_11, a_12], [a_21, a_22]]  ]' ,

    that is, the (i, j)th element of A^ad is cof(a_ji).
(1) A, B (m × m):  (AB)^ad = B^ad A^ad.

(2) A (m × m), c ∈ C:  (cA)^ad = c^{m−1} A^ad.

(3) A (m × m):
    (a) A^ad = det(A)A^{-1}, if A is nonsingular.
    (b) AA^ad = A^ad A = det(A)I_m.
    (c) (A')^ad = (A^ad)'.
    (d) (A^H)^ad = (A^ad)^H.
    (e) det(A^ad) = (det A)^{m−1}.
    (f) rk(A) < m − 1 ⟹ A^ad = 0.

(4) A (m × m), m ≥ 2:

    rk(A^ad) = m   if rk(A) = m,
             = 1   if rk(A) = m − 1,
             = 0   if rk(A) < m − 1.

3.5 The Inverse of a Square Matrix

3.5.1 General Results

(4) A (m × m) nonsingular:  λ is an eigenvalue of A with eigenvector v ⟹ 1/λ is an eigenvalue of
    A^{-1} with eigenvector v.
(5) A (m × m):
    (a) rk(A) = m ⟺ A^{-1} exists.
    (b) rk(A) = m ⟹ A^+ = A^{-1}.
    (c) A is diagonally dominant ⟹ A is nonsingular.

(6) a_1, . . . , a_m ∈ C, a_i ≠ 0 for i = 1, . . . , m:
    A = diag(a_1, . . . , a_m) ⟹ A^{-1} = diag(a_1^{-1}, . . . , a_m^{-1}).

(7) I_m (m × m) identity matrix:  I_m^{-1} = I_m.
(8) A (m × m) nonsingular:
    (a) A orthogonal ⟺ A^{-1} = A'.
    (b) A unitary ⟺ A^{-1} = A^H.

(9) A (m × m) positive definite:  (A^{1/2})^{-1} is a square root of A^{-1}.
Note: Many of these results are standard rules which can be found in
introductory textbooks such as Barnett (1990) and Horn & Johnson (1985)
or follow immediately from definitions.
3.5.2 Inverses Involving Sums and Differences
(1) A (m × m) with eigenvalues λ_1, . . . , λ_m, |λ_i|_abs < 1, i = 1, . . . , m:

    (a) (I_m + A)^{-1} = Σ_{i=0}^{∞} (−A)^i.
    (b) (I_m − A)^{-1} = Σ_{i=0}^{∞} A^i.
    (c) (I_{m²} + A ⊗ A)^{-1} = Σ_{i=0}^{∞} (−A)^i ⊗ A^i.
    (d) (I_{m²} − A ⊗ A)^{-1} = Σ_{i=0}^{∞} A^i ⊗ A^i.

(2) A (m × m), B (m × n), C (n × m), D (n × n):

    (A − BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D − CA^{-1}B)^{-1}CA^{-1},

    if all involved inverses exist.
(3) A(mxn):
det(Im+AA") £0 => (ImtAA#)"! = In-A(In +44 A) A4,
(4) A(mxm) nonsingular, B(mxm): (A+ BBA), UIm+B4 Aq'B)
nonsingular = (A+ BB") B = A! BU, + BY A! BY}.
(5) A, B (m x m) nonsingular:
(a) Av! 4 Bo! = A“"(A4 B)B!
(b) Av? 4 Bo! nonsingular
=> (A714 B')"! = A(A 4+ B))B = B(A+ BY'A
(c) A724 Bo! = (A+B)! = ABA = BAB.
(6) A.B (m xm):
(a) Im + AB nonsingular = (Im +AB)~'A = A(Im + BA)7}.
(b) A+B nonsingular > A—A(A+ B)7'A = B- B(A+8B)7!B.
Note: Most results of this subsection may be found in Searle (1982, Chapter
5) or follow from results given there. For (1) see Section 5.4.
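Rule (2) is easy to verify numerically; the following NumPy sketch (illustrative, not from the handbook; the example matrices are chosen diagonally dominant so that all required inverses exist) checks it.

    import numpy as np

    A = np.array([[4., 1., 0., 0.],
                  [1., 4., 1., 0.],
                  [0., 1., 4., 1.],
                  [0., 0., 1., 4.]])
    D = np.array([[6., 1.],
                  [1., 6.]])
    B = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.],
                  [0., 2.]])
    C = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 0.]])

    lhs = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)
    Ai = np.linalg.inv(A)
    rhs = Ai + Ai @ B @ np.linalg.inv(D - C @ Ai @ B) @ C @ Ai
    assert np.allclose(lhs, rhs)        # rule (2)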
3.5.3. Partitioned Inverses
(1) A(m x m), B(m x n),C (nx m). D (nx ny30, HANDBOOK OF MATRICES
AB 4 .
[ CoD jae Aland (D — CA~B) nonsingular
A
B
= le Dp
14 A-1B(D - CA~'B)"!C A! -A-'B(D — CA“! BY!
—(D = CABO AT! (D-CA™'B)-!
(2) A (mx m), B(m x n).C (nx m), D (nx n)
AB mae .
[ CD |: D and (A~ BD7!C) nonsingular
AB)
>l|e Dp} =
(A-BD-!C)! -(A- BD-'C)"'BD-!
-DC(A— BD“'C)7) D7! + D-IC(A — BD-!C)"1 BD"?
(3) A (mx m) symmetric, B (m x n),C (m x p), D (n x n) symmetric.
E (px p) symmetric
A Bc)"
BD 0
Cc 0 £F
F -FBD" -FCE"!
=| -D-'B'F D-'+ D~)B!FBD-} D-'BIFCE"!
-EOCF EC'FBD"! E+E C'FCE!
if all inverses exist and F’
(A- BD"'B'-~ CEC)".
(4) Ay (m, x mj) nonsingular, i= 1,...,7:
Ay 0 77!
0
0 Ay 0 Az!
(5) mn €IN,m>n,A(mxn),B(mx(m-—n)):
rk(A) = n, rk(B) = m—n, AY B=0
ota =| Arar]
3.6 Generalized Inverses

3.6.1 General Results

(1) A (m × m) nonsingular:  A^- = A^{-1}.
(2) A(mxn)
(a) Aq is not unique in general
(b) AAT and AWA are idempotent.
(c) Im — AAW and f, — AWA are idempotent.
(d) A( A” A)~ A# is idempotent.
(3) A(mxn)
(a) tk(A) = rk(A~ A) = tk(AAm)
(b) rk(A) = tr(A7 A) = tr(AA7).
(c) tk(AA™) = tr(AA~)
(d) rk(A7) > rk(A)
(e) rk(A7) = rk(A) <=> A> AAT =
(f) rk(A) =n => -
(g) rk(A) =m <> AAW = Im.
(4) A(mxn):
(a) A(A# A)" AM ASA
(b) 4% 4(4" 4) 44 = AM
(c) ACA A) A# is Hermitian.
(d) (A7)# is a generalized inverse of AY.
(5) Onxm is a generalized inverse of Omxn-
(6) A(mxm): A isidempotent = A is a generalized inverse of itself.
(7) A (mx n).B (nx m): AM is a generalized inverse of A =>
AT + B-— A” ABAA™ is a generalized inverse of A.MATRI.< VALUED FUNCTIONS OF A MATRIX 33
(8) A (mx n), B,C (nx m) AX is a generalized inverse of
A = Av + BUIm — AAW) + (In — A AJC is a generalized inverse of
A.
(9) A(mx n), B(mx m), C (nx n)
B,C nonsingular = C-!A~ B! is a generalized inverse of BAC.
(10) A(mxn), B(pxq): A> @ Bo isa generalized inverse of A B
(11) A(mxn), B(mxr), C (mx r): Generalized inverse matrices
A> and Co of A and C, respectively, exist such that AAT BC~C
B = the system of equations AXC = B can be solved for X and
NX = A~BC™ 4Y — A AY CC? is a solution for any (n x m) matrix
Y.
Partitioned Matrices
(12) A (m x m) nonsingular, B (m x n),C (rx m), D (rn):
«[ 5 ]=™ D=CA7B
At Omxr alized (fa B
= | Ooem Onn | is a generalized inverse of | 5
(13) A(mxn), rk(A) =r: B(mxm), C (nx n) are nonsingular and
such that
[hb 0
aac =| 5 0 |
> c| : A ] Bis a generalized inverse of A,
for any D (rx (n—1)), E ((m—r) xr), F ((m—r) x (n= 1).
(14) A (mj xn), f= 1,...,7:
Ay 0 A 0
. is a generalized inverse of
0 Ar 0 Ar
Note: A number of books on generalized inverses exist which contain the
foregoing results and more on generalized inverses (e.g., Rao & Mitra (1971).
Pringle & Rayner (1971), Boullion & Odell (1971), Ben-lsrael & Greville
(1974)). Some of these books contain also extensive lists of references34 HANDBOOK OF MATRICES
3.6.2 The Moore —Penrose Inverse
Definition: The (n x m) matrix A+ is the Moore ~Penrose (generalized)
inverse of the (m x n) matrix A if it satisfies the following four conditions:
(i) AAtA=A,
(ii) A+AAt = At,
(iii) (AAt)¥ = AAt,
(iv) (At A) = A+ A,
Properties
(1) A(mxn): At exists and is unique.
(2) A (m × n):

    A = U [ Λ  0 ]  V^H  is the singular value decomposition of A, with Λ nonsingular diagonal,
          [ 0  0 ]

    ⟹  A^+ = V [ Λ^{-1}  0 ]  U^H.
               [   0     0 ]
(3) A (mx n).U (mx m),V (nxn)
U,V unitary > (UAV)* = Vi atu,
(4) A(m xn), rk(A) =r
B(mxr), C(r xn) such that A= BC = At =CtBt.
. _fot if c#0
(5) cee ow={5 if «x0
(6) A(mxn), cE @: (cA)t =ctAt.
(7) A(mxn), B(rxs): (A®B)t = At @ Bt
(8) A(mxm)
(a) 4 is nonsingular = At = An},
(b) Ais Hermitian = A* is Hermitian.
(c) A is Hermitian and idempotent > A* =A
(9) A(mxn):
(a) (At)#= A
(b) (A¥)* = (A*)4.MATRIX VALUED FUNCTIONS OF A MATRIX 35
(c) AYAA* = AP.
(d) AtAAM = At
(e) AM (At)4 At = At
(f) At(At)4 AF = At.
(g) (AM A)* = At(At)A,
(h) (AAM)* = (At)F At
(i) A(AM AJt ANA = A.
(j) AAM(AA* +A =A.
(k) At = (AM A)+ AM = AM(AAH)+
(10) A (mx n)
(a) rk(A) =m <> AAt = In.
(b) rk(A) =n <=> AtAZ= Iq
(c) rk(A) =n = At = (AAS AH
(d) rk(A) = m = At = AM(AAH)“}
(e) rk(A) =n = (AAM)+ = A(AM A“? AM
(f) rk(A) = 1 = At = [tr(AA* “1 AF,
(8) A=Omxn => At = Onxm-
(11) A(mxn):
(a) rk(A*) = rk(A)
(b) rk(AAt) = rk(A* A) = rk(A)
(c) tr(AA*) = rk(A)
(12) A(m xn):
(a) AA* and At+A are idempotent.
(b) Im — AA* and I, — A+A are idempotent.
(13) A(mxn), B(nx nr): AB=Omxr > BtAt = Onem-
(14) A(mxn), B(mxr): AYB=Onxe <> A+B = Onur.
(15) A,B (mx n), ABY =0
(A+ B)*
= At + (In — A*B)[C* + (In — C+C)M BH (At)" At (In, — BC*)],
where C = (Im — AA+)B and
M = [In + (In —CtC)BA(At)F At BU, ~CTC))-?.36 HANDBOOK OF MATRICES
(16) A(mxn), Bln xr), C (mr):
AH AB = AMC <=> AB = AAtC.
(17) AQn xn), B(n xr): det(BB4) #0 + AB(AB)* = AAt+
(18) A (im x m) Hermitian idempotent, B (7 x n):
(a) AB = B= A— BBt is Hermitian idempotent with
tk(A — BB+) = rk(A) — rk(B)
(b) AB =0 and rk(A) + rk(B) =m => A= Im — BBY
(19) @ (m x m) Hermitian positive definite, A (m x n):
AHQ- 1 ACAHO-IA)+ AM = AM
(20) A (m x m) Hermitian: A # 0 is eigenvalue of A with associated
eigenvector = => A7! is eigenvalue of A+ with associated eigenvector
0
Partitioned Matrices
(21) Ay (me xm), FS 1a
Ay 0 77 At 0
(22) A(m xn):
[oa
(23) A(m xn), B(m x p)
4 _ | At- AtB(Ct+ + D)
where C = (Im — AA+)B and
D = (Ip —CtC lp + Up — CHC)BH (At) 4 At BU, — C*C)“!
xBH(At)A At (I, — BCT)
(24) A(mxn), B(pxn):
[ ’ i = [At —TBAt: 7]
B :
where T = E+ + (In — E+ B)A+(At)" BY K (I, — EE*) with E =
BUI, — At A) and W = [[,+([p— BE*+) BAt( At)" BY (1, ~EFt))~!MATRIX VALUED FUNCTIONS OF A MATRIX 37
Note: These results on Moore-Penrose generalized inverses are also
contained in books on generalized inverses such as Rao & Mitra (1971), Pringle
& Rayner (1971), Boullion & Odell (1971), Ben-Israel & Greville (1974). A
good collection of results is also contained in Magnus & Neudecker (1988),
including many of those given here. Many of the results follow easily by
verifying the defining properties of a Moore-Penrose inverse.
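For a numerical sanity check of the four defining conditions, NumPy's pinv can be used (an illustrative sketch, not part of the handbook).

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3))
    Ap = np.linalg.pinv(A)

    assert np.allclose(A @ Ap @ A, A)                    # (i)   A A+ A   = A
    assert np.allclose(Ap @ A @ Ap, Ap)                  # (ii)  A+ A A+  = A+
    assert np.allclose((A @ Ap).conj().T, A @ Ap)        # (iii) (A A+)^H = A A+
    assert np.allclose((Ap @ A).conj().T, Ap @ A)        # (iv)  (A+ A)^H = A+ A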
3.7 Matrix Powers
Definition: For i ∈ Z, the ith power of the (m × m) matrix A, denoted by
A^i, is defined as follows:

    A^i := A · · · A  (i factors)    for positive integers i,
    A^0 := I_m,
    A^i := (A^{−i})^{−1}             for negative integers i, if det(A) ≠ 0.

If A can be written as

    A = U diag(λ_1, . . . , λ_m) U^H

for some unitary matrix U (see Chapter 6) then the power of A is defined for
any α ∈ IR, α > 0, as follows:

    A^α := U diag(λ_1^α, . . . , λ_m^α) U^H.

This definition applies, for instance, for Hermitian matrices. The definitions
are equivalent for integer values of α.
Properties
(1) A(mxm),ceC,ieN: (cAy
(2) (Binomial formula)
A,B(mxm),i€Ni>1:
i
(44 BY = SOAP BAP B.. AL BAbH,
j=38 HANDBOOK OF MATRICES
where the second sum is taken over all ky,...,kj41 € {0,
Apts thjgp = i-j.
(3) (Binomial formula)
A,B(mxm): (A+B)? = A? +AB+ BA+ B?
(4) (Binomial formula for commuting matrices)
A,B(mxm),i€ Ni 21:
i} with
AB=BA = (A+B => Or Bi,
i
(5) 4,B(mxm),i€IN,i>1:
int
Al Bis SOA A BBs
j=0
(6) A(m x m),B(nxn),i€IN
(a) (A®@ By
Aig BE
A O}'_[A 0 ]_ yn pe
[3 a}=[e pi |=4ie8
(b) (AB By
(7) A(mx m),i€ IN
(a) Atlas < [Aldus
(b) (At = (4’¥.
(c) (ANH = (AMY.
(d) A= A’
(e) (AT = (AF
(f) rk(A#) < rk(A).
(g) det(A*) = (det A)t.
(8) A(mxm),i€ INeven: —tr(At) = vee(Ai/?")'vee( Ai/?).
(9) Im (mx m) identity matrix,i€ IN: Hi, = Im.
(10) i€ Ni #0: O%
nx m = Omxm-MATRIX VALUED FUNCTIONS OF A MATRIX 39
(11) i ∈ IN:  the ith power of the (n × n) Jordan block with λ on the
    principal diagonal, 1 on the first superdiagonal and 0 elsewhere is the
    upper triangular matrix with λ^i on the principal diagonal and
    (i over j) λ^{i−j} on the jth superdiagonal, j = 1, . . . , n − 1.
(12) A(mx m):
(a) Ais idempotent = A! = A for i= 1,2,
(b) Ais nilpotent <=> A’ = Omxm for some i > 0.
(c) A is symmetric => A! is symmetric for i = 1,2,..
(d) A is Hermitian = A‘ is Hermitian for i = 1,2,
(13) A (m × m), i ∈ IN:

    vec(A^i) = (I_m ⊗ A^i) vec(I_m)
             = ((A^i)' ⊗ I_m) vec(I_m)
             = ((A^{i/2})' ⊗ A^{i/2}) vec(I_m),  if A is positive definite
             = ((A^{i/2})' ⊗ A^{i/2}) vec(I_m),  if i is even
             = (A' ⊗ A) vec(A^{i−2}),            if i ≥ 2.
(14) A (mx m)
At —+y 100 0 <=> all eigenvalues of A have modulus less than 1.
Note: Most results of this section are basic and follow immediately from
definitions. The binomial formulae may be found in Johansen (1995, Section
A.2) and (13) is a consequence of an important relation between Kronecker
products and the vec operator (see Section 2.4).
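The eigenvalue-based definition of A^α can be illustrated for a real symmetric positive definite matrix (NumPy sketch; the helper name is ours, not the handbook's).

    import numpy as np

    def sym_matrix_power(A, alpha):
        # A = U diag(lam) U'  for symmetric A;  A^alpha = U diag(lam**alpha) U'
        lam, U = np.linalg.eigh(A)
        return U @ np.diag(lam ** alpha) @ U.T

    A = np.array([[2., 1.],
                  [1., 2.]])                       # eigenvalues 1 and 3, positive definite
    half = sym_matrix_power(A, 0.5)
    assert np.allclose(half @ half, A)             # A^(1/2) A^(1/2) = A
    assert np.allclose(sym_matrix_power(A, -1.0), np.linalg.inv(A))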
3.8 The Absolute Value
Definition: Given an (m x n) matrix A =
is
,j] its absolute value or moduhis
\@rilabs |@i2labs ~~ [ain labs
l@rrlabs |@z2lebs --- (@anlabs
\Alabs = [ lasjlavs ] = : : : (m x n)
lamilabs [@m2labs *** [an labs40 HANDBOOK OF MATRICES
where the modulus of a complex number ¢ = ¢1 + ica is defined as |claus =
Vel +03 = Vee. Here é is the complex conjugate of ¢
(1) A(mxn):
(a) |Alabs 2 Omxn
(b) |Alabs = Omxn <=> A= Omxn-
(2) A(mxn),cEC: |cAlavs = lelabs |Alavs
(3) A.B(mxn): [A+ Blavs < |Alabs + [Blats-
(4) A(mxn),B(nx p): |ABlavs < |Alabs |Blats-
(5) A(mx m),iEIN: JA¥labs < |Albos
(6) A(mxn),B(pxq): |A® Blavs = |Alabs @ |Blavs-
(7) A(m xm), B (nxn): |A® Blas = [Alans ®[Blavs.
(8) A(mxn):
(a) |A'lavs = [Alans
(b) |Alabs = [Alabs
(c) [A lave = |Albes-
A(m xn), B (mx p),C (qx n), D (qx p)
i[é 5 | ~([ ge ee |
~ | [Clas |Plavs
Note: These results may be found in Horn & Johnson (1985, Chapter 8)
or follow easily from the definition of the absolute value.
(9
abs4
Trace, Determinant and Rank
of a Matrix
4.1 The Trace
Definition: The trace of an (m × m) matrix A = [a_ij] is defined as

    tr A := tr(A) := a_11 + · · · + a_mm = Σ_{i=1}^{m} a_ii.
4.1.1 General Results
(1) A, B (m × m):  tr(A ± B) = tr(A) ± tr(B).

(2) A (m × m), c ∈ C:  tr(cA) = c tr(A).

(3) A, B (m × m), c_1, c_2 ∈ C:  tr(c_1 A + c_2 B) = c_1 tr(A) + c_2 tr(B).
(4) A(mx m):
(a) tr(A’) = tr(A).
(b) tr(A) = (A)
(c) tr(A#) = tr(A)
(5) A(mxn):
(a) tr(AAt) = rk(A).
(b) tr(A¥ A) = 0 > A= Onxn
(6) A(mxm): A idempotent => tr(A) = rk(A)
(7) A (m × m) with eigenvalues λ_1, . . . , λ_m:  tr(A) = λ_1 + · · · + λ_m.
(8) A(mx n), B(n x m):
(a) tr(AB) = tr(BA).
(b) tr(AB) = vec(A’)'vec(B) = vee(B’)’vee(A)42 HANDBOOK OF MATRICES
(9) A (m × n), B (n × p), C (p × q), D (q × m):

    tr(ABCD) = vec(D')'(C' ⊗ A) vec(B)
             = vec(A')'(D' ⊗ B) vec(C)
             = vec(B')'(A' ⊗ C) vec(D)
             = vec(C')'(B' ⊗ D) vec(A).

(10) A, B (m × m):  B nonsingular ⟹ tr(BAB^{-1}) = tr(A).

(11) A, B (m × n), j_k = (1, . . . , 1)' (k × 1):  tr(AB') = j_m'(A ⊙ B) j_n.

(12) A, B, D (m × m), j = (1, . . . , 1)' (m × 1):

    D diagonal ⟹ tr(ADB'D) = j'D(A ⊙ B)Dj.

(13) A, B, C (m × n):  tr[A'(B ⊙ C)] = tr[(A ⊙ B)'C].

(14) A (m × m):  tr(A ⊙ I_m) = tr(A).
(15) A(mx m), B(nxn):
(a) tr(A @ B) = tr A)tr(B).
(b) tr( A B) = tr(A) + tr(B)
(16) A (m x m), B (rm x n),C (nx m), D (n x n)
uf 3 | = tr(A) + tr(D)
(17) K_{mm} (m² × m²) commutation matrix:  tr(K_{mm}) = m.

(18) D_m (m² × ½m(m + 1)) duplication matrix:
    (a) tr(D_m'D_m) = tr(D_m D_m') = m².
    (b) tr[(D_m'D_m)^{-1}] = m(m + 3)/4.

(19) L_m (½m(m + 1) × m²) elimination matrix:
    tr(L_m L_m') = tr(L_m'L_m) = ½m(m + 1).
(20) A = [a_ij] (m × m) real positive semidefinite, α ∈ IR, α > 0, α ≠ 1:

    tr(A^α) = Σ_{i=1}^{m} a_ii^α  ⟺  A is diagonal.

(21) A, B ≠ 0 (m × m) real positive semidefinite, α ∈ IR, 0 < α < 1:

    tr(A^α B^{1−α}) = (tr A)^α (tr B)^{1−α}  ⟺  B = cA for some c ∈ IR, c > 0.

(22) A, B (m × m) real positive semidefinite, α ∈ IR, α > 1:

    {tr[(A + B)^α]}^{1/α} = [tr(A^α)]^{1/α} + [tr(B^α)]^{1/α}
    ⟺  B = cA for some c ∈ IR, c > 0.
Note: The rules involving the vec operator, the commutation, duplication
and elimination matrices are given in Magnus & Neudecker (1988) and Magnus
(1988). (20)–(22) are given in Magnus & Neudecker (1988, Chapter 11). The
other rules follow from basic principles.
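Two of these rules are easy to check numerically (NumPy sketch, illustrative only): tr(AB) = tr(BA) (rule (8a)) and tr(A ⊗ B) = tr(A) tr(B) (rule (15a)).

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 3))
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))              # rule (8a)

    C = rng.standard_normal((3, 3))
    D = rng.standard_normal((2, 2))
    assert np.isclose(np.trace(np.kron(C, D)), np.trace(C) * np.trace(D))   # rule (15a)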
4.1.2 Inequalities Involving the Trace
In this subsection all matrices are real unless otherwise stated.
(1) A(m x n) complex:
(a) |tr Alabs < tr{Alabs
(b) tr(A¥ A) = tr(AA*) > 0.
(2) A,B (m xn):
(a) (Cauchy-Schwarz inequality)
tr(A'B)? < tr(A’A)tr( B’B)
(b) tr(A’B)? < tr(A’ AB’ B)
(c) tr(A’B)? < tr(AA’ BB’)
(3) (Schur’s inequality)
A(mxm): tr(A2) < tr(A’A)
(4) A (m × m) positive semidefinite:  (det A)^{1/m} ≤ (1/m) tr(A).

(5) A (m × m):
    All eigenvalues of A are real ⟹ |(1/m) tr(A)|_abs ≤ [(1/m) tr(A²)]^{1/2}.

(6) A (m × m) symmetric with eigenvalues λ_1 ≤ · · · ≤ λ_m, X (m × n)
    with X'X = I_n:

    Σ_{i=1}^{n} λ_i  ≤  tr(X'AX)  ≤  Σ_{i=1}^{n} λ_{m−n+i}.
(7) A= (ai;] (m x m) positive semidefinite:
>LR ey for a>l
(Arye = te “
< DM ag for 0