Totally Positive Matrices
CAMBRIDGE TRACTS IN MATHEMATICS
General Editors
ALLAN PINKUS
Technion, Israel
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore,
São Paulo, Delhi, Dubai, Tokyo
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521194082
© A. Pinkus 2010
Contents

Foreword page ix
1 Basic properties of totally positive and strictly totally positive matrices 1
1.1 Preliminaries 1
1.2 Building (strictly) totally positive matrices 5
1.3 Nonsingularity and rank 12
1.4 Determinantal inequalities 24
1.5 Remarks 33
2 Criteria for total positivity and strict total positivity 36
2.1 Criteria for strict total positivity 37
2.2 Density and some further applications 41
2.3 Triangular total positivity 47
2.4 LDU factorizations 50
2.5 Criteria for total positivity 55
2.6 “Simple” criteria for strict total positivity 60
2.7 Remarks 74
3 Variation diminishing 76
3.1 Main equivalence theorems 76
3.2 Intervals of strict total positivity 83
3.3 Remarks 85
4 Examples 87
4.1 Totally positive kernels and Descartes systems 87
4.2 Exponentials and powers 88
4.3 Cauchy matrix 92
4.4 Green’s matrices 94
4.5 Jacobi matrices 97
in analysis, and the main initiators and contributors to the theory were
analysts. I. J. Schoenberg was interested in the problem of estimating
the number of real zeros of a polynomial, and this led him to his work
on variation diminishing transformations (in the early 1930s) and Pólya
frequency sequences, functions, and kernels (late 1940s and early 1950s).
These, together with his work on splines (1960s and 1970s), are central
topics in the theory of total positivity. M. G. Krein was led to the
theory of total positivity via ordinary differential equations whose Green’s
functions are totally positive (mid 1930s). S. Karlin came to the theory
of total positivity (in the 1950s and 1960s) by way of statistics, reliability
theory, and mathematical economics. The two major texts on the subject
Oscillation Matrices and Kernels and Small Vibrations of Mechanical
Systems, by F. R. Gantmacher and M. G. Krein (see Gantmacher, Krein
[1950]), and Total Positivity. Volume 1, by S. Karlin (see Karlin [1968]),
are a blend of analysis and matrix theory (and in the latter case the
emphasis is most certainly on analysis). (Their companion volumes The
Markov Moment Problem and Extremal Problems, by M. G. Krein and
A. A. Nudel’man (see Krein, Nudel’man [1977]) and Tchebycheff Systems:
with Applications in Analysis and Statistics, by S. Karlin and W. J. Studden
(see Karlin, Studden [1966]), are totally devoted to topics of analysis.)
Thankfully we have the short monograph of T. Ando that eventually
appeared as Ando [1987] (it was written a few years earlier) and was
devoted to totally positive matrices. The present monograph is an attempt
to update and expand upon Ando’s monograph. A considerable amount of
research has been devoted to this area in the past twenty years, and such
an update is certainly warranted.
It was Schoenberg, in Schoenberg [1930], who coined the term total positiv
(in German). Krein and Gantmacher (see Gantmacher, Krein [1935]),
unaware of Schoenberg's earlier paper, used the terms complètement non
négative and complètement positive (French) for totally positive and strictly
totally positive, respectively. As such, many authors use the terms totally
nonnegative and totally positive for totally positive and strictly totally
positive, respectively, which, aside from the lack of consistency and order,
all too often leads to confusion.
all too often leads to confusion. We follow the Schoenberg/Karlin/Ando
terminology.
It is a pleasure to acknowledge the help of Carl de Boor and David
Tranah. All errors, omissions and other transgressions are the author’s
responsibility.
I would like to close this short foreword with a personal note. My first
mathematical paper (jointly written with my doctoral supervisor Sam
Karlin) was in the area of total positivity. It is said that as one gets old(er)
one often returns to one’s first love. I plead guilty on both counts.
Haifa, 2008.
1

Basic properties of totally positive and strictly totally positive matrices
1.1 Preliminaries
For a positive integer $n$, and for each $p \in \{1, \dots, n\}$, we define the simplex
$$I_p^n := \{\, \mathbf{i} = (i_1, \dots, i_p) : 1 \le i_1 < \cdots < i_p \le n \,\}$$
in $\mathbb{Z}_+^p$. That is, $I_p^n$ denotes the set of strictly increasing sequences of $p$ integers in $\{1, \dots, n\}$.
We use the following notation to define submatrices and minors of a matrix. If $A = (a_{ij})_{i=1,\,j=1}^{n,\,m}$ is an $n \times m$ matrix, then for each $\mathbf{i} \in I_p^n$ and $\mathbf{j} \in I_q^m$ we let
$$A[\mathbf{i}, \mathbf{j}] = A\binom{\mathbf{i}}{\mathbf{j}} = A\binom{i_1, \dots, i_p}{j_1, \dots, j_q} := (a_{i_k j_\ell})_{k=1,\,\ell=1}^{p,\,q}$$
denote the $p \times q$ submatrix of $A$ determined by the rows indexed $i_1, \dots, i_p$ and the columns indexed $j_1, \dots, j_q$.
We use and reuse various classic facts and formulæ. We list some of them
here for easy reference.
where the $\mathbf{i} \in I_p^n$ and $\mathbf{j} \in I_p^m$ are arranged in lexicographic order, i.e., for distinct $\mathbf{i}, \mathbf{k} \in I_p^n$ we set $\mathbf{i} > \mathbf{k}$ if the first nonzero term in the sequence $i_1 - k_1, \dots, i_p - k_p$ is positive.
Assume $B = CD$, where $B$ is an $n \times m$ matrix, $C$ is an $n \times r$ matrix, and $D$ is an $r \times m$ matrix. The Cauchy–Binet formula may be written as follows. For each $\mathbf{i} \in I_p^n$, $\mathbf{j} \in I_p^m$,
$$B(\mathbf{i}, \mathbf{j}) = \sum_{\mathbf{k} \in I_p^r} C(\mathbf{i}, \mathbf{k})\, D(\mathbf{k}, \mathbf{j}),$$
i.e.,
$$B\binom{i_1, \dots, i_p}{j_1, \dots, j_p} = \sum_{1 \le k_1 < \cdots < k_p \le r} C\binom{i_1, \dots, i_p}{k_1, \dots, k_p} D\binom{k_1, \dots, k_p}{j_1, \dots, j_p},$$
or, alternatively,
$$B_{[p]} = C_{[p]} D_{[p]}.$$
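The Cauchy–Binet formula lends itself to a direct numerical check. The following sketch is not part of the text; the integer matrices C and D below are arbitrary illustrative choices, and the determinant routine is a naive Leibniz expansion that is exact for integers:

```python
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion with inversion-counted signs; exact for integer entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

# B = CD with C of size n x r and D of size r x m (here n = m = 3, r = 4)
C = [[1, 2, 0, 1], [0, 1, 1, 2], [1, 0, 2, 1]]
D = [[1, 1, 0], [2, 0, 1], [0, 1, 1], [1, 2, 0]]
B = [[sum(C[i][k] * D[k][j] for k in range(4)) for j in range(3)]
     for i in range(3)]

p = 2
for rows in combinations(range(3), p):        # i in I_p^n
    for cols in combinations(range(3), p):    # j in I_p^m
        rhs = sum(minor(C, rows, k) * minor(D, k, cols)
                  for k in combinations(range(4), p))   # k in I_p^r
        assert minor(B, rows, cols) == rhs
print("Cauchy-Binet verified for all minors of order 2")
```

The same loop with `p = 1` reduces to ordinary matrix multiplication, and `p = min(n, m)` to the multiplicativity of the determinant.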
The submatrix
$$A\binom{\alpha_1, \dots, \alpha_p}{\beta_1, \dots, \beta_p}$$
is called the pivot block.
where we use $\widehat{j}$ to indicate that we have deleted the $j$th index. Thus, in the numerator above, we have taken the determinant of the submatrix of $A$ obtained by deleting the $j$th row and $i$th column. More generally we have
$$A^{-1}\binom{i_1, \dots, i_p}{j_1, \dots, j_p} = (-1)^{\sum_{k=1}^{p}(i_k + j_k)}\, \frac{A\dbinom{j'_1, \dots, j'_{n-p}}{i'_1, \dots, i'_{n-p}}}{A\dbinom{1, \dots, n}{1, \dots, n}},$$
where $i_1 < \cdots < i_p$ and $i'_1 < \cdots < i'_{n-p}$ are complementary indices in $\{1, \dots, n\}$, as are $j_1 < \cdots < j_p$ and $j'_1 < \cdots < j'_{n-p}$.
In the above, $i_1 < \cdots < i_p$ and $i'_1 < \cdots < i'_{n-p}$ are complementary indices in $\{1, \dots, n\}$, as are $j_1 < \cdots < j_p$ and $j'_1 < \cdots < j'_{n-p}$; $p$ is fixed; and the summation is over all ordered $p$-tuples $j_1 < \cdots < j_p$.
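For $p = 1$ the formula for minors of the inverse reduces to the classical cofactor (adjugate) expression for the entries of $A^{-1}$. The following exact-arithmetic sketch is not part of the text; the matrix A is an arbitrary nonsingular example:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz expansion; exact for integer entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

A = [[2, 1, 1], [1, 3, 1], [1, 1, 4]]
n = len(A)
dA = det(A)

def deleted(M, row, col):
    # submatrix of M with one row and one column removed
    return [[M[i][j] for j in range(n) if j != col]
            for i in range(n) if i != row]

# p = 1 case: (A^{-1})_{ij} = (-1)^{i+j} det(A with row j, column i deleted) / det A
Ainv = [[Fraction((-1) ** (i + j) * det(deleted(A, j, i)), dA)
         for j in range(n)] for i in range(n)]

# sanity check: A * A^{-1} = I
for i in range(n):
    for j in range(n):
        assert sum(A[i][k] * Ainv[k][j] for k in range(n)) == (1 if i == j else 0)
print("minor formula for the inverse verified")
```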
From the formulæ for minors of the inverse we also have the following.
$$b_{1j} = a_{1j}, \quad j = 1, \dots, n,$$
and for $i \ge 2$
$$b_{ij} = \sum_{k=1}^{n} b_{i-1,k}\, a_{kj}, \quad j = 1, \dots, n.$$
We first assume that $i_1 = 1$. Expanding the above minor by its first row we obtain
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = \sum_{s=1}^{r} (-1)^{s-1} a_{1 j_s}\, B\binom{i_2, \dots, i_r}{j_1, \dots, \widehat{j_s}, \dots, j_r}.$$
Thus
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = \sum_{s=1}^{r} (-1)^{s-1} a_{1 j_s} \sum_{1 \le k_1 < \cdots < k_{r-1} \le n} B\binom{i_2 - 1, \dots, i_r - 1}{k_1, \dots, k_{r-1}} A\binom{k_1, \dots, k_{r-1}}{j_1, \dots, \widehat{j_s}, \dots, j_r}$$
$$= \sum_{1 \le k_1 < \cdots < k_{r-1} \le n} B\binom{i_2 - 1, \dots, i_r - 1}{k_1, \dots, k_{r-1}} \sum_{s=1}^{r} (-1)^{s-1} a_{1 j_s}\, A\binom{k_1, \dots, k_{r-1}}{j_1, \dots, \widehat{j_s}, \dots, j_r}$$
$$= \sum_{2 \le k_1 < \cdots < k_{r-1} \le n} B\binom{i_2 - 1, \dots, i_r - 1}{k_1, \dots, k_{r-1}} A\binom{1, k_1, \dots, k_{r-1}}{j_1, \dots, j_r}.$$
As each of the factors in the last sum is strictly positive (we use here the induction hypothesis) we have that
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} > 0.$$
We complete the proof, for this fixed $r$, by applying an induction argument based on the value $i_1$. We have proved the result for $i_1 = 1$. Now assume that $i_1 > 1$. From the Cauchy–Binet formula,
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = \sum_{1 \le k_1 < \cdots < k_r \le n} B\binom{i_1 - 1, \dots, i_r - 1}{k_1, \dots, k_r} A\binom{k_1, \dots, k_r}{j_1, \dots, j_r}.$$
Then
$$c_{ij} = b_{n-i+1,\,j}, \quad i = 1, \dots, n, \ j = 1, \dots, m - n.$$
Proof Let $D$ be the $2n \times m$ matrix whose first $n$ rows are the unit vectors $\mathbf{e}_i$, $i = 1, \dots, n$, and whose last $n$ rows are $A$. We apply Sylvester's Determinant Identity with pivot block
$$D\binom{n+1, \dots, 2n}{1, \dots, n}.$$
That is, set
$$e_{ij} = D\binom{i, n+1, \dots, 2n}{1, \dots, n, n+j}, \quad i = 1, \dots, n, \ j = 1, \dots, m - n.$$
Note that
$$e_{ij} = (-1)^{i+1} A\binom{1, \dots, n}{1, \dots, \widehat{i}, \dots, n, n+j} = (-1)^{i+1} b_{ij}.$$
Therefore, from Sylvester's Determinant Identity,
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = (-1)^{r + \sum_{k=1}^{r} i_k} E\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = (-1)^{r + \sum_{k=1}^{r} i_k} D\binom{n+1, \dots, 2n}{1, \dots, n}^{\!r-1} D\binom{i_1, \dots, i_r, n+1, \dots, 2n}{1, \dots, n, n+j_1, \dots, n+j_r}.$$
Now
$$D\binom{n+1, \dots, 2n}{1, \dots, n} = A\binom{1, \dots, n}{1, \dots, n},$$
while
$$D\binom{i_1, \dots, i_r, n+1, \dots, 2n}{1, \dots, n, n+j_1, \dots, n+j_r} = (-1)^{\sum_{k=1}^{r}(i_k + k)} A\binom{1, \dots, n}{i'_1, \dots, i'_{n-r}, n+j_1, \dots, n+j_r},$$
where $i'_1, \dots, i'_{n-r}$ is complementary to $i_1, \dots, i_r$ in $\{1, \dots, n\}$.
Thus
$$B\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = (-1)^{\frac{r(r-1)}{2}} A\binom{1, \dots, n}{1, \dots, n}^{\!r-1} A\binom{1, \dots, n}{i'_1, \dots, i'_{n-r}, n+j_1, \dots, n+j_r}.$$
The matrix $C$ is obtained from $B$ by simply interchanging the order of the rows and therefore
$$C\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = A\binom{1, \dots, n}{1, \dots, n}^{\!r-1} A\binom{1, \dots, n}{i'_1, \dots, i'_{n-r}, n+j_1, \dots, n+j_r}.$$
The result now follows.
with the above equality. Thus the strict total positivity property of $A$ is equivalent to the fact that
$$D\binom{\alpha_1, \dots, \alpha_m}{1, \dots, m} > 0$$
for all $1 \le \alpha_1 < \cdots < \alpha_m \le n + m$.
Two elementary operations that preserve this latter property of D are
the following. First, we can cyclically shift the rows of D, where we multiply
the row going from the first to the last (or last to first) by (−1)m−1 . Second,
we can multiply D by any m × m matrix M with det M > 0.
Let $E$ be the $(n+m) \times m$ matrix obtained from $D$ by a simple forward cyclic rotation of the rows, i.e., shift row $r$ to row $r+1$ and row $n+m$ to row $1$, and multiply the new first row of $E$ by $(-1)^{m-1}$. In addition, since
$$E\binom{n+1, \dots, n+m}{1, \dots, m} = D\binom{n, n+1, \dots, n+m-1}{1, \dots, m} > 0,$$
there exists an $m \times m$ matrix $M$ with $\det M > 0$ such that
$$F = EM = \begin{pmatrix} B \\ C \end{pmatrix},$$
where C is as was previously defined. Thus B is an n × m strictly totally
positive matrix. A calculation shows that
$$B = \begin{pmatrix}
a_{n,2} & a_{n,3} & \cdots & a_{n,m} & 1 \\[6pt]
\dfrac{A\binom{1,n}{1,2}}{a_{1,1}} & \dfrac{A\binom{1,n}{1,3}}{a_{1,1}} & \cdots & \dfrac{A\binom{1,n}{1,m}}{a_{1,1}} & \\[6pt]
\vdots & \vdots & \ddots & \vdots & \\[2pt]
\dfrac{A\binom{n-1,n}{1,2}}{a_{n-1,1}} & \dfrac{A\binom{n-1,n}{1,3}}{a_{n-1,1}} & \cdots & \dfrac{A\binom{n-1,n}{1,m}}{a_{n-1,1}} &
\end{pmatrix}.$$
Prior to proving this result we first explain and prove an ancillary result
that will be of independent interest.
Zero entries of totally positive matrices and zero values of their minors
are evidence of boundary behavior within the class of totally positive
matrices and, as such, are not arbitrary in nature. A zero entry of a totally
positive matrix A or a zero minor of this totally positive matrix portends
linear dependence or “throws a shadow.” That is, under suitable linear
independence assumptions all minors of the same order to the right and
above it, or to the left and below it, are also zero. Let us define these notions
more precisely.
shadow of
$$A\binom{i+1, \dots, i+r}{j+1, \dots, j+r}$$
has rank $r - 1$.
Proof We first prove the case $r = 1$. That is, we prove that if $A$ is totally positive and $a_{ij} = 0$, then at least one of the following holds: either the $i$th row or the $j$th column of $A$ is zero, or the right or left shadow of $a_{ij}$ is zero.
Assume neither the $i$th row nor the $j$th column of $A$ is zero. Let $a_{i\ell} > 0$ for some $\ell$. If $\ell < j$ we prove that the right shadow of $a_{ij}$ is zero as follows. For any $r < i$,
$$0 \le A\binom{r, i}{\ell, j} = a_{r\ell}\, a_{ij} - a_{rj}\, a_{i\ell} = -a_{rj}\, a_{i\ell} \le 0.$$
Since $a_{i\ell} > 0$ we have $a_{rj} = 0$ for all $r < i$. As the $j$th column of $A$ is not zero there is a $k > i$ for which $a_{kj} > 0$. Now, for any $r \le i$, $s \ge j$,
$$0 \le A\binom{r, k}{j, s} = a_{rj}\, a_{ks} - a_{rs}\, a_{kj} = -a_{rs}\, a_{kj} \le 0,$$
and since $a_{kj} > 0$ we have $a_{rs} = 0$. Similarly, if $\ell > j$ then it follows that the left shadow of $a_{ij}$ is zero.
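The shadow phenomenon is easy to observe numerically. In the hedged sketch below, the small totally positive matrix A is an arbitrary illustrative choice; its zero entry $a_{23}$ (1-based) lies in a nonzero row and column, so the right shadow must vanish:

```python
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion; exact for integer entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

A = [[1, 1, 0],
     [1, 2, 0],
     [1, 3, 1]]
n = 3

# A is totally positive: every minor is nonnegative
for p in range(1, n + 1):
    for rows in combinations(range(n), p):
        for cols in combinations(range(n), p):
            assert minor(A, rows, cols) >= 0

i, j = 1, 2                              # 0-based position of the zero entry a_{23}
assert A[i][j] == 0
assert any(A[i][c] for c in range(n))    # row 2 is not zero
assert any(A[r][j] for r in range(n))    # column 3 is not zero
# hence the right shadow (rows <= 2, columns >= 3) is entirely zero
assert all(A[r][s] == 0 for r in range(i + 1) for s in range(j, n))
print("the zero entry throws a right shadow")
```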
Let us now assume that $r > 1$. As
$$A\binom{i+1, \dots, i+r}{j+1, \dots, j+r}$$
is of rank $r - 1$ there are $p, q \in \{1, \dots, r\}$ such that
$$A\binom{i+1, \dots, \widehat{i+p}, \dots, i+r}{j+1, \dots, \widehat{j+q}, \dots, j+r} > 0. \tag{1.3}$$
Set
$$b_{k\ell} = A\binom{i+1, \dots, \widehat{i+p}, \dots, i+r, k}{j+1, \dots, \widehat{j+q}, \dots, j+r, \ell}$$
Proof of Theorem 1.13 We first prove directly that arr > 0 for all
r ∈ {1, . . . , n}. Assume arr = 0. From Proposition 1.15 we have four
options. But all four options contradict the nonsingularity of A. Obviously
we cannot have that the rth row or column of A is zero. Thus either the
left or right shadow of arr is zero. Assume it is the right shadow. Then
aij = 0 for all i ≤ r and all j ≥ r, implying that the first r rows of A are
linearly dependent. This is a contradiction and therefore arr > 0.
We derive the general result by applying an induction argument on the
size of the minor and using Sylvester’s Determinant Identity. We assume
that for any totally positive nonsingular $n \times n$ matrix (any $n$) all principal minors of order at most $p - 1$ are strictly positive ($p \le n$). We prove that this same result holds for all principal minors of order $p$. We have proved the case $p = 1$. For any $1 \le i_1 < \cdots < i_p \le n$ set
$$b_{k\ell} = A\binom{i_1, \dots, i_{p-1}, k}{i_1, \dots, i_{p-1}, \ell},$$
For a totally positive matrix there is also an interplay between its rank, the linear (in)dependence of its rows and columns, and the strict positivity of specific minors. We detail this in the next series of results.
Proof Since $\mathbf{a}^{i_1}, \dots, \mathbf{a}^{i_{r+1}}$ are linearly dependent, while $\mathbf{a}^{i_2}, \dots, \mathbf{a}^{i_{r+1}}$ are linearly independent, we can write
$$\mathbf{a}^{i_1} = \sum_{s=2}^{r+1} c_s\, \mathbf{a}^{i_s}.$$
As $\mathbf{a}^{i_1}, \dots, \mathbf{a}^{i_r}$ are linearly independent there exist $1 \le j_1 < \cdots < j_r \le m$ for which
$$A\binom{i_1, \dots, i_r}{j_1, \dots, j_r} > 0.$$
Substituting for $\mathbf{a}^{i_1}$ we obtain
$$A\binom{i_1, \dots, i_r}{j_1, \dots, j_r} = \sum_{s=2}^{r+1} c_s\, A\binom{i_s, i_2, \dots, i_r}{j_1, j_2, \dots, j_r} = c_{r+1} A\binom{i_{r+1}, i_2, \dots, i_r}{j_1, j_2, \dots, j_r} = (-1)^{r+1} c_{r+1} A\binom{i_2, \dots, i_r, i_{r+1}}{j_1, \dots, j_{r-1}, j_r}.$$
The matrix A is totally positive and the left-hand side is strictly positive.
Thus (−1)r+1 cr+1 > 0.
By assumption A is of rank at least r. If r = m there is nothing to
prove, while if r + 1 = n there is also nothing to prove. As such we assume
that $r + 1 \le m$ and $r + 1 < n$. Let $\ell \in \{1, \dots, n\} \setminus \{i_1, \dots, i_{r+1}\}$. Thus $1 = i_1 < \ell < i_{r+1} = n$. For every choice of $1 \le k_1 < \cdots < k_{r+1} \le m$,
$$A\binom{i_1, \dots, \ell, \dots, i_r}{k_1, k_2, \dots, k_{r+1}} = \sum_{s=2}^{r+1} c_s\, A\binom{i_s, i_2, \dots, \ell, \dots, i_r}{k_1, k_2, \dots, k_{r+1}} = c_{r+1} A\binom{i_{r+1}, i_2, \dots, \ell, \dots, i_r}{k_1, k_2, \dots, k_{r+1}} = (-1)^{r} c_{r+1} A\binom{i_2, \dots, \ell, \dots, i_r, i_{r+1}}{k_1, k_2, \dots, k_{r+1}}.$$
As $A$ is totally positive and $(-1)^{r} c_{r+1} < 0$, it follows that
$$A\binom{i_1, \dots, \ell, \dots, i_r}{k_1, k_2, \dots, k_{r+1}} = 0.$$
Since this is true for every choice of $1 \le k_1 < \cdots < k_{r+1} \le m$, we have that $\mathbf{a}^{\ell} \in \operatorname{span}\{\mathbf{a}^{i_1}, \dots, \mathbf{a}^{i_r}\}$. From our assumption we also have $\mathbf{a}^{i_{r+1}} \in \operatorname{span}\{\mathbf{a}^{i_1}, \dots, \mathbf{a}^{i_r}\}$. Thus $A$ is of rank $r$.
where we exchange the roles of the rows and columns. That is, if bk denotes
the kth column of B, then, by assumption, the r + 1 vectors bj1 , . . . , bjr+1
are linearly dependent, while the r vectors bj1 , . . . , bjr and bj2 , . . . , bjr+1
are each linearly independent. Thus B is of rank r, implying that the r + 1
vectors ai1 , . . . , air+1 are linearly dependent.
independent, then, from Proposition 1.16, the rank of the matrix composed
from the rows indexed i1 , . . . , ir , ir+1 , k is exactly r. But, by assumption,
the ai1 , . . . , air+1 are linearly independent. Thus the ai2 , . . . , air , ak are
necessarily linearly dependent. We repeat this argument, each time lopping
off the first vector, until we arrive at the desired result that ak is linearly
dependent, i.e., ak is the zero vector.
Proof Assume
$$A\binom{\alpha_k + 1, \dots, \alpha_k + r_k}{\beta_k + 1, \dots, \beta_k + r_k} = 0,$$
and no principal minor of
$$A\binom{\alpha_k + 1, \dots, \alpha_k + r_k}{\beta_k + 1, \dots, \beta_k + r_k}$$
vanishes.
As $A$ is nonsingular it follows from Theorem 1.13 that $\alpha_k \ne \beta_k$. In addition, from Proposition 1.15 we have that each such vanishing minor throws either a right or a left shadow. If $\alpha_k < \beta_k$ then it must throw a right shadow, since the left shadow of
$$A\binom{\alpha_k + 1, \dots, \alpha_k + r_k}{\beta_k + 1, \dots, \beta_k + r_k}$$
is
$$A\binom{\alpha_k + 1, \dots, n}{1, \dots, \beta_k + r_k},$$
which contains the nonsingular $r_k \times r_k$ principal submatrix
$$A\binom{\alpha_k + 1, \dots, \alpha_k + r_k}{\alpha_k + 1, \dots, \alpha_k + r_k}.$$
$$j_p - i_1 \le p - 2 \quad \text{or} \quad i_p - j_1 \le p - 2,$$
$$j_p - i_1 \ge p - 1 \quad \text{and} \quad i_p - j_1 \ge p - 1. \tag{1.4}$$
Set
$$\alpha = \max\{i_1, j_1\} - 1.$$
We claim that
$$i_1 \le \alpha + 1 < \cdots < \alpha + p \le i_p$$
and
$$j_1 \le \alpha + 1 < \cdots < \alpha + p \le j_p.$$
$$j_p - i_1 \le p - 2 \quad \text{or} \quad i_p - j_1 \le p - 2$$
is of rank $p - 1$. As
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p}$$
lies in this submatrix we have that
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p}$$
lies in the left shadow of
$$A\binom{\alpha + 1, \dots, \alpha + p}{\beta + 1, \dots, \beta + p},$$
where $\alpha = i_1 - 1$ and $\beta = j_p - p$. In fact from our assumption that no principal minors of
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p}$$
vanish, it necessarily follows that
$$A\binom{\alpha + 1, \dots, \alpha + p}{\beta + 1, \dots, \beta + p}$$
is of rank exactly $p - 1$.
The case where $i_p - j_1 \le p - 2$ is handled similarly. That is, it follows that
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p}$$
lies in the right shadow of the matrix
$$A\binom{i_p - p + 1, i_p - p + 2, \dots, i_p}{j_1, j_1 + 1, \dots, j_1 + p - 1}$$
of rank $p - 1$. This proves the theorem.
Proof Assume
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p} = 0.$$
From Theorem 1.19 there exist $(\alpha, \beta, r)$ such that
$$A\binom{\alpha + 1, \dots, \alpha + r}{\beta + 1, \dots, \beta + r} = 0,$$
no principal minor of
$$A\binom{\alpha + 1, \dots, \alpha + r}{\beta + 1, \dots, \beta + r}$$
vanishes, and some $r \times r$ principal submatrix of
$$A\binom{i_1, \dots, i_p}{j_1, \dots, j_p}$$
lies in the right shadow of
$$A\binom{\alpha + 1, \dots, \alpha + r}{\beta + 1, \dots, \beta + r}$$
if $\alpha < \beta$, or in its left shadow if $\alpha > \beta$.
The assumption of the proposition implies that $r = 1$ and thus $a_{\alpha+1,\beta+1} = 0$. We therefore have an $\alpha \ne \beta$ such that for some $k \in \{1, \dots, p\}$ the entry $a_{i_k, j_k}$ lies in the right shadow of $a_{\alpha+1,\beta+1} = 0$ if $\alpha < \beta$, or in its left shadow if $\alpha > \beta$. This implies that
$$a_{i_k, j_k} = 0.$$
if and only if
$$a_{i_k, j_k} > 0, \quad k = 1, \dots, p,$$
Proof We prove the result for a totally positive matrix. If A(i, j, k) = 0 then
(1.5) is certainly valid. As such we may assume that A(i, j, k) > 0. Thus
from Theorem 1.13 all the other minors in (1.5) are strictly positive.
Let
$$b_{k\ell} = A\binom{\mathbf{i}, k}{\mathbf{i}, \ell}.$$
Set $|\mathbf{j}| = q$ and $|\mathbf{k}| = r$, where $|\cdot|$ denotes the cardinality of the set. From Sylvester's Determinant Identity
$$B(\mathbf{j}) = A(\mathbf{i}, \mathbf{j})\, A(\mathbf{i})^{q-1},$$
$$B(\mathbf{k}) = A(\mathbf{i}, \mathbf{k})\, A(\mathbf{i})^{r-1},$$
and
$$B(\mathbf{j}, \mathbf{k}) = A(\mathbf{i}, \mathbf{j}, \mathbf{k})\, A(\mathbf{i})^{q+r-1},$$
where $\mathbf{j}$ and $\mathbf{k}$ are disjoint sets of ordered indices in $\{1, \dots, n\}$ and $B$ is a totally positive matrix. Thus (1.5) reduces to the inequality $B(\mathbf{j}, \mathbf{k}) \le B(\mathbf{j}) B(\mathbf{k})$. It is this inequality that we now prove.
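Inequality (1.5) can be spot-checked numerically before the induction argument. The following hedged sketch is not part of the text; the symmetric Pascal matrix below is a standard example of a totally positive matrix:

```python
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion; exact for integer entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def pminor(A, s):
    # principal minor of A on the index set s
    s = sorted(s)
    return det([[A[i][j] for j in s] for i in s])

A = [[1, 1, 1, 1],
     [1, 2, 3, 4],
     [1, 3, 6, 10],
     [1, 4, 10, 20]]   # symmetric Pascal matrix, totally positive

subsets = [set(c) for r in range(1, 4) for c in combinations(range(4), r)]
checked = 0
for i in subsets:
    for j in subsets:
        for k in subsets:
            if i & j or i & k or j & k:
                continue   # i, j, k must be pairwise disjoint
            # (1.5): A(i, j, k) A(i) <= A(i, j) A(i, k), all principal minors
            assert pminor(A, i | j | k) * pminor(A, i) <= pminor(A, i | j) * pminor(A, i | k)
            checked += 1
print("inequality (1.5) verified in", checked, "cases")
```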
Our proof will be by induction on $q + r = m$. We always assume that $q, r \ge 1$. For $m = 2$ we must show that
$$B\binom{j, k}{j, k} \le B\binom{j}{j} B\binom{k}{k} = b_{jj}\, b_{kk},$$
where $\mathbf{j} = (j)$ and $\mathbf{k} = (k)$. As
$$B\binom{j, k}{j, k} = b_{jj}\, b_{kk} - b_{jk}\, b_{kj}$$
and $b_{jk}, b_{kj} \ge 0$, this inequality is immediate.
Assume $m > 2$ and $q > 1$. Let $j_1 \in \mathbf{j}$ and $\mathbf{j}' = \mathbf{j} \setminus \{j_1\}$, and set
$$c_{k\ell} = B\binom{j_1, k}{j_1, \ell}$$
for $k, \ell \in \mathbf{j}' \cup \mathbf{k}$. From Sylvester's Determinant Identity, $C = (c_{k\ell})$ is totally positive and
$$C(\mathbf{j}', \mathbf{k}) > 0.$$
Now
$$C(\mathbf{j}') = B(\mathbf{j})\, b_{j_1 j_1}^{q-2},$$
$$C(\mathbf{k}) = B(\mathbf{k}, j_1)\, b_{j_1 j_1}^{r-1},$$
and
$$C(\mathbf{j}', \mathbf{k}) = B(\mathbf{j}, \mathbf{k})\, b_{j_1 j_1}^{q+r-2}.$$
Then
$$A(\mathbf{j}_1) \cdots A(\mathbf{j}_q) \le A(\mathbf{i}_1) \cdots A(\mathbf{i}_q).$$
Proof We prove this result for a totally positive matrix. The same proof,
with minor modifications, is valid for strictly totally positive matrices. Our
proof will be by induction on q. Note that the case q = 2 is exactly Theorem
1.21. Set
which reduces to
$$\prod_{\ell=1}^{q} A(\mathbf{j}_\ell) \le \prod_{\ell=1}^{q-1} A(\mathbf{k}_\ell)\, A(\mathbf{m}_0).$$
Thus
$$Q_1 = \prod_{i=1}^{n} a_{ii},$$
$$Q_2 = \left( \prod_{1 \le i_1 < i_2 \le n} A\binom{i_1, i_2}{i_1, i_2} \right)^{\frac{1}{n-1}},$$
$$\vdots$$
$$Q_n = A\binom{1, \dots, n}{1, \dots, n}.$$
$$Q_n \le Q_{n-1} \le \cdots \le Q_1.$$
Proof We prove this result for totally positive matrices. The same proof, with minor modifications, is valid for strictly totally positive matrices. Let
$$P_r = \left( \prod_{1 \le i_1 < \cdots < i_r \le n} A\binom{i_1, \dots, i_r}{i_1, \dots, i_r} \right)^{1/\binom{n}{r}}, \quad r = 1, \dots, n.$$
$$\prod_{1 \le r < s \le n} A\binom{r, s}{r, s} \le \left( \prod_{i=1}^{n} a_{ii} \right)^{n-1},$$
from which
$$P_2 = \left( \prod_{1 \le r < s \le n} A\binom{r, s}{r, s} \right)^{\frac{2}{n(n-1)}} \le \left( \prod_{i=1}^{n} a_{ii} \right)^{\frac{2}{n}} = P_1^2.$$
Fixing $i_1, \dots, i_{r-1}$ and taking products over $i_r < i_{r+1}$ (distinct from $i_1, \dots, i_{r-1}$) we obtain
$$\prod_{i_r < i_{r+1}} A\binom{i_1, \dots, i_{r-1}, i_r, i_{r+1}}{i_1, \dots, i_{r-1}, i_r, i_{r+1}}\, A\binom{i_1, \dots, i_{r-1}}{i_1, \dots, i_{r-1}}^{\binom{n-r+1}{2}} \le \prod_{i_r} A\binom{i_1, \dots, i_{r-1}, i_r}{i_1, \dots, i_{r-1}, i_r}^{\,n-r},$$
i.e.,
$$P_{r+1}^{\binom{n}{r+1}\binom{r+1}{r-1}}\, P_{r-1}^{\binom{n}{r-1}\binom{n-r+1}{2}} \le P_r^{\binom{n}{r}(n-r)\binom{r}{r-1}}.$$
Now
$$Q_r^{r/n} = P_r, \quad r = 1, \dots, n.$$
Thus we have
$$Q_{r+1}^{\frac{r+1}{2r}}\, Q_{r-1}^{\frac{r-1}{2r}} \le Q_r, \quad r = 1, \dots, n-1.$$
$$Q_2^2 \le Q_1^2,$$
implying
$$Q_2 \le Q_1.$$
Assume $2 \le r \le n - 1$ and
$$Q_r \le \cdots \le Q_1.$$
and therefore
$$Q_{r+1} \le Q_r.$$
Thus
$$Q_n \le \cdots \le Q_1.$$
$$0 \le j_{n-1} \le \cdots \le j_1.$$
Then
$$(-1)^{j_1 + \cdots + j_{n-1}} \det C > 0.$$
Let $r$ denote the number of zeros in the first column of $C$. Then by the induction hypothesis
$$(-1)^{j_2 + \cdots + j_{n-2} - (r-1)}\, C\binom{2, \dots, n-1}{2, \dots, n-1} > 0,$$
and applying the induction hypothesis to each of the $b_{ij}$ we get
$$(-1)^{j_1 + \cdots + j_{n-2}}\, b_{11} = (-1)^{j_1 + \cdots + j_{n-2}}\, C\binom{1, \dots, n-1}{1, \dots, n-1} > 0,$$
$$(-1)^{j_2 + \cdots + j_{n-2} - (r-1)}\, b_{nn} = (-1)^{j_2 + \cdots + j_{n-2} - (r-1)}\, C\binom{2, \dots, n}{2, \dots, n} > 0,$$
$$(-1)^{j_1 + \cdots + j_{n-2} - r}\, b_{1n} = (-1)^{j_1 + \cdots + j_{n-2} - r}\, C\binom{1, \dots, n-1}{2, \dots, n} > 0,$$
$$(-1)^{j_2 + \cdots + j_{n-2}}\, b_{n1} = (-1)^{j_2 + \cdots + j_{n-2}}\, C\binom{2, \dots, n}{1, \dots, n-1} > 0.$$
Thus
$$(-1)^{j_1 - (r-1)}\, [\, b_{11} b_{nn} - b_{1n} b_{n1} \,] > 0,$$
and therefore
$$(-1)^{j_1 + \cdots + j_{n-1}} \det C > 0.$$
to Theorem 2.6 we then can show that if A is only totally positive then B
is totally positive.)
Set
$$c_{sj} = (-1)^{s+k} A\binom{1, \dots, k}{1, \dots, \widehat{s}, \dots, k, j}, \quad s = 1, \dots, k, \ j = k+1, \dots, m,$$
where we set as zero the r × r bottom right corner of this matrix, i.e., set as
zero the (i, j) entries for i, j = k + 1, . . . , k + r. It follows from Proposition
1.3 that Proposition 1.24 also holds in this case as interchanging all rows
and columns preserves strict total positivity. The resulting matrix has sign
1.5 Remarks
The study of total positivity and strict total positivity for continuous
kernels predates the study of total positivity and strict total positivity for
matrices (see e.g., Kellogg [1918]). Such phenomena are not uncommon.
The three pioneers of the theory of totally positive matrices are
F. R. Gantmacher, M. G. Krein, and I. J. Schoenberg. It was Schoenberg
[1930] who first considered such matrices. He did so in his study of variation
diminishing properties (see Chapter 3). In fact Schoenberg also coined
the term total positiv (in German) in his 1930 paper. Krein came to
consider such kernels and (later) matrices as a consequence of his research
on ordinary differential equations whose Green’s kernel is totally positive.
The joint paper of Gantmacher, Krein [1937] (an announcement appeared
in Gantmacher, Krein [1935]) presented most of the main results relating to
spectral properties of totally positive matrices, and many other important
results concerning totally positive matrices (except that they were then
unaware of Schoenberg’s earlier paper and the variation diminishing
properties associated with such matrices). This 1937 paper, in a slightly
expanded form, is most of Chapter II of the book Gantmacher, Krein
[1950]. In Gantmacher, Krein [1937] they used the terms complètement
non négative and complètement positive (French) for totally positive and
strictly totally positive, respectively. As such, many authors use the terms
totally nonnegative and totally positive for totally positive and strictly
totally positive, respectively.
For a proof and history of the Cauchy–Binet formula, Sylvester’s
Determinant Identity and the Laplace expansion by minors, see Brualdi,
Schneider [1983] and references therein. The initial propositions of
Section 1.2 can, for the most part, be found in Gantmacher, Krein
[1937], Ando [1987], and Karlin [1968]. Theorem 1.8 is from Ando [1987],
Theorem 3.9. The proof as presented here is very different. Proposition 1.9
$$Q_r = \left( \prod_{1 \le i_1 < \cdots < i_r \le q} A\!\left( \mathbf{s}^{i_1} \cup \cdots \cup \mathbf{s}^{i_r} \right) \right)^{1/\binom{q-1}{r-1}}.$$
Then
$$Q_q \le \cdots \le Q_1.$$
Other determinantal inequalities exist for (strictly) totally positive
matrices (see e.g., Mühlbach, Gasca [1985], Fallat, Gekhtman, Johnson
[2003], Skandera [2004] and Boocher, Froehle [2008]). The construction at
$$a_{ij}\, a_{i+1,j+1} > 4 \cos^2\!\left(\frac{\pi}{n+1}\right) a_{i,j+1}\, a_{i+1,j}$$
2

Criteria for total positivity and strict total positivity
d(i) counts the number of integers between i1 and ik that are not in the
sequence i1 , . . . , ik . Thus d(i) = 0 if and only if the sequence i is composed
of consecutive integers. We will prove this lemma by induction on d(·).
In the above equation the three minors of order $m - 1$ are all positive by assumption, as they are all based on the columns $1, \dots, m-1$. Now $d(\mathbf{i}) = (i_m - i_1) - (m - 1) = (j_{m+1} - j_1) - (m - 1) = r$, while $(j_{m+1} - j_2) - (m - 1)$ and $(j_m - j_1) - (m - 1)$ are strictly less than $r$. Thus, by our induction hypothesis, we have
$$C\binom{j_2, \dots, j_{m+1}}{1, \dots, m},\ C\binom{j_1, \dots, j_m}{1, \dots, m} > 0.$$
We will use the method of proof of Fekete’s Lemma at least twice more in
this chapter. An immediate consequence of the above result is the following.
Theorem 2.2 Let A be an n×m matrix. Assume that all kth order minors
of A composed from consecutive rows and consecutive columns are strictly
positive for k = 1, . . . , min{n, m}. Then A is strictly totally positive.
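Theorem 2.2 drastically reduces the work of verifying strict total positivity: only minors on consecutive rows and consecutive columns need to be tested. A hedged sketch (the matrix below, built from powers of the nodes 1, 2, 3, is an assumed example of a strictly totally positive matrix):

```python
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion; exact for integer entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

A = [[1, 1, 1],
     [1, 2, 3],
     [1, 4, 9],
     [1, 8, 27]]   # rows are powers 0..3 of the nodes 1, 2, 3
n, m = 4, 3

def fekete_stp(A, n, m):
    # test only minors on consecutive rows AND consecutive columns
    for k in range(1, min(n, m) + 1):
        for i in range(n - k + 1):
            for j in range(m - k + 1):
                if minor(A, range(i, i + k), range(j, j + k)) <= 0:
                    return False
    return True

assert fekete_stp(A, n, m)
# the criterion then guarantees that *every* minor is positive:
for k in range(1, 4):
    for rows in combinations(range(n), k):
        for cols in combinations(range(m), k):
            assert minor(A, rows, cols) > 0
print("strict total positivity confirmed from consecutive minors alone")
```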
Theorem 2.3 Let A be an n×m matrix. Assume that all kth order minors
of A composed from the first k rows and k consecutive columns, and also all
kth order minors of A composed from the first k columns and k consecutive
rows, are strictly positive for k = 1, . . . , min{n, m}. Then A is strictly
totally positive.
Proof We prove that the conditions in Theorem 2.3 imply that all kth
order minors of A composed from consecutive rows and columns are strictly
positive for k = 1, . . . , min{n, m}. We then apply Theorem 2.2 to obtain
our desired result. In other words we prove that
$$A\binom{i+1, \dots, i+k}{j+1, \dots, j+k} > 0 \tag{2.1}$$
for i = 0, . . . , n − k, j = 0, . . . , m − k, and k = 1, . . . , min{n, m}. Our
assumption is that this result holds when i = 0 and when j = 0.
Let us first prove that (2.1) also holds when $j = 1$, i.e.,
$$A\binom{i+1, \dots, i+k}{2, \dots, k+1} > 0.$$
We prove this by induction on $i$ and on $k$. Assume $i = 1$. For $k = 1$ we must prove that
$$A\binom{2}{2} = a_{22} > 0.$$
Now
$$A\binom{1, 2}{1, 2} = a_{11} a_{22} - a_{21} a_{12} > 0.$$
By assumption a11 , a21 , a12 > 0. Thus a22 > 0. Assume the result holds for
$k - 1$. By an application of Sylvester's Determinant Identity,
$$A\binom{1, \dots, k}{1, \dots, k} A\binom{2, \dots, k+1}{2, \dots, k+1} - A\binom{1, \dots, k}{2, \dots, k+1} A\binom{2, \dots, k+1}{1, \dots, k} = A\binom{2, \dots, k}{2, \dots, k} A\binom{1, \dots, k+1}{1, \dots, k+1}.$$
When dealing with strictly totally positive matrices we can reverse the
order of the rows and columns (see Proposition 1.3). As a consequence
thereof, we have the following.
Corollary 2.4 Let A be an n×m matrix. Assume that all kth order minors
of A composed from the last k rows and k consecutive columns, and also all
kth order minors of A composed from the last k columns and k consecutive
rows, are strictly positive for k = 1, . . . , min{n, m}. Then A is strictly
totally positive.
Similar corollaries hold for many of the results of this chapter. They
should be understood. We will not bother to state them again.
Is this the minimal number of determinants that must be checked when
determining whether A is a strictly totally positive matrix? It seems that
it must be minimal as the number of conditions is nm, which is the number
of entries in the matrix A.
The following is presented here as it fits in with our present discussion
and can be seen to be a simple consequence of Theorem 2.3. It is also an
immediate corollary of Theorem 1.19. We state it without proof.
$$\lim_{k \to \infty} A_k = A.$$
Theorem 2.6 Strictly totally positive matrices are dense in the class of
totally positive matrices.
Proof For each $q \in (0, 1)$ the matrix $Q_n(q) = \left( q^{(i-j)^2} \right)_{i,j=1}^{n}$ is a strictly totally positive matrix. This may be proven as follows. As $q^{(i-j)^2} = q^{i^2} q^{j^2} q^{-2ij}$ it suffices to prove that $\left( p^{ij} \right)$ is strictly totally positive for $p = q^{-2} > 1$. Let $P = (p^{ij})_{i=1,\,j=1}^{n,\,m}$. Consider
$$P\binom{i+1, \dots, i+k}{j+1, \dots, j+k}.$$
Set $x_r = p^{i+r}$, $r = 1, \dots, k$. It then easily follows, factoring out from the $r$th row of this minor $p^{(i+r)(j+1)}$, that
$$P\binom{i+1, \dots, i+k}{j+1, \dots, j+k} = \prod_{r=1}^{k} p^{(i+r)(j+1)}\, V(x_1, \dots, x_k),$$
The terms in the product on the right are strictly positive since $p > 1$. This implies that
$$P\binom{i+1, \dots, i+k}{j+1, \dots, j+k} > 0$$
and therefore, from Theorem 2.2, the matrix $Q_n(q)$ is strictly totally positive. The matrix $Q_n(q)$ has the further property that
$$\lim_{q \to 0^+} Q_n(q) = I_n$$
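Both properties of $Q_n(q)$ are easy to confirm in exact rational arithmetic. A hedged sketch (not from the text):

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion; exact for Fraction entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

def Qn(q, n):
    return [[q ** ((i - j) ** 2) for j in range(n)] for i in range(n)]

n = 3
Q = Qn(Fraction(1, 2), n)

# every minor of Q_n(q) is strictly positive
for k in range(1, n + 1):
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            assert minor(Q, rows, cols) > 0

# and Q_n(q) -> I_n entrywise as q -> 0+
Qsmall = Qn(Fraction(1, 1000), n)
for i in range(n):
    for j in range(n):
        target = 1 if i == j else 0
        assert abs(Qsmall[i][j] - target) < Fraction(1, 100)
print("Q_n(q) is strictly totally positive and tends to the identity")
```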
(1, 1)-element of Bq . The matrix Bq is totally positive and of rank r + 1.
As such, by the same method as employed above, we can approximate Bq
arbitrarily well by a matrix of rank r + 1, all of whose k × k minors are
strictly positive for every k ≤ r + 1. We continue this process to obtain the
result.
Proof Assume all $k$th order minors of $A$ composed from arbitrary $k$ rows and $k$ consecutive columns are nonnegative. Let $Q(q)$ be the $n \times n$ matrix as used in the proof of Theorem 2.6, $q \in (0, 1)$. Set
$$A_q = Q(q) A.$$
for all $i, j \in \{0, \dots, n-k\}$. From Theorem 2.2, $A_q$ is strictly totally positive. In addition,
$$\lim_{q \to 0^+} A_q = A,$$
Theorems 2.2 and 2.3 are very useful tools for proving strict total
positivity. We now settle a previous debt by presenting proofs of
Propositions 1.9 and 1.10 from Section 1.2. We recall the following.
The first statement in Proposition 1.9 will then follow from either Theorem
2.2 or 2.3.
Obviously if $i + k \le r$, then
$$C\binom{i+1, \dots, i+k}{j_1, \dots, j_k} = A\binom{i+1, \dots, i+k}{j_1, \dots, j_k}.$$
By adding $a_{r+1,1}$ times the $(r-i)$th row (which is the row with entries $(a_{r,j_1}, \dots, a_{r,j_k})$) to the succeeding row, we get a new $(r-i+1)$st row of
Factor out $a_{r,1}$ and add $a_{r+2,1}$ times this row to the next row to obtain a new $(r-i+2)$nd row of
Factor out $a_{i1}$ and add $a_{i+2,1}$ times the second row to the third row, etc. The formula now easily follows.
From these three formulæ it follows from Theorems 2.2 or 2.3 that if A
is strictly totally positive then C is strictly totally positive. Now assume
that A is only totally positive. From Theorem 2.6 there exists a sequence
(Ak ) of n × m strictly totally positive matrices whose elements converge
are strictly positive and independent of $j$. Thus the matrix $D$ being strictly totally positive is equivalent to the matrix $C$ with elements
$$c_{ij} = A\binom{i-p, i-p+1, \dots, i-1, i}{1, \dots, p-1, p, j}$$
being strictly totally positive. This advances the induction step and proves the proposition.
matrix is strictly totally positive in the above sense. These criteria are very
reminiscent of the criteria in Theorem 2.3, and for good reason.
Proof Obviously the two claims are equivalent. We prove the first claim.
That is, we show that if A is an n × n upper triangular matrix satisfying
(2.2), then
$$A\binom{i_1, \dots, i_k}{j_1, \dots, j_k} > 0$$
whenever $i_m \le j_m$, $m = 1, \dots, k$.
From (2.2) and Fekete's Lemma 2.1, it follows that
$$A\binom{1, \dots, k}{j_1, \dots, j_k} > 0 \tag{2.3}$$
for all $1 \le j_1 < \cdots < j_k \le n$, and $k = 1, \dots, n$. Since $A$ is upper triangular and satisfies (2.3), we also have that for any $1 \le j_1 < \cdots < j_k \le n$ with $i + m \le j_m$, $m = 1, \dots, k$,
$$A\binom{1, \dots, i}{1, \dots, i} A\binom{i+1, \dots, i+k}{j_1, \dots, j_k} = A\binom{1, \dots, i, i+1, \dots, i+k}{1, \dots, i, j_1, \dots, j_k} > 0.$$
$$A = LDU,$$
Using Theorem 2.8 we prove that L is a lower strictly totally positive matrix
and U is an upper strictly totally positive matrix.
$$A = LDU,$$
Proof We need only prove that L is a lower strictly totally positive matrix
and U is an upper strictly totally positive matrix. The other facts follow
trivially. As $A = LDU$ and $D$ is a diagonal matrix, then from the Cauchy–Binet formula
$$A\binom{1, \dots, k}{j+1, \dots, j+k} = \sum_{1 \le \ell_1 < \cdots < \ell_k \le n} L\binom{1, \dots, k}{\ell_1, \dots, \ell_k} D\binom{\ell_1, \dots, \ell_k}{\ell_1, \dots, \ell_k} U\binom{\ell_1, \dots, \ell_k}{j+1, \dots, j+k}.$$
Since $L$ is lower triangular, the only possibly nonzero summand is that with $\ell_s = s$, $s = 1, \dots, k$. Now
$$L\binom{1, \dots, k}{1, \dots, k} = 1, \quad D\binom{1, \dots, k}{1, \dots, k} > 0, \quad \text{and} \quad A\binom{1, \dots, k}{j+1, \dots, j+k} > 0.$$
Therefore
$$U\binom{1, \dots, k}{j+1, \dots, j+k} > 0.$$
As this is valid for all appropriate j and k we have from Theorem 2.8 that
U is an upper strictly totally positive matrix. The analogous proof shows
that L is a lower strictly totally positive matrix.
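The factorization argument can be mimicked in exact arithmetic. The following sketch (our own illustration, not the text's algorithm) computes A = LDU by Doolittle elimination for a small totally positive matrix and confirms that D has a strictly positive diagonal:

```python
from fractions import Fraction

def ldu(A):
    # Doolittle-style A = L * diag(D) * U with unit triangular L and U.
    # No pivoting: assumes nonzero leading principal minors, as holds for
    # a nonsingular totally positive matrix.
    n = len(A)
    U = [[Fraction(A[i][j]) for j in range(n)] for i in range(n)]
    L = [[Fraction(1) if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    for c in range(n):
        for r in range(c + 1, n):
            f = U[r][c] / U[c][c]
            L[r][c] = f
            for k in range(n):
                U[r][k] -= f * U[c][k]
    D = [U[i][i] for i in range(n)]
    U = [[U[i][j] / D[i] for j in range(n)] for i in range(n)]
    return L, D, U

A = [[1, 1, 1], [1, 2, 3], [1, 3, 6]]     # a totally positive (Pascal-type) matrix
L, D, U = ldu(A)
assert all(d > 0 for d in D)              # strictly positive diagonal of D
# reassembling L * diag(D) * U recovers A exactly
B = [[sum(L[i][k] * D[k] * U[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
assert B == [[Fraction(x) for x in row] for row in A]
```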
A = U DL,
This can be shown in a variety of ways. It also follows easily from the
previous result and the following. Let
Q = [ 0 ⋯ 0 1
      0 ⋯ 1 0
      ⋮  ⋰  ⋮
      1 ⋯ 0 0 ].
Then
QAQ
is the matrix obtained from A by reversing the order of its rows and
columns. As is known (see Proposition 1.3 in Chapter 1), A is strictly
totally positive if and only if QAQ is strictly totally positive. From the
LDU factorization of QAQ (Theorem 2.10) we have
A = (QLQ)(QDQ)(QUQ).
all principal minors of A are strictly positive (Theorem 1.13). We state this
result here for easy reference.
A = LDU
Proof From Theorem 2.6 there exists a sequence of strictly totally positive
matrices (Am ) that approximate A, i.e., such that
lim Am = A.
m→∞
From Theorem 2.10, each Am = (aij (m)) has a unique factorization of the
form
Am = Lm Dm Um
where L̃_m and Ũ_m are stochastic (row sums 1). As a consequence we also have

a_ij(m) = d̃_ii(m)
A = LU.
d(i) := Σ_{j=2}^{k} (i_j − i_{j−1} − 1) = (i_k − i_1) − (k − 1) ≥ 0.
Note that d(i) counts the number of integers between i1 and ik that are
not in the sequence i1 , . . . , ik , and thus d(i) = 0 if and only if the sequence
i is composed of consecutive integers.
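In code the dispersion is a one-liner; this small sketch (ours) makes the definition concrete:

```python
def dispersion(i_seq):
    # d(i) = sum_{j>=2} (i_j - i_{j-1} - 1) = (i_k - i_1) - (k - 1):
    # the number of integers strictly between i_1 and i_k missing from i
    return (i_seq[-1] - i_seq[0]) - (len(i_seq) - 1)

assert dispersion([3, 4, 5, 6]) == 0   # consecutive integers
assert dispersion([2, 4, 7]) == 3      # 3, 5 and 6 are skipped
```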
The main theorem providing determinantal criteria for when a matrix
is totally positive is the following. It is based on previous reasonings and
techniques, but is technically more detailed.
determinantal equality (1.2) of Section 1.1, we have for each s ∈ {1, …, k}:

A(r_1, …, r̂_t, …, r_{k+1} ; j_1^*, …, j_k^*) · A(r_2, …, r_k ; j_1^*, …, ĵ_s^*, …, j_k^*)
  = A(r_2, …, r_{k+1} ; j_1^*, …, j_k^*) · A(r_1, …, r̂_t, …, r_k ; j_1^*, …, ĵ_s^*, …, j_k^*)      (2.9)
  + A(r_1, …, r_k ; j_1^*, …, j_k^*) · A(r_2, …, r̂_t, …, r_{k+1} ; j_1^*, …, ĵ_s^*, …, j_k^*).

(A hat denotes an omitted index.)
By the induction assumption, all three (k − 1) × (k − 1) minors in (2.9) are nonnegative, while the two k × k minors on the right-hand side of the equation are nonnegative since the dispersion of the rows therein is strictly less than ℓ, by the minimality property of ℓ. Thus the value of the right-hand side of (2.9) is nonnegative. As (2.8) holds for i_1^*, …, i_k^* and j_1^*, …, j_k^* (which give the left-most determinant in (2.9)), it necessarily follows that we must have

A(i_2^*, …, p, …, i_{k−1}^* ; j_1^*, …, ĵ_s^*, …, j_k^*) = A(r_2, …, r_k ; j_1^*, …, ĵ_s^*, …, j_k^*) = 0.      (2.10)
As (2.10) is valid for all s = 1, …, k, we have a^p ∈ span{a^{i_2^*}, …, a^{i_{k−1}^*}} on the columns j_1^*, …, j_k^*, for each p satisfying i_1^* < p < i_k^*.
Since

A(i_1^*, …, i_k^* ; j_1^*, …, j_k^*) < 0,

the rows a^{i_1^*}, …, a^{i_k^*} are linearly independent on the columns j_1^*, …, j_k^*. Thus the rows a^{i_2^*}, …, a^{i_{k−1}^*} are linearly independent on some columns j̃_1, …, j̃_{k−2}, where {j̃_1, …, j̃_{k−2}} = {j_1^*, …, j_k^*}\{j_s^*, j_t^*} (we assume that j_s^* < j_t^* in what follows). That is,

A(i_2^*, …, i_{k−1}^* ; j̃_1, …, j̃_{k−2}) ≠ 0.
By our induction assumption, this implies that the above minor is, in fact,
strictly positive. We use

A(i_2^*, …, i_{k−1}^* ; j̃_1, …, j̃_{k−2})

as a pivot block in Sylvester's Determinant Identity. That is, set

d_ij = A(i_2^*, …, i_{k−1}^*, i ; j̃_1, …, j̃_{k−2}, j).
Then by our induction hypothesis on k we have d_ij ≥ 0 for all i and j, and from (2.10) we have d_{p j_s^*} = d_{p j_t^*} = 0 for each p between i_1^* and i_k^*. Furthermore, by Sylvester's Determinant Identity we have

D(i_1^*, i_k^* ; j_s^*, j_t^*) = A(i_2^*, …, i_{k−1}^* ; j̃_1, …, j̃_{k−2}) · A(i_1^*, …, i_k^* ; j_1^*, …, j_k^*) < 0.
Thus we must have di∗1 jt∗ > 0 and di∗k js∗ > 0.
From Sylvester's Determinant Identity, for j < j_t^*,

D(i_1^*, p ; j, j_t^*) = A(i_2^*, …, i_{k−1}^* ; j̃_1, …, j̃_{k−2}) · A(i_1^*, …, i_{k−1}^*, p ; j̃_1, …, j̃_{k−2}, j, j_t^*).
Both terms on the right-hand side of this equality are nonnegative. The first of these two terms is strictly positive. The second term is nonnegative because the dispersion of i_1^*, …, i_{k−1}^*, p is strictly less than ℓ. Thus

0 ≤ D(i_1^*, p ; j, j_t^*) = d_{i_1^* j} d_{p j_t^*} − d_{i_1^* j_t^*} d_{p j}.
Now d_{p j_t^*} = 0, d_{i_1^* j_t^*} > 0, and d_{p j} ≥ 0. Thus d_{p j} = 0 for all j < j_t^*. When j > j_s^* we consider

D(p, i_k^* ; j_s^*, j)

and apply the same reasoning to obtain d_{p j} = 0 for all j > j_s^*. Thus for all j we have

0 = d_{p j} = A(i_2^*, …, i_{k−1}^*, p ; j̃_1, …, j̃_{k−2}, j).
Since

A(i_2^*, …, i_{k−1}^* ; j̃_1, …, j̃_{k−2}) > 0,

this implies that the row a^p is in span{a^{i_2^*}, …, a^{i_{k−1}^*}} over all columns of A, and this is true for all p satisfying i_1^* < p < i_k^*.
for all appropriate i, 1 ≤ j1 < · · · < jk ≤ n and k (i.e., we must check the
case d(i) = 0). It suffices to consider only those minors with i + m ≤ jm ,
m = 1, . . . , k, since all other minors vanish due to the upper triangular
structure of A. Now, by assumption,
0 ≤ A(1, …, i, i + 1, …, i + k ; 1, …, i, j_1, …, j_k) = ( ∏_{m=1}^{i} a_mm ) · A(i + 1, …, i + k ; j_1, …, j_k).
Based on the above Theorem 2.14 and the analysis of Section 2.4 we
have the following result, which should be compared to Theorem 2.13 and
Proposition 2.7.
Proof If A is totally positive then (ii) and (iii) hold. If A is also nonsingular
then (i) holds from Theorem 1.13.
Assume (i) – (iii) hold. The LDU factorization of A, as in Section 2.4,
is well defined since (i) holds. From (i), (ii), and (iii) we see that D is
a diagonal matrix whose diagonal entries are strictly positive, L is a unit
Corollary 2.17 Let A = (a_ij) be an n × m matrix whose entries are strictly positive. Assume

a_ij a_{i+1,j+1} ≥ 4 a_{i,j+1} a_{i+1,j}

for all i and j. Then A is strictly totally positive.
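A numerical sanity check of this kind of "simple" criterion (a sketch under our own choice of matrix, not the text's example): the matrix a_ij = q^{ij} has constant cross ratio a_ij a_{i+1,j+1}/(a_{i,j+1} a_{i+1,j}) = q, so q = 5 satisfies the hypothesis, and brute force confirms strict total positivity:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def ratio_condition(A, c):
    # a_ij a_{i+1,j+1} >= c * a_{i,j+1} a_{i+1,j} for all i, j
    return all(A[i][j] * A[i + 1][j + 1] >= c * A[i][j + 1] * A[i + 1][j]
               for i in range(len(A) - 1) for j in range(len(A[0]) - 1))

q = Fraction(5)
A = [[q ** (i * j) for j in range(4)] for i in range(4)]
assert ratio_condition(A, 4)
assert all(det([[A[i][j] for j in c] for i in r]) > 0
           for k in range(1, 5)
           for r in combinations(range(4), k)
           for c in combinations(range(4), k))
```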
where 0 ≤ θ < π/2. Expanding by the last row we see that det A_n(θ) satisfies the recurrence relation

det A_n(θ) = 2 cos θ · det A_{n−1}(θ) − det A_{n−2}(θ).

Now

det A_1(θ) = 2 cos θ = sin 2θ / sin θ,

and

det A_2(θ) = 4 cos²θ − 1 = sin 3θ / sin θ.
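Assuming, as the computation above suggests, that A_n(θ) denotes the n × n tridiagonal matrix with diagonal entries 2 cos θ and off-diagonal entries 1 (an assumption on our part, since the definition sits earlier in the section), the recurrence and the closed form det A_n(θ) = sin((n + 1)θ)/sin θ are easy to confirm numerically:

```python
import math

def det_An(n, theta):
    # three-term recurrence det A_n = 2cos(theta) det A_{n-1} - det A_{n-2},
    # with det A_0 := 1 and det A_1 = 2cos(theta)
    d_prev, d = 1.0, 2.0 * math.cos(theta)
    for _ in range(2, n + 1):
        d_prev, d = d, 2.0 * math.cos(theta) * d - d_prev
    return d

theta = 0.7               # any 0 <= theta < pi/2
for n in (1, 2, 3, 8):
    closed = math.sin((n + 1) * theta) / math.sin(theta)
    assert abs(det_An(n, theta) - closed) < 1e-9
```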
The proof of the main claim of Theorem 2.16 is lengthy and complicated.
We therefore divide it into a series of lemmas. We start with the following.
(i) F_m(c) = Σ_{j=0}^{⌊m/2⌋} (−1)^j ( (m − j) choose j ) c^{−j},  m = 0, 1, 2, …

(ii) F_{m−1}(c_k) − (1/c_k²) F_{m−2}(c_k) − 1/c_k^m ≥ F_m(c_k)

for k ≥ 3, m = 2, 3, …, k.
Now

4 cos²θ = (sin 3θ / sin θ) + 1.

Thus, continuing the above and using (i), we have

((c_k − 1)/c_k²) F_{m−2}(c_k) − 1/c_k^m
  = ((c_k − 1)/c_k²) · sin((m − 1)π/(k + 1)) / ( sin(π/(k + 1)) c_k^{(m−2)/2} ) − 1/c_k^m
  = (1/c_k^{(m+2)/2}) · sin(3π/(k + 1)) sin((m − 1)π/(k + 1)) / sin²(π/(k + 1)) − 1/c_k^m
  ≥ (1/c_k^{(m+2)/2}) [ sin(3π/(k + 1)) sin((m − 1)π/(k + 1)) / sin²(π/(k + 1)) − 1 ] ≥ 0,
Note that from Lemma 2.21(i) we have that F_k(c_k) = 0 and F_m(c_k) > 0 for m < k.
The proof of Theorem 2.16 is based on an induction argument. To ease
exposition we assume, in the next series of lemmas, the following.
for r = 1, . . . , m − 2.
Now

A(m − 1, m ; m − 1, m) = a_{m−1,m−1} a_{mm} − a_{m−1,m} a_{m,m−1}

and as

a_{m−1,m} a_{m,m−1} < (1/c) a_{m−1,m−1} a_{mm}

we have

A(m − 1, m ; m − 1, m) > a_{m−1,m−1} a_{mm} − (1/c) a_{m−1,m−1} a_{mm} = (1 − 1/c) a_{m−1,m−1} a_{mm}.
Thus

a_{1,j+1} A(2, …, n ; 1, …, ĵ+1, …, n)
  < a_{21} ⋯ a_{j+1,j} a_{1,j+1} A(j + 2, …, n ; j + 2, …, n)
  < (1/c_n^j) a_{1j} a_{21} ⋯ a_{j,j−1} a_{j+1,j+1} A(j + 2, …, n ; j + 2, …, n).      (2.15)
As c_n = 4 cos²θ_n, θ_n = π/(n + 1), and F_k(c) > 0 for c = 4 cos²θ where θ < π/(k + 1) (see Lemma 2.21(i)), it follows that

F_k(c_n) > 0,  k = 1, …, n − 1.
A(2, …, n ; 1, …, ĵ, …, n) > a_{21} ⋯ a_{j−1,j−2} [ F_{j−2}(c_n) A(j, j + 1, …, n ; j − 1, j + 1, …, n) − (1/c_n) F_{j−3}(c_n) a_{j,j−1} A(j + 1, …, n ; j + 1, …, n) ].

Moreover,

A(j, j + 1, …, n ; j − 1, j + 1, …, n) > a_{j,j−1} A(j + 1, …, n ; j + 1, …, n) − a_{j,j+1} a_{j+1,j−1} A(j + 2, …, n ; j + 2, …, n),
and thus
A(2, …, n ; 1, …, ĵ, …, n)
  > a_{21} ⋯ a_{j−1,j−2} [ a_{j,j−1} F_{j−2}(c_n) A(j + 1, …, n ; j + 1, …, n)
      − a_{j,j+1} a_{j+1,j−1} F_{j−2}(c_n) A(j + 2, …, n ; j + 2, …, n)
      − (1/c_n) F_{j−3}(c_n) a_{j,j−1} A(j + 1, …, n ; j + 1, …, n) ]
  = a_{21} ⋯ a_{j−1,j−2} [ a_{j,j−1} ( F_{j−2}(c_n) − (1/c_n) F_{j−3}(c_n) ) A(j + 1, …, n ; j + 1, …, n)
      − a_{j,j+1} a_{j+1,j−1} F_{j−2}(c_n) A(j + 2, …, n ; j + 2, …, n) ].
Since

F_{j−1}(c) = F_{j−2}(c) − (1/c) F_{j−3}(c)

and

a_{j,j+1} a_{j+1,j−1} < (1/c_n²) a_{j,j−1} a_{j+1,j+1},

we have

A(2, …, n ; 1, …, ĵ, …, n)
  > a_{21} ⋯ a_{j−1,j−2} [ a_{j,j−1} F_{j−1}(c_n) A(j + 1, …, n ; j + 1, …, n)
      − (a_{j,j−1} a_{j+1,j+1}/c_n²) F_{j−2}(c_n) A(j + 2, …, n ; j + 2, …, n) ]
  = a_{21} ⋯ a_{j−1,j−2} a_{j,j−1} [ F_{j−1}(c_n) A(j + 1, …, n ; j + 1, …, n)
      − (a_{j+1,j+1}/c_n²) F_{j−2}(c_n) A(j + 2, …, n ; j + 2, …, n) ].
Thus, finally,

A(2, …, n ; 1, …, ĵ, …, n)
  > a_{21} ⋯ a_{j−1,j−2} a_{j,j−1} a_{j+1,j+1} [ F_{j−1}(c_n) ( A(j + 2, …, n ; j + 2, …, n) − (1/c_n) a_{j+2,j+2} A(j + 3, …, n ; j + 3, …, n) )
      − (1/c_n²) F_{j−2}(c_n) A(j + 2, …, n ; j + 2, …, n) ]
  = a_{21} ⋯ a_{j−1,j−2} a_{j,j−1} a_{j+1,j+1} [ ( F_{j−1}(c_n) − (1/c_n²) F_{j−2}(c_n) ) A(j + 2, …, n ; j + 2, …, n)
      − (1/c_n) F_{j−1}(c_n) a_{j+2,j+2} A(j + 3, …, n ; j + 3, …, n) ].      (2.16)
> a1j a21 · · · aj−1,j−2 aj,j−1 aj+1,j+1 aj+2,j+2 · · · ann Fn−1 (cn ).
and
A(1, …, n ; 1, …, n) > a_{11} A(2, …, n ; 2, …, n) − a_{12} a_{21} A(3, …, n ; 3, …, n).      (2.18)
From Proposition 2.26 (which assumed the validity of the result for (n −
1) × (n − 1) matrices)
A(1, …, n ; 1, …, n) = Σ_{k=1}^{n} (−1)^{k+1} a_{1k} A(2, …, n ; 1, …, k̂, …, n)
  > a_{11} A(2, …, n ; 2, …, n) − a_{12} A(2, 3, …, n ; 1, 3, …, n).
By the Hadamard Inequality
A(2, …, n ; 1, 3, …, n) < a_{21} A(3, …, n ; 3, …, n)

and thus

A(1, …, n ; 1, …, n) > a_{11} A(2, …, n ; 2, …, n) − a_{12} a_{21} A(3, …, n ; 3, …, n).
This proves (2.18).
As (2.12) holds for r = 2, . . . , n, by the induction hypothesis, and since we
have now proven (2.18), which completes (2.13) for m = n (again applying
the induction hypothesis), we have that the results of Lemmas 2.22–2.25
hold for m = n. From Lemma 2.25 (with m = n and s = n − 2)
A(1, …, n ; 1, …, n) > a_{11} ⋯ a_{n−2,n−2} [ F_{n−2}(c_n) A(n − 1, n ; n − 1, n) − (1/c_n) F_{n−3}(c_n) a_{n−1,n−1} a_{nn} ].

Now (see the proof of Lemma 2.24)

A(n − 1, n ; n − 1, n) = a_{n−1,n−1} a_{nn} − a_{n−1,n} a_{n,n−1} > (1 − 1/c_n) a_{n−1,n−1} a_{nn}.
Thus

F_{n−2}(c_n) A(n − 1, n ; n − 1, n) − (1/c_n) F_{n−3}(c_n) a_{n−1,n−1} a_{nn}
  > F_{n−2}(c_n) (1 − 1/c_n) a_{n−1,n−1} a_{nn} − (1/c_n) F_{n−3}(c_n) a_{n−1,n−1} a_{nn}
  = a_{n−1,n−1} a_{nn} [ F_{n−2}(c_n) (1 − 1/c_n) − (1/c_n) F_{n−3}(c_n) ].
2.7 Remarks
Fekete’s Lemma 2.1 can be found in Fekete, Pólya [1912]. (Different sections
of this paper were written by the different authors.) The fact that for
strictly totally positive matrices it suffices to consider only minors with
consecutive rows and consecutive columns implies that the remaining
minors are nonnegative combinations of the minors with consecutive rows
and consecutive columns. If we detail the proof of Fekete’s Lemma 2.1 we
see that the coefficients in this combination depend upon other minors of
the same size, plus lower order minors, and in a nonlinear fashion. Such
a formula is not valid for totally positive matrices in general. Theorem
2.3 was proved in Gasca, Peña [1992]. Their proof uses factorization, a
subject to be considered in Chapter 6 (see also Metelmann [1973]). The
two proofs given here are different. Proposition 2.5 first appears in Gasca,
Peña [1992]. It is reproved in Gladwell [1998]. Theorem 2.6 is due to
Whitney [1952]. Proposition 1.9 is also to be found in Whitney [1952].
Theorem 2.8 appears in Karlin [1968], p. 85. Proposition 2.9 can be found
in Shapiro, Shapiro [1995]. Theorem 2.10 is in Cryer [1973]. Theorem 2.12
was conjectured in Cryer [1973] and proved in Cryer [1976] by very different
methods. The proof given here can be found in Micchelli, Pinkus [1991].
The example of a 4 × 4 matrix, which is not totally positive but whose
minors with consecutive rows and consecutive columns are nonnegative, is
from Cryer [1973]. Theorem 2.13 was proved in Cryer [1976]. Theorem 2.14
was conjectured in Cryer [1973] and proved in Cryer [1976]. Proposition
2.15 is from Gasca, Peña [1993].
In Proposition 1.20, as a consequence of Theorem 1.19, we proved that
a nonsingular totally positive matrix A is almost strictly totally positive if
A(i + 1, …, i + p ; j + 1, …, j + p) > 0
we get two additional sign changes. Giving the second 0 the value 1 or −1
does not alter the number of sign changes.
The following elementary facts will be used. Their proof is left to the
reader.
S⁺(c) + S⁻(c̃) = n − 1,
Lemma 3.2 If

lim_{k→∞} c_k = c

then

lim inf_{k→∞} S⁻(c_k) ≥ S⁻(c)

and

lim sup_{k→∞} S⁺(c_k) ≤ S⁺(c).
S⁺(Ax) ≤ S⁻(x);

(b) if S⁺(Ax) = S⁻(x) and Ax ≠ 0, then the sign of the first (and last) component of Ax (if zero, the sign given in determining S⁺(Ax)) agrees with the sign of the first (and last) nonzero component of x.

Conversely, if (a) and (b) hold for some n × m matrix A and every x ∈ ℝ^m, x ≠ 0, then A is strictly totally positive.
Proof We first prove that if A is strictly totally positive then (a) and (b)
hold.
Assume x ≠ 0 and S⁻(x) = r. The components of x may therefore be divided into r + 1 blocks

Thus

Ax = Σ_{k=1}^{m} x_k a^k = Σ_{i=1}^{r+1} (−1)^{i+1} y_i.
As

A(i_1, …, i_p ; k_1, …, k_p) > 0

for all choices of 1 ≤ i_1 < ⋯ < i_p ≤ n, 1 ≤ k_1 < ⋯ < k_p ≤ m, and

|x_{k_1}| ⋯ |x_{k_p}| > 0

for some choice of admissible {k_1, …, k_p} in the above sum, it follows that Y is an n × (r + 1) strictly totally positive matrix.
We first prove that S⁺(Ax) ≤ S⁻(x). If n ≤ r + 1 there is nothing to prove. For if Ax ≠ 0, then

S⁺(Ax) ≤ n − 1 ≤ r = S⁻(x).

If n = r + 1 then Ax ≠ 0 since Ax = Y d where Y is a nonsingular (r + 1) × (r + 1) matrix and d ≠ 0. Thus, if Ax = 0 then n ≤ r and

S⁺(Ax) = n − 1 ≤ r = S⁻(x).
We shall therefore assume that n > r + 1. If S⁺(Ax) ≥ r + 1, then there exist indices 1 ≤ i_1 < ⋯ < i_{r+2} ≤ n such that

ε w_{i_j} (−1)^{j+1} ≥ 0,  j = 1, …, r + 2,

This equality cannot possibly hold since Y is strictly totally positive and w_{i_j} (−1)^{j+1} is of one fixed (weak) sign and some of the w_{i_j} are not zero. This contradiction implies that S⁺(Ax) ≤ r = S⁻(x).
It remains to prove (b), namely that if S⁺(Ax) = S⁻(x) and Ax ≠ 0, then the sign of the first (and last) component of Ax (if zero, the sign given in determining S⁺(Ax)) agrees with the sign of the first (and last) nonzero component of x. We continue our analysis using the above notation. We wish to prove that if S⁺(Ax) = S⁻(x) = r, Ax ≠ 0, and

ε w_{i_j} (−1)^{j+1} ≥ 0,  j = 1, …, r + 1,

where w = Y d = Ax, then ε = 1. Since every set of r + 1 rows of Y is linearly independent, we can solve for any component of d based on the values w_{i_1}, …, w_{i_{r+1}}. Thus
If

A(i_1, …, i_r ; j_1, …, j_r) = 0

then there exists a z̃ = (z_{j_1}, …, z_{j_r}) ∈ ℝ^r \{0} such that Bz̃ = 0. Let z ∈ ℝ^m be the vector whose j_k component is the above z_{j_k}, k = 1, …, r, and whose other components are zero. Then S⁻(z) ≤ r − 1 while S⁺(Az) ≥ r. This is a contradiction. Thus each r × r minor of A is nonzero.
We now let x = (xj1 , . . . , xjr ) ∈ Rr satisfy Bx = d where
S − (Ax) = S − (x) = r − 1
We now return to the equations Bx̃ = d and solve for x_{j_1} to obtain

0 < x_{j_1} = ( Σ_{ℓ=1}^{r} A(i_1, …, î_ℓ, …, i_r ; j_2, …, j_r) ) / A(i_1, …, i_r ; j_1, …, j_r).

From the induction hypothesis the numerator is positive. Thus

A(i_1, …, i_r ; j_1, …, j_r) > 0.
For totally positive matrices the result is slightly weaker, but of the same
form.
S − (Ax) ≤ S − (x);
(b) if S − (Ax) = S − (x) and Ax = 0, then the sign of the first (and last)
nonzero component of Ax agrees with the sign of the first (and last)
nonzero component of x.
Conversely, if (a) and (b) hold for some n×m matrix A and every x ∈ Rm ,
then A is totally positive.
S + (Ak x) ≤ S − (x).
S − (Ax) ≤ S − (x),
which is (a).
and thus
We now return to the equations Bx̃ = d and solve for x_{j_1} to obtain

0 < x_{j_1} = ( Σ_{ℓ=1}^{r} A(i_1, …, î_ℓ, …, i_r ; j_2, …, j_r) ) / A(i_1, …, i_r ; j_1, …, j_r).

From the induction hypothesis the numerator is positive. Thus

A(i_1, …, i_r ; j_1, …, j_r) > 0.
S + (c) + S − (Dc) = n − 1
alternately 1 and −1, then DBD ≤ DCD where ≤ is the usual entrywise
inequality.
Theorem 3.6 Assume B = (bij ) and C = (cij ) are strictly totally positive
n × n matrices satisfying
B ≤∗ C.
If A is an n × n matrix and
B ≤∗ A ≤∗ C,
then A is strictly totally positive.
That is,
(−1)i (Bx)i ≤ (−1)i (Ax)i = 0, i = 1, . . . , n. (3.2)
3.3 Remarks
According to Schoenberg [1930] the term variation diminishing
(variationsvermindernd in German) was coined by Pólya. To obtain the
variation diminishing property of a matrix, i.e., S + (Ax) ≤ S − (x) or
S − (Ax) ≤ S − (x) one does not need that the matrix A be strictly totally
positive or totally positive, respectively. If A is an n × m matrix and n ≥ m
then A satisfies S + (Ax) ≤ S − (x) for all x ∈ Rm \{0} if and only if A is
strictly sign regular (see e.g., Karlin [1968], p. 219, or Ando [1987], p. 29).
An n × m matrix A is strictly sign regular if all its minors of order k are of
one fixed sign εk ∈ {−1, 1} for each k = 1, . . . , m. If we permit minors to
also vanish, then A is said to be sign regular. An n × m matrix A of rank
m satisfies S − (Ax) ≤ S − (x) for all x ∈ Rm if and only if A is sign regular.
This was first proven in Schoenberg [1930] with this rank condition (see also
Motzkin [1936] and Schoenberg, Whitney [1951] for the more general case).
The inequality n ≥ m and the rank condition, or something similar, is truly
needed in these results. However, for strictly totally positive and totally
positive matrices we have property (b) in the statements of Theorems 3.3
and 3.4. This property obviates the need for the rank condition and the
inequality n ≥ m. This fact seems to be generally overlooked. In this regard
see Brown, Johnstone, MacGibbon [1981], Lemma A.1, and Fan [1966],
Theorems 5 and 6.
For a generalization of sorts of variation diminishing, see Carnicer,
Goodman, Peña [1995].
Garloff [1982] proved Theorem 3.6 by different methods. This theorem
cannot hold for arbitrary totally positive matrices B and C. (Let B and C
be matrices all of whose entries are 0 except for a single 1, suitably chosen.)
In Garloff [1982] it is conjectured that the theorem holds if B and C are
totally positive and nonsingular. In Garloff [1982] and Garloff [2003] the
conjecture is proved for specific subclasses of such matrices.
4
Examples
is, respectively, a totally positive or strictly totally positive matrix for every
choice of x1 < · · · < xn in X, y1 < · · · < ym in Y and all possible n and m.
If X and Y are finite sets then this is simply a definition of a totally positive
or strictly totally positive matrix. As we shall see in the next two sections
K(x, y) = exy is a strictly totally positive kernel on R × R, K(x, y) = xy
first prove that the determinant of A is nonzero, and then we prove that it
is positive.
Set

f(x) = Σ_{j=1}^{n} λ_j e^{x c_j}.
lim_{x→∞} f(x) e^{−x c_m} = λ_m ≠ 0.
We proved earlier that f has at most n−1 zeros. Since f (bn ) = 1 it therefore
follows that f is strictly positive for all x > bn−1 . As such
and therefore

det (e^{b_i c_j})_{i,j=1}^{n} > 0.
A second proof uses the fact that the sign of the determinant of A is
independent of the choice of the arbitrary increasing sequences (bi )ni=1 and
(cj )nj=1 , but may depend upon n. As such it suffices to explicitly calculate
the determinant of A for some specific choice of the (bi )ni=1 and (cj )nj=1 .
Let cj = j − 1 and set ebi = xi . Then we have
Note that with the above substitution e^{b_i} = x_i we also have that

A = (x_i^{c_j})_{i,j=1}^{n}

is strictly totally positive for any 0 < x_1 < ⋯ < x_n and c_1 < ⋯ < c_n.
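A quick exact-arithmetic check of this statement (our own sketch; integer exponents are chosen so that Fractions stay exact):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

xs = [Fraction(1, 2), Fraction(2), Fraction(3)]   # 0 < x_1 < x_2 < x_3
cs = [0, 2, 5]                                    # increasing exponents c_1 < c_2 < c_3
A = [[x ** c for c in cs] for x in xs]
assert all(det([[A[i][j] for j in cols] for i in rows]) > 0
           for k in range(1, 4)
           for rows in combinations(range(3), k)
           for cols in combinations(range(3), k))
```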
There is another method of proof of the total positivity of the (ebi cj ) (for
bi , cj > 0) based on the following idea. Let
G(x, y) = Σ_{k=0}^{m} λ_k x^k y^k,
need not be the case, and is not the case for G(x, y) = exy .) Then for any
0 < b1 < · · · < bn and 0 < c1 < · · · < cn the matrix
A = (G(bi , cj ))ni,j=1
and

G_2(x, y) = sinh xy = (e^{xy} − e^{−xy})/2 = Σ_{k=0}^{∞} x^{2k+1} y^{2k+1}/(2k + 1)!
is called a Cauchy matrix. For 0 < x1 < · · · < xn , 0 < y1 < · · · < yn it is
strictly totally positive.
We provide two proofs of this result.
Proof I There is an explicit formula for the determinant of the above matrix
and it is given by
det ( 1/(x_i + y_j) )_{i,j=1}^{n} = ∏_{1≤ℓ<k≤n} (x_k − x_ℓ) · ∏_{1≤ℓ<k≤n} (y_k − y_ℓ) / ∏_{i,j=1}^{n} (x_i + y_j).
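The formula is easy to verify in exact rational arithmetic (a sketch of ours, not part of the text):

```python
from fractions import Fraction

def det(M):
    # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def cauchy_det(xs, ys):
    return det([[Fraction(1, x + y) for y in ys] for x in xs])

def closed_form(xs, ys):
    n = len(xs)
    num = Fraction(1)
    for l in range(n):
        for k in range(l + 1, n):
            num *= (xs[k] - xs[l]) * (ys[k] - ys[l])
    den = Fraction(1)
    for x in xs:
        for y in ys:
            den *= x + y
    return num / den

xs, ys = [1, 2, 4], [1, 3, 5]      # increasing positive sequences
assert cauchy_det(xs, ys) == closed_form(xs, ys)
assert cauchy_det(xs, ys) > 0      # consistent with strict total positivity
```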
Here is a proof of this result. Obviously the result holds for n = 1. Now factor 1/(x_1 + y_1) out of the first row:

det ( 1/(x_i + y_j) )_{i,j=1}^{n} = (1/(x_1 + y_1)) det M,

where M has first row

( 1, (x_1 + y_1)/(x_1 + y_2), …, (x_1 + y_1)/(x_1 + y_n) )

and rows 2, …, n equal to those of the original matrix. Multiply the first row of M by 1/(x_i + y_1) and subtract it from the ith row, i = 2, …, n. The first column below the (1, 1) entry becomes zero, while for i, j ≥ 2 the (i, j) entry becomes

(x_i − x_1)(y_j − y_1) / ( (x_i + y_j)(x_i + y_1)(x_1 + y_j) ).

Factoring (x_i − x_1)/(x_i + y_1) out of the ith row and (y_j − y_1)/(x_1 + y_j) out of the jth column, for i, j = 2, …, n, and expanding along the first column gives

det ( 1/(x_i + y_j) )_{i,j=1}^{n}
  = ( ∏_{k=2}^{n} (x_k − x_1)(y_k − y_1) ) / ( (x_1 + y_1) ∏_{i=2}^{n} (x_i + y_1) ∏_{j=2}^{n} (x_1 + y_j) ) · det ( 1/(x_i + y_j) )_{i,j=2}^{n},

and the formula follows by induction.
Thus

Σ_{j=1}^{n} a_j ∏_{k≠j} (x + y_k) / ∏_{j=1}^{n} (x + y_j) = 0,  x = x_1, …, x_n,

and therefore

Σ_{j=1}^{n} a_j ∏_{k≠j} (x + y_k) = 0,  x = x_1, …, x_n.
are linearly independent polynomials since at the value −yr all but the rth
polynomial vanishes. Thus a1 = · · · = an = 0, which is a contradiction,
and A is nonsingular.
It remains to prove that each of these determinants is positive. By
continuity det A is of one sign for all choices of 0 < x1 < · · · < xn ,
0 < y1 < · · · < yn and a fixed n. For n = 1 it is positive by inspection. Let
us apply an induction argument.
Multiply the first row of

( 1/(x_i + y_j) )_{i,j=1}^{n}

by x_1 + y_1 to obtain the matrix whose first row is

( 1, (x_1 + y_1)/(x_1 + y_2), …, (x_1 + y_1)/(x_1 + y_n) )

and whose remaining rows are unchanged.
aij = bmin{i,j} , i, j = 1, . . . , n.
0 ≤ b1 < · · · < bn .
Remark Note that we can always assume that the {bi } are distinct. For
if bj = bj+1 then the jth and (j + 1)st rows and columns are identical.
0 ≤ b1 < · · · < bn .
If b_1 = 0 then the first row and the first column of A are identically zero and we may simply disregard them. As such let us assume that

0 < b_1 < ⋯ < b_n.

Consider
A(i_1, …, i_r ; j_1, …, j_r).
To see this, let us assume, without loss of generality, that i_ℓ ≥ j_{ℓ+1} for some ℓ ∈ {1, …, r − 1}. As j_1 < ⋯ < j_{ℓ+1} ≤ i_ℓ < i_{ℓ+1} < ⋯ < i_r we have

a_{i_s j_t} = b_{j_t}
where
c_k = b_k − b_{i_1} b_{j_1} / b_{α_1}.
For k > i1 , j1 , the {ck } is a strictly increasing sequence of positive numbers
and we can apply an induction hypothesis (the case r = 1 is trivial). Thus
A(i_1, …, i_r ; j_1, …, j_r) = b_{α_1} c_{α_2} ∏_{k=2}^{r−1} ( c_{α_{k+1}} − c_{β_k} ).
As

c_k = b_k − b_{i_1} b_{j_1}/b_{α_1} = b_k − b_{α_1} b_{β_1}/b_{α_1} = b_k − b_{β_1},

it therefore follows that

A(i_1, …, i_r ; j_1, …, j_r) = b_{α_1} (b_{α_2} − b_{β_1}) ∏_{k=2}^{r−1} ( b_{α_{k+1}} − b_{β_k} ) = b_{α_1} ∏_{k=1}^{r−1} ( b_{α_{k+1}} − b_{β_k} ).
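The closed form just derived is easy to test numerically (our sketch, with arbitrary increasing values b_i):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

bs = [Fraction(b) for b in (1, 3, 4, 9)]   # 0 < b_1 < ... < b_n
n = len(bs)
A = [[bs[min(i, j)] for j in range(n)] for i in range(n)]
# every minor is nonnegative, i.e. A is totally positive
assert all(det([[A[i][j] for j in c] for i in r]) >= 0
           for k in range(1, n + 1)
           for r in combinations(range(n), k)
           for c in combinations(range(n), k))
# a principal minor on consecutive indices matches b_i * prod(b_{m+1} - b_m)
pm = det([[A[i][j] for j in (1, 2, 3)] for i in (1, 2, 3)])
assert pm == bs[1] * (bs[2] - bs[1]) * (bs[3] - bs[2])
```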
The matrix A = (aij )ni,j=1 is called a Green’s matrix. We prove the following
result.
we have

A(i_1, …, i_r ; j_1, …, j_r) = d_{i_1} ⋯ d_{i_r} d_{j_1} ⋯ d_{j_r} B(i_1, …, i_r ; j_1, …, j_r).
From Proposition 4.1 B is totally positive if 0 < b1 < · · · < bn . By
continuity B is totally positive for 0 ≤ b1 ≤ · · · ≤ bn . Thus A is totally
positive. In fact, from the previous analysis it follows that
A(i_1, …, i_r ; j_1, …, j_r) = 0
unless
i1 , j1 < i2 , j2 < · · · < ir , jr
Theorem 4.3 A Jacobi matrix A is totally positive if and only if all its
off-diagonal elements {bi }, {ci } and all its principal minors containing
consecutive rows and columns are nonnegative.
There are two additional facts worth noting concerning totally positive
Jacobi matrices. In our explanation thereof we will use the following formula
based on the Laplace expansion by minors. Namely, for a Jacobi matrix A
and any i ∈ {1, . . . , n − 1} we have
A(1, …, n ; 1, …, n) = A(1, …, i ; 1, …, i) · A(i + 1, …, n ; i + 1, …, n)
  − a_{i,i+1} a_{i+1,i} A(1, …, i − 1 ; 1, …, i − 1) · A(i + 2, …, n ; i + 2, …, n).      (4.1)
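Identity (4.1) is straightforward to verify mechanically for any tridiagonal matrix; the entries below are arbitrary (our own test data):

```python
from fractions import Fraction

def det(M):
    if not M:
        return Fraction(1)        # empty minor is taken as 1 in (4.1)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

n = 5
a = [[Fraction(0)] * n for _ in range(n)]
diag, sup, sub = [5, 4, 6, 3, 7], [1, 2, 1, 2], [2, 1, 3, 1]
for i in range(n):
    a[i][i] = Fraction(diag[i])
for i in range(n - 1):
    a[i][i + 1], a[i + 1][i] = Fraction(sup[i]), Fraction(sub[i])

def principal(rows):
    rows = list(rows)
    return det([[a[i][j] for j in rows] for i in rows])

full = principal(range(n))
for i in range(n - 1):            # identity (4.1), one split point at a time
    rhs = (principal(range(i + 1)) * principal(range(i + 1, n))
           - a[i][i + 1] * a[i + 1][i] * principal(range(i)) * principal(range(i + 2, n)))
    assert full == rhs
```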
The first fact we will explain is that the inverse of a nonsingular
symmetric Jacobi matrix is a Green’s matrix. The reason for this is the
following. Assume A is an n × n symmetric Jacobi matrix. Let {ci } denote
the off-diagonal entries of A. Assume, for convenience, that each of the ci
is nonzero. Set B = A−1 . Then
b_ij = (−1)^{i+j} A(1, …, ĵ, …, n ; 1, …, î, …, n) / A(1, …, n ; 1, …, n).
and

e_j = (−1)^j A(j + 1, …, n ; j + 1, …, n) / ( c_j ⋯ c_{n−1} ).
Then it is readily verified that
and
d̃_i / ẽ_i < d̃_{i+1} / ẽ_{i+1},  i = 1, …, n − 1.
From the above definitions of the {d̃_i} and {ẽ_i} this latter inequality is equivalent to proving

c_i² A(1, …, i − 1 ; 1, …, i − 1) / A(i + 1, …, n ; i + 1, …, n) < A(1, …, i ; 1, …, i) / A(i + 2, …, n ; i + 2, …, n),  i = 1, …, n − 1,

i.e.,

0 < A(1, …, i ; 1, …, i) A(i + 1, …, n ; i + 1, …, n) − c_i² A(1, …, i − 1 ; 1, …, i − 1) A(i + 2, …, n ; i + 2, …, n).
As c_i = a_{i,i+1} = a_{i+1,i}, from (4.1) we see that the right-hand side equals

A(1, …, n ; 1, …, n),

which is strictly positive.
The other fact we wish to remark upon concerns necessary and sufficient conditions for when A is a nonsingular totally positive Jacobi matrix.
From Theorem 4.3 we have that A is totally positive if and only if all
its off-diagonal elements {bi }, {ci } and all its principal minors containing
consecutive rows and columns are nonnegative. This latter condition can be
further simplified. Namely a nonsingular Jacobi matrix A is totally positive
if and only if all its off-diagonal elements {bi }, {ci } are nonnegative and all
its principal minors composed of initial consecutive rows and columns are
strictly positive. This fact follows from Theorem 4.3 and from the identity
(4.1) with arbitrary r ∈ {2, . . . , n} and s ∈ {1, . . . , r − 1}. Namely
A(1, …, r ; 1, …, r) = A(1, …, s ; 1, …, s) · A(s + 1, …, r ; s + 1, …, r)
  − a_{s,s+1} a_{s+1,s} A(1, …, s − 1 ; 1, …, s − 1) · A(s + 2, …, r ; s + 2, …, r).
The details of the proof based on this formula are left to the reader.
A = (b_{i+j})_{i,j=0}^{n}
Theorem 4.4 The above Hankel matrix A is strictly totally positive if and
only if
Σ_{i,j=0}^{n} b_{i+j} x_i x_j   and   Σ_{i,j=0}^{n−1} b_{i+j+1} x_i x_j

are both strictly positive definite forms, i.e., the symmetric matrices

[ b_0   b_1     …  b_n
  b_1   b_2     …  b_{n+1}
  ⋮     ⋮          ⋮
  b_n   b_{n+1} …  b_{2n} ]

and

[ b_1   b_2     …  b_n
  b_2   b_3     …  b_{n+1}
  ⋮     ⋮          ⋮
  b_n   b_{n+1} …  b_{2n−1} ]

are strictly positive definite.
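A sketch (our own construction) that generates moment data from a three-point measure, for which both matrices are strictly positive definite, and then confirms by brute force that the resulting Hankel matrix is strictly totally positive:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def leading_minors_positive(M):
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, len(M) + 1))

# moments b_i = sum_k w_k t_k^i of a positive measure on (0, inf)
pts, wts = [1, 2, 4], [1, 1, 1]
n = 2
b = [sum(Fraction(w) * Fraction(t) ** i for t, w in zip(pts, wts))
     for i in range(2 * n + 1)]
H0 = [[b[i + j] for j in range(n + 1)] for i in range(n + 1)]       # (b_{i+j})
H1 = [[b[i + j + 1] for j in range(n)] for i in range(n)]           # (b_{i+j+1})
assert leading_minors_positive(H0) and leading_minors_positive(H1)  # both PD
# ... and then, as the theorem asserts, the Hankel matrix H0 is strictly TP:
assert all(det([[H0[i][j] for j in c] for i in r]) > 0
           for k in range(1, n + 2)
           for r in combinations(range(n + 1), k)
           for c in combinations(range(n + 1), k))
```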
Proof Recall that one equivalent definition of strict positive definiteness for
symmetric matrices is the strict positivity of the initial principal minors.
Thus the necessity of the above conditions is obvious since every symmetric
strictly totally positive matrix must be strictly positive definite. It remains
to prove the sufficiency.
Let us therefore assume that the two matrices in the statement of
the theorem are strictly positive definite. Let p(t) = Σ_{i=0}^{2n} a_i t^i be any (nontrivial) polynomial of degree at most 2n that is nonnegative on [0, ∞). We claim that we must have

Σ_{i=0}^{2n} a_i b_i > 0.
Σ_{j,k=0}^{n} μ_j μ_k b_{j+k} > 0

and

Σ_{j,k=0}^{n−1} σ_j σ_k b_{j+k+1} > 0

for every choice of (nontrivial) (μ_0, …, μ_n) and (σ_0, …, σ_{n−1}). These latter conditions are exactly the strict positive definiteness of our two matrices.
A minor digression is now in order. The study of moments was a central
motivating theme in the development of functional analysis. An important
precursor in this development is Stieltjes [1894–95]. In this paper Stieltjes
discussed the problem of necessary and sufficient conditions on a sequence
(b_i)_{i=0}^{∞} so that the (b_i)_{i=0}^{∞} can be represented in the form

b_i = ∫_0^∞ t^i dα(t),  i = 0, 1, 2, …
for some nonnegative Borel measure dα. (We have rephrased Stieltjes’
original problem as there were then no Borel measures. In fact the Stieltjes
integral was introduced in that paper explicitly to deal with this problem.)
We now know that this is essentially equivalent (via the Hahn–Banach
Theorem) to asking for the existence of a continuous nonnegative linear
functional L on C[0, ∞) satisfying
L(ti ) = bi , i = 0, 1, 2, . . .
b_i = Σ_{j=1}^{n+1} λ_j ξ_j^i,  i = 0, 1, …, 2n,      (4.2)
Set

X = [ ξ_1^0  …  ξ_{n+1}^0
      ⋮          ⋮
      ξ_1^n  …  ξ_{n+1}^n ]

and

Λ = [ λ_1  …  0
      ⋮    ⋱  ⋮
      0    …  λ_{n+1} ].

Thus

A = XΛX^T.
A(i_1, …, i_r ; j_1, …, j_r) = Σ_{1 ≤ k_1 < ⋯ < k_r ≤ n+1} λ_{k_1} ⋯ λ_{k_r} X(i_1, …, i_r ; k_1, …, k_r) X(j_1, …, j_r ; k_1, …, k_r) > 0.
…, a_{−2}, a_{−1}, a_0, a_1, a_2, …,

A = (a_{j−i})_{i,j=1}^{∞}.
a_0, a_1, a_2, …,

A = (a_{j−i})_{i,j=1}^{∞}
is totally positive if and only if Σ_{k=0}^{∞} a_k z^k has the form

e^{γz} ∏_{i=1}^{∞} (1 + α_i z) / ∏_{i=1}^{∞} (1 − β_i z)

where γ ≥ 0, α_i ≥ 0, β_i ≥ 0, and Σ_{i=1}^{∞} (α_i + β_i) < ∞. The proof of this
and the corresponding representation for the bi-infinite sequence will not be
presented here. However, we will prove the characterization in the simpler
case where the sequence has only a finite number of nonzero terms.
Theorem 4.5 Let a_0 > 0, a_n ≠ 0 and a_k = 0 for k < 0 and k > n. Then

A = (a_{j−i})_{i,j=1}^{∞}
i.e., 0 ≤ jk − ik ≤ n, k = 1, . . . , r.
Proof Let us first assume that p has n negative zeros. Then, assuming
a0 = 1, we have
p(x) = ∏_{r=1}^{n} (1 + α_r x)
This latter result follows from the fact that the generating function of a
we have

Σ_{k=0}^{n} a_k r^{k+ℓ} sin((k + ℓ)θ) = 0,  ℓ ∈ ℤ.
As n ≥ 1 and because
rank A[m] = m
there exists an x∗ ∈ Rn+m satisfying
A[m] x∗ = d
where
d = (1, −1, 1, …, (−1)^{m−1}) ∈ ℝ^m.
Thus for any ε
A[m] (x[m] + εx∗ ) = εd.
From the variation diminishing properties of the totally positive matrix
A[m] (Theorem 3.4) we have
m − 1 = S⁻(εd) ≤ S⁻(x[m] + εx*)

for any ε ≠ 0.
Take ε* ≠ 0 and sufficiently small such that

(x[m] + ε*x*)_k (x[m])_k > 0

if (x[m])_k = x_k ≠ 0. Note that if x_k = 0, 0 < k < n + m − 1, then since 0 < θ < π we have

x_{k−1} x_{k+1} < 0.
Thus

S⁻(x[m] + ε*x*) ≤ S⁻(x[m]) + 2

(because we do have x_0 ≠ 0 and can have x_{n+m−1} = 0).
m − 1 ≤ S − (x[m] ) + 2,
i.e.,
m − 3 ≤ S − (x[m] )
for all m.
Now, as is easily seen,

S⁻(x[m]) ≤ (n + m)θ / π
simply because sin α > 0 for 0 < α < π and sin α < 0 for π < α < 2π, and
the fact that when going from xk to xk+1 we alter the “argument” of xk
by θ. Thus
Thus

m − 3 ≤ (n + m)θ / π
p(x) = (1 + x)^n.
and for i ≥ 2

b_ij = Σ_k b_{i−1,k} a_{kj},  for all j,
is totally positive. This is equivalent to the fact that the infinite Toeplitz
matrix
D = ( 1/(i − j)! )_{i,j=0}^{∞}
is totally positive.
3. Applying Theorem 1.11 to the infinite totally positive Toeplitz matrix A = (a_ij)_{i,j=0}^{∞}, where

a_ij = 1 if i ≤ j, and a_ij = 0 if i > j,

it easily follows that the matrix

B = ( ( (i + j) choose j ) )_{i,j=0}^{∞}
is totally positive, which is equivalent to the fact that the Hankel matrix
D = ( (i + j)! )_{i,j=0}^{∞}
(1 + q n x)p(x) = (1 + x)p(qx)
we have
n
n
(1 + q n x) ak xk = (1 + x) ak q k xk .
k=0 k=0
k
Equating the coefficients of x we obtain for k = 1, . . . , n
implying
(1 − q n−k+1 )
ak = ak−1 q k−1 .
(1 − q k )
Now a0 = 1 so that
(1 − q n−k+1 ) · · · (1 − q n )
ak = q (k−1)+(k−2)+···+1 ,
(1 − q k ) · · · (1 − q)
which we write as
-n.
ak = q k(k−1)/2 .
k q
The
\[ \begin{bmatrix} n \\ k \end{bmatrix}_q = \frac{(1 - q^{n-k+1}) \cdots (1 - q^n)}{(1 - q^k) \cdots (1 - q)} \]
are termed q-binomial coefficients.
Thus the Toeplitz matrix with
\[ a_k = \begin{bmatrix} n \\ k \end{bmatrix}_q q^{k(k-1)/2}, \quad k = 0, 1, \ldots, n, \]
is totally positive.
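To make this concrete, the sketch below (illustrative only; the names are ad hoc) computes these coefficients for a sample n and q and checks by brute force that every minor of a finite section of the corresponding banded Toeplitz matrix (a_{j−i}) is nonnegative:

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

def qbinom(n, k, q):
    """[n choose k]_q = (1-q^{n-k+1})...(1-q^n) / ((1-q)...(1-q^k))."""
    num = den = Fraction(1)
    for i in range(1, k + 1):
        num *= 1 - q ** (n - k + i)
        den *= 1 - q ** i
    return num / den

n, q = 4, Fraction(1, 2)
a = {k: qbinom(n, k, q) * q ** (k * (k - 1) // 2) for k in range(n + 1)}
N = 6  # size of the Toeplitz section (a_{j-i})
T = [[a.get(j - i, Fraction(0)) for j in range(N)] for i in range(N)]
assert all(
    det([[T[i][j] for j in cj] for i in ci]) >= 0
    for r in range(1, N + 1)
    for ci in combinations(range(N), r)
    for cj in combinations(range(N), r)
)
```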
P = (p_{ij})_{i,j=1}^{∞}
where
\[ p_{ij} = a_{Mj-i}, \quad i, j = 1, 2, \ldots. \]
Thus
\[ P = \begin{pmatrix}
a_{M-1} & a_{2M-1} & a_{3M-1} & \cdots \\
a_{M-2} & a_{2M-2} & a_{3M-2} & \cdots \\
\vdots & \vdots & \vdots \\
a_0 & a_M & a_{2M} & \cdots \\
0 & a_{M-1} & a_{2M-1} & \cdots \\
0 & a_{M-2} & a_{2M-2} & \cdots \\
\vdots & \vdots & \vdots \\
0 & a_0 & a_M & \cdots \\
0 & 0 & a_{M-1} & \cdots \\
\vdots & \vdots & \vdots
\end{pmatrix}. \]
The matrix P is not a Toeplitz matrix, but it resembles a Toeplitz matrix.
It also contains M Toeplitz submatrices. Simply take all columns and every
M th row.
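This structure is easy to exhibit in code. The sketch below (the function name and the sample coefficients are ad hoc illustrations) builds a finite section of P from p_{ij} = a_{Mj−i} and checks that the rows i, i+M, i+2M, . . . form Toeplitz submatrices:

```python
def gh_section(a, M, rows, cols):
    """Finite section of the generalized Hurwitz matrix p_{ij} = a_{Mj-i}
    (1-based i, j); coefficients outside 0..n are taken to be zero."""
    n = len(a) - 1
    def entry(i, j):
        k = M * j - i
        return a[k] if 0 <= k <= n else 0
    return [[entry(i, j) for j in range(1, cols + 1)]
            for i in range(1, rows + 1)]

a = [1, 3, 7, 5, 2]            # a_0, ..., a_4, an arbitrary sample
P = gh_section(a, M=2, rows=6, cols=4)
assert P[0] == [3, 5, 0, 0]    # first row: a_{M-1}, a_{2M-1}, ...
assert P[1] == [1, 7, 2, 0]    # second row: a_{M-2}, a_{2M-2}, ...

# Rows i, i+M, i+2M, ... form a Toeplitz matrix: the entry in row i+Mt,
# column j is a_{M(j-t)-i}, which depends only on j - t.
for i in (1, 2):
    sub = P[i - 1::2]
    assert all(sub[r][c] == sub[r + 1][c + 1]
               for r in range(len(sub) - 1) for c in range(len(sub[0]) - 1))
```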
Somewhat surprisingly we prove that there are a finite number of
determinantal conditions that imply that this matrix is totally positive.
i.e., 0 ≤ Mj_k − i_k ≤ n.
Before we prove this theorem note that for r > (n+k−1)/(M−1) the last column of
\[ P\begin{pmatrix} k, k+1, \ldots, k+r-1 \\ 1, 2, \ldots, r \end{pmatrix} \]
vanishes identically. That is, the conditions of (4.3), together with a_0 > 0,
are exactly the demand that all possible nonvanishing minors composed
from consecutive rows and initial consecutive columns be strictly positive.
In addition, let k ∈ {1, . . . , M − 1} be such that
\[ r = \frac{n+k-1}{M-1} \]
is an integer. Such a k exists and r ≥ 2. For this k and r
\[ P\begin{pmatrix} k, \ldots, k+r-1 \\ 1, \ldots, r \end{pmatrix} = a_n\, P\begin{pmatrix} k, \ldots, k+r-2 \\ 1, \ldots, r-1 \end{pmatrix}. \]
\[ b_{Mj+k-1} = a_{Mj+k}, \quad k = 1, \ldots, M-1, \; j = 0, 1, \ldots, \]
and
\[ b_{Mj-1} = a_{Mj} - \frac{a_0}{a_1}\, a_{Mj+1}, \quad j = 1, 2, \ldots, \]
where applicable, to define b_0, . . . , b_{n−1}. Set b_k = 0 for k < 0 and k > n − 1.
Define
Q = (q_{ij})_{i,j=1}^{∞}
where
\[ q_{ij} = b_{Mj-i}, \quad i, j = 1, 2, \ldots. \]
Recall that
\[ P = \begin{pmatrix}
a_{M-1} & a_{2M-1} & a_{3M-1} & \cdots \\
a_{M-2} & a_{2M-2} & a_{3M-2} & \cdots \\
\vdots & \vdots & \vdots \\
a_0 & a_M & a_{2M} & \cdots \\
0 & a_{M-1} & a_{2M-1} & \cdots \\
0 & a_{M-2} & a_{2M-2} & \cdots \\
\vdots & \vdots & \vdots \\
0 & a_0 & a_M & \cdots \\
0 & 0 & a_{M-1} & \cdots \\
\vdots & \vdots & \vdots
\end{pmatrix}
\quad \text{and} \quad
Q = \begin{pmatrix}
b_{M-1} & b_{2M-1} & b_{3M-1} & \cdots \\
b_{M-2} & b_{2M-2} & b_{3M-2} & \cdots \\
\vdots & \vdots & \vdots \\
b_0 & b_M & b_{2M} & \cdots \\
0 & b_{M-1} & b_{2M-1} & \cdots \\
0 & b_{M-2} & b_{2M-2} & \cdots \\
\vdots & \vdots & \vdots \\
0 & b_0 & b_M & \cdots \\
0 & 0 & b_{M-1} & \cdots \\
\vdots & \vdots & \vdots
\end{pmatrix}. \]
\[ Q\begin{pmatrix} k, k+1, \ldots, k+r-1 \\ 1, 2, \ldots, r \end{pmatrix} = P\begin{pmatrix} k-1, k, \ldots, k+r-2 \\ 1, 2, \ldots, r \end{pmatrix} > 0 \]
for r = 1, \ldots, \frac{n+(k-1)-1}{M-1} = \frac{(n-1)+k-1}{M-1}. We must also consider
\[ Q\begin{pmatrix} 1, 2, \ldots, r \\ 1, 2, \ldots, r \end{pmatrix}. \]
Now, for 2 ≤ r ≤ (n+M−2)/(M−1),
\[ 0 < P\begin{pmatrix} M-1, M, \ldots, M+r-2 \\ 1, 2, \ldots, r \end{pmatrix}
= \begin{vmatrix} a_1 & a_{M+1} & a_{2M+1} & \cdots \\ a_0 & a_M & a_{2M} & \cdots \\ 0 & a_{M-1} & a_{2M-1} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{vmatrix} \]
\[ = a_1 \begin{vmatrix} a_M & a_{2M} & \cdots \\ a_{M-1} & a_{2M-1} & \cdots \\ \vdots & \vdots \end{vmatrix}
 - a_0 \begin{vmatrix} a_{M+1} & a_{2M+1} & \cdots \\ a_{M-1} & a_{2M-1} & \cdots \\ \vdots & \vdots \end{vmatrix} \]
\[ = a_1 \begin{vmatrix} a_M - \frac{a_0}{a_1} a_{M+1} & a_{2M} - \frac{a_0}{a_1} a_{2M+1} & \cdots \\ a_{M-1} & a_{2M-1} & \cdots \\ \vdots & \vdots \end{vmatrix}
= a_1\, Q\begin{pmatrix} 1, 2, \ldots, r-1 \\ 1, 2, \ldots, r-1 \end{pmatrix}. \]
As a_1 > 0 we have
\[ Q\begin{pmatrix} 1, 2, \ldots, r-1 \\ 1, 2, \ldots, r-1 \end{pmatrix} > 0 \]
for 1 ≤ r − 1 ≤ (n−1)/(M−1), which is exactly what we wished to prove.
We now proceed to the proof of our theorem.
Thus a_1, a_2, . . . , a_M > 0. The other claims follow easily from this fact and
the geometric form of P .
We now apply induction assuming the result is valid for n − 1, where
n > M . We start with the a0 , . . . , an and P as in the statement of Theorem
4.6. From Lemma 4.7, the b0 , . . . , bn−1 and the Q therein satisfy (4.3) for
n − 1, and therefore Q is totally positive, bk > 0, k = 0, . . . , n − 1, and
\[ Q\begin{pmatrix} i_1, \ldots, i_r \\ j_1, \ldots, j_r \end{pmatrix} > 0 \]
if and only if
\[ q_{i_k j_k} = b_{M j_k - i_k} > 0, \quad k = 1, \ldots, r, \]
i.e., 0 ≤ Mj_k − i_k ≤ n − 1.
From Lemma 4.7 we have
\[ a_{Mj+k} = b_{Mj+k-1}, \quad k = 1, \ldots, M-1, \]
\[ a_{Mj} = b_{Mj-1} + c\, b_{Mj}, \quad k = 0, \tag{4.4} \]
is a Hurwitz polynomial, i.e., has all its zeros in the open left-hand plane
Re (z) < 0. If p has all its zeros in the closed left-hand plane Re (z) ≤ 0,
then P is totally positive. Unfortunately the converse is not true.
Then
\[ b_{2k} = a_k, \quad k = 0, 1, \ldots, n, \]
and
\[ b_{2k+1} = (n-k)\, a_k, \quad k = 0, 1, \ldots, n-1. \]
Let A be the infinite Toeplitz matrix based on p, i.e.,
\[ A = \begin{pmatrix}
a_0 & a_1 & a_2 & \cdots & a_n & 0 & 0 & \cdots \\
0 & a_0 & a_1 & \cdots & a_{n-1} & a_n & 0 & \cdots \\
0 & 0 & a_0 & \cdots & a_{n-2} & a_{n-1} & a_n & \cdots \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix} \]
and B be the infinite Hurwitz matrix based on q, i.e.,
\[ B = \begin{pmatrix}
b_1 & b_3 & b_5 & \cdots \\
b_0 & b_2 & b_4 & \cdots \\
0 & b_1 & b_3 & \cdots \\
0 & b_0 & b_2 & \cdots \\
0 & 0 & b_1 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix} = \begin{pmatrix}
n a_0 & (n-1) a_1 & (n-2) a_2 & \cdots \\
a_0 & a_1 & a_2 & \cdots \\
0 & n a_0 & (n-1) a_1 & \cdots \\
0 & a_0 & a_1 & \cdots \\
0 & 0 & n a_0 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}. \]
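The relation between A and B can be checked mechanically. In the sketch below (variable names are ad hoc, and p(x) = (1+x)^3 is just a sample), the b_k follow the displayed relations, and A turns out to be exactly the even-numbered rows of B:

```python
# With b_{2k} = a_k and b_{2k+1} = (n-k) a_k, build finite sections of the
# Toeplitz matrix A = (a_{j-i}) and the Hurwitz matrix B = (b_{2j-i}),
# and check that A sits inside B as its even-numbered rows.
n = 3
a = [1, 3, 3, 1]                       # p(x) = (1+x)^3
b = {}
for k in range(n + 1):
    b[2 * k] = a[k]                    # b_0, b_2, b_4, b_6
    b[2 * k + 1] = (n - k) * a[k]      # b_1 = n a_0, b_3 = (n-1) a_1, ...

cols = 6
A = [[a[j - i] if 0 <= j - i <= n else 0 for j in range(cols)]
     for i in range(3)]
B = [[b.get(2 * j - i, 0) for j in range(1, cols + 1)]
     for i in range(1, 7)]
assert B[0] == [3, 6, 3, 0, 0, 0]      # (n a_0, (n-1) a_1, (n-2) a_2, ...)
assert B[1::2] == A                    # rows 2, 4, 6 of B reproduce A
```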
Proof From Theorem 4.5, conditions (i) and (iii) are equivalent.
Furthermore, if B is totally positive, then since A is a submatrix of B
it follows that A is totally positive.
Assume p has n negative zeros. Perturb these zeros so that they are
simple and negative. Then from Theorem 4.8 q is a Hurwitz polynomial
and thus the associated Hurwitz matrix B is totally positive. Perturb back.
has determinant equal to −1. (As strictly totally positive matrices are
dense in the class of totally positive matrices, it follows that there also exist
strictly totally positive matrices whose Hadamard product is not totally
positive.)
However, there are numerous subclasses of totally positive matrices that
are closed under the operation of taking Hadamard products. In this section
we detail some of them.
A ◦ B = (aij bij )
Remark The same proof shows that the Hadamard product of a totally
positive Jacobi matrix and an arbitrary totally positive matrix is a totally
positive Jacobi matrix.
where a_0 > 0, a_n ≠ 0 and a_k = 0 for k < 0 and k > n, and showed that A
is totally positive if and only if
\[ p(x) = \sum_{k=0}^{n} a_k x^k \]
has r negative zeros, where r = min{n, m}. Thus the Hadamard product of
two totally positive Toeplitz matrices of the above form is again a totally
positive Toeplitz matrix.
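At the level of the generating polynomials, this closure amounts to the statement that the coefficient-wise product of two polynomials with only negative zeros again has only negative zeros. A degree-2 sketch (ad hoc names; the sample polynomials are illustrative only):

```python
import math

def roots_quadratic(c0, c1, c2):
    """Real roots of c0 + c1 x + c2 x^2 (assumes nonnegative discriminant)."""
    disc = c1 * c1 - 4 * c0 * c2
    s = math.sqrt(disc)
    return sorted([(-c1 - s) / (2 * c2), (-c1 + s) / (2 * c2)])

p = (1, 2, 1)        # (1 + x)^2, zeros -1, -1
q = (1, 3, 2)        # (1 + x)(1 + 2x), zeros -1, -1/2
h = tuple(x * y for x, y in zip(p, q))   # coefficient-wise product: (1, 6, 2)
assert all(r < 0 for r in roots_quadratic(*p))
assert all(r < 0 for r in roots_quadratic(*q))
assert all(r < 0 for r in roots_quadratic(*h))
```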
is a Hurwitz polynomial if and only if all its zeros lie in the open left-hand
plane Re (z) < 0. Assuming a0 > 0, we have that p is a Hurwitz polynomial
if and only if the matrix
\[ P = \begin{pmatrix}
a_1 & a_3 & a_5 & \cdots \\
a_0 & a_2 & a_4 & \cdots \\
0 & a_1 & a_3 & \cdots \\
0 & a_0 & a_2 & \cdots \\
0 & 0 & a_1 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix} \]
satisfies
\[ P\begin{pmatrix} 1, \ldots, r \\ 1, \ldots, r \end{pmatrix} > 0, \quad r = 1, \ldots, n, \]
which implies that P is also totally positive (see Theorem 4.6). It is known
that if
\[ p(x) = \sum_{k=0}^{n} a_k x^k \]
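As a numerical illustration (a sketch with ad hoc names, not the book's notation), the leading-minor criterion above can be evaluated directly from the coefficients:

```python
from fractions import Fraction

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

def hurwitz_minors(a):
    """Leading principal minors P(1..r; 1..r), r = 1..n, of the Hurwitz
    matrix p_{ij} = a_{2j-i} built from a_0, ..., a_n."""
    n = len(a) - 1
    P = [[a[2 * j - i] if 0 <= 2 * j - i <= n else 0
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    return [det([row[:r] for row in P[:r]]) for r in range(1, n + 1)]

# p(x) = (1+x)^3: all zeros at -1, in the open left half-plane
assert all(m > 0 for m in hurwitz_minors([1, 3, 3, 1]))
# p(x) = (1+x)(1+x^2): zeros at +-i on the imaginary axis, so not Hurwitz
assert not all(m > 0 for m in hurwitz_minors([1, 1, 1, 1]))
```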
4.11 Remarks
The main reference books on totally positive kernels and Chebyshev
systems are Gantmacher, Krein [1950], Karlin [1968], Karlin, Studden
[1966] and Krein, Nudel’man [1977]. The examples of Sections 4.2 through
4.6 can all be found, in more or less detail, in Gantmacher, Krein [1937].
The examples of Sections 4.2 and 4.3 can also be found in Pólya, Szegő
[1976] (originally published in 1925). Karlin [1968] contains many, many
additional examples of totally positive and strictly totally positive kernels
and matrices. See also Carlson, Gustafson [1983] for some further examples.
For a discussion of the theory of moments as presented in the
proof of Theorem 4.4, see Karlin, Studden [1966], Chap. V, Krein,
Nudel’man [1977], Chap. V and Shohat, Tamarkin [1943]. The form of
the representation of the generating functions for totally positive infinite
Toeplitz matrices was conjectured by Schoenberg in 1951. It was partially
solved in Aissen, Schoenberg, Whitney [1952]. The final proof of the
representations is to be found in Edrei [1952] and Edrei [1953]. The
sequences in the totally positive Toeplitz matrices are also called Pólya
Frequency Sequences; see Karlin [1968], Chap. 8, and references therein. For
many more examples of sequences satisfying Theorem 4.5, see e.g., Brenti
[1989], Brenti [1995], Pitman [1997], and Wang, Yeh [2005]. Theorem 4.6
is from Goodman, Sun [2004]. The fact that in the special case M = 2
(Hurwitz polynomials) the initial minors being strictly positive implies the
total positivity of the full matrix was first proved by Asner [1970] and then
reproved in a more transparent form by Kemperman [1982]. An example
of a polynomial p whose zeros are not all in the closed left-hand plane
Re (z) ≤ 0, but where the associated Hurwitz matrix P is totally positive
(but without the strict positivity of the appropriate principal minors),
can be found in Asner [1970]. The proof of Theorem 4.6 follows the lines
of Kemperman’s proof. See also Holtz [2003] and references therein for
another approach to the Hurwitz polynomials and the total positivity of the
associated matrix. Material on Hurwitz matrices may be found in various
texts, e.g., Gantmacher [1953], Marden [1966], and Rahman, Schmeisser
[2002]. Theorem 4.8 is a special case of a more general result; see e.g.,
Gantmacher [1953], Theorem 13, Chap. XV of the English translation.
Theorem 4.10 and Proposition 4.11 can be found, for example, in Rahman,
Schmeisser [2002], Cor. 10.6.13 (the notation is somewhat different).
Applying the criteria of Theorem 2.16 we obtain relatively simple
sufficient conditions for when certain of the matrices considered in this
chapter are totally positive. For example, the Hankel matrix
A = (b_{i+j})_{i,j=0}^{n}
is strictly totally positive if b_k > 0, k = 0, 1, . . ., and
\[ b_{k-1}\, b_{k+1} > 4 \cos^2\!\Big( \frac{\pi}{n+2} \Big)\, b_k^2, \quad k = 1, \ldots, 2n-1, \]
while the Toeplitz matrix
A = (a_{j-i})_{i,j=1}^{∞}
has all real negative zeros. This was already proved in Kurtz [1992].
For a general discussion and survey of Hadamard products see Horn,
Johnson [1991], Chap. 5, and the many references therein. It seems that
Markham [1970] was the first to consider Hadamard products of totally
positive matrices. He showed that the class of totally positive matrices
is not closed under the operation of Hadamard products and proved
Proposition 4.12. The Schur Product Theorem concerning the Hadamard
product of positive definite matrices can be found, for example, in Horn,
Johnson [1991], p. 309. Maló’s Theorem is from Maló [1895]. In Wagner
[1992] the result concerning the Hadamard product of Toeplitz matrices
is extended to a subclass of the totally positive Toeplitz matrices with an
infinite number of nonzero coefficients. The closure of Hurwitz matrices
under the Hadamard product is in Garloff, Wagner [1996a]. Its proof is far
beyond the scope of this monograph. Some additional results can be found
in Garloff, Wagner [1996b] and Crans, Fallat, Johnson [2001].
5
Eigenvalues and eigenvectors
positive. However we wish to prove a bit more; namely, that A^{n−1} is strictly
totally positive. From Proposition 2.5 it suffices to prove that
\[ A^{n-1}\begin{pmatrix} 1, \ldots, r \\ n-r+1, \ldots, n \end{pmatrix} > 0 \]
and
\[ A^{n-1}\begin{pmatrix} n-r+1, \ldots, n \\ 1, \ldots, r \end{pmatrix} > 0 \]
for r = 1, . . . , n. We will only consider the former inequalities. Note that
\[ A^{n-1}\begin{pmatrix} i_1, \ldots, i_r \\ j_1, \ldots, j_r \end{pmatrix} = \sum A\begin{pmatrix} i_1, \ldots, i_r \\ s_1^1, \ldots, s_r^1 \end{pmatrix} A\begin{pmatrix} s_1^1, \ldots, s_r^1 \\ s_1^2, \ldots, s_r^2 \end{pmatrix} \cdots A\begin{pmatrix} s_1^{n-2}, \ldots, s_r^{n-2} \\ j_1, \ldots, j_r \end{pmatrix} \]
where the sum is over all 1 ≤ s_1^ℓ < · · · < s_r^ℓ ≤ n, ℓ = 1, . . . , n − 2. As each of
these terms is nonnegative, for strict positivity to hold it suffices to prove
that at least one of the products is strictly positive. For example,
\[ A^{n-1}\begin{pmatrix} 1 \\ n \end{pmatrix} \ge A\begin{pmatrix} 1 \\ 2 \end{pmatrix} A\begin{pmatrix} 2 \\ 3 \end{pmatrix} \cdots A\begin{pmatrix} n-1 \\ n \end{pmatrix} > 0, \]
while
\[ A^{n-1}\begin{pmatrix} 1, 2 \\ n-1, n \end{pmatrix} \ge A\begin{pmatrix} 1, 2 \\ 1, 3 \end{pmatrix} A\begin{pmatrix} 1, 3 \\ 2, 4 \end{pmatrix} \cdots A\begin{pmatrix} n-3, n-1 \\ n-2, n \end{pmatrix} A\begin{pmatrix} n-2, n \\ n-1, n \end{pmatrix} > 0. \]
where
\[ \max\{s_k^\ell, s_k^{\ell+1}\} < \min\{s_{k+1}^\ell, s_{k+1}^{\ell+1}\}. \]
Thus
\[ A\begin{pmatrix} s_1^\ell, \ldots, s_r^\ell \\ s_1^{\ell+1}, \ldots, s_r^{\ell+1} \end{pmatrix} > 0. \]
Proof There are numerous proofs of this result in the literature, and
an almost uncountable number of generalizations. For completeness, we
present a proof that seems to be one of the simpler and more transparent.
For vectors x and y in Rn , we write x ≥ y if xi ≥ yi , i = 1, . . . , n, and
Ax∗ = λ∗ x∗ .
Now x∗ ≥ 0 (x∗ ≠ 0) and thus Ax∗ > 0, whence x∗ > 0. We have found a
positive eigenvalue with a strictly positive eigenvector.
Let λ be any other eigenvalue of A. Then
Ay = λy
where |y| = (|y1 |, . . . , |yn |). From the definition of λ∗ , it follows that |λ| ≤
λ∗ . If |λ| = λ∗ , then we must have λ∗ |y| = A|y| (since otherwise λ∗ (A|y|) <
A(A|y|), contradicting the definition of λ∗ ). Thus, for each i = 1, . . . , n,
\[ |\lambda|\, |y_i| = \Big| \sum_{j=1}^{n} a_{ij} y_j \Big| = \sum_{j=1}^{n} a_{ij} |y_j|. \]
This implies the existence of a γ ∈ C, |γ| = 1, such that γyj = |yj | for
each j = 1, . . . , n. Thus, we may in fact assume that if |λ| = λ∗ , then
λ = |λ| = λ∗ , and for the associated eigenvector y, we have y ≥ 0. Two
consequences of these facts are the following: for every eigenvalue λ, λ ≠ λ∗,
we have |λ| < λ∗ ; the geometric multiplicity of the eigenvalue λ∗ is exactly
1. The latter holds because, if not, we can easily construct a real eigenvector
associated with λ∗ that is not of one sign, and this contradicts the above
analysis.
It remains to prove that the eigenvalue λ∗ is of algebraic multiplicity 1.
Assume not. There then exists a vector y∗ (linearly independent of x∗ )
such that
Ay∗ = λ∗ y∗ + αx∗
Proof of Theorem 5.3 It suffices to prove the theorem for strictly totally
positive matrices. For if A has eigenvalues λ1 , . . . , λn , listed to their
algebraic multiplicity, then Ak has the eigenvalues λk1 , . . . , λkn . If we show
that λk1 > · · · > λkn > 0 for all k sufficiently large, then obviously we must
have λ1 > · · · > λn > 0. In addition, if Au = λu then Ak u = λk u, so that
if A has n distinct eigenvalues, then the eigenvectors of A and Ak are one
and the same.
Let λ1 , . . . , λn denote the eigenvalues of A listed to their algebraic
multiplicity, and assume that |λ1 | ≥ · · · ≥ |λn | ≥ 0. As A is strictly
totally positive we have that A[p] is a matrix all of whose entries are strictly
positive. From the Perron and Kronecker Theorems applied to A[p] we have
From the Perron and Kronecker Theorems it also follows that by a suitable
normalization of the associated eigenvectors u1 , . . . , un , we may assume
that
u1 ∧ · · · ∧ up > 0
for p = 1, 2, . . . , n.
It remains for us to prove the sign change properties of the eigenvectors
u^1, . . . , u^n. Assume
\[ S^+\Big( \sum_{i=q}^{p} c_i u^i \Big) \ge p \]
for some choice of nontrivial (c_q, . . . , c_p). There then exist j_0 < · · · < j_p and an ε ∈ {−1, 1} such that
\[ \varepsilon (-1)^k \Big( \sum_{i=q}^{p} c_i u^i \Big)_{j_k} \ge 0, \quad k = 0, \ldots, p. \]
Set u^0 = \sum_{i=q}^{p} c_i u^i. Let u^i = (u_{1,i}, \ldots, u_{n,i}), i = 0, 1, \ldots, n, and U denote
the matrix with columns u^0, u^1, \ldots, u^n. Thus
\[ U\begin{pmatrix} j_0, j_1, \ldots, j_p \\ 0, 1, \ldots, p \end{pmatrix} = \det\big( u_{j_k, i} \big)_{i,k=0}^{p} = 0 \]
(u, vj ) = 0 , j = 1, . . . , q − 1 .
ε(−1)k uj ≥ 0 , ik−1 + 1 ≤ j ≤ ik , k = 1, . . . , r + 1,
(u, v) = 0 ,
and therefore
\[ S^-\Big( \sum_{i=q}^{p} c_i u^i \Big) \ge q - 1, \]
\[ S^-(u) + S^+(\tilde u) = n - 1 \]
implying
\[ S^-\Big( \sum_{i=q}^{p} c_i u^i \Big) \ge q - 1. \]
We now present a very different proof of Theorem 5.3. The proof of the
sign change properties of the eigenvectors will be based on the variation
diminishing properties of strictly totally positive matrices. We start by
providing an alternative proof of the fact that A has n simple and positive
eigenvalues. We also simultaneously prove that the eigenvalues of the two
principal submatrices obtained by deleting from A either the first row and
column, or the last row and column, strictly interlace the eigenvalues of A.
for k = 1 and k = n.
Another Proof of Theorem 5.3 We assume that λ > µ > 0 are eigenvalues
of A with associated eigenvectors x and y. Then from Theorem 3.3 we have
for all (α, β) = (0, 0)
S + (λαx + µβy) = S + (A(αx + βy)) ≤ S − (αx + βy) . (5.4)
S + (uk ) = S − (uk ) = k − 1, k = 1, . . . , n.
while
\[ \lim_{m \to \infty} S^-\Big( \sum_{i=q}^{p} \Big( \frac{\lambda_p}{\lambda_i} \Big)^{\!m} c_i u^i \Big) \le S^+(u^p) = p - 1. \]
Thus
\[ q - 1 \le S^-\Big( \sum_{i=q}^{p} c_i u^i \Big) \le S^+\Big( \sum_{i=q}^{p} c_i u^i \Big) \le p - 1, \]
S − (uk ) = S + (uk ) = k − 1 ,
the function uk (t) has exactly k − 1 zeros that are each strict sign changes.
Since
k − 1 ≤ S − (αuk + βuk+1 ) ≤ S + (αuk + βuk+1 ) ≤ k ,
for every choice of (α, β) = (0, 0), it can be shown that the k − 1 zeros of
uk (t) strictly interlace the k zeros of uk+1 (t).
Since strictly totally positive matrices are dense in the set of totally
positive matrices, see Theorem 2.6, and because of the continuity of the
eigenvalues as functions of the matrix entries, we have the following:
Corollary 5.5 The eigenvalues of totally positive matrices are both real
and nonnegative.
for each k. However, the strict interlacing property does not hold for all
principal submatrices. As an example, consider
\[ A = \begin{pmatrix} 1 & 1 & 0 \\ 2 & 2 & 1 \\ 2 & 2 & 1 \end{pmatrix}. \]
The matrix A is totally positive but not strictly totally positive. However,
since strictly totally positive matrices are dense in the class of totally
positive matrices, there are strictly totally positive matrices whose
eigenvalues (and the eigenvalues of its principal submatrices) are arbitrarily
close to those of A. The eigenvalues of A are 2 + √3, 2 − √3, and 0. The
eigenvalues of the principal submatrix of A obtained by deleting the second
row and column are 1 and 1. Obviously we do not have interlacing.
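These eigenvalues are easy to confirm from the characteristic polynomial; the short check below (ad hoc names, added for illustration) verifies the failure of strict interlacing numerically:

```python
import math

A = [[1, 1, 0],
     [2, 2, 1],
     [2, 2, 1]]
# Characteristic polynomial: t^3 - tr(A) t^2 + e2 t - det(A), where e2 is
# the sum of the 2x2 principal minors. Here tr = 4, e2 = 1, det = 0, so
# the eigenvalues are 0 and the roots of t^2 - 4t + 1, i.e. 2 +/- sqrt(3).
tr = sum(A[i][i] for i in range(3))
e2 = sum(A[i][i] * A[j][j] - A[i][j] * A[j][i]
         for i in range(3) for j in range(i + 1, 3))
detA = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
        - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
        + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
assert (tr, e2, detA) == (4, 1, 0)
lams = sorted([0.0, 2 - math.sqrt(3), 2 + math.sqrt(3)])

# Deleting row and column 2 leaves [[1, 0], [2, 1]], with eigenvalues 1, 1.
# Strict interlacing lam_1 > mu_1 > lam_2 > mu_2 > lam_3 fails, since
# lam_2 = 2 - sqrt(3) < 1 = mu_2.
mu = [1.0, 1.0]
assert not (lams[2] > mu[0] > lams[1] > mu[1] > lams[0])
```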
Nevertheless there is a weaker interlacing that does hold and it is the
following.
(where λ0 = λ1 ).
and set
\[ v' = (v_1, \ldots, v_{k-1}, 0, v_{k+1}, \ldots, v_n). \]
Thus
\[ v'' = (v_1, \ldots, v_{k-1}, w_k, v_{k+1}, \ldots, v_n), \]
for some easily calculated w_k. From Theorem 3.3 and the properties of
v, v′ and v″ we have,
Thus
\[ S^+(v'') = S^-(v'') = j - 1. \]
\[ Au = \lambda_{j+1} u. \]
\[ S^+(u) = S^-(u) = j. \]
(Here we have used the fact that cv′_k + u_k = u_k and w_k are of the same
sign.) For all c sufficiently small and positive, e.g., such that c|v_i| < |u_i| if
u_i ≠ 0, we have
\[ S^-(cv' + u) \ge S^-(u) = j. \]
From the definition of c∗ and continuity properties of S + and S − , it
follows that
\[ S^-(c^* v' + u) \le j - 1 \]
and
\[ S^+(cv' + u) \ge j \]
for all c ≤ c∗ .
Now
\[ A(c^* v' + u) = c^* \mu_j^{(k)} v'' + \lambda_{j+1} u. \]
Let
\[ \bar c = \frac{c^* \mu_j^{(k)}}{\lambda_{j+1}} > 0. \]
From Theorem 3.3,
\[ S^+(\bar c\, v'' + u) \le S^-(c^* v' + u) \le j - 1. \]
Since c̄v′ + u and c̄v″ + u differ only in their kth coordinates, where both
are positive, it follows that
\[ S^+(\bar c\, v' + u) = S^+(\bar c\, v'' + u) \le j - 1. \]
This implies, from the above, that c̄ > c∗. Thus μ_j^{(k)} > λ_{j+1}.
The proof of the reverse inequality
\[ \lambda_{j-1} > \mu_j^{(k)} \]
for any k, and j = 2, . . . , n − 1, is essentially the same. Let x be a real
eigenvector of A with eigenvalue λj−1 . That is,
Ax = λj−1 x .
Note that
S + (x) = S − (x) = j − 2 .
The vectors v, v′, and v″ are as defined above. If w_k = 0 then μ_j^{(k)} = λ_j <
λ_{j−1}. If x_k = 0, then λ_{j−1} = μ_{j−1}^{(k)} > μ_j^{(k)}. We may thus assume that w_k
and xk are both positive. We set
\[ a^* = \inf\{a : a > 0,\; S^-(ax + v') \le j - 2\}, \]
As strictly totally positive matrices are dense in the set of totally positive
matrices and because of the continuity of the eigenvalues as functions of
the matrix entries, we have the following.
λ1 ≥ · · · ≥ λn ≥ 0
(where λ0 = λ1 ).
5.4 Eigenvectors
In this section we study in considerably more detail the eigenvector
structure of a strictly totally positive or oscillation matrix. We have from
Theorem 5.3 that if uk is a real eigenvector associated with the kth
eigenvalue in magnitude of an oscillation matrix, then
\[ q - 1 \le S^-\Big( \sum_{i=q}^{p} c_i u^i \Big) \le S^+\Big( \sum_{i=q}^{p} c_i u^i \Big) \le p - 1 \]
But then
\[ S^+\Big( \sum_{k=1}^{p} c_k u^k \Big) \ge p \]
and
\[ \Big( \sum_{k=1}^{p} c_k u^k \Big)_r = \alpha_r. \]
For any r ∈ {1, . . . , n}\{i1 , . . . , ip−1 }, assuming ij−1 < r < ij , we have
\[ c_p = \frac{ \alpha_r\, U\begin{pmatrix} i_1, \ldots, i_{p-1} \\ 1, \ldots, p-1 \end{pmatrix} }{ (-1)^{j+p}\, U\begin{pmatrix} i_1, \ldots, i_{j-1}, r, i_j, \ldots, i_{p-1} \\ 1, \ldots, p \end{pmatrix} }. \]
As sgn α_r = (−1)^j ε for some ε ∈ {−1, 1} it follows, using the induction
hypothesis, that for some ε_p ∈ {−1, 1}
\[ \varepsilon_p\, U\begin{pmatrix} i_1, \ldots, i_{j-1}, r, i_j, \ldots, i_{p-1} \\ 1, \ldots, p \end{pmatrix} > 0 \]
for all r between ij−1 and ij and all j = 1, . . . , p, i.e., for all
r ∈ {1, . . . , n}\{i1 , . . . , ip−1 }
is equivalent to
\[ S^+\Big( \sum_{k=q}^{n} \tilde c_k \tilde u^k \Big) \le n - q, \]
\[ \tilde\delta_q\, U\begin{pmatrix} i_q, \ldots, i_n \\ q, \ldots, n \end{pmatrix} (-1)^{\sum_{k=q}^{n} i_k + 1} > 0, \]
implying
\[ \delta_q\, U\begin{pmatrix} i_q, \ldots, i_n \\ q, \ldots, n \end{pmatrix} (-1)^{\sum_{k=q}^{n} i_k} > 0 \]
for some δ_q ∈ {−1, 1} and all 1 ≤ i_q < · · · < i_n ≤ n.
Multiplying any one of the u^k by a nonzero constant we may and will
assume that
\[ U\begin{pmatrix} 1, \ldots, n \\ 1, \ldots, n \end{pmatrix} = (-1)^{n(n+1)/2}, \]
\[ (-1)^{n(n+1)/2} = \det U = \sum_{1 \le i_1 < \cdots < i_p \le n} (-1)^{\sum_{k=p+1}^{n} (i_k + k)}\, U\begin{pmatrix} i_1, \ldots, i_p \\ 1, \ldots, p \end{pmatrix} U\begin{pmatrix} i_{p+1}, \ldots, i_n \\ p+1, \ldots, n \end{pmatrix}, \]
where the i_1 < · · · < i_p and i_{p+1} < · · · < i_n are complementary indices in
{1, . . . , n}. As
\[ \varepsilon_p\, U\begin{pmatrix} i_1, \ldots, i_p \\ 1, \ldots, p \end{pmatrix} > 0 \]
and
\[ \delta_{p+1}\, U\begin{pmatrix} i_{p+1}, \ldots, i_n \\ p+1, \ldots, n \end{pmatrix} (-1)^{\sum_{k=p+1}^{n} i_k} > 0 \]
For any λ1 > · · · > λn > 0 let Λ be the n × n diagonal matrix with
diagonal entries {λ1 , . . . , λn }. Consider
A = U ΛV.
From (5.8) we see that we can choose the λ1 > · · · > λn > 0 so that A is
strictly totally positive. (From the above construction we see that for any
choice of λ1 > · · · > λn > 0 and A = U ΛV , we have that Am = U Λm V is
a strictly totally positive matrix for m sufficiently large.)
Proof The {λk } are locally differentiable functions of the aij since they are
the distinct roots of a polynomial of degree n whose coefficients depend
algebraically upon the aij .
Similarly v^k A = λ_k v^k, i.e.,
\[ \sum_{r=1}^{n} v_{kr} a_{rs} = \lambda_k v_{ks}, \quad s = 1, \ldots, n. \]
Now
\[ A = U \Lambda V, \]
i.e.,
\[ \lambda_k = \sum_{r,s=1}^{n} v_{kr} a_{rs} u_{sk}, \quad k = 1, \ldots, n. \]
Thus
\[ \frac{\partial \lambda_k}{\partial a_{ij}} = v_{ki} u_{jk} + \sum_{r,s=1}^{n} \frac{\partial v_{kr}}{\partial a_{ij}}\, a_{rs} u_{sk} + \sum_{r,s=1}^{n} v_{kr} a_{rs} \frac{\partial u_{sk}}{\partial a_{ij}} \]
\[ = v_{ki} u_{jk} + \sum_{r=1}^{n} \frac{\partial v_{kr}}{\partial a_{ij}} \Big( \sum_{s=1}^{n} a_{rs} u_{sk} \Big) + \sum_{s=1}^{n} \Big( \sum_{r=1}^{n} v_{kr} a_{rs} \Big) \frac{\partial u_{sk}}{\partial a_{ij}} \]
\[ = v_{ki} u_{jk} + \lambda_k \sum_{r=1}^{n} \frac{\partial v_{kr}}{\partial a_{ij}}\, u_{rk} + \lambda_k \sum_{s=1}^{n} v_{ks} \frac{\partial u_{sk}}{\partial a_{ij}} \]
\[ = v_{ki} u_{jk} + \lambda_k \frac{\partial}{\partial a_{ij}} \Big( \sum_{r=1}^{n} v_{kr} u_{rk} \Big) = v_{ki} u_{jk}, \]
since \sum_{r=1}^{n} v_{kr} u_{rk} \equiv 1 as V = U^{-1}.
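A quick way to convince oneself of this identity is a finite-difference check. The 2 × 2 sketch below is illustrative only (names are ad hoc): λ_1 is available in closed form, and the right and left eigenvectors are normalized so that v·u = 1, as required by V = U^{−1}:

```python
import math

def lam1(a, b, c, d):
    """Largest eigenvalue of [[a, b], [c, d]] (real for positive entries)."""
    tr, det = a + d, a * d - b * c
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2

a, b, c, d = 2.0, 1.0, 1.0, 3.0
l1 = lam1(a, b, c, d)
u1 = (b, l1 - a)                       # right eigenvector: (A - l1 I) u1 = 0
v1 = (l1 - d, b)                       # left eigenvector:  v1 (A - l1 I) = 0
s = v1[0] * u1[0] + v1[1] * u1[1]
v1 = (v1[0] / s, v1[1] / s)            # normalize so that v1 . u1 = 1

h = 1e-6
grads = {
    (0, 0): (lam1(a + h, b, c, d) - l1) / h,
    (0, 1): (lam1(a, b + h, c, d) - l1) / h,
    (1, 0): (lam1(a, b, c + h, d) - l1) / h,
    (1, 1): (lam1(a, b, c, d + h) - l1) / h,
}
for (i, j), fd in grads.items():
    assert abs(fd - v1[i] * u1[j]) < 1e-4   # d lam1 / d a_ij = v_1i u_j1
```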
The inequality
∂λ1
>0
∂aij
is also a consequence of Perron’s Theorem. Another consequence of Perron’s
Theorem is the following.
Let
\[ A_c = \begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & & \vdots \\
c a_{k1} & \cdots & c a_{kn} \\
\vdots & & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{pmatrix}, \]
i.e., A_c is obtained from A by multiplying the elements of the kth row by
c.
Proof All three inequalities are consequences of Perron’s Theorem. That is,
we use the characterization
\[ \lambda_1 = \sup\{\lambda : Ax \ge \lambda x \text{ for some } x \ge 0,\; x \ne 0\}, \]
valid for any strictly positive matrix. The first inequality is a direct
application thereof (and is also in Theorem 5.10). The second inequality
follows from an application of Perron's Theorem to the oscillation matrix
DA_c^{-1}D. For a proof of the third inequality, apply Perron's Theorem to the
rth compound matrix A_{[r]}.
It is not necessarily true that
\[ \frac{\partial \lambda_r(c)}{\partial c} > 0 \]
for all r. The 3 × 3 totally positive matrix
\[ A_c = \begin{pmatrix} 1 & 1 & 0 \\ 2c & 2c & c \\ 2 & 2 & 1 \end{pmatrix} \]
is such that λ2 (c) is a strictly decreasing function of c, c > 0.
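This is easy to verify: in A_c the second row is c times the third, so 0 is always an eigenvalue, and (computing the trace and the sum of the 2 × 2 principal minors) the two nonzero eigenvalues satisfy t² − (2 + 2c)t + 1 = 0. A short check (ad hoc names, added for illustration):

```python
import math

def lam2(c):
    """Middle eigenvalue of A_c = [[1,1,0],[2c,2c,c],[2,2,1]]: row 2 is c
    times row 3, so the nonzero eigenvalues solve t^2 - (2+2c) t + 1 = 0."""
    tr = 2 + 2 * c
    return (tr - math.sqrt(tr * tr - 4)) / 2

vals = [lam2(c) for c in (0.5, 1.0, 2.0, 4.0)]
assert all(x > y for x, y in zip(vals, vals[1:]))   # strictly decreasing in c
```

Since the two nonzero eigenvalues have product 1 and the larger one grows with the trace, the middle eigenvalue λ_2(c) = 1/λ_1(c) must decrease.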
5.6 Remarks
The study of the spectral properties of integral equations with totally
positive continuous kernels substantially predates the study of the spectral
properties of totally positive matrices. In 1918 O. D. Kellogg proved
the main spectral properties in the case of a symmetric continuous
totally positive kernel (see Kellogg [1918]). Kellogg was an American
mathematician who obtained his doctorate from Göttingen in 1903 under
the supervision of Hilbert. He is best known for his work on potential
theory, and his book thereon Kellogg [1929] has been reprinted many times
since. Both Krein and Gantmacher were very much influenced by Kellogg’s
work on this subject. An announcement of the parallel result for continuous
non-symmetric totally positive kernels is in Gantmacher [1936].
The main results concerning spectral properties of totally positive
matrices are in Gantmacher, Krein [1937]. An announcement appears
in Gantmacher, Krein [1935]. Oscillation matrices were introduced in
Gantmacher, Krein [1937], and Theorem 5.2 can be found therein on p. 454
(see also Gantmacher, Krein [1950], Chap. II, §7, Karlin [1968], Chap. 2,
§9, and Ando [1987]). The proof of Theorem 5.3, based on the Perron
and Kronecker theorems, is from Gantmacher, Krein [1937], Theorems 10
and 14. The same proof also appears in Gantmacher [1953], Gantmacher,
Krein [1950], and Ando [1987]. The strict interlacing of the eigenvalues
of a strictly totally positive matrix with the eigenvalues of the principal
submatrices obtained by deleting the first (or last) row and column was
first proved in Gantmacher, Krein [1937]. The proof as given in Proposition
5.4 is from Koteljanskii [1955]. The second proof of Theorem 5.3, based
on variation diminishing, is from Elias, Pinkus [2002], which contains
generalizations of Theorem 5.3 to the nonlinear setting (see also Pinkus
[1985a], [1985b] and Buslaev [1990]). For more on the eigenvalues and
eigenvectors of oscillation matrices see Eveson [1996], Karlin [1965], Karlin
[1972] and Karlin, Pinkus [1974]. Fallat, Gekhtman, Johnson [2000] consider
the possible spectral structure of a subclass of totally positive matrices.
Theorem 5.6 is due to Pinkus [1998]. The example of a totally positive
matrix where interlacing of the eigenvalues of the minor fails is from Karlin,
Pinkus [1974]. Friedland [1985] had previously proved that μ_1^{(k)} ≥ λ_2 and
μ_{n−1}^{(k)} ≥ λ_n for k = 1, . . . , n − 1. Parts of Theorem 5.8, in a slightly different
form, appear in Gantmacher, Krein [1937], Theorem 16. Theorem 5.10 is
also from Gantmacher, Krein [1937], Theorems 18 and 19.
A survey of the spectral properties of totally positive kernels and
matrices, with extensive references, can be found in Pinkus [1996].
6
Factorizations of totally positive matrices
\[ A = LU \]
\[ A = L^1 \cdots L^{n-1} U^1 \cdots U^{n-1} \]
6.1 Preliminaries
We start with two definitions to set our notation.
Ei,j (α)
as the n × n unit diagonal matrix with α in the (i, j) position and zeroes
in the other off-diagonal entries.
We recall that right multiplication of A by Ei,j (α) is the operation
whereby α times the ith column of A is added to the jth column of A
with the other columns left unchanged. Left multiplication of A by Ei,j (α)
is the operation whereby α times the jth row of A is added to the ith row
of A with the other rows left unchanged. In addition, as is readily verified,
(Eij (α))−1 = Eij (−α).
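A small sketch of these operations (ad hoc helper names; 1-based (i, j) as in the text):

```python
def E(n, i, j, alpha):
    """n x n identity with alpha placed in (1-based) position (i, j)."""
    m = [[float(r == c) for c in range(n)] for r in range(n)]
    m[i - 1][j - 1] = alpha
    return m

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

A = [[1.0, 2.0], [3.0, 4.0]]
# Left multiplication by E_{2,1}(5): 5 times row 1 is added to row 2.
assert matmul(E(2, 2, 1, 5.0), A) == [[1.0, 2.0], [8.0, 14.0]]
# Right multiplication by E_{2,1}(5): 5 times column 2 is added to column 1.
assert matmul(A, E(2, 2, 1, 5.0)) == [[11.0, 2.0], [23.0, 4.0]]
# (E_{i,j}(alpha))^{-1} = E_{i,j}(-alpha):
assert matmul(E(2, 2, 1, 5.0), E(2, 2, 1, -5.0)) == E(2, 2, 1, 0.0)
```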
If
\[ R^k = E_{n-k+1,n-k}(\alpha_{n-k}) \cdots E_{n,n-1}(\alpha_{n-1}), \]
αj · · · αi−1 ,
A = LDU
where
Proof Assume L has the form (6.1) where each Rk satisfies (6.2), with
the appropriate αk,j strictly positive. As L is the product of unit diagonal
lower totally positive matrices, it is necessarily a unit diagonal lower totally
positive matrix. But we want to prove that it is a lower strictly totally
positive matrix. We recall (Theorem 2.8) that to prove that L is lower
strictly totally positive it suffices to prove that
\[ L\begin{pmatrix} i+1, \ldots, i+r \\ 1, \ldots, r \end{pmatrix} > 0, \quad i = 0, 1, \ldots, n-r, \; r = 1, \ldots, n. \]
(Since L is unit diagonal, the equations in the case i = 0 are of no interest.)
In fact (see Proposition 2.9), we need only prove the strict positivity of the
above inequalities for i = n − r. However, in the proof of the converse
direction we make use of all these inequalities, and so we also prove them
here.
Recall that
\[ R^k = \begin{pmatrix}
1 \\
0 & 1 \\
 & \ddots & \ddots \\
 & & \alpha_{k,n-k} & 1 \\
 & & & \ddots & \ddots \\
 & & & & \alpha_{k,n-1} & 1
\end{pmatrix}, \]
a unit diagonal matrix whose subdiagonal entries are 0, \ldots, 0, \alpha_{k,n-k}, \ldots, \alpha_{k,n-1}.
As
\[ (R^{n-i})_{i+1,i} > 0, \quad i = 1, \ldots, n-1, \]
we have
\[ (L)_{i+1,1} > 0, \quad i = 1, \ldots, n-1. \]
Now consider
\[ L\begin{pmatrix} i+1, i+2 \\ 1, 2 \end{pmatrix}. \]
Again, from the form of the R^1, . . . , R^{n−1}, it follows that
\[ L\begin{pmatrix} i+1, i+2 \\ 1, 2 \end{pmatrix} = R^{n-i}\begin{pmatrix} i+1, i+2 \\ i, i+1 \end{pmatrix} \cdots R^{n-1}\begin{pmatrix} 2, 3 \\ 1, 2 \end{pmatrix}, \quad i = 1, \ldots, n-2. \]
By assumption,
\[ R^{n-i}\begin{pmatrix} i+1, i+2 \\ i, i+1 \end{pmatrix} = (R^{n-i})_{i+1,i}\, (R^{n-i})_{i+2,i+1} > 0, \quad i = 1, \ldots, n-2. \]
Thus
\[ L\begin{pmatrix} i+1, i+2 \\ 1, 2 \end{pmatrix} > 0, \quad i = 1, \ldots, n-2. \]
We progress in this fashion to obtain
\[ L\begin{pmatrix} i+1, \ldots, i+r \\ 1, \ldots, r \end{pmatrix} > 0, \quad i = 1, \ldots, n-r, \; r = 1, \ldots, n-1. \]
Thus L is lower strictly totally positive.
Let us now assume that L is lower strictly totally positive. For k =
1, . . . , n − 1 we define the unit diagonal, 1-banded matrix R^k with
\[ (R^k)_{i+1,i} = 0, \quad i = 1, \ldots, n-k-1, \]
\[ (R^k)_{i+1,i} > 0, \quad i = n-k, \ldots, n-1, \]
and the fact that (L)_{i+1,1} > 0 define for us the strictly positive values
(R^{n−i})_{i+1,i}, i = 1, . . . , n − 1.
The equations
\[ L\begin{pmatrix} i+1, i+2 \\ 1, 2 \end{pmatrix} = R^{n-i}\begin{pmatrix} i+1, i+2 \\ i, i+1 \end{pmatrix} \cdots R^{n-1}\begin{pmatrix} 2, 3 \\ 1, 2 \end{pmatrix}, \quad i = 1, \ldots, n-2, \]
and the strict positivity of the left-hand side of the equations define for us
the strictly positive values
\[ R^{n-i}\begin{pmatrix} i+1, i+2 \\ i, i+1 \end{pmatrix} = (R^{n-i})_{i+1,i}\, (R^{n-i})_{i+2,i+1}, \quad i = 1, \ldots, n-2 \]
(since we set (R^{n−i})_{i+2,i} = 0), which in turn define for us the strictly
positive values
\[ (R^{n-i})_{i+2,i+1}, \quad i = 1, \ldots, n-2. \]
We continue in this fashion to define all the desired (Rk )i+1,i .
We claim that
\[ L = R^1 \cdots R^{n-1}. \]
That is, we have chosen the values of the Rk so that the formulæ for the
\[ L\begin{pmatrix} i+1, \ldots, i+r \\ 1, \ldots, r \end{pmatrix}, \quad i = 1, \ldots, n-r, \; r = 1, \ldots, n-1, \]
are valid. But do these formulæ necessarily guarantee that
\[ L = R^1 \cdots R^{n-1}? \]
The answer is, of course, yes. The above set of data, and their strict
positivity, determine explicitly all the entries of L. That is, the first column
of L is given (r = 1). Given the first column (and the fact that its entries
are not zero) and
\[ L\begin{pmatrix} i+1, i+2 \\ 1, 2 \end{pmatrix}, \quad i = 1, \ldots, n-2, \]
determines the entries of the second column of L, etc. Thus we do have
\[ L = R^1 \cdots R^{n-1}. \]
This proves the theorem.
Remark From the above follow very explicit formulæ for the (Rk )i+1,i ,
i = n − k, . . . , n − 1, k = 1, . . . , n − 1, in terms of minors of L. But we shall
not list them here.
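For n = 3 the factorization and the formulae behind it can be written out completely. The sketch below (ad hoc names; the unit diagonal lower strictly totally positive L is just a sample) recovers the subdiagonal values from the entries of L and verifies L = R^1 R^2:

```python
# R^1 has its single nonzero subdiagonal entry in position (3,2) and R^2
# has them in positions (2,1) and (3,2). Multiplying out a unit diagonal
# L = R^1 R^2 gives l21 = b, l31 = a*b, l32 = a + c, which determines
# a, b, c explicitly from L.
def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

L = [[1.0, 0.0, 0.0],
     [1.0, 1.0, 0.0],
     [1.0, 2.0, 1.0]]
b = L[1][0]
a = L[2][0] / b
c = L[2][1] - a          # c > 0 because the minor L(2,3; 1,2) is positive
assert a > 0 and b > 0 and c > 0
R1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, a, 1.0]]
R2 = [[1.0, 0.0, 0.0], [b, 1.0, 0.0], [0.0, c, 1.0]]
assert matmul(R1, R2) == L
```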
Let us now consider how we can construct a different factorization for L.
where
\[ C^k = E_{2,1}(\beta_{k,1}) \cdots E_{k+1,k}(\beta_{k,k}) \tag{6.4} \]
Proof There are two simple ways of justifying this corollary. The first is to
take the proof of Theorem 6.6 and make the simple obvious modifications
therein.
The second explanation is more satisfying. Recall from Propositions 1.2
and 1.3 that a matrix A is totally positive (strictly totally positive) if and
only if AT is totally positive (strictly totally positive) if and only if QAQ
is totally positive (strictly totally positive), where
\[ Q = \begin{pmatrix}
0 & \cdots & 0 & 1 \\
0 & \cdots & 1 & 0 \\
\vdots & & \vdots & \vdots \\
1 & \cdots & 0 & 0
\end{pmatrix}. \]
Note that QAQ is the matrix A with the order of its rows and columns
reversed. Given a lower triangular matrix L = (ℓ_{ij})_{i,j=1}^{n}, then
\[ (QLQ)^T = R^1 \cdots R^{n-1} \]
Set
C k = (QRk Q)T , k = 1, . . . , n − 1.
\[ S^k = R^k \cdots R^{n-1}, \quad k = 1, \ldots, n-1. \]
Thus
\[ L = R^1 \cdots R^{k-1} S^k. \]
To explain what is happening recall that (Ei,j (α))−1 = Ei,j (−α) and
Thus
\[ (R^k)^{-1} = E_{n,n-1}(-\alpha_{k,n-1}) \cdots E_{n-k+1,n-k}(-\alpha_{k,n-k}), \]
and
\[ S^{k+1} = E_{n,n-1}(-\alpha_{k,n-1}) \cdots E_{n-k+1,n-k}(-\alpha_{k,n-k})\, S^k. \]
\[ T^k = C^{n-1} \cdots C^k, \quad k = 1, \ldots, n-1. \]
Thus
\[ L = T^k C^{k-1} \cdots C^1 \]
and
\[ T^k = T^{k+1} C^k. \]
js ≤ is ≤ js + (n − k), s = 1, . . . , r,
and the exact same inequalities hold for T k . Thus the S k and T k have
exactly the same total positivity structure.
We can state this procedure formally as follows.
\[ P = R^k S = T C^k \]
We sequentially did the same thing at each step in Theorem 6.6 and 6.7,
i.e., we decomposed
\[ S^k = R^k S^{k+1}, \qquad T^k = T^{k+1} C^k, \]
and T^k via
\[ T^k = R^k T^{k+1} \]
\[ A = L^1 \cdots L^{n-1} D U^1 \cdots U^{n-1} \]
We should also recall that every strictly totally positive (totally positive
matrix) also has a factorization of the form A = U DL, where U is upper
strictly totally positive (totally positive) and L is lower strictly totally
positive (totally positive). Thus there are many, many factorizations of A
as a product of 2n − 1 1-banded totally positive matrices.
\[ L = R^1 \cdots R^{n-1} \]
\[ L = C^{n-1} \cdots C^1 \]
where the R_m^k are nonsingular 1-banded lower totally positive matrices with
(R_m^k)_{i+1,i} = 0, i = 1, . . . , n − k − 1. By pre- and postmultiplying by positive
diagonal matrices we can factor L_m in the form
\[ L_m = \tilde D_m \tilde R_m^1 \cdots \tilde R_m^{n-1} \]
\[ L = R^1 \cdots R^{n-1} \]
\[ L = C^{n-1} \cdots C^1 \]
As was seen in Section 6.2, these are just two of at least 2^{n−1} (generally
distinct) possible factorizations. That is, we can mix and match the factors
where for each k we choose to obtain a C k or an Rk .
Note that if L is nonsingular then each of the Rk (or C k ) must be
nonsingular. Thus if the L of Theorem 6.10 is unit diagonal, then we can
also make this same demand on the Rk (or C k ).
Paralleling Theorem 6.9, we can record the following.
\[ A = L^1 \cdots L^{n-1} U^1 \cdots U^{n-1} \]
where the Lk are 1-banded, unit diagonal, lower totally positive matrices,
and the U k are 1-banded, unit diagonal, upper totally positive matrices.
6.4 Remarks
The first factorization theorem for totally positive matrices is often called
the Whitney Theorem. The name was given by Loewner [1955], although
there is no evidence to indicate that Whitney [1951] ever thought of her
result in this way. Whitney only talks about a “reduction theorem” and
proves what we have listed as Proposition 1.9. Loewner gave this name to
his factorization theorem because it is proved by repeated applications of
Proposition 1.9. In fact the Whitney/Loewner method of factorization is
not, from our perspective, a good factorization. Rather than eliminating
successive off-diagonals, i.e., going from a (k + 1)-banded lower (upper)
totally positive matrix to a k-banded lower (upper) totally positive matrix
via multiplication by one 1-banded totally positive matrix, the procedure
of Whitney/Loewner eliminates (makes zero) successive columns in a lower
totally positive matrix. This implies the need for n(n − 1)/2 1-banded
factors when factoring a lower totally positive matrix (rather than just
the n − 1 factors). In the Whitney Theorem one assumes nonsingularity of
the matrix. Cryer [1976] obtained 1-banded factorizations for an arbitrary
totally positive matrix. But again the number of such factors can be very
large. In fact Metelmann [1973] (a paper that went almost unnoticed) was the
first to state a factorization theorem for totally positive matrices with the
correct number of factors. In Cavaretta, Dahmen, Micchelli, Smith [1981]
can be found a factorization theorem (with the correct number of factors)
for infinite strictly m-banded totally positive matrices. This was generalized
in de Boor, Pinkus [1982] to an arbitrary totally positive nonsingular matrix
(banded if the matrix is infinite). All three of these last mentioned papers,
i.e., Metelmann [1973], Cavaretta, Dahmen, Micchelli, Smith [1981] and
de Boor, Pinkus [1982], also consider totally positive matrices A = (a_ij)
that are (r, s)-banded, i.e., for which a_ij ≠ 0 only if −s ≤ i − j ≤ r,
and in this case show that one can make do with r + s factors (aside
from the diagonal matrix). This also follows from the method of proof of
the results in this chapter and Proposition 6.8. The proofs in some of the
papers mentioned above tend to be rather different. The method of proof
of Theorem 6.6 as presented here is basically to be found in Micchelli,
Pinkus [1991]. In that paper a factorization is also given for n × m totally
positive matrices. If n > m then aside from m − 1 lower and m − 1 upper
triangular m × m 1-banded totally positive factors, there are also n − m
totally positive factors that are k × (k − 1) matrices, k = m + 1, . . . , n, all of
whose entries are zero except for those on the two main diagonals (i.e., the
(i, i)- and (i + 1, i)-entries, i = 1, . . . , k − 1). This follows by expanding the
Afterword
It is very difficult, if not well nigh impossible, to give an exact history of the
development of any set of ideas. Nonetheless, there are four persons whose
contributions “stand out” when considering the history of total positivity.
They are I. J. Schoenberg, M. G. Krein, F. R. Gantmacher, and S. Karlin.
Of course they did not work in a vacuum and numerous influences are very
evident in their research.
It was Schoenberg who initiated the study of the variation diminishing
properties of totally positive matrices in 1930 in Schoenberg [1930], and
the study of Pólya frequency functions in the late 1940s and early 1950s.
Independently, and unaware of Schoenberg’s work, Krein was developing
the theory of total positivity as it related to ordinary differential equations
whose Green’s functions are totally positive. Furthermore, in the mid-
1930s Krein, together with Gantmacher, proved the spectral properties
of totally positive kernels and matrices, and many other properties (see
Gantmacher, Krein [1935], Gantmacher [1936], Gantmacher, Krein [1937],
and their influential Gantmacher, Krein [1941], which was later reissued as
Gantmacher, Krein [1950], and its translations in German in 1960 and in
English in 1961 and 2002). These topics are the foundations upon which has
been constructed the theory of total positivity. Karlin’s role was somewhat
different. His books Karlin, Studden [1966] and Karlin [1968], the latter
titled Total Positivity. Volume 1 (but there is no Volume 2), presented
many new results and ideas and also synthesized and popularized many of
these ideas.
As the reader has hopefully noted, each chapter of this monograph
ends with remarks that include bibliographical references and explanations.
However I wanted to take this opportunity to write a “few words” in
memory of each of these gentlemen.
I. J. (Iso) Schoenberg (1903–1990) was born in Galatz, Romania and
[Photographs: I. J. Schoenberg, 1903–1990; M. G. Krein, 1907–1989; F. R. Gantmacher, 1908–1964; S. Karlin, 1924–2007]
References
Fan K. [1966], Some matrix inequalities, Abh. Math. Sem. Univ. Hamburg 29,
185–196.
Fan K. [1967], Subadditive functions on a distributive lattice and an extension of
Szasz’s inequality, J. Math. Anal. Appl. 18, 262–268.
Fan K. [1968], An inequality for subadditive functions on a distributive lattice
with application to determinantal inequalities, Lin. Alg. and Appl. 1, 33–38.
Fekete M. and G. Pólya [1912], Über ein Problem von Laguerre, Rend. Circ.
Mat. Palermo 34, 89–120.
Fomin S. and A. Zelevinsky [1999], Double Bruhat cells and total positivity,
J. Amer. Math. Soc. 12, 335–380.
Fomin S. and A. Zelevinsky [2000], Total positivity: tests and parametrizations,
Math. Intell. 22, 23–33.
Friedland S. [1985], Weak interlacing properties of totally positive matrices,
Lin. Alg. and Appl. 71, 95–100.
Gantmacher F. [1936], Sur les noyaux de Kellogg non symétriques, Comptes
Rendus (Doklady) de l’Academie des Sciences de l’URSS 1 (10), 3–5.
Gantmacher F. R. [1953], The Theory of Matrices, Gostekhizdat, Moscow–
Leningrad; English transl. as Matrix Theory, Chelsea, New York, 2 vols., 1959.
Gantmacher F. R. [1965], Obituary, in Uspekhi Mat. Nauk 20, 149–158; English
transl. as Russian Math. Surveys, 20 (1965), 143–151.
Gantmacher F. R. and M. G. Krein [1935], Sur les matrices oscillatoires, C. R.
Acad. Sci. (Paris) 201, 577–579.
Gantmacher F. R. and M. G. Krein [1937], Sur les matrices complètement non
négatives et oscillatoires, Compositio Math. 4, 445–476.
Gantmacher F. R. and M. G. Krein [1941], Oscillation Matrices and
Small Oscillations of Mechanical Systems (Russian), Gostekhizdat, Moscow-
Leningrad.
Gantmacher F. R. and M. G. Krein [1950], Ostsillyatsionye Matritsy
i Yadra i Malye Kolebaniya Mekhanicheskikh Sistem, Gosudarstvenoe
Izdatel’stvo, Moskva-Leningrad, 1950; German transl. as Oszillationsmatrizen,
Oszillationskerne und kleine Schwingungen mechanischer Systeme, Akademie
Verlag, Berlin, 1960; English transl. as Oscillation Matrices and Kernels and
Small Vibrations of Mechanical Systems, USAEC, 1961, and also a revised
English edition from AMS Chelsea Publ., 2002.
Garloff J. [1982], Criteria for sign regularity of sets of matrices, Lin. Alg. and
Appl. 44, 153–160.
Garloff J. [2003], Intervals of almost totally positive matrices, Lin. Alg. and Appl.
363, 103–108.
Garloff J. and D. Wagner [1996a], Hadamard products of stable polynomials are
stable, J. Math. Anal. Appl. 202, 797–809.
Garloff J. and D. Wagner [1996b], Preservation of total nonnegativity under the
Hadamard products and related topics, in Total Positivity and its Applications,
eds. C. A. Micchelli and M. Gasca, Kluwer Acad. Publ., Dordrecht, 97–102.
Gasca M. and C. A. Micchelli, eds. [1996], Total Positivity and its
Applications, Kluwer Acad. Publ., Dordrecht.
Gasca M., C. A. Micchelli and J. M. Peña [1992], Almost strictly totally positive
matrices, Numerical Algorithms 2, 225–236.
Gasca M. and J. M. Peña [1992], Total positivity and Neville elimination, Lin.
Alg. and Appl. 165, 25–44.
Gasca M. and J. M. Peña [1993], Total positivity, QR factorization, and Neville
elimination, SIAM J. Matrix Anal. Appl. 14, 1132–1140.
Gasca M. and J. M. Peña [1995], On the characterization of almost strictly totally
positive matrices, Adv. Comp. Math. 3, 239–250.
Gasca M. and J. M. Peña [1996], On factorizations of totally positive matrices,
in Total Positivity and its Applications, eds. C. A. Micchelli and M. Gasca,
Kluwer Acad. Publ., Dordrecht, 109–130.
Gasca M. and J. M. Peña [2006], Characterizations and decompositions of almost
strictly totally positive matrices, SIAM J. Matrix Anal. Appl. 28, 1–8.
Gladwell G. M. L. [1998], Total positivity and the QR algorithm, Lin. Alg. and
Appl. 271, 257–272.
Gladwell G. M. L. [2004], Inner totally positive matrices, Lin. Alg. and Appl.
393, 179–195.
Gohberg I. [1989], Mathematical Tales, in The Gohberg Anniversary Collection,
eds. H. Dym, S. Goldberg, M. A. Kaashoek, P. Lancaster, pp. 17–56, Operator
Theory: Advances and Applications, Vol. 40, Birkhäuser Verlag, Basel.
Gohberg I. [1990], Mark Grigorievich Krein 1907–1989, Notices Amer. Math. Soc.
37, 284–285.
Goodman T. N. T. [1995], Total positivity and the shape of curves, in Total
Positivity and its Applications, eds. C. A. Micchelli and M. Gasca, Kluwer
Acad. Publ., Dordrecht, 157–186.
Goodman T. N. T. and Q. Sun [2004], Total positivity and refinable functions
with general dilation, Appl. Comput. Harmon. Anal. 16, 69–89.
Gross K. I. and D. St. P. Richards [1995], Total positivity, finite reflection groups,
and a formula of Harish-Chandra, J. Approx. Theory 82, 60–87.
Holtz O. [2003], Hermite-Biehler, Routh-Hurwitz and total positivity, Lin. Alg.
and Appl. 372, 105–110.
Horn R. A. and C. R. Johnson [1991], Topics in Matrix Analysis, Cambridge
University Press, Cambridge.
Karlin S. [1964], The existence of eigenvalues for integral operators, Trans. Amer.
Math. Soc. 113, 1–17.
Karlin S. [1965], Oscillation properties of eigenvectors of strictly totally positive
matrices, J. D’Analyse Math. 14, 247–266.
Karlin S. [1968], Total Positivity. Volume 1, Stanford University Press, Stanford,
CA.
Karlin S. [1972], Some extremal problems for eigenvalues of certain matrix and
integral operators, Adv. in Math. 9, 93–136.
Karlin S. [2002], Interdisciplinary meandering in science. 50th anniversary issue
of Operations Research. Oper. Res. 50, 114–121.
Karlin S. and A. Pinkus [1974], Oscillation properties of generalized characteristic
polynomials for totally positive and positive definite matrices, Lin. Alg. and
Appl. 8, 281–312.
Author index

Fallat, S. M., 34, 35, 126, 153, 175
Fan, K., 34, 86, 176
Fekete, M., 74, 176
Fomin, S., ix, 168, 174, 176
Friedland, S., 153, 176
Froehle, B., 34, 35, 174
Gantmacher, F. R., ix, x, 33, 34, 125, 152, 153, 169, 172, 176
Mühlbach, G., 34, 178
MacGibbon, K. B., 86, 174
Maló, E., 126, 178
Marden, M., 125, 178
Markham, T. L., 126, 178
Marshall, A. W., ix, 178
Metelmann, K., 74, 167, 178
Micchelli, C. A., ix, 34, 74, 167, 175–178
Motzkin, Th., 86, 178