Linear Algebra
Linear Algebra
Jim Hefferon
http://joshua.smcvt.edu/linearalgebra
Notation
R, R+ , Rn
N, C
(a .. b), [a .. b]
h. . .i
hi,j
V, W, U
~v, ~0, ~0V
Pn , Mnm
[S]
~ ~
hB, Di, ,
En = h~e1 , . . . , ~en i
W
V=
MN
h, g
t, s
RepB (~v), RepB,D (h)
Znm or Z, Inn or I
|T |
R(h), N (h)
R (h), N (h)
,
,
name
alpha AL-fuh
beta BAY-tuh
gamma GAM-muh
delta DEL-tuh
epsilon EP-suh-lon
zeta ZAY-tuh
eta AY-tuh
theta THAY-tuh
iota eye-OH-tuh
kappa KAP-uh
lambda LAM-duh
mu MEW
character
,
o
,
,
,
,
,
name
nu NEW
xi KSIGH
omicron OM-uh-CRON
pi PIE
rho ROW
sigma SIG-muh
tau TOW (as in cow)
upsilon OOP-suh-LON
phi FEE, or FI (as in hi)
chi KI (as in hi)
psi SIGH, or PSIGH
omega oh-MAY-guh
Capitals shown are the ones that differ from Roman capitals.
Preface
This book helps students to master the material of a standard US undergraduate
first course in Linear Algebra.
The material is standard in that the subjects covered are Gaussian reduction,
vector spaces, linear maps, determinants, and eigenvalues and eigenvectors.
Another standard is books audience: sophomores or juniors, usually with
a background of at least one semester of calculus. The help that it gives to
students comes from taking a developmental approach this books presentation
emphasizes motivation and naturalness, using many examples as well as extensive
and careful exercises.
The developmental approach is what most recommends this book so I will
elaborate. Courses at the beginning of a mathematics program focus less on
theory and more on calculating. Later courses ask for mathematical maturity: the
ability to follow different types of arguments, a familiarity with the themes that
underlie many mathematical investigations such as elementary set and function
facts, and a capacity for some independent reading and thinking. Some programs
have a separate course devoted to developing maturity but in any case a Linear
Algebra course is an ideal spot to work on this transition to more rigor. It comes
early in a program so that progress made here pays off later but also comes late
enough so that the students in the class are serious about mathematics. The
material is accessible, coherent, and elegant. There are a variety of argument
styles, including proofs by contradiction and proofs by induction. And, examples
are plentiful.
Helping readers start the transition to being serious students of mathematics
requires taking the mathematics seriously so all of the results here are proved.
On the other hand, we cannot assume that students have already arrived and so
in contrast with more advanced texts this book is filled with examples, often
quite detailed.
Some texts that assume a not-yet sophisticated reader begin with extensive
computations, including matrix multiplication and determinants. Then, when
vector spaces and linear maps finally appear and definitions and proofs start, the
abrupt change brings students to an abrupt stop. While this book begins with
linear reduction, from the start we do more than compute. The first chapter
includes proofs, such as that linear reduction gives a correct and complete
solution set. With that as motivation the second chapter begins with real vector
spaces. In the schedule below this happens at the start of the third week.
A student progresses most in mathematics while doing exercises. The problem
sets start with routine checks and range up to reasonably involved proofs. I have
aimed to typically put two dozen in each set, thereby giving a selection. There
are even a few that are puzzles taken from various journals, competitions, or
problems collections. These are marked with a ? and as part of the fun I have
kept the original wording as much as possible.
That is, as with the rest of the book, the exercises are aimed to both build
an ability at, and help students experience the pleasure of, doing mathematics.
Students should see how the ideas arise and should be able to picture themselves
doing the same type of work.
Applications. Applications and computing are interesting and vital aspects of the
subject. Consequently, each chapter closes with a selection of topics in those
areas. These give a reader a taste of the subject, discuss how Linear Algebra
comes in, point to some further reading, and give a few exercises. They are
brief enough that an instructor can do one in a days class or can assign them
as projects for individuals or small groups. Whether they figure formally in a
course or not, they help readers see for themselves that Linear Algebra is a tool
that a professional must have.
Availability. This book is Free. In particular, instructors can run off copies
for students and sell them at the bookstore. See this books web page http:
//joshua.smcvt.edu/linearalgebra for the license details. That page also
has the latest version, exercise answers, beamer slides, lab manual, additional
material, and LATEX source.
Acknowledgments. A lesson of software development is that complex projects
need a process for bug fixes. I am grateful for such reports from both instructors
and students and I periodically issue revisions. My contact information is on
the web page.
I thank Gabriel S Santiago for the cover colors. I am also grateful to Saint
Michaels College for supporting this project over many years.
And, I thank my wife Lynne for her unflagging encouragement.
Advice. This books emphasis on motivation and development, and its availability,
make it widely used for self-study. If you are an independent student then good
for you, I admire your industry. However, you may find some advice useful.
While an experienced instructor knows what subjects and pace suit their
class, this semesters timetable (graciously shared by George Ashline) may help
you plan a sensible rate. It presumes Section One.II, the elements of vectors.
week
1
2
3
4
5
6
7
8
9
10
11
12
13
14
Monday
One.I.1
One.I.3
Two.I.1
Two.II.1
Two.III.2
exam
Three.I.2
Three.II.1
Three.III.1
Three.IV.2, 3
Three.V.1
exam
Five.II.1
Five.II.1, 2
Wednesday
One.I.1, 2
One.III.1
Two.I.1, 2
Two.III.1
Two.III.2, 3
Three.I.1
Three.I.2
Three.II.2
Three.III.2
Three.IV.4
Three.V.2
Four.I.2
Thanksgiving
Five.II.2
Friday
One.I.2, 3
One.III.2
Two.I.2
Two.III.2
Two.III.3
Three.I.1
Three.II.1
Three.II.2
Three.IV.1, 2
Three.V.1
Four.I.1
Four.III.1
break
Five.II.3
(Using this schedule as a target, I find that I have room for a lecture or two on
an application or from the lab manual.) Note that in addition to the in-class
exams, students in this course do take-home problems that include arguments,
for instance showing that a set is a vector space. Computations are important
but so are the proofs.
In the table of contents I have marked subsections as optional if some
instructors will pass over them in favor of spending more time elsewhere.
As enrichment, you might pick one or two topics that appeal to you from
the end of each chapter or from the lab manual. Youll get more from these if
you have access to software for calculations. I recommend Sage, freely available
from http://sagemath.org.
My main advice is: do many exercises. I have marked a good sample with
Xs in the margin. Do not simply read the answers you must try the problems
and possibly struggle with them. For all of the exercises, you must justify your
answer either with a computation or with a proof. Be aware that few people
can write correct proofs without training; try to find a knowledgeable person to
work with you.
2014-Dec-25
Authors Note. Inventing a good exercise, one that enlightens as well as tests,
is a creative act, and hard work. The inventor deserves recognition. But texts
have traditionally not given attributions for questions. I have changed that here
where I was sure of the source. I would be glad to hear from anyone who can
help me to correctly attribute others of the questions.
Contents
Chapter One:
Linear Systems
I Solving Linear Systems . . . . . . . . . . .
I.1 Gausss Method . . . . . . . . . . . .
I.2 Describing the Solution Set . . . . . .
I.3 General = Particular + Homogeneous .
II Linear Geometry . . . . . . . . . . . . . .
II.1 Vectors in Space* . . . . . . . . . . .
II.2 Length and Angle Measures* . . . . .
III Reduced Echelon Form . . . . . . . . . . .
III.1 Gauss-Jordan Reduction . . . . . . . .
III.2 The Linear Combination Lemma . . .
Topic: Computer Algebra Systems . . . . . .
Topic: Accuracy of Computations . . . . . . .
Topic: Analyzing Networks . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1
2
12
23
35
35
42
50
50
55
64
66
70
Chapter Two:
Vector Spaces
I Definition of Vector Space . . . . . . .
I.1 Definition and Examples . . . . .
I.2 Subspaces and Spanning Sets . . .
II Linear Independence . . . . . . . . . .
II.1 Definition and Examples . . . . .
III Basis and Dimension . . . . . . . . . .
III.1 Basis . . . . . . . . . . . . . . . . .
III.2 Dimension . . . . . . . . . . . . .
III.3 Vector Spaces and Linear Systems
III.4 Combining Subspaces* . . . . . . .
Topic: Fields . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
78
78
90
101
101
113
113
120
126
134
143
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
163
163
173
181
181
189
202
202
212
221
221
225
234
243
251
251
256
264
264
269
274
283
289
296
301
307
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
314
314
319
324
333
342
342
349
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
349
355
358
362
366
Chapter Five:
Similarity
I Complex Vector Spaces . . . . . . . . . . . . . . . .
I.1 Polynomial Factoring and Complex Numbers*
I.2 Complex Representations . . . . . . . . . . . .
II Similarity . . . . . . . . . . . . . . . . . . . . . . .
II.1 Definition and Examples . . . . . . . . . . . .
II.2 Diagonalizability . . . . . . . . . . . . . . . . .
II.3 Eigenvalues and Eigenvectors . . . . . . . . . .
III Nilpotence . . . . . . . . . . . . . . . . . . . . . . .
III.1 Self-Composition* . . . . . . . . . . . . . . . .
III.2 Strings* . . . . . . . . . . . . . . . . . . . . . .
IV Jordan Form . . . . . . . . . . . . . . . . . . . . . .
IV.1 Polynomials of Maps and Matrices* . . . . . .
IV.2 Jordan Canonical Form* . . . . . . . . . . . . .
Topic: Method of Powers . . . . . . . . . . . . . . . . .
Topic: Stable Populations . . . . . . . . . . . . . . . .
Topic: Page Ranking . . . . . . . . . . . . . . . . . . .
Topic: Linear Recurrences . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
379
380
382
384
384
389
393
403
403
407
418
418
426
441
445
447
451
Appendix
Statements . . . . . . . . . . .
Quantifiers . . . . . . . . . . .
Techniques of Proof . . . . . .
Sets, Functions, and Relations .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
A-1
A-2
A-3
A-5
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Chapter One
Linear Systems
I
Systems of linear equations are common in science and mathematics. These two
examples from high school science [Onan] give a sense of how they arise.
The first example is from Statics. Suppose that we have three objects, we
know that one has a mass of 2 kg, and we want to find the two unknown masses.
Suppose further that experimentation with a meter stick produces these two
balances.
40
50
25
50
2
15
25
For the masses to balance we must have that the sum of moments on the left
equals the sum of moments on the right, where the moment of an object is its
mass times its distance from the balance point. That gives a system of two
linear equations.
40h + 15c = 100
25c = 50 + 50h
The second example is from Chemistry. We can mix, under controlled
conditions, toluene C7 H8 and nitric acid HNO3 to produce trinitrotoluene
C7 H5 O6 N3 along with the byproduct water (conditions have to be very well
controlled trinitrotoluene is better known as TNT). In what proportion should
we mix them? The number of atoms of each element present before the reaction
x C7 H8 + y HNO3
z C7 H5 O6 N3 + w H2 O
must equal the number present afterward. Applying that in turn to the elements
C, H, N, and O gives this system.
7x = 7z
8x + 1y = 5z + 2w
1y = 3z
3y = 6z + 1w
Both examples come down to solving a system of equations. In each system,
the equations involve only the first power of each variable. This chapter shows
how to solve any such system.
I.1
Gausss Method
Finding the set of all solutions is solving the system. We dont need guesswork
or good luck, there is an algorithm that always works. This algorithm is Gausss
Method (or Gaussian elimination or linear elimination).
1.4 Example To solve this system
3x3 = 9
x1 + 5x2 2x3 = 2
1
x
=3
3 1 + 2x2
we transform it, step by step, until it is in a form that we can easily solve.
The first transformation rewrites the system by interchanging the first and
third row.
swap row 1 with row 3
1
3 x1
+ 2x2
=3
x1 + 5x2 2x3 = 2
3x3 = 9
x1 + 6x2
=9
x1 + 5x2 2x3 = 2
3x3 = 9
The third transformation is the only nontrivial one in this example. We mentally
multiply both sides of the first row by 1, mentally add that to the second row,
and write the result in as the new second row.
add 1 times row 1 to row 2
x1 + 6x2
= 9
x2 2x3 = 7
3x3 = 9
These steps have brought the system to a form where we can easily find the
value of each variable. The bottom equation shows that x3 = 3. Substituting 3
for x3 in the middle equation shows that x2 = 1. Substituting those two into
the top equation gives that x1 = 3. Thus the system has a unique solution; the
solution set is { (3, 1, 3) }.
Most of this subsection and the next one consists of examples of solving
linear systems by Gausss Method, which we will use throughout the book. It is
fast and easy. But before we do those examples we will first show that it is also
safe: Gausss Method never loses solutions (any tuple that is a solution to the
system before you apply the method is also a solution after), nor does it ever
pick up extraneous solutions (any tuple that is not a solution before is also not
a solution after).
d1
di
dj
dm
The tuple (s1 , . . . , sn ) satisfies this system if and only if substituting the values
for the variables, the ss for the xs, gives a conjunction of true statements:
a1,1 s1 +a1,2 s2 + +a1,n sn = d1 and . . . ai,1 s1 +ai,2 s2 + +ai,n sn = di and
. . . aj,1 s1 + aj,2 s2 + + aj,n sn = dj and . . . am,1 s1 + am,2 s2 + + am,n sn =
dm .
In a list of statements joined with and we can rearrange the order of the
statements. Thus this requirement is met if and only if a1,1 s1 + a1,2 s2 + +
a1,n sn = d1 and . . . aj,1 s1 + aj,2 s2 + + aj,n sn = dj and . . . ai,1 s1 + ai,2 s2 +
+ ai,n sn = di and . . . am,1 s1 + am,2 s2 + + am,n sn = dm . This is exactly
the requirement that (s1 , . . . , sn ) solves the system after the row swap. QED
1.6 Definition The three operations from Theorem 1.5 are the elementary reduction operations, or row operations, or Gaussian operations. They are
swapping, multiplying by a scalar (or rescaling), and row combination.
When writing out the calculations, we will abbreviate row i by i . For
instance, we will denote a row combination operation by ki + j , with the row
that changes written second. To save writing we will often combine addition
steps when they use the same i as in the next example.
1.7 Example Gausss Method systematically applies the row operations to solve
a system. Here is a typical case.
x+ y
=0
2x y + 3z = 3
x 2y z = 3
We begin by using the first row to eliminate the 2x in the second row and the x
in the third. To get rid of the 2x we mentally multiply the entire first row by
2, add that to the second row, and write the result in as the new second row.
To eliminate the x in the third row we multiply the first row by 1, add that to
the third row, and write the result in as the new third row.
x+
21 +2
1 +3
y
=0
3y + 3z = 3
3y z = 3
We finish by transforming the second system into a third, where the bottom
equation involves only one unknown. We do that by using the second row to
eliminate the y term from the third row.
x+
2 +3
y
=0
3y + 3z = 3
4z = 0
Now finding the systems solution is easy. The third row gives z = 0. Substitute
that back into the second row to get y = 1. Then substitute back into the first
row to get x = 1.
1.8 Example For the Physics problem from the start of this chapter, Gausss
Method gives this.
40h + 15c = 100
50h + 25c = 50
5/41 +2
40h +
15c = 100
(175/4)c = 175
x+ y+ z= 9
2y 5z = 17
3y 8z = 27
21 +2
31 +3
(3/2)2 +3
x+ y+
2y
z=
9
5z =
17
(1/2)z = (3/2)
xy
21 +2
=0
z + 2w = 4
+ w=0
2z + w = 5
the second equation has no leading y. We exchange it for a lower-down row that
has a leading y.
2 3
xy
y
=0
+ w=0
z + 2w = 4
2z + w = 5
(Had there been more than one suitable row below the second then we could
have used any one.) With that, Gausss Method proceeds as before.
23 +4
xy
y
= 0
+ w= 0
z + 2w = 4
3w = 3
Strictly speaking, to solve linear systems we dont need the row rescaling
operation. We have introduced it here because it is convenient and because we
will use it later in this chapter as part of a variation of Gausss Method, the
Gauss-Jordan Method.
All of the systems so far have the same number of equations as unknowns.
All of them have a solution and for all of them there is only one solution. We
finish this subsection by seeing other things that can happen.
1.12 Example This system has more equations than variables.
x + 3y = 1
2x + y = 3
2x + 2y = 2
Gausss Method helps us understand this system also, since this
x + 3y = 1
5y = 5
4y = 4
21 +2
21 +3
x + 3y = 1
5y = 5
0= 0
21 +2
21 +3
x + 3y = 1
5y = 5
4y = 2
Here the system is inconsistent: no pair of numbers (s1 , s2 ) satisfies all three
equations simultaneously. Echelon form makes the inconsistency obvious.
(4/5)2 +3
x + 3y = 1
5y = 5
0= 2
1.14 Example The prior system has more equations than unknowns but that
is not what causes the inconsistency Example 1.12 has more equations than
unknowns and yet is consistent. Nor is having more equations than unknowns
necessary for inconsistency, as we see with this inconsistent system that has the
same number of equations as unknowns.
x + 2y = 8
2x + 4y = 8
21 +2
x + 2y = 8
0 = 8
Instead, inconsistency has to do with the interaction of the left and right sides;
in the first system above the left sides second equation is twice the first but the
right sides second constant is not twice the first. Later we will have more to
say about dependencies between a systems parts.
The other way that a linear system can fail to have a unique solution, besides
having no solutions, is to have many solutions.
1.15 Example In this system
x+ y=4
2x + 2y = 8
any pair of numbers satisfying the first equation also satisfies the second. The
solution set { (x, y) | x + y = 4 } is infinite; some example member pairs are (0, 4),
(1, 5), and (2.5, 1.5).
The result of applying Gausss Method here contrasts with the prior example
because we do not get a contradictory equation.
21 +2
x+y=4
0=0
2z = 6
y+ z=1
2x + y z = 7
3y + 3z = 0
2x
1 +3
2x
2 +3
32 +4
2z = 6
y+ z=1
y+ z=1
3y + 3z = 0
2z = 6
y+ z= 1
0= 0
0 = 3
In summary, Gausss Method uses the row operations to set a system up for
back substitution. If any step shows a contradictory equation then we can stop
with the conclusion that the system has no solutions. If we reach echelon form
without a contradictory equation, and each variable is a leading variable in its
row, then the system has a unique solution and we find it by back substitution.
Finally, if we reach echelon form without a contradictory equation, and there is
not a unique solution that is, at least one variable is not a leading variable
then the system has many solutions.
The next subsection explores the third case. We will see that such a system
must have infinitely many solutions and we will describe the solution set.
Note. Here, and in the rest of the book, you must justify all of your exercise
answers. For instance, if a question asks whether a system has a solution
then you must justify a yes response by producing the solution and must
justify a no response by showing that no solution exists.
Exercises
X 1.17 Use Gausss Method to find the unique solution for each system.
x
z=0
2x + 3y = 13
(a)
(b) 3x + y
=1
x y = 1
x + y + z = 4
X 1.18 Use Gausss Method to solve each system or conclude many solutions or no
solutions.
(a) 2x + 2y = 5
(b) x + y = 1
(c) x 3y + z = 1
(d) x y = 1
x 4y = 0
x+y=2
x + y + 2z = 14
3x 3y = 2
(e)
4y + z = 20
(f) 2x
+ z+w= 5
2x 2y + z = 0
y
w = 1
x
+z= 5
3x
zw= 0
x + y z = 10
4x + y + 2z + w = 9
X 1.19 We can solve linear systems by methods other than Gausss. One often taught
in high school is to solve one of the equations for a variable, then substitute the
resulting expression into other equations. Then we repeat that step until there
10
X 1.20 For which values of k are there no solutions, many solutions, or a unique
solution to this system?
x y=1
3x 3y = k
X 1.21 This system is not linear in that it says sin instead of
2 sin cos + 3 tan = 3
4 sin + 2 cos 2 tan = 10
6 sin 3 cos + tan = 9
and yet we can apply Gausss Method. Do so. Does the system have a solution?
X 1.22 What conditions must the constants, the bs, satisfy so that each of these
systems has a solution? Hint. Apply Gausss Method and see what happens to the
right side.
(a) x 3y = b1
(b) x1 + 2x2 + 3x3 = b1
3x + y = b2
2x1 + 5x2 + 3x3 = b2
x1
+ 8x3 = b3
x + 7y = b3
2x + 4y = b4
1.23 True or false: a system with more unknowns than equations has at least one
solution. (As always, to say true you must prove it, while to say false you must
produce a counterexample.)
1.24 Must any Chemistry problem like the one that starts this subsection a balance
the reaction problem have infinitely many solutions?
X 1.25 Find the coefficients a, b, and c so that the graph of f(x) = ax2 + bx + c passes
through the points (1, 2), (1, 6), and (2, 3).
1.26 After Theorem 1.5 we note that multiplying a row by 0 is not allowed because
that could change a solution set. Give an example of a system with solution set S0
where after multiplying a row by 0 the new system has a solution set S1 and S0 is
a proper subset of S1 . Give an example where S0 = S1 .
11
1.27 Gausss Method works by combining the equations in a system to make new
equations.
(a) Can we derive the equation 3x 2y = 5 by a sequence of Gaussian reduction
steps from the equations in this system?
x+y=1
4x y = 6
(b) Can we derive the equation 5x 3y = 2 with a sequence of Gaussian reduction
steps from the equations in this system?
2x + 2y = 5
3x + y = 4
(c) Can we derive 6x 9y + 5z = 2 by a sequence of Gaussian reduction steps
from the equations in the system?
2x + y z = 4
6x 3y + z = 5
1.28 Prove that, where a, b, . . . , e are real numbers and a 6= 0, if
ax + by = c
has the same solution set as
ax + dy = e
then they are the same equation. What if a = 0?
X 1.29 Show that if ad bc 6= 0 then
ax + by = j
cx + dy = k
has a unique solution.
X 1.30 In the system
ax + by = c
dx + ey = f
each of the equations describes a line in the xy-plane. By geometrical reasoning,
show that there are three possibilities: there is a unique solution, there is no
solution, and there are infinitely many solutions.
1.31 Finish the proof of Theorem 1.5.
1.32 Is there a two-unknowns linear system whose solution set is all of R2 ?
X 1.33 Are any of the operations used in Gausss Method redundant? That is, can we
make any of the operations from a combination of the others?
1.34 Prove that each operation of Gausss Method is reversible. That is, show that
if two systems are related by a row operation S1 S2 then there is a row operation
to go back S2 S1 .
? 1.35 [Anton] A box holding pennies, nickels and dimes contains thirteen coins with
a total value of 83 cents. How many coins of each type are in the box? (These are
US coins; a penny is 1 cent, a nickel is 5 cents, and a dime is 10 cents.)
? 1.36 [Con. Prob. 1955] Four positive integers are given. Select any three of the
integers, find their arithmetic average, and add this result to the fourth integer.
Thus the numbers 29, 23, 21, and 17 are obtained. One of the original integers
is:
12
(b) 21
(c) 23
(d) 29
(e) 17
? X 1.37 [Am. Math. Mon., Jan. 1935] Laugh at this: AHAHA + TEHE = TEHAW. It
resulted from substituting a code letter for each digit of a simple example in
addition, and it is required to identify the letters and prove the solution unique.
? 1.38 [Wohascum no. 2] The Wohascum County Board of Commissioners, which has
20 members, recently had to elect a President. There were three candidates (A, B,
and C); on each ballot the three candidates were to be listed in order of preference,
with no abstentions. It was found that 11 members, a majority, preferred A over
B (thus the other 9 preferred B over A). Similarly, it was found that 12 members
preferred C over A. Given these results, it was suggested that B should withdraw,
to enable a runoff election between A and C. However, B protested, and it was
then found that 14 members preferred B over C! The Board has not yet recovered
from the resulting confusion. Given that every possible order of A, B, C appeared
on at least one ballot, how many members voted for B as their first choice?
? 1.39 [Am. Math. Mon., Jan. 1963] This system of n linear equations with n unknowns, said the Great Mathematician, has a curious property.
Good heavens! said the Poor Nut, What is it?
Note, said the Great Mathematician, that the constants are in arithmetic
progression.
Its all so clear when you explain it! said the Poor Nut. Do you mean like
6x + 9y = 12 and 15x + 18y = 21?
Quite so, said the Great Mathematician, pulling out his bassoon. Indeed,
the system has a unique solution. Can you find it?
Good heavens! cried the Poor Nut, I am baffled.
Are you?
I.2
A linear system with a unique solution has a solution set with one element. A
linear system with no solution has a solution set that is empty. In these cases
the solution set is easy to describe. Solution sets are a challenge to describe only
when they contain many elements.
2.1 Example This system has many solutions because in echelon form
2x
+z=3
xyz=1
3x y
=4
2x
(1/2)1 +2
(3/2)1 +3
2x
2 +3
+
z=
3
y (3/2)z = 1/2
y (3/2)z = 1/2
+
z=
3
y (3/2)z = 1/2
0=
0
13
not all of the variables are leading variables. Theorem 1.5 shows that an (x, y, z)
satisfies the first system if and only if it satisfies the third. So we can describe
the solution set {(x, y, z) | 2x + z = 3 and x y z = 1 and 3x y = 4 } in this
way.
{ (x, y, z) | 2x + z = 3 and y 3z/2 = 1/2}
()
This description is better because it has two equations instead of three but it is
not optimal because it still has some hard to understand interactions among the
variables.
To improve it, use the variable that does not lead any equation, z, to describe
the variables that do lead, x and y. The second equation gives y = (1/2)(3/2)z
and the first equation gives x = (3/2)(1/2)z. Thus we can describe the solution
set in this way.
{(x, y, z) = ((3/2) (1/2)z, (1/2) (3/2)z, z) | z R }
()
Compared with (), the advantage of () is that z can be any real number.
This makes the job of deciding which tuples are in the solution set much easier.
For instance, taking z = 2 shows that (1/2, 5/2, 2) is a solution.
2.2 Definition In an echelon form linear system the variables that are not leading
are free.
2.3 Example Reduction of a linear system can end with more than one variable
free. Gausss Method on this system
x+ y+ z w= 1
y z + w = 1
3x
+ 6z 6w = 6
y + z w = 1
x+
31 +3
32 +3
2 +4
y+ z w= 1
y z + w = 1
3y + 3z 3w = 3
y + z w = 1
x+y+zw= 1
y z + w = 1
0= 0
0= 0
leaves x and y leading and both z and w free. To get the description that we
prefer, we work from the bottom. We first express the leading variable y in terms
of z and w, as y = 1 + z w. Moving up to the top equation, substituting for
y gives x + (1 + z w) + z w = 1 and solving for x leaves x = 2 2z + 2w.
The solution set
{(2 2z + 2w, 1 + z w, z, w) | z, w R }
has the leading variables in terms of the variables that are free.
()
14
2.4 Example The list of leading variables may skip over some columns. After
this reduction
2x 2y
=0
z + 3w = 2
3x 3y
=0
x y + 2z + 6w = 4
2x 2y
(3/2)1 +3
(1/2)1 +4
2x 2y
22 +4
=0
z + 3w = 2
0=0
2z + 6w = 4
=0
z + 3w = 2
0=0
0=0
x and z are the leading variables, not x and y. The free variables are y and w
and so we can describe the solution set as {(y, y, 2 3w, w) | y, w R }. For
instance, (1, 1, 2, 0) satisfies the system take y = 1 and w = 0. The four-tuple
(1, 0, 5, 4) is not a solution since its first coordinate does not equal its second.
A variable that we use to describe a family of solutions is a parameter. We
say that the solution set in the prior example is parametrized with y and w.
(The terms parameter and free variable do not mean the same thing. In the
prior example y and w are free because in the echelon form system they do not
lead while they are parameters because of how we used them to describe the set
of solutions. Had we instead rewritten the second equation as w = 2/3 (1/3)z
then the free variables would still be y and w but the parameters would be y
and z.)
In the rest of this book we will solve linear systems by bringing them to
echelon form and then parametrizing with the free variables.
2.5 Example This is another system with infinitely many solutions.
x + 2y
=1
2x
+z
=2
3x + 2y + z w = 4
21 +2
31 +3
2 +3
x + 2y
=1
4y + z
=0
4y + z w = 1
x + 2y
4y + z
=1
=0
w = 1
The leading variables are x, y, and w. The variable z is free. Notice that,
although there are infinitely many solutions, the value of w doesnt vary but is
constant at 1. To parametrize, write w in terms of z with w = 1 + 0z. Then
y = (1/4)z. Substitute for y in the first equation to get x = 1 (1/2)z. The
solution set is {(1 (1/2)z, (1/4)z, z, 1) | z R }.
Parametrizing solution sets shows that systems with free variables have
15
infinitely many solutions. For instance, above z takes on all of infinitely many
real number values, each associated with a different solution.
We finish this subsection by developing a streamlined notation for linear
systems and their solution sets.
2.6 Definition An mn matrix is a rectangular array of numbers with m rows
and n columns. Each number in the matrix is an entry.
We usually denote a matrix with an upper case roman letters. For instance,
!
1 2.2 5
A=
3 4 7
has 2 rows and 3 columns and so is a 23 matrix. Read that aloud as two-bythree; the number of rows is always given first. (The matrix has parentheses
on either side so that when two matrices are adjacent we can tell where one
ends and the other begins.) We name matrix entries with the corresponding
lower-case letter so that a2,1 = 3 is the entry in the second row and first column
of the above array. Note that the order of the subscripts matters: a1,2 6= a2,1
since a1,2 = 2.2. We denote the set of all nm matrices by Mnm .
We use matrices to do Gausss Method in essentially the same way that we
did it for systems of equations: where a rows leading entry. is its first nonzero
entry (if it has one), we perform row operations to arrive at matrix echelon
form,where the leading entry in lower rows are to the right of those in the rows
above. We switch to this notation because it lightens the clerical load of Gausss
Method the copying of variables and the writing of +s and =s.
2.7 Example We can abbreviate this linear system
x + 2y
=4
y z=0
x
+ 2z = 4
with this matrix.
1 2
0 1
1 0
0
1
2
0
4
The vertical bar reminds a reader of the difference between the coefficients on
the systems left hand side and the constants on the right. With a bar, this is
an augmented matrix.
1 2 0 4
1 2
0 4
1 2 0 4
1 +3
22 +3
0 1 1 0
0 1 1 0
0 1 1 0
1 0 2 4
0 2 2 0
0 0 0 0
16
The second row stands for y z = 0 and the first row stands for x + 2y = 4 so
the solution set is {(4 2z, z, z) | z R }.
Matrix notation also clarifies the descriptions of solution sets. Example 2.3s
{ (2 2z + 2w, 1 + z w, z, w) | z, w R } is hard to read. We will rewrite it
to group all of the constants together, all of the coefficients of z together, and
all of the coefficients of w together. We write them vertically, in one-column
matrices.
2
2
2
1 1
1
{ + z + w | z, w R}
0 1
0
0
0
1
For instance, the top line says that x = 2 2z + 2w and the second line says
that y = 1 + z w. (Our next section gives a geometric interpretation that
will help picture the solution sets.)
2.8 Definition A vector (or column vector ) is a matrix with a single column.
A matrix with a single row is a row vector . The entries of a vector are its
components. A column or row vector whose components are all zeros is a zero
vector.
Vectors are an exception to the convention of representing matrices with
capital roman letters. We use lower-case roman or greek letters overlined with an
~ . . . (boldface is also common: a or ). For instance,
arrow: a
~ , ~b, . . . or
~ , ,
this is a column vector with a third component of 7.
1
~v = 3
7
A zero vector is denoted ~0. There are many different zero vectors the one-tall
zero vector, the two-tall zero vector, etc. but nonetheless we will often say
the zero vector, expecting that the size will be clear from the context.
2.9 Definition The linear equation a1 x1 + a2 x2 + + an xn = d with unknowns
x1 , . . . , xn is satisfied by
s1
..
~s = .
sn
if a1 s1 + a2 s2 + + an sn = d. A vector satisfies a linear system if it satisfies
each equation in the system.
17
The style of description of solution sets that we use involves adding the
vectors, and also multiplying them by real numbers. Before we give the examples
showing the style we first need to define these operations.
2.10 Definition The vector sum of ~u and ~v is the vector of the sums.
u1
v1
u1 + v1
. .
..
~u + ~v = .. + .. =
.
un
vn
un + vn
Note that for the addition to be defined the vectors must have the same
number of entries. This entry-by-entry addition works for any pair of matrices,
not just vectors, provided that they have the same number of rows and columns.
2.11 Definition The scalar multiplication of the real number r and the vector ~v
is the vector of the multiples.
v1
rv1
. .
r ~v = r .. = ..
vn
rvn
1
2
1
4
1+4
5
1
7
4 28
7 =
1 7
3
21
Observe that the definitions of addition and scalar multiplication agree where
they overlap; for instance, ~v + ~v = 2~v.
With these definitions, we are set to use matrix and vector notation to both
solve systems and express the solution.
2.13 Example This system
2x + y
w
=4
y
+ w+u=4
x
z + 2w
=0
18
reduces in
2 1
0 1
1 0
1
1
2
0
1
0
4
0
(1/2)1 +3
(1/2)2 +3
2
1
0
1
1
0
1
0
0 1/2 1 5/2
2 1 0 1
0
1
1
0 1 0
0 0 1 3 1/2
0
4
1
4
0 2
4
0
x
0
1
1/2
y 4 1
1
{ z = 0 + 3 w + 1/2 u | w, u R }
w 0 1
0
u
0
0
1
Note how well vector notation sets off the coefficients of each parameter. For
instance, the third row of the vector form shows plainly that if u is fixed then z
increases three times as fast as w. Another thing shown plainly is that setting
both w and u to zero gives that
x
0
y 4
z = 0
w 0
u
0
is a particular solution of the linear system.
2.14 Example In the same way, the system
x y+ z=1
3x
+ z=3
5x 2y + 3z = 5
reduces
1 1
0 1
2 3
3
5
31 +2
51 +3
2 +3
1 1 1 1
0 3 2 0
0 3 2 0
1 1 1 1
0 3 2 0
0 0
0 0
19
1
1/3
{ 0 + 2/3 z | z R }
0
1
As in the prior example, the vector not associated with the parameter
1
0
0
is a particular solution of the system.
Before the exercises, we will consider what we have accomplished and what
we have yet to do.
So far we have done the mechanics of Gausss Method. We have not stopped
to consider any of the interesting questions that arise, except for proving Theorem 1.5 which justifies the method by showing that it gives the right answers.
For example, can we always describe solution sets as above, with a particular
solution vector added to an unrestricted linear combination of some other vectors?
Weve noted that the solution sets we described in this way have infinitely many
solutions so an answer to this question would tell us about the size of solution
sets.
Many questions arise from our observation that we can do Gausss Method
in more than one way (for instance, when swapping rows we may have a choice
of more than one row). Theorem 1.5 says that we must get the same solution
set no matter how we proceed but if we do Gausss Method in two ways must
we get the same number of free variables in each echelon form system? Must
those be the same variables, that is, is it impossible to solve a problem one way
to get y and w free and solve it another way to get y and z free?
In the rest of this chapter we will answer these questions. The answer to
each is yes. In the next subsection we do the first one: we will prove that we
can always describe solution sets in that way. Then, in this chapters second
section, we will use that understanding to describe the geometry of solution sets.
In this chapters final section, we will settle the questions about the parameters.
When we are done, we will not only have a solid grounding in the practice of
Gausss Method but we will also have a solid grounding in the theory. We will
know exactly what can and cannot happen in a reduction.
Exercises
X 2.15 Find the indicated entry of the matrix, if it is defined.
1 3 1
A=
2 1 4
20
(b) a1,2
(c) a2,2
(d) a3,1
(c)
5
10
10
5
(d) 7
2
3
+9
1
5
X 2.18 Solve each system using matrix notation. Express the solution using vectors.
(a) 3x + 6y = 18
(b) x + y = 1
(c) x1
+ x3 = 4
x + 2y = 6
x y = 1
x1 x2 + 2x3 = 5
4x1 x2 + 5x3 = 17
(d) 2a + b c = 2
(e) x + 2y z
=3
(f) x
+z+w=4
2a
+c=3
2x + y
+w=4
2x + y
w=2
ab
=0
x y+z+w=1
3x + y + z
=7
X 2.19 Solve each system using matrix notation. Give each
notation.
(a) 2x + y z = 1
(b) x
z
=1
(c) x
4x y
=3
y + 2z w = 3
x + 2y + 3z w = 7
3x
(d)
a + 2b + 3c + d e = 1
3a b + c + d + e = 3
2.20 Solve each system using matrix notation. Express the solution set using
vectors.
x + y 2z = 0
3x + 2y + z = 1
x y
= 3
2x y z + w = 4
(a) x y + z = 2
(b)
(c)
3x y 2z = 6
x+y+z
= 1
5x + 5y + z = 0
2y 2z = 3
x + y 2z = 0
(d) x y
= 3
3x y 2z = 0
X 2.21 The vector is in the set. What value of the parameters produces that vector?
5
1
(a)
,{
k | k R}
5
1
1
2
3
(b) 2 , { 1 i + 0 j | i, j R }
1
0
1
21
0
1
2
(c) 4, { 1 m + 0 n | m, n R }
2
0
1
2.22 Decide
the vector
is in the set.
if
3
6
(a)
,{
k | k R}
1
2
5
5
(b)
,{
j | j R}
4
4
2
0
1
(c) 1 , { 3 + 1 r | r R }
1
7
3
1
2
3
(d) 0, { 0 j + 1 k | j, k R }
1
1
1
2.23 [Cleary] A farmer with 1200 acres is considering planting three different crops,
corn, soybeans, and oats. The farmer wants to use all 1200 acres. Seed corn costs
$20 per acre, while soybean and oat seed cost $50 and $12 per acre respectively.
The farmer has $40 000 available to buy seed and intends to spend it all.
(a) Use the information above to formulate two linear equations with three
unknowns and solve it.
(b) Solutions to the system are choices that the farmer can make. Write down
two reasonable solutions.
(c) Suppose that in the fall when the crops mature, the farmer can bring in
revenue of $100 per acre for corn, $300 per acre for soybeans and $80 per acre
for oats. Which of your two solutions in the prior part would have resulted in a
larger revenue?
2.24 Parametrize the solution set of this one-equation system.
x1 + x2 + + xn = 0
X 2.25
22
2
5
3
6
2
(b)
1
3
1
(c)
5
10
10
5
1
(d) 1
0
X 2.29 (a) Describe all functions f(x) = ax2 + bx + c such that f(1) = 2 and f(1) = 6.
(b) Describe all functions f(x) = ax2 + bx + c such that f(1) = 2.
2.30 Show that any set of five points from the plane R2 lie on a common conic section,
that is, they all satisfy some equation of the form ax2 + by2 + cxy + dx + ey + f = 0
where some of a, . . . , f are nonzero.
2.31 Make up a four equations/four unknowns system having
(a) a one-parameter solution set;
(b) a two-parameter solution set;
(c) a three-parameter solution set.
? 2.32 [Shepelev] This puzzle is from a Russian web-site http://www.arbuz.uz/ and
there are many solutions to it, but mine uses linear algebra and is very naive.
Theres a planet inhabited by arbuzoids (watermeloners, to translate from Russian).
Those creatures are found in three colors: red, green and blue. There are 13 red
arbuzoids, 15 blue ones, and 17 green. When two differently colored arbuzoids
meet, they both change to the third color.
The question is, can it ever happen that all of them assume the same color?
? 2.33 [USSR Olympiad no. 174]
(a) Solve the system of equations.
ax + y = a2
x + ay = 1
For what values of a does the system fail to have solutions, and for what values
of a are there infinitely many solutions?
(b) Answer the above question for the system.
ax + y = a3
x + ay = 1
? 2.34 [Math. Mag., Sept. 1952] In air a gold-surfaced sphere weighs 7588 grams. It
is known that it may contain one or more of the metals aluminum, copper, silver,
or lead. When weighed successively under standard conditions in water, benzene,
alcohol, and glycerin its respective weights are 6588, 6688, 6778, and 6328 grams.
How much, if any, of the forenamed metals does it contain if the specific gravities
of the designated substances are taken to be as follows?
Aluminum
Copper
Gold
Lead
Silver
2.7
8.9
19.3
11.3
10.8
Alcohol
Benzene
Glycerin
Water
0.81
0.90
1.26
1.00
I.3
23
In the prior subsection the descriptions of solution sets all fit a pattern. They
have a vector that is a particular solution of the system added to an unrestricted combination of some other vectors. The solution set from Example 2.13
illustrates.
1/2
1
0
1
1
4
{ 0 + w 3 + u 1/2 | w, u R}
0
1
0
1
0
0
particular
solution
unrestricted
combination
reduction.
The solution description has two parts, the particular solution ~p and the
~
unrestricted linear combination of the s.
We shall prove the theorem with two
corresponding lemmas.
We will focus first on the unrestricted combination. For that we consider
systems that have the vector of zeroes as a particular solution so that we can
~ 1 + + ck
~ k to c1
~ 1 + + ck
~ k.
shorten ~p + c1
3.2 Definition A linear equation is homogeneous if it has a constant of zero, so
that it can be written as a1 x1 + a2 x2 + + an xn = 0.
24
(2/3)1 +2
3x +
4y = 3
(11/3)y = 1
(2/3)1 +2
3x +
4y = 0
(11/3)y = 0
Obviously the two reductions go in the same way. We can study how to reduce
a linear systems by instead studying how to reduce the associated homogeneous
system.
Studying the associated homogeneous system has a great advantage over
studying the original system. Nonhomogeneous systems can be inconsistent.
But a homogeneous system must be consistent since there is always at least one
solution, the zero vector.
3.4 Example Some homogeneous systems have the zero vector as their only
solution.
3x + 2y + z = 0
6x + 4y
=0
y+z=0
3x + 2y +
21 +2
z=0
2z = 0
y+ z=0
2 3
3x + 2y +
y+
z=0
z=0
2z = 0
3.5 Example Some homogeneous systems have many solutions. One is the
25
7x
(8/7)1 +2
7x
2 +3
y+
32 +4
7x
(5/2)3 +4
7z
=0
y + 3z 2w = 0
y 3z
=0
3y 6z w = 0
7z
=0
3z 2w = 0
6z + 2w = 0
15z + 5w = 0
7z
=0
y + 3z 2w = 0
6z + 2w = 0
0=0
1/3
1
{
w | w R}
1/3
1
has many vectors besides the zero vector (if we interpret w as a number of
molecules then solutions make sense only when w is a nonnegative multiple of
3).
~ 1, . . . ,
~k
3.6 Lemma For any homogeneous linear system there exist vectors
such that the solution set of the system is
~ 1 + + ck
~ k | c1 , . . . , c k R }
{c1
where k is the number of free variables in an echelon form version of the system.
We will make two points before the proof. The first is that the basic idea of
the proof is straightforward. Consider this system of homogeneous equations in
echelon form.
x + y + 2z + u + v = 0
y+ z+uv=0
u+v=0
Start with the bottom equation. Express its leading variable in terms of the
free variables with u = v. For the next row up, substitute for the leading
variable u of the row below y + z + (v) v = 0 and solve for this rows leading
variable y = z + 2v. Iterate: on the next row up, substitute expressions found
26
in lower rows x + (z + 2v) + 2z + (v) + v = 0 and solve for the leading variable
x = z 2v. To finish, write the solution in vector notation
x
1
2
y 1
2
for z, v R
z = 1 z + 0 v
u 0
1
v
0
1
~ 1 and
~ 2 of the lemma are the vectors associated with
and recognize that the
the free variables z and v.
The prior paragraph is an example, not a proof. But it does suggest the
second point about the proof, its approach. The example moves row-by-row up
the system, using the equations from lower rows to do the next row. This points
to doing the proof by mathematical induction.
Induction is an important and non-obvious proof technique that we shall
use a number of times in this book. We will do proofs by induction in two
steps, a base step and an inductive step. In the base step we verify that the
statement is true for some first instance, here that for the bottom equation we
can write the leading variable in terms of free variables. In the inductive step
we must establish an implication, that if the statement is true for all prior cases
then it follows for the present case also. Here we will establish that if for the
bottom-most t rows we can express the leading variables in terms of the free
variables, then for the t + 1-th row from the bottom we can also express the
leading variable in terms of those that are free.
Those two steps together prove the statement for all the rows because by
the base step it is true for the bottom equation, and by the inductive step the
fact that it is true for the bottom equation shows that it is true for the next one
up. Then another application of the inductive step implies that it is true for the
third equation up, etc.
Proof Apply Gausss Method to get to echelon form. There may be some 0 = 0
equations; we ignore these (if the system consists only of 0 = 0 equations then
the lemma is trivially true because there are no leading variables). But because
the system is homogeneous there are no contradictory equations.
We will use induction to verify that each leading variable can be expressed
in terms of free variables. That will finish the proof because we can use the free
~ are the vectors of coefficients of those free
variables as parameters and the s
variables.
For the base step consider the bottom-most equation
am,`m x`m + am,`m +1 x`m +1 + + am,n xn = 0
()
27
where am,`m 6= 0. (The ` means leading so that x`m is the leading variable
in row m.) This is the bottom row so any variables after the leading one must
be free. Move these to the right hand side and divide by am,`m
x`m = (am,`m +1 /am,`m )x`m +1 + + (am,n /am,`m )xn
to express the leading variable in terms of free variables. (There is a tricky
technical point here: if in the bottom equation () there are no variables to
the right of xlm then x`m = 0. This satisfies the statement we are verifying
because, as alluded to at the start of this subsection, it has x`m written as a
sum of a number of the free variables, namely as the sum of zero many, under
the convention that a trivial sum totals to 0.)
For the inductive step assume that the statement holds for the bottom-most
t rows, with 0 6 t < m 1. That is, assume that for the m-th equation, and
the (m 1)-th equation, etc., up to and including the (m t)-th equation, we
can express the leading variable in terms of free ones. We must verify that
this then also holds for the next equation up, the (m (t + 1))-th equation.
For that, take each variable that leads in a lower equation x`m , . . . , x`mt and
substitute its expression in terms of free variables. We only need expressions
for leading variables from lower equations because the system is in echelon
form, so the leading variables in equations above this one do not appear in
this equation. The result has a leading term of am(t+1),`m(t+1) x`m(t+1)
with am(t+1),`m(t+1) 6= 0, and the rest of the left hand side is a linear
combination of free variables. Move the free variables to the right side and divide
by am(t+1),`m(t+1) to end with this equations leading variable x`m(t+1) in
terms of free variables.
We have done both the base step and the inductive step so by the principle
of mathematical induction the proposition is true.
QED
This shows, as discussed between the lemma and its proof, that we can
parametrize solution sets using the free variables. We say that the set of
~ 1 + + ck
~ k | c1 , . . . , ck R } is generated by or spanned by the
vectors {c1
~
~
set { 1 , . . . , k }.
To finish the proof of Theorem 3.1 the next lemma considers the particular
solution part of the solution sets description.
3.7 Lemma For a linear system and for any particular solution ~p, the solution
set equals {~p + ~h | ~h satisfies the associated homogeneous system}.
Proof We will show mutual set inclusion, that any solution to the system is in
the above set and that anything in the set is a solution of the system.
28
For set inclusion the first way, that if a vector solves the system then it is in
the set described above, assume that ~s solves the system. Then ~s ~p solves the
associated homogeneous system since for each equation index i,
ai,1 (s1 p1 ) + + ai,n (sn pn )
= (ai,1 s1 + + ai,n sn ) (ai,1 p1 + + ai,n pn ) = di di = 0
where pj and sj are the j-th components of ~p and ~s. Express ~s in the required
~p + ~h form by writing ~s ~p as ~h.
For set inclusion the other way, take a vector of the form ~p + ~h, where ~p
solves the system and ~h solves the associated homogeneous system and note
that ~p + ~h solves the given system since for any equation index i,
ai,1 (p1 + h1 ) + + ai,n (pn + hn )
= (ai,1 p1 + + ai,n pn ) + (ai,1 h1 + + ai,n hn ) = di + 0 = di
where as earlier pj and hj are the j-th components of ~p and ~h.
QED
The two lemmas together establish Theorem 3.1. Remember that theorem
with the slogan, General = Particular + Homogeneous.
3.8 Example This system illustrates Theorem 3.1.
x + 2y z = 1
2x + 4y
=2
y 3z = 0
Gausss Method
21 +2
x + 2y z = 1
2z = 0
y 3z = 0
2 3
x + 2y z = 1
y 3z = 0
2z = 0
21 +2 2 3
x + 2y z = 0
y 3z = 0
2z = 0
29
x
21 +2
1 +3
+ z + w = 1
y 2z w = 5
y + 2z + w = 2
It has no solutions because the final two equations conflict. But the associated
homogeneous system does have a solution, as do all homogeneous systems.
x
+ z+ w=0
2x y
+ w=0
x + y + 3z + 2w = 0
x
21 +2
1 +3
2 +3
+ z+w=0
y 2z w = 0
0=0
30
non-~0 as any non-0 component of ~v, when rescaled by the non-0 factor s t, will
give a non-0 value.
Now apply Lemma 3.7 to conclude that a solution set
{~p + ~h | ~h solves the associated homogeneous system }
is either empty (if there is no particular solution ~p), or has one element (if there
is a ~p and the homogeneous system has the unique solution ~0), or is infinite (if
there is a ~p and the homogeneous system has a non-~0 solution, and thus by the
prior paragraph has infinitely many solutions).
QED
This table summarizes the factors affecting the size of a general solution.
number of solutions of the
homogeneous system
particular
solution
exists?
yes
no
one
unique
solution
infinitely many
infinitely many
solutions
no
solutions
no
solutions
The dimension on the top of the table is the simpler one. When we perform
Gausss Method on a linear system, ignoring the constants on the right side and
so paying attention only to the coefficients on the left-hand side, we either end
with every variable leading some row or else we find some variable that does not
lead a row, that is, we find some variable that is free. (We formalize ignoring
the constants on the right by considering the associated homogeneous system.)
A notable special case is systems having the same number of equations as
unknowns. Such a system will have a solution, and that solution will be unique,
if and only if it reduces to an echelon form system where every variable leads its
row (since there are the same number of variables as rows), which will happen if
and only if the associated homogeneous system has a unique solution.
3.11 Definition A square matrix is nonsingular if it is the matrix of coefficients
of a homogeneous system with a unique solution. It is singular otherwise, that
is, if it is the matrix of coefficients of a homogeneous system with infinitely many
solutions.
3.12 Example The first of these matrices is nonsingular while the second is
singular
!
!
1 2
1 2
3 4
3 6
31
because the first of these homogeneous systems has a unique solution while the
second has infinitely many solutions.
x + 2y = 0
3x + 4y = 0
x + 2y = 0
3x + 6y = 0
We have made the distinction in the definition because a system with the same
number of equations as variables behaves in one of two ways, depending on
whether its matrix of coefficients is nonsingular or singular. Where the matrix
of coefficients is nonsingular the system has a unique solution for any constants
on the right side: for instance, Gausss Method shows that this system
x + 2y = a
3x + 4y = b
has the unique solution x = b2a and y = (3ab)/2. On the other hand, where
the matrix of coefficients is singular the system never has a unique solution it
has either no solutions or else has infinitely many, as with these.
x + 2y = 1
3x + 6y = 2
x + 2y = 1
3x + 6y = 3
The definition uses the word singular because it means departing from
general expectation. People often, naively, expect that systems with the same
number of variables as equations will have a unique solution. Thus, we can think
of the word as connoting troublesome, or at least not ideal. (That singular
applies to those systems that never have exactly one solution is ironic, but it is
the standard term.)
3.13 Example The systems from Example 3.3, Example 3.4, and Example 3.8
each have an associated homogeneous system with a unique solution. Thus these
matrices are nonsingular.
!
3 2 1
1 2 1
3 4
6 4 0
2 4 0
2 1
0 1 1
0 1 3
The Chemistry problem from Example 3.5 is a homogeneous system with more
than one solution so its matrix is singular.
7 0 7 0
8 1 5 2
0 1 3 0
0 3 6 1
32
The table above has two dimensions. We have considered the one on top: we
can tell into which column a given linear system goes solely by considering the
systems left-hand side; the constants on the right-hand side play no role in this.
The tables other dimension, determining whether a particular solution exists,
is tougher. Consider these two systems with the same left side but different
right sides.
3x + 2y = 5
3x + 2y = 5
3x + 2y = 5
3x + 2y = 4
The first has a solution while the second does not, so here the constants on the
right side decide if the system has a solution. We could conjecture that the left
side of a linear system determines the number of solutions while the right side
determines if solutions exist but that guess is not correct. Compare these two,
with the same right sides but different left sides.
3x + 2y = 5
4x + 2y = 4
3x + 2y = 5
3x + 2y = 4
The first has a solution but the second does not. Thus the constants on the
right side of a system dont alone determine whether a solution exists. Rather,
that depends on some interaction between the left and right.
For some intuition about that interaction, consider this system with one of
the coefficients left unspecified, as the variable c.
x + 2y + 3z = 1
x+ y+ z=1
cx + 3y + 4z = 0
If c = 2 then this system has no solution because the left-hand side has the
third row as the sum of the first two, while the right-hand does not. If c 6= 2
then this system has a unique solution (try it with c = 1). For a system to
have a solution, if one row of the matrix of coefficients on the left is a linear
combination of other rows then on the right the constant from that row must be
the same combination of constants from the same rows.
More intuition about the interaction comes from studying linear combinations.
That will be our focus in the second chapter, after we finish the study of Gausss
Method itself in the rest of this chapter.
Exercises
3.14 Solve this system. Then solve the associated homogeneous system.
x + y 2z = 0
x y
= 3
3x y 2z = 6
2y 2z = 3
33
X 3.15 Solve each system. Express the solution set using vectors. Identify the particular
solution and the solution set of the homogeneous system. (These systems also
appear in Exercise 18.)
(a) 3x + 6y = 18
(b) x + y = 1
(c) x1
+ x3 = 4
x + 2y = 6
x y = 1
x1 x2 + 2x3 = 5
4x1 x2 + 5x3 = 17
(d) 2a + b c = 2
(e) x + 2y z
=3
(f) x
+z+w=4
2a
+c=3
2x + y
+w=4
2x + y
w=2
ab
=0
x y+z+w=1
3x + y + z
=7
3.16 Solve each system, giving the solution set in vector notation. Identify the
particular solution and the solution of the homogeneous system.
(a) 2x + y z = 1
(b) x
z
=1
(c) x y + z
=0
4x y
=3
y + 2z w = 3
y
+w=0
x + 2y + 3z w = 7
3x 2y + 3z + w = 0
y
w=0
(d) a + 2b + 3c + d e = 1
3a b + c + d + e = 3
X 3.17 For the system
2x y
w= 3
y + z + 2w = 2
x 2y z
= 1
which of these can be used as the particular solution part of some general solution?
0
2
1
3
1
4
(a)
(b)
(c)
5
1
8
0
0
1
X 3.18 Lemma 3.7 says that we can use any particular solution for ~p. Find, if possible,
a general solution to this system
x y
+w=4
2x + 3y z
=0
y+z+w=4
that uses the given vector as its particular solution.
0
5
2
0
1
1
(a)
(b)
(c)
0
7
1
4
10
1
3.19 One
is
nonsingular
while
the
other
is singular. Which is which?
1
3
1 3
(a)
(b)
4 12
4 12
X 3.20 Singular
or
nonsingular?
1
2
1 2 1
1 2
(a)
(b)
(c)
(Careful!)
1 3
3 6
1 3 1
1 2 1
2 2 1
(d) 1 1 3
(e) 1 0 5
3 4 7
1 1 4
34
X 3.21 Isthe
given
vector
in the set generated by the given set?
2
1
1
(a)
,{
,
}
3
4
5
1
2
1
(b) 0 , { 1 , 0 }
1
0
1
1
1
2
3
4
(c) 3 , { 0 , 1 , 3 , 2 }
0
4
5
0
1
1
2
3
0 1 0
(d)
1 , { 0 , 0 }
1
1
2
3.22 Prove that any linear system with a nonsingular matrix of coefficients has a
solution, and that the solution is unique.
3.23 In the proof of Lemma 3.6, what happens if there are no non-0 = 0 equations?
X 3.24 Prove that if ~s and ~t satisfy a homogeneous system then so do these vectors.
(a) ~s + ~t
(b) 3~s
(c) k~s + m~t for k, m R
Whats wrong with this argument: These three show that if a homogeneous system
has one solution then it has many solutions any multiple of a solution is another
solution, and any sum of solutions is a solution also so there are no homogeneous
systems with exactly one solution.?
3.25 Prove that if a system with only rational coefficients and constants has a
solution then it has at least one all-rational solution. Must it have infinitely many?
II
35
Linear Geometry
If you have seen the elements of vectors then this section is an optional
review. However, later work will refer to this material so if this is not a
review then it is not optional.
In the first section we had to do a bit of work to show that there are only
three types of solution sets singleton, empty, and infinite. But this is easy to
see geometrically in the case of systems with two equations and two unknowns.
Draw each two-unknowns equation as a line in the plane, and then the two lines
could have a unique intersection, be parallel, or be the same line.
Unique solution
Infinitely many
solutions
No solutions
3x + 2y = 7
x y = 1
3x + 2y = 7
3x + 2y = 4
3x + 2y = 7
6x + 4y = 14
These pictures arent a short way to prove the results from the prior section,
because those results apply to linear systems of any size. But they do broaden
our understanding of those results.
This section develops what we need to express our results geometrically. In
particular, while the two-dimensional case is familiar, to extend to systems with
more than two unknowns we shall need some higher-dimensional geometry.
II.1
Vectors in Space
Now, with a scale and a direction, we have a correspondence with R. For instance,
36
to find the point matching +2.17, start at 0 and head in the direction of 1, and
go 2.17 times as far.
The basic idea here, combining magnitude with direction, is the key to
extending to higher dimensions.
An object comprised of a magnitude and a direction is a vector (we use the
same word as in the prior section because we shall show below how to describe
such an object with a column vector). We can draw a vector as having some
length and pointing in some direction.
are equal, even though they start in different places They are equal because they
have equal lengths and equal directions. Again: those vectors are not just alike,
they are equal.
How can things that are in different places be equal? Think of a vector as
representing a displacement (the word vector is Latin for carrier or traveler).
These two squares undergo displacements that are equal despite that they start
in different places.
When we want to emphasize this property vectors have of not being anchored
we refer to them as free vectors. Thus, these free vectors are equal, as each is a
displacement of one over and two up.
More generally, vectors in the plane are the same if and only if they have the
same change in first components and the same change in second components: the
vector extending from (a1 , a2 ) to (b1 , b2 ) equals the vector from (c1 , c2 ) to
(d1 , d2 ) if and only if b1 a1 = d1 c1 and b2 a2 = d2 c2 .
Saying the vector that, were it to start at (a1 , a2 ), would extend to (b1 , b2 )
would be unwieldy. We instead describe that vector as
!
b1 a1
b2 a2
37
so that we represent the one over and two up arrows shown above in this way.
!
1
2
We often draw the arrow as starting at the origin, and we then say it is in the
canonical position (or natural position or standard position). When
!
v1
~v =
v2
is in canonical position then it extends from the origin to the endpoint (v1 , v2 ).
We will typically say the point
!
1
2
rather than the endpoint of the canonical position of that vector. Thus, we
will call each of these R2 .
!
x1
{(x1 , x2 ) | x1 , x2 R }
{
| x1 , x2 R }
x2
In the prior section we defined vectors and vector operations with an algebraic
motivation;
!
!
!
!
!
v1
rv1
v1
w1
v1 + w1
r
=
+
=
v2
rv2
v2
w2
v2 + w2
we can now understand those operations geometrically. For instance, if ~v
represents a displacement then 3~v represents a displacement in the same direction
but three times as far and 1~v represents a displacement of the same distance
as ~v but in the opposite direction.
~v
3~v
~v
38
The long arrow is the combined displacement in this sense: imagine that you are
walking on a ships deck. Suppose that in one minute the ships motion gives it
a displacement relative to the sea of ~v, and in the same minute your walking
gives you a displacement relative to the ships deck of w
~ . Then ~v + w
~ is your
displacement relative to the sea.
Another way to understand the vector sum is with the parallelogram rule.
Draw the parallelogram formed by the vectors ~v and w
~ . Then the sum ~v + w
~
extends along the diagonal to the far corner.
~v + w
~
w
~
~v
The above drawings show how vectors and vector operations behave in R2 .
We can extend to R3 , or to even higher-dimensional spaces where we have no
pictures, with the obvious generalization: the free vector that, if it starts at
(a1 , . . . , an ), ends at (b1 , . . . , bn ), is represented by this column.
b1 a1
..
.
bn an
Vectors are equal if they have the same representation. We arent too careful
about distinguishing between a point and the vector whose canonical representation ends at that point.
v1
..
n
R = { . | v1 , . . . , vn R}
vn
And, we do addition and scalar multiplication component-wise.
Having considered points, we next turn to lines. In R2 , the line through
(1, 2) and (3, 1) is comprised of (the endpoints of) the vectors in this set.
!
!
1
2
{
+t
| t R}
2
1
That description expresses this picture.
2
1
=
3
1
1
2
39
1
1
2
is the one shown in the picture as having its whole body in the line it is a
direction vector for the line. Note that points on the line to the left of x = 1
are described using negative values of t.
In R3 , the line through (1, 2, 1) and (2, 3, 2) is the set of (endpoints of) vectors
of this form
1
1
{ 2 + t 1 | t R }
1
1
1
1
3
{ 0 + t 1 + s 4 | t, s R }
5
8
4.5
The column vectors associated with the parameters come from these calculations.
2
1
1
2
1
3
4 = 4 0
1 = 1 0
8
3
5
4.5
0.5
5
As with the line, note that we describe some points in this plane with negative
ts or negative ss or both.
Calculus books often describe a plane by using a single linear equation.
x
P = { y | 2x + y + z = 4 }
z
40
2
1/2
1/2
P = { 0 + y 1 + z 0 | y, z R }
0
0
1
Shown in grey are the vectors associated with y and z, offset from the origin
by 2 units along the x-axis, so that their entire body lies in the plane. Thus the
vector sum of the two, shown in black, has its entire body in the plane along
with the rest of the parallelogram.
Generalizing, a set of the form {~p + t1~v1 + t2~v2 + + tk~vk | t1 , . . . , tk R}
where ~v1 , . . . ,~vk Rn and k 6 n is a k-dimensional linear surface (or k-flat ).
For example, in R4
2
1
0
{
+ t | t R}
3
0
0.5
0
is a line,
0
1
2
0
1
0
{ + t + s | t, s R }
0
0
1
0
1
0
is a plane, and
3
0
1
2
1
0
0
0
{ + r + s + t | r, s, t R}
2
0
1
1
0.5
1
0
0
is a three-dimensional linear surface. Again, the intuition is that a line permits
motion in one direction, a plane permits motion in combinations of two directions,
etc. When the dimension of the linear surface is one less than the dimension of
the space, that is, when in Rn we have an (n 1)-flat, the surface is called a
hyperplane.
A description of a linear surface can be misleading about the dimension. For
41
example, this
2
1
1
2
1
0
L = { + t + s | t, s R }
0
0
1
2
1
2
is a degenerate plane because it is actually a line, since the vectors are multiples
of each other and we can omit one.
1
1
0
1
L = { + r | r R}
1
0
2
1
We shall see in the Linear Independence section of Chapter Two what relationships among vectors causes the linear surface they generate to be degenerate.
We now can restate in geometric terms our conclusions from earlier. First,
the solution set of a linear system with n unknowns is a linear surface in Rn .
Specifically, it is a k-dimensional linear surface, where k is the number of free
variables in an echelon form version of the system. For instance, in the single
equation case the solution set is an n 1-dimensional hyperplane in Rn (where
n > 0). Second, the solution set of a homogeneous linear system is a linear
surface passing through the origin. Finally, we can view the general solution
set of any linear system as being the solution set of its associated homogeneous
system offset from the origin by a vector, namely by any particular solution.
Exercises
X 1.1 Find the canonical name for each vector.
(a) the vector from (2, 1) to (4, 2) in R2
(b) the vector from (3, 3) to (2, 5) in R2
(c) the vector from (1, 0, 6) to (5, 0, 3) in R3
(d) the vector from (6, 8, 8) to (6, 8, 8) in R3
X 1.2 Decide if the two vectors are equal.
(a) the vector from (5, 3) to (6, 2) and the vector from (1, 2) to (1, 1)
(b) the vector from (2, 1, 1) to (3, 0, 4) and the vector from (5, 1, 4) to (6, 0, 7)
X 1.3 Does (1, 0, 2, 1) lie on the line through (2, 1, 1, 0) and (5, 10, 1, 4)?
X 1.4 (a) Describe the plane through (1, 1, 5, 1), (2, 2, 2, 0), and (3, 1, 0, 4).
(b) Is the origin in that plane?
1.5 Describe the plane that contains this point and line.
2
1
1
0
{ 0 + 1 t | t R }
3
4
2
42
1
0
2
{ 1 + 3 k + 0 m | k, m R }
0
0
4
2
0.5
0.5
{ 0 + 1 y + 0 z | y, z R }
0
0
1
and the three vectors with endpoints (2, 0, 0), (1.5, 1, 0), and (1.5, 0, 1).
(a) Redraw the picture, including the vector in the plane that is twice as long
as the one with endpoint (1.5, 1, 0). The endpoint of your vector is not (3, 2, 0);
what is it?
(b) Redraw the picture, including the parallelogram in the plane that shows the
sum of the vectors ending at (1.5, 0, 1) and (1.5, 1, 0). The endpoint of the sum,
on the diagonal, is not (3, 1, 1); what is it?
1.9 Show that the line segments (a1 , a2 )(b1 , b2 ) and (c1 , c2 )(d1 , d2 ) have the same
lengths and slopes if b1 a1 = d1 c1 and b2 a2 = d2 c2 . Is that only if?
1.10 How should we define R0 ?
? X 1.11 [Math. Mag., Jan. 1957] A person traveling eastward at a rate of 3 miles per
hour finds that the wind appears to blow directly from the north. On doubling his
speed it appears to come from the north east. What was the winds velocity?
1.12 Euclid describes a plane as a surface which lies evenly with the straight lines
on itself. Commentators such as Heron have interpreted this to mean, (A plane
surface is) such that, if a straight line pass through two points on it, the line
coincides wholly with it at every spot, all ways. (Translations from [Heath], pp.
171-172.) Do planes, as described in this section, have that property? Does this
description adequately define planes?
II.2
Weve translated the first sections results about solution sets into geometric
terms, to better understand those sets. But we must be careful not to be misled
43
by our own terms labeling subsets of Rk of the forms {~p + t~v | t R} and
{~p + t~v + s~
w | t, s R } as lines and planes doesnt make them act like the
lines and planes of our past experience. Rather, we must ensure that the names
suit the sets. While we cant prove that the sets satisfy our intuition we
cant prove anything about intuition in this subsection well observe that a
result familiar from R2 and R3 , when generalized to arbitrary Rn , supports the
idea that a line is straight and a plane is flat. Specifically, well see how to do
Euclidean geometry in a plane by giving a definition of the angle between two
Rn vectors, in the plane that they generate.
2.1 Definition The length of a vector ~v Rn is the square root of the sum of the
squares of its components.
q
|~v | = v21 + + v2n
2.2 Remark This is a natural generalization of the Pythagorean Theorem. A
classic motivating discussion is in [Polya].
For any nonzero ~v, the vector ~v/|~v| has length one. We say that the second
normalizes ~v to length one.
We can use that to get a formula for the angle between two vectors. Consider
two vectors in R3 where neither is a multiple of the other
~v
~u
(the special case of multiples will turn out below not to be an exception). They
determine a two-dimensional plane for instance, put them in canonical position
and take the plane formed by the origin and the endpoints. In that plane consider
the triangle with sides ~u, ~v, and ~u ~v.
Apply the Law of Cosines: |~u ~v |2 = |~u |2 + |~v |2 2 |~u | |~v | cos where is the
44
u 1 v1 + u 2 v2 + u 3 v3
)
|~u | |~v |
45
finish
~u + ~v
start
~v
~u
Proof (Well use some algebraic properties of dot product that we have not yet
Because the Triangle Inequality says that in any Rn the shortest cut between
two endpoints is simply the line segment connecting them, linear surfaces have
no bends.
46
Back to the definition of angle measure. The heart of the Triangle Inequalitys
proof is the ~u ~v 6 |~u | |~v | line. We might wonder if some pairs of vectors satisfy
the inequality in this way: while ~u ~v is a large number, with absolute value
bigger than the right-hand side, it is a negative large number. The next result
says that does not happen.
2.6 Corollary (Cauchy-Schwarz Inequality) For any ~u,~v Rn ,
| ~u ~v | 6 | ~u | |~v |
with equality if and only if one vector is a scalar multiple of the other.
Proof The Triangle Inequalitys proof shows that ~
u ~v 6 |~u | |~v | so if ~u ~v is
QED
~u ~v
)
|~u | |~v |
1
1
!
=0
Weve drawn the arrows away from canonical position but nevertheless the
vectors are orthogonal.
2.10 Example The R3 angle formula given at the start of this subsection is a
special case of the definition. Between these two
47
0
3
2
1
1
0
the angle is
(1)(0) + (1)(3) + (0)(2)
3
arccos(
) = arccos( )
2
2
2
2
2
2
1 +1 +0 0 +3 +2
2 13
approximately 0.94 radians. Notice that these vectors are not orthogonal. Although the yz-plane may appear to be perpendicular to the xy-plane, in fact
the two planes are that way only in the weak sense that there are vectors in each
orthogonal to all vectors in the other. Not every vector in each is orthogonal to
all vectors in the other.
Exercises
X 2.11 Find the length of each vector.
4
3
1
(a)
(b)
(c) 1
1
2
1
1
1
(e)
1
0
X 2.12 Find the angle between each two, if it is defined.
1
0
1
1
1
1
,
(b) 2 , 4
(c)
, 4
(a)
4
2
2
0
1
1
X 2.13 [Ohanian] During maneuvers preceding the Battle of Jutland, the British battle
cruiser Lion moved as follows (in nautical miles): 1.2 miles north, 6.1 miles 38
degrees east of south, 4.0 miles at 89 degrees east of north, and 6.5 miles at 31
degrees east of north. Find the distance between starting and ending positions.
(Ignore the earths curvature.)
0
(d) 0
0
48
49
2.35 Show that if a vector is perpendicular to each of two others then it is perpendicular to each vector in the plane they generate. (Remark. They could generate a
degenerate plane a line or a point but the statement remains true.)
2.36 Prove that, where ~u,~v Rn are nonzero vectors, the vector
~u
~v
+
|~u | |~v |
bisects the angle between them. Illustrate in R2 .
2.37 Verify that the definition of angle is dimensionally correct: (1) if k > 0 then the
cosine of the angle between k~u and ~v equals the cosine of the angle between ~u and
~v, and (2) if k < 0 then the cosine of the angle between k~u and ~v is the negative of
the cosine of the angle between ~u and ~v.
X 2.38 Show that the inner product operation is linear : for ~u,~v, w
~ Rn and k, m R,
~u (k~v + m~
w) = k(~u ~v) + m(~u w
~ ).
X 2.39 The geometric mean of two positive reals x, y is xy. It is analogous to the
arithmetic mean (x + y)/2. Use the Cauchy-Schwarz inequality to show that the
geometric mean of any x, y R is less than or equal to the arithmetic mean.
? 2.40 [Cleary] Astrologers claim to be able to recognize trends in personality and
fortune that depend on an individuals birthday by somehow incorporating where
the stars were 2000 years ago. Suppose that instead of star-gazers coming up
with stuff, math teachers who like linear algebra (well call them vectologers) had
come up with a similar system as follows: Consider your birthday as a row vector
(month day). For instance, I was born on July 12 so my vector would be (7 12).
Vectologers have made the rule that how well individuals get along with each other
depends on the angle between vectors. The smaller the angle, the more harmonious
the relationship.
(a) Find the angle between your vector and mine, in radians.
(b) Would you get along better with me, or with a professor born on September 19?
(c) For maximum harmony in a relationship, when should the other person be
born?
(d) Is there a person with whom you have a worst case relationship, i.e., your
vector and theirs are orthogonal? If so, what are the birthdate(s) for such people?
If not, explain why not.
? 2.41 [Am. Math. Mon., Feb. 1933] A ship is sailing with speed and direction ~v1 ; the
wind blows apparently (judging by the vane on the mast) in the direction of a
vector a
~ ; on changing the direction and speed of the ship from ~v1 to ~v2 the apparent
wind is in the direction of a vector ~b.
Find the vector velocity of the wind.
2.42 Verify the Cauchy-Schwarz inequality by first proving Lagranges identity:
!
!
!2
X
X
X
X
2
2
aj bj
=
aj
bj
(ak bj aj bk )2
16j6n
16j6n
16j6n
16k<j6n
and then noting that the final term is positive. This result is an improvement over
Cauchy-Schwarz because it gives a formula for the difference between the two sides.
Interpret that difference in R2 .
50
III
III.1
Gauss-Jordan Reduction
1
1 +3
0
0
1 2 2
1 1
2 +3
1
3
7 0 1
1
1 1
0 0
1
(1/4)3
0
0
2
3
4
7
8
1 2 2
1 3
7
0 1
2
1 1 0 2
1 0
33 +2
2 +1
0 1 0 1
0 1
23 +1
0 0 1 2
0 0
51
eliminate all of the
0
0
1
1
2
4 2 6
0 4 8
!
1 1/2 7/2
(1/2)1
0
1
2
(1/4)2
!
1 0 5/2
(1/2)2 +1
0 1
2
The answer is x = 5/2 and y = 2.
This extension of Gausss Method is the Gauss-Jordan Method or GaussJordan reduction.
1.3 Definition A matrix or linear system is in reduced echelon form if, in addition
to being in echelon form, each leading entry is a 1 and is the only nonzero entry
in its column.
The cost of using Gauss-Jordan reduction to solve a system is the additional
arithmetic. The benefit is that we can just read off the solution set description.
In any echelon form system, reduced or not, we can read off when the system
has an empty solution set because there is a contradictory equation. We can
read off when the system has a one-element solution set because there is no
contradiction and every variable is the leading variable in some row. And, we
can read off when the system has an infinite solution set because there is no
contradiction and at least one variable is free.
However, in reduced echelon form we can read off not just the size of the
solution set but also its description. We have no trouble describing the solution
52
set when it is empty, of course. Example 1.1 and 1.2 show how in a single
element solution set case the single element is in the column of constants. The
next example shows how to read the parametrization of an infinite solution set.
1.4 Example
0
0
6
3
3
1
1
1
2
4
2
1
5
2 +3
(1/2)1
2 6
0 3
0 0
1 2
1 4
0 2
(4/3)3 +2 32 +1
3 +1
(1/3)2
(1/2)3
1
4
1 0
0 1
0 0
1/2
1/3
0
0
0
1
9/2
3
2
1/2x3
x2 + 1/3x3
= 9/2
=
3
x4 = 2
x1
1/2
9/2
x 3 1/3
2
S = { =
x3 | x3 R }
+
x3 0 1
0
2
x4
Thus echelon form isnt some kind of one best form for systems. Other forms,
such as reduced echelon form, have advantages and disadvantages. Instead of
picturing linear systems (and the associated matrices) as things we operate
on, always directed toward the goal of echelon form, we can think of them as
interrelated when we can get from one to another by row operations. The rest
of this subsection develops this relationship.
1.5 Lemma Elementary row operations are reversible.
Proof For any matrix A, the effect of swapping rows is reversed by swapping
i j j i
ki
(1/k)i
ki +j ki +j
A
QED
53
Again, the point of view that we are developing, supported now by the lemma,
is that the term reduces to is misleading: where A B, we shouldnt think
of B as after A or simpler than A. Instead we should think of the two matrices
as interrelated. Below is a picture. It shows the matrices from the start of this
section and their reduced echelon form version in a cluster, as inter-reducible.
2
0
2
4
0
1
1
0
2
3
1
0
0
1
1
1
2
0
2
1
We say that matrices that reduce to each other are equivalent with respect
to the relationship of row reducibility. The next result justifies this, using the
definition of an equivalence.
1.6 Lemma Between matrices, reduces to is an equivalence relation.
Proof We must check the conditions (i) reflexivity, that any matrix reduces
...
54
One of the classes is the cluster of interrelated matrices from the start of this
section pictured earlier, expanded to include all of the nonsingular 22 matrices.
The next subsection proves that the reduced echelon form of a matrix is
unique. Rephrased in terms of the row-equivalence relationship, we shall prove
that every matrix is row equivalent to one and only one reduced echelon form
matrix. In terms of the partition what we shall prove is: every equivalence class
contains one and only one reduced echelon form matrix. So each reduced echelon
form matrix serves as a representative of its class.
Exercises
X 1.8 Use Gauss-Jordan reduction to solve each system.
(a) x + y = 2
(b) x
z=4
(c) 3x 2y = 1
xy=0
2x + 2y
=1
6x + y = 1/2
(d) 2x y
= 1
x + 3y z = 5
y + 2z = 5
1.9 Do Gauss-Jordan reduction.
(a) x + y z = 3
(b) x + y + 2z = 0
2x y z = 1
2x y + z = 1
3x + y + 2z = 0
4x + y + 5z = 1
X 1.10 Find the reduced
echelon form ofeach matrix.
1
3
1
1 0 3 1
2 1
(a)
(b) 2
(c) 1 4 2 1
0
4
1 3
1 3 3
3 4 8 1
0 1 3 2
(d) 0 0 5 6
2
5
2
1 5 1 5
1.11 Get
the reducedechelon form
of each.
0
2 1
1 3 1
(a) 2 1 1
(b) 2 6 2
2 1 0
1 0 0
X 1.12 Find each solution set by using Gauss-Jordan reduction and then reading off
the parametrization.
(a) 2x + y z = 1
4x y
=3
(b) x
z
=1
y + 2z w = 3
x + 2y + 3z w = 7
(c) x y + z
=0
y
+w=0
3x 2y + 3z + w = 0
y
w=0
(d) a + 2b + 3c + d e = 1
3a b + c + d + e = 3
55
2 1 1 3
6 4 1 2
1 5 1 5
X 1.14 List the reduced echelon forms possible for each size.
(a) 22
(b) 23
(c) 32
(d) 33
X 1.15 What results from applying Gauss-Jordan reduction to a nonsingular matrix?
1.16 [Cleary] Consider the following relationship on the set of 22 matrices: we say
that A is sum-what like B if the sum of all of the entries in A is the same as the
sum of all the entries in B. For instance, the zero matrix would be sum-what like
the matrix whose first row had two sevens, and whose second row had two negative
sevens. Prove or disprove that this is an equivalence relation on the set of 22
matrices.
1.17 The proof of Lemma 1.5 contains a reference to the i 6= j condition on the row
combination operation.
(a) Write down a 22 matrix with nonzero entries, and show that the 1 1 + 1
operation is not reversed by 1 1 + 1 .
(b) Expand the proof of that lemma to make explicit exactly where it uses the
i 6= j condition on combining.
1.18 [Cleary] Consider the set of students in a class. Which of the following relationships are equivalence relations? Explain each answer in at least a sentence.
(a) Two students x, y are related if x has taken at least as many math classes as y.
(b) Students x, y are related if they have names that start with the same letter.
1.19 Show that each of these is an equivalence on the set of 22 matrices. Describe
the equivalence classes.
(a) Two matrices are related if they have the same product down the diagonal,
that is, if the product of the entries in the upper left and lower right are equal.
(b) Two matrices are related if they both have at least one entry that is a 1, or if
neither does.
1.20 Show that each is not an equivalence on the set of 22 matrices.
(a) Two matrices A, B are related if a1,1 = b1,1 .
(b) Two matrices are related if the sum of their entries are within 5, that is, A is
related to B if |(a1,1 + + a2,2 ) (b1,1 + + b2,2 )| < 5.
III.2
We will close this chapter by proving that every matrix is row equivalent to one
and only one reduced echelon form matrix. The ideas here will reappear, and be
further developed, in the next chapter.
56
The crucial observation concerns how row operations act to transform one
matrix into another: the new rows are linear combinations of the old.
2.1 Example Consider this Gauss-Jordan reduction.
!
!
2 1 0
2
1
0
(1/2)1 +2
1 3 5
0 5/2 5
!
1 1/2 0
(1/2)1
0
1
2
(2/5)2
!
1 0 1
(1/2)2 +1
0 1 2
Denoting those matrices A D G B and writing the rows of A as 1 and
2 , etc., we have this.
!
!
1
1 = 1
(1/2)1 +2
2
2 = (1/2)1 + 2
!
1 = (1/2)1
(1/2)1
2 = (1/5)1 + (2/5)2
(2/5)2
!
1 = (3/5)1 (1/5)2
(1/2)2 +1
2 = (1/5)1 + (2/5)2
2.2 Example The fact that Gaussian operations combine rows linearly also holds
if there is a row swap. With this A, D, G, and B
!
!
!
!
0 2
1 1
1 1
1 0
1 2
(1/2)2
2 +1
1 1
0 2
0 1
0 1
we get these linear relationships.
~1
~2
1 2
~1 =
~2
~2 =
~1
(1/2)2
2 +1
~1 =
~2
~2 = (1/2)~
1
~ 1 = (1/2)~
1 + 1
~2
~ 2 = (1/2)~
57
QED
2.4 Corollary Where one matrix reduces to another, each row of the second is a
linear combination of the rows of the first.
Proof For any two interreducible matrices A and B there is some minimum
number of row operations that will take one to the other. We proceed by
induction on that number.
In the base step, that we can go from the first to the second using zero
reduction operations, the two matrices are equal. Then each row of B is trivially
~i = 0
a combination of As rows
~1 + + 1
~i + + 0
~ m.
For the inductive step assume the inductive hypothesis: with k > 0, any
matrix that can be derived from A in k or fewer operations has rows that are
linear combinations of As rows. Consider a matrix B such that reducing A to B
requires k + 1 operations. In that reduction there is a next-to-last matrix G, so
that A G B. The inductive hypothesis applies to this G because
it is only k steps away from A. That is, each row of G is a linear combination of
the rows of A.
We will verify that the rows of B are linear combinations of the rows of G.
Then the Linear Combination Lemma, Lemma 2.3, applies to show that the
rows of B are linear combinations of the rows of A.
If the row operation taking G to B is a swap then the rows of B are just the
rows of G reordered and each row of B is a linear combination of the rows of G.
If the operation taking G to B is multiplication of a row by a scalar ci then
~ i = c~i and the other rows are unchanged. Finally, if the row operation is
adding a multiple of one row to another ri + j then only row j of B differs from
~ j = ri + j , which is indeed a linear combinations
the matching row of G, and
of the rows of G.
Because we have proved both a base step and an inductive step, the proposition follows by the principle of mathematical induction.
QED
We now have the insight that Gausss Method builds linear combinations
of the rows. But of course its goal is to end in echelon form, since that is a
58
2 3 7 8 0
0 0 1 5 1
R=
0 0 0 3 3
0 0 0 0 2
0
1
0
1
x1 has been removed from x5 s equation. That is, Gausss Method has made
x5 s row in some way independent of x1 s row.
The following result makes this intuition precise. What Gausss Method
eliminates is linear relationships among the rows.
2.5 Lemma In an echelon form matrix, no nonzero row is a linear combination
of the other nonzero rows.
Proof Let R be an echelon form matrix and consider its non-~0 rows. First
()
The matrix is in echelon form so every row after the first has a zero entry in that
column r2,`1 = = rm,`1 = 0. Thus equation () shows that c1 = 0, because
r1,`1 6= 0 as it leads the row.
The inductive step is much the same as the base step. Again consider
equation (). We will prove that if the coefficient ci is 0 for each row index
i { 1, . . . , k} then ck+1 is also 0. We focus on the entries from column `k+1 .
0 = c1 r1,`k+1 + + ck+1 rk+1,`k+1 + + cm rm,`k+1
59
number of columns n.
The base case is that the matrix has n = 1 column. If this is the zero matrix
then its echelon form is the zero matrix. If instead it has any nonzero entries
then when the matrix is brought to reduced echelon form it must have at least
one nonzero entry, which must be a 1 in the first row. Either way, its reduced
echelon form is unique.
For the inductive step we assume that n > 1 and that all m row matrices
having fewer than n columns have a unique reduced echelon form. Consider
an mn matrix A and suppose that B and C are two reduced echelon form
matrices derived from A. We will show that these two must be equal.
be the matrix consisting of the first n 1 columns of A. Observe
Let A
that any sequence of row operations that bring A to reduced echelon form will
to reduced echelon form. By the inductive hypothesis this reduced
also bring A
is unique, so if B and C differ then the difference must occur
echelon form of A
in column n.
We finish the inductive step, and the argument, by showing that the two
cannot differ only in that column. Consider a homogeneous system of equations
for which A is the matrix of coefficients.
a1,1 x1 + a1,2 x2 + + a1,n xn = 0
a2,1 x1 + a2,2 x2 + + a2,n xn = 0
..
.
am,1 x1 + am,2 x2 + + am,n xn = 0
()
By Theorem One.I.1.5 the set of solutions to that system is the same as the set
of solutions to Bs system
b1,1 x1 + b1,2 x2 + + b1,n xn = 0
b2,1 x1 + b2,2 x2 + + b2,n xn = 0
..
.
bm,1 x1 + bm,2 x2 + + bm,n xn = 0
()
60
and to Cs.
c1,1 x1 + c1,2 x2 + + c1,n xn = 0
c2,1 x1 + c2,2 x2 + + c2,n xn = 0
..
.
cm,1 x1 + cm,2 x2 + + cm,n xn = 0
()
With B and C different only in column n, suppose that they differ in row i.
Subtract row i of () from row i of () to get the equation (bi,n ci,n )xn = 0.
Weve assumed that bi,n 6= ci,n so xn = 0. Thus in () and () the n-th
column contains a leading entry, or else the variable xn would be free. Thats a
contradiction because with B and C equal on the first n 1 columns, the leading
entries in the n-th column would have to be in the same row, and with both
matrices in reduced echelon form, both leading entries would have to be 1, and
would have to be the only nonzero entries in that column. So B = C.
QED
That result answers the two questions from this sections introduction: do
any two echelon form versions of a linear system have the same number of free
variables, and if so are they exactly the same variables? We get from any echelon
form version to the reduced echelon form by eliminating up, so any echelon form
version of a system has the same free variables as the reduced echelon form, and
therefore uniqueness of reduced echelon form gives that the same variables are
free in all echelon form version of a system. Thus both questions are answered
yes. There is no linear system and no combination of row operations such that,
say, we could solve the system one way and get y and z free but solve it another
way and get y and w free.
We close with a recap. In Gausss Method we start with a matrix and then
derive a sequence of other matrices. We defined two matrices to be related if we
can derive one from the other. That relation is an equivalence relation, called
row equivalence, and so partitions the set of all matrices into row equivalence
classes.
13
27
13
01
...
(There are infinitely many matrices in the pictured class, but weve only got
room to show two.) We have proved there is one and only one reduced echelon
form matrix in each row equivalence class. So the reduced echelon form is a
canonical form for row equivalence: the reduced echelon form matrices are
61
?
?
10
01
...
1 0
0 1
0 0
0
1
62
2.9 Example We can describe all the classes by listing all possible reduced echelon
form matrices. Any 22 matrix lies in one of these: the class of matrices row
equivalent to this,
!
0 0
0 0
the infinitely many classes of matrices row equivalent to one of this type
!
1 a
0 0
where a R (including a = 0), the class of matrices row equivalent to this,
!
0 1
0 0
and the class of matrices row equivalent to this
!
1 0
0 1
(this is the class of nonsingular 22 matrices).
Exercises
X 2.10 Decide if the matrices are row equivalent.
1 0 2
1 0 2
1 2
0 1
(a)
,
(b) 3 1 1 , 0 2 10
4 8
1 2
5 1 5
2 0 4
2 1 1
1 0 2
1 1 1
0 3 1
(c) 1 1 0 ,
(d)
,
0 2 10
1 2 2
2 2 5
4 3 1
1 1 1
0 1 2
(e)
,
0 0 3
1 1 1
2.11 Describe the matrices in each of the classes represented in Example 2.9.
2.12 Describe
the row equivalence
class of these.
all matrices
in
1 0
1 2
1 1
(a)
(b)
(c)
0 0
2 4
1 3
2.13 How many row equivalence classes are there?
2.14 Can row equivalence classes contain different-sized matrices?
2.15 How big are the row equivalence classes?
(a) Show that for any matrix of all zeros, the class is finite.
(b) Do any other classes contain only finitely many members?
X 2.16 Give two reduced echelon form matrices that have their leading entries in the
same columns, but that are not row equivalent.
X 2.17 Show that any two nn nonsingular matrices are row equivalent. Are any two
singular matrices row equivalent?
X 2.18 Describe all of the row equivalence classes containing these.
63
(b) 2 3 matrices
(c) 3 2 matrices
1
3
1
2
0
4
3
3
5
Topic
Computer Algebra Systems
The linear systems in this chapter are small enough that their solution by hand
is easy. For large systems, including those involving thousands of equations,
we need a computer. There are special purpose programs such as LINPACK
for this. Also popular are general purpose computer algebra systems including
Maple, Mathematica, or MATLAB, and Sage.
For example, in the Topic on Networks, we need to solve this.
i0 i1 i2
i1
i2
= 0
i3
i5
= 0
i4 + i5
= 0
i3 + i4
i6 = 0
5i1
+ 10i3
= 10
2i2
+ 4i4
= 10
5i1 2i2
+ 50i5
= 0
Doing this by hand would take time and be error-prone. A computer is better.
Here is that system solved with Sage. (There are many ways to do this; the
one here has the advantage of simplicity.)
sage: var('i0,i1,i2,i3,i4,i5,i6')
(i0, i1, i2, i3, i4, i5, i6)
sage: network_system=[i0-i1-i2==0, i1-i3-i5==0,
....:
i2-i4+i5==0, i3+i4-i6==0, 5*i1+10*i3==10,
....:
2*i2+4*i4==10, 5*i1-2*i2+50*i5==0]
sage: solve(network_system, i0,i1,i2,i3,i4,i5,i6)
[[i0 == (7/3), i1 == (2/3), i2 == (5/3), i3 == (2/3),
i4 == (5/3), i5 == 0, i6 == (7/3)]]
Magic.
Here is the same system solved under Maple. We enter the array of coefficients
and the vector of constants, and then we get the solution.
> A:=array( [[1,-1,-1,0,0,0,0],
[0,1,0,-1,0,-1,0],
[0,0,1,0,-1,1,0],
[0,0,0,1,1,0,-1],
[0,5,0,10,0,0,0],
65
[0,0,2,0,4,0,0],
[0,5,-2,0,0,50,0]] );
> u:=array( [0,0,0,0,10,10,0] );
> linsolve(A,u);
7 2 5 2 5
7
[ -, -, -, -, -, 0, - ]
3 3 3 3 3
3
If a system has infinitely many solutions then the program will return a
parametrization.
Exercises
1 Use the computer to solve the two problems that opened this chapter.
(a) This is the Statics problem.
40h + 15c = 100
25c = 50 + 50h
(b) This is the Chemistry problem.
7h = 7j
8h + 1i = 5j + 2k
1i = 3j
3i = 6j + 1k
2 Use the computer to solve these systems from the first subsection, or conclude
many solutions or no solutions.
(a) 2x + 2y = 5
(b) x + y = 1
(c) x 3y + z = 1
(d) x y = 1
x 4y = 0
x+y=2
x + y + 2z = 14
3x 3y = 2
(e)
4y + z = 20
(f) 2x
+ z+w= 5
2x 2y + z = 0
y
w = 1
x
+z= 5
3x
zw= 0
x + y z = 10
4x + y + 2z + w = 9
3 Use the computer to solve these systems from the second subsection.
(a) 3x + 6y = 18
(b) x + y = 1
(c) x1
+ x3 = 4
x + 2y = 6
x y = 1
x1 x2 + 2x3 = 5
4x1 x2 + 5x3 = 17
(d) 2a + b c = 2
(e) x + 2y z
=3
(f) x
+z+w=4
2a
+c=3
2x + y
+w=4
2x + y
w=2
ab
=0
x y+z+w=1
3x + y + z
=7
4 What does the computer give for the solution of the general 22 system?
ax + cy = p
bx + dy = q
Topic
Accuracy of Computations
Gausss Method lends itself to computerization. The code below illustrates. It
operates on an nn matrix named a, doing row combinations using the first
row, then the second row, etc.
for(row=1; row<=n-1; row++){
for(row_below=row+1; row_below<=n; row_below++){
multiplier=a[row_below,row]/a[row,row];
for(col=row; col<=n; col++){
a[row_below,col]-=multiplier*a[row,col];
}
}
}
This is in the C language. The for(row=1; row<=n-1; row++){ .. } loop initializes row at 1 and then iterates while row is less than or equal to n 1, each
time through incrementing row by one with the ++ operation. The other nonobvious language construct is that the -= in the innermost loop has the effect of
a[row_below,col]=-1*multiplier*a[row,col]+a[row_below,col].
While that code is a first take on mechanizing Gausss Method, it is naive.
For one thing, it assumes that the entry in the row,row position is nonzero. So
one way that it needs to be extended is to cover the case where finding a zero
in that location leads to a row swap or to the conclusion that the matrix is
singular.
We could add some if statements to cover those cases but we will instead
consider another way in which this code is naive. It is prone to pitfalls arising
from the computers reliance on floating point arithmetic.
For example, above we have seen that we must handle a singular system as a
separate case. But systems that are nearly singular also require care. Consider
this one (the extra digits are in the ninth significant place).
x + 2y = 3
1.000 000 01x + 2y = 3.000 000 01
()
67
it will represent the second equation internally as 1.000 000 0x + 2y = 3.000 000 0,
losing the digits in the ninth place. Instead of reporting the correct solution,
this computer will think that the two equations are equal and it will report that
the system is singular.
For some intuition about how the computer could come up with something
that far off, consider this graph of the system.
(1, 1)
We cannot tell the two lines apart; this system is nearly singular in the sense that
the two lines are nearly the same line. This gives the system () the property
that a small change in an equation can cause a large change in the solution. For
instance, changing the 3.000 000 01 to 3.000 000 03 changes the intersection point
from (1, 1) to (3, 0). The solution changes radically depending on the ninth digit,
which explains why an eight-place computer has trouble. A problem that is very
sensitive to inaccuracy or uncertainties in the input values is ill-conditioned.
The above example gives one way in which a system can be difficult to
solve on a computer. It has the advantage that the picture of nearly-equal lines
gives a memorable insight into one way for numerical difficulties to happen.
Unfortunately this insight isnt useful when we wish to solve some large system.
We typically will not understand the geometry of an arbitrary large system.
There are other ways that a computers results may be unreliable, besides
that the angle between some of the linear surfaces is small. For example, consider
this system (from [Hamming]).
0.001x + y = 1
xy=0
()
68
The computer decides from the second equation that y = 1 and with that it
concludes from the first equation that x = 0. The y value is close but the x is
bad the ratio of the actual answer to the computers answer is infinite. In
short, another cause of unreliable output is the computers reliance on floating
point arithmetic when the system-solving code leads to using leading entries
that are small.
An experienced programmer may respond by using double precision, which
retains sixteen significant digits, or perhaps using some even larger size. This
will indeed solve many problems. However, double precision has greater memory
requirements and besides we can obviously tweak the above to give the same
trouble in the seventeenth digit, so double precision isnt a panacea. We need a
strategy to minimize numerical trouble as well as some guidance about how far
we can trust the reported solutions.
A basic improvement on the naive code above is to not determine the factor
to use for row combinations by simply taking the entry in the row,row position,
but rather to look at all of the entries in the row column below the row,row entry
and take one that is likely to give reliable results because it is not too small.
This is partial pivoting.
For example, to solve the troublesome system () above we start by looking
at both equations for a best entry to use, and take the 1 in the second equation
as more likely to give good results. The combination step of .0012 + 1
gives a first equation of 1.001y = 1, which the computer will represent as
(1.0 100 )y = 1.0 100 , leading to the conclusion that y = 1 and, after backsubstitution, that x = 1, both of which are close to right. We can adapt the
code from above to do this.
for(row=1; row<=n-1; row++){
/* find the largest entry in this column (in row max) */
max=row;
for(row_below=row+1; row_below<=n; row_below++){
if (abs(a[row_below,row]) > abs(a[max,row]));
max = row_below;
}
/* swap rows to move that best entry up */
for(col=row; col<=n; col++){
temp=a[row,col];
a[row,col]=a[max,col];
a[max,col]=temp;
}
/* proceed as before */
for(row_below=row+1; row_below<=n; row_below++){
multiplier=a[row_below,row]/a[row,row];
for(col=row; col<=n; col++){
a[row_below,col]-=multiplier*a[row,col];
}
}
}
A full analysis of the best way to implement Gausss Method is beyond the
scope of this book (see [Wilkinson 1965]), but the method recommended by
69
most experts first finds the best entry among the candidates and then scales it
to a number that is less likely to give trouble. This is scaled partial pivoting.
In addition to returning a result that is likely to be reliable, most well-done
code will return a conditioning number that describes the factor by which
uncertainties in the input numbers could be magnified to become inaccuracies
in the results returned (see [Rice]).
The lesson is that just because Gausss Method always works in theory, and
just because computer code correctly implements that method, doesnt mean
that the answer is reliable. In practice, always use a package where experts have
worked hard to counter what can go wrong.
Exercises
1 Using two decimal places, add 253 and 2/3.
2 This intersect-the-lines problem contrasts with the example discussed above.
(1, 1)
x + 2y = 3
3x 2y = 1
Illustrate that in this system some small change in the numbers will produce only
a small change in the solution by changing the constant in the bottom equation to
1.008 and solving. Compare it to the solution of the unchanged system.
3 Consider this system ([Rice]).
0.000 3x + 1.556y = 1.569
0.345 4x 2.346y = 1.018
(a) Solve it.
(b) Solve it by rounding at each step to four digits.
4 Rounding inside the computer often has an effect on the result. Assume that your
machine has eight significant digits.
(a) Show that the machine will compute (2/3) + ((2/3) (1/3)) as unequal to
((2/3) + (2/3)) (1/3). Thus, computer arithmetic is not associative.
(b) Compare the computers version of (1/3)x + y = 0 and (2/3)x + 2y = 0. Is
twice the first equation the same as the second?
5 Ill-conditioning is not only dependent on the matrix of coefficients. This example
[Hamming] shows that it can arise from an interaction between the left and right
sides of the system. Let be a small real.
3x + 2y + z =
6
2x + 2y + 2z = 2 + 4
x + 2y z = 1 +
(a) Solve the system by hand. Notice that the s divide out only because there is
an exact cancellation of the integer parts on the right side as well as on the left.
(b) Solve the system by hand, rounding to two decimal places, and with = 0.001.
Topic
Analyzing Networks
The diagram below shows some of a cars electrical network. The battery is on
the left, drawn as stacked line segments. The wires are lines, shown straight and
with sharp right angles for neatness. Each light is a circle enclosing a loop.
Off
12V
Dimmer
Hi
Lo
R
Brake
Lights
R
Parking
Lights
Dome
Light
Light
Switch
Brake
Actuated
Switch
R
Rear
Lights
Door
Actuated
Switch
Headlights
The designer of such a network needs to answer questions such as: how much
electricity flows when both the hi-beam headlights and the brake lights are on?
We will use linear systems to analyze simple electrical networks.
For the analysis we need two facts about electricity and two facts about
electrical networks.
The first fact is that a battery is like a pump, providing a force impelling
the electricity to flow, if there is a path. We say that the battery provides a
potential. For instance, when the driver steps on the brake then the switch
makes contact and so makes a circuit on the left side of the diagram, which
includes the brake lights. Once the circuit exists, the batterys force creates a
current flowing through that circuit, lighting the lights.
The second electrical fact is that in some kinds of network components the
amount of flow is proportional to the force provided by the battery. That is, for
each such component there is a number, its resistance, such that the potential
71
is equal to the flow times the resistance. Potential is measured in volts, the
rate of flow is in amperes, and resistance to the flow is in ohms; these units are
defined so that volts = amperes ohms.
Components with this property, that the voltage-amperage response curve is a
line through the origin, are resistors. For example, if a resistor measures 2 ohms
then wiring it to a 12 volt battery results in a flow of 6 amperes. Conversely, if
electrical current of 2 amperes flows through that resistor then there must be
a 4 volt potential difference between its ends. This is the voltage drop across
the resistor. One way to think of the electrical circuits that we consider here is
that the battery provides a voltage rise while the other components are voltage
drops.
The facts that we need about networks are Kirchoff s Current Law, that for
any point in a network the flow in equals the flow out and Kirchoff s Voltage
Law, that around any circuit the total drop equals the total rise.
We start with the network below. It has a battery that provides the potential
to flow and three resistors, shown as zig-zags. When components are wired one
after another, as here, they are in series.
20 volt
potential
2 ohm
resistance
3 ohm
resistance
5 ohm
resistance
By Kirchoffs Voltage Law, because the voltage rise is 20 volts, the total voltage
drop must also be 20 volts. Since the resistance from start to finish is 10 ohms
(the resistance of the wire connecting the components is negligible), the current
is (20/10) = 2 amperes. Now, by Kirchhoffs Current Law, there are 2 amperes
through each resistor. Therefore the voltage drops are: 4 volts across the 2 ohm
resistor, 10 volts across the 5 ohm resistor, and 6 volts across the 3 ohm resistor.
The prior network is simple enough that we didnt use a linear system but
the next one is more complicated. Here the resistors are in parallel.
20 volt
12 ohm
8 ohm
We begin by labeling the branches as below. Let the current through the left
branch of the parallel portion be i1 and that through the right branch be i2 ,
72
and also let the current through the battery be i0 . Note that we dont need to
know the actual direction of flow if current flows in the direction opposite to
our arrow then we will get a negative number in the solution.
i0
i1
i2
The Current Law, applied to the split point in the upper right, gives that
i0 = i1 + i2 . Applied to the split point lower right it gives i1 + i2 = i0 . In
the circuit that loops out of the top of the battery, down the left branch of the
parallel portion, and back into the bottom of the battery, the voltage rise is
20 while the voltage drop is i1 12, so the Voltage Law gives that 12i1 = 20.
Similarly, the circuit from the battery to the right branch and back to the
battery gives that 8i2 = 20. And, in the circuit that simply loops around in the
left and right branches of the parallel portion (we arbitrarily take the direction
of clockwise), there is a voltage rise of 0 and a voltage drop of 8i2 12i1 so
8i2 12i1 = 0.
i0
i0 +
i1 i2 = 0
i1 + i2 = 0
12i1
= 20
8i2 = 20
12i1 + 8i2 = 0
2
50
10 volt
10
This is a Wheatstone bridge (see Exercise 3). To analyze it, we can place the
arrows in this way.
73
i1 .
& i2
i5
i0
i3 &
. i4
Kirchhoffs Current Law, applied to the top node, the left node, the right node,
and the bottom node gives these.
i0 = i1 + i2
i1 = i3 + i5
i2 + i5 = i4
i3 + i4 = i0
Kirchhoffs Voltage Law, applied to the inside loop (the i0 to i1 to i3 to i0 loop),
the outside loop, and the upper loop not involving the battery, gives these.
5i1 + 10i3 = 10
2i2 + 4i4 = 10
5i1 + 50i5 2i2 = 0
Those suffice to determine the solution i0 = 7/3, i1 = 2/3, i2 = 5/3, i3 = 2/3,
i4 = 5/3, and i5 = 0.
We can understand many kinds of networks in this way. For instance, the
exercises analyze some networks of streets.
Exercises
1 Calculate the amperages in each part of each network.
(a) This is a simple network.
3 ohm
9 volt
2 ohm
2 ohm
(b) Compare this one with the parallel case discussed above.
3 ohm
9 volt
2 ohm
2 ohm
2 ohm
74
3 ohm
3 ohm
2 ohm
4 ohm
2 ohm
2 ohm
2 In the first network that we analyzed, with the three resistors in series, we just
added to get that they acted together like a single resistor of 10 ohms. We can do
a similar thing for parallel circuits. In the second circuit analyzed,
20 volt
12 ohm
8 ohm
the electric current through the battery is 25/6 amperes. Thus, the parallel portion
is equivalent to a single resistor of 20/(25/6) = 4.8 ohms.
(a) What is the equivalent resistance if we change the 12 ohm resistor to 5 ohms?
(b) What is the equivalent resistance if the two are each 8 ohms?
(c) Find the formula for the equivalent resistance if the two resistors in parallel
are r1 ohms and r2 ohms.
3 A Wheatstone bridge is used to measure resistance.
r1
r3
rg
r2
r4
Show that in this circuit if the current flowing through rg is zero then r4 = r2 r3 /r1 .
(To operate the device, put the unknown resistance at r4 . At rg is a meter that
shows the current. We vary the three resistances r1 , r2 , and r3 typically they
each have a calibrated knob until the current in the middle reads 0. Then the
equation gives the value of r4 .)
4 Consider this traffic circle.
North Avenue
Main Street
Pier Boulevard
75
Jay Ln
west
east
Winooski Ave
We can observe the hourly flow of cars into this networks entrances, and out of its
exits.
east Winooski west Winooski Willow Jay Shelburne
into
80
50
65
40
out of
30
5
70
55
75
(Note that to reach Jay a car must enter the network via some other road first,
which is why there is no into Jay entry in the table. Note also that over a long
period of time, the total in must approximately equal the total out, which is why
both rows add to 235 cars.) Once inside the network, the traffic may flow in different
ways, perhaps filling Willow and leaving Jay mostly empty, or perhaps flowing in
some other way. Kirchhoffs Laws give the limits on that freedom.
(a) Determine the restrictions on the flow inside this network of streets by setting
up a variable for each block, establishing the equations, and solving them. Notice
that some streets are one-way only. (Hint: this will not yield a unique solution,
since traffic can flow through this network in various ways; you should get at
least one free variable.)
(b) Suppose that someone proposes construction for Winooski Avenue East between Willow and Jay, and traffic on that block will be reduced. What is the least
amount of traffic flow that can we can allow on that block without disrupting
the hourly flow into and out of the network?
Chapter Two
Vector Spaces
The first chapter finished with a fair understanding of how Gausss Method
solves a linear system. It systematically takes linear combinations of the rows.
Here we move to a general study of linear combinations.
We need a setting. At times in the first chapter weve combined vectors from
R2 , at other times vectors from R3 , and at other times vectors from higherdimensional spaces. So our first impulse might be to work in Rn , leaving n
unspecified. This would have the advantage that any of the results would hold
for R2 and for R3 and for many other spaces, simultaneously.
But if having the results apply to many spaces at once is advantageous then
sticking only to Rn s is overly restrictive. Wed like our results to apply to
combinations of row vectors, as in the final section of the first chapter. Weve
even seen some spaces that are not simply a collection of all of the same-sized
column vectors or row vectors. For instance, weve seen a homogeneous systems
solution set that is a plane inside of R3 . This set is a closed system in that a
linear combination of these solutions is also a solution. But it does not contain
all of the three-tall column vectors, only some of them.
We want the results about linear combinations to apply anywhere that linear
combinations make sense. We shall call any such set a vector space. Our results,
instead of being phrased as Whenever we have a collection in which we can
sensibly take linear combinations . . . , will be stated In any vector space . . .
Such a statement describes at once what happens in many spaces. To
understand the advantages of moving from studying a single space to studying
a class of spaces, consider this analogy. Imagine that the government made
laws one person at a time: Leslie Jones cant jay walk. That would be bad;
statements have the virtue of economy when they apply to many cases at once.
Or suppose that they said, Kim Ke must stop when passing an accident.
Contrast that with, Any doctor must stop when passing an accident. More
general statements, in some ways, are clearer.
78
We shall study structures with two operations, an addition and a scalar multiplication, that are subject to some simple conditions. We will reflect more on
the conditions later but on first reading notice how reasonable they are. For
instance, surely any operation that can be called an addition (e.g., column vector
addition, row vector addition, or real number addition) will satisfy conditions
(1) through (5) below.
I.1
1.1 Definition A vector space (over R) consists of a set V along with two
operations + and subject to the conditions that for all vectors ~v, w
~ , ~u V
and all scalars r, s R:
(1) the set V is closed under vector addition, that is, ~v + w
~ V
(2) vector addition is commutative, ~v + w
~ =w
~ + ~v
(3) vector addition is associative, (~v + w
~ ) + ~u = ~v + (~
w + ~u)
~
~
(4) there is a zero vector 0 V such that ~v + 0 = ~v for all ~v V
(5) each ~v V has an additive inverse w
~ V such that w
~ + ~v = ~0
(6) the set V is closed under scalar multiplication, that is, r ~v V
(7) addition of scalars distributes over scalar multiplication, (r+s)~v = r~v +s~v
(8) scalar multiplication distributes over vector addition, r(~v + w
~ ) = r~v +r w
~
(9) ordinary multipication of scalars associates with scalar multiplication,
(rs) ~v = r (s ~v)
(10) multiplication by the scalar 1 is the identity operation, 1 ~v = ~v.
1.2 Remark The definition involves two kinds of addition and two kinds of
multiplication, and so may at first seem confused. For instance, in condition (7)
the + on the left is addition of two real numbers while the + on the right
is addition of two vectors in V. These expressions arent ambiguous because
of context; for example, r and s are real numbers so r + s can only mean
real number addition. In the same way, item (9)s left side rs is ordinary
real number multiplication, while its right side s ~v is the scalar multipliction
defined for this vector space.
The best way to understand the definition is to go through the examples below
and for each, check all ten conditions. The first example includes that check,
written out at length. Use it as a model for the others. Especially important are
the closure conditions, (1) and (6). They specify that the addition and scalar
79
multiplication operations are always sensible they are defined for every pair of
vectors and every scalar and vector, and the result of the operation is a member
of the set (see Example 1.4).
1.3 Example The set R2 is a vector space if the operations + and have their
usual meaning.
!
!
!
!
!
x1
y1
x1 + y1
x1
rx1
+
=
r
=
x2
y2
x2 + y2
x2
rx2
We shall check all of the conditions.
There are five conditions in the paragraph having to do with addition. For
(1), closure of addition, observe that for any v1 , v2 , w1 , w2 R the result of the
vector sum
!
!
!
v1
w1
v1 + w1
+
=
v2
w2
v2 + w2
is a column array with two real entries, and so is in R2 .
of vectors commutes, take all entries to be real numbers
!
!
!
!
v1
w1
v1 + w1
w1 + v1
+
=
=
=
v2
w2
v2 + w2
w2 + v2
(the second equality follows from the fact that the components of the vectors are
real numbers, and the addition of real numbers is commutative). Condition (3),
associativity of vector addition, is similar.
!
!
!
!
v1
w1
u1
(v1 + w1 ) + u1
(
+
)+
=
v2
w2
u2
(v2 + w2 ) + u2
!
v1 + (w1 + u1 )
=
v2 + (w2 + u2 )
!
!
!
v1
w1
u1
=
+(
+
)
v2
w2
u2
For the fourth condition we must produce a zero element the vector of zeroes
is it.
!
!
!
v1
0
v1
+
=
v2
0
v2
For (5), to produce an additive inverse, note that for any v1 , v2 R we have
!
!
!
v1
v1
0
+
=
v2
v2
0
80
v1
v2
!
=
(rs)v1
(rs)v2
!
=
r(sv1 )
r(sv2 )
!
= r (s
v1
v2
!
v1
)
v2
In a similar way, each Rn is a vector space with the usual operations of vector
addition and scalar multiplication. (In R1 , we usually do not write the members
as column vectors, i.e., we usually do not write (). Instead we just write .)
1.4 Example This subset of R3 that is a plane through the origin
x
P = { y | x + y + z = 0 }
z
is a vector space if + and are interpreted in this way.
x1
x2
x1 + x2
x
rx
r y = ry
y1 + y2 = y1 + y2
z1
z2
z1 + z 2
z
rz
The addition and scalar multiplication operations here are just the ones of R3 ,
reused on its subset P. We say that P inherits these operations from R3 . This
81
example of an addition in P
1
1
0
+
=
1
0
1
2
1
1
illustrates that P is closed under addition. Weve added two vectors from P
that is, with the property that the sum of their three entries is zero and the
result is a vector also in P. Of course, this example is not a proof. For the proof
that P is closed under addition, take two elements of P.
x1
x2
y1 y2
z1
z2
Membership in P means that x1 + y1 + z1 = 0 and x2 + y2 + z2 = 0. Observe
that their sum
x1 + x2
y1 + y2
z 1 + z2
is also in P since its entries add (x1 + x2 ) + (y1 + y2 ) + (z1 + z2 ) = (x1 + y1 +
z1 ) + (x2 + y2 + z2 ) to 0. To show that P is closed under scalar multiplication,
start with a vector from P
x
y
z
where x + y + z = 0, and then for r R observe that the scalar multiple
x
rx
r y = ry
z
rz
gives rx + ry + rz = r(x + y + z) = 0. Thus the two closure conditions are
satisfied. Verification of the other conditions in the definition of a vector space
are just as straightforward.
1.5 Example Example 1.3 shows that the set of all two-tall vectors with real
entries is a vector space. Example 1.4 gives a subset of an Rn that is also a
vector space. In contrast with those two, consider the set of two-tall columns
with entries that are integers (under the usual operations of component-wise
addition and scalar multiplication). This is a subset of a vector space but it is
not itself a vector space. The reason is that this set is not closed under scalar
82
multiplication, that is, it does not satisfy condition (6). Here is a column with
integer entries and a scalar such that the outcome of the operation
!
!
4
2
0.5
=
3
1.5
is not a member of the set, since its entries are not all integers.
1.6 Example The singleton set
0
0
{ }
0
0
is a vector space under the operations
0
0
0
0 0 0
+ =
0 0 0
0
0
0
0
0
0 0
r =
0 0
0
0
83
(the verification is easy). This vector space is worthy of attention because these
are the polynomial operations familiar from high school algebra. For instance,
3 (1 2x + 3x2 4x3 ) 2 (2 3x + x2 (1/2)x3 ) = 1 + 7x2 11x3 .
Although this space is not a subset of any Rn , there is a sense in which we
can think of P3 as the same as R4 . If we identify these two spaces elements in
this way
a0
a
1
a0 + a1 x + a2 x2 + a3 x3 corresponds to
a2
a3
then the operations also correspond. Here is an example of corresponding
additions.
3
1
2
2
3
1 2x + 0x + 1x
2 3 1
+ 2 + 3x + 7x2 4x3
corresponds to + =
0 7 7
3 + 1x + 7x2 3x3
3
4
1
Things we are thinking of as the same add to the same sum. Chapter Three
makes precise this idea of vector space correspondence. For now we shall just
leave it as an intuition.
1.9 Example The set M22 of 22 matrices with real number entries is a vector
space under the natural entry-by-entry operations.
!
!
!
!
!
a b
w x
a+w b+x
a b
ra rb
+
=
r
=
c d
y z
c+y d+z
c d
rc rd
As in the prior example, we can think of this space as the same as R4 .
1.10 Example The set {f | f : N R} of all real-valued functions of one natural
number variable is a vector space under the operations
(f1 + f2 ) (n) = f1 (n) + f2 (n)
(r f) (n) = r f(n)
so that if, for example, f1 (n) = n2 + 2 sin(n) and f2 (n) = sin(n) + 0.5 then
(f1 + 2f2 ) (n) = n2 + 1.
We can view this space as a generalization of Example 1.3 instead of 2-tall
vectors, these functions are like infinitely-tall vectors.
n
0
1
2
3
..
.
f(n) = n2 + 1
1
2
5
10
..
.
corresponds to
1
2
5
10
..
.
84
(r f) (x) = r f(x)
The difference between this and Example 1.10 is the domain of the functions.
1.13 Example The set F = {a cos + b sin | a, b R} of real-valued functions of
the real variable is a vector space under the operations
(a1 cos + b1 sin ) + (a2 cos + b2 sin ) = (a1 + a2 ) cos + (b1 + b2 ) sin
and
r (a cos + b sin ) = (ra) cos + (rb) sin
inherited from the space in the prior example. (We can think of F as the same
as R2 in that a cos + b sin corresponds to the vector with components a and
b.)
85
(r f) (x) = r f(x)
d2 f
d2 (rf)
+
(rf)
=
r(
+ f)
dx2
dx2
of basic Calculus. This turns out to equal the space from the prior example
functions satisfying this differential equation have the form a cos + b sin
but this description suggests an extension to solutions sets of other differential
equations.
v1
.
~v = ..
vn
w1
.
w
~ = ..
wn
86
Another answer is perhaps more satisfying. People in this area have worked
to develop the right balance of power and generality. This definition is shaped
so that it contains the conditions needed to prove all of the interesting and
important properties of spaces of linear combinations. As we proceed, we shall
derive all of the properties natural to collections of linear combinations from the
conditions given in the definition.
The next result is an example. We do not need to include these properties
in the definition of vector space because they follow from the properties already
listed there.
1.16 Lemma In any vector space V, for any ~v V and r R, we have (1) 0 ~v = ~0,
(2) (1 ~v) + ~v = ~0, and (3) r ~0 = ~0.
Proof For (1) note that ~v = (1 + 0) ~v = ~v + (0 ~v). Add to both sides the
87
{
R4 | x + y z + w = 0 }
z
w
under the operations inherited from R4 .
X 1.22 Show that each of these is not a vector space. (Hint. Check closure by listing
two members of each set and trying some operations on them.)
(a) Under the operations inherited from R3 , this set
x
{ y R3 | x + y + z = 1 }
z
(b) Under the operations inherited from R3 , this set
x
{ y R3 | x2 + y2 + z2 = 1 }
z
(c) Under the usual matrix operations,
a 1
{
| a, b, c R }
b c
(d) Under the usual polynomial operations,
{ a0 + a1 x + a2 x2 | a0 , a1 , a2 R+ }
where R+ is the set of reals greater than zero
88
X 1.24 Is the set of rational numbers a vector space over R under the usual addition
and scalar multiplication operations?
1.25 Show that the set of linear combinations of the variables x, y, z is a vector space
under the natural addition and scalar multiplication operations.
1.26 Prove that this is not a vector space: the set of two-tall column vectors with
real entries subject to these operations.
x1
x2
x1 x2
x
rx
+
=
r
=
y1 y2
y
ry
y1
y2
1.27 Prove or disprove that R3 is a vector space under these operations.
x1
x2
0
x
rx
(a) y1 + y2 = 0 and r y = ry
z1
z2
0
z
rz
x1
x2
0
x
0
(b) y1 + y2 = 0 and r y = 0
z1
z2
0
z
0
X 1.28 For each, decide if it is a vector space; the intended operations are the natural
ones.
(a) The diagonal 22 matrices
a 0
{
| a, b R }
0 b
(b) This set of 22 matrices
x
{
x+y
x+y
y
| x, y R }
{
z R | x + y + w = 1}
w
(d) The set of functions { f : R R | df/dx + 2f = 0 }
(e) The set of functions { f : R R | df/dx + 2f = 1 }
X 1.29 Prove or disprove that this is a vector space: the real-valued functions f of one
real variable such that f(7) = 0.
X 1.30 Show that the set R+ of positive reals is a vector space when we interpret x + y
to mean the product of x and y (so that 2 + 3 is 6), and we interpret r x as the
r-th power of x.
1.31 Is { (x, y) | x, y R } a vector space under these operations?
(a) (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ) and r (x, y) = (rx, y)
(b) (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ) and r (x, y) = (rx, 0)
89
1.32 Prove or disprove that this is a vector space: the set of polynomials of degree
greater than or equal to two, along with the zero polynomial.
1.33 At this point the same is only an intuition, but nonetheless for each vector
space identify the k for which the space is the same as Rk .
(a) The 23 matrices under the usual operations
(b) The nm matrices (under their usual operations)
(c) This set of 22 matrices
a 0
{
| a, b, c R }
b c
(d) This set of 22 matrices
{
a
b
0
c
| a + b + c = 0}
90
0
b
a
| a, b C and a + b = 0 + 0i }
0
1.43 Name a property shared by all of the Rn s but not listed as a requirement for a
vector space.
X 1.44 (a) Prove that for any four vectors ~v1 , . . . ,~v4 V we can associate their sum
in any way without changing the result.
((~v1 + ~v2 ) + ~v3 ) + ~v4 = (~v1 + (~v2 + ~v3 )) + ~v4 = (~v1 + ~v2 ) + (~v3 + ~v4 )
= ~v1 + ((~v2 + ~v3 ) + ~v4 ) = ~v1 + (~v2 + (~v3 + ~v4 ))
This allows us to write ~v1 + ~v2 + ~v3 + ~v4 without ambiguity.
(b) Prove that any two ways of associating a sum of any number of vectors give
the same sum. (Hint. Use induction on the number of vectors.)
1.45 Example 1.5 gives a subset of R2 that is not a vector space, under the obvious
operations, because while it is closed under addition, it is not closed under scalar
multiplication. Consider the set of vectors in the plane whose components have
the same sign or are 0. Show that this set is closed under scalar multiplication but
not addition.
I.2
One of the examples that led us to define vector spaces was the solution set of a
homogeneous system. For instance, we saw in Example 1.4 such a space that
is a planar subset of R3 . There, the vector space R3 contains inside it another
vector space, the plane.
2.1 Definition For any vector space, a subspace is a subset that is itself a vector
space, under the inherited operations.
2.2 Example Example 1.4s plane
x
P = { y | x + y + z = 0 }
z
is a subspace of R3 . As required by the definition the planes operations are
inherited from the larger space, that is, vectors add in P as they add in R3
x1
x2
x1 + x2
y1 + y2 = y1 + y2
z1
z2
z1 + z2
91
92
equivalent.
(1) (2)
(2) (3)
(3) (1)
We will prove the equivalence by establishing that (1) = (3) = (2) = (1).
This strategy is suggested by the observation that the implications (1) = (3)
and (3) = (2) are easy and so we need only argue that (2) = (1).
Assume that S is a nonempty subset of a vector space V that is S closed
under combinations of pairs of vectors. We will show that S is a vector space by
checking the conditions.
The vector space definition has five conditions on addition. First, for closure
under addition, if ~s1 ,~s2 S then ~s1 + ~s2 S, as it is a combination of a pair
of vectors and we are assuming that S is closed under those. Second, for any
~s1 ,~s2 S, because addition is inherited from V, the sum ~s1 + ~s2 in S equals the
sum ~s1 + ~s2 in V, and that equals the sum ~s2 + ~s1 in V (because V is a vector
space, its addition is commutative), and that in turn equals the sum ~s2 + ~s1 in
S. The argument for the third condition is similar to that for the second. For
the fourth, consider the zero vector of V and note that closure of S under linear
combinations of pairs of vectors gives that 0 ~s + 0 ~s = ~0 is an element of S
(where ~s is any member of the nonempty set S); checking that ~0 acts under the
inherited operations as the additive identity of S is easy. The fifth condition
is satisfied because for any ~s S, closure under linear combinations of pairs of
vectors shows that 0 ~0 + (1) ~s is an element of S, and it is obviously the
additive inverse of ~s under the inherited operations.
More
93
The verifications for the scalar multiplication conditions are similar; see
Exercise 34.
QED
We will usually verify that a subset is a subspace by checking that it satisfies
statement (2).
2.10 Remark At the start of this chapter we introduced vector spaces as collections
in which linear combinations make sense. Theorem 2.9s statements (1)-(3)
say that we can always make sense of an expression like r1~s1 + r2~s2 in that the
vector described is in the set S.
As a contrast, consider the set T of two-tall vectors whose entries add to
a number greater than or equal to zero. Here we cannot just write any linear
combination such as 2~t1 3~t2 and be confident the result is an element of T .
Lemma 2.9 suggests that a good way to think of a vector space is as a
collection of unrestricted linear combinations. The next two examples take some
spaces and recasts their descriptions to be in that form.
2.11 Example We can show that this plane through the origin subset of R3
x
S = { y | x 2y + z = 0 }
z
is a subspace under the usual addition and scalar multiplication operations
of column vectors by checking that it is nonempty and closed under linear
combinations of two vectors. But there is another way. Think of x 2y + z = 0
as a one-equation linear system and parametrize it by expressing the leading
variable in terms of the free variables x = 2y z.
2y z
2
1
S = { y | y, z R } = {y 1 + z 0 | y, z R }
()
z
0
1
Now, to show that this is a subspace consider r1~s1 + r2~s2 . Each ~si is a linear
combination of the two vectors in () so this is a linear combination of linear
combinations.
2
1
2
1
r1 (y1 1 + z1 0 ) + r2 (y2 1 + z2 0 )
0
1
0
1
The Linear Combination Lemma, Lemma One.III.2.3, shows that the total is
a linear combination of the two vectors and so Theorem 2.9s statement (2) is
satisfied.
94
If S is not empty then by Lemma 2.9 we need only check that the span [S] is
closed under linear combinations of pairs of elements. For a pair of vectors from
that span, ~v = c1~s1 + + cn~sn and w
~ = cn+1~sn+1 + + cm~sm , a linear
combination
p (c1~s1 + + cn~sn ) + r (cn+1~sn+1 + + cm~sm )
= pc1~s1 + + pcn~sn + rcn+1~sn+1 + + rcm~sm
95
The converse of the lemma holds: any subspace is the span of some set,
because a subspace is obviously the span of itself, the set of all of its members.
Thus a subset of a vector space is a subspace if and only if it is a span. This
fits the intuition that a good way to think of a vector space is as a collection in
which linear combinations are sensible.
Taken together, Lemma 2.9 and Lemma 2.15 show that the span of a subset
S of a vector space is the smallest subspace containing all of the members of S.
2.16 Example In any vector space V, for any vector ~v V, the set {r ~v | r R }
is a subspace of V. For instance, for any vector ~v R3 the line through the
origin containing that vector { k~v | k R } is a subspace of R3 . This is true even
if ~v is the zero vector, in which case it is the degenerate line, the trivial subspace.
2.17 Example The span of this set is all of R2 .
!
!
1
1
{
,
}
1
1
We know that the span is some subspace of R2 . To check that it is all of R2 we
must show that any member of R2 is a linear combination of these two vectors.
So we ask: for which vectors with real components x and y are there scalars c1
and c2 such that this holds?
!
!
!
1
1
x
c1
+ c2
=
()
1
1
y
Gausss Method
c1 + c2 = x
c1 c2 = y
1 +2
c1 +
c2 =
x
2c2 = x + y
96
1
0
{ x 0 + y 1 }
0
0
1
0
{ x 0 + z 0 }
0
1
AH
H
A HH
1
{ x 0 }
0
0
{ y 1 }
0 P
X
2
{ y 1 }
0
1
0
{ x 1 + z 0 }
0
1
1
{ y 1 }
1
XXX PP
XXXPPHH
HH AA
XXP
0
XP
XP
X { 0 }
0
97
So far in this chapter we have seen that to study the properties of linear
combinations, the right setting is a collection that is closed under these combinations. In the first subsection we introduced such collections, vector spaces,
and we saw a great variety of examples. In this subsection we saw still more
spaces, ones that are subspaces of others. In all of the variety there is a commonality. Example 2.19 above brings it out: vector spaces and subspaces are
best understood as a span, and especially as a span of a small number of vectors.
The next section studies spanning sets that are minimal.
Exercises
X 2.20 Which of these subsets of the vector space of 2 2 matrices are subspaces
under the inherited operations? For each one that is a subspace, parametrize its
description.
For each that is not, give a condition that fails.
a 0
(a) {
| a, b R }
0 b
a 0
(b) {
| a + b = 0}
0 b
a 0
(c) {
| a + b = 5}
0 b
a c
| a + b = 0, c R }
(d) {
0 b
X 2.21 Is this a subspace of P2 : { a0 + a1 x + a2 x2 | a0 + 2a1 + a2 = 4 }? If it is then
parametrize its description.
2.22 Is the vector in the span of the set?
1
2
1
0 { 1 , 1 }
3
1
1
X 2.23 Decide if the vector lies in the span of the set, inside of the space.
2
1
0
(a) 0, { 0 , 0 }, in R3
1
0
1
(b) x x3 , { x2 , 2x + x2 , x + x3 }, in P3
0 1
1 0
2 0
(c)
,{
,
}, in M22
4 2
1 1
2 3
2.24 Which of these are members of the span [{ cos2 x, sin2 x }] in the vector space of
real-valued functions of one real variable?
(a) f(x) = 1
(b) f(x) = 3 + x2
(c) f(x) = sin x
(d) f(x) = cos(2x)
X 2.25 Which of these sets spans R3 ? That is, which of these sets has the property
that any three-tall vector can be expressed as a suitable linear combination of the
sets elements?
98
1
0
0
2
1
0
1
3
(a) { 0 , 2 , 0 }
(b) { 0 , 1 , 0 }
(c) { 1 , 0 }
0
0
3
1
0
1
0
0
1
3
1
2
2
3
5
6
(d) { 0 , 1 , 0 , 1 }
(e) { 1 , 0 , 1 , 0 }
1
0
0
5
1
1
2
2
X 2.26 Parametrize each subspaces description. Then express each subspace as a
span.
(a) The subset { (a b c) | a c = 0 } of the three-wide row vectors
(b) This subset of M22
a b
{
| a + d = 0}
c d
(c) This subset of M22
a
{
c
b
d
| 2a c d = 0 and a + 3b = 0 }
(c) {
z | 2x + y + w = 0 and y + 2z = 0 } in R
w
(d) { a0 + a1 x + a2 x2 + a3 x3 | a0 + a1 = 0 and a2 a3 = 0 } in P3
(e) The set P4 in the space P4
(f) M22 in M22
2.28 Is R2 a subspace of R3 ?
X 2.29 Decide if each is a subspace of the vector space of real-valued functions of one
real variable.
(a) The even functions { f : R R | f(x) = f(x) for all x }. For example, two members of this set are f1 (x) = x2 and f2 (x) = cos(x).
(b) The odd functions { f : R R | f(x) = f(x) for all x }. Two members are
f3 (x) = x3 and f4 (x) = sin(x).
2.30 Example 2.16 says that for any vector ~v that is an element of a vector space
V, the set { r ~v | r R } is a subspace of V. (This is of course, simply the span of
the singleton set {~v }.) Must any such subspace be a proper subspace, or can it be
improper?
2.31 An example following the definition of a vector space shows that the solution
set of a homogeneous linear system is a vector space. In the terminology of this
subsection, it is a subspace of Rn where the system has n variables. What about
99
x1
x2
x1 + x2 1
y1 + y2 = y1 + y2
z1
z2
z1 + z2
x
rx r + 1
r y =
ry
z
rz
100
2.39 Is a space determined by its subspaces? That is, if two vector spaces have the
same subspaces, must the two be equal?
2.40 (a) Give a set that is closed under scalar multiplication but not addition.
(b) Give a set closed under addition but not scalar multiplication.
(c) Give a set closed under neither.
2.41 Show that the span of a set of vectors does not depend on the order in which
the vectors are listed in that set.
2.42 Which trivial subspace is the span of the empty set? Is it
0
{ 0 } R3 , or { 0 + 0x } P1 ,
0
or some other subspace?
2.43 Show that if a vector is in the span of a set then adding that vector to the set
wont make the span any bigger. Is that also only if ?
X 2.44 Subspaces are subsets and so we naturally consider how is a subspace of
interacts with the usual set operations.
(a) If A, B are subspaces of a vector space, must their intersection A B be a
subspace? Always? Sometimes? Never?
(b) Must the union A B be a subspace?
(c) If A is a subspace, must its complement be a subspace?
(Hint. Try some test subspaces from Example 2.19.)
X 2.45 Does the span of a set depend on the enclosing space? That is, if W is a
subspace of V and S is a subset of W (and so also a subset of V), might the span
of S in W differ from the span of S in V?
2.46 Is the relation is a subspace of transitive? That is, if V is a subspace of W
and W is a subspace of X, must V be a subspace of X?
X 2.47 Because span of is an operation on sets we naturally consider how it interacts
with the usual set operations.
(a) If S T are subsets of a vector space, is [S] [T ]? Always? Sometimes?
Never?
(b) If S, T are subsets of a vector space, is [S T ] = [S] [T ]?
(c) If S, T are subsets of a vector space, is [S T ] = [S] [T ]?
(d) Is the span of the complement equal to the complement of the span?
2.48 Reprove Lemma 2.15 without doing the empty set separately.
2.49 Find a structure that is closed under linear combinations, and yet is not a
vector space.
II
101
Linear Independence
II.1
We saw repeats in the first chapter. There, Gausss Method turned them into
0 = 0 equations.
1.1 Example Recall the Statics example from Chapter Ones opening. We got
two balances with the pair of unknown-mass objects, one at 40 cm and 15 cm
and another at 50 cm and 25 cm, and we then computed the value of those
masses. Had we instead gotten the second balance at 20 cm and 7.5 cm then
Gausss Method on the resulting two-equations, two-unknowns system would
not have yielded a solution, it would have yielded a 0 = 0 equation along with
an equation containing a free variable. Intuitively, the problem is that (20 7.5)
is half of (40 15), that is, (20 7.5) is in the span of the set {(40 15) } and so is
102
103
(15/40)1 +2
40c1
50c2 = 0
(175/4)c2 = 0
Both c1 and c2 are zero. So the only linear relationship between the two given
row vectors is the trivial relationship.
In the same vector space, the set { (40 15), (20 7.5) } is linearly dependent
since we can satisfy c1 (40 15) + c2 (20 7.5) = (0 0) with c1 = 1 and c2 = 2.
1.7 Example The set {1 + x, 1 x} is linearly independent in P2 , the space of
quadratic polynomials with real coefficients, because
0 + 0x + 0x2 = c1 (1 + x) + c2 (1 x) = (c1 + c2 ) + (c1 c2 )x + 0x2
104
gives
c1 + c2 = 0
c1 c2 = 0
c1 + c2 = 0
2c2 = 0
1 +2
since polynomials are equal only if their coefficients are equal. Thus, the only
linear relationship between these two members of P2 is the trivial one.
1.8 Example The rows of this matrix
2 3
A = 0 1
0 0
1 0
0 2
0 1
form a linearly independent set. This is easy to check for this case but also recall
that Lemma One.III.2.5 shows that the rows of any echelon form matrix form a
linearly independent set.
1.9 Example In R3 , where
3
~v1 = 4
5
2
~v2 = 9
2
4
~v3 = 18
4
the set S = {~v1 ,~v2 ,~v3 } is linearly dependent because this is a relationship
0 ~v1 + 2 ~v2 1 ~v3 = ~0
where not all of the scalars are zero (the fact that some of the scalars are zero
doesnt matter).
That example illustrates why, although Definition 1.4 is a clearer statement
of what independence means, Lemma 1.5 is better for computations. Working
straight from the definition, someone trying to compute whether S is linearly
independent would start by setting ~v1 = c2~v2 + c3~v3 and concluding that there
are no such c2 and c3 . But knowing that the first vector is not dependent on the
other two is not enough. This person would have to go on to try ~v2 = c1~v1 +c3~v3 ,
in order to find the dependence c1 = 0, c3 = 1/2. Lemma 1.5 gets the same
conclusion with only one computation.
1.10 Example The empty subset of a vector space is linearly independent. There
is no nontrivial linear relationship among its members as it has no members.
1.11 Example In any vector space, any subset containing the zero vector is linearly
dependent. One example is, in the space P2 of quadratic polynomials, the subset
{ 1 + x, x + x2 , 0 }. It is linearly dependent because 0 ~v1 + 0 ~v2 + 1 ~0 = ~0 is a
nontrivial relationship, since not all of the coefficients are zero.
A subtler way to see that this subset is dependent is to remember that the
zero vector is equal to the trivial sum, the sum of the empty set. So any set
105
1 1 1
1 1 1
(1/2)2
2 2 2
1 1 1
1 2 3
1 2 3
On the left the set of matrix rows {(1 1 1), (2 2 2), (1 2 3) } is linearly dependent. But on the right the set {(1 1 1), (1 1 1), (1 2 3) } = {(1 1 1), (1 2 3) }
is linearly independent. Thats not what we need; we rely on Gausss Method
to preserve dependence so we need that (1 1 1) appears twice.
1.13 Corollary A set S is linearly independent if and only if for any ~v S, its
removal shrinks the span [S {v }] ( [S].
Proof This follows from Corollary 1.3. If S is linearly independent then none
of its vectors is dependent on the other elements, so removal of any vector will
shrink the span. If S is not linearly independent then it contains a vector that
is dependent on other elements of the set, and removal of that vector will not
shrink the span.
QED
So a spanning set is minimal if and only if it is linearly independent.
The prior result addresses removing elements from a linearly independent
set. The next one adds elements.
1.14 Lemma Suppose that S is linearly independent and that ~v
/ S. Then the
set S {~v } is linearly independent if and only if ~v
/ [S].
Proof We will show that S {~v } is not linearly independent if and only if
~v [S].
Suppose first that v [S]. Express ~v as a combination ~v = c1~s1 + + cn~sn .
Rewrite that ~0 = c1~s1 + + cn~sn 1 ~v. Since v
/ S, it does not equal any of
the ~si so this is a nontrivial linear dependence among the elements of S {~v }.
Thus that set is not linearly independent.
Now suppose that S{~v } is not linearly independent and consider a nontrivial
dependence among its members ~0 = c1~s1 + + cn~sn + cn+1 ~v. If cn+1 = 0
106
then that is a dependence among the elements of S, but we are assuming that S
is independent, so cn+1 6= 0. Rewrite the equation as ~v = (c1 /cn+1 )~s1 + +
(cn /cn+1 )~sn to get ~v [S]
QED
1.15 Example This subset of R3 is linearly independent.
1
S = { 0 }
0
The span of S is the x-axis. Here are two supersets, one that is linearly dependent
and the other independent.
1
3
1
0
dependent: { 0 , 0 }
independent: { 0 , 1 }
0
0
0
0
We got the dependent superset by adding a vector from the x-axis and so the
span did not grow. We got the independent superset by adding a vector that
isnt in [S], because it has a nonzero y component, causing the span to grow.
For the independent set
1
0
S = { 0 , 1 }
0
0
the span [S] is the xy-plane. Here are two supersets.
1
0
0
1
0
3
independent: { 0 , 1 , 0 }
dependent: { 0 , 1 , 2 }
0
0
0
0
0
1
As above, the additional member of the dependent superset comes from [S],
the xy-plane, while the added member of the independent superset comes from
outside of that span.
Finally, consider this independent set
1
0
0
S = { 0 , 1 , 0 }
0
0
1
with [S] = R3 . We can get a linearly dependent superset.
1
0
0
2
dependent: { 0 , 1 , 0 , 1 }
0
0
1
3
107
+ c3 + + 3c5 = 0
2c2 + 2c3 c4 + 3c5 = 0
c4
=0
108
3
1
c1
3/2
1
c
{ c3 = c3 1 + c5 0 | c3 , c5 R}
0
0
c4
1
0
c5
Set c5 = 1 and c3 = 0
1
3
3 0
2
0
This shows that the vector from S that weve associated with c5 is in the span
of the set of c1 s vector and c2 s vector. We can discard Ss fifth vector without
shrinking the span.
Similarly, set c3 = 1, and c5 = 0 to get an instance of () that shows we can
discard Ss third vector without shrinking the span. Thus this set has the same
span as S.
1
0
0
{ 0 , 2 , 1 }
0
0
1
The check that it is linearly independent is routine.
1.18 Corollary A subset S = {~s1 , . . . ,~sn } of a vector space is linearly dependent
if and only if some s~i is a linear combination of the vectors ~s1 , . . . , ~si1 listed
before it.
Proof Consider S0 = { }, S1 = { s~1 }, S2 = {~s1 ,~s2 }, etc. Some index i > 1 is the
first one with Si1 {~si } linearly dependent, and there ~si [Si1 ].
QED
QED
109
S independent
S dependent
S
S
S
S
must be independent
S
may be either
S
may be either
S
must be dependent
S
Example 1.15 has something else to say about the interaction between linear
independence and superset. It names a linearly independent set that is maximal
in that it has no supersets that are linearly independent. By Lemma 1.14 a
linearly independent set is maximal if and only if it spans the entire space,
because that is when all the vectors in the space are already in the span. This
nicely complements Lemma 1.13, that a spanning set is minimal if and only if it
is linearly independent.
Exercises
X 1.20 Decide whether each subset of R3 is linearly dependent or linearly independent.
1
2
4
1
2
3
0
1
(a) { 3 , 2 , 4 }
(b) { 7 , 7 , 7 }
(c) { 0 , 0 }
5
4
14
7
7
7
1
4
9
2
3
12
(d) { 9 , 0 , 5 , 12 }
0
1
4
1
X 1.21 Which of these subsets of P3 are linearly dependent and which are independent?
(a) { 3 x + 9x2 , 5 6x + 3x2 , 1 + 1x 5x2 }
(b) { x2 , 1 + 4x2 }
(c) { 2 + x + 7x2 , 3 x + 2x2 , 4 3x2 }
(d) { 8 + 3x + 3x2 , x + 2x2 , 2 + 2x + 2x2 , 8 2x + 5x2 }
1.22 Determine if each set is linearly independent in the natural space.
1
1
(a) { 2 , 1 }
(b) { (1 3 1), (1 4 3), (1 11 7) }
0
0
5 4
0 0
1 0
(c) {
,
,
}
1 2
0 0
1 4
X 1.23 Prove that each set { f, g } is linearly independent in the vector space of all
functions from R+ to R.
(a) f(x) = x and g(x) = 1/x
110
111
spanning set is minimal in this strong sense: each vector in [S] is a combination
of elements of S a minimum number of times only once.
(d) Prove that it can happen when S is not linearly independent that distinct
linear combinations sum to the same vector.
1.33 Prove that a polynomial gives rise to the zero function if and only if it is
the zero polynomial. (Comment. This question is not a Linear Algebra matter
but we often use the result. A polynomial gives rise to a function in the natural
way: x 7 cn xn + + c1 x + c0 .)
1.34 Return to Section 1.2 and redefine point, line, plane, and other linear surfaces
to avoid degenerate cases.
1.35 (a) Show that any set of four vectors in R2 is linearly dependent.
(b) Is this true for any set of five? Any set of three?
(c) What is the most number of elements that a linearly independent subset of
R2 can have?
X 1.36 Is there a set of four vectors in R3 such that any three form a linearly independent
set?
1.37 Must every linearly dependent set have a subset that is dependent and a subset
that is independent?
1.38 In R4 what is the biggest linearly independent set you can find? The smallest?
The biggest linearly dependent set? The smallest? (Biggest and smallest mean
that there are no supersets or subsets with the same property.)
X 1.39 Linear independence and linear dependence are properties of sets. We can thus
naturally ask how the properties of linear independence and dependence act with
respect to the familiar elementary set relations and operations. In this body of this
subsection we have covered the subset and superset relations. We can also consider
the operations of intersection, complementation, and union.
(a) How does linear independence relate to intersection: can an intersection of
linearly independent sets be independent? Must it be?
(b) How does linear independence relate to complementation?
(c) Show that the union of two linearly independent sets can be linearly independent.
(d) Show that the union of two linearly independent sets need not be linearly
independent.
1.40 Continued from prior exercise. What is the interaction between the property
of linear independence and the operation of union?
(a) We might conjecture that the union ST of linearly independent sets is linearly
independent if and only if their spans have a trivial intersection [S] [T ] = {~0 }.
What is wrong with this argument for the if direction of that conjecture? If
the union S T is linearly independent then the only solution to c1~s1 + +
cn~sn + d1~t1 + + dm~tm = ~0 is the trivial one c1 = 0, . . . , dm = 0. So any
member of the intersection of the spans must be the zero vector because in
c1~s1 + + cn~sn = d1~t1 + + dm~tm each scalar is zero.
(b) Give an example showing that the conjecture is false.
112
III
113
The prior section ends with the observation that a spanning set is minimal when
it is linearly independent and a linearly independent set is maximal when it spans
the space. So the notions of minimal spanning set and maximal independent set
coincide. In this section we will name this idea and study its properties.
III.1
Basis
1.1 Definition A basis for a vector space is a sequence of vectors that is linearly
independent and that spans the space.
Because a basis is a sequence, meaning that bases are different if they contain
the same elements but in different orders, we denote it with angle brackets
~ 1,
~ 2 , . . .i. (A sequence is linearly independent if the multiset consisting of
h
the elements of the sequence in is independent. Similarly, a sequence spans the
space if the set of elements of the sequence spans the space.)
1.2 Example This is a basis for R2 .
!
!
2
1
h
,
i
4
1
It is linearly independent
!
!
!
2
1
0
c1
+ c2
=
4
1
0
2c1 + 1c2 = 0
4c1 + 1c2 = 0
c1 = c2 = 0
and it spans R2 .
2c1 + 1c2 = x
4c1 + 1c2 = y
c2 = 2x y and c1 = (y x)/2
1.3 Example This basis for R2 differs from the prior one
!
!
1
2
h
,
i
1
4
because it is in a different order. The verification that it is a basis is just as in
the prior example.
114
1.4 Example The space R2 has many bases. Another one is this.
!
!
1
0
h
,
i
0
1
The verification is easy.
1.5 Definition For any Rn
0
1
0 1
En = h
.. , .. , . . . ,
. .
0
0
0
. i
.
.
1
is the standard (or natural) basis. We denote these vectors ~e1 , . . . , ~en .
Calculus books denote R2 s standard basis vectors as ~ and ~ instead of ~e1 and
~e2 and they denote to R3 s standard basis vectors as ~, ~, and ~k instead of ~e1 ,
~e2 , and ~e3 . Note that ~e1 means something different in a discussion of R3 than
it means in a discussion of R2 .
1.6 Example Consider the space {a cos + b sin | a, b R } of functions of the
real variable . This is a natural basis hcos , sin i = h1cos +0sin , 0cos +
1 sin i. A more generic basis for this space is hcos sin , 2 cos + 3 sin i.
Verification that these two are bases is Exercise 27.
1.7 Example A natural basis for the vector space of cubic polynomials P3 is
h1, x, x2 , x3 i. Two other bases for this space are hx3 , 3x2 , 6x, 6i and h1, 1 + x, 1 +
x + x2 , 1 + x + x2 + x3 i. Checking that each is linearly independent and spans
the space is easy.
1.8 Example The trivial space {~0 } has only one basis, the empty one hi.
1.9 Example The space of finite-degree polynomials has a basis with infinitely
many elements h1, x, x2 , . . .i.
1.10 Example We have seen bases before. In the first chapter we described the
solution set of homogeneous systems such as this one
x+y
w=0
z+w=0
by parametrizing.
1
1
1
0
{ y + w | y, w R }
0
1
0
1
115
Thus the vector space of solutions is the span of a two-element set. This twovector set is also linearly independent, which is easy to check. Therefore the
solution set is a subspace of R4 with a basis comprised of these two vectors.
1.11 Example Parametrization finds bases for other vector spaces, not just for
solution sets of homogeneous systems. To find a basis for this subspace of M22
!
a b
{
| a + b 2c = 0 }
c 0
we rewrite the condition as a = b + 2c.
!
!
b + 2c b
1 1
2
{
| b, c R } = { b
+c
c
0
0 0
1
0
0
!
| b, c R}
that is linearly independent. A subset is a spanning set if and only if each vector
in the space is a linear combination of elements of that subset in at least one
way. Thus we need only show that a spanning subset is linearly independent if
and only if every vector in the space is a linear combination of elements from
the subset in at most one way.
Consider two expressions of a vector as a linear combination of the members
~i
of the subset. Rearrange the two sums, and if necessary add some 0
116
RepB (~v) =
..
.
cn
~ 1, . . . ,
~ n i and ~v = c1
~ 1 + c2
~ 2 + + cn
~ n . The cs are the
where B = h
coordinates of ~v with respect to B.
1.14 Example In P3 , with respect to the basis B = h1, 2x, 2x2 , 2x3 i, the representation of x + x2 is
0
1/2
RepB (x + x2 ) =
1/2
0 B
because x + x2 = 0 1 + (1/2) 2x + (1/2) 2x2 + 0 2x3 . With respect to a
different basis D = h1 + x, 1 x, x + x2 , x + x3 i, the representation is different.
0
0
RepD (x + x2 ) =
1
0 D
(When there is more than one basis around, to help keep straight which representation is with respect to which basis we often write it as a subscript on the
column vector.)
117
1.15 Remark Definition 1.1 requires that a basis be a sequence because without
that we couldnt write these coordinates in a fixed order.
1.16 Example In R2 , where ~v = 32 , to find the coordinates of that vector with
respect to the basis
!
!
1
0
B=h
,
i
1
2
we solve
c1
1
1
!
+ c2
0
2
!
=
3
2
!
3
1/2
1.17 Remark This use of column notation and the term coordinate has both a
disadvantage and an advantage. The disadvantage is that representations look
like vectors from Rn , which can be confusing when the vector space is Rn , as in
the prior example. We must infer the intent from the context. For example, the
phrase in R2 , where ~v = 32 refers to the plane vector that, when in canonical
position, ends at (3, 2). And in the end of that example, although weve omitted
a subscript B from the column, that the right side is a representation is clear
from the context.
The advantage of the notation and the term is that they generalize the
familiar case: in Rn and with respect to the standard basis En , the vector
starting at the origin and ending at (v1 , . . . , vn ) has this representation.
v1
v1
..
..
RepEn ( . ) = .
vn
vn E
Our main use of representations will come later but the definition appears
here because the fact that every vector is a linear combination of basis vectors in
a unique way is a crucial property of bases, and also to help make a point. For
calculation of coordinates among other things, we shall restrict our attention
to spaces with bases having only finitely many elements. That will start in the
next subsection.
Exercises
1.18 Decide if each is a basis for P2 .
(a) hx2 x + 1, 2x + 1, 2x 1i
(b) hx + x2 , x x2 i
X 1.19 Decide if each is a basis for R3 .
118
1
3
0
1
3
0
1
2
(a) h2 , 2 , 0i
(b) h2 , 2i
(c) h 2 , 1 , 5i
3
1
1
3
1
1
1
0
0
1
1
(d) h 2 , 1 , 3i
1
1
0
X 1.20 Represent
the
vector
with
respect to the basis.
1
1
1
(a)
,B=h
,
i R2
2
1
1
(b) x2 + x3 , D = h1, 1 + x, 1 + x + x2 , 1 + x + x2 + x3 i P3
0
1
4
(c)
0 , E4 R
1
1.21 Represent the vector with respect to each of the two bases.
3
1
1
1
1
~v =
B1 = h
,
i, B2 = h
,
i
1
1
1
2
3
1.22 Find a basis for P2 , the space of all quadratic polynomials. Must any such
basis contain a polynomial of each degree: degree zero, degree one, and degree two?
1.23 Find a basis for the solution set of this system.
x1 4x2 + 3x3 x4 = 0
2x1 8x2 + 6x3 2x4 = 0
X 1.24 Find a basis for M22 , the space of 22 matrices.
X 1.25 Find a basis for each.
(a) The subspace { a2 x2 + a1 x + a0 | a2 2a1 = a0 } of P2
(b) The space of three-wide row vectors whose first and second components add
to zero
(c) This subspace of the 22 matrices
a b
{
| c 2b = 0 }
0 c
1.26 Find a basis for each space, and verify that it is a basis.
(a) The subspace M = { a + bx + cx2 + dx3 | a 2b + c d = 0 } of P3 .
(b) This subspace of M22 .
a b
W={
| a c = 0}
c d
1.27 Check Example 1.6.
X 1.28 Find the span of each set and then find a basis for that span.
(a) { 1 + x, 1 + 2x } in P2
(b) { 2 2x, 3 + 4x2 } in P2
X 1.29 Find a basis for each of these subspaces of the space P3 of cubic polynomials.
(a) The subspace of cubic polynomials p(x) such that p(7) = 0
(b) The subspace of polynomials p(x) such that p(7) = 0 and p(5) = 0
(c) The subspace of polynomials p(x) such that p(7) = 0, p(5) = 0, and p(3) = 0
119
(d) The space of polynomials p(x) such that p(7) = 0, p(5) = 0, p(3) = 0,
and p(1) = 0
1.30 Weve seen that the result of reordering a basis can be another basis. Must it
be?
1.31 Can a basis contain a zero vector?
~ 1,
~ 2,
~ 3 i be a basis for a vector space.
X 1.32 Let h
~ 1 , c2
~ 2 , c3
~ 3 i is a basis when c1 , c2 , c3 6= 0. What happens
(a) Show that hc1
when at least one ci is 0?
~1 +
~ i.
(b) Prove that h~
1 ,
~ 2,
~ 3 i is a basis where
~i =
1.33 Find one vector ~v that will make each into a basis for the space.
1
0
1
(a) h
,~vi in R2
(b) h1 , 1 ,~vi in R3
(c) hx, 1 + x2 ,~vi in P2
1
0
0
~
~
X 1.34 Where h1 , . . . , n i is a basis, show that in this equation
~ 1 + + ck
~ k = ck+1
~ k+1 + + cn
~n
c1
each of the ci s is zero. Generalize.
1.35 A basis contains some of the vectors from a vector space; can it contain them
all?
1.36 Theorem 1.12 shows that, with respect to a basis, every linear combination is
unique. If a subset is not a basis, can linear combinations be not unique? If so,
must they be?
X 1.37 A square matrix is symmetric if for all indices i and j, entry i, j equals entry
j, i.
(a) Find a basis for the vector space of symmetric 22 matrices.
(b) Find a basis for the space of symmetric 33 matrices.
(c) Find a basis for the space of symmetric nn matrices.
X 1.38 We can show that every basis for R3 contains the same number of vectors.
(a) Show that no linearly independent subset of R3 contains more than three
vectors.
(b) Show that no spanning subset of R3 contains fewer than three vectors. Hint:
recall how to calculate the span of a set and show that this method cannot yield
all of R3 when we apply it to fewer than three vectors.
1.39 One of the exercises in the Subspaces subsection shows that the set
x
{ y | x + y + z = 1 }
z
is a vector space under these operations.
x1 + x2 1
x1
x2
y1 + y2 = y1 + y2
z1
z2
z1 + z2
Find a basis.
x
rx r + 1
r y =
ry
z
rz
120
III.2
Dimension
The previous subsection defines a basis of a vector space and shows that a space
can have many different bases. So we cannot talk about the basis for a vector
space. True, some vector spaces have bases that strike us as more natural than
others, for instance, R2 s basis E2 or P2 s basis h1, x, x2 i. But for the vector
space {a2 x2 + a1 x + a0 | 2a2 a0 = a1 }, no particular basis leaps out at us as
the natural one. We cannot, in general, associate with a space any single basis
that best describes it.
We can however find something about the bases that is uniquely associated
with the space. This subsection shows that any two bases for a space have the
same number of elements. So with each space we can associate a number, the
number of vectors in any of its bases.
Before we start, we first limit our attention to spaces where at least one basis
has only finitely many members.
2.1 Definition A vector space is finite-dimensional if it has a basis with only
finitely many vectors.
One space that is not finite-dimensional is the set of polynomials with real
coefficients Example 1.11; this space is not spanned by any finite subset since
that would contain a polynomial of largest degree but this space has polynomials
of all degrees. Such spaces are interesting and important but we will focus
in a different direction. From now on we will study only finite-dimensional
vector spaces. In the rest of this book we shall take vector space to mean
finite-dimensional vector space.
2.2 Remark One reason for sticking to finite-dimensional spaces is so that the
representation of a vector with respect to a basis is a finitely-tall vector and we
can easily write it. Another reason is that the statement any infinite-dimensional
vector space has a basis is equivalent to a statement called the Axiom of Choice
[Blass 1984] and so covering this would move us far past this books scope. (A
discussion of the Axiom of Choice is in the Frequently Asked Questions list for
sci.math, and another accessible one is [Rucker].)
To prove the main theorem we shall use a technical result, the Exchange
Lemma. We first illustrate it with an example.
2.3 Example Here is a basis for R3 and a vector given as a linear combination of
members of that basis.
1
1
0
1
1
1
0
B = h0 , 1 , 0i
2 = (1) 0 + 2 1 + 0 0
0
0
2
0
0
0
2
121
Two of the basis vectors have non-zero coefficients. Pick one, for instance the
first. Replace it with the vector that weve expressed as the combination
0
1
1
B = h2 , 1 , 0i
0
0
2
and the result is another basis for R3 .
~ 1, . . . ,
~ n i is a basis for a
2.4 Lemma (Exchange Lemma) Assume that B = h
~ 1 + c2
~2 + +
vector space, and that for the vector ~v the relationship ~v = c1
~
~
cn n has ci 6= 0. Then exchanging i for ~v yields another basis for the space.
= h
~ 1, . . . ,
~ i1 ,~v,
~ i+1 , . . . ,
~ n i.
Proof Call the outcome of the exchange B
is linearly independent. Any relationship d1
~1 + +
We first show that B
~
~
di~v + + dn n = 0 among the members of B, after substitution for ~v,
~ 1 + + di (c1
~ 1 + + ci
~ i + + cn
~ n ) + + dn
~ n = ~0
d1
()
~ 1 + +di~v+ +dn
~ n of [B]
that [B]
~
~
~
~
as d1 1 + +di (c1 1 + +cn n )+ +dn n , which is a linear combination
of linear combinations of members of B, and hence is in [B]. For the [B] [B]
~ 1 + +cn
~ n with ci 6= 0 then we can
half of the argument, recall that if ~v = c1
~
~
~ n.
rearrange the equation to i = (c1 /ci )1 + + (1/ci )~v + + (cn /ci )
~ 1 + + di
~ i + + dn
~ n of [B], substitute for
Now, consider any member d1
and recognize,
~
i its expression as a linear combination of the members of B,
as in the first half of this argument, that the result is a linear combination of
and hence is in [B].
~ 1, . . . ,
~ n i of minimal size. We will show
all of this spaces bases, one B = h
~
~
that any other basis D = h1 , 2 , . . .i also has the same number of members, n.
Because B has minimal size, D has no fewer than n vectors. We will argue that
it cannot have more than n vectors.
122
The basis B spans the space and ~1 is in the space, so ~1 is a nontrivial linear
combination of elements of B. By the Exchange Lemma, we can swap ~1 for a
vector from B, resulting in a basis B1 , where one element is ~1 and all of the
~
n 1 other elements are s.
The prior paragraph forms the basis step for an induction argument. The
inductive step starts with a basis Bk (for 1 6 k < n) containing k members of D
and n k members of B. We know that D has at least n members so there is a
~k+1 . Represent it as a linear combination of elements of Bk . The key point: in
that representation, at least one of the nonzero scalars must be associated with
~ i or else that representation would be a nontrivial linear relationship among
a
~ i to get a new
elements of the linearly independent set D. Exchange ~k+1 for
~ fewer than the previous basis Bk .
basis Bk+1 with one ~ more and one
~ remain, so that Bn contains ~1 , . . . , ~n . Now, D
Repeat that until no s
cannot have more than these n vectors because any ~n+1 that remains would be
in the span of Bn (since it is a basis) and hence would be a linear combination
of the other ~s, contradicting that D is linearly independent.
QED
2.6 Definition The dimension of a vector space is the number of vectors in any
of its bases.
2.7 Example Any basis for Rn has n vectors since the standard basis En has n
vectors. Thus, this definition of dimension generalizes the most familiar use of
term, that Rn is n-dimensional.
2.8 Example The space Pn of polynomials of degree at most n has dimension
n+1. We can show this by exhibiting any basis h1, x, . . . , xn i comes to mind
and counting its members.
2.9 Example The space of functions {a cos + b sin | a, b R } of the real
variable has dimension 2 since this space has the basis hcos , sin i.
2.10 Example A trivial space is zero-dimensional since its basis is empty.
Again, although we sometimes say finite-dimensional for emphasis, from
now on we take all vector spaces to be finite-dimensional. So in the next result
the word space means finite-dimensional vector space.
2.11 Corollary No linearly independent set can have a size greater than the
dimension of the enclosing space.
Proof The proof of Theorem 2.5 never uses that D spans the space, only that
it is linearly independent.
QED
2.12 Example Recall the diagram from Example I.2.19 showing the subspaces
of R3 . Each subspace is described with a minimal spanning set, a basis. The
123
whole space has a basis with three members, the plane subspaces have bases with
two members, the line subspaces have bases with one member, and the trivial
subspace has a basis with zero members. We could not in that section show that
these are R3 s only subspaces. We can show it now. The prior corollary proves
that the only subspaces of R3 are either three-, two-, one-, or zero-dimensional.
There are no subspaces somehow, say, between lines and planes.
2.13 Corollary Any linearly independent set can be expanded to make a basis.
Proof If a linearly independent set is not already a basis then it must not span
the space. Adding to the set a vector that is not in the span will preserve linear
independence by Lemma II.1.14. Keep adding until the resulting set does span
the space, which the prior corollary shows will happen after only a finite number
of steps.
QED
2.14 Corollary Any spanning set can be shrunk to a basis.
Proof Call the spanning set S. If S is empty then it is already a basis (the
space must be a trivial space). If S = {~0 } then it can be shrunk to the empty
basis, thereby making it linearly independent, without changing its span.
Otherwise, S contains a vector ~s1 with ~s1 6= ~0 and we can form a basis
B1 = h~s1 i. If [B1 ] = [S] then we are done. If not then there is a ~s2 [S] such
that ~s2 6 [B1 ]. Let B2 = h~s1 , s~2 i; by Lemma II.1.14 this is linearly independent
so if [B2 ] = [S] then we are done.
We can repeat this process until the spans are equal, which must happen in
at most finitely many steps.
QED
2.15 Corollary In an n-dimensional space, a set composed of n vectors is linearly
independent if and only if it spans the space.
Proof First we will show that a subset with n vectors is linearly independent if
and only if it is a basis. The if is trivially true bases are linearly independent.
Only if holds because a linearly independent set can be expanded to a basis,
but a basis has n elements, so this expansion is actually the set that we began
with.
To finish, we will show that any subset with n vectors spans the space if and
only if it is a basis. Again, if is trivial. Only if holds because any spanning
set can be shrunk to a basis, but a basis has n elements and so this shrunken
set is just the one we started with.
QED
124
The main result of this subsection, that all of the bases in a finite-dimensional
vector space have the same number of elements, is the single most important
result in this book. As Example 2.12 shows, it describes what vector spaces and
subspaces there can be.
One immediate consequence brings us back to when we considered the two
things that could be meant by the term minimal spanning set. At that point we
defined minimal as linearly independent but we noted that another reasonable
interpretation of the term is that a spanning set is minimal when it has the
fewest number of elements of any set with the same span. Now that we have
shown that all bases have the same number of elements, we know that the two
senses of minimal are equivalent.
Exercises
Assume that all spaces are finite-dimensional unless otherwise stated.
X 2.16 Find a basis for, and the dimension of, P2 .
2.17 Find a basis for, and the dimension of, the solution set of this system.
x1 4x2 + 3x3 x4 = 0
2x1 8x2 + 6x3 2x4 = 0
X 2.18 Find a basis for, and the dimension of, each space.
x
y
(a) {
R4 | x w + z = 0 }
z
w
(b) the set of 55 matrices whose only nonzero entries are on the diagonal (e.g.,
in entry 1, 1 and 2, 2, etc.)
(c) { a0 + a1 x + a2 x2 + a3 x3 | a0 + a1 = 0 and a2 2a3 = 0 } P3
2.19 Find a basis for, and the dimension of, M22 , the vector space of 22 matrices.
2.20 Find the dimension of the vector space of matrices
a b
c d
subject to each condition.
(a) a, b, c, d R
(b) a b + 2c = 0 and d R
(c) a + b + c = 0, a + b c = 0, and d R
X 2.21 Find the dimension of this subspace of R2 .
a+b
S={
| a, b, c R }
a+c
X 2.22 Find the dimension of each.
(a) The space of cubic polynomials p(x) such that p(7) = 0
(b) The space of cubic polynomials p(x) such that p(7) = 0 and p(5) = 0
(c) The space of cubic polynomials p(x) such that p(7) = 0, p(5) = 0, and p(3) = 0
125
(d) The space of cubic polynomials p(x) such that p(7) = 0, p(5) = 0, p(3) = 0,
and p(1) = 0
2.23 What is the dimension of the span of the set { cos2 , sin2 , cos 2, sin 2 }? This
span is a subspace of the space of all real-valued functions of one real variable.
2.24 Find the dimension of C47 , the vector space of 47-tuples of complex numbers.
2.25 What is the dimension of the vector space M35 of 35 matrices?
X 2.26 Show that this is a basis for R4 .
1
1
1
1
0 1 1 1
h
0 , 0 , 1 , 1i
0
0
0
1
(We can use the results of this subsection to simplify this job.)
2.27 Refer to Example 2.12.
(a) Sketch a similar subspace diagram for P2 .
(b) Sketch one for M22 .
X 2.28 Where S is a set, the functions f : S R form a vector space under the natural
operations: the sum f + g is the function given by f + g (s) = f(s) + g(s) and the
scalar product is r f (s) = r f(s). What is the dimension of the space resulting for
each domain?
(a) S = { 1 }
(b) S = { 1, 2 }
(c) S = { 1, . . . , n }
2.29 (See Exercise 28.) Prove that this is an infinite-dimensional space: the set of
all functions f : R R under the natural operations.
2.30 (See Exercise 28.) What is the dimension of the vector space of functions
f : S R, under the natural operations, where the domain S is the empty set?
2.31 Show that any set of four vectors in R2 is linearly dependent.
2.32 Show that h~
1 ,
~ 2,
~ 3 i R3 is a basis if and only if there is no plane through
the origin containing all three vectors.
2.33 Prove that any subspace of a finite dimensional space is finite dimensional.
2.34 Where is the finiteness of B used in Theorem 2.5?
2.35 Prove that if U and W are both three-dimensional subspaces of R5 then U W
is non-trivial. Generalize.
2.36 A basis for a space consists of elements of that space. So we are naturally led to
how the property is a basis interacts with operations and and . (Of course,
a basis is actually a sequence in that it is ordered, but there is a natural extension
of these operations.)
(a) Consider first how bases might be related by . Assume that U, W are
subspaces of some vector space and that U W. Can there exist bases BU for U
and BW for W such that BU BW ? Must such bases exist?
For any basis BU for U, must there be a basis BW for W such that BU BW ?
For any basis BW for W, must there be a basis BU for U such that BU BW ?
For any bases BU , BW for U and W, must BU be a subset of BW ?
(b) Is the of bases a basis? For what space?
(c) Is the of bases a basis? For what space?
126
III.3
We will now reconsider linear systems and Gausss Method, aided by the tools
and terms of this chapter. We will make three points.
For the first, recall the insight from the Chapter One that Gausss Method
works by taking linear combinations of rows if two matrices are related by
row operations A B then each row of B is a linear combination of
the rows of A. Therefore, the right setting in which to study row operations in
general, and Gausss Method in particular, is the following vector space.
3.1 Definition The row space of a matrix is the span of the set of its rows. The
row rank is the dimension of this space, the number of linearly independent
rows.
3.2 Example If
A=
2
4
3
6
127
i j
ki
B or A B or A
ki +j
(for i 6= j and k 6= 0) then their row spaces are equal. Hence, row-equivalent
matrices have the same row space and therefore the same row rank.
Proof Corollary One.III.2.4 shows that when A B then each row of B is a
linear combination of the rows of A. That is, in the above terminology, each row
of B is an element of the row space of A. Then Rowspace(B) Rowspace(A)
follows because a member of the set Rowspace(B) is a linear combination of the
rows of B, so it is a combination of combinations of the rows of A, and by the
Linear Combination Lemma is also a member of Rowspace(A).
For the other set containment, recall Lemma One.III.1.5, that row operations are reversible so A B if and only if B A. Then Rowspace(A)
Rowspace(B) follows as in the previous paragraph.
QED
Of course, Gausss Method performs the row operations systematically, with
the goal of echelon form.
3.4 Lemma The nonzero rows of an echelon form matrix make up a linearly
independent set.
Proof Lemma One.III.2.5 says that no nonzero row of an echelon form matrix
is a linear combination of the other rows. This result just restates that in this
chapters terminology.
QED
Thus, in the language of this chapter, Gaussian reduction works by eliminating
linear dependences among rows, leaving the span unchanged, until no nontrivial
linear relationships remain among the nonzero rows. In short, Gausss Method
produces a basis for the row space.
3.5 Example From any matrix, we can produce a basis for the row space by
performing Gausss Method and taking the nonzero rows of the resulting echelon
form matrix. For instance,
1 3 1
1 3 1
1 +2 62 +3
1 4 1
0 1 0
21 +3
2 0 5
0 0 3
produces the basis h(1 3 1), (0 1 0), (0 0 3)i for the row space. This is a basis
for the row space of both the starting and ending matrices, since the two row
spaces are equal.
128
Using this technique, we can also find bases for spans not directly involving
row vectors.
3.6 Definition The column space of a matrix is the span of the set of its columns.
The column rank is the dimension of the column space, the number of linearly
independent columns.
Our interest in column spaces stems from our study of linear systems. An
example is that this system
c1 + 3c2 + 7c3 = d1
2c1 + 3c2 + 8c3 = d2
c2 + 2c3 = d3
4c1
+ 4c3 = d4
has a solution if and only if the vector of ds is a linear combination of the other
column vectors,
1
3
7
d1
2
3
8 d
2
c1 + c2 + c3 =
0
1
2 d3
4
0
4
d4
meaning that the vector of ds is in the column space of the matrix of coefficients.
3.7 Example Given this matrix,
1
2
0
4
3
3
1
0
7
8
2
4
1 2 0 4
31 +2 22 +3
3 3 1 0
71 +3
7 8 2 4
0
0
2
3
0
0
4
1 12
0
0
1
0
2 3
h ,
i
0 1
4
12
The result is a basis for the column space of the given matrix.
129
3.8 Definition The transpose of a matrix is the result of interchanging its rows
and columns, so that column j of the matrix A is row j of AT and vice versa.
So we can summarize the prior example as transpose, reduce, and transpose
back.
We can even, at the price of tolerating the as-yet-vague idea of vector spaces
being the same, use Gausss Method to find bases for spans in other types of
vector spaces.
3.9 Example To get a basis for the span of {x2 + x4 , 2x2 + 3x4 , x2 3x4 } in
the space P4 , think of these three polynomials as the same as the row vectors
(0 0 1 0 1), (0 0 2 0 3), and (0 0 1 0 3), apply Gausss Method
0
0
0
0
0
1
2
1
0
0
0
3
3
21 +2 22 +3
1 +3
0
0
0
0
0
1
0
0
0
0
0
1
0
and translate back to get the basis hx2 + x4 , x4 i. (As mentioned earlier, we will
make the phrase the same precise at the start of the next chapter.)
Thus, the first point for this subsection is that the tools of this chapter give
us a more conceptual understanding of Gaussian reduction.
For the second point observe that row operations on a matrix can change its
column space.
!
!
1 2
1 2
21 +2
2 4
0 0
The column space of the left-hand matrix contains vectors with a second component that is nonzero but the column space of the right-hand matrix contains
only vectors whose second component is zero, so the two spaces are different.
This observation makes next result surprising.
3.10 Lemma Row operations do not change the column rank.
Proof Restated, if A reduces to B then the column rank of B equals the column
rank of A.
This proof will be finished if we show that row operations do not affect linear
relationships among columns, because the column rank is the size of the largest
set of unrelated columns. That is, we will show that a relationship exists among
columns (such as that the fifth column is twice the second plus the fourth) if and
only if that relationship exists after the row operation. But this is exactly the
130
a1,1
a1,n
0
a2,1
a2,n 0
c1
.. + + cn .. = ..
.
. .
am,1
am,n
0
row operations leave unchanged the set of solutions (c1 , . . . , cn ).
QED
Another way to make the point that Gausss Method has something to say
about the column space as well as about the row space is with Gauss-Jordan
reduction. It ends with the reduced echelon form of a matrix, as here.
1 3 1 6
1 3 0 2
2 6 3 16 0 0 1 4
1 3 1 6
0 0 0 0
Consider the row space and the column space of this result.
The first point made earlier in this subsection says that to get a basis for the
row space we can just collect the rows with leading entries. However, because
this is in reduced echelon form, a basis for the column space is just as easy: collect
the columns containing the leading entries, h~e1 , ~e2 i. Thus, for a reduced echelon
form matrix we can find bases for the row and column spaces in essentially the
same way, by taking the parts of the matrix, the rows or columns, containing
the leading entries.
3.11 Theorem For any matrix, the row rank and column rank are equal.
Proof Bring the matrix to reduced echelon form. Then the row rank equals
the number of leading entries since that equals the number of nonzero rows.
Then also, the number of leading entries equals the column rank because the
set of columns containing leading entries consists of some of the ~ei s from a
standard basis, and that set is linearly independent and spans the set of columns.
Hence, in the reduced echelon form matrix, the row rank equals the column
rank, because each equals the number of leading entries.
But Lemma 3.3 and Lemma 3.10 show that the row rank and column rank
are not changed by using row operations to get to reduced echelon form. Thus
the row rank and the column rank of the original matrix are also equal. QED
3.12 Definition The rank of a matrix is its row rank or column rank.
So the second point that we have made in this subsection is that the column
space and row space of a matrix have the same dimension.
131
Our final point is that the concepts that weve seen arising naturally in the
study of vector spaces are exactly the ones that we have studied with linear
systems.
3.13 Theorem For linear systems with n unknowns and with matrix of coefficients
A, the statements
(1) the rank of A is r
(2) the vector space of solutions of the associated homogeneous system has
dimension n r
are equivalent.
So if the system has at least one particular solution then for the set of solutions,
the number of parameters equals n r, the number of variables minus the rank
of the matrix of coefficients.
Proof The rank of A is r if and only if Gaussian reduction on A ends with r
nonzero rows. Thats true if and only if echelon form matrices row equivalent
to A have r-many leading variables. That in turn holds if and only if there are
n r free variables.
QED
3.14 Corollary Where the matrix A is nn, these statements
(1) the rank of A is n
(2) A is nonsingular
(3) the rows of A form a linearly independent set
(4) the columns of A form a linearly independent set
(5) any linear system whose matrix of coefficients is A has one and only one
solution
are equivalent.
Proof Clearly (1) (2) (3) (4). The last, (4) (5), holds
a1,1
a1,n
d1
a
a
2,1
2,n d2
+ + cn . = .
c1
.
.
. .
.
. .
am,1
am,n
dm
has a unique solution for all choices of d1 , . . . , dn R if and only if the vectors
of as on the left form a basis.
QED
3.15 Remark [Munkres] Sometimes the results of this subsection are mistakenly
remembered to say that the general solution of an n unknowns system of
132
1
3
1
(c)
6
4
7
3
8
0
(d) 0
0
(e) (1 2)
X 3.17 Decide
2
(a)
3
0 1 3
1
, (1 0)
(b) 1 0 1, (1 1 1)
1
1 2 7
X 3.18 Decide if the vector is in the column space.
1 3
1
1
1 1
1
(a)
,
(b) 2 0
4 , 0
1 1
3
1 3 3
0
X 3.19 Decide if the vector is in the column space of the matrix.
1 1
2 1
1
4 8
0
(a)
,
(b)
,
(c) 1
1
2 5
3
2 4
1
1 1
X 3.20 Find a basis for the row space of this matrix.
2 0 3
4
0 1 1 1
3 1 0
2
1
X 3.21 Find
of each matrix.
the rank
2 1 3
1 1
(a) 1 1 2
(b) 3 3
1 0 3
2 2
0 0 0
(d) 0 0 0
2
6
4
0 0 0
3.22 Give a basis for the column space of
1 3
2 1
0 1
1
2
1, 0
1
0
1
(c) 5
6
3
1
4
2
1
3
1 2
1 0
1 4
133
(c) { 1 + x, 1 x2 , 3 + 2x x2 } P3
1 0 3
1 0 5
1 0 1
(d) {
,
,
} M23
3 1 1
2 1 4
1 1 9
3.24 Give a basis for the span of each set, in the natural vector space.
1
1
0
(a) { 1 , 2 , 12 }
3
0
6
(b) { x + x2 , 2 2x, 7, 4 + 3x + 2x2 }
3.25 Which matrices have rank zero? Rank one?
X 3.26 Given a, b, c R, what choice of d will cause this matrix to have the rank of
one?
a b
c d
3.27 Find the column rank of this matrix.
1 3 1 5
2 0 1 0
0
4
4
1
3.28 Show that a linear system with at least one solution has at most one solution if
and only if the matrix of coefficients has rank equal to the number of its columns.
X 3.29 If a matrix is 59, which set must be dependent, its set of rows or its set of
columns?
3.30 Give an example to show that, despite that they have the same dimension, the
row space and column space of a matrix need not be equal. Are they ever equal?
3.31 Show that the set { (1, 1, 2, 3), (1, 1, 2, 0), (3, 1, 6, 6) } does not have the
same span as { (1, 0, 1, 0), (0, 2, 0, 3) }. What, by the way, is the vector space?
X 3.32 Show that this set of column vectors
d1
3x + 2y + 4z = d1
{ d2 | there are x, y, and z such that: x
z = d2 }
d3
2x + 2y + 5z = d3
is a subspace of R3 . Find a basis.
3.33 Show that the transpose operation is linear:
(rA + sB)T = rAT + sBT
for r, s R and A, B Mmn .
X 3.34 In this subsection we have shown that Gaussian reduction finds a basis for the
row space.
(a) Show that this basis is not unique different reductions may yield different
bases.
(b) Produce matrices with equal row spaces but unequal numbers of rows.
(c) Prove that two matrices have equal row spaces if and only if after Gauss-Jordan
reduction they have the same nonzero rows.
3.35 Why is there not a problem with Remark 3.15 in the case that r is bigger than
n?
3.36 Show that the row rank of an mn matrix is at most m. Is there a better
bound?
134
X 3.37 Show that the rank of a matrix equals the rank of its transpose.
3.38 True or false: the column space of a matrix equals the row space of its transpose.
X 3.39 We have seen that a row operation may change the column space. Must it?
3.40 Prove that a linear system has a solution if and only if that systems matrix of
coefficients has the same rank as its augmented matrix.
3.41 An mn matrix has full row rank if its row rank is m, and it has full column
rank if its column rank is n.
(a) Show that a matrix can have both full row rank and full column rank only if
it is square.
(b) Prove that the linear system with matrix of coefficients A has a solution for
any d1 , . . . , dn s on the right side if and only if A has full row rank.
(c) Prove that a homogeneous system has a unique solution if and only if its
matrix of coefficients A has full column rank.
(d) Prove that the statement if a system with matrix of coefficients A has any
solution then it has a unique solution holds if and only if A has full column
rank.
3.42 How would the conclusion of Lemma 3.3 change if Gausss Method were changed
to allow multiplying a row by zero?
X 3.43 What is the relationship between rank(A) and rank(A)? Between rank(A)
and rank(kA)? What, if any, is the relationship between rank(A), rank(B), and
rank(A + B)?
III.4
Combining Subspaces
135
is in none of the three axes and hence is not in the union. Therefore to combine
subspaces, in addition to the members of those subspaces, we must at least also
include all of their linear combinations.
4.1 Definition Where W1 , . . . , Wk are subspaces of a vector space, their sum is
the span of their union W1 + W2 + + Wk = [W1 W2 Wk ].
Writing + fits with the conventional practice of using this symbol for a natural
accumulation operation.
4.2 Example Our R3 prototype works with this. Any vector w
~ R3 is a linear
combination c1~v1 + c2~v2 + c3~v3 where ~v1 is a member of the x-axis, etc., in this
way
w1
w1
0
0
w2 = 1 0 + 1 w2 + 1 0
w3
0
0
w3
and so x-axis + y-axis + z-axis = R3 .
4.3 Example A sum of subspaces can be less than the entire space. Inside of P4 ,
let L be the subspace of linear polynomials {a + bx | a, b R} and let C be the
subspace of purely-cubic polynomials {cx3 | c R}. Then L + C is not all of P4 .
Instead, L + C = {a + bx + cx3 | a, b, c R }.
4.4 Example A space can be described as a combination of subspaces in more
than one way. Besides the decomposition R3 = x-axis + y-axis + z-axis, we can
also write R3 = xy-plane + yz-plane. To check this, note that any w
~ R3 can
be written as a linear combination of a member of the xy-plane and a member
of the yz-plane; here are two such combinations.
0
w1
w1
0
w1
w1
w2 = 1 w2 + 1 0
w2 = 1 w2 /2 + 1 w2 /2
w3
w3
0
w3
w3
0
The above definition gives one way in which we can think of a space as a
combination of some of its parts. However, the prior example shows that there is
at least one interesting property of our benchmark model that is not captured by
the definition of the sum of subspaces. In the familiar decomposition of R3 , we
often speak of a vectors x part or y part or z part. That is, in our prototype
each vector has a unique decomposition into pieces from the parts making up
the whole space. But in the decomposition used in Example 4.4, we cannot refer
to the xy part of a vector these three sums
1
1
0
1
0
1
0
2 = 2 + 0 = 0 + 2 = 1 + 1
3
0
3
0
3
0
3
136
all describe the vector as comprised of something from the first plane plus
something from the second plane, but the xy part is different in each.
That is, when we consider how R3 is put together from the three axes we
might mean in such a way that every vector has at least one decomposition,
which gives the definition above. But if we take it to mean in such a way
that every vector has one and only one decomposition then we need another
condition on combinations. To see what this condition is, recall that vectors are
uniquely represented in terms of a basis. We can use this to break a space into a
sum of subspaces such that any vector in the space breaks uniquely into a sum
of members of those subspaces.
4.5 Example Consider R3 with its standard basis E3 = h~e1 , ~e2 , ~e3 i. The subspace
with the basis B1 = h~e1 i is the x-axis, the subspace with the basis B2 = h~e2 i is
the y-axis, and the subspace with the basis B3 = h~e3 i is the z-axis. The fact
that any member of R3 is expressible as a sum of vectors from these subspaces
x
x
0
0
y = 0 + y + 0
z
0
0
z
reflects the fact that E3 spans the space this equation
x
1
0
0
=
c
+
c
+
c
y
0
1
1
2
3 0
z
0
0
1
has a solution for any x, y, z R. And the fact that each such expression is
unique reflects that fact that E3 is linearly independent, so any equation like
the one above has a unique solution.
4.6 Example We dont have to take the basis vectors one at a time, we can
conglomerate them into larger sequences. Consider again the space R3 and the
vectors from the standard basis E3 . The subspace with the basis B1 = h~e1 , ~e3 i
is the xz-plane. The subspace with the basis B2 = h~e2 i is the y-axis. As in the
prior example, the fact that any member of the space is a sum of members of
the two subspaces in one and only one way
x
x
0
y = 0 + y
z
z
0
is a reflection of the fact that these vectors form a basis this equation
x
1
0
0
y = (c1 0 + c3 0) + c2 1
z
0
1
0
137
B2
_
~ 1,1 , . . . ,
~ 1,n1 ,
~ 2,1 , . . . ,
~ k,nk i
Bk = h
4.8 Lemma Let V be a vector space that is the sum of some of its subspaces
V = W1 + + Wk . Let B1 , . . . , Bk be bases for these subspaces. The following
are equivalent.
(1) The expression of any ~v V as a combination ~v = w
~ 1 + + w
~ k with
w
~ i Wi is unique.
_
_
(2) The concatenation B1 Bk is a basis for V.
(3) Among nonzero vectors from different Wi s every linear relationship is
trivial.
Proof We will show that (1) = (2), that (2) = (3), and finally that
(3) = (1). For these arguments, observe that we can pass from a combination
~
of w
~ s to a combination of s
~ 1,1 + + c1,n1
~ 1,n1 )
d1 w
~ 1 + + dk w
~ k = d1 (c1,1
~ k,1 + + ck,nk
~ k,nk )
+ + dk (ck,1
~ 1,1 + + dk ck,nk
~ k,nk
= d1 c1,1
()
and vice versa (we can move from the bottom to the top by taking each di to
be 1).
For (1) = (2), assume that all decompositions are unique. We will show
_
_
that B1 Bk spans the space and is linearly independent. It spans the
space because the assumption that V = W1 + + Wk means that every ~v
can be expressed as ~v = w
~ 1 + + w
~ k , which translates by equation () to an
~ from the concatenation. For
expression of ~v as a linear combination of the s
linear independence, consider this linear relationship.
~0 = c1,1
~ 1,1 + + ck,nk
~ k,nk
Regroup as in () (that is, move from bottom to top) to get the decomposition
~0 = w
~ 1 + + w
~ k . Because the zero vector obviously has the decomposition
~0 = ~0 + + ~0, the assumption that decompositions are unique shows that each
~ i,1 + + ci,ni
~ i,ni = ~0, and since
w
~ i is the zero vector. This means that ci,1
each Bi is a basis we have the desired conclusion that all of the cs are zero.
For (2) = (3) assume that the concatenation of the bases is a basis for the
entire space. Consider a linear relationship among nonzero vectors from different
138
Wi s. This might or might not involve a vector from W1 , or one from W2 , etc.,
so we write it ~0 = + di w
~ i + . As in equation () expand the vector.
~0 = + di (ci,1
~ i,1 + + ci,ni
~ i,ni ) +
~ i,1 + + di ci,ni
~ i,ni +
= + di ci,1
_
QED
139
140
4.19 Example If there are more than two subspaces then having a trivial intersection is not enough to guarantee unique decomposition (i.e., is not enough to
ensure that the spaces are independent). In R3 , let W1 be the x-axis, let W2 be
the y-axis, and let W3 be this.
q
W3 = { q | q, r R}
r
The check that R3 = W1 + W2 + W3 is easy. The intersection W1 W2 W3 is
trivial, but decompositions arent unique.
x
0
0
x
xy
0
y
y = 0 + y x + x = 0 + 0 + y
z
0
0
z
0
0
z
(This example also shows that this requirement is also not enough: that all
pairwise intersections of the subspaces be trivial. See Exercise 30.)
In this subsection we have seen two ways to regard a space as built up from
component parts. Both are useful; in particular we will use the direct sum
definition at the end of the Chapter Five.
Exercises
2
X 4.20 Decide ifR
is the direct sumofeach W1 and W2 .
x
x
(a) W1 = {
| x R }, W2 = {
| x R}
0
x
s
s
(b) W1 = {
| s R }, W2 = {
| s R}
s
1.1s
(c) W1 = R2 , W2 = {~0 }
t
(d) W1 = W2 = {
| t R}
t
1
x
1
0
(e) W1 = {
+
| x R }, W2 = {
+
| y R}
0
0
0
y
X 4.21 Show that R3 is the direct sum of the xy-plane with each of these.
(a) the z-axis
(b) the line
z
{ z | z R }
z
2
4.22 Is P2 the direct sum of { a + bx | a, b R } and { cx | c R }?
X 4.23 In Pn , the even polynomials are the members of this set
E = { p Pn | p(x) = p(x) for all x }
and the odd polynomials are the members of this set.
O = { p Pn | p(x) = p(x) for all x }
Show that these are complementary subspaces.
141
142
Topic
Fields
Computations involving only integers or only rational numbers are much easier
than those with real numbers. Could other algebraic structures, such as the
integers or the rationals, work in the place of R in the definition of a vector
space?
If we take work to mean that the results of this chapter remain true then
there is a natural list of conditions that a structure (that is, number system)
must have in order to work in the place of R. A field is a set F with operations
+ and such that
(1) for any a, b F the result of a + b is in F, and a + b = b + a, and if c F
then a + (b + c) = (a + b) + c
(2) for any a, b F the result of a b is in F, and a b = b a, and if c F then
a (b c) = (a b) c
(3) if a, b, c F then a (b + c) = a b + a c
(4) there is an element 0 F such that if a F then a + 0 = a, and for each
a F there is an element a F such that (a) + a = 0
(5) there is an element 1 F such that if a F then a 1 = a, and for each
element a 6= 0 of F there is an element a1 F such that a1 a = 1.
For example, the algebraic structure consisting of the set of real numbers
along with its usual addition and multiplication operation is a field. Another
field is the set of rational numbers with its usual addition and multiplication
operations. An example of an algebraic structure that is not a field is the integers,
because it fails the final condition.
Some examples are more surprising. The set B = { 0, 1 } under these operations:
+
0
1
is a field; see Exercise 4.
0
0
1
1
1
0
0
1
0
0
0
1
0
1
144
We could in this book develop Linear Algebra as the theory of vector spaces
with scalars from an arbitrary field. In that case, almost all of the statements here
would carry over by replacing R with F, that is, by taking coefficients, vector
entries, and matrix entries to be elements of F (the exceptions are statements
involving distances or angles, which would need additional development). Here
are some examples; each applies to a vector space V over a field F.
For any ~v V and a F, (i) 0 ~v = ~0, (ii) 1 ~v +~v = ~0, and (iii) a ~0 = ~0.
The span, the set of linear combinations, of a subset of V is a subspace of
V.
Any subset of a linearly independent set is also linearly independent.
In a finite-dimensional vector space, any two bases have the same number
of elements.
(Even statements that dont explicitly mention F use field properties in their
proof.)
We will not develop vector spaces in this more general setting because the
additional abstraction can be a distraction. The ideas we want to bring out
already appear when we stick to the reals.
The exception is Chapter Five. There we must factor polynomials, so we
will switch to considering vector spaces over the field of complex numbers.
Exercises
1 Check that the real numbers form a field.
2 Prove that these are fields.
(a) The rational numbers Q
3 Give an example that shows that the integer number system is not a field.
4 Check that the set B = { 0, 1 } is a field under the operations listed above,
5 Give suitable operations to make the set { 0, 1, 2 } a field.
Topic
Crystals
This orderly outside arises from an orderly inside the way the atoms lie is
also cubical, these cubes stack in neat rows and columns, and the salt faces tend
to be just an outer layer of cubes. One cube of atoms is shown below. Salt is
sodium chloride and the small spheres shown are sodium while the big ones are
chloride. To simplify the view, it only shows the sodiums and chlorides on the
front, top, and right.
The specks of salt that we see above have many repetitions of this fundamental
unit. A solid, such as table salt, with a regular internal structure is a crystal.
We can restrict our attention to the front face. There we have a square
repeated many times giving a lattice of atoms.
146
The distance along the sides of each square cell is about 3.34 ngstroms (an
ngstrom is 1010 meters). When we want to refer to atoms in the lattice that
number is unwieldy, and so we take the squares side length as a unit. That is,
we naturally adopt this basis.
!
!
3.34
0
h
,
i
0
3.34
Now we can describe, say, the atom in the upper right of the lattice picture
~ 1 + 2
~ 2 , instead of 10.02 ngstroms over and 6.68 up.
above as 3
Another crystal from everyday experience is pencil lead. It is graphite,
formed from carbon atoms arranged in this shape.
The vectors that form the sides of that unit cell make a convenient basis. The
distance along the bottom and slant is 1.42 ngstroms, so this
!
!
1.42
0.71
h
,
i
0
1.23
Topic: Crystals
147
is a good basis.
Another familiar crystal formed from carbon is diamond. Like table salt it
is built from cubes but the structure inside each cube is more complicated. In
addition to carbons at each corner,
(To show the new face carbons clearly, the corner carbons are reduced to dots.)
There are also four more carbons inside the cube, two that are a quarter of the
way up from the bottom and two that are a quarter of the way down from the
top.
(As before, carbons shown earlier are reduced here to dots.) The distance along
any edge of the cube is 2.18 ngstroms. Thus, a natural basis for describing the
locations of the carbons and the bonds between them, is this.
2.18
0
0
h 0 , 2.18 , 0 i
0
0
2.18
The examples here show that the structures of crystals is complicated enough
to need some organized system to give the locations of the atoms and how they
are chemically bound. One tool for that organization is a convenient basis. This
application of bases is simple but it shows a science context where the idea arises
naturally.
Exercises
1 How many fundamental regions are there in one face of a speck of salt? (With a
ruler, we can estimate that face is a square that is 0.1 cm on a side.)
148
2 In the graphite picture, imagine that we are interested in a point 5.67 ngstroms
over and 3.14 ngstroms up from the origin.
(a) Express that point in terms of the basis given for graphite.
(b) How many hexagonal shapes away is this point from the origin?
(c) Express that point in terms of a second basis, where the first basis vector is
the same, but the second is perpendicular to the first (going up the plane) and
of the same length.
3 Give the locations of the atoms in the diamond cube both in terms of the basis,
and in ngstroms.
4 This illustrates how we could compute the dimensions of a unit cell from the
shape in which a substance crystallizes ([Ebbing], p. 462).
(a) Recall that there are 6.022 1023 atoms in a mole (this is Avogadros number).
From that, and the fact that platinum has a mass of 195.08 grams per mole,
calculate the mass of each atom.
(b) Platinum crystallizes in a face-centered cubic lattice with atoms at each lattice
point, that is, it looks like the middle picture given above for the diamond crystal.
Find the number of platinums per unit cell (hint: sum the fractions of platinums
that are inside of a single cell).
(c) From that, find the mass of a unit cell.
(d) Platinum crystal has a density of 21.45 grams per cubic centimeter. From
this, and the mass of a unit cell, calculate the volume of a unit cell.
(e) Find the length of each edge.
(f) Describe a natural three-dimensional basis.
Topic
Voting Paradoxes
Imagine that a Political Science class studying the American presidential process
holds a mock election. The 29 class members rank the Democratic Party,
Republican Party, and Third Party nominees, from most preferred to least
preferred (> means is preferred to).
preference order
number with
that preference
5
4
2
8
8
2
7 voters
Third
Republican
1 voter
150
choosing the Democrat as the overall winner by first asking for a vote between
the Republican and the Third, and then asking for a vote between the winner
of that contest, who will be the Republican, and the Democrat. By similar
manipulations the instructor can make any of the other two candidates come out
as the winner. (We will stick to three-candidate elections but the same thing
happens in larger elections.)
Mathematicians also study voting paradoxes simply because they are interesting. One interesting aspect is that the groups overall majority cycle occurs
despite that each single voters preference list is rational, in a straight-line order.
That is, the majority cycle seems to arise in the aggregate without being present
in the components of that aggregate, the preference lists. However we can use
linear algebra to argue that a tendency toward cyclic preference is actually
present in each voters list and that it surfaces when there is more adding of the
tendency than canceling.
For this, abbreviating the choices as D, R, and T , we can describe how a
voter with preference order D > R > T contributes to the above cycle.
D
1 voter
1 voter
R
1 voter
(The negative sign is here because the arrow describes T as preferred to D, but
this voter likes them the other way.) The descriptions for the other preference
lists are in the table on page 152.
Now, to conduct the election we linearly combine these descriptions; for
instance, the Political Science mock election
D
D
1
R
1
+4
D
1
R
1
+ + 2
R
1
151
subspace of R3 .
k
1
C = { k | k R } = {k 1 | k R}
k
1
For the second part, consider the set of vectors that are perpendicular to all of
the vectors in C. Exercise 6 shows that this is a subspace
k
c1
c1
C = { c2 | c2 k = 0 for all k R}
k
c3
c3
c1
1
1
= { c2 | c1 + c2 + c3 = 0 } = { c2 1 + c3 0 | c2 , c3 R }
c3
0
1
(read that aloud as C perp). So we are led to this basis for R3 .
1
1
1
h1 , 1 , 0 i
1
0
1
We can represent votes with respect to this basis, and thereby decompose them
into a cyclic part and an acyclic part. (Note for readers who have covered the
optional section in this chapter: that is, the space is the direct sum of C
and C .)
For example, consider the D > R > T voter discussed above. We represent it
with respect to the basis
c1 c2 c3 = 1
c1 + c2
= 1
c1
+ c3 = 1
c1 c2
2c2 +
1 +2 (1/2)2 +3
1 +3
c3 = 1
c3 = 2
(3/2)c3 = 1
4/3
1
1/3
1
1
1
1 2 2
1 = 1 + 1 + 0 = 1/3 + 2/3
3
3
3
2/3
1
0
1
1/3
1
gives the desired decomposition into a cyclic part and an acyclic part.
D
D
1
R
1
D
1/3
1/3
R
1/3
2/3
4/3
R
2/3
152
Thus we can see that this D > R > T voters rational preference list does have a
cyclic part.
The T > R > D voter is opposite to the one just considered in that the >
symbols are reversed. This voters decomposition
D
D
T
D
1/3
1/3
1/3
2/3
4/3
2/3
shows that these opposite preferences have decompositions that are opposite.
We say that the first voter has positive spin since the cycle part is with the
direction that we have chosen for the arrows, while the second voters spin is
negative.
The fact that these opposite voters cancel each other is reflected in the fact
that their vote vectors add to zero. This suggests an alternate way to tally an
election. We could first cancel as many opposite preference lists as possible, and
then determine the outcome by adding the remaining lists.
The table below contains the three pairs of opposite preference lists. For
instance, the top line contains the voters discussed above.
positive spin
negative spin
D
1
2/3
1/3
R
1
R
1/3
T
4/3
2/3
D
1/3
4/3
2/3
1/3
2/3
D
2/3
2/3
D
1/3
D
1/3
1/3
1/3
2/3
2/3
4/3
D
4/3
2/3
D
1/3
1/3
D
1/3
1/3
D
2/3
4/3
1/3
D
1/3
1/3
R
1
D
1/3
1/3
2/3
2/3
1/3
If we conduct the election as just described then after the cancellation of as many
opposite pairs of voters as possible there will remain three sets of preference
lists: one set from the first row, one from the second row, and one from the third
row. We will finish by proving that a voting paradox can happen only if the
spins of these three sets are in the same direction. That is, for a voting paradox
to occur the three remaining sets must all come from the left of the table or all
come from the right (see Exercise 3). This shows that there is some connection
R
4/3
153
between the majority cycle and the decomposition that we are using a voting
paradox can happen only when the tendencies toward cyclic preference reinforce
each other.
For the proof, assume that we have canceled opposite preference orders and
we are left with one set of preference lists from each of the three rows. Consider
the sum of these three (here, the numbers a, b, and c could be positive, negative,
or zero).
D
T
R
a
D
a
R
b
D
c
R
c
ab+c
a + b + c
a+bc
A voting paradox occurs when the three numbers on the right, a b + c and
a + b c and a + b + c, are all nonnegative or all nonpositive. On the left,
at least two of the three numbers a and b and c are both nonnegative or both
nonpositive. We can assume that they are a and b. That makes four cases: the
cycle is nonnegative and a and b are nonnegative, the cycle is nonpositive and
a and b are nonpositive, etc. We will do only the first case, since the second is
similar and the other two are also easy.
So assume that the cycle is nonnegative and that a and b are nonnegative.
The conditions 0 6 a b + c and 0 6 a + b + c add to give that 0 6 2c, which
implies that c is also nonnegative, as desired. That ends the proof.
This result says only that having all three spin in the same direction is a
necessary condition for a majority cycle. It is not sufficient; see Exercise 4.
Voting theory and associated topics are the subject of current research. There
are many intriguing results, notably the one produced by K Arrow [Arrow] who
won the Nobel Prize in part for this work, showing that no voting system is
entirely fair (for a reasonable definition of fair). Some good introductory articles are [Gardner, 1970], [Gardner, 1974], [Gardner, 1980], and [Neimi & Riker].
[Taylor] is a readable recent book. The long list of cases from recent American political history in [Poundstone] shows these paradoxes are routinely manipulated
in practice.
This Topic is largely drawn from [Zwicker]. (Authors Note: I would like
to thank Professor Zwicker for his kind and illuminating discussions.)
Exercises
1 Here is a reasonable way in which a voter could have a cyclic preference. Suppose
that this voter ranks each candidate on each of three criteria.
(a) Draw up a table with the rows labeled Democrat, Republican, and Third,
and the columns labeled character, experience, and policies. Inside each
column, rank some candidate as most preferred, rank another as in the middle,
and rank the remaining one as least preferred.
154
Topic
Dimensional Analysis
You cant add apples and oranges, the old saying goes. It reflects our experience
that in applications the quantities have units and keeping track of those units
can help. Everyone has done calculations such as this one that use the units as
a check.
min
hr
day
sec
sec
60
60
24
365
= 31 536 000
min
hr
day
year
year
We can take the idea of including the units beyond bookkeeping. We can use
units to draw conclusions about what relationships are possible among the
physical quantities.
To start, consider the falling body equation distance = 16 (time)2 . If the
distance is in feet and the time is in seconds then this is a true statement.
However it is not correct in other unit systems, such as meters and seconds,
because 16 isnt the right constant in those systems. We can fix that by attaching
units to the 16, making it a dimensional constant .
dist = 16
ft
(time)2
sec2
Now the equation holds also in the meter-second system because when we align
the units (a foot is approximately 0.30 meters),
distance in meters = 16
0.30 m
m
(time in sec)2 = 4.8
(time in sec)2
2
sec
sec2
the constant gets adjusted. So in order to have equations that are correct across
unit systems, we restrict our attention to those that use dimensional constants.
Such an equation is complete.
Moving away from a particular unit system allows us to just measure quantities in combinations of some units of length L, mass M, and time T . These
three are our physical dimensions. For instance, we could measure velocity in
feet/second or fathoms/hour but at all events it involves a unit of length divided
by a unit of time so the dimensional formula of velocity is L/T . Similarly,
densitys dimensional formula is M/L3 .
156
which is the ratio of a length to a length (L1 M0 T 0 )(L1 M0 T 0 )1 and thus angles
have the dimensional formula L0 M0 T 0 .
The classic example of using the units for more than bookkeeping, using
them to draw conclusions, considers the formula for the period of a pendulum.
p = some expression involving the length of the string, etc.
The period is in units of time L0 M0 T 1 . So the quantities on the other side of
the equation must have dimensional formulas that combine in such a way that
their Ls and Ms cancel and only a single T remains. The table on page 157 has
the quantities that an experienced investigator would consider possibly relevant
to the period of a pendulum. The only dimensional formulas involving L are for
the length of the string and the acceleration due to gravity. For the Ls of these
two to cancel when they appear in the equation they must be in ratio, e.g., as
(`/g)2 , or as cos(`/g), or as (`/g)1 . Therefore the period is a function of `/g.
This is a remarkable result: with a pencil and paper analysis, before we ever
took out the pendulum and made measurements, we have determined something
about what makes up its period.
To do dimensional analysis systematically, we need two facts (arguments
for these are in [Bridgman], Chapter II and IV). The first is that each equation
relating physical quantities that we shall see involves a sum of terms, where each
term has the form
p2
pk
1
mp
1 m2 mk
157
pk
2
where 1 = m1 mp
2 mk is dimensionless and the products 2 , . . . , n
dont involve m1 (as with f, here f is an arbitrary function, this time of n 1
arguments). Thus, to do dimensional analysis we should find which dimensionless
products are possible.
For example, again consider the formula for a pendulums period.
quantity
period p
length of string `
mass of bob m
acceleration due to gravity g
arc of swing
dimensional
formula
L0 M0 T 1
L1 M0 T 0
L0 M1 T 0
L1 M0 T 2
L0 M0 T 0
By the first fact cited above, we expect the formula to have (possibly sums of
terms of) the form pp1 `p2 mp3 gp4 p5 . To use the second fact, to find which
combinations of the powers p1 , . . . , p5 yield dimensionless products, consider
this equation.
(L0 M0 T 1 )p1 (L1 M0 T 0 )p2 (L0 M1 T 0 )p3 (L1 M0 T 2 )p4 (L0 M0 T 0 )p5 = L0 M0 T 0
It gives three conditions on the powers.
p2
p3
p1
+ p4 = 0
=0
2p4 = 0
158
Note that p3 = 0 so the mass of the bob does not affect the period. Gaussian
reduction and parametrization of that system gives this
p1
1
0
p 1/2
0
2
{ p3 = 0 p1 + 0 p5 | p1 , p5 R }
p4 1/2
0
p5
0
1
(weve taken p1 as one of the parameters in order to express the period in terms
of the other quantities).
The set of dimensionless products contains all terms pp1 `p2 mp3 ap4 p5
subject to the conditions above. This set forms a vector space under the +
operation of multiplying two such products and the operation of raising such
a product to the power of the scalar (see Exercise 5). The term complete set of
dimensionless products in Buckinghams Theorem means a basis for this vector
space.
We can get a basis by first taking p1 = 1, p5 = 0, and then taking p1 = 0,
p5 = 1. The associated dimensionless products are 1 = p`1/2 g1/2 and 2 = .
Because the set {1 , 2 } is complete, Buckinghams Theorem says that
p
= `/g f()
dimensional
formula
L0 M0 T 1
L1 M0 T 0
L0 M1 T 0
L0 M1 T 0
L3 M1 T 2
159
+ 3p5 = 0
p3 + p4 p5 = 0
2p5 = 0
1
0
3/2
0
{ 1/2 p1 + 1 p4 | p1 , p4 R }
0
1
1/2
0
As earlier, the set of dimensionless products of these quantities forms a
vector space and we want to produce a basis for that space, a complete set of
dimensionless products. One such set, gotten from setting p1 = 1 and p4 = 0
1/2
and also setting p1 = 0 and p4 = 1 is { 1 = pr3/2 m1 G1/2 , 2 = m1
1 m2 }.
With that, Buckinghams Theorem says that any complete relationship among
these quantities is stateable this form.
1/2
p = r3/2 m1
3/2
1 m2 ) = r
2 /m1 )
G1/2 f(m
f(m
1
Gm1
160
quantity
velocity of the wave v
density of the water d
acceleration due to gravity g
wavelength
dimensional
formula
L1 M0 T 1
L3 M1 T 0
L1 M0 T 2
L1 M0 T 0
The equation
(L1 M0 T 1 )p1 (L3 M1 T 0 )p2 (L1 M0 T 2 )p3 (L1 M0 T 0 )p4 = L0 M0 T 0
gives this system
p1 3p2 + p3 + p4 = 0
p2
=0
p1
2p3
=0
with this solution space.
1
0
{
p1 | p1 R }
1/2
1/2
161
(b) These two equations of motion for projectiles are familiar: x = v0 cos()t and
y = v0 sin()t (g/2)t2 . Manipulate each to rewrite it as a relationship among
the dimensionless products of the prior item.
2 [Einstein] conjectured that the infrared characteristic frequencies of a solid might
be determined by the same forces between atoms as determine the solids ordinary
elastic behavior. The relevant quantities are these.
dimensional
quantity formula
characteristic frequency L0 M0 T 1
compressibility k L1 M1 T 2
number of atoms per cubic cm N L3 M0 T 0
mass of an atom m L0 M1 T 0
Show that there is one dimensionless product. Conclude that, in any complete
relationship among quantities with these dimensional formulas, k is a constant
times 2 N1/3 m1 . This conclusion played an important role in the early study
of quantum phenomena.
3 [Giordano, Wells, Wilde] The torque produced by an engine has dimensional
formula L2 M1 T 2 . We may first guess that it depends on the engines rotation
rate (with dimensional formula L0 M0 T 1 ), and the volume of air displaced (with
dimensional formula L3 M0 T 0 ).
(a) Try to find a complete set of dimensionless products. What goes wrong?
(b) Adjust the guess by adding the density of the air (with dimensional formula
L3 M1 T 0 ). Now find a complete set of dimensionless products.
4 [Tilley] Dominoes falling make a wave. We may conjecture that the wave speed v
depends on the spacing d between the dominoes, the height h of each domino, and
the acceleration due to gravity g.
(a) Find the dimensional formula for each of the four quantities.
(b) Show that { 1 = h/d, 2 = dg/v2 } is a complete set of dimensionless products.
(c) Show that if h/d is fixed then the propagation speed is proportional to the
square root of d.
~ operation
5 Prove that the dimensionless products form a vector space under the +
of multiplying two such products and the ~ operation of raising such the product
to the power of the scalar. (The vector arrows are a precaution against confusion.)
That is, prove that, for any particular homogeneous system, this set of products of
powers of m1 , . . . , mk
p
p +q1
~ m1 1 . . . m k k = m1 1
m1 1 . . . m k k +
p +qk
. . . mk k
and
p
rp1
r~(m1 1 . . . mkk ) = m1
rpk
. . . mk
162
Chapter Three
Isomorphisms
I.1
164
1.1 Example The space of two-wide row vectors and the space of two-tall column
vectors are the same in that if we associate the vectors that have the same
components, e.g.,
!
1
(1 2)
2
(read the double arrow as corresponds to) then this association respects the
operations. For instance these corresponding vectors add to corresponding totals
!
!
!
1
3
4
(1 2) + (3 4) = (4 6)
+
=
2
4
6
and here is an example of the correspondence respecting scalar multiplication.
!
!
5
1
=
5 (1 2) = (5 10) 5
2
10
Stated generally, under the correspondence
(a0 a1 )
a0
a1
a0
a1
and
r (a0 a1 ) = (ra0 ra1 )
a0
a1
b0
b1
!
=
!
=
ra0
ra1
a0 + b0
a1 + b1
a0 + a1 x + a2 x2
a0
b0
a0 + b0
+ b0 + b1 x + b2 x2 a1 + b1 = a1 + b1
a2
b2
a2 + b2
(a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2
Section I. Isomorphisms
165
a0
ra0
r a1 = ra1
a2
ra2
!
=
b1
b2
166
To prove that f is onto we must check that any member of the codomain R2
is the image of some member of the domain G. So, consider a member of the
codomain
!
x
y
and note that it is the image under f of x cos + y sin .
Next we will verify condition (2), that f preserves structure. This computation
shows that f preserves addition.
f (a1 cos + a2 sin ) + (b1 cos + b2 sin )
c1 x + c2 y + c3 z
1
7
c1 + c2 x + c3 x2
f2
c2 + c3 x + c1 x2
f3
c1 c2 x c3 x2
f4
7
7
7
Section I. Isomorphisms
167
The first map is the more natural correspondence in that it just carries the
coefficients over. However we shall do f2 to underline that there are isomorphisms
other than the obvious one. (Checking that f1 is an isomorphism is Exercise 14.)
To show that f2 is one-to-one we will prove that if f2 (c1 x + c2 y + c3 z) =
f2 (d1 x + d2 y + d3 z) then c1 x + c2 y + c3 z = d1 x + d2 y + d3 z. The assumption
that f2 (c1 x + c2 y + c3 z) = f2 (d1 x + d2 y + d3 z) gives, by the definition of f2 , that
c2 + c3 x + c1 x2 = d2 + d3 x + d1 x2 . Equal polynomials have equal coefficients so
c2 = d2 , c3 = d3 , and c1 = d1 . Hence f2 (c1 x+c2 y+c3 z) = f2 (d1 x+d2 y+d3 z)
implies that c1 x + c2 y + c3 z = d1 x + d2 y + d3 z, and f2 is one-to-one.
The map f2 is onto because a member a + bx + cx2 of the codomain is the
image of a member of the domain, namely it is f2 (cx + ay + bz). For instance,
2 + 3x 4x2 is f2 (4x + 2y + 3z).
The computations for structure preservation are like those in the prior
example. The map f2 preserves addition
f2 (c1 x + c2 y + c3 z) + (d1 x + d2 y + d3 z)
d1.5
d1.5 (~v)
168
t/6
~u
~u
p0
p1
Section I. Isomorphisms
169
picture centered at x = 0 and the other centered at x = 1, then the two pictures
would be indistinguishable.
As described in the opening to this section, having given the definition of
isomorphism, we next look to support the thesis that it captures our intuition of
vector spaces being the same. First, the definition itself is persuasive: a vector
space consists of a set and some structure and the definition simply requires that
the sets correspond and that the structures correspond also. Also persuasive
are the examples above, such as Example 1.1, which dramatize that isomorphic
W,
spaces are the same in all relevant respects. Sometimes people say, where V =
that W is just V painted green differences are merely cosmetic.
The results below further support our contention that under an isomorphism
all the things of interest in the two vector spaces correspond. Because we
introduced vector spaces to study linear combinations, of interest means
pertaining to linear combinations. Not of interest is the way that the vectors
are presented typographically (or their color!).
1.10 Lemma An isomorphism maps a zero vector to a zero vector.
Proof Where f : V W is an isomorphism, fix some ~v V. Then f(~0V ) =
QED
1.11 Lemma For any map f : V W between vector spaces these statements are
equivalent.
(1) f preserves structure
f(~v1 + ~v2 ) = f(~v1 ) + f(~v2 )
and
f(c~v) = c f(~v)
only show that (1) = (3). So assume statement (1). We will prove (3) by
induction on the number of summands n.
The one-summand base case, that f(c~v1 ) = c f(~v1 ), is covered by the second
clause of statement (1).
For the inductive step assume that statement (3) holds whenever there are k
or fewer summands. Consider the k + 1-summand case. Use the first half of (1)
170
QED
We often use item (2) to simplify the verification that a map preserves structure.
Finally, a summary. In the prior chapter, after giving the definition of a
vector space, we looked at examples and noted that some spaces seemed to be
and have
essentially the same as others. Here we have defined the relation =
argued that it is the right way to precisely say what we mean by the same
because it preserves the features of interest in a vector space in particular, it
preserves linear combinations. In the next section we will show that isomorphism
is an equivalence relation and so partitions the collection of vector spaces.
Exercises
X 1.12 Verify, using Example 1.4 as a model, that the two correspondences given before
the definition are isomorphisms.
(a) Example 1.1
(b) Example 1.2
X 1.13 For the map f : P1 R2 given by
ab
f
a + bx 7
b
Find the image of each of these elements of the domain.
(a) 3 2x
(b) 2 + 2x
(c) x
Show that this map is an isomorphism.
1.14 Show that the natural map f1 from Example 1.5 is an isomorphism.
1.15 Show that the map t : P2 P2 given by t(ax2 + bx + c) = bx2 (a + c)x + a is
an isomorphism.
1.16 Verify that this map is an isomorphism: h : R4 M22 given by
a
b
7 c a + d
c
b
d
d
X 1.17 Decide whether each map is an isomorphism (if it is an isomorphism then prove
it and if it isnt then state a condition that it fails to satisfy).
(a) f : M22 R given by
a b
7 ad bc
c d
Section I. Isomorphisms
171
a+b+c+d
a+b+c
a+b
a
a
c
b
d
c d
(d) f : M22 P3 given by
a b
7 c + (d + c)x + (b + a + 1)x2 + ax3
c d
1.18 Show that the map f : R1 R1 given by f(x) = x3 is one-to-one and onto. Is it
an isomorphism?
X 1.19 Refer to Example 1.1. Produce two more isomorphisms (of course, you must
also verify that they satisfy the conditions in the definition of isomorphism).
1.20 Refer to Example 1.2. Produce two more isomorphisms (and verify that they
satisfy the conditions).
X 1.21 Show that, although R2 is not itself a subspace of R3 , it is isomorphic to the
xy-plane subspace of R3 .
1.22 Find two isomorphisms between R16 and M44 .
X 1.23 For what k is Mmn isomorphic to Rk ?
1.24 For what k is Pk isomorphic to Rn ?
1.25 Prove that the map in Example 1.9, from P5 to P5 given by p(x) 7 p(x 1),
is a vector space isomorphism.
1.26 Why, in Lemma 1.10, must there be a ~v V? That is, why must V be
nonempty?
1.27 Are any two trivial spaces isomorphic?
1.28 In the proof of Lemma 1.11, what about the zero-summands case (that is, if n
is zero)?
1.29 Show that any isomorphism f : P0 R1 has the form a 7 ka for some nonzero
real number k.
X 1.30 These prove that isomorphism is an equivalence relation.
(a) Show that the identity map id : V V is an isomorphism. Thus, any vector
space is isomorphic to itself.
(b) Show that if f : V W is an isomorphism then so is its inverse f1 : W V.
Thus, if V is isomorphic to W then also W is isomorphic to V.
(c) Show that a composition of isomorphisms is an isomorphism: if f : V W is
an isomorphism and g : W U is an isomorphism then so also is g f : V U.
Thus, if V is isomorphic to W and W is isomorphic to U, then also V is isomorphic
to U.
1.31 Suppose that f : V W preserves structure. Show that f is one-to-one if and
only if the unique member of V mapped by f to ~0W is ~0V .
172
1.32 Suppose that f : V W is an isomorphism. Prove that the set {~v1 , . . . ,~vk } V
is linearly dependent if and only if the set of images { f(~v1 ), . . . , f(~vk ) } W is
linearly dependent.
X 1.33 Show that each type of map from Example 1.8 is an automorphism.
(a) Dilation ds by a nonzero scalar s.
(b) Rotation t through an angle .
(c) Reflection f` over a line through the origin.
Hint. For the second and third items, polar coordinates are useful.
1.34 Produce an automorphism of P2 other than the identity map, and other than a
shift map p(x) 7 p(x k).
1.35 (a) Show that a function f : R1 R1 is an automorphism if and only if it has
the form x 7 kx for some k 6= 0.
(b) Let f be an automorphism of R1 such that f(3) = 7. Find f(2).
(c) Show that a function f : R2 R2 is an automorphism if and only if it has the
form
x
ax + by
7
y
cx + dy
for some a, b, c, d R with ad bc 6= 0. Hint. Exercises in prior subsections
have shown that
b
a
is not a multiple of
d
c
if and only if ad bc 6= 0.
(d) Let f be an automorphism of R2 with
1
2
f(
)=
and
3
1
1
0
f(
)=
.
4
1
Find
0
f(
).
1
1.36 Refer to Lemma 1.10 and Lemma 1.11. Find two more things preserved by
isomorphism.
1.37 We show that isomorphisms can be tailored to fit in that, sometimes, given
vectors in the domain and in the range we can produce an isomorphism associating
those vectors.
~ 1,
~ 2,
~ 3 i be a basis for P2 so that any ~p P2 has a unique
(a) Let B = h
~ 1 + c2
~ 2 + c3
~ 3 , which we denote in this way.
representation as ~p = c1
c1
RepB (~p) = c2
c3
Show that the RepB () operation is a function from P2 to R3 (this entails showing
that with every domain vector ~v P2 there is an associated image vector in R3 ,
and further, that with every domain vector ~v P2 there is at most one associated
image vector).
(b) Show that this RepB () function is one-to-one and onto.
(c) Show that it preserves structure.
Section I. Isomorphisms
173
and
r (~u, w
~ ) = (r~u, r~
w)
(~u, w
~ ) 7 ~u + w
~
is an isomorphism. Thus if the internal direct sum is defined then the internal
and external direct sums are isomorphic.
I.2
In the prior subsection, after stating the definition of isomorphism, we gave some
results supporting our sense that such a map describes spaces as the same.
Here we will develop this intuition. When two (unequal) spaces are isomorphic
we think of them as almost equal, as equivalent. We shall make that precise by
proving that the relationship is isomorphic to is an equivalence relation.
2.1 Lemma The inverse of an isomorphism is also an isomorphism.
Proof Suppose that V is isomorphic to W via f : V W. An isomorphism is a
174
QED
V
W
...
W
V=
The next result characterizes these classes by dimension. That is, we can describe
each class simply by giving the number that is the dimension of all of the spaces
in that class.
Section I. Isomorphisms
175
2.3 Theorem Vector spaces are isomorphic if and only if they have the same
dimension.
In this double implication statement the proof of each half involves a significant idea so we will do the two separately.
2.4 Lemma If spaces are isomorphic then they have the same dimension.
Proof We shall show that an isomorphism of two spaces gives a correspon-
we will have that all such spaces are isomorphic to each other by transitivity,
which was shown in Theorem 2.2.
~ 1, . . . ,
~ n i for the domain V.
Let V be n-dimensional. Fix a basis B = h
Consider the operation of representing the members of V with respect to B as a
function from V to Rn .
v1
RepB .
~
~
~v = v1 1 + + vn n 7 ..
vn
It is well-defined since every ~v has one and only one such representation (see
Remark 2.6 following this proof).
176
then
u1
v1
.. ..
. = .
un
vn
~1 +
and so u1 = v1 , . . . , un = vn , implying that the original arguments u1
~ n and v1
~ 1 + + vn
~ n are equal.
+ un
This function is onto; any member of Rn
w1
.
w
~ = ..
wn
~ 1 + + wn
~ n ).
is the image of some ~v V, namely w
~ = RepB (w1
Finally, this function preserves structure.
~ 1 + + (run + svn )
~n )
RepB (r ~u + s ~v) = RepB ( (ru1 + sv1 )
ru1 + sv1
..
=
.
run + svn
u1
v1
..
..
=r . +s .
un
vn
Section I. Isomorphisms
177
The worry that there is no more than one associated member of the codomain
is subtler. A contrasting example, where an association fails this unique output
requirement, illuminates the issue. Let the domain be P2 and consider a set that
is not a basis (it is not linearly independent, although it does span the space).
A = {1 + 0x + 0x2 , 0 + 1x + 0x2 , 0 + 0x + 1x2 , 1 + 1x + 2x2 }
Call those polynomials
~ 1, . . . ,
~ 4 . In contrast to the situation when the set
is a basis, here there can be more than one expression of a domain vector in
terms of members of the set. For instance, consider ~v = 1 + x + x2 . Here are
two different expansions.
~v = 1~
1 + 1~
2 + 1~
3 + 0~
4
~v = 0~
1 + 0~
2 1~
3 + 1~
4
0
0
1
1
Thus, with A the association is not well-defined. (The issue is that A is not
linearly independent; to show uniqueness Theorem Two.III.1.12s proof uses only
linear independence.)
In general, any time that we define a function we must check that output
values are well-defined. Most of the time that condition is perfectly obvious but
in the above proof it needs verification. See Exercise 19.
2.7 Corollary A finite-dimensional vector space is isomorphic to one and only one
of the Rn .
This gives us a collection of representatives of the isomorphism classes.
All finite dimensional
vector spaces:
? R0
? R2
? R1
? R3
...
One representative
per class
The proofs above pack many ideas into a small space. Through the rest of
this chapter well consider these ideas again, and fill them out. As a taste of
this we will expand here on the proof of Lemma 2.5.
178
!
0
0
,
0
0
!
0
i
1
the isomorphism given in the lemma, the representation map f1 = RepB , carries
the entries over.
a
!
a b f1
b
7
c
c d
d
One way to think of the map f1 is: fix the basis B for the domain, use the
~ 1 with ~e1 ,
~ 2 with ~e2 , etc.
standard basis E4 for the codomain, and associate
Then extend this association to all of the members of two spaces.
a
!
b
a b
f1
~ 1 + b
~ 2 + c
~ 3 + d
~ 4 7 a~e1 + b~e2 + c~e3 + d~e4 =
= a
c
c d
d
We can do the same thing with different bases, for instance, taking this basis
for the domain.
!
!
!
!
2 0
0 2
0 0
0 0
A=h
,
,
,
i
0 0
0 0
2 0
0 2
Associating corresponding members of A and E4 gives this.
!
a b
= (a/2)~
1 + (b/2)~
2 + (c/2)~
3 + (d/2)~
4
c d
a/2
b/2
f2
c/2
d/2
gives rise to an isomorphism that is different than f1 .
The prior map arose by changing the basis for the domain. We can also
change the basis for the codomain. Go back to the basis B above and use this
basis for the codomain.
1
0
0
0
0 1 0 0
D = h , , , i
0 0 0 1
0
0
1
0
Section I. Isomorphisms
179
b
d
!
~ 1 + b
~ 2 + c
~ 3 + d
~4
= a
a
b
f3
7
We close with a recap. Recall that the first chapter defines two matrices to be
row equivalent if they can be derived from each other by row operations. There
we showed that relation is an equivalence and so the collection of matrices is
partitioned into classes, where all the matrices that are row equivalent together
fall into a single class. Then for insight into which matrices are in each class we
gave representatives for the classes, the reduced echelon form matrices.
In this section we have followed that pattern except that the notion here of 'the same' is vector space isomorphism. We defined it and established some properties, including that it is an equivalence. Then, as before, we developed a list of class representatives to help us understand the partition: it classifies vector spaces by dimension.
In Chapter Two, with the definition of vector spaces, we seemed to have opened up our studies to many examples of new structures besides the familiar Rn's. We now know that isn't the case. Any finite-dimensional vector space is actually 'the same' as a real space.
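As an illustration outside the text, here is a short Python/NumPy sketch of the representation maps f1 and f2 from the example above: the same 2×2 matrix gets different coordinate columns with respect to the two bases, so the two isomorphisms with R4 differ.

import numpy as np
def rep(basis, m):
    # coordinates of m with respect to a basis of 2x2 matrices, by solving a linear system
    cols = np.column_stack([b.reshape(-1) for b in basis])
    return np.linalg.solve(cols, m.reshape(-1))
B = [np.array([[1.,0.],[0.,0.]]), np.array([[0.,1.],[0.,0.]]),
     np.array([[0.,0.],[1.,0.]]), np.array([[0.,0.],[0.,1.]])]
A = [2*b for b in B]
m = np.array([[1., 2.], [3., 4.]])
print(rep(B, m))   # [1. 2. 3. 4.]   -- the map f1
print(rep(A, m))   # [0.5 1. 1.5 2.] -- the map f2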
Exercises
X 2.9 Decide if the spaces are isomorphic.
(a) R2 , R4
(b) P5 , R5
(c) M23 , R6
(d) P5 , M23
(e) M2×k , Ck
X 2.10 Consider the isomorphism RepB () : P1 R2 where B = h1, 1 + xi. Find the
image of each of these elements of the domain.
(a) 3 − 2x   (b) 2 + 2x   (c) x
X 2.11 Show that if m ≠ n then Rm is not isomorphic to Rn .
X 2.12 Is Mm×n ≅ Mn×m ?
X 2.13 Are any two planes through the origin in R3 isomorphic?
2.14 Find a set of equivalence class representatives other than the set of Rn s.
2.15 True or false: between any n-dimensional space and Rn there is exactly one
isomorphism.
2.16 Can a vector space be isomorphic to one of its (proper) subspaces?
X 2.17 This subsection shows that for any isomorphism, the inverse map is also an
isomorphism. This subsection also shows that for a fixed basis B of an n-dimensional
vector space V, the map RepB : V Rn is an isomorphism. Find the inverse of
this map.
X 2.18 Prove these facts about matrices.
(a) The row space of a matrix is isomorphic to the column space of its transpose.
for W. Must f : V → W sending ~v = c1~β1 + · · · + cn~βn to c1 f(~β1 ) + · · · + cn f(~βn ) be an isomorphism?
2.25 (Requires the subsection on Combining Subspaces, which is optional.) Suppose that V = V1 ⊕ V2 and that V is isomorphic to the space U under the map f. Show that U = f(V1 ) ⊕ f(V2 ).
2.26 Show that this is not a well-defined function from the rational numbers to the
integers: with each fraction, associate the value of its numerator.
II Homomorphisms
The definition of isomorphism has two conditions. In this section we will consider
the second one. We will study maps that are required only to preserve structure,
maps that are not also required to be correspondences.
Experience shows that these maps are tremendously useful. For one thing
we shall see in the second subsection below that while isomorphisms describe
how spaces are the same, we can think of these maps as describing how spaces
are alike.
II.1 Definition
1.1 Definition A function between vector spaces h : V → W that preserves addition, if ~v1 , ~v2 ∈ V then h(~v1 + ~v2 ) = h(~v1 ) + h(~v2 ), and scalar multiplication, if ~v ∈ V and r ∈ R then h(r · ~v) = r · h(~v), is a homomorphism or linear map.
1.2 Example The projection map π : R3 → R2
\[ \begin{pmatrix}x\\y\\z\end{pmatrix} \xmapsto{\pi} \begin{pmatrix}x\\y\end{pmatrix} \]
is a homomorphism. It preserves addition
\[ \pi\bigl(\begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}+\begin{pmatrix}x_2\\y_2\\z_2\end{pmatrix}\bigr) = \pi\bigl(\begin{pmatrix}x_1+x_2\\y_1+y_2\\z_1+z_2\end{pmatrix}\bigr) = \begin{pmatrix}x_1+x_2\\y_1+y_2\end{pmatrix} = \pi\bigl(\begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}\bigr)+\pi\bigl(\begin{pmatrix}x_2\\y_2\\z_2\end{pmatrix}\bigr) \]
and scalar multiplication.
\[ \pi\bigl(r\cdot\begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}\bigr) = \pi\bigl(\begin{pmatrix}rx_1\\ry_1\\rz_1\end{pmatrix}\bigr) = \begin{pmatrix}rx_1\\ry_1\end{pmatrix} = r\cdot\pi\bigl(\begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}\bigr) \]
This is not an isomorphism since it is not one-to-one. For instance, both ~0 and
~e3 in R3 map to the zero vector in R2 .
1.3 Example The domain and codomain can be other than spaces of column
vectors. Both of these are homomorphisms; the verifications are straightforward.
(1) f1 : P2 → P3 given by
\[ a_0 + a_1x + a_2x^2 \mapsto a_0x + (a_1/2)x^2 + (a_2/3)x^3 \]
(2) f2 : M2×2 → R given by
\[ \begin{pmatrix}a&b\\c&d\end{pmatrix} \mapsto a + d \]
1.4 Example Between any two spaces there is a zero homomorphism, mapping
every vector in the domain to the zero vector in the codomain.
1.5 Example These two suggest why we use the term linear map.
(1) The map g : R3 → R given by
\[ \begin{pmatrix}x\\y\\z\end{pmatrix} \xmapsto{g} 3x + 2y - 4.5z \]
is linear, that is, is a homomorphism. The check is easy. In contrast, the map ĝ : R3 → R given by
\[ \begin{pmatrix}x\\y\\z\end{pmatrix} \xmapsto{\hat g} 3x + 2y - 4.5z + 1 \]
is not linear. To show this we need only produce a single linear combination that the map does not preserve. Here is one.
\[ \hat g\bigl(\begin{pmatrix}0\\0\\0\end{pmatrix}+\begin{pmatrix}1\\0\\0\end{pmatrix}\bigr) = 4 \qquad \hat g\bigl(\begin{pmatrix}0\\0\\0\end{pmatrix}\bigr)+\hat g\bigl(\begin{pmatrix}1\\0\\0\end{pmatrix}\bigr) = 5 \]
(2) The first of these two maps t1 , t2 : R3 → R2 is linear while the second is not.
\[ \begin{pmatrix}x\\y\\z\end{pmatrix} \xmapsto{t_1} \begin{pmatrix}5x-2y\\x+y\end{pmatrix} \qquad \begin{pmatrix}x\\y\\z\end{pmatrix} \xmapsto{t_2} \begin{pmatrix}5x-2y\\xy\end{pmatrix} \]
Finding a linear combination that the second map does not preserve is easy.
So one way to think of homomorphism is that we are generalizing isomorphism (by dropping the condition that the map is a correspondence), motivated
by the observation that many of the properties of isomorphisms have only to
do with the map's structure-preservation property. The next two results are
examples of this motivation. In the prior section we saw a proof for each that
only uses preservation of addition and preservation of scalar multiplication, and
therefore applies to homomorphisms.
1.6 Lemma A homomorphism sends the zero vector to the zero vector.
1.7 Lemma The following are equivalent for any map f : V → W between vector spaces.
(1) f is a homomorphism
(2) f(c1 ~v1 + c2 ~v2 ) = c1 f(~v1 ) + c2 f(~v2 ) for any c1 , c2 ∈ R and ~v1 , ~v2 ∈ V
(3) f(c1 ~v1 + · · · + cn ~vn ) = c1 f(~v1 ) + · · · + cn f(~vn ) for any c1 , . . . , cn ∈ R and ~v1 , . . . , ~vn ∈ V
1.8 Example The function f : R2 → R4 given by
\[ \begin{pmatrix}x\\y\end{pmatrix} \xmapsto{f} \begin{pmatrix}x/2\\0\\x+y\\3y\end{pmatrix} \]
is linear since it satisfies item (2).
\[ \begin{pmatrix}r_1(x_1/2)+r_2(x_2/2)\\0\\r_1(x_1+y_1)+r_2(x_2+y_2)\\r_1(3y_1)+r_2(3y_2)\end{pmatrix} = r_1\begin{pmatrix}x_1/2\\0\\x_1+y_1\\3y_1\end{pmatrix} + r_2\begin{pmatrix}x_2/2\\0\\x_2+y_2\\3y_2\end{pmatrix} \]
However, some things that hold for isomorphisms fail to hold for homomorphisms. One example is in the proof of Lemma I.2.4, which shows that
an isomorphism between spaces gives a correspondence between their bases.
Homomorphisms do not give any such correspondence; Example 1.2 shows this
and another example is the zero map between two nontrivial spaces. Instead,
for homomorphisms we have a weaker but still very useful result.
1.9 Theorem A homomorphism is determined by its action on a basis: if V is a vector space with basis ⟨~β1 , . . . , ~βn ⟩, if W is a vector space, and if ~w1 , . . . , ~wn ∈ W (these codomain elements need not be distinct) then there exists a homomorphism from V to W sending each ~βi to ~wi , and that homomorphism is unique.
Proof For any input ~v ∈ V let its expression with respect to the basis be ~v = c1~β1 + · · · + cn~βn . Define the associated output by using the same coordinates h(~v) = c1 ~w1 + · · · + cn ~wn . This is well defined because, with respect to the basis, the representation of each domain vector ~v is unique.
This map is a homomorphism because it preserves linear combinations: where ~v1 = c1~β1 + · · · + cn~βn and ~v2 = d1~β1 + · · · + dn~βn , here is the calculation.
\[ h(r_1\vec{v}_1 + r_2\vec{v}_2) = h\bigl((r_1c_1+r_2d_1)\vec{\beta}_1 + \cdots + (r_1c_n+r_2d_n)\vec{\beta}_n\bigr) = (r_1c_1+r_2d_1)\vec{w}_1 + \cdots + (r_1c_n+r_2d_n)\vec{w}_n = r_1h(\vec{v}_1) + r_2h(\vec{v}_2) \]
This map is unique because if ĥ : V → W is another homomorphism satisfying ĥ(~βi ) = ~wi for each i then h and ĥ agree on every ~v = c1~β1 + · · · + cn~βn , since both preserve linear combinations and both send each ~βi to ~wi , so that h(~v) = c1 ~w1 + · · · + cn ~wn = ĥ(~v).
QED
1.10 Definition Let V and W be vector spaces and let B = ⟨~β1 , . . . , ~βn ⟩ be a basis for V. A function defined on that basis f : B → W is extended linearly to a function f̂ : V → W if for all ~v ∈ V such that ~v = c1~β1 + · · · + cn~βn , the action of the map is f̂(~v) = c1 f(~β1 ) + · · · + cn f(~βn ).
1.11 Example If we specify a map h : R2 → R2 that acts on the standard basis E2 in this way
\[ h(\begin{pmatrix}1\\0\end{pmatrix}) = \begin{pmatrix}1\\1\end{pmatrix} \qquad h(\begin{pmatrix}0\\1\end{pmatrix}) = \begin{pmatrix}4\\4\end{pmatrix} \]
then we have also specified the action of h on any other member of the domain. For instance, the value of h on this argument
\[ h(\begin{pmatrix}3\\-2\end{pmatrix}) = h\bigl(3\begin{pmatrix}1\\0\end{pmatrix} - 2\begin{pmatrix}0\\1\end{pmatrix}\bigr) = 3\,h(\begin{pmatrix}1\\0\end{pmatrix}) - 2\,h(\begin{pmatrix}0\\1\end{pmatrix}) = \begin{pmatrix}-5\\-5\end{pmatrix} \]
is a direct consequence of the value of h on the basis vectors.
Later in this chapter we shall develop a convenient scheme for computations
like this one, using matrices.
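As a small sketch outside the text, the computation of Example 1.11 can be done in Python/NumPy by extending linearly from the values on the standard basis; the values of h on ~e1 and ~e2 are taken from that example.

import numpy as np
h_e1 = np.array([1., 1.])
h_e2 = np.array([4., 4.])
def h(v):
    # v = v1*e1 + v2*e2, so h(v) = v1*h(e1) + v2*h(e2)
    return v[0]*h_e1 + v[1]*h_e2
print(h(np.array([3., -2.])))    # [-5. -5.], matching 3*h(e1) - 2*h(e2)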
1.12 Definition A linear map from a space into itself t : V V is a linear transformation.
1.13 Remark In this book we use linear transformation only in the case where
the codomain equals the domain. However, be aware that other sources may
instead use it as a synonym for homomorphism.
1.14 Example The map on R2 that projects all vectors down to the x-axis is a
linear transformation.
\[ \begin{pmatrix}x\\y\end{pmatrix} \mapsto \begin{pmatrix}x\\0\end{pmatrix} \]
1.15 Example The derivative map d/dx : Pn → Pn
\[ a_0 + a_1x + \cdots + a_nx^n \xmapsto{d/dx} a_1 + 2a_2x + \cdots + na_nx^{n-1} \]
to show that it is a subspace we need only check that it is closed under the
QED
1.21 Is (perpendicular) projection from R3 to the xz-plane a homomorphism? Projection to the yz-plane? To the x-axis? The y-axis? The z-axis? Projection to the
origin?
1.22 Verify that each map is a homomorphism.
(a) h : P3 → R2 given by
\[ ax^2 + bx + c \mapsto \begin{pmatrix}a+b\\a+c\end{pmatrix} \]
(b) f : R2 → R3 given by
\[ \begin{pmatrix}x\\y\end{pmatrix} \mapsto \begin{pmatrix}0\\x-y\\3y\end{pmatrix} \]
1.23 Show that, while the maps from Example 1.3 preserve linear operations, they
are not isomorphisms.
1.24 Is an identity map a linear transformation?
1.25 Stating that a function is linear is different than stating that its graph is a
line.
(a) The function f1 : R → R given by f1 (x) = 2x − 1 has a graph that is a line. Show that it is not a linear function.
(b) The function f2 : R2 → R given by
\[ \begin{pmatrix}x\\y\end{pmatrix} \mapsto x + 2y \]
does not have a graph that is a line. Show that it is a linear function.
1.26 Part of the definition of a linear function is that it respects addition. Does a
linear function respect subtraction?
1.27 Assume that h is a linear transformation of V and that ⟨~β1 , . . . , ~βn ⟩ is a basis of V. Prove each statement.
(a) If h(~βi ) = ~0 for each basis vector then h is the zero map.
(b) If h(~βi ) = ~βi for each basis vector then h is the identity map.
(c) If there is a scalar r such that h(~βi ) = r·~βi for each basis vector then h(~v) = r·~v for all vectors in V.
1.28 Consider the vector space R+ where vector addition and scalar multiplication
are not the ones inherited from R but rather are these: a + b is the product of
a and b, and r a is the r-th power of a. (This was shown to be a vector space
in an earlier exercise.) Verify that the natural logarithm map ln : R+ R is a
homomorphism between these two spaces. Is it an isomorphism?
1.29 Consider this transformation of R2 .
\[ \begin{pmatrix}x\\y\end{pmatrix} \mapsto \begin{pmatrix}x/2\\y/3\end{pmatrix} \]
Find the image under this map of this ellipse.
\[ \{\,\begin{pmatrix}x\\y\end{pmatrix} \mid (x^2/4) + (y^2/9) = 1\,\} \]
1.30 Imagine a rope wound around the earth's equator so that it fits snugly (suppose
that the earth is a sphere). How much extra rope must we add to raise the circle
to a constant six feet off the ground?
188
x1
a1,1 x1 + + a1,n xn
.
..
. 7
.
.
xn
am,1 x1 + + am,n xn
(b) Show that for each i, the i-th derivative operator di /dxi is a linear transformation of Pn . Conclude that for any scalars ck , . . . , c0 this map is a linear
transformation of that space.
dk1
d
dk
f
+
c
f + + c1 f + c0 f
k1
dxk
dxk1
dx
1.34 Lemma 1.17 shows that a sum of linear functions is linear and that a scalar
multiple of a linear function is linear. Show also that a composition of linear
functions is linear.
f 7
II.2
\[ \begin{pmatrix}a&b\\c&d\end{pmatrix} \mapsto (a + b + 2d) + cx^2 + cx^3 \]
an image vector in the range can have any constant term, must have an x
an image vector in the range can have any constant term, must have an x
coefficient of zero, and must have the same coefficient of x2 as of x3 . That is,
the range space is R(h) = { r + sx2 + sx3 | r, s R } and so the rank is 2.
The prior result shows that, in passing from the definition of isomorphism to
the more general definition of homomorphism, omitting the onto requirement
doesn't make an essential difference. Any homomorphism is onto some space,
namely its range.
However, omitting the one-to-one condition does make a difference. A
homomorphism may have many elements of the domain that map to one element
of the codomain. Below is a bean sketch of a many-to-one map between sets. It
shows three elements of the codomain that are each the image of many members
of the domain. (Rather than picture lots of individual ↦ arrows, each association
of many inputs with one output shows only one such arrow.)
Recall that for any function h : V → W, the set of elements of V that map to ~w ∈ W is the inverse image h−1 (~w) = { ~v ∈ V | h(~v) = ~w }. Above, the left side shows three inverse image sets.
2.5 Example Consider the projection π : R3 → R2
\[ \begin{pmatrix}x\\y\\z\end{pmatrix} \mapsto \begin{pmatrix}x\\y\end{pmatrix} \]
which is a homomorphism that is many-to-one. An inverse image set is a vertical line of vectors in the domain.
In generalizing from isomorphisms to homomorphisms by dropping the one-to-one condition we lose the property that, intuitively, the domain is 'the same' as the range. We lose, that is, that the domain corresponds perfectly to the range. The examples below illustrate that what we retain is that a homomorphism describes how the domain is 'analogous to' or 'like' the range.
2.7 Example We think of R3 as like R2 except that vectors have an extra
component. That is, we think of the vector with components x, y, and z as
like the vector with components x and y. Defining the projection map makes
precise which members of the domain we are thinking of as related to which
members of the codomain.
To understand how the preservation conditions in the definition of homomorphism show that the domain elements are like the codomain elements, start
by picturing R2 as the xy-plane inside of R3 (the xy plane inside of R3 is a set
of three-tall vectors with a third component of zero and so does not precisely
equal the set of two-tall vectors R2 , but this embedding makes the picture much
clearer). The preservation of addition property says that vectors in R3 act like
their shadows in the plane.
\[ \begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}\ \text{above}\ \begin{pmatrix}x_1\\y_1\end{pmatrix} \quad\text{plus}\quad \begin{pmatrix}x_2\\y_2\\z_2\end{pmatrix}\ \text{above}\ \begin{pmatrix}x_2\\y_2\end{pmatrix} \quad\text{equals}\quad \begin{pmatrix}x_1+x_2\\y_1+y_2\\z_1+z_2\end{pmatrix}\ \text{above}\ \begin{pmatrix}x_1+x_2\\y_1+y_2\end{pmatrix} \]
Thinking of π(~v) as the shadow of ~v in the plane gives this restatement: the sum of the shadows π(~v1 ) + π(~v2 ) equals the shadow of the sum π(~v1 + ~v2 ).
Preservation of scalar multiplication is similar.
Drawing the codomain R2 on the right gives a picture that is uglier but is
more faithful to the bean sketch above.
a ~w1 vector   plus   a ~w2 vector   equals   a ~w1 + ~w2 vector
2.12 Example The map from Example 2.3 has this null space N (d/dx) =
{a0 + 0x + 0x2 + 0x3 | a0 R } so its nullity is 1.
2.13 Example The map from Example 2.4 has this null space, and nullity 2.
\[ \mathcal{N}(h) = \{\,\begin{pmatrix}a&b\\0&-(a+b)/2\end{pmatrix} \mid a, b \in \mathbb{R}\,\} \]
Now for the second insight from the above examples. In Example 2.7, in passing from the domain to the range the map squashes each of the one-dimensional vertical lines down to a single point, leaving a range smaller than the domain by one dimension. Similarly, in Example 2.8 the two-dimensional domain compresses to a one-dimensional range: the domain breaks into diagonal lines and each of those maps to a single member of the range. Finally, in Example 2.9 the domain breaks into planes that each get squashed to a point, so the map starts with a three-dimensional domain but ends two dimensions smaller, with a one-dimensional range. (The codomain is two-dimensional but the range is one-dimensional, and the dimension of the range is what matters.)
2.14 Theorem A linear map's rank plus its nullity equals the dimension of its domain.
Proof Let h : V → W be linear and let BN = ⟨~β1 , . . . , ~βk ⟩ be a basis for the null space. Expand that to a basis BV = ⟨~β1 , . . . , ~βk , ~βk+1 , . . . , ~βn ⟩ for the entire domain, using Corollary Two.III.2.13. We shall show that BR = ⟨h(~βk+1 ), . . . , h(~βn )⟩ is a basis for the range space. Then counting the size of the bases gives the result.
To see that BR is linearly independent, consider ~0W = ck+1 h(~βk+1 ) + · · · + cn h(~βn ). We have ~0W = h(ck+1~βk+1 + · · · + cn~βn ) and so ck+1~βk+1 + · · · + cn~βn is in the null space of h. As BN is a basis for the null space there are scalars c1 , . . . , ck satisfying this relationship.
\[ c_1\vec{\beta}_1 + \cdots + c_k\vec{\beta}_k = c_{k+1}\vec{\beta}_{k+1} + \cdots + c_n\vec{\beta}_n \]
But this is an equation among members of BV , which is a basis for V, so each ci equals 0. Therefore BR is linearly independent.
To show that BR spans the range space, consider a member of the range space h(~v). Express ~v as a linear combination ~v = c1~β1 + · · · + cn~βn of members of BV . This gives h(~v) = h(c1~β1 + · · · + cn~βn ) = c1 h(~β1 ) + · · · + ck h(~βk ) + ck+1 h(~βk+1 ) + · · · + cn h(~βn ) and since ~β1 , . . . , ~βk are in the null space, we have that h(~v) = ~0 + · · · + ~0 + ck+1 h(~βk+1 ) + · · · + cn h(~βn ). Thus, h(~v) is a linear combination of members of BR , and so BR spans the range space. QED
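As a numerical illustration (not part of the proof), this Python sketch checks rank plus nullity against the dimension of the domain for a sample matrix; it assumes SciPy's null_space is available, which is not used anywhere else in the text.

import numpy as np
from numpy.linalg import matrix_rank
from scipy.linalg import null_space   # assumed available
H = np.array([[1., 2., 0., 1.],
              [0., 0., 1., 1.],
              [1., 2., 1., 2.]])
rank = matrix_rank(H)
nullity = null_space(H).shape[1]
print(rank, nullity, rank + nullity)  # 2 2 4, and the domain R^4 has dimension 4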
and
\[ \mathcal{N}(h) = \{\,\begin{pmatrix}0\\0\\z\end{pmatrix} \mid z \in \mathbb{R}\,\} \]
combination of basis elements produces h(~v) = h(c1~β1 + c2~β2 + · · · + cn~βn ), which gives that h(~v) = c1 h(~β1 ) + · · · + cn h(~βn ), as desired.
Finally, for the (5) ⟹ (2) implication, assume that ⟨~β1 , . . . , ~βn ⟩ is a basis for V so that ⟨h(~β1 ), . . . , h(~βn )⟩ is a basis for R(h). Then every ~w ∈ R(h) has the unique representation ~w = c1 h(~β1 ) + · · · + cn h(~βn ). Define a map from R(h) to V by
\[ \vec{w} \mapsto c_1\vec{\beta}_1 + c_2\vec{\beta}_2 + \cdots + c_n\vec{\beta}_n \]
(uniqueness of the representation makes this well-defined). Checking that it is linear and that it is the inverse of h are easy.
QED
We have seen that a linear map expresses how the structure of the domain is
like that of the range. We can think of such a map as organizing the domain
space into inverse images of points in the range. In the special case that the map
is one-to-one, each inverse image is a single point and the map is an isomorphism
between the domain and the range.
Exercises
X 2.21 Let h : P3 → P4 be given by p(x) ↦ x · p(x). Which of these are in the null space? Which are in the range space?
(a) x3   (b) 0   (c) 7   (d) 12x − 0.5x3   (e) 1 + 3x2 − x3
2.22 Find the range space and the rank of each homomorphism.
(a) h : P3 → R2 given by
\[ ax^2 + bx + c \mapsto \begin{pmatrix}a+b\\a+c\end{pmatrix} \]
(b) f : R2 → R3 given by
\[ \begin{pmatrix}x\\y\end{pmatrix} \mapsto \begin{pmatrix}0\\x-y\\3y\end{pmatrix} \]
X 2.23 Find the range space and rank of each map.
(a) h : R2 → P3 given by
\[ \begin{pmatrix}a\\b\end{pmatrix} \mapsto a + ax + ax^2 \]
(b) h : M2×2 → R given by
\[ \begin{pmatrix}a&b\\c&d\end{pmatrix} \mapsto a + d \]
(c) h : M2×2 → P2 given by
\[ \begin{pmatrix}a&b\\c&d\end{pmatrix} \mapsto a + b + c + dx^2 \]
2.35 (a) Prove that a homomorphism is onto if and only if its rank equals the
dimension of its codomain.
(b) Conclude that a homomorphism between vector spaces with the same dimension is one-to-one if and only if it is onto.
2.36 Show that a linear map is one-to-one if and only if it preserves linear independence.
2.37 Corollary 2.17 says that for there to be an onto homomorphism from a vector
space V to a vector space W, it is necessary that the dimension of W be less
than or equal to the dimension of V. Prove that this condition is also sufficient;
use Theorem 1.9 to show that if the dimension of W is less than or equal to the
dimension of V, then there is a homomorphism from V to W that is onto.
X 2.38 Recall that the null space is a subset of the domain and the range space is a
subset of the codomain. Are they necessarily distinct? Is there a homomorphism
that has a nontrivial intersection of its null space and its range space?
2.39 Prove that the image of a span equals the span of the images. That is, where
h : V W is linear, prove that if S is a subset of V then h([S]) equals [h(S)]. This
generalizes Lemma 2.1 since it shows that if U is any subspace of V then its image
{ h(~u) | ~u U } is a subspace of W, because the span of the set U is U.
X 2.40 (a) Prove that for any linear map h : V → W and any ~w ∈ W, the set h−1 (~w) has the form
\[ \{\,\vec{v} + \vec{n} \mid \vec{n} \in \mathcal{N}(h)\,\} \]
for ~v ∈ V with h(~v) = ~w (if h is not onto then this set may be empty). Such a set is a coset of N(h) and we denote it as ~v + N(h).
(b) Consider the map t : R2 R2 given by
x
ax + by
t
7
y
cx + dy
for some scalars a, b, c, and d. Prove that t is linear.
(c) Conclude from the prior two items that for any linear system of the form
ax + by = e
cx + dy = f
we can write the solution set (the vectors are members of R2 )
{~p + ~h | ~h satisfies the associated homogeneous system }
where ~p is a particular solution of that linear system (if there is no particular
solution then the above set is empty).
(d) Show that this map h : Rn Rm is linear
x1
a1,1 x1 + + a1,n xn
.
..
. 7
.
.
xn
am,1 x1 + + am,n xn
for any scalars a1,1 , . . . , am,n . Extend the conclusion made in the prior item.
(e) Show that the k-th derivative map is a linear transformation of Pn for each k.
Prove that this map is a linear transformation of the space
dk
dk1
d
f 7
f + ck1 k1 f + + c1 f + c0 f
k
dx
dx
dx
for any scalars ck , . . . , c0 . Draw a conclusion as above.
2.41 Prove that for any transformation t : V V that is rank one, the map given by
composing the operator with itself t t : V V satisfies t t = r t for some real
number r.
2.42 Let h : V → R be a homomorphism, but not the zero homomorphism. Prove that if ⟨~β1 , . . . , ~βn ⟩ is a basis for the null space and if ~v ∈ V is not in the null space then ⟨~v, ~β1 , . . . , ~βn ⟩ is a basis for the entire domain V.
2.43 Show that for any space V of dimension n, the dual space
\[ \mathcal{L}(V, \mathbb{R}) = \{\,h : V \to \mathbb{R} \mid h \text{ is linear}\,\} \]
is isomorphic to Rn . It is often denoted V∗ . Conclude that V∗ ≅ V.
2.44 Show that any linear map is the sum of maps of rank one.
2.46 Show that the range spaces and null spaces of powers of linear maps t : V → V form descending
\[ V \supseteq \mathcal{R}(t) \supseteq \mathcal{R}(t^2) \supseteq \ldots \]
and ascending
\[ \{\vec{0}\} \subseteq \mathcal{N}(t) \subseteq \mathcal{N}(t^2) \subseteq \ldots \]
chains. Also show that if k is such that R(tk ) = R(tk+1 ) then all following range spaces are equal: R(tk ) = R(tk+1 ) = R(tk+2 ) = . . . . Similarly, if N(tk ) = N(tk+1 ) then N(tk ) = N(tk+1 ) = N(tk+2 ) = . . . .
III
The prior section shows that a linear map is determined by its action on a basis. The equation
\[ h(\vec{v}) = h(c_1\vec{\beta}_1 + \cdots + c_n\vec{\beta}_n) = c_1 h(\vec{\beta}_1) + \cdots + c_n h(\vec{\beta}_n) \]
describes how we get the value of the map on any vector ~v by starting from the value of the map on the vectors ~βi in a basis and extending linearly.
This section gives a convenient scheme based on matrices to use the representations of h(~β1 ), . . . , h(~βn ) to compute, from the representation of a vector in the domain RepB (~v), the representation of that vector's image in the codomain RepD (h(~v)).
III.1
\[ \operatorname{Rep}_D(h(\vec{\beta}_1)) = \begin{pmatrix}0\\-1/2\\1\end{pmatrix}_D \qquad\text{since}\qquad \begin{pmatrix}1\\1\\1\end{pmatrix} = 0\begin{pmatrix}1\\0\\0\end{pmatrix} - \frac{1}{2}\begin{pmatrix}0\\-2\\0\end{pmatrix} + 1\begin{pmatrix}1\\0\\1\end{pmatrix} \]
and the vector h(~β2 ).
\[ \operatorname{Rep}_D(h(\vec{\beta}_2)) = \begin{pmatrix}1\\-1\\0\end{pmatrix}_D \qquad\text{since}\qquad \begin{pmatrix}1\\2\\0\end{pmatrix} = 1\begin{pmatrix}1\\0\\0\end{pmatrix} - 1\begin{pmatrix}0\\-2\\0\end{pmatrix} + 0\begin{pmatrix}1\\0\\1\end{pmatrix} \]
With these, for any member ~v of the domain we can compute h(~v).
\[ h(\vec{v}) = h\bigl(c_1\begin{pmatrix}2\\0\end{pmatrix} + c_2\begin{pmatrix}1\\4\end{pmatrix}\bigr) = c_1\,h\bigl(\begin{pmatrix}2\\0\end{pmatrix}\bigr) + c_2\,h\bigl(\begin{pmatrix}1\\4\end{pmatrix}\bigr) \]
\[ = c_1\Bigl(0\begin{pmatrix}1\\0\\0\end{pmatrix} - \tfrac{1}{2}\begin{pmatrix}0\\-2\\0\end{pmatrix} + 1\begin{pmatrix}1\\0\\1\end{pmatrix}\Bigr) + c_2\Bigl(1\begin{pmatrix}1\\0\\0\end{pmatrix} - 1\begin{pmatrix}0\\-2\\0\end{pmatrix} + 0\begin{pmatrix}1\\0\\1\end{pmatrix}\Bigr) \]
\[ = (0c_1 + 1c_2)\begin{pmatrix}1\\0\\0\end{pmatrix} + \bigl(-\tfrac{1}{2}c_1 - 1c_2\bigr)\begin{pmatrix}0\\-2\\0\end{pmatrix} + (1c_1 + 0c_2)\begin{pmatrix}1\\0\\1\end{pmatrix} \]
Thus,
\[ \text{if } \operatorname{Rep}_B(\vec{v}) = \begin{pmatrix}c_1\\c_2\end{pmatrix} \text{ then } \operatorname{Rep}_D(h(\vec{v})) = \begin{pmatrix}0c_1+1c_2\\-(1/2)c_1-1c_2\\1c_1+0c_2\end{pmatrix} \]
For instance,
\[ \text{since } \operatorname{Rep}_B\bigl(\begin{pmatrix}4\\8\end{pmatrix}\bigr) = \begin{pmatrix}1\\2\end{pmatrix}_B \text{ we have } \operatorname{Rep}_D\bigl(h(\begin{pmatrix}4\\8\end{pmatrix})\bigr) = \begin{pmatrix}2\\-5/2\\1\end{pmatrix} \]
\[ \begin{pmatrix}0&1\\-1/2&-1\\1&0\end{pmatrix}_{B,D} \begin{pmatrix}c_1\\c_2\end{pmatrix}_B = \begin{pmatrix}0c_1+1c_2\\-(1/2)c_1-1c_2\\1c_1+0c_2\end{pmatrix} \]
In the middle is the argument ~v to the map, represented with respect to the domain's basis B by the column vector with components c1 and c2 . On the right is the value of the map on that argument h(~v), represented with respect to the codomain's basis D. The matrix on the left is the new thing. We will use it to represent the map and we will think of the above equation as representing an application of the map to the vector.
That matrix consists of the coefficients from the vector on the right, 0 and 1 from the first row, −1/2 and −1 from the second row, and 1 and 0 from the third row. That is, we make it by adjoining the vectors representing the h(~βi )'s.
\[ \left(\begin{array}{c|c} \vdots & \vdots\\ \operatorname{Rep}_D(h(\vec{\beta}_1)) & \operatorname{Rep}_D(h(\vec{\beta}_2))\\ \vdots & \vdots \end{array}\right) \]
1.2 Definition Suppose that V and W are vector spaces of dimensions n and m with bases B and D, and that h : V → W is a linear map. If
\[ \operatorname{Rep}_D(h(\vec{\beta}_1)) = \begin{pmatrix}h_{1,1}\\h_{2,1}\\\vdots\\h_{m,1}\end{pmatrix}_D \quad\ldots\quad \operatorname{Rep}_D(h(\vec{\beta}_n)) = \begin{pmatrix}h_{1,n}\\h_{2,n}\\\vdots\\h_{m,n}\end{pmatrix}_D \]
then
\[ \operatorname{Rep}_{B,D}(h) = \begin{pmatrix} h_{1,1}&h_{1,2}&\ldots&h_{1,n}\\ h_{2,1}&h_{2,2}&\ldots&h_{2,n}\\ &&\vdots&\\ h_{m,1}&h_{m,2}&\ldots&h_{m,n} \end{pmatrix}_{B,D} \]
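As an aside from the text, here is a small Python/NumPy sketch of this definition for the running example: each column of the representing matrix is found by solving for the coordinates of an image vector with respect to the codomain basis D (the image vectors h(~β1 ) = (1, 1, 1) and h(~β2 ) = (1, 2, 0) are the ones used above).

import numpy as np
D = np.column_stack([[1., 0., 0.], [0., -2., 0.], [1., 0., 1.]])   # basis vectors of D as columns
images = [np.array([1., 1., 1.]), np.array([1., 2., 0.])]          # h(beta_1), h(beta_2)
# Rep_D of each image solves D @ c = image; adjoin the results as columns.
H = np.column_stack([np.linalg.solve(D, im) for im in images])
print(H)    # [[ 0.   1. ]
            #  [-0.5 -1. ]
            #  [ 1.   0. ]]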
D = ⟨1 + x, −1 + x⟩
\[ \begin{pmatrix}2\\0\\0\end{pmatrix} \xmapsto{h} 4 \qquad \begin{pmatrix}0\\2\\0\end{pmatrix} \xmapsto{h} 2 \]
A simple calculation
\[ \operatorname{Rep}_D(x) = \begin{pmatrix}1/2\\1/2\end{pmatrix}_D \qquad \operatorname{Rep}_D(2) = \begin{pmatrix}1\\-1\end{pmatrix}_D \qquad \operatorname{Rep}_D(4) = \begin{pmatrix}2\\-2\end{pmatrix}_D \]
shows that this is the matrix representing h with respect to the bases.
\[ \operatorname{Rep}_{B,D}(h) = \begin{pmatrix}1/2&1&2\\1/2&-1&-2\end{pmatrix}_{B,D} \]
1.4 Theorem Assume that V and W are vector spaces of dimensions n and m with bases B and D, and that h : V → W is a linear map. If h is represented by
\[ \operatorname{Rep}_{B,D}(h) = \begin{pmatrix} h_{1,1}&h_{1,2}&\ldots&h_{1,n}\\ h_{2,1}&h_{2,2}&\ldots&h_{2,n}\\ &&\vdots&\\ h_{m,1}&h_{m,2}&\ldots&h_{m,n} \end{pmatrix}_{B,D} \]
and ~v ∈ V is represented by
\[ \operatorname{Rep}_B(\vec{v}) = \begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix}_B \]
then the image of ~v is represented by this.
\[ \operatorname{Rep}_D(h(\vec{v})) = \begin{pmatrix} h_{1,1}c_1+\cdots+h_{1,n}c_n\\ h_{2,1}c_1+\cdots+h_{2,n}c_n\\ \vdots\\ h_{m,1}c_1+\cdots+h_{m,n}c_n \end{pmatrix}_D \]
1.5 Definition The matrix-vector product of an m×n matrix and an n×1 vector is this.
\[ \begin{pmatrix} a_{1,1}&a_{1,2}&\ldots&a_{1,n}\\ a_{2,1}&a_{2,2}&\ldots&a_{2,n}\\ &&\vdots&\\ a_{m,1}&a_{m,2}&\ldots&a_{m,n} \end{pmatrix} \begin{pmatrix}c_1\\\vdots\\c_n\end{pmatrix} = \begin{pmatrix} a_{1,1}c_1+\cdots+a_{1,n}c_n\\ a_{2,1}c_1+\cdots+a_{2,n}c_n\\ \vdots\\ a_{m,1}c_1+\cdots+a_{m,n}c_n \end{pmatrix} \]
were not true then we would adjust the definition to make it so. Nonetheless,
we need the verification.
1.7 Example For the matrix from Example 1.3 we can calculate where that map sends this vector.
\[ \vec{v} = \begin{pmatrix}4\\1\\0\end{pmatrix} \]
With respect to the domain basis B the representation of this vector is
\[ \operatorname{Rep}_B(\vec{v}) = \begin{pmatrix}0\\1/2\\2\end{pmatrix}_B \]
and so the matrix-vector product gives the representation of the value h(~v) with respect to the codomain basis D.
\[ \operatorname{Rep}_D(h(\vec{v})) = \begin{pmatrix}1/2&1&2\\1/2&-1&-2\end{pmatrix}_{B,D} \begin{pmatrix}0\\1/2\\2\end{pmatrix}_B = \begin{pmatrix}(1/2)\cdot 0 + 1\cdot(1/2) + 2\cdot 2\\ (1/2)\cdot 0 - 1\cdot(1/2) - 2\cdot 2\end{pmatrix} = \begin{pmatrix}9/2\\-9/2\end{pmatrix}_D \]
Then find the representation of each image with respect to the codomain's basis.
\[ \operatorname{Rep}_D\bigl(\begin{pmatrix}1\\0\end{pmatrix}\bigr) = \begin{pmatrix}1\\-1\end{pmatrix} \qquad \operatorname{Rep}_D\bigl(\begin{pmatrix}1\\1\end{pmatrix}\bigr) = \begin{pmatrix}0\\1\end{pmatrix} \qquad \operatorname{Rep}_D\bigl(\begin{pmatrix}-1\\0\end{pmatrix}\bigr) = \begin{pmatrix}-1\\1\end{pmatrix} \]
Finally, adjoining these representations gives the matrix representing π with respect to B, D.
\[ \operatorname{Rep}_{B,D}(\pi) = \begin{pmatrix}1&0&-1\\-1&1&1\end{pmatrix}_{B,D} \]
We can illustrate Theorem 1.4 by computing the matrix-vector product representing this action by the projection map.
\[ \pi\bigl(\begin{pmatrix}2\\2\\1\end{pmatrix}\bigr) = \begin{pmatrix}2\\2\end{pmatrix} \]
Represent the domain vector with respect to the domain's basis
\[ \operatorname{Rep}_B\bigl(\begin{pmatrix}2\\2\\1\end{pmatrix}\bigr) = \begin{pmatrix}1\\2\\1\end{pmatrix}_B \]
and compute the product.
\[ \begin{pmatrix}1&0&-1\\-1&1&1\end{pmatrix}_{B,D} \begin{pmatrix}1\\2\\1\end{pmatrix}_B = \begin{pmatrix}0\\2\end{pmatrix}_D \]
We now have two ways to compute the effect of projection, the straightforward
formula that drops each three-tall vectors third component to make a two-tall
vector, and the above formula that uses representations and matrix-vector
multiplication. The second way may seem complicated compared to the first,
but it has advantages. The next example shows that for some maps this new
scheme simplifies the formula.
1.9 Example To represent a rotation map tθ : R2 → R2 that turns all vectors in the plane counterclockwise through an angle θ
(the picture shows a vector ~u and its image tπ/6 (~u))
we start by fixing the standard basis E2 for both the domain and codomain. Now find the image under the map of each vector in the domain's basis.
\[ \begin{pmatrix}1\\0\end{pmatrix} \xmapsto{t_\theta} \begin{pmatrix}\cos\theta\\\sin\theta\end{pmatrix} \qquad \begin{pmatrix}0\\1\end{pmatrix} \xmapsto{t_\theta} \begin{pmatrix}-\sin\theta\\\cos\theta\end{pmatrix} \tag{$*$} \]
Represent these images with respect to the codomain's basis. Because this basis is E2 , vectors represent themselves. Adjoin the representations to get the matrix representing the map.
\[ \operatorname{Rep}_{E_2,E_2}(t_\theta) = \begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix} \]
The advantage of this scheme is that we get a formula for the image of any vector at all just by knowing in (∗) how to represent the image of the two basis vectors. For instance, here we rotate a vector by θ = π/6.
\[ \begin{pmatrix}3\\-2\end{pmatrix} \xmapsto{t_{\pi/6}} \begin{pmatrix}\sqrt{3}/2&-1/2\\1/2&\sqrt{3}/2\end{pmatrix} \begin{pmatrix}3\\-2\end{pmatrix} \approx \begin{pmatrix}3.598\\-0.232\end{pmatrix} \]
More generally, rotation through θ = π/6 has this formula.
\[ \begin{pmatrix}x\\y\end{pmatrix} \xmapsto{t_{\pi/6}} \begin{pmatrix}\sqrt{3}/2&-1/2\\1/2&\sqrt{3}/2\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}(\sqrt{3}/2)x - (1/2)y\\(1/2)x + (\sqrt{3}/2)y\end{pmatrix} \]
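As a quick numerical check outside the text, this Python/NumPy sketch builds the rotation matrix for θ = π/6 and rotates the vector used above.

import numpy as np
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(R @ np.array([3., -2.]))    # approximately [ 3.598 -0.232 ]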
1.10 Example In the definition of matrix-vector product the width of the matrix equals the height of the vector. Hence, this product is not defined.
\[ \begin{pmatrix}1&0&0\\4&3&1\end{pmatrix} \begin{pmatrix}1\\0\end{pmatrix} \]
It is undefined for a reason: the three-wide matrix represents a map with a
three-dimensional domain while the two-tall vector represents a member of a
two-dimensional space. So the vector cannot be in the domain of the map.
Nothing in Definition 1.5 forces us to view matrix-vector product in terms
of representations. We can get some insights by focusing on how the entries
combine.
A good way to view matrix-vector product is that it is formed from the dot
products of the rows of the matrix with the column vector.
\[ \begin{pmatrix} & \vdots & \\ h_{i,1} & h_{i,2} & \ldots & h_{i,n}\\ & \vdots & \end{pmatrix} \begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix} = \begin{pmatrix} \vdots\\ h_{i,1}c_1 + h_{i,2}c_2 + \cdots + h_{i,n}c_n\\ \vdots \end{pmatrix} \]
Looked at in this row-by-row way, this new operation generalizes dot product.
We can also view the operation column-by-column.
\[ \begin{pmatrix} h_{1,1}&h_{1,2}&\ldots&h_{1,n}\\ h_{2,1}&h_{2,2}&\ldots&h_{2,n}\\ &&\vdots&\\ h_{m,1}&h_{m,2}&\ldots&h_{m,n} \end{pmatrix} \begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix} = \begin{pmatrix} h_{1,1}c_1+h_{1,2}c_2+\cdots+h_{1,n}c_n\\ h_{2,1}c_1+h_{2,2}c_2+\cdots+h_{2,n}c_n\\ \vdots\\ h_{m,1}c_1+h_{m,2}c_2+\cdots+h_{m,n}c_n \end{pmatrix} = c_1\begin{pmatrix}h_{1,1}\\h_{2,1}\\\vdots\\h_{m,1}\end{pmatrix} + \cdots + c_n\begin{pmatrix}h_{1,n}\\h_{2,n}\\\vdots\\h_{m,n}\end{pmatrix} \]
The result is the columns of the matrix weighted by the entries of the vector.
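As an aside (not from the text), here is a short Python/NumPy sketch showing that the row-by-row view and the column-by-column view give the same matrix-vector product; the sample matrix and vector are the ones from Example 1.11 below.

import numpy as np
H = np.array([[1., 0., -1.],
              [2., 0.,  3.]])
c = np.array([2., -1., 1.])
by_rows = np.array([row @ c for row in H])                 # dot product of each row with c
by_cols = sum(c[j] * H[:, j] for j in range(H.shape[1]))   # columns weighted by entries of c
print(by_rows, by_cols, H @ c)                             # all equal [1. 7.]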
1.11 Example
\[ \begin{pmatrix}1&0&-1\\2&0&3\end{pmatrix} \begin{pmatrix}2\\-1\\1\end{pmatrix} = 2\begin{pmatrix}1\\2\end{pmatrix} - 1\begin{pmatrix}0\\0\end{pmatrix} + 1\begin{pmatrix}-1\\3\end{pmatrix} = \begin{pmatrix}1\\7\end{pmatrix} \]
1
4
1/2
2
(b)
1
2
2 1
0 1
1 1
1
1
1
0
3
0
1
(c)
1
2
1
1
3
1
1
1
x
8
3 y = 4
2
z
4
x 7 1 + 2x,
and
x2 7 x x3
(c)
by
R1
0:
a1
an
+ +
2
n+1
(d) eval3 : Pn R with respect to B, E1 where B = h1, x, . . . , xn i and E1 = h1i,
given by
a0 + a1 x + a2 x2 + + an xn 7 a0 +
a0 + a1 x + a2 x2 + + an xn 7 a0 + a1 3 + a2 32 + + an 3n
(e) slide1 : Pn Pn with respect to B, B where B = h1, x, . . . , xn i, given by
a0 + a1 x + a2 x2 + + an xn 7 a0 + a1 (x + 1) + + an (x + 1)n
1.20 Represent the identity map on any nontrivial space with respect to B, B, where
B is any basis.
1.21 Represent, with respect to the natural basis, the transpose transformation on
the space M22 of 22 matrices.
1.22 Assume that B = ⟨~β1 , ~β2 , ~β3 , ~β4 ⟩ is a basis for a vector space. Represent with respect to B, B the transformation that is determined by each.
(a) ~β1 ↦ ~β2 , ~β2 ↦ ~β3 , ~β3 ↦ ~β4 , ~β4 ↦ ~0
(b) ~β1 ↦ ~β2 , ~β2 ↦ ~0, ~β3 ↦ ~β4 , ~β4 ↦ ~0
(c) ~β1 ↦ ~β2 , ~β2 ↦ ~β3 , ~β3 ↦ ~0, ~β4 ↦ ~0
1.23 Example 1.9 shows how to represent the rotation transformation of the plane
with respect to the standard basis. Express these other transformations also with
respect to the standard basis.
(a) the dilation map ds , which multiplies all vectors by the same scalar s
(b) the reflection map f` , which reflects all vectors across a line ` through the origin
X 1.24 Consider a linear transformation of R2 determined by these two.
1
2
1
1
7
7
1
0
0
0
(a) Represent this transformation with respect to the standard bases.
(b) Where does the transformation send this vector?
0
5
(c) Represent this transformation with respect to these bases.
1
1
2
1
B=h
,
i
D=h
,
i
1
1
2
1
(d) Using B from the prior item, represent the transformation with respect to
B, B.
1.25 Suppose that h : V W is one-to-one so that by Theorem 2.20, for any basis B =
~ 1, . . . ,
~ n i V the image h(B) = hh(
~ 1 ), . . . , h(
~ n )i is a basis for W.
h
(a) Represent the map h with respect to B, h(B).
(b) For a member ~v of the domain, where the representation of ~v has components
c1 , . . . , cn , represent the image vector h(~v) with respect to the image basis h(B).
1.26 Give a formula for the product of a matrix and ~ei , the column vector that is
all zeroes except for a single one in the i-th position.
X 1.27 For each vector space of functions of one real variable, represent the derivative
transformation with respect to B, B.
(a) { a cos x + b sin x | a, b R }, B = hcos x, sin xi
(b) { aex + be2x | a, b R }, B = hex , e2x i
(c) { a + bx + cex + dxex | a, b, c, d R }, B = h1, x, ex , xex i
1.28 Find the range of the linear transformation of R2 represented with respect to
the standard
bases byeach matrix.
1 0
0 0
a
b
(a)
(b)
(c) a matrix of the form
0 0
3 2
2a 2b
X 1.29 Can one matrix represent two different linear maps? That is, can RepB,D (h) =
Rep (h)?
B,D
212
[B] = V
form a strictly increasing chain of subspaces. Show that for any linear map
h : V W there is a chain W0 = {~0 } W1 Wm = W of subspaces of W
such that
~ 1, . . . ,
~ i }]) Wi
h([{
for each i.
(d) Conclude that for every linear map h : V W there are bases B, D so the
matrix representing h with respect to B, D is upper-triangular (that is, each
entry hi,j with i > j is zero).
(e) Is an upper-triangular representation unique?
III.2
The prior subsection shows that the action of a linear map h is described by a matrix H, with respect to appropriate bases, in this way.
\[ \vec{v} = \begin{pmatrix}v_1\\\vdots\\v_n\end{pmatrix}_B \;\xmapsto{H}\; h(\vec{v}) = \begin{pmatrix}h_{1,1}v_1+\cdots+h_{1,n}v_n\\\vdots\\h_{m,1}v_1+\cdots+h_{m,n}v_n\end{pmatrix}_D \]
Here we will show the converse, that each matrix represents a linear map. We start with any matrix
\[ H = \begin{pmatrix} h_{1,1}&h_{1,2}&\ldots&h_{1,n}\\ h_{2,1}&h_{2,2}&\ldots&h_{2,n}\\ &&\vdots&\\ h_{m,1}&h_{m,2}&\ldots&h_{m,n} \end{pmatrix} \tag{$*$} \]
and we will describe how it defines a map h. We require that the map be represented by the matrix so first note that in (∗) the dimension of the map's domain is the number of columns n of the matrix and the dimension of the codomain is the number of rows m. Thus, for h's domain fix an n-dimensional vector space V and for the codomain fix an m-dimensional space W. Also fix bases B = ⟨~β1 , . . . , ~βn ⟩ and D = ⟨~δ1 , . . . , ~δm ⟩ for those spaces.
Now let h : V → W be: where ~v in the domain has the representation
\[ \operatorname{Rep}_B(\vec{v}) = \begin{pmatrix}v_1\\\vdots\\v_n\end{pmatrix}_B \]
then its image h(~v) is the member of the codomain with this representation.
\[ \operatorname{Rep}_D(h(\vec{v})) = \begin{pmatrix}h_{1,1}v_1+\cdots+h_{1,n}v_n\\\vdots\\h_{m,1}v_1+\cdots+h_{m,n}v_n\end{pmatrix}_D \]
That is, to compute the action of h on any ~v ∈ V, first express ~v with respect to the basis, ~v = v1~β1 + · · · + vn~βn , and then h(~v) = (h1,1 v1 + · · · + h1,n vn )·~δ1 + · · · + (hm,1 v1 + · · · + hm,n vn )·~δm .
Above we have made some choices; for instance V can be any n-dimensional space and B could be any basis for V, so H does not define a unique function. However, note that once we have fixed V, B, W, and D then h is well-defined since ~v has a unique representation with respect to the basis B and the calculation of ~w from its representation is also uniquely determined.
2.1 Example Consider this matrix.
\[ H = \begin{pmatrix}1&2\\3&4\\5&6\end{pmatrix} \]
For instance, it sends
\[ \begin{pmatrix}1/2\\5/2\end{pmatrix} \mapsto \begin{pmatrix}1\cdot(1/2)+2\cdot(5/2)\\3\cdot(1/2)+4\cdot(5/2)\\5\cdot(1/2)+6\cdot(5/2)\end{pmatrix} = \begin{pmatrix}11/2\\23/2\\35/2\end{pmatrix} \]
v1
.
RepB (~v) = ..
u1
.
RepB (~u) = ..
vn
un
QED
2.3 Example Even if the domain and codomain are the same, the map that the
matrix represents depends on the bases that we choose. If
\[ H = \begin{pmatrix}1&0\\0&0\end{pmatrix}, \quad B_1 = D_1 = \langle\begin{pmatrix}1\\0\end{pmatrix},\begin{pmatrix}0\\1\end{pmatrix}\rangle, \quad\text{and}\quad B_2 = D_2 = \langle\begin{pmatrix}0\\1\end{pmatrix},\begin{pmatrix}1\\0\end{pmatrix}\rangle, \]
then h1 : R2 → R2 represented by H with respect to B1 , D1 maps
\[ \begin{pmatrix}c_1\\c_2\end{pmatrix} = \begin{pmatrix}c_1\\c_2\end{pmatrix}_{B_1} \;\mapsto\; \begin{pmatrix}c_1\\0\end{pmatrix}_{D_1} = \begin{pmatrix}c_1\\0\end{pmatrix} \]
while h2 : R2 → R2 represented by H with respect to B2 , D2 maps
\[ \begin{pmatrix}c_1\\c_2\end{pmatrix} = \begin{pmatrix}c_2\\c_1\end{pmatrix}_{B_2} \;\mapsto\; \begin{pmatrix}c_2\\0\end{pmatrix}_{D_2} = \begin{pmatrix}0\\c_2\end{pmatrix} \]
These are different functions. The first is projection onto the x-axis while the
second is projection onto the y-axis.
This result means that when convenient we can work solely with matrices,
just doing the computations without having to worry whether a matrix of interest
represents a linear map on some pair of spaces.
When we are working with a matrix but we do not have particular spaces or
bases in mind then we can take the domain and codomain to be Rn and Rm ,
with the standard bases. This is convenient because with the standard bases
vector representation is transparent the representation of ~v is ~v. (In this case
the column space of the matrix equals the range of the map and consequently
the column space of H is often denoted by R(H).)
Given a matrix, to come up with an associated map we can choose among
many domain and codomain spaces, and many bases for those. So a matrix can
represent many maps. We finish this section by illustrating how the matrix can
give us information about the associated maps.
2.4 Theorem The rank of a matrix equals the rank of any map that it represents.
Proof Suppose that the matrix H is mn. Fix domain and codomain spaces
~ 1, . . . ,
~ n i and D. Then H
V and W of dimension n and m with bases B = h
represents some linear map h between those spaces with respect to these bases
whose range space
~ 1 + + cn
~ n ) | c1 , . . . , c n R }
{h(~v) | ~v V } = {h(c1
~ 1 ) + + cn h(
~ n ) | c1 , . . . , c n R }
= {c1 h(
~ 1 ), . . . , h(
~ n ) }]. The rank of the map h is the dimension of this
is the span [{ h(
range space.
The rank of the matrix is the dimension of its column space, the span of the
~ 1 )), . . . , RepD (h(
~ n )) } ].
set of its columns [ {RepD (h(
To see that the two spans have the same dimension, recall from the proof
of Lemma I.2.5 that if we fix a basis then representation with respect to that
basis gives an isomorphism RepD : W Rm . Under this isomorphism there is a
linear relationship among members of the range space if and only if the same
~ 1 ) + + cn h(
~ n ) if
relationship holds in the column space, e.g, ~0 = c1 h(
~ 1 )) + + cn RepD (h(
~ n )). Hence, a subset of
and only if ~0 = c1 RepD (h(
216
the range space is linearly independent if and only if the corresponding subset
of the column space is linearly independent. Therefore the size of the largest
linearly independent subset of the range space equals the size of the largest
linearly independent subset of the column space, and so the two spaces have the
same dimension.
QED
That settles the apparent ambiguity in our use of the same word 'rank' to apply both to matrices and to maps.
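As a small aside (not from the text), this Python/NumPy check computes the rank of the matrix from the next example; by the theorem it is also the dimension of the range of any map that the matrix represents.

import numpy as np
H = np.array([[1., 2., 2.],
              [1., 2., 1.],
              [0., 0., 3.],
              [0., 0., 2.]])
print(np.linalg.matrix_rank(H))   # 2: any represented map has a two-dimensional range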
2.5 Example Any map represented by
\[ \begin{pmatrix}1&2&2\\1&2&1\\0&0&3\\0&0&2\end{pmatrix} \]
of h, which equals the rank of H by the theorem. Since the dimension of the
codomain of h equals the number of rows in H, if the rank of H equals the
number of rows then the dimension of the range space equals the dimension
of the codomain. But a subspace with the same dimension as its superspace
must equal that superspace (because any basis for the range space is a linearly
independent subset of the codomain whose size is equal to the dimension of the
codomain, and thus so this basis for the range space must also be a basis for the
codomain).
For the other half, a linear map is one-to-one if and only if it is an isomorphism
between its domain and its range, that is, if and only if its domain has the same
dimension as its range. The number of columns in H is the dimension of hs
domain and by the theorem the rank of H equals the dimension of hs range.
QED
2.7 Definition A linear map that is one-to-one and onto is nonsingular , otherwise
it is singular . That is, a linear map is nonsingular if and only if it is an
isomorphism.
217
2.8 Remark Some authors use nonsingular as a synonym for one-to-one while
others use it the way that we have here. The difference is slight because any
map is onto its range space, so a one-to-one map is an isomorphism with its
range.
In the first chapter we defined a matrix to be nonsingular if it is square and
is the matrix of coefficients of a linear system with a unique solution. The next
result justifies our dual use of the term.
2.9 Lemma A nonsingular linear map is represented by a square matrix. A
square matrix represents nonsingular maps if and only if it is a nonsingular
matrix. Thus, a matrix represents isomorphisms if and only if it is square and
nonsingular.
Proof Assume that the map h : V W is nonsingular. Corollary 2.6 says that
for any matrix H representing that map, because h is onto the number of rows
of H equals the rank of H, and because h is one-to-one the number of columns
of H is also equal to the rank of H. Hence H is square.
Next assume that H is square, n×n. The matrix H is nonsingular if and only if its row rank is n, which is true if and only if H's rank is n by Theorem Two.III.3.11, which is true if and only if h's rank is n by Theorem 2.4, which is true if and only if h is an isomorphism by Theorem I.2.3. (This last holds because the domain of h is n-dimensional as it is the number of columns in H.)
QED
QED
2.10 Example Any map from R2 to P1 represented with respect to any pair of
bases by
\[ \begin{pmatrix}1&2\\0&3\end{pmatrix} \]
is nonsingular because this matrix has rank two.
2.11 Example Any map g : V W represented by
\[ \begin{pmatrix}1&2\\3&6\end{pmatrix} \]
is singular because this matrix is singular.
We've now seen that the relationship between maps and matrices goes both ways: for a particular pair of bases, any linear map is represented by a matrix and any matrix describes a linear map. That is, by fixing spaces and bases we get a correspondence between maps and matrices. In the rest of this chapter we will explore this correspondence. For instance, we've defined for linear maps the operations of addition and scalar multiplication and we shall see what the corresponding matrix operations are. We shall also see the matrix operation that represents the map operation of composition. And, we shall see how to find the matrix that represents a map's inverse.
Exercises
X 2.12 Let h be the linear map defined by this matrix on the domain P1 and
codomain R2 with respect to the given bases.
2 1
1
1
H=
B = h1 + x, xi, D = h
,
i
4 2
1
0
What is the image under h of the vector ~v = 2x 1?
X 2.13 Decide if each vector lies in the range of the map from R3 to R2 represented
with respect
to the
bases
standard
by thematrix.
1 1 3
1
2 0 3
1
(a)
,
(b)
,
0 1 4
3
4 0 6
1
X 2.14 Consider this matrix, representing a transformation of R2 , and these bases for
that space.
1
1 1
0
1
1
1
B=h
,
i D=h
,
i
1 1
1
0
1
1
2
(a) To what vector in the codomain is the first member of B mapped?
(b) The second member?
(c) Where is a general vector from the domain (a vector with components x and
y) mapped? That is, what transformation of R2 is represented with respect to
B, D by this matrix?
2.15 What transformation of F = { a cos + b sin | a, b R } is represented with
respect to B = hcos sin , sin i and D = hcos + sin , cos i by this matrix?
0 0
1 0
X 2.16 Decide whether 1 + 2x is in the range of the map from R3 to P2 represented
with respect to E3 and h1, 1 + x2 , xi by this matrix.
1 3 0
0 1 0
1 0 1
2.17 Example 2.11 gives a matrix that is nonsingular and is therefore associated
with maps that are nonsingular.
(a) Find the set of column vectors representing the members of the null space of
any map represented by this matrix.
(b) Find the nullity of any such map.
(c) Find the set of column vectors representing the members of the range space
of any map represented by this matrix.
(d) Find the rank of any such map.
(e) Check that rank plus nullity equals the dimension of the domain.
219
X 2.18 Take each matrix to represent h : Rm Rn with respect to the standard bases.
For each (i) state m and n. Then set up an augmented matrix with the given
matrix on the left and a vector representing a range space element on the right
(e.g., if the codomain is R3 then in the right-hand column put the three entries a,
b, and c). Perform Gauss-Jordan reduction. Use that to (ii) find R(h) and rank(h)
(and state whether the underlying map is onto), and (iii) find N (h) and nullity(h)
(and state whether the underlying map is one-to-one).
2 1
(a)
1 3
0
1 3
(b) 2
3 4
2 1 2
1 1
(c) 2 1
3 1
2.19 Use the method from the prior exercise on each.
1 0 1
(a) 2 1 0
2 2 2
(b) Verify that the map represented by this matrix is an isomorphism.
2 1 0
3 1 1
7 2 1
2.20 This is an alternative proof of Lemma 2.9. Given an n n matrix H, fix a
domain V and codomain W of appropriate dimension n, and bases B, D for those
spaces, and consider the map h represented by the matrix.
(a) Show that h is onto if and only if there is at least one RepB (~v) associated by
H with each RepD (~
w).
(b) Show that h is one-to-one if and only if there is at most one RepB (~v) associated
by H with each RepD (~
w).
(c) Consider the linear system HRepB (~v) = RepD (~
w). Show that H is nonsingular
if and only if there is exactly one solution RepB (~v) for each RepD (~
w).
X 2.21 Because the rank of a matrix equals the rank of any map it represents, if
220
X 2.24 A square matrix is a diagonal matrix if it is all zeroes except possibly for the
entries on its upper-left to lower-right diagonal its 1, 1 entry, its 2, 2 entry, etc.
Show that a linear map is an isomorphism if there are bases such that, with respect
to those bases, the map is represented by a diagonal matrix with no zeroes on the
diagonal.
2.25 Describe geometrically the action on R2 of the map represented with respect
to the standard bases E2 , E2 by this matrix.
3 0
0 2
Do the same for these.
1
0
0
0
0
1
1
0
1
0
3
1
2.26 The fact that for any linear map the rank plus the nullity equals the dimension
of the domain shows that a necessary condition for the existence of a homomorphism
between two spaces, onto the second space, is that there be no gain in dimension.
That is, where h : V W is onto, the dimension of W must be less than or equal
to the dimension of V.
(a) Show that this (strong) converse holds: no gain in dimension implies that
there is a homomorphism and, further, any matrix with the correct size and
correct rank represents such a map.
(b) Are there bases for R3 such that this matrix
1 0 0
H = 2 0 0
0 1 0
represents a map from R3 to R3 whose range is the xy plane subspace of R3 ?
2.27 Let V be an n-dimensional space and suppose that ~x Rn . Fix a basis
B for V and consider the map h~x : V R given ~v 7 ~x RepB (~v) by the dot
product.
(a) Show that this map is linear.
(b) Show that for any linear map g : V R there is an ~x Rn such that g = h~x .
(c) In the prior item we fixed the basis and varied the ~x to get all possible linear
maps. Can we get all possible linear maps by fixing an ~x and varying the basis?
2.28 Let V, W, X be vector spaces with bases B, C, D.
(a) Suppose that h : V W is represented with respect to B, C by the matrix H.
Give the matrix representing the scalar multiple rh (where r R) with respect
to B, C by expressing it in terms of H.
(b) Suppose that h, g : V W are represented with respect to B, C by H and G.
Give the matrix representing h + g with respect to B, C by expressing it in terms
of H and G.
(c) Suppose that h : V W is represented with respect to B, C by H and g : W X
is represented with respect to C, D by G. Give the matrix representing g h
with respect to B, D by expressing it in terms of H and G.
IV Matrix Operations
The prior section shows how matrices represent linear maps. We now explore
how this representation interacts with things that we already know. First we
will see how the representation of a scalar product r f of a linear map relates to
the representation of f, and also how the representation of a sum f + g relates to
the representations of the two summands. Later we will do the same comparison
for the map operations of composition and inverse.
IV.1
222
1.2 Example We can do a similar exploration for the sum of two maps. Suppose that two linear maps with the same domain and codomain f, g : R2 → R2 are represented with respect to bases B and D by these matrices.
\[ \operatorname{Rep}_{B,D}(f) = \begin{pmatrix}1&3\\2&0\end{pmatrix} \qquad \operatorname{Rep}_{B,D}(g) = \begin{pmatrix}-2&-1\\2&4\end{pmatrix} \]
Recall the definition of sum: if f does ~v ↦ ~u and g does ~v ↦ ~w then f + g is the function whose action is ~v ↦ ~u + ~w. Let these be the representations of the input and output vectors.
\[ \operatorname{Rep}_B(\vec{v}) = \begin{pmatrix}v_1\\v_2\end{pmatrix} \qquad \operatorname{Rep}_D(\vec{u}) = \begin{pmatrix}u_1\\u_2\end{pmatrix} \qquad \operatorname{Rep}_D(\vec{w}) = \begin{pmatrix}w_1\\w_2\end{pmatrix} \]
Where D = ⟨~δ1 , ~δ2 ⟩ we have ~u + ~w = (u1~δ1 + u2~δ2 ) + (w1~δ1 + w2~δ2 ) = (u1 + w1 )~δ1 + (u2 + w2 )~δ2 and so this is the representation of the vector sum.
\[ \operatorname{Rep}_D(\vec{u}+\vec{w}) = \begin{pmatrix}u_1+w_1\\u_2+w_2\end{pmatrix} \]
Thus, since these represent the actions of the maps f and g on the input ~v
\[ \begin{pmatrix}1&3\\2&0\end{pmatrix}\begin{pmatrix}v_1\\v_2\end{pmatrix} = \begin{pmatrix}v_1+3v_2\\2v_1\end{pmatrix} \qquad \begin{pmatrix}-2&-1\\2&4\end{pmatrix}\begin{pmatrix}v_1\\v_2\end{pmatrix} = \begin{pmatrix}-2v_1-v_2\\2v_1+4v_2\end{pmatrix} \]
adding the entries represents the action of the map f + g.
\[ \operatorname{Rep}_{B,D}(f+g)\begin{pmatrix}v_1\\v_2\end{pmatrix} = \begin{pmatrix}-v_1+2v_2\\4v_1+4v_2\end{pmatrix} \]
Therefore, we compute the matrix representing the function sum by adding the entries of the matrices representing the functions.
\[ \operatorname{Rep}_{B,D}(f+g) = \begin{pmatrix}-1&2\\4&4\end{pmatrix} \]
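A quick numerical check outside the text, using the representations from this example: applying the two maps and adding the results agrees with applying the entry-wise sum of the matrices.

import numpy as np
F = np.array([[1., 3.], [2., 0.]])
G = np.array([[-2., -1.], [2., 4.]])
v = np.array([7., -3.])
print(F @ v + G @ v)   # [-13. 16.]
print((F + G) @ v)     # [-13. 16.], the same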
1.3 Definition The scalar multiple of a matrix is the result of entry-by-entry
scalar multiplication. The sum of two same-sized matrices is their entry-by-entry
sum.
These operations extend the first chapters operations of addition and scalar
multiplication of vectors.
We need a result that proves these matrix operations do what the examples
suggest that they do.
223
QED
1.5 Remark These two operations on matrices are simple. But we did not define
them in this way because they are simple. We defined them in this way because
they represent function addition and function scalar multiplication. That is,
our program is to define matrix operations by referencing function operations.
Simplicity is a pleasant bonus.
We will see this again in the next subsection, where we will define the
operation of multiplying matrices. Since we've just defined matrix scalar multiplication and matrix sum to be entry-by-entry operations, a naive thought is
to define matrix multiplication to be the entry-by-entry product. In theory we
could do whatever we please but we will instead be practical and combine the
entries in the way that represents the function operation of composition.
A special case of scalar multiplication is multiplication by zero. For any map
0 h is the zero homomorphism and for any matrix 0 H is the matrix with all
entries zero.
1.6 Definition A zero matrix has all entries 0. We write Znm or simply Z
(another common notation is 0nm or just 0).
1.7 Example The zero map from any three-dimensional space to any two-dimensional space is represented by the 2×3 zero matrix
\[ Z = \begin{pmatrix}0&0&0\\0&0&0\end{pmatrix} \]
no matter what domain and codomain bases we use.
Exercises
X 1.8 Perform
the indicated
if defined.
operations,
5 1 2
2 1 4
(a)
+
6 1 1
3 0 5
2 1 1
(b) 6
1 2
3
2 1
2 1
(c)
+
0 3
0 3
1 2
1 4
+5
(d) 4
3 1
2 1
224
2
3
1
1
+2
0
3
1
0
4
5
IV.2 Matrix Multiplication
After representing addition and scalar multiplication of linear maps in the prior
subsection, the natural next operation to consider is function composition.
2.1 Lemma The composition of linear maps is linear.
Proof (Note: this argument has already appeared, as part of the proof of
\[ H = \operatorname{Rep}_{B,C}(h) = \begin{pmatrix}4&6&8&2\\5&7&9&3\end{pmatrix}_{B,C} \qquad G = \operatorname{Rep}_{C,D}(g) = \begin{pmatrix}1&1\\0&1\\1&0\end{pmatrix}_{C,D} \]
Start with the representation of h(~v).
\[ \operatorname{Rep}_C(h(\vec{v})) = \begin{pmatrix}4&6&8&2\\5&7&9&3\end{pmatrix}_{B,C} \begin{pmatrix}v_1\\v_2\\v_3\\v_4\end{pmatrix}_B = \begin{pmatrix}4v_1+6v_2+8v_3+2v_4\\5v_1+7v_2+9v_3+3v_4\end{pmatrix}_C \]
The representation of g( h(~v) ) is the product of g's matrix and h(~v)'s vector.
\[ \operatorname{Rep}_D(g(h(\vec{v}))) = \begin{pmatrix}1&1\\0&1\\1&0\end{pmatrix}_{C,D} \begin{pmatrix}4v_1+6v_2+8v_3+2v_4\\5v_1+7v_2+9v_3+3v_4\end{pmatrix}_C \]
226
= 0 4 + 1 5 0 6 + 1 7 0 8 + 1 9 0 2 + 1 3
14+05 16+07 18+09 12+03
The matrix representing g ∘ h has the rows of G combined with the columns of
H.
2.3 Definition The matrix-multiplicative product of the m×r matrix G and the r×n matrix H is the m×n matrix P, where
\[ p_{i,j} = g_{i,1}h_{1,j} + g_{i,2}h_{2,j} + \cdots + g_{i,r}h_{r,j} \]
so that the i, j-th entry of the product is the dot product of the i-th row of the first matrix with the j-th column of the second.
h1,j
..
..
.
.
h2,j
= pi,j
GH =
g
g
g
i,1
i,2
i,r
.
.
.
..
..
.
.
hr,j
2.4 Example
\[ \begin{pmatrix}2&0\\4&6\\8&2\end{pmatrix} \begin{pmatrix}1&3\\5&7\end{pmatrix} = \begin{pmatrix}2\cdot 1+0\cdot 5 & 2\cdot 3+0\cdot 7\\ 4\cdot 1+6\cdot 5 & 4\cdot 3+6\cdot 7\\ 8\cdot 1+2\cdot 5 & 8\cdot 3+2\cdot 7\end{pmatrix} = \begin{pmatrix}2&6\\34&54\\18&38\end{pmatrix} \]
2.5 Example Some products are not defined, such as the product of a 2×3 matrix with a 2×2, because the number of columns in the first matrix must equal the number of rows in the second. But the product of two n×n matrices is always defined. Here are two 2×2's.
\[ \begin{pmatrix}1&2\\3&4\end{pmatrix} \begin{pmatrix}-1&0\\2&-2\end{pmatrix} = \begin{pmatrix}1\cdot(-1)+2\cdot 2 & 1\cdot 0+2\cdot(-2)\\ 3\cdot(-1)+4\cdot 2 & 3\cdot 0+4\cdot(-2)\end{pmatrix} = \begin{pmatrix}3&-4\\5&-8\end{pmatrix} \]
2.6 Example The matrices from Example 2.2 combine in this way.
\[ \begin{pmatrix}1&1\\0&1\\1&0\end{pmatrix} \begin{pmatrix}4&6&8&2\\5&7&9&3\end{pmatrix} = \begin{pmatrix} 1\cdot 4+1\cdot 5 & 1\cdot 6+1\cdot 7 & 1\cdot 8+1\cdot 9 & 1\cdot 2+1\cdot 3\\ 0\cdot 4+1\cdot 5 & 0\cdot 6+1\cdot 7 & 0\cdot 8+1\cdot 9 & 0\cdot 2+1\cdot 3\\ 1\cdot 4+0\cdot 5 & 1\cdot 6+0\cdot 7 & 1\cdot 8+0\cdot 9 & 1\cdot 2+0\cdot 3 \end{pmatrix} = \begin{pmatrix}9&13&17&5\\5&7&9&3\\4&6&8&2\end{pmatrix} \]
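As a numerical sketch outside the text, this Python/NumPy check uses the matrices of Examples 2.2 and 2.6 to confirm that the product GH acts like the composition: applying H and then G to a vector gives the same result as applying GH.

import numpy as np
G = np.array([[1., 1.], [0., 1.], [1., 0.]])
H = np.array([[4., 6., 8., 2.], [5., 7., 9., 3.]])
v = np.array([1., 0., -1., 2.])
print(G @ (H @ v))   # apply h, then g: [2. 2. 0.]
print((G @ H) @ v)   # apply the single matrix GH: the same vector
print(G @ H)         # [[9. 13. 17. 5.], [5. 7. 9. 3.], [4. 6. 8. 2.]]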
2.7 Theorem A composition of linear maps is represented by the matrix product
of the representatives.
Proof This argument generalizes Example 2.2. Let h : V → W and g : W → X
QED
This arrow diagram pictures the relationship between maps and matrices
(wrt abbreviates with respect to).
V wrt B  --h (represented by H)-->  W wrt C  --g (represented by G)-->  X wrt D, and directly V wrt B  --g ∘ h (represented by GH)-->  X wrt D
228
Above the arrows, the maps show that the two ways of going from V to X,
straight over via the composition or else in two steps by way of W, have the
same effect
\[ \vec{v} \xmapsto{\;g\circ h\;} g(h(\vec{v})) \qquad\qquad \vec{v} \xmapsto{\;h\;} h(\vec{v}) \xmapsto{\;g\;} g(h(\vec{v})) \]
(this is just the definition of composition). Below the arrows, the matrices
(this is just the definition of composition). Below the arrows, the matrices
indicate that multiplying GH into the column vector RepB (~v) has the same
effect as multiplying the column vector first by H and then multiplying the
result by G.
\[ \operatorname{Rep}_{B,D}(g\circ h) = GH \tag{$*$} \]
229
over the rows of H the definition treats left differently than right. So we may
reasonably suspect that GH can be unequal to HG.
2.9 Example Matrix multiplication is not commutative.
\[ \begin{pmatrix}1&2\\3&4\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}19&22\\43&50\end{pmatrix} \qquad \begin{pmatrix}5&6\\7&8\end{pmatrix}\begin{pmatrix}1&2\\3&4\end{pmatrix} = \begin{pmatrix}23&34\\31&46\end{pmatrix} \]
6
8
1
3
2
4
!
0
=
0
23
31
1
3
2
4
!
0
5
0
7
6
8
34
46
0
0
while
!
2.13 Remark We could instead prove that result by slogging through indices. For
230
2
3 1
0
5
1 1 1
(a)
(b)
3
4 2
0 0.5
4 0 3
3
1 0 5
2 7
5 2
1
(c)
(d)
1 1 1
7 4
3 1
3
3 8 4
1
1
1
1
1
1
2
5
231
X 2.15 Where
1 1
5 2
2 3
B=
C=
2 0
4 4
4 1
compute or state not defined.
(a) AB
(b) (AB)C
(c) BC
(d) A(BC)
2.16 Which products are defined?
(a) 3 2 times 2 3
(b) 2 3 times 3 2
(c) 2 2 times 3 3
(d) 33 times 22
X 2.17 Give the size of the product or state not defined.
(a) a 23 matrix times a 31 matrix
(b) a 112 matrix times a 121 matrix
(c) a 23 matrix times a 21 matrix
(d) a 22 matrix times a 22 matrix
X 2.18 Find the system of equations resulting from starting with
h1,1 x1 + h1,2 x2 + h1,3 x3 = d1
h2,1 x1 + h2,2 x2 + h2,3 x3 = d2
A=
232
2.22 [Cleary] Match each type of matrix with all these descriptions that could fit:
(i) can be multiplied by its transpose to make a 11 matrix, (ii) is similar to the
33 matrix of all zeros, (iii) can represent a linear map from R3 to R2 that is not
onto, (iv) can represent an isomorphism from R3 to P2 .
(a) a 23 matrix whose rank is 1
(b) a 33 matrix that is nonsingular
(c) a 22 matrix that is singular
(d) an n1 column vector
2.23 Show that composition of linear transformations on R1 is commutative. Is this
true for any one-dimensional space?
2.24 Why is matrix multiplication not defined as entry-wise multiplication? That
would be easier, and commutative too.
2.25 (a) Prove that Hp Hq = Hp+q and (Hp )q = Hpq for positive integers p, q.
(b) Prove that (rH)p = rp Hp for any positive integer p and scalar r R.
X 2.26 (a) How does matrix multiplication interact with scalar multiplication: is
r(GH) = (rG)H? Is G(rH) = r(GH)?
(b) How does matrix multiplication interact with linear combinations: is F(rG +
sH) = r(FG) + s(FH)? Is (rF + sG)H = rFH + sGH?
2.27 We can ask how the matrix product operation interacts with the transpose
operation.
(a) Show that (GH)T = HT GT .
(b) A square matrix is symmetric if each i, j entry equals the j, i entry, that is, if
the matrix equals its own transpose. Show that the matrices HHT and HT H are
symmetric.
X 2.28 Rotation of vectors in R3 about an axis is a linear map. Show that linear maps
do not commute by showing geometrically that rotations do not commute.
2.29 In the proof of Theorem 2.12 we used some maps. What are the domains and
codomains?
2.30 How does matrix rank interact with matrix multiplication?
(a) Can the product of rank n matrices have rank less than n? Greater?
(b) Show that the rank of the product of two matrices is less than or equal to the
minimum of the rank of each factor.
2.31 Is commutes with an equivalence relation among nn matrices?
X 2.32 (We will use this exercise in the Matrix Inverses exercises.) Here is another
property of matrix multiplication that might be puzzling at first sight.
(a) Prove that the composition of the projections x , y : R3 R3 onto the x and
y axes is the zero map despite that neither one is itself the zero map.
(b) Prove that the composition of the derivatives d2 /dx2 , d3 /dx3 : P4 P4 is the
zero map despite that neither is the zero map.
(c) Give a matrix equation representing the first fact.
(d) Give a matrix equation representing the second.
When two things multiply to give zero despite that neither is zero we say that each
is a zero divisor.
2.33 Show that, for square matrices, (S + T )(S T ) need not equal S2 T 2 .
233
a0 + a1 x + + an xn 7 0 + a0 x + a1 x2 + + an xn+1
Show that the two maps dont commute d/dx s 6= s d/dx; in fact, not only is
(d/dx s) (s d/dx) not the zero map, it is the identity map.
2.38 Recall the notation for the sum of the sequence of numbers a1 , a2 , . . . , an .
\[ \sum_{i=1}^{n} a_i = a_1 + a_2 + \cdots + a_n \]
\[ \sum_{k=1}^{r} g_{i,k}h_{k,j} \]
234
IV.3
\[ \begin{pmatrix}1&1\\0&1\\1&0\end{pmatrix} \begin{pmatrix}4&6&8&2\\5&7&9&3\end{pmatrix} = \begin{pmatrix}9&13&17&5\\5&7&9&3\\4&6&8&2\end{pmatrix} \]
We can view this as the left matrix acting by multiplying its rows into the
columns of the right matrix. Or, it is the right matrix using its columns to act
on the rows of the left matrix. Below, we will examine actions from the left and
from the right for some simple matrices.
Simplest is the zero matrix.
3.1 Example Multiplying by a zero matrix from the left or from the right results
in a zero matrix.
\[ \begin{pmatrix}0&0\\0&0\end{pmatrix} \begin{pmatrix}1&3&2\\1&1&1\end{pmatrix} = \begin{pmatrix}0&0&0\\0&0&0\end{pmatrix} \qquad \begin{pmatrix}2&3\\1&4\end{pmatrix} \begin{pmatrix}0&0\\0&0\end{pmatrix} = \begin{pmatrix}0&0\\0&0\end{pmatrix} \]
The next easiest matrices are the ones with a single nonzero entry.
3.2 Definition A matrix with all 0s except for a 1 in the i, j entry is an i, j unit
matrix (or matrix unit ).
3.3 Example This is the 1, 2 unit matrix with three rows and two columns, multiplying from the left.
\[ \begin{pmatrix}0&1\\0&0\\0&0\end{pmatrix} \begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}7&8\\0&0\\0&0\end{pmatrix} \]
Acting from the left, an i, j unit matrix copies row j of the multiplicand into row i of the result. From the right an i, j unit matrix picks out column i of the multiplicand and copies it into column j of the result.
\[ \begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} \begin{pmatrix}0&1\\0&0\\0&0\end{pmatrix} = \begin{pmatrix}0&1\\0&4\\0&7\end{pmatrix} \]
3.4 Example Rescaling unit matrices simply rescales the result. This is the action from the left of the matrix that is twice the one in the prior example.
\[ \begin{pmatrix}0&2\\0&0\\0&0\end{pmatrix} \begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}14&16\\0&0\\0&0\end{pmatrix} \]
Next in complication are matrices with two nonzero entries.
3.5 Example There are two cases. If a left-multiplier has entries in
then their actions dont interact.
1 0 0
1 2 3
1 0 0
0 0 0
1
=
(
+
)
0
0
2
4
5
6
0
0
0
0
0
2
4
0 0 0
7 8 9
0 0 0
0 0 0
7
1 2 3
0 0
0
= 0 0 0 + 14 16 18
0 0 0
0 0
0
1
2
3
= 14 16 18
0
0
0
But if the left-multipliers nonzero entries are in the
the result is a combination.
1 0 2
1 2 3
1 0 0
0
0 0 0 4 5 6 = (0 0 0 + 0
0 0 0
7 8 9
0 0 0
0
1 2 3
14
= 0 0 0 + 0
0 0 0
0
15 18 21
=0
0
0
0
0
0
different rows
2
5
8
6
9
16 18
0
0
0
0
0
0
0
2
5
8
6
9
3.7 Lemma In a product of two matrices G and H, the columns of GH are formed by taking G times the columns of H
\[ G\begin{pmatrix} \vdots & & \vdots\\ \vec{h}_1 & \cdots & \vec{h}_n\\ \vdots & & \vdots \end{pmatrix} = \begin{pmatrix} \vdots & & \vdots\\ G\vec{h}_1 & \cdots & G\vec{h}_n\\ \vdots & & \vdots \end{pmatrix} \]
and the rows of GH are formed by taking the rows of G times H.
\[ \begin{pmatrix} \cdots & \vec{g}_1 & \cdots\\ & \vdots & \\ \cdots & \vec{g}_r & \cdots \end{pmatrix} H = \begin{pmatrix} \cdots & \vec{g}_1 H & \cdots\\ & \vdots & \\ \cdots & \vec{g}_r H & \cdots \end{pmatrix} \]
g1,2
g2,2
h1,1
h2,1
h1,2
h2,2
!
=
!
(g1,1 g1,2 )H
(g2,1 g2,2 )H
QED
\[ I_{n\times n} = \begin{pmatrix} 1&0&\ldots&0\\ 0&1&\ldots&0\\ &&\vdots&\\ 0&0&\ldots&1 \end{pmatrix} \]
3.10 Example Here is the 22 identity matrix leaving its multiplicand unchanged
1
0
1
4
237
2
2
1
1 0
3
0
1
1
0
=
1
4
2
2
1
3
3.11 Example Here the 3×3 identity leaves its multiplicand unchanged both from the left
\[ \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}2&3&6\\1&3&8\\7&1&0\end{pmatrix} = \begin{pmatrix}2&3&6\\1&3&8\\7&1&0\end{pmatrix} \]
and from the right.
\[ \begin{pmatrix}2&3&6\\1&3&8\\7&1&0\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} = \begin{pmatrix}2&3&6\\1&3&8\\7&1&0\end{pmatrix} \]
\[ \begin{pmatrix} a_{1,1}&0&\ldots&0\\ 0&a_{2,2}&\ldots&0\\ &&\vdots&\\ 0&0&\ldots&a_{n,n} \end{pmatrix} \]
3.13 Example From the left, the action of multiplication by a diagonal matrix is to rescale the rows.
\[ \begin{pmatrix}2&0\\0&-1\end{pmatrix} \begin{pmatrix}2&1&4&-1\\-1&3&4&4\end{pmatrix} = \begin{pmatrix}4&2&8&-2\\1&-3&-4&-4\end{pmatrix} \]
From the right such a matrix rescales the columns.
\[ \begin{pmatrix}1&2&1\\2&2&2\end{pmatrix} \begin{pmatrix}3&0&0\\0&2&0\\0&0&-2\end{pmatrix} = \begin{pmatrix}3&4&-2\\6&4&-4\end{pmatrix} \]
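As a quick aside (not from the text), this Python/NumPy sketch checks the two diagonal actions; the left factor reuses the example's numbers while the 4×4 diagonal on the right is a made-up sample.

import numpy as np
D = np.diag([2., -1.])
M = np.array([[2., 1., 4., -1.],
              [-1., 3., 4., 4.]])
print(D @ M)                             # rows of M scaled by 2 and -1
print(M @ np.diag([3., 2., -2., 1.]))    # columns of M scaled by 3, 2, -2, 1 (sample diagonal)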
We can also generalize identity matrices by putting a single one in each row and column in ways other than putting them down the diagonal.
3.14 Definition A permutation matrix is square and is all 0's except for a single 1 in each row and column.
3.15 Example From the left these matrices permute rows.
\[ \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix} \begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} = \begin{pmatrix}7&8&9\\1&2&3\\4&5&6\end{pmatrix} \]
From the right they permute columns.
\[ \begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix} = \begin{pmatrix}2&3&1\\5&6&4\\8&9&7\end{pmatrix} \]
0
1 0 0
0
2
1 1
=
0
3
0
0
1/3
1
1
0
1
0 0 1
1
0
2 0
3.17 Example This
0
1
2 1 1
1 3 3
0 2 0
0 1
0 2 1 1
1 0 2
1 0 0 1 3 3 = 0 1 3
0 0
1 0 2 0
0 2 1
rows.
3
1
1 0 0
1 0 0
32
0 1 0 0 3 0
0 0 1
0 0 1
Similarly, the matrix that
0
0
0 0
0 0
1 3
1 0 0 1
0 1
1 0
0
0
239
1 0 0
1
22 +3
0 1 0
0
0 0 1
0
will, when it acts
0
0
0 0
1
1 0 0
2 1
0
0
1
2
0
1
0 2 0
1 0 2
0
1 3 3 = 0 1 3 3
2 1 1
0 0 5 7
3.19 Definition The elementary reduction matrices (or just elementary matrices) result from applying a one Gaussian operation to an identity matrix.
ki
i j
Pi,j for i 6= j
ki +j
i j
G then Pi,j H = G.
ki +j
Proof Clear.
QED
3.21 Example This is the first system, from the first chapter, on which we
performed Gausss Method.
3x3 = 9
x1 + 5x2 2x3 = 2
(1/3)x1 + 2x2
=3
We can reduce it
0
1
0 1
0
0 3 9
1/3 2 0 3
1 0 1
5 2 2 = 1
5 2 2
0 0
1/3 2 0 3
0
0 3 9
240
3 0
0 1
0 0
and then add 1
1
0
0
1/3
0 1
1
0
2
5
0
0
2
3
0 0
1 6 0
1 0 1 5 2
0 1
0 0 3
1
3
2 = 1
9
0
6
5
0
0
2
3
2
9
the second.
9
1 6
2 = 0 1
9
0 0
0
2
3
7
9
1 6 0 9
1 0
0
1 6
0
9
0 0 1 2 7 = 0 1 2 7
0 1
0 0
3
0 0 1 3
0 0 1/3
9
then clear the third column, and then the second column.
1 6 0
1 0 0
1 6 0 9
1
0 1 0 0 1 2 0 1 2 7 = 0
0 0 1
0 0 1
0 0 1 3
0
0
1
0
0
0
1
1
3
3.23 Corollary For any matrix H there are elementary reduction matrices R1 , . . . ,
Rr such that Rr Rr1 R1 H is in reduced echelon form.
Until now we have taken the point of view that our primary objects of study
are vector spaces and the maps between them, and we seemed to have adopted
matrices only for computational convenience. This subsection show that this
isnt the entire story.
Understanding matrices operations by understanding the mechanics of how
the entries combine is also useful. In the rest of this book we shall continue to
focus on maps as the primary objects but we will be pragmatic if the matrix
point of view gives some clearer idea then we will go with it.
Exercises
X 3.24 Predict the result of each multiplication by an elementary reduction matrix,
and then check by multiplying it out.
241
3 0
1 2
1 0
1 2
1 0
1 2
(b)
(c)
0 1
3 4
0 2
3 4
2 1
3 4
1 2
1 1
1 2
0 1
(d)
(e)
3 4
0 1
3 4
1 0
3.25 Predict the result of each multiplication by a diagonal matrix, and then check
by multiplying
it out.
3 0
1 2
4 0
1 2
(a)
(b)
0 1
3 4
0 2
3 4
3.26 Produce each.
(a) a 33 matrix that, acting from the left, swaps rows one and two
(b) a 22 matrix that, acting from the right, swaps column one and two
(a)
X 3.27 Show how to use matrix multiplication to bring this matrix to echelon form.
1 2 1 0
2 3 1 1
7 11 4 3
3.28 Find the product of this matrix with its transpose.
cos sin
sin
cos
X 3.29 The need to take linear combinations of rows and columns in tables of numbers
arises often in practice. For instance, this is a map of part of Vermont and New
York.
Swanton
Grand Isle
Colchester
Winooski
Burlington
(a) The adjacency matrix of a map is the square matrix whose i, j entry is the
number of roads from city i to city j (all (i, i) entries are 0). Produce the
adjacency matrix of this map, with the cities in alphabetical order.
(b) A matrix is symmetric if it equals its transpose. Show that an adjacency
matrix is symmetric. (These are all two-way streets. Vermont doesnt have many
one-way streets.)
(c) What is the significance of the square of the incidence matrix? The cube?
242
X 3.30 This table gives the number of hours of each type done by each worker, and
the associated pay rates. Use matrices to compute the wages due.
regular overtime
wage
Alan
40
12
regular
$25.00
Betty
35
6
overtime $45.00
Catherine
40
18
Donald
28
0
Remark. This illustrates that in practice we often want to compute linear combinations of rows and columns in a context where we really arent interested in any
associated linear maps.
3.31 Express this nonsingular matrix as a product of elementary reduction matrices.
1 2 0
T = 2 1 0
3 1 2
3.32 Express
1
3
0
3
243
3.46 Give an example of two matrices of the same rank and size with squares of
differing rank.
3.47 On a computer multiplications have traditionally been more costly than additions, so people have tried to in reduce the number of multiplications used to
compute a matrix product.
(a) How many real number multiplications do we need in the formula we gave for
the product of a mr matrix and a rn matrix?
(b) Matrix multiplication is associative, so all associations yield the same result.
The cost in number of multiplications, however, varies. Find the association
requiring the fewest real number multiplications to compute the matrix product
of a 510 matrix, a 1020 matrix, a 205 matrix, and a 51 matrix.
(c) (Very hard.) Find a way to multiply two 2 2 matrices using only seven
multiplications instead of the eight suggested by the naive approach.
? 3.48 [Putnam, 1990, A-5] If A and B are square matrices of the same size such that
ABAB = 0, does it follow that BABA = 0?
3.49 [Am. Math. Mon., Dec. 1966] Demonstrate these four assertions to get an alternate proof that column rank equals row rank.
(a) ~y ~y = 0 iff ~y = ~0.
(b) A~x = ~0 iff AT A~x = ~0.
(c) dim(R(A)) = dim(R(AT A)).
(d) col rank(A) = col rank(AT ) = row rank(A).
3.50 [Ackerson] Prove (where A is an nn matrix and so defines a transformation of
any n-dimensional space V with respect to B, B where B is a basis) that dim(R(A)
N (A)) = dim(R(A)) dim(R(A2 )). Conclude
(a) N (A) R(A) iff dim(N (A)) = dim(R(A)) dim(R(A2 ));
(b) R(A) N (A) iff A2 = 0;
(c) R(A) = N (A) iff A2 = 0 and dim(N (A)) = dim(R(A)) ;
(d) dim(R(A) N (A)) = 0 iff dim(R(A)) = dim(R(A2 )) ;
(e) (Requires the Direct Sum subsection, which is optional.) V = R(A) N (A)
iff dim(R(A)) = dim(R(A2 )).
IV.4
Inverses
We finish this section by considering how to represent the inverse of a linear map.
We first recall some things about inverses. Where : R3 R2 is the projection
map and : R2 R3 is the embedding
x
y 7
z
x
y
x
y
x
7 y
244
y
y
y
z
z
for all of the infinitely many zs. But a function f cannot send a single argument
x
y to more than one value.
So a function can have a right inverse but no left inverse, or a left inverse
but no right inverse. A function can also fail to have an inverse on either side;
one example is the zero transformation on R2 .
Some functions have a two-sided inverse, another function that is the inverse
both from the left and from the right. For instance, the transformation given by
~v 7 2 ~v has the two-sided inverse ~v 7 (1/2) ~v. The appendix shows that a
function has a two-sided inverse if and only if it is both one-to-one and onto.
The appendix also shows that if a function f has a two-sided inverse then it is
unique, so we call it the inverse and write f1 .
In addition, recall that we have shown in Theorem II.2.20 that if a linear
map has a two-sided inverse then that inverse is also linear.
Thus, our goal in this subsection is, where a linear h has an inverse, to find
the relationship between RepB,D (h) and RepD,B (h1 ).
4.1 Definition A matrix G is a left inverse matrix of the matrix H if GH is the
identity matrix. It is a right inverse if HG is the identity. A matrix H with
a two-sided inverse is an invertible matrix. That two-sided inverse is denoted
H1 .
Because of the correspondence between linear maps and matrices, statements
about map inverses translate into statements about matrix inverses.
245
4.2 Lemma If a matrix has both a left inverse and a right inverse then the two
are equal.
4.3 Theorem A matrix is invertible if and only if it is nonsingular.
Proof (For both results.) Given a matrix H, fix spaces of appropriate dimension
for the domain and codomain and fix bases for these spaces. With respect to
these bases, H represents a map h. The statements are true about the map and
therefore they are true about the matrix.
QED
4.4 Lemma A product of invertible matrices is invertible: if G and H are invertible
and GH is defined then GH is invertible and (GH)1 = H1 G1 .
Proof Because the two matrices are invertible they are square, and because
their product is defined they must both be nn. Fix spaces and bases say,
Rn with the standard bases to get maps g, h : Rn Rn that are associated
with the matrices, G = RepEn ,En (g) and H = RepEn ,En (h).
Consider h1 g1 . By the prior paragraph this composition is defined. This
map is a two-sided inverse of gh since (h1 g1 )(gh) = h1 (id)h = h1 h = id
and (gh)(h1 g1 ) = g(id)g1 = gg1 = id. The matrices representing the
maps reflect this equality.
QED
This is the arrow diagram giving the relationship between map inverses and
matrix inverses. It is a special case of the diagram relating function composition
to matrix multiplication.
Wwrt C
h1
h
1
Vwrt B
id
I
Vwrt B
Beyond its place in our program of seeing how to represent map operations,
another reason for our interest in inverses comes from linear systems. A linear
system is equivalent to a matrix equation, as here.
!
!
!
x1 + x2 = 3
1 1
x1
3
=
2x1 x2 = 2
2 1
x2
2
By fixing spaces and bases (for instance, R2 , R2 with the standard bases), we
take the matrix H to represent a map h. The matrix equation then becomes
this linear map equation.
h(~x) = ~d
246
If we had a left inverse map g then we could apply it to both sides gh(~x) = g(~d)
to get ~x = g(~d). Restating in terms of the matrices, we want to multiply by the
inverse matrix RepC,B (g) RepC (~d) to get RepB (~x).
4.5 Example We can find a left inverse for the matrix just given
!
!
!
m n
1 1
1 0
=
p q
2 1
0 1
by using Gausss Method to solve the resulting linear system.
m + 2n
m n
=1
=0
p + 2q = 0
p q=1
2
3
247
(1/3)(3.01) + (1/3)(2)
(2/3)(3.01) (1/3)(2)
shows that the first component of the solution changes by 1/3 of the tweak,
while the second component moves by 2/3 of the tweak. This is sensitivity
analysis. We could use it to decide how accurately we must specify the data in
a linear model to ensure that the solution has a desired accuracy.
4.7 Lemma A matrix H is invertible if and only if it can be written as the product
of elementary reduction matrices. We can compute the inverse by applying to
the identity matrix the same row steps, in the same order, that Gauss-Jordan
reduce H.
Proof The matrix H is invertible if and only if it is nonsingular and thus
()
For the first sentence of the result, note that elementary matrices are invertible
because elementary row operations are reversible, and that their inverses are also
elementary. Apply R1
from the left to both sides of (). Then apply R1
r
r1 , etc.
The result gives H as the product of elementary matrices H = R1
R1
r I.
1
(The I there covers the case r = 0.)
For the second sentence, group () as (Rr Rr1 . . . R1 ) H = I and recognize
whats in the parentheses as the inverse H1 = Rr Rr1 . . . R1 I. Restated:
applying R1 to the identity, followed by R2 , etc., yields the inverse of H. QED
4.8 Example To find the inverse of
1
2
1
1
2 1 0 1
0 3 2 1
!
1 1
1
0
1/32
0 1 2/3 1/3
!
1 0 1/3
1/3
2 +1
0 1 2/3 1/3
248
!
1/3 1/3
2/3 1/3
0
3 1 1 0 0
1
0
1 0 1 0
1 2
0
1 0 1 0 0
3 1 1 0 0
1
1 1
0 0 0 1
1 1
0 0 0 1
1 0
1
0
1 0
1 +3
3 1 1
0 0
0
0 1 1 0 1 1
..
.
0
0
0
1
0
0
0
1
1/4 1/4
1/4 1/4
1/4 3/4
3/4
1/4
3/4
4.10 Example This algorithm detects a non-invertible matrix when the left half
wont reduce to the identity.
!
!
1 1 1 0
1 1
1 0
21 +2
2 2 0 1
0 0 2 1
With this procedure we can give a formula for the inverse of a general 22
matrix, which is worth memorizing.
4.11 Corollary The inverse for a 22 matrix exists and equals
a
c
b
d
!1
1
=
ad bc
d b
c a
if and only if ad bc 6= 0.
Proof This computation is Exercise 21.
QED
249
Matrix addition and subtraction work in much the same way as the real number
operations except that they only combine same-sized matrices. Scalar multiplication is in some ways an extension of real number multiplication. We also have
a matrix multiplication operation and its inverse that are somewhat like the
familiar real number operations (associativity, and distributivity over addition,
for example), but there are differences (failure of commutativity). This section
provides an example that algebra systems other than the usual real number one
can be interesting and useful.
Exercises
4.12 Supply the intermediate steps in Example 4.9.
X 4.13 Use
has an inverse.
Corollary
4.11 to
decideif each matrix
2 1
0 4
2 3
(a)
(b)
(c)
1 1
1 3
4 6
X 4.14 For each invertible matrix in the prior problem, use Corollary 4.11 to find its
inverse.
X 4.15 Find the inverse, if it exists, by using the Gauss-Jordan Method. Check the
answers for the 22 matrices with Corollary 4.11.
1 1 3
3 1
2 1/2
2 4
(a)
(b)
(c)
(d) 0 2 4
0 2
3
1
1 2
1 1 0
0 1
5
2 2
3
(e) 0 2 4
(f) 1 2 3
2 3 2
4 2 3
X 4.16 What matrix has this one for its inverse?
1 3
2 5
4.17 How does the inverse operation interact with scalar multiplication and addition
of matrices?
(a) What is the inverse of rH?
(b) Is (H + G)1 = H1 + G1 ?
X 4.18 Is (T k )1 = (T 1 )k ?
4.19 Is H1 invertible?
4.20 For each real number let t : R2 R2 be represented with respect to the
standard bases by this matrix.
cos sin
sin
cos
Show that t1 +2 = t1 t2 . Show also that t 1 = t .
4.21 Do the calculations for the proof of Corollary 4.11.
4.22 Show that this matrix
1
H=
0
0
1
1
0
has infinitely many right inverses. Show also that it has no left inverse.
250
X
X
4.23 In the review of inverses example, starting this subsection, how many left
inverses has ?
4.24 If a matrix has infinitely many right-inverses, can it have infinitely many
left-inverses? Must it have?
4.25 Assume that g : V W is linear. One of these is true, the other is false. Which
is which?
(a) If f : W V is a left inverse of g then f must be linear.
(b) If f : W V is a right inverse of g then f must be linear.
4.26 Assume that H is invertible and that HG is the zero matrix. Show that G is a
zero matrix.
4.27 Prove that if H is invertible then the inverse commutes with a matrix GH1 =
H1 G if and only if H itself commutes with that matrix GH = HG.
4.28 Show that if T is square and if T 4 is the zero matrix then (IT )1 = I+T +T 2 +T 3 .
Generalize.
4.29 Let D be diagonal. Describe D2 , D3 , . . . , etc. Describe D1 , D2 , . . . , etc.
Define D0 appropriately.
4.30 Prove that any matrix row-equivalent to an invertible matrix is also invertible.
4.31 The first question below appeared as Exercise 30.
(a) Show that the rank of the product of two matrices is less than or equal to the
minimum of the rank of each.
(b) Show that if T and S are square then T S = I if and only if ST = I.
4.32 Show that the inverse of a permutation matrix is its transpose.
4.33 The first two parts of this question appeared as Exercise 27.
(a) Show that (GH)T = HT GT .
(b) A square matrix is symmetric if each i, j entry equals the j, i entry (that is,
if the matrix equals its transpose). Show that the matrices HHT and HT H are
symmetric.
(c) Show that the inverse of the transpose is the transpose of the inverse.
(d) Show that the inverse of a symmetric matrix is symmetric.
4.34 The items starting this question appeared as Exercise 32.
(a) Prove that the composition of the projections x , y : R3 R3 is the zero
map despite that neither is the zero map.
(b) Prove that the composition of the derivatives d2 /dx2 , d3 /dx3 : P4 P4 is the
zero map despite that neither map is the zero map.
(c) Give matrix equations representing each of the prior two items.
When two things multiply to give zero despite that neither is zero, each is said to
be a zero divisor. Prove that no zero divisor is invertible.
4.35 In real number algebra, there are exactly two numbers, 1 and 1, that are
their own multiplicative inverse. Does H2 = I have exactly two solutions for 22
matrices?
4.36 Is the relation is a two-sided inverse of transitive? Reflexive? Symmetric?
4.37 [Am. Math. Mon., Nov. 1951] Prove: if the sum of the elements of each row
of a square matrix is k, then the sum of the elements in each row of the inverse
matrix is 1/k.
251
Change of Basis
Representations vary with the bases. For instance, with respect to the bases E2
and
!
!
1
1
B=h
,
i
1
1
~e1 R2 has these different representations.
!
1
RepE2 (~e1 ) =
RepB (~e1 ) =
0
!
1/2
1/2
The same holds for maps: with respect to the basis pairs E2 , E2 and E2 , B, the
identity map has these representations.
!
!
1 0
1/2 1/2
RepE2 ,E2 (id) =
RepE2 ,B (id) =
0 1
1/2 1/2
This section shows how to translate among the representations. That is, we will
compute how the representations vary as the bases vary.
V.1
In converting RepB (~v) to RepD (~v) the underlying vector ~v doesnt change. Thus,
the translation between these two ways of expressing the vector is accomplished
by the identity map on the space, described so that the domain space vectors are
represented with respect to B and the codomain space vectors are represented
with respect to D.
Vwrt B
idy
Vwrt D
(This diagram is vertical to fit with the ones in the next subsection.)
1.1 Definition The change of basis matrix for bases B, D V is the representation of the identity map id : V V with respect to those bases.
..
.
~
RepB,D (id) =
RepD (1 )
..
.
..
.
~ n )
RepD (
..
.
252
1.2 Remark A better name would be change of representation matrix but the
above name is standard.
The next result supports the definition.
1.3 Lemma Left-multiplication by the change of basis matrix for B, D converts
a representation with respect to B to one with respect to D. Conversely, if
left-multiplication by a matrix changes bases M RepB (~v) = RepD (~v) then M is
a change of basis matrix.
Proof The first sentence holds because matrix-vector multiplication represents
a map application and so RepB,D (id) RepB (~v) = RepD ( id(~v) ) = RepD (~v) for
each ~v. For the second sentence, with respect to B, D the matrix M represents
a linear map whose action is to map each vector to itself, and is therefore the
identity map.
QED
1.4 Example With these bases for R2 ,
!
!
2
1
B=h
,
i
1
0
!
!
1
1
D=h
,
i
1
1
because
!
2
RepD ( id(
)) =
1
1/2
3/2
!
D
!
1
RepD ( id(
)) =
0
1/2
1/2
!
D
1/2 1/2
3/2
1/2
!
!
1/2
1
=
1/2
2
1/2
1/2
253
bases then the matrix represents an invertible function, simply because we can
invert the function by changing the bases back. Because it represents a function
that is invertible, the matrix itself is invertible, and so is nonsingular.
For if we will show that any nonsingular matrix M performs a change of
basis operation from any given starting basis B (having n vectors, where the
matrix is nn) to some ending basis.
If the matrix is the identity I then the statement is obvious. Otherwise
because the matrix is nonsingular Corollary IV.3.23 says there are elementary
reduction matrices such that Rr R1 M = I with r > 1. Elementary matrices
are invertible and their inverses are also elementary so multiplying both sides of
that equation from the left by Rr 1 , then by Rr1 1 , etc., gives M as a product
of elementary matrices M = R1 1 Rr 1 .
We will be done if we show that elementary matrices change a given basis to
another basis, since then Rr 1 changes B to some other basis Br and Rr1 1
changes Br to some Br1 , etc. We will cover the three types of elementary
matrices separately; recall the notation for the three.
c1
c1
. .
.. ..
Mi (k) ci = kci
. .
. .
. .
cn
cn
c1
c1
.. ..
. .
c c
i j
. .
Pi,j
.. = ..
cj ci
.. ..
. .
cn
cn
c1
c1
..
..
.
.
c c
i
i
..
Ci,j (k) .. =
cj kci + cj
..
..
.
.
cn
cn
254
~ 1, . . . ,
~ j, . . . ,
~ i, . . . ,
~ n i.
h
~ 1 + + ci
~ i + + cj
~ j + + cn
~n
~v = c1
~ 1 + + cj
~ j + + ci
~ i + + cn
~ n = ~v
7 c1
~ 1, . . . ,
~ i, . . . ,
~ j, . . . ,
~ n i changes via
And, a representation with respect to h
left-multiplication by a row-combination matrix Ci,j (k) into a representation
~ 1, . . . ,
~ i k
~ j, . . . ,
~ j, . . . ,
~ ni
with respect to h
~ 1 + + ci
~ i + cj
~ j + + cn
~n
~v = c1
~ 1 + + ci (
~ i k
~ j ) + + (kci + cj )
~ j + + cn
~ n = ~v
7 c1
(the definition of Ci,j (k) specifies that i 6= j and k 6= 0).
QED
1.6 Corollary A matrix is nonsingular if and only if it represents the identity map
with respect to some pair of bases.
Exercises
X 1.7 In R2 , where
2
2
,
i
1
4
find the change of basis matrices from D to E2 and from E2 to D. Multiply the
two.
X 1.8 Find the change of basis matrix for B, D R2 .
1
1
(b) B = E2 , D = h
,
i
(a) B = E2 , D = h~e2 , ~e1 i
2
4
1
1
1
2
0
1
(c) B = h
,
i, D = E2
(d) B = h
,
i, D = h
,
i
2
4
1
2
4
3
X 1.9 Find the change of basis matrix for each B, D P2 .
(a) B = h1, x, x2 i, D = hx2 , 1, xi
(b) B = h1, x, x2 i, D = h1, 1 + x, 1 + x + x2 i
(c) B = h2, 2x, x2 i, D = h1 + x2 , 1 x2 , x + x2 i
1.10 For the bases in Exercise 8, find the change of basis matrix in the other direction,
from D to B.
2
X 1.11 Decide
ifeach changes
bases
on R. To what
basis is E2 changed?
5 0
2 1
1 4
1 1
(a)
(b)
(c)
(d)
0 4
3 1
2 8
1 1
1.12 For each space find the matrix changing a vector representation with respect
to B to one with respect to D.
1
1
0
(a) V = R3 , B = E3 , D = h2 , 1 , 1 i
3
1
1
1
1
0
(b) V = R3 , B = h2 , 1 , 1 i, D = E3
3
1
1
D=h
255
3 1 4
2 1 1
0 0 4
1.14 Consider the vector space of real-valued functions with basis hsin(x), cos(x)i.
Show that h2 sin(x) + cos(x), 3 cos(x)i is also a basis for this space. Find the change
of basis matrix in each direction.
1.15 Where does this matrix
cos(2)
sin(2)
sin(2)
cos(2)
send the standard basis for R2 ? Any other bases? Hint. Consider the inverse.
X 1.16 What is the change of basis matrix with respect to B, B?
1.17 Prove that a matrix changes bases if and only if it is invertible.
1.18 Finish the proof of Lemma 1.5.
X 1.19 Let H be an nn nonsingular matrix. What basis of Rn does H change to the
standard basis?
X 1.20
RepB (1 x + 3x2 x3 ) =
1
2
Find a basis D giving this different representation for the same polynomial.
1
0
2
3
RepD (1 x + 3x x ) =
2
0 D
(b) State and prove that we can change any nonzero vector representation to any
other.
Hint. The proof of Lemma 1.5 is constructive it not only says the bases change,
it shows how they change.
be bases for V and D, D
be bases for
1.21 Let V, W be vector spaces, and let B, B
W. Where h : V W is linear, find a formula relating RepB,D (h) to RepB,
D
(h).
X 1.22 Show that the columns of an n n change of basis matrix form a basis for
Rn . Do all bases appear in that way: can the vectors from any Rn basis make the
columns of a change of basis matrix?
X 1.23 Find a matrix having this effect.
1
4
7
3
1
That is, find a M that left-multiplies the starting vector to yield the ending vector.
Is there a matrix having these two effects?
256
V.2
The first subsection shows how to convert the representation of a vector with
respect to one basis to the representation of that same vector with respect to
another basis. We next convert the representation of a map with respect to one
pair of bases to the representation with respect to a different pair we convert
from RepB,D (h) to RepB,
D
(h). Here is the arrow diagram.
h
Vwrt B Wwrt D
H
idy
idy
h
Vwrt B Wwrt D
To move from the lower-left to the lower-right we can either go straight over,
=
or else up to VB then over to WD and then down. So we can calculate H
RepB,
D
(h) either by directly using B and D, or else by first changing bases with
RepB,B
(id) then multiplying by H = RepB,D (h) and then changing bases with
RepD,D
(id).
= Rep (id) H Rep (id)
H
()
D,D
B,B
2.1 Example The matrix
T=
cos(/6)
sin(/6)
sin(/6)
cos(/6)
!
=
3/2
1/2
!
1/2
3/2
t/6
(3 + 3)/2
(1 + 3 3)/2
these
!
2
i
3
257
R2wrt E2 R2wrt E2
T
idy
idy
t
R2wrt B R2wrt D
The picture illustrates that we can compute T either directly by going along the
squares bottom, or as in formula () by going up on the left, then across the
top, and then down on the right, with T = RepE2 ,D
(id) T RepB,E
2 (id). (Note
again that the matrix multiplication reads right to left, as the three functions
are composed and function composition reads right to left.)
Find the matrix for the left-hand side, the matrix RepB,E
2 (id), in the usual
which of
way: find the effect of the identity matrix on the starting basis B
course is no effect at all and then represent those basis elements with respect
to the ending basis E3 .
!
1 0
RepB,E
2 (id) =
1 2
This calculation is easy when the ending basis is the standard one.
There are two ways to compute the matrix for going down the squares right
side, RepE2 ,D
(id). We could calculate it directly as we did for the other change
of basis matrix. Or, we could instead calculate it as the inverse of the matrix
for going up RepD,E
2 (id). Find that matrix is easy, and we have a formula for
the 22 inverse so thats what is in the equation below.
RepB,
D
(t) =
=
!1
!
1 2
3/2 1/2
1
1/2
0 3
3/2
1
!
(5 3)/6 (3 + 2 3)/3
(1 + 3)/6
3/3
0
2
The matrix is messier but the map that it represents is the same. For instance,
apply T ,
(5 3)/6
(1 + 3)/6
(3 + 2 3)/3
3/3
B,D
1
1
!
=
(11 + 3 3)/6
(1 + 3 3)/6
258
11 + 3 3
1
0
1+3 3
+
6
2
3
!
=
(3 + 3)/2
(1 + 3 3)/2
2.2 Example Changing bases can make the matrix simpler. On R3 the map
x
y+z
t
y 7 x + z
z
x+y
is represented with respect to the standard basis in this way.
1
0
1
1
0
1
1
1
B = h1 , 1 , 1i
0
2
1
gives a matrix that is diagonal.
RepB,B (t) = 0
0
0
1
0
0
2
259
2.4 Corollary Matrix equivalent matrices represent the same map, with respect
to appropriate pairs of bases.
Proof This is immediate from equation () above.
QED
All matrices:
H matrix equivalent
to H
...
We can get insight into the classes by comparing matrix equivalence with row
equivalence (remember that matrices are row equivalent when they can be
= PHQ, the matrices P and Q
reduced to each other by row operations). In H
are nonsingular and thus each is a product of elementary reduction matrices
by Lemma IV.4.7. Left-multiplication by the reduction matrices making up P
performs row operations. Right-multiplication by the reduction matrices making
up Q performs column operations. Hence, matrix equivalence is a generalization
of row equivalence two matrices are row equivalent if one can be converted to
the other by a sequence of row reduction steps, while two matrices are matrix
equivalent if one can be converted to the other by a sequence of row reduction
steps followed by a sequence of column reduction steps.
Consequently, if matrices are row equivalent then they are also matrix
equivalent since we can take Q to be the identity matrix. The converse, however,
does not hold: two matrices can be matrix equivalent but not row equivalent.
2.5 Example These two are matrix equivalent
1
0
0
0
1
0
1
0
because the second reduces to the first by the column operation of taking 1
times the first column and adding to the second. They are not row equivalent
because they have different reduced echelon forms (both are already in reduced
form).
We close this section by giving a set of representatives for the matrix equivalence classes.
260
1
0
0
1
..
.
0
0
..
.
0
...
...
0
0
0
0
...
...
...
...
1
0
0
0
...
...
...
...
0
0
Z
Z
Proof Gauss-Jordan reduce the given matrix and combine all the row reduction
matrices to make P. Then use the leading entries to do column reduction and
finish by swapping the columns to put the leading ones on the diagonal. Combine
the column reduction matrices into Q.
QED
2.7 Example We illustrate the proof
0
2
2 1 1
0 1 1
4 2 2
1 1 0
1 0 0
1
0 1 0 0 1 0 0
0 0 1
2 0 1
2
2
0
4
1 1
1 2
1 1 = 0 0
2 2
0 0
1 2 0 0
1 0 0 0
1 2 0 0
1
0 1 0 0 0 1 0 0
= 0
0 0 1 1
0 0 1 0 0 0 1 1
0 0 0 0
0
0 0 0 1
0 0 0 1
0 0
1 1
0 0
0
0
0
0
0
0
0
0
0
1
0
1
0
0
0
0
0
0
0
0
1
0
0
1
0
0
0
1
0
= 0
0
0
1
0
1
0
0
0
0
0
0
0
1
0
0
0
261
1 0 2 0
1 2 1 1
1 0 0 0
1 1 0
0 0 1 0
1 0 0 0 1 1
= 0 1 0 0
0
0 1 0 1
2 0 1
2 4 2 2
0 0 0 0
0 0 0 1
2.8 Corollary Matrix equivalence classes are characterized by rank: two same-sized
matrices are matrix equivalent if and only if they have the same rank.
Proof Two same-sized matrices with the same rank are equivalent to the same
QED
2.9 Example The 22 matrices have only three possible ranks: zero, one, or two.
Thus there are three matrix equivalence classes.
All 22 matrices:
00
00
?
10
00
10
?
01
Three equivalence
classes
Each class consists of all of the 22 matrices with the same rank. There is
only one rank zero matrix. The other two classes have infinitely many members;
weve shown only the canonical representative.
One nice thing about the representative in Theorem 2.6 is that we can
completely understand the linear map when it is expressed in this way: where
~ 1, . . . ,
~ n i and D = h~1 , . . . , ~m i then the maps action is
the bases are B = h
~ 1 + + ck
~ k + ck+1
~ k+1 + + cn
~ n 7 c1~1 + + ck~k + ~0 + + ~0
c1
where k is the rank. Thus we can view any
c1
.
..
c
k
7
ck+1
..
.
cn B
Exercises
X 2.10 Decide
if these
matrices are
matrix equivalent.
1 3 0
2 2 1
(a)
,
2 3 0
0 5 1
262
0 3
4 0
,
1 1
0 5
1 3
1 3
(c)
,
2 6
2 6
X 2.11 Find the canonical representative of the matrix equivalence class of each matrix.
0 1 0 2
2 1 0
(a)
(b) 1 1 0 4
4 2 0
3 3 3 1
2.12 Suppose that, with respect to
1
1
B = E2
D=h
,
i
1
1
(b)
2 1 1
H = 3 1 0
1 3 2
X 2.16 Use Theorem 2.6 to show that a square matrix is nonsingular if and only if it
is equivalent to an identity matrix.
X 2.17 Show that, where A is a nonsingular square matrix, if P and Q are nonsingular
square matrices such that PAQ = I then QP = A1 .
X 2.18 Why does Theorem 2.6 not show that every matrix is diagonalizable (see
Example 2.2)?
2.19 Must matrix equivalent matrices have matrix equivalent transposes?
2.20 What happens in Theorem 2.6 if k = 0?
2.21 Show that matrix equivalence is an equivalence relation.
263
X 2.22 Show that a zero matrix is alone in its matrix equivalence class. Are there
other matrices like that?
2.23 What are the matrix equivalence classes of matrices of transformations on R1 ?
R3 ?
2.24 How many matrix equivalence classes are there?
2.25 Are matrix equivalence classes closed under scalar multiplication? Addition?
2.26 Let t : Rn Rn represented by T with respect to En , En .
(a) Find RepB,B (t) in this specific case.
1 1
1
1
T=
B=h
,
i
3 1
2
1
~ 1, . . . ,
~ n i.
(b) Describe RepB,B (t) in the general case where B = h
2.27 (a) Let V have bases B1 and B2 and suppose that W has the basis D. Where
h : V W, find the formula that computes RepB2 ,D (h) from RepB1 ,D (h).
(b) Repeat the prior question with one basis for V and two bases for W.
2.28 (a) If two matrices are matrix equivalent and invertible, must their inverses
be matrix equivalent?
(b) If two matrices have matrix equivalent inverses, must the two be matrix
equivalent?
(c) If two matrices are square and matrix equivalent, must their squares be matrix
equivalent?
(d) If two matrices are square and have matrix equivalent squares, must they be
matrix equivalent?
X 2.29 Square matrices are similar if they represent the same transformation, but
each with respect to the same ending as starting basis. That is, RepB1 ,B1 (t) is
similar to RepB2 ,B2 (t).
(a) Give a definition of matrix similarity like that of Definition 2.3.
(b) Prove that similar matrices are matrix equivalent.
(c) Show that similarity is an equivalence relation.
(d) Show that if T is similar to T then T 2 is similar to T 2 , the cubes are similar,
etc. Contrast with the prior exercise.
(e) Prove that there are matrix equivalent matrices that are not similar.
264
VI
Projection
This section is optional. It is a prerequisite only for the final two sections
of Chapter Five, and some Topics.
We have described projection from R3 into its xy-plane subspace as a shadow
map. This shows why but it also shows that some shadows fall upward.
1
2
2
1
2
1
So perhaps a better description is: the projection of ~v is the vector ~p in the plane
with the property that someone standing on ~p and looking straight up or down
that is, looking orthogonally to the plane sees the tip of ~v. In this section we
will generalize this to other projections, orthogonal and non-orthogonal.
VI.1
Since the line is the span of some vector ` = {c ~s | c R}, we have a coefficient
c~p with the property that ~v c~p~s is orthogonal to c~p~s.
~v
~v c~p ~s
c~p ~s
265
2 1
3 2
8
1
8/5
1
=
=
2
16/5
2
5
1 1
2 2
266
1.4 Example A railroad car left on an east-west track without its brake is pushed
by a wind blowing toward the northeast at fifteen miles per hour; what speed
will the car reach?
For the wind we use a vector of length 15 that points toward the northeast.
!
p
15p1/2
~v =
15 1/2
The car is only affected by the part of the wind blowing in the east-west
direction the part of ~v in the direction of the x-axis is this (the picture has
the same perspective as the railroad car picture above).
north
~p =
east
15
1/2
0
Thus, another way to think of the picture that precedes the definition is that
it shows ~v as decomposed into two parts, the part ~p with the line, and the part
that is orthogonal to the line (shown above on the north-south axis). These
two are non-interacting in the sense that the east-west car is not at all affected
by the north-south part of the wind (see Exercise 10). So we can think of the
orthogonal projection of ~v into the line spanned by ~s as the part of ~v that lies
in the direction of ~s.
Still another useful way to think of orthogonal projection into a line is to
have the person stand on the vector, not the line. This person holds a rope
looped over the line. As they pull, the loop slides on the line.
When it is tight, the rope is orthogonal to the line. That is, we can think of the
projection ~p as being the vector in the line that is closest to ~v (see Exercise 16).
1.5 Example A submarine is tracking a ship moving along the line y = 3x + 2.
Torpedo range is one-half mile. If the sub stays where it is, at the origin on the
chart below, will the ship pass within range?
267
north
east
The formula for projection into a line does not immediately apply because the
line doesnt pass through the origin, and so isnt the span of any ~s. To adjust
for this, we start by shifting the entire map down two units. Now the line is
y = 3x, a subspace. We project to get the point ~p on the line closest to
!
0
~v =
2
the subs shifted position.
~p =
!
!
0
1
2
3
!
!
1
1
3
3
1
3
!
=
3/5
9/5
The distance between ~v and ~p is about 0.63 miles. The ship will never be in
range.
Exercises
X 1.6 Project the first vector orthogonally into the line spanned by the second vector.
1
1
1
3
2
3
2
3
(a)
,
(b)
,
(c) 1, 2
(d) 1, 3
1
2
1
0
4
1
4
12
X 1.7 Project the vector orthogonally into the line.
2
3
1
(a) 1 , { c 1 | c R }
(b)
, the line y = 3x
1
4
3
1.8 Although pictures guided our development of Definition 1.1, we are not restricted
to spaces that we can draw. In R4 project this vector into this line.
1
1
2
1
~v =
` = {c
1
1 | c R }
3
X 1.9 Definition 1.1 uses two vectors ~s and ~v. Consider the transformation of R2
resulting from fixing
3
~s =
1
and projecting ~v into the line that is the span of ~s. Apply it to these vectors.
268
VI.2
269
Gram-Schmidt Orthogonalization
The prior subsection suggests that projecting ~v into the line spanned by ~s
decomposes that vector into two parts
~v
~v proj[~s] (~p)
proj[~s] (~p)
that are orthogonal and so are non-interacting. We now develop that suggestion.
2.1 Definition Vectors ~v1 , . . . ,~vk Rn are mutually orthogonal when any two
are orthogonal: if i 6= j then the dot product ~vi ~vj is zero.
2.2 Theorem If the vectors in a set {~v1 , . . . ,~vk } Rn are mutually orthogonal
and nonzero then that set is linearly independent.
Proof Consider ~0 = c1~v1 + c2~v2 + + ck~vk . For i {1, .. , k }, taking the dot
product of ~vi with both sides of the equation ~vi (c1~v1 +c2~v2 + +ck~vk ) = ~vi ~0,
which gives ci (~vi ~vi ) = 0, shows that ci = 0 since ~vi 6= ~0.
QED
2.3 Corollary In a k dimensional vector space, if the vectors in a size k set are
mutually orthogonal and nonzero then that set is a basis for the space.
Proof Any linearly independent size k subset of a k dimensional space is a
basis.
QED
Of course, the converse of Corollary 2.3 does not hold not every basis of
every subspace of Rn has mutually orthogonal vectors. However, we can get
the partial converse that for every subspace of Rn there is at least one basis
consisting of mutually orthogonal vectors.
~ 1 and
~ 2 of this basis for R2 are not orthogonal.
2.4 Example The members
4
1
B=h
,
i
2
3
~2
~1
We will derive from B a new basis for the space h~1 , ~2 i consisting of mutually
~ 1.
orthogonal vectors. The first member of the new basis is just
!
4
~1 =
2
270
~2 =
1
1
1
2
1
proj[~1 ] (
)=
=
3
3
3
1
2
~2
2/3
2/3
0
0
0
~2 = 2 proj[~1 ] (2) = 2 2/3 = 4/3
2/3
0
0
2/3
0
~ 3 the part in the direction of ~1 and also the part
Find ~3 by subtracting from
in the direction of ~2 .
1
1
1
1
~3 = 0 proj[~1 ] (0) proj[~2 ] (0) = 0
3
3
3
1
As above, the corollary gives that the result is a basis for R3 .
1
2/3
1
h1 , 4/3 , 0 i
1
2/3
1
271
~ 1, . . .
~ k i is a basis for a sub2.7 Theorem (Gram-Schmidt orthogonalization) If h
n
space of R then the vectors
~1
~1 =
~ 2 proj[~ ] (
~ 2)
~2 =
1
~ 3 proj[~ ] (
~ 3 ) proj[~ ] (
~ 3)
~3 =
1
2
..
.
~ k proj[~ ] (
~ k ) proj[~ ] (
~ k)
~k =
1
k1
form an orthogonal basis for the same subspace.
2.8 Remark This is restricted to Rn only because we have not given a definition
of orthogonality for other spaces.
Proof We will use induction to check that each ~
i is nonzero, is in the span of
~ 1, . . .
~ i i, and is orthogonal to all preceding vectors ~1 ~i = = ~i1 ~i = 0.
h
Then Corollary 2.3 gives that h~1 , . . . ~k i is a basis for the same space as is the
starting basis.
We shall only cover the cases up to i = 3, to give the sense of the argument.
The full argument is Exercise 25.
~ 1 makes it a nonzero vector since
The i = 1 case is trivial; taking ~1 to be
~ 1 is a member of a basis, it is obviously in the span of h
~ 1 i, and the orthogonal
~1 ~1
~1 ~1
shows that ~2 6= ~0 or else this would be a non-trivial linear dependence among
~ (it is nontrivial because the coefficient of
~ 2 is 1). It also shows that ~2
the s
~
~
is in the span of h1 , 2 i. And, ~2 is orthogonal to the only preceding vector
~ 2 proj[~ ] (
~ 2 )) = 0
~1 ~2 = ~1 (
1
because this projection is orthogonal.
The i = 3 case is the same as the i = 2 case except for one detail. As in the
i = 2 case, expand the definition.
~ 3 ~1
~ 3 ~2
~1
~2
~1 ~1
~2 ~2
~
~
~
~ 3 3 ~1
~ 1 3 ~2
~ 2 2 ~1
~1
=
~1 ~1
~2 ~2
~1 ~1
~3
~3 =
272
~ 1,
~ 2 ] and therefore by the
~ 3 isnt in the span [
By the first line ~3 6= ~0, since
inductive hypothesis it isnt in the span [~1 , ~2 ]. By the second line ~3 is in
~
the span of the first three s.
Finally, the calculation below shows that ~3 is
orthogonal to ~1 .
~ 3 proj[~ ] (
~ 3 ) proj[~ ] (
~ 3)
~1 ~3 = ~1
1
2
~ 3 proj[~ ] (
~ 3 ) ~1 proj[~ ] (
~ 3)
= ~1
1
=0
(Here is the difference with the i = 2 case: as happened for i = 2 the first term
is 0 because this projection is orthogonal, but here the second term in the second
line is 0 because ~1 is orthogonal to ~2 and so is orthogonal to any vector in
the line spanned by ~2 .) A similar check shows that ~3 is also orthogonal to ~2 .
QED
h1/ 3 , 2/ 6 , 0 i
1/ 2
1/ 3
1/ 6
Besides its intuitive appeal, and its analogy with the standard basis En for
Rn , an orthonormal basis also simplifies some computations. Exercise 19 is an
example.
Exercises
2
2.10 Perform
of
these
the
Gram-Schmidt
process
on each
bases
for R .
1
2
0
1
0
1
(a) h
,
i
(b) h
,
i
(c) h
,
i
1
1
1
3
1
0
Then turn those orthogonal bases into orthonormal bases.
X 2.11 Perform the Gram-Schmidt process on each of these bases for R3 .
2
1
0
1
0
2
(a) h2 , 0 , 3i
(b) h1 , 1 , 3i
2
1
1
0
0
1
Then turn those orthogonal bases into orthonormal bases.
X 2.12 Find an orthonormal basis for this subspace of R3 : the plane x y + z = 0.
2.13 Find an orthonormal basis for this subspace of R4 .
x
y
{
z | x y z + w = 0 and x + z = 0 }
w
273
2.14 Show that any linearly independent subset of Rn can be orthogonalized without
changing its span.
2.15 What happens if we try to apply the Gram-Schmidt process to a finite set that
is not a basis?
X 2.16 What happens if we apply the Gram-Schmidt process to a basis that is already
orthogonal?
2.17 Let h~1 , . . . , ~k i be a set of mutually orthogonal vectors in Rn .
(a) Prove that for any ~v in the space, the vector ~v (proj[~1 ] (~v ) + + proj[~vk ] (~v ))
is orthogonal to each of ~1 , . . . , ~k .
(b) Illustrate the prior item in R3 by using ~e1 as ~1 , using ~e2 as ~2 , and taking ~v
to have components 1, 2, and 3.
(c) Show that proj[~1 ] (~v ) + + proj[~vk ] (~v ) is the vector in the span of the set of
~s that is closest to ~v. Hint. To the illustration done for the prior part, add a
vector d1~1 + d2~2 and apply the Pythagorean Theorem to the resulting triangle.
2.18 Find a nonzero vector in R3 that is orthogonal to both of these.
1
2
5
2
1
0
X 2.19 One advantage of orthogonal bases is that they simplify finding the representation of a vector with respect to that basis.
(a) For this vector and this non-orthogonal basis for R2
2
1
1
~v =
B=h
,
i
3
1
0
first represent the vector with respect to the basis. Then project the vector into
~ 1 ] and [
~ 2 ].
the span of each basis vector [
2
(b) With this orthogonal basis for R
1
1
K=h
,
i
1
1
represent the same vector ~v with respect to the basis. Then project the vector
into the span of each basis vector. Note that the coefficients in the representation
and the projection are the same.
(c) Let K = h~1 , . . . , ~k i be an orthogonal basis for some subspace of Rn . Prove
that for any ~v in the subspace, the i-th component of the representation RepK (~v )
is the scalar coefficient (~v ~i )/(~i ~i ) from proj[~i ] (~v ).
(d) Prove that ~v = proj[~1 ] (~v ) + + proj[~k ] (~v ).
2.20 Bessels Inequality. Consider these orthonormal sets
B1 = {~e1 }
B2 = {~e1 , ~e2 }
274
VI.3
This subsection uses material from the optional earlier subsection on Combining Subspaces.
The prior subsections project a vector into a line by decomposing it into two
parts: the part in the line proj[~s ] (~v ) and the rest ~v proj[~s ] (~v ). To generalize
projection to arbitrary subspaces we will follow this decomposition idea.
3.1 Definition Let a vector space be a direct sum V = M N. Then for any
~v V with ~v = m
~ +n
~ where m
~ M, n
~ N, the projection of ~v into M along
N is projM,N (~v ) = m.
~
This definition applies in spaces where we dont have a ready definition
of orthogonal. (Definitions of orthogonality for spaces other than the Rn are
perfectly possible but we havent seen any in this book.)
275
3.2 Example The space M22 of 22 matrices is the direct sum of these two.
a
M={
0
b
0
0
N={
c
| a, b R }
!
0
| c, d R}
d
To project
A=
3
0
1
4
!
0
i
1
Their concatenation
B = BM
BN
1
=h
0
!
0
0
,
0
0
!
1
0
,
0
1
!
0
0
,
0
0
!
0
i
1
is a basis for the entire space because M22 is the direct sum. So we can use it
to represent A.
!
!
!
!
!
3 1
1 0
0 1
0 0
0 0
=3
+1
+0
+4
0 4
0 0
0 0
1 0
0 1
The projection of A into M along N keeps the M part and drops the N part.
!
!
!
!
3 1
1 0
0 1
3 1
projM,N (
)=3
+1
=
0 4
0 0
0 0
0 0
3.3 Example Both subscripts on projM,N (~v ) are significant. The first subscript
M matters because the result of the projection is a member of M. For an
example showing that the second one matters, fix this plane subspace of R3 and
its basis.
x
1
0
M = { y | y 2z = 0 }
BM = h0 , 2i
z
0
1
We will compare the projections of this element of R3
2
~v = 2
5
276
N = {k 0 | k R}
N = {k 1 | k R }
1
2
BN
0
= h 1 i
2
_
(8/5)
+
(9/5)
=
2
2
0
2
1
2
5
0
1
part.
and omit the N
2
0
1
projM,N
v ) = 2 0 + (9/5) 2 = 18/5
(~
9/5
0
1
So projecting along different subspaces can give different results.
These pictures compare the two maps. Both show that the projection is
indeed into the plane and along the line.
N
N
M
277
Notice that the projection along N is not orthogonal since there are members
of the plane M that are not orthogonal to the dotted line. But the projection
is orthogonal.
along N
We have seen two projection operations, orthogonal projection into a line as
well as this subsectionss projection into an M and along an N, and we naturally
ask whether they are related. The right-hand picture above suggests the answer
orthogonal projection into a line is a special case of this subsections projection;
it is projection along a subspace perpendicular to the line.
N
M
! v
1
3
v 2 =
2
v3
!
0
}
0
278
We are thus left with finding the null space of the map represented by the matrix,
that is, with calculating the solution set of the homogeneous linear system.
3
v1
+ 3v3 = 0
= P = {k 2 | k R }
v2 + 2v3 = 0
1
3.6 Example Where M is the xy-plane subspace of R3 , what is M ? A common
first reaction is that M is the yz-plane but thats not right because some
vectors from the yz-plane are not perpendicular to every vector in the xy-plane.
1
0
1 6 3
0
= arccos(
10+13+02
) 0.94 rad
2 13
Instead M is the z-axis, since proceeding as in the prior example and taking
the natural basis for the xy-plane gives this.
! x
!
x
x
1 0 0
0
M = { y |
} = { y | x = 0 and y = 0 }
y =
0 1 0
0
z
z
z
3.7 Lemma If M is a subspace of Rn then its orthogonal complement M is also
a subspace. The space is the direct sum of the two Rn = M M . And, for
any ~v Rn the vector ~v projM (~v ) is perpendicular to every vector in M.
Proof First, the orthogonal complement M is a subspace of Rn because, as
()
279
~ 1 + + ck
~ k . Since As columns are the s,
~ there is a
of basis vectors c1
k
~c R such that projM (~v ) = A~c. To find ~c note that the vector ~v projM (~v )
is perpendicular to each member of the basis so
~0 = AT ~v A~c = AT~v AT A~c
and solving gives this (showing that AT A is invertible is an exercise).
1 T
~c = AT A
A ~v
Therefore projM (~v ) = A ~c = A(AT A)1 AT ~v, as required.
3.9 Example To orthogonally project this vector into this subspace
1
x
~v = 1
P = { y | x + z = 0 }
1
z
QED
280
first make a matrix whose columns are a basis for the subspace
0 1
A = 1 0
0 1
and then compute.
A A A
1
!
0 1
0
0
1
T
A = 1 0
0 1/2
1
0 1
1/2 0 1/2
= 0
1
0
1/2 0 1/2
1
0
0
1
With the matrix, calculating the orthogonal projection of any vector into P is
easy.
1/2 0 1/2
1
0
projP (~v) = 0
1
0 1 = 1
1/2 0 1/2
1
0
Note, as a check, that this result is indeed in P.
Exercises
X 3.10 Project
the vectors
into
M along N.
x
3
x
(a)
, M={
| x + y = 0 }, N = {
| x 2y = 0 }
2
y
y
1
x
x
(b)
, M={
| x y = 0 }, N = {
| 2x + y = 0 }
2
y
y
3
x
1
(c) 0 , M = { y | x + y = 0 }, N = { c 0 | c R }
1
z
1
X 3.11 Find M .
x
x
(a) M = {
| x + y = 0}
(b) M = {
| 2x + 3y = 0 }
y
y
x
x
(c) M = {
| x y = 0}
(d) M = {~0 }
(e) M = {
| x = 0}
y
y
x
x
(f) M = { y | x + 3y + z = 0 }
(g) M = { y | x = 0 and y + z = 0 }
z
z
3.12 This subsection shows how to project orthogonally in two ways, the method of
Example 3.2 and 3.3, and the method of Theorem 3.8. To compare them, consider
the plane P specified by 3x + 2y z = 0 in R3 .
(a) Find a basis for P.
281
282
~ i) =
t(
~0
i = r + 1, r + 2, . . . , n
where r is the rank of t.
(d) Conclude that every projection is a projection along a subspace.
(e) Also conclude that every projection has a representation
!
I Z
RepB,B (t) =
Z Z
in block partial-identity form.
3.26 A square matrix is symmetric if each i, j entry equals the j, i entry (i.e., if the
matrix equals its transpose). Show that the projection matrix A(AT A)1 AT is
symmetric. [Strang 80] Hint. Find properties of transposes by looking in the index
under transpose.
Topic
Line of Best Fit
This Topic requires the formulas from the subsections on Orthogonal Projection Into a Line and Projection Into a Subspace.
Scientists are often presented with a system that has no solution and they
must find an answer anyway. More precisely, they must find a best answer.
For instance, this is the result of flipping a penny, including some intermediate
numbers.
number of flips
number of heads
30
16
60
34
90
51
Because of the randomness in this experiment we expect that the ratio of heads
to flips will fluctuate around a pennys long-term ratio of 50-50. So the system
for such an experiment likely has no solution, and thats what happened here.
30m = 16
60m = 34
90m = 51
That is, the vector of data that we collected is not in the subspace where theory
has it.
16
30
34 6 {m 60 | m R }
51
90
However, we have to do something so we look for the m that most nearly works.
An orthogonal projection of the data vector into the line subspace gives this
best guess.
16
30
34 60
30
30
51
90
7110
60
60 =
12600
30
30
90
90
60 60
90
90
284
The estimate (m = 7110/12600 0.56) is a bit more than one half, but not
much more than half, so probably the penny is fair enough.
The line with the slope m 0.56 is the line of best fit for this data.
heads
60
30
30
60
90
flips
Minimizing the distance between the given vector and the vector used as the
right-hand side minimizes the total of these vertical lengths, and consequently
we say that the line comes from fitting by least-squares.
This diagram exaggerates the vertical scale by a factor of ten to make the lengths
more visible.
In the above equation the line must pass through (0, 0), because we take it
to be the line whose slope is this coins true proportion of heads to flips. We
can also handle cases where the line need not pass through the origin.
Here is the progression of world record times for the mens mile race
[Oakley & Baker]. In the early 1900s many people wondered when, or if, this
record would fall below the four minute mark. Here are the times that were in
force on January first of each decade through the first half of that century.
year
secs
1870
268.8
1880
264.5
1890
258.4
1900
255.6
1910
255.6
1920
252.6
1930
250.4
1940
246.4
1950
241.4
We can use this to give a circa 1950 prediction of the date for 240 seconds, and
then compare that to the actual date. As with the penny data, these numbers
do not lie in a perfect line. That is, this system does not have an exact solution
for the slope and intercept.
b + 1870m = 268.8
b + 1880m = 264.5
..
.
b + 1950m = 241.4
We find a best approximation by using orthogonal projection.
(Comments on the data. Restricting to the times at the start of each decade
reduces the data entry burden, smooths the data to some extent, and gives much
285
the same result as entering all of the dates and records. There are different
sequences of times from competing standards bodies but the ones here are from
[Wikipedia, Mens Mile]. Weve started the plot at 1870 because at one point
there were two classes of records, called professional and amateur, and after a
while the first class stopped being active so weve followed the second class.)
Write the linear systems matrix of coefficients and also its vector of constants,
the world record times.
1 1870
268.8
1 1880
264.5
A=
~
v
=
..
..
..
.
.
.
1
1950
241.4
The ending result in the subsection on Projection into a Subspace gives the
formula for the the coefficients b and m that make the linear combination of
As columns as close as possible to ~v. Those coefficients are the entries of the
vector (AT A)1 AT ~v.
Sage can do the computation for us.
sage: year = [1870, 1880, 1890, 1900, 1910, 1920, 1930, 1940, 1950]
sage: secs = [268.8, 264.5, 258.4, 255.6, 255.6, 252.6, 250.4, 246.4, 241.4]
sage: var('a, b, t')
(a, b, t)
sage: model(t) = a*t+b
sage: data = zip(year, secs)
sage: fit = find_fit(data, model, solution_dict=True)
sage: model.subs(fit)
t |--> -0.3048333333333295*t + 837.0872222222147
sage: g=points(data)+plot(model.subs(fit),(t,1860,1960),color='red',
....:
figsize=3,fontsize=7,typeset='latex')
sage: g.save("four_minute_mile.pdf")
sage: g
270
265
260
255
250
245
240
1860
1880
1900
1920
1940
1960
The progression makes a surprisingly good line. From the slope and intercept we
predict 1958.73; the actual date of Roger Bannisters record was 1954-May-06.
The final example compares team salaries from US major league baseball
against the number of wins the team had, for the year 2002. In this year the
286
The graph is below. The team in the upper left, who paid little for many
wins, is the Oakland As.
100
90
80
70
60
40
60
80
100
120
287
4 Find the line of best fit for the records for womens mile.
5 Do the lines of best fit for the mens and womens miles cross?
6 (This illustrates that there are data sets for which a linear model is not right,
and that the line of best fit doesnt in that case have any predictive value.) In a
highway restaurant a trucker told me that his boss often sends him by a roundabout
route, using more gas but paying lower bridge tolls. He said that New York State
calibrates the toll for each bridge across the Hudson, playing off the extra gas to
get there from New York City against a lower crossing cost, to encourage people to
go upstate. This table, from [Cost Of Tolls] and [Google Maps], lists for each toll
crossing of the Hudson River, the distance to drive from Times Square in miles
and the cost in US dollars for a passenger car (if a crossings has a one-way toll
then it shows half that number).
Crossing
Lincoln Tunnel
Holland Tunnel
George Washington Bridge
Verrazano-Narrows Bridge
Tappan Zee Bridge
Bear Mountain Bridge
Newburgh-Beacon Bridge
Mid-Hudson Bridge
Kingston-Rhinecliff Bridge
Rip Van Winkle Bridge
Distance
2
7
8
16
27
47
67
82
102
120
Toll
6.00
6.00
6.00
6.50
2.50
1.00
1.00
1.00
1.00
1.00
Find the line of best fit and graph the data to show that the driver was practicing
on my credulity.
7 When the space shuttle Challenger exploded in 1986, one of the criticisms made
of NASAs decision to launch was in the way they did the analysis of number of
O-ring failures versus temperature (O-ring failure caused the explosion). Four
O-ring failures would be fatal. NASA had data from 24 previous flights.
53 75 57 58 63 70 70 66 67 67 67
temp F
failures
3
2
1
1
1
1
1
0
0
0
0
68 69 70 70 72 73 75 76 76 78 79 80
0
0
0
0
0
0
0
0
0
0
0
0
81
0
288
8 This table lists the average distance from the sun to each of the first seven planets,
using Earths average as a unit.
Mercury Venus Earth Mars Jupiter Saturn Uranus
0.39
0.72
1.00
1.52
5.20
9.54
19.2
(a) Plot the number of the planet (Mercury is 1, etc.) versus the distance. Note
that it does not look like a line, and so finding the line of best fit is not fruitful.
(b) It does, however look like an exponential curve. Therefore, plot the number
of the planet versus the logarithm of the distance. Does this look like a line?
(c) The asteroid belt between Mars and Jupiter is what is left of a planet that
broke apart. Renumber so that Jupiter is 6, Saturn is 7, and Uranus is 8, and
plot against the log again. Does this look better?
(d) Use least squares on that data to predict the location of Neptune.
(e) Repeat to predict where Pluto is.
(f) Is the formula accurate for Neptune and Pluto?
This method was used to help discover Neptune (although the second item is
misleading about the history; actually, the discovery of Neptune in position 9
prompted people to look for the missing planet in position 5). See [Gardner, 1970]
Topic
Geometry of Linear Maps
These pairs of pictures contrast the geometric action of the nonlinear maps
f1 (x) = ex and f2 (x) = x2
-5
-5
-5
-5
Each of the four pictures shows the domain R on the left mapped to the codomain
R on the right. Arrows trace where each map sends x = 0, x = 1, x = 2, x = 1,
and x = 2.
290
The nonlinear maps distort the domain in transforming it into the range. For
instance, f1 (1) is further from f1 (2) than it is from f1 (0) this map spreads the
domain out unevenly so that a domain interval near x = 2 is spread apart more
than is a domain interval near x = 0. The linear maps are nicer, more regular,
in that for each map all of the domain spreads by the same factor. The map h1
on the left spreads all intervals apart to be twice as wide while on the right h2
keeps intervals the same length but reverses their orientation, as with the rising
interval from 1 to 2 being transformed to the falling interval from 1 to 2.
The only linear maps from R to R are multiplications by a scalar but in
higher dimensions more can happen. For instance, this linear transformation of
R2 rotates vectors counterclockwise.
x
y
x cos y sin
x sin + y cos
The transformation of R3 that projects vectors into the xz-plane is also not
simply a rescaling.
y70
Despite this additional variety, even in higher dimensions linear maps behave
nicely. Consider a linear h : Rn Rm and use the standard bases to represent
it by a matrix H. Recall that H factors into H = PBQ where P and Q are
nonsingular and B is a partial-identity matrix. Recall also that nonsingular
matrices factor into elementary matrices PBQ = Tn Tn1 Ts BTs1 T1 , which
are matrices that come from the identity I after one Gaussian row operation, so
each T matrix is one of these three kinds
ki
I Mi (k)
i j
Pi,j
ki +j
Ci,j (k)
291
The geometric effect of the linear transformation represented by a partialidentity matrix is projection.
1 0 0
0 1 0
x
x
0 0 0
y
y
z
0
The geometric effect of the Mi (k) matrices is to stretch vectors by a factor
of k along the i-th axis. This map stretches by a factor of 3 along the x-axis.
x
y
3x
y
If 0 6 k < 1 or if k < 0 then the i-th component goes the other way, here to the
left.
x
y
2x
y
x
y
y
x
y
2x + y
In the picture below, the vector ~u with the first component of 1 is affected less
than the vector ~v with the first component of 2. The vector ~u is mapped to a
h(~u) that is only 2 higher than ~u while h(~v) is 4 higher than ~v.
h(~
v)
h(~
u)
u
~
x
y
x
2x + y
7
~
v
292
Any vector with a first component of 1 would be affected in the same way as
~u: it would slide up by 2. And any vector with a first component of 2 would
slide up 4, as was ~v. That is, the transformation represented by Ci,j (k) affects
vectors depending on their i-th component.
Another way to see this point is to consider the action of this map on the
unit square. In the next picture, vectors with a first component of 0, such as the
origin, are not pushed vertically at all but vectors with a positive first component
slide up. Here, all vectors with a first component of 1, the entire right side of
the square, slide to the same extent. In general, vectors on the same vertical
line slide by the same amount, by twice their first component. The resulting
shape has the same base and height as the square (and thus the same area) but
the right angle corners are gone.
x
y
x
2x + y
For contrast, the next picture shows the effect of the map represented by
C2,1 (2). Here vectors are affected according to their second component: yx
slides horizontally by twice y.
x
y
x + 2y
y
In general, for any Ci,j (k), the sliding happens so that vectors with the same
i-th component are slid by the same amount. This kind of map is a shear.
With that we understand the geometric effect of the four types of matrices
on the right-hand side of H = Tn Tn1 Tj BTj1 T1 and so in some sense we
understand the action of any matrix H. Thus, even in higher dimensions the
geometry of linear maps is easy: it is built by putting together a number of
components, each of which acts in a simple way.
We will apply this understanding in two ways. The first way is to prove
something general about the geometry of linear maps. Recall that under a linear
map, the image of a subspace is a subspace and thus the linear transformation
h represented by H maps lines through the origin to lines through the origin.
(The dimension of the image space cannot be greater than the dimension of the
domain space, so a line cant map onto, say, a plane.) We will show that h maps
any line not just one through the origin to a line. The proof is simple: the
partial-identity projection B and the elementary Ti s each turn a line input into
a line output; verifying the four cases is Exercise 6. Therefore their composition
also preserves lines.
293
The second way that we will apply the geometric understanding of linear
maps is to elucidate a point from Calculus. Below is a picture of the action of
the one-variable real function y(x) = x2 + x. As with the nonlinear functions
pictured earlier, the geometric effect of this map is irregular in that at different
domain points it has different effects; for example as the input x goes from 2 to
2, the associated output f(x) at first decreases, then pauses for an instant, and
then increases.
But in Calculus we focus less on the map overall and more on the local effect
of the map. Below we look closely at what this map does near x = 1. The
derivative is dy/dx = 2x + 1 so that near x = 1 we have y 3 x. That is, in
a neighborhood of x = 1, in carrying the domain over this map causes it to grow
by a factor of 3 it is, locally, approximately, a dilation. The picture below
shows this as a small interval in the domain (1 x .. 1 + x) carried over to an
interval in the codomain (2 y .. 2 + y) that is three times as wide.
y=2
x=1
In higher dimensions the core idea is the same but more can happen. For a
function y : Rn Rm and a point ~x Rn , the derivative is defined to be the
linear map h : Rn Rm that best approximates how y changes near y(~x). So
the geometry described above directly applies to the derivative.
We close by remarking how this point of view makes clear an often misunderstood result about derivatives, the Chain Rule. Recall that, under suitable
294
g(f(x))
f(x)
x
295
3 What combination of dilations, flips, skews, and projections produces the map
h : R3 R3 represented with respect to the standard bases by this matrix?
1 2 1
3 6 0
1 2 2
4 Show that any linear transformation of R1 is the map that multiplies by a scalar
x 7 kx.
5 Show that for any permutation (that is, reordering) p of the numbers 1, . . . , n,
the map
x1
xp(1)
x
x
2
p(2)
. 7 .
.
.
.
.
xn
xp(n)
can be done with a composition of maps, each of which only swaps a single pair of
coordinates. Hint: you can use induction on n. (Remark: in the fourth chapter we
will show this and we will also show that the parity of the number of swaps used is
determined by p. That is, although a particular permutation could be expressed in
two different ways with two different numbers of swaps, either both ways use an
even number of swaps, or both use an odd number.)
6 Show that linear maps preserve the linear structures of a space.
(a) Show that for any linear map from Rn to Rm , the image of any line is a line.
The image may be a degenerate line, that is, a single point.
(b) Show that the image of any linear surface is a linear surface. This generalizes
the result that under a linear map the image of a subspace is a subspace.
(c) Linear maps preserve other linear ideas. Show that linear maps preserve
betweeness: if the point B is between A and C then the image of B is between
the image of A and the image of C.
7 Use a picture like the one that appears in the discussion of the Chain Rule to
answer: if a function f : R R has an inverse, whats the relationship between how
the function locally, approximately dilates space, and how its inverse dilates
space (assuming, of course, that it has an inverse)?
Topic
Magic Squares
A Chinese legend tells the story of a flood by the Lo river. People offered
sacrifices to appease the river. Each time a turtle emerged, walked around the
sacrifice, and returned to the water. Fuh-Hi, the founder of Chinese civilization,
interpreted this to mean that the river was still cranky. Fortunately, a child
noticed that on its shell the turtle had the pattern on the left below, which is
today called Lo Shu (river scroll).
4
3
8
9
5
1
2
7
6
The dots make the matrix on the right where the rows, columns, and diagonals
add to 15. Now that the people knew how much to sacrifice, the rivers anger
cooled.
A square matrix is magic if each row, column, and diagonal add to the same
number, the matrixs magic number.
Another magic square appears in the engraving Melencolia I by Drer.
297
16
5
9
4
3
10
6
15
2
11
7
14
13
8
12
1
The middle entries on the bottom row give 1514, the date of the engraving.
The above two squares are arrangements of 1 . . . n2 . They are normal. The
11 square whose sole entry is 1 is normal, Exercise 2 shows that there is no normal 22 magic square, and there are normal magic squares of every other size; see
[Wikipedia, Magic Square]. Finding how many normal magic squares there are of
each size is an unsolved problem; see [Online Encyclopedia of Integer Sequences].
If we dont require that the squares be normal then we can say much more.
Every 11 square is magic, trivially. If the rows, columns, and diagonals of a
22 matrix
!
a b
c d
add to s then a + b = s, c + d = s, a + c = s, b + d = s, a + d = s, and b + c = s.
Exercise 2 shows that this system has the unique solution a = b = c = d = s/2.
So the set of 22 magic squares is a one-dimensional subspace of M22 .
A sum of two same-sized magic squares is magic and a scalar multiple of a
magic square is magic so the set of nn magic squares Mn is a vector space,
a subspace of Mnn . This Topic shows that for n > 3 the dimension of Mn is
n2 n. The set Mn,0 of nn magic squares with magic number 0 is another
subspace and we will verify the formula for its dimension also: n2 2n 1 when
n > 3.
We will first prove that dim Mn = dim Mn,0 + 1. Define the trace of a
matrix to be the sum down its upper-left to lower-right diagonal Tr(M) =
m1,1 + + mn,n . Consider the restriction of the trace to the magic squares
Tr : Mn R. The null space N (Tr) is the set of magic squares with magic
number zero Mn,0 . Observe that the trace is onto because for any r in the
codomain R the nn matrix whose entries are all r/n is a magic square with
magic number r. Theorem Two.II.2.14 says that for any linear map the dimension
298
of the domain equals the dimension of the range space plus the dimension of the
null space, the maps rank plus its nullity. Here the domain is Mn , the range
space is R and the null space is Mn,0 , so we have that dim Mn = 1 + dim Mn,0 .
We will finish by finding the dimension of the vector space Mn,0 . For n = 1
the dimension is clearly 0. Exercise 3 shows that dim Mn,0 is also 0 for n = 2.
That leaves showing that dim Mn,0 = n2 2n 1 for n > 3. The fact that
the squares in this vector space are magic gives us a linear system of restrictions,
and the fact that they have magic number zero makes this system homogeneous:
for instance consider the 33 case. The restriction that the rows, columns, and
diagonals of
a b c
d e f
g h i
add to zero gives this (2n + 2)n2 linear system.
a+b+c
=0
=0
g+h+i=0
+d
+g
=0
+e
+h
=0
c
+f
+i=0
+e
+i=0
c
+e
+g
=0
d+e+f
a
b
a
We will find the dimension of the space by finding the number of free variables
in the linear system.
The matrix of coefficients for the particular cases of n = 3 and n = 4 are
below, with the rows and columns numbered to help in reading the proof. With
2
respect to the standard basis, each represents a linear map h : Rn R2n+2 .
The domain has dimension n2 so if we show that the rank of the matrix is 2n + 1
then we will have what we want, that the dimension of the null space Mn,0 is
n2 (2n + 1).
1 2 3 4 5 6 7 8 9
~1 1 1 1 0 0 0 0 0 0
~2 0 0 0 1 1 1 0 0 0
~3 0 0 0 0 0 0 1 1 1
~4
~5
~6
1
0
0
0
1
0
0
0
1
1
0
0
0
1
0
0
0
1
1
0
0
0
1
0
0
0
1
~7
~8
1
0
0
0
0
1
0
0
1
1
0
0
0
1
0
0
1
0
299
~1
~2
~3
~4
1
1
0
0
0
2
1
0
0
0
3
1
0
0
0
4
1
0
0
0
5
0
1
0
0
6
0
1
0
0
7
0
1
0
0
8
0
1
0
0
9 10 11
0 0 0
0 0 0
1 1 1
0 0 0
12 13 14
0 0 0
0 0 0
1 0 0
0 1 1
15
0
0
0
1
16
0
0
0
1
~5
~6
~7
~8
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
~9
~10
1
0
0
0
0
0
0
1
0
0
1
0
0
1
0
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
We want to show that the rank of the matrix of coefficients, the number of
rows in a maximal linearly independent set, is 2n + 1. The first n rows of the
matrix of coefficients add to the same vector as the second n rows, the vector of
all ones. So a maximal linearly independent must omit at least one row. We will
show that the set of all rows but the first {~2 . . . ~2n+2 } is linearly independent.
So consider this linear relationship.
c2~2 + + c2n~2n + c2n+1~2n+1 + c2n+2~2n+2 = ~0
()
Now it gets messy. Focus on the lower left of the tables. Observe that in the
final two rows, in the first n columns, is a subrow that is all zeros except that it
starts with a one in column 1 and a subrow that is all zeros except that it ends
with a one in column n.
First, with ~1 omitted, both column 1 and column n contain only two ones.
Since the only rows in () with nonzero column 1 entries are rows ~n+1 and
~2n+1 , which have ones, we must have c2n+1 = cn+1 . Likewise considering
the n-th entries of the vectors in () gives that c2n+2 = c2n .
Next consider the columns between those two in the n = 3 table this
includes only column 2 while in the n = 4 table it includes both columns 2
and 3. Each such column has a single one. That is, for each column index
j {2 . . . n 2} the column consists of only zeros except for a one in row n + j,
and hence cn+j = 0.
On to the next block of columns, from n + 1 through 2n. Column n + 1 has
only two ones (because n > 3 the ones in the last two rows do not fall in the first
column of this block). Thus c2 = cn+1 and therefore c2 = c2n+1 . Likewise,
from column 2n we conclude that c2 = c2n and so c2 = c2n+2 .
Because n > 3 there is at least one column between column n + 1 and
column 2n 1. In at least one of those columns a one appears in ~2n+1 . If a one
also appears in that column in ~2n+2 then we have c2 = (c2n+1 + c2n+2 ) since
300
cn+j = 0 for j {2 . . . n 2}. If a one does not appear in that column in ~2n+2
then we have c2 = c2n+1 . In either case c2 = 0, and thus c2n+1 = c2n+2 = 0
and cn+1 = c2n = 0.
If the next block of n-many columns is not the last then similarly conclude
from its first column that c3 = cn+1 = 0.
Keep this up until we reach the last block of columns, those numbered
(n 1)n + 1 through n2 . Because cn+1 = = c2n = 0 column n2 gives that
cn = c2n+1 = 0.
Therefore the rank of the matrix is 2n + 1, as required.
The classic source on normal magic squares is [Ball & Coxeter]. More on the
Lo Shu square is at [Wikipedia, Lo Shu Square]. The proof given here began
with [Ward].
Exercises
1 Let M be a 33 magic square with magic number s.
(a) Prove that the sum of Ms entries is 3s.
(b) Prove that s = 3 m2,2 .
(c) Prove that m2,2 is the average of the entries in its row, its column, and in
each diagonal.
(d) Prove that m2,2 is the median of Ms entries.
2 Solve the system a + b = s, c + d = s, a + c = s, b + d = s, a + d = s, and b + c = s.
3 Show that dim M2,0 = 0.
4 Let the trace function be Tr(M) = m1,1 + + mn,n . Define also the sum down
the other diagonal Tr (M) = m1,n + + mn,1 .
(a) Show that the two functions Tr, Tr : Mnn R are linear.
(b) Show that the function : Mnn R2 given by (M) = (Tr(M), Tr (m)) is
linear.
(c) Generalize the prior item.
5 A square matrix is semimagic if the rows and columns add to the same value,
that is, if we drop the condition on the diagonals.
(a) Show that the set of semimagic squares Hn is a subspace of Mnn .
(b) Show that the set Hn,0 of nn semimagic squares with magic number 0 is
also a subspace of Mnn .
Topic
Markov Chains
Here is a simple game: a player bets on coin tosses, a dollar each time, and the
game ends either when the player has no money or is up to five dollars. If the
player starts with three dollars, what is the chance that the game takes at least
five flips? Twenty-five flips?
At any point, this player has either $0, or $1, . . . , or $5. We say that the
player is in the state s0 , s1 , . . . , or s5 . In the game the player moves from state
to state. For instance, a player now in state s3 has on the next flip a 0.5 chance
of moving to state s2 and a 0.5 chance of moving to s4 . The boundary states
are different; a player never leaves state s0 or state s5 .
Let pi (n) be the probability that the player is in state si after n flips. Then
for instance the probability of being in state s0 after flip n + 1 is p0 (n + 1) =
p0 (n) + 0.5 p1 (n). This equation summarizes.
1.0
0.0
0.0
0.0
0.0
0.0
0.5
0.0
0.5
0.0
0.0
0.0
0.0
0.5
0.0
0.5
0.0
0.0
0.0
0.0
0.5
0.0
0.5
0.0
0.0
0.0
0.0
0.5
0.0
0.5
0.0
p0 (n)
p0 (n + 1)
0.0
p1 (n) p1 (n + 1)
0.0 p2 (n) p2 (n + 1)
0.0 p3 (n) p3 (n + 1)
0.0 p4 (n) p4 (n + 1)
1.0
p5 (n)
0.0,
0.5,
0.0,
0.5,
0.0,
0.0,
0.0,
0.0,
0.5,
0.0,
0.5,
0.0,
0.0,
0.0,
0.0,
0.5,
0.0,
0.5,
0.0],
0.0],
0.0],
0.0],
0.5],
1.0]])
p5 (n + 1)
302
(Two notes: (1) Sage can use various number systems to make the matrix entries
and here we have used Real Double Float, and (2) Sage likes to do matrix
multiplication from the right, as ~v M instead of our usual M~v, so we needed to
take the matrixs transpose.)
These are components of the resulting vectors.
n=0
0
0
0
1
0
0
n=1
0
0
0.5
0
0.5
0
n=2
0
0.25
0
0.5
0
0.25
n=3
0.125
0
0.375
0
0.25
0.25
n=4
0.125
0.187 5
0
0.312 5
0
0.375
n = 24
0.396 00
0.002 76
0
0.004 47
0
0.596 76
This game is not likely to go on for long since the player quickly moves to an
ending state. For instance, after the fourth flip there is already a 0.50 probability
that the game is over.
This is a Markov chain. Each vector is a probability vector, whose entries
are nonnegative real numbers that sum to 1. The matrix is a transition matrix
or stochastic matrix, whose entries are nonnegative reals and whose columns
sum to 1.
A characteristic feature of a Markov chain model is that it is historyless in
that the next state depends only on the current state, not on any prior ones.
Thus, a player who arrives at s2 by starting in state s3 and then going to state s2
has exactly the same chance of moving next to s3 as does a player whose history
was to start in s3 then go to s4 then to s3 and then to s2 .
Here is a Markov chain from sociology. A study ([Macdonald & Ridge],
p. 202) divided occupations in the United Kingdom into three levels: executives
and professionals, supervisors and skilled manual workers, and unskilled workers.
They asked about two thousand men, At what level are you, and at what level
was your father when you were fourteen years old? Here the Markov model
assumption about history may seem reasonable we may guess that while a
parents occupation has a direct influence on the occupation of the child, the
grandparents occupation likely has no such direct influence. This summarizes
the studys conclusions.
303
a middle class worker has a 0.37 chance of being middle class, and a child of a
lower class worker has a 0.27 probability of becoming middle class.
Sage will compute the successive stages of this system (the current class
distribution is ~v0 ).
sage: M = matrix(RDF, [[0.60, 0.29,
....:
[0.26, 0.37,
....:
[0.14, 0.34,
sage: M = M.transpose()
sage: v0 = vector(RDF, [0.12, 0.32,
sage: v0*M
(0.2544, 0.3008, 0.4448)
sage: v0*M^2
(0.31104, 0.297536, 0.391424)
sage: v0*M^3
(0.33553728, 0.2966432, 0.36781952)
0.16],
0.27],
0.57]])
0.56])
Here are the next five generations. They show upward mobility, especially in
the first generation. In particular, lower class shrinks a good bit.
n=0
.12
.32
.56
n=1
.25
.30
.44
n=2
.31
.30
.39
n=3
.34
.30
.37
n=4
.35
.30
.36
n=5
.35
.30
.35
One more example. In professional American baseball there are two leagues,
the American League and the National League. At the end of the annual season
the team winning the American League and the team winning the National
League play the World Series. The winner is the first team to take four games.
That means that a series is in one of twenty-four states: 0-0 (no games won
yet by either team), 1-0 (one game won for the American League team and no
games for the National League team), etc.
Consider a series with a probability p that the American League team wins
each game. We have this.
0
0
0
0 ...
p0-0 (n)
p0-0 (n + 1)
p
0
0
0 . . .
0
0
0 . . . p0-1 (n) p0-1 (n + 1)
p
0
0 . . . p2-0 (n) = p2-0 (n + 1)
0
1p
p
0 . . .
p
0
.
.
.
..
..
..
..
..
..
.
.
.
.
.
.
An especially interesting special case is when the teams are evenly matched,
p = 0.50. This table below lists the resulting components of the n = 0 through
n = 7 vectors.
Note that evenly-matched teams are likely to have a long series there is a
probability of 0.625 that the series goes at least six games.
304
00
10
01
20
11
02
30
21
12
03
40
31
22
13
04
41
32
23
14
42
33
24
43
34
n=2
0
0
0
0.25
0.5
0.25
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=3
0
0
0
0
0
0
0.125
0.375
0.375
0.125
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=4
0
0
0
0
0
0
0
0
0
0
0.062 5
0.25
0.375
0.25
0.062 5
0
0
0
0
0
0
0
0
0
n=5
0
0
0
0
0
0
0
0
0
0
0.062 5
0
0
0
0.062 5
0.125
0.312 5
0.312 5
0.125
0
0
0
0
0
n=6
0
0
0
0
0
0
0
0
0
0
0.062 5
0
0
0
0.062 5
0.125
0
0
0.125
0.156 25
0.312 5
0.156 25
0
0
n=7
0
0
0
0
0
0
0
0
0
0
0.062 5
0
0
0
0.062 5
0.125
0
0
0.125
0.156 25
0
0.156 25
0.156 25
0.156 25
305
3 [Kelton] There has been much interest in whether industries in the United States
are moving from the Northeast and North Central regions to the South and West,
motivated by the warmer climate, by lower wages, and by less unionization. Here
is the transition matrix for large firms in Electric and Electronic Equipment.
NE
NC
S
W
Z
NE
0.787
0
0
0
0.021
NC
0
0.966
0.063
0
0.009
S
0
0.034
0.937
0.074
0.005
W
0.111
0
0
0.612
0.010
Z
0.102
0
0
0.314
0.954
For example, a firm in the Northeast region will be in the West region next
year with probability 0.111. (The Z entry is a birth-death state. For instance,
with probability 0.102 a large Electric and Electronic Equipment firm from the
Northeast will move out of this system next year: go out of business, move abroad,
or move to another category of firm. There is a 0.021 probability that a firm in the
National Census of Manufacturers will move into Electronics, or be created, or
move in from abroad, into the Northeast. Finally, with probability 0.954 a firm
out of the categories will stay out, according to this research.)
(a) Does the Markov model assumption of lack of history seem justified?
(b) Assume that the initial distribution is even, except that the value at Z is 0.9.
Compute the vectors for n = 1 through n = 4.
(c) Suppose that the initial distribution is this.
NE
NC
S
W
Z
0.0000 0.6522 0.3478 0.0000 0.0000
Calculate the distributions for n = 1 through n = 4.
(d) Find the distribution for n = 50 and n = 51. Has the system settled down to
an equilibrium?
4 [Wickens] Here is a model of some kinds of learning The learner starts in an
undecided state sU . Eventually the learner has to decide to do either response A
(that is, end in state sA ) or response B (ending in sB ). However, the learner doesnt
jump right from undecided to sure that A is the correct thing to do (or B). Instead,
the learner spends some time in a tentative-A state, or a tentative-B state,
trying the response out (denoted here tA and tB ). Imagine that once the learner has
decided, it is final, so once in sA or sB , the learner stays there. For the other state
changes, we can posit transitions with probability p in either direction.
(a) Construct the transition matrix.
(b) Take p = 0.25 and take the initial vector to be 1 at sU . Run this for five steps.
What is the chance of ending up at sA ?
(c) Do the same for p = 0.20.
(d) Graph p versus the chance of ending at sA . Is there a threshold value for p,
above which the learner is almost sure not to take longer than five steps?
5 A certain town is in a certain country (this is a hypothetical problem). Each year
ten percent of the town dwellers move to other parts of the country. Each year one
percent of the people from elsewhere move to the town. Assume that there are two
306
6 For the World Series application, use a computer to generate the seven vectors for
p = 0.55 and p = 0.6.
(a) What is the chance of the National League team winning it all, even though
they have only a probability of 0.45 or 0.40 of winning any one game?
(b) Graph the probability p against the chance that the American League team
wins it all. Is there a threshold value a p above which the better team is
essentially ensured of winning?
7 Above we define a transition matrix to have each entry nonnegative and each
column sum to 1.
(a) Check that the three transition matrices shown in this Topic meet these two
conditions. Must any transition matrix do so?
(b) Observe that if A~v0 = ~v1 and A~v1 = ~v2 then A2 is a transition matrix from
~v0 to ~v2 . Show that a power of a transition matrix is also a transition matrix.
(c) Generalize the prior item by proving that the product of two appropriatelysized transition matrices is a transition matrix.
Topic
Orthonormal Matrices
In The Elements, Euclid considers two figures to be the same if they have the
same size and shape. That is, while the triangles below are not equal because
they are not the same set of points, they are, for Euclids purposes, essentially
indistinguishable because we can imagine picking the plane up, sliding it over
and rotating it a bit, although not warping or stretching it, and then putting it
back down, to superimpose the first figure on the second. (Euclid never explicitly
states this principle but he uses it often [Casey].)
P2
P1
Q2
P3
Q1
Q3
In modern terms picking the plane up . . . is taking a map from the plane to
itself. Euclid considers only transformations that may slide or turn the plane but
not bend or stretch it. Accordingly, define a map f : R2 R2 to be distancepreserving or a rigid motion or an isometry if for all points P1 , P2 R2 , the
distance from f(P1 ) to f(P2 ) equals the distance from P1 to P2 . We also define a
plane figure to be a set of points in the plane and we say that two figures are
congruent if there is a distance-preserving map from the plane to itself that
carries one figure onto the other.
Many statements from Euclidean geometry follow easily from these definitions.
Some are: (i) collinearity is invariant under any distance-preserving map (that is,
if P1 , P2 , and P3 are collinear then so are f(P1 ), f(P2 ), and f(P3 )), (ii) betweeness
is invariant under any distance-preserving map (if P2 is between P1 and P3 then
so is f(P2 ) between f(P1 ) and f(P3 )), (iii) the property of being a triangle is
invariant under any distance-preserving map (if a figure is a triangle then the
image of that figure is also a triangle), (iv) and the property of being a circle is
invariant under any distance-preserving map. In 1872, F. Klein suggested that
we can define Euclidean geometry as the study of properties that are invariant
308
under these maps. (This forms part of Kleins Erlanger Program, which proposes
the organizing principle that we can describe each kind of geometry Euclidean,
projective, etc. as the study of the properties that are invariant under some
group of transformations. The word group here means more than just collection
but that lies outside of our scope.)
We can use linear algebra to characterize the distance-preserving maps of
the plane.
To begin, observe that there are distance-preserving transformations of the
plane that are not linear. The obvious example is this translation.
!
!
!
!
x
x
1
x+1
7
+
=
y
y
0
y
However, this example turns out to be the only one, in that if f is distancepreserving and sends ~0 to ~v0 then the map ~v 7 f(~v) ~v0 is linear. That will
follow immediately from this statement: a map t that is distance-preserving
and sends ~0 to itself is linear. To prove this equivalent statement, consider the
standard basis and suppose that
!
!
a
c
t(~e1 ) =
t(~e2 ) =
b
d
for some a, b, c, d R. To show that t is linear we can show that it can be
represented by a matrix, that is, that t acts in this way for all x, y R.
!
!
x
ax + cy
t
~v =
7
()
y
bx + dy
Recall that if we fix three non-collinear points then we can determine any point
by giving its distance from those three. So we can determine any point ~v in the
domain by its distance from ~0, ~e1 , and ~e2 . Similarly, we can determine any point
t(~v) in the codomain by its distance from the three fixed points t(~0), t(~e1 ), and
t(~e2 ) (these three are not collinear because, as mentioned above, collinearity is
invariant and ~0, ~e1 , and ~e2 are not collinear). Because t is distance-preserving
we can say more: for the point ~v in the plane that is determined by being the
distance d0 from ~0, the distance d1 from ~e1 , and the distance d2 from ~e2 , its
image t(~v) must be the unique point in the codomain that is determined by
being d0 from t(~0), d1 from t(~e1 ), and d2 from t(~e2 ). Because of the uniqueness,
checking that the action in () works in the d0 , d1 , and d2 cases
!
!
!
x ~
x
ax
+
cy
dist(
, 0) = dist(t(
), t(~0)) = dist(
, ~0)
y
bx + dy
y
309
310
often use this term to mean not just that the columns are orthogonal but also
that they have length one).
We can leverage this characterization to understand the geometric actions
of distance-preserving maps. Because kt(~v )k = k~v k, the map t sends any ~v
somewhere on the circle about the origin that has radius equal to the length of
~v. In particular, ~e1 and ~e2 map to the unit circle. Whats more, once we fix the
unit vector ~e1 as mapped to the vector with components a and b then there
are only two places where ~e2 can go if its image is to be perpendicular to the
first vectors image: it can map either to one where ~e2 maintains its position a
quarter circle clockwise from ~e1
b
a
a
b
a
b
b
a
a
b
b
a
b
a
The geometric description of these two cases is easy. Let be the counterclockwise angle between the x-axis and the image of ~e1 . The first matrix above
represents, with respect to the standard bases, a rotation of the plane by
radians.
b
a
a
b
x
y
!
t
!
x cos y sin
x sin + y cos
The second matrix above represents a reflection of the plane through the line
bisecting the angle between ~e1 and t(~e1 ).
311
a
b
x
y
b
a
!
t
!
x cos + y sin
x sin y cos
(This picture shows ~e1 reflected up into the first quadrant and ~e2 reflected down
into the fourth quadrant.)
Note: in the domain the angle between ~e1 and ~e2 runs counterclockwise, and
in the first map above the angle from t(~e1 ) to t(~e2 ) is also counterclockwise,
so it preserves the orientation of the angle. But the second map reverses the
orientation. A distance-preserving map is direct if it preserves orientations and
opposite if it reverses orientation.
With that, we have characterized the Euclidean study of congruence. It
considers, for plane figures, the properties that are invariant under combinations
of (i) a rotation followed by a translation, or (ii) a reflection followed by a
translation (a reflection followed by a non-trivial translation is a glide reflection).
Another idea encountered in elementary geometry, besides congruence of
figures, is that figures are similar if they are congruent after a change of scale.
The two triangles below are similar since the second is the same shape as the
first but 3/2-ths the size.
P2
P1
Q2
Q1
P3
Q3
From the above work we have that figures are similar if there is an orthonormal
matrix T such that the points ~q on one figure are the images of the points ~p on
the other figure by ~q = (kT )~v + ~p0 for some nonzero real number k and constant
vector ~p0 .
Although these ideas are from Euclid, mathematics is timeless and they are
still in use today. One application of the maps studied above is in computer
graphics. We can, for example, animate this top view of a cube by putting
together film frames of it rotating; thats a rigid motion.
Frame 1
Frame 2
Frame 3
312
We could also make the cube appear to be moving away from us by producing
film frames of it shrinking, which gives us figures that are similar.
Frame 1:
Frame 2:
Frame 3:
1/
3
2/ 3
(c)
2/ 3
1/ 3
2 Write down the formula for each of these distance-preserving maps.
(a) the map that rotates /6 radians, and then translates by ~e2
(b) the map that reflects about the line y = 2x
(c) the map that reflects about y = 2x and translates over 1 and up 1
3 (a) The proof that a map that is distance-preserving and sends the zero vector
to itself incidentally shows that such a map is one-to-one and onto (the point
in the domain determined by d0 , d1 , and d2 corresponds to the point in the
codomain determined by those three). Therefore any distance-preserving map
has an inverse. Show that the inverse is also distance-preserving.
(b) Prove that congruence is an equivalence relation between plane figures.
4 In practice the matrix for the distance-preserving linear transformation and the
translation are often combined into one. Check that these two computations yield
the same first two components.
a c e
x
a c
x
e
b d f y
+
b d
y
f
0 0 1
1
(These are homogeneous coordinates; see the Topic on Projective Geometry).
5 (a) Verify that the properties described in the second paragraph of this Topic as
invariant under distance-preserving maps are indeed so.
(b) Give two more properties that are of interest in Euclidean geometry from
your experience in studying that subject that are also invariant under distancepreserving maps.
(c) Give a property that is not of interest in Euclidean geometry and is not
invariant under distance-preserving maps.
Chapter Four
Determinants
In the first chapter we highlighted the special case of linear systems with the
same number of equations as unknowns, those of the form T~x = ~b where T is a
square matrix. We noted that there are only two kinds of T s. If T is associated
with a unique solution for any ~b, such as for the homogeneous system T~x = ~0,
then T is associated with a unique solution for every such ~b. We call such a
matrix nonsingular. The other kind of T , where every linear system for which it
is the matrix of coefficients has either no solution or infinitely many solutions,
we call singular.
In our work since then this distinction has been a theme. For instance, we
now know that an nn matrix T is nonsingular if and only if each of these holds:
any system T~x = ~b has a solution and that solution is unique;
Gauss-Jordan reduction of T yields an identity matrix;
the rows of T form a linearly independent set;
the columns of T form a linearly independent set, a basis for Rn ;
any map that T represents is an isomorphism;
an inverse matrix T 1 exists.
So when we look at a square matrix, one of the first things that we ask is whether
it is nonsingular.
This chapter develops a formula that determines whether T is nonsingular.
More precisely, we will develop a formula for 11 matrices, one for 22 matrices,
etc. These are naturally related; that is, we will develop a family of formulas, a
scheme that describes the formula for each size.
Since we will restrict the discussion to square matrices, in this chapter we
will often simply say matrix in place of square matrix.
314
Definition
a b c
I.1
Exploration
h
But these observations are perhaps more puzzling than enlightening. For instance,
we might wonder why some terms are added but some are subtracted.
Section I. Definition
315
ki +j
!
b
) = a(kb + d) (ka + c)b = ad bc
kb + d
det(kg + d
g
b
kh + e
h
316
g h
det(d e
a b
inside of a 33 matrix
also does not give the same determinant as before the swap since again there is
a sign change. Trying a different 33 swap 1 2
d e f
a
b
c
det( d
e
f ) = ae(ki) + bf(kg) + cd(kh)
(kh)fa (ki)db (kg)ec
kg kh ki
= k (aei + bfg + cdh hfa idb gec)
as do the other two 33 cases. These make us suspect that multiplying a row
by k multiplies the determinant by k. As before, this modifies our plan but does
not wreck it. We are asking only that the zero-ness of the determinant formula
be unchanged, not focusing on the its sign or magnitude.
So in this exploration out plan got modified in some inessential ways and is
now: we will look for nn determinant functions that remain unchanged under
Section I. Definition
317
the operation of row combination, that change sign on a row swap, that rescale
on the rescaling of a row, and such that the determinant of an echelon form
matrix is the product down the diagonal. In the next two subsections we will
see that for each n there is one and only one such function.
Finally, for the next subsection note that factoring out scalars is a row-wise
operation: here
3 3
9
1 1
3
det(2 1
1 ) = 3 det(2 1
1 )
5 11 5
5 11 5
the 3 comes only out of the top row only, leaving the other rows unchanged.
Consequently in the definition of determinant we will write it as a function of
the rows det(~1 , ~2 , . . . ~n ), rather than as det(T ) or as a function of the entries
det(t1,1 , . . . , tn,n ).
Exercises
X 1.1 Evaluate the determinant
of each.
2 0 1
4 0 1
3 1
(a)
(b) 3 1 1
(c) 0 0 1
1 1
1 0 1
1 3 1
1.2 Evaluate the determinant
of
each.
2 1
1
2 3 4
2 0
(a)
(b) 0 5 2
(c) 5 6 7
1 3
1 3 4
8 9 1
X 1.3 Verify that the determinant of an upper-triangular 33 matrix is the product
down the diagonal.
a b c
det( 0 e f ) = aei
0 0 i
Do lower-triangular matrices work the same way?
X 1.4 Usethe determinant
or nonsingular.
to decide
if each
is singular
2 1
0 1
4 2
(a)
(b)
(c)
3 1
1 1
2 1
1.5 Singular or nonsingular? Use the determinant to decide.
2 1 1
1 0 1
2 1 0
(a) 3 2 2
(b) 2 1 1
(c) 3 2 0
0 1 4
4 1 3
1 0 0
X 1.6 Each pair of matrices differ by one row operation. Use this operation to compare
det(A) with det(B).
1 2
1 2
(a) A =
B=
2 3
0 1
3 1 0
3 1 0
(b) A = 0 0 1 B = 0 1 2
0 1 2
0 0 1
318
1 1
(c) A = 2 2
1 0
1.7 Show this.
3
1
6 B = 1
4
1
1
1
0
3
3
4
1
1
1
det( a
b
c ) = (b a)(c a)(c b)
a2 b2 c2
X 1.8 Which real numbers x make this matrix singular?
12 x
4
8
8x
1.9 Do the Gaussian reduction to check the formula for 33 matrices stated in the
preamble
section.
to this
a b c
d e f is nonsingular iff aei + bfg + cdh hfa idb gec 6= 0
g h i
1.10 Show that the equation of a line in R2 thru (x1 , y1 ) and (x2 , y2 ) is given by
this determinant.
x
y 1
x1 6= x2
det(x1 y1 1) = 0
x2 y2 1
X 1.11 Many people know this mnemonic for the determinant of a 33 matrix: first
repeat the first two columns and then sum the products on the forward diagonals
and subtract the products on the backward diagonals. That is, first write
y1
~y = y2
x3
y3
is the vector computed as this determinant.
Section I. Definition
319
(b) If T is invertible then the determinant of the inverse is the inverse of the
determinant det(T 1 ) = ( det(T ) )1 .
Matrices T and T 0 are similar if there is a nonsingular matrix P such that T 0 = PT P1 .
(We shall look at this relationship in Chapter Five.) Show that similar 22 matrices
have the same determinant.
X 1.14 Prove that the area of this region in the plane
x2
y2
x1
y1
x2
)
y2
x2
det(
y2
x1
)
y1
1.15 Prove that for 22 matrices, the determinant of a matrix equals the determinant
of its transpose. Does that also hold for 33 matrices?
X 1.16 Is the determinant function linear is det(x T + y S) = x det(T ) + y det(S)?
1.17 Show that if A is 33 then det(c A) = c3 det(A) for any scalar c.
1.18 Which real numbers make
cos
sin
sin
cos
I.2
Properties of Determinants
320
i +j j +i i +j
i
T
swaps rows i and j. We have listed it for consistency with the Gausss Method
presentation in earlier chapters.
2.3 Remark Condition (3) does not have a k 6= 0 restriction, although the Gausss
Method operation of multiplying a row by k does have it. The next result shows
that we do not need that restriction here.
2.4 Lemma A matrix with two identical rows has a determinant of zero. A matrix
with a zero row has a determinant of zero. A matrix is nonsingular if and only
if its determinant is nonzero. The determinant of an echelon form matrix is the
product down its diagonal.
Proof To verify the first sentence swap the two equal rows. The sign of the
determinant changes but the matrix is the same and so its determinant is the
same. Thus the determinant is zero.
For the second sentence multiply the zero row by two. That doubles the determinant but it also leaves the row unchanged, and hence leaves the determinant
unchanged. Thus the determinant must be zero.
Do Gauss-Jordan reduction for the third sentence, T T . By the first
three properties the determinant of T is zero if and only if the determinant of T is
zero (although the two could differ in sign or magnitude). A nonsingular matrix
T Gauss-Jordan reduces to an identity matrix and so has a nonzero determinant.
A singular T reduces to a T with a zero row; by the second sentence of this
lemma its determinant is zero.
The fourth sentence has two cases. If the echelon form matrix is singular
then it has a zero row. Thus it has a zero on its diagonal and the product down
its diagonal is zero. By the third sentence of this result the determinant is zero
and therefore this matrixs determinant equals the product down its diagonal.
Section I. Definition
321
If the echelon form matrix is nonsingular then none of its diagonal entries is
zero. This means that we can divide by those entries and use condition (3) to
get 1s on the diagonal.
t
1 t /t
t1,n
t1,n /t1,1
1,2 1,1
1,1 t1,2
t2,2
t2,n
1
t2,n /t2,2
0
0
= t1,1 t2,2 tn,n
..
..
.
.
0
0
tn,n
1
Then the Jordan half of Gauss-Jordan elimination leaves the identity matrix.
1 0
0
0
0 1
= t1,1 t2,2 tn,n 1
= t1,1 t2,2 tn,n
..
.
0
1
So in this case also, the determinant is the product down the diagonal.
QED
Gausss Method
4
= 10
5
doesnt give a big time savings because the 22 determinant formula is easy.
However, a 33 determinant is often easier to calculate with Gausss Method
than with its formula.
2 2 6 2 2
2 2
6
6
4 4 3 = 0 0 9 = 0 3 5 = 54
0 3 5 0 3 5
0 0 9
2.6 Example
procedure.
1 0
0 1
0 0
0 1
1
1
0
0
3 1
4 0
=
5 0
1 0
0
1
0
0
1
1
0
1
1
3
0
4
=
0
5
0
3
0
1
0
0
1
1
1
0
3
4
= (5) = 5
3
5
322
The prior example illustrates an important point. Although we have not yet
found a 44 determinant formula, if one exists then we know what value it gives
to the matrix if there is a function with properties (1)-(4) then on the above
matrix the function must return 5.
2.7 Lemma For each n, if there is an nn determinant function then it is unique.
Proof Perform Gausss Method on the matrix, keeping track of how the sign
alternates on row swaps and any row-scaling factors, and then multiply down
the diagonal of the echelon form result. By the definition and the lemma, all
nn determinant functions must return this value on the matrix.
QED
The if there is an nn determinant function emphasizes that, although we
can use Gausss Method to compute the only value that a determinant function
could possibly return, we havent yet shown that such a function exists for all n.
The rest of this section does that.
Exercises
For these, assume that an nn determinant function exists for all n.
X 2.8 Use Gausss Method to find each determinant.
1 0 0 1
3 1 2
2 1 1 0
(a) 3 1 0
(b)
0 1 4
1 0 1 0
1 1 1 0
2.9 Use Gausss Methodto find each.
1 1 0
2 1
(a)
(b) 3 0 2
1 1
5 2 2
2.10 For which values of k does this system have a unique solution?
x
+ zw=2
y 2z
=3
x + kz
=4
zw=2
X 2.11 Express each of these in terms of |H|.
h3,1 h3,2 h3,3
(a) h2,1 h2,2 h2,3
h
h1,2 h1,3
1,1
h1,1
h1,2
h1,3
(b) 2h2,1 2h2,2 2h2,3
3h
3h3,2 3h3,3
3,1
h1,1 + h3,1 h1,2 + h3,2 h1,3 + h3,3
(c) h2,1
h2,2
h2,3
5h
5h
5h
3,1
3,2
3,3
X 2.12 Find the determinant of a diagonal matrix.
Section I. Definition
323
2.13 Describe the solution set of a homogeneous linear system if the determinant of
the matrix of coefficients is nonzero.
X 2.14 Show that this determinant is zero.
y + z x + z x + y
x
y
z
1
1
1
2.15 (a) Find the 11, 22, and 33 matrices with i, j entry given by (1)i+j .
(b) Find the determinant of the square matrix with i, j entry (1)i+j .
2.16 (a) Find the 11, 22, and 33 matrices with i, j entry given by i + j.
(b) Find the determinant of the square matrix with i, j entry i + j.
X 2.17 Show that determinant functions are not linear by giving a case where |A + B| 6=
|A| + |B|.
2.18 The second condition in the definition, that row swaps change the sign of a
determinant, is somewhat annoying. It means we have to keep track of the number
of swaps, to compute how the sign alternates. Can we get rid of it? Can we replace
it with the condition that row swaps leave the determinant unchanged? (If so
then we would need new 11, 22, and 33 formulas, but that would be a minor
matter.)
2.19 Prove that the determinant of any triangular matrix, upper or lower, is the
product down its diagonal.
2.20 Refer to the definition of elementary matrices in the Mechanics of Matrix
Multiplication subsection.
(a) What is the determinant of each kind of elementary matrix?
(b) Prove that if E is any elementary matrix then |ES| = |E||S| for any appropriately
sized S.
(c) (This question doesnt involve determinants.) Prove that if T is singular
then a product T S is also singular.
(d) Show that |T S| = |T ||S|.
(e) Show that if T is nonsingular then |T 1 | = |T |1 .
2.21 Prove that the determinant of a product is the product of the determinants
|T S| = |T | |S| in this way. Fix the n n matrix S and consider the function
d : Mnn R given by T 7 |T S|/|S|.
(a) Check that d satisfies condition (1) in the definition of a determinant function.
(b) Check condition (2).
(c) Check condition (3).
(d) Check condition (4).
(e) Conclude the determinant of a product is the product of the determinants.
2.22 A submatrix of a given matrix A is one that we get by deleting some of the
rows and columns of A. Thus, the first matrix here is a submatrix of the second.
3 4
1
3 1
0 9 2
2 5
2 1 5
Prove that for any square matrix, the rank of the matrix is r if and only if r is the
largest integer such that there is an rr submatrix with a nonzero determinant.
324
X 2.23 Prove that a matrix with rational entries has a rational determinant.
? 2.24 [Am. Math. Mon., Feb. 1953] Find the element of likeness in (a) simplifying a
fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping
emeritus professors on campus, (e) putting B, C, D in the determinant
1
a a2 a3
3
2
a
1
a a
.
B a3 1
a
C D a3 1
I.3
3 4
0 2
and the second with one.
!
1 2
1 2
3 4
3
1
4
2
(1/3)1 +2
3
0
4
2/3
Both yield the determinant 2 since in the second one we note that the row
swap changes the sign of the result we get by multiplying down the diagonal.
The fact that we are able to proceed in two ways opens the possibility that the
two give different answers. That is, the way that we have given to compute
determinant values does not plainly eliminate the possibility that there might be,
say, two reductions of some 77 matrix that lead to different determinant values.
In that case we would not have a function, since the definition of a function is
that for each input there must be exactly associated one output. The rest of
this section shows that the definition Definition 2.1 never leads to a conflict.
To do this we will define an alternative way to find the value of a determinant.
(This alternative is less useful in practice because it is slow. But it is very useful
Section I. Definition
325
for theory.) The key idea is that condition (3) of Definition 2.1 shows that the
determinant function is not linear.
3.1 Example With condition (3) scalars come out of each row separately,
4 2
2 1
2 1
=2
=4
2 6
2 6
1 3
not from the entire matrix at once. So, where
!
2 1
A=
1 3
then det(2A) 6= 2 det(A) (instead, det(2A) = 4 det(A)).
Since scalars come out a row at a time we might guess that determinants are
linear a row at a time.
3.2 Definition Let V be a vector space. A map f : V n R is multilinear if
(1) f(~1 , . . . ,~v + w
~ , . . . , ~n ) = f(~1 , . . . ,~v, . . . , ~n ) + f(~1 , . . . , w
~ , . . . , ~n )
(2) f(~1 , . . . , k~v, . . . , ~n ) = k f(~1 , . . . ,~v, . . . , ~n )
for ~v, w
~ V and k R.
3.3 Lemma Determinants are multilinear.
Proof Property (2) here is just Definition 2.1s condition (3) so we need only
verify property (1).
There are two cases. If the set of other rows {~1 , . . . , ~i1 , ~i+1 , . . . , ~n }
is linearly dependent then all three matrices are singular and so all three
determinants are zero and the equality is trivial.
Therefore assume that the set of other rows is linearly independent. We can
~ ~i+1 , . . . , ~n i. Express
make a basis by adding one more vector h~1 , . . . , ~i1 , ,
~v and w
~ with respect to this basis
~ + vi+1~i+1 + + vn~n
~v = v1~1 + + vi1~i1 + vi
~ + wi+1~i+1 + + wn~n
w
~ = w1~1 + + wi1~i1 + wi
and add.
~ + + (vn + wn )~n
~v + w
~ = (v1 + w1 )~1 + + (vi + wi )
Consider the left side of (1) and expand ~v + w
~.
~ + + (vn + wn )~n , . . . , ~n ) ()
det(~1 , . . . , (v1 + w1 )~1 + + (vi + wi )
326
row.
1
3
The result is four determinants. In each row of each of the four there is a single
entry from the original matrix.
3.5 Example In the same way, a 33 determinant separates into a sum of many
simpler determinants. Splitting along the first row produces three determinants
(we have highlighted the zero in the 1, 3 position to set it off visually from the
zeroes that appear as part of the splitting).
2 1 1 2 0 0 0 1 0 0 0 1
4 3 0 = 4 3 0 + 4 3 0 + 4 3 0
2 1 5 2 1 5 2 1 5 2 1 5
Section I. Definition
327
In turn, each of the above splits in three along the second row. Then each of the
nine splits in three along the third row. The result is twenty seven determinants,
such that each row contains a single entry from the starting matrix.
2
= 4
2
0
0
0
0 2
0 + 4
0 0
0
0
1
0 2
0 + 4
0 0
0
0
0
0 2
0 + 0
5 2
0 0
0
0 + + 0 0
0 0
0
0
3
0
1
0
5
0
0
1
0
0
0
0 0
0 3
0 0
1
0
5
0
0
0
1
0
0
0
0
5
For instance, in the first matrix the 2 and the 4 both come from the first column
of the original matrix. In the second matrix the 1 and 5 both come from the
third column. And in the third matrix the 0 and 5 both come from the third
column. Any such matrix is singular because one row is a multiple of the other.
Thus any such determinant is zero, by Lemma 2.4.
With that observation the above expansion of the 33 determinant into the
sum of the twenty seven determinants simplifies to the sum of these six where
the entries from the original matrix come one per row, and also one per column.
2 1
4 3
2 1
1 2
0 = 0
5 0
0
+ 4
0
0
+ 4
0
0
3
0
0 2 0 0
0 + 0 0 0
5 0 1 0
1 0 0 1 0
0 0 + 0 0 0
0 5 2 0 0
0 1 0 0 1
0 0 + 0 3 0
1 0 2 0 0
328
1
0
0
Section I. Definition
329
We denote the row vector that is all 0s except for a 1 in entry j with j so
that the four-wide 2 is (0 1 0 0). Now our notation for permutation matrices
is: with any = h(1), . . . , (n)i associate the matrix whose rows are (1) ,
. . . , (n) . For instance, associated with the 4-permutation = h3, 2, 1, 4i is the
matrix whose rows are the corresponding s.
3
0 0 1 0
0 1 0 0
2
P = =
1 1 0 0 0
4
0 0 0 1
3.10 Example These are the permutation matrices for the 2-permutations listed
in Example 3.8.
!
!
!
!
1
1 0
2
0 1
P1 =
=
P2 =
=
2
0 1
1
1 0
For instance, P2 s first row is 2 (1) = 2 and its second is 2 (2) = 1 .
3.11 Example Consider the 3-permutation 5 = h3, 1, 2i. The permutation matrix
P5 has rows 5 (1) = 3 , 5 (2) = 1 , and 5 (3) = 2 .
P5
3.12 Definition
t
1,1
t2,1
tn,1
= 1
0
0
0
1
0
0
read aloud as, the sum, over all permutations , of terms having the form
t1,(1) t2,(2) tn,(n) |P |.
330
3.13 Example The familiar 22 determinant formula follows from the above
t
1,1 t1,2
= t1,1 t2,2 |P1 | + t1,2 t2,1 |P2 |
t2,1 t2,2
0 1
1 0
= t1,1 t2,2
+ t1,2 t2,1
1 0
0 1
= t1,1 t2,2 t1,2 t2,1
as does the
t
1,1 t1,2
t2,1 t2,2
t3,1 t3,2
33 formula.
t1,3
t2,3 = t1,1 t2,2 t3,3 |P1 | + t1,1 t2,3 t3,2 |P2 | + t1,2 t2,1 t3,3 |P3 |
t3,3
+ t1,2 t2,3 t3,1 |P4 | + t1,3 t2,1 t3,2 |P5 | + t1,3 t2,2 t3,1 |P6 |
= t1,1 t2,2 t3,3 t1,1 t2,3 t3,2 t1,2 t2,1 t3,3
+ t1,2 t2,3 t3,1 + t1,3 t2,1 t3,2 t1,3 t2,2 t3,1
the same determinant, and with two equal rows, and hence a determinant of
zero. Prove the other two in the same way.
QED
We finish this subsection with a summary: determinant functions exist, are
unique, and we know how to compute them. As for what determinants are
about, perhaps these lines [Kemp] help make it memorable.
Section I. Definition
331
Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.
Exercises
This summarizes our notation
i
1
1 (i) 1
2 (i) 2
X 3.17 Compute
the
1 2 3
(a) 4 5 6
7 8 9
X 3.18 Compute these both with Gausss Method and the permutation expansion
formula.
0 1 4
2 1
(a)
(b) 0 2 3
3 1
1 5 1
X 3.19 Use the permutation expansion formula to derive the formula for 33 determinants.
3.20 List all of the 4-permutations.
3.21 A permutation, regarded as a function from the set { 1, .., n } to itself, is one-toone and onto. Therefore, each permutation has an inverse.
(a) Find the inverse of each 2-permutation.
(b) Find the inverse of each 3-permutation.
~ V and k1 , k2 R, this
3.22 Prove that f is multilinear if and only if for all ~v, w
holds.
f(~1 , . . . , k1~v1 + k2~v2 , . . . , ~n ) = k1 f(~1 , . . . ,~v1 , . . . , ~n ) + k2 f(~1 , . . . ,~v2 , . . . , ~n )
3.23 How would determinants change if we changed property (4) of the definition to
read that |I| = 2?
3.24 Verify the second and third statements in Corollary 3.16.
X 3.25 Show that if an nn matrix has a nonzero determinant then we can express
any column vector ~v Rn as a linear combination of the columns of the matrix.
3.26 [Strang 80] True or false: a matrix whose entries are only zeros or ones has a
determinant equal to zero, one, or negative one.
3.27 (a) Show that there are 120 terms in the permutation expansion formula of a
55 matrix.
(b) How many are sure to be zero if the 1, 2 entry is zero?
3.28 How many n-permutations are there?
332
1 2
0
3 4
0
0 0 2
which shows four blocks, the square 22 and 11 ones in the upper left and lower
right, and the zero blocks in the upper right and lower left. Show that if a matrix
is such that we can partition it as
J
Z2
T=
Z1 K
where J and K are square, and Z1 and Z2 are all zeroes, then |T | = |J| |K|.
X 3.34 Prove that for any nn matrix T there are at most n distinct reals r such that
the matrix T rI has determinant zero (we shall use this result in Chapter Five).
? 3.35 [Math. Mag., Jan. 1963, Q307] The nine positive digits can be arranged into
33 arrays in 9! ways. Find the sum of the determinants of these arrays.
3.36 [Math. Mag., Jan. 1963, Q237]
x 2
x + 1
x 4
Show that
x3
x1
x7
x 4
x 3 = 0.
x 10
? 3.37 [Am. Math. Mon., Jan. 1949] Let S be the sum of the integer elements of a
magic square of order three and let D be the value of the square considered as a
determinant. Show that D/S is an integer.
Section I. Definition
333
? 3.38 [Am. Math. Mon., Jun. 1931] Show that the determinant of the n2 elements in
the upper left corner of the Pascal triangle
1
1
1
1
.
.
1
2
3
.
1
3
.
.
1
.
.
.
.
I.4
Determinants Exist
This subsection contains proofs of two results from the prior subsection.
It is optional. We will use the material developed here only in the Jordan
Canonical Form subsection, which is also optional.
We wish to show that for any size n, the determinant function on nn matrices
is well-defined. The prior subsection develops the permutation expansion formula.
t
1,1
t2,1
tn,1
t1,2
t2,2
..
.
tn,2
...
...
...
t1,n
t2,n
= t1, (1) t2, (2) tn, (n) |P |
1
1
1
1
+
t
t
t
1,2 (1) 2,2 (2)
n,2 (n) |P2 |
tn,n
..
.
+ t1,k (1) t2,k (2) tn,k (n) |Pk |
X
=
t1,(1) t2,(2) tn,(n) |P |
permutations
This reduces the problem of showing that the determinant is well-defined to only
showing that the determinant is well-defined on the set of permutation matrices.
A permutation matrix can be row-swapped to the identity matrix. So one
way that we can calculate its determinant is by keeping track of the number of
swaps. However, we still must show that the result is well-defined. Recall what
the difficulty is: the determinant of
0
1
P =
0
0
1
0
0
0
0
0
1
0
0
0
0
1
334
1 2
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
1
0
0
0
0
1
0
0
or with three.
3 1 2 3 1 3
0
0
1
0
0
0
0
1
Both reductions have an odd number of swaps so in this case we figure that
|P | = 1 but if there were some way to do it with an even number of swaps then
we would have the determinant giving two different outputs from a single input.
Below, Corollary 4.5 proves that this cannot happen there is no permutation
matrix that can be row-swapped to an identity matrix in two ways, one with an
even number of swaps and the other with an odd number of swaps.
4.1 Definition In a permutation = h. . . , k, . . . , j, . . .i, elements such that k > j
are in an inversion of their natural order. Similarly, in a permutation matrix
two rows
..
.
k
.
P =
..
j
..
.
such that k > j are in an inversion.
4.2 Example This permutation matrix
1 0 0
0 0 1
0 1 0
0 0 0
0
1
0 3
=
0 2
1
4
3
0 0 1
0 1 0 = 2
1 0 0
1
Section I. Definition
335
..
.
(j)
P =
(k)
..
.
k j
..
.
(k)
(j)
..
.
then since inversions involving rows not in this pair are not affected, the swap
changes the total number of inversions by one, either removing or producing one
inversion depending on whether (j) > (k) or not. Consequently, the total
number of inversions changes from odd to even or from even to odd.
If the rows are not adjacent then we can swap them via a sequence of adjacent
swaps, first bringing row k up
.
.
..
.
.
(j)
(k)
(j)
(j+1)
k k1 k1 k2
j+1 j
(j+1)
(j+2)
..
..
.
.
(k)
(k1)
..
..
.
.
and then bringing row j down.
k1 k
..
.
(k)
(j+1)
(j+2)
..
.
(j)
..
.
Each of these adjacent swaps changes the number of inversions from odd to even
or from even to odd. The total number of swaps (k j) + (k j 1) is odd.
336
Thus, in aggregate, the number of inversions changes from even to odd, or from
odd to even.
QED
4.5 Corollary If a permutation matrix has an odd number of inversions then
swapping it to the identity takes an odd number of swaps. If it has an even
number of inversions then swapping to the identity takes an even number.
Proof The identity matrix has zero inversions. To change an odd number to
zero requires an odd number of swaps, and to change an even number to zero
requires an even number of swaps.
QED
4.6 Example The matrix in Example 4.3 can be brought to the identity with one
swap 1 3 . (So the number of swaps neednt be the same as the number of
inversions, but the oddness or evenness of the two numbers is the same.)
4.7 Definition The signum of a permutation sgn() is 1 if the number of
inversions in is odd and is +1 if the number of inversions is even.
4.8 Example Using the notation
have
1 0
P1 = 0 1
0 0
0
1
P2
= 0
0
0
0
1
1
0
permutations
The advantage of this formula is that the number of inversions is clearly welldefined just count them. Therefore, we will be finished showing that an nn
determinant function exists when we show that this d satisfies the conditions
required of a determinant.
4.9 Lemma The function d above is a determinant. Hence determinants exist
for every n.
Section I. Definition
337
Proof We must check that it has the four conditions from the definition of
all of the terms in the summation are zero except for the one where the permutation is the identity, which gives the product down the diagonal, which is
one.
ki
For condition (3) suppose that T T and consider d(T ).
X
t1,(1) ti,(i) tn,(n) sgn()
perm
i j
For (2) suppose that T T . We must show that d(T ) is the negative
of d(T ).
X
t1,(1) ti,(i) tj,(j) tn,(n) sgn()
d(T ) =
()
perm
We will show that each term in () is associated with a term in d(T ), and that the
two terms are negatives of each other. Consider the matrix from the multilinear
expansion of d(T ) giving the term t1,(1) ti,(i) tj,(j) tn,(n) sgn().
..
.
i,(i)
..
tj,(j)
..
.
It is the result of the i j operation performed on this matrix.
..
.
ti,(j)
..
tj,(i)
..
.
338
That is, the term with hatted ts is associated with this term from the d(T )
expansion: t1,(1) tj,(j) ti,(i) tn,(n) sgn(), where the permutation
equals but with the i-th and j-th numbers interchanged, (i) = (j) and
(j) = (i). The two terms have the same multiplicands t1,(1) = t1,(1) ,
. . . , including the entries from the swapped rows ti,(i) = tj,(i) = tj,(j) and
tj,(j) = ti,(j) = ti,(i) . But the two terms are negatives of each other since
sgn() = sgn() by Lemma 4.4.
Now, any permutation can be derived from some other permutation by
such a swap, in one and only one way. Therefore the summation in () is in fact
a sum over all permutations, taken once and only once.
X
t1,(1) ti,(i) tj,(j) tn,(n) sgn()
d(T ) =
perm
perm
X
+
t1,(1) ti,(i) tj,(j) tn,(n) sgn()
+ d(T )
Consider the terms t1,(1) ti,(i) ti,(j) tn,(n) sgn(). Notice the subscripts; the entry is ti,(j) , not tj,(j) . The sum of these terms is the determinant
of a matrix S that is equal to T except that row j of S is a copy of row i of T ,
that is, S has two equal rows. In the same way that we proved Lemma 2.4 we
Section I. Definition
339
can see that d(S) = 0: a swap of Ss equal rows will change the sign of d(S)
but since the matrix is unchanged by that swap the value of d(S) must also be
unchanged, and so that value must be zero.
QED
We have now proved that determinant functions exist for each size nn. We
already know that for each size there is at most one determinant. Therefore, for
each size there is one and only one determinant function.
We end this subsection by proving the other result remaining from the prior
subsection.
4.10 Theorem The determinant of a matrix equals the determinant of its transpose.
Proof The proof is best understood by doing the general 33 case. That the
0
1
0
0
1
0
1
0
0
1
0
0
0
1
0
0
1
0
1
0
0
0
1
0
0
1
0
1
0
0
Compare first the six products of ts. The ones in the expansion of T are the
same as the ones in the expansion of the transpose; for instance, t1,2 t2,3 t3,1 is
340
in the top and t3,1 t1,2 t2,3 is in the bottom. Thats perfectly sensible the six
in the top arise from all of the ways of picking one entry of T from each row and
column while the six in the bottom are all of the ways of picking one entry of T
from each column and row, so of course they are the same set.
Next observe that in the two expansions, each t-product expression is not
necessarily associated with the same permutation matrix. For instance, on the
top t1,2 t2,3 t3,1 is associated with the matrix for the map 1 7 2, 2 7 3, 3 7 1.
On the bottom t3,1 t1,2 t2,3 is associated with the matrix for the map 1 7 3,
2 7 1, 3 7 2. The second map is inverse to the first. This is also perfectly
sensible both the matrix transpose and the map inverse flip the 1, 2 to 2, 1,
flip the 2, 3 to 3, 2, and flip 3, 1 to 1, 3.
We finish by noting that the determinant of P equals the determinant of
P1 , as Exercise 16shows.
QED
Exercises
These summarize the notation
i
1
1 (i) 1
2 (i) 2
4.11 Give the permutation expansion of a general 22 matrix and its transpose.
X 4.12 This problem appears also in the prior subsection.
(a) Find the inverse of each 2-permutation.
(b) Find the inverse of each 3-permutation.
X 4.13 (a) Find the signum of each 2-permutation.
(b) Find the signum of each 3-permutation.
4.14 Find the only nonzero term in the permutation expansion of this matrix.
0 1 0 0
1 0 1 0
0 1 0 1
0 0 1 0
Compute that determinant by finding the signum of the associated permutation.
4.15 [Strang 80] What is the signum of the n-permutation = hn, n 1, . . . , 2, 1i?
4.16 Prove these.
(a) Every permutation has an inverse.
(b) sgn(1 ) = sgn()
(c) Every permutation is the inverse of another.
4.17 Prove that the matrix of the permutation inverse is the transpose of the matrix
of the permutation P1 = P T , for any permutation .
Section I. Definition
341
X 4.18 Show that a permutation matrix with m inversions can be row swapped to the
identity in m steps. Contrast this with Corollary 4.5.
X 4.19 For any permutation let g() be the integer defined in this way.
Y
g() =
[(j) (i)]
i<j
(This is the product, over all indices i and j with i < j, of terms of the given
form.)
(a) Compute the value of g on all 2-permutations.
(b) Compute the value of g on all 3-permutations.
(c) Prove that g() is not 0.
(d) Prove this.
g()
sgn() =
|g()|
Many authors give this formula as the definition of the signum function.
342
II
Geometry of Determinants
II.1
This parallelogram picture is familiar from the construction of the sum of the
two vectors.
x2
y2
x1
y1
1.1 Definition In Rn the box (or parallelepiped) formed by h~v1 , . . . ,~vn i is the
set {t1~v1 + + tn~vn | t1 , . . . , tn [0 . . . 1] }.
Thus the parallelogram above is the box formed by h yx11 , yx22 i. A three-space
box is shown in Example 1.4.
We can find the area of the above box by drawing an enclosing rectangle and
subtracting away areas not in the box.
y2
y1 C
F
E
x2
x1
area of parallelogram
= area of rectangle area of A area of B
area of F
= (x1 + x2 )(y1 + y2 ) x2 y1 x1 y1 /2
x2 y2 /2 x2 y2 /2 x1 y1 /2 x2 y1
= x1 y2 x2 y1
343
w
~
~
v
k~
v
Shown here is k = 1.4. On the right the rescaled region is in solid lines with the
original region shaded for comparison.
That is, we can reasonably expect that size(. . . , k~v, . . . ) = k size(. . . ,~v, . . . ).
Of course, this condition is one of those in the definition of determinants.
Another property of determinants that should apply to any function measuring the size of a box is that it is unaffected by row combinations. Here are
before-combining and after-combining boxes (the scalar shown is k = 0.35).
k~
v+w
~
w
~
~
v
~
v
344
~v
~v
~u
4
2
1
= 10
3
~u
1
3
4
= 10
2
Swapping the columns changes the sign. On the left, starting with ~u and
following the arc inside the angle to ~v (that is, going counterclockwise), we get
a positive size. On the right, starting at ~v and going to ~u, and so following
the clockwise arc, gives a negative size. The sign returned by the size function
reflects the orientation or sense of the box. (We see the same thing if we
picture the effect of scalar multiplication by a negative scalar.)
1.3 Definition The volume of a box is the absolute value of the determinant of a
matrix with those vectors as columns.
1.4 Example By the formula that takes the area of the base times the height, the
volume of this parallelepiped is 12. That agrees with the determinant.
2 0
0 3
2 1
1
0 = 12
1
We can also compute the volume as the absolute value of this determinant.
0 2 0
3 0 3 = 12
1 2 1
1.5 Theorem A transformation t : Rn Rn changes the size of all boxes by the
same factor, namely, the size of the image of a box |t(S)| is |T | times the size of
the box |S|, where T is the matrix representing t with respect to the standard
basis.
That is, the determinant of a product is the product of the determinants
|T S| = |T | |S|.
The two sentences say the same thing, first in map terms and then in matrix
terms. This is because |t(S)| = |T S|, as both give the size of the box that is
345
the image of the unit box En under the composition t s, where the maps
are represented with respect to the standard basis. We will prove the second
sentence.
Proof First consider the case that T is singular and thus does not have an
2
1
1
=3
2
to this
t(~
v)
t(w)
~
3
4
3
=6
2
1.7 Corollary If a matrix is invertible then the determinant of its inverse is the
inverse of its determinant |T 1 | = 1/|T |.
Proof 1 = |I| = |T T 1 | = |T | |T 1 |
QED
346
Exercises
1.8 Find
the
volume
of the region defined by the vectors.
1
1
(a) h
,
i
3
4
2
3
8
(b) h1 , 2 , 3i
0
4
8
1
2
1
0
2 2 3 1
(c) h
0 , 2 , 0 , 0 i
1
X 1.9 Is
4
1
2
inside of the box formed by these three?
3
2
3 6
1
1
X 1.10 Find the volume of this region.
1
0
5
1 0 1
3 1 1
1
(a) Compute the determinant of the matrix. Does the transformation preserve
orientation or reverse it?
(b) Find the size of the box defined by these vectors. What is its orientation?
1
2
1
1 0 1
2
1
0
(c) Find the images under t of the vectors in the prior item and find the size of
the box that they define. What is the orientation?
1.13 By what factor does each transformation change the size of boxes?
x
xy
2x
x
3x y
x
(a)
7
(b)
7
(c) y 7 x + y + z
y
3y
y
2x + y
z
y 2z
1.14 What is the area of the image of the rectangle [2..4] [2..5] under the action of
this matrix?
2 3
4 1
347
0 1
area is 2
determinant is 2
area is 5
X 1.18 Does |T S| = |ST |? |T (SP)| = |(T S)P|?
1.19 Show that there are no 22 matrices A and B satisfying these.
1 1
2 1
AB =
BA =
2 0
1 1
X
X
X
1.20 (a) Suppose that |A| = 3 and that |B| = 2. Find |A2 BT B2 AT |.
(b) Assume that |A| = 0. Prove that |6A3 + 5A2 + 2A| = 0.
1.21 Let T be the matrix representing (with respect to the standard bases) the map
that rotates plane vectors counterclockwise thru radians. By what factor does T
change sizes?
1.22 Must a transformation t : R2 R2 that preserves areas also preserve lengths?
1.23 What is the volume of a parallelepiped in R3 bounded by a linearly dependent
set?
1.24 Find the area of the triangle in R3 with endpoints (1, 2, 1), (3, 1, 4), and
(2, 2, 2). (Area, not volume. The triangle defines a plane what is the area of the
triangle in that plane?)
1.25 An alternate proof of Theorem 1.5 uses the definition of determinant functions.
(a) Note that the vectors forming S make a linearly dependent set if and only if
|S| = 0, and check that the result holds in this case.
(b) For the |S| 6= 0 case, to show that |T S|/|S| = |T | for all transformations, consider
the function d : Mnn R given by T 7 |T S|/|S|. Show that d has the first
property of a determinant.
(c) Show that d has the remaining three properties of a determinant function.
(d) Conclude that |T S| = |T | |S|.
1.26 Give a non-identity matrix with the property that AT = A1 . Show that if
AT = A1 then |A| = 1. Does the converse hold?
1.27 The algebraic property of determinants that factoring a scalar out of a single
row will multiply the determinant by that scalar shows that where H is 33, the
determinant of cH is c3 times the determinant of H. Explain this geometrically,
that is, using Theorem 1.5. (The observation that increasing the linear size of a
three-dimensional object by a factor of c will increase its volume by a factor of c3
while only increasing its surface area by an amount proportional to a factor of c2
is the Square-cube law [Wikipedia, Square-cube Law].)
1.28 We say that matrices H and G are similar if there is a nonsingular matrix P
such that H = P1 GP (we will study this relation in Chapter Five). Show that
similar matrices have the same determinant.
348
1.29 We usually represent vectors in R2 with respect to the standard basis so vectors
in the first quadrant have both coordinates positive.
~
v
+3
RepE2 (~v) =
+2
Moving counterclockwise around the origin, we cycle thru four regions:
+
+
.
+
+
0
1
,
i
1
0
gives the same counterclockwise cycle. We say these two bases have the same
orientation.
(a) Why do they give the same cycle?
(b) What other configurations of unit vectors on the axes give the same cycle?
(c) Find the determinants of the matrices formed from those (ordered) bases.
(d) What other counterclockwise cycles are possible, and what are the associated
determinants?
(e) What happens in R1 ?
(f) What happens in R3 ?
A fascinating general-audience discussion of orientations is in [Gardner].
1.30 This question uses material from the optional Determinant Functions Exist
subsection. Prove Theorem 1.5 by using the permutation expansion formula for
the determinant.
X 1.31
(b) [Petersen] Prove that the area of a triangle with vertices (x1 , y1 ), (x2 , y2 ), and
(x3 , y3 ) is
x
x2 x3
1 1
y1 y2 y3 .
2
1
1
1
(c) [Math. Mag., Jan. 1973] Prove that the area of a triangle with vertices at
(x1 , y1 ), (x2 , y2 ), and (x3 , y3 ) whose coordinates are integers has an area of N or
N/2 for some positive integer N.
III
349
Laplaces Formula
Determinants are a font of interesting and amusing formulas. Here is one that is
often used to compute determinants by hand.
III.1
Laplaces Expansion
The example shows a 33 case but the approach works for any size n > 1.
1.1 Example Consider the permutation expansion.
t
1,1
t2,1
t3,1
t1,2
t2,2
t3,2
1
t1,3
t2,3 = t1,1 t2,2 t3,3 0
0
t3,3
0
1
0
0
+ t1,2 t2,1 t3,3 1
0
0
+ t1,3 t2,1 t3,2 1
0
1 0
0
0 + t1,1 t2,3 t3,2 0 0
0 1
1
0
1 0
0 0 + t1,2 t2,3 t3,1 0
1
0 1
0
0 1
0 0 + t1,3 t2,2 t3,1 0
1
1 0
0
1
0
1
0
0
0
1
0
0
1
0
1
0
0
Pick a row or column and factor out its entries; here we do the entries in the
first row.
1 0
= t1,1 t2,2 t3,3 0 1
0 0
+ t1,2 t2,1 t3,3 1
0
+ t1,3 t2,1 t3,2 1
0
1 0
0
0 + t2,3 t3,2 0 0
0 1
1
0
1 0
0 0 + t2,3 t3,1 0
1
0 1
0
0 1
0 0 + t2,2 t3,1 0
1
1 0
0
1
0
1 0
0 1
0 0
0 1
1 0
0 0
In those permutation matrices, swap to get the first rows into place. This
requires one swap to each of the permutation matrices on the second line, and
two swaps to each on the third line. (Recall that row swaps change the sign of
350
the determinant.)
1 0
+ t1,3 t2,1 t3,2 0
0
1 0
0
0 + t2,3 t3,2 0 0
0 1
1
1
0 0
1 0 + t2,3 t3,1 0
0
0 1
1
0 0
1 0 + t2,2 t3,1 0
0
0 1
0
1
0
0 0
0 1
1 0
0 0
0 1
1 0
On each line the terms in square brackets involve only the second and third row
and column, and simplify to a 22 determinant.
t
t
t
2,2 t2,3
2,1 t2,3
2,1 t2,2
= t1,1
t1,2
+ t1,3
t3,2 t3,3
t3,1 t3,3
t3,1 t3,2
The formula given in Theorem 1.5, which generalizes this example, is a recurrence the determinant is expressed as a combination of determinants. This
formula isnt circular because it gives the nn case in terms of smaller ones.
1.2 Definition For any nn matrix T , the (n 1)(n 1) matrix formed by
deleting row i and column j of T is the i, j minor of T . The i, j cofactor Ti,j of
T is (1)i+j times the determinant of the i, j minor of T .
1.3 Example The 1, 2 cofactor of the matrix from Example 1.1 is the negative of
the second 22 determinant.
t
2,1 t2,3
T1,2 = 1
t3,1 t3,3
1.4 Example Where
T = 4
7
these are the 1, 2 and 2, 2 cofactors.
4 6
T1,2 = (1)1+2
=6
7 9
2
5
8
T2,2
6
9
1
= (1)2+2
7
3
= 12
9
351
QED
3
= 12 60 + 48 = 0
6
1.7 Example A row or column with many zeroes suggests a Laplace expansion.
1 5 0
1 5
1 5
2 1
= 16
+ 0 (+1)
+ 1 (1)
2 1 1 = 0 (+1)
2 1
3 1
3 1
3 1 0
We finish by applying Laplaces expansion to derive a new formula for the
inverse of a matrix. With Theorem 1.5, we can calculate the determinant of a
matrix by taking linear combinations of entries from a row with their associated
cofactors.
ti,1 Ti,1 + ti,2 Ti,2 + + ti,n Ti,n = |T |
()
Recall that a matrix with two identical rows has a zero determinant. Thus,
weighing the cofactors by entries from row k with k 6= i gives zero
ti,1 Tk,1 + ti,2 Tk,2 + + ti,n Tk,n = 0
because it represents the expansion along the row k of a matrix with
to row k. This summarizes () and ().
t1,1 t1,2 . . . t1,n
T1,1 T2,1 . . . Tn,1
|T | 0
t2,1 t2,2 . . . t2,n T1,2 T2,2 . . . Tn,2 0 |T |
=
..
..
..
.
.
.
tn,1 tn,2 . . . tn,n
T1,n T2,n . . . Tn,n
0
0
()
row i equal
...
...
...
|T |
352
Note that the order of the subscripts in the matrix of cofactors is opposite to
the order of subscripts in the other matrix; e.g., along the first row of the matrix
of cofactors the subscripts are 1, 1 then 2, 1, etc.
1.8 Definition The matrix adjoint to the square matrix T is
adj(T ) =
..
QED
1.10 Example If
T = 2
1
0
1
0
1
1
then adj(T ) is
T1,1
T1,2
T1,3
T2,1
T2,2
T2,3
1
0
T3,1
2
T3,2 =
1
T3,3
2
1
1
1
1
1
1
0
0
0
1
1
1
1
4 0
1 1
1
4
2
1
0
1
2
0
4
1
1
4
= 3
1
1
0
1
0
3
0
1 0 4
1
0 4
3 0
0
2 1 1 3 3 9 = 0 3 0
1 0 1
1 0
1
0
0 3
The inverse of T is (1/ 3) adj(T ).
1/3
0/3 4/3
1/3 0
T 1 = 3/3 3/3 9/3 = 1
1
1/3 0/3
1/3
1/3 0
4/3
3
1/3
9
1
353
The formulas from this subsection are often used for by-hand calculation
and are sometimes useful with special types of matrices. However, for generic
matrices they are not the best choice because they require more arithmetic than,
for instance, the Gauss-Jordan method.
Exercises
X 1.11 Find the cofactor.
1
T = 1
0
(a) T2,3
(b) T3,2
0
1
2
(c) T1,3
2
3
1
1
2
0
2
(a) 1
1
X 1.15 Find the
1 4
1 4 3
3
1
1
1
(b)
(c)
(d) 1 0 3
0 2
2 4
5 0
1 8 9
0 1
inverse of each matrix in the prior question with Theorem 1.9.
2 1
1 2
0 1
0 0
0
1
2
1
0
0
1
2
X 1.17 Expand across the first row to derive the formula for the determinant of a 22
matrix.
X 1.18 Expand across the first row to derive the formula for the determinant of a 33
matrix.
X 1.19 (a) Give a formula for the adjoint of a 22 matrix.
(b) Use it to derive the formula for the inverse.
X 1.20 Can we compute a determinant by expanding down the diagonal?
1.21 Give a formula for the adjoint of a diagonal matrix.
X 1.22 Prove that the transpose of the adjoint is the adjoint of the transpose.
1.23 Prove or disprove: adj(adj(T )) = T .
1.24 A square matrix is upper triangular if each i, j entry is zero in the part above
the diagonal, that is, when i > j.
(a) Must the adjoint of an upper triangular matrix be upper triangular? Lower
triangular?
354
1.25 This question requires material from the optional Determinants Exist subsection. Prove Theorem 1.5 by using the permutation expansion.
1.26 Prove that the determinant of a matrix equals the determinant of its transpose
using Laplaces expansion and induction on the size of the matrix.
? 1.27 Show that
1
1
Fn = 0
0
.
1
1
1
0
.
1
0
1
1
.
1
1
0
1
.
1
0
1
0
.
1
1
0
1
.
. . .
. . .
. . .
. . .
. . .
Topic
Cramers Rule
A linear system is equivalent to a linear relationship among vectors.
!
!
!
x1 + 2x2 = 6
1
2
6
x1
+ x2
=
3x1 + x2 = 8
3
1
8
In the picture below the small parallelogram is formed from sides that are
the vectors 13 and 21 . It is nested inside a parallelogram with sides x1 13
and x2 21 . By the vector equation, the far corner of the larger parallelogram is
6
8 .
6
8
1
x1
3
1
3
2
1
2
x2
1
This drawing restates the algebraic question of finding the solution of a linear
system into geometric terms: by what factors x1 and x2 must we dilate the sides
of the starting parallelogram so that it will fill the other one?
We can use this picture, and our geometric understanding of determinants,
to get a new formula for solving linear systems. Compare the sizes of these
shaded boxes.
6
8
1
x1
3
1
3
2
1
2
1
2
1
356
The second is defined by the vectors x1 13 and 21 and one of the properties of
the size function the determinant is that therefore the size of the second
box is x1 times the size of the first. The third box is derived from the second by
shearing, adding x2 21 to x1 13 to get x1 13 + x2 21 = 68 , along with 21 . The
determinant is not affected by shearing so the size of the third box equals that
of the second.
Taken together we have this.
1 2 x 1 2 x 1 + x 2 2 6 2
1
1
2
x1
=
=
=
3 1 x1 3 1 x1 3 + x2 1 1 8 1
Solving gives the value of one of the
6
8
x1 =
1
3
variables.
2
1
10
=
=2
5
2
1
1 0 4
x1
2
2 1 1 x2 = 1
1 0 1
x3
1
we do this computation.
1
2
1
x2 =
1
2
1
2
1
1
0
1
0
4
1
1
18
=
3
4
1
1
Cramers Rule lets us by-eye solve systems that are small and simple. For
example, we can solve systems with two equations and two unknowns, or three
equations and three unknowns, where the numbers are small integers. Such
cases appear often enough that many people find this formula handy.
But using it to solving large or complex systems is not practical, either by
hand or by a computer. A Gausss Method-based approach is faster.
Exercises
1 Use Cramers Rule to solve each for each of the variables.
357
x y= 4
2x + y = 2
(b)
x + 2y = 7
x 2y = 2
2 Use Cramers Rule to solve this system for z.
(a)
2x + y + z = 1
3x
+z=4
xyz=2
3 Prove Cramers Rule.
4 Here is an alternative proof of Cramers Rule that doesnt overtly contain any
geometry. Write Xi for the identity matrix with column i replaced by the vector ~x
of unknowns x1 , . . . , xn .
(a) Observe that AXi = Bi .
(b) Take the determinant of both sides.
5 Suppose that a linear system has as many equations as unknowns, that all of
its coefficients and constants are integers, and that its matrix of coefficients has
determinant 1. Prove that the entries in the solution are all integers. (Remark.
This is often used to invent linear systems for exercises.)
6 Use Cramers Rule to give a formula for the solution of a two equations/two
unknowns linear system.
7 Can Cramers Rule tell the difference between a system with no solutions and one
with infinitely many?
8 The first picture in this Topic (the one that doesnt use determinants) shows a
unique solution case. Produce a similar picture for the case of infinitely many
solutions, and the case of no solutions.
Topic
Speed of Calculating Determinants
For large matrices, finding the determinant by using row operations is typically
much faster than using the permutation expansion. We make this statement
precise by finding how many operations each method performs.
To compare the speed of two algorithms, we find for each one how the time
taken grows as the size of its input data set grows. For instance, if we increase
the size of the input by a factor of ten does the time taken grow by a factor of
ten, or by a factor of a hundred, or by a factor of a thousand? That is, is the
time taken proportional to the size of the data set, or to the square of that size,
or to the cube of that size, etc.? An algorithm whose time is proportional to the
square is faster than one that takes time proportional to the cube.
First consider the permutation expansion formula.
t
1,1 t1,2 . . . t1,n
X
t2,1 t2,2 . . . t2,n
=
t1,(1) t2,(2) tn,(n) |P |
.
..
permutations
tn,1 tn,2 . . . tn,n
There are n! = n (n 1) 2 1 different n-permutations so for a matrix with
n rows this sum has n! terms (and inside each term is n-many multiplications).
The factorial function grows quickly: when n is only 10 the expansion already
has 10! = 3, 628, 800 terms. Observe that growth proportional to the factorial is
bigger than growth proportional to the square n! > n2 because multiplying the
first two factors in n! gives n (n 1), which for large n is approximately n2 and
then multiplying in more factors will make the factorial even larger. Similarly,
the factorial function grows faster than n3 , etc. So an algorithm that uses the
permutation expansion formula, and thus performs a number of operations at
least as large as the factorial of the number of rows, would be very slow.
In contrast, the time taken by the row reduction method does not grow
so fast. Below is a script for row reduction in the computer language Python.
(Note: The code here is naive; for example it does not handle the case that the
359
import random
def random_matrix(num_rows, num_cols):
m = []
for col in range(num_cols):
new_row = []
for row in range(num_rows):
new_row.append(random.uniform(0,100))
m.append(new_row)
return m
def gauss_method(m):
"""Perform Gauss's Method on m. This code is for illustration only
and should not be used in practice.
m list of lists of numbers; each included list is a row
"""
num_rows, num_cols = len(m), len(m[0])
for p_row in range(num_rows):
for row in range(p_row+1, num_rows):
factor = -m[row][p_row] / float(m[p_row][p_row])
new_row = []
for col_num in range(num_cols):
p_entry, entry = m[p_row][col_num], m[row][col_num]
new_row.append(entry+factor*p_entry)
m[row] = new_row
return m
response = raw_input('number of rows? ')
num_rows = int(response)
m = random_matrix(num_rows, num_rows)
for row in m:
print row
M = gauss_method(m)
print "-----"
for row in M:
print row
Inside of the gauss_method routine, for each row prow, the routine performs
prow + row on the rows below. For each of these rows below, this
factor
360
then Python will time the program. Here is the output from a timed test run.
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
num_rows=
10
20
30
40
50
60
70
80
90
100
seconds= 0.0162539482117
seconds= 0.0808238983154
seconds= 0.248152971268
seconds= 0.555531978607
seconds= 1.05453586578
seconds= 1.77881097794
seconds= 2.75969099998
seconds= 4.10647988319
seconds= 5.81125879288
seconds= 7.86893582344
20
40
60
80
100
361
Exercises
1 To get an idea of what happens for typical matrices we can use the ability of
computer systems to generate random numbers (of course, these are only pseudorandom in that they come from an algorithm but they pass a number of reasonable
statistical tests for randomness).
(a) Fill a 55 array with random numbers say, in the range [0 . . . 1)). See if it is
singular. Repeat that experiment a few times. Are singular matrices frequent or
rare in this sense?
(b) Time your computer algebra system at finding the determinant of ten 1010
arrays of random numbers. Find the average time per array. Repeat the prior
item for 2020 arrays, 3030 arrays, . . . 100100 arrays, and compare to the
numbers given above. (Notice that, when an array is singular, we can sometimes
decide that quickly, for instance if the first row equals the second. In the light of
your answer to the first part, do you expect that singular systems play a large
role in your average?)
(c) Graph the input size versus the average time.
2 Compute the determinant of each of these by hand using the two methods discussed
above.
2 1
0 0
3 1 1
2 1
1 3
2 0
1 0 5
(b)
(c)
(a)
0 1 2 1
5 3
1 2 2
0 0 2 1
Count the number of multiplications and divisions used in each case, for each of
the methods.
3 The use by the timing routine of do_matrix has a bug. That routine does
two things, generate a random matrix and then do gauss_method on it, and the
timing number returned is for the combination. Produce code that times only the
gauss_method routine.
4 What 1010 array can you invent that takes your computer the longest time to
reduce? The shortest?
5 Some computer language specifications requires that arrays be stored by column,
that is, the entire first column is stored contiguously, then the second column, etc.
Does the code fragment given take advantage of this, or can it be rewritten to
make it faster, by taking advantage of the fact that computer fetches are faster
from contiguous locations?
Topic
Chis Method
When doing Gausss Method on a matrix that contains only integers people
often like to keep it that way. To avoid fractions in the reduction of this matrix
A = 3
1
1
4
5
1
1
2
22
6
23
2
1
1
8 2
10 2
()
1 +3
0
0
1
5
8
5
0
()
This all-integer approach is easier for mental calculations. And, using integer
arithmetic on a computer avoids some sticky issues involving floating point
calculations [Kahan]. So there are sound reasons for this approach.
Another advantage of this approach is that we can easily apply Laplaces expansion to the first column of () and then get the determinant by remembering
to divide by 4 because of ().
Here is the general 33 case of this approach to finding the determinant.
First, assuming a1,1 6= 0, we can rescale the lower rows.
a1,1
A = a2,1
a3,1
a1,2
a2,2
a3,2
a1,3
a2,3
a3,3
a1,1 2
a1,1 3
a1,1
a2,1 a1,1
a3,1 a1,1
a1,2
a2,2 a1,1
a3,2 a1,1
a1,3
a2,3 a1,1
a3,3 a1,1
363
This rescales the determinant by a21,1 . Now eliminate down the first column.
a1,1
a1,2
a1,3
a2,1 1 +2
2 1 1
A = 3 4 1
1 5 1
This is Chis matrix.
2
3
C =
2
1
1
4
1
5
2 1
3 1
=
2 1
1 1
5
9
5
1
364
The formula for 33 matrices det(A) = det(C)/a1,1 gives det(A) = (50/2) = 25.
For a larger determinant we must do multiple steps but each involves only
22 determinants. So we can often calculate the determinant just by writing
down a bit of intermediate information. For instance, with this 44 matrix
3 0 1 1
1 2 0 1
A=
2 1 0 3
1 0 0 1
we can mentally doing each of the 22 calculations and only write down the
33 result.
3 0 3 1 3 1
1 2 1 0 1 1
6
1
2
3 0 3 1 3 1
C3 =
2 1 2 0 2 3 = 3 2 7
0 1 2
3 0 3 1 3 1
1 0 1 0 1 1
2
Note that the determinant of this is a42
1,1 = 3 times the determinant of A.
To finish, iterate. Here is Chis matrix of C3 .
6 1 6 2
!
3 2 3 7
15 48
C2 =
=
6 2
6 1
6 12
0 1
0 2
365
2 1 4 0
0 1 4 0
(b)
1 1 1 1
0 2 1 1
2 What if a1,1 is zero?
3 The Rule of Sarrus is a mnemonic that many people learn for the 33 determinant
formula. To the right of the matrix, copy the first two columns.
a b c a b
d e f d e
g h i g h
Then the determinant is the sum of the three upper-left to lower-right diagonals
minus the three lower-left to upper-right diagonals aei+bfg+cdhgechfaidb.
Count the operations involved in Sarruss formula and in Chis.
4 Prove Chis formula.
1
(a) 4
7
2
5
8
3
6
9
Computer Code
This implements Chis Method. It is in the computer language Python.
#!/usr/bin/python
# chio.py
# Calculate a determinant using Chio's method.
# Jim Hefferon; Public Domain
# For demonstration only; for instance, does not handle the M[0][0]=0 case
def det_two(a,b,c,d):
"""Return the determinant of the 2x2 matrix [[a,b], [c,d]]"""
return a*d-b*c
def chio_mat(M):
"""Return the Chio matrix as a list of the rows
M nxn matrix, list of rows"""
dim=len(M)
C=[]
for row in range(1,dim):
C.append([])
for col in range(1,dim):
C[-1].append(det_two(M[0][0], M[0][col], M[row][0], M[row][col]))
return C
def chio_det(M,show=None):
"""Find the determinant of M by Chio's method
M mxm matrix, list of rows"""
dim=len(M)
key_elet=M[0][0]
if dim==1:
return key_elet
return chio_det(chio_mat(M))/(key_elet**(dim-2))
if __name__=='__main__':
M=[[2,1,1], [3,4,-1], [1,5,1]]
print "M=",M
print "Det is", chio_det(M)
Topic
Projective Geometry
There are geometries other than the familiar Euclidean one. One such geometry
arose when artists observed that what a viewer sees is not necessarily what is
there. As an example, here is Leonardo da Vincis The Last Supper.
Look at where the ceiling meets the left and right walls. In the room those lines
are parallel but da Vinci has painted lines that, if extended, would intersect.
The intersection is the vanishing point. This aspect of perspective is familiar
as an image of railroad tracks that appear to converge at the horizon.
Da Vinci has adopted a model of how we see. Imagine a person viewing a
room. From the persons eye, in every direction, carry a ray outward until it
intersects something, such as a point on the line where the wall meets the ceiling.
This first intersection point is what the person sees in that direction. Overall
what the person sees is the collection of three-dimensional intersection points
projected to a common two dimensional image.
A
B
C
367
This is a central projection from a single point. As the sketch shows, this
projection is not orthogonal like the ones we have seen earlier because the line
from the viewer to C is not orthogonal to the image plane. (This model is only
an approximation it does not take into account such factors as that we have
binocular vision or that our brains processing greatly affects what we perceive.
Nonetheless the model is interesting, both artistically and mathematically.)
The operation of central projection preserves some geometric properties, for
instance lines project to lines. However, it fails to preserve some others. One
example is that equal length segments can project to segments of unequal length
(above, AB is longer than BC because the segment projected to AB is closer to
the viewer and closer things look bigger). The study of the effects of central
projections is projective geometry.
There are three cases of central projection. The first is the projection done
by a movie projector.
projector P
source S
image I
We can think that each source point is pushed from the domain plane S outward
to the image plane I. The second case of projection is that of the artist pulling
the source back to a canvas.
painter P
image I
source S
The two are different because first S is in the middle and then I. One more
configuration can happen, with P in the middle. An example of this is when we
use a pinhole to shine the image of a solar eclipse onto a paper.
368
image I
pinhole P
source S
Although the three are not exactly the same, they are similar. We shall say
that each is a central projection by P of S to I. We next look at three models of
central projection, of increasing abstractness but also of increasing uniformity.
The last model will bring out the linear algebra.
Consider again the effect of railroad tracks that appear to converge to a point.
Model this with parallel lines in a domain plane S and a projection via a P to
a codomain plane I. (The gray lines shown are parallel to the S plane and to
the I plane.)
S
P
This single setting shows all three projection cases. The first picture below shows
P acting as a movie projector by pushing points from part of S out to image
points on the lower half of I. The middle picture shows P acting as the artist by
pulling points from another part of S back to image points in the middle of I.
In the third picture P acts as the pinhole, projecting points from S to the upper
part of I. This third picture is the trickiest the points that are projected near
to the vanishing point are the ones that are far out on the lower left of S. Points
in S that are near to the vertical gray line are sent high up on I.
S
P
S
P
S
P
369
There are two awkward things here. First, neither of the two points in the
domain nearest to the vertical gray line (see below) has an image because a
projection from those two is along the gray line that is parallel to the codomain
plane (we say that these two are projected to infinity). The second is that the
vanishing point in I isnt the image of any point from S because a projection to
this point would be along the gray line that is parallel to the domain plane (we
say that the vanishing point is the image of a projection from infinity).
S
P
For a model that eliminates this awkwardness, cover the projector P with
a hemispheric dome. In any direction, defined by a line through the origin,
project anything in that direction to the single spot on the dome where the line
intersects. This includes projecting things on the line between P and the dome,
as with the movie projector. It includes projecting things on the line further
from P than the dome, as with the painter. More subtly, it also includes things
on the line that lie behind P, as with the pinhole case.
1
` = { k 2 | k R }
3
More formally, for any nonzero vector ~v R3 , let the associated point v in the
projective plane be the set {k~v | k R and k 6= 0 } of nonzero vectors lying on
the same line through the origin as ~v. To describe a projective point we can give
any representative member of the line, so that the projective point shown above
can be represented in any of these three ways.
1
1/3
2
2
2/3
4
3
1
6
Each of these is a homogeneous coordinate vector for the point `.
370
This picture and definition clarifies central projection but there is still
something ungainly about the dome model: what happens when P looks down?
Consider, in the sketch above, the part of Ps line of sight that comes up towards
us, out of the page. Imagine that this part of the line falls, to the equator and
below. Now the part of the line ` that intersects the dome lies behind the page.
That is, as the line of sight continues down past the equator, the projective
point suddenly shifts from the front of the dome to the back of the dome. (This
brings out that the dome does not include the entire equator or else when the
viewer is looking exactly along the equator then there would be two points in the
line that are both on the dome. Instead we define the dome so that it includes
the points on the equator with a positive y coordinate, as well as the point
where y = 0 and x is positive.) This discontinuity means that we often have to
treat equatorial points as a separate case. So while the railroad track model of
central projection has three cases, the dome has two.
We can do better, we can reduce to a model having a single case. Consider a
sphere centered at the origin. Any line through the origin intersects the sphere
in two spots, said to be antipodal. Because we associate each line through
the origin with a point in the projective plane, we can draw such a point as a
pair of antipodal spots on the sphere. Below, we show the two antipodal spots
connected by a dashed line to emphasize that they are not two different points,
the pair of spots together make one projective point.
While drawing a point as a pair of antipodal spots on the sphere is not as intuitive
as the one-spot-per-point dome mode, on the other hand the awkwardness of
the dome model is gone in that as a line of view slides from north to south, no
sudden changes happen. This central projection model is uniform.
So far we have described points in projective geometry. What about lines?
What a viewer P at the origin sees as a line is shown below as a great circle, the
intersection of the model sphere with a plane through the origin.
371
(Weve included one of the projective points on this line to bring out a subtlety.
Because two antipodal spots together make up a single projective point, the
great circles behind-the-paper part is the same set of projective points as its
in-front-of-the-paper part.) Just as we did with each projective point, we can
also describe a projective line with a triple of reals. For instance, the members
of this plane through the origin in R3
x
{ y | x + y z = 0 }
z
project to a line that we can describe with (1 1 1) (using a row vector for
this typographically distinguishes lines from points). In general, for any nonzero
three-wide row vector ~L we define the associated line in the projective plane,
to be the set L = { k~L | k R and k 6= 0 }.
The reason this description of a line as a triple is convenient is that in
the projective plane a point v and a line L are incident the point lies on
the line, the line passes through the point if and only if a dot product of
their representatives v1 L1 + v2 L2 + v3 L3 is zero (Exercise 4 shows that this is
independent of the choice of representatives ~v and ~L). For instance, the projective
point described above by the column vector with components 1, 2, and 3 lies
in the projective line described by (1 1 1), simply because any vector in R3
whose components are in ratio 1 : 2 : 3 lies in the plane through the origin whose
equation is of the form k x + k y k z = 0 for any nonzero k. That is, the
incidence formula is inherited from the three-space lines and planes of which v
and L are projections.
With this, we can do analytic projective geometry. For instance, the projective
line L = (1 1 1) has the equation 1v1 + 1v2 1v3 = 0, meaning that for any
projective point v incident with the line, any of vs representative homogeneous
coordinate vectors will satisfy the equation. This is true simply because those
vectors lie on the three space plane. One difference from Euclidean analytic
geometry is that in projective geometry besides talking about the equation of a
line, we also talk about the equation of a point. For the fixed point
1
v = 2
3
the property that characterizes lines incident on this point is that the components
of any representatives satisfy 1L1 + 2L2 + 3L3 = 0 and so this is the equation of
v.
372
This symmetry of the statements about lines and points is the Duality
Principle of projective geometry: in any true statement, interchanging point
with line results in another true statement. For example, just as two distinct
points determine one and only one line, in the projective plane two distinct lines
determine one and only one point. Here is a picture showing two projective lines
that cross in antipodal spots and thus cross at one projective point.
()
Contrast this with Euclidean geometry, where two unequal lines may have a
unique intersection or may be parallel. In this way, projective geometry is
simpler, more uniform, than Euclidean geometry.
That simplicity is relevant because there is a relationship between the two
spaces: we can view the projective plane as an extension of the Euclidean plane.
Draw the sphere model of the projective plane as the unit sphere in R3 . Take
Euclidean 2-space to be the plane z = 1. As shown below, all of the points on the
Euclidean plane are projections of antipodal spots from the sphere. Conversely,
we can view some points in the projective plane as corresponding to points in
Euclidean space. (Note that projective points on the equator dont correspond
to points on the plane; instead we say these project out to infinity.)
()
Thus we can think of projective space as consisting of the Euclidean plane with
some extra points adjoined the Euclidean plane is embedded in the projective
plane. The extra points in projective space, the equatorial points, are called
ideal points or points at infinity and the equator is called the ideal line or
line at infinity (it is not a Euclidean line, it is a projective line).
The advantage of this extension from the Euclidean plane to the projective
plane is that some of the nonuniformity of Euclidean geometry disappears. For
instance, the projective lines shown above in () cross at antipodal spots, a
single projective point, on the spheres equator. If we put those lines into ()
then they correspond to Euclidean lines that are parallel. That is, in moving
373
from the Euclidean plane to the projective plane, we move from having two
cases, that distinct lines either intersect or are parallel, to having only one case,
that distinct lines intersect (possibly at a point at infinity).
A disadvantage of the projective plane is that we dont have the same
familiarity with it as we have with the Euclidean plane. Doing analytic geometry
in the projective plane helps because the equations lead us to the right conclusions.
Analytic projective geometry uses linear algebra. For instance, for three points
of the projective plane t, u, and v, setting up the equations for those points by
fixing vectors representing each shows that the three are collinear if and only
if the resulting three-equation system has infinitely many row vector solutions
representing their line. That in turn holds if and only if this determinant is zero.
t u v
1
1
1
t2 u2 v2
t3 u3 v3
Thus, three points in the projective plane are collinear if and only if any three
representative column vectors are linearly dependent. Similarly, by duality, three
lines in the projective plane are incident on a single point if and only if any
three row vectors representing them are linearly dependent.
The following result is more evidence of the niceness of the geometry of
the projective plane. These two triangles are in perspective from the point O
because their corresponding vertices are collinear.
O
T1
V1
U1
T2
V2
U2
Consider the pairs of corresponding sides: the sides T1 U1 and T2 U2 , the sides
T1 V1 and T2 V2 , and the sides U1 V1 and U2 V2 . Desargues Theorem is that
when we extend the three pairs of corresponding sides, they intersect (shown
here as the points T U, T V, and UV). Whats more, those three intersection
points are collinear.
UV
TV
TU
374
We will prove this using projective geometry. (Weve drawn Euclidean figures
because that is the more familiar image. To consider them as projective figures
we can imagine that, although the line segments shown are parts of great circles
and so are curved, the model has such a large radius compared to the size of the
figures that the sides appear in our sketch to be straight.)
For the proof we need a preliminary lemma [Coxeter]: if W, X, Y, Z are four
points in the projective plane, no three of which are collinear, then there are
homogeneous coordinate vectors w
~ , ~x, ~y, and ~z for the projective points, and a
basis B for R3 , satisfying this.
1
RepB (~
w) = 0
0
0
RepB (~x) = 1
0
0
RepB (~y) = 0
1
1
RepB (~z) = 1
1
To prove the lemma, because W, X, and Y are not on the same projective line,
any homogeneous coordinate vectors w
~ 0 , ~x0 , and ~y0 do not line on the same
plane through the origin in R3 and so form a spanning set for R3 . Thus any
homogeneous coordinate vector for Z is a combination ~z0 = a w
~ 0 + b ~x0 + c ~y0 .
Then let the basis be B = h~
w, ~x, ~yi and take w
~ =aw
~ 0 , ~x = b ~x0 , ~y = c ~y0 ,
and ~z = ~z0 .
To prove Desargues Theorem use the lemma to fix homogeneous coordinate
vectors and a basis.
1
0
0
1
RepB (~t1 ) = 0 RepB (~u1 ) = 1 RepB (~v1 ) = 0 RepB (~o) = 1
0
0
1
1
The projective point T2 is incident on the projective line OT1 so any homogeneous
coordinate vector for T2 lies in the plane through the origin in R3 that is spanned
by homogeneous coordinate vectors of O and T1 :
1
1
RepB (~t2 ) = a 1 + b 0
1
0
for some scalars a and b. Hence the homogeneous coordinate vectors of members
T2 of the line OT1 are of the form on the left below. The forms for U2 and V2
are similar.
t2
1
1
RepB (~t2 ) = 1
RepB (~u2 ) = u2
RepB (~v2 ) = 1
1
1
v2
375
t2 1
T1 U1 T2 U2 = 1 u2
0
(This is, of course, a homogeneous coordinate vector of a projective point.) The
other two intersections are similar.
1 t2
0
T1 V1 T2 V2 = 0
U1 V1 U2 V2 = u2 1
v2 1
1 v2
Finish the proof by noting that these projective points are on one projective
line because the sum of the three homogeneous coordinate vectors is zero.
Every projective theorem has a translation to a Euclidean version, although
the Euclidean result may be messier to state and prove. Desargues theorem
illustrates this. In the translation to Euclidean space, we must treat separately
the case where O lies on the ideal line, for then the lines T1 T2 , U1 U2 , and V1 V2
are parallel.
The remark following the statement of Desargues Theorem suggests thinking
of the Euclidean pictures as figures from projective geometry for a sphere model
with very large radius. That is, just as a small area of the world seems to people
living there to be flat, the projective plane is locally Euclidean.
We finish by pointing out one more thing about the projective plane. Although its local properties are familiar, the projective plane has a perhaps
unfamiliar global property. The picture below shows a projective point. At that
point we have drawn Cartesian axes, xy-axes. Of course, the axes appear in
the picture at both antipodal spots, one in the northern hemisphere (that is,
376
shown on the right) and the other in the south. Observe that in the northern
hemisphere a person who puts their right hand on the sphere, palm down, with
their thumb on the y axis will have their fingers pointing along the x-axis in the
positive direction.
The sequence of pictures below show a trip around this space: the antipodal spots
rotate around the sphere with the spot in the northern hemisphere moving up
and over the north pole, ending on the far side of the sphere, and its companion
coming to the front. (Be careful: the trip shown is not halfway around the
projective plane. It is a full circuit. The spots at either end of the dashed line
are the same projective point. So by the third sphere below the trip has pretty
much returned to the same projective point where we drew it starting above.)
At the end of the circuit, the x part of the xy-axes sticks out in the other
direction. That is, for a person to put their thumb on the y-axis and have
their fingers point positively on the x-axis, they must use their left hand. The
projective plane is not orientable in this geometry, left and right handedness
are not fixed properties of figures (said another way, we cannot describe a spiral
as clockwise or counterclockwise).
This exhibition of the existence of a non-orientable space raises the question
of whether our universe orientable. Could an astronaut leave earth right-handed
and return left-handed? [Gardner] is a nontechnical reference. [Clarke] is a
classic science fiction story about orientation reversal.
For an overview of projective geometry see [Courant & Robbins]. The approach weve taken here, the analytic approach, leads to quick theorems and
illustrates the power of linear algebra; see [Hanes], [Ryan], and [Eggar]. But
another approach, the synthetic approach of deriving the results from an axiom
system, is both extraordinarily beautiful and is also the historical route of
development. Two fine sources for this approach are [Coxeter] or [Seidenberg].
An easy and interesting application is in [Davies].
377
Exercises
1 What is the equation of this point?
1
0
0
2
(a) Find the line incident on these points in the projective plane.
1
4
2 , 5
3
6
(b) Find the point incident on both of these projective lines.
(1 2 3), (4 5 6)
3 Find the formula for the line incident on two projective points. Find the formula
for the point incident on two projective lines.
4 Prove that the definition of incidence is independent of the choice of the representatives of p and L. That is, if p1 , p2 , p3 , and q1 , q2 , q3 are two triples of
homogeneous coordinates for p, and L1 , L2 , L3 , and M1 , M2 , M3 are two triples of
homogeneous coordinates for L, prove that p1 L1 + p2 L2 + p3 L3 = 0 if and only if
q1 M1 + q2 M2 + q3 M3 = 0.
5 Give a drawing to show that central projection does not preserve circles, that a
circle may project to an ellipse. Can a (non-circular) ellipse project to a circle?
6 Give the formula for the correspondence between the non-equatorial part of the
antipodal modal of the projective plane, and the plane z = 1.
7 (Pappuss Theorem) Assume that T0 , U0 , and V0 are collinear and that T1 , U1 ,
and V1 are collinear. Consider these three points: (i) the intersection V2 of the lines
T0 U1 and T1 U0 , (ii) the intersection U2 of the lines T0 V1 and T1 V0 , and (iii) the
intersection T2 of U0 V1 and U1 V0 .
(a) Draw a (Euclidean) picture.
(b) Apply the lemma used in Desargues Theorem to get simple homogeneous
coordinate vectors for the T s and V0 .
(c) Find the resulting homogeneous coordinate vectors for Us (these must each
involve a parameter as, e.g., U0 could be anywhere on the T0 V0 line).
(d) Find the resulting homogeneous coordinate vectors for V1 . (Hint: it involves
two parameters.)
(e) Find the resulting homogeneous coordinate vectors for V2 . (It also involves
two parameters.)
(f) Show that the product of the three parameters is 1.
(g) Verify that V2 is on the T2 U2 line.
Chapter Five
Similarity
We have shown that for any homomorphism there are bases B and D such that
the matrix representing the map has a block partial-identity form.
!
Identity Zero
RepB,D (h) =
Zero
Zero
~ 1 + + cn
~ n to c1~1 +
This representation describes the map as sending c1
+ ck~k + ~0 + + ~0, where n is the dimension of the domain and k is the
dimension of the range. Under this representation the action of the map is easy
to understand because most of the matrix entries are zero.
This chapter considers the special case where the domain and codomain are
the same. Here we naturally ask for the domain basis and codomain basis to be
the same. That is, we want a basis B so that RepB,B (t) is as simple as possible,
where we take simple to mean that it has many zeroes. We will find that we
cannot always get a matrix having the above block partial-identity form but we
will develop a form that comes close, a representation that is nearly diagonal.
This chapter requires that we factor polynomials. But many polynomials do not
factor over the real numbers; for instance, x2 + 1 does not factor into a product
of two linear polynomials with real coefficients; instead it requires complex
numbers x2 + 1 = (x i)(x + i).
380
Consequently in this chapter we shall use complex numbers for our scalars,
including entries in vectors and matrices. That is, we shift from studying vector
spaces over the real numbers to vector spaces over the complex numbers. Any
real number is a complex number and in this chapter most of the examples use
only real numbers but nonetheless, the critical theorems require that the scalars
be complex. So this first section is a review of complex numbers.
In this book our approach is to shift to this more general context of taking
scalars to be complex for the pragmatic reason that we must do so in order
to move forward. However, the idea of doing vector spaces by taking scalars
from a structure other than the real numbers is an interesting and useful one.
Delightful presentations that take this approach from the start are in [Halmos]
and [Hoffman & Kunze].
I.1
381
r(x) = 2x + 3. Note that r(x) has a lower degree than does d(x).
1.4 Corollary The remainder when p(x) is divided by x is the constant
polynomial r(x) = p().
Proof The remainder must be a constant polynomial because it is of degree less
than the divisor x . To determine the constant, take the theorems divisor
d(x) to be x and substitute for x.
QED
If a divisor d(x) goes into a dividend p(x) evenly, meaning that r(x) is the
zero polynomial, then d(x) is a called a factor of p(x). Any root of the factor,
any R such that d() = 0, is a root of p(x) since p() = d() q() = 0.
1.5 Corollary If is a root of the polynomial p(x) then x divides p(x) evenly,
that is, x is a factor of p(x).
Proof By the above corollary p(x) = (x ) q(x) + p(). Since is a root,
p() = 0 so x is a factor.
QED
b + b2 4ac
b b2 4ac
1 =
2 =
2a
2a
(if the discriminant b2 4ac is negative then the polynomial has no real number
roots). A polynomial that cannot be factored into two lower-degree polynomials
with real number coefficients is said to be irreducible over the reals.
1.6 Theorem Any constant or linear polynomial is irreducible over the reals. A
quadratic polynomial is irreducible over the reals if and only if its discriminant
is negative. No cubic or higher-degree polynomial is irreducible over the reals.
1.7 Corollary Any polynomial with real coefficients can be factored into linear
and irreducible quadratic polynomials. That factorization is unique; any two
factorizations have the same powers of the same factors.
Note the analogy with the prime factorization of integers. In both cases the
uniqueness clause is very useful.
382
1.8 Example Because of uniqueness we know, without multiplying them out, that
(x + 3)2 (x2 + 1)3 does not equal (x + 3)4 (x2 + x + 1)2 .
1.9 Example By uniqueness, if c(x) = m(x)q(x) then where c(x) = (x3)2 (x+2)3
and m(x) = (x 3)(x + 2)2 , we know that q(x) = (x 3)(x + 2).
While x2 +1 has no real roots and so doesnt factor over the real numbers, if we
imagine a root traditionally denoted i, so that i2 + 1 = 0 then x2 + 1 factors
into a product of linears (xi)(x+i). When we adjoin this root i to the reals and
close the new system with respect to addition and multiplication then we have
the complex numbers C = {a + bi | a, b R and i2 = 1 }. (These are often
pictured on a plane with a plotted on the horizontal axis and b on the vertical;
note that the distance of the point from the origin is |a + bi| = a2 + b2 .)
In C all quadratics factor. That is, in contrast with the reals, C has no
irreducible quadratics.
b + b2 4ac
b b2 4ac
2
ax + bx + c = a x
x
2a
2a
1.10 Example The second degree polynomial x2 + x + 1 factors over the complex
numbers into the product of two first degree polynomials.
1 + 3
1 3
1
3
1
3
x
x
= x ( +
i) x (
i)
2
2
2
2
2
2
1.11 Theorem (Fundamental Theorem of Algebra) Polynomials with complex coefficients factor into linear polynomials with complex coefficients. The factorization
is unique.
I.2
Complex Representations
383
With these rules, all of the operations that weve used for real vector spaces
carry over unchanged to vector spaces with complex scalars.
2.2 Example Matrix multiplication is the same, although the scalar arithmetic
involves more bookkeeping.
1 + 1i
i
=
2 0i
2 + 3i
1 + 0i
3i
1 0i
i
We shall carry over unchanged from the previous chapters everything that
we can. For instance, we shall call this
1 + 0i
0 + 0i
0 + 0i
0 + 0i
h
,
.
.
.
,
..
.. i
.
.
0 + 0i
1 + 0i
the standard basis for Cn as a vector space over C and again denote it En .
Another example is that Pn will be the vector space of degree n polynomials
with coefficients that are complex.
384
II
Similarity
of bases, B, D and B,
h
Vwrt B Wwrt D
H
idy
idy
h
Vwrt B Wwrt D
Vwrt B Vwrt B
T
idy
idy
t
Vwrt D Vwrt D
In matrix terms, RepD,D (t) = RepB,D (id) RepB,B (t) RepB,D (id)
II.1
1
P2 wrt B P2 wrt B
T
idy
idy
d/dx
P2 wrt D P2 wrt D
The top is first. The effect of the transformation on the starting basis B
d/dx
x2 7 2x
d/dx
x 7 1
d/dx
1 7 0
385
0
RepB (1) = 0
1
0
RepB (0) = 0
0
T = RepB,B (d/dx) = 2
0
0
0
1
0
0
1 7 0
d/dx
1 + x 7 1
d/dx
1 + x2 7 2x
T = RepD,D (d/dx) = 0
0
T .
1
0
0
2
0
Third, computing the matrix for the right-hand side involves finding the
effect of the identity map on the elements of B. Of course, the identity map
does not transform them at all so to find the matrix we represent Bs elements
with respect to D.
1
1
1
RepD (x2 ) = 0 RepD (x) = 1 RepD (1) = 0
1
0
0
So the matrix for going down the right side is the concatenation of those.
1 1 1
P = RepB,D (id) = 0
1 0
1
0 0
With that, we have two options to compute the matrix for going up on left
side. The direct computation represents elements of D with respect to B
0
0
1
RepB (1) = 0 RepB (1 + x) = 1 RepB (1 + x2 ) = 0
1
1
1
386
0 0
0 1
1 1
0
1
1
1 1 1 1 0 0
1 0 0 1 0 0
0
1
0 0 0 0 1
0
0
1
0
0
0
1
0
0
1
0
1
1
0
1
1.2 Definition The matrices T and T are similar if there is a nonsingular P such
that T = PT P1 .
Since nonsingular matrices are square, T and T must be square and of the same
size. Exercise 15 checks that similarity is an equivalence relation.
1.3 Example The definition does not require that we consider a map. Calculation
with these two
!
!
2 1
2 3
P=
T=
1 1
1 1
gives that T is similar to this matrix.
T =
12
7
19
11
1.4 Example The only matrix similar to the zero matrix is itself: PZP1 = PZ = Z.
The identity matrix has the same property: PIP1 = PP1 = I.
Matrix similarity is a special case of matrix equivalence so if two matrices
are similar then they are matrix equivalent. What about the converse: if they
are square, must any two matrix equivalent matrices be similar? No; the matrix
equivalence class of an identity matrix consists of all nonsingular matrices of
that size while the prior example shows that the only member of the similarity
class of an identity matrix is itself. Thus these two are matrix equivalent but
not similar.
!
!
1 0
1 2
T=
S=
0 1
0 3
So some matrix equivalence classes split into two or more similarity classes
similarity gives a finer partition than does matrix equivalence. This shows some
matrix equivalence classes subdivided into similarity classes.
387
S
T
...
1
2
3
6
T =
0
11/2
0
5
P=
4
3
2
2
check that T = PT P1 .
1.6 Example 1.4 shows that the only matrix similar to a zero matrix is itself and
that the only matrix similar to the identity is itself.
(a) Show that the 11 matrix whose single entry is 2 is also similar only to itself.
(b) Is a matrix of the form cI for some scalar c similar only to itself?
(c) Is a diagonal matrix similar only to itself?
X 1.7 Consider this transformation of C3
x
xz
t(y) = z
z
2y
388
1 0 4
1
1 1 3
0
2 1 7
3
0
1
1
1
1
2
II.2
389
Diagonalizability
The prior subsection shows that although similar matrices are necessarily matrix
equivalent, the converse does not hold. Some matrix equivalence classes break
into two or more similarity classes; for instance, the nonsingular 22 matrices
form one matrix equivalence class but more than one similarity class.
Thus we cannot use the canonical form for matrix equivalence, a block
partial-identity matrix, as a canonical form for matrix similarity. The diagram
below illustrates. The stars are similarity class representatives. Each dashed-line
similarity class subdivision has one star but each solid-curve matrix equivalence
class division has only one partial identity matrix.
?
?
?
? ? ?
?
?
...
2
1
4
1
is diagonalizable.
2
0
0
3
!
=
1
1
2
1
2
1
1
1
2
1
!1
390
be similar to a diagonal matrix then that matrix would have have at least one
nonzero entry on its diagonal.
The square of N is the zero matrix. This imples that for any map n
represented by N (with respect to some B, B) the composition n n is the zero
map. This in turn implies that for any matrix representing n (with respect to
B),
its square is the zero matrix. But the square of a nonzero diagonal
some B,
matrix cannot be the zero matrix, because the square of a diagonal matrix is the
diagonal matrix whose entries are the squares of the entries from the starting
B
such that n is represented by a diagonal matrix
matrix. Thus there is no B,
the matrix N is not diagonalizable.
That example shows that a diagonal form will not suffice as a canonical form
for similarity we cannot find a diagonal matrix in each matrix similarity class.
However, some similarity classes contain a diagonal matrix and the canonical
form that we are developing has the property that if a matrix can be diagonalized
then the diagonal matrix is the canonical representative of its similarity class.
2.4 Lemma A transformation t is diagonalizable if and only if there is a basis
~ 1, . . . ,
~ n i and scalars 1 , . . . , n such that t(
~ i ) = i
~ i for each i.
B = h
Proof Consider a diagonal representation matrix.
..
.
~
RepB,B (t) =
RepB (t(1 ))
..
.
..
1
.
.
~ n )) = .
RepB (t(
.
..
0
.
..
0
..
.
n
Consider the representation of a member of this basis with respect to the basis
~ i ). The product of the diagonal matrix and the representation vector
RepB (
0
0
. .
. .
1
0
. .
.
.
.
~ i )) = .
. . .. 1
RepB (t(
=
.
. .i
0
n
.. ..
0
0
has the stated action.
QED
3
0
2
1
391
~ 1,
~ 2 i such that
basis RepE2 ,E2 (t) and look for a basis B = h
!
1 0
RepB,B (t) =
0 2
~ 1 ) = 1
~ 1 and t(
~ 2 ) = 2
~ 2.
that is, such that t(
!
!
3 2 ~
3 2 ~
~
~2
1 = 1 1
2 = 2
0 1
0 1
We are looking for scalars x such that this equation
!
!
!
3 2
b1
b1
=x
0 1
b2
b2
has solutions b1 and b2 that are not both 0 (the zero vector is not the member
of any basis). Thats a linear system.
(3 x) b1 +
2 b2 = 0
(1 x) b2 = 0
()
Focus first on the bottom equation. There are two cases: either b2 = 0 or x = 1.
In the b2 = 0 case the first equation gives that either b1 = 0 or x = 3. Since
weve disallowed the possibility that both b2 = 0 and b1 = 0, we are left with
the first diagonal entry 1 = 3. With that, ()s first equation is 0 b1 + 2 b2 = 0
and so associated with 1 = 3 are vectors having a second component of zero
while the first component is free.
!
!
!
3 2
b1
b1
=3
0 1
0
0
To get a first basis vector choose any nonzero b1 .
!
1
~1 =
0
The other case for the bottom equation of () is 2 = 1. Then ()s first
equation is 2 b1 + 2 b2 = 0 and so associated with this case are vectors whose
second component is the negative of the first.
!
!
!
3 2
b1
b1
=1
0 1
b1
b1
Get the second basis vector by choosing a nonzero one of these.
!
1
~2 =
392
R2wrt E2 R2wrt E2
T
idy
idy
t
R2wrt B R2wrt B
D
and note that the matrix RepB,E2 (id) is easy, giving this diagonalization.
3
0
0
1
!
=
1
0
1
1
!1
3
0
2
1
1
0
1
1
393
1 1
0 1
(b)
0 0
1 0
2.16 We can ask how diagonalization interacts with the matrix operations. Assume
that t, s : V V are each diagonalizable. Is ct diagonalizable for all scalars c?
What about t + s? t s?
X 2.17 Show that matrices of this form are not diagonalizable.
1 c
c 6= 0
0 1
2.18 Show
is diagonalizable.
that
each ofthese
1 2
x y
(a)
(b)
x, y, z scalars
2 1
y z
(a)
II.3
x, y, z C
394
3.3 Example The only transformation on the trivial space {~0 } is ~0 7 ~0. This
map has no eigenvalues because there are no non-~0 vectors ~v mapped to a scalar
multiple ~v of themselves.
3.4 Example Consider the homomorphism t : P1 P1 given by c0 + c1 x 7
(c0 + c1 ) + (c0 + c1 )x. While the codomain P1 of t is two-dimensional, its range
is one-dimensional R(t) = {c + cx | c C}. Application of t to a vector in that
range will simply rescale the vector c + cx 7 (2c) + (2c)x. That is, t has an
eigenvalue of 2 associated with eigenvectors of the form c + cx where c 6= 0.
This map also has an eigenvalue of 0 associated with eigenvectors of the form
c cx where c 6= 0.
The definition above is for maps. We can give a matrix version.
3.5 Definition A square matrix T has a scalar eigenvalue associated with the
nonzero eigenvector ~ if T ~ = ~.
This extension of the definition for maps to a definition for matrices is natural
but there is a point on which we must take care. The eigenvalues of a map are also
the eigenvalues of matrices r epresenting that map, and so similar matrices have
the same eigenvalues. However, the eigenvectors can differ similar matrices
need not have the same eigenvectors. The next example explains.
3.6 Example These matrices are similar
!
2 0
T=
T =
0 0
4
4
2
2
1
1
1
2
!
P
2
1
1
1
The matrix T has two eigenvalues, 1 = 2 and 2 = 0. The first one is associated
with this eigenvector.
! !
!
2 0
1
2
T~e1 =
=
= 2~e1
0 0
0
0
Suppose that T represents a transformation t : C2 C2 with respect to the
standard basis. Then the action of this transformation t is simple.
!
!
x
2x
t
7
y
0
395
Vwrt E3 Vwrt E3
T
idy
idy
t
Vwrt B Vwrt B
T RepE2 (~e1 ) = T
1
0
T RepB (~e1 ) = T
1
1
=2
1
0
=2
1
1
That is, when the matrix representing the transformation is T = RepE2 ,E2 (t)
then it assumes that column vectors are representations with respect to E2 .
However T = RepB,B (t) assumes that column vectors are representations with
respect to B, and so the column vectors that get doubled are different.
We next see the basic tool for finding eigenvectors and eigenvalues.
3.7 Example If
1 2
T = 2 0
1 2
2
3
396
then to find the scalars x such that T ~ = x~ for nonzero eigenvectors ~, bring
everything to the left-hand side
1 2 1
z1
z1
~
2 0 2 z2 x z2 = 0
1 2 3
z3
z3
and factor (T xI)~ = ~0. (Note that it says T xI. The expression T x doesnt
make sense because T is a matrix while x is a scalar.) This homogeneous linear
system
1x
2
1
z1
0
0 x 2 z2 = 0
2
1
2
3x
z3
0
has a nonzero solution ~z if and only if the matrix is singular. We can determine
when that happens.
0 = |T xI|
1 x
2
1
= 2
0 x 2
1
2
3 x
= x3 4x2 + 4x
= x(x 2)2
The eigenvalues are 1 = 0 and 2 = 2. To find the associated eigenvectors plug
in each eigenvalue. Plugging in 1 = 0 gives
10
2
1
z1
0
z1
a
0 0 2 z2 = 0 = z2 = a
2
1
2
30
z3
0
z3
a
for a 6= 0 (a must be non-0 because eigenvectors are defined to be non-~0).
Plugging in 2 = 2 gives
b
12
2
1
z1
0
z1
=
=
=
2
0
2
2
z
0
z
2
2 0
1
2
32
z3
0
z3
b
with b 6= 0.
3.8 Example If
S=
1
3
397
1
z1
0
z1
a
=
=
=
0
3
z2
0
z2
0
for a scalar a 6= 0. Then plug in 2
!
!
!
3
1
z1
0
=
0
33
z2
0
z1
z2
!
=
b/( 3)
b
where b 6= 0.
QED
3.11 Remark That result is the reason that in this chapter we use scalars that
are complex numbers.
398
Proof Fix an eigenvalue . Notice first that V contains the zero vector since
t(~0) = ~0, which equals ~0. So the eigenspace is a nonempty subset of the space.
What remains is to check closure of this set under linear combinations. Take
~1 , . . . , ~n V and then verify
t(c1 ~1 + c2 ~2 + + cn ~n ) = c1 t(~1 ) + + cn t(~n )
= c1 ~1 + + cn ~n
= (c1 ~1 + + cn ~n )
that the combination is also an element of V .
QED
3.14 Example In Example 3.7 these are the eigenspaces associated with the
eigenvalues 0 and 2.
a
V0 = { a | a C},
a
b
V2 = { 0 | b C }.
b
3.15 Example In Example 3.8 these are the eigenspaces associated with the
eigenvalues and 3.
!
a
V = {
| a C}
0
b/( 3)
V3 = {
b
!
| b C}
0
0
0
1
0
0
0
represents projection.
x
x
y 7 y
z
0
x, y, z C
399
that there are zero eigenvalues. Then the set of associated vectors is empty and
so is linearly independent.
For the inductive step assume that the statement is true for any set of
k > 0 distinct eigenvalues. Consider distinct eigenvalues 1 , . . . , k+1 and let
~v1 , . . . ,~vk+1 be associated eigenvectors. Suppose that ~0 = c1~v1 + + ck~vk +
ck+1~vk+1 . Derive two equations from that, the first by multiplying by k+1 on
both sides ~0 = c1 k+1~v1 + + ck+1 k+1~vk+1 and the second by applying the
map to both sides ~0 = c1 t(~v1 )+ +ck+1 t(~vk+1 ) = c1 1~v1 + +ck+1 k+1~vk+1
(applying the matrix gives the same result). Subtract the second from the first.
~0 = c1 (k+1 1 )~v1 + + ck (k+1 k )~vk + ck+1 (k+1 k+1 )~vk+1
400
The ~vk+1 term vanishes. Then the induction hypothesis gives that c1 (k+1
1 ) = 0, . . . , ck (k+1 k ) = 0. The eigenvalues are distinct so the coefficients
c1 , . . . , ck are all 0. With that we are left with the equation ~0 = ck+1~vk+1 so
ck+1 is also 0.
QED
3.18 Example The eigenvalues of
0
4
2
1
8
1
3
QED
This section observes that some matrices are similar to a diagonal matrix.
The idea of eigenvalues arose as the entries of that diagonal matrix, although
the definition applies more broadly than just to diagonalizable matrices. To find
eigenvalues we defined the characteristic equation and that led to the final result,
a criteria for diagonalizability. (While it is useful for the theory, note that in
applications finding eigenvalues this way is typically impractical; for one thing
the matrix may be large and finding roots of large-degree polynomials is hard.)
In the next section we study matrices that cannot be diagonalized.
Exercises
3.20 For each, find
polynomial
the characteristic
and theeigenvalues.
10 9
1 2
0 3
0 0
(a)
(b)
(c)
(d)
4 2
4 3
7 0
0 0
1 0
(e)
0 1
X 3.21 For each matrix, find the characteristic equation, and the eigenvalues and
associated
eigenvectors.
3 0
(a)
8 1
3 2
(b)
1 0
401
3.22 Find the characteristic equation, and the eigenvalues and associated eigenvectors
for this matrix. Hint. The eigenvalues are complex.
2 1
5
2
3.23 Find the characteristic polynomial,
vectors of this matrix.
1
0
0
1
1
1
X 3.24 For each matrix, find the characteristic equation, and the eigenvalues and
associated eigenvectors.
3 2 0
0
1
0
(a) 2 3 0
(b) 0
0
1
0
0 5
4 17 8
X 3.25 Let t : P2 P2 be
a0 + a1 x + a2 x2 7 (5a0 + 6a1 + 2a2 ) (a1 + 8a2 )x + (a0 2a2 )x2 .
Find its eigenvalues and the associated eigenvectors.
3.26 Find the eigenvalues and eigenvectors of this map t : M2 M2 .
a b
2c
a+c
7
c d
b 2c
d
X 3.27 Find the eigenvalues and associated eigenvectors of the differentiation operator
d/dx : P3 P3 .
3.28 Prove that the eigenvalues of a triangular matrix (upper or lower triangular)
are the entries on the diagonal.
X 3.29 Find the formula for the characteristic polynomial of a 22 matrix.
3.30 Prove that the characteristic polynomial of a transformation is well-defined.
3.31 Prove or disprove: if all the eigenvalues of a matrix are 0 then it must be the
zero matrix.
X 3.32 (a) Show that any non-~0 vector in any nontrivial vector space can be a
eigenvector. That is, given a ~v 6= ~0 from a nontrivial V, show that there is a
transformation t : V V having a scalar eigenvalue R such that ~v V .
(b) What if we are given a scalar ? Can any non-~0 member of any nontrivial
vector space be an eigenvector associated with ?
X 3.33 Suppose that t : V V and T = RepB,B (t). Prove that the eigenvectors of T
associated with are the non-~0 vectors in the kernel of the map represented (with
respect to the same bases) by T I.
3.34 Prove that if a, . . . , d are all integers and a + b = c + d then
a b
c d
has integral eigenvalues, namely a + b and a c.
X 3.35 Prove that if T is nonsingular and has eigenvalues 1 , . . . , n then T 1 has
eigenvalues 1/1 , . . . , 1/n . Is the converse true?
402
1
2
3
2
2
6
2
2
6
III
403
Nilpotence
This chapter shows that every square matrix is similar to one that is a sum of
two kinds of simple matrices. The prior section focused on the first simple kind,
diagonal matrices. We now consider the other kind.
III.1
Self-Composition
~v
t(~v )
t2 (~v )
Note that the superscript power notation tj for iterates of the transformations
fits with the notation that weve used for their square matrix representations
because if RepB,B (t) = T then RepB,B (tj ) = T j .
1.1 Example For the derivative map d/dx : P3 P3 given by
d/dx
a + bx + cx2 + dx3 7 6d
and any higher power is the zero map.
1.2 Example This transformation of the space M22 of 22 matrices
!
!
a b
b a
t
7
c d
d 0
404
b
d
a
c
b
d
t2
a
0
b
0
t3
b
0
a
0
x
0
0
0
t t t
y 7 x 7 0 7 0
z
y
x
0
0
3
R(t ) = { 0 }
0
These examples suggest that after some number of iterations the map settles
down.
1.4 Lemma For any transformation t : V V, the range spaces of the powers
form a descending chain
V R(t) R(t2 )
and the null spaces form an ascending chain.
{~0 } N (t) N (t2 )
Further, there is a k such that for powers less than k the subsets are proper so that
if j < k then R(tj ) R(tj+1 ) and N (tj ) N (tj+1 ), while for higher powers
the sets are equal so that if j > k then R(tj ) = R(tj+1 ) and N (tj ) = N (tj+1 )).
Proof First recall that for any map the dimension of its range space plus
the dimension of its null space equals the dimension of its domain. So if the
405
dimensions of the range spaces shrink then the dimensions of the null spaces
must rise. We will do the range space half here and leave the rest for Exercise 14.
We start by showing that the range spaces form a chain. If w
~ R(tj+1 ), so
j+1
j
that w
~ = t (~v) for some ~v, then w
~ = t ( t(~v) ). Thus w
~ R(tj ).
Next we verify the further property: in the chain the subsets containments
are proper initially, and then from some power k onward the range spaces
are equal. We first show that if any pair of adjacent range spaces in the
chain are equal R(tk ) = R(tk+1 ) then all subsequent ones are also equal
R(tk+1 ) = R(tk+2 ), etc. This holds because t : R(tk+1 ) R(tk+2 ) is the
same map, with the same domain, as t : R(tk ) R(tk+1 ) and it therefore has
the same range R(tk+1 ) = R(tk+2 ) (it holds for all higher powers by induction).
So if the chain of range spaces ever stops strictly decreasing then from that point
onward it is stable.
We end by showing that the chain must eventually stop decreasing. Each
range space is a subspace of the one before it. For it to be a proper subspace it
must be of strictly lower dimension (see Exercise 12). These spaces are finitedimensional and so the chain can fall for only finitely many steps. That is, the
power k is at most the dimension of V.
QED
d/dx
R(t) = {a + bx | a, b C}
R(t2 ) = { a | a C}
and then stabilizes R(t2 ) = R(t3 ) = while the null space grows
N (t0 ) = {0 } N (t) = { cx | c C } N (t2 ) = {cx + d | c, d C }
and then stabilizes N (t2 ) = N (t3 ) = .
1.7 Example The transformation : C3 C3 projecting onto the first two coordinates
c1
c1
c2 7 c2
c3
0
406
nullity(tj )
dim(N (t))
rank(tj )
...
dim(R (t))
0
0 1 2
On iteration the rank falls and the nullity rises until there is some k such
that the map reaches a steady state R(tk ) = R(tk+1 ) = R (t) and N (tk ) =
N (tk+1 ) = N (t). This must happen by the n-th iterate.
Exercises
X 1.9 Give the chains of range spaces and null spaces for the zero and identity transformations.
X 1.10 For each map, give the chain of range spaces and the chain of null spaces, and
the generalized range space and the generalized null space.
(a) t0 : P2 P2 , a + bx + cx2 7 b + cx2
(b) t1 : R2 R2 ,
a
0
7
b
a
(c) t2 : P2 P2 , a + bx + cx2 7 b + cx + ax2
(d) t3 : R3 R3 ,
a
a
b 7 a
c
b
407
III.2
Strings
is equivalent to the first. Clause (1) is true because any transformation satisfies
408
that its rank plus its nullity equals the dimension of the space, and in particular
this holds for the transformation tn .
For clause (2), assume that ~v R (t) N (t) to prove that ~v = ~0. Because
~v is in the generalized null space, tn (~v) = ~0. On the other hand, by the lemma
t : R (t) R (t) is one-to-one and a composition of one-to-one maps is oneto-one, so tn : R (t) R (t) is one-to-one. Only ~0 is sent by a one-to-one
linear map to ~0 so the fact that tn (~v) = ~0 implies that ~v = ~0.
QED
2.3 Remark Technically there is a difference between the map t : V V and
the map on the subspace t : R (t) R (t) if the generalized range space is
not equal to V, because the domains are different. But the difference is small
because the second is the restriction of the first to R (t).
For powers between j = 0 and j = n, the space V might not be the direct
sum of R(tj ) and N (tj ). The next example shows that the two can have a
nontrivial intersection.
2.4 Example Consider the transformation of C2 defined by this action on the
elements of the standard basis.
!
!
!
!
!
1
0
0
0
0 0
n
n
7
7
N = RepE2 ,E2 (n) =
0
1
1
0
1 0
This is a shift map and is clearly nilpotent of index two.
!
!
x
0
7
y
x
Another way to depict this maps action is with a string.
~e1 7 ~e2 7 ~0
The vector
~e2 =
0
1
0 0 0 0
1 0 0 0
= RepE ,E (
N
n) =
4
4
0 1 0 0
0 0 1 0
409
2.6 Example Transformations can act via more than one string. A transformation
~ 1, . . . ,
~ 5 i by
t acting on a basis B = h
~1
~ 2 7
~ 3 7 ~0
7
~
~
~0
4
7 5
7
is represented by a matrix that is all zeros except for blocks of subdiagonal ones
0 0 0 0 0
1 0 0 0 0
RepB,B (t) = 0 1 0 0 0
0 0 0 0 0
0 0 0 1 0
(the lines just visually organize the blocks).
In those examples all vectors are eventually transformed to zero.
2.7 Definition A nilpotent transformation is one with a power that is the zero
map. A nilpotent matrix is one with a power that is the zero matrix. In either
case, the least such power is the index of nilpotency.
2.8 Example In Example 2.4 the index of nilpotency is two. In Example 2.5 it is
four. In Example 2.6 it is three.
2.9 Example The differentiation map d/dx : P2 P2 is nilpotent of index three
since the third derivative of any quadratic polynomial is zero. This maps action
is described by the string x2 7 2x 7 2 7 0 and taking the basis B = hx2 , 2x, 2i
gives this representation.
0 0 0
RepB,B (d/dx) = 1 0 0
0 1 0
Not all nilpotent matrices are all zeros except for blocks of subdiagonal ones.
from Example 2.5, and this four-vector basis
2.10 Example With the matrix N
1
0
1
0
0 2 1 0
D = h , , , i
1 1 1 0
0
0
0
1
a change of
1 0
0 2
1 1
0 0
basis operation
1 0
0 0
1 0
1 0
1 0 0 1
0 1
0 0
1
0 0
1 0 1 0
1
3
0 0
0 2 1 0
=
2
0 0 1 1 1 0
1 0
0 0 0 1
2
respect to D, D.
0
1 0
2 5 0
1 3 0
1 2 0
410
The new matrix is nilpotent; its fourth power is the zero matrix. We could
verify this with a tedious computation or we can instead just observe that it is
4 , the zero matrix, and the only
nilpotent since its fourth power is similar to N
matrix similar to the zero matrix is itself.
1 )4 = PNP
1 PNP
1 PNP
1 PNP
1 = PN
4 P1
(PNP
The goal of this subsection is to show that the prior example is prototypical
in that every nilpotent matrix is similar to one that is all zeros except for blocks
of subdiagonal ones.
2.11 Definition Let t be a nilpotent transformation on V. A t-string of length
k generated by ~v V is a sequence h~v, t(~v), . . . , tk1 (~v)i. A t-string basis is a
basis that is a concatenation of t-strings.
2.12 Example Consider differentiation d/dx : P2 P2 . The sequence hx2 , 2x, 2, 0i
is a d/dx-string of length 4. The sequence hx2 , 2x, 2i is a d/dx-string of length 3
that is a basis for P2 .
Note that the strings cannot form a basis under concatenation if they are
not disjoint because a basis cannot have a repeated vector.
~ 1,
~ 2,
~ 3 i and
2.13 Example In Example 2.6, we can concatenate the t-strings h
~
~
h4 , 5 i to make a basis for the domain of t.
2.14 Lemma If a space has a t-string basis then the index of nilpotency of t is
the length of the longest string in that basis.
Proof Let the space have a basis of t-strings and let ts index of nilpotency
be k. We cannot have that the longest string in that basis is longer than ts
index of nilpotency because tk sends any vector, including the vector starting
the longest string, to ~0. Therefore instead suppose that the space has a t-string
basis B where all of the strings are shorter than length k. Because t has index k,
there is a vector ~v such that tk1 (~v) 6= ~0. Represent ~v as a linear combination
of elements from B and apply tk1 . We are supposing that tk1 maps each
element of B to ~0, and therefore maps each term in the linear combination to ~0,
but also that it does not map ~v to ~0. That is a contradiction.
QED
We shall show that each nilpotent map has an associated string basis, a basis
of disjoint strings.
To see the main idea of the argument, imagine that we want to construct
a counterexample, a map that is nilpotent but without an associated disjoint
string basis. We might think to make something like the map t : C5 C5 with
this action.
~e2 7
411
0
0
0
0
~e3 7 ~0
~e4 7 ~e5 7 ~0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
But, the fact that the shown basis isnt disjoint doesnt mean that there isnt
another basis that consists of disjoint strings.
To produce such a basis for this map we will first find the number and lengths
of its strings. Observer that ts index of nilpotency is two. Lemma 2.14 says
that at least one string in a disjoint string basis has length two. There are five
basis elements so if there is a disjoint string basis then the map must act in one
of these ways.
~1
7
~
3 7
~ 5 7
~ 2 7 ~0
~ 4 7 ~0
~0
~1
~3
~4
~5
7
7
7
7
~ 2 7 ~0
~0
~0
~0
Now, the key point. A transformation with the left-hand action has a null space
of dimension three since thats how many basis vectors are mapped to zero. A
transformation with the right-hand action has a null space of dimension four.
Wit the matrix representation above we can determine which of the two possible
shapes is right.
x
x
N (t) = { z | x, z, r C }
0
r
This is three-dimensional, meaning that of the two disjoint string basis forms
above, ts basis has the left-hand one.
~ 2 and
~ 4 from R(t) N (t).
To produce a string basis for t, first pick
0
0
~2 =
1
0
0
0
0
~4 =
0
0
1
~ 2,
~ 4 } is linearly inde(Other choices are possible, just be sure that the set {
412
~5 =
0
0
0
~ 1 and
~ 3 such that t(
~ 1) =
~ 2 and t(
~ 3) =
~ 4.
Finally, take
0
0
1
0
~3 =
~1 =
0
0
0
1
0
0
~ 1, . . . ,
~ 5 i and with respect to that basis
Therefore, we have a string basis B = h
the matrix of t has blocks of subdiagonal 1s.
0 0 0 0 0
1 0 0 0 0
RepB,B (t) = 0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
2.15 Theorem Any nilpotent transformation t is associated with a t-string basis.
While the basis is not unique, the number and the length of the strings is
determined by t.
This illustrates the proof, which describes three kinds of basis vectors (shown
in squares if they are in the null space and in circles if they are not).
3k 7
3k 7
..
.
k
3 7
1k7
1k7
7 1k7 1 7 ~0
7 1k7 1
7 ~0
1k7 7 1k7 1 7 ~0
7
..
.
~0
~0
413
For the inductive step, assume that the theorem holds for any transformation
t : V V with an index of nilpotency between 1 and k 1 (with k > 1) and
consider the index k case.
Observe that the restriction of t to the range space t : R(t) R(t) is also
nilpotent, of index k 1. Apply the inductive hypothesis to get a string basis
for R(t), where the number and length of the strings is determined by t.
_
~ 1 , t(
~ 1 ), . . . , th1 (
~ 1 )i h
~ 2 , . . . , t h2 (
~ 2 )i h
~ i , . . . , t hi (
~ i )i
B = h
(In the illustration above these are the vectors of kind 1.)
Note that taking the final nonzero vector in each of these strings gives a basis
~ 1 ), . . . , thi (
~ i )i for the intersection R(t) N (t). This is because a
C = hth1 (
member of R(t) maps to zero if and only if it is a linear combination of those
basis vectors that map to zero. (The illustration shows these as 1s in squares.)
Now extend C to a basis for all of N (t).
= C _ h~1 , . . . , ~p i
C
is the set of
(In the illustration the ~s are the vectors of kind 2 and so the set C
vectors in squares.) While the vectors ~ we choose arent uniquely determined
by t, what is uniquely determined is the number of them: it is the dimension of
N (t) minus the dimension of R(t) N (t).
_
is a basis for R(t) + N (t) because any sum of something in the
Finally, B C
range space with something in the null space can be represented using elements
for the part from the null space.
of B for the range space part and elements of C
Note that
dim R(t) + N (t) = dim(R(t)) + dim(N (t)) dim(R(t) N (t))
= rank(t) + nullity(t) i
= dim(V) i
_
to a basis for all of V by the addition of i more
and so we can extend B C
vectors, provided that they are not linearly dependent on what we have already.
with vectors ~v1 , . . . ,~vi
~ 1, . . . ,
~ i is in R(t), and extend B_C
Recall that each of
~
~
such that t(~v1 ) = 1 , . . . , t(~vi ) = i . (In the illustration these are the 3s.) The
check that this extension preserves linear independence is Exercise 31.
QED
2.16 Corollary Every nilpotent matrix is similar to a matrix that is all zeros except
for blocks of subdiagonal ones. That is, every nilpotent map is represented with
respect to some basis by such a matrix.
414
This form is unique in the sense that if a nilpotent matrix is similar to two
such matrices then those two simply have their blocks ordered differently. Thus
this is a canonical form for the similarity classes of nilpotent matrices provided
that we order the blocks, say, from longest to shortest.
2.17 Example The matrix
1
1
M=
1
1
power p
1
M=
M2 =
1
1
1
1
!
0 0
0 0
N (Mp )
!
{ x | x C}
x
C2
C2wrt E2 C2wrt E2
M
idyP
idyP
m
C2wrt B C2wrt B
N
1
0
1
1
415
0
1
0
1
0
0
1
1
0
0
0
1
0
1
0
0
1
0
1
0
0
0
1
is nilpotent, of index 3.
Np
power p
0
1
0
1
0
0
1
0
0
0
1
1
0
0
0
1
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
zero matrix
N (Np )
0
0
{
u v | u, v C }
u
v
0
y
{
z | y, z, u, v C}
u
v
C5
The table tells us this about any string basis: the null space after one map
application has dimension two so two basis vectors map directly to zero, the
null space after the second application has dimension four so two additional
basis vectors map to zero by the second iteration, and the null space after three
applications is of dimension five so the remaining one basis vector maps to zero
in three hops.
~ 1 7
~ 2 7
~ 3 7 ~0
~
~
4 7 5 7 ~0
To produce such a basis, first pick two vectors from N (n) that form a linearly
independent set.
0
0
0
0
~
~3 =
1
5 = 0
1
1
0
1
416
~ 2,
~ 4 N (n2 ) such that n(
~ 2) =
~ 3 and n(
~ 4) =
~ 5.
Then add
0
1
~2 =
0
0
0
0
1
~4 =
0
1
0
~ 1 such that n(
~ 1) =
~ 2.
Finish by adding
1
0
~1 =
1
0
0
Exercises
X 2.19 What is the index of nilpotency of the right-shift operator, here acting on the
space of triples of reals?
(x, y, z) 7 (0, x, y)
X 2.20 For each string basis state the index of nilpotency and give the dimension of
the range space and null space of each iteration of the nilpotent map.
~ 1 7
~ 2 7 ~0
(a)
~ 3 7
~ 4 7 ~0
~
~ 2 7
~ 3 7 ~0
(b) 1 7
~
~
4 7 0
~ 5 7 ~0
~ 6 7 ~0
~ 1 7
~ 2 7
~ 3 7 ~0
(c)
Also give the canonical form of the matrix.
2.21 Decide which of these matrices are nilpotent.
3 2 1
2 4
3 1
(a)
(b)
(c) 3 2 1
1 2
1 3
3 2 1
45 22 19
(e) 33 16 14
69 34 29
X 2.22 Find the canonical form of this matrix.
0 1 1 0 1
0 0 1 1 1
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
1
(d) 3
5
1
0
2
4
1
7
417
0 0 0
1/2 1/2
(a)
(b) 0 1 1
1/2 1/2
0 1 1
Put each in canonical form.
1
(c) 1
1
1
0
1
1
1
1
2.25 Describe the effect of left or right multiplication by a matrix that is in the
canonical form for nilpotent matrices.
2.26 Is nilpotence invariant under similarity? That is, must a matrix similar to a
nilpotent matrix also be nilpotent? If so, with the same index?
X 2.27 Show that the only eigenvalue of a nilpotent matrix is zero.
2.28 Is there a nilpotent transformation of index three on a two-dimensional space?
2.29 In the proof of Theorem 2.15, why isnt the proofs base case that the index of
nilpotency is zero?
X 2.30 Let t : V V be a linear transformation and suppose ~v V is such that
tk (~v) = ~0 but tk1 (~v) 6= ~0. Consider the t-string h~v, t(~v), . . . , tk1 (~v)i.
(a) Prove that t is a transformation on the span of the set of vectors in the string,
that is, prove that t restricted to the span has a range that is a subset of the
span. We say that the span is a t-invariant subspace.
(b) Prove that the restriction is nilpotent.
(c) Prove that the t-string is linearly independent and so is a basis for its span.
(d) Represent the restriction map with respect to the t-string basis.
2.31 Finish the proof of Theorem 2.15.
2.32 Show that the terms nilpotent transformation and nilpotent matrix, as
given in Definition 2.7, fit with each other: a map is nilpotent if and only if it is
represented by a nilpotent matrix. (Is it that a transformation is nilpotent if an
only if there is a basis such that the maps representation with respect to that basis
is a nilpotent matrix, or that any representation is a nilpotent matrix?)
2.33 Let T be nilpotent of index four. How big can the range space of T 3 be?
2.34 Recall that similar matrices have the same eigenvalues. Show that the converse
does not hold.
2.35 Lemma 2.1 shows that any for any linear transformation t : V V the restriction
t : R (t) R (t) is one-to-one. Show that it is also onto, so it is an automorphism.
Must it be the identity map?
2.36 Prove that a nilpotent matrix is similar to one that is all zeros except for blocks
of super-diagonal ones.
X 2.37 Prove that if a transformation has the same range space as null space. then
the dimension of its domain is even.
2.38 Prove that if two nilpotent matrices commute then their product and sum are
also nilpotent.
418
IV
Jordan Form
IV.1
Recall that the set of square matrices Mnn is a vector space under entry-byentry addition and scalar multiplication, and that this space has dimension n2 .
2
Thus, for any nn matrix T the n2 + 1-member set { I, T, T 2 , . . . , T n } is linearly
dependent and so there are scalars c0 , . . . , cn2 , not all zero, such that
2
cn2 T n + + c1 T + c0 I
is the zero matrix. Therefore every transformation has a kind of generalized
nilpotency: the powers of a square matrix cannot climb forever without a repeat.
1.1 Example Rotation of plane vectors /6 radians counterclockwise is represented
with respect to the standard basis by
!
3/2 1/2
T=
1/2
3/2
and verifying that 0T 4 + 0T 3 + 1T 2 2T 1I equals the zero matrix is easy.
419
420
cancel, d is of
the map or matrix to zero and since the leading terms of m and m
smaller degree than the other two. If d were to have a nonzero leading coefficient
then we could divide by it to get a polynomial that takes the map or matrix to
zero and has leading coefficient 1. This would contradict the minimality of the
Thus the leading coefficient of d is zero, so m(x) m(x)
degree of m and m.
is
the zero polynomial, and so the two are equal.
QED
1.6 Example We can compute that m(x) = x2 2x 1 is minimal for the matrix
of Example 1.1 by finding the powers of T up to n2 = 4.
!
!
!
1/2 3/2
0 1
1/2 3/2
2
3
4
T =
T =
T =
3/2
1/2
1 0
3/2 1/2
Put c4 T 4 + c3 T 3 + c2 T 2 + c1 T + c0 I equal to the zero matrix
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 + c0 = 0
=0
( 3/2)c4 + c3 + ( 3/2)c2 + (1/2)c1
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 + c0 = 0
and use Gauss Method.
c4
c
2
c3 + 3c2 +
3c1 2c0 = 0
2c1 + 3c0 = 0
Setting c4 , c3 , and c2 to zero forces c1 and c0 to also come out as zero. To get
a leading one, the most we can do is to set c4 and c3 to zero. Thus the minimal
polynomial is quadratic.
Using the method of that example to find the minimal polynomial of a 33
matrix would mean doing Gaussian reduction on a system with nine equations
in ten unknowns. We shall develop an alternative.
1.7 Lemma Suppose that the polynomial f(x) = cn xn + + c1 x + c0 factors
as k(x 1 )q1 (x z )qz . If t is a linear transformation then these two are
equal maps.
cn tn + + c1 t + c0 = k (t 1 )q1 (t z )qz
Consequently, if T is a square matrix then f(T ) and k (T 1 I)q1 (T z I)qz
are equal matrices.
Proof We use induction on the degree of the polynomial. The cases where
the polynomial is of degree zero and degree one are clear. The full induction
argument is Exercise 1.7 but we will give its sense with the degree two case.
421
= k (t2 (1 + 2 )t + 1 2 ) (~v)
The third equality holds because the scalar 2 comes out of the second term,
since t is linear.
QED
In particular, if a minimal polynomial m(x) for a transformation t factors
as m(x) = (x 1 )q1 (x z )qz then m(t) = (t 1 )q1 (t z )qz is
the zero map. Since m(t) sends every vector to zero, at least one of the maps
t i sends some nonzero vectors to zero. Exactly the same holds in the matrix
case if m is minimal for T then m(T ) = (T 1 I)q1 (T z I)qz is the zero
matrix and at least one of the matrices T i I sends some nonzero vectors to
zero. That is, in both cases at least some of the i are eigenvalues. (Exercise 29
expands on this.)
The next result is that every root of the minimal polynomial is an eigenvalue,
and further that every eigenvalue is a root of the minimal polynomial (i.e, below
it says 1 6 qi and not just 0 6 qi ). For that result, recall that to find
eigenvalues we solve |T xI| = 0 and this determinant gives a polynomial in x,
called the characteristic polynomial, whose roots are the eigenvalues.
1.8 Theorem (Cayley-Hamilton) If the characteristic polynomial of a transformation or square matrix factors into
k (x 1 )p1 (x 2 )p2 (x z )pz
then its minimal polynomial factors into
(x 1 )q1 (x 2 )q2 (x z )qz
where 1 6 qi 6 pi for each i between 1 and z.
The proof takes up the next three lemmas. We will state them in matrix terms
but they apply equally well to maps. (The matrix version is convenient for the
first proof.)
The first result is the key. For the proof, observe that we can view a matrix
422
x2 + 2
4x2 + x + 1
!
=
2
3
!
1 2
x +
4
3
4
!
0
x+
1
1 2
1 1
1.9 Lemma If T is a square matrix with characteristic polynomial c(x) then c(T )
is the zero matrix.
Proof Let C be T xI, the matrix whose determinant is the characteristic
polynomial c(x) = cn xn + + c1 x + c0 .
t1,1 x
t1,2
...
t2,2 x
t2,1
C=
..
..
.
.
tn,n x
Recall Theorem Four.III.1.9, that the product of a matrix with its adjoint equals
the determinant of the matrix times the identity.
c(x) I = adj(C)C = adj(C)(T xI) = adj(C)T adj(C) x
()
The left side of () is cn Ixn +cn1 Ixn1 + +c1 Ix+c0 I. For the right side, the
entries of adj(C) are polynomials, each of degree at most n 1 since the minors
of a matrix drop a row and column. As suggested before the proof, rewrite it
as a polynomial with matrix coefficients: adj(C) = Cn1 xn1 + + C1 x + C0
where each Ci is a matrix of scalars. Now this is the right side of ().
[(Cn1 T )xn1 + + (C1 T )x + C0 T ] [Cn1 xn Cn2 xn1 C0 x]
Equate the left and right side of ()s coefficients of xn , of xn1 , etc.
cn I = Cn1
cn1 I = Cn2 + Cn1 T
..
.
c1 I = C0 + C1 T
c0 I = C 0 T
Multiply, from the right, both sides of the first equation by T n , both sides of
423
f(x) = q(x)m(x) + r(x) where the degree of r is strictly less than the degree of
m. Because T satisfies both f and m, plugging T into that equation gives that
r(T ) is the zero matrix. That contradicts the minimality of m unless r is the
zero polynomial.
QED
Combining the prior two lemmas shows that the minimal polynomial divides
the characteristic polynomial. Thus any root of the minimal polynomial is
also a root of the characteristic polynomial. That is, so far we have that if
m(x) = (x1 )q1 (xi )qi then c(x) has the form (x1 )p1 (xi )pi (x
i+1 )pi+1 (x z )pz where each qj is less than or equal to pj . We finish
the proof of the Cayley-Hamilton Theorem by showing that the characteristic
polynomial has no additional roots, that is, there are no i+1 , i+2 , etc.
1.11 Lemma Each linear factor of the characteristic polynomial of a square matrix
is also a linear factor of the minimal polynomial.
Proof Let T be a square matrix with minimal polynomial m(x) and assume
424
any polynomial function p(x), application of the matrix p(T ) to ~v equals the
result of multiplying ~v by the scalar p().
p(T ) ~v = (ck T k + + c1 T + c0 I) ~v = ck T k~v + + c1 T~v + c0~v
= ck k~v + + c1 ~v + c0~v = p() ~v
Since m(T ) is the zero matrix, ~0 = m(T )(~v) = m() ~v for all ~v, and hence
m() = 0.
QED
That concludes the proof of the Cayley-Hamilton Theorem.
1.12 Example We can use the Cayley-Hamilton Theorem to find the minimal
polynomial of this matrix.
2 0 0 1
1 2 0 2
T =
0 0 2 1
0 0 0 1
First we find its characteristic polynomial c(x) = (x 1)(x 2)3 with the
usual determinant. Now, the Cayley-Hamilton Theorem says that T s minimal
polynomial is either (x 1)(x 2) or (x 1)(x 2)2 or (x 1)(x 2)3 . We can
decide among the choices just by computing
1 0 0 1
0 0 0 1
0 0 0 0
1 1 0 2 1 0 0 2 1 0 0 1
(T 1I)(T 2I) =
0 0 1 1 0 0 0 1 0 0 0 0
0 0 0 0
0 0 0 1
0 0 0 0
and
0
1
(T 1I)(T 2I)2 =
0
0
0
0
0
0
0
0
0
0
0
0
1 1
0 0
0
0
0
0
0
0
0 1
0
0 2 0
=
0 1 0
0
0 1
0
0
0
0
0
0
0
0
0
0
0
0
3
(a) 1
0
425
3 0 0
3 0
(b) 1 3 0
(c) 1 3
0 0 3
0 1
1 4
0 0 0
0
2 2 1
3
0 0 0
(e) 0 6 2
(f) 0 4 1 0 0
3 9 4 2 1
0 0 2
1
5
4 1 4
1.15 Find the minimal polynomial of this matrix.
0 1 0
0 0 1
1 0 0
0
3
0
0
0
4
0
0
3
2
(d) 0
0
0
6
0
1
2
2
0
1
0 1
0 ...
0
0
0
..
0
...
426
IV.2
We are looking for a canonical form for matrix similarity. This subsection
completes this program by moving from the canonical form for the classes of
nilpotent matrices to the canonical form for all classes.
2.1 Lemma A linear transformation on a nontrivial vector space is nilpotent if
and only if its only eigenvalue is zero.
Proof Let the linear transformation be t : V V. If t is nilpotent then there
427
polynomial has only zero for a root. By Cayley-Hamilton, Theorem 1.8, the
characteristic polynomial has only zero for a root. Thus the only eigenvalue of t
is zero.
Conversely, if a transformation t on an n-dimensional space has only the
single eigenvalue of zero then its characteristic polynomial is xn . Lemma 1.9
says that a map satisfies its characteristic polynomial so tn is the zero map.
Thus t is nilpotent.
QED
The nontrivial vector space is in the statement of that lemma because on a
trivial space {~0 } the only transformation is the zero map, which has no eigenvalues
because there are no associated nonzero eigenvectors.
2.2 Corollary The transformation t is nilpotent if and only if ts only eigenvalue
is .
Proof The transformation t is nilpotent if and only if t s only eigenvalue
PP1 (I) since the diagonal matrix I commutes with anything, and so N =
PT P1 I. Therefore N + I = PT P1 .
QED
2.4 Example The characteristic polynomial of
!
2 1
T=
1 4
is (x 3)2 and so T has only the single eigenvalue 3. Thus for
!
1 1
T 3I =
1
1
the only eigenvalue is 0 and T 3I is nilpotent. Finding the null spaces is routine;
to ease this computation we take T to represent a transformation t : C2 C2
with respect to the standard basis (we shall do this for the rest of the chapter).
!
y
N (t 3) = {
| y C}
N ((t 3)2 ) = C2
y
428
The dimension of each null space shows that the action of the map t 3 on a
~ 1 7
~ 2 7 ~0. Thus, here is the canonical form for t 3 with one
string basis is
choice for a string basis.
!
!
!
0 0
1
2
RepB,B (t 3) = N =
B=h
,
i
1 0
1
2
By Lemma 2.3, T is similar to this matrix.
RepB,B (t) = N + 3I =
3
1
0
3
We can produce the similarity computation. Recall how to find the change of
basis matrices P and P1 to express N as P(T 3I)P1 . The similarity diagram
t3
C2wrt E2 C2wrt E2
T 3I
idyP
idyP
t3
C2wrt B C2wrt B
N
describes that to move from the lower left to the upper left we multiply by
!
1
1 2
1
P = RepE2 ,B (id)
= RepB,E2 (id) =
1 2
and to move from the upper right to the lower right we multiply by this matrix.
P=
1
1
2
2
!1
=
1/2
1/4
1
4
1/2
1/4
1
1
2
2
4 1 0 1
0 3 0 1
T =
0 0 4 0
1 0 0 5
and so has the single eigenvalue 4. The null space of t 4 has dimension two,
the null space of (t 4)2 has dimension three, and the null space of (t 4)3 has
429
~ 1 7
~ 2 7
~ 3 7 ~0
dimension four. Thus, t4 has the action on a string basis of
~ 4 7 ~0. This gives the canonical form N for t 4, which in turn gives the
and
form for t.
4 0 0 0
1 4 0 0
N + 4I =
0 1 4 0
0 0 0 4
An array that is all zeroes, except for some number down the diagonal and
blocks of subdiagonal ones, is a Jordan block. We have shown that Jordan block
matrices are canonical representatives of the similarity classes of single-eigenvalue
matrices.
2.6 Example The 33 matrices whose only eigenvalue is 1/2 separate into three
similarity classes. The three classes have these canonical representatives.
1/2
0
0
0
1/2
0
0
1/2
1/2
1
0
0
1/2
0
0
1/2
1/2
0
0
0
1/2
1
0
1/2
1/2
0
1/2
1
0
1
0
1/2
belongs to the similarity class represented by the middle one, because we have
adopted the convention of ordering the blocks of subdiagonal ones from the
longest block to the shortest.
We will finish the program of this chapter by extending this work to cover
maps and matrices with multiple eigenvalues. The best possibility for general
maps and matrices would be if we could break them into a part involving their
first eigenvalue 1 (which we represent using its Jordan block), a part with 2 ,
etc.
This best possibility is what happens. For any transformation t : V V, we
shall break the space V into the direct sum of a part on which t 1 is nilpotent,
a part on which t 2 is nilpotent, etc.
Suppose that t : V V is a linear transformation. The restriction of t to a
subspace M need not be a linear transformation on M because there may be an
m
~ M with t(m)
~ 6 M (for instance, the transformation that rotates the plane
by a quarter turn does not map most members of the x = y line subspace back
within that subspace). To ensure that the restriction of a transformation to a
part of a space is a transformation on the part we need the next condition.
430
The second step of the three that we will take to prove this sections major
result makes use of an additional property of N (t i ) and R (t i ), that
they are complementary. Recall that if a space is the direct sum of two others
V = N R then any vector ~v in the space breaks into two parts ~v = n
~ +~r where
n
~ N and ~r R, and recall also that if BN and BR are bases for N and R
_
then the concatenation BN BR is linearly independent. The next result says
that for any subspaces N and R that are complementary as well as t invariant,
the action of t on ~v breaks into the actions of t on n
~ and on ~r.
431
T2
} dim(R)-many rows
..
..
.
.
RepB,B (t) =
RepB (t(~1 )) RepB (t(~q ))
..
..
.
.
has the desired form.
Any vector ~v V is a member of N if and only if when it is represented
with respect to B the final q coefficients are zero. As N is t invariant, each of
the vectors RepB (t(~1 )), . . . , RepB (t(~p )) has this form. Hence the lower left
of RepB,B (t) is all zeroes. The argument for the upper right is similar. QED
To see that we have decomposed t into its action on the parts, let BN =
h~1 , . . . , ~p i and BR = h~1 , . . . , ~q i. The restrictions of t to the subspaces N
and R are represented with respect to the bases BN , BN and BR , BR by the
matrices T1 and T2 . So with subspaces that are invariant and complementary
we can split the problem of examining a linear transformation into two lowerdimensional subproblems. The next result illustrates this decomposition into
blocks.
2.10 Lemma If T is a matrix with square submatrices T1 and T2
!
T1 Z 2
T=
Z 1 T2
where the Zs are blocks of zeroes, then |T | = |T1 | |T2 |.
Proof Suppose that T is nn, that T1 is pp, and that T2 is qq. In the
432
t1,1 (1) tp,1 (p) sgn(1 )
perms 1
of 1,...,p
tp+1,2 (p+1) tp+q,2 (p+q) sgn(2 )
perms 2
of p+1,...,p+q
equals |T | =
tn,(n) sgn().
QED
2.11 Example
2
1
0
0
0
2
0
0
0
0
3
0
0
0 2
=
0 1
3
0 3
2 0
0
= 36
3
From Lemma 2.10 we conclude that if two subspaces are complementary and
t invariant then t is one-to-one if and only if its restriction to each subspace is
nonsingular.
Now for the promised third, and final, step to the main result.
2.12 Lemma If a linear transformation t : V V has the characteristic polynomial
(x 1 )p1 . . . (x k )pk then (1) V = N (t 1 ) N (t k ) and
(2) dim(N (t i )) = pi .
Proof This argument consists of proving two preliminary claims, followed by
433
any eigenvalues on the intersection then the only eigenvalue is both i and j .
This cannot be, so the restriction has no eigenvalues: N (t i ) N (t j )
is the trivial space (Lemma 3.10 shows that the only transformation that is
without any eigenvalues is the transformation on the trivial space).
The second claim is that N (t i ) R (t j ), where i 6= j. To verify it
we will show that t j is one-to-one on N (t i ) so that, since N (t i )
is t j invariant by Lemma 2.8, the map t j is an automorphism of the
subspace N (t i ) and therefore that N (t i ) is a subset of each R(t j ),
R((t j )2 ), etc. For the verification that the map is one-to-one suppose that
~v N (t i ) is in the null space of t j , aiming to show that ~v = ~0.
Consider the map [(t i ) (t j )]n . On the one hand, the only vector that
(t i ) (t j ) = i j maps to zero is the zero vector. On the other hand,
as in the proof of Lemma 1.7 we can apply the binomial expansion to get this.
n
n
n
n1
1
(t i ) (~v) +
(t i )
(t j ) (~v) +
(t i )n2 (t j )2 (~v) +
1
2
The first term is zero because ~v N (t i ) while the remaining terms are
zero because ~v is in the null space of t j . Therefore ~v = ~0.
With those two preliminary claims done we can prove clause (1), that the space
is the direct sum of the generalized null spaces. By Corollary III.2.2 the space is
the direct sum V = N (t 1 ) R (t 1 ). By the second claim N (t 2 )
R (t 1 ) and so we can get a basis for R (t 1 ) by starting with a basis for
N (t 2 ) and adding extra basis elements taken from R (t 1 ) R (t 2 ).
Thus V = N (t 1 ) N (t 2 ) (R (t 1 ) R (t 2 )). Continuing
in this way we get this.
V = N (t 1 ) R (t k ) (R (t 1 ) R (t k ))
The first claim above shows that the final space is trivial.
We finish by verifying clause (2). Decompose V as N (t i ) R (t i )
and apply Lemma 2.9.
T=
T1
Z2
Z1
T2
Lemma 2.10 says that |T xI| = |T1 xI| |T2 xI|. By the uniqueness clause
of the Fundamental Theorem of Algebra, Theorem I.1.11, the determinants of
the blocks have the same factors as the characteristic polynomial |T1 xI| =
(x1 )q1 (xz )qk and |T2 xI| = (x1 )r1 (xz )rk , where q1 +r1 = p1 ,
. . . , qk + rk = pk . We will finish by establishing that (i) qj = 0 for all j 6= i,
and (ii) qi = pi . Together these prove clause (2) because they show that the
434
degree of the polynomial |T1 xI| is qi and the degree of that polynomial equals
the dimension of the generalized null space N (t i ).
For (i), because the restriction of t i to N (t i ) is nilpotent on that
space, ts only eigenvalue on that space is i , by Lemma 2.2. So qj = 0 for j 6= i.
For (ii), consider the restriction of t to R (t i ). By Lemma III.2.1, the
map t i is one-to-one on R (t i ) and so i is not an eigenvalue of t on
that subspace. Therefore x i is not a factor of |T2 xI|, so ri = 0, and so
q i = pi .
QED
Recall the goal of this chapter, to give a canonical form for matrix similarity.
That result is next. It translates the above steps into matrix terms.
2.13 Theorem Any square matrix is similar to one in Jordan form
J1
zeroes
J2
..
Jk1
zeroes
Jk
where each J is the Jordan block associated with an eigenvalue of the original
matrix (that is, each J is all zeroes except for s down the diagonal and some
subdiagonal ones).
Proof Given an nn matrix T , consider the linear map t : Cn Cn that it
represents with respect to the standard bases. Use the prior lemma to write
Cn = N (t 1 ) N (t k ) where 1 , . . . , k are the eigenvalues of t.
Because each N (t i ) is t invariant, Lemma 2.9 and the prior lemma show
that t is represented by a matrix that is all zeroes except for square blocks along
the diagonal. To make those blocks into Jordan blocks, pick each Bi to be a
string basis for the action of t i on N (t i ).
QED
2.14 Corollary Every square matrix is similar to the sum of a diagonal matrix
and a nilpotent matrix.
For Jordan form a canonical form for matrix similarity, strictly speaking it
must be unique. That is, for any square matrix there needs to be one and only
one matrix J similar to it and of the specified form. As stated the theorem allows
us to rearrange the Jordan blocks. We could make this form unique, say by
arranging the Jordan blocks so the eigenvalues are in order, and then arranging
the blocks of subdiagonal ones from longest to shortest. Below, we wont bother
with that.
435
2.15 Example This matrix has the characteristic polynomial (x 2)2 (x 6).
2 0 1
T = 0 6 2
0 0 2
First we do the eigenvalue 2. Computation of the powers of T 2I, and of the
null spaces and nullities, is routine. (Recall from Example 2.4 our convention of
taking T to represent a transformation t : C3 C3 with respect to the standard
basis.)
p
(T
2I)p
0 1
4 2
0 0
0 0
16 8
0 0
0
0
64 32
0
0
N ((t 2)p )
x
{ 0 | x C}
0
{ z/2 | x, z C }
nullity
same
same
So the generalized null space N (t 2) has dimension two. We know that the
restriction of t 2 is nilpotent on this subspace. From the way that the nullities
~ 1 7
~ 2 7 ~0. Thus
grow we know that the action of t 2 on a string basis is
we can represent the restriction in the canonical form
!
1
2
0 0
N2 =
= RepB,B (t 2)
B2 = h 1 , 0 i
1 0
2
0
(other choices of basis are possible). Consequently, the action of the restriction
of t to N (t 2) is represented by this matrix.
!
2 0
J2 = N2 + 2I = RepB2 ,B2 (t) =
1 2
The second eigenvalue is 6. Its computations are easier. Because the power of
x 6 in the characteristic polynomial is one, the restriction of t 6 to N (t 6)
436
must be nilpotent, of index one (it cant be of index less than one and since
x 6 is a factor of the characteristic polynomial with the exponent one it cant
~ 3 7 ~0
be of index more than one either). Its action on a string basis must be
and since it is the zero map, its canonical form N6 is the 1 1 zero matrix.
Consequently, the canonical form J6 for the action of t on N (t 6) is the 11
matrix with the single entry 6. For the basis we can use any nonzero vector
from the generalized null space.
0
B6 = h1i
0
Taken together, these two give that the Jordan form of T is
2 0 0
RepB,B (t) = 1 2 0
0 0 6
where B is the concatenation of B2 and B6 .
2.16 Example As a contrast with the prior example, this matrix
2 2 1
T = 0 6 2
0 0 2
has the same characteristic polynomial (x 2)2 (x 6), but here
(T 6I)p
4 3 1
0 0 2
0 0 4
16 12 2
0
8
0
0
0
16
N ((t 6)p )
{
(4/3)x | x C }
0
nullity
same
437
So the contrast with the prior example is that while the characteristic
polynomial tells us to look at the action of t 2 on its generalized null space, the
characteristic polynomial does not completely describe t 2s action. We must
do some computations to find that the minimal polynomial is (x 2)(x 6).
For the eigenvalue 6 the arguments for the second eigenvalue of the prior
example apply again. The restriction of t 6 to N (t 6) is nilpotent of index
one. Thus t 6s canonical form N6 is the 11 zero matrix, and the associated
Jordan block J6 is the 11 matrix with entry 6.
Therefore the Jordan form for T is a diagonal matrix.
2 0 0
1
0
2
_
RepB,B (t) = 0 2 0
B = B2 B6 = h0 , 1 , 4i
0 0 6
0
2
0
(Checking that the third vector in B is in the null space of t 6 is routine.)
2.17 Example A bit of computing with
1 4
0
3
T = 0 4
3 9
1
5
0
0
1
4
4
0
0
0
2
1
0
0
1
4
N ((t 3)p )
nullity
4 4
0
0
0
(u + v)/2
0
0
0
0
0
(u + v)/2
2
0 (u + v)/2 | u, v C}
0 4 4 0
3 9 4 1 1
u
1
5
4
1
1
v
16 16
0
0 0
z
0
z
0
0
0
0
{
3
16
16 0 0
0
z | z, u, v C}
16 32
u
16 0 0
0
16 16 0 0
v
64
64
0
0 0
0
0
0
0 0
same
same
64 64 0 0
0
64 128 64 0 0
0
64
64 0 0
(T 3I)p
438
16
16
16
40
16
24
16
24
N ((t + 1)p )
(u + v)
{ v | u, v C}
nullity
same
same
gives that the restriction of t + 1 to its generalized null space acts on a string
~ 4 7 ~0 and
~ 5 7 ~0.
basis via the two separate strings
Therefore T is similar to this Jordan form matrix.
1 0 0 0 0
0 1 0 0 0
0 3 0 0
0
0
0 1 3 0
0
0 0 0 3
Exercises
2.18 Do the check for Example 2.4.
2.19 Each matrix is in Jordan form. State its characteristic polynomial and its
minimal polynomial.
2 0
0
3 0 0
3 0
1 0
(a)
(b)
(c) 1 2
(d) 1 3 0
0
1 3
0 1
0 0 1/2
0 1 3
3 0 0 0
4 0 0
0
5 0 0
1 3 0 0
1 4 0
(e)
(f)
(g) 0 2 0
0 0 3 0
0 0 4 0
0 0 3
0 0 1 3
0 0 1 4
5 0 0 0
5 0 0 0
0 2 0 0
0 2 0 0
(h)
(i)
0 0 2 0
0 1 2 0
0 0 0 3
0 0 0 3
439
4 0 0
5
4
3
10 4
5 4
(a)
(b)
(c) 2 1 3
(d) 1 0 3
25 10
9 7
5 0 4
1 2 1
7 1 2
2
9
7
3
2
2 1
1 4 1 1
(e) 9 7 4
(f) 1 1 1
(g)
2 1 5 1
4
4
4
1 2 2
1 1 2
8
X 2.23 Find all possible Jordan forms of a transformation with characteristic polynomial
(x 1)2 (x + 2)2 .
2.24 Find all possible Jordan forms of a transformation with characteristic polynomial
(x 1)3 (x + 2).
X 2.25 Find all possible Jordan forms of a transformation with characteristic polynomial
(x 2)3 (x + 1) and minimal polynomial (x 2)2 (x + 1).
2.26 Find all possible Jordan forms of a transformation with characteristic polynomial
(x 2)4 (x + 1) and minimal polynomial (x 2)2 (x + 1).
X 2.27 Diagonalize
these.
1 1
0 1
(a)
(b)
0 0
1 0
X 2.28 Find the Jordan matrix representing the differentiation operator on P3 .
X 2.29 Decide if these two are similar.
1 1
4 3
1
1
0
1
440
2.35 Prove or disprove: two nn matrices are similar if and only if they have the
same characteristic and minimal polynomials.
2.36 The trace of a square matrix is the sum of its diagonal entries.
(a) Find the formula for the characteristic polynomial of a 22 matrix.
(b) Show that trace is invariant under similarity, and so we can sensibly speak of
the trace of a map. (Hint: see the prior item.)
(c) Is trace invariant under matrix equivalence?
(d) Show that the trace of a map is the sum of its eigenvalues (counting multiplicities).
(e) Show that the trace of a nilpotent map is zero. Does the converse hold?
2.37 To use Definition 2.7 to check whether a subspace is t invariant, we seemingly
have to check all of the infinitely many vectors in a (nontrivial) subspace to see if
they satisfy the condition. Prove that a subspace is t invariant if and only if its
~ is in the subspace.
subbasis has the property that for all of its elements, t()
X 2.38 Is t invariance preserved under intersection? Under union? Complementation?
Sums of subspaces?
2.39 Give a way to order the Jordan blocks if some of the eigenvalues are complex
numbers. That is, suggest a reasonable ordering for the complex numbers.
2.40 Let Pj (R) be the vector space over the reals of degree j polynomials. Show
that if j 6 k then Pj (R) is an invariant subspace of Pk (R) under the differentiation
operator. In P7 (R), does any of P0 (R), . . . , P6 (R) have an invariant complement?
2.41 In Pn (R), the vector space (over the reals) of degree n polynomials,
E = { p(x) Pn (R) | p(x) = p(x) for all x }
and
O = { p(x) Pn (R) | p(x) = p(x) for all x }
are the even and the odd polynomials; p(x) = x2 is even while p(x) = x3 is odd.
Show that they are subspaces. Are they complementary? Are they invariant under
the differentiation transformation?
2.42 Lemma 2.9 says that if M and N are invariant complements then t has a
representation in the given block form (with respect to the same ending as starting
basis, of course). Does the implication reverse?
2.43 A matrix S is the square root of another T if S2 = T . Show that any nonsingular
matrix has a square root.
Topic
Method of Powers
In applications matrices can be large. Calculating eigenvalues and eigenvectors
by finding and solving the characteristic polynomial is impractical, too slow and
too error-prone. Some techniques avoid the characteristic polynomial. Here we
shall see a method that is suitable for large matrices that are sparse, meaning
that the great majority of the entries are zero.
Suppose that the nn matrix T has n distinct eigenvalues 1 , 2 , . . . , n .
Then Cn has a basis made of the associated eigenvectors h~1 , . . . , ~n i. For any
~v Cn , writing ~v = c1 ~1 + + cn ~n and iterating T on ~v gives these.
T~v = c1 1 ~1 + c2 2 ~2 + + cn n ~n
T 2~v = c1 21 ~1 + c2 22 ~2 + + cn 2n ~n
T 3~v = c1 31 ~1 + c2 32 ~2 + + cn 3n ~n
..
.
T k~v = c1 k1 ~1 + c2 k2 ~2 + + cn kn ~n
Assuming that |1 | is the largest and dividing through
k
T k~v
k ~
= c1 ~1 + c2 2k ~2 + + cn n
n
k
1
1
k1
shows that as k gets larger the fractions go to zero and so 1 s term will dominate
the expression and that expression has a limit of c1 ~1 .
Thus if c1 6= 0, as k increases the vectors T k~v will tend toward the direction
of the eigenvectors associated with the dominant eigenvalue. Consequently, the
ratios of the vector lengths |T k~v|/|T k1~v| tend to that dominant eigenvalue.
For example, the eigenvalues of the matrix
!
3 0
T=
8 1
are 3 and 1. If ~v has the components 1 and 1 then iterating gives this.
442
T~v
!
3
7
T 2~v
!
9
17
T 9~v
19 683
39 367
T 10~v
!
59 049
118 097
443
such as tridiagonal form, where the only nonzero entries are on the diagonal, or
just above or below it. Then special case techniques can find the eigenvalues.
Once we know the eigenvalues then we can easily compute the eigenvectors of T .
These other methods are outside of our scope. A good reference is [Goult, et al.]
Exercises
1 Use ten iterations to estimate the largest eigenvalue of these matrices, starting
from the vector with components 1 and 2. Compare the answer with the one
obtained
solving the
equation.
by
characteristic
1 5
3 2
(a)
(b)
0 4
1 0
2 Redo the prior exercise by iterating until |~vk | |~vk1 | has absolute value less than
0.01 At each step, normalize by dividing each vector by its length. How many
iterations does it take? Are the answers significantly different?
3 Use ten iterations to estimate the largest eigenvalue of these matrices, starting
from the vector with components 1, 2, and 3. Compare the answer with the one
obtained by solving the characteristic equation.
4 0 1
1 2
2
(a) 2 1 0
(b) 2
2
2
2 0 1
3 6 6
4 Redo the prior exercise by iterating until |~vk | |~vk1 | has absolute value less than
0.01. At each step, normalize by dividing each vector by its length. How many
iterations does it take? Are the answers significantly different?
5 What happens if c1 = 0? That is, what happens if the initial vector does not to
have any component in the direction of the relevant eigenvector?
6 How can we adapt the method of powers to find the smallest eigenvalue?
Computer Code
This is the code for the computer algebra system Octave that did the calculation
above. (It has been lightly edited to remove blank lines, etc.)
>T=[3, 0;
8, -1]
T=
3
0
8 -1
>v0=[1; 2]
v0=
1
1
>v1=T*v0
v1=
3
7
>v2=T*v1
v2=
9
17
>T9=T**9
T9=
19683 0
444
39368 -1
>T10=T**10
T10=
59049 0
118096 1
>v9=T9*v0
v9=
19683
39367
>v10=T10*v0
v10=
59049
118096
>norm(v10)/norm(v9)
ans=2.9999
Remark. This does not use the full power of Octave; it has built-in functions to
automatically apply sophisticated methods to find eigenvalues and eigenvectors.
Topic
Stable Populations
Imagine a reserve park with animals from a species that we are protecting. The
park doesnt have a fence so animals cross the boundary, both from the inside
out and from the outside in. Every year, 10% of the animals from inside of the
park leave and 1% of the animals from the outside find their way in. Can we
reach a stable level; are there populations for the park and the rest of the world
that will stay constant over time, with the number of animals leaving equal to
the number of animals entering?
Let pn be the year n population in the park and let rn be the population in
the rest of the world.
pn+1 = .90pn + .01rn
rn+1 = .10pn + .99rn
We have this matrix equation.
pn+1
rn+1
!
=
.90
.10
!
!
.01
pn
.99
rn
The population will be stable if pn+1 = pn and rn+1 = rn so that the matrix
equation ~vn+1 = T~vn becomes ~v = T~v. We are therefore looking for eigenvectors
for T that are associated with the eigenvalue = 1. The equation ~0 = (IT )~v =
(I T )~v is
!
!
!
0.10 0.01
p
0
=
0.10 0.01
r
0
and gives the eigenspace of vectors with the restriction that p = .1r. For
example, if we start with a park population p = 10 000 animals and a rest of the
world population of r = 100 000 animals then every year ten percent of those
inside leave the park (this is a thousand animals), and every year one percent of
those from the rest of the world enter the park (also a thousand animals). The
population is stable, self-sustaining.
446
Now imagine that we are trying to raise the total world population of this
species. We are trying to have the world population grow at 1% per year.
This makes the population level stable in some sense, although it is a dynamic
stability, in contrast to the static population level of the = 1 case. The equation
~vn+1 = 1.01 ~vn = T~vn leads to ((1.01I T )~v = ~0, which gives this system.
!
!
!
0.11 0.01
p
0
=
0.10 0.02
r
0
This matrix is nonsingular and so the only solution is p = 0, r = 0. Thus there is
no nontrivial initial population that would lead to a regular annual one percent
growth rate in p and r.
We can look for the rates that allow an initial population for the park that
results in a steady growth behavior. We consider ~v = T~v and solve for .
.9
.01
0=
= ( .9)( .99) (.10)(.01) = 2 1.89 + .89
.10
.99
We already know that = 1 is one solution of this characteristic equation. The
other is 0.89. Thus there are two ways to have a dynamically stable p and r,
where the two grow at the same rate despite the leaky park boundaries: have a
world population that is does not grow or shrink, and have a world population
that shrinks by 11% every year.
So one way to look at eigenvalues and eigenvectors is that they give a stable
state for a system. If the eigenvalue is one then the system is static and if the
eigenvalue isnt one then it is a dynamic stability.
Exercises
1 For the park discussed above, what should be the initial park population in the
case where the populations decline by 11% every year?
2 What will happen to the population of the park in the event of a growth in world
population of 1% per year? Will it lag the world growth, or lead it? Assume
that the initial park population is ten thousand, and the world population is one
hundred thousand, and calculate over a ten year span.
3 The park discussed above is partially fenced so that now, every year, only 5% of
the animals from inside of the park leave (still, about 1% of the animals from the
outside find their way in). Under what conditions can the park maintain a stable
population now?
4 Suppose that a species of bird only lives in Canada, the United States, or in Mexico.
Every year, 4% of the Canadian birds travel to the US, and 1% of them travel to
Mexico. Every year, 6% of the US birds travel to Canada, and 4% go to Mexico.
From Mexico, every year 10% travel to the US, and 0% go to Canada.
(a) Give the transition matrix.
(b) Is there a way for the three countries to have constant populations?
Topic
Page Ranking
Imagine that you are looking for the best book on Linear Algebra. You probably
would try a web search engine such as Google. These lists pages ranked by importance. The ranking is defined, as Googles founders have said in [Brin & Page],
that a page is important if other important pages link to it: a page can have
a high PageRank if there are many pages that point to it, or if there are some
pages that point to it and have a high PageRank. But isnt that circular
how can they tell whether a page is important without first deciding on the
important pages? With eigenvalues and eigenvectors.
We will present a simplified version of the Page Rank algorithm. For that
we will model the World Wide Web as a collection of pages connected by links.
This diagram, from [Wills], shows the pages as circles, and the links as arrows;
for instance, page p1 has a link to page p2 .
p1
p2
p4
p3
The key idea is that pages that should be highly ranked if they are cited often
by other pages. That is, we raise the importance of a page pi if it is linked-to
from page pj . The increment depends on the importance of the linking page pj
divided by how many out-links aj are on that page.
I(pi ) =
X
in-linking pages pj
I(pj )
aj
448
0 0
1 0
0 1
0 0
1/3
1/3
0
1/3
0
0
0
0
0 0 1/3 1/4
1 0 1/3 1/4
H=
0 1
0
1/4
0 0 1/3 1/4
We will find vector ~I whose components are the importance rankings of each
page I(pi ). With this notation, our requirements for the page rank are that
H~I = ~I. That is, we want an eigenvector of the matrix associated with the
eigenvalue = 1.
Here is Sages calculation of the eigenvectors (slightly edited to fit on the
page).
sage: H=matrix([[0,0,1/3,1/4], [1,0,1/3,1/4], [0,1,0,1/4], [0,0,1/3,1/4]])
sage: H.eigenvectors_right()
[(1, [
(1, 2, 9/4, 1)
], 1), (0, [
(0, 1, 3, -4)
], 1), (-0.3750000000000000? - 0.4389855730355308?*I,
[(1, -0.1250000000000000? + 1.316956719106593?*I,
-1.875000000000000? - 1.316956719106593?*I, 1)], 1),
(-0.3750000000000000? + 0.4389855730355308?*I,
[(1, -0.1250000000000000? - 1.316956719106593?*I,
-1.875000000000000? + 1.316956719106593?*I, 1)], 1)]
The eigenvector that Sage gives associated with the eigenvalue = 1 is this.
1
2
9/4
1
449
Of course, there are many vectors in that eigenspace. To get a page rank number
we normalize to length one.
sage: v=vector([1, 2, 9/4, 1])
sage: v/v.norm()
(4/177*sqrt(177), 8/177*sqrt(177), 3/59*sqrt(177), 4/177*sqrt(177))
sage: w=v/v.norm()
sage: w.n()
(0.300658411201132, 0.601316822402263, 0.676481425202546, 0.300658411201132)
So we rank the first and fourth pages as of equal importance. We rank the
second and third pages as much more important than those, and about equal in
importance as each other.
Well add one more refinement. We will allow the surfer to pick a new
page at random even if they are not on a dangling page. Let this happen with
probability .
0
1
G=
0
0
0
0
1
0
1/3
1/3
0
1/3
1/4
1/4
1/4
1/4
+ (1 )
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
1/4
p1
p2
p3
p4
0.85
0.325
0.602
0.652
0.325
0.90
0.317
0.602
0.661
0.317
0.95
0.309
0.602
0.669
0.309
0.99
0.302
0.601
0.675
0.302
The details of the algorithms used by commercial search engines are secret, no doubt have many refinements, and also change frequently. But the
inventors of Google were gracious enough to outline the basis for their work in
[Brin & Page]. A more current source is [Wikipedia, Google Page Rank]. Two
additional excellent expositions are [Wills] and [Austin].
Exercises
1 A square matrix is stochastic if the sum of the entries in each column is one. The
Google matrix is computed by taking a combination G = H + (1 ) S of two
stochastic matrices. Show that G must be stochastic.
450
2 For this web of pages, the importance of each page should be equal. Verify it for
= 0.85.
p1
p2
p4
p3
3 [Bryan & Leise] Give the importance ranking for this web of pages.
p1
p2
p4
p3
Topic
Linear Recurrences
In 1202 Leonardo of Pisa, known as Fibonacci, posed this problem.
A certain man put a pair of rabbits in a place surrounded on all sides
by a wall. How many pairs of rabbits can be produced from that
pair in a year if it is supposed that every month each pair begets a
new pair which from the second month on becomes productive?
This moves past an elementary exponential growth model for populations to
include that newborns are not fertile for some period, here a month. However,
it retains other simplifying assumptions such as that there is an age after which
the rabbits are infertile.
To get next months total number of pairs we add the number of pairs alive
going into next month to the number of pairs that will be newly born next
month. The latter equals the number of pairs that will be productive going into
next month, which is the number that next month will have been alive for at
least two months.
where F(0) = 0, F(1) = 1
()
On the left is a recurrence relation. It gets that name because F recurs in its
own defining equation. On the right are the initial conditions. From () we can
compute F(2), F(3), etc., to work up to the answer for Fibonaccis question.
month n
pairs F(n)
0
0
1
1
2
1
3
2
4
3
5
5
6
8
7
13
8
21
9
34
10
55
11
89
12
144
We will use linear algebra to get a formula that calculates F(n) without having
to first calculate the intermediate values F(2), F(3), etc.
We start by giving () a matrix formulation.
!
!
!
!
!
F(n)
1 1
F(n 1)
F(1)
1
=
where
=
F(n 1)
1 0
F(n 2)
F(0)
0
452
Write T for the matrix and ~vn for the vector with components F(n) and F(n 1)
so that ~vn = T n1~v1 for n > 1. If we diagonalize T then we have a fast way
to compute its powers: where T = PDP1 then T n = PDn P1 and the n-th
power of the diagonal matrix D is the diagonal matrix whose entries are the
n-th powers of the entries of D.
The characteristic equation of T is 2 1 = 0. The quadratic formula
gives its roots as (1 + 5)/2 and (1 5)/2. (These are sometimes called golden
ratios; see [Falbo].) Diagonalizing gives this.
1
1
1
0
1+ 5
2
!
=
!
1 5
2
1+ 5
2
1 5
2
1
5
1
5)
( 1
2 5
1+ 5
2 5
1+ 5
2
!n1
!
1
f(1)
=
0
f(0)
n1
!
1+ 5
1 5
0
2
2
n1
1 5
1
0
2
1
1
1+ 5
1
1+
5
5
F(n)
2
2
2
=
F(n 1)
1
1
0
1
=
5
n
1+ 5
2n1
1+ 5
2
1
=
5
!
1 5
1
5)
( 1
2 5
1+ 5
2 5
0
1 5
2
n1
1
5
15
1
0
n1
1+ 5
2
n1
1 5
2
n
1 5
2
n1
1 5
2
1+ 5
2
1
5
1
"
!n
1+ 5
!n #
1 5
2
This formula gives the value of any member of the sequence without having to
first find the intermediate values.
Because (1 5)/2 0.618 has absolute value less than one, its powers
go to zero and so the F(n) formula is dominated by its first term. Although we
453
454
We can find the dimension of S. Where k is the order of the recurrence, consider
this map from the set of functions S to the set of k-tall vectors.
f(0)
f(1)
f 7
..
f(k 1)
Exercise 4 shows that this is linear. Any solution of the recurrence is uniquely
determined by the k-many initial conditions so this map is one-to-one and onto.
Thus it is an isomorphism, and S has dimension k.
So we can describe the set of solutions of our linear homogeneous recurrence
relation of order k by finding a basis consisting of k-many linearly independent
functions. To produce those we give the recurrence a matrix formulation.
1
f(n)
0
0
...
0
0
f(n 1)
1
0
f(n 1) 0
f(n 2)
=
..
..
0
1
.
.
..
..
..
..
.
.
f(n k + 1)
.
. f(n k)
0
...
an1
an2
an3
leads us to expect, and Exercise 5 verifies, that this is the characteristic equation.
an1 an2 an3 . . . ank+1 ank
1
0
...
0
0
0
1
0=
0
0
1
..
..
..
..
.
.
.
.
0
0
0
...
1
= (k + an1 k1 + an2 k2 + + ank+1 + ank )
455
The is not relevant to find the roots so we drop it. We say that the polynomial
k + an1 k1 + an2 k2 + + ank+1 + ank is associated with the
recurrence relation.
If the characteristic equation has no repeated roots then the matrix is
diagonalizable and we can, in theory, get a formula for f(n), as in the Fibonacci
case. But because we know that the subspace of solutions has dimension k we
do not need to do the diagonalization calculation, provided that we can exhibit
k different linearly independent functions satisfying the relation.
Where r1 , r2 , . . . , rk are the distinct roots, consider the functions of powers
n
of those roots, fr1 (n) = rn
1 through frk (n) = rk . Exercise 6 shows that each is
a solution of the recurrence and that they form a linearly independent set. So, if
the roots of the associated polynomial are distinct, any solution of the relation
n
n
has the form f(n) = c1 rn
1 + c2 r2 + + ck rk for some scalars c1 , . . . , cn . (The
case of repeated roots is similar but we wont cover it here; see any text on
Discrete Mathematics.)
Now we bring in the initial conditions. Use them to solve for c1 , . . . , cn . For
instance, the polynomial associated with the Fibonacci relation is 2 + + 1,
whose roots are r1 = (1 + 5)/2 and r2 = (1 5)/2 and so any solution of the
Fibonacci recurrence has the form f(n) = c1 ((1 + 5)/2)n + c2 ((1 5)/2)n .
Use the Fibonacci initial conditions for n = 0 and n = 1
c1 +
c2 = 0
(1 + 5/2)c1 + (1 5/2)c2 = 1
456
We put aside the question of why the priests dont sit down for a while and
have the world last a little longer, and instead ask how many disk moves it will
take. Before tackling the sixty four disk problem we will consider the problem
for three disks.
To begin, all three disks are on the same needle.
After the three moves of taking the small disk to the far needle, the mid-sized
disk to the middle needle, and then the small disk to the middle needle, we have
this.
Now we can move the big disk to the far needle. Then to finish we repeat the
three-move process on the two smaller disks, this time so that they end up on
the third needle, on top of the big disk.
That sequence of moves is the best that we can do. To move the bottom disk
at a minimum we must first move the smaller disks to the middle needle, then
move the big one, and then move all the smaller ones from the middle needle to
the ending needle. Since this minimum suffices, we get this recurrence.
T (n) = T (n 1) + 1 + T (n 1) = 2T (n 1) + 1
where T (1) = 1
1
1
2
3
3
7
4
15
5
31
6
63
7
127
8
255
9
511
10
1023
Of course, these numbers are one less than a power of two. To derive this write
the original relation as 1 = T (n)+2T (n1). Consider 0 = T (n)+2T (n1),
457
Computer Code
This code generates the first few values of a function defined by a recurrence and initial conditions. It is in the Scheme dialect of LISP, specifically,
[Chicken Scheme].
458
After loading an extension that keeps the computer from switching to floating
point numbers when the integers get large, the Tower of Hanoi function is
straightforward.
(require-extension numbers)
(define (tower-of-hanoi-moves n)
(if (= n 1)
1
(+ (* (tower-of-hanoi-moves (- n 1))
2)
1) ) )
; Two helper funcitons
(define (first-few-outputs proc n)
(first-few-outputs-aux proc n '()) )
(define (first-few-outputs-aux proc n lst)
(if (< n 1)
lst
(first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) )
(For readers unused to recursive code: to compute T (64), the computer wants to
compute 2 T (63) 1, which requires computing T (63). The computer puts the
times 2 and the plus 1 aside for a moment. It computes T (63) by using this
same piece of code (thats what recursive means), and to do that it wants to
compute 2 T (62) 1. This keeps up until, after 63 steps, the computer tries to
compute T (1). It then returns T (1) = 1, which allows the computation of T (2)
to proceed, etc., until the original computation of T (64) finishes.)
The helper functions give a table of the first few values. Here is the session
at the prompt.
#;1> (load "hanoi.scm")
; loading hanoi.scm ...
; loading /var/lib//chicken/6/numbers.import.so ...
; loading /var/lib//chicken/6/chicken.import.so ...
; loading /var/lib//chicken/6/foreign.import.so ...
; loading /var/lib//chicken/6/numbers.so ...
#;2> (tower-of-hanoi-moves 64)
18446744073709551615
#;3> (first-few-outputs tower-of-hanoi-moves 64)
(1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767 65535 131071 262143 524287 1048575
2097151 4194303 8388607 16777215 33554431 67108863 134217727 268435455 536870911 1073741823
2147483647 4294967295 8589934591 17179869183 34359738367 68719476735 137438953471 274877906943
549755813887 1099511627775 2199023255551 4398046511103 8796093022207 17592186044415
35184372088831 70368744177663 140737488355327 281474976710655 562949953421311 1125899906842623
2251799813685247 4503599627370495 9007199254740991 18014398509481983 36028797018963967
72057594037927935 144115188075855871 288230376151711743 576460752303423487 1152921504606846975
2305843009213693951 4611686018427387903 9223372036854775807 18446744073709551615)
This is a list of T (1) through T (64) (the session was edited to put in line breaks
for readability).
Appendix
Mathematics is made of arguments (reasoned discourse that is, not crockerythrowing). This section sketches the background material and argument techniques that we use in the book.
This section informally outlines the topics, skipping proofs. For more,
[Velleman2] is excellent. Two other sources, available online, are [Hefferon]
and [Beck].
Statements
Formal mathematical statements come labelled as a Theorem for major points,
a Corollary for results that follow immediately from a prior one, or a Lemma
for results chiefly used to prove others.
Statements can be complex and have many parts. The truth or falsity of the
entire statement depends both on the truth value of the parts and on how the
statement is put together.
Not Where P is a proposition, it is not the case that P is true provided that
P is false. For instance, n is not prime is true only when n is the product of
smaller integers.
To prove that a not P statement holds, show that P is false.
And For a statement of the form P and Q to be true both halves must hold:
7 is prime and so is 3 is true, while 7 is prime and 3 is not is false.
To prove a P and Q, prove each half.
Or A P or Q statement is true when either half holds: 7 is prime or 4 is prime
is true, while 8 is prime or 4 is prime is false. In the case that both clauses of
the statement are true, as in 7 is prime or 3 is prime, we take the statement
as a whole to be true. (In everyday speech people occasionally use or in an
exclusive way Live free or die does not intend both halves to hold but we
will not use or in that way.)
A-2
To prove P or Q, show that in all cases at least one half holds (perhaps
sometimes one half and sometimes the other, but always at least one).
If-then An if P then Q statement may also appear as P implies Q or P = Q
or P is sufficient to give Q or Q if P. It is true unless P is true while Q is
false. Thus if 7 is prime then 4 is not is true while if 7 is prime then 4 is also
prime is false. (Contrary to its use in casual speech, in mathematics if P then
Q does not connote that P precedes Q or causes Q.)
Note this consequence of the prior paragraph: if P is false then if P then Q
is true irrespective of the value of Q: if 4 is prime then 7 is prime and if 4 is
prime then 7 is not are both true statements. (They are vacuously true.) Also
observe that if P then Q is true when Q is true: if 4 is prime then 7 is prime
and if 4 is not prime then 7 is prime are both true.
There are two main ways to establish an implication. The first way is
direct: assume that P is true and use that assumption to prove Q. For instance,
to show if a number is divisible by 5 then twice that number is divisible by
10 we can assume that the number is 5n and deduce that 2(5n) = 10n. The
indirect way is to prove the contrapositive statement: if Q is false then P is
false (rephrased, Q can only be false when P is also false). Thus to show if a
natural number is prime then it is not a perfect square we can argue that if it
were a square p = n2 then it could be factored p = n n where n < p and so
wouldnt be prime (p = 0 or p = 1 dont satisfy n < p but they are nonprime).
Equivalent statements Sometimes, not only does P imply Q but also Q implies P.
Some ways to say this are: P if and only if Q, P iff Q, P and Q are logically
equivalent, P is necessary and sufficient to give Q, P Q. An example is
an integer is divisible by ten if and only if that number ends in 0.
Although in simple arguments a chain like P if and only if R, which holds if
and only if S . . . may be practical, to prove that statements are equivalent we
more often prove the two halves if P then Q and if Q then P separately.
Quantifiers
Compare these statements about natural numbers: there is a natural number x
such that x is divisible by x2 is true, while for all natural numbers x, that x is
divisible by x2 is false. The prefixes there is and for all are quantifiers.
For all The for all prefix is the universal quantifier, symbolized .
The most straightforward way to prove that a statement holds in all cases is
to prove that it holds in each case. Thus to show that every number divisible by
p has its square divisible by p2 , take a single number of the form pn and square
it (pn)2 = p2 n2 . This is a typical element proof. (In this kind of argument
be careful not to assume properties for that element other than the ones in the
A-3
hypothesis. This argument is wrong: If n is divisible by a prime, say 2, so that
n = 2k for some natural number k, then n2 = (2k)2 = 4k2 and the square of n
is divisible by the square of the prime. That is a proof for the special case p = 2
but it isnt a proof for all p. Contrast it with a correct one: If n is divisible
by a prime so that n = pk for some natural number k then n2 = (pk)2 = p2 k2
and so the square of n is divisible by the square of the prime.)
There exists The there exists prefix is the existential quantifier, symbolized .
We can prove an existence proposition by producing something satisfying
5
the property: for instance, to settle the question of primality of 22 + 1, Euler
exhibited the divisor 641[Sandifer]. But there are proofs showing that something
exists without saying how to find it; Euclids argument given in the next
subsection shows there are infinitely many primes without giving a formula
naming them.
Finally, after answering Are there any? affirmatively we often ask How
many? That is, the question of uniqueness often arises in conjunction with the
question of existence. Sometimes the two arguments are simpler if separated so
note that just as proving something exists does not show it is unique, neither
does proving something is unique show that it exists. (For instance, we can
easily show that the natural number halfway between three and four is unique,
even thouge no such number exists.)
Techniques of Proof
Induction Many proofs are iterative, Heres why the statement is true for the
number 0, it then follows for 1 and from there to 2 . . . . These are proofs by
mathematical induction. This technique is often not obvious to a person who
has not seen it before, even to a person with a mathematical turn of mind. So
we will see two examples.
We will first prove that 1 + 2 + 3 + + n = n(n + 1)/2. That formula has
a natural number variable n that is free, meaning that setting n to be 1, or
2, etc., gives a family of cases of the statement: first that 1 = 1(2)/2, second
that 1 + 2 = 2(3)/2, etc. Our induction proofs involve statements with one free
natural number variable.
Each proof has two steps. In the base step we show that the statement holds
for some intial number i N. Often this step is a routine, and short, verification.
The second step, the inductive step, is more subtle; we will show that this
implication holds:
If the statement holds from n = i up to and including n = k
then the statement holds also in the n = k + 1 case
()
A-4
(the first line is the inductive hypothesis). The Principle of Mathematical
Induction is that completing both steps proves that the statement is true for all
natural numbers greater than or equal to i.
For the sum of the initial n numbers statement the intuition behind the
principle is that first, the base step directly verifies the statement for the case
of the initial number n = 1. Then, because the inductive step verifies the
implication () for all k, that implication applied to k = 1 gives that the
statement is true for the case of the number n = 2. Now, with the statement
established for both 1 and 2, apply () again to conclude that the statement is
true for the number n = 3. In this way, we bootstrap to all numbers n > 1.
Here is a proof of 1 + 2 + 3 + + n = n(n + 1)/2, with separate paragraphs
for the base step and the inductive step.
For the base step we show that the formula holds when n = 1. Thats
easy; the sum of the first 1 natural number equals 1(1 + 1)/2.
For the inductive step, assume the inductive hypothesis that the formula
holds for the numbers n = 1, n = 2, . . . , n = k with k > 1. That is,
assume 1 = 1(1)/2, and 1 + 2 = 2(3)/2, and 1 + 2 + 3 = 3(4)/2, through
1 + 2 + + k = k(k + 1)/2. With that, the formula holds also in the
n = k + 1 case:
1 + 2 + + k + (k + 1) =
k(k + 1)
(k + 1)(k + 2)
+ (k + 1) =
2
2
Here is another example, proving that every integer greater than or equal to
2 is a product of primes.
The base step is easy: 2 is the product of a single prime.
For the inductive step assume that each of 2, 3, . . . , k is a product of primes,
aiming to show k+1 is also a product of primes. There are two possibilities.
First, if k + 1 is not divisible by a number smaller than itself then it is a
prime and so is the product of primes. The second possibility is that k + 1
is divisible by a number smaller than itself, and then by the inductive
hypothesis its factors can be written as a product of primes. In either case
k + 1 can be rewritten as a product of primes.
A-5
Suppose that there are only finitely many primes p1 , . . . , pk . Consider the
number p1 p2 . . . pk + 1. None of the primes on the supposedly exhaustive
list divides this number evenly since each leaves a remainder of 1. But
every number is a product of primes so this cant be. Therefore there
cannot be only finitely many primes.
Suppose that 2 = m/n, so that 2n2 = m2 . Factor out any 2s, giving
and m = 2km m.
Rewrite.
n = 2kn n
)2 = (2km m)
2
2 (2kn n
The Prime Factorization Theorem says that there must be the same number
of factors of 2 on both sides, but there are an odd number of them 1 + 2kn
on the left and an even number 2km on the right. Thats a contradiction,
so a rational number with a square of 2 is impossible.
A-6
Diagrams We picture basic set operations with a Venn diagram. This shows
x P.
P
x
The outer rectangle contains the universe of all objects under discussion. For
instance, in a statement about real numbers, the rectangle encloses all members
of R. The set is pictured as a circle, enclosing its members.
Here is the diagram for P Q. It shows that if x P then x Q.
A-7
Multisets A multiset is a collection that is like a set in that order does not
matter, but in which, unlike a set, repeats do not collapse. Thus the multiset
{2, 1, 2 } is the same as the multiset {1, 2, 2 } but differs from the multiset {1, 2 }.
Note that we use the same { . . . } curly brackets notation as for sets. Also as with
sets, we say A is a multiset subset if A is a subset of B and A is a multiset.
Sequences In addition to sets and multisets, we also use collections where order
matters and where repeats do not collapse. These are sequences, denoted with
angle brackets: h2, 3, 7i 6= h2, 7, 3i. A sequence of length 2 is an ordered pair,
and is often written with parentheses: (, 3). We also sometimes say ordered
triple, ordered 4-tuple, etc. The set of ordered n-tuples of elements of a set A
is denoted An . Thus R2 is the set of pairs of reals.
Functions A function or map f : D C is is an association between input
arguments x D and output values f(x) C subject to the the requirement that
the function must be well-defined, that x suffices to determine f(x). Restated,
the condition is that if x1 = x2 then f(x1 ) = f(x2 ).
The set of all arguments D is fs domain and the set of output values is
its range R(f). Often we dont work with the range and instead work with
a convenient superset, the codomain C. For instance, we might describe the
squaring function with s : R R instead of s : R R+ {0 }.
We picture functions with a bean diagram.
The blob on the left is the domain while on the right is the codomain. The
function associates the three points of the domain with three in the codomain.
Note that by the definition of a function every point in the domain is associated
with a unique point in the codomain, but the converse neednt be true.
The association is arbitrary; no formula or algorithm is required, although in
this book there typically is one. We often use y to denote f(x). We also use the
f
notation x 7 16x2 100, read x maps under f to 16x2 100 or 16x2 100
is the image of x.
A map such as x 7 sin(1/x) is a combinations of simpler maps, here g(y) =
sin(y) applied to the image of f(x) = 1/x. The composition of g : Y Z with
f : X Y, is the map sending x X to g( f(x) ) Z. It is denoted g f : X Z.
This definition only makes sense if the range of f is a subset of the domain of g.
An identity map id : Y Y defined by id(y) = y has the property that for
any f : X Y, the composition id f is equal to f. So an identity map plays the
A-8
same role with respect to function composition that the number 0 plays in real
number addition or that 1 plays in multiplication.
In line with that analogy, we define a left inverse of a map f : X Y to be
a function g : range(f) X such that g f is the identity map on X. A right
inverse of f is a h : Y X such that f h is the identity.
For some fs there is a map that is both a left and right inverse of f. If such
a map exists then it is unique because if both g1 and g2 have this property
then g1 (x) = g1 (f g2 ) (x) = (g1 f) g2 (x) = g2 (x) (the middle equality
comes from the associativity of function composition) so we call it a two-sided
inverse or just the inverse, and denote it f1 . For instance, the inverse of
the function f : R R given by f(x) = 2x 3 is the function f1 : R R given
by f1 (x) = (x + 3)/2.
The superscript notation for function inverse f1 fits into a larger scheme.
Functions with the same codomain as domain f : X X can be iterated, so that
we can consider the composition of f with itself: f f, and f f f, etc. We write
f f as f2 and f f f as f3 , etc. Note that the familiar exponent rules for real
numbers hold: fi fj = fi+j and (fi )j = fij . Then where f is invertible, writing
f1 for the inverse and f2 for the inverse of f2 , etc., gives that these familiar
exponent rules continue to hold, since we define f0 to be the identity map.
The definition of function requires that for every input there is one and only
one associated output value. If a function f : D C has the additional property
that for every output value there is at least one associated input value that is,
the additional property that fs codomain equals its range C = R(f) then the
function is onto.
A function has a right inverse if and only if it is onto. (The f pictured above
has a right inverse g : C D given by following the arrows backwards, from
right to left. For the codomain point on the top, choose either one of the arrows
to follow. With that, applying g first followed by f takes elements y C to
themselves, and so is the identity function.)
If a function f : D C has the property that for every output value there is
at most one associated input value that is, if no two arguments share an image
so that f(x1 ) = f(x2 ) implies that x1 = x2 then the function is one-to-one.
The bean diagram from earlier illustrates.
A-9
A function has a left inverse if and only if it is one-to-one. (In the picture define
g : C D to follow the arrows backwards for those y C that are at the end of
an arrow, and to send the point to an arbitrary element in D otherwise. Then
applying f followed by g to elements of D will act as the identity.)
By the prior paragraphs, a map has a two-sided inverse if and only if that map
is both onto and one-to-one. Such a function is a correspondence. It associates
one and only one element of the domain with each element of the codomain.
Because a composition of one-to-one maps is one-to-one, and a composition of
onto maps is onto, a composition of correspondences is a correspondence.
We sometimes want to shrink the domain of a function. For instance, we
may take the function f : R R given by f(x) = x2 and, in order to have an
inverse, limit input arguments to nonnegative reals f: R+ {0 } R. Then f is
the restriction of f to the smaller domain.
Relations Some familiar mathematical things, such as < or =, are most
naturally understood as relations between things. A binary relation on a set
A is a set of ordered pairs of elements of A. For example, some elements of
the set that is the relation < on the integers are (3, 5), (3, 7), and (1, 100).
Another binary relation on the integers is equality; this relation is the set
{. . . , (1, 1), (0, 0), (1, 1), . . . }. Still another example is closer than 10, the set
{(x, y) | |x y| < 10 }. Some members of this relation are (1, 10), (10, 1), and
(42, 44). Neither (11, 1) nor (1, 11) is a member.
Those examples illustrate the generality of the definition. All kinds of
relationships (e.g., both numbers even or first number is the second with the
digits reversed) are covered.
Equivalence Relations We shall need to express that two objects are alike in
some way. They arent identical, but they are related (e.g., two integers that
give the same remainder when divided by 2).
A binary relation { (a, b), . . . } is an equivalence relation when it satisfies
(1) reflexivity: any object is related to itself, (2) symmetry: if a is related
to b then b is related to a, and (3) transitivity: if a is related to b and b is
related to c then a is related to c. Some examples (on the integers): = is an
equivalence relation, < does not satisfy symmetry, same sign is a equivalence,
while nearer than 10 fails transitivity.
Partitions In the same sign relation {(1, 3), (5, 7), (0, 0), . . . } there are three
A-10
kinds of pairs, pairs with both numbers positive, pairs with both negative, and
the one pair with both zero. So integers fall into exactly one of three classes,
positive, or negative, or zero.
A partition of a set is a collection of subsets {S0 , S1 , S2 , . . . } such that
every element of S is an element of a subset S1 S2 = and overlapping
parts are equal: if Si Sj 6= then Si = Sj . Picture that is decomposed into
non-overlapping parts.
S1
S0
S2
...
S3
Thus the prior paragraph says that same sign partitions the integers into the
set of positives, the set of negatives, and the set containing only zero. Similarly,
the equivalence relation = partitions the integers into one-element sets.
Another example is the set of strings consisting of a number, followed by
a slash, followed by a nonzero number = {n/d | n, d Z and d 6= 0 }. Define
Sn,d if n
Checking that this is a partition of is
/d
d = nd.
Sn,d by: n
routine (observe for instance that S4,3 = S8,6 ). This shows some parts, listing
in each a couple of its infinitely many members.
. 1/1 . 2 2/ 4
. /4
. 2/2
. 0/1
...
. 0/3
. 4/3
. 8/6
Every equivalence relation induces a partition, and every partition is induced
by an equivalence. (This is routine to check.) Below are two examples.
Consider the equivalence relationship between two integers of gives the same
remainder when divided by 2, the set P = {(1, 3), (2, 4), (0, 0), . . . }. In the set P
are two kinds of pairs, the pairs with both members even and the pairs with both
members odd. This equivalence induces a partition where the parts are found
by: for each x we define the set of numbers related to it Sx = {y | (x, y) P }.
The parts are { . . . , 3, 1, 1, 3, . . . } and {. . . , 2, 0, 2, 4, . . .}. Each part can be
named in many ways; for instance, { . . . , 3, 1, 1, 3, . . . } is S1 and also is S3 .
Now consider the partition of the natural numbers where two numbers are
in the same part if they leave the same remainder when divided by 10, that
is, if they have the same least significant digit. This partition is induced by
the equivalence relation R defined by: two numbers n, m are related if they
are together in the same part. For example, 3 is related to 33, but 3 is not
A-11
related to 102. Verifying the three conditions in the definition of equivalence
are straightforward.
We call each part of a partition an equivalence class. We sometimes pick a
single element of each equivalence class to be the class representative.
?
?
?
...
? 1/2
...
Bibliography
[Ackerson] R. H. Ackerson, A Note on Vector Spaces, American Mathematical
Monthly, vol. 62 no. 10 (Dec. 1955), p. 721.
[Am. Math. Mon., Jun. 1931] C. A. Rupp (proposer), H. T. R. Aude (solver),
problem 3468, American Mathematical Monthly, vol. 37 no. 6 (June-July
1931), p. 355.
[Am. Math. Mon., Feb. 1933] V. F. Ivanoff (proposer), T. C. Esty (solver), problem
3529, American Mathematical Monthly, vol. 39 no. 2 (Feb. 1933), p. 118.
[Am. Math. Mon., Jan. 1935] W. R. Ransom (proposer), Hansraj Gupta (solver),
Elementary problem 105, American Mathematical Monthly, vol. 42 no. 1 (Jan.
1935), p. 47.
[Am. Math. Mon., Jan. 1949] C. W. Trigg (proposer), R. J. Walker (solver),
Elementary problem 813, American Mathematical Monthly, vol. 56 no. 1 (Jan.
1949), p. 33.
[Am. Math. Mon., Jun. 1949] Don Walter (proposer), Alex Tytun (solver),
Elementary problem 834, American Mathematical Monthly, vol. 56 no. 6
(June-July 1949), p. 409.
[Am. Math. Mon., Nov. 1951] Albert Wilansky, The Row-Sums of the Inverse
Matrix, American Mathematical Monthly, vol. 58 no. 9 (Nov. 1951), p. 614.
[Am. Math. Mon., Feb. 1953] Norman Anning (proposer), C. W. Trigg (solver),
Elementary problem 1016, American Mathematical Monthly, vol. 60 no. 2 (Feb.
1953), p. 115.
[Am. Math. Mon., Apr. 1955] Vern Haggett (proposer), F. W. Saunders (solver),
Elementary problem 1135, American Mathematical Monthly, vol. 62 no. 4
(Apr. 1955), p. 257.
[Am. Math. Mon., Jan. 1963] Underwood Dudley, Arnold Lebow (proposers), David
Rothman (solver), Elementary problem 1151, American Mathematical
Monthly, vol. 70 no. 1 (Jan. 1963), p. 93.
[Am. Math. Mon., Dec. 1966] Hans Liebeck, A Proof of the Equality of Column
Rank and Row Rank of a Matrix American Mathematical Monthly, vol. 73
no. 10 (Dec. 1966), p. 1114.
[Anton] Howard Anton, Elementary Linear Algebra, John Wiley & Sons, 1987.
[Arrow] Kenneth J. Arrow, Social Choice and Individual Values, Wiley, 1963.
[Austin] David Austin, How Google Finds Your Needle in the Webs Haystack,
http://www.ams.org/samplings/feature-column/fcarc-pagerank, retrieved
Feb. 2012.
[Ball & Coxeter] W.W. Rouse Ball, Mathematical Recreations and Essays, revised
by H.S.M. Coxeter, MacMillan, 1962.
[Beck] Matthias Beck, Ross Geoghegan, The Art of Proof,
http://math.sfsu.edu/beck/papers/aop.noprint.pdf, 2011-Aug-08.
[Beardon] A.F. Beardon, The Dimension of the Space of Magic Squares, The
Mathematical Gazette, vol. 87, no. 508 (Mar. 2003), p. 112-114.
[Birkhoff & MacLane] Garrett Birkhoff, Saunders MacLane, Survey of Modern
Algebra, third edition, Macmillan, 1965.
[Blass 1984] A. Blass, Existence of Bases Implies the Axiom of Choice, pp. 31 33,
Axiomatic Set Theory, J. E. Baumgartner, ed., American Mathematical
Society, Providence RI, 1984.
[Bridgman] P.W. Bridgman, Dimensional Analysis, Yale University Press, 1931.
[Brin & Page] Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale
Hypertextual Web Search Engine,
http://infolab.stanford.edu/pub/papers/google.pdf, retrieved Feb. 2012.
[Bryan & Leise] Kurt Bryan, Tanya Leise, The $25,000,000,000 Eigenvector: the
Linear Algebra Behind Google, SIAM Review, Vol. 48, no. 3 (2006), p. 569-81.
[Casey] John Casey, The Elements of Euclid, Books I to VI and XI, ninth edition,
Hodges, Figgis, and Co., Dublin, 1890.
[ChessMaster] User ChessMaster of StackOverflow, answer to Python determinant
calculation(without the use of external libraries),
http://stackoverflow.com/a/10037087/238366, answer posted 2012-Apr-05,
accessed 2012-Jun-18.
[Chicken Scheme] Free software implementation, Felix L. Winkelmann and The
Chicken Team, http://wiki.call-cc.org/, accessed 2013-Nov-20.
[Clark & Coupe] David H. Clark, John D. Coupe, The Bangor Area Economy Its
Present and Future, report to the city of Bangor ME, Mar. 1967.
[Cleary] R. Cleary, private communication, Nov. 2011.
[Clarke] Arthur C. Clarke, Technical Error, Fantasy, December 1946, reprinted in
Great SF Stories 8 (1946), DAW Books, 1982.
[Con. Prob. 1955] The Contest Problem Book, 1955 number 38.
[Goult, et al.] R.J. Goult, R.F. Hoskins, J.A. Milner, M.J. Pratt, Computational
Methods in Linear Algebra, Wiley, 1975.
[Graham, Knuth, Patashnik] Ronald L. Graham, Donald E. Knuth, Oren Patashnik,
Concrete Mathematics, Addison-Wesley, 1988.
[Halmos] Paul R. Halmos, Finite Dimensional Vector Spaces, second edition, Van
Nostrand, 1958.
[Hamming] Richard W. Hamming, Introduction to Applied Numerical Analysis,
Hemisphere Publishing, 1971.
[Hanes] Kit Hanes, Analytic Projective Geometry and its Applications, UMAP Unit
710, UMAP Modules, 1990, p. 111.
[Heath] T. Heath, Euclids Elements, volume 1, Dover, 1956.
[Hefferon] J Hefferon, Introduction to Proofs, an Inquiry-Based approach,
http://joshua.smcvt.edu/proofs/, 2013.
[Hoffman & Kunze] Kenneth Hoffman, Ray Kunze, Linear Algebra, second edition,
Prentice-Hall, 1971.
[Hofstadter] Douglas R. Hofstadter, Metamagical Themas: Questing for the Essence
of Mind and Pattern, Basic Books, 1985.
[Iosifescu] Marius Iofescu, Finite Markov Processes and Their Applications, John
Wiley, 1980.
`
[Kahan] William Kahan, Chis
Trick for Linear Equations with Integer
Coefcients, http://www.cs.berkeley.edu/~wkahan/MathH110/chio.pdf,
1998, retrieved 2012-Jun-18.
[Kelton] Christina M.L. Kelton, Trends on the Relocation of U.S. Manufacturing,
UMI Research Press, 1983.
[Kemeny & Snell] John G. Kemeny, J. Laurie Snell, Finite Markov Chains, D. Van
Nostrand, 1960.
[Rice] John R. Rice, Numerical Mathods, Software, and Analysis, second edition,
Academic Press, 1993.
[Rucker] Rudy Rucker, Infinity and the Mind, Birkhauser, 1982.
[Ryan] Patrick J. Ryan, Euclidean and Non-Euclidean Geometry: an Analytic
Approach, Cambridge University Press, 1986.
[Sandifer] Ed Sandifer, How Euler Did It,
http://www.maa.org/news/howeulerdidit.html, 2012-Dec-27.
2011-Apr-09.
[Wikipedia, Square-cube Law] The Square-cube law,
http://en.wikipedia.org/wiki/Square-cube_law, 2011-Jan-17.
[Wills] Rebecca S. Wills, Googles Page Rank, Mathematical Intelligencer, vol. 28,
no. 4, Fall 2006.
[Wohascum no. 2] The Wohascum County Problem Book problem number 2.
[Wohascum no. 47] The Wohascum County Problem Book problem number 47.
[Yaglom] I. M. Yaglom, Felix Klein and Sophus Lie: Evolution of the Idea of
Symmetry in the Nineteenth Century, translated by Sergei Sossinsky,
Birkhuser, 1988.
[Yuster] Thomas Yuster, The Reduced Row Echelon Form of a Matrix is Unique: a
Simple Proof, Mathematics Magazine, vol. 57, no. 2 (Mar. 1984), pp. 93-94.
[Zwicker] William S. Zwicker, The Voters Paradox, Spin, and the Borda Count,
Mathematical Social Sciences, vol. 22 (1991), p. 187 227.
Index
accuracy
of Gausss Method, 6669
rounding error, 68
adding rows, 5
addition of vectors, 17, 38, 78
additive inverse, 78
adjacency matrix, 241
adjoint matrix, 352
angle, 46
antipodal points, 370
antisymmetric matrix, 142
argument, of a function, A-7
arrow diagram, 227, 245, 251, 256, 384
augmented matrix, 15
automorphism, 167
dilation, 167
reflection, 168
rotation, 168
back-substitution, 5
base step, of induction proof, A-3
basis, 113126
change of, 251
definition, 113
orthogonal, 270
orthogonalization, 271
orthonormal, 272
standard, 114, 383
string, 410
bean diagram, A-7
best fit line, 284
block matrix, 332
box, 342
orientation, 344
sense, 344
volume, 344
C language, 66
canonical form
for matrix equivalence, 260
for nilpotent matrices, 413
for row equivalence, 60
for similarity, 434
canonical representative, A-11
Cauchy-Schwarz Inequality, 46
Cayley-Hamilton theorem, 421
central projection, 367
change of basis, 251263
characteristic
equation, 397
polynomial, 397
satisfied by, 423
root, 402
vectors, values, 393
characterize, 174
characterizes, 261
Chemistry problem, 1, 10, 25
Chis method, 362365
circuits
parallel, 71
series, 71
series-parallel, 72
class
equivalence, A-11
representative, A-11
closure, 94
of null space, 406
of range space, 406
codomain, A-7
cofactor, 350
column, 15
rank, 128
full, 134
space, 128
vector, 16
combining rows, 5
complement, A-6
complementary subspaces, 139
orthogonal, 277
complete equation, 155
complex numbers, 382
vector space over, 89, 380
component of a vector, 16
composition, A-7
self, 403
computer algebra systems, 6465
concatenation of sequences, 137
conditioning number, 69
congruent plane figures, 307
constant polynomial, 380
contradiction, A-4
contrapositive, A-2
convex set, 188
coordinates
homogeneous, 369
with respect to a basis, 116
correspondence, 165, A-9
coset, 200
Cramers rule, 355357
cross product, 318
crystals, 145148
unit cell, 146
da Vinci, Leonardo, 366
degree of a polynomial, 380
Desargues Theorem, 373
determinant, 314, 319341
Cramers rule, 356
definition, 320
exists, 330, 336
Laplace expansion, 351
minor, 350
permutation expansion, 329, 333, 358
using cofactors, 350
onto, A-8
range, A-7
restriction, A-9
right inverse, 244
structure preserving, 165, 169
see homomorphism 181
two-sided inverse, 244
value, A-7
well-defined, A-7
zero, 182
Fundamental Theorem
of Algebra, 382
of Linear Algebra, 282
Gausss Method, 3
accuracy, 6669
back-substitution, 5
by matrix multiplication, 239
elementary operations, 5
Gauss-Jordan, 51
Gauss-Jordan Method, 51
Gaussian elimination, 3
generalized null space, 406
generalized range space, 406
generated, 27
generated by, 27
Geometry of Linear Maps, 289295
Google matrix, 449
Gram-Schmidt process, 269274
graphite, 146
historyless process, 302
homogeneous coordinate vector, 369
homogeneous coordinates, 312
homogeneous equation, 23
homomorphism, 181
composition, 227
matrix representing, 202212, 214
nonsingular, 216, 217
null space, 194
nullity, 194
range space, 190
rank, 215
singular, 216
zero, 182
hyperplane, 40
ideal
line, 372
point, 372
identity
function, A-7
matrix, 236
if-then statement, A-2
ill-conditioned problem, 67
image, under a function, A-7
improper subspace, 91
index of nilpotency, 409
induction, 26, A-3
inductive hypothesis, A-4
induction, mathematical, A-3
inductive hypothesis, A-4
inductive step, of induction proof, A-3
inherited operations, 80
inner product, 44
internal direct sum, 138, 173
intersection, of sets, A-6
invariant subspace, 417, 430
inverse, 244, A-8
additive, 78
exists, 245
function, A-8
left, A-8
right, A-8
left, 244, A-8
matrix, 352
right, 244, A-8
two-sided, A-8
inverse function, 244, A-8
inverse image, 191
inversion, 334, A-8
irreducible polynomial, 381
isometry, 307
isomorphism, 163180
classes characterized by dimension,
174
definition, 165
of a space with itself, 167
Jordan block, 429
Jordan form, 418440
represents similarity classes, 434
minor, 350
multiplication, 226
nilpotent, 409
nonsingular, 30, 217
orthogonal, 309
orthonormal, 307312
permutation, 238, 328
rank, 215
representation, 204
row, 15
row equivalence, 53
row rank, 126
row space, 126
scalar multiple, 222
scalar multiplication, 17
similar, 347
similarity, 386
singular, 30
skew-symmetric, 332
sparse, 441
stochastic, 302, 449
submatrix, 323
sum, 17, 222
symmetric, 119, 142, 224, 232, 241,
282
trace, 224, 242, 297, 440
transition, 302
transpose, 21, 129, 224
triangular, 212, 242, 353
tridiagonal, 443
unit, 234
Vandermonde, 332
zero, 223
matrix equivalence, 256263
canonical form, 260
definition, 258
mean
arithmetic, 49
geometric, 49
member, A-5
method of powers, 441444
minimal polynomial, 233, 419
minor, of a matrix, 350
morphism, 165
multilinear, 325
multiplication
matrix-matrix, 226
matrix-vector, 205
multiplicity, of a root, 381
multiset, A-7
mutual inclusion, A-5
natural representative, A-11
networks, 7075
Kirchhoffs Laws, 71
nilpotency, index of, 409
nilpotent, 407417
canonical form for, 413
definition, 409
matrix, 409
transformation, 409
nonsingular, 217, 245
homomorphism, 216
matrix, 30
normalize, vector, 43, 272
null space, 194
closure of, 406
generalized, 406
nullity, 194
odd function, 98, 140
one-to-one function, A-8
onto function, A-8
opposite map, 311
ordered pair, A-7
orientation, 344, 348
orientation preserving map, 311
orientation reversing map, 311
orthogonal, 46
basis, 270
complement, 277
mutually, 269
projection, 277
orthogonal matrix, 309
orthogonalization, 271
orthonormal basis, 272
orthonormal matrix, 307312
page ranking, 447450
pair, ordered, A-7
parallelepiped, 342
parallelogram rule, 38
parameter, 14
parametrized, 14
partial pivoting, 68
partition, A-9A-11
into isomorphism classes, 174
matrix equivalence classes, 259, 261
row equivalence classes, 53
permutation, 328
inversions, 334
matrix, 238
signum, 336
permutation expansion, 329, 333, 358
permutation matrix, 328
perp, of a subspace, 277
perpendicular, 46
perspective, triangles, 373
physical dimension, 155
pivoting, 51
full, 68
partial
scaled, 69
plane figure, 307
congruence, 307
point
at infinity, 372
in projective plane, 369
polynomial, 380
associated with recurrence, 455
constant, 380
degree, 380
division theorem, 380
even, 440
factor, 381
irreducible, 381
leading coefficient, 380
minimal, 419
multiplicity, 381
of map, matrix, 418
root, 381
populations, stable, 445446
potential, electric, 70
powers, method of, 441444
preserves structure, 181
probability vector, 302
linear, 103
representation
of a matrix, 204
of a vector, 116
representative
canonical, A-11
class, A-11
for row equivalence classes, 60
of matrix equivalence classes, 259
of similarity classes, 434
rescaling rows, 5
resistance, 70
resistance:equivalent, 74
resistor, 71
restriction, A-9
right inverse, A-8
rigid motion, 307
root, 381
characteristic, 402
rotation, 290, 310
rotation (or turning), 168
represented, 207
row, 15
rank, 126
vector, 16
row equivalence, 53
row rank, 126
full, 134
row space, 126
Rule of Sarrus, 365
Sage, 64
salt, 145
Sarrus, Rule of, 365
scalar, 78
scalar multiple
matrix, 222
vector, 17, 37, 78
scalar multiplication
matrix, 17
scalar product, 44
scaled partial pivoting, 69
Schwarz Inequality, 46
self composition
of maps, 403
sense, 344